Estimation in Dirichlet random effects models
Kyung, Minjung; Casella, George; 10.1214/09-AOS731
2010-01-01
We develop a new Gibbs sampler for a linear mixed model with a Dirichlet process random effect term, which is easily extended to a generalized linear mixed model with a probit link function. Our Gibbs sampler exploits the properties of the multinomial and Dirichlet distributions, and is shown to be an improvement, in terms of operator norm and efficiency, over other commonly used MCMC algorithms. We also investigate methods for the estimation of the precision parameter of the Dirichlet process, finding that maximum likelihood may not be desirable, but a posterior mode is a reasonable approach. Examples are given to show how these models perform on real data. Our results complement both the theoretical basis of the Dirichlet process nonparametric prior and the computational work that has been done to date.
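The clustering behaviour of the Dirichlet process random effect that such samplers exploit can be illustrated with the Chinese restaurant process, in which the precision (concentration) parameter alpha, the quantity whose estimation the abstract discusses, governs how many distinct random effect values appear. This is only a hedged sketch of the prior's behaviour, not the authors' Gibbs sampler; all names and values are illustrative.

```python
import numpy as np

def crp_partition(n, alpha, rng):
    """Draw a random partition of n items from the Chinese restaurant
    process with precision (concentration) parameter alpha."""
    counts = []                      # current cluster ("table") sizes
    labels = np.empty(n, dtype=int)
    for i in range(n):
        # item i joins cluster k with prob counts[k]/(i + alpha),
        # or opens a new cluster with prob alpha/(i + alpha)
        probs = np.array(counts + [alpha], dtype=float) / (i + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
        labels[i] = k
    return labels, counts

rng = np.random.default_rng(0)
labels, counts = crp_partition(1000, alpha=1.0, rng=rng)
print(len(counts))   # number of distinct clusters among 1000 items
```

With alpha = 1 the expected number of clusters grows roughly like log(n), which is why the choice of the precision parameter matters so much for these models.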
Performance of Random Effects Model Estimators under Complex Sampling Designs
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan
2011-01-01
In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…
The problematic estimation of "imitation effects" in multilevel models
Directory of Open Access Journals (Sweden)
2003-09-01
It seems plausible that a person's demographic behaviour may be influenced by that of other people in the community, for example because of an inclination to imitate. When estimating multilevel models from clustered individual data, some investigators might feel tempted to capture this effect by simply including on the right-hand side the average of the dependent variable, constructed by aggregation within the clusters. However, such modelling must be avoided. According to simulation experiments based on real fertility data from India, the estimated effect of this obviously endogenous variable can be very different from the true effect. The other community effect estimates can also be strongly biased. An "imitation effect" can only be estimated under very special assumptions that in practice will be hard to defend.
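The pitfall described above can be reproduced in a few lines: the data-generating model has no imitation effect at all, yet regressing on the cluster average of the dependent variable produces a large spurious "effect". The data are synthetic, not the Indian fertility data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clusters, m = 200, 50
x = rng.normal(size=(n_clusters, m))
a = rng.normal(size=(n_clusters, 1))      # community random effect
e = rng.normal(size=(n_clusters, m))
y = 1.0 * x + a + e                       # true model: NO imitation effect

# endogenous regressor: cluster average of the dependent variable
ybar = np.repeat(y.mean(axis=1, keepdims=True), m, axis=1)

X = np.column_stack([np.ones(y.size), x.ravel(), ybar.ravel()])
beta, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
print(beta[2])   # spurious "imitation effect", large although the truth is 0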
Estimation of Nonlinear Dynamic Panel Data Models with Individual Effects
Directory of Open Access Journals (Sweden)
Yi Hu
2014-01-01
This paper suggests a generalized method of moments (GMM) based estimation for dynamic panel data models with individual specific fixed effects and threshold effects simultaneously. We extend Hansen's (1999) original setup to models including endogenous regressors, specifically, lagged dependent variables. To address the problem of endogeneity in these nonlinear dynamic panel data models, we prove that the orthogonality conditions proposed by Arellano and Bond (1991) are valid. The threshold and slope parameters are estimated by GMM, and the asymptotic distribution of the slope parameters is derived. Finite sample performance of the estimation is investigated through Monte Carlo simulations, which show that the threshold and slope parameters can be estimated accurately and that the finite sample distribution of the slope parameters is well approximated by the asymptotic distribution.
Nonparametric Estimation of Distributions in Random Effects Models
Hart, Jeffrey D.
2011-01-01
We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.
Fan, Xitao; Wang, Lin; Thompson, Bruce
1999-01-01
A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)
Estimating a marriage matching model with spillover effects.
Choo, Eugene; Siow, Aloysius
2006-08-01
We use marriage matching functions to study how marital patterns change when population supplies change. Specifically, we use a behavioral marriage matching function with spillover effects to rationalize marriage and cohabitation behavior in contemporary Canada. The model can estimate a couple's systematic gains to marriage and cohabitation relative to remaining single. These gains are invariant to changes in population supplies. Instead, changes in population supplies redistribute these gains between a couple. Although the model is behavioral, it is nonparametric. It can fit any observed cross-sectional marriage matching distribution. We use the estimated model to quantify the impacts of gender differences in mortality rates and the baby boom on observed marital behavior in Canada. The higher mortality rate of men makes men scarcer than women. We show that the scarceness of men modestly reduced the welfare of women and increased the welfare of men in the marriage market. On the other hand, the baby boom increased older men's net gains to entering the marriage market and lowered middle-aged women's net gains.
Reduced Noise Effect in Nonlinear Model Estimation Using Multiscale Representation
Directory of Open Access Journals (Sweden)
Mohamed N. Nounou
2010-01-01
Nonlinear process models are widely used in various applications. In the absence of fundamental models, one usually relies on empirical models, which are estimated from measurements of the process variables. Unfortunately, measured data are usually corrupted with measurement noise that degrades the accuracy of the estimated models. Multiscale wavelet-based representation of data has been shown to be a powerful data analysis and feature extraction tool. In this paper, these characteristics of multiscale representation are utilized to improve the estimation accuracy of linear-in-the-parameters nonlinear models by developing a multiscale nonlinear (MSNL) modeling algorithm. The main idea in this MSNL modeling algorithm is to decompose the data at multiple scales, construct multiple nonlinear models at multiple scales, and then select among all scales the model which best describes the process. The main advantage of the developed algorithm is that it integrates modeling and feature extraction to improve the robustness of the estimated model to the presence of measurement noise in the data. This advantage of MSNL modeling is demonstrated using a nonlinear reactor model.
Multivariate Logistic Model to estimate Effective Rainfall for an Event
Singh, S. K.; Patil, Sachin; Bárdossy, A.
2009-04-01
Multivariate logistic models are widely used in the biological, medical, and social sciences, but logistic models are seldom applied to hydrological problems. A logistic function behaves linearly in the mid range and becomes nonlinear toward the extremes; it is therefore more flexible than a linear function, capable of dealing with skew-distributed variables, and seems to bear good potential for handling asymmetrically distributed hydrological variables of extreme occurrence. In this study, a logistic regression approach is implemented to derive a multivariate logistic function for effective rainfall; in the process, the runoff coefficient is assumed to be a Bernoulli-distributed dependent variable. A backward stepwise logistic regression procedure was performed to derive the logistic transfer function between the runoff coefficient and catchment as well as event variables (e.g., drainage density, soil moisture). The investigation was carried out using a database of 244 rainfall-runoff events from 42 mesoscale catchments located in south-west Germany. The performance of the derived logistic transfer function was compared with that of the SCS method for estimation of effective rainfall.
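A minimal sketch of a logistic transfer function for a runoff coefficient: unlike the paper's backward stepwise Bernoulli formulation, this assumes the coefficient is observed in (0, 1) and fits the curve by least squares on the logit scale; the two predictors are synthetic stand-ins for catchment and event variables such as soil moisture and drainage density.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(2)
n = 500
# columns: intercept, plus two synthetic catchment/event predictors
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
b_true = np.array([-0.5, 1.2, 0.8])
# runoff coefficient in (0, 1), generated through the logistic link
rc = sigmoid(X @ b_true + 0.1 * rng.normal(size=n))

# fit the logistic transfer function by least squares on the logit scale
b_hat, *_ = np.linalg.lstsq(X, logit(rc), rcond=None)
print(b_hat)   # close to b_true
```

The logistic link keeps fitted runoff coefficients inside (0, 1) while staying nearly linear for mid-range events, which is the flexibility the abstract emphasizes.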
Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects
Directory of Open Access Journals (Sweden)
Guangjie Li
2015-07-01
We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of the common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian model averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
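A hedged sketch of BIC-based Bayesian model averaging of the kind recommended above, on a toy linear model rather than the paper's autoregressive panel setting: candidate specifications are weighted by exp(-BIC/2), and the coefficient of interest is averaged across models (counting it as zero where it is excluded).

```python
import numpy as np

def bic_ols(y, X):
    """Fit OLS and return (BIC, coefficient vector)."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ b) ** 2)
    n, k = X.shape
    return n * np.log(rss / n) + k * np.log(n), b

rng = np.random.default_rng(3)
n = 300
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 * x1 + rng.normal(size=n)        # x2 is irrelevant

ones = np.ones(n)
models = {                                # candidate specifications
    "x1":    np.column_stack([ones, x1]),
    "x2":    np.column_stack([ones, x2]),
    "x1+x2": np.column_stack([ones, x1, x2]),
}
bics = {m: bic_ols(y, X)[0] for m, X in models.items()}
bmin = min(bics.values())
raw = {m: np.exp(-0.5 * (b - bmin)) for m, b in bics.items()}
tot = sum(raw.values())
weights = {m: v / tot for m, v in raw.items()}

# model-averaged coefficient of x1 (0 in models that exclude x1)
b1 = {m: (bic_ols(y, X)[1][1] if "x1" in m else 0.0) for m, X in models.items()}
avg_b1 = sum(weights[m] * b1[m] for m in models)
```

The misspecified "x2" model receives essentially zero weight, so the averaged coefficient stays close to the truth while still reflecting model uncertainty between the two specifications that contain x1.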
Estimation of Spatial Dynamic Nonparametric Durbin Models with Fixed Effects
Qian, Minghui; Hu, Ridong; Chen, Jianwei
2016-01-01
Spatial panel data models have been widely studied and applied in both scientific and social science disciplines, especially in the analysis of spatial influence. In this paper, we consider the spatial dynamic nonparametric Durbin model (SDNDM) with fixed effects, which takes the nonlinear factors into account based on the spatial dynamic panel…
School Processes Mediate School Compositional Effects: Model Specification and Estimation
Liu, Hongqiang; Van Damme, Jan; Gielen, Sarah; Van Den Noortgate, Wim
2015-01-01
School composition effects have been consistently verified, but few studies ever attempted to study how school composition affects school achievement. Based on prior research findings, we employed multilevel mediation modeling to examine whether school processes mediate the effect of school composition upon school outcomes based on the data of 28…
Quasi-likelihood estimation of average treatment effects based on model information
Institute of Scientific and Technical Information of China (English)
Zhi-hua SUN
2007-01-01
In this paper, the estimation of average treatment effects is considered when we have model information on the conditional mean and conditional variance of the responses given the covariates. The quasi-likelihood method adapted to treatment effects data is developed to estimate the parameters in the conditional mean and conditional variance models. Based on the model information, we define three estimators by imputation, regression and inverse probability weighted methods. All the estimators are shown to be asymptotically normal. Our simulation results show that by using the model information, substantial efficiency gains are obtained compared with the existing estimators.
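Two of the three estimator families named above, inverse probability weighting and regression, can be sketched on synthetic data. This is not the paper's quasi-likelihood construction: the propensity score is taken as known and the outcome models as correctly specified linear models, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
x = rng.normal(size=n)                     # covariate
p = 1 / (1 + np.exp(-x))                   # propensity score (known here)
t = rng.binomial(1, p)                     # treatment indicator
y = 2.0 * t + x + rng.normal(size=n)       # true average treatment effect = 2

# inverse probability weighted (IPW) estimator
ate_ipw = np.mean(t * y / p) - np.mean((1 - t) * y / (1 - p))

# regression estimator: linear outcome models fit on treated/control units,
# then predictions averaged over the full sample
def ols_predict(mask):
    X = np.column_stack([np.ones(mask.sum()), x[mask]])
    b, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
    return np.column_stack([np.ones(n), x]) @ b

ate_reg = np.mean(ols_predict(t == 1) - ols_predict(t == 0))
```

Both estimators recover the effect here; the paper's point is that exploiting the conditional mean and variance information yields efficiency gains over such basic versions.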
Effective single scattering albedo estimation using regional climate model
CSIR Research Space (South Africa)
Tesfaye, M
2011-09-01
In this study, by modifying the optical parameterization of the Regional Climate Model (RegCM), the authors have computed and compared the Effective Single-Scattering Albedo (ESSA), which is representative of the VIS spectral region. The arid, semi...
Causal Effect Estimation Methods
2014-01-01
The relationship between two popular modeling frameworks for causal inference from observational data, namely the causal graphical model and the potential outcome causal model, is discussed. It is shown how some popular causal effect estimators found in applications of the potential outcome causal model, such as the inverse probability of treatment weighted estimator and the doubly robust estimator, can be obtained by using the causal graphical model. We confine ourselves to the simple case of binary outcome and treatment vari...
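The doubly robust (augmented IPW) estimator named above can be sketched as follows, under the simplifying assumptions of a known propensity score and correctly specified linear outcome models; both are assumptions of this illustration, not claims from the abstract.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-x))                  # propensity score (known here)
t = rng.binomial(1, p)
y = 2.0 * t + x + rng.normal(size=n)      # true average treatment effect = 2

# outcome models fitted separately on treated and control units
def fit_predict(mask):
    X = np.column_stack([np.ones(mask.sum()), x[mask]])
    b, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
    return np.column_stack([np.ones(n), x]) @ b

m1, m0 = fit_predict(t == 1), fit_predict(t == 0)

# augmented IPW (doubly robust) estimator: consistent if EITHER the
# propensity model OR the outcome models are correct
ate_dr = np.mean(m1 - m0
                 + t * (y - m1) / p
                 - (1 - t) * (y - m0) / (1 - p))
```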
robustlmm: An R Package for Robust Estimation of Linear Mixed-Effects Models
Directory of Open Access Journals (Sweden)
Manuel Koller
2016-12-01
Like any real-life data, data modeled by linear mixed-effects models often contain outliers or other contamination. Even a little contamination can drive the classic estimates far away from what they would be without the contamination. At the same time, datasets that require mixed-effects modeling are often complex and large, which makes it difficult to spot contamination. Robust estimation methods aim to solve both problems: to provide estimates on which contamination has only little influence, and to detect and flag contamination. We introduce an R package, robustlmm, to robustly fit linear mixed-effects models. The package's functions and methods are designed to closely match those offered by lme4, the R package that implements classic linear mixed-effects model estimation in R. The robust estimation method in robustlmm is based on the random effects contamination model and the central contamination model. Contamination can be detected at all levels of the data. The estimation method does not make any assumption on the data's grouping structure except that the model parameters are estimable. robustlmm supports hierarchical and non-hierarchical (e.g., crossed) grouping structures. The robustness of the estimates and their asymptotic efficiency are fully controlled through the function interface. Individual parts (e.g., fixed effects and variance components) can be tuned independently. In this tutorial, we show how to fit robust linear mixed-effects models using robustlmm, how to assess the model fit, how to detect outliers, and how to compare different fits.
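robustlmm itself is an R package; as a language-neutral illustration of the robust-weighting idea it relies on (contaminated observations are downweighted rather than discarded), here is a hedged Python sketch of a Huber M-estimate of location, far simpler than the package's mixed-model estimator.

```python
import numpy as np

def huber_location(y, k=1.345, n_iter=50):
    """Huber M-estimate of location via iteratively reweighted averaging:
    residuals within k standard units keep weight 1, larger ones are
    downweighted proportionally to 1/|residual|."""
    mu = np.median(y)                       # robust starting value
    for _ in range(n_iter):
        r = y - mu
        w = np.minimum(1.0, k / np.maximum(np.abs(r), 1e-12))
        mu = np.sum(w * y) / np.sum(w)
    return mu

rng = np.random.default_rng(11)
y = rng.normal(0.0, 1.0, 200)
y[:10] += 20.0                              # contaminate 5% of the data
mu_huber = huber_location(y)                # stays near the true mean of 0
mu_mean = y.mean()                          # dragged upward by the outliers
```

The classical mean shifts by about 1.0 here, while the Huber estimate stays near zero, the same qualitative behaviour robustlmm provides for the far more complex mixed-effects setting.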
Simultaneous optimal estimates of fixed effects and variance components in the mixed model
Institute of Scientific and Technical Information of China (English)
WU Mixia; WANG Songgui
2004-01-01
For a general linear mixed model with two variance components, a set of simple conditions is obtained under which (i) the least squares estimate of the fixed effects and the analysis of variance (ANOVA) estimates of the variance components are proved to be uniformly minimum variance unbiased estimates simultaneously; (ii) exact confidence intervals for the fixed effects and uniformly optimal unbiased tests on the variance components are given; and (iii) the exact probability expression for the ANOVA estimates of the variance components taking negative values is obtained.
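The ANOVA variance component estimates referred to in (iii) can be illustrated for the balanced one-way random model; note that the between-group estimate (MSB - MSW)/n can indeed come out negative when the true between-group variance is small, which is why the paper derives the probability of that event. All values here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
k, n = 100, 10                       # k groups, n observations per group
sigma_a, sigma_e = 1.0, 1.0
a = rng.normal(scale=sigma_a, size=(k, 1))          # random group effects
y = 5.0 + a + rng.normal(scale=sigma_e, size=(k, n))

group_means = y.mean(axis=1)
# between-group mean square: E[MSB] = sigma_e^2 + n * sigma_a^2
msb = n * np.sum((group_means - y.mean()) ** 2) / (k - 1)
# within-group mean square: E[MSW] = sigma_e^2
msw = np.sum((y - group_means[:, None]) ** 2) / (k * (n - 1))

sigma_e_hat = msw
sigma_a_hat = (msb - msw) / n        # can be negative when sigma_a is small
```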
Estimating the Effects of Parental Divorce and Death With Fixed Effects Models.
Amato, Paul R; Anthony, Christopher J
2014-04-01
The authors used child fixed effects models to estimate the effects of parental divorce and death on a variety of outcomes using 2 large national data sets: (a) the Early Childhood Longitudinal Study, Kindergarten Cohort (kindergarten through the 5th grade) and (b) the National Educational Longitudinal Study (8th grade to the senior year of high school). In both data sets, divorce and death were associated with multiple negative outcomes among children. Although evidence for a causal effect of divorce on children was reasonably strong, effect sizes were small in magnitude. A second analysis revealed a substantial degree of variability in children's outcomes following parental divorce, with some children declining, others improving, and most not changing at all. The estimated effects of divorce appeared to be strongest among children with the highest propensity to experience parental divorce.
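The within (demeaning) transformation behind child fixed effects models like those above can be sketched as follows; the panel data are synthetic, not the ECLS-K or NELS data the authors analyze, and the point is only that demeaning removes bias from an individual effect correlated with the regressor.

```python
import numpy as np

rng = np.random.default_rng(7)
n, t = 1000, 4                              # children, time points
a = rng.normal(size=(n, 1))                 # child fixed effect
x = a + rng.normal(size=(n, t))             # regressor correlated with the effect
y = 2.0 * x + a + rng.normal(size=(n, t))   # true slope = 2

# pooled OLS ignores the fixed effect and is biased upward here
b_pooled = np.sum(x * y) / np.sum(x * x)

# within (fixed effects) transformation: demean within each child
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
b_fe = np.sum(xd * yd) / np.sum(xd * xd)    # consistent for the true slope
```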
Estimating dose painting effects in radiotherapy: a mathematical model.
Directory of Open Access Journals (Sweden)
Juan Carlos López Alfonso
Tumor heterogeneity is widely considered to be a determinant factor in tumor progression and in particular in its recurrence after therapy. Unfortunately, current medical techniques are unable to deduce clinically relevant information about tumor heterogeneity by means of non-invasive methods. As a consequence, when radiotherapy is used as a treatment of choice, radiation dosimetries are prescribed under the assumption that the malignancy targeted is of a homogeneous nature. In this work we discuss the effects of different radiation dose distributions on heterogeneous tumors by means of an individual cell-based model. To that end, a case is considered where two tumor cell phenotypes are present, which we assume to strongly differ in their respective cell cycle duration and radiosensitivity properties. We show herein that, as a result of such differences, the spatial distribution of the corresponding phenotypes, and hence the resulting tumor heterogeneity, can be predicted as growth proceeds. In particular, we show that if we start from a situation where a majority of ordinary cancer cells (CCs) and a minority of cancer stem cells (CSCs) are randomly distributed, and we assume that the CSC cycle is significantly longer than that of CCs, then CSCs become concentrated in an inner region as the tumor grows. As a consequence, if CSCs are assumed to be more resistant to radiation than CCs, heterogeneous dosimetries can be selected to enhance tumor control by boosting radiation in the region occupied by the more radioresistant tumor cell phenotype. It is also shown that, when compared with homogeneous dose distributions such as those currently delivered in clinical practice, such heterogeneous radiation dosimetries always fare better than their homogeneous counterparts. Finally, limitations of our assumptions and their clinical implications are discussed.
Nonlinear Random Effects Mixture Models: Maximum Likelihood Estimation via the EM Algorithm.
Wang, Xiaoning; Schumitzky, Alan; D'Argenio, David Z
2007-08-15
Nonlinear random effects models with finite mixture structures are used to identify polymorphism in pharmacokinetic/pharmacodynamic phenotypes. An EM algorithm for maximum likelihood estimation is developed that uses sampling-based methods to implement the expectation step, which results in an analytically tractable maximization step. A benefit of the approach is that no model linearization is performed and the estimation precision can be arbitrarily controlled by the sampling process. A detailed simulation study illustrates the feasibility of the estimation approach and evaluates its performance. Applications of the proposed nonlinear random effects mixture model approach to other population pharmacokinetic/pharmacodynamic problems will be of interest for future investigation.
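A simplified analogue of the mixture estimation described above: EM for a two-component one-dimensional Gaussian mixture with known unit variances. This does not reproduce the paper's nonlinear random effects model or its sampling-based E-step; it only shows the alternation of responsibilities (E-step) and closed-form updates (M-step) on synthetic data.

```python
import numpy as np

def em_gaussian_mixture(y, mu, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture with unit variances,
    estimating the component means and the mixing weight."""
    w = 0.5                                  # weight of component 1
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point
        d0 = np.exp(-0.5 * (y - mu[0]) ** 2)
        d1 = np.exp(-0.5 * (y - mu[1]) ** 2)
        r = w * d1 / (w * d1 + (1 - w) * d0)
        # M-step: closed-form weighted-mean updates
        mu = np.array([np.sum((1 - r) * y) / np.sum(1 - r),
                       np.sum(r * y) / np.sum(r)])
        w = r.mean()
    return mu, w

rng = np.random.default_rng(8)
y = np.concatenate([rng.normal(-2, 1, 400), rng.normal(2, 1, 600)])
mu, w = em_gaussian_mixture(y, mu=np.array([y.min(), y.max()]))
```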
Budic, Lara; Didenko, Gregor; Dormann, Carsten F
2016-01-01
In species distribution analyses, environmental predictors and distribution data for large spatial extents are often available in long-lat format, such as degree raster grids. Long-lat projections suffer from unequal cell sizes, as a degree of longitude decreases in length from approximately 110 km at the equator to 0 km at the poles. Here we investigate whether long-lat and equal-area projections yield similar model parameter estimates, or result in a consistent bias. We analyzed the environmental effects on the distribution of 12 ungulate species with a northern distribution, as models for these species should display the strongest effect of projectional distortion. Additionally, we chose four species with entirely continental distributions to investigate the effect of incomplete cell coverage at the coast. We expected that including model weights proportional to the actual cell area should compensate for the observed bias in model coefficients, and similarly that using the land coverage of a cell should decrease bias for species with coastal distributions. As anticipated, model coefficients differed between long-lat and equal-area projections. Progressively smaller and more numerous cells at higher latitudes influenced the importance of parameters in the models, increased the sample size for the northernmost parts of species ranges, and reduced the subcell variability of those areas. However, this bias could be largely removed by weighting long-lat cells by the area they cover, and marginally by correcting for land coverage. Overall, we found little effect of using long-lat rather than equal-area projections in our analysis. The fitted relationship between environmental parameters and occurrence probability differed only very little between the two projection types. We still recommend using equal-area projections to avoid possible bias. More importantly, our results suggest that the cell area and the proportion of a cell covered by land should be
DEFF Research Database (Denmark)
Petersen, Jørgen Holm
2016-01-01
This paper describes a new approach to estimation in a logistic regression model with two crossed random effects, where special interest is in estimating the variance of one of the effects while not making distributional assumptions about the other effect. A composite likelihood is studied. For each term in the composite likelihood, a conditional likelihood is used that eliminates the influence of the random effects, which results in a composite conditional likelihood consisting of only one-dimensional integrals that may be solved numerically. Good properties of the resulting estimator...
Continuous Time Model Estimation
Carl Chiarella; Shenhuai Gao
2004-01-01
This paper introduces an easy to follow method for continuous time model estimation. It serves as an introduction to converting a state space model from continuous time to discrete time, decomposing a hybrid stochastic model into a trend model plus a noise model, estimating the trend model by simulation, and calculating standard errors from estimation of the noise model. It also discusses the numerical difficulties involved in discrete time models that bring about the unit ...
A single-step genomic model with direct estimation of marker effects.
Liu, Z; Goddard, M E; Reinhardt, F; Reents, R
2014-09-01
Compared with the currently widely used multi-step genomic models for genomic evaluation, single-step genomic models can provide more accurate genomic evaluation by jointly analyzing phenotypes and genotypes of all animals and can properly correct for the effect of genomic preselection on genetic evaluations. The objectives of this study were to introduce a single-step genomic model, allowing a direct estimation of single nucleotide polymorphism (SNP) effects, and to develop efficient computing algorithms for solving equations of the single-step SNP model. We proposed an alternative to the current single-step genomic model based on the genomic relationship matrix by including an additional step for estimating the effects of SNP markers. Our single-step SNP model allowed flexible modeling of SNP effects in terms of the number and variance of SNP markers. Moreover, our single-step SNP model included a residual polygenic effect with trait-specific variance for reducing inflation in genomic prediction. A kernel calculation of the SNP model involved repeated multiplications of the inverse of the pedigree relationship matrix of genotyped animals with a vector, for which numerical methods such as preconditioned conjugate gradients can be used. For estimating SNP effects, a special updating algorithm was proposed to separate residual polygenic effects from the SNP effects. We extended our single-step SNP model to general multiple-trait cases. By taking advantage of a block-diagonal (co)variance matrix of SNP effects, we showed how to estimate multivariate SNP effects in an efficient way. A general prediction formula was derived for candidates without phenotypes, which can be used for frequent, interim genomic evaluations without running the whole genomic evaluation process. We discussed various issues related to implementation of the single-step SNP model in Holstein populations with an across-country genomic reference population.
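The abstract notes that the kernel computation of the single-step SNP model relies on repeated matrix-vector products, for which methods such as preconditioned conjugate gradients can be used. Here is a minimal (unpreconditioned) conjugate gradient sketch on a small synthetic symmetric positive definite system, not a genomic one; it shows why only matrix-vector products with the system matrix are needed.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive definite A using only
    matrix-vector products, i.e. without factorizing A."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(9)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)     # symmetric positive definite test matrix
b = rng.normal(size=50)
x = conjugate_gradient(A, b)
```

In the genomic setting, the expensive product with the inverse pedigree relationship matrix would itself be applied iteratively, which is exactly the structure this solver accommodates.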
Institute of Scientific and Technical Information of China (English)
2007-01-01
Normalized difference vegetation index (NDVI) data, obtained from remote sensing, are essential in the Shuttleworth-Wallace (S-W) model for estimation of evapotranspiration. In order to study the effect of the temporal resolution of NDVI on potential evapotranspiration (PET) estimation and hydrological model performance, monthly and 10-day NDVI data sets were used to estimate potential evapotranspiration from January 1985 to December 1987 in the Huangnizhuang catchment, Anhui Province, China. The differences between the two calculation results were analyzed and used to drive the block-wise use of the TOPMODEL with the Muskingum-Cunge routing (BTOPMC) model to test the effect on model performance. The results show that both annual and monthly PET estimated by 10-day NDVI are lower than those estimated by monthly NDVI. Annual PET from the vegetation root zone (PETr) is 9.77%-13.64% lower and monthly PETr is 3.28%-17.44% lower in the whole basin. PET from vegetation interception (PETi) shows the same trend as PETr. In addition, the temporal resolution of NDVI has more effect on PETr in summer and on PETi in winter. The correlation between PETr as estimated by 10-day NDVI and pan measurement (R² = 0.835) is better than that between monthly NDVI and pan measurement (R² = 0.775). The two potential evapotranspiration estimates were used to drive the BTOPMC model and calibrate parameters, and model performance was found to be similar. In summary, the effect of the temporal resolution of NDVI on potential evapotranspiration estimation is significant, but trivial on hydrological model performance.
Wyss, Richard; Girman, Cynthia J; LoCasale, Robert J; Brookhart, Alan M; Stürmer, Til
2013-01-01
It is often preferable to simplify the estimation of treatment effects on multiple outcomes by using a single propensity score (PS) model. Variable selection in PS models impacts the efficiency and validity of treatment effects. However, the impact of different variable selection strategies on the estimated treatment effects in settings involving multiple outcomes is not well understood. The authors use simulations to evaluate the impact of different variable selection strategies on the bias and precision of effect estimates to provide insight into the performance of various PS models in settings with multiple outcomes. Simulated studies consisted of dichotomous treatment, two Poisson outcomes, and eight standard-normal covariates. Covariates were selected for the PS models based on their effects on treatment, a specific outcome, or both outcomes. The PSs were implemented using stratification, matching, and weighting (inverse probability treatment weighting). PS models including only covariates affecting a specific outcome (outcome-specific models) resulted in the most efficient effect estimates. The PS model that only included covariates affecting either outcome (generic-outcome model) performed best among the models that simultaneously controlled measured confounding for both outcomes. Similar patterns were observed over the range of parameter values assessed and all PS implementation methods. A single, generic-outcome model performed well compared with separate outcome-specific models in most scenarios considered. The results emphasize the benefit of using prior knowledge to identify covariates that affect the outcome when constructing PS models and support the potential to use a single, generic-outcome PS model when multiple outcomes are being examined. Copyright © 2012 John Wiley & Sons, Ltd.
Lee, Duncan; Rushworth, Alastair; Sahu, Sujit K
2014-06-01
Estimation of the long-term health effects of air pollution is a challenging task, especially when modeling spatial small-area disease incidence data in an ecological study design. The challenge comes from the unobserved underlying spatial autocorrelation structure in these data, which is accounted for using random effects modeled by a globally smooth conditional autoregressive model. These smooth random effects confound the effects of air pollution, which are also globally smooth. To avoid this collinearity a Bayesian localized conditional autoregressive model is developed for the random effects. This localized model is flexible spatially, in the sense that it is not only able to model areas of spatial smoothness, but also it is able to capture step changes in the random effects surface. This methodological development allows us to improve the estimation performance of the covariate effects, compared to using traditional conditional auto-regressive models. These results are established using a simulation study, and are then illustrated with our motivating study on air pollution and respiratory ill health in Greater Glasgow, Scotland in 2011. The model shows substantial health effects of particulate matter air pollution and nitrogen dioxide, whose effects have been consistently attenuated by the currently available globally smooth models.
Model for estimating the effects of surface roughness on mass ejection from shocked materials
Energy Technology Data Exchange (ETDEWEB)
Asay, J R; Bertholf, L D
1978-10-01
A statistical model is presented for estimating the effects of surface roughness on mass ejection from shocked surfaces. In the model, roughness is characterized by the total volume of defects, such as pits, scratches and machine marks, on a surface. The amount of material ejected from these defects during shock loading can be estimated by assuming that jetting from surface depressions is the primary mode of ejection and by making simplifying assumptions about jetting processes. Techniques are discussed for estimating the effects of distribution in defect size and shape, and results are presented for several different geometries of defects. The model is used to compare predicted and measured ejecta masses from six different materials. Surface defects in these materials range from pits and scratches on polished surfaces to prepared defects such as machined or porous surfaces. Good agreement is achieved between predicted and measured results which suggests general applicability of the model.
Estimation of rotor effective wind speeds using autoregressive models on Lidar data
Giyanani, A.; Bierbooms, W. A. A. M.; van Bussel, G. J. W.
2016-09-01
Lidars have become increasingly useful for providing accurate wind speed measurements in front of the wind turbine. The wind field measured at a distant meteorological mast changes its structure, or becomes too distorted, before it reaches the turbine, so one cannot simply apply Taylor's frozen-turbulence hypothesis to represent this distant flow field at the rotor. Wind turbine controllers could optimize the energy output and reduce loads significantly if wind speed estimates were known in advance with high accuracy and low uncertainty. Current methods derive wind speed estimates from aerodynamic torque, pitch angle and tip speed ratio after the wind field has flowed past the turbine, and have their limitations, e.g. in predicting gusts. Therefore, an estimation model coupled with the measuring capability of nacelle-based Lidars is needed for detecting extreme events and for estimating accurate wind speeds at the rotor disc. Nacelle-mounted Lidars measure the oncoming wind field up to 400 m (5D) in front of the turbine, and appropriate models can be used to derive the rotor effective wind speed from these measurements. This article proposes an autoregressive model, combined with a method to include the blockage factor, to estimate wind speeds accurately from Lidar measurements. An ARMAX model was used to determine the transfer function that models the physical evolution of wind towards the wind turbine, incorporating the effects of surface roughness, wind shear and wind variability at the site. The model incorporates local as well as global effects and was able to predict the rotor effective wind speed with adequate accuracy for wind turbine control actions. A high correlation of 0.86 was achieved when the ARMAX-modelled signal was compared to a reference signal. The model could also be extended to estimate the damage potential during high wind speeds, gusts or abrupt changes in wind direction, allowing the controller to act appropriately
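As a hedged illustration of the autoregressive core of such an estimator (the ARMAX model in the abstract additionally has exogenous inputs and a blockage correction, which are omitted here), a minimal AR(1) fit by least squares on a synthetic wind speed trace might look like:

```python
def fit_ar1(series):
    """Least-squares fit of v[t] = a * v[t-1] + b on a 1-D series."""
    x = series[:-1]
    y = series[1:]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def predict_next(series, a, b):
    """One-step-ahead forecast from the fitted AR(1) coefficients."""
    return a * series[-1] + b

# Synthetic, mildly autocorrelated "Lidar" wind speed trace in m/s
# (invented numbers, not measurement data from the study).
wind = [8.0, 8.3, 8.1, 8.6, 8.4, 8.9, 8.7, 9.0, 8.8, 9.2]
a, b = fit_ar1(wind)
forecast = predict_next(wind, a, b)
```

A controller would use such a forecast as the advance wind speed estimate; a real implementation would add the exogenous (X) terms and the blockage factor.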
EFFECTS OF OCEAN TIDE MODELS ON GNSS-ESTIMATED ZTD AND PWV IN TURKEY
Directory of Open Access Journals (Sweden)
G. Gurbuz
2015-12-01
Global Navigation Satellite System (GNSS) observations can precisely estimate the total zenith tropospheric delay (ZTD) and precipitable water vapour (PWV) for weather prediction and atmospheric research, as a continuous and all-weather technique. However, apart from the GNSS technique itself, estimates of ZTD and PWV are subject to the effects of geophysical models with large uncertainties, particularly imprecise ocean tide models in Turkey. In this paper, GNSS data from Jan. 1st to Dec. 31st of 2014 are processed at 4 co-located GNSS stations (GISM, DIYB, GANM, and ADAN) with radiosonde data from the Turkish Met-Office, along with several nearby IGS stations. The GAMIT/GLOBK software is used to process the 30-second GNSS data using the Vienna Mapping Function and a 10° elevation cut-off angle. Tidal and non-tidal atmospheric pressure loading (ATML) at the observation level is also applied in GAMIT/GLOBK. Several widely used ocean tide models are evaluated for their effects on GNSS-estimated ZTD and PWV: the IERS-recommended FES2004; NAO99b, from a barotropic hydrodynamic model; CSR4.0, obtained from TOPEX/Poseidon altimetry with FES94.1 as the reference model; and GOT00, again a long-wavelength adjustment of FES94.1 using TOPEX/Poseidon data on a 0.5 by 0.5 degree grid. The ZTD and PWV computed from radiosonde profile observations are regarded as reference values for comparison and validation. In the processing phase, five different strategies are used, without an ocean tide model and with each of the four aforementioned ocean tide models, to evaluate ocean tide model effects on GNSS-estimated ZTD and PWV through comparison with the co-located radiosondes. Results show that ocean tide models affect the estimated ZTD at the centimeter level, and thus the precipitable water vapour at the millimeter level, at stations near coasts. The ocean tide model FES2004 that is
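The ZTD-to-PWV step the abstract relies on can be sketched generically as below. This is not the authors' GAMIT/GLOBK processing: the Saastamoinen hydrostatic delay and the Bevis-style conversion factor are standard textbook formulas, but the input values here are invented.

```python
import math

def zhd_saastamoinen(p_hpa, lat_rad, h_km):
    """Saastamoinen zenith hydrostatic delay (m) from surface pressure,
    latitude and station height."""
    return 0.0022768 * p_hpa / (
        1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.00028 * h_km)

def pwv_from_ztd(ztd_m, p_hpa, lat_rad, h_km, tm_k):
    """Convert a GNSS zenith total delay to precipitable water vapour (mm).
    tm_k is the weighted mean temperature of the atmosphere."""
    zwd = ztd_m - zhd_saastamoinen(p_hpa, lat_rad, h_km)   # wet delay (m)
    k2p, k3 = 22.1, 3.739e5     # Bevis refractivity constants (K/hPa, K^2/hPa)
    rv, rho_w = 461.5, 1000.0   # J/(kg K), kg/m^3
    pi_factor = 1.0e8 / (rho_w * rv * (k3 / tm_k + k2p))   # dimensionless, ~0.15
    return pi_factor * zwd * 1000.0

# Illustrative values only (a mid-latitude station near sea level).
pwv = pwv_from_ztd(ztd_m=2.40, p_hpa=1000.0,
                   lat_rad=math.radians(40.0), h_km=0.1, tm_k=270.0)
```

A centimeter-level error in ZTD propagates through `pi_factor` to a millimeter-level error in PWV, which is the sensitivity the paper quantifies for the different ocean tide models.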
Global estimation of effective plant rooting depth: Implications for hydrological modeling
Yang, Yuting; Donohue, Randall J.; McVicar, Tim R.
2016-10-01
Plant rooting depth (Zr) is a key parameter in hydrological and biogeochemical models, yet the global spatial distribution of Zr is largely unknown due to the difficulties in its direct measurement. Additionally, Zr observations are usually only representative of a single plant or several plants, which can differ greatly from the effective Zr over a modeling unit (e.g., catchment or grid-box). Here, we provide a global parameterization of an analytical Zr model that balances the marginal carbon cost and benefit of deeper roots, and produce a climatological (i.e., 1982-2010 average) global Zr map. To test the Zr estimates, we apply the estimated Zr in a highly transparent hydrological model (i.e., the Budyko-Choudhury-Porporato (BCP) model) to estimate mean annual actual evapotranspiration (E) across the globe. We then compare the estimated E with both water balance-based E observations at 32 major catchments and satellite grid-box retrievals across the globe. Our results show that the BCP model, when implemented with Zr estimated herein, optimally reproduced the spatial pattern of E at both scales (i.e., R2 = 0.94, RMSD = 74 mm yr-1 for catchments, and R2 = 0.90, RMSD = 125 mm yr-1 for grid-boxes) and provides improved model outputs when compared to BCP model results from two already existing global Zr data sets. These results suggest that our Zr estimates can be effectively used in state-of-the-art hydrological models, and potentially biogeochemical models, where the determination of Zr currently largely relies on biome type-based look-up tables.
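A minimal sketch of the Budyko-type calculation the BCP chain rests on, assuming Choudhury's form of the curve and treating the shape parameter n as the quantity that the effective rooting depth modulates (an assumption made here for illustration; the paper's parameterization is more elaborate):

```python
def choudhury_e(p, ep, n):
    """Mean annual actual evapotranspiration E (mm/yr) from Choudhury's form
    of the Budyko curve, given precipitation p and potential
    evapotranspiration ep (both mm/yr). Larger n pushes E closer to its
    supply/demand limits min(p, ep)."""
    return p * ep / (p ** n + ep ** n) ** (1.0 / n)

# Illustrative water-limited catchment: 800 mm/yr rain, 1200 mm/yr demand.
e = choudhury_e(p=800.0, ep=1200.0, n=1.8)
```

By construction E stays below both the water supply and the energy demand, which is why the curve is a convenient test bed for a rooting-depth product.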
Goldfeld, K S
2014-03-30
Cost-effectiveness analysis is an important tool that can be applied to the evaluation of a health treatment or policy. When the observed costs and outcomes result from a nonrandomized treatment, making causal inference about the effects of the treatment requires special care. The challenges are compounded when the observation period is truncated for some of the study subjects. This paper presents a method for unbiased estimation of cost-effectiveness using observational study data that are not fully observed. The method, twice-weighted multiple interval estimation of a marginal structural model, was developed in order to analyze the cost-effectiveness of treatment protocols for residents with advanced dementia living in nursing homes when they become acutely ill. A key feature of this estimation approach is that it facilitates a sensitivity analysis that identifies the potential effects of unmeasured confounding on the conclusions concerning cost-effectiveness.
Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood
Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.
2011-01-01
Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are
Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable
du Toit, Stephen H. C.; Cudeck, Robert
2009-01-01
A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…
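A toy version of the marginalization step described above, with a deliberately simple nonlinear response exp(theta + b) * x and a crude equally spaced quadrature grid standing in for the adaptive quadrature such software actually uses (all names and data here are invented):

```python
import math

def marginal_loglik(y, x, theta, sigma_b, sigma_e, grid=81, span=5.0):
    """Log marginal likelihood of y_i = exp(theta + b) * x_i + e_i with a
    scalar random effect b ~ N(0, sigma_b^2) integrated out numerically on
    an equally spaced grid over +/- span standard deviations."""
    step = 2.0 * span * sigma_b / (grid - 1)
    total = 0.0
    for i in range(grid):
        b = -span * sigma_b + i * step
        # prior density of the random effect at this node
        lik = math.exp(-b * b / (2.0 * sigma_b ** 2)) / (
            sigma_b * math.sqrt(2.0 * math.pi))
        # conditional density of the repeated measures given b
        for yi, xi in zip(y, x):
            mu = math.exp(theta + b) * xi
            lik *= math.exp(-(yi - mu) ** 2 / (2.0 * sigma_e ** 2)) / (
                sigma_e * math.sqrt(2.0 * math.pi))
        total += lik * step
    return math.log(total)

# Fabricated repeated measures roughly consistent with theta = 0.
y = [1.0, 2.1, 2.9]
x = [1.0, 2.0, 3.0]
ll_true = marginal_loglik(y, x, theta=0.0, sigma_b=0.1, sigma_e=0.3)
ll_off = marginal_loglik(y, x, theta=1.0, sigma_b=0.1, sigma_e=0.3)
```

Maximizing this marginal log-likelihood over theta is the "marginal maximum likelihood" idea; separable linear parameters would be profiled out analytically rather than gridded.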
Klein, Andreas G.; Muthen, Bengt O.
2007-01-01
In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…
Energy Technology Data Exchange (ETDEWEB)
Rupšys, P. [Aleksandras Stulginskis University, Studenų g. 11, Akademija, Kaunas district, LT – 53361 Lithuania (Lithuania)
2015-10-28
A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used, and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
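The copula building block is easy to sketch. Below is the density of a bivariate Gaussian copula, which could couple fitted marginals for diameter and height; the formula is standard, and nothing here reproduces the paper's SDE machinery.

```python
import math
from statistics import NormalDist

def gaussian_copula_density(u, v, rho):
    """Density of the bivariate Gaussian copula with correlation rho,
    evaluated at (u, v) in the open unit square. A joint density for
    (diameter, height) would be c(F_d(d), F_h(h)) * f_d(d) * f_h(h)."""
    nd = NormalDist()
    x, y = nd.inv_cdf(u), nd.inv_cdf(v)
    det = 1.0 - rho * rho
    expo = -(rho * rho * (x * x + y * y) - 2.0 * rho * x * y) / (2.0 * det)
    return math.exp(expo) / math.sqrt(det)
```

With rho = 0 the density is identically 1 (independence), and it peaks along the diagonal for positive rho, which is the dependence a height-diameter model exploits.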
Schick, Robert S; Kraus, Scott D; Rolland, Rosalind M; Knowlton, Amy R; Hamilton, Philip K; Pettis, Heather M; Thomas, Len; Harwood, John; Clark, James S
2016-01-01
Right whales are vulnerable to many sources of anthropogenic disturbance including ship strikes, entanglement with fishing gear, and anthropogenic noise. The effect of these factors on individual health is unclear. A statistical model using photographic evidence of health was recently built to infer the true or hidden health of individual right whales. However, two important prior assumptions about the role of missing data and unexplained variance on the estimates were not previously assessed. Here we tested these factors by varying prior assumptions and model formulation. We found sensitivity to each assumption and used the output to make guidelines on future model formulation.
Energy Technology Data Exchange (ETDEWEB)
Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.
2003-10-29
Quantitative analysis of uptake and washout of cardiac single photon emission computed tomography (SPECT) radiopharmaceuticals has the potential to provide better contrast between healthy and diseased tissue, compared to conventional reconstruction of static images. Previously, we used B-splines to model time-activity curves (TACs) for segmented volumes of interest and developed fast least-squares algorithms to estimate spline TAC coefficients and their statistical uncertainties directly from dynamic SPECT projection data. This previous work incorporated the physical effects of attenuation and depth-dependent collimator response. In the present work, we incorporate scatter and use a computer simulation to study how scatter modeling affects directly estimated TACs and subsequent estimates of compartmental model parameters. An idealized single-slice emission phantom was used to simulate a 15 min dynamic 99mTc-teboroxime cardiac patient study in which 500,000 events containing scatter were detected from the slice. When scatter was modeled, unweighted least-squares estimates of TACs had root mean square (RMS) error that was less than 0.6% for normal left ventricular myocardium, blood pool, liver, and background tissue volumes, and averaged 3% for two small myocardial defects. When scatter was not modeled, RMS error increased to average values of 16% for the four larger volumes and 35% for the small defects. Noise-to-signal ratios (NSRs) for TACs ranged between 1% and 18% for the larger volumes and averaged 110% for the small defects when scatter was modeled. When scatter was not modeled, NSR improved by average factors of 1.04 for the larger volumes and 1.25 for the small defects, as a result of the better-posed (though more biased) inverse problem. Weighted least-squares estimates of TACs had slightly better NSR and worse RMS error, compared to unweighted least-squares estimates. Compartmental model uptake and washout parameter estimates obtained from the TACs were less
A kinematic model to estimate effective dose of radioactive substances in a human body
Sasaki, S.; Yamada, T.
2013-05-01
The great earthquake occurred in the north-east area of Japan on March 11, 2011, and the facility systems controlling the Fukushima Daiichi nuclear power station were completely destroyed by the following giant tsunami. From the damaged reactor containment vessels, an amount of radioactive substances leaked and diffused in the vicinity of the station. Radiological internal exposure became a serious social issue both in Japan and all over the world. The present study provides an easily understandable, kinematics-based model to estimate the effective dose of radioactive substances in a human body by simplifying the complicated mechanism of metabolism. The International Commission on Radiological Protection (ICRP) has developed a sophisticated model, which is well known as the standard method to calculate the effective dose for radiological protection. However, because the ICRP method is so detailed, it is rather difficult for non-specialists in radiology to grasp the whole picture of the movement and influence of radioactive substances in a human body. Therefore, in the present paper we propose a newly derived and easily understandable model to estimate the effective dose. The present method is very similar to the traditional and conventional tank model in hydrology: the ingestion flux of radioactive substances corresponds to rainfall intensity, and the storage of radioactive substances to the water storage of a basin in runoff analysis. The key of the present method is to estimate the energy radiated in the radioactive nuclear disintegration of an atom by using the classical theory of β decay and special relativity for various kinds of radioactive atoms. The only parameters used in this model are the physical half-life and the biological half-life; there are no operational parameters or coefficients to adjust our theoretical result to the ICRP values. The figure shows the time-varying effective dose with ingestion duration, and we can confirm the validity of our model. The time-varying effective dose with
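The tank-model analogy admits a very short sketch: physical decay and biological elimination act as two outlets of a single storage, giving the standard effective half-life. The nuclide values below are illustrative order-of-magnitude numbers, not taken from the paper.

```python
import math

def remaining_fraction(t_days, t_phys, t_bio):
    """Fraction of an ingested activity still in the body after t_days.
    Physical decay and biological elimination drain the same 'tank' in
    parallel, so 1/T_eff = 1/T_phys + 1/T_bio."""
    t_eff = 1.0 / (1.0 / t_phys + 1.0 / t_bio)
    return math.exp(-math.log(2.0) * t_days / t_eff), t_eff

# Roughly Cs-137-like values for an adult: physical half-life ~11,000 days,
# biological half-life ~100 days (assumed here for illustration).
remaining, t_eff = remaining_fraction(30.0, 11000.0, 100.0)
```

Because the biological half-life is much shorter than the physical one here, the effective half-life is dominated by elimination, which is the kind of simplification that makes the tank picture easy to explain to non-specialists.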
Effect of assay measurement error on parameter estimation in concentration-QTc interval modeling.
Bonate, Peter L
2013-01-01
Linear mixed-effects models (LMEMs) of concentration-double-delta QTc intervals (QTc intervals corrected for placebo and baseline effects) assume that the concentration measurement error is negligible, which is an incorrect assumption. Previous studies have shown in linear models that independent-variable error can attenuate the slope estimate with a corresponding increase in the intercept. Monte Carlo simulation was used to examine the impact of assay measurement error (AME) on the parameter estimates of an LMEM and a nonlinear MEM (NMEM) concentration-ddQTc interval model from a 'typical' thorough QT study. For the LMEM, the type I error rate was unaffected by assay measurement error. Significant slope attenuation (>10%) occurred when the AME exceeded 40%, independent of the sample size. Increasing AME also decreased the between-subject variance of the slope, increased the residual variance, and had no effect on the between-subject variance of the intercept. For a typical analytical assay having an assay measurement error of less than 15%, the relative bias in the estimates of the model parameters and variance components was less than 15% in all cases. The NMEM appeared to be more robust to AME, as most parameters were unaffected by measurement error. Monte Carlo simulation was then used to determine whether the simulation-extrapolation method of parameter bias correction could be applied to cases of large AME in LMEMs. For analytical assays with large AME (>30%), the simulation-extrapolation method could correct biased model parameter estimates to near-unbiased levels.
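A hedged sketch of the simulation-extrapolation (SIMEX) idea on a plain linear regression (not the mixed-effects model of the study): add extra measurement error at several inflation levels, refit, and extrapolate the slope back to the no-error level with a quadratic. All data are synthetic.

```python
import math
import random
import numpy as np

def fit_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

random.seed(1)
n, beta, sigma_u = 2000, 2.0, 0.6
true_x = [random.gauss(0, 1) for _ in range(n)]
y = [beta * t + random.gauss(0, 0.5) for t in true_x]
w = [t + random.gauss(0, sigma_u) for t in true_x]   # error-prone covariate

naive = fit_slope(w, y)          # attenuated toward zero by the AME

# SIMEX: inflate the measurement error variance by factors (1 + lam),
# refit, then extrapolate the slope back to lam = -1 (no error).
lams = [0.0, 0.5, 1.0, 1.5, 2.0]
slopes = []
for lam in lams:
    reps = [fit_slope([wi + random.gauss(0, math.sqrt(lam) * sigma_u)
                       for wi in w], y)
            for _ in range(20)]
    slopes.append(sum(reps) / len(reps))

coef = np.polyfit(lams, slopes, 2)           # quadratic extrapolant
corrected = float(np.polyval(coef, -1.0))
```

The corrected slope recovers most of the attenuation; the quadratic extrapolant is the conventional default, though it only approximates the true (rational) attenuation curve.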
Estimating ETAS: the effects of truncation, missing data, and model assumptions
Seif, Stefanie; Mignan, Arnaud; Zechar, Jeremy; Werner, Maximilian; Wiemer, Stefan
2016-04-01
The Epidemic-Type Aftershock Sequence (ETAS) model is widely used to describe the occurrence of earthquakes in space and time, but there has been little discussion of the limits of, and influences on, its estimation. What has been established is that ETAS parameter estimates are influenced by missing data (e.g., earthquakes are not reliably detected during lively aftershock sequences) and by simplifying assumptions (e.g., that aftershocks are isotropically distributed). In this article, we investigate the effect of truncation: how do parameter estimates depend on the cut-off magnitude, Mcut, above which parameters are estimated? We analyze catalogs from southern California and Italy and find that parameter variations as a function of Mcut are caused by (i) changing sample size (which affects, e.g., Omori's c constant) or (ii) an intrinsic dependence on Mcut (as Mcut increases, absolute productivity and background rate decrease). We also explore the influence of another form of truncation, the finite catalog length, which can bias estimators of the branching ratio. Being also a function of Omori's p-value, the true branching ratio is underestimated by 45% to 5% for 1.05 < p < 1.2. The ETAS productivity parameters (α and K0) and Omori's c-value are significantly changed only for low Mcut = 2.5. We further find that conventional estimation errors for these parameters, inferred from simulations that do not account for aftershock incompleteness, are underestimated by, on average, a factor of six.
Estimating ETAS: The effects of truncation, missing data, and model assumptions
Seif, Stefanie; Mignan, Arnaud; Zechar, Jeremy Douglas; Werner, Maximilian Jonas; Wiemer, Stefan
2017-01-01
The Epidemic-Type Aftershock Sequence (ETAS) model is widely used to describe the occurrence of earthquakes in space and time, but there has been little discussion dedicated to the limits of, and influences on, its estimation. Among the possible influences we emphasize in this article the effect of the cutoff magnitude, Mcut, above which parameters are estimated; the finite length of earthquake catalogs; and missing data (e.g., during lively aftershock sequences). We analyze catalogs from Southern California and Italy and find that some parameters vary as a function of Mcut due to changing sample size (which affects, e.g., Omori's c constant) or an intrinsic dependence on Mcut (as Mcut increases, absolute productivity and background rate decrease). We also explore the influence of another form of truncation—the finite catalog length—that can bias estimators of the branching ratio. Being also a function of Omori's p value, the true branching ratio is underestimated by 45% to 5% for 1.05 < p < 1.2. Finite sample size affects the variation of the branching ratio estimates. Moreover, we investigate the effect of missing aftershocks and find that the ETAS productivity parameters (α and K0) and the Omori's c and p values are significantly changed for Mcut < 3.5. We further find that conventional estimation errors for these parameters, inferred from simulations that do not account for aftershock incompleteness, are underestimated by, on average, a factor of 8.
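One way to see why a finite catalog depresses branching-ratio estimates, especially for p near 1: only the fraction of each parent's direct aftershocks that falls inside the observation window is counted. The kernel is the standard Omori-Utsu form; the numbers below are illustrative, not the paper's fitted values.

```python
def omori_fraction_within(T, c, p):
    """Fraction of a parent's direct aftershocks occurring within T days,
    for an Omori-Utsu rate proportional to (t + c)**(-p) with p > 1
    (the integral of the normalized kernel from 0 to T)."""
    return 1.0 - ((T + c) / c) ** (1.0 - p)

# Apparent branching ratio in a 1-year window, as a crude scaling of the
# true ratio by the observed fraction (a simplification for illustration).
n_true = 0.8
f_low = omori_fraction_within(365.0, 0.01, 1.05)    # slow decay: much missed
f_high = omori_fraction_within(365.0, 0.01, 1.2)    # faster decay
n_apparent_low = n_true * f_low
n_apparent_high = n_true * f_high
```

The closer p is to 1, the heavier the Omori tail and the larger the share of aftershocks falling outside any finite window, matching the direction of the bias the abstract reports.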
Application of a multistate model to estimate culvert effects on movement of small fishes
Norman, J.R.; Hagler, M.M.; Freeman, Mary C.; Freeman, B.J.
2009-01-01
While it is widely acknowledged that culverted road-stream crossings may impede fish passage, the effects of culverts on movement of nongame and small-bodied fishes have not been extensively studied, and studies generally have not accounted for spatial variation in capture probabilities. We estimated probabilities for upstream and downstream movement of small (30-120 mm standard length) benthic and water column fishes across stream reaches with and without culverts at four road-stream crossings over a 4-6-week period. Movement and reach-specific capture probabilities were estimated using multistate capture-recapture models. Although none of the culverts were complete barriers to passage, only a bottomless-box culvert appeared to permit unrestricted upstream and downstream movements by benthic fishes, based on model estimates of movement probabilities. At two box culverts that were perched above the water surface at base flow, observed movements were limited to water column fishes and to intervals when runoff from storm events raised water levels above the perched level. Only a single fish was observed to move through a partially embedded pipe culvert. Estimates of probabilities of movement over distances equal to at least the length of one culvert were low (e.g., generally ≤0.03, estimated for 1-2-week intervals) and had wide 95% confidence intervals as a consequence of few observed movements to nonadjacent reaches. Estimates of capture probabilities varied among reaches by a factor of 2 to over 10, illustrating the importance of accounting for spatially variable capture rates when estimating movement probabilities with capture-recapture data. Longer-term studies are needed to evaluate temporal variability in stream fish passage at culverts (e.g., in relation to streamflow variability) and to thereby better quantify the degree of population fragmentation caused by road-stream crossings with culverts. © American Fisheries Society 2009.
The effects of numerical-model complexity and observation type on estimated porosity values
Starn, J. Jeffrey; Bagtzoglou, Amvrossios C.; Green, Christopher T.
2015-09-01
The relative merits of model complexity and of the types of observations employed in model calibration are compared. An existing groundwater flow model of the Salt Lake Valley, Utah (USA), coupled with an advective transport simulation, is adapted, and effective porosity is adjusted until simulated tritium concentrations match concentrations in samples from wells. Two calibration approaches are used: a "complex" highly parameterized porosity field and a "simple" parsimonious model of porosity distribution. The use of an atmospheric tracer (tritium in this case) and apparent ages (from tritium/helium) in model calibration is also discussed. Of the models tested, the complex model (with tritium concentrations and tritium/helium apparent ages) performs best. Although tritium breakthrough curves simulated by the complex and simple models are generally similar, and there is value in the simple model, the complex model is supported by a more realistic porosity distribution and a greater number of estimable parameters. Culling the best-quality data did not lead to better calibration, possibly because of processes and aquifer characteristics that are not simulated. Despite many factors that contribute to shortcomings of both the models and the data, useful information is obtained from all the models evaluated. Although any particular prediction of tritium breakthrough may have large errors, overall, the models mimic observed trends.
Shan, Bonan; Wang, Jiang; Deng, Bin; Zhang, Zhen; Wei, Xile
2017-03-01
Assessment of the effective connectivity among different brain regions during seizures is a crucial problem in neuroscience today. As a consequence, a new model-inversion framework for brain function imaging is introduced in this manuscript. The framework is based on approximating brain networks using a multi-coupled neural mass model (NMM). The NMM describes the excitatory and inhibitory neural interactions, capturing the mechanisms involved in seizure initiation, evolution and termination. A particle swarm optimization method is used to estimate the effective connectivity variation (the parameters of the NMM) and the epileptiform dynamics (the states of the NMM), which cannot be directly measured using electrophysiological measurement alone. The estimated effective connectivity includes both the local connectivity parameters within a single-region NMM and the remote connectivity parameters between multi-coupled NMMs. Once the epileptiform activities are estimated, a proportional-integral controller outputs a control signal so that the epileptiform spikes can be inhibited immediately. Numerical simulations are carried out to illustrate the effectiveness of the proposed framework. The framework and the results have a profound impact on the way we detect and treat epilepsy.
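A compact particle swarm optimizer of the kind used for such parameter estimation can be sketched as below; the two-parameter "connectivity" recovery problem is a toy stand-in for inverting a neural mass model, and all settings are generic defaults rather than the paper's.

```python
import random

def pso(objective, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize objective over a box via a basic global-best PSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy inversion: recover the parameters that generated an "observation".
def obs_model(c):
    return [c[0] * t + c[1] * t * t for t in (0.1, 0.2, 0.3, 0.4)]

target = obs_model((2.0, -1.0))
objective = lambda c: sum((a - b) ** 2 for a, b in zip(obs_model(c), target))
best, best_val = pso(objective, [(-5.0, 5.0), (-5.0, 5.0)])
```

In the paper's setting the objective would compare simulated NMM output to recorded signals, and the search space would cover local and remote coupling parameters.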
Population Intervention Models to Estimate Ambient NO2 Health Effects in Children with Asthma
Snowden, Jonathan M.; Mortimer, Kathleen M.; Dufour, Mi-Suk Kang; Tager, Ira B.
2015-01-01
Health effects of ambient air pollution are most frequently expressed in individual studies as responses to a standardized unit of air pollution changes (e.g., an interquartile interval), which is thought to enable comparison of findings across studies. However, this approach does not necessarily convey health effects in terms of a real-world air pollution scenario. In the present study, we employ population intervention modeling to estimate the effect of an air pollution intervention that makes explicit reference to the observed exposure data and is identifiable in those data. We calculate the association between ambient summertime NO2 and forced expiratory flow between 25% and 75% of forced vital capacity (FEF25–75) in a cohort of children with asthma in Fresno, California. We scale the effect size to reflect NO2 abatement on a majority of summer days. The effect estimates were small, imprecise, and consistently indicated improved pulmonary function with decreased NO2. The effects ranged from −0.8% of mean FEF25–75 (95% Confidence Interval: −3.4 , 1.7) to −3.3% (95% CI: −7.5, 0.9). We conclude by discussing the nature and feasibility of the exposure change analyzed here given the observed air pollution profile, and we propose additional applications of the population intervention model in environmental epidemiology. PMID:25182844
Billings, S. A.
1988-03-01
Time and frequency domain identification methods for nonlinear systems are reviewed. Parametric methods, prediction error methods, structure detection, model validation, and experiment design are discussed. Identification of a liquid level system, a heat exchanger, and a turbocharge automotive diesel engine are illustrated. Rational models are introduced. Spectral analysis for nonlinear systems is treated. Recursive estimation is mentioned.
Yang, Ji Seung; Cai, Li
2014-01-01
The main purpose of this study is to improve estimation efficiency in obtaining maximum marginal likelihood estimates of contextual effects in the framework of nonlinear multilevel latent variable model by adopting the Metropolis-Hastings Robbins-Monro algorithm (MH-RM). Results indicate that the MH-RM algorithm can produce estimates and standard…
Lassen, Brian; Ostergaard, Søren
2012-10-01
In this study, a stochastic predictive model simulating a constant infection pressure of Eimeria was used to estimate the production outcomes, economics, and effects of treatment decisions in a dairy herd of 100 cows. This intestinal parasite causes problems mainly in calves, is known to have long-term effects on growth rate, and in severe cases can result in mortality. Due to the inconspicuous nature of the parasite, the clinical signs and sub-clinical manifestations it may produce can be overlooked. Data acquired from the literature and Estonian dairy farms were implemented in the SimHerd IV model to simulate three scenarios of symptomatic treatment: no calves treated (NT), a default estimate of the current treatment strategy (DT), and all calves treated (AT). Effects of metaphylactic treatment were studied as a lowering of the infection pressure. Delay in the age at first insemination of heifers was the effect with the largest economic impact on the gross margin, followed by calf mortality and reduced growth rate. Large expenses were associated with the introduction of replacement heifers and the feeding of heifers as a result of the delay in reaching a specific body weight at calving. Compared to the control scenarios, with no effects or treatments of Eimeria, dairy farmers were estimated to incur annual losses of 8-9% in the balanced income. Providing metaphylactic drugs resulted in an increased gross margin of 6-7%. Purchase of new heifers compensated for some production losses that would otherwise have increased the expenses related to Eimeria. The simulation illustrates how the effects of Eimeria infections can have a long-lasting impact on interacting management factors. It was concluded that all three simulated symptomatic treatment regimes provided only small economic benefits if applied alone and not in combination with a lowering of the infection pressure.
A multilevel model to address batch effects in copy number estimation using SNP arrays.
Scharpf, Robert B; Ruczinski, Ingo; Carvalho, Benilton; Doan, Betty; Chakravarti, Aravinda; Irizarry, Rafael A
2011-01-01
Submicroscopic changes in chromosomal DNA copy number dosage are common and have been implicated in many heritable diseases and cancers. Recent high-throughput technologies have a resolution that permits the detection of segmental changes in DNA copy number that span thousands of base pairs in the genome. Genomewide association studies (GWAS) may simultaneously screen for copy number phenotype and single nucleotide polymorphism (SNP) phenotype associations as part of the analytic strategy. However, genomewide array analyses are particularly susceptible to batch effects as the logistics of preparing DNA and processing thousands of arrays often involves multiple laboratories and technicians, or changes over calendar time to the reagents and laboratory equipment. Failure to adjust for batch effects can lead to incorrect inference and requires inefficient post hoc quality control procedures to exclude regions that are associated with batch. Our work extends previous model-based approaches for copy number estimation by explicitly modeling batch and using shrinkage to improve locus-specific estimates of copy number uncertainty. Key features of this approach include the use of biallelic genotype calls from experimental data to estimate batch-specific and locus-specific parameters of background and signal without the requirement of training data. We illustrate these ideas using a study of bipolar disease and a study of chromosome 21 trisomy. The former has batch effects that dominate much of the observed variation in the quantile-normalized intensities, while the latter illustrates the robustness of our approach to a data set in which approximately 27% of the samples have altered copy number. Locus-specific estimates of copy number can be plotted on the copy number scale to investigate mosaicism and guide the choice of appropriate downstream approaches for smoothing the copy number as a function of physical position. The software is open source and implemented in the R
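The shrinkage idea can be sketched with a normal-normal posterior mean that pulls noisy batch-specific locus estimates toward the pooled value, with less shrinkage for large batches. This is a simplification of the paper's hierarchical model; all numbers are invented.

```python
def shrink_batch_means(batch_means, batch_vars, batch_sizes, tau2):
    """Shrink per-batch mean intensities at one locus toward the pooled
    mean. tau2 is the between-batch variance, assumed known here; the
    weight is the usual reliability ratio tau2 / (tau2 + sampling var)."""
    total = sum(batch_sizes)
    pooled = sum(m * n for m, n in zip(batch_means, batch_sizes)) / total
    shrunk = []
    for m, v, n in zip(batch_means, batch_vars, batch_sizes):
        w = tau2 / (tau2 + v / n)          # large batch -> w near 1
        shrunk.append(w * m + (1.0 - w) * pooled)
    return shrunk

# Three hypothetical batches: the small middle batch (8 arrays) is noisy
# and gets pulled hardest toward the pooled mean.
shrunk = shrink_batch_means([1.9, 2.3, 2.0], [0.16, 0.36, 0.25],
                            [50, 8, 30], tau2=0.01)
```

Borrowing strength this way stabilizes locus-specific copy number uncertainty without discarding batches, which is the motivation the abstract gives for modeling batch explicitly rather than filtering post hoc.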
Krishna Rao, Sreevidya; Mejia, Gloria C; Roberts-Thomson, Kaye; Logan, Richard M; Kamath, Veena; Kulkarni, Muralidhar; Mittinty, Murthy N
2015-07-01
Early life socioeconomic disadvantage could affect adult health directly or indirectly. To the best of our knowledge, there are no studies of the direct effect of early life socioeconomic conditions on oral cancer occurrence in adult life. We conducted a multicenter, hospital-based, case-control study in India between 2011 and 2012 on 180 histopathologically confirmed incident oral and/or oropharyngeal cancer cases, aged 18 years or more, and 272 controls that included hospital visitors, who were not diagnosed with any cancer in the same hospitals. Life-course data were collected on socioeconomic conditions, risk factors, and parental behavior through interview employing a life grid. The early life socioeconomic conditions measure was determined by occupation of the head of household in childhood. Adult socioeconomic measures included participant's education and current occupation of the head of household. Marginal structural models with stabilized inverse probability weights were used to estimate the controlled direct effects of early life socioeconomic conditions on oral cancer. The total effect model showed that those in the low socioeconomic conditions in the early years of childhood had 60% (risk ratio [RR] = 1.6 [95% confidence interval {CI} = 1.4, 1.9]) increased risk of oral cancer. From the marginal structural models, the estimated risk for developing oral cancer among those in low early life socioeconomic conditions was 50% (RR = 1.5 [95% CI = 1.4, 1.5]), 20% (RR = 1.2 [95% CI = 0.9, 1.7]), and 90% (RR = 1.9 [95% CI = 1.7, 2.2]) greater than those in the high socioeconomic conditions when controlled for smoking, chewing, and alcohol, respectively. When all the three mediators were controlled in a marginal structural model, the RR was 1.3 (95% CI = 1.0, 1.6). Early life low socioeconomic condition had a controlled direct effect on oral cancer when smoking, chewing tobacco, and alcohol were separately adjusted in marginal structural models.
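Marginal structural models of this kind weight each subject by the ratio of the marginal exposure probability to the conditional (propensity) probability, which stabilizes the weights. A minimal sketch, assuming a binary exposure indicator and an externally fitted propensity score; the function name and toy data are illustrative, not from the study:

```python
import numpy as np

def stabilized_ipw(exposure, propensity):
    """Stabilized inverse-probability weights for a binary exposure.

    exposure:   0/1 array (e.g. low early-life socioeconomic conditions)
    propensity: P(exposure = 1 | confounders), e.g. from logistic regression
    """
    p_marginal = exposure.mean()          # numerator: marginal exposure rate
    num = np.where(exposure == 1, p_marginal, 1.0 - p_marginal)
    den = np.where(exposure == 1, propensity, 1.0 - propensity)
    return num / den

# toy check: when the propensity equals the marginal rate for everyone,
# the confounders carry no information and all weights are 1
exposure = np.array([1, 0, 1, 0])
propensity = np.full(4, exposure.mean())
w = stabilized_ipw(exposure, propensity)
```

Fitting a weighted outcome regression with these weights then targets the controlled direct effect, provided the usual no-unmeasured-confounding assumptions hold.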
A simple numerical model to estimate the effect of coal selection on pulverized fuel burnout
Energy Technology Data Exchange (ETDEWEB)
Sun, J.K.; Hurt, R.H.; Niksa, S.; Muzio, L.; Mehta, A.; Stallings, J. [Brown University, Providence, RI (USA). Division Engineering
2003-06-01
The amount of unburned carbon in ash is an important performance characteristic in commercial boilers fired with pulverized coal. Unburned carbon levels are known to be sensitive to fuel selection, and there is great interest in methods of estimating the burnout propensity of coals based on proximate and ultimate analysis - the only fuel properties readily available to utility practitioners. A simple numerical model is described that is specifically designed to estimate the effects of coal selection on burnout in a way that is useful for commercial coal screening. The model is based on a highly idealized description of the combustion chamber but employs detailed descriptions of the fundamental fuel transformations. The model is validated against data from laboratory and pilot-scale combustors burning a range of international coals, and then against data obtained from full-scale units during periods of coal switching. The validated model form is then used in a series of sensitivity studies to explore the role of various individual fuel properties that influence burnout.
Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H
2016-08-01
The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing, among others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index or use the individual characteristics, which can result in a variety of estimated pollution-health effects. Other aspects of model choice may also affect the pollution-health estimate, such as the estimation of pollution concentrations and the choice of spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012.
Using simulation modeling to estimate effects of climate change on Great Plains woodlands
Energy Technology Data Exchange (ETDEWEB)
Guertin, D.S.; Easterling, W.E.; Brandle, J.R. [Univ. of Nebraska, Lincoln, NE (United States)
1995-06-01
The potential effects of climate change on biological diversity are uncertain and are likely to vary among systems. In intensively cultivated regions of the Great Plains, small unmanaged corridors of land between agricultural fields are particularly important in maintaining biological diversity, but possible responses of these corridors to climate change are mainly unknown. Many corridors are wooded, either as natural riparian forests or as planted shelterbelts. We used an individual-based community dynamics model to predict responses of linear Great Plains forests to climate change. The model estimates changes in forest communities as a result of responses of individual trees to environmental conditions. It is based on existing forest dynamics models, but incorporates features that are unique to landscapes with spatially segregated linear forest patches. In particular, we consider (1) the dispersal of seeds among patches as a function of distance between patches and species-specific probabilities of dispersal; and (2) species-specific responses to seasonal flooding and fluctuations in water table along riparian corridors. Using a nested regional climate model as a driver, we modeled changes in forest composition and structure resulting from climate changes in a doubled-CO{sub 2} atmosphere. The changes predicted by the model may have important implications for both floral and faunal diversity.
A structural dynamic factor model for the effects of monetary policy estimated by the EM algorithm
DEFF Research Database (Denmark)
Bork, Lasse
This paper applies the maximum likelihood based EM algorithm to a large-dimensional factor analysis of US monetary policy. Specifically, economy-wide effects of shocks to the US federal funds rate are estimated in a structural dynamic factor model in which 100+ US macroeconomic and financial time...... series are driven by the joint dynamics of the federal funds rate and a few correlated dynamic factors. This paper contains a number of methodological contributions to the existing literature on data-rich monetary policy analysis. Firstly, the identification scheme allows for correlated factor dynamics...... as opposed to the orthogonal factors resulting from the popular principal component approach to structural factor models. Correlated factors are economically more sensible and important for a richer monetary policy transmission mechanism. Secondly, I consider both static factor loadings as well as dynamic...
Holzkämper, Annelie; Honti, Mark; Fuhrer, Jürg
2015-04-01
Crop models are commonly applied to estimate impacts of projected climate change and to anticipate suitable adaptation measures. Thereby, uncertainties from global climate models, regional climate models, and impact models cascade down to impact estimates. It is essential to quantify and understand uncertainties in impact assessments in order to provide informed guidance for decision making in adaptation planning. A question that has hardly been investigated in this context is how sensitive climate impact estimates are to the choice of the impact model approach. In a case study for Switzerland we compare results of three different crop modelling approaches to assess the relevance of impact model choice in relation to other uncertainty sources. The three approaches include an expert-based, a statistical and a process-based model. With each approach, impact model parameter uncertainty and climate model uncertainty (originating from the climate model chain and the downscaling approach) are accounted for. ANOVA-based uncertainty partitioning is performed to quantify the relative importance of different uncertainty sources. Results suggest that uncertainty in estimated yield changes originating from the choice of the crop modelling approach can be greater than uncertainty from climate model chains. The uncertainty originating from crop model parameterization is small in comparison. While estimates of yield changes are highly uncertain, the directions of estimated changes in climatic limitations are largely consistent. This leads us to the conclusion that by focusing on estimated changes in climate limitations, more meaningful information can be provided to support decision making in adaptation planning - especially in cases where yield changes are highly uncertain.
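ANOVA-based uncertainty partitioning of the kind described can be sketched by averaging an array of impact estimates over all but one factor at a time and comparing the variances of the resulting level means. The numbers below are synthetic placeholders for yield-change estimates, not the study's data:

```python
import numpy as np

# synthetic yield-change estimates [%], indexed as
# (crop model approach, climate model chain, parameter set)
y = np.array([[[ -5.0,  -4.0], [ -8.0,  -7.5], [ -2.0,  -1.5]],
              [[-12.0, -11.0], [-15.0, -14.5], [-10.0,  -9.0]]])

# main-effect variance of each factor: variance of its level means
var_model   = y.mean(axis=(1, 2)).var()   # impact-model choice
var_climate = y.mean(axis=(0, 2)).var()   # climate model chain
var_param   = y.mean(axis=(0, 1)).var()   # parameterization

shares = np.array([var_model, var_climate, var_param])
shares = shares / shares.sum()            # relative importance of each source
```

In this toy array the model-choice variance dominates, mirroring the qualitative finding of the abstract; a full ANOVA would additionally account for interaction terms.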
Bazrgari, Babak; Nussbaum, Maury A; Madigan, Michael L
2012-01-01
The use of system identification to quantify trunk mechanical properties is growing in biomechanics research. The effects of several experimental and modelling factors involved in the system identification of trunk mechanical properties were investigated. Trunk kinematics and kinetics were measured in six individuals when exposed to sudden trunk perturbations. Effects of motion sensor positioning and properties of elements between the perturbing device and the trunk were investigated by adopting different models for system identification. Results showed that by measuring trunk kinematics at a location other than the trunk surface, the deformation of soft tissues is erroneously included into trunk kinematics and results in the trunk being predicted as a more damped structure. Results also showed that including elements between the trunk and the perturbing device in the system identification model did not substantially alter model predictions. Other important parameters that were found to substantially affect predictions were the cut-off frequency used when low-pass filtering raw data and the data window length used to estimate trunk properties.
Effect of optimal estimation of flux difference information on the lattice traffic flow model
Yang, Shu-hong; Li, Chun-gui; Tang, Xin-lai; Tian, Chuan
2016-12-01
In this paper, a new lattice model is proposed by considering the optimal estimation of flux difference information. The effect of this new consideration upon the stability of traffic flow is examined through linear stability analysis. Furthermore, a modified Korteweg-de Vries (mKdV) equation near the critical point is constructed and solved by means of a nonlinear analysis method, and thus the propagation behavior of a traffic jam can be described by the kink-antikink soliton solution of the mKdV equation. Numerical simulation is carried out under a periodic boundary condition, with results in good agreement with the theoretical analysis; it is thus verified that the new consideration can enhance the stability of traffic systems and suppress the emergence of traffic jams effectively.
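In lattice models of this family, linear stability of the homogeneous flow typically reduces to comparing the driver sensitivity a with a critical value set by the slope of the optimal-velocity (flux) function at the uniform density. A generic numerical check, assuming a tanh-type optimal-velocity function and a Bando-type criterion a_c = 2|V'(rho0)|; the exact criterion and constants vary between model variants:

```python
import numpy as np

def optimal_velocity(rho, rho_c=0.25, v_max=2.0):
    # tanh-type optimal-velocity function often used in lattice models
    return 0.5 * v_max * (np.tanh(1.0/rho - 1.0/rho_c) + np.tanh(1.0/rho_c))

def critical_sensitivity(rho0, h=1e-6):
    # central-difference slope of V at the homogeneous density rho0
    dv = (optimal_velocity(rho0 + h) - optimal_velocity(rho0 - h)) / (2.0*h)
    return 2.0 * abs(dv)

def is_linearly_stable(a, rho0):
    # uniform flow is stable when the sensitivity exceeds the critical value
    return a > critical_sensitivity(rho0)
```

At the inflection density rho0 = rho_c the slope is steepest, so this is where the stability condition is hardest to satisfy and jams first emerge.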
Institute of Scientific and Technical Information of China (English)
ZHOU Jie; TANG Aiping; FENG Hailin
2016-01-01
The statistical inference for generalized mixed-effects state space models (MESSM) is investigated when the random effects are unknown. Two filtering algorithms are designed, both of which are based on the mixture Kalman filter. These algorithms are particularly useful when the longitudinal measurements are sparse. The authors also propose a globally convergent algorithm for parameter estimation of MESSM, which can be used to locate the initial value of parameters for local but more efficient algorithms. Simulation examples are carried out which validate the efficacy of the proposed approaches. A data set from a clinical trial is investigated, and a smaller mean square error is achieved compared to existing results in the literature.
Levin, Bruce; Leu, Cheng-Shiun
2013-01-01
We demonstrate the algebraic equivalence of two unbiased variance estimators for the sample grand mean in a random sample of subjects from an infinite population where subjects provide repeated observations following a homoscedastic random effects model.
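For balanced data (n subjects, k observations each) the equivalence can be verified numerically: the sample variance of subject means divided by n coincides with the ANOVA variance-component combination (σ̂²_b + σ̂²_w/k)/n, since both reduce to MSB/(nk). A sketch under that balanced, homoscedastic random effects assumption (variable names and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 5                                  # n subjects, k observations each
# homoscedastic random effects data: subject effect + within-subject noise
y = rng.normal(size=(n, k)) + rng.normal(size=(n, 1))

subj_means = y.mean(axis=1)

# Estimator 1: sample variance of subject means, divided by n
v1 = subj_means.var(ddof=1) / n

# Estimator 2: via ANOVA variance components
msb = k * subj_means.var(ddof=1)                          # between mean square
msw = ((y - subj_means[:, None])**2).sum() / (n * (k - 1))  # within mean square
sigma_b2 = (msb - msw) / k                                # between-subject component
v2 = (sigma_b2 + msw / k) / n

# both equal MSB/(n*k), so they agree up to floating-point rounding
```

The within mean square cancels term by term in estimator 2, which is the algebraic identity the paper establishes in general.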
An Improved Heat Budget Estimation Including Bottom Effects for General Ocean Circulation Models
Carder, Kendall; Warrior, Hari; Otis, Daniel; Chen, R. F.
2001-01-01
This paper studies the effects of the underwater light field on heat-budget calculations of general ocean circulation models for shallow waters. The presence of a bottom significantly alters the estimated heat budget in shallow waters, which affects the corresponding thermal stratification and hence modifies the circulation. Based on the data collected during the COBOP field experiment near the Bahamas, we have used a one-dimensional turbulence closure model to show the influence of bottom reflection and absorption on the sea surface temperature field. The water depth has an almost one-to-one correlation with the temperature rise. Varying the bottom albedo, by replacing the sea grass bed with a coral sand bottom, also has an appreciable effect on the heat budget of the shallow regions. We believe that the differences in the heat budget for the shallow areas will have an influence on the local circulation processes, and especially on the evaporative and long-wave heat losses for these areas. The ultimate effects on the humidity and cloudiness of the region are expected to be significant as well.
The Biasing Effects of Unmodeled ARMA Time Series Processes on Latent Growth Curve Model Estimates
Sivo, Stephen; Fan, Xitao; Witta, Lea
2005-01-01
The purpose of this study was to evaluate the robustness of estimated growth curve models when there is stationary autocorrelation among manifest variable errors. The results suggest that when, in practice, growth curve models are fitted to longitudinal data, alternative rival hypotheses to consider would include growth models that also specify…
A model to estimate the cost effectiveness of the indoorenvironment improvements in office work
Energy Technology Data Exchange (ETDEWEB)
Seppanen, Olli; Fisk, William J.
2004-06-01
A deteriorated indoor climate is commonly related to increases in sick building syndrome symptoms, respiratory illnesses, sick leave, reduced comfort and losses in productivity. The cost of a deteriorated indoor climate for society is high. Some calculations show that the cost is higher than the heating energy costs of the same buildings. Building-level calculations have also shown that many measures taken to improve indoor air quality and climate are cost-effective when the potential monetary savings resulting from an improved indoor climate are included as benefits gained. As an initial step towards systematizing these building-level calculations we have developed a conceptual model to estimate the cost-effectiveness of various measures. The model shows the links between improvements in the indoor environment and the following potential financial benefits: reduced medical care cost, reduced sick leave, better performance of work, lower turnover of employees, and lower cost of building maintenance due to fewer complaints about indoor air quality and climate. The pathways to these potential benefits from changes in building technology and practices go via several human responses to the indoor environment such as infectious diseases, allergies and asthma, sick building syndrome symptoms, perceived air quality, and thermal environment. The model also includes the annual cost of investments, operation costs, and cost savings of improved indoor climate. The conceptual model illustrates how various factors are linked to each other. SBS symptoms are probably the most commonly assessed health responses in IEQ studies and have been linked to several characteristics of buildings and IEQ. While the available evidence indicates that SBS symptoms can affect these outcomes and suggests that such a linkage exists, at present we cannot quantify the relationships sufficiently for cost-benefit modeling. New research and analyses of existing data to quantify the financial
The effect of coupling hydrologic and hydrodynamic models on probable maximum flood estimation
Felder, Guido; Zischg, Andreas; Weingartner, Rolf
2017-07-01
Deterministic rainfall-runoff modelling usually assumes a stationary hydrological system, as model parameters are calibrated with, and are therefore dependent on, observed data. However, runoff processes are probably not stationary in the case of a probable maximum flood (PMF), where discharge greatly exceeds observed flood peaks. Developing hydrodynamic models and using them to build coupled hydrologic-hydrodynamic models can potentially improve the plausibility of PMF estimations. This study aims to assess the potential benefits and constraints of coupled modelling compared to standard deterministic hydrologic modelling when it comes to PMF estimation. The two modelling approaches are applied using a set of 100 spatio-temporal probable maximum precipitation (PMP) distribution scenarios. The resulting hydrographs, the resulting peak discharges, as well as the reliability and the plausibility of the estimates are evaluated. The discussion of the results shows that coupling hydrologic and hydrodynamic models substantially improves the physical plausibility of PMF modelling, although both modelling approaches lead to PMF estimations for the catchment outlet that fall within a similar range. Using a coupled model is particularly suggested in cases where considerable flood-prone areas are situated within a catchment.
Model Effects on GLAS-Based Regional Estimates of Forest Biomass and Carbon
Nelson, Ross F.
2010-01-01
Ice, Cloud, and land Elevation Satellite (ICESat) / Geoscience Laser Altimeter System (GLAS) waveform data are used to estimate biomass and carbon on a 1.27 x 10(exp 6) square km study area in the Province of Quebec, Canada, below the tree line. The same input datasets and sampling design are used in conjunction with four different predictive models to estimate total aboveground dry forest biomass and forest carbon. The four models include non-stratified and stratified versions of a multiple linear model where either biomass or (biomass)(exp 0.5) serves as the dependent variable. The use of different models in Quebec introduces differences in Provincial dry biomass estimates of up to 0.35 Gt, with a range of 4.94 +/- 0.28 Gt to 5.29 +/- 0.36 Gt. The differences among model estimates are statistically non-significant, however, and the results demonstrate the degree to which carbon estimates vary strictly as a function of the model used to estimate regional biomass. Results also indicate that GLAS measurements become problematic with respect to height and biomass retrievals in the boreal forest when biomass values fall below 20 t/ha and when GLAS 75th percentile heights fall below 7 m.
Effect of Temporal Residual Correlation on Estimation of Model Averaging Weights
Ye, M.; Lu, D.; Curtis, G. P.; Meyer, P. D.; Yabusaki, S.
2010-12-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are typically calculated using model selection criteria such as AIC, AICc, BIC, and KIC. However, this method sometimes leads to an unrealistic situation in which one model receives an overwhelmingly high averaging weight (even 100%), which cannot be justified by available data and knowledge. It is found in this study that the unrealistic situation is due partly, if not solely, to ignorance of residual correlation when estimating the negative log-likelihood function common to all the model selection criteria. In the context of maximum-likelihood or least-squares inverse modeling, the residual correlation is accounted for in the full covariance matrix; when the full covariance matrix is replaced by its diagonal counterpart, it assumes data independence and ignores the correlation. Treating the correlated residuals as independent thus distorts the distance between observations and simulations of alternative models, and may lead to incorrect estimation of model selection criteria and model averaging weights. This is illustrated for a set of surface complexation models developed to simulate uranium transport based on a series of column experiments. The residuals are correlated in time, and the time correlation is addressed using a second-order autoregressive model. The modeling results reveal the importance of considering residual correlation in the estimation of model averaging weights.
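The point about residual correlation can be made concrete: the Gaussian log-likelihood entering AIC-type criteria should use the full residual covariance matrix, not just its diagonal. A minimal sketch with an AR(1) structure for brevity (the study used a second-order autoregressive model, which would enter the same way); all names and numbers are illustrative:

```python
import numpy as np

def gaussian_loglik(residuals, cov):
    """Exact Gaussian log-likelihood of a residual vector under covariance cov."""
    n = len(residuals)
    sign, logdet = np.linalg.slogdet(cov)
    quad = residuals @ np.linalg.solve(cov, residuals)
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + quad)

def akaike_weights(logliks, n_params):
    # AIC-based model averaging weights
    aic = -2.0 * np.asarray(logliks) + 2.0 * np.asarray(n_params)
    d = aic - aic.min()
    w = np.exp(-0.5 * d)
    return w / w.sum()

def ar1_cov(n, sigma2_innov, phi):
    # AR(1) covariance built from the innovation variance sigma2_innov
    idx = np.arange(n)
    return sigma2_innov * phi**np.abs(idx[:, None] - idx[None, :]) / (1.0 - phi**2)

# ignoring the off-diagonal correlation changes the likelihood, and hence
# the weights each model receives
resid = np.array([0.5, 0.6, 0.55, 0.4])
cov_full = ar1_cov(len(resid), 1.0, 0.8)
ll_full = gaussian_loglik(resid, cov_full)
ll_diag = gaussian_loglik(resid, np.diag(np.diag(cov_full)))
w = akaike_weights([-10.0, -12.0], [3, 3])
```

With positively correlated residuals the diagonal approximation overstates the effective amount of independent information, which is one route to the overconfident 100% weights the abstract describes.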
A review of existing models and methods to estimate employment effects of pollution control policies
Energy Technology Data Exchange (ETDEWEB)
Darwin, R.F.; Nesse, R.J.
1988-02-01
The purpose of this paper is to provide information about existing models and methods used to estimate coal mining employment impacts of pollution control policies. The EPA is currently assessing the consequences of various alternative policies to reduce air pollution. One important potential consequence of these policies is that coal mining employment may decline or shift from low-sulfur to high-sulfur coal producing regions. The EPA requires models that can estimate the magnitude and cost of these employment changes at the local level. This paper contains descriptions and evaluations of three models and methods currently used to estimate the size and cost of coal mining employment changes. The first model reviewed is the Coal and Electric Utilities Model (CEUM), a well established, general purpose model that has been used by the EPA and other groups to simulate air pollution control policies. The second model reviewed is the Advanced Utility Simulation Model (AUSM), which was developed for the EPA specifically to analyze the impacts of air pollution control policies. Finally, the methodology used by Arthur D. Little, Inc. to estimate the costs of alternative air pollution control policies for the Consolidated Coal Company is discussed. These descriptions and evaluations are based on information obtained from published reports and from draft documentation of the models provided by the EPA. 12 refs., 1 fig.
Noise Model Analysis and Estimation of Effect due to Wind Driven Ambient Noise in Shallow Water
Directory of Open Access Journals (Sweden)
S. Sakthivel Murugan
2011-01-01
Signal transmission in the ocean using water as a channel is a challenging process due to attenuation, spreading, reverberation, absorption, and so forth, apart from the contribution of ambient noises to the acoustic signal. Ambient noises in the sea are of two types: manmade (shipping, aircraft over the sea, boat motors, etc.) and natural (rain, wind, seismic, etc.), apart from marine mammals and phytoplankton. Since wind exists in all places and at all times, its effect plays a major role. Hence, in this paper, we concentrate on estimating the effects of wind. Seven sets of data with wind speeds ranging from 2.11 m/s to 6.57 m/s were used. The analysis is performed for frequencies ranging from 100 Hz to 8 kHz. It is found that a linear relationship between the noise spectrum and wind speed exists for the entire frequency range. Further, we developed a noise model for analyzing the noise level. The results from the empirical data are found to fit the results obtained with the aid of the noise model.
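The reported linear dependence of spectrum level on wind speed suggests a per-frequency-band least-squares fit. The numbers below are synthetic placeholders spanning the paper's 2.11-6.57 m/s range, not the measured data; the slope and intercept values are purely illustrative:

```python
import numpy as np

# illustrative (not measured) spectrum levels [dB re 1 uPa^2/Hz] at one
# frequency band, over the reported wind-speed range
wind = np.array([2.11, 3.0, 4.2, 5.0, 5.8, 6.57])      # m/s
level_db = 48.0 + 2.5 * wind                           # synthetic linear trend

# least-squares line: level = intercept + slope * wind
slope, intercept = np.polyfit(wind, level_db, 1)

# extrapolate the fitted model to a wind speed outside the observed range
predicted = intercept + slope * 7.0
```

Repeating the fit per frequency band yields a slope spectrum, i.e. how strongly each band responds to wind, which is the kind of empirical noise model the abstract describes.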
Safford, Bob; Dickens, Andrea; Halleron, Nadine; Briggs, David; Carthew, Philip; Baker, Valerie
2003-10-01
The advantages that regular consumption of a diet containing soy may have on human health have been enshrined in a major health claim, approved by the Food and Drug Administration in the USA, regarding potential protection from heart disease by soy. This could have a major influence on the dietary consumption patterns of soy for consumers and lead to the development of soy-enriched foods to enable consumers to achieve the benefits thought to be associated with increased soy consumption in a Western diet. If an increase in soy consumption is beneficial for particular disease conditions, there is always the possibility that there will be effects other than those that are desirable. For soy-containing foods there has been concern that the phytoestrogen content of soy, which is composed of several isoflavones, could be a separate health issue, due to the oestrogen-like activity of isoflavones. To address this, a method has been developed to estimate, relative to 17-beta oestradiol, the activity of the common isoflavones present in soy phytoestrogens, based on their binding to and transcriptional activation of the major oestrogen receptor sub-types alpha and beta. Using this approach, the additional oestrogen-like activity that would be expected from inclusion of soy-supplemented foodstuffs in a Western diet can be determined for different sub-populations, who may have different susceptibilities to the potential for unwanted biological effects occurring with consumption of soy-enriched foods. Because of the theoretical nature of this model, and the controversy over whether some of the oestrogen-like effects of phytoestrogens are adverse, the biological effects of soy isoflavones and their potential for adverse effects in man are also reviewed. The question that is critical to the long-term safe use of foods enriched in soy is which observed biological effects in animal studies are likely to also occur in man and whether these would have
Fang, Chih-Chiang; Yeh, Chun-Wu
2016-09-01
The quantitative evaluation of a software reliability growth model is frequently accompanied by a confidence interval for fault detection. It provides helpful information to software developers and testers when undertaking software development and software quality control. However, the explanation of the variance estimation of software fault detection is not transparent in previous studies, and this affects the derivation of the confidence interval for the mean value function, which the current study addresses. Software engineers in such a case cannot evaluate the potential hazard based on the stochasticity of the mean value function, and this might reduce the practicability of the estimation. Hence, stochastic differential equations are utilised for confidence interval estimation of the software fault-detection process. The proposed model is estimated and validated using real data sets to show its flexibility.
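The paper's interval construction uses stochastic differential equations; as a simpler point of reference, the mean value function of a classical NHPP growth model with a normal-approximation Poisson band can be sketched as follows. The Goel-Okumoto form and the parameter values are illustrative, not the paper's model:

```python
import numpy as np

def goel_okumoto(t, a=100.0, b=0.1):
    # mean value function m(t) of a common NHPP software-reliability model:
    # a = expected total faults, b = fault-detection rate
    return a * (1.0 - np.exp(-b * t))

def poisson_ci(m, z=1.96):
    # approximate 95% band, treating the cumulative fault count as Poisson(m)
    half = z * np.sqrt(m)
    return m - half, m + half

m = goel_okumoto(10.0)       # expected faults detected by t = 10
lo, hi = poisson_ci(m)
```

The band widens with sqrt(m), so interval width grows as testing proceeds; the SDE approach in the paper refines exactly this variance term.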
The effect of compression on tuning estimates in a simple nonlinear auditory filter model
DEFF Research Database (Denmark)
Marschall, Marton; MacDonald, Ewen; Dau, Torsten
2013-01-01
, there is evidence that human frequency-selectivity estimates depend on whether an iso-input or an iso-response measurement paradigm is used (Eustaquio-Martin et al., 2011). This study presents simulated tuning estimates using a simple compressive auditory filter model, the bandpass nonlinearity (BPNL), which......, then compression alone may explain a large part of the behaviorally observed differences in tuning between simultaneous and forward-masking conditions....
A Novel Statistical Model to Estimate Host Genetic Effects Affecting Disease Transmission
Anacleto, Osvaldo; Garcia-Cortés, Luis Alberto; Lipschutz-Powell, Debby; Woolliams, John A.; Doeschl-Wilson, Andrea B.
2015-01-01
There is increasing recognition that genetic diversity can affect the spread of diseases, potentially affecting plant and livestock disease control as well as the emergence of human disease outbreaks. Nevertheless, even though computational tools can guide the control of infectious diseases, few epidemiological models can simultaneously accommodate the inherent individual heterogeneity in multiple infectious disease traits influencing disease transmission, such as the frequently modeled propensity to become infected and infectivity, which describes the host ability to transmit the infection to susceptible individuals. Furthermore, current quantitative genetic models fail to fully capture the heritable variation in host infectivity, mainly because they cannot accommodate the nonlinear infection dynamics underlying epidemiological data. We present in this article a novel statistical model and an inference method to estimate genetic parameters associated with both host susceptibility and infectivity. Our methodology combines quantitative genetic models of social interactions with stochastic processes to model the random, nonlinear, and dynamic nature of infections and uses adaptive Bayesian computational techniques to estimate the model parameters. Results using simulated epidemic data show that our model can accurately estimate heritabilities and genetic risks not only of susceptibility but also of infectivity, therefore exploring a trait whose heritable variation is currently ignored in disease genetics and can greatly influence the spread of infectious diseases. Our proposed methodology offers potential impacts in areas such as livestock disease control through selective breeding and also in predicting and controlling the emergence of disease outbreaks in human populations. PMID:26405030
Estimation of direct effects for survival data by using the Aalen additive hazards model
DEFF Research Database (Denmark)
Martinussen, T.; Vansteelandt, S.; Gerster, M.
2011-01-01
We extend the definition of the controlled direct effect of a point exposure on a survival outcome, other than through some given, time-fixed intermediate variable, to the additive hazard scale. We propose two-stage estimators for this effect when the exposure is dichotomous and randomly assigned...
Bret C. Harvey; Steven F. Railsback
2007-01-01
While the concept of cumulative effects is prominent in legislation governing environmental management, the ability to estimate cumulative effects remains limited. One reason for this limitation is that important natural resources such as fish populations may exhibit complex responses to changes in environmental conditions, particularly to alteration of multiple...
On estimation and tests of time-varying effects in the proportional hazards model
DEFF Research Database (Denmark)
Scheike, Thomas Harder; Martinussen, Torben
Grambsch and Therneau (1994) suggest using Schoenfeld residuals to investigate whether some of the regression coefficients in Cox's (1972) proportional hazards model are time-dependent. Their method is a one-step procedure based on Cox's initial estimate. We suggest an algorithm which in the first...
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-10
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS and VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if they were of additive random errors. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.
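As a minimal sketch (not the authors' LS adjustment), the following snippet illustrates the defining feature of a multiplicative error model: the error standard deviation scales with the true value, so the empirical scatter of the measurements is proportional to the measured quantity. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: measurements whose error is proportional
# to the true value (multiplicative error), i.e. y = mu * (1 + e).
mu_true = 100.0          # true quantity being measured
sigma_rel = 0.02         # 2% relative error
n = 10_000
y = mu_true * (1.0 + sigma_rel * rng.standard_normal(n))

# Naive (additive-error) estimate: plain sample mean.
mu_naive = y.mean()

# Under a multiplicative model the error std scales with the value,
# so the empirical std should be close to sigma_rel * mu_true.
sigma_emp = y.std(ddof=1)

print(mu_naive)          # close to 100
print(sigma_emp)         # close to 2.0
```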
Estimating the effect of a variable in a high-dimensional regression model
DEFF Research Database (Denmark)
Jensen, Peter Sandholt; Wurtz, Allan
assume that the effect is identified in a high-dimensional linear model specified by unconditional moment restrictions. We consider properties of the following methods, which rely on low-dimensional models to infer the effect: Extreme bounds analysis, the minimum t-statistic over models, Sala...
Effect of Estimated Daily Global Solar Radiation Data on the Results of Crop Growth Models
Directory of Open Access Journals (Sweden)
Herbert Formayer
2007-10-01
Full Text Available The results of previous studies have suggested that estimated daily global radiation (RG) values contain an error that could compromise the precision of subsequent crop model applications. The following study presents a detailed site and spatial analysis of the RG error propagation in CERES and WOFOST crop growth models in Central European climate conditions. The research was conducted (i) at the eight individual sites in Austria and the Czech Republic where measured daily RG values were available as a reference, with seven methods for RG estimation being tested, and (ii) for the agricultural areas of the Czech Republic using daily data from 52 weather stations, with five RG estimation methods. In the latter case the RG values estimated from the hours of sunshine using the Ångström-Prescott formula were used as the standard method because of the lack of measured RG data. At the site level we found that even the use of methods based on hours of sunshine, which showed the lowest bias in RG estimates, led to a significant distortion of the key crop model outputs. When the Ångström-Prescott method was used to estimate RG, for example, deviations greater than ±10 per cent in winter wheat and spring barley yields were noted in 5 to 6 per cent of cases. The precision of the yield estimates and other crop model outputs was lower when RG estimates based on the diurnal temperature range and cloud cover were used (mean bias error 2.0 to 4.1 per cent). The methods for estimating RG from the diurnal temperature range produced a wheat yield bias of more than 25 per cent in 12 to 16 per cent of the seasons. Such uncertainty in the crop model outputs makes the reliability of any seasonal yield forecasts or climate change impact assessments questionable if they are based on this type of data. The spatial assessment of the RG data uncertainty propagation over the winter wheat yields also revealed significant differences within the study area. We
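The Ångström-Prescott relation referred to above can be sketched in a few lines; the coefficients a and b below are the common FAO-56 defaults, not the site-calibrated values used in the study, and the input numbers are purely illustrative.

```python
def angstrom_prescott(Ra, n_sun, N_day, a=0.25, b=0.50):
    """Estimate daily global radiation RG from sunshine duration:
    RG = Ra * (a + b * n/N)  (the Angstrom-Prescott formula).

    Ra:    extraterrestrial radiation (MJ m-2 day-1)
    n_sun: actual sunshine hours; N_day: maximum possible sunshine hours
    a, b:  empirical coefficients (FAO defaults shown; in practice they
           are calibrated per site, as the study's bias results suggest).
    """
    return Ra * (a + b * n_sun / N_day)

# Illustrative summer day: Ra = 41 MJ/m2/day, 12 of 15.5 possible sun hours
rg = angstrom_prescott(Ra=41.0, n_sun=12.0, N_day=15.5)
print(round(rg, 2))  # -> 26.12
```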
Directory of Open Access Journals (Sweden)
Howey Richard
2012-06-01
Full Text Available Abstract Background Here we present two new computer tools, PREMIM and EMIM, for the estimation of parental and child genetic effects, based on genotype data from a variety of different child-parent configurations. PREMIM allows the extraction of child-parent genotype data from standard-format pedigree data files, while EMIM uses the extracted genotype data to perform subsequent statistical analysis. The use of genotype data from the parents as well as from the child in question allows the estimation of complex genetic effects such as maternal genotype effects, maternal-foetal interactions and parent-of-origin (imprinting) effects. These effects are estimated by EMIM, incorporating chosen assumptions such as Hardy-Weinberg equilibrium or exchangeability of parental matings as required. Results In application to simulated data, we show that the inference provided by EMIM is essentially equivalent to that provided by alternative (competing) software packages such as MENDEL and LEM. However, PREMIM and EMIM (used in combination) considerably outperform MENDEL and LEM in terms of speed and ease of execution. Conclusions Together, EMIM and PREMIM provide easy-to-use command-line tools for the analysis of pedigree data, giving unbiased estimates of parental and child genotype relative risks.
Meier, Petra S; Holmes, John; Angus, Colin; Ally, Abdallah K; Meng, Yang; Brennan, Alan
2016-02-01
While evidence that alcohol pricing policies reduce alcohol-related health harm is robust, and alcohol taxation increases are a WHO "best buy" intervention, there is a lack of research comparing the scale and distribution across society of health impacts arising from alternative tax and price policy options. The aim of this study is to test whether four common alcohol taxation and pricing strategies differ in their impact on health inequalities. An econometric epidemiological model was built with England 2014/2015 as the setting. Four pricing strategies implemented on top of the current tax were equalised to give the same 4.3% population-wide reduction in total alcohol-related mortality: current tax increase, a 13.4% all-product duty increase under the current UK system; a value-based tax, a 4.0% ad valorem tax based on product price; a strength-based tax, a volumetric tax of £0.22 per UK alcohol unit (= 8 g of ethanol); and minimum unit pricing, a minimum price threshold of £0.50 per unit, below which alcohol cannot be sold. Model inputs were calculated by combining data from representative household surveys on alcohol purchasing and consumption, administrative and healthcare data on 43 alcohol-attributable diseases, and published price elasticities and relative risk functions. Outcomes were annual per capita consumption, consumer spending, and alcohol-related deaths. Uncertainty was assessed via partial probabilistic sensitivity analysis (PSA) and scenario analysis. The pricing strategies differ as to how effects are distributed across the population, and, from a public health perspective, heavy drinkers in routine/manual occupations are a key group as they are at greatest risk of health harm from their drinking. Strength-based taxation and minimum unit pricing would have greater effects on mortality among drinkers in routine/manual occupations (particularly for heavy drinkers, where the estimated policy effects on mortality rates are as follows: current tax
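The core mechanism of such econometric models, translating a price change into a consumption change through a price elasticity, can be sketched as follows; the elasticity value is a hypothetical illustration, not one of the study's published inputs.

```python
def consumption_change(price_change_pct, elasticity):
    """Approximate % change in consumption from a % price change using a
    constant own-price elasticity. Cross-price and income effects, which
    the full econometric model accounts for, are ignored in this sketch."""
    return elasticity * price_change_pct

# Illustrative: a 10% price rise with a hypothetical own-price elasticity of -0.5
delta = consumption_change(10.0, -0.5)
print(delta)  # -5.0, i.e. a 5% fall in consumption
```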
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cekresolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
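The model-averaging weights discussed here are typically Akaike-type weights; a minimal sketch (assuming the standard exp(-ΔAIC/2) form, with illustrative AIC values) shows how quickly the best model's weight approaches 100%, the behavior the abstract describes as problematic:

```python
import numpy as np

def akaike_weights(aic):
    """Model-averaging weights w_i = exp(-dAIC_i/2) / sum_j exp(-dAIC_j/2),
    where dAIC_i is the AIC difference from the best (lowest-AIC) model."""
    aic = np.asarray(aic, dtype=float)
    d = aic - aic.min()
    w = np.exp(-0.5 * d)
    return w / w.sum()

# Even modest AIC gaps make the best model dominate (illustrative values):
w = akaike_weights([100.0, 112.0, 120.0])
print(np.round(w, 4))  # first weight is ~0.9975
```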
Directory of Open Access Journals (Sweden)
Rosa Ana Salas
2013-11-01
Full Text Available We propose a modeling procedure specifically designed for a ferrite inductor excited by a waveform in the time domain. We estimate the loss resistance in the core (a parameter of the electrical model of the inductor) by means of a Finite Element Method in 2D, which leads to significant computational advantages over the 3D model. The methodology is validated for an RM (rectangular modulus) ferrite core working in the linear and the saturation regions. Excellent agreement is found between the experimental data and the computational results.
de Vries, R.; Van Bergen, J.E.A.M.; de Jong-van den Berg, Lolkje; Postma, Maarten
2006-01-01
To estimate the cost-effectiveness of a systematic one-off Chlamydia trachomatis (CT) screening program including partner treatment for Dutch young adults. Data on infection prevalence, participation rates, and sexual behavior were obtained from a large pilot study conducted in The Netherlands. Oppo
Wang, Meng; Brunekreef, Bert; Gehring, Ulrike; Szpiro, Adam; Hoek, Gerard; Beelen, Rob
2016-01-01
BACKGROUND: Leave-one-out cross-validation that fails to account for variable selection does not properly reflect prediction accuracy when the number of training sites is small. The impact on health effect estimates has rarely been studied. METHODS: We randomly generated ten training and test sets f
Correcting for Test Score Measurement Error in ANCOVA Models for Estimating Treatment Effects
Lockwood, J. R.; McCaffrey, Daniel F.
2014-01-01
A common strategy for estimating treatment effects in observational studies using individual student-level data is analysis of covariance (ANCOVA) or hierarchical variants of it, in which outcomes (often standardized test scores) are regressed on pretreatment test scores, other student characteristics, and treatment group indicators. Measurement…
Energy Technology Data Exchange (ETDEWEB)
Woods, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Winkler, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Christensen, D. [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2013-01-01
This study examines the effective moisture penetration depth (EMPD) model and its suitability for building simulations. The EMPD model is a compromise between the simple but inaccurate effective capacitance approach and the complex yet accurate finite-difference approach. Two formulations of the EMPD model were examined, including the model used in the EnergyPlus building simulation software. An error we uncovered in the EMPD model was fixed with the release of EnergyPlus version 7.2, and the EMPD model in earlier versions of EnergyPlus should not be used.
The effect of PLS regression in PLS path model estimation when multicollinearity is present
DEFF Research Database (Denmark)
Nielsen, Rikke; Kristensen, Kai; Eskildsen, Jacob
PLS path modelling has previously been found to be robust to multicollinearity both between latent variables and between manifest variables of a common latent variable (see e.g. Cassel et al. (1999), Kristensen, Eskildsen (2005), Westlund et al. (2008)). However, most of the studies investigate models with relatively few variables and very simple dependence structures compared to the models that are often estimated in practical settings. A recent study by Nielsen et al. (2009) found that when model structure is more complex, PLS path modelling is not as robust to multicollinearity between latent variables as previously assumed. A difference in the standard error of path coefficients of as much as 83% was found between moderate and severe levels of multicollinearity. Large differences were found not only for large path coefficients, but also for small path coefficients and in some cases...
Directory of Open Access Journals (Sweden)
Alexandre Bureau
2015-07-01
Full Text Available Effects of genetic variants on the risk of complex diseases estimated from association studies are typically small. Nonetheless, variants may have important effects in the presence of specific levels of environmental exposure, and when a trait related to the disease (an endophenotype) is either normal or impaired. We propose polytomous and transition models to represent the relationship between disease, endophenotype, genotype and environmental exposure in family studies. Model coefficients were estimated using generalized estimating equations and were used to derive gene-environment interaction effects and genotype effects at specific levels of exposure. In a simulation study, estimates of the effect of a genetic variant were substantially higher when both an endophenotype and an environmental exposure modifying the variant effect were taken into account, particularly under transition models, compared to the alternative of ignoring the endophenotype. Illustration of the proposed modeling with the metabolic syndrome, abdominal obesity, physical activity and polymorphisms in the NOX3 gene in the Quebec Family Study revealed that the positive association of the A allele of rs1375713 with the metabolic syndrome at high levels of physical activity was only detectable in subjects without abdominal obesity, illustrating the importance of taking into account the abdominal obesity endophenotype in this analysis.
Plan, Elodie L; Maloney, Alan; Mentré, France; Karlsson, Mats O; Bertrand, Julie
2012-09-01
Estimation methods for nonlinear mixed-effects modelling have considerably improved over the last decades. Nowadays, several algorithms implemented in different software are used. The present study aimed at comparing their performance for dose-response models. Eight scenarios were considered using a sigmoid E(max) model, with varying sigmoidicity and residual error models. One hundred simulated datasets for each scenario were generated. One hundred individuals with observations at four doses constituted the rich design and at two doses, the sparse design. Nine parametric approaches for maximum likelihood estimation were studied: first-order conditional estimation (FOCE) in NONMEM and R, LAPLACE in NONMEM and SAS, adaptive Gaussian quadrature (AGQ) in SAS, and stochastic approximation expectation maximization (SAEM) in NONMEM and MONOLIX (both SAEM approaches with default and modified settings). All approaches started first from initial estimates set to the true values and second, using altered values. Results were examined through relative root mean squared error (RRMSE) of the estimates. With true initial conditions, full completion rate was obtained with all approaches except FOCE in R. Runtimes were shortest with FOCE and LAPLACE and longest with AGQ. Under the rich design, all approaches performed well except FOCE in R. When starting from altered initial conditions, AGQ, and then FOCE in NONMEM, LAPLACE in SAS, and SAEM in NONMEM and MONOLIX with tuned settings, consistently displayed lower RRMSE than the other approaches. For standard dose-response models analyzed through mixed-effects models, differences were identified in the performance of estimation methods available in current software, giving material to modellers to identify suitable approaches based on an accuracy-versus-runtime trade-off.
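The RRMSE criterion used above to compare estimation methods can be sketched as follows; the replicate estimates are simulated and the true value is illustrative, not taken from the study's scenarios.

```python
import numpy as np

def rrmse(estimates, true_value):
    """Relative root mean squared error of replicate parameter estimates
    around the known true value, as used to compare estimation methods."""
    est = np.asarray(estimates, dtype=float)
    return np.sqrt(np.mean((est - true_value) ** 2)) / abs(true_value)

# Illustrative: 100 replicate estimates of a parameter with true value 2.0
rng = np.random.default_rng(1)
est = 2.0 + 0.1 * rng.standard_normal(100)
print(rrmse(est, 2.0))  # roughly 0.05 (i.e. ~5%)
```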
Stallinga, S.; Rieger, B.
2012-01-01
We introduce a method for determining the position and orientation of fixed dipole emitters based on a combination of polarimetry and spot shape detection. A key element is an effective Point Spread Function model based on Hermite functions. The model offers a good description of the shape variation
Estimating Modifying Effect of Age on Genetic and Environmental Variance Components in Twin Models.
He, Liang; Sillanpää, Mikko J; Silventoinen, Karri; Kaprio, Jaakko; Pitkäniemi, Janne
2016-04-01
Twin studies have been adopted for decades to disentangle the relative genetic and environmental contributions for a wide range of traits. However, heritability estimation based on the classical twin models does not take into account the dynamic behavior of the variance components over age. Varying variance of the genetic component over age can imply the existence of gene-environment (G×E) interactions that general genome-wide association studies (GWAS) fail to capture, which may lead to the inconsistency of heritability estimates between twin design and GWAS. Existing parametric G×E interaction models for twin studies are limited by assuming a linear or quadratic form of the variance curves with respect to a moderator, which can, however, be overly restrictive in reality. Here we propose spline-based approaches to explore the variance curves of the genetic and environmental components. We choose the additive genetic, common, and unique environmental variance components (ACE) model as the starting point. We treat the component variances as variance functions with respect to age, modeled by B-splines or P-splines. We develop an empirical Bayes method to estimate the variance curves together with their confidence bands and provide an R package for public use. Our simulations demonstrate that the proposed methods accurately capture the dynamic behavior of the component variances in terms of mean square errors with a data set of >10,000 twin pairs. Using the proposed methods as an alternative and major extension to the classical twin models, our analyses with a large-scale Finnish twin data set (19,510 MZ twins and 27,312 DZ same-sex twins) discover that the variances of the A, C, and E components for body mass index (BMI) change substantially across the life span in different patterns and the heritability of BMI drops to ∼50% after middle age. The results further indicate that the decline of heritability is due to increasing unique environmental variance, which provides more
Comparison among Models to Estimate the Shielding Effectiveness Applied to Conductive Textiles
Directory of Open Access Journals (Sweden)
Alberto Lopez
2013-01-01
Full Text Available The purpose of this paper is to present a comparison between two models, and their measurement, for calculating the shielding effectiveness of electromagnetic barriers, applied to conductive textiles. Each one models a conductive textile as either (1) a wire mesh screen or (2) a compact material. Therefore, the objective is to perform an analysis of the models in order to determine which one is a better approximation for electromagnetic shielding fabrics. In order to provide results for the comparison, the shielding effectiveness of the sample has been measured by means of the standard ASTM D4935-99.
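Whichever model is used, shielding effectiveness is reported on the same decibel scale; a minimal sketch of that conversion follows (the field values are illustrative, not results from either of the compared models).

```python
import math

def shielding_effectiveness_db(e_incident, e_transmitted):
    """Shielding effectiveness in dB: SE = 20 * log10(|E_inc| / |E_trans|)."""
    return 20.0 * math.log10(e_incident / e_transmitted)

# A barrier that lets 1% of the incident field through provides 40 dB:
se = shielding_effectiveness_db(1.0, 0.01)
print(se)  # 40.0
```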
DEFF Research Database (Denmark)
Martinussen, T.; Vansteelandt, S.; Tchetgen, E. J. Tchetgen;
2016-01-01
of complications due to censoring and survivorship bias. In this paper, we make a novel proposal under a class of structural cumulative survival models which parameterize time-varying effects of a point exposure directly on the scale of the survival function; these models are essentially equivalent with a semi-parametric variant of the instrumental variables additive hazards model. We propose a class of recursive instrumental variable estimators for these exposure effects, and derive their large sample properties along with inferential tools. We examine the performance of the proposed method in simulation studies...
Methods of statistical model estimation
Hilbe, Joseph
2013-01-01
Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method. Th
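As one example of the algorithms such a text covers, here is a minimal iteratively reweighted least squares (IRLS / Newton-Raphson) fit of a logistic regression; it is written in Python rather than the book's R, uses simulated data, and is an illustrative sketch only.

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Fit logistic regression by IRLS (Newton-Raphson on the
    log-likelihood). X must include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))   # fitted probabilities
        W = mu * (1.0 - mu)               # IRLS weights
        z = eta + (y - mu) / W            # working response
        # Weighted least squares step: solve (X'WX) beta = X'Wz
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Simulated data with true coefficients (0.5, 1.5)
rng = np.random.default_rng(2)
n = 2000
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))
y = rng.binomial(1, p)
beta_hat = irls_logistic(X, y)
print(beta_hat)  # close to [0.5, 1.5]
```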
Meta-analysis of choice set generation effects on route choice model estimates and predictions
DEFF Research Database (Denmark)
Prato, Carlo Giacomo
2012-01-01
Large scale applications of behaviorally realistic transport models pose several challenges to transport modelers on both the demand and the supply sides. On the supply side, path-based solutions to the user assignment equilibrium problem help modelers in enhancing the route choice behavior modeling, but require them to generate choice sets by selecting a path generation technique and its parameters according to personal judgments. This paper proposes a methodology and an experimental setting to provide general indications about objective judgments for an effective route choice set generation. Initially, path generation techniques are implemented within a synthetic network to generate possible subjective choice sets considered by travelers. Next, 'true model estimates' and 'postulated predicted routes' are assumed from the simulation of a route choice model. Then, objective choice sets...
Grieger, Jessica A; Johnson, Brittany J; Wycherley, Thomas P; Golley, Rebecca K
2017-05-01
Background: Dietary simulation modeling can predict dietary strategies that may improve nutritional or health outcomes. Objectives: The study aims were to undertake a systematic review of simulation studies that model dietary strategies aiming to improve nutritional intake, body weight, and related chronic disease, and to assess the methodologic and reporting quality of these models. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guided the search strategy, with studies located through electronic searches [Cochrane Library, Ovid (MEDLINE and Embase), EBSCOhost (CINAHL), and Scopus]. Study findings were described and dietary modeling methodology and reporting quality were critiqued by using a set of quality criteria adapted for dietary modeling from general modeling guidelines. Results: Forty-five studies were included and categorized as modeling moderation, substitution, reformulation, or promotion dietary strategies. Moderation and reformulation strategies targeted individual nutrients or foods to theoretically improve one particular nutrient or health outcome, estimating small to modest improvements. Substituting unhealthy foods with healthier choices was estimated to be effective across a range of nutrients, including an estimated reduction in intake of saturated fatty acids, sodium, and added sugar. Promotion of fruits and vegetables predicted marginal changes in intake. Overall, the quality of the studies was moderate to high, with certain features of the quality criteria consistently reported. Conclusions: Based on the results of reviewed simulation dietary modeling studies, targeting a variety of foods rather than individual foods or nutrients theoretically appears most effective in estimating improvements in nutritional intake, particularly reducing intake of nutrients commonly consumed in excess. A combination of strategies could theoretically be used to deliver the best improvement in outcomes. Study quality was moderate to
Investigation of effects of varying model inputs on mercury deposition estimates in the Southwest US
Directory of Open Access Journals (Sweden)
T. Myers
2012-04-01
Full Text Available The Community Multiscale Air Quality (CMAQ) model version 4.7.1 was used to simulate mercury wet and dry deposition for a domain covering the contiguous United States (US). The simulations used MM5-derived meteorological input fields and the US Environmental Protection Agency (EPA) Clean Air Mercury Rule (CAMR) emissions inventory. Using sensitivity simulations with different boundary conditions and tracer simulations, this investigation focuses on the contributions of boundary concentrations to deposited mercury in the Southwest (SW) US. Concentrations of oxidized mercury species along the boundaries of the domain, in particular the upper layers of the domain, can make significant contributions to the simulated wet and dry deposition of mercury in the SW US. In order to better understand the contributions of boundary conditions to deposition, inert tracer simulations were conducted to quantify the relative amount of an atmospheric constituent transported across the boundaries of the domain at various altitudes and to quantify the amount that reaches and potentially deposits to the land surface in the SW US. Simulations using alternate sets of boundary concentrations, including estimates from global models (Goddard Earth Observing System-Chem (GEOS-Chem) and the Global/Regional Atmospheric Heavy Metals (GRAHM) model), and alternate meteorological input fields (for different years) are analyzed in this paper. CMAQ dry deposition in the SW US is sensitive to differences in the atmospheric dynamics and atmospheric mercury chemistry parameterizations between the global models used for boundary conditions.
Carpintero, Elisabet; González-Dugo, María P.; José Polo, María; Hain, Christopher; Nieto, Héctor; Gao, Feng; Andreu, Ana; Kustas, William; Anderson, Martha
2017-04-01
The integration of currently available satellite data into surface energy balance models can provide estimates of evapotranspiration (ET) with spatial and temporal resolutions determined by sensor characteristics. The use of data fusion techniques may increase the temporal resolution of these estimates using multiple satellites, providing a more frequent ET monitoring for hydrological purposes. The objective of this work is to analyze the effects of pixel resolution on the estimation of evapotranspiration using different remote sensing platforms, and to provide continuous monitoring of ET over a water-controlled ecosystem, the Holm oak savanna woodland known as dehesa. It is an agroforestry system with a complex canopy structure characterized by widely-spaced oak trees combined with crops, pasture and shrubs. The study was carried out during two years, 2013 and 2014, combining ET estimates at different spatial and temporal resolutions and applying data fusion techniques for a frequent monitoring of water use at fine spatial resolution. A global and daily ET product at 5 km resolution, developed with the ALEXI model using MODIS day-night temperature difference (Anderson et al., 2015a) was used as a starting point. The associated flux disaggregation scheme, DisALEXI (Norman et al., 2003), was later applied to constrain higher resolution ET from both MODIS and Landsat 7/8 images. The Climate Forecast System Reanalysis (CFSR) provided the meteorological data. Finally, a data fusion technique, the STARFM model (Gao et al., 2006), was applied to fuse MODIS and Landsat ET maps in order to obtain daily ET at 30 m resolution. These estimates were validated and analyzed at two different scales: at local scale over a dehesa experimental site and at watershed scale with a predominant Mediterranean oak savanna landscape, both located in Southern Spain. Local ET estimates from the modeling system were validated with measurements provided by an eddy covariance tower installed in
Estimating required information size by quantifying diversity in random-effects model meta-analyses
DEFF Research Database (Denmark)
Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper;
2009-01-01
an intervention effect suggested by trials with low-risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between-trial and within-trial variability, … and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D2 >or= I2, for all meta-analyses. CONCLUSION: We conclude that D2 seems a better alternative than I2 to consider model variation in any random-effects model meta-analysis.
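The diversity measure D2 described in this abstract can be illustrated numerically. The sketch below uses the standard DerSimonian-Laird between-trial variance estimator; the five trial estimates and their variances are invented for illustration, not taken from the paper.

```python
import numpy as np

def diversity_and_inconsistency(y, v):
    """D2 and I2 for a meta-analysis with trial effect estimates y
    and within-trial variances v (DerSimonian-Laird tau^2)."""
    w = 1.0 / v
    k = len(y)
    y_fixed = np.sum(w * y) / np.sum(w)          # fixed-effect pooled estimate
    Q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    V_F = 1.0 / np.sum(w)                        # pooled-estimate variance, fixed-effect model
    V_R = 1.0 / np.sum(1.0 / (v + tau2))         # pooled-estimate variance, random-effects model
    D2 = (V_R - V_F) / V_R                       # relative variance reduction = diversity
    I2 = max(0.0, (Q - (k - 1)) / Q)             # inconsistency
    return D2, I2

# invented log-odds-ratio estimates and variances from five trials
y = np.array([-0.4, -0.1, 0.2, -0.6, 0.1])
v = np.array([0.04, 0.09, 0.05, 0.12, 0.06])
D2, I2 = diversity_and_inconsistency(y, v)
print(D2 >= I2)  # True: the abstract proves D2 >= I2 for all meta-analyses
```

The adjusted required information size would then inflate the fixed-effect calculation by 1/(1 - D2) rather than 1/(1 - I2).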
Integrated human-clothing system model for estimating the effect of walking on clothing insulation
Energy Technology Data Exchange (ETDEWEB)
Ghaddar, Nesreen [American University of Beirut, Faculty of Engineering and Architecture, P.O. Box 11-236, Riad ElSolh, 1107 2020, Beirut (Lebanon); Ghali, Kamel [Beirut Arab University, Faculty of Engineering, Beirut (Lebanon); Jones, Byron [Kansas State University, College of Engineering, 148 Rathbone Hall, 66506-5202, Manhattan, KS (United States)
2003-06-01
The objective of this work is to develop a 1-D transient heat and mass transfer model of a walking clothed human to predict the dynamic clothing dry heat insulation values and vapor resistances. Developing an integrated model of human and clothing system under periodic ventilation requires estimation of the heat and mass transfer film coefficients at the skin to the air layer subject to oscillating normal flow. Experiments were conducted in an environmental chamber under controlled conditions of 25 °C and 50% relative humidity to measure the mass transfer coefficient at the skin to the air layer separating the wet skin and the fabric. A 1-D mathematical model is developed to simulate the dynamic thermal behavior of clothing and its interaction with the human thermoregulation system under walking conditions. A modification of Gagge's two-node model is used to simulate the human physiological regulatory responses. The human model is coupled to a clothing three-node model of the fabric that takes into consideration the adsorption of water vapor in the fibers during the periodic ventilation of the fabric by the air motion in from the ambient environment and out from the air layer adjacent to the moist skin. When physical activity and ambient conditions are specified, the integrated model of human-clothing can predict the thermo-regulatory responses of the body together with the temperature and insulation values of the fabric. The developed model is used to predict the periodic ventilation flow rate in and out of the fabric, the periodic fabric regain, the fabric temperature, the air layer temperature, the heat loss or gain from the skin, and dry and vapor resistances of the clothing. The heat loss from the skin increases with the increase of the frequency of ventilation and with the increased metabolic rate of the body. In addition, the dry resistance of the clothing fabrics, predicted by the current model, is compared with published experimental data.
SPSS and SAS procedures for estimating indirect effects in simple mediation models.
Preacher, Kristopher J; Hayes, Andrew F
2004-11-01
Researchers often conduct mediation analysis in order to indirectly assess the effect of a proposed cause on some outcome through a proposed mediator. The utility of mediation analysis stems from its ability to go beyond the merely descriptive to a more functional understanding of the relationships among variables. A necessary component of mediation is a statistically and practically significant indirect effect. Although mediation hypotheses are frequently explored in psychological research, formal significance tests of indirect effects are rarely conducted. After a brief overview of mediation, we argue the importance of directly testing the significance of indirect effects and provide SPSS and SAS macros that facilitate estimation of the indirect effect with a normal theory approach and a bootstrap approach to obtaining confidence intervals, as well as the traditional approach advocated by Baron and Kenny (1986). We hope that this discussion and the macros will enhance the frequency of formal mediation tests in the psychology literature. Electronic copies of these macros may be downloaded from the Psychonomic Society's Web archive at www.psychonomic.org/archive/.
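The bootstrap test for the indirect effect discussed in this abstract can be sketched outside SPSS/SAS; the following is a minimal Python analogue, not the Preacher-Hayes macros themselves, and the data-generating model and all coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# invented mediation data: X -> M (slope a = 0.5), M -> Y (slope b = 0.6)
n = 500
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(size=n)
Y = 0.6 * M + 0.2 * X + rng.normal(size=n)

def indirect_effect(X, M, Y):
    """a*b from two OLS fits: M ~ X gives a; Y ~ M + X gives b."""
    a = np.polyfit(X, M, 1)[0]
    design = np.column_stack([M, X, np.ones_like(X)])
    b = np.linalg.lstsq(design, Y, rcond=None)[0][0]
    return a * b

# percentile bootstrap confidence interval for the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)       # resample cases with replacement
    boot.append(indirect_effect(X[idx], M[idx], Y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(X, M, Y):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

A normal-theory (Sobel) test would instead divide a·b by its delta-method standard error; the bootstrap avoids assuming the product term is normally distributed.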
Kimball, John; Kang, Sinkyu
2003-01-01
The original objectives of this proposed 3-year project were to: 1) quantify the respective contributions of land cover and disturbance (i.e., wild fire) to uncertainty associated with regional carbon source/sink estimates produced by a variety of boreal ecosystem models; 2) identify the model processes responsible for differences in simulated carbon source/sink patterns for the boreal forest; 3) validate model outputs using tower and field-based estimates of NEP and NPP; and 4) recommend/prioritize improvements to boreal ecosystem carbon models, which will better constrain regional source/sink estimates for atmospheric CO2. These original objectives were subsequently distilled to fit within the constraints of a 1-year study. This revised study involved a regional model intercomparison over the BOREAS study region involving Biome-BGC and TEM (A.D. McGuire, UAF) ecosystem models. The major focus of these revised activities involved quantifying the sensitivity of regional model predictions associated with land cover classification uncertainties. We also evaluated the individual and combined effects of historical fire activity, historical atmospheric CO2 concentrations, and climate change on carbon and water flux simulations within the BOREAS study region.
Estimation of health effects of prenatal methylmercury exposure using structural equation models
DEFF Research Database (Denmark)
Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe
2002-01-01
BACKGROUND: Observational studies in epidemiology always involve concerns regarding validity, especially measurement error, confounding, missing data, and other problems that may affect the study outcomes. Widely used standard statistical techniques, such as multiple regression analysis, may … to some extent adjust for these shortcomings. However, structural equations may incorporate most of these considerations, thereby providing overall adjusted estimations of associations. This approach was used in a large epidemiological data set from a prospective study of developmental methylmercury toxicity. RESULTS: Structural equation models were developed for assessment of the association between biomarkers of prenatal mercury exposure and neuropsychological test scores in 7-year-old children. Eleven neurobehavioral outcomes were grouped into motor function and verbally mediated function …
Kelava, Augustin; Werner, Christina S.; Schermelleh-Engel, Karin; Moosbrugger, Helfried; Zapf, Dieter; Ma, Yue; Cham, Heining; Aiken, Leona S.; West, Stephen G.
2011-01-01
Interaction and quadratic effects in latent variable models have to date only rarely been tested in practice. Traditional product indicator approaches need to create product indicators (e.g., x₁², x₁x₄) to serve as indicators of each nonlinear latent construct. These approaches require the use of…
Pan, Xinpeng; Zhang, Guangzhi; Yin, Xingyao
2017-10-01
Estimation of effective geostress parameters is fundamental to the trajectory design and hydraulic fracturing in shale-gas reservoirs. Considering the shale characteristics of excellent stratification, well-developed cracks or fractures and small-scale pores, an effective or suitable shale anisotropic rock-physics model contributes to achieving the accurate prediction of effective geostress parameters in shale-gas reservoirs. In this paper, we first built a shale anisotropic rock-physics model with orthorhombic symmetry, which helps to calculate the anisotropic and geomechanical parameters under the orthorhombic assumption. Then, we introduced an anisotropic stress model with orthorhombic symmetry compared with an isotropic stress model and a transversely isotropic stress model. Combining the effective estimation of the pore pressure and the vertical stress parameters, we finally obtained the effective geostress parameters including the minimum and maximum horizontal stress parameters, providing a useful guide for the exploration and development in shale-gas reservoirs. Of course, ultimately the optimal choice of the hydraulic-fracturing area may also take into consideration other multi-factors such as the rock brittleness, cracks or fractures, and hydrocarbon distribution.
Estimating Functions and Semiparametric Models
DEFF Research Database (Denmark)
Labouriau, Rodrigo
1996-01-01
The thesis is divided into two parts. The first part treats some topics of the estimation theory for semiparametric models in general. There the classic optimality theory is reviewed and exposed in a suitable way for the further developments given after. Further, the theory of estimating functions … contained in this part of the thesis constitutes an original contribution. There can be found the detailed characterization of the class of regular estimating functions, a calculation of efficient regular asymptotic linear estimating sequences (i.e., the classical optimality theory) and a discussion … of the attainability of the bounds for the concentration of regular asymptotic linear estimating sequences by estimators derived from estimating functions. The main class of models considered in the second part of the thesis (chapter 5) are constructed by assuming that the expectation of a number of given square…
Analytical estimation of effective charges at saturation in Poisson-Boltzmann cell models
Trizac, E; Bocquet, L
2003-01-01
We propose a simple approximation scheme for computing the effective charges of highly charged colloids (spherical or cylindrical with infinite length). Within non-linear Poisson-Boltzmann theory, we start from an expression for the effective charge in the infinite-dilution limit which is asymptotically valid for large salt concentrations; this result is then extended to finite colloidal concentration, approximating the salt partitioning effect which relates the salt content in the suspension to that of a dialysing reservoir. This leads to an analytical expression for the effective charge as a function of colloid volume fraction and salt concentration. These results compare favourably with the effective charges at saturation (i.e. in the limit of large bare charge) computed numerically following the standard prescription proposed by Alexander et al within the cell model.
Mitrikas, V G
2015-01-01
Monitoring of the radiation loading on cosmonauts requires calculation of absorbed dose dynamics with regard to the stay of cosmonauts in specific compartments of the space vehicle that differ in shielding properties and lack means of radiation measurement. The paper discusses different aspects of calculation modeling of radiation effects on human body organs and tissues and reviews the effective dose estimates for cosmonauts working in one or another compartment over the previous period of the International Space Station operation. It was demonstrated that doses measured by real or personal dosimeters can be used to calculate effective dose values. Correct estimation of accumulated effective dose can be ensured by accounting for the time course of the space radiation quality factor.
Directory of Open Access Journals (Sweden)
Nengjun Yi
2011-12-01
Full Text Available Complex diseases and traits are likely influenced by many common and rare genetic variants and environmental factors. Detecting disease susceptibility variants is a challenging task, especially when their frequencies are low and/or their effects are small or moderate. We propose here a comprehensive hierarchical generalized linear model framework for simultaneously analyzing multiple groups of rare and common variants and relevant covariates. The proposed hierarchical generalized linear models introduce a group effect and a genetic score (i.e., a linear combination of main-effect predictors for genetic variants) for each group of variants, and jointly they estimate the group effects and the weights of the genetic scores. This framework includes various previous methods as special cases, and it can effectively deal with both risk and protective variants in a group and can simultaneously estimate the cumulative contribution of multiple variants and their relative importance. Our computational strategy is based on extending the standard procedure for fitting generalized linear models in the statistical software R to the proposed hierarchical models, leading to the development of stable and flexible tools. The methods are illustrated with sequence data in gene ANGPTL4 from the Dallas Heart Study. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/).
Haber, M; An, Q; Foppa, I M; Shay, D K; Ferdinands, J M; Orenstein, W A
2015-05-01
As influenza vaccination is now widely recommended, randomized clinical trials are no longer ethical in many populations. Therefore, observational studies on patients seeking medical care for acute respiratory illnesses (ARIs) are a popular option for estimating influenza vaccine effectiveness (VE). We developed a probability model for evaluating and comparing bias and precision of estimates of VE against symptomatic influenza from two commonly used case-control study designs: the test-negative design and the traditional case-control design. We show that when vaccination does not affect the probability of developing non-influenza ARI, then VE estimates from test-negative design studies are unbiased even if vaccinees and non-vaccinees have different probabilities of seeking medical care for ARI, as long as the ratio of these probabilities is the same for illnesses resulting from influenza and non-influenza infections. Our numerical results suggest that in general, estimates from the test-negative design have smaller bias compared to estimates from the traditional case-control design as long as the probability of non-influenza ARI is similar among vaccinated and unvaccinated individuals. We did not find consistent differences between the standard errors of the estimates from the two study designs.
Stallinga, Sjoerd; Rieger, Bernd
2012-03-12
We introduce a method for determining the position and orientation of fixed dipole emitters based on a combination of polarimetry and spot shape detection. A key element is an effective Point Spread Function model based on Hermite functions. The model offers a good description of the shape variations with dipole orientation and polarization detection channel, and provides computational advantages over the exact vectorial description of dipole image formation. The realized localization uncertainty is comparable to the free dipole case in which spots are rotationally symmetric and can be well modeled with a Gaussian. This result holds for all dipole orientations, for all practical signal levels, and for defocus values within the depth of focus, implying that the massive localization bias for defocused emitters with tilted dipole axis found with Gaussian spot fitting is eliminated.
The Additive Risk Model for Estimation of Effect of Haplotype Match in BMT Studies
DEFF Research Database (Denmark)
Scheike, Thomas; Martinussen, T; Zhang, MJ
2011-01-01
leads to a missing data problem. We show how Aalen's additive risk model can be applied in this setting with the benefit that the time-varying haplomatch effect can be easily studied. This problem has not been considered before, and the standard approach where one would use the expectation-maximization (EM) algorithm … be developed using product-integration theory. Small sample properties are investigated using simulations in a setting that mimics the motivating haplomatch problem.
Directory of Open Access Journals (Sweden)
Tatsuhiko Sato
Full Text Available We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of anti-apoptotic protein Bcl-2 known to frequently occur in human cancer was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeam or broadbeam of energetic heavy ions, as well as the WI-38 normal human fibroblasts irradiated with X-ray microbeam. The model assembly reproduced very well the experimentally determined surviving fraction over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth being incorporated into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given critical roles of the frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities.
Mullah, Muhammad Abu Shadeque; Benedetti, Andrea
2016-11-01
Besides being mainly used for analyzing clustered or longitudinal data, generalized linear mixed models can also be used for smoothing via restricting changes in the fit at the knots in regression splines. The resulting models are usually called semiparametric mixed models (SPMMs). We investigate the effect of smoothing using SPMMs on the correlation and variance parameter estimates for serially correlated longitudinal normal, Poisson and binary data. Through simulations, we compare the performance of SPMMs to other simpler methods for estimating the nonlinear association such as fractional polynomials, and using a parametric nonlinear function. Simulation results suggest that, in general, the SPMMs recover the true curves very well and yield reasonable estimates of the correlation and variance parameters. However, for binary outcomes, SPMMs produce biased estimates of the variance parameters for high serially correlated data. We apply these methods to a dataset investigating the association between CD4 cell count and time since seroconversion for HIV infected men enrolled in the Multicenter AIDS Cohort Study.
Institute of Scientific and Technical Information of China (English)
Simon WU; Jonathan LI; Gordon HUANG; G.M.ZENG
2004-01-01
The horizontal accuracy of topographic data represented by digital elevation model (DEM) resolution brings about uncertainties in landscape process modeling with raster GIS. This paper presents a study on the effect of topographic variability on cell-based empirical estimation of soil loss and sediment transport. An original DEM of 10m resolution for a case watershed was re-sampled to three realizations of higher grid sizes for a comparative examination. Equations based on the USLE are applied to the watershed to calculate soil loss from each cell and total sediment transport to streams. The study found that the calculated total soil loss from the watershed decreases with the increasing DEM resolution with a linear correlation as spatial variability is reduced by cell aggregation. The USLE topographic factors (LS) extracted from applied DEMs represent spatial variability, and determine the estimations as shown in the modeling results. The commonly used USGS 30m DEM appears to be able to reflect essential spatial variability and suitable for the empirical estimation. The appropriateness of a DEM resolution is dependent upon specific landscape characteristics, applied model and its parameterization. This work attempts to provide a general framework for the research in the DEM-based empirical modeling.
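The cell-based USLE estimate referred to above multiplies the rainfall erosivity (R), soil erodibility (K), topographic (LS), cover (C), and support practice (P) factors; a minimal sketch with invented parameter values for a single grid cell:

```python
def usle_soil_loss(R, K, LS, C, P):
    """USLE: A = R * K * LS * C * P, annual soil loss per unit area."""
    return R * K * LS * C * P

# invented single-cell values: erosivity R, erodibility K, topographic LS
# (the DEM-derived factor discussed above), cover C, support practice P
A = usle_soil_loss(R=1200.0, K=0.03, LS=1.8, C=0.2, P=1.0)
print(A)
```

Because LS is extracted from the DEM, aggregating cells to a coarser resolution flattens slopes, lowers LS, and hence lowers the summed soil-loss estimate, which is the resolution effect the study reports.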
Directory of Open Access Journals (Sweden)
Mei-Yu LEE
2014-11-01
Full Text Available This paper investigates the effect of the nonzero autocorrelation coefficients on the sampling distributions of the Durbin-Watson test estimator in three time-series models that have different variance-covariance matrix assumptions, separately. We show that the expected values and variances of the Durbin-Watson test estimator are slightly different, but the skewness and kurtosis coefficients are considerably different among the three models. The shapes of the four coefficients are similar between the Durbin-Watson model and our benchmark model, but are not the same as those of the autoregressive model cut by one-lagged period. Second, the large sample case shows that the three models have the same expected values; however, the autoregressive model cut by one-lagged period explores different shapes of variance, skewness and kurtosis coefficients from the other two models. This implies that the large samples lead to the same expected values, 2(1 – ρ0), whatever the variance-covariance matrix of the errors is assumed. Finally, comparing the two sample cases, the shape of each coefficient is almost the same; moreover, the autocorrelation coefficients are negatively related with expected values, are inverted-U related with variances, are cubic related with skewness coefficients, and are U related with kurtosis coefficients.
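The large-sample expected value 2(1 − ρ) quoted in this abstract can be checked directly; the sketch below computes the Durbin-Watson statistic for simulated AR(1) residuals (the simulation settings are invented for illustration):

```python
import numpy as np

def durbin_watson(e):
    """Durbin-Watson statistic d for a residual series e."""
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# simulate AR(1) errors e_t = rho * e_{t-1} + u_t
rng = np.random.default_rng(1)
rho, n = 0.5, 20000
e = np.empty(n)
e[0] = rng.normal()
for t in range(1, n):
    e[t] = rho * e[t - 1] + rng.normal()

d = durbin_watson(e)
print(f"d = {d:.3f}, 2(1 - rho) = {2 * (1 - rho)}")
```

With ρ = 0.5 the statistic settles near 1.0 for large n; for white noise (ρ = 0) it settles near 2, the classical no-autocorrelation benchmark.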
Model estimation of land-use effects on water levels of northern prairie wetlands.
Voldseth, Richard A; Johnson, W Carter; Gilmanov, Tagir; Guntenspergen, Glenn R; Millett, Bruce V
2007-03-01
Wetlands of the Prairie Pothole Region exist in a matrix of grassland dominated by intensive pastoral and cultivation agriculture. Recent conservation management has emphasized the conversion of cultivated farmland and degraded pastures to intact grassland to improve upland nesting habitat. The consequences of changes in land-use cover that alter watershed processes have not been evaluated relative to their effect on the water budgets and vegetation dynamics of associated wetlands. We simulated the effect of upland agricultural practices on the water budget and vegetation of a semipermanent prairie wetland by modifying a previously published mathematical model (WETSIM). Watershed cover/land-use practices were categorized as unmanaged grassland (native grass, smooth brome), managed grassland (moderately heavily grazed, prescribed burned), cultivated crops (row crop, small grain), and alfalfa hayland. Model simulations showed that differing rates of evapotranspiration and runoff associated with different upland plant-cover categories in the surrounding catchment produced differences in wetland water budgets and linked ecological dynamics. Wetland water levels were highest and vegetation the most dynamic under the managed-grassland simulations, while water levels were the lowest and vegetation the least dynamic under the unmanaged-grassland simulations. The modeling results suggest that unmanaged grassland, often planted for waterfowl nesting, may produce the least favorable wetland conditions for birds, especially in drier regions of the Prairie Pothole Region. These results stand as hypotheses that urgently need to be verified with empirical data.
PARAMETER ESTIMATION OF ENGINEERING TURBULENCE MODEL
Institute of Scientific and Technical Information of China (English)
钱炜祺; 蔡金狮
2001-01-01
A parameter estimation algorithm is introduced and used to determine the parameters in the standard k-ε two equation turbulence model (SKE). It can be found from the estimation results that although the parameter estimation method is an effective method to determine model parameters, it is difficult to obtain a set of parameters for SKE to suit all kinds of separated flow and a modification of the turbulence model structure should be considered. So, a new nonlinear k-ε two-equation model (NNKE) is put forward in this paper and the corresponding parameter estimation technique is applied to determine the model parameters. By implementing the NNKE to solve some engineering turbulent flows, it is shown that NNKE is more accurate and versatile than SKE. Thus, the success of NNKE implies that the parameter estimation technique may have a bright prospect in engineering turbulence model research.
Kerboua, Kaouther; Hamdaoui, Oualid
2018-01-01
Based on two different assumptions regarding the equation of state of the gases within an acoustic cavitation bubble, this paper studies the sonochemical production of hydrogen through two numerical models that follow the evolution of a chemical mechanism within a single oxygen-saturated bubble during an oscillation cycle in water. The first approach is built on an ideal gas model, while the second is founded on the Van der Waals equation; the main objective was to analyze the effect of the adopted state equation on the simulated ultrasonic hydrogen production under various operating conditions. The results show that even though the second approach gives higher values of temperature, pressure, and total free-radical production, the hydrogen yield does not follow the same trend. Comparing the two models, the ratio of the molar amounts of hydrogen is frequency and acoustic-amplitude dependent: the Van der Waals equation leads to higher quantities of hydrogen at low acoustic amplitude and high frequency, while the ideal-gas-law model yields more hydrogen at low frequency and high acoustic amplitude.
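The two state equations differ only in the excluded-volume and attraction corrections. A minimal sketch of the comparison, using textbook Van der Waals constants for oxygen (values assumed here, not taken from the paper):

```python
# Compare bubble-gas pressure under the two state equations the paper's
# models are built on, for n moles of O2 in volume V at temperature T.
R = 8.314          # gas constant, J/(mol K)
A_O2 = 0.1382      # Van der Waals a for O2, Pa m^6 / mol^2 (textbook value)
B_O2 = 3.186e-5    # Van der Waals b for O2, m^3 / mol (textbook value)

def p_ideal(n, T, V):
    return n * R * T / V

def p_vdw(n, T, V, a=A_O2, b=B_O2):
    # (P + a n^2/V^2)(V - n b) = n R T, solved for P.
    return n * R * T / (V - n * b) - a * n * n / (V * V)

n, T, V = 1.0, 300.0, 1e-3   # 1 mol, 300 K, 1 litre
print(p_ideal(n, T, V) > p_vdw(n, T, V))  # -> True at these conditions
```

At these moderate conditions the attraction term outweighs the excluded-volume correction, so the Van der Waals pressure sits below the ideal-gas value; inside a collapsing bubble the balance shifts, which is why the two models diverge.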
Putti, Fernando Ferrari; Filho, Luis Roberto Almeida Gabriel; Gabriel, Camila Pires Cremasco; Neto, Alfredo Bonini; Bonini, Carolina Dos Santos Batista; Rodrigues Dos Reis, André
2017-06-01
This study aimed to develop a fuzzy mathematical model to estimate the impacts of global warming on the vitality of Laelia purpurata growing under different Brazilian environmental conditions. To develop the mathematical model, temperature, humidity, and shading were considered as the intrinsic factors determining plant vitality. The fuzzy model could accurately predict the optimal conditions for cultivation of Laelia purpurata at several sites in Brazil. Based on the fuzzy model results, we found that higher temperatures and lack of proper shading can reduce the vitality of the orchids. The fuzzy mathematical model could thus precisely detect the damage to plant vitality caused by the higher temperatures that accompany global warming.
Mode choice model parameters estimation
Strnad, Irena
2010-01-01
The present work focuses on parameter estimation for two mode choice models, the multinomial logit and the EVA 2 model, with four modes and five trip purposes taken into account. A mode choice model describes the behavioural side of mode choice and enables its application within a traffic model: it relates the trip factors that affect the choice of each mode to their relative importance for the choice made. When trip factor values are known, it...
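A multinomial logit model assigns each mode a choice probability proportional to the exponential of its systematic utility. A minimal sketch with purely illustrative utilities (the paper's calibrated coefficients are not reproduced here):

```python
import math

# Multinomial logit choice probabilities: P(m) = exp(V_m) / sum_k exp(V_k).
def mode_shares(utilities):
    exps = {m: math.exp(v) for m, v in utilities.items()}
    total = sum(exps.values())
    return {m: e / total for m, e in exps.items()}

# Hypothetical systematic utilities for four modes (e.g. from a linear
# combination of travel time and cost with estimated coefficients).
V = {"car": -1.2, "bus": -1.8, "rail": -1.5, "walk": -2.6}
shares = mode_shares(V)
print(max(shares, key=shares.get))  # -> car
```

Parameter estimation then amounts to choosing the utility coefficients that maximize the likelihood of the observed choices under these probabilities.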
A conceptual model to estimate cost effectiveness of the indoor environment improvements
Energy Technology Data Exchange (ETDEWEB)
Seppanen, Olli; Fisk, William J.
2003-06-01
Macroeconomic analyses indicate a high cost to society of a deteriorated indoor climate. The few example calculations performed to date indicate that measures taken to improve indoor environmental quality (IEQ) are highly cost-effective when health and productivity benefits are considered. We believe that cost-benefit analyses of building designs and operations should routinely incorporate health and productivity impacts. As an initial step, we developed a conceptual model that shows the links between improvements in IEQ and the financial gains from reductions in medical care and sick leave, improved work performance, lower employee turnover, and reduced maintenance due to fewer complaints.
Energy Technology Data Exchange (ETDEWEB)
Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.
2001-04-30
Artifacts can result when reconstructing a dynamic image sequence from inconsistent single photon emission computed tomography (SPECT) projections acquired by a slowly rotating gantry. The artifacts can lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying volumes of interest on the images. To overcome these biases in conventional image based dynamic data analysis, we have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view. In previous work we developed computationally efficient methods for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions [1] and their statistical uncertainties [2] from dynamic SPECT projection data, using a spatial segmentation and temporal B-splines. In addition, we studied the bias that results from modeling various orders of temporal continuity and using various time samplings [1]. In the present work, we use the methods developed in [1, 2] and Monte Carlo simulations to study the effects of the temporal modeling on the statistical variability of the reconstructed distributions.
Directory of Open Access Journals (Sweden)
Quang Duy Pham
Vietnam has been largely reliant on international support in its HIV response. Over 2006-2010, a total of US$480 million was invested in its HIV programmes, more than 70% of which came from international sources. This study investigates the potential epidemiological impacts of these programmes and their cost-effectiveness. We conducted a data synthesis of HIV programming, spending, epidemiological, and clinical outcomes. Counterfactual scenarios were defined based on assumed programme coverage and behaviours had the programmes not been implemented. An epidemiological model, calibrated to reflect the actual epidemiological trends, was used to estimate plausible ranges of programme impacts. The model was then used to estimate the costs per averted infection, death, and disability-adjusted life-year (DALY). Based on observed prevalence reductions amongst most population groups, and plausible counterfactuals, modelling suggested that antiretroviral therapy (ART) and prevention programmes over 2006-2010 averted an estimated 50,600 [95% uncertainty bound: 36,300-68,900] new infections and 42,600 [36,100-54,100] deaths, resulting in 401,600 [312,200-496,300] fewer DALYs across all population groups. HIV programmes in Vietnam have cost an estimated US$1,972 [1,447-2,747], US$2,344 [1,843-2,765], and US$248 [201-319] for each averted infection, death, and DALY, respectively. Our evaluation suggests that the benefits of HIV programmes in Vietnam have most likely been cost-effective; ART and direct HIV prevention were the most cost-effective interventions in reducing HIV disease burden.
Li, Shuli; Gray, Robert J
2016-09-01
We consider methods for estimating the treatment effect and/or the covariate by treatment interaction effect in a randomized clinical trial under noncompliance with time-to-event outcome. As in Cuzick et al. (2007), assuming that the patient population consists of three (possibly latent) subgroups based on treatment preference: the ambivalent group, the insisters, and the refusers, we estimate the effects among the ambivalent group. The parameters have causal interpretations under standard assumptions. The article contains two main contributions. First, we propose a weighted per-protocol (Wtd PP) estimator through incorporating time-varying weights in a proportional hazards model. In the second part of the article, under the model considered in Cuzick et al. (2007), we propose an EM algorithm to maximize a full likelihood (FL) as well as the pseudo likelihood (PL) considered in Cuzick et al. (2007). The E step of the algorithm involves computing the conditional expectation of a linear function of the latent membership, and the main advantage of the EM algorithm is that the risk parameters can be updated by fitting a weighted Cox model using standard software and the baseline hazard can be updated using closed-form solutions. Simulations show that the EM algorithm is computationally much more efficient than directly maximizing the observed likelihood. The main advantage of the Wtd PP approach is that it is more robust to model misspecifications among the insisters and refusers since the outcome model does not impose distributional assumptions among these two groups. © 2016, The International Biometric Society.
Coates, P A; Ollerton, R L; Luzio, S D; Ismail, I S; Owens, D R
1993-11-01
Recent work in healthy subjects, the aged, and subjects with gestational diabetes or drug-induced insulin resistance using minimal model analysis of the tolbutamide-modified frequently sampled intravenous glucose tolerance test suggested that a reduced sampling regimen of 12 time points produced unbiased and generally acceptable estimates of insulin sensitivity (SI) and glucose effectiveness (SG) compared with a full sampling schedule of 30 time points. We have used data from 26 insulin-modified frequently sampled intravenous glucose tolerance tests in 21 subjects with NIDDM to derive and compare estimates of SI and SG from the full sampling schedule (SI(30), SG(30)) with those estimated from the suggested 12 time points (SI(12), SG(12)) and those estimated with the addition of a 25-min time point (SI(13), SG(13)). Percentage relative errors were calculated relative to the corresponding 30 time-point values. A statistically significant bias of 15% (97% confidence interval from 7.4 to 25.6%, interquartile range 25%) was introduced by the estimation of SI(12) but not SI(13) (1%, 97% confidence interval from -9.4 to 9.3%, interquartile range 21%). Results for SG(12) (-12%, 97% confidence interval from -46.7 to 1.2%, interquartile range 49%) and SG(13) (-5%, 97% confidence interval from -27.8 to 6.8%, interquartile range 37%) were statistically equivocal. The precision of estimation of SI(12), SG(12), and SG(13) measured by the interquartile range of the percentage relative errors was poor. The precision of determination measured by the median minimal model coefficient of variation was 18, 29, and 27% for SI(30), SI(12), and SI(13) and 9, 11, and 11% for SG(30), SG(12), and SG(13), respectively.(ABSTRACT TRUNCATED AT 250 WORDS)
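The comparison statistic used throughout this abstract is the percentage relative error of a reduced-schedule estimate against the 30-point reference value. A one-line sketch with made-up insulin-sensitivity values:

```python
# Percentage relative error of a reduced-sampling estimate against the
# full 30-point reference, as used to express e.g. the 15% bias in SI(12).
def pct_relative_error(reduced, full):
    return 100.0 * (reduced - full) / full

# Illustrative (made-up) SI values for one subject: a reduced schedule that
# overestimates the reference by 15%.
print(round(pct_relative_error(4.6, 4.0), 1))  # -> 15.0
```

The interquartile range of these errors across subjects is then the study's measure of precision.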
Estimating the Effects of Obesity and Weight Change on Mortality Using a Dynamic Causal Model
Bochen Cao
2015-01-01
Background A well-known challenge in estimating the mortality risks of obesity is reverse causality attributable to illness-associated and smoking-associated weight loss. Given that the likelihood of chronic and acute illnesses rises with age, reverse causality is most threatening to estimates derived from elderly populations. Methods I analyzed data from 12,523 respondents over 50 years old from a nationally representative longitudinal dataset, the Health and Retirement Study (HRS). The effe...
Ippei Fujiwara
2003-01-01
In this paper, I estimate the monetary business cycle model of the Japanese economy by the method advocated by Ireland (2002a), the maximum likelihood estimation of a dynamic stochastic general equilibrium model in state-space representation. The model estimated here includes a direct role for money in output and inflation, so that we can study the alternative transmission mechanism of monetary policy to the traditional interest rate channel, which may even work under the zero nominal i...
Estimating cost-effectiveness in public health: a summary of modelling and valuation methods.
Marsh, Kevin; Phillips, Ceri J; Fordham, Richard; Bertranou, Evelina; Hale, Janine
2012-09-03
It is acknowledged that economic evaluation methods as they have been developed for Health Technology Assessment do not capture all the costs and benefits relevant to the assessment of public health interventions. This paper reviews methods that could be employed to measure and value the broader set of benefits generated by public health interventions. Two key developments are proposed if this vision is to be achieved. First, there is a trend towards modelling approaches that better capture the effects of public health interventions. This trend needs to continue, and economists need to consider a broader range of modelling techniques than are currently employed to assess public health interventions. The selection and implementation of alternative modelling techniques should be facilitated by the production of better data on the behavioural outcomes generated by public health interventions. Second, economists are currently exploring a number of valuation paradigms that hold the promise of more appropriate valuation of public health intervention outcomes. These include the capabilities approach and the subjective well-being approach, both of which offer the possibility of broader measures of value than the approaches currently employed by health economists. These developments should not, however, be made by economists alone. These questions, in particular what method should be used to value public health outcomes, require social value judgements that are beyond the capacity of economists. This choice will require consultation with policy makers, and perhaps even the general public. Such collaboration would have the benefit of ensuring that the methods developed are useful for decision makers.
Bayesian estimation of the network autocorrelation model
Dittrich, D.; Leenders, R.T.A.J.; Mulder, J.
2017-01-01
The network autocorrelation model has been extensively used by researchers interested in modeling social influence effects in social networks. The most common inferential method for the model is classical maximum likelihood estimation. This approach, however, has known problems such as negative bias of
A feature-based inference model of numerical estimation: the split-seed effect.
Murray, Kyle B; Brown, Norman R
2009-07-01
Prior research has identified two modes of quantitative estimation: numerical retrieval and ordinal conversion. In this paper we introduce a third mode, which operates by a feature-based inference process. In contrast to prior research, the results of three experiments demonstrate that people estimate automobile prices by combining metric information associated with two critical features: product class and brand status. In addition, Experiments 2 and 3 demonstrated that when participants are seeded with the actual current base price of one of the to-be-estimated vehicles, they respond by revising the general metric and splitting the information carried by the seed between the two critical features. As a result, the degree of post-seeding revision is directly related to the number of these features that the seed and the transfer items have in common. The paper concludes with a general discussion of the practical and theoretical implications of our findings.
Energy Technology Data Exchange (ETDEWEB)
Han, Seok-Jung; KEUM, Dong-Kwon; Jang, Seung-Cheol [KAERI, Daejeon (Korea, Republic of)
2015-05-15
The FCM covers the complex transport of radioactive materials through the biokinetic system of a contaminated environment. Estimation of chronic health effects is a key part of level 3 PSA (Probabilistic Safety Assessment), and it depends on the FCM-based estimate of ingestion of contaminated foods. The dietary habits of a local region and its agricultural production differ from generic worldwide features and vary case by case; this is a reason to develop domestic FCM data for level 3 PSA. However, generating specific FCM data is a complex process subject to large uncertainty owing to the inherent biokinetic models. As a preliminary study, the present work focuses on developing an infrastructure for generating specific FCM data. During this process, the features of FCM data needed to generate domestic data were investigated, and based on the insights obtained, specific domestic FCM data were developed to estimate the chronic health effects in off-site consequence analysis. One insight from this study is that the domestic FCM data are roughly 20 times higher than the MACCS2 default data. Based on this observation, it is clear that the specific chronic health effects of a domestic plant site should be considered in off-site consequence analysis.
Estimating Absolute Site Effects
Energy Technology Data Exchange (ETDEWEB)
Malagnini, L; Mayeda, K M; Akinci, A; Bragato, P L
2004-07-15
The authors use previously determined direct-wave attenuation functions as well as stable, coda-derived source excitation spectra to isolate the absolute S-wave site effect for the horizontal and vertical components of weak ground motion. Using selected stations in the seismic network of the eastern Alps, they find the following: (1) all "hard rock" sites exhibited deamplification phenomena due to absorption at frequencies ranging between 0.5 and 12 Hz (the available bandwidth), on both the horizontal and vertical components; (2) "hard rock" site transfer functions showed large variability at high frequency; (3) vertical-motion site transfer functions show strong frequency dependence; (4) H/V spectral ratios do not reproduce the characteristics of the true horizontal site transfer functions; and (5) traditional, relative site terms obtained by using reference "rock sites" can be misleading in inferring the behavior of true site transfer functions, since most rock sites have non-flat responses due to shallow heterogeneities resulting from varying degrees of weathering. They also use their stable source spectra to estimate total radiated seismic energy and compare against previous results. They find that the earthquakes in this region exhibit non-constant dynamic stress drop scaling, which gives further support for a fundamental difference in rupture dynamics between small and large earthquakes. To correct the vertical and horizontal S-wave spectra for attenuation, they used detailed regional attenuation functions derived by Malagnini et al. (2002), who determined frequency-dependent geometrical spreading and Q for the region. These corrections account for the gross path effects (i.e., all distance-dependent effects), although the source and site effects are still present in the distance-corrected spectra. The main goal of this study is to isolate the absolute site effect (as a function of frequency
Hedeker, D; Flay, B R; Petraitis, J
1996-02-01
Methods are proposed and described for estimating the degree to which relations among variables vary at the individual level. As an example of the methods, M. Fishbein and I. Ajzen's (1975; I. Ajzen & M. Fishbein, 1980) theory of reasoned action is examined, which posits first that an individual's behavioral intentions are a function of 2 components: the individual's attitudes toward the behavior and the subjective norms as perceived by the individual. A second component of their theory is that individuals may weight these 2 components differently in assessing their behavioral intentions. This article illustrates the use of empirical Bayes methods based on a random-effects regression model to estimate these individual influences, estimating an individual's weighting of both of these components (attitudes toward the behavior and subjective norms) in relation to their behavioral intentions. This method can be used when an individual's behavioral intentions, subjective norms, and attitudes toward the behavior are all repeatedly measured. In this case, the empirical Bayes estimates are derived as a function of the data from the individual, strengthened by the overall sample data.
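The empirical Bayes idea can be sketched as precision-weighted shrinkage: an individual's estimated weighting of a component is pulled toward the sample mean in proportion to how noisy that individual's repeated measures are. The toy below assumes the variance components are known, whereas the full random-effects regression model would estimate them from the data:

```python
# Toy empirical-Bayes shrinkage: combine an individual's own estimate with
# the population mean, weighted by a reliability factor
# w = between_var / (between_var + within_var / n_obs).
def eb_estimate(individual_est, pop_mean, between_var, within_var, n_obs):
    w = between_var / (between_var + within_var / n_obs)
    return w * individual_est + (1 - w) * pop_mean

# A person with few, noisy repeated measures is shrunk strongly toward 1.0...
print(round(eb_estimate(2.0, 1.0, 0.5, 2.0, 2), 3))    # -> 1.333
# ...while many observations leave the individual estimate nearly unchanged.
print(round(eb_estimate(2.0, 1.0, 0.5, 2.0, 100), 3))  # -> 1.962
```

This is the mechanism by which the individual data are "strengthened by the overall sample data" in the article's phrasing.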
Institute of Scientific and Technical Information of China (English)
YU Shan Fa; NAKATA Akinori; GU Gui Zhen; SWANSON Naomi G; ZHOU Wen Hui; HE Li Hua; WANG Sheng
2013-01-01
Objective: To investigate the joint effect of the Demand-Control-Support (DCS) model and the Effort-Reward Imbalance (ERI) model on the estimation of depression risk, in comparison with the effect of each model used separately. Methods: A total of 3,632 males and 1,706 females from 13 factories and companies in Henan province were recruited into this cross-sectional study. Perceived job stress was evaluated with the Job Content Questionnaire and the Effort-Reward Imbalance Questionnaire (Chinese versions). Depressive symptoms were assessed using the Center for Epidemiological Studies Depression Scale (CES-D). Results: DC (the demands/job control ratio) and ERI were independently associated with depressive symptoms; the results for low social support and overcommitment were similar. High DC with low social support (SS), high ERI with high overcommitment, and high DC with high ERI each posed a greater risk of depressive symptoms than any factor alone. The ERI and SS models appear effective in estimating the risk of depressive symptoms when used separately. Conclusion: DC performed better when combined with low SS, and performance was better for physical demands than for psychological demands. Combining the DCS and ERI models could improve the estimation of depressive-symptom risk.
Amplitude Models for Discrimination and Yield Estimation
Energy Technology Data Exchange (ETDEWEB)
Phillips, William Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-09-01
This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.
Jacob Strunk; Hailemariam Temesgen; Hans-Erik Andersen; James P. Flewelling; Lisa Madsen
2012-01-01
Using lidar in an area-based model-assisted approach to forest inventory has the potential to increase estimation precision for some forest inventory variables. This study documents the bias and precision of a model-assisted (regression estimation) approach to forest inventory with lidar-derived auxiliary variables relative to lidar pulse density and the number of...
The effect of position sources on estimated eigenvalues in intensity modeled data
Hendrikse, Anne; Veldhuis, Raymond; Spreeuwers, Luuk; Goseling, Jasper; Weber, Jos H.
2010-01-01
In biometrics, often models are used in which the data distributions are approximated with normal distributions. In particular, the eigenface method models facial data as a mixture of fixed-position intensity signals with a normal distribution. The model parameters, a mean value and a covariance mat
Estimating network effects in China's mobile telecommunications
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2005-01-01
A model is proposed, along with an empirical investigation, to establish the existence of network effects in China's mobile telecommunications market. Furthermore, network effects in China's mobile telecommunications are estimated with a dynamic model. The structural parameters are identified from regression coefficients, and the results are analyzed and compared with another study. Data and estimation issues are also discussed. The conclusions are that network effects are significant in China's mobile telecommunications market, and that ignoring them leads to poor policy making.
DeMarco, J J; Cagnon, C H; Cody, D D; Stevens, D M; McCollough, C H; Zankl, M; Angel, E; McNitt-Gray, M F
2007-05-07
The purpose of this work is to examine the effects of patient size on radiation dose from CT scans. To perform these investigations, we used Monte Carlo simulation methods with detailed models of both patients and multidetector computed tomography (MDCT) scanners. A family of three-dimensional, voxelized patient models previously developed and validated by the GSF was implemented as input files using the Monte Carlo code MCNPX. These patient models represent a range of patient sizes and ages (8 weeks to 48 years) and have all radiosensitive organs previously identified and segmented, allowing the estimation of dose to any individual organ and calculation of patient effective dose. To estimate radiation dose, every voxel in each patient model was assigned both a specific organ index number and an elemental composition and mass density. Simulated CT scans of each voxelized patient model were performed using a previously developed MDCT source model that includes scanner specific spectra, including bowtie filter, scanner geometry and helical source path. The scan simulations in this work include a whole-body scan protocol and a thoracic CT scan protocol, each performed with fixed tube current. The whole-body scan simulation yielded a predictable decrease in effective dose as a function of increasing patient weight. Results from analysis of individual organs demonstrated similar trends, but with some individual variations. A comparison with a conventional dose estimation method using the ImPACT spreadsheet yielded an effective dose of 0.14 mSv mAs(-1) for the whole-body scan. This result is lower than the simulations on the voxelized model designated 'Irene' (0.15 mSv mAs(-1)) and higher than the models 'Donna' and 'Golem' (0.12 mSv mAs(-1)). For the thoracic scan protocol, the ImPACT spreadsheet estimates an effective dose of 0.037 mSv mAs(-1), which falls between the calculated values for Irene (0.042 mSv mAs(-1)) and Donna (0.031 mSv mAs(-1)) and is higher relative
DEFF Research Database (Denmark)
Henriksen, Lars Christian; Hansen, Morten Hartvig; Poulsen, Niels Kjølstad
2013-01-01
Model-based state space controllers require knowledge of states, both measurable and unmeasurable, and state estimation algorithms are typically employed to obtain estimates of the unmeasurable states. For the control of wind turbines, a good estimate of the free mean wind speed is important ... for the closed-loop dynamics of the system, and an appropriate level of modelling detail is required to obtain good estimates of the free mean wind speed. In this work, three aerodynamic models based on blade element momentum theory are presented and compared with the aero-servo-elastic code HAWC2. The first ... in the aero-servo-elastic code HAWC2 compare the ability to estimate the free mean wind speed when either the first or third model is included in the estimation algorithm. Both a simplified example with a deterministic step in wind speed and full degrees-of-freedom simulations with turbulent wind fields...
Sasaki, S.; Yamada, T.
2013-12-01
The great earthquake struck the northeast area of Japan on March 11, 2011. The electrical facilities controlling the Fukushima Daiichi nuclear power station were completely destroyed by the tsunamis that followed. From the damaged reactor containment vessels, a quantity of radioactive substances leaked and was diffused in the vicinity of the station. Radiological internal exposure has become a serious social issue both in Japan and all over the world. The present study provides an easily understandable, kinematics-based model to estimate the effective dose of radioactive substances in a human body by simplifying the complicated mechanism of metabolism. The International Commission on Radiological Protection (ICRP) has developed an exact model, well known as the standard method to calculate the effective dose for radiological protection. However, because that method follows the actual mechanism of metabolism in human bodies so closely, it is rather difficult for people outside radiology to grasp the whole picture of the movement and influence of radioactive substances in a human body. Therefore, in the present paper we propose a newly derived and easily understandable model to estimate the effective dose. The method is very similar to the traditional and conventional hydrological tank model: the ingestion flux of radioactive substances corresponds to rain intensity, and the storage of radioactive substances to the water storage in a basin in runoff analysis. The key of the method is to estimate the energy radiated by the nuclear disintegration of an atom, using E. Fermi's classical theory of beta decay and special relativity, for various kinds of radioactive atoms. The only parameters used in this study are the physical half-life and the biological half-life; there are no tuning coefficients to adjust our theoretical result to the ICRP observations. Figure 1 compares time
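With only those two parameters, the physical and biological removal processes act in parallel, so their decay rates add and the half-times combine harmonically: 1/T_eff = 1/T_phys + 1/T_bio. A sketch using approximate textbook figures for Cs-137 (not values from this study):

```python
import math

# Effective half-life of a radionuclide in the body: decay rates add,
# so half-times combine harmonically.
def effective_half_life(t_phys, t_bio):
    return 1.0 / (1.0 / t_phys + 1.0 / t_bio)

# Fraction of an intake still retained after time t (same time units).
def retained_fraction(t, t_phys, t_bio):
    lam = math.log(2.0) / effective_half_life(t_phys, t_bio)
    return math.exp(-lam * t)

# Cs-137: ~30 y physical half-life, ~100 d biological half-time
# (approximate textbook figures, assumed here for illustration).
t_eff = effective_half_life(30 * 365.0, 100.0)
print(round(t_eff, 1))  # -> 99.1 (days): biological clearance dominates
```

The tank-model analogy then treats this retained fraction as the "water storage" draining from the body after each ingestion event.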
Brown, D J A; Doyle, A P; Gillon, M; Lendl, M; Anderson, D R; Cameron, A Collier; Hébrard, G; Hellier, C; Lovis, C; Maxted, P F L; Pepe, F; Pollacco, D; Queloz, D; Smalley, B
2016-01-01
We present new measurements of the projected spin-orbit angle $\lambda$ for six WASP hot Jupiters, four of which are new to the literature (WASP-61, -62, -76, and -78), and two of which are new analyses of previously measured systems using new data (WASP-71 and -79). We use three different models based on two different techniques: radial velocity measurements of the Rossiter-McLaughlin effect, and Doppler tomography. Our comparison of the different models reveals that they produce projected stellar rotation velocity ($v \sin I_{\rm s}$) measurements often in disagreement with each other and with estimates obtained from spectral line broadening. The Boué model for the Rossiter-McLaughlin effect consistently underestimates the value of $v \sin I_{\rm s}$ compared to the Hirano model. Although $v \sin I_{\rm s}$ differed, the effect on $\lambda$ was small for our sample, with all three methods producing values in agreement with each other. Using Doppler tomography, we find that WASP-61b ($\lambda = 4^\circ.0^{+...
Spatial scale effects on model parameter estimation and predictive uncertainty in ungauged basins
CSIR Research Space (South Africa)
Hughes, DA
2013-06-01
The most appropriate scale to use for hydrological modelling depends on the structure of the chosen model, the purpose of the results and the resolution of the available data used to quantify parameter values and provide the climatic forcing data...
The Office of Pesticide Programs models daily aquatic pesticide exposure values for 30 years in its risk assessments. However, only a fraction of that information is typically used in these assessments. The population model employed herein is a deterministic, density-dependent pe...
Banack, Hailey R; Kaufman, Jay S
2016-01-15
Obesity and smoking are independently associated with a higher mortality risk, but previous studies have reported conflicting results about the relationship between these 2 time-varying exposures. Using prospective longitudinal data (1987-2007) from the Atherosclerosis Risk in Communities Study, our objective in the present study was to estimate the joint effects of obesity and smoking on all-cause mortality and investigate whether there were additive or multiplicative interactions. We fit a joint marginal structural Poisson model to account for time-varying confounding affected by prior exposure to obesity and smoking. The incidence rate ratios from the joint model were 2.00 (95% confidence interval (CI): 1.79, 2.24) for the effect of smoking on mortality among nonobese persons, 1.31 (95% CI: 1.13, 1.51) for the effect of obesity on mortality among nonsmokers, and 1.97 (95% CI: 1.73, 2.22) for the joint effect of smoking and obesity on mortality. The negative product term from the exponential model revealed a submultiplicative interaction between obesity and smoking (β = -0.28, 95% CI: -0.45, -0.11; P obesity.
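The reported product term can be checked directly from the published rate ratios, since on the multiplicative scale the interaction coefficient is the log of the joint effect minus the logs of the two separate effects:

```python
import math

# Rate ratios reported in the abstract (smoking alone, obesity alone, joint).
rr_smoking, rr_obesity, rr_joint = 2.00, 1.31, 1.97

# Multiplicative-scale interaction: log(RR_joint) - log(RR_smk) - log(RR_obe).
beta = math.log(rr_joint) - math.log(rr_smoking) - math.log(rr_obesity)
print(round(beta, 2))  # -> -0.29, close to the reported -0.28 (the published
                       #    rate ratios are themselves rounded to 2 decimals)
```

The negative sign confirms the submultiplicative interaction: the joint rate ratio (1.97) is well below the product of the separate ones (2.00 × 1.31 ≈ 2.62).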
Jiang, Xuelian; Kang, Shaozhong; Tong, Ling; Li, Fusheng
2016-07-01
To estimate evapotranspiration (ET) of heterogeneous canopy of maize for seed production accurately, an ET model was developed based on effective resistance after field experiments were conducted from March to September in 2013 and 2014 in an arid region of northwest China. The effective resistance of maize including effective surface (rce) and aerodynamic (rae) resistance was estimated using different methods, and then the Penman-Monteith model (P-M model) based on effective resistance was used to estimate daily ET of maize over the whole growing stage. Results showed that when the fraction cover of the canopy (fc) = 1, the estimated rce by aggregating female and male canopy resistances in parallel, was closer to the measured rce (rcec), which was obtained by inverting the P-M model based on effective resistance using measured ET by the eddy covariance (EC) system. When fc maize for seed production in the arid region of northwest China.
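The effective-resistance form of the Penman-Monteith model described above can be sketched as follows; the parallel aggregation of the female and male canopy resistances follows the abstract, but all numeric inputs are illustrative placeholders, not the paper's calibrated values:

```python
def penman_monteith(Rn, G, vpd, delta, gamma, rho_cp, r_ae, r_ce):
    """Latent heat flux (W/m2) from the Penman-Monteith equation with an
    effective surface resistance r_ce and aerodynamic resistance r_ae (s/m)."""
    return (delta * (Rn - G) + rho_cp * vpd / r_ae) / (
        delta + gamma * (1.0 + r_ce / r_ae))

# Effective surface resistance of the two-component (female + male) canopy,
# aggregated in parallel as the abstract describes; values are illustrative.
r_female, r_male = 100.0, 200.0
r_ce = 1.0 / (1.0 / r_female + 1.0 / r_male)   # ~66.7 s/m, below either component

# Illustrative meteorological inputs (kPa-based psychrometric units)
le = penman_monteith(Rn=500.0, G=50.0, vpd=1.5, delta=0.145,
                     gamma=0.066, rho_cp=1216.0, r_ae=50.0, r_ce=r_ce)
print(round(le, 1))   # latent heat flux in W/m2
```

Raising the effective surface resistance lowers the estimated flux, which is the mechanism by which a mis-estimated r_ce biases daily ET.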
INTEGRATED SPEED ESTIMATION MODEL FOR MULTILANE EXPRESSWAYS
Hong, Sungjoon; Oguchi, Takashi
In this paper, an integrated speed-estimation model is developed based on empirical analyses for the basic sections of intercity multilane expressway under the uncongested condition. This model enables a speed estimation for each lane at any site under arbitrary highway-alignment, traffic (traffic flow and truck percentage), and rainfall conditions. By combining this model and a lane-use model which estimates traffic distribution on the lanes by each vehicle type, it is also possible to estimate an average speed across all the lanes of one direction from a traffic demand by vehicle type under specific highway-alignment and rainfall conditions. This model is expected to be a tool for the evaluation of traffic performance for expressways when the performance measure is travel speed, which is necessary for Performance-Oriented Highway Planning and Design. Regarding the highway-alignment condition, two new estimators, called effective horizontal curvature and effective vertical grade, are proposed in this paper which take into account the influence of upstream and downstream alignment conditions. They are applied to the speed-estimation model, and it shows increased accuracy of the estimation.
Model error estimation in ensemble data assimilation
Directory of Open Access Journals (Sweden)
S. Gillijns
2007-01-01
Full Text Available A new methodology is proposed to estimate and account for systematic model error in linear filtering as well as in nonlinear ensemble-based filtering. Our results extend the work of Dee and Todling (2000) on constant bias errors to time-varying model errors. In contrast to existing methodologies, the new filter can also deal with the case where no dynamical model for the systematic error is available. In the latter case, the applicability is limited by a matrix rank condition which has to be satisfied in order for the filter to exist. The performance of the filter developed in this paper is limited by the availability and the accuracy of observations and by the variance of the stochastic model error component. The effect of these aspects on the estimation accuracy is investigated in several numerical experiments using the Lorenz (1996) model. Experimental results indicate that the availability of a dynamical model for the systematic error significantly reduces the variance of the model error estimates, but has only a minor effect on the estimates of the system state. The filter is able to estimate additive model error of any type, provided that the rank condition is satisfied and that the stochastic errors and measurement errors are significantly smaller than the systematic errors. The results of this study are encouraging. However, it remains to be seen how the filter performs in more realistic applications.
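The constant-bias case discussed above (Dee and Todling, 2000) can be illustrated with a minimal state-augmentation Kalman filter, in which the unknown systematic error is appended to the state vector and estimated alongside it; the scalar system and noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
true_bias, q, r = 0.5, 0.01, 0.1          # invented scalar test problem

# Augmented state [x, b]: x_{k+1} = x_k + b + w_k, with b (the systematic
# model error) treated as an unknown constant estimated jointly with x.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])                # only x is observed
Q = np.diag([q, 0.0])                     # no process noise on the bias

x_true, xa, P = 0.0, np.zeros(2), np.eye(2)
for _ in range(500):
    x_true += true_bias + rng.normal(0, np.sqrt(q))
    y = x_true + rng.normal(0, np.sqrt(r))
    xa, P = F @ xa, F @ P @ F.T + Q       # forecast step
    S = H @ P @ H.T + r                   # innovation covariance
    K = P @ H.T / S                       # Kalman gain
    xa = xa + (K * (y - H @ xa)).ravel()  # analysis step
    P = (np.eye(2) - K @ H) @ P

print(xa[1])   # estimated systematic error, close to the true value 0.5
```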
J.S. Mandelblatt (Jeanne); K.A. Cronin (Kathleen); S. Bailey (Stephanie); D.A. Berry (Donald); H.J. de Koning (Harry); G. Draisma (Gerrit); H. Huang (Hailiang); S.J. Lee (Stephanie Joi); M.F. Munsell (Mark); S.K. Plevritis (Sylvia); P.M. Ravdin (P.); C.B. Schechter (Clyde); B. Sigal (Bronislava); M.A. Stoto (Michael); N.K. Stout (Natasha); N.T. van Ravesteyn (Nicolien); J. Venier (John); M. Zelen (Marvin); E. Feuer (Eric)
2009-01-01
Background: Despite trials of mammography and widespread use, optimal screening policy is controversial. Objective: To evaluate U.S. breast cancer screening strategies. Design: 6 models using common data elements. Data Sources: National data on age-specific incidence, competing mortality
2006-01-01
METHOD 2.1 Building the model. Using existing task analyses of navy sonar systems (Matthews, Greenley and Webb, 1991) and with the assistance of... Critical Operator Tasks. DRDC Toronto Report # CR-2003-131. Matthews, M.L., Greenley, M. and Webb, R.D.G. (1991). Presentation of Information from Towed
McCarthy, Robert J; Levine, Stephen H; Reed, J Michael
2013-08-15
To predict effectiveness of 3 interventional methods of population control for feral cat colonies. Population model. Estimates of vital data for feral cats. Data were gathered from the literature regarding the demography and mating behavior of feral cats. An individual-based stochastic simulation model was developed to evaluate the effectiveness of trap-neuter-release (TNR), lethal control, and trap-vasectomy-hysterectomy-release (TVHR) in decreasing the size of feral cat populations. TVHR outperformed both TNR and lethal control at all annual capture probabilities between 10% and 90%. Unless > 57% of cats were captured and neutered annually by TNR or removed by lethal control, there was minimal effect on population size. In contrast, with an annual capture rate of ≥ 35%, TVHR caused population size to decrease. An annual capture rate of 57% eliminated the modeled population in 4,000 days by use of TVHR, whereas > 82% was required for both TNR and lethal control. When the effect of fraction of adult cats neutered on kitten and young juvenile survival rate was included in the analysis, TNR performed progressively worse and could be counterproductive, such that population size increased, compared with no intervention at all. TVHR should be preferred over TNR for management of feral cats if decrease in population size is the goal. This model allowed for many factors related to the trapping program and cats to be varied and should be useful for determining the financial and person-effort commitments required to have a desired effect on a given feral cat population.
Institute of Scientific and Technical Information of China (English)
Nguyen ThanhSon; Guo Shuxu; Chen Haipeng
2013-01-01
Multipath arrivals in an Ultra-WideBand (UWB) channel have long time intervals between clusters and rays, where the signal takes on zero or negligible values. It is precisely this sparsity of the UWB channel impulse response that is exploited in this work, which addresses UWB channel estimation based on Compressed Sensing (CS). However, the multipath arrivals depend mainly on the channel environment, which produces different sparsity levels (low-sparse or high-sparse) in the UWB channel. On this basis, we analyze the two most basic recovery algorithms, one based on linear-programming Basis Pursuit (BP) and the other on the greedy Orthogonal Matching Pursuit (OMP) method, and choose the recovery algorithm best suited to the sparsity level of each type of channel environment. The results of this work also leave an open topic for further research aimed at creating an optimal algorithm specifically for CS-based UWB systems.
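The greedy OMP recovery the abstract compares against BP can be sketched in a few lines; the Gaussian measurement matrix and sparse vector below are synthetic stand-ins for an actual UWB channel impulse response:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # least-squares fit on the selected columns, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100)) / np.sqrt(40)   # random measurement matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 0.5]         # 3-sparse "channel"
x_hat = omp(A, A @ x_true, k=3)
print(np.max(np.abs(x_hat - x_true)))          # near zero in the noiseless case
```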
Designing a model to estimate the impact and cost effectiveness of online learning platforms
Meza-Bolaños, Doris Verónica; Compañ, Patricia; Satorre Cuerda, Rosana
2016-01-01
There are different methodologies that have attempted to measure profitability and impact of using online learning platforms, all of which focus on a set of quantifiable and non-quantifiable aspects. These methods consider different items from various perspectives. The purpose of this work is to assess each one and determine the suitable indicators applicable to the reality of higher education in Ecuador. A hybrid model with characteristics from different perspectives could be designed wit...
Estimating Price Effects in an Almost Ideal Demand Model of Outbound Thai Tourism to East Asia
C-L. Chang (Chia-Lin); T. Khamkaew (Tanchanok); M.J. McAleer (Michael)
2010-01-01
This paper analyzes the responsiveness of Thai outbound tourism to East Asian destinations, namely China, Hong Kong, Japan, Taiwan and Korea, to changes in the effective relative price of tourism, real total tourism expenditure, and one-off events. The nonlinear and linear Almost Ideal
Yang, Ji Seung; Cai, Li
2013-01-01
The main purpose of this study is to improve estimation efficiency in obtaining full-information maximum likelihood (FIML) estimates of contextual effects in the framework of a nonlinear multilevel latent variable model by adopting the Metropolis-Hastings Robbins-Monro algorithm (MH-RM; Cai, 2008, 2010a, 2010b). Results indicate that the MH-RM…
In this work, we address uncertainty analysis for a model, presented in a companion paper, quantifying the effect of soil moisture and plant development on soybean (Glycine max (L.) Merr.) leaf conductance. To achieve this we present several methods for confidence interval estimation. Estimation ...
Directory of Open Access Journals (Sweden)
Yanchun Li
Full Text Available Proper development of a seed requires coordinated exchanges of signals among the three components that develop side by side in the seed. One of these is the maternal integument that encloses the other two zygotic components, i.e., the diploid embryo and its nurturing annex, the triploid endosperm. Although the formation of the embryo and endosperm contains the contributions of both maternal and paternal parents, maternally and paternally derived alleles may be expressed differently, leading to a so-called parent-of-origin or imprinting effect. Currently, the nature of how genes from the maternal and zygotic genomes interact to affect seed development remains largely unknown. Here, we present a novel statistical model for estimating the main and interaction effects of quantitative trait loci (QTLs that are derived from different genomes and further testing the imprinting effects of these QTLs on seed development. The experimental design used is based on reciprocal backcrosses toward both parents, so that the inheritance of parent-specific alleles could be traced. The computing model and algorithm were implemented with the maximum likelihood approach. The new strategy presented was applied to study the mode of inheritance for QTLs that control endoreduplication traits in maize endosperm. Monte Carlo simulation studies were performed to investigate the statistical properties of the new model with the data simulated under different imprinting degrees. The false positive rate of imprinting QTL discovery by the model was examined by analyzing the simulated data that contain no imprinting QTL. The reciprocal design and a series of analytical and testing strategies proposed provide a standard procedure for genomic mapping of QTLs involved in the genetic control of complex seed development traits in flowering plants.
Brown, D. J. A.; Triaud, A. H. M. J.; Doyle, A. P.; Gillon, M.; Lendl, M.; Anderson, D. R.; Cameron, A. Collier; Hébrard, G.; Hellier, C.; Lovis, C.; Maxted, P. F. L.; Pepe, F.; Pollacco, D.; Queloz, D.; Smalley, B.
2016-09-01
We present new measurements of the projected spin-orbit angle λ for six WASP hot Jupiters, four of which are new to the literature (WASP-61, -62, -76, and -78), and two of which are new analyses of previously measured systems using new data (WASP-71 and -79). We use three different models based on two different techniques: radial velocity measurements of the Rossiter-McLaughlin effect, and Doppler tomography. Our comparison of the different models reveals that they produce projected stellar rotation velocity (v sin I_s) measurements often in disagreement with each other and with estimates obtained from spectral line broadening. The Boué model for the Rossiter-McLaughlin effect consistently underestimates the value of v sin I_s compared to the Hirano model. Although v sin I_s differed, the effect on λ was small for our sample, with all three methods producing values in agreement with each other. Using Doppler tomography, we find that WASP-61 b (λ = 4.0°, +17.1°/−18.4°), WASP-71 b (λ = −1.9°, +7.1°/−7.5°), and WASP-78 b (λ = −6.4° ± 5.9°) are aligned. WASP-62 b (λ = 19.4°, +5.1°/−4.9°) is found to be slightly misaligned, while WASP-79 b (λ = −95.2°, +0.9°/−1.0°) is confirmed to be strongly misaligned and has a retrograde orbit. We explore a range of possibilities for the orbit of WASP-76 b, finding that the orbit is likely to be strongly misaligned in the positive λ direction.
Mathieu, John E; Aguinis, Herman; Culpepper, Steven A; Chen, Gilad
2012-09-01
Cross-level interaction effects lie at the heart of multilevel contingency and interactionism theories. Researchers have often lamented the difficulty of finding hypothesized cross-level interactions, and to date there has been no means by which the statistical power of such tests can be evaluated. We develop such a method and report results of a large-scale simulation study, verify its accuracy, and provide evidence regarding the relative importance of factors that affect the power to detect cross-level interactions. Our results indicate that the statistical power to detect cross-level interactions is determined primarily by the magnitude of the cross-level interaction, the standard deviation of lower level slopes, and the lower and upper level sample sizes. We provide a Monte Carlo tool that enables researchers to a priori design more efficient multilevel studies and provides a means by which they can better interpret potential explanations for nonsignificant results. We conclude with recommendations for how scholars might design future multilevel studies that will lead to more accurate inferences regarding the presence of cross-level interactions.
Cheng, Linsong; Gu, Hao; Huang, Shijun
2016-11-01
The aim of this work is to present a comprehensive mathematical model for estimating the oil drainage rate in the steam-assisted gravity drainage (SAGD) process in which, importantly, the wellbore/formation coupling effect is considered. Firstly, mass and heat transfer in vertical and horizontal wellbores are described briefly. Then, a function of steam chamber height is introduced and the expressions for oil drainage rate in the rising and expanding steam chamber stages are derived in detail. Next, a calculation flowchart is provided and an example is given to show how to use the proposed method. Finally, after the mathematical model is validated, the effects of wellhead steam injection rate on simulated results are further analyzed. The results indicate that heat injection power per meter reduces gradually along the horizontal wellbore, which affects both steam chamber height and oil drainage rate in the SAGD process. In addition, for the same production time, the oil drainage rate calculated by the new method is lower than that from Butler's method. Moreover, the paper shows that when the wellhead steam injection rate is low enough, the steam chamber does not form at the horizontal well's toe position, and enhancing the wellhead steam injection rate can increase the oil drainage rate.
Algebraic Lens Distortion Model Estimation
Directory of Open Access Journals (Sweden)
Luis Alvarez
2010-07-01
Full Text Available A very important property of the usual pinhole model for camera projection is that 3D lines in the scene are projected to 2D lines. Unfortunately, wide-angle lenses (especially low-cost lenses) may introduce a strong barrel distortion, which makes the usual pinhole model fail. Lens distortion models try to correct such distortion. We propose an algebraic approach to the estimation of the lens distortion parameters based on the rectification of lines in the image. Using the proposed method, the lens distortion parameters are obtained by minimizing a polynomial of total degree 4 in several variables. We perform numerical experiments using calibration patterns and real scenes to show the performance of the proposed method.
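The line-rectification idea can be illustrated with a toy one-parameter radial model and a simple 1-D search (the paper itself minimizes a degree-4 polynomial algebraically; this sketch only mirrors the straightness criterion, and all values are synthetic):

```python
import numpy as np

def undistort(pd, k, iters=25):
    """Invert the one-parameter radial model p_d = p_u * (1 + k * |p_u|^2)
    by fixed-point iteration (illustrative, not the paper's algebraic solver)."""
    pu = pd.copy()
    for _ in range(iters):
        r2 = np.sum(pu**2, axis=1, keepdims=True)
        pu = pd / (1.0 + k * r2)
    return pu

def line_rms(pts):
    """RMS distance of points to their total-least-squares line."""
    c = pts - pts.mean(axis=0)
    return np.linalg.svd(c, full_matrices=False)[1][-1] / np.sqrt(len(pts))

# A straight calibration line, barrel-distorted with a known k_true
k_true = 0.05
pu = np.column_stack([np.linspace(-1, 1, 50), np.full(50, 0.6)])
pd = pu * (1.0 + k_true * np.sum(pu**2, axis=1, keepdims=True))

# Estimate k as the value that best re-straightens the line
ks = np.linspace(0.0, 0.2, 201)
k_best = min(ks, key=lambda k: line_rms(undistort(pd, k)))
print(k_best)   # ~0.05
```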
Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H
2016-01-01
The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivat...
Jhorar, R.K.
2002-01-01
Key words: evapotranspiration, effective soil hydraulic parameters, remote sensing, regional water management, groundwater use, Bhakra Irrigation System, India. The meaningful application of water management simulation models at regional scale for the analysis of alternate water manage
Estimating haplotype effects for survival data
DEFF Research Database (Denmark)
Scheike, Thomas; Martinussen, Torben; Silver, J
2010-01-01
Genetic association studies often investigate the effect of haplotypes on an outcome of interest. Haplotypes are not observed directly, and this complicates the inclusion of such effects in survival models. We describe a new estimating equations approach for Cox's regression model to assess haplo...
Directory of Open Access Journals (Sweden)
M Wahab Amjad
Full Text Available Biomolecules have been widely investigated as potential therapeutics for various diseases. However, their use is limited by rapid degradation and poor cellular uptake in vitro and in vivo. To address this issue, we synthesized a new nano-carrier system comprising cholic acid-polyethylenimine (CA-PEI) copolymer micelles, via carbodiimide-mediated coupling, for the efficient delivery of small interfering ribonucleic acid (siRNA) and bovine serum albumin (BSA) as a model protein. The mean particle size of siRNA- or BSA-loaded CA-PEI micelles ranged from 100-150 nm, with zeta potentials of +3 to +11 mV, respectively. Atomic force, transmission electron and field emission scanning electron microscopy demonstrated that the micelles exhibited excellent spherical morphology. No significant morphology or size changes were observed in the CA-PEI micelles after siRNA and BSA loading. CA-PEI micelles exhibited a sustained release profile; the effective diffusion coefficients were successfully estimated using a mathematically derived cylindrical diffusion model, and the release data of siRNA and BSA closely fitted this model. High siRNA and BSA binding and loading efficiencies (95% and 70%, respectively) were observed for CA-PEI micelles. Stability studies demonstrated that siRNA and BSA integrity was maintained after loading and release. The CA-PEI micelles were noncytotoxic to V79 and DLD-1 cells, as shown by alamarBlue and LIVE/DEAD cell viability assays. An RT-PCR study revealed that siRNA-loaded CA-PEI micelles suppressed the mRNA for the ABCB1 gene. These results reveal the promising potential of CA-PEI micelles as a stable, safe, and versatile nano-carrier for siRNA and model protein delivery.
Beloconi, Anton; Benas, Nikolaos; Chrysoulakis, Nektarios; Kamarianakis, Yiannis
2008-11-01
Linear mixed effects models were developed for the estimation of the average daily Particulate Matter (PM) concentration spatial distribution over the area of Greater London (UK). Both fine (PM2.5) and coarse (PM10) concentrations were predicted for the 2002-2012 time period, based on satellite data. The latter included Aerosol Optical Thickness (AOT) at 3×3 km spatial resolution, as well as the Surface Relative Humidity, Surface Temperature and K-Index derived from the MODIS (Moderate Resolution Imaging Spectroradiometer) sensor. For a meaningful interpretation of the association among these variables, all data were homogenized with regard to spatial support and geographic projection, thus addressing the change of support problem and leading to a valid statistical inference. To this end, spatial (2D) and spatio-temporal (3D) kriging techniques were applied to in-situ particulate matter concentrations and the leave-one-station-out cross-validation was performed on a daily level to gauge the quality of the predictions. Satellite-derived covariates displayed clear seasonal patterns; in order to work with data which is stationary in mean, for each covariate, deviations from its estimated annual profiles were computed using nonlinear least squares and nonlinear absolute deviations. High-resolution land-cover and morphology static datasets were additionally incorporated in the analysis in order to catch the effects of nearby emission sources and sequestration sites. For pairwise comparisons of the particulate matter concentration means at distinct land-cover classes, the pairwise comparisons method for unequal sample sizes, known as Tukey's method, was performed. The use of satellite-derived products allowed better assessment of space-time interactions of PM, since these daily spatial measurements were able to capture differences in PM concentrations between grid cells, while the use of high-resolution land-cover and morphology static datasets allowed accounting for
Developing Physician Migration Estimates for Workforce Models.
Holmes, George M; Fraher, Erin P
2017-02-01
To understand factors affecting specialty heterogeneity in physician migration. Physicians in the 2009 American Medical Association Masterfile data were matched to those in the 2013 file. Office locations were geocoded in both years to one of 293 areas of the country. Estimated utilization, calculated for each specialty, was used as the primary predictor of migration. Physician characteristics (e.g., specialty, age, sex) were obtained from the 2009 file. Area characteristics and other factors influencing physician migration (e.g., rurality, presence of teaching hospital) were obtained from various sources. We modeled physician location decisions as a two-part process: First, the physician decides whether to move. Second, conditional on moving, a conditional logit model estimates the probability a physician moved to a particular area. Separate models were estimated by specialty and whether the physician was a resident. Results differed between specialties and according to whether the physician was a resident in 2009, indicating heterogeneity in responsiveness to policies. Physician migration was higher between geographically proximate states with higher utilization for that specialty. Models can be used to estimate specialty-specific migration patterns for more accurate workforce modeling, including simulations to model the effect of policy changes. © Health Research and Educational Trust.
FREQUENTIST MODEL AVERAGING ESTIMATION: A REVIEW
Institute of Scientific and Technical Information of China (English)
Haiying WANG; Xinyu ZHANG; Guohua ZOU
2009-01-01
In applications, the traditional estimation procedure generally begins with model selection. Once a specific model is selected, subsequent estimation is conducted under the selected model without consideration of the uncertainty from the selection process. This often leads to underreporting of variability and overly optimistic confidence sets. Model averaging estimation is an alternative to this procedure, which incorporates model uncertainty into the estimation process. In recent years, there has been rising interest in model averaging from the frequentist perspective, and important progress has been made. In this paper, the theory and methods of frequentist model averaging estimation are surveyed. Some future research topics are also discussed.
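A minimal frequentist model-averaging example uses smoothed-AIC weights over two nested regression models; the weighting scheme and data below are illustrative (the survey covers a broader family of estimators):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 2.0 * x1 + rng.normal(size=n)        # x2 is irrelevant

def fit_aic(X, y):
    """OLS fit; return coefficients and Gaussian AIC (up to a constant)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return beta, n * np.log(rss / n) + 2 * X.shape[1]

X1 = np.column_stack([np.ones(n), x1])          # model 1: intercept + x1
X2 = np.column_stack([np.ones(n), x1, x2])      # model 2: adds x2
(b1, a1), (b2, a2) = fit_aic(X1, y), fit_aic(X2, y)

# Smoothed-AIC weights: w_m proportional to exp(-AIC_m / 2)
d = np.array([a1, a2]) - min(a1, a2)
w = np.exp(-d / 2) / np.sum(np.exp(-d / 2))
slope_avg = w[0] * b1[1] + w[1] * b2[1]         # model-averaged slope for x1
print(w, slope_avg)
```

Rather than committing to the selected model, the averaged estimate carries both models' contributions weighted by their relative support.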
Estimation of health effects for PM10 exposure using of Air Q model in Ahvaz City during 2009
Directory of Open Access Journals (Sweden)
Gh. Goudarzi
2015-08-01
Conclusion: Using data processed in Excel, the AirQ software calculates relative risks, attributable proportions, and baseline incidence, and the final output is displayed in the form of a death toll. It is noteworthy that no model can estimate the effect of all pollutants together and simultaneously. In addition, it was found that the annual PM10 concentration mean, the summer mean, the winter mean, and the 98th percentile were 261, 376, 170, and 1268 μg/m3 in Ahvaz City. The cumulative number of deaths attributed to PM10 exposure was 1165 in 2009; of these, 44% occurred on days with concentrations lower than 250 μg/m3, and 62% on days with concentrations below 350 μg/m3. The total cumulative number of cardiovascular deaths attributed to PM10 exposure during one year of monitoring was 612; 52% of these cases occurred on days with PM10 levels not exceeding 300 μg/m3. The cumulative number of hospital admissions for respiratory disease attributed to PM10 exposure during one year of monitoring was 1551; of these, 36% occurred on days with PM10 levels not exceeding 200 μg/m3.
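An AirQ-style attributable-impact calculation follows a standard attributable-proportion formula over the exposure distribution; the concentration bands, risk slope, and baseline rate below are invented placeholders, not the Ahvaz inputs:

```python
import numpy as np

# Hypothetical PM10 exposure distribution: concentration-band midpoints and
# the fraction of person-time spent in each band (NOT the Ahvaz data).
conc = np.array([50.0, 150.0, 250.0, 350.0, 450.0])   # ug/m3
frac = np.array([0.30, 0.30, 0.20, 0.15, 0.05])

rr_per_10 = 1.008      # assumed relative risk per 10 ug/m3 increment
baseline_conc = 10.0   # counterfactual "clean air" concentration

# Relative risk in each band, then the population attributable proportion
rr = rr_per_10 ** ((conc - baseline_conc) / 10.0)
ap = np.sum((rr - 1.0) * frac) / np.sum(rr * frac)

baseline_rate = 5.4e-3   # deaths per person-year (illustrative)
population = 1_000_000
excess_deaths = ap * baseline_rate * population
print(round(ap, 3), round(excess_deaths))
```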
Estimating Stochastic Volatility Models using Prediction-based Estimating Functions
DEFF Research Database (Denmark)
Lunde, Asger; Brix, Anne Floor
In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF-based estimator is investigated in a Monte Carlo study, and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from
Astolfi, Laura; Cincotti, Febo; Mattia, Donatella; Salinari, Serenella; Babiloni, Claudio; Basilisco, Alessandra; Rossini, Paolo Maria; Ding, Lei; Ni, Ying; He, Bin; Marciani, Maria Grazia; Babiloni, Fabio
2004-12-01
Different brain imaging devices are presently available to provide images of the human functional cortical activity, based on hemodynamic, metabolic or electromagnetic measurements. However, static images of brain regions activated during particular tasks do not convey the information of how these regions are interconnected. The concept of brain connectivity plays a central role in the neuroscience, and different definitions of connectivity, functional and effective, have been adopted in literature. While the functional connectivity is defined as the temporal coherence among the activities of different brain areas, the effective connectivity is defined as the simplest brain circuit that would produce the same temporal relationship as observed experimentally among cortical sites. The structural equation modeling (SEM) is the most used method to estimate effective connectivity in neuroscience, and its typical application is on data related to brain hemodynamic behavior tested by functional magnetic resonance imaging (fMRI), whereas the directed transfer function (DTF) method is a frequency-domain approach based on both a multivariate autoregressive (MVAR) modeling of time series and on the concept of Granger causality. This study presents advanced methods for the estimation of cortical connectivity by applying SEM and DTF on the cortical signals estimated from high-resolution electroencephalography (EEG) recordings, since these signals exhibit a higher spatial resolution than conventional cerebral electromagnetic measures. To estimate correctly the cortical signals, we used a subject's multicompartment head model (scalp, skull, dura mater, cortex) constructed from individual MRI, a distributed source model and a regularized linear inverse source estimates of cortical current density. Before the application of SEM and DTF methodology to the cortical waveforms estimated from high-resolution EEG data, we performed a simulation study, in which different main factors
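The DTF computation described above, i.e. the transfer matrix of a fitted MVAR model normalized row-wise in the frequency domain, can be sketched for a toy two-channel system with known coefficients (no model fitting or EEG source estimation is shown):

```python
import numpy as np

def dtf(A_coeffs, freqs, fs=1.0):
    """Normalized directed transfer function from MVAR coefficients.
    A_coeffs: array (p, n, n) for the model y_t = sum_k A_k y_{t-k} + e_t."""
    p, n, _ = A_coeffs.shape
    out = np.empty((len(freqs), n, n))
    for i, f in enumerate(freqs):
        Af = np.eye(n, dtype=complex)
        for k in range(p):
            Af -= A_coeffs[k] * np.exp(-2j * np.pi * f * (k + 1) / fs)
        H = np.linalg.inv(Af)                       # transfer matrix
        P = np.abs(H) ** 2
        out[i] = P / P.sum(axis=1, keepdims=True)   # normalize each row
    return out

# Two-channel AR(1) in which channel 0 drives channel 1 (illustrative values)
A1 = np.array([[[0.5, 0.0],
                [0.4, 0.3]]])
g = dtf(A1, freqs=np.linspace(0.01, 0.49, 25))
print(g[0])   # rows sum to 1; the 1<-0 entry is nonzero, the 0<-1 entry is ~0
```

The asymmetry of the normalized matrix (influence from channel 0 to 1, none in reverse) is exactly the directed, Granger-causal information DTF is designed to expose.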
Directory of Open Access Journals (Sweden)
Bouix Jacques
2001-07-01
Full Text Available Simulations were used to study the influence of model adequacy and data structure on the estimation of genetic parameters for traits governed by direct and maternal effects. To test model adequacy, several data sets were simulated according to different underlying genetic assumptions and analysed by comparing the correct and incorrect models. Results showed that omission of one of the random effects leads to an incorrect decomposition of the other components. If maternal genetic effects exist but are neglected, direct heritability is overestimated, sometimes by more than a factor of two. The bias depends on the value of the genetic correlation between direct and maternal effects. To study the influence of data structure on the estimation of genetic parameters, several populations were simulated, with different degrees of known paternity and different levels of genetic connectedness between flocks. Results showed that the lack of connectedness affects estimates when flocks have different genetic means, because no distinction can be made between genetic and environmental differences between flocks. In this case, direct and maternal heritabilities are underestimated, whereas maternal environmental effects are overestimated. Insufficient pedigree information leads to biased estimates of genetic parameters.
Westcott, David A; Bentrupperbäumer, Joan; Bradford, Matt G; McKeown, Adam
2005-11-01
The processes determining where seeds fall relative to their parent plant influence the spatial structure and dynamics of plant populations and communities. For animal dispersed species the factors influencing seed shadows are poorly understood. In this paper we test the hypothesis that the daily temporal distribution of disperser behaviours, for example, foraging and movement, influences dispersal outcomes, in particular the shape and scale of dispersal curves. To do this, we describe frugivory and the dispersal curves produced by the southern cassowary, Casuarius casuarius, the only large-bodied disperser in Australia's rainforests. We found C. casuarius consumed fruits of 238 species and of all fleshy-fruit types. In feeding trials, seeds of 11 species were retained on average for 309 min (+/-256 SD). Sampling radio-telemetry data randomly, that is, assuming foraging occurs at random times during the day, gives an estimated average dispersal distance of 239 m (+/-207 SD) for seeds consumed by C. casuarius. Approximately 4% of seeds were dispersed further than 1,000 m. However, observation of wild birds indicated that foraging and movement occur more frequently early and late in the day. Seeds consumed early in the day were estimated to receive dispersal distances 1.4 times the 'random' average estimate, while afternoon consumed seeds received estimated mean dispersal distances of 0.46 times the 'random' estimate. Sampling movement data according to the daily distribution of C. casuarius foraging gives an estimated mean dispersal distance of 337 m (+/-194 SD). Most animals' behaviour has a non-random temporal distribution. Consequently such effects should be common and need to be incorporated into seed shadow estimation. Our results point to dispersal curves being an emergent property of the plant-disperser interaction rather than being a property of a plant or species.
Energy Technology Data Exchange (ETDEWEB)
Robertson, Neil M. [44 Ardgowan Street, Greenock PA16 8EL (United Kingdom). E-mail: neil.robertson@physics.org]; Diaz-Gomez, Manuel [Plaza Alcalde Horacio Hermoso, 2, 3-A 41013 Seville (Spain). E-mail: manolo-diaz@latinmail.com]; Condon, Barrie [Department of Clinical Physics, Institute of Neurological Sciences, Glasgow G51 4TF (United Kingdom). E-mail: barrie.condon@udcf.gla.ac.uk]
2000-12-01
Mitral and aortic valve replacement is a procedure which is common in cardiac surgery. Some of these replacement valves are mechanical and contain moving metal parts. Should the patient in whom such a valve has been implanted be involved in magnetic resonance imaging, there is a possible dangerous interaction between the moving metal parts and the static magnetic field due to the Lenz effect. Mathematical models of two relatively common forms of single-leaflet valves have been derived and the magnitude of the torque which opposes the motion of the valve leaflet has been calculated for a valve disc of solid metal. In addition, a differential model of a ring-strengthener valve type has been considered to determine the likely significance of the Lenz effect in the context of the human heart. For common magnetic field strengths at present, i.e. 1 to 2 T, the effect is not particularly significant. However, there is a marked increase in back pressure as static magnetic field strength increases. There are concerns that, since field strengths in the range 3 to 4 T are increasingly being used, the Lenz effect could become significant. At 5 to 10 T the malfunction of the mechanical heart valve could cause the heart to behave as though it is diseased. For unhealthy or old patients this could possibly prove fatal. (author)
Jiang, Xiaoqi; Kopp-Schneider, Annette
2014-05-01
Dose-response studies are performed to investigate the potency of a compound. EC50 is the concentration of the compound that gives half-maximal response. Dose-response data are typically evaluated by using a log-logistic model that includes EC50 as one of the model parameters. Often, more than one experiment is carried out to determine the EC50 value for a compound, requiring summarization of EC50 estimates from a series of experiments. In this context, mixed-effects models are designed to estimate the average behavior of EC50 values over all experiments by considering the variabilities within and among experiments simultaneously. However, fitting nonlinear mixed-effects models is more complicated than in a linear situation, and convergence problems are often encountered. An alternative strategy is the application of a meta-analysis approach, which combines EC50 estimates obtained from separate log-logistic model fits. These two proposed strategies to summarize EC50 estimates from multiple experiments are compared in a simulation study and a real-data example. We conclude that the meta-analysis strategy is a simple and robust method to summarize EC50 estimates from multiple experiments, especially suited to the case of a small number of experiments.
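The meta-analysis strategy can be sketched as an inverse-variance combination of per-experiment log-EC50 estimates. A minimal illustration, assuming each experiment's log-logistic fit has already produced an estimate and standard error; the DerSimonian-Laird moment estimator used here for between-experiment heterogeneity is a common generic choice, not necessarily the one used in the paper:

```python
import numpy as np

def meta_combine_ec50(log_ec50, se, random_effects=True):
    """Combine per-experiment log-EC50 estimates by inverse-variance weighting.

    log_ec50 : per-experiment estimates on the log scale (stabilises variance)
    se       : their standard errors from separate log-logistic fits
    Uses the DerSimonian-Laird moment estimator for between-experiment
    heterogeneity when random_effects=True.
    """
    y, v = np.asarray(log_ec50, float), np.asarray(se, float) ** 2
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    tau2 = 0.0
    if random_effects:
        q = np.sum(w * (y - mu_fixed) ** 2)            # Cochran's Q statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-experiment variance
    w_star = 1.0 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    se_mu = np.sqrt(1.0 / np.sum(w_star))
    return mu, se_mu, tau2
```

The combined estimate can then be back-transformed with `exp(mu)` to report a pooled EC50 on the original concentration scale.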
Chaves, Luciano Eustáquio; Nascimento, Luiz Fernando Costa; Rizol, Paloma Maria Silva Rocha
2017-06-22
Objective: to predict the number of hospitalizations for asthma and pneumonia associated with exposure to air pollutants in the city of São José dos Campos, São Paulo State. This is a computational model using fuzzy logic based on Mamdani's inference method. For the fuzzification of the input variables (particulate matter, ozone, sulfur dioxide and apparent temperature), we considered two membership functions for each variable, with the linguistic labels good and bad. For the output variable, number of hospitalizations for asthma and pneumonia, we considered five membership functions: very low, low, medium, high and very high. DATASUS was our source for the number of hospitalizations in 2007, and the model output was correlated with the actual hospitalization data at lags of zero to two days. The accuracy of the model was estimated by the ROC curve for each pollutant at each lag. In 2007, 1,710 hospitalizations for pneumonia and asthma were recorded in São José dos Campos, with a daily average of 4.9 hospitalizations (SD = 2.9). The model output showed a positive and significant correlation (r = 0.38) with the actual data; accuracy was higher for sulfur dioxide at lags 0 and 2 and for particulate matter at lag 1. Fuzzy modeling proved accurate for relating pollutant exposure to hospitalizations for pneumonia and asthma.
Kalman filter estimation model in flood forecasting
Husain, Tahir
Elementary precipitation and runoff estimation problems associated with hydrologic data collection networks are formulated in conjunction with the Kalman filter estimation model. Examples involve the estimation of runoff using data from a single precipitation station and also from a number of precipitation stations. The formulations demonstrate the role of the state-space, measurement, and estimation equations of the Kalman filter model in flood forecasting. To facilitate the formulation, the unit hydrograph concept and the antecedent precipitation index are adopted in the estimation model. The methodology is then applied to estimate various flood events in the Carnation Creek of British Columbia.
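The state-space formulation can be illustrated with a minimal scalar Kalman filter for a hypothetical linear rainfall-runoff system; the recession coefficient `a`, runoff coefficient `b`, and noise variances below are invented for illustration and are not taken from the paper:

```python
def kalman_runoff(precip, z, a=0.9, b=0.4, q=1.0, r=4.0, x0=0.0, p0=10.0):
    """Scalar Kalman filter for runoff estimation (illustrative values only).

    State equation:   x_k = a*x_{k-1} + b*u_k + w,  w ~ N(0, q)
      (x: runoff, u: precipitation input, a: recession, b: runoff coefficient)
    Measurement:      z_k = x_k + v,                v ~ N(0, r)
    """
    x, p, out = x0, p0, []
    for u, zk in zip(precip, z):
        # predict from the state equation
        x = a * x + b * u
        p = a * p * a + q
        # update with the gauge measurement
        k = p / (p + r)                 # Kalman gain
        x = x + k * (zk - x)
        p = (1 - k) * p
        out.append(x)
    return out
```

Each filtered estimate lies between the model prediction and the gauge reading, weighted by the Kalman gain; with several precipitation stations, `x`, `p` and the gain become vectors and matrices but the predict/update cycle is unchanged.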
Discrete Choice Models - Estimation of Passenger Traffic
DEFF Research Database (Denmark)
Sørensen, Majken Vildrik
2003-01-01
The method simultaneously finds optimal coefficient values (utility elements) and parameter values (distributed terms) in the utility function. The shape of the distributed terms is specified prior to estimation; hence, its validity is not tested during estimation. After a description of the data, a literature review follows: models applied for estimation of discrete choice models are described by their properties and limitations, and relations between them are established. Model types are grouped into three classes: hybrid choice models, tree models and latent class models. The proposed method assesses the shape of the distribution from the data by means of repetitive model estimation; in particular, one model is estimated for each sub-sample of data, and the shape of the distributions is assessed from between-model comparisons. This is not to be regarded as an alternative to MSL estimation, but rather as a complement to it.
Fernández, E N; Legarra, A; Martínez, R; Sánchez, J P; Baselga, M
2017-06-01
Inbreeding generates covariances between additive and dominance effects (breeding values and dominance deviations). In this work, we developed and applied models for estimation of dominance and additive genetic variances and their covariance, a model that we call "full dominance," from pedigree and phenotypic data. Estimates with this model such as presented here are very scarce both in livestock and in wild genetics. First, we estimated pedigree-based condensed probabilities of identity using recursion. Second, we developed an equivalent linear model in which variance components can be estimated using closed-form algorithms such as REML or Gibbs sampling and existing software. Third, we present a new method to refer the estimated variance components to meaningful parameters in a particular population, i.e., final partially inbred generations as opposed to outbred base populations. We applied these developments to three closed rabbit lines (A, V and H) selected for number of weaned at the Polytechnic University of Valencia. Pedigree and phenotypes are complete and span 43, 39 and 14 generations, respectively. Estimates of broad-sense heritability are 0.07, 0.07 and 0.05 at the base versus 0.07, 0.07 and 0.09 in the final generations. Narrow-sense heritability estimates are 0.06, 0.06 and 0.02 at the base versus 0.04, 0.04 and 0.01 at the final generations. There is also a reduction in the genotypic variance due to the negative additive-dominance correlation. Thus, the contribution of dominance variation is fairly large and increases with inbreeding and (over)compensates for the loss in additive variation. In addition, estimates of the additive-dominance correlation are -0.37, -0.31 and 0.00, in agreement with the few published estimates and theoretical considerations. © 2017 Blackwell Verlag GmbH.
Directory of Open Access Journals (Sweden)
Ingo W Nader
Parameters of the two-parameter logistic model are generally estimated via the expectation-maximization algorithm, which improves initial values for all parameters iteratively until convergence is reached. Effects of initial values are rarely discussed in item response theory (IRT), but initial values were recently found to affect item parameters when estimating the latent distribution with full non-parametric maximum likelihood. However, this method is rarely used in practice. Hence, the present study investigated effects of initial values on item parameter bias and on recovery of item characteristic curves in BILOG-MG 3, a widely used IRT software package. Results showed notable effects of initial values on item parameters. For tighter convergence criteria, effects of initial values decreased, but item parameter bias increased, and the recovery of the latent distribution worsened. For practical application, it is advised to use the BILOG default convergence criterion with appropriate initial values when estimating the latent distribution from data.
Hirai, Toshinori; Kimura, Toshimi; Echizen, Hirotoshi
2016-01-01
Whether renal dysfunction influences the hypouricemic effect of febuxostat, a xanthine oxidase (XO) inhibitor, in patients with hyperuricemia due to overproduction or underexcretion of uric acid (UA) remains unclear. We aimed to address this question with a modeling and simulation approach. The pharmacokinetics (PK) of febuxostat were analyzed using data from the literature. A kinetic model of UA was retrieved from a previous human study. Renal UA clearance was estimated as a function of creatinine clearance (CLcr) but non-renal UA clearance was assumed constant. A reversible inhibition model for bovine XO was adopted. Integrating these kinetic formulas, we developed a PK-pharmacodynamic (PK-PD) model for estimating the time course of the hypouricemic effect of febuxostat as a function of baseline UA level, febuxostat dose, treatment duration, body weight, and CLcr. Using the Monte Carlo simulation method, we examined the performance of the model by comparing predicted UA levels with those reported in the literature. We also modified the models for application to hyperuricemia due to UA overproduction or underexcretion. Thirty-nine data sets comprising 735 volunteers or patients were retrieved from the literature. A good correlation was observed between the hypouricemic effects of febuxostat estimated by our PK-PD model and those reported in the articles (observed) (r=0.89, p<0.001). The hypouricemic effect was estimated to be augmented in patients with renal dysfunction irrespective of the etiology of hyperuricemia. While validation in clinical studies is needed, the modeling and simulation approach may be useful for individualizing febuxostat doses in patients with various clinical characteristics.
Parameter estimation, model reduction and quantum filtering
Chase, Bradley A.
rise to effective non-linearities which enhance the effect of Larmor precession, allowing for improved magnetic field estimation. I then turn to the topic of model reduction, which is the search for a reduced computational model of a dynamical system. This is a particularly important task for quantum mechanical systems, whose state grows exponentially in the number of subsystems. In the quantum filtering setting, I study the use of model reduction in developing a feedback controller for continuous-time quantum error correction. By studying the propagation of errors in a noisy quantum memory, I present a computational model which scales polynomially, rather than exponentially, in the number of physical qubits of the system. Although inexact, a feedback controller using this model performs almost indistinguishably from one using the full model. I finally review an exact but polynomial model of collective qubit systems undergoing arbitrary symmetric dynamics which allows for the efficient simulation of spontaneous emission and related open quantum system phenomena.
Energy Technology Data Exchange (ETDEWEB)
O'Brien, R.S. [Australian Radiation Lab., Melbourne, VIC (Australia)]
1996-01-01
This paper discusses some of the implications of using the new ICRP 66 respiratory tract model for calculation of the committed effective dose (CED), for a period of 50 years post-intake, together with the annual limit on intake (ALI), for radioactive dusts encountered in the uranium and mineral sand mining and processing industries. Some of the differences between the old ICRP 30 respiratory tract model and the LUDEP 1.1 computer code, which is based on the new ICRP 66 respiratory tract model, are discussed, and a comparison of values obtained using both models is given. 4 figs; 8 tabs; 16 refs.
Entropy Based Modelling for Estimating Demographic Trends.
Directory of Open Access Journals (Sweden)
Guoqi Li
In this paper, an entropy-based method is proposed to forecast the demographic changes of countries. We formulate the estimation of future demographic profiles as a constrained optimization problem, anchored on the empirically validated assumption that the entropy of the age distribution increases over time. The proposed method involves three stages: (1) prediction of the age distribution of a country's population based on an "age-structured population model"; (2) estimation of the age distribution of each individual household size with an entropy-based formulation based on an "individual household size model"; and (3) estimation of the number of each household size based on a "total household size model". The last stage is achieved by projecting the age distribution of the country's population (obtained in stage 1) onto the age distributions of individual household sizes (obtained in stage 2). The effectiveness of the proposed method is demonstrated on real-world data, and it is general and versatile enough to be extended to other time-dependent demographic variables.
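The anchoring assumption, that the entropy of an age distribution rises as the population pyramid flattens, can be illustrated with a small sketch; the example distributions below are invented, not taken from the paper:

```python
import numpy as np

def age_entropy(counts):
    """Shannon entropy of an age distribution (a flatter pyramid -> higher entropy)."""
    p = np.asarray(counts, float)
    p = p / p.sum()          # normalise counts to proportions
    p = p[p > 0]             # 0 * log(0) is taken as 0
    return float(-(p * np.log(p)).sum())

# A uniform (aged, flat) pyramid versus a young, bottom-heavy one
H_flat = age_entropy([1, 1, 1, 1])
H_young = age_entropy([4, 2, 1, 1])
```

The uniform distribution attains the maximum entropy log(n), so a projection constrained to non-decreasing entropy encodes the ageing, flattening trend the paper assumes.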
Outlier Rejecting Multirate Model for State Estimation
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
The wavelet transform was introduced to detect and eliminate outliers in the time-frequency domain. By incorporating both outlier rejection and multirate information extraction through the wavelet transform, a new outlier-rejecting multirate model for state estimation was proposed. The model is applied to state estimation with an interacting multiple model; as outliers are eliminated and more reasonable multirate information is extracted, the estimation accuracy is greatly enhanced. Simulation results show that the new model is robust to outliers and that estimation performance is significantly improved.
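As a rough sketch of the underlying idea, an outlier stands out as an unusually large first-level wavelet detail coefficient. The Haar-based detector below, including its MAD threshold rule and constant `k`, is our own illustration and not the paper's algorithm:

```python
import numpy as np

def haar_outliers(x, k=5.0):
    """Flag outlier samples from first-level Haar detail coefficients.

    A spike in the signal produces an unusually large |detail|; details are
    thresholded at k times the median absolute deviation (MAD).
    """
    x = np.asarray(x, float)
    n = len(x) - len(x) % 2                        # use an even number of samples
    detail = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)    # first-level Haar details
    mad = np.median(np.abs(detail - np.median(detail))) + 1e-12
    bad_pairs = np.where(np.abs(detail) > k * mad)[0]
    # each flagged detail coefficient points at a pair of neighbouring samples
    return np.sort(np.concatenate([2 * bad_pairs, 2 * bad_pairs + 1]))
```

Flagged samples would then be replaced (e.g. by interpolation) before the multirate state estimator is run.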
Yamazaki, T; Hagiya, K; Takeda, H; Osawa, T; Yamaguchi, S; Nagamine, Y
2016-08-01
Pregnancy and calving are indispensable for dairy production, but the daily milk yield of cows declines as pregnancy progresses, especially during the late stages. Therefore, the effect of stage of pregnancy on daily milk yield must be clarified to accurately estimate the breeding values and lifetime productivity of cows. To improve the genetic evaluation model for daily milk yield and determine the effect of the timing of pregnancy on productivity, we used a test-day model to assess the effects of stage of pregnancy on variance component estimates, daily milk yields and 305-day milk yield during the first three lactations of Holstein cows. Data were 10 646 333 test-day records for the first lactation, 8 222 661 records for the second and 5 513 039 records for the third. The data were analyzed within each lactation by using three single-trait random regression animal models: one model that did not account for the stage of pregnancy effect and two models that did. The effect of stage of pregnancy on test-day milk yield was included in the model by applying a regression on days pregnant or by fitting a separate lactation curve for each days open (days from calving to pregnancy) class (eight levels). Stage of pregnancy did not affect the heritability estimates of daily milk yield, although the additive genetic and permanent environmental variances in late lactation were decreased by accounting for the stage of pregnancy effect. The effects of days pregnant on daily milk yield during late lactation were larger in the second and third lactations than in the first. The rates of reduction of the 305-day milk yield of cows that conceived fewer than 90 days after the second or third calving were significant (P < 0.05). The effect of stage of pregnancy in the first, compared with later, lactations should be included when determining the optimal number of days open to maximize lifetime productivity in dairy cows.
Efficient Estimation in Heteroscedastic Varying Coefficient Models
Directory of Open Access Journals (Sweden)
Chuanhua Wei
2015-07-01
This paper considers statistical inference for the heteroscedastic varying coefficient model. We propose an estimator for the coefficient functions that is more efficient than the conventional local-linear estimator. We establish asymptotic normality for the proposed estimator and conduct simulations to illustrate the performance of the proposed method.
Estimating Canopy Dark Respiration for Crop Models
Monje Mejia, Oscar Alberto
2014-01-01
Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.
Directory of Open Access Journals (Sweden)
Neumann Anne
2011-11-01
Background: Type 2 diabetes mellitus (T2D) poses a large worldwide burden for health care systems. One possible tool to decrease this burden is primary prevention. As it is unethical to wait until perfect data are available to conclude whether T2D primary prevention intervention programmes are cost-effective, we need a model that simulates the effect of prevention initiatives. Thus, the aim of this study is to investigate the long-term cost-effectiveness of lifestyle intervention programmes for the prevention of T2D using a Markov model. As decision makers often face difficulties in applying health economic results, we visualise our results with health economic tools. Methods: We use four-state Markov modelling with a probabilistic cohort analysis to calculate the cost per quality-adjusted life year (QALY) gained. A one-year cycle length and a lifetime time horizon are applied. Best available evidence supplies the model with data on transition probabilities between glycaemic states, mortality risks, utility weights, and disease costs. The costs are calculated from a societal perspective. A 3% discount rate is used for costs and QALYs. Cost-effectiveness acceptability curves are presented to assist decision makers. Results: The model indicates that diabetes prevention interventions have the potential to be cost-effective, but the outcome reveals a high level of uncertainty. Incremental cost-effectiveness ratios (ICERs) were negative for the intervention, i.e., the intervention leads to a cost reduction for men and women aged 30 or 50 years at initiation of the intervention. For men and women aged 70 at initiation of the intervention, the ICER was EUR 27,546/QALY gained and EUR 19,433/QALY gained, respectively. In all cases, the QALYs gained were low. Cost-effectiveness acceptability curves show that the higher the willingness-to-pay threshold value, the higher the probability that the intervention is cost-effective. Nonetheless, all curves are
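A four-state cohort Markov model of this kind can be sketched in a few lines. All transition probabilities, costs and utilities below are invented placeholders for illustration, not values from the study:

```python
import numpy as np

def cohort_model(P, costs, utils, horizon=50, disc=0.03):
    """Discounted lifetime cost and QALY totals for a cohort Markov model.

    P: yearly transition matrix over states (healthy, prediabetes, T2D, dead).
    """
    x = np.zeros(len(costs)); x[0] = 1.0              # cohort starts healthy
    c = q = 0.0
    for t in range(horizon):
        d = (1 + disc) ** -t                          # 3% discount factor
        c += d * float(x @ costs)
        q += d * float(x @ utils)
        x = x @ P                                     # one-year cycle
    return c, q

# Entirely hypothetical inputs, for illustration only
P_ctrl = np.array([[0.90, 0.07, 0.02, 0.01],
                   [0.05, 0.85, 0.08, 0.02],
                   [0.00, 0.00, 0.96, 0.04],
                   [0.00, 0.00, 0.00, 1.00]])
P_int = P_ctrl.copy()
P_int[0, :] = [0.94, 0.04, 0.01, 0.01]               # intervention slows progression
costs = np.array([200.0, 500.0, 3000.0, 0.0])        # EUR per year in each state
utils = np.array([0.90, 0.85, 0.70, 0.0])            # utility weights
c0, q0 = cohort_model(P_ctrl, costs, utils)
# intervention adds a yearly cost in the healthy state (assumed delivery model)
c1, q1 = cohort_model(P_int, costs + np.array([150.0, 0.0, 0.0, 0.0]), utils)
icer = (c1 - c0) / (q1 - q0)                         # EUR per QALY gained
```

Repeating the calculation with transition probabilities drawn from their uncertainty distributions yields the probabilistic analysis behind the cost-effectiveness acceptability curves.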
Efficient estimation of moments in linear mixed models
Wu, Ping; Zhu, Li-Xing; 10.3150/10-BEJ330
2012-01-01
In the linear random effects model, when distributional assumptions such as normality of the error variables cannot be justified, moments may serve as alternatives to describe relevant distributions in neighborhoods of their means. Generally, estimators may be obtained as solutions of estimating equations. It turns out that there may be several equations, each of them leading to consistent estimators, in which case finding the efficient estimator becomes a crucial problem. In this paper, we systematically study estimation of moments of the errors and random effects in linear mixed models.
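The efficiency question, several estimating equations each yielding a consistent estimator of the same moment, can be illustrated by the classical minimum-variance linear combination of two unbiased estimators. This is a generic textbook sketch, not the paper's construction:

```python
def combine_estimators(t1, t2, v1, v2, cov=0.0):
    """Minimum-variance linear combination w*t1 + (1-w)*t2 of two consistent
    estimators of the same quantity, with variances v1, v2 and covariance cov.

    The optimal weight minimises Var(w*t1 + (1-w)*t2) over w.
    """
    denom = v1 + v2 - 2 * cov
    w = (v2 - cov) / denom                     # optimal weight on t1
    est = w * t1 + (1 - w) * t2
    var = (v1 * v2 - cov * cov) / denom        # variance of the combination
    return est, var
```

The combined variance never exceeds the smaller of `v1` and `v2`, which is why identifying and weighting the candidate estimating equations matters for efficiency.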
Quantum metrology and estimation of Unruh effect.
Wang, Jieci; Tian, Zehua; Jing, Jiliang; Fan, Heng
2014-11-26
We study quantum metrology for a pair of entangled Unruh-DeWitt detectors when one of them is accelerated and coupled to a massless scalar field. Compared with previous schemes, our model requires only local interaction and avoids the use of cavities in the probe state preparation process. We show that the probe state preparation and the interaction between the accelerated detector and the external field have significant effects on the value of the quantum Fisher information, and correspondingly set the ultimate limit of precision in the estimation of the Unruh effect. We find that the precision of the estimation can be improved by a larger effective coupling strength and a longer interaction time. Alternatively, the energy gap of the detector has a range that can provide better precision. Thus we may adjust those parameters and attain a higher precision in the estimation. We also find that an extremely high acceleration is not required in the quantum metrology process.
Huang, Guowen; Lee, Duncan; Scott, Marian
2015-01-01
The long-term health effects of air pollution can be estimated using a spatio-temporal ecological study, where the disease data are counts of hospital admissions from populations in small areal units at yearly intervals. Spatially representative pollution concentrations for each areal unit are typically estimated by applying Kriging to data from a sparse monitoring network, or by computing averages over grid level concentrations from an atmospheric dispersion model. We propose a novel fusion model for estimating spatially aggregated pollution concentrations using both the modelled and monitored data, and relate these concentrations to respiratory disease in a new study in Scotland between 2007 and 2011.
Eppelbaum, Lev; Meirova, Tatiana
2015-04-01
Analysis of Empirical Software Effort Estimation Models
Basha, Saleem
2010-01-01
Reliable effort estimation remains an ongoing challenge to software engineers. Accurate effort estimation is the state of the art of software engineering; effort estimation is the preliminary phase between the client and the business enterprise. The relationship between the client and the business enterprise begins with the estimation of the software, and the client's confidence in the enterprise increases with accurate estimates. Effort estimation often requires generalizing from a small number of historical projects, and generalization from such limited experience is an inherently under-constrained problem. Accurate estimation is a complex process because it amounts to software effort prediction, and as the term indicates, a prediction never becomes an actual. This work follows the basics of the empirical software effort estimation models. The goal of this paper is to study empirical software effort estimation. The primary conclusion is that no single technique is best for all sit...
DEFF Research Database (Denmark)
Stygar, Anna Helena; Krogh, Mogens Agerbo; Kristensen, Troels
2017-01-01
The objective of this study was to construct a tool to assess the intervention effect on milk production in an evolutionary operations setup. The method used for this purpose was a dynamic linear model (DLM) with Kalman filtering. The DLM consisted of parameters describing milk yield in a herd and in individual cows...
On parameter estimation in deformable models
DEFF Research Database (Denmark)
Fisker, Rune; Carstensen, Jens Michael
1998-01-01
Deformable templates have been intensively studied in image analysis through the last decade, but despite their significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian approach; the method is based on a modified version of the EM algorithm. Experimental results for a deformable template used for textile inspection are presented...
Charvat, Hadrien; Remontet, Laurent; Bossard, Nadine; Roche, Laurent; Dejardin, Olivier; Rachet, Bernard; Launoy, Guy; Belot, Aurélien
2016-08-15
The excess hazard regression model is an approach developed for the analysis of cancer registry data to estimate net survival, that is, the survival of cancer patients that would be observed if cancer was the only cause of death. Cancer registry data typically possess a hierarchical structure: individuals from the same geographical unit share common characteristics such as proximity to a large hospital that may influence access to and quality of health care, so that their survival times might be correlated. As a consequence, correct statistical inference regarding the estimation of net survival and the effect of covariates should take this hierarchical structure into account. It becomes particularly important as many studies in cancer epidemiology aim at studying the effect on the excess mortality hazard of variables, such as deprivation indexes, often available only at the ecological level rather than at the individual level. We developed here an approach to fit a flexible excess hazard model including a random effect to describe the unobserved heterogeneity existing between different clusters of individuals, and with the possibility to estimate non-linear and time-dependent effects of covariates. We demonstrated the overall good performance of the proposed approach in a simulation study that assessed the impact on parameter estimates of the number of clusters, their size and their level of unbalance. We then used this multilevel model to describe the effect of a deprivation index defined at the geographical level on the excess mortality hazard of patients diagnosed with cancer of the oral cavity. Copyright © 2016 John Wiley & Sons, Ltd.
Parameter Estimation, Model Reduction and Quantum Filtering
Chase, Bradley A
2009-01-01
This dissertation explores the topics of parameter estimation and model reduction in the context of quantum filtering. Chapters 2 and 3 provide a review of classical and quantum probability theory, stochastic calculus and filtering. Chapter 4 studies the problem of quantum parameter estimation and introduces the quantum particle filter as a practical computational method for parameter estimation via continuous measurement. Chapter 5 applies these techniques in magnetometry and studies the estimator's uncertainty scalings in a double-pass atomic magnetometer. Chapter 6 presents an efficient feedback controller for continuous-time quantum error correction. Chapter 7 presents an exact model of symmetric processes of collective qubit systems.
Distribution Theory for Glass's Estimator of Effect Size and Related Estimators.
Hedges, Larry V.
1981-01-01
Glass's estimator of effect size, the sample mean difference divided by the sample standard deviation, is studied in the context of an explicit statistical model. The exact distribution of Glass's estimator is obtained and the estimator is shown to have a small sample bias. Alternatives are proposed and discussed. (Author/JKS)
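Glass's estimator and one standard small-sample adjustment can be written out directly. The correction factor below is the common approximation 1 - 3/(4m - 1) associated with Hedges, shown as one concrete example of the alternatives the abstract alludes to:

```python
import statistics as st

def glass_delta(treatment, control):
    """Glass's estimator: mean difference scaled by the control-group SD."""
    return (st.mean(treatment) - st.mean(control)) / st.stdev(control)

def small_sample_correction(d, n_control):
    """Approximate bias correction c(m) ~ 1 - 3/(4m - 1), where m is the
    degrees of freedom of the standard deviation estimate."""
    m = n_control - 1
    return d * (1 - 3 / (4 * m - 1))
```

Because the sample SD underestimates sigma on average, the raw estimator is biased upward in small samples; the multiplicative correction shrinks it accordingly.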
Beaudoin, Christopher E; Chen, Hongliang; Agha, Sohail
2016-01-01
Rapid population growth in Pakistan poses major risks, including those pertinent to public health. In the context of family planning in Pakistan, the current study evaluates the Touch condom media campaign and its effects on condom-related awareness, attitudes, behavioral intention, and behavior. This evaluation relies on 3 waves of panel survey data from men married to women ages 15-49 living in urban and rural areas in Pakistan (N = 1,012): Wave 1 was March 15 to April 7, 2009; Wave 2 was August 10 to August 24, 2009; and Wave 3 was May 1 to June 13, 2010. Analysis of variance provided evidence of improvements in 10 of 11 condom-related outcomes from Wave 1 to Wave 2 and Wave 3. In addition, there was no evidence of outcome decay 1 year after the conclusion of campaign advertising dissemination. To help compensate for violating the assumption of random assignment, propensity score modeling offered evidence of the beneficial effects of confirmed Touch ad recall on each of the 11 outcomes in at least 1 of 3 time-lagged scenarios. By using these different time-lagged scenarios (i.e., from Wave 1 to Wave 2, from Wave 1 to Wave 3, and from Wave 2 to Wave 3), propensity score modeling permitted insights into how the campaign had time-variant effects on the different types of condom-related outcomes, including carryover effects of the media campaign.
Yunyun Feng; Dengsheng Lu; Qi Chen; Michael Keller; Emilio Moran; Maiza Nara dos-Santos; Edson Luis Bolfe; Mateus Batistella
2017-01-01
Previous research has explored the potential to integrate lidar and optical data in aboveground biomass (AGB) estimation, but how different data sources, vegetation types, and modeling algorithms influence AGB estimation is poorly understood. This research conducts a comparative analysis of different data sources and modeling approaches in improving AGB estimation....
Mineral resources estimation based on block modeling
Bargawa, Waterman Sulistyana; Amri, Nur Ali
2016-02-01
The estimation in this paper uses three kinds of block models: nearest-neighbor polygon, inverse distance squared, and ordinary kriging. These techniques are weighting schemes based on the principle that a block's grade is a linear combination of the sample grades around the block being estimated. The case study in the Pongkor area concerns gold-silver resource modeling, where the mineralization is believed to occur as quartz veins formed by an epithermal hydrothermal process. Resource modeling includes data entry, statistical and variography analysis, topographic and geological modeling, block model construction, estimation parameters, model presentation, and tabulation of mineral resources. The skewed grade distribution is handled here with a robust semivariogram. The mineral resource classification generated in this model is based on an analysis of the kriging standard deviation and the number of samples used in the estimation of each block. The research results are used to evaluate the performance of the OK and IDS estimators. Based on visual and statistical analysis, it is concluded that the OK model gives estimates closer to the data used for modeling.
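Of the three weighting schemes, inverse distance squared is the simplest to sketch. The function below is an illustrative implementation for one block centroid (coordinates and grades are hypothetical; it is not the paper's actual estimator, which also involves search parameters and block discretization):

```python
def idw_estimate(block_xy, samples, power=2.0):
    """Inverse-distance weighted grade for one block centroid.
    samples: list of (x, y, grade) tuples; power=2.0 gives the
    inverse-distance-squared scheme used in the paper."""
    num = den = 0.0
    for x, y, grade in samples:
        d2 = (x - block_xy[0]) ** 2 + (y - block_xy[1]) ** 2
        if d2 == 0.0:
            return grade          # sample coincides with the centroid
        w = 1.0 / d2 ** (power / 2.0)   # w = 1 / d**power
        num += w * grade
        den += w
    return num / den
```

Two equidistant samples get equal weight, so the estimate is their mean; as a sample approaches the centroid its weight dominates, which is the defining behavior of the scheme.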
Institute of Scientific and Technical Information of China (English)
WANG Yehong; ZHAO Yuchun; CUI Chunguang
2007-01-01
On the basis of the jointly estimated 1-h precipitation from the Changde, Jingzhou, and Yichang Doppler radars as well as the Wuhan digital radar, and the retrieved wind fields from the Yichang and Jingzhou Doppler radars, a series of numerical experiments with an advanced regional η-coordinate model (AREM) under different model initialization schemes, i.e., Grapes-3DVAR, Barnes objective analysis, and Barnes-3DVAR, are carried out for a torrential rain event occurring along the Yangtze River in the 24-h period from 2000 BT 22 July 2002, to investigate the effects of the Doppler-radar estimated rainfall and retrieved winds on the rainfall forecast. The main results are as follows: (1) The simulations differ markedly under the three initialization schemes with the same data source (the radiosounding and T213L31 analysis). On the whole, Barnes-3DVAR, which combines the advantages of the Barnes objective analysis and the Grapes-3DVAR method, gives the best simulations: a well-simulated rain band and clear mesoscale structures, with location and intensity close to observations. (2) Both the Barnes-3DVAR and Grapes-3DVAR schemes are able to assimilate the Doppler-radar estimated rainfall and retrieved winds, but the differences in simulation results are very large, with Barnes-3DVAR's simulation much better than Grapes-3DVAR's. (3) Under the Grapes-3DVAR scheme, the simulation of 24-h rainfall is obviously improved when the Doppler-radar estimated precipitation is assimilated into the model, compared with the control experiment; it becomes a little worse when the Doppler-radar retrieved winds are assimilated, and obviously worse when both the estimated precipitation and the retrieved winds are assimilated. The situation differs under the Barnes-3DVAR scheme: the simulation is improved to a certain degree whether the estimated precipitation, the retrieved winds, or both are assimilated, and the result is best when both are assimilated.
Bukoski, J. J.; Broadhead, J. S.; Donato, D.; Murdiyarso, D.; Gregoire, T. G.
2016-12-01
Mangroves provide extensive ecosystem services that support both local livelihoods and international environmental goals, including coastal protection, water filtration, biodiversity conservation and the sequestration of carbon (C). While voluntary C market projects that seek to preserve and enhance forest C stocks offer a potential means of generating finance for mangrove conservation, their implementation faces barriers due to the high costs of quantifying C stocks through measurement, reporting and verification (MRV) activities. To streamline MRV activities in mangrove C forestry projects, we develop predictive models for (i) biomass-based C stocks, and (ii) soil-based C stocks for the mangroves of the Asia-Pacific. We use linear mixed effect models to account for spatial correlation in modeling the expected C as a function of stand attributes. The most parsimonious biomass model predicts total biomass C stocks as a function of both basal area and the interaction between latitude and basal area, whereas the most parsimonious soil C model predicts soil C stocks as a function of the logarithmic transformations of both latitude and basal area. Random effects are specified by site for both models, and are found to explain a substantial proportion of variance within the estimation datasets. The root mean square error (RMSE) of the biomass C model is approximated at 24.6 Mg/ha (18.4% of mean biomass C in the dataset), whereas the RMSE of the soil C model is estimated at 4.9 mg C/cm3 (14.1% of mean soil C). A substantial proportion of the variation in soil C, however, is explained by the random effects and thus the use of the SOC model may be most valuable for sites in which field measurements of soil C exist.
Estimation of Wind Turbulence Using Spectral Models
DEFF Research Database (Denmark)
Soltani, Mohsen; Knudsen, Torben; Bak, Thomas
2011-01-01
The production and loading of wind farms are significantly influenced by the turbulence of the flowing wind field. Estimation of turbulence allows us to optimize the performance of the wind farm. Turbulence estimation is, however, highly challenging due to the chaotic behavior of the wind. In this paper, a method is presented for estimation of the turbulence. The spectral model of the wind is used in order to provide the estimations. The suggested estimation approach is applied to a case study in which the objective is to estimate wind turbulence at desired points using measurements of wind speed outside the wind field. The results show that the method is able to provide estimations which explain more than 50% of the wind turbulence from a distance of about 300 meters.
Robust estimation of hydrological model parameters
Directory of Open Access Journals (Sweden)
A. Bárdossy
2008-11-01
Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing computational power, several complex optimization algorithms have emerged, but no algorithm yields a unique, single best parameter vector. The parameters of fitted hydrological models depend on the input data, and the quality of input data cannot be assured, as there may be measurement errors in both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With these modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector: the erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of a set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (Nash-Sutcliffe efficiency was used in this study) for each parameter vector. Based on the depth of the parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany, using the conceptual HBV model.
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. Such models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood in order to investigate the relationship between stock market price and rubber price for the sampled countries. The results indicate a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
A new estimate of the parameters in linear mixed models
Institute of Scientific and Technical Information of China (English)
王松桂; 尹素菊
2002-01-01
In linear mixed models, there are two kinds of unknown parameters: one is the fixed effect, the other is the variance component. In this paper, new estimates of these parameters, called the spectral decomposition estimates, are proposed. Some important statistical properties of the new estimates are established, in particular the linearity of the estimates of the fixed effects, with many statistical optimalities. The new method is applied to two important models used in the economic, financial, and mechanical fields. All estimates obtained have good statistical and practical meaning.
Bukoski, Jacob J.; Broadhead, Jeremy S.; Donato, Daniel C.; Murdiyarso, Daniel; Gregoire, Timothy G.
2017-01-01
Mangroves provide extensive ecosystem services that support local livelihoods and international environmental goals, including coastal protection, biodiversity conservation and the sequestration of carbon (C). While voluntary C market projects seeking to preserve and enhance forest C stocks offer a potential means of generating finance for mangrove conservation, their implementation faces barriers due to the high costs of quantifying C stocks through field inventories. To streamline C quantification in mangrove conservation projects, we develop predictive models for (i) biomass-based C stocks, and (ii) soil-based C stocks for the mangroves of the Asia-Pacific. We compile datasets of mangrove biomass C (197 observations from 48 sites) and soil organic C (99 observations from 27 sites) to parameterize the predictive models, and use linear mixed effect models to model the expected C as a function of stand attributes. The most parsimonious biomass model predicts total biomass C stocks as a function of both basal area and the interaction between latitude and basal area, whereas the most parsimonious soil C model predicts soil C stocks as a function of the logarithmic transformations of both latitude and basal area. Random effects are specified by site for both models, which are found to explain a substantial proportion of variance within the estimation datasets and indicate significant heterogeneity across-sites within the region. The root mean square error (RMSE) of the biomass C model is approximated at 24.6 Mg/ha (18.4% of mean biomass C in the dataset), whereas the RMSE of the soil C model is estimated at 4.9 mg C/cm3 (14.1% of mean soil C). The results point to a need for standardization of forest metrics to facilitate meta-analyses, as well as provide important considerations for refining ecosystem C stock models in mangroves. PMID:28068361
Bukoski, Jacob J; Broadhead, Jeremy S; Donato, Daniel C; Murdiyarso, Daniel; Gregoire, Timothy G
2017-01-01
Mangroves provide extensive ecosystem services that support local livelihoods and international environmental goals, including coastal protection, biodiversity conservation and the sequestration of carbon (C). While voluntary C market projects seeking to preserve and enhance forest C stocks offer a potential means of generating finance for mangrove conservation, their implementation faces barriers due to the high costs of quantifying C stocks through field inventories. To streamline C quantification in mangrove conservation projects, we develop predictive models for (i) biomass-based C stocks, and (ii) soil-based C stocks for the mangroves of the Asia-Pacific. We compile datasets of mangrove biomass C (197 observations from 48 sites) and soil organic C (99 observations from 27 sites) to parameterize the predictive models, and use linear mixed effect models to model the expected C as a function of stand attributes. The most parsimonious biomass model predicts total biomass C stocks as a function of both basal area and the interaction between latitude and basal area, whereas the most parsimonious soil C model predicts soil C stocks as a function of the logarithmic transformations of both latitude and basal area. Random effects are specified by site for both models, which are found to explain a substantial proportion of variance within the estimation datasets and indicate significant heterogeneity across-sites within the region. The root mean square error (RMSE) of the biomass C model is approximated at 24.6 Mg/ha (18.4% of mean biomass C in the dataset), whereas the RMSE of the soil C model is estimated at 4.9 mg C/cm3 (14.1% of mean soil C). The results point to a need for standardization of forest metrics to facilitate meta-analyses, as well as provide important considerations for refining ecosystem C stock models in mangroves.
DR-model-based estimation algorithm for NCS
Institute of Scientific and Technical Information of China (English)
HUANG Si-niu; CHEN Zong-ji; WEI Chen
2006-01-01
A novel estimation scheme based on a dead reckoning (DR) model for networked control systems (NCS) is proposed in this paper. Both the detailed DR estimation algorithm and the stability analysis of the system are given. By using the DR estimate of the state, the effect of communication delays is overcome. This makes a controller designed without considering delays still applicable in an NCS. Moreover, the scheme can effectively solve the problem of data packet loss or timeout.
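The core idea of DR estimation, predicting the plant state forward from the last successfully received packet instead of waiting for delayed data, can be sketched with a constant-velocity model. Everything here (the model, state layout, and numbers) is illustrative; the paper's actual DR algorithm is not reproduced:

```python
def dr_predict(state, dt):
    """Constant-velocity dead-reckoning prediction (illustrative only).
    state = (position, velocity); returns the state extrapolated by dt."""
    pos, vel = state
    return (pos + vel * dt, vel)

# When a network packet is delayed or lost, the controller keeps
# advancing the last good state estimate rather than stalling:
last_good = (10.0, 2.0)                 # state received at time t0
estimate = dr_predict(last_good, 0.3)   # extrapolated state at t0 + 0.3 s
```

The controller then acts on `estimate` as if it were a fresh measurement, which is why a delay-unaware controller design can remain usable over the network.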
Energy Technology Data Exchange (ETDEWEB)
Kooperman, G. J.; Pritchard, M. S.; Ghan, Steven J.; Wang, Minghuai; Somerville, Richard C.; Russell, Lynn
2012-12-11
Natural modes of variability on many timescales influence aerosol particle distributions and cloud properties such that isolating statistically significant differences in cloud radiative forcing due to anthropogenic aerosol perturbations (indirect effects) typically requires integrating over long simulations. For state-of-the-art global climate models (GCM), especially those in which embedded cloud-resolving models replace conventional statistical parameterizations (i.e. multi-scale modeling framework, MMF), the required long integrations can be prohibitively expensive. Here an alternative approach is explored, which implements Newtonian relaxation (nudging) to constrain simulations with both pre-industrial and present-day aerosol emissions toward identical meteorological conditions, thus reducing differences in natural variability and dampening feedback responses in order to isolate radiative forcing. Ten-year GCM simulations with nudging provide a more stable estimate of the global-annual mean aerosol indirect radiative forcing than do conventional free-running simulations. The estimates have mean values and 95% confidence intervals of -1.54 ± 0.02 W/m2 and -1.63 ± 0.17 W/m2 for nudged and free-running simulations, respectively. Nudging also substantially increases the fraction of the world’s area in which a statistically significant aerosol indirect effect can be detected (68% and 25% of the Earth's surface for nudged and free-running simulations, respectively). One-year MMF simulations with and without nudging provide global-annual mean aerosol indirect radiative forcing estimates of -0.80 W/m2 and -0.56 W/m2, respectively. The one-year nudged results compare well with previous estimates from three-year free-running simulations (-0.77 W/m2), which showed the aerosol-cloud relationship to be in better agreement with observations and high-resolution models than in the results obtained with conventional parameterizations.
Parameter and Uncertainty Estimation in Groundwater Modelling
DEFF Research Database (Denmark)
Jensen, Jacob Birk
The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly, decisions, and if these are to be made on solid grounds, the uncertainty attached to model results must be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models. Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study. The following two chapters concern calibration and uncertainty estimation. Essential issues relating to calibration are discussed. The classical regression methods are described; however, the main focus is on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The next two chapters describe case studies in which the GLUE methodology was applied.
A Dynamic Travel Time Estimation Model Based on Connected Vehicles
Directory of Open Access Journals (Sweden)
Daxin Tian
2015-01-01
Full Text Available With advances in connected vehicle technology, dynamic vehicle route guidance models are gradually becoming indispensable for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on connected vehicles. To estimate real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experimental results demonstrate the effectiveness of the travel time estimation method.
Regional fuzzy chain model for evapotranspiration estimation
Güçlü, Yavuz Selim; Subyani, Ali M.; Şen, Zekai
2017-01-01
Evapotranspiration (ET) is one of the main hydrological cycle components that has extreme importance for water resources management and agriculture especially in arid and semi-arid regions. In this study, regional ET estimation models based on the fuzzy logic (FL) principles are suggested, where the first stage includes the ET calculation via Penman-Monteith equation, which produces reliable results. In the second phase, ET estimations are produced according to the conventional FL inference system model. In this paper, regional fuzzy model (RFM) and regional fuzzy chain model (RFCM) are proposed through the use of adjacent stations' data in order to fill the missing ones. The application of the two models produces reliable and satisfactory results for mountainous and sea region locations in the Kingdom of Saudi Arabia, but comparatively RFCM estimations have more accuracy. In general, the mean absolute percentage error is less than 10%, which is acceptable in practical applications.
Parameter Estimation of Partial Differential Equation Models
Xun, Xiaolei
2013-09-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
The Adaptive LASSO Spline Estimation of Single-Index Model
Institute of Scientific and Technical Information of China (English)
LU Yiqiang; ZHANG Riquan; HU Bin
2016-01-01
In this paper, based on spline approximation, the authors propose a unified variable selection approach for the single-index model via an adaptive L1 penalty. The calculation methods for the proposed estimators are given on the basis of the known LARS algorithm. Under some regularity conditions, the authors demonstrate the asymptotic properties of the proposed estimators and the oracle properties of adaptive LASSO (aLASSO) variable selection. Simulations are used to investigate the performance of the proposed estimator and illustrate that it is effective for simultaneous variable selection as well as estimation of single-index models.
Comparison of Estimation Procedures for Multilevel AR(1) Models
Directory of Open Access Journals (Sweden)
Tanja eKrone
2016-04-01
Full Text Available To estimate a time series model for multiple individuals, a multilevel model may be used. In this paper we compare two estimation methods for the autocorrelation in multilevel AR(1) models, namely Maximum Likelihood Estimation (MLE) and Bayesian Markov Chain Monte Carlo. Furthermore, we examine the difference between modeling fixed and random individual parameters. To this end, we perform a simulation study with a fully crossed design, in which we vary the length of the time series (10 or 25), the number of individuals per sample (10 or 25), the mean of the autocorrelation (-0.6 to 0.6 inclusive, in steps of 0.3), and the standard deviation of the autocorrelation (0.25 or 0.40). We found that the random estimators of the population autocorrelation show less bias and higher power, compared to the fixed estimators. As expected, the random estimators profit strongly from a higher number of individuals, while this effect is small for the fixed estimators. The fixed estimators profit slightly more from a higher number of time points than the random estimators. When possible, random estimation is preferred to fixed estimation. The difference between MLE and Bayesian estimation is nearly negligible. The Bayesian estimation shows a smaller bias, but MLE shows a smaller variability (i.e., standard deviation of the parameter estimates). Finally, better results are found for a higher number of individuals and time points, and for a lower individual variability of the autocorrelation. The effect of the size of the autocorrelation differs between outcome measures.
Conditional shape models for cardiac motion estimation
DEFF Research Database (Denmark)
Metz, Coert; Baka, Nora; Kirisli, Hortense
2010-01-01
We propose a conditional statistical shape model to predict patient specific cardiac motion from the 3D end-diastolic CTA scan. The model is built from 4D CTA sequences by combining atlas based segmentation and 4D registration. Cardiac motion estimation is, for example, relevant in the dynamic...
Statistical Model-Based Face Pose Estimation
Institute of Scientific and Technical Information of China (English)
GE Xinliang; YANG Jie; LI Feng; WANG Huahua
2007-01-01
A robust face pose estimation approach is proposed in which the face shape is described by a statistical model and the pose parameters are represented by trigonometric functions. The face shape statistical model is first built by analyzing face shapes from different people under varying poses; shape alignment is vital in building the statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, the mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.
Hydrograph estimation with fuzzy chain model
Güçlü, Yavuz Selim; Şen, Zekai
2016-07-01
Hydrograph peak discharge estimation is gaining more significance with unprecedented urbanization developments. Most of the existing models do not yield reliable peak discharge estimations for small basins although they provide acceptable results for medium and large ones. In this study, fuzzy chain model (FCM) is suggested by considering the necessary adjustments based on some measurements over a small basin, Ayamama basin, within Istanbul City, Turkey. FCM is based on Mamdani and the Adaptive Neuro Fuzzy Inference Systems (ANFIS) methodologies, which yield peak discharge estimation. The suggested model is compared with two well-known approaches, namely, Soil Conservation Service (SCS)-Snyder and SCS-Clark methodologies. In all the methods, the hydrographs are obtained through the use of dimensionless unit hydrograph concept. After the necessary modeling, computation, verification and adaptation stages comparatively better hydrographs are obtained by FCM. The mean square error for the FCM is many folds smaller than the other methodologies, which proves outperformance of the suggested methodology.
Lin, Zhoumeng; Cuneo, Matthew; Rowe, Joan D.; Li, Mengjie; Tell, Lisa A; Allison, Shayna; Carlson, Jan; Riviere, Jim E.; Gehring, Ronette
2016-01-01
Background Extra-label use of tulathromycin in lactating goats is common and may cause violative residues in milk. The objective of this study was to develop a nonlinear mixed-effects pharmacokinetic (NLME-PK) model to estimate tulathromycin depletion in plasma and milk of lactating goats. Eight lactating goats received two subcutaneous injections of 2.5 mg/kg tulathromycin 7 days apart; blood and milk samples were analyzed for concentrations of tulathromycin and the common fragment of tulath...
Estimation methods for nonlinear state-space models in ecology
DEFF Research Database (Denmark)
Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro
2011-01-01
The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists; however, it is not always clear which is the appropriate method to choose. To this end, three approaches to estimation in the theta-logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance…
Bärgman, Jonas; Boda, Christian-Nils; Dozza, Marco
2017-05-01
As the development and deployment of in-vehicle intelligent safety systems (ISS) for crash avoidance and mitigation have rapidly increased in the last decades, the need to evaluate their prospective safety benefits before introduction has never been higher. Counterfactual simulations using relevant mathematical models (for vehicle dynamics, sensors, the environment, ISS algorithms, and models of driver behavior) have been identified as having high potential. However, although most of these models are relatively mature, models of driver behavior in the critical seconds before a crash are still relatively immature. There are also large conceptual differences between different driver models. The objective of this paper is, firstly, to demonstrate the importance of the choice of driver model when counterfactual simulations are used to evaluate two ISS: Forward collision warning (FCW), and autonomous emergency braking (AEB). Secondly, the paper demonstrates how counterfactual simulations can be used to perform sensitivity analyses on parameter settings, both for driver behavior and ISS algorithms. Finally, the paper evaluates the effect of the choice of glance distribution in the driver behavior model on the safety benefit estimation. The paper uses pre-crash kinematics and driver behavior from 34 rear-end crashes from the SHRP2 naturalistic driving study for the demonstrations. The results for FCW show a large difference in the percent of avoided crashes between conceptually different models of driver behavior, while differences were small for conceptually similar models. As expected, the choice of model of driver behavior did not affect AEB benefit much. Based on our results, researchers and others who aim to evaluate ISS with the driver in the loop through counterfactual simulations should be sure to make deliberate and well-grounded choices of driver models: the choice of model matters.
Bayesian mixture models for spectral density estimation
Cadonna, Annalisa
2017-01-01
We introduce a novel Bayesian modeling approach to spectral density estimation for multiple time series. Considering first the case of non-stationary time series, the log-periodogram of each series is modeled as a mixture of Gaussian distributions with frequency-dependent weights and mean functions. The implied model for the log-spectral density is a mixture of linear mean functions with frequency-dependent weights. The mixture weights are built through successive differences of a logit-normal distribution…
Estimation and uncertainty of reversible Markov models
Trendelkamp-Schroer, Benjamin; Paul, Fabian; Noé, Frank
2015-01-01
Reversibility is a key concept in the theory of Markov models, simplified kinetic models for the conformation dynamics of molecules. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is therefore crucial to the successful application of the previously developed theory. In this work we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices, with and without a given stationary vector, taking into account the need for a suitable prior distribution that preserves the metastable features of the observed process during posterior inference.
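A quick way to see what reversibility demands of an estimator: symmetrizing the observed transition-count matrix and row-normalizing yields a transition matrix that satisfies detailed balance exactly. This is a simple illustrative estimator, not the iterative MLE the abstract describes:

```python
def reversible_estimate(counts):
    """Symmetrized-count estimator of a reversible transition matrix.
    counts[i][j] = observed transitions i -> j (all row sums assumed > 0).
    Returns (T, pi): the transition matrix and its stationary distribution,
    satisfying detailed balance pi_i T_ij = pi_j T_ji by construction."""
    n = len(counts)
    x = [[0.5 * (counts[i][j] + counts[j][i]) for j in range(n)]
         for i in range(n)]
    rows = [sum(r) for r in x]
    total = sum(rows)
    T = [[x[i][j] / rows[i] for j in range(n)] for i in range(n)]
    pi = [rows[i] / total for i in range(n)]   # stationary: pi_i T_ij symmetric
    return T, pi
```

Detailed balance holds because pi_i T_ij reduces to x_ij / total, which is symmetric in i and j; the proper reversible MLE refines the symmetrized counts iteratively rather than taking the plain average.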
Bayesian parameter estimation for effective field theories
Wesolowski, S; Furnstahl, R J; Phillips, D R; Thapaliya, A
2015-01-01
We present procedures based on Bayesian statistics for effective field theory (EFT) parameter estimation from data. The extraction of low-energy constants (LECs) is guided by theoretical expectations that supplement such information in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools are developed that analyze the fit and ensure that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems and the extraction of LECs for the nucleon mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.
Bayesian parameter estimation for effective field theories
Wesolowski, S.; Klco, N.; Furnstahl, R. J.; Phillips, D. R.; Thapaliya, A.
2016-07-01
We present procedures based on Bayesian statistics for estimating, from data, the parameters of effective field theories (EFTs). The extraction of low-energy constants (LECs) is guided by theoretical expectations in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools is developed that analyzes the fit and ensures that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems, including the extraction of LECs for the nucleon-mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.
Error estimation and adaptive chemical transport modeling
Directory of Open Access Journals (Sweden)
Malte Braack
2014-09-01
Full Text Available We present a numerical method to use several chemical transport models of increasing accuracy and complexity in an adaptive way. In large parts of the domain, a simplified chemical model may be used, whereas in certain regions a more complex model is needed for accuracy reasons. A mathematically derived error estimator measures the modeling error and provides information on where to use more accurate models. The error is measured in terms of output functionals. Therefore, one has to consider adjoint problems which carry sensitivity information. This concept is demonstrated by means of ozone formation and pollution emission.
Parameter Estimation for Thurstone Choice Models
Energy Technology Data Exchange (ETDEWEB)
Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-04-24
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality (when, in expectation, each comparison set of that cardinality occurs the same number of times), for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally, according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
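For the pair-comparison special case mentioned above (the Bradley-Terry model), the maximum likelihood strengths can be computed with the classical minorization-maximization (MM) iteration. The sketch below uses an invented wins matrix for illustration; it is a standard textbook procedure, not the paper's analysis.

```python
import numpy as np

def bradley_terry(wins, n_iter=500):
    """MM iteration for Bradley-Terry strength parameters.

    wins[i, j] = number of times item i beat item j.
    Update: s_i <- w_i / sum_j n_ij / (s_i + s_j), then renormalize
    (strengths are only identified up to a common scale).
    """
    W = np.asarray(wins, dtype=float)
    total_wins = W.sum(axis=1)
    n_pairs = W + W.T                    # comparisons per pair
    s = np.ones(W.shape[0])
    for _ in range(n_iter):
        denom = (n_pairs / (s[:, None] + s[None, :])).sum(axis=1)
        s = total_wins / denom
        s /= s.sum()
    return s

# illustrative results: item 0 beats the others most often, item 2 least
wins = np.array([[0, 7, 9],
                 [3, 0, 6],
                 [1, 4, 0]])
strengths = bradley_terry(wins)
```

The recovered strength ordering follows the win records, and the diagonal contributes nothing to the denominator since no item plays itself.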
Estimation of growth parameters using a nonlinear mixed Gompertz model.
Wang, Z; Zuidhof, M J
2004-06-01
In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
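A fixed-effects Gompertz fit can be sketched with nonlinear least squares, as below; the mixed model in the abstract additionally places per-bird random effects on the parameters (which would require a dedicated NLME fitter). Data and parameter values here are synthetic, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, w_max, b, t_infl):
    """Gompertz growth curve: mature weight w_max, rate b, inflection age t_infl."""
    return w_max * np.exp(-np.exp(-b * (t - t_infl)))

# synthetic body-weight data for a single bird (illustrative values)
rng = np.random.default_rng(42)
t = np.linspace(0, 60, 40)                       # age in days
y = gompertz(t, 2.5, 0.12, 25.0) + rng.normal(0.0, 0.02, t.size)

popt, pcov = curve_fit(gompertz, t, y, p0=(2.0, 0.1, 20.0))
w_max_hat, b_hat, t_infl_hat = popt
```

The diagonal of `pcov` gives the (fixed-effects) parameter variances; partitioning between- and within-bird variation, as the abstract describes, is what the mixed extension adds.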
Gaussian estimation for discretely observed Cox-Ingersoll-Ross model
Wei, Chao; Shu, Huisheng; Liu, Yurong
2016-07-01
This paper is concerned with the parameter estimation problem for the Cox-Ingersoll-Ross model based on discrete observation. First, a new discretized process is built based on the Euler-Maruyama scheme. Then, the parameter estimators are obtained by employing the maximum likelihood method and explicit expressions for the estimation error are given. Subsequently, the consistency of all parameter estimators is proved by applying the law of large numbers for martingales, Hölder's inequality, the Burkholder-Davis-Gundy inequality and the Cauchy-Schwarz inequality. Finally, a numerical simulation example for the estimators and the absolute error between estimators and true values is presented to demonstrate the effectiveness of the estimation approach used in this paper.
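The Euler-Maruyama discretization of the CIR process, and a simple estimate of the drift parameters from the discretized increments, can be sketched as follows. Note this uses ordinary least squares on the drift as a stand-in for the paper's maximum likelihood estimators, and all parameter values are illustrative.

```python
import numpy as np

# Simulate the CIR process dX = kappa*(theta - X) dt + sigma*sqrt(X) dW
# with an Euler-Maruyama scheme.
rng = np.random.default_rng(1)
kappa, theta, sigma = 1.0, 0.5, 0.1
dt, n = 0.01, 200_000

x = np.empty(n)
x[0] = theta
dw = rng.normal(0.0, np.sqrt(dt), size=n - 1)
for k in range(n - 1):
    drift = kappa * (theta - x[k]) * dt
    diffusion = sigma * np.sqrt(max(x[k], 0.0)) * dw[k]   # clip to keep sqrt real
    x[k + 1] = x[k] + drift + diffusion

# Drift regression: X_{k+1} - X_k = a + b*X_k + noise,
# with a = kappa*theta*dt and b = -kappa*dt.
dx = np.diff(x)
A = np.column_stack([np.ones(n - 1), x[:-1]])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, dx, rcond=None)
kappa_hat = -b_hat / dt
theta_hat = -a_hat / b_hat
```

With a long sample path the drift parameters are recovered closely; estimating sigma would use the squared residuals of the same regression.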
Estimating Model Evidence Using Data Assimilation
Carrassi, Alberto; Bocquet, Marc; Hannart, Alexis; Ghil, Michael
2017-04-01
We review the field of data assimilation (DA) from a Bayesian perspective and show that, in addition to its by now common application to state estimation, DA may be used for model selection. An important special case of the latter is the discrimination between a factual model, which corresponds, to the best of the modeller's knowledge, to the situation in the actual world in which a sequence of events has occurred, and a counterfactual model, in which a particular forcing or process might be absent or just quantitatively different from the actual world. Three different ensemble-DA methods are reviewed for this purpose: the ensemble Kalman filter (EnKF), the ensemble four-dimensional variational smoother (En-4D-Var), and the iterative ensemble Kalman smoother (IEnKS). An original contextual formulation of model evidence (CME) is introduced. It is shown how to apply these three methods to compute CME, using the approximated time-dependent probability distribution functions (pdfs) each of them provides in the process of state estimation. The theoretical formulae so derived are applied to two simplified nonlinear and chaotic models: (i) the Lorenz three-variable convection model (L63), and (ii) the Lorenz 40-variable midlatitude atmospheric dynamics model (L95). The numerical results of these three DA-based methods and those of an integration based on importance sampling are compared. It is found that better CME estimates are obtained by using DA, and the IEnKS method appears to be best among the DA methods. Differences among the performance of the three DA-based methods are discussed as a function of model properties. Finally, the methodology is implemented for parameter estimation and for event attribution.
Ballarini, E; Bauer, S; Eberhardt, C; Beyer, C
2012-06-01
Transverse dispersion represents an important mixing process for the transport of contaminants in groundwater and constitutes an essential prerequisite for geochemical and biodegradation reactions. Within this context, this work describes the detailed numerical simulation of highly controlled laboratory experiments using uranine, bromide and oxygen-depleted water as conservative tracers for the quantification of transverse mixing in porous media. Synthetic numerical experiments reproducing an existing laboratory set-up of a quasi-two-dimensional flow-through tank were performed to assess the applicability of an analytical solution of the 2D advection-dispersion equation for the estimation of transverse dispersivity as a fitting parameter. The fitted dispersivities were compared to the "true" values introduced in the numerical simulations and the associated error could be precisely estimated. A sensitivity analysis was performed on the experimental set-up in order to evaluate the sensitivities of the measurements taken in the tank experiment to the individual hydraulic and transport parameters. From the results, an improved experimental set-up as well as a numerical evaluation procedure could be developed, which allow for a precise and reliable determination of dispersivities. The improved tank set-up was used for new laboratory experiments, performed at advective velocities of 4.9 m d⁻¹ and 10.5 m d⁻¹. Numerical evaluation of these experiments yielded a unique and reliable parameter set, which closely fits the measured tracer concentration data. For the porous medium with a grain size of 0.25-0.30 mm, the fitted longitudinal and transverse dispersivities were 3.49×10⁻⁴ m and 1.48×10⁻⁵ m, respectively. The procedures developed in this paper for the synthetic and rigorous design and evaluation of the experiments can be generalized and transferred to comparable applications.
Stamou, Anastasios I; Kamizoulis, George
2009-02-01
A preliminary investigation was performed to estimate the effect of the degree of treatment in Sewage Treatment Plants (STPs) on the status of pollution along the coastlines of the Mediterranean Sea. Data from questionnaires and the literature were collected and processed (a) to identify 18 approximate 1D surface coastal currents, (b) to estimate their prevailing direction and average flow velocity and (c) to estimate the water pollution loads and identify their locations of discharge. Then, a simplified 1D water quality model was formulated and applied for the existing conditions and two hypothetical scenarios, (1) all coastal cities have STPs with secondary treatment and (2) all coastal cities have STPs with tertiary treatment, to determine BOD5, TN and TP concentrations in the 18 surface coastal currents. Calculated concentrations were compared and discussed. A main conclusion is that, to reduce pollution in the Mediterranean Sea, measures should be adopted for upgrading the water quality of the rivers discharging into the Mediterranean Sea, along with the construction of STPs for all the coastal cities.
Lim, Seung Joo; Fox, Peter
2014-02-01
The effects of halogenated aromatics/aliphatics and nitrogen (N)-heterocyclic aromatics on estimating the persistence of future pharmaceutical compounds were investigated using a modified half-life equation. The potential future pharmaceutical compounds investigated were approximately 2000 pharmaceutical drugs currently undergoing United States Food and Drug Administration (US FDA) testing. The EPI Suite (BIOWIN) model estimates the fates of compounds based on their biodegradability under aerobic conditions. While BIOWIN considers the biodegradability of a compound only, the half-life equation used in this study was modified to account for biodegradability, sorption and cometabolic oxidation. The fates of the potential future pharmaceutical compounds could thus be estimated more accurately using the modified half-life equation, which considers sorption and cometabolic oxidation of halogenated aromatics/aliphatics and nitrogen (N)-heterocyclic aromatics in the sub-surface, while EPI Suite (BIOWIN) does not. Halogenated aliphatics in chemicals were more persistent than halogenated aromatics in the sub-surface. In addition, in the sub-surface environment, the fates of organic chemicals were much more affected by halogenation than by nitrogen (N)-heterocyclic aromatics.
Zein, Rizqy Amelia; Suhariadi, Fendy; Hendriani, Wiwin
2017-01-01
The research aimed to investigate the effect of lay knowledge of pulmonary tuberculosis (TB) and prior contact with pulmonary TB patients on a health-belief model (HBM) as well as to identify the social determinants that affect lay knowledge. Survey research design was conducted, where participants were required to fill in a questionnaire, which measured HBM and lay knowledge of pulmonary TB. Research participants were 500 residents of Semampir, Asemrowo, Bubutan, Pabean Cantian, and Simokerto districts, where the risk of pulmonary TB transmission is higher than other districts in Surabaya. Being a female, older in age, and having prior contact with pulmonary TB patients significantly increase the likelihood of having a higher level of lay knowledge. Lay knowledge is a substantial determinant to estimate belief in the effectiveness of health behavior and personal health threat. Prior contact with pulmonary TB patients is able to explain the belief in the effectiveness of a health behavior, yet fails to estimate participants' belief in the personal health threat. Health authorities should prioritize males and young people as their main target groups in a pulmonary TB awareness campaign. The campaign should be able to reconstruct people's misconception about pulmonary TB, thereby bringing around the health-risk perception so that it is not solely focused on improving lay knowledge.
Robust estimation procedure in panel data model
Energy Technology Data Exchange (ETDEWEB)
Shariff, Nurul Sima Mohamad [Faculty of Science of Technology, Universiti Sains Islam Malaysia (USIM), 71800, Nilai, Negeri Sembilan (Malaysia); Hamzah, Nor Aishah [Institute of Mathematical Sciences, Universiti Malaya, 50630, Kuala Lumpur (Malaysia)
2014-06-19
The panel data modeling has received great attention in econometric research recently. This is due to the availability of data sources and the interest in studying cross sections of individuals observed over time. However, problems may arise in modeling the panel in the presence of cross-sectional dependence and outliers. Even though there are a few methods that take into consideration the presence of cross-sectional dependence in the panel, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.
Maruyama, A.; Kuwagata, T.
2010-12-01
The effect of changes in the growing season for rice, which are an agronomic adaptation to climate change, on water use efficiency (WUE) was estimated using a coupled land surface and crop growth model. The crop growth model consisted of calculations of phenological development (Ps), growth of leaf area index (LAI) and canopy height (h). The land surface model consisted of calculations of the energy budget at the surface, radiation transport and stomatal movements using the output of the crop growth model (Ps, LAI, h). An empirical relationship between stomatal conductance and phenological stage was used for this calculation. The relationship between leaf geometry and phenological stage was also used to express the change in radiation transport in the canopy. Variations in evapotranspiration (ET) were estimated using the coupled model for five different transplanting times (from March to July) based on climatic data for the Miyazaki Plain, Japan. The seasonal variation in ET showed a common pattern, where most of the ET just after transplanting consisted of evaporation (E), and the transpiration (T) increased with rice growth until heading. However, the timing of the increase in T varied with the growing seasons due to the difference in LAI growth rate. The ratios of total transpiration to evapotranspiration (T/ET) were 40, 48, 48, 46 and 36% for transplanting on March 1st, April 1st, May 1st, June 1st and July 1st, respectively. Assuming the amount of production by photosynthesis is proportional to transpiration, our results suggest that the WUE would be higher in the mid growing season.
PARAMETER ESTIMATION IN BREAD BAKING MODEL
Hadiyanto Hadiyanto; AJB van Boxtel
2012-01-01
Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure, i.e. first, heat and mass transfer related parameters, then the parameters related to product transformations and finally pro...
Adaptive Covariance Estimation with model selection
Biscay, Rolando; Loubes, Jean-Michel
2012-01-01
We provide in this paper a fully adaptive penalized procedure to select a covariance among a collection of models, observing i.i.d. replications of the process at fixed observation points. For this we generalize previous results of Bigot et al. and propose to use a data-driven penalty to obtain an oracle inequality for the estimator. We prove that this method is an extension to the matricial regression model of the work by Baraud.
Error Estimates of Theoretical Models: a Guide
Dobaczewski, J; Reinhard, P -G
2014-01-01
This guide offers suggestions/insights on uncertainty quantification of nuclear structure models. We discuss a simple approach to statistical error estimates, strategies to assess systematic errors, and show how to uncover inter-dependencies by correlation analysis. The basic concepts are illustrated through simple examples. By providing theoretical error bars on predicted quantities and using statistical methods to study correlations between observables, theory can significantly enhance the feedback between experiment and nuclear modeling.
Estimating an Activity Driven Hidden Markov Model
Meyer, David A.; Shakeel, Asif
2015-01-01
We define a Hidden Markov Model (HMM) in which each hidden state has time-dependent $\\textit{activity levels}$ that drive transitions and emissions, and show how to estimate its parameters. Our construction is motivated by the problem of inferring human mobility on sub-daily time scales from, for example, mobile phone records.
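The likelihood computation underlying any HMM estimation is the forward recursion; a minimal scaled version for a vanilla discrete HMM is sketched below. The activity-driven variant in the abstract would make the transition and emission matrices time-dependent, which this sketch deliberately omits, and all model parameters here are invented.

```python
import numpy as np
from itertools import product

def hmm_loglik(obs, pi0, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete observation
    sequence under an HMM with initial distribution pi0, transition
    matrix A (states x states) and emission matrix B (states x symbols)."""
    alpha = pi0 * B[:, obs[0]]
    c = alpha.sum()
    logl = np.log(c)
    alpha /= c                           # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        logl += np.log(c)
        alpha /= c
    return logl

# illustrative two-state, two-symbol HMM
pi0 = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],      # state 0 mostly emits symbol 0
              [0.2, 0.8]])     # state 1 mostly emits symbol 1
obs = [0, 1, 1, 0]
loglik = hmm_loglik(obs, pi0, A, B)
```

For short sequences the result can be checked exactly by summing the probability of every hidden-state path.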
Solar energy estimation using REST2 model
Directory of Open Access Journals (Sweden)
M. Rizwan, Majid Jamil, D. P. Kothari
2010-03-01
Full Text Available The network of solar energy measuring stations is relatively sparse throughout the world. In India, only IMD (India Meteorological Department), Pune provides data for quite a few stations, which is considered the base data for research purposes. However, hourly data of measured energy are not available, even for those stations where measurement has already been done. Due to this lack of hourly measured data, the estimation of solar energy at the earth's surface is required. In the proposed study, hourly solar energy is estimated at four important Indian stations, namely New Delhi, Mumbai, Pune and Jaipur, keeping in mind their different climatic conditions. For this study, REST2 (Reference Evaluation of Solar Transmittance, 2 bands), a high-performance parametric model for the estimation of solar energy, is used. The REST2 derivation uses the same two-band scheme as CPCR2 (Code for Physical Computation of Radiation, 2 bands), but CPCR2 does not include NO2 absorption, which is an important parameter for estimating solar energy. In this study, using ground measurements during 1986-2000 as reference, a MATLAB program is written to evaluate the performance of the REST2 model at the four proposed stations. The solar energy at the four stations throughout the year is estimated and compared with CPCR2. The results obtained from the REST2 model show good agreement with the measured data on a horizontal surface. The study reveals that the REST2 model performs better and yields the best results among the existing models under cloudless sky for Indian climatic conditions.
Lin, Zhoumeng; Cuneo, Matthew; Rowe, Joan D; Li, Mengjie; Tell, Lisa A; Allison, Shayna; Carlson, Jan; Riviere, Jim E; Gehring, Ronette
2016-11-18
Extra-label use of tulathromycin in lactating goats is common and may cause violative residues in milk. The objective of this study was to develop a nonlinear mixed-effects pharmacokinetic (NLME-PK) model to estimate tulathromycin depletion in plasma and milk of lactating goats. Eight lactating goats received two subcutaneous injections of 2.5 mg/kg tulathromycin 7 days apart; blood and milk samples were analyzed for concentrations of tulathromycin and the common fragment of tulathromycin (i.e., the marker residue CP-60,300), respectively, using liquid chromatography mass spectrometry. Based on these new data and related literature data, a NLME-PK compartmental model with first-order absorption and elimination was used to model plasma concentrations and the cumulative excreted amount in milk. Monte Carlo simulations with 100 replicates were performed to predict the time when the upper limit of the 95% confidence interval of milk concentrations was below the tolerance. All animals were healthy throughout the study, with normal appetite and milk production levels, and with mild to moderate injection-site reactions that diminished by the end of the study. The measured data showed that milk concentrations of the marker residue of tulathromycin were below the limit of detection (LOD = 1.8 ng/ml) 39 days after the second injection. A 2-compartment model with milk as an excretory compartment best described tulathromycin plasma and CP-60,300 milk pharmacokinetic data. The model-predicted data correlated with the measured data very well. The NLME-PK model estimated that tulathromycin plasma concentrations were below the LOD (1.2 ng/ml) 43 days after a single injection, and 62 days after the second injection, with 95% confidence. These estimated times are much longer than the current meat withdrawal time recommendation of 18 days for tulathromycin in non-lactating cattle. The results suggest that two subcutaneous injections of 2.5 mg/kg tulathromycin are a clinically
DEFF Research Database (Denmark)
Mailund, Thomas; Dutheil, Julien; Hobolth, Asger
2011-01-01
Due to genetic variation in the ancestor of two populations or two species, the divergence time for DNA sequences from two populations is variable along the genome. Within genomic segments all bases will share the same divergence, because they share a most recent common ancestor, when no recombination event has occurred to split them apart. The size of these segments of constant divergence depends on the recombination rate, but also on the speciation time, the effective population size of the ancestral population, as well as demographic effects and selection. Thus, inference of these parameters may...
Liu, Hongyu; Crawford, James; Ham, Seung-Hee; Zhang, Bo; Kato, Seiji; Voulgarakis, Apostolos; Chen, Gao; Fairlie, Duncan; Duncan, Bryan; Yantosca, Robert
2017-01-01
Clouds directly affect tropospheric photochemistry through modification of solar radiation that determines photolysis frequencies. This effect is an important component of global tropospheric chemistry-climate interaction, and its understanding is thus essential for predicting the feedback of climate change on tropospheric chemistry.
ICA Model Order Estimation Using Clustering Method
Directory of Open Access Journals (Sweden)
P. Sovka
2007-12-01
Full Text Available In this paper a novel approach for independent component analysis (ICA) model order estimation of movement electroencephalogram (EEG) signals is described. The application is targeted at brain-computer interface (BCI) EEG preprocessing. Previous work has shown that it is possible to decompose EEG into movement-related and non-movement-related independent components (ICs). The selection of only movement-related ICs might increase the BCI EEG classification score. The real number of independent sources in the brain is an important parameter of the preprocessing step. Previously, we used principal component analysis (PCA) to estimate the number of independent sources. However, PCA estimates only the number of uncorrelated, not independent, components, ignoring higher-order signal statistics. In this work, we use another approach: selection of highly correlated ICs from several ICA runs. The ICA model order estimation is done at significance level α = 0.05, and the model order is more or less dependent on the ICA algorithm and its parameters.
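The PCA baseline that the abstract says was previously used can be sketched as an explained-variance count over the eigenvalues of the data covariance. The mixing setup below is synthetic and deliberately simple (three sources, six channels), not EEG data, and this does not reproduce the authors' correlation-across-runs method.

```python
import numpy as np

def pca_model_order(X, var_explained=0.99):
    """Smallest number of principal components whose eigenvalues explain
    the requested fraction of the total variance of X (samples x channels)."""
    Xc = X - X.mean(axis=0)
    evals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]  # descending
    evals = np.clip(evals, 0.0, None)       # guard tiny negative eigenvalues
    ratio = np.cumsum(evals) / evals.sum()
    return int(np.searchsorted(ratio, var_explained) + 1)

# three independent sources of different power, mixed into six channels
rng = np.random.default_rng(7)
S = rng.normal(size=(2000, 3)) * np.array([3.0, 2.0, 1.5])
M = np.hstack([np.eye(3), np.eye(3)])       # each source feeds two channels
X = S @ M
order = pca_model_order(X)
```

Because the mixed data have exact rank three, the eigenvalue spectrum drops to (numerically) zero after three components and the estimated order recovers the true number of sources.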
Efficiently adapting graphical models for selectivity estimation
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
Query optimizers rely on statistical models that succinctly describe the underlying data. Models are used to derive cardinality estimates for intermediate relations, which in turn guide the optimizer to choose the best query execution plan. The quality of the resulting plan is highly dependent... of the selectivities of the constituent predicates. However, this independence assumption is more often than not wrong, and is considered to be the most common cause of sub-optimal query execution plans chosen by modern query optimizers. We take a step towards a principled and practical approach to performing cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss...
Verkaik-Kloosterman, Janneke; Dodd, Kevin W; Dekkers, Arnold L M; van 't Veer, Pieter; Ocké, Marga C
2011-11-01
Statistical modeling of habitual micronutrient intake from food and dietary supplements using short-term measurements is hampered by heterogeneous variances and multimodality. Summing short-term intakes from food and dietary supplements prior to simple correction for within-person variation (first add then shrink) may produce estimates of habitual total micronutrient intake so badly biased as to be smaller than estimates of habitual intake from food sources only. A 3-part model using a first shrink then add approach is proposed to estimate the habitual micronutrient intake from food among nonsupplement users, food among supplement users, and supplements. The population distribution of habitual total micronutrient intake is estimated by combining these 3 habitual intake distributions, accounting for possible interdependence between Eq. 2 and 3. The new model is an extension of a model developed by the USA National Cancer Institute. Habitual total vitamin D intake among young children was estimated using the proposed model and data from the Dutch food consumption survey (n = 1279). The model always produced habitual total intakes similar to or higher than habitual intakes from food sources only and also preserved the multimodal shape of the observed total vitamin D intake distribution. This proposed method incorporates several sources of covariate information that should provide more precise estimates of the habitual total intake distribution and the proportion of the population with intakes below/above cutpoint values. The proposed methodology could be useful for other complex situations, e.g. where high concentrations of micronutrients appear in episodically consumed foods.
High-dimensional model estimation and model selection
CERN. Geneva
2015-01-01
I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
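The LASSO in the p >> n regime described above can be sketched with plain coordinate descent; this is a textbook implementation on synthetic data, not the TREX or the Graphical LASSO mentioned in the abstract.

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=300):
    """Coordinate descent for the LASSO objective
    (1/2n)||y - X b||^2 + alpha * ||b||_1, via soft-thresholding updates."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    r = y.copy()                         # residual y - X @ beta (beta = 0)
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]       # remove coordinate j's contribution
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - alpha * n, 0.0) / col_sq[j]
            r -= X[:, j] * beta[j]
    return beta

# sparse recovery demo: p >> n, only three active predictors
rng = np.random.default_rng(3)
n, p = 50, 200
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [4.0, -3.0, 2.0]
y = X @ beta_true + rng.normal(0.0, 0.5, n)

beta_hat = lasso_cd(X, y, alpha=0.2)
support = np.flatnonzero(np.abs(beta_hat) > 1e-8)
```

Despite having four times as many variables as samples, the L1 penalty drives most coefficients exactly to zero, and the true active set is recovered in the estimated support.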
Estimating the effects of Exchange and Interest Rates on Stock ...
African Journals Online (AJOL)
Estimating the effects of Exchange and Interest Rates on Stock Market in ... The need to empirically determine the predictive power of exchange rate and ... Keywords: Exchange rate, interest rate, All-share index, multiple regression models
Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model
Rizvi, Farheen
2016-01-01
Two ground-simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher-fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used, as the CAST software is unavailable. The main source of spacecraft dynamics error in the higher-fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. The signal generation model has characteristics (mean, variance and power spectral density) similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher-fidelity spacecraft dynamics modeling from the CAST software.
Energy Technology Data Exchange (ETDEWEB)
Weiland, J., E-mail: elfjw@chalmers.se [Chalmers University of Technology and EURATOM-VR Association (Sweden)
2016-05-15
Basic aspects of turbulent transport in toroidal magnetized plasmas are discussed. In particular, the fluid closure has strong effects on zonal flows, which are needed to create an absorbing boundary for long wavelengths and also to obtain the Dimits nonlinear upshift. The fluid resonance in the energy equation is found to be instrumental for generating the L–H transition, the spin-up of poloidal rotation in internal transport barriers, as well as the nonlinear Dimits upshift. The difference between the linearly fastest growing mode number and the corresponding longer nonlinear correlation length is also addressed. It is found that the Kadomtsev mixing length result is consistent with the non-Markovian diagonal limit of the transport at the nonlinearly obtained correlation length.
DEFF Research Database (Denmark)
Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan
This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte Carlo experiment. We find that estimation of the parameters in the transition function can be problematic but that there may be significant benefits in terms of forecast performance.
Extreme gust wind estimation using mesoscale modeling
DEFF Research Database (Denmark)
Larsén, Xiaoli Guo; Kruger, Andries
2014-01-01
Currently, the existing estimation of the extreme gust wind, e.g. the 50-year winds of 3 s values, in the IEC standard is based on a statistical model to convert the 1:50-year wind values from the 10 min resolution. This statistical model assumes a Gaussian process that satisfies the classical ... through turbulent eddies. This process is modeled using the mesoscale Weather Research and Forecasting (WRF) model. The gust at the surface is calculated as the largest winds over a layer where the averaged turbulence kinetic energy is greater than the averaged buoyancy force. The experiments have been done for Denmark and two areas in South Africa. For South Africa, the extreme gust atlases were created from the output of the mesoscale modelling using Climate Forecasting System Reanalysis (CFSR) forcing for the period 1998–2010. The extensive measurements including turbulence ...
Rutten, M J M; Bovenhuis, H; van Arendonk, J A M
2010-10-01
Fourier transform infrared spectroscopy is a suitable method to determine bovine milk fat composition. However, the determination of fat composition by gas chromatography, required for calibration of the infrared prediction model, is expensive and labor intensive. It has recently been shown that the number of calibration samples is strongly related to the model's validation r² (i.e., accuracy of prediction). However, the effect of the number of calibration samples used, and therefore validation r², on the estimated genetic parameters of data predicted using the model needs to be established. To this end, 235 calibration data subsets of different sizes were sampled: n=100, n=250, n=500, and n=1,000 calibration samples. Subsequently, these data subsets were used to calibrate fat composition prediction models for 2 specific fatty acids: C16:0 and C18u (where u=unsaturated). Next, genetic parameters were estimated on predicted fat composition data for these fatty acids. Strong relationships between the number of calibration samples and validation r², as well as strong genetic correlations, were found. However, the use of n=100 calibration samples resulted in a broad range of validation r² values and genetic correlations. Subsequent increases in the number of calibration samples resulted in narrowing patterns for validation r² as well as genetic correlations. The use of n=1,000 calibration samples resulted in estimated genetic correlations varying within a range of 0.10 around the average, which seems acceptable. Genetic analyses for the human health-related fatty acids C14:0, C16:0, and C18u, and the ratio of saturated to unsaturated fatty acids, showed that replacing observations on fat composition determined by gas chromatography with predictions based on infrared spectra reduced the potential genetic gain to 98, 86, 96, and 99% for the 4 fatty acid traits, respectively, in dairy breeding schemes where progeny testing is practiced. We conclude that
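The calibration-subset experiment can be mimicked in miniature. The sketch below uses an invented one-predictor data generator (not the paper's spectral calibration): it fits a line on n_cal calibration samples and scores it on an independent validation set as the squared correlation between observed and predicted values, which is how a validation r² is typically computed:

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def validation_r2(n_cal, n_val=500, noise=0.5, seed=1):
    """Calibrate on n_cal samples, validate on an independent set;
    return squared correlation of observed vs. predicted values."""
    rng = random.Random(seed)
    def draw(n):
        xs = [rng.uniform(0.0, 10.0) for _ in range(n)]
        ys = [2.0 * x + rng.gauss(0.0, noise) for x in xs]
        return xs, ys
    a, b = fit_line(*draw(n_cal))
    xs, ys = draw(n_val)
    preds = [a + b * x for x in xs]
    mp, my = sum(preds) / n_val, sum(ys) / n_val
    cov = sum((p - mp) * (y - my) for p, y in zip(preds, ys))
    vp = sum((p - mp) ** 2 for p in preds)
    vy = sum((y - my) ** 2 for y in ys)
    return cov * cov / (vp * vy)
```

Repeating `validation_r2` over many random subsets of each size would reproduce the paper's qualitative finding: small calibration sets give a wide spread of r² values that narrows as n_cal grows.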
Yangui, Ahmed; Font, Montserrat Costa; Gil, José María
2013-01-01
Due to increasing interest in understanding the formation of consumers' food choice processes, the hybrid choice model (HCM) has been developed. HCM represents a promising new class of models which merge classic choice models with structural equation models (SEM) for latent variables (LV). Regardless of their conceptual appeal, to date the application of HCM in agro-food marketing remains very scarce. The present work extends previous HCM applications by first estimating a random parame...
Bates, T. S.; Anderson, T. L.; Baynard, T.; Bond, T.; Boucher, O.; Carmichael, G.; Clarke, A.; Erlick, C.; Guo, H.; Horowitz, L.; Howell, S.; Kulkarni, S.; Maring, H.; McComiskey, A.; Middlebrook, A.; Noone, K.; O'Dowd, C. D.; Ogren, J.; Penner, J.; Quinn, P. K.; Ravishankara, A. R.; Savoie, D. L.; Schwartz, S. E.; Shinozuka, Y.; Tang, Y.; Weber, R. J.; Wu, Y.
2006-05-01
The largest uncertainty in the radiative forcing of climate change over the industrial era is that due to aerosols, a substantial fraction of which is the uncertainty associated with scattering and absorption of shortwave (solar) radiation by anthropogenic aerosols in cloud-free conditions (IPCC, 2001). Quantifying and reducing the uncertainty in aerosol influences on climate is critical to understanding climate change over the industrial period and to improving predictions of future climate change for assumed emission scenarios. Measurements of aerosol properties during major field campaigns in several regions of the globe during the past decade are contributing to an enhanced understanding of atmospheric aerosols and their effects on light scattering and climate. The present study, which focuses on three regions downwind of major urban/population centers (North Indian Ocean (NIO) during INDOEX, the Northwest Pacific Ocean (NWP) during ACE-Asia, and the Northwest Atlantic Ocean (NWA) during ICARTT), incorporates understanding gained from field observations of aerosol distributions and properties into calculations of perturbations in radiative fluxes due to these aerosols. This study evaluates the current state of observations and of two chemical transport models (STEM and MOZART). Measurements of burdens, extinction optical depth (AOD), and direct radiative effect of aerosols (DRE - change in radiative flux due to total aerosols) are used as measurement-model check points to assess uncertainties. In-situ measured and remotely sensed aerosol properties for each region (mixing state, mass scattering efficiency, single scattering albedo, and angular scattering properties and their dependences on relative humidity) are used as input parameters to two radiative transfer models (GFDL and University of Michigan) to constrain estimates of aerosol radiative effects, with uncertainties in each step propagated through the analysis. Constraining the radiative transfer
Comparisons of Estimation Procedures for Nonlinear Multilevel Models
Directory of Open Access Journals (Sweden)
Ali Reza Fotouhi
2003-05-01
We introduce General Multilevel Models and discuss the estimation procedures that may be used to fit multilevel models. We apply the proposed procedures to three-level binary data generated in a simulation study. We compare the procedures by two criteria: bias and efficiency. We find that the estimates of the fixed effects and variance components are substantially and significantly biased using Longford's approximation and Goldstein's generalized least squares approaches, as implemented in the software packages VARCL and ML3. These estimates are not significantly biased and are very close to the real values when we use Markov chain Monte Carlo (MCMC) with Gibbs sampling or the nonparametric maximum likelihood (NPML) approach. The Gaussian quadrature (GQ) approach, even with a small number of mass points, results in consistent estimates but is computationally problematic. We conclude that MCMC and NPML are the recommended procedures for fitting multilevel models.
Model-based estimation of individual fitness
Link, W.A.; Cooch, E.G.; Cam, E.
2002-01-01
Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw & Caswell, 1996).
Hidden Markov models estimation and control
Elliott, Robert J; Moore, John B
1995-01-01
As more applications are found, interest in Hidden Markov Models continues to grow. Following comments and feedback from colleagues, students and others working with Hidden Markov Models, the corrected 3rd printing of this volume contains clarifications, improvements and some new material, including results on smoothing for linear Gaussian dynamics. In Chapter 2 the derivations of the basic filters related to the Markov chain are each presented explicitly, rather than as special cases of one general filter. Furthermore, equations for smoothed estimates are given. The dynamics for the Kalman filte
Cvetkovic, V.; Cheng, H.; Widestrand, H.; Byegård, J.; Winberg, A.; Andersson, P.
2007-11-01
Transport and retention of sorbing tracers in a single, altered crystalline rock fracture on a 5 m scale is investigated. We evaluate the results of a comprehensive field study (referred to as Tracer Retention Understanding Experiments, first phase (TRUE-1)), at a 400 m depth of the Äspö Hard Rock Laboratory (Sweden). A total of 16 breakthrough curves are analyzed, from three test configurations using six radioactive tracers with a broad range of sorption properties. A transport-retention model is proposed, and its applicability is assessed based on available data. We find that the conventional model with an asymptotic power law slope of -3/2 (one-dimensional diffusion into an unlimited rock matrix) is a reasonable approximation for the conditions of the TRUE-1 tests. Retention in the altered rock of the rim zone appears to be significantly stronger than implied by retention properties inferred from generic (unaltered) rock samples. The effective physical parameters which control retention (matrix porosity and retention aperture) are comparable for all three test configurations. The most plausible in situ (rim zone) porosity is in the range 1%-2%, which constrains the effective retention aperture to the range 0.2-0.7 mm. For all sorbing tracers the estimated in situ sorption coefficient appears to be larger by at least a factor of 10, compared to the value inferred from through-diffusion tests using unaltered rock samples.
Research on the effect estimation of seismic safety evaluation
Institute of Scientific and Technical Information of China (English)
邹其嘉; 陶裕禄
2004-01-01
Seismic safety evaluation is a basic step in determining the seismic resistance requirements of major construction projects. The effect, especially the economic effect, of seismic safety evaluation has been of general concern. The paper gives a model for estimating the effect of seismic safety evaluation and roughly calculates the economic effect of seismic safety evaluation with some examples.
Estimating Interaction Effects With Incomplete Predictor Variables
Enders, Craig K.; Baraldi, Amanda N.; Cham, Heining
2014-01-01
The existing missing data literature does not provide a clear prescription for estimating interaction effects with missing data, particularly when the interaction involves a pair of continuous variables. In this article, we describe maximum likelihood and multiple imputation procedures for this common analysis problem. We outline 3 latent variable model specifications for interaction analyses with missing data. These models apply procedures from the latent variable interaction literature to analyses with a single indicator per construct (e.g., a regression analysis with scale scores). We also discuss multiple imputation for interaction effects, emphasizing an approach that applies standard imputation procedures to the product of 2 raw score predictors. We thoroughly describe the process of probing interaction effects with maximum likelihood and multiple imputation. For both missing data handling techniques, we outline centering and transformation strategies that researchers can implement in popular software packages, and we use a series of real data analyses to illustrate these methods. Finally, we use computer simulations to evaluate the performance of the proposed techniques. PMID:24707955
Directory of Open Access Journals (Sweden)
Fragoulakis V
2013-06-01
Vassilis Fragoulakis, Nikolaos Maniadakis; National School of Public Health, Department of Health Services Management, Athens, Greece. Objective: To quantify the economic effects of a child conceived by in vitro fertilization (IVF) in terms of net tax revenue from the state's perspective in Greece. Methods: Based on previous international experience, a mathematical model was developed to assess the lifetime productivity of a single individual and his/her lifetime transactions with governmental agencies. The model distinguished among three periods in the economic life cycle of an individual: (1) early life, when the government primarily contributes resources through child tax credits, health care, and educational expenses; (2) employment, when individuals begin returning resources through taxes; and (3) retirement, when the government expends additional resources on pensions and health care. The cost of a live birth with IVF was based on the modification of a previously published model developed by the authors. All outcomes were discounted at a 3% discount rate. The data inputs – namely, the economic or demographic variables – were derived from the National Statistical Secretariat of Greece and other relevant sources. To deal with uncertainty, bias-corrected uncertainty intervals (UIs) were calculated based on 5000 Monte Carlo simulations. In addition, to examine the robustness of our results, other one-way sensitivity analyses were also employed. Results: The cost of IVF per birth was estimated at €17,015 (95% UI: €13,932–€20,200). The average projected income generated by an individual throughout his/her productive life was €258,070 (95% UI: €185,376–€339,831). In addition, his/her life tax contribution was estimated at €133,947 (95% UI: €100,126–€177,375), while the discounted governmental expenses for elderly and underage individuals were €67,624 (95% UI: €55,211–€83,930). Hence, the net present value of IVF was €60
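The life-cycle accounting above reduces to a discounted cash-flow sum at the study's 3% rate. A toy version follows; the yearly figures are invented placeholders, not the study's Greek data:

```python
def npv(flows, rate=0.03):
    """Net present value of yearly net transfers to the state, where
    flows[t] is the net transfer in year t (negative = state spends)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(flows))

# Illustrative life cycle: the state spends during childhood (ages 0-19),
# collects taxes during working years (20-64), and spends again on
# pensions and health care in retirement (65-80). Amounts are made up.
flows = [-3000.0] * 20 + [4000.0] * 45 + [-8000.0] * 16
lifetime_value = npv(flows)
```

The study's point estimate arises from this structure applied to real demographic and fiscal inputs, with 5000 Monte Carlo draws over those inputs to obtain the bias-corrected uncertainty intervals.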
Marginal Maximum Likelihood Estimation of Item Response Models in R
Directory of Open Access Journals (Sweden)
Matthew S. Johnson
2007-02-01
Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
Directory of Open Access Journals (Sweden)
Fang-Rong Yan
2014-01-01
Population pharmacokinetic (PPK) models play a pivotal role in quantitative pharmacology studies and are classically analyzed by nonlinear mixed-effects models based on ordinary differential equations. This paper describes the implementation of stochastic differential equations (SDEs) in population pharmacokinetic models, where parameters are estimated by a novel approximation of the likelihood function. This approximation is constructed by combining the MCMC method used in nonlinear mixed-effects modeling with the extended Kalman filter used in SDE models. The analysis and simulation results show that the performance of the approximation of the likelihood function for the mixed-effects SDE model and the analysis of population pharmacokinetic data is reliable. The results suggest that the proposed method is feasible for the analysis of population pharmacokinetic data.
On Bayes linear unbiased estimation of estimable functions for the singular linear model
Institute of Scientific and Technical Information of China (English)
ZHANG Weiping; WEI Laisheng
2005-01-01
The unique Bayes linear unbiased estimator (Bayes LUE) of estimable functions is derived for the singular linear model. The superiority of Bayes LUE over the ordinary best linear unbiased estimator is investigated under the mean square error matrix (MSEM) criterion.
Ruff, Ryan Richard; Zhen, Chen
2015-05-01
Sugar-sweetened beverages (SSBs) contribute to weight gain and increase the risk of obesity. In this article, we determine the effects of an innovative SSB tax on weight and obesity in New York City adults. Dynamic weight loss models were used to estimate the effects of an expected 5800-calorie reduction resulting from an SSB tax on weight and obesity. Baseline data were derived from the New York City Community Health Survey. One, five, and 10-year simulations of weight loss were performed. Calorie reductions resulted in a per-person weight loss of 0.46 kg in year 1 and 0.92 kg in year 10. A total of 5,531,059 kg was expected to be lost over 10 years when weighted to the full New York City adult population. Approximately 50% of overall bodyweight loss occurred within the first year, and 95% within 5 years. Results showed consistent but nonsignificant decreases in obesity prevalence. SSB taxes may be viable strategies to reduce obesity when combined with other interventions to maximize effects in the population.
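The abstract's dynamic weight-loss model is not specified. A crude first-order sketch, using the common 7700 kcal-per-kg heuristic and an exponential approach to steady state (both assumptions here, not the authors' model), reproduces the qualitative pattern of most loss accruing in the first few years:

```python
import math

KCAL_PER_KG = 7700.0  # textbook heuristic; the study uses a dynamic model

def weight_loss_by_year(yearly_calorie_deficit, years, tau=1.5):
    """Cumulative weight loss (kg) at the end of each year, approaching
    the steady-state loss exponentially with time constant tau (years)."""
    steady_state = yearly_calorie_deficit / KCAL_PER_KG
    return [steady_state * (1.0 - math.exp(-t / tau)) for t in range(1, years + 1)]

# Illustrative deficit; not calibrated to the NYC survey data.
loss = weight_loss_by_year(5800.0, 10)
```

With `tau=1.5` years, over 90% of the eventual loss has accrued by year 5, qualitatively matching the abstract's statement that roughly 95% of the loss occurs within 5 years; the absolute kilogram figures here are not the study's.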
Missing data estimation in fMRI dynamic causal modeling.
Zaghlool, Shaza B; Wyatt, Christopher L
2014-01-01
Dynamic Causal Modeling (DCM) can be used to quantify cognitive function in individuals as effective connectivity. However, ambiguity among subjects in the number and location of discernible active regions prevents all candidate models from being compared in all subjects, precluding the use of DCM as an individual cognitive phenotyping tool. This paper proposes a solution to this problem by treating missing regions in the first-level analysis as missing data, and performing estimation of the time course associated with any missing region using one of four candidate methods: zero-filling, average-filling, noise-filling using a fixed stochastic process, or one estimated using expectation-maximization. The effect of this estimation scheme was analyzed by treating it as a preprocessing step to DCM and observing the resulting effects on model evidence. Simulation studies show that estimation using expectation-maximization yields the highest classification accuracy using a simple loss function and highest model evidence, relative to other methods. This result held for various dataset sizes and varying numbers of candidate models. In real data, application to Go/No-Go and Simon tasks allowed computation of signals from the missing nodes and the consequent computation of model evidence in all subjects compared to 62 and 48 percent respectively if no preprocessing was performed. These results demonstrate the face validity of the preprocessing scheme and open the possibility of using single-subject DCM as an individual cognitive phenotyping tool.
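The three simple filling strategies can be sketched directly; the function name and data layout below are illustrative, and the expectation-maximization variant is omitted for brevity:

```python
import random
import statistics

def fill_missing(series_bank, method="average", seed=0):
    """Estimate a missing region's time course from the observed regions.
    series_bank: list of observed time courses of equal length.
    Mirrors three of the four candidate methods: zero-filling,
    average-filling, and noise-filling with a fixed stochastic process."""
    n = len(series_bank[0])
    if method == "zero":
        return [0.0] * n
    if method == "average":
        # pointwise mean across the observed regions
        return [statistics.fmean(col) for col in zip(*series_bank)]
    if method == "noise":
        # white Gaussian noise scaled to the pooled observed variability
        rng = random.Random(seed)
        sd = statistics.pstdev([v for s in series_bank for v in s])
        return [rng.gauss(0.0, sd) for _ in range(n)]
    raise ValueError(method)

bank = [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]]
filled = fill_missing(bank, "average")  # [2.0, 3.0, 4.0]
```

The filled series would then be inserted for the missing node before the DCM first-level analysis, allowing model evidence to be computed in every subject.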
Yu, Wenhua; Li, Changping; Fu, Xiaomeng; Cui, Zhuang; Liu, Xiaoqian; Fan, Linlin; Zhang, Guan; Ma, Jun
2014-01-01
Based on the important changes in South Africa since 2009 and the Antiretroviral Treatment Guideline 2013 recommendations, we explored the cost-effectiveness of different strategy combinations according to the South African HIV-infected mothers' prompt treatments and different feeding patterns. A decision analytic model was applied to simulate cohorts of 10,000 HIV-infected pregnant women to compare the cost-effectiveness of two different HIV strategy combinations: (1) Women were tested and treated promptly at any time during pregnancy (Promptly treated cohort). (2) Women did not get testing or treatment until after delivery and appropriate standard treatments were offered as a remedy (Remedy cohort). Replacement feeding or exclusive breastfeeding was assigned in both strategies. Outcome measures included the number of infant HIV cases averted, the cost per infant HIV case averted, and the cost per life year (LY) saved from the interventions. One-way and multivariate sensitivity analyses were performed to estimate the uncertainty ranges of all outcomes. The remedy strategy is not particularly cost-effective. Compared with the untreated baseline cohort which leads to 1127 infected infants, 698 (61.93%) and 110 (9.76%) of pediatric HIV cases are averted in the promptly treated cohort and remedy cohort respectively, with incremental cost-effectiveness of $68.51 and $118.33 per LY, respectively. With or without the antenatal testing and treatments, breastfeeding is less cost-effective ($193.26 per LY) than replacement feeding ($134.88 per LY), without considering the impact of willingness to pay. Compared with the prompt treatments, remedy in labor or during the postnatal period is less cost-effective. Antenatal HIV testing and prompt treatments and avoiding breastfeeding are the best strategies. Although encouraging mothers to practice replacement feeding in South Africa is far from easy and the advantages of breastfeeding can not be ignored, we still suggest
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
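For intuition, the l1-penalized maximum likelihood objective can be written down and minimised by brute force in the smallest nontrivial case: a 2×2 precision matrix with the diagonal fixed at one. This toy grid search is not either of the paper's algorithms (block coordinate descent or Nesterov's method); it only illustrates how the penalty drives the off-diagonal entry to zero:

```python
import math

def penalized_negloglik(theta, S, lam):
    """Negative penalized Gaussian log-likelihood for a 2x2 precision
    matrix encoded as theta = (a, b, c) -> [[a, b], [b, c]], with sample
    covariance S:  -log det(theta) + trace(S @ theta) + lam * ||theta||_1."""
    a, b, c = theta
    det = a * c - b * b
    if det <= 0.0 or a <= 0.0:
        return float("inf")  # outside the positive-definite cone
    trace_term = S[0][0] * a + 2.0 * S[0][1] * b + S[1][1] * c
    return -math.log(det) + trace_term + lam * (abs(a) + 2.0 * abs(b) + abs(c))

# Brute-force the off-diagonal entry for a weakly correlated S:
S = [[1.0, 0.15], [0.15, 1.0]]
best = min(
    ((penalized_negloglik((1.0, b / 100.0, 1.0), S, lam=0.3), b / 100.0)
     for b in range(-90, 91)),
    key=lambda t: t[0],
)
# With a strong enough penalty the optimal off-diagonal entry is exactly 0,
# i.e. the estimated graphical model declares the two variables independent.
```

At `lam=0.0` the same search instead recovers a negative off-diagonal entry (the unpenalized inverse of S), showing the sparsity is entirely due to the penalty.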
Caricchi, Luca; Simpson, Guy; Schaltegger, Urs
2016-04-01
Magma fluxes in the Earth's crust play an important role in regulating the relationship between the frequency and magnitude of volcanic eruptions, the chemical evolution of magmatic systems and the distribution of geothermal energy and mineral resources on our planet. Therefore, quantifying magma productivity and the rate of magma transfer within the crust can provide valuable insights to characterise the long-term behaviour of volcanic systems and to unveil the link between the physical and chemical evolution of magmatic systems and their potential to generate resources. We performed thermal modelling to compute the temperature evolution of crustal magmatic intrusions with different final volumes assembled over a variety of timescales (i.e., at different magma fluxes). Using these results, we calculated synthetic populations of zircon ages assuming the number of zircons crystallising in a given time period is directly proportional to the volume of magma at temperature within the zircon crystallisation range. The statistical analysis of the calculated populations of zircon ages shows that the mode, median and standard deviation of the populations vary coherently as functions of the rate of magma injection and the final volume of the crustal intrusions. Therefore, the statistical properties of the population of zircon ages can add useful constraints to quantify the rate of magma injection and the final volume of magmatic intrusions. Here, we explore the effect of different ranges of zircon saturation temperature, intrusion geometry, and wall rock temperature on the calculated distributions of zircon ages. Additionally, we determine the effect of undersampling on the variability of the mode, median and standard deviation of calculated populations of zircon ages to estimate the minimum number of zircon analyses necessary to obtain meaningful estimates of magma flux and final intrusion volume.
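The age-population construction can be sketched as follows, assuming one zircon per unit of magma volume inside the crystallisation window at each time step; the proportionality constant and the cooling history are invented here, whereas the study derives the volume-in-window series from thermal modelling:

```python
import statistics

def zircon_age_stats(volume_in_window, dt=1.0):
    """Given the magma volume inside the zircon crystallisation window at
    each time step, build a synthetic zircon-age population (zircon count
    per step proportional to that volume) and summarise it. Ages are
    measured back from the end of the simulation, in units of dt."""
    total_steps = len(volume_in_window)
    ages = []
    for step, vol in enumerate(volume_in_window):
        count = round(vol)  # assumed: 1 zircon per unit volume
        age = (total_steps - step) * dt
        ages.extend([age] * count)
    return {
        "mode": statistics.mode(ages),
        "median": statistics.median(ages),
        "stdev": statistics.pstdev(ages),
    }

# Rapid early assembly: most volume sits in the window early on,
# so the synthetic ages cluster at the old end of the population.
stats_fast = zircon_age_stats([10, 8, 5, 2, 1])
```

Comparing such summaries across injection rates and final volumes is what lets the mode, median and standard deviation serve as constraints on magma flux; subsampling `ages` would likewise probe the undersampling effect the authors quantify.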
Bayesian Estimation of a Mixture Model
Directory of Open Access Journals (Sweden)
Ilhem Merah
2015-05-01
Full Text Available We present the properties of a bathtub-curve reliability model that has both sufficient adaptability and a minimal number of parameters, introduced by Idée and Pierrat (2010). It is a mixture of a Gamma distribution G(2, 1/θ) and a new distribution L(θ). We are interested in Bayesian estimation of the parameters and survival function of this model with a squared-error loss function and a non-informative prior, using the approximations of Lindley (1980) and Tierney and Kadane (1986). Using a statistical sample of 60 failure data relative to a technical device, we illustrate the results derived. Based on a simulation study, comparisons are made between these two methods and the maximum likelihood method for this two-parameter model.
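Under squared-error loss, the Bayes estimator is the posterior mean. As a minimal numerical sketch (not the Lindley or Tierney-Kadane approximations used in the paper, and with made-up failure times), the posterior mean of θ for the Gamma(2, 1/θ) component under a flat prior can be computed on a grid:

```python
import math

data = [1.2, 0.8, 2.5, 1.7, 0.9, 3.1]   # illustrative failure times (assumed)

def gamma2_pdf(x, theta):
    # Gamma(shape=2, rate=1/theta) density, one component of the mixture
    return x * math.exp(-x / theta) / theta**2

def log_likelihood(theta):
    return sum(math.log(gamma2_pdf(x, theta)) for x in data)

# Grid posterior with a flat (non-informative) prior on theta in (0, 10);
# the posterior mean is the squared-error-loss Bayes estimate.
grid = [0.05 * i for i in range(1, 200)]
w = [math.exp(log_likelihood(t)) for t in grid]
posterior_mean = sum(t * wi for t, wi in zip(grid, w)) / sum(w)
print(round(posterior_mean, 3))
```

Analytic approximations such as Lindley's replace this brute-force integration when the posterior is not tractable on a grid.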
Hierarchical Boltzmann simulations and model error estimation
Torrilhon, Manuel; Sarna, Neeraj
2017-08-01
A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while subsequent refinement allows one to successively improve the result toward the full Boltzmann solution. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof-of-concept of such a framework. All representations of the hierarchy are rotationally invariant and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlight the relevance of stability of boundary conditions on curved domains. The hierarchical nature of the method also allows us to provide model error estimates by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.
Maximum likelihood estimation for semiparametric density ratio model.
Diao, Guoqing; Ning, Jing; Qin, Jing
2012-06-27
In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.
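The "natural connection with generalized linear models" mentioned above can be sketched in the two-sample special case, where the density ratio g(y)/f(y) = exp(a + b·y) can be fit through a logistic regression on the sample labels. Everything below (the simulated data, the pure-Python Newton solver) is an illustrative assumption, not the paper's nonparametric-likelihood procedure:

```python
import math
import random

random.seed(1)

# Two samples whose true log density ratio is linear in y:
# N(1,1) vs N(0,1) gives log(g/f) = y - 0.5, so the slope b should be ~1.
n0, n1 = 500, 500
y = [random.gauss(0.0, 1.0) for _ in range(n0)] + \
    [random.gauss(1.0, 1.0) for _ in range(n1)]
lab = [0] * n0 + [1] * n1

# Newton-Raphson for logistic regression of label on y (intercept a, slope b)
a, b = 0.0, 0.0
for _ in range(25):
    ga = gb = haa = hab = hbb = 0.0
    for yi, li in zip(y, lab):
        p = 1.0 / (1.0 + math.exp(-(a + b * yi)))
        ga += li - p                 # gradient, intercept
        gb += (li - p) * yi          # gradient, slope
        wgt = p * (1.0 - p)          # Fisher information weights
        haa += wgt; hab += wgt * yi; hbb += wgt * yi * yi
    det = haa * hbb - hab * hab
    a += (hbb * ga - hab * gb) / det
    b += (-hab * ga + haa * gb) / det

print(round(b, 2))
```

The slope recovered by the logistic fit estimates the density-ratio parameter; the full nonparametric likelihood additionally recovers the baseline density itself.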
Application of variance components estimation to calibrate geoid error models.
Guo, Dong-Mei; Xu, Hou-Ze
2015-01-01
The method of using Global Positioning System-leveling data to obtain orthometric heights has been well studied. A simple formulation of the weighted least squares problem was presented in an earlier work. This formulation allows one to directly employ errors-in-variables models that completely describe the covariance matrices of the observables. However, an important question, namely what accuracy level can be achieved, has not yet been satisfactorily answered by this traditional formulation. One of the main reasons for this is the incorrectness of the stochastic models used in the adjustment, which in turn leaves room for improving the stochastic models of the measurement noise. Therefore, determining the stochastic model of the observables in a combined adjustment with heterogeneous height types is the main focus of this paper. Firstly, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least squares adjustment of ellipsoidal, orthometric and gravimetric geoid heights. Specifically, the iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each type of heterogeneous observation. Secondly, two different statistical models are presented to illustrate the theory. The first directly uses the errors-in-variables covariances as a priori covariance matrices, and the second analyzes the biases of the variance components and then proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in calibrating the geoid error model within the combined adjustment.
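The iterative logic of variance component estimation can be sketched in a deliberately simplified form: two heterogeneous datasets observe the same quantity with unknown noise variances, and the adjustment alternates between weighted least squares and re-estimating each variance component from its group's residuals. This is a toy stand-in for iterative MINQUE, with assumed noise levels, not the paper's geoid adjustment:

```python
import random

random.seed(2)

truth = 10.0   # the common height being observed (assumed)
g1 = [truth + random.gauss(0, 0.02) for _ in range(50)]  # precise group
g2 = [truth + random.gauss(0, 0.10) for _ in range(50)]  # noisy group

v1, v2 = 1.0, 1.0          # initial variance components
for _ in range(20):
    # (1) weighted least squares estimate with current components
    x = (sum(g1) / v1 + sum(g2) / v2) / (len(g1) / v1 + len(g2) / v2)
    # (2) update each variance component from its own residuals
    v1 = sum((o - x) ** 2 for o in g1) / len(g1)
    v2 = sum((o - x) ** 2 for o in g2) / len(g2)

print(round(x, 3), round(v1 ** 0.5, 3), round(v2 ** 0.5, 3))
```

The recovered standard deviations approach the true per-group noise levels, which is the calibration effect the abstract describes; MINQUE additionally accounts for the redundancy of each observation group.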
Estimating Equilibrium Effects of Job Search Assistance
DEFF Research Database (Denmark)
Gautier, Pieter; Muller, Paul; van der Klaauw, Bas
that the nonparticipants in the experiment regions find jobs more slowly after the introduction of the activation program (relative to workers in other regions). We then estimate an equilibrium search model. This model shows that a large-scale rollout of the activation program decreases welfare, while a standard partial...
Kooperman, G. J.; Pritchard, M. S.; Ghan, S. J.; Wang, M.; Somerville, R. C.; Russell, L. M.
2012-12-01
The newest version of NCAR's Community Atmosphere Model (CAM5) produces a strong global mean aerosol indirect effect of -1.54 W/m2. However, when CAM5 is modified to include resolved scale convective processes in a new multi-scale modeling framework (MMF) the indirect aerosol forcing is reduced by almost half (-0.80 W/m2). In the MMF approach, conventional cloud parameterizations are replaced by embedded cloud-resolving models (CRM) in each grid column of CAM5, and aerosol on the global grid is linked to explicitly resolved CRM scale relative humidity and updraft velocities to determine the number of aerosol particles that activate to form cloud droplets at CRM resolution. However, the increased computational expense incurred by resolving convective processes makes long integrations with the MMF prohibitively expensive. This is a challenge for investigating aerosol indirect effects because it typically requires integrating over long simulations to isolate statistically significant differences in cloud radiative forcing due to anthropogenic aerosol perturbations from natural variability. Here an alternative approach is explored, which implements Newtonian relaxation (nudging) to constrain simulations with both pre-industrial and present-day aerosol emissions toward identical meteorological conditions, thus reducing the influences of natural variability so that the two models can be compared in short simulations. Using this approach in CAM5, we find high pattern correlations between one-year averages of aerosol indirect effect and the pattern of the signal produced in a 100-year average. Estimates of aerosol indirect effects in CAM5 with and without nudging have mean values and 95% confidence intervals of -1.54 ± 0.02 W/m2 and -1.63 ± 0.17 W/m2, respectively. The approach is applied in the MMF to investigate the mechanisms responsible for producing a weaker forcing than CAM5. These include weaker responses in liquid water content and droplet number concentrations
A Biomechanical Modeling Guided CBCT Estimation Technique.
Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing
2017-02-01
Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks.
Adaptive Estimation of Heteroscedastic Money Demand Model of Pakistan
Directory of Open Access Journals (Sweden)
Muhammad Aslam
2007-07-01
Full Text Available For the problem of estimating the money demand model of Pakistan, the money supply (M1) shows heteroscedasticity of an unknown form. For the estimation of such a model, we compare two adaptive estimators with the ordinary least squares estimator and show the attractive performance of the adaptive estimators, namely, the nonparametric kernel estimator and the nearest neighbour regression estimator. These comparisons are made on the basis of the standard errors of the estimated coefficients, the standard error of regression, the Akaike Information Criterion (AIC) value, and the Durbin-Watson statistic for autocorrelation. We further show that the nearest neighbour regression estimator performs better than the nonparametric kernel estimator.
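One kernel-type adaptive estimator can be sketched as follows: fit OLS, smooth the squared residuals with a Gaussian kernel to estimate the variance function, then re-fit by weighted least squares. The data-generating process below is an assumption for illustration, not the Pakistani money-demand data:

```python
import math
import random

random.seed(3)

n = 200
x = [random.uniform(0, 10) for _ in range(n)]
# heteroscedastic errors: the noise standard deviation grows with x
y = [2.0 + 0.5 * xi + random.gauss(0, 0.2 + 0.3 * xi) for xi in x]

def fit_line(xs, ys, w=None):
    """(Weighted) least squares for y = a + b*x."""
    w = w or [1.0] * len(xs)
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, xs))
    sy = sum(wi * yi for wi, yi in zip(w, ys))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, xs))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    return (sy - b * sx) / sw, b

a0, b0 = fit_line(x, y)                                  # OLS first stage
res2 = [(yi - a0 - b0 * xi) ** 2 for xi, yi in zip(x, y)]

def kernel_var(x0, h=1.0):
    """Nadaraya-Watson smooth of squared residuals -> local variance."""
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
    return sum(wi * ri for wi, ri in zip(w, res2)) / sum(w)

weights = [1.0 / kernel_var(xi) for xi in x]
a1, b1 = fit_line(x, y, weights)                         # adaptive WLS
print(round(b0, 2), round(b1, 2))
```

The nearest-neighbour variant replaces the Gaussian kernel with an average over the k nearest observations.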
Estimation of Model Parameters for Steerable Needles
Park, Wooram; Reed, Kyle B.; Okamura, Allison M.; Chirikjian, Gregory S.
2010-01-01
Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%. PMID:21643451
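The covariance-matching idea can be shown with a deliberately simple stand-in: a 1D diffusion whose closed-form tip-deflection variance after n steps is n·λ, so the noise parameter λ is estimated by matching that closed-form value to the empirically observed variance. This toy replaces the paper's stochastic needle kinematics entirely:

```python
import random
import statistics

random.seed(4)

N_STEPS, TRUE_LAM = 100, 0.04   # insertion steps and true noise level (assumed)

def simulate_tip(lam):
    """1D white-noise 'tip deflection' after N_STEPS insertion steps."""
    d = 0.0
    for _ in range(N_STEPS):
        d += random.gauss(0.0, lam ** 0.5)
    return d

# "Experimental" repeated insertions and their empirical covariance
tips = [simulate_tip(TRUE_LAM) for _ in range(2000)]
emp_var = statistics.pvariance(tips)

# Match the closed-form variance N_STEPS*lam to the empirical one over a grid
grid = [0.001 * i for i in range(1, 101)]
lam_hat = min(grid, key=lambda lam: (N_STEPS * lam - emp_var) ** 2)
print(round(lam_hat, 3))
```

In the paper the same matching is done in several dimensions at once (full pose covariance, three noise parameters), but the estimation principle is the same.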
Coupling Hydrologic and Hydrodynamic Models to Estimate PMF
Felder, G.; Weingartner, R.
2015-12-01
Most sophisticated probable maximum flood (PMF) estimations derive the PMF from the probable maximum precipitation (PMP) by applying deterministic hydrologic models calibrated with observed data. This method is based on the assumption that the hydrological system is stationary, meaning that the system behaviour during the calibration period or the calibration event is presumed to be the same as it is during the PMF. However, as soon as a catchment-specific threshold is reached, the system is no longer stationary. At or beyond this threshold, retention areas, new flow paths, and changing runoff processes can strongly affect downstream peak discharge. These effects can be accounted for by coupling hydrologic and hydrodynamic models, a technique that is particularly promising when the expected peak discharge may considerably exceed the observed maximum discharge. In such cases, the coupling of hydrologic and hydraulic models has the potential to significantly increase the physical plausibility of PMF estimations. This procedure ensures both that the estimated extreme peak discharge does not exceed the physical limit based on riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered. Our study discusses the prospect of considering retention effects on PMF estimations by coupling hydrologic and hydrodynamic models. This method is tested by forcing PREVAH, a semi-distributed deterministic hydrological model, with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to externally force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). Finally, the PMF estimation results obtained using the coupled modelling approach are compared to the results obtained using ordinary hydrologic modelling.
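The dampening effect of retention on peak discharge can be illustrated with a toy routing step: a synthetic extreme hydrograph is passed through a linear storage reservoir standing in for inundation and retention areas. This is an illustrative sketch, not the PREVAH/BASEMENT-ETH model chain:

```python
# Synthetic hydrograph in m^3/s: base flow, steep rising limb, slower recession
inflow = [10.0] * 5 \
       + [10.0 + 9.0 * i for i in range(1, 11)] \
       + [100.0 - 4.5 * i for i in range(1, 21)] \
       + [10.0] * 10

k = 5.0                      # storage constant in time steps (assumed)
storage, outflow = 0.0, []
for q_in in inflow:
    # linear reservoir: dS = inflow - S/k, outflow = S/k
    storage += q_in - storage / k
    outflow.append(storage / k)

print(round(max(inflow), 1), round(max(outflow), 1))
```

The routed peak is well below the inflow peak, which is the physically plausible attenuation the coupled approach is designed to capture.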
PARAMETER ESTIMATION IN BREAD BAKING MODEL
Directory of Open Access Journals (Sweden)
Hadiyanto Hadiyanto
2012-05-01
Full Text Available Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first, the heat and mass transfer related parameters, then the parameters related to product transformations, and finally the product quality parameters. There was fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted well the behavior under dynamic convective operation and under combined convective and microwave operation. It is expected that the agreement between the model and the baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels.
The Effect of Surface Heterogeneity on Cloud Absorption Estimates
Chiu, Jui-Yuan C.; Marshak, Alexander; Wiscombe, Warren J.
2004-01-01
This study presents a systematic and quantitative analysis of the effect of inhomogeneous surface albedo on shortwave cloud absorption estimates. We use 3D radiative transfer modeling with increasingly complex clouds over a simplified surface to calculate cloud absorption. We find that averaging surface albedo always underestimates cloud absorption, and thus accounting for surface heterogeneity always enhances cloud absorption. However, the impact on cloud absorption estimates is not enough to explain the discrepancy between measured and model-calculated shortwave cloud absorption.
DEFF Research Database (Denmark)
Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.
2011-01-01
Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model to simulate the runoff from a small catchment in Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate...... the uncertainty of the weather radar rainfall input. The main finding of this work is that the input uncertainty propagates through the urban drainage model with significant effects on the model result. The GLUE methodology is in general a usable way to explore this uncertainty, although the exact width...... of the prediction bands can be questioned, due to the subjective nature of the method. Moreover, the method also gives very useful information about the model and parameter behaviour....
Development on electromagnetic impedance function modeling and its estimation
Energy Technology Data Exchange (ETDEWEB)
Sutarno, D., E-mail: Sutarno@fi.itb.ac.id [Earth Physics and Complex System Division Faculty of Mathematics and Natural Sciences Institut Teknologi Bandung (Indonesia)
2015-09-30
Today, electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration was forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, important and difficult problems obviously remain to be solved concerning our ability to collect, process and interpret MT and CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, as well as some improvements in the estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite element numerical modeling of the impedances is developed based on the edge element method. In the CSAMT case, the efforts were focused on accommodating the non-plane-wave problem in the corresponding impedance functions. Concerning the estimation of MT and CSAMT impedance functions, research was focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform operating on the causal transfer functions was used to deal with outliers (abnormal data) which are frequently superimposed on the normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models, while the full-solution-based modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied to all measurement zones, including near-, transition
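The robust M-estimation idea mentioned above can be sketched on a toy scalar transfer-function problem: iteratively reweighted least squares with Huber weights downweights the outliers that would wreck an ordinary least squares estimate. The data, the true transfer value, and the tuning constant are all illustrative assumptions:

```python
import random

random.seed(7)

# Toy scalar "transfer function": y = Z * x + noise, with gross outliers
true_Z = 2.0
x = [random.uniform(-1, 1) for _ in range(100)]
y = [true_Z * xi + random.gauss(0, 0.1) for xi in x]
for i in range(0, 100, 17):
    y[i] += random.choice([-5.0, 5.0])     # superimposed abnormal data

def huber_weight(r, c=0.3):
    """Huber psi-function weight: 1 inside the threshold, c/|r| outside."""
    return 1.0 if abs(r) <= c else c / abs(r)

# OLS starting value, then iteratively reweighted least squares
Z = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
for _ in range(20):
    w = [huber_weight(yi - Z * xi) for xi, yi in zip(x, y)]
    Z = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y)) / \
        sum(wi * xi * xi for wi, xi in zip(w, x))

print(round(Z, 2))
```

The bounded influence of the Huber weights is what makes the impedance estimates resistant to the abnormal noise bursts described in the abstract.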
Perspectives on Modelling BIM-enabled Estimating Practices
Directory of Open Access Journals (Sweden)
Willy Sher
2014-12-01
Full Text Available BIM-enabled estimating processes do not replace or provide a substitute for the traditional approaches used in the architecture, engineering and construction industries. This paper explores the impact of BIM on these traditional processes. It identifies differences between the approaches used with BIM and other conventional methods, and between the various construction professionals that prepare estimates. We interviewed 17 construction professionals from client organizations, contracting organizations, consulting practices and specialist-project firms. Our analyses highlight several logical relationships between estimating processes and BIM attributes. Estimators need to respond to the challenges BIM poses to traditional estimating practices. BIM-enabled estimating circumvents long-established conventions and traditional approaches, and focuses on data management. Consideration needs to be given to the model data required for estimating, to the means by which these data may be harnessed when exported, to the means by which the integrity of model data are protected, to the creation and management of tools that work effectively and efficiently in multi-disciplinary settings, and to approaches that narrow the gap between virtual reality and actual reality. Areas for future research are also identified in the paper.
Estimating the effectiveness of further sampling in species inventories
Keating, K.A.; Quinn, J.F.; Ivie, M.A.; Ivie, L.L.
1998-01-01
Estimators of the number of additional species expected in the next n samples offer a potentially important tool for improving cost-effectiveness of species inventories but are largely untested. We used Monte Carlo methods to compare 11 such estimators, across a range of community structures and sampling regimes, and validated our results, where possible, using empirical data from vascular plant and beetle inventories from Glacier National Park, Montana, USA. We found that B. Efron and R. Thisted's 1976 negative binomial estimator was most robust to differences in community structure and that it was among the most accurate estimators when sampling was from model communities with structures resembling the large, heterogeneous communities that are the likely targets of major inventory efforts. Other estimators may be preferred under specific conditions, however. For example, when sampling was from model communities with highly even species-abundance distributions, estimates based on the Michaelis-Menten model were most accurate; when sampling was from moderately even model communities with S=10 species or communities with highly uneven species-abundance distributions, estimates based on Gleason's (1922) species-area model were most accurate. We suggest that use of such methods in species inventories can help improve cost-effectiveness by providing an objective basis for redirecting sampling to more-productive sites, methods, or time periods as the expectation of detecting additional species becomes unacceptably low.
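The Michaelis-Menten variant mentioned above can be sketched directly: fit the accumulation curve S(n) = Smax·n/(B+n) to observed richness, then predict how many additional species the next dn samples should add. The accumulation data and the grid-search fit below are illustrative assumptions:

```python
# (number of samples, cumulative species observed) -- assumed inventory data
obs = [(5, 22), (10, 34), (20, 46), (40, 58), (80, 66)]

def sse(smax, b):
    """Sum of squared errors of the Michaelis-Menten curve against obs."""
    return sum((s - smax * n / (b + n)) ** 2 for n, s in obs)

# brute-force grid fit (a proper fit would use nonlinear least squares)
smax, b = min(((sm, bb) for sm in range(60, 121) for bb in range(1, 61)),
              key=lambda p: sse(*p))

n_now, s_now = obs[-1]
dn = 40   # how many further samples we are considering
extra = smax * (n_now + dn) / (b + n_now + dn) - smax * n_now / (b + n_now)
print(smax, b, round(extra, 1))
```

When `extra` drops below the cost-justified threshold, sampling effort can be redirected, which is exactly the decision rule the abstract proposes.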
Bayesian parameter estimation for nonlinear modelling of biological pathways
Directory of Open Access Journals (Sweden)
Ghasemi Omid
2011-12-01
Full Text Available Abstract Background The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capabilities for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of its high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. Results We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal to noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly
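The pipeline described in the Results (RK4 discretization of a Hill-equation ODE, then MCMC on a Hill parameter) can be sketched end to end. The ODE, the input signal, and all parameter values below are illustrative assumptions, not the LV/MI pathway model:

```python
import math
import random

random.seed(5)

def hill_rhs(y, t, K):
    """Toy pathway ODE: Hill-type activation minus first-order decay."""
    u = 1.0 + 0.5 * math.sin(0.5 * t)          # assumed input signal
    return u**2 / (K**2 + u**2) - 0.3 * y

def rk4_traj(K, y0=0.0, dt=0.1, n=200):
    """Runge-Kutta 4 turns the ODE into a difference equation in y."""
    ys, y, t = [], y0, 0.0
    for _ in range(n):
        k1 = hill_rhs(y, t, K)
        k2 = hill_rhs(y + 0.5 * dt * k1, t + 0.5 * dt, K)
        k3 = hill_rhs(y + 0.5 * dt * k2, t + 0.5 * dt, K)
        k4 = hill_rhs(y + dt * k3, t + dt, K)
        y += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
        ys.append(y)
    return ys

true_K, sigma = 1.5, 0.05
data = [y + random.gauss(0, sigma) for y in rk4_traj(true_K)]

def log_post(K):                                # flat prior on K > 0
    if K <= 0:
        return -math.inf
    return -sum((d - y) ** 2 for d, y in zip(data, rk4_traj(K))) / (2 * sigma**2)

# random-walk Metropolis on the Hill parameter K
K, lp, samples = 1.0, log_post(1.0), []
for _ in range(2000):
    Kp = K + random.gauss(0, 0.05)
    lpp = log_post(Kp)
    if math.log(random.random()) < lpp - lp:
        K, lp = Kp, lpp
    samples.append(K)

post_mean = sum(samples[500:]) / len(samples[500:])
print(round(post_mean, 2))
```

The posterior mean recovers the true Hill parameter from the noisy trajectory, which is the verification step the abstract describes (there done for two parameters at once).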
Near Shore Wave Modeling and applications to wave energy estimation
Zodiatis, G.; Galanis, G.; Hayes, D.; Nikolaidis, A.; Kalogeri, C.; Adam, A.; Kallos, G.; Georgiou, G.
2012-04-01
The estimation of the wave energy potential at the European coastline has received increased attention in recent years as a result of the adoption of novel policies in the energy market, the concerns for global warming and the nuclear energy security problems. Within this framework, numerical wave modeling systems play a primary role in the accurate description of the wave climate and microclimate that is a prerequisite for any wave energy assessment study. In the present work, two of the most popular wave models are used for the estimation of the wave parameters at the coastline of Cyprus: the latest parallel version of the wave model WAM (ECMWF version), which employs a new parameterization of shallow water effects, and the SWAN model, classically used for near-shore wave simulations. The results obtained from the wave models near shore are studied from an energy-estimation point of view: the wave parameters that mainly affect the temporal and spatial distribution of the energy, that is, the significant wave height and the mean wave period, are statistically analyzed, focusing on possible differences captured by the two models. Moreover, the wave spectrum distributions prevailing in different areas are discussed, contributing, in this way, to the wave energy assessment in the area. This work is a part of two European projects focusing on the estimation of the wave energy distribution around Europe: the MARINA platform (http://www.marina-platform.info/ index.aspx) and the Ewave (http://www.oceanography.ucy.ac.cy/ewave/) projects.
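The two parameters singled out above, significant wave height and a wave period, fix the deep-water wave energy flux per metre of wave crest via the standard formula P = ρg²Hs²Te/(64π). A small sketch (the sea state values are illustrative; strictly the formula uses the energy period Te, for which the mean period is only an approximation):

```python
import math

def wave_power_kw_per_m(hs, te, rho=1025.0, g=9.81):
    """Deep-water wave energy flux per metre of crest, in kW/m.

    hs: significant wave height [m]; te: energy period [s];
    rho: seawater density [kg/m^3]; g: gravity [m/s^2].
    """
    return rho * g**2 * hs**2 * te / (64 * math.pi) / 1000.0

# illustrative moderate Eastern-Mediterranean sea state (assumed values)
print(round(wave_power_kw_per_m(hs=1.0, te=5.0), 2))   # ~2.45 kW/m
```

The quadratic dependence on Hs is why the statistics of significant wave height dominate any wave energy assessment.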
Shape parameter estimate for a glottal model without time position
Degottex, Gilles; Roebel, Axel; Rodet, Xavier
2009-01-01
IRCAM internal reference: Degottex09a; National audience; From a recorded speech signal, we propose to estimate a shape parameter of a glottal model without estimating its time position. Indeed, the literature usually proposes to estimate the time position first (e.g., by detecting Glottal Closure Instants). The vocal-tract filter estimate is expressed as a minimum-phase envelope estimation after removing the glottal model and a standard lips radiation model. Since this filter is mainly b...
Estimating the ETAS model from an early aftershock sequence
Omi, Takahiro; Ogata, Yosihiko; Hirata, Yoshito; Aihara, Kazuyuki
2014-02-01
Forecasting aftershock probabilities, as early as possible after a main shock, is required to mitigate seismic risks in the disaster area. In general, aftershock activity can be complex, including secondary aftershocks or even triggering larger earthquakes. However, this early forecasting implementation has been difficult because numerous aftershocks are unobserved immediately after the main shock due to dense overlapping of seismic waves. Here we propose a method for estimating parameters of the epidemic type aftershock sequence (ETAS) model from incompletely observed aftershocks shortly after the main shock by modeling an empirical feature of data deficiency. Such an ETAS model can effectively forecast the following aftershock occurrences. For example, the ETAS model estimated from the first 24 h data after the main shock can well forecast secondary aftershocks after strong aftershocks. This method can be useful in early and unbiased assessment of the aftershock hazard.
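The ETAS model being estimated has a standard conditional intensity, λ(t) = μ + Σ_{t_i<t} K·exp(α(m_i − M0))·(t − t_i + c)^(−p); computing it is straightforward once the parameters are known. The parameter values and the event list below are illustrative assumptions:

```python
import math

# illustrative ETAS parameters (background rate, productivity, alpha, c, p, M0)
MU, K, ALPHA, C, P, M0 = 0.1, 0.05, 1.2, 0.01, 1.1, 3.0

def etas_intensity(t, events):
    """Conditional intensity lambda(t) given past (time, magnitude) events."""
    rate = MU
    for ti, mi in events:
        if ti < t:
            rate += K * math.exp(ALPHA * (mi - M0)) * (t - ti + C) ** (-P)
    return rate

# a main shock followed by two aftershocks: (time in days, magnitude)
events = [(0.0, 6.0), (0.5, 4.2), (1.0, 4.8)]
print(round(etas_intensity(2.0, events), 3))
```

Estimation, the hard part addressed by the paper, maximizes the point-process likelihood built from this intensity while explicitly modeling the early post-mainshock data deficiency.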
The Impact of Statistical Leakage Models on Design Yield Estimation
Directory of Open Access Journals (Sweden)
Rouwaida Kanj
2011-01-01
Full Text Available Device mismatch and process variation models play a key role in determining the functionality and yield of sub-100 nm designs. Average characteristics are often of interest, such as the average leakage current or the average read delay. However, detecting rare functional fails is critical for memory design, and designers often seek techniques that enable accurate modeling of such events. Extremely leaky devices can inflict functionality fails. The plurality of leaky devices on a bitline increases the dimensionality of the yield estimation problem. Simplified models are possible by adopting approximations to the underlying sum of lognormals. The implications of such approximations on tail probabilities may in turn bias the yield estimate. We review different closed-form approximations and compare against the CDF matching method, which is shown to be the most effective method for accurate statistical leakage modeling.
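One classic closed-form approximation of the kind reviewed above is Fenton-Wilkinson moment matching: the sum of independent lognormals is replaced by a single lognormal whose mean and variance match those of the sum. A minimal sketch with illustrative device parameters (the CDF matching method favored in the article is not reproduced here):

```python
import math

def fenton_wilkinson(mus, sigmas):
    """Approximate the sum of independent lognormals LN(mu_i, sigma_i^2)
    by a single lognormal LN(mu, sigma^2) whose mean and variance match
    the mean and variance of the sum (moment matching)."""
    means = [math.exp(m + s * s / 2.0) for m, s in zip(mus, sigmas)]
    varis = [(math.exp(s * s) - 1.0) * math.exp(2.0 * m + s * s)
             for m, s in zip(mus, sigmas)]
    mean_sum = sum(means)   # means add
    var_sum = sum(varis)    # independence: variances add
    sigma2 = math.log(1.0 + var_sum / mean_sum ** 2)
    return math.log(mean_sum) - sigma2 / 2.0, math.sqrt(sigma2)

# Illustrative log-scale leakage parameters for three devices on a bitline.
mu, sigma = fenton_wilkinson([0.0, 0.2, -0.1], [0.3, 0.4, 0.35])
```

Matching the first two moments is accurate near the body of the distribution but, as the article notes, can bias the tail probabilities that drive rare-fail yield estimates.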
AMEM-ADL Polymer Migration Estimation Model User's Guide
The user's guide of the Arthur D. Little Polymer Migration Estimation Model (AMEM) provides the information on how the model estimates the fraction of a chemical additive that diffuses through polymeric matrices.
Eckhard, Timo; Valero, Eva M; Hernández-Andrés, Javier; Heikkinen, Ville
2014-03-01
In this work, we evaluate the conditionally positive definite logarithmic kernel in kernel-based estimation of reflectance spectra. Reflectance spectra are estimated from responses of a 12-channel multispectral imaging system. We demonstrate the performance of the logarithmic kernel in comparison with the linear and Gaussian kernels using simulated and measured camera responses for the Pantone and HKS color charts. In particular, we focus on estimation model evaluations in which the selection of model parameters is optimized using a cross-validation technique. In the experiments, it was found that the Gaussian and logarithmic kernels outperformed the linear kernel in almost all evaluation cases (training set size, response channel number) for both sets. Furthermore, the spectral and color estimation accuracies of the Gaussian and logarithmic kernels were found to be similar in several evaluation cases for real and simulated responses. However, the results suggest that for a relatively small training set size, the accuracy of the logarithmic kernel can be markedly lower when compared to the Gaussian kernel. Further, we found from our data that the parameter of the logarithmic kernel could be fixed, which simplified the use of this kernel when compared with the Gaussian kernel.
Genomic breeding value estimation using nonparametric additive regression models
Directory of Open Access Journals (Sweden)
Solberg Trygve
2009-01-01
Full Text Available Abstract Genomic selection refers to the use of genomewide dense markers for breeding value estimation and subsequently for selection. The main challenge of genomic breeding value estimation is the estimation of many effects from a limited number of observations. Bayesian methods have been proposed to successfully cope with these challenges. As an alternative class of models, non- and semiparametric models were recently introduced. The present study investigated the ability of nonparametric additive regression models to predict genomic breeding values. The genotypes were modelled for each marker or pair of flanking markers (i.e. the predictors) separately. The nonparametric functions for the predictors were estimated simultaneously using additive model theory, applying a binomial kernel. The optimal degree of smoothing was determined by bootstrapping. A mutation-drift-balance simulation was carried out. The breeding values of the last generation (genotyped) were predicted using data from the preceding generation (genotyped and phenotyped). The results show moderate to high accuracies of the predicted breeding values. A predictor-specific degree of smoothing increased the accuracy.
Liu, Jingwei; Liu, Yi; Xu, Meizhi
2015-01-01
A parameter estimation method for the Jelinski-Moranda (JM) model based on weighted nonlinear least squares (WNLS) is proposed. Formulae for computing the WNLS parameter estimates (WNLSE) are derived, and the empirical weight function and the heteroscedasticity problem are discussed. The effects of optimization parameter estimation selection based on the maximum likelihood estimation (MLE) method, the least squares estimation (LSE) method and the weighted nonlinear least squares estimation (WNLSE) method are al...
Weibull Parameters Estimation Based on Physics of Failure Model
DEFF Research Database (Denmark)
Kostandyan, Erik; Sørensen, John Dalsgaard
2012-01-01
Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... distribution. Methods from structural reliability analysis are used to model the uncertainties and to assess the reliability for fatigue failure. Maximum Likelihood and Least Square estimation techniques are used to estimate fatigue life distribution parameters....
Institute of Scientific and Technical Information of China (English)
Yee LEUNG; WU Kefa; DONG Tianxin
2001-01-01
In this paper, a multivariate linear functional relationship model, where the covariance matrix of the observational errors is not restricted, is considered. The parameter estimation of this model is discussed. The estimators are shown to be strongly consistent under some mild conditions on the incidental parameters.
Bayesian model comparison and model averaging for small-area estimation
Aitkin, Murray; Liu, Charles C.; Chadwick, Tom
2009-01-01
This paper considers small-area estimation with lung cancer mortality data, and discusses the choice of upper-level model for the variation over areas. Inference about the random effects for the areas may depend strongly on the choice of this model, but this choice is not a straightforward matter. We give a general methodology for both evaluating the data evidence for different models and averaging over plausible models to give robust area effect distributions. We reanalyze the data of Tsutak...
Subdaily Earth Rotation Models Estimated From GPS and VLBI Data
Steigenberger, P.; Tesmer, V.; MacMillan, D.; Thaller, D.; Rothacher, M.; Fritsche, M.; Rülke, A.; Dietrich, R.
2007-12-01
Subdaily changes in Earth rotation at diurnal and semi-diurnal periods are mainly caused by ocean tides. Smaller effects are attributed to the interaction of the atmosphere with the solid Earth. As the tidal periods are well known, models for the ocean tidal contribution to high-frequency Earth rotation variations can be estimated from space-geodetic observations. The subdaily ERP model recommended by the latest IERS conventions was derived from an ocean tide model based on satellite altimetry. Another possibility is the determination of subdaily ERP models from GPS- and/or VLBI-derived Earth rotation parameter series with subdaily resolution. Homogeneously reprocessed long time series of subdaily ERPs computed by GFZ/TU Dresden (12 years of GPS data), DGFI and GSFC (both with 24 years of VLBI data) provide the basis for the estimation of single-technique and combined subdaily ERP models. The impact of different processing options (e.g., weighting) and different temporal resolutions (1 hour vs. 2 hours) will be evaluated by comparisons of the different models amongst each other and with the IERS model. The analysis of the GPS and VLBI residual signals after subtracting the estimated ocean tidal contribution may help to answer the question whether the remaining signals are technique-specific artifacts and systematic errors or true geophysical signals detected by both techniques.
Modeling Uncertainty when Estimating IT Projects Costs
Winter, Michel; Mirbel, Isabelle; Crescenzo, Pierre
2014-01-01
In the current economic context, optimizing projects' cost is an obligation for a company to remain competitive in its market. Introducing statistical uncertainty in cost estimation is a good way to tackle the risk of going too far while minimizing the project budget: it allows the company to determine the best possible trade-off between estimated cost and acceptable risk. In this paper, we present new statistical estimators derived from the way IT companies estimate the projects' costs. In t...
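The paper's own estimators are not reproduced here, but the general idea of attaching statistical uncertainty to a project cost estimate, so that a trade-off between estimated cost and acceptable risk can be read off as percentiles, can be sketched with a standard Monte Carlo three-point model. All task figures below are illustrative:

```python
import random

def simulate_project_cost(tasks, n=20000, seed=42):
    """Monte Carlo total-cost distribution built from per-task
    (optimistic, most likely, pessimistic) triangular estimates."""
    rng = random.Random(seed)
    totals = sorted(
        # random.triangular takes (low, high, mode)
        sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(n)
    )
    def pct(p):
        return totals[int(p / 100.0 * (n - 1))]
    return {"p50": pct(50), "p80": pct(80), "p95": pct(95)}

# Illustrative task list: (optimistic, most likely, pessimistic) person-days.
tasks = [(5, 8, 15), (10, 12, 20), (3, 4, 9)]
costs = simulate_project_cost(tasks)
```

A budget set at the 80th percentile, for instance, makes the accepted overrun risk explicit instead of implicit.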
Benefit Estimation Model for Tourist Spaceflights
Goehlich, Robert A.
2003-01-01
It is believed that the only potential means for significant reduction of the recurrent launch cost, which results in a stimulation of human space colonization, is to make the launcher reusable, to increase its reliability, and to make it suitable for new markets such as mass space tourism. But such space projects, which have long-range aspects, are very difficult to finance: even politicians would like to see a reasonable benefit during their term in office, so that they can explain this investment to the taxpayer. This forces planners to use benefit models instead of intuitive judgement to convince sceptical decision-makers to support new investments in space. Benefit models provide insights into complex relationships and force a better definition of goals. A new approach is introduced in the paper that allows one to estimate the benefits to be expected from a new space venture. The main objective of why humans should explore space is determined in this study to be to ``improve the quality of life''. This main objective is broken down into sub-objectives, which can be analysed with respect to different interest groups. Such interest groups are the operator of a space transportation system, the passenger, and the government. For example, the operator is strongly interested in profit, while the passenger is mainly interested in amusement, and the government is primarily interested in self-esteem and prestige. This leads to different individual satisfaction levels, which are usable for the optimisation process of reusable launch vehicles.
Sreelash, K.; Buis, Samuel; Sekhar, M.; Ruiz, Laurent; Kumar Tomer, Sat; Guérif, Martine
2017-03-01
Characterization of the soil water reservoir is critical for understanding the interactions between crops and their environment and the impacts of land use and environmental changes on the hydrology of agricultural catchments, especially in a tropical context. Recent studies have shown that inversion of crop models is a powerful tool for retrieving information on root zone properties. The increasing availability of remotely sensed soil and vegetation observations makes the approach well suited for large scale applications. The potential of this methodology has, however, never been properly evaluated on extensive experimental datasets, and previous studies suggested that the quality of estimation of soil hydraulic properties may vary depending on agro-environmental situations. The objective of this study was to evaluate this approach on an extensive field experiment. The dataset covered four crops (sunflower, sorghum, turmeric, maize) grown on different soils over several years in South India. The components of AWC (available water capacity), namely soil water content at field capacity and wilting point, and soil depth of two-layered soils were estimated by inversion of the crop model STICS with the GLUE (generalized likelihood uncertainty estimation) approach using observations of surface soil moisture (SSM; typically from 0 to 10 cm deep) and leaf area index (LAI), which are attainable from radar remote sensing in tropical regions with frequent cloudy conditions. The results showed that the quality of parameter estimation largely depends on the hydric regime and its interaction with crop type. A mean relative absolute error of 5% for field capacity of the surface layer, 10% for field capacity of the root zone, 15% for wilting point of the surface layer and root zone, and 20% for soil depth can be obtained in favorable conditions. A few observations of SSM (during wet and dry soil moisture periods) and LAI (within water stress periods) were sufficient to significantly improve the estimation of AWC
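The GLUE procedure used above can be sketched on a toy model: sample parameter sets from uniform priors, score each against observations with an informal likelihood, keep a "behavioural" subset and form likelihood-weighted estimates. The one-parameter model below is an illustrative stand-in for the STICS crop model, not the authors' setup:

```python
import math
import random

def glue(model, observed, priors, n=5000, keep=0.1, seed=1):
    """Minimal GLUE sketch: sample parameter sets from uniform priors,
    score each with a Nash-Sutcliffe-style likelihood, keep the best
    fraction as 'behavioural' and return likelihood-weighted estimates."""
    rng = random.Random(seed)
    obs_mean = sum(observed) / len(observed)
    denom = sum((y - obs_mean) ** 2 for y in observed)
    scored = []
    for _ in range(n):
        theta = [rng.uniform(lo, hi) for lo, hi in priors]
        sim = model(theta)
        nse = 1.0 - sum((s - y) ** 2 for s, y in zip(sim, observed)) / denom
        scored.append((nse, theta))
    scored.sort(key=lambda st: st[0], reverse=True)
    behavioural = scored[: int(keep * n)]
    total = sum(max(l, 0.0) for l, _ in behavioural) or 1.0
    return [sum(max(l, 0.0) * th[i] for l, th in behavioural) / total
            for i in range(len(priors))]

# Toy stand-in for a crop model: soil water content approaches the
# available water capacity awc over time; true awc = 0.3.
times = [0.5 * k for k in range(1, 9)]
obs = [0.3 * (1.0 - math.exp(-t)) for t in times]
est = glue(lambda th: [th[0] * (1.0 - math.exp(-t)) for t in times],
           obs, priors=[(0.1, 0.6)])
```

In real applications the behavioural set also yields uncertainty bounds, not just a point estimate, which is the main attraction of GLUE.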
Chatzinikos, Miltiadis; Dermanis, Athanasios
2016-11-01
By considering a deformable geodetic network, deforming in a linear-in-time mode, according to a coordinate-invariant model, it becomes possible to get an insight into the rank deficiency of the stacking procedure, which is the standard method for estimating initial station coordinates and constant velocities, from coordinate time series. Comparing any two out of the infinitely many least squares estimates of stacking unknowns (initial station coordinates, velocity components and transformation parameters for the reference system in each data epoch), it is proven that the two solutions differ only by a linear-in-time trend in the transformation parameters. These pass over to the initial coordinates (the constant term) and to the velocity estimates (the time coefficient part). While the difference in initial coordinates is equivalent to a change of the reference system at the initial epoch, the differences in velocity components do not comply with those predicted by the same change of reference system for all epochs. Consequently, the different velocity component estimates, obtained by introducing different sets of minimal constraints, correspond to physically different station velocities, which are therefore non-estimable quantities. The theoretical findings are numerically verified for a global, a regional and a local network, by obtaining solutions based on four different types of minimal constraints, three usual algebraic ones (inner or partial inner) and the lately introduced kinematic constraints. Finally, by resorting to the basic ideas of Felix Tisserand, it is explained why the station velocities are non-estimable quantities in a very natural way. The problem of the optimal choice of minimal constraints and, hence, of the corresponding spatio-temporal reference system is shortly discussed.
Phase noise effects on turbulent weather radar spectrum parameter estimation
Lee, Jonggil; Baxa, Ernest G., Jr.
1990-01-01
Accurate weather spectrum moment estimation is important in the use of weather radar for hazardous windshear detection. The effect of stable local oscillator (STALO) instability (jitter) on the spectrum moment estimation algorithm is investigated. Uncertainty in the stable local oscillator will affect both the transmitted signal and the received signal, since the STALO provides the transmitted and reference carriers. The proposed approach models STALO phase jitter as it affects the complex autocorrelation of the radar return. The results can therefore be interpreted in terms of any source of system phase jitter for which the model is appropriate and, in particular, may be considered as a cumulative effect of all radar system sources.
DEFF Research Database (Denmark)
Siersma, V; Als-Nielsen, B; Chen, Weikeng;
2007-01-01
...by 'meta-epidemiological' re-analysis of data collected as part of systematic reviews. As inadequate quality components often co-occur, we maintain that the suspected biases must be evaluated simultaneously. Furthermore, the biases cannot safely be assumed to be homogeneous across systematic reviews. Therefore, a stable multivariable method that allows for heterogeneity is needed for assessing the 'bias coefficients'. We present two general statistical models for analysis of a study of 523 randomized trials from 48 meta-analyses in a random sample of Cochrane reviews: a logistic regression model uses the design of the trials as such to give estimates; a weighted regression model incorporates between-trial variation and thus gives wider confidence intervals, but is computationally lighter and can be used with trials of more general design. In both models, heterogeneity in the bias coefficients can...
Ridi Ferdiana; Paulus Insap Santoso; Lukito Edi Nugroho; Ahmad Ashari
2011-01-01
Software estimation is an area of software engineering concerned with the identification, classification and measurement of features of software that affect the cost of developing and sustaining computer programs [19]. Measuring the software through software estimation has purpose to know the complexity of the software, estimate the human resources, and get better visibility of execution and process model. There is a lot of software estimation that work sufficiently in certain conditions or s...
Directory of Open Access Journals (Sweden)
Ridi Ferdiana
2011-01-01
Full Text Available Software estimation is an area of software engineering concerned with the identification, classification and measurement of features of software that affect the cost of developing and sustaining computer programs [19]. Measuring the software through software estimation has the purpose of knowing the complexity of the software, estimating the human resources, and getting better visibility of the execution and process model. There are many software estimation techniques that work well in certain conditions or steps of software engineering, for example measuring lines of code, function points, COCOMO, or use case points. This paper proposes another estimation technique called Distributed eXtreme Programming Estimation (DXP Estimation). DXP estimation provides a basic technique for a team that uses the eXtreme Programming method in onsite or distributed development. To the authors' knowledge, this is the first estimation technique applied to an agile method, eXtreme Programming.
Learning curve estimation in medical devices and procedures: hierarchical modeling.
Govindarajulu, Usha S; Stillo, Marco; Goldfarb, David; Matheny, Michael E; Resnic, Frederic S
2017-07-30
In the use of medical device procedures, learning effects have been shown to be a critical component of medical device safety surveillance. To support their estimation of these effects, we evaluated multiple methods for modeling these rates within a complex simulated dataset representing patients treated by physicians clustered within institutions. We employed unique modeling for the learning curves to incorporate the learning hierarchy between institution and physicians and then modeled them within established methods that work with hierarchical data such as generalized estimating equations (GEE) and generalized linear mixed effect models. We found that both methods performed well, but that the GEE may have some advantages over the generalized linear mixed effect models for ease of modeling and a substantially lower rate of model convergence failures. We then focused more on using GEE and performed a separate simulation to vary the shape of the learning curve as well as employed various smoothing methods to the plots. We concluded that while both hierarchical methods can be used with our mathematical modeling of the learning curve, the GEE tended to perform better across multiple simulated scenarios in order to accurately model the learning effect as a function of physician and hospital hierarchical data in the use of a novel medical device. We found that the choice of shape used to produce the 'learning-free' dataset would be dataset specific, while the choices of smoothing method were negligibly different from one another. This was an important application to understand how best to fit this unique learning curve function for hierarchical physician and hospital data. Copyright © 2017 John Wiley & Sons, Ltd.
2012-01-01
Background: We examine the effect of heat waves on mortality, over and above what would be predicted on the basis of temperature alone. Methods: Present modeling approaches may not fully capture extra effects relating to heat wave duration, possibly because the mechanisms of action and the population at risk are different under more extreme conditions. Modeling such extra effects can be achieved using the commonly left-out effect-modification between the lags of temperature in distributed lag models. Results: Using data from Stockholm, Sweden, and a variety of modeling approaches, we found that heat wave effects amount to a stable and statistically significant 8.1-11.6% increase in excess deaths per heat wave day. The effects explicitly relating to heat wave duration (2.0-3.9% excess deaths per day) were more sensitive to the degrees of freedom allowed for in the overall temperature-mortality relationship. However, allowing for a very large number of degrees of freedom indicated over-fitting of the overall temperature-mortality relationship. Conclusions: Modeling additional heat wave effects, e.g. between-lag effect-modification, can give a better description of the effects of extreme temperatures, particularly in the non-elderly population. We speculate that it is biologically plausible to differentiate effects of heat and of heat wave duration. PMID:22490779
Hybrid Simulation Modeling to Estimate U.S. Energy Elasticities
Baylin-Stern, Adam C.
This paper demonstrates how a U.S. application of CIMS, a technologically explicit and behaviourally realistic energy-economy simulation model which includes macro-economic feedbacks, can be used to derive estimates of elasticity of substitution (ESUB) and autonomous energy efficiency index (AEEI) parameters. The ability of economies to reduce greenhouse gas emissions depends on the potential for households and industry to decrease overall energy usage, and move from higher to lower emissions fuels. Energy economists commonly refer to ESUB estimates to understand the degree of responsiveness of various sectors of an economy, and use estimates to inform computable general equilibrium models used to study climate policies. Using CIMS, I have generated a set of future, 'pseudo-data' based on a series of simulations in which I vary energy and capital input prices over a wide range. I then used this data set to estimate the parameters for transcendental logarithmic production functions using regression techniques. From the production function parameter estimates, I calculated an array of elasticity of substitution values between input pairs. Additionally, this paper demonstrates how CIMS can be used to calculate price-independent changes in energy-efficiency in the form of the AEEI, by comparing energy consumption between technologically frozen and 'business as usual' simulations. The paper concludes with some ideas for model and methodological improvement, and how these might figure into future work in the estimation of ESUBs from CIMS. Keywords: Elasticity of substitution; hybrid energy-economy model; translog; autonomous energy efficiency index; rebound effect; fuel switching.
Institute of Scientific and Technical Information of China (English)
Gao Chunwen; Xu Jingzhen; Richard Sinding-Larsen
2005-01-01
A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems involved in the maximum likelihood method by effectively making use of the information from the prior distribution and that from the discovery sequence according to posterior probabilities. All statistical inferences about the parameters of the model and total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, statistical errors of the samples can be easily assessed and the convergence properties can be monitored during the sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify his subjective estimates of the required parameters and his degree of uncertainty about the estimates in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same data of the North Sea on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach has really improved the overly pessimistic results and downward bias of the maximum likelihood procedure.
Estimation of the parameters of ETAS models by Simulated Annealing
Lombardi, Anna Maria
2015-01-01
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is...
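A generic Simulated Annealing minimizer of the kind used for such likelihood optimization can be sketched as follows. A toy convex objective stands in for the ETAS negative log-likelihood; the step size and geometric cooling schedule are illustrative choices, not the paper's settings:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=4000, seed=0):
    """Generic Simulated Annealing minimizer: propose a random move,
    always accept improvements, and accept worse moves with
    Boltzmann probability exp(-delta / temperature)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    temp = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        # improvements always accepted; worse moves with probability exp(-delta/T)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling  # geometric cooling schedule
    return best, fbest

# Toy objective with a single optimum at x = 2, standing in for a
# negative log-likelihood surface.
best, fbest = simulated_annealing(lambda x: (x - 2.0) ** 2, x0=10.0)
```

The occasional acceptance of uphill moves is what lets the method escape local optima of a multimodal likelihood, at the price of tuning the cooling schedule.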
Estimation of Effective Connectivity via Data-Driven Neural Modeling
Directory of Open Access Journals (Sweden)
Dean Robert Freestone
2014-11-01
Full Text Available This research introduces a new method for functional brain imaging via a process of model inversion. By estimating parameters of a computational model, we are able to track effective connectivity and mean membrane potential dynamics that cannot be directly measured using electrophysiological measurements alone. The ability to track the hidden aspects of neurophysiology will have a profound impact on the way we understand and treat epilepsy. For example, under the assumption that the model captures the key features of the cortical circuits of interest, the framework will provide insights into seizure initiation and termination on a patient-specific basis. It will enable investigation into the effect a particular drug has on specific neural populations and connectivity structures using minimally invasive measurements. The method is based on approximating brain networks using an interconnected neural population model. The neural population model is based on a neural mass model that describes the functional activity of the brain, capturing the mesoscopic biophysics and anatomical structure. The model is made subject-specific by estimating the strength of intra-cortical connections within a region and inter-cortical connections between regions using a novel Kalman filtering method. We demonstrate through simulation how the framework can be used to track the mechanisms involved in seizure initiation and termination.
K factor estimation in distribution transformers using linear regression models
Directory of Open Access Journals (Sweden)
Juan Miguel Astorga Gómez
2016-06-01
Full Text Available Background: Due to the massive incorporation of electronic equipment into distribution systems, distribution transformers are subject to operating conditions other than the design ones, because of the circulation of harmonic currents. It is necessary to quantify the effect produced by these harmonic currents to determine the capacity of the transformer to withstand these new operating conditions. The K-factor is an indicator that estimates the ability of a transformer to withstand the thermal effects caused by harmonic currents. This article presents a linear regression model to estimate the value of the K-factor from the total current harmonic content obtained with low-cost equipment. Method: Two distribution transformers that feed different loads are studied; the variables current total harmonic distortion (THDi) and K-factor are recorded, and the regression model that best fits the field data is determined. To select the regression model, the coefficient of determination R2 and the Akaike Information Criterion (AIC) are used. With the selected model, the K-factor is estimated for actual operating conditions. Results: Once the model was determined, it was found that for both the agricultural and the industrial mining load, the present harmonic content (THDi) exceeds the values that these transformers can handle (average of 12.54% and minimum of 8.90% in the agricultural case, and average of 18.53% and minimum of 6.80% in the industrial mining case). Conclusions: When estimating the K-factor using polynomial models, it was determined that the studied transformers cannot withstand the current total harmonic distortion of their loads. The appropriate K-factor for the studied transformers should be 4; this would allow the transformers to support the current total harmonic distortion of their respective loads.
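Model selection by AIC among candidate polynomial regressions, as described above, can be sketched as follows. The data below are synthetic and illustrative, not the transformer measurements from the study:

```python
import math
import numpy as np

def aic_for_fit(xs, ys, degree):
    """Fit a polynomial of the given degree by least squares and return
    its AIC = n*ln(RSS/n) + 2k, with k = degree + 1 parameters."""
    coeffs = np.polyfit(xs, ys, degree)
    resid = ys - np.polyval(coeffs, xs)
    rss = float(resid @ resid)
    n, k = len(xs), degree + 1
    return n * math.log(rss / n) + 2 * k

# Synthetic THDi -> K-factor data following a quadratic trend plus noise.
rng = np.random.default_rng(0)
thdi = np.linspace(5.0, 20.0, 40)
k_factor = 1.0 + 0.02 * thdi + 0.01 * thdi**2 + rng.normal(0.0, 0.05, thdi.size)

# Lower AIC is better: the fit-quality term rewards small residuals,
# the 2k term penalizes extra coefficients.
candidates = {d: aic_for_fit(thdi, k_factor, d) for d in (1, 2, 3)}
best_degree = min(candidates, key=candidates.get)
```

Unlike R2, which never decreases when a term is added, AIC trades goodness of fit against model complexity, which is why the study pairs the two criteria.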
[Effect of speech estimation on social anxiety].
Shirotsuki, Kentaro; Sasagawa, Satoko; Nomura, Shinobu
2009-02-01
This study investigates the effect of speech estimation on social anxiety to further understanding of this characteristic of Social Anxiety Disorder (SAD). In the first study, we developed the Speech Estimation Scale (SES) to assess negative estimation before giving a speech which has been reported to be the most fearful social situation in SAD. Undergraduate students (n = 306) completed a set of questionnaires, which consisted of the Short Fear of Negative Evaluation Scale (SFNE), the Social Interaction Anxiety Scale (SIAS), the Social Phobia Scale (SPS), and the SES. Exploratory factor analysis showed an adequate one-factor structure with eight items. Further analysis indicated that the SES had good reliability and validity. In the second study, undergraduate students (n = 315) completed the SFNE, SIAS, SPS, SES, and the Self-reported Depression Scale (SDS). The results of path analysis showed that fear of negative evaluation from others (FNE) predicted social anxiety, and speech estimation mediated the relationship between FNE and social anxiety. These results suggest that speech estimation might maintain SAD symptoms, and could be used as a specific target for cognitive intervention in SAD.
Dvornikov, Anton; Sein, Dmitry; Ryabchenko, Vladimir; Gorchakov, Victor; Martjyanov, Stanislav
2016-04-01
This study aims to assess the impact of sea ice on the primary production of phytoplankton (PPP) and the air-sea CO2 flux in the Barents Sea. To obtain these estimates, we apply a three-dimensional eco-hydrodynamic model based on the Princeton Ocean Model which includes: 1) a sea ice module with 7 categories, and 2) an 11-component module of the marine pelagic ecosystem developed in the St. Petersburg Branch, Institute of Oceanology. The model is driven by atmospheric forcing prescribed from the NCEP/NCAR reanalysis, and conditions on the open sea boundary prescribed from the regional model of the atmosphere-ocean-sea ice-ocean biogeochemistry developed at the Max Planck Institute for Meteorology, Hamburg. Comparison of the model results for the period 1998-2007 with satellite data showed that the model reproduces the main features of the evolution of the sea surface temperature, seasonal changes in the ice extent, surface chlorophyll "a" concentration and PPP in the Barents Sea. Model estimates of the annual PPP for the whole sea, APPmod, were 1.5-2.3 times larger than similar estimates, APPdata, from satellite data. The main reasons for this discrepancy are: 1) APPdata refers to the open water, while APPmod refers to the whole sea area (16-38% of PPP was produced under the pack ice and in the marginal ice zone (MIZ)); and 2) values of APPdata are underestimated because of the subsurface chlorophyll maximum. During the period 1998-2007, the modelled maximal (in the seasonal cycle) sea ice area decreased by 15%. This reduction was accompanied by an increase in the annual PPP of the sea of 54 and 63%, based, respectively, on satellite data and the model for the open water. According to model calculations for the whole sea area, the increase is only 19%. Using a simple 7-component model of the oceanic carbon cycle incorporated into the above hydrodynamic model, the CO2 exchange between the atmosphere and the sea has been estimated in different conditions. In the absence of biological
Estimating the Multilevel Rasch Model: With the lme4 Package
Directory of Open Access Journals (Sweden)
Harold Doran
2007-02-01
Full Text Available Traditional Rasch estimation of the item and student parameters via marginal maximum likelihood, joint maximum likelihood or conditional maximum likelihood assumes that individuals in clustered settings are uncorrelated and that items within a test that share a grouping structure are also uncorrelated. These assumptions are often violated, particularly in educational testing situations in which students are grouped into classrooms and many test items share a common grouping structure, such as a content strand or a reading passage. Consequently, one possible approach is to explicitly recognize the clustered nature of the data and directly incorporate random effects to account for the various dependencies. This article demonstrates how the multilevel Rasch model can be estimated using the functions in R for mixed-effects models with crossed or partially crossed random effects. We demonstrate how to model the following hierarchical data structures: (a) individuals clustered in similar settings (e.g., classrooms, schools), (b) items nested within a particular group (such as a content strand or a reading passage), and (c) a teacher × content strand interaction.
On estimation of survival function under random censoring model
Institute of Scientific and Technical Information of China (English)
JIANG; Jiancheng(蒋建成); CHENG; Bo(程博); WU; Xizhi(吴喜之)
2002-01-01
We study an estimator of the survival function under the random censoring model. A Bahadur-type representation of the estimator is obtained and an asymptotic expression for its mean squared error is given, which leads to the consistency and asymptotic normality of the estimator. A data-driven local bandwidth selection rule for the estimator is proposed. It is worth noting that the estimator is consistent at left boundary points, which contrasts with the cases of density and hazard rate estimation. A Monte Carlo comparison of different estimators is made and it appears that the proposed data-driven estimators have certain advantages over the common Kaplan-Meier estimator.
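For reference, the Kaplan-Meier product-limit estimator that the paper benchmarks against can be sketched in a few lines (a minimal illustration; the paper's proposed smoothed, data-driven estimator is not reproduced here):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate of the survival function S(t).

    times:  observed times (event or censoring)
    events: 1 if the event was observed, 0 if right-censored
    Returns the distinct event times and S(t) just after each of them.
    """
    data = sorted(zip(times, events))
    at_risk, s = len(data), 1.0
    out_t, out_s = [], []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        ties = sum(1 for tt, _ in data if tt == t)
        if deaths:
            s *= 1.0 - deaths / at_risk   # product-limit update
            out_t.append(t)
            out_s.append(s)
        at_risk -= ties                   # censored and failed both leave the risk set
        i += ties
    return out_t, out_s

# toy data: 6 subjects, two of them censored (at t=2 and t=4)
t, s = kaplan_meier([1, 2, 2, 3, 4, 5], [1, 1, 0, 1, 0, 1])
```

At each distinct event time the survivor curve drops by the fraction of subjects still at risk who fail there; censored subjects simply leave the risk set without causing a drop.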
Sensorless position estimator applied to nonlinear IPMC model
Bernat, Jakub; Kolota, Jakub
2016-11-01
This paper addresses the issue of estimating position for an ionic polymer metal composite (IPMC), a type of electroactive polymer (EAP). The key step is the construction of a sensorless model that uses only current feedback. This work takes into account nonlinearities caused by electrochemical effects in the material. Using a recent observer design technique, the authors obtained both a Lyapunov-function-based estimation law and a sliding mode observer. To accomplish the observer design, the IPMC model was identified through a series of experiments. The research comprises time domain measurements. The identification process was completed by means of geometric scaling of three test samples. In the proposed design, the estimated position accurately tracks the polymer position, which is illustrated by the experiments.
Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.
Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in the estimated parameters. By accounting for irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
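The weighting idea at the heart of IUWLS can be illustrated with inverse-variance weighted least squares on synthetic data (a sketch of the generalized least-squares core only; the paper's iterative reweighting tied to pumping uncertainty is not reproduced, and all data below are hypothetical):

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Solve min_b sum_i w_i (y_i - x_i'b)^2, i.e. GLS with a diagonal
    weight matrix; observations driven by uncertain inputs get small w_i."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.uniform(0.0, 10.0, n)])
beta_true = np.array([2.0, 0.5])
# first half of the sample: accurate inputs; second half: very noisy inputs
sigma = np.where(np.arange(n) < n // 2, 0.1, 2.0)
y = X @ beta_true + rng.normal(0.0, sigma)

beta_hat = weighted_least_squares(X, y, 1.0 / sigma**2)  # inverse-variance weights
```

The inverse-variance weights let the accurate observations dominate the fit, which is the same mechanism IUWLS uses to keep uncertain pumping records from biasing the calibrated parameters.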
Consistent estimators in random censorship semiparametric models
Institute of Scientific and Technical Information of China (English)
王启华
1996-01-01
For the fixed design regression model, when the responses Y are randomly censored on the right, estimators of the unknown parameter and the regression function g from censored observations are defined for the two cases where the censoring distribution is known and unknown, respectively. Moreover, sufficient conditions are established under which these estimators are strongly consistent and pth (p>2) mean consistent.
Estimation of Wind Turbulence Using Spectral Models
DEFF Research Database (Denmark)
Soltani, Mohsen; Knudsen, Torben; Bak, Thomas
2011-01-01
The production and loading of wind farms are significantly influenced by the turbulence of the flowing wind field. Estimation of turbulence allows us to optimize the performance of the wind farm. Turbulence estimation is, however, highly challenging due to the chaotic behavior of the wind. In thi...
A Note on Structural Equation Modeling Estimates of Reliability
Yang, Yanyun; Green, Samuel B.
2010-01-01
Reliability can be estimated using structural equation modeling (SEM). Two potential problems with this approach are that estimates may be unstable with small sample sizes and biased with misspecified models. A Monte Carlo study was conducted to investigate the quality of SEM estimates of reliability by themselves and relative to coefficient…
Radiation risk estimation based on measurement error models
Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya
2017-01-01
This monograph discusses statistics and risk estimates applied to radiation damage in the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on the efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. The efficiency of the methods presented is verified using data from radio-epidemiological studies.
How to perform meaningful estimates of genetic effects.
Alvarez-Castro, José M; Le Rouzic, Arnaud; Carlborg, Orjan
2008-05-02
Although the genotype-phenotype map plays a central role both in Quantitative and Evolutionary Genetics, the formalization of a completely general and satisfactory model of genetic effects, particularly accounting for epistasis, remains a theoretical challenge. Here, we use a two-locus genetic system in simulated populations with epistasis to show the convenience of using a recently developed model, NOIA, to perform estimates of genetic effects and the decomposition of the genetic variance that are orthogonal even under deviations from the Hardy-Weinberg proportions. We develop the theory for how to use this model in interval mapping of quantitative trait loci using Haley-Knott regressions, and we analyze a real data set to illustrate the advantage of using this approach in practice. In this example, we show that departures from the Hardy-Weinberg proportions that are expected by sampling alone substantially alter the orthogonal estimates of genetic effects when other statistical models, like F2 or G2A, are used instead of NOIA. Finally, for the first time from real data, we provide estimates of functional genetic effects as sets of effects of natural allele substitutions in a particular genotype, which enriches the debate on the interpretation of genetic effects as implemented both in functional and in statistical models. We also discuss further implementations leading to a completely general genotype-phenotype map.
Parameter estimation of hidden periodic model in random fields
Institute of Scientific and Technical Information of China (English)
何书元
1999-01-01
The two-dimensional hidden periodic model is an important model in random fields, used in two-dimensional signal processing, prediction and spectral analysis. A method of estimating the parameters of the model is designed, and the strong consistency of the estimators is proved.
Voorhies, Coerte V.
1993-01-01
In the source-free mantle/frozen-flux core magnetic earth model, the non-linear inverse steady motional induction problem was solved using the method presented in Part 1B. How that method was applied to estimate steady, broad-scale fluid velocity fields near the top of Earth's core that induce the secular change indicated by the Definitive Geomagnetic Reference Field (DGRF) models from 1945 to 1980 is described. Special attention is given to the derivation of weight matrices for the DGRF models because the weights determine the apparent significance of the residual secular change. The derived weight matrices also enable estimation of the secular change signal-to-noise ratio characterizing the DGRF models. Two types of weights were derived in 1987-88: radial field weights for fitting the evolution of the broad-scale portion of the radial geomagnetic field component at Earth's surface implied by the DGRF's, and general weights for fitting the evolution of the broad-scale portion of the scalar potential specified by these models. The difference is non-trivial because not all the geomagnetic data represented by the DGRF's constrain the radial field component. For radial field weights (or general weights), a quantitatively acceptable explication of broad-scale secular change relative to the 1980 Magsat epoch must account for 99.94271 percent (or 99.98784 percent) of the total weighted variance accumulated therein. Tolerable normalized root-mean-square weighted residuals of 2.394 percent (or 1.103 percent) are less than the 7 percent errors expected in the source-free mantle/frozen-flux core approximation.
Efficient estimation of semiparametric copula models for bivariate survival data
Cheng, Guang
2014-01-01
A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.
Kiviet, J.F.; Phillips, G.D.A.
2014-01-01
In dynamic regression models, conditional maximum likelihood (least-squares) coefficient and variance estimators are biased. Using expansion techniques, an approximation is obtained to the bias in variance estimation, yielding a bias-corrected variance estimator. This is achieved for both the standard
Estimation of Stochastic Volatility Models by Nonparametric Filtering
DEFF Research Database (Denmark)
Kanaya, Shin; Kristensen, Dennis
2016-01-01
A two-step estimation method of stochastic volatility models is proposed: In the first step, we nonparametrically estimate the (unobserved) instantaneous volatility process. In the second step, standard estimation methods for fully observed diffusion processes are employed, but with the filtered/estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases and variances due to the first-step estimation, but under regularity conditions we show that these vanish asymptotically and our estimators inherit the asymptotic properties of the infeasible estimators based on observations of the volatility process. A simulation study examines the finite-sample properties
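A stylized discrete-time version of the two-step strategy can be sketched with daily realized variance as the nonparametric first-step filter and OLS on the filtered series as the second step (a toy sketch under simplifying assumptions, not the authors' estimator, which also handles jumps and microstructure noise):

```python
import numpy as np

rng = np.random.default_rng(3)

# --- simulate a stylized stochastic volatility model ------------------
# log-variance follows an AR(1); each "day" yields m intraday returns
n_days, m = 2000, 300
a_true, sd_eta = 0.9, 0.3               # persistence and vol-of-vol
x = np.zeros(n_days)                    # x_t = log sigma_t^2
for t in range(1, n_days):
    x[t] = a_true * x[t - 1] + sd_eta * rng.normal()
sigma2 = np.exp(x)
returns = rng.normal(size=(n_days, m)) * np.sqrt(sigma2[:, None] / m)

# --- step 1: nonparametric filter of the latent variance --------------
rv = (returns ** 2).sum(axis=1)         # daily realized variance ~= sigma_t^2

# --- step 2: treat the filtered series as observed; fit AR(1) by OLS --
z = np.log(rv)
design = np.column_stack([np.ones(n_days - 1), z[:-1]])
a_hat = np.linalg.lstsq(design, z[1:], rcond=None)[0][1]
```

The filtering error in realized variance attenuates the persistence estimate slightly, which is exactly the kind of first-step bias the abstract says vanishes asymptotically under regularity conditions.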
Mathematical model of transmission network static state estimation
Directory of Open Access Journals (Sweden)
Ivanov Aleksandar
2012-01-01
Full Text Available In this paper the characteristics and capabilities of the power transmission network static state estimator are presented. The solution process for the mathematical model, which accounts for the measurement errors and their processing, is developed. To evaluate the difference between the general state estimation model and the fast decoupled state estimation model, both models are applied to an example and the derived results are compared.
Estimation of the parameters of ETAS models by Simulated Annealing
Lombardi, Anna Maria
2015-02-01
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.
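The generic Simulated Annealing loop underlying such an algorithm can be sketched as follows, with a toy quadratic standing in for the (expensive) negative ETAS log-likelihood; the proposal scale, cooling schedule and iteration count are illustrative choices, not the paper's settings:

```python
import math
import random

def simulated_annealing(negloglik, x0, step=0.2, t0=1.0, cooling=0.999,
                        n_iter=20000, seed=42):
    """Minimize a negative log-likelihood by Simulated Annealing with
    Gaussian proposals and a geometric cooling schedule."""
    rng = random.Random(seed)
    x, fx = list(x0), negloglik(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(n_iter):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = negloglik(cand)
        # always accept improvements; accept uphill moves with
        # probability exp(-(fc - fx) / t), which shrinks as t cools
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t = max(t * cooling, 1e-12)     # floor avoids division by zero
    return best, fbest

# toy objective standing in for the negative ETAS log-likelihood,
# with its minimum at (1, -2)
f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
xhat, fmin = simulated_annealing(f, [5.0, 5.0])
```

Early on the high temperature lets the chain escape local basins; as the temperature cools the accept rule becomes greedy, which is what gives the method its global-optimization character.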
Estimation in the polynomial errors-in-variables model
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
Estimators are presented for the coefficients of the polynomial errors-in-variables (EV) model when replicated observations are taken at some experimental points. These estimators are shown to be strongly consistent under mild conditions.
Evaluation of black carbon estimations in global aerosol models
Directory of Open Access Journals (Sweden)
Y. Zhao
2009-11-01
Full Text Available We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentrations, aerosol absorption optical depth (AAOD) retrievals from AERONET and Ozone Monitoring Instrument (OMI) and BC column estimations based on AERONET. In regions other than Asia, most models are biased high compared to surface concentration measurements. However compared with (column) AAOD or BC burden retrievals, the models are generally biased low. The average ratio of model to retrieved AAOD is less than 0.7 in South American and 0.6 in African biomass burning regions; both of these regions lack surface concentration measurements. In Asia the average model to observed ratio is 0.7 for AAOD and 0.5 for BC surface concentrations. Compared with aircraft measurements over the Americas at latitudes between 0 and 50N, the average model is a factor of 8 larger than observed, and most models exceed the measured BC standard deviation in the mid to upper troposphere. At higher latitudes the average model to aircraft BC ratio is 0.4 and models underestimate the observed BC loading in the lower and middle troposphere associated with springtime Arctic haze. Low model bias for AAOD but overestimation of surface and upper atmospheric BC concentrations at lower latitudes suggests that most models are underestimating BC absorption and should improve estimates for refractive index, particle size, and optical effects of BC coating. Retrieval uncertainties and/or differences with model diagnostic treatment may also contribute to the model-measurement disparity. Largest AeroCom model diversity occurred in northern Eurasia and the remote Arctic, regions influenced by anthropogenic sources. Changing emissions, aging, removal, or optical properties within a single model
Forward models and state estimation in compensatory eye movements
Directory of Open Access Journals (Sweden)
Maarten A Frens
2009-11-01
Full Text Available The compensatory eye movement system maintains a stable retinal image, integrating information from different sensory modalities to compensate for head movements. Inspired by recent models of physiology of limb movements, we suggest that compensatory eye movements (CEM can be modeled as a control system with three essential building blocks: a forward model that predicts the effects of motor commands; a state estimator that integrates sensory feedback into this prediction; and, a feedback controller that translates a state estimate into motor commands. We propose a specific mapping of nuclei within the CEM system onto these control functions. Specifically, we suggest that the Flocculus is responsible for generating the forward model prediction and that the Vestibular Nuclei integrate sensory feedback to generate an estimate of current state. Finally, the brainstem motor nuclei – in the case of horizontal compensation this means the Abducens Nucleus and the Nucleus Prepositus Hypoglossi – implement a feedback controller, translating state into motor commands. While these efforts to understand the physiological control system as a feedback control system are in their infancy, there is the intriguing possibility that compensatory eye movements and targeted voluntary movements use the same cerebellar circuitry in fundamentally different ways.
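The three blocks can be made concrete in a minimal linear simulation: a forward model predicts the next state from the motor command, a state estimator corrects that prediction with noisy sensory feedback, and a feedback controller turns the estimate into the next command (all dynamics and gains below are hypothetical stand-ins, not a physiological model of the floccular circuit):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.9, 1.0          # hypothetical linear plant: x' = a*x + b*u
k_gain, c_fb = 0.3, 0.5  # estimator and controller gains (illustrative)

x, x_hat, target = 0.0, 0.0, 1.0
for _ in range(100):
    u = c_fb * (target - x_hat)             # feedback controller
    x = a * x + b * u                       # true (hidden) plant state
    y = x + 0.1 * rng.normal()              # noisy sensory feedback
    x_pred = a * x_hat + b * u              # forward model: predicted effect of u
    x_hat = x_pred + k_gain * (y - x_pred)  # state estimator: correction
```

With a purely proportional controller the loop settles near (not exactly at) the target, while the estimate x_hat tracks the hidden state closely because the forward-model prediction absorbs most of the sensory noise.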
Parametrically guided estimation in nonparametric varying coefficient models with quasi-likelihood.
Davenport, Clemontina A; Maity, Arnab; Wu, Yichao
2015-04-01
Varying coefficient models allow us to generalize standard linear regression models to incorporate complex covariate effects by modeling the regression coefficients as functions of another covariate. For nonparametric varying coefficients, we can borrow the idea of parametrically guided estimation to improve asymptotic bias. In this paper, we develop a guided estimation procedure for the nonparametric varying coefficient models. Asymptotic properties are established for the guided estimators and a method of bandwidth selection via bias-variance tradeoff is proposed. We compare the performance of the guided estimator with that of the unguided estimator via both simulation and real data examples.
Evaluation of Black Carbon Estimations in Global Aerosol Models
Energy Technology Data Exchange (ETDEWEB)
Koch, D.; Schulz, M.; Kinne, Stefan; McNaughton, C. S.; Spackman, J. R.; Balkanski, Y.; Bauer, S.; Berntsen, T.; Bond, Tami C.; Boucher, Olivier; Chin, M.; Clarke, A. D.; De Luca, N.; Dentener, F.; Diehl, T.; Dubovik, O.; Easter, Richard C.; Fahey, D. W.; Feichter, J.; Fillmore, D.; Freitag, S.; Ghan, Steven J.; Ginoux, P.; Gong, S.; Horowitz, L.; Iversen, T.; Kirkevag, A.; Klimont, Z.; Kondo, Yutaka; Krol, M.; Liu, Xiaohong; Miller, R.; Montanaro, V.; Moteki, N.; Myhre, G.; Penner, J.; Perlwitz, Ja; Pitari, G.; Reddy, S.; Sahu, L.; Sakamoto, H.; Schuster, G.; Schwarz, J. P.; Seland, O.; Stier, P.; Takegawa, Nobuyuki; Takemura, T.; Textor, C.; van Aardenne, John; Zhao, Y.
2009-11-27
We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentrations, aerosol absorption optical depth (AAOD) from AERONET and OMI retrievals and BC column estimations based on AERONET. In regions other than Asia, most models are biased high compared to surface concentration measurements. However compared with (column) AAOD or BC burden retrievals, the models are generally biased low. The average ratio of model to retrieved AAOD is less than 0.7 in South American and 0.6 in African biomass burning regions; both of these regions lack surface concentration measurements. In Asia the average model to observed ratio is 0.6 for AAOD and 0.5 for BC surface concentrations. Compared with aircraft measurements over the Americas at latitudes between 0 and 50N, the average model is a factor of 10 larger than observed, and most models exceed the measured BC standard deviation in the mid to upper troposphere. At higher latitudes the average model-to-aircraft BC ratio is 0.6, and models underestimate the observed BC loading in the lower and middle troposphere associated with springtime Arctic haze. Low model bias for AAOD but overestimation of surface and upper atmospheric BC concentrations at lower latitudes suggests that most models are underestimating BC absorption and should improve estimates for refractive index, particle size, and optical effects of BC coating. Retrieval uncertainties and/or differences with model diagnostic treatment may also contribute to the model-measurement disparity. Largest AeroCom model diversity occurred in northern Eurasia and the remote Arctic, regions influenced by anthropogenic sources. Changing emissions, aging, removal, or optical properties within a single model generated a smaller change in model predictions than the
Evaluation of black carbon estimations in global aerosol models
Directory of Open Access Journals (Sweden)
D. Koch
2009-07-01
Full Text Available We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentrations, aerosol absorption optical depth (AAOD) from AERONET and Ozone Monitoring Instrument (OMI) retrievals and BC column estimations based on AERONET. In regions other than Asia, most models are biased high compared to surface concentration measurements. However compared with (column) AAOD or BC burden retrievals, the models are generally biased low. The average ratio of model to retrieved AAOD is less than 0.7 in South American and 0.6 in African biomass burning regions; both of these regions lack surface concentration measurements. In Asia the average model to observed ratio is 0.6 for AAOD and 0.5 for BC surface concentrations. Compared with aircraft measurements over the Americas at latitudes between 0 and 50 N, the average model is a factor of 10 larger than observed, and most models exceed the measured BC standard deviation in the mid to upper troposphere. At higher latitudes the average model-to-aircraft BC ratio is 0.6, and models underestimate the observed BC loading in the lower and middle troposphere associated with springtime Arctic haze. Low model bias for AAOD but overestimation of surface and upper atmospheric BC concentrations at lower latitudes suggests that most models are underestimating BC absorption and should improve estimates for refractive index, particle size, and optical effects of BC coating. Retrieval uncertainties and/or differences with model diagnostic treatment may also contribute to the model-measurement disparity. Largest AeroCom model diversity occurred in northern Eurasia and the remote Arctic, regions influenced by anthropogenic sources. Changing emissions, aging, removal, or optical properties within a single model generated a
Talerngsak Angkuraseranee
2010-01-01
The additive and dominance genetic variances of 5,801 Duroc reproductive and growth records were estimated using BLUPF90 PC-PACK. Estimates were obtained for number born alive (NBA), birth weight (BW), number weaned (NW), and weaning weight (WW). Data were analyzed using two mixed model equations. The first model included fixed effects and random effects identifying inbreeding depression, additive gene effects and permanent environment effects. The second model was similar to the first model, but...
Mixed Effects Models for Complex Data
Wu, Lang
2009-01-01
Presenting effective approaches to address missing data, measurement errors, censoring, and outliers in longitudinal data, this book covers linear, nonlinear, generalized linear, nonparametric, and semiparametric mixed effects models. It links each mixed effects model with the corresponding class of regression model for cross-sectional data and discusses computational strategies for likelihood estimation of mixed effects models. The author briefly describes generalized estimating equations methods and Bayesian mixed effects models and explains how to implement standard models using R and S-Plus.
Bayesian Estimation of Categorical Dynamic Factor Models
Zhang, Zhiyong; Nesselroade, John R.
2007-01-01
Dynamic factor models have been used to analyze continuous time series behavioral data. We extend 2 main dynamic factor model variations--the direct autoregressive factor score (DAFS) model and the white noise factor score (WNFS) model--to categorical DAFS and WNFS models in the framework of the underlying variable method and illustrate them with…
Lasso adjustments of treatment effect estimates in randomized experiments.
Bloniarz, Adam; Liu, Hanzhong; Zhang, Cun-Hui; Sekhon, Jasjeet S; Yu, Bin
2016-07-05
We provide a principled way for investigators to analyze randomized experiments when the number of covariates is large. Investigators often use linear multivariate regression to analyze randomized experiments instead of simply reporting the difference of means between treatment and control groups. Their aim is to reduce the variance of the estimated treatment effect by adjusting for covariates. If there are a large number of covariates relative to the number of observations, regression may perform poorly because of overfitting. In such cases, the least absolute shrinkage and selection operator (Lasso) may be helpful. We study the resulting Lasso-based treatment effect estimator under the Neyman-Rubin model of randomized experiments. We present theoretical conditions that guarantee that the estimator is more efficient than the simple difference-of-means estimator, and we provide a conservative estimator of the asymptotic variance, which can yield tighter confidence intervals than the difference-of-means estimator. Simulation and data examples show that Lasso-based adjustment can be advantageous even when the number of covariates is less than the number of observations. Specifically, a variant using Lasso for selection and ordinary least squares (OLS) for estimation performs particularly well, and it chooses a smoothing parameter based on combined performance of Lasso and OLS.
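A simplified variant of the Lasso-then-OLS idea can be sketched end to end: run a coordinate-descent Lasso of the centered outcome on standardized covariates to select adjusters, then refit OLS of the outcome on treatment plus the selected covariates (all data below are synthetic; the paper's estimator and its conservative variance formula are more refined):

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=100):
    """Lasso by cyclic coordinate descent; X is assumed standardized."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]                  # partial residual
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0)  # soft-threshold
    return b

def lasso_ols_ate(y, T, X, lam=0.2):
    """Select adjusters with the Lasso, then estimate the treatment
    effect by OLS of y on [1, T, selected covariates]."""
    Xs = (X - X.mean(0)) / X.std(0)
    sel = np.abs(lasso_cd(Xs, y - y.mean(), lam)) > 1e-8
    D = np.column_stack([np.ones(len(y)), T, Xs[:, sel]])
    return np.linalg.lstsq(D, y, rcond=None)[0][1]

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.normal(size=(n, p))
T = rng.integers(0, 2, size=n)                  # random assignment
y = 2.0 * T + X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

ate_adj = lasso_ols_ate(y, T, X)                # Lasso + OLS adjustment
ate_dim = y[T == 1].mean() - y[T == 0].mean()   # difference of means
```

Because assignment is randomized, both estimators are consistent for the true effect of 2; the adjusted one removes the outcome variation explained by the selected covariates and is therefore less variable.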
Energy Technology Data Exchange (ETDEWEB)
Jin, Cui; Xiao, Xiangming; Wagle, Pradeep; Griffis, Timothy; Dong, Jinwei; Wu, Chaoyang; Qin, Yuanwei; Cook, David R.
2015-11-01
Satellite-based Production Efficiency Models (PEMs) often require meteorological reanalysis data such as the North America Regional Reanalysis (NARR) by the National Centers for Environmental Prediction (NCEP) as model inputs to simulate Gross Primary Production (GPP) at regional and global scales. This study first evaluated the accuracies of air temperature (TNARR) and downward shortwave radiation (RNARR) of the NARR by comparing with in-situ meteorological measurements at 37 AmeriFlux non-crop eddy flux sites, then used one PEM – the Vegetation Photosynthesis Model (VPM) to simulate 8-day mean GPP (GPPVPM) at seven AmeriFlux crop sites, and investigated the uncertainties in GPPVPM from climate inputs as compared with eddy covariance-based GPP (GPPEC). Results showed that TNARR agreed well with in-situ measurements; RNARR, however, was positively biased. An empirical linear correction was applied to RNARR, and significantly reduced the relative error of RNARR by ~25% for crop site-years. Overall, GPPVPM calculated from the in-situ (GPPVPM(EC)), original (GPPVPM(NARR)) and adjusted NARR (GPPVPM(adjNARR)) climate data tracked the seasonality of GPPEC well, albeit with different degrees of biases. GPPVPM(EC) showed a good match with GPPEC for maize (Zea mays L.), but was slightly underestimated for soybean (Glycine max L.). Replacing the in-situ climate data with the NARR resulted in a significant overestimation of GPPVPM(NARR) (18.4/29.6% for irrigated/rainfed maize and 12.7/12.5% for irrigated/rainfed soybean). GPPVPM(adjNARR) showed a good agreement with GPPVPM(EC) for both crops due to the reduction in the bias of RNARR. The results imply that the bias of RNARR introduced significant uncertainties into the PEM-based GPP estimates, suggesting that more accurate surface radiation datasets are needed to estimate primary production of terrestrial ecosystems at regional and global scales.
Simultaneous estimation of parameters in the bivariate Emax model.
Magnusdottir, Bergrun T; Nyquist, Hans
2015-12-10
In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation.
A Class of New Biased Estimators for Coefficients in Mixed Effect Linear Model
Institute of Scientific and Technical Information of China (English)
张华伟
2013-01-01
Abstract: In the repeated-measures data model, in order to deal with multicollinearity, an estimator called s-K-B for the parameters in the mixed effect linear model is proposed. Under certain conditions, the s-K-B estimator is shown to be superior to the ridge estimator, the Stein estimator, the s-K estimator, and the least squares estimator, respectively.
A Prototypical Model for Estimating High Tech Navy Recruiting Markets
1991-12-01
Identification and Estimation of Exchange Rate Models with Unobservable Fundamentals
Chambers, M.J.; McCrorie, J.R.
2004-01-01
This paper is concerned with issues of model specification, identification, and estimation in exchange rate models with unobservable fundamentals. We show that the model estimated by Gardeazabal, Regúlez and Vázquez (International Economic Review, 1997) is not identified and demonstrate how to spec...
Bayesian approach to decompression sickness model parameter estimation.
Howle, L E; Weber, P W; Nichols, J M
2017-03-01
We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
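The contrast the abstract draws can be illustrated with a toy sketch (not the authors' decompression model): for a single event probability estimated from hypothetical binary outcomes, maximum likelihood yields one point estimate, while a Bayesian posterior supports direct probability statements such as a credible interval.

```python
import random

random.seed(0)

# Hypothetical binary outcomes: 1 = decompression sickness observed on a trial.
outcomes = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
n, k = len(outcomes), sum(outcomes)

# Maximum likelihood: a single point estimate of the event probability.
p_mle = k / n

# Bayesian: with a uniform Beta(1, 1) prior, the posterior is Beta(1 + k, 1 + n - k).
# Sampling from the posterior lets us make probability statements about p itself.
draws = sorted(random.betavariate(1 + k, 1 + n - k) for _ in range(20000))
ci_low, ci_high = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]

print(f"MLE: {p_mle:.2f}, 95% credible interval: ({ci_low:.2f}, {ci_high:.2f})")
```

The credible interval answers "with what probability does the parameter lie in this range?", which is exactly the kind of statement the abstract argues for.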
Estimation of Boundary Conditions for Coastal Models,
1974-09-01
The process is described by a convolution equation of the form ∫ h(i) y(t − i) di (3). The solution to Eq. (3) may be obtained by Fourier transformation, because the covariance function and the spectral density function form a Fourier transform pair. To form the cross-spectral density function estimate by a numerical Fourier transform, the even and odd parts of the cross-covariance function are determined by A(k) = ½[γxy(k) + γyx(k)] and B(k) = ½[γxy(k) − γyx(k)], from which the co-spectral density function is estimated as C(f) = 2T[A(0)...
Simulation model accurately estimates total dietary iodine intake.
Verkaik-Kloosterman, Janneke; van 't Veer, Pieter; Ocké, Marga C
2009-07-01
One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and iodization of industrially processed foods. To be able to take these uncertainties into account in estimating iodine intake, a simulation model combining deterministic and probabilistic techniques was developed. Data from the Dutch National Food Consumption Survey (1997-1998) and an update of the Food Composition database were used to simulate 3 different scenarios: Dutch iodine legislation until July 2008, Dutch iodine legislation after July 2008, and a potential future situation. Results from studies measuring iodine excretion during the former legislation are comparable with the iodine intakes estimated with our model. For both former and current legislation, iodine intake was adequate for a large part of the Dutch population, but not for some young children. With lower salt iodine levels, the percentage of the Dutch population with intakes that were too low increased (almost 10% of young children). To keep iodine intakes adequate, salt iodine levels should not be decreased, unless many more foods contain iodized salt. Our model should be useful in predicting the effects of food reformulation or fortification on habitual nutrient intakes.
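The deterministic/probabilistic combination described above can be sketched in miniature: each simulated person gets a fixed intake from regular foods plus an uncertain discretionary-salt contribution drawn at random. All numbers below (base intake, cutoff, salt-use probability) are invented for illustration and are not the model's actual Dutch data.

```python
import random

random.seed(4)

# Toy deterministic/probabilistic intake simulation: deterministic base intake
# from food composition data, probabilistic discretionary iodized-salt part.
def simulate_person():
    base = 120.0                               # ug/day from regular foods (invented)
    uses_iodized_salt = random.random() < 0.6  # uncertain discretionary use
    salt = random.uniform(20.0, 80.0) if uses_iodized_salt else 0.0
    return base + salt

intakes = [simulate_person() for _ in range(100000)]
below_cutoff = sum(i < 130.0 for i in intakes) / len(intakes)  # hypothetical cutoff
print(f"fraction below cutoff: {below_cutoff:.2f}")
```

Repeating such draws over a whole survey population yields a habitual intake distribution from which the fraction of inadequate intakes can be read off under each legislation scenario.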
Parameter estimation and error analysis in environmental modeling and computation
Kalmaz, E. E.
1986-01-01
A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.
EFFICIENT ESTIMATION OF FUNCTIONAL-COEFFICIENT REGRESSION MODELS WITH DIFFERENT SMOOTHING VARIABLES
Institute of Scientific and Technical Information of China (English)
Zhang Riquan; Li Guoying
2008-01-01
In this article, a procedure is defined for estimating the coefficient functions in functional-coefficient regression models with different smoothing variables in different coefficient functions. In the first step, initial estimates of the coefficient functions are obtained by the local linear technique and an averaging method. In the second step, based on the initial estimates, efficient estimates of the coefficient functions are obtained by a one-step back-fitting procedure. The efficient estimators share the same asymptotic normality as the local linear estimators for functional-coefficient models with a single smoothing variable in different functions. Two simulated examples show that the procedure is effective.
Estimating Indirect Genetic Effects: Precision of Estimates and Optimum Designs
Bijma, P.
2010-01-01
Social interactions among individuals are abundant both in natural and domestic populations. Such social interactions cause phenotypes of individuals to depend on genes carried by other individuals, a phenomenon known as indirect genetic effects (IGE). Because IGEs have drastic effects on the rate a
A robust methodology for kinetic model parameter estimation for biocatalytic reactions
DEFF Research Database (Denmark)
Al-Haque, Naweed; Andrade Santacoloma, Paloma de Gracia; Lima Afonso Neto, Watson;
2012-01-01
Effective estimation of parameters in biocatalytic reaction kinetic expressions is very important when building process models to enable evaluation of process technology options and alternative biocatalysts. The kinetic models used to describe enzyme-catalyzed reactions generally include several...
On Frequency Domain Models for TDOA Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Nielsen, Jesper Kjær; Christensen, Mads Græsbøll
2015-01-01
of a much more general method. In this connection, we establish the conditions under which the cross-correlation method is a statistically efficient estimator. One of the conditions is that the source signal is periodic with a known fundamental frequency of 2π/N radians per sample, where N is the number...
Leaching of atmospherically deposited nitrogen from forested watersheds can acidify lakes and streams. Using a modified version of the Model of Acidification of Groundwater in Catchments, we made computer simulations of such effects for 36 lake catchments in the Adirondack Mount...
Robust Estimation and Forecasting of the Capital Asset Pricing Model
G. Bian (Guorui); M.J. McAleer (Michael); W-K. Wong (Wing-Keung)
2013-01-01
textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more
Robust Estimation and Forecasting of the Capital Asset Pricing Model
G. Bian (Guorui); M.J. McAleer (Michael); W-K. Wong (Wing-Keung)
2010-01-01
textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more
PARAMETER ESTIMATION IN LINEAR REGRESSION MODELS FOR LONGITUDINAL CONTAMINATED DATA
Institute of Scientific and Technical Information of China (English)
Qian Weimin; Li Yumei
2005-01-01
The parameter estimation and the coefficient of contamination for the regression models with repeated measures are studied when its response variables are contaminated by another random variable sequence. Under the suitable conditions it is proved that the estimators which are established in the paper are strongly consistent estimators.
Hanike, Yusrianti; Sadik, Kusman; Kurnia, Anang
2016-02-01
This research models the unemployment rate in Indonesia based on a Poisson distribution, estimated by combining post-stratification with a Small Area Estimation (SAE) model. Post-stratification is a sampling technique in which strata are formed after the survey data have been collected; it is used when the survey data do not directly serve the area of interest. The area of interest here is the education level of the unemployed, separated into seven categories. The data were obtained from the National Labour Force Survey (Sakernas), collected by Statistics Indonesia (BPS). This national survey yields samples that are too small at the district level, and SAE models are one alternative for addressing this. Accordingly, we combine post-stratification sampling with an SAE model and propose two main models: Model I treats the education category as a dummy variable, and Model II treats it as an area random effect. Both models violated the Poisson assumption; using a Poisson-Gamma model, the overdispersion in Model I (1.23) was reduced to 0.91 chi-square/df, and the underdispersion in Model II (0.35) was brought to 0.94 chi-square/df. Empirical Bayes was applied to estimate the proportion of each education category among the unemployed. Using the Bayesian Information Criterion (BIC), Model I has a smaller mean square error (MSE) than Model II.
Yu, Xiaolin; Zhang, Shaoqing; Lin, Xiaopei; Li, Mingkui
2017-03-01
The uncertainties in values of coupled model parameters are an important source of model bias that causes model climate drift. The values can be calibrated by a parameter estimation procedure that projects observational information onto model parameters. The signal-to-noise ratio of error covariance between the model state and the parameter being estimated directly determines whether the parameter estimation succeeds or not. With a conceptual climate model that couples the stochastic atmosphere and slow-varying ocean, this study examines the sensitivity of state-parameter covariance on the accuracy of estimated model states in different model components of a coupled system. Due to the interaction of multiple timescales, the fast-varying atmosphere with a chaotic nature is the major source of the inaccuracy of estimated state-parameter covariance. Thus, enhancing the estimation accuracy of atmospheric states is very important for the success of coupled model parameter estimation, especially for the parameters in the air-sea interaction processes. The impact of chaotic-to-periodic ratio in state variability on parameter estimation is also discussed. This simple model study provides a guideline when real observations are used to optimize model parameters in a coupled general circulation model for improving climate analysis and predictions.
Obtaining Diagnostic Classification Model Estimates Using Mplus
Templin, Jonathan; Hoffman, Lesa
2013-01-01
Diagnostic classification models (aka cognitive or skills diagnosis models) have shown great promise for evaluating mastery on a multidimensional profile of skills as assessed through examinee responses, but continued development and application of these models has been hindered by a lack of readily available software. In this article we…
Lag space estimation in time series modelling
DEFF Research Database (Denmark)
Goutte, Cyril
1997-01-01
The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...
Estimation of the Dose and Dose Rate Effectiveness Factor
Chappell, L.; Cucinotta, F. A.
2013-01-01
Current models to estimate radiation risk use the Life Span Study (LSS) cohort, which received high doses and high dose rates of radiation. Transferring risks from these high dose rates to the low doses and dose rates received by astronauts in space is a source of uncertainty in our risk calculations. The solid cancer models recommended by BEIR VII [1], UNSCEAR [2], and Preston et al. [3] are fitted adequately by a linear dose response model, which implies that low doses and dose rates would be estimated the same as high doses and dose rates. However, animal and cell experiments imply there should be curvature in the dose response curve for tumor induction, and animal experiments that directly compare acute to chronic exposures show smaller increases in tumor induction for chronic exposures. A dose and dose rate effectiveness factor (DDREF) has been estimated and applied to transfer risks from the high doses and dose rates of the LSS cohort to low doses and dose rates such as those from missions in space. The BEIR VII committee [1] combined DDREF estimates from the LSS cohort and animal experiments using Bayesian methods to arrive at its recommended DDREF value of 1.5 with uncertainty. We reexamined the animal data considered by BEIR VII and included additional animal data and human chromosome aberration data to improve the DDREF estimate. Several experiments chosen by BEIR VII were deemed inappropriate for application to human risk models of solid cancer risk. Animal tumor experiments performed by Ullrich et al. [4], Alpen et al. [5], and Grahn et al. [6] were analyzed to estimate the DDREF, and human chromosome aberration experiments performed on a sample of astronauts within NASA were also available for this purpose. The LSS cohort results reported by BEIR VII were combined with the new radiobiology results using Bayesian methods.
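The way a DDREF enters a risk calculation is simple division: a risk coefficient fitted to high acute doses is scaled down before being applied at low doses and dose rates. A hypothetical numerical illustration (all values invented except the BEIR VII DDREF of 1.5 mentioned above):

```python
# Hypothetical illustration of applying a dose and dose rate effectiveness
# factor (DDREF): risk coefficients fitted to the high acute doses of the LSS
# cohort are divided by the DDREF before being used at low doses and dose
# rates. All numbers except the DDREF of 1.5 are invented.
high_dose_slope = 0.10      # excess risk per Gy, fitted to acute exposures
ddref = 1.5                 # BEIR VII recommended value (with uncertainty)
low_dose_slope = high_dose_slope / ddref
chronic_dose = 0.2          # Gy, accumulated slowly, e.g. on a space mission
risk = low_dose_slope * chronic_dose
print(f"excess risk estimate: {risk:.4f}")
```

Because the DDREF itself carries uncertainty, a full calculation would propagate its distribution rather than plug in a single value, which is why the Bayesian combination above matters.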
Granato, Gregory; Jones, Susan C.
2015-01-01
Studies from North Carolina (NC) indicate that increasing concentrations of total phosphorus (TP) and other constituents are correlated to adverse effects on stream ecosystems as evidenced by differences in benthic macroinvertebrate populations in streams across the state. As a result, stringent in-stream criteria based on the Water Quality Assessed by Benthic macroinvertebrate health ratings (WQABI) have been proposed for regulating TP concentrations in stormwater discharges and for selecting stormwater best management practices (BMPs). The WQABI criteria concentrations may not be suitable for evaluating stormwater discharges because they are based on baseflow concentration statistics, the criteria do not include a clearly defined allowable exceedance frequency, and there are substantial uncertainties in estimating the quality of runoff, BMP discharge, and receiving waters for sites without monitoring data.
Highway traffic model-based density estimation
Morarescu, Irinel - Constantin; CANUDAS DE WIT, Carlos
2011-01-01
The travel time spent in traffic networks is one of the main concerns of societies in developed countries. A major requirement for providing traffic control and services is continuous prediction, several minutes into the future. This paper focuses on an important ingredient of traffic forecasting: real-time traffic state estimation using only a limited amount of data. Simulation results illustrate the performance of the proposed ...
Modeling and Parameter Estimation of a Small Wind Generation System
Directory of Open Access Journals (Sweden)
Carlos A. Ramírez Gómez
2013-11-01
The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct-current load. In order to estimate the parameters, wind speed data were registered at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed with PSIM software, and variables were registered from that simulation to estimate the parameters. The wind generation system model together with the estimated parameters represents the detailed model well, while offering greater flexibility than the model programmed in PSIM.
Projection-type estimation for varying coefficient regression models
Lee, Young K; Park, Byeong U; 10.3150/10-BEJ331
2012-01-01
In this paper we introduce new estimators of the coefficient functions in the varying coefficient regression model. The proposed estimators are obtained by projecting the vector of the full-dimensional kernel-weighted local polynomial estimators of the coefficient functions onto a Hilbert space with a suitable norm. We provide a backfitting algorithm to compute the estimators. We show that the algorithm converges at a geometric rate under weak conditions. We derive the asymptotic distributions of the estimators and show that the estimators have the oracle properties. This is done for the general order of local polynomial fitting and for the estimation of the derivatives of the coefficient functions, as well as the coefficient functions themselves. The estimators turn out to have several theoretical and numerical advantages over the marginal integration estimators studied by Yang, Park, Xue and Härdle [J. Amer. Statist. Assoc. 101 (2006) 1212-1227].
DEFF Research Database (Denmark)
Uthes, Sandra; Sattler, Claudia; Piorr, Annette
2010-01-01
production orientations and grassland types was modeled under the presence and absence of the grassland extensification scheme using the bio-economic model MODAM. Farms were based on available accountancy data and surveyed production data, while information on farm location within the district was derived...... and environmental effects were heterogeneous in space and farm types as a result of different agricultural production and site characteristics. On-farm costs ranged from zero up to almost 1500 Euro/ha. Such high costs occurred only in a very small part of the regional area, whereas the majority of the grassland had...
Integrated traffic conflict model for estimating crash modification factors.
Shahdah, Usama; Saccomanno, Frank; Persaud, Bhagwant
2014-10-01
Crash modification factors (CMFs) for road safety treatments are usually obtained through observational models based on reported crashes. Observational Bayesian before-and-after methods have been applied to obtain more precise estimates of CMFs by accounting for the regression-to-the-mean bias inherent in naive methods. However, sufficient crash data reported over an extended period of time are needed to provide reliable estimates of treatment effects, a requirement that can be a challenge for certain types of treatment. In addition, these studies require that sites analyzed actually receive the treatment to which the CMF pertains. Another key issue with observational approaches is that they are not causal in nature, and as such, cannot provide a sound "behavioral" rationale for the treatment effect. Surrogate safety measures based on high risk vehicle interactions and traffic conflicts have been proposed to address this issue by providing a more "causal perspective" on lack of safety for different road and traffic conditions. The traffic conflict approach has been criticized, however, for lacking a formal link to observed and verified crashes, a difficulty that this paper attempts to resolve by presenting and investigating an alternative approach for estimating CMFs using simulated conflicts that are linked formally to observed crashes. The integrated CMF estimates are compared to estimates from an empirical Bayes (EB) crash-based before-and-after analysis for the same sample of treatment sites. The treatment considered involves changing left turn signal priority at Toronto signalized intersections from permissive to protected-permissive. The results are promising in that the proposed integrated method yields CMFs that closely match those obtained from the crash-based EB before-and-after analysis.
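As a sketch of the empirical Bayes before-and-after logic that the integrated method is benchmarked against: the EB estimate shrinks a site's observed before-period count toward a safety performance function (SPF) prediction, and the CMF is the ratio of observed after-period crashes to the crashes expected had no treatment occurred. All numbers here are hypothetical, not the Toronto data.

```python
# Illustrative empirical Bayes (EB) before-after computation for one site.
# The SPF predictions and overdispersion parameter would normally come from
# a fitted negative binomial crash model; these values are invented.
def eb_expected(observed_before, spf_prediction, overdispersion):
    """EB estimate of before-period crashes: a weighted average of the
    site's observed count and the safety performance function prediction."""
    w = 1.0 / (1.0 + overdispersion * spf_prediction)
    return w * spf_prediction + (1.0 - w) * observed_before

obs_before, obs_after = 14, 6
spf_before, spf_after = 10.0, 9.0   # SPF predictions for the two periods
eb_before = eb_expected(obs_before, spf_before, overdispersion=0.2)
# Expected after-period crashes had no treatment been applied:
expected_after = eb_before * (spf_after / spf_before)
cmf = obs_after / expected_after
print(f"EB before: {eb_before:.2f}, CMF estimate: {cmf:.2f}")
```

A CMF below 1 indicates a crash reduction attributed to the treatment; the shrinkage step is what counters the regression-to-the-mean bias mentioned in the abstract.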
Chen, Li; Han, Ting-Ting; Li, Tao; Ji, Ya-Qin; Bai, Zhi-Peng; Wang, Bin
2012-07-01
Due to the lack of a prediction model for current wind erosion in China and the slow development of such models, this study aims to predict the wind erosion of soil and the resulting dust emission, and to develop a prediction model for wind erosion in Tianjin, by investigating the structure, parameter systems, and relationships among the parameter systems of wind erosion prediction models in typical areas, using the U.S. Wind Erosion Prediction System (WEPS) as a reference. Based on remote sensing and test data, a parameter system was established and a model was developed that is suitable for the prediction of wind erosion and dust emission in Tianjin. Tianjin was divided into 11 080 blocks at a resolution of 1 km × 1 km, among which 7 778 dust-emitting blocks were selected. The parameters of the blocks were localized, including longitude, latitude, elevation, and direction, as were the database files of the blocks, including the wind, climate, soil, and management files, and the weps.run file was edited. Secondary development was done in C++ based on Microsoft Visual Studio 2008, and the dust fluxes of the 7 778 blocks were estimated, including creep and saltation fluxes, suspension fluxes, and PM10 fluxes. Based on the parameters of wind tunnel experiments in Inner Mongolia and the soil measurement and climate data in the suburbs of Tianjin, the wind erosion modulus, wind erosion fluxes, dust release modulus, and dust release fluxes were calculated for the four seasons and the whole year in the suburbs of Tianjin. In 2009, the total creep and saltation fluxes, suspension fluxes, and PM10 fluxes in the suburbs of Tianjin were 2.54×10^6 t, 1.25×10^7 t, and 9.04×10^5 t, respectively, of which the parts directed toward the central district were 5.61×10^5 t, 2.89×10^6 t, and 2.03×10^5 t, respectively.
Continuous Estimation of Wrist Torque from Surface EMG Signals Using Path-dependent Model
Institute of Scientific and Technical Information of China (English)
PAN Li-zhi; ZHANG Ding-guo; SHENG Xin-jun; ZHU Xiang-yang
2014-01-01
Continuous estimation of wrist torque from surface electromyography (EMG) signals has been studied by several research institutes. Hysteresis is a known effect in the EMG-force relationship. In this work, a path-dependent model based on the hysteresis effect was used to continuously estimate wrist torque from surface EMG signals. The surface EMG signals of the flexor carpi ulnaris (FCU) and extensor carpi radialis (ECR) were collected along with wrist torque in the flexion/extension degree of freedom; the FCU signal was used to estimate wrist flexion torque and the ECR signal to estimate wrist extension torque. The existence of the hysteresis effect was confirmed during both wrist flexion and extension for all subjects, and the estimation performance of the path-dependent model is much better than that of the overall model. The path-dependent model is thus suitable for improving the accuracy of wrist torque estimation.
Estimation in partial linear EV models with replicated observations
Institute of Scientific and Technical Information of China (English)
CUI; Hengjian
2004-01-01
The aim of this work is to construct parameter estimators in partial linear errors-in-variables (EV) models and explore their asymptotic properties. Unlike related references, the assumption of a known error covariance matrix is removed when the sample can be repeatedly drawn at each design point from the model. Estimators of the regression parameters of interest, the model error variance, and the nonparametric function are constructed. Under some regularity conditions, all of the estimators are proved to be strongly consistent, and asymptotic normality of the regression parameter estimator is also presented. A simulation study is reported to illustrate the asymptotic results.
A simulation of water pollution model parameter estimation
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
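The simulate-then-fit loop described above can be sketched with a simpler one-dimensional instantaneous-release diffusion model standing in for the paper's two-dimensional shear-diffusion model: generate concentrations, add Gaussian "sensor" noise, then recover the diffusion coefficient by least squares. All values are illustrative.

```python
import math, random

random.seed(1)

def concentration(x, t, mass, diff):
    """1-D instantaneous-release diffusion solution (a simplified stand-in
    for the two-dimensional shear-diffusion model in the abstract)."""
    return mass / math.sqrt(4 * math.pi * diff * t) * math.exp(-x * x / (4 * diff * t))

true_diff, mass, t = 2.0, 10.0, 1.0
xs = [0.2 * i for i in range(-15, 16)]
# Simulated remote-sensed data: model output plus Gaussian sensor noise.
data = [concentration(x, t, mass, true_diff) + random.gauss(0, 0.02) for x in xs]

# Least-squares estimation of the diffusion coefficient by grid search.
def sse(diff):
    return sum((c - concentration(x, t, mass, diff)) ** 2 for x, c in zip(xs, data))

candidates = [0.5 + 0.01 * i for i in range(300)]   # 0.50 .. 3.49
d_hat = min(candidates, key=sse)
print(f"true D = {true_diff}, estimated D = {d_hat:.2f}")
```

Repeating this with different noise levels, sensor spacings, or numbers of readings mimics the sensitivity questions the abstract raises about resolution and sensor array size.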
Assessing Uncertainty of Interspecies Correlation Estimation Models for Aromatic Compounds
We developed Interspecies Correlation Estimation (ICE) models for aromatic compounds containing 1 to 4 benzene rings to assess uncertainty in toxicity extrapolation in two data compilation approaches. ICE models are mathematical relationships between surrogate and predicted test ...
Estimation of a multivariate mean under model selection uncertainty
Directory of Open Access Journals (Sweden)
Georges Nguefack-Tsague
2014-05-01
Model selection uncertainty occurs when we select a model based on one data set and subsequently apply it for statistical inference, because the "correct" model is not selected with certainty. When the selection and inference are based on the same dataset, additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the theory of James and Stein for estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking the selection procedure into account can be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we use Stein's results to show that it is a minimax estimator and can outperform Stein-type estimators.
West, Tessa V; Popp, Danielle; Kenny, David A
2008-03-01
The study of gender differences is a pervasive topic in relationship science. However, there are several neglected issues in this area that require special care and attention. First, there is not just one gender effect but rather three gender effects: gender of the respondent, gender of the partner, and the gender of respondent by gender of the partner interaction. To separate these three effects, the dyadic research design should ideally have three different types of dyads: male-female, male-male, and female-female. Second, the analysis of gender differences in relational studies could benefit from the application of recent advances in the analysis of dyadic data, most notably the Actor-Partner Interdependence Model. Third, relationship researchers need to consider the confounding, mediating, and moderating effects of demographic variables. We use the American Couples (Blumstein & Schwartz, 1983) data set to illustrate these points.
Estimation for the simple linear Boolean model
2006-01-01
We consider the simple linear Boolean model, a fundamental coverage process also known as the Markov/General/infinity queue. In the model, line segments of independent and identically distributed length are located at the points of a Poisson process. The segments may overlap, resulting in a pattern of "clumps"-regions of the line that are covered by one or more segments-alternating with uncovered regions or "spacings". Study and application of the model have been impeded by the difficult...
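The clump/spacing structure of the Boolean model is easy to simulate, and the simulation can be checked against the known stationary vacancy probability exp(−λE[L]) for segments of mean length E[L] placed at Poisson(λ) points. Segment lengths are taken exponential here purely for convenience, not because the model requires it.

```python
import math, random

random.seed(2)

# Simulate the simple linear Boolean model: segments with i.i.d. lengths are
# laid down at the points of a Poisson process on a long line, producing
# alternating clumps (covered runs) and spacings (uncovered runs).
rate, mean_len, line_len = 0.5, 1.0, 10000.0

points, x = [], 0.0
while True:
    x += random.expovariate(rate)        # Poisson process via exponential gaps
    if x > line_len:
        break
    points.append(x)
segments = sorted((p, p + random.expovariate(1.0 / mean_len)) for p in points)

# Merge overlapping segments into clumps and total the covered length.
covered, cur_start, cur_end = 0.0, None, None
for s, e in segments:
    if cur_end is None or s > cur_end:
        if cur_end is not None:
            covered += min(cur_end, line_len) - cur_start
        cur_start, cur_end = s, e
    else:
        cur_end = max(cur_end, e)
if cur_end is not None:
    covered += min(cur_end, line_len) - cur_start

uncovered_frac = 1.0 - covered / line_len
theory = math.exp(-rate * mean_len)      # stationary vacancy probability
print(f"simulated spacing fraction {uncovered_frac:.3f} vs theory {theory:.3f}")
```

Such simulated clump and spacing patterns are exactly the data from which the model's estimation problem starts.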
Bregman divergence as general framework to estimate unnormalized statistical models
Gutmann, Michael
2012-01-01
We show that the Bregman divergence provides a rich framework to estimate unnormalized statistical models for continuous or discrete random variables, that is, models which do not integrate or sum to one, respectively. We prove that recent estimation methods such as noise-contrastive estimation, ratio matching, and score matching belong to the proposed framework, and explain their interconnection based on supervised learning. Further, we discuss the role of boosting in unsupervised learning.
Estimating Dynamic Equilibrium Models using Macro and Financial Data
DEFF Research Database (Denmark)
Christensen, Bent Jesper; Posch, Olaf; van der Wel, Michel
We show that including financial market data at daily frequency, along with macro series at standard lower frequency, facilitates statistical inference on structural parameters in dynamic equilibrium models. Our continuous-time formulation conveniently accounts for the difference in observation...... of the estimators and estimate the model using 20 years of U.S. macro and financial data....
CONSISTENCY OF LS ESTIMATOR IN SIMPLE LINEAR EV REGRESSION MODELS
Institute of Scientific and Technical Information of China (English)
Liu Jixue; Chen Xiru
2005-01-01
The consistency of the LS estimator in the simple linear EV model is studied. It is shown that under some common assumptions on the model, weak and strong consistency of the estimator are equivalent, but the same does not hold for quadratic-mean consistency.
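A quick simulation shows why LS behavior in EV models deserves care: regressing on an error-contaminated covariate attenuates the slope toward zero, and replicated measurements make the measurement error variance estimable so the slope can be corrected. The moment correction below is a standard textbook device for illustration, not necessarily the estimator studied in the paper.

```python
import random

random.seed(3)

# Errors-in-variables (EV) illustration: y = beta*x + e, but x is only
# observed through two noisy replicates w1 and w2.
beta, n = 2.0, 50000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [beta * x + random.gauss(0.0, 0.5) for x in xs]
w1 = [x + random.gauss(0.0, 1.0) for x in xs]   # first noisy measurement
w2 = [x + random.gauss(0.0, 1.0) for x in xs]   # replicated measurement

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

wbar = [(a + b) / 2.0 for a, b in zip(w1, w2)]
naive = cov(wbar, ys) / cov(wbar, wbar)                   # biased toward zero
var_u = mean([(a - b) ** 2 for a, b in zip(w1, w2)]) / 2  # measurement variance
corrected = cov(wbar, ys) / (cov(wbar, wbar) - var_u / 2)
print(f"naive slope {naive:.2f}, corrected slope {corrected:.2f}, true {beta}")
```

The replicate difference w1 − w2 contains only measurement error, which is what lets the correction work without assuming the error covariance is known.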
Estimated Frequency Domain Model Uncertainties used in Robust Controller Design
DEFF Research Database (Denmark)
Tøffner-Clausen, S.; Andersen, Palle; Stoustrup, Jakob;
1994-01-01
This paper deals with the combination of system identification and robust controller design. Recent results on estimation of frequency domain model uncertainty are…
Estimating Lead (Pb) Bioavailability In A Mouse Model
Children are exposed to Pb through ingestion of Pb-contaminated soil. Soil Pb bioavailability is estimated using animal models or with chemically defined in vitro assays that measure bioaccessibility. However, bioavailability estimates in a large animal model (e.g., swine) can be...
FUNCTIONAL-COEFFICIENT REGRESSION MODEL AND ITS ESTIMATION
Institute of Scientific and Technical Information of China (English)
[no author listed]
2001-01-01
In this paper, a class of functional-coefficient regression models is proposed and an estimation procedure based on locally weighted least squares is suggested. This class of models, with the proposed estimation method, is a powerful means for exploratory data analysis.
Estimating High-Dimensional Time Series Models
DEFF Research Database (Denmark)
Medeiros, Marcelo C.; Mendes, Eduardo F.
We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume both the number of covariates in the model and candidate variables can increase with the number of observations and the number of candidate variables is, possibly...
Estimates of current debris from flux models
Energy Technology Data Exchange (ETDEWEB)
Canavan, G.H.
1997-01-01
Flux models that balance accuracy and simplicity are used to predict the growth of space debris to the present. Known and projected launch rates, decay models, and numerical integrations are used to predict distributions that closely resemble the current catalog, particularly in the regions containing most of the debris.
Two-stage local M-estimation of additive models
Institute of Scientific and Technical Information of China (English)
JIANG JianCheng; LI JianTao
2008-01-01
This paper studies local M-estimation of the nonparametric components of additive models. A two-stage local M-estimation procedure is proposed for estimating the additive components and their derivatives. Under very mild conditions, the proposed estimators of each additive component and its derivative are jointly asymptotically normal and share the same asymptotic distributions as they would if the other components were known. The established asymptotic results also hold for two particular local M-estimations: the local least squares and least absolute deviation estimations. However, for general two-stage local M-estimation with continuous and nonlinear ψ-functions, the implementation is time-consuming. To reduce the computational burden, one-step approximations to the two-stage local M-estimators are developed. The one-step estimators are shown to achieve the same efficiency as the fully iterative two-stage local M-estimators, which makes two-stage local M-estimation more feasible in practice. The proposed estimators inherit the advantages and at the same time overcome the disadvantages of the local least-squares based smoothers. In addition, the practical implementation of the proposed estimation is considered in detail. Simulations demonstrate the merits of the two-stage local M-estimation, and a real example illustrates the performance of the methodology.
Estimation of the Generalized Linear Model and Application
Directory of Open Access Journals (Sweden)
Malika CHIKHI
2012-06-01
Full Text Available This article presents the generalized linear model, which encompasses modelling techniques such as linear regression, logistic regression, log-linear regression, and Poisson regression. We begin by presenting the exponential family of models, and then estimate the model parameters by the maximum likelihood method. We then test the model coefficients to assess their significance and confidence intervals, using the Wald test, which bears on the significance of the true parameter value based on the sample estimate.
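As an illustration of the maximum-likelihood machinery described here, the following is a minimal Python sketch (not from the article) of a logistic GLM fitted by iteratively reweighted least squares, with a Wald z-statistic for one coefficient. The simulated data and all numeric choices are hypothetical.

```python
import numpy as np

def fit_logistic_irls(X, y, n_iter=25):
    """Maximum likelihood for a logistic GLM via iteratively
    reweighted least squares (Fisher scoring)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))             # inverse logit link
        w = p * (1.0 - p)                          # GLM working weights
        z = eta + (y - p) / np.maximum(w, 1e-10)   # working response
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * z))
    return beta

def wald_z(X, beta, j):
    """Wald statistic beta_j / se_j, se from the inverse Fisher information."""
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    cov = np.linalg.inv(X.T @ ((p * (1 - p))[:, None] * X))
    return beta[j] / np.sqrt(cov[j, j])

# Hypothetical data: intercept -1, slope 2
rng = np.random.default_rng(0)
x1 = rng.normal(size=2000)
X = np.column_stack([np.ones(2000), x1])
y = (rng.random(2000) < 1.0 / (1.0 + np.exp(-(-1.0 + 2.0 * x1)))).astype(float)
beta_hat = fit_logistic_irls(X, y)
z1 = wald_z(X, beta_hat, 1)
```

The same scoring loop covers the other exponential-family regressions mentioned above by swapping the link and variance functions.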
Zhou, Xinyao; Bi, Shaojie; Yang, Yonghui; Tian, Fei; Ren, Dandan
2014-11-01
The three-temperature (3T) model is a simple model which estimates plant transpiration from only temperature data. In-situ field experimental results have shown that 3T is a reliable evapotranspiration (ET) estimation model. Despite encouraging results from recent efforts extending the 3T model to remote sensing applications, the literature shows limited comparisons of the 3T model with other remote sensing driven ET models. This research used ET obtained from eddy covariance to evaluate the 3T model and in turn compared the model-simulated ET with that of the more traditional SEBAL (Surface Energy Balance Algorithm for Land) model. A field experiment was conducted in the cotton fields of the Taklamakan desert oasis in Xinjiang, Northwest China. Radiation and surface temperature were obtained from hyperspectral and thermal infrared images for clear days in 2013. The images covered the time period of 0900-1800 h at four different phenological stages of cotton. Meteorological data were automatically recorded in a station located at the center of the cotton field. Results showed that the 3T model accurately captured daily and seasonal variations in ET. As low dry soil surface temperatures induced significant errors in the 3T model, it was unsuitable for estimating ET in the early morning and late afternoon periods. The model-simulated ET was relatively more accurate for the squaring, bolling and boll-opening stages than for the seedling stage of cotton, when ET was generally low. Wind speed was apparently not a limiting factor of ET in the 3T model. This was attributed to the fact that surface temperature, a vital input of the model, indirectly accounted for the effect of wind speed on ET. Although the 3T model slightly overestimated ET compared with SEBAL and eddy covariance, it was generally reliable for estimating daytime ET during 0900-1600 h.
These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.
Estimating parameters for generalized mass action models with connectivity information
Directory of Open Access Journals (Sweden)
Voit Eberhard O
2009-05-01
Full Text Available Abstract Background Determining the parameters of a mathematical model from quantitative measurements is the main bottleneck of modelling biological systems. Parameter values can be estimated from steady-state data or from dynamic data. The nature of suitable data for these two types of estimation is rather different. For instance, estimations of parameter values in pathway models, such as kinetic orders, rate constants, flux control coefficients or elasticities, from steady-state data are generally based on experiments that measure how a biochemical system responds to small perturbations around the steady state. In contrast, parameter estimation from dynamic data requires time series measurements for all dependent variables. Almost no literature has so far discussed the combined use of both steady-state and transient data for estimating parameter values of biochemical systems. Results In this study we introduce a constrained optimization method for estimating parameter values of biochemical pathway models using steady-state information and transient measurements. The constraints are derived from the flux connectivity relationships of the system at the steady state. Two case studies demonstrate the estimation results with and without flux connectivity constraints. The unconstrained optimal estimates from dynamic data may fit the experiments well, but they do not necessarily maintain the connectivity relationships. As a consequence, individual fluxes may be misrepresented, which may cause problems in later extrapolations. By contrast, the constrained estimation accounting for flux connectivity information reduces this misrepresentation and thereby yields improved model parameters. Conclusion The method combines transient metabolic profiles and steady-state information and leads to the formulation of an inverse parameter estimation task as a constrained optimization problem. Parameter estimation and model selection are simultaneously carried out
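The constrained-estimation idea can be sketched generically (this is illustrative Python, not the authors' code): fit the rate constants of a hypothetical two-step pathway A → B → C to noisy transient data on B, while imposing steady-state flux information as an equality constraint on the rates. The pathway, rates, and constraint value are all made up.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Hypothetical pathway A -> B -> C with rate constants k1, k2
k_true = (0.7, 0.3)
t_obs = np.linspace(0.0, 10.0, 50)

def simulate_B(k):
    """Integrate dA/dt = -k1*A, dB/dt = k1*A - k2*B from A(0)=1, B(0)=0."""
    k1, k2 = k
    sol = solve_ivp(lambda t, s: [-k1 * s[0], k1 * s[0] - k2 * s[1]],
                    (0.0, 10.0), [1.0, 0.0], t_eval=t_obs,
                    rtol=1e-10, atol=1e-12)
    return sol.y[1]

rng = np.random.default_rng(1)
B_obs = simulate_B(k_true) + rng.normal(0.0, 0.01, t_obs.size)

sse = lambda k: np.sum((B_obs - simulate_B(k)) ** 2)
# steady-state information enters as an equality constraint on the rates
cons = [{"type": "eq", "fun": lambda k: k[0] / k[1] - 7.0 / 3.0}]
fit = minimize(sse, x0=[0.5, 0.2], method="SLSQP",
               bounds=[(1e-3, 5.0), (1e-3, 5.0)], constraints=cons)
```

As in the paper's argument, the unconstrained fit could match the transient data while violating the flux relation; the constrained fit cannot.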
Recharge estimation for transient ground water modeling.
Jyrkama, Mikko I; Sykes, Jon F; Normani, Stefano D
2002-01-01
Reliable ground water models require both an accurate physical representation of the system and appropriate boundary conditions. While physical attributes are generally considered static, boundary conditions, such as ground water recharge rates, can be highly variable in both space and time. A practical methodology incorporating the hydrologic model HELP3 in conjunction with a geographic information system was developed to generate a physically based and highly detailed recharge boundary condition for ground water modeling. The approach uses daily precipitation and temperature records in addition to land use/land cover and soils data. The importance of the method in transient ground water modeling is demonstrated by applying it to a MODFLOW modeling study in New Jersey. In addition to improved model calibration, the results from the study clearly indicate the importance of using a physically based and highly detailed recharge boundary condition in ground water quality modeling, where the detailed knowledge of the evolution of the ground water flowpaths is imperative. The simulated water table is within 0.5 m of the observed values using the method, while the water levels can differ by as much as 2 m using uniform recharge conditions. The results also show that the combination of temperature and precipitation plays an important role in the amount and timing of recharge in cooler climates. A sensitivity analysis further reveals that increasing the leaf area index, the evaporative zone depth, or the curve number in the model will result in decreased recharge rates over time, with the curve number having the greatest impact.
Adaptive Unified Biased Estimators of Parameters in Linear Model
Institute of Scientific and Technical Information of China (English)
Hu Yang; Li-xing Zhu
2004-01-01
To tackle multicollinearity or ill-conditioned design matrices in linear models, adaptive biased estimators such as the time-honored Stein estimator, the ridge and the principal component estimators have been studied intensively. To study when a biased estimator uniformly outperforms the least squares estimator, some sufficient conditions are proposed in the literature. In this paper, we propose a unified framework to formulate a class of adaptive biased estimators. This class includes all existing biased estimators and some new ones. A sufficient condition for outperforming the least squares estimator is proposed. In terms of selecting parameters in the condition, we can obtain all double-type conditions in the literature.
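A brief sketch of one member of this class, the ridge estimator, on a deliberately ill-conditioned design (illustrative Python, not from the paper; all numbers are made up):

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimator (X'X + lam*I)^{-1} X'y, a classic biased
    alternative to least squares for ill-conditioned designs."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Nearly collinear columns: OLS is unstable, ridge shrinks the estimate
rng = np.random.default_rng(2)
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1 + 1e-3 * rng.normal(size=200)])
y = X @ np.array([1.0, 1.0]) + rng.normal(0.0, 0.1, 200)
beta_ols = ridge(X, y, 0.0)       # lam = 0 recovers least squares
beta_ridge = ridge(X, y, 1.0)
```

The well-determined direction (the sum of the coefficients) is nearly unchanged by the penalty, while the ill-determined difference direction is shrunk toward zero, which is exactly the bias-variance trade-off the paper's conditions quantify.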
Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model
Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami
2017-06-01
A regression model represents the relationship between independent variables and a dependent variable. In logistic regression the dependent variable is categorical and the model calculates odds for its categories; when the dependent variable has ordered levels, the model is ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine population values from a sample. The purpose of this research is to estimate the parameters of the GWOLR model using R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units are 144 villages in Semarang City. The results of the research give a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.
Directory of Open Access Journals (Sweden)
Wararit PANICHKITKOSOLKUL
2012-09-01
Full Text Available Guttman and Tiao [1], and Chang [2] showed that the effect of outliers may cause serious bias in estimating autocorrelations, partial correlations, and autoregressive moving average parameters (cited in Chang et al. [3]). This paper presents a modified weighted symmetric estimator for a Gaussian first-order autoregressive AR(1) model with additive outliers. We apply the recursive median adjustment based on an exponentially weighted moving average (EWMA) to the weighted symmetric estimator of Park and Fuller [4]. We consider the following estimators: the weighted symmetric estimator, the recursive mean adjusted weighted symmetric estimator proposed by Niwitpong [5], the recursive median adjusted weighted symmetric estimator proposed by Panichkitkosolkul [6], and the weighted symmetric estimator using adjusted recursive median based on EWMA. Using Monte Carlo simulations, we compare the mean square error (MSE) of the estimators. Simulation results show that the proposed estimator provides a lower MSE than the other estimators in almost all situations.
Hospital Case Cost Estimates Modelling - Algorithm Comparison
Andru, Peter
2008-01-01
Ontario (Canada) Health System stakeholders support the idea and necessity of an integrated source of data that would include both clinical (e.g. diagnosis, intervention, length of stay, case mix group) and financial (e.g. cost per weighted case, cost per diem) characteristics of the Ontario healthcare system activities at the patient-specific level. At present, the actual patient-level case costs in explicit form are not available in the financial databases for all hospitals. The goal of this research effort is to develop financial models that will assign each clinical case in the patient-specific data warehouse a dollar value, representing the cost incurred by the Ontario health care facility which treated the patient. Five mathematical models have been developed and verified using a real dataset. All models can be classified into two groups based on their underlying method: 1. Models based on using relative intensity weights of the cases, and 2. Models based on using cost per diem.
A regression model to estimate regional ground water recharge.
Lorenz, David L; Delin, Geoffrey N
2007-01-01
A regional regression model was developed to estimate the spatial distribution of ground water recharge in subhumid regions. The regional regression recharge (RRR) model was based on a regression of basin-wide estimates of recharge from surface water drainage basins, precipitation, growing degree days (GDD), and average basin specific yield (SY). Decadal average recharge, precipitation, and GDD were used in the RRR model. The RRR estimates were derived from analysis of stream base flow using a computer program that was based on the Rorabaugh method. As expected, there was a strong correlation between recharge and precipitation. The model was applied to statewide data in Minnesota. Where precipitation was least in the western and northwestern parts of the state (50 to 65 cm/year), recharge computed by the RRR model also was lowest (0 to 5 cm/year). A strong correlation also exists between recharge and SY. SY was least in areas where glacial lake clay occurs, primarily in the northwest part of the state; recharge estimates in these areas were in the 0- to 5-cm/year range. In sand-plain areas where SY is greatest, recharge estimates were in the 15- to 29-cm/year range on the basis of the RRR model. Recharge estimates that were based on the RRR model compared favorably with estimates made on the basis of other methods. The RRR model can be applied in other subhumid regions where region-wide data sets of precipitation, streamflow, GDD, and soils data are available.
Directory of Open Access Journals (Sweden)
Panagiotis Kladisios, Athina Stegou-Sagia
2017-01-01
Full Text Available Photovoltaic modules operate under a large range of conditions. This, combined with the fact that manufacturers provide electrical parameters at specific conditions (STC, Standard Test Conditions), renders the prediction of a PV module's power efficiency very difficult. The most common model for calculating the electric characteristics and, consequently, the generated power of a photovoltaic unit under real transient conditions is the five-parameter model. It is noteworthy that this model demands a relatively small amount of data that are normally available from the manufacturer. The purpose of this paper is to determine the actual benefit in the power efficiency of a photovoltaic unit that has its cell temperature reduced using a phase change material. All approaches of the five-parameter model involve simulating the solar cell, photovoltaic module or photovoltaic array with a one-diode equivalent electrical circuit. The operation of such a circuit is defined by a characteristic I-V equation which contains five parameters: the photocurrent IL, the reverse saturation current I0, the series resistance Rs, the parallel resistance Rp and, depending on the approach, either the diode's ideality factor m or the modified ideality factor α. For every pair of cell temperature T and solar radiation G, a new I-V characteristic is in effect and, therefore, the above parameters must be calculated anew using correlations between reference and non-reference values of the parameters.
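Since the characteristic equation is implicit in I, it must be solved numerically at each operating point. Below is a minimal Python sketch; the parameter values (IL, I0, Rs, Rp, α) are placeholders for illustration, not a real module's datasheet values.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical five-parameter set for a 60-cell module:
# IL photocurrent (A), I0 reverse saturation current (A),
# Rs series resistance (ohm), Rp parallel resistance (ohm),
# a modified ideality factor (V)
IL, I0, Rs, Rp, a = 5.0, 1e-9, 0.3, 300.0, 1.6

def current(V):
    """Solve the implicit single-diode characteristic
    I = IL - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rp for I."""
    f = lambda I: IL - I0 * np.expm1((V + I * Rs) / a) - (V + I * Rs) / Rp - I
    return brentq(f, -2 * IL, 2 * IL)   # f is monotone in I, one root

I_sc = current(0.0)   # short-circuit current, slightly below IL
```

Repeating the solve over a voltage grid traces the full I-V curve; for each new (T, G) pair, the five parameters are first translated from their reference values and the curve is re-solved.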
A modified EM algorithm for estimation in generalized mixed models.
Steele, B M
1996-12-01
Application of the EM algorithm for estimation in the generalized mixed model has been largely unsuccessful because the E-step cannot be determined in most instances. The E-step computes the conditional expectation of the complete data log-likelihood and when the random effect distribution is normal, this expectation remains an intractable integral. The problem can be approached by numerical or analytic approximations; however, the computational burden imposed by numerical integration methods and the absence of an accurate analytic approximation have limited the use of the EM algorithm. In this paper, Laplace's method is adapted for analytic approximation within the E-step. The proposed algorithm is computationally straightforward and retains much of the conceptual simplicity of the conventional EM algorithm, although the usual convergence properties are not guaranteed. The proposed algorithm accommodates multiple random factors and random effect distributions besides the normal, e.g., the log-gamma distribution. Parameter estimates obtained for several data sets and through simulation show that this modified EM algorithm compares favorably with other generalized mixed model methods.
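Laplace's method itself is easy to sketch: replace the integrand exp(h(u)) by the Gaussian implied by a second-order expansion of h at its mode, giving exp(h(û))·sqrt(2π/(-h''(û))). The integrand below is a hypothetical stand-in for an E-step integral (a normal random-effect kernel times a logit likelihood contribution), not the paper's model.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

def laplace_integral(h, u_lo=-10.0, u_hi=10.0):
    """Approximate integral of exp(h(u)) du by Laplace's method:
    exp(h(u_hat)) * sqrt(2*pi / -h''(u_hat)) at the mode u_hat."""
    u_hat = minimize_scalar(lambda u: -h(u), bounds=(u_lo, u_hi),
                            method="bounded").x
    eps = 1e-4
    d2h = (h(u_hat + eps) - 2.0 * h(u_hat) + h(u_hat - eps)) / eps ** 2
    return np.exp(h(u_hat)) * np.sqrt(2.0 * np.pi / -d2h)

# Hypothetical integrand: N(0,1) random-effect kernel times three
# Bernoulli-logit successes, 3 * log(sigmoid(u))
h = lambda u: -0.5 * u ** 2 - 3.0 * np.log1p(np.exp(-u))
approx = laplace_integral(h)
exact = quad(lambda u: np.exp(h(u)), -10, 10)[0]
```

For log-concave integrands like this one, the approximation is typically within a few percent, which is what makes it attractive inside an iterative E-step.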
Ballistic model to estimate microsprinkler droplet distribution
Directory of Open Access Journals (Sweden)
Conceição Marco Antônio Fonseca
2003-01-01
Full Text Available Experimental determination of microsprinkler droplets is difficult and time-consuming. This determination, however, could be achieved using ballistic models. The present study aimed to compare simulated and measured values of microsprinkler droplet diameters. Experimental measurements were made using the flour method, and simulations using a ballistic model adopted by the SIRIAS computational software. Drop diameters quantified in the experiment varied between 0.30 mm and 1.30 mm, while the simulated between 0.28 mm and 1.06 mm. The greatest differences between simulated and measured values were registered at the highest radial distance from the emitter. The model presented a performance classified as excellent for simulating microsprinkler drop distribution.
Application of Bayesian Hierarchical Prior Modeling to Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Shutin, Dmitriy
2012-01-01
Existing methods for sparse channel estimation typically provide an estimate computed as the solution maximizing an objective function defined as the sum of the log-likelihood function and a penalization term proportional to the l1-norm of the parameter of interest. However, other penalization… The estimators result as an application of the variational message-passing algorithm on the factor graph representing the signal model extended with the hierarchical prior models. Numerical results demonstrate the superior performance of our channel estimators as compared to traditional and state-of-the-art sparse methods.
Estimation of the Heteroskedastic Canonical Contagion Model with Instrumental Variables
2016-01-01
Knowledge of contagion among economies is a relevant issue in economics. The canonical model of contagion is an alternative in this case. Given the existence of endogenous variables in the model, instrumental variables can be used to decrease the bias of the OLS estimator. In the presence of heteroskedastic disturbances this paper proposes the use of conditional volatilities as instruments. Simulation is used to show that the homoscedastic and heteroskedastic estimators which use them as instruments have small bias. These estimators are preferable in comparison with the OLS estimator and their asymptotic distribution can be used to construct confidence intervals. PMID:28030628
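The core instrumental-variable idea can be shown with a toy simulation (illustrative Python; the paper's conditional-volatility instruments are not reproduced here, and all numbers are made up). An unobserved confounder biases OLS, while a valid instrument recovers the structural coefficient.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
z = rng.normal(size=n)                    # instrument
u = rng.normal(size=n)                    # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)      # endogenous regressor
y = 2.0 * x + u + rng.normal(size=n)      # outcome; OLS is biased by u

def iv_beta(x, y, z):
    """Simple instrumental-variable estimator cov(z, y) / cov(z, x)."""
    return np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

beta_ols = np.cov(x, y)[0, 1] / np.var(x)   # biased upward here
beta_iv = iv_beta(x, y, z)                   # close to the true 2.0
```

The paper's proposal amounts to choosing z as a conditional volatility series, which remains valid under the heteroskedastic disturbances that break other instrument choices.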
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Directory of Open Access Journals (Sweden)
Pedro Donoso
2011-08-01
Full Text Available A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
Estimation of shape model parameters for 3D surfaces
DEFF Research Database (Denmark)
Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen;
2008-01-01
Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D s...
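A minimal Gauss-Newton loop, applied to a 1-D toy fit standing in for the 3-D shape-model parameters (illustrative code, not the authors'; the scale-and-offset model is a made-up example):

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, n_iter=20):
    """Gauss-Newton: repeatedly solve the linearized least-squares
    step J'J dp = -J'r and update the parameters."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(p), jacobian(p)
        p = p + np.linalg.solve(J.T @ J, -J.T @ r)
    return p

# Toy "shape" fit: estimate scale s and offset t mapping a 1-D point
# model onto target points (stand-in for 3-D shape-model parameters)
model = np.array([0.0, 1.0, 2.0, 3.0])
target = 1.5 * model + 0.7
residual = lambda p: p[0] * model + p[1] - target
jacobian = lambda p: np.column_stack([model, np.ones_like(model)])
p_hat = gauss_newton(residual, jacobian, [1.0, 0.0])
```

In the shape-model setting the residual is the distance between model and data surface points, and the Jacobian is taken with respect to the shape-mode coefficients; the update step is identical.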
An Estimated DSGE Model of the Indian Economy
2010-01-01
We develop a closed-economy DSGE model of the Indian economy and estimate it by Bayesian Maximum Likelihood methods using Dynare. We build up in stages to a model with a number of features important for emerging economies in general and the Indian economy in particular: a large proportion of credit-constrained consumers, a financial accelerator facing domestic firms seeking to finance their investment, and an informal sector. The simulation properties of the estimated model are examined under...
Modeling reactive transport with particle tracking and kernel estimators
Rahbaralam, Maryam; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier
2015-04-01
Groundwater reactive transport models are useful to assess and quantify the fate and transport of contaminants in subsurface media and are an essential tool for the analysis of coupled physical, chemical, and biological processes in Earth Systems. The Particle Tracking Method (PTM) provides a computationally efficient and adaptable approach to solve the solute transport partial differential equation. On a molecular level, chemical reactions are the result of collisions, combinations, and/or decay of different species. For a well-mixed system, the chemical reactions are controlled by the classical thermodynamic rate coefficient. Each of these actions occurs with some probability that is a function of solute concentrations. PTM is based on considering that each particle actually represents a group of molecules. To properly simulate this system, an infinite number of particles is required, which is computationally unfeasible. On the other hand, a finite number of particles leads to a poorly-mixed system which is limited by diffusion. Recent works have used this effect to actually model incomplete mixing in naturally occurring porous media. In this work, we demonstrate that this effect in most cases should be attributed to a deficient estimation of the concentrations and not to the occurrence of true incomplete mixing processes in porous media. To illustrate this, we show that a Kernel Density Estimation (KDE) of the concentrations can approach the well-mixed solution with a limited number of particles. KDEs provide weighting functions of each particle mass that expand its region of influence, hence providing a wider region for chemical reactions with time. Simulation results show that KDEs are powerful tools to improve state-of-the-art simulations of chemical reactions and indicate that incomplete mixing in diluted systems should be modeled based on alternative conceptual models and not on a limited number of particles.
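The concentration-estimation step can be sketched as follows (a 1-D toy plume in Python, not the paper's reactive-transport setup): a kernel density estimate recovers a smooth concentration field from a modest number of particles, where raw box-counting would be noisy.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
# 500 equal-mass particles sampled from a Gaussian plume N(10, 2^2)
particles = rng.normal(loc=10.0, scale=2.0, size=500)

grid = np.linspace(0.0, 20.0, 201)
kde = gaussian_kde(particles)          # bandwidth by Scott's rule
c_kde = kde(grid)                      # smooth concentration estimate
# exact plume density for comparison
c_true = np.exp(-(grid - 10.0) ** 2 / 8.0) / np.sqrt(8.0 * np.pi)
err = np.mean((c_kde - c_true) ** 2)
```

Because each particle's mass is spread over a kernel-width region of influence, reaction probabilities computed from c_kde approach the well-mixed values that naive particle counts only reach with far more particles.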
Energy Technology Data Exchange (ETDEWEB)
Doi, M. [National Inst. of Radiological Sciences, Chiba (Japan); Lagarde, F. [Karolinska Inst., Stockholm (Sweden). Inst. of Environmental Medicine; Falk, R.; Swedjemark, G.A. [Swedish Radiation Protection Inst., Stockholm (Sweden)
1996-12-01
Effective dose per unit radon progeny exposure to Swedish population in 1992 is estimated by the risk projection model based on the Swedish epidemiological study of radon and lung cancer. The resulting values range from 1.29 - 3.00 mSv/WLM and 2.58 - 5.99 mSv/WLM, respectively. Assuming a radon concentration of 100 Bq/m{sup 3}, an equilibrium factor of 0.4 and an occupancy factor of 0.6 in Swedish houses, the annual effective dose for the Swedish population is estimated to be 0.43 - 1.98 mSv/year, which should be compared to the value of 1.9 mSv/year, according to the UNSCEAR 1993 report. 27 refs, tabs, figs.
Explicit estimating equations for semiparametric generalized linear latent variable models
Ma, Yanyuan
2010-07-05
We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.
Fundamental Frequency and Model Order Estimation Using Spatial Filtering
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
In signal processing applications of harmonic-structured signals, estimates of the fundamental frequency and number of harmonics are often necessary. In real scenarios, a desired signal is contaminated by different levels of noise and interferers, which complicate the estimation of the signal parameters. In this paper, we present an estimation procedure for harmonic-structured signals in situations with strong interference using spatial filtering, or beamforming. We jointly estimate the fundamental frequency and the constrained model order through the output of the beamformers. Besides that, we extend this procedure to account for inharmonicity using unconstrained model order estimation. The simulations show that beamforming improves the performance of the joint estimates of fundamental frequency and the number of harmonics at low signal to interference (SIR) levels, and an experiment…
Hayashi, Yoshihiro; Otoguro, Saori; Miura, Takahiro; Onuki, Yoshinori; Obata, Yasuko; Takayama, Kozo
2014-01-01
A multivariate statistical technique was applied to clarify the causal correlation between variables in the manufacturing process and the residual stress distribution of tablets. Theophylline tablets were prepared according to a Box-Behnken design using the wet granulation method. Water amounts (X1), kneading time (X2), lubricant-mixing time (X3), and compression force (X4) were selected as design variables. The Drucker-Prager cap (DPC) model was selected as the method for modeling the mechanical behavior of pharmaceutical powders. Simulation parameters, such as Young's modulus, Poisson rate, internal friction angle, plastic deformation parameters, and initial density of the powder, were measured. Multiple regression analysis demonstrated that the simulation parameters were significantly affected by process variables. The constructed DPC models were fed into the analysis using the finite element method (FEM), and the mechanical behavior of pharmaceutical powders during the tableting process was analyzed using the FEM. The results of this analysis revealed that the residual stress distribution of tablets increased with increasing X4. Moreover, an interaction between X2 and X3 also had an effect on shear and the x-axial residual stress of tablets. Bayesian network analysis revealed causal relationships between the process variables, simulation parameters, residual stress distribution, and pharmaceutical responses of tablets. These results demonstrated the potential of the FEM as a tool to help improve our understanding of the residual stress of tablets and to optimize process variables, which not only affect tablet characteristics, but also are risks of causing tableting problems.
INTERACTING MULTIPLE MODEL ALGORITHM BASED ON JOINT LIKELIHOOD ESTIMATION
Institute of Scientific and Technical Information of China (English)
Sun Jie; Jiang Chaoshu; Chen Zhuming; Zhang Wei
2011-01-01
A novel approach is proposed for the estimation of likelihood in the Interacting Multiple-Model (IMM) filter. In this approach, the actual innovation, based on a mismatched model, can be formulated as the sum of the theoretical innovation based on a matched model and the distance between the matched and mismatched models, whose probability distributions are known. The joint likelihood of the innovation sequence can be estimated by convolution of the two known probability density functions. The likelihood of the tracking models can then be calculated by the conditional probability formula. Compared with the conventional likelihood estimation method, the proposed method improves the estimation accuracy of the likelihood and the robustness of the IMM, especially when a maneuver occurs.
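The convolution step described in this abstract can be sketched numerically. This is only an illustrative sketch: the two Gaussian densities below are assumed stand-ins for the theoretical-innovation and model-distance distributions, not the filter densities from the paper.

```python
import numpy as np

# Sketch: the likelihood of the actual innovation is modelled as the density
# of the sum of two independent terms, obtained by numerically convolving
# their (assumed Gaussian) densities on a common grid.

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

grid = np.linspace(-20.0, 20.0, 4001)
dx = grid[1] - grid[0]

p_innov = gaussian_pdf(grid, 0.0, 1.0)   # theoretical innovation (matched model)
p_dist = gaussian_pdf(grid, 2.0, 0.5)    # model-distance term

# density of the sum via discrete convolution, rescaled by the grid step
p_sum = np.convolve(p_innov, p_dist, mode="same") * dx

def likelihood(z):
    """Evaluate the convolved density at an observed innovation z."""
    return float(np.interp(z, grid, p_sum))
```

For Gaussians the result can be checked in closed form: the sum is N(2, 1.25), so the numerical density should peak near 2.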
System Level Modelling and Performance Estimation of Embedded Systems
DEFF Research Database (Denmark)
Tranberg-Hansen, Anders Sejer
To develop an efficient system level design methodology, a modelling framework for performance estimation and design space exploration at the system level is required. This thesis presents a novel component based modelling framework for system level modelling and performance estimation of embedded systems. The framework is simulation based and allows performance estimation to be carried out throughout all design phases, ranging from early functional to cycle accurate and bit true descriptions of the system, modelling both hardware and software components in a unified way. Design space exploration and performance estimation are performed by having the framework produce detailed quantitative information about the system model under investigation. The project is part of the national Danish research project, Danish Network of Embedded Systems (DaNES), which is funded by the Danish National Advanced Technology Foundation.
Estimating Hydraulic Parameters When Poroelastic Effects Are Significant
Berg, S.J.; Hsieh, P.A.; Illman, W.A.
2011-01-01
For almost 80 years, deformation-induced head changes caused by poroelastic effects have been observed during pumping tests in multilayered aquifer-aquitard systems. As water in the aquifer is released from compressive storage during pumping, the aquifer is deformed both in the horizontal and vertical directions. This deformation in the pumped aquifer causes deformation in the adjacent layers, resulting in changes in pore pressure that may produce drawdown curves that differ significantly from those predicted by traditional groundwater theory. Although these deformation-induced head changes have been analyzed in several studies by poroelasticity theory, there are at present no practical guidelines for the interpretation of pumping test data influenced by these effects. To investigate the impact that poroelastic effects during pumping tests have on the estimation of hydraulic parameters, we generate synthetic data for three different aquifer-aquitard settings using a poroelasticity model, and then analyze the synthetic data using type curves and parameter estimation techniques, both of which are based on traditional groundwater theory and do not account for poroelastic effects. Results show that even when poroelastic effects result in significant deformation-induced head changes, it is possible to obtain reasonable estimates of hydraulic parameters using methods based on traditional groundwater theory, as long as pumping is sufficiently long so that deformation-induced effects have largely dissipated. © 2011 The Author(s). Journal compilation © 2011 National Ground Water Association.
PRELIM: Predictive Relevance Estimation from Linked Models
2014-10-14
Battery Calendar Life Estimator Manual Modeling and Simulation
Energy Technology Data Exchange (ETDEWEB)
Jon P. Christophersen; Ira Bloom; Ed Thomas; Vince Battaglia
2012-10-01
The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
Estimation of Aging Effects on LOHS for CANDU-6
Energy Technology Data Exchange (ETDEWEB)
Yoon, Yong Ki; Moon, Bok Ja; Kim, Seoung Rae [Nuclear Engineering Service and Solution Co. Ltd., Daejeon (Korea, Republic of)
2014-05-15
To evaluate Wolsong Unit 1's capacity to respond to a large-scale natural disaster exceeding the design basis, a loss of heat sink (LOHS) accident accompanied by loss of all electric power is simulated as a beyond design basis accident. The analysis considers the aging effects of the plant on the consequences of the LOHS accident. Various components of the primary heat transport system (PHTS) age over time, and some of the important aging effects in a CANDU reactor are pressure tube (PT) diametral creep, steam generator (SG) U-tube fouling, increased feeder roughness, and feeder orifice degradation. These effects result in higher inlet header temperatures, reduced flows in some fuel channels, and higher void fraction at fuel channel outlets. Fresh and aged models are established for the analysis, where the fresh model is the circuit model simulating the conditions at retubing and the aged model reflects the aged condition at 11 EFPY after retubing. The CATHENA computer code [1] is used for the analysis of the system behavior under LOHS conditions. The LOHS accident is analyzed for the fresh and aged models using the CATHENA thermal hydraulic computer code. Decay heat removal is one of the most important factors for mitigation of this accident. The major aging effect on decay heat removal is the reduction of heat transfer efficiency of the steam generator. Thus, the channel failure time cannot be conservatively estimated if the aged model is applied for the analysis of this accident.
Model calibration criteria for estimating ecological flow characteristics
Vis, Marc; Knight, Rodney; Poole, Sandra; Wolfe, William; Seibert, Jan; Breuer, Lutz; Kraft, Philipp
2016-01-01
Quantification of streamflow characteristics in ungauged catchments remains a challenge. Hydrological modeling is often used to derive flow time series and to calculate streamflow characteristics for subsequent applications that may differ from those envisioned by the modelers. While the estimation of model parameters for ungauged catchments is a challenging research task in itself, it is important to evaluate whether simulated time series preserve critical aspects of the streamflow hydrograph. To address this question, seven calibration objective functions were evaluated for their ability to preserve ecologically relevant streamflow characteristics of the average annual hydrograph using a runoff model, HBV-light, at 27 catchments in the southeastern United States. Calibration trials were repeated 100 times to reduce parameter uncertainty effects on the results, and 12 ecological flow characteristics were computed for comparison. Our results showed that the most suitable calibration strategy varied according to streamflow characteristic. Combined objective functions generally gave the best results, though a clear underprediction bias was observed. The occurrence of low prediction errors for certain combinations of objective function and flow characteristic suggests that (1) incorporating multiple ecological flow characteristics into a single objective function would increase model accuracy, potentially benefitting decision-making processes; and (2) there may be a need to have different objective functions available to address specific applications of the predicted time series.
ASYMPTOTIC EFFICIENT ESTIMATION IN SEMIPARAMETRIC NONLINEAR REGRESSION MODELS
Institute of Scientific and Technical Information of China (English)
ZhuZhongyi; WeiBocheng
1999-01-01
In this paper, the estimation method based on the "generalized profile likelihood" for conditionally parametric models given by Severini and Wong (1992) is extended to fixed design semiparametric nonlinear regression models. For these semiparametric nonlinear regression models, the resulting estimator of the parametric component of the model is shown to be asymptotically efficient, and the strong convergence rate of the nonparametric component is investigated. Many results (for example, Chen (1988), Gao & Zhao (1993), Rice (1986), et al.) are extended to fixed design semiparametric nonlinear regression models.
Institute of Scientific and Technical Information of China (English)
吴密霞; 赵延
2014-01-01
Mixed effects models are a very important class of statistical models, widely applied in many fields. This paper compares two estimators of the variance components under such models: the analysis of variance (ANOVA) estimator and the spectral decomposition (SD) estimator. Using the spectral decomposition of the covariance matrix given by Wu and Wang [A new method of spectral decomposition of covariance matrix in mixed effects models and its applications, Sci. China, Ser. A, 2005, 48: 1451-1464], two sufficient conditions under which the ANOVA and SD estimators coincide are given, together with the corresponding statistical properties, and the results are applied to the round component data model and the mixed analysis of variance model.
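As a concrete special case of the ANOVA estimator of variance components, the balanced one-way random-effects model has closed-form moment estimators. This is a textbook sketch with simulated data, not the article's general spectral-decomposition setting.

```python
import numpy as np

# ANOVA estimator for the balanced one-way random-effects model
# y_ij = mu + a_i + e_ij, with a groups of n observations each.

def anova_variance_components(y):
    """y: (a, n) array. Returns (sigma2_a, sigma2_e) moment estimates."""
    a, n = y.shape
    group_means = y.mean(axis=1)
    msa = n * ((group_means - y.mean()) ** 2).sum() / (a - 1)      # between groups
    mse = ((y - group_means[:, None]) ** 2).sum() / (a * (n - 1))  # within groups
    sigma2_e = mse
    sigma2_a = max((msa - mse) / n, 0.0)   # truncate negative estimates at zero
    return sigma2_a, sigma2_e

rng = np.random.default_rng(3)
a_eff = rng.normal(0.0, 2.0, size=(50, 1))                 # true sigma_a^2 = 4
y = 10.0 + a_eff + rng.normal(0.0, 1.0, size=(50, 20))     # true sigma_e^2 = 1
s2a, s2e = anova_variance_components(y)
```

The well-known drawback motivating alternatives such as the SD estimator is visible in the `max(..., 0.0)`: the raw ANOVA estimate of the between-group component can be negative.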
Linear Factor Models and the Estimation of Expected Returns
Sarisoy, Cisil; de Goeij, Peter; Werker, Bas
2015-01-01
Estimating expected returns on individual assets or portfolios is one of the most fundamental problems of finance research. The standard approach, using historical averages, produces noisy estimates. Linear factor models of asset pricing imply a linear relationship between expected returns and exposures…
Parameter Estimation for a Computable General Equilibrium Model
DEFF Research Database (Denmark)
Arndt, Channing; Robinson, Sherman; Tarp, Finn
2002-01-01
We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...
Person Appearance Modeling and Orientation Estimation using Spherical Harmonics
Liem, M.C.; Gavrila, D.M.
2013-01-01
We present a novel approach for the joint estimation of a person's overall body orientation, 3D shape and texture, from overlapping cameras. Overall body orientation (i.e. rotation around the torso major axis) is estimated by minimizing the difference between a learned texture model in a canonical orientation…
Simulation model accurately estimates total dietary iodine intake
Verkaik-Kloosterman, J.; Veer, van 't P.; Ocke, M.C.
2009-01-01
One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and the iodization of industrially processed foods. To be able to take these uncertainties into account in estimating iodine intake, a simulation model combining deterministic and probabilistic…
A Framework for Non-Gaussian Signal Modeling and Estimation
1999-06-01
A least squares estimation method for the linear learning model
B. Wierenga (Berend)
1978-01-01
The author presents a new method for estimating the parameters of the linear learning model. The procedure, essentially a least squares method, is easy to carry out and avoids certain difficulties of earlier estimation procedures. Applications to three different data sets are reported, a…
Trimmed Likelihood-based Estimation in Binary Regression Models
Cizek, P.
2005-01-01
The binary-choice regression models such as probit and logit are typically estimated by the maximum likelihood method. To improve its robustness, various M-estimation based procedures were proposed, which however require bias corrections to achieve consistency, and their resistance to outliers is rela…
Change-point estimation for censored regression model
Institute of Scientific and Technical Information of China (English)
Zhan-feng WANG; Yao-hua WU; Lin-cheng ZHAO
2007-01-01
In this paper, we consider the change-point estimation in the censored regression model assuming that there exists one change point. A nonparametric estimate of the change-point is proposed and is shown to be strongly consistent. Furthermore, its convergence rate is also obtained.
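The flavour of change-point estimation can be illustrated on an uncensored mean-shift model. This is a simplified sketch only: the paper's censored-regression setting and its consistency results are not reproduced here.

```python
import numpy as np

# Least-squares change-point estimation for a single shift in mean:
# choose the split point minimizing the total within-segment sum of squares.

def estimate_change_point(y):
    """Return the index k (1 <= k < len(y)) of the estimated change point."""
    n = len(y)
    best_k, best_sse = None, np.inf
    for k in range(1, n):                       # split into y[:k] and y[k:]
        left, right = y[:k], y[k:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 0.5, 100),   # regime 1: mean 0
                    rng.normal(3.0, 0.5, 100)])  # regime 2: mean 3, shift at 100
k_hat = estimate_change_point(y)
```

With a shift of six noise standard deviations, the estimate lands essentially on the true change point at index 100.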
Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model
DEFF Research Database (Denmark)
Åberg, Andreas; Widd, Anders; Abildskov, Jens;
2016-01-01
A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the parameter estimation of the kinetic parameters, which can be time consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests, or p...
Percolation models of turbulent transport and scaling estimates
Energy Technology Data Exchange (ETDEWEB)
Bakunin, O.G. [FOM Instituut voor Plasmafysica ' Rijnhuizen' , Associate Euroatom-FOM, 3430 BE Nieuwegein (Netherlands) and Kurchatov Institute, Nuclear Fusion Institute, Kurchatov sq. 1, 123182 Moscow (Russian Federation)]. E-mail: oleg_bakunin@yahoo.com
2005-03-01
The variety of forms of turbulent transport requires not only special description methods, but also an analysis of general mechanisms. One such mechanism is percolation transport. The percolation approach is based on fractality and scaling ideas. It is possible to explain the anomalous transport in two-dimensional random flow in terms of the percolation threshold. The percolation approach looks very attractive because it gives a simple and, at the same time, universal model of the behavior related to strong correlation effects. In the present paper we concentrate our attention on scaling arguments, which play a very important role in the estimation of transport effects. We discuss a unified approach to obtaining the renormalization condition on the small parameter, which is responsible for the analytical description of the system near the percolation threshold. Both monoscale and multiscale models are treated. We consider the steady case, time-dependent perturbations, the influence of drift effects, percolation transport in a stochastic magnetic field, and compressibility effects.
Parameter estimation of hydrologic models using data assimilation
Kaheil, Y. H.
2005-12-01
The uncertainties associated with the modeling of hydrologic systems sometimes demand that data be incorporated in an on-line fashion in order to understand the behavior of the system. This paper presents a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode, via a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation for two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined; the SVM model has three. Bayesian inference is used to estimate the best parameter set in an iterative fashion. This is done by narrowing the sampling space by imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimum training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with the previously used Bayesian recursive estimation (BaRE) algorithm.
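The bound-narrowing idea can be sketched on a toy calibration problem. This is a hypothetical illustration of iteratively shrinking parameter bounds around well-scoring samples, not the authors' LoBaRE algorithm or its Bayesian weighting.

```python
import numpy as np

# Sketch: repeatedly sample parameter sets uniformly inside the current
# bounds, score them, and shrink the bounds around the best-scoring samples.

def narrow_bounds(loss, lo, hi, n_samples=200, keep=0.2, n_iter=10, seed=1):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    for _ in range(n_iter):
        theta = rng.uniform(lo, hi, size=(n_samples, lo.size))
        scores = np.array([loss(t) for t in theta])
        best = theta[np.argsort(scores)[: int(keep * n_samples)]]
        lo, hi = best.min(axis=0), best.max(axis=0)   # shrink to best region
    return lo, hi

# toy calibration problem with optimum at theta* = (2, -1)
loss = lambda t: (t[0] - 2.0) ** 2 + (t[1] + 1.0) ** 2
lo, hi = narrow_bounds(loss, [-10.0, -10.0], [10.0, 10.0])
```

After a few iterations the bounds collapse to a small box around the optimum; a real calibration would score parameter sets against observed data rather than a known loss surface.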
Stochastic magnetic measurement model for relative position and orientation estimation
Schepers, H.M.; Veltink, P.H.
2010-01-01
This study presents a stochastic magnetic measurement model that can be used to estimate relative position and orientation. The model predicts the magnetic field generated by a single source coil at the location of the sensor. The model was used in a fusion filter that predicts the change of positio
Parameter Estimates in Differential Equation Models for Population Growth
Winkel, Brian J.
2011-01-01
We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
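The article offers Mathematica code for a gradient search; as a rough Python analogue (a sketch under assumed data, not the authors' code), the logistic model P(t) = K / (1 + A e^{-rt}) can be fitted by a coarse search over (K, r), with the initial population P0 taken as known.

```python
import numpy as np

# Fit the logistic growth model to (synthetic) data by grid search over
# the carrying capacity K and growth rate r, minimizing squared error.

def logistic(t, K, r, P0):
    A = (K - P0) / P0
    return K / (1.0 + A * np.exp(-r * t))

t = np.arange(0.0, 10.0, 0.5)
true_K, true_r, P0 = 100.0, 0.8, 5.0
P = logistic(t, true_K, true_r, P0)     # synthetic noise-free observations

best = None
for K in np.linspace(50.0, 150.0, 101):
    for r in np.linspace(0.1, 2.0, 96):
        sse = ((logistic(t, K, r, P0) - P) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, K, r)
sse, K_hat, r_hat = best
```

A gradient search, as in the article, would refine such grid estimates; on real (noisy) census data the recovered parameters would of course carry estimation error.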
Estimation of Potential Population Level Effects of Contaminants on Wildlife
Energy Technology Data Exchange (ETDEWEB)
Loar, J.M.
2001-06-11
…and development of new scaling models; (2) development of dose-response models for toxicity data presented in the literature; and (3) development of matrix-based population models that were coupled with dose-response models to provide realistic estimation of population-level effects for individual responses.
Models for estimation of land remote sensing satellites operational efficiency
Kurenkov, Vladimir I.; Kucherov, Alexander S.
2017-01-01
The paper deals with the problem of estimating the operational efficiency of land remote sensing satellites. Appropriate mathematical models have been developed. Some results obtained with the help of software developed in the Delphi programming environment are presented.
Parameter Estimation for the Thurstone Case III Model.
Mackay, David B.; Chaiy, Seoil
1982-01-01
The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)
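The Case V model underlying such recovery studies has a simple classical estimator: with unit discriminal dispersions and zero correlations, scale values are row means of the unit-normal z-scores of the pairwise preference proportions. The sketch below recovers assumed scale values from exact proportions; it illustrates the model, not the three estimation criteria compared in the article.

```python
import numpy as np
from statistics import NormalDist

# Thurstone Case V: P(i preferred to j) = Phi(mu_i - mu_j).
# Classical estimate: mu_i is the mean over j of z_ij = Phi^{-1}(p_ij),
# identified only up to an additive constant.

nd = NormalDist()
true_mu = np.array([0.0, 0.5, 1.2])          # hypothetical scale values
n = len(true_mu)

P = np.array([[nd.cdf(true_mu[i] - true_mu[j]) for j in range(n)]
              for i in range(n)])            # exact preference proportions

Z = np.array([[nd.inv_cdf(P[i, j]) if i != j else 0.0 for j in range(n)]
              for i in range(n)])            # z_ij = mu_i - mu_j
mu_hat = Z.mean(axis=1)                      # mu_i minus the grand mean
mu_hat -= mu_hat[0]                          # anchor the first item at zero
```

With exact proportions the recovery is exact; Monte Carlo studies like the article's replace P with sampled proportions and compare recovery quality across criteria.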
Allometric models for estimating biomass and carbon in Alnus acuminata
National Research Council Canada - National Science Library
William Fonseca; Laura Ruíz; Marylin Rojas; Federico Allice
2013-01-01
... (leaves, branches, stem and root) and total tree biomass in Alnus acuminata (Kunth) in Costa Rica. Additionally, models were developed to estimate biomass and carbon in trees per hectare and for total plant biomass per hectare...
Estimation of the Human Absorption Cross Section Via Reverberation Models
DEFF Research Database (Denmark)
Steinböck, Gerhard; Pedersen, Troels; Fleury, Bernard Henri;
2016-01-01
Since the presence of persons affects the reverberation time observed for in-room channels, the absorption cross section of a person can be estimated from measurements via Sabine's and Eyring's models for the reverberation time. We propose an estimator relying on the more accurate model by Eyring and compare the obtained results to those of Sabine's model. We find that the absorption by persons is large enough to be measured with a wideband channel sounder and that estimates of the human absorption cross section differ for the two models. The obtained values are comparable to values reported in the literature. We also suggest the use of controlled environments with low average absorption coefficients to obtain more reliable estimates. The obtained values can be used to predict the change of reverberation time with persons in the propagation environment. This allows prediction of channel characteristics…
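The idea behind such estimates can be sketched with Sabine's classical relation T60 = 0.161 V / A (V in m³, A the equivalent absorption area in m²): the absorption added by N persons is read off from the drop in reverberation time. The room volume, reverberation times, and person count below are assumed numbers, and Eyring's more accurate model is not reproduced.

```python
# Estimate the average absorption cross section per person from the change
# in reverberation time between an empty and an occupied room (Sabine model).

def total_absorption(volume, t60):
    """Equivalent absorption area in m^2 from Sabine's formula."""
    return 0.161 * volume / t60

V = 90.0                                   # room volume in m^3 (assumed)
t60_empty, t60_occupied = 0.55, 0.45       # measured T60 in seconds (assumed)
n_persons = 4

sigma = (total_absorption(V, t60_occupied)
         - total_absorption(V, t60_empty)) / n_persons   # m^2 per person
```

The Eyring-based estimator in the paper replaces Sabine's linear relation with T60 = 0.161 V / (−S ln(1 − ᾱ)), which matters in rooms with higher average absorption.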
NEW DOCTORAL DEGREE Parameter estimation problem in the Weibull model
Marković, Darija
2009-01-01
In this dissertation we consider the problem of the existence of best parameters in the Weibull model, one of the most widely used statistical models in reliability theory and life data theory. Particular attention is given to a 3-parameter Weibull model. We have listed some of the many applications of this model. We have described some of the classical methods for estimating parameters of the Weibull model, two graphical methods (Weibull probability plot and hazard plot), and two analyt...
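For the 2-parameter Weibull model, the maximum likelihood estimate of the shape reduces to a one-dimensional root-finding problem; the sketch below solves it by bisection on simulated data. This is a generic textbook illustration, not the dissertation's analysis of the harder 3-parameter case, where the likelihood can be unbounded.

```python
import numpy as np

# MLE for the 2-parameter Weibull(k, lambda): the shape k solves
#   sum(x^k ln x)/sum(x^k) - mean(ln x) - 1/k = 0,
# after which the scale is lambda = (mean(x^k))^(1/k).

def weibull_mle(x, lo=0.01, hi=20.0):
    x = np.asarray(x, float)
    logx = np.log(x)

    def g(k):                        # increasing in k; its root is the MLE
        xk = x ** k
        return (xk * logx).sum() / xk.sum() - logx.mean() - 1.0 / k

    for _ in range(200):             # bisection
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = ((x ** k).mean()) ** (1.0 / k)
    return k, lam

rng = np.random.default_rng(42)
sample = rng.weibull(1.5, 5000) * 2.0   # true shape 1.5, true scale 2.0
k_hat, lam_hat = weibull_mle(sample)
```

With 5000 observations the estimates land close to the true shape 1.5 and scale 2.0; graphical methods such as the Weibull probability plot mentioned in the abstract are often used to supply starting values.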
Institute of Scientific and Technical Information of China (English)
Tao Hu; Heng-jian Cui; Xing-wei Tong
2009-01-01
This article considers a semiparametric varying-coefficient partially linear regression model with current status data. The semiparametric varying-coefficient partially linear regression model is a generalization of the partially linear regression model and the varying-coefficient regression model, and allows one to explore the possibly nonlinear effect of a certain covariate on the response variable. A sieve maximum likelihood estimation method is proposed and the asymptotic properties of the proposed estimators are discussed. Under some mild conditions, the estimators are shown to be strongly consistent. The convergence rate of the estimator for the unknown smooth function is obtained, and the estimator for the unknown parameter is shown to be asymptotically efficient and normally distributed. Simulation studies are conducted to examine the small-sample properties of the proposed estimates and a real dataset is used to illustrate our approach.
Estimating growth of SMES using a logit model: Evidence from manufacturing companies in Italy
Directory of Open Access Journals (Sweden)
Amith Vikram Megaravalli
2017-03-01
Full Text Available In this paper, an effort has been made to develop a model for estimating growth based on logit regression and to apply the model to Italian manufacturing companies. Our data set consists of 8232 SMEs of Italy. To estimate the growth of a firm, an innovative approach has been adopted that treats the annual statements issued in the year before the accelerated growth as the effective estimators of firm growth. The results of the logit show that return on assets, log(cash flow) and log(inventory) have a positive effect in estimating the growth of high-growth firms, whereas working capital turnover times has a negative effect. The discriminant power of the model, measured by the area under the Receiver Operating Characteristic curve, is 72.35%, which means the model is fair in terms of estimating growth.
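The logit-plus-ROC workflow can be sketched end to end in a few lines. The data, variables, and coefficients below are synthetic stand-ins (not the article's 8232-firm data set, which reports an AUC of 72.35%); the fit uses plain gradient descent and the AUC is the Mann-Whitney rank statistic.

```python
import numpy as np

# Synthetic stand-in data: three predictors (think ROA, log cash flow, ...)
rng = np.random.default_rng(7)
n = 2000
X = rng.normal(size=(n, 3))
beta_true = np.array([1.0, 0.8, -0.6])
p = 1.0 / (1.0 + np.exp(-(X @ beta_true)))
y = (rng.uniform(size=n) < p).astype(float)

# Logistic regression by gradient ascent on the log-likelihood
Xd = np.hstack([np.ones((n, 1)), X])      # add intercept column
beta = np.zeros(4)
for _ in range(2000):
    mu = 1.0 / (1.0 + np.exp(-(Xd @ beta)))
    beta += 0.1 * Xd.T @ (y - mu) / n

scores = Xd @ beta

def roc_auc(y, s):
    """AUC as the probability that a random positive outranks a random negative."""
    order = np.argsort(s)
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

auc = roc_auc(y, scores)
```

The fitted coefficients recover the signs of the generating model (positive first two predictors, negative third), mirroring the article's positive/negative effect pattern.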
E-model MOS Estimate Improvement through Jitter Buffer Packet Loss Modelling
Directory of Open Access Journals (Sweden)
Adrian Kovac
2011-01-01
Full Text Available The proposed article analyses the dependence of MOS, as a voice call quality (QoS) measure, estimated through the ITU-T E-model under real network conditions with jitter. In this paper, a method of modelling the jitter effect is proposed. Jitter, as voice packet time uncertainty, appears as increased packet loss caused by jitter memory buffer under- or overflow. Jitter buffer behaviour at the receiver's side is modelled as a Pareto/D/1/K system with Pareto-distributed packet interarrival times, and its performance is experimentally evaluated using statistical tools. The jitter buffer stochastic model is then incorporated into the E-model in an additive manner, accounting for network jitter effects via excess packet loss complementing the measured network packet loss. The proposed modification of the E-model input parameters adds two degrees of freedom in modelling: network jitter and jitter buffer size.
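The additive idea can be sketched with the standard simplified ITU-T G.107 formulas: buffer losses are added to network packet loss before computing the R-factor and mapping it to MOS. The buffer-loss figure below is a placeholder, not the paper's Pareto/D/1/K result, and the delay impairment Id is ignored for brevity.

```python
# Simplified E-model (ITU-T G.107): effective equipment impairment from
# packet loss, R-factor, and the standard R-to-MOS mapping.

def effective_loss(network_loss_pct, buffer_loss_pct):
    return network_loss_pct + buffer_loss_pct      # additive excess loss

def r_factor(ppl, ie=0.0, bpl=25.1):               # Bpl = 25.1: e.g. G.711 + PLC
    ie_eff = ie + (95.0 - ie) * ppl / (ppl + bpl)  # Ie,eff per G.107
    return 93.2 - ie_eff                           # delay impairment Id ignored

def mos(r):
    if r <= 0.0:
        return 1.0
    if r >= 100.0:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

mos_net_only = mos(r_factor(effective_loss(1.0, 0.0)))    # 1% network loss
mos_with_jitter = mos(r_factor(effective_loss(1.0, 3.0))) # + 3% buffer loss
```

Adding the jitter-buffer loss visibly lowers the predicted MOS, which is exactly the effect the paper's extended input parameter captures.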
ASYMPTOTICS OF MEAN TRANSFORMATION ESTIMATORS WITH ERRORS IN VARIABLES MODEL
Institute of Scientific and Technical Information of China (English)
CUI Hengjian
2005-01-01
This paper addresses estimation, and its asymptotics, of the mean transformation θ = E[h(X)] of a random variable X based on n i.i.d. observations from the errors-in-variables model Y = X + v, where v is a measurement error with a known distribution and h(·) is a known smooth function. The asymptotics of the deconvolution kernel estimator are given for ordinary smooth error distributions, and of the expectation extrapolation estimator for normal error distributions. Under some mild regularity conditions, consistency and asymptotic normality are obtained for both types of estimators. Simulations show they have good performance.
ROBUST ESTIMATION IN PARTIAL LINEAR MIXED MODEL FOR LONGITUDINAL DATA
Institute of Scientific and Technical Information of China (English)
Qin Guoyou; Zhu Zhongyi
2008-01-01
In this article, a robust generalized estimating equation for the analysis of partial linear mixed models for longitudinal data is used. The authors approximate the nonparametric function by a regression spline. Under some regularity conditions, the asymptotic properties of the estimators are obtained. To avoid the computation of high-dimensional integrals, a robust Monte Carlo Newton-Raphson algorithm is used. Some simulations are carried out to study the performance of the proposed robust estimators. In addition, the authors also study the robustness and the efficiency of the proposed estimators by simulation. Finally, two real longitudinal data sets are analyzed.
Adaptive quasi-likelihood estimate in generalized linear models
Institute of Scientific and Technical Information of China (English)
CHEN Xia; CHEN Xiru
2005-01-01
This paper gives a thorough theoretical treatment of the adaptive quasi-likelihood estimate of the parameters in generalized linear models. The unknown covariance matrix of the response variable is estimated from the sample. It is shown that the adaptive estimator defined in this paper is asymptotically most efficient in the sense that it is asymptotically normal, and the covariance matrix of the limit distribution coincides with that of the quasi-likelihood estimator for the case where the covariance matrix of the response variable is completely known.
BAYESIAN ESTIMATION IN SHARED COMPOUND POISSON FRAILTY MODELS
Directory of Open Access Journals (Sweden)
David D. Hanagal
2015-06-01
Full Text Available In this paper, we study the compound Poisson distribution as the shared frailty distribution, with two different baseline distributions, namely the Pareto and linear failure rate distributions, for modeling survival data. We use the Markov Chain Monte Carlo (MCMC) technique to estimate the parameters of the proposed models via a Bayesian estimation procedure. In the present study, a simulation is done to compare the true values of the parameters with the estimated values. We fit the proposed models to a real-life bivariate survival data set of McGilchrist and Aisbett (1991) related to kidney infection. Also, we present a comparison study for the same data using a model selection criterion, and suggest the better frailty model of the two proposed frailty models.
Modeling of Location Estimation for Object Tracking in WSN
Directory of Open Access Journals (Sweden)
Hung-Chi Chu
2013-01-01
Full Text Available Location estimation for object tracking is one of the important topics in the research of wireless sensor networks (WSNs). Recently, many location estimation or positioning schemes in WSNs have been proposed. In this paper, we propose the procedure and modeling of location estimation for object tracking in WSNs. The designed model is a simple scheme without complex processing. We use Matlab to conduct the simulation and numerical analyses to find the optimal modeling variables. The analyses with different variables include the object moving model, sensing radius, model weighting value α, and power-level increasing ratio k of neighboring sensor nodes. For practical consideration, we also analyze the shadowing model.
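A simple location-estimation scheme of the kind analysed above is a weighted centroid of neighbouring nodes. This sketch is generic: the weighting exponent `alpha` is a stand-in for the paper's model weighting value α, and the anchor layout and ranges are assumed.

```python
import numpy as np

# Weighted centroid localization: the target position is estimated as a
# weighted average of anchor-node positions, with nearer nodes (smaller
# estimated range) receiving larger weights.

def weighted_centroid(anchors, distances, alpha=1.0):
    """anchors: (k, 2) node positions; distances: estimated ranges to each."""
    w = 1.0 / np.asarray(distances) ** alpha   # inverse-distance weights
    w /= w.sum()
    return w @ np.asarray(anchors, float)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = np.array([3.0, 4.0])
d = [np.hypot(*(np.array(a) - true_pos)) for a in anchors]  # ideal ranges
est = weighted_centroid(anchors, d, alpha=2.0)
```

Even with exact ranges the centroid estimate is biased toward the middle of the anchor layout; studies like the one above tune α (and account for shadowing in the range estimates) to reduce this error.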
CONSERVATIVE ESTIMATING FUNCTIONS IN THE NONLINEAR REGRESSION MODEL WITH AGGREGATED DATA
Institute of Scientific and Technical Information of China (English)
(no author listed)
2000-01-01
The purpose of this paper is to study the theory of conservative estimating functions in the nonlinear regression model with aggregated data. In this model, a quasi-score function with aggregated data is defined. When this function happens to be conservative, it is the projection of the true score function onto a class of estimating functions. By construction, the potential function for the projected score with aggregated data is obtained, which has some properties of a log-likelihood function.
Estimation linear model using block generalized inverse of a matrix
Jasińska, Elżbieta; Preweda, Edward
2013-01-01
The work shows the principle of the generalized linear model and point estimation, which can be used as a basis for determining the status of movements and deformations of engineering objects. The structural model can be subjected to any boundary conditions, for example, to ensure the continuity of the deformations. Estimation by the method of least squares was carried out taking into account the Gauss-Markov conditions for quadratic forms, using the Lagrange function. The original sol...
Model Averaging Software for Dichotomous Dose Response Risk Estimation
Directory of Open Access Journals (Sweden)
Matthew W. Wheeler
2008-02-01
Full Text Available Model averaging has been shown to be a useful method for incorporating model uncertainty in quantitative risk estimation. In certain circumstances this technique is computationally complex, requiring sophisticated software to carry out the computation. We introduce software that implements model averaging for risk assessment based upon dichotomous dose-response data. This software, which we call Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD), fits the quantal response models which are also used in the US Environmental Protection Agency benchmark dose software suite, and generates a model-averaged dose-response model to produce benchmark dose and benchmark dose lower bound estimates. The software fulfills a need for risk assessors, allowing them to go beyond one single model in their risk assessments based on quantal data by focusing on a set of models that describes the experimental data.
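The averaging step itself is straightforward once each quantal model has been fitted by maximum likelihood; a common scheme weights models by Akaike weights. The fitted log-likelihoods, parameter counts, and predicted extra risks below are hypothetical numbers, and the fitting stage that MADr-BMD performs is omitted.

```python
import math

# Model-averaged extra risk at a fixed dose, using Akaike weights
# w_m ∝ exp(-0.5 * (AIC_m - AIC_min)) over the fitted quantal models.

fits = [
    {"name": "logistic",       "loglik": -34.2, "k": 2, "risk": 0.081},
    {"name": "quantal-linear", "loglik": -33.1, "k": 2, "risk": 0.104},
    {"name": "probit",         "loglik": -35.6, "k": 2, "risk": 0.072},
]

aic = [2 * f["k"] - 2 * f["loglik"] for f in fits]
amin = min(aic)
w = [math.exp(-0.5 * (a - amin)) for a in aic]
total = sum(w)
w = [wi / total for wi in w]                       # Akaike weights, sum to 1
avg_risk = sum(wi * f["risk"] for wi, f in zip(w, fits))
```

The averaged risk always lies between the per-model risks; benchmark-dose software then inverts this averaged dose-response curve to obtain the BMD and its lower bound.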
The estimated impact of natural immunity on the effectiveness of human papillomavirus vaccination
Matthijsse, S.M.; Hontelez, J.A.C.; Naber, S.K.; Rosmalen, J. van; Rozemeijer, K.; Penning, C.; Bakker, R; Ballegooijen, M. van; Kok, I.M. de; Vlas, S.J. de
2015-01-01
BACKGROUND: Mathematical modelling is used to estimate the effectiveness of HPV vaccination. These estimates depend strongly on herd immunity and thus on naturally acquired immunity, a mechanism of which little is known. We estimated the impact of different vaccination strategies on HPV-16 and HPV-1
Estimation models of variance components for farrowing interval in swine
Directory of Open Access Journals (Sweden)
Aderbal Cavalcante Neto
2009-02-01
Full Text Available The main objective of this study was to evaluate the importance of including maternal genetic, common litter environmental and permanent environmental effects in estimation models of variance components for the farrowing interval trait in swine. Data consisting of 1,013 farrowing intervals of Dalland (C-40) sows recorded in two herds were analyzed. Variance components were obtained by the derivative-free restricted maximum likelihood method. Eight models were tested, which contained the fixed effects (contemporary group and covariables) and the direct genetic additive and residual effects, and varied regarding the inclusion of the maternal genetic, common litter environmental, and/or permanent environmental random effects. The likelihood-ratio test indicated that the inclusion of these effects in the model was unnecessary, but the inclusion of the permanent environmental effect caused changes in the estimates of heritability, which varied from 0.00 to 0.03. In conclusion, the heritability values obtained indicated that this trait appears to present no genetic gain as response to selection. The common litter environmental and the maternal genetic effects did not present any influence on this trait. The permanent environmental effect, however, should be considered in the genetic models for this trait in swine, because its presence caused changes in the additive genetic variance estimates.
Efficient robust nonparametric estimation in a semimartingale regression model
Konev, Victor
2010-01-01
The paper considers the problem of robustly estimating a periodic function in a continuous-time regression model with dependent disturbances given by a general square-integrable semimartingale with unknown distribution. An example of such noise is the non-Gaussian Ornstein-Uhlenbeck process with a Lévy subordinator, which is used to model financial Black-Scholes-type markets with jumps. An adaptive model selection procedure, based on weighted least squares estimates, is proposed. Under general moment conditions on the noise distribution, sharp non-asymptotic oracle inequalities for the robust risks are derived and the robust efficiency of the model selection procedure is shown.
Kriging with mixed effects models
Directory of Open Access Journals (Sweden)
Alessio Pollice
2007-10-01
Full Text Available In this paper the effectiveness of the use of mixed effects models for estimation and prediction purposes in spatial statistics for continuous data is reviewed in the classical and Bayesian frameworks. A case study on agricultural data is also provided.
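The review concerns kriging-type prediction; as a reminder of the core computation, here is a minimal simple-kriging sketch with an exponential covariance. The locations, covariance parameters, and known mean are illustrative assumptions, not taken from the case study:

```python
import numpy as np

# Simple-kriging sketch: z_hat(s0) = mu + c0^T C^-1 (z - mu) with an
# exponential covariance. Locations, data, and parameters are illustrative.
def exp_cov(h, sill=1.0, corr_range=10.0):
    return sill * np.exp(-h / corr_range)

pts = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])  # observation locations
z = np.array([1.2, 0.8, 1.5])                          # observed values
mu = 1.0                                               # known mean (simple kriging)
s0 = np.array([2.0, 2.0])                              # prediction location

D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)  # pairwise distances
C = exp_cov(D)                                  # data covariance matrix
c0 = exp_cov(np.linalg.norm(pts - s0, axis=1))  # covariances to s0
z_hat = mu + c0 @ np.linalg.solve(C, z - mu)
print(float(z_hat))
```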
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories.
An empirical model to estimate ultraviolet erythemal transmissivity
Antón, M.; Serrano, A.; Cancillo, M. L.; García, J. A.
2009-04-01
An empirical model to estimate the solar ultraviolet erythemal irradiance (UVER) for all-weather conditions is presented. This model proposes a power expression with the UV transmissivity as the dependent variable, and the slant ozone column and the clearness index as independent variables. UVER was measured at three stations in South-Western Spain during a five-year period (2001-2005). A dataset corresponding to the period 2001-2004 was used to develop the model and an independent dataset (year 2005) was used for validation. For all three locations, the empirical model explains more than 95% of UV transmissivity variability due to changes in the two independent variables. In addition, the coefficients of the models show that when the slant ozone amount decreases by 1%, UV transmissivity and, therefore, UVER values increase by approximately 1.33%-1.35%. The coefficients also show that when the clearness index decreases by 1%, UV transmissivity increases by 0.75%-0.78%. The validation of the model provided satisfactory results, with a low mean absolute bias error (MABE) of about 7%-8% for all stations. Finally, a one-day-ahead forecast of the UV Index for cloud-free cases is presented, assuming persistence of the total ozone column. The percentage of days with differences between forecast and experimental UVI lower than ±0.5 unit and ±1 unit is within the range of 28% to 37%, and 60% to 75%, respectively. Therefore, the empirical model proposed in this work provides reliable forecasts of cloud-free UVI in order to inform the public about the possible harmful effects of UV radiation over-exposure.
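A power expression of this kind can be fitted by ordinary least squares after a log transform. The sketch below uses synthetic data and invented generating coefficients (chosen only to echo the order of the reported sensitivities), not the authors' measurements or fitted values:

```python
import numpy as np

# Sketch of fitting T = a * SO**b * kt**c by log-linear least squares, where
# SO is the slant ozone column and kt the clearness index. Data and the
# generating exponents are synthetic assumptions.
rng = np.random.default_rng(0)
so = rng.uniform(250.0, 450.0, 200)   # slant ozone column (DU), synthetic
kt = rng.uniform(0.3, 0.8, 200)       # clearness index, synthetic
a, b, c = 5.0, -1.34, 0.77            # assumed generating coefficients
T = a * so**b * kt**c                 # noise-free transmissivity

# ln T = ln a + b*ln SO + c*ln kt is linear in the unknowns:
X = np.column_stack([np.ones_like(so), np.log(so), np.log(kt)])
coef, *_ = np.linalg.lstsq(X, np.log(T), rcond=None)
a_hat, b_hat, c_hat = np.exp(coef[0]), coef[1], coef[2]
print(round(b_hat, 2), round(c_hat, 2))  # recovers -1.34 and 0.77
```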
Directory of Open Access Journals (Sweden)
Holder Roger L
2009-07-01
Full Text Available Abstract Background Multiple imputation (MI) provides an effective approach to handle missing covariate data within prognostic modelling studies, as it can properly account for the missing data uncertainty. The multiply imputed datasets are each analysed using standard prognostic modelling techniques to obtain the estimates of interest. The estimates from each imputed dataset are then combined into one overall estimate and variance, incorporating both the within- and between-imputation variability. Rubin's rules for combining these multiply imputed estimates are based on asymptotic theory. The resulting combined estimates may be more accurate if the posterior distribution of the population parameter of interest is better approximated by the normal distribution. However, the normality assumption may not be appropriate for all the parameters of interest when analysing prognostic modelling studies, such as predicted survival probabilities and model performance measures. Methods Guidelines for combining the estimates of interest when analysing prognostic modelling studies are provided. A literature review is performed to identify current practice for combining such estimates in prognostic modelling studies. Results Methods for combining all reported estimates after MI were not well reported in the current literature. Rubin's rules without applying any transformations were the standard approach used, when any method was stated. Conclusion The proposed simple guidelines for combining estimates after MI may lead to a wider and more appropriate use of MI in future prognostic modelling studies.
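Rubin's rules, the pooling method discussed above, can be stated compactly in code. This is a generic sketch with hypothetical inputs, not the transformation guidelines proposed in the paper:

```python
# Rubin's rules for pooling m multiply-imputed analyses: the combined point
# estimate is the mean, and the total variance is the within-imputation
# variance W plus the between-imputation variance B inflated by (1 + 1/m).
def rubins_rules(estimates, variances):
    m = len(estimates)
    q_bar = sum(estimates) / m
    w = sum(variances) / m
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)
    return q_bar, w + (1.0 + 1.0 / m) * b

est = [0.50, 0.55, 0.48, 0.52, 0.45]       # hypothetical estimates, m = 5
var = [0.010, 0.012, 0.011, 0.010, 0.013]  # their within-imputation variances
q, t = rubins_rules(est, var)
print(round(q, 3), round(t, 4))  # 0.5 0.0129
```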
Leakage Current Estimation of CMOS Circuit with Stack Effect
Institute of Scientific and Technical Information of China (English)
Yong-Jun Xu; Zu-Ying Luo; Xiao-Wei Li; Li-Jian Li; Xian-Long Hong
2004-01-01
Leakage current of CMOS circuits increases dramatically as technology scales down and has become a critical issue in high-performance systems. Subthreshold, gate and reverse-biased junction band-to-band tunneling (BTBT) leakages are considered the three main components of total leakage current. Up to now, how to accurately estimate the leakage current of large-scale circuits within acceptable time has remained unsolved, even though accurate leakage models have been widely discussed. In this paper, the authors first examine the stack effect in CMOS technology and propose a new simple gate-level leakage current model. Then, a table-lookup based total leakage current simulator is built according to the model. To validate the simulator, accurate leakage current is simulated at circuit level using the popular simulator HSPICE for comparison. Further studies such as maximum leakage current estimation, minimum leakage current generation and a high-level average leakage current macromodel are introduced in detail. Experiments on ISCAS85 and ISCAS89 benchmarks demonstrate that the two proposed leakage current estimation methods are accurate and efficient.
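The table-lookup idea can be illustrated with a toy sketch: a per-gate leakage table indexed by input vector (capturing the stack effect, since stacked off-transistors leak far less) summed over the circuit. The gate type and leakage values are invented placeholders, not HSPICE-characterized data:

```python
# Toy gate-level leakage estimation by table lookup. Each entry gives the
# leakage of a gate for one input vector; values are invented placeholders.
LEAKAGE_TABLE_NA = {  # (gate type, input vector) -> leakage in nA
    ("NAND2", (0, 0)): 0.5,   # both NMOS off: strongest stack effect
    ("NAND2", (0, 1)): 1.8,
    ("NAND2", (1, 0)): 1.2,
    ("NAND2", (1, 1)): 6.0,   # no off-transistor stack in the pull-down path
}

def circuit_leakage_na(gates):
    """Sum table-lookup leakage over (gate_type, input_vector) pairs."""
    return sum(LEAKAGE_TABLE_NA[(g, tuple(v))] for g, v in gates)

netlist_state = [("NAND2", (0, 0)), ("NAND2", (1, 1)), ("NAND2", (0, 1))]
print(round(circuit_leakage_na(netlist_state), 1))  # 0.5 + 6.0 + 1.8 = 8.3
```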
Stenroos, Matti; Hauk, Olaf
2013-11-01
The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity for the most important compartment, skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG+EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically-shaped three-layer anatomical head models were constructed, and forward models were built with Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on amplitudes of lead fields and spatial filter vectors, but the effect on corresponding morphologies was small. The localization performance of the EEG or combined MEG+EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG+EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only.
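The minimum-norm machinery the study relies on reduces to a regularized pseudoinverse of the lead field. A minimal sketch with a random synthetic lead field (dimensions and the regularization value are arbitrary assumptions):

```python
import numpy as np

# Regularized minimum-norm (MN) estimation: given a lead-field matrix L
# (sensors x sources) and sensor data y, the MN spatial filter is
# W = L^T (L L^T + lambda*I)^-1 and the source estimate is W @ y.
rng = np.random.default_rng(1)
n_sensors, n_sources = 32, 200
L = rng.standard_normal((n_sensors, n_sources))  # synthetic lead field

def minimum_norm_filter(L, lam):
    gram = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.inv(gram)

W = minimum_norm_filter(L, lam=1.0)
y = L @ rng.standard_normal(n_sources)  # synthetic measurement
s_hat = W @ y                           # distributed source estimate
print(W.shape, s_hat.shape)             # (200, 32) (200,)
```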
An improved model for estimating pesticide emissions for agricultural LCA
DEFF Research Database (Denmark)
Dijkman, Teunis Johannes; Birkved, Morten; Hauschild, Michael Zwicky
2011-01-01
Credible quantification of chemical emissions in the inventory phase of Life Cycle Assessment (LCA) is crucial since chemicals are the dominating cause of the human and ecotoxicity-related environmental impacts in Life Cycle Impact Assessment (LCIA). When applying LCA for assessment of agricultural products, off-target pesticide emissions need to be quantified as accurately as possible because of the considerable toxicity effects associated with chemicals designed to have a high impact on biological organisms such as insects or weeds. PestLCI was developed to estimate the fractions of the applied pesticide that are emitted from a field to the surrounding environmental compartments: air, surface water, and ground water. However, the applicability of the model has been limited to one typical Danish soil type and one climatic profile obtained from the national Danish meteorological station...
Estimating the effect of multiple environmental stressors on coral bleaching and mortality
National Research Council Canada - National Science Library
Paul D Welle; Mitchell J Small; Scott C Doney; Inês L Azevedo
2017-01-01
.... We develop and use a novel regression approach, using non-linear parametric models that control for unobserved time invariant effects to estimate the effects on coral bleaching and mortality due...
Simultaneous Optimality of LSE and ANOVA Estimate in General Mixed Models
Institute of Scientific and Technical Information of China (English)
Mi Xia WU; Song Gui WANG; Kai Fun YU
2008-01-01
Problems of simultaneous optimal estimation and optimal tests in general mixed models are considered. A necessary and sufficient condition is presented for the least squares estimate of the fixed effects and the analysis of variance (Henderson III) estimate of variance components to be uniformly minimum variance unbiased estimates simultaneously. This result can be applied to the problems of finding uniformly optimal unbiased tests and uniformly most accurate unbiased confidence intervals on parameters of interest, and to establishing the equivalence of several common estimates of variance components.
Directory of Open Access Journals (Sweden)
Nakamichi Takeshi
2015-06-01
Full Text Available The characteristics of evapotranspiration estimated by the complementary relationship actual evapotranspiration (CRAE), the advection-aridity (AA), and the modified advection-aridity (MAA) models were investigated in six pairs of rural and urban areas of Japan in order to evaluate the applicability of the three models to the urban area. The main results are as follows: (1) The MAA model could be applied to estimate actual evapotranspiration in the urban area. (2) The actual evapotranspiration estimated by the three models was much lower in the urban areas than in the rural ones. (3) The differences among the evapotranspiration values estimated by the three models were significant in the urban areas, while the differences in the rural areas were relatively small. (4) All three models underestimated the actual evapotranspiration in the urban areas from humid surfaces where water and green spaces exist. (5) Each model could take the effect of urbanization into account.
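For orientation, the advection-aridity idea rests on the complementary relationship ET_a = 2*E_wet - E_potential, with the wet-environment term taken as Priestley-Taylor evaporation and the potential term as Penman evaporation. The sketch below uses invented values in mm/day and is not the MAA modification evaluated in the study:

```python
# Complementary-relationship sketch in the spirit of the advection-aridity
# (AA) model: ET_a = 2*E_PT - E_Penman, where E_PT is the Priestley-Taylor
# wet-environment evaporation and E_Penman the Penman potential evaporation.
# The input values are invented (mm/day) for illustration only.
def aa_actual_et(e_priestley_taylor, e_penman):
    return 2.0 * e_priestley_taylor - e_penman

print(aa_actual_et(3.5, 4.0))  # rural-like conditions: 3.0
print(aa_actual_et(3.0, 5.5))  # urban-like (stronger advective demand): 0.5
```

Under this relationship, a drier, more advective environment raises E_Penman relative to E_PT and drives the estimated actual ET down, consistent with the much lower urban estimates reported above.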
Estimating hybrid choice models with the new version of Biogeme
Bierlaire, Michel
2010-01-01
Hybrid choice models integrate many types of discrete choice modeling methods, including latent classes and latent variables, in order to capture concepts such as perceptions, attitudes, preferences, and motivations (Ben-Akiva et al., 2002). Although they provide an excellent framework to capture complex behavior patterns, their use in applications remains rare in the literature due to the difficulty of estimating the models. In this talk, we provide a short introduction to hybrid choice model...
Parameter Estimation and Experimental Design in Groundwater Modeling
Institute of Scientific and Technical Information of China (English)
SUN Ne-zheng
2004-01-01
This paper reviews the latest developments on parameter estimation and experimental design in the field of groundwater modeling. Special considerations are given when the structure of the identified parameter is complex and unknown. A new methodology for constructing useful groundwater models is described, which is based on the quantitative relationships among the complexity of model structure, the identifiability of parameter, the sufficiency of data, and the reliability of model application.
Activity Recognition Using Biomechanical Model Based Pose Estimation
Reiss, Attila; Hendeby, Gustaf; Bleser, Gabriele; Stricker, Didier
2010-01-01
In this paper, a novel activity recognition method based on signal-oriented and model-based features is presented. The model-based features are calculated from shoulder and elbow joint angles and torso orientation, provided by upper-body pose estimation based on a biomechanical body model. The recognition performance of signal-oriented and model-based features is compared within this paper, and the potential of improving recognition accuracy by combining the two approaches is proved: the accu...
Chishtie, F A; Jia, J; Mann, R B; McKeon, D G C; Sherry, T N; Steele, T G
2010-01-01
We consider the effective potential $V$ in the standard model with a single Higgs doublet in the limit that the only mass scale $\\mu$ present is radiatively generated. Using a technique that has been shown to determine $V$ completely in terms of the renormalization group (RG) functions when using the Coleman-Weinberg (CW) renormalization scheme, we first sum leading-log (LL) contributions to $V$ using the one loop RG functions, associated with five couplings (the top quark Yukawa coupling $x$, the quartic coupling of the Higgs field $y$, the $SU(3)$ gauge coupling $z$, and the $SU(2) \\times U(1)$ couplings $r$ and $s$). We then employ the two loop RG functions with the three couplings $x$, $y$, $z$ to sum the next-to-leading-log (NLL) contributions to $V$ and then the three to five loop RG functions with one coupling $y$ to sum all the $N^2LL \\ldots N^4LL$ contributions to $V$. In order to compute these sums, it is necessary to convert those RG functions that have been originally computed explicitly in the mi...
Bayesian estimation of parameters in a regional hydrological model
Directory of Open Access Journals (Sweden)
K. Engeland
2002-01-01
Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
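The AR(1) error formulation mentioned above has a simple conditional log-likelihood. The sketch below demonstrates it on a synthetic residual series rather than Ecomag simulation errors:

```python
import numpy as np

# Conditional log-likelihood of an AR(1) simulation-error model: residuals
# e_t = rho*e_{t-1} + nu_t with nu_t ~ N(0, s2), conditioning on e_1.
def ar1_loglik(e, rho, s2):
    nu = e[1:] - rho * e[:-1]  # innovations
    n = nu.size
    return -0.5 * n * np.log(2.0 * np.pi * s2) - 0.5 * np.sum(nu**2) / s2

rng = np.random.default_rng(2)
e = np.zeros(500)
for t in range(1, 500):        # generate a synthetic AR(1) residual series
    e[t] = 0.6 * e[t - 1] + rng.normal()

grid = np.linspace(0.0, 0.9, 10)  # coarse grid over rho
best = grid[np.argmax([ar1_loglik(e, r, 1.0) for r in grid])]
print(best)  # should land at or next to the generating rho = 0.6
```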
Comparing interval estimates for small sample ordinal CFA models.
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased; this can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
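Coverage of interval estimates, the study's focus, is straightforward to probe by simulation. A minimal sketch for the normal-theory z-interval for a mean (sample size and replication count are illustrative choices, not the study's design):

```python
import random

# Empirical coverage of the nominal 95% normal-theory interval for a mean
# at a small sample size, estimated by Monte Carlo simulation.
random.seed(0)

def coverage(n, reps=2000, z=1.96):
    hits = 0
    for _ in range(reps):
        xs = [random.gauss(0.0, 1.0) for _ in range(n)]
        m = sum(xs) / n
        s2 = sum((x - m) ** 2 for x in xs) / (n - 1)  # sample variance
        half = z * (s2 / n) ** 0.5
        hits += (m - half) <= 0.0 <= (m + half)       # true mean is 0
    return hits / reps

print(coverage(10))  # typically a bit below the nominal 0.95 at small n
```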
On estimating the effective diffusive properties of hardened cement pastes
Energy Technology Data Exchange (ETDEWEB)
Stora, E.; Bary, B. [CEA Saclay, DEN/DANS/DPC/SCCME, Lab Etud Comportement Betons et Argiles, F-91191 Gif Sur Yvette, (France); Stora, E.; He, Qi-Chang [Univ Paris Est, Lab Paris Est, F-77454 Marne La Vallee 2, (France)
2008-07-01
The effective diffusion coefficients of hardened cement pastes can vary between a few orders of magnitude. The paper aims at building a homogenization model to estimate these macroscopic diffusivities and capture such strong variations. For this purpose, a three-scale description of the paste is proposed, relying mainly on the fact that the initial cement grains hydrate forming a complex microstructure with a multi-scale pore structure. In particular, porosity is found to be well connected at a fine scale. However, only a few homogenization schemes are shown to be adequate to account for such connectivity. Among them, the mixed composite spheres assemblage estimate (Stora, E., He, Q.-C., Bary, B.: J. Appl. Phys. 100(8), 084910, 2006a) seems to be the only one that always complies with rigorous bounds and is consequently employed to predict the effects of this fine porosity on the material effective diffusivities. The model proposed provides predictions in good agreement with experimental results and is consistent with the numerous measurements of critical pore diameters issued from mercury intrusion porosimetry tests. The evolution of the effective diffusivities of cement pastes subjected to leaching is also assessed by adopting a simplified scenario of the decalcification process. (authors)
Estimating model parameters in nonautonomous chaotic systems using synchronization
Yang, Xiaoli; Xu, Wei; Sun, Zhongkui
2007-05-01
In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular nonautonomous, chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters, in addition to a linear feedback coupling for synchronizing systems; general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are then analytically derived to ensure precise evaluation of the unknown parameters and identical synchronization between the experimental system of interest and its corresponding receiver. Examples are presented employing a parametrically excited new 4D oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of the noise strength in simulation.
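The adaptive-synchronization idea can be sketched for a driven Ueda-type oscillator with unknown damping. The coupling gains, adaptation gain, and Euler step below are illustrative choices, not the Letter's settings, and the adaptive law is a standard Lyapunov-motivated one rather than the Letter's exact scheme:

```python
import numpy as np

# Adaptive-synchronization parameter estimation for a driven Ueda-type
# oscillator  x1' = x2,  x2' = -d*x2 - x1**3 + F*cos(t),  with unknown
# damping d. A receiver copy with estimate d_hat is linearly coupled to the
# transmitter, and the adaptive law d_hat' = -gamma*(x2 - y2)*x2 tracks d.
d_true, F = 0.05, 7.5
k1, k2, gamma, dt = 20.0, 20.0, 5.0, 0.001

x = np.array([1.0, 0.0])  # "experimental" transmitter state
y = np.array([0.0, 0.0])  # receiver state
d_hat, t = 0.5, 0.0       # deliberately wrong initial guess

for _ in range(int(100.0 / dt)):  # forward-Euler integration
    drive = F * np.cos(t)
    e = x - y
    dx = np.array([x[1], -d_true * x[1] - x[0] ** 3 + drive])
    dy = np.array([y[1] + k1 * e[0],
                   -d_hat * y[1] - y[0] ** 3 + drive + k2 * e[1]])
    d_hat += dt * (-gamma * e[1] * x[1])
    x, y, t = x + dt * dx, y + dt * dy, t + dt

print(abs(d_hat - d_true))  # the error should shrink well below the initial 0.45
```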