The problematic estimation of "imitation effects" in multilevel models
Directory of Open Access Journals (Sweden)
2003-09-01
It seems plausible that a person's demographic behaviour may be influenced by that of other people in the community, for example because of an inclination to imitate. When estimating multilevel models from clustered individual data, some investigators might feel tempted to capture this effect by simply including on the right-hand side the average of the dependent variable, constructed by aggregation within the clusters. However, such modelling must be avoided. According to simulation experiments based on real fertility data from India, the estimated effect of this obviously endogenous variable can be very different from the true effect. The other community-effect estimates can also be strongly biased. An "imitation effect" can only be estimated under very special assumptions that will be hard to defend in practice.
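The endogeneity problem this abstract warns about is easy to reproduce. The sketch below is a hypothetical simulation (not the authors' Indian fertility data): it generates clustered outcomes with no imitation effect at all, then naively regresses the outcome on its own within-cluster mean. The estimated "imitation effect" comes out strongly positive anyway, because the cluster mean of y absorbs the cluster random effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clusters, n_per = 200, 20

# True model: y depends on x and a cluster effect u; there is NO imitation.
cluster = np.repeat(np.arange(n_clusters), n_per)
u = rng.normal(0, 1, n_clusters)[cluster]        # cluster random effect
x = rng.normal(0, 1, n_clusters * n_per)
y = 0.5 * x + u + rng.normal(0, 1, x.size)

# Naive model: add the within-cluster mean of y as a right-hand-side variable
ybar = np.bincount(cluster, y) / n_per
X = np.column_stack([np.ones_like(y), x, ybar[cluster]])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

print(beta[2])  # spurious "imitation effect", far from the true value 0
```

The coefficient on the cluster mean is close to 1 rather than the true 0, illustrating why this endogenous regressor must not be used.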
Estimation of Nonlinear Dynamic Panel Data Models with Individual Effects
Directory of Open Access Journals (Sweden)
Yi Hu
2014-01-01
This paper suggests a generalized method of moments (GMM) based estimation for dynamic panel data models with individual-specific fixed effects and threshold effects simultaneously. We extend Hansen's (1999) original setup to models including endogenous regressors, specifically lagged dependent variables. To address the endogeneity of these nonlinear dynamic panel data models, we prove that the orthogonality conditions proposed by Arellano and Bond (1991) are valid. The threshold and slope parameters are estimated by GMM, and the asymptotic distribution of the slope parameters is derived. The finite sample performance of the estimation is investigated through Monte Carlo simulations, which show that the threshold and slope parameters can be estimated accurately and that the finite sample distribution of the slope parameters is well approximated by the asymptotic distribution.
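The Arellano–Bond idea invoked here, that lagged levels are valid instruments for the first-differenced equation, can be sketched in a few lines. This is a simplified Anderson–Hsiao-style instrumental variable estimator on simulated data, not the paper's full GMM with threshold effects; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, rho = 5000, 6, 0.5

# Simulate y_it = rho * y_i,t-1 + alpha_i + eps_it (fixed effect alpha_i)
alpha = rng.normal(0, 1, N)
y = np.zeros((N, T))
y[:, 0] = alpha + rng.normal(0, 1, N)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(0, 1, N)

# First differencing removes alpha_i but makes dy_{t-1} endogenous; the
# level y_{t-2} is orthogonal to the differenced error and serves as an
# instrument (the moment condition behind Arellano-Bond GMM).
dy  = (y[:, 2:] - y[:, 1:-1]).ravel()    # dy_t for t = 2..T-1
dyl = (y[:, 1:-1] - y[:, :-2]).ravel()   # dy_{t-1}
z   = y[:, :-2].ravel()                  # y_{t-2}: the instrument

rho_iv  = (z @ dy) / (z @ dyl)           # simple IV estimate, near rho
rho_ols = (dyl @ dy) / (dyl @ dyl)       # OLS on differences: biased

print(rho_iv, rho_ols)
```

The IV estimate recovers rho while plain OLS on the differenced data is severely biased, which is why the orthogonality conditions matter.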
Estimation and Inference for Very Large Linear Mixed Effects Models
Gao, K.; Owen, A. B.
2016-01-01
Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are N observations. Such problems arise in any setting where the underlying factors satisfy a many-to-many relationship (instead of a nested one), and in electronic commerce applications N can be quite large. Methods that do not account for the correlation structure can...
Nonparametric Estimation of Distributions in Random Effects Models
Hart, Jeffrey D.
2011-01-01
We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.
Macho, Siegfried; Ledermann, Thomas
2011-01-01
The phantom model approach for estimating, testing, and comparing specific effects within structural equation models (SEMs) is presented. The rationale underlying this novel method consists of representing the specific effect to be assessed as a total effect within a separate latent variable model, the phantom model, which is added to the main…
Fan, Xitao; Wang, Lin; Thompson, Bruce
1999-01-01
A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)
Estimating a marriage matching model with spillover effects.
Choo, Eugene; Siow, Aloysius
2006-08-01
We use marriage matching functions to study how marital patterns change when population supplies change. Specifically, we use a behavioral marriage matching function with spillover effects to rationalize marriage and cohabitation behavior in contemporary Canada. The model can estimate a couple's systematic gains to marriage and cohabitation relative to remaining single. These gains are invariant to changes in population supplies. Instead, changes in population supplies redistribute these gains between a couple. Although the model is behavioral, it is nonparametric. It can fit any observed cross-sectional marriage matching distribution. We use the estimated model to quantify the impacts of gender differences in mortality rates and the baby boom on observed marital behavior in Canada. The higher mortality rate of men makes men scarcer than women. We show that the scarceness of men modestly reduced the welfare of women and increased the welfare of men in the marriage market. On the other hand, the baby boom increased older men's net gains to entering the marriage market and lowered middle-aged women's net gains.
Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects
Directory of Open Access Journals (Sweden)
Guangjie Li
2015-07-01
We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian model averaging to obtain point estimators with lower root mean squared error (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
A Highly Effective Fuzzy Synthetic Evaluation Multi-model Estimation
Directory of Open Access Journals (Sweden)
Yang LIU
2014-01-01
The algorithm flow of the variable structure multi-model method (VSMM) is complex and its tracking performance inefficient, which makes VSMM difficult to apply in fielded equipment. This paper presents a high-performance variable structure multi-model method based on multi-factor fuzzy synthetic evaluation (HEFS_VSMM). Under the guidance of the variable structure method, HEFS_VSMM first uses multi-factor fuzzy synthetic evaluation in its model-set adaptation strategy to select an appropriate model set in real time and reduce the computational complexity of model evaluation. Second, it selects the model-set center according to the evaluation results of each model and sets the property value for the current model set. Third, it chooses different processes based on the current model-set property value to simplify the logical complexity of the algorithm. Finally, the algorithm obtains the total estimate by applying optimal information fusion to the above results. Simulation results show that, compared with FSMM and EMA, the mean estimation error for position, velocity, and acceleration in HEFS_VSMM is improved from -0.029 m, -0.350 m/s, and -10.051 m/s² to -0.023 m, 0.052 m/s, and -5.531 m/s², respectively, while the algorithm cycle is reduced from 0.0051 s to 0.0025 s.
School Processes Mediate School Compositional Effects: Model Specification and Estimation
Liu, Hongqiang; Van Damme, Jan; Gielen, Sarah; Van Den Noortgate, Wim
2015-01-01
School composition effects have been consistently verified, but few studies ever attempted to study how school composition affects school achievement. Based on prior research findings, we employed multilevel mediation modeling to examine whether school processes mediate the effect of school composition upon school outcomes based on the data of 28…
Effects of uncertainty in model predictions of individual tree volume on large area volume estimates
Ronald E. McRoberts; James A. Westfall
2014-01-01
Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...
Effective single scattering albedo estimation using regional climate model
CSIR Research Space (South Africa)
Tesfaye, M
2011-09-01
In this study, by modifying the optical parameterization of the Regional Climate Model (RegCM), the authors have computed and compared the Effective Single-Scattering Albedo (ESSA), which is representative of the VIS spectral region. The arid, semi...
Belfield, Clive; Bailey, Thomas
2017-01-01
Recently, studies have adopted fixed effects modeling to identify the returns to college. This method has the advantage over ordinary least squares estimates in that unobservable, individual-level characteristics that may bias the estimated returns are differenced out. But the method requires extensive longitudinal data and involves complex…
Estimation of the house money effect using hurdle models
Engel, Christoph; Moffat, Peter G.
2012-01-01
Evidence from an experiment investigating the “house money effect” in the context of a public goods game is reconsidered. Analysis is performed within the framework of the panel hurdle model, in which subjects are assumed to be one of two types: free-riders, and potential contributors. The effect of house money is seen to be significant in the first hurdle: specifically, house money makes a subject more likely to be a potential contributor. Hence we find that the effect of house money is more...
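The two-part logic of a hurdle model, a participation equation followed by a level equation for contributors, can be sketched on simulated data. This is a hypothetical data-generating process and a deliberately simple estimator (group log-odds rather than a full panel hurdle likelihood), not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4000
house = rng.integers(0, 2, n)        # 1 = endowment was "house money"

# Hypothetical DGP: house money raises the probability of being a
# potential contributor, but not the amount contributed.
p_contrib = 1 / (1 + np.exp(-(-0.5 + 0.8 * house)))
is_contributor = rng.random(n) < p_contrib
amount = np.where(is_contributor, rng.gamma(2.0, 5.0, n), 0.0)

# First hurdle: logit of "any contribution" on house money. With one
# binary regressor the MLE is the difference in group log-odds.
def log_odds(z):
    p = z.mean()
    return np.log(p / (1 - p))

hurdle_coef = log_odds(amount[house == 1] > 0) - log_odds(amount[house == 0] > 0)

# Second part: contribution levels among contributors, by treatment
level_diff = amount[(house == 1) & (amount > 0)].mean() - \
             amount[(house == 0) & (amount > 0)].mean()

print(hurdle_coef, level_diff)
```

Under this setup the effect shows up in the first hurdle (coefficient near the assumed 0.8) while the level difference among contributors is near zero, mirroring the pattern the abstract reports.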
E-Model MOS Estimate Precision Improvement and Modelling of Jitter Effects
Directory of Open Access Journals (Sweden)
Adrian Kovac
2012-01-01
This paper deals with the ITU-T E-model, which is used for non-intrusive MOS VoIP call quality estimation on IP networks. The pros of the E-model are its computational simplicity and usability on real-time traffic. The cons, as shown in our previous work, are its inability to reflect the effects of network jitter present in real traffic flows and of jitter-buffer behavior on end-user devices. These effects are visible mostly in traffic over WAN, internet, and radio networks, and cause the E-model MOS call quality estimate to be noticeably too optimistic. In this paper, we propose a modification to the E-model using the previously proposed Pplef (effective packet loss) using jitter and a jitter-buffer model based on the Pareto/D/1/K system. We subsequently optimize the newly added parameters reflecting jitter effects in the E-model by using the PESQ intrusive measurement method as a reference for selected audio codecs. Function fitting and parameter optimization are performed under varying delay, packet loss, jitter, and different jitter-buffer sizes for both correlated and uncorrelated long-tailed network traffic.
DEFF Research Database (Denmark)
Petersen, Jørgen Holm
2016-01-01
This paper describes a new approach to the estimation in a logistic regression model with two crossed random effects where special interest is in estimating the variance of one of the effects while not making distributional assumptions about the other effect. A composite likelihood is studied...
Estimating the Effects of Parental Divorce and Death With Fixed Effects Models.
Amato, Paul R; Anthony, Christopher J
2014-04-01
The authors used child fixed effects models to estimate the effects of parental divorce and death on a variety of outcomes using 2 large national data sets: (a) the Early Childhood Longitudinal Study, Kindergarten Cohort (kindergarten through the 5th grade) and (b) the National Educational Longitudinal Study (8th grade to the senior year of high school). In both data sets, divorce and death were associated with multiple negative outcomes among children. Although evidence for a causal effect of divorce on children was reasonably strong, effect sizes were small in magnitude. A second analysis revealed a substantial degree of variability in children's outcomes following parental divorce, with some children declining, others improving, and most not changing at all. The estimated effects of divorce appeared to be strongest among children with the highest propensity to experience parental divorce.
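The logic of the child fixed effects design can be illustrated with a toy simulation (hypothetical numbers, not the ECLS-K or NELS data): when children who experience divorce differ systematically in a stable trait, pooled OLS conflates selection with the divorce effect, while within-child demeaning removes the trait.

```python
import numpy as np

rng = np.random.default_rng(2)
n, effect = 5000, -0.3

# Child fixed effect c_i; children with lower c_i are more likely to
# experience divorce (selection on the fixed effect).
c = rng.normal(0, 1, n)
divorced = rng.random(n) < 1 / (1 + np.exp(c))

# Two waves per child; divorce "turns on" at wave 2 for the divorced group
d = np.stack([np.zeros(n), divorced.astype(float)], axis=1)
y = c[:, None] + effect * d + rng.normal(0, 1, (n, 2))

# Pooled OLS is biased by selection on c_i...
pooled = np.polyfit(d.ravel(), y.ravel(), 1)[0]
# ...while the within (fixed effects) estimator differences c_i out
yd, dd = y - y.mean(1, keepdims=True), d - d.mean(1, keepdims=True)
within = (dd.ravel() @ yd.ravel()) / (dd.ravel() @ dd.ravel())

print(pooled, within)
```

The within estimator recovers the assumed effect of -0.3, whereas the pooled estimate is dragged far below it by the correlation between divorce and the child trait.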
Model ecosystem approach to estimate community level effects of radiation
Energy Technology Data Exchange (ETDEWEB)
Masahiro, Doi; Nobuyuki, Tanaka; Shoichi, Fuma; Nobuyoshi, Ishii; Hiroshi, Takeda; Zenichiro, Kawabata [National Institute of Radiological Sciences, Environmental and Toxicological Sciences Research Group, Chiba (Japan)
2004-07-01
A mathematical computer model is developed to simulate the population dynamics and dynamic mass budgets of a microbial community realized as a self-sustaining aquatic ecological system in a tube. Autotrophic algae, heterotrophic protozoa, and saprotrophic bacteria live symbiotically, with inter-species interactions such as predator-prey relationships, competition for common resources, autolysis of detritus, and a detritus-grazing food chain. The simulation model is an individual-based parallel model that builds in demographic stochasticity and, by dividing the aquatic environment into patches, environmental stochasticity. The validity of the model is checked against the multifaceted data of the microcosm experiments. In the analysis, intrinsic parameters of umbrella endpoints (lethality, morbidity, reproductive growth, mutation) are manipulated at the individual level in an attempt to find population-, community-, and ecosystem-level disorders of ecologically crucial parameters (e.g. intrinsic growth rate, carrying capacity, variation) that relate to the probability of population extinction. (author)
Schluchter, Mark D.
2008-01-01
In behavioral research, interest is often in examining the degree to which the effect of an independent variable X on an outcome Y is mediated by an intermediary or mediator variable M. This article illustrates how generalized estimating equations (GEE) modeling can be used to estimate the indirect or mediated effect, defined as the amount by…
Robust estimation and moment selection in dynamic fixed-effects panel data models
Cizek, Pavel; Aquaro, Michele
Considering linear dynamic panel data models with fixed effects, existing outlier–robust estimators based on the median ratio of two consecutive pairs of first-differenced data are extended to higher-order differencing. The estimation procedure is thus based on many pairwise differences and their
Lee, Duncan; Rushworth, Alastair; Sahu, Sujit K
2014-06-01
Estimation of the long-term health effects of air pollution is a challenging task, especially when modeling spatial small-area disease incidence data in an ecological study design. The challenge comes from the unobserved underlying spatial autocorrelation structure in these data, which is accounted for using random effects modeled by a globally smooth conditional autoregressive model. These smooth random effects confound the effects of air pollution, which are also globally smooth. To avoid this collinearity a Bayesian localized conditional autoregressive model is developed for the random effects. This localized model is flexible spatially, in the sense that it is not only able to model areas of spatial smoothness, but also it is able to capture step changes in the random effects surface. This methodological development allows us to improve the estimation performance of the covariate effects, compared to using traditional conditional auto-regressive models. These results are established using a simulation study, and are then illustrated with our motivating study on air pollution and respiratory ill health in Greater Glasgow, Scotland in 2011. The model shows substantial health effects of particulate matter air pollution and nitrogen dioxide, whose effects have been consistently attenuated by the currently available globally smooth models. © 2014, The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
Meta-analysis of choice set generation effects on route choice model estimates and predictions
DEFF Research Database (Denmark)
Prato, Carlo Giacomo
2012-01-01
Large scale applications of behaviorally realistic transport models pose several challenges to transport modelers on both the demand and the supply sides. On the supply side, path-based solutions to the user assignment equilibrium problem help modelers in enhancing route choice behavior modeling, but require them to generate choice sets by selecting a path generation technique and its parameters according to personal judgments. This paper proposes a methodology and an experimental setting to provide general indications about objective judgments for an effective route choice set generation… are applied for model estimation and results are compared to the ‘true model estimates’. Last, predictions from the simulation of models estimated with objective choice sets are compared to the ‘postulated predicted routes’. A meta-analytical approach allows synthesizing the effect of judgments…
Effects of ocean tide models on GNSS-estimated ZTD and PWV in Turkey
Directory of Open Access Journals (Sweden)
G. Gurbuz
2015-12-01
Global Navigation Satellite System (GNSS) observations can precisely estimate the total zenith tropospheric delay (ZTD) and precipitable water vapour (PWV) for weather prediction and atmospheric research, as a continuous and all-weather technique. However, apart from the GNSS technique itself, estimates of ZTD and PWV are subject to the effects of geophysical models with large uncertainties, particularly imprecise ocean tide models in Turkey. In this paper, GNSS data from Jan. 1 to Dec. 31, 2014 are processed at 4 co-located GNSS stations (GISM, DIYB, GANM, and ADAN) with radiosonde data from the Turkish Met-Office, along with several nearby IGS stations. The GAMIT/GLOBK software has been used to process GNSS data at a 30-second sampling rate using the Vienna Mapping Function and a 10° elevation cut-off angle. Tidal and non-tidal atmospheric pressure loading (ATML) at the observation level is also applied in GAMIT/GLOBK. Several widely used ocean tide models are evaluated for their effects on GNSS-estimated ZTD and PWV: the IERS-recommended FES2004; NAO99b, from a barotropic hydrodynamic model; CSR4.0, obtained from TOPEX/Poseidon altimetry with FES94.1 as the reference model; and GOT00, again a long-wavelength adjustment of FES94.1 using TOPEX/Poseidon data on a 0.5 by 0.5 degree grid. The ZTD and PWV computed from radiosonde profile observations are regarded as reference values for comparison and validation. In the processing phase, five different strategies are used, without an ocean tide model and with each of the four aforementioned ocean tide models, to evaluate ocean tide model effects on GNSS-estimated ZTD and PWV through comparison with the co-located radiosonde. Results showed that ocean tide models greatly affect the estimation of ZTD at the centimeter level, and thus of precipitable water vapour at the millimeter level, at stations near coasts. The ocean tide model FES2004 that is
Coley, Rebecca Yates; Brown, Elizabeth R.
2016-01-01
Inconsistent results in recent HIV prevention trials of pre-exposure prophylactic interventions may be due to heterogeneity in risk among study participants. Intervention effectiveness is most commonly estimated with the Cox model, which compares event times between populations. When heterogeneity is present, this population-level measure underestimates intervention effectiveness for individuals who are at risk. We propose a likelihood-based Bayesian hierarchical model that estimates the individual-level effectiveness of candidate interventions by accounting for heterogeneity in risk with a compound Poisson-distributed frailty term. This model reflects the mechanisms of HIV risk and allows that some participants are not exposed to HIV and, therefore, have no risk of seroconversion during the study. We assess model performance via simulation and apply the model to data from an HIV prevention trial. PMID:26869051
Directory of Open Access Journals (Sweden)
Hou Chengyu
2014-01-01
Skywave over-the-horizon (OTH) radar systems have important long-range strategic warning value. They exploit skywave propagation, the reflection of high frequency signals from the ionosphere, which provides ultra-long-range surveillance capabilities to detect and track maneuvering targets. Current OTH radar systems are capable of localizing targets in range and azimuth but are unable to achieve reliable instantaneous altitude estimation. Most existing height measurement methods for skywave OTH radar systems take advantage of the micro-multipath effect and are formulated in a flat earth model. However, the flat earth model is not appropriate, since large errors are inevitable when the detection range exceeds one thousand kilometers. To avoid the error caused by the flat earth model, this paper introduces an earth curvature model into OTH radar altimetry. The simulation results show that applying the earth curvature model can effectively reduce the estimation error.
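The size of the flat-earth error is easy to quantify with spherical geometry. The following is an illustrative calculation; the radius, slant range, and elevation angle are assumed values, not the paper's scenario.

```python
import numpy as np

Re = 6371.0                           # mean Earth radius, km
R, theta = 1500.0, np.radians(5.0)    # slant range (km), elevation angle

# Flat-earth target height vs. height above a spherical Earth
h_flat = R * np.sin(theta)
h_curved = np.sqrt(Re**2 + R**2 + 2 * Re * R * np.sin(theta)) - Re

print(h_flat, h_curved, h_curved - h_flat)
```

At a 1500 km range the two heights differ by well over a hundred kilometers, which is why the flat earth assumption is untenable at OTH detection ranges.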
Overgaard, Rune V; Jonsson, Niclas; Tornøe, Christoffer W; Madsen, Henrik
2005-02-01
Pharmacokinetic/pharmacodynamic modelling is most often performed using non-linear mixed-effects models based on ordinary differential equations with uncorrelated intra-individual residuals. More sophisticated residual error models as e.g. stochastic differential equations (SDEs) with measurement noise can in many cases provide a better description of the variations, which could be useful in various aspects of modelling. This general approach enables a decomposition of the intra-individual residual variation epsilon into system noise w and measurement noise e. The present work describes implementation of SDEs in a non-linear mixed-effects model, where parameter estimation was performed by a novel approximation of the likelihood function. This approximation is constructed by combining the First-Order Conditional Estimation (FOCE) method used in non-linear mixed-effects modelling with the Extended Kalman Filter used in models with SDEs. Fundamental issues concerning the proposed model and estimation algorithm are addressed by simulation studies, concluding that system noise can successfully be separated from measurement noise and inter-individual variability.
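The core of the machinery described above, a predict/update recursion that separates system noise w from measurement noise e, can be sketched for the simplest linear scalar case. This is a plain Kalman filter on simulated data, not the paper's full FOCE/EKF algorithm for mixed-effects SDE models; all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, a, q, r = 500, 0.95, 0.5, 1.0  # AR coefficient, system & measurement variance

# Latent state driven by system noise w; observations add measurement noise e
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), n)

# Scalar Kalman filter: predict with the state model, correct with the data
xf, P = 0.0, 1.0
states = []
for yt in y:
    xp, Pp = a * xf, a * a * P + q             # prediction step
    K = Pp / (Pp + r)                          # Kalman gain
    xf, P = xp + K * (yt - xp), (1 - K) * Pp   # update step
    states.append(xf)
states = np.array(states)

# Filtered states track the latent state better than the raw observations
print(np.mean((states - x) ** 2), np.mean((y - x) ** 2))
```

The filter's mean squared error against the latent state is well below that of the raw observations, showing how the residual variation is decomposed into the two noise sources.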
A Comparison of Methods for Estimating Quadratic Effects in Nonlinear Structural Equation Models
Harring, Jeffrey R.; Weiss, Brandi A.; Hsu, Jui-Chen
2012-01-01
Two Monte Carlo simulations were performed to compare methods for estimating and testing hypotheses of quadratic effects in latent variable regression models. The methods considered in the current study were (a) a 2-stage moderated regression approach using latent variable scores, (b) an unconstrained product indicator approach, (c) a latent…
Effect of unrepresented model errors on estimated soil hydraulic material properties
Directory of Open Access Journals (Sweden)
S. Jaumann
2017-09-01
Unrepresented model errors influence the estimation of effective soil hydraulic material properties. As the required model complexity for a consistent description of the measurement data is application dependent and unknown a priori, we implemented a structural error analysis based on the inversion of increasingly complex models. We show that the method can indicate unrepresented model errors and quantify their effects on the resulting material properties. To this end, a complicated 2-D subsurface architecture (ASSESS) was forced with a fluctuating groundwater table while time domain reflectometry (TDR) and hydraulic potential measurement devices monitored the hydraulic state. In this work, we analyze the quantitative effect of unrepresented (i) sensor position uncertainty, (ii) small-scale heterogeneity, and (iii) 2-D flow phenomena on estimated soil hydraulic material properties with a 1-D and a 2-D study. The results of these studies demonstrate three main points: (i) the fewer sensors are available per material, the larger the effect of unrepresented model errors on the resulting material properties; (ii) the 1-D study yields biased parameters due to unrepresented lateral flow; and (iii) representing and estimating sensor positions as well as small-scale heterogeneity decreased the mean absolute error of the volumetric water content data by more than a factor of 2, to 0.004.
Rosenblum, Michael; van der Laan, Mark J
2010-04-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation.
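For the intercept-plus-treatment Poisson working model described above, the MLE of the treatment coefficient reduces to the log ratio of group means, which is why it targets the marginal log rate ratio even under arbitrary misspecification. A small simulation sketch with a hypothetical data-generating process:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
x = rng.normal(0, 1, n)          # prognostic covariate
trt = rng.integers(0, 2, n)      # randomized treatment assignment

# True rates depend on x nonlinearly; a main-terms working model omits x
# entirely, so it is misspecified. The multiplicative treatment effect
# makes the true marginal log rate ratio exactly 0.3 here.
rate = np.exp(0.3 * trt + np.sin(x) + 0.5 * x)
y = rng.poisson(rate)

# Poisson MLE with intercept + treatment only has a closed form:
# the treatment coefficient is the log ratio of group means.
beta_trt = np.log(y[trt == 1].mean() / y[trt == 0].mean())

print(beta_trt)  # close to the marginal log rate ratio, 0.3
```

Despite ignoring the covariate, the estimate lands on the marginal log rate ratio, the log-linear analog of the ANCOVA robustness result mentioned in the abstract.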
ξ common cause failure model and method for defense effectiveness estimation
International Nuclear Information System (INIS)
Li Zhaohuan
1991-08-01
Two issues are addressed. The first is the development of an event-based parametric model called the ξ-CCF model. Its parameters are expressed as fractions of the progressive multiplicities of failure events; these expressions present the contribution of each multiple failure more clearly and can help in selecting defense tactics against common cause failures. The second is a method, based on operational experience and engineering judgement, for estimating the effectiveness of defense tactics. It is expressed in terms of a reduction matrix for a given tactic on a specific plant, in event-by-event form. Application to a practical example shows that the model, together with the method, can straightforwardly estimate the effectiveness of defense tactics. It can be easily used by operators and its application may be extended.
Directory of Open Access Journals (Sweden)
Tanck, Michael W. T.
2008-01-01
Background: This paper describes a likelihood approach to model the relation between failure time and haplotypes in studies of unrelated individuals where haplotype phase is unknown, while dealing with the problem of unstable estimates due to rare haplotypes by considering a penalized log-likelihood. Results: The Cox model presented here incorporates the uncertainty related to the unknown phase of multiply heterozygous individuals as weights. Estimation is performed with an EM algorithm. In the E-step the weights are estimated; in the M-step the parameter estimates are obtained by maximizing the expectation of the joint log-likelihood, and the baseline hazard function and haplotype frequencies are calculated. These steps are iterated until the parameter estimates converge. Two penalty functions are considered: the ridge penalty, and a difference penalty based on the assumption that similar haplotypes show similar effects. Simulations were conducted to investigate the properties of the method, and the association between IL10 haplotypes and the risk of target vessel revascularization was investigated in 2653 patients from the GENDER study. Conclusion: Results from simulations and real data show that the penalized log-likelihood approach produces valid results, indicating that this method is of interest when studying the association between rare haplotypes and failure time in studies of unrelated individuals.
More Precise Estimation of Lower-Level Interaction Effects in Multilevel Models.
Loeys, Tom; Josephy, Haeike; Dewitte, Marieke
2018-03-20
In hierarchical data, the effect of a lower-level predictor on a lower-level outcome may often be confounded by an (un)measured upper-level factor. When such confounding is left unaddressed, the effect of the lower-level predictor is estimated with bias. Separating this effect into a within- and between-component removes such bias in a linear random intercept model under a specific set of assumptions for the confounder. When the effect of the lower-level predictor is additionally moderated by another lower-level predictor, an interaction between both lower-level predictors is included into the model. To address unmeasured upper-level confounding, this interaction term ought to be decomposed into a within- and between-component as well. This can be achieved by first multiplying both predictors and centering that product term next, or vice versa. We show that while both approaches, on average, yield the same estimates of the interaction effect in linear models, the former decomposition is much more precise and robust against misspecification of the effects of cross-level and upper-level terms, compared to the latter.
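The two decompositions can be contrasted on simulated clustered data (a minimal numpy sketch with hypothetical predictors; no model fitting, just the construction of the within-cluster interaction term):

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_per = 50, 10
g = np.repeat(np.arange(n_groups), n_per)
x1 = rng.normal(size=g.size)
x2 = rng.normal(size=g.size)

def within_between(v, groups):
    """Split v into cluster means (between) and deviations (within)."""
    means = np.array([v[groups == k].mean() for k in np.unique(groups)])
    between = means[groups]
    return v - between, between

# Recommended decomposition: form the product first, then center it.
prod = x1 * x2
prod_within, prod_between = within_between(prod, g)

# The alternative (center each predictor first, then multiply) yields a
# different "within" interaction term:
x1w, _ = within_between(x1, g)
x2w, _ = within_between(x2, g)
alt_within = x1w * x2w
print("terms differ:", not np.allclose(prod_within, alt_within))
```

The two within-terms generally differ, which is why the two approaches can behave differently under misspecification even though they target the same interaction effect.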
Infusion and sampling site effects on two-pool model estimates of leucine metabolism
International Nuclear Information System (INIS)
Helland, S.J.; Grisdale-Helland, B.; Nissen, S.
1988-01-01
To assess the effect of site of isotope infusion on estimates of leucine metabolism, infusions of alpha-[4,5-3H]ketoisocaproate (KIC) and [U-14C]leucine were made into the left or right ventricles of sheep and pigs. Blood was sampled from the opposite ventricle. In both species, left ventricular infusions resulted in significantly lower specific radioactivities (SA) of [14C]leucine and [3H]KIC. [14C]KIC SA was found to be insensitive to infusion and sampling sites. In addition, [14C]KIC SA was found to equal that of [14C]leucine only during the left heart infusions. Therefore, [14C]KIC SA was used as the only estimate for 14C SA in the equations of the two-pool model. This model eliminated the influence of the sites of infusion and blood sampling on the estimates of leucine entry and reduced their impact on the estimates of proteolysis and oxidation. This two-pool model could not compensate for the underestimation of transamination reactions occurring during the traditional venous isotope infusion and arterial blood sampling
Lin, L.; Luo, X.; Qin, F.; Yang, J.
2018-03-01
As one of the combustion products of hydrocarbon fuels in a combustion-heated wind tunnel, water vapor may condense during the rapid expansion process, which will lead to a complex two-phase flow inside the wind tunnel and even change the design flow conditions at the nozzle exit. The coupling of the phase transition and the compressible flow makes the estimation of the condensation effects in such wind tunnels very difficult and time-consuming. In this work, a reduced theoretical model is developed to approximately compute the nozzle-exit conditions of a flow including real-gas and homogeneous condensation effects. Specifically, the conservation equations of the axisymmetric flow are first approximated in the quasi-one-dimensional way. Then, the complex process is split into two steps, i.e., a real-gas nozzle flow but excluding condensation, resulting in supersaturated nozzle-exit conditions, and a discontinuous jump at the end of the nozzle from the supersaturated state to a saturated state. Compared with two-dimensional numerical simulations implemented with a detailed condensation model, the reduced model predicts the flow parameters with good accuracy except for some deviations caused by the two-dimensional effect. Therefore, this reduced theoretical model can provide a fast, simple but also accurate estimation of the condensation effect in combustion-heated hypersonic tunnels.
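The first step of such a reduced model (the nozzle flow computed without condensation) can be sketched with the ideal-gas quasi-one-dimensional area-Mach relation; the real-gas corrections and the condensation jump treated in the paper are omitted, and the area ratio is a hypothetical value:

```python
import math

def area_ratio(M, gamma=1.4):
    """Isentropic A/A* for Mach M (quasi-one-dimensional, ideal gas)."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return (1.0 / M) * t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

def exit_mach(A_ratio, gamma=1.4):
    """Supersonic root of the area-Mach relation, found by bisection."""
    lo, hi = 1.0 + 1e-9, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, gamma) < A_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

M_exit = exit_mach(100.0)                 # hypothetical A_exit/A_throat = 100
T_ratio = 1.0 / (1.0 + 0.2 * M_exit**2)   # static/stagnation temperature ratio
print(f"M_exit ≈ {M_exit:.2f}, T/T0 ≈ {T_ratio:.4f}")
```

The strong drop of static temperature at the exit is what drives the supersaturation that the second step of the reduced model then relaxes through a discontinuous jump to a saturated state.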
A structural dynamic factor model for the effects of monetary policy estimated by the EM algorithm
DEFF Research Database (Denmark)
Bork, Lasse
This paper applies the maximum likelihood based EM algorithm to a large-dimensional factor analysis of US monetary policy. Specifically, economy-wide effects of shocks to the US federal funds rate are estimated in a structural dynamic factor model in which 100+ US macroeconomic and financial time...... as opposed to the orthogonal factors resulting from the popular principal component approach to structural factor models. Correlated factors are economically more sensible and important for a richer monetary policy transmission mechanism. Secondly, I consider both static factor loadings as well as dynamic...
Estimating required information size by quantifying diversity in random-effects model meta-analyses
DEFF Research Database (Denmark)
Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper
2009-01-01
BACKGROUND: There is increasing awareness that meta-analyses require a sufficiently large information size to detect or reject an anticipated intervention effect. The required information size in a meta-analysis may be calculated from an anticipated a priori intervention effect or from...... an intervention effect suggested by trials with low-risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta......-trial variability and a sampling error estimate considering the required information size. D2 is different from the intuitively obvious adjusting factor based on the common quantification of heterogeneity, the inconsistency (I2), which may underestimate the required information size. Thus, D2 and I2 are compared...
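The diversity idea can be sketched numerically: D² compares the variance of the pooled estimate under fixed-effect versus random-effects (DerSimonian-Laird) models, and 1/(1−D²) inflates the fixed-effect required information size. Trial data below are hypothetical:

```python
import numpy as np

# Hypothetical trial effect estimates and within-trial variances
yi = np.array([0.05, 0.55, -0.20, 0.45, 0.80])
vi = np.array([0.02, 0.03, 0.04, 0.02, 0.05])

w = 1.0 / vi                         # fixed-effect weights
v_fixed = 1.0 / w.sum()              # variance of fixed-effect pooled estimate

# DerSimonian-Laird estimate of between-trial variance tau^2
Q = np.sum(w * (yi - np.sum(w * yi) / w.sum()) ** 2)
tau2 = max(0.0, (Q - (len(yi) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))

w_r = 1.0 / (vi + tau2)              # random-effects weights
v_random = 1.0 / w_r.sum()

D2 = 1.0 - v_fixed / v_random        # diversity
inflation = 1.0 / (1.0 - D2)         # factor on the fixed-effect information size
print(f"D^2 = {D2:.2f}, information-size inflation = {inflation:.2f}x")
```

Note that the inflation factor equals v_random/v_fixed, so it directly measures how much heterogeneity dilutes the information in the accumulated trials.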
Effect of recent observations on Asian CO2 flux estimates by transport model inversions
International Nuclear Information System (INIS)
Maksyutov, Shamil; Patra, Prabir K.; Machida, Toshinobu; Mukai, Hitoshi; Nakazawa, Takakiyo; Inoue, Gen
2003-01-01
We use an inverse model to evaluate the effects of the recent CO2 observations over Asia on estimates of regional CO2 sources and sinks. Global CO2 flux distribution is evaluated using several atmospheric transport models, atmospheric CO2 observations and a 'time-independent' inversion procedure adopted in the basic synthesis inversion by the Transcom-3 inverse model intercomparison project. In our analysis we include airborne and tower observations in Siberia, continuous monitoring and airborne observations over Japan, and airborne monitoring on regular flights on the Tokyo-Sydney route. The inclusion of the new data reduces the uncertainty of the estimated regional CO2 fluxes for Boreal Asia (Siberia), Temperate Asia and South-East Asia. The largest effect is observed for the emission/sink estimate for the Boreal Asia region, where introducing the observations in Siberia reduces the source uncertainty by almost half. It also produces an uncertainty reduction for Boreal North America. Addition of the Siberian airborne observations leads to projecting extra sinks in Boreal Asia of 0.2 Pg C/yr, and a smaller change for Europe. The Tokyo-Sydney observations reduce and constrain the Southeast Asian source
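Why extra observations shrink posterior flux uncertainty can be seen in a toy Bayesian synthesis inversion (the transport Jacobian and error levels below are invented stand-ins, not Transcom values):

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions = 4

# Prior flux uncertainty (diagonal covariance), arbitrary units
B = np.diag([1.0, 1.0, 1.0, 1.0])

def posterior_cov(G, obs_err):
    """Posterior covariance for fluxes x given observations y = G x + noise."""
    R_inv = np.eye(G.shape[0]) / obs_err**2
    return np.linalg.inv(G.T @ R_inv @ G + np.linalg.inv(B))

# Transport operator (response of each station to each region's flux),
# hypothetical numbers standing in for a transport-model Jacobian.
G_base = rng.uniform(0.1, 1.0, size=(6, n_regions))
# Extra rows: new stations that mainly "see" region 0 (e.g. Siberia).
G_new = np.vstack([G_base, [[1.0, 0.1, 0.05, 0.05], [0.9, 0.2, 0.1, 0.05]]])

sd_before = np.sqrt(np.diag(posterior_cov(G_base, obs_err=0.5)))
sd_after = np.sqrt(np.diag(posterior_cov(G_new, obs_err=0.5)))
print("region-0 uncertainty:", sd_before[0], "->", sd_after[0])
```

Because adding observation rows adds a positive semi-definite term to the posterior precision, every region's uncertainty can only stay the same or decrease, with the largest reduction for the region the new stations sample best.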
The Additive Risk Model for Estimation of Effect of Haplotype Match in BMT Studies
DEFF Research Database (Denmark)
Scheike, Thomas; Martinussen, T; Zhang, MJ
2011-01-01
leads to a missing data problem. We show how Aalen's additive risk model can be applied in this setting with the benefit that the time-varying haplomatch effect can be easily studied. This problem has not been considered before, and the standard approach where one would use the expectation-maximization (EM......) algorithm cannot be applied for this model because the likelihood is hard to evaluate without additional assumptions. We suggest an approach based on multivariate estimating equations that are solved using a recursive structure. This approach leads to an estimator where the large sample properties can...... be developed using product-integration theory. Small sample properties are investigated using simulations in a setting that mimics the motivating haplomatch problem....
Population Intervention Models to Estimate Ambient NO2 Health Effects in Children with Asthma
Snowden, Jonathan M.; Mortimer, Kathleen M.; Dufour, Mi-Suk Kang; Tager, Ira B.
2015-01-01
Health effects of ambient air pollution are most frequently expressed in individual studies as responses to a standardized unit of air pollution changes (e.g., an interquartile interval), which is thought to enable comparison of findings across studies. However, this approach does not necessarily convey health effects in terms of a real-world air pollution scenario. In the present study, we employ population intervention modeling to estimate the effect of an air pollution intervention that makes explicit reference to the observed exposure data and is identifiable in those data. We calculate the association between ambient summertime NO2 and forced expiratory flow between 25% and 75% of forced vital capacity (FEF25–75) in a cohort of children with asthma in Fresno, California. We scale the effect size to reflect NO2 abatement on a majority of summer days. The effect estimates were small, imprecise, and consistently indicated improved pulmonary function with decreased NO2. The effects ranged from −0.8% of mean FEF25–75 (95% Confidence Interval: −3.4 , 1.7) to −3.3% (95% CI: −7.5, 0.9). We conclude by discussing the nature and feasibility of the exposure change analyzed here given the observed air pollution profile, and we propose additional applications of the population intervention model in environmental epidemiology. PMID:25182844
Estimation of direct effects for survival data by using the Aalen additive hazards model
DEFF Research Database (Denmark)
Martinussen, Torben; Vansteelandt, Stijn; Gerster, Mette
2011-01-01
Aalen's additive regression for the event time, given exposure, intermediate variable and confounders. The second stage involves applying Aalen's additive model, given the exposure alone, to a modified stochastic process (i.e. a modification of the observed counting process based on the first......We extend the definition of the controlled direct effect of a point exposure on a survival outcome, other than through some given, time-fixed intermediate variable, to the additive hazard scale. We propose two-stage estimators for this effect when the exposure is dichotomous and randomly assigned...
Thai, Hoai-Thu; Mentré, France; Holford, Nicholas H G; Veyrat-Follet, Christine; Comets, Emmanuelle
2013-01-01
A version of the nonparametric bootstrap, which resamples the entire subjects from original data, called the case bootstrap, has been increasingly used for estimating uncertainty of parameters in mixed-effects models. It is usually applied to obtain more robust estimates of the parameters and more realistic confidence intervals (CIs). Alternative bootstrap methods, such as residual bootstrap and parametric bootstrap that resample both random effects and residuals, have been proposed to better take into account the hierarchical structure of multi-level and longitudinal data. However, few studies have been performed to compare these different approaches. In this study, we used simulation to evaluate bootstrap methods proposed for linear mixed-effect models. We also compared the results obtained by maximum likelihood (ML) and restricted maximum likelihood (REML). Our simulation studies evidenced the good performance of the case bootstrap as well as the bootstraps of both random effects and residuals. On the other hand, the bootstrap methods that resample only the residuals and the bootstraps combining case and residuals performed poorly. REML and ML provided similar bootstrap estimates of uncertainty, but there was slightly more bias and poorer coverage rate for variance parameters with ML in the sparse design. We applied the proposed methods to a real dataset from a study investigating the natural evolution of Parkinson's disease and were able to confirm that the methods provide plausible estimates of uncertainty. Given that most real-life datasets tend to exhibit heterogeneity in sampling schedules, the residual bootstraps would be expected to perform better than the case bootstrap. Copyright © 2013 John Wiley & Sons, Ltd.
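The case bootstrap can be sketched on simulated random-intercept data; for brevity the "model fit" here is just the sample mean of the fixed intercept rather than (RE)ML, but the resampling unit, whole subjects, is the point:

```python
import numpy as np

rng = np.random.default_rng(42)
n_subj, n_obs = 40, 5

# Simulate a random-intercept model: y_ij = beta + b_i + e_ij
beta, sd_b, sd_e = 2.0, 1.0, 0.5
b = rng.normal(0, sd_b, n_subj)
y = beta + b[:, None] + rng.normal(0, sd_e, (n_subj, n_obs))

def estimate(data):
    return data.mean()          # simple stand-in estimator of the intercept

boot = []
for _ in range(2000):
    idx = rng.integers(0, n_subj, n_subj)  # resample SUBJECTS with replacement
    boot.append(estimate(y[idx]))          # a subject's rows stay together
boot = np.array(boot)
ci = np.percentile(boot, [2.5, 97.5])
print(f"intercept ≈ {estimate(y):.2f}, 95% case-bootstrap CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

Residual and parametric bootstraps would instead resample (or redraw) estimated random effects b_i and residuals e_ij and reassemble synthetic datasets, preserving the original sampling schedule, which is why they can be preferable under heterogeneous designs.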
Krishna Rao, Sreevidya; Mejia, Gloria C; Roberts-Thomson, Kaye; Logan, Richard M; Kamath, Veena; Kulkarni, Muralidhar; Mittinty, Murthy N
2015-07-01
Early life socioeconomic disadvantage could affect adult health directly or indirectly. To the best of our knowledge, there are no studies of the direct effect of early life socioeconomic conditions on oral cancer occurrence in adult life. We conducted a multicenter, hospital-based, case-control study in India between 2011 and 2012 on 180 histopathologically confirmed incident oral and/or oropharyngeal cancer cases, aged 18 years or more, and 272 controls comprising hospital visitors not diagnosed with any cancer at the same hospitals. Life-course data were collected on socioeconomic conditions, risk factors, and parental behavior through interviews employing a life grid. The early life socioeconomic conditions measure was determined by occupation of the head of household in childhood. Adult socioeconomic measures included participant's education and current occupation of the head of household. Marginal structural models with stabilized inverse probability weights were used to estimate the controlled direct effects of early life socioeconomic conditions on oral cancer. The total effect model showed that those in the low socioeconomic conditions in the early years of childhood had 60% (risk ratio [RR] = 1.6 [95% confidence interval {CI} = 1.4, 1.9]) increased risk of oral cancer. From the marginal structural models, the estimated risk for developing oral cancer among those in low early life socioeconomic conditions was 50% (RR = 1.5 [95% CI = 1.4, 1.5]), 20% (RR = 1.2 [95% CI = 0.9, 1.7]), and 90% (RR = 1.9 [95% CI = 1.7, 2.2]) greater than those in the high socioeconomic conditions when controlled for smoking, chewing tobacco, and alcohol, respectively. When all the three mediators were controlled in a marginal structural model, the RR was 1.3 (95% CI = 1.0, 1.6). Early life low socioeconomic condition had a controlled direct effect on oral cancer when smoking, chewing tobacco, and alcohol were separately adjusted in marginal structural models.
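The stabilized-weight construction used in such marginal structural models can be illustrated on simulated data (one binary mediator, known true mediator model; in practice both numerator and denominator probabilities come from fitted logistic regressions, with confounders in the denominator model):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Simulated data: exposure A (low early-life SES), mediator M (e.g. smoking)
A = rng.binomial(1, 0.4, n)
p_m = 0.2 + 0.3 * A                       # mediator depends on exposure
M = rng.binomial(1, p_m, n)

# Stabilized weight for the mediator: P(M = m) / P(M = m | A)
p_marg = M.mean()
num = np.where(M == 1, p_marg, 1 - p_marg)
den = np.where(M == 1, p_m, 1 - p_m)
sw = num / den

print("mean stabilized weight:", sw.mean())   # should be close to 1
```

A mean weight near 1 is the usual diagnostic that the weight models are compatible with the data; the weighted outcome regression then estimates the controlled direct effect.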
Yang, Ji Seung; Cai, Li
2014-01-01
The main purpose of this study is to improve estimation efficiency in obtaining maximum marginal likelihood estimates of contextual effects in the framework of nonlinear multilevel latent variable model by adopting the Metropolis-Hastings Robbins-Monro algorithm (MH-RM). Results indicate that the MH-RM algorithm can produce estimates and standard…
Baum, Rex L.
2017-01-01
Thickness of colluvium or regolith overlying bedrock or other consolidated materials is a major factor in determining stability of unconsolidated earth materials on steep slopes. Many efforts to model spatially distributed slope stability, for example to assess susceptibility to shallow landslides, have relied on estimates of constant thickness, constant depth, or simple models of thickness (or depth) based on slope and other topographic variables. Assumptions of constant depth or thickness rarely give satisfactory results. Geomorphologists have devised a number of different models to represent the spatial variability of regolith depth and applied them to various settings. I have applied some of these models that can be implemented numerically to different study areas with different types of terrain and tested the results against available depth measurements and landslide inventories. The areas include crystalline rocks of the Colorado Front Range, and gently dipping sedimentary rocks of the Oregon Coast Range. Model performance varies with model, terrain type, and with quality of the input topographic data. Steps in contour-derived 10-m digital elevation models (DEMs) introduce significant errors into the predicted distribution of regolith and landslides. Scan lines, facets, and other artifacts further degrade DEMs and model predictions. Resampling to a lower grid-cell resolution can mitigate effects of facets in lidar DEMs of areas where dense forest severely limits ground returns. Due to its higher accuracy and ability to penetrate vegetation, lidar-derived topography produces more realistic distributions of cover and potential landslides than conventional photogrammetrically derived topographic data.
Simple model to estimate the contribution of atmospheric CO2 to the Earth's greenhouse effect
Wilson, Derrek J.; Gea-Banacloche, Julio
2012-04-01
We show how the CO2 contribution to the Earth's greenhouse effect can be estimated from relatively simple physical considerations and readily available spectroscopic data. In particular, we present a calculation of the "climate sensitivity" (that is, the increase in temperature caused by a doubling of the concentration of CO2) in the absence of feedbacks. Our treatment highlights the important role played by the frequency dependence of the CO2 absorption spectrum. For pedagogical purposes, we provide two simple models to visualize different ways in which the atmosphere might return infrared radiation back to the Earth. The more physically realistic model, based on the Schwarzschild radiative transfer equations, uses as input an approximate form of the atmosphere's temperature profile, and thus includes implicitly the effect of heat transfer mechanisms other than radiation.
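The simplest of the visualization models the authors describe can be reduced to a textbook single-layer "gray" atmosphere; the sketch below is that standard toy model (not the paper's Schwarzschild treatment), with an illustrative infrared absorptivity:

```python
# Single-layer "gray" atmosphere: the layer is transparent to sunlight, absorbs
# a fraction eps of the surface's infrared emission, and re-radiates half of it
# back down. Textbook toy model, not the paper's radiative-transfer calculation.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0           # solar constant, W m^-2
ALBEDO = 0.30

absorbed = S0 * (1 - ALBEDO) / 4.0            # ~240 W/m^2 global average

def surface_temp(eps):
    """Surface temperature with IR absorptivity eps of the single layer."""
    # Energy balance at the surface: sigma * Ts^4 * (1 - eps/2) = absorbed
    return (absorbed / (SIGMA * (1 - eps / 2.0))) ** 0.25

print(f"no atmosphere: {surface_temp(0.0):.1f} K")
print(f"eps = 0.78:    {surface_temp(0.78):.1f} K")
```

With eps chosen near 0.78 the model recovers a roughly 33 K greenhouse warming over the bare-planet temperature, which is the kind of baseline number the frequency-dependent CO2 calculation then refines.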
The effect of compression on tuning estimates in a simple nonlinear auditory filter model
DEFF Research Database (Denmark)
Marschall, Marton; MacDonald, Ewen; Dau, Torsten
2013-01-01
Behavioral experiments using auditory masking have been used to characterize frequency selectivity, one of the basic properties of the auditory system. However, due to the nonlinear response of the basilar membrane, the interpretation of these experiments may not be straightforward. Specifically...... consists of a compressor between two bandpass filters. The BPNL forms the basis of the dual-resonance nonlinear (DRNL) filter that has been used in a number of modeling studies. The location of the nonlinear element and its effect on estimated tuning in the two measurement paradigms was investigated......, then compression alone may explain a large part of the behaviorally observed differences in tuning between simultaneous and forward-masking conditions....
An expert judgment model applied to estimating the safety effect of a bicycle facility.
Leden, L; Gårder, P; Pulkkinen, U
2000-07-01
This paper presents a risk index model that can be used for assessing the safety effect of countermeasures. The model estimates risk in a multiplicative way, which makes it possible to analyze the impact of different factors separately. Expert judgments are incorporated through a Bayesian error model. The variance of the risk estimate is determined by Monte-Carlo simulation. The model was applied to assess the safety effect of a new design of a bicycle crossing. The intent was to gain safety by raising the crossings to reduce vehicle speeds and by making the crossings more visible by painting them in a bright color. Before the implementations, bicyclists were riding on bicycle crossings of conventional Swedish type, i.e. similar to crosswalks but delineated by white squares rather than solid lines or zebra markings. Automobile speeds were reduced as anticipated. However, it seems as if the positive effect of this was more or less canceled out by increased bicycle speeds. The safety per bicyclist was still improved by approximately 20%. This improvement was primarily caused by an increase in bicycle flow, since the data show that larger bicycle flows at a given location seem to improve safety for each bicyclist. The increase in bicycle flow was probably caused by the new layout of the crossings, since bicyclists perceived them as safer and as causing less delay. Some future development work is suggested. Pros and cons of the methodology used are discussed. The most crucial parameter to be added is probably a model describing the interaction between motorists and bicyclists, for example, how risk is influenced by the lateral position of the bicyclist in relation to the motorist. It is concluded that the interaction seems to be optimal when both groups share the roadway.
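The multiplicative structure with Monte-Carlo uncertainty propagation can be sketched as follows (factor point estimates and lognormal spreads are illustrative, not the study's expert judgments):

```python
import numpy as np

rng = np.random.default_rng(3)
n_draws = 100_000

# Risk modeled multiplicatively: R = baseline * f_speed * f_visibility * f_flow.
# Each factor's uncertainty is a lognormal around an expert point estimate
# (values are illustrative, not from the study).
baseline = 1.0
f_speed = rng.lognormal(mean=np.log(0.85), sigma=0.10, size=n_draws)
f_vis = rng.lognormal(mean=np.log(0.90), sigma=0.10, size=n_draws)
f_flow = rng.lognormal(mean=np.log(1.05), sigma=0.05, size=n_draws)

risk = baseline * f_speed * f_vis * f_flow
lo, hi = np.percentile(risk, [2.5, 97.5])
print(f"mean relative risk {risk.mean():.3f} (95% interval {lo:.3f}-{hi:.3f})")
```

The multiplicative form makes each factor's contribution separable: replacing any one factor's distribution (say, after new speed measurements) updates the overall risk estimate without refitting the rest.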
Modeling estimates of the effect of acid rain on background radiation dose
International Nuclear Information System (INIS)
Sheppard, S.C.; Sheppard, M.I.
1988-01-01
Acid rain causes accelerated mobilization of many materials in soils. Natural and anthropogenic radionuclides, especially 226Ra and 137Cs, are among these materials. Okamoto is apparently the only researcher to date who has attempted to quantify the effect of acid rain on the background radiation dose to man. He estimated an increase in dose by a factor of 1.3 following a decrease in soil pH of 1 unit. We reviewed literature that described the effects of changes in pH on mobility and plant uptake of Ra and Cs. Generally, a decrease in soil pH by 1 unit will increase mobility and plant uptake by factors of 2 to 7. Thus, Okamoto's dose estimate may be too low. We applied several simulation models to confirm Okamoto's ideas, with most emphasis on an atmospherically driven soil model that predicts water and nuclide flow through a soil profile. We modeled a typical, acid-rain sensitive soil using meteorological data from Geraldton, Ontario. The results, within the range of effects on the soil expected from acidification, showed essentially direct proportionality between the mobility of the nuclides and dose. This supports some of the assumptions invoked by Okamoto. We conclude that a decrease in pH of 1 unit may increase the mobility of Ra and Cs by a factor of 2 or more. Our models predict that this will lead to similar increases in plant uptake and radiological dose to man. Although health effects following such a small increase in dose have not been statistically demonstrated, any increase in dose is probably undesirable
[Application of Mixed-effect Model in PMI Estimation by Vitreous Humor].
Yang, M Z; Li, H J; Zhang, T Y; Ding, Z J; Wu, S F; Qiu, X G; Liu, Q
2018-02-01
To test the changes of the potassium (K⁺) and magnesium (Mg²⁺) concentrations in the vitreous humor of rabbits along with postmortem interval (PMI) under different temperatures, and to explore the feasibility of PMI estimation using a mixed-effect model. After sacrifice, rabbit carcasses were preserved at 5 ℃, 15 ℃, 25 ℃ and 35 ℃, and 80-100 μL of vitreous humor was collected by the double-eye alternating micro-sampling method every 12 h. The concentrations of K⁺ and Mg²⁺ in vitreous humor were measured by a biochemical-immune analyser. The mixed-effect model was used to perform analysis and fitting, and equations for PMI estimation were established. Data from samples stored at 10 ℃, 20 ℃ and 30 ℃ for 20, 40 and 65 h were used to validate the PMI estimation equations. The concentrations of K⁺ and Mg²⁺ [f(x, y)] in the vitreous humor of rabbits under different temperatures increased along with PMI (x). The relative equations of K⁺ and Mg²⁺ concentration with PMI and temperature (y) under 5 ℃-35 ℃ included f_K⁺(x, y) = 3.413 0 + 0.309 2x + 0.337 6y + 0.010 83xy - 0.002 47x² (P<0.05). The time of deviation of PMI estimation by K⁺ and Mg²⁺ was within 10 h when PMI was between 0 and 40 h, and within 21 h when PMI was between 40 and 65 h. Within the ambient temperature range of 5 ℃-35 ℃, the mixed-effect model based on temperature and vitreous humor substance concentrations can provide a new method for the practical application of vitreous humor chemicals for PMI estimation. Copyright© by the Editorial Department of Journal of Forensic Medicine.
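Given a fitted surface f(x, y), PMI estimation amounts to solving a quadratic in x for a measured concentration at a known temperature. The sketch below uses the K⁺ coefficients as printed in the abstract and should be read as an illustration of the inversion, not as a validated forensic tool:

```python
import numpy as np

def pmi_from_potassium(f, y):
    """Solve f = 3.4130 + 0.3092x + 0.3376y + 0.01083xy - 0.00247x^2 for x (h).

    f: measured vitreous K+ concentration, y: ambient temperature (deg C).
    Coefficients are taken from the abstract's K+ equation; treat them as
    illustrative only.
    """
    a = -0.00247
    b = 0.3092 + 0.01083 * y
    c = 3.4130 + 0.3376 * y - f
    roots = np.roots([a, b, c])
    real = roots[np.isreal(roots)].real
    candidates = real[(real >= 0) & (real <= 100)]   # physically plausible PMIs
    return candidates.min() if candidates.size else None

# Forward-check: concentration predicted at x = 30 h, y = 20 degC ...
x, y = 30.0, 20.0
f = 3.4130 + 0.3092 * x + 0.3376 * y + 0.01083 * x * y - 0.00247 * x**2
# ... should invert back to ~30 h.
print(f"recovered PMI: {pmi_from_potassium(f, y):.1f} h")
```

The quadratic has two roots; restricting to the plausible PMI range and taking the smaller root resolves the ambiguity in this round-trip check.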
Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H
2016-08-01
The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index, or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may affect the pollution-health estimate, such as the estimation of pollution, and spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
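The averaging step itself is simple once each candidate model has an effect estimate, a standard error, and a posterior model probability; a minimal sketch with invented numbers (a normal-mixture combination, which is one standard way to summarize a BMA posterior):

```python
import numpy as np

# Pollution-health effect estimates (e.g. log relative risk per 10 ug/m^3)
# from several model specifications, with standard errors and posterior model
# probabilities; all numbers are illustrative.
est = np.array([0.021, 0.015, 0.030, 0.018])
se = np.array([0.008, 0.007, 0.010, 0.009])
weight = np.array([0.40, 0.25, 0.20, 0.15])

# BMA point estimate: weighted mean across models
bma_mean = np.sum(weight * est)
# BMA variance: within-model variance plus between-model spread
bma_var = np.sum(weight * (se**2 + (est - bma_mean) ** 2))
print(f"BMA estimate {bma_mean:.4f} (sd {np.sqrt(bma_var):.4f})")
```

The between-model term is what makes the BMA interval wider than any single model's interval, so the combined estimate honestly reflects uncertainty about the model choice itself.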
Estimation of morbidity effects
International Nuclear Information System (INIS)
Ostro, B.
1994-01-01
Many researchers have related exposure to ambient air pollution to respiratory morbidity. To be included in this review and analysis, however, several criteria had to be met. First, a careful study design and a methodology that generated quantitative dose-response estimates were required. Therefore, there was a focus on time-series regression analyses relating daily incidence of morbidity to air pollution in a single city or metropolitan area. Studies that used weekly or monthly average concentrations or that involved particulate measurements in poorly characterized metropolitan areas (e.g., one monitor representing a large region) were not included in this review. Second, studies that minimized confounding and omitted variables were included. For example, research that compared two cities or regions characterized as 'high' and 'low' pollution areas was not included because of potential confounding by other factors in the respective areas. Third, concern for the effects of seasonality and weather had to be demonstrated. This could be accomplished by stratifying and analyzing the data by season, by examining the independent effects of temperature and humidity, and/or by correcting the model for possible autocorrelation. A fourth criterion for study inclusion was that the study had to include a reasonably complete analysis of the data. Such an analysis would include a careful exploration of the primary hypothesis as well as examination of the robustness and sensitivity of the results to alternative functional forms, specifications, and influential data points. When studies reported the results of these alternative analyses, the quantitative estimates judged most representative of the overall findings were those summarized in this paper. Finally, for inclusion in the review of particulate matter, the study had to provide a measure of particle concentration that could be converted into PM10, particulate matter below 10
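The study design the review favors, daily counts regressed on pollution with weather control, is typically a Poisson regression; a self-contained sketch on simulated data, fit by iteratively reweighted least squares in plain numpy (a season term, smooth time trends, and autocorrelation corrections that real studies add are omitted):

```python
import numpy as np

rng = np.random.default_rng(11)
n_days = 1000

# Simulated daily series: admissions depend on PM10 and temperature
pm10 = rng.gamma(shape=4.0, scale=10.0, size=n_days)
temp = 15 + 8 * np.sin(2 * np.pi * np.arange(n_days) / 365) + rng.normal(0, 2, n_days)
true_beta_pm = 0.004                      # log-rate increase per ug/m^3
lam = np.exp(1.0 + true_beta_pm * pm10 + 0.01 * temp)
counts = rng.poisson(lam)

# Poisson regression by iteratively reweighted least squares (Newton-Raphson)
X = np.column_stack([np.ones(n_days), pm10, temp])
beta = np.zeros(X.shape[1])
for _ in range(50):
    mu = np.exp(X @ beta)
    W = mu                                 # Poisson working weights
    z = X @ beta + (counts - mu) / mu      # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print(f"estimated PM10 coefficient: {beta[1]:.4f} (true {true_beta_pm})")
```

Exponentiating the PM10 coefficient times a concentration increment gives the relative-risk style dose-response estimates that the review extracts from each study.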
Methods for estimating the uncertainty of climate change effects in a crop model
Energy Technology Data Exchange (ETDEWEB)
Liebetrau, A.M.; Scott, M.J.
1990-11-01
One of the most difficult problems facing policy makers in deciding what to do about the possible effects of future climate change on the earth's natural and human resources (the so-called Greenhouse Effect) is how to proceed in the face of serious uncertainty. Nowhere has this been made more obvious than in the uncertainty associated with the effects of climate on agriculture. The U.S. Department of Energy has funded a program to develop or modify the analytical tools required to estimate the effects of changed climate on a number of natural resource sectors. In the agricultural sector, the primary analytical tool used in this activity has been the Erosion-Production Impact Calculator (EPIC). EPIC is a mathematical cropping systems model developed originally to examine the relationship between soil erosion and soil productivity, but which has now been modified to account for the effects of climate and CO2 fertilization. The EPIC computer code is a highly detailed tool that incorporates extensive information on weather, plant physiology, and farming practices. In this paper, we describe a process of uncertainty analysis that will evaluate simultaneously the effects of temporal and geographic variability of weather, uncertainty of farm prices, and availability of different farming practices to estimate the effect of climate change on farm income. The second section of this paper discusses the methods and data that are used in making this evaluation; the third section provides information on the sampling schemes used in the exercise; and the fourth section discusses the empirical application of the techniques, together with conclusions. 24 refs., 2 figs., 1 tab.
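The propagation idea can be shown schematically: sample the uncertain inputs (weather-driven yield response, prices), push each draw through a crop-income calculation, and summarize the resulting distribution. The yield function below is a crude invented stand-in for EPIC, and all coefficients are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n_sims = 20_000

# Schematic stand-in for EPIC: yield responds to a growing-season temperature
# anomaly and a precipitation multiplier; coefficients are illustrative only.
d_temp = rng.normal(2.0, 1.0, n_sims)            # warming-scenario spread, degC
precip = rng.lognormal(0.0, 0.15, n_sims)        # precipitation multiplier
base_yield = 3.0                                 # t/ha
yield_t = base_yield * (1 - 0.05 * d_temp) * precip ** 0.5

price = rng.lognormal(np.log(150.0), 0.2, n_sims)  # $/t, uncertain
cost = 250.0                                       # $/ha, fixed for simplicity
income = yield_t * price - cost

print(f"mean income {income.mean():.0f} $/ha, "
      f"5th-95th pct [{np.percentile(income, 5):.0f}, {np.percentile(income, 95):.0f}]")
```

The paper's sampling schemes serve the same purpose as the random draws here: covering the joint variability of weather, prices, and practices with a manageable number of EPIC runs.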
Effect of optimal estimation of flux difference information on the lattice traffic flow model
Yang, Shu-hong; Li, Chun-gui; Tang, Xin-lai; Tian, Chuan
2016-12-01
In this paper, a new lattice model is proposed by considering the optimal estimation of flux difference information. The effect of this new consideration upon the stability of traffic flow is examined through linear stability analysis. Furthermore, a modified Korteweg-de Vries (mKdV) equation near the critical point is constructed and solved by means of nonlinear analysis method, and thus the propagation behavior of traffic jam can be described by the kink-antikink soliton solution of the mKdV equation. Numerical simulation is carried out under periodical condition with results in good agreement with theoretical analysis, therefore, it is verified that the new consideration can enhance the stability of traffic systems and suppress the emergence of traffic jams effectively.
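The linear stability analysis behind such results can be illustrated on the baseline Nagatani-type lattice hydrodynamic model (without the flux-difference extension proposed here): uniform flow at density ρ₀ is linearly stable when the driver sensitivity a exceeds a_c = 2ρ₀²|V′(ρ₀)|, with V the optimal velocity function. Parameter values are illustrative:

```python
import numpy as np

def V(rho, vmax=2.0, rho_c=0.2):
    """Optimal velocity function used in Nagatani-type lattice models."""
    return (vmax / 2.0) * (np.tanh(1.0 / rho - 1.0 / rho_c) + np.tanh(1.0 / rho_c))

def critical_sensitivity(rho0, h=1e-6):
    """a_c = 2 * rho0^2 * |V'(rho0)|: below this, uniform flow is unstable."""
    dV = (V(rho0 + h) - V(rho0 - h)) / (2 * h)   # central-difference derivative
    return 2.0 * rho0**2 * abs(dV)

rho0 = 0.2
a_c = critical_sensitivity(rho0)
print(f"critical sensitivity at rho0 = {rho0}: a_c = {a_c:.2f}")
for a in (1.5, 2.5):
    print(f"a = {a}: {'stable' if a > a_c else 'unstable -> jams can form'}")
```

Extensions like the flux-difference term studied in this paper enlarge the stable region, i.e. they lower the effective critical sensitivity so that jams form less easily.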
An Improved Heat Budget Estimation Including Bottom Effects for General Ocean Circulation Models
Carder, Kendall; Warrior, Hari; Otis, Daniel; Chen, R. F.
2001-01-01
This paper studies the effects of the underwater light field on heat-budget calculations of general ocean circulation models in shallow waters. The presence of a bottom significantly alters the estimated heat budget in shallow waters, which affects the corresponding thermal stratification and hence modifies the circulation. Based on data collected during the COBOP field experiment near the Bahamas, we have used a one-dimensional turbulence closure model to show the influence of bottom reflection and absorption on the sea surface temperature field. The water depth has an almost one-to-one correlation with the temperature rise. Varying the bottom albedo, for example by replacing a sea-grass bed with a coral sand bottom, also has an appreciable effect on the heat budget of the shallow regions. We believe that these differences in the heat budget for shallow areas will influence local circulation processes, and especially the evaporative and long-wave heat losses in these areas. The ultimate effects on the humidity and cloudiness of the region are expected to be significant as well.
A model to estimate the cost-effectiveness of indoor environment improvements in office work
Energy Technology Data Exchange (ETDEWEB)
Seppanen, Olli; Fisk, William J.
2004-06-01
Deteriorated indoor climate is commonly related to increases in sick building syndrome (SBS) symptoms, respiratory illness, sick leave, and reduced comfort, and to losses in productivity. The cost to society of a deteriorated indoor climate is high; some calculations show that it exceeds the heating energy costs of the same buildings. Building-level calculations have also shown that many measures taken to improve indoor air quality and climate are cost-effective when the potential monetary savings from an improved indoor climate are included as benefits. As an initial step towards systematizing these building-level calculations, we have developed a conceptual model to estimate the cost-effectiveness of various measures. The model shows the links between improvements in the indoor environment and the following potential financial benefits: reduced medical care costs, reduced sick leave, better work performance, lower employee turnover, and lower building maintenance costs due to fewer complaints about indoor air quality and climate. The pathways to these potential benefits from changes in building technology and practices run via several human responses to the indoor environment, such as infectious diseases, allergies and asthma, SBS symptoms, perceived air quality, and the thermal environment. The model also includes the annual cost of investments, operating costs, and the cost savings of an improved indoor climate. The conceptual model illustrates how these various factors are linked to each other. SBS symptoms are probably the most commonly assessed health responses in IEQ studies and have been linked to several characteristics of buildings and IEQ. While the available evidence indicates that SBS symptoms can affect these outcomes and suggests that such a linkage exists, at present we cannot quantify the relationships sufficiently for cost-benefit modeling. New research and analyses of existing data to quantify the financial
The Biasing Effects of Unmodeled ARMA Time Series Processes on Latent Growth Curve Model Estimates
Sivo, Stephen; Fan, Xitao; Witta, Lea
2005-01-01
The purpose of this study was to evaluate the robustness of estimated growth curve models when there is stationary autocorrelation among manifest-variable errors. The results suggest that when, in practice, growth curve models are fitted to longitudinal data, rival hypotheses to consider would include growth models that also specify…
Levin, Bruce; Leu, Cheng-Shiun
2013-01-01
We demonstrate the algebraic equivalence of two unbiased variance estimators for the sample grand mean in a random sample of subjects from an infinite population where subjects provide repeated observations following a homoscedastic random effects model.
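For a balanced design, the two estimators in question can be sketched numerically: the sample variance of the subject means divided by the number of subjects, and the one-way ANOVA between-subject mean square divided by the total number of observations. The sketch below uses synthetic data; the counts and variance components are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4                                  # subjects, repeats per subject
subj = rng.normal(0.0, 1.0, size=n)          # random subject effects
y = subj[:, None] + rng.normal(0.0, 0.5, size=(n, m))  # homoscedastic errors

subj_means = y.mean(axis=1)

# Estimator 1: sample variance of the subject means, divided by n
v1 = subj_means.var(ddof=1) / n

# Estimator 2: ANOVA route -- between-subject mean square over n*m
msb = m * np.sum((subj_means - y.mean()) ** 2) / (n - 1)
v2 = msb / (n * m)

# Algebraically identical for balanced data: MSB/(n*m) = S^2_means/n
print(v1, v2)
```

The identity holds because, with balanced data, the grand mean equals the mean of the subject means, so MSB is just m times the sample variance of those means.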
Bouwman, Aniek C; Hayes, Ben J; Calus, Mario P L
2017-10-30
Genomic evaluation is used to predict direct genomic values (DGV) for selection candidates in breeding programs, but also to estimate allele substitution effects (ASE) of single nucleotide polymorphisms (SNPs). Scaling of allele counts influences the estimated ASE, because scaling of allele counts results in less shrinkage towards the mean for low minor allele frequency (MAF) variants. Scaling may become relevant for estimating ASE as more low MAF variants will be used in genomic evaluations. We show the impact of scaling on estimates of ASE using real data and a theoretical framework, and in terms of power, model fit and predictive performance. In a dairy cattle dataset with 630 K SNP genotypes, the correlation between DGV for stature from a random regression model using centered allele counts (RRc) and centered and scaled allele counts (RRcs) was 0.9988, whereas the overall correlation between ASE using RRc and RRcs was 0.27. The main difference in ASE between both methods was found for SNPs with a MAF lower than 0.01. Both the ratio (ASE from RRcs/ASE from RRc) and the regression coefficient (regression of ASE from RRcs on ASE from RRc) were much higher than 1 for low MAF SNPs. Derived equations showed that scenarios with a high heritability, a large number of individuals and a small number of variants have lower ratios between ASE from RRc and RRcs. We also investigated the optimal scaling parameter [from - 1 (RRcs) to 0 (RRc) in steps of 0.1] in the bovine stature dataset. We found that the log-likelihood was maximized with a scaling parameter of - 0.8, while the mean squared error of prediction was minimized with a scaling parameter of - 1, i.e., RRcs. Large differences in estimated ASE were observed for low MAF SNPs when allele counts were scaled or not scaled because there is less shrinkage towards the mean for scaled allele counts. We derived a theoretical framework that shows that the difference in ASE due to shrinkage is heavily influenced by the
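The two codings can be sketched as follows, using a toy ridge regression as a stand-in for the random regression SNP model; the sample size, MAFs, effects, and shrinkage parameter are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p_true = 200, np.array([0.50, 0.30, 0.10, 0.05])  # last two SNPs are "rare"
X = rng.binomial(2, p_true, size=(n, len(p_true))).astype(float)

p = X.mean(axis=0) / 2                 # observed allele frequencies
Xc = X - 2 * p                         # centered allele counts (RRc)
Xs = Xc / np.sqrt(2 * p * (1 - p))     # centered and scaled counts (RRcs)

beta_true = np.array([0.2, -0.1, 0.4, 0.3])
y = Xc @ beta_true + rng.normal(0, 1, n)

lam = 50.0                             # ridge penalty standing in for the prior

def ridge(Z, y, lam):
    # shrinkage estimator (Z'Z + lam*I)^{-1} Z'y
    return np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)

ase_c = ridge(Xc, y, lam)                              # ASE directly
ase_s = ridge(Xs, y, lam) / np.sqrt(2 * p * (1 - p))   # back-transformed ASE

ratio = ase_s / ase_c   # typically departs farthest from 1 for low-MAF SNPs
```

Because scaling inflates the columns of rare variants before the common penalty is applied, the back-transformed ASE for low-MAF SNPs is shrunk less than under the centered-only coding, which is the mechanism the abstract describes.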
The effect of coupling hydrologic and hydrodynamic models on probable maximum flood estimation
Felder, Guido; Zischg, Andreas; Weingartner, Rolf
2017-07-01
Deterministic rainfall-runoff modelling usually assumes a stationary hydrological system, as model parameters are calibrated with, and are therefore dependent on, observed data. However, runoff processes are probably not stationary in the case of a probable maximum flood (PMF), where discharge greatly exceeds observed flood peaks. Developing hydrodynamic models and using them to build coupled hydrologic-hydrodynamic models can potentially improve the plausibility of PMF estimation. This study aims to assess the potential benefits and constraints of coupled modelling, compared with standard deterministic hydrologic modelling, for PMF estimation. The two modelling approaches are applied using a set of 100 spatio-temporal probable maximum precipitation (PMP) distribution scenarios. The resulting hydrographs and peak discharges, as well as the reliability and plausibility of the estimates, are evaluated. The discussion of the results shows that coupling hydrologic and hydrodynamic models substantially improves the physical plausibility of PMF modelling, although both approaches lead to PMF estimates for the catchment outlet that fall within a similar range. A coupled model is particularly recommended in cases where considerable flood-prone areas lie within a catchment.
Letter to the Editor: Application of the Air Q Model to Estimating the Health Effects of Exposure to Air Pollutants
Directory of Open Access Journals (Sweden)
Gholamreza Goudarzi
2016-02-01
Full Text Available Epidemiologic studies worldwide have measured increases in mortality and morbidity associated with air pollution (1-3). Quantifying the effects of air pollution on human health in urban areas has become an increasingly critical component of policy discussion (4-6). The Air Q model has proved to be a valid and reliable tool for predicting the health effects related to criteria pollutants (particulate matter (PM), ozone (O3), nitrogen dioxide (NO2), sulfur dioxide (SO2), and carbon monoxide (CO)), for determining the potential short-term effects of air pollution, and for examining various scenarios in which the emission rates of pollutants are varied (7,8). The Air Q software is provided by the WHO European Centre for Environment and Health (ECEH) (9). The Air Q model is based on cohort studies and is used to estimate both attributable average reductions in life span and the numbers of deaths and illnesses associated with exposure to air pollution (10,11). Applications
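The core AirQ-style calculation is an attributable-proportion formula over exposure categories. A minimal sketch follows; every number below (relative risk, concentrations, rates, population) is an illustrative assumption, not a value from the letter.

```python
import numpy as np

rr_per_10 = 1.008            # assumed relative risk per 10 ug/m3 of PM10
conc = np.array([15.0, 35.0, 55.0, 75.0])   # category midpoints, ug/m3
prop = np.array([0.40, 0.35, 0.20, 0.05])   # fraction of person-time per category
baseline_conc = 10.0         # counterfactual "clean air" concentration
rate, pop = 5.3e-3, 1_000_000                # baseline mortality rate, population

# category relative risks from the concentration-response slope
rr = rr_per_10 ** ((conc - baseline_conc) / 10.0)

# attributable proportion: AP = sum((RR-1)*p) / sum(RR*p)
ap = np.sum((rr - 1) * prop) / np.sum(rr * prop)
excess_cases = ap * rate * pop
```

The attributable proportion is then multiplied by the baseline rate and the exposed population to give the annual excess case count.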
Estimating the Trade and Welfare Effects of Brexit: A Panel Data Structural Gravity Model
Oberhofer, Harald; Pfaffermayr, Michael
2018-01-01
This paper proposes a new panel data structural gravity approach for estimating the trade and welfare effects of Brexit. The suggested Constrained Poisson Pseudo Maximum Likelihood Estimator exhibits some useful properties for trade policy analysis and allows one to obtain estimates and confidence intervals that are consistent with structural trade theory. Assuming different counterfactual post-Brexit scenarios, our main findings suggest that the UK's (EU's) exports of goods to the EU (UK) are likely...
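The constrained estimator itself is beyond a short sketch, but the Poisson pseudo-maximum-likelihood core of any PPML gravity estimation can be illustrated with a plain Newton/IRLS fit. The data below are synthetic and noise-free; the regressors and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
ln_dist = rng.uniform(5.0, 9.0, n)              # log bilateral distance
rta = rng.integers(0, 2, n).astype(float)       # trade-agreement dummy
X = np.column_stack([np.ones(n), ln_dist, rta])
beta_true = np.array([10.0, -1.0, 0.5])
y = np.exp(X @ beta_true)                       # noise-free Poisson means

# start from the log-linear OLS fit, then Newton steps on the Poisson score
beta = np.linalg.lstsq(X, np.log(y), rcond=None)[0]
for _ in range(50):
    mu = np.exp(X @ beta)
    # Newton step: (X' diag(mu) X)^{-1} X'(y - mu)
    step = np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))
    beta += step
    if np.max(np.abs(step)) < 1e-12:
        break
```

Unlike log-linear OLS, PPML keeps zero trade flows in the sample and is consistent under heteroskedasticity, which is why it has become the workhorse for structural gravity.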
Estimation of health effects of prenatal methylmercury exposure using structural equation models
DEFF Research Database (Denmark)
Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe
2002-01-01
BACKGROUND: Observational studies in epidemiology always involve concerns regarding validity, especially measurement error, confounding, missing data, and other problems that may affect the study outcomes. Widely used standard statistical techniques, such as multiple regression analysis, may......-mercury toxicity. RESULTS: Structural equation models were developed for assessment of the association between biomarkers of prenatal mercury exposure and neuropsychological test scores in 7-year-old children. Eleven neurobehavioral outcomes were grouped into motor function and verbally mediated function...... and verbal skills. Adjustment for contaminant exposure to polychlorinated biphenyls (PCBs) changed the estimates only marginally, but the mercury effect could be reduced to non-significance by assuming a large measurement error for the PCB biomarker. CONCLUSIONS: The structural equation analysis allows...
Noise Model Analysis and Estimation of Effect due to Wind Driven Ambient Noise in Shallow Water
Directory of Open Access Journals (Sweden)
S. Sakthivel Murugan
2011-01-01
Full Text Available Signal transmission in the ocean, using water as a channel, is a challenging process owing to attenuation, spreading, reverberation, absorption, and so forth, in addition to the contribution of ambient noise to the acoustic signal. Ambient noise in the sea is of two types: man-made (shipping, aircraft over the sea, boat motors, etc.) and natural (rain, wind, seismic activity, etc.), apart from marine mammals and phytoplankton. Since wind exists in all places and at all times, its effect plays a major role. Hence, in this paper, we concentrate on estimating the effects of wind. Seven sets of data with wind speeds ranging from 2.11 m/s to 6.57 m/s were used. The analysis was performed for frequencies ranging from 100 Hz to 8 kHz. A linear relationship between the noise spectrum and wind speed was found to hold over the entire frequency range. Further, we developed a noise model for analyzing the noise level. The empirical data were found to fit the results obtained with the aid of the noise model.
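The reported linear noise-level/wind-speed relationship amounts to fitting NL = a + b·v by least squares at each frequency band. A sketch with invented coefficients follows; the seven wind speeds match the range quoted above, but the dB values are assumptions, not measurements from the paper.

```python
import numpy as np

wind = np.array([2.11, 3.0, 3.8, 4.5, 5.2, 5.9, 6.57])  # m/s, seven data sets
a_true, b_true = 45.0, 3.2       # assumed intercept/slope for one frequency band
nl = a_true + b_true * wind      # noise spectrum level, dB (noise-free here)

# least-squares line through the (wind speed, noise level) points
slope, intercept = np.polyfit(wind, nl, 1)
```

In practice one repeats the fit per frequency bin, and the fitted (a, b) pairs constitute the empirical wind-noise model against which new spectra are checked.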
International Nuclear Information System (INIS)
Frankle, S.C.; Fitzgerald, D.H.; Hutson, R.L.; Macek, R.J.; Wilkinson, C.A.
1993-01-01
Neutron dose equivalent rates have been measured for 800-MeV proton beam spills at the Los Alamos Meson Physics Facility. Neutron detectors were used to measure the neutron dose levels at a number of locations for each beam-spill test, and neutron energy spectra were measured for several beam-spill tests. Estimates of the expected levels at the various detector locations were made using a simple analytical model developed for 800-MeV proton beam spills. A comparison of measurements and model estimates indicates that the model is reasonably accurate in estimating the neutron dose equivalent rate for simple shielding geometries; it fails for more complicated shielding geometries, where indirect contributions to the dose equivalent rate can dominate.
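A simple analytical model of this kind typically combines inverse-square spreading with exponential attenuation through the shield. The sketch below uses that generic form; the source term and removal length are illustrative assumptions, not LAMPF values.

```python
import math

def dose_rate(h0, r, d, lam):
    """Point-source shielding model: inverse-square spreading times
    exponential attenuation. h0 is the unshielded dose rate at 1 m,
    r the distance (m), d the shield thickness, lam the removal length
    (d and lam in the same units)."""
    return h0 * math.exp(-d / lam) / r**2

near = dose_rate(1.0, 2.0, 0.0, 0.5)       # bare source at 2 m
shielded = dose_rate(1.0, 2.0, 0.5, 0.5)   # one removal length of shield
```

Adding one removal length of shield cuts the dose rate by a factor of e; the model breaks down when scattered (indirect) contributions around the shield dominate, as the abstract notes.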
Kelava, Augustin; Nagengast, Benjamin
2012-09-01
Structural equation models with interaction and quadratic effects have become a standard tool for testing nonlinear hypotheses in the social sciences. Most of the current approaches assume normally distributed latent predictor variables. In this article, we present a Bayesian model for the estimation of latent nonlinear effects when the latent predictor variables are nonnormally distributed. The nonnormal predictor distribution is approximated by a finite mixture distribution. We conduct a simulation study that demonstrates the advantages of the proposed Bayesian model over contemporary approaches (Latent Moderated Structural Equations [LMS], Quasi-Maximum-Likelihood [QML], and the extended unconstrained approach) when the latent predictor variables follow a nonnormal distribution. The conventional approaches show biased estimates of the nonlinear effects; the proposed Bayesian model provides unbiased estimates. We present an empirical example from work and stress research and provide syntax for substantive researchers. Advantages and limitations of the new model are discussed.
Estimating the effect of a variable in a high-dimensional regression model
DEFF Research Database (Denmark)
Jensen, Peter Sandholt; Wurtz, Allan
assume that the effect is identified in a high-dimensional linear model specified by unconditional moment restrictions. We consider properties of the following methods, which rely on low-dimensional models to infer the effect: extreme bounds analysis, the minimum t-statistic over models, Sala...
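Both extreme bounds analysis and the minimum t-statistic over models enumerate low-dimensional regressions that vary the control set and summarize the focal coefficient across them. A sketch on synthetic data with four candidate controls:

```python
import itertools
import numpy as np

rng = np.random.default_rng(7)
n = 300
x = rng.normal(size=n)                       # variable of interest
Z = rng.normal(size=(n, 4))                  # four candidate controls
y = 0.5 * x + Z[:, 0] - 0.3 * Z[:, 1] + rng.normal(size=n)

def focal_coef_t(controls):
    """OLS of y on [1, x, selected controls]; return focal coef and t-stat."""
    cols = [np.ones(n), x] + [Z[:, j] for j in controls]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], beta[1] / se

coefs, tstats = [], []
for k in range(5):                            # every subset of the 4 controls
    for subset in itertools.combinations(range(4), k):
        b, t = focal_coef_t(subset)
        coefs.append(b)
        tstats.append(t)

extreme_bounds = (min(coefs), max(coefs))     # extreme-bounds interval
min_abs_t = min(abs(t) for t in tstats)       # minimum t-statistic over models
```

The effect is declared "robust" under extreme bounds analysis when the interval excludes zero, and under the minimum t-statistic criterion when the smallest |t| across all models remains significant.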
Effect of Estimated Daily Global Solar Radiation Data on the Results of Crop Growth Models.
Trnka, Miroslav; Eitzinger, Josef; Kapler, Pavel; Dubrovský, Martin; Semerádová, Daniela; Žalud, Zdeněk; Formayer, Herbert
2007-10-16
The results of previous studies have suggested that estimated daily global radiation (RG) values contain an error that could compromise the precision of subsequent crop model applications. The following study presents a detailed site and spatial analysis of the RG error propagation in the CERES and WOFOST crop growth models under Central European climate conditions. The research was conducted (i) at eight individual sites in Austria and the Czech Republic, where measured daily RG values were available as a reference and seven methods for RG estimation were tested, and (ii) for the agricultural areas of the Czech Republic, using daily data from 52 weather stations and five RG estimation methods. In the latter case, the RG values estimated from the hours of sunshine using the Ångström-Prescott formula were used as the standard method because of the lack of measured RG data. At the site level we found that even the use of methods based on hours of sunshine, which showed the lowest bias in RG estimates, led to a significant distortion of the key crop model outputs. When the Ångström-Prescott method was used to estimate RG, for example, deviations greater than ±10 per cent in winter wheat and spring barley yields were noted in 5 to 6 per cent of cases. The precision of the yield estimates and other crop model outputs was lower when RG estimates based on the diurnal temperature range and cloud cover were used (mean bias error 2.0 to 4.1 per cent). The methods for estimating RG from the diurnal temperature range produced a wheat yield bias of more than 25 per cent in 12 to 16 per cent of the seasons. Such uncertainty in the crop model outputs makes the reliability of any seasonal yield forecasts or climate change impact assessments questionable if they are based on this type of data. The spatial assessment of the RG data uncertainty propagation over the winter wheat yields also revealed significant differences within the study area. We found that RG estimates based on
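The Ångström-Prescott estimate referred to above has the form RG = Ra·(a + b·n/N). A sketch using the common FAO-56 default coefficients follows; the input values are illustrative, and in practice a and b are calibrated locally.

```python
def angstrom_prescott(ra, n_sun, n_max, a=0.25, b=0.50):
    """Estimate daily global radiation RG from sunshine duration.
    ra: extraterrestrial radiation (MJ m-2 d-1), n_sun: bright sunshine
    hours, n_max: maximum possible sunshine hours; a, b are the FAO-56
    default Angstrom coefficients (locally calibrated in practice)."""
    return ra * (a + b * n_sun / n_max)

rg_clear = angstrom_prescott(30.0, 12.0, 12.0)    # fully sunny day
rg_overcast = angstrom_prescott(30.0, 0.0, 12.0)  # no bright sunshine
```

With the defaults, a fully sunny day transmits a + b = 75% of extraterrestrial radiation and a fully overcast day a = 25%, which is why errors in the sunshine record propagate almost linearly into RG and hence into simulated yields.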
Directory of Open Access Journals (Sweden)
Yun Shi
2014-01-01
Full Text Available Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS and VLBI baselines and in LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements, and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and on the estimate of landslide mass volume from the constructed DEM.
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging to assess groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (the Akaike Information Criterion, corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by the available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors to estimate the negative log-likelihood function common to all the model selection criteria. The problem can be resolved by using the covariance matrix, Cek, of total errors (model errors plus measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of the model selection criteria and model averaging weights. While this method was limited in this study to serial data using time-series techniques, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study in which alternative surface complexation models were developed to simulate column experiments on uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem of the best model receiving 100% of the model averaging weight, and the resulting model averaging weights were supported by the calibration results and by physical understanding of the alternative models. Using Cek
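The model-averaging weights behind all four criteria share the same exponential form, which makes the winner-take-all behaviour described above easy to reproduce. A sketch with invented IC values:

```python
import numpy as np

def ic_weights(ic):
    """Model-averaging weights from information-criterion values.
    AIC, AICc, BIC, and KIC all use the same exp(-delta/2) form,
    where delta is the difference from the best (smallest) IC."""
    delta = np.asarray(ic, dtype=float) - np.min(ic)
    w = np.exp(-delta / 2)
    return w / w.sum()

# IC differences of 30+ points give the best model essentially all the weight
w = ic_weights([100.0, 130.0, 142.0])
```

Because the weights decay as exp(-delta/2), even modest IC gaps produce near-100% weight for the best model; inflating the likelihood's error covariance (as with the Cek approach) shrinks those gaps and spreads the weights.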
Meier, Petra S; Holmes, John; Angus, Colin; Ally, Abdallah K; Meng, Yang; Brennan, Alan
2016-02-01
While evidence that alcohol pricing policies reduce alcohol-related health harm is robust, and alcohol taxation increases are a WHO "best buy" intervention, there is a lack of research comparing the scale and distribution across society of health impacts arising from alternative tax and price policy options. The aim of this study is to test whether four common alcohol taxation and pricing strategies differ in their impact on health inequalities. An econometric epidemiological model was built with England 2014/2015 as the setting. Four pricing strategies implemented on top of the current tax were equalised to give the same 4.3% population-wide reduction in total alcohol-related mortality: current tax increase, a 13.4% all-product duty increase under the current UK system; a value-based tax, a 4.0% ad valorem tax based on product price; a strength-based tax, a volumetric tax of £0.22 per UK alcohol unit (= 8 g of ethanol); and minimum unit pricing, a minimum price threshold of £0.50 per unit, below which alcohol cannot be sold. Model inputs were calculated by combining data from representative household surveys on alcohol purchasing and consumption, administrative and healthcare data on 43 alcohol-attributable diseases, and published price elasticities and relative risk functions. Outcomes were annual per capita consumption, consumer spending, and alcohol-related deaths. Uncertainty was assessed via partial probabilistic sensitivity analysis (PSA) and scenario analysis. The pricing strategies differ as to how effects are distributed across the population, and, from a public health perspective, heavy drinkers in routine/manual occupations are a key group as they are at greatest risk of health harm from their drinking. Strength-based taxation and minimum unit pricing would have greater effects on mortality among drinkers in routine/manual occupations (particularly for heavy drinkers, where the estimated policy effects on mortality rates are as follows: current tax
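The first-order consumption arithmetic underlying such models is a share-weighted elasticity calculation. A sketch follows; the elasticities and consumption shares are illustrative assumptions, and only the 13.4% duty rise is taken from the scenario above.

```python
import numpy as np

elasticity = np.array([-0.98, -0.85, -0.21])  # beer, wine, spirits (assumed)
share = np.array([0.40, 0.35, 0.25])          # consumption shares (assumed)
price_rise = 13.4                              # % duty increase scenario

change = elasticity * price_rise               # % consumption change per category
overall = float(share @ change)                # population-level % change
```

The full model then maps consumption changes into mortality via relative-risk functions for each of the 43 alcohol-attributable diseases, stratified by drinker group and occupation.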
Directory of Open Access Journals (Sweden)
Howey Richard
2012-06-01
Full Text Available Background: Here we present two new computer tools, PREMIM and EMIM, for the estimation of parental and child genetic effects, based on genotype data from a variety of different child-parent configurations. PREMIM allows the extraction of child-parent genotype data from standard-format pedigree data files, while EMIM uses the extracted genotype data to perform the subsequent statistical analysis. The use of genotype data from the parents as well as from the child in question allows the estimation of complex genetic effects such as maternal genotype effects, maternal-foetal interactions and parent-of-origin (imprinting) effects. These effects are estimated by EMIM, incorporating chosen assumptions such as Hardy-Weinberg equilibrium or exchangeability of parental matings as required. Results: In application to simulated data, we show that the inference provided by EMIM is essentially equivalent to that provided by alternative (competing) software packages such as MENDEL and LEM. However, PREMIM and EMIM (used in combination) considerably outperform MENDEL and LEM in terms of speed and ease of execution. Conclusions: Together, EMIM and PREMIM provide easy-to-use command-line tools for the analysis of pedigree data, giving unbiased estimates of parental and child genotype relative risks.
Energy Technology Data Exchange (ETDEWEB)
Woods, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Winkler, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Christensen, D. [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2013-01-01
This study examines the effective moisture penetration depth (EMPD) model and its suitability for building simulations. The EMPD model is a compromise between the simple but inaccurate effective-capacitance approach and the complex yet accurate finite-difference approach. Two formulations of the EMPD model were examined, including the model used in the EnergyPlus building simulation software. An error we uncovered in the EMPD model was fixed with the release of EnergyPlus version 7.2, and the EMPD model in earlier versions of EnergyPlus should not be used.
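The model's namesake quantity is the depth to which a periodic surface humidity cycle penetrates a material. A sketch of the standard formula follows; the diffusivity value is an illustrative assumption, not a value from the report.

```python
import math

def empd(d_w, t_p):
    """Effective moisture penetration depth for a sinusoidal humidity
    cycle of period t_p (s), given moisture diffusivity d_w (m^2/s):
    depth = sqrt(d_w * t_p / pi)."""
    return math.sqrt(d_w * t_p / math.pi)

# gypsum-board-like diffusivity (assumed) and a daily humidity cycle
depth = empd(1.0e-9, 24 * 3600.0)   # metres
```

Only this thin surface layer exchanges moisture with the room air over the cycle, which is why the EMPD model can replace a full finite-difference solution through the wall with a single lumped moisture buffer.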
Directory of Open Access Journals (Sweden)
Rosa Ana Salas
2013-11-01
Full Text Available We propose a modeling procedure specifically designed for a ferrite inductor excited by a time-domain waveform. We estimate the loss resistance in the core (a parameter of the electrical model of the inductor) by means of a two-dimensional Finite Element Method, which leads to significant computational advantages over a 3D model. The methodology is validated for an RM (rectangular modulus) ferrite core working in the linear and saturation regions. Excellent agreement is found between the experimental data and the computational results.
de Vries, R.; Van Bergen, J.E.A.M.; de Jong-van den Berg, Lolkje; Postma, Maarten
2006-01-01
To estimate the cost-effectiveness of a systematic one-off Chlamydia trachomatis (CT) screening program including partner treatment for Dutch young adults. Data on infection prevalence, participation rates, and sexual behavior were obtained from a large pilot study conducted in The Netherlands.
Correcting for Test Score Measurement Error in ANCOVA Models for Estimating Treatment Effects
Lockwood, J. R.; McCaffrey, Daniel F.
2014-01-01
A common strategy for estimating treatment effects in observational studies using individual student-level data is analysis of covariance (ANCOVA) or hierarchical variants of it, in which outcomes (often standardized test scores) are regressed on pretreatment test scores, other student characteristics, and treatment group indicators. Measurement…
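The bias such corrections address follows directly from classical measurement-error algebra: regressing on an error-laden pretest attenuates its slope by the reliability ratio, and dividing the estimate by an (assumed known) reliability undoes it. A numerical sketch with invented population values:

```python
# population covariance algebra for classical measurement error
beta_true = 0.8     # slope on the true (error-free) pretest score
var_true = 1.0      # variance of the true score
var_err = 0.25      # measurement-error variance in the observed score

# reliability: share of observed-score variance that is true-score variance
reliability = var_true / (var_true + var_err)      # = 0.8

beta_observed = reliability * beta_true            # attenuated OLS slope
beta_corrected = beta_observed / reliability       # disattenuated estimate
```

In the ANCOVA setting, a biased pretest coefficient also contaminates the treatment-effect estimate whenever treatment groups differ on the pretest, which is why the correction matters beyond the pretest slope itself.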
A Comparison of Uniform DIF Effect Size Estimators under the MIMIC and Rasch Models
Jin, Ying; Myers, Nicholas D.; Ahn, Soyeon; Penfield, Randall D.
2013-01-01
The Rasch model, a member of a larger group of models within item response theory, is widely used in empirical studies. Detection of uniform differential item functioning (DIF) within the Rasch model typically employs null hypothesis testing with a concomitant consideration of effect size (e.g., signed area [SA]). Parametric equivalence between…
Effects of improved modeling on best estimate BWR severe accident analysis
International Nuclear Information System (INIS)
Hyman, C.R.; Ott, L.J.
1984-01-01
Since 1981, ORNL has completed best-estimate studies analyzing several dominant BWR accident scenarios. These scenarios were identified by early Probabilistic Risk Assessment (PRA) studies, and the detailed ORNL analysis complements such studies. In performing these studies, ORNL has used the MARCH code extensively. ORNL investigators have identified several deficiencies in early versions of MARCH with regard to BWR modeling; some of these deficiencies appear to have been remedied by the most recent release of the code. It is the purpose of this paper to identify several of these deficiencies. All the information presented concerns the degraded-core thermal/hydraulic analysis associated with each of the ORNL studies, including calculations of the containment response. The period of interest is from the time of permanent core uncovery to the end of the transient. Specific objectives include determining the extent of core damage and the timing of major events (i.e., onset of the Zr/H2O reaction, initial clad/fuel melting, loss of control blade structure, etc.). As mentioned previously, the major analysis tool used thus far was derived from an early version of MARCH. BWRs have unique features which must be modeled for best-estimate severe accident analysis. ORNL has developed and incorporated into its version of MARCH several improved models. These include (1) channel boxes and control blades, (2) SRV actuations, (3) vessel water level, (4) multi-node analysis of in-vessel water inventory, (5) a comprehensive hydrogen and water properties package, (6) a first-order correction to the ideal gas law, and (7) separation of fuel and cladding. Ongoing and future modeling efforts are required, including (1) detailed modeling of the pressure suppression pool, (2) incorporation of B4C/steam reaction models, (3) a phenomenological model of corium mass transport, and (4) advanced corium/concrete interaction modeling. 10 references, 17 figures, 1 table
Biemans, Floor; de Jong, Mart C M; Bijma, Piter
2017-06-30
Infectious diseases in farm animals affect animal health, decrease animal welfare and can affect human health. Selection and breeding of host individuals with desirable traits regarding infectious diseases can help to fight disease transmission, which is affected by two types of (genetic) traits: host susceptibility and host infectivity. Quantitative genetic studies on infectious diseases generally connect an individual's disease status to its own genotype, and therefore capture genetic effects on susceptibility only. However, they usually ignore variation in exposure to infectious herd mates, which may limit the accuracy of estimates of genetic effects on susceptibility. Moreover, genetic effects on infectivity will exist as well. Thus, to design optimal breeding strategies, it is essential that genetic effects on infectivity are quantified. Given the potential importance of genetic effects on infectivity, we set out to develop a model to estimate the effect of single nucleotide polymorphisms (SNPs) on both host susceptibility and host infectivity. To evaluate the quality of the resulting SNP effect estimates, we simulated an endemic disease in 10 groups of 100 individuals, and recorded time-series data on individual disease status. We quantified bias and precision of the estimates for different sizes of SNP effects, and identified the optimum recording interval when the number of records is limited. We present a generalized linear mixed model to estimate the effect of SNPs on both host susceptibility and host infectivity. SNP effects were on average slightly underestimated, i.e. estimates were conservative. Estimates were less precise for infectivity than for susceptibility. Given our sample size, the power to estimate SNP effects for susceptibility was 100% for differences between genotypes of a factor 1.56 or more, and was higher than 60% for infectivity for differences between genotypes of a factor 4 or more. When disease status was recorded 11 times on each
Petrie, Joshua G; Eisenberg, Marisa C; Ng, Sophia; Malosh, Ryan E; Lee, Kyu Han; Ohmit, Suzanne E; Monto, Arnold S
2017-12-15
Household cohort studies are an important design for the study of respiratory virus transmission. Inferences from these studies can be improved through the use of mechanistic models that account for household structure and risk, as an alternative to traditional regression models. We adapted a previously described individual-based transmission hazard (TH) model and assessed its utility for analyzing data from a household cohort maintained in part for study of influenza vaccine effectiveness (VE). Households with ≥4 individuals, including ≥2 children, were analyzed, and TH model estimates were compared with those from Cox proportional hazards (PH) models. For each individual, TH models estimated hazards of infection from the community and from each infected household contact. Influenza A(H3N2) infection was laboratory-confirmed in 58 (4%) subjects. VE estimates from both models were similarly low overall (Cox PH: 20%, 95% confidence interval: -57, 59; TH: 27%, 95% credible interval: -23, 58) and highest for children.
International Nuclear Information System (INIS)
Laurens, J.M.
1985-01-01
A computer code was written to model food chains in order to estimate the internal and external doses, for stochastic and non-stochastic effects, on humans (adults and infants). Results are given for 67 radionuclides, for unit concentration in water (1 Bq/L) and in the atmosphere (1 Bq/m³).
Comparison among Models to Estimate the Shielding Effectiveness Applied to Conductive Textiles
Directory of Open Access Journals (Sweden)
Alberto Lopez
2013-01-01
Full Text Available The purpose of this paper is to present a comparison between two models, together with measurements, for calculating the shielding effectiveness of electromagnetic barriers, applied to conductive textiles. Each model treats a conductive textile as either (1) a wire mesh screen or (2) a compact material. The objective is therefore to analyze the two models in order to determine which is the better approximation for electromagnetic shielding fabrics. To provide results for the comparison, the shielding effectiveness of the sample was measured by means of the standard ASTM D4935-99.
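The compact-material case in this abstract corresponds to the classical plane-wave (Schelkunoff) decomposition of shielding effectiveness into reflection and absorption losses. A minimal sketch, assuming a solid homogeneous sheet and neglecting the multiple-reflection correction (the function name and the copper example are ours, not the paper's):

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, H/m
ETA0 = 377.0           # free-space wave impedance, ohms

def shielding_effectiveness_db(freq_hz, sigma, mu_r, thickness_m):
    """Far-field SE of a solid conductive sheet (Schelkunoff):
    reflection loss R plus absorption loss A, re-reflection term ignored."""
    delta = 1.0 / math.sqrt(math.pi * freq_hz * mu_r * MU0 * sigma)      # skin depth, m
    z_shield = math.sqrt(2.0 * math.pi * freq_hz * mu_r * MU0 / sigma)   # |Zs|, ohms
    r_db = 20.0 * math.log10(ETA0 / (4.0 * z_shield))  # reflection loss
    a_db = 8.686 * thickness_m / delta                 # absorption loss
    return r_db + a_db
```

For a 0.1 mm copper sheet at 1 MHz this gives on the order of 120 dB; a wire mesh screen requires a different, aperture-based model, which is precisely the comparison the paper draws.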
Cizek, Pavel; Lei, Jinghua
The identification in nonseparable single-index models with correlated random effects is considered in panel data with a fixed number of time periods. The identification assumption is based on the correlated random effects structure. Under this assumption, the parameters of interest are identified
Cizek, P.; Lei, J.
2013-01-01
Abstract: The identification of parameters in nonseparable single-index models with correlated random effects is considered in the context of panel data with a fixed number of time periods. The identification assumption is based on the correlated random-effects structure: the distribution of
Estimating Modifying Effect of Age on Genetic and Environmental Variance Components in Twin Models.
He, Liang; Sillanpää, Mikko J; Silventoinen, Karri; Kaprio, Jaakko; Pitkäniemi, Janne
2016-04-01
Twin studies have been adopted for decades to disentangle the relative genetic and environmental contributions for a wide range of traits. However, heritability estimation based on the classical twin models does not take into account the dynamic behavior of the variance components over age. Varying variance of the genetic component over age can imply the existence of gene-environment (G×E) interactions that general genome-wide association studies (GWAS) fail to capture, which may lead to inconsistency of heritability estimates between twin design and GWAS. Existing parametric G×E interaction models for twin studies are limited by assuming a linear or quadratic form of the variance curves with respect to a moderator, which can, however, be overly restrictive in reality. Here we propose spline-based approaches to explore the variance curves of the genetic and environmental components. We choose the additive genetic, common, and unique environmental variance components (ACE) model as the starting point. We treat the component variances as variance functions with respect to age, modeled by B-splines or P-splines. We develop an empirical Bayes method to estimate the variance curves together with their confidence bands and provide an R package for public use. Our simulations demonstrate that the proposed methods accurately capture the dynamic behavior of the component variances in terms of mean square errors with a data set of >10,000 twin pairs. Using the proposed methods as an alternative and major extension to the classical twin models, our analyses with a large-scale Finnish twin data set (19,510 MZ twins and 27,312 DZ same-sex twins) discover that the variances of the A, C, and E components for body mass index (BMI) change substantially across the life span in different patterns and that the heritability of BMI drops to ∼50% after middle age. The results further indicate that the decline of heritability is due to increasing unique environmental variance, which provides more
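The ACE machinery behind this abstract rests on the differing MZ/DZ covariance expectations; the spline extension simply lets each variance component be a smooth function of age. A toy sketch with hypothetical numbers (the linear age curve stands in for a fitted spline and is not the paper's estimate):

```python
def ace_twin_moments(var_a, var_c, var_e):
    """Expected (co)variances under the ACE model: MZ pairs share all additive
    genetic variance, DZ pairs half of it; E is never shared."""
    return {
        "phenotypic_var": var_a + var_c + var_e,
        "cov_mz": var_a + var_c,
        "cov_dz": 0.5 * var_a + var_c,
    }

def heritability(var_a, var_c, var_e):
    """Narrow-sense heritability: the A share of total phenotypic variance."""
    return var_a / (var_a + var_c + var_e)

# Age-varying components: any smooth curve (e.g. a fitted B-/P-spline) can be
# plugged in; here a toy linear increase in unique environmental variance.
def var_e_of_age(age):
    return 0.3 + 0.005 * age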
Ronald E. McRoberts; Veronica C. Lessard
2001-01-01
Uncertainty in diameter growth predictions is attributed to three general sources: measurement error or sampling variability in predictor variables, parameter covariances, and residual or unexplained variation around model expectations. Using measurement error and sampling variability distributions obtained from the literature and Monte Carlo simulation methods, the...
Lusivika-Nzinga, Clovis; Selinger-Leneman, Hana; Grabar, Sophie; Costagliola, Dominique; Carrat, Fabrice
2017-12-04
The Marginal Structural Cox Model (Cox-MSM), an alternative approach to handle time-dependent confounders, was introduced for survival analysis and applied to estimate the joint causal effect of two time-dependent nonrandomized treatments on survival among HIV-positive subjects. Nevertheless, Cox-MSM performance in the case of multiple treatments has not been fully explored under different degrees of time-dependent confounding for treatments or in the case of interaction between treatments. We aimed to evaluate and compare the performance of the Cox-MSM to the standard Cox model in estimating the treatment effect in the case of multiple treatments under different scenarios of time-dependent confounding and when an interaction between treatment effects is present. We specified a Cox-MSM with two treatments including an interaction term for situations where an adverse event might be caused by two treatments taken simultaneously but not by each treatment taken alone. We simulated longitudinal data with two treatments and a time-dependent confounder affected by one or both treatments. To fit the Cox-MSM, we used the inverse probability weighting method. We illustrated the method by evaluating the specific effect of protease inhibitors, combined (or not) with other antiretroviral medications, on the anal cancer risk in HIV-infected individuals, with CD4 cell count as time-dependent confounder. Overall, the Cox-MSM performed better than the standard Cox model. Furthermore, we showed that estimates were unbiased when an interaction term was included in the model. The Cox-MSM may be used for accurately estimating causal individual and joint treatment effects from a combination therapy in the presence of time-dependent confounding, provided that an interaction term is estimated.
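The inverse probability weighting step mentioned above can be sketched as follows. Stabilized weights are the standard construction; in practice the two probability models (here passed in as plain numbers) would be fitted by, e.g., pooled logistic regression on the confounder history:

```python
def stabilized_weight(treated, p_marginal, p_conditional):
    """Stabilized inverse-probability weight for one subject-interval:
    numerator from a marginal treatment model, denominator from a model
    conditioning on the time-dependent confounder history."""
    if treated:
        return p_marginal / p_conditional
    return (1.0 - p_marginal) / (1.0 - p_conditional)

def cumulative_weight(history):
    """Product of per-interval weights up to time t, as used to reweight the
    risk sets of the weighted Cox partial likelihood."""
    w = 1.0
    for treated, p_marg, p_cond in history:
        w *= stabilized_weight(treated, p_marg, p_cond)
    return w
```

A subject who is likelier to be treated given low CD4 counts than marginally gets down-weighted, which is what removes the confounding by indication.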
Nelson, Suchitra; Albert, Jeffrey M.
2013-01-01
Mediators are intermediate variables in the causal pathway between an exposure and an outcome. Mediation analysis investigates the extent to which exposure effects occur through these variables, thus revealing causal mechanisms. In this paper, we consider the estimation of the mediation effect when the outcome is binary and multiple mediators of different types exist. We give a precise definition of the total mediation effect as well as decomposed mediation effects through individual or sets of mediators using the potential outcomes framework. We formulate a model of joint distribution (probit-normal) using continuous latent variables for any binary mediators to account for correlations among multiple mediators. A mediation formula approach is proposed to estimate the total mediation effect and decomposed mediation effects based on this parametric model. Estimation of mediation effects through individual or subsets of mediators requires an assumption involving the joint distribution of multiple counterfactuals. We conduct a simulation study that demonstrates low bias of mediation effect estimators for two-mediator models with various combinations of mediator types. The results also show that the power to detect a non-zero total mediation effect increases as the correlation coefficient between two mediators increases, while power for individual mediation effects reaches a maximum when the mediators are uncorrelated. We illustrate our approach by applying it to a retrospective cohort study of dental caries in adolescents with low and high socioeconomic status. Sensitivity analysis is performed to assess the robustness of conclusions regarding mediation effects when the assumption of no unmeasured mediator-outcome confounders is violated. PMID:23650048
Methods of statistical model estimation
Hilbe, Joseph
2013-01-01
Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method. Th
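As an illustration of one algorithm of the kind the book covers, here is a minimal iteratively reweighted least squares (IRLS) fit of a logistic regression. The book's own code is in R; this Python sketch assumes well-behaved, non-separable data:

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Logistic regression via iteratively reweighted least squares,
    equivalent to Newton-Raphson on the binomial log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))   # fitted probabilities
        w = mu * (1.0 - mu)               # IRLS weights
        z = eta + (y - mu) / w            # working response
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * z))
    return beta
```

Each pass solves a weighted least-squares problem on the working response; at convergence the estimates coincide with the maximum likelihood solution.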
Integrated human-clothing system model for estimating the effect of walking on clothing insulation
Energy Technology Data Exchange (ETDEWEB)
Ghaddar, Nesreen [American University of Beirut, Faculty of Engineering and Architecture, P.O. Box 11-236, Riad ElSolh, 1107 2020, Beirut (Lebanon); Ghali, Kamel [Beirut Arab University, Faculty of Engineering, Beirut (Lebanon); Jones, Byron [Kansas State University, College of Engineering, 148 Rathbone Hall, 66506-5202, Manhattan, KS (United States)
2003-06-01
The objective of this work is to develop a 1-D transient heat and mass transfer model of a walking clothed human to predict the dynamic clothing dry heat insulation values and vapor resistances. Developing an integrated model of the human and clothing system under periodic ventilation requires estimation of the heat and mass transfer film coefficients from the skin to the air layer subject to oscillating normal flow. Experiments were conducted in an environmental chamber under controlled conditions of 25 °C and 50% relative humidity to measure the mass transfer coefficient from the skin to the air layer separating the wet skin and the fabric. A 1-D mathematical model is developed to simulate the dynamic thermal behavior of clothing and its interaction with the human thermoregulation system under walking conditions. A modification of Gagge's two-node model is used to simulate the human physiological regulatory responses. The human model is coupled to a three-node model of the fabric that takes into consideration the adsorption of water vapor in the fibers during the periodic ventilation of the fabric by the air motion in from the ambient environment and out from the air layer adjacent to the moist skin. When physical activity and ambient conditions are specified, the integrated human-clothing model can predict the thermo-regulatory responses of the body together with the temperature and insulation values of the fabric. The developed model is used to predict the periodic ventilation flow rate in and out of the fabric, the periodic fabric regain, the fabric temperature, the air layer temperature, the heat loss or gain from the skin, and the dry and vapor resistances of the clothing. The heat loss from the skin increases with the increase of the frequency of ventilation and with the increased metabolic rate of the body. In addition, the dry resistance of the clothing fabrics, predicted by the current model, is compared with published experimental data. The current
The Displacement Effect of Labour-Market Programs: Estimates from the MONASH Model
Peter B. Dixon; Maureen T. Rimmer
2005-01-01
A key question concerning labour-market programs is the extent to which they generate jobs for their target group at the expense of others. This effect is measured by displacement percentages. We describe a version of the MONASH model designed to quantify the effects of labour-market programs. Our simulation results suggest that: (1) labour-market programs can generate significant long-run increases in employment; (2) displacement percentages depend on how a labour-market program affects the...
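The displacement percentage itself is simple bookkeeping; a sketch under our reading of the measure (this is not the MONASH implementation, which derives the quantities from a full CGE simulation):

```python
def displacement_percentage(target_jobs_gained, other_jobs_lost):
    """Share of target-group employment gains that is offset by job losses
    among workers outside the target group."""
    return 100.0 * other_jobs_lost / target_jobs_gained
```

A value of 100 would mean the program merely reallocates jobs; the paper's point is that simulated values fall well below that in the long run.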
Carpintero, Elisabet; González-Dugo, María P.; José Polo, María; Hain, Christopher; Nieto, Héctor; Gao, Feng; Andreu, Ana; Kustas, William; Anderson, Martha
2017-04-01
The integration of currently available satellite data into surface energy balance models can provide estimates of evapotranspiration (ET) with spatial and temporal resolutions determined by sensor characteristics. The use of data fusion techniques may increase the temporal resolution of these estimates using multiple satellites, providing a more frequent ET monitoring for hydrological purposes. The objective of this work is to analyze the effects of pixel resolution on the estimation of evapotranspiration using different remote sensing platforms, and to provide continuous monitoring of ET over a water-controlled ecosystem, the Holm oak savanna woodland known as dehesa. It is an agroforestry system with a complex canopy structure characterized by widely-spaced oak trees combined with crops, pasture and shrubs. The study was carried out during two years, 2013 and 2014, combining ET estimates at different spatial and temporal resolutions and applying data fusion techniques for a frequent monitoring of water use at fine spatial resolution. A global and daily ET product at 5 km resolution, developed with the ALEXI model using MODIS day-night temperature difference (Anderson et al., 2015a) was used as a starting point. The associated flux disaggregation scheme, DisALEXI (Norman et al., 2003), was later applied to constrain higher resolution ET from both MODIS and Landsat 7/8 images. The Climate Forecast System Reanalysis (CFSR) provided the meteorological data. Finally, a data fusion technique, the STARFM model (Gao et al., 2006), was applied to fuse MODIS and Landsat ET maps in order to obtain daily ET at 30 m resolution. These estimates were validated and analyzed at two different scales: at local scale over a dehesa experimental site and at watershed scale with a predominant Mediterranean oak savanna landscape, both located in Southern Spain. Local ET estimates from the modeling system were validated with measurements provided by an eddy covariance tower installed in
Multivariate Effect Size Estimation: Confidence Interval Construction via Latent Variable Modeling
Raykov, Tenko; Marcoulides, George A.
2010-01-01
A latent variable modeling method is outlined for constructing a confidence interval (CI) of a popular multivariate effect size measure. The procedure uses the conventional multivariate analysis of variance (MANOVA) setup and is applicable with large samples. The approach provides a population range of plausible values for the proportion of…
SPSS and SAS procedures for estimating indirect effects in simple mediation models.
Preacher, Kristopher J; Hayes, Andrew F
2004-11-01
Researchers often conduct mediation analysis in order to indirectly assess the effect of a proposed cause on some outcome through a proposed mediator. The utility of mediation analysis stems from its ability to go beyond the merely descriptive to a more functional understanding of the relationships among variables. A necessary component of mediation is a statistically and practically significant indirect effect. Although mediation hypotheses are frequently explored in psychological research, formal significance tests of indirect effects are rarely conducted. After a brief overview of mediation, we argue the importance of directly testing the significance of indirect effects and provide SPSS and SAS macros that facilitate estimation of the indirect effect with a normal theory approach and a bootstrap approach to obtaining confidence intervals, as well as the traditional approach advocated by Baron and Kenny (1986). We hope that this discussion and the macros will enhance the frequency of formal mediation tests in the psychology literature. Electronic copies of these macros may be downloaded from the Psychonomic Society's Web archive at www.psychonomic.org/archive/.
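The logic of the bootstrap interval offered by the macros can be sketched in Python (the macros themselves are SPSS/SAS; the percentile bootstrap shown here is the simplest of the intervals they provide):

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate a*b: a from the regression M ~ X,
    b as the coefficient of M in Y ~ X + M (ordinary least squares)."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)   # resample cases with replacement
        stats.append(indirect_effect(x[idx], m[idx], y[idx]))
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```

An interval excluding zero is the formal significance test of the indirect effect that the authors argue is too rarely conducted.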
Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J; Zucker, David M
2017-12-01
The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elements of randomization, either by design or by nature (e.g., random inheritance of genes). Instrumental variables estimation of exposure effects is well established for continuous outcomes and to some extent for binary outcomes. It is, however, largely lacking for time-to-event outcomes because of complications due to censoring and survivorship bias. In this article, we make a novel proposal under a class of structural cumulative survival models which parameterize time-varying effects of a point exposure directly on the scale of the survival function; these models are essentially equivalent to a semi-parametric variant of the instrumental variables additive hazards model. We propose a class of recursive instrumental variable estimators for these exposure effects, and derive their large sample properties along with inferential tools. We examine the performance of the proposed method in simulation studies and illustrate it in a Mendelian randomization study to evaluate the effect of diabetes on mortality using data from the Health and Retirement Study. We further use the proposed method to investigate potential benefit from breast cancer screening on subsequent breast cancer mortality based on the HIP study.
Kimball, John; Kang, Sinkyu
2003-01-01
The original objectives of this proposed 3-year project were to: 1) quantify the respective contributions of land cover and disturbance (i.e., wild fire) to uncertainty associated with regional carbon source/sink estimates produced by a variety of boreal ecosystem models; 2) identify the model processes responsible for differences in simulated carbon source/sink patterns for the boreal forest; 3) validate model outputs using tower and field-based estimates of NEP and NPP; and 4) recommend/prioritize improvements to boreal ecosystem carbon models, which will better constrain regional source/sink estimates for atmospheric CO2. These original objectives were subsequently distilled to fit within the constraints of a 1-year study. This revised study involved a regional model intercomparison over the BOREAS study region involving the Biome-BGC and TEM (A.D. McGuire, UAF) ecosystem models. The major focus of these revised activities involved quantifying the sensitivity of regional model predictions to land cover classification uncertainties. We also evaluated the individual and combined effects of historical fire activity, historical atmospheric CO2 concentrations, and climate change on carbon and water flux simulations within the BOREAS study region.
Eskes, H.; Boersma, F.; Dirksen, R.; van der A, R.; Veefkind, P.; Levelt, P.; Brinksma, E.; van Roozendael, M.; de Smedt, I.; Gleason, J.
2005-05-01
Based on measurements of GOME on ESA ERS-2, SCIAMACHY on ESA-ENVISAT, and Ozone Monitoring Instrument (OMI) on the NASA EOS-Aura satellite there is now a unique 11-year dataset of global tropospheric nitrogen dioxide measurements from space. The retrieval approach consists of two steps. The first step is an application of the DOAS (Differential Optical Absorption Spectroscopy) approach which delivers the total absorption optical thickness along the light path (the slant column). For GOME and SCIAMACHY this is based on the DOAS implementation developed by BIRA/IASB. For OMI the DOAS implementation was developed in a collaboration between KNMI and NASA. The second retrieval step, developed at KNMI, estimates the tropospheric vertical column of NO2 based on the slant column, cloud fraction and cloud top height retrieval, stratospheric column estimates derived from a data assimilation approach and vertical profile estimates from space-time collocated profiles from the TM chemistry-transport model. The second step was applied with only minor modifications to all three instruments to generate a uniform 11-year data set. In our talk we will address the following topics: - A short summary of the retrieval approach and results - Comparisons with other retrievals - Comparisons with global and regional-scale models - OMI-SCIAMACHY and SCIAMACHY-GOME comparisons - Validation with independent measurements - Trend studies of NO2 for the past 11 years
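The second retrieval step reduces to simple column arithmetic once the air-mass factors (AMFs) from the cloud, profile, and assimilation inputs are in hand. A sketch with made-up numbers (units arbitrary but consistent):

```python
def tropospheric_vcd(slant_column, strat_vcd, amf_strat, amf_trop):
    """Sketch of the second retrieval step: subtract the stratospheric part
    of the slant column (stratospheric vertical column times its air-mass
    factor), then convert to a vertical column with the tropospheric AMF."""
    trop_slant = slant_column - strat_vcd * amf_strat
    return trop_slant / amf_trop
```

The hard work of the retrieval lies in producing the AMFs and the stratospheric estimate, not in this final division.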
Effect of two viscosity models on lethality estimation in sterilization of liquid canned foods.
Calderón-Alvarado, M P; Alvarado-Orozco, J M; Herrera-Hernández, E C; Martínez-González, G M; Miranda-López, R; Jiménez-Islas, H
2016-09-01
A numerical study on 2D natural convection in cylindrical cavities during the sterilization of liquid foods was performed. The mathematical model was established on momentum and energy balances and predicts both the heating dynamics of the slowest heating zone (SHZ) and the lethal rate achieved in homogeneous liquid canned foods. Two sophistication levels were proposed in viscosity modelling: 1) considering an average viscosity and 2) using an Arrhenius-type model to include the effect of temperature on viscosity. The remaining thermodynamic properties were kept constant. The governing equations were spatially discretized via orthogonal collocation (OC) with a mesh size of 25 × 25. Computational simulations were performed using proximate and thermodynamic data for carrot-orange soup, broccoli-cheddar soup, tomato puree, and cream-style corn. Flow patterns, isothermals, heating dynamics of the SHZ, and the sterilization rate achieved for the cases studied were compared for both viscosity models. The dynamics of the coldest point and the lethal rate F0 in all food fluids studied were approximately equal in both cases, although the second sophistication level is closer to the physical behavior. Model accuracy compared favorably with the reported sterilization time for cream-style corn packed at 303 × 406 can size, predicting 66 min versus an experimental time of 68 min at a retort temperature of 121.1 °C.
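The second sophistication level uses an Arrhenius-type temperature dependence for viscosity, which can be sketched as follows (the reference viscosity and activation energy in the example are placeholders, not the paper's fitted parameters):

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def arrhenius_viscosity(temp_k, mu_ref, e_a, temp_ref_k):
    """Arrhenius-type viscosity: mu(T) = mu_ref * exp((Ea/R) * (1/T - 1/T_ref)).
    Viscosity falls as the liquid food heats during the retort cycle, which
    strengthens natural convection relative to the constant-viscosity case."""
    return mu_ref * math.exp((e_a / R_GAS) * (1.0 / temp_k - 1.0 / temp_ref_k))
```

Evaluated at retort temperature the viscosity is well below its room-temperature reference, which is why the two sophistication levels produce slightly different flow patterns even when the predicted lethality is similar.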
van der Wal, Wouter; Whitehouse, Pippa L.; Schrama, Ernst J. O.
2015-03-01
Seismic data indicate that there are large viscosity variations in the mantle beneath Antarctica. Consideration of such variations would affect predictions of models of Glacial Isostatic Adjustment (GIA), which are used to correct satellite measurements of ice mass change. However, most GIA models used for that purpose have assumed the mantle to be uniformly stratified in terms of viscosity. The goal of this study is to estimate the effect of lateral variations in viscosity on Antarctic mass balance estimates derived from the Gravity Recovery and Climate Experiment (GRACE) data. To this end, recently-developed global GIA models based on lateral variations in mantle temperature are tuned to fit constraints in the northern hemisphere and then compared to GPS-derived uplift rates in Antarctica. We find that these models can provide a better fit to GPS uplift rates in Antarctica than existing GIA models with a radially-varying (1D) rheology. When 3D viscosity models in combination with specific ice loading histories are used to correct GRACE measurements, mass loss in Antarctica is smaller than previously found for the same ice loading histories and their preferred 1D viscosity profiles. The variation in mass balance estimates arising from using different plausible realizations of 3D viscosity amounts to 20 Gt/yr for the ICE-5G ice model and 16 Gt/yr for the W12a ice model; these values are larger than the GRACE measurement error, but smaller than the variation arising from unknown ice history. While there exist 1D Earth models that can reproduce the total mass balance estimates derived using 3D Earth models, the spatial pattern of gravity rates can be significantly affected by 3D viscosity in a way that cannot be reproduced by GIA models with 1D viscosity. As an example, models with 1D viscosity always predict maximum gravity rates in the Ross Sea for the ICE-5G ice model, however, for one of the three preferred 3D models the maximum (for the same ice model) is found
Model estimation of land-use effects on water levels of northern Prairie wetlands
Voldseth, R.A.; Johnson, W.C.; Gilmanov, T.; Guntenspergen, G.R.; Millett, B.V.
2007-01-01
Wetlands of the Prairie Pothole Region exist in a matrix of grassland dominated by intensive pastoral and cultivation agriculture. Recent conservation management has emphasized the conversion of cultivated farmland and degraded pastures to intact grassland to improve upland nesting habitat. The consequences of changes in land-use cover that alter watershed processes have not been evaluated relative to their effect on the water budgets and vegetation dynamics of associated wetlands. We simulated the effect of upland agricultural practices on the water budget and vegetation of a semipermanent prairie wetland by modifying a previously published mathematical model (WETSIM). Watershed cover/land-use practices were categorized as unmanaged grassland (native grass, smooth brome), managed grassland (moderately heavily grazed, prescribed burned), cultivated crops (row crop, small grain), and alfalfa hayland. Model simulations showed that differing rates of evapotranspiration and runoff associated with different upland plant-cover categories in the surrounding catchment produced differences in wetland water budgets and linked ecological dynamics. Wetland water levels were highest and vegetation the most dynamic under the managed-grassland simulations, while water levels were the lowest and vegetation the least dynamic under the unmanaged-grassland simulations. The modeling results suggest that unmanaged grassland, often planted for waterfowl nesting, may produce the least favorable wetland conditions for birds, especially in drier regions of the Prairie Pothole Region. These results stand as hypotheses that urgently need to be verified with empirical data.
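The core of a WETSIM-style simulation is a per-step water budget in which land cover enters through runoff and evapotranspiration coefficients. A highly simplified sketch (parameter names and values are illustrative, not WETSIM's):

```python
def wetland_level_change(precip_m, pet_m, upland_area_m2, wetland_area_m2,
                         runoff_coeff, et_multiplier):
    """One-step wetland water budget (sketch): direct precipitation plus
    runoff routed from the upland catchment, minus wetland evapotranspiration.
    runoff_coeff and et_multiplier stand in for land-cover effects, e.g.
    grazed or burned grassland shedding more runoff than unmanaged grass."""
    runoff_m = runoff_coeff * precip_m * (upland_area_m2 / wetland_area_m2)
    return precip_m + runoff_m - et_multiplier * pet_m
```

Under this sketch, a cover type with a higher runoff coefficient raises wetland water levels, the mechanism by which managed grassland produced the highest and most dynamic levels in the simulations.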
Kerboua, Kaouther; Hamdaoui, Oualid
2018-01-01
Based on two different assumptions regarding the equation describing the state of the gases within an acoustic cavitation bubble, this paper studies the sonochemical production of hydrogen, through two numerical models treating the evolution of a chemical mechanism within a single bubble saturated with oxygen during an oscillation cycle in water. The first approach is built on an ideal gas model, while the second one is founded on Van der Waals equation, and the main objective was to analyze the effect of the considered state equation on the ultrasonic hydrogen production retrieved by simulation under various operating conditions. The obtained results show that even when the second approach gives higher values of temperature, pressure and total free radicals production, yield of hydrogen does not follow the same trend. When comparing the results released by both models regarding hydrogen production, it was noticed that the ratio of the molar amount of hydrogen is frequency and acoustic amplitude dependent. The use of Van der Waals equation leads to higher quantities of hydrogen under low acoustic amplitude and high frequencies, while employing ideal gas law based model gains the upper hand regarding hydrogen production at low frequencies and high acoustic amplitudes.
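The two state equations compared in this abstract differ only in the Van der Waals co-volume and attraction corrections; a sketch of the pressure calculation (the bubble-dynamics coupling and chemical mechanism are omitted):

```python
R = 8.314  # universal gas constant, J/(mol K)

def pressure_ideal(n_mol, temp_k, vol_m3):
    """Ideal gas law: P = nRT/V."""
    return n_mol * R * temp_k / vol_m3

def pressure_vdw(n_mol, temp_k, vol_m3, a, b):
    """Van der Waals: P = nRT/(V - nb) - a*n^2/V^2, where b (co-volume) and
    a (attraction) correct the ideal picture at high compression."""
    return n_mol * R * temp_k / (vol_m3 - n_mol * b) - a * n_mol**2 / vol_m3**2
```

With textbook O2 constants (a ≈ 0.1382 Pa·m⁶/mol², b ≈ 3.19×10⁻⁵ m³/mol) the two pressures agree at large volumes but diverge strongly near maximum compression of the bubble, which is where the models' predicted temperatures and radical yields part ways.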
A conceptual model to estimate cost effectiveness of the indoor environment improvements
Energy Technology Data Exchange (ETDEWEB)
Seppanen, Olli; Fisk, William J.
2003-06-01
Macroeconomic analyses indicate a high cost to society of a deteriorated indoor climate. The few example calculations performed to date indicate that measures taken to improve IEQ are highly cost-effective when health and productivity benefits are considered. We believe that cost-benefit analyses of building designs and operations should routinely incorporate health and productivity impacts. As an initial step, we developed a conceptual model that shows the links between improvements in IEQ and the financial gains from reductions in medical care and sick leave, improved work performance, lower employee turnover, and reduced maintenance due to fewer complaints.
Putti, Fernando Ferrari; Filho, Luis Roberto Almeida Gabriel; Gabriel, Camila Pires Cremasco; Neto, Alfredo Bonini; Bonini, Carolina Dos Santos Batista; Rodrigues Dos Reis, André
2017-06-01
This study aimed to develop a fuzzy mathematical model to estimate the impacts of global warming on the vitality of Laelia purpurata growing under different Brazilian environmental conditions. The model takes temperature, humidity, and shading as the intrinsic parameters that determine plant vitality. The fuzzy model could accurately predict the optimal conditions for cultivation of Laelia purpurata at several sites in Brazil. Based on the model results, we found that higher temperatures and a lack of proper shading can reduce the vitality of the orchids. The fuzzy mathematical model could thus precisely detect the damage to plant vitality caused by higher temperatures as a consequence of global warming.
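A fuzzy estimate of this kind can be sketched with triangular membership functions and a minimum (fuzzy AND) combination; the membership bounds below are hypothetical placeholders, not the paper's calibrated values:

```python
def tri(x, lo, peak, hi):
    """Triangular membership function on [lo, hi], peaking at `peak`."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)

def vitality(temp_c, humidity_pct, shade_pct):
    """Toy Mamdani-style estimate: vitality is the minimum of how favourable
    each input is, so one unfavourable factor (e.g. heat without shade)
    drags the whole score down. Bounds are illustrative only."""
    return min(tri(temp_c, 10.0, 22.0, 32.0),
               tri(humidity_pct, 40.0, 70.0, 95.0),
               tri(shade_pct, 20.0, 50.0, 80.0))
```

The min-combination reproduces the paper's qualitative finding: pushing temperature past the favourable range drives vitality to zero regardless of the other inputs.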
Purshouse, Robin C; Meier, Petra S; Brennan, Alan; Taylor, Karl B; Rafia, Rachid
2010-04-17
Although pricing policies for alcohol are known to be effective, little is known about how specific interventions affect health-care costs and health-related quality-of-life outcomes for different types of drinkers. We assessed effects of alcohol pricing and promotion policy options in various population subgroups. We built an epidemiological mathematical model to appraise 18 pricing policies, with English data from the Expenditure and Food Survey and the General Household Survey for average and peak alcohol consumption. We used results from econometric analyses (256 own-price and cross-price elasticity estimates) to estimate effects of policies on alcohol consumption. We applied risk functions from systematic reviews and meta-analyses, or derived from attributable fractions, to model the effect of consumption changes on mortality and disease prevalence for 47 illnesses. General price increases were effective for reduction of consumption, health-care costs, and health-related quality-of-life losses in all population subgroups. Minimum pricing policies can maintain this level of effectiveness for harmful drinkers while reducing effects on consumer spending for moderate drinkers. Total bans of supermarket and off-license discounting are effective, but banning only large discounts has little effect. Young adult drinkers aged 18-24 years are especially affected by policies that raise prices in pubs and bars. Minimum pricing policies and discounting restrictions might warrant further consideration because both strategies are estimated to reduce alcohol consumption, and related health harms and costs, with the increases in drinker spending targeted at those who incur the most harm. Policy Research Programme, UK Department of Health. Copyright 2010 Elsevier Ltd. All rights reserved.
DEFF Research Database (Denmark)
Mailund, Thomas; Dutheil, Julien; Hobolth, Asger
2011-01-01
…event has occurred to split them apart. The size of these segments of constant divergence depends on the recombination rate, but also on the speciation time, the effective population size of the ancestral population, as well as demographic effects and selection. Thus, inference of these parameters may…, and the ancestral effective population size. The model is efficient enough to allow inference on whole-genome data sets. We first investigate the power and consistency of the model with coalescent simulations and then apply it to the whole-genome sequences of the two orangutan sub-species, Bornean (P. p. pygmaeus…) and Sumatran (P. p. abelii) orangutans from the Orangutan Genome Project. We estimate the speciation time between the two sub-species to be … thousand years ago and the effective population size of the ancestral orangutan species to be …, consistent with recent results based on smaller data sets. We also report…
Asymptotic Optimality of Estimating Function Estimator for CHARN Model
Directory of Open Access Journals (Sweden)
Tomoyuki Amano
2012-01-01
Full Text Available The CHARN model is a well-known and important model in finance; it encompasses many financial time series models and can describe the return processes of assets. One of the most fundamental estimators for financial time series models is the conditional least squares (CL) estimator. Recently, however, it was shown that the optimal estimating function estimator (G estimator) is more efficient than the CL estimator for some time series models. In this paper, we examine the efficiencies of the CL and G estimators for the CHARN model and derive the condition under which the G estimator is asymptotically optimal.
Pham, Quang Duy; Wilson, David P; Kerr, Cliff C; Shattock, Andrew J; Do, Hoa Mai; Duong, Anh Thuy; Nguyen, Long Thanh; Zhang, Lei
2015-01-01
Vietnam has been largely reliant on international support in its HIV response. Over 2006-2010, a total of US$480 million was invested in its HIV programmes, more than 70% of which came from international sources. This study investigates the potential epidemiological impacts of these programmes and their cost-effectiveness. We conducted a data synthesis of HIV programming, spending, epidemiological, and clinical outcomes. Counterfactual scenarios were defined based on assumed programme coverage and behaviours had the programmes not been implemented. An epidemiological model, calibrated to reflect the actual epidemiological trends, was used to estimate plausible ranges of programme impacts. The model was then used to estimate the costs per averted infection, death, and disability adjusted life-year (DALY). Based on observed prevalence reductions amongst most population groups, and plausible counterfactuals, modelling suggested that antiretroviral therapy (ART) and prevention programmes over 2006-2010 have averted an estimated 50,600 [95% uncertainty bound: 36,300-68,900] new infections and 42,600 [36,100-54,100] deaths, resulting in 401,600 [312,200-496,300] fewer DALYs across all population groups. HIV programmes in Vietnam have cost an estimated US$1,972 [1,447-2,747], US$2,344 [1,843-2,765], and US$248 [201-319] for each averted infection, death, and DALY, respectively. Our evaluation suggests that HIV programmes in Vietnam have most likely had benefits that are cost-effective. ART and direct HIV prevention were the most cost-effective interventions in reducing HIV disease burden.
International Nuclear Information System (INIS)
Han, Seok-Jung; KEUM, Dong-Kwon; Jang, Seung-Cheol
2015-01-01
The FCM includes complex transport phenomena of radioactive materials through the biokinetic system of contaminated environments. Estimation of chronic health effects is a key part of level 3 PSA (Probabilistic Safety Assessment), and it depends on the FCM estimate of contaminated food ingestion. The ingestion habits of a local region and its agricultural production differ from the general worldwide features, and from case to case; this is the reason to develop domestic FCM data for level 3 PSA. However, generating specific FCM data is a complex process subject to a large degree of uncertainty due to the inherent biokinetic models. As a preliminary step, the present study focuses on developing an infrastructure for generating specific FCM data. During this process, the features of the FCM data needed to generate domestic data were investigated; based on the insights obtained, specific domestic FCM data were developed to estimate the chronic health effects in off-site consequence analysis. One insight obtained is that the domestic FCM data are roughly 20 times higher than the MACCS2 default data. Based on this observation, it is clear that the specific chronic health effects of a domestic plant site should be considered in off-site consequence analysis.
Using Hedonic price model to estimate effects of flood on real ...
African Journals Online (AJOL)
Distances were measured in metres from the centroid of the building to the edge of the river and to roads using a Global Positioning System. The result of the estimation shows that properties located within the floodplain are lower in value by an average of N493,408, which represents a 6.8 percent reduction in sales price for an ...
The effect of position sources on estimated eigenvalues in intensity modeled data
Hendrikse, A.J.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan; Goseling, Jasper; Weber, Jos H.
2010-01-01
In biometrics, often models are used in which the data distributions are approximated with normal distributions. In particular, the eigenface method models facial data as a mixture of fixed-position intensity signals with a normal distribution. The model parameters, a mean value and a covariance
Jacob Strunk; Hailemariam Temesgen; Hans-Erik Andersen; James P. Flewelling; Lisa Madsen
2012-01-01
Using lidar in an area-based model-assisted approach to forest inventory has the potential to increase estimation precision for some forest inventory variables. This study documents the bias and precision of a model-assisted (regression estimation) approach to forest inventory with lidar-derived auxiliary variables relative to lidar pulse density and the number of...
Amplitude Models for Discrimination and Yield Estimation
Energy Technology Data Exchange (ETDEWEB)
Phillips, William Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-09-01
This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.
Spatial scale effects on model parameter estimation and predictive uncertainty in ungauged basins
CSIR Research Space (South Africa)
Hughes, DA
2013-06-01
Full Text Available The most appropriate scale to use for hydrological modelling depends on the structure of the chosen model, the purpose of the results and the resolution of the available data used to quantify parameter values and provide the climatic forcing data...
DeMarco, J J; Cagnon, C H; Cody, D D; Stevens, D M; McCollough, C H; Zankl, M; Angel, E; McNitt-Gray, M F
2007-05-07
The purpose of this work is to examine the effects of patient size on radiation dose from CT scans. To perform these investigations, we used Monte Carlo simulation methods with detailed models of both patients and multidetector computed tomography (MDCT) scanners. A family of three-dimensional, voxelized patient models previously developed and validated by the GSF was implemented as input files using the Monte Carlo code MCNPX. These patient models represent a range of patient sizes and ages (8 weeks to 48 years) and have all radiosensitive organs previously identified and segmented, allowing the estimation of dose to any individual organ and calculation of patient effective dose. To estimate radiation dose, every voxel in each patient model was assigned both a specific organ index number and an elemental composition and mass density. Simulated CT scans of each voxelized patient model were performed using a previously developed MDCT source model that includes scanner-specific spectra, including the bowtie filter, scanner geometry and helical source path. The scan simulations in this work include a whole-body scan protocol and a thoracic CT scan protocol, each performed with fixed tube current. The whole-body scan simulation yielded a predictable decrease in effective dose as a function of increasing patient weight. Results from analysis of individual organs demonstrated similar trends, but with some individual variations. A comparison with a conventional dose estimation method using the ImPACT spreadsheet yielded an effective dose of 0.14 mSv mAs⁻¹ for the whole-body scan. This result is lower than the simulations on the voxelized model designated 'Irene' (0.15 mSv mAs⁻¹) and higher than the models 'Donna' and 'Golem' (0.12 mSv mAs⁻¹). For the thoracic scan protocol, the ImPACT spreadsheet estimates an effective dose of 0.037 mSv mAs⁻¹, which falls between the calculated values for Irene (0.042 mSv mAs⁻¹) and Donna (0.031 mSv mAs⁻¹) and is higher relative to Golem.
Sasaki, S.; Yamada, T.
2013-12-01
A great earthquake struck the northeastern area of Japan on March 11, 2011. The electrical systems controlling the Fukushima Daiichi nuclear power station were completely destroyed by the tsunamis that followed. From the damaged reactor containment vessels, a quantity of radioactive substances leaked and was diffused in the vicinity of the station. Radiological internal exposure has become a serious social issue both in Japan and all over the world. The present study provides an easily understandable, kinematics-based model to estimate the effective dose of radioactive substances in a human body by simplifying the complicated mechanism of metabolism. The International Commission on Radiological Protection (ICRP) has developed an exact model, which is well known as the standard method of calculating the effective dose for radiological protection. However, because that method adheres closely to the actual mechanism of metabolism in human bodies, it is rather difficult for people outside radiology to grasp the whole picture of the movement and influence of radioactive substances in a human body. Therefore, in the present paper we propose a newly derived and easily understandable model to estimate the effective dose. The present method is very similar to the traditional and conventional hydrological tank model: the ingestion flux of radioactive substances corresponds to rainfall intensity, and the storage of radioactive substances to the water storage of a basin in runoff analysis. The key of this method is to estimate the energy radiated by the radioactive disintegration of an atom, using E. Fermi's classical theory of beta decay and special relativity, for various kinds of radioactive atoms. The only parameters used in this study are the physical half-life and the biological half-life; there are no operational coefficients introduced to adjust our theoretical result to the ICRP values. Figure 1 compares time…
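The tank-model analogy can be sketched as a single storage whose outflow constant combines the two half-lives the abstract names; the nuclide values below are illustrative assumptions, not figures from the paper.

```python
# Sketch of the tank-model analogy: intake is the "rainfall", body burden the
# "storage", and the outflow rate combines physical decay with biological
# elimination. Only the two half-lives parameterize the model.
import math

def effective_lambda(t_phys_days, t_bio_days):
    """Total elimination constant: radioactive decay + biological clearance."""
    return math.log(2) / t_phys_days + math.log(2) / t_bio_days

def body_burden(intake_bq_per_day, t_phys, t_bio, days):
    """Daily time-stepping of the single-tank balance dQ/dt = I - lambda*Q."""
    lam, q = effective_lambda(t_phys, t_bio), 0.0
    for _ in range(days):
        q += intake_bq_per_day - lam * q
    return q

# Illustrative Cs-137-like values: physical half-life ~11,000 days,
# biological half-life ~100 days, so biological elimination dominates.
q = body_burden(intake_bq_per_day=10.0, t_phys=11_000, t_bio=100, days=2000)
print(q)  # approaches the steady-state burden I / lambda
```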
Warrington, Nicole M; Freathy, Rachel M; Neale, Michael C; Evans, David M
2018-02-13
To date, 60 genetic variants have been robustly associated with birthweight. It is unclear whether these associations represent the effect of an individual's own genotype on their birthweight, their mother's genotype, or both. We demonstrate how structural equation modelling (SEM) can be used to estimate both maternal and fetal effects when phenotype information is present for individuals in two generations and genotype information is available on the older individual. We conduct an extensive simulation study to assess the bias, power and type 1 error rates of the SEM and also apply the SEM to birthweight data in the UK Biobank study. Unlike simple regression models, our approach is unbiased when there is both a maternal and a fetal effect. The method can be used when either the individual's own phenotype or the phenotype of their offspring is not available, and allows the inclusion of summary statistics from additional cohorts where raw data cannot be shared. We show that the type 1 error rate of the method is appropriate, and that there is substantial statistical power to detect a genetic variant that has a moderate effect on the phenotype and reasonable power to detect whether it is a fetal and/or a maternal effect. We also identify a subset of birthweight-associated single nucleotide polymorphisms (SNPs) that have opposing maternal and fetal effects in the UK Biobank. Our results show that SEM can be used to estimate parameters that would be difficult to quantify using simple statistical methods alone. © The Author(s) 2018. Published by Oxford University Press on behalf of the International Epidemiological Association.
Rouholahnejad, E.; Fan, Y.; Kirchner, J. W.; Miralles, D. G.
2017-12-01
Most Earth system models (ESMs) average over considerable sub-grid heterogeneity in land surface properties, and overlook subsurface lateral flow. This could potentially bias evapotranspiration (ET) estimates and has implications for future temperature predictions, since overestimation of ET implies greater latent heat fluxes and potential underestimation of dry and warm conditions in the context of climate change. Here we quantify the bias in evaporation estimates that may arise from the fact that ESMs average over considerable heterogeneity in surface properties, and also neglect lateral transfer of water across heterogeneous landscapes at the global scale. We use a Budyko framework to express ET as a function of precipitation (P) and potential evapotranspiration (PET) to derive simple sub-grid closure relations that quantify how spatial heterogeneity and lateral transfer could affect average ET as seen from the atmosphere. We show that averaging over sub-grid heterogeneity in P and PET, as typical Earth system models do, leads to overestimation of average ET. Our analysis at the global scale shows that the effects of sub-grid heterogeneity will be most pronounced in steep mountainous areas where the topographic gradient is high and where P is inversely correlated with PET across the landscape. In addition, we use the Total Water Storage (TWS) anomaly estimates from the Gravity Recovery and Climate Experiment (GRACE) remote sensing product and assimilate them into the Global Land Evaporation Amsterdam Model (GLEAM) to correct for the existing free-drainage lower boundary condition in GLEAM and to quantify whether, and by how much, accounting for changes in terrestrial storage can improve the simulation of soil moisture and regional ET fluxes at the global scale.
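The claimed direction of the bias follows from the concavity of the Budyko curve. A sketch using Fu's form of the curve (shape parameter assumed here) shows that applying the curve to grid-mean forcing overestimates the mean of the cell-wise ET when P and PET are anti-correlated across sub-grid cells:

```python
# Sub-grid averaging bias with Fu's form of the Budyko curve:
#   ET/P = 1 + PET/P - (1 + (PET/P)**w)**(1/w)
w = 2.6  # a commonly used Fu shape parameter (assumed)

def et_budyko(p, pet):
    phi = pet / p
    return p * (1.0 + phi - (1.0 + phi**w) ** (1.0 / w))

# Two sub-grid cells with inversely correlated P and PET (mm/yr),
# as on a steep mountain slope.
cells = [(2000.0, 500.0), (500.0, 2000.0)]
et_cells = sum(et_budyko(p, pet) for p, pet in cells) / len(cells)

p_bar = sum(p for p, _ in cells) / len(cells)
pet_bar = sum(pet for _, pet in cells) / len(cells)
et_lumped = et_budyko(p_bar, pet_bar)  # ESM-style: curve applied to grid means

print(et_lumped, et_cells)  # the lumped estimate is larger
```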
The effect of PLS regression in PLS path model estimation when multicollinearity is present
DEFF Research Database (Denmark)
Nielsen, Rikke; Kristensen, Kai; Eskildsen, Jacob
PLS path modelling has previously been found to be robust to multicollinearity both between latent variables and between manifest variables of a common latent variable (see e.g. Cassel et al. (1999), Kristensen, Eskildsen (2005), Westlund et al. (2008)). However, most of the studies investigate...
Effects of climate model radiation, humidity and wind estimates on hydrological simulations
Haddeland, I.; Heinke, J.; Eisner, S.; Chen, C.; Hagemann, S.; Ludwig, F.
2012-01-01
Due to biases in the output of climate models, a bias correction is often needed to make the output suitable for use in hydrological simulations. In most cases only the temperature and precipitation values are bias corrected. However, often there are also biases in other variables such as radiation,
Estimation of health effects of prenatal methylmercury exposure using structural equation models
DEFF Research Database (Denmark)
Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe
2002-01-01
…mercury toxicity. RESULTS: Structural equation models were developed for assessment of the association between biomarkers of prenatal mercury exposure and neuropsychological test scores in 7-year-old children. Eleven neurobehavioral outcomes were grouped into motor function and verbally mediated function… correction for measurement error in exposure variables, incorporation of multiple outcomes and incomplete cases. This approach therefore deserves to be applied more frequently in the analysis of complex epidemiological data sets.
Estimating Price Effects in an Almost Ideal Demand Model of Outbound Thai Tourism to East Asia
C-L. Chang (Chia-Lin); T. Khamkaew (Tanchanok); M.J. McAleer (Michael)
2010-01-01
This paper analyzes the responsiveness of Thai outbound tourism to East Asian destinations, namely China, Hong Kong, Japan, Taiwan and Korea, to changes in the effective relative price of tourism, total real tourism expenditure, and one-off events. The nonlinear and linear Almost Ideal
Burke, K. D.; Goring, S. J.; Williams, J. W.; Holloway, T.
2015-12-01
Fossil pollen records from lakes, bogs, and small hollows offer the main source of information about vegetation responses to climate change and land use over timescales of decades to millennia. Millions of pollen grains are released from individual trees each year, and are transported by wind before settling out of the atmosphere. Reconstructing past vegetation from sedimentary pollen records, however, requires careful modeling of pollen production, transport, and deposition. The atmosphere is turbulent, and regional wind patterns shift from day to day. In accordance with this, it is necessary for pollen transport models to adequately account for variable, non-uniform wind patterns and vegetation heterogeneity. Using a simulation approach, with both simulated vegetation patterns and vegetation gradients, as well as simulated wind fields, we show the inconsistency in pollen loading proportions and local vegetation proportions when non-uniform wind patterns are incorporated. Vegetation upwind from the lake is over-represented due to the increased prevalence of winds transporting pollen from that area. The inclusion of North American Regional Reanalysis (NARR) wind records affirms this finding. Of the lake sites explored in this study, none had uniform wind patterns. The use of a settlement-era gridded vegetation dataset, compiled by the PalEON project and based on Public Land Survey System (PLSS) records allows us to model pollen source area with realistic vegetation heterogeneity. Due to differences in productivity, pollen fall speeds, and neighboring vegetation, there exist patterns of vegetation that may be poorly characterized due to over/under representation of different taxa. Better understanding these differences in representation allows for more accurate reconstruction of historical vegetation, and pollen-vegetation relationships.
Thomas J. Brandeis; Maria Del Rocio; Suarez Rozo
2005-01-01
Total aboveground live tree biomass in Puerto Rican lower montane wet, subtropical wet, subtropical moist and subtropical dry forests was estimated using data from two forest inventories and published regression equations. Multiple potentially-applicable published biomass models existed for some forested life zones, and their estimates tended to diverge with increasing...
Directory of Open Access Journals (Sweden)
Jing Su
2008-05-01
Full Text Available The impact of Asian dust on cloud radiative forcing during 2003–2006 is studied by using Clouds and the Earth's Radiant Energy System (CERES) data and the Fu-Liou radiative transfer model. Analysis of satellite data shows that the dust aerosol significantly reduced the cloud cooling effect at the TOA. In dust-contaminated cloudy regions, the 4-year mean values of the instantaneous shortwave, longwave and net cloud radiative forcing are −138.9, 69.1, and −69.7 W m⁻², which are 57.0%, 74.2%, and 46.3%, respectively, of the corresponding values in pristine cloudy regions. The satellite-retrieved cloud properties are significantly different in the dusty regions and can influence the radiative forcing indirectly. The contributions to the cloud radiative forcing by the dust direct, indirect and semi-direct effects are estimated using combined satellite observations and Fu-Liou model simulations. The 4-year mean value of the combined dust indirect and semi-direct shortwave radiative forcing (SWRF) is 82.2 W m⁻², which is 78.4% of the total dust effect. The dust direct effect is only 22.7 W m⁻², which is 21.6% of the total effect. Because both the first and second indirect effects enhance cloud cooling, the aerosol-induced cloud warming is mainly the result of the semi-direct effect of dust.
Directory of Open Access Journals (Sweden)
Yanchun Li
Full Text Available Proper development of a seed requires coordinated exchanges of signals among the three components that develop side by side in the seed. One of these is the maternal integument that encloses the other two zygotic components, i.e., the diploid embryo and its nurturing annex, the triploid endosperm. Although the formation of the embryo and endosperm contains the contributions of both maternal and paternal parents, maternally and paternally derived alleles may be expressed differently, leading to a so-called parent-of-origin or imprinting effect. Currently, the nature of how genes from the maternal and zygotic genomes interact to affect seed development remains largely unknown. Here, we present a novel statistical model for estimating the main and interaction effects of quantitative trait loci (QTLs that are derived from different genomes and further testing the imprinting effects of these QTLs on seed development. The experimental design used is based on reciprocal backcrosses toward both parents, so that the inheritance of parent-specific alleles could be traced. The computing model and algorithm were implemented with the maximum likelihood approach. The new strategy presented was applied to study the mode of inheritance for QTLs that control endoreduplication traits in maize endosperm. Monte Carlo simulation studies were performed to investigate the statistical properties of the new model with the data simulated under different imprinting degrees. The false positive rate of imprinting QTL discovery by the model was examined by analyzing the simulated data that contain no imprinting QTL. The reciprocal design and a series of analytical and testing strategies proposed provide a standard procedure for genomic mapping of QTLs involved in the genetic control of complex seed development traits in flowering plants.
Yang, Ji Seung; Cai, Li
2013-01-01
The main purpose of this study is to improve estimation efficiency in obtaining full-information maximum likelihood (FIML) estimates of contextual effects in the framework of a nonlinear multilevel latent variable model by adopting the Metropolis-Hastings Robbins-Monro algorithm (MH-RM; Cai, 2008, 2010a, 2010b). Results indicate that the MH-RM…
In this work, we address uncertainty analysis for a model, presented in a companion paper, quantifying the effect of soil moisture and plant development on soybean (Glycine max (L.) Merr.) leaf conductance. To achieve this we present several methods for confidence interval estimation. Estimation ...
Charlier, G.W.P.
1994-01-01
In a binary choice panel data model with individual effects and two time periods, Manski proposed the maximum score estimator, based on a discontinuous objective function, and proved its consistency under weak distributional assumptions. However, the rate of convergence of this estimator is low.
Cheng, Linsong; Gu, Hao; Huang, Shijun
2017-05-01
The aim of this work is to present a comprehensive mathematical model for estimating the oil drainage rate in the steam-assisted gravity drainage (SAGD) process; importantly, the wellbore/formation coupling effect is considered. Firstly, mass and heat transfer in the vertical and horizontal wellbores are described briefly. Then, a function of steam chamber height is introduced and the expressions for the oil drainage rate in the rising and expanding steam chamber stages are derived in detail. Next, a calculation flowchart is provided and an example is given to show how to use the proposed method. Finally, after the mathematical model is validated, the effects of the wellhead steam injection rate on the simulated results are further analyzed. The results indicate that the heat injection power per metre reduces gradually along the horizontal wellbore, which affects both the steam chamber height and the oil drainage rate in the SAGD process. In addition, for the same production time, the calculated oil drainage rate from the new method is lower than that from Butler's method. Moreover, the paper shows that when the wellhead steam injection rate is low enough, the steam chamber does not form at the toe of the horizontal well, and enhancing the wellhead steam injection rate can increase the oil drainage rate.
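For context on the Butler comparison, one common form of Butler's original SAGD drainage-rate formula can be sketched. The exact variant used in the paper is not stated, and all constants and inputs below are assumed, typical-order values rather than data from the study.

```python
# One common form of Butler's classical SAGD drainage-rate expression:
#   q = 2 L sqrt(2 k g alpha phi dSo h / (m nu_s))
import math

def butler_rate(L, k, alpha, phi, dSo, h, m, nu_s, g=9.81):
    """Volumetric oil drainage rate (m^3/s) for a well pair of length L (m)."""
    return 2.0 * L * math.sqrt(2.0 * k * g * alpha * phi * dSo * h / (m * nu_s))

q = butler_rate(L=500.0,     # horizontal well length, m
                k=5e-12,     # effective permeability, m^2 (~5 darcy)
                alpha=7e-7,  # thermal diffusivity, m^2/s
                phi=0.33,    # porosity
                dSo=0.6,     # mobile oil saturation change
                h=20.0,      # reservoir thickness, m
                m=3.0,       # viscosity-temperature exponent
                nu_s=5e-6)   # oil kinematic viscosity at steam temperature, m^2/s
print(q * 86400, "m^3/day")
```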
Estimating haplotype effects for survival data
DEFF Research Database (Denmark)
Scheike, Thomas; Martinussen, Torben; Silver, J
2010-01-01
Genetic association studies often investigate the effect of haplotypes on an outcome of interest. Haplotypes are not observed directly, and this complicates the inclusion of such effects in survival models. We describe a new estimating equations approach for Cox's regression model to assess haplo...
Model for traffic emissions estimation
Alexopoulos, A.; Assimacopoulos, D.; Mitsoulis, E.
A model is developed for the spatial and temporal evaluation of traffic emissions in metropolitan areas based on sparse measurements. All traffic data available are fully employed and the pollutant emissions are determined with the highest precision possible. The main roads are regarded as line sources of constant traffic parameters in the time interval considered. The method is flexible and allows for the estimation of distributed small traffic sources (non-line/area sources). The emissions from the latter are assumed to be proportional to the local population density as well as to the traffic density leading to local main arteries. The contribution of moving vehicles to air pollution in the Greater Athens Area for the period 1986-1988 is analyzed using the proposed model. Emissions and other related parameters are evaluated. Emissions from area sources were found to have a noticeable share of the overall air pollution.
Strand, Matthew; Hopke, Philip K; Zhao, Weixiang; Vedal, Sverre; Gelfand, Erwin; Rabinovitch, Nathan
2007-09-01
Various methods have been developed recently to estimate personal exposures to ambient particulate matter less than 2.5 μm in diameter (PM2.5) using fixed outdoor monitors as well as personal exposure monitors. One class of estimators involves extrapolating values using ambient-source components of PM2.5, such as sulfate and iron. A key step in extrapolating these values is to correct for differences in infiltration characteristics of the component used in extrapolation (such as sulfate within PM2.5) and PM2.5. When this is not done, resulting health effect estimates will be biased. Another class of approaches involves factor analysis methods such as positive matrix factorization (PMF). Using either an extrapolation or a factor analysis method in conjunction with regression calibration allows one to estimate the direct effects of ambient PM2.5 on health, eliminating bias caused by using fixed outdoor monitors and estimated personal ambient PM2.5 concentrations. Several forms of the extrapolation method are defined, including some new ones. Health effect estimates that result from the use of these methods are compared with those from an expanded PMF analysis using data collected from a health study of asthmatic children conducted in Denver, Colorado. Examining differences in health effect estimates among the various methods using a measure of lung function (forced expiratory volume in 1 s) as the health indicator demonstrated the importance of the correction factor(s) in the extrapolation methods and that PMF yielded results comparable with the extrapolation methods that incorporated correction factors.
Energy Technology Data Exchange (ETDEWEB)
Robertson, Neil M. [44 Ardgowan Street, Greenock PA16 8EL (United Kingdom). E-mail: neil.robertson at physics.org; Diaz-Gomez, Manuel [Plaza Alcalde Horacio Hermoso, 2, 3-A 41013 Seville (Spain). E-mail: manolo-diaz at latinmail.com; Condon, Barrie [Department of Clinical Physics, Institute of Neurological Sciences, Glasgow G51 4TF (United Kingdom). E-mail: barrie.condon at udcf.gla.ac.uk
2000-12-01
Mitral and aortic valve replacement is a procedure which is common in cardiac surgery. Some of these replacement valves are mechanical and contain moving metal parts. Should the patient in whom such a valve has been implanted be involved in magnetic resonance imaging, there is a possible dangerous interaction between the moving metal parts and the static magnetic field due to the Lenz effect. Mathematical models of two relatively common forms of single-leaflet valves have been derived and the magnitude of the torque which opposes the motion of the valve leaflet has been calculated for a valve disc of solid metal. In addition, a differential model of a ring-strengthener valve type has been considered to determine the likely significance of the Lenz effect in the context of the human heart. For common magnetic field strengths at present, i.e. 1 to 2 T, the effect is not particularly significant. However, there is a marked increase in back pressure as static magnetic field strength increases. There are concerns that, since field strengths in the range 3 to 4 T are increasingly being used, the Lenz effect could become significant. At 5 to 10 T the malfunction of the mechanical heart valve could cause the heart to behave as though it is diseased. For unhealthy or old patients this could possibly prove fatal. (author)
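The field dependence described in the abstract can be made explicit with a standard eddy-current scaling relation (an order-of-magnitude relation, not the paper's differential model): for a thin conducting disc of conductivity $\sigma$, thickness $h$, and radius $r$ rotating at angular velocity $\omega$ in a transverse static field $B$,

```latex
\tau_{\mathrm{Lenz}} \;\propto\; \sigma \, h \, r^{4} \, B^{2} \, \omega
```

The $B^{2}$ dependence is why moving from 1.5 T to 3 T roughly quadruples the retarding torque on the leaflet, consistent with the abstract's concern about higher-field scanners.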
Chaves, Luciano Eustáquio; Nascimento, Luiz Fernando Costa; Rizol, Paloma Maria Silva Rocha
2017-06-22
To predict the number of hospitalizations for asthma and pneumonia associated with exposure to air pollutants in the city of São José dos Campos, São Paulo State. This is a computational model using fuzzy logic based on Mamdani's inference method. For the fuzzification of the input variables particulate matter, ozone, sulfur dioxide, and apparent temperature, we considered two membership functions for each variable, with the linguistic terms good and bad. For the output variable, number of hospitalizations for asthma and pneumonia, we considered five membership functions: very low, low, medium, high, and very high. DATASUS was our source for the number of hospitalizations in the year 2007, and the result provided by the model was correlated with the actual hospitalization data at lags from zero to two days. The accuracy of the model was estimated by the ROC curve for each pollutant at those lags. In 2007, 1,710 hospitalizations for pneumonia and asthma were recorded in São José dos Campos, State of São Paulo, with a daily average of 4.9 hospitalizations (SD = 2.9). The model output showed a positive and significant correlation (r = 0.38) with the actual data; the accuracies evaluated for the model were higher for sulfur dioxide at lags 0 and 2 and for particulate matter at lag 1. Fuzzy modeling proved accurate for relating pollutant exposure to hospitalizations for pneumonia and asthma.
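The Mamdani pipeline described in the abstract (fuzzify, apply rules, aggregate, defuzzify) can be sketched in miniature. Everything here is illustrative: one input (particulate matter) instead of four, two output sets instead of five, and all membership-function breakpoints are invented, not the study's.

```python
# Minimal Mamdani-style fuzzy inference sketch (illustrative only; the
# study's actual membership functions and rule base are not reproduced).

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani(pm10):
    """Map a particulate-matter level to an expected hospitalization level.

    Two input sets ('good', 'bad') and two output sets ('low', 'high'),
    min implication, max aggregation, discrete centroid defuzzification
    over an output domain of 0..10 daily admissions.
    """
    good = tri(pm10, -1.0, 0.0, 50.0)    # low pollution
    bad = tri(pm10, 25.0, 75.0, 150.0)   # high pollution

    num = den = 0.0
    for i in range(101):
        y = i * 0.1
        # Rule 1: good air -> low admissions; Rule 2: bad air -> high.
        low = min(good, tri(y, -1.0, 0.0, 5.0))
        high = min(bad, tri(y, 5.0, 10.0, 11.0))
        mu = max(low, high)              # aggregate the clipped output sets
        num += mu * y
        den += mu
    return num / den if den > 0 else 0.0
```

A quick check of the qualitative behaviour: a low pollutant level should map to fewer predicted admissions than a high one, e.g. `mamdani(10.0)` versus `mamdani(100.0)`.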
Yang, A.; Yongtao, F.
2016-12-01
The effective elastic thickness (Te) is an important parameter that characterizes the long-term strength of the lithosphere and has great significance for understanding the mechanical properties and evolution of the lithosphere. In contrast with the many controversies regarding the elastic thickness of continental lithosphere, the Te of oceanic lithosphere has been thought to depend simply on the age of the plate. However, recent studies show that there is no simple relationship between Te and age at the time of loading for either seamounts or subduction zones. Subsurface loading strongly influences Te estimates for continental lithosphere, and many oceanic features, such as subduction zones, also carry considerable subsurface loads. We introduce a method to estimate the effective elastic thickness of oceanic lithosphere using a model that includes both surface and subsurface loads, based on free-air gravity anomaly and bathymetric data together with a moving window admittance technique (MWAT). We use the multitaper spectral estimation method to calculate the power spectral density. Tests with synthetic subduction-zone-like bathymetry and gravity data show that Te can be recovered with an accuracy similar to that achieved on continents, and that there is a trade-off between spatial resolution and variance for different window sizes. We estimate Te for many subduction zones (Peru-Chile trench, Middle America trench, Caribbean trench, Kuril-Japan trench, Mariana trench, Tonga trench, Java trench, Ryukyu-Philippine trench) with an age range of 0-160 Myr to reassess the relationship between elastic thickness and the age of the lithosphere at the time of loading. The results do not show a simple relationship between Te and age.
Estimating haplotype effects for survival data.
Scheike, Thomas H; Martinussen, Torben; Silver, Jeremy D
2010-09-01
Genetic association studies often investigate the effect of haplotypes on an outcome of interest. Haplotypes are not observed directly, and this complicates the inclusion of such effects in survival models. We describe a new estimating equations approach for Cox's regression model to assess haplotype effects for survival data. These estimating equations are simple to implement and avoid the use of the EM algorithm, which may be slow in the context of the semiparametric Cox model with incomplete covariate information. These estimating equations also lead to easily computable, direct estimators of standard errors, and thus overcome some of the difficulty in obtaining variance estimators based on the EM algorithm in this setting. We also develop an easily implemented goodness-of-fit procedure for Cox's regression model including haplotype effects. Finally, we apply the procedures presented in this article to investigate possible haplotype effects of the PAF-receptor on cardiovascular events in patients with coronary artery disease, and compare our results to those based on the EM algorithm. © 2009, The International Biometric Society.
J. Breidenbach; E. Kublin; R. McGaughey; H.-E. Andersen; S. Reutebuch
2008-01-01
For this study, hierarchical data sets--in that several sample plots are located within a stand--were analyzed for study sites in the USA and Germany. The German data had an additional hierarchy as the stands are located within four distinct public forests. Fixed-effects models and mixed-effects models with a random intercept on the stand level were fit to each data...
Waszak, Martin R.; Fung, Jimmy
1998-01-01
This report describes the development of transfer function models for the trailing-edge and upper and lower spoiler actuators of the Benchmark Active Control Technology (BACT) wind tunnel model for application to control system analysis and design. A simple nonlinear least-squares parameter estimation approach is applied to determine transfer function parameters from frequency response data. Unconstrained quasi-Newton minimization of weighted frequency response error was employed to estimate the transfer function parameters. An analysis of the behavior of the actuators over time to assess the effects of wear and aerodynamic load by using the transfer function models is also presented. The frequency responses indicate consistent actuator behavior throughout the wind tunnel test and only slight degradation in effectiveness due to aerodynamic hinge loading. The resulting actuator models have been used in design, analysis, and simulation of controllers for the BACT to successfully suppress flutter over a wide range of conditions.
Shin, Hwashin Hyun; Stieb, David M; Jessiman, Barry; Goldberg, Mark S; Brion, Orly; Brook, Jeff; Ramsay, Tim; Burnett, Richard T
2008-09-01
Countries worldwide are expending significant resources to improve air quality partly to improve the health of their citizens. Are these societal expenditures improving public health? We consider these issues by tracking the risk of death associated with outdoor air pollution over both space and time in Canadian cities. We propose two multi-year estimators that use current plus several previous years of data to estimate current year risk. The estimators are derived from sequential time series analyses using moving time windows. To evaluate the statistical properties of the proposed methods, a simulation study with three scenarios of changing risk was conducted based on 12 Canadian cities from 1981 to 2000. Then an optimal estimator was applied to 24 of Canada's largest cities over the 17-year period from 1984 to 2000. The annual average daily concentrations of ozone appeared to be increasing over the time period, whereas those of nitrogen dioxide were decreasing. However, the proposed method returns different time trends in public health risks. Evidence for some monotonic increasing trends in the annual risks is weak for O3 (p = 0.3870) but somewhat stronger for NO2 (p = 0.1082). In particular, an increasing time trend becomes apparent when excluding year 1998, which reveals lower risk than proximal years, even though concentrations of NO2 were decreasing. The simulation results validate our two proposed methods, producing estimates close to the preassigned values. Despite decreasing ambient concentrations, public health risks related to NO2 appear to be increasing. Further investigations are necessary to understand why the concentrations and adverse effects of NO2 show opposite time trends.
Directory of Open Access Journals (Sweden)
Neumann Anne
2011-11-01
Full Text Available Abstract Background Type 2 diabetes mellitus (T2D) poses a large worldwide burden for health care systems. One possible tool to decrease this burden is primary prevention. As it is unethical to wait until perfect data are available to conclude whether T2D primary prevention intervention programmes are cost-effective, we need a model that simulates the effect of prevention initiatives. Thus, the aim of this study is to investigate the long-term cost-effectiveness of lifestyle intervention programmes for the prevention of T2D using a Markov model. As decision makers often face difficulties in applying health economic results, we visualise our results with health economic tools. Methods We use four-state Markov modelling with a probabilistic cohort analysis to calculate the cost per quality-adjusted life year (QALY) gained. A one-year cycle length and a lifetime time horizon are applied. Best available evidence supplies the model with data on transition probabilities between glycaemic states, mortality risks, utility weights, and disease costs. The costs are calculated from a societal perspective. A 3% discount rate is used for costs and QALYs. Cost-effectiveness acceptability curves are presented to assist decision makers. Results The model indicates that diabetes prevention interventions have the potential to be cost-effective, but the outcome reveals a high level of uncertainty. Incremental cost-effectiveness ratios (ICERs) were negative for the intervention, i.e., the intervention leads to a cost reduction for men and women aged 30 or 50 years at initiation of the intervention. For men and women aged 70 at initiation of the intervention, the ICER was EUR27,546/QALY gained and EUR19,433/QALY gained, respectively. In all cases, the QALYs gained were low. Cost-effectiveness acceptability curves show that the higher the willingness-to-pay threshold value, the higher the probability that the intervention is cost-effective. Nonetheless, all curves are
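The cohort mechanics behind such a Markov model can be sketched as follows. The four glycaemic states, one-year cycles, and 3% discount rate follow the abstract, but every transition probability, cost, utility weight, and the intervention effect below are invented placeholders, not the study's calibrated inputs.

```python
# Toy four-state Markov cohort model with discounting and an ICER.
# States: normal glycaemia, pre-diabetes, T2D, dead (absorbing).

def run_cohort(trans, costs, utils, years=40, disc=0.03):
    """Propagate a cohort through states; return (discounted cost, QALYs)."""
    states = [1.0, 0.0, 0.0, 0.0]  # whole cohort starts normoglycaemic
    total_cost = total_qaly = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + disc) ** t  # discount both costs and QALYs
        total_cost += d * sum(p * c for p, c in zip(states, costs))
        total_qaly += d * sum(p * u for p, u in zip(states, utils))
        states = [sum(states[i] * trans[i][j] for i in range(4))
                  for j in range(4)]
    return total_cost, total_qaly

base = [[0.90, 0.08, 0.01, 0.01],    # invented transition matrix
        [0.05, 0.85, 0.08, 0.02],
        [0.00, 0.00, 0.96, 0.04],
        [0.00, 0.00, 0.00, 1.00]]
# Hypothetical intervention: halves progression out of the healthy states.
interv = [[0.945, 0.04, 0.005, 0.01],
          [0.05, 0.89, 0.04, 0.02],
          [0.00, 0.00, 0.96, 0.04],
          [0.00, 0.00, 0.00, 1.00]]

costs = [100.0, 300.0, 2500.0, 0.0]  # EUR per year in each state (invented)
utils = [0.95, 0.90, 0.75, 0.0]      # utility weights (invented)

c0, q0 = run_cohort(base, costs, utils)
c1, q1 = run_cohort(interv, costs, utils)
c1 += 500.0                          # one-off programme cost (assumed)
icer = (c1 - c0) / (q1 - q0)         # EUR per QALY gained
```

A negative `icer` here would correspond to the abstract's cost-saving result for younger cohorts; the sign depends entirely on the assumed inputs.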
Discrete Choice Models - Estimation of Passenger Traffic
DEFF Research Database (Denmark)
Sørensen, Majken Vildrik
2003-01-01
model, data and estimation are described, with a focus of possibilities/limitations of different techniques. Two special issues of modelling are addressed in further detail, namely data segmentation and estimation of Mixed Logit models. Both issues are concerned with whether individuals can be assumed...... for estimation of choice models). For application of the method an algorithm is provided with a case. Also for the second issue, estimation of Mixed Logit models, a method was proposed. The most commonly used approach to estimate Mixed Logit models, is to employ the Maximum Simulated Likelihood estimation (MSL...... distribution of coefficients were found. All the shapes of distributions found, complied with sound knowledge in terms of which should be uni-modal, sign specific and/or skewed distributions....
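Maximum Simulated Likelihood for a Mixed Logit, mentioned in the record above, can be sketched on a toy binary-choice problem: one coefficient is given a normal distribution, the choice probability is averaged over common draws, and the simulated log-likelihood is maximized. The data-generating values and the grid-search "optimizer" are illustrative only, not the thesis's models.

```python
import math
import random

random.seed(0)

# Simulate binary choices with a random coefficient: beta_n ~ N(1.0, 0.5),
# P(y=1 | x) = logistic(beta_n * x).
data = []
for _ in range(200):
    b = random.gauss(1.0, 0.5)
    x = random.uniform(-2.0, 2.0)
    p = 1.0 / (1.0 + math.exp(-b * x))
    data.append((x, 1 if random.random() < p else 0))

# Common standard-normal draws, reused across likelihood evaluations.
DRAWS = [random.gauss(0.0, 1.0) for _ in range(50)]

def sim_loglik(mu, sigma):
    """Simulated log-likelihood: average the logit probability over draws."""
    ll = 0.0
    for x, y in data:
        acc = 0.0
        for e in DRAWS:
            p = 1.0 / (1.0 + math.exp(-(mu + sigma * e) * x))
            acc += p if y == 1 else 1.0 - p
        ll += math.log(acc / len(DRAWS))
    return ll

# A crude grid search stands in for a proper optimizer.
grid = [(mu, s) for mu in (0.6, 1.0, 1.4) for s in (0.2, 0.5, 0.8)]
best = max(grid, key=lambda ms: sim_loglik(*ms))
```

With more data and draws (and a real optimizer), `best` converges to the data-generating mean and spread; reusing the same draws across evaluations keeps the simulated objective smooth in the parameters.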
Nonparametric estimation in models for unobservable heterogeneity
Hohmann, Daniel
2014-01-01
Nonparametric models which allow for data with unobservable heterogeneity are studied. The first publication introduces new estimators and their asymptotic properties for conditional mixture models. The second publication considers estimation of a function from noisy observations of its Radon transform in a Gaussian white noise model.
MCMC estimation of multidimensional IRT models
Beguin, Anton; Glas, Cornelis A.W.
1998-01-01
A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model. The procedure will
Thai, Hoai-Thu; Mentré, France; Holford, Nicholas H G; Veyrat-Follet, Christine; Comets, Emmanuelle
2014-02-01
Bootstrap methods are used in many disciplines to estimate the uncertainty of parameters, including in multilevel or linear mixed-effects models. Residual-based bootstrap methods, which resample both random effects and residuals, are an alternative to the case bootstrap, which resamples individuals. Most PKPD applications use the case bootstrap, for which software is available. In this study, we evaluated the performance of three bootstrap methods (case bootstrap, nonparametric residual bootstrap, and parametric bootstrap) in a simulation study and compared them with an asymptotic method (Asym) for estimating the uncertainty of parameters in nonlinear mixed-effects models (NLMEM) with heteroscedastic error. The simulation used the PK model for aflibercept, an anti-angiogenic drug, as an example. As expected, we found that the bootstrap methods provided better estimates of uncertainty than the Asym, as implemented in MONOLIX, for parameters in NLMEM with high nonlinearity and balanced designs. Overall, the parametric bootstrap performed better than the case bootstrap, as the true model and variance distribution were used. However, the case bootstrap is faster and simpler, as it makes no assumptions on the model and preserves both between-subject and residual variability in one resampling step. The performance of the nonparametric residual bootstrap was found to be limited when applied to NLMEM, owing to its failure to reflate the variance before resampling in unbalanced designs, where the Asym and the parametric bootstrap performed well and better than the case bootstrap, even with stratification.
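The case bootstrap, the simplest of the three methods compared above, can be sketched on a toy decay model: whole subjects are resampled with replacement and the parameter of interest is re-estimated each time. The mono-exponential model, noise levels, and subject count are invented, not the aflibercept PK setup.

```python
import math
import random
import statistics

random.seed(1)

# Simulated subjects: mono-exponential decay y = exp(-k_i * t) with
# between-subject variability in k_i and small residual noise.
subjects = []
for _ in range(30):
    k = random.gauss(0.5, 0.1)
    obs = [(t, math.exp(-k * t) + random.gauss(0.0, 0.01))
           for t in (0.5, 1.0, 2.0, 3.0)]
    subjects.append(obs)

def fit_k(obs):
    """Estimate k from a log-linear least-squares fit of log(y) on t."""
    pts = [(t, math.log(max(y, 1e-6))) for t, y in obs]
    n = len(pts)
    st = sum(t for t, _ in pts)
    sv = sum(v for _, v in pts)
    stt = sum(t * t for t, _ in pts)
    stv = sum(t * v for t, v in pts)
    slope = (n * stv - st * sv) / (n * stt - st * st)
    return -slope

def case_bootstrap(B=200):
    """Resample whole subjects (the 'cases'); re-estimate the mean of k."""
    means = []
    for _ in range(B):
        sample = [random.choice(subjects) for _ in subjects]
        means.append(statistics.mean(fit_k(o) for o in sample))
    return statistics.stdev(means)  # bootstrap SE of the mean of k

se = case_bootstrap()
```

The key property the abstract emphasizes is visible here: resampling keeps each subject's observations together, so between-subject and residual variability are preserved in a single step.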
Directory of Open Access Journals (Sweden)
Han Zhang
2017-09-01
Full Text Available China’s new round of tenure reform has devolved collective forests to individuals on an egalitarian basis. To balance the equity–efficiency dilemma, forestland transfers are highly advocated by policymakers. However, the forestland rental market has remained inactive after the reform. To examine the role of off-farm employment in forestland transfers, a simultaneous Tobit system of equations was employed to account for endogeneity, interdependency, and censoring issues. Accordingly, the Nelson–Olson two-stage procedure, embedded with a multivariate Tobit estimator, was applied to a nationally representative dataset. The estimation results showed that off-farm employment plays a significantly negative role in forestland rent-in, at the 5% significance level. However, off-farm activities had no significant effect on forestland rent-out. Considering China’s specific situation, a reasonable explanation is that households hold forestland as a crucial means of social security against the risk of unemployment. In both rent-in and rent-out equations, high transaction costs are one of the main obstacles impeding forestland transfer. A remarkable finding was that forestland transactions occurred with a statistically significant factor equalization effect, which would be helpful to adjust the mismatched labor–land ratio and improve land-use efficiency.
Eppelbaum, Lev; Meirova, Tatiana
2015-04-01
Agbayani, Kristina A; Hiscock, Merrill
2013-01-01
A previous study found that the Flynn effect accounts for 85% of the normative difference between 20- and 70-year-olds on subtests of the Wechsler intelligence tests. Adjusting scores for the Flynn effect substantially reduces normative age-group differences, but the appropriate amount of adjustment is uncertain. The present study replicates previous findings and employs two other methods of adjusting for the Flynn effect. Averaged across models, results indicate that the Flynn effect accounts for 76% of normative age-group differences on Wechsler IQ subtests. Flynn-effect adjustment reduces the normative age-related decline in IQ from 4.3 to 1.1 IQ points per decade.
Burdick, Summer M.; Hightower, Joseph E.; Bacheler, Nathan M.; Paramore, Lee M.; Buckel, Jeffrey A.; Pollock, Kenneth H.
2010-01-01
Estimating the selectivity patterns of various fishing gears is a critical component of fisheries stock assessment due to the difficulty in obtaining representative samples from most gears. We used short-term recoveries (n = 3587) of tagged red drum Sciaenops ocellatus to directly estimate age- and length-based selectivity patterns using generalized linear models. The most parsimonious models were selected using AIC, and standard deviations were estimated using simulations. Selectivity of red drum was dependent upon the regulation period in which the fish was caught, the gear used to catch the fish (i.e., hook-and-line, gill nets, pound nets), and the fate of the fish upon recovery (i.e., harvested or released); models including all first-order interactions between main effects outperformed models without interactions. Selectivity of harvested fish was generally dome-shaped and shifted toward larger, older fish in response to regulation changes. Selectivity of caught-and-released red drum was highest on the youngest and smallest fish in the early and middle regulation periods, but increased on larger, legal-sized fish in the late regulation period. These results suggest that catch-and-release mortality has consistently been high for small, young red drum, but has recently become more common in larger, older fish. This method of estimating selectivity from short-term tag recoveries is valuable because it is simpler than full tag-return models, and may be more robust because yearly fishing and natural mortality rates do not need to be modeled and estimated.
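The two selectivity shapes the abstract contrasts can be written down directly; a sketch with invented parameter values (not the fitted red drum estimates):

```python
import math

# Asymptotic (logistic) selectivity, typical of harvested fish, versus
# dome-shaped selectivity, here for released fish. Lengths in mm; all
# parameter values are illustrative.

def logistic_sel(length, l50=450.0, slope=0.02):
    """Asymptotic selectivity: approaches 1 above the 50%-selection length."""
    return 1.0 / (1.0 + math.exp(-slope * (length - l50)))

def dome_sel(length, peak=500.0, width=100.0):
    """Dome-shaped selectivity: peaks at `peak` and declines on both sides."""
    return math.exp(-0.5 * ((length - peak) / width) ** 2)
```

A dome shape selects hardest near the peak and falls off for both smaller and larger fish, whereas the logistic curve keeps selecting the largest fish fully; regulation changes in the study effectively shifted these curves along the length axis.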
Improved diagnostic model for estimating wind energy
Energy Technology Data Exchange (ETDEWEB)
Endlich, R.M.; Lee, J.D.
1983-03-01
Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.
DEFF Research Database (Denmark)
Stygar, Anna Helena; Krogh, Mogens Agerbo; Kristensen, Troels
2017-01-01
Evolutionary operations is a method to exploit the association of often small changes in process variables, planned during systematic experimentation and occurring during the normal production flow, to production characteristics to find a way to alter the production process to be more efficient...... from a herd, and an intervention effect on a given day. The model was constructed to handle any number of cows, experimental interventions, different data sources, or presence of control groups. In this study, data from 2 commercial Danish herds were used. In herd 1, data on 98,046 and 12,133 milkings...... tank records. The presented model proved to be a flexible and dynamic tool, and it was successfully applied for systematic experimentation in dairy herds. The model can serve as a decision support tool for on-farm process optimization exploiting planned changes in process variables and the response...
On parameter estimation in deformable models
DEFF Research Database (Denmark)
Fisker, Rune; Carstensen, Jens Michael
1998-01-01
Deformable templates have been intensively studied in image analysis through the last decade, but despite its significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...
Estimators for longitudinal latent exposure models: examining measurement model assumptions.
Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D
2017-06-15
Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.
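A minimal illustration of the instrumental-variable idea for error-prone exposures: with two independent error-prone measurements of a latent exposure, the replicate serves as an instrument and removes the attenuation bias of the naive regression. The simulation values are invented and the estimator is reduced to its method-of-moments core, far simpler than the longitudinal estimators in the paper.

```python
import random

random.seed(2)
n = 20000
beta = 2.0
X = [random.gauss(0.0, 1.0) for _ in range(n)]        # latent exposure
W1 = [x + random.gauss(0.0, 1.0) for x in X]          # error-prone measure
W2 = [x + random.gauss(0.0, 1.0) for x in X]          # independent replicate
Y = [beta * x + random.gauss(0.0, 0.5) for x in X]    # health outcome

def cov(a, b):
    """Sample covariance (population denominator, fine for illustration)."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)

# Naive slope is attenuated by var(X)/(var(X)+var(error)) = 1/2 here;
# instrumenting W1 with the replicate W2 recovers beta.
naive = cov(W1, Y) / cov(W1, W1)
iv = cov(W2, Y) / cov(W2, W1)
```

Because the two measurement errors are independent, `cov(W2, W1)` estimates `var(X)` and `cov(W2, Y)` estimates `beta * var(X)`, so their ratio is consistent for `beta` while the naive slope is biased toward zero.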
Modeling and estimating system availability
International Nuclear Information System (INIS)
Gaver, D.P.; Chu, B.B.
1976-11-01
Mathematical models to infer the availability of various types of more or less complicated systems are described. The analyses presented are probabilistic in nature and consist of three parts: a presentation of various analytic models for availability; a means of deriving approximate probability limits on system availability; and a means of statistical inference of system availability from sparse data, using a jackknife procedure. Various low-order redundant systems are used as examples, but extension to more complex systems is not difficult.
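The jackknife step of the third part can be sketched for availability estimated from sparse up/down cycle data; the cycle data and time constants below are invented, not from the report.

```python
import random

random.seed(4)
# Observed (uptime, downtime) cycle pairs for a repairable system, in hours;
# mean time to failure (100 h) and to repair (5 h) are assumed values.
cycles = [(random.expovariate(1 / 100), random.expovariate(1 / 5))
          for _ in range(12)]

def availability(cs):
    """Point estimate: total uptime over total observed time."""
    up = sum(u for u, _ in cs)
    down = sum(d for _, d in cs)
    return up / (up + down)

n = len(cycles)
a_all = availability(cycles)
# Leave-one-out pseudo-values give a bias-corrected estimate and a variance.
pseudo = [n * a_all - (n - 1) * availability(cycles[:i] + cycles[i + 1:])
          for i in range(n)]
a_jack = sum(pseudo) / n
var_jack = sum((p - a_jack) ** 2 for p in pseudo) / (n * (n - 1))
```

With only a dozen cycles, `var_jack` supplies an honest uncertainty statement for the availability ratio, which is the point of the jackknife in the sparse-data setting the record describes.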
Krengel, Annette; Hauth, Jan; Taskinen, Marja-Riitta; Adiels, Martin; Jirstrand, Mats
2013-01-19
When mathematical modelling is applied to many different application areas, a common task is the estimation of states and parameters based on measurements. In this kind of inference, uncertainties in the times when the measurements were taken are often neglected, but especially in applications from the life sciences this kind of error can considerably influence the estimation results. As an example, in the context of personalized medicine, the model-based assessment of the effectiveness of drugs is coming to play an important role. Systems biology may help here by providing good pharmacokinetic and pharmacodynamic (PK/PD) models. Inference on these systems based on data gained from clinical studies with several patient groups becomes a major challenge. Particle filters are a promising approach to tackle these difficulties but are not by themselves ready to handle uncertainties in measurement times. In this article, we describe a variant of the standard particle filter (PF) algorithm which allows state and parameter estimation with the inclusion of measurement time uncertainties (MTU). The modified particle filter, which we call MTU-PF, also allows the application of an adaptive stepsize choice in the time-continuous case to avoid degeneracy problems. The modification is based on the model assumption of uncertain measurement times. While the assumption of randomness in the measurements themselves is common, the corresponding measurement times are generally taken as deterministic and exactly known. Especially in cases where the data are gained from measurements on blood or tissue samples, a relatively high uncertainty in the true measurement time seems to be a natural assumption. Our method is appropriate in cases where relatively few data are used from a relatively large number of groups or individuals, which introduces mixed effects in the model. This is a typical setting of clinical studies. We demonstrate the method on a small artificial example
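The core MTU idea, letting the filter average over possible measurement times rather than trusting the nominal ones, can be sketched with a bootstrap particle filter on a scalar decay model. The model, noise levels, and jitter magnitude are invented; the real MTU-PF also handles adaptive stepsizes and joint state estimation, which this sketch omits.

```python
import math
import random

random.seed(3)
SIGMA_T, SIGMA_Y = 0.2, 0.05  # time jitter and observation noise (assumed)

def simulate_truth(k=0.3):
    """Observations of x(t) = exp(-k t) taken at jittered, not nominal, times."""
    obs = []
    for t_nom in (1.0, 2.0, 3.0, 4.0):
        t_true = t_nom + random.gauss(0.0, SIGMA_T)
        obs.append((t_nom, math.exp(-k * t_true) + random.gauss(0.0, SIGMA_Y)))
    return obs

def mtu_filter(obs, n_particles=2000):
    """Estimate the decay rate k, integrating out measurement-time jitter.

    Each particle carries a candidate k; at every observation it also samples
    a candidate measurement time, so the weights average over the jitter.
    """
    particles = [random.uniform(0.05, 1.0) for _ in range(n_particles)]
    for t_nom, y in obs:
        weights = []
        for k in particles:
            t = t_nom + random.gauss(0.0, SIGMA_T)  # sampled candidate time
            pred = math.exp(-k * t)
            weights.append(math.exp(-0.5 * ((y - pred) / SIGMA_Y) ** 2))
        # Multinomial resampling to counter weight degeneracy.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return sum(particles) / n_particles

k_hat = mtu_filter(simulate_truth())
```

Because each weight is an unbiased one-draw Monte Carlo estimate of the likelihood marginalized over the unknown time, the filter remains valid; it simply has noisier weights than a fixed-time filter.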
International Nuclear Information System (INIS)
Fichot, Florian; Duval, Fabien; Garcia, Aurelien; Belloni, Julien; Quintard, Michel
2005-01-01
Full text of publication follows: In the framework of its research programme on severe nuclear reactor accidents, IRSN investigates the water flooding of an overheated porous bed, where complex two-phase flows are likely to exist. The goal is to describe the flow with a general model, covering rods and debris beds regions in the vessel. A better understanding of the flow at the pore level appears to be necessary in order to justify and improve closure laws of macroscopic models. Although the Direct Numerical Simulation (DNS) of two-phase flows is possible with several methods, applications are now limited to small computational domains, typically of the order of a few centimeters. Therefore, numerical solutions at the reactor scale can only be obtained by using averaged models. Volume averaging is the most traditional way of deriving such models. For nuclear safety codes, a control volume must include a few rods or a few debris particles, with a characteristic dimension of a few centimeters. The difficulty usually met with averaged models is the closure of several transport or source terms which appear in the averaged conservation equations (for example the interfacial drag or the heat transfers between phases) [2]. In the past, the closure of these terms was obtained, when possible, from one-dimensional experiments that allowed measurements of heat flux or pressure drops. For more complex flows, the experimental measurement of local parameters is often impossible and the effective properties cannot be determined easily. An alternative way is to perform 'numerical experiments' with numerical simulations of the local flow. As mentioned above, the domain of application of DNS corresponds to the size of control volumes necessary to derive averaged models. Therefore DNS appears as a powerful tool to investigate the local features of a two-phase flow in complex geometries. Diffuse interface methods provide a way to model flows with interfacial phenomena through an
Letcher, Benjamin H.; Schueller, Paul; Bassar, Ronald D.; Nislow, Keith H.; Coombs, Jason A.; Sakrejda, Krzysztof; Morrissey, Michael; Sigourney, Douglas B.; Whiteley, Andrew R.; O'Donnell, Matthew J.; Dubreuil, Todd L.
2015-01-01
Modelling the effects of environmental change on populations is a key challenge for ecologists, particularly as the pace of change increases. Currently, modelling efforts are limited by difficulties in establishing robust relationships between environmental drivers and population responses. We developed an integrated capture–recapture state-space model to estimate the effects of two key environmental drivers (stream flow and temperature) on demographic rates (body growth, movement and survival) using a long-term (11 years), high-resolution (individually tagged, sampled seasonally) data set of brook trout (Salvelinus fontinalis) from four sites in a stream network. Our integrated model provides an effective context within which to estimate environmental driver effects because it takes full advantage of data by estimating (latent) state values for missing observations, because it propagates uncertainty among model components and because it accounts for the major demographic rates and interactions that contribute to annual survival. We found that stream flow and temperature had strong effects on brook trout demography. Some effects, such as reduction in survival associated with low stream flow and high temperature during the summer season, were consistent across sites and age classes, suggesting that they may serve as robust indicators of vulnerability to environmental change. Other survival effects varied across ages, sites and seasons, indicating that flow and temperature may not be the primary drivers of survival in those cases. Flow and temperature also affected body growth rates; these responses were consistent across sites but differed dramatically between age classes and seasons. Finally, we found that tributary and mainstem sites responded differently to variation in flow and temperature. Annual survival (combination of survival and body growth across seasons) was insensitive to body growth and was most sensitive to flow (positive) and temperature (negative).
International Nuclear Information System (INIS)
Zamuner, Stefano; Gomeni, Roberto; Bye, Alan
2002-01-01
Positron-Emission Tomography (PET) is an imaging technology currently used in drug development as a non-invasive measure of drug distribution and interaction with the biochemical target system. The level of receptor occupancy achieved by a compound can be estimated by comparing time-activity measurements in an experiment using tracer alone with the activity measured when the tracer is given following administration of unlabelled compound. The effective use of this surrogate marker as an enabling tool for drug development requires the definition of a model linking brain receptor occupancy with the fluctuation of plasma concentrations. However, the predictive performance of such a model is strongly related to the precision of the estimate of receptor occupancy evaluated in PET scans collected at different times following drug treatment. Several methods have been proposed for the analysis and quantification of the ligand-receptor interactions investigated from PET data. The aim of the present study is to evaluate alternative parameter estimation strategies based on the use of non-linear mixed effect models, allowing us to account for intra- and inter-subject variability in the time-activity data and for covariates potentially explaining this variability. A comparison of the different modeling approaches is presented using real data. The results of this comparison indicate that the mixed effect approach, with a primary model partitioning the variance in terms of Inter-Individual Variability (IIV) and Inter-Occasion Variability (IOV) and a second-stage model relating the changes in binding potential to the dose of unlabelled drug, is definitely the preferred approach.
Yunyun Feng; Dengsheng Lu; Qi Chen; Michael Keller; Emilio Moran; Maiza Nara dos-Santos; Edson Luis Bolfe; Mateus Batistella
2017-01-01
Previous research has explored the potential to integrate lidar and optical data in aboveground biomass (AGB) estimation, but how different data sources, vegetation types, and modeling algorithms influence AGB estimation is poorly understood. This research conducts a comparative analysis of different data sources and modeling approaches in improving AGB estimation....
Bukoski, J. J.; Broadhead, J. S.; Donato, D.; Murdiyarso, D.; Gregoire, T. G.
2016-12-01
Mangroves provide extensive ecosystem services that support both local livelihoods and international environmental goals, including coastal protection, water filtration, biodiversity conservation and the sequestration of carbon (C). While voluntary C market projects that seek to preserve and enhance forest C stocks offer a potential means of generating finance for mangrove conservation, their implementation faces barriers due to the high costs of quantifying C stocks through measurement, reporting and verification (MRV) activities. To streamline MRV activities in mangrove C forestry projects, we develop predictive models for (i) biomass-based C stocks, and (ii) soil-based C stocks for the mangroves of the Asia-Pacific. We use linear mixed effect models to account for spatial correlation in modeling the expected C as a function of stand attributes. The most parsimonious biomass model predicts total biomass C stocks as a function of both basal area and the interaction between latitude and basal area, whereas the most parsimonious soil C model predicts soil C stocks as a function of the logarithmic transformations of both latitude and basal area. Random effects are specified by site for both models, and are found to explain a substantial proportion of variance within the estimation datasets. The root mean square error (RMSE) of the biomass C model is approximated at 24.6 Mg/ha (18.4% of mean biomass C in the dataset), whereas the RMSE of the soil C model is estimated at 4.9 mg C/cm³ (14.1% of mean soil C). A substantial proportion of the variation in soil C, however, is explained by the random effects and thus the use of the soil C model may be most valuable for sites in which field measurements of soil C exist.
A nonparametric mixture model for cure rate estimation.
Peng, Y; Dear, K B
2000-03-01
Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.
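As a concrete anchor for the mixture idea, the simplest nonparametric cure-rate estimate is the plateau of the Kaplan–Meier survival curve: patients still event-free past the last event time are candidates for the cured fraction. The sketch below (plain Python, distinct event times and sufficient follow-up assumed) is a starting point, not the EM / marginal-likelihood machinery of the paper.

```python
def km_cure_fraction(times, events):
    # Kaplan-Meier survival evaluated after the last observation; its
    # plateau is a simple nonparametric estimate of the cure fraction.
    # events[i] == 1 means an observed event, 0 means censored.
    data = sorted(zip(times, events))
    at_risk, surv = len(data), 1.0
    for _, event in data:
        if event:
            surv *= (at_risk - 1) / at_risk
        at_risk -= 1
    return surv

# Three events, then two long-term event-free (censored) patients:
# the survival curve steps down to 4/5 * 3/4 * 2/3 and plateaus near 0.4.
cure = km_cure_fraction([1, 2, 3, 4, 5], [1, 1, 1, 0, 0])
```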
Bukoski, Jacob J.; Broadhead, Jeremy S.; Donato, Daniel C.; Murdiyarso, Daniel; Gregoire, Timothy G.
2017-01-01
Mangroves provide extensive ecosystem services that support local livelihoods and international environmental goals, including coastal protection, biodiversity conservation and the sequestration of carbon (C). While voluntary C market projects seeking to preserve and enhance forest C stocks offer a potential means of generating finance for mangrove conservation, their implementation faces barriers due to the high costs of quantifying C stocks through field inventories. To streamline C quantification in mangrove conservation projects, we develop predictive models for (i) biomass-based C stocks, and (ii) soil-based C stocks for the mangroves of the Asia-Pacific. We compile datasets of mangrove biomass C (197 observations from 48 sites) and soil organic C (99 observations from 27 sites) to parameterize the predictive models, and use linear mixed effect models to model the expected C as a function of stand attributes. The most parsimonious biomass model predicts total biomass C stocks as a function of both basal area and the interaction between latitude and basal area, whereas the most parsimonious soil C model predicts soil C stocks as a function of the logarithmic transformations of both latitude and basal area. Random effects are specified by site for both models, which are found to explain a substantial proportion of variance within the estimation datasets and indicate significant heterogeneity across sites within the region. The root mean square error (RMSE) of the biomass C model is approximated at 24.6 Mg/ha (18.4% of mean biomass C in the dataset), whereas the RMSE of the soil C model is estimated at 4.9 mg C/cm³ (14.1% of mean soil C). The results point to a need for standardization of forest metrics to facilitate meta-analyses, as well as provide important considerations for refining ecosystem C stock models in mangroves. PMID:28068361
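The role of the site-level random effects can be illustrated with the standard empirical-Bayes (BLUP) shrinkage formula for a random-intercept model: a site's mean residual is pulled toward zero in proportion to how little data it has. The variance values below are invented for illustration; they are not the fitted variance components from the paper.

```python
def shrunken_site_effects(residuals_by_site, var_e, var_u):
    # BLUP-style estimate of each site's random intercept: the site's
    # mean residual shrunk by n*var_u / (n*var_u + var_e), so sparsely
    # sampled sites are pulled harder toward the regional mean.
    #   var_e: residual (within-site) variance
    #   var_u: between-site (random-intercept) variance
    effects = {}
    for site, res in residuals_by_site.items():
        n = len(res)
        shrink = n * var_u / (n * var_u + var_e)
        effects[site] = shrink * sum(res) / n
    return effects

# Two sites with the same mean residual (+2) but very different n:
resid = {"A": [2.0] * 20, "B": [2.0]}
eff = shrunken_site_effects(resid, var_e=4.0, var_u=1.0)
```

With only one observation, site "B" keeps just a fifth of its raw mean residual, while the well-sampled site "A" keeps most of it; this is why the random effects "explain a substantial proportion of variance" only where data exist.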
Robust estimation of hydrological model parameters
Directory of Open Access Journals (Sweden)
A. Bárdossy
2008-11-01
Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing computational power, several complex optimization algorithms have emerged, but none of them yields a unique, definitively best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured, as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of the set of N randomly generated parameters was calculated with respect to the set with the best model performance (the Nash–Sutcliffe efficiency was used in this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
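Tukey's half-space depth has no simple closed form in general, but for two-dimensional parameter vectors it can be approximated by projecting onto random directions, as in this Python sketch. The random-direction approximation and the toy parameter cloud are assumptions, not the authors' implementation.

```python
import math
import random

def halfspace_depth(point, cloud, n_dirs=300, seed=1):
    # Approximate Tukey half-space depth of a 2-D parameter vector:
    # the minimum, over random directions u, of the fraction of cloud
    # points lying in the closed half-plane {z : <z,u> >= <point,u>}.
    # Deep points sit centrally in the cloud; shallow points are extreme.
    rng = random.Random(seed)
    depth = 1.0
    for _ in range(n_dirs):
        a = rng.uniform(0.0, 2.0 * math.pi)
        ux, uy = math.cos(a), math.sin(a)
        proj = point[0] * ux + point[1] * uy
        frac = sum(1 for x, y in cloud
                   if x * ux + y * uy >= proj) / len(cloud)
        depth = min(depth, frac)
    return depth

# Symmetric cloud of candidate parameter vectors: the centre is deep,
# a corner point is shallow (robust vs. non-robust candidates).
cloud = [(i, j) for i in range(-3, 4) for j in range(-3, 4)]
d_centre = halfspace_depth((0, 0), cloud)
d_edge = halfspace_depth((3, 3), cloud)
```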
Estimation methods for nonlinear state-space models in ecology
DEFF Research Database (Denmark)
Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro
2011-01-01
The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists; however, it is not always clear which method is the appropriate one to choose. To this end, three approaches to estimation in the theta...... Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...
Efficiently adapting graphical models for selectivity estimation
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...
International Nuclear Information System (INIS)
Kaneko, D.; Moriwaki, Y.
2008-01-01
This study presents a crop production model improvement: the previously adopted Michaelis-Menten (MM) type photosynthesis response function (f_rad-MM) was replaced with a Prioul-Chartier (PC) type function (f_rad-PC). The authors' analysis reflects concerns regarding the background effect of global warming, under simultaneous conditions of high air temperature and strong solar radiation. The MM type function f_rad-MM can give excessive values, leading to an overestimate of photosynthesis rate (PSN) and grain yield for paddy rice. The MM model is applicable to many plants whose PSN increases concomitant with increased insolation: wheat, maize, soybean, etc. For paddy rice, however, the PSN apparently shows a maximum. This paper proves that the MM model overestimated the PSN for paddy rice under sufficient solar radiation: the PSN using the PC model yields 10% lower values. However, the unit crop production index (CPI_U) is almost independent of the MM and PC models because of the respective standardization of both PSN and crop production index using average PSN_0 and CPI_0. The authors improved the estimation method using a photosynthesis-and-sterility-based crop situation index (CSI_E) to produce a crop yield index (CYI_E), which is used to estimate rice yields in place of the crop situation index (CSI); the CSI gives a percentage of rice yields compared to normal annual production. The model calculates PSN including biomass effects, low-temperature sterility, and high-temperature injury by incorporating insolation, effective air temperature, the normalized difference vegetation index (NDVI), and effects of temperature on photosynthesis. Based on routine observation data, the method enables automated crop-production monitoring in remote regions without special observations. This method can quantify grain production early to raise an alarm in Southeast Asian countries, which must confront climate fluctuation through this era of global
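The contrast between the two response types can be sketched with the rectangular hyperbola (MM) and a non-rectangular hyperbola in the spirit of Prioul and Chartier; the exact parameterization used in the paper may differ. As the curvature parameter θ tends to 0, the non-rectangular form collapses to the MM form, which the snippet checks.

```python
import math

def psn_mm(irr, alpha, pmax):
    # Michaelis-Menten (rectangular hyperbola) light response:
    # monotone, saturating toward pmax only as irradiance -> infinity.
    return alpha * irr * pmax / (alpha * irr + pmax)

def psn_nrh(irr, alpha, pmax, theta):
    # Non-rectangular hyperbola (assumed PC-style form): the smaller
    # root of theta*P^2 - (alpha*I + pmax)*P + alpha*I*pmax = 0.
    # theta -> 0 recovers the Michaelis-Menten response.
    b = alpha * irr + pmax
    disc = b * b - 4.0 * theta * alpha * irr * pmax
    return (b - math.sqrt(disc)) / (2.0 * theta)

# Both responses stay below pmax; tiny theta reproduces MM.
p_mm = psn_mm(500.0, 0.05, 30.0)
p_nrh = psn_nrh(500.0, 0.05, 30.0, 0.9)
p_limit = psn_nrh(500.0, 0.05, 30.0, 1e-6)
```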
Estimation in autoregressive models with Markov regime
Ríos, Ricardo; Rodríguez, Luis
2005-01-01
In this paper we derive the consistency of the penalized likelihood method for estimating the number of states of the hidden Markov chain in autoregressive models with Markov regime. A SAEM-type algorithm is used to estimate the model parameters. We test the null hypothesis of a hidden Markov model against an autoregressive process with Markov regime.
International Nuclear Information System (INIS)
Wang, Wencai; Huang, Jianping; Zhou, Tian; Bi, Jianrong; Lin, Lei; Chen, Yonghang; Huang, Zhongwei; Su, Jing
2013-01-01
A heavy dust storm that occurred in Northwestern China during April 24–30, 2010 was studied using observational data along with the Fu–Liou radiative transfer model. The dust storm originated in Mongolia and affected more than 10 provinces of China. Our results showed that dust aerosols have a significant impact on the radiative energy budget. At the Minqin (102.959°E, 38.607°N) and Semi-Arid Climate and Environment Observatory of Lanzhou University (SACOL, 104.13°E, 35.95°N) sites, the net radiative forcing (RF) ranged from 5.93 to 35.7 W m−2 at the top of the atmosphere (TOA), −6.3 to −30.94 W m−2 at the surface, and 16.77 to 56.32 W m−2 in the atmosphere. The maximum net radiative heating rate reached 5.89 K at 1.5 km on 24 April at the Minqin station and 4.46 K at 2.2 km on 29 April at the SACOL station. Our results also indicated that the radiative effect of dust aerosols is affected by aerosol optical depth (AOD), single-scattering albedo (SSA) and surface albedo. Modifications of the radiative energy budget by dust aerosols may have important implications for atmospheric circulation and regional climate. -- Highlights: ► Dust aerosols' optical properties and radiative effects were investigated. ► We have surface observations at Minqin and SACOL, where the heavy dust storm occurred. ► Accurate input parameters for the model were acquired from ground-based measurements. ► Aerosols' optical properties may have changed when transported
Maneuver Estimation Model for Geostationary Orbit Determination
National Research Council Canada - National Science Library
Hirsch, Brian J
2006-01-01
.... The Clohessy-Wiltshire equations were used to model the relative motion of a geostationary satellite about its intended location, and a nonlinear least squares algorithm was developed to estimate the satellite trajectories.
Seo, Seongwon; Hwang, Yongwoo
1999-08-01
Construction and demolition (C&D) debris is generated at the site of various construction activities. However, the amount of the debris is usually so large that it is necessary to estimate the amount of C&D debris as accurately as possible for effective waste management and control in urban areas. In this paper, an effective estimation method using a statistical model was proposed. The estimation process was composed of five steps: estimation of the life span of buildings; estimation of the floor area of buildings to be constructed and demolished; calculation of individual intensity units of C&D debris; and estimation of the future C&D debris production. This method was also applied in the city of Seoul as an actual case, and the estimated amount of C&D debris in Seoul in 2021 was approximately 24 million tons. Of this total amount, 98% was generated by demolition, and the main components of debris were concrete and brick.
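The core of such a statistical estimate is a simple product of projected floor areas and per-area debris intensities, summed over activities. The intensity values below are invented purely for illustration; Seoul's actual intensity units would come from the survey data described above.

```python
def debris_forecast(floor_areas_m2, intensity_t_per_m2):
    # Debris (tonnes) = sum over activities of floor area x unit
    # intensity. Keys of the two dicts must match (activity names).
    return sum(floor_areas_m2[k] * intensity_t_per_m2[k]
               for k in floor_areas_m2)

# Hypothetical inputs: demolition generates far more debris per m^2
# than new construction, which is why it dominates the total.
areas = {"demolition": 1_200_000.0, "construction": 3_500_000.0}
intensity = {"demolition": 1.3, "construction": 0.05}  # assumed t/m^2
total_t = debris_forecast(areas, intensity)
```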
Estimating maternal genetic effects in livestock
Bijma, P.
2006-01-01
This study investigates the estimation of direct and maternal genetic (co)variances, accounting for environmental covariances between direct and maternal effects. Estimated genetic correlations between direct and maternal effects presented in the literature have often been strongly negative, and
Semi-parametric estimation for ARCH models
Directory of Open Access Journals (Sweden)
Raed Alzghool
2018-03-01
Full Text Available In this paper, we conduct semi-parametric estimation for the autoregressive conditional heteroscedasticity (ARCH) model with quasi-likelihood (QL) and asymptotic quasi-likelihood (AQL) estimation methods. The QL approach relaxes the distributional assumptions of ARCH processes. The AQL technique is obtained from the QL method when the process conditional variance is unknown. We present an application of the methods to a daily exchange rate series. Keywords: ARCH model, Quasi-likelihood (QL), Asymptotic quasi-likelihood (AQL), Martingale difference, Kernel estimator
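A minimal illustration of Gaussian quasi-likelihood for an ARCH(1) model, using a coarse grid search instead of a numerical optimizer. The simulation design and the grid are arbitrary choices for the sketch, not the paper's setup; the point is that only the first two conditional moments of the process enter the criterion, so the innovation law need not be normal.

```python
import math
import random

def simulate_arch1(a0, a1, n, seed=42):
    # y_t = sqrt(h_t) * e_t,  h_t = a0 + a1 * y_{t-1}^2
    rng = random.Random(seed)
    y, prev = [], 0.0
    for _ in range(n):
        h = a0 + a1 * prev * prev
        prev = rng.gauss(0.0, 1.0) * math.sqrt(h)
        y.append(prev)
    return y

def ql_fit_arch1(y, grid):
    # Gaussian quasi-log-likelihood: -sum(log h_t + y_t^2 / h_t),
    # maximized over a coarse (a0, a1) grid.
    best, best_ql = None, -float("inf")
    for a0 in grid:
        for a1 in grid:
            ql, prev = 0.0, 0.0
            for yt in y:
                h = a0 + a1 * prev * prev
                ql -= math.log(h) + yt * yt / h
                prev = yt
            if ql > best_ql:
                best_ql, best = ql, (a0, a1)
    return best

y = simulate_arch1(0.5, 0.3, 4000)
a0_hat, a1_hat = ql_fit_arch1(y, [0.1 * i for i in range(1, 10)])
```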
Gerritse, S.C.
2016-01-01
Official Statistics bureaus are periodically asked to give an estimate of their country's population, which can be defined by the number of usual residents. A person is considered a usual resident when they have lived in the Netherlands for longer than a year, or if they have the intention to reside
A Dynamic Travel Time Estimation Model Based on Connected Vehicles
Directory of Open Access Journals (Sweden)
Daxin Tian
2015-01-01
Full Text Available With advances in connected vehicle technology, dynamic vehicle route guidance models are gradually becoming indispensable equipment for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on connected vehicles. To estimate the real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experimental results prove the effectiveness of the travel time estimation method.
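At its simplest, the per-link travel time behind such a model comes from the space-mean (harmonic-mean) speed of the connected-vehicle probes on the link; this is a standard identity, not the paper's link-dividing algorithm.

```python
def link_travel_time(length_m, probe_speeds_mps):
    # Space-mean speed is the harmonic mean of probe speeds: it weights
    # slow vehicles correctly, since they spend more time on the link.
    n = len(probe_speeds_mps)
    space_mean = n / sum(1.0 / v for v in probe_speeds_mps)
    return length_m / space_mean

# Two probes at 10 and 20 m/s on a 400 m link: the harmonic mean is
# 13.33 m/s, giving about 30 s (the arithmetic mean would give 26.7 s).
tt = link_travel_time(400.0, [10.0, 20.0])
```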
Energy Technology Data Exchange (ETDEWEB)
Armenta-Deu, C. (Universidad Complutense de Madrid (ES). Facultad Fisicas); Lukac, B. (University of T. and C. Zilina (CS))
1991-01-01
The radiation transmittance and absorptance of materials vary according to the angle of incidence of the incoming solar radiation. Therefore, the efficiency of most solar converters (thermal or photovoltaic) is a function of the sun's position through the angle of incidence. This problem may be taken account of by the incidence angle modifier, which is considered in this paper. An analytic expression for the incidence angle modifier, based on meteorological data or on geographic and geometric parameters, has been developed; this expression includes the effect of beam and diffuse radiation as well as the global influence. A comparison between measured data and those computed from our model has given a very good correlation, the results being within ±3% for horizontal and tilted planes, and within ±7% for vertical surfaces, on average. The method also computes the collectible solar energy within a 5% error for thresholds up to 300 W m−2. The method has been validated for more than 30 locations in southern and western Europe. (author).
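A common one-parameter form of the incidence angle modifier is the ASHRAE-style expression K(θ) = 1 − b0·(1/cos θ − 1); this assumed form and the b0 value below are illustrative stand-ins, not the authors' analytic expression.

```python
import math

def incidence_angle_modifier(theta_deg, b0=0.1):
    # K(theta) = 1 - b0 * (1/cos(theta) - 1); b0 = 0.1 is a typical
    # flat-plate collector value (an assumption, not the paper's fit).
    theta = math.radians(theta_deg)
    return 1.0 - b0 * (1.0 / math.cos(theta) - 1.0)

def effective_beam_irradiance(beam_wm2, theta_deg, b0=0.1):
    # Beam irradiance usefully absorbed after the angular loss.
    return beam_wm2 * incidence_angle_modifier(theta_deg, b0)

k0 = incidence_angle_modifier(0.0)    # normal incidence: no loss
k60 = incidence_angle_modifier(60.0)  # 1 - 0.1*(2 - 1) = 0.9
g = effective_beam_irradiance(800.0, 60.0)
```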
Parameter Estimation of Partial Differential Equation Models
Xun, Xiaolei
2013-09-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
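The computational burden described above (one forward numerical solve per candidate parameter value) can be seen in a toy version: recover the diffusion coefficient of a 1-D heat equation by brute-force search over candidates, each requiring a full explicit finite-difference solve. The grid, time step, and candidate list are arbitrary choices for the sketch, not the parameter cascading or Bayesian machinery of the paper.

```python
import math

def heat_step(u, diff, dx, dt):
    # One explicit finite-difference step of u_t = D * u_xx with fixed
    # (Dirichlet) boundary values; stable for D*dt/dx^2 <= 0.5.
    r = diff * dt / (dx * dx)
    return ([u[0]] +
            [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
             for i in range(1, len(u) - 1)] +
            [u[-1]])

def solve_heat(diff, u0, dx, dt, n_steps):
    u = list(u0)
    for _ in range(n_steps):
        u = heat_step(u, diff, dx, dt)
    return u

# "Measurements": solve with the true D = 0.8, then recover D by
# picking the candidate that minimizes the squared misfit.
dx, dt, n_steps = 0.1, 0.002, 50
u0 = [math.sin(math.pi * i * dx) for i in range(11)]
data = solve_heat(0.8, u0, dx, dt, n_steps)
candidates = [0.2, 0.4, 0.6, 0.8, 1.0]
d_hat = min(candidates,
            key=lambda d: sum((a - b) ** 2 for a, b in
                              zip(solve_heat(d, u0, dx, dt, n_steps), data)))
```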
EFFECTIVE TOOL WEAR ESTIMATION THROUGH ...
African Journals Online (AJOL)
Though a number of researchers have used MLPs for fusing information and estimating tool status, little literature is available on the influence of the neural network parameters that may affect accurate estimation of the tool status. This point has also been emphasized in [4], where the authors stated ...
Watad, Abdulla; Bragazzi, Nicola L; Bacigaluppi, Susanna; Amital, Howard; Watad, Samaa; Sharif, Kassem; Bisharat, Bishara; Siri, Anna; Mahamid, Ala; Abu Ras, Hakim; Nasr, Ahmed; Bilotta, Federico; Robba, Chiara; Adawi, Mohammad
2018-02-23
Artificial Intelligence (AI) techniques play a major role in anesthesiology, even though their importance is often overlooked. In the extant literature, AI approaches, such as Artificial Neural Networks (ANNs), have been underutilized, mainly being used to model the patient's consciousness state, to predict the precise amount of anesthetic gases, the level of analgesia, or the need for anesthesiological blocks, among others. In the field of neurosurgery, ANNs have been effectively applied to the diagnosis and prognosis of cerebral tumors, seizures, low back pain, and also to the monitoring of intracranial pressure (ICP). A MultiLayer Perceptron (MLP), which is a feedforward ANN, with hyperbolic tangent as the activation function in the input/hidden layers, softmax as the activation function in the output layer, and cross-entropy as the error function, was used to model the impact of prone versus supine position and the use of positive end expiratory pressure (PEEP) on ICP in a sample of 30 patients undergoing spinal surgery. Different non-invasive surrogate estimations of ICP have been used and compared: namely, mean optic nerve sheath diameter (ONSD), non-invasive estimated cerebral perfusion pressure (NCPP), pulsatility index (PI), ICP derived from PI (ICP-PI), and flow velocity diastolic formula (FVDICP). ONSD proved to be a more robust surrogate estimation of ICP, with a predictive power of 75%, whilst the power of NCPP, ICP-PI, PI, and FVDICP was 60.5%, 54.8%, 53.1%, and 47.7%, respectively. Our MLP analysis confirmed the findings we previously obtained with regression, correlation, and multivariate Receiver Operating Characteristic (multi-ROC) analyses. ANNs can be successfully used to predict the effects of prone versus supine position and PEEP on ICP in patients undergoing spinal surgery using different non-invasive surrogate estimators of ICP.
Cole, Stephen R; Jacobson, Lisa P; Tien, Phyllis C; Kingsley, Lawrence; Chmiel, Joan S; Anastos, Kathryn
2010-01-01
To estimate the net effect of imperfectly measured highly active antiretroviral therapy on incident acquired immunodeficiency syndrome or death, the authors combined inverse probability-of-treatment-and-censoring weighted estimation of a marginal structural Cox model with regression-calibration methods. Between 1995 and 2007, 950 human immunodeficiency virus-positive men and women were followed in 2 US cohort studies. During 4,054 person-years, 374 initiated highly active antiretroviral therapy, 211 developed acquired immunodeficiency syndrome or died, and 173 dropped out. Accounting for measured confounders and determinants of dropout, the weighted hazard ratio for acquired immunodeficiency syndrome or death comparing use of highly active antiretroviral therapy in the prior 2 years with no therapy was 0.36 (95% confidence limits: 0.21, 0.61). This association was relatively constant over follow-up (P = 0.19) and stronger than crude or adjusted hazard ratios of 0.75 and 0.95, respectively. Accounting for measurement error in reported exposure using external validation data on 331 men and women provided a hazard ratio of 0.17, with bias shifted from the hazard ratio to the estimate of precision as seen by the 2.5-fold wider confidence limits (95% confidence limits: 0.06, 0.43). Marginal structural measurement-error models can simultaneously account for 3 major sources of bias in epidemiologic research: validated exposure measurement error, measured selection bias, and measured time-fixed and time-varying confounding.
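The inverse-probability weights at the heart of a marginal structural model can be sketched with frequency estimates on a single discrete confounder. Real analyses, including this one, use logistic regression and time-varying versions; the snippet below only illustrates the stabilized-weight identity sw = P(A)/P(A|L), whose sample mean is 1 by construction.

```python
from collections import defaultdict

def stabilized_weights(treated, stratum):
    # sw_i = P(A = a_i) / P(A = a_i | L = l_i), both probabilities
    # estimated by simple frequencies (a toy stand-in for the logistic
    # models used in practice). Requires no empty treatment cells.
    n = len(treated)
    p_marg = sum(treated) / n
    tot, trt = defaultdict(int), defaultdict(int)
    for a, l in zip(treated, stratum):
        tot[l] += 1
        trt[l] += a
    weights = []
    for a, l in zip(treated, stratum):
        p_cond = trt[l] / tot[l]
        weights.append(p_marg / p_cond if a else (1 - p_marg) / (1 - p_cond))
    return weights

# Hypothetical data: treatment more common in stratum 0 than stratum 1.
treated = [1, 1, 0, 0, 1, 0, 0, 0]
stratum = [0, 0, 0, 0, 1, 1, 1, 1]
w = stabilized_weights(treated, stratum)
```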
FUZZY MODELING BY SUCCESSIVE ESTIMATION OF RULES ...
African Journals Online (AJOL)
This paper presents an algorithm for automatically deriving fuzzy rules directly from a set of input-output data of a process for the purpose of modeling. The rules are extracted by a method termed successive estimation. This method is used to generate a model without truncating the number of fired rules, to within user ...
Groundwater temperature estimation and modeling using hydrogeophysics.
Nguyen, F.; Lesparre, N.; Hermans, T.; Dassargues, A.; Klepikova, M.; Kemna, A.; Caers, J.
2017-12-01
Groundwater temperature may be of use as a state-variable proxy for aquifer heat storage, for highlighting preferential flow paths, or for contaminant remediation monitoring. However, its estimation often relies on scarce temperature data collected in boreholes. Hydrogeophysical methods such as electrical resistivity tomography (ERT) and distributed temperature sensing (DTS) may provide more exhaustive spatial information on the bulk properties of interest than samples from boreholes. While a properly calibrated DTS reading provides direct measurements of the groundwater temperature in the well, ERT requires one to determine the fractional resistivity change per degree Celsius. One advantage of this petrophysical relationship is its relative simplicity: the fractional change is often found to be around 0.02 per degree Celsius, and represents mainly the variation of electrical resistivity due to the viscosity effect. However, in the presence of chemical and kinetic effects, the variation may also depend on the duration of the test and may neglect reactions occurring between the pore water and the solid matrix. Such effects are not expected to be important for low-temperature systems (<30 °C), at least for short experiments. In this contribution, we use different field experiments under natural and forced flow conditions to review developments for the joint use of DTS and ERT to map and monitor the temperature distribution within aquifers, to characterize aquifers in terms of heterogeneity and to better understand processes. We show how temperature time-series measurements might be used to constrain the ERT inverse problem in space and time, and how combined ERT-derived and DTS estimation of temperature may be used together with hydrogeological modeling to provide predictions of the groundwater temperature field.
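With the roughly 2 %/°C fractional change quoted above, temperature can be read off ERT-derived resistivity by inverting the linear petrophysical relation. The reference resistivity and temperature in the example are invented values for illustration.

```python
def temperature_from_resistivity(rho_t, rho_ref, t_ref=25.0, frac=0.02):
    # Invert the linear petrophysical relation
    #   rho(T) = rho_ref / (1 + frac * (T - t_ref)),
    # with frac ~ 0.02 per deg C (the viscosity effect discussed above).
    return t_ref + (rho_ref / rho_t - 1.0) / frac

# A 20 % drop in resistivity relative to a (hypothetical) 100 ohm-m
# reference at 25 deg C maps to roughly 10 deg C of warming.
rho_obs = 100.0 / 1.2
t_est = temperature_from_resistivity(rho_obs, rho_ref=100.0)
```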
Modelling and parameter estimation of dynamic systems
Raol, JR; Singh, J
2004-01-01
Parameter estimation is the process of using observations from a system to develop mathematical models that adequately represent the system dynamics. The assumed model consists of a finite set of parameters, the values of which are calculated using estimation techniques. Most existing techniques are based on least-squares minimization of the error between the model response and the actual system response. However, with the proliferation of high-speed digital computers, elegant and innovative techniques like the filter error method, H-infinity, and Artificial Neural Networks are finding more and more …
Bärgman, Jonas; Boda, Christian-Nils; Dozza, Marco
2017-05-01
As the development and deployment of in-vehicle intelligent safety systems (ISS) for crash avoidance and mitigation have rapidly increased in recent decades, the need to evaluate their prospective safety benefits before introduction has never been higher. Counterfactual simulations using relevant mathematical models (for vehicle dynamics, sensors, the environment, ISS algorithms, and driver behavior) have been identified as having high potential. However, although most of these models are relatively mature, models of driver behavior in the critical seconds before a crash are still relatively immature, and there are large conceptual differences between driver models. The objective of this paper is, firstly, to demonstrate the importance of the choice of driver model when counterfactual simulations are used to evaluate two ISS: forward collision warning (FCW) and autonomous emergency braking (AEB). Secondly, the paper demonstrates how counterfactual simulations can be used to perform sensitivity analyses on parameter settings, both for driver behavior and for ISS algorithms. Finally, the paper evaluates the effect of the choice of glance distribution in the driver behavior model on the safety benefit estimation. The paper uses pre-crash kinematics and driver behavior from 34 rear-end crashes from the SHRP2 naturalistic driving study for the demonstrations. The results for FCW show a large difference in the percentage of avoided crashes between conceptually different models of driver behavior, while differences were small for conceptually similar models. As expected, the choice of driver behavior model did not affect the AEB benefit much. Based on our results, researchers and others who aim to evaluate ISS with the driver in the loop through counterfactual simulations should make deliberate and well-grounded choices of driver models: the choice of model matters. Copyright © 2017 Elsevier Ltd. All rights reserved.
Chesson, Harrell W.; Markowitz, Lauri E.; Hariri, Susan; Ekwueme, Donatus U.; Saraiya, Mona
2016-01-01
ABSTRACT Introduction: The objective of this study was to assess the incremental costs and benefits of the 9-valent HPV vaccine (9vHPV) compared with the quadrivalent HPV vaccine (4vHPV). Like 4vHPV, 9vHPV protects against HPV types 6, 11, 16, and 18; 9vHPV also protects against 5 additional HPV types: 31, 33, 45, 52, and 58. Methods: We adapted a previously published model of the impact and cost-effectiveness of 4vHPV to include the 5 additional HPV types in 9vHPV. The vaccine strategies we examined were (1) 4vHPV for males and females; (2) 9vHPV for females and 4vHPV for males; and (3) 9vHPV for males and females. In the base case, 9vHPV cost $13 more per dose than 4vHPV, based on available vaccine price information. Results: Providing 9vHPV to females compared with 4vHPV for females (assuming 4vHPV for males in both scenarios) was cost-saving regardless of whether or not cross-protection for 4vHPV was assumed. The cost per quality-adjusted life year (QALY) gained by 9vHPV for both sexes (compared with 4vHPV for both sexes) was … Compared with a vaccination program of 4vHPV for both sexes, a vaccination program of 9vHPV for both sexes can improve health outcomes and can be cost-saving. PMID:26890978
Comparison of Estimation Procedures for Multilevel AR(1) Models
Directory of Open Access Journals (Sweden)
Tanja eKrone
2016-04-01
Full Text Available To estimate a time series model for multiple individuals, a multilevel model may be used. In this paper we compare two estimation methods for the autocorrelation in multilevel AR(1) models, namely Maximum Likelihood Estimation (MLE) and Bayesian Markov Chain Monte Carlo. Furthermore, we examine the difference between modeling fixed and random individual parameters. To this end, we perform a simulation study with a fully crossed design, in which we vary the length of the time series (10 or 25), the number of individuals per sample (10 or 25), the mean of the autocorrelation (-0.6 to 0.6 inclusive, in steps of 0.3), and the standard deviation of the autocorrelation (0.25 or 0.40). We found that the random estimators of the population autocorrelation show less bias and higher power, compared to the fixed estimators. As expected, the random estimators profit strongly from a higher number of individuals, while this effect is small for the fixed estimators. The fixed estimators profit slightly more from a higher number of time points than the random estimators. When possible, random estimation is preferred to fixed estimation. The difference between MLE and Bayesian estimation is nearly negligible. The Bayesian estimation shows a smaller bias, but MLE shows a smaller variability (i.e., standard deviation of the parameter estimates). Finally, better results are found for a higher number of individuals and time points, and for a lower individual variability of the autocorrelation. The effect of the size of the autocorrelation differs between outcome measures.
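The "fixed" per-individual estimation discussed above can be illustrated with a minimal simulation (not the paper's MLE or Bayesian machinery): generate AR(1) series for several individuals, estimate each autocorrelation by lag-1 least squares, and average. All sample sizes and parameter values below are illustrative.

```python
import random

def simulate_ar1(phi, t, rng):
    """Generate a length-t AR(1) series with standard normal innovations."""
    x = [rng.gauss(0.0, 1.0)]
    for _ in range(t - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, 1.0))
    return x

def lag1_estimate(x):
    """No-intercept OLS slope of x[t] on x[t-1]: the naive AR(1) estimator."""
    num = sum(a * b for a, b in zip(x[1:], x[:-1]))
    den = sum(a * a for a in x[:-1])
    return num / den

rng = random.Random(42)
# 25 individuals, 200 time points each, true autocorrelation 0.3 (illustrative).
phis = [lag1_estimate(simulate_ar1(0.3, 200, rng)) for _ in range(25)]
mean_phi = sum(phis) / len(phis)
print(round(mean_phi, 2))  # close to the true value 0.3
```

With the short series studied in the paper (10 or 25 time points), this naive estimator is noticeably biased, which is exactly why pooling individuals through a multilevel model pays off.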
How to use COSMIC Functional Size in Effort Estimation Models?
Gencel, Cigdem
2008-01-01
Although Functional Size Measurement (FSM) methods have become widely used by software organizations, functional-size-based effort estimation still needs further investigation. Most studies on effort estimation consider the total functional size of the software as the primary input to estimation models, and they mostly focus on identifying the project parameters that might have a significant effect on the size-effort relationship. This study brings suggestions on how to use COSMIC ...
Pan, Shin-Liang; Chen, Hsiu-Hsi
2010-09-01
The rates of functional recovery after stroke tend to decrease with time. Time-varying Markov processes (TVMP) may be more biologically plausible than time-invariant Markov processes for modeling such data. However, analysis of such stochastic processes, particularly tackling reversible transitions and the incorporation of random effects into models, can be analytically intractable. We make use of ordinary differential equations to solve continuous-time TVMP with reversible transitions. The proportional hazard form was used to assess the effects of an individual's covariates on multi-state transitions, with the incorporation of random effects that capture the residual variation remaining after the measured covariates are accounted for, under the framework of the generalized linear model. We further built a Bayesian directed acyclic graphical model to obtain the full joint posterior distribution. Markov chain Monte Carlo (MCMC) with Gibbs sampling was applied to estimate parameters based on posterior marginal distributions with multiple integrands. The proposed method is illustrated with empirical data from a study on functional recovery after stroke. Copyright 2010 Elsevier Inc. All rights reserved.
Comparing fixed effects and covariance structure estimators for panel data
DEFF Research Database (Denmark)
Ejrnæs, Mette; Holm, Anders
2006-01-01
In this article, the authors compare the traditional econometric fixed effect estimator with the maximum likelihood estimator implied by covariance structure models for panel data. Their findings are that the maximum likelihood estimator is remarkably robust to certain types of misspecifications...
Huang, Guowen; Lee, Duncan; Scott, Marian
2015-01-01
The long-term health effects of air pollution can be estimated using a spatio-temporal ecological study, where the disease data are counts of hospital admissions from populations in small areal units at yearly intervals. Spatially representative pollution concentrations for each areal unit are typically estimated by applying Kriging to data from a sparse monitoring network, or by computing averages over grid level concentrations from an atmospheric dispersion model. We propose a novel fusion model for estimating spatially aggregated pollution concentrations using both the modelled and monitored data, and relate these concentrations to respiratory disease in a new study in Scotland between 2007 and 2011. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Estimation and uncertainty of reversible Markov models.
Trendelkamp-Schroer, Benjamin; Wu, Hao; Paul, Fabian; Noé, Frank
2015-11-07
Reversibility is a key concept in Markov models and master-equation models of molecular kinetics. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is, therefore, crucial to the successful application of the previously developed theory. In this work, we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices, with and without a given stationary vector, taking into account the need for a suitable prior distribution that preserves the metastable features of the observed process during posterior inference. All algorithms here are implemented in the PyEMMA software (http://pyemma.org) as of version 2.0.
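A simplified version of the reversible maximum likelihood estimation discussed above can be sketched with the classic fixed-point iteration on a count matrix; PyEMMA provides the production implementations, and the count matrix here is made up.

```python
def reversible_mle(C, n_iter=500):
    """Fixed-point iteration for the reversible transition matrix MLE.

    C is a square transition-count matrix; returns (T, pi) with T the
    estimated reversible transition matrix and pi its stationary vector.
    """
    n = len(C)
    c_row = [sum(row) for row in C]
    # Initialize the symmetric weights x_ij with symmetrized counts.
    X = [[(C[i][j] + C[j][i]) / 2.0 for j in range(n)] for i in range(n)]
    for _ in range(n_iter):
        x_row = [sum(row) for row in X]
        X = [[(C[i][j] + C[j][i]) / (c_row[i] / x_row[i] + c_row[j] / x_row[j])
              for j in range(n)] for i in range(n)]
    x_row = [sum(row) for row in X]
    total = sum(x_row)
    T = [[X[i][j] / x_row[i] for j in range(n)] for i in range(n)]
    pi = [x_row[i] / total for i in range(n)]
    return T, pi

# Toy count matrix from a hypothetical 3-state trajectory.
C = [[90, 10, 0], [5, 80, 15], [0, 10, 90]]
T, pi = reversible_mle(C)
# By construction pi[i] * T[i][j] == pi[j] * T[j][i] (detailed balance).
```

Because the weights X stay symmetric at every iterate, detailed balance holds exactly throughout, which is the defining property of a reversible estimate.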
Error estimation and adaptive chemical transport modeling
Directory of Open Access Journals (Sweden)
Malte Braack
2014-09-01
Full Text Available We present a numerical method to use several chemical transport models of increasing accuracy and complexity in an adaptive way. In large parts of the domain, a simplified chemical model may be used, whereas in certain regions a more complex model is needed for accuracy reasons. A mathematically derived error estimator measures the modeling error and provides information on where to use more accurate models. The error is measured in terms of output functionals; therefore, one has to consider adjoint problems which carry sensitivity information. This concept is demonstrated by means of ozone formation and pollution emission.
Parameter Estimation for Thurstone Choice Models
Energy Technology Data Exchange (ETDEWEB)
Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-04-24
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
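For the Bradley-Terry special case (pair comparisons) mentioned above, the maximum likelihood strengths can be computed with the standard minorization-maximization updates; this is a sketch with made-up win counts, not the paper's estimator for general top-1 lists.

```python
def bradley_terry(wins, n_iter=200):
    """MM updates for Bradley-Terry strengths; wins[i][j] = i's wins over j."""
    n = len(wins)
    w = [1.0] * n
    for _ in range(n_iter):
        new = []
        for i in range(n):
            total_wins = sum(wins[i])
            denom = sum((wins[i][j] + wins[j][i]) / (w[i] + w[j])
                        for j in range(n) if j != i)
            new.append(total_wins / denom if denom > 0 else w[i])
        s = sum(new)
        w = [v / s for v in new]  # normalize: strengths are scale-invariant
    return w

# Made-up win counts: item 0 strongest, item 2 weakest.
wins = [[0, 7, 9],
        [3, 0, 6],
        [1, 4, 0]]
w = bradley_terry(wins)
print(w[0] > w[1] > w[2])  # True
```

With a balanced comparison design like this one, the MLE ordering matches the ordering by total wins; the MM iteration simply makes the strengths quantitative.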
Zein, Rizqy Amelia; Suhariadi, Fendy; Hendriani, Wiwin
2017-01-01
The research aimed to investigate the effect of lay knowledge of pulmonary tuberculosis (TB) and prior contact with pulmonary TB patients on a health-belief model (HBM), as well as to identify the social determinants that affect lay knowledge. A survey research design was used, in which participants filled in a questionnaire measuring the HBM and lay knowledge of pulmonary TB. Research participants were 500 residents of the Semampir, Asemrowo, Bubutan, Pabean Cantian, and Simokerto districts, where the risk of pulmonary TB transmission is higher than in other districts of Surabaya. Being female, being older, and having prior contact with pulmonary TB patients significantly increase the likelihood of having a higher level of lay knowledge. Lay knowledge is a substantial determinant for estimating belief in the effectiveness of health behavior and in the personal health threat. Prior contact with pulmonary TB patients explains belief in the effectiveness of a health behavior, yet fails to predict participants' belief in the personal health threat. Health authorities should prioritize males and young people as the main target groups of a pulmonary TB awareness campaign. The campaign should aim to correct people's misconceptions about pulmonary TB and thereby reshape health-risk perception, rather than focusing solely on improving lay knowledge.
Parameter and Uncertainty Estimation in Groundwater Modelling
DEFF Research Database (Denmark)
Jensen, Jacob Birk
The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions, and if these are to be made on solid grounds, the uncertainty attached to model results must be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models. Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study. The following two chapters concern calibration … was applied. Capture zone modelling was conducted on a synthetic stationary 3-dimensional flow problem involving river, surface and groundwater flow. Simulated capture zones were illustrated as likelihood maps and compared with deterministic capture zones derived from a reference model. The results showed …
Extreme gust wind estimation using mesoscale modeling
DEFF Research Database (Denmark)
Larsén, Xiaoli Guo; Kruger, Andries
2014-01-01
Currently, the existing estimation of extreme gust winds, e.g. the 50-year winds of 3 s values in the IEC standard, is based on a statistical model to convert the 1:50-year wind values from the 10 min resolution. This statistical model assumes a Gaussian process that satisfies the classical … through turbulent eddies. This process is modeled using the mesoscale Weather Research and Forecasting (WRF) model. The gust at the surface is calculated as the largest wind over a layer where the averaged turbulence kinetic energy is greater than the averaged buoyancy force. The experiments have been …
Robust estimation procedure in panel data model
Energy Technology Data Exchange (ETDEWEB)
Shariff, Nurul Sima Mohamad [Faculty of Science of Technology, Universiti Sains Islam Malaysia (USIM), 71800, Nilai, Negeri Sembilan (Malaysia); Hamzah, Nor Aishah [Institute of Mathematical Sciences, Universiti Malaya, 50630, Kuala Lumpur (Malaysia)
2014-06-19
Panel data modeling has received great attention in econometric research recently. This is due to the availability of data sources and the interest in studying cross sections of individuals observed over time. However, problems may arise when modeling the panel in the presence of cross-sectional dependence and outliers. Even though there are a few methods that take into consideration the presence of cross-sectional dependence in the panel, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.
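The paper's panel estimator is not reproduced here, but the core idea of robustness to outliers can be illustrated with Huber-weighted iteratively reweighted least squares on a simple linear fit; the data and tuning constant below are illustrative.

```python
def huber_irls(xs, ys, k=1.345, n_iter=50):
    """Huber-weighted IRLS for a simple linear fit y = a + b*x."""
    a, b = 0.0, 0.0
    for _ in range(n_iter):
        resid = [y - (a + b * x) for x, y in zip(xs, ys)]
        # Robust scale: median absolute residual / 0.6745 (MAD-style).
        scale = sorted(abs(r) for r in resid)[len(resid) // 2] / 0.6745 or 1.0
        w = [1.0 if abs(r) <= k * scale else k * scale / abs(r) for r in resid]
        sw = sum(w)
        sx = sum(wi * x for wi, x in zip(w, xs))
        sy = sum(wi * y for wi, y in zip(w, ys))
        sxx = sum(wi * x * x for wi, x in zip(w, xs))
        sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
        a = (sy - b * sx) / sw
    return a, b

xs = [float(x) for x in range(10)]
ys = [2.0 * x + 1.0 for x in xs]
ys[9] = 60.0  # gross outlier replacing the last observation
a, b = huber_irls(xs, ys)
print(round(b, 2))  # close to the true slope 2.0 despite the outlier
```

Ordinary least squares on the same data would be pulled strongly toward the outlier; the Huber weights shrink its influence toward zero as the fit improves.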
Parameter Estimation for a Class of Lifetime Models
Directory of Open Access Journals (Sweden)
Xinyang Ji
2014-01-01
Full Text Available Our purpose in this paper is to present a better method of parametric estimation for a bivariate nonlinear regression model, which takes the performance indicator of rubber aging as the dependent variable and time and temperature as the independent variables. We point out that the commonly used two-step method (TSM), which splits the model and estimates parameters separately, has limitations. Instead, we apply Marquardt's method (MM) to implement parametric estimation directly for the model and compare these two methods of parametric estimation by random simulation. Our results show that MM achieves a better data fit, more reasonable parameter estimates, and smaller prediction error compared with TSM.
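The paper's exact aging model is not given in the abstract; as an illustration of fitting a bivariate nonlinear model directly (one joint fit over all time-temperature data, rather than a two-step split), assume a hypothetical Arrhenius-type law and recover its parameters by a crude grid search.

```python
import math

def predict(A, B, t, T):
    """Hypothetical Arrhenius-type aging law: y = exp(-A*exp(-B/T)*t)."""
    return math.exp(-A * math.exp(-B / T) * t)

def direct_fit(data, A_grid, B_grid):
    """One joint least-squares fit over all (t, T, y) triples."""
    best = None
    for A in A_grid:
        for B in B_grid:
            sse = sum((y - predict(A, B, t, T)) ** 2 for t, T, y in data)
            if best is None or sse < best[0]:
                best = (sse, A, B)
    return best[1], best[2]

true_A, true_B = 2000.0, 3000.0  # made-up parameter values
data = [(t, T, predict(true_A, true_B, t, T))
        for t in (10.0, 50.0, 100.0, 500.0) for T in (300.0, 330.0, 360.0)]
A_hat, B_hat = direct_fit(data, [1000.0, 1500.0, 2000.0, 2500.0],
                          [2500.0, 3000.0, 3500.0])
print(A_hat, B_hat)  # 2000.0 3000.0 (exact recovery on noiseless data)
```

A Levenberg-Marquardt solver, as the paper uses, replaces this grid search with damped Gauss-Newton steps, but the key point is the same: both parameters are estimated jointly from all of the data.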
Multilevel Autoregressive Mediation Models: Specification, Estimation, and Applications.
Zhang, Qian; Wang, Lijuan; Bergeman, C S
2017-11-27
In the current study, extending from the cross-lagged panel models (CLPMs) in Cole and Maxwell (2003), we proposed the multilevel autoregressive mediation models (MAMMs) by allowing the coefficients to differ across individuals. In addition, Level-2 covariates can be included to explain the interindividual differences of mediation effects. Given the complexity of the proposed models, Bayesian estimation was used. Both a CLPM and an unconditional MAMM were fitted to daily diary data. The 2 models yielded different statistical conclusions regarding the average mediation effect. A simulation study was conducted to examine the estimation accuracy of Bayesian estimation for MAMMs and consequences of model mis-specifications. Factors considered included the sample size (N), number of time points (T), fixed indirect and direct effect sizes, and Level-2 variances and covariances. Results indicated that the fixed effect estimates for the indirect effect components (a and b) and the fixed effects of Level-2 covariates were accurate when N ≥ 50 and T ≥ 5. For estimating Level-2 variances and covariances, they were accurate provided a sufficiently large N and T (e.g., N ≥ 500 and T ≥ 50). Estimates of the average mediation effect were generally accurate when N ≥ 100 and T ≥ 10, or N ≥ 50 and T ≥ 20. Furthermore, we found that when Level-2 variances were zero, MAMMs yielded valid inferences about the fixed effects, whereas when random effects existed, CLPMs had low coverage rates for fixed effects. DIC can be used for model selection. Limitations and future directions were discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Model-Based Optimizing Control and Estimation Using Modelica Model
Directory of Open Access Journals (Sweden)
L. Imsland
2010-07-01
Full Text Available This paper reports on experiences from case studies in using Modelica/Dymola models interfaced to control and optimization software, as process models in real time process control applications. Possible applications of the integrated models are in state- and parameter estimation and nonlinear model predictive control. It was found that this approach is clearly possible, providing many advantages over modeling in low-level programming languages. However, some effort is required in making the Modelica models accessible to NMPC software.
Estimating Stochastic Volatility Models using Prediction-based Estimating Functions
DEFF Research Database (Denmark)
Lunde, Asger; Brix, Anne Floor
… to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from the two estimation methods without noise correction is studied. Second, a noise-robust GMM estimator is constructed by approximating integrated volatility by a realized kernel instead of realized variance. The PBEFs are also recalculated in the noise setting, and the two estimation methods' ability …
DEFF Research Database (Denmark)
Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.
2011-01-01
Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model to simulate the runoff from a small catchment in Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate … the uncertainty of the weather radar rainfall input. The main finding of this work is that the input uncertainty propagates through the urban drainage model with significant effects on the model result. The GLUE methodology is in general a usable way to explore this uncertainty, although the exact width …
Multilevel models improve precision and speed of IC50 estimates.
Vis, Daniel J; Bombardelli, Lorenzo; Lightfoot, Howard; Iorio, Francesco; Garnett, Mathew J; Wessels, Lodewyk Fa
2016-05-01
Experimental variation in dose-response data of drugs tested on cell lines results in inaccuracies in the estimate of a key drug sensitivity characteristic: the IC50. We aim to improve the precision of the half-maximal inhibitory concentration (IC50) estimates by simultaneously employing all dose-responses across all cell lines and drugs, rather than using a single drug-cell line response. We propose a multilevel mixed effects model that takes advantage of all available dose-response data. The new estimates are highly concordant with the currently used Bayesian model when the data are well behaved; otherwise, the multilevel model is clearly superior. The multilevel model yields a significant reduction of extreme IC50 estimates, an increase in precision, and it runs orders of magnitude faster.
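The basic estimation target, an IC50, can be sketched for a single dose-response curve (this toy ignores the paper's multilevel structure): assuming a unit Hill slope, a 1-D least-squares search recovers the IC50 from noiseless data.

```python
def response(dose, ic50):
    """Unit-Hill-slope logistic: fraction of signal remaining at a dose."""
    return 1.0 / (1.0 + dose / ic50)

def fit_ic50(doses, ys, grid):
    """Pick the grid candidate minimizing the sum of squared errors."""
    def sse(c):
        return sum((y - response(d, c)) ** 2 for d, y in zip(doses, ys))
    return min(grid, key=sse)

doses = [0.1, 1.0, 10.0, 100.0]
ys = [response(d, 5.0) for d in doses]      # noiseless curve with IC50 = 5
grid = [0.5 * i for i in range(1, 41)]      # candidates 0.5, 1.0, ..., 20.0
print(fit_ic50(doses, ys, grid))  # 5.0
```

The paper's point is that with noisy real data, fitting each curve in isolation like this is fragile; borrowing strength across cell lines and drugs through a multilevel model stabilizes exactly this estimate.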
High-dimensional model estimation and model selection
CERN. Geneva
2015-01-01
I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
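A minimal sketch of one of the estimators mentioned above, the LASSO, via cyclic coordinate descent with soft-thresholding; the tiny data set and penalty are illustrative.

```python
def soft_threshold(z, g):
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for (1/2n)*||y - Xb||^2 + lam*||b||_1."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Residual with feature j's current contribution removed.
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            norm = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft_threshold(rho, lam) / norm
    return b

# Tiny p > n example: 3 samples, 4 features; only feature 0 drives y.
X = [[1.0, 0.0, 1.0, 0.0],
     [0.0, 1.0, 0.0, 1.0],
     [1.0, 1.0, 0.0, 0.0]]
y = [2.0, 0.0, 2.0]
b = lasso_cd(X, y, lam=0.5)
print(b)  # only b[0] is nonzero: the solution is sparse
```

The soft-thresholding step is what produces exact zeros, i.e. the sparsity that makes these estimators usable when p >> n.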
Extreme Earthquake Risk Estimation by Hybrid Modeling
Chavez, M.; Cabrera, E.; Ashworth, M.; Garcia, S.; Emerson, D.; Perea, N.; Salazar, A.; Moulinec, C.
2012-12-01
The estimation of the hazard and of the economic consequences, i.e. the risk, associated with the occurrence of extreme-magnitude earthquakes in the neighborhood of urban or lifeline infrastructure, such as the 11 March 2011 Mw 9 Tohoku, Japan earthquake, represents a complex challenge, as it involves the propagation of seismic waves in large volumes of the earth's crust, from unusually large seismic source ruptures up to the infrastructure location. The large number of casualties and huge economic losses observed for those earthquakes, some of which have a frequency of occurrence of hundreds or thousands of years, calls for the development of new paradigms and methodologies in order to generate better estimates, both of the seismic hazard and of its consequences, and if possible, to estimate the probability distributions of their ground intensities and of their economic impacts (direct and indirect losses), in order to implement technological and economic policies to mitigate and reduce, as much as possible, the mentioned consequences. Here, we propose a hybrid modeling approach which uses 3D seismic wave propagation (3DWP) and neural network (NN) modeling in order to estimate the seismic risk of extreme earthquakes. The 3DWP modeling is achieved by using a 3D finite difference code run on the ~100 thousand cores of the Blue Gene Q supercomputer of the STFC Daresbury Laboratory, UK, combined with empirical Green's function (EGF) techniques and NN algorithms. In particular, the 3DWP is used to generate broadband samples of the 3D wave propagation of extreme (plausible) earthquake scenarios corresponding to synthetic seismic sources, and to enlarge those samples by using feed-forward NNs. We present the results of the validation of the proposed hybrid modeling for Mw 8 subduction events, and show examples of its application to the estimation of the hazard and the economic consequences for extreme Mw 8.5 subduction earthquake scenarios with seismic sources in the Mexican …
Decimative Spectral Estimation with Unconstrained Model Order
Directory of Open Access Journals (Sweden)
Stavroula-Evita Fotinea
2012-01-01
Full Text Available This paper presents a new state-space method for spectral estimation that performs decimation by any factor, makes use of the full set of data, and brings the poles under consideration further apart, while imposing almost no constraints on the size of the Hankel matrix (the model order) as decimation increases. It is compared against two previously proposed techniques for spectral estimation (along with derived decimative versions) that lie among the most promising methods in the field of spectroscopy, where accuracy of parameter estimation is of utmost importance. Moreover, it is compared against a state-of-the-art purely decimative method proposed in the literature. Experiments performed on simulated NMR signals prove the new method to be more robust, especially for low signal-to-noise ratios.
Tsai, Alexander C; Weiser, Sheri D; Petersen, Maya L; Ragland, Kathleen; Kushel, Margot B; Bangsberg, David R
2010-12-01
Depression strongly predicts nonadherence to human immunodeficiency virus (HIV) antiretroviral therapy, and adherence is essential to maintaining viral suppression. This suggests that pharmacologic treatment of depression may improve virologic outcomes. However, previous longitudinal observational analyses have inadequately adjusted for time-varying confounding by depression severity, which could yield biased estimates of treatment effect. Application of marginal structural modeling to longitudinal observational data can, under certain assumptions, approximate the findings of a randomized controlled trial. The objective was to determine whether antidepressant medication treatment increases the probability of HIV viral suppression. The design was a community-based prospective cohort study with assessments conducted every 3 months, at a community-based research field site in San Francisco, California. Participants were 158 homeless and marginally housed persons with HIV who met baseline immunologic (CD4+ T-lymphocyte count …) inclusion criteria, observed from April 2002 through August 2007. The primary outcome was the probability of achieving viral suppression to less than 50 copies/mL. Secondary outcomes of interest were the probability of being on an antiretroviral therapy regimen, 7-day self-reported percentage adherence to antiretroviral therapy, and the probability of reporting complete (100%) adherence. Marginal structural models estimated a 2.03 greater odds of achieving viral suppression (95% confidence interval [CI], 1.15-3.58; P = .02) resulting from antidepressant medication treatment. In addition, antidepressant medication use increased the probability of antiretroviral uptake (weighted odds ratio, 3.87; 95% CI, 1.98-7.58; P …). The effect is likely attributable to improved adherence to a continuum of HIV care, including increased uptake of and adherence to antiretroviral therapy.
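The core of a marginal structural model is reweighting observations by inverse probabilities of treatment; below is a minimal sketch with made-up propensities (the study's actual covariate and censoring models are not reproduced).

```python
def iptw(treated, propensity, p_marginal):
    """Stabilized inverse-probability-of-treatment weight for one visit."""
    if treated:
        return p_marginal / propensity
    return (1.0 - p_marginal) / (1.0 - propensity)

# Hypothetical subjects: (treated?, model-predicted treatment probability).
subjects = [(True, 0.8), (True, 0.4), (False, 0.6), (False, 0.2)]
p_marg = sum(1 for t, _ in subjects if t) / len(subjects)  # 0.5
weights = [iptw(t, ps, p_marg) for t, ps in subjects]
print([round(w, 3) for w in weights])  # [0.625, 1.25, 1.25, 0.625]
```

Subjects whose observed treatment status was unlikely under the confounder model get upweighted, creating a pseudo-population in which treatment is independent of the measured time-varying confounders.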
Estimating Coastal Digital Elevation Model (DEM) Uncertainty
Amante, C.; Mesick, S.
2017-12-01
Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.
Directory of Open Access Journals (Sweden)
T. S. Bates
2006-01-01
Full Text Available The largest uncertainty in the radiative forcing of climate change over the industrial era is that due to aerosols, a substantial fraction of which is the uncertainty associated with scattering and absorption of shortwave (solar) radiation by anthropogenic aerosols in cloud-free conditions (IPCC, 2001). Quantifying and reducing the uncertainty in aerosol influences on climate is critical to understanding climate change over the industrial period and to improving predictions of future climate change for assumed emission scenarios. Measurements of aerosol properties during major field campaigns in several regions of the globe during the past decade are contributing to an enhanced understanding of atmospheric aerosols and their effects on light scattering and climate. The present study, which focuses on three regions downwind of major urban/population centers (the North Indian Ocean (NIO) during INDOEX, the Northwest Pacific Ocean (NWP) during ACE-Asia, and the Northwest Atlantic Ocean (NWA) during ICARTT), incorporates understanding gained from field observations of aerosol distributions and properties into calculations of perturbations in radiative fluxes due to these aerosols. This study evaluates the current state of observations and of two chemical transport models (STEM and MOZART). Measurements of burdens, extinction optical depth (AOD), and the direct radiative effect of aerosols (DRE, the change in radiative flux due to total aerosols) are used as measurement-model check points to assess uncertainties. In-situ measured and remotely sensed aerosol properties for each region (mixing state, mass scattering efficiency, single scattering albedo, and angular scattering properties) and their dependences on relative humidity are used as input parameters to two radiative transfer models (GFDL and University of Michigan) to constrain estimates of aerosol radiative effects, with uncertainties in each step propagated through the analysis. Constraining the radiative …
Consistent Estimation of Partition Markov Models
Directory of Open Access Journals (Sweden)
Jesús E. García
2017-04-01
Full Text Available The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer two questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated? To answer them, we build a consistent model selection strategy: given a size-n realization of the process, find a model within the Partition Markov class with a minimal number of parts that represents the process law. From the strategy, we derive a measure that establishes a metric on the state space. In addition, we show that if the law of the process is Markovian, then, eventually, as n goes to infinity, L will be retrieved. We show an application to modeling internet navigation patterns.
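The grouping step of such a strategy can be illustrated with a small Python sketch (an assumed illustration, not the authors' algorithm): estimate the empirical transition row of each state from a single realization, then merge states whose rows coincide after coarse rounding, a crude stand-in for the paper's consistent selection criterion.

```python
import random
from collections import defaultdict

def empirical_rows(seq, alphabet):
    """Empirical transition probabilities P(next | current) from one realization."""
    counts = {s: defaultdict(int) for s in alphabet}
    for cur, nxt in zip(seq, seq[1:]):
        counts[cur][nxt] += 1
    rows = {}
    for s in alphabet:
        total = sum(counts[s].values()) or 1
        # Coarse rounding so states with the same law get identical rows.
        rows[s] = tuple(round(counts[s][a] / total, 1) for a in alphabet)
    return rows

def partition_states(rows):
    """Group states whose rounded transition rows coincide."""
    parts = defaultdict(list)
    for s, row in rows.items():
        parts[row].append(s)
    return sorted(sorted(p) for p in parts.values())

random.seed(1)
# States 0 and 1 share one transition law; state 2 has another.
law = {0: [0.2, 0.3, 0.5], 1: [0.2, 0.3, 0.5], 2: [0.6, 0.2, 0.2]}
seq, x = [], 0
for _ in range(20000):
    seq.append(x)
    x = random.choices([0, 1, 2], weights=law[x])[0]

print(partition_states(empirical_rows(seq, [0, 1, 2])))  # expect [[0, 1], [2]]
```

A real implementation would replace the rounding heuristic with a penalized-likelihood comparison of candidate partitions, which is what yields consistency as n grows.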
Los Alamos Waste Management Cost Estimation Model
International Nuclear Information System (INIS)
Matysiak, L.M.; Burns, M.L.
1994-03-01
This final report completes the Los Alamos Waste Management Cost Estimation Project, and includes the documentation of the waste management processes at Los Alamos National Laboratory (LANL) for hazardous, mixed, low-level radioactive solid and transuranic waste, development of the cost estimation model and a user reference manual. The ultimate goal of this effort was to develop an estimate of the life cycle costs for the aforementioned waste types. The Cost Estimation Model is a tool that can be used to calculate the costs of waste management at LANL for the aforementioned waste types, under several different scenarios. Each waste category at LANL is managed in a separate fashion, according to Department of Energy requirements and state and federal regulations. The cost of the waste management process for each waste category has not previously been well documented. In particular, the costs associated with the handling, treatment and storage of the waste have not been well understood. It is anticipated that greater knowledge of these costs will encourage waste generators at the Laboratory to apply waste minimization techniques to current operations. Expected benefits of waste minimization are a reduction in waste volume, decrease in liability and lower waste management costs.
DEFF Research Database (Denmark)
Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan
This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set...... of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte...... Carlo experiment. We find that estimation of the parameters in the transition function can be problematic but that there may be significant benefits in terms of forecast performance....
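The endogeneity problem these GMM estimators address can be seen in a minimal Anderson-Hsiao-style sketch (a simple instrumental-variables precursor of the moment-based estimators the abstract discusses, with illustrative numbers): first-differencing removes the fixed effect, and the twice-lagged level is a valid instrument for the lagged difference.

```python
import random

random.seed(0)
rho, N, T = 0.5, 20000, 6
num = den = 0.0
for _ in range(N):
    alpha = random.gauss(0, 1)            # unobserved individual effect
    y = [alpha / (1 - rho)]               # start at the stationary mean
    for _ in range(T):
        y.append(rho * y[-1] + alpha + random.gauss(0, 1))
    # First-differencing removes alpha; y_{t-2} is uncorrelated with the
    # differenced error, so it instruments dy_{t-1}.
    for t in range(3, T + 1):
        dy = y[t] - y[t - 1]
        dylag = y[t - 1] - y[t - 2]
        z = y[t - 2]
        num += z * dy
        den += z * dylag
print(round(num / den, 2))  # should be close to rho = 0.5
```

A naive within or pooled OLS estimate of rho would be biased here; the IV moment condition is the building block that Arellano-Bond-type GMM stacks over all available lags.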
Ronald E. McRoberts; Paolo Moser; Laio Zimermann Oliveira; Alexander C. Vibrans
2015-01-01
Forest inventory estimates of tree volume for large areas are typically calculated by adding the model predictions of volumes for individual trees at the plot level, calculating the mean over plots, and expressing the result on a per unit area basis. The uncertainty in the model predictions is generally ignored, with the result that the precision of the large-area...
Yin, X.Y.; Chasalow, S.D.; Dourleijn, C.J.; Stam, P.; Kropff, M.J.
2000-01-01
Advances in the use of molecular markers to elucidate the inheritance of quantitative traits enable the integration of genetic information on physiological traits into crop growth models. The objective of this study was to assess the ability of a crop growth model with QTL-based estimates of
Numerical estimation of the effects of climatic variations on human ...
African Journals Online (AJOL)
temperature, humidity, solar radiation and wind speed) are used to develop a numerical model for estimating the effect of climatic changes on human thermal comfort in Botswana. Numerical values of energy load for four different comfort classes were ...
Directory of Open Access Journals (Sweden)
Fragoulakis V
2013-06-01
Full Text Available Vassilis Fragoulakis, Nikolaos Maniadakis; National School of Public Health, Department of Health Services Management, Athens, Greece. Objective: To quantify the economic effects of a child conceived by in vitro fertilization (IVF) in terms of net tax revenue from the state's perspective in Greece. Methods: Based on previous international experience, a mathematical model was developed to assess the lifetime productivity of a single individual and his/her lifetime transactions with governmental agencies. The model distinguished among three periods in the economic life cycle of an individual: (1) early life, when the government primarily contributes resources through child tax credits, health care, and educational expenses; (2) employment, when individuals begin returning resources through taxes; and (3) retirement, when the government expends additional resources on pensions and health care. The cost of a live birth with IVF was based on the modification of a previously published model developed by the authors. All outcomes were discounted at a 3% discount rate. The data inputs – namely, the economic or demographic variables – were derived from the National Statistical Secretariat of Greece and other relevant sources. To deal with uncertainty, bias-corrected uncertainty intervals (UIs) were calculated based on 5000 Monte Carlo simulations. In addition, to examine the robustness of our results, other one-way sensitivity analyses were also employed. Results: The cost of IVF per birth was estimated at €17,015 (95% UI: €13,932–€20,200). The average projected income generated by an individual throughout his/her productive life was €258,070 (95% UI: €185,376–€339,831). In addition, his/her life tax contribution was estimated at €133,947 (95% UI: €100,126–€177,375), while the discounted governmental expenses for elderly and underage individuals were €67,624 (95% UI: €55,211–€83,930). Hence, the net present value of IVF was €60
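The three-period life-cycle accounting reduces to a net-present-value calculation. The sketch below uses the abstract's 3% discount rate but entirely hypothetical cash flows, not the Greek inputs from the study.

```python
def npv(cashflows, rate=0.03):
    """Net present value of (year, amount) pairs; outlays negative, taxes positive."""
    return sum(a / (1 + rate) ** y for y, a in cashflows)

# Illustrative (hypothetical) life cycle: the state spends during childhood
# and retirement and collects taxes during the working years.
flows = [(y, -6000) for y in range(0, 18)]      # childhood outlays
flows += [(y, 7000) for y in range(18, 65)]      # tax contributions
flows += [(y, -9000) for y in range(65, 80)]     # pensions and health care
print(round(npv(flows)))  # positive: a net contributor under these toy inputs
```

Discounting matters here: the working-year receipts arrive later than the childhood outlays, so the sign of the result depends on the rate as well as the amounts.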
Conditional shape models for cardiac motion estimation
DEFF Research Database (Denmark)
Metz, Coert; Baka, Nora; Kirisli, Hortense
2010-01-01
We propose a conditional statistical shape model to predict patient specific cardiac motion from the 3D end-diastolic CTA scan. The model is built from 4D CTA sequences by combining atlas based segmentation and 4D registration. Cardiac motion estimation is, for example, relevant in the dynamic...... alignment of pre-operative CTA data with intra-operative X-ray imaging. Due to a trend towards prospective electrocardiogram gating techniques, 4D imaging data, from which motion information could be extracted, is not commonly available. The prediction of motion from shape information is thus relevant...
Software Cost Estimating Models: A Comparative Study of What the Models Estimate
1993-09-01
generate good cost estimates. One model developer best summed up this sentiment by stating: "Estimation is not a mechanical process. Art, skill, and..." [garbled table: allocation percentages for development phases, e.g. System Concept 7.5%, S/W Requirements Analysis 9.0%]
DEFF Research Database (Denmark)
Puthumana, Govindan; Bissacco, Giuliano; Hansen, Hans Nørgaard
2017-01-01
In micro-EDM milling, real time electrode wear compensation based on tool wear per discharge (TWD) estimation permits the direct control of the position of the tool electrode frontal surface. However, TWD estimation errors will cause errors on the tool electrode axial depth. A simulation tool...... is developed to determine the effects of errors in the initial estimation of TWD and its propagation effect with respect to the error on the depth of the cavity generated. Simulations were applied to micro-EDM milling of a slot of 5000 μm length and 50 μm depth and validated through slot milling experiments...
Models and estimation methods for clinical HIV-1 data
Verotta, Davide
2005-12-01
Clinical HIV-1 data include many individual factors, such as compliance to treatment, pharmacokinetics, variability with respect to viral dynamics, race, sex, income, etc., which might directly influence or be associated with clinical outcome. These factors need to be taken into account to achieve a better understanding of clinical outcome, and mathematical models can provide a unifying framework to do so. The first objective of this paper is to demonstrate the development of comprehensive HIV-1 dynamics models that describe viral dynamics and also incorporate different factors influencing such dynamics. The second objective of this paper is to describe alternative estimation methods that can be applied to the analysis of data with such models. In particular, we consider: (i) simple but effective two-stage estimation methods, in which data from each patient are analyzed separately and summary statistics derived from the results, (ii) more complex nonlinear mixed effect models, used to pool all the patient data in a single analysis. Bayesian estimation methods are also considered, in particular: (iii) maximum a posteriori (MAP) approximations, and (iv) Markov chain Monte Carlo (MCMC). Bayesian methods incorporate prior knowledge into the models, thus avoiding some of the model simplifications introduced when the data are analyzed using two-stage methods, or a nonlinear mixed effect framework. We demonstrate the development of the models and the different estimation methods using real AIDS clinical trial data involving patients receiving multiple drug regimens.
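The two-stage method (i) can be sketched in a few lines: fit a simple log-linear viral-decay model to each patient separately, then summarize the patient-level estimates. The data and parameters below are simulated for illustration, not AIDS clinical trial data.

```python
import random
import statistics

def ols_slope(xs, ys):
    """Closed-form least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

random.seed(7)
days = [0, 2, 4, 7, 10, 14]
decays = []
for _ in range(40):
    # Stage 1: fit each (simulated) patient separately.
    true_delta = random.gauss(0.35, 0.05)    # patient-specific decay rate
    log_vl = [12 - true_delta * t + random.gauss(0, 0.1) for t in days]
    decays.append(-ols_slope(days, log_vl))

# Stage 2: summary statistics across patients.
print(round(statistics.mean(decays), 2), round(statistics.stdev(decays), 2))
```

The stage-2 standard deviation mixes true between-patient variability with stage-1 estimation noise; this conflation is exactly what nonlinear mixed-effect models are designed to separate.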
Bias effects on magnitude and ratio estimation power function exponents.
Fagot, R F; Pokorny, R
1989-03-01
A bias model of relative judgment was used to derive a ratio estimation (RE) power function, and its effectiveness in providing estimates of exponents free of the effects of standards was evaluated. The RE bias model was compared with the simple RE power function that ignores bias. Results showed that when bias was not taken into account, estimates of exponents exhibited the usual effects of standards observed in previous research. However, the introduction of bias parameters into the RE power function virtually eliminated these effects. Exponents calculated from "equal-range segments" (e.g., low stimulus range vs. high stimulus range) judged by magnitude estimation (ME) were examined: the effects of equal-range segments on exponents were much stronger for ME than standards were for RE, using the bias model.
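A power-function exponent of the kind discussed is conventionally recovered by log-log regression, since R = kS^b implies log R = log k + b log S. The sketch below simulates magnitude estimates with multiplicative noise and recovers the exponent (illustrative values, not the study's bias model):

```python
import math
import random

random.seed(3)
b_true, k = 0.6, 2.0                      # Stevens-type power law R = k * S^b
stimuli = [1, 2, 5, 10, 20, 50, 100]
logS, logR = [], []
for s in stimuli:
    for _ in range(30):
        # Multiplicative noise is additive on the log scale.
        r = k * s ** b_true * math.exp(random.gauss(0, 0.1))
        logS.append(math.log(s))
        logR.append(math.log(r))

# Least-squares slope of log R on log S estimates the exponent b.
n = len(logS)
mx, my = sum(logS) / n, sum(logR) / n
b_hat = sum((x - mx) * (y - my) for x, y in zip(logS, logR)) \
    / sum((x - mx) ** 2 for x in logS)
print(round(b_hat, 2))  # close to 0.6
```

The paper's point is that for ratio estimation the simple regression above is contaminated by standard-dependent bias, which is why extra bias parameters are introduced into the fitted function.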
International Nuclear Information System (INIS)
Nakano, Masanao
2007-01-01
The worldwide environmental protection is required by the public. A long-term environmental assessment from nuclear fuel cycle facilities to the aquatic environment also becomes more important to utilize nuclear energy more efficiently. Evaluation of long-term risk including not only in Japan but also in neighboring countries is considered to be necessary in order to develop nuclear power industry. The author successfully simulated the distribution of radionuclides in seawater and seabed sediment produced by atmospheric nuclear tests using LAMER (Long-term Assessment ModEl for Radioactivity in the oceans). A part of the LAMER calculated the advection-diffusion-scavenging processes for radionuclides in the oceans and the Japan Sea in cooperation with an Oceanic General Circulation Model (OGCM) and was validated. The author is now attempting to calculate the probabilistic effective dose suggested by ICRP from intake of marine products due to atmospheric nuclear tests, using the Monte Carlo method in the other part of LAMER. Depending on the deviation of each parameter, the 95th percentile of the probabilistic effective dose was calculated to be about half of the 95th percentile of the deterministic effective dose in a pro forma calculation. The probabilistic assessment gives a realistic value for the dose assessment of a nuclear fuel cycle facility. (author)
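The probabilistic approach can be illustrated with a small Monte Carlo sketch: sample each dose-model parameter from an assumed lognormal distribution, propagate the product, and read off the 95th percentile, which typically falls below a deterministic run built from conservative point values. All parameter values are hypothetical, not those of LAMER.

```python
import math
import random

random.seed(11)

def percentile(xs, p):
    xs = sorted(xs)
    return xs[int(p * (len(xs) - 1))]

# Hypothetical dose model: dose = intake (kg/y) * concentration (Bq/kg)
# * dose coefficient (Sv/Bq), each parameter lognormally uncertain.
doses = []
for _ in range(5000):
    intake = random.lognormvariate(math.log(20), 0.3)
    conc = random.lognormvariate(math.log(0.5), 0.5)
    coeff = random.lognormvariate(math.log(1.3e-8), 0.2)
    doses.append(intake * conc * coeff)

# Deterministic run using conservative point values for every parameter.
det = 30 * 1.2 * 1.6e-8
print(percentile(doses, 0.95) < det)  # probabilistic 95th percentile is lower
```

Stacking conservative point values multiplies worst cases together, so the deterministic result sits above the probabilistic 95th percentile, the same qualitative effect the abstract reports.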
Estimation and prediction under local volatility jump-diffusion model
Kim, Namhyoung; Lee, Younhee
2018-02-01
Volatility is an important factor in operating a company and managing risk. In the portfolio optimization and risk hedging using the option, the value of the option is evaluated using the volatility model. Various attempts have been made to predict option value. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model and apply it using both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, stochastic volatility model, and local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.
Directory of Open Access Journals (Sweden)
Wenhua Yu
Full Text Available OBJECTIVES: Based on the important changes in South Africa since 2009 and the Antiretroviral Treatment Guideline 2013 recommendations, we explored the cost-effectiveness of different strategy combinations according to South African HIV-infected mothers' prompt treatments and different feeding patterns. STUDY DESIGN: A decision analytic model was applied to simulate cohorts of 10,000 HIV-infected pregnant women to compare the cost-effectiveness of two different HIV strategy combinations: (1) women were tested and treated promptly at any time during pregnancy (promptly treated cohort); (2) women did not get testing or treatment until after delivery, and appropriate standard treatments were offered as a remedy (remedy cohort). Replacement feeding or exclusive breastfeeding was assigned in both strategies. Outcome measures included the number of infant HIV cases averted, the cost per infant HIV case averted, and the cost per life year (LY) saved from the interventions. One-way and multivariate sensitivity analyses were performed to estimate the uncertainty ranges of all outcomes. RESULTS: The remedy strategy is not particularly cost-effective. Compared with the untreated baseline cohort, which leads to 1127 infected infants, 698 (61.93%) and 110 (9.76%) of pediatric HIV cases are averted in the promptly treated cohort and remedy cohort respectively, with incremental cost-effectiveness of $68.51 and $118.33 per LY, respectively. With or without the antenatal testing and treatments, breastfeeding is less cost-effective ($193.26 per LY) than replacement feeding ($134.88 per LY), without considering the impact of willingness to pay. CONCLUSION: Compared with the prompt treatments, remedy in labor or during the postnatal period is less cost-effective. Antenatal HIV testing and prompt treatments and avoiding breastfeeding are the best strategies. Although encouraging mothers to practice replacement feeding in South Africa is far from easy and the advantages of
Directory of Open Access Journals (Sweden)
Luca Caricchi
2016-04-01
Full Text Available Magma fluxes in the Earth’s crust play an important role in regulating the relationship between the frequency and magnitude of volcanic eruptions, the chemical evolution of magmatic systems and the distribution of geothermal energy and mineral resources on our planet. Therefore, quantifying magma productivity and the rate of magma transfer within the crust can provide valuable insights to characterise the long-term behaviour of volcanic systems and to unveil the link between the physical and chemical evolution of magmatic systems and their potential to generate resources. We performed thermal modelling to compute the temperature evolution of crustal magmatic intrusions with different final volumes assembled over a variety of timescales (i.e., at different magma fluxes). Using these results, we calculated synthetic populations of zircon ages assuming the number of zircons crystallising in a given time period is directly proportional to the volume of magma at temperatures within the zircon crystallisation range. The statistical analysis of the calculated populations of zircon ages shows that the mode, median and standard deviation of the populations vary coherently as a function of the rate of magma injection and final volume of the crustal intrusions. Therefore, the statistical properties of the population of zircon ages can add useful constraints to quantify the rate of magma injection and the final volume of magmatic intrusions. Here, we explore the effect of different ranges of zircon saturation temperature, intrusion geometry, and wall rock temperature on the calculated distributions of zircon ages. Additionally, we determine the effect of undersampling on the variability of mode, median and standard deviation of calculated populations of zircon ages to estimate the minimum number of zircon analyses necessary to obtain meaningful estimates of magma flux and final intrusion volume.
Marginal Maximum Likelihood Estimation of Item Response Models in R
Directory of Open Access Journals (Sweden)
Matthew S. Johnson
2007-02-01
Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
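For a single Rasch item with a standard-normal ability distribution, marginal maximum likelihood amounts to integrating the item response function over the ability prior and matching the resulting marginal probability to the data. A stdlib-only Python sketch (simple grid quadrature in place of the Gauss-Hermite rule an R implementation would typically use; all values illustrative):

```python
import math
import random

def marginal_p(b, nodes=81):
    """P(X=1) for a Rasch item of difficulty b, ability ~ N(0,1), by grid quadrature."""
    total = wsum = 0.0
    for i in range(nodes):
        theta = -4 + 8 * i / (nodes - 1)
        w = math.exp(-theta * theta / 2)          # unnormalized normal density
        total += w / (1 + math.exp(-(theta - b)))
        wsum += w
    return total / wsum

random.seed(5)
b_true, n = 0.5, 20000
correct = 0
for _ in range(n):
    theta = random.gauss(0, 1)
    if random.random() < 1 / (1 + math.exp(-(theta - b_true))):
        correct += 1
pbar = correct / n

# For one item, maximizing the marginal likelihood is equivalent to solving
# marginal_p(b) = observed proportion; marginal_p is decreasing in b.
lo, hi = -3.0, 3.0
for _ in range(60):
    mid = (lo + hi) / 2
    if marginal_p(mid) > pbar:
        lo = mid
    else:
        hi = mid
b_hat = (lo + hi) / 2
print(round(b_hat, 1))  # close to 0.5
```

With several items the marginal likelihood no longer factors this neatly and the EM algorithm (or direct numerical optimization, as in ltm) takes over, but the inner quadrature step is the same.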
Directory of Open Access Journals (Sweden)
Cora L Bernard
2017-05-01
Full Text Available The risks of HIV transmission associated with the opioid epidemic make cost-effective programs for people who inject drugs (PWID) a public health priority. Some of these programs have benefits beyond prevention of HIV, a critical consideration given that injection drug use is increasing across most United States demographic groups. To identify high-value HIV prevention program portfolios for US PWID, we consider combinations of four interventions with demonstrated efficacy: opioid agonist therapy (OAT), needle and syringe programs (NSPs), HIV testing and treatment (Test & Treat), and oral HIV pre-exposure prophylaxis (PrEP). We adapted an empirically calibrated dynamic compartmental model and used it to assess the discounted costs (in 2015 US dollars), health outcomes (HIV infections averted, change in HIV prevalence, and discounted quality-adjusted life years [QALYs]), and incremental cost-effectiveness ratios (ICERs) of the four prevention programs, considered singly and in combination over a 20-y time horizon. We obtained epidemiologic, economic, and health utility parameter estimates from the literature, previously published models, and expert opinion. We estimate that expansions of OAT, NSPs, and Test & Treat implemented singly up to 50% coverage levels can be cost-effective relative to the next highest coverage level (low, medium, and high at 40%, 45%, and 50%, respectively) and that OAT, which we assume to have immediate and direct health benefits for the individual, has the potential to be the highest value investment, even under scenarios where it prevents fewer infections than other programs. Although a model-based analysis can provide only estimates of health outcomes, we project that, over 20 y, 50% coverage with OAT could avert up to 22,000 (95% CI: 5,200, 46,000) infections and cost US$18,000 (95% CI: US$14,000, US$24,000) per QALY gained, 50% NSP coverage could avert up to 35,000 (95% CI: 8,900, 43,000) infections and cost US$25,000 (95% CI: US
Owens, Douglas K.; Goldhaber-Fiebert, Jeremy D.; Brandeau, Margaret L.
2017-01-01
Background The risks of HIV transmission associated with the opioid epidemic make cost-effective programs for people who inject drugs (PWID) a public health priority. Some of these programs have benefits beyond prevention of HIV—a critical consideration given that injection drug use is increasing across most United States demographic groups. To identify high-value HIV prevention program portfolios for US PWID, we consider combinations of four interventions with demonstrated efficacy: opioid agonist therapy (OAT), needle and syringe programs (NSPs), HIV testing and treatment (Test & Treat), and oral HIV pre-exposure prophylaxis (PrEP). Methods and findings We adapted an empirically calibrated dynamic compartmental model and used it to assess the discounted costs (in 2015 US dollars), health outcomes (HIV infections averted, change in HIV prevalence, and discounted quality-adjusted life years [QALYs]), and incremental cost-effectiveness ratios (ICERs) of the four prevention programs, considered singly and in combination over a 20-y time horizon. We obtained epidemiologic, economic, and health utility parameter estimates from the literature, previously published models, and expert opinion. We estimate that expansions of OAT, NSPs, and Test & Treat implemented singly up to 50% coverage levels can be cost-effective relative to the next highest coverage level (low, medium, and high at 40%, 45%, and 50%, respectively) and that OAT, which we assume to have immediate and direct health benefits for the individual, has the potential to be the highest value investment, even under scenarios where it prevents fewer infections than other programs. Although a model-based analysis can provide only estimates of health outcomes, we project that, over 20 y, 50% coverage with OAT could avert up to 22,000 (95% CI: 5,200, 46,000) infections and cost US$18,000 (95% CI: US$14,000, US$24,000) per QALY gained, 50% NSP coverage could avert up to 35,000 (95% CI: 8,900, 43,000) infections and
Bernard, Cora L; Owens, Douglas K; Goldhaber-Fiebert, Jeremy D; Brandeau, Margaret L
2017-05-01
The risks of HIV transmission associated with the opioid epidemic make cost-effective programs for people who inject drugs (PWID) a public health priority. Some of these programs have benefits beyond prevention of HIV-a critical consideration given that injection drug use is increasing across most United States demographic groups. To identify high-value HIV prevention program portfolios for US PWID, we consider combinations of four interventions with demonstrated efficacy: opioid agonist therapy (OAT), needle and syringe programs (NSPs), HIV testing and treatment (Test & Treat), and oral HIV pre-exposure prophylaxis (PrEP). We adapted an empirically calibrated dynamic compartmental model and used it to assess the discounted costs (in 2015 US dollars), health outcomes (HIV infections averted, change in HIV prevalence, and discounted quality-adjusted life years [QALYs]), and incremental cost-effectiveness ratios (ICERs) of the four prevention programs, considered singly and in combination over a 20-y time horizon. We obtained epidemiologic, economic, and health utility parameter estimates from the literature, previously published models, and expert opinion. We estimate that expansions of OAT, NSPs, and Test & Treat implemented singly up to 50% coverage levels can be cost-effective relative to the next highest coverage level (low, medium, and high at 40%, 45%, and 50%, respectively) and that OAT, which we assume to have immediate and direct health benefits for the individual, has the potential to be the highest value investment, even under scenarios where it prevents fewer infections than other programs. Although a model-based analysis can provide only estimates of health outcomes, we project that, over 20 y, 50% coverage with OAT could avert up to 22,000 (95% CI: 5,200, 46,000) infections and cost US$18,000 (95% CI: US$14,000, US$24,000) per QALY gained, 50% NSP coverage could avert up to 35,000 (95% CI: 8,900, 43,000) infections and cost US$25,000 (95% CI: US$7
Nonparametric model assisted model calibrated estimation in two ...
African Journals Online (AJOL)
Nonparametric model assisted model calibrated estimation in two stage survey sampling. RO Otieno, PN Mwita, PN Kihara. No abstract. East African Journal of Statistics Vol. 1 (3) 2007: pp. 261-281.
International Nuclear Information System (INIS)
Perkinson, A S; Evans, C J; Burniston, M T; Smye, S W
2010-01-01
The glomerular filtration rate (GFR) is used clinically to assess renal function. The most accurate estimation technique is tracer clearance, where deterministic compartment pharmacokinetic models are most widely used. The aim of this study was to assess the viability of alternative pharmacokinetic models to describe tracer clearance and, in turn, measure GFR. This study was carried out on 126 clearance datasets obtained from 44 patients with large solid tumours; these were fitted to four pharmacokinetic models, with model superiority determined by the Akaike Information Criterion. A fractal model was found to be superior to the best deterministic compartment model (70% of datasets, P < 0.0020), as was a gamma-distributed residence time model (93% of datasets, P < 0.0020); both models also gave greater mean weighted coefficients of determination than deterministic compartment models. These results suggest that gamma-distributed residence time and fractal models better describe tracer clearance than deterministic compartment models and therefore should allow more accurate estimation of GFR.
A unified framework for benchmark dose estimation applied to mixed models and model averaging
DEFF Research Database (Denmark)
Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.
2013-01-01
This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...... continuous and quantal data, facilitating benchmark dose estimation in general for a wide range of candidate models commonly used in toxicology. Moreover, the proposed framework provides a convenient means for extending benchmark dose concepts through the use of model averaging and random effects modeling...... provides slightly conservative, yet useful, estimates of benchmark dose lower limit under realistic scenarios....
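The benchmark dose idea is simple to state for a fitted quantal dose-response curve: the BMD is the dose at which extra risk over background reaches the benchmark response (BMR). For a two-parameter log-logistic model with zero background this has a closed form. The model choice and numbers below are an illustrative assumption, not the article's framework:

```python
def bmd_loglogistic(ed50, slope, bmr=0.1):
    """Benchmark dose for extra risk `bmr` under p(d) = 1 / (1 + (ed50/d)**slope).

    With zero background, p(BMD) = bmr, which inverts to a closed form.
    """
    return ed50 * (bmr / (1 - bmr)) ** (1 / slope)

d = bmd_loglogistic(ed50=10.0, slope=2.0, bmr=0.1)
p = 1 / (1 + (10.0 / d) ** 2.0)
print(round(d, 2), round(p, 3))  # response at the BMD equals the BMR: 3.33 0.1
```

The article's contribution is to make this same calculation work for continuous endpoints, mixed models, and averages over several candidate curves, where no single closed form exists and the BMD lower limit must be obtained numerically.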
Parameter estimation in fractional diffusion models
Kubilius, Kęstutis; Ralchenko, Kostiantyn
2017-01-01
This book is devoted to parameter estimation in diffusion models involving fractional Brownian motion and related processes. For many years now, standard Brownian motion has been (and still remains) a popular model of randomness used to investigate processes in the natural sciences, financial markets, and the economy. The substantial limitation in the use of stochastic diffusion models with Brownian motion is due to the fact that the motion has independent increments, and, therefore, the random noise it generates is “white,” i.e., uncorrelated. However, many processes in the natural sciences, computer networks and financial markets have long-term or short-term dependences, i.e., the correlations of random noise in these processes are non-zero, and slowly or rapidly decrease with time. In particular, models of financial markets demonstrate various kinds of memory and usually this memory is modeled by fractional Brownian diffusion. Therefore, the book constructs diffusion models with memory and provides s...
Climate change trade measures: estimating industry effects
2009-06-01
Estimating the potential effects of domestic emissions pricing for industries in the United States is complex. If the United States were to regulate greenhouse gas emissions, production costs could rise for certain industries and could cause output, ...
The Effect of Workforce Mobility on Intervention Effectiveness Estimates.
Manjourides, Justin; Sparer, Emily H; Okechukwu, Cassandra A; Dennerlein, Jack T
2018-03-12
Little is known about how mobile populations of workers may influence the ability to implement, measure, and evaluate health and safety interventions delivered at worksites. A simulation study is used to objectively measure both precision and relative bias of six different analytic methods as a function of the amount of mobility observed in the workforce. Those six methods are then used to reanalyze a previously conducted cluster-randomized control trial involving a highly mobile workforce in the construction industry. As workforce mobility increases, relative bias in treatment effects derived from standard models to analyze cluster-randomized trials also increases. Controlling for amount of time exposed to the intervention can greatly reduce this bias. Analyzing only subsets of workers who exhibit the least amount of mobility can result in decreased precision of treatment effect estimates. We demonstrate a 59% increase in the treatment effect size from the reanalysis of the previously conducted trial. When evaluating organizational interventions implemented at specific worksites by measuring perceptions and outcomes of workers present at those sites, researchers should consider the effects that the mobility of the workforce may have on the estimated treatment effects. The choice of analytic method can greatly affect both precision and accuracy of estimates.
Coupling Hydrologic and Hydrodynamic Models to Estimate PMF
Felder, G.; Weingartner, R.
2015-12-01
Most sophisticated probable maximum flood (PMF) estimations derive the PMF from the probable maximum precipitation (PMP) by applying deterministic hydrologic models calibrated with observed data. This method is based on the assumption that the hydrological system is stationary, meaning that the system behaviour during the calibration period or the calibration event is presumed to be the same as it is during the PMF. However, as soon as a catchment-specific threshold is reached, the system is no longer stationary. At or beyond this threshold, retention areas, new flow paths, and changing runoff processes can strongly affect downstream peak discharge. These effects can be accounted for by coupling hydrologic and hydrodynamic models, a technique that is particularly promising when the expected peak discharge may considerably exceed the observed maximum discharge. In such cases, the coupling of hydrologic and hydraulic models has the potential to significantly increase the physical plausibility of PMF estimations. This procedure ensures both that the estimated extreme peak discharge does not exceed the physical limit based on riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered. Our study discusses the prospect of considering retention effects on PMF estimations by coupling hydrologic and hydrodynamic models. This method is tested by forcing PREVAH, a semi-distributed deterministic hydrological model, with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to externally force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). Finally, the PMF estimation results obtained using the coupled modelling approach are compared to the results obtained using ordinary hydrologic modelling.
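The dampening effect of the hydraulic step can be caricatured with a toy routing loop: discharge above riverbed capacity spills into retention storage, which drains back to the channel later. This is an assumed illustration of the coupling idea, not PREVAH or BASEMENT-ETH.

```python
def route_with_retention(inflow, capacity, drain=0.2):
    """Clip discharge above channel capacity into a retention store that
    drains back at rate `drain` per step; a toy stand-in for the 2D model."""
    store, out = 0.0, []
    for q in inflow:
        q += store * drain          # retention water returning to the channel
        store -= store * drain
        if q > capacity:            # overbank flow fills the retention areas
            store += q - capacity
            q = capacity
        out.append(q)
    return out

# Hypothetical hydrograph (m^3/s) from the hydrologic model.
hydrograph = [50, 120, 400, 900, 700, 300, 150, 80, 60, 50]
routed = route_with_retention(hydrograph, capacity=600)
print(max(hydrograph), max(routed))  # peak capped at channel capacity: 900 600
```

The uncoupled hydrologic peak (900) exceeds the physical limit set by riverbed capacity; the coupled estimate (600) respects it, which is precisely the plausibility gain the abstract argues for.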
Revised models and genetic parameter estimates for production and ...
African Journals Online (AJOL)
Genetic parameters for production and reproduction traits in the Elsenburg Dormer sheep stud were estimated using records of 11743 lambs born between 1943 and 2002. An animal model with direct and maternal additive, maternal permanent and temporary environmental effects was fitted for the traits considered, ...
Determining input values for a simple parametric model to estimate ...
African Journals Online (AJOL)
Estimating soil evaporation (Es) is an important part of modelling vineyard evapotranspiration for irrigation purposes. Furthermore, quantification of possible soil texture and trellis effects is essential. Daily Es from six topsoils packed into lysimeters was measured under grapevines on slanting and vertical trellises, ...
ON THE ESTIMATION AND PREDICTION IN MIXED LINEAR MODELS
Directory of Open Access Journals (Sweden)
LÓPEZ L.A.
1998-01-01
Full Text Available Beginning with the classical Gauss-Markov linear model for mixed effects, the technique of Lagrange multipliers is used to obtain an alternative method for the estimation of linear predictors. A structural method is also discussed for obtaining the variance and covariance matrices and their inverses.
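A standard route to such linear predictors, which the Lagrange-multiplier derivation above also leads to, is Henderson's mixed model equations: they yield the BLUE of the fixed effects and the BLUP of the random effects jointly. A minimal numerical sketch under toy variance components (illustrative, not the paper's derivation):

```python
import numpy as np

def henderson_blup(X, Z, y, sigma_u2, sigma_e2):
    """Solve Henderson's mixed model equations for y = X*beta + Z*u + e,
    with u ~ N(0, sigma_u2 * I) and e ~ N(0, sigma_e2 * I)."""
    lam = sigma_e2 / sigma_u2          # variance ratio added to the random block
    q = Z.shape[1]
    C = np.block([
        [X.T @ X,           X.T @ Z],
        [Z.T @ X, Z.T @ Z + lam * np.eye(q)],
    ])
    rhs = np.concatenate([X.T @ y, Z.T @ y])
    sol = np.linalg.solve(C, rhs)
    p = X.shape[1]
    return sol[:p], sol[p:]            # BLUE of beta, BLUP of u
```

For a balanced two-group toy example with lam = 1, the fixed effect equals the grand mean and the random effects are shrunken group deviations.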
DEFF Research Database (Denmark)
Puthumana, Govindan; Bissacco, Giuliano; Hansen, Hans Nørgaard
2017-01-01
In micro-EDM milling, real time electrode wear compensation based on tool wear per discharge (TWD) estimation permits the direct control of the position of the tool electrode frontal surface. However, TWD estimation errors will cause errors on the tool electrode axial depth. A simulation tool is developed to determine the effects of errors in the initial estimation of TWD and their propagation with respect to the error on the depth of the generated cavity. Simulations were applied to micro-EDM milling of a slot of 5000 μm length and 50 μm depth and validated through slot milling experiments performed on a micro-EDM machine. Simulations and experimental results were found to be in good agreement, showing the effect of error amplification through the cavity depth.
International Nuclear Information System (INIS)
Demirhan, Haydar
2014-01-01
Highlights: • Impacts of multicollinearity on solar radiation estimation models are discussed. • Accuracy of existing empirical models for Turkey is evaluated. • A new non-linear model for the estimation of average daily horizontal global solar radiation is proposed. • Estimation and prediction performance of the proposed and existing models are compared. - Abstract: Due to the considerable decrease in energy resources and increasing energy demand, solar energy is an appealing field of investment and research. There are various modelling strategies and particular models for estimating the amount of solar radiation reaching a particular point on the Earth. In this article, global solar radiation estimation models are taken into account. To emphasize the severity of the multicollinearity problem in solar radiation estimation models, some of the models developed for Turkey are revisited. It is observed that these models have been identified as accurate under certain multicollinearity structures, and when the multicollinearity is eliminated, the accuracy of these models is questionable. Thus, a reliable model that does not suffer from multicollinearity and gives precise estimates of global solar radiation for the whole region of Turkey is necessary. A new nonlinear model for the estimation of average daily horizontal solar radiation is proposed using the genetic programming technique. There is no multicollinearity problem in the new model, and its estimation accuracy is better than that of the revisited models in terms of numerous statistical performance measures. According to the proposed model, temperature, precipitation, altitude, longitude, and monthly average daily extraterrestrial horizontal solar radiation have a significant effect on the average daily global horizontal solar radiation. Relative humidity and soil temperature are not included in the model due to their high correlation with precipitation and temperature, respectively. While altitude has
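A common diagnostic for the multicollinearity problem discussed above is the variance inflation factor (VIF), computed per predictor from an auxiliary regression on the remaining predictors. A minimal sketch of the generic diagnostic (not the paper's procedure):

```python
import numpy as np

def vif(X):
    """Variance inflation factors: VIF_j = 1 / (1 - R_j^2), where R_j^2 comes
    from regressing predictor j on the other predictors (with intercept).
    VIF > ~10 is a common rule-of-thumb flag for harmful multicollinearity."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        A = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return out
```

Two nearly collinear predictors produce VIFs in the hundreds or more, while an independent predictor stays near 1.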
Development on electromagnetic impedance function modeling and its estimation
International Nuclear Information System (INIS)
Sutarno, D.
2015-01-01
Today, electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration is forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, important and difficult problems remain to be solved concerning our ability to collect, process, and interpret MT as well as CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, together with some improvements in estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite element numerical modeling for the impedances is developed based on the edge element method, whereas in the CSAMT case the efforts focused on addressing the non-plane-wave problem in the corresponding impedance functions. Concerning estimation of MT and CSAMT impedance functions, research focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform operating on the causal transfer functions was used to deal with outliers (abnormal data), which are frequently superimposed on the normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models, whilst the full-solution-based modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied for all measurement zones, including near-, transition
Chapman, Cole G; Brooks, John M
2016-12-01
To examine the settings of simulation evidence supporting use of nonlinear two-stage residual inclusion (2SRI) instrumental variable (IV) methods for estimating average treatment effects (ATE) using observational data and investigate potential bias of 2SRI across alternative scenarios of essential heterogeneity and uniqueness of marginal patients. Potential bias of linear and nonlinear IV methods for ATE and local average treatment effects (LATE) is assessed using simulation models with a binary outcome and binary endogenous treatment across settings varying by the relationship between treatment effectiveness and treatment choice. Results show that nonlinear 2SRI models produce estimates of ATE and LATE that are substantially biased when the relationships between treatment and outcome for marginal patients are unique from relationships for the full population. Bias of linear IV estimates for LATE was low across all scenarios. Researchers are increasingly opting for nonlinear 2SRI to estimate treatment effects in models with binary and otherwise inherently nonlinear dependent variables, believing that it produces generally unbiased and consistent estimates. This research shows that positive properties of nonlinear 2SRI rely on assumptions about the relationships between treatment effect heterogeneity and choice. © Health Research and Educational Trust.
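The 2SRI idea can be sketched as follows: regress treatment on the instrument, then include the first-stage residual as an extra regressor in the (here logistic) outcome model, where it proxies the unobserved confounder. This is an illustrative minimal version with a linear first stage and a hand-rolled Newton logistic fit, not the authors' simulation design:

```python
import numpy as np

def logit_fit(X, y, iters=25):
    """Logistic regression by Newton-Raphson (no regularization)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None])          # Hessian of the log-likelihood
        b += np.linalg.solve(H, X.T @ (y - p))
    return b

def two_stage_residual_inclusion(z, d, y):
    # Stage 1: treatment choice d on instrument z (linear first stage for simplicity).
    Z = np.column_stack([np.ones_like(z), z])
    g, *_ = np.linalg.lstsq(Z, d, rcond=None)
    resid = d - Z @ g
    # Stage 2: outcome on treatment plus the first-stage residual.
    X = np.column_stack([np.ones_like(d), d, resid])
    return logit_fit(X, y)                   # [intercept, treatment, residual]
```

All parameter values in a simulated check (instrument strength, confounding, effect size) are assumptions for illustration only.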
Automated effective dose estimation in CT.
García, M Sánchez; Cameán, M Pombar; Busto, R Lobato; Vega, V Luna; Sueiro, J Mosquera; Martínez, C Otero; del Río, J R Sendón
2010-01-01
European regulations require the dose delivered to patients in CT examinations to be monitored and checked against reference levels. Dose estimation has traditionally been performed manually. This is time-consuming, and therefore it is typically performed on just a few patients and the results extrapolated to the general case. In this work an automated method to estimate the dose in CT studies is presented. The software downloads CT studies from the corporate picture archiving and communication system and uses the information in the DICOM headers to perform the dose calculation. Automation enables dose estimations to be performed on a larger fraction of studies, enabling more significant comparisons with diagnostic reference levels (DRLs). A preliminary analysis involving 5800 studies is presented with details of dose distributions for selected CT protocols in use at a university hospital. Average doses are compared with DRLs. Effective dose estimations are also compared with estimations based on the dose-length product.
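The dose-length-product estimate mentioned at the end is commonly computed as E = k · DLP, with region-specific conversion coefficients k. The values below are typical adult coefficients quoted in the literature and are illustrative only; current guidance should be consulted before any clinical use:

```python
# Effective dose (mSv) approximated from the scanner-reported dose-length
# product DLP (mGy*cm) via a body-region conversion coefficient k.
# Illustrative typical adult values; not authoritative.
K_FACTORS = {            # mSv per mGy*cm
    "head": 0.0021,
    "neck": 0.0059,
    "chest": 0.014,
    "abdomen_pelvis": 0.015,
}

def effective_dose(dlp_mgy_cm, region):
    """Return the approximate effective dose in mSv for a CT scan."""
    return K_FACTORS[region] * dlp_mgy_cm
```

For example, a chest CT with a DLP of 400 mGy·cm maps to roughly 5.6 mSv under these coefficients.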
Advanced empirical estimate of information value for credit scoring models
Directory of Open Access Journals (Sweden)
Martin Řezáč
2011-01-01
Full Text Available Credit scoring is a term for a wide spectrum of predictive models and their underlying techniques that aid financial institutions in granting credit. These methods decide who will get credit, how much credit they should get, and what further strategies will enhance the profitability of the borrowers to the lenders. Many statistical tools are available for measuring the quality, in the sense of predictive power, of credit scoring models. Because it is impossible to use a scoring model effectively without knowing how good it is, quality indexes like the Gini coefficient, the Kolmogorov-Smirnov statistic, and the Information value are used to assess the quality of a given credit scoring model. The paper deals primarily with the Information value, sometimes called divergence. Commonly it is computed by discretising the data into bins using deciles, in which case one constraint must be met: the number of cases has to be nonzero for all bins. If this constraint is not fulfilled, there are practical procedures for preserving finite results. As an alternative to the empirical estimates, one can use kernel smoothing theory, which allows estimation of unknown densities and consequently, using some numerical method for integration, estimation of the Information value. The main contribution of this paper is a proposal and description of the empirical estimate with supervised interval selection. This advanced estimate is based on the requirement to have at least k, where k is a positive integer, observations of scores of both good and bad clients in each considered interval. A simulation study shows that this estimate outperforms both the empirical estimate using deciles and the kernel estimate. Furthermore, it shows high dependency on the choice of the parameter k: if we choose too small a value, we obtain an overestimate of the Information value, and vice versa. The adjusted square root of the number of bad clients seems to be a reasonable compromise.
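The decile-based empirical estimate discussed above can be sketched as follows. This minimal version simply skips empty bins rather than applying the practical adjustment procedures the paper mentions:

```python
import numpy as np

def information_value(scores, bad, bins=10):
    """Empirical Information Value via quantile (decile) binning of the score:
    IV = sum over bins of (pct_good - pct_bad) * ln(pct_good / pct_bad)."""
    scores = np.asarray(scores, float)
    bad = np.asarray(bad, int)
    edges = np.quantile(scores, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    n_good, n_bad = (bad == 0).sum(), (bad == 1).sum()
    iv = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (scores > lo) & (scores <= hi)
        g, b = (bad[m] == 0).sum(), (bad[m] == 1).sum()
        if g == 0 or b == 0:
            continue  # the nonzero-count constraint; real implementations
                      # adjust the counts instead of skipping the bin
        iv += (g / n_good - b / n_bad) * np.log((g / n_good) / (b / n_bad))
    return iv
```

A score that separates good and bad clients by about one standard deviation yields an IV near 1, while a score unrelated to the labels yields an IV near 0.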
Near Shore Wave Modeling and applications to wave energy estimation
Zodiatis, G.; Galanis, G.; Hayes, D.; Nikolaidis, A.; Kalogeri, C.; Adam, A.; Kallos, G.; Georgiou, G.
2012-04-01
The estimation of the wave energy potential at the European coastline has received increased attention in recent years as a result of the adoption of novel policies in the energy market, concerns about global warming, and nuclear energy security problems. Within this framework, numerical wave modeling systems keep a primary role in the accurate description of wave climate and microclimate, which is a prerequisite for any wave energy assessment study. In the present work two of the most popular wave models are used for the estimation of the wave parameters at the coastline of Cyprus: the latest parallel version of the wave model WAM (ECMWF version), which employs a new parameterization of shallow-water effects, and the SWAN model, classically used for near-shore wave simulations. The results obtained from the wave models near shore are studied from an energy estimation point of view: the wave parameters that mainly affect the temporal and spatial distribution of energy, that is, the significant wave height and the mean wave period, are statistically analyzed, focusing on possible differences captured by the two models. Moreover, the wave spectrum distributions prevailing in different areas are discussed, contributing, in this way, to the wave energy assessment in the area. This work is part of two European projects focusing on the estimation of the wave energy distribution around Europe: the MARINA Platform (http://www.marina-platform.info/index.aspx) and the Ewave (http://www.oceanography.ucy.ac.cy/ewave/) projects.
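The two parameters highlighted above, significant wave height and wave period, determine the wave energy flux through a standard deep-water formula; a minimal sketch of that conversion (the deep-water approximation only, not the WAM/SWAN machinery):

```python
import math

RHO = 1025.0   # sea-water density, kg/m^3 (assumed typical value)
G = 9.81       # gravitational acceleration, m/s^2

def wave_power_deep_water(hs, te):
    """Wave energy flux per metre of wave crest (W/m) in deep water:
    P = rho * g^2 * Hs^2 * Te / (64 * pi),
    with significant wave height hs (m) and energy period te (s)."""
    return RHO * G**2 * hs**2 * te / (64.0 * math.pi)
```

For Hs = 2 m and Te = 8 s this gives about 15.7 kW per metre of crest, matching the familiar rule of thumb P ≈ 0.49 Hs² Te kW/m.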
Robust estimation of errors-in-variables models using M-estimators
Guo, Cuiping; Peng, Junhuan
2017-07-01
The traditional errors-in-variables (EIV) models are widely adopted in the applied sciences. The EIV model estimators, however, can be highly biased by gross errors. This paper focuses on robust estimation in EIV models. A new class of robust estimators, called robust weighted total least squares (RWTLS) estimators, is introduced. Robust estimators of the parameters of the EIV models are derived from M-estimators and the Lagrange multiplier method. A simulated example is carried out to demonstrate the performance of the presented RWTLS. The result shows that the RWTLS algorithm can indeed resist gross errors and achieve a reliable solution.
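The M-estimator ingredient of RWTLS can be illustrated with iteratively reweighted least squares using the Huber weight function. This sketch applies it to ordinary (not total) least squares, so it is a simplified stand-in for the proposed RWTLS rather than the published algorithm:

```python
import numpy as np

def huber_irls(X, y, c=1.345, iters=50):
    """Robust regression via IRLS with Huber weights: observations whose
    scaled residual exceeds c get weight c/|r| instead of 1, which limits
    the influence of gross errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary LS start
    for _ in range(iters):
        r = y - X @ beta
        # Robust residual scale via the median absolute deviation (MAD).
        s = 1.4826 * np.median(np.abs(r - np.median(r))) or 1.0
        u = np.abs(r / s)
        w = np.where(u <= c, 1.0, c / np.maximum(u, 1e-12))
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
    return beta
```

On a line with one gross outlier, ordinary least squares is pulled far from the true slope while the Huber fit stays close.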
Directory of Open Access Journals (Sweden)
Gouveia Diego
2018-01-01
Full Text Available Lidar measurements of cirrus clouds are highly influenced by multiple scattering (MS. We therefore developed an iterative approach to correct elastic backscatter lidar signals for multiple scattering to obtain best estimates of single-scattering cloud optical depth and lidar ratio as well as of the ice crystal effective radius. The approach is based on the exploration of the effect of MS on the molecular backscatter signal returned from above cloud top.
ESTIMATING THE EFFECTS OF EXCHANGE AND INTEREST ...
African Journals Online (AJOL)
Empirical evidence from IPOs of German family-owned firms”, CFS Working Paper. 10. Erdem, Cumhur, Arslan, C.K. and Erdem, M.S. (2005) “Effects of Macroeconomic Variables on Istanbul Stock Exchange Indexes“, Applied Financial Economics 15, pp. 987-994. Okoli, M. N.: Estimating the Effects of Exchange and Interest ...
Models for estimating photosynthesis parameters from in situ production profiles
Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana
2017-12-01
The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of
The Impact of Statistical Leakage Models on Design Yield Estimation
Directory of Open Access Journals (Sweden)
Rouwaida Kanj
2011-01-01
Full Text Available Device mismatch and process variation models play a key role in determining the functionality and yield of sub-100 nm designs. Average characteristics are often of interest, such as the average leakage current or the average read delay. However, detecting rare functional fails is critical for memory design, and designers often seek techniques that enable such events to be modeled accurately. Extremely leaky devices can inflict functionality fails, and the plurality of leaky devices on a bitline increases the dimensionality of the yield estimation problem. Simplified models are possible by adopting approximations to the underlying sum of lognormals. The implications of such approximations on tail probabilities may in turn bias the yield estimate. We review different closed-form approximations and compare them against the CDF matching method, which is shown to be the most effective method for accurate statistical leakage modeling.
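A widely used closed-form approximation of the kind reviewed above is Fenton-Wilkinson, which replaces the sum of independent lognormals by a single lognormal matching the first two moments. A minimal sketch of the generic method (not the paper's comparison code):

```python
import numpy as np

def fenton_wilkinson(mus, sigmas):
    """Approximate the sum of independent lognormals LN(mu_i, sigma_i^2)
    by a single lognormal LN(mu, sigma2), matching mean and second moment."""
    mus = np.asarray(mus, float)
    sigmas = np.asarray(sigmas, float)
    m1 = np.exp(mus + sigmas**2 / 2).sum()                       # E[S]
    var = (np.exp(2 * mus + sigmas**2) * (np.exp(sigmas**2) - 1)).sum()
    m2 = var + m1**2                                             # E[S^2]
    sigma2 = np.log(m2 / m1**2)
    mu = np.log(m1) - sigma2 / 2
    return mu, sigma2
```

By construction the approximation preserves the mean exactly; a single-component input is returned unchanged.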
Estimation of oil toxicity using an additive toxicity model
International Nuclear Information System (INIS)
French, D.
2000-01-01
The impacts on aquatic organisms resulting from acute exposure to aromatic mixtures released from oil spills can be modeled using a newly developed toxicity model. This paper presented a summary of the model development for the toxicity of monoaromatic and polycyclic aromatic hydrocarbon mixtures. This is normally difficult to quantify because oils are mixtures of a variety of hydrocarbons with different toxicities and environmental fates. Also, aromatic hydrocarbons are volatile, making it difficult to expose organisms to constant concentrations in bioassay tests. This newly developed and validated model corrects toxicity for time and temperature of exposure. In addition, it estimates the toxicity of each aromatic in the oil-derived mixture. The toxicity of the mixture can be estimated as the weighted sum of the toxicities of the individual compounds. Acute toxicity is estimated as the LC50 (lethal concentration to 50 per cent of exposed organisms). Sublethal effect levels are estimated from LC50s. The model was verified with available oil bioassay data. It was concluded that oil toxicity is a function of the aromatic content and composition of the oil as well as the fate and partitioning of those components in the environment. 81 refs., 19 tabs., 1 fig.
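The weighted-sum rule described above (concentration addition in "toxic units") can be sketched as follows; the numbers in the check are illustrative, not bioassay data:

```python
def mixture_lc50(fractions, lc50s):
    """LC50 of a mixture under concentration addition: each component
    contributes its mass fraction divided by its own LC50 (its 'toxic units'),
    so 1 / LC50_mix = sum(f_i / LC50_i)."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return 1.0 / sum(f / l for f, l in zip(fractions, lc50s))
```

The mixture LC50 is a harmonic-style mean, so it is dominated by the most toxic (lowest-LC50) components.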
Sreelash, K.; Buis, Samuel; Sekhar, M.; Ruiz, Laurent; Kumar Tomer, Sat; Guérif, Martine
2017-03-01
Characterization of the soil water reservoir is critical for understanding the interactions between crops and their environment and the impacts of land use and environmental changes on the hydrology of agricultural catchments, especially in a tropical context. Recent studies have shown that inversion of crop models is a powerful tool for retrieving information on root zone properties, and the increasing availability of remotely sensed soil and vegetation observations makes it well suited for large-scale applications. The potential of this methodology has, however, never been properly evaluated on extensive experimental datasets, and previous studies suggested that the quality of estimation of soil hydraulic properties may vary depending on agro-environmental situations. The objective of this study was to evaluate this approach on an extensive field experiment. The dataset covered four crops (sunflower, sorghum, turmeric, maize) grown on different soils over several years in South India. The components of AWC (available water capacity), namely soil water content at field capacity and wilting point, and the soil depth of two-layered soils, were estimated by inversion of the crop model STICS with the GLUE (generalized likelihood uncertainty estimation) approach using observations of surface soil moisture (SSM; typically from 0 to 10 cm deep) and leaf area index (LAI), which are attainable from radar remote sensing in tropical regions with frequent cloudy conditions. The results showed that the quality of parameter estimation largely depends on the hydric regime and its interaction with crop type. A mean relative absolute error of 5% for field capacity of the surface layer, 10% for field capacity of the root zone, 15% for wilting point of the surface layer and root zone, and 20% for soil depth can be obtained in favorable conditions. A few observations of SSM (during wet and dry soil moisture periods) and LAI (within water stress periods) were sufficient to significantly improve the estimation of AWC
AMEM-ADL Polymer Migration Estimation Model User's Guide
The user's guide of the Arthur D. Little Polymer Migration Estimation Model (AMEM) provides the information on how the model estimates the fraction of a chemical additive that diffuses through polymeric matrices.
International Nuclear Information System (INIS)
Kiriushin, A.I.; Korotkikh, Yu.G.; Gorodov, G.F.
2002-01-01
Full text: The problems of estimating the spent life and forecasting the residual life of NPP equipment design units operated under non-stationary thermal-force loads are considered. These loads are, as a rule, irregular; they cause rotation of the principal planes of the stress tensor in the most loaded zones of structural elements and visco-elasto-plastic deformation of the material at stress concentrations. The existing engineering approaches to calculating damage accumulation processes in the material of structural units, with their advantages and disadvantages, are analyzed. For fatigue damage accumulation, a model is proposed which takes into account the irregular pattern of deformation, the multiaxiality of the stressed state, rotation of the principal planes, and non-linear summation of damage when the loading mode changes. The model is based on the equations of damaged-medium mechanics, including the equations of viscoplastic deformation of the material and evolutionary equations of damage accumulation. Algorithms for spent-life estimation and residual-life forecasting of the monitored equipment and system zones are built on the basis of this model from the known real loading history, which is determined by the real mode of NPP operation. The results of numerical experiments based on the model for various thermal-force loading processes, and their comparison with experimental results, are presented. (author)
Benefit Estimation Model for Tourist Spaceflights
Goehlich, Robert A.
2003-01-01
It is believed that the only potential means for significant reduction of the recurrent launch cost, which would stimulate human space colonization, is to make the launcher reusable, to increase its reliability, and to make it suitable for new markets such as mass space tourism. But space projects with such long-range aspects are very difficult to finance, because even politicians would like to see a reasonable benefit during their term in office, as they want to be able to explain the investment to the taxpayer. This forces planners to use benefit models instead of intuitive judgement to convince sceptical decision-makers to support new investments in space. Benefit models provide insights into complex relationships and force a better definition of goals. A new approach is introduced in the paper that allows the benefits to be expected from a new space venture to be estimated. The main objective of human space exploration is determined in this study to be to ``improve the quality of life''. This main objective is broken down into sub-objectives, which can be analysed with respect to different interest groups. Such interest groups are the operator of a space transportation system, the passenger, and the government. For example, the operator is strongly interested in profit, the passenger is mainly interested in amusement, and the government is primarily interested in self-esteem and prestige. This leads to different individual satisfaction levels, which are usable for the optimisation process of reusable launch vehicles.
2012-01-01
Background We examine the effect of heat waves on mortality, over and above what would be predicted on the basis of temperature alone. Methods Present modeling approaches may not fully capture extra effects relating to heat wave duration, possibly because the mechanisms of action and the population at risk are different under more extreme conditions. Modeling such extra effects can be achieved using the commonly omitted effect-modification between the lags of temperature in distributed lag models. Results Using data from Stockholm, Sweden, and a variety of modeling approaches, we found that heat wave effects amount to a stable and statistically significant 8.1-11.6% increase in excess deaths per heat wave day. The effects explicitly relating to heat wave duration (2.0-3.9% excess deaths per day) were more sensitive to the degrees of freedom allowed for in the overall temperature-mortality relationship. However, allowing for a very large number of degrees of freedom indicated over-fitting of the overall temperature-mortality relationship. Conclusions Modeling additional heat wave effects, e.g. via between-lag effect-modification, can give a better description of the effects of extreme temperatures, particularly in the non-elderly population. We speculate that it is biologically plausible to differentiate effects of heat from effects of heat wave duration. PMID:22490779
Parameter estimation for groundwater models under uncertain irrigation data
Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of groundwater modeling is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of the ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty in irrigation data and different calibration conditions. The result from the OLS method shows the presence of statistically significant bias in parameter estimates and model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
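The core of IUWLS, as described, is a weighted least-squares step whose weights combine observation-error variance with variance contributed by uncertain inputs, re-evaluated as the estimates update. A conceptual numpy sketch under assumed variances (not the published algorithm):

```python
import numpy as np

def iuwls(X, y, obs_var, input_var_fn, iters=10):
    """Input-uncertainty-weighted least squares (conceptual sketch).
    Weights are 1 / (observation variance + variance propagated from
    uncertain inputs); input_var_fn may depend on the current estimate,
    so the weights are recomputed each iteration."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS start
    for _ in range(iters):
        w = 1.0 / (obs_var + input_var_fn(beta))
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
    return beta
```

With heteroscedastic data where a subset of observations carries large input uncertainty, the weighted fit downweights them and recovers the underlying relationship.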
Phase noise effects on turbulent weather radar spectrum parameter estimation
Lee, Jonggil; Baxa, Ernest G., Jr.
1990-01-01
Accurate weather spectrum moment estimation is important in the use of weather radar for hazardous windshear detection. The effect of stable local oscillator (STALO) instability (jitter) on the spectrum moment estimation algorithm is investigated. Uncertainty in the STALO will affect both the transmitted signal and the received signal, since the STALO provides the transmitted and reference carriers. The proposed approach models STALO phase jitter as it affects the complex autocorrelation of the radar return. The results can therefore be interpreted in terms of any source of system phase jitter for which the model is appropriate and, in particular, may be considered as a cumulative effect of all radar system sources.
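The autocorrelation-based moment estimation that jitter perturbs can be sketched with the classic pulse-pair estimator; this is a toy illustration with an assumed sign convention and a simple white phase-jitter model, not the authors' radar system model:

```python
import numpy as np

def pulse_pair_velocity(iq, prt, wavelength):
    """Pulse-pair mean Doppler velocity from complex returns iq sampled every
    prt seconds: v = wavelength/(4*pi*prt) * arg(R(1)), with R(1) the lag-one
    autocorrelation. (The sign convention varies between systems.)"""
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))
    return wavelength / (4.0 * np.pi * prt) * np.angle(r1)

def add_phase_jitter(iq, std_rad, rng):
    """Multiply each return by exp(j*phi_n) with zero-mean Gaussian phase
    jitter; this leaves the mean velocity estimate unbiased on average but
    inflates its variance."""
    return iq * np.exp(1j * rng.normal(0.0, std_rad, iq.shape))
```

For a pure tone at Doppler frequency f_d the estimator returns v = wavelength * f_d / 2, and moderate jitter perturbs it only slightly.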
Dynamic Diffusion Estimation in Exponential Family Models
Czech Academy of Sciences Publication Activity Database
Dedecius, Kamil; Sečkárová, Vladimíra
2013-01-01
Roč. 20, č. 11 (2013), s. 1114-1117 ISSN 1070-9908 R&D Projects: GA MŠk 7D12004; GA ČR GA13-13502S Keywords: diffusion estimation * distributed estimation * parameter estimation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.639, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/dedecius-0396518.pdf
UAV State Estimation Modeling Techniques in AHRS
Razali, Shikin; Zhahir, Amzari
2017-11-01
An autonomous unmanned aerial vehicle (UAV) system depends on state-estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and helps achieve the flight mission safely. One sensor configuration used for UAV state estimation is the Attitude Heading and Reference System (AHRS) with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
Estimating the Multilevel Rasch Model: With the lme4 Package
Directory of Open Access Journals (Sweden)
Harold Doran
2007-02-01
Full Text Available Traditional Rasch estimation of the item and student parameters via marginal maximum likelihood, joint maximum likelihood or conditional maximum likelihood assumes that individuals in clustered settings are uncorrelated and that items within a test that share a grouping structure are also uncorrelated. These assumptions are often violated, particularly in educational testing situations, in which students are grouped into classrooms and many test items share a common grouping structure, such as a content strand or a reading passage. Consequently, one possible approach is to explicitly recognize the clustered nature of the data and directly incorporate random effects to account for the various dependencies. This article demonstrates how the multilevel Rasch model can be estimated using the functions in R for mixed-effects models with crossed or partially crossed random effects. We demonstrate how to model the following hierarchical data structures: (a) individuals clustered in similar settings (e.g., classrooms, schools), (b) items nested within a particular group (such as a content strand or a reading passage), and (c) how to estimate a teacher × content strand interaction.
Quantitative Estimation for the Effectiveness of Automation
International Nuclear Information System (INIS)
Lee, Seung Min; Seong, Poong Hyun
2012-01-01
In advanced MCRs (main control rooms), various automation systems are applied to enhance human performance and reduce human errors in industrial fields. It is expected that automation provides greater efficiency, lower workload, and fewer human errors. However, these promises are not always fulfilled. As new types of events related to the application of imperfect and complex automation have occurred, it is necessary to analyze the effects of automation systems on the performance of human operators. Therefore, we suggest a quantitative estimation method to analyze the effectiveness of automation systems according to the Level of Automation (LOA) classification, which has been developed over 30 years. The estimation of the effectiveness of automation is achieved by calculating the failure probability of human performance related to the cognitive activities
Efficient estimation of an additive quantile regression model
Cheng, Y.; de Gooijer, J.G.; Zerom, D.
2011-01-01
In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator
Performances of some estimators of linear model with ...
African Journals Online (AJOL)
The estimators are compared by examining the finite-sample properties of the estimators, namely: sum of biases, sum of absolute biases, sum of variances and sum of the mean squared errors of the estimated parameters of the model. Results show that when the autocorrelation level is small (ρ=0.4), the MLGD estimator is best except when ...
On population size estimators in the Poisson mixture model.
Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua
2013-09-01
Estimating population sizes via capture-recapture experiments has numerous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. © 2013, The International Biometric Society.
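The Chao lower bound mentioned here has a well-known closed form built from the counts of individuals observed exactly once (f1) and exactly twice (f2). A short sketch:

```python
def chao_estimator(counts):
    """Chao lower-bound estimator of population size.

    counts[i] = number of times observed individual i appeared (all >= 1).
    N_hat = S_obs + f1^2 / (2 * f2), where f_k is the number of
    individuals seen exactly k times; when f2 = 0 the usual
    bias-corrected variant S_obs + f1*(f1-1)/2 is returned.
    """
    s_obs = len(counts)                          # distinct individuals seen
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)
```

For example, seven individuals seen with frequencies 1,1,1,1,2,2,3 give f1=4, f2=2 and an estimated lower bound of 7 + 16/4 = 11.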
International Nuclear Information System (INIS)
Wu, Kuei-Yen; Wu, Jung-Hua; Huang, Yun-Hsun; Fu, Szu-Chi; Chen, Chia-Yon
2016-01-01
Most existing literature focuses on the direct rebound effect on the demand side for consumers. This study analyses direct and indirect rebound effects in Taiwan's industry from the perspective of producers. However, most studies from the producers' viewpoint may overlook inter-industry linkages. This study applies a supply-driven input-output model to quantify the magnitude of rebound effects by explicitly considering inter-industry linkages. Empirical results showed that total rebound effects for most of Taiwan's sectors were less than 10% in 2011. A comparison among the sectors shows that sectors with lower energy efficiency had higher direct rebound effects, while sectors with higher forward linkages generated higher indirect rebound effects. Taking the Mining sector (S3) as an example, which is an upstream supplier and has high forward linkages: it showed high indirect rebound effects that derive from the accumulation of additional energy consumption by its downstream producers. The findings also showed that in almost all sectors, indirect rebound effects were higher than direct rebound effects. In other words, if indirect rebound effects are neglected, the total rebound effects will be underestimated. Hence, the energy-saving potential may be overestimated. - Highlights: • This study quantifies rebound effects by a supply-driven input-output model. • For most of Taiwan's sectors, total rebound magnitudes were less than 10% in 2011. • Direct rebound effects and energy efficiency were inversely correlated. • Indirect rebound effects and industrial forward linkages were positively correlated. • Indirect rebound effects were generally higher than direct rebound effects.
A Derivative Based Estimator for Semiparametric Index Models
Donkers, A.C.D.; Schafgans, M.
2003-01-01
This paper proposes a semiparametric estimator for single- and multiple-index models. It provides an extension of the average derivative estimator to the multiple-index model setting. The estimator uses the average of the outer product of derivatives and is shown to be root-N consistent and
Estimation of Stochastic Volatility Models by Nonparametric Filtering
DEFF Research Database (Denmark)
Kanaya, Shin; Kristensen, Dennis
2016-01-01
/estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases...
Radiation risk estimation based on measurement error models
Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya
2017-01-01
This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.
Evaluation of black carbon estimations in global aerosol models
Directory of Open Access Journals (Sweden)
Y. Zhao
2009-11-01
Full Text Available We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentrations, aerosol absorption optical depth (AAOD) retrievals from AERONET and the Ozone Monitoring Instrument (OMI), and BC column estimations based on AERONET. In regions other than Asia, most models are biased high compared to surface concentration measurements. However, compared with column AAOD or BC burden retrievals, the models are generally biased low. The average ratio of model to retrieved AAOD is less than 0.7 in South American and 0.6 in African biomass burning regions; both of these regions lack surface concentration measurements. In Asia the average model to observed ratio is 0.7 for AAOD and 0.5 for BC surface concentrations. Compared with aircraft measurements over the Americas at latitudes between 0° and 50°N, the average model is a factor of 8 larger than observed, and most models exceed the measured BC standard deviation in the mid to upper troposphere. At higher latitudes the average model to aircraft BC ratio is 0.4 and models underestimate the observed BC loading in the lower and middle troposphere associated with springtime Arctic haze. Low model bias for AAOD but overestimation of surface and upper atmospheric BC concentrations at lower latitudes suggests that most models are underestimating BC absorption and should improve estimates for refractive index, particle size, and optical effects of BC coating. Retrieval uncertainties and/or differences with model diagnostic treatment may also contribute to the model-measurement disparity. The largest AeroCom model diversity occurred in northern Eurasia and the remote Arctic, regions influenced by anthropogenic sources. Changing emissions, aging, removal, or optical properties within a single model
Efficient estimation of semiparametric copula models for bivariate survival data
Cheng, Guang
2014-01-01
A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.
Mathematical model of transmission network static state estimation
Directory of Open Access Journals (Sweden)
Ivanov Aleksandar
2012-01-01
Full Text Available In this paper the characteristics and capabilities of the power transmission network static state estimator are presented. The solution process for the mathematical model, including the measurement errors and their processing, is developed. To evaluate the difference between the general state estimation model and the fast decoupled state estimation model, both models are applied to an example and the derived results are compared.
Instrumental variables estimation under a structural Cox model
DEFF Research Database (Denmark)
Martinussen, Torben; Nørbo Sørensen, Ditte; Vansteelandt, Stijn
2017-01-01
Instrumental variable (IV) analysis is an increasingly popular tool for inferring the effect of an exposure on an outcome, as witnessed by the growing number of IV applications in epidemiology, for instance. The majority of IV analyses of time-to-event endpoints are, however, dominated by heuristic approaches. More rigorous proposals have either sidestepped the Cox model, or considered it within a restrictive context with dichotomous exposure and instrument, amongst other limitations. The aim of this article is to reconsider IV estimation under a structural Cox model, allowing for arbitrary exposure and instruments. We propose a novel class of estimators and derive their asymptotic properties. The methodology is illustrated using two real data applications, and using simulated data.
Energy Technology Data Exchange (ETDEWEB)
Jin, Cui; Xiao, Xiangming; Wagle, Pradeep; Griffis, Timothy; Dong, Jinwei; Wu, Chaoyang; Qin, Yuanwei; Cook, David R.
2015-11-01
Satellite-based Production Efficiency Models (PEMs) often require meteorological reanalysis data such as the North America Regional Reanalysis (NARR) by the National Centers for Environmental Prediction (NCEP) as model inputs to simulate Gross Primary Production (GPP) at regional and global scales. This study first evaluated the accuracies of air temperature (TNARR) and downward shortwave radiation (RNARR) of the NARR by comparing with in-situ meteorological measurements at 37 AmeriFlux non-crop eddy flux sites, then used one PEM – the Vegetation Photosynthesis Model (VPM) – to simulate 8-day mean GPP (GPPVPM) at seven AmeriFlux crop sites, and investigated the uncertainties in GPPVPM from climate inputs as compared with eddy covariance-based GPP (GPPEC). Results showed that TNARR agreed well with in-situ measurements; RNARR, however, was positively biased. An empirical linear correction was applied to RNARR, and significantly reduced the relative error of RNARR by ~25% for crop site-years. Overall, GPPVPM calculated from the in-situ (GPPVPM(EC)), original (GPPVPM(NARR)) and adjusted NARR (GPPVPM(adjNARR)) climate data tracked the seasonality of GPPEC well, albeit with different degrees of bias. GPPVPM(EC) showed a good match with GPPEC for maize (Zea mays L.), but was slightly underestimated for soybean (Glycine max L.). Replacing the in-situ climate data with the NARR resulted in a significant overestimation of GPPVPM(NARR) (18.4/29.6% for irrigated/rainfed maize and 12.7/12.5% for irrigated/rainfed soybean). GPPVPM(adjNARR) showed a good agreement with GPPVPM(EC) for both crops due to the reduction in the bias of RNARR. The results imply that the bias of RNARR introduced significant uncertainties into the PEM-based GPP estimates, suggesting that more accurate surface radiation datasets are needed to estimate primary production of terrestrial ecosystems at regional and global scales.
Forward models and state estimation in compensatory eye movements
Directory of Open Access Journals (Sweden)
Maarten A Frens
2009-11-01
Full Text Available The compensatory eye movement system maintains a stable retinal image, integrating information from different sensory modalities to compensate for head movements. Inspired by recent models of the physiology of limb movements, we suggest that compensatory eye movements (CEM) can be modeled as a control system with three essential building blocks: a forward model that predicts the effects of motor commands; a state estimator that integrates sensory feedback into this prediction; and a feedback controller that translates a state estimate into motor commands. We propose a specific mapping of nuclei within the CEM system onto these control functions. Specifically, we suggest that the Flocculus is responsible for generating the forward model prediction and that the Vestibular Nuclei integrate sensory feedback to generate an estimate of current state. Finally, the brainstem motor nuclei – in the case of horizontal compensation this means the Abducens Nucleus and the Nucleus Prepositus Hypoglossi – implement a feedback controller, translating state into motor commands. While these efforts to understand the physiological control system as a feedback control system are in their infancy, there is the intriguing possibility that compensatory eye movements and targeted voluntary movements use the same cerebellar circuitry in fundamentally different ways.
Estimation of distribution overlap of urn models.
Hampton, Jerrad; Lladser, Manuel E
2012-01-01
A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in [Formula: see text] draws from another distribution. We show our estimator of dissimilarity to be a [Formula: see text]-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of [Formula: see text]. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially over [Formula: see text], we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria when it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.
Semiparametric Efficient Adaptive Estimation of the PTTGARCH model
Ciccarelli, Nicola
2016-01-01
Financial data sets exhibit conditional heteroskedasticity and asymmetric volatility. In this paper we derive a semiparametric efficient adaptive estimator of a conditional heteroskedasticity and asymmetric volatility GARCH-type model (i.e., the PTTGARCH(1,1) model). Via kernel density estimation of the unknown density function of the innovation and via the Newton-Raphson technique applied to the root-n-consistent quasi-maximum likelihood estimator, we construct a more efficient estimator than...
Leaching of atmospherically deposited nitrogen from forested watersheds can acidify lakes and streams. Using a modified version of the Model of Acidification of Groundwater in Catchments, we made computer simulations of such effects for 36 lake catchments in the Adirondack Mount...
Volatility estimation using a rational GARCH model
Directory of Open Access Journals (Sweden)
Tetsuya Takaishi
2018-03-01
Full Text Available The rational GARCH (RGARCH) model has been proposed as an alternative GARCH model that captures the asymmetric property of volatility. In addition to the previously proposed RGARCH model, we propose an alternative RGARCH model called the RGARCH-Exp model that is more stable when dealing with outliers. We measure the performance of the volatility estimation by a loss function calculated using realized volatility as a proxy for true volatility and compare the RGARCH-type models with other asymmetric-type models such as the EGARCH and GJR models. We conduct empirical studies of six stocks on the Tokyo Stock Exchange and find that volatility estimation using the RGARCH-type models outperforms the GARCH model and is comparable to other asymmetric GARCH models.
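The RGARCH specification itself is not reproduced in this summary; as a reference point, the baseline GARCH(1,1) conditional-variance recursion that the RGARCH-type models are benchmarked against can be sketched as follows. The parameter values are illustrative, not fitted to any of the stocks studied:

```python
def garch11_variance(returns, omega=1e-5, alpha=0.1, beta=0.85):
    """Conditional variance path of a standard GARCH(1,1):
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1},
    started from the unconditional variance omega / (1 - alpha - beta).
    """
    h = omega / (1.0 - alpha - beta)   # unconditional (long-run) variance
    path = [h]
    for r in returns[:-1]:
        h = omega + alpha * r * r + beta * h
        path.append(h)
    return path
```

A quick sanity check: if every return equals the unconditional standard deviation, the recursion is at its fixed point and the variance path stays flat.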
Comparison of two intelligent models to estimate the instantaneous ...
Indian Academy of Sciences (India)
Mostafa Zamani Mohiabadi
2017-07-25
... (2014) has combined empirical models and a Bayesian neural network (BNN) model to estimate daily global solar radiation on a horizontal surface in Ghardaïa, Algeria. In their model, the maximum and minimum air temperatures of the year 2006 have been used to estimate the coefficients of the empirical ...
A Contingent Trip Model for Estimating Rail-trail Demand
Carter J. Betz; John C. Bergstrom; J. Michael Bowker
2003-01-01
The authors develop a contingent trip model to estimate the recreation demand for and value of a potential rail-trail site in north-east Georgia. The contingent trip model is an alternative to travel cost modelling useful for ex ante evaluation of proposed recreation resources or management alternatives. The authors estimate the empirical demand for trips using a...
Natesan, Prathiba; Limbers, Christine; Varni, James W.
2010-01-01
The present study presents the formulation of graded response models in the multilevel framework (as nonlinear mixed models) and demonstrates their use in estimating item parameters and investigating the group-level effects for specific covariates using Bayesian estimation. The graded response multilevel model (GRMM) combines the formulation of…
DEFF Research Database (Denmark)
Siersma, V; Als-Nielsen, B; Chen, Weikeng
2007-01-01
Methodological deficiencies are known to affect the results of randomized trials. There are several components of trial quality, which, when inadequately attended to, may bias the treatment effect under study. The extent of this bias, so far only vaguely known, is currently being investigated. ... Therefore, a stable multivariable method that allows for heterogeneity is needed for assessing the 'bias coefficients'. We present two general statistical models for analysis of a study of 523 randomized trials from 48 meta-analyses in a random sample of Cochrane reviews: a logistic regression model uses...
NEW MODEL FOR SOLAR RADIATION ESTIMATION FROM ...
African Journals Online (AJOL)
Air temperature of monthly mean minimum temperature, maximum temperature and relative humidity obtained from Nigerian Meteorological Agency (NIMET) were used as inputs to the ANFIS model and monthly mean global solar radiation was used as out of the model. Statistical evaluation of the model was done based on ...
Lag space estimation in time series modelling
DEFF Research Database (Denmark)
Goutte, Cyril
1997-01-01
The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...
Model parameters estimation and sensitivity by genetic algorithms
International Nuclear Information System (INIS)
Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca
2003-01-01
In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithms (GAs) optimization procedure for the estimation of such parameters. The Genetic Algorithms' search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points which possibly carry relevant information on the underlying model characteristics. A possible utilization of this information amounts to creating and updating an archive with the set of best solutions found at each generation and then analyzing the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which influence the model outputs little. In this sense, besides estimating the parameter values efficiently, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output. The ...
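The GA machinery the abstract describes (parent selection, crossover, replacement, mutation, plus an archive of per-generation bests) can be sketched in toy form. The operators, constants, and function names below are illustrative choices, not those of the paper:

```python
import random

def ga_minimize(f, bounds, pop=30, gens=60, seed=0):
    """Toy real-coded GA: elitist replacement, blend crossover, Gaussian
    mutation.  Returns the best point found and the archive of
    per-generation bests, whose variable-wise stabilization can be
    inspected to rank parameter importance, as the abstract suggests."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    archive = []
    for _ in range(gens):
        P.sort(key=f)
        archive.append(P[0][:])                    # record this generation's best
        elite = P[: pop // 2]                      # replacement: keep the best half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)            # parent selection
            child = [(x + y) / 2 for x, y in zip(a, b)]   # blend crossover
            j = rng.randrange(dim)                 # mutate a single gene
            lo, hi = bounds[j]
            child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        P = elite + children
    P.sort(key=f)
    return P[0], archive
```

Because the best individual always survives into the elite set, the archived best objective values are non-increasing across generations, which is what makes the archive statistics interpretable.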
Chen, Li; Han, Ting-Ting; Li, Tao; Ji, Ya-Qin; Bai, Zhi-Peng; Wang, Bin
2012-07-01
Due to the lack of a prediction model for current wind erosion in China and the slow development of such models, this study aims to predict the wind erosion of soil and the resulting dust emission and to develop a prediction model for wind erosion in Tianjin by investigating the structure, parameter systems and the relationships among the parameter systems of the prediction models for wind erosion in typical areas, using the U.S. wind erosion prediction system (WEPS) as reference. Based on the remote sensing technique and the test data, a parameter system was established for the prediction model of wind erosion and dust emission, and a model was developed that is suitable for the prediction of wind erosion and dust emission in Tianjin. Tianjin was divided into 11 080 blocks with a resolution of 1 km × 1 km, among which 7 778 dust-emitting blocks were selected. The parameters of the blocks were localized, including longitude, latitude, elevation and direction, etc. The database files of the blocks were localized, including the wind file, climate file, soil file and management file. The weps.run file was edited. Based on Microsoft Visual Studio 2008, secondary development was done in C++, and the dust fluxes of the 7 778 blocks were estimated, including creep and saltation fluxes, suspension fluxes and PM10 fluxes. Based on the parameters of wind tunnel experiments in Inner Mongolia, the soil measurement data and climate data in the suburbs of Tianjin, the wind erosion modulus, wind erosion fluxes, dust emission release modulus and dust release fluxes were calculated for the four seasons and the whole year in the suburbs of Tianjin. In 2009, the total creep and saltation fluxes, suspension fluxes and PM10 fluxes in the suburbs of Tianjin were 2.54 × 10^6 t, 1.25 × 10^7 t and 9.04 × 10^5 t, respectively, among which the parts directed toward the central district were 5.61 × 10^5 t, 2.89 × 10^6 t and 2.03 × 10^5 t, respectively.
Health effects estimation code development for accident consequence analysis
International Nuclear Information System (INIS)
Togawa, O.; Homma, T.
1992-01-01
As part of a computer code system for nuclear reactor accident consequence analysis, two computer codes have been developed for estimating health effects expected to occur following an accident. Health effects models used in the codes are based on the models of NUREG/CR-4214 and are revised for the Japanese population on the basis of the data from the reassessment of the radiation dosimetry and information derived from epidemiological studies on atomic bomb survivors of Hiroshima and Nagasaki. The health effects models include early and continuing effects, late somatic effects and genetic effects. The values of some model parameters are revised for early mortality. The models are modified for predicting late somatic effects such as leukemia and various kinds of cancers. The models for genetic effects are the same as those of NUREG. In order to test the performance of one of these codes, it is applied to the U.S. and Japanese populations. This paper provides descriptions of health effects models used in the two codes and gives comparisons of the mortality risks from each type of cancer for the two populations. (author)
Efficient estimation of an additive quantile regression model
Cheng, Y.; de Gooijer, J.G.; Zerom, D.
2009-01-01
In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By
Efficient estimation of an additive quantile regression model
Cheng, Y.; de Gooijer, J.G.; Zerom, D.
2010-01-01
In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By
Probability density estimation in stochastic environmental models using reverse representations
Van den Berg, E.; Heemink, A.W.; Lin, H.X.; Schoenmakers, J.G.M.
2003-01-01
The estimation of probability densities of variables described by systems of stochastic differential equations has long been done using forward time estimators, which rely on the generation of realizations of the model, forward in time. Recently, an estimator based on the combination of forward and
Performances Of Estimators Of Linear Models With Autocorrelated ...
African Journals Online (AJOL)
The performances of five estimators of linear models with Autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite is quite similar to the properties of the estimators when the sample size is infinite although ...
Pseudo-partial likelihood estimators for the Cox regression model with missing covariates.
Luo, Xiaodong; Tsai, Wei Yann; Xu, Qiang
2009-09-01
By embedding the missing covariate data into a left-truncated and right-censored survival model, we propose a new class of weighted estimating functions for the Cox regression model with missing covariates. The resulting estimators, called the pseudo-partial likelihood estimators, are shown to be consistent and asymptotically normal. A simulation study demonstrates that, compared with the popular inverse-probability weighted estimators, the new estimators perform better when the observation probability is small and improve efficiency of estimating the missing covariate effects. Application to a practical example is reported.
TPmsm: Estimation of the Transition Probabilities in 3-State Models
Directory of Open Access Journals (Sweden)
Artur Araújo
2014-12-01
Full Text Available One major goal in clinical applications of multi-state models is the estimation of transition probabilities. The usual nonparametric estimator of the transition matrix for non-homogeneous Markov processes is the Aalen-Johansen estimator (Aalen and Johansen 1978). However, two problems may arise from using this estimator: first, its standard error may be large in heavily censored scenarios; second, the estimator may be inconsistent if the process is non-Markovian. The development of the R package TPmsm has been motivated by several recent contributions that account for these estimation problems. Estimation and statistical inference for transition probabilities can be performed using TPmsm. The TPmsm package provides seven different approaches to three-state illness-death modeling. In two of these approaches the transition probabilities are estimated conditionally on current or past covariate measures. Two real data examples are included for illustration of software usage.
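For fully observed (uncensored) Markov data, the Aalen-Johansen estimator is the product over event times of I + dA(t), where dA(t) holds the hazard increments. A stripped-down sketch for a small multi-state sample follows; it handles tied times sequentially, which is a simplification of the usual joint treatment, and is not the TPmsm implementation:

```python
def aalen_johansen(initial, events):
    """Aalen-Johansen estimate of the transition matrix P(0, t_max).

    initial: initial number of subjects in each state.
    events: list of (time, from_state, to_state), one per observed
    transition, with no censoring.  P(0, t) is built as the product
    over event times of (I + dA(t)).
    """
    at_risk = list(initial)
    k = len(initial)
    P = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(k)]
    for _, h, j in sorted(events):
        dA = [[0.0] * k for _ in range(k)]
        dA[h][j] = 1.0 / at_risk[h]        # hazard increment h -> j
        dA[h][h] = -1.0 / at_risk[h]       # matching diagonal decrement
        step = [[(1.0 if a == b else 0.0) + dA[a][b] for b in range(k)]
                for a in range(k)]
        P = [[sum(P[a][c] * step[c][b] for c in range(k)) for b in range(k)]
             for a in range(k)]
        at_risk[h] -= 1                    # subject leaves state h
        at_risk[j] += 1                    # and enters state j
    return P
```

With two subjects starting in state 0 and one of them traversing 0 → 1 → 2, the estimator gives P(0, t)[0][0] = P(0, t)[0][2] = 0.5, and each row of P remains a probability distribution.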
Modeling and Parameter Estimation of a Small Wind Generation System
Directory of Open Access Journals (Sweden)
Carlos A. Ramírez Gómez
2013-11-01
Full Text Available The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were registered in a weather station located in the Fraternidad Campus at ITM. Wind speed data were applied to a reference model programmed with PSIM software. From that simulation, variables were registered to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, but the estimated model offers a higher flexibility than the model programmed in PSIM software.
Use of econometric models to estimate expenditure shares.
Trogdon, Justin G; Finkelstein, Eric A; Hoerger, Thomas J
2008-08-01
To investigate the use of regression models to calculate disease-specific shares of medical expenditures. Medical Expenditure Panel Survey (MEPS), 2000-2003. Theoretical investigation and secondary data analysis. Condition files used to define the presence of 10 medical conditions. Incremental effects of conditions on expenditures, expressed as a fraction of total expenditures, cannot generally be interpreted as shares. When the presence of one condition increases treatment costs for another condition, summing condition-specific shares leads to double-counting of expenditures. Condition-specific shares generated from multiplicative models should not be summed. We provide an algorithm that allows estimates based on these models to be interpreted as shares and summed across conditions.
Estimating the power of Mars’ greenhouse effect
Haberle, Robert M.
2013-03-01
Extensive modeling of Mars in conjunction with in situ observations suggests that the annual average global mean surface temperature is Ts ∼ 202 K. Yet its effective temperature, i.e., the temperature at which a blackbody radiates away the energy it absorbs, is Te ∼ 208 K. How can a planet with a CO2 atmosphere have a mean annual surface temperature that is actually less than its effective temperature? We use the Ames General Circulation Model to explain why this is the case and point out that the correct comparison for the effective temperature is with the effective surface temperature Tse, which is the fourth root of the annual and globally averaged value of Ts^4. This may seem obvious, but the distinction is often not recognized in the literature.
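The distinction is a direct consequence of Jensen's inequality: because radiated flux goes as Ts^4, the effective surface temperature Tse = (mean(Ts^4))^(1/4) always exceeds the plain mean of a non-uniform temperature field, so comparing Te with mean(Ts) can make a greenhouse effect appear negative. The temperature field below is synthetic, not GCM output.

```python
import numpy as np

rng = np.random.default_rng(2)
Ts = rng.normal(202.0, 40.0, 100_000)   # synthetic surface temperatures [K]
Ts = np.clip(Ts, 140.0, 300.0)          # crude Mars-like range (assumed)

Ts_mean = Ts.mean()
Tse = (np.mean(Ts**4))**0.25            # effective surface temperature
print(f"mean Ts = {Ts_mean:.1f} K, Tse = {Tse:.1f} K")
# Tse > mean(Ts): radiation is weighted toward the warm parts of the field.
```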
Lagrangian speckle model and tissue-motion estimation--theory.
Maurice, R L; Bertrand, M
1999-07-01
It is known that when a tissue is subjected to movements such as rotation, shearing, scaling, etc., the resulting changes in speckle patterns act as a noise source, often responsible for most of the displacement-estimate variance. From a modeling point of view, these changes can be thought of as resulting from two mechanisms: one is the motion of the speckles and the other, the alterations of their morphology. In this paper, we propose a new tissue-motion estimator to counteract these speckle decorrelation effects. The estimator is based on a Lagrangian description of the speckle motion. This description allows us to follow local characteristics of the speckle field as if they were a material property. This method leads to an analytical description of the decorrelation in a way which enables the derivation of an appropriate inverse filter for speckle restoration. The filter is appropriate for linear geometrical transformation of the scattering function (LT), i.e., a constant-strain region of interest (ROI). As the LT itself is a parameter of the filter, a tissue-motion estimator can be formulated as a nonlinear minimization problem, seeking the best match between the pre-tissue-motion image and a restored-speckle post-motion image. The method is tested using simulated radio-frequency (RF) images of tissue undergoing axial shear.
A simulation of water pollution model parameter estimation
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
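A hedged sketch of the batch least-squares idea: simulate noisy remote-sensed concentrations from a two-dimensional instantaneous-release diffusion model, then recover the diffusivities by linear least squares on log-concentration. The paper's model also includes shear; this simplified version omits it, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
M, t = 100.0, 5.0                  # released mass, observation time (arbitrary units)
Dx_true, Dy_true = 2.0, 0.5        # diffusivities to be estimated

# Simulated sensor grid and true concentration field
x, y = np.meshgrid(np.linspace(-8, 8, 25), np.linspace(-4, 4, 25))
x, y = x.ravel(), y.ravel()
C = M/(4*np.pi*t*np.sqrt(Dx_true*Dy_true)) * np.exp(-x**2/(4*Dx_true*t)
                                                    - y**2/(4*Dy_true*t))
C_obs = C * np.exp(rng.normal(0, 0.02, C.size))   # multiplicative sensor noise

# ln C = a - x^2/(4 Dx t) - y^2/(4 Dy t): linear in (a, 1/Dx, 1/Dy),
# so a batch least-squares processor recovers the parameters directly.
X = np.column_stack([np.ones_like(x), -x**2/(4*t), -y**2/(4*t)])
coef, *_ = np.linalg.lstsq(X, np.log(C_obs), rcond=None)
Dx_hat, Dy_hat = 1/coef[1], 1/coef[2]
print(f"Dx ~ {Dx_hat:.2f}, Dy ~ {Dy_hat:.2f}")
```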
Verweij, Karin J H; Yang, Jian; Lahti, Jari; Veijola, Juha; Hintsanen, Mirka; Pulkki-Råback, Laura; Heinonen, Kati; Pouta, Anneli; Pesonen, Anu-Katriina; Widen, Elisabeth; Taanila, Anja; Isohanni, Matti; Miettunen, Jouko; Palotie, Aarno; Penke, Lars; Service, Susan K; Heath, Andrew C; Montgomery, Grant W; Raitakari, Olli; Kähönen, Mika; Viikari, Jorma; Räikkönen, Katri; Eriksson, Johan G; Keltikangas-Järvinen, Liisa; Lehtimäki, Terho; Martin, Nicholas G; Järvelin, Marjo-Riitta; Visscher, Peter M; Keller, Matthew C; Zietsch, Brendan P
2012-10-01
Personality traits are basic dimensions of behavioral variation, and twin, family, and adoption studies show that around 30% of the between-individual variation is due to genetic variation. There is rapidly growing interest in understanding the evolutionary basis of this genetic variation. Several evolutionary mechanisms could explain how genetic variation is maintained in traits, and each of these makes predictions in terms of the relative contribution of rare and common genetic variants to personality variation, the magnitude of nonadditive genetic influences, and whether personality is affected by inbreeding. Using genome-wide single nucleotide polymorphism (SNP) data from > 8000 individuals, we estimated that little variation in the Cloninger personality dimensions (7.2% on average) is due to the combined effect of common, additive genetic variants across the genome, suggesting that most heritable variation in personality is due to rare variant effects and/or a combination of dominance and epistasis. Furthermore, higher levels of inbreeding were associated with less socially desirable personality trait levels in three of the four personality dimensions. These findings are consistent with genetic variation in personality traits having been maintained by mutation-selection balance. © 2012 The Author(s). Evolution© 2012 The Society for the Study of Evolution.
Adult head CT scans: the uncertainties of effective dose estimates
International Nuclear Information System (INIS)
Gregory, Kent J.; Bibbo, Giovanni; Pattison, John E.
2008-01-01
sizes and positions within patients, and advances in CT scanner design that have not been taken into account by the effective dose estimation methods. The analysis excludes uncertainties due to variation in patient head size and the size of the model heads. For each of the four dose estimation methods analysed, the smallest and largest uncertainties (stated at the 95% confidence interval) were; 20-31% (Nagel), 14-28% (ImpaCT), 20-36% (Wellhoefer) and 21-32% (DLP). In each case, the smallest dose estimate uncertainties apply when the CT Dose Index for the scanner has been measured. In general, each of the four methods provide reasonable estimates of effective dose from head CT scans, with the ImpaCT method having the marginally smaller uncertainties. This uncertainty analysis method may be applied to other types of CT scans, such as chest, abdomen and pelvis studies, and may reveal where improvements can be made to reduce the uncertainty of those effective dose estimates. As identified in the BEIR VII report (2006), improvement in the uncertainty of effective dose estimates for individuals is expected to lead to a greater understanding of the hazards posed by diagnostic radiation exposures. (author)
Optimal covariance selection for estimation using graphical models
Vichik, Sergey; Oshman, Yaakov
2011-01-01
We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...
Temporal rainfall estimation using input data reduction and model inversion
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a
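The dimension-reduction step can be sketched with a one-level Haar DWT written out in plain numpy (rather than a wavelet library): a rainfall series is represented by half as many approximation coefficients, which is the kind of reduced parameterization that makes joint rainfall-parameter inversion tractable. The series here is synthetic, not Warwick data.

```python
import numpy as np

rng = np.random.default_rng(4)
rain = np.abs(rng.gamma(0.3, 2.0, 64))          # synthetic hourly rainfall

s = 1/np.sqrt(2)
approx = s*(rain[0::2] + rain[1::2])            # low-pass (approximation) coefficients
detail = s*(rain[0::2] - rain[1::2])            # high-pass (detail) coefficients

# Estimating only `approx` (32 values instead of 64) and zeroing the details
# reconstructs a smoothed series that conserves the total rainfall depth.
recon = np.empty_like(rain)
recon[0::2] = s*approx
recon[1::2] = s*approx
print(approx.size, float(recon.sum() - rain.sum()))
```

Keeping both coefficient sets reconstructs the series exactly; dropping the details is the lossy, lower-dimensional representation used in the inversion.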
Estimating Equilibrium Effects of Job Search Assistance
DEFF Research Database (Denmark)
Gautier, Pieter; Muller, Paul; van der Klaauw, Bas
Randomized experiments provide policy relevant treatment effects if there are no spillovers between participants and nonparticipants. We show that this assumption is violated for a Danish activation program for unemployed workers. Using a difference-in-difference model we show that the nonparticip...
Estimating a Noncompensatory IRT Model Using Metropolis within Gibbs Sampling
Babcock, Ben
2011-01-01
Relatively little research has been conducted with the noncompensatory class of multidimensional item response theory (MIRT) models. A Monte Carlo simulation study was conducted exploring the estimation of a two-parameter noncompensatory item response theory (IRT) model. The estimation method used was a Metropolis-Hastings within Gibbs algorithm…
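A minimal sketch of the Metropolis-within-Gibbs algorithm itself, on a toy target rather than the noncompensatory MIRT posterior: each coordinate is updated in turn with a random-walk Metropolis step while the other is held fixed. In the IRT application each "coordinate block" would be an item or person parameter vector; the correlated bivariate normal below only shows the algorithm's shape.

```python
import numpy as np

rng = np.random.default_rng(5)
rho = 0.8

def log_target(x, y):
    # Unnormalised log-density of a bivariate normal with correlation rho
    return -(x*x - 2*rho*x*y + y*y) / (2*(1 - rho**2))

x, y = 0.0, 0.0
draws = []
for it in range(20_000):
    for coord in (0, 1):                      # Gibbs sweep over coordinates
        xp, yp = x, y
        if coord == 0:
            xp = x + rng.normal(0, 1.0)       # Metropolis proposal for x | y
        else:
            yp = y + rng.normal(0, 1.0)       # Metropolis proposal for y | x
        if np.log(rng.uniform()) < log_target(xp, yp) - log_target(x, y):
            x, y = xp, yp                     # accept; otherwise keep current state
    draws.append((x, y))

d = np.array(draws[2000:])                    # discard burn-in
print(np.round(d.mean(axis=0), 2), np.round(np.corrcoef(d.T)[0, 1], 2))
```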
Estimated Frequency Domain Model Uncertainties used in Robust Controller Design
DEFF Research Database (Denmark)
Tøffner-Clausen, S.; Andersen, Palle; Stoustrup, Jakob
1994-01-01
This paper deals with the combination of system identification and robust controller design. Recent results on estimation of frequency domain model uncertainty are...
Estimating Lead (Pb) Bioavailability In A Mouse Model
Children are exposed to Pb through ingestion of Pb-contaminated soil. Soil Pb bioavailability is estimated using animal models or with chemically defined in vitro assays that measure bioaccessibility. However, bioavailability estimates in a large animal model (e.g., swine) can be...
Bahadori, A.; VanBaalen, M.; Shavers, M.; Semones, E.; Dodge, C.; Bolch, W.
2010-01-01
The estimate of absorbed dose to individual organs of a space crewmember is affected by the geometry of the anatomical model of the astronaut used in the radiation transport calculation. For astronaut dosimetry, NASA currently uses the computerized anatomical male (CAM) and computerized anatomical female (CAF) stylized phantoms to represent astronauts in its operational radiation dose analyses. These phantoms are available in one size and in two body positions. In contrast, the UF Hybrid Adult Male and Female (UFHADM and UFHADF) phantoms have organ shapes based on actual CT data. The surfaces of these phantoms are defined by non-uniform rational B-spline surfaces, and are thus flexible in terms of body morphometry and extremity positioning. In this study, UFHADM and UFHADF are scaled to dimensions corresponding to 5th, 50th, and 95th percentile (PCTL) male and female astronauts. A ray-tracing program is written in Visual Basic 2008, which is then used to create areal density maps for dose points corresponding to various organs within the phantoms. The areal density maps, along with appropriate space radiation spectra, are input into the NASA program couplet HZETRN/BRYNTRN, and organ doses are calculated. The areal density maps for selected tissues and organs of the 5th, 50th, and 95th PCTL male and female phantoms are presented and compared. In addition, the organ doses for the 5th, 50th, and 95th PCTL male and female phantoms are presented and compared to organ doses for CAM and CAF.
Time improvement of photoelectric effect calculation for absorbed dose estimation
International Nuclear Information System (INIS)
Massa, J M; Wainschenker, R S; Doorn, J H; Caselli, E E
2007-01-01
Ionizing radiation therapy is a very useful tool in cancer treatment. It is very important to determine the absorbed dose in human tissue to accomplish an effective treatment. A mathematical model based on affected areas is the most suitable tool to estimate the absorbed dose. Lately, Monte Carlo based techniques have become the most reliable, but they are time expensive. Absorbed dose calculating programs using different strategies have to choose between estimation quality and calculating time. This paper describes an optimized method for the photoelectron polar angle calculation in the photoelectric effect, which is significant to estimate deposited energy in human tissue. In the case studies, the time cost reduction nearly reached 86%, meaning that the time needed to do the calculation is approximately one seventh of that of the non-optimized approach, while keeping precision invariant.
Risk estimates for the health effects of alpha radiation
International Nuclear Information System (INIS)
Thomas, D.C.; McNeill, K.G.
1981-09-01
This report provides risk estimates for various health effects of alpha radiation. Human and animal data have been used to characterize the shapes of dose-response relations and the effects of various modifying factors, but quantitative risk estimates are based solely on human data: for lung cancer, on miners in the Colorado plateau, Czechoslovakia, Sweden, Ontario and Newfoundland; for bone and head cancers, on radium dial painters and radium-injected patients. Slopes of dose-response relations for lung cancer show a tendency to decrease with increasing dose. Linear extrapolation is unlikely to underestimate the excess risk at low doses by more than a factor of 1.5. Under the linear cell-killing model, our best estimate
Estimation of the Generalized Linear Model and an Application
Directory of Open Access Journals (Sweden)
Malika CHIKHI
2012-06-01
Full Text Available This article presents the generalized linear model, which encompasses modeling techniques such as linear regression, logistic regression, log-linear regression, and Poisson regression. We begin by presenting the exponential-family models, and then estimate the model parameters by the maximum likelihood method. The model coefficients are then tested for significance and their confidence intervals are constructed, using the Wald test, which assesses the significance of the true parameter value based on the sample estimate.
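The workflow (GLM fitting by maximum likelihood, then Wald tests and confidence intervals) can be sketched on a simulated example: a logistic regression, i.e., a GLM with binomial family, fitted by Newton-Raphson. The data and coefficient values are simulated, not from the article.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.2])            # hypothetical true coefficients
p = 1/(1 + np.exp(-X @ beta_true))
ybin = rng.binomial(1, p)

beta = np.zeros(2)
for _ in range(25):                          # Newton-Raphson (Fisher scoring)
    mu = 1/(1 + np.exp(-X @ beta))
    W = mu*(1 - mu)
    H = X.T @ (X * W[:, None])               # Fisher information matrix
    beta = beta + np.linalg.solve(H, X.T @ (ybin - mu))

se = np.sqrt(np.diag(np.linalg.inv(H)))      # standard errors from the information
wald_z = beta / se                           # Wald test of H0: beta_j = 0
ci = np.column_stack([beta - 1.96*se, beta + 1.96*se])
print(np.round(beta, 2), np.round(wald_z, 1))
```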
Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V
2007-10-01
The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model, that is able to handle serially correlated residuals typically arising from structural mis-specification of the true underlying model. The use of SDEs also opens up for new tools for model development and easily allows for tracking of unknown inputs and parameters over time. An algorithm for maximum likelihood estimation of the model has earlier been proposed, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two examples of application which focus on the ability of the model to estimate unknown inputs facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time varying liver extraction based on both C-peptide and insulin measurements.
Incremental parameter estimation of kinetic metabolic network models
Directory of Open Access Journals (Sweden)
Jia Gengjie
2012-11-01
Full Text Available Abstract Background An efficient and reliable parameter estimation method is essential for the creation of biological models using ordinary differential equation (ODE. Most of the existing estimation methods involve finding the global minimum of data fitting residuals over the entire parameter space simultaneously. Unfortunately, the associated computational requirement often becomes prohibitively high due to the large number of parameters and the lack of complete parameter identifiability (i.e. not all parameters can be uniquely identified. Results In this work, an incremental approach was applied to the parameter estimation of ODE models from concentration time profiles. Particularly, the method was developed to address a commonly encountered circumstance in the modeling of metabolic networks, where the number of metabolic fluxes (reaction rates exceeds that of metabolites (chemical species. Here, the minimization of model residuals was performed over a subset of the parameter space that is associated with the degrees of freedom in the dynamic flux estimation from the concentration time-slopes. The efficacy of this method was demonstrated using two generalized mass action (GMA models, where the method significantly outperformed single-step estimations. In addition, an extension of the estimation method to handle missing data is also presented. Conclusions The proposed incremental estimation method is able to tackle the issue on the lack of complete parameter identifiability and to significantly reduce the computational efforts in estimating model parameters, which will facilitate kinetic modeling of genome-scale cellular metabolism in the future.
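A toy version of the incremental idea, under invented assumptions: first estimate concentration time-slopes, then solve the linear system dC/dt = S·v for the fluxes at each time point by least squares, instead of fitting all kinetic parameters at once. The two-metabolite, three-flux network below is made up for illustration; because fluxes outnumber metabolites, the system is underdetermined and the minimum-norm solution recovers S·v exactly but not v itself, which is exactly the degrees-of-freedom issue the paper discusses.

```python
import numpy as np

# Stoichiometry: v1 produces A, v2 converts A -> B, v3 drains B (assumed network)
S = np.array([[ 1.0, -1.0,  0.0],
              [ 0.0,  1.0, -1.0]])

t = np.linspace(0.0, 5.0, 51)
v_true = np.vstack([np.full_like(t, 2.0),        # constant input flux
                    1.5 + 0.2*t,                 # slowly rising conversion
                    1.0 + 0.1*t])                # slowly rising drain
dCdt_true = S @ v_true
C = np.cumsum(dCdt_true, axis=1) * (t[1] - t[0]) # crude integration of dC/dt

slopes = np.gradient(C, t, axis=1)               # estimated time-slopes from data
v_hat = np.linalg.pinv(S) @ slopes               # minimum-norm dynamic flux estimate
print(np.round(np.abs(S @ v_hat - slopes).max(), 6))
```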
Estimation of some stochastic models used in reliability engineering
International Nuclear Information System (INIS)
Huovinen, T.
1989-04-01
The work aims to study the estimation of some stochastic models used in reliability engineering. In reliability engineering, continuous probability distributions have been used as models for the lifetime of technical components. We consider here the following distributions: exponential, 2-mixture exponential, conditional exponential, Weibull, lognormal and gamma. The maximum likelihood method is used to estimate distributions from observed data, which may be either complete or censored. We consider models based on homogeneous Poisson processes, such as gamma-Poisson and lognormal-Poisson models, for analysis of failure intensity. We also study a beta-binomial model for analysis of failure probability. The parameters of these models are estimated by the matching-moments method and, in the case of the gamma-Poisson and beta-binomial models, also by the maximum likelihood method. A great deal of the mathematical and statistical problems that arise in reliability engineering can be solved by utilizing point processes. Here we consider the statistical analysis of non-homogeneous Poisson processes to describe the failure behaviour of a set of components with a Weibull intensity function. We use the method of maximum likelihood to estimate the parameters of the Weibull model. A common cause failure can seriously reduce the reliability of a system. We consider a binomial failure rate (BFR) model, as an application of marked point processes, for modelling common cause failures in a system. The parameters of the binomial failure rate model are estimated with the maximum likelihood method
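For the Weibull case, maximum likelihood estimation from complete lifetime data can be sketched directly: the shape parameter solves a fixed-point equation and the scale then has a closed form. The data below are simulated by inverse-transform sampling; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
k_true, lam_true = 1.8, 100.0                     # assumed shape, scale
x = lam_true * (-np.log(rng.uniform(size=5000)))**(1/k_true)  # Weibull samples

# MLE: 1/k = sum(x^k ln x)/sum(x^k) - mean(ln x), solved by fixed-point iteration
k = 1.0                                           # initial guess
for _ in range(500):
    xk = x**k
    k_new = 1.0 / (np.sum(xk*np.log(x))/np.sum(xk) - np.mean(np.log(x)))
    if abs(k_new - k) < 1e-10:
        k = k_new
        break
    k = k_new

lam = np.mean(x**k)**(1/k)                        # closed-form scale MLE given k
print(f"shape ~ {k:.2f}, scale ~ {lam:.1f}")
```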
Ballistic model to estimate microsprinkler droplet distribution
Directory of Open Access Journals (Sweden)
Conceição Marco Antônio Fonseca
2003-01-01
Full Text Available Experimental determination of microsprinkler droplets is difficult and time-consuming. This determination, however, could be achieved using ballistic models. The present study aimed to compare simulated and measured values of microsprinkler droplet diameters. Experimental measurements were made using the flour method, and simulations using a ballistic model adopted by the SIRIAS computational software. Drop diameters quantified in the experiment varied between 0.30 mm and 1.30 mm, while the simulated between 0.28 mm and 1.06 mm. The greatest differences between simulated and measured values were registered at the highest radial distance from the emitter. The model presented a performance classified as excellent for simulating microsprinkler drop distribution.
Weibull Parameters Estimation Based on Physics of Failure Model
DEFF Research Database (Denmark)
Kostandyan, Erik; Sørensen, John Dalsgaard
2012-01-01
Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and Miner's rule. A threshold model is used for degradation modeling and failure criteria determination. The time dependent accumulated damage is assumed linearly proportional to the time dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of the Weibull...
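The damage-accumulation step can be sketched minimally: given cycle amplitudes and counts (as a Rainflow counter would produce), Miner's rule sums the fractional lives consumed and failure is declared when the accumulated damage reaches the threshold of unity. The S-N curve constants and cycle counts below are invented illustration values, not the paper's data.

```python
import numpy as np

# Assumed S-N curve: cycles to failure N_f = C * S**(-m)
C, m = 1e12, 3.0
stress_ranges = np.array([40.0, 60.0, 80.0])   # cycle amplitudes (e.g. MPa)
cycles = np.array([2e5, 5e4, 1e4])             # counted cycles at each amplitude

N_f = C * stress_ranges**(-m)          # cycles to failure at each amplitude
damage = np.sum(cycles / N_f)          # Miner's rule: linear damage accumulation
print(f"accumulated damage D = {damage:.5f}, failed = {damage >= 1.0}")
```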
Cokriging model for estimation of water table elevation
International Nuclear Information System (INIS)
Hoeksema, R.J.; Clapp, R.B.; Thomas, A.L.; Hunley, A.E.; Farrow, N.D.; Dearstone, K.C.
1989-01-01
In geological settings where the water table is a subdued replica of the ground surface, cokriging can be used to estimate the water table elevation at unsampled locations on the basis of values of water table elevation and ground surface elevation measured at wells and at points along flowing streams. The ground surface elevation at the estimation point must also be determined. In the proposed method, separate models are generated for the spatial variability of the water table and ground surface elevation and for the dependence between these variables. After the models have been validated, cokriging or minimum variance unbiased estimation is used to obtain the estimated water table elevations and their estimation variances. For the Pits and Trenches area (formerly a liquid radioactive waste disposal facility) near Oak Ridge National Laboratory, water table estimation along a linear section, both with and without the inclusion of ground surface elevation as a statistical predictor, illustrate the advantages of the cokriging model
Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model
Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami
2017-06-01
A regression model represents the relationship between independent and dependent variables. In the logistic regression model the dependent variable is categorical and the model is used to calculate odds. When the dependent variable has ordered categories, the logistic regression model is ordinal. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to infer population values from a sample. The purpose of this research is the parameter estimation of the GWOLR model using the R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units are 144 villages in Semarang City. The results of the research give a local GWOLR model for each village and the probabilities of the dengue fever patient-count categories.
Linear Regression Models for Estimating True Subsurface ...
Indian Academy of Sciences (India)
47
The objective is to minimize the processing time and computer memory required ... the time to acquire extra GPR or seismic data for large sites and picking the first arrival times to provide the needed datasets for the joint inversion are also ... The data utilized for the regression modelling was acquired from ground...
Linear Regression Models for Estimating True Subsurface ...
Indian Academy of Sciences (India)
47
of the processing time and memory space required to carry out the inversion with the SCLS algorithm. ... consumption of time and memory space for the iterative computations to converge at minimum data ... colour scale and blanking as the observed true resistivity models, for visual assessment. The accuracy ...
Hayashi, Yoshihiro; Otoguro, Saori; Miura, Takahiro; Onuki, Yoshinori; Obata, Yasuko; Takayama, Kozo
2014-01-01
A multivariate statistical technique was applied to clarify the causal correlation between variables in the manufacturing process and the residual stress distribution of tablets. Theophylline tablets were prepared according to a Box-Behnken design using the wet granulation method. Water amounts (X1), kneading time (X2), lubricant-mixing time (X3), and compression force (X4) were selected as design variables. The Drucker-Prager cap (DPC) model was selected as the method for modeling the mechanical behavior of pharmaceutical powders. Simulation parameters, such as Young's modulus, Poisson rate, internal friction angle, plastic deformation parameters, and initial density of the powder, were measured. Multiple regression analysis demonstrated that the simulation parameters were significantly affected by process variables. The constructed DPC models were fed into the analysis using the finite element method (FEM), and the mechanical behavior of pharmaceutical powders during the tableting process was analyzed using the FEM. The results of this analysis revealed that the residual stress distribution of tablets increased with increasing X4. Moreover, an interaction between X2 and X3 also had an effect on shear and the x-axial residual stress of tablets. Bayesian network analysis revealed causal relationships between the process variables, simulation parameters, residual stress distribution, and pharmaceutical responses of tablets. These results demonstrated the potential of the FEM as a tool to help improve our understanding of the residual stress of tablets and to optimize process variables, which not only affect tablet characteristics, but also are risks of causing tableting problems.
Genetic Prediction Models and Heritability Estimates for Functional ...
African Journals Online (AJOL)
This paper discusses these methodologies and their advantages and disadvantages. Heritability estimates obtained from these models are also reviewed. Linear methodologies can model binary and actual longevity, while RR and TM methodologies model binary survival. PH procedures model the hazard function of a cow ...
System Estimation of Panel Data Models under Long-Range Dependence
DEFF Research Database (Denmark)
Ergemen, Yunus Emre
A general dynamic panel data model is considered that incorporates individual and interactive fixed effects allowing for contemporaneous correlation in model innovations. The model accommodates general stationary or nonstationary long-range dependence through interactive fixed effects, and is estimated using conditional-sum-of-squares criteria based on projected series by which latent characteristics are proxied. Resulting estimates are consistent and asymptotically normal at standard parametric rates. A simulation study supports the reliability of the estimation method. The method is then applied...
International Nuclear Information System (INIS)
Doi, M.; Lagarde, F.
1996-12-01
Effective dose per unit radon progeny exposure to Swedish population in 1992 is estimated by the risk projection model based on the Swedish epidemiological study of radon and lung cancer. The resulting values range from 1.29 - 3.00 mSv/WLM and 2.58 - 5.99 mSv/WLM, respectively. Assuming a radon concentration of 100 Bq/m³, an equilibrium factor of 0.4 and an occupancy factor of 0.6 in Swedish houses, the annual effective dose for the Swedish population is estimated to be 0.43 - 1.98 mSv/year, which should be compared to the value of 1.9 mSv/year, according to the UNSCEAR 1993 report. 27 refs, tabs, figs
Energy Technology Data Exchange (ETDEWEB)
Doi, M. [National Inst. of Radiological Sciences, Chiba (Japan); Lagarde, F. [Karolinska Inst., Stockholm (Sweden). Inst. of Environmental Medicine; Falk, R.; Swedjemark, G.A. [Swedish Radiation Protection Inst., Stockholm (Sweden)
1996-12-01
Effective dose per unit radon progeny exposure to Swedish population in 1992 is estimated by the risk projection model based on the Swedish epidemiological study of radon and lung cancer. The resulting values range from 1.29 - 3.00 mSv/WLM and 2.58 - 5.99 mSv/WLM, respectively. Assuming a radon concentration of 100 Bq/m³, an equilibrium factor of 0.4 and an occupancy factor of 0.6 in Swedish houses, the annual effective dose for the Swedish population is estimated to be 0.43 - 1.98 mSv/year, which should be compared to the value of 1.9 mSv/year, according to the UNSCEAR 1993 report. 27 refs, tabs, figs.
Explicit estimating equations for semiparametric generalized linear latent variable models
Ma, Yanyuan
2010-07-05
We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.
Model for estimating of population abundance using line transect sampling
Abdulraqeb Abdullah Saeed, Gamil; Muhammad, Noryanti; Zun Liang, Chuan; Yusoff, Wan Nur Syahidah Wan; Zuki Salleh, Mohd
2017-09-01
Today, many studies use nonparametric methods for estimating object abundance; for their simplicity, however, parametric methods are widely used by biometricians. This paper presents a proposed model for estimating population abundance using the line transect technique. The proposed model is appealing because it is strictly monotonically decreasing with perpendicular distance and it satisfies the shoulder condition. The statistical properties and inference of the proposed model are discussed. Theoretically, the proposed detection function satisfies the line transect assumptions, which leads us to study the performance of this model. We use this model as a reference for future research on density estimation. In this paper we also study the assumptions on the detection function and introduce the corresponding model in order to apply it in simulations in future work.
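As a standard parametric reference point (not the paper's new model), the half-normal detection function g(x) = exp(-x²/(2σ²)) satisfies the shoulder condition g'(0) = 0, and line-transect density follows as D = n / (2·L·μ) with effective strip half-width μ = ∫g(x)dx = σ·√(π/2). The transect length and σ below are invented values.

```python
import numpy as np

rng = np.random.default_rng(8)
sigma_true, L = 10.0, 500.0                      # metres; illustrative values

# Simulated perpendicular detection distances under the half-normal model
x = np.abs(rng.normal(0, sigma_true, 300))
n = x.size

sigma2_hat = np.mean(x**2)                       # half-normal MLE of sigma^2
mu_hat = np.sqrt(np.pi * sigma2_hat / 2)         # effective strip half-width
D_hat = n / (2 * L * mu_hat)                     # objects per square metre
print(f"sigma ~ {np.sqrt(sigma2_hat):.1f} m, D ~ {D_hat:.4f} /m^2")
```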
Battery Calendar Life Estimator Manual Modeling and Simulation
Energy Technology Data Exchange (ETDEWEB)
Jon P. Christophersen; Ira Bloom; Ed Thomas; Vince Battaglia
2012-10-01
The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
Estimating Dynamic Equilibrium Models using Macro and Financial Data
DEFF Research Database (Denmark)
Christensen, Bent Jesper; Posch, Olaf; van der Wel, Michel
We show that including financial market data at daily frequency, along with macro series at standard lower frequency, facilitates statistical inference on structural parameters in dynamic equilibrium models. Our continuous-time formulation conveniently accounts for the difference in observation frequency. We suggest two approaches for the estimation of structural parameters. The first is a simple regression-based procedure for estimation of the reduced-form parameters of the model, combined with a minimum-distance method for identifying the structural parameters. The second approach uses martingale estimating functions to estimate the structural parameters directly through a non-linear optimization scheme. We illustrate both approaches by estimating the stochastic AK model with mean-reverting spot interest rates. We also provide Monte Carlo evidence on the small sample behavior ...
Information matrix estimation procedures for cognitive diagnostic models.
Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei
2018-03-06
Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
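The contrast between the inverse observed information and the sandwich estimator can be shown on a toy scalar problem (a Poisson mean), not the CDM-specific estimators of the paper; under correct specification the two agree asymptotically, and the sandwich form A⁻¹BA⁻¹ stays valid under misspecification.

```python
def poisson_sandwich(xs):
    """Observed-information and sandwich variance estimates for a Poisson mean.

    Per observation: score s_i = x_i/lam - 1, observed information x_i/lam**2.
    Sandwich: V = B / A**2 (scalar case of A^{-1} B A^{-1}),
    with A = summed information and B = summed squared scores.
    """
    n = len(xs)
    lam = sum(xs) / n                          # MLE of the Poisson mean
    A = sum(x / lam**2 for x in xs)            # observed information
    B = sum((x / lam - 1.0)**2 for x in xs)    # cross-product of scores
    return lam, 1.0 / A, B / A**2

xs = [0, 1, 2, 3, 2, 1, 0, 2, 4, 5]
lam, v_info, v_sand = poisson_sandwich(xs)
```

Here v_sand exceeds v_info because the sample is overdispersed relative to the Poisson fit, which is exactly the situation where the abstract reports the sandwich form remaining robust.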
Ecotoxicological effects extrapolation models
Energy Technology Data Exchange (ETDEWEB)
Suter, G.W. II
1996-09-01
One of the central problems of ecological risk assessment is modeling the relationship between test endpoints (numerical summaries of the results of toxicity tests) and assessment endpoints (formal expressions of the properties of the environment that are to be protected). For example, one may wish to estimate the reduction in species richness of fishes in a stream reach exposed to an effluent and have only a fathead minnow 96 hr LC50 as an effects metric. The problem is to extrapolate from what is known (the fathead minnow LC50) to what matters to the decision maker, the loss of fish species. Models used for this purpose may be termed Effects Extrapolation Models (EEMs) or Activity-Activity Relationships (AARs), by analogy to Structure-Activity Relationships (SARs). These models have been previously reviewed in Ch. 7 and 9 of and by an OECD workshop. This paper updates those reviews and attempts to further clarify the issues involved in the development and use of EEMs. Although there is some overlap, this paper does not repeat those reviews and the reader is referred to the previous reviews for a more complete historical perspective, and for treatment of additional extrapolation issues.
E-model MOS Estimate Improvement through Jitter Buffer Packet Loss Modelling
Directory of Open Access Journals (Sweden)
Adrian Kovac
2011-01-01
Full Text Available The proposed article analyses the dependence of MOS, as a voice call quality (QoS) measure, estimated through the ITU-T E-model under real network conditions with jitter. In this paper, a method of modelling the jitter effect is proposed. Jitter, as voice packet time uncertainty, appears as increased packet loss caused by jitter-buffer under- or overflow. Jitter buffer behaviour at the receiver's side is modelled as a Pareto/D/1/K system with Pareto-distributed packet interarrival times, and its performance is experimentally evaluated using statistical tools. The jitter buffer stochastic model is then incorporated into the E-model in an additive manner, accounting for network jitter effects via excess packet loss complementing the measured network packet loss. The proposed modification of the E-model input parameter adds two degrees of freedom in modelling: network jitter and jitter buffer size.
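The additive treatment of loss can be made concrete with a minimal G.107-style sketch. The defaults R0 = 93.2 and Bpl = 25.1 (a common codec packet-loss robustness value) are assumptions of this sketch, not values from the article, and the Pareto buffer model itself is not reproduced here — only the point where its excess loss enters the E-model.

```python
def total_loss(network_loss_pct, buffer_loss_pct):
    """Additive combination of measured network loss and the excess loss
    induced by jitter-buffer under-/overflow (both in percent)."""
    return network_loss_pct + buffer_loss_pct

def r_factor(ppl_pct, ie=0.0, bpl=25.1, r0=93.2):
    """Simplified ITU-T G.107 R-factor keeping only the packet-loss
    impairment term Ie_eff = Ie + (95 - Ie) * Ppl / (Ppl + Bpl)."""
    ie_eff = ie + (95.0 - ie) * ppl_pct / (ppl_pct + bpl)
    return r0 - ie_eff

def mos(r):
    """Standard G.107 mapping from R-factor to MOS."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6
```

For example, `mos(r_factor(total_loss(1.0, 2.5)))` gives the MOS after adding 2.5% buffer-induced loss on top of 1% network loss.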
Estimation of effective dose during hysterosalpingography procedures
International Nuclear Information System (INIS)
Alzimamil, K.; Babikir, E.; Alkhorayef, M.; Sulieman, A.; Alsafi, K.; Omer, H.
2014-08-01
Hysterosalpingography (HSG) is the most frequently used diagnostic tool to evaluate the endometrial cavity and fallopian tubes by using conventional x-ray or fluoroscopy. Determination of patient radiation dose values from x-ray examinations provides useful guidance on where best to concentrate efforts on patient dose reduction in order to optimize the protection of patients. The aims of this study were to measure patients' entrance surface air kerma (ESAK) doses and effective doses, and to compare practices between different hospitals in Sudan. ESAK was measured for patients using calibrated thermoluminescent dosimeters (TLDs, GR-200A). Effective doses were estimated using National Radiological Protection Board (NRPB) software. This study was conducted in five radiological departments: two teaching hospitals (A and D), two private hospitals (B and C) and one university hospital (E). The mean ESD was 20.1 mGy, 28.9 mGy, 13.6 mGy, 58.65 mGy, 35.7, 22.4 and 19.6 mGy for the hospitals, respectively. The mean effective dose was 2.4 mSv, 3.5 mSv, 1.6 mSv, 7.1 mSv and 4.3 mSv in the same order. The study showed wide variations in the ESDs, with three of the hospitals having values above the internationally reported values. Number of x-ray images, fluoroscopy time, operator skills, x-ray machine type and clinical complexity of the procedures were shown to be major contributors to the variations reported. The results demonstrated the need for standardization of technique throughout the hospitals and suggest that there is a need to optimize the procedures. Local DRLs were proposed for the procedures. (author)
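Local diagnostic reference levels (DRLs) are conventionally set at the third quartile of the observed dose distribution. The sketch below applies that common convention (not necessarily this paper's exact procedure) to the five mean effective doses quoted in the abstract.

```python
def percentile(values, q):
    """Linear-interpolation percentile, q in [0, 1]."""
    xs = sorted(values)
    pos = q * (len(xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    frac = pos - lo
    return xs[lo] * (1.0 - frac) + xs[hi] * frac

# Mean effective doses (mSv) for hospitals A-E as quoted in the abstract
doses = [2.4, 3.5, 1.6, 7.1, 4.3]
local_drl = percentile(doses, 0.75)   # 75th-percentile DRL convention
```

Hospitals whose mean dose exceeds `local_drl` would then be the first candidates for technique review.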
Consistent estimation of linear panel data models with measurement error
Meijer, Erik; Spierdijk, Laura; Wansbeek, Thomas
2017-01-01
Measurement error causes a bias towards zero when estimating a panel data linear regression model. The panel data context offers various opportunities to derive instrumental variables allowing for consistent estimation. We consider three sources of moment conditions: (i) restrictions on the
The Development of an Empirical Model for Estimation of the ...
African Journals Online (AJOL)
Nassiri P
rate, daily water consumption, smoking habits, drugs that interfere with the thermoregulatory processes, and exposure to other harmful agents. Conclusions: Eventually, based on the criteria, a model for estimation of the workers' sensitivity to heat stress was presented for the first time, by which the sensitivity is estimated in ...
Asymptotics for Estimating Equations in Hidden Markov Models
DEFF Research Database (Denmark)
Hansen, Jørgen Vinsløv; Jensen, Jens Ledet
Results on asymptotic normality for the maximum likelihood estimate in hidden Markov models are extended in two directions. The stationarity assumption is relaxed, which allows for a covariate process influencing the hidden Markov process. Furthermore a class of estimating equations is considered...
Performances of estimators of linear auto-correlated error model ...
African Journals Online (AJOL)
The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, the Ordinary Least Squares (OLS) compares favourably with the Generalized least Squares (GLS) estimators in ...
Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model
DEFF Research Database (Denmark)
Åberg, Andreas; Widd, Anders; Abildskov, Jens
2016-01-01
A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the parameter estimation of the kinetic parameters, which can be time consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests...
Estimation for the Multiple Factor Model When Data Are Missing.
Finkbeiner, Carl
1979-01-01
A maximum likelihood method of estimating the parameters of the multiple factor model when data are missing from the sample is presented. A Monte Carlo study compares the method with five heuristic methods of dealing with the problem. The present method shows some advantage in accuracy of estimation. (Author/CTM)
Parameter Estimation for a Computable General Equilibrium Model
DEFF Research Database (Denmark)
Arndt, Channing; Robinson, Sherman; Tarp, Finn
We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...
Inverse Gaussian model for small area estimation via Gibbs sampling
African Journals Online (AJOL)
We present a Bayesian method for estimating small area parameters under an inverse Gaussian model. The method is extended to estimate small area parameters for finite populations. The Gibbs sampler is proposed as a mechanism for implementing the Bayesian paradigm. We illustrate the method by application to ...
Person Appearance Modeling and Orientation Estimation using Spherical Harmonics
Liem, M.C.; Gavrila, D.M.
2013-01-01
We present a novel approach for the joint estimation of a person's overall body orientation, 3D shape and texture, from overlapping cameras. Overall body orientation (i.e. rotation around torso major axis) is estimated by minimizing the difference between a learned texture model in a canonical
Performances of estimators of linear model with auto-correlated ...
African Journals Online (AJOL)
A Monte Carlo study of the small-sample properties of five estimators of a linear model with autocorrelated error terms is discussed. The independent variable was specified as standard normal data. The estimators of the slope coefficients β, with the help of Ordinary Least Squares (OLS), increased with increased ...
Parameter Estimation for a Computable General Equilibrium Model
DEFF Research Database (Denmark)
Arndt, Channing; Robinson, Sherman; Tarp, Finn
2002-01-01
We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...
Review Genetic prediction models and heritability estimates for ...
African Journals Online (AJOL)
edward
2015-05-09
... cattle in South Africa. Linear models, random regression (RR) models, threshold models (TMs) and ... Heritability for longevity has been estimated with TMs in Canadian Holsteins (Boettcher et al., 1999), Spanish ... simulation to incorporate the tri-gamma function (γ) as used by Sasaki et al. (2012) and ...
On mixture model complexity estimation for music recommender systems
Balkema, W.; van der Heijden, Ferdinand; Meijerink, B.
2006-01-01
Content-based music navigation systems are in need of robust music similarity measures. Current similarity measures model each song with the same model parameters. We propose methods to efficiently estimate the required number of model parameters of each individual song. First results of a study on
Parameter estimation of electricity spot models from futures prices
Aihara, ShinIchi; Bagchi, Arunabha; Imreizeeq, E.S.N.; Walter, E.
We consider a slight perturbation of the Schwartz-Smith model for the electricity futures prices and the resulting modified spot model. Using the martingale property of the modified price under the risk neutral measure, we derive the arbitrage free model for the spot and futures prices. We estimate
Parameter Estimation for the Thurstone Case III Model.
Mackay, David B.; Chaiy, Seoil
1982-01-01
The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)
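Under Case V (equal, uncorrelated discriminal dispersions), scale values can be recovered from a matrix of paired-comparison proportions by averaging unit-normal deviates — the textbook recovery, shown here on an invented transitive matrix, not the paper's Monte Carlo design.

```python
from statistics import NormalDist

def thurstone_case_v(pref):
    """Thurstone Case V scale values from a matrix of preference
    proportions, pref[i][j] = share of judges preferring item i to j.
    Scale value of item i = row mean of the deviates z_ij = inv_cdf(p_ij);
    the diagonal p_ii = 0.5 maps to z = 0 and drops out."""
    inv = NormalDist().inv_cdf
    k = len(pref)
    z = [[inv(pref[i][j]) for j in range(k)] for i in range(k)]
    return [sum(row) / k for row in z]

# Perfectly transitive toy proportions for three items
pref = [[0.50, 0.84, 0.98],
        [0.16, 0.50, 0.84],
        [0.02, 0.16, 0.50]]
scores = thurstone_case_v(pref)
```

The recovered scores are centered at zero, so only their differences are meaningful — which is also why recovery criteria in the paper compare relative, not absolute, scale values.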
Carbon footprint estimator, phase II : volume I - GASCAP model.
2014-03-01
The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG : emissions associated with the construction and maintenance of transportation projects. This phase : of development included techniques for estimating emiss...
Parameter estimation in stochastic rainfall-runoff models
DEFF Research Database (Denmark)
Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur
2006-01-01
A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all ... For a comparison, the parameters are also estimated by an output error method, where the sum of squared simulation errors is minimized. The former methodology is optimal for short-term prediction whereas the latter is optimal for simulations. Hence, depending on the purpose, it is possible to select whether the parameter values are optimal for simulation or prediction. The data originate from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series ...
Fundamental Frequency and Model Order Estimation Using Spatial Filtering
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
In signal processing applications involving harmonic-structured signals, estimates of the fundamental frequency and number of harmonics are often necessary. In real scenarios, a desired signal is contaminated by different levels of noise and interferers, which complicate the estimation of the signal ... We extend this procedure to account for inharmonicity using unconstrained model order estimation. The simulations show that beamforming improves the performance of the joint estimates of fundamental frequency and the number of harmonics at low signal-to-interference (SIR) levels, and an experiment on a trumpet signal shows the applicability to real signals.
Context Tree Estimation in Variable Length Hidden Markov Models
Dumont, Thierry
2011-01-01
We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains, 1990) and E. Gassiat and S. Boucheron (Optimal error exp...
Estimation of Site Effects in Beijing City
Ding, Z.; Chen, Y. T.; Panza, G. F.
For the realistic modeling of the seismic ground motion in lateral heterogeneous anelastic media, the database of 3-D geophysical structures for Beijing City has been built up to model the seismic ground motion in the City, caused by the 1976 Tangshan and the 1998 Zhangbei earthquakes. The hybrid method, which combines the modal summation and the finite-difference algorithms, is used in the simulation. The modeling of the seismic ground motion, for both the Tangshan and the Zhangbei earthquakes, shows that the thick Quaternary sedimentary cover amplifies the peak values and increases the duration of the seismic ground motion in the northwestern part of the City. Therefore the thickness of the Quaternary sediments in Beijing City is the key factor controlling the local ground effects. Four zones are defined on the basis of the different thickness of the Quaternary sediments. The response spectra for each zone are computed, indicating that peak spectral values as high as 0.1 g are compatible with past seismicity and can be well exceeded if an event similar to the 1697 Sanhe-Pinggu occurs.
Estimation of site effects in Beijing City
International Nuclear Information System (INIS)
Ding, Z.; Chen, Y.T.; Panza, G.F.
2002-01-01
For the realistic modeling of the seismic ground motion in lateral heterogeneous anelastic media, the database of 3-D geophysical structures for Beijing City has been built up to model the seismic ground motion in the City, caused by the 1976 Tangshan and the 1998 Zhangbei earthquakes. The hybrid method, which combines the modal summation and the finite difference algorithms, is used in the simulation. The modeling of the seismic ground motion for both the Tangshan and the Zhangbei earthquakes shows that the thick Quaternary sedimentary cover amplifies the peak values and increases the duration of the seismic ground motion in the northwestern part of the City. Therefore the thickness of the Quaternary sediments in Beijing City is the key factor that controls the local ground effects, and four zones are defined on the basis of the different thickness of the Quaternary sediments. The response spectra for each zone are computed, indicating that peak spectral values as high as 0.1 g are compatible with past seismicity and can be well exceeded if an event similar to the 1697 Sanhe-Pinggu occurs. (author)
Estimation models of variance components for farrowing interval in swine
Directory of Open Access Journals (Sweden)
Aderbal Cavalcante Neto
2009-02-01
Full Text Available The main objective of this study was to evaluate the importance of including maternal genetic, common litter environmental and permanent environmental effects in estimation models of variance components for the farrowing interval trait in swine. Data consisting of 1,013 farrowing intervals of Dalland (C-40) sows recorded in two herds were analyzed. Variance components were obtained by the derivative-free restricted maximum likelihood method. Eight models were tested which contained the fixed effects (contemporary group and covariables) and the direct additive genetic and residual effects, and varied regarding the inclusion of the maternal genetic, common litter environmental, and/or permanent environmental random effects. The likelihood-ratio test indicated that the inclusion of these effects in the model was unnecessary, but the inclusion of the permanent environmental effect caused changes in the estimates of heritability, which varied from 0.00 to 0.03. In conclusion, the heritability values obtained indicated that this trait appears to present no genetic gain in response to selection. The common litter environmental and maternal genetic effects did not present any influence on this trait. The permanent environmental effect, however, should be considered in genetic models for this trait in swine, because its presence caused changes in the additive genetic variance estimates.
A model for estimating carbon accumulation in cork products
Directory of Open Access Journals (Sweden)
Ana C. Dias
2014-08-01
Full Text Available Aim of study: This study aims to develop a calculation model for estimating carbon accumulation in cork products, both whilst in use and when in landfills, and to apply the model to Portugal as an example. Area of study: The model is applicable worldwide and the case study is Portugal. Material and methods: The model adopts a flux-data method based on a lifetime analysis and quantifies carbon accumulation in cork products according to three approaches that differ in how carbon stocks (or emissions) are allocated to cork product consuming and producing countries. These approaches are: stock-change, production and atmospheric-flow. The effect on the carbon balance of methane emissions from the decay of cork products in landfills is also evaluated. Main results: The model was applied to Portugal and the results show that carbon accumulation in cork products in the period between 1990 and 2010 varied between 24 and 92 Gg C year⁻¹. The atmospheric-flow approach provided the highest carbon accumulation over the whole period due to the net export of carbon in cork products. The production approach ranked second because exported cork products were mainly manufactured from domestically produced cork. The net carbon balance in cork products was also a net carbon accumulation under all the approaches, ranging from 5 to 81 Gg C eq year⁻¹. Research highlights: The developed model can be applied to other countries and may be a step forward towards considering carbon accumulation in cork products in national greenhouse gas inventories, as well as in future climate agreements. Keywords: Atmospheric-flow approach; Greenhouse gas balance; Modelling; Production approach; Stock-change approach.
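The flux-data/lifetime idea can be illustrated with a first-order-decay stock model of the kind used in IPCC harvested-wood-product accounting — a generic sketch, not necessarily the paper's exact lifetime formulation; the inflow and half-life values below are invented for illustration.

```python
import math

def stock_series(annual_inflow, half_life_years):
    """First-order-decay carbon stock in products: each year the existing
    stock decays by exp(-k), k = ln(2)/half-life, then the year's inflow
    of carbon (e.g. Gg C) enters the product pool."""
    k = math.log(2.0) / half_life_years
    stock, series = 0.0, []
    for inflow in annual_inflow:
        stock = stock * math.exp(-k) + inflow
        series.append(stock)
    return series

# Hypothetical constant inflow of 10 Gg C/year into cork products, 25-year half-life
series = stock_series([10.0] * 400, 25.0)
steady = 10.0 / (1.0 - math.exp(-math.log(2.0) / 25.0))  # geometric-series limit
```

The year-to-year difference `series[t] - series[t-1]` is the net annual accumulation, the quantity the stock-change approach reports; under constant inflow it tends to zero as the pool saturates.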
Estimating growth of SMES using a logit model: Evidence from manufacturing companies in Italy
Directory of Open Access Journals (Sweden)
Amith Vikram Megaravalli
2017-03-01
Full Text Available In this paper, an effort has been made to develop a model for estimating growth based on logit regression and to apply the model to Italian manufacturing companies. Our data set consists of 8,232 SMEs in Italy. To estimate firm growth, an innovative approach is adopted that considers annual statements issued in the year before the accelerated growth as effective estimators of firm growth. The results of the logit show that return on assets, log(cash flow) and log(inventory) contribute positively to estimating the growth of high-growth firms, whereas working capital turnover times contributes negatively. The discriminant power of the model, measured by the Receiver Operating Characteristic curve, is 72.35%, which means the model is fair in terms of estimating growth.
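The pipeline described — fit a logit on prior-year accounting ratios, then judge discriminant power by the area under the ROC curve — can be sketched end to end. The data below are synthetic (the variable names `roa` and `wct` stand in for return on assets and working-capital turnover; none of the paper's 8,232 firms are reproduced), and the coefficient signs are wired to match the abstract's findings.

```python
import numpy as np

def fit_logit(X, y, lr=0.05, iters=5000):
    """Logistic regression by plain gradient ascent, with intercept."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def predict(X, w):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def auc(scores, y):
    """ROC area via the Mann-Whitney rank-sum identity (no ties assumed)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2.0) / (n1 * n0)

# Synthetic pre-acceleration ratios: ROA helps growth, turnover times hurt it
rng = np.random.default_rng(42)
n = 500
roa = rng.normal(0.05, 0.05, n)
wct = rng.normal(5.0, 2.0, n)
latent = 20.0 * roa - 0.5 * wct + rng.normal(0.0, 1.0, n)
y = (latent > np.median(latent)).astype(float)

X = np.column_stack([roa, wct])
w = fit_logit(X, y)
a = auc(predict(X, w), y)
```

With an AUC in the 0.7-0.8 range the model would be rated "fair", matching the abstract's reading of its 72.35% figure.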
Optimal difference-based estimation for partially linear models
Zhou, Yuejin
2017-12-16
Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
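The simplest member of the family this paper generalizes is the first-order (Rice-type) difference estimator of the residual variance: differencing nearly cancels a smooth trend while preserving the noise. A sketch on synthetic data, not the paper's optimal-sequence estimator:

```python
import math, random

def rice_variance(y):
    """First-order difference-based residual variance estimate:
    sum of squared successive differences over 2(n-1). The smooth
    component nearly cancels in the differences, leaving the noise."""
    d2 = sum((y[i + 1] - y[i]) ** 2 for i in range(len(y) - 1))
    return d2 / (2.0 * (len(y) - 1))

# Smooth signal plus N(0, 0.5^2) noise; true residual variance is 0.25
random.seed(7)
n = 4000
y = [math.sin(4.0 * math.pi * i / n) + random.gauss(0.0, 0.5) for i in range(n)]
est = rice_variance(y)
```

The paper's point is precisely that this default difference sequence is an extreme case: choosing the sequence adaptively (e.g. by cross-validation) lowers the mean squared error, especially for small samples and rough trends.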
Capacitance Online Estimation Based on Adaptive Model Observer
Directory of Open Access Journals (Sweden)
Cen Zhaohui
2016-01-01
Full Text Available As basic components in electrical and electronic devices, capacitors are very common in electrical circuits. Conventional capacitors such as electrolytic capacitors are prone to degradation, aging and fatigue due to long-time operation and external damage such as mechanical and electrical stresses. In this paper, a novel online capacitance measurement/estimation approach is proposed. Firstly, an Adaptive Model Observer (AMO) is designed based on the capacitor's circuit equations. Secondly, the AMO's stability and convergence are analysed and discussed. Finally, capacitors with different capacitances and different initial voltages in a buck converter topology are tested and validated. Simulation results demonstrate the effectiveness and superiority of the proposed approach.
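A generic adaptive model observer for a single capacitor can be sketched as follows — a textbook gradient-type observer on an ideal capacitor, not the paper's buck-converter AMO; all gains and component values are invented for illustration.

```python
import math

def estimate_capacitance(c_true=1e-3, dt=1e-4, t_end=2.0, k=50.0, gamma=1e3):
    """Adaptive model observer for a capacitor fed by a known current i(t).

    Plant:      dv/dt  = i / C              (ideal capacitor, v measurable)
    Observer:   dvh/dt = th * i + k * (v - vh)
    Adaptation: dth/dt = gamma * i * (v - vh)
    Under persistent excitation th converges to 1/C, so 1/th estimates C.
    """
    v = vh = 0.0
    th = 1.0 / 2e-3                 # deliberately wrong initial guess: 2 mF
    t = 0.0
    while t < t_end:
        i = math.sin(2.0 * math.pi * 5.0 * t)   # exciting input current
        e = v - vh
        v += dt * i / c_true        # simulate the plant (forward Euler)
        vh += dt * (th * i + k * e)  # observer state
        th += dt * gamma * i * e     # parameter adaptation law
        t += dt
    return 1.0 / th

c_hat = estimate_capacitance()      # should recover ~1 mF
```

The sinusoidal input supplies the persistent excitation the adaptation law needs; with a constant current the parameter error would not be identifiable from the output error alone.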
An improved model for estimating pesticide emissions for agricultural LCA
DEFF Research Database (Denmark)
Dijkman, Teunis Johannes; Birkved, Morten; Hauschild, Michael Zwicky
2011-01-01
Credible quantification of chemical emissions in the inventory phase of Life Cycle Assessment (LCA) is crucial since chemicals are the dominating cause of the human and ecotoxicity-related environmental impacts in Life Cycle Impact Assessment (LCIA). When applying LCA for assessment of agricultural products, off-target pesticide emissions need to be quantified as accurately as possible because of the considerable toxicity effects associated with chemicals designed to have a high impact on biological organisms such as insects or weed plants. PestLCI was developed to estimate the fractions ... To overcome these limitations, a reworked and updated version of PestLCI is presented here. The new model includes 16 European climate types and 6 mean European soil characteristic profiles covering all dominant European soil types, to widen the geographical scope and to allow contemporary (varying site ...
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
Models for estimating macronutrients in Mimosa scabrella Bentham
Directory of Open Access Journals (Sweden)
Saulo Jorge Téo
2010-09-01
Full Text Available The aim of this work was to adjust and test different statistical models for estimating macronutrient content in the above-ground biomass of bracatinga (Mimosa scabrella Bentham). The data were collected from 25 bracatinga trees, all native to the north of the metropolitan region of Curitiba, Paraná state, Brazil. To determine the biomass and macronutrient content, the trees were separated into the compartments leaves, branches < 4 cm, branches ≥ 4 cm, wood and stem bark. Different statistical models were adjusted to estimate N, P, K, Ca and Mg contents in the tree compartments, using dendrometric variables as the model independent variables. Based on the results, the equations developed for estimating macronutrient contents were, in general, satisfactory. The most accurate estimates were obtained for the stem biomass compartments and the sum of the biomass compartments. In some cases, the equations performed better when crown and stem dimensions, age and dominant height were included as independent variables.
Wherry, Susan A.; Wood, Tamara M.
2018-04-27
A whole lake eutrophication (WLE) model approach for phosphorus and cyanobacterial biomass in Upper Klamath Lake, south-central Oregon, is presented here. The model is a successor to a previous model developed to inform a Total Maximum Daily Load (TMDL) for phosphorus in the lake, but is based on net primary production (NPP), which can be calculated from dissolved oxygen, rather than scaling up a small-scale description of cyanobacterial growth and respiration rates. This phase 3 WLE model is a refinement of the proof-of-concept developed in phase 2, which was the first attempt to use NPP to simulate cyanobacteria in the TMDL model. The calibration of the calculated NPP WLE model was successful, with performance metrics indicating a good fit to calibration data, and the calculated NPP WLE model was able to simulate mid-season bloom decreases, a feature that previous models could not reproduce. In order to use the model to simulate future scenarios based on phosphorus load reduction, a multivariate regression model was created to simulate NPP as a function of the model state variables (phosphorus and chlorophyll a) and measured meteorological and temperature model inputs. The NPP time series was split into a low- and high-frequency component using wavelet analysis, and regression models were fit to the components separately, with moderate success. The regression models for NPP were incorporated in the WLE model, referred to as the "scenario" WLE (SWLE), and the fit statistics for phosphorus during the calibration period were mostly unchanged. The fit statistics for chlorophyll a, however, were degraded. These statistics are still an improvement over prior models, and indicate that the SWLE is appropriate for long-term predictions even though it misses some of the seasonal variations in chlorophyll a. The complete whole lake SWLE model, with multivariate regression to predict NPP, was used to make long-term simulations of the response to 10-, 20-, and 40-percent
Improving the realism of hydrologic model through multivariate parameter estimation
Rakovec, Oldrich; Kumar, Rohini; Attinger, Sabine; Samaniego, Luis
2017-04-01
Increased availability and quality of near real-time observations should improve understanding of the predictive skill of hydrological models. Recent studies have shown the limited capability of river discharge data alone to adequately constrain different components of distributed model parameterizations. In this study, the GRACE satellite-based total water storage (TWS) anomaly is used to complement the discharge data with the aim of improving the fidelity of the mesoscale hydrologic model (mHM) through multivariate parameter estimation. The study is conducted in 83 European basins covering a wide range of hydro-climatic regimes. The model parameterization complemented with the TWS anomalies leads to statistically significant improvements in (1) discharge simulations during low-flow periods, and (2) evapotranspiration estimates, which are evaluated against independent (FLUXNET) data. Overall, there is no significant deterioration in model performance for the discharge simulations when complemented by information from the TWS anomalies. However, considerable changes in the partitioning of precipitation into runoff components are noticed with the in-/exclusion of TWS during the parameter estimation. A cross-validation test carried out to assess the transferability and robustness of the calibrated parameters to other locations further confirms the benefit of complementary TWS data. In particular, the evapotranspiration estimates show more robust performance when TWS data are incorporated during the parameter estimation, in comparison with the benchmark model constrained against discharge only. This study highlights the value of incorporating multiple data sources during parameter estimation to improve the overall realism of a hydrologic model and its applications over large domains. Rakovec, O., Kumar, R., Attinger, S. and Samaniego, L. (2016): Improving the realism of hydrologic model functioning through multivariate parameter estimation. Water Resour. Res., 52, http://dx.doi.org/10
Estimating Effects of Species Interactions on Populations of Endangered Species.
Roth, Tobias; Bühler, Christoph; Amrhein, Valentin
2016-04-01
Global change causes community composition to change considerably through time, with ever-new combinations of interacting species. To study the consequences of newly established species interactions, one available source of data could be observational surveys from biodiversity monitoring. However, approaches using observational data would need to account for niche differences between species and for imperfect detection of individuals. To estimate population sizes of interacting species, we extended the N-mixture models that were developed to estimate true population sizes in single species. Simulations revealed that our model is able to disentangle direct effects of dominant on subordinate species from indirect effects of dominant species on the detection probability of subordinate species. For illustration, we applied our model to data from a Swiss amphibian monitoring program and showed that sizes of expanding water frog populations were negatively related to population sizes of endangered yellow-bellied toads and common midwife toads, and partly to those of natterjack toads. Unlike other studies that analyzed presence and absence of species, our model suggests that the spread of water frogs in Central Europe is one of the reasons for the decline of endangered toad species. Thus, studying the impacts of dominant species on population sizes of endangered species using data from biodiversity monitoring programs should help to inform conservation policy and to decide whether competing species should be subject to population management.
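The single-species N-mixture machinery that the authors extend can be sketched in a few lines: latent site abundances are marginalized out of a Binomial detection model. All values below (sites, visits, abundance, detection probability) are made-up illustrations, not the multi-species model from the paper.

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate counts at R sites over T visits: true abundance N_i ~ Poisson(lam),
# observed counts y_it ~ Binomial(N_i, p) under imperfect detection p.
R, T, lam_true, p_true = 100, 4, 5.0, 0.5
N = rng.poisson(lam_true, R)
y = rng.binomial(N[:, None], p_true, (R, T))

def neg_loglik(theta, y, K=60):
    lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
    n = np.arange(K + 1)                     # marginalize latent N over 0..K
    prior = poisson.pmf(n, lam)              # P(N = n)
    ll = 0.0
    for yi in y:                             # per-site marginal likelihood
        like_n = np.prod(binom.pmf(yi[:, None], n[None, :], p), axis=0)
        ll += np.log(like_n @ prior + 1e-300)
    return -ll

res = minimize(neg_loglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
lam_hat, p_hat = np.exp(res.x[0]), 1 / (1 + np.exp(-res.x[1]))
print(round(lam_hat, 1), round(p_hat, 2))
```

The paper's extension adds dominant-species covariates to both the abundance and detection components; the marginalization step stays the same.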
Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution
Directory of Open Access Journals (Sweden)
Emmanuel Kidando
2017-01-01
Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. The literature indicates that finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of the travel time distribution to an unbounded number of lognormal components. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with a stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. The Markov Chain Monte Carlo (MCMC) sampling technique was then employed to estimate the parameters' posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in accounting for complex mixture distributions of the travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of the onset and offset of congestion periods.
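The stick-breaking construction behind the DPMM can be sketched as follows; the truncation level matches the study's six components, but the travel-time means and all other numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def stick_breaking_weights(alpha, K):
    """Truncated stick-breaking construction of Dirichlet-process weights:
    w_k = v_k * prod_{j<k} (1 - v_j), with v_k ~ Beta(1, alpha)."""
    v = rng.beta(1.0, alpha, K)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

K, alpha = 6, 1.0                            # truncation to six components
w = stick_breaking_weights(alpha, K)
mu = rng.normal(np.log(60.0), 0.3, K)        # hypothetical log travel-time means
sigma = np.full(K, 0.2)

z = rng.choice(K, size=1000, p=w / w.sum())  # renormalize the truncated weights
t = rng.lognormal(mu[z], sigma[z])           # travel times from the mixture
print(w.round(3), round(float(t.mean()), 1))
```

In the actual DPMM, the weights, component parameters, and assignments are all sampled jointly by MCMC; this sketch only shows the prior construction.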
Bayesian estimation of parameters in a regional hydrological model
Directory of Open Access Journals (Sweden)
K. Engeland
2002-01-01
This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
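The full statistical likelihood with AR(1) simulation errors can be sketched as below; the residual series is synthetic and the symbols are illustrative, since the paper estimates the error parameters jointly with the hydrological ones.

```python
import numpy as np

def ar1_loglik(residuals, phi, sigma):
    """Gaussian log-likelihood of simulation errors e_t = q_obs - q_sim under
    an AR(1) error model: e_t = phi * e_{t-1} + eta_t, eta_t ~ N(0, sigma^2)."""
    e = np.asarray(residuals, float)
    innov = e[1:] - phi * e[:-1]             # one-step innovations
    var0 = sigma**2 / (1.0 - phi**2)         # stationary variance for e_0
    ll = -0.5 * (np.log(2 * np.pi * var0) + e[0] ** 2 / var0)
    ll += -0.5 * (innov.size * np.log(2 * np.pi * sigma**2)
                  + np.sum(innov**2) / sigma**2)
    return ll

rng = np.random.default_rng(2)
e = [rng.normal()]
for _ in range(999):                         # synthetic AR(1) residuals, phi = 0.7
    e.append(0.7 * e[-1] + rng.normal())
ll_ar1, ll_iid = ar1_loglik(e, 0.7, 1.0), ar1_loglik(e, 0.0, 1.0)
print(ll_ar1 > ll_iid)
```

Setting phi = 0 recovers the "simple" likelihood without the auto-regressive part, which the study found to give the most robust parameter estimates.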
Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.
Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih
2016-10-01
In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback-Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed. © 2016 Society for Risk Analysis.
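The data-perturbation averaging idea can be loosely sketched as follows; bootstrap-style case weights stand in for the paper's perturbation scheme, the Kullback-Leibler selection step is omitted, and all data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Rare-event data: roughly 1-2% positives
n = 5000
X = rng.normal(size=(n, 3))
p = 1 / (1 + np.exp(-(-4.5 + X @ np.array([1.0, 0.5, 0.0]))))
y = rng.binomial(1, p)

# Average predictions over perturbed (randomly reweighted) fits -- a loose
# stand-in for the paper's data-perturbation model-averaging procedure.
probs = []
for _ in range(25):
    w = rng.exponential(1.0, n)              # random case weights
    m = LogisticRegression(C=1e6, max_iter=1000).fit(X, y, sample_weight=w)
    probs.append(m.predict_proba(X)[:, 1])
p_avg = np.mean(probs, axis=0)
print(round(float(p_avg.mean()), 4), round(float(y.mean()), 4))
```

Averaging over perturbed fits stabilizes the probability estimates in the sparse tail, which is where single-fit rare-event estimates are most biased.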
Gouweleeuw, B.T.; vd Griend, A.A.; Owe, M.
1996-01-01
A surface moisture model for large-scale semiarid land application has been extended with a moisture flow routine for capillary flow. The model has been applied to a field-scale data set of topsoil moisture and latent heat flux of an arable site in central Spain. A comparison of the soil hydraulic
Surface-source modeling and estimation using biomagnetic measurements.
Yetik, Imam Samil; Nehorai, Arye; Muravchik, Carlos H; Haueisen, Jens; Eiselt, Michael
2006-10-01
We propose a number of electric source models that are spatially distributed on an unknown surface for biomagnetism. These can be useful to model, e.g., patches of electrical activity on the cortex. We use a realistic head (or another organ) model and discuss the special case of a spherical head model with radial sensors resulting in more efficient computations of the estimates for magnetoencephalography. We derive forward solutions, maximum likelihood (ML) estimates, and Cramér-Rao bound (CRB) expressions for the unknown source parameters. A model selection method is applied to decide on the most appropriate model. We also present numerical examples to compare the performances and computational costs of the different models and illustrate when it is possible to distinguish between surface and focal sources or line sources. Finally, we apply our methods to real biomagnetic data of phantom human torso and demonstrate the applicability of them.
Estimation of shape model parameters for 3D surfaces
DEFF Research Database (Denmark)
Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen
2008-01-01
Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D surfaces using distance maps, which enables the estimation of model parameters without the requirement of point correspondence. For applications with acquisition limitations such as speed and cost, this formulation enables the fitting of a statistical shape model to arbitrarily sampled data. The method is applied to a database of 3D surfaces from a section of the porcine pelvic bone extracted from 33 CT scans. A leave-one-out validation shows that the parameters of the first 3 modes of the shape model can be predicted with a mean difference within [-0.01, 0.02] from the true mean, with a standard deviation...
Bayesian analysis for uncertainty estimation of a canopy transpiration model
Samanta, S.; Mackay, D. S.; Clayton, M. K.; Kruger, E. L.; Ewers, B. E.
2007-04-01
A Bayesian approach was used to fit a conceptual transpiration model to half-hourly transpiration rates for a sugar maple (Acer saccharum) stand collected over a 5-month period and probabilistically estimate its parameter and prediction uncertainties. The model used the Penman-Monteith equation with the Jarvis model for canopy conductance. This deterministic model was extended by adding a normally distributed error term. This extension enabled using Markov chain Monte Carlo simulations to sample the posterior parameter distributions. The residuals revealed approximate conformance to the assumption of normally distributed errors. However, minor systematic structures in the residuals at fine timescales suggested model changes that would potentially improve the modeling of transpiration. Results also indicated considerable uncertainties in the parameter and transpiration estimates. This simple methodology of uncertainty analysis would facilitate the deductive step during the development cycle of deterministic conceptual models by accounting for these uncertainties while drawing inferences from data.
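The MCMC step can be sketched with a random-walk Metropolis sampler on a toy conductance model; the exponential vapour-pressure-deficit response, the flat positive priors, and all numbers are stand-ins for the actual Penman-Monteith/Jarvis formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for the transpiration model: E = g_max * exp(-k * D), a
# Jarvis-style response to vapour pressure deficit D, plus Gaussian error.
D = rng.uniform(0.2, 2.5, 200)               # synthetic VPD (kPa)
E_obs = 3.0 * np.exp(-0.6 * D) + rng.normal(0, 0.1, D.size)

def log_post(theta):
    g_max, k, sigma = theta
    if g_max <= 0 or k <= 0 or sigma <= 0:   # flat priors on positive values
        return -np.inf
    resid = E_obs - g_max * np.exp(-k * D)
    return -0.5 * np.sum(resid**2) / sigma**2 - D.size * np.log(sigma)

# Random-walk Metropolis: accept with probability min(1, post_ratio)
theta = np.array([1.0, 1.0, 1.0])
lp, samples = log_post(theta), []
for i in range(20000):
    prop = theta + rng.normal(0, [0.05, 0.02, 0.01])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i >= 10000:                           # keep post-burn-in samples
        samples.append(theta.copy())
post = np.array(samples)
print(round(post[:, 0].mean(), 2), round(post[:, 1].mean(), 2))
```

The spread of the retained samples directly quantifies the parameter and prediction uncertainties the abstract refers to.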
Estimating model parameters in nonautonomous chaotic systems using synchronization
International Nuclear Information System (INIS)
Yang, Xiaoli; Xu, Wei; Sun, Zhongkui
2007-01-01
In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular nonautonomous, chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters, in addition to a linear feedback coupling for synchronizing systems; some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are then analytically derived to ensure precise evaluation of the unknown parameters and identical synchronization between the concerned experimental system and its corresponding receiver. Examples are presented employing a parametrically excited 4D new oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of the noise strength in simulations.
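For a first-order driven system, the adaptive-tracking-plus-feedback-coupling idea reduces to a few lines; the scalar system, gains, and gradient update below are a minimal illustration under a Lyapunov argument, not the 4D oscillator from the Letter.

```python
import numpy as np

# Adaptive-synchronization parameter estimation for the driven system
# dx/dt = -a*x + sin(t). A receiver with feedback coupling k*e (e = x - x_hat)
# and the gradient update da_hat/dt = -gamma * e * x_hat drives a_hat -> a;
# V = e^2/2 + (a_hat - a)^2/(2*gamma) is non-increasing along trajectories.
a_true, k, gamma, dt = 2.0, 5.0, 10.0, 1e-3
x, x_hat, a_hat = 1.0, 0.0, 0.0
for i in range(200_000):                     # Euler integration, t in [0, 200)
    u = np.sin(i * dt)                       # persistent excitation
    e = x - x_hat
    x += dt * (-a_true * x + u)
    x_hat += dt * (-a_hat * x_hat + u + k * e)
    a_hat += dt * (-gamma * e * x_hat)
print(round(a_hat, 2))
```

The sinusoidal drive provides the persistent excitation needed for the parameter error, and not just the synchronization error, to converge.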
[Using log-binomial model for estimating the prevalence ratio].
Ye, Rong; Gao, Yan-hui; Yang, Yi; Chen, Yue
2010-05-01
To estimate the prevalence ratios, using a log-binomial model with or without continuous covariates. Prevalence ratios for individuals' attitude towards smoking-ban legislation associated with smoking status, estimated by using a log-binomial model were compared with odds ratios estimated by logistic regression model. In the log-binomial modeling, maximum likelihood method was used when there were no continuous covariates and COPY approach was used if the model did not converge, for example due to the existence of continuous covariates. We examined the association between individuals' attitude towards smoking-ban legislation and smoking status in men and women. Prevalence ratio and odds ratio estimation provided similar results for the association in women since smoking was not common. In men however, the odds ratio estimates were markedly larger than the prevalence ratios due to a higher prevalence of outcome. The log-binomial model did not converge when age was included as a continuous covariate and COPY method was used to deal with the situation. All analysis was performed by SAS. Prevalence ratio seemed to better measure the association than odds ratio when prevalence is high. SAS programs were provided to calculate the prevalence ratios with or without continuous covariates in the log-binomial regression analysis.
Bayesian parameter estimation for stochastic models of biological cell migration
Dieterich, Peter; Preuss, Roland
2013-08-01
Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors or the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically stochastic models are applied where parameters are extracted by fitting models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure directly relying on the positions of the cell's trajectory and the covariance matrix of the positions. It is shown that the covariance is identical with the spatial aging correlation function for the supposed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data showing a reliable parameter estimation from single cell paths.
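For Brownian motion with drift, the proposed covariance-based estimate can be sketched directly, since Cov(x_s, x_t) = 2 D min(s, t); trajectory counts and parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(6)

# M trajectories of Brownian motion with drift: x(t) = v*t + sqrt(2D) W(t).
# Subtracting the across-trajectory mean removes the common drift, leaving
# the position covariance Cov(x_s, x_t) = 2 D min(s, t).
M, n, dt, v, D = 500, 50, 1.0, 0.1, 0.5
steps = v * dt + np.sqrt(2 * D * dt) * rng.normal(size=(M, n))
x = np.cumsum(steps, axis=1)                 # positions at t = dt, ..., n*dt

C = np.cov(x, rowvar=False)                  # empirical position covariance
t = dt * np.arange(1, n + 1)
D_hat = np.mean(np.diag(C) / (2 * t))        # Var(x_t) = 2 D t on the diagonal
print(round(float(D_hat), 2))
```

Unlike a mean-square-displacement fit, the full covariance matrix retains the correlation structure between time points, which is what the paper exploits for fractional Brownian motion as well.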
Comparing interval estimates for small sample ordinal CFA models.
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased; this can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
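A coverage audit of the kind the study performs can be sketched for a much simpler estimator, a nominal 95% t-interval on the mean of skewed data; the CFA setting itself is far richer, so this is only an illustration of the audit logic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Coverage check: how often does a nominal 95% t-interval on the mean
# actually contain the true mean when the population is mildly skewed?
def coverage(n, reps=4000, true_mean=1.0):
    tcrit = stats.t.ppf(0.975, n - 1)
    hits = 0
    for _ in range(reps):
        s = rng.exponential(true_mean, n)    # skewed population, mean 1
        m, se = s.mean(), s.std(ddof=1) / np.sqrt(n)
        hits += (m - tcrit * se) <= true_mean <= (m + tcrit * se)
    return hits / reps

c10, c100 = coverage(10), coverage(100)
print(c10, c100)                             # undercoverage shrinks with n
```

The small-sample undercoverage shown here is exactly the kind of failure that a standard-error comparison alone would miss.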
Procedures for parameter estimates of computational models for localized failure
Iacono, C.
2007-01-01
In the last years, many computational models have been developed for tensile fracture in concrete. However, their reliability is related to the correct estimate of the model parameters, not all directly measurable during laboratory tests. Hence, the development of inverse procedures is needed, that
Estimation of pure autoregressive vector models for revenue series ...
African Journals Online (AJOL)
This paper aims at applying multivariate approach to Box and Jenkins univariate time series modeling to three vector series. General Autoregressive Vector Models with time varying coefficients are estimated. The first vector is a response vector, while others are predictor vectors. By matrix expansion each vector, whether ...
GMM estimation in panel data models with measurement error
Wansbeek, T.J.
Griliches and Hausman (J. Econom. 32 (1986) 93) have introduced GMM estimation in panel data models with measurement error. We present a simple, systematic approach to derive moment conditions for such models under a variety of assumptions. (C) 2001 Elsevier Science S.A. All rights reserved.
Estimation of pump operational state with model-based methods
International Nuclear Information System (INIS)
Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha
2010-01-01
Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
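One common model-based route, estimating flow from shaft power and rotational speed via the pump's characteristic power curve and the affinity laws (Q ~ n, H ~ n^2, P ~ n^3), can be sketched as follows; the curve coefficients and operating point are invented, not taken from the paper's pilot installations.

```python
import numpy as np

# Pump power curve P(Q) at nominal speed, here a made-up quadratic in kW
# versus flow in m^3/h, plus the affinity-law scaling between speeds.
n_nom = 1450.0                               # nominal speed (rpm)
p_curve = np.poly1d([-0.002, 0.8, 5.0])      # P(Q) at n_nom (illustrative)

def flow_from_power(P_meas, n_meas):
    """Scale measured power to the nominal-speed curve (P ~ n^3), solve
    P(Q) = P_nom, then scale the flow back to the measured speed (Q ~ n)."""
    P_nom = P_meas * (n_nom / n_meas) ** 3
    roots = (p_curve - P_nom).roots
    q_nom = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    return q_nom * (n_meas / n_nom)

q = flow_from_power(18.0, 1200.0)            # power (kW) and speed (rpm)
print(round(q, 1))
```

The motor-side estimates of speed and shaft power come from the frequency converter, which is what lets this approach replace external flow or pressure sensors.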
Estimating classification images with generalized linear and additive models.
Knoblauch, Kenneth; Maloney, Laurence T
2008-12-22
Conventional approaches to modeling classification image data can be described in terms of a standard linear model (LM). We show how the problem can be characterized as a Generalized Linear Model (GLM) with a Bernoulli distribution. We demonstrate via simulation that this approach is more accurate in estimating the underlying template in the absence of internal noise. With increasing internal noise, however, the advantage of the GLM over the LM decreases and GLM is no more accurate than LM. We then introduce the Generalized Additive Model (GAM), an extension of GLM that can be used to estimate smooth classification images adaptively. We show that this approach is more robust to the presence of internal noise, and finally, we demonstrate that GAM is readily adapted to estimation of higher order (nonlinear) classification images and to testing their significance.
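The LM-versus-GLM contrast can be sketched on a simulated observer; the template, trial counts, and noise levels are illustrative, and sklearn's logistic regression stands in for the paper's GLM machinery.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)

# Simulated yes/no observer: the response depends on the dot product of a
# hidden template with the stimulus noise field (classification-image setup).
n_trials, n_pix = 4000, 16
template = np.zeros(n_pix)
template[5:9] = 1.0
noise = rng.normal(size=(n_trials, n_pix))
resp = (noise @ template + rng.normal(0, 1.0, n_trials) > 0).astype(int)

# LM-style estimate: mean noise on "yes" trials minus mean on "no" trials
lm_img = noise[resp == 1].mean(0) - noise[resp == 0].mean(0)

# GLM estimate: Bernoulli GLM (logistic regression) weights on the noise
glm = LogisticRegression(C=1e6, max_iter=1000).fit(noise, resp)
glm_img = glm.coef_.ravel()

corr = float(np.corrcoef(glm_img, template)[0, 1])
print(round(corr, 2))
```

The GAM extension in the paper replaces the linear predictor with smooth functions of pixel position, trading some efficiency for adaptivity.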
Bases for the Creation of Electric Energy Price Estimate Model
International Nuclear Information System (INIS)
Toljan, I.; Klepo, M.
1995-01-01
The paper presents the basic principles for the creation and introduction of a new model for electric energy price estimation and its significant influence on the functioning of the tariff system. There is also a review of the model presently used for electric energy price estimation, which is based on objectivized values of electric energy plants and of production, transmission and distribution facilities, followed by proposed changes that would yield functional and organizational improvements within the electric energy system, the most complex subsystem of the whole power system. The model rests on a substantial and functional connection of the optimization and analysis system with economic dispatching of electric energy, including marginal cost estimation and its influence on the tariff system as the main means of improving the functioning quality of the electric energy system. (author). 10 refs., 2 figs
E.M. Wever (Elisabeth); G. Draisma (Gerrit); E.A.M. Heijnsdijk (Eveline); H.J. de Koning (Harry)
2011-01-01
textabstractBackground. Simulation models are essential tools for estimating benefits of cancer screening programs. Such models include a screening-effect model that represents how early detection by screening followed by treatment affects disease-specific survival. Two commonly used
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
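The model-averaging step, turning information-criterion values into posterior model weights, can be sketched generically; the IC values below are made up, and the report's actual criteria and prior model probabilities may differ.

```python
import numpy as np

# Posterior model weights from information-criterion values:
# p(M_k | data) proportional to exp(-0.5 * delta_k) * prior_k,
# where delta_k = IC_k - min(IC).
def model_weights(ic, prior=None):
    ic = np.asarray(ic, float)
    prior = np.ones_like(ic) if prior is None else np.asarray(prior, float)
    w = np.exp(-0.5 * (ic - ic.min())) * prior
    return w / w.sum()

# Hypothetical IC values for seven alternative variogram models
ic = [230.1, 231.4, 229.8, 245.0, 260.3, 232.5, 233.0]
w = model_weights(ic)
print(np.round(w, 3))
```

Models with negligible weight can be eliminated, as in the report, and the remaining models' predictions combined with these weights instead of selecting a single "best" model.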
Unemployment estimation: Spatial point referenced methods and models
Pereira, Soraia
2017-06-26
The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The survey chooses, according to a pre-established sampling criterion, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations; therefore model based methods, which tend to
Background Suppression Effects on Signal Estimation
Energy Technology Data Exchange (ETDEWEB)
Burr, Tom [Los Alamos National Laboratory
2008-01-01
Gamma detectors at border crossings are intended to detect illicit nuclear material. One performance challenge involves the fact that vehicles suppress the natural background, thus potentially reducing detection probability for threat items. Methods to adjust for background suppression have been considered in related but different settings. Here, methods to adjust for background suppression are tested in the context of signal estimation. Adjustment methods include several clustering options. We find that for the small-to-moderate suppression magnitudes exhibited in the analyzed data, suppression adjustment is only moderately helpful in locating the signal peak, and in estimating its width or magnitude.
Application of Parameter Estimation for Diffusions and Mixture Models
DEFF Research Database (Denmark)
Nolsøe, Kim
The first part of this thesis proposes a method to determine the preferred number of structures, their proportions and the corresponding geometrical shapes of an m-membered ring molecule. This is obtained by formulating a statistical model for the data and constructing an algorithm which samples... ... error models. This is obtained by constructing an estimating function through projections of some chosen function of Y_{t_{i+1}} onto functions of previous observations Y_{t_i}, ..., Y_{t_0}. The process of interest X_{t_{i+1}} is partially observed through a measurement equation Y_{t_{i+1}} = h(X_{t_{i+1}}) + noise, where h(.) is restricted to be a polynomial. Through a simulation study we compare, for the CIR process, the obtained estimator with an estimator derived from utilizing the extended Kalman filter. The simulation study shows that the two estimation methods perform equally well.
Asymptotic distribution theory for break point estimators in models estimated via 2SLS
Boldea, O.; Hall, A.R.; Han, S.
2012-01-01
In this paper, we present a limiting distribution theory for the break point estimator in a linear regression model with multiple structural breaks obtained by minimizing a Two Stage Least Squares (2SLS) objective function. Our analysis covers both the case in which the reduced form for the
Bilinear Mixed Effects Models for Dyadic Data
National Research Council Canada - National Science Library
Hoff, Peter D
2003-01-01
.... Such an effect, along with standard linear fixed and random effects, is incorporated into a generalized linear model, and a Markov chain Monte Carlo algorithm is provided for Bayesian estimation and inference...
Input-output model for MACCS nuclear accident impacts estimation
Energy Technology Data Exchange (ETDEWEB)
Outkin, Alexander V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bixler, Nathan E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vargas, Vanessa N [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-27
Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
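The Input-Output core is the Leontief relation x = (I - A)^(-1) d: a loss in final demand propagates through inter-industry purchases into a larger loss in gross output. The coefficient matrix and demand shock below are invented numbers, not REAcct data.

```python
import numpy as np

# Leontief input-output accounting: total output x solves x = A x + d, so
# x = (I - A)^{-1} d, and a final-demand loss delta_d costs
# (I - A)^{-1} delta_d in gross output. A is a made-up 3-sector example.
A = np.array([[0.2, 0.1, 0.0],               # inter-industry coefficients
              [0.3, 0.1, 0.2],
              [0.1, 0.3, 0.2]])
L = np.linalg.inv(np.eye(3) - A)             # Leontief inverse

d_loss = np.array([10.0, 0.0, 5.0])          # direct final-demand loss ($M)
total_loss = L @ d_loss                      # direct plus indirect losses
print(np.round(total_loss, 1), round(float(total_loss.sum()), 1))
```

Because the Leontief inverse is elementwise at least the identity for a productive economy, the total loss always exceeds the direct loss, which is the "multiplier" effect the updated MACCS model captures.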
Model Year 2012 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2011-11-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.
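The Guide's yearly-cost arithmetic is simply annual miles driven, divided by the mpg estimate, times the price per gallon; the figures below are illustrative, not the Guide's own assumptions.

```python
# Yearly fuel cost as the Guide computes it: miles / mpg gives gallons used,
# times the price per gallon. Mileage and fuel price here are made up.
def yearly_fuel_cost(annual_miles, mpg, price_per_gallon):
    return annual_miles / mpg * price_per_gallon

cost = yearly_fuel_cost(15000, 30, 3.50)
print(cost)                                  # → 1750.0
```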
Model Year 2011 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2010-11-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.
Model Year 2013 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2012-12-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.
Model Year 2017 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2016-11-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.
Model Year 2018 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2017-12-07
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.
Estimation and variable selection for generalized additive partial linear models
Wang, Li
2011-08-01
We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2011-01-01
, propagated exponentially, can lead to severely sub-optimal plans. Modern optimizers typically maintain one-dimensional statistical summaries and make the attribute value independence and join uniformity assumptions for efficiently estimating selectivities. Therefore, selectivity estimation errors in today's optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all...
DEFF Research Database (Denmark)
Uthes, Sandra; Sattler, Claudia; Piorr, Annette
2010-01-01
production orientations and grassland types was modeled under the presence and absence of the grassland extensification scheme using the bio-economic model MODAM. Farms were based on available accountancy data and surveyed production data, while information on farm location within the district was derived from a spatial allocation procedure. The reduction in total gross margin per unit area was used to measure on-farm compliance costs. A dimensionless environmental index was used to assess the suitability of the scheme to reduce the risk of nitrate-leaching. Calculated on-farm compliance costs...
Estimation of Rotor Effective Wind Speed: A Comparison
DEFF Research Database (Denmark)
Soltani, Mohsen; Knudsen, Torben; Svenstrup, Mikael
2013-01-01
Modern wind turbine controllers use wind speed information to improve power production and reduce loads on the turbine components. The turbine top wind speed measurement is unfortunately imprecise and not a good representative of the rotor effective wind speed. Consequently, many different model-based algorithms have been proposed that are able to estimate the wind speed using common turbine measurements. In this paper, we present a concise yet comprehensive analysis and comparison of these techniques, reviewing their advantages and drawbacks. We implement these techniques and compare the results on both...
Evaluation of Rock Stress Estimation by the Kaiser effect
International Nuclear Information System (INIS)
Lehtonen, A.
2005-11-01
The knowledge of in situ stress is the key input parameter in many rock mechanics analyses. Information on stress allows the definition of boundary conditions for various modelling and engineering tasks. Presently, the estimation of stresses in bedrock is one of the most difficult, time-consuming and expensive rock mechanics investigations. In addition, the methods used today have not evolved significantly in many years. This creates a demand for novel, more economical and practical methods for stress estimation. In this study, one such method, the Kaiser effect, based on acoustic emission of core samples, has been evaluated. It can be described as a 'memory' in rock that is indicated by a change in acoustic emission emitted during a uniaxial loading test. The most tempting feature of this method is the ability to estimate the in situ stress state from core specimens in laboratory conditions. This yields considerable cost savings compared to laborious borehole measurements. The Kaiser effect has been studied for decades as a means of determining in situ stresses, without any major success. However, recent studies in Australia and China have been promising and made the estimation of the stress tensor possible from differently oriented core samples. The aim of this work has been to develop a similar estimation method in Finland (including both equipment and data reduction), and to test it on samples obtained from Olkiluoto, Eurajoki. The developed measuring system proved to work well. The quality of the obtained data varied, but the data were still interpretable. The results obtained from these tests were compared with the results of previous overcoring measurements, and they showed quite good correlation. Thus, the results were promising, but the method still needs further development and more testing before a final decision on its feasibility can be made. (orig.)
Parameter estimation and model selection in computational biology.
Directory of Open Access Journals (Sweden)
Gabriele Lillacci
2010-03-01
Full Text Available A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it should not be accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
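The joint state-and-parameter idea in this abstract can be sketched with a toy example: augment the state vector with the unknown parameter and run an extended Kalman filter on both. Everything below (the one-state decay model, the noise levels, the initial guesses) is an illustrative assumption, not the paper's heat shock or gene regulation model:

```python
import random

random.seed(1)
dt, k_true = 0.1, 0.5

# Simulate noisy measurements of the toy system x' = -k * x
x = 1.0
ys = []
for _ in range(200):
    x += -k_true * x * dt
    ys.append(x + random.gauss(0, 0.01))

# Augmented state z = [x, k]: the EKF estimates state and parameter jointly
z = [1.0, 0.1]                       # initial guesses for x and k
P = [[0.1, 0.0], [0.0, 1.0]]         # state covariance
Q = [[1e-6, 0.0], [0.0, 1e-6]]       # process noise covariance
R = 0.01 ** 2                        # measurement noise variance

for y in ys:
    # Predict: x <- x - k*x*dt, k unchanged; F is the Jacobian of that map
    F = [[1 - z[1] * dt, -z[0] * dt], [0.0, 1.0]]
    z = [z[0] - z[1] * z[0] * dt, z[1]]
    P = [[sum(F[i][a] * P[a][b] * F[j][b]
              for a in range(2) for b in range(2)) + Q[i][j]
          for j in range(2)] for i in range(2)]
    # Update with the scalar measurement y = x + v  (H = [1, 0])
    S = P[0][0] + R
    K = [P[0][0] / S, P[1][0] / S]
    r = y - z[0]
    z = [z[0] + K[0] * r, z[1] + K[1] * r]
    P = [[P[i][j] - K[i] * P[0][j] for j in range(2)] for i in range(2)]

k_est = z[1]
```

With informative early measurements, the filtered parameter estimate settles near the true decay rate; the paper's a posteriori identifiability test and refinement step are not reproduced here.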
Efficient and robust estimation for longitudinal mixed models for binary data
DEFF Research Database (Denmark)
Holst, René
2009-01-01
This paper proposes a longitudinal mixed model for binary data. The model extends the classical Poisson trick, in which a binomial regression is fitted by switching to a Poisson framework. A recent estimating equations method for generalized linear longitudinal mixed models, called GEEP, is used as a vehicle for fitting the conditional Poisson regressions, given a latent process of serially correlated Tweedie variables. The regression parameters are estimated using a quasi-score method, whereas the dispersion and correlation parameters are estimated by use of bias-corrected Pearson-type estimating equations, using second moments only. Random effects are predicted by BLUPs. The method provides a computationally efficient and robust approach to the estimation of longitudinal clustered binary data and accommodates linear and non-linear models. A simulation study is used for validation and finally...
A single model procedure for estimating tank calibration equations
International Nuclear Information System (INIS)
Liebetrau, A.M.
1997-10-01
A fundamental component of any accountability system for nuclear materials is a tank calibration equation that relates the height of liquid in a tank to its volume. Tank volume calibration equations are typically determined from pairs of height and volume measurements taken in a series of calibration runs. After raw calibration data are standardized to a fixed set of reference conditions, the calibration equation is typically fit by dividing the data into several segments--corresponding to regions in the tank--and independently fitting the data for each segment. The estimates obtained for individual segments must then be combined to obtain an estimate of the entire calibration function. This process is tedious and time-consuming. Moreover, uncertainty estimates may be misleading because it is difficult to properly model run-to-run variability and between-segment correlation. In this paper, the authors describe a model whose parameters can be estimated simultaneously for all segments of the calibration data, thereby eliminating the need for segment-by-segment estimation. The essence of the proposed model is to define a suitable polynomial to fit to each segment and then extend its definition to the domain of the entire calibration function, so that it (the entire calibration function) can be expressed as the sum of these extended polynomials. The model provides defensible estimates of between-run variability and yields a proper treatment of between-segment correlations. A portable software package, called TANCS, has been developed to facilitate the acquisition, standardization, and analysis of tank calibration data. The TANCS package was used for the calculations in an example presented to illustrate the unified modeling approach described in this paper. With TANCS, a trial calibration function can be estimated and evaluated in a matter of minutes.
Rutten, M.J.M.; Bovenhuis, H.; Arendonk, van J.A.M.
2010-01-01
Fourier transform infrared spectroscopy is a suitable method to determine bovine milk fat composition. However, the determination of fat composition by gas chromatography, required for calibration of the infrared prediction model, is expensive and labor intensive. It has recently been shown that the
A distributed approach for parameters estimation in System Biology models
International Nuclear Information System (INIS)
Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.
2009-01-01
Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that yield the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.
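A serial stand-in for the distributed search this abstract describes can be sketched as follows: several independent "nodes" each run a local random search against the fit objective, and the best candidate is pooled, loosely mimicking the database-mediated solution swapping. The model, data, and search settings are all hypothetical:

```python
import math, random

random.seed(0)
t = [i * 0.5 for i in range(20)]
a_true, b_true = 2.0, 0.3
data = [a_true * math.exp(-b_true * ti) for ti in t]   # noise-free toy data

def sse(a, b):
    """Sum of squared errors between the model a*exp(-b*t) and the data."""
    return sum((a * math.exp(-b * ti) - yi) ** 2 for ti, yi in zip(t, data))

def local_search(a, b, iters=2000, step=0.1):
    """One 'node': hill-climbing random search from its own starting point."""
    best = (sse(a, b), a, b)
    for _ in range(iters):
        ca = best[1] + random.gauss(0, step)
        cb = best[2] + random.gauss(0, step)
        s = sse(ca, cb)
        if s < best[0]:
            best = (s, ca, cb)
    return best

# Pool the candidate solutions from four "nodes" and keep the best one
pool = [local_search(random.uniform(0, 5), random.uniform(0, 1))
        for _ in range(4)]
best_sse, a_est, b_est = min(pool)
```

In the actual system each `local_search` call would run on a separate computational resource, with candidates exchanged through the shared relational database rather than a local list.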
Estimating and Forecasting Generalized Fractional Long Memory Stochastic Volatility Models
Directory of Open Access Journals (Sweden)
Shelton Peiris
2017-12-01
Full Text Available This paper considers a flexible class of time series models generated by Gegenbauer polynomials incorporating long memory in stochastic volatility (SV) components in order to develop the General Long Memory SV (GLMSV) model. We examine the corresponding statistical properties of this model, discuss the spectral likelihood estimation and investigate the finite sample properties via Monte Carlo experiments. We provide empirical evidence by applying the GLMSV model to three exchange rate return series and conjecture that the results of out-of-sample forecasts adequately confirm the use of the GLMSV model in certain financial applications.
Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms
Berhausen, Sebastian; Paszek, Stefan
2016-01-01
In recent years, system failures have occurred in many power systems all over the world, resulting in a lack of power supply to a large number of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, a current base of parameters of the models of generating units, containing the models of synchronous generators, is necessary. The paper presents a method for parameter estimation of a synchronous generator nonlinear model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing the objective function defined as a mean square error for deviations between the measurement waveforms and the waveforms calculated based on the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. The calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
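The filtering step mentioned in this abstract (cleaning noisy measurement waveforms before evaluating the mean-square-error objective) can be illustrated with a simple moving-average filter on a made-up transient; the waveform shape and noise level below are illustrative, not the generator data:

```python
import math, random

random.seed(2)
# Hypothetical noisy transient waveform (shape and noise level illustrative)
t = [i * 0.01 for i in range(500)]
clean = [math.exp(-ti) * math.sin(10 * ti) for ti in t]
noisy = [c + random.gauss(0, 0.05) for c in clean]

def moving_average(x, w):
    """Centered moving-average filter; the window shrinks near the edges."""
    n = len(x)
    out = []
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def mse(a, b):
    """Mean square error between two waveforms of equal length."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) / len(a)

filtered = moving_average(noisy, 5)
noisy_err = mse(noisy, clean)
filtered_err = mse(filtered, clean)
```

Filtering reduces the noise contribution to the objective, so the subsequent minimization compares model output against a waveform closer to the true transient; the paper's actual filter design is more elaborate than this sketch.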
Estimation of the Human Absorption Cross Section Via Reverberation Models
DEFF Research Database (Denmark)
Steinböck, Gerhard; Pedersen, Troels; Fleury, Bernard Henri
2018-01-01
and compare the obtained results to those of Sabine's model. We find that the absorption by persons is large enough to be measured with a wideband channel sounder and that estimates of the human absorption cross section differ for the two models. The obtained values are comparable to values reported in the literature. We also suggest the use of controlled environments with low average absorption coefficients to obtain more reliable estimates. The obtained values can be used to predict the change of reverberation time with persons in the propagation environment. This allows prediction of channel characteristics relevant in communication systems, e.g. path loss and rms delay spread, for various population densities.
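The Sabine-based estimate can be illustrated in a few lines: measure the reverberation time with and without persons, convert each to a total absorption via Sabine's formula T = 0.161 V / A, and divide the extra absorption by the number of persons. The room volume and reverberation times below are made-up numbers, not the paper's measurements:

```python
# Illustrative numbers only (not the paper's measurements)
V = 200.0           # room volume, m^3
N = 4               # number of persons present
T_empty = 0.80      # reverberation time without persons, s
T_occupied = 0.74   # reverberation time with persons, s

# Sabine's model: T = 0.161 * V / A  =>  total absorption A = 0.161 * V / T
A_empty = 0.161 * V / T_empty
A_occupied = 0.161 * V / T_occupied

# Extra absorption divided by the number of persons gives the
# average absorption cross section per person (m^2)
sigma_person = (A_occupied - A_empty) / N
```

This is the Sabine variant only; the paper also compares against a second reverberation model, for which the estimates differ.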
P.P. Liu (Paul); M. Fabbri (Marco)
2016-01-01
This work studies the effect of unarmed private security patrols on crime. We make use of an initiative, triggered by an arguably exogenous event, consisting of hiring unarmed private security agents to patrol, observe and report criminal activities to the ordinary police within a
Salli F. Dymond; W. Michael Aust; Stephen P. Prisley; Mark H. Eisenbies; James M. Vose
2014-01-01
Managed forests have historically been linked to watershed protection and flood mitigation. Research indicates that forests can potentially minimize peak flows during storm events, yet the relationship between forests and flooding is complex. Forest roads, usually found in managed systems, can potentially magnify the effects of forest harvesting on water yields. The...
Biomass models to estimate carbon stocks for hardwood tree species
Energy Technology Data Exchange (ETDEWEB)
Ruiz-Peinado, R.; Montero, G.; Rio, M. del
2012-11-01
To estimate forest carbon pools from forest inventories it is necessary to have biomass models or biomass expansion factors. In this study, tree biomass models were developed for the main hardwood forest species in Spain: Alnus glutinosa, Castanea sativa, Ceratonia siliqua, Eucalyptus globulus, Fagus sylvatica, Fraxinus angustifolia, Olea europaea var. sylvestris, Populus x euramericana, Quercus canariensis, Quercus faginea, Quercus ilex, Quercus pyrenaica and Quercus suber. Different tree biomass components were considered: stem with bark, branches of different sizes, and above- and belowground biomass. For each species, a system of equations was fitted using seemingly unrelated regression, fulfilling the additivity property between biomass components. Diameter and total height were explored as independent variables. All models included tree diameter, whereas for the majority of species total height was only considered in the stem biomass models and in some of the branch models. The comparison of the new biomass models with previous models fitted separately for each tree component indicated an improvement in accuracy, with a mean reduction of 20% in the root mean square error and a mean increase of 7% in model efficiency in comparison with recently published models. The fitted models thus allow the biomass stock of hardwood species to be estimated more accurately from Spanish National Forest Inventory data. (Author) 45 refs.
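The basic fitting idea behind such biomass models can be sketched with a single component: an allometric law B = a * D^b becomes linear after a log transform and can be fitted by ordinary least squares. The coefficients and data below are synthetic, and this simple per-component OLS omits the paper's seemingly-unrelated-regression system and additivity constraint:

```python
import math, random

random.seed(3)
# Synthetic diameter (cm) / biomass (kg) pairs from a hypothetical
# allometric law B = a * D^b; coefficients are illustrative, not the paper's
a_true, b_true = 0.05, 2.4
D = [5 + 2 * i for i in range(20)]
B = [a_true * d ** b_true * math.exp(random.gauss(0, 0.05)) for d in D]

# Ordinary least squares on the log-transformed model: ln B = ln a + b ln D
x = [math.log(d) for d in D]
y = [math.log(bi) for bi in B]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b_est = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
a_est = math.exp(ybar - b_est * xbar)
```

Fitting all components jointly, as in the paper, ensures the component predictions sum to the total-biomass prediction, which independent per-component fits like this one do not guarantee.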
Model-Based Material Parameter Estimation for Terahertz Reflection Spectroscopy
Kniffin, Gabriel Paul
Many materials such as drugs and explosives have characteristic spectral signatures in the terahertz (THz) band. These unique signatures imply great promise for spectral detection and classification using THz radiation. While such spectral features are most easily observed in transmission, real-life imaging systems will need to identify materials of interest from reflection measurements, often in non-ideal geometries. One important, yet commonly overlooked source of signal corruption is the etalon effect -- interference phenomena caused by multiple reflections from dielectric layers of packaging and clothing likely to be concealing materials of interest in real-life scenarios. This thesis focuses on the development and implementation of a model-based material parameter estimation technique, primarily for use in reflection spectroscopy, that takes the influence of the etalon effect into account. The technique is adapted from techniques developed for transmission spectroscopy of thin samples and is demonstrated using measured data taken at the Northwest Electromagnetic Research Laboratory (NEAR-Lab) at Portland State University. Further tests are conducted, demonstrating the technique's robustness against measurement noise and common sources of error.
Parameter and State Estimator for State Space Models
Directory of Open Access Journals (Sweden)
Ruifeng Ding
2014-01-01
Full Text Available This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
Parameter and state estimator for state space models.
Ding, Ruifeng; Zhuang, Linfan
2014-01-01
This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
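The input-output identification step can be sketched with recursive least squares on a first-order example: once the state is eliminated, y_t = a*y_{t-1} + b*u_{t-1} involves only measured inputs and outputs, so the coefficients can be estimated recursively. The system and noise level are illustrative, not the paper's example:

```python
import random

random.seed(4)
a_true, b_true = 0.8, 0.5
# Simulate input-output data from y_t = a*y_{t-1} + b*u_{t-1} + noise
u = [random.uniform(-1, 1) for _ in range(500)]
y = [0.0]
for t in range(1, 500):
    y.append(a_true * y[t-1] + b_true * u[t-1] + random.gauss(0, 0.01))

# Recursive least squares over the regressor phi_t = [y_{t-1}, u_{t-1}]
theta = [0.0, 0.0]
P = [[100.0, 0.0], [0.0, 100.0]]    # large initial covariance
for t in range(1, 500):
    phi = [y[t-1], u[t-1]]
    Pphi = [P[0][0]*phi[0] + P[0][1]*phi[1],
            P[1][0]*phi[0] + P[1][1]*phi[1]]
    denom = 1.0 + phi[0]*Pphi[0] + phi[1]*Pphi[1]
    gain = [Pphi[0]/denom, Pphi[1]/denom]
    err = y[t] - (theta[0]*phi[0] + theta[1]*phi[1])
    theta = [theta[0] + gain[0]*err, theta[1] + gain[1]*err]
    P = [[P[i][j] - gain[i]*Pphi[j] for j in range(2)] for i in range(2)]

a_est, b_est = theta
```

In the paper's setting the recovered parameters would then be used to reconstruct the state sequence from the input-output data; that back-substitution step is omitted here.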
Parameter Estimation for Single Diode Models of Photovoltaic Modules
Energy Technology Data Exchange (ETDEWEB)
Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Integration Dept.
2015-03-01
Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions, together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
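The forward problem behind this estimation task (computing an I-V curve from single diode parameters) can be sketched as follows. The implicit diode equation is solved by damped fixed-point iteration; the parameter values are illustrative, not the report's, and a real workflow would instead fit them to measured curves:

```python
import math

# Illustrative single-diode parameters for a 60-cell module (not the report's)
Iph, I0, Rs, Rsh = 5.0, 1e-9, 0.2, 300.0
n_Vt = 1.5 * 0.02585 * 60     # ideality factor * thermal voltage * cell count

def cell_current(V, iters=100):
    """Solve I = Iph - I0*(exp((V + I*Rs)/n_Vt) - 1) - (V + I*Rs)/Rsh
    by damped fixed-point iteration (a sketch, not a production solver)."""
    I = Iph
    for _ in range(iters):
        I_new = (Iph - I0 * (math.exp((V + I * Rs) / n_Vt) - 1)
                 - (V + I * Rs) / Rsh)
        I = 0.5 * I + 0.5 * I_new    # damping for stability
    return I

Isc = cell_current(0.0)            # short-circuit current
# Open-circuit voltage: bisection on cell_current(V) = 0
lo, hi = 0.0, 60.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if cell_current(mid) > 0:
        lo = mid
    else:
        hi = mid
Voc = 0.5 * (lo + hi)
```

Parameter estimation wraps a solver like this in an optimization loop, adjusting (Iph, I0, Rs, Rsh, n) until simulated curves match the measured ones across irradiance and temperature conditions.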
Estimated Effects of the October 1979 Change in Monetary Policy on the 1980 Economy
Ray C. Fair
1980-01-01
On October 6, 1979, the Federal Reserve announced what most people interpreted as a change in monetary policy. The purpose of this paper is to estimate the effects of this change on the 1980-81 economy. The effects of the change are estimated from simulations with my model of the U.S. economy (1976, 1980b).
Occupancy Estimation and Modeling : Inferring Patterns and Dynamics of Species Occurrence
MacKenzie, D.I.; Nichols, J.D.; Royle, J. Andrew; Pollock, K.H.; Bailey, L.L.; Hines, J.E.
2006-01-01
This is the first book to examine the latest methods in analyzing presence/absence data surveys. Using four classes of models (single-species, single-season; single-species, multiple-season; multiple-species, single-season; and multiple-species, multiple-season), the authors discuss the practical sampling situation, present a likelihood-based model enabling direct estimation of the occupancy-related parameters while allowing for imperfect detectability, and make recommendations for designing studies using these models. It provides authoritative insights into the latest in estimation modeling; discusses multiple models which lay the groundwork for future study designs; addresses critical issues of imperfect detectability and its effects on estimation; and explores in detail the role of probability in estimation.
Groundwater Modelling For Recharge Estimation Using Satellite Based Evapotranspiration
Soheili, Mahmoud; (Tom) Rientjes, T. H. M.; (Christiaan) van der Tol, C.
2017-04-01
Groundwater movement is influenced by several factors and processes in the hydrological cycle, of which recharge is of high relevance. Since the amount of aquifer extractable water directly relates to the recharge amount, estimation of recharge is a prerequisite of groundwater resources management. Recharge is highly affected by water loss mechanisms, the major one being actual evapotranspiration (ETa). It is, therefore, essential to have a detailed assessment of the ETa impact on groundwater recharge. The objective of this study was to evaluate how recharge was affected when satellite-based evapotranspiration was used instead of in-situ based ETa in the Salland area, the Netherlands. The Methodology for Interactive Planning for Water Management (MIPWA) model setup, which includes a groundwater model for the northern part of the Netherlands, was used for recharge estimation. The Surface Energy Balance Algorithm for Land (SEBAL) based actual evapotranspiration maps from Waterschap Groot Salland were also used. Comparison of the SEBAL-based ETa estimates with in-situ based estimates in the Netherlands showed that the SEBAL estimates were not reliable. As such, the results could not serve for calibrating root zone parameters in the CAPSIM model. The annual cumulative ETa map produced by the model showed that the maximum amount of evapotranspiration occurs in mixed forest areas in the northeast and a portion of the central parts. Estimates ranged from 579 mm to a minimum of 0 mm in the highest elevated areas with woody vegetation in the southeast of the region. Variations in mean seasonal hydraulic head and groundwater level for each layer showed that the hydraulic gradient follows elevation in the Salland area from southeast (maximum) to northwest (minimum) of the region, which depicts the groundwater flow direction. The mean seasonal water balance in the CAPSIM part was evaluated to represent recharge estimation in the first layer. The highest estimated recharge flux was for autumn
Estimating Drilling Cost and Duration Using Copulas Dependencies Models
Directory of Open Access Journals (Sweden)
M. Al Kindi
2017-03-01
Full Text Available Estimation of drilling budget and duration is a high-level challenge for the oil and gas industry. This is due to the many uncertain activities in the drilling procedure, such as material prices, overhead cost, inflation, oil prices, well type, and depth of drilling. Therefore, it is essential to consider all these uncertain variables and the nature of the relationships between them. This eventually leads to the minimization of the level of uncertainty and yet makes "good" estimation points for budget and duration given the well type. In this paper, copula probability theory is used in order to model the dependencies between cost/duration and the MRI (mechanical risk index). The MRI is a mathematical computation, which relates various drilling factors such as water depth, measured depth and true vertical depth, in addition to mud weight and horizontal displacement. In general, the value of MRI is utilized as an input for the drilling cost and duration estimations. Therefore, modeling the uncertain dependencies between MRI and both cost and duration using copulas is important. The cost and duration estimates for each well were extracted from the copula dependency model, where the study simulated over 10,000 scenarios. These new estimates were later compared to the actual data in order to validate the performance of the procedure. Most of the wells show a moderate-to-weak MRI dependence, which means that the variation in these wells can be related to MRI, but to the extent that it is not the primary source.
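The copula-based scenario simulation can be sketched with a Gaussian copula: draw correlated standard normals, map them to uniforms with the normal CDF, then push the uniforms through the desired marginal distributions. The correlation, marginals, and scales below are illustrative stand-ins, not the study's fitted model:

```python
import math, random

random.seed(5)
rho = 0.7   # assumed dependence between the two quantities (illustrative)

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def scenario():
    """One Gaussian-copula draw: correlated normals -> uniforms -> marginals.
    Marginals are illustrative exponentials (mean cost 10, mean 60 days)."""
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
    u1, u2 = phi(z1), phi(z2)
    cost = -10.0 * math.log(1.0 - u1)   # inverse exponential CDF
    days = -60.0 * math.log(1.0 - u2)
    return cost, days

samples = [scenario() for _ in range(10000)]
costs = [c for c, _ in samples]
durations = [d for _, d in samples]
mean_cost = sum(costs) / len(costs)
mean_days = sum(durations) / len(durations)

def corr(xs, ys):
    """Sample Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

cost_days_corr = corr(costs, durations)
```

The copula preserves each marginal while imposing the chosen dependence, which is the property the study exploits to generate joint cost/duration scenarios conditioned on MRI.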
Deconvolution Estimation in Measurement Error Models: The R Package decon
Wang, Xiao-Feng; Wang, Bin
2011-01-01
Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors-in-variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples. PMID:21614139
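The deconvolution kernel idea behind the package can be sketched for the special case of homoscedastic Laplace measurement error, where the deconvoluting kernel has a simple closed form when a Gaussian base kernel is used. The error scale and bandwidth below are illustrative, and this sketch is not the decon package's own code:

```python
import math, random

random.seed(9)
sigma = 0.3   # known measurement-error std dev (Laplace error assumed)
h = 0.4       # bandwidth (an illustrative fixed choice, not data-driven)

# Contaminated sample: X = Z + U with Z ~ N(0,1) and U ~ Laplace(sd sigma);
# a difference of two iid exponentials is Laplace distributed
b = sigma / math.sqrt(2.0)    # Laplace scale giving variance sigma^2
xs = [random.gauss(0, 1)
      + random.expovariate(1.0 / b) - random.expovariate(1.0 / b)
      for _ in range(2000)]

def decon_kernel(u):
    """Deconvoluting kernel for Laplace error with a Gaussian base kernel:
    K*(u) = phi(u) * (1 - sigma^2 / (2 h^2) * (u^2 - 1))."""
    phi_u = math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return phi_u * (1.0 - sigma ** 2 / (2.0 * h ** 2) * (u * u - 1.0))

def f_hat(x):
    """Density estimate of the error-free variable Z at x."""
    return sum(decon_kernel((x - xi) / h) for xi in xs) / (len(xs) * h)

peak = f_hat(0.0)   # should be near the N(0,1) density at 0 (about 0.4)
```

The modified kernel undoes the error's characteristic function in the frequency domain, so the estimate targets the density of the unobserved Z rather than the contaminated X; the package generalizes this to other error laws, heteroscedastic errors, and FFT-accelerated evaluation.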
Modeling, estimation and optimal filtration in signal processing
Najim, Mohamed
2010-01-01
The purpose of this book is to provide graduate students and practitioners with traditional methods and more recent results for model-based approaches in signal processing. Firstly, discrete-time linear models such as AR, MA and ARMA models, their properties and their limitations are introduced. In addition, sinusoidal models are addressed. Secondly, estimation approaches based on least squares methods and instrumental variable techniques are presented. Finally, the book deals with optimal filters, i.e. Wiener and Kalman filtering, and adaptive filters such as the RLS, the LMS and the
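The least squares estimation of the AR models the book opens with can be sketched in a few lines: regress each sample on its lagged values and solve the 2x2 normal equations directly. The AR(2) coefficients and sample size below are illustrative:

```python
import random

random.seed(6)
a1, a2 = 0.6, -0.2            # true AR(2) coefficients (illustrative)
x = [0.0, 0.0]
for _ in range(2000):
    x.append(a1 * x[-1] + a2 * x[-2] + random.gauss(0, 1))

# Least squares: regress x_t on [x_{t-1}, x_{t-2}] via the normal equations
idx = range(2, len(x))
S11 = sum(x[t-1] * x[t-1] for t in idx)
S12 = sum(x[t-1] * x[t-2] for t in idx)
S22 = sum(x[t-2] * x[t-2] for t in idx)
b1 = sum(x[t] * x[t-1] for t in idx)
b2 = sum(x[t] * x[t-2] for t in idx)
det = S11 * S22 - S12 * S12
a1_est = (S22 * b1 - S12 * b2) / det
a2_est = (S11 * b2 - S12 * b1) / det
```

Instrumental variable and adaptive variants (RLS, LMS), as covered later in the book, replace this batch solve with recursive updates.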
System Level Modelling and Performance Estimation of Embedded Systems
DEFF Research Database (Denmark)
Tranberg-Hansen, Anders Sejer
The advances seen in the semiconductor industry within the last decade have brought the possibility of integrating evermore functionality onto a single chip forming functionally highly advanced embedded systems. These integration possibilities also imply that as the design complexity increases, so...... an efficient system level design methodology, a modelling framework for performance estimation and design space exploration at the system level is required. This thesis presents a novel component based modelling framework for system level modelling and performance estimation of embedded systems. The framework...... is performed by having the framework produce detailed quantitative information about the system model under investigation. The project is part of the national Danish research project, Danish Network of Embedded Systems (DaNES), which is funded by the Danish National Advanced Technology Foundation. The project...
Estimation of traffic accident costs: a prompted model.
Hejazi, Rokhshad; Shamsudin, Mad Nasir; Radam, Alias; Rahim, Khalid Abdul; Ibrahim, Zelina Zaitun; Yazdani, Saeed
2013-01-01
Traffic accidents are the reason for 25% of unnatural deaths in Iran. The main objective of this study is to find a simple model for the straightforward estimation of economic costs, especially in Islamic countries like Iran. The model can show the magnitude of traffic accident costs in monetary terms. Data were collected from different sources that included traffic police records, insurance companies and hospitals. The conceptual framework in our study was based on the method of Ayati, who used this method for the estimation of economic costs in Iran. We refined his method to use a minimum of variables. Our final model has only three available variables, which can be taken from insurance companies and police records. Running the model showed that the traffic accident costs were US$2.2 million in 2007 for our case study route.
PDS-Modelling and Regional Bayesian Estimation of Extreme Rainfalls
DEFF Research Database (Denmark)
Madsen, Henrik; Rosbjerg, Dan; Harremoës, Poul
1994-01-01
Since 1979 a country-wide system of raingauges has been operated in Denmark in order to obtain a better basis for design and analysis of urban drainage systems. As an alternative to the traditional non-parametric approach the Partial Duration Series method is employed in the modelling of extreme ... in Denmark cannot be justified. In order to obtain an estimation procedure at non-monitored sites and to improve at-site estimates a regional Bayesian approach is adopted. The empirical regional distributions of the parameters in the Partial Duration Series model are used as prior information... The application of the Bayesian approach is derived in the case of both exponential and generalized Pareto distributed exceedances. Finally, the aspect of including economic perspectives in the estimation of the design events is briefly discussed.
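The Partial Duration Series idea can be sketched for the exponential-exceedance case: keep all rainfall exceedances over a threshold, estimate the mean excess (the exponential MLE) and the annual exceedance rate, and combine them into a T-year design value x_T = u + beta * ln(lambda * T). All numbers below are synthetic, and the sketch omits the regional Bayesian prior:

```python
import math, random

random.seed(7)
u, lam, beta, years = 20.0, 3.0, 5.0, 30   # threshold, rate/yr, mean excess
n_exc = int(lam * years)                   # expected exceedance count, fixed here
exceedances = [random.expovariate(1.0 / beta) for _ in range(n_exc)]

beta_hat = sum(exceedances) / len(exceedances)   # exponential MLE: mean excess
lam_hat = len(exceedances) / years               # exceedances per year

def return_level(T):
    """T-year event under the exponential PDS model:
    x_T = u + beta * ln(lam * T)."""
    return u + beta_hat * math.log(lam_hat * T)

x100 = return_level(100.0)
```

In the regional Bayesian version, the empirical distributions of lambda and beta across monitored sites act as priors, shrinking the at-site estimates and enabling estimation at non-monitored sites.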
Estimation of Continuous Velocity Model Variations in Rock Deformation Tests.
Flynn, J. W.; Tomas, R.; Benson, P. M.
2017-12-01
Seismic interferometry, using either seismic wave coda or ambient noise, is a passive technique to image the sub-surface seismic velocity structure, which directly relates to the physical properties of the material through which the waves travel. The methodology estimates the Green's function for the volume between two seismic stations by cross-correlating long time series of ambient noise recorded at both stations; the Green's function is effectively the seismogram recorded at one station due to an impulsive or instantaneous energy source at the second station. In laboratory rock deformation experiments, changes in the velocity structure of the rock sample are generally measured through active surveys using an array of AE piezoelectric P-wave transducers, producing a time series of ultrasonic velocities in both axial and radial directions. The velocity information from the active surveys is used to provide a time-dependent velocity model for the inversion of AE event source locations. These velocity measurements are carried out at regular intervals throughout the laboratory test, interrupting passive AE monitoring for the length of each survey. There is therefore a trade-off between the frequency of the active velocity surveys, which optimises the velocity model, and the availability of a complete AE record during the rock deformation test. This study proposes to use noise interferometry to provide a continuous measurement of velocity variations in a rock sample during a laboratory rock deformation experiment, without the need to carry out active velocity surveys, while simultaneously passively monitoring AE activity. The continuous noise source in this test is an AE transducer fed with a white Gaussian noise signal from a function generator. Data from all AE transducers are continuously acquired and recorded during the deformation experiment. The cross-correlation of the continuous AE record is used to produce a continuous velocity
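The core operation described above, recovering a travel time by cross-correlating two noise records, can be sketched as follows. Two receivers record the same white-noise source, one with a propagation delay; the lag of the cross-correlation peak estimates the travel time, and velocity follows from the known spacing. The sampling rate, delay and sensor spacing are invented numbers, not values from the experiment.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 10_000            # sampling rate in Hz (assumed)
delay = 25             # true propagation delay in samples

noise = rng.standard_normal(5_000)
rec_a = noise                                           # receiver A: source noise
rec_b = np.concatenate([np.zeros(delay), noise[:-delay]])  # receiver B: delayed copy

# cross-correlate the two continuous records; the lag of the peak
# approximates the travel time between the stations
xcorr = np.correlate(rec_b, rec_a, mode="full")
lags = np.arange(-len(rec_a) + 1, len(rec_a))
estimated_delay = int(lags[np.argmax(xcorr)])

distance = 0.1                                 # sensor spacing in metres (assumed)
velocity = distance / (estimated_delay / fs)   # apparent velocity, m/s
```

Repeating this correlation over successive time windows during loading would yield the continuous velocity time series the abstract describes.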
Trofymow, J. A.; Metsaranta, J. M.; Black, T. A.; Jassal, R. S.; Filipescu, C.
2013-12-01
In coastal BC, 6,000-10,000 ha of public and significant areas of private forest land are annually fertilized with nitrogen, with or without thinning, to increase merchantable wood and reduce rotation age. Fertilization has also been viewed as a way to increase carbon (C) sequestration in forests and obtain C offsets. Such offset projects must demonstrate additionality with reference to a baseline and include monitoring to verify net C gains over the project period. Models in combination with field-plot measurements are currently the accepted methods for most C offset protocols. On eastern Vancouver Island, measurements of net ecosystem production (NEP), ecosystem respiration (Re) and gross primary productivity (GPP) using the eddy-covariance (EC) technique as well as component C fluxes and stocks have been made since 1998 in an intermediate-aged Douglas-fir dominated forest planted in 1949. In January 2007 an area around the EC flux tower was aerially fertilized with 200 kg urea-N ha-1. Ground plots in the fertilized area and an adjacent unfertilized control area were also monitored for soil (Rs) and heterotrophic (Rh) respiration, litterfall, and tree growth. To determine fertilization effects on whole tree growth, sample trees were felled in both areas for the 4-year (2003-06) pre- and the 4-year (2007-10) post-fertilization periods and were compared with EC NEP estimates and tree-ring based NEP estimates from the Carbon Budget Model - Canadian Forest Sector (CBM-CFS3) for the same periods. Empirical equations using climate and C fluxes from 1998-2006 were derived to estimate what the EC fluxes would have been in 2007-10 for the fertilized area had it been unfertilized. Mean EC NEP for 2007-10 was 561 g C m-2 y-1, a 64% increase above pre-fertilization NEP (341 g C m-2 y-1) and a 28% increase above the estimated unfertilized NEP (438 g C m-2 y-1). Most of the increase was attributed to increased tree C uptake (i.e., GPP), with little change in Re. In 2007 fertilization
Directory of Open Access Journals (Sweden)
Talerngsak Angkuraseranee
2010-05-01
Full Text Available The additive and dominance genetic variances of 5,801 Duroc reproductive and growth records were estimated using BLUPF90 PC-PACK. Estimates were obtained for number born alive (NBA), birth weight (BW), number weaned (NW), and weaning weight (WW). Data were analyzed using two mixed-model equations. The first model included fixed effects and random effects identifying inbreeding depression, additive gene effects and permanent environment effects. The second model was similar to the first, but also included the dominance genotypic effect. Heritability estimates of NBA, BW, NW and WW from the two models were 0.1558/0.1716, 0.1616/0.1737, 0.0372/0.0874 and 0.1584/0.1516, respectively. Proportions of dominance effect to total phenotypic variance from the dominance model were 0.1024, 0.1625, 0.0470 and 0.1536 for NBA, BW, NW and WW, respectively. Dominance effects were found to have a sizable influence on the litter size traits analyzed. Therefore, genetic evaluation with the dominance model (Model 2) is more appropriate than the animal model (Model 1).
Comparison of physically based catchment models for estimating Phosphorus losses
Nasr, Ahmed Elssidig; Bruen, Michael
2003-01-01
As part of a large EPA-funded research project, coordinated by TEAGASC, the Centre for Water Resources Research at UCD reviewed the available distributed physically based catchment models with a potential for use in estimating phosphorous losses for use in implementing the Water Framework Directive. Three models, representative of different levels of approach and complexity, were chosen and were implemented for a number of Irish catchments. This paper reports on (i) the lessons and experience...
Simplified evacuation model for estimating mitigation of early population exposures
International Nuclear Information System (INIS)
Strenge, D.L.
1980-12-01
The application of a simple evacuation model to the prediction of expected population exposures following acute releases of activity to the atmosphere is described. The evacuation model of Houston is coupled with a normalized Gaussian dispersion calculation to estimate the time integral of population exposure. The methodology described can be applied to specific sites to determine the expected reduction of population exposures due to evacuation
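The coupling of an evacuation model with a Gaussian dispersion calculation can be sketched in simplified form: a ground-level, centerline Gaussian plume concentration χ = Q / (π u σy σz), with the time integral of exposure truncated at the moment the population evacuates. All numerical values below are hypothetical, and the specific evacuation model of Houston used in the paper is not reproduced.

```python
import math

def plume_concentration(Q, u, sigma_y, sigma_z):
    """Ground-level centerline concentration from the standard Gaussian
    plume formula for a ground release: chi = Q / (pi * u * sy * sz)."""
    return Q / (math.pi * u * sigma_y * sigma_z)

def integrated_exposure(Q, u, sigma_y, sigma_z, release_s, evac_s):
    """Time-integrated exposure, truncated when the population evacuates."""
    chi = plume_concentration(Q, u, sigma_y, sigma_z)
    return chi * min(release_s, evac_s)

# hypothetical numbers: 1 g/s release, 2 m/s wind, 1 h release duration
full = integrated_exposure(1.0, 2.0, 80.0, 40.0, 3600.0, 3600.0)
evac = integrated_exposure(1.0, 2.0, 80.0, 40.0, 3600.0, 1800.0)
reduction = 1.0 - evac / full   # evacuating halfway through halves the exposure
```

In the actual methodology, the plume parameters vary with downwind distance and stability class, and the exposure is summed over the population distribution rather than a single receptor.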
Comparison of two intelligent models to estimate the instantaneous ...
Indian Academy of Sciences (India)
... they are 85.46 (W/m2), 3.08 (W/m2) and 5.41, respectively. As the results indicate, both models are able to estimate the amount of radiation well, while the neural network has higher accuracy. The output of the models for six other cities of Iran, with similar climate conditions, also proves the ability of the proposed models.
Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials
DEFF Research Database (Denmark)
Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2010-01-01
Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficult...
Time-of-flight estimation based on covariance models
van der Heijden, Ferdinand; Tuquerres, G.; Regtien, Paulus P.L.
We address the problem of estimating the time-of-flight (ToF) of a waveform that is disturbed heavily by additional reflections from nearby objects. These additional reflections cause interference patterns that are difficult to predict. The introduction of a model for the reflection in terms of a
Empirical Models for the Estimation of Global Solar Radiation in ...
African Journals Online (AJOL)
Empirical Models for the Estimation of Global Solar Radiation in Yola, Nigeria. ... and average daily wind speed (WS) for the interval of three years (2010 – 2012) measured using various instruments for Yola of recorded data collected from the Center for Atmospheric Research (CAR), Anyigba are presented and analyzed.
Inverse Gaussian model for small area estimation via Gibbs sampling
African Journals Online (AJOL)
ADMIN
(1994) extended the work by Fries and Bhattacharyya (1983) to include the maximum likelihood analysis of the two-factor inverse Gaussian model for the unbalanced and interaction case for the estimation of small area parameters in finite populations. The object of this article is to develop a Bayesian approach for small ...
An Approach to Quality Estimation in Model-Based Development
DEFF Research Database (Denmark)
Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter
2004-01-01
We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...
Constrained Optimization Approaches to Estimation of Structural Models
DEFF Research Database (Denmark)
Iskhakov, Fedor; Rust, John; Schjerning, Bertel
2015-01-01
We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). They used an inefficient version of the nested fixed point algorithm that relies on successive app...
Constrained Optimization Approaches to Estimation of Structural Models
DEFF Research Database (Denmark)
Iskhakov, Fedor; Jinhyuk, Lee; Rust, John
2016-01-01
We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). Their implementation of the nested fixed point algorithm used successive approximations to solve t...
Performances of estimators of linear model with auto-correlated ...
African Journals Online (AJOL)
Performances of estimators of a linear model with auto-correlated error terms when the independent variable is normal. ... On the other hand, the same slope coefficients β, under Generalized Least Squares (GLS), decreased with increased autocorrelation when the sample size T is small. Journal of the Nigerian Association ...
Method of moments estimation of GO-GARCH models
Boswijk, H.P.; van der Weide, R.
2009-01-01
We propose a new estimation method for the factor loading matrix in generalized orthogonal GARCH (GO-GARCH) models. The method is based on the eigenvectors of a suitably defined sample autocorrelation matrix of squares and cross-products of the process. The method can therefore be easily applied to
Bayesian nonparametric estimation of hazard rate in monotone Aalen model
Czech Academy of Sciences Publication Activity Database
Timková, Jana
2014-01-01
Roč. 50, č. 6 (2014), s. 849-868 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Aalen model * Bayesian estimation * MCMC Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/SI/timkova-0438210.pdf
Mathematical models for estimating radio channels utilization when ...
African Journals Online (AJOL)
A definition of the radio channel utilization indicator is given. Mathematical models for radio channel utilization assessment under real-time flow transfer in a wireless self-organized network are presented. Experimental estimation results for average radio channel utilization are given, with and without buffering of ...
Efficient Bayesian Estimation and Combination of GARCH-Type Models
D. David (David); L.F. Hoogerheide (Lennart)
2010-01-01
This paper proposes an up-to-date review of estimation strategies available for the Bayesian inference of GARCH-type models. The emphasis is put on a novel efficient procedure named AdMitIS. The methodology automatically constructs a mixture of Student-t distributions as an approximation
An improved COCOMO software cost estimation model | Duke ...
African Journals Online (AJOL)
In this paper, we discuss the methodologies adopted previously in software cost estimation using the COnstructive COst MOdels (COCOMOs). From our analysis, COCOMOs produce very high software development efforts, which eventually produce high software development costs. Consequently, we propose its extension, ...
Remote sensing estimates of impervious surfaces for pluvial flood modelling
DEFF Research Database (Denmark)
Kaspersen, Per Skougaard; Drews, Martin
This paper investigates the accuracy of medium resolution (MR) satellite imagery in estimating impervious surfaces for European cities at the detail required for pluvial flood modelling. Using remote sensing techniques enables precise and systematic quantification of the influence of the past 30...
Models for estimation of carbon sequestered by Cupressus ...
African Journals Online (AJOL)
This study compared models for estimating carbon sequestered aboveground in Cupressus lusitanica plantation stands at Wondo Genet College of Forestry and Natural Resources, Ethiopia. Relationships of carbon storage with tree component and stand age were also investigated. Thirty trees of three different ages (5, ...
Eigenspace perturbations for structural uncertainty estimation of turbulence closure models
Jofre, Lluis; Mishra, Aashwin; Iaccarino, Gianluca
2017-11-01
With the present state of computational resources, a purely numerical resolution of turbulent flows encountered in engineering applications is not viable. Consequently, investigations into turbulence rely on various degrees of modeling. Archetypal amongst these variable-resolution approaches would be RANS models with two-equation closures, and subgrid-scale models in LES. However, owing to the simplifications introduced during model formulation, the fidelity of all such models is limited, and therefore the explicit quantification of the predictive uncertainty is essential. In such a scenario, the ideal uncertainty estimation procedure must be agnostic to modeling resolution, methodology, and the nature or level of the model filter. The procedure should be able to give reliable prediction intervals for different Quantities of Interest, over varied flows and flow conditions, and at diametric levels of modeling resolution. In this talk, we present and substantiate the Eigenspace perturbation framework as an uncertainty estimation paradigm that meets these criteria. Commencing from a broad overview, we outline the details of this framework at different modeling resolutions. Then, using benchmark flows along with engineering problems, the efficacy of this procedure is established. This research was partially supported by NNSA under the Predictive Science Academic Alliance Program (PSAAP) II, and by DARPA under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo).
Estimating Predictive Variance for Statistical Gas Distribution Modelling
International Nuclear Information System (INIS)
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-01-01
Recent publications in statistical gas distribution modelling have proposed algorithms that model both the mean and the variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
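The likelihood-based evaluation argued for above can be sketched concretely: a model that predicts both a mean and a variance per location can be scored by the average Gaussian negative log-likelihood of held-out measurements, so a model with well-calibrated, location-specific variances scores better than one with a single global variance. The observations and predictions below are invented toy numbers.

```python
import math

def avg_nll(observations, means, variances):
    """Average Gaussian negative log-likelihood of held-out concentration
    measurements under a model predicting a mean and variance per location."""
    nll = 0.0
    for y, mu, var in zip(observations, means, variances):
        nll += 0.5 * (math.log(2 * math.pi * var) + (y - mu) ** 2 / var)
    return nll / len(observations)

obs = [0.2, 1.4, 0.9]
# model A predicts the same means but a fixed global variance;
# model B additionally adapts the predictive variance per location
nll_a = avg_nll(obs, [0.5, 1.0, 1.0], [1.0, 1.0, 1.0])
nll_b = avg_nll(obs, [0.5, 1.0, 1.0], [0.1, 0.3, 0.2])
```

Here model B attains the lower (better) score, illustrating how predictive-variance modelling makes likelihood-based comparison and meta-parameter learning possible.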
Bayesian Estimation of Small Effects in Exercise and Sports Science.
Mengersen, Kerrie L; Drovandi, Christopher C; Robert, Christian P; Pyne, David B; Gore, Christopher J
2016-01-01
The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and in a case study example provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL), and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest, were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a Placebo. The conclusions are consistent with those obtained using a 'magnitude-based inference' approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation
Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in the model input as well as from non-uniqueness in selecting different AI methods. Using a single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of relying on one AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of the AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) models to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN components, the NF model was nearly discarded by the parsimony principle. The TS-FL and ANN models showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when using a single AI model.
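The BIC-based weighting described above can be sketched in a few lines: each model's weight is proportional to exp(-ΔBIC/2), where ΔBIC is its BIC minus the minimum BIC, and the averaged estimate is the weight-weighted sum of the individual predictions. The BIC scores and conductivity estimates below are invented for illustration, not values from the Tasuj study.

```python
import math

def bma_weights(bic_values):
    """Posterior model weights from BIC: w_i proportional to exp(-dBIC_i / 2)."""
    best = min(bic_values)
    raw = [math.exp(-(b - best) / 2.0) for b in bic_values]
    total = sum(raw)
    return [r / total for r in raw]

def bma_estimate(predictions, weights):
    """Weighted average of the individual model estimates."""
    return sum(w * p for w, p in zip(weights, predictions))

# hypothetical BIC scores for, say, TS-FL, ANN and NF models:
# the third model's much larger BIC drives its weight toward zero,
# mirroring how the parsimony principle can nearly discard a model
weights = bma_weights([100.0, 100.5, 112.0])
k_hat = bma_estimate([3.2, 4.1, 3.6], weights)   # averaged conductivity estimate
```

When the two dominant models disagree (3.2 versus 4.1 here), the between-model variance term the abstract mentions becomes the main contributor to the total uncertainty.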
Estimating Terra MODIS Polarization Effect Using Ocean Data
Wald, Andrew E.; Brinkmann, Jake; Wu, Aisheng; Xiong, Jack
2016-01-01
Terra MODIS has been known since pre-launch to have polarization sensitivity, particularly in the shortest-wavelength bands 8 and 9. On-orbit reflectance trending of pseudo-invariant sites shows a variation in reflectance as a function of band and scan mirror angle of incidence consistent with time-dependent polarization effects from the rotating double-sided scan mirror. The MODIS Characterization Support Team [MCST] estimates the Mueller matrix trending from this variation as observed from a single desert site, but this effect is not included in the Collection 6 [C6] calibration. Here we extend MCST's current polarization sensitivity monitoring to two ocean sites distributed over latitude to help estimate the uncertainties in the derived Mueller matrix. The Mueller matrix elements derived for polarization-sensitive band 8 for a given site are found to be fairly insensitive to surface BRDF modeling. The site-to-site variation is a measure of the uncertainty in the Mueller estimation. Results for band 8 show that the polarization correction reduces mirror-side striping by up to 50% and reduces the instrument polarization effect on reflectance time series of an ocean target.
Sparse Estimation Using Bayesian Hierarchical Prior Modeling for Real and Complex Linear Models
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Badiu, Mihai Alin
2015-01-01
In sparse Bayesian learning (SBL), Gaussian scale mixtures (GSMs) have been used to model sparsity-inducing priors that realize a class of concave penalty functions for the regression task in real-valued signal models. Motivated by the relative scarcity of formal tools for SBL in complex-valued models, this paper proposes a GSM model - the Bessel K model - that induces concave penalty functions for the estimation of complex sparse signals. The properties of the Bessel K model are analyzed when it is applied to Type I and Type II estimation. This analysis reveals that, by tuning the parameters of the mixing pdf, different penalty functions are invoked depending on the estimation type used, the value of the noise variance, and whether real or complex signals are estimated. Using the Bessel K model, we derive a sparse estimator based on a modification of the expectation-maximization algorithm formulated ...
Bayes estimation of the general hazard rate model
International Nuclear Information System (INIS)
Sarhan, A.
1999-01-01
In reliability theory and life testing models, the lifetime distributions are often specified by choosing a relevant hazard rate function. Here a general hazard rate function h(t) = a + bt^(c-1), where c, a, b are constants greater than zero, is considered. The parameter c is assumed to be known. The Bayes estimators of (a,b) based on data from type II/item-censored testing without replacement are obtained. A large simulation study using the Monte Carlo method is done to compare the performance of the Bayes with the regression estimators of (a,b). The criterion for comparison is based on the Bayes risk associated with the respective estimator. Also, the influence of the number of failed items on the accuracy of the estimators (Bayes and regression) is investigated. Estimations for the parameters (a,b) of the linearly increasing hazard rate model h(t) = a + bt, where a, b are greater than zero, can be obtained as the special case, letting c = 2
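The hazard model above determines the survival function directly: the cumulative hazard is H(t) = a*t + (b/c)*t^c (the integral of h from 0 to t), and S(t) = exp(-H(t)). A minimal sketch, with illustrative parameter values rather than values from the study:

```python
import math

def hazard(t, a, b, c):
    """General hazard rate h(t) = a + b * t**(c - 1), with a, b, c > 0."""
    return a + b * t ** (c - 1)

def survival(t, a, b, c):
    """Survival function S(t) = exp(-H(t)), where the cumulative hazard
    is H(t) = a*t + (b/c)*t**c, the integral of h(t) from 0 to t."""
    return math.exp(-(a * t + (b / c) * t ** c))

# c = 2 recovers the linearly increasing hazard h(t) = a + b*t
a, b = 0.1, 0.05
h1 = hazard(1.0, a, b, 2.0)    # 0.1 + 0.05*1 = 0.15
s1 = survival(1.0, a, b, 2.0)  # exp(-(0.1 + 0.025))
```

The Bayes estimators in the paper place priors on (a, b) and use type II censored failure data; the functions here only fix the deterministic part of that model.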
Tyre pressure monitoring using a dynamical model-based estimator
Reina, Giulio; Gentile, Angelo; Messina, Arcangelo
2015-04-01
In the last few years, various control systems have been investigated in the automotive field with the aim of increasing the level of safety and stability, avoid roll-over, and customise handling characteristics. One critical issue connected with their integration is the lack of state and parameter information. As an example, vehicle handling depends to a large extent on tyre inflation pressure. When inflation pressure drops, handling and comfort performance generally deteriorate. In addition, it results in an increase in fuel consumption and in a decrease in lifetime. Therefore, it is important to keep tyres within the normal inflation pressure range. This paper introduces a model-based approach to estimate online tyre inflation pressure. First, basic vertical dynamic modelling of the vehicle is discussed. Then, a parameter estimation framework for dynamic analysis is presented. Several important vehicle parameters including tyre inflation pressure can be estimated using the estimated states. This method aims to work during normal driving using information from standard sensors only. On the one hand, the driver is informed about the inflation pressure and he is warned for sudden changes. On the other hand, accurate estimation of the vehicle states is available as possible input to onboard control systems.
The effects of global warming on fisheries: Simulation estimates
Directory of Open Access Journals (Sweden)
Carlos A. Medel
2016-04-01
Full Text Available This paper develops two fisheries models in order to estimate the effect of global warming (GW) on firm value. GW is defined as an increase in the average temperature of the Earth's surface as a result of CO2 emissions. It is assumed that (i) GW exists, and (ii) higher temperatures negatively affect biomass. The literature on biology and GW supporting these two crucial assumptions is reviewed. The main argument presented is that temperature increase has two effects on biomass, both of which have an impact on firm value. First, higher temperatures cause biomass to oscillate. To measure the effect of biomass oscillation on firm value, the model in [1] is modified to include water temperature as a variable. The results indicate that a 1 to 20% variation in biomass causes firm value to fall by 6 to 44%, respectively. Second, higher temperatures reduce biomass, and a modification of the model in [2] reveals that an increase in temperature anomaly between +1 and +8°C causes fishing firm value to decrease by 8 to 10%.
Tie, Junbo; Cao, Juliang; Chang, Lubing; Cai, Shaokun; Wu, Meiping; Lian, Junxiang
2018-03-16
Compensation of gravity disturbance can improve the precision of inertial navigation, but the effect of compensation will decrease due to the accelerometer bias, and estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation, and the situation in which the accelerometer bias should be estimated is established. The accelerometer bias is estimated from the gravity vector measurement, and a model of measurement noise in gravity vector measurement is built. Based on this model, accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated through EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimations of the accelerometer bias can be obtained with the proposed method.
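The least-squares separation of a constant bias from measurement noise, which is the key step described above, can be sketched in its simplest form: model each gravity-magnitude measurement as reference + bias + noise and solve the resulting linear system. The reference value, bias and noise level below are illustrative; the paper works with full gravity vectors and an EGM2008-based simulation scene.

```python
import numpy as np

rng = np.random.default_rng(2)
g_ref = 9.80665        # reference gravity magnitude, m/s^2
bias_true = 3e-4       # constant accelerometer bias, m/s^2 (assumed)

# repeated gravity measurements: reference + bias + zero-mean noise
z = g_ref + bias_true + rng.normal(0.0, 1e-4, size=2000)

# least-squares separation of the constant bias from the measurement error:
# model z_k - g_ref = 1 * bias + e_k and solve the overdetermined system
A = np.ones((len(z), 1))
sol, *_ = np.linalg.lstsq(A, z - g_ref, rcond=None)
bias_hat = float(sol[0])
```

With zero-mean noise the estimate converges on the true bias as the number of measurements grows; in the full problem, the design matrix also encodes attitude so that the three vector bias components can be separated.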
Estimation of the effective distribution coefficient from the solubility constant
International Nuclear Information System (INIS)
Wang, Yug-Yea; Yu, C.
1994-01-01
An updated version of RESRAD has been developed by Argonne National Laboratory for the US Department of Energy to derive site-specific soil guidelines for residual radioactive material. In this updated version, many new features have been added to the RESRAD code. One of the options is that a user can input a solubility constant to limit the leaching of contaminants. The leaching model used in the code requires the input of an empirical distribution coefficient, Kd, which represents the ratio of the solute concentration in soil to that in solution under equilibrium conditions. This paper describes the methodology developed to estimate an effective distribution coefficient, Kd, from the user-input solubility constant, and the use of the effective Kd for predicting the leaching of contaminants
Estimating and Testing Mediation Effects with Censored Data
Wang, Lijuan; Zhang, Zhiyong
2011-01-01
This study investigated influences of censored data on mediation analysis. Mediation effect estimates can be biased and inefficient with censoring on any one of the input, mediation, and output variables. A Bayesian Tobit approach was introduced to estimate and test mediation effects with censored data. Simulation results showed that the Bayesian…
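For context, the standard (uncensored) mediation estimate that censoring distorts is the product of two regression coefficients: a from regressing the mediator M on the input X, and b from regressing the output Y on M controlling for X, giving an indirect effect a*b. The simulated coefficients below are invented; the paper's Bayesian Tobit machinery for the censored case is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.standard_normal(n)
m = 0.5 * x + 0.1 * rng.standard_normal(n)             # mediator: a = 0.5
y = 0.4 * m + 0.2 * x + 0.1 * rng.standard_normal(n)   # outcome: b = 0.4, c' = 0.2

# two OLS regressions; the mediated (indirect) effect is the product a*b
a_fit, *_ = np.linalg.lstsq(x[:, None], m, rcond=None)
bc_fit, *_ = np.linalg.lstsq(np.column_stack([m, x]), y, rcond=None)
indirect = float(a_fit[0] * bc_fit[0])                 # close to 0.5 * 0.4 = 0.2
```

Censoring any of X, M or Y truncates these regressions and biases a*b, which is what motivates replacing the OLS steps with a Tobit-type likelihood in the Bayesian approach.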
Cheng, Guang
2014-02-01
We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model under consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical process tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.
How good is the Prevent model for estimating the health benefits of prevention?
DEFF Research Database (Denmark)
Brønnum-Hansen, Henrik
1999-01-01
Prevent is a public health model for estimating the effect on mortality of changes in exposure to risk factors. When the model is tested by simulating a development that has already taken place, the results may differ considerably from the actual situation. The purpose of this study is to test...
The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models
Schoeneberger, Jason A.
2016-01-01
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
Parameter estimation in nonlinear models for pesticide degradation
International Nuclear Information System (INIS)
Richter, O.; Pestemer, W.; Bunte, D.; Diekkrueger, B.
1991-01-01
A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems became feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete data available for parameter estimation. Parameter estimation is a synonym for statistical and numerical procedures to derive reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique, which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, the wide use of linear regression has led to an overemphasis of linear relationships. In nature, most relationships are nonlinear, and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of mapping the dynamics of a process. Therefore, realistic models involve the evolution in time (and space). This leads in a natural way to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)
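As a minimal instance of fitting a dynamic model to data, first-order degradation C(t) = C0·exp(-kt) can be estimated by linear regression on log concentrations. This is only the simplest special case; the full differential-equation models discussed above generally require numerical identification methods. The data below are synthetic.

```python
import math

# Fit a first-order degradation model C(t) = C0 * exp(-k*t) by
# least-squares regression of ln C on t (synthetic data; real
# pesticide kinetics are often more complex than first order).

def fit_first_order(times, concentrations):
    """Return (C0, k) from the fit ln C = ln C0 - k*t."""
    n = len(times)
    ys = [math.log(c) for c in concentrations]
    t_mean = sum(times) / n
    y_mean = sum(ys) / n
    sxy = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys))
    sxx = sum((t - t_mean) ** 2 for t in times)
    slope = sxy / sxx
    return math.exp(y_mean - slope * t_mean), -slope

# Synthetic noiseless decay with C0 = 100, k = 0.05 per day
ts = [0, 5, 10, 20, 40]
cs = [100.0 * math.exp(-0.05 * t) for t in ts]
c0_hat, k_hat = fit_first_order(ts, cs)
```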
Novel mathematical model to estimate ball impact force in soccer.
Iga, Takahito; Nunome, Hiroyuki; Sano, Shinya; Sato, Nahoko; Ikegami, Yasuo
2017-11-22
Quantifying ball impact force during soccer kicking is important from both performance and chronic-injury-prevention perspectives. We aimed to verify the appropriateness of previous models used to estimate ball impact force and to propose an improved model to better capture the time history of ball impact force. A soccer ball was fired directly onto a force platform (10 kHz) at five realistic kicking ball velocities, and ball behaviour was captured by a high-speed camera (5,000 Hz). The time history of ball impact force was estimated using three existing models and two new models. A new mathematical model that took into account a rapid change in ball surface area and heterogeneous ball deformation showed a distinctive advantage in estimating the peak forces and their occurrence times and in reproducing the time history of ball impact forces more precisely, thereby reinforcing the possible mechanics of 'footballer's ankle'. Ball contact time was also systematically shortened as ball velocity increased, which contrasts with the practical understanding of how faster ball velocities are produced; the aspect of ball contact time must therefore be interpreted carefully from a practical point of view.
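A cruder baseline than the models compared in the paper is the impulse-momentum estimate: the average impact force follows from the ball's change of momentum over the contact time. The mass, velocities, and contact time below are plausible but invented; the paper's improved model refines this by tracking surface area change and deformation.

```python
# Impulse-momentum estimate of average ball impact force for a rebound:
# F_avg = m * (v_in + v_out) / dt, with the rebound velocity reversing
# sign. A crude baseline, not the paper's model; numbers are assumed.

def average_impact_force(mass, v_in, v_out, contact_time):
    """Average force (N) from momentum change over the contact time."""
    return mass * (v_in + v_out) / contact_time

# Assumed: ball mass 0.43 kg, incoming 25 m/s, rebound 5 m/s, contact 9 ms
F = average_impact_force(0.43, 25.0, 5.0, 0.009)
```

Note that because contact time shortens as ball velocity increases, peak force grows faster than linearly with velocity, which is consistent with the injury-mechanics concern raised above.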
Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.
Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi
2017-12-01
We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.
A Bayesian Markov geostatistical model for estimation of hydrogeological properties
International Nuclear Information System (INIS)
Rosen, L.; Gustafson, G.
1996-01-01
A geostatistical methodology based on Markov-chain analysis and Bayesian statistics was developed for probability estimations of hydrogeological and geological properties in the siting process of a nuclear waste repository. The probability estimates have practical use in decision-making on issues such as siting, investigation programs, and construction design. The methodology is nonparametric which makes it possible to handle information that does not exhibit standard statistical distributions, as is often the case for classified information. Data do not need to meet the requirements on additivity and normality as with the geostatistical methods based on regionalized variable theory, e.g., kriging. The methodology also has a formal way for incorporating professional judgments through the use of Bayesian statistics, which allows for updating of prior estimates to posterior probabilities each time new information becomes available. A Bayesian Markov Geostatistical Model (BayMar) software was developed for implementation of the methodology in two and three dimensions. This paper gives (1) a theoretical description of the Bayesian Markov Geostatistical Model; (2) a short description of the BayMar software; and (3) an example of application of the model for estimating the suitability for repository establishment with respect to the three parameters of lithology, hydraulic conductivity, and rock quality designation index (RQD) at 400--500 meters below ground surface in an area around the Aespoe Hard Rock Laboratory in southeastern Sweden
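The Bayesian updating at the core of such a model can be illustrated with a discrete class-probability update, in which a prior over classes is revised each time new information becomes available. The lithology classes and likelihood values below are hypothetical, not taken from the Aespoe data.

```python
# Discrete Bayesian update of class probabilities: posterior is
# proportional to prior times likelihood, renormalized. Generic sketch
# of the updating step in a Bayesian geostatistical model; the classes
# and numbers are invented for illustration.

def bayes_update(prior, likelihood):
    """Return posterior probabilities over classes given new evidence."""
    unnorm = {c: prior[c] * likelihood[c] for c in prior}
    total = sum(unnorm.values())
    return {c: p / total for c, p in unnorm.items()}

prior = {"granite": 0.6, "diorite": 0.3, "greenstone": 0.1}
# Hypothetical likelihood of the new borehole observation under each class
likelihood = {"granite": 0.2, "diorite": 0.7, "greenstone": 0.1}
posterior = bayes_update(prior, likelihood)
```

Repeating this update as each new observation arrives is what turns professional-judgment priors into data-driven posterior probabilities.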
Negative binomial models for abundance estimation of multiple closed populations
Boyce, Mark S.; MacKenzie, Darry I.; Manly, Bryan F.J.; Haroldson, Mark A.; Moody, David W.
2001-01-01
Counts of uniquely identified individuals in a population offer opportunities to estimate abundance. However, for various reasons such counts may be burdened by heterogeneity in the probability of being detected. Theoretical arguments and empirical evidence demonstrate that the negative binomial distribution (NBD) is a useful characterization for counts from biological populations with heterogeneity. We propose a method that focuses on estimating multiple populations by simultaneously using a suite of models derived from the NBD. We used this approach to estimate the number of female grizzly bears (Ursus arctos) with cubs-of-the-year in the Yellowstone ecosystem, for each year, 1986-1998. Akaike's Information Criterion (AIC) indicated that a negative binomial model with a constant level of heterogeneity across all years was best for characterizing the sighting frequencies of female grizzly bears. A lack-of-fit test indicated the model adequately described the collected data. Bootstrap techniques were used to estimate standard errors and 95% confidence intervals. We provide a Monte Carlo technique, which confirms that the Yellowstone ecosystem grizzly bear population increased during the period 1986-1998.
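A quick way to see NBD-type heterogeneity in count data is a method-of-moments fit, which requires the sample variance to exceed the mean (overdispersion). The sighting counts below are invented; the paper itself fits a suite of NBD models with AIC model selection rather than this shortcut.

```python
# Method-of-moments fit of a negative binomial NB(r, p) to count data:
# p = mean/variance, r = mean^2/(variance - mean). Requires
# overdispersion (variance > mean); counts here are invented.

def nb_moments(counts):
    """Return (r, p) of a negative binomial matched to sample moments."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    if var <= mean:
        raise ValueError("no overdispersion: NB reduces to Poisson")
    p = mean / var
    r = mean * mean / (var - mean)
    return r, p

counts = [0, 1, 1, 2, 2, 3, 4, 7, 9, 11]  # hypothetical sighting frequencies
r, p = nb_moments(counts)
```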
On estimation of linear transformation models with nested case-control sampling.
Lu, Wenbin; Liu, Mengling
2012-01-01
Nested case-control (NCC) sampling is widely used in large epidemiological cohort studies for its cost effectiveness, but its data analysis primarily relies on the Cox proportional hazards model. In this paper, we consider a family of linear transformation models for analyzing NCC data and propose an inverse selection probability weighted estimating equation method for inference. Consistency and asymptotic normality of our estimators for regression coefficients are established. We show that the asymptotic variance has a closed analytic form and can be easily estimated. Numerical studies are conducted to support the theory and an application to the Wilms' Tumor Study is also given to illustrate the methodology.
A comparison of estimated and calculated effective porosity
Stephens, Daniel B.; Hsu, Kuo-Chin; Prieksat, Mark A.; Ankeny, Mark D.; Blandford, Neil; Roth, Tracy L.; Kelsey, James A.; Whitworth, Julia R.
Effective porosity in solute-transport analyses is usually estimated rather than calculated from tracer tests in the field or laboratory. Calculated values of effective porosity in the laboratory on three different textured samples were compared to estimates derived from particle-size distributions and soil-water characteristic curves. The agreement was poor and it seems that no clear relationships exist between effective porosity calculated from laboratory tracer tests and effective porosity estimated from particle-size distributions and soil-water characteristic curves. A field tracer test in a sand-and-gravel aquifer produced a calculated effective porosity of approximately 0.17. By comparison, estimates of effective porosity from textural data, moisture retention, and published values were approximately 50-90% greater than the field calibrated value. Thus, estimation of effective porosity for chemical transport is highly dependent on the chosen transport model and is best obtained by laboratory or field tracer tests.
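The "calculated" effective porosity from a field tracer test follows from the ratio of Darcy flux to the seepage velocity observed for the tracer. The flux and travel-time numbers below are invented, chosen only to illustrate the calculation with a value near the paper's 0.17.

```python
# Effective porosity from a tracer test: n_e = q / v_s, the ratio of
# Darcy flux to seepage (tracer) velocity. Numbers are invented to
# illustrate the calculation, not taken from the paper's field site.

def effective_porosity(darcy_flux, travel_distance, travel_time):
    """n_e = q / v_s, with seepage velocity v_s = distance / time."""
    seepage_velocity = travel_distance / travel_time
    return darcy_flux / seepage_velocity

# Assumed: q = 0.85 m/d, tracer travels 10 m in 2 days (v_s = 5 m/d)
n_e = effective_porosity(0.85, 10.0, 2.0)
```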
[Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].
Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L
2017-03-10
To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence relative to caregivers' recognition of risk signs of diarrhea in their infants, using a Bayesian log-binomial regression model in the OpenBUGS software. The results showed that caregivers' recognition of an infant's risk signs of diarrhea was significantly associated with a 13% increase in medical care-seeking. Meanwhile, we compared the differences in the point and interval estimation of the PR, and the convergence, of three models (model 1: not adjusting for covariates; model 2: adjusting for the duration of caregivers' education; model 3: adjusting for the distance between village and township and child age in months, based on model 2) between the Bayesian log-binomial regression model and the conventional log-binomial regression model. All three Bayesian log-binomial regression models converged, with estimated PRs of 1.130 (95%CI: 1.005-1.265), 1.128 (95%CI: 1.001-1.264) and 1.132 (95%CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged, with PRs of 1.130 (95%CI: 1.055-1.206) and 1.126 (95%CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95%CI: 1.051-1.200). The point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed slightly from those of the conventional log-binomial regression model, but showed good consistency. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with fewer convergence problems and has advantages in application over the conventional log-binomial regression model.
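In its simplest unadjusted form, the prevalence ratio that a log-binomial model targets is just the ratio of two prevalences from a 2x2 table. The cell counts below are invented to illustrate a PR of about 1.13; the paper's adjusted estimates come from regression, not from such a table.

```python
# Unadjusted prevalence ratio from a 2x2 table: the prevalence of the
# outcome among the exposed divided by that among the unexposed.
# Cell counts are invented for illustration.

def prevalence_ratio(exposed_cases, exposed_total,
                     unexposed_cases, unexposed_total):
    """PR = (exposed prevalence) / (unexposed prevalence)."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# e.g. care-seeking among caregivers who recognized risk signs vs. not
pr = prevalence_ratio(113, 200, 100, 200)
```

A PR of 1.13 reads directly as a 13% higher care-seeking prevalence among the exposed group, which is the interpretability advantage of the log-binomial model over odds-ratio-based logistic regression for common outcomes.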
Directory of Open Access Journals (Sweden)
Marion Hoehn
Full Text Available The effective population size (Ne) is proportional to the loss of genetic diversity and the rate of inbreeding, and its accurate estimation is crucial for the monitoring of small populations. Here, we integrate temporal studies of the gecko Oedura reticulata to compare genetic and demographic estimators of Ne. Because geckos have overlapping generations, our goal was to demographically estimate NbI, the inbreeding effective number of breeders, and to calculate the NbI/Na ratio (Na = number of adults) for four populations. Demographically estimated NbI ranged from 1 to 65 individuals. The mean reduction in the effective number of breeders relative to census size (NbI/Na) was 0.1 to 1.1. We identified the variance in reproductive success as the most important variable contributing to reduction of this ratio. We used four methods to obtain the genetically based inbreeding effective number of breeders, NbI(gen), and the variance effective population size, NeV(gen), from the genotype data. Two of these methods - a temporal moment-based (MBT) and a likelihood-based approach (TM3) - require at least two samples in time, while the other two were single-sample estimators: the linkage disequilibrium method with bias correction (LDNe) and the program ONeSAMP. The genetically based estimates were fairly similar across methods and also similar to the demographic estimates, excluding those estimates in which the upper confidence interval boundaries were uninformative. For example, LDNe and ONeSAMP estimates ranged from 14-55 and 24-48 individuals, respectively. However, the temporal methods suffered from large variation in confidence intervals and concerns about the prior information. We conclude that the single-sample estimators are an acceptable short-cut to estimate NbI for species such as geckos and will be of great importance for the monitoring of species in fragmented landscapes.
Estimation of Biological Effects of Tritium.
Umata, Toshiyuki
2017-01-01
Nuclear fusion technology is expected to create new energy in the future. However, nuclear fusion requires a large amount of tritium as fuel, leading to concern about the exposure of radiation workers to tritium beta radiation. Furthermore, countermeasures for tritium-polluted water produced in the decommissioning of the reactors at the Fukushima Daiichi Nuclear Power Station may potentially cause health problems in radiation workers. Although internal exposure to tritium can be assumed to occur at a low dose/low dose rate, the biological effect of tritium exposure is not negligible, because tritiated water (HTO) taken into the body via the mouth, inhalation, or skin leads to homogeneous distribution throughout the whole body. Furthermore, organically bound tritium (OBT) stays in the body as part of the molecules that comprise living organisms, resulting in long-term exposure, so the chemical form of tritium should be considered. To evaluate the biological effect of tritium, it should be compared with that of other radiation types. Many studies have examined the relative biological effectiveness (RBE) of tritium. Hence, we report the RBE obtained for radiation carcinogenesis, classified as a stochastic effect, which serves as a reference for cancer risk. We also outline the tritium experiment and the principle of a recently developed animal experimental system using transgenic mice to detect the biological influence of radiation exposure at a low dose/low dose rate.
Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models
Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea
2014-05-01
Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represents a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.
Estimation of soil moisture and its effect on soil thermal ...
Indian Academy of Sciences (India)
Soil moisture is an important parameter of the earth's climate system. Regression model for estimation of soil moisture at various depths has been developed using the amount of moisture near the surface layer. The estimated values of soil moisture are tested with the measured moisture values and it is found that the ...
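A regression of the kind described can be sketched as ordinary least squares of deep-layer moisture on near-surface moisture. The data below are synthetic; the actual coefficients are site- and depth-specific.

```python
# Sketch of a regression model predicting soil moisture at depth from
# near-surface moisture via ordinary least squares. Synthetic data;
# real coefficients would be fitted per site and depth.

def fit_line(xs, ys):
    """Return (slope, intercept) of the ordinary least-squares line."""
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    sxx = sum((x - x_mean) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, y_mean - slope * x_mean

surface = [0.10, 0.15, 0.20, 0.25, 0.30]      # volumetric moisture near surface
deep = [0.8 * s + 0.05 for s in surface]      # hypothetical moisture at depth
slope, intercept = fit_line(surface, deep)
```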
Remaining lifetime modeling using State-of-Health estimation
Beganovic, Nejra; Söffker, Dirk
2017-08-01
Technical systems and their components undergo gradual degradation over time. Continuous degradation is reflected in decreased system reliability and unavoidably leads to system failure. Therefore, continuous evaluation of State-of-Health (SoH) is essential to provide at least the predefined lifetime of the system specified by the manufacturer or, even better, to extend it. However, a precondition for lifetime extension is accurate estimation of SoH as well as estimation and prediction of the Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. In this contribution, modeling and the selection of suitable lifetime models from a database based on current SoH conditions are discussed. The main contribution of this paper is the development of new modeling strategies capable of describing complex relations between measurable system variables, related system degradation, and RUL. Two approaches, with accompanying advantages and disadvantages, are introduced and compared. Both approaches are capable of modeling stochastic aging processes of a system by simultaneously adapting RUL models to the current SoH. The first approach requires a priori knowledge about aging processes in the system and accurate estimation of SoH. The estimation of SoH here is conditioned on tracking the actual accumulated damage in the system, so that particular model parameters are defined according to a priori known assumptions about the system's aging. Prediction accuracy in this case is highly dependent on accurate estimation of SoH but involves a high number of degrees of freedom. The second approach does not require a priori knowledge about the system's aging, as particular model parameters are defined by a multi-objective optimization procedure. The prediction accuracy of this model does not depend strongly on the estimated SoH. This model
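The bookkeeping behind RUL prediction can be reduced to a minimal linear-degradation sketch: remaining lifetime is the gap between the current SoH and the failure threshold, divided by the degradation rate. The paper's models are stochastic and adaptive; the threshold and rate below are invented.

```python
# Minimal RUL sketch under a linear degradation model: the remaining
# useful lifetime is the SoH margin above the failure threshold divided
# by the degradation rate. Threshold and rate values are assumed.

def remaining_useful_life(soh, failure_threshold, degradation_rate):
    """RUL in the time units of the (per-unit-time) degradation rate."""
    if soh <= failure_threshold:
        return 0.0
    return (soh - failure_threshold) / degradation_rate

# Assumed: SoH 0.9, failure at SoH 0.2, losing 0.05 SoH per 1000 hours
rul_kilohours = remaining_useful_life(0.9, 0.2, 0.05)
```

Richer models replace the constant rate with one re-estimated from measured system variables, which is exactly where accurate SoH estimation enters the prediction.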
Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver
Kang, Ling; Zhou, Liwei
2018-02-01
The Muskingum model is an effective flood-routing technique in hydrology and water resources engineering. With the development of optimization technology, more and more variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. In two real and frequently used case studies, the NVPNLMM obtained better values of the evaluation criteria used to compare the accuracy of flood routing across models, and its optimal estimated outflows were closer to the observed outflows than those of the other models.
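For reference, the classical linear Muskingum model that variable-parameter nonlinear versions extend routes a hydrograph with coefficients derived from the storage relation S = K[xI + (1-x)O], where the three routing coefficients sum to one. The hydrograph and parameter values below are invented.

```python
# Classical linear Muskingum flood routing. From S = K[x*I + (1-x)*O],
# the outflow recursion is O2 = C0*I2 + C1*I1 + C2*O1 with
# C0 + C1 + C2 = 1. Hydrograph and K, x values are invented.

def muskingum_route(inflow, k, x, dt, o0=None):
    """Route an inflow hydrograph; K and dt in hours, x dimensionless."""
    denom = 2 * k * (1 - x) + dt
    c0 = (dt - 2 * k * x) / denom
    c1 = (dt + 2 * k * x) / denom
    c2 = (2 * k * (1 - x) - dt) / denom
    out = [inflow[0] if o0 is None else o0]
    for i in range(1, len(inflow)):
        out.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * out[-1])
    return out, (c0, c1, c2)

inflow = [10, 30, 60, 80, 60, 40, 20, 10]  # hypothetical hydrograph (m^3/s)
outflow, (c0, c1, c2) = muskingum_route(inflow, k=12.0, x=0.2, dt=6.0)
```

Variable-parameter nonlinear variants let K and x change with discharge, which is what the NVPNLMM optimizes; the linear recursion above is the baseline they generalize.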
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter
Model Year 2014 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2013-12-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.
Model Year 2010 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2009-10-14
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.
Model Year 2016 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2015-11-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.
Model Year 2015 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2014-12-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.
Model Year 2005 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2004-11-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.
Model Year 2006 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2005-11-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.
Modeling extreme events: Sample fraction adaptive choice in parameter estimation
Neves, Manuela; Gomes, Ivette; Figueiredo, Fernanda; Gomes, Dora Prata
2012-09-01
When modeling extreme events there are a few primordial parameters, among which we refer the extreme value index and the extremal index. The extreme value index measures the right tail-weight of the underlying distribution and the extremal index characterizes the degree of local dependence in the extremes of a stationary sequence. Most of the semi-parametric estimators of these parameters show the same type of behaviour: nice asymptotic properties, but a high variance for small values of k, the number of upper order statistics to be used in the estimation, and a high bias for large values of k. This shows a real need for the choice of k. Choosing some well-known estimators of those parameters we revisit the application of a heuristic algorithm for the adaptive choice of k. The procedure is applied to some simulated samples as well as to some real data sets.
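The dependence on k can be seen directly with the Hill estimator of the extreme value index, evaluated at several values of k on a deterministic Pareto-like sample whose tail index is 0.5 by construction. This illustrates the estimator being tuned, not the paper's adaptive algorithm.

```python
import math

# Hill estimator of the extreme value index: the mean log-excess of the
# k largest observations over the (k+1)-th largest. Evaluated at several
# k on exact Pareto quantiles (tail index 0.5), so every k is nearly
# unbiased here; on real heavy-tailed data, large k introduces bias and
# small k inflates variance, motivating the adaptive choice of k.

def hill_estimator(sample, k):
    """Mean log-excess over the (k+1)-th largest order statistic."""
    xs = sorted(sample, reverse=True)
    threshold = xs[k]
    return sum(math.log(xs[i] / threshold) for i in range(k)) / k

# Exact Pareto quantiles with extreme value index gamma = 0.5
n = 500
sample = [(1 - (i + 0.5) / n) ** -0.5 for i in range(n)]
estimates = {k: hill_estimator(sample, k) for k in (10, 50, 100)}
```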
Model Year 2009 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2008-10-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.
Model Year 2008 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2007-10-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.
Model Year 2007 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2007-10-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.
Modeling of Closed-Die Forging for Estimating Forging Load
Sheth, Debashish; Das, Santanu; Chatterjee, Avik; Bhattacharya, Anirban
2017-02-01
Closed-die forging is a common metal forming process used for making a range of products. Sufficient load must be exerted on the billet to deform the material. This forging load depends on the work material's properties and on the frictional characteristics between the work material and the punch and die. Several researchers have worked on estimating the forging load for specific products under different process variables, using experimental data on deformation resistance and friction to calculate the load. In this work, a theoretical estimate of the forging load is made and compared with the value obtained from an LS-DYNA model facilitating the finite element analysis. The theoretical work uses the slab method to assess the forging load for an axisymmetric upsetting job made of lead. The theoretical forging load estimate is slightly higher than the experimental one; the simulation, however, matches the experimental forging load quite closely, indicating the possibility of wide use of this simulation software.
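The slab-method load estimate for axisymmetric upsetting mentioned above can be sketched as follows; the average die pressure formula p_avg = sigma_f(1 + 2*mu*R/(3*h)) is the standard textbook approximation, and the flow stress, geometry, and friction coefficient below are assumed values, not those of the paper's lead specimen:

```python
import math

def upsetting_load(flow_stress, radius, height, mu):
    """Slab-method estimate of the axisymmetric upsetting load.

    Average die pressure is approximated as
        p_avg = sigma_f * (1 + 2*mu*R / (3*h))
    and the load is p_avg times the contact area pi*R^2.
    """
    p_avg = flow_stress * (1.0 + 2.0 * mu * radius / (3.0 * height))
    return p_avg * math.pi * radius ** 2

# hypothetical lead billet: flow stress 20 MPa, R = 25 mm, h = 30 mm, mu = 0.25
load_newtons = upsetting_load(20e6, 0.025, 0.030, 0.25)
```

Friction raises the pressure above the flow stress, which is why slab-method estimates tend to sit above frictionless lower bounds.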
Estimating police effectiveness with individual victimisation data
Vollaard, B.; Koning, P.
2005-01-01
In this paper, we present evidence on the effect of greater numbers of police personnel on criminal victimisation and the experience of nuisance. We make use of individual data from a Dutch victimisation survey that is unique in its size, duration and scope. By using individual victimisation data we provide
A screening-level modeling approach to estimate nitrogen ...
This paper presents a screening-level modeling approach that can be used to rapidly estimate nutrient loading and assess the risk that surface waters exceed numerical nutrient standards, leading to potential classification as impaired for designated use. It can also be used to explore best management practice (BMP) implementation to reduce loading. The modeling framework uses a hybrid statistical and process-based approach to estimate sources of pollutants and their transport and decay in the terrestrial and aquatic parts of watersheds. The framework is developed in the ArcGIS environment and is based on the total maximum daily load (TMDL) balance model. Nitrogen (N) is currently addressed in the framework, referred to as WQM-TMDL-N. Loading for each catchment includes non-point sources (NPS) and point sources (PS). NPS loading is estimated using export coefficient or event mean concentration methods, depending on the temporal scale, i.e., annual or daily. Loading from atmospheric deposition is also included. The probability that a nutrient load exceeds a target load is evaluated using probabilistic risk assessment, by including the uncertainty associated with the export coefficients of various land uses. The computed risk data can be visualized as spatial maps which show the load exceedance probability for all stream segments. In an application of this modeling approach to the Tippecanoe River watershed in Indiana, USA, total nitrogen (TN) loading and risk of standard exce
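The exceedance-probability idea can be sketched with a small Monte Carlo simulation; the land-use areas, export coefficient means and standard deviations, lognormal uncertainty model, and target load below are all illustrative assumptions, not values or choices from WQM-TMDL-N:

```python
import numpy as np

def load_exceedance_probability(areas_ha, ec_mean, ec_sd, target_load,
                                n_sims=10000, seed=0):
    """Monte Carlo estimate of the probability that total N load exceeds a target.

    Export coefficients (kg/ha/yr) for each land use are treated as lognormal
    random variables to represent their uncertainty.
    """
    rng = np.random.default_rng(seed)
    areas = np.asarray(areas_ha, dtype=float)
    m = np.asarray(ec_mean, dtype=float)
    s = np.asarray(ec_sd, dtype=float)
    # lognormal parameters matching the given mean and standard deviation
    mu = np.log(m ** 2 / np.sqrt(s ** 2 + m ** 2))
    sigma = np.sqrt(np.log(1.0 + (s / m) ** 2))
    ec = rng.lognormal(mu, sigma, size=(n_sims, len(areas)))
    loads = ec @ areas                     # total load (kg/yr) per realization
    return float(np.mean(loads > target_load))

# hypothetical catchment: cropland, pasture, urban, with assumed coefficients
p = load_exceedance_probability([500, 300, 100], [12.0, 5.0, 7.0],
                                [4.0, 2.0, 2.5], target_load=9000)
```

Mapping `p` for every stream segment gives the kind of exceedance-probability map the abstract describes.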
Directory of Open Access Journals (Sweden)
Menon, Carlo
2011-09-01
Full Text Available Abstract Background Several regression models have been proposed for estimating isometric joint torque from surface electromyography (SEMG) signals. Common issues with torque estimation models are degradation of model accuracy with the passage of time, electrode displacement, and alteration of limb posture. This work compares the performance of the most commonly used regression models under these circumstances, in order to help researchers identify the most appropriate model for a specific biomedical application. Methods Eleven healthy volunteers participated in this study. A custom-built rig, equipped with a torque sensor, was used to measure isometric torque as each volunteer flexed and extended his wrist. SEMG signals from eight forearm muscles, in addition to wrist joint torque data, were gathered during the experiment. Additional data were gathered one hour and twenty-four hours after the completion of the first data-gathering session, in order to evaluate the effects of the passage of time and of electrode displacement on model accuracy. Acquired SEMG signals were filtered, rectified, normalized and then fed to the models for training. Results Mean adjusted coefficient of determination (Ra²) values decreased by 20% to 35% for the different models after one hour, while altering arm posture decreased mean Ra² values by 64% to 74%. Conclusions Model estimation accuracy drops significantly with the passage of time, electrode displacement, and alteration of limb posture; model retraining is therefore crucial for preserving estimation accuracy. Data resampling can significantly reduce model training time without losing estimation accuracy. Among the models compared, the ordinary least squares linear regression model (OLS) combined high isometric torque estimation accuracy with very short training times.
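As an illustration of the OLS mapping the study favours, a minimal sketch in Python on synthetic stand-in data (the channel count matches the eight forearm muscles, but the signals, weights, and noise level are simulated assumptions, not the study's recordings):

```python
import numpy as np

def train_ols(features, torque):
    """Ordinary least squares fit from SEMG features to joint torque,
    with an intercept column prepended to the design matrix."""
    X = np.column_stack([np.ones(len(features)), features])
    beta, *_ = np.linalg.lstsq(X, torque, rcond=None)
    return beta

def predict(beta, features):
    X = np.column_stack([np.ones(len(features)), features])
    return X @ beta

# synthetic stand-in for rectified, normalized SEMG envelopes (8 channels)
rng = np.random.default_rng(1)
semg = rng.uniform(0.0, 1.0, size=(300, 8))
true_w = rng.normal(0.0, 1.0, size=8)
torque = semg @ true_w + rng.normal(0.0, 0.05, size=300)  # noisy linear map

beta = train_ols(semg, torque)
estimated_torque = predict(beta, semg)
```

The closed-form least-squares fit is what keeps OLS training times short relative to iteratively trained models.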