WorldWideScience

Sample records for comparing model goodness-of-fit

  1. The Meaning of Goodness-of-Fit Tests: Commentary on "Goodness-of-Fit Assessment of Item Response Theory Models"

    Science.gov (United States)

    Thissen, David

    2013-01-01

    In this commentary, David Thissen states that "Goodness-of-fit assessment for IRT models is maturing; it has come a long way from zero." Thissen then references prior works on "goodness of fit" in the index of Lord and Novick's (1968) classic text; Yen (1984); Drasgow, Levine, Tsien, Williams, and Mead (1995); Chen and…

  2. Goodness-of-Fit Assessment of Item Response Theory Models

    Science.gov (United States)

    Maydeu-Olivares, Alberto

    2013-01-01

    The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…

  3. Flexible competing risks regression modeling and goodness-of-fit

    DEFF Research Database (Denmark)

    Scheike, Thomas; Zhang, Mei-Jie

    2008-01-01

    In this paper we consider different approaches for estimation and assessment of covariate effects for the cumulative incidence curve in the competing risks model. The classic approach is to model all cause-specific hazards and then estimate the cumulative incidence curve based on these cause… models that is easy to fit and contains the Fine-Gray model as a special case. One advantage of this approach is that our regression modeling allows for non-proportional hazards. This leads to a new simple goodness-of-fit procedure for the proportional subdistribution hazards assumption that is very easy… of the flexible regression models to analyze competing risks data when non-proportionality is present in the data.

  4. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda; Hart, Jeffrey D.

    2009-01-01

    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed.

  5. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda

    2009-05-12

    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. © 2009 Sociedad de Estadística e Investigación Operativa.

  6. GOODNESS-OF-FIT TEST FOR THE ACCELERATED FAILURE TIME MODEL BASED ON MARTINGALE RESIDUALS

    Czech Academy of Sciences Publication Activity Database

    Novák, Petr

    2013-01-01

    Vol. 49, No. 1 (2013), pp. 40-59 ISSN 0023-5954 R&D Projects: GA MŠk(CZ) 1M06047 Grant - others: GA MŠk(CZ) SVV 261315/2011 Keywords: accelerated failure time model * survival analysis * goodness-of-fit Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2013/SI/novak-goodness-of-fit test for the aft model based on martingale residuals.pdf

  7. Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models

    Science.gov (United States)

    Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning

    2012-01-01

    The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…

  8. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    Science.gov (United States)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

    The goodness-of-fit indicator, i.e. the efficiency criterion, is very important for model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and a zero-mean generalized error distribution (GED) to fit the distribution of model errors after the BC transformation. The BC-GED model unifies all recent distance-based goodness-of-fit indicators, and reveals that the mean square error (MSE) and the mean absolute error (MAE), two widely used goodness-of-fit indicators, imply the statistical assumptions that the model errors follow the Gaussian distribution and the zero-mean Laplace distribution, respectively. Empirical knowledge about goodness-of-fit indicators can also be easily interpreted by the BC-GED model; e.g., the sensitivity to high flows of goodness-of-fit indicators with a large power of model errors results from the low probability of large model errors under the distribution assumed by these indicators. In order to assess the effect of the BC-GED model parameters (the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and model simulation results among the six indicators demonstrates that these indicators can be clearly separated into two classes by the GED kurtosis β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated by the class β > 1 of distance-based goodness-of-fit indicators captures high flows very well but mimics baseflow poorly, whereas calibrated by the class β ≤ 1 it mimics baseflow very well, because, first, the larger the value of β, the greater the emphasis put on…
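
    A minimal numerical sketch of the BC-GED objective, assuming scipy's gennorm as the generalized error distribution; the function names and the Jacobian handling are our own illustration, not the authors' code:

      import numpy as np
      from scipy import stats

      def box_cox(y, lam):
          # Box-Cox transformation; lam is the BC parameter lambda above
          return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

      def bc_ged_loglik(obs, sim, lam, beta, scale):
          # Log-likelihood of BC-transformed model errors under a zero-mean GED.
          # beta = 2 recovers the Gaussian (MSE-like) case, beta = 1 the
          # Laplace (MAE-like) case, matching the correspondence stated above.
          err = box_cox(obs, lam) - box_cox(sim, lam)
          ll = stats.gennorm.logpdf(err, beta, loc=0.0, scale=scale).sum()
          ll += (lam - 1.0) * np.log(obs).sum()  # Box-Cox Jacobian of obs
          return ll

    Maximizing this likelihood jointly over λ, β, the scale, and the hydrologic parameters is one way to realize the calibration experiment described; β is the kurtosis coefficient that separates the two indicator classes.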

  9. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated-i.e. all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  10. Permutation tests for goodness-of-fit testing of mathematical models to experimental data.

    Science.gov (United States)

    Fişek, M Hamit; Barlas, Zeynep

    2013-03-01

    This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models to data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition, carried out in the "standardized experimental situation" associated with the program, to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test, which has been the primary statistical tool for assessing goodness of fit in the expectation states research program but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test proposed as an alternative to the chi-square test, being simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives and Kendall's tau(b), a rank-order correlation coefficient. The fifth procedure is a new rank-order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, and which we believe may be more useful than other rank-order tests for assessing the goodness-of-fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study, from an experimental paradigm different from the expectation states paradigm (the "network exchange" paradigm), and describe how our procedures may be applied to this data set. Copyright © 2012 Elsevier Inc. All rights reserved.
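
    The abstract does not give the exact form of the "Average Absolute Deviation" statistic, so the following is a generic resampling sketch under an assumed form (mean absolute difference between observed and model-predicted category proportions), not the authors' procedure:

      import numpy as np

      rng = np.random.default_rng(0)

      def aad(counts, probs):
          # average absolute deviation of observed proportions from the model
          return np.mean(np.abs(counts / counts.sum() - probs))

      def resampling_pvalue(observed_counts, model_probs, n_sim=10_000):
          t_obs = aad(observed_counts, model_probs)
          sims = rng.multinomial(observed_counts.sum(), model_probs, size=n_sim)
          t_sim = np.array([aad(s, model_probs) for s in sims])
          # one-sided Monte Carlo p-value with the usual +1 correction
          return (1 + np.sum(t_sim >= t_obs)) / (n_sim + 1)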

  11. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh; Genton, Marc G.

    2014-01-01

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test.

  12. ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction

    International Nuclear Information System (INIS)

    Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.

    2015-01-01

    An appropriate maintenance policy is essential to reduce expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is the ability to predict the reliability of the systems under study, based on a well-fitted model. In this paper, the Arithmetic Reduction of Age and Arithmetic Reduction of Intensity classes of models are explored. Likelihood functions for such models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed, considering models with different memories. The parameters, namely the shape and scale of the Power Law Process and the efficiency of repair, were estimated for the best-fitted model. Estimation of the model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results are valuable information for the mining company and can be used to support decision making regarding preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks

  13. A goodness-of-fit test for occupancy models with correlated within-season revisits

    Science.gov (United States)

    Wright, Wilson; Irvine, Kathryn M.; Rodhouse, Thomas J.

    2016-01-01

    Occupancy modeling is important for exploring species distribution patterns and for conservation monitoring. Within this framework, explicit attention is given to species detection probabilities estimated from replicate surveys of sample units. A central assumption is that replicate surveys are independent Bernoulli trials, but this assumption becomes untenable when ecologists serially deploy remote cameras and acoustic recording devices over days and weeks to survey rare and elusive animals. Proposed solutions involve modifying the detection-level component of the model (e.g., a first-order Markov covariate). Evaluating whether a model sufficiently accounts for correlation is imperative, but clear guidance for practitioners is lacking. Currently, an omnibus goodness-of-fit test using a chi-square discrepancy measure on unique detection histories is available for occupancy models (MacKenzie and Bailey, Journal of Agricultural, Biological, and Environmental Statistics, 9, 2004, 300; hereafter, the MacKenzie-Bailey test). We propose a join count summary measure adapted from spatial statistics to directly assess correlation after fitting a model. We motivate our work with a dataset of multinight bat call recordings from a pilot study for the North American Bat Monitoring Program. We found in simulations that our join count test was more reliable than the MacKenzie-Bailey test for detecting inadequacy of a model that assumed independence, particularly when serial correlation was low to moderate. A model that included a Markov-structured detection-level covariate produced unbiased occupancy estimates except in the presence of strong serial correlation and a revisit design consisting only of temporal replicates. When applied to two common bat species, our approach illustrates that sophisticated models do not guarantee adequate fit to real data, underscoring the importance of model assessment. Our join count test provides a widely applicable goodness-of-fit test and…

  14. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    Science.gov (United States)

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for the logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but this can be more difficult under noncanonical links. Although GOF tests for logistic GLMs with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG) so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J²) statistics can be applied directly. In a simulation study, TG, HL, and J² were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J² were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC; in this case, TG had more power than HL or J². © 2015 John Wiley & Sons Ltd/London School of Economics.
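
    For reference, a minimal sketch of the Hosmer-Lemeshow statistic that the article shows can be applied directly under noncanonical links; the decile grouping and the chi-square reference with g - 2 degrees of freedom follow the usual convention and may differ in detail from the common grouping method used in the simulation study:

      import numpy as np
      from scipy import stats

      def hosmer_lemeshow(y, p_hat, groups=10):
          # sort by fitted probability and split into (roughly) equal groups
          order = np.argsort(p_hat)
          y, p_hat = y[order], p_hat[order]
          hl = 0.0
          for idx in np.array_split(np.arange(len(y)), groups):
              observed = y[idx].sum()
              expected = p_hat[idx].sum()
              n_g = len(idx)
              hl += (observed - expected) ** 2 / (expected * (1 - expected / n_g))
          return hl, stats.chi2.sf(hl, groups - 2)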

  15. Assessing the Goodness of Fit of Phylogenetic Comparative Methods: A Meta-Analysis and Simulation Study.

    Directory of Open Access Journals (Sweden)

    Dwueng-Chwuan Jhwueng

    Phylogenetic comparative methods (PCMs) have been applied widely in analyzing data from related species, but their fit to data is rarely assessed. Can one determine whether any particular comparative method is typically more appropriate than others by examining comparative data sets? I conducted a meta-analysis of 122 phylogenetic data sets found by searching all papers in JEB, Blackwell Synergy and JSTOR published in 2002-2005 for the purpose of assessing the fit of PCMs. The number of species in these data sets ranged from 9 to 117. I used the Akaike information criterion to compare PCMs, and then fit PCMs to bivariate data sets through REML analysis. Correlation estimates between two traits and bootstrapped confidence intervals of correlations from each model were also compared. For phylogenies of less than one hundred taxa, the Independent Contrasts method and the independent, non-phylogenetic models provide the best fit. For bivariate analysis, correlations from different PCMs are qualitatively similar, so actual correlations from real data seem to be robust to the PCM chosen for the analysis. Therefore, researchers might apply the PCM they believe best describes the evolutionary mechanisms underlying their data.

  16. Assessing Goodness of Fit in Item Response Theory with Nonparametric Models: A Comparison of Posterior Probabilities and Kernel-Smoothing Approaches

    Science.gov (United States)

    Sueiro, Manuel J.; Abad, Francisco J.

    2011-01-01

    The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…

  17. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    2006-01-01

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution…

  18. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution function…

  19. Testing the goodness of fit of selected infiltration models on soils with different land use histories

    International Nuclear Information System (INIS)

    Mbagwu, J.S.C.

    1993-10-01

    Six infiltration models, some obtained by reformulating the fitting parameters of the classical Kostiakov (1932) and Philip (1957) equations, were investigated for their ability to describe water infiltration into highly permeable sandy soils from the Nsukka plains of SE Nigeria. The models were Kostiakov, Modified Kostiakov (A), Modified Kostiakov (B), Philip, Modified Philip (A) and Modified Philip (B). Infiltration data were obtained from double-ring infiltrometers on field plots established on a Kandic Paleustult (Nkpologu series) to investigate the effects of land use on soil properties and maize yield. The treatments were: (i) tilled-mulched (TM), (ii) tilled-unmulched (TU), (iii) untilled-mulched (UM), (iv) untilled-unmulched (UU) and (v) continuous pasture (CP). Cumulative infiltration was highest on the TM and lowest on the CP plots. All estimated model parameters obtained by the best fit of measured data differed significantly among the treatments. Based on the magnitude of R² values, the Kostiakov, Modified Kostiakov (A), Philip and Modified Philip (A) models provided the best predictions of cumulative infiltration as a function of time. Comparing experimental with model-predicted cumulative infiltration showed, however, that on all treatments the values predicted by the classical Kostiakov, Philip and Modified Philip (A) models deviated most from experimental data. The other models produced values that agreed very well with measured data. Considering the ease of determining the fitting parameters, it is proposed that on soils with high infiltration rates either the Modified Kostiakov model, I = Kt^a + Ic*t, or the Modified Philip model, I = St^(1/2) + Ic*t (where I is cumulative infiltration, K the time coefficient, t the time elapsed, a the time exponent, Ic the equilibrium infiltration rate and S the soil water sorptivity), be used for routine characterization of the infiltration process. (author). 33 refs, 3 figs, 6 tabs
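
    A short sketch of how either proposed model can be fitted in practice with nonlinear least squares; the data values below are hypothetical placeholders, not the Nsukka plot measurements:

      import numpy as np
      from scipy.optimize import curve_fit

      def modified_kostiakov(t, K, a, Ic):
          # I = K*t**a + Ic*t, as proposed above
          return K * t**a + Ic * t

      t = np.array([5.0, 10, 20, 40, 60, 90, 120])           # min (hypothetical)
      I = np.array([3.1, 5.0, 8.2, 13.5, 17.9, 23.8, 29.0])  # cm (hypothetical)

      (K, a, Ic), _ = curve_fit(modified_kostiakov, t, I, p0=(1.0, 0.5, 0.1))
      resid = I - modified_kostiakov(t, K, a, Ic)
      r2 = 1 - np.sum(resid**2) / np.sum((I - I.mean())**2)
      print(f"K={K:.3f}, a={a:.3f}, Ic={Ic:.4f}, R^2={r2:.4f}")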

  20. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan; Hart, Jeffrey D.; Janicki, Ryan; Carroll, Raymond J.

    2010-01-01

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context.

  1. Goodness-of-fit tests and model diagnostics for negative binomial regression of RNA sequencing data.

    Science.gov (United States)

    Mi, Gu; Di, Yanming; Schafer, Daniel W

    2015-01-01

    This work is about assessing model adequacy for negative binomial (NB) regression, particularly (1) assessing the adequacy of the NB assumption, and (2) assessing the appropriateness of models for NB dispersion parameters. Tools for the first are appropriate for NB regression generally; those for the second are primarily intended for RNA sequencing (RNA-Seq) data analysis. The typically small number of biological samples and large number of genes in RNA-Seq analysis motivate us to address the trade-offs between robustness and statistical power using NB regression models. One widely-used power-saving strategy, for example, is to assume some commonalities of NB dispersion parameters across genes via simple models relating them to mean expression rates, and many such models have been proposed. As RNA-Seq analysis is becoming ever more popular, it is appropriate to make more thorough investigations into power and robustness of the resulting methods, and into practical tools for model assessment. In this article, we propose simulation-based statistical tests and diagnostic graphics to address model adequacy. We provide simulated and real data examples to illustrate that our proposed methods are effective for detecting the misspecification of the NB mean-variance relationship as well as judging the adequacy of fit of several NB dispersion models.
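
    A simulation-based check in the spirit described, assuming one gene's counts, a well-estimated mean, and a dispersion value phi supplied by one of the across-gene dispersion models mentioned above (NB2 form: variance = mean + phi*mean^2); this is our sketch of the general idea, not the authors' test:

      import numpy as np
      from scipy import stats

      def dispersion_model_check(counts, phi, n_sim=20_000, seed=0):
          # counts: one gene's counts; phi: dispersion predicted for this gene
          # by the fitted dispersion model
          rng = np.random.default_rng(seed)
          counts = np.asarray(counts)
          m = counts.mean()
          r, p = 1.0 / phi, 1.0 / (1.0 + phi * m)  # scipy's (n, p) form
          sims = stats.nbinom.rvs(r, p, size=(n_sim, len(counts)),
                                  random_state=rng)
          v_sim = sims.var(axis=1, ddof=1)
          v_obs = counts.var(ddof=1)
          # two-sided Monte Carlo p-value for the observed variance
          lo = (1 + np.sum(v_sim <= v_obs)) / (n_sim + 1)
          hi = (1 + np.sum(v_sim >= v_obs)) / (n_sim + 1)
          return min(1.0, 2 * min(lo, hi))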

  2. Comparison of hypertabastic survival model with other unimodal hazard rate functions using a goodness-of-fit test.

    Science.gov (United States)

    Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S

    2017-05-30

    We studied the problem of testing a hypothesized distribution in survival regression models when the data are right-censored and survival times are influenced by covariates. A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit of the hypertabastic survival model and four other unimodal hazard rate functions. The results of the simulation study showed that the hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions. In statistical modeling, because of the flexible shape of its hazard functions, this distribution can also be used as a competitor of the Birnbaum-Saunders and inverse Gaussian distributions. The results for the real data application are shown. Copyright © 2017 John Wiley & Sons, Ltd.

  3. A goodness of fit and validity study of the Korean radiological technologists' core job competency model

    International Nuclear Information System (INIS)

    Lim, Chang Seon; Cho, A Ra; Hur, Yera; Choi, Seong Youl

    2017-01-01

    Radiological technologists deal with human lives, which means professional competency is essential for the job. Nevertheless, there have been no studies in Korea that identified the job competencies of radiological technologists. In order to define the core job competencies of Korean radiological technologists and to present a factor model, 147 questionnaires on radiological technologists' job competency were analyzed using 'PASW Statistics Version 18.0' and 'AMOS Version 18.0'. The valid model consisted of five core job competencies ('Patient management', 'Health and safety', 'Operation of equipment', 'Procedures and management') and 17 sub-competencies. In the factor analysis of the measurement model for the five core job competencies, the RMSEA value was 0.1 and the CFI and TLI values were close to 0.9. The validity analysis showed that the average variance extracted was 0.5 or more and the construct reliability was 0.7 or more, and there was a high correlation among the sub-competencies within each core competency. The results of this study are expected to provide specific information necessary for competency-centered training and management of human resources by clearly showing the job competencies required of radiological technologists in Korea's health environment.

  4. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh

    2014-04-03

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate into the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
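
    A schematic of the two procedures contrasted above, in generic form; estimate, simulate, and statistic are placeholders for model-specific routines (for point processes, statistic might be an L-function discrepancy), and the article's nested correction goes beyond this sketch:

      import numpy as np

      def monte_carlo_gof(data, estimate, simulate, statistic,
                          n_sim=999, refit=True, seed=0):
          rng = np.random.default_rng(seed)
          theta_hat = estimate(data)
          t_obs = statistic(data, theta_hat)
          t_sim = np.empty(n_sim)
          for i in range(n_sim):
              sim = simulate(theta_hat, rng)
              # refit=False is the "plug-in" version criticized above, whose
              # empirical level does not reach the nominal level
              theta = estimate(sim) if refit else theta_hat
              t_sim[i] = statistic(sim, theta)
          return (1 + np.sum(t_sim >= t_obs)) / (n_sim + 1)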

  5. Two Aspects of the Simplex Model: Goodness of Fit to Linear Growth Curve Structures and the Analysis of Mean Trends.

    Science.gov (United States)

    Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.

    1994-01-01

    Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)

  6. A goodness of fit and validity study of the Korean radiological technologists' core job competency model

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Chang Seon [Dept. of Radiological Science, Konyang University College of Medical Sciences, Daejeon (Korea, Republic of); Cho, A Ra [Dept. of Medical Education, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Hur, Yera [Dept. of Medical Education, Konyang University College of Medicine, Daejeon (Korea, Republic of); Choi, Seong Youl [Dept. of Occupational Therapy, Kwangju women’s University, Gwangju (Korea, Republic of)

    2017-09-15

    Radiological technologists deal with human lives, which means professional competency is essential for the job. Nevertheless, there have been no studies in Korea that identified the job competencies of radiological technologists. In order to define the core job competencies of Korean radiological technologists and to present a factor model, 147 questionnaires on radiological technologists' job competency were analyzed using 'PASW Statistics Version 18.0' and 'AMOS Version 18.0'. The valid model consisted of five core job competencies ('Patient management', 'Health and safety', 'Operation of equipment', 'Procedures and management') and 17 sub-competencies. In the factor analysis of the measurement model for the five core job competencies, the RMSEA value was 0.1 and the CFI and TLI values were close to 0.9. The validity analysis showed that the average variance extracted was 0.5 or more and the construct reliability was 0.7 or more, and there was a high correlation among the sub-competencies within each core competency. The results of this study are expected to provide specific information necessary for competency-centered training and management of human resources by clearly showing the job competencies required of radiological technologists in Korea's health environment.

  7. A goodness of fit statistic for the geometric distribution

    NARCIS (Netherlands)

    J.A. Ferreira

    2003-01-01

    We propose a goodness of fit statistic for the geometric distribution and compare it in terms of power, via simulation, with the chi-square statistic. The statistic is based on the Lau-Rao theorem and can be seen as a discrete analogue of the total time on test statistic. The results suggest that the test based on the new statistic is generally superior to the chi-square test.

  8. Goodness-of-fit test for copulas

    Science.gov (United States)

    Panchenko, Valentyn

    2005-09-01

    Copulas are often used in finance to characterize the dependence between assets. However, the choice of the functional form for the copula remains an open question in the literature. This paper develops a goodness-of-fit test for copulas based on positive definite bilinear forms. The suggested test avoids the use of plug-in estimators, which is the common practice in the literature. The test statistics can be consistently computed on the basis of V-estimators even in the case of large dimensions. The test is applied to a dataset of US large-cap stocks to assess the performance of the Gaussian copula for portfolios of assets of various dimensions. The Gaussian copula appears to be inadequate to characterize the dependence between assets.

  9. A goodness of fit statistic for the geometric distribution

    OpenAIRE

    Ferreira, J.A.

    2003-01-01

    We propose a goodness of fit statistic for the geometric distribution and compare it in terms of power, via simulation, with the chi-square statistic. The statistic is based on the Lau-Rao theorem and can be seen as a discrete analogue of the total time on test statistic. The results suggest that the test based on the new statistic is generally superior to the chi-square test.

  10. Goodness of Fit Test and Test of Independence by Entropy

    Directory of Open Access Journals (Sweden)

    M. Sharifdoost

    2009-06-01

    To test whether a set of data has a specific distribution or not, we can use the goodness of fit test. This test can be done with Pearson's X²-statistic or the likelihood ratio statistic G², which are asymptotically equal, and also with the Kolmogorov-Smirnov statistic for continuous distributions. In this paper, we introduce a new test statistic for the goodness of fit test which is based on an entropy distance and which can be applied for large sample sizes. We compare this new statistic with the classical test statistics X², G², and T_n in simulation studies. We conclude that the new statistic is more sensitive than the usual statistics to the rejection of distributions which are very close to the desired distribution. Also, for testing independence, a new test statistic based on mutual information is introduced.
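
    The classical pair of statistics mentioned above, computed side by side with an entropy-style distance; the counts are hypothetical and the Kullback-Leibler form is only one of several possible entropy distances:

      import numpy as np
      from scipy import stats

      obs = np.array([18, 25, 21, 16, 20])   # hypothetical cell counts
      p0 = np.full(5, 0.2)                   # null model: uniform
      expected = obs.sum() * p0

      x2, p_x2 = stats.power_divergence(obs, expected, lambda_="pearson")
      g2, p_g2 = stats.power_divergence(obs, expected, lambda_="log-likelihood")

      # entropy (Kullback-Leibler) distance between observed proportions and
      # the null probabilities; note that G^2 = 2*n*KL for these counts
      p_obs = obs / obs.sum()
      kl = np.sum(p_obs * np.log(p_obs / p0))
      print(x2, g2, kl)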

  11. A note on goodness of fit test using moments

    Directory of Open Access Journals (Sweden)

    Alex Papadopoulos

    2007-10-01

    The purpose of this article is to introduce a general moment-based approach to derive formal goodness of fit tests of a parametric family. We show that, in general, an approximate normal test or a chi-squared test can be derived by exploring the moment structure of a parametric family, when moments up to a certain order exist. The idea is simple and the resulting tests are easy to implement. To illustrate the use of this approach, we derive moment-based goodness of fit tests for some common discrete and continuous parametric families. We also compare the proposed tests with the well-known Pearson-Fisher chi-square test and some distance tests in a simulation study.

  12. Mothers' Appraisal of Goodness of Fit and Children's Social Development

    Science.gov (United States)

    Seifer, Ronald; Dickstein, Susan; Parade, Stephanie; Hayden, Lisa C.; Magee, Karin Dodge; Schiller, Masha

    2014-01-01

    Goodness of fit has been a key theoretical construct for understanding caregiver-child relationships. We developed an interview method to assess goodness of fit as a relationship construct, and employed this method in a longitudinal study of child temperament, family context, and attachment relationship formation. Goodness of fit at 4 and 8 months…

  13. Exact goodness-of-fit tests for Markov chains.

    Science.gov (United States)

    Besag, J; Mondal, D

    2013-06-01

    Goodness-of-fit tests are useful in assessing whether a statistical model is consistent with available data. However, the usual χ² asymptotics often fail, either because of the paucity of the data or because a nonstandard test statistic is of interest. In this article, we describe exact goodness-of-fit tests for first- and higher order Markov chains, with particular attention given to time-reversible ones. The tests are obtained by conditioning on the sufficient statistics for the transition probabilities and are implemented by simple Monte Carlo sampling or by Markov chain Monte Carlo. They apply both to single and to multiple sequences and allow a free choice of test statistic. Three examples are given. The first concerns multiple sequences of dry and wet January days for the years 1948-1983 at Snoqualmie Falls, Washington State, and suggests that standard analysis may be misleading. The second one is for a four-state DNA sequence and lends support to the original conclusion that a second-order Markov chain provides an adequate fit to the data. The last one is six-state atomistic data arising in molecular conformational dynamics simulation of solvated alanine dipeptide and points to strong evidence against a first-order reversible Markov chain at 6 picosecond time steps. © 2013, The International Biometric Society.

  14. Goodness-of-fit tests with dependent observations

    International Nuclear Information System (INIS)

    Chicheportiche, Rémy; Bouchaud, Jean-Philippe

    2011-01-01

    We revisit the Kolmogorov-Smirnov and Cramér-von Mises goodness-of-fit (GoF) tests and propose a generalization to identically distributed, but dependent univariate random variables. We show that the dependence leads to a reduction of the 'effective' number of independent observations. The generalized GoF tests are not distribution-free but rather depend on all the lagged bivariate copulas. These objects, which we call 'self-copulas', encode all the non-linear temporal dependences. We introduce a specific, log-normal model for these self-copulas, for which a number of analytical results are derived. An application to financial time series is provided. As is well known, the dependence is long-ranged in this case, a finding that we confirm using self-copulas. As a consequence, the acceptance rates for GoF tests are substantially higher than if the returns were iid random variables.
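
    The effect described above in miniature: a standard Kolmogorov-Smirnov test assumes n independent observations, so with positively dependent data the nominal p-values are too optimistic. The AR(1) process below is our choice for illustration only:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      n, phi = 500, 0.8
      eps = rng.standard_normal(n)
      x = np.empty(n)
      x[0] = eps[0]
      for t in range(1, n):
          x[t] = phi * x[t - 1] + np.sqrt(1 - phi**2) * eps[t]  # marginal N(0,1)

      # Marginals are exactly N(0,1), yet across repeated runs the KS test
      # rejects more often than the nominal level, because the 'effective'
      # number of independent observations is smaller than n.
      print(stats.kstest(x, "norm"))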

  15. Modified Distribution-Free Goodness-of-Fit Test Statistic.

    Science.gov (United States)

    Chun, So Yeon; Browne, Michael W; Shapiro, Alexander

    2018-03-01

    Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.

  16. An Introduction to Goodness of Fit for PMU Parameter Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Riepnieks, Artis; Kirkham, Harold

    2017-10-01

    New results of measurements of phasor-like signals are presented, based on our previous work on the topic. In this document an improved estimation method is described. The algorithm (which is realized in MATLAB software) is discussed. We examine the effect of noisy and distorted signals on the Goodness of Fit metric. The estimation method is shown to perform very well with clean data, with a measurement window as short as half a cycle and as few as 5 samples per cycle. The Goodness of Fit decreases predictably with added phase noise, and seems to be acceptable even with visible distortion in the signal. While the exact results we obtain are specific to our method of estimation, the Goodness of Fit method could be implemented in any phasor measurement unit.

  17. Quantum chi-squared and goodness of fit testing

    Energy Technology Data Exchange (ETDEWEB)

    Temme, Kristan [IQIM, California Institute of Technology, Pasadena, California 91125 (United States); Verstraete, Frank [Fakultät für Physik, Universität Wien, Boltzmanngasse 5, 1090 Wien, Austria and Faculty of Science, Ghent University, B-9000 Ghent (Belgium)

    2015-01-15

    A quantum mechanical hypothesis test is presented for the hypothesis that a certain setup produces a given quantum state. Although the classical and the quantum problems are very much related to each other, the quantum problem is much richer due to the additional optimization over the measurement basis. A goodness of fit test for i.i.d. quantum states is developed and a max-min characterization for the optimal measurement is introduced. We find the quantum measurement which leads both to the maximal Pitman and Bahadur efficiencies, and determine the associated divergence rates. We discuss the relationship of the quantum goodness of fit test to the problem of estimating multiple parameters from a density matrix. These problems are found to be closely related, and we show that the largest error of an optimal strategy, determined by the smallest eigenvalue of the Fisher information matrix, is given by the divergence rate of the goodness of fit test.

  18. Chi-squared goodness of fit tests with applications

    CERN Document Server

    Balakrishnan, N; Nikulin, MS

    2013-01-01

    Chi-Squared Goodness of Fit Tests with Applications provides a thorough and complete context for the theoretical basis and implementation of Pearson's monumental contribution and its wide applicability for chi-squared goodness of fit tests. The book is ideal for researchers and scientists conducting statistical analysis in the processing of experimental data, as well as for students and practitioners with a good mathematical background who use statistical methods. The historical context, especially Chapter 7, provides great insight into the importance of this subject, with an authoritative author team.

  19. Goodness of Fit Test and Test of Independence by Entropy

    OpenAIRE

    M. Sharifdoost; N. Nematollahi; E. Pasha

    2009-01-01

    To test whether a set of data has a specific distribution or not, we can use the goodness of fit test. This test can be done with Pearson's X²-statistic or the likelihood ratio statistic G², which are asymptotically equal, and also with the Kolmogorov-Smirnov statistic for continuous distributions. In this paper, we introduce a new test statistic for the goodness of fit test which is based on an entropy distance and which can be applied for large sample sizes…

  20. Goodness of Fit of Skills Assessment Approaches: Insights from Patterns of Real vs. Synthetic Data Sets

    Science.gov (United States)

    Beheshti, Behzad; Desmarais, Michel C.

    2015-01-01

    This study investigates the issue of the goodness of fit of different skills assessment models using both synthetic and real data. Synthetic data is generated from the different skills assessment models. The results show wide differences of performances between the skills assessment models over synthetic data sets. The set of relative performances…

  1. Goodness-of-fit tests for the Gompertz distribution

    DEFF Research Database (Denmark)

    Lenart, Adam; Missov, Trifon

    The Gompertz distribution is often fitted to lifespan data; however, testing whether the fit satisfies theoretical criteria has been neglected. Here five goodness-of-fit measures, the Anderson-Darling statistic, the Kullback-Leibler discrimination information, the correlation coefficient test, a test for the mean of the sample hazard, and a nested test against the generalized extreme value distributions, are discussed. Along with an application to laboratory rat data, critical values calculated by the empirical distribution of the test statistics are also presented.
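
    One way to obtain such empirical critical values is a parametric bootstrap of, for example, the Anderson-Darling statistic under the fitted Gompertz model; this sketch uses scipy's Gompertz parametrisation with location fixed at zero and is our illustration, not the authors' implementation:

      import numpy as np
      from scipy import stats

      def anderson_darling(x, cdf):
          # A^2 = -n - (1/n) * sum (2i-1) [ln u_(i) + ln(1 - u_(n+1-i))]
          x = np.sort(x)
          n = len(x)
          u = np.clip(cdf(x), 1e-12, 1 - 1e-12)
          i = np.arange(1, n + 1)
          return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))

      def gompertz_ad_test(x, n_boot=999, seed=0):
          rng = np.random.default_rng(seed)
          c, _, scale = stats.gompertz.fit(x, floc=0)
          a2_obs = anderson_darling(x, stats.gompertz(c, scale=scale).cdf)
          a2_boot = np.empty(n_boot)
          for b in range(n_boot):
              xb = stats.gompertz.rvs(c, scale=scale, size=len(x),
                                      random_state=rng)
              cb, _, sb = stats.gompertz.fit(xb, floc=0)   # refit each replicate
              a2_boot[b] = anderson_darling(xb, stats.gompertz(cb, scale=sb).cdf)
          return a2_obs, (1 + np.sum(a2_boot >= a2_obs)) / (n_boot + 1)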

  2. Goodness-of-fit tests for multi-dimensional copulas: Expanding application to historical drought data

    Directory of Open Access Journals (Sweden)

    Ming-wei Ma

    2013-01-01

    The question of how to choose a copula model that best fits a given dataset is a predominant limitation of the copula approach, and the present study aims to investigate techniques of goodness-of-fit testing for multi-dimensional copulas. A goodness-of-fit test based on Rosenblatt's transformation was mathematically expanded from two dimensions to three dimensions, and procedures for a bootstrap version of the test are provided. Through stochastic copula simulation, an empirical application to historical drought data at the Lintong Gauge Station shows that the goodness-of-fit tests perform well, revealing that both trivariate Gaussian and Student t copulas are acceptable for modeling the dependence structures of the observed drought duration, severity, and peak. Goodness-of-fit tests for multi-dimensional copulas can provide further support for, and greatly help, potential applications of a wider range of copulas to describe the associations of correlated hydrological variables. However, for the application of copulas with more than three dimensions, more complicated computational efforts as well as exploration and parameterization of corresponding copulas are required.

  3. Exploiting the information content of hydrological "outliers" for goodness-of-fit testing

    Directory of Open Access Journals (Sweden)

    F. Laio

    2010-10-01

    Validation of probabilistic models based on goodness-of-fit tests is an essential step for the frequency analysis of extreme events. The outcome of standard testing techniques, however, is mainly determined by the behavior of the hypothetical model, F_X(x), in the central part of the distribution, while the behavior in the tails of the distribution, which is indeed very relevant in hydrological applications, is relatively unimportant for the results of the tests. The maximum-value test, originally proposed as a technique for outlier detection, is a suitable, but seldom applied, technique that addresses this problem. The test is specifically targeted to verify whether the maximum (or minimum) values in the sample are consistent with the hypothesis that the distribution F_X(x) is the real parent distribution. The application of this test is hindered by the fact that the critical values for the test must be numerically obtained when the parameters of F_X(x) are estimated on the same sample used for verification, which is the standard situation in hydrological applications. We propose here a simple, analytically explicit technique to suitably account for this effect, based on the application of censored L-moments estimators of the parameters. We demonstrate, with an application that uses artificially generated samples, the superiority of this modified maximum-value test with respect to the standard version of the test. We also show that the test has comparable or larger power with respect to other goodness-of-fit tests (e.g., the chi-squared test, the Anderson-Darling test, and the Fung and Paul test), in particular when dealing with small samples (sample size lower than 20-25) and when the parent distribution is similar to the distribution being tested.

  4. Bootstrap Power of Time Series Goodness of fit tests

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2013-10-01

    In this article, we looked at the power of various versions of the Box-Pierce statistic and the Cramér-von Mises test. An extensive simulation study has been conducted to compare the power of these tests. Algorithms are provided for the power calculations, and a comparison is also made between the semiparametric bootstrap methods used for time series. Results show that the Box-Pierce statistic and its various versions have good power against linear time series models but poor power against nonlinear models, while the situation reverses for the Cramér-von Mises test. Moreover, we found that the dynamic bootstrap method is better than the fixed-design bootstrap method.

  5. Is Good Fit Related to Good Behaviour? Goodness of Fit between Daycare Teacher-Child Relationships, Temperament, and Prosocial Behaviour

    Science.gov (United States)

    Hipson, Will E.; Séguin, Daniel G.

    2016-01-01

    The Goodness-of-Fit model [Thomas, A., & Chess, S. (1977). Temperament and development. New York: Brunner/Mazel] proposes that a child's temperament interacts with the environment to influence child outcomes. In the past, researchers have shown how the association between the quality of the teacher-child relationship in daycare and child…

  6. Statistical alignment: computational properties, homology testing and goodness-of-fit

    DEFF Research Database (Denmark)

    Hein, J; Wiuf, Carsten; Møller, Martin

    2000-01-01

    The model of insertions and deletions in biological sequences, first formulated by Thorne, Kishino, and Felsenstein in 1991 (the TKF91 model), provides a basis for performing alignment within a statistical framework. Here we investigate this model. Firstly, we show how to accelerate the statistical alignment algorithms by several orders of magnitude. The main innovations are to confine likelihood calculations to a band close to the similarity-based alignment, to get good initial guesses of the evolutionary parameters, and to apply an efficient numerical optimisation algorithm for finding the maximum… analysis. Secondly, we propose a new homology test based on this model, where homology means that an ancestor to a sequence pair can be found finitely far back in time. This test has statistical advantages relative to the traditional shuffle test for proteins. Finally, we describe a goodness-of-fit test…

  7. Empirical Power Comparison Of Goodness of Fit Tests for Normality In The Presence of Outliers

    International Nuclear Information System (INIS)

    Saculinggan, Mayette; Balase, Emily Amor

    2013-01-01

    Most statistical tests, such as t-tests, linear regression analysis and Analysis of Variance (ANOVA), require the normality assumption. When the normality assumption is violated, interpretation and inferences may not be reliable. Therefore it is important to assess this assumption before using any appropriate statistical test. One of the commonly used procedures for determining whether a random sample of size n comes from a normal population is a goodness-of-fit test for normality. Several studies have already been conducted on the comparison of different goodness-of-fit tests (see, for example, [2]), but these are generally limited in sample size or in the number of GOF tests compared (see, for example, [2] [5] [6] [7] [8]). This paper compares the power of six formal tests of normality: the Kolmogorov-Smirnov test (see [3]), Anderson-Darling test, Shapiro-Wilk test, Lilliefors test, Chi-Square test (see [1]) and D'Agostino-Pearson test. Small, moderate and large sample sizes and various contamination levels were used to obtain the power of each test via Monte Carlo simulation. Ten thousand samples of each sample size and contamination level at a fixed Type I error rate α were generated from the given alternative distribution. The power of each test was then obtained by comparing the normality test statistics with the respective critical values. Results show that the power of all six tests is low for small sample sizes (see, for example, [2]), but for n = 20 the Shapiro-Wilk and Anderson-Darling tests achieve high power. For n = 60, the Shapiro-Wilk and Lilliefors tests are most powerful. For large sample sizes, the Shapiro-Wilk test is most powerful (see, for example, [5]). However, the test that achieves the highest power under all conditions for large sample sizes is the D'Agostino-Pearson test (see, for example, [9]).
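
    Monte Carlo power estimation of this kind is straightforward to sketch; the block below estimates the rejection rate of the Shapiro-Wilk test at alpha = 0.05 for normal samples contaminated by a fraction of wide-variance outliers. The contamination scheme (scale-5 normal component) is our illustrative assumption, not the paper's design:

      import numpy as np
      from scipy import stats

      def sw_power(n, contam=0.1, n_rep=10_000, alpha=0.05, seed=0):
          rng = np.random.default_rng(seed)
          rejections = 0
          for _ in range(n_rep):
              k = rng.binomial(n, contam)           # number of contaminants
              x = np.concatenate([rng.standard_normal(n - k),
                                  rng.normal(0.0, 5.0, k)])  # outlier part
              if stats.shapiro(x).pvalue < alpha:
                  rejections += 1
          return rejections / n_rep

      for n in (20, 60, 200):
          print(n, sw_power(n))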

  8. A simple non-parametric goodness-of-fit test for elliptical copulas

    Directory of Open Access Journals (Sweden)

    Jaser Miriam

    2017-12-01

    In this paper, we propose a simple non-parametric goodness-of-fit test for elliptical copulas of any dimension. It is based on the equality of Kendall's tau and Blomqvist's beta for all bivariate margins. Nominal level and power of the proposed test are investigated in a Monte Carlo study. An empirical application illustrates our goodness-of-fit test at work.
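
    The identity behind the test: for any elliptical copula both Kendall's tau and Blomqvist's beta equal (2/pi)*arcsin(rho), so their difference should be near zero for every bivariate margin. The bootstrap z-test below is a rough stand-in; the authors' exact procedure may differ in detail:

      import numpy as np
      from scipy import stats

      def blomqvist_beta(x, y):
          # medial correlation: sign agreement around the componentwise medians
          return np.mean(np.sign((x - np.median(x)) * (y - np.median(y))))

      def tau_beta_gap(x, y, n_boot=1000, seed=0):
          rng = np.random.default_rng(seed)
          gap = stats.kendalltau(x, y)[0] - blomqvist_beta(x, y)
          boot = np.empty(n_boot)
          n = len(x)
          for b in range(n_boot):
              s = rng.integers(0, n, size=n)        # bootstrap resample
              boot[b] = stats.kendalltau(x[s], y[s])[0] - blomqvist_beta(x[s], y[s])
          z = gap / boot.std(ddof=1)
          return gap, 2 * stats.norm.sf(abs(z))     # rough two-sided p-value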

  9. Removing Visual Bias in Filament Identification: A New Goodness-of-fit Measure

    Science.gov (United States)

    Green, C.-E.; Cunningham, M. R.; Dawson, J. R.; Jones, P. A.; Novak, G.; Fissel, L. M.

    2017-05-01

    Different combinations of input parameters to filament identification algorithms, such as DisPerSE and FilFinder, produce numerous different output skeletons. The skeletons are a one-pixel-wide representation of the filamentary structure in the original input image. However, these output skeletons may not necessarily be a good representation of that structure. Furthermore, a given skeleton may not be as good a representation as another. Previously, there has been no mathematical "goodness-of-fit" measure to compare output skeletons to the input image. Thus far this has been assessed visually, introducing visual bias. We propose the application of the mean structural similarity index (MSSIM) as a mathematical goodness-of-fit measure. We describe the use of the MSSIM to find the output skeletons that are the most mathematically similar to the original input image (the optimum, or "best", skeletons) for a given algorithm, and independently of the algorithm. This measure makes possible systematic parameter studies aimed at finding the subset of input parameter values returning optimum skeletons. It can also be applied to the output of non-skeleton-based filament identification algorithms, such as the Hessian matrix method. The MSSIM removes the need to visually examine thousands of output skeletons, and eliminates the visual bias, subjectivity, and limited reproducibility inherent in that process, representing a major improvement upon existing techniques. Importantly, it also allows further automation in the post-processing of output skeletons, which is crucial in this era of "big data."
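
    A minimal sketch of the selection rule, assuming scikit-image's structural_similarity as the MSSIM implementation; rendering the skeleton at the image's peak value is our assumption, and a real pipeline may scale the skeleton differently:

      import numpy as np
      from skimage.metrics import structural_similarity

      def skeleton_mssim(image, skeleton_mask):
          # image: 2-D float array; skeleton_mask: boolean array, same shape
          skeleton = skeleton_mask.astype(float) * image.max()
          return structural_similarity(
              image, skeleton, data_range=image.max() - image.min())

      # selection rule: keep the candidate skeleton with the largest MSSIM
      # best = max(candidate_skeletons, key=lambda m: skeleton_mssim(img, m))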

  10. A Bayesian goodness of fit test and semiparametric generalization of logistic regression with measurement data.

    Science.gov (United States)

    Schörgendorfer, Angela; Branscum, Adam J; Hanson, Timothy E

    2013-06-01

    Logistic regression is a popular tool for risk analysis in medical and population health science. With continuous response data, it is common to create a dichotomous outcome for logistic regression analysis by specifying a threshold for positivity. Fitting a linear regression to the nondichotomized response variable, assuming a logistic sampling model for the data, has been empirically shown to yield more efficient estimates of odds ratios than ordinary logistic regression of the dichotomized endpoint. We illustrate that risk inference is not robust to departures from the parametric logistic distribution. Moreover, the model assumption of proportional odds is generally not satisfied when the condition of a logistic distribution for the data is violated, leading to biased inference from a parametric logistic analysis. We develop novel Bayesian semiparametric methodology for testing the goodness of fit of parametric logistic regression with continuous measurement data. The testing procedures hold for any cutoff threshold, and our approach simultaneously provides the ability to perform semiparametric risk estimation. Bayes factors are calculated using the Savage-Dickey ratio for testing the null hypothesis of logistic regression versus a semiparametric generalization. We propose a fully Bayesian and a computationally efficient empirical Bayesian approach to testing, and we present methods for semiparametric estimation of risks, relative risks, and odds ratios when parametric logistic regression fails. Theoretical results establish the consistency of the empirical Bayes test. Results from simulated data show that the proposed approach provides accurate inference irrespective of whether parametric assumptions hold or not. Evaluation of risk factors for obesity shows that different inferences are derived from an analysis of a real data set when deviations from a logistic distribution are permissible in a flexible semiparametric framework. © 2013, The International Biometric Society.

  11. Different goodness of fit tests for Rayleigh distribution in ranked set sampling

    Directory of Open Access Journals (Sweden)

    Amer Al-Omari

    2016-03-01

    In this paper, different goodness of fit tests for the Rayleigh distribution are considered based on simple random sampling (SRS) and ranked set sampling (RSS) techniques. The performance of the suggested estimators is evaluated in terms of the power of the tests by using Monte Carlo simulation. It is found that the suggested RSS tests perform better than their counterparts in SRS.

  12. Sensitivity of goodness-of-fit statistics to rainfall data rounding off

    Science.gov (United States)

    Deidda, Roberto; Puliga, Michelangelo

    An analysis based on the L-moments theory suggests adopting the generalized Pareto distribution to interpret daily rainfall depths recorded by the rain-gauge network of the Hydrological Survey of the Sardinia Region. Nevertheless, a major problem, not yet completely resolved, arises in the estimation of a left-censoring threshold able to ensure a good fit of the rainfall data to the generalized Pareto distribution. In order to detect an optimal threshold, keeping the largest possible number of data, we chose to apply a “failure-to-reject” method based on goodness-of-fit tests, as proposed by Choulakian and Stephens [Choulakian, V., Stephens, M.A., 2001. Goodness-of-fit tests for the generalized Pareto distribution. Technometrics 43, 478-484]. Unfortunately, the application of the test, using percentage points provided by Choulakian and Stephens (2001), did not succeed in detecting a useful threshold value in most analyzed time series. A deeper analysis revealed that these failures are mainly due to the presence of large quantities of rounded-off values among the sample data, affecting the distribution of goodness-of-fit statistics and leading to significant departures from the percentage points expected for continuous random variables. A procedure based on Monte Carlo simulations is thus proposed to overcome these problems.
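
    A minimal sketch of the "failure-to-reject" threshold search with Monte Carlo percentage points, in the spirit of this record, follows. The rainfall series is a synthetic stand-in, and the Anderson-Darling statistic is recomputed on refitted replicates so that parameter estimation is accounted for; the authors' treatment of rounding off would require further conditioning on the rounded support.

      import numpy as np
      from scipy import stats

      def ad_stat(x, c, scale):
          # Anderson-Darling statistic of excesses x against a fitted GPD
          z = np.clip(np.sort(stats.genpareto.cdf(x, c, scale=scale)),
                      1e-12, 1 - 1e-12)
          n, i = len(z), np.arange(1, len(z) + 1)
          return -n - np.mean((2 * i - 1) * (np.log(z) + np.log(1 - z[::-1])))

      rng = np.random.default_rng(3)
      rain = stats.genpareto.rvs(0.1, scale=8.0, size=3000, random_state=rng)

      for u in np.quantile(rain, [0.80, 0.90, 0.95]):   # candidate thresholds
          exc = rain[rain > u] - u
          c, _, scale = stats.genpareto.fit(exc, floc=0)
          a2_obs = ad_stat(exc, c, scale)
          a2_sim = []
          for _ in range(500):                          # Monte Carlo percentage points
              sim = stats.genpareto.rvs(c, scale=scale, size=len(exc),
                                        random_state=rng)
              cs, _, ss = stats.genpareto.fit(sim, floc=0)
              a2_sim.append(ad_stat(sim, cs, ss))
          p = np.mean(np.array(a2_sim) >= a2_obs)
          print(f"u = {u:7.2f}  A2 = {a2_obs:6.3f}  MC p-value = {p:.3f}")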

  13. A note on Poisson goodness-of-fit tests for ionizing radiation induced chromosomal aberration samples.

    Science.gov (United States)

    Higueras, Manuel; González, J E; Di Giorgio, Marina; Barquinero, J F

    2018-05-18

    To present Poisson exact goodness-of-fit tests as alternatives and complements to the asymptotic u-test, which is the most widely used in cytogenetic biodosimetry, to decide whether a sample of chromosomal aberrations in blood cells comes from a homogeneous or inhomogeneous exposure. Three Poisson exact goodness-of-fit tests from the literature are introduced and implemented in the R environment. A Shiny R Studio application, named GOF Poisson, has been updated for the purpose of giving support to this work. The three exact tests and the u-test are applied to chromosomal aberration data from clinical and accidental radiation exposure patients. It is observed that the u-test is not an appropriate approximation in small samples with a small yield of chromosomal aberrations. Tools are provided to compute the three exact tests, which is not as trivial as the implementation of the u-test. Poisson exact goodness-of-fit tests should be considered jointly with the u-test for detecting inhomogeneous exposures in cytogenetic biodosimetry practice.
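
    For orientation, the u-test referred to above is a dispersion test; one common form of the statistic, together with a simple Monte Carlo stand-in for an exact-style check in small samples, is sketched below. The three exact tests of the record live in its Shiny application, not here, and the toy counts are invented.

      import numpy as np
      from scipy import stats

      def u_test(counts):
          # Dispersion-based u statistic (one common biodosimetry form);
          # approximately standard normal for Poisson data in large samples.
          y = np.asarray(counts, dtype=float)
          n, m, v = len(y), np.mean(y), np.var(y, ddof=1)
          return (v / m - 1.0) * np.sqrt((n - 1) / (2.0 * (1.0 - 1.0 / (n * m))))

      def mc_dispersion_p(counts, reps=10000, seed=4):
          # Monte Carlo distribution of the dispersion index under a fitted
          # Poisson null: a small-sample alternative to the normal approximation.
          rng = np.random.default_rng(seed)
          y = np.asarray(counts, dtype=float)
          obs = np.var(y, ddof=1) / np.mean(y)
          sims = stats.poisson.rvs(np.mean(y), size=(reps, len(y)), random_state=rng)
          d = sims.var(axis=1, ddof=1) / sims.mean(axis=1)
          return min(1.0, 2 * min(np.mean(d >= obs), np.mean(d <= obs)))

      counts = [1, 0, 2, 1, 3, 0, 1, 2, 4, 0, 1, 2]     # toy aberration counts
      print(f"u = {u_test(counts):.3f}, MC p = {mc_dispersion_p(counts):.3f}")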

  14. Goodness-of-Fit versus Significance: A CAPM Selection with Dynamic Betas Applied to the Brazilian Stock Market

    Directory of Open Access Journals (Sweden)

    André Ricardo de Pinho Ronzani

    2017-12-01

    Full Text Available In this work, a Capital Asset Pricing Model (CAPM) with time-varying betas is considered. These betas evolve over time, conditional on financial and non-financial variables. Indeed, the model proposed by Adrian and Franzoni (2009) is adapted to assess the behavior of some selected Brazilian equities. For each equity, several models are fitted, and the best model is chosen based on goodness-of-fit tests and parameter significance. Finally, using the selected dynamic models, VaR (Value-at-Risk) measures are calculated. We conclude that CAPM with time-varying betas provides less conservative VaR measures than those based on CAPM with static betas or historical VaR.
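
    Adrian and Franzoni's specification is a learning model estimated with a Kalman filter; the sketch below only illustrates the general idea of a time-varying beta and the VaR it implies, using a rolling-window OLS beta on synthetic returns. All series, window lengths and quantiles are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(5)
      T = 750
      market = rng.normal(0.0004, 0.01, T)                 # hypothetical market returns
      true_beta = 1.0 + 0.4 * np.sin(np.arange(T) / 120)   # slowly varying exposure
      stock = true_beta * market + rng.normal(0, 0.008, T)

      window, z99 = 125, 2.326                             # ~6 months; 99% normal quantile
      betas, var99 = [], []
      for t in range(window, T):
          m, s = market[t - window:t], stock[t - window:t]
          b = np.cov(s, m, ddof=0)[0, 1] / np.var(m)       # rolling OLS beta
          resid = s - b * m
          sigma = np.sqrt(b**2 * np.var(m) + np.var(resid))
          betas.append(b)
          var99.append(z99 * sigma)                        # one-day parametric VaR
      print(f"last beta = {betas[-1]:.2f}, last 99% VaR = {var99[-1]:.4f}")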

  15. On the optimal number of classes in the Pearson goodness-of-fit tests

    Czech Academy of Sciences Publication Activity Database

    Morales, D.; Pardo, L.; Vajda, Igor

    2005-01-01

    Roč. 41, č. 6 (2005), s. 677-698 ISSN 0023-5954 R&D Projects: GA AV ČR(CZ) IAA1075403 Grant - others:BFM(ES) 2003-00892; GV(ES) 04B-670 Institutional research plan: CEZ:AV0Z10750506 Keywords: Pearson-type goodness-of-fit tests * asymptotic local test power * asymptotic equivalence of tests * optimal number of classes Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.343, year: 2005

  16. Goodness-of-Fit Tests For Elliptical and Independent Copulas through Projection Pursuit

    Directory of Open Access Journals (Sweden)

    Jacques Touboul

    2011-04-01

    Full Text Available Two goodness-of-fit tests for copulas are investigated: the first deals with the case of elliptical copulas and the second with independent copulas. These tests result from the expansion of the projection pursuit methodology that we introduce in the present article. This method enables us to determine the axis system on which these copulas lie, as well as the exact value of these copulas in the basis formed by the axes previously determined, irrespective of their value in their canonical basis. Simulations are presented, as well as an application to real datasets.

  17. FREQFIT: Computer program which performs numerical regression and statistical chi-squared goodness of fit analysis

    International Nuclear Information System (INIS)

    Hofland, G.S.; Barton, C.C.

    1990-01-01

    The computer program FREQFIT is designed to perform regression and statistical chi-squared goodness of fit analysis on one-dimensional or two-dimensional data. The program features an interactive user dialogue, numerous help messages, an option for screen or line printer output, and the flexibility to use practically any commercially available graphics package to create plots of the program's results. FREQFIT is written in Microsoft QuickBASIC, for IBM-PC compatible computers. A listing of the QuickBASIC source code for the FREQFIT program, a user manual, and sample input data, output, and plots are included. 6 refs., 1 fig.

  18. Comment on the asymptotics of a distribution-free goodness of fit test statistic.

    Science.gov (United States)

    Browne, Michael W; Shapiro, Alexander

    2015-03-01

    In a recent article Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed that a proof by Browne (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) of the asymptotic distribution of a goodness of fit test statistic is incomplete because it fails to prove that the orthogonal component function employed is continuous. Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed how Browne's proof can be completed satisfactorily but this required the development of an extensive and mathematically sophisticated framework for continuous orthogonal component functions. This short note provides a simple proof of the asymptotic distribution of Browne's (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) test statistic by using an equivalent form of the statistic that does not involve orthogonal component functions and consequently avoids all complicating issues associated with them.

  19. Goodness-of-Fit Tests for Generalized Normal Distribution for Use in Hydrological Frequency Analysis

    Science.gov (United States)

    Das, Samiran

    2018-04-01

    The use of the three-parameter generalized normal (GNO) as a hydrological frequency distribution is well recognized, but its application is limited due to the unavailability of popular goodness-of-fit (GOF) test statistics. This study develops popular empirical distribution function (EDF)-based test statistics to investigate the goodness-of-fit of the GNO distribution. The focus is on the case most relevant to the hydrologist, namely, that in which the parameter values are unknown and estimated from a sample using the method of L-moments. The widely used EDF tests such as Kolmogorov-Smirnov, Cramér-von Mises, and Anderson-Darling (AD) are considered in this study. A modified version of AD, namely, the Modified Anderson-Darling (MAD) test, is also considered and its performance is assessed against the other EDF tests using a power study that incorporates six specific Wakeby distributions (WA-1, WA-2, WA-3, WA-4, WA-5, and WA-6) as the alternative distributions. The critical values of the proposed test statistics are approximated using Monte Carlo techniques and are summarized in chart and regression equation form to show the dependence on shape parameter and sample size. The performance results obtained from the power study suggest that the AD and a variant of the MAD (MAD-L) are the most powerful tests. Finally, the study performs case studies involving annual maximum flow data of selected gauged sites from Irish and US catchments to show the application of the derived critical values and recommends further assessments to be carried out on flow data sets of rivers with various hydrological regimes.
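
    The Monte Carlo approximation of critical values used in this record follows a generic recipe: simulate from the fitted null, re-estimate the parameters on each replicate, and take a quantile of the resulting test statistics. The sketch below applies that recipe to a lognormal stand-in for the GNO (with maximum likelihood instead of L-moments, and the KS statistic instead of MAD).

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      alpha, reps = 0.05, 2000
      for n in (20, 50, 100):
          d = np.empty(reps)
          for r in range(reps):
              x = stats.lognorm.rvs(0.5, size=n, random_state=rng)
              shape, loc, scale = stats.lognorm.fit(x, floc=0)   # refit each replicate
              d[r] = stats.kstest(x, "lognorm", args=(shape, loc, scale)).statistic
          print(f"n = {n:4d}  KS 5% critical value ~ {np.quantile(d, 1 - alpha):.4f}")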

  20. Measures of effect size for chi-squared and likelihood-ratio goodness-of-fit tests.

    Science.gov (United States)

    Johnston, Janis E; Berry, Kenneth J; Mielke, Paul W

    2006-10-01

    A fundamental shift in editorial policy for psychological journals was initiated when the fourth edition of the Publication Manual of the American Psychological Association (1994) placed emphasis on reporting measures of effect size. This paper presents measures of effect size for the chi-squared and likelihood-ratio goodness-of-fit test statistics.
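
    The paper's own measures are not reproduced in this record; as a reminder of the general idea, one widely used effect size for a chi-squared goodness-of-fit test is Cohen's w = sqrt(chi2 / N), sketched below on invented counts.

      import numpy as np
      from scipy import stats

      observed = np.array([44, 56, 60, 40])            # hypothetical cell counts
      expected = np.full(4, observed.sum() / 4)        # equiprobable null
      chi2, p = stats.chisquare(observed, expected)
      w = np.sqrt(chi2 / observed.sum())               # Cohen's w effect size
      print(f"chi2 = {chi2:.2f}, p = {p:.3f}, w = {w:.3f}")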

  1. Mapping the perceptual structure of rectangles through goodness-of-fit ratings.

    Science.gov (United States)

    Palmer, Stephen E; Guidi, Stefano

    2011-01-01

    Three experiments were carried out to investigate the internal structure of a rectangular frame to test Arnheim's (1974 Art and Visual Perception, 1988 The Power of the Center) proposals about its 'structural skeleton'. Observers made subjective ratings of how well a small probe circle fit within a rectangle at different interior positions. In experiment 1, ratings of 77 locations were highest in the center, decreased with distance from the center, greatly elevated along vertical and horizontal symmetry axes, and somewhat elevated along the local symmetry axes. A linear regression model with six symmetry-related factors accounted for 95% of the variance. In experiment 2 we measured perceived fit along local symmetry axes versus global diagonals near the corners to determine which factor was relevant. 2AFC probabilities were elevated only along the local symmetry axes and were higher when the probe was closer to the vertex. In experiment 3 we examined the effect of dividing a rectangular frame into two rectangular 'subframes' using an additional line. The results show that the primary determinant of good fit is the position of the target circle within the local subframes. In general, the results are consistent with Arnheim's proposals about the internal structure of a rectangular frame, but an alternative interpretation is offered in terms of the Gestalt concept of figural goodness.

  2. Goodness of fit between prenatal maternal sleep and infant sleep: Associations with maternal depression and attachment security.

    Science.gov (United States)

    Newland, Rebecca P; Parade, Stephanie H; Dickstein, Susan; Seifer, Ronald

    2016-08-01

    The current study prospectively examined the ways in which goodness of fit between maternal and infant sleep contributes to maternal depressive symptoms and the mother-child relationship across the first years of life. In a sample of 173 mother-child dyads, maternal prenatal sleep, infant sleep, maternal depressive symptoms, and mother-child attachment security were assessed via self-report, actigraphy, and observational measures. Results suggested that a poor fit between mothers' prenatal sleep and infants' sleep at 8 months (measured by sleep diary and actigraphy) was associated with maternal depressive symptoms at 15 months. Additionally, maternal depression mediated the association between the interplay of mother and infant sleep (measured by sleep diary) and mother-child attachment security at 30 months. Findings emphasize the importance of the match between mother and infant sleep on maternal wellbeing and mother-child relationships and highlight the role of mothers' perceptions of infant sleep. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Statistical energy as a tool for binning-free, multivariate goodness-of-fit tests, two-sample comparison and unfolding

    International Nuclear Information System (INIS)

    Aslan, B.; Zech, G.

    2005-01-01

    We introduce the novel concept of statistical energy as a statistical tool. We define the statistical energy of statistical distributions in a similar way as for electric charge distributions. Charges of opposite sign are in a state of minimum energy if they are equally distributed. This property is used to check whether two samples belong to the same parent distribution, to define goodness-of-fit tests, and to unfold distributions distorted by measurement. The approach is binning-free and especially powerful in multidimensional applications.
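
    A minimal two-sample version of the statistical energy idea can be written down directly: opposite "charges" (the two samples) have low energy when they interpenetrate well. The sketch below uses Euclidean distances and a permutation null; the authors' formulation may weight distances differently.

      import numpy as np
      from scipy.spatial.distance import cdist

      def energy_statistic(x, y):
          # Two-sample energy: 2*E|X-Y| - E|X-X'| - E|Y-Y'|, small when the
          # samples look like draws from the same parent distribution.
          return 2 * cdist(x, y).mean() - cdist(x, x).mean() - cdist(y, y).mean()

      rng = np.random.default_rng(7)
      x = rng.normal(0.0, 1.0, (100, 2))
      y = rng.normal(0.3, 1.0, (120, 2))
      obs = energy_statistic(x, y)

      # Permutation null distribution: binning-free and dimension-agnostic
      pooled, nx = np.vstack([x, y]), len(x)
      perm = []
      for _ in range(500):
          rng.shuffle(pooled)
          perm.append(energy_statistic(pooled[:nx], pooled[nx:]))
      print(f"energy = {obs:.4f}, permutation p = {np.mean(np.array(perm) >= obs):.3f}")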

  5. A new calibrated Bayesian internal goodness-of-fit method: sampled posterior p-values as simple and general p-values that allow double use of the data.

    Directory of Open Access Journals (Sweden)

    Frédéric Gosselin

    Full Text Available BACKGROUND: Recent approaches mixing frequentist principles with Bayesian inference propose internal goodness-of-fit (GOF) p-values that might be valuable for critical analysis of Bayesian statistical models. However, GOF p-values developed to date only have known probability distributions under restrictive conditions. As a result, no known GOF p-value has a known probability distribution for any discrepancy function. METHODOLOGY/PRINCIPAL FINDINGS: We show mathematically that a new GOF p-value, called the sampled posterior p-value (SPP), asymptotically has a uniform probability distribution whatever the discrepancy function. In a moderate finite-sample context, simulations also showed that the SPP appears stable to relatively uninformative misspecifications of the prior distribution. CONCLUSIONS/SIGNIFICANCE: These reasons, together with its numerical simplicity, make the SPP a better canonical GOF p-value than existing GOF p-values.

  6. A closer look at the effect of preliminary goodness-of-fit testing for normality for the one-sample t-test.

    Science.gov (United States)

    Rochon, Justine; Kieser, Meinhard

    2011-11-01

    Student's one-sample t-test is a commonly used method when inference about the population mean is made. As advocated in textbooks and articles, the assumption of normality is often checked by a preliminary goodness-of-fit (GOF) test. In a paper recently published by Schucany and Ng it was shown that, for the uniform distribution, screening of samples by a pretest for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without preliminary GOF test. In contrast, for the exponential distribution, the conditional level is even more elevated than the Type I error rate of the t-test without pretest. We examine the reasons behind these characteristics. In a simulation study, samples drawn from the exponential, lognormal, uniform, Student's t-distribution with 2 degrees of freedom (t(2) ) and the standard normal distribution that had passed normality screening, as well as the ingredients of the test statistics calculated from these samples, are investigated. For non-normal distributions, we found that preliminary testing for normality may change the distribution of means and standard deviations of the selected samples as well as the correlation between them (if the underlying distribution is non-symmetric), thus leading to altered distributions of the resulting test statistics. It is shown that for skewed distributions the excess in Type I error rate may be even more pronounced when testing one-sided hypotheses. ©2010 The British Psychological Society.

  7. Figure-of-merit (FOM), an improved criterion over the normalized chi-squared test for assessing goodness-of-fit of gamma-ray spectral peaks

    International Nuclear Information System (INIS)

    Garo Balian, H.; Eddy, N.W.

    1977-01-01

    A careful experimenter knows that in order to choose the best curve fits of peaks from a gamma-ray spectrum for such purposes as energy or intensity calibration, half-life determination, etc., the application of the normalized chi-squared test, χ²_N = χ²/(n−m), is insufficient. One must normally verify the goodness-of-fit with plots, detailed scans of residuals, etc. Because of different techniques of application, variations in backgrounds, in peak sizes and shapes, etc., quotation of the χ²_N value associated with an individual peak fit conveys very little information unless accompanied by considerable ancillary data. (This is not to say that the traditional χ² formula should not be used as the source of the normal equations in the least-squares fitting procedure. But after the fitting, it is unreliable as a criterion for comparison with other fits.) The authors present a formula designated figure-of-merit (FOM) which greatly improves on the uncertainty and fluctuations of the χ²_N formula. An FOM value of less than 2.5% indicates a good fit (in the authors' judgement) irrespective of background conditions and variations in peak sizes and shapes. Furthermore, the authors feel the FOM formula is less subject to fluctuations resulting from different techniques of application. (Auth.)

  8. A chi-square goodness-of-fit test for non-identically distributed random variables: with application to empirical Bayes

    International Nuclear Information System (INIS)

    Conover, W.J.; Cox, D.D.; Martz, H.F.

    1997-12-01

    When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.

  9. Application of tests of goodness of fit in determining the probability density function for spacing of steel sets in tunnel support system

    Directory of Open Access Journals (Sweden)

    Farnoosh Basaligheh

    2015-12-01

    Full Text Available One of the conventional methods for temporary support of tunnels is to use steel sets with shotcrete. The nature of a temporary support system demands a quick installation of its structures. As a result, the spacing between steel sets is not a fixed amount and it can be considered as a random variable. Hence, in the reliability analysis of these types of structures, the selection of an appropriate probability distribution function for the spacing of steel sets is essential. In the present paper, the distances between steel sets are collected from an under-construction tunnel and the collected data is used to suggest a proper Probability Distribution Function (PDF) for the spacing of steel sets. The tunnel has two different excavation sections. In this regard, different distribution functions were investigated and three common tests of goodness of fit were used for the evaluation of each function for each excavation section. Results from all three methods indicate that the Wakeby distribution function can be suggested as the proper PDF for the spacing between the steel sets. It is also noted that, although the probability distribution function for the two different tunnel sections is the same, the parameters of the PDF for the individual sections differ from each other.

  10. AN EXACT GOODNESS-OF-FIT TEST BASED ON THE OCCUPANCY PROBLEMS TO STUDY ZERO-INFLATION AND ZERO-DEFLATION IN BIOLOGICAL DOSIMETRY DATA.

    Science.gov (United States)

    Fernández-Fontelo, Amanda; Puig, Pedro; Ainsbury, Elizabeth A; Higueras, Manuel

    2018-01-12

    The goal in biological dosimetry is to estimate the dose of radiation that a suspected irradiated individual has received. For that, the analysis of aberrations (most commonly dicentric chromosome aberrations) in scored cells is performed and dose response calibration curves are built. In whole body irradiation (WBI) with X- and gamma-rays, the number of aberrations in samples is properly described by the Poisson distribution, although in partial body irradiation (PBI) the excess of zeros provided by the non-irradiated cells leads, for instance, to the Zero-Inflated Poisson distribution. Different methods are used to analyse the dosimetry data taking into account the distribution of the sample. In order to test the Poisson distribution against the Zero-Inflated Poisson distribution, several asymptotic and exact methods have been proposed which are focused on the dispersion of the data. In this work, we suggest an exact test for the Poisson distribution focused on the zero-inflation of the data developed by Rao and Chakravarti (Some small sample tests of significance for a Poisson distribution. Biometrics 1956; 12 : 264-82.), derived from the problems of occupancy. An approximation based on the standard Normal distribution is proposed in those cases where the computation of the exact test can be tedious. A Monte Carlo Simulation study was performed in order to estimate empirical confidence levels and powers of the exact test and other tests proposed in the literature. Different examples of applications based on in vitro data and also data recorded in several radiation accidents are presented and discussed. A Shiny application which computes the exact test and other interesting goodness-of-fit tests for the Poisson distribution is presented in order to provide them to all interested researchers. © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Comparative analysis through probability distributions of a data set

    Science.gov (United States)

    Cristea, Gabriel; Constantinescu, Dan Mihai

    2018-02-01

    In practice, probability distributions are applied in such diverse fields as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, demography etc. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. There are a number of statistical methods available which can help us to select the best-fitting model. Some of the graphs display both input data and fitted distributions at the same time, as probability density and cumulative distribution. The goodness of fit tests can be used to determine whether a certain distribution is a good fit. The main idea used is to measure the “distance” between the data and the tested distribution, and compare that distance to some threshold values. Calculating the goodness of fit statistics also enables us to order the fitted distributions according to how well they fit the data. This particular feature is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness of fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large set of data is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions and selecting the best model. These graphs should be viewed as an addition to the goodness of fit tests.
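
    The workflow described here (fit several candidates, compute a distance, rank the fits) can be sketched with scipy on synthetic data. Since the parameters are estimated from the same data, the KS distance below is used only to order the fitted models, not to produce valid p-values.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)
      data = stats.gamma.rvs(2.0, scale=3.0, size=500, random_state=rng)

      # Fit candidate distributions and rank them by KS distance to the data
      results = []
      for dist in (stats.gamma, stats.lognorm, stats.weibull_min):
          params = dist.fit(data)
          ks = stats.kstest(data, dist.name, args=params).statistic
          results.append((ks, dist.name))
      for ks, name in sorted(results):
          print(f"{name:12s} KS distance = {ks:.4f}")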

  12. How Should We Assess the Fit of Rasch-Type Models? Approximating the Power of Goodness-of-Fit Statistics in Categorical Data Analysis

    Science.gov (United States)

    Maydeu-Olivares, Alberto; Montano, Rosa

    2013-01-01

    We investigate the performance of three statistics, R [subscript 1], R [subscript 2] (Glas in "Psychometrika" 53:525-546, 1988), and M [subscript 2] (Maydeu-Olivares & Joe in "J. Am. Stat. Assoc." 100:1009-1020, 2005, "Psychometrika" 71:713-732, 2006) to assess the overall fit of a one-parameter logistic model…

  13. Modelling binary data

    CERN Document Server

    Collett, David

    2002-01-01

    INTRODUCTION Some Examples The Scope of this Book Use of Statistical Software STATISTICAL INFERENCE FOR BINARY DATA The Binomial Distribution Inference about the Success Probability Comparison of Two Proportions Comparison of Two or More Proportions MODELS FOR BINARY AND BINOMIAL DATA Statistical Modelling Linear Models Methods of Estimation Fitting Linear Models to Binomial Data Models for Binomial Response Data The Linear Logistic Model Fitting the Linear Logistic Model to Binomial Data Goodness of Fit of a Linear Logistic Model Comparing Linear Logistic Models Linear Trend in Proportions Comparing Stimulus-Response Relationships Non-Convergence and Overfitting Some other Goodness of Fit Statistics Strategy for Model Selection Predicting a Binary Response Probability BIOASSAY AND SOME OTHER APPLICATIONS The Tolerance Distribution Estimating an Effective Dose Relative Potency Natural Response Non-Linear Logistic Regression Models Applications of the Complementary Log-Log Model MODEL CHECKING Definition of Re...

  14. An Analysis of Cross Racial Identity Scale Scores Using Classical Test Theory and Rasch Item Response Models

    Science.gov (United States)

    Sussman, Joshua; Beaujean, A. Alexander; Worrell, Frank C.; Watson, Stevie

    2013-01-01

    Item response models (IRMs) were used to analyze Cross Racial Identity Scale (CRIS) scores. Rasch analysis scores were compared with classical test theory (CTT) scores. The partial credit model demonstrated a high goodness of fit and correlations between Rasch and CTT scores ranged from 0.91 to 0.99. CRIS scores are supported by both methods.…

  15. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness of the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structure model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model

  16. Evaluation of pharmacokinetic model designs for subcutaneous infusion of insulin aspart

    DEFF Research Database (Denmark)

    Mansell, Erin J.; Schmidt, Signe; Docherty, Paul D.

    2017-01-01

    Effective mathematical modelling of continuous subcutaneous infusion pharmacokinetics should aid understanding and control in insulin therapy. Thorough analysis of candidate model performance is important for selecting the appropriate models. Eight candidate models for insulin pharmacokinetics included a range of modelled behaviours, parameters and complexity. The models were compared using clinical data from subjects with type 1 diabetes on continuous subcutaneous insulin infusion. Performance of the models was compared through several analyses: R2 for goodness of fit; the Akaike Information Criterion...

  17. Forecasting production of fossil fuel sources in Turkey using a comparative regression and ARIMA model

    International Nuclear Information System (INIS)

    Ediger, Volkan S.; Akar, Sertac; Ugurlu, Berkin

    2006-01-01

    This study aims at forecasting the most probable curve for domestic fossil fuel production of Turkey, to help policy makers develop policy implications for the rapidly growing dependency problem on imported fossil fuels. The fossil fuel dependency problem is international in scope and context, and Turkey is a typical example for the emerging energy markets of the developing world. We developed a decision support system for forecasting fossil fuel production by applying regression, ARIMA and SARIMA methods to the historical data from 1950 to 2003 in a comparative manner. The method integrates the models by using decision parameters related to goodness-of-fit and confidence interval, behavior of the curve, and reserves. Different forecasting models are proposed for different fossil fuel types. The best result is obtained for oil, since the reserve classifications used for it are much better defined than for the others. Our findings show that the fossil fuel production peak has already been reached, indicating that the total fossil fuel production of the country will diminish and theoretically end in 2038. However, production is expected to end in 2019 for hard coal, in 2024 for natural gas, in 2029 for oil and in 2031 for asphaltite. The gap between fossil fuel consumption and production is growing enormously, reaching by 2030 approximately twice its 2000 level.

  18. Fault diagnosis and comparing risk for the steel coil manufacturing process using statistical models for binary data

    International Nuclear Information System (INIS)

    Debón, A.; Carlos Garcia-Díaz, J.

    2012-01-01

    Advanced statistical models can help industry to design more economical and rational investment plans. Fault detection and diagnosis is an important problem in continuous hot dip galvanizing. Increasingly stringent quality requirements in the automotive industry also require ongoing efforts in process control to make processes more robust. Robust methods for estimating the quality of galvanized steel coils are an important tool for the comprehensive monitoring of the performance of the manufacturing process. This study applies different statistical regression models: generalized linear models, generalized additive models and classification trees to estimate the quality of galvanized steel coils on the basis of short time histories. The data, consisting of 48 galvanized steel coils, was divided into sets of conforming and nonconforming coils. Five variables were selected for monitoring the process: steel strip velocity and four bath temperatures. The present paper reports a comparative evaluation of statistical models for binary data using Receiver Operating Characteristic (ROC) curves. A ROC curve is a graph or a technique for visualizing, organizing and selecting classifiers based on their performance. The purpose of this paper is to examine their use in research to obtain the best model to predict defective steel coil probability. In relation to the work of other authors who only propose goodness of fit statistics, we should highlight one distinctive feature of the methodology presented here, which is the possibility of comparing the different models with ROC graphs which are based on model classification performance. Finally, the results are validated by bootstrap procedures.

  19. A comparative evaluation of risk-adjustment models for benchmarking amputation-free survival after lower extremity bypass.

    Science.gov (United States)

    Simons, Jessica P; Goodney, Philip P; Flahive, Julie; Hoel, Andrew W; Hallett, John W; Kraiss, Larry W; Schanzer, Andres

    2016-04-01

    Providing patients and payers with publicly reported risk-adjusted quality metrics for the purpose of benchmarking physicians and institutions has become a national priority. Several prediction models have been developed to estimate outcomes after lower extremity revascularization for critical limb ischemia, but the optimal model to use in contemporary practice has not been defined. We sought to identify the highest-performing risk-adjustment model for amputation-free survival (AFS) at 1 year after lower extremity bypass (LEB). We used the national Society for Vascular Surgery Vascular Quality Initiative (VQI) database (2003-2012) to assess the performance of three previously validated risk-adjustment models for AFS. The Bypass versus Angioplasty in Severe Ischaemia of the Leg (BASIL), Finland National Vascular (FINNVASC) registry, and the modified Project of Ex-vivo vein graft Engineering via Transfection III (PREVENT III [mPIII]) risk scores were applied to the VQI cohort. A novel model for 1-year AFS was also derived using the VQI data set and externally validated using the PIII data set. The relative discrimination (Harrell c-index) and calibration (Hosmer-May goodness-of-fit test) of each model were compared. Among 7754 patients in the VQI who underwent LEB for critical limb ischemia, the AFS was 74% at 1 year. Each of the previously published models for AFS demonstrated similar discriminative performance: c-indices for BASIL, FINNVASC, mPIII were 0.66, 0.60, and 0.64, respectively. The novel VQI-derived model had improved discriminative ability with a c-index of 0.71 and appropriate generalizability on external validation with a c-index of 0.68. The model was well calibrated in both the VQI and PIII data sets (goodness of fit P = not significant). Currently available prediction models for AFS after LEB perform modestly when applied to national contemporary VQI data. Moreover, the performance of each model was inferior to that of the novel VQI-derived model

  20. Predicting and Modelling of Survival Data when Cox's Regression Model does not hold

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects

  1. Alternative regression models to assess increase in childhood BMI

    OpenAIRE

    Beyerlein, Andreas; Fahrmeir, Ludwig; Mansmann, Ulrich; Toschke, André M

    2008-01-01

    Abstract Background Body mass index (BMI) data usually have skewed distributions, for which common statistical modeling approaches such as simple linear or logistic regression have limitations. Methods Different regression approaches to predict childhood BMI by goodness-of-fit measures and means of interpretation were compared including generalized linear models (GLMs), quantile regression and Generalized Additive Models for Location, Scale and Shape (GAMLSS). We analyzed data of 4967 childre...

  2. Extensions and Applications of the Cox-Aalen Survival Model

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2003-01-01

    Aalen additive risk model; competing risk; counting processes; Cox model; cumulative incidence function; goodness of fit; prediction of survival probability; time-varying effects

  3. Comparative testing of dark matter models with 15 HSB and 15 LSB galaxies

    Science.gov (United States)

    Kun, E.; Keresztes, Z.; Simkó, A.; Szűcs, G.; Gergely, L. Á.

    2017-12-01

    Context. We assemble a database of 15 high surface brightness (HSB) and 15 low surface brightness (LSB) galaxies, for which surface brightness density and spectroscopic rotation curve data are both available and representative for various morphologies. We use this dataset to test the Navarro-Frenk-White, the Einasto, and the pseudo-isothermal sphere dark matter models. Aims: We investigate the compatibility of the pure baryonic model, and of the baryonic model plus one of the three dark matter models, with observations on the assembled galaxy database. When a dark matter component improves the fit with the spectroscopic rotational curve, we rank the models according to the goodness of fit to the datasets. Methods: We constructed the spatial luminosity density of the baryonic component based on the surface brightness profile of the galaxies. We estimated the mass-to-light (M/L) ratio of the stellar component through a previously proposed color-mass-to-light ratio relation (CMLR), which yields stellar masses independent of the photometric band. We assumed an axisymmetric baryonic mass model with variable axis ratios together with one of the three dark matter models to provide the theoretical rotational velocity curves, and we compared them with the dataset. In a second attempt, we addressed the question whether the dark component could be replaced by a pure baryonic model with fitted M/L ratios, varied over ranges consistent with CMLR relations derived from the available stellar population models. We employed the Akaike information criterion to establish the performance of the best-fit models. Results: For 7 galaxies (2 HSB and 5 LSB), neither model fits the dataset within the 1σ confidence level. For the other 23 cases, one of the models with dark matter explains the rotation curve data best. According to the Akaike information criterion, the pseudo-isothermal sphere emerges as most favored in 14 cases, followed by the Navarro-Frenk-White (6 cases) and the Einasto (3 cases) dark matter models.

  4. Methods of comparing associative models and an application to retrospective revaluation.

    Science.gov (United States)

    Witnauer, James E; Hutchings, Ryan; Miller, Ralph R

    2017-11-01

    Contemporary theories of associative learning are increasingly complex, which necessitates the use of computational methods to reveal predictions of these models. We argue that comparisons across multiple models in terms of goodness of fit to empirical data from experiments often reveal more about the actual mechanisms of learning and behavior than do simulations of only a single model. Such comparisons are best made when the values of free parameters are discovered through some optimization procedure based on the specific data being fit (e.g., hill climbing), so that the comparisons hinge on the psychological mechanisms assumed by each model rather than being biased by using parameters that differ in quality across models with respect to the data being fit. Statistics like the Bayesian information criterion facilitate comparisons among models that have different numbers of free parameters. These issues are examined using retrospective revaluation data. Copyright © 2017 Elsevier B.V. All rights reserved.
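
    The Bayesian information criterion mentioned above penalises the maximized log-likelihood by the number of free parameters discovered through optimization; a minimal sketch, with invented log-likelihoods for two hypothetical associative models fit to the same data, follows.

      import numpy as np

      def bic(log_likelihood, n_params, n_obs):
          # Lower BIC is better; the n_params term penalises models whose
          # extra free parameters were tuned to the data being fit.
          return n_params * np.log(n_obs) - 2.0 * log_likelihood

      # Hypothetical fits of two models to the same data set (n = 200)
      print(f"model A: BIC = {bic(-410.2, 3, 200):.1f}")
      print(f"model B: BIC = {bic(-404.9, 6, 200):.1f}")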

  5. Comparing the Discrete and Continuous Logistic Models

    Science.gov (United States)

    Gordon, Sheldon P.

    2008-01-01

    The solutions of the discrete logistic growth model based on a difference equation and the continuous logistic growth model based on a differential equation are compared and contrasted. The investigation is conducted using a dynamic interactive spreadsheet. (Contains 5 figures.)
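
    The comparison in this record can be reproduced without a spreadsheet: iterate the logistic difference equation and evaluate the closed-form solution of the logistic differential equation at the same times. The parameter values below are arbitrary.

      import numpy as np

      r, K, p0, steps = 0.8, 100.0, 5.0, 15     # growth rate, capacity, start

      # Discrete model: p[t+1] = p[t] + r p[t] (1 - p[t]/K)
      p, discrete = p0, [p0]
      for _ in range(steps):
          p = p + r * p * (1 - p / K)
          discrete.append(p)

      # Continuous model: closed-form logistic solution at integer times
      t = np.arange(steps + 1)
      continuous = K / (1 + (K / p0 - 1) * np.exp(-r * t))

      for n in (0, 5, 10, 15):
          print(f"t = {n:2d}: discrete = {discrete[n]:7.2f}, "
                f"continuous = {continuous[n]:7.2f}")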

  6. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    In this era, there are many business process modelling techniques. This article presents research on the differences between business process modelling techniques. For each technique, the definition and the structure are explained. This paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation, and how the technique works when implemented in Somerleyton Animal Park. Each technique's discussion ends with its advantages and disadvantages. The final conclusion recommends business process modelling techniques that are easy to use and can serve as the basis for evaluating further modelling techniques.

  7. COMPARATIVE ANALYSIS OF SOFTWARE DEVELOPMENT MODELS

    OpenAIRE

    Sandeep Kaur*

    2017-01-01

    No geek is unfamiliar with the concept of the software development life cycle (SDLC). This research deals with the various SDLC models, covering the waterfall, spiral, iterative, agile, V-shaped and prototype models. In the modern era, all software systems are fallible, as none can be guaranteed to work with certainty. We therefore attempt to compare all aspects of the various models, with their pros and cons, so that it is easier to choose a particular model when needed.

  8. Is it Worth Comparing Different Bankruptcy Models?

    Directory of Open Access Journals (Sweden)

    Miroslava Dolejšová

    2015-01-01

    Full Text Available The aim of this paper is to compare the performance of small enterprises in the Zlín and Olomouc Regions. These enterprises were assessed using the Altman Z-Score model, the IN05 model, the Zmijewski model and the Springate model. The batch selected for this analysis included 16 enterprises from the Zlín Region and 16 enterprises from the Olomouc Region. Financial statements subjected to the analysis are from 2006 and 2010. The statistical data analysis was performed using the one-sample z-test for proportions and the paired t-test. The outcomes of the evaluation run using the Altman Z-Score model, the IN05 model and the Springate model revealed the enterprises to be financially sound, but the Zmijewski model identified them as being insolvent. The one-sample z-test for proportions confirmed that at least 80% of these enterprises show a sound financial condition. A comparison of all models has emphasized the substantial difference produced by the Zmijewski model. The paired t-test showed that the financial performance of small enterprises had remained the same during the years involved. It is recommended that small enterprises assess their financial performance using two different bankruptcy models. They may wish to combine the Zmijewski model with any other bankruptcy model (the Altman Z-Score model, the IN05 model or the Springate model) to ensure a proper method of analysis.

  9. Wellness Model of Supervision: A Comparative Analysis

    Science.gov (United States)

    Lenz, A. Stephen; Sangganjanavanich, Varunee Faii; Balkin, Richard S.; Oliver, Marvarene; Smith, Robert L.

    2012-01-01

    This quasi-experimental study compared the effectiveness of the Wellness Model of Supervision (WELMS; Lenz & Smith, 2010) with alternative supervision models for developing wellness constructs, total personal wellness, and helping skills among counselors-in-training. Participants were 32 master's-level counseling students completing their…

  10. H I versus H α - comparing the kinematic tracers in modelling the initial conditions of the Mice

    Science.gov (United States)

    Mortazavi, S. Alireza; Lotz, Jennifer M.; Barnes, Joshua E.; Privon, George C.; Snyder, Gregory F.

    2018-03-01

    We explore the effect of using different kinematic tracers (H I and H α) on reconstructing the encounter parameters of the Mice major galaxy merger (NGC 4676A/B). We observed the Mice using the SparsePak Integral Field Unit (IFU) on the WIYN telescope, and compared the H α velocity map with VLA H I observations. The relatively high spectral resolution of our data (R ≈ 5000) allows us to resolve more than one kinematic component in the emission lines of some fibres. We separate the H α-[N II] emission of the star-forming regions from shocks using their [N II]/H α line ratio and velocity dispersion. We show that the velocity of star-forming regions agrees with that of the cold gas (H I), particularly in the tidal tails of the system. We reconstruct the morphology and kinematics of these tidal tails utilizing an automated modelling method based on the IDENTIKIT software package. We quantify the goodness of fit and the uncertainties of the derived encounter parameters. Most of the initial conditions reconstructed using H α and H I are consistent with each other, and qualitatively agree with the results of previous works. For example, we find 210^{+50}_{-40} Myr and 180^{+50}_{-40} Myr for the time since pericentre, when modelling H α and H I kinematics, respectively. This confirms that in some cases, H α kinematics can be used instead of H I kinematics for reconstructing the initial conditions of galaxy mergers, and our automated modelling method is applicable to some merging systems.

  11. Comparing flood loss models of different complexity

    Science.gov (United States)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno

    2013-04-01

    Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate the cost-effectiveness of mitigation measures, to assess vulnerability, and for comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both concerning the data basis and the methodological approaches used for the development of flood loss models. Despite this, flood loss models remain an important source of uncertainty. Likewise, the temporal and spatial transferability of flood loss models is still limited. This contribution investigates the predictive capability of different flood loss models in a split-sample, cross-regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explaining variables, are learned from a set of damage records that was obtained from a survey after the Elbe flood in 2002. The validation of model predictions is carried out for different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006, for which damage records are available from surveys after the flood events. The models investigated are a stage-damage model, the rule-based model FLEMOps+r, as well as novel model approaches which are derived using the data mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explaining variables.

  12. Comparing linear probability model coefficients across groups

    DEFF Research Database (Denmark)

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

    This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.

  13. Comparative study of void fraction models

    International Nuclear Information System (INIS)

    Borges, R.C.; Freitas, R.L.

    1985-01-01

    Some models for the calculation of void fraction in water in sub-cooled boiling and saturated vertical upward flow with forced convection have been selected and compared with experimental results in the pressure range of 1 to 150 bar. In order to know the void fraction axial distribution it is necessary to determine the net generation of vapour and the fluid temperature distribution in the slightly sub-cooled boiling region. It was verified that the net generation of vapour was well represented by the Saha-Zuber model. The selected models for the void fraction calculation present adequate results but with a tendency to overestimate the experimental results, in particular the homogeneous models. The drift flux model is recommended, followed by the Armand and Smith models. (F.E.)

  14. Comparing coefficients of nested nonlinear probability models

    DEFF Research Database (Denmark)

    Kohler, Ulrich; Karlson, Kristian Bernt; Holm, Anders

    2011-01-01

    In a series of recent articles, Karlson, Holm and Breen have developed a method for comparing the estimated coefficients of two nested nonlinear probability models. This article describes this method and the user-written program khb that implements it. The KHB-method is a general decomposition method that is unaffected by the rescaling or attenuation bias that arises in cross-model comparisons in nonlinear models. It recovers the degree to which a control variable, Z, mediates or explains the relationship between X and a latent outcome variable, Y*, underlying the nonlinear probability model.

  15. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If companies are aware of their potential bankruptcy, they can take preventive action to anticipate it. In order to detect potential bankruptcy, a company can utilize a bankruptcy prediction model. Such a prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), the study shows that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.

  16. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    Science.gov (United States)

    The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors.

  17. Fitting Hidden Markov Models to Psychological Data

    Directory of Open Access Journals (Sweden)

    Ingmar Visser

    2002-01-01

    Full Text Available Markov models have been used extensively in the psychology of learning. Applications of hidden Markov models are rare, however. This is partially due to the fact that comprehensive statistics for model selection and model assessment are lacking in the psychological literature. We present model selection and model assessment statistics that are particularly useful in applying hidden Markov models in psychology. These statistics are presented and evaluated by simulation studies for a toy example. We compare AIC, BIC and related criteria, and introduce a prediction error measure for assessing goodness-of-fit. In a simulation study, two methods of fitting equality constraints are compared. In two illustrative examples with experimental data we apply selection criteria, fit models with constraints and assess goodness-of-fit. First, data from a concept identification task are analyzed. Hidden Markov models provide a flexible approach to analyzing such data when compared to other modeling methods. Second, a novel application of hidden Markov models in implicit learning is presented. Hidden Markov models are used in this context to quantify the knowledge that subjects express in an implicit learning task. This method of analyzing implicit learning data provides a comprehensive approach for addressing important theoretical issues in the field.
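
    A minimal sketch of the model-selection step discussed here, assuming the third-party hmmlearn package, fits Gaussian hidden Markov models with a varying number of states and compares them by BIC; the free-parameter count is only a rough approximation for diagonal-covariance, 1-D observations.

      import numpy as np
      from hmmlearn import hmm   # assumed third-party package

      rng = np.random.default_rng(9)
      # Hypothetical 1-D observation sequence from a two-regime process
      X = np.concatenate([rng.normal(0, 1, 200),
                          rng.normal(4, 1, 200)]).reshape(-1, 1)

      for k in (1, 2, 3):
          model = hmm.GaussianHMM(n_components=k, random_state=0).fit(X)
          ll = model.score(X)                       # log-likelihood of the fit
          n_params = k * (k - 1) + (k - 1) + 2 * k  # transitions, start, means, vars
          bic = n_params * np.log(len(X)) - 2 * ll
          print(f"states = {k}: logL = {ll:8.2f}, BIC = {bic:8.2f}")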

  18. Goodness-of-fit tests for a heavy tailed distribution

    NARCIS (Netherlands)

    A.J. Koning (Alex); L. Peng (Liang)

    2005-01-01

    textabstractFor testing whether a distribution function is heavy tailed, we study the Kolmogorov test, Berk-Jones test, score test and their integrated versions. A comparison is conducted via Bahadur efficiency and simulations. The score test and the integrated score test show the best

  19. Two-dimensional goodness-of-fit testing in astronomy

    International Nuclear Information System (INIS)

    Peacock, J.A

    1983-01-01

    This paper deals with the techniques available to test for consistency between the empirical distribution of data points on a plane and a hypothetical density law. Two new statistical tests are developed. The first is a two-dimensional version of the Kolmogorov-Smirnov test, for which the distribution of the test statistic is investigated using a Monte Carlo method. This test is found in practice to be very nearly distribution-free, and empirical formulae for the confidence levels are given. Secondly, the method of power-spectrum analysis is extended to deal with cases in which the null hypothesis is not a uniform distribution. These methods are illustrated by application to the distribution of quasar candidates found on an objective-prism plate of the Virgo Cluster. (author)
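
    A simplified, two-sample flavour of the two-dimensional Kolmogorov-Smirnov idea (quadrant fractions anchored at every data point, in the spirit of this test and the later Fasano-Franceschini variant) can be Monte-Carloed directly; the one-sample test against a hypothetical density law would require the model's quadrant probabilities in place of the second sample.

      import numpy as np

      def ks2d(a, b):
          # Max difference between the empirical quadrant fractions of the two
          # samples, over the four quadrants anchored at every data point.
          d = 0.0
          for px, py in np.vstack([a, b]):
              for sx in (1, -1):
                  for sy in (1, -1):
                      fa = np.mean((sx * (a[:, 0] - px) > 0) & (sy * (a[:, 1] - py) > 0))
                      fb = np.mean((sx * (b[:, 0] - px) > 0) & (sy * (b[:, 1] - py) > 0))
                      d = max(d, abs(fa - fb))
          return d

      rng = np.random.default_rng(10)
      a = rng.normal(0.0, 1.0, (60, 2))
      b = rng.normal(0.5, 1.0, (60, 2))
      obs = ks2d(a, b)

      pooled, n = np.vstack([a, b]), len(a)
      perm = []
      for _ in range(200):        # permutation null of the statistic
          rng.shuffle(pooled)
          perm.append(ks2d(pooled[:n], pooled[n:]))
      print(f"D = {obs:.3f}, permutation p = {np.mean(np.array(perm) >= obs):.3f}")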

  20. Comparing numerically exact and modelled static friction

    Directory of Open Access Journals (Sweden)

    Krengel Dominik

    2017-01-01

    Full Text Available Currently there exists no mechanically consistent “numerically exact” implementation of static and dynamic Coulomb friction for general soft-particle simulations with arbitrary contact situations in two or three dimensions, but only along one dimension. We outline a differential-algebraic equation approach for a “numerically exact” computation of friction in two dimensions and compare its application to the Cundall-Strack model in some test cases.

  1. Some Improved Diagnostics for Failure of The Rasch Model.

    Science.gov (United States)

    Molenaar, Ivo W.

    1983-01-01

    Goodness of fit tests for the Rasch model are typically large-sample, global measures. This paper offers suggestions for small-sample exploratory techniques for examining the fit of item data to the Rasch model. (Author/JKS)

  2. Comparative analysis of Goodwin's business cycle models

    Science.gov (United States)

    Antonova, A. O.; Reznik, S.; Todorov, M. D.

    2016-10-01

    We compare the behavior of solutions of Goodwin's business cycle equation in the form of a neutral delay differential equation with fixed delay (NDDE model) and in the form of differential equations of 3rd, 4th and 5th order (ODE models). Such ODE models (Taylor series expansions of the NDDE in powers of θ) were proposed by N. Dharmaraj and K. Vela Velupillai [6] for investigating the short periodic sawtooth oscillations in the NDDE. We show that the ODEs of 3rd, 4th and 5th order may approximate the asymptotic behavior of only the main Goodwin mode, but not the sawtooth modes. If the order of the Taylor series expansion exceeds 5, the approximate ODE becomes unstable independently of the time lag θ.

  3. Comparing Realistic Subthalamic Nucleus Neuron Models

    Science.gov (United States)

    Njap, Felix; Claussen, Jens C.; Moser, Andreas; Hofmann, Ulrich G.

    2011-06-01

    The mechanism of action of clinically effective electrical high frequency stimulation is still under debate. However, recent evidence points at the specific activation of GABA-ergic ion channels. Using a computational approach, we analyze temporal properties of the spike trains emitted by biologically realistic neurons of the subthalamic nucleus (STN) as a function of GABA-ergic synaptic input conductances. Our contribution is based on a model proposed by Rubin and Terman and exhibits a wide variety of different firing patterns, silent, low spiking, moderate spiking and intense spiking activity. We observed that most of the cells in our network turn to silent mode when we increase the GABAA input conductance above the threshold of 3.75 mS/cm2. On the other hand, insignificant changes in firing activity are observed when the input conductance is low or close to zero. We thus reproduce Rubin's model with vanishing synaptic conductances. To quantitatively compare spike trains from the original model with the modified model at different conductance levels, we apply four different (dis)similarity measures between them. We observe that Mahalanobis distance, Victor-Purpura metric, and Interspike Interval distribution are sensitive to different firing regimes, whereas Mutual Information seems undiscriminative for these functional changes.

  4. Fitting and comparing competing models of the species abundance distribution: assessment and prospect

    Directory of Open Access Journals (Sweden)

    Thomas J Matthews

    2014-06-01

    Full Text Available A species abundance distribution (SAD) characterises patterns in the commonness and rarity of all species within an ecological community. As such, the SAD provides the theoretical foundation for a number of other biogeographical and macroecological patterns, such as the species–area relationship, as well as being an interesting pattern in its own right. While there has been a resurgence in the study of SADs in the last decade, less focus has been placed on methodology in SAD research, and few attempts have been made to synthesise the vast array of methods which have been employed in SAD model evaluation. As such, our review has two aims. First, we provide a general overview of SADs, including descriptions of the commonly used distributions, plotting methods and issues with evaluating SAD models. Second, we review a number of recent advances in SAD model fitting and comparison. We conclude by providing a list of recommendations for fitting and evaluating SAD models. We argue that it is time for SAD studies to move away from many of the traditional methods available for fitting and evaluating models, such as sole reliance on the visual examination of plots, and embrace statistically rigorous techniques. In particular, we recommend the use of both goodness-of-fit tests and model-comparison analyses because each provides unique information which one can use to draw inferences.
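
    A sketch of the recommended pairing of goodness-of-fit testing with model comparison, assuming SciPy and using two generic continuous distributions as stand-ins for competing SAD models:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        abundances = stats.lognorm.rvs(s=1.2, scale=20, size=150, random_state=rng)

        for name, dist in (("lognormal", stats.lognorm), ("gamma", stats.gamma)):
            params = dist.fit(abundances, floc=0)          # hold loc at zero
            ll = np.sum(dist.logpdf(abundances, *params))
            aic = 2 * (len(params) - 1) - 2 * ll           # loc not counted as free
            ks = stats.kstest(abundances, dist.cdf, args=params)
            print(f"{name}: AIC={aic:.1f}  KS p={ks.pvalue:.3f}")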

  5. Comprehensive ecosystem model-data synthesis using multiple data sets at two temperate forest free-air CO2 enrichment experiments: Model performance at ambient CO2 concentration

    Science.gov (United States)

    Walker, Anthony P.; Hanson, Paul J.; De Kauwe, Martin G.; Medlyn, Belinda E.; Zaehle, Sönke; Asao, Shinichi; Dietze, Michael; Hickler, Thomas; Huntingford, Chris; Iversen, Colleen M.; Jain, Atul; Lomas, Mark; Luo, Yiqi; McCarthy, Heather; Parton, William J.; Prentice, I. Colin; Thornton, Peter E.; Wang, Shusen; Wang, Ying-Ping; Warlind, David; Weng, Ensheng; Warren, Jeffrey M.; Woodward, F. Ian; Oren, Ram; Norby, Richard J.

    2014-05-01

    Free-air CO2 enrichment (FACE) experiments provide a remarkable wealth of data which can be used to evaluate and improve terrestrial ecosystem models (TEMs). In the FACE model-data synthesis project, 11 TEMs were applied to two decade-long FACE experiments in temperate forests of the southeastern U.S.—the evergreen Duke Forest and the deciduous Oak Ridge Forest. In this baseline paper, we demonstrate our approach to model-data synthesis by evaluating the models' ability to reproduce observed net primary productivity (NPP), transpiration, and leaf area index (LAI) in ambient CO2 treatments. Model outputs were compared against observations using a range of goodness-of-fit statistics. Many models simulated annual NPP and transpiration within observed uncertainty. We demonstrate, however, that high goodness-of-fit values do not necessarily indicate a successful model, because simulation accuracy may be achieved through compensating biases in component variables. For example, transpiration accuracy was sometimes achieved with compensating biases in leaf area index and transpiration per unit leaf area. Our approach to model-data synthesis therefore goes beyond goodness-of-fit to investigate the success of alternative representations of component processes. Here we demonstrate this approach by comparing competing model hypotheses determining peak LAI. Of three alternative hypotheses—(1) optimization to maximize carbon export, (2) increasing specific leaf area with canopy depth, and (3) the pipe model—the pipe model produced peak LAI closest to the observations. This example illustrates how data sets from intensive field experiments such as FACE can be used to reduce model uncertainty despite compensating biases by evaluating individual model assumptions.

  6. Functional dynamic factor models with application to yield curve forecasting

    KAUST Repository

    Hays, Spencer; Shen, Haipeng; Huang, Jianhua Z.

    2012-01-01

    resulted in a trade-off between goodness of fit and consistency with economic theory. To address this, herein we propose a novel formulation which connects the dynamic factor model (DFM) framework with concepts from functional data analysis: a DFM

  7. Validation of the LOD score compared with APACHE II score in prediction of the hospital outcome in critically ill patients.

    Science.gov (United States)

    Khwannimit, Bodin

    2008-01-01

    The Logistic Organ Dysfunction score (LOD) is an organ dysfunction score that can predict hospital mortality. The aim of this study was to validate the performance of the LOD score compared with the Acute Physiology and Chronic Health Evaluation II (APACHE II) score in a mixed intensive care unit (ICU) at a tertiary referral university hospital in Thailand. The data were collected prospectively on consecutive ICU admissions over a 24-month period from July 1, 2004 until June 30, 2006. Discrimination was evaluated by the area under the receiver operating characteristic curve (AUROC). Calibration was assessed by the Hosmer-Lemeshow goodness-of-fit H statistic. The overall fit of each model was evaluated by the Brier score. Overall, 1,429 patients were enrolled during the study period. Mortality in the ICU was 20.9% and in the hospital was 27.9%. The median ICU and hospital lengths of stay were 3 and 18 days, respectively, for all patients. Both models showed excellent discrimination. The AUROC for the LOD and APACHE II were 0.860 [95% confidence interval (CI) = 0.838-0.882] and 0.898 (95% CI = 0.879-0.917), respectively. The LOD score had good calibration, with a Hosmer-Lemeshow goodness-of-fit H statistic of chi-square = 10 (p = 0.44). However, the APACHE II had poor calibration, with a Hosmer-Lemeshow goodness-of-fit H statistic of chi-square = 75.69 (p < 0.001). The Brier scores showed the overall fit for the two models: 0.123 (95% CI = 0.107-0.141) for the LOD and 0.114 (95% CI = 0.098-0.132) for the APACHE II. Thus, the LOD score was found to be accurate for predicting hospital mortality for general critically ill patients in Thailand.
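
    A sketch of the three layers of evaluation used in the study (discrimination, calibration, overall fit), assuming scikit-learn and SciPy with synthetic predicted risks; the decile grouping in the Hosmer-Lemeshow statistic follows the usual convention:

        import numpy as np
        from scipy.stats import chi2
        from sklearn.metrics import roc_auc_score, brier_score_loss

        def hosmer_lemeshow(y, p, g=10):
            # Bin cases by predicted risk, then compare observed and
            # expected event counts within each bin.
            idx = np.argsort(p)
            h = 0.0
            for bin_ in np.array_split(idx, g):
                n, exp, obs = len(bin_), p[bin_].sum(), y[bin_].sum()
                pbar = exp / n
                h += (obs - exp) ** 2 / (n * pbar * (1 - pbar))
            return h, chi2.sf(h, g - 2)

        rng = np.random.default_rng(4)
        p = rng.uniform(0.01, 0.99, size=1000)   # predicted hospital mortality
        y = rng.binomial(1, p)                   # simulated outcomes
        h, pval = hosmer_lemeshow(y, p)
        print(f"AUROC={roc_auc_score(y, p):.3f}  Brier={brier_score_loss(y, p):.3f}  "
              f"HL H={h:.1f} (p={pval:.2f})")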

  8. Modelling the distribution of chickens, ducks, and geese in China

    Science.gov (United States)

    Prosser, Diann J.; Wu, Junxi; Ellis, Erie C.; Gale, Fred; Van Boeckel, Thomas P.; Wint, William; Robinson, Tim; Xiao, Xiangming; Gilbert, Marius

    2011-01-01

    Global concerns over the emergence of zoonotic pandemics emphasize the need for high-resolution population distribution mapping and spatial modelling. Ongoing efforts to model disease risk in China have been hindered by a lack of available species-level distribution maps for poultry. The goal of this study was to develop 1 km resolution population density models for China's chickens, ducks, and geese. We used an information theoretic approach to predict poultry densities based on statistical relationships between poultry census data and high-resolution agro-ecological predictor variables. Model predictions were validated by comparing goodness-of-fit measures (root mean square error and correlation coefficient) for observed and predicted values on the quarter of the sample data that was not used for model training. Final output included mean and coefficient-of-variation maps for each species. We tested the quality of models produced using three predictor datasets and four regional stratification methods. For predictor variables, a combination of traditional predictors for livestock mapping and land use predictors produced the best goodness-of-fit scores. Comparison of regional stratifications indicated that for chickens and ducks, a stratification based on livestock production systems produced the best results; for geese, an agro-ecological stratification produced the best results. However, for all species, each method of regional stratification produced significantly better goodness-of-fit scores than the global model. Here we provide descriptive methods, analytical comparisons, and model output for China's first high-resolution, species-level poultry distribution maps. Output will be made available to the scientific and public community for use in a wide range of applications, from epidemiological studies to livestock policy and management initiatives.

  9. Collision prediction models using multivariate Poisson-lognormal regression.

    Science.gov (United States)

    El-Basyouny, Karim; Sayed, Tarek

    2009-07-01

    This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of expected collision frequency. The MVPLN approach is compared with the independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness-of-fit, identification of hot spots and precision of expected collision frequency. The MVPLN is modeled using the WinBUGS platform which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness-of-fit, the MVPLN model provided a superior fit than the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis was restricted to the univariate models.

  10. Comparing the Goodness of Different Statistical Criteria for Evaluating the Soil Water Infiltration Models

    Directory of Open Access Journals (Sweden)

    S. Mirzaee

    2016-02-01

    Full Text Available Introduction: The infiltration process is one of the most important components of the hydrologic cycle. Quantifying the infiltration of water into soil is of great importance in watershed management. Prediction of flooding, erosion and pollutant transport all depends on the rate of runoff, which is directly affected by the rate of infiltration. Quantification of infiltration is also necessary to determine the availability of water for crop growth and to estimate the amount of additional water needed for irrigation. Thus, an accurate model is required to estimate the infiltration of water into soil. The ability of physical and empirical models to simulate soil processes is commonly measured through comparisons of simulated and observed values. For these reasons, a large variety of indices have been proposed and used over the years to compare soil water infiltration models. Among the proposed indices, some are absolute criteria, such as the widely used root mean square error (RMSE), while others are relative (i.e., normalized) criteria, such as the Nash and Sutcliffe (1970) efficiency criterion (NSE). Selecting and using appropriate statistical criteria to evaluate and interpret the results of infiltration models is essential, because each criterion focuses on specific types of errors. Descriptions of the various goodness-of-fit indices, including their advantages and shortcomings, and rigorous discussion of the suitability of each index are also very important. The objective of this study is to compare the goodness of different statistical criteria for evaluating soil water infiltration models. Comparison techniques were considered to define the best models: coefficient of determination (R2), root mean square error (RMSE), efficiency criteria (NSEI) and modified forms (such as NSEjI, NSESQRTI, NSElnI and NSEiI). Comparatively little work has been carried out on the meaning and
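
    A small sketch of the absolute and relative criteria named above, computed with NumPy on paired observed/simulated infiltration values; the transformed NSE variants apply the same formula to square-root- or log-transformed series:

        import numpy as np

        def fit_criteria(obs, sim):
            resid = obs - sim
            ss_res = np.sum(resid ** 2)
            ss_tot = np.sum((obs - obs.mean()) ** 2)
            sq_o, sq_s = np.sqrt(obs), np.sqrt(sim)
            return {
                "RMSE": np.sqrt(np.mean(resid ** 2)),   # absolute criterion
                "NSE":  1.0 - ss_res / ss_tot,          # relative (normalized)
                "R2":   np.corrcoef(obs, sim)[0, 1] ** 2,
                "NSE_sqrt": 1.0 - np.sum((sq_o - sq_s) ** 2)
                                / np.sum((sq_o - sq_o.mean()) ** 2),
            }

        obs = np.array([2.0, 5.5, 9.1, 12.8, 15.9])  # cumulative infiltration (cm)
        sim = np.array([2.4, 5.1, 9.6, 12.1, 16.3])
        print(fit_criteria(obs, sim))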

  11. Comparing holographic dark energy models with statefinder

    International Nuclear Information System (INIS)

    Cui, Jing-Lei; Zhang, Jing-Fei

    2014-01-01

    We apply the statefinder diagnostic to the holographic dark energy models, including the original holographic dark energy (HDE) model, the new holographic dark energy model, the new agegraphic dark energy (NADE) model, and the Ricci dark energy model. In the low-redshift region the holographic dark energy models are degenerate with each other and with the ΛCDM model in the H(z) and q(z) evolutions. In particular, the HDE model is highly degenerate with the ΛCDM model, and in the HDE model the cases with different parameter values are also in strong degeneracy. Since the observational data are mainly within the low-redshift region, it is very important to break this low-redshift degeneracy in the H(z) and q(z) diagnostics by using some quantities with higher order derivatives of the scale factor. It is shown that the statefinder diagnostic r(z) is very useful in breaking the low-redshift degeneracies. By employing the statefinder diagnostic the holographic dark energy models can be differentiated efficiently in the low-redshift region. The degeneracy between the holographic dark energy models and the ΛCDM model can also be broken by this method. Especially for the HDE model, all the previous strong degeneracies appearing in the H(z) and q(z) diagnostics are broken effectively. But for the NADE model, the degeneracy between the cases with different parameter values cannot be broken, even though the statefinder diagnostic is used. A direct comparison of the holographic dark energy models in the r-s plane is also made, in which the separations between the models (including the ΛCDM model) can be directly measured in the light of the current values {r0, s0} of the models. (orig.)

  12. modelling distances

    Directory of Open Access Journals (Sweden)

    Robert F. Love

    2001-01-01

    Full Text Available Distance predicting functions may be used in a variety of applications for estimating travel distances between points. To evaluate the accuracy of a distance predicting function and to determine its parameters, a goodness-of-fit criterion is employed. AD (Absolute Deviations), SD (Squared Deviations) and NAD (Normalized Absolute Deviations) are the three criteria most often employed in practice. In the literature some assumptions have been made about the properties of each criterion. In this paper, we present statistical analyses performed to compare the three criteria from different perspectives. For this purpose, we employ the ℓkpθ-norm as the distance predicting function, and statistically compare the three criteria by using normalized absolute prediction error distributions in seventeen geographical regions. We find that there exist no significant differences between the criteria. However, since the criterion SD has desirable properties in terms of distance modelling procedures, we suggest its use in practice.
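
    A sketch of fitting a distance predicting function under each criterion, assuming SciPy; the rotated, weighted lp-norm below is one plausible reading of the ℓkpθ-norm (inflation factor k, norm order p, axis rotation θ), so treat the parameterization as illustrative:

        import numpy as np
        from scipy.optimize import minimize

        def predict(params, a, b):
            k, p, theta = params
            c, s = np.cos(theta), np.sin(theta)
            rot = (a - b) @ np.array([[c, -s], [s, c]]).T  # rotate axes by theta
            return k * np.sum(np.abs(rot) ** p, axis=1) ** (1.0 / p)

        def fit(a, b, actual, criterion):
            def loss(params):
                e = predict(params, a, b) - actual
                if criterion == "AD":
                    return np.sum(np.abs(e))
                if criterion == "SD":
                    return np.sum(e ** 2)
                return np.sum(np.abs(e) / actual)          # NAD
            return minimize(loss, x0=[1.2, 1.5, 0.0],
                            bounds=[(0.5, 3.0), (1.0, 3.0),
                                    (-np.pi / 4, np.pi / 4)]).x

        rng = np.random.default_rng(5)
        a = rng.uniform(0, 100, (50, 2))
        b = rng.uniform(0, 100, (50, 2))
        actual = 1.3 * np.linalg.norm(a - b, ord=1, axis=1)  # synthetic road distances
        for crit in ("AD", "SD", "NAD"):
            k, p, theta = fit(a, b, actual, crit)
            print(f"{crit}: k={k:.2f}  p={p:.2f}  theta={theta:.3f}")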

  13. Comparative Analysis of Investment Decision Models

    Directory of Open Access Journals (Sweden)

    Ieva Kekytė

    2017-06-01

    Full Text Available The rapid development of financial markets has created new challenges for both investors and investment theory, increasing the demand for innovative, modern investment and portfolio management decisions adequate to market conditions. Financial markets receive special attention, with new models being created that incorporate financial risk management and investment decision support systems. Researchers recognize the need to deal with financial problems using models that are consistent with reality and based on sophisticated quantitative analysis techniques. Thus, the role of mathematical modeling in finance becomes important. This article deals with various investment decision-making models, which include forecasting, optimization, stochastic processes, artificial intelligence, etc., and which have become useful tools for investment decisions.

  14. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2012-03-01

    Full Text Available [Fragment of a comparison table of cyber-attack models; the recoverable columns give the source (Damballa, 2008; Owens et al., 2009; Croom, 2010; Dreijer, 2011; Van...), the domain (crime, warfare, APT), the evidence base (case studies, literature, previous models) and the attacker type (lone or group).] ...be needed by a geographically or functionally distributed group of attackers. While some of the models describe the installation of a backdoor or an advanced persistent threat (APT), none of them describe the behaviour involved in returning to a...

  15. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2015-10-01

    Full Text Available would be needed by a Cyber Security Operations Centre in order to perform offensive cyber operations?". The analysis was performed, using as a springboard seven models of cyber-attack, and resulted in the development of what is described as a canonical...

  16. Comparative Distributions of Hazard Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Rana Abdul Wajid

    2006-07-01

    Full Text Available In this paper we present a comparison among the distributions used in hazard analysis. A simulation technique has been used to study the behavior of hazard distribution models. The fundamentals of hazard issues are discussed using failure criteria. We demonstrate the flexibility of a hazard modeling distribution that approximates different distributions.

  17. Comparative Bicameral Models. Romania: Unicameralism versus Bicameralism

    Directory of Open Access Journals (Sweden)

    Cynthia Carmen CURT

    2007-06-01

    Full Text Available The paper attempts to evaluate the Romanian bicameral model and to identify and critically assess the options our country has in choosing between a unicameral and a bicameral system. The analysis observes the characteristics of those Second Chambers that relate to Romanian bicameralism, either because they influenced the configuration of the Romanian bicameral legislature or because their constitutional mechanisms can be used to preserve an efficient bicameral formula. The alternative of giving up the bicameral formula, on arguments of simplifying and streamlining the legislative procedure, is also explored.

  18. A Model for Comparing Free Cloud Platforms

    Directory of Open Access Journals (Sweden)

    Radu LIXANDROIU

    2014-01-01

    Full Text Available VMware, VirtualBox, Virtual PC and other popular desktop virtualization applications are used by only a few users of IT techniques. This article attempts to build a comparison model for choosing the best cloud platform. Many virtualization applications, such as VMware (VMware Player), Oracle VirtualBox and Microsoft Virtual PC, are free for home users. The main goal of virtualization software is to allow users to run multiple operating systems simultaneously in one virtual environment, using one desktop computer.

  19. The effects of sampling bias and model complexity on the predictive performance of MaxEnt species distribution models.

    Science.gov (United States)

    Syfert, Mindy M; Smith, Matthew J; Coomes, David A

    2013-01-01

    Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences, using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and necessity of sampling bias correction within MaxEnt.

  20. Study of depression influencing factors with zero-inflated regression models in a large-scale population survey

    OpenAIRE

    Xu, Tao; Zhu, Guangjin; Han, Shaomei

    2017-01-01

    Objectives: The number of depression symptoms can be treated as count data in order to obtain complete and accurate findings in studies of depression. This study aims to compare the goodness of fit of four count-outcome models on a large survey sample, to identify the optimal model for a risk-factor study of the number of depression symptoms. Methods: 15 820 subjects, aged 10 to 80 years, who were not suffering from serious chronic diseases and had not run a high fever in the past ...

  1. Bayesian analysis of CCDM models

    Science.gov (United States)

    Jesus, J. F.; Valentim, R.; Andrade-Oliveira, F.

    2017-09-01

    Creation of Cold Dark Matter (CCDM), in the context of the Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These criteria allow models to be compared on goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH0 model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing the Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.
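
    A minimal sketch of how the two information criteria penalize complexity, given each model's maximized log-likelihood; the sample size and log-likelihood values below are hypothetical, and the Bayesian evidence (an integral of the likelihood over the prior) is deliberately not reduced to a one-liner here:

        import numpy as np

        def aic(loglike_max, k):
            return 2 * k - 2 * loglike_max

        def bic(loglike_max, k, n):
            return k * np.log(n) - 2 * loglike_max

        n = 580  # hypothetical number of SNe Ia data points
        models = {"model A (k=2)": (-512.3, 2),   # hypothetical maximized log-likelihoods
                  "model B (k=4)": (-511.8, 4)}
        for name, (ll, k) in models.items():
            print(f"{name}: AIC={aic(ll, k):.1f}  BIC={bic(ll, k, n):.1f}")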

  2. Bayesian analysis of CCDM models

    Energy Technology Data Exchange (ETDEWEB)

    Jesus, J.F. [Universidade Estadual Paulista (Unesp), Câmpus Experimental de Itapeva, Rua Geraldo Alckmin 519, Vila N. Sra. de Fátima, Itapeva, SP, 18409-010 Brazil (Brazil); Valentim, R. [Departamento de Física, Instituto de Ciências Ambientais, Químicas e Farmacêuticas—ICAQF, Universidade Federal de São Paulo (UNIFESP), Unidade José Alencar, Rua São Nicolau No. 210, Diadema, SP, 09913-030 Brazil (Brazil); Andrade-Oliveira, F., E-mail: jfjesus@itapeva.unesp.br, E-mail: valentim.rodolfo@unifesp.br, E-mail: felipe.oliveira@port.ac.uk [Institute of Cosmology and Gravitation—University of Portsmouth, Burnaby Road, Portsmouth, PO1 3FX United Kingdom (United Kingdom)

    2017-09-01

    Creation of Cold Dark Matter (CCDM), in the context of the Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These criteria allow models to be compared on goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH0 model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing the Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.

  3. Nonlinear models for fitting growth curves of Nellore cows reared in the Amazon Biome

    Directory of Open Access Journals (Sweden)

    Kedma Nayra da Silva Marinho

    2013-09-01

    Full Text Available Growth curves of Nellore cows were estimated by comparing six nonlinear models: Brody, Logistic, two Gompertz variants, Richards and Von Bertalanffy. The models were fitted to weight-age data, from birth to 750 days of age, of 29,221 cows born between 1976 and 2006 in the Brazilian states of Acre, Amapá, Amazonas, Pará, Rondônia, Roraima and Tocantins. The models were fitted by the Gauss-Newton method. The goodness of fit of the models was evaluated using mean square error, adjusted coefficient of determination, prediction error and mean absolute error. Biological interpretation of the parameters was accomplished by plotting estimated weights versus the observed weight means, instantaneous growth rate, absolute maturity rate, relative instantaneous growth rate, inflection point, and the magnitude of the parameters A (asymptotic weight) and K (maturing rate). The Brody and Von Bertalanffy models fitted the weight-age data, but the other models did not. The average weight (A) and growth rate (K) were: 384.6±1.63 kg and 0.0022±0.00002 (Brody) and 313.40±0.70 kg and 0.0045±0.00002 (Von Bertalanffy). The Brody model provides a better goodness of fit than the Von Bertalanffy model.
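
    A sketch of fitting the two retained forms on synthetic weight-age data with SciPy, whose least-squares routine is a damped Gauss-Newton (Levenberg-Marquardt) variant; the functional forms are the standard Brody and Von Bertalanffy parameterizations:

        import numpy as np
        from scipy.optimize import curve_fit

        def brody(t, A, B, K):
            return A * (1.0 - B * np.exp(-K * t))

        def von_bertalanffy(t, A, B, K):
            return A * (1.0 - B * np.exp(-K * t)) ** 3

        rng = np.random.default_rng(6)
        t = np.linspace(0, 750, 40)  # age in days
        w = brody(t, 385.0, 0.92, 0.0022) + rng.normal(0, 8, t.size)  # synthetic weights

        for model in (brody, von_bertalanffy):
            popt, _ = curve_fit(model, t, w, p0=[400.0, 0.9, 0.003], maxfev=20000)
            mse = np.mean((w - model(t, *popt)) ** 2)
            print(f"{model.__name__}: A={popt[0]:.1f}  K={popt[2]:.5f}  MSE={mse:.1f}")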

  4. COMPARING OF DEPOSIT MODEL AND LIFE INSURANCE MODEL IN MACEDONIA

    Directory of Open Access Journals (Sweden)

    TATJANA ATANASOVA-PACHEMSKA

    2016-02-01

    Full Text Available Under conditions of continuously declining interest rates on bank deposits, and at a time when uncertainty about the future is increasing, individuals and legal entities have doubts about how to secure their future, and how and where to invest their funds so as to grow their savings. Individuals usually choose either to put their savings in a bank for a certain period and receive interest for that period, or to invest their savings in different types of life insurance, thereby "taking care" of their life, their future and the future of their families. Many mathematical models have been developed relating to compounding and insurance. This paper compares the deposit model and the life insurance model.

  5. A Comparative Study Of Stock Price Forecasting Using Nonlinear Models

    Directory of Open Access Journals (Sweden)

    Diteboho Xaba

    2017-03-01

    Full Text Available This study compared the in-sample forecasting accuracy of three nonlinear forecasting models, namely the Smooth Transition Regression (STR) model, the Threshold Autoregressive (TAR) model and the Markov-switching Autoregressive (MS-AR) model. Nonlinearity tests were used to confirm the validity of the assumptions of the study. The study used the SBC model selection criterion to select the optimal lag order and the appropriate models. The Mean Square Error (MSE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) served as the error measures in evaluating the forecasting ability of the models. The MS-AR models proved to perform well, with lower error measures compared to the STR and TAR models in most cases.

  6. Constant-parameter capture-recapture models

    Science.gov (United States)

    Brownie, C.; Hines, J.E.; Nichols, J.D.

    1986-01-01

    Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.

  7. A Comprehensive Method for Comparing Mental Models of Dynamic Systems

    OpenAIRE

    Schaffernicht, Martin; Grösser, Stefan N.

    2011-01-01

    Mental models are the basis on which managers make decisions even though external decision support systems may provide help. Research has demonstrated that more comprehensive and dynamic mental models seem to be at the foundation for improved policies and decisions. Eliciting and comparing such models can systematically explicate key variables and their main underlying structures. In addition, superior dynamic mental models can be identified. This paper reviews existing studies which measure ...

  8. Weighted cumulative exposure models helped identify an association between early knee-pain consultations and future knee OA diagnosis.

    Science.gov (United States)

    Yu, Dahai; Peat, George; Bedson, John; Edwards, John J; Turkiewicz, Aleksandra; Jordan, Kelvin P

    2016-08-01

    To establish the association between prior knee-pain consultations and early diagnosis of knee osteoarthritis (OA) using weighted cumulative exposure (WCE) models. Data were from an electronic health care record (EHR) database (Consultations in Primary Care Archive). WCE functions for modeling the cumulative effect of time-varying knee-pain consultations weighted by recency were derived as a predictive tool in a population-based case-control sample and validated in a prospective cohort sample. Two WCE functions ([i] weighting of the importance of past consultations determined a priori; [ii] flexible spline-based estimation) were comprehensively compared with two simpler models ([iii] time since most recent consultation; [iv] total number of past consultations) on model goodness of fit, discrimination, and calibration, in both the derivation and validation phases. People with the most recent and most frequent knee-pain consultations were more likely to have high WCE scores, which were associated with increased risk of knee OA diagnosis in both the derivation and validation phases. Better model goodness of fit, discrimination, and calibration were observed for the flexible spline-based WCE models. WCE functions can be used to model prediagnostic symptoms within routine EHR data and provide novel low-cost predictive tools contributing to early diagnosis. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  9. Health Promotion Behavior of Chinese International Students in Korea Including Acculturation Factors: A Structural Equation Model.

    Science.gov (United States)

    Kim, Sun Jung; Yoo, Il Young

    2016-03-01

    The purpose of this study was to explain the health promotion behavior of Chinese international students in Korea using a structural equation model including acculturation factors. A survey using self-administered questionnaires was employed. Data were collected from 272 Chinese students who had resided in Korea for longer than 6 months. The data were analyzed using structural equation modeling. The p value of the final model is .31. The fit indices of the final model, such as the goodness of fit index, adjusted goodness of fit index, normed fit index, non-normed fit index, and comparative fit index, were greater than .95. The root mean square residual and the root mean square error of approximation also met the criteria. Self-esteem, perceived health status, acculturative stress and acculturation level had direct effects on the health promotion behavior of the participants, and the model explained 30.0% of the variance. Chinese students in Korea with higher self-esteem, perceived health status and acculturation level, and lower acculturative stress, reported more health promotion behavior. The findings can be applied to develop health promotion strategies for this population. Copyright © 2016. Published by Elsevier B.V.

  10. Comparative analysis of some existing kinetic models with proposed

    African Journals Online (AJOL)

    IGNATIUS NWIDI

    two statistical parameters, namely: linear regression coefficient of correlation (R2) and ... Keywords: Heavy metals, Biosorption, Kinetic models, Comparative analysis, Average Relative Error. 1. ... If the flow rate is low, a simple manual batch.

  11. Comparing Structural Brain Connectivity by the Infinite Relational Model

    DEFF Research Database (Denmark)

    Ambrosen, Karen Marie Sandø; Herlau, Tue; Dyrby, Tim

    2013-01-01

    The growing focus in neuroimaging on analyzing brain connectivity calls for powerful and reliable statistical modeling tools. We examine the Infinite Relational Model (IRM) as a tool to identify and compare structure in brain connectivity graphs by contrasting its performance on graphs from...

  12. Multi-criteria comparative evaluation of spallation reaction models

    Science.gov (United States)

    Andrianov, Andrey; Andrianova, Olga; Konobeev, Alexandr; Korovin, Yury; Kuptsov, Ilya

    2017-09-01

    This paper presents an approach to a comparative evaluation of the predictive ability of spallation reaction models based on widely used, well-proven multiple-criteria decision analysis methods (MAVT/MAUT, AHP, TOPSIS, PROMETHEE), and the results of such a comparison for 17 spallation reaction models for the interaction of high-energy protons with natPb.

  13. Comparative analysis of various methods for modelling permanent magnet machines

    NARCIS (Netherlands)

    Ramakrishnan, K.; Curti, M.; Zarko, D.; Mastinu, G.; Paulides, J.J.H.; Lomonova, E.A.

    2017-01-01

    In this paper, six different modelling methods for permanent magnet (PM) electric machines are compared in terms of their computational complexity and accuracy. The methods are based primarily on conformal mapping, mode matching, and harmonic modelling. In the case of conformal mapping, slotted air

  14. Methods and models used in comparative risk studies

    International Nuclear Information System (INIS)

    Devooght, J.

    1983-01-01

    Comparative risk studies make use of a large number of methods and models based upon a set of assumptions which are incompletely formulated or rest on value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and Bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; decision theory in the presence of uncertainty and multiple objectives. The purpose and prospects of comparative studies are assessed in view of probable diminishing returns for large generic comparisons [fr]

  15. p-values for model evaluation

    International Nuclear Information System (INIS)

    Beaujean, F.; Caldwell, A.; Kollar, D.; Kroeninger, K.

    2011-01-01

    Deciding whether a model provides a good description of data is often based on a goodness-of-fit criterion summarized by a p-value. Although there is considerable confusion concerning the meaning of p-values, leading to their misuse, they are nevertheless of practical importance in common data analysis tasks. We motivate their application using Bayesian argumentation. We then describe commonly and less commonly known discrepancy variables and how they are used to define p-values. Their distributions are then extracted for examples modeled on typical data analysis tasks, and comments on their usefulness for determining goodness-of-fit are given.
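
    A minimal sketch of a discrepancy-based p-value for count data, assuming NumPy/SciPy: a chi-square-style discrepancy of the observed histogram is calibrated against its Monte Carlo distribution under the fitted Poisson model:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        observed = rng.poisson(4.2, size=200)  # toy data set
        lam = observed.mean()                  # fitted Poisson model

        def discrepancy(sample, lam):
            # chi-square-style distance between sample counts and model expectations
            edges = np.arange(0, 12)
            counts, _ = np.histogram(sample, bins=edges)
            expected = stats.poisson.pmf(edges[:-1], lam) * len(sample)
            return np.sum((counts - expected) ** 2 / expected)

        d_obs = discrepancy(observed, lam)
        null = [discrepancy(rng.poisson(lam, size=200), lam) for _ in range(1000)]
        print(f"Monte Carlo p-value = {np.mean([d >= d_obs for d in null]):.3f}")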

  16. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
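
    A small sketch of the rescaling described here, assuming NumPy and a relationship matrix K whose rows and columns index the chosen reference individuals:

        import numpy as np

        def dk(K):
            # average self-relationship minus the average of all
            # (self- and across-) relationships
            return np.mean(np.diag(K)) - np.mean(K)

        # toy genomic-style relationship matrix for four individuals
        K = np.array([[1.02, 0.10, 0.05, 0.00],
                      [0.10, 0.98, 0.12, 0.03],
                      [0.05, 0.12, 1.05, 0.07],
                      [0.00, 0.03, 0.07, 0.95]])
        sigma2_hat = 2.4  # hypothetical variance component from the mixed model
        print(f"Dk = {dk(K):.3f}; reference-population genetic variance = "
              f"{dk(K) * sigma2_hat:.3f}")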

  17. Comparing live and remote models in eating conformity research.

    Science.gov (United States)

    Feeney, Justin R; Polivy, Janet; Pliner, Patricia; Sullivan, Margot D

    2011-01-01

    Research demonstrates that people conform to how much other people eat. This conformity occurs in the presence of other people (live model) and when people view information about how much food prior participants ate (remote models). The assumption in the literature has been that remote models produce a similar effect to live models, but this has never been tested. To investigate this issue, we randomly paired participants with a live or remote model and compared their eating to those who ate alone. We found that participants exposed to both types of model differed significantly from those in the control group, but there was no significant difference between the two modeling procedures. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.

  18. Comparative calculations and validation studies with atmospheric dispersion models

    International Nuclear Information System (INIS)

    Paesler-Sauer, J.

    1986-11-01

    This report presents the results of an intercomparison of different mesoscale dispersion models and measured data from tracer experiments. The types of models taking part in the intercomparison are Gaussian-type, numerical Eulerian, and Lagrangian dispersion models. They are suited for the calculation of the atmospheric transport of radionuclides released from a nuclear installation. For the model intercomparison, artificial meteorological situations were defined and corresponding arithmetical problems were formulated. For the purpose of model validation, real dispersion situations from tracer experiments were used as input data for model calculations; in these cases calculated and measured time-integrated concentrations close to the ground are compared. Finally, a valuation of the models concerning their efficiency in solving the problems is carried out with the aid of objective methods. (orig./HP) [de]

  19. A comparative review of radiation-induced cancer risk models

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Hee; Kim, Ju Youl [FNC Technology Co., Ltd., Yongin (Korea, Republic of); Han, Seok Jung [Risk and Environmental Safety Research Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2017-06-15

    With the need for a domestic level 3 probabilistic safety assessment (PSA), it is essential to develop a Korea-specific code. Health effect assessments study radiation-induced impacts; in particular, long-term health effects are evaluated in terms of cancer risk. The objective of this study was to analyze the latest cancer risk models developed by foreign organizations and to compare the methodology of how they were developed. This paper also provides suggestions regarding the development of Korean cancer risk models. A review of cancer risk models was carried out targeting the latest models: the NUREG model (1993), the BEIR VII model (2006), the UNSCEAR model (2006), the ICRP 103 model (2007), and the U.S. EPA model (2011). The methodology of how each model was developed is explained, and the cancer sites, dose and dose rate effectiveness factor (DDREF) and mathematical models are also described in the sections presenting differences among the models. The NUREG model was developed by assuming that the risk was proportional to the risk coefficient and dose, while the BEIR VII, UNSCEAR, ICRP, and U.S. EPA models were derived from epidemiological data, principally from Japanese atomic bomb survivors. The risk coefficient does not consider individual characteristics, as the values were calculated in terms of population-averaged cancer risk per unit dose. However, the models derived by epidemiological data are a function of sex, exposure age, and attained age of the exposed individual. Moreover, the methodologies can be used to apply the latest epidemiological data. Therefore, methodologies using epidemiological data should be considered first for developing a Korean cancer risk model, and the cancer sites and DDREF should also be determined based on Korea-specific studies. This review can be used as a basis for developing a Korean cancer risk model in the future.

  20. Dispersion Modeling Using Ensemble Forecasts Compared to ETEX Measurements.

    Science.gov (United States)

    Straume, Anne Grete; N'dri Koffi, Ernest; Nodop, Katrin

    1998-11-01

    Numerous numerical models have been developed to predict the long-range transport of hazardous air pollution in connection with accidental releases. When evaluating and improving such a model, it is important to detect uncertainties connected to the meteorological input data. A Lagrangian dispersion model, the Severe Nuclear Accident Program, is used here to investigate the effect of errors in the meteorological input data due to analysis error. An ensemble forecast, produced at the European Centre for Medium-Range Weather Forecasts, is then used as model input. The ensemble forecast members are generated by perturbing the initial meteorological fields of the weather forecast. The perturbations are calculated from singular vectors meant to represent possible forecast developments generated by instabilities in the atmospheric flow during the early part of the forecast. The instabilities are generated by errors in the analyzed fields. Puff predictions from the dispersion model, using ensemble forecast input, are compared, and a large spread in the predicted puff evolutions is found. This shows that the quality of the meteorological input data is important for the success of the dispersion model. In order to evaluate the dispersion model, the calculations are compared with measurements from the European Tracer Experiment. The model manages to predict the measured puff evolution in terms of shape and time of arrival to a fairly high extent, up to 60 h after the start of the release. The modeled puff is still too narrow in the advection direction.

  1. Correcting the bias of empirical frequency parameter estimators in codon models.

    Directory of Open Access Journals (Sweden)

    Sergei Kosakovsky Pond

    2010-07-01

    Full Text Available Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators is biased, and that this bias has an adverse effect on goodness of fit and estimates of substitution rates. We propose a "corrected" empirical estimator that begins with observed nucleotide counts, but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the standard approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at a greater computational cost.

  2. Disaggregation of Rainy Hours: Compared Performance of Various Models.

    Science.gov (United States)

    Ben Haha, M.; Hingray, B.; Musy, A.

    In the urban environment, the response times of catchments are usually short. To design or to diagnose waterworks in that context, it is necessary to describe rainfall events with a good time resolution: a 10mn time step is often necessary. Such information is not always available. Rainfall disaggregation models have thus to be applied to produce from rough rainfall data that short time resolution information. The communication will present the performance obtained with several rainfall disaggregation models that allow for the disaggregation of rainy hours into six 10mn rainfall amounts. The ability of the models to reproduce some statistical characteristics of rainfall (mean, variance, overall distribution of 10mn-rainfall amounts; extreme values of maximal rainfall amounts over different durations) is evaluated thanks to different graphical and numerical criteria. The performance of simple models presented in some scientific papers or developed in the Hydram laboratory as well as the performance of more sophisticated ones is compared with the performance of the basic constant disaggregation model. The compared models are either deterministic or stochastic; for some of them the disaggregation is based on scaling properties of rainfall. The compared models are in increasing complexity order: constant model, linear model (Ben Haha, 2001), Ormsbee Deterministic model (Ormsbee, 1989), Artificial Neuronal Network based model (Burian et al. 2000), Hydram Stochastic 1 and Hydram Stochastic 2 (Ben Haha, 2001), Multiplicative Cascade based model (Olsson and Berndtsson, 1998), Ormsbee Stochastic model (Ormsbee, 1989). The 625 rainy hours used for that evaluation (with a hourly rainfall amount greater than 5mm) were extracted from the 21 years chronological rainfall series (10mn time step) observed at the Pully meteorological station, Switzerland. The models were also evaluated when applied to different rainfall classes depending on the season first and on the

  3. Comparison of six different models describing survival of mammalian cells after irradiation

    International Nuclear Information System (INIS)

    Sontag, W.

    1990-01-01

    Six different cell-survival models have been compared. All models are based on the similar assumption that irradiated cells are able to exist in one of three states. S_A is the state of a totally repaired cell, in state S_C the cell contains lethal lesions, and in state S_B the cell contains potentially lethal lesions, i.e. those which can either be repaired or converted into lethal lesions. The differences between the six models lie in the different mathematical relationships between the three states. To test the six models, six different sets of experimental data were used which describe cell survival at different repair times after irradiation with sparsely ionizing irradiation. In order to compare the models, a goodness-of-fit function was used. The differences between the six models were tested by use of the nonparametric Mann-Whitney two-sample test. Based on the 95% confidence limit, this required separation into three groups. (orig.)

  4. Comparing the line broadened quasilinear model to Vlasov code

    International Nuclear Information System (INIS)

    Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.

    2014-01-01

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of a Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver, both with regard to a mode amplitude's time evolution to a saturated state and its final steady-state amplitude in the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to significantly differ from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  5. Comparing the line broadened quasilinear model to Vlasov code

    Energy Technology Data Exchange (ETDEWEB)

    Ghantous, K. [Laboratoire de Physique des Plasmas, Ecole Polytechnique, 91128 Palaiseau Cedex (France); Princeton Plasma Physics Laboratory, P.O. Box 451, Princeton, New Jersey 08543-0451 (United States); Berk, H. L. [Institute for Fusion Studies, University of Texas, 2100 San Jacinto Blvd, Austin, Texas 78712-1047 (United States); Gorelenkov, N. N. [Princeton Plasma Physics Laboratory, P.O. Box 451, Princeton, New Jersey 08543-0451 (United States)

    2014-03-15

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of a Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver, both with regard to a mode amplitude's time evolution to a saturated state and its final steady-state amplitude in the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to significantly differ from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  6. Comparing the line broadened quasilinear model to Vlasov code

    Science.gov (United States)

    Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.

    2014-03-01

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of a Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver, both with regard to a mode amplitude's time evolution to a saturated state and its final steady-state amplitude in the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to significantly differ from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  7. Comparing the staffing models of outsourcing in selected companies

    OpenAIRE

    Chaloupková, Věra

    2010-01-01

    This thesis deals with problems of the takeover of employees in outsourcing. The principal purpose is to compare the staffing models of outsourcing in selected companies. To compare the selected companies, I chose multi-criteria analysis. The thesis is divided into six chapters. The first chapter is devoted to the theoretical part; it describes basic concepts such as outsourcing, personnel aspects, phases of outsourcing projects, communications and culture. The rest of the thesis is devote...

  8. Comparative Assessment of Nonlocal Continuum Solvent Models Exhibiting Overscreening

    Directory of Open Access Journals (Sweden)

    Ren Baihua

    2017-01-01

    Full Text Available Nonlocal continua have been proposed to offer a more realistic model for the electrostatic response of solutions such as the electrolyte solvents prominent in biology and electrochemistry. In this work, we review three nonlocal models based on the Landau-Ginzburg framework which have been proposed but not directly compared previously, due to different expressions of the nonlocal constitutive relationship. To understand the relationships between these models and the underlying physical insights from which they are derived, we situate these models into a single, unified Landau-Ginzburg framework. One of the models offers the capacity to interpret how temperature changes affect dielectric response, and we note that the variations with temperature are qualitatively reasonable even though predictions at ambient temperatures are not quantitatively in agreement with experiment. Two of these models correctly reproduce overscreening (oscillations between positive and negative polarization charge densities), and we observe small differences between them when we simulate the potential between parallel plates held at constant potential. These computations require reformulating the two models as coupled systems of local partial differential equations (PDEs), and we use spectral methods to discretize both problems. We propose further assessments to discriminate between the models, particularly with regard to establishing boundary conditions and comparing to explicit-solvent molecular dynamics simulations.

  9. New tips for structure prediction by comparative modeling

    Science.gov (United States)

    Rayan, Anwar

    2009-01-01

    Comparative modelling is utilized to predict the 3-dimensional conformation of a given protein (target) based on its sequence alignment to an experimentally determined protein structure (template). The use of this technique is already rewarding and increasingly widespread in biological research and drug development. The accuracy of the predictions, as commonly accepted, depends on the sequence identity score of the target protein to the template. To assess the relationship between sequence identity and model quality, we carried out an analysis of a set of 4753 sequence and structure alignments. Throughout this research, model accuracy was measured by root mean square deviations of Cα atoms of the target-template structures. Surprisingly, the results show that the sequence identity of the target protein to the template is not a good descriptor for predicting the accuracy of the 3-D structure model. However, in a large number of cases, comparative modelling with lower sequence identity between target and template proteins led to more accurate 3-D structure models. As a consequence of this study, we suggest new tips for improving the quality of comparative models, particularly for models whose target-template sequence identity is below 50%. PMID:19255646
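
    The accuracy measure used throughout this record, root mean square deviation of Cα atoms, reduces to a one-line formula once the two structures are superposed. Below is a minimal sketch, assuming pre-aligned (N, 3) coordinate arrays; the random coordinates are placeholders standing in for real structures.

```python
import numpy as np

def ca_rmsd(coords_model: np.ndarray, coords_target: np.ndarray) -> float:
    """RMSD between two superposed (N, 3) C-alpha coordinate sets."""
    diff = coords_model - coords_target            # per-atom displacement vectors
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# Illustrative usage with random coordinates standing in for real structures.
rng = np.random.default_rng(0)
target = rng.normal(size=(120, 3))
model = target + rng.normal(scale=0.5, size=target.shape)  # perturbed "model"
print(f"RMSD = {ca_rmsd(model, target):.2f} A")
```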

  10. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    Directory of Open Access Journals (Sweden)

    Villemereuil Pierre de

    2012-06-01

    Full Text Available Abstract Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible

  11. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    Science.gov (United States)

    2012-01-01

    Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for

  12. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    Full Text Available A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to urban areas. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and computer-vision-based modeling, represented respectively by the software packages SketchUp, CityEngine, Photomodeler and Agisoft Photoscan. These packages take different approaches and offer different methods suitable for image-based 3D city modeling. The literature shows that, to date, no complete comparative study of creating full 3D city models from images is available. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, gives a brief introduction to the strengths and weaknesses of the four image-based techniques, and comments on what can and cannot be done with each software package. The study concludes that each software package has advantages and limitations, and the choice of software depends on the user's requirements for the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good

  13. Comparing several boson mappings with the shell model

    International Nuclear Information System (INIS)

    Menezes, D.P.; Yoshinaga, Naotaka; Bonatsos, D.

    1990-01-01

    Boson mappings are an essential step in establishing a connection between the successful phenomenological interacting boson model and the shell model. The boson mapping developed by Bonatsos, Klein and Li is applied to a single j-shell, and the resulting energy levels and E2 transitions are shown for a pairing plus quadrupole-quadrupole Hamiltonian. The results are compared to the exact shell model calculation, as well as to those obtained through use of the Otsuka-Arima-Iachello mapping and the Zirnbauer-Brink mapping. In all cases good results are obtained for the spherical and near-vibrational cases.

  14. Towards consensus in comparative chemical characterization modeling for LCIA

    DEFF Research Database (Denmark)

    Hauschild, Michael Zwicky; Bachmann, Till; Huijbregts, Mark

    2006-01-01

    work within, for instance, the OECD, and guidance from a series of expert workshops held between 2002 and 2005, preliminary guidelines focusing on chemical fate, and human and ecotoxic effects were established. For further elaboration of the fate-, exposure- and effect-sides of the modeling, six models...... by the Task Force and the model providers. While the compared models and their differences are important tools to further advance LCA science, the consensus model is intended to provide a generally agreed and scientifically sound method to calculate consistent characterization factors for use in LCA practice...... and to be the basis of the “recommended practice” for calculation of characterization factors for chemicals under authority of the UNEP/SETAC Life Cycle Initiative....

  15. A framework for testing and comparing binaural models.

    Science.gov (United States)

    Dietz, Mathias; Lestang, Jean-Hugues; Majdak, Piotr; Stern, Richard M; Marquardt, Torsten; Ewert, Stephan D; Hartmann, William M; Goodman, Dan F M

    2018-03-01

    Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results, which has led to controversies. These can best be resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but there remain several unresolved questions for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It runs models through the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: the experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject. Copyright © 2017 Elsevier B.V. All rights reserved.
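
    The three-component interface idea can be illustrated with a minimal Python sketch; the class and method names below are invented for illustration and are not the framework's actual API.

```python
from abc import ABC, abstractmethod

class ArtificialObserver(ABC):
    """Decision stage that answers an experimental trial like a test subject would."""

    @abstractmethod
    def respond(self, model_output) -> int:
        """Map the auditory-model output for one trial to a response alternative."""

class TwoIntervalObserver(ArtificialObserver):
    """Picks the interval with the larger internal decision variable (2-AFC)."""

    def respond(self, model_output) -> int:
        dv_interval_1, dv_interval_2 = model_output
        return 1 if dv_interval_1 > dv_interval_2 else 2

# The experiment software only sees `respond`, so auditory models written in
# any language can be wrapped behind this interface.
observer = TwoIntervalObserver()
print(observer.respond((0.3, 0.9)))  # -> 2
```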

  16. Bayesian Evaluation of Dynamical Soil Carbon Models Using Soil Carbon Flux Data

    Science.gov (United States)

    Xie, H. W.; Romero-Olivares, A.; Guindani, M.; Allison, S. D.

    2017-12-01

    2016 was Earth's hottest year in the modern temperature record and the third consecutive record-breaking year. As the planet continues to warm, temperature-induced changes in respiration rates of soil microbes could reduce the amount of carbon sequestered in the soil organic carbon (SOC) pool, one of the largest terrestrial stores of carbon. This would accelerate temperature increases. In order to predict the future size of the SOC pool, mathematical soil carbon models (SCMs) describing interactions between the biosphere and atmosphere are needed. SCMs must be validated before they can be chosen for predictive use. In this study, we check two SCMs, called CON and AWB, for consistency with observed data using Bayesian goodness-of-fit testing that can also be used in the future to compare other models. We compare the fit of the models to longitudinal soil respiration data from a meta-analysis of soil heating experiments using a family of Bayesian goodness-of-fit metrics called information criteria (ICs), including the Widely Applicable Information Criterion (WAIC), the Leave-One-Out Information Criterion (LOOIC), and the Log Pseudo Marginal Likelihood (LPML). These ICs take the entire posterior distribution into account, rather than just one outputted model fit line. Lower WAIC and LOOIC and larger LPML indicate a better fit. We compare AWB and CON with fixed steady-state model pool sizes. At equivalent SOC, dissolved organic carbon, and microbial pool sizes, CON always outperforms AWB quantitatively by all three ICs used. AWB improves monotonically in fit as we reduce the SOC steady-state pool size while fixing all other pool sizes, and the same is almost true for CON. The AWB model with the lowest SOC is the best-performing AWB model, while the CON model with the second lowest SOC is the best-performing model overall. We observe that AWB displays more changes in slope sign and qualitatively displays more adaptive dynamics, which prevents AWB from being fully ruled out for
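
    For readers unfamiliar with these criteria, here is a minimal sketch of computing WAIC from posterior samples, following the standard definition WAIC = -2(lppd - p_WAIC). It assumes an (S, N) matrix of pointwise log-likelihoods (S posterior draws, N observations); the matrix below is a random placeholder, not data from the study.

```python
import numpy as np
from scipy.special import logsumexp

def waic(log_lik: np.ndarray) -> float:
    """WAIC on the deviance scale from an (S, N) pointwise log-likelihood matrix."""
    S = log_lik.shape[0]
    # log pointwise predictive density: log of the posterior-mean likelihood
    lppd = (logsumexp(log_lik, axis=0) - np.log(S)).sum()
    # effective number of parameters: posterior variance of the log-likelihood
    p_waic = log_lik.var(axis=0, ddof=1).sum()
    return -2.0 * (lppd - p_waic)  # lower is better

# Placeholder "posterior": 2000 draws of log-likelihoods for 50 observations.
rng = np.random.default_rng(1)
log_lik = rng.normal(loc=-1.0, scale=0.1, size=(2000, 50))
print(f"WAIC = {waic(log_lik):.1f}")
```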

  17. Modeling hepatitis C virus kinetics under therapy using pharmacokinetic and pharmacodynamic information

    Energy Technology Data Exchange (ETDEWEB)

    Perelson, Alan S [Los Alamos National Laboratory; Shudo, Emi [Los Alamos National Laboratory; Ribeiro, Ruy M [Los Alamos National Laboratory

    2008-01-01

    Mathematical models have proven helpful in analyzing the virological response to antiviral therapy in hepatitis C virus (HCV) infected subjects. Objective: To summarize the uses and limitations of different models for analyzing HCV kinetic data under pegylated interferon therapy. Methods: We formulate mathematical models and fit them by nonlinear least-squares regression to patient data in order to estimate model parameters. We statistically compare the goodness of fit and parameter values estimated by the different models. Results/Conclusion: The best model for parameter estimation depends on the availability and the quality of data as well as the therapy used. We also discuss the mathematical models that will be needed to analyze HCV kinetic data from clinical trials with new antiviral drugs.

  18. Modelling a model?!! Prediction of observed and calculated daily pan evaporation in New Mexico, U.S.A.

    Science.gov (United States)

    Beriro, D. J.; Abrahart, R. J.; Nathanail, C. P.

    2012-04-01

    Data-driven modelling is most commonly used to develop predictive models that will simulate natural processes. This paper, in contrast, uses Gene Expression Programming (GEP) to construct two alternative models of different pan evaporation estimations by means of symbolic regression: a simulator, a model of a real-world process developed on observed records, and an emulator, an imitator of some other model developed on predicted outputs calculated by that source model. The solutions are compared and contrasted for the purposes of determining whether any substantial differences exist between either option. This analysis addresses recent arguments over the impact of using downloaded hydrological modelling datasets originating from different initial sources, i.e. observed or calculated. Such differences can easily be overlooked by modellers, resulting in a model of a model developed on estimations derived from deterministic empirical equations and producing exceptionally high goodness-of-fit. This paper uses different lines of evidence to evaluate model output and in so doing paves the way for a new protocol in machine learning applications. Transparent modelling tools such as symbolic regression offer huge potential for explaining stochastic processes; however, the basic tenets of data quality and recourse to first principles with regard to problem understanding should not be trivialised. GEP is found to be an effective tool for the prediction of observed and calculated pan evaporation, with results supported by an understanding of the records and of the natural processes concerned, evaluated using one-at-a-time response function sensitivity analysis. The results show that both architectures and response functions are very similar, implying that previously observed differences in goodness-of-fit can be explained by whether models are applied to observed or calculated data.

  19. Comparative analysis of used car price evaluation models

    Science.gov (United States)

    Chen, Chuancan; Hao, Lulu; Xu, Cong

    2017-05-01

    An accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to used car price prediction in several articles; however, little has been published comparing different algorithms for used car price estimation. This paper collects more than 100,000 used car dealing records from throughout China for a thorough empirical comparison of two algorithms: linear regression and random forest. The two algorithms are used to predict used car price in three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but it shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with fewer variables.

  20. A microbial model of economic trading and comparative advantage.

    Science.gov (United States)

    Enyeart, Peter J; Simpson, Zachary B; Ellington, Andrew D

    2015-01-07

    The economic theory of comparative advantage postulates that beneficial trading relationships can be arrived at by two self-interested entities producing the same goods as long as they have opposing relative efficiencies in producing those goods. The theory predicts that upon entering trade, in order to maximize consumption both entities will specialize in producing the good they can produce at higher efficiency, that the weaker entity will specialize more completely than the stronger entity, and that both will be able to consume more goods as a result of trade than either would be able to alone. We extend this theory to the realm of unicellular organisms by developing mathematical models of genetic circuits that allow trading of a common good (specifically, signaling molecules) required for growth in bacteria in order to demonstrate comparative advantage interactions. In Conception 1, the experimenter controls production rates via exogenous inducers, allowing exploration of the parameter space of specialization. In Conception 2, the circuits self-regulate via feedback mechanisms. Our models indicate that these genetic circuits can demonstrate comparative advantage, and that cooperation in such a manner is particularly favored under stringent external conditions and when the cost of production is not overly high. Further work could involve implementing the models in living bacteria and searching for naturally occurring cooperative relationships between bacteria that conform to the principles of comparative advantage. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Spatial Distribution of Hydrologic Ecosystem Service Estimates: Comparing Two Models

    Science.gov (United States)

    Dennedy-Frank, P. J.; Ghile, Y.; Gorelick, S.; Logsdon, R. A.; Chaubey, I.; Ziv, G.

    2014-12-01

    We compare estimates of the spatial distribution of water quantity provided (annual water yield) from two ecohydrologic models: the widely-used Soil and Water Assessment Tool (SWAT) and the much simpler water models from the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) toolbox. These two models differ significantly in terms of complexity, timescale of operation, effort, and data required for calibration, and so are often used in different management contexts. We compare two study sites in the US: the Wildcat Creek Watershed (2083 km2) in Indiana, a largely agricultural watershed in a cold aseasonal climate, and the Upper Upatoi Creek Watershed (876 km2) in Georgia, a mostly forested watershed in a temperate aseasonal climate. We evaluate (1) quantitative estimates of water yield to explore how well each model represents this process, and (2) ranked estimates of water yield to indicate how useful the models are for management purposes where other social and financial factors may play significant roles. The SWAT and InVEST models provide very similar estimates of the water yield of individual subbasins in the Wildcat Creek Watershed (Pearson r = 0.92, slope = 0.89), and a similar ranking of the relative water yield of those subbasins (Spearman r = 0.86). However, the two models provide relatively different estimates of the water yield of individual subbasins in the Upper Upatoi Watershed (Pearson r = 0.25, slope = 0.14), and very different ranking of the relative water yield of those subbasins (Spearman r = -0.10). The Upper Upatoi watershed has a significant baseflow contribution due to its sandy, well-drained soils. InVEST's simple seasonality terms, which assume no change in storage over the time of the model run, may not accurately estimate water yield processes when baseflow provides such a strong contribution. Our results suggest that InVEST users should take care in situations where storage changes are significant.
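
    The agreement statistics reported here (Pearson r, regression slope, Spearman rank correlation) are straightforward to reproduce with scipy. A minimal sketch follows; the per-subbasin water-yield values are invented placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Placeholder per-subbasin annual water yield estimates from two models (mm/yr).
swat = np.array([310.0, 280.0, 450.0, 390.0, 220.0, 505.0])
invest = np.array([295.0, 300.0, 430.0, 360.0, 250.0, 470.0])

pearson_r, _ = stats.pearsonr(swat, invest)     # quantitative agreement
slope = stats.linregress(swat, invest).slope    # systematic bias in magnitude
spearman_r, _ = stats.spearmanr(swat, invest)   # agreement in subbasin ranking
print(f"Pearson r = {pearson_r:.2f}, slope = {slope:.2f}, "
      f"Spearman r = {spearman_r:.2f}")
```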

  2. Comparative modeling of InP solar cell structures

    Science.gov (United States)

    Jain, R. K.; Weinberg, I.; Flood, D. J.

    1991-01-01

    The comparative modeling of p(+)n and n(+)p indium phosphide solar cell structures is studied using a numerical program PC-1D. The optimal design study has predicted that the p(+)n structure offers improved cell efficiencies as compared to n(+)p structure, due to higher open-circuit voltage. The various cell material and process parameters to achieve the maximum cell efficiencies are reported. The effect of some of the cell parameters on InP cell I-V characteristics was studied. The available radiation resistance data on n(+)p and p(+)p InP solar cells are also critically discussed.

  3. Elastic models: a comparative study applied to retinal images.

    Science.gov (United States)

    Karali, E; Lambropoulou, S; Koutsouris, D

    2011-01-01

    In this work, various parametric elastic model methods are compared, namely the classical snake, the gradient vector field snake (GVF snake) and the topology-adaptive snake (t-snake), as well as the self-affine mapping system as an alternative to elastic models. We also give a brief overview of the methods used. The self-affine mapping system is implemented using an adaptive scheme with minimum distance as the optimization criterion, which is more suitable for detecting weak edges. All methods are applied to glaucomatous retinal images with the purpose of segmenting the optic disc. The methods are compared in terms of segmentation accuracy and speed, as derived from cross-correlation coefficients between real and algorithm-extracted contours and from segmentation time, respectively. As a result, the self-affine mapping system achieves adequate segmentation time and accuracy, together with significant independence from initialization.

  4. Mathematical modelling of temperature effect on growth kinetics of Pseudomonas spp. on sliced mushroom (Agaricus bisporus).

    Science.gov (United States)

    Tarlak, Fatih; Ozdemir, Murat; Melikoglu, Mehmet

    2018-02-02

    The growth data of Pseudomonas spp. on sliced mushrooms (Agaricus bisporus) stored between 4 and 28°C were obtained and fitted to three different primary models, known as the modified Gompertz, logistic and Baranyi models. The goodness of fit of these models was compared by considering the mean squared error (MSE) and the coefficient of determination for nonlinear regression (pseudo-R²). The Baranyi model yielded the lowest MSE and highest pseudo-R² values. Therefore, the Baranyi model was selected as the best primary model. Maximum specific growth rate (r_max) and lag phase duration (λ) obtained from the Baranyi model were fitted to secondary models, namely the Ratkowsky and Arrhenius models. High pseudo-R² and low MSE values indicated that the Arrhenius model has a high goodness of fit for determining the effect of temperature on r_max. The observed numbers of Pseudomonas spp. on sliced mushrooms from independent experiments were compared with the numbers predicted by the models by considering the B_f and A_f values, which were found to be 0.974 and 1.036, respectively. The correlation between the observed and predicted numbers of Pseudomonas spp. was high. Mushroom spoilage was simulated as a function of temperature with the models used. The models used for Pseudomonas spp. growth can provide a fast and cost-effective alternative to traditional microbiological techniques to determine the effect of storage temperature on product shelf-life. The models can be used to evaluate the growth behaviour of Pseudomonas spp. on sliced mushroom, set limits for the quantitative detection of microbial spoilage and assess product shelf-life. Copyright © 2017 Elsevier B.V. All rights reserved.
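
    As an illustration of the primary-model fitting and the goodness-of-fit measures used, here is a minimal sketch fitting the modified Gompertz model in its Zwietering reparameterization and computing MSE and pseudo-R². The parameterization is an assumption (the paper's exact form may differ), and the data points are invented placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, A, mu_max, lam):
    """Zwietering form: log-count increase vs. time t (h)."""
    return A * np.exp(-np.exp(mu_max * np.e / A * (lam - t) + 1.0))

# Placeholder growth data: time (h) and log10 increase in Pseudomonas counts.
t = np.array([0, 12, 24, 36, 48, 72, 96, 120], dtype=float)
y = np.array([0.0, 0.1, 0.7, 1.6, 2.4, 3.3, 3.6, 3.7])

popt, _ = curve_fit(modified_gompertz, t, y, p0=[4.0, 0.1, 10.0], maxfev=10000)
y_hat = modified_gompertz(t, *popt)

mse = np.mean((y - y_hat) ** 2)
pseudo_r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"A={popt[0]:.2f}, mu_max={popt[1]:.3f}/h, lambda={popt[2]:.1f} h, "
      f"MSE={mse:.4f}, pseudo-R2={pseudo_r2:.3f}")
```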

  5. Comparative assessment of PV plant performance models considering climate effects

    DEFF Research Database (Denmark)

    Tina, Giuseppe; Ventura, Cristina; Sera, Dezso

    2017-01-01

    . The methodological approach is based on comparative tests of the analyzed models applied to two PV plants installed respectively in north of Denmark (Aalborg) and in the south of Italy (Agrigento). The different ambient, operating and installation conditions allow to understand how these factors impact the precision...... the performance of the studied PV plants with others, the efficiency of the systems has been estimated by both conventional Performance Ratio and Corrected Performance Ratio...

  6. New tips for structure prediction by comparative modeling

    OpenAIRE

    Rayan, Anwar

    2009-01-01

    Comparative modelling is utilized to predict the 3-dimensional conformation of a given protein (target) based on its sequence alignment to experimentally determined protein structure (template). The use of such technique is already rewarding and increasingly widespread in biological research and drug development. The accuracy of the predictions as commonly accepted depends on the score of sequence identity of the target protein to the template. To assess the relationship between sequence iden...

  7. Comparing Neural Networks and ARMA Models in Artificial Stock Market

    Czech Academy of Sciences Publication Activity Database

    Krtek, Jiří; Vošvrda, Miloslav

    2011-01-01

    Roč. 18, č. 28 (2011), s. 53-65 ISSN 1212-074X R&D Projects: GA ČR GD402/09/H045 Institutional research plan: CEZ:AV0Z10750506 Keywords : neural networks * vector ARMA * artificial market Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2011/E/krtek-comparing neural networks and arma models in artificial stock market.pdf

  8. A comparative study of the constitutive models for silicon carbide

    Science.gov (United States)

    Ding, Jow-Lian; Dwivedi, Sunil; Gupta, Yogendra

    2001-06-01

    Most of the constitutive models for polycrystalline silicon carbide were developed and evaluated using data from either normal plate impact or Hopkinson bar experiments. At ISP, extensive efforts have been made to gain detailed insight into the shocked state of silicon carbide (SiC) using innovative experimental methods, viz., lateral stress measurements, in-material unloading measurements, and combined compression-shear experiments. The data obtained from these experiments provide some unique information for both developing and evaluating material models. In this study, these data for SiC were first used to evaluate some of the existing models to identify their strengths and possible deficiencies. Motivated by the results of this comparative study and by the experimental observations, an improved phenomenological model was developed. The model incorporates pressure dependence of strength, rate sensitivity, damage evolution under both tension and compression, the effect of pressure confinement on damage evolution, stiffness degradation due to damage, and pressure dependence of stiffness. The developed model captures most of the material features observed experimentally, but more work is needed to match the experimental data quantitatively.

  9. Comparative analysis of existing models for power-grid synchronization

    International Nuclear Information System (INIS)

    Nishikawa, Takashi; Motter, Adilson E

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations. (paper)
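
    The common mathematical core described here, power grids viewed as networks of coupled second-order phase oscillators with forcing and damping, can be sketched numerically as below. The three-node network, coupling strength and damping value are invented for illustration and are not taken from the paper or its MATLAB toolbox.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 3-node network: net power injections sum to zero.
P = np.array([1.0, -0.5, -0.5])            # driving (generation minus load)
alpha = 0.5                                # damping coefficient
K = 2.0 * (np.ones((3, 3)) - np.eye(3))    # uniform all-to-all coupling

def swing(t, x):
    """Second-order phase-oscillator dynamics: x = (phases, frequencies)."""
    delta, omega = x[:3], x[3:]
    coupling = (K * np.sin(delta[None, :] - delta[:, None])).sum(axis=1)
    return np.concatenate([omega, P - alpha * omega + coupling])

sol = solve_ivp(swing, (0.0, 50.0), np.zeros(6), rtol=1e-8)
# Frequency deviations near zero indicate the network has synchronized.
print("final frequency deviations:", np.round(sol.y[3:, -1], 4))
```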

  10. Saccharomyces cerevisiae as a model organism: a comparative study.

    Directory of Open Access Journals (Sweden)

    Hiren Karathia

    Full Text Available BACKGROUND: Model organisms are used for research because they provide a framework on which to develop and optimize methods that facilitate and standardize analysis. Such organisms should be representative of the living beings for which they are to serve as proxy. However, in practice, a model organism is often selected ad hoc, and without considering its representativeness, because a systematic and rational method to include this consideration in the selection process is still lacking. METHODOLOGY/PRINCIPAL FINDINGS: In this work we propose such a method and apply it in a pilot study of strengths and limitations of Saccharomyces cerevisiae as a model organism. The method relies on the functional classification of proteins into different biological pathways and processes and on full proteome comparisons between the putative model organism and other organisms for which we would like to extrapolate results. Here we compare S. cerevisiae to 704 other organisms from various phyla. For each organism, our results identify the pathways and processes for which S. cerevisiae is predicted to be a good model to extrapolate from. We find that animals in general and Homo sapiens in particular are some of the non-fungal organisms for which S. cerevisiae is likely to be a good model in which to study a significant fraction of common biological processes. We validate our approach by correctly predicting which organisms are phenotypically more distant from S. cerevisiae with respect to several different biological processes. CONCLUSIONS/SIGNIFICANCE: The method we propose could be used to choose appropriate substitute model organisms for the study of biological processes in other species that are harder to study. For example, one could identify appropriate models to study either pathologies in humans or specific biological processes in species with a long development time, such as plants.

  11. COMPARATIVE STUDY ON MAIN SOLVENCY ASSESSMENT MODELS FOR INSURANCE FIELD

    Directory of Open Access Journals (Sweden)

    Daniela Nicoleta SAHLIAN

    2015-07-01

    Full Text Available During the recent financial crisis in the insurance domain, new aspects emerged that must be taken into account in risk management and surveillance activity. Insurance societies may develop internal models in order to determine the minimum capital requirement imposed by the new regulations to be adopted on 1 January 2016. In this respect, the purpose of this research paper is to present and compare the main solvency regulation systems used worldwide, with emphasis on their common characteristics and current tendencies. Thereby, we aim to offer a better understanding of the similarities and differences between existing solvency regimes in order to develop the best solvency regime for Romania within the Solvency II project. The study shows that there are clear differences between the existing Solvency I regime and the new risk-based approaches, and also points out that even though the key principles supporting the new solvency regimes are convergent, there are many approaches to applying these principles. In this context, the questions we try to answer are: how could the global solvency models be useful to the financial surveillance authority of Romania for the implementation of the general model and for the development of internal solvency models according to the requirements of Solvency II, and what would be the requirements for implementing this type of approach? This makes the analysis of solvency models an interesting exercise.

  12. Atterberg Limits Prediction Comparing SVM with ANFIS Model

    Directory of Open Access Journals (Sweden)

    Mohammad Murtaza Sherzoy

    2017-03-01

    Full Text Available Support Vector Machine (SVM) and Adaptive Neuro-Fuzzy Inference System (ANFIS) are two analytical methods used to predict the values of the Atterberg limits: the liquid limit, plastic limit and plasticity index. The main objective of this study is to compare the forecasts of the two methods (SVM & ANFIS). Data from 54 soil samples taken from the area of Peninsular Malaysia were used; the samples were tested for liquid limit, plastic limit, plasticity index and grain size distribution. The input parameters are the grain size fractions, i.e. the percentages of silt, clay and sand. The actual values of the Atterberg limits and those predicted by the SVM and ANFIS models are compared using the correlation coefficient R2 and the root mean squared error (RMSE). The outcome of the study shows that the ANFIS model achieves higher accuracy than the SVM model for the liquid limit (R2 = 0.987), plastic limit (R2 = 0.949) and plasticity index (R2 = 0.966). The RMSE values obtained for both methods likewise show that the ANFIS model outperforms the SVM model in predicting the Atterberg limits as a whole.
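
    The two evaluation metrics used here are easy to reproduce. A minimal sketch follows, with invented placeholder values for the actual and predicted liquid limits, not the study's data.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Placeholder values: actual vs. model-predicted liquid limit (%).
actual = np.array([42.0, 55.0, 38.0, 61.0, 47.0, 70.0])
predicted_anfis = np.array([43.1, 54.2, 39.0, 60.1, 48.3, 68.5])
predicted_svm = np.array([45.0, 51.8, 41.2, 57.9, 50.1, 65.0])

for name, pred in [("ANFIS", predicted_anfis), ("SVM", predicted_svm)]:
    rmse = np.sqrt(mean_squared_error(actual, pred))
    print(f"{name}: R2 = {r2_score(actual, pred):.3f}, RMSE = {rmse:.2f}")
```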

  13. Dinucleotide controlled null models for comparative RNA gene prediction

    Directory of Open Access Journals (Sweden)

    Gesell Tanja

    2008-05-01

    Full Text Available Abstract Background Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. Results We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm giving a new variant of a thermodynamic structure based RNA gene finding program that is not biased by the dinucleotide content. Conclusion SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as standalone RNA gene finding program. Other applications in comparative genomics that require

  14. Dinucleotide controlled null models for comparative RNA gene prediction.

    Science.gov (United States)

    Gesell, Tanja; Washietl, Stefan

    2008-05-27

    Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm giving a new variant of a thermodynamic structure based RNA gene finding program that is not biased by the dinucleotide content. SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as standalone RNA gene finding program. Other applications in comparative genomics that require randomization of multiple alignments can be considered. SISSIz

  15. Comparative Proteomic Analysis of Two Uveitis Models in Lewis Rats.

    Science.gov (United States)

    Pepple, Kathryn L; Rotkis, Lauren; Wilson, Leslie; Sandt, Angela; Van Gelder, Russell N

    2015-12-01

    Inflammation generates changes in the protein constituents of the aqueous humor. Proteins that change in multiple models of uveitis may be good biomarkers of disease or targets for therapeutic intervention. The present study was conducted to identify differentially-expressed proteins in the inflamed aqueous humor. Two models of uveitis were induced in Lewis rats: experimental autoimmune uveitis (EAU) and primed mycobacterial uveitis (PMU). Differential gel electrophoresis was used to compare naïve and inflamed aqueous humor. Differentially-expressed proteins were separated by using 2-D gel electrophoresis and excised for identification with matrix-assisted laser desorption/ionization-time of flight (MALDI-TOF). Expression of select proteins was verified by Western blot analysis in both the aqueous and vitreous. The inflamed aqueous from both models demonstrated an increase in total protein concentration when compared to naïve aqueous. Calprotectin, a heterodimer of S100A8 and S100A9, was increased in the aqueous in both PMU and EAU. In the vitreous, S100A8 and S100A9 were preferentially elevated in PMU. Apolipoprotein E was elevated in the aqueous of both uveitis models but was preferentially elevated in EAU. Beta-B2-crystallin levels decreased in the aqueous and vitreous of EAU but not PMU. The proinflammatory molecules S100A8 and S100A9 were elevated in both models of uveitis but may play a more significant role in PMU than EAU. The neuroprotective protein β-B2-crystallin was found to decline in EAU. Therapies to modulate these proteins in vivo may be good targets in the treatment of ocular inflammation.

  16. Comparing pharmacophore models derived from crystallography and NMR ensembles

    Science.gov (United States)

    Ghanakota, Phani; Carlson, Heather A.

    2017-11-01

    NMR and X-ray crystallography are the two most widely used methods for determining protein structures. Our previous study examining NMR versus X-Ray sources of protein conformations showed improved performance with NMR structures when used in our Multiple Protein Structures (MPS) method for receptor-based pharmacophores (Damm, Carlson, J Am Chem Soc 129:8225-8235, 2007). However, that work was based on a single test case, HIV-1 protease, because of the rich data available for that system. New data for more systems are available now, which calls for further examination of the effect of different sources of protein conformations. The MPS technique was applied to Growth factor receptor bound protein 2 (Grb2), Src SH2 homology domain (Src-SH2), FK506-binding protein 1A (FKBP12), and Peroxisome proliferator-activated receptor-γ (PPAR-γ). Pharmacophore models from both crystal and NMR ensembles were able to discriminate between high-affinity, low-affinity, and decoy molecules. As we found in our original study, NMR models showed optimal performance when all elements were used. The crystal models had more pharmacophore elements compared to their NMR counterparts. The crystal-based models exhibited optimum performance only when pharmacophore elements were dropped. This supports our assertion that the higher flexibility in NMR ensembles helps focus the models on the most essential interactions with the protein. Our studies suggest that the "extra" pharmacophore elements seen at the periphery in X-ray models arise as a result of decreased protein flexibility and make very little contribution to model performance.

  17. Evaluating the double Poisson generalized linear model.

    Science.gov (United States)

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
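
    To make the normalizing-constant issue concrete, below is a minimal sketch of the double Poisson pmf in Efron's (1986) parameterization, with the constant approximated by brute-force truncated summation. This is an illustration only, not the paper's proposed high-accuracy approximation method.

```python
import numpy as np
from scipy.special import gammaln

def dp_log_kernel(y, mu, theta):
    """Log of the unnormalized double Poisson pmf (Efron, 1986)."""
    y = np.asarray(y, dtype=float)
    # y*log(y) with the convention 0*log(0) = 0
    ylogy = np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0)), 0.0)
    return (0.5 * np.log(theta) - theta * mu
            - y + ylogy - gammaln(y + 1)
            + theta * (y * (1.0 + np.log(mu)) - ylogy))

def dp_normalizing_constant(mu, theta, y_max=1000):
    """Approximate c(mu, theta) by truncated summation so the pmf sums to one."""
    y = np.arange(y_max + 1)
    return 1.0 / np.exp(dp_log_kernel(y, mu, theta)).sum()

# Sanity check: for theta = 1 the DP reduces to the Poisson, so c should be ~1.
print(dp_normalizing_constant(mu=5.0, theta=1.0))   # ~1.0
print(dp_normalizing_constant(mu=5.0, theta=0.5))   # over-dispersed case
```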

  18. Comparative assessment of condensation models for horizontal tubes

    International Nuclear Information System (INIS)

    Schaffrath, A.; Kruessenberg, A.K.; Lischke, W.; Gocht, U.; Fjodorow, A.

    1999-01-01

    The condensation in horizontal tubes plays an important role, e.g., for the determination of the operation mode of horizontal steam generators of VVER reactors or of passive safety systems for the next generation of nuclear power plants. Two different approaches (HOTKON and KONWAR) for modeling this process have been undertaken by Forschungszentrum Juelich (FZJ) and the University of Applied Sciences Zittau/Goerlitz (HTWS) and implemented into the 1D thermohydraulic code ATHLET, which is developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH for the analysis of anticipated and abnormal transients in light water reactors. Although the improvements of the condensation models were developed for different applications (VVER steam generators vs. the emergency condenser of the SWR1000) with strongly different operating conditions (e.g., the temperature difference over the tube wall is up to 30 K in HORUS and up to 250 K in NOKO, and the heat flux density is up to 40 kW/m² in HORUS and up to 1 GW/m² in NOKO), both models are now compared and assessed by Forschungszentrum Rossendorf FZR e.V. Therefore, post-test calculations of selected HORUS experiments were performed with ATHLET/KONWAR and compared to existing ATHLET and ATHLET/HOTKON calculations of HTWS. It can be seen that the calculations with the extension KONWAR as well as HOTKON significantly improve the agreement between computational and experimental data. (orig.) [de]

  19. Comparative Modelling of the Spectra of Cool Giants

    Science.gov (United States)

    Lebzelter, T.; Heiter, U.; Abia, C.; Eriksson, K.; Ireland, M.; Neilson, H.; Nowotny, W.; Maldonado, J.; Merle, T.; Peterson, R.

    2012-01-01

    Our ability to extract information from the spectra of stars depends on reliable models of stellar atmospheres and appropriate techniques for spectral synthesis. Various model codes and strategies for the analysis of stellar spectra are available today. Aims. We aim to compare the results of deriving stellar parameters using different atmosphere models and different analysis strategies. The focus is set on high-resolution spectroscopy of cool giant stars. Methods. Spectra representing four cool giant stars were made available to various groups and individuals working in the area of spectral synthesis, asking them to derive stellar parameters from the data provided. The results were discussed at a workshop in Vienna in 2010. Most of the major codes currently used in the astronomical community for analyses of stellar spectra were included in this experiment. Results. We present the results from the different groups, as well as an additional experiment comparing the synthetic spectra produced by various codes for a given set of stellar parameters. Similarities and differences of the results are discussed. Conclusions. Several valid approaches to analyze a given spectrum of a star result in quite a wide range of solutions. The main causes for the differences in parameters derived by different groups seem to lie in the physical input data and in the details of the analysis method. This clearly shows how far from a definitive abundance analysis we still are.

  20. Comparing Productivity Simulated with Inventory Data Using Different Modelling Technologies

    Science.gov (United States)

    Klopf, M.; Pietsch, S. A.; Hasenauer, H.

    2009-04-01

    The Lime Stone National Park in Austria was established in 1997 to protect sensitive limestone soils from degradation due to heavy forest management. Since 1997 the management activities have been successively reduced; standing volume and coarse woody debris (CWD) increased and degraded soils began to recover. One option for studying the rehabilitation process towards a natural virgin forest state is the use of modelling technology. In this study we test two different modelling approaches for their applicability to the Lime Stone National Park. We compare the standing tree volume resulting from (i) the individual tree growth model MOSES, and (ii) the species- and management-sensitive adaptation of the biogeochemical-mechanistic model Biome-BGC. The results from the two models are compared with field observations from repeated permanent forest inventory plots of the Lime Stone National Park in Austria. The simulated CWD predictions of the BGC model were compared with dead wood measurements (standing and lying dead wood) recorded at the permanent inventory plots. The inventory was established between 1994 and 1996 and remeasured from 2004 to 2005. For this analysis 40 plots of this inventory were selected which comprise the required dead wood components and are dominated by a single tree species. First we used the distance-dependent individual tree growth model MOSES to derive the standing timber and the amount of mortality per hectare. MOSES is initialized with the inventory data at plot establishment, and each sampling plot is treated as a forest stand. Biome-BGC is a process-based biogeochemical model with extensions for Austrian tree species, a self-initialization and a forest management tool. The initialization for the actual simulations with the BGC model was done as follows: we first used spin-up runs to derive a balanced forest vegetation, similar to an undisturbed forest. Next we considered the management history of the past centuries (heavy clear cuts

  1. A Model of Self-Monitoring Blood Glucose Measurement Error.

    Science.gov (United States)

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing insulin therapies in silico. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted to experimental data by maximum likelihood. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error, and zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology allows the derivation of realistic models of the SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials comparing SMBG-based with nonadjunctive CGM-based insulin treatments.
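
    A minimal sketch of the zone-wise maximum-likelihood fitting step is given below, using scipy's skew-normal distribution and a Kolmogorov-Smirnov check. The zone definitions and synthetic "errors" are placeholders rather than the study's data, and the exponential outlier component is omitted.

```python
from scipy import stats

# Placeholder SMBG errors: absolute error (mg/dL) in the low-glucose zone
# (zone 1) and relative error (%) in the high-glucose zone (zone 2).
zone1_abs_err = stats.skewnorm.rvs(a=2.0, loc=-3.0, scale=6.0, size=500, random_state=1)
zone2_rel_err = stats.skewnorm.rvs(a=1.5, loc=-2.0, scale=5.0, size=500, random_state=2)

for name, err in [("zone 1 (absolute)", zone1_abs_err),
                  ("zone 2 (relative)", zone2_rel_err)]:
    a, loc, scale = stats.skewnorm.fit(err)  # maximum-likelihood fit
    # KS check; note the p-value is optimistic because the parameters
    # were fitted from the same data.
    ks = stats.kstest(err, "skewnorm", args=(a, loc, scale))
    print(f"{name}: shape={a:.2f}, loc={loc:.2f}, scale={scale:.2f}, "
          f"KS p={ks.pvalue:.2f}")
```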

  2. Comparative dynamic analysis of the full Grossman model.

    Science.gov (United States)

    Ried, W

    1998-08-01

    The paper applies the method of comparative dynamic analysis to the full Grossman model. For a particular class of solutions, it derives the equations implicitly defining the complete trajectories of the endogenous variables. Relying on the concept of Frisch decision functions, the impact of any parametric change on an endogenous variable can be decomposed into a direct and an indirect effect. The focus of the paper is on marginal changes in the rate of health capital depreciation. It also analyses the impact of either initial financial wealth or the initial stock of health capital. While the direction of most effects remains ambiguous in the full model, the assumption of a zero consumption benefit of health is sufficient to obtain a definite sign for any direct or indirect effect.

  3. Comparing soil moisture memory in satellite observations and models

    Science.gov (United States)

    Stacke, Tobias; Hagemann, Stefan; Loew, Alexander

    2013-04-01

    A major obstacle to a correct parametrization of soil processes in large-scale global land surface models is the lack of long-term soil moisture observations for large parts of the globe. Currently, a compilation of soil moisture data derived from a range of satellites is being released by the ESA Climate Change Initiative (ECV_SM). Comprising the period from 1978 until 2010, it provides the opportunity to compute climatologically relevant statistics on a quasi-global scale and to compare these to the output of climate models. Our study is focused on the investigation of soil moisture memory in satellite observations and models. As a proxy for memory we compute the autocorrelation length (ACL) of the available satellite data and of the uppermost soil layer of the models. In addition to the ECV_SM data, AMSR-E soil moisture is used as an observational estimate. Simulated soil moisture fields are taken from the ERA-Interim reanalysis and generated with the land surface model JSBACH, which was driven with quasi-observational meteorological forcing data. The satellite data show ACLs between one week and one month for the greater part of the land surface, while the models simulate a longer memory of up to two months. Some patterns are similar in models and observations, e.g. a longer memory in the Sahel Zone and the Arabian Peninsula, but the models are not able to reproduce regions with a very short ACL of just a few days. If the long-term seasonality is subtracted from the data, the memory is strongly shortened, indicating the importance of seasonal variations for the memory in most regions. Furthermore, we analyze the change of soil moisture memory in the different soil layers of the models to investigate to what extent the surface soil moisture includes information about the whole soil column. A first analysis reveals that the ACL increases for deeper layers. However, its increase is stronger in the soil moisture anomaly than in its absolute values and the first even exceeds the
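
    One common operational definition of the ACL, the first lag at which the sample autocorrelation of the (anomaly) series drops below 1/e, can be sketched as follows. Whether this study used exactly this threshold is an assumption, and the AR(1) series below stands in for a real soil moisture record.

```python
import numpy as np

def autocorrelation_length(series: np.ndarray, threshold: float = 1.0 / np.e) -> int:
    """First lag (in time steps) at which the sample autocorrelation falls below threshold."""
    x = series - series.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]  # lags 0..N-1
    acf = acf / acf[0]                                  # normalize so acf[0] = 1
    below = np.nonzero(acf < threshold)[0]
    return int(below[0]) if below.size else x.size

# Placeholder daily soil moisture anomaly with ~20-day memory (AR(1) process).
rng = np.random.default_rng(3)
phi = np.exp(-1.0 / 20.0)
x = np.zeros(3000)
for t in range(1, x.size):
    x[t] = phi * x[t - 1] + rng.normal()
print("ACL (days):", autocorrelation_length(x))  # expected near 20
```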

  4. Comparing Numerical Spall Simulations with a Nonlinear Spall Formation Model

    Science.gov (United States)

    Ong, L.; Melosh, H. J.

    2012-12-01

    Spallation accelerates lightly shocked ejecta fragments to speeds that can exceed the escape velocity of the parent body. We present high-resolution simulations of nonlinear shock interactions in the near surface. Initial results show the acceleration of near-surface material to velocities up to 1.8 times greater than the peak particle velocity in the detached shock, while experiencing little to no shock pressure. These simulations suggest a possible nonlinear spallation mechanism to produce the high-velocity, low-shock-pressure meteorites from other planets. Here we present the numerical simulations that test the production of spall through nonlinear shock interactions in the near surface, and compare the results with a model proposed by Kamegai (1986 Lawrence Livermore National Laboratory Report). We simulate near-surface shock interactions using the SALES_2 hydrocode and the Murnaghan equation of state. We model the shock interactions in two geometries: rectangular and spherical. In the rectangular case, we model a planar shock approaching the surface at a constant angle phi. In the spherical case, the shock originates at a point below the surface of the domain and radiates spherically from that point. The angle of the shock front with the surface depends on the radial distance of the surface point from the shock origin. We model the target as a solid with a nonlinear Murnaghan equation of state. This idealized equation of state supports nonlinear shocks but is temperature independent. We track the maximum pressure and maximum velocity attained in every cell in our simulations and compare them to the Hugoniot equations that describe the material conditions in front of and behind the shock. Our simulations demonstrate that nonlinear shock interactions in the near surface produce lightly shocked high-velocity material for both planar and cylindrical shocks. The spall is the result of the free-surface boundary condition, which forces a pressure gradient
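
    For reference, a sketch of the temperature-independent Murnaghan equation of state in a commonly used compression form, P(rho) = (K0/n)((rho/rho0)^n - 1); the rock-like constants below are generic values chosen for illustration, not the paper's.

        def murnaghan_pressure(rho, rho0=2700.0, k0=35e9, n=4.0):
            """Pressure (Pa) at density rho (kg/m^3); zero at reference density rho0."""
            return (k0 / n) * ((rho / rho0) ** n - 1.0)

        for compression in (1.00, 1.05, 1.10):     # rho / rho0
            p = murnaghan_pressure(2700.0 * compression)
            print(f"rho/rho0 = {compression:.2f}: P = {p / 1e9:.2f} GPa")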

  5. Comparative study of computational model for pipe whip analysis

    International Nuclear Information System (INIS)

    Koh, Sugoong; Lee, Young-Shin

    1993-01-01

    Many types of pipe whip restraints are installed to protect the structural components from the anticipated pipe whip phenomena of high energy lines in nuclear power plants. It is necessary to investigate these phenomena accurately in order to evaluate the acceptability of the pipe whip restraint design. Various research programs have been conducted in many countries to develop analytical methods and to verify the validity of the methods. In this study, various calculational models in ANSYS code and in ADLPIPE code, the general purpose finite element computer programs, were used to simulate the postulated pipe whips to obtain impact loads and the calculated results were compared with the specific experimental results from the sample pipe whip test for the U-shaped pipe whip restraints. Some calculational models, having the spring element between the pipe whip restraint and the pipe line, give reasonably good transient responses of the restraint forces compared with the experimental results, and could be useful in evaluating the acceptability of the pipe whip restraint design. (author)

  6. THE FLAT TAX - A COMPARATIVE STUDY OF THE EXISTING MODELS

    Directory of Open Access Journals (Sweden)

    Schiau (Macavei), Laura-Liana

    2011-07-01

    Full Text Available In the last two decades flat tax systems have spread around the globe, from East and Central Europe to Asia and Central America. Many specialists consider this phenomenon a real fiscal revolution, but others see it as a mistake as long as the new systems are just a feint of the true flat tax designed by the famous Stanford University professors Robert Hall and Alvin Rabushka. In this context this paper tries to determine which of the existing flat tax systems resemble the true flat tax model by comparing and contrasting their main characteristics with the features of the model proposed by Hall and Rabushka. The research also underlines the common features and the differences between the existing models. The idea of this kind of study is not really new; others have done it, but the comparison was limited to one country. For example, Emil Kalchev from New Bulgarian University has assessed the Bulgarian income system by comparing it with the flat tax, concluding that taxation in Bulgaria is not simple, neutral and non-distortive. Our research is based on several case studies and on compare-and-contrast qualitative and quantitative methods. The study starts from the fiscal design drawn by the two American professors in the book The Flat Tax. Four main characteristics of the flat tax system were chosen in order to build the comparison: fiscal design, simplicity, avoidance of double taxation and uniformity of the tax rates. The jurisdictions chosen for the case study are countries all around the globe with fiscal systems which are considered flat tax systems. The results obtained show that the fiscal design of Hong Kong is the only flat tax model which is built following an economic logic rather than a legal sense, being at the same time a simple and transparent system. Other countries such as Slovakia, Albania and Macedonia in Central and Eastern Europe fulfill the requirement regarding the uniformity of taxation. Other jurisdictions avoid the double

  7. Evolutionary neural network modeling for software cumulative failure time prediction

    International Nuclear Information System (INIS)

    Tian Liang; Noore, Afzel

    2005-01-01

    An evolutionary neural network modeling approach for software cumulative failure time prediction based on a multiple-delayed-input single-output architecture is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modification of the Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of our proposed approach has been compared using real-time control and flight dynamic application data sets. Numerical results show that both the goodness-of-fit and the next-step predictability of our proposed approach are more accurate in predicting software cumulative failure time than existing approaches.
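
    A toy sketch of the search idea, assuming scikit-learn as a stand-in trainer (the paper trains with Levenberg-Marquardt plus Bayesian regularization, which scikit-learn does not provide): a miniature genetic loop over the two integers being optimized, the number of delayed inputs d and hidden neurons h, scored by validation error on a synthetic cumulative-failure-time series.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(2)
        t = np.cumsum(rng.exponential(1.0, 120))   # toy cumulative failure times

        def fitness(d, h):
            X = np.array([t[i - d:i] for i in range(d, t.size)])  # delayed inputs
            y = t[d:]
            split = int(0.8 * y.size)
            net = MLPRegressor(hidden_layer_sizes=(int(h),), max_iter=3000,
                               random_state=0).fit(X[:split], y[:split])
            return -np.mean((net.predict(X[split:]) - y[split:]) ** 2)

        pop = [(int(rng.integers(1, 8)), int(rng.integers(2, 16))) for _ in range(6)]
        for _ in range(5):                         # selection plus integer mutation
            pop.sort(key=lambda g: fitness(*g), reverse=True)
            parents = pop[:3]
            pop = parents + [(max(1, d + int(rng.integers(-1, 2))),
                              max(2, h + int(rng.integers(-2, 3))))
                             for d, h in parents]
        print("best (delays, hidden):", max(pop, key=lambda g: fitness(*g)))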

  8. Linear and Poisson models for genetic evaluation of tick resistance in cross-bred Hereford x Nellore cattle.

    Science.gov (United States)

    Ayres, D R; Pereira, R J; Boligon, A A; Silva, F F; Schenkel, F S; Roso, V M; Albuquerque, L G

    2013-12-01

    Cattle resistance to ticks is measured by the number of ticks infesting the animal. The model used for the genetic analysis of cattle resistance to ticks frequently requires logarithmic transformation of the observations. The objective of this study was to evaluate the predictive ability and goodness of fit of different models for the analysis of this trait in cross-bred Hereford x Nellore cattle. Three models were tested: a linear model using logarithmic transformation of the observations (MLOG); a linear model without transformation of the observations (MLIN); and a generalized linear Poisson model with residual term (MPOI). All models included the classificatory effects of contemporary group and genetic group and the covariates age of animal at the time of recording and individual heterozygosity, as well as additive genetic effects as random effects. Heritability estimates were 0.08 ± 0.02, 0.10 ± 0.02 and 0.14 ± 0.04 for the MLIN, MLOG and MPOI models, respectively. The model fit quality, verified by the deviance information criterion (DIC) and residual mean square, indicated the superior fit of the MPOI model. The predictive ability of the models was compared by a validation test in an independent sample. The MPOI model was slightly superior in terms of goodness of fit and predictive ability, whereas the correlations between observed and predicted tick counts were practically the same for all models. A higher rank correlation between breeding values was observed between the MLOG and MPOI models. The Poisson model can thus be used for the selection of tick-resistant animals. © 2013 Blackwell Verlag GmbH.
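
    A sketch of the model contrast on simulated counts, assuming statsmodels (names and data are illustrative, not the study's): MLOG as ordinary least squares on log(count + 1) versus MPOI as a Poisson GLM.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        age = rng.uniform(12, 24, 300)                     # toy covariate (months)
        ticks = rng.poisson(np.exp(0.5 + 0.05 * (age - 18)))

        X = sm.add_constant(age)
        mlog = sm.OLS(np.log(ticks + 1.0), X).fit()        # MLOG analogue
        mpoi = sm.GLM(ticks, X, family=sm.families.Poisson()).fit()  # MPOI analogue

        print(f"MLOG slope {mlog.params[1]:.3f}, MPOI slope {mpoi.params[1]:.3f}, "
              f"Poisson deviance {mpoi.deviance:.1f}")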

  9. Bayesian inference with information content model check for Langevin equations

    DEFF Research Database (Denmark)

    Krog, Jens F. C.; Lomholt, Michael Andersen

    2017-01-01

    The Bayesian data analysis framework has been proven to be a systematic and effective method of parameter inference and model selection for stochastic processes. In this work we introduce an information content model check which may serve as a goodness-of-fit test, like the chi-square procedure...

  10. Comparative benefit of malaria chemoprophylaxis modelled in United Kingdom travellers.

    Science.gov (United States)

    Toovey, Stephen; Nieforth, Keith; Smith, Patrick; Schlagenhauf, Patricia; Adamcova, Miriam; Tatt, Iain; Tomianovic, Danitza; Schnetzler, Gabriel

    2014-01-01

    .3% decrease in estimated infections. The number of travellers experiencing moderate adverse events (AE) or those requiring medical attention or drug withdrawal per case prevented is as follows: C ± P 170, Mq 146, Dx 114, AP 103. The model correctly predicted the number of malaria deaths, providing a robust and reliable estimate of the number of imported malaria cases in the UK, and giving a measure of benefit derived from chemoprophylaxis use against the likely adverse events generated. Overall numbers needed to prevent a malaria infection are comparable among the four options and are sensitive to changes in the background infection rates. Only a limited impact on the number of infections can be expected if Mq is substituted by AP.

  11. Testing a Poisson counter model for visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks.

    Science.gov (United States)

    Kyllingsbæk, Søren; Markussen, Bo; Bundesen, Claus

    2012-06-01

    The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is continued until the stimulus disappears, and the overt response is based on the categorization made the greatest number of times. The model was evaluated by Monte Carlo tests of goodness of fit against observed probability distributions of responses in two extensive experiments and also by quantifications of the information loss of the model compared with the observed data by use of information theoretic measures. The model provided a close fit to individual data on identification of digits and an apparently perfect fit to data on identification of Landolt rings.
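
    A minimal Monte Carlo sketch of the model as described: during an exposure of duration t, tentative categorizations of stimulus i as category j accrue at Poisson rate v(i, j), and the overt response is the category with the most counts. The rates and exposure durations below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(4)
        v = np.array([[30.0, 10.0, 5.0],    # v[i, j]: rate of categorizing
                      [10.0, 30.0, 5.0],    # stimulus i as category j (1/s)
                      [5.0, 5.0, 30.0]])

        def respond(i, t):
            counts = rng.poisson(v[i] * t)          # categorizations during exposure
            winners = np.flatnonzero(counts == counts.max())
            return rng.choice(winners)              # break ties at random

        for t in (0.02, 0.05, 0.2):                 # longer exposure, higher accuracy
            acc = np.mean([respond(0, t) == 0 for _ in range(5000)])
            print(f"t = {t:.2f} s: P(correct) ~ {acc:.2f}")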

  12. A comparative study of machine learning models for ethnicity classification

    Science.gov (United States)

    Trivedi, Advait; Bessie Amali, D. Geraldine

    2017-11-01

    This paper endeavours to adopt a machine learning approach to solve the problem of ethnicity recognition. Ethnicity identification is an important vision problem with use cases extending to various domains. Despite the complexity involved, ethnicity identification comes naturally to humans. This meta-information can be leveraged to make several decisions, be it in target marketing or security. With the recent development of intelligent systems, a sub-module to efficiently capture ethnicity would be useful in several use cases. Several attempts to identify an ideal learning model to represent a multi-ethnic dataset have been recorded. A comparative study of classifiers such as support vector machines and logistic regression has been documented. Experimental results indicate that the logistic classifier provides a more accurate classification than the support vector machine.
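
    A sketch of the comparison on stand-in data, assuming scikit-learn (the study's face-image features are not reproduced here): held-out accuracy for logistic regression versus an RBF-kernel SVM on a synthetic multi-class problem.

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                                   n_classes=4, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

        for name, clf in [("logistic", LogisticRegression(max_iter=1000)),
                          ("svm", SVC(kernel="rbf"))]:
            print(name, round(clf.fit(Xtr, ytr).score(Xte, yte), 3))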

  13. Comparative metabolomics of drought acclimation in model and forage legumes.

    Science.gov (United States)

    Sanchez, Diego H; Schwabe, Franziska; Erban, Alexander; Udvardi, Michael K; Kopka, Joachim

    2012-01-01

    Water limitation has become a major concern for agriculture. Such constraints reinforce the urgent need to understand mechanisms by which plants cope with water deprivation. We used a non-targeted metabolomic approach to explore plastic systems responses to non-lethal drought in model and forage legume species of the Lotus genus. In the model legume Lotus japonicus, increased water stress caused gradual increases of most of the soluble small molecules profiled, reflecting a global and progressive reprogramming of metabolic pathways. The comparative metabolomic approach between Lotus species revealed conserved and unique metabolic responses to drought stress. Importantly, only a few drought-responsive metabolites were conserved among all species. Thus we highlight a potential impediment to translational approaches that aim to engineer traits linked to the accumulation of compatible solutes. Finally, a broad comparison of the metabolic changes elicited by drought and salt acclimation revealed partial conservation of these metabolic stress responses within each of the Lotus species, but only a few salt- and drought-responsive metabolites were shared among all. The implications of these results are discussed with regard to current insights into legume water stress physiology. © 2011 Blackwell Publishing Ltd.

  14. Static response of deformable microchannels: a comparative modelling study

    Science.gov (United States)

    Shidhore, Tanmay C.; Christov, Ivan C.

    2018-02-01

    We present a comparative modelling study of fluid-structure interactions in microchannels. Through a mathematical analysis based on plate theory and the lubrication approximation for low-Reynolds-number flow, we derive models for the flow rate-pressure drop relation for long shallow microchannels with both thin and thick deformable top walls. These relations are tested against full three-dimensional two-way-coupled fluid-structure interaction simulations. Three types of microchannels, representing different elasticity regimes and having been experimentally characterized previously, are chosen as benchmarks for our theory and simulations. Good agreement is found in most cases for the predicted, simulated and measured flow rate-pressure drop relationships. The numerical simulations performed allow us to also carefully examine the deformation profile of the top wall of the microchannel in any cross section, showing good agreement with the theory. Specifically, the prediction that span-wise displacement in a long shallow microchannel decouples from the flow-wise deformation is confirmed, and the predicted scaling of the maximum displacement with the hydrodynamic pressure and the various material and geometric parameters is validated.
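
    For orientation, a sketch of the rigid-wall lubrication baseline that such flow rate-pressure drop relations reduce to when wall compliance vanishes, q = h^3 w dp / (12 mu L) for a long shallow channel (w >> h); the numbers are generic microchannel values assumed for illustration, not the paper's benchmarks.

        h, w, L = 100e-6, 1e-3, 10e-3     # channel height, width, length (m)
        mu = 1e-3                         # dynamic viscosity of water (Pa s)
        dp = 5e3                          # applied pressure drop (Pa)

        q = h**3 * w * dp / (12.0 * mu * L)   # volumetric flow rate (m^3/s)
        print(f"q = {q * 1e9:.1f} uL/s")      # 1 m^3/s = 1e9 uL/s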

  15. COMPAR

    International Nuclear Information System (INIS)

    Kuefner, K.

    1976-01-01

    COMPAR works on FORTRAN arrays with four indices: A = A(i,j,k,l) where, for each fixed k0, l0, only the 'plane' [A(i,j,k0,l0), i = 1, ..., i_max, j = 1, ..., j_max] is held in fast memory. Given two arrays A, B of this type, COMPAR has the capability to 1) re-norm A and B in different ways; 2) calculate the deviations epsilon defined as epsilon(i,j,k,l) := [A(i,j,k,l) - B(i,j,k,l)] / GEW(i,j,k,l), where GEW(i,j,k,l) may be chosen in three different ways; 3) calculate the mean, standard deviation and maximum of the array epsilon (in several intermediate stages); 4) determine traverses in the array epsilon; 5) plot these traverses on a printer; 6) simplify plots of these traverses with the PLOTEASY system by creating input data blocks for that system. The main application of COMPAR (so far) is the comparison of two- and three-dimensional multigroup neutron flux fields. (orig.) [de]
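
    The deviation step is easy to restate in modern array terms; a sketch in NumPy, where the shapes and the particular weight choice are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)
        A = rng.random((8, 8, 4, 3))            # e.g. multigroup flux field 1
        B = A + rng.normal(0, 0.01, A.shape)    # e.g. multigroup flux field 2
        GEW = np.maximum(np.abs(A), 1e-12)      # one of several possible weights

        eps = (A - B) / GEW                     # deviations epsilon(i,j,k,l)
        print(eps.mean(), eps.std(), np.abs(eps).max())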

  16. Comparing Transformation Possibilities of Topological Functioning Model and BPMN in the Context of Model Driven Architecture

    Directory of Open Access Journals (Sweden)

    Solomencevs Artūrs

    2016-05-01

    Full Text Available The approach called “Topological Functioning Model for Software Engineering” (TFM4SE) applies the Topological Functioning Model (TFM) for modelling the business system in the context of Model Driven Architecture. TFM is a mathematically formal computation independent model (CIM). TFM4SE is compared to an approach that uses BPMN as a CIM. The comparison focuses on CIM modelling and on the transformation to a UML Sequence diagram at the platform independent model (PIM) level. The results show the advantages and drawbacks that the formalism of TFM brings to the development.

  17. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    A computer program was adopted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield–weather data. The models tested were the Hanks model (first and second versions), the Stewart model (first and second versions) and the Hall–Butcher model. Three sets of ...

  18. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  19. Jump Model / Comparability Ratio Model — Joinpoint Help System 4.4.0.0

    Science.gov (United States)

    The Jump Model / Comparability Ratio Model in the Joinpoint software provides a direct estimation of trend data (e.g. cancer rates) where there is a systematic scale change, which causes a “jump” in the rates, but is assumed not to affect the underlying trend.

  20. Characterizing Cavities in Model Inclusion Fullerenes: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Francisco Torrens

    2001-06-01

    Full Text Available Abstract: The fullerene-82 cavity is selected as a model system in order to test several methods for characterizing inclusion molecules. The methods are based on different technical foundations such as a square and triangular tessellation of the molecular surface, spherical tessellation of the molecular surface, numerical integration of the atomic volumes and surfaces, triangular tessellation of the molecular surface, and cubic lattice approach to the molecular volume. Accurate measures of the molecular volume and surface area have been performed with the pseudorandom Monte Carlo (MCVS and uniform Monte Carlo (UMCVS methods. These calculations serve as a reference for the rest of the methods. The SURMO2 method does not recognize the cavity and may not be convenient for intercalation compounds. The programs that detect the cavities never exceed 1% deviation relative to the reference value for molecular volume and 5% for surface area. The GEPOL algorithm, alone or combined with TOPO, shows results in good agreement with those of the UMCVS reference. The uniform random number generator provides the fastest convergence for UMCVS and a correct estimate of the standard deviations. The effect of the internal cavity on the solvent-accessible surfaces has been calculated. Fullerene-82 is compared with fullerene-60 and -70.

  1. Comparing Entrepreneurship Intention: A Multigroup Structural Equation Modeling Approach

    Directory of Open Access Journals (Sweden)

    Sabrina O. Sihombing

    2012-04-01

    Full Text Available Unemployment is one of the main social and economic problems that many countries face nowadays. One strategic way to overcome this problem is by fostering the entrepreneurial spirit, especially among unemployed graduates. Entrepreneurship is becoming an alternative job for students after they graduate. This is because entrepreneurship offers major benefits, such as setting up one’s own business and the possibility of having greater financial rewards than working for others. Entrepreneurship is therefore offered by many universities. This research applies the theory of planned behavior (TPB), incorporating attitude toward success as an antecedent variable of attitude, to examine students’ intention to become an entrepreneur. The objective of this research is to compare entrepreneurship intention between business students and non-business students. A self-administered questionnaire was used to collect data for this study. Questionnaires were distributed to respondents by applying the drop-off/pick-up method. A total of 294 questionnaires were used in the analysis. Data were analyzed by using structural equation modeling. Two out of four hypotheses were confirmed. These hypotheses are the relationship between the attitude toward becoming an entrepreneur and the intention to try becoming an entrepreneur, and the relationship between perceived behavioral control and the intention to try becoming an entrepreneur. This paper also provides a discussion and offers directions for future research.

  2. Modeling Conformal Growth in Photonic Crystals and Comparing to Experiment

    Science.gov (United States)

    Brzezinski, Andrew; Chen, Ying-Chieh; Wiltzius, Pierre; Braun, Paul

    2008-03-01

    Conformal growth, e.g. atomic layer deposition (ALD), of materials such as silicon and TiO2 on three dimensional (3D) templates is important for making photonic crystals. However, reliable calculations of optical properties as a function of the conformal growth, such as the optical band structure, are hampered by difficultly in accurately assessing a deposited material's spatial distribution. A widely used approximation ignores ``pinch off'' of precursor gas and assumes complete template infilling. Another approximation results in non-uniform growth velocity by employing iso-intensity surfaces of the 3D interference pattern used to create the template. We have developed an accurate model of conformal growth in arbitrary 3D periodic structures, allowing for arbitrary surface orientation. Results are compared with the above approximations and with experimentally fabricated photonic crystals. We use an SU8 polymer template created by 4-beam interference lithography, onto which various amounts of TiO2 are grown by ALD. Characterization is performed by analysis of cross-sectional scanning electron micrographs and by solid angle resolved optical spectroscopy.

  4. Comparative analysis of business rules and business process modeling languages

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2013-03-01

    Full Text Available When developing an information system, it is important to create clear models and to choose suitable modeling languages. The article analyzes the rule-specification languages SRML, SBVR, PRR, SWRL and OCL, and the business process modeling languages UML, DFD, CPN, EPC, IDEF3 and BPMN. It presents a theoretical comparison of business rules and business process modeling languages, comparing them according to selected modeling aspects. Finally, the best-fitting set of languages is selected for a three-layer framework for business-rule-based software modeling.

  5. GEOQUIMICO : an interactive tool for comparing sorption conceptual models (surface complexation modeling versus K[D])

    International Nuclear Information System (INIS)

    Hammond, Glenn E.; Cygan, Randall Timothy

    2007-01-01

    Within reactive geochemical transport, several conceptual models exist for simulating sorption processes in the subsurface. Historically, the K[D] approach has been the method of choice due to its ease of implementation within a reactive transport model and its straightforward comparison with experimental data. However, for modeling complex sorption phenomena (e.g. sorption of radionuclides onto mineral surfaces), this approach does not systematically account for variations in location, time, or chemical conditions, and more sophisticated methods such as a surface complexation model (SCM) must be utilized. It is critical to determine which conceptual model to use, that is, when the material variation becomes important to regulatory decisions. The geochemical transport tool GEOQUIMICO has been developed to assist in this decision-making process. GEOQUIMICO provides a user-friendly framework for comparing the accuracy and performance of sorption conceptual models. The tool currently supports the K[D] and SCM conceptual models. The code is written in the object-oriented Java programming language to facilitate model development and improve code portability. The basic theory underlying geochemical transport and the sorption conceptual models noted above is presented in this report. Explanations are provided of how these physicochemical processes are implemented in GEOQUIMICO, and a brief verification study comparing GEOQUIMICO results to data found in the literature is given

  6. Kinetic study and modeling of biosurfactant production using Bacillus sp.

    Directory of Open Access Journals (Sweden)

    Hesty Heryani

    2017-05-01

    Conclusions: For further development and industrial applications, the modified Gompertz equation is proposed to predict cell mass and biosurfactant production, as a good fit was obtained with this model. The modified Gompertz equation was also extended to enable excellent prediction of the surface tension.
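
    As an illustration of the fitting step, a sketch using the Zwietering form of the modified Gompertz equation (the paper's exact parametrization may differ) with scipy's curve_fit on simulated fermentation data.

        import numpy as np
        from scipy.optimize import curve_fit

        def gompertz(t, a, mu_m, lam):
            """a: asymptote, mu_m: max specific rate, lam: lag time (Zwietering form)."""
            return a * np.exp(-np.exp(mu_m * np.e / a * (lam - t) + 1.0))

        t = np.linspace(0, 48, 25)                          # hours
        rng = np.random.default_rng(6)
        y = gompertz(t, 5.0, 0.4, 6.0) + rng.normal(0, 0.1, t.size)  # toy data

        (a, mu_m, lam), _ = curve_fit(gompertz, t, y, p0=[4.0, 0.3, 5.0])
        print(f"A = {a:.2f} g/L, mu_m = {mu_m:.2f} 1/h, lag = {lam:.1f} h")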

  7. Model selection for contingency tables with algebraic statistics

    NARCIS (Netherlands)

    Krampe, A.; Kuhnt, S.; Gibilisco, P.; Riccimagno, E.; Rogantin, M.P.; Wynn, H.P.

    2009-01-01

    Goodness-of-fit tests based on chi-square approximations are commonly used in the analysis of contingency tables. Results from algebraic statistics combined with MCMC methods provide alternatives to the chi-square approximation. However, within a model selection procedure usually a large number of

  8. Comparing satellite SAR and wind farm wake models

    DEFF Research Database (Denmark)

    Hasager, Charlotte Bay; Vincent, P.; Husson, R.

    2015-01-01

    … These extend several tens of kilometres downwind, e.g. 70 km. Other SAR wind maps show near-field fine-scale details of wakes behind rows of turbines. The satellite SAR wind farm wake cases are modelled by different wind farm wake models, including the PARK microscale model, the Weather Research and Forecasting (WRF) model in high resolution, and WRF with a coupled microscale parametrization.

  9. A Model of Comparative Ethics Education for Social Workers

    Science.gov (United States)

    Pugh, Greg L.

    2017-01-01

    Social work ethics education models have not effectively engaged social workers in practice in formal ethical reasoning processes, potentially allowing personal bias to affect ethical decisions. Using two of the primary ethical models from medicine, a new social work ethics model for education and practical application is proposed. The strengths…

  10. Models of Purposive Human Organization: A Comparative Study

    Science.gov (United States)

    1984-02-01

    The report's stated objectives are to collect relational and object data for Dinnat-Murphree (D-M) model construction and to develop techniques for organizational diagnosis with the D-M model, to be followed by intervention by S-T methodology.

  11. Poisson goodness-of-fit tests for radiation-induced chromosome aberrations

    International Nuclear Information System (INIS)

    Merkle, W.

    1981-01-01

    Asymptotic and exact Poisson goodness-of-fit tests have been reviewed with regard to their applicability in analysing distributional properties of data on chromosome aberrations. It has been demonstrated that for typical cytogenetic samples, i.e. when the average number of aberrations per cell is smaller than one, the results of asymptotic tests, especially of the most commonly used u-test, differ greatly from the results of the corresponding exact tests. While the u-statistic can serve as a qualitative index to indicate a tendency towards under- or overdispersion, exact tests should be used if the assumption of a Poisson distribution is crucial, e.g. in investigating induction mechanisms. If the main interest is to detect a difference between the mean and the variance of a sample, it is furthermore important to realize that a much larger sample size is required to detect underdispersion than to detect overdispersion. (author)
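
    A sketch contrasting the asymptotic u-statistic with a Monte Carlo (exact-style) reference for the Poisson dispersion test, on simulated aberration counts with mean below one; the sample size and mean are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)
        cells = rng.poisson(0.4, size=100)            # aberrations per cell

        n, mean = cells.size, cells.mean()
        T = (n - 1) * cells.var(ddof=1) / mean        # dispersion statistic
        u = (T - (n - 1)) / np.sqrt(2.0 * (n - 1))    # asymptotic u-statistic

        # Monte Carlo reference distribution of T under Poisson(mean):
        sims = rng.poisson(mean, size=(20000, n))
        T_sim = (n - 1) * sims.var(axis=1, ddof=1) / sims.mean(axis=1)
        print(f"u = {u:.2f}, Monte Carlo one-sided p = {np.mean(T_sim >= T):.3f}")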

  12. Asymptotically Distribution-Free Goodness-of-Fit Testing for Copulas

    NARCIS (Netherlands)

    Can, S.U.; Einmahl, John; Laeven, R.J.A.

    2017-01-01

    Consider a random sample from a continuous multivariate distribution function F with copula C. In order to test the null hypothesis that C belongs to a certain parametric family, we construct a process that is asymptotically distribution-free under H0 and serves as a test generator. The process is a

  13. Practical Statistics for Particle Physics Analyses: Chi-Squared and Goodness of Fit (2/4)

    CERN Multimedia

    CERN. Geneva; Moneta, Lorenzo

    2016-01-01

    This will be a 4-day series of 2-hour sessions as part of CERN's Academic Training Course. Each session will consist of a 1-hour lecture followed by one hour of practical computing, with exercises based on that day's lecture. While it is possible to follow just the lectures or just the computing exercises, we highly recommend that, because of the way this course is designed, participants come to both parts. In order to follow the hands-on exercise sessions, students need to bring their own laptops. The exercises will be run on a dedicated CERN web notebook service, SWAN (swan.cern.ch), which is open to everybody holding a CERN computing account. The requirement to use the SWAN service is to have a CERN account and also access to CERNBox, the shared storage service at CERN. New users are invited to activate CERNBox beforehand by simply connecting to https://cernbox.cern.ch. A basic prior knowledge of ROOT and C++ is also recommended for participation in the practical session....

  14. Beyond the goodness of fit: A preference-based account of Europeanization

    NARCIS (Netherlands)

    Mastenbroek, E.; Keulen, M. van; Haverland, M; Holzhacker, R

    2006-01-01

    This paper is concerned with formulating and testing a preference-based explanation of EU implementation. The hypothesis is that, rather than the ‘goodness of fit’ with existing policies, the fit with national preferences predicts the ease of implementation of new EU legislation. This hypothesis is

  15. Lithium-ion battery models: a comparative study and a model-based powerline communication

    Directory of Open Access Journals (Sweden)

    F. Saidani

    2017-09-01

    Full Text Available In this work, various lithium-ion (Li-ion) battery models are evaluated according to their accuracy, complexity and physical interpretability. An initial classification into physical, empirical and abstract models is introduced. Also known as white, black and grey boxes, respectively, the nature and characteristics of these model types are compared. Since the Li-ion battery cell is a thermo-electro-chemical system, the models are formulated in either the thermal or the electrochemical state-space. Physical models attempt to capture key features of the physical process inside the cell. Empirical models describe the system with empirical parameters and offer poor analytical insight, whereas abstract models provide an alternative representation. In addition, a model selection guideline is proposed based on applications and design requirements. A complex model with detailed analytical insight is of use for battery designers but impractical for real-time applications and in situ diagnosis. In automotive applications, an abstract model reproducing the battery behavior in an equivalent but more practical form, mainly as an equivalent circuit diagram, is recommended for the purpose of battery management. As a general rule, a trade-off should be reached between high fidelity and computational feasibility. Especially if the model is embedded in a real-time monitoring unit such as a microprocessor or an FPGA, the calculation time and memory requirements rise dramatically with a higher number of parameters. Moreover, examples of equivalent circuit models of lithium-ion batteries are covered. Equivalent circuit topologies are introduced and compared according to the previously introduced criteria. An experimental sequence to model a 20 Ah cell is presented and the results are used for the purposes of powerline communication.
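
    A sketch of the kind of abstract model the review recommends for battery management: a first-order Thevenin equivalent circuit (OCV source, series resistance R0, one RC pair), with invented parameter values and a flat OCV for brevity.

        import numpy as np

        R0, R1, C1 = 0.010, 0.015, 2400.0      # ohm, ohm, farad (illustrative)
        dt, ocv = 1.0, 3.7                     # time step (s), open-circuit voltage (V)

        def terminal_voltage(current):
            """Simulate v = OCV - i*R0 - v_rc for a current profile (A, discharge > 0)."""
            v_rc, out = 0.0, []
            for i in current:
                v_rc += dt * (i / C1 - v_rc / (R1 * C1))   # RC-branch dynamics
                out.append(ocv - i * R0 - v_rc)
            return np.array(out)

        pulse = np.r_[np.full(60, 20.0), np.zeros(60)]     # 20 A pulse, then rest
        v = terminal_voltage(pulse)
        print(v[0], v[59], v[60], v[-1])                   # IR drop, sag, relaxation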

  16. Comparing Intrinsic Connectivity Models for the Primary Auditory Cortices

    Science.gov (United States)

    Hamid, Khairiah Abdul; Yusoff, Ahmad Nazlim; Mohamad, Mazlyfarina; Hamid, Aini Ismafairus Abd; Manan, Hanani Abd

    2010-07-01

    This fMRI study concerns modeling the intrinsic connectivity between Heschl's gyrus (HG) and the superior temporal gyrus (STG) in the human primary auditory cortices. Ten healthy male subjects participated and were required to listen to a white noise stimulus during the fMRI scans. Two intrinsic connectivity models comprising bilateral HG and STG were constructed using statistical parametric mapping (SPM) and dynamic causal modeling (DCM). The group Bayes factor (GBF), positive evidence ratio (PER) and Bayesian model selection (BMS) for group studies were used in model comparison. Group results indicated significant bilateral asymmetrical activation (p_uncorr < 0.001) in HG and STG. Comparison results showed strong evidence for Model 2 as the preferred model (STG as the input center), with a GBF value of 5.77 × 10^73. The model is preferred by 6 out of 10 subjects. The results were supported by the BMS results for group studies. A one-sample t-test on connection values obtained from Model 2 indicates unidirectional parallel connections from STG to bilateral HG (p < 0.05). Model 2 was determined to be the most probable intrinsic connectivity model between bilateral HG and STG when listening to white noise.

  17. Identification of a putative man-made object from an underwater crash site using CAD model superimposition.

    Science.gov (United States)

    Vincelli, Jay; Calakli, Fatih; Stone, Michael; Forrester, Graham; Mellon, Timothy; Jarrell, John

    2018-04-01

    In order to identify an object in video, a comparison with an exemplar object is typically needed. In this paper, we discuss the methodology used to identify an object detected in underwater video that was recorded during an investigation into Amelia Earhart's purported crash site. A computer aided design (CAD) model of the suspected aircraft component was created based on measurements made from orthogonally rectified images of a reference aircraft, and validated against historical photographs of the subject aircraft prior to the crash. The CAD model was then superimposed on the underwater video, and specific features on the object were geometrically compared between the CAD model and the video. This geometrical comparison was used to assess the goodness of fit between the purported object and the object identified in the underwater video. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Comparative performance of high-fidelity training models for flexible ureteroscopy: Are all models effective?

    Directory of Open Access Journals (Sweden)

    Shashikant Mishra

    2011-01-01

    Full Text Available Objective: We performed a comparative study of high-fidelity training models for flexible ureteroscopy (URS). Our objective was to determine whether high-fidelity non-virtual reality (VR) models are as effective as the VR model in teaching flexible URS skills. Materials and Methods: Twenty-one trained urologists without clinical experience of flexible URS underwent dry-lab simulation practice. After a warm-up period of 2 h, tasks were performed on high-fidelity non-VR models (Uro-scopic Trainer™; Endo-Urologie-Modell™) and a high-fidelity VR model (URO Mentor™). The participants were divided equally into three batches with rotation on each of the three stations for 30 min. Performance of the trainees was evaluated by an expert ureteroscopist using pass rating and a global rating score (GRS). The participants rated a face validity questionnaire at the end of each session. Results: The GRS improved significantly at the evaluation performed after the second rotation (P<0.001 for batches 1, 2 and 3). Pass ratings also improved significantly for all training models when the third and first rotations were compared (P<0.05). The batch that was trained on the VR-based model had more improvement in pass ratings on the second rotation, but this did not achieve statistical significance. Most of the realism domains were rated higher for the VR model as compared with the non-VR model, except the realism of the flexible endoscope. Conclusions: All the models used for training flexible URS were effective in increasing the GRS and pass ratings irrespective of their VR status.

  19. Comparative study between a QCD inspired model and a multiple diffraction model

    International Nuclear Information System (INIS)

    Luna, E.G.S.; Martini, A.F.; Menon, M.J.

    2003-01-01

    A comparative study between a QCD Inspired Model (QCDIM) and a Multiple Diffraction Model (MDM) is presented, with focus on the results for pp differential cross section at √s = 52.8 GeV. It is shown that the MDM predictions are in agreement with experimental data, except for the dip region and that the QCDIM describes only the diffraction peak region. Interpretations in terms of the corresponding eikonals are also discussed. (author)

  20. Repeated holdout Cross-Validation of Model to Estimate Risk of Lyme Disease by Landscape Attributes

    Science.gov (United States)

    We previously modeled Lyme disease (LD) risk at the landscape scale; here we evaluate the model's overall goodness-of-fit using holdout validation. Landscapes were characterized within road-bounded analysis units (AU). Observed LD cases (obsLD) were ascertained per AU. Data were ...
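
    A generic sketch of repeated holdout validation of a count-outcome model, assuming scikit-learn; the data and the Poisson regressor are placeholders for the study's landscape model, not its actual specification.

        import numpy as np
        from sklearn.linear_model import PoissonRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(8)
        X = rng.normal(size=(300, 4))                    # landscape attributes per AU
        y = rng.poisson(np.exp(0.3 + X @ np.array([0.4, -0.2, 0.1, 0.0])))

        scores = []
        for seed in range(100):                          # repeated random holdouts
            Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                                  random_state=seed)
            scores.append(PoissonRegressor().fit(Xtr, ytr).score(Xte, yte))
        print(f"mean D^2 = {np.mean(scores):.3f} +/- {np.std(scores):.3f}")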

  1. Estimating structural equation models with non-normal variables by using transformations

    NARCIS (Netherlands)

    Montfort, van K.; Mooijaart, A.; Meijerink, F.

    2009-01-01

    We discuss structural equation models for non-normal variables. In this situation the maximum likelihood and the generalized least-squares estimates of the model parameters can give incorrect estimates of the standard errors and the associated goodness-of-fit chi-squared statistics. If the sample

  2. Assessing model fit in latent class analysis when asymptotics do not hold

    NARCIS (Netherlands)

    van Kollenburg, Geert H.; Mulder, Joris; Vermunt, Jeroen K.

    2015-01-01

    The application of latent class (LC) analysis involves evaluating the LC model using goodness-of-fit statistics. To assess the misfit of a specified model, say with the Pearson chi-squared statistic, a p-value can be obtained using an asymptotic reference distribution. However, asymptotic p-values
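
    When asymptotic p-values are unreliable, a standard remedy in this literature is the parametric bootstrap; a sketch on a toy multinomial (a full latent class application would refit the model on each replicate, which is omitted here for brevity).

        import numpy as np

        rng = np.random.default_rng(9)

        def pearson_x2(obs, expected):
            return np.sum((obs - expected) ** 2 / expected)

        probs = np.array([0.5, 0.3, 0.15, 0.05])    # fitted model probabilities
        obs = np.array([18, 9, 2, 1])               # sparse observed counts
        n = obs.sum()

        x2 = pearson_x2(obs, n * probs)
        boot = np.array([pearson_x2(rng.multinomial(n, probs), n * probs)
                         for _ in range(10000)])    # simulate under the model
        print(f"bootstrap p = {np.mean(boot >= x2):.3f}")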

  3. Mathematical model comparing of the multi-level economics systems

    Science.gov (United States)

    Brykalov, S. M.; Kryanev, A. V.

    2017-12-01

    A mathematical model (scheme) for the multi-level comparison of economic systems, characterized by a system of indices, is worked out. In this model, indicators from peer review and from forecasting of the economic system under consideration can be used. The model can take into account uncertainty in the estimated values of the parameters or in the expert estimations. The model uses a multi-criteria approach based on Pareto solutions.

  4. Comparative study of Moore and Mealy machine models adaptation ...

    African Journals Online (AJOL)

    Information and Communications Technology has influenced the need for automated machines that can carry out important production procedures and, automata models are among the computational models used in design and construction of industrial processes. The production process of the popular African Black Soap ...

  5. Criteria for comparing economic impact models of tourism

    NARCIS (Netherlands)

    Klijs, J.; Heijman, W.J.M.; Korteweg Maris, D.; Bryon, J.

    2012-01-01

    There are substantial differences between models of the economic impacts of tourism. Not only do the nature and precision of results vary, but data demands, complexity and underlying assumptions also differ. Often, it is not clear whether the models chosen are appropriate for the specific situation

  6. Goodness-of-fit tests for the accelerated failure time model in survival analysis

    Czech Academy of Sciences Publication Activity Database

    Novák, Petr

    2010-01-01

    Roč. 22, č. 3 (2010), s. 89-93 ISSN 1210-8022. [16. letní škola JČMF Robust 2010. Králíky, 30.01.2010-05.02.2010] R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z10750506 Keywords : survival analysis * goodness-of-fit test * accelerated failure time model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2010/SI/novak-goodness-of-fit tests for the aft model in survival analysis.pdf

  7. Clinical validation of the LKB model and parameter sets for predicting radiation-induced pneumonitis from breast cancer radiotherapy

    International Nuclear Information System (INIS)

    Tsougos, Ioannis; Mavroidis, Panayiotis; Theodorou, Kyriaki; Rajala, J; Pitkaenen, M A; Holli, K; Ojala, A T; Hyoedynmaa, S; Jaervenpaeae, Ritva; Lind, Bengt K; Kappas, Constantin

    2006-01-01

    The choice of the appropriate model and parameter set in determining the relation between the incidence of radiation pneumonitis and dose distribution in the lung is of great importance, especially in the case of breast radiotherapy where the observed incidence is fairly low. From our previous study based on 150 breast cancer patients, where the fits of dose-volume models to clinical data were estimated (Tsougos et al 2005 Evaluation of dose-response models and parameters predicting radiation induced pneumonitis using clinical data from breast cancer radiotherapy Phys. Med. Biol. 50 3535-54), one could get the impression that the relative seriality model is significantly better than the LKB NTCP model. However, the estimation of the different NTCP models was based on their goodness-of-fit to clinical data, using various sets of published parameters from other groups, and this fact may provisionally justify the results. Hence, we sought to investigate the LKB model further, by applying different published parameter sets for the very same group of patients, in order to be able to compare the results. It was shown that, depending on the parameter set applied, the LKB model is able to predict the incidence of radiation pneumonitis with acceptable accuracy, especially when implemented on a sub-group of patients (120) receiving a mean dose (D̄) or EUD higher than 8 Gy. In conclusion, the goodness-of-fit of a certain radiobiological model on a given clinical case is closely related to the selection of the proper scoring criteria and parameter set as well as to the compatibility of the clinical case from which the data were derived. (letter to the editor)

  8. comparative study of moore and mealy machine models adaptation

    African Journals Online (AJOL)

    automata model was developed for the ABS manufacturing process using Moore and Mealy finite state machines. Simulation ... The simulation results showed that the Mealy machine is faster than the Moore ... random numbers from MATLAB.

  9. Cost Valuation: A Model for Comparing Dissimilar Aircraft Platforms

    National Research Council Canada - National Science Library

    Long, Eric J

    2006-01-01

    .... A demonstration of the model's validity using aircraft and cost data from the Predator UAV and the F-16 was then performed to illustrate how it can be used to aid comparisons of dissimilar aircraft...

  10. A Comparative Study of Three Methodologies for Modeling Dynamic Stall

    Science.gov (United States)

    Sankar, L.; Rhee, M.; Tung, C.; ZibiBailly, J.; LeBalleur, J. C.; Blaise, D.; Rouzaud, O.

    2002-01-01

    During the past two decades, there has been an increased reliance on the use of computational fluid dynamics methods for modeling rotors in high-speed forward flight. Computational methods are being developed for modeling the shock-induced loads on the advancing side, for first-principles-based modeling of the trailing wake evolution, and for retreating blade stall. The retreating blade dynamic stall problem has received particular attention, because the large variations in lift and pitching moments encountered in dynamic stall can lead to blade vibrations and pitch link fatigue. Restricting attention to aerodynamics, the numerical prediction of dynamic stall remains a complex and challenging CFD problem that, even in two dimensions at low speed, combines the major difficulties of aerodynamics, such as the grid resolution required for viscous phenomena at leading-edge bubbles or in mixing layers and the bias of numerical viscosity, with the major difficulties of physical modeling, such as the turbulence and transition models, whose decisive influence, already present in static maximum-lift or stall computations, is amplified by the dynamic nature of the phenomena.

  11. Comparing single- and dual-process models of memory development.

    Science.gov (United States)

    Hayes, Brett K; Dunn, John C; Joubert, Amy; Taylor, Robert

    2017-11-01

    This experiment examined single-process and dual-process accounts of the development of visual recognition memory. The participants, 6-7-year-olds, 9-10-year-olds and adults, were presented with a list of pictures which they encoded under shallow or deep conditions. They then made recognition and confidence judgments about a list containing old and new items. We replicated the main trends reported by Ghetti and Angelini () in that recognition hit rates increased from 6 to 9 years of age, with larger age changes following deep than shallow encoding. Formal versions of the dual-process high threshold signal detection model and several single-process models (equal variance signal detection, unequal variance signal detection, mixture signal detection) were fit to the developmental data. The unequal variance and mixture signal detection models gave a better account of the data than either of the other models. A state-trace analysis found evidence for only one underlying memory process across the age range tested. These results suggest that single-process memory models based on memory strength are a viable alternative to dual-process models for explaining memory development. © 2016 John Wiley & Sons Ltd.

  12. Comparative Analysis Of Three Largest World Models Of Business Excellence

    Directory of Open Access Journals (Sweden)

    Jasminka Samardžija

    2009-07-01

    Full Text Available Business excellence has become the strongest means of achieving competitive advantage for companies, while total quality management has become the road that ensures support of the excellent results recognized by many world companies. Despite many differences, we can conclude that the models have many common elements. With the revision in 2005, the DP and MBNQA moved the focus from excellence of the product, i.e. service, onto excellence of the quality of the entire organization process. Thus, quality gained a strategic dimension instead of a technical one, and the accent passed from technical quality to the total excellence of all organization processes. The joint movement goes in the direction of good management and an appreciation of systems thinking. The very structure of the EFQM model criteria is adjusted to the strategic dimension of quality, which is why the model underwent only minor revisions within the criteria themselves. Essentially, the model remained unchanged. In all models, the accent is on the satisfaction of buyers, employees and the community. National quality awards have an important role in promoting and rewarding excellence in organizational performance. Moreover, they raise the quality standards of companies and the profile of the country as a whole. Considering its GDP per capita and the percentage of certified companies, Croatia has all the predispositions for introducing the EFQM model of business excellence, with the basic aim of decreasing the deficit in the foreign trade balance and strengthening competitiveness as the necessary preliminary work for entering the competitive EU market. Quality management was introduced in many organizations. The methods used at that time have developed over the years, and the continued evolution of both the models and the methods of business excellence is to be expected.

  13. Comparing hierarchical models via the marginalized deviance information criterion.

    Science.gov (United States)

    Quintero, Adrian; Lesaffre, Emmanuel

    2018-07-20

    Hierarchical models are extensively used in pharmacokinetics and longitudinal studies. When the estimation is performed from a Bayesian approach, model comparison is often based on the deviance information criterion (DIC). In hierarchical models with latent variables, there are several versions of this statistic: the conditional DIC (cDIC) that incorporates the latent variables in the focus of the analysis and the marginalized DIC (mDIC) that integrates them out. Regardless of the asymptotic and coherency difficulties of cDIC, this alternative is usually used in Markov chain Monte Carlo (MCMC) methods for hierarchical models because of practical convenience. The mDIC criterion is more appropriate in most cases but requires integration of the likelihood, which is computationally demanding and not implemented in Bayesian software. Therefore, we consider a method to compute mDIC by generating replicate samples of the latent variables that need to be integrated out. This alternative can be easily conducted from the MCMC output of Bayesian packages and is widely applicable to hierarchical models in general. Additionally, we propose some approximations in order to reduce the computational complexity for large-sample situations. The method is illustrated with simulated data sets and 2 medical studies, evidencing that cDIC may be misleading whilst mDIC appears pertinent. Copyright © 2018 John Wiley & Sons, Ltd.
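
    A sketch of the DIC bookkeeping from MCMC output on a toy known-variance normal model; for mDIC, loglik would first have to integrate out the latent variables, e.g. by averaging over replicate draws as the paper proposes. The conjugate "posterior" draws below stand in for real MCMC output.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(10)
        y = rng.normal(1.0, 1.0, 50)

        def loglik(mu):                                  # known-variance normal
            return stats.norm.logpdf(y, mu, 1.0).sum()

        # Stand-in posterior draws for mu (conjugate normal posterior):
        mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(y.size), 4000)

        dev = np.array([-2.0 * loglik(m) for m in mu_draws])   # deviance trace
        p_d = dev.mean() - (-2.0 * loglik(mu_draws.mean()))    # effective parameters
        print(f"DIC = {dev.mean() + p_d:.1f}, pD = {p_d:.2f}")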

  14. COMPARING FINANCIAL DISTRESS PREDICTION MODELS BEFORE AND DURING RECESSION

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2011-02-01

    Full Text Available The purpose of this paper is to design three separate financial distress prediction models that track the changes in the relative importance of financial ratios throughout three consecutive years. The models were based on financial data from 2,000 privately-owned small and medium-sized enterprises in Croatia from 2006 to 2009, and were developed by means of logistic regression. Macroeconomic conditions as well as market dynamics changed over this period. Financial ratios that were less important in one period became more important in the next. The composition of the model built for 2006 changed over the following years, indicating which financial ratios become more important during an economic downturn. In addition, the models help in understanding the behavior of small and medium-sized enterprises in the pre-recession and recession periods.

  15. Energy modeling and comparative assessment beyond the market

    International Nuclear Information System (INIS)

    Rogner, H.-H.; Langlois, L.; McDonald, A.; Jalal, I.

    2004-01-01

    Market participants engage in constant comparative assessment of prices, available supplies and consumer options. Such implicit comparative assessment is a sine qua non for decision making in, and the smooth functioning of, competitive markets, but it is not always sufficient for policy makers who make decisions based on priorities other than, or in addition to, market prices. Supplementary mechanisms are needed to make explicit, expose for consideration and incorporate into decision-making processes those broader factors that are not necessarily reflected directly in the market price of a good or service. These would include, for example, employment, environment, national security or trade considerations. They would also include long-term considerations, e.g., global warming or greatly diminished future supplies of oil and gas. This paper explores different applications of comparative assessment beyond the market, reviews different approaches for accomplishing such evaluations, and presents some tools available for conducting various types of extra-market comparative assessment, including those currently in use by Member States of the IAEA. (author)

  16. Psychological model of ART adherence behaviors in persons living with HIV/AIDS in Mexico: a structural equation analysis

    Directory of Open Access Journals (Sweden)

    José Luis Ybarra Sagarduy

    2017-09-01

    OBJECTIVE The objective of this study has been to test the ability of variables of a psychological model to predict antiretroviral therapy medication adherence behavior. METHODS We have conducted a cross-sectional study among 172 persons living with HIV/AIDS (PLWHA), who completed four self-administered assessments: (1) the Psychological Variables and Adherence Behaviors Questionnaire, (2) the Stress-Related Situation Scale to assess the variable of Personality, (3) the Zung Depression Scale, and (4) the Duke-UNC Functional Social Support Questionnaire. Structural equation modeling was used to construct a model to predict medication adherence behaviors. RESULTS Out of all the participants, 141 (82%) have been considered 100% adherent to antiretroviral therapy. Structural equation modeling has confirmed the direct effect that personality (decision-making and tolerance of frustration) has on motives to behave, or act accordingly, which was in turn directly related to medication adherence behaviors. In addition, these behaviors have had a direct and significant effect on viral load, as well as an indirect effect on CD4 cell count. The final model demonstrates the congruence between theory and data (χ²/df = 1.480, goodness of fit index = 0.97, adjusted goodness of fit index = 0.94, comparative fit index = 0.98, root mean square error of approximation = 0.05), accounting for 55.7% of the variance. CONCLUSIONS The results of this study support our theoretical model as a conceptual framework for the prediction of medication adherence behaviors in persons living with HIV/AIDS. Implications for designing, implementing, and evaluating intervention programs based on the model are discussed.

  17. Psychological model of ART adherence behaviors in persons living with HIV/AIDS in Mexico: a structural equation analysis.

    Science.gov (United States)

    Sagarduy, José Luis Ybarra; López, Julio Alfonso Piña; Ramírez, Mónica Teresa González; Dávila, Luis Enrique Fierros

    2017-09-04

    The objective of this study has been to test the ability of variables of a psychological model to predict antiretroviral therapy medication adherence behavior. We have conducted a cross-sectional study among 172 persons living with HIV/AIDS (PLWHA), who completed four self-administered assessments: 1) the Psychological Variables and Adherence Behaviors Questionnaire, 2) the Stress-Related Situation Scale to assess the variable of Personality, 3) the Zung Depression Scale, and 4) the Duke-UNC Functional Social Support Questionnaire. Structural equation modeling was used to construct a model to predict medication adherence behaviors. Out of all the participants, 141 (82%) have been considered 100% adherent to antiretroviral therapy. Structural equation modeling has confirmed the direct effect that personality (decision-making and tolerance of frustration) has on motives to behave, or act accordingly, which was in turn directly related to medication adherence behaviors. In addition, these behaviors have had a direct and significant effect on viral load, as well as an indirect effect on CD4 cell count. The final model demonstrates the congruence between theory and data (χ²/df = 1.480, goodness of fit index = 0.97, adjusted goodness of fit index = 0.94, comparative fit index = 0.98, root mean square error of approximation = 0.05), accounting for 55.7% of the variance. The results of this study support our theoretical model as a conceptual framework for the prediction of medication adherence behaviors in persons living with HIV/AIDS. Implications for designing, implementing, and evaluating intervention programs based on the model are discussed.
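
    The fit indices quoted in both versions of this record can be reproduced from the model and baseline chi-square values with standard textbook formulas. The sketch below is not the authors' software; the inputs are illustrative values chosen to land near the reported indices (χ²/df ≈ 1.48, CFI ≈ 0.98, RMSEA ≈ 0.05):

      import math

      def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
          # Standard formulas: CFI compares model misfit with a baseline model;
          # RMSEA scales misfit by degrees of freedom and sample size.
          cfi = 1 - max(chi2_m - df_m, 0) / max(chi2_b - df_b, chi2_m - df_m, 1e-12)
          rmsea = math.sqrt(max(chi2_m - df_m, 0) / (df_m * (n - 1)))
          return {"chi2/df": round(chi2_m / df_m, 3),
                  "CFI": round(cfi, 3), "RMSEA": round(rmsea, 3)}

      print(fit_indices(chi2_m=59.2, df_m=40, chi2_b=1100.0, df_b=55, n=172))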

  18. A comparative study of independent particle model based ...

    Indian Academy of Sciences (India)

    We find that, among these three independent particle model based methods, the ss-VSCF method provides the most accurate thermal averages, followed by t-SCF, while v-VSCF is the least accurate. However, the ss-VSCF is found to be computationally very expensive for large molecules. The t-SCF gives ...

  19. Nature of Science and Models: Comparing Portuguese Prospective Teachers' Views

    Science.gov (United States)

    Torres, Joana; Vasconcelos, Clara

    2015-01-01

    Despite the relevance of nature of science and scientific models in science education, studies reveal that students do not possess adequate views regarding these topics. Bearing in mind that both teachers' views and knowledge strongly influence students' educational experiences, the main scope of this study was to evaluate Portuguese prospective…

  20. Classifying and comparing spatial models of fire dynamics

    Science.gov (United States)

    Geoffrey J. Cary; Robert E. Keane; Mike D. Flannigan

    2007-01-01

    Wildland fire is a significant disturbance in many ecosystems worldwide and the interaction of fire with climate and vegetation over long time spans has major effects on vegetation dynamics, ecosystem carbon budgets, and patterns of biodiversity. Landscape-Fire-Succession Models (LFSMs) that simulate the linked processes of fire and vegetation development in a spatial...

  1. Target normal sheath acceleration analytical modeling, comparative study and developments

    International Nuclear Information System (INIS)

    Perego, C.; Batani, D.; Zani, A.; Passoni, M.

    2012-01-01

    Ultra-intense laser interaction with solid targets appears to be an extremely promising technique to accelerate ions up to several MeV, producing beams that exhibit interesting properties for many foreseen applications. Nowadays, most of the published experimental results can be theoretically explained in the framework of the target normal sheath acceleration (TNSA) mechanism proposed by Wilks et al. [Phys. Plasmas 8(2), 542 (2001)]. As an alternative to numerical simulation, various analytical or semi-analytical TNSA models have been published in recent years, each of them trying to provide predictions for some of the ion beam features, given the initial laser and target parameters. However, the problem of developing a reliable model for the TNSA process is still open, which is why the purpose of this work is to survey the present state of TNSA modeling and experimental results by means of a quantitative comparison between measurements and theoretical predictions of the maximum ion energy. Moreover, in the light of such an analysis, some indications for the future development of the model proposed by Passoni and Lontano [Phys. Plasmas 13(4), 042102 (2006)] are presented.

  2. Estimation of the lifetime distribution of mechatronic systems in the presence of a covariate: A comparison among parametric, semiparametric and nonparametric models

    International Nuclear Information System (INIS)

    Bobrowski, Sebastian; Chen, Hong; Döring, Maik; Jensen, Uwe; Schinköthe, Wolfgang

    2015-01-01

    In practice, manufacturers may have large amounts of failure data for similar products that share the same technology basis but operate under different conditions. One can then derive predictions for the lifetime distribution of newly developed components, or of components in new application environments, from the existing data using regression models based on covariates. Three categories of such regression models are considered: a parametric, a semiparametric and a nonparametric approach. First, we assume that the lifetime is Weibull distributed, with its parameters modelled as linear functions of the covariate. Second, the Cox proportional hazards model, well known in survival analysis, is applied. Finally, a kernel estimator is used to interpolate between empirical distribution functions; this last case in particular is new in the context of reliability analysis. We propose a goodness of fit measure (GoF) which can be applied to all three types of regression models, and using this GoF measure we discuss a new model selection procedure. To illustrate this method of reliability prediction, the three classes of regression models are applied to real test data from motor experiments. Further, the performance of the approaches is investigated by Monte Carlo simulations. - Highlights: • We estimate the lifetime distribution in the presence of a covariate. • Three types of regression models are considered and compared. • A new nonparametric estimator based on our particular data structure is introduced. • We propose a goodness of fit measure and show a new model selection procedure. • A case study with real data and Monte Carlo simulations are performed

  3. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...

  4. Activity Modelling and Comparative Evaluation of WSN MAC Security Attacks

    DEFF Research Database (Denmark)

    Pawar, Pranav M.; Nielsen, Rasmus Hjorth; Prasad, Neeli R.

    2012-01-01

and initiate security attacks that disturb the normal functioning of the network in a severe manner. Such attacks affect the performance of the network by increasing the energy consumption, by reducing throughput and by inducing long delays. Of all existing WSN attacks, MAC layer attacks are considered...... the most harmful as they directly affect the available resources and thus the nodes’ energy consumption. The first endeavour of this paper is to model the activities of MAC layer security attacks to understand the flow of activities taking place when mounting the attack and when actually executing it....... The second aim of the paper is to simulate these attacks on hybrid MAC mechanisms, which shows the performance degradation of a WSN under the considered attacks. The modelling and implementation of the security attacks give an actual view of the network which can be useful in further investigating secure...

  5. Integration of a Three-Dimensional Process-Based Hydrological Model into the Object Modeling System

    Directory of Open Access Journals (Sweden)

    Giuseppe Formetta

    2016-01-01

    The integration of a spatial process model into an environmental modeling framework can enhance the model’s capabilities. This paper describes a general methodology for integrating environmental models into the Object Modeling System (OMS) regardless of the model’s complexity, the programming language, and the operating system used. We present the integration of the GEOtop model into the OMS version 3.0 and illustrate its application in a small watershed. OMS is an environmental modeling framework that facilitates model development, calibration, evaluation, and maintenance. It provides innovative techniques in software design such as multithreading, implicit parallelism, calibration and sensitivity analysis algorithms, and cloud-services. GEOtop is a physically based, spatially distributed rainfall-runoff model that performs three-dimensional finite volume calculations of water and energy budgets. Executing GEOtop as an OMS model component allows it to: (1) interact directly with the open-source geographical information system (GIS) uDig-JGrass to access geo-processing, visualization, and other modeling components; and (2) use OMS components for automatic calibration, sensitivity analysis, or meteorological data interpolation. A case study of the model in a semi-arid agricultural catchment is presented for illustration and proof-of-concept. Simulated soil water content and soil temperature results are compared with measured data, and model performance is evaluated using goodness-of-fit indices. This study serves as a template for future integration of process models into OMS.

  6. Comparing the MOLAP and ROLAP storage models

    Directory of Open Access Journals (Sweden)

    Marysol N Tamayo

    2006-09-01

    Data Warehouses (DWs), supported by OLAP, have played a key role in helping company decision-making during the last few years. DWs can be stored in ROLAP and/or MOLAP data storage systems. Data is stored in a relational database in ROLAP and in multidimensional matrices in MOLAP. This paper presents a comparative example, analysing the performance and the advantages and disadvantages of ROLAP and MOLAP in a specific database management system (DBMS). An overview of DBMSs is also given to see how these technologies are being incorporated.

  7. Microbial comparative pan-genomics using binomial mixture models

    Directory of Open Access Journals (Sweden)

    Ussery David W

    2009-08-01

    Background: The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. Results: We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection probabilities. Estimated pan-genome sizes range from small (around 2,600 gene families) in Buchnera aphidicola to large (around 43,000 gene families) in Escherichia coli. Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely occurring genes in the population. Conclusion: Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find are always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.
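
    As a minimal illustration of the capture-recapture flavour of this approach, the classical Chao (1987) lower bound on total richness is a simpler relative of the binomial mixture models the authors fit; the frequency spectrum below is hypothetical:

      def chao1(freq_counts):
          # Chao (1987) lower bound on total richness, from the number of gene
          # families observed in exactly k genomes: S_obs + f1^2 / (2 * f2).
          s_obs = sum(freq_counts.values())
          f1, f2 = freq_counts.get(1, 0), freq_counts.get(2, 0)
          return s_obs + f1 * f1 / (2 * f2) if f2 > 0 else float("inf")

      # Invented spectrum: 1,200 families seen in one genome, 480 in two,
      # 300 in three, 2,100 in four genomes.
      print(chao1({1: 1200, 2: 480, 3: 300, 4: 2100}))   # -> 5580.0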

  8. Comparative analysis of calculation models of railway subgrade

    Directory of Open Access Journals (Sweden)

    I.O. Sviatko

    2013-08-01

    Purpose. In the design of transport engineering structures, the primary task is to determine the parameters of the foundation soil and the nuances of its behavior under loads. When calculating the interaction between the soil subgrade and the upper track structure, it is very important to determine the parameters of shear resistance and the parameters governing the development of deep deformations in foundation soils. The aim is to search for generalized numerical modeling methods for the behavior of embankment foundation soil that include not only the analysis of the foundation stress state but also of its deformed state. Methodology. An analysis of existing modern and classical methods of numerical simulation of soil samples under static load was made. Findings. With traditional methods of analyzing the behavior of soil masses, limiting and qualitatively estimating subgrade deformations is possible only indirectly, through the estimation of stress and comparison of the obtained values with boundary ones. Originality. A new computational model was proposed that applies not only the classical analysis of the soil subgrade stress state but also takes its deformed state into account. Practical value. The analysis showed that an accurate analysis of the behavior of soil masses requires a generalized methodology for analyzing the rolling stock - railway subgrade interaction, one that uses not only the classical approach of analyzing the soil subgrade stress state but also takes its deformed state into account.

  9. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.
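
    For context, the c-statistic reported by most de novo models in this database is the area under the ROC curve: the probability that a randomly chosen patient with the outcome receives a higher predicted risk than one without it. A toy computation with made-up labels and risks:

      from sklearn.metrics import roc_auc_score

      # The c-statistic equals the area under the ROC curve.
      y_true = [0, 0, 1, 0, 1, 1, 0, 1]                            # made-up outcomes
      y_prob = [0.10, 0.30, 0.35, 0.20, 0.80, 0.70, 0.45, 0.60]    # made-up risks
      print(round(roc_auc_score(y_true, y_prob), 3))               # -> 0.938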

  10. Purple Unicorns, True Models, and Other Things I've Never Seen

    Science.gov (United States)

    Edwards, Michael C.

    2013-01-01

    This author has had the privilege of knowing Professor Maydeu-Olivares for almost a decade, and although their paths cross only occasionally, such instances were always enjoyable and enlightening. Edwards states that Maydeu-Olivares' target article for this issue ("Goodness-of-Fit Assessment of Item Response Theory Models") provides…

  11. Reduction of the number of parameters needed for a polynomial random regression test-day model

    NARCIS (Netherlands)

    Pool, M.H.; Meuwissen, T.H.E.

    2000-01-01

    Legendre polynomials were used to describe the (co)variance matrix within a random regression test-day model. The goodness of fit depended on the polynomial order of fit, i.e., the number of parameters to be estimated per animal, but is limited by computing capacity. Two aspects: incomplete lactation

  12. Comparing droplet activation parameterisations against adiabatic parcel models using a novel inverse modelling framework

    Science.gov (United States)

    Partridge, Daniel; Morales, Ricardo; Stier, Philip

    2015-04-01

    Many previous studies have compared droplet activation parameterisations against adiabatic parcel models (e.g. Ghan et al., 2001). However, these have often involved comparisons for a limited number of parameter combinations based upon certain aerosol regimes. Recent studies (Morales et al., 2014) have used wider ranges when evaluating their parameterisations; however, no study has explored the full multi-dimensional parameter space that would be experienced by droplet activation within a global climate model (GCM). It is important to be able to efficiently highlight regions of the entire multi-dimensional parameter space in which we can expect the largest discrepancy between parameterisation and cloud parcel models, in order to ascertain which regions simulated by a GCM can be expected to be a less accurate representation of the process of cloud droplet activation. This study provides a new, efficient, inverse modelling framework for comparing droplet activation parameterisations to more complex cloud parcel models. To achieve this we couple a Markov Chain Monte Carlo algorithm (Partridge et al., 2012) to two independent adiabatic cloud parcel models and four droplet activation parameterisations. This framework is computationally faster than employing a brute-force Monte Carlo simulation, and allows us to transparently highlight which parameterisation provides the closest representation across all aerosol physiochemical and meteorological environments. The parameterisations are demonstrated to perform well for a large proportion of possible parameter combinations; however, for certain key parameters, most notably the vertical velocity and accumulation mode aerosol concentration, large discrepancies are highlighted. These discrepancies correspond to parameter combinations that result in very high/low simulated values of maximum supersaturation. By identifying parameter interactions or regimes within the multi-dimensional parameter space we hope to guide

  13. Microbial comparative pan-genomics using binomial mixture models

    DEFF Research Database (Denmark)

    Ussery, David; Snipen, L; Almøy, T

    2009-01-01

    The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter...... approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. RESULTS: We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection...... probabilities. Estimated pan-genome sizes range from small (around 2600 gene families) in Buchnera aphidicola to large (around 43000 gene families) in Escherichia coli. Results for Echerichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely...

  14. Comparative study of cost models for tokamak DEMO fusion reactors

    International Nuclear Information System (INIS)

    Oishi, Tetsutarou; Yamazaki, Kozo; Arimoto, Hideki; Ban, Kanae; Kondo, Takuya; Tobita, Kenji; Goto, Takuya

    2012-01-01

    Cost evaluation analysis of the tokamak-type demonstration reactor DEMO using the PEC (physics-engineering-cost) system code is underway to establish a cost evaluation model for the DEMO reactor design. As a reference case, a DEMO reactor based on the SSTR (steady state tokamak reactor) was designed using the PEC code. The calculated total capital cost was of the same order as that proposed previously in cost evaluation studies for the SSTR. Design parameter scanning analysis and multiple regression analysis illustrated the effect of parameters on the total capital cost, which was predicted to lie within the range of several thousand M$ in this study. (author)

  15. Comparative digital cartilage histology for human and common osteoarthritis models

    Directory of Open Access Journals (Sweden)

    Pedersen DR

    2013-02-01

    Douglas R Pedersen, Jessica E Goetz, Gail L Kurriger, James A Martin. Department of Orthopaedics and Rehabilitation, University of Iowa, Iowa City, IA, USA. Purpose: This study addresses the species-specific and site-specific details of weight-bearing articular cartilage zone depths and chondrocyte distributions among humans and common osteoarthritis (OA) animal models using contemporary digital imaging tools. Histological analysis is the gold-standard research tool for evaluating cartilage health, OA severity, and treatment efficacy. Historically, evaluations were made by expert analysts. However, state-of-the-art tools have been developed that allow for digitization of entire histological sections for computer-aided analysis. Large volumes of common digital cartilage metrics directly complement elucidation of trends in OA inducement and concomitant potential treatments. Materials and methods: Sixteen fresh human knees, 26 adult New Zealand rabbit stifles, and 104 bovine lateral plateaus were measured for four cartilage zones and the cell densities within each zone. Each knee was divided into four weight-bearing sites: the medial and lateral plateaus and femoral condyles. Results: One-way analysis of variance followed by pairwise multiple comparisons (Holm–Sidak method) at a significance of 0.05 clearly confirmed the variability between cartilage depths at each site, between sites in the same species, and between weight-bearing articular cartilage definitions in different species. Conclusion: The present study clearly demonstrates multisite, multispecies differences in normal weight-bearing articular cartilage, which can be objectively quantified by a common digital histology imaging technique. The clear site-specific differences in normal cartilage must be taken into consideration when characterizing the pathoetiology of OA models. Together, these provide a path to consistently analyze the volume and variety of histologic slides necessarily generated

  16. Comparing deep learning models for population screening using chest radiography

    Science.gov (United States)

    Sivaramakrishnan, R.; Antani, Sameer; Candemir, Sema; Xue, Zhiyun; Abuya, Joseph; Kohli, Marc; Alderson, Philip; Thoma, George

    2018-02-01

    According to the World Health Organization (WHO), tuberculosis (TB) remains the most deadly infectious disease in the world. In the 2015 global annual TB report, 1.5 million TB-related deaths were reported. Conditions worsened in 2016, with 1.7 million reported deaths and more than 10 million people infected with the disease. Analysis of frontal chest X-rays (CXR) is one of the most popular methods for initial TB screening; however, the method is impacted by the lack of experts for screening chest radiographs. Computer-aided diagnosis (CADx) tools have gained significance because they reduce the human burden in screening and diagnosis, particularly in countries that lack substantial radiology services. State-of-the-art CADx software typically is based on machine learning (ML) approaches that use hand-engineered features, demanding expertise in analyzing the input variances and accounting for changes in the size, background, angle, and position of the region of interest (ROI) on the underlying medical imagery. More automatic deep learning (DL) tools have demonstrated promising results in a wide range of ML applications. Convolutional Neural Networks (CNNs), a class of DL models, have gained research prominence in image classification, detection, and localization tasks because they are highly scalable and deliver superior results with end-to-end feature extraction and classification. In this study, we evaluated the performance of CNN-based DL models for population screening using frontal CXRs. The results demonstrate that pre-trained CNNs are a promising feature-extraction tool for medical imagery, including the automated diagnosis of TB from chest radiographs, but emphasize the importance of large data sets for the most accurate classification.
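
    A minimal transfer-learning sketch of the kind of pre-trained-CNN reuse discussed above, written in PyTorch under the assumption of a two-class (normal vs. TB) head; the study's exact architecture and pipeline are not specified here, and the first call downloads ImageNet weights:

      import torch
      import torch.nn as nn
      import torchvision.models as models

      # Reuse a pre-trained CNN as a frozen feature extractor and train only a
      # new two-class classification head.
      backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
      for p in backbone.parameters():
          p.requires_grad = False                           # freeze pre-trained layers
      backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # trainable TB/normal head

      x = torch.randn(4, 3, 224, 224)                       # dummy batch of CXR crops
      print(backbone(x).shape)                              # torch.Size([4, 2])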

  17. Comparative analysis of Klafki and Heimann's didactic models

    Directory of Open Access Journals (Sweden)

    Bojović Žana P.

    2016-01-01

    A comparative analysis of Klafki's didactic thinking, which is based on an analysis of different kinds of theories on the nature of education, and Heimann's didactics, which is based on the theory of teaching and learning, shows that both deal with teaching in its entirety. Both authors emphasize the role of contents, methods, procedures and resources for material and formal education, and both use anthropological and social reality as their starting point. According to Klafki, resources, procedures, and methods are in a form of dependency, where it is important to know what should be learnt and why, whereas Heimann sees the same elements in a form of interdependency. Each of the didactic conceptions, from its own point of view, defines the position of goals and tasks in education as well as how to achieve them. Determination and formulation of objectives is a complex, responsible, and very difficult task, and a goal must be clearly defined, because from it emanate the guidelines for the preparation of didactic methodology educational programs and their planning. The selection of content in didactic methodology scenarios of education and learning is only possible if the knowledge, skills and abilities that a student needs to develop are explicitly indicated. The question of educational goals is the main problem of didactics, for only a clearly defined objective implicates the selection of appropriate methods and means for its achievement, and it should be a permanent task of the current didactic conception now and in the future.

  18. Characterizing cavities in model inclusion molecules: a comparative study.

    Science.gov (United States)

    Torrens, F; Sánchez-Marín, J; Nebot-Gil, I

    1998-04-01

    We have selected fullerene-60 and -70 cavities as model systems in order to test several methods for characterizing inclusion molecules. The methods are based on different technical foundations such as a square and triangular tessellation of the molecule taken as a unitary sphere, spherical tessellation of the molecular surface, numerical integration of the atomic volumes and surfaces, triangular tessellation of the molecular surface, and a cubic lattice approach to a molecular space. Accurate measures of the molecular volume and surface area have been performed with the pseudo-random Monte Carlo (MCVS) and uniform Monte Carlo (UMCVS) methods. These calculations serve as a reference for the rest of the methods. The SURMO2 and MS methods have not recognized the cavities and may not be convenient for intercalation compounds. The programs that have detected the cavities never exceed 5% deviation relative to the reference values for molecular volume and surface area. The GEPOL algorithm, alone or combined with TOPO, shows results in good agreement with those of the UMCVS reference. The uniform random number generator provides the fastest convergence for UMCVS and a correct estimate of the standard deviations. The effect of the internal cavity on the accessible surfaces has been calculated.

  19. Building v/s Exploring Models: Comparing Learning of Evolutionary Processes through Agent-based Modeling

    Science.gov (United States)

    Wagh, Aditi

    Two strands of work motivate the three studies in this dissertation. Evolutionary change can be viewed as a computational complex system in which a small set of rules operating at the individual level result in different population level outcomes under different conditions. Extensive research has documented students' difficulties with learning about evolutionary change (Rosengren et al., 2012), particularly in terms of levels slippage (Wilensky & Resnick, 1999). Second, though building and using computational models is becoming increasingly common in K-12 science education, we know little about how these two modalities compare. This dissertation adopts agent-based modeling as a representational system to compare these modalities in the conceptual context of micro-evolutionary processes. Drawing on interviews, Study 1 examines middle-school students' productive ways of reasoning about micro-evolutionary processes to find that the specific framing of traits plays a key role in whether slippage explanations are cued. Study 2, which was conducted in 2 schools with about 150 students, forms the crux of the dissertation. It compares learning processes and outcomes when students build their own models or explore a pre-built model. Analysis of Camtasia videos of student pairs reveals that builders' and explorers' ways of accessing rules, and sense-making of observed trends are of a different character. Builders notice rules through available blocks-based primitives, often bypassing their enactment while explorers attend to rules primarily through the enactment. Moreover, builders' sense-making of observed trends is more rule-driven while explorers' is more enactment-driven. Pre and posttests reveal that builders manifest a greater facility with accessing rules, providing explanations manifesting targeted assembly. Explorers use rules to construct explanations manifesting non-targeted assembly. Interviews reveal varying degrees of shifts away from slippage in both

  20. Comparing different stimulus configurations for population receptive field mapping in human fMRI

    Directory of Open Access Journals (Sweden)

    Ivan eAlvarez

    2015-02-01

    Population receptive field (pRF) mapping is a widely used approach to measuring aggregate human visual receptive field properties by recording non-invasive signals using functional MRI. Despite growing interest, no study to date has systematically investigated the effects of different stimulus configurations on pRF estimates from human visual cortex. Here we compared the effects of three different stimulus configurations on a model-based approach to pRF estimation: size-invariant bars and eccentricity-scaled bars defined in Cartesian coordinates and traveling along the cardinal axes, and a novel simultaneous ‘wedge and ring’ stimulus defined in polar coordinates, systematically covering the polar and eccentricity axes. We found that the presence or absence of eccentricity scaling had a significant effect on goodness of fit and pRF size estimates. Further, variability in pRF size estimates was directly influenced by stimulus configuration, particularly for higher visual areas including V5/MT+. Finally, we compared eccentricity estimation between phase-encoded and model-based pRF approaches. We observed a tendency for more peripheral eccentricity estimates using phase-encoded methods, independent of stimulus size. We conclude that both eccentricity scaling and polar rather than Cartesian stimulus configuration are important considerations for optimal experimental design in pRF mapping. While all stimulus configurations produce adequate estimates, simultaneous wedge and ring stimulation produced higher fit reliability, with a significant advantage in reduced acquisition time.

  1. Problems in detecting misfit of latent class models in diagnostic research without a gold standard were shown

    NARCIS (Netherlands)

    van Smeden, M.; Oberski, D.L.; Reitsma, J.B.; Vermunt, J.K.; Moons, K.G.M.; de Groot, J.A.H.

    2016-01-01

    Objectives The objective of this study was to evaluate the performance of goodness-of-fit testing to detect relevant violations of the assumptions underlying the criticized “standard” two-class latent class model. Often used to obtain sensitivity and specificity estimates for diagnostic tests in the

  2. Adjusting the Adjusted χ²/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    Science.gov (United States)

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted χ²/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  3. Estimation and prediction of maximum daily rainfall at Sagar Island using best fit probability models

    Science.gov (United States)

    Mandal, S.; Choudhury, B. U.

    2015-07-01

    Sagar Island, sitting on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) on the island. To select the best-fit distribution models for the annual, seasonal and monthly time series, based on maximum rank with minimum value of the test statistics, three statistical goodness of fit tests were employed: the Kolmogorov-Smirnov test (K-S), the Anderson-Darling test (A²) and the Chi-square test (χ²). The overall best-fit probability distribution was then identified from the highest overall score obtained from the three goodness of fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon and summer season MDR, while the Lognormal, Weibull and Pearson 5 distributions were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of an annual MDR of >50, >100, >150, >200 and >250 mm were estimated at the 99, 85, 40, 12 and 3% levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85%) for MDR of >100 mm and moderate probabilities (37 to 46%) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. On the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
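
    A compact sketch of the distribution-fitting and goodness-of-fit workflow described above, using SciPy; the rainfall series is synthetic, and only the K-S test is shown (note that testing against fitted parameters biases its p-value):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      mdr = rng.lognormal(mean=4.0, sigma=0.4, size=29)  # synthetic annual MDR, mm

      for name, dist in [("normal", stats.norm), ("lognormal", stats.lognorm),
                         ("weibull", stats.weibull_min)]:
          params = dist.fit(mdr)
          ks = stats.kstest(mdr, dist.cdf, args=params)  # fitted params bias p-value
          print(f"{name}: KS = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")

      # Return levels from the best-ranked family (normal in the study's annual series)
      loc, scale = stats.norm.fit(mdr)
      for T in (2, 5, 10, 20, 25):
          print(f"{T}-yr return level: {stats.norm.ppf(1 - 1 / T, loc, scale):.0f} mm")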

  4. Road network safety evaluation using Bayesian hierarchical joint model.

    Science.gov (United States)

    Wang, Jie; Huang, Helai

    2016-05-01

    Safety and efficiency are commonly regarded as two significant performance indicators of transportation systems. In practice, road network planning has focused on road capacity and transport efficiency whereas the safety level of a road network has received little attention in the planning stage. This study develops a Bayesian hierarchical joint model for road network safety evaluation to help planners take traffic safety into account when planning a road network. The proposed model establishes relationships between road network risk and micro-level variables related to road entities and traffic volume, as well as socioeconomic, trip generation and network density variables at macro level which are generally used for long term transportation plans. In addition, network spatial correlation between intersections and their connected road segments is also considered in the model. A road network is elaborately selected in order to compare the proposed hierarchical joint model with a previous joint model and a negative binomial model. According to the results of the model comparison, the hierarchical joint model outperforms the joint model and negative binomial model in terms of the goodness-of-fit and predictive performance, which indicates the reasonableness of considering the hierarchical data structure in crash prediction and analysis. Moreover, both random effects at the TAZ level and the spatial correlation between intersections and their adjacent segments are found to be significant, supporting the employment of the hierarchical joint model as an alternative in road-network-level safety modeling as well. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Evaluation of some infiltration models and hydraulic parameters

    International Nuclear Information System (INIS)

    Haghighi, F.; Gorji, M.; Shorafa, M.; Sarmadian, F.; Mohammadi, M. H.

    2010-01-01

    The evaluation of infiltration characteristics and of some parameters of infiltration models, such as sorptivity and the final steady infiltration rate, is important in agriculture. The aim of this study was to evaluate some of the most common models used to estimate the final soil infiltration rate. The equality of the final infiltration rate with the saturated hydraulic conductivity (Ks) was also tested. Moreover, values of the sorptivity estimated from the Philip model were compared to estimates by selected pedotransfer functions (PTFs). The infiltration experiments used the double-ring method on soils with two different land uses in the Taleghan watershed of Tehran province, Iran, from September to October, 2007. The infiltration models of Kostiakov-Lewis, Philip two-term and Horton were fitted to the observed infiltration data. Some parameters of the models and the coefficient of determination (goodness of fit) were estimated using MATLAB software. The results showed that, based on comparing measured and model-estimated infiltration rates using the root mean squared error (RMSE), Horton's model gave the best prediction of the final infiltration rate in the experimental area. Laboratory-measured Ks values were significantly higher than the final infiltration rates estimated from the selected models, so the estimated final infiltration rate was not equal to the laboratory-measured Ks values in the study area. Moreover, the sorptivity factor estimated by the Philip model was significantly different from those estimated by the selected PTFs. It is suggested that the applicability of PTFs is limited to specific, similar conditions. (Author) 37 refs.
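
    Horton's equation, which the study found to predict the final infiltration rate best, can be fitted to double-ring readings with a standard nonlinear least-squares call; the readings below are invented for illustration and the study used MATLAB rather than Python:

      import numpy as np
      from scipy.optimize import curve_fit

      def horton(t, f0, fc, k):
          # Infiltration rate decays exponentially from f0 to the final rate fc.
          return fc + (f0 - fc) * np.exp(-k * t)

      # Invented double-ring readings: elapsed time (min), infiltration rate (cm/h)
      t = np.array([1, 5, 10, 20, 30, 60, 90, 120], dtype=float)
      f = np.array([9.8, 7.1, 5.6, 4.0, 3.3, 2.6, 2.4, 2.3])

      params, _ = curve_fit(horton, t, f, p0=[10.0, 2.0, 0.1])
      rmse = np.sqrt(np.mean((f - horton(t, *params)) ** 2))
      print("f0, fc, k =", params.round(3), " RMSE =", round(rmse, 3))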

  6. On selection of optimal stochastic model for accelerated life testing

    International Nuclear Information System (INIS)

    Volf, P.; Timková, J.

    2014-01-01

    This paper deals with the problem of proper lifetime model selection in the context of statistical reliability analysis. Namely, we consider regression models describing the dependence of failure intensities on a covariate, for instance, a stressor. Testing the model fit is standardly based on the so-called martingale residuals. Their analysis has already been studied by many authors. Nevertheless, the Bayes approach to the problem, in spite of its advantages, is just developing. We shall present the Bayes procedure of estimation in several semi-parametric regression models of failure intensity. Then, our main concern is the Bayes construction of residual processes and goodness-of-fit tests based on them. The method is illustrated with both artificial and real-data examples. - Highlights: • Statistical survival and reliability analysis and Bayes approach. • Bayes semi-parametric regression modeling in Cox's and AFT models. • Bayes version of martingale residuals and goodness-of-fit test

  7. A comparative analysis of several vehicle emission models for road freight transportation

    NARCIS (Netherlands)

    Demir, E.; Bektas, T.; Laporte, G.

    2011-01-01

    Reducing greenhouse gas emissions in freight transportation requires using appropriate emission models in the planning process. This paper reviews and numerically compares several available freight transportation vehicle emission models and also considers their outputs in relation to field studies.

  8. Comparative analysis of methods and tools for open and closed fuel cycles modeling: MESSAGE and DESAE

    International Nuclear Information System (INIS)

    Andrianov, A.A.; Korovin, Yu.A.; Murogov, V.M.; Fedorova, E.V.; Fesenko, G.A.

    2006-01-01

    A comparative analysis of optimization and simulation methods, using the MESSAGE and DESAE programs as examples, is carried out for modeling nuclear power prospects and advanced fuel cycles. Test calculations for an open and a two-component nuclear power system with a closed fuel cycle are performed. An auxiliary simulation-dynamic model is developed to pinpoint the differences between the MESSAGE and DESAE modeling approaches. A description of the model is given.

  9. Comparing of four IRT models when analyzing two tests for inductive reasoning

    NARCIS (Netherlands)

    de Koning, E.; Sijtsma, K.; Hamers, J.H.M.

    2002-01-01

    This article discusses the use of the nonparametric IRT Mokken models of monotone homogeneity and double monotonicity and the parametric Rasch and Verhelst models for the analysis of binary test data. First, the four IRT models are discussed and compared at the theoretical level, and for each model,

  10. Jackson System Development, Entity-relationship Analysis and Data Flow Models: a comparative study

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1994-01-01

    This report compares JSD with ER modeling and data flow modeling. It is shown that JSD can be combined with ER modeling and that the result is a richer method than either of the two. The resulting method can serve as a basis for a practical object-oriented modeling method and has some resemblance to

  11. A Comparative Study of Neural Networks and Fuzzy Systems in Modeling of a Nonlinear Dynamic System

    Directory of Open Access Journals (Sweden)

    Metin Demirtas

    2011-07-01

    The aim of this paper is to compare neural network and fuzzy modeling approaches on a nonlinear system. We have taken Permanent Magnet Brushless Direct Current (PMBDC) motor data and have generated models using both approaches. The predictive performance of both methods was compared on the data set for several model configurations. The paper describes the results of these tests and discusses the effects of changing model parameters on predictive and practical performance. Modeling sensitivity was also used to compare the two methods.

  12. Comparing Apples to Apples: Paleoclimate Model-Data comparison via Proxy System Modeling

    Science.gov (United States)

    Dee, Sylvia; Emile-Geay, Julien; Evans, Michael; Noone, David

    2014-05-01

    The wealth of paleodata spanning the last millennium (hereinafter LM) provides an invaluable testbed for CMIP5-class GCMs. However, comparing GCM output to paleodata is non-trivial. High-resolution paleoclimate proxies generally contain a multivariate and non-linear response to regional climate forcing. Disentangling the multivariate environmental influences on proxies like corals, speleothems, and trees can be complex due to spatiotemporal climate variability, non-stationarity, and threshold dependence. Given these and other complications, many paleodata-GCM comparisons take a leap of faith, relating climate fields (e.g. precipitation, temperature) to geochemical signals in proxy data (e.g. δ18O in coral aragonite or ice cores) (e.g. Braconnot et al., 2012). Isotope-enabled GCMs are a step in the right direction, with water isotopes providing a connector point between GCMs and paleodata. However, such studies are still rare, and isotope fields are not archived as part of LM PMIP3 simulations. More importantly, much of the complexity in how proxy systems record and transduce environmental signals remains unaccounted for. In this study we use proxy system models (PSMs, Evans et al., 2013) to bridge this conceptual gap. A PSM mathematically encodes the mechanistic understanding of the physical, geochemical and, sometimes biological influences on each proxy. To translate GCM output to proxy space, we have synthesized a comprehensive, consistently formatted package of published PSMs, including δ18O in corals, tree ring cellulose, speleothems, and ice cores. Each PSM is comprised of three sub-models: sensor, archive, and observation. For the first time, these different components are coupled together for four major proxy types, allowing uncertainties due to both dating and signal interpretation to be treated within a self-consistent framework. The output of this process is an ensemble of many (say N = 1,000) realizations of the proxy network, all equally plausible

  13. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    Science.gov (United States)

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
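
    As an illustration of fitting one of the candidate curves, the sketch below fits Wood's incomplete gamma function to a synthetic monthly FPR series and scores it with AIC; the data, starting values and the least-squares fit (rather than the study's SAS PROC NLMIXED mixed-model fit) are assumptions for illustration:

      import numpy as np
      from scipy.optimize import curve_fit

      def wood(t, a, b, c):
          # Wood's incomplete gamma curve; for FPR, b < 0 and c < 0 give the
          # mid-lactation minimum typical of fat-to-protein ratio profiles.
          return a * t**b * np.exp(-c * t)

      t = np.arange(1, 11, dtype=float)                  # month of lactation
      y = np.array([1.18, 1.10, 1.05, 1.03, 1.02, 1.02,  # synthetic monthly FPR
                    1.03, 1.05, 1.07, 1.10])

      params, _ = curve_fit(wood, t, y, p0=[1.2, -0.1, -0.02])
      rss = np.sum((y - wood(t, *params)) ** 2)
      n, k = len(y), 3
      aic = n * np.log(rss / n) + 2 * k                  # compare candidate curves
      print("a, b, c =", params.round(4), " AIC =", round(aic, 2))
      print("fitted FPR minimum at month", round(params[1] / params[2], 1))  # t = b/c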

  14. Identifying nonproportional covariates in the Cox model

    Czech Academy of Sciences Publication Activity Database

    Kraus, David

    2008-01-01

    Roč. 37, č. 4 (2008), s. 617-625 ISSN 0361-0926 R&D Projects: GA AV ČR(CZ) IAA101120604; GA MŠk(CZ) 1M06047; GA ČR(CZ) GD201/05/H007 Institutional research plan: CEZ:AV0Z10750506 Keywords : Cox model * goodness of fit * proportional hazards assumption * time-varying coefficients Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.324, year: 2008

  15. Comparing i-Tree modeled ozone deposition with field measurements in a periurban Mediterranean forest

    Science.gov (United States)

    A. Morani; D. Nowak; S. Hirabayashi; G. Guidolotti; M. Medori; V. Muzzini; S. Fares; G. Scarascia Mugnozza; C. Calfapietra

    2014-01-01

    Ozone flux estimates from the i-Tree model were compared with ozone flux measurements using the Eddy Covariance technique in a periurban Mediterranean forest near Rome (Castelporziano). For the first time i-Tree model outputs were compared with field measurements in relation to dry deposition estimates. Results showed generally a...

  16. Comparing the Applicability of Commonly Used Hydrological Ecosystem Services Models for Integrated Decision-Support

    Directory of Open Access Journals (Sweden)

    Anna Lüke

    2018-01-01

    Different simulation models are used in science and practice to incorporate hydrological ecosystem services in decision-making processes. This contribution compares three simulation models: the Soil and Water Assessment Tool, a traditional hydrological model, and two ecosystem services models, the Integrated Valuation of Ecosystem Services and Trade-offs model and the Resource Investment Optimization System model. The three models are compared on a theoretical and conceptual basis as well as in a comparative case study application. The application of the models to a study area in Nicaragua reveals that there is generally a practical benefit to applying these models for different questions in decision-making. However, modelling of hydrological ecosystem services is associated with a high application effort and requires input data that may not always be available. The degree of detail in the temporal and spatial variability of ecosystem service provision is higher when using the Soil and Water Assessment Tool compared to the two ecosystem service models. In contrast, the ecosystem service models have lower requirements on input data and process knowledge. A relationship between service provision and beneficiaries is readily produced and can be visualized as a model output. The visualization is especially useful in a practical decision-making context.

  17. Comparative evaluation of life cycle assessment models for solid waste management

    International Nuclear Information System (INIS)

    Winkler, Joerg; Bilitewski, Bernd

    2007-01-01

    This publication compares a selection of six different models developed in Europe and America by research organisations, industry associations and governmental institutions. The comparison of the models reveals the variations in the results and the differences in the conclusions of an LCA study done with these models. The models are compared by modelling a specific case - the waste management system of Dresden, Germany - with each model and by a detailed comparison of the life cycle inventory results. Moreover, a life cycle impact assessment shows whether the LCA results of each model allow for comparable and consistent conclusions that do not contradict the conclusions derived from the other models' results. Furthermore, the influence of different levels of detail in the life cycle inventory on the life cycle assessment is demonstrated. The model comparison revealed that the variations in the LCA results calculated by the models for the case are high and not negligible. In some cases the high variations in results lead to contradictory conclusions concerning the environmental performance of the waste management processes. The static, linear modelling approach chosen by all models analysed is inappropriate for reflecting actual conditions. Moreover, it was found that although the models' approach to LCA is comparable on a general level, the level of detail implemented in the software tools is very different.

  18. Hydrochemical analysis of groundwater using a tree-based model

    Science.gov (United States)

    Litaor, M. Iggy; Brielmann, H.; Reichmann, O.; Shenker, M.

    2010-06-01

    Hydrochemical indices are commonly used to ascertain aquifer characteristics, salinity problems, anthropogenic inputs and resource management, among others. This study was conducted to test the applicability of a binary decision tree model to aquifer evaluation using hydrochemical indices as input. The main advantage of the tree-based model compared to other commonly used statistical procedures, such as cluster and factor analyses, is the ability to classify groundwater samples with an assigned probability and to reduce a large data set into a few significant variables without creating new factors. We tested the model using data sets collected from headwater springs of the Jordan River, Israel. The model evaluation consisted of several levels of complexity, from simple separation between the calcium-magnesium-bicarbonate water type of karstic aquifers to the more challenging separation of calcium-sodium-bicarbonate water types flowing through perched and regional basaltic aquifers. In all cases, the model assigned measures of goodness of fit in the form of misclassification errors and singled out the most significant variable in the analysis. The model proceeded through a sequence of partitions, providing insight into different possible pathways and changing lithology. The model results were extremely useful in constraining the interpretation of geological heterogeneity and constructing a conceptual flow model for a given aquifer. The tree model clearly identified the hydrochemical indices that were excluded from the analysis, thus providing information that can lead to a decrease in the number of routinely analyzed variables and a significant reduction in laboratory cost.
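
    A minimal sketch of a binary decision tree on hydrochemical indices, in the spirit of the model described above; the index names, water-type labels and data are hypothetical, and scikit-learn's CART may differ from the authors' implementation:

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(3)
      names = ["Ca/Mg", "Na/Cl", "HCO3/SiO2"]            # invented index names
      X = rng.normal(size=(120, 3))                      # invented spring samples
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # 0 = karstic, 1 = basaltic

      tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
      print(export_text(tree, feature_names=names))      # the sequence of partitions
      print("misclassification error:", round(1 - tree.score(X, y), 3))
      print("assigned probabilities, first sample:", tree.predict_proba(X[:1])[0])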

  19. Local fit evaluation of structural equation models using graphical criteria.

    Science.gov (United States)

    Thoemmes, Felix; Rosseel, Yves; Textor, Johannes

    2018-03-01

    Evaluation of model fit is critically important for every structural equation model (SEM), and sophisticated methods have been developed for this task. Among them are the χ² goodness-of-fit test, decomposition of the χ², derived measures like the popular root mean square error of approximation (RMSEA) or comparative fit index (CFI), or inspection of residuals or modification indices. Many of these methods provide a global approach to model fit evaluation: A single index is computed that quantifies the fit of the entire SEM to the data. In contrast, graphical criteria like d-separation or trek-separation allow derivation of implications that can be used for local fit evaluation, an approach that is hardly ever applied. We provide an overview of local fit evaluation from the viewpoint of SEM practitioners. In the presence of model misfit, local fit evaluation can potentially help in pinpointing where the problem with the model lies. For models that do fit the data, local tests can identify the parts of the model that are corroborated by the data. Local tests can also be conducted before a model is fitted at all, and they can be used even for models that are globally underidentified. We discuss appropriate statistical local tests, and provide applied examples. We also present novel software in R that automates this type of local fit evaluation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  20. A comparative study of two fast nonlinear free-surface water wave models

    DEFF Research Database (Denmark)

    Ducrozet, Guillaume; Bingham, Harry B.; Engsig-Karup, Allan Peter

    2012-01-01

    simply directly solves the three-dimensional problem. Both models have been well validated on standard test cases and shown to exhibit attractive convergence properties and an optimal scaling of the computational effort with increasing problem size. These two models are compared for solution of a typical...... used in OceanWave3D, the closer the results come to the HOS model....

  1. A comparative study of two phenomenological models of dephasing in series and parallel resistors

    International Nuclear Information System (INIS)

    Bandopadhyay, Swarnali; Chaudhuri, Debasish; Jayannavar, Arun M.

    2010-01-01

    We compare two recent phenomenological models of dephasing using a double barrier and a quantum ring geometry. While the stochastic absorption model generates controlled dephasing leading to Ohm's law for large dephasing strengths, a Gaussian random phase based statistical model shows many inconsistencies.

  2. Comparing Regression Coefficients between Nested Linear Models for Clustered Data with Generalized Estimating Equations

    Science.gov (United States)

    Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer

    2013-01-01

    Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…

  3. Comparing supply-side specifications in models of global agriculture and the food system

    NARCIS (Netherlands)

    Robinson, S.; Meijl, van J.C.M.; Willenbockel, D.; Valin, H.; Fujimori, S.; Masui, T.; Sands, R.; Wise, M.; Calvin, K.V.; Mason d'Croz, D.; Tabeau, A.A.; Kavallari, A.; Schmitz, C.; Dietrich, J.P.; Lampe, von M.

    2014-01-01

    This article compares the theoretical and functional specification of production in partial equilibrium (PE) and computable general equilibrium (CGE) models of the global agricultural and food system included in the AgMIP model comparison study. The two model families differ in their scope—partial

  4. Comparative Analysis of Smart Meters Deployment Business Models on the Example of the Russian Federation Markets

    Science.gov (United States)

    Daminov, Ildar; Tarasova, Ekaterina; Andreeva, Tatyana; Avazov, Artur

    2016-02-01

    This paper presents a comparison of smart meter deployment business models to determine the most suitable option for smart meter deployment. The authors consider three main business models, based on the type of company involved: the distribution grid company, the energy supplier (energosbyt) and the metering company. The goal of the article is to compare these business models for a massive smart metering roll-out in the power system of the Russian Federation.

  5. Prospective comparative effectiveness cohort study comparing two models of advance care planning provision for Australian community aged care clients.

    Science.gov (United States)

    Detering, Karen Margaret; Carter, Rachel Zoe; Sellars, Marcus William; Lewis, Virginia; Sutton, Elizabeth Anne

    2017-12-01

    The objective was to conduct a prospective comparative effectiveness cohort study comparing two models of advance care planning (ACP) provision in community aged care over a 6-month period: ACP conducted by the client's case manager (CM) ('Facilitator') and ACP conducted by an external ACP service ('Referral'). This Australian study involved CMs and their clients. Eligible CMs were English speaking, ≥18 years, had expected availability for the trial and worked ≥3 days per week. CMs were recruited via their organisations, sequentially allocated to a group and received education based on the group allocation. They were expected to initiate ACP with all clients and to facilitate ACP or refer for ACP. Outcomes were the quantity of new ACP conversations and the quantity and quality of new advance care directives (ACDs). 30 CMs (16 Facilitator, 14 Referral) completed the study; all 784 clients' files (427 Facilitator, 357 Referral) were audited. ACP was initiated with 508 (65%) clients (293 Facilitator, 215 Referral; p<0.05); 89 (18%) of these (53 Facilitator, 36 Referral) and 41 (46%) (13 Facilitator, 28 Referral; p<0.005) completed ACDs. Most ACDs (71%) were of poor quality/not valid. A further 167 clients (Facilitator 124, Referral 43; p<0.005) reported ACP was in progress at study completion. While there were some differences, overall the models achieved similar outcomes. ACP was initiated with 65% of clients. However, fewer clients completed ACP, the number of ACDs was low, and document quality was generally poor. The findings raise questions for future implementation and research into community ACP provision.

  6. An integrated modeling framework of socio-economic, biophysical, and hydrological processes in Midwest landscapes: Remote sensing data, agro-hydrological model, and agent-based model

    Science.gov (United States)

    Ding, Deng

    Intensive human-environment interactions are taking place in Midwestern agricultural systems. An integrated modeling framework is suitable for predicting dynamics of key variables of the socio-economic, biophysical, and hydrological processes as well as exploring the potential transitions of system states in response to changes of the driving factors. The purpose of this dissertation is to address issues concerning the interacting processes and consequent changes in land use, water balance, and water quality using an integrated modeling framework. This dissertation is composed of three studies in the same agricultural watershed, the Clear Creek watershed in East-Central Iowa. In the first study, a parsimonious hydrologic model, the Threshold-Exceedance-Lagrangian Model (TELM), is further developed into RS-TELM (Remote Sensing TELM) to integrate remote sensing vegetation data for estimating evapotranspiration. The goodness of fit of RS-TELM is comparable to that of a well-calibrated SWAT (Soil and Water Assessment Tool) and even slightly superior in capturing the intra-seasonal variability of stream flow. The integration of RS LAI (Leaf Area Index) data improves the model's performance, especially over agriculture-dominated landscapes. The input of rainfall datasets with spatially explicit information plays a critical role in increasing the model's goodness of fit. In the second study, an agent-based model is developed to simulate farmers' decisions on crop type and fertilizer application in response to commodity and biofuel crop prices. The comparison of the simulated cropland percentages and crop rotations with satellite-based land cover data suggests that farmers may be underestimating the effects that continuous corn production has on yields (yield drag). The simulation results given alternative market scenarios, based on a survey of agricultural land owners and operators in the Clear Creek Watershed, show that farmers see cellulosic biofuel feedstock production in the form

  7. Evaluating the relationship between job stress and job satisfaction among female hospital nurses in Babol: An application of structural equation modeling

    Directory of Open Access Journals (Sweden)

    Majid Bagheri Hosseinabadi

    2018-04-01

    Full Text Available Background: This study was designed to investigate job satisfaction and its relation to perceived job stress among hospital nurses in Babol County, Iran. Methods: This cross-sectional study was conducted on 406 female nurses in 6 Babol hospitals. Respondents completed the Minnesota Satisfaction Questionnaire (MSQ), the Health and Safety Executive (HSE) indicator tool and a demographic questionnaire. Descriptive, analytical and structural equation modeling (SEM) analyses were carried out applying SPSS v. 22 and AMOS v. 22. Results: The Normed Fit Index (NFI), Non-normed Fit Index (NNFI), Incremental Fit Index (IFI) and Comparative Fit Index (CFI) were greater than 0.9. Also, the goodness of fit index (GFI=0.99) and adjusted goodness of fit index (AGFI) were greater than 0.8, and the root mean square error of approximation (RMSEA) was 0.04. The model was found to have an appropriate fit. The R-squared was 0.42 for job satisfaction, and all its dimensions were related to job stress. The dimensions of job stress explained 42% of the variance of job satisfaction. There was a significant relationship between the dimensions of job stress such as demand (β=0.173, CI=0.095 to 0.365, P≤0.001), control (β=0.135, CI=0.062 to 0.404, P=0.008), relationships (β=-0.208, CI=-0.637 to -0.209, P≤0.001) and changes (β=0.247, CI=0.360 to 1.026, P≤0.001) with job satisfaction. Conclusion: One of the important interventions to increase job satisfaction among nurses may be improvement of the workplace. Reducing the level of workload in order to improve job demand and minimizing role conflict through reducing conflicting demands are recommended.

  8. Evaluating the relationship between job stress and job satisfaction among female hospital nurses in Babol: An application of structural equation modeling.

    Science.gov (United States)

    Bagheri Hosseinabadi, Majid; Etemadinezhad, Siavash; Khanjani, Narges; Ahmadi, Omran; Gholinia, Hemat; Galeshi, Mina; Samaei, Seyed Ehsan

    2018-01-01

    Background: This study was designed to investigate job satisfaction and its relation to perceived job stress among hospital nurses in Babol County, Iran. Methods: This cross-sectional study was conducted on 406 female nurses in 6 Babol hospitals. Respondents completed the Minnesota Satisfaction Questionnaire (MSQ), the Health and Safety Executive (HSE) indicator tool and a demographic questionnaire. Descriptive, analytical and structural equation modeling (SEM) analyses were carried out applying SPSS v. 22 and AMOS v. 22. Results: The Normed Fit Index (NFI), Non-normed Fit Index (NNFI), Incremental Fit Index (IFI) and Comparative Fit Index (CFI) were greater than 0.9. Also, the goodness of fit index (GFI=0.99) and adjusted goodness of fit index (AGFI) were greater than 0.8, and the root mean square error of approximation (RMSEA) was 0.04. The model was found to have an appropriate fit. The R-squared was 0.42 for job satisfaction, and all its dimensions were related to job stress. The dimensions of job stress explained 42% of the variance of job satisfaction. There was a significant relationship between the dimensions of job stress such as demand (β=0.173, CI=0.095 to 0.365, P≤0.001), control (β=0.135, CI=0.062 to 0.404, P=0.008), relationships (β=-0.208, CI=-0.637 to -0.209, P≤0.001) and changes (β=0.247, CI=0.360 to 1.026, P≤0.001) with job satisfaction. Conclusion: One of the important interventions to increase job satisfaction among nurses may be improvement of the workplace. Reducing the level of workload in order to improve job demand and minimizing role conflict through reducing conflicting demands are recommended.
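
    For reference, the RMSEA and CFI reported above are simple functions of the model and baseline chi-square statistics. A minimal sketch using the standard textbook formulas (note that some programs use N rather than N-1 in the RMSEA denominator; the chi-square values below are invented, since the study does not report them):

        import math

        def rmsea(chi2, df, n):
            """Root mean square error of approximation."""
            return math.sqrt(max((chi2 - df) / (df * (n - 1)), 0.0))

        def cfi(chi2_m, df_m, chi2_b, df_b):
            """Comparative fit index relative to the baseline (independence) model."""
            d_m = max(chi2_m - df_m, 0.0)
            d_b = max(chi2_b - df_b, d_m)  # guard so CFI never exceeds 1
            return 1.0 - d_m / d_b if d_b > 0 else 1.0

        # Hypothetical chi-square statistics for a sample of n = 406 nurses.
        print(round(rmsea(chi2=120.3, df=74, n=406), 3))   # ~0.04 indicates close fit
        print(round(cfi(120.3, 74, 1450.0, 91), 3))        # > 0.9 indicates acceptable fit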

  9. Case management: a randomized controlled study comparing a neighborhood team and a centralized individual model.

    OpenAIRE

    Eggert, G M; Zimmer, J G; Hall, W J; Friedman, B

    1991-01-01

    This randomized controlled study compared two types of case management for skilled nursing level patients living at home: the centralized individual model and the neighborhood team model. The team model differed from the individual model in that team case managers performed client assessments, care planning, some direct services, and reassessments; they also had much smaller caseloads and were assigned a specific catchment area. While patients in both groups incurred very high estimated healt...

  10. Exploration of freely available web-interfaces for comparative homology modelling of microbial proteins.

    Science.gov (United States)

    Nema, Vijay; Pal, Sudhir Kumar

    2013-01-01

    This study was conducted to find the best suited freely available software for modelling of proteins, using a few sample proteins. The proteins used ranged from small to big in size, with available crystal structures for the purpose of benchmarking. Key players like Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-v2 and Modweb were used for the comparison and model generation. The benchmarking process was done for four proteins (Icl, InhA and KatG of Mycobacterium tuberculosis, and RpoB of Thermus thermophilus) to find the most suited software. Parameters compared during the analysis gave relatively better values for Phyre2 and Swiss-Model. This comparative study showed that Phyre2 and Swiss-Model make good models of small and large proteins compared to the other screened software. The other software was also good but is often not very efficient in providing full-length and properly folded structures.

  11. A Propagation Environment Modeling in Foliage

    Directory of Open Access Journals (Sweden)

    Samn, Sherwood W.

    2010-01-01

    Full Text Available Foliage clutter, which can be very large and mask targets in backscattered signals, is a crucial factor that degrades the performance of target detection, tracking, and recognition. Previous literature has intensively investigated land clutter and sea clutter, whereas foliage clutter is still an open research area. In this paper, we propose that foliage clutter is more accurately described by a log-logistic model. On the basis of data collected by ultra-wideband (UWB) radars, we analyze two different datasets by means of maximum likelihood (ML) parameter estimation as well as root mean square error (RMSE) performance. We not only investigate the log-logistic model, but also compare it with other popular clutter models, namely log-normal, Weibull, and Nakagami. The results show that the log-logistic model achieves the smallest standard deviation (STD) error in parameter estimation, as well as the best goodness of fit and the smallest RMSE, for both poor and good foliage clutter signals.
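
    This kind of model comparison can be reproduced in outline with scipy, where the log-logistic distribution is available as stats.fisk. The sketch below uses synthetic amplitudes in place of the authors' UWB radar measurements and compares ML fits by a Kolmogorov-Smirnov statistic rather than the paper's exact STD/RMSE criteria:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Hypothetical stand-in for UWB foliage-clutter amplitudes (the paper
        # uses measured radar data, which is not reproduced here).
        clutter = stats.fisk.rvs(c=2.5, scale=1.0, size=5000, random_state=rng)

        candidates = {
            "log-logistic": stats.fisk,
            "log-normal": stats.lognorm,
            "Weibull": stats.weibull_min,
            "Nakagami": stats.nakagami,
        }
        for name, dist in candidates.items():
            params = dist.fit(clutter, floc=0)  # ML estimation, location fixed at 0
            ks = stats.kstest(clutter, dist.name, args=params)
            print(f"{name:12s} KS = {ks.statistic:.4f}")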

  12. The importance of regional models in assessing canine cancer incidences in Switzerland.

    Science.gov (United States)

    Boo, Gianluca; Leyk, Stefan; Brunsdon, Christopher; Graf, Ramona; Pospischil, Andreas; Fabrikant, Sara Irina

    2018-01-01

    Fitting canine cancer incidences through a conventional regression model assumes constant statistical relationships across the study area in estimating the model coefficients. However, it is often more realistic to consider that these relationships may vary over space. Such a condition, known as spatial non-stationarity, implies that the model coefficients need to be estimated locally. In these kinds of local models, the geographic scale, or spatial extent, employed for coefficient estimation may also have a pervasive influence. This is because important variations in the local model coefficients across geographic scales may impact the understanding of local relationships. In this study, we fitted canine cancer incidences across Swiss municipal units through multiple regional models. We computed diagnostic summaries across the different regional models, and contrasted them with the diagnostics of the conventional regression model, using value-by-alpha maps and scalograms. The results of this comparative assessment enabled us to identify variations in the goodness-of-fit and coefficient estimates. We detected spatially non-stationary relationships, in particular, for the variables related to biological risk factors. These variations in the model coefficients were more important at small geographic scales, making a case for the need to model canine cancer incidences locally in contrast to more conventional global approaches. However, we contend that prior to undertaking local modeling efforts, a deeper understanding of the effects of geographic scale is needed to better characterize and identify local model relationships.

  13. Comparative analysis of modified PMV models and SET models to predict human thermal sensation in naturally ventilated buildings

    DEFF Research Database (Denmark)

    Gao, Jie; Wang, Yi; Wargocki, Pawel

    2015-01-01

    In this paper, a comparative analysis was performed on the human thermal sensation estimated by modified predicted mean vote (PMV) models and modified standard effective temperature (SET) models in naturally ventilated buildings; the data were collected in a field study. These prediction models were....../s, the expectancy factors for the extended PMV model and the extended SET model were from 0.770 to 0.974 and from 1.330 to 1.363, and the adaptive coefficients for the adaptive PMV model and the adaptive SET model were from 0.029 to 0.167 and from -0.213 to -0.195. In addition, the difference in thermal sensation...... between the measured and predicted values using the modified PMV models exceeded 25%, while the difference between the measured thermal sensation and the predicted thermal sensation using modified SET models was approximately less than 25%. It is concluded that the modified SET models can predict human...

  14. Comparative studies on constitutive models for cohesive interface cracks of quasi-brittle materials

    International Nuclear Information System (INIS)

    Shen Xinpu; Shen Guoxiao; Zhou Lin

    2005-01-01

    In this paper, concerning the modelling of the quasi-brittle fracture process zone at interface cracks of quasi-brittle materials and structures, typical constitutive models of interface cracks were compared. Numerical calculations of the constitutive behaviours of the selected models were carried out at the local level. Aiming at the simulation of quasi-brittle fracture of concrete-like materials and structures, the qualitative comparisons of the selected cohesive models focus on: (1) the fundamental mode I and mode II behaviours of the selected models; (2) the dilatancy properties of the selected models under mixed-mode fracture loading conditions. (authors)

  15. Assessment and Challenges of Ligand Docking into Comparative Models of G-Protein Coupled Receptors

    DEFF Research Database (Denmark)

    Nguyen, E.D.; Meiler, J.; Norn, C.

    2013-01-01

    screening and to design and optimize drug candidates. However, low sequence identity between receptors, conformational flexibility, and chemical diversity of ligands present an enormous challenge to molecular modeling approaches. It is our hypothesis that rapid Monte-Carlo sampling of protein backbone...... extracellular loop. Furthermore, these models are consistently correlated with low Rosetta energy score. To predict their binding modes, ligand conformers of the 14 ligands co-crystalized with the GPCRs were docked against the top ranked comparative models. In contrast to the comparative models themselves...

  16. Modeling Mixed Bicycle Traffic Flow: A Comparative Study on the Cellular Automata Approach

    Directory of Open Access Journals (Sweden)

    Dan Zhou

    2015-01-01

    Full Text Available Simulation, as a powerful tool for evaluating transportation systems, has been widely used in transportation planning, management, and operations. Most of the simulation models are focused on motorized vehicles, and the modeling of nonmotorized vehicles is ignored. The cellular automata (CA) model is a very important simulation approach and is widely used for motorized vehicle traffic. The Nagel-Schreckenberg (NS) CA model and the multivalue CA (M-CA) model are two categories of CA model that have been used in previous studies on bicycle traffic flow. This paper improves on these two CA models and also compares their characteristics. It introduces a two-lane NS CA model and M-CA model for both regular bicycles (RBs) and electric bicycles (EBs). In the research for this paper, many cases, featuring different values for the slowing-down probability, lane-changing probability, and proportion of EBs, were simulated, while the fundamental diagrams and capacities of the proposed models were analyzed and compared between the two models. Field data were collected for the evaluation of the two models. The results show that the M-CA model exhibits more stable performance than the two-lane NS model and provides results that are closer to real bicycle traffic.
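
    The core of an NS-type CA update is a four-step rule: accelerate, slow to the gap ahead, randomize, move. A minimal single-lane sketch on a ring road follows; it omits the paper's two-lane lane-changing logic, and the lattice size, v_max values for RBs versus EBs, and slowing probability are illustrative only:

        import numpy as np

        def ns_step(pos, vel, v_max, p_slow, L, rng):
            """One parallel update of a single-lane Nagel-Schreckenberg ring of L cells."""
            order = np.argsort(pos)
            pos, vel = pos[order], vel[order]
            n = len(pos)
            gaps = (pos[(np.arange(n) + 1) % n] - pos - 1) % L  # free cells ahead
            vel = np.minimum(vel + 1, v_max)                    # 1. acceleration
            vel = np.minimum(vel, gaps)                         # 2. slow to avoid collision
            vel[(rng.random(n) < p_slow) & (vel > 0)] -= 1      # 3. random slowdown
            pos = (pos + vel) % L                               # 4. movement
            return pos, vel

        rng = np.random.default_rng(0)
        L, n_bikes = 200, 60
        pos = rng.choice(L, size=n_bikes, replace=False)
        vel = np.zeros(n_bikes, dtype=int)
        for _ in range(1000):
            # e.g. v_max=2 for RBs; a separate run with v_max=3 would stand in for EBs
            pos, vel = ns_step(pos, vel, v_max=2, p_slow=0.3, L=L, rng=rng)
        print("mean speed:", vel.mean())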

  17. Functional dynamic factor models with application to yield curve forecasting

    KAUST Repository

    Hays, Spencer

    2012-09-01

    Accurate forecasting of zero coupon bond yields for a continuum of maturities is paramount to bond portfolio management and derivative security pricing. Yet a universal model for yield curve forecasting has been elusive, and prior attempts often resulted in a trade-off between goodness of fit and consistency with economic theory. To address this, herein we propose a novel formulation which connects the dynamic factor model (DFM) framework with concepts from functional data analysis: a DFM with functional factor loading curves. This results in a model capable of forecasting functional time series. Further, in the yield curve context we show that the model retains economic interpretation. Model estimation is achieved through an expectation-maximization algorithm, where the time series parameters and factor loading curves are simultaneously estimated in a single step. Efficient computing is implemented and a data-driven smoothing parameter is nicely incorporated. We show that our model performs very well on forecasting actual yield data compared with existing approaches, especially in regard to profit-based assessment for an innovative trading exercise. We further illustrate the viability of our model to applications outside of yield forecasting.

  18. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
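
    A stripped-down version of the idea can be sketched as follows: fit a deliberately misspecified linear model, form the cumulative sum of residuals ordered by the covariate, and compare its supremum with sign-flipped resamples. This simple resampling ignores the parameter-estimation correction term of the authors' Gaussian multiplier process, so it is only a stand-in for their method:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 400
        x = rng.uniform(0, 2, n)
        # Hypothetical data: the true model is quadratic, the fitted model linear,
        # so the cumulative-residual process should reveal the misspecification.
        y = 1.0 + 0.5 * x**2 + rng.normal(scale=0.3, size=n)
        fit = sm.OLS(y, sm.add_constant(x)).fit()
        resid = fit.resid

        order = np.argsort(x)
        W_obs = np.cumsum(resid[order]) / np.sqrt(n)  # observed cumulative-sum process

        # Approximate the null distribution of the supremum by sign-flipping
        # the residuals (a crude stand-in for the Gaussian multiplier process).
        sup_null = np.array([
            np.abs(np.cumsum(resid[order] * rng.choice([-1, 1], n)) / np.sqrt(n)).max()
            for _ in range(1000)
        ])
        p_value = np.mean(sup_null >= np.abs(W_obs).max())
        print("supremum test p-value:", p_value)  # small p flags the wrong functional form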

  19. Comparing Epileptiform Behavior of Mesoscale Detailed Models and Population Models of Neocortex

    NARCIS (Netherlands)

    Visser, S.; Meijer, Hil Gaétan Ellart; Lee, Hyong C.; van Drongelen, Wim; van Putten, Michel Johannes Antonius Maria; van Gils, Stephanus A.

    2010-01-01

    Two models of the neocortex are developed to study normal and pathologic neuronal activity. One model contains a detailed description of a neocortical microcolumn represented by 656 neurons, including superficial and deep pyramidal cells, four types of inhibitory neurons, and realistic synaptic

  20. Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models

    Directory of Open Access Journals (Sweden)

    Tomasz Kajdanowicz

    2016-09-01

    Full Text Available Over the years, several theoretical graph generation models have been proposed. Among the most prominent are: the Erdős–Rényi random graph model, the Watts–Strogatz small world model, the Albert–Barabási preferential attachment model, the Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph. In other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model this means comparing the empirical graph to a single realization of a theoretical graph model, where the realization is generated from the given model using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification by comparing the entropies of centrality measure distributions (degree centrality, betweenness centrality, closeness centrality). We demonstrate that our approach can help assign the empirical graph to the most similar theoretical model using a simple unsupervised learning method.
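
    A minimal version of the entropy comparison, restricted to degree centrality, with networkx's karate-club graph standing in for an empirical graph and parameter choices that are illustrative rather than fitted:

        import networkx as nx
        import numpy as np
        from scipy.stats import entropy

        def degree_entropy(g):
            """Shannon entropy of the degree distribution."""
            hist = np.array(nx.degree_histogram(g), dtype=float)
            return entropy(hist / hist.sum())

        empirical = nx.karate_club_graph()  # stand-in for a real empirical graph
        n, m = empirical.number_of_nodes(), empirical.number_of_edges()

        models = {
            "Erdos-Renyi": nx.gnm_random_graph(n, m, seed=0),
            "Watts-Strogatz": nx.watts_strogatz_graph(n, k=max(2 * m // n, 2), p=0.1, seed=0),
            "Barabasi-Albert": nx.barabasi_albert_graph(n, m=max(m // n, 1), seed=0),
        }
        h_emp = degree_entropy(empirical)
        # The model with the smallest entropy gap is the most similar candidate.
        for name, g in models.items():
            print(f"{name:16s} |H_model - H_empirical| = {abs(degree_entropy(g) - h_emp):.4f}")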

  1. A comparative study on effective dynamic modeling methods for flexible pipe

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang Ho; Hong, Sup; Kim, Hyung Woo [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of); Kim, Sung Soo [Chungnam National University, Daejeon (Korea, Republic of)

    2015-07-15

    In this paper, in order to select a suitable method applicable to the large-deflection, small-strain problem of pipe systems in the deep seabed mining system, the finite difference method with lumped mass from the field of cable dynamics and the substructure method from the field of flexible multibody dynamics were compared. Due to the difficulty of obtaining experimental results from an actual pipe system in the deep seabed mining system, a thin cantilever beam model with experimental results was employed for the comparative study. The accuracy of the methods was investigated by comparing the experimental results and simulation results from the cantilever beam model with different numbers of elements. The efficiency of the methods was also examined by comparing the operation counts required for solving the equations of motion. Finally, this cantilever beam model, together with the comparative study results, can serve as a benchmark problem for flexible multibody dynamics.

  2. Comparative study of boron transport models in NRC Thermal-Hydraulic Code Trace

    Energy Technology Data Exchange (ETDEWEB)

    Olmo-Juan, Nicolás; Barrachina, Teresa; Miró, Rafael; Verdú, Gumersindo; Pereira, Claubia, E-mail: nioljua@iqn.upv.es, E-mail: tbarrachina@iqn.upv.es, E-mail: rmiro@iqn.upv.es, E-mail: gverdu@iqn.upv.es, E-mail: claubia@nuclear.ufmg.br [Institute for Industrial, Radiophysical and Environmental Safety (ISIRYM). Universitat Politècnica de València (Spain); Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2017-07-01

    Recently, interest in the study of various types of transients involving changes in the boron concentration inside the reactor has led to increased interest in developing and studying new models and tools that allow a correct study of boron transport. Accordingly, a significant variety of boron transport models and spatial difference schemes are available in thermal-hydraulic codes such as TRACE. In this work, the results obtained using the different boron transport models implemented in the NRC thermal-hydraulic code TRACE are compared. To do this, a set of models has been created using the different options and configurations that could influence boron transport. These models reproduce a simple event of filling or emptying the boron concentration in a long pipe. Moreover, with the aim of comparing the differences obtained when one-dimensional or three-dimensional components are chosen, many different cases were modeled using only pipe components or a mix of pipe and vessel components. In addition, the influence of the void fraction on boron transport has been studied and compared under conditions close to a commercial BWR model. Finally, the different cases and boron transport models are compared with each other and with the analytical solution provided by the Burgers equation. From this comparison, important conclusions are drawn that will be the basis for adequately modeling boron transport in TRACE. (author)

  3. The utility of comparative models and the local model quality for protein crystal structure determination by Molecular Replacement

    Directory of Open Access Journals (Sweden)

    Pawlowski Marcin

    2012-11-01

    Full Text Available Background: Computational models of protein structures have proved useful as search models in Molecular Replacement (MR), a common method to solve the phase problem faced by macromolecular crystallography. The success of MR depends on the accuracy of a search model. Unfortunately, this parameter remains unknown until the final structure of the target protein is determined. During the last few years, several Model Quality Assessment Programs (MQAPs) that predict the local accuracy of theoretical models have been developed. In this article, we analyze whether the application of MQAPs improves the utility of theoretical models in MR. Results: For our dataset of 615 search models, the real local accuracy of a model increases the MR success ratio by 101% compared to corresponding polyalanine templates. On the contrary, when local model quality is not utilized in MR, the computational models solved only 4.5% more MR searches than polyalanine templates. For the same dataset of 615 models, a workflow combining MR with the predicted local accuracy of a model found 45% more correct solutions than polyalanine templates. To predict such accuracy, MetaMQAPclust, a “clustering MQAP”, was used. Conclusions: Using comparative models only marginally increases the MR success ratio in comparison to polyalanine structures of templates. However, the situation changes dramatically once comparative models are used together with their predicted local accuracy. A new functionality was added to the GeneSilico Fold Prediction Metaserver in order to build models that are more useful for MR searches. Additionally, we have developed a simple method, AmIgoMR (Am I good for MR?), to predict if an MR search with a template-based model for a given template is likely to find the correct solution.

  4. A Comparison of Competing Models for Understanding Industrial Organization’s Acceptance of Cloud Services

    Directory of Open Access Journals (Sweden)

    Shui-Lien Chen

    2018-03-01

    Full Text Available Cloud computing is the next generation in computing, and the next natural step in the evolution of on-demand information technology services and products. However, only a few studies have addressed the adoption of cloud computing from an organizational perspective, and they have not established which research model fits best. The purpose of this paper is to construct research competing models (RCMs) and determine the best-fitting model for understanding industrial organizations' acceptance of cloud services. This research integrated the technology acceptance model and the principle of model parsimony to develop four cloud service adoption RCMs, with enterprise usage intention used as a proxy for actual behavior, and then compared the RCMs using structural equation modeling (SEM). Data derived from a questionnaire-based survey of 227 firms in Taiwan were tested against the relationships through SEM. Based on the empirical study, the results indicated that, although all four RCMs had a high goodness of fit, in both nested and non-nested structure comparisons research competing model A (Model A) demonstrated superior performance and was the best-fitting model. This study introduced a model development strategy that can most accurately explain and predict the behavioral intention of organizations to adopt cloud services.

  5. Animal Models for Evaluation of Bone Implants and Devices: Comparative Bone Structure and Common Model Uses.

    Science.gov (United States)

    Wancket, L M

    2015-09-01

    Bone implants and devices are a rapidly growing field within biomedical research, and implants have the potential to significantly improve human and animal health. Animal models play a key role in initial product development and are important components of nonclinical data included in applications for regulatory approval. Pathologists are increasingly being asked to evaluate these models at the initial developmental and nonclinical biocompatibility testing stages, and it is important to understand the relative merits and deficiencies of various species when evaluating a new material or device. This article summarizes characteristics of the most commonly used species in studies of bone implant materials, including detailed information about the relevance of a particular model to human bone physiology and pathology. Species reviewed include mice, rats, rabbits, guinea pigs, dogs, sheep, goats, and nonhuman primates. Ultimately, a comprehensive understanding of the benefits and limitations of different model species will aid in rigorously evaluating a novel bone implant material or device.

  6. Comparing habitat suitability and connectivity modeling methods for conserving pronghorn migrations.

    Directory of Open Access Journals (Sweden)

    Erin E Poor

    Full Text Available Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements.

  7. Comparing habitat suitability and connectivity modeling methods for conserving pronghorn migrations.

    Science.gov (United States)

    Poor, Erin E; Loucks, Colby; Jakes, Andrew; Urban, Dean L

    2012-01-01

    Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements.

  8. Effects of stimulus order on discrimination processes in comparative and equality judgements: data and models.

    Science.gov (United States)

    Dyjas, Oliver; Ulrich, Rolf

    2014-01-01

    In typical discrimination experiments, participants are presented with a constant standard and a variable comparison stimulus and their task is to judge which of these two stimuli is larger (comparative judgement). In these experiments, discrimination sensitivity depends on the temporal order of these stimuli (Type B effect) and is usually higher when the standard precedes rather than follows the comparison. Here, we outline how two models of stimulus discrimination can account for the Type B effect, namely the weighted difference model (or basic Sensation Weighting model) and the Internal Reference Model. For both models, the predicted psychometric functions for comparative judgements as well as for equality judgements, in which participants indicate whether they perceived the two stimuli to be equal or not equal, are derived and it is shown that the models also predict a Type B effect for equality judgements. In the empirical part, the models' predictions are evaluated. To this end, participants performed a duration discrimination task with comparative judgements and with equality judgements. In line with the models' predictions, a Type B effect was observed for both judgement types. In addition, a time-order error, as indicated by shifts of the psychometric functions, and differences in response times were observed only for the equality judgement. Since both models entail distinct additional predictions, it seems worthwhile for future research to unite the two models into one conceptual framework.

  9. A Comparative Study of Spectral Auroral Intensity Predictions From Multiple Electron Transport Models

    Science.gov (United States)

    Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha

    2018-01-01

    It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.

  10. A comparative modeling study of a dual tracer experiment in a large lysimeter under atmospheric conditions

    Science.gov (United States)

    Stumpp, C.; Nützmann, G.; Maciejewski, S.; Maloszewski, P.

    2009-09-01

    Summary: In this paper, five model approaches with different physical and mathematical concepts, varying in their model complexity and requirements, were applied to identify the transport processes in the unsaturated zone. The applicability of these model approaches was compared and evaluated by investigating two tracer breakthrough curves (bromide, deuterium) in a cropped, free-draining lysimeter experiment under natural atmospheric boundary conditions. The data set consisted of time series of water balance, depth-resolved water contents, pressure heads and resident concentrations measured during 800 days. The tracer transport parameters were determined using a simple stochastic (stream tube) model, three lumped parameter models (constant water content model, multi-flow dispersion model, variable flow dispersion model) and a transient model approach. All of them were able to fit the tracer breakthrough curves. The identified transport parameters of each model approach were compared. Despite the differing physical and mathematical concepts, the resulting parameters (mean water contents, mean water flux, dispersivities) of the five model approaches were all in the same range. The results indicate that the flow processes are also describable assuming steady-state conditions. Homogeneous matrix flow is dominant, and a small pore volume with enhanced flow velocities near saturation was identified with the variably saturated flow and transport approach. The multi-flow dispersion model also identified preferential flow and additionally suggested a third, less mobile flow component. Due to the high fitting accuracy and parameter similarity, all model approaches gave reliable results.

  11. What can be learned from computer modeling? Comparing expository and modeling approaches to teaching dynamic systems behavior

    NARCIS (Netherlands)

    van Borkulo, S.P.; van Joolingen, W.R.; Savelsbergh, E.R.; de Jong, T.

    2012-01-01

    Computer modeling has been widely promoted as a means to attain higher order learning outcomes. Substantiating these benefits, however, has been problematic due to a lack of proper assessment tools. In this study, we compared computer modeling with expository instruction, using a tailored assessment

  12. Comparing Video Modeling and Graduated Guidance Together and Video Modeling Alone for Teaching Role Playing Skills to Children with Autism

    Science.gov (United States)

    Akmanoglu, Nurgul; Yanardag, Mehmet; Batu, E. Sema

    2014-01-01

    Teaching play skills is important for children with autism. The purpose of the present study was to compare effectiveness and efficiency of providing video modeling and graduated guidance together and video modeling alone for teaching role playing skills to children with autism. The study was conducted with four students. The study was conducted…

  13. Experience gained with the application of the MODIS diffusion model compared with the ATMOS Gauss-function-based model

    International Nuclear Information System (INIS)

    Mueller, A.

    1985-01-01

    The advantage of the Gauss-function-based models doubtless lies in their proven propagation parameter sets and empirical stack plume rise formulas, and in their easy adaptability and handling. However, grid models based on the trace matter transport equation rest on a more convincing fundamental principle. Grid models of the MODIS type are to attain a practical applicability comparable to Gauss models by developing techniques that allow the vertical self-movement of the plumes to be considered in grid models and that secure improved determination of the diffusion coefficients. (orig./PW) [de

  14. Model predictions of metal speciation in freshwaters compared to measurements by in situ techniques.

    NARCIS (Netherlands)

    Unsworth, Emily R; Warnken, Kent W; Zhang, Hao; Davison, William; Black, Frank; Buffle, Jacques; Cao, Jun; Cleven, Rob; Galceran, Josep; Gunkel, Peggy; Kalis, Erwin; Kistler, David; Leeuwen, Herman P van; Martin, Michel; Noël, Stéphane; Nur, Yusuf; Odzak, Niksa; Puy, Jaume; Riemsdijk, Willem van; Sigg, Laura; Temminghoff, Erwin; Tercier-Waeber, Mary-Lou; Toepperwien, Stefanie; Town, Raewyn M; Weng, Liping; Xue, Hanbin

    2006-01-01

    Measurements of trace metal species in situ in a softwater river, a hardwater lake, and a hardwater stream were compared to the equilibrium distribution of species calculated using two models, WHAM 6, incorporating humic ion binding model VI and visual MINTEQ incorporating NICA-Donnan. Diffusive

  15. The Development of Working Memory: Further Note on the Comparability of Two Models of Working Memory.

    Science.gov (United States)

    de Ribaupierre, Anik; Bailleux, Christine

    2000-01-01

    Summarizes similarities and differences between the working memory models of Pascual-Leone and Baddeley. Debates whether each model makes a specific contribution to the explanation of the results of Kemps, De Rammelaere, and Desmet. Argues for the necessity of theoretical task analyses. Compares a study similar to that of Kemps et al. in which different…

  16. Comparative Effectiveness of Echoic and Modeling Procedures in Language Instruction With Culturally Disadvantaged Children.

    Science.gov (United States)

    Stern, Carolyn; Keislar, Evan

    In an attempt to explore a systematic approach to language expansion and improved sentence structure, echoic and modeling procedures for language instruction were compared. Four hypotheses were formulated: (1) children who use modeling procedures will produce better structured sentences than children who use echoic prompting, (2) both echoic and…

  17. Comparing Multidimensional and Continuum Models of Vocabulary Acquisition: An Empirical Examination of the Vocabulary Knowledge Scale

    Science.gov (United States)

    Stewart, Jeffrey; Batty, Aaron Olaf; Bovee, Nicholas

    2012-01-01

    Second language vocabulary acquisition has been modeled both as multidimensional in nature and as a continuum wherein the learner's knowledge of a word develops along a cline from recognition through production. In order to empirically examine and compare these models, the authors assess the degree to which the Vocabulary Knowledge Scale (VKS;…

  18. Comparative nonlinear modeling of renal autoregulation in rats: Volterra approach versus artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Holstein-Rathlou, N H; Marsh, D J

    1998-01-01

    kernel estimation method based on Laguerre expansions. The results for the two types of artificial neural networks and the Volterra models are comparable in terms of normalized mean square error (NMSE) of the respective output prediction for independent testing data. However, the Volterra models obtained...

  19. COMPARING THE UTILITY OF MULTIMEDIA MODELS FOR HUMAN AND ECOLOGICAL EXPOSURE ANALYSIS: TWO CASES

    Science.gov (United States)

    A number of models are available for exposure assessment; however, few are used as tools for both human and ecosystem risks. This discussion will consider two modeling frameworks that have recently been used to support human and ecological decision making. The study will compare ...

  20. Comparing fire spread algorithms using equivalence testing and neutral landscape models

    Science.gov (United States)

    Brian R. Miranda; Brian R. Sturtevant; Jian Yang; Eric J. Gustafson

    2009-01-01

    We demonstrate a method to evaluate the degree to which a meta-model approximates spatial disturbance processes represented by a more detailed model across a range of landscape conditions, using neutral landscapes and equivalence testing. We illustrate this approach by comparing burn patterns produced by a relatively simple fire spread algorithm with those generated by...

  1. Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites

    Science.gov (United States)

    Hou, Zeyu; Lu, Wenxi

    2018-05-01

    Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model as surrogate models. The surrogate model is key in replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique used to solve GCSI problems, especially GCSI problems of aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported, together with an analysis of the influence of parameter optimization and of the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples; using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses under given operating conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process while maintaining high computational accuracy.
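
    The flavour of such a surrogate comparison can be sketched with scikit-learn. KELM has no scikit-learn implementation, so the sketch compares only a Kriging (Gaussian process) surrogate with a grid-search-tuned SVR, on a synthetic stand-in for simulation-model runs:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.metrics import r2_score
        from sklearn.model_selection import GridSearchCV, train_test_split
        from sklearn.svm import SVR

        rng = np.random.default_rng(3)
        # Hypothetical stand-in for simulation-model runs: inputs are source
        # characteristics, the output is a concentration at an observation well.
        X = rng.uniform(0, 1, size=(200, 5))
        y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        kriging = GaussianProcessRegressor(normalize_y=True).fit(X_tr, y_tr)
        # Parameter optimization markedly improves kernel machines, as the study notes.
        svr = GridSearchCV(SVR(), {"C": [1, 10, 100], "gamma": ["scale", 0.1, 1.0]}).fit(X_tr, y_tr)

        print("Kriging R^2:", r2_score(y_te, kriging.predict(X_te)))
        print("SVR     R^2:", r2_score(y_te, svr.predict(X_te)))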

  2. The Consensus String Problem and the Complexity of Comparing Hidden Markov Models

    DEFF Research Database (Denmark)

    Lyngsø, Rune Bang; Pedersen, Christian Nørgaard Storm

    2002-01-01

    The basic theory of hidden Markov models was developed and applied to problems in speech recognition in the late 1960s, and has since then been applied to numerous problems, e.g. biological sequence analysis. Most applications of hidden Markov models are based on efficient algorithms for computing...... the probability of generating a given string, or computing the most likely path generating a given string. In this paper we consider the problem of computing the most likely string, or consensus string, generated by a given model, and its implications on the complexity of comparing hidden Markov models. We show...... that computing the consensus string, and approximating its probability within any constant factor, is NP-hard, and that the same holds for the closely related labeling problem for class hidden Markov models. Furthermore, we establish the NP-hardness of comparing two hidden Markov models under the L∞- and L1...

  3. Comparative Analysis of Smart Meters Deployment Business Models on the Example of the Russian Federation Markets

    Directory of Open Access Journals (Sweden)

    Daminov Ildar

    2016-01-01

    Full Text Available This paper presents a comparison of smart meter deployment business models to determine the most suitable option for smart meter deployment. The authors consider three main business models, based on the type of company involved: the distribution grid company, the energy supplier (energosbyt) and the metering company. The goal of the article is to compare these business models for a massive smart metering roll-out in the power system of the Russian Federation.

  4. Statistical modeling of road contribution as emission sources to total suspended particles (TSP) under MCF model downtown Medellin - Antioquia - Colombia, 2004

    International Nuclear Information System (INIS)

    Gomez, Miryam; Saldarriaga, Julio; Correa, Mauricio; Posada, Enrique; Castrillon M, Francisco Javier

    2007-01-01

    Sand fields, construction sites, coal boilers, roads, and biological sources, among others, are contributing factors to air contamination in downtown Valle de Aburrá. The distribution of the road contribution data to total suspended particles, according to the source-receptor model MCF (source correlation modeling), is nearly a gamma distribution. A chi-square goodness-of-fit test is used for the statistical modeling; this test also allows estimating the parameters of the distribution by the maximum likelihood method, using the expectation-maximization algorithm as the convergence criterion. The mean of the road contribution data to total suspended particles according to the source-receptor model MCF is straightforward to obtain and validates the road contribution factor to the atmospheric pollution of the zone under study.
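
    The gamma fit and chi-square goodness-of-fit test can be sketched with scipy (synthetic data stand in for the Medellín measurements, and scipy's gamma.fit performs direct ML estimation rather than the EM iteration mentioned above):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        # Hypothetical stand-in for road-contribution data to TSP.
        data = stats.gamma.rvs(a=2.0, scale=3.0, size=500, random_state=rng)

        # Maximum likelihood estimation of the gamma parameters (location fixed at 0).
        a, loc, scale = stats.gamma.fit(data, floc=0)

        # Chi-square goodness of fit on equiprobable bins of the fitted gamma.
        k = 10
        edges = stats.gamma.ppf(np.linspace(0, 1, k + 1), a, loc=loc, scale=scale)
        observed, _ = np.histogram(data, bins=edges)
        expected = np.full(k, len(data) / k)
        # ddof accounts for the two estimated parameters (shape and scale).
        chi2, p = stats.chisquare(observed, expected, ddof=2)
        print(f"shape={a:.2f}, scale={scale:.2f}, chi2={chi2:.2f}, p={p:.3f}")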

  5. Comparing predictive models of glioblastoma multiforme built using multi-institutional and local data sources.

    Science.gov (United States)

    Singleton, Kyle W; Hsu, William; Bui, Alex A T

    2012-01-01

    The growing amount of electronic data collected from patient care and clinical trials is motivating the creation of national repositories where multiple institutions share data about their patient cohorts. Such efforts aim to provide sufficient sample sizes for data mining and predictive modeling, ultimately improving treatment recommendations and patient outcome prediction. While these repositories offer the potential to improve our understanding of a disease, potential issues need to be addressed to ensure that multi-site data and resultant predictive models are useful to non-contributing institutions. In this paper we examine the challenges of utilizing National Cancer Institute datasets for modeling glioblastoma multiforme. We created several types of prognostic models and compared their results against models generated using data solely from our institution. While overall model performance between the data sources was similar, different variables were selected during model generation, suggesting that mapping data resources between models is not a straightforward issue.

  6. Comparative analysis of diffused solar radiation models for optimum tilt angle determination for Indian locations

    International Nuclear Information System (INIS)

    Yadav, P.; Chandel, S.S.

    2014-01-01

    Tilt angle and orientation greatly influence the performance of solar photovoltaic panels. The tilt angle of solar photovoltaic panels is one of the important parameters for the optimum sizing of solar photovoltaic systems. This paper analyses six different isotropic and anisotropic diffuse solar radiation models for optimum tilt angle determination. The predicted optimum tilt angles are compared with experimentally measured values for the summer season under outdoor conditions. The Liu and Jordan model is found to exhibit the lowest error compared to the other models for the location. (author)
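
    Under the isotropic Liu and Jordan model, the diffuse irradiance on a plane tilted by beta is D_t = D_h (1 + cos beta)/2, and the optimum tilt follows from maximizing the total (beam + diffuse + ground-reflected) irradiance. A minimal sketch for an equator-facing surface at solar noon, with invented irradiance values and a simplified incidence-angle geometry:

        import math

        def tilted_irradiance(bh, dh, beta_deg, theta_z_deg, rho=0.2):
            """Total irradiance on a tilted, equator-facing surface under the isotropic
            (Liu-Jordan) sky model: beam + isotropic diffuse + ground-reflected terms.
            Assumes solar noon, so the incidence angle is |theta_z - beta|."""
            beta = math.radians(beta_deg)
            theta_z = math.radians(theta_z_deg)
            theta_i = abs(theta_z - beta)                       # incidence angle
            beam = bh / max(math.cos(theta_z), 1e-6) * max(math.cos(theta_i), 0.0)
            diffuse = dh * (1.0 + math.cos(beta)) / 2.0         # Liu-Jordan term
            reflected = (bh + dh) * rho * (1.0 - math.cos(beta)) / 2.0
            return beam + diffuse + reflected

        # Hypothetical clear-sky beam and diffuse irradiance on the horizontal, W/m^2.
        bh, dh = 600.0, 150.0
        best = max(range(91), key=lambda b: tilted_irradiance(bh, dh, b, theta_z_deg=40.0))
        print("optimum tilt (deg):", best)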

  7. A comparative study of velocity increment generation between the rigid body and flexible models of MMET

    Energy Technology Data Exchange (ETDEWEB)

    Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my [School of Aerospace Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Pulau Pinang (Malaysia)

    2016-02-01

    The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin–orbit coupling. This study presents a comparison of the velocity increments between the rigid-body and flexible models of the MMET. The equations of motion of both models in the time domain are transformed into functions of true anomaly. The equations of motion are integrated, and the responses in terms of the velocity increment of the rigid-body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity, and flexibility of the tether have significant effects on the velocity increments of the tether.

  8. Eigenvector Spatial Filtering Regression Modeling of Ground PM2.5 Concentrations Using Remotely Sensed Data

    Directory of Open Access Journals (Sweden)

    Jingyi Zhang

    2018-06-01

    Full Text Available This paper proposes a regression model using the Eigenvector Spatial Filtering (ESF) method to estimate ground PM2.5 concentrations. Covariates are derived from remotely sensed data including aerosol optical depth, normalized difference vegetation index, surface temperature, air pressure, relative humidity, height of the planetary boundary layer and a digital elevation model. In addition, cultural variables such as factory densities and road densities are also used in the model. With the Yangtze River Delta region as the study area, we constructed ESF-based Regression (ESFR) models at different time scales, using data for the period between December 2015 and November 2016. We found that the ESFR models effectively filtered spatial autocorrelation in the OLS residuals and resulted in increases in the goodness-of-fit metrics as well as reductions in residual standard errors and cross-validation errors, compared to the classic OLS models. The annual ESFR model explained 70% of the variability in PM2.5 concentrations, 16.7% more than the non-spatial OLS model. With the ESFR models, we performed detailed analyses of the spatial and temporal distributions of PM2.5 concentrations in the study area. The model predictions are lower than ground observations but match the general trend. The experiment shows that ESFR provides a promising approach to PM2.5 analysis and prediction.

  9. Eigenvector Spatial Filtering Regression Modeling of Ground PM2.5 Concentrations Using Remotely Sensed Data.

    Science.gov (United States)

    Zhang, Jingyi; Li, Bin; Chen, Yumin; Chen, Meijie; Fang, Tao; Liu, Yongfeng

    2018-06-11

    This paper proposes a regression model using the Eigenvector Spatial Filtering (ESF) method to estimate ground PM2.5 concentrations. Covariates are derived from remotely sensed data including aerosol optical depth, normalized difference vegetation index, surface temperature, air pressure, relative humidity, height of the planetary boundary layer and a digital elevation model. In addition, cultural variables such as factory densities and road densities are also used in the model. With the Yangtze River Delta region as the study area, we constructed ESF-based Regression (ESFR) models at different time scales, using data for the period between December 2015 and November 2016. We found that the ESFR models effectively filtered spatial autocorrelation in the OLS residuals and resulted in increases in the goodness-of-fit metrics as well as reductions in residual standard errors and cross-validation errors, compared to the classic OLS models. The annual ESFR model explained 70% of the variability in PM2.5 concentrations, 16.7% more than the non-spatial OLS model. With the ESFR models, we performed detailed analyses of the spatial and temporal distributions of PM2.5 concentrations in the study area. The model predictions are lower than ground observations but match the general trend. The experiment shows that ESFR provides a promising approach to PM2.5 analysis and prediction.
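
    The core ESF step, computing Moran eigenvectors of a doubly centered spatial weights matrix and appending the leading ones to the regression design, can be sketched as follows. This is a generic illustration with a toy one-dimensional neighbour structure, not the authors' pipeline; in practice the eigenvectors are usually selected stepwise until residual autocorrelation is removed:

```python
import numpy as np

def esf_design(W, X, n_vectors=5):
    """Append Moran eigenvectors of a spatial weights matrix W to design X.

    A minimal eigenvector-spatial-filtering step: eigenvectors of the
    doubly-centered W with the largest eigenvalues carry the strongest
    positive spatial autocorrelation and act as spatial proxies.
    """
    n = W.shape[0]
    M = np.eye(n) - np.ones((n, n)) / n       # centering projector
    evals, evecs = np.linalg.eigh(M @ ((W + W.T) / 2) @ M)
    order = np.argsort(evals)[::-1]           # descending eigenvalues
    E = evecs[:, order[:n_vectors]]
    return np.hstack([X, E])

# Toy example: 50 locations on a line, rook-style neighbours.
n = 50
W = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + AOD-like covariate
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

Xf = esf_design(W, X, n_vectors=3)
beta, *_ = np.linalg.lstsq(Xf, y, rcond=None)          # ESF-augmented OLS
print(beta[:2])                                        # covariate effects after filtering
```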

  10. Antibiotic Resistances in Livestock: A Comparative Approach to Identify an Appropriate Regression Model for Count Data

    Directory of Open Access Journals (Sweden)

    Anke Hüls

    2017-05-01

    Full Text Available Antimicrobial resistance in livestock is a matter of general concern. To develop hygiene measures and methods for resistance prevention and control, epidemiological studies on a population level are needed to detect factors associated with antimicrobial resistance in livestock holdings. In general, regression models are used to describe these relationships between environmental factors and resistance outcomes. Besides the study design, the correlation structures of the different antibiotic resistance outcomes and structural zero measurements on both the resistance outcome and the exposure side are challenges for the epidemiological model-building process. The use of appropriate regression models that acknowledge these complexities is essential to assure valid epidemiological interpretations. The aims of this paper are (i) to explain the model-building process by comparing several competing models for count data (negative binomial, quasi-Poisson, zero-inflated, and hurdle models) and (ii) to compare these models using data from a cross-sectional study on antibiotic resistance in animal husbandry. These goals are essential to evaluate which model is most suitable to identify potential prevention measures. The dataset used as an example in our analyses was generated initially to study the prevalence of, and factors associated with, the appearance of cefotaxime-resistant Escherichia coli in 48 German fattening pig farms. For each farm, the outcome was the count of samples with resistant bacteria. There was almost no overdispersion and only moderate evidence of excess zeros in the data. Our analyses show that it is essential to evaluate regression models in studies analyzing the relationship between environmental factors and antibiotic resistances in livestock. After model comparison based on evaluation of model predictions, the Akaike information criterion, and Pearson residuals, here the hurdle model was judged to be the most appropriate
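
    The model-building comparison the authors describe can be mimicked with standard tooling. A hedged sketch using statsmodels on hypothetical farm counts: Poisson and negative binomial GLMs compared by AIC, plus a simplified two-part (hurdle-style) fit; a full hurdle model would use a zero-truncated likelihood for the positive counts:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: counts of resistant samples per farm plus one exposure covariate.
rng = np.random.default_rng(1)
n = 48
x = rng.normal(size=n)
X = sm.add_constant(x)
y = rng.negative_binomial(2, 0.5, size=n)

# Poisson and negative binomial GLMs; AIC is one of the criteria used in the paper.
poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit()
negbin = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()

# Two-part (hurdle-style) sketch: a logit for zero vs. non-zero, then a count
# model on the positives. This simplification only illustrates the split; a
# proper hurdle model uses a zero-truncated distribution for the second part.
zero = (y == 0).astype(int)
logit = sm.Logit(zero, X).fit(disp=0)
pos = y > 0
count_pos = sm.GLM(y[pos], X[pos], family=sm.families.Poisson()).fit()

for name, m in [("Poisson", poisson), ("NegBin", negbin)]:
    print(name, "AIC:", round(m.aic, 1))
```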

  11. Canis familiaris As a Model for Non-Invasive Comparative Neuroscience.

    Science.gov (United States)

    Bunford, Nóra; Andics, Attila; Kis, Anna; Miklósi, Ádám; Gácsi, Márta

    2017-07-01

    There is an ongoing need to improve animal models for investigating human behavior and its biological underpinnings. The domestic dog (Canis familiaris) is a promising model in cognitive neuroscience. However, before it can contribute to advances in this field in a comparative, reliable, and valid manner, several methodological issues warrant attention. We review recent non-invasive canine neuroscience studies, primarily focusing on (i) variability among dogs and between dogs and humans in cranial characteristics, and (ii) generalizability across dog and dog-human studies. We argue not for methodological uniformity but for functional comparability between methods, experimental designs, and neural responses. We conclude that the dog may become an innovative and unique model in comparative neuroscience, complementing more traditional models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. A Comparative Study of Theoretical Graph Models for Characterizing Structural Networks of Human Brain

    Directory of Open Access Journals (Sweden)

    Xiaojin Li

    2013-01-01

    Full Text Available Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, there is an important issue that has not been adequately discussed before: what is the optimal theoretical graph model for describing the structural networks of the human brain? In this paper, we perform a comparative study to address this problem. Firstly, large-scale cortical regions of interest (ROIs) are localized by a recently developed and validated brain reference system named Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL), to address the limitations in the identification of brain network ROIs in previous studies. Then, we construct structural brain networks based on diffusion tensor imaging (DTI) data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using state-of-the-art graph analysis algorithms and tools and are further compared with seven popular theoretical graph models. In addition, we compare the topological properties between two graph models, namely, the stickiness-index-based model (STICKY) and the scale-free gene duplication model (SF-GD), which have higher similarity with the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that, among the seven theoretical graph models compared in this study, the STICKY and SF-GD models perform better in characterizing the structural human brain network.

  13. Comparative Analysis of River Flow Modelling by Using Supervised Learning Technique

    Science.gov (United States)

    Ismail, Shuhaida; Mohamad Pandiahi, Siraj; Shabri, Ani; Mustapha, Aida

    2018-04-01

    The goal of this research is to investigate the efficiency of three supervised learning algorithms for forecasting the monthly river flow of the Indus River in Pakistan, spread over 550 square miles or 1800 square kilometres. The algorithms include the Least Square Support Vector Machine (LSSVM), Artificial Neural Network (ANN) and Wavelet Regression (WR). The monthly river flow was forecasted with each of the three models individually, and the accuracy of all models was then compared. The obtained results were statistically analysed. This analytical comparison showed that the LSSVM model is more precise in monthly river flow forecasting: LSSVM achieved the highest r, with a value of 0.934, compared to the other models. This indicates that LSSVM is more accurate and efficient than the ANN and WR models.

  14. Comparing spatial diversification and meta-population models in the Indo-Australian Archipelago.

    Science.gov (United States)

    Chalmandrier, Loïc; Albouy, Camille; Descombes, Patrice; Sandel, Brody; Faurby, Soren; Svenning, Jens-Christian; Zimmermann, Niklaus E; Pellissier, Loïc

    2018-03-01

    Reconstructing the processes that have shaped the emergence of biodiversity gradients is critical to understand the dynamics of diversification of life on Earth. Islands have traditionally been used as model systems to unravel the processes shaping biological diversity. MacArthur and Wilson's island biogeographic model predicts diversity to be based on dynamic interactions between colonization and extinction rates, while treating islands themselves as geologically static entities. The current spatial configuration of islands should influence meta-population dynamics, but long-term geological changes within archipelagos are also expected to have shaped island biodiversity, in part by driving diversification. Here, we compare two mechanistic models providing inferences on species richness at a biogeographic scale: a mechanistic spatial-temporal model of species diversification and a spatial meta-population model. While the meta-population model operates over a static landscape, the diversification model is driven by changes in the size and spatial configuration of islands through time. We compare the inferences of both models to floristic diversity patterns among land patches of the Indo-Australian Archipelago. Simulation results from the diversification model better matched observed diversity than a meta-population model constrained only by the contemporary landscape. The diversification model suggests that the dynamic re-positioning of islands promoting land disconnection and reconnection induced an accumulation of particularly high species diversity on Borneo, which is central within the island network. By contrast, the meta-population model predicts a higher diversity on the mainlands, which is less compatible with empirical data. Our analyses highlight that, by comparing models with contrasting assumptions, we can pinpoint the processes that are most compatible with extant biodiversity patterns.

  15. Comparing spatial diversification and meta-population models in the Indo-Australian Archipelago

    Science.gov (United States)

    Chalmandrier, Loïc; Albouy, Camille; Descombes, Patrice; Sandel, Brody; Faurby, Soren; Svenning, Jens-Christian; Zimmermann, Niklaus E.

    2018-01-01

    Reconstructing the processes that have shaped the emergence of biodiversity gradients is critical to understand the dynamics of diversification of life on Earth. Islands have traditionally been used as model systems to unravel the processes shaping biological diversity. MacArthur and Wilson's island biogeographic model predicts diversity to be based on dynamic interactions between colonization and extinction rates, while treating islands themselves as geologically static entities. The current spatial configuration of islands should influence meta-population dynamics, but long-term geological changes within archipelagos are also expected to have shaped island biodiversity, in part by driving diversification. Here, we compare two mechanistic models providing inferences on species richness at a biogeographic scale: a mechanistic spatial-temporal model of species diversification and a spatial meta-population model. While the meta-population model operates over a static landscape, the diversification model is driven by changes in the size and spatial configuration of islands through time. We compare the inferences of both models to floristic diversity patterns among land patches of the Indo-Australian Archipelago. Simulation results from the diversification model better matched observed diversity than a meta-population model constrained only by the contemporary landscape. The diversification model suggests that the dynamic re-positioning of islands promoting land disconnection and reconnection induced an accumulation of particularly high species diversity on Borneo, which is central within the island network. By contrast, the meta-population model predicts a higher diversity on the mainlands, which is less compatible with empirical data. Our analyses highlight that, by comparing models with contrasting assumptions, we can pinpoint the processes that are most compatible with extant biodiversity patterns. PMID:29657753

  16. Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality.

    Science.gov (United States)

    Gosling, Simon N; Hondula, David M; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer

    2017-08-16

    Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to "adaptation uncertainty" (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. This study had three aims: (a) compare the range in projected impacts that arises from using different adaptation modeling methods; (b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; (c) recommend modeling method(s) to use in future impact assessments. We estimated impacts for 2070-2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634.

  17. A computational approach to compare regression modelling strategies in prediction research.

    Science.gov (United States)

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 to 94.9 %, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.

  18. Tuning of Lee Path Loss Model based on recent RF measurements in 400 MHZ conducted in Riyadh City, Saudi Arabia

    International Nuclear Information System (INIS)

    Alotaibi, Faihan D.; Ali, Adel A.

    2008-01-01

    In mobile radio systems, path loss models are necessary for proper planning, interference estimation, frequency assignment and cell parameter setting, which are basic to the network planning process as well as to Location Based Services (LBS) techniques that are not based on the GPS system. Empirical models are the most adjustable models and can be suited to different types of environments. In this paper, the Lee path loss model has been tuned using the Least Squares (LS) algorithm to fit measured data for a TETRA system operating at 400 MHz in Riyadh's urban areas and suburbs. Consequently, the Lee model's parameters (L0, γ) are obtained for the targeted areas. The performance of the tuned Lee model is then compared to the three most widely used empirical path loss models: the Hata, ITU-R and COST 231 Walfisch-Ikegami non-line-of-sight (CWI-NLOS) path loss models. The performance criteria selected for the comparison of the various empirical path loss models are the Root Mean Square Error (RMSE) and the goodness of fit (R2). The RMSE and R2 between the actual and predicted data are calculated for the various path loss models. It turned out that the tuned Lee model outperforms the other empirical models. (author)
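
    Least-squares tuning of a one-slope path loss model reduces to a linear fit in log-distance. A minimal sketch with synthetic drive-test data (the constants are illustrative, not the Riyadh measurements); RMSE and R2, the two criteria used in the paper, are computed at the end:

```python
import numpy as np

# Hypothetical drive-test samples: distances (km) and measured path loss (dB).
rng = np.random.default_rng(3)
d = rng.uniform(0.5, 10.0, size=200)
true_L0, true_slope = 124.0, 35.0                 # slope = 10 * gamma dB per decade
pl = true_L0 + true_slope * np.log10(d) + rng.normal(0, 6, size=200)

# Least-squares tuning of PL(d) = L0 + 10 * gamma * log10(d).
slope, L0 = np.polyfit(np.log10(d), pl, 1)
gamma = slope / 10.0

pred = L0 + slope * np.log10(d)
rmse = np.sqrt(np.mean((pl - pred) ** 2))
r2 = 1 - np.sum((pl - pred) ** 2) / np.sum((pl - pl.mean()) ** 2)
print(f"L0={L0:.1f} dB, gamma={gamma:.2f}, RMSE={rmse:.2f} dB, R2={r2:.3f}")
```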

  19. Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines

    Science.gov (United States)

    Wood, Wm M.

    2018-02-01

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. Goodness of fit is compared with output from the LSP particle-in-cell code as well as the Monte Carlo N-Particle eXtended ("MCNPX") code, and the model is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.
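
    The paper's specific spectrum model is not given in the abstract, but the general idea, turning V(t) and I(t) traces into a bremsstrahlung spectrum, can be illustrated with a Kramers-style thick-target approximation. A sketch with a hypothetical pulse shape:

```python
import numpy as np

def kramers_spectrum(t, V_MV, I_kA, energies_MeV):
    """Thick-target bremsstrahlung estimate from voltage and current traces.

    Kramers-style approximation: at each instant the photon yield per unit
    energy is proportional to I(t) * (eV(t) - E) for E < eV(t). This is a
    generic stand-in, not the specific model of the paper.
    """
    spectrum = np.zeros_like(energies_MeV)
    dt = np.gradient(t)
    for ti in range(len(t)):
        below = energies_MeV < V_MV[ti]
        spectrum[below] += I_kA[ti] * (V_MV[ti] - energies_MeV[below]) * dt[ti]
    return spectrum

# Hypothetical 60 ns pulse, roughly shaped like a rod-pinch shot.
t = np.linspace(0, 60e-9, 200)
V = 2.0 * np.sin(np.pi * t / 60e-9) ** 2          # MV
I = 60.0 * np.sin(np.pi * t / 60e-9) ** 2         # kA
E = np.linspace(0.05, 2.0, 100)                   # MeV
S = kramers_spectrum(t, V, I, E)
print("endpoint energy ~", E[S > 0].max(), "MeV")
```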

  20. A comparative analysis of diffusion and transport models applying to releases in the marine environment

    International Nuclear Information System (INIS)

    Mejon, M.J.

    1984-05-01

    This study is a contribution to the development of methodologies for assessing the radiological impact of liquid effluent releases from nuclear power plants. It first concerns hydrodynamic models and their applications to the North Sea, which is of great interest to the European Community. Starting from the basic equations of geophysical fluid mechanics, the assumptions made at each step in order to simplify resolution are analysed and commented on. The published results on the application of the Liege University models (NIHOUL, RONDAY et al.) are compared to observations, both on tides and storms and on residual circulation, which is responsible for the long-term transport of pollutants. The results for residual circulation compare satisfactorily, and the expected accuracy of the other models is indicated. A dispersion model by the same authors is then studied, with a numerical integration method using a moving grid. Other models (Laboratoire National d'Hydraulique, EDF) used for the English Channel are also presented [fr]

  1. Comparing the engineering program feeders from SiF and convention models

    Science.gov (United States)

    Roongruangsri, Warawaran; Moonpa, Niwat; Vuthijumnonk, Janyawat; Sangsuwan, Kampanart

    2018-01-01

    This research aims to compare two types of engineering program feeder models within the technical education system of Rajamangala University of Technology Lanna (RMUTL), Chiangmai, Thailand: the conventional model and the school-in-factory (SiF) model. The SiF model is developed through a collaborative educational process between industry, government and academia, using work-integrated learning. The research methodology compared features of the SiF model with the conventional model in terms of learning outcomes, funding, and advantages and disadvantages from the point of view of students, professors, the university, government and industrial partners. The results indicate that the SiF feeder model is the most pertinent one, as it meets the requirements of the university, the government and industry. The SiF feeder model yielded positive learning outcomes with low expenditure per student for both the family and the university. In parallel, the sharing of knowledge between university and industry became increasingly important in the process, which resulted in the improvement of industrial skills for professors and an increase in industry-based research for the university. The SiF feeder model meets the demands of public policy in supporting a skilled workforce for industry, and could be an effective tool for the triple-helix educational model of Thailand.

  2. Towards a systemic functional model for comparing forms of discourse in academic writing

    Directory of Open Access Journals (Sweden)

    Meriel Bloor

    2008-04-01

    Full Text Available This article reports on research into the variation of texts across disciplines and considers the implications of this work for the teaching of writing. The research was motivated by the need to improve students’ academic writing skills in English and the limitations of some current pedagogic advice. The analysis compares Methods sections of research articles across four disciplines, including applied and hard sciences, on a cline, or gradient, termed slow to fast. The analysis considers the characteristics the texts share, but more importantly identifies the variation between sets of linguistic features. Working within a systemic functional framework, the texts are analysed for length, sentence length, lexical density, readability, grammatical metaphor, Thematic choice, as well as various rhetorical functions. Contextually relevant reasons for the differences are considered and the implications of the findings are related to models of text and discourse. Recommendations are made for developing domain models that relate clusters of features to positions on a cline.

  3. Impact of rotavirus vaccination on hospitalisations in Belgium: comparing model predictions with observed data.

    Directory of Open Access Journals (Sweden)

    Baudouin Standaert

    Full Text Available BACKGROUND: Published economic assessments of rotavirus vaccination typically use modelling, mainly static Markov cohort models with birth cohorts followed up to the age of 5 years. Rotavirus vaccination has now been available for several years in some countries, and data have been collected to evaluate the real-world impact of vaccination on rotavirus hospitalisations. This study compared the economic impact of vaccination between model estimates and observed data on disease-specific hospitalisation reductions in a country for which both modelled and observed datasets exist (Belgium). METHODS: A previously published Markov cohort model estimated the impact of rotavirus vaccination on the number of rotavirus hospitalisations in children aged <5 years in Belgium using vaccine efficacy data from clinical development trials. Data on the number of rotavirus-positive gastroenteritis hospitalisations in children aged <5 years between 1 June 2004 and 31 May 2006 (pre-vaccination study period) or 1 June 2007 to 31 May 2010 (post-vaccination study period) were analysed from nine hospitals in Belgium and compared with the modelled estimates. RESULTS: The model predicted a smaller decrease in hospitalisations over time, mainly explained by two factors. First, the observed data indicated indirect vaccine protection in children too old or too young for vaccination. This herd effect is difficult to capture in static Markov cohort models and therefore was not included in the model. Second, the model included a 'waning' effect, i.e. reduced vaccine effectiveness over time. The observed data suggested this waning effect did not occur during that period, and so the model systematically underestimated vaccine effectiveness during the first 4 years after vaccine implementation. CONCLUSIONS: Model predictions underestimated the direct medical economic value of rotavirus vaccination during the first 4 years of vaccination by approximately 10% when assessing

  4. Comparative Validation of Realtime Solar Wind Forecasting Using the UCSD Heliospheric Tomography Model

    Science.gov (United States)

    MacNeice, Peter; Taktakishvili, Alexandra; Jackson, Bernard; Clover, John; Bisi, Mario; Odstrcil, Dusan

    2011-01-01

    The University of California, San Diego 3D Heliospheric Tomography Model reconstructs the evolution of heliospheric structures, and can make forecasts of solar wind density and velocity up to 72 hours in the future. The latest model version, installed and running in realtime at the Community Coordinated Modeling Center (CCMC), analyzes scintillations of meter wavelength radio point sources recorded by the Solar-Terrestrial Environment Laboratory (STELab) together with realtime measurements of solar wind speed and density recorded by the Advanced Composition Explorer (ACE) Solar Wind Electron Proton Alpha Monitor (SWEPAM). The solution is reconstructed using tomographic techniques and a simple kinematic wind model. Since installation, the CCMC has been recording the model forecasts and comparing them with ACE measurements, and with forecasts made using other heliospheric models hosted by the CCMC. We report the preliminary results of this validation work and comparison with alternative models.

  5. A comparative study of the use of different risk-assessment models in Danish municipalities

    DEFF Research Database (Denmark)

    Sørensen, Kresta Munkholt

    2018-01-01

    Risk-assessment models are widely used in casework involving vulnerable children and families. Internationally, there are a number of different kinds of models, with great variation in the characteristics of the factors deemed harmful to children. Lists of factors have been made, but most of them give very little advice on how the factors should be weighted. This paper addresses the use of risk-assessment models in six different Danish municipalities. The paper presents a comparative analysis and discussion of differences and similarities between three models: the Integrated Children’s System (ICS), the Signs of Safety (SoS) model and models developed by the municipalities themselves (MM). The analysis answers the following two key questions: (i) which risk and protective factors do the caseworkers give most weight in the risk assessment? and (ii) does each of the different models...

  6. COMPARATIVE EFFICIENCIES STUDY OF SLOT MODEL AND MOUSE MODEL IN PRESSURISED PIPE FLOW

    Directory of Open Access Journals (Sweden)

    Saroj K. Pandit

    2014-01-01

    Full Text Available The flow in sewers is unsteady and varies between free-surface and full-pipe pressurized flow. Sewers are designed on the basis of free-surface (gravity) flow, but they may carry pressurized flow. The Preissmann slot concept is a widely used numerical approach for unsteady combined free-surface/pressurized flow, as it provides the advantage of treating free-surface flow as a single flow type. The slot concept uses the Saint-Venant equations as the basic equations for one-dimensional unsteady free-surface flow. This paper includes two different numerical models using the Saint-Venant equations. In the first model, the Saint-Venant equations of continuity and momentum are solved by the Method of Characteristics and presented in forms suitable for direct substitution into FORTRAN programming for numerical analysis. The MOUSE model carries out computation of unsteady flows based on an implicit, finite difference numerical solution of the basic one-dimensional Saint-Venant equations of free-surface flow. The simulation results are compared to analyse the nature and degree of errors for further improvement.
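
    The Preissmann slot idea is to top the closed pipe with a hypothetical narrow slot so that the free-surface equations stay valid under pressurization; the slot width is chosen so that the gravity wave speed in the slot reproduces the desired pressure wave celerity. A small sketch of that sizing rule (the values are illustrative):

```python
import numpy as np

def preissmann_slot_width(diameter, wave_speed):
    """Width of the hypothetical Preissmann slot for a circular pipe.

    The slot is sized so that the free-surface gravity wave speed
    sqrt(g * A / T) in the slot reproduces the desired pressure wave
    speed a, giving T_slot = g * A_full / a**2.
    """
    g = 9.81
    A_full = np.pi * diameter**2 / 4.0
    return g * A_full / wave_speed**2

# Example: 1 m pipe, 50 m/s surge wave speed (a modelling choice, often
# reduced from the true acoustic speed for numerical stability).
print(f"slot width: {preissmann_slot_width(1.0, 50.0) * 1000:.2f} mm")
```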

  7. COMPARATIVE ANALYSIS BETWEEN THE TRADITIONAL MODEL OF CORPORATE GOVERNANCE AND ISLAMIC MODEL

    Directory of Open Access Journals (Sweden)

    DAN ROXANA LOREDANA

    2016-08-01

    Full Text Available Corporate governance represents the set of processes and policies by which a company is administered, controlled and directed to achieve the management objectives settled by the shareholders. The most important benefits of corporate governance to organisations relate to business success, investor confidence and minimisation of wastage. For the business, improved controls and decision-making aid corporate success as well as growth in revenues and profits. For investor confidence, corporate governance means that investors are more likely to trust that the company is being well run; this not only makes it easier and cheaper for the company to raise finance, but also has a positive effect on the share price. Minimisation of wastage refers to strong corporate governance helping to minimise waste within the organisation, as well as corruption, risk and mismanagement. Thus, in our research, we try to determine the common elements and the differences that have occurred between two well-known models of corporate governance: the traditional Anglo-Saxon model and the Islamic model of corporate governance.

  8. A comparative study of turbulence models for dissolved air flotation flow analysis

    International Nuclear Information System (INIS)

    Park, Min A; Lee, Kyun Ho; Chung, Jae Dong; Seo, Seung Ho

    2015-01-01

    The dissolved air flotation (DAF) system is a water treatment process that removes contaminants by attaching micro-bubbles to them, causing them to float to the water surface. In the present study, two-phase flow of an air-water mixture is simulated to investigate how the internal flow analysis of DAF systems changes when different turbulence models are used. The internal micro-bubble distribution, velocity, and computation time are compared between several turbulence models for a given DAF geometry and condition. As a result, it is observed that the standard κ-ε model, which has been frequently used in previous research, predicts somewhat different behavior than the other turbulence models

  9. Non-linear modelling to describe lactation curve in Gir crossbred cows

    Directory of Open Access Journals (Sweden)

    Yogesh C. Bangar

    2017-02-01

    Full Text Available Abstract Background The modelling of the lactation curve provides guidelines for formulating farm managerial practices in dairy cows. The aim of the present study was to determine the suitable non-linear model which most accurately fitted the lactation curves of five lactations in 134 Gir crossbred cows reared at the Research-cum-Development Project (RCDP) on Cattle farm, MPKV (Maharashtra). Four models, viz. the gamma-type function, quadratic model, mixed log function and Wilmink model, were fitted to each lactation separately and then compared on the basis of goodness-of-fit measures, viz. adjusted R2, root mean square error (RMSE), Akaike's Information Criterion (AIC) and Bayesian Information Criterion (BIC). Results In general, the highest milk yield was observed in the fourth lactation, whereas it was lowest in the first lactation. Among the models investigated, the mixed log function and the gamma-type function provided the best fit of the lactation curve for the first and the remaining lactations, respectively. The quadratic model gave the poorest fit to the lactation curve in almost all lactations. Peak yield was highest in the fourth lactation and lowest in the first. Further, the first lactation showed the highest persistency but a relatively longer time to reach peak yield than the other lactations. Conclusion Lactation curve modelling using the gamma-type function may be helpful in setting management strategies at the farm level; however, models must be optimized regularly before implementation to enhance productivity in Gir crossbred cows.
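
    Fitting a gamma-type (Wood) curve and scoring it with the paper's criteria is straightforward with scipy. A sketch on synthetic test-day yields (the parameter values are hypothetical, not the Gir crossbred estimates):

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood's gamma-type lactation curve: y = a * t**b * exp(-c * t)."""
    return a * t**b * np.exp(-c * t)

# Hypothetical test-day yields (kg) over a 305-day lactation.
rng = np.random.default_rng(2)
t = np.arange(5, 306, 10.0)
y = wood(t, 12.0, 0.25, 0.004) + rng.normal(0, 0.8, size=t.size)

popt, _ = curve_fit(wood, t, y, p0=[10.0, 0.2, 0.003])
resid = y - wood(t, *popt)
n, k = len(y), len(popt)
rss = np.sum(resid**2)
rmse = np.sqrt(rss / n)
r2_adj = 1 - (rss / (n - k)) / np.var(y, ddof=1)
aic = n * np.log(rss / n) + 2 * k
bic = n * np.log(rss / n) + k * np.log(n)
print(f"a,b,c={popt.round(3)} RMSE={rmse:.2f} adjR2={r2_adj:.3f} AIC={aic:.1f} BIC={bic:.1f}")

# For the gamma-type curve, time to peak yield is b/c days (from dy/dt = 0).
print("time to peak (days):", popt[1] / popt[2])
```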

  10. Bayesian Poisson hierarchical models for crash data analysis: Investigating the impact of model choice on site-specific predictions.

    Science.gov (United States)

    Khazraee, S Hadi; Johnson, Valen; Lord, Dominique

    2018-08-01

    The Poisson-gamma (PG) and Poisson-lognormal (PLN) regression models are among the most popular means for motor vehicle crash data analysis. Both models belong to the Poisson-hierarchical family of models. While numerous studies have compared the overall performance of alternative Bayesian Poisson-hierarchical models, little research has addressed the impact of model choice on the expected crash frequency prediction at individual sites. This paper sought to examine whether there are any trends among candidate models' predictions, e.g., whether an alternative model's prediction for sites with certain conditions tends to be higher (or lower) than that from another model. In addition to the PG and PLN models, this research formulated a new member of the Poisson-hierarchical family of models: the Poisson-inverse gamma (PIGam). Three field datasets (from Texas, Michigan and Indiana) covering a wide range of over-dispersion characteristics were selected for analysis. This study demonstrated that the model choice can be critical when the calibrated models are used for prediction at new sites, especially when the data are highly over-dispersed. For all three datasets, the PIGam model would predict higher expected crash frequencies than would the PLN and PG models, in order, indicating a clear link between the models' predictions and the shape of their mixing distributions (i.e., gamma, lognormal, and inverse gamma, respectively). The thicker tail of the PIGam and PLN models (in order) may provide an advantage when the data are highly over-dispersed. The analysis results also illustrated a major deficiency of the Deviance Information Criterion (DIC) in comparing the goodness-of-fit of hierarchical models; models with drastically different sets of coefficients (and thus predictions for new sites) may yield similar DIC values, because the DIC only accounts for the parameters in the lowest (observation) level of the hierarchy and ignores the higher levels (regression coefficients

  11. Comparative study between single core model and detail core model of CFD modelling on reactor core cooling behaviour

    Science.gov (United States)

    Darmawan, R.

    2018-01-01

    The nuclear power industry has been facing uncertainties since the unfortunate accident at the Fukushima Daiichi Nuclear Power Plant. The issue of nuclear power plant safety has become a major hindrance in the planning of nuclear power programmes in new-build countries. Thus, understanding the behaviour of the reactor system is very important to ensure the continuous development and improvement of reactor safety. Throughout the development of nuclear reactor technology, investigation and analysis of reactor safety have gone through several phases. In the early days, analytical and experimental methods were employed. For the last four decades, 1D system-level codes have been widely used. The continuous development of nuclear reactor technology has brought about more complex systems and processes of nuclear reactor operation. More detailed, higher-dimensional simulation codes are needed to assess these new reactors. Recently, 2D and 3D codes such as CFD are being explored. This paper discusses a comparative study of two different approaches to CFD modelling of reactor core cooling behaviour.

  12. Daily reservoir inflow forecasting combining QPF into ANNs model

    Science.gov (United States)

    Zhang, Jun; Cheng, Chun-Tian; Liao, Sheng-Li; Wu, Xin-Yu; Shen, Jian-Jian

    2009-01-01

    Daily reservoir inflow predictions with lead times of several days are essential to the operational planning and scheduling of hydroelectric power systems. The demand for quantitative precipitation forecasting (QPF) is increasing in hydropower operation with the dramatic advances in numerical weather prediction (NWP) models. This paper presents a simple and effective algorithm for daily reservoir inflow prediction which uses the observed precipitation and the forecasted precipitation from QPF as predictors, and the discharges in the following 1 to 6 days as prediction targets, for multilayer perceptron artificial neural network (MLP-ANN) modelling. An improved error back-propagation algorithm with a self-adaptive learning rate and a self-adaptive momentum coefficient is used to make the supervised training procedure more efficient in both time saving and search optimization. Several commonly used error measures are employed to evaluate the performance of the proposed model, and the results, compared with those of an ARIMA model, show that the proposed model is capable of satisfactory forecasting not only in goodness of fit but also in generalization. Furthermore, the presented algorithm is integrated into a practical software system which has served for daily inflow predictions with lead times varying from 1 to 6 days for more than twenty reservoirs operated by the Fujian Province Grid Company, China.
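
    A rough stand-in for the paper's setup can be built with scikit-learn: observed precipitation, QPF and current discharge as predictors, next-day inflow as target. The library's 'adaptive' learning rate and momentum options loosely play the role of the paper's self-adaptive back-propagation; all data below are synthetic:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: recent observed precipitation, the QPF for the
# coming day, and current discharge; the target is next-day inflow.
rng = np.random.default_rng(4)
n = 800
obs_p = rng.gamma(2.0, 5.0, n)
qpf = obs_p * 0.8 + rng.normal(0, 2.0, n)          # imperfect forecast of rain
q_now = rng.gamma(3.0, 50.0, n)
inflow_next = 0.6 * q_now + 8.0 * obs_p + 5.0 * qpf + rng.normal(0, 20.0, n)

X = np.column_stack([obs_p, qpf, q_now])
X_tr, X_te, y_tr, y_te = train_test_split(X, inflow_next, random_state=0)

# 'adaptive' learning rate is sklearn's analogue of a self-adaptive
# learning-rate back-propagation; momentum stands in for the adaptive
# momentum coefficient described in the abstract.
mlp = MLPRegressor(hidden_layer_sizes=(10,), solver="sgd",
                   learning_rate="adaptive", momentum=0.9,
                   max_iter=5000, random_state=0).fit(X_tr, y_tr)
print("test R2:", round(mlp.score(X_te, y_te), 3))
```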

  13. Evaluating performances of simplified physically based landslide susceptibility models.

    Science.gov (United States)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

    Rainfall-induced shallow landslides cause significant damage involving loss of life and property. Prediction of locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually, two main approaches are used to accomplish this task: statistical or physically based modelling. This paper presents a package of GIS-based models for landslide susceptibility analysis. It was integrated in the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit (GOF) indices by comparing model results and measured data pixel by pixel. Moreover, the package's integration in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and the robustness of models and model parameters, according to a procedure that includes: i) model parameter estimation by optimizing each of the GOF indices separately, ii) model evaluation in the ROC plane using each of the optimal parameter sets, and iii) GOF robustness evaluation by assessing sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and the Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modelling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk
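
    Pixel-by-pixel verification of a susceptibility map against an inventory reduces to confusion-matrix bookkeeping plus ROC analysis. A generic sketch (the paper's full set of eight GOF indices, including its Average Index, is not spelled out in the abstract, so only a few standard indices are shown, on hypothetical arrays):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def gof_indices(pred_prob, observed, threshold=0.5):
    """Pixel-by-pixel verification of a susceptibility map.

    observed: 1 where a landslide was mapped, 0 elsewhere.
    pred_prob: modelled susceptibility in [0, 1].
    Returns a few confusion-matrix indices plus the area under the ROC curve.
    """
    pred = pred_prob >= threshold
    tp = np.sum(pred & (observed == 1))
    fp = np.sum(pred & (observed == 0))
    fn = np.sum(~pred & (observed == 1))
    tn = np.sum(~pred & (observed == 0))
    return {
        "accuracy": (tp + tn) / observed.size,
        "true_pos_rate": tp / (tp + fn),
        "false_pos_rate": fp / (fp + tn),
        "auc": roc_auc_score(observed, pred_prob),
    }

# Hypothetical inventory and susceptibility map flattened to 1D arrays.
rng = np.random.default_rng(5)
obs = (rng.random(10_000) < 0.1).astype(int)
prob = np.clip(0.6 * obs + rng.normal(0.3, 0.2, 10_000), 0, 1)
print(gof_indices(prob, obs))
```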

  14. DIDEM - An integrated model for comparative health damage costs calculation of air pollution

    Science.gov (United States)

    Ravina, Marco; Panepinto, Deborah; Zanetti, Maria Chiara

    2018-01-01

    Air pollution represents a continuous hazard to human health. Administrations, companies and the population need efficient indicators of the possible effects of a change in decision, strategy or habit. The monetary quantification of the health effects of air pollution through the definition of external costs is increasingly recognized as a useful indicator to support decisions and information at all levels. The development of modelling tools for the calculation of external costs can support analysts in producing consistent and comparable assessments. In this paper, the DIATI Dispersion and Externalities Model (DIDEM) is presented. The DIDEM model calculates the delta external costs of air pollution by comparing two alternative emission scenarios. This tool integrates CALPUFF's advanced dispersion modelling with the latest WHO recommendations on concentration-response functions. The model is based on the impact pathway method. It was designed to work with a fine spatial resolution and a local or national geographic scope. The modular structure allows users to input their own data sets. The DIDEM model was tested on a real case study, represented by a comparative analysis of the district heating system in Turin, Italy. Additional advantages and drawbacks of the tool are discussed in the paper. A comparison with other existing models worldwide is reported.

  15. Comparative analysis of coupled creep-damage model implementations and application

    International Nuclear Information System (INIS)

    Bhandari, S.; Feral, X.; Bergheau, J.M.; Mottet, G.; Dupas, P.; Nicolas, L.

    1998-01-01

    Creep rupture of a reactor pressure vessel in a severe accident occurs after complex load and temperature histories, leading to interactions between creep deformation, stress relaxation, material damage and plastic instability. The concepts of continuum damage introduced by Kachanov and Rabotnov allow the formulation of models coupling elasto-visco-plasticity and damage. However, the integration of such models in a finite element code creates some difficulties related to the strong non-linearity of the constitutive equations. It was feared that different methods of implementing such a model might lead to different results which, consequently, might limit the application and usefulness of the model. The Commissariat a l'Energie Atomique (CEA), Electricite de France (EDF) and Framasoft (FRA) have worked out numerical solutions to implement such a model in the CASTEM 2000, ASTER and SYSTUS codes, respectively. A ''benchmark'' was set up, chosen on the basis of a cylinder studied in the programme ''RUPTHER''. The aim of this paper is not to enter into the numerical details of the implementation of the model, but to present the results of the comparative study made using the three codes mentioned above on a case of engineering interest. The results of the coupled model are also compared to an uncoupled model to evaluate the differences one can obtain between a simple uncoupled model and a more sophisticated coupled model. The main conclusion drawn from this study is that the different numerical implementations used for the coupled damage-visco-plasticity model give quite consistent results. The numerical difficulties inherent in the integration of the strongly non-linear constitutive equations have been resolved using Runge-Kutta or mid-point rules. The usefulness of the coupled model comes from the fact that the uncoupled model leads to overly conservative results, at least in the example treated, and in particular for the uncoupled analysis under the hypothesis of the small

  16. Comparative Study of Fatigue Damage Models Using Different Number of Classes Combined with the Rainflow Method

    Directory of Open Access Journals (Sweden)

    S. Zengah

    2013-06-01

    Full Text Available Fatigue damage increases with applied load cycles in a cumulative manner. Fatigue damage models play a key role in the life prediction of components and structures subjected to random loading. The aim of this paper is to examine the performance of the previously proposed and validated "Damaged Stress Model" against other fatigue models under random loading, before and after reconstruction of the load histories. To achieve this objective, several linear and nonlinear models proposed for fatigue life estimation are considered, and a batch of specimens made of 6082-T6 aluminium alloy is subjected to random loading. The damage was cumulated by Miner's rule, the Damaged Stress Model (DSM), the Henry model and the Unified Theory (UT), and random cycles were counted with a rainflow algorithm. Experimental data on high-cycle fatigue under complex loading histories with different mean and amplitude stress values are analysed for life calculation, and model predictions are compared.
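
    The counting-plus-accumulation loop (rainflow counting followed by Miner's rule against an S-N curve) can be sketched as below, assuming the third-party rainflow package is available and using an illustrative Basquin-type S-N curve (the constants are placeholders, not 6082-T6 data):

```python
import numpy as np
import rainflow  # third-party package (pip install rainflow), assumed available

# Hypothetical random stress history (MPa).
rng = np.random.default_rng(6)
history = rng.normal(0.0, 60.0, size=5000)

# Basquin-type S-N curve: N(S) = (S_f / S) ** k.
# S_f and k are illustrative placeholders, not material data for 6082-T6.
S_f, k = 500.0, 6.0

def cycles_to_failure(stress_amplitude):
    return (S_f / stress_amplitude) ** k

# Miner's rule: damage is the sum of applied over allowable cycles.
damage = 0.0
for stress_range, count in rainflow.count_cycles(history):
    amplitude = stress_range / 2.0   # rainflow returns ranges; amplitude = range / 2
    damage += count / cycles_to_failure(amplitude)

print("cumulative Miner damage:", round(damage, 4))
```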

  17. A comparative study of manhole hydraulics using stereoscopic PIV and different RANS models.

    Science.gov (United States)

    Beg, Md Nazmul Azim; Carvalho, Rita F; Tait, Simon; Brevis, Wernher; Rubinato, Matteo; Schellart, Alma; Leandro, Jorge

    2017-04-01

    Flows in manholes are complex and may include swirling and recirculating flow with significant turbulence and vorticity. However, how these complex 3D flow patterns could generate different energy losses and so affect flow quantity in the wider sewer network is unknown. In this work, 2D3C stereoscopic Particle Image Velocimetry measurements are made in a surcharged scaled circular manhole. A computational fluid dynamics (CFD) model in OpenFOAM® with four different Reynolds-averaged Navier-Stokes (RANS) turbulence models is constructed using a volume-of-fluid approach to represent flows in this manhole. Velocity profiles and pressure distributions from the models are compared with the experimental data with a view to finding the best modelling approach. Among the four RANS models, the re-normalization group (RNG) k-ε and the k-ω shear stress transport (SST) models were found to give better approximations of velocity and pressure.

  18. Comparative evaluation of kinetic, equilibrium and semi-equilibrium models for biomass gasification

    Energy Technology Data Exchange (ETDEWEB)

    Buragohain, Buljit [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Chakma, Sankar; Kumar, Peeush [Department of Chemical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Mahanta, Pinakeswar [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Department of Mechanical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Moholkar, Vijayanand S. [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Department of Chemical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India)

    2013-07-01

    Modeling of biomass gasification has been an active area of research for the past two decades. In the published literature, three approaches have been adopted for the modeling of this process, viz. thermodynamic equilibrium, semi-equilibrium and kinetic. In this paper, we have attempted to present a comparative assessment of these three types of models for predicting the outcome of the gasification process in a circulating fluidized bed gasifier. Two model biomasses, viz. rice husk and wood particles, have been chosen for analysis, with air as the gasification medium. Although the trends in molar composition, net yield and LHV of the producer gas predicted by the three models are in concurrence, significant quantitative differences are seen in the results. Due to the rather slow kinetics of char gasification and tar oxidation, the carbon conversion achieved in a single pass of biomass through the gasifier, calculated using the kinetic model, is quite low, which adversely affects the yield and LHV of the producer gas. Although the equilibrium and semi-equilibrium models reveal relative insensitivity of the producer gas characteristics towards temperature, the kinetic model shows a significant effect of temperature on the LHV of the gas at low air ratios. The kinetic model also reveals the volume of the gasifier to be an insignificant parameter, as the net yield and LHV of the gas resulting from 6 m and 10 m risers are the same. On the whole, the analysis presented in this paper indicates that thermodynamic models are useful tools for quantitative assessment of the gasification process, while kinetic models provide a physically more realistic picture.

  19. Comparative systems biology between human and animal models based on next-generation sequencing methods.

    Science.gov (United States)

    Zhao, Yu-Qi; Li, Gong-Hua; Huang, Jing-Fei

    2013-04-01

    Animal models provide myriad benefits to both experimental and clinical research. Unfortunately, in many situations, they fall short of expected results or provide contradictory results. In part, this can be the result of traditional molecular biological approaches that are relatively inefficient in elucidating underlying molecular mechanisms. To improve the efficacy of animal models, a technological breakthrough is required. The growing availability and application of high-throughput methods make systematic comparisons between human and animal models easier to perform. In the present study, we introduce the concept of comparative systems biology, which we define as "comparisons of biological systems in different states or species used to achieve an integrated understanding of life forms with all their characteristic complexity of interactions at multiple levels". Furthermore, we discuss the applications of RNA-seq and ChIP-seq technologies to comparative systems biology between human and animal models and assess the potential applications of this approach in future studies.

  20. A cross-comparison of different techniques for modeling macro-level cyclist crashes.

    Science.gov (United States)

    Guo, Yanyong; Osama, Ahmed; Sayed, Tarek

    2018-04-01

    Despite the recognized benefits of cycling as a sustainable mode of transportation, cyclists are considered vulnerable road users and there are concerns about their safety. Therefore, it is essential to investigate the factors affecting cyclist safety. The goal of this study is to evaluate and compare different approaches of modeling macro-level cyclist safety as well as investigating factors that contribute to cyclist crashes using a comprehensive list of covariates. Data from 134 traffic analysis zones (TAZs) in the City of Vancouver were used to develop macro-level crash models (CM) incorporating variables related to actual traffic exposure, socio-economics, land use, built environment, and bike network. Four types of CMs were developed under a full Bayesian framework: Poisson lognormal model (PLN), random intercepts PLN model (RIPLN), random parameters PLN model (RPPLN), and spatial PLN model (SPLN). The SPLN model had the best goodness of fit, and the results highlighted the significant effects of spatial correlation. The models showed that the cyclist crashes were positively associated with bike and vehicle exposure measures, households, commercial area density, and signal density. On the other hand, negative associations were found between cyclist crashes and some bike network indicators such as average edge length, average zonal slope, and off-street bike links. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. A Network-Based Approach to Modeling and Predicting Product Coconsideration Relations

    Directory of Open Access Journals (Sweden)

    Zhenghui Sha

    2018-01-01

    Full Text Available Understanding customer preferences in consideration decisions is critical to choice modeling in engineering design. While the existing literature has shown that exogenous effects (e.g., product and customer attributes) are deciding factors in customers’ consideration decisions, it is not clear how endogenous effects (e.g., the inter-competition among products) would influence such decisions. This paper presents a network-based approach based on Exponential Random Graph Models to study customers’ consideration behaviors in engineering design. Our proposed approach is capable of modeling the endogenous effects among products through various network structures (e.g., stars and triangles) besides the exogenous effects, and of predicting whether two products would be considered together. To assess the proposed model, we compare it against the dyadic network model that only considers exogenous effects. Using buyer survey data from the China auto market in 2013 and 2014, we evaluate the goodness of fit and the predictive power of the two models. The results show that our model has a better fit and predictive accuracy than the dyadic network model. This underscores the importance of endogenous effects on customers’ consideration decisions. The insights gained from this research help explain how endogenous effects interact with exogenous effects in affecting customers’ decision-making.
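
    Full ERGM estimation is usually done with dedicated tooling (e.g., R's statnet), but the dyadic baseline the authors compare against, which treats product pairs as independent given their attributes, amounts to a logistic regression over dyads. A toy sketch with hypothetical product attributes:

```python
import numpy as np
from itertools import combinations
import statsmodels.api as sm

# Hypothetical product attributes: price (arbitrary units) and a fuel-economy score.
rng = np.random.default_rng(8)
n_products = 40
price = rng.uniform(5, 50, n_products)
fuel = rng.uniform(0, 1, n_products)

rows, y = [], []
for i, j in combinations(range(n_products), 2):
    price_gap = abs(price[i] - price[j])
    fuel_sim = 1 - abs(fuel[i] - fuel[j])
    rows.append([1.0, price_gap, fuel_sim])
    # Toy ground truth: similar products are co-considered more often.
    p = 1 / (1 + np.exp(0.15 * price_gap - 2.0 * fuel_sim))
    y.append(rng.random() < p)

X, y = np.asarray(rows), np.asarray(y, dtype=int)
dyadic = sm.Logit(y, X).fit(disp=0)   # assumes link independence, unlike an ERGM
print(dyadic.params)                  # intercept, price-gap, fuel-similarity effects
```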

  2. Modelling road accident blackspots data with the discrete generalized Pareto distribution.

    Science.gov (United States)

    Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María

    2014-10-01

    This study shows how road traffic network events, in particular road accidents at blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities at Spanish blackspots in the period 2003-2007, from the Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and a particular case of the previous model). To that end, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two estimation methods for their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: the Chi-square test and the discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that these probabilistic models can be useful for describing the road accident blackspot datasets analyzed.
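
    A minimal sketch of the maximum likelihood step for the two-parameter discrete Lomax model, assuming the pmf is obtained by discretizing the continuous survival function S(x) = (1 + x/sigma)^(-alpha); the counts are synthetic and the parameterization is illustrative, not the fitted DGT values.

        # ML fit of a discrete Lomax model to synthetic count data.
        import numpy as np
        from scipy.optimize import minimize

        counts = np.random.default_rng(1).poisson(2.0, size=500)  # stand-in counts

        def neg_loglik(log_params, k):
            alpha, sigma = np.exp(log_params)          # log-scale for positivity
            surv = lambda x: (1.0 + x / sigma) ** (-alpha)
            pmf = surv(k) - surv(k + 1)                # P(X = k) by discretization
            return -np.sum(np.log(pmf + 1e-300))

        fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
        alpha_hat, sigma_hat = np.exp(fit.x)
        print(alpha_hat, sigma_hat)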

  3. The Comparative Study of Collaborative Learning and SDLC Model to develop IT Group Projects

    OpenAIRE

    Sorapak Pukdesree

    2017-01-01

    The main objectives of this research were to compare learners' attitudes under an SDLC model combined with collaborative learning versus a typical SDLC model, and to develop electronic courseware as group projects. The research was quasi-experimental. The population of the research comprised students who took the Computer Organization and Architecture course in the academic year 2015. There were 38 students who participated in the research. The participants were divided voluntarily into two g...

  4. EFEM vs. XFEM: a comparative study for modeling strong discontinuity in geomechanics

    OpenAIRE

    Das, Kamal C.; Ausas, Roberto Federico; Segura Segarra, José María; Narang, Ankur; Rodrigues, Eduardo; Carol, Ignacio; Lakshmikantha, Ramasesha Mookanahallipatna; Mello, U.

    2015-01-01

    Modeling of large faults or weak planes with strong and weak discontinuities is of major importance for assessing the geomechanical behaviour of mining/civil tunnels, reservoirs, etc. For modelling fractures in geomechanics, prior art has been limited to interface elements, which suffer from numerical instability and require faults to be aligned with element edges. In this paper, we consider a comparative study on finite elements for capturing strong discontinuities by means of elemental (EFEM)...

  5. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight about the performance of the computers involved when used to solve large-scale scientific problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216...

  6. The separatrix response of diverted TCV plasmas compared to the CREATE-L model

    International Nuclear Information System (INIS)

    Vyas, P.; Lister, J.B.; Villone, F.; Albanese, R.

    1997-11-01

    The response of Ohmic, single-null diverted, non-centred plasmas in TCV to poloidal field coil stimulation has been compared to the linear CREATE-L MHD equilibrium response model. The closed-loop responses of directly measured quantities, reconstructed parameters, and the reconstructed plasma contour were all examined. Provided that the plasma position and shape perturbations were small enough for the linearity assumption to hold, the model-experiment agreement was good. For some stimulations the open-loop vertical position instability growth rate changed significantly, illustrating the limitations of a linear model. A different model, developed under the assumption that the flux at the plasma boundary is frozen, was also compared with experimental results. It proved not to be as reliable as the CREATE-L model for some simulation parameters, showing that the experiments were able to discriminate between different plasma response models. The closed-loop response was also found to be sensitive to changes in the modelled plasma shape. It was not possible to invalidate the CREATE-L model despite the extensive range of responses excited by the experiments. (author)

  7. Statistical modelling for recurrent events: an application to sports injuries.

    Science.gov (United States)

    Ullah, Shahid; Gabbett, Tim J; Finch, Caroline F

    2014-09-01

    Injuries are often recurrent, with subsequent injuries influenced by previous occurrences, and hence the correlation between events needs to be taken into account when analysing such data. This paper compares five different survival models (the Cox proportional hazards (CoxPH) model and the following generalisations to recurrent event data: Andersen-Gill (A-G), frailty, Wei-Lin-Weissfeld total time (WLW-TT) marginal, and Prentice-Williams-Peterson gap time (PWP-GT) conditional models) for the analysis of recurrent injury data. Empirical evaluation and comparison of the different models were performed using model selection criteria and goodness-of-fit statistics. Simulation studies assessed the size and power of each model fit. The modelling approach is demonstrated through direct application to Australian National Rugby League recurrent injury data collected over the 2008 playing season. Of the 35 players analysed, 14 (40%) had more than one injury, and 47 contact injuries were sustained over 29 matches. The CoxPH model provided the poorest fit to the recurrent sports injury data. The fit was improved with the A-G and frailty models, compared to the WLW-TT and PWP-GT models. Despite little difference in fit between the A-G and frailty models, in the interest of fewer statistical assumptions it is recommended that, where relevant, future studies modelling recurrent sports injury data use the frailty model in preference to the CoxPH model or its other generalisations. The paper provides a rationale for future statistical modelling approaches for recurrent sports injury.
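
    Of the compared specifications, the Andersen-Gill model can be sketched with the counting-process (start/stop) Cox fitter in lifelines, as below. The data frame is a synthetic stand-in for the NRL injury data, and frailty models typically need other software (e.g., coxph with a frailty term in R).

        # Andersen-Gill style recurrent-event fit via lifelines (synthetic data).
        import pandas as pd
        from lifelines import CoxTimeVaryingFitter

        df = pd.DataFrame({
            "player":   [1, 1, 2, 3, 3, 3],
            "start":    [0, 5, 0, 0, 3, 9],
            "stop":     [5, 20, 12, 3, 9, 25],
            "event":    [1, 0, 1, 1, 1, 0],          # 1 = injury at `stop`
            "workload": [4.2, 3.1, 5.0, 6.3, 5.8, 4.9],
        })

        ctv = CoxTimeVaryingFitter()
        ctv.fit(df, id_col="player", start_col="start", stop_col="stop",
                event_col="event")
        ctv.print_summary()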

  8. A comparative study to identify a suitable model of ownership for Iran football pro league clubs

    Directory of Open Access Journals (Sweden)

    Saeed Amirnejad

    2018-01-01

    Today, government ownership of professional football clubs is widely regarded as untenable. Most sports clubs around the world are run by the private sector under different models of ownership. In Iran, access to government credit was the main reason professional sport was first developed by government firms and organizations; as a result, sports team ownership falls short of professionalization standards. The present comparative study examined the football club ownership structures of the top leagues and the current condition of Iran football pro league ownership, and then proposes a suitable ownership structure for Iranian football clubs to move beyond government club ownership. Among the initial 120 scientific texts, thirty-two items, including papers, books and reports, were found relevant to this study. We studied the prominence of ownership and several football club models of ownership, focusing on the stock listing model, the private investor model, the supporter trust model and the Japan partnership model; theoretical concepts, empirical studies, main findings, strengths and weaknesses were covered in the analysis procedure. Given the various models of ownership across leagues and their productivity in football clubs, each model has strengths and weaknesses depending on national environmental, economic and social conditions. We therefore cannot present a single definitive model of ownership for Iran football pro league clubs, owing to the different micro-environments of Iranian clubs. Substantial planning is needed to provide a mixed supporter-investor model of ownership for Iranian clubs. Considering the strengths and weaknesses of the models of ownership as well as the micro and macro environments of Iranian football clubs, the German model and the Japan partnership model are offered as suitable candidates for a new model of ownership in Iran pro league clubs. Consequently, more studies are required.

  9. Comparing artificial neural networks, general linear models and support vector machines in building predictive models for small interfering RNAs.

    Directory of Open Access Journals (Sweden)

    Kyle A McQuisten

    2009-10-01

    Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement on which techniques produce maximally predictive models, and yet there is little consensus on methods to compare among predictive models. Also, there are few comparative studies that address the effect that the choice of learning technique, feature set, or cross-validation approach has on finding and discriminating among predictive models. Three learning techniques were used to develop predictive models for effective siRNA sequences: Artificial Neural Networks (ANNs), General Linear Models (GLMs) and Support Vector Machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The two factors of learning technique and feature mapping were evaluated by a complete 3x5 factorial ANOVA. Overall, both learning technique and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy, as well as across different kinds and levels of model cross-validation. The methods presented here provide a robust statistical framework to compare among models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other suitable statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy. Features found to result in maximally predictive models are...
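
    A minimal sketch of this kind of comparison protocol under cross-validation with scikit-learn; the feature matrix and labels are random stand-ins for siRNA sequence features and knockdown classes, and the hyperparameters are illustrative.

        # Cross-validated comparison of ANN, GLM, and SVM learners (toy data).
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 20))
        y = rng.integers(0, 2, size=300)

        models = {
            "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
            "GLM": LogisticRegression(max_iter=1000),
            "SVM": SVC(kernel="rbf"),
        }
        for name, model in models.items():
            scores = cross_val_score(model, X, y, cv=5)
            print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")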

  10. Multilevel modeling and panel data analysis in educational research (Case study: National examination data senior high school in West Java)

    Science.gov (United States)

    Zulvia, Pepi; Kurnia, Anang; Soleh, Agus M.

    2017-03-01

    Individuals and their environments form a hierarchical structure consisting of units grouped at different levels. Hierarchical data structures are analyzed at several levels, with the lowest level nested in the highest level. This is commonly called multilevel modeling. Multilevel modeling is widely used in educational research, for example for the average score of the National Examination (UN). In Indonesia, the UN for high school students is divided into natural science and social science streams. The purpose of this research is to develop multilevel and panel data modeling using a linear mixed model on educational data. The first step is data exploration and identification of relationships between independent and dependent variables by checking correlation coefficients and variance inflation factors (VIF). Furthermore, we use a simple model approach in which the highest level of the hierarchy (level 2) is the regency/city while the school is the lowest level (level 1). The best model was determined by comparing goodness of fit and checking assumptions from residual plots and predictions for each model. We find that, for both natural science and social science, the regression with random effects of regency/city and fixed effects of time, i.e., the multilevel model, performs better than the linear mixed model in explaining the variability of the dependent variable, the average UN score.
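
    A minimal sketch of a two-level random-intercept model in statsmodels, with school-level scores grouped by regency/city and a fixed time effect; all column names and data are hypothetical stand-ins for the UN examination data.

        # Random-intercept (two-level) model via statsmodels MixedLM.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "un_score": rng.normal(70, 10, size=400),
            "year":     rng.integers(0, 4, size=400),    # panel time index
            "regency":  rng.integers(0, 20, size=400),   # level-2 grouping
        })

        model = smf.mixedlm("un_score ~ year", df, groups=df["regency"])
        result = model.fit()
        print(result.summary())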

  11. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    Science.gov (United States)

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was further extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification, including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. The major sources of discrepancy between model-free and model-based analysis were attributed to the effects of dispersion and to the degree to which the two methods can separate macrovascular and tissue signals.
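
    The "model-free" branch can be sketched as a truncated-SVD deconvolution of a tissue curve with an arterial input function, as below. The curves, sampling interval, and truncation threshold are illustrative toys, not the QUASAR implementation.

        # Truncated-SVD deconvolution for "model-free" perfusion estimation.
        import numpy as np

        dt = 0.3                                   # sampling interval (s), assumed
        t = np.arange(40) * dt
        aif = t * np.exp(-t / 1.5)                 # toy arterial input function
        residue = np.exp(-t / 4.0)                 # toy residue function
        tissue = dt * np.convolve(aif, 0.01 * residue)[: len(t)]

        # lower-triangular convolution matrix A[i, j] = dt * aif[i - j]
        idx = np.subtract.outer(np.arange(len(t)), np.arange(len(t)))
        A = dt * np.where(idx >= 0, aif[idx.clip(min=0)], 0.0)

        U, s, Vt = np.linalg.svd(A)
        s_inv = np.where(s > 0.1 * s.max(), 1.0 / s, 0.0)   # truncation threshold
        residue_est = Vt.T @ (s_inv * (U.T @ tissue))
        cbf_est = residue_est.max()                # flow estimate, up to scaling
        print(cbf_est)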

  12. Roadmap for Lean implementation in Indian automotive component manufacturing industry: comparative study of UNIDO Model and ISM Model

    Science.gov (United States)

    Jadhav, J. R.; Mantha, S. S.; Rane, S. B.

    2015-06-01

    The demand for automobiles has increased drastically over the last two and a half decades in India. Many global automobile manufacturers and Tier-1 suppliers have already set up research, development and manufacturing facilities in India. The Indian automotive component industry started implementing Lean practices to fulfill the demand of these customers. The United Nations Industrial Development Organization (UNIDO), in association with the Automotive Component Manufacturers Association of India (ACMA) and the Government of India, has taken a proactive approach to assisting Indian SMEs in various clusters since 1999 to make them globally competitive. The primary objectives of this research are to study the UNIDO-ACMA Model as well as the ISM Model of Lean implementation and to validate the ISM Model by comparing it with the UNIDO-ACMA Model. It also aims at presenting a roadmap for Lean implementation in the Indian automotive component industry. This paper is based on secondary data, which include research articles, web articles, doctoral theses, survey reports and books on the automotive industry in the fields of Lean, JIT and ISM. The ISM Model for Lean practice bundles was developed by the authors in consultation with Lean practitioners. The UNIDO-ACMA Model has six stages whereas the ISM Model has eight phases for Lean implementation. The ISM-based Lean implementation model is validated through its high degree of similarity with the UNIDO-ACMA Model. The major contribution of this paper is the proposed ISM Model for sustainable Lean implementation. The ISM-based Lean implementation framework provides greater insight into the implementation process at a more micro level than the UNIDO-ACMA Model.

  13. Segmental Modification of the Mualem Model by Remolded Loess

    Directory of Open Access Journals (Sweden)

    Le-fan Wang

    2017-01-01

    The measured diffusion coefficient and soil-water characteristic curve (SWCC) of remolded loess were used to modify the Mualem model to increase its accuracy. The results show that the goodness of fit between the test results and both the Mualem model and the variable-parameter-modified Mualem method was not high. A saturation of 0.65 was introduced as the boundary to divide the curve of the measured diffusion coefficient into two segments. When the segmentation method combined with the variable parameter method was used to modify the Mualem model, the fitting correlation coefficient increased to 0.921–0.998. The modified parameters Ko and L corresponding to remolded loess were calculated for different dry densities. Based on the exponential function between Ko and dry density and the linear relation between L and dry density, the segmentally modified Mualem model was established for remolded loess, accounting for variation in dry density. The results of the study can be used to determine the unsaturated infiltration coefficient directly and the SWCC indirectly through the diffusion coefficient.
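
    For reference, a minimal sketch of the standard (van Genuchten closed-form) Mualem conductivity model that such segmental modifications build on; the parameter values are illustrative, not the fitted Ko and L values for the remolded loess.

        # van Genuchten-Mualem relative conductivity (illustrative parameters).
        import numpy as np

        def mualem_k(se, ks=1.0e-5, L=0.5, m=0.6):
            # K(Se) = Ks * Se**L * (1 - (1 - Se**(1/m))**m)**2
            se = np.clip(se, 1e-9, 1.0)
            return ks * se**L * (1.0 - (1.0 - se**(1.0 / m)) ** m) ** 2

        se = np.linspace(0.05, 1.0, 20)   # a segmental fit would switch
        print(mualem_k(se))               # parameters at, e.g., Se = 0.65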

  14. Risk Estimation for Lung Cancer in Libya: Analysis Based on Standardized Morbidity Ratio, Poisson-Gamma Model, BYM Model and Mixture Model

    Science.gov (United States)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-03-01

    Cancer is the most rapidly spreading disease in the world, especially in developing countries, including Libya. Cancer represents a significant burden on patients, families, and their societies. This disease can be controlled if detected early. Disease mapping has therefore recently become an important method in public health research and disease epidemiology. The correct choice of statistical model is a very important step in producing a good disease map. Libya was selected for this work to examine the geographical variation in the incidence of lung cancer. The objective of this paper is to estimate the relative risk for lung cancer. Four statistical models for estimating the relative risk for lung cancer were applied to population censuses of the study area for the period 2006 to 2011: the Standardized Morbidity Ratio (SMR), the most popular statistic used in disease mapping; the Poisson-gamma model, one of the earliest applications of Bayesian methodology; the Besag, York and Mollie (BYM) model; and the Mixture model. As an initial step, this study provides a review of all the proposed models, which we then apply to lung cancer data in Libya. Maps, tables, graphs, and goodness-of-fit (GOF) measures were used to compare and present the preliminary results; such GOF measures are common in statistical modelling for comparing fitted models. The main results show that the Poisson-gamma, BYM, and Mixture models can overcome the problem of the first model (SMR) when there are no observed lung cancer cases in certain districts. The results show that the Mixture model is the most robust and provides better relative risk estimates across the range of models.
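
    A minimal sketch of the two simplest estimators compared above: the SMR (observed/expected) and the closed-form Poisson-gamma posterior mean, which shrinks low-count districts toward the prior and so avoids the SMR's failure when no case is observed. Counts, expectations, and prior values are synthetic stand-ins for the district data.

        # SMR vs. Poisson-gamma relative risk on synthetic district data.
        import numpy as np

        observed = np.array([0, 3, 7, 1, 12])          # cases per district
        expected = np.array([2.1, 2.8, 6.0, 1.5, 9.9]) # expected counts

        smr = observed / expected                      # zero/unstable for sparse districts

        a, b = 2.0, 2.0                                # gamma prior on relative risk
        rr_pg = (observed + a) / (expected + b)        # posterior mean (conjugate update)
        print(smr, rr_pg)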

  15. Comparative study for different statistical models to optimize cutting parameters of CNC end milling machines

    International Nuclear Information System (INIS)

    El-Berry, A.; El-Berry, A.; Al-Bossly, A.

    2010-01-01

    In machining operations, the quality of the surface finish is an important requirement for many workpieces. It is therefore very important to optimize cutting parameters to control the required manufacturing quality. The surface roughness parameter (Ra) of mechanical parts depends on the cutting parameters during the machining process. In the development of predictive models, the cutting parameters of feed, cutting speed, and depth of cut are considered as model variables. For this purpose, this study compares various machining experiments using a CNC vertical machining center with aluminum 6061 workpieces. Multiple regression models are used to predict the surface roughness in the different experiments.
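
    A minimal sketch of such a multiple regression model in statsmodels; the cutting-parameter ranges and roughness measurements below are synthetic stand-ins, not the study's data.

        # Multiple regression of Ra on cutting parameters (synthetic data).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "speed": rng.uniform(100, 300, 30),   # cutting speed (m/min)
            "feed":  rng.uniform(0.05, 0.3, 30),  # feed (mm/rev)
            "doc":   rng.uniform(0.2, 2.0, 30),   # depth of cut (mm)
        })
        df["ra"] = 0.2 + 3.0 * df["feed"] + 0.05 * df["doc"] + rng.normal(0, 0.05, 30)

        fit = smf.ols("ra ~ speed + feed + doc", data=df).fit()
        print(fit.summary())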

  16. The Consensus String Problem and the Complexity of Comparing Hidden Markov Models

    DEFF Research Database (Denmark)

    Lyngsø, Rune Bang; Pedersen, Christian Nørgaard Storm

    2002-01-01

    The basic theory of hidden Markov models was developed and applied to problems in speech recognition in the late 1960s, and has since then been applied to numerous problems, e.g. biological sequence analysis. Most applications of hidden Markov models are based on efficient algorithms for computing...-norms. We discuss the applicability of the technique used for proving the hardness of comparing two hidden Markov models under the L1-norm to other measures of distance between probability distributions. In particular, we show that it cannot be used for proving NP-hardness of determining the Kullback-Leibler distance.
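
    For context, the kind of efficient per-string computation such applications rely on is the forward algorithm, sketched below for a toy two-state model. Since exactly comparing two HMMs under the L1-norm is NP-hard per the result above, comparing likelihoods on sample strings gives only a heuristic comparison; all parameter values are toys.

        # Forward algorithm: P(observation sequence | HMM).
        import numpy as np

        def forward_prob(obs, pi, A, B):
            # pi: initial distribution, A: transitions, B: emissions (rows=states)
            alpha = pi * B[:, obs[0]]
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
            return alpha.sum()

        pi = np.array([0.6, 0.4])
        A = np.array([[0.7, 0.3], [0.4, 0.6]])
        B = np.array([[0.9, 0.1], [0.2, 0.8]])
        print(forward_prob([0, 1, 0], pi, A, B))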

  17. The Fracture Mechanical Markov Chain Fatigue Model Compared with Empirical Data

    DEFF Research Database (Denmark)

    Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard

    The applicability of the FMF-model (Fracture Mechanical Markov Chain Fatigue Model) introduced in Gansted, L., R. Brincker and L. Pilegaard Hansen (1991) is tested by simulations and compared with empirical data. Two sets of data have been used: the Virkler data (aluminium alloy) and data established at the Laboratory of Structural Engineering at Aalborg University, the AUC data (mild steel). The model, which is based on the assumption that the crack propagation process can be described by discrete-space Markov theory, is applicable to constant as well as random loading. It is shown...

  18. Comparative modeling of coevolution in communities of unicellular organisms: adaptability and biodiversity.

    Science.gov (United States)

    Lashin, Sergey A; Suslov, Valentin V; Matushkin, Yuri G

    2010-06-01

    We propose an original program, "Evolutionary constructor", that is capable of computationally efficient modeling of both population-genetic and ecological problems, combining these directions in one model at the required level of detail. We also present results of comparative modeling of stability, adaptability and biodiversity dynamics in populations of unicellular haploid organisms which form symbiotic ecosystems. The advantages and disadvantages of two evolutionary strategies of biota formation--biota formation based on a few generalist taxa and biodiversity-based biota formation--are discussed.

  19. Development of multivariate NTCP models for radiation-induced hypothyroidism: a comparative analysis

    International Nuclear Information System (INIS)

    Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D’Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto

    2012-01-01

    Hypothyroidism is a frequent late side effect of radiation therapy of the cervical region. The purpose of this work is to develop multivariate normal tissue complication probability (NTCP) models for radiation-induced hypothyroidism (RHT) and to compare them with existing NTCP models for RHT. Fifty-three patients treated with sequential chemo-radiotherapy for Hodgkin's lymphoma (HL) were retrospectively reviewed for RHT events. Clinical information along with thyroid gland dose distribution parameters were collected, and their correlation with RHT was analyzed by Spearman's rank correlation coefficient (Rs). A multivariate logistic regression method using resampling (bootstrapping) was applied to select model order and parameters for NTCP modeling. Model performance was evaluated through the area under the receiver operating characteristic curve (AUC). Models were tested against external published data on RHT and compared with other published NTCP models. If the thyroid volume exceeding X Gy is expressed as a percentage (Vx(%)), a two-variable NTCP model including V30(%) and gender was the optimal predictive model for RHT (Rs = 0.615, p < 0.001; AUC = 0.87). Conversely, if the absolute thyroid volume exceeding X Gy (Vx(cc)) was analyzed, an NTCP model based on three variables, V30(cc), thyroid gland volume and gender, was selected as the most predictive model (Rs = 0.630, p < 0.001; AUC = 0.85). The three-variable model performs better when tested on an external cohort characterized by large inter-individual variation in thyroid volume (AUC = 0.914, 95% CI 0.760-0.984). A comparable performance was found between our model and that proposed in the literature based on thyroid gland mean dose and volume (p = 0.264). The absolute volume of thyroid gland exceeding 30 Gy, in combination with thyroid gland volume and gender, provides an NTCP model for RHT with improved prediction capability not only within our patient population but also in an...
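
    A minimal sketch of a two-variable logistic NTCP model of the selected form (V30(%) plus gender), fit by maximum likelihood and scored by AUC; the patient data below are synthetic stand-ins, not the HL cohort.

        # Logistic NTCP model on synthetic dose-volume data.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "v30":    rng.uniform(0, 100, 53),   # % of thyroid volume above 30 Gy
            "female": rng.integers(0, 2, 53),
        })
        logit_true = -4 + 0.04 * df["v30"] + 1.0 * df["female"]
        df["rht"] = (rng.random(53) < 1 / (1 + np.exp(-logit_true))).astype(int)

        fit = smf.logit("rht ~ v30 + female", data=df).fit()
        print(roc_auc_score(df["rht"], fit.predict(df)))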

  20. MODELLING OF FINANCIAL EFFECTIVENESS AND COMPARATIVE ANALYSIS OF PUBLIC-PRIVATE PARTNERSHIP PROJECTS AND PUBLIC PROCUREMENT

    Directory of Open Access Journals (Sweden)

    Kuznetsov Aleksey Alekseevich

    2017-10-01

    The article substantiates the need to extend and develop methodological tools for evaluating the effectiveness of public-private partnership (PPP) projects, both individually and in comparison with other project delivery mechanisms, using traditional public procurement as the example. The author proposes an original technique for modelling the cash flows of private and public partners when realizing projects based on PPP and on public procurement. The model enables us to reveal, promptly and with sufficient accuracy, the comparative advantages of PPP and public procurement, and to assess the financial effectiveness of PPP projects for each partner. The modelling is relatively straightforward and reliable. The model also enables us to evaluate the public partner's expenses for availability, and to find the terms and thresholds for the interest rates of financing attracted by the partners and for risk probabilities that ensure the comparative advantage of a PPP project. The proposed effectiveness criteria are compared with the methodological recommendations provided by the Ministry of Economic Development of the Russian Federation. Subject: public and private organizations, financial institutions and development institutions, and their theoretical and practical techniques for evaluating the effectiveness of public-private partnership (PPP) projects. The complexity of effectiveness evaluation and the lack of a unified and accepted methodology are among the factors that limit the development of PPP in the Russian Federation today. Research objectives: development of methodological methods for assessing the financial efficiency of PPP projects by creating and justifying the application of new principles and methods of modelling, as well as criteria for the effectiveness of PPP projects both individually and in comparison with public procurement. Materials and methods: an open database of ongoing PPP projects in the Russian Federation and abroad was used. The...
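
    A minimal sketch of the kind of discounted cash-flow comparison described: compute the NPV of each delivery route for a partner and compare. All cash flows, horizons, and rates below are illustrative assumptions, not values from the article.

        # NPV comparison of two delivery mechanisms (illustrative numbers).
        def npv(rate, cash_flows):
            return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

        public_procurement = [-100.0] + [12.0] * 15   # upfront build, then benefits
        ppp_availability = [0.0] + [12.0 - 9.0] * 15  # benefits net of availability payments

        rate = 0.06
        print(npv(rate, public_procurement), npv(rate, ppp_availability))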

  1. Comparing and improving proper orthogonal decomposition (POD) to reduce the complexity of groundwater models

    Science.gov (United States)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2017-04-01

    Physically-based modeling is a wide-spread tool in the understanding and management of natural systems. With the high complexity of many such models and the huge number of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem is the use of model reduction methods. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time when the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the...
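
    A minimal sketch of snapshot POD for a linear system: collect state snapshots, take an SVD, keep the leading modes, and project the operator onto the reduced basis. The matrices are random stand-ins for a groundwater model's operator and simulated heads.

        # Snapshot POD: build a reduced basis and project a linear system.
        import numpy as np

        rng = np.random.default_rng(0)
        n, m, r = 500, 40, 10                  # states, snapshots, retained modes
        snapshots = rng.normal(size=(n, m))    # columns: states at several time steps

        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        Ur = U[:, :r]                          # leading singular vectors = POD basis

        A = rng.normal(size=(n, n))            # stand-in full (PDE-discretized) operator
        b = rng.normal(size=n)
        Ar, br = Ur.T @ A @ Ur, Ur.T @ b       # reduced r-by-r system
        x_r = np.linalg.solve(Ar, br)
        x_approx = Ur @ x_r                    # lift back to the full state space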

  2. Anatomical knowledge gain through a clay-modeling exercise compared to live and video observations.

    Science.gov (United States)

    Kooloos, Jan G M; Schepens-Franke, Annelieke N; Bergman, Esther M; Donders, Rogier A R T; Vorstenbosch, Marc A T M

    2014-01-01

    Clay modeling is increasingly used as a teaching method other than dissection. The haptic experience during clay modeling is supposed to correspond to the learning effect of manipulations during exercises in the dissection room involving tissues and organs. We questioned this assumption in two pretest-post-test experiments. In these experiments, the learning effects of clay modeling were compared to either live observations (Experiment I) or video observations (Experiment II) of the clay-modeling exercise. The effects of learning were measured with multiple choice questions, extended matching questions, and recognition of structures on illustrations of cross-sections. Analysis of covariance with pretest scores as the covariate was used to elaborate the results. Experiment I showed a significantly higher post-test score for the observers, whereas Experiment II showed a significantly higher post-test score for the clay modelers. This study shows that (1) students who perform clay-modeling exercises show less gain in anatomical knowledge than students who attentively observe the same exercise being carried out, and (2) performing a clay-modeling exercise yields better anatomical knowledge gain than studying a video of the recorded exercise. The most important learning effect seems to be the engagement in the exercise, focusing attention and stimulating time on task.

  3. The new ICRP respiratory model for radiation protection (ICRP 66) : applications and comparative evaluations

    International Nuclear Information System (INIS)

    Castellani, C.M.; Luciani, A.

    1996-02-01

    The aim of this report is to present the new ICRP Respiratory Tract Model for radiological protection. The model takes anatomical and physiological characteristics into account, giving reference values for children aged 3 months, 1, 5, 10, and 15 years, and for adults; it also takes aerosol and gas characteristics into account. After a general description of the model structure, the deposition, clearance and dosimetric models are presented. To compare the new and previous models (ICRP 30), dose coefficients (committed effective dose per unit intake) for inhalation of radionuclides by workers are calculated for aerosol granulometries with activity median aerodynamic diameters of 1 and 5 μm, the reference values of the respective publications. Dose coefficients and annual limits of intake with respect to the corresponding dose limits (50 and 20 mSv for ICRP 26 and ICRP 60, respectively) are finally calculated for workers and for members of the public in case of dispersion of fission product aerosols.

  4. Using ROC curves to compare neural networks and logistic regression for modeling individual noncatastrophic tree mortality

    Science.gov (United States)

    Susan L. King

    2003-01-01

    The performance of two classifiers, logistic regression and neural networks, is compared for modeling noncatastrophic individual tree mortality for 21 species of trees in West Virginia. The output of each classifier is a continuous number between 0 and 1. A threshold is selected between 0 and 1, and all of the trees below the threshold are classified as...
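
    A minimal sketch of the described comparison: train both classifiers, treat their continuous outputs as scores, sweep thresholds via the ROC curve, and compare AUCs. The tree covariates and mortality labels below are synthetic stand-ins.

        # ROC comparison of logistic regression vs. a small neural network.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 5))                 # hypothetical tree covariates
        y = (X[:, 0] + rng.normal(0, 1, 500) > 1).astype(int)

        for clf in (LogisticRegression(max_iter=1000),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0)):
            p = clf.fit(X, y).predict_proba(X)[:, 1]  # continuous score in [0, 1]
            fpr, tpr, thresholds = roc_curve(y, p)    # all candidate thresholds
            print(type(clf).__name__, roc_auc_score(y, p))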

  5. Overview, comparative assessment and recommendations of forecasting models for short-term water demand prediction

    CSIR Research Space (South Africa)

    Anele, AO

    2017-11-01

    ...short-term water demand (STWD) forecasts. In view of this, an overview of forecasting methods for STWD prediction is presented. Based on that, a comparative assessment of the performance of alternative forecasting models from the different methods is studied. Times...

  6. Comparing mixing-length models of the diabatic wind profile over homogeneous terrain

    DEFF Research Database (Denmark)

    Pena Diaz, Alfredo; Gryning, Sven-Erik; Hasager, Charlotte Bay

    2010-01-01

    Models of the diabatic wind profile over homogeneous terrain for the entire atmospheric boundary layer are developed using mixing-length theory and are compared to wind speed observations up to 300 m at the National Test Station for Wind Turbines at Høvsøre, Denmark. The measurements are performed...
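
    For reference, a minimal sketch of the surface-layer diabatic wind profile u(z) = (u*/k)[ln(z/z0) - psi_m(z/L)] with Businger-Dyer stability corrections, the form that mixing-length models of this kind extend through the boundary layer; all parameter values are illustrative.

        # Diabatic (stability-corrected) logarithmic wind profile.
        import numpy as np

        def wind_profile(z, ustar=0.4, z0=0.02, L=200.0, kappa=0.4):
            zeta = z / L
            if L > 0:                              # stable stratification
                psi = -4.7 * zeta
            else:                                  # unstable (Businger-Dyer)
                x = (1.0 - 16.0 * zeta) ** 0.25
                psi = (2 * np.log((1 + x) / 2) + np.log((1 + x**2) / 2)
                       - 2 * np.arctan(x) + np.pi / 2)
            return ustar / kappa * (np.log(z / z0) - psi)

        print([round(wind_profile(z), 2) for z in (10, 40, 100, 300)])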

  7. Feeding Behavior of Aplysia: A Model System for Comparing Cellular Mechanisms of Classical and Operant Conditioning

    Science.gov (United States)

    Baxter, Douglas A.; Byrne, John H.

    2006-01-01

    Feeding behavior of Aplysia provides an excellent model system for analyzing and comparing mechanisms underlying appetitive classical conditioning and reward operant conditioning. Behavioral protocols have been developed for both forms of associative learning, both of which increase the occurrence of biting following training. Because the neural…

  8. Writ in water, lines in sand: Ancient trade routes, models and comparative evidence

    Directory of Open Access Journals (Sweden)

    Eivind Heldaas Seland

    2015-12-01

    Historians and archaeologists often take connectivity for granted and fail to address the problems of documenting patterns of movement. This article highlights the methodological challenges of reconstructing trade routes in prehistory and early history. The argument is made that these challenges are best met through the application of modern models of connectivity, in combination with the conscious use of comparative approaches.

  9. A Comparative Analysis of Spatial Visualization Ability and Drafting Models for Industrial and Technology Education Students

    Science.gov (United States)

    Katsioloudis, Petros; Jovanovic, Vukica; Jones, Mildred

    2014-01-01

    The main purpose of this study was to determine whether the use of three different types of drafting models had significant positive effects, and to identify whether any differences exist in the promotion of spatial visualization ability for students in Industrial Technology and Technology Education courses. In particular, the study compared the use of…

  10. Comparing Fuzzy Sets and Random Sets to Model the Uncertainty of Fuzzy Shorelines

    NARCIS (Netherlands)

    Dewi, Ratna Sari; Bijker, Wietske; Stein, Alfred

    2017-01-01

    This paper addresses uncertainty modelling of shorelines by comparing fuzzy sets and random sets. Both methods quantify extensional uncertainty of shorelines extracted from remote sensing images. Two datasets were tested: pan-sharpened Pleiades with four bands (Pleiades) and pan-sharpened Pleiades

  11. Numerical modeling of carrier gas flow in atomic layer deposition vacuum reactor: A comparative study of lattice Boltzmann models

    International Nuclear Information System (INIS)

    Pan, Dongqing; Chien Jen, Tien; Li, Tao; Yuan, Chris

    2014-01-01

    This paper characterizes the carrier gas flow in the atomic layer deposition (ALD) vacuum reactor by introducing the Lattice Boltzmann Method (LBM) to ALD simulation through a comparative study of two LBM models. Numerical models of gas flow are constructed and implemented in two-dimensional geometry based on the lattice Bhatnagar–Gross–Krook (LBGK)-D2Q9 model and the two-relaxation-time (TRT) model. Both incompressible and compressible scenarios are simulated, and the two models are compared in the aspects of flow features, stability, and efficiency. Our simulation outcome reveals that, for our specific ALD vacuum reactor, the TRT model generates better steady laminar flow features over the whole domain, with better stability and reliability than the LBGK-D2Q9 model, especially when considering the compressible effects of the gas flow. The LBM-TRT is verified indirectly by comparing the numerical results with conventional continuum-based computational fluid dynamics solvers, and it shows very good agreement with these conventional methods. The velocity field of carrier gas flow through the ALD vacuum reactor was finally characterized by the LBM-TRT model. The flow in ALD is in a laminar steady state with velocity concentrated at the corners and around the wafer. The effects of flow fields on precursor distributions, surface adsorption, and surface reactions are discussed in detail. A steady and evenly distributed velocity field contributes to a higher precursor concentration near the wafer, and relatively lower particle velocities help to achieve better surface adsorption and deposition. The ALD reactor geometry needs to be considered carefully if a steady and laminar flow field around the wafer and better surface deposition are desired.
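
    A minimal sketch of a single-relaxation-time (LBGK) D2Q9 stream-and-collide update on a periodic grid, the simpler of the two collision models compared above; the TRT model replaces the single relaxation time with two rates. Grid size and relaxation time are illustrative, and the boundary conditions of a real reactor geometry are omitted.

        # One LBGK-D2Q9 stream-and-collide loop on a periodic box.
        import numpy as np

        nx, ny, tau = 64, 32, 0.8
        c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
        w = np.array([4/9] + [1/9]*4 + [1/36]*4)

        f = np.ones((9, nx, ny)) * w[:, None, None]        # start at rest

        def equilibrium(rho, ux, uy):
            cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
            usq = ux**2 + uy**2
            return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

        for _ in range(100):
            rho = f.sum(axis=0)
            ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
            uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
            f += (equilibrium(rho, ux, uy) - f) / tau      # BGK collision
            for i in range(9):                             # periodic streaming
                f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)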

  13. Comparative analysis of the planar capacitor and IDT piezoelectric thin-film micro-actuator models

    International Nuclear Information System (INIS)

    Myers, Oliver J; Anjanappa, M; Freidhoff, Carl B

    2011-01-01

    A comparison of the analyses of similarly developed microactuators is presented. Accurate modeling and simulation techniques are vital for piezoelectrically actuated microactuators. By coupling analytical and numerical modeling techniques with variational design parameters, accurate performance predictions can be realized. Axisymmetric two-dimensional and three-dimensional static deflection and harmonic models of a planar capacitor actuator are presented. Planar capacitor samples were modeled as unimorph diaphragms with sandwiched piezoelectric material. The harmonic frequencies were calculated numerically and compared well to predicted values and deformations. The finite element modeling reflects the impact of the d31 piezoelectric constant. Two-dimensional axisymmetric models of circularly interdigitated piezoelectric membranes are also presented. The models include the piezoelectric material and properties, the membrane materials and properties, and various design considerations of the model. These models also include the electro-mechanical coupling for piezoelectric actuation and highlight a novel approach that takes advantage of the higher d33 piezoelectric coupling coefficient. Performance is evaluated for varying parameters such as electrode pitch, electrode width, and piezoelectric material thickness. The models also showed that several of the design parameters were naturally coupled. The static numerical models correlate well with the maximum static deflection of the experimental devices. Finally, this paper deals with the development of numerical harmonic models of piezoelectrically actuated planar capacitor and interdigitated diaphragms. The models were able to closely predict the first two harmonics, conservatively predict the third through sixth harmonics, and predict the estimated values of center deflection using plate theory. Harmonic frequency and deflection simulations need further correlation by conducting extensive iterative...

  14. A comparative empirical analysis of statistical models for evaluating highway segment crash frequency

    Directory of Open Access Journals (Sweden)

    Bismark R.D.K. Agbelie

    2016-08-01

    The present study conducted an empirical highway segment crash frequency analysis on the basis of fixed-parameters negative binomial and random-parameters negative binomial models. Using four years of data from a total of 158 highway segments, with a total of 11,168 crashes, the results from both models are presented, discussed, and compared. About 58% of the selected variables produced normally distributed parameters across highway segments, while the remainder produced fixed parameters. The presence of a noise barrier along a highway segment would increase mean annual crash frequency by 0.492 for 88.21% of the highway segments, and would decrease crash frequency for the remaining 11.79%. In addition, the number of vertical curves per mile along a segment would increase mean annual crash frequency by 0.006 for 84.13% of the highway segments, and would decrease crash frequency for the remaining 15.87%. Thus, constraining the parameters to be fixed across all highway segments would lead to inaccurate conclusions. Although the estimated parameters from both models were consistent in direction, the magnitudes were significantly different. Of the two models, the random-parameters negative binomial model was found to be statistically superior in evaluating highway segment crashes compared with the fixed-parameters negative binomial model. On average, the marginal effects from the fixed-parameters negative binomial model were significantly overestimated compared with those from the random-parameters negative binomial model.
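
    A minimal sketch of the fixed-parameters negative binomial baseline as a statsmodels GLM; the segment covariates and counts are synthetic. The random-parameters variant favored above lets coefficients vary across segments and is typically estimated by simulated maximum likelihood rather than a plain GLM.

        # Fixed-parameters negative binomial crash-frequency model (toy data).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        X = sm.add_constant(rng.normal(size=(158, 2)))  # e.g. barrier, curves/mile
        y = rng.poisson(3.0, size=158)                  # stand-in annual crash counts

        nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
        print(nb.summary())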

  15. Comparing a quasi-3D to a full 3D nearshore circulation model: SHORECIRC and ROMS

    Science.gov (United States)

    Haas, Kevin A.; Warner, John C.

    2009-01-01

    Predictions of nearshore and surf zone processes are important for determining coastal circulation, impacts of storms, navigation, and recreational safety. Numerical modeling of these systems facilitates advancements in our understanding of coastal changes and can provide predictive capabilities for resource managers. Many nearshore coastal circulation models exist; however, they are mostly limited to, or typically applied as, depth-integrated models. SHORECIRC is an established surf zone circulation model that is quasi-3D, accounting for the effect of vertical variability in the structure of the currents while maintaining the computational advantage of a 2DH model. Here we compare SHORECIRC to ROMS, a fully 3D ocean circulation model which now includes a three-dimensional formulation for wave-driven flows. We compare the models in three different test applications: (i) spectral waves approaching a plane beach at an oblique angle of incidence; (ii) monochromatic waves driving longshore currents in a laboratory basin; and (iii) monochromatic waves on a barred beach with rip channels in a laboratory basin. The results identify that the models are very similar for the depth-integrated flows and qualitatively consistent for the vertically varying components. The differences are primarily the result of the vertically varying radiation stress utilized by ROMS and the use of long-wave theory for the radiation stress formulation in the vertically varying momentum balance of SHORECIRC. The quasi-3D model is faster; however, the fully 3D model is applicable to a broader range of processes and of temporal and spatial scales.

  16. When Theory Meets Data: Comparing Model Predictions Of Hillslope Sediment Size With Field Measurements.

    Science.gov (United States)

    Mahmoudi, M.; Sklar, L. S.; Leclere, S.; Davis, J. D.; Stine, A.

    2017-12-01

    The size distributions of sediment produced on hillslopes and supplied to river channels influence a wide range of fluvial processes, from bedrock river incision to the creation of aquatic habitats. However, the factors that control hillslope sediment size are poorly understood, limiting our ability to predict sediment size and model the evolution of sediment size distributions across landscapes. Recently, separate field and theoretical investigations have begun to address this knowledge gap. Here we compare the predictions of several emerging modeling approaches for landscapes where high-quality field data are available. Our goals are to explore the sensitivity and applicability of the theoretical models in each field context and, ultimately, to provide a foundation for incorporating hillslope sediment size into models of landscape evolution. The field data include published measurements of hillslope sediment size from the Kohala peninsula on the island of Hawaii and from tributaries to the Feather River in the northern Sierra Nevada mountains of California, and an unpublished dataset from the Inyo Creek catchment of the southern Sierra Nevada. These data are compared to predictions adapted from recently published modeling approaches that include elements of topography, geology, structure, climate, and erosion rate. Predictive models for each site are built in ArcGIS using field-condition datasets: DEM topography (slope, aspect, curvature), bedrock geology (lithology, mineralogy), structure (fault location, fracture density), climate data (mean annual precipitation and temperature), and estimates of erosion rates. Preliminary analysis suggests that the models may be finely tuned to the calibration sites, particularly when field conditions most closely satisfy model assumptions, leading to unrealistic predictions under extrapolation. We suggest a path forward for developing a computationally tractable method for incorporating spatial variation in the production of hillslope...

  17. A Comparative Assessment of Aerodynamic Models for Buffeting and Flutter of Long-Span Bridges

    Directory of Open Access Journals (Sweden)

    Igor Kavrakov

    2017-12-01

    Full Text Available Wind-induced vibrations commonly represent the leading criterion in the design of long-span bridges. The aerodynamic forces in bridge aerodynamics are mainly based on the quasi-steady and linear unsteady theory. This paper aims to investigate different formulations of self-excited and buffeting forces in the time domain by comparing the dynamic response of a multi-span cable-stayed bridge during the critical erection condition. The bridge is selected to represent a typical reference object with a bluff concrete box girder for large river crossings. The models are viewed from a perspective of model complexity, comparing the influence of the aerodynamic properties implied in the aerodynamic models, such as aerodynamic damping and stiffness, fluid memory in the buffeting and self-excited forces, aerodynamic nonlinearity, and aerodynamic coupling on the bridge response. The selected models are studied for a wind-speed range that is typical for the construction stage for two levels of turbulence intensity. Furthermore, a simplified method for the computation of buffeting forces including the aerodynamic admittance is presented, in which rational approximation is avoided. The critical flutter velocities are also compared for the selected models under laminar flow. Keywords: Buffeting, Flutter, Long-span bridges, Bridge aerodynamics, Bridge aeroelasticity, Erection stage

  18. THE STELLAR MASS COMPONENTS OF GALAXIES: COMPARING SEMI-ANALYTICAL MODELS WITH OBSERVATION

    International Nuclear Information System (INIS)

    Liu Lei; Yang Xiaohu; Mo, H. J.; Van den Bosch, Frank C.; Springel, Volker

    2010-01-01

    We compare the stellar masses of central and satellite galaxies predicted by three independent semi-analytical models (SAMs) with observational results obtained from a large galaxy group catalog constructed from the Sloan Digital Sky Survey. In particular, we compare the stellar mass functions of centrals and satellites, the relation between total stellar mass and halo mass, and the conditional stellar mass functions, Φ(M*|Mh), which specify the average number of galaxies of stellar mass M* that reside in a halo of mass Mh. The SAMs only predict the correct stellar masses of central galaxies within a limited mass range, and all models fail to reproduce the sharp decline of stellar mass with decreasing halo mass observed at the low-mass end. In addition, all models over-predict the number of satellite galaxies by roughly a factor of 2. The predicted stellar mass in satellite galaxies can be made to match the data by assuming that a significant fraction of satellite galaxies are tidally stripped and disrupted, giving rise to a population of intra-cluster stars (ICS) in their host halos. However, the amount of ICS thus predicted is too large compared to observation. This suggests that current galaxy formation models still have serious problems in modeling star formation in low-mass halos.

  19. Comparative Analysis of Soft Computing Models in Prediction of Bending Rigidity of Cotton Woven Fabrics

    Science.gov (United States)

    Guruprasad, R.; Behera, B. K.

    2015-10-01

    Quantitative prediction of fabric mechanical properties is an essential requirement for the design engineering of textile and apparel products. In this work, the possibility of predicting the bending rigidity of cotton woven fabrics has been explored with the application of an Artificial Neural Network (ANN) and two hybrid methodologies, namely neuro-genetic modeling and Adaptive Neuro-Fuzzy Inference System (ANFIS) modeling. For this purpose, a set of cotton woven grey fabrics was desized, scoured and relaxed. The fabrics were then conditioned and tested for bending properties. With the database thus created, a neural network model was first developed using back-propagation as the learning algorithm. The second model was developed by applying a hybrid learning strategy, in which a genetic algorithm was first used to optimize the number of neurons and the connection weights of the neural network; the GA-optimized network structure was then further trained using the back-propagation algorithm. In the third model, an ANFIS modeling approach was used to map the input-output data. The prediction performances of the models were compared and a sensitivity analysis is reported. The results show that the predictions of the neuro-genetic and ANFIS models were better than those of the back-propagation neural network model.

  20. Forecasting Long-Term Crude Oil Prices Using a Bayesian Model with Informative Priors

    Directory of Open Access Journals (Sweden)

    Chul-Yong Lee

    2017-01-01

    In the long term, crude oil prices may impact the economic stability and sustainability of many countries, especially those depending on oil imports. This study therefore suggests an alternative model for accurately forecasting oil prices while reflecting structural changes in the oil market, using a Bayesian approach. The prior information is derived from the recent and expected structure of the oil market using a subjective approach, and is then updated with available market data. The model includes as independent variables factors affecting oil prices, such as world oil demand and supply, the financial situation, upstream costs, and geopolitical events. To test the model's forecasting performance, it is compared with other models, including a linear ordinary least squares model and a neural network model. The proposed model outperforms the others on the forecasting performance test, even though the neural network model shows the best results on a goodness-of-fit test. The results show that the crude oil price is estimated to increase to $169.3/Bbl by 2040.
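
    A minimal sketch of a Bayesian linear model with an informative prior, updated in closed form under an assumed known noise variance; the covariates, prior values, and data below are illustrative stand-ins for the oil-market variables listed above.

        # Conjugate Bayesian linear regression with an informative prior.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 3))            # stand-ins: demand, supply, costs
        beta_true = np.array([2.0, -1.0, 0.5])
        y = X @ beta_true + rng.normal(0, 1.0, 120)

        prior_mean = np.array([1.5, -0.5, 0.0])  # informative prior (subjective view)
        prior_cov = np.eye(3) * 0.5
        noise_var = 1.0                          # assumed known for brevity

        prec_post = np.linalg.inv(prior_cov) + X.T @ X / noise_var
        cov_post = np.linalg.inv(prec_post)
        mean_post = cov_post @ (np.linalg.inv(prior_cov) @ prior_mean
                                + X.T @ y / noise_var)
        print(mean_post)                         # posterior coefficient estimates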

  1. Development of a subway operation incident delay model using accelerated failure time approaches.

    Science.gov (United States)

    Weng, Jinxian; Zheng, Yang; Yan, Xuedong; Meng, Qiang

    2014-12-01

    This study aims to develop a subway operational incident delay model using the parametric accelerated failure time (AFT) approach. Six parametric AFT models (log-logistic, lognormal, and Weibull models, each with fixed and with random parameters) are built based on Hong Kong subway operation incident data from 2005 to 2012. In addition, the Weibull model with gamma heterogeneity is also considered for comparing model performance. The goodness-of-fit test results show that the log-logistic AFT model with random parameters is the most suitable for estimating subway incident delay. The results show that longer subway operation incident delays are highly correlated with the following factors: power cable failure, signal cable failure, turnout communication disruption, and crashes involving a casualty. Vehicle failure has the least impact on the increase of subway operation incident delay. Based on these results, several possible measures, such as the use of short-distance wireless communication technology (e.g., WiFi and ZigBee), are suggested to shorten the delays caused by subway operation incidents. Finally, the temporal transferability test results show that the developed log-logistic AFT model with random parameters is stable over time.
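
    A minimal sketch of the fixed-parameters counterpart of the preferred specification, a log-logistic AFT fit in lifelines (random-parameters AFT models need specialized estimation); the incident records and covariate names below are hypothetical.

        # Log-logistic AFT model on synthetic incident-delay data.
        import pandas as pd
        from lifelines import LogLogisticAFTFitter

        df = pd.DataFrame({
            "delay_min":   [12, 45, 8, 90, 30, 22, 60, 15],
            "observed":    [1, 1, 1, 0, 1, 1, 1, 1],   # 0 = censored delay
            "signal_fail": [0, 1, 0, 1, 0, 0, 1, 0],
            "power_fail":  [0, 0, 0, 1, 1, 0, 0, 0],
        })

        aft = LogLogisticAFTFitter()
        aft.fit(df, duration_col="delay_min", event_col="observed")
        aft.print_summary()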

  2. Comparing wall modeled LES and prescribed boundary layer approach in infinite wind farm simulations

    DEFF Research Database (Denmark)

    Sarlak, Hamid; Mikkelsen, Robert; Sørensen, Jens Nørkær

    2015-01-01

    This paper aims at presenting a simple and computationally fast method for simulation of the Atmospheric Boundary Layer (ABL) and comparing the results with the commonly used wall-modelled Large Eddy Simulation (WMLES). The simple method, called Prescribed Mean Shear and Turbulence (PMST) hereafter, is based on imposing body forces over the whole domain to maintain a desired unsteady flow, where the ground is modeled as a slip-free boundary, which in turn obviates the need for grid refinement and/or wall modeling close to the solid walls. Another strength of this method, besides being computationally fast, is that ... can be imposed to study the wake and dynamics of vortices. The methodology is used for simulation of the interactions of an infinitely long wind farm with the neutral ABL. Flow statistics are compared with the WMLES computations in terms of mean velocity as well as higher-order statistical moments. The results ...

  3. Does the Model Matter? Comparing Video Self-Modeling and Video Adult Modeling for Task Acquisition and Maintenance by Adolescents with Autism Spectrum Disorders

    Science.gov (United States)

    Cihak, David F.; Schrader, Linda

    2009-01-01

    The purpose of this study was to compare the effectiveness and efficiency of learning and maintaining vocational chain tasks using video self-modeling and video adult modeling instruction. Four adolescents with autism spectrum disorders were taught vocational and prevocational skills. Although both video modeling conditions were effective for…

  4. Goodness of Fit between Children and Classrooms: Effects of Child Temperament and Preschool Classroom Quality on Achievement Trajectories

    Science.gov (United States)

    Vitiello, Virginia E.; Moas, Olga; Henderson, Heather A.; Greenfield, Daryl B.; Munis, Pelin M.

    2012-01-01

    Research Findings: The purpose of this study was to examine whether child temperament differentially predicted academic school readiness depending on the quality of classroom interactions for 179 Head Start preschoolers. Teachers rated children's temperament as overcontrolled, resilient, or undercontrolled in the fall and reported on children's…

  5. Comparative analysis of Bouc–Wen and Jiles–Atherton models under symmetric excitations

    Energy Technology Data Exchange (ETDEWEB)

    Laudani, Antonino, E-mail: alaudani@uniroma3.it; Fulginei, Francesco Riganti; Salvini, Alessandro

    2014-02-15

    The aim of the present paper is to validate the Bouc–Wen (BW) hysteresis model when it is applied to predict dynamic ferromagnetic loops. Although the Bouc–Wen model has attracted increasing interest in recent years, it is usually adopted for mechanical and structural systems and only rarely for magnetic applications. To address this goal, the Bouc–Wen model is compared with the dynamic Jiles–Atherton model, which, by contrast, was conceived specifically for simulating magnetic hysteresis. The comparative analysis involves saturated and symmetric hysteresis loops in ferromagnetic materials. In addition, in order to identify the Bouc–Wen parameters, a very effective recent heuristic called Metric-Topological and Evolutionary Optimization (MeTEO) has been utilized. It is based on a hybridization of three meta-heuristics: the Flock-of-Starlings Optimization, Particle Swarm Optimization and the Bacterial Chemotaxis Algorithm. Thanks to the specific properties of these heuristics, MeTEO allows an effective identification of such models. Several hysteresis loops have been utilized for the final validation tests, with the aim of investigating whether the BW model can follow the different hysteresis behaviors of both static (quasi-static) and dynamic cases.
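
    For reference, a minimal sketch of the classic Bouc–Wen evolution equation, dz/dt = A*dx/dt - beta*|dx/dt|*|z|^(n-1)*z - gamma*(dx/dt)*|z|^n, integrated under a symmetric sinusoidal excitation; the parameter values are illustrative, not the identified ones from the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0         # hypothetical BW parameters
    x = lambda t: np.sin(2 * np.pi * t)            # symmetric input signal
    dx = lambda t: 2 * np.pi * np.cos(2 * np.pi * t)

    def rhs(t, z):
        # Bouc-Wen hysteretic state equation.
        return [A * dx(t) - beta * abs(dx(t)) * abs(z[0]) ** (n - 1) * z[0]
                - gamma * dx(t) * abs(z[0]) ** n]

    sol = solve_ivp(rhs, (0, 3), [0.0], dense_output=True, max_step=1e-3)
    t = np.linspace(2, 3, 500)                     # one steady-state cycle
    loop = np.column_stack([x(t), sol.sol(t)[0]])  # (input, hysteretic output)
    print(loop[:3])
    ```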

  6. Comparing the impact of time displaced and biased precipitation estimates for online updated urban runoff models.

    Science.gov (United States)

    Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen

    2013-01-01

    When an online runoff model is updated from system measurements, the requirements on the precipitation input change. When rain gauge data are used as precipitation input, there is a displacement between the time the rain hits the gauge and the time it hits the actual catchment, due to the time it takes the rain cell to travel from the gauge to the catchment. Since this time displacement is not present in the system measurements, the data assimilation scheme may already have updated the model to include the impact of a particular rain cell by the time the rain data are forced upon the model, which therefore ends up including the same rain twice in the model run. This paper compares the forecast accuracy of updated models using time-displaced rain input with that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either displaced in time or affected by a bias. The results show that for a 10 minute forecast, time displacements of 5 and 10 minutes compare to biases of 60 and 100%, respectively, independent of the catchment's time of concentration.
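
    The experimental setup lends itself to a compact numerical illustration. The sketch below builds a toy time-area model (rain convolved with a uniform unit hydrograph) and compares a time-shifted rain series with a biased one; all numbers are invented, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    rain = np.clip(rng.normal(0, 1, 300), 0, None)   # synthetic rain series
    uh = np.ones(10) / 10                            # 10-step time-area curve

    runoff_true = np.convolve(rain, uh)[:300]
    runoff_shift = np.convolve(np.roll(rain, 5), uh)[:300]  # 5-step displacement
    runoff_bias = np.convolve(rain * 1.6, uh)[:300]         # +60% bias

    for name, q in [("5-step shift", runoff_shift), ("60% bias", runoff_bias)]:
        rmse = np.sqrt(np.mean((q - runoff_true) ** 2))
        print(name, "RMSE vs true runoff:", round(rmse, 3))
    ```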

  7. Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2007-01-01

    Full Text Available This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium-sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the four sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease of implementation of the four sensitivity methods. Overall, ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yielded more robust sensitivity rankings.
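
    As a flavor of the variance-based approach, here is a hedged sketch of Sobol' indices computed with the SALib package on a stand-in test function (the study applies the method to SAC-SMA/SNOW-17, not to this toy).

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {"num_vars": 3,
               "names": ["x1", "x2", "x3"],
               "bounds": [[0, 1]] * 3}

    X = saltelli.sample(problem, 1024)       # Saltelli sampling scheme
    Y = X[:, 0] + 2 * X[:, 1] * X[:, 2] + np.random.normal(0, 0.01, len(X))

    Si = sobol.analyze(problem, Y)
    print("first-order:", Si["S1"])          # main effects
    print("total-order:", Si["ST"])          # including interactions
    ```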

  8. Case management: a randomized controlled study comparing a neighborhood team and a centralized individual model.

    Science.gov (United States)

    Eggert, G M; Zimmer, J G; Hall, W J; Friedman, B

    1991-10-01

    This randomized controlled study compared two types of case management for skilled nursing level patients living at home: the centralized individual model and the neighborhood team model. The team model differed from the individual model in that team case managers performed client assessments, care planning, some direct services, and reassessments; they also had much smaller caseloads and were assigned a specific catchment area. While patients in both groups incurred very high estimated health services costs, the average annual cost during 1983-85 for team cases was 13.6 percent less than that of individual model cases. While the team cases were 18.3 percent less expensive among "old" patients (patients who entered the study from the existing ACCESS caseload), they were only 2.7 percent less costly among "new" cases. The lower costs were due to reductions in hospital days and home care. Team cases averaged 26 percent fewer hospital days per year and 17 percent fewer home health aide hours. Nursing home use was 48 percent higher for the team group than for the individual model group. Mortality was almost exactly the same for both groups during the first year (about 30 percent), but was lower for team patients during the second year (11 percent as compared to 16 percent). Probable mechanisms for the observed results are discussed.

  9. Comparing and Validating Machine Learning Models for Mycobacterium tuberculosis Drug Discovery.

    Science.gov (United States)

    Lane, Thomas; Russo, Daniel P; Zorn, Kimberley M; Clark, Alex M; Korotcov, Alexandru; Tkachenko, Valery; Reynolds, Robert C; Perryman, Alexander L; Freundlich, Joel S; Ekins, Sean

    2018-04-26

    Tuberculosis is a global health dilemma. In 2016, the WHO reported 10.4 million incident cases and 1.7 million deaths. The need to develop new treatments for those infected with Mycobacterium tuberculosis (Mtb) has led to many large-scale phenotypic screens and many thousands of new active compounds identified in vitro. However, with limited funding, efforts to discover new active molecules against Mtb need to be more efficient. Several computational machine learning approaches have been shown to have good enrichment and hit rates. We have curated small-molecule Mtb data and developed new models with a total of 18,886 molecules with activity cutoffs of 10 μM, 1 μM, and 100 nM. These data sets were used to evaluate different machine learning methods (including deep learning) and metrics and to generate predictions for additional molecules published in 2017. One Mtb model, a combined in vitro and in vivo data Bayesian model at a 100 nM activity cutoff, yielded the following metrics for 5-fold cross validation: accuracy = 0.88, precision = 0.22, recall = 0.91, specificity = 0.88, kappa = 0.31, and MCC = 0.41. We also curated an evaluation set (n = 153 compounds) published in 2017, and when it was used to test our model, it showed comparable statistics (accuracy = 0.83, precision = 0.27, recall = 1.00, specificity = 0.81, kappa = 0.36, and MCC = 0.47). We have also compared these models with additional machine learning algorithms, showing that Bayesian machine learning models constructed with literature Mtb data generated by different laboratories were generally equivalent to or outperformed deep neural networks on external test sets. Finally, we compared our training and test sets to show that they were suitably diverse and different in order to represent useful evaluation sets. Such Mtb machine learning models could help prioritize compounds for testing in vitro and in vivo.
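
    All of the reported statistics derive from the binary confusion matrix; a quick sketch with scikit-learn on placeholder labels shows how each is computed.

    ```python
    from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                                 confusion_matrix, matthews_corrcoef,
                                 precision_score, recall_score)

    y_true = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # placeholder activity labels
    y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]   # placeholder model predictions

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print("accuracy:   ", accuracy_score(y_true, y_pred))
    print("precision:  ", precision_score(y_true, y_pred))
    print("recall:     ", recall_score(y_true, y_pred))
    print("specificity:", tn / (tn + fp))     # no direct sklearn helper
    print("kappa:      ", cohen_kappa_score(y_true, y_pred))
    print("MCC:        ", matthews_corrcoef(y_true, y_pred))
    ```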

  10. Comparing flow-through and static ice cave models for Shoshone Ice Cave

    Directory of Open Access Journals (Sweden)

    Kaj E. Williams

    2015-05-01

    Full Text Available In this paper we suggest a new ice cave type: the “flow-through” ice cave. In a flow-through ice cave, external winds blow into the cave and wet cave walls chill the incoming air to the wet-bulb temperature, thereby achieving extra cooling of the cave air. We have investigated an ice cave in Idaho, located in a lava tube that is reported to have airflow through porous wet end-walls and could therefore be a flow-through cave. We have instrumented the site and collected data for one year. In order to determine the actual ice cave type present at Shoshone, we have constructed numerical models for static and flow-through caves (dynamic is not relevant here). The models are driven with exterior measurements of air temperature, relative humidity and wind speed. The model output is interior air temperature and relative humidity. We then compare the output of both models to the measured interior air temperatures and relative humidity. While both the flow-through and static cave models are capable of preserving ice year-round (a net zero or positive ice mass balance), both models show very different cave air temperature and relative humidity output. We find the empirical data support a hybrid model of the static and flow-through models: permitting a static ice cave to have incoming air chilled to the wet-bulb temperature fits the data best for the Shoshone Ice Cave.
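
    The distinguishing mechanism, cooling of the inflowing air to the wet-bulb temperature, can be illustrated with Stull's (2011) empirical wet-bulb approximation (valid roughly for relative humidities above 5% near standard pressure); this is our illustration, not the paper's model code.

    ```python
    import numpy as np

    def wet_bulb(T, RH):
        """Stull (2011) wet-bulb approximation; T in deg C, RH in percent."""
        return (T * np.arctan(0.151977 * np.sqrt(RH + 8.313659))
                + np.arctan(T + RH) - np.arctan(RH - 1.676331)
                + 0.00391838 * RH ** 1.5 * np.arctan(0.023101 * RH)
                - 4.686035)

    # Air at 20 C and 50% RH entering through wet walls cools toward ~13.7 C.
    print(round(float(wet_bulb(20.0, 50.0)), 1))
    ```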

  11. Comparative study of chemo-electro-mechanical transport models for an electrically stimulated hydrogel

    International Nuclear Information System (INIS)

    Elshaer, S E; Moussa, W A

    2014-01-01

    The main objective of this work is to introduce a new expression for the hydrogel’s hydration for use within the Poisson Nernst–Planck chemo-electro-mechanical (PNP CEM) transport models. This new contribution to the models supports large deformation by considering the higher-order terms in the Green–Lagrangian strain tensor. A detailed discussion of the CEM transport models using the Poisson Nernst–Planck (PNP) and Poisson logarithmic Nernst–Planck (PLNP) equations for chemically and electrically stimulated hydrogels is presented. The assumptions made to simplify both CEM transport models for applied electric fields on the order of 0.833 kV m^-1 and a highly diluted electrolyte solution (97% water) are explained. The PNP CEM model has been verified accurately against experimental and numerical results. In addition, different definitions for normalizing the parameters are used to derive the dimensionless forms of both the PNP and PLNP CEM models. Four models (PNP CEM, PLNP CEM, dimensionless PNP CEM and dimensionless PLNP CEM) were employed on an axially symmetric cylindrical hydrogel problem with an aspect ratio (diameter to thickness) of 175:3. The displacement and osmotic pressure obtained with the four models are compared with respect to the number of finite elements, the simulation duration and the solution rate when using the direct numerical solver.

  12. Comparability of results from pair and classical model formulations for different sexually transmitted infections.

    Directory of Open Access Journals (Sweden)

    Jimmy Boon Som Ong

    Full Text Available The "classical model" for sexually transmitted infections treats partnerships as instantaneous events summarized by partner change rates, while individual-based and pair models explicitly account for time within partnerships and gaps between partnerships. We compared predictions from the classical and pair models over a range of partnership and gap combinations. While the former predicted similar or marginally higher prevalence at the shortest partnership lengths, the latter predicted self-sustaining transmission for gonorrhoea (GC and Chlamydia (CT over much broader partnership and gap combinations. Predictions on the critical level of condom use (C(c required to prevent transmission also differed substantially when using the same parameters. When calibrated to give the same disease prevalence as the pair model by adjusting the infectious duration for GC and CT, and by adjusting transmission probabilities for HIV, the classical model then predicted much higher C(c values for GC and CT, while C(c predictions for HIV were fairly close. In conclusion, the two approaches give different predictions over potentially important combinations of partnership and gap lengths. Assuming that it is more correct to explicitly model partnerships and gaps, then pair or individual-based models may be needed for GC and CT since model calibration does not resolve the differences.

  13. Comparative Analysis of Bulge Deformation between 2D and 3D Finite Element Models

    Directory of Open Access Journals (Sweden)

    Qin Qin

    2014-02-01

    Full Text Available Bulge deformation of the slab is one of the main factors that affect slab quality in continuous casting. This paper describes an investigation into bulge deformation using ABAQUS to model the solidification process. A three-dimensional finite element model of the slab solidification process was first established, because bulge deformation is closely related to the slab temperature distribution. Based on the slab temperature distribution, a three-dimensional thermomechanical coupling model, including the slab, the rollers, and the dynamic contact between them, was then constructed and applied to a case study. The thermomechanical coupling model produces outputs such as the rules of bulge deformation. Moreover, the three-dimensional model has been compared with a two-dimensional model to discuss the differences between the two models in calculating bulge deformation. The results show that the platform zone exists on the wide side of the slab and that bulge deformation is strongly affected by the width-to-thickness ratio. The results also indicate that the difference in bulge deformation between the two modeling approaches is small when the width-to-thickness ratio is larger than six.

  14. Comparative analysis of numerical models of pipe handling equipment used in offshore drilling applications

    Energy Technology Data Exchange (ETDEWEB)

    Pawlus, Witold, E-mail: witold.p.pawlus@ieee.org; Ebbesen, Morten K.; Hansen, Michael R.; Choux, Martin; Hovland, Geir [Department of Engineering Sciences, University of Agder, PO Box 509, N-4898 Grimstad (Norway)

    2016-06-08

    Design of offshore drilling equipment is a task that involves not only analysis of strict machine specifications and safety requirements but also consideration of changeable weather conditions and a harsh environment. These challenges call for a multidisciplinary approach and make the design process complex. Various modeling software products are currently available to aid design engineers in their effort to test and redesign equipment before it is manufactured. However, given the number of available modeling tools and methods, the choice of the proper modeling methodology is not obvious and, in some cases, troublesome. Therefore, we present a comparative analysis of two popular approaches used in modeling and simulation of mechanical systems: multibody and analytical modeling. A gripper arm of an offshore vertical pipe handling machine is selected as a case study, for which both models are created. In contrast to some other works, the current paper verifies both systems by benchmarking their simulation results against each other. Criteria such as modeling effort and accuracy of results are evaluated to assess which modeling strategy is the most suitable given its eventual application.

  16. Comparing ESC and iPSC-Based Models for Human Genetic Disorders.

    Science.gov (United States)

    Halevy, Tomer; Urbach, Achia

    2014-10-24

    Traditionally, human disorders were studied using animal models or somatic cells taken from patients. Such studies enabled the analysis of the molecular mechanisms of numerous disorders, and led to the discovery of new treatments. Yet, these systems are limited or even irrelevant in modeling multiple genetic diseases. The isolation of human embryonic stem cells (ESCs) from diseased blastocysts, the derivation of induced pluripotent stem cells (iPSCs) from patients' somatic cells, and the new technologies for genome editing of pluripotent stem cells have opened a new window of opportunities in the field of disease modeling, and enabled studying diseases that couldn't be modeled in the past. Importantly, despite the high similarity between ESCs and iPSCs, there are several fundamental differences between these cells, which have important implications regarding disease modeling. In this review we compare ESC-based models to iPSC-based models, and highlight the advantages and disadvantages of each system. We further suggest a roadmap for how to choose the optimal strategy to model each specific disorder.

  17. THE PROPAGATION OF UNCERTAINTIES IN STELLAR POPULATION SYNTHESIS MODELING. II. THE CHALLENGE OF COMPARING GALAXY EVOLUTION MODELS TO OBSERVATIONS

    International Nuclear Information System (INIS)

    Conroy, Charlie; Gunn, James E.; White, Martin

    2010-01-01

    Models for the formation and evolution of galaxies readily predict physical properties such as star formation rates, metal-enrichment histories, and, increasingly, gas and dust content of synthetic galaxies. Such predictions are frequently compared to the spectral energy distributions of observed galaxies via the stellar population synthesis (SPS) technique. Substantial uncertainties in SPS exist, and yet their relevance to the task of comparing galaxy evolution models to observations has received little attention. In the present work, we begin to address this issue by investigating the importance of uncertainties in stellar evolution, the initial stellar mass function (IMF), and dust and interstellar medium (ISM) properties on the translation from models to observations. We demonstrate that these uncertainties translate into substantial uncertainties in the ultraviolet, optical, and near-infrared colors of synthetic galaxies. Aspects that carry significant uncertainties include the logarithmic slope of the IMF above 1 M⊙, dust attenuation law, molecular cloud disruption timescale, clumpiness of the ISM, fraction of unobscured starlight, and treatment of advanced stages of stellar evolution including blue stragglers, the horizontal branch, and the thermally pulsating asymptotic giant branch. The interpretation of the resulting uncertainties in the derived colors is highly non-trivial because many of the uncertainties are likely systematic, and possibly correlated with the physical properties of galaxies. We therefore urge caution when comparing models to observations.

  18. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    Science.gov (United States)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. To address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of nine averaging methods: the simple arithmetic mean (SAM), Akaike information criterion averaging (AICA), Bates-Granger averaging (BGA), Bayes information criterion averaging (BICA), Bayesian model averaging (BMA), Granger-Ramanathan averaging variants A, B and C (GRA, GRB and GRC), and averaging by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighting method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averages from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
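
    Granger-Ramanathan averaging amounts to a least-squares regression of the observed flows on the member simulations; the sketch below shows an unconstrained variant on synthetic data (member behavior and numbers are invented).

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    obs = rng.gamma(2.0, 10.0, 1000)                     # "observed" flows
    sims = np.column_stack([obs * f + rng.normal(0, 5, 1000)
                            for f in (0.8, 1.1, 0.95)])  # three model members

    w, *_ = np.linalg.lstsq(sims, obs, rcond=None)       # least-squares weights
    avg = sims @ w
    nse = 1 - np.sum((avg - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    print("weights:", w.round(3), "NSE:", round(nse, 3))
    ```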

  19. Comparing photo modeling methodologies and techniques: the instance of the Great Temple of Abu Simbel

    Directory of Open Access Journals (Sweden)

    Sergio Di Tondo

    2013-10-01

    Full Text Available Fifty years after the salvage of the Abu Simbel Temples, it has been possible to experiment with contemporary photo-modeling tools, starting from the original data of the photogrammetric survey carried out in the 1950s. This prompted a reflection on "image-based" methods and modeling techniques, comparing rigorous 3D digital photogrammetry with the latest Structure From Motion (SFM) systems. The topographic survey data, the original photogrammetric stereo pairs, the point coordinates and their representation as contour lines made it possible to obtain a model of the monument in its configuration before the temples were moved. Since a direct survey was impossible, tourist photographs were used to create SFM models for geometric comparisons.

  20. Comparative study of wall-force models for the simulation of bubbly flows

    Energy Technology Data Exchange (ETDEWEB)

    Rzehak, Roland, E-mail: r.rzehak@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Institute of Fluid Dynamics, POB 510119, D-01314 Dresden (Germany); Krepper, Eckhard, E-mail: E.Krepper@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Institute of Fluid Dynamics, POB 510119, D-01314 Dresden (Germany); Lifante, Conxita, E-mail: Conxita.Lifante@ansys.com [ANSYS Germany GmbH, Staudenfeldweg 12, 83624 Otterfing (Germany)

    2012-12-15

    Highlights: • Comparison of common models for the wall force with an experimental database. • Identification of suitable closure for bubbly flow. • Enables prediction of location and height of wall peak in void fraction profiles. - Abstract: Accurate numerical prediction of void-fraction profiles in bubbly multiphase flow relies on suitable closure models for the momentum exchange between liquid and gas phases. Here we consider forces acting on the bubbles in the vicinity of a wall. A number of different models for this so-called wall force have been proposed in the literature and are implemented in widely used CFD codes. Simulations using a selection of these models are compared with a set of experimental data on bubbly air-water flow in round pipes of different diameter. Based on the results, recommendations on suitable closures are given.

  1. Prediction of paddy drying kinetics: A comparative study between mathematical and artificial neural network modelling

    Directory of Open Access Journals (Sweden)

    Beigi Mohsen

    2017-01-01

    Full Text Available The present study investigated deep-bed drying of rough rice kernels in various thin layers at different drying air temperatures and flow rates. A comparative study was performed between mathematical thin-layer models and artificial neural networks for estimating the drying curves of rough rice. The suitability of nine mathematical models in simulating the drying kinetics was examined, and the Midilli model was determined to be the best approach for describing the drying curves. Different feed-forward back-propagation artificial neural networks were examined to predict the moisture content variations of the grains. The ANN with a 4-18-18-1 topology, a hyperbolic tangent sigmoid transfer function and a Levenberg-Marquardt back-propagation training algorithm provided the best results, with the maximum correlation coefficient and the minimum mean square error values. Furthermore, it was revealed that ANN modeling had better performance in predicting the drying curves, with lower root mean square error values.
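
    The winning Midilli model expresses the moisture ratio as MR(t) = a*exp(-k*t^n) + b*t; here is a hedged sketch of fitting it with SciPy on synthetic drying data (all coefficients are placeholders, not the study's).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def midilli(t, a, k, n, b):
        return a * np.exp(-k * t ** n) + b * t

    t = np.linspace(1, 300, 30)                       # drying time, minutes
    mr = (midilli(t, 1.0, 0.015, 1.1, -1e-4)
          + np.random.default_rng(4).normal(0, 0.01, 30))  # synthetic curve

    p, _ = curve_fit(midilli, t, mr, p0=[1, 0.01, 1, 0])
    print("a, k, n, b =", p.round(4))
    ```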

  2. Metal accumulation in the earthworm Lumbricus rubellus. Model predictions compared to field data

    Science.gov (United States)

    Veltman, K.; Huijbregts, M.A.J.; Vijver, M.G.; Peijnenburg, W.J.G.M.; Hobbelen, P.H.F.; Koolhaas, J.E.; van Gestel, C.A.M.; van Vliet, P.C.J.; Hendriks, A. Jan

    2007-01-01

    The mechanistic bioaccumulation model OMEGA (Optimal Modeling for Ecotoxicological Applications) is used to estimate the accumulation of zinc (Zn), copper (Cu), cadmium (Cd) and lead (Pb) in the earthworm Lumbricus rubellus. Our validation against field accumulation data shows that the model accurately predicts internal cadmium concentrations. In addition, our results show that internal metal concentrations in the earthworm are less than linearly (slope < 1) related to the total concentration in soil, while risk assessment procedures often assume the biota-soil accumulation factor (BSAF) to be constant. Although predicted internal concentrations of all metals are generally within a factor of 5 of the field data, incorporation of regulation in the model is necessary to improve the predictions for essential metals such as zinc and copper. © 2006 Elsevier Ltd. All rights reserved.

  3. Ultracentrifuge separative power modeling with multivariate regression using covariance matrix

    International Nuclear Information System (INIS)

    Migliavacca, Elder

    2004-01-01

    In this work, the least-squares methodology with a covariance matrix is applied to fit a curve to the data and obtain a performance function for the separative power δU of an ultracentrifuge as a function of variables that are experimentally controlled. The experimental data refer to 460 experiments on the ultracentrifugation process for uranium isotope separation. The experimental uncertainties related to these independent variables are considered in the calculation of the experimental separative power values, determining an input covariance matrix for the experimental data. The process variables that significantly influence the δU values are chosen so as to give information on the ultracentrifuge behaviour when submitted to several levels of feed flow rate F, cut θ and product line pressure P_p. After validating the model's goodness of fit, a residual analysis is carried out to verify the assumptions of randomness and independence and, in particular, to check for residual heteroscedasticity with respect to any explanatory variable of the regression model. Surface curves relating the separative power to the control variables F, θ and P_p are constructed to compare the fitted model with the experimental data and, finally, to calculate their optimized values. (author)
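
    The estimation scheme is least squares with a data covariance matrix V (generalized least squares), beta = (X' V^-1 X)^-1 X' V^-1 y; a short sketch on synthetic stand-in data follows (the F, theta and P_p values are invented).

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 60
    X = np.column_stack([np.ones(n),
                         rng.uniform(1, 3, n),      # feed flow rate F
                         rng.uniform(0.3, 0.7, n),  # cut theta
                         rng.uniform(10, 60, n)])   # product line pressure P_p
    V = np.diag(rng.uniform(0.01, 0.05, n))         # experimental covariance
    y = (X @ np.array([0.5, 1.2, -0.8, 0.01])
         + rng.multivariate_normal(np.zeros(n), V))

    Vi = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)  # GLS estimate
    cov_beta = np.linalg.inv(X.T @ Vi @ X)              # parameter covariance
    print(beta.round(3))
    ```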

  4. Comparative Accuracy of Facial Models Fabricated Using Traditional and 3D Imaging Techniques.

    Science.gov (United States)

    Lincoln, Ketu P; Sun, Albert Y T; Prihoda, Thomas J; Sutton, Alan J

    2016-04-01

    The purpose of this investigation was to compare the accuracy of facial models fabricated using facial moulage impression methods with three-dimensional printed (3DP) fabrication methods, using soft tissue images obtained from cone beam computed tomography (CBCT) and 3D stereophotogrammetry (3D-SPG) scans. A reference phantom model was fabricated using a 3D-SPG image of a human control form with ten fiducial markers placed on common anthropometric landmarks. This image was converted into the investigation control phantom model (CPM) using 3DP methods. The CPM was attached to a camera tripod for ease of image capture. Three CBCT and three 3D-SPG images of the CPM were captured. The DICOM and STL files from the three 3dMD and three CBCT scans were imported for 3D printing, and six test models were made. Reversible hydrocolloid and dental stone were used to make three facial moulages of the CPM, and the impressions/casts were poured in type IV gypsum dental stone. A coordinate measuring machine (CMM) was used to measure the distances between each of the ten fiducial markers. Each measurement was made using one point as a static reference to the other nine points. The same measuring procedures were carried out on all specimens. All measurements were compared between specimens and the control. The data were analyzed using ANOVA and Tukey pairwise comparisons of the raters, methods, and fiducial markers. The ANOVA multiple comparisons showed a significant difference among the three methods. The models fabricated using 3D-SPG showed a statistical difference in comparison to the models fabricated using the traditional method of facial moulage and the 3DP models fabricated from CBCT imaging. 3DP models fabricated using 3D-SPG were less accurate than the CPM and the models fabricated using facial moulage and CBCT imaging techniques. © 2015 by the American College of Prosthodontists.

  5. Comparing models of rapidly rotating relativistic stars constructed by two numerical methods

    Science.gov (United States)

    Stergioulas, Nikolaos; Friedman, John L.

    1995-05-01

    We present the first direct comparison of codes based on two different numerical methods for constructing rapidly rotating relativistic stars. A code based on the Komatsu-Eriguchi-Hachisu (KEH) method (Komatsu et al. 1989), written by Stergioulas, is compared to the Butterworth-Ipser code (BI), as modified by Friedman, Ipser, & Parker. We compare models obtained by each method and evaluate the accuracy and efficiency of the two codes. The agreement is surprisingly good, and error bars in the published numbers for maximum frequencies based on BI are dominated not by the code inaccuracy but by the number of models used to approximate a continuous sequence of stars. The BI code is faster per iteration, and it converges more rapidly at low density, while KEH converges more rapidly at high density; KEH also converges in regions where BI does not, allowing one to compute some models unstable against collapse that are inaccessible to the BI code. A relatively large discrepancy recently reported (Eriguchi et al. 1994) for models based on the Friedman-Pandharipande equation of state is found to arise from the use of two different versions of the equation of state. For two representative equations of state, the two-dimensional space of equilibrium configurations is displayed as a surface in a three-dimensional space of angular momentum, mass, and central density. We find, for a given equation of state, that equilibrium models with maximum values of mass, baryon mass, and angular momentum are (generically) either all unstable to collapse or all stable. In the first case, the stable model with maximum angular velocity is also the model with maximum mass, baryon mass, and angular momentum. In the second case, the stable models with maximum values of these quantities are all distinct. Our implementation of the KEH method will be available as a public domain program for interested users.

  6. Comparing convective heat fluxes derived from thermodynamics to a radiative-convective model and GCMs

    Science.gov (United States)

    Dhara, Chirag; Renner, Maik; Kleidon, Axel

    2015-04-01

    The convective transport of heat and moisture plays a key role in the climate system, but this transport is typically parameterized in models. Here, we aim at the simplest possible physical representation and treat convective heat fluxes as the result of a heat engine. We combine the well-known Carnot limit of this heat engine with the energy balances of the surface-atmosphere system, which describe how the temperature difference is affected by convective heat transport, yielding a maximum-power limit of convection. This results in a simple analytic expression for convective strength that depends primarily on surface solar absorption. We compare this expression with an idealized grey-atmosphere radiative-convective (RC) model as well as Global Circulation Model (GCM) simulations at the grid scale. We find that our simple expression as well as the RC model can explain much of the geographic variation of the GCM output, resulting in strong linear correlations among the three approaches. The RC model, however, shows a lower bias than our simple expression. We identify the use of the prescribed convective adjustment in RC-like models as the reason for the lower bias. The strength of our model lies in its ability to capture the geographic variation of convective strength with a parameter-free expression. On the other hand, the comparison with the RC model indicates a way to improve the formulation of radiative transfer in our simple approach. We also find that the latent heat fluxes compare very well among the approaches, as does their sensitivity to surface warming. Our comparison suggests that the strength of convection and its sensitivity in the climatic mean can be estimated relatively robustly by rather simple approaches.
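
    The maximum-power argument can be reproduced numerically in a few lines: treat the convective flux J as driving a Carnot-like engine while a linearized surface energy balance sets the surface-atmosphere temperature difference. The linear balance and all parameter values are our illustrative simplifications of the abstract's description.

    ```python
    import numpy as np

    Rs, kr, Ta = 160.0, 2.0, 288.0  # solar absorption (W m-2), linearized
                                    # radiative exchange (W m-2 K-1), air T (K)
    J = np.linspace(0.0, Rs, 1001)  # candidate convective heat fluxes
    dT = (Rs - J) / kr              # energy balance: convection cools surface
    power = J * dT / (Ta + dT)      # Carnot-like power driven by convection

    print("J at maximum power:", J[np.argmax(power)], "W m-2")  # ~Rs/2 here
    ```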

  7. Software Code Smell Prediction Model Using Shannon, Rényi and Tsallis Entropies

    Directory of Open Access Journals (Sweden)

    Aakanshi Gupta

    2018-05-01

    Full Text Available The current era demands high-quality software in a limited time period to achieve new goals and heights. To meet user requirements, source code undergoes frequent modifications, which can generate bad smells in software that deteriorate its quality and reliability. The source code of open-source software is easily accessible by any developer and is thus frequently modified. In this paper, we propose a mathematical model to predict bad smells using the concept of entropy as defined by information theory. The open-source software Apache Abdera is considered for calculating the bad smells. Bad smells are collected using a detection tool from sub-components of the Apache Abdera project, along with different measures of entropy (Shannon, Rényi and Tsallis). By applying non-linear regression techniques, the bad smells that can arise in future versions of the software are predicted based on the observed bad smells and entropy measures. The proposed model has been validated using goodness-of-fit parameters (prediction error, bias, variation, and Root Mean Squared Prediction Error (RMSPE)). The values of the model performance statistics (R², adjusted R², Mean Square Error (MSE) and standard error) also justify the proposed model. We have compared the results of the prediction model with the observed results on real data. The results of the model might be helpful for software development industries and future researchers.
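
    The three entropy measures differ only in how they aggregate the probability mass. A compact sketch for a discrete distribution p (say, the share of code changes falling in each sub-component, a hypothetical reading) is given below.

    ```python
    import numpy as np

    def shannon(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def renyi(p, alpha):                # alpha != 1; -> Shannon as alpha -> 1
        return np.log2(np.sum(p ** alpha)) / (1 - alpha)

    def tsallis(p, q):                  # q != 1; -> Shannon (in nats) as q -> 1
        return (1 - np.sum(p ** q)) / (q - 1)

    p = np.array([0.5, 0.25, 0.125, 0.125])  # hypothetical change distribution
    print(shannon(p), renyi(p, 2), tsallis(p, 2))
    ```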

  8. Interpreting the cosmic far-infrared background anisotropies using a gas regulator model

    Science.gov (United States)

    Wu, Hao-Yi; Doré, Olivier; Teyssier, Romain; Serra, Paolo

    2018-04-01

    Cosmic far-infrared background (CFIRB) is a powerful probe of the history of star formation rate (SFR) and the connection between baryons and dark matter across cosmic time. In this work, we explore to which extent the CFIRB anisotropies can be reproduced by a simple physical framework for galaxy evolution, the gas regulator (bathtub) model. This model is based on continuity equations for gas, stars, and metals, taking into account cosmic gas accretion, star formation, and gas ejection. We model the large-scale galaxy bias and small-scale shot noise self-consistently, and we constrain our model using the CFIRB power spectra measured by Planck. Because of the simplicity of the physical model, the goodness of fit is limited. We compare our model predictions with the observed correlation between CFIRB and gravitational lensing, bolometric infrared luminosity functions, and submillimetre source counts. The strong clustering of CFIRB indicates a large galaxy bias, which corresponds to haloes of mass 10^12.5 M⊙ at z = 2, higher than the mass associated with the peak of the star formation efficiency. We also find that the far-infrared luminosities of haloes above 10^12 M⊙ are higher than the expectation from the SFR observed in ultraviolet and optical surveys.

  9. A MAGNIFIED GLANCE INTO THE DARK SECTOR: PROBING COSMOLOGICAL MODELS WITH STRONG LENSING IN A1689

    International Nuclear Information System (INIS)

    Magaña, Juan; Motta, V.; Cárdenas, Victor H.; Verdugo, T.; Jullo, Eric

    2015-01-01

    In this paper we constrain four alternative models of the late-time cosmic acceleration of the universe: Chevallier–Polarski–Linder (CPL), interacting dark energy (IDE), Ricci holographic dark energy (HDE), and modified polytropic Cardassian (MPC). Strong lensing (SL) images of background galaxies produced by the galaxy cluster Abell 1689 are used to test these models. To perform this analysis we modify the LENSTOOL lens modeling code. The value added by this probe is compared with other complementary probes: Type Ia supernovae (SN Ia), baryon acoustic oscillations (BAO), and the cosmic microwave background (CMB). We found that the CPL constraints obtained from the SL data are consistent with those estimated using the other probes. The IDE constraints are consistent with the complementary bounds only if large errors in the SL measurements are considered. The Ricci HDE and MPC constraints are weak, but they are similar to the BAO, SN Ia, and CMB estimations. We also compute the figure of merit as a tool to quantify the goodness of fit of the data. Our results suggest that the SL method provides statistically significant constraints on the CPL parameters but weak ones for those of the other models. Finally, we show that the use of SL measurements in galaxy clusters is a promising and powerful technique to constrain cosmological models. The advantage of this method is that cosmological parameters are estimated by modeling the SL features for each underlying cosmology. These estimations could be further improved by SL constraints coming from other galaxy clusters.
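
    For orientation, the CPL parametrization referred to above is w(z) = w0 + wa*z/(1+z), which gives a closed-form dark-energy density and Hubble function in a flat universe; the sketch below uses illustrative parameter values, not the paper's best fits.

    ```python
    import numpy as np

    def E(z, Om=0.3, w0=-1.0, wa=0.1):
        """Dimensionless Hubble rate H(z)/H0 for a flat CPL cosmology."""
        rho_de = ((1 - Om) * (1 + z) ** (3 * (1 + w0 + wa))
                  * np.exp(-3 * wa * z / (1 + z)))
        return np.sqrt(Om * (1 + z) ** 3 + rho_de)

    print(E(np.array([0.0, 0.5, 1.0])))
    ```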

  10. A comparative study of the proposed models for the components of the national health information system.

    Science.gov (United States)

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-04-01

    A national health information system plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, using a national health information system, one can improve the quality of the health data, information and knowledge used to support decision making at all levels and in all areas of the health sector. Since full identification of the components of this system, for better planning and management of the factors influencing its performance, seems necessary, in this study different attitudes towards the components of this system are explored comparatively. This is a descriptive, comparative study. The study population includes printed and electronic documents containing components of the national health information system in three parts: input, process and output. In this context, searches for information were conducted using library resources and the internet, and the data analysis is expressed using comparative tables and qualitative data. The findings showed that there are three different perspectives on the components of the national health information system: the Lippeveld, Sauerborn and Bodart model (2000), the Health Metrics Network (HMN) model from the World Health Organization (2008), and Gattini's model (2009). In the input part (resources and structure), all three models outlined above require components of management and leadership; planning and program design; staffing; and software and hardware facilities and equipment. In the "process" section, all three models point to actions ensuring the quality of the health information system, and in the output section, except for the Lippeveld model, the two other models consider information products and the use and distribution of information as components of the national health information system. The results showed that all three models have had a brief discussion about the…

  11. A Frank mixture copula family for modeling higher-order correlations of neural spike counts

    International Nuclear Information System (INIS)

    Onken, Arno; Obermayer, Klaus

    2009-01-01

    In order to evaluate the importance of higher-order correlations in neural spike count codes, flexible statistical models of dependent multivariate spike counts are required. Copula families, parametric multivariate distributions that represent dependencies, can be applied to construct such models. We introduce the Frank mixture family as a new copula family that has separate parameters for all pairwise and higher-order correlations. In contrast to the Farlie-Gumbel-Morgenstern copula family that shares this property, the Frank mixture copula can model strong correlations. We apply spike count models based on the Frank mixture copula to data generated by a network of leaky integrate-and-fire neurons and compare the goodness of fit to distributions based on the Farlie-Gumbel-Morgenstern family. Finally, we evaluate the importance of using proper single neuron spike count distributions on the Shannon information. We find notable deviations in the entropy that increase with decreasing firing rates. Moreover, we find that the Frank mixture family increases the log likelihood of the fit significantly compared to the Farlie-Gumbel-Morgenstern family. This shows that the Frank mixture copula is a useful tool to assess the importance of higher-order correlations in spike count codes.
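
    To make the construction concrete, here is a hedged sketch that samples a plain bivariate Frank copula by conditional inversion and maps it to spike counts through Poisson marginals; the paper's Frank mixture family, with separate higher-order parameters, is not reproduced here.

    ```python
    import numpy as np
    from scipy.stats import poisson

    def frank_sample(theta, size, rng):
        """Conditional-inversion sampler for the bivariate Frank copula."""
        u = rng.uniform(size=size)
        t = rng.uniform(size=size)
        v = -np.log1p(t * (1 - np.exp(-theta))
                      / (t * (np.exp(-theta * u) - 1)
                         - np.exp(-theta * u))) / theta
        return u, v

    rng = np.random.default_rng(6)
    u, v = frank_sample(5.0, 10000, rng)     # theta > 0: positive dependence
    n1 = poisson.ppf(u, mu=3).astype(int)    # neuron-1 spike counts
    n2 = poisson.ppf(v, mu=5).astype(int)    # neuron-2 spike counts
    print("spike count correlation:",
          round(float(np.corrcoef(n1, n2)[0, 1]), 2))
    ```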

  12. The Comparative Study of Collaborative Learning and SDLC Model to develop IT Group Projects

    Directory of Open Access Journals (Sweden)

    Sorapak Pukdesree

    2017-11-01

    Full Text Available The main objectives of this research were to compare the attitudes of learners between applying the SDLC model with collaborative learning and the typical SDLC model, and to develop electronic courseware as group projects. The research was quasi-experimental. The population consisted of students who took the Computer Organization and Architecture course in the academic year 2015. There were 38 students who participated in the research. The participants were divided voluntarily into two groups: an experimental group of 28 students using the SDLC model with collaborative learning, and a control group of 10 students using the typical SDLC model. The research instruments were an attitude questionnaire, a semi-structured interview, and a self-assessment questionnaire. The collected data were analysed by arithmetic mean, standard deviation, and independent-sample t-test. The results of the questionnaire revealed that the attitudes of the learners using collaborative learning with the SDLC model and those using the typical SDLC model differed, with a statistically significant difference between the mean scores of the experimental and control groups at a significance level of 0.05. The results of the interviews revealed that most of the learners shared the opinion that collaborative learning was very useful, with the highest level of attitudes compared with the previous methodology. Learners also left feedback suggesting that collaborative learning should be applied to other courses.

  13. Alfven waves in the auroral ionosphere: A numerical model compared with measurements

    International Nuclear Information System (INIS)

    Knudsen, D.J.; Kelley, M.C.; Vickrey, J.F.

    1992-01-01

    The authors solve a linear numerical model of Alfven waves reflecting from the high-latitude ionosphere, both to better understand the role of the ionosphere in the magnetosphere/ionosphere coupling process and to compare model results with in situ measurements. They use the model to compute the frequency-dependent amplitude and phase relations between the meridional electric and the zonal magnetic fields due to Alfven waves. These relations are compared with measurements taken by an auroral sounding rocket flown in the morningside oval and by the HILAT satellite traversing the oval at local noon. The sounding rocket's trajectory was mostly parallel to the auroral oval, and it measured enhanced fluctuating field energy in regions of electron precipitation. The rocket-measured phase data are in excellent agreement with the Alfven wave model, while the fields measured by HILAT are related by the height-integrated Pedersen conductivity Σ_p, indicating that the measured field fluctuations were due mainly to structured field-aligned current systems. A reason for the relative lack of Alfven wave energy in the HILAT measurements could be the fact that the satellite traveled mostly perpendicular to the oval and therefore quickly traversed narrow regions of electron precipitation and associated wave activity.

  14. A Comparative Study of CFD Models of a Real Wind Turbine in Solar Chimney Power Plants

    Directory of Open Access Journals (Sweden)

    Ehsan Gholamalizadeh

    2017-10-01

    Full Text Available A solar chimney power plant consists of four main parts: a solar collector, a chimney, an energy storage layer, and a wind turbine. So far, several investigations of the performance of solar chimney power plants have been conducted. Among them, different approaches have been applied to model the turbine inside the system. In particular, a real wind turbine coupled to the system was simulated using computational fluid dynamics (CFD) in three investigations. Gholamalizadeh et al. simulated a wind turbine with the same blade profile as the Manzanares SCPP’s turbine (FX W-151-A blade profile), while a CLARK Y blade profile was modelled by Guo et al. and Ming et al. In this study, simulations of the Manzanares prototype were carried out using the CFD model developed by Gholamalizadeh et al. Then, results obtained by modelling different turbine blade profiles at different turbine rotational speeds were compared. The results showed that a turbine with the CLARK Y blade profile significantly overestimates the pressure drop across the Manzanares prototype turbine compared to the FX W-151-A blade profile. In addition, modelling of both blade profiles led to very similar trends in changes in turbine efficiency and power output with respect to rotational speed.

  15. Psychobiological model of temperament and character: Validation and cross-cultural comparisons

    Directory of Open Access Journals (Sweden)

    Džamonja-Ignjatović Tamara

    2005-01-01

    Full Text Available The paper presents research results regarding the psychobiological model of personality by Robert Cloninger. The primary research goal was to test the new TCI-5 inventory and compare our results with US normative data. We also analyzed the factor structure of the model and the reliability of the basic TCI-5 scales and sub-scales. The sample consisted of 473 subjects from the normal population, with an age range of 18-50 years. Results showed significant differences between the Serbian and American samples. Compared to the American sample, Novelty seeking was higher in the Serbian sample, while Persistence, Self-directedness and Cooperativeness were lower. For the most part, the results of the present study confirmed the seven-factor structure of the model, although some sub-scales did not coincide with the basic dimensions as predicted by the theoretical model. Therefore, certain theoretical revisions of the model are required in order to fit the empirical findings. Similarly, a discrepancy between the theoretical and the empirical was also noticed regarding the reliability of the TCI-5 scales, which also needs to be re-examined. The results of the study showed satisfactory reliability for Persistence (.90), Self-directedness (.89) and Harm avoidance (.87), but low reliability for Novelty seeking (.78), Reward dependence (.79) and Self-transcendence (.78).

  16. Financial impact of errors in business forecasting: a comparative study of linear models and neural networks

    Directory of Open Access Journals (Sweden)

    Claudimar Pereira da Veiga

    2012-08-01

    Full Text Available The importance of demand forecasting as a management tool is a well-documented issue. However, it is difficult to measure the costs generated by forecasting errors and to find a model that adequately assimilates the detailed operation of each company. In general, when linear models fail in the forecasting process, more complex nonlinear models are considered. Although some studies comparing traditional models and neural networks have been conducted in the literature, the conclusions are usually contradictory. In this sense, the objective was to compare the accuracy of linear methods and neural networks with the current method used by the company. The results of this analysis also served as input to evaluate the influence of errors in demand forecasting on the financial performance of the company. The study was based on historical data from five groups of food products, from 2004 to 2008. In general, one can state that all models tested presented good results (much better than the current forecasting method used), with a mean absolute percentage error (MAPE) around 10%. The total financial impact for the company was 6.05% of annual sales.
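
    MAPE, the accuracy measure quoted above, is the mean of the absolute errors scaled by the actual values; a minimal sketch on placeholder sales figures:

    ```python
    import numpy as np

    actual = np.array([120.0, 135.0, 128.0, 150.0])    # placeholder demand
    forecast = np.array([110.0, 140.0, 125.0, 160.0])  # placeholder forecast

    mape = np.mean(np.abs((actual - forecast) / actual)) * 100
    print(f"MAPE = {mape:.1f}%")
    ```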

  17. Human X-chromosome inactivation pattern distributions fit a model of genetically influenced choice better than models of completely random choice

    Science.gov (United States)

    Renault, Nisa K E; Pritchett, Sonja M; Howell, Robin E; Greer, Wenda L; Sapienza, Carmen; Ørstavik, Karen Helene; Hamilton, David C

    2013-01-01

    In eutherian mammals, one X-chromosome in every XX somatic cell is transcriptionally silenced through the process of X-chromosome inactivation (XCI). Females are thus functional mosaics, where some cells express genes from the paternal X, and the others from the maternal X. The relative abundance of the two cell populations (X-inactivation pattern, XIP) can have significant medical implications for some females. In mice, the 'choice' of which X to inactivate, maternal or paternal, in each cell of the early embryo is genetically influenced. In humans, the timing of XCI choice and whether choice occurs completely randomly or under a genetic influence is debated. Here, we explore these questions by analysing the distribution of XIPs in large populations of normal females. Models were generated to predict XIP distributions resulting from completely random or genetically influenced choice. Each model describes the discrete primary distribution at the onset of XCI, and the continuous secondary distribution accounting for changes to the XIP as a result of development and ageing. Statistical methods are used to compare models with empirical data from Danish and Utah populations. A rigorous data treatment strategy maximises information content and allows for unbiased use of unphased XIP data. The Anderson–Darling goodness-of-fit statistics and likelihood ratio tests indicate that a model of genetically influenced XCI choice better fits the empirical data than models of completely random choice. PMID:23652377
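
    The comparison logic can be sketched in a few lines: simulate XIPs under a purely random-choice model (binomial choice with p = 0.5 over a hypothetical pool of precursor cells) and test the fit to an observed sample with the k-sample Anderson-Darling test. The pool size and the "observed" data below are invented, and the paper's secondary-distribution modelling is not reproduced.

    ```python
    import numpy as np
    from scipy.stats import anderson_ksamp

    rng = np.random.default_rng(7)
    n_cells = 16                                  # assumed precursor pool size
    model_xip = rng.binomial(n_cells, 0.5, 5000) / n_cells     # random choice
    observed_xip = rng.binomial(n_cells, 0.55, 300) / n_cells  # placeholder data

    res = anderson_ksamp([model_xip, observed_xip])
    print("A-D statistic:", round(res.statistic, 3),
          "approx. p:", res.significance_level)
    ```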

  18. Comparative analysis between Hec-RAS models and IBER in the hydraulic assessment of bridges

    OpenAIRE

    Rincón, Jean; Pérez, María; Delfín, Guillermo; Freitez, Carlos; Martínez, Fabiana

    2017-01-01

    This work aims to perform a comparative analysis between the Hec-RAS and IBER models in the hydraulic evaluation of rivers with structures such as bridges. The case of application was the La Guardia creek, located on the road that connects the cities of Barquisimeto and Quíbor, Venezuela. The first phase of the study consisted of comparing the models from the conceptual point of view and in terms of their handling. The second phase focused on the case study, and the comparison of ...

  19. Cold Nuclear Matter effects on J/psi production at RHIC: comparing shadowing models

    Energy Technology Data Exchange (ETDEWEB)

    Ferreiro, E.G.; /Santiago de Compostela U.; Fleuret, F.; /Ecole Polytechnique; Lansberg, J.P.; /SLAC; Rakotozafindrabe, A.; /SPhN, DAPNIA, Saclay

    2009-06-19

    We present a broad study comparing different shadowing models and their influence on J/ψ production. We have taken into account the possibility of different partonic processes for cc̄-pair production. We note that the effect of shadowing corrections on J/ψ production clearly depends on the partonic process considered. Our results are compared to the available data on dAu collisions at RHIC energies. We try different break-up cross sections for each of the studied shadowing models.

  20. Extra-Tropical Cyclones at Climate Scales: Comparing Models to Observations

    Science.gov (United States)

    Tselioudis, G.; Bauer, M.; Rossow, W.

    2009-04-01

    Climate is often defined as the accumulation of weather, and weather is not the concern of climate models. Justification for this latter sentiment has long been hidden behind coarse model resolutions and blunt validation tools based on climatological maps. The spatial-temporal resolutions of today's climate models and observations are converging onto meteorological scales, however, which means that with the correct tools we can test the largely unproven assumption that climate model weather is correct enough that its accumulation results in a robust climate simulation. Towards this effort we introduce a new tool for extracting detailed cyclone statistics from observations and climate model output. These include the usual cyclone characteristics (centers, tracks), but also adaptive cyclone-centric composites. We have created a novel dataset, the MAP Climatology of Mid-latitude Storminess (MCMS), which provides a detailed 6 hourly assessment of the areas under the influence of mid-latitude cyclones, using a search algorithm that delimits the boundaries of each system from the outer-most closed SLP contour. Using this we then extract composites of cloud, radiation, and precipitation properties from sources such as ISCCP and GPCP to create a large comparative dataset for climate model validation. A demonstration of the potential usefulness of these tools in process-based climate model evaluation studies will be shown.

  1. a Comparative Analysis of Spatiotemporal Data Fusion Models for Landsat and Modis Data

    Science.gov (United States)

    Hazaymeh, K.; Almagbile, A.

    2018-04-01

    In this study, three documented spatiotemporal data fusion models were applied to Landsat-7 and MODIS surface reflectance and NDVI. The algorithms included the spatial and temporal adaptive reflectance fusion model (STARFM), the sparse-representation-based spatiotemporal reflectance fusion model (SPSTFM), and the spatiotemporal image-fusion model (STI-FM). The objectives of this study were to (i) compare the performance of these three fusion models using one Landsat-MODIS spectral reflectance image pair and time-series datasets from the Coleambally irrigation area in Australia, and (ii) quantitatively evaluate the accuracy of the synthetic images generated from each fusion model using statistical measurements. Results showed that the three fusion models predicted the synthetic Landsat-7 image with adequate agreement. The STI-FM produced more accurate reconstructions of both Landsat-7 spectral bands and NDVI. Furthermore, it produced surface reflectance images having the highest correlation with the actual Landsat-7 images. This study indicated that STI-FM would be more suitable for spatiotemporal data fusion applications such as vegetation monitoring, drought monitoring, and evapotranspiration.

  2. A simplified MHD model of capillary Z-Pinch compared with experiments

    Energy Technology Data Exchange (ETDEWEB)

    Shapolov, A.A.; Kiss, M.; Kukhlevsky, S.V. [Institute of Physics, University of Pecs (Hungary)

    2016-11-15

    The most accurate models of the capillary Z-pinches used for excitation of soft X-ray lasers and photolithography XUV sources are currently based on magnetohydrodynamic (MHD) theory. The output of MHD-based models greatly depends on details of the mathematical description, such as initial and boundary conditions, approximations of plasma parameters, etc. Small experimental groups who develop soft X-ray/XUV sources often use the simplest Z-pinch models for analysis of their experimental results, even though these models are inconsistent with the MHD equations. In the present study, keeping only the essential terms in the MHD equations, we obtained a simplified MHD model of a cylindrically symmetric capillary Z-pinch. The model gives accurate results compared to experiments with argon plasmas, and provides simple analysis of the temporal evolution of the main plasma parameters. The results clarify the influence of viscosity, heat flux and approximations of plasma conductivity on the dynamics of capillary Z-pinch plasmas. The model can be useful for researchers, especially experimentalists, who develop soft X-ray/XUV sources. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  3. A comparative study of spherical and flat-Earth geopotential modeling at satellite elevations

    Science.gov (United States)

    Parrott, M. H.; Hinze, W. J.; Braile, L. W.; Vonfrese, R. R. B.

    1985-01-01

    Flat-Earth modeling is a desirable alternative to the complex spherical-Earth modeling process. These methods were compared using 2 1/2 dimensional flat-earth and spherical modeling to compute gravity and scalar magnetic anomalies along profiles perpendicular to the strike of variably dimensioned rectangular prisms at altitudes of 150, 300, and 450 km. Comparison was achieved with percent error computations (spherical-flat/spherical) at critical anomaly points. At the peak gravity anomaly value, errors are less than + or - 5% for all prisms. At 1/2 and 1/10 of the peak, errors are generally less than 10% and 40% respectively, increasing to these values with longer and wider prisms at higher altitudes. For magnetics, the errors at critical anomaly points are less than -10% for all prisms, attaining these magnitudes with longer and wider prisms at higher altitudes. In general, in both gravity and magnetic modeling, errors increase greatly for prisms wider than 500 km, although gravity modeling is more sensitive than magnetic modeling to spherical-Earth effects. Preliminary modeling of both satellite gravity and magnetic anomalies using flat-Earth assumptions is justified considering the errors caused by uncertainties in isolating anomalies.

  4. Generalized outcome-based strategy classification: comparing deterministic and probabilistic choice models.

    Science.gov (United States)

    Hilbig, Benjamin E; Moshagen, Morten

    2014-12-01

    Model comparisons are a vital tool for disentangling which of several strategies a decision maker may have used--that is, which cognitive processes may have governed observable choice behavior. However, previous methodological approaches have been limited to models (i.e., decision strategies) with deterministic choice rules. As such, psychologically plausible choice models--such as evidence-accumulation and connectionist models--that entail probabilistic choice predictions could not be considered appropriately. To overcome this limitation, we propose a generalization of Bröder and Schiffer's (Journal of Behavioral Decision Making, 19, 361-380, 2003) choice-based classification method, relying on (1) parametric order constraints in the multinomial processing tree framework to implement probabilistic models and (2) minimum description length for model comparison. The advantages of the generalized approach are demonstrated through recovery simulations and an experiment. In explaining previous methods and our generalization, we maintain a nontechnical focus--so as to provide a practical guide for comparing both deterministic and probabilistic choice models.

  5. Modelling and Comparative Performance Analysis of a Time-Reversed UWB System

    Directory of Open Access Journals (Sweden)

    Popovski K

    2007-01-01

    Full Text Available The effects of multipath propagation lead to a significant decrease in system performance in most of the proposed ultra-wideband communication systems. A time-reversed system utilises the multipath channel impulse response to decrease receiver complexity, through a prefiltering at the transmitter. This paper discusses the modelling and comparative performance of a UWB system utilising time-reversed communications. System equations are presented, together with a semianalytical formulation on the level of intersymbol interference and multiuser interference. The standardised IEEE 802.15.3a channel model is applied, and the estimated error performance is compared through simulation with the performance of both time-hopped time-reversed and RAKE-based UWB systems.

  6. A comparative study of the tail ion distribution with reduced Fokker-Planck models

    Science.gov (United States)

    McDevitt, C. J.; Tang, Xian-Zhu; Guo, Zehua; Berk, H. L.

    2014-03-01

    A series of reduced models are used to study the fast ion tail in the vicinity of a transition layer between plasmas at disparate temperatures and densities, which is typical of the gas and pusher interface in inertial confinement fusion targets. Emphasis is placed on utilizing progressively more comprehensive models in order to identify the essential physics for computing the fast ion tail at energies comparable to the Gamow peak. The resulting fast ion tail distribution is subsequently used to compute the fusion reactivity as a function of collisionality and temperature. While a significant reduction of the fusion reactivity in the hot spot compared to the nominal Maxwellian case is present, this reduction is found to be partially recovered by an increase of the fusion reactivity in the neighboring cold region.

  7. Comparing different CFD wind turbine modelling approaches with wind tunnel measurements

    International Nuclear Information System (INIS)

    Kalvig, Siri; Hjertager, Bjørn; Manger, Eirik

    2014-01-01

    The performance of a model wind turbine is simulated with three different CFD methods: actuator disk, actuator line and a fully resolved rotor. The simulations are compared with each other and with measurements from a wind tunnel experiment. The actuator disk is the least accurate and most cost-efficient, and the fully resolved rotor is the most accurate and least cost-efficient. The actuator line method is believed to lie in between the two ends of the scale. The fully resolved rotor produces superior wake velocity results compared to the actuator models. On average it also produces better results for the force predictions, although the actuator line method had a slightly better match for the design tip speed. The open source CFD tool box, OpenFOAM, was used for the actuator disk and actuator line calculations, whereas the market leading commercial CFD code, ANSYS/FLUENT, was used for the fully resolved rotor approach

  8. Feedforward Object-Vision Models Only Tolerate Small Image Variations Compared to Human

    Directory of Open Access Journals (Sweden)

    Masoud eGhodrati

    2014-07-01

    Full Text Available Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has constantly been under intense investigation. Computational modelling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well when images with more complex variations of the same object are applied to them. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that only for low-level image variations do the models perform similarly to humans in categorization tasks. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e. briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modelling. We show that this approach is not of significant help in solving the computational crux of object recognition (that is, invariant object recognition) when the identity-preserving image variations become more complex.

  9. Using the Landlab toolkit to evaluate and compare alternative geomorphic and hydrologic model formulations

    Science.gov (United States)

    Tucker, G. E.; Adams, J. M.; Doty, S. G.; Gasparini, N. M.; Hill, M. C.; Hobley, D. E. J.; Hutton, E.; Istanbulluoglu, E.; Nudurupati, S. S.

    2016-12-01

    Developing a better understanding of catchment hydrology and geomorphology ideally involves quantitative hypothesis testing. Often one seeks to identify the simplest mathematical and/or computational model that accounts for the essential dynamics in the system of interest. Development of alternative hypotheses involves testing and comparing alternative formulations, but the process of comparison and evaluation is made challenging by the rigid nature of many computational models, which are often built around a single assumed set of equations. Here we review a software framework for two-dimensional computational modeling that facilitates the creation, testing, and comparison of surface-dynamics models. Landlab is essentially a Python-language software library. Its gridding module allows for easy generation of a structured (raster, hex) or unstructured (Voronoi-Delaunay) mesh, with the capability to attach data arrays to particular types of element. Landlab includes functions that implement common numerical operations, such as gradient calculation and summation of fluxes within grid cells. Landlab also includes a collection of process components, which are encapsulated pieces of software that implement a numerical calculation of a particular process. Examples include downslope flow routing over topography, shallow-water hydrodynamics, stream erosion, and sediment transport on hillslopes. Individual components share a common grid and data arrays, and they can be coupled through the use of a simple Python script. We illustrate Landlab's capabilities with a case study of Holocene landscape development in the northeastern US, in which we seek to identify a collection of model components that can account for the formation of a series of incised canyons that have developed since the Laurentide ice sheet last retreated. We compare sets of model ingredients related to (1) catchment hydrologic response, (2) hillslope evolution, and (3) stream channel and gully incision.
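
    As an illustration of the component-coupling style described above, here is a minimal Landlab script (assuming a recent Landlab release is installed) that links flow routing, stream-power incision, and hillslope diffusion on a raster grid. The component names are real Landlab components, but the parameter values are illustrative only, not those of the Holocene case study.

    ```python
    import numpy as np
    from landlab import RasterModelGrid
    from landlab.components import FlowAccumulator, FastscapeEroder, LinearDiffuser

    grid = RasterModelGrid((50, 50), xy_spacing=100.0)     # 5 km x 5 km mesh
    z = grid.add_zeros("topographic__elevation", at="node")
    z += np.random.rand(z.size)                            # small random initial relief

    fa = FlowAccumulator(grid)                             # downslope flow routing
    sp = FastscapeEroder(grid, K_sp=1e-5)                  # stream-power channel incision
    ld = LinearDiffuser(grid, linear_diffusivity=0.01)     # hillslope sediment transport

    dt = 1000.0                                            # time step in years
    for _ in range(500):                                   # half a million years
        fa.run_one_step()
        sp.run_one_step(dt)
        ld.run_one_step(dt)
    ```

    Swapping one component for another (say, a different hillslope law) while keeping the rest of the script intact is exactly the kind of hypothesis comparison the abstract describes.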

  10. Applied stochastic modelling

    CERN Document Server

    Morgan, Byron JT; Tanner, Martin Abba; Carlin, Bradley P

    2008-01-01

    Introduction and Examples Introduction Examples of data sets Basic Model Fitting Introduction Maximum-likelihood estimation for a geometric model Maximum-likelihood for the beta-geometric model Modelling polyspermy Which model? What is a model for? Mechanistic models Function Optimisation Introduction MATLAB: graphs and finite differences Deterministic search methods Stochastic search methods Accuracy and a hybrid approach Basic Likelihood Tools Introduction Estimating standard errors and correlations Looking at surfaces: profile log-likelihoods Confidence regions from profiles Hypothesis testing in model selection Score and Wald tests Classical goodness of fit Model selection bias General Principles Introduction Parameterisation Parameter redundancy Boundary estimates Regression and influence The EM algorithm Alternative methods of model fitting Non-regular problems Simulation Techniques Introduction Simulating random variables Integral estimation Verification Monte Carlo inference Estimating sampling distributi...

  11. Hydrologic Model Development and Calibration: Contrasting a Single- and Multi-Objective Approach for Comparing Model Performance

    Science.gov (United States)

    Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.

    2009-05-01

    Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model, currently under development by Environment
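
    For reference, the Nash-Sutcliffe efficiency and bias named above are simple functions of the observed and simulated series; a sketch follows (NumPy assumed), with the bi-objective idea noted in a comment.

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """NS efficiency: 1 is a perfect fit, 0 is no better than the observed mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pbias(obs, sim):
        """Percent bias: positive values indicate the model underestimates volume."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)

    # A bi-objective calibration evaluates pairs such as (NS over high flows,
    # NS over low flows) for every candidate parameter set and keeps the
    # non-dominated ones, rather than optimizing one weighted sum.
    ```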

  12. Computations for the 1:5 model of the THTR pressure vessel compared with experimental results

    International Nuclear Information System (INIS)

    Stangenberg, F.

    1972-01-01

    In this report, experimental results measured in 1971 on the 1:5 model of the prestressed concrete pressure vessel of the THTR nuclear power station at Schmehausen are compared with the results of axisymmetric computations. Linear-elastic computations were performed, as well as approximate computations for overload pressures taking into consideration the influence of the load history (prestressing, temperature, creep) and the effects of the steel components. (orig.) [de]

  13. Comparative Analysis of Photogrammetric Methods for 3D Models for Museums

    DEFF Research Database (Denmark)

    Hafstað Ármannsdottir, Unnur Erla; Antón Castro, Francesc/François; Mioc, Darka

    2014-01-01

    The goal of this paper is to make a comparative analysis and selection of methodologies for making 3D models of historical items, buildings and cultural heritage and how to preserve information such as temporary exhibitions and archaeological findings. Two of the methodologies analyzed correspond...... matrix has been used. Prototypes are made partly or fully and evaluated from the point of view of preservation of information by a museum....

  14. Comparative Analysis of Market Volatility in Indian Banking and IT Sectors by using Average Decline Model

    OpenAIRE

    Kirti AREKAR; Rinku JAIN

    2017-01-01

    Stock market volatility depends on three major features: complete volatility, volatility fluctuations, and volatility attention, which are calculated by statistical techniques. This paper presents a comparative analysis of market volatility for two major indices, the banking and IT sectors on the Bombay Stock Exchange (BSE), using the average decline model. The average degeneration process in volatility has been used after very high and low stock returns. The results of this study explain a significant decline in...

  15. Thermodynamic Molecular Switch in Sequence-Specific Hydrophobic Interaction: Two Computational Models Compared

    Directory of Open Access Journals (Sweden)

    Paul Chun

    2003-01-01

    Full Text Available We have shown in our published work the existence of a thermodynamic switch in biological systems wherein a change of sign in ΔCp°(T) of reaction leads to a true negative minimum in the Gibbs free energy change of reaction and, hence, a maximum in the related Keq. We have examined 35 pair-wise, sequence-specific hydrophobic interactions over the temperature range of 273–333 K, based on data reported by Nemethy and Scheraga in 1962. A closer look at a single example, the pair-wise hydrophobic interaction of leucine-isoleucine, will demonstrate the significant differences when the data are analyzed using the Nemethy-Scheraga model or treated by the Planck-Benzinger methodology which we have developed. The change in inherent chemical bond energy at 0 K, ΔH°(T0), is 7.53 kcal mol-1 compared with 2.4 kcal mol-1, while ⟨Ts⟩ is 365 K as compared with 355 K, for the Nemethy-Scheraga and Planck-Benzinger models, respectively. At ⟨Tm⟩, the thermal agitation energy is about five times greater than ΔH°(T0) in the Planck-Benzinger model; ⟨Tm⟩ is 465 K compared with 497 K in the Nemethy-Scheraga model. The results imply that the negative Gibbs free energy minimum at a well-defined ⟨Ts⟩, where TΔS° = 0 at about 355 K, has its origin in the sequence-specific hydrophobic interactions, which are highly dependent on details of molecular structure. The Nemethy-Scheraga model shows no evidence of the thermodynamic molecular switch that we have found to be a universal feature of biological interactions. The Planck-Benzinger method is the best known for evaluating the innate temperature-invariant enthalpy, ΔH°(T0), and provides for better understanding of the heat of reaction for biological molecules.
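
    The switch described above rests on standard thermodynamic relations (Kirchhoff's law plus the Gibbs equation); the block below restates those textbook forms, not the Planck-Benzinger fitting procedure itself.

    ```latex
    % Kirchhoff relations for the temperature dependence of reaction enthalpy and entropy:
    \Delta H^{\circ}(T) = \Delta H^{\circ}(T_0) + \int_{T_0}^{T} \Delta C_p^{\circ}(T')\, dT'
    \qquad
    \Delta S^{\circ}(T) = \Delta S^{\circ}(T_0) + \int_{T_0}^{T} \frac{\Delta C_p^{\circ}(T')}{T'}\, dT'

    % Gibbs equation and its slope:
    \Delta G^{\circ}(T) = \Delta H^{\circ}(T) - T\,\Delta S^{\circ}(T)
    \qquad
    \frac{\mathrm{d}\,\Delta G^{\circ}}{\mathrm{d}T} = -\Delta S^{\circ}(T)

    % Hence \Delta G^{\circ}(T) has an extremum where \Delta S^{\circ} = 0 (the <T_s> above),
    % and that extremum is a true minimum when \Delta C_p^{\circ} < 0 there, since
    % \mathrm{d}^2 \Delta G^{\circ}/\mathrm{d}T^2 = -\Delta C_p^{\circ}/T.
    ```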

  16. Water Management in the Camargue Biosphere Reserve: Insights from Comparative Mental Models Analysis

    Directory of Open Access Journals (Sweden)

    Raphael Mathevet

    2011-03-01

    Full Text Available Mental models are the cognitive representations of the world that frame how people interact with the world. Learning implies changing these mental models. The successful management of complex social-ecological systems requires the coordination of actions to achieve shared goals. The coordination of actions requires a level of shared understanding of the system or situation; a shared or common mental model. We first describe the elicitation and analysis of mental models of different stakeholder groups associated with water management in the Camargue Biosphere Reserve in the Rhône River delta on the French Mediterranean coast. We use cultural consensus analysis to explore the degree to which different groups shared mental models of the whole system, of stakeholders, of resources, of processes, and of interactions among these last three. The analysis of the elicited data from this group structure enabled us to tentatively explore the evidence for learning in the nonstatute Water Board; comprising important stakeholders related to the management of the central Rhône delta. The results indicate that learning does occur and results in richer mental models that are more likely to be shared among group members. However, the results also show lower than expected levels of agreement with these consensual mental models. Based on this result, we argue that a careful process and facilitation design can greatly enhance the functioning of the participatory process in the Water Board. We conclude that this methodology holds promise for eliciting and comparing mental models. It enriches group-model building and participatory approaches with a broader view of social learning and knowledge-sharing issues.

  17. A Comparative Study of Two Decision Models: Frisch’s model and a simple Dutch planning model

    NARCIS (Netherlands)

    J. Tinbergen (Jan)

    1951-01-01

    The significance of Frisch's notion of decision models is, in the first place, that they draw full attention upon "inverted problems" which economic policy puts before us. In these problems the data are no longer those in the traditional economic problems, but partly the political

  18. A Field Guide to Extra-Tropical Cyclones: Comparing Models to Observations

    Science.gov (United States)

    Bauer, M.

    2008-12-01

    Climate it is said is the accumulation of weather. And weather is not the concern of climate models. Justification for this latter sentiment has long hidden behind coarse model resolutions and blunt validation tools based on climatological maps and the like. The spatial-temporal resolutions of today's models and observations are converging onto meteorological scales however, which means that with the correct tools we can test the largely unproven assumption that climate model weather is correct enough, or at least lacks perverting biases, such that its accumulation does in fact result in a robust climate prediction. Towards this effort we introduce a new tool for extracting detailed cyclone statistics from climate model output. These include the usual cyclone distribution statistics (maps, histograms), but also adaptive cyclone-centric composites. We have also created a complementary dataset, The MAP Climatology of Mid-latitude Storminess (MCMS), which provides a detailed 6-hourly assessment of the areas under the influence of mid-latitude cyclones based on Reanalysis products. Using this we then extract complementary composites from sources such as ISCCP and GPCP to create a large comparative dataset for climate model validation. A demonstration of the potential usefulness of these tools will be shown. dime.giss.nasa.gov/mcms/mcms.html

  19. Comparing GIS-based habitat models for applications in EIA and SEA

    International Nuclear Information System (INIS)

    Gontier, Mikael; Moertberg, Ulla; Balfors, Berit

    2010-01-01

    Land use changes, urbanisation and infrastructure developments in particular, cause fragmentation of natural habitats and threaten biodiversity. Tools and measures must be adapted to assess and remedy the potential effects on biodiversity caused by human activities and developments. Within physical planning, environmental impact assessment (EIA) and strategic environmental assessment (SEA) play important roles in the prediction and assessment of biodiversity-related impacts from planned developments. However, adapted prediction tools to forecast and quantify potential impacts on biodiversity components are lacking. This study tested and compared four different GIS-based habitat models and assessed their relevance for applications in environmental assessment. The models were implemented in the Stockholm region in central Sweden and applied to data on the crested tit (Parus cristatus), a sedentary bird species of coniferous forest. All four models performed well and allowed the distribution of suitable habitats for the crested tit in the Stockholm region to be predicted. The models were also used to predict and quantify habitat loss for two regional development scenarios. The study highlighted the importance of model selection in impact prediction. Criteria that are relevant for the choice of model for predicting impacts on biodiversity were identified and discussed. Finally, the importance of environmental assessment for the preservation of biodiversity within the general frame of biodiversity conservation is emphasised.

  20. NTCP modelling of lung toxicity after SBRT comparing the universal survival curve and the linear quadratic model for fractionation correction

    International Nuclear Information System (INIS)

    Wennberg, Berit M.; Baumann, Pia; Gagliardi, Giovanna

    2011-01-01

    Background. In SBRT of lung tumours no established relationship between dose-volume parameters and the incidence of lung toxicity is found. The aim of this study is to compare the LQ model and the universal survival curve (USC) for calculating biologically equivalent doses in SBRT, to see if this will improve knowledge of this relationship. Material and methods. Toxicity data on radiation pneumonitis grade 2 or more (RP2+) from 57 patients were used; 10.5% were diagnosed with RP2+. The lung DVHs were corrected for fractionation (LQ and USC) and analysed with the Lyman-Kutcher-Burman (LKB) model. In the LQ correction α/β = 3 Gy was used, and the USC parameters used were: α/β = 3 Gy, D0 = 1.0 Gy, n = 10, α = 0.206 Gy⁻¹ and dT = 5.8 Gy. In order to understand the relative contribution of different dose levels to the calculated NTCP, the concept of fractional NTCP was used. This might give an insight into the question of whether 'high doses to small volumes' or 'low doses to large volumes' are most important for lung toxicity. Results and Discussion. NTCP analysis with the LKB model using parameters m = 0.4, D50 = 30 Gy resulted in a volume dependence parameter of n = 0.87 with LQ correction and n = 0.71 with USC correction. Using parameters m = 0.3, D50 = 20 Gy gave n = 0.93 with LQ correction and n = 0.83 with USC correction. In SBRT of lung tumours, NTCP modelling of lung toxicity comparing models (LQ, USC) for fractionation correction shows that low doses contribute less and high doses more to the NTCP when the USC model is used. Comparing NTCP modelling of SBRT data with data from breast cancer, lung cancer and whole-lung irradiation implies that the response of the lung is treatment specific. More data are, however, needed in order to achieve more reliable modelling.
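
    For orientation, here is a generic sketch of the LKB calculation chain (DVH bins to gEUD to probit NTCP), using the LQ-corrected parameters quoted above. The DVH bins are invented and this is not the study's fitted code.

    ```python
    import numpy as np
    from scipy.stats import norm

    def lkb_ntcp(doses, volumes, n, m, d50):
        """LKB NTCP from a DVH: doses in Gy per bin, volumes as fractions summing to 1."""
        geud = np.sum(volumes * doses ** (1.0 / n)) ** n   # Kutcher-Burman DVH reduction
        t = (geud - d50) / (m * d50)                       # standardized dose deviate
        return norm.cdf(t)                                 # probit response

    doses = np.array([5.0, 15.0, 30.0, 45.0])     # hypothetical fraction-corrected DVH
    volumes = np.array([0.4, 0.3, 0.2, 0.1])
    print(lkb_ntcp(doses, volumes, n=0.87, m=0.4, d50=30.0))  # LQ-corrected parameter set
    ```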

  1. Comparing the reported burn conditions for different severity burns in porcine models: a systematic review.

    Science.gov (United States)

    Andrews, Christine J; Cuttle, Leila

    2017-12-01

    There are many porcine burn models that create burns using different materials (e.g. metal, water) and different burn conditions (e.g. temperature and duration of exposure). This review aims to determine whether a pooled analysis of these studies can provide insight into the burn materials and conditions required to create burns of a specific severity. A systematic review of 42 porcine burn studies describing the depth of burn injury with histological evaluation is presented. Inclusion criteria included thermal burns, burns created with a novel method or material, histological evaluation within 7 days post-burn and method for depth of injury assessment specified. Conditions causing deep dermal scald burns compared to contact burns of equivalent severity were disparate, with lower temperatures and shorter durations reported for scald burns (83°C for 14 seconds) compared to contact burns (111°C for 23 seconds). A valuable archive of the different mechanisms and materials used for porcine burn models is presented to aid design and optimisation of future models. Significantly, this review demonstrates the effect of the mechanism of injury on burn severity and that caution is recommended when burn conditions established by porcine contact burn models are used by regulators to guide scald burn prevention strategies. © 2017 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  2. @TOME-2: a new pipeline for comparative modeling of protein–ligand complexes

    Science.gov (United States)

    Pons, Jean-Luc; Labesse, Gilles

    2009-01-01

    @TOME 2.0 is a new web pipeline dedicated to protein structure modeling and small-ligand docking based on comparative analyses. @TOME 2.0 allows fold recognition, template selection, structural alignment editing, structure comparisons, 3D-model building and evaluation. These tasks are routinely used in sequence analyses for structure prediction. In our pipeline the necessary software is efficiently interconnected in an original manner to accelerate all the processes. Furthermore, we have also connected comparative docking of small ligands, which is performed using protein–protein superposition. The input is a simple protein sequence in one-letter code with no comment. The resulting 3D model, protein–ligand complexes and structural alignments can be visualized through dedicated Web interfaces or can be downloaded for further studies. These original features will aid in the functional annotation of proteins and the selection of templates for molecular modeling and virtual screening. Several examples are described to highlight some of the new functionalities provided by this pipeline. The server and its documentation are freely available at http://abcis.cbs.cnrs.fr/AT2/ PMID:19443448

  4. State regulation of nuclear sector: comparative study of Argentina and Brazil models

    International Nuclear Information System (INIS)

    Monteiro Filho, Joselio Silveira

    2004-08-01

    This research presents a comparative assessment of the regulation models of the nuclear sector in Argentina - under the responsibility of the Autoridad Regulatoria Nuclear (ARN) - and Brazil - under the responsibility of the Comissao Nacional de Energia Nuclear (CNEN) - seeking to identify which model is more adequate for ensuring the safe use of nuclear energy. Due to the methodology adopted, the theoretical framework resulted in criteria of analysis that correspond to the characteristics of the Brazilian regulatory agencies created for other economic sectors during the State reform starting in the middle of the nineties. Later, these criteria of analysis were used as comparison patterns between the regulation models of the nuclear sectors of Argentina and Brazil. The comparative assessment showed that the regulatory structure of the nuclear sector in Argentina seems to be more adequate, concerning the safe use of nuclear energy, than the model adopted in Brazil by CNEN, because it incorporates the criteria of functional, institutional and financial independence, definition of competences, technical excellence and transparency, indispensable to the development of its functions with autonomy, ethics, exemption and agility. (author)

  5. Is tuberculosis treatment really free in China? A study comparing two areas with different management models.

    Directory of Open Access Journals (Sweden)

    Sangsang Qiu

    Full Text Available China has implemented a free-service policy for tuberculosis. However, patients still have to pay a substantial proportion of their annual income for treatment of this disease. This study describes the economic burden on patients with tuberculosis; identifies related factors by comparing two areas with different management models; and provides policy recommendations for tuberculosis control reform in China. There are three tuberculosis management models in China: the tuberculosis dispensary model, the specialist model and the integrated model. We selected Zhangjiagang (ZJG) and Taixing (TX) as the study sites, which correspond to areas implementing the integrated model and the dispensary model, respectively. Patients diagnosed and treated for tuberculosis since January 2010 were recruited as study subjects. A total of 590 patients (316 patients from ZJG and 274 patients from TX) were interviewed, with a response rate of 81%. The economic burden attributed to tuberculosis, including direct costs and indirect costs, was estimated and compared between the two study sites. The Mann-Whitney U Test was used to compare the cost differences between the two groups. Potential factors related to the total out-of-pocket costs were analyzed based on a step-by-step multivariate linear regression model after logarithmic transformation of the costs. The average (median, interquartile range) total cost was 18793.33 (9965, 3200-24400) CNY for patients in ZJG, which was significantly higher than for patients in TX (mean: 6598.33, median: 2263, interquartile range: 983-6688) (Z = 10.42, P < 0.001). After excluding expenses covered by health insurance, the average out-of-pocket costs were 14304.4 CNY in ZJG and 5639.2 CNY in TX. Based on the multivariable linear regression analysis, factors related to the total out-of-pocket costs were study site, age, number of clinical visits, residence, diagnosis delay, hospitalization, intake of liver protective drugs and use of the second

  6. Comparative study: TQ and Lean Production ownership models in health services.

    Science.gov (United States)

    Eiro, Natalia Yuri; Torres-Junior, Alvair Silveira

    2015-01-01

    This study compares the application of Total Quality (TQ) models used in the processes of a health service with cases of lean healthcare and with literature from another institution that has also applied this model. It is qualitative research conducted through a descriptive case study. Through critical analysis of the institutions studied, it was possible to compare the traditional quality approach observed in one case with the theoretical and practical lean production approach used in the other; the specifications are described below. The research identified that the lean model was better suited to people who work systemically and generate flow. It also pointed towards some potential challenges in the introduction and implementation of lean methods in health.

  7. Comparative analysis of turbulence models for flow simulation around a vertical axis wind turbine

    Energy Technology Data Exchange (ETDEWEB)

    Roy, S.; Saha, U.K. [Indian Institute of Technology Guwahati, Dept. of Mechanical Engineering, Guwahati (India)

    2012-07-01

    An unsteady computational investigation of the static torque characteristics of a drag-based vertical axis wind turbine (VAWT) has been carried out using the finite volume based computational fluid dynamics (CFD) software package Fluent 6.3. A comparative study among the various turbulence models was conducted in order to predict the flow over the turbine at static condition, and the results are validated with the available experimental results. CFD simulations were carried out at different turbine angular positions between 0 deg. and 360 deg. in steps of 15 deg. Results have shown that, due to high static pressure on the returning blade of the turbine, the net static torque is negative at angular positions of 105 deg.-150 deg. The realizable k-ε turbulence model has shown a better simulation capability than the other turbulence models for the analysis of the static torque characteristics of the drag-based VAWT. (Author)

  8. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    Science.gov (United States)

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
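
    A minimal sketch of such a conversion model, with fabricated paired ΣPCB values (the study's data are not reproduced): fit an OLS line from the 119-congener sums to the 209-congener sums, then check agreement with the relative percent difference.

    ```python
    import numpy as np

    # Hypothetical paired samples analysed both ways, in the same units:
    # x = sum over the 119-congener set, y = sum over all 209 congeners.
    x = np.array([12.1, 30.5, 8.2, 55.0, 21.3])
    y = np.array([13.0, 32.8, 8.9, 59.1, 23.0])

    slope, intercept = np.polyfit(x, y, 1)       # OLS fit of the conversion model
    predicted = slope * x + intercept
    rpd = 100 * np.abs(predicted - y) / ((predicted + y) / 2)  # relative % difference
    print(f"Sigma209 ~= {slope:.2f} * Sigma119 + {intercept:.2f}; mean RPD {rpd.mean():.1f}%")
    ```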

  9. Individual-Tree Diameter Growth Models for Mixed Nothofagus Second Growth Forests in Southern Chile

    Directory of Open Access Journals (Sweden)

    Paulo C. Moreno

    2017-12-01

    Full Text Available Second growth forests of Nothofagus obliqua (roble), N. alpina (raulí), and N. dombeyi (coihue), known locally as RORACO, are among the most important native mixed forests in Chile. To improve the sustainable management of these forests, managers need adequate information and models regarding not only existing forest conditions, but also their future states under alternative silvicultural activities. In this study, an individual-tree diameter growth model was developed for the full geographical distribution of the RORACO forest type. This was achieved by fitting a complete model and comparing two variable selection procedures: cross-validation (CV) and least absolute shrinkage and selection operator (LASSO) regression. A small set of predictors successfully explained a large portion of the annual increment in diameter at breast height (DBH) growth, particularly variables associated with competition at both the tree and stand level. Goodness-of-fit statistics for this final model showed an empirical coefficient of correlation (R2emp) of 0.56, a relative root mean square error of 44.49% and a relative bias of -1.96% for annual DBH growth predictions, and R2emp of 0.98 and 0.97 for DBH projection at 6 and 12 years, respectively. This model constitutes a simple and useful tool to support management plans for these forest ecosystems.
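
    A hedged sketch of the LASSO variable-selection route named above, using scikit-learn on synthetic predictors rather than the RORACO data; predictor counts and noise level are arbitrary.

    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LassoCV

    # Stand-in for tree- and stand-level predictors of annual DBH growth
    X, y = make_regression(n_samples=500, n_features=12, n_informative=4,
                           noise=5.0, random_state=1)

    lasso = LassoCV(cv=10).fit(X, y)        # LASSO with 10-fold cross-validated penalty
    kept = np.flatnonzero(lasso.coef_)      # predictors that survive the shrinkage
    print("selected predictors:", kept, "alpha:", round(lasso.alpha_, 3))
    ```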

  10. Comparison of Artificial Neural Networks and ARIMA statistical models in simulations of target wind time series

    Science.gov (United States)

    Kolokythas, Kostantinos; Vasileios, Salamalikis; Athanassios, Argiriou; Kazantzidis, Andreas

    2015-04-01

    The wind is the result of complex interactions among numerous mechanisms taking place at small or large scales, so better knowledge of its behavior is essential in a variety of applications, especially in the field of power production from wind turbines. In the literature there is a considerable number of models, either physical or statistical, dealing with the problem of simulation and prediction of wind speed. Among others, Artificial Neural Networks (ANNs) are widely used for the purpose of wind forecasting and, in the great majority of cases, outperform other conventional statistical models. In this study, a number of ANNs with different architectures, which have been created and applied to a dataset of wind time series, are compared to Auto Regressive Integrated Moving Average (ARIMA) statistical models. The data consist of mean hourly wind speeds from a wind farm in a hilly Greek region and cover a period of one year (2013). The main goal is to evaluate the models' ability to simulate successfully the wind speed at a significant point (target). Goodness-of-fit statistics are performed for the comparison of the different methods. In general, the ANNs showed the best performance in the estimation of wind speed, prevailing over the ARIMA models.
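
    The ARIMA side of such a comparison can be sketched with statsmodels (assumed installed) on a synthetic hourly series; the model order here is arbitrary, not the one fitted in the study.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(42)
    # Toy hourly wind-speed series: a slowly drifting random walk around 8 m/s
    speeds = pd.Series(8 + np.cumsum(rng.normal(0, 0.3, 1000)))

    fit = ARIMA(speeds, order=(2, 1, 1)).fit()   # AR(2), first difference, MA(1)
    forecast = fit.forecast(steps=24)            # predict the next 24 hours
    print(round(fit.aic, 1), np.asarray(forecast)[:3])
    ```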

  11. The Evolution of the Solar Magnetic Field: A Comparative Analysis of Two Models

    Science.gov (United States)

    McMichael, K. D.; Karak, B. B.; Upton, L.; Miesch, M. S.; Vierkens, O.

    2017-12-01

    Understanding the complexity of the solar magnetic cycle is a task that has plagued scientists for decades. However, with the help of computer simulations, we have begun to gain more insight into possible solutions to the plethora of questions inside the Sun. STABLE (Surface Transport and Babcock Leighton) is a newly developed 3D dynamo model that can reproduce features of the solar cycle. In this model, the tilted bipolar sunspots are formed on the surface (based on the toroidal field at the bottom of the convection zone) and then decay and disperse, producing the poloidal field. Since STABLE is a 3D model, it is able to solve the full induction equation in the entirety of the solar convection zone as well as incorporate many free parameters (such as spot depth and turbulent diffusion) which are difficult to observe. In an attempt to constrain some of these free parameters, we compare STABLE to a surface flux transport model called AFT (Advective Flux Transport) which solves the radial component of the magnetic field on the solar surface. AFT is a state-of-the-art surface flux transport model that has a proven record of being able to reproduce solar observations with great accuracy. In this project, we implement synthetic bipolar sunspots into both models, using identical surface parameters, and run the models for comparison. We demonstrate that the 3D structure of the sunspots in the interior and the vertical diffusion of the sunspot magnetic field play an important role in establishing the surface magnetic field in STABLE. We found that when a sufficient amount of downward magnetic pumping is included in STABLE, the surface magnetic field from this model becomes insensitive to the internal structure of the sunspot and more consistent with that of AFT.

  12. Representing macropore flow at the catchment scale: a comparative modeling study

    Science.gov (United States)

    Liu, D.; Li, H. Y.; Tian, F.; Leung, L. R.

    2017-12-01

    Macropore flow is an important hydrological process that generally enhances the soil infiltration capacity and the velocity of subsurface water. To date, macropore flow has mostly been simulated with high-resolution models. One possible drawback of this modeling approach is the difficulty of effectively representing the overall topology and connectivity of the macropore networks. We hypothesize that modeling macropore flow directly at the catchment scale may be complementary to the existing modeling strategy and offer some new insights. The Tsinghua Representative Elementary Watershed model (THREW) is a semi-distributed hydrological model whose fundamental building blocks are representative elementary watersheds (REWs) linked by the river channel network. In THREW, all the hydrological processes are described with constitutive relationships established directly at the REW level, i.e., the catchment scale. In this study, the constitutive relationship for macropore flow drainage is established as part of THREW. The enhanced THREW model is then applied to two catchments with deep soils but distinct climates: the humid Asu catchment in the Amazon River basin, and the arid Wei catchment in the Yellow River basin. The Asu catchment has an area of 12.43 km2 with mean annual precipitation of 2442 mm. The larger Wei catchment has an area of 24800 km2 but mean annual precipitation of only 512 mm. The rainfall-runoff processes are simulated at an hourly time step from 2002 to 2005 in the Asu catchment and from 2001 to 2012 in the Wei catchment. The role of macropore flow in catchment hydrology is analyzed comparatively over the Asu and Wei catchments against the observed streamflow, evapotranspiration and other auxiliary data.

  13. A comparative analysis of hazard models for predicting debris flows in Madison County, VA

    Science.gov (United States)

    Morrissey, Meghan M.; Wieczorek, Gerald F.; Morgan, Benjamin A.

    2001-01-01

    During the rainstorm of June 27, 1995, roughly 330-750 mm of rain fell within a sixteen-hour period, initiating floods and over 600 debris flows in a small area (130 km2) of Madison County, Virginia. Field studies showed that the majority (70%) of these debris flows initiated with a thickness of 0.5 to 3.0 m in colluvium on slopes from 17° to 41° (Wieczorek et al., 2000). This paper evaluated and compared the approaches of SINMAP, LISA, and Iverson's (2000) transient response model for slope stability analysis by applying each model to the landslide data from Madison County. Of these three stability models, only Iverson's transient response model evaluated stability conditions as a function of time and depth. Iverson's model would be the preferred method of the three to evaluate landslide hazards on a regional scale in areas prone to rain-induced landslides, as it considers both the transient and spatial response of pore pressure in its calculation of slope stability. The stability calculation used in SINMAP and LISA is similar and utilizes probability distribution functions for certain parameters. Unlike SINMAP, which only considers soil cohesion, internal friction angle and rainfall-rate distributions, LISA allows the use of distributed data for all parameters, so it is preferred over SINMAP for evaluating slope stability. Results from all three models suggested similar soil and hydrologic properties for triggering the landslides that occurred during the 1995 storm in Madison County, Virginia. The colluvium probably had cohesion of less than 2 kPa. The root-soil system is above the failure plane, and consequently root strength and tree surcharge had negligible effect on slope stability. The result that the final location of the water table was near the ground surface is supported by the water budget analysis of the rainstorm conducted by Smith et al. (1996).

  14. Comparing Free-Free and Shaker Table Model Correlation Methods Using Jim Beam

    Science.gov (United States)

    Ristow, James; Smith, Kenneth Wayne, Jr.; Johnson, Nathaniel; Kinney, Jackson

    2018-01-01

    Finite element model correlation as part of a spacecraft program has always been a challenge. For any NASA mission, the coupled system response of the spacecraft and launch vehicle can be determined analytically through a Coupled Loads Analysis (CLA), as it is not possible to test the spacecraft and launch vehicle coupled system before launch. The value of the CLA is highly dependent on the accuracy of the frequencies and mode shapes extracted from the spacecraft model. NASA standards require the spacecraft model used in the final Verification Loads Cycle to be correlated by either a modal test or by comparison of the model with Frequency Response Functions (FRFs) obtained during the environmental qualification test. Due to budgetary and time constraints, most programs opt to correlate the spacecraft dynamic model during the environmental qualification test, conducted on a large shaker table. For any model correlation effort, the key has always been finding a proper definition of the boundary conditions. This paper is a correlation case study to investigate the difference in responses of a simple structure using a free-free boundary, a fixed boundary on the shaker table, and a base-drive vibration test, all using identical instrumentation. The NAVCON Jim Beam test structure, featured in the IMAC round robin modal test of 2009, was selected as a simple, well recognized and well characterized structure to conduct this investigation. First, a free-free impact modal test of the Jim Beam was done as an experimental control. Second, the Jim Beam was mounted to a large 20,000 lbf shaker, and an impact modal test in this fixed configuration was conducted. Lastly, a vibration test of the Jim Beam was conducted on the shaker table. The free-free impact test, the fixed impact test, and the base-drive test were used to assess the effect of the shaker modes, evaluate the validity of fixed-base modeling assumptions, and compare final model correlation results between these

  15. Bayesian meta-analysis models for microarray data: a comparative study

    Directory of Open Access Journals (Sweden)

    Song Joon J

    2007-03-01

    Full Text Available Background: With the growing abundance of microarray data, statistical methods are increasingly needed to integrate results across studies. Two common approaches for meta-analysis of microarrays include either combining gene expression measures across studies or combining summaries such as p-values, probabilities or ranks. Here, we compare two Bayesian meta-analysis models that are analogous to these methods. Results: Two Bayesian meta-analysis models for microarray data have recently been introduced. The first model combines standardized gene expression measures across studies into an overall mean, accounting for inter-study variability, while the second combines probabilities of differential expression without combining expression values. Both models produce the gene-specific posterior probability of differential expression, which is the basis for inference. Since the standardized expression integration model includes inter-study variability, it may improve accuracy of results versus the probability integration model. However, due to the small number of studies typical in microarray meta-analyses, the variability between studies is challenging to estimate. The probability integration model eliminates the need to model variability between studies, and thus its implementation is more straightforward. We found in simulations of two and five studies that combining probabilities outperformed combining standardized gene expression measures for three comparison values: the percent of true discovered genes in meta-analysis versus individual studies; the percent of true genes omitted in meta-analysis versus separate studies, and the number of true discovered genes for fixed levels of Bayesian false discovery. We identified similar results when pooling two independent studies of Bacillus subtilis. We assumed that each study was produced from the same microarray platform with only two conditions: a treatment and control, and that the data sets

  16. Comparative Study of Injury Models for Studying Muscle Regeneration in Mice.

    Directory of Open Access Journals (Sweden)

    David Hardy

    Full Text Available A longstanding goal in regenerative medicine is to reconstitute functional tissues or organs after injury or disease. Attention has focused on the identification and relative contribution of tissue-specific stem cells to the regeneration process. Relatively little is known about how the physiological process is regulated by other tissue constituents. Numerous injury models are used to investigate tissue regeneration; however, these models are often poorly understood. Specifically, for skeletal muscle regeneration several models are reported in the literature, yet the relative impact on muscle physiology and on the distinct cell types has not been extensively characterised. We have used transgenic Tg:Pax7nGFP and Flk1GFP/+ mouse models to respectively count the number of muscle stem (satellite) cells (SCs) and the number/shape of vessels by confocal microscopy. We performed histological and immunostainings to assess the differences in the key regeneration steps. Infiltration of immune cells and production of chemokines and cytokines were assessed in vivo by Luminex®. We compared the 4 most commonly used injury models, i.e. freeze injury (FI), barium chloride (BaCl2), notexin (NTX) and cardiotoxin (CTX). The FI was the most damaging. In this model, up to 96% of the SCs are destroyed together with their surrounding environment (basal lamina and vasculature), leaving a "dead zone" devoid of viable cells. The regeneration process itself is fulfilled in all 4 models with virtually no fibrosis 28 days post-injury, except in the FI model. Inflammatory cells return to basal levels in the CTX and BaCl2 models but are still significantly elevated 1 month post-injury in the FI and NTX models. Interestingly, the number of SCs returned to normal only in the FI model 1 month post-injury, with SCs still cycling up to 3 months after the induction of injury in the other models. Our studies show that the nature of the injury model should be chosen carefully depending on the experimental design and desired

  17. Identifying the Source of Misfit in Item Response Theory Models.

    Science.gov (United States)

    Liu, Yang; Maydeu-Olivares, Alberto

    2014-01-01

    When an item response theory model fails to fit adequately, the items for which the model provides a good fit and those for which it does not must be determined. To this end, we compare the performance of several fit statistics for item pairs with known asymptotic distributions under maximum likelihood estimation of the item parameters: (a) a mean and variance adjustment to bivariate Pearson's X², (b) a bivariate subtable analog to Reiser's (1996) overall goodness-of-fit test, (c) a z statistic for the bivariate residual cross product, and (d) Maydeu-Olivares and Joe's (2006) M2 statistic applied to bivariate subtables. The unadjusted Pearson's X² with heuristically determined degrees of freedom is also included in the comparison. For binary and ordinal data, our simulation results suggest that the z statistic has the best Type I error and power behavior among all the statistics under investigation when the observed information matrix is used in its computation. However, if one has to use the cross-product information, the mean and variance adjusted X² is recommended. We illustrate the use of pairwise fit statistics in 2 real-data examples and discuss possible extensions of the current research in various directions.
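
    For context, a hypothetical sketch of the simplest statistic in the comparison above: an unadjusted bivariate Pearson X² for one item pair under a fitted 2PL model. The mean-and-variance adjustment, the Reiser-type subtable test, the residual z statistic, and the bivariate M2 all additionally require the asymptotic covariance of the residuals and are not reproduced here; item parameters and counts are made up.

```python
import numpy as np
from scipy.stats import norm

def irf(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def expected_joint_probs(a1, b1, a2, b2, n_quad=61):
    """Model-implied 2x2 cell probabilities for an item pair, integrating
    over a standard normal latent trait on a fixed quadrature grid."""
    nodes = np.linspace(-5.0, 5.0, n_quad)
    weights = norm.pdf(nodes)
    weights /= weights.sum()
    p1, p2 = irf(nodes, a1, b1), irf(nodes, a2, b2)
    cells = np.zeros((2, 2))
    for y1 in (0, 1):
        for y2 in (0, 1):
            f1 = p1 if y1 else 1.0 - p1
            f2 = p2 if y2 else 1.0 - p2
            cells[y1, y2] = np.sum(weights * f1 * f2)
    return cells

def bivariate_x2(obs_counts, a1, b1, a2, b2):
    """Unadjusted Pearson X^2 comparing an observed 2x2 table to the model."""
    n = obs_counts.sum()
    exp_counts = n * expected_joint_probs(a1, b1, a2, b2)
    return float(np.sum((obs_counts - exp_counts) ** 2 / exp_counts))

# Made-up counts (rows: item 1 = 0/1; columns: item 2 = 0/1) and parameters
obs = np.array([[310.0, 120.0], [90.0, 480.0]])
print(f"X^2 = {bivariate_x2(obs, a1=1.2, b1=-0.3, a2=0.9, b2=0.5):.2f}")
```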

  18. Diabetes and quality of life: Comparing results from utility instruments and Diabetes-39.

    Science.gov (United States)

    Chen, Gang; Iezzi, Angelo; McKie, John; Khan, Munir A; Richardson, Jeff

    2015-08-01

    To compare the Diabetes-39 (D-39) with six multi-attribute utility (MAU) instruments (15D, AQoL-8D, EQ-5D, HUI3, QWB, and SF-6D), and to develop mapping algorithms which could be used to transform the D-39 scores into the MAU scores. Self-reported diabetes sufferers (N=924) and members of the healthy public (N=1760), aged 18 years and over, were recruited from 6 countries (Australia 18%, USA 18%, UK 17%, Canada 16%, Norway 16%, and Germany 15%). Apart from the QWB, which was normally distributed, non-parametric rank tests were used to compare subgroup utilities and D-39 scores. Mapping algorithms were estimated using ordinary least squares (OLS) and generalised linear models (GLM). MAU instruments discriminated between diabetes patients and the healthy public; however, utilities varied between instruments. The 15D, SF-6D, and AQoL-8D had the strongest correlations with the D-39. Except for the HUI3, there were significant differences by gender. Mapping algorithms based on the OLS estimator consistently gave better goodness-of-fit results. The mean absolute error (MAE) values ranged from 0.061 to 0.147, the root mean square error (RMSE) values from 0.083 to 0.198, and the R-square statistics from 0.428 to 0.610. Based on MAE and RMSE values the preferred mapping is D-39 into 15D; R-square statistics and the range of predicted utilities indicate the preferred mapping is D-39 into AQoL-8D. Utilities estimated from different MAU instruments differ significantly, and the outcome of a study could depend upon the instrument used. The algorithms reported in this paper enable D-39 data to be mapped into utilities predicted from any of six instruments, providing choice for those conducting cost-utility analyses.
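
    A minimal sketch of the OLS mapping idea: regress utilities on instrument scores, then judge the mapping by MAE and RMSE as the abstract does. The D-39 dimension scores, the toy utility-generating rule, and the refit coefficients are all hypothetical; the published algorithms use the authors' estimated coefficients, not ones refit like this.

```python
import numpy as np

rng = np.random.default_rng(0)
d39 = rng.uniform(0, 100, size=(200, 5))     # 5 hypothetical D-39 dimension scores
# Toy 15D-like utility, generated only so the regression has something to fit
utility = 0.9 - 0.004 * d39.mean(axis=1) + rng.normal(0, 0.05, 200)

X = np.column_stack([np.ones(len(d39)), d39])        # add an intercept column
beta, *_ = np.linalg.lstsq(X, utility, rcond=None)   # OLS estimate
pred = X @ beta

mae = np.mean(np.abs(utility - pred))                # mean absolute error
rmse = np.sqrt(np.mean((utility - pred) ** 2))       # root mean square error
print(f"MAE = {mae:.3f}, RMSE = {rmse:.3f}")
```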

  19. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise

    Science.gov (United States)

    Brown, Patrick T.; Li, Wenhong; Cordero, Eugene C.; Mauget, Steven A.

    2015-01-01

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal. PMID:25898351

  20. Comparing risk of failure models in water supply networks using ROC curves

    International Nuclear Information System (INIS)

    Debon, A.; Carrion, A.; Cabrera, E.; Solano, H.

    2010-01-01

    The problem of predicting the failure of water mains has been considered from different perspectives and using several methodologies in engineering literature. Nowadays, it is important to be able to accurately calculate the failure probabilities of pipes over time, since water company profits and service quality for citizens depend on pipe survival; forecasting pipe failures could have important economic and social implications. Quantitative tools (such as managerial or statistical indicators and reliable databases) are required in order to assess the current and future state of networks. Companies managing these networks are trying to establish models for evaluating the risk of failure in order to develop a proactive approach to the renewal process, instead of using traditional reactive pipe substitution schemes. The main objective of this paper is to compare models for evaluating the risk of failure in water supply networks. Using real data from a water supply company, this study has identified which network characteristics affect the risk of failure and which models better fit data to predict service breakdown. The comparison using the receiver operating characteristics (ROC) graph leads us to the conclusion that the best model is a generalized linear model. Also, we propose a procedure that can be applied to a pipe failure database, allowing the most appropriate decision rule to be chosen.
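
    The comparison logic can be sketched as follows: score each pipe with candidate models, compare ROC curves via AUC, and choose a decision rule from the curve (here, the Youden-J threshold, one common choice). The risk scores and failure data below are synthetic stand-ins, not the paper's fitted GLM or database.

```python
import numpy as np

def roc_auc(y_true, score):
    """Rank-based AUC (equivalent to the Mann-Whitney U statistic)."""
    order = np.argsort(score)
    ranks = np.empty(len(score))
    ranks[order] = np.arange(1, len(score) + 1)
    pos = y_true == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

def best_threshold(y_true, score):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1."""
    best_j, best_t = -1.0, None
    for t in np.unique(score):
        pred = score >= t
        sens = (pred & (y_true == 1)).sum() / (y_true == 1).sum()
        spec = (~pred & (y_true == 0)).sum() / (y_true == 0).sum()
        if sens + spec - 1 > best_j:
            best_j, best_t = sens + spec - 1, t
    return best_t, best_j

rng = np.random.default_rng(1)
age = rng.uniform(0, 80, 500)                  # pipe age in years (synthetic)
p_fail = 1 / (1 + np.exp(-(age - 50) / 10))    # true failure probability
fail = (rng.random(500) < p_fail).astype(int)
score_glm = p_fail                             # stand-in for a fitted GLM score
score_naive = rng.random(500)                  # uninformative competitor
print(f"AUC(GLM)   = {roc_auc(fail, score_glm):.2f}")
print(f"AUC(naive) = {roc_auc(fail, score_naive):.2f}")
print("Youden threshold and J:", best_threshold(fail, score_glm))
```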

  1. Comparing personality disorder models: cross-method assessment of the FFM and DSM-IV-TR.

    Science.gov (United States)

    Samuel, Douglas B; Widiger, Thomas W

    2010-12-01

    The current edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR; American Psychiatric Association, 2000) defines personality disorders as categorical entities that are distinct from each other and from normal personality traits. However, many scientists now believe that personality disorders are best conceptualized using a dimensional model of traits that span normal and abnormal personality, such as the Five-Factor Model (FFM). However, if the FFM or any dimensional model is to be considered as a credible alternative to the current model, it must first demonstrate an increment in the validity of the assessment offered within a clinical setting. Thus, the current study extended previous research by comparing the convergent and discriminant validity of the current DSM-IV-TR model to the FFM across four assessment methodologies. Eighty-eight individuals receiving ongoing psychotherapy were assessed for the FFM and the DSM-IV-TR personality disorders using self-report, informant report, structured interview, and therapist ratings. The results indicated that the FFM had an appreciable advantage over the DSM-IV-TR in terms of discriminant validity and, at the domain level, convergent validity. Implications of the findings and directions for future research are discussed.

  2. Comparing risk of failure models in water supply networks using ROC curves

    Energy Technology Data Exchange (ETDEWEB)

    Debon, A., E-mail: andeau@eio.upv.e [Centro de Gestion de la Calidad y del Cambio, Dpt. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica de Valencia, E-46022 Valencia (Spain); Carrion, A. [Centro de Gestion de la Calidad y del Cambio, Dpt. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica de Valencia, E-46022 Valencia (Spain); Cabrera, E. [Dpto. De Ingenieria Hidraulica Y Medio Ambiente, Instituto Tecnologico del Agua, Universidad Politecnica de Valencia, E-46022 Valencia (Spain); Solano, H. [Universidad Diego Portales, Santiago (Chile)

    2010-01-15

    The problem of predicting the failure of water mains has been considered from different perspectives and using several methodologies in engineering literature. Nowadays, it is important to be able to accurately calculate the failure probabilities of pipes over time, since water company profits and service quality for citizens depend on pipe survival; forecasting pipe failures could have important economic and social implications. Quantitative tools (such as managerial or statistical indicators and reliable databases) are required in order to assess the current and future state of networks. Companies managing these networks are trying to establish models for evaluating the risk of failure in order to develop a proactive approach to the renewal process, instead of using traditional reactive pipe substitution schemes. The main objective of this paper is to compare models for evaluating the risk of failure in water supply networks. Using real data from a water supply company, this study has identified which network characteristics affect the risk of failure and which models better fit data to predict service breakdown. The comparison using the receiver operating characteristics (ROC) graph leads us to the conclusion that the best model is a generalized linear model. Also, we propose a procedure that can be applied to a pipe failure database, allowing the most appropriate decision rule to be chosen.

  3. Comparing two non-equilibrium approaches to modelling of a free-burning arc

    International Nuclear Information System (INIS)

    Baeva, M; Uhrlandt, D; Benilov, M S; Cunha, M D

    2013-01-01

    Two models of high-pressure arc discharges are compared with each other and with experimental data for an atmospheric-pressure free-burning arc in argon for arc currents of 20–200 A. The models account for space-charge effects and thermal and ionization non-equilibrium in somewhat different ways. One model considers space-charge effects, thermal and ionization non-equilibrium in the near-cathode region and thermal non-equilibrium in the bulk plasma. The other model considers thermal and ionization non-equilibrium in the entire arc plasma and space-charge effects in the near-cathode region. Both models are capable of predicting the arc voltage in fair agreement with experimental data. Differences are observed in the arc attachment to the cathode, which do not strongly affect the near-cathode voltage drop and the total arc voltage for arc currents exceeding 75 A. For lower arc currents the difference is significant but the arc column structure is quite similar and the predicted bulk plasma characteristics are relatively close to each other. (paper)

  4. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise.

    Science.gov (United States)

    Brown, Patrick T; Li, Wenhong; Cordero, Eugene C; Mauget, Steven A

    2015-04-21

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20(th) century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal.

  5. Comparative Study on a Solving Model and Algorithm for a Flush Air Data Sensing System

    Directory of Open Access Journals (Sweden)

    Yanbin Liu

    2014-05-01

    Full Text Available With the development of high-performance aircraft, precise air data are necessary to complete challenging tasks such as flight maneuvering with large angles of attack and at high speed. As a result, the flush air data sensing system (FADS) was developed to satisfy stricter control demands. In this paper, comparative studies on the solving model and algorithm for FADS are conducted. First, the basic principles of FADS are given to elucidate the nonlinear relations between the inputs and the outputs. Then, several different solving models and algorithms for FADS are provided to compute the air data, including the angle of attack, sideslip angle, dynamic pressure and static pressure. Afterwards, the evaluation criteria for the resulting models and algorithms are discussed to satisfy real design demands. Furthermore, a simulation using these algorithms is performed to identify the properties of the distinct models and algorithms, such as measuring precision and real-time features. The advantages of these models and algorithms corresponding to different flight conditions are also analyzed, and some suggestions on their engineering applications are proposed to help future research.
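
    As an illustration of the nonlinear input-output relations mentioned above, here is a sketch built on one widely cited FADS formulation, in which each port pressure follows p_i = qc(cos²θ_i + ε·sin²θ_i) + p∞, with θ_i the local flow incidence angle. The port layout, calibration parameter, and synthetic "measurements" are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from scipy.optimize import least_squares

lam = np.radians([0.0, 20.0, 20.0, 20.0, 20.0])   # port cone angles (assumed layout)
phi = np.radians([0.0, 0.0, 90.0, 180.0, 270.0])  # port clock angles
EPS = 0.1                                         # calibration parameter, assumed

def incidence_cos(alpha, beta):
    """cos(theta_i) for each port given angle of attack and sideslip angle."""
    return (np.cos(alpha) * np.cos(beta) * np.cos(lam)
            + np.sin(beta) * np.sin(phi) * np.sin(lam)
            + np.sin(alpha) * np.cos(beta) * np.cos(phi) * np.sin(lam))

def port_pressures(x):
    """Pressure model p_i = qc*(cos^2 th + EPS*sin^2 th) + p_inf."""
    alpha, beta, qc, p_inf = x
    c = incidence_cos(alpha, beta)
    return qc * (c ** 2 + EPS * (1.0 - c ** 2)) + p_inf

# Generate synthetic measurements from a "true" state, then recover the state
truth = np.array([np.radians(5.0), np.radians(2.0), 4000.0, 101325.0])
measured = port_pressures(truth)

def residual(x):
    return port_pressures(x) - measured

sol = least_squares(residual, x0=np.array([0.0, 0.0, 3000.0, 100000.0]))
alpha_deg, beta_deg = np.degrees(sol.x[:2])
print(f"alpha = {alpha_deg:.2f} deg, beta = {beta_deg:.2f} deg, "
      f"qc = {sol.x[2]:.0f} Pa, p_inf = {sol.x[3]:.0f} Pa")
```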

  6. In vitro radiosensitivity of six human cell lines. A comparative study with different statistical models

    International Nuclear Information System (INIS)

    Fertil, B.; Deschavanne, P.J.; Lachet, B.; Malaise, E.P.

    1980-01-01

    The intrinsic radiosensitivity of human cell lines (five tumor and one nontransformed fibroblastic) was studied in vitro. The survival curves were fitted by the single-hit multitarget, the two-hit multitarget, the single-hit multitarget with initial slope, and the quadratic models. The accuracy of the experimental results permitted evaluation of the various fittings. Both a statistical test (comparison of variances left unexplained by the four models) and a biological consideration (check for independence of the fitted parameters vis-a-vis the portion of the survival curve in question) were carried out. The quadratic model came out best with each of them. It described the low-dose effects satisfactorily, revealing a single-hit lethal component. This finding and the fact that the six survival curves displayed a continuous curvature ruled out the adoption of the target models as well as the widely used linear regression. As calculated by the quadratic model, the parameters of the six cell lines lead to the following conclusions: (a) the intrinsic radiosensitivity varies greatly among the different cell lines; (b) the interpretation of the fibroblast survival curve is not basically different from that of the tumor cell lines; and (c) the radiosensitivity of these human cell lines is comparable to that of other mammalian cell lines.
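
    A minimal sketch of fitting the quadratic (linear-quadratic) model S(D) = exp(−(αD + βD²)) by least squares on ln S; the doses and surviving fractions below are invented for illustration.

```python
import numpy as np

dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])        # dose in Gy (invented)
surv = np.array([1.0, 0.80, 0.55, 0.22, 0.06, 0.013])  # surviving fraction (invented)

# ln S = -(alpha*D + beta*D^2)  ->  linear in (D, D^2) with no intercept
A = np.column_stack([dose, dose ** 2])
coef, *_ = np.linalg.lstsq(A, -np.log(surv), rcond=None)
alpha, beta = coef
print(f"alpha = {alpha:.3f} /Gy, beta = {beta:.4f} /Gy^2, "
      f"alpha/beta = {alpha / beta:.1f} Gy")
```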

  7. A statin a day keeps the doctor away: comparative proverb assessment modelling study

    Science.gov (United States)

    Mizdrak, Anja; Scarborough, Peter

    2013-01-01

    Objective To model the effect on UK vascular mortality of all adults over 50 years old being prescribed either a statin or an apple a day. Design Comparative proverb assessment modelling study. Setting United Kingdom. Population Adults aged over 50 years. Intervention Either a statin a day for people not already taking a statin or an apple a day for everyone, assuming 70% compliance and no change in calorie consumption. The modelling used routinely available UK population datasets; parameters describing the relations between statins, apples, and health were derived from meta-analyses. Main outcome measure Mortality due to vascular disease. Results The estimated annual reduction in deaths from vascular disease of a statin a day, assuming 70% compliance and a reduction in vascular mortality of 12% (95% confidence interval 9% to 16%) per 1.0 mmol/L reduction in low density lipoprotein cholesterol, is 9400 (7000 to 12 500). The equivalent reduction from an apple a day, modelled using the PRIME model (assuming an apple weighs 100 g and that overall calorie consumption remains constant) is 8500 (95% credible interval 6200 to 10 800). Conclusions Both nutritional and pharmaceutical approaches to the prevention of vascular disease may have the potential to reduce UK mortality significantly. With similar reductions in mortality, a 150-year-old health promotion message is able to match modern medicine and is likely to have fewer side effects.
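
    The headline arithmetic can be sketched in a few lines: deaths averted ≈ baseline vascular deaths × relative risk reduction × compliance. The baseline figure below is a placeholder chosen only so the toy result lands near the reported 9400; the study's actual inputs came from UK population datasets and the PRIME model.

```python
# Back-of-the-envelope sketch only; not the study's actual calculation.
baseline_deaths = 110_000   # hypothetical annual UK vascular deaths in the target group
ldl_drop_mmol = 1.0         # assumed LDL reduction from a daily statin, mmol/L
rrr_per_mmol = 0.12         # 12% mortality reduction per 1.0 mmol/L (from the abstract)
compliance = 0.70           # compliance assumed in the abstract

deaths_averted = baseline_deaths * rrr_per_mmol * ldl_drop_mmol * compliance
print(f"~{deaths_averted:,.0f} deaths averted per year")  # ~9,240, near the reported 9,400
```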

  8. Comparative Analysis of Sectoral Innovation System and Diamond Model: The Case of Telecom Sector of Iran

    Directory of Open Access Journals (Sweden)

    Mohammad Hosein Rezazadeh Mehrizi

    2008-08-01

    Full Text Available Porter’s model of the competitive advantage of nations (known as the Diamond Model) has been widely used, and criticized as well, over the recent two decades. On the other hand, non-mainstream economists have tried to propose new frameworks for industrial analysis, among which the Sectoral Innovation System (SIS) is one of the most influential. After proposing an assessment framework, we use it to compare the SIS and Porter models and apply them to the case of the second mobile operator in Iran. Briefly, the SIS model sheds light on the innovation process and competence building and focuses on system failures that are of special importance in the context of developing countries, while the Diamond Model has the advantage of bringing the production process and the influential role of government into focus. However, each has its own shortcomings for analyzing industrial development in developing countries, and both fail to pay enough attention to foreign relations and international linkages.

  9. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics

    Directory of Open Access Journals (Sweden)

    Christopher W. Walmsley

    2013-11-01

    Full Text Available Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny with regard to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis in which high-resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results of variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different

  10. Comparative analysis of detection methods for congenital cytomegalovirus infection in a Guinea pig model.

    Science.gov (United States)

    Park, Albert H; Mann, David; Error, Marc E; Miller, Matthew; Firpo, Matthew A; Wang, Yong; Alder, Stephen C; Schleiss, Mark R

    2013-01-01

    To assess the validity of the guinea pig as a model for congenital cytomegalovirus (CMV) infection by comparing the effectiveness of detecting the virus by real-time polymerase chain reaction (PCR) in blood, urine, and saliva. Case-control study. Academic research. Eleven pregnant Hartley guinea pigs. Blood, urine, and saliva samples were collected from guinea pig pups delivered from pregnant dams inoculated with guinea pig CMV. These samples were then evaluated for the presence of guinea pig CMV by real-time PCR, assuming 100% transmission. Thirty-one pups delivered from 9 inoculated pregnant dams and 8 uninfected control pups underwent testing for guinea pig CMV and for auditory brainstem response hearing loss. Repeated-measures analysis of variance demonstrated that infected pups did not weigh significantly less than noninfected control pups. Six infected pups demonstrated auditory brainstem response hearing loss. The sensitivity and specificity of the real-time PCR assay on saliva samples were 74.2% and 100.0%, respectively. The sensitivity of the real-time PCR on blood and urine samples was significantly lower than that on saliva samples. Real-time PCR assays of blood, urine, and saliva revealed that saliva samples show high sensitivity and specificity for detecting congenital CMV infection in guinea pigs. This finding is consistent with recent screening studies in human newborns. The guinea pig may be a good animal model in which to compare different diagnostic assays for congenital CMV infection.

  11. Comparative analysis of elements and models of implementation in local-level spatial plans in Serbia

    Directory of Open Access Journals (Sweden)

    Stefanović Nebojša

    2017-01-01

    Full Text Available Implementation of local-level spatial plans is of paramount importance to the development of the local community. This paper aims to demonstrate the importance of and offer further directions for research into the implementation of spatial plans by presenting the results of a study on models of implementation. The paper describes the basic theoretical postulates of a model for implementing spatial plans. A comparative analysis of the application of elements and models of implementation of plans in practice was conducted based on the spatial plans for the local municipalities of Arilje, Lazarevac and Sremska Mitrovica. The analysis includes four models of implementation: the strategy and policy of spatial development; spatial protection; the implementation of planning solutions of a technical nature; and the implementation of rules of use, arrangement and construction of spaces. The main results of the analysis are presented and used to give recommendations for improving the elements and models of implementation. Final deliberations show that models of implementation are generally used in practice and combined in spatial plans. Based on the analysis of how models of implementation are applied in practice, a general conclusion concerning the complex character of the local level of planning is presented and elaborated. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. TR 36035: Spatial, Environmental, Energy and Social Aspects of Developing Settlements and Climate Change - Mutual Impacts and Grant no. III 47014: The Role and Implementation of the National Spatial Plan and Regional Development Documents in Renewal of Strategic Research, Thinking and Governance in Serbia]

  12. Stochastic or statistic? Comparing flow duration curve models in ungauged basins and changing climates

    Science.gov (United States)

    Müller, M. F.; Thompson, S. E.

    2015-09-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and a statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by a strong wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are strongly favored over statistical models.
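
    A sketch of the two objects being compared above: an empirical flow duration curve, and the Nash-Sutcliffe efficiency between predicted and observed FDC quantiles. The streamflow series below are synthetic.

```python
import numpy as np

def flow_duration_curve(q):
    """Sorted flows vs. exceedance probability (Weibull plotting positions)."""
    q_sorted = np.sort(q)[::-1]
    exceedance = np.arange(1, len(q) + 1) / (len(q) + 1)
    return exceedance, q_sorted

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(2)
q_obs = rng.lognormal(mean=1.0, sigma=0.8, size=3650)           # ~10 yr of daily flow
q_sim = q_obs * rng.lognormal(mean=0.0, sigma=0.1, size=3650)   # imperfect model

p, fdc_obs = flow_duration_curve(q_obs)
_, fdc_sim = flow_duration_curve(q_sim)
print(f"NSE between FDCs: {nse(fdc_obs, fdc_sim):.2f}")
```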

  13. Comparing stream-specific to generalized temperature models to guide salmonid management in a changing climate

    Science.gov (United States)

    Carlson, Andrew K.; Taylor, William W.; Hartikainen, Kelsey M.; Infante, Dana M.; Beard, Douglas; Lynch, Abigail

    2017-01-01

    Global climate change is predicted to increase air and stream temperatures and alter thermal habitat suitability for growth and survival of coldwater fishes, including brook charr (Salvelinus fontinalis), brown trout (Salmo trutta), and rainbow trout (Oncorhynchus mykiss). In a changing climate, accurate stream temperature modeling is increasingly important for sustainable salmonid management throughout the world. However, finite resource availability (e.g. funding, personnel) drives a tradeoff between thermal model accuracy and efficiency (i.e. cost-effective applicability at management-relevant spatial extents). Using different projected climate change scenarios, we compared the accuracy and efficiency of stream-specific and generalized (i.e. region-specific) temperature models for coldwater salmonids within and outside the State of Michigan, USA, a region with long-term stream temperature data and productive coldwater fisheries. Projected stream temperature warming between 2016 and 2056 ranged from 0.1 to 3.8 °C in groundwater-dominated streams and from 0.2 to 6.8 °C in surface-runoff-dominated systems in the State of Michigan. Despite their generally lower accuracy in predicting exact stream temperatures, generalized models accurately projected salmonid thermal habitat suitability in 82% of groundwater-dominated streams, including those with brook charr (80% accuracy), brown trout (89% accuracy), and rainbow trout (75% accuracy). In contrast, generalized models predicted thermal habitat suitability in runoff-dominated streams with much lower accuracy (54%). These results suggest that, amidst climate change and constraints in resource availability, generalized models are appropriate to forecast thermal conditions in groundwater-dominated streams within and outside Michigan and inform regional-level salmonid management strategies that are practical for coldwater fisheries managers, policy makers, and the public. We recommend fisheries professionals reserve resource
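
    A hypothetical sketch of what a generalized (region-level) model can look like: a linear air-to-stream temperature regression whose prediction is classified against species thermal thresholds. The coefficients and thresholds are placeholders for illustration, not the fitted Michigan values.

```python
def stream_temp(air_temp_c, intercept=2.0, slope=0.6):
    """Linear air-to-stream regression; groundwater-dominated streams tend to
    have flatter slopes (< 1). Coefficients here are assumed placeholders."""
    return intercept + slope * air_temp_c

# Assumed upper thermal limits for July mean stream temperature, deg C
THRESHOLDS_C = {"brook charr": 20.0, "brown trout": 22.0, "rainbow trout": 23.0}

july_air = 24.5                   # projected July mean air temperature, deg C
t = stream_temp(july_air)
for species, limit in THRESHOLDS_C.items():
    status = "suitable" if t <= limit else "unsuitable"
    print(f"{species}: predicted {t:.1f} C -> {status}")
```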

  14. Comparative Performance and Model Agreement of Three Common Photovoltaic Array Configurations.

    Science.gov (United States)

    Boyd, Matthew T

    2018-02-01

    Three grid-connected monocrystalline silicon arrays on the National Institute of Standards and Technology (NIST) campus in Gaithersburg, MD have been instrumented and monitored for 1 yr, with only minimal gaps in the data sets. These arrays range from 73 kW to 271 kW, and all use the same module, but have different tilts, orientations, and configurations. One array is installed facing east and west over a parking lot, one in an open field, and one on a flat roof. Various measured relationships and calculated standard metrics have been used to compare the relative performance of these arrays in their different configurations. Comprehensive performance models have also been created in the modeling software PVsyst for each array, and their predictions using measured on-site weather data are compared to the arrays' measured outputs. The comparisons show that all three arrays typically have monthly performance ratios (PRs) above 0.75, but differ significantly in their relative output, strongly correlating to their operating temperature and to a lesser extent their orientation. The model predictions are within 5% of the monthly delivered energy values except during the winter months, when there was intermittent snow on the arrays, and during maintenance and other outages.
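
    For reference, the monthly performance ratio (PR) cited above is the final yield divided by the reference yield, PR = (E_AC / P_STC) / (H_POA / G_STC); a sketch with illustrative numbers (not NIST's measurements) follows.

```python
# All values below are illustrative assumptions, not measured NIST data.
E_AC = 33_000.0    # kWh delivered by the array this month (assumed)
P_STC = 271.0      # kW array nameplate rating at standard test conditions
H_POA = 160.0      # kWh/m^2 plane-of-array insolation this month (assumed)
G_STC = 1.0        # kW/m^2 reference irradiance at standard test conditions

Y_f = E_AC / P_STC   # final yield, kWh per kW installed
Y_r = H_POA / G_STC  # reference yield, hours of equivalent full sun
print(f"PR = {Y_f / Y_r:.2f}")  # values above ~0.75 match the abstract's arrays
```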

  15. Large-scale Comparative Study of Hi-C-based Chromatin 3D Structure Modeling Methods

    KAUST Repository

    Wang, Cheng

    2018-05-17

    Chromatin is a complex polymer molecule in eukaryotic cells, primarily consisting of DNA and histones. Many works have shown that the 3D folding of chromatin structure plays an important role in DNA expression. The recently proposed Chromosome Conformation Capture technologies, especially the Hi-C assays, provide us an opportunity to study how the 3D structures of the chromatin are organized. Based on the data from Hi-C experiments, many chromatin 3D structure modeling methods have been proposed. However, there is limited ground truth to validate these methods and no robust chromatin structure alignment algorithms to evaluate their performance. In our work, we first conducted a thorough literature review of 25 publicly available population Hi-C-based chromatin 3D structure modeling methods. Furthermore, to evaluate and compare the performance of these methods, we proposed a novel data simulation method, which combines population Hi-C data and single-cell Hi-C data without ad hoc parameters. We also designed global and local alignment algorithms to measure the similarity between the templates and the chromatin structures predicted by different modeling methods. Finally, the results from large-scale comparative tests indicated that our alignment algorithms significantly outperform the algorithms in the literature.
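
    Since the thesis's own global and local alignment algorithms are only summarized here, a generic stand-in illustrates the alignment step: optimal rigid superposition via the Kabsch algorithm followed by RMSD, applied to toy structures. This is not the record's method, just a standard baseline for comparing two 3D conformations.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between point sets P and Q (n x 3) after optimal rigid alignment."""
    P = P - P.mean(axis=0)                  # center both structures
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                             # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                      # optimal rotation
    return float(np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1))))

rng = np.random.default_rng(3)
template = rng.normal(size=(100, 3))        # toy "ground truth" structure
angle = np.pi / 6                           # rotate the copy by 30 degrees
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
predicted = template @ Rz.T + rng.normal(0.0, 0.1, size=(100, 3))
print(f"RMSD after alignment: {kabsch_rmsd(predicted, template):.3f}")
```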

  16. Comparing regional precipitation and temperature extremes in climate model and reanalysis products

    Directory of Open Access Journals (Sweden)

    Oliver Angélil

    2016-09-01

    Full Text Available A growing field of research aims to characterise the contribution of anthropogenic emissions to the likelihood of extreme weather and climate events. These analyses can be sensitive to the shapes of the tails of simulated distributions. If tails are found to be unrealistically short or long, the anthropogenic signal emerges more or less clearly, respectively, from the noise of possible weather. Here we compare the chance of daily land-surface precipitation and near-surface temperature extremes generated by three Atmospheric Global Climate Models typically used for event attribution with distributions from six reanalysis products. The likelihoods of extremes are compared for area-averages over grid-cell and regional-sized spatial domains. Results suggest a bias favouring overly strong attribution estimates for hot and cold events over many regions of Africa and Australia, and a bias favouring overly weak attribution estimates over regions of North America and Asia. For rainfall, results are more sensitive to geographic location. Although the three models show similar results over many regions, they do disagree over others. Equally, results highlight the discrepancy among reanalysis products. This emphasises the importance of using multiple reanalysis and/or observation products, as well as multiple models, in event attribution studies.

  17. Models and analysis for multivariate failure time data

    Science.gov (United States)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the
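
    The two-stage idea described above can be sketched with a Clayton copula: stage 1 estimates the margins (empirical ranks stand in here for Kaplan-Meier estimates on uncensored toy data), and stage 2 maximizes the copula likelihood with the margins held fixed. Censoring, which the real procedures must handle, is omitted in this sketch.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def clayton_neg_loglik(theta, u, v):
    """Negative log-likelihood of the Clayton copula density (theta > 0)."""
    logc = (np.log(theta + 1.0)
            - (theta + 1.0) * (np.log(u) + np.log(v))
            - (2.0 + 1.0 / theta) * np.log(u ** -theta + v ** -theta - 1.0))
    return -np.sum(logc)

# Simulate Clayton-dependent pairs by conditional inversion, then map to
# exponential margins to mimic bivariate failure times.
rng = np.random.default_rng(4)
theta_true = 2.0
u = rng.uniform(size=500)
w = rng.uniform(size=500)
v = ((w ** (-theta_true / (theta_true + 1.0)) - 1.0) * u ** -theta_true
     + 1.0) ** (-1.0 / theta_true)
t1, t2 = -np.log(1.0 - u), -np.log(1.0 - v)     # toy bivariate failure times

# Stage 1: margins via empirical ranks (stand-in for Kaplan-Meier estimates)
uh = (np.argsort(np.argsort(t1)) + 1.0) / (len(t1) + 1.0)
vh = (np.argsort(np.argsort(t2)) + 1.0) / (len(t2) + 1.0)

# Stage 2: maximize the copula likelihood with the margins held fixed
res = minimize_scalar(clayton_neg_loglik, bounds=(0.01, 10.0),
                      args=(uh, vh), method="bounded")
print(f"theta_hat = {res.x:.2f} (true value {theta_true})")
```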

  18. Evaluating performance of simplified physically based models for shallow landslide susceptibility

    Directory of Open Access Journals (Sweden)

    G. Formetta

    2016-11-01

    Full Text Available Rainfall-induced shallow landslides can lead to loss of life and significant damage to private and public properties, transportation systems, etc. Predicting locations that might be susceptible to shallow landslides is a complex task and involves many disciplines: hydrology, geotechnical science, geology, hydrogeology, geomorphology, and statistics. Two main approaches are commonly used: statistical or physically based models. Reliable model applications involve automatic parameter calibration, objective quantification of the quality of susceptibility maps, and model sensitivity analyses. This paper presents a methodology to systematically and objectively calibrate, verify, and compare different models and model performance indicators in order to identify and select the models whose behavior is the most reliable for particular case studies. The procedure was implemented in a package of models for landslide susceptibility analysis and integrated in the NewAge-JGrass hydrological model. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight
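
    As background for the simplified physically based models (M1-M3) mentioned above, here is a sketch of the infinite-slope factor of safety that such models typically build on; the package's exact formulations may differ, and all parameter values below are illustrative.

```python
import numpy as np

def factor_of_safety(C, gamma_s, gamma_w, z, h, beta_deg, phi_deg):
    """Infinite-slope factor of safety with a water table:
    FS = [C + (gamma_s*z - gamma_w*h) * cos^2(beta) * tan(phi)]
         / (gamma_s * z * sin(beta) * cos(beta)).
    FS < 1 indicates potential shallow failure."""
    beta, phi = np.radians(beta_deg), np.radians(phi_deg)
    resisting = C + (gamma_s * z - gamma_w * h) * np.cos(beta) ** 2 * np.tan(phi)
    driving = gamma_s * z * np.sin(beta) * np.cos(beta)
    return resisting / driving

fs = factor_of_safety(C=5.0,         # soil cohesion, kPa (assumed)
                      gamma_s=18.0,  # soil unit weight, kN/m^3
                      gamma_w=9.81,  # water unit weight, kN/m^3
                      z=2.0,         # soil depth above slip surface, m
                      h=1.0,         # water table height above slip surface, m
                      beta_deg=35.0, phi_deg=30.0)
print(f"FS = {fs:.2f}")
```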