WorldWideScience

Sample records for model fitting

  1. Fitting PAC spectra with stochastic models: PolyPacFit

    Energy Technology Data Exchange (ETDEWEB)

    Zacate, M. O., E-mail: zacatem1@nku.edu [Northern Kentucky University, Department of Physics and Geology (United States); Evenson, W. E. [Utah Valley University, College of Science and Health (United States); Newhouse, R.; Collins, G. S. [Washington State University, Department of Physics and Astronomy (United States)

    2010-04-15

    PolyPacFit is an advanced fitting program for time-differential perturbed angular correlation (PAC) spectroscopy. It incorporates stochastic models and provides robust options for customization of fits. Notable features of the program include platform independence and support for (1) fits to stochastic models of hyperfine interactions, (2) user-defined constraints among model parameters, (3) simultaneous fits to multiple spectra, and (4) nuclear probes of any spin.

  2. A Stepwise Fitting Procedure for automated fitting of Ecopath with Ecosim models

    Directory of Open Access Journals (Sweden)

    Erin Scott

    2016-01-01

    The Stepwise Fitting Procedure automates testing of the alternative hypotheses used for fitting Ecopath with Ecosim (EwE) models to observed reference data (Mackinson et al. 2009). Calibration of EwE model predictions against observed data is important for evaluating any model that will be used for ecosystem-based management. Until now, the model fitting procedure in EwE has been carried out manually: a repetitive task involving setting up more than 1000 specific individual searches to find the statistically ‘best fit’ model. The novel fitting procedure automates this manual process, producing accurate results and letting the modeller concentrate on investigating the ‘best fit’ model for ecological accuracy.

  3. Fitting neuron models to spike trains

    Directory of Open Access Journals (Sweden)

    Cyrille eRossant

    2011-02-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.

  4. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    DEFF Research Database (Denmark)

    Bolker, B.M.; Gardner, B.; Maunder, M.

    2013-01-01

    Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield...

  5. Induced subgraph searching for geometric model fitting

    Science.gov (United States)

    Xiao, Fan; Xiao, Guobao; Yan, Yan; Wang, Xing; Wang, Hanzi

    2017-11-01

    In this paper, we propose a novel graph-based model fitting method to fit and segment multiple-structure data. In the graph constructed on the data, each model instance is represented as an induced subgraph. Following the idea of pursuing the maximum consensus, the multiple geometric model fitting problem is formulated as searching for a set of induced subgraphs including the maximum union set of vertices. After the generation and refinement of the induced subgraphs that represent the model hypotheses, the search is conducted on the "qualified" subgraphs. Multiple model instances can be simultaneously estimated by solving a converted problem. Then, we introduce an energy evaluation function to determine the number of model instances in the data. The proposed method is able to effectively estimate the number and the parameters of model instances in data severely corrupted by outliers and noise. Experimental results on synthetic data and real images validate the favorable performance of the proposed method compared with several state-of-the-art fitting methods.
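
    The consensus-maximisation idea behind this abstract can be illustrated with a much simpler, classical sketch: RANSAC-style line fitting, where each hypothesis is scored by the size of its inlier (consensus) set. This toy is only an analogy for the principle; the paper's induced-subgraph search and energy function are not reproduced here, and the data and tolerance below are illustrative.

```python
import random

def max_consensus_line(points, n_iters=200, tol=0.1, seed=0):
    """Toy maximum-consensus line fit (RANSAC-style).

    Repeatedly hypothesise a line from two sampled points and keep the
    hypothesis whose consensus set (inliers within `tol`) is largest.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip vertical hypotheses in this sketch
        a = (y2 - y1) / (x2 - x1)  # slope
        b = y1 - a * x1            # intercept
        inliers = [p for p in points if abs(p[1] - (a * p[0] + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Data: points on y = 2x + 1 plus three gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -5), (1, 17)]
model, inliers = max_consensus_line(pts)
```

    The outliers never join the winning consensus set, so the recovered slope and intercept match the generating line; extending this single-model search to multiple simultaneous structures is exactly the harder problem the paper addresses.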

  6. Fitting Hidden Markov Models to Psychological Data

    Directory of Open Access Journals (Sweden)

    Ingmar Visser

    2002-01-01

    Markov models have been used extensively in the psychology of learning. Applications of hidden Markov models are rare, however. This is partially due to the fact that comprehensive statistics for model selection and model assessment are lacking in the psychological literature. We present model selection and model assessment statistics that are particularly useful in applying hidden Markov models in psychology. These statistics are presented and evaluated by simulation studies for a toy example. We compare AIC, BIC and related criteria, and introduce a prediction error measure for assessing goodness-of-fit. In a simulation study, two methods of fitting equality constraints are compared. In two illustrative examples with experimental data we apply selection criteria, fit models with constraints, and assess goodness-of-fit. First, data from a concept identification task are analyzed. Hidden Markov models provide a flexible approach to analyzing such data when compared to other modeling methods. Second, a novel application of hidden Markov models in implicit learning is presented. Hidden Markov models are used in this context to quantify the knowledge that subjects express in an implicit learning task. This method of analyzing implicit learning data provides a comprehensive approach for addressing important theoretical issues in the field.
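
    The AIC/BIC comparison the abstract describes reduces to two penalised-likelihood formulas. The sketch below uses hypothetical log-likelihoods and parameter counts for 2- and 3-state hidden Markov models; the numbers are invented purely to show how the two criteria can disagree.

```python
import math

def aic(log_lik, n_params):
    """Akaike information criterion: lower is better."""
    return 2 * n_params - 2 * log_lik

def bic(log_lik, n_params, n_obs):
    """Bayesian information criterion: penalises extra parameters more
    heavily than AIC once n_obs exceeds about e^2 ~ 7.4 observations."""
    return math.log(n_obs) * n_params - 2 * log_lik

# Hypothetical fits of 2- and 3-state HMMs to the same 500 observations.
ll2, k2 = -612.4, 9    # 2 states: fewer free parameters
ll3, k3 = -603.0, 17   # 3 states: better likelihood, many more parameters

aic2, aic3 = aic(ll2, k2), aic(ll3, k3)          # 1242.8 vs 1240.0
bic2, bic3 = bic(ll2, k2, 500), bic(ll3, k3, 500)
```

    With these illustrative numbers AIC prefers the 3-state model while BIC prefers the 2-state model, which is why the paper evaluates several criteria side by side rather than relying on one.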

  7. Contrast Gain Control Model Fits Masking Data

    Science.gov (United States)

    Watson, Andrew B.; Solomon, Joshua A.; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    We studied the fit of a contrast gain control model to data of Foley (JOSA 1994), consisting of thresholds for a Gabor patch masked by gratings of various orientations, or by compounds of two orientations. Our general model includes models of Foley and Teo & Heeger (IEEE 1994). Our specific model used a bank of Gabor filters with octave bandwidths at 8 orientations. Excitatory and inhibitory nonlinearities were power functions with exponents of 2.4 and 2. Inhibitory pooling was broad in orientation, but narrow in spatial frequency and space. Minkowski pooling used an exponent of 4. All of the data for observer KMF were well fit by the model. We have developed a contrast gain control model that fits masking data. Unlike Foley's, our model accepts images as inputs. Unlike Teo & Heeger's, our model did not require multiple channels for different dynamic ranges.
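
    The nonlinear stages described above can be sketched in a few lines. The exponents (2.4, 2, and the Minkowski exponent 4) are taken from the abstract; the input excitations, the semi-saturation constant sigma, and the uniform inhibitory pool are illustrative simplifications of the full filter-bank model.

```python
def gain_control_response(excitations, sigma=0.1, p=2.4, q=2.0):
    """Divisive contrast gain control: each unit's excitation is raised
    to power p and divided by a pooled inhibitory signal (power q).
    sigma and the uniform pooling are illustrative assumptions."""
    inhibition = sigma ** q + sum(e ** q for e in excitations)
    return [e ** p / inhibition for e in excitations]

def minkowski_pool(responses, m=4.0):
    """Minkowski pooling across units; as m grows it approaches max."""
    return sum(r ** m for r in responses) ** (1.0 / m)

# Decision variable for three illustrative filter excitations.
d = minkowski_pool(gain_control_response([0.2, 0.5, 0.9]))
```

    With a large Minkowski exponent the pool is dominated by the strongest response, which is what lets a single scalar summarize detection across many filters.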

  8. Modeling Evolution on Nearly Neutral Network Fitness Landscapes

    Science.gov (United States)

    Yakushkina, Tatiana; Saakian, David B.

    2017-08-01

    To describe virus evolution, it is necessary to define a fitness landscape. In this article, we consider microscopic models with an advanced version of neutral-network fitness landscapes. In this problem setting, we suppose the fitness difference between one-point mutation neighbors to be small. We construct a modification of the Wright-Fisher model, which is related to ordinary infinite-population models with a nearly neutral network fitness landscape in the large-population limit. From the microscopic models in the realistic sequence space, we derive two versions of nearly neutral network models: with sinks and without sinks. We claim that the suggested model describes the evolutionary dynamics of RNA viruses better than the traditional Wright-Fisher model with few sequences.

  9. Analytical fitting model for rough-surface BRDF.

    Science.gov (United States)

    Renhorn, Ingmar G E; Boreman, Glenn D

    2008-08-18

    A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.

  10. Random-growth urban model with geographical fitness

    Science.gov (United States)

    Kii, Masanobu; Akimoto, Keigo; Doi, Kenji

    2012-12-01

    This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
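
    A minimal simulation of the model class discussed above, preferential attachment weighted by a fitness term, can be written as follows. This is a sketch under strong assumptions (a fixed number of cities, one resident added per step, invented fitness values), not the authors' dynamical equations, which also allow negative fitness and city creation.

```python
import random

def grow_cities(n_cities, steps, fitness, seed=1):
    """Toy fitness-weighted preferential attachment: at each step one
    new resident joins city i with probability proportional to
    fitness[i] * population[i]."""
    rng = random.Random(seed)
    pop = [1] * n_cities
    for _ in range(steps):
        weights = [f * p for f, p in zip(fitness, pop)]
        total = sum(weights)
        r = rng.random() * total
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                pop[i] += 1
                break
    return pop

# One city with fitness 3 competes against four cities with fitness 1;
# the high-fitness city tends to dominate the final size distribution.
pop = grow_cities(5, 2000, fitness=[1.0, 1.0, 1.0, 1.0, 3.0])
```

    Because randomness enters only through which city each new resident joins, individual trajectories are smooth in the aggregate, echoing the paper's contrast with Gibrat-style growth noise.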

  11. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    Science.gov (United States)

    Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magunsson, Arni; Martell, Steve; Nash, John; Nielson, Anders; Regentz, Jim; Skaug, Hans; Zipkin, Elise

    2013-01-01

    1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.

  12. The FITS model office ergonomics program: a model for best practice.

    Science.gov (United States)

    Chim, Justine M Y

    2014-01-01

    An effective office ergonomics program can produce positive results in reducing musculoskeletal injury rates, enhancing productivity, and improving staff well-being and job satisfaction. Its objective is to provide a systematic solution for managing the potential risk of musculoskeletal disorders among computer users in an office setting. The FITS Model Office Ergonomics Program has been developed, drawing on the legislative requirements for promoting the health and safety of workers who use computers for extended periods as well as on previous research findings. The Model is built on practical industrial knowledge in ergonomics, occupational health and safety management, and human resources management in Hong Kong and overseas. This paper proposes a comprehensive office ergonomics program, the FITS Model, whose elements are (1) Furniture Evaluation and Selection; (2) Individual Workstation Assessment; (3) Training and Education; and (4) Stretching Exercises and Rest Breaks. An experienced ergonomics practitioner should be included in the program design and implementation. Through the FITS Model Office Ergonomics Program, the risk of musculoskeletal disorders among computer users can be eliminated or minimized, and workplace health and safety and employees' wellness enhanced.

  13. topicmodels: An R Package for Fitting Topic Models

    Directory of Open Access Journals (Sweden)

    Bettina Grun

    2011-05-01

    Topic models allow the probabilistic modeling of term frequency occurrences in documents. The fitted model can be used to estimate the similarity between documents as well as between a set of specified keywords using an additional layer of latent variables which are referred to as topics. The R package topicmodels provides basic infrastructure for fitting topic models based on data structures from the text mining package tm. The package includes interfaces to two algorithms for fitting topic models: the variational expectation-maximization algorithm provided by David M. Blei and co-authors and an algorithm using Gibbs sampling by Xuan-Hieu Phan and co-authors.

  14. A Model Fit Statistic for Generalized Partial Credit Model

    Science.gov (United States)

    Liang, Tie; Wells, Craig S.

    2009-01-01

    Investigating the fit of a parametric model is an important part of the measurement process when implementing item response theory (IRT), but research examining it is limited. A general nonparametric approach for detecting model misfit, introduced by J. Douglas and A. S. Cohen (2001), has exhibited promising results for the two-parameter logistic…

  15. Local fit evaluation of structural equation models using graphical criteria.

    Science.gov (United States)

    Thoemmes, Felix; Rosseel, Yves; Textor, Johannes

    2018-03-01

    Evaluation of model fit is critically important for every structural equation model (SEM), and sophisticated methods have been developed for this task. Among them are the χ² goodness-of-fit test, decomposition of the χ², derived measures like the popular root mean square error of approximation (RMSEA) or comparative fit index (CFI), or inspection of residuals or modification indices. Many of these methods provide a global approach to model fit evaluation: A single index is computed that quantifies the fit of the entire SEM to the data. In contrast, graphical criteria like d-separation or trek-separation allow derivation of implications that can be used for local fit evaluation, an approach that is hardly ever applied. We provide an overview of local fit evaluation from the viewpoint of SEM practitioners. In the presence of model misfit, local fit evaluation can potentially help in pinpointing where the problem with the model lies. For models that do fit the data, local tests can identify the parts of the model that are corroborated by the data. Local tests can also be conducted before a model is fitted at all, and they can be used even for models that are globally underidentified. We discuss appropriate statistical local tests, and provide applied examples. We also present novel software in R that automates this type of local fit evaluation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  16. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
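
    As a hedged sketch of the curve-fitting idea: for a fixed angular frequency, a single-term sine model is linear in its coefficients, so it can be fitted in closed form by ordinary least squares and scored by RMSE as in the abstract. The synthetic data and frequency below are invented; the paper itself fits two-term Gaussian and sine models with a fitting toolbox.

```python
import math

def fit_sine(xs, ys, omega):
    """Least-squares fit of y = a*sin(omega*x) + b*cos(omega*x) + c
    for a known angular frequency omega (the problem is then linear)."""
    cols = [[math.sin(omega * x) for x in xs],
            [math.cos(omega * x) for x in xs],
            [1.0] * len(xs)]
    # Normal equations A^T A beta = A^T y for the 3 basis columns.
    ata = [[sum(u * v for u, v in zip(cols[i], cols[j])) for j in range(3)]
           for i in range(3)]
    aty = [sum(u * y for u, y in zip(cols[i], ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    m = [row[:] + [b] for row, b in zip(ata, aty)]
    for k in range(3):
        piv = max(range(k, 3), key=lambda r: abs(m[r][k]))
        m[k], m[piv] = m[piv], m[k]
        for r in range(k + 1, 3):
            f = m[r][k] / m[k][k]
            for c in range(k, 4):
                m[r][c] -= f * m[k][c]
    beta = [0.0, 0.0, 0.0]
    for k in (2, 1, 0):  # back substitution
        beta[k] = (m[k][3] - sum(m[k][c] * beta[c]
                                 for c in range(k + 1, 3))) / m[k][k]
    return beta

def rmse(xs, ys, beta, omega):
    a, b, c = beta
    return math.sqrt(sum(
        (y - (a * math.sin(omega * x) + b * math.cos(omega * x) + c)) ** 2
        for x, y in zip(xs, ys)) / len(xs))

# Synthetic "radiation" curve: y = 2 sin(x) + 0.5 cos(x) + 3.
xs = [i * 0.1 for i in range(100)]
ys = [2 * math.sin(x) + 0.5 * math.cos(x) + 3 for x in xs]
beta = fit_sine(xs, ys, omega=1.0)
err = rmse(xs, ys, beta, 1.0)
```

    On noise-free data the recovered coefficients match the generating curve and RMSE is essentially zero; on measured radiation data the RMSE (and R²) of competing fits is what ranks the methods.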

  17. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.

  18. Curve fitting methods for solar radiation data modeling

    International Nuclear Information System (INIS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-01-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.

  19. Automatic fitting of spiking neuron models to electrophysiological recordings

    Directory of Open Access Journals (Sweden)

    Cyrille Rossant

    2010-03-01

    Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming, both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present, it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.
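
    A drastically simplified, CPU-only sketch of the task: simulate a leaky integrate-and-fire neuron and search one parameter so the simulated spike train matches a "recorded" one. The paper fits richer models against a spike-time coincidence criterion on GPUs; all parameter values here are illustrative.

```python
def lif_spike_times(current, dt=0.001, tau=0.02, R=1.0,
                    v_thresh=1.0, v_reset=0.0):
    """Euler-integrated leaky integrate-and-fire model:
    dv/dt = (-v + R*I(t)) / tau, with spike-and-reset at v_thresh."""
    v, spikes = 0.0, []
    for i, I in enumerate(current):
        v += dt * (-v + R * I) / tau
        if v >= v_thresh:
            spikes.append(i * dt)
            v = v_reset
    return spikes

def fit_threshold(current, target_spikes, candidates):
    """Toy model fitting: pick the threshold whose simulated spike
    count best matches the recorded train (a stand-in for the richer
    coincidence-based criterion used in the paper)."""
    return min(candidates,
               key=lambda th: abs(len(lif_spike_times(current, v_thresh=th))
                                  - len(target_spikes)))

# A constant 1.5-unit current step; "recorded" spikes come from a
# reference run with threshold 1.2, which the search should recover.
current = [1.5] * 1000
target = lif_spike_times(current, v_thresh=1.2)
best = fit_threshold(current, target, candidates=[0.8, 1.0, 1.2, 1.4])
```

    Real fits optimise several parameters at once over many candidate models, which is why the paper's vectorized, GPU-parallel evaluation matters.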

  20. Measured, modeled, and causal conceptions of fitness

    Science.gov (United States)

    Abrams, Marshall

    2012-01-01

    This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804

  1. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  2. ITEM LEVEL DIAGNOSTICS AND MODEL - DATA FIT IN ITEM ...

    African Journals Online (AJOL)

    Global Journal

    Item response theory (IRT) is a framework for modeling and analyzing item response ... data. Though, there is an argument that the evaluation of fit in IRT modeling has been ... National Council on Measurement in Education ... model data fit should be based on three types of ... prediction should be assessed through the.

  3. Goodness-of-Fit Assessment of Item Response Theory Models

    Science.gov (United States)

    Maydeu-Olivares, Alberto

    2013-01-01

    The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…

  4. Modelling population dynamics model formulation, fitting and assessment using state-space methods

    CERN Document Server

    Newman, K B; Morgan, B J T; King, R; Borchers, D L; Cole, D J; Besbeas, P; Gimenez, O; Thomas, L

    2014-01-01

    This book gives a unifying framework for estimating the abundance of open populations: populations subject to births, deaths and movement, given imperfect measurements or samples of the populations. The focus is primarily on populations of vertebrates for which dynamics are typically modelled within the framework of an annual cycle, and for which stochastic variability in the demographic processes is usually modest. Discrete-time models are developed in which animals can be assigned to discrete states such as age class, gender, maturity, population (within a metapopulation), or species (for multi-species models). The book goes well beyond estimation of abundance, allowing inference on underlying population processes such as birth or recruitment, survival and movement. This requires the formulation and fitting of population dynamics models. The resulting fitted models yield both estimates of abundance and estimates of parameters characterizing the underlying processes.

  5. Does model fit decrease the uncertainty of the data in comparison with a general non-model least squares fit?

    International Nuclear Information System (INIS)

    Pronyaev, V.G.

    2003-01-01

    The information entropy is taken as a measure of knowledge about the object, and the reduced univariate variance as a common measure of uncertainty. Covariances in model versus non-model least-squares fits are discussed.

  6. Assessing fit in Bayesian models for spatial processes

    KAUST Repository

    Jun, M.; Katzfuss, M.; Hu, J.; Johnson, V. E.

    2014-01-01

    © 2014 John Wiley & Sons, Ltd. Gaussian random fields are frequently used to model spatial and spatial-temporal data, particularly in geostatistical settings. As much of the attention of the statistics community has been focused on defining and estimating the mean and covariance functions of these processes, little effort has been devoted to developing goodness-of-fit tests to allow users to assess the models' adequacy. We describe a general goodness-of-fit test and related graphical diagnostics for assessing the fit of Bayesian Gaussian process models using pivotal discrepancy measures. Our method is applicable for both regularly and irregularly spaced observation locations on planar and spherical domains. The essential idea behind our method is to evaluate pivotal quantities defined for a realization of a Gaussian random field at parameter values drawn from the posterior distribution. Because the nominal distribution of the resulting pivotal discrepancy measures is known, it is possible to quantitatively assess model fit directly from the output of Markov chain Monte Carlo algorithms used to sample from the posterior distribution on the parameter space. We illustrate our method in a simulation study and in two applications.

  7. Assessing fit in Bayesian models for spatial processes

    KAUST Repository

    Jun, M.

    2014-09-16

    © 2014 John Wiley & Sons, Ltd. Gaussian random fields are frequently used to model spatial and spatial-temporal data, particularly in geostatistical settings. As much of the attention of the statistics community has been focused on defining and estimating the mean and covariance functions of these processes, little effort has been devoted to developing goodness-of-fit tests to allow users to assess the models' adequacy. We describe a general goodness-of-fit test and related graphical diagnostics for assessing the fit of Bayesian Gaussian process models using pivotal discrepancy measures. Our method is applicable for both regularly and irregularly spaced observation locations on planar and spherical domains. The essential idea behind our method is to evaluate pivotal quantities defined for a realization of a Gaussian random field at parameter values drawn from the posterior distribution. Because the nominal distribution of the resulting pivotal discrepancy measures is known, it is possible to quantitatively assess model fit directly from the output of Markov chain Monte Carlo algorithms used to sample from the posterior distribution on the parameter space. We illustrate our method in a simulation study and in two applications.

  8. An R package for fitting age, period and cohort models

    Directory of Open Access Journals (Sweden)

    Adriano Decarli

    2014-11-01

    In this paper we present an R implementation of a GLIM macro which fits age-period-cohort models following Osmond and Gardner. In addition to the estimates of the corresponding model, owing to the programming capability of R as an object-oriented language, methods for printing, plotting and summarizing the results are provided. Furthermore, the researcher has full access to the output of the main function (apc), which returns all the models fitted within the function. It is thus possible to critically evaluate the goodness of fit of the resulting model.

  9. Efficient occupancy model-fitting for extensive citizen-science data

    Science.gov (United States)

    Morgan, Byron J. T.; Freeman, Stephen N.; Ridout, Martin S.; Brereton, Tom M.; Fox, Richard; Powney, Gary D.; Roy, David B.

    2017-01-01

    Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium database, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species’ range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen
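
    The covariate-based site model at the heart of this approach is ordinary logistic regression. The sketch below fits logit(p) = b0 + b1·x by gradient descent on made-up occupancy data; it ignores detection error and the list-length structure of the real analysis, and the covariate values are invented.

```python
import math

def fit_logistic(xs, ys, lr=0.1, n_iter=5000):
    """Logistic regression by gradient descent: models occupancy
    probability as logit(p) = b0 + b1 * covariate."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(n_iter):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (p - y) / n        # gradient of mean negative log-likelihood
            g1 += (p - y) * x / n
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

# Occupancy (1/0) against a habitat covariate: sites with a higher
# covariate value are occupied in this toy data set.
xs = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
ys = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
b0, b1 = fit_logistic(xs, ys)
```

    Because the fit is classical maximum likelihood, standard tools (likelihood ratios, AIC, residual checks) apply directly, which is the efficiency argument the abstract makes against fully Bayesian random-effects fitting.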

  10. Automated Model Fit Method for Diesel Engine Control Development

    NARCIS (Netherlands)

    Seykens, X.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.

    2014-01-01

This paper presents an automated fit method for a control-oriented physics-based diesel engine combustion model. The method is based on the combination of a dedicated measurement procedure and a structured approach to fit the required combustion model parameters. Only a data set is required that is

  11. Automated model fit method for diesel engine control development

    NARCIS (Netherlands)

    Seykens, X.L.J.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.J.H.

    2014-01-01

This paper presents an automated fit method for a control-oriented physics-based diesel engine combustion model. The method is based on the combination of a dedicated measurement procedure and a structured approach to fit the required combustion model parameters. Only a data set is required that is

  12. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    Science.gov (United States)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

The goodness-of-fit indicator, i.e. the efficiency criterion, is very important for model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and a zero-mean generalized error distribution (GED) to fit the distribution of model errors after the BC transformation. The BC-GED model unifies all recent distance-based goodness-of-fit indicators, and reveals that the widely used mean square error (MSE) and mean absolute error (MAE) imply the statistical assumptions that model errors follow a zero-mean Gaussian distribution and Laplace distribution, respectively. Empirical knowledge about goodness-of-fit indicators can also be easily interpreted through the BC-GED model; e.g. the sensitivity to high flows of indicators with a large power of model errors results from the low probability of large model errors in the distribution assumed by these indicators. To assess the effect of the BC-GED model parameters (the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and model simulation results among the six indicators demonstrates that these indicators can be clearly separated into two classes by the GED kurtosis β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated with the class β > 1 of distance-based goodness-of-fit indicators captures high flow very well but mimics the baseflow very badly, whereas calibration with the class β ≤ 1 mimics the baseflow very well, because first the larger value of β, the greater emphasis is put on
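The claim above, that minimising MSE presumes Gaussian errors while minimising MAE presumes Laplace errors, can be illustrated numerically: over constant models, the MSE minimiser is the sample mean (the Gaussian maximum-likelihood location estimate) and the MAE minimiser is the sample median (the Laplace one). A minimal sketch with synthetic data, not the study's watershed data:

```python
import numpy as np

rng = np.random.default_rng(42)
# Heavy-tailed "model errors": Laplace-distributed around a true value of 5.0.
data = 5.0 + rng.laplace(0.0, 1.0, size=2001)

grid = np.linspace(2.0, 8.0, 1201)  # candidate constant "models"
mse = np.mean((data[None, :] - grid[:, None]) ** 2, axis=1)   # ~ Gaussian negative log-likelihood
mae = np.mean(np.abs(data[None, :] - grid[:, None]), axis=1)  # ~ Laplace negative log-likelihood

c_mse = grid[np.argmin(mse)]
c_mae = grid[np.argmin(mae)]

# The MSE minimiser is the sample mean (Gaussian ML estimate of location);
# the MAE minimiser is the sample median (Laplace ML estimate).
print(abs(c_mse - data.mean()) < 0.01, abs(c_mae - np.median(data)) < 0.01)  # True True
```

The grid search stands in for whatever optimiser a hydrologic calibration would actually use; only the objective function matters for the point being made.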

  13. Correcting Model Fit Criteria for Small Sample Latent Growth Models with Incomplete Data

    Science.gov (United States)

    McNeish, Daniel; Harring, Jeffrey R.

    2017-01-01

To date, small sample problems with latent growth models (LGMs) have not received the same amount of attention in the literature as related mixed-effect models (MEMs). Although many models can be interchangeably framed as an LGM or an MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…

  14. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species.

    Science.gov (United States)

    Adams, Matthew P; Collier, Catherine J; Uthicke, Sven; Ow, Yan X; Langlois, Lucas; O'Brien, Katherine R

    2017-01-04

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (T opt ) for maximum photosynthetic rate (P max ). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
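The parameterisation advocated above, expressing the curve directly in terms of the thermal optimum T_opt and maximum rate P_max, can be sketched with a simple grid search. The Gaussian-shaped response, the species values and the noise level below are illustrative assumptions, not the paper's fitted models:

```python
import numpy as np

rng = np.random.default_rng(1)
T = np.linspace(15.0, 40.0, 26)                      # temperature, deg C
# Synthetic "measurements" for a hypothetical species: Topt = 30, Pmax = 10.
P_obs = 10.0 * np.exp(-((T - 30.0) / 6.0) ** 2) + rng.normal(0.0, 0.2, T.size)

best = (np.inf, None, None, None)
for topt in np.arange(20.0, 36.0, 0.1):
    for w in np.arange(3.0, 12.0, 0.1):
        m = np.exp(-((T - topt) / w) ** 2)
        pmax = (P_obs @ m) / (m @ m)                 # Pmax enters linearly: closed form
        sse = float(np.sum((P_obs - pmax * m) ** 2))
        if sse < best[0]:
            best = (sse, topt, w, pmax)

sse, topt, w, pmax = best
print(round(topt, 1), round(pmax, 2))                # should land near the true (30, 10)
```

Because T_opt and P_max appear directly as parameters, their fitted values carry the biological meaning the abstract asks for, with no post-processing of the curve required.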

  15. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    Science.gov (United States)

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O'Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.

  16. Fitting ARMA Time Series by Structural Equation Models.

    Science.gov (United States)

    van Buuren, Stef

    1997-01-01

    This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)

  17. Are Physical Education Majors Models for Fitness?

    Science.gov (United States)

    Kamla, James; Snyder, Ben; Tanner, Lori; Wash, Pamela

    2012-01-01

    The National Association of Sport and Physical Education (NASPE) (2002) has taken a firm stance on the importance of adequate fitness levels of physical education teachers stating that they have the responsibility to model an active lifestyle and to promote fitness behaviors. Since the NASPE declaration, national initiatives like Let's Move…

  18. Flexible competing risks regression modeling and goodness-of-fit

    DEFF Research Database (Denmark)

    Scheike, Thomas; Zhang, Mei-Jie

    2008-01-01

    In this paper we consider different approaches for estimation and assessment of covariate effects for the cumulative incidence curve in the competing risks model. The classic approach is to model all cause-specific hazards and then estimate the cumulative incidence curve based on these cause...... models that is easy to fit and contains the Fine-Gray model as a special case. One advantage of this approach is that our regression modeling allows for non-proportional hazards. This leads to a new simple goodness-of-fit procedure for the proportional subdistribution hazards assumption that is very easy...... of the flexible regression models to analyze competing risks data when non-proportionality is present in the data....

  19. Are Fit Indices Biased in Favor of Bi-Factor Models in Cognitive Ability Research?: A Comparison of Fit in Correlated Factors, Higher-Order, and Bi-Factor Models via Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Grant B. Morgan

    2015-02-01

Bi-factor confirmatory factor models have been influential in research on cognitive abilities because they often better fit the data than correlated factors and higher-order models. They also instantiate a perspective that differs from that offered by other models. Motivated by previous work that hypothesized an inherent statistical bias of fit indices favoring the bi-factor model, we compared the fit of correlated factors, higher-order, and bi-factor models via Monte Carlo methods. When data were sampled from a true bi-factor structure, each of the approximate fit indices was more likely than not to identify the bi-factor solution as the best fitting. When samples were selected from a true multiple correlated factors structure, approximate fit indices were more likely overall to identify the correlated factors solution as the best fitting. In contrast, when samples were generated from a true higher-order structure, approximate fit indices tended to identify the bi-factor solution as best fitting. There was extensive overlap of fit values across the models regardless of true structure. Although one model may fit a given dataset best relative to the other models, each of the models tended to fit the data well in absolute terms. Given this variability, models must also be judged on substantive and conceptual grounds.

  20. A Comparison of Item Fit Statistics for Mixed IRT Models

    Science.gov (United States)

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G², Orlando and Thissen's S-X² and S-G², and Stone's χ²* and G²*. To investigate the…

  1. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    Science.gov (United States)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

Motivated by a problem on fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we show that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.
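The link between tuning parameters and the acceptance rate noted above is generic to random-walk Metropolis samplers of the kind used to fit such Bayesian models. A self-contained sketch on a one-dimensional toy target (not the MPL posterior) showing how the proposal step size drives the acceptance rate:

```python
import numpy as np

def metropolis(log_post, x0, step, n, rng):
    """Random-walk Metropolis; `step` is the proposal scale (the tuning parameter)."""
    x, lp = x0, log_post(x0)
    accepted = 0
    chain = np.empty(n)
    for i in range(n):
        prop = x + rng.normal(0.0, step)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance rule
            x, lp = prop, lp_prop
            accepted += 1
        chain[i] = x
    return chain, accepted / n

log_post = lambda x: -0.5 * x * x                 # standard-normal toy target
rng = np.random.default_rng(0)
_, rate_small = metropolis(log_post, 0.0, 0.1, 20000, rng)
_, rate_tuned = metropolis(log_post, 0.0, 2.4, 20000, rng)
_, rate_large = metropolis(log_post, 0.0, 50.0, 20000, rng)
print(rate_small, rate_tuned, rate_large)  # tiny steps accept almost everything, huge steps almost nothing
```

Both extremes mix poorly: tiny steps accept often but move slowly, huge steps rarely accept at all, which is why the step size has to be tuned toward a moderate acceptance rate.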

  2. Item level diagnostics and model - data fit in item response theory ...

    African Journals Online (AJOL)

Item response theory (IRT) is a framework for modeling and analyzing item response data. Item-level modeling gives IRT advantages over classical test theory. The fit of an item score pattern to item response theory (IRT) models is a necessary condition that must be assessed for further use of the items and the models that best fit ...

  3. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    Science.gov (United States)

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  4. An NCME Instructional Module on Item-Fit Statistics for Item Response Theory Models

    Science.gov (United States)

    Ames, Allison J.; Penfield, Randall D.

    2015-01-01

    Drawing valid inferences from item response theory (IRT) models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. This instructional module provides an overview of methods used for evaluating the fit of IRT models. Upon completing…

  5. The fitness landscape of HIV-1 gag: advanced modeling approaches and validation of model predictions by in vitro testing.

    Directory of Open Access Journals (Sweden)

    Jaclyn K Mann

    2014-08-01

Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model), which is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple and 25 single mutations within these) predicted to be either harmful to HIV-1 viability or fitness-neutral were introduced into HIV-1 NL4-3 by site-directed mutagenesis, and the replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = -0.74, p = 3.6×10⁻⁶) are strongly correlated, and this was further strengthened in the regularized Ising model (r = -0.83, p = 3.7×10⁻¹²). Performance of the Potts model (r = -0.73, p = 9.7×10⁻⁹) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion
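As a toy illustration of the Ising-type representation described above: the predicted "energy" of a mutation pattern is a sum of single-site fields plus pairwise couplings, with higher energy corresponding to lower predicted fitness. The fields and couplings below are made-up numbers for illustration, not the parameters inferred in the study:

```python
import numpy as np

# Illustrative fields (h) and couplings (J); NOT the parameters inferred in the study.
h = np.array([1.2, 0.8, -0.1, 0.5])        # single-mutation fitness costs
J = np.zeros((4, 4))
J[0, 1] = J[1, 0] = -0.9                   # compensatory pair: mutations 0 and 1
J[2, 3] = J[3, 2] = 0.6                    # jointly deleterious pair: mutations 2 and 3

def energy(s):
    """Ising-type 'energy' of a mutation pattern s (1 = mutant, 0 = wild type).
    Higher energy predicts lower replicative fitness."""
    s = np.asarray(s, dtype=float)
    return float(h @ s + 0.5 * s @ J @ s)  # 0.5 avoids double-counting symmetric couplings

# The compensatory coupling makes the double mutant cost less than the two singles combined:
print(energy([1, 1, 0, 0]) < energy([1, 0, 0, 0]) + energy([0, 1, 0, 0]))  # True
```

The Potts generalisation mentioned in the abstract replaces each binary s_i with a categorical state per mutant amino acid, enlarging h and J accordingly.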

  6. Standard error propagation in R-matrix model fitting for light elements

    International Nuclear Information System (INIS)

    Chen Zhenpeng; Zhang Rui; Sun Yeying; Liu Tingjin

    2003-01-01

The error propagation features of R-matrix model fitting for the ⁷Li, ¹¹B and ¹⁷O systems were researched systematically. Some laws of error propagation were revealed, an empirical formula P_j = U_j^c / U_j^d = K_j · S̄ · √m / √N for describing standard error propagation was established, and the most likely error ranges for the standard cross sections of ⁶Li(n,t), ¹⁰B(n,α₀) and ¹⁰B(n,α₁) were estimated. The problem that the standard errors of light nuclei standard cross sections may be too small results mainly from the R-matrix model fitting, which is not perfect. Yet R-matrix model fitting is the most reliable evaluation method for such data. The error propagation features of R-matrix model fitting for the compound nucleus systems of ⁷Li, ¹¹B and ¹⁷O have been studied systematically, some laws of error propagation are revealed, and these findings are important in solving the problem mentioned above. Furthermore, these conclusions are suitable for similar model fitting in other scientific fields. (author)

  7. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development.

    Science.gov (United States)

    Tøndel, Kristin; Niederer, Steven A; Land, Sander; Smith, Nicolas P

    2014-05-20

Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations, is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input-output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model, are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of on

  8. The issue of statistical power for overall model fit in evaluating structural equation models

    Directory of Open Access Journals (Sweden)

    Richard HERMIDA

    2015-06-01

Statistical power is an important concept for psychological research. However, examining the power of a structural equation model (SEM) is rare in practice. This article provides an accessible review of the concept of statistical power for the Root Mean Square Error of Approximation (RMSEA) index of overall model fit in structural equation modeling. By way of example, we examine the current state of power in the literature by reviewing studies in top Industrial-Organizational (I/O) Psychology journals using SEMs. Results indicate that in many studies, power is very low, which implies acceptance of invalid models. Additionally, we examined methodological situations which may have an influence on statistical power of SEMs. Results showed that power varies significantly as a function of model type and whether or not the model is the main model for the study. Finally, results indicated that power is significantly related to model fit statistics used in evaluating SEMs. The results from this quantitative review imply that researchers should be more vigilant with respect to power in structural equation modeling. We therefore conclude by offering methodological best practices to increase confidence in the interpretation of structural equation modeling results with respect to statistical power issues.

  9. Repair models of cell survival and corresponding computer program for survival curve fitting

    International Nuclear Information System (INIS)

    Shen Xun; Hu Yiwei

    1992-01-01

Some basic concepts and formulations of two repair models of survival, the incomplete repair (IR) model and the lethal-potentially lethal (LPL) model, are introduced. An IBM-PC computer program for survival curve fitting with these models was developed and applied to fit the survival of human melanoma cells HX118 irradiated at different dose rates. Comparison was made between the repair models and two non-repair models, the multitarget-single hit model and the linear-quadratic model, in the fitting and analysis of the survival-dose curves. It was shown that either the IR model or the LPL model can fit a set of survival curves at different dose rates with the same parameters and provide information on the repair capacity of cells. These two mathematical models could be very useful in quantitative studies of the radiosensitivity and repair capacity of cells
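For the linear-quadratic model mentioned above, taking logs reduces the fit to linear least squares, since ln S(D) = -(αD + βD²) is linear in α and β. A minimal sketch with hypothetical survival data, not the HX118 measurements:

```python
import numpy as np

# Hypothetical clonogenic-survival data (dose D in Gy, surviving fraction S)
dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
surv = np.array([1.0, 0.75, 0.50, 0.18, 0.05, 0.011])

# Linear-quadratic model: S(D) = exp(-(alpha*D + beta*D^2)).
# Taking logs makes the problem linear in alpha and beta, so no iterative fitting is needed.
X = np.column_stack([dose, dose ** 2])
y = -np.log(surv)
(alpha, beta), *_ = np.linalg.lstsq(X, y, rcond=None)

S_hat = np.exp(-(alpha * dose + beta * dose ** 2))  # fitted survival curve
print(round(alpha, 3), round(beta, 3))              # alpha/beta ratio is a standard radiobiological summary
```

The repair models (IR, LPL) need nonlinear optimisation instead, but the same residual-based comparison of fitted curves applies.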

  10. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    OpenAIRE

    Matthew P. Adams; Catherine J. Collier; Sven Uthicke; Yan X. Ow; Lucas Langlois; Katherine R. O’Brien

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluat...

  11. [How to fit and interpret multilevel models using SPSS].

    Science.gov (United States)

    Pardo, Antonio; Ruiz, Miguel A; San Martín, Rafael

    2007-05-01

Hierarchic or multilevel models are used to analyse data when cases belong to known groups and sample units are selected both from the individual level and from the group level. In this work, the multilevel models most commonly discussed in the statistical literature are described, explaining how to fit these models using the SPSS program (any version from the 11th onwards) and how to interpret the outcomes of the analysis. Five particular models are described, fitted, and interpreted: (1) one-way analysis of variance with random effects, (2) regression analysis with means-as-outcomes, (3) one-way analysis of covariance with random effects, (4) regression analysis with random coefficients, and (5) regression analysis with means- and slopes-as-outcomes. All models are explained, trying to make them understandable to researchers in health and behaviour sciences.

  12. Model-fitting approach to kinetic analysis of non-isothermal oxidation of molybdenite

    International Nuclear Information System (INIS)

    Ebrahimi Kahrizsangi, R.; Abbasi, M. H.; Saidi, A.

    2007-01-01

The kinetics of molybdenite oxidation was studied by non-isothermal TGA-DTA at a heating rate of 5 °C·min⁻¹. The model-fitting kinetic approach was applied to the TGA data, using the Coats-Redfern method of model fitting. The popular model-fitting method gives an excellent fit to non-isothermal data in the chemically controlled regime. The apparent activation energy was determined to be about 34.2 kcal·mol⁻¹, with a pre-exponential factor of about 10⁸ s⁻¹, for extents of reaction less than 0.5.
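The Coats-Redfern method named above linearises the integral rate expression: plotting ln(g(α)/T²) against 1/T gives a slope of -E/R, from which the apparent activation energy follows. A sketch with synthetic first-order data constructed to satisfy the relation exactly; the intercept C and temperature range are illustrative values, not the study's measurements:

```python
import numpy as np

R = 8.314            # gas constant, J/(mol K)
E_true = 143_000.0   # J/mol, roughly the 34.2 kcal/mol reported above
C = 5.0              # lumped intercept ln(A*R/(heating_rate*E)); illustrative value

T = np.linspace(700.0, 900.0, 21)   # temperature, K
# First-order model g(alpha) = -ln(1 - alpha); synthesize conversions that satisfy
# the Coats-Redfern relation ln(g(alpha)/T^2) = C - E/(R*T) exactly.
g = T ** 2 * np.exp(C - E_true / (R * T))
alpha = 1.0 - np.exp(-g)

# Analysis step, as applied to real TGA conversion data:
y = np.log(-np.log(1.0 - alpha) / T ** 2)
slope, intercept = np.polyfit(1.0 / T, y, 1)
E_est = -slope * R
print(round(E_est / 1000.0, 1))  # → 143.0 (kJ/mol)
```

With real data one would repeat the regression for each candidate g(α) and pick the model with the best linearity, which is the model-fitting step the abstract refers to.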

  13. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
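The notion of Pareto-optimality used above is straightforward to implement: an input set is kept if no other set fits every calibration target at least as well and at least one target strictly better. A minimal sketch over hypothetical two-target goodness-of-fit errors (lower is better), unrelated to the TAVR model's actual targets:

```python
def pareto_frontier(points):
    """Keep the candidate input sets whose target errors are not dominated:
    no other set is <= on every calibration target and differs somewhere."""
    frontier = []
    for p in points:
        dominated = any(
            all(qk <= pk for qk, pk in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            frontier.append(p)
    return frontier

# Each tuple: (error vs. target 1, error vs. target 2) for one candidate input set.
fits = [(0.10, 0.90), (0.20, 0.30), (0.50, 0.20), (0.90, 0.05), (0.60, 0.60)]
print(pareto_frontier(fits))  # (0.60, 0.60) is dominated by (0.20, 0.30); the rest survive
```

No weights appear anywhere, which is exactly the point of the approach: the frontier is the same regardless of how one might have scored the targets.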

  14. Tests of fit of historically-informed models of African American Admixture.

    Science.gov (United States)

    Gross, Jessica M

    2018-02-01

    African American populations in the U.S. formed primarily by mating between Africans and Europeans over the last 500 years. To date, studies of admixture have focused on either a one-time admixture event or continuous input into the African American population from Europeans only. Our goal is to gain a better understanding of the admixture process by examining models that take into account (a) assortative mating by ancestry in the African American population, (b) continuous input from both Europeans and Africans, and (c) historically informed variation in the rate of African migration over time. We used a model-based clustering method to generate distributions of African ancestry in three samples comprised of 147 African Americans from two published sources. We used a log-likelihood method to examine the fit of four models to these distributions and used a log-likelihood ratio test to compare the relative fit of each model. The mean ancestry estimates for our datasets of 77% African/23% European to 83% African/17% European ancestry are consistent with previous studies. We find admixture models that incorporate continuous gene flow from Europeans fit significantly better than one-time event models, and that a model involving continuous gene flow from Africans and Europeans fits better than one with continuous gene flow from Europeans only for two samples. Importantly, models that involve continuous input from Africans necessitate a higher level of gene flow from Europeans than previously reported. We demonstrate that models that take into account information about the rate of African migration over the past 500 years fit observed patterns of African ancestry better than alternative models. Our approach will enrich our understanding of the admixture process in extant and past populations. © 2017 Wiley Periodicals, Inc.

  15. Fitting Equilibrium Search Models to Labour Market Data

    DEFF Research Database (Denmark)

    Bowlus, Audra J.; Kiefer, Nicholas M.; Neumann, George R.

    1996-01-01

Specification and estimation of a Burdett-Mortensen type equilibrium search model is considered. The estimation is nonstandard. An estimation strategy asymptotically equivalent to maximum likelihood is proposed and applied. The results indicate that specifications with a small number of productivity types fit the data well compared to the homogeneous model.

  16. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process, the models allow for profound conclusions about the test takers. However, before such a model can be used its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study. In this study we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to four alternative tests of model fit, namely the M² test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
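The Hausman statistic described above compares two estimates of the same parameters: H = (b₁ - b₀)ᵀ(V₁ - V₀)⁻¹(b₁ - b₀), where b₀ is efficient under the model and b₁ remains consistent under misspecification; under the model, H is asymptotically χ² with dim(b) degrees of freedom. A sketch with illustrative numbers, not estimates from a real diffusion-model fit:

```python
import numpy as np

def hausman(b1, V1, b0, V0):
    """Hausman misspecification statistic H = (b1-b0)' (V1-V0)^(-1) (b1-b0),
    where b0 is efficient under the model and b1 stays consistent under
    misspecification. Under the model, H ~ chi-square with len(b1) d.f."""
    d = np.asarray(b1) - np.asarray(b0)
    V = np.asarray(V1) - np.asarray(V0)
    return float(d @ np.linalg.solve(V, d))

# Illustrative estimates of the same two item parameters (not from a real fit):
b1 = np.array([1.05, -0.48]); V1 = np.array([[0.050, 0.010], [0.010, 0.040]])
b0 = np.array([1.00, -0.50]); V0 = np.array([[0.020, 0.005], [0.005, 0.015]])

H = hausman(b1, V1, b0, V0)
print(round(H, 2))  # → 0.09, far below the 5% chi-square(2) critical value of 5.99
```

Since the two estimates barely disagree relative to the variance gap, the statistic is small and there is no evidence of misspecification in this toy case.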

  17. A versatile curve-fit model for linear to deeply concave rank abundance curves

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

    A new, flexible curve-fit model for linear to concave rank abundance curves was conceptualized and validated using observational data. The model links the geometric-series model and log-series model and can also fit deeply concave rank abundance curves. The model is based ¿ in an unconventional way

  18. Gfitter - Revisiting the global electroweak fit of the Standard Model and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Flaecher, H.; Hoecker, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Goebel, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]|[Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Haller, J. [Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Moenig, K.; Stelzer, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2008-11-15

The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project, and presents state-of-the-art results for the global electroweak fit in the Standard Model, and for a model with an extended Higgs sector (2HDM). Numerical and graphical results for fits with and without including the constraints from the direct Higgs searches at LEP and Tevatron are given. Perspectives for future colliders are analysed and discussed. Including the direct Higgs searches, we find M_H = 116.4^{+18.3}_{-1.3} GeV, and the 2σ and 3σ allowed regions [114,145] GeV and [113,168] and [180,225

  19. A person fit test for IRT models for polytomous items

    NARCIS (Netherlands)

    Glas, Cornelis A.W.; Dagohoy, A.V.

    2007-01-01

    A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability

  20. Checking the Adequacy of Fit of Models from Split-Plot Designs

    DEFF Research Database (Denmark)

    Almini, A. A.; Kulahci, Murat; Montgomery, D. C.

    2009-01-01

One of the main features that distinguish split-plot experiments from other experiments is that they involve two types of experimental errors: the whole-plot (WP) error and the subplot (SP) error. Taking this into consideration is very important when computing measures of adequacy of fit for split-plot models. In this article, we propose the computation of two R², R²-adjusted, prediction error sums of squares (PRESS), and R²-prediction statistics to measure the adequacy of fit for the WP and the SP submodels in a split-plot design. This is complemented with the graphical analysis of the two types of errors to check for any violation of the underlying assumptions and the adequacy of fit of split-plot models. Using examples, we show how computing two measures of model adequacy of fit for each split-plot design model is appropriate and useful as they reveal whether the correct WP and SP effects have…
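As an illustration of the statistics named in the abstract, here is a minimal Python sketch computing R², adjusted R², PRESS and R²-prediction for a single OLS fit. It assumes one error stratum only, so it does not reproduce the separate WP/SP decomposition the article develops; PRESS uses the leave-one-out identity e_i/(1 − h_ii) with h_ii the hat-matrix diagonal.

```python
import numpy as np

def fit_stats(X, y):
    """OLS with R^2, adjusted R^2, PRESS, and R^2-prediction.

    PRESS uses the leave-one-out residual identity e_i / (1 - h_ii),
    where h_ii are diagonal entries of the hat matrix X (X'X)^-1 X'.
    """
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)
    sse = float(resid @ resid)
    sst = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - sse / sst
    r2_adj = 1.0 - (sse / (n - p)) / (sst / (n - 1))
    press = float(((resid / (1.0 - h)) ** 2).sum())
    return {"R2": r2, "R2_adj": r2_adj, "PRESS": press,
            "R2_pred": 1.0 - press / sst}
```

For split-plot data these quantities would be computed twice, once per error stratum, as the article proposes.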

  1. SPSS macros to compare any two fitted values from a regression model.

    Science.gov (United States)

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests-particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
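The matrix-algebra method the macros implement can be sketched for the OLS case as follows. This is a hypothetical Python analogue, not the !OLScomp macro itself: the difference between two fitted values is c'β with c = x₁ − x₂, and its variance is σ² c'(X'X)⁻¹c.

```python
import numpy as np
from math import sqrt

def compare_fitted_values(X, y, x1, x2):
    """Difference between two fitted values x1'beta and x2'beta from OLS,
    with its standard error via Var(c'beta) = sigma^2 * c'(X'X)^-1 c."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    sigma2 = float(resid @ resid) / (n - p)          # residual variance
    c = np.asarray(x1, float) - np.asarray(x2, float)
    diff = float(c @ beta)
    se = sqrt(sigma2 * float(c @ XtX_inv @ c))
    return diff, se
```

A 95% confidence interval is then diff ± t(n − p, 0.975) · se, which is what the macros report.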

  2. Fitting Latent Cluster Models for Networks with latentnet

    Directory of Open Access Journals (Sweden)

    Pavel N. Krivitsky

    2007-12-01

latentnet is a package to fit and evaluate statistical latent position and cluster models for networks. Hoff, Raftery, and Handcock (2002) suggested an approach to modeling networks based on positing the existence of a latent space of characteristics of the actors. Relationships form as a function of distances between these characteristics as well as functions of observed dyadic level covariates. In latentnet social distances are represented in a Euclidean space. It also includes a variant of the extension of the latent position model to allow for clustering of the positions developed in Handcock, Raftery, and Tantrum (2007). The package implements Bayesian inference for the models based on a Markov chain Monte Carlo algorithm. It can also compute maximum likelihood estimates for the latent position model and a two-stage maximum likelihood method for the latent position cluster model. For latent position cluster models, the package provides a Bayesian way of assessing how many groups there are, and thus whether or not there is any clustering (since if the preferred number of groups is 1, there is little evidence for clustering). It also estimates which cluster each actor belongs to. These estimates are probabilistic, and provide the probability of each actor belonging to each cluster. It computes four types of point estimates for the coefficients and positions: maximum likelihood estimate, posterior mean, posterior mode and the estimator which minimizes Kullback-Leibler divergence from the posterior. You can assess the goodness-of-fit of the model via posterior predictive checks. It has a function to simulate networks from a latent position or latent position cluster model.

  3. LEP asymmetries and fits of the standard model

    International Nuclear Information System (INIS)

    Pietrzyk, B.

    1994-01-01

The lepton and quark asymmetries measured at LEP are presented. The results of the Standard Model fits to the electroweak data presented at this conference are given. The top mass obtained from the fit to the LEP data is 172 ^{+13}_{-14} ^{+18}_{-20} GeV; it is 177 ^{+11}_{-11} ^{+18}_{-19} GeV when also the collider, ν and A_LR data are included. (author). 10 refs., 3 figs., 2 tabs

  4. Model Fit and Item Factor Analysis: Overfactoring, Underfactoring, and a Program to Guide Interpretation.

    Science.gov (United States)

    Clark, D Angus; Bowles, Ryan P

    2018-04-23

    In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation methods to investigate the ability of popular model fit statistics (chi-square, root mean square error of approximation, the comparative fit index, and the Tucker-Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics, and was conditional on many features of the underlying model. Together, results suggest that conventional fit thresholds offer questionable utility in the context of IFA.
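The fit statistics the study evaluates can be computed from the model and baseline chi-squares with their conventional formulas; a small sketch of those standard definitions (not the simulation code), assuming the baseline model fits worse than the target model:

```python
def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """Conventional SEM fit indices from model (m) and baseline (b)
    chi-square statistics for a sample of size n."""
    # Root mean square error of approximation
    rmsea = (max(chi2_m - df_m, 0.0) / (df_m * (n - 1))) ** 0.5
    # Comparative fit index
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_b - df_b, chi2_m - df_m, 0.0)
    cfi = 1.0 - (num / den if den > 0 else 0.0)
    # Tucker-Lewis index (assumes chi2_b/df_b > 1)
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)
    return rmsea, cfi, tli
```

The study's point is precisely that thresholds on these numbers (e.g. RMSEA < 0.06, CFI > 0.95) need not transfer to item factor analysis with categorical indicators.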

  5. Fitting measurement models to vocational interest data: are dominance models ideal?

    Science.gov (United States)

    Tay, Louis; Drasgow, Fritz; Rounds, James; Williams, Bruce A

    2009-09-01

    In this study, the authors examined the item response process underlying 3 vocational interest inventories: the Occupational Preference Inventory (C.-P. Deng, P. I. Armstrong, & J. Rounds, 2007), the Interest Profiler (J. Rounds, T. Smith, L. Hubert, P. Lewis, & D. Rivkin, 1999; J. Rounds, C. M. Walker, et al., 1999), and the Interest Finder (J. E. Wall & H. E. Baker, 1997; J. E. Wall, L. L. Wise, & H. E. Baker, 1996). Item response theory (IRT) dominance models, such as the 2-parameter and 3-parameter logistic models, assume that item response functions (IRFs) are monotonically increasing as the latent trait increases. In contrast, IRT ideal point models, such as the generalized graded unfolding model, have IRFs that peak where the latent trait matches the item. Ideal point models are expected to fit better because vocational interest inventories ask about typical behavior, as opposed to requiring maximal performance. Results show that across all 3 interest inventories, the ideal point model provided better descriptions of the response process. The importance of specifying the correct item response model for precise measurement is discussed. In particular, scores computed by a dominance model were shown to be sometimes illogical: individuals endorsing mostly realistic or mostly social items were given similar scores, whereas scores based on an ideal point model were sensitive to which type of items respondents endorsed.
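To make the dominance/ideal point contrast concrete, here is a toy sketch of the two kinds of item response function: a 2PL curve, which rises monotonically in the latent trait, and a simple squared-distance unfolding kernel, which peaks where the trait matches the item location. The unfolding function is deliberately much simpler than the generalized graded unfolding model used in the study.

```python
from math import exp

def irf_dominance(theta, a=1.0, b=0.0):
    """2PL item response function: monotone increasing in theta."""
    return 1.0 / (1.0 + exp(-a * (theta - b)))

def irf_ideal_point(theta, delta=0.0, tau=1.0):
    """Toy unfolding IRF: endorsement peaks where theta matches the item
    location delta (squared-distance kernel, not the full GGUM)."""
    return exp(-((theta - delta) ** 2) / (2.0 * tau ** 2))
```

The difference matters for scoring: under the unfolding curve, respondents far above and far below the item location both endorse it rarely, which is why dominance scoring can conflate them.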

  6. Revisiting the Global Electroweak Fit of the Standard Model and Beyond with Gfitter

    CERN Document Server

    Flächer, Henning; Haller, J; Höcker, A; Mönig, K; Stelzer, J

    2009-01-01

    The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter projec...

  7. Fit Gap Analysis – The Role of Business Process Reference Models

    Directory of Open Access Journals (Sweden)

    Dejan Pajk

    2013-12-01

Enterprise resource planning (ERP) systems support solutions for standard business processes such as financial, sales, procurement and warehouse. In order to improve the understandability and efficiency of their implementation, ERP vendors have introduced reference models that describe the processes and underlying structure of an ERP system. To select and successfully implement an ERP system, the capabilities of that system have to be compared with a company’s business needs. Based on a comparison, all of the fits and gaps must be identified and further analysed. This step usually forms part of ERP implementation methodologies and is called fit gap analysis. The paper theoretically overviews methods for applying reference models and describes fit gap analysis processes in detail. The paper’s first contribution is its presentation of a fit gap analysis using standard business process modelling notation. The second contribution is the demonstration of a process-based comparison approach between a supply chain process and an ERP system process reference model. In addition to its theoretical contributions, the results can also be practically applied to projects involving the selection and implementation of ERP systems.

  8. Supersymmetry with prejudice: Fitting the wrong model to LHC data

    Science.gov (United States)

    Allanach, B. C.; Dolan, Matthew J.

    2012-09-01

We critically examine interpretations of hypothetical supersymmetric LHC signals, fitting to alternative wrong models of supersymmetry breaking. The signals we consider are some of the most constraining on the sparticle spectrum: invariant mass distributions with edges and endpoints from the golden decay chain q̃ → q χ̃₂⁰ (→ l̃± l∓ q) → χ̃₁⁰ l⁺ l⁻ q. We assume a constrained minimal supersymmetric standard model (CMSSM) point to be the ‘correct’ one, but fit the signals instead with minimal gauge mediated supersymmetry breaking models (mGMSB) with a neutralino quasistable lightest supersymmetric particle, minimal anomaly mediation and large volume string compactification models. Minimal anomaly mediation and the large volume scenario can be unambiguously discriminated against the CMSSM for the assumed signal and 1 fb⁻¹ of LHC data at √s = 14 TeV. However, mGMSB would not be discriminated on the basis of the kinematic endpoints alone. The best-fit point spectra of mGMSB and CMSSM look remarkably similar, making experimental discrimination at the LHC based on the edges or Higgs properties difficult. However, using rate information for the golden chain should provide the additional separation required.
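The kinematic endpoints in question follow closed-form relations among the sparticle masses. For example, the standard dilepton edge from the chain above can be sketched as follows (the masses in the test are illustrative, not the paper's benchmark point):

```python
from math import sqrt

def mll_edge(m_chi2, m_slepton, m_chi1):
    """Kinematic endpoint of the dilepton invariant-mass distribution in
    chi2 -> slepton + l, slepton -> chi1 + l (standard two-body formula)."""
    assert m_chi2 > m_slepton > m_chi1 > 0
    return sqrt((m_chi2**2 - m_slepton**2)
                * (m_slepton**2 - m_chi1**2)) / m_slepton
```

Because several different spectra can reproduce the same set of such endpoints, edge positions alone cannot separate mGMSB from the CMSSM, which is the paper's point about needing rate information.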

  9. Fast and exact Newton and Bidirectional fitting of Active Appearance Models.

    Science.gov (United States)

    Kossaifi, Jean; Tzimiropoulos, Yorgos; Pantic, Maja

    2016-12-21

Active Appearance Models (AAMs) are generative models of shape and appearance that have proven very attractive for their ability to handle wide changes in illumination, pose and occlusion when trained in the wild, while not requiring large training datasets like regression-based or deep learning methods. The problem of fitting an AAM is usually formulated as a non-linear least squares one and the main way of solving it is a standard Gauss-Newton algorithm. In this paper we extend Active Appearance Models in two ways: we first extend the Gauss-Newton framework by formulating a bidirectional fitting method that deforms both the image and the template to fit a new instance. We then formulate a second order method by deriving an efficient Newton method for AAMs fitting. We derive both methods in a unified framework for two types of Active Appearance Models, holistic and part-based, and additionally show how to exploit the structure in the problem to derive fast yet exact solutions. We perform a thorough evaluation of all algorithms on three challenging and recently annotated in-the-wild datasets, and investigate fitting accuracy, convergence properties and the influence of noise in the initialisation. We compare our proposed methods to other algorithms and show that they yield state-of-the-art results, out-performing other methods while having superior convergence properties.

  10. When the model fits the frame: the impact of regulatory fit on efficacy appraisal and persuasion in health communication.

    Science.gov (United States)

    Bosone, Lucia; Martinez, Frédéric; Kalampalikis, Nikos

    2015-04-01

    In health-promotional campaigns, positive and negative role models can be deployed to illustrate the benefits or costs of certain behaviors. The main purpose of this article is to investigate why, how, and when exposure to role models strengthens the persuasiveness of a message, according to regulatory fit theory. We argue that exposure to a positive versus a negative model activates individuals' goals toward promotion rather than prevention. By means of two experiments, we demonstrate that high levels of persuasion occur when a message advertising healthy dietary habits offers a regulatory fit between its framing and the described role model. Our data also establish that the effects of such internal regulatory fit by vicarious experience depend on individuals' perceptions of response-efficacy and self-efficacy. Our findings constitute a significant theoretical complement to previous research on regulatory fit and contain valuable practical implications for health-promotional campaigns. © 2015 by the Society for Personality and Social Psychology, Inc.

  11. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan; Krebs-Smith, Susan M.; Midthune, Douglas; Perez, Adriana; Buckman, Dennis W.; Kipnis, Victor; Freedman, Laurence S.; Dodd, Kevin W.; Carroll, Raymond J

    2011-01-01

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components.We have recently developed a nonlinear mixed effects model (Kipnis, et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Monte-Carlo (MCMC) computation of fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. 
Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  12. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan

    2011-01-06

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components.We have recently developed a nonlinear mixed effects model (Kipnis, et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Monte-Carlo (MCMC) computation of fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. 
Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  13. Soil physical properties influencing the fitting parameters in Philip and Kostiakov infiltration models

    International Nuclear Information System (INIS)

    Mbagwu, J.S.C.

    1994-05-01

    Among the many models developed for monitoring the infiltration process those of Philip and Kostiakov have been studied in detail because of their simplicity and the ease of estimating their fitting parameters. The important soil physical factors influencing the fitting parameters in these infiltration models are reported in this study. The results of the study show that the single most important soil property affecting the fitting parameters in these models is the effective porosity. 36 refs, 2 figs, 5 tabs
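Philip's two-term model I(t) = S·√t + A·t is linear in its fitting parameters S (sorptivity) and A, so a sketch of estimating them needs only ordinary least squares on the basis functions √t and t (the data in the test are synthetic; the study's soils are not reproduced here):

```python
import numpy as np

def fit_philip(t, I):
    """Least-squares fit of Philip's two-term infiltration model
    I(t) = S*sqrt(t) + A*t, linear in S (sorptivity) and A."""
    t = np.asarray(t, float)
    I = np.asarray(I, float)
    X = np.column_stack([np.sqrt(t), t])   # basis: sqrt(t), t
    (S, A), *_ = np.linalg.lstsq(X, I, rcond=None)
    return S, A
```

The Kostiakov model I(t) = k·t^a can be fitted the same way after taking logarithms, since log I = log k + a·log t is linear in log k and a.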

  14. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
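The "explode" step the abstract mentions can be sketched generically in Python (this is an illustration, not the %PCFrailty macro): each subject's follow-up time is split at the piece boundaries, and each resulting row carries an event indicator and log-exposure, which serves as the Poisson offset.

```python
import math

def explode(time, event, cuts):
    """Split one survival record (time, event flag) into piecewise-exposure
    rows at the given cut points. The event indicator is 1 only in the
    interval where the subject fails; log(exposure) is the Poisson offset."""
    rows, start = [], 0.0
    for k, end in enumerate(list(cuts) + [float("inf")]):
        if time <= start:
            break
        exposure = min(time, end) - start
        d = 1 if (event and time <= end) else 0
        rows.append({"piece": k, "d": d,
                     "log_exposure": math.log(exposure)})
        start = end
    return rows
```

A Poisson GLMM on the exploded rows, with piece-specific intercepts and the offset, then reproduces the piecewise constant hazard fit described above.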

  15. The l_z(p)* Person-Fit Statistic in an Unfolding Model Context.

    Science.gov (United States)

    Tendeiro, Jorge N

    2017-01-01

    Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded unfolding model is used. Results from a simulation study indicate that the person-fit statistic performed relatively well in detecting midpoint response style patterns and not so well in detecting extreme response style patterns.
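For reference, the classical dichotomous form of the standardized log-likelihood person-fit statistic l_z (which the unfolding-model variant in the article generalizes) can be sketched as follows, given model probabilities P_i of endorsing each item:

```python
from math import log, sqrt

def lz(responses, probs):
    """Standardized log-likelihood person-fit statistic l_z for
    dichotomous items: (l0 - E[l0]) / sqrt(Var[l0])."""
    l0 = sum(u * log(p) + (1 - u) * log(1 - p)
             for u, p in zip(responses, probs))
    mean = sum(p * log(p) + (1 - p) * log(1 - p) for p in probs)
    var = sum(p * (1 - p) * log(p / (1 - p)) ** 2 for p in probs)
    return (l0 - mean) / sqrt(var)
```

Large negative values flag response patterns that are unlikely under the fitted model, which is how aberrant (e.g. response-style) patterns are detected.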

  16. Model Fitting for Predicted Precipitation in Darwin: Some Issues with Model Choice

    Science.gov (United States)

    Farmer, Jim

    2010-01-01

In Volume 23(2) of the "Australian Senior Mathematics Journal," Boncek and Harden present an exercise in fitting a Markov chain model to rainfall data for Darwin Airport (Boncek & Harden, 2009). Days are subdivided into those with precipitation and precipitation-free days. The author abbreviates these labels to wet days and dry days. …
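Fitting the two-state Markov chain in question amounts to counting wet/dry transitions in the daily sequence; a sketch with a made-up indicator sequence (0 = dry, 1 = wet), not the Darwin Airport data:

```python
def fit_markov(days):
    """Estimate transition probabilities of a two-state (dry=0, wet=1)
    Markov chain from a daily precipitation indicator sequence."""
    counts = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for a, b in zip(days, days[1:]):       # consecutive-day pairs
        counts[(a, b)] += 1
    probs = {}
    for a in (0, 1):
        total = counts[(a, 0)] + counts[(a, 1)]
        for b in (0, 1):
            probs[(a, b)] = counts[(a, b)] / total if total else float("nan")
    return probs
```

Each row of the estimated transition matrix sums to one, and the wet-after-wet probability P(1|1) is what distinguishes the Markov model from an independence model of rainfall occurrence.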

  17. Universal Rate Model Selector: A Method to Quickly Find the Best-Fit Kinetic Rate Model for an Experimental Rate Profile

    Science.gov (United States)

    2017-08-01

Kinetic rate models range from pure chemical reactions to mass transfer... The rate model that best fits the experimental data is a first-order or homogeneous catalytic reaction... Avrami (7), and intraparticle diffusion (6) rate equations, to name a few. A single fitting algorithm (kinetic rate model) for a reaction does not...

  18. Nonlinear models for fitting growth curves of Nellore cows reared in the Amazon Biome

    Directory of Open Access Journals (Sweden)

    Kedma Nayra da Silva Marinho

    2013-09-01

Growth curves of Nellore cows were estimated by comparing six nonlinear models: Brody, Logistic, two alternatives by Gompertz, Richards and Von Bertalanffy. The models were fitted to weight-age data, from birth to 750 days of age, of 29,221 cows, born between 1976 and 2006 in the Brazilian states of Acre, Amapá, Amazonas, Pará, Rondônia, Roraima and Tocantins. The models were fitted by the Gauss-Newton method. The goodness of fit of the models was evaluated by using mean square error, adjusted coefficient of determination, prediction error and mean absolute error. Biological interpretation of parameters was accomplished by plotting estimated weights versus the observed weight means, instantaneous growth rate, absolute maturity rate, relative instantaneous growth rate, inflection point and magnitude of the parameters A (asymptotic weight) and K (maturing rate). The Brody and Von Bertalanffy models fitted the weight-age data but the other models did not. The average weight (A) and growth rate (K) were: 384.6±1.63 kg and 0.0022±0.00002 (Brody) and 313.40±0.70 kg and 0.0045±0.00002 (Von Bertalanffy). The Brody model provides better goodness of fit than the Von Bertalanffy model.
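The Gauss-Newton fitting used in the study can be sketched for the Brody curve W(t) = A(1 − B·e^{−Kt}); the other five models would differ only in the model function and its Jacobian. The weights and starting values below are synthetic, chosen only to illustrate the iteration:

```python
import numpy as np

def fit_brody(t, w, theta0=(400.0, 0.9, 0.003), iters=50):
    """Gauss-Newton fit of the Brody growth curve W(t) = A(1 - B exp(-K t)).

    Each iteration solves the linearized least-squares problem
    J * step = residual and updates the parameters."""
    A, B, K = theta0
    t = np.asarray(t, float)
    w = np.asarray(w, float)
    for _ in range(iters):
        e = np.exp(-K * t)
        pred = A * (1.0 - B * e)
        # Jacobian columns: dW/dA, dW/dB, dW/dK
        J = np.column_stack([1.0 - B * e, -A * e, A * B * t * e])
        step, *_ = np.linalg.lstsq(J, w - pred, rcond=None)
        A, B, K = A + step[0], B + step[1], K + step[2]
        if np.max(np.abs(step)) < 1e-10:
            break
    return A, B, K
```

Here A is the asymptotic weight and K the maturing rate, matching the biological interpretation given in the abstract.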

  19. Fast Algorithms for Fitting Active Appearance Models to Unconstrained Images

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Pantic, Maja

    2016-01-01

Fitting algorithms for Active Appearance Models (AAMs) are usually considered to be robust but slow or fast but less able to generalize well to unseen variations. In this paper, we look into AAM fitting algorithms and make the following orthogonal contributions: We present a simple “project-out” ...

  20. Three dimensional fuzzy influence analysis of fitting algorithms on integrated chip topographic modeling

    International Nuclear Information System (INIS)

    Liang, Zhong Wei; Wang, Yi Jun; Ye, Bang Yan; Brauwer, Richard Kars

    2012-01-01

    In inspecting the detailed performance results of surface precision modeling in different external parameter conditions, the integrated chip surfaces should be evaluated and assessed during topographic spatial modeling processes. The application of surface fitting algorithms exerts a considerable influence on topographic mathematical features. The influence mechanisms caused by different surface fitting algorithms on the integrated chip surface facilitate the quantitative analysis of different external parameter conditions. By extracting the coordinate information from the selected physical control points and using a set of precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used for constructing micro topographic models with the obtained point cloud. In computing for the newly proposed mathematical features on surface models, we construct the fuzzy evaluating data sequence and present a new three dimensional fuzzy quantitative evaluating method. Through this method, the value variation tendencies of topographic features can be clearly quantified. The fuzzy influence discipline among different surface fitting algorithms, topography spatial features, and the external science parameter conditions can be analyzed quantitatively and in detail. In addition, quantitative analysis can provide final conclusions on the inherent influence mechanism and internal mathematical relation in the performance results of different surface fitting algorithms, topographic spatial features, and their scientific parameter conditions in the case of surface micro modeling. The performance inspection of surface precision modeling will be facilitated and optimized as a new research idea for micro-surface reconstruction that will be monitored in a modeling process

  1. Three dimensional fuzzy influence analysis of fitting algorithms on integrated chip topographic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Zhong Wei; Wang, Yi Jun [Guangzhou Univ., Guangzhou (China); Ye, Bang Yan [South China Univ. of Technology, Guangzhou (China); Brauwer, Richard Kars [Indian Institute of Technology, Kanpur (India)

    2012-10-15

    In inspecting the detailed performance results of surface precision modeling in different external parameter conditions, the integrated chip surfaces should be evaluated and assessed during topographic spatial modeling processes. The application of surface fitting algorithms exerts a considerable influence on topographic mathematical features. The influence mechanisms caused by different surface fitting algorithms on the integrated chip surface facilitate the quantitative analysis of different external parameter conditions. By extracting the coordinate information from the selected physical control points and using a set of precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used for constructing micro topographic models with the obtained point cloud. In computing for the newly proposed mathematical features on surface models, we construct the fuzzy evaluating data sequence and present a new three dimensional fuzzy quantitative evaluating method. Through this method, the value variation tendencies of topographic features can be clearly quantified. The fuzzy influence discipline among different surface fitting algorithms, topography spatial features, and the external science parameter conditions can be analyzed quantitatively and in detail. In addition, quantitative analysis can provide final conclusions on the inherent influence mechanism and internal mathematical relation in the performance results of different surface fitting algorithms, topographic spatial features, and their scientific parameter conditions in the case of surface micro modeling. The performance inspection of surface precision modeling will be facilitated and optimized as a new research idea for micro-surface reconstruction that will be monitored in a modeling process.

  2. Rapid world modeling: Fitting range data to geometric primitives

    International Nuclear Information System (INIS)

    Feddema, J.; Little, C.

    1996-01-01

For the past seven years, Sandia National Laboratories has been active in the development of robotic systems to help remediate DOE's waste sites and decommissioned facilities. Some of these facilities have high levels of radioactivity which prevent manual clean-up. Tele-operated and autonomous robotic systems have been envisioned as the only suitable means of removing the radioactive elements. World modeling is defined as the process of creating a numerical geometric model of a real world environment or workspace. This model is often used in robotics to plan robot motions which perform a task while avoiding obstacles. In many applications where the world model does not exist ahead of time, structured lighting, laser range finders, and even acoustical sensors have been used to create three dimensional maps of the environment. These maps consist of thousands of range points which are difficult to handle and interpret. This paper presents a least squares technique for fitting range data to planar and quadric surfaces, including cylinders and ellipsoids. Once fit to these primitive surfaces, the amount of data associated with a surface is greatly reduced, by up to three orders of magnitude, thus allowing for more rapid handling and analysis of world data.
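The least-squares fit to a planar primitive can be sketched via an SVD of the centered point cloud: the plane normal is the singular vector with the smallest singular value, which minimizes the sum of squared orthogonal distances. This is a standard construction, not the paper's specific implementation (quadrics such as cylinders and ellipsoids need a more general formulation).

```python
import numpy as np

def fit_plane(points):
    """Total least-squares plane through 3-D points.

    Returns (centroid, unit normal); the normal is the right singular
    vector of the centered data with the smallest singular value."""
    P = np.asarray(points, float)
    centroid = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - centroid)
    return centroid, vt[-1]           # rows of vt sorted by singular value
```

Replacing thousands of range points on a wall with one (centroid, normal) pair is exactly the data reduction the abstract describes.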

  3. Using the Flipchem Photochemistry Model When Fitting Incoherent Scatter Radar Data

    Science.gov (United States)

    Reimer, A. S.; Varney, R. H.

    2017-12-01

The North face Resolute Bay Incoherent Scatter Radar (RISR-N) routinely images the dynamics of the polar ionosphere, providing measurements of the plasma density, electron temperature, ion temperature, and line of sight velocity with seconds to minutes time resolution. RISR-N does not directly measure ionospheric parameters, but backscattered signals, recording them as voltage samples. Using signal processing techniques, radar autocorrelation functions (ACF) are estimated from the voltage samples. A model of the signal ACF is then fitted to the ACF using non-linear least-squares techniques to obtain the best-fit ionospheric parameters. The signal model, and therefore the fitted parameters, depend on the ionospheric ion composition that is used [e.g. Zettergren et al. (2010), Zou et al. (2017)]. The software used to process RISR-N ACF data includes the "flipchem" model, which is an ion photochemistry model developed by Richards [2011] that was adapted from the Field Line Interhemispheric Plasma (FLIP) model. Flipchem requires neutral densities, neutral temperatures, electron density, ion temperature, electron temperature, solar zenith angle, and F10.7 as inputs to compute ion densities, which are input to the signal model. A description of how the flipchem model is used in RISR-N fitting software will be presented. Additionally, a statistical comparison of the fitted electron density, ion temperature, electron temperature, and velocity obtained using a flipchem ionosphere, a pure O+ ionosphere, and a Chapman O+ ionosphere will be presented. The comparison covers nearly two years of RISR-N data (April 2015 - December 2016). Richards, P. G. (2011), Reexamination of ionospheric photochemistry, J. Geophys. Res., 116, A08307, doi:10.1029/2011JA016613. Zettergren, M., Semeter, J., Burnett, B., Oliver, W., Heinselman, C., Blelly, P.-L., and Diaz, M.: Dynamic variability in F-region ionospheric composition at auroral arc boundaries, Ann. Geophys., 28, 651-664, https

  4. Multi-binding site model-based curve-fitting program for the computation of RIA data

    International Nuclear Information System (INIS)

    Malan, P.G.; Ekins, R.P.; Cox, M.G.; Long, E.M.R.

    1977-01-01

    In this paper, a comparison will be made of model-based and empirical curve-fitting procedures. The implementation of a multiple binding-site curve-fitting model, which successfully fits a wide range of assay data and which can be run on a mini-computer, is described. This more sophisticated model also provides estimates of the binding-site concentrations and the values of the respective equilibrium constants; the latter have been used for refining assay conditions using computer optimisation techniques. (orig./AJ) [de]
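
    A hedged sketch of the multiple binding-site idea: two independent sites, each following the law of mass action, fitted by non-linear least squares to synthetic data. The parameter values and data are invented for illustration, not taken from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def bound(ligand, bmax1, k1, bmax2, k2):
    # Total bound fraction: sum of two independent binding sites,
    # each with its own capacity (bmax) and equilibrium constant (k)
    return (bmax1 * k1 * ligand / (1 + k1 * ligand)
            + bmax2 * k2 * ligand / (1 + k2 * ligand))

rng = np.random.default_rng(1)
ligand = np.logspace(-2, 3, 40)              # free ligand concentrations
data = bound(ligand, 1.0, 10.0, 2.0, 0.05) + rng.normal(0, 0.01, ligand.size)

p0 = [0.5, 5.0, 1.0, 0.1]                    # rough initial guesses
popt, _ = curve_fit(bound, ligand, data, p0=p0, maxfev=10000)
residual = np.sqrt(np.mean((bound(ligand, *popt) - data) ** 2))
```

    Multi-site fits of this kind are often ill-conditioned, which is why reasonable starting values (and well-separated equilibrium constants) matter in practice.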

  5. ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction

    International Nuclear Information System (INIS)

    Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.

    2015-01-01

    An appropriate maintenance policy is essential to reduce expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is to be able to predict the reliability of the systems under study, based on a well fitted model. In this paper, the classes of models Arithmetic Reduction of Age and Arithmetic Reduction of Intensity are explored. Likelihood functions for such models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed, considering models with different memories. The model parameters, namely the shape and scale of the Power Law Process and the efficiency of repair, were estimated for the best fitted model. Estimation of the model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results provide valuable information for the mining company and can be used to support decision making regarding preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks
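
    A minimal sketch of fitting the Power Law Process that underlies such repair models: for a time-truncated observation window, the maximum-likelihood estimators have a standard closed form. The failure times below are invented for illustration:

```python
import numpy as np

def plp_mle(times, t_end):
    """Closed-form MLEs for the Power Law Process with intensity
    u(t) = (beta/theta) * (t/theta)**(beta - 1), observed on [0, t_end]."""
    times = np.asarray(times, dtype=float)
    n = times.size
    beta = n / np.sum(np.log(t_end / times))   # shape estimate
    theta = t_end / n ** (1.0 / beta)          # scale estimate
    return beta, theta

# Hypothetical failure ages of one truck, in days
failures = [50, 120, 260, 310, 480, 590, 640, 700]
beta_hat, theta_hat = plp_mle(failures, t_end=730)

# Expected number of failures over the next 90 days under the fitted model,
# using the cumulative mean function N(t) = (t/theta)**beta
expected_next = (820 / theta_hat) ** beta_hat - (730 / theta_hat) ** beta_hat
```

    A shape estimate above 1 indicates a deteriorating system, which is exactly the kind of information a preventive-maintenance decision would draw on.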

  6. A goodness-of-fit test for occupancy models with correlated within-season revisits

    Science.gov (United States)

    Wright, Wilson; Irvine, Kathryn M.; Rodhouse, Thomas J.

    2016-01-01

    Occupancy modeling is important for exploring species distribution patterns and for conservation monitoring. Within this framework, explicit attention is given to species detection probabilities estimated from replicate surveys of sample units. A central assumption is that replicate surveys are independent Bernoulli trials, but this assumption becomes untenable when ecologists serially deploy remote cameras and acoustic recording devices over days and weeks to survey rare and elusive animals. Proposed solutions involve modifying the detection-level component of the model (e.g., first-order Markov covariate). Evaluating whether a model sufficiently accounts for correlation is imperative, but clear guidance for practitioners is lacking. Currently, an omnibus goodness-of-fit test using a chi-square discrepancy measure on unique detection histories is available for occupancy models (MacKenzie and Bailey, Journal of Agricultural, Biological, and Environmental Statistics, 9, 2004, 300; hereafter, MacKenzie–Bailey test). We propose a join count summary measure adapted from spatial statistics to directly assess correlation after fitting a model. We motivate our work with a dataset of multinight bat call recordings from a pilot study for the North American Bat Monitoring Program. We found in simulations that our join count test was more reliable than the MacKenzie–Bailey test for detecting inadequacy of a model that assumed independence, particularly when serial correlation was low to moderate. A model that included a Markov-structured detection-level covariate produced unbiased occupancy estimates except in the presence of strong serial correlation and a revisit design consisting only of temporal replicates. When applied to two common bat species, our approach illustrates that sophisticated models do not guarantee adequate fit to real data, underscoring the importance of model assessment. Our join count test provides a widely applicable goodness-of-fit test and
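
    The join count idea can be illustrated with a simplified sketch (not the authors' exact test statistic): count adjacent 1-1 pairs in each site's detection history and compare the total against its expectation under independent Bernoulli surveys. All simulation settings below are invented:

```python
import numpy as np

def join_count_11(histories):
    """Observed number of adjacent 1-1 pairs across all detection histories.
    histories: (sites, surveys) array of 0/1 detections."""
    h = np.asarray(histories)
    return int(np.sum(h[:, :-1] * h[:, 1:]))

rng = np.random.default_rng(2)
n_sites, n_surveys, p = 200, 6, 0.4

# Independent surveys (the occupancy-model assumption)
independent = rng.binomial(1, p, size=(n_sites, n_surveys))

# Serially correlated surveys: repeat the previous outcome with prob 0.8
corr = np.empty((n_sites, n_surveys), dtype=int)
corr[:, 0] = rng.binomial(1, p, n_sites)
for j in range(1, n_surveys):
    stay = rng.random(n_sites) < 0.8
    corr[:, j] = np.where(stay, corr[:, j - 1], rng.binomial(1, p, n_sites))

expected = n_sites * (n_surveys - 1) * p ** 2   # E[1-1 joins] if independent
obs_ind, obs_corr = join_count_11(independent), join_count_11(corr)
```

    An excess of 1-1 joins over the independence expectation flags exactly the serial correlation that the chi-square discrepancy on detection histories can miss.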

  7. Brief communication: human cranial variation fits iterative founder effect model with African origin.

    Science.gov (United States)

    von Cramon-Taubadel, Noreen; Lycett, Stephen J

    2008-05-01

    Recent studies comparing craniometric and neutral genetic affinity matrices have concluded that, on average, human cranial variation fits a model of neutral expectation. While human craniometric and genetic data fit a model of isolation by geographic distance, it is not yet clear whether this is due to geographically mediated gene flow or human dispersal events. Recently, human genetic data have been shown to fit an iterative founder effect model of dispersal with an African origin, in line with the out-of-Africa replacement model for modern human origins, and Manica et al. (Nature 448 (2007) 346-349) have demonstrated that human craniometric data also fit this model. However, in contrast with the neutral model of cranial evolution suggested by previous studies, Manica et al. (2007) made the a priori assumption that cranial form has been subject to climatically driven natural selection and therefore corrected for climate prior to conducting their analyses. Here we employ a modified theoretical and methodological approach to test whether human cranial variability fits the iterative founder effect model. In contrast with Manica et al. (2007) we employ size-adjusted craniometric variables, since climatic factors such as temperature have been shown to correlate with aspects of cranial size. Despite these differences, we obtain similar results to those of Manica et al. (2007), with up to 26% of global within-population craniometric variation being explained by geographic distance from sub-Saharan Africa. Comparative analyses using non-African origins do not yield significant results. The implications of these results are discussed in the light of the modern human origins debate. (c) 2007 Wiley-Liss, Inc.

  8. The lz(p)* Person-Fit Statistic in an Unfolding Model Context

    NARCIS (Netherlands)

    Tendeiro, Jorge N.

    2017-01-01

    Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded

  9. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; i.e., all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  10. Fitting and comparing competing models of the species abundance distribution: assessment and prospect

    Directory of Open Access Journals (Sweden)

    Thomas J Matthews

    2014-06-01

    Full Text Available A species abundance distribution (SAD) characterises patterns in the commonness and rarity of all species within an ecological community. As such, the SAD provides the theoretical foundation for a number of other biogeographical and macroecological patterns, such as the species–area relationship, as well as being an interesting pattern in its own right. While there has been a resurgence in the study of SADs in the last decade, less focus has been placed on methodology in SAD research, and few attempts have been made to synthesise the vast array of methods which have been employed in SAD model evaluation. As such, our review has two aims. First, we provide a general overview of SADs, including descriptions of the commonly used distributions, plotting methods and issues with evaluating SAD models. Second, we review a number of recent advances in SAD model fitting and comparison. We conclude by providing a list of recommendations for fitting and evaluating SAD models. We argue that it is time for SAD studies to move away from many of the traditional methods available for fitting and evaluating models, such as sole reliance on the visual examination of plots, and embrace statistically rigorous techniques. In particular, we recommend the use of both goodness-of-fit tests and model-comparison analyses because each provides unique information which one can use to draw inferences.
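
    The recommendation to use formal model comparison can be illustrated with a hedged sketch: fitting a log-series and a geometric distribution to synthetic abundance data by maximum likelihood and comparing them by AIC. The distributions and data are illustrative choices, not the review's:

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

rng = np.random.default_rng(3)
abundances = stats.logser.rvs(0.95, size=300, random_state=rng)

# MLE for the log-series parameter p: at the maximum of the likelihood,
# the sample mean equals -p / ((1 - p) * log(1 - p))
xbar = abundances.mean()
p_hat = brentq(lambda p: -p / ((1 - p) * np.log(1 - p)) - xbar, 1e-9, 1 - 1e-9)

# MLE for a geometric distribution on {1, 2, ...}
q_hat = 1.0 / xbar

def aic(logpmf_values, k):
    # Akaike information criterion: 2k - 2 * log-likelihood
    return 2 * k - 2 * np.sum(logpmf_values)

aic_logser = aic(stats.logser.logpmf(abundances, p_hat), k=1)
aic_geom = aic(stats.geom.logpmf(abundances, q_hat), k=1)
```

    Comparing AIC values across candidate SAD models is exactly the kind of statistically rigorous alternative to visual plot inspection the review argues for.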

  11. Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.

    Science.gov (United States)

    Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei

    2015-02-01

    This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.

  12. A flexible, interactive software tool for fitting the parameters of neuronal models.

    Science.gov (United States)

    Friedrich, Péter; Vella, Michael; Gulyás, Attila I; Freund, Tamás F; Káli, Szabolcs

    2014-01-01

    The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.

  13. A flexible, interactive software tool for fitting the parameters of neuronal models

    Directory of Open Access Journals (Sweden)

    Péter eFriedrich

    2014-07-01

    Full Text Available The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting

  14. Kernel-density estimation and approximate Bayesian computation for flexible epidemiological model fitting in Python.

    Science.gov (United States)

    Irvine, Michael A; Hollingsworth, T Déirdre

    2018-05-26

    Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open access library, with examples to aid researchers to rapidly fit models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
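
    A stripped-down version of an adaptive-tolerance ABC scheme, far simpler than the kernel-density approach in the paper and using an invented Poisson toy model, might look like:

```python
import numpy as np

rng = np.random.default_rng(4)
observed = rng.poisson(6.0, size=50)       # toy "epidemiological" count data
s_obs = observed.mean()                    # summary statistic of the data

def simulate(rate):
    # Forward model: simulate a dataset and return its summary statistic
    return rng.poisson(rate, size=50).mean()

# Adaptive scheme: each generation, set the tolerance to the 25th
# percentile of the current distances, keep the closest particles,
# and resample them with a small jitter
samples = rng.uniform(0, 20, size=2000)    # draws from a flat prior on the rate
for _ in range(4):
    dist = np.array([abs(simulate(r) - s_obs) for r in samples])
    tol = np.quantile(dist, 0.25)          # adaptive tolerance
    keep = samples[dist <= tol]
    samples = rng.choice(keep, size=2000) + rng.normal(0, 0.2, size=2000)
    samples = np.clip(samples, 0.01, 20.0)

posterior_mean = samples.mean()            # approximate posterior mean
```

    The point of the adaptive tolerance is that no hand-tuned acceptance threshold is needed: the scheme tightens itself as the particle population concentrates.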

  15. An Improved Cognitive Model of the Iowa and Soochow Gambling Tasks With Regard to Model Fitting Performance and Tests of Parameter Consistency

    Directory of Open Access Journals (Sweden)

    Junyi eDai

    2015-03-01

    Full Text Available The Iowa Gambling Task (IGT) and the Soochow Gambling Task (SGT) are two experience-based risky decision-making tasks for examining decision-making deficits in clinical populations. Several cognitive models, including the expectancy-valence learning model (EVL) and the prospect valence learning model (PVL), have been developed to disentangle the motivational, cognitive, and response processes underlying the explicit choices in these tasks. The purpose of the current study was to develop an improved model that can fit empirical data better than the EVL and PVL models and, in addition, produce more consistent parameter estimates across the IGT and SGT. Twenty-six opiate users (mean age 34.23; SD 8.79) and 27 control participants (mean age 35; SD 10.44) completed both tasks. Eighteen cognitive models varying in evaluation, updating, and choice rules were fit to individual data and their performances were compared to that of a statistical baseline model to find a best fitting model. The results showed that the model combining the prospect utility function treating gains and losses separately, the decay-reinforcement updating rule, and the trial-independent choice rule performed the best in both tasks. Furthermore, the winning model produced more consistent individual parameter estimates across the two tasks than any of the other models.
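
    The winning model's ingredients, prospect utility with separate gain/loss treatment, decay-reinforcement updating, and a trial-independent softmax choice rule, can be sketched as follows. All parameter values and the payoff sequence are hypothetical:

```python
import numpy as np

def prospect_utility(x, alpha=0.5, lam=2.5):
    # Prospect utility: gains and losses evaluated separately,
    # with losses scaled by a loss-aversion parameter lam
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** alpha)

def pvl_expectancies(choices, outcomes, n_decks=4, alpha=0.5, lam=2.5, decay=0.5):
    # Decay-reinforcement rule: every expectancy decays each trial and
    # the chosen deck is reinforced by the utility of its net outcome
    ev = np.zeros(n_decks)
    for deck, x in zip(choices, outcomes):
        ev *= decay
        ev[deck] += prospect_utility(x, alpha, lam)
    return ev

def softmax(ev, theta=1.0):
    # Trial-independent choice rule: softmax with a fixed sensitivity
    z = theta * ev - np.max(theta * ev)
    p = np.exp(z)
    return p / p.sum()

# Hypothetical deck choices and net payoffs over eight trials
choices = [0, 1, 2, 3, 2, 2, 1, 0]
outcomes = [100, -50, 50, 50, 50, -25, 100, -250]
probs = softmax(pvl_expectancies(choices, outcomes))
```

    In an actual fit, the free parameters (alpha, lam, decay, theta) would be estimated per participant by maximising the likelihood of the observed choice sequence.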

  16. HDFITS: Porting the FITS data model to HDF5

    Science.gov (United States)

    Price, D. C.; Barsdell, B. R.; Greenhill, L. J.

    2015-09-01

    The FITS (Flexible Image Transport System) data format has been the de facto data format for astronomy-related data products since its inception in the late 1970s. While the FITS file format is widely supported, it lacks many of the features of more modern data serialization formats, such as the Hierarchical Data Format (HDF5). The HDF5 file format offers considerable advantages over FITS, such as improved I/O speed and compression, but has yet to gain widespread adoption within astronomy. One of the major holdbacks is that HDF5 is not well supported by data reduction software packages and image viewers. Here, we present a comparison of FITS and HDF5 as a format for storage of astronomy datasets. We show that the underlying data model of FITS can be ported to HDF5 in a straightforward manner, and that by doing so the advantages of the HDF5 file format can be leveraged immediately. In addition, we present a software tool, fits2hdf, for converting between FITS and a new 'HDFITS' format, where data are stored in HDF5 in a FITS-like manner. We show that HDFITS allows faster reading of data (up to 100x faster than FITS in some use cases), and improved compression (higher compression ratios and higher throughput). Finally, we show that by only changing the import lines in Python-based FITS utilities, HDFITS formatted data can be presented transparently as an in-memory FITS equivalent.

  17. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    Science.gov (United States)

    DeCarlo, Lawrence T

    2003-02-01

    The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
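
    Outside SPSS, one classical way to estimate the unequal-variance model is via the zROC line: z-transformed hit rates regressed on z-transformed false-alarm rates, where the slope estimates the reciprocal of the signal standard deviation. A hedged sketch with made-up ratings data:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical cumulative "yes" rates at increasingly liberal criteria,
# pooled from a ratings experiment (invented numbers)
hit_rates = np.array([0.28, 0.50, 0.70, 0.86, 0.95])
fa_rates = np.array([0.05, 0.14, 0.30, 0.52, 0.76])

# Under the unequal-variance Gaussian model, z(hit) is linear in z(fa):
# z(H) = mu/sigma + z(F)/sigma, with the noise distribution fixed at N(0, 1)
zh, zf = norm.ppf(hit_rates), norm.ppf(fa_rates)
slope, intercept = np.polyfit(zf, zh, 1)

signal_sd = 1.0 / slope        # sigma of the signal distribution
mu_signal = intercept / slope  # mean of the signal distribution
```

    A zROC slope different from 1 is the signature of unequal variances; the ordinal-regression route described in the article estimates the same quantities with proper maximum likelihood rather than least squares on transformed rates.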

  18. Study on fitness functions of genetic algorithm for dynamically correcting nuclide atmospheric diffusion model

    International Nuclear Information System (INIS)

    Ji Zhilong; Ma Yuanwei; Wang Dezhong

    2014-01-01

    Background: In atmospheric diffusion models for radioactive nuclides, the empirical dispersion coefficients were deduced under particular experimental conditions, and differences between those conditions and nuclear accident conditions are a source of deviation. A better estimate of a radioactive nuclide's actual dispersion process can be obtained by correcting the dispersion coefficients with observation data, and the Genetic Algorithm (GA) is an appropriate method for this correction procedure. Purpose: This study analyzes the influence of the fitness function on the correction procedure and on the forecast ability of the diffusion model. Methods: GA, coupled with a Lagrangian dispersion model, was used in a numerical simulation to compare the impact of 4 fitness functions on the correction result. Results: In the numerical simulation, the fitness function that takes observation deviation into consideration stands out when significant deviation exists in the observed data. After performing the correction procedure on the Kincaid experiment data, a significant boost was observed in the diffusion model's forecast ability. Conclusion: As the results show, in order to improve a dispersion model's forecast ability using GA, observation data should be given different weights in the fitness function corresponding to their errors. (authors)
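
    The paper's central point, that observations should be weighted by their error inside the GA fitness function, can be illustrated with a toy comparison. The sensor values and error levels below are invented:

```python
import numpy as np

def fitness_unweighted(pred, obs):
    # Plain goodness of fit: larger is better
    return 1.0 / (1.0 + np.sum((pred - obs) ** 2))

def fitness_weighted(pred, obs, sigma):
    # Observations with larger measurement error get smaller weight
    return 1.0 / (1.0 + np.sum(((pred - obs) / sigma) ** 2))

true_conc = np.array([1.0, 0.8, 0.6, 0.4, 0.2])     # true concentrations
sigma = np.array([0.02, 0.02, 0.02, 0.02, 0.5])     # last sensor is unreliable
obs = true_conc.copy()
obs[-1] += 1.0                                       # gross error on that sensor

good_pred = true_conc            # candidate matching the true field
bad_pred = true_conc + 0.2       # candidate biased toward the outlier

uw = fitness_unweighted(good_pred, obs), fitness_unweighted(bad_pred, obs)
w = fitness_weighted(good_pred, obs, sigma), fitness_weighted(bad_pred, obs, sigma)
```

    The unweighted fitness rewards the candidate that chases the faulty sensor, while the error-weighted fitness still prefers the physically correct candidate, which is the behaviour the authors observed when significant deviation exists in the observations.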

  19. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    Science.gov (United States)

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  20. Efficient Constrained Local Model Fitting for Non-Rigid Face Alignment.

    Science.gov (United States)

    Lucey, Simon; Wang, Yang; Cox, Mark; Sridharan, Sridha; Cohn, Jeffery F

    2009-11-01

    Active appearance models (AAMs) have demonstrated great utility when being employed for non-rigid face alignment/tracking. The "simultaneous" algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real time performance (2-3 fps). The "project-out" algorithm for fitting an AAM achieves faster than real time performance (> 200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve superior performance to the "simultaneous" AAM algorithm along with real time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criteria, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criteria for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the "exhaustive local search" (ELS) algorithm. Experiments were conducted on the CMU Multi-PIE database.

  1. Person-fit to the Five Factor Model of personality

    Czech Academy of Sciences Publication Activity Database

    Allik, J.; Realo, A.; Mõttus, R.; Borkenau, P.; Kuppens, P.; Hřebíčková, Martina

    2012-01-01

    Roč. 71, č. 1 (2012), s. 35-45 ISSN 1421-0185 R&D Projects: GA ČR GAP407/10/2394 Institutional research plan: CEZ:AV0Z70250504 Keywords: Five Factor Model * cross-cultural comparison * person-fit Subject RIV: AN - Psychology Impact factor: 0.638, year: 2012

  2. Model-independent partial wave analysis using a massively-parallel fitting framework

    Science.gov (United States)

    Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.

    2017-10-01

    The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D+ → h+h+h-. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m2(h+h-) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.
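
    The spline-anchored S-wave parameterisation can be sketched in a few lines; the control-point values below are invented, and GooFit itself evaluates this on the GPU inside the likelihood:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical S-wave control points: magnitude and phase anchored
# at a few m2(h+h-) values, with cubic splines interpolating in between
m2 = np.array([0.3, 0.6, 1.0, 1.5, 2.1, 2.8])     # control points [GeV^2]
mag = np.array([1.0, 1.8, 2.5, 1.6, 0.9, 0.4])    # anchored magnitudes
phase = np.array([0.1, 0.6, 1.4, 2.2, 2.6, 2.8])  # anchored phases [rad]

mag_spline = CubicSpline(m2, mag)
phase_spline = CubicSpline(m2, phase)

def s_wave_amplitude(m2_val):
    # Complex model-independent S-wave amplitude between control points
    return mag_spline(m2_val) * np.exp(1j * phase_spline(m2_val))

amp = s_wave_amplitude(1.2)
```

    In the fit, the anchored magnitudes and phases are the free parameters; the spline only supplies a smooth interpolation between them, so no resonance lineshape is imposed on the S-wave.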

  3. Introducing the fit-criteria assessment plot - A visualisation tool to assist class enumeration in group-based trajectory modelling.

    Science.gov (United States)

    Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria

    2017-10-01

    Background and objective: Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods: An R code that accepts universal data input is presented. The programme condenses relevant group-based trajectory modelling output information of model fit indices in automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate the fit-criteria assessment plot's utility. Results: The fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. The fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions: The fit-criteria assessment plot is an exploratory, visualisation tool that can be employed to assist decisions in the initial and decisive phase of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster group-based trajectory modelling's adequate use.

  4. Fast fitting of non-Gaussian state-space models to animal movement data via Template Model Builder

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Whoriskey, Kim; Yurkowski, David

    2015-01-01

    recommend using the Laplace approximation combined with automatic differentiation (as implemented in the novel R package Template Model Builder; TMB) for the fast fitting of continuous-time multivariate non-Gaussian SSMs. Through Argos satellite tracking data, we demonstrate that the use of continuous...... are able to estimate additional parameters compared to previous methods, all without requiring a substantial increase in computational time. The model implementation is made available through the R package argosTrack....

  5. A Data-Driven Method for Selecting Optimal Models Based on Graphical Visualisation of Differences in Sequentially Fitted ROC Model Parameters

    Directory of Open Access Journals (Sweden)

    K S Mwitondi

    2013-05-01

    Full Text Available Differences in modelling techniques and model performance assessments typically impinge on the quality of knowledge extraction from data. We propose an algorithm for determining optimal patterns in data by separately training and testing three decision tree models in the Pima Indians Diabetes and the Bupa Liver Disorders datasets. Model performance is assessed using ROC curves and the Youden Index. Moving differences between sequential fitted parameters are then extracted, and their respective probability density estimations are used to track their variability using an iterative graphical data visualisation technique developed for this purpose. Our results show that the proposed strategy separates the groups more robustly than the plain ROC/Youden approach, eliminates obscurity, and minimizes over-fitting. Further, the algorithm can easily be understood by non-specialists and demonstrates multi-disciplinary compliance.

  6. The Meaning of Goodness-of-Fit Tests: Commentary on "Goodness-of-Fit Assessment of Item Response Theory Models"

    Science.gov (United States)

    Thissen, David

    2013-01-01

    In this commentary, David Thissen states that "Goodness-of-fit assessment for IRT models is maturing; it has come a long way from zero." Thissen then references prior works on "goodness of fit" in the index of Lord and Novick's (1968) classic text; Yen (1984); Drasgow, Levine, Tsien, Williams, and Mead (1995); Chen and…

  7. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    Directory of Open Access Journals (Sweden)

    Jinwei Wang

    2014-01-01

    Full Text Available The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods and has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs), which feature a many-core, fine-grained parallel architecture, provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism: we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experimental results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  8. Fitting Diffusion Item Response Theory Models for Responses and Response Times Using the R Package diffIRT

    Directory of Open Access Journals (Sweden)

    Dylan Molenaar

    2015-08-01

    Full Text Available In the psychometric literature, item response theory models have been proposed that explicitly take into account the decision process underlying the responses of subjects to psychometric test items. Application of these models is, however, hampered by the absence of general and flexible software to fit them. In this paper, we present diffIRT, an R package that can be used to fit item response theory models that are based on a diffusion process. We discuss parameter estimation and model fit assessment, show the viability of the package in a simulation study, and illustrate the use of the package with two datasets pertaining to extraversion and mental rotation. In addition, we illustrate how the package can be used to fit the traditional diffusion model (as it was originally developed in experimental psychology) to data.
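    The diffusion process underlying such models can be illustrated without the diffIRT package itself. A minimal simulation of a two-boundary Wiener process with hypothetical parameter values (diffIRT's own estimation routines are not reproduced here):

```python
import numpy as np

def simulate_diffusion(drift, boundary, ndt, n_trials=400, dt=0.001, rng=None):
    """Simulate a symmetric two-boundary Wiener diffusion process.

    Returns (responses, rts): responses are 1 (upper boundary hit) or 0 (lower),
    and rts include a fixed non-decision time `ndt`.
    """
    rng = np.random.default_rng(rng)
    responses = np.empty(n_trials, dtype=int)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:  # Euler step of the diffusion until a boundary is hit
            x += drift * dt + np.sqrt(dt) * rng.standard_normal()
            t += dt
        responses[i] = int(x >= boundary)
        rts[i] = t + ndt
    return responses, rts

responses, rts = simulate_diffusion(drift=1.0, boundary=1.0, ndt=0.3, rng=42)
```

With positive drift, most trials terminate at the upper boundary, and every response time exceeds the non-decision time, the two signatures a diffusion IRT model exploits when fitting responses and response times jointly.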

  9. Assessing model fit in latent class analysis when asymptotics do not hold

    NARCIS (Netherlands)

    van Kollenburg, Geert H.; Mulder, Joris; Vermunt, Jeroen K.

    2015-01-01

    The application of latent class (LC) analysis involves evaluating the LC model using goodness-of-fit statistics. To assess the misfit of a specified model, say with the Pearson chi-squared statistic, a p-value can be obtained using an asymptotic reference distribution. However, asymptotic p-values

  10. Fitting the Fractional Polynomial Model to Non-Gaussian Longitudinal Data

    Directory of Open Access Journals (Sweden)

    Ji Hoon Ryoo

    2017-08-01

    Full Text Available As in cross-sectional studies, longitudinal studies involve non-Gaussian data such as binomial, Poisson, gamma, and inverse-Gaussian distributions, and multivariate exponential families. A number of statistical tools have thus been developed to deal with non-Gaussian longitudinal data, including analytic techniques to estimate parameters in both fixed and random effects models. However, as yet, growth modeling with non-Gaussian data is somewhat limited when considering the transformed expectation of the response via a linear predictor as a functional form of explanatory variables. In this study, we introduce a fractional polynomial model (FPM) that can be applied to model non-linear growth with non-Gaussian longitudinal data and demonstrate its use by fitting two empirical binary and count data models. The results clearly show the efficiency and flexibility of the FPM for such applications.
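    The core of a fractional polynomial fit is a grid search over a small conventional set of powers. A minimal least-squares sketch for a degree-2 FP, using Gaussian errors and synthetic data for simplicity (the paper's applications use GLMs for binary and count outcomes):

```python
import numpy as np

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]  # conventional fractional-polynomial powers

def fp_term(x, p):
    # Power 0 is defined as log(x) in the fractional-polynomial convention.
    return np.log(x) if p == 0 else x ** p

def best_fp2(x, y):
    """Grid-search the best degree-2 fractional polynomial by residual sum of squares."""
    best = None
    for i, p1 in enumerate(POWERS):
        for p2 in POWERS[i:]:
            t1 = fp_term(x, p1)
            # Repeated powers use the x^p * log(x) convention.
            t2 = t1 * np.log(x) if p1 == p2 else fp_term(x, p2)
            X = np.column_stack([np.ones_like(x), t1, t2])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = np.sum((y - X @ beta) ** 2)
            if best is None or rss < best[0]:
                best = (rss, (p1, p2), beta)
    return best

x = np.linspace(0.5, 5, 60)
y = 2.0 + 1.5 * np.sqrt(x) - 0.4 * np.log(x)   # true powers are (0, 0.5)
rss, powers, beta = best_fp2(x, y)
```

For a non-Gaussian outcome, the same transformed columns would simply enter the linear predictor of a GLM instead of an ordinary least-squares fit.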

  11. Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models

    Science.gov (United States)

    Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning

    2012-01-01

    The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…

  12. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    Science.gov (United States)

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, representing 25-item dichotomous scales with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors, under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
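    The effect of Bonferroni correction on family-wise Type I error is easy to see in a toy simulation, with uniform null p-values standing in for the RUMM item-fit statistics:

```python
import numpy as np

rng = np.random.default_rng(7)
n_reps, n_items, alpha = 2000, 25, 0.05

# Under the null hypothesis of model fit, each item-level p-value is Uniform(0, 1).
pvals = rng.uniform(size=(n_reps, n_items))

# Family-wise error rate: probability that at least one of the 25 items is flagged.
fwer_raw = np.mean((pvals < alpha).any(axis=1))            # expect ~1 - 0.95**25 ~ 0.72
fwer_bonf = np.mean((pvals < alpha / n_items).any(axis=1))  # expect ~alpha
```

Uncorrected testing of 25 items flags spurious misfit in roughly 70% of replications, while the Bonferroni-adjusted threshold keeps the family-wise rate near the nominal 5%.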

  13. A scaled Lagrangian method for performing a least squares fit of a model to plant data

    International Nuclear Information System (INIS)

    Crisp, K.E.

    1988-01-01

    Due to measurement errors, even a perfect mathematical model will not be able to match all the corresponding plant measurements simultaneously. A further discrepancy may be introduced if an un-modelled change in conditions occurs within the plant which should have required a corresponding change in model parameters - e.g. a gradual deterioration in the performance of some component(s). Taking both these factors into account, what is required is that the overall discrepancy between the model predictions and the plant data is kept to a minimum. This process is known as 'model fitting'. A method is presented for minimising any function which consists of a sum of squared terms, subject to any constraints. Its most obvious application is in the process of model fitting, where a weighted sum of squares of the differences between model predictions and plant data is the function to be minimised. When implemented within existing Central Electricity Generating Board computer models, it will perform a least squares fit of a model to plant data within a single job submission. (author)
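    The scaled Lagrangian method itself is not reproduced here, but the problem it solves, minimising a weighted sum of squared model-data differences subject to constraints, can be sketched with an off-the-shelf SLSQP solver and hypothetical plant data:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical "plant data": measurements y_i of a model m(t) = x0 * exp(-x1 * t).
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.05, 1.22, 0.75, 0.46, 0.27])
w = np.ones_like(y)  # measurement weights (e.g. inverse variances)

def objective(x):
    """Weighted sum of squared differences between model predictions and data."""
    residuals = y - x[0] * np.exp(-x[1] * t)
    return np.sum(w * residuals ** 2)

# Example constraint: the initial amplitude must not exceed 2.0.
cons = [{"type": "ineq", "fun": lambda x: 2.0 - x[0]}]
result = minimize(objective, x0=[1.0, 0.1], method="SLSQP", constraints=cons)
```

After the solve, `result.x` holds the fitted parameters with the constraint respected, the same outcome the paper's single-job-submission fit is designed to deliver.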

  14. Fitted HBT radii versus space-time variances in flow-dominated models

    International Nuclear Information System (INIS)

    Lisa, Mike; Frodermann, Evan; Heinz, Ulrich

    2007-01-01

    The inability of otherwise successful dynamical models to reproduce the 'HBT radii' extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the 'RHIC HBT Puzzle'. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source which can be directly computed from the emission function, without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models some of which exhibit significant deviations from simple Gaussian behaviour. By Fourier transforming the emission function we compute the 2-particle correlation function and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and measured HBT radii remain, we show that a more 'apples-to-apples' comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data. (author)

  15. Feature extraction through least squares fit to a simple model

    International Nuclear Information System (INIS)

    Demuth, H.B.

    1976-01-01

    The Oak Ridge National Laboratory (ORNL) presented the Los Alamos Scientific Laboratory (LASL) with 18 radiographs of fuel rod test bundles. The problem is to estimate the thickness of the gap between some cylindrical rods and a flat wall surface. The edges of the gaps are poorly defined due to finite source size, x-ray scatter, parallax, film grain noise, and other degrading effects. The radiographs were scanned and the scan-line data were averaged to reduce noise and to convert the problem to one dimension. A model of the ideal gap, convolved with an appropriate point-spread function, was fit to the averaged data with a least squares program; and the gap width was determined from the final fitted-model parameters. The least squares routine did converge and the gaps obtained are of reasonable size. The method is remarkably insensitive to noise. This report describes the problem, the techniques used to solve it, and the results and conclusions. Suggestions for future work are also given
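    The approach generalises readily: an ideal gap profile convolved with a Gaussian point-spread function has a closed form in terms of error functions, which can then be fitted by least squares. A sketch with synthetic scan-line data (the report's actual model and data are not reproduced):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def blurred_gap(x, center, width, sigma, base, amp):
    """A rectangular 'gap' profile convolved with a Gaussian point-spread function."""
    left = (x - (center - width / 2)) / (np.sqrt(2) * sigma)
    right = (x - (center + width / 2)) / (np.sqrt(2) * sigma)
    return base + amp * 0.5 * (erf(left) - erf(right))

# Synthetic averaged scan line: true gap width 0.8, heavy blur plus noise.
rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 400)
truth = blurred_gap(x, 0.1, 0.8, 0.6, 10.0, 4.0)
data = truth + rng.normal(0, 0.2, x.size)

popt, _ = curve_fit(blurred_gap, x, data, p0=[0.0, 1.0, 0.5, 9.0, 3.0])
center, width, sigma, base, amp = popt
```

As the abstract notes, this kind of fit is remarkably insensitive to noise: the gap width is recovered from the final fitted-model parameters even when the edges are poorly defined.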

  16. Twitter classification model: the ABC of two million fitness tweets.

    Science.gov (United States)

    Vickey, Theodore A; Ginis, Kathleen Martin; Dabrowski, Maciej

    2013-09-01

    The purpose of this project was to design and test data collection and management tools that can be used to study the use of mobile fitness applications and social networking within the context of physical activity. This project was conducted over a 6-month period and involved collecting publicly shared Twitter data from five mobile fitness apps (Nike+, RunKeeper, MyFitnessPal, Endomondo, and dailymile). During that time, over 2.8 million tweets were collected, processed, and categorized using an online tweet collection application and a customized JavaScript. Using grounded theory, a classification model was developed to categorize and understand the types of information being shared by application users. Our data show that by tracking mobile fitness app hashtags, a wealth of information can be gathered, including but not limited to daily use patterns, exercise frequency, location-based workouts, and overall workout sentiment.

  17. Assessing item fit for unidimensional item response theory models using residuals from estimated item response functions.

    Science.gov (United States)

    Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee

    2013-07-01

    Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
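    A simplified version of such residual analysis can be sketched by assuming known item parameters and comparing observed proportions correct with the model item characteristic curve within ability groups (the paper's ratio estimator is not reproduced here):

```python
import numpy as np

def icc_2pl(theta, a, b):
    """Two-parameter logistic item characteristic curve."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def standardized_residuals(theta, responses, a, b, bins=5):
    """Standardized residuals z_k = (p_obs - p_model) / sqrt(p_model (1 - p_model) / n_k)
    computed within ability groups defined by quantiles of theta."""
    edges = np.quantile(theta, np.linspace(0, 1, bins + 1))
    z = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (theta >= lo) & (theta <= hi)
        n_k = mask.sum()
        p_model = icc_2pl(theta[mask], a, b).mean()
        p_obs = responses[mask].mean()
        z.append((p_obs - p_model) / np.sqrt(p_model * (1 - p_model) / n_k))
    return np.array(z)

# When the model is true, the residuals should look approximately standard normal.
rng = np.random.default_rng(3)
theta = rng.normal(size=5000)
responses = (rng.uniform(size=5000) < icc_2pl(theta, a=1.2, b=0.3)).astype(int)
z = standardized_residuals(theta, responses, a=1.2, b=0.3)
```

Large absolute residuals in some ability group would flag item misfit, which is the diagnostic use the abstract describes.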

  18. GOODNESS-OF-FIT TEST FOR THE ACCELERATED FAILURE TIME MODEL BASED ON MARTINGALE RESIDUALS

    Czech Academy of Sciences Publication Activity Database

    Novák, Petr

    2013-01-01

    Roč. 49, č. 1 (2013), s. 40-59 ISSN 0023-5954 R&D Projects: GA MŠk(CZ) 1M06047 Grant - others:GA MŠk(CZ) SVV 261315/2011 Keywords : accelerated failure time model * survival analysis * goodness-of-fit Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2013/SI/novak-goodness-of-fit test for the aft model based on martingale residuals.pdf

  19. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

    Science.gov (United States)

    W. Hasan, W. Z.

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system’s modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. Judged by minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model. PMID:29351554
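    Vector fitting itself involves iterative pole relocation, but the underlying idea, fitting a rational function to frequency-response samples by linearised least squares, can be sketched as follows (a Levy-style linearisation on synthetic first-order data, not the paper's modified algorithm):

```python
import numpy as np

# Synthetic frequency-response data from a known first-order system
# H(s) = (2 + 0.5 s) / (1 + 0.1 s), sampled on the imaginary axis s = jw.
w = np.logspace(-1, 2, 50)
s = 1j * w
H = (2 + 0.5 * s) / (1 + 0.1 * s)

# Levy's linearisation: H (1 + a1 s) = b0 + b1 s  =>  b0 + b1 s - a1 s H = H,
# which is linear in the unknowns (b0, b1, a1).
A = np.column_stack([np.ones_like(s), s, -s * H])
coef, *_ = np.linalg.lstsq(
    np.vstack([A.real, A.imag]), np.concatenate([H.real, H.imag]), rcond=None
)
b0, b1, a1 = coef
```

Stacking real and imaginary parts keeps the least-squares problem real-valued; on noiseless data the true coefficients are recovered exactly.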

  20. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Science.gov (United States)

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. Judged by minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.

  1. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Directory of Open Access Journals (Sweden)

    A H Sabry

    Full Text Available The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. Judged by minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.

  2. Assessing performance of Bayesian state-space models fit to Argos satellite telemetry locations processed with Kalman filtering.

    Directory of Open Access Journals (Sweden)

    Mónica A Silva

    Full Text Available Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with the KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with Argos satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to "true" GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, location and behavioural states estimated by switching state-space models (SSSMs) fitted to data derived from KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6 ± 5.6 km) was nearly half that of LS estimates (11.6 ± 8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales' behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve the spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates.
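    Error statistics of the kind reported here (distance to matched GPS fixes, percentage within 5 km) reduce to a great-circle distance computation. A sketch with hypothetical coordinates, not the study's tracking data:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between paired coordinates (degrees)."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

# Hypothetical SSM-estimated positions vs. matched "true" GPS fixes.
est_lat = np.array([38.53, 38.60, 38.71])
est_lon = np.array([-28.63, -28.55, -28.40])
gps_lat = np.array([38.54, 38.58, 38.70])
gps_lon = np.array([-28.60, -28.56, -28.38])

errors = haversine_km(est_lat, est_lon, gps_lat, gps_lon)
pct_within_5km = 100.0 * np.mean(errors < 5.0)
```

The same per-fix error vector also supports the uncertainty summaries (mean ± SD) quoted in the abstract.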

  3. The effect of measurement quality on targeted structural model fit indices: A comment on Lance, Beck, Fan, and Carter (2016).

    Science.gov (United States)

    McNeish, Daniel; Hancock, Gregory R

    2018-03-01

    Lance, Beck, Fan, and Carter (2016) recently advanced 6 new fit indices and associated cutoff values for assessing data-model fit in the structural portion of traditional latent variable path models. The authors appropriately argued that, although most researchers' theoretical interest rests with the latent structure, they still rely on indices of global model fit that simultaneously assess both the measurement and structural portions of the model. As such, Lance et al. proposed indices intended to assess the structural portion of the model in isolation of the measurement model. Unfortunately, although these strategies separate the assessment of the structure from the fit of the measurement model, they do not isolate the structure's assessment from the quality of the measurement model. That is, even with a perfectly fitting measurement model, poorer quality (i.e., less reliable) measurements will yield a more favorable verdict regarding structural fit, whereas better quality (i.e., more reliable) measurements will yield a less favorable structural assessment. This phenomenon, referred to by Hancock and Mueller (2011) as the reliability paradox, affects not only traditional global fit indices but also those structural indices proposed by Lance et al. as well. Fortunately, as this comment will clarify, indices proposed by Hancock and Mueller help to mitigate this problem and allow the structural portion of the model to be assessed independently of both the fit of the measurement model as well as the quality of indicator variables contained therein. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  4. Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.

    Science.gov (United States)

    Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E

    2007-02-15

    Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MatLab functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
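    The analytical solution of a linear kinetic scheme is the matrix exponential of its rate matrix. A minimal sketch for a hypothetical three-state scheme (VisKin's interface and fitting layers are not reproduced):

```python
import numpy as np
from scipy.linalg import expm

# Three-state scheme A <-> B <-> C with first-order rate constants (1/s).
k_ab, k_ba = 2.0, 1.0
k_bc, k_cb = 0.5, 0.25

# Rate matrix Q: dp/dt = Q @ p; each column sums to zero so probability is conserved.
Q = np.array([
    [-k_ab,          k_ba,     0.0],
    [ k_ab, -(k_ba + k_bc),   k_cb],
    [  0.0,          k_bc,  -k_cb],
])

p0 = np.array([1.0, 0.0, 0.0])   # all molecules start in state A
p_t = expm(Q * 2.0) @ p0         # state occupancies after t = 2 s
```

Because the scheme satisfies detailed balance, the long-time limit is the equilibrium distribution (1/7, 2/7, 4/7) for these rates; checking such thermodynamic cycle constraints is exactly what the abstract's graph-theoretic step automates.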

  5. The bystander effect model of Brenner and Sachs fitted to lung cancer data in 11 cohorts of underground miners, and equivalence of fit of a linear relative risk model with adjustment for attained age and age at exposure

    International Nuclear Information System (INIS)

    Little, M P

    2004-01-01

    Bystander effects following exposure to α-particles have been observed in many experimental systems, and imply that linearly extrapolating low dose risks from high dose data might materially underestimate risk. Brenner and Sachs (2002 Int. J. Radiat. Biol. 78 593-604; 2003 Health Phys. 85 103-8) have recently proposed a model of the bystander effect which they use to explain the inverse dose rate effect observed for lung cancer in underground miners exposed to radon daughters. In this paper we fit the model of the bystander effect proposed by Brenner and Sachs to 11 cohorts of underground miners, taking account of the covariance structure of the data and the period of latency between the development of the first pre-malignant cell and clinically overt cancer. We also fitted a simple linear relative risk model, with adjustment for age at exposure and attained age. The methods that we use for fitting both models are different from those used by Brenner and Sachs, in particular taking account of the covariance structure, which they did not, and omitting certain unjustifiable adjustments to the miner data. The fit of the original model of Brenner and Sachs (with 0 y period of latency) is generally poor, although it is much improved by assuming a 5 or 6 y period of latency from the first appearance of a pre-malignant cell to cancer. The fit of this latter model is equivalent to that of a linear relative risk model with adjustment for age at exposure and attained age. In particular, both models are capable of describing the observed inverse dose rate effect in this data set

  6. Source Localization with Acoustic Sensor Arrays Using Generative Model Based Fitting with Sparse Constraints

    Directory of Open Access Journals (Sweden)

    Javier Macias-Guarasa

    2012-10-01

    Full Text Available This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints on the parameters of the model are included, enforcing the number of simultaneously active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with the SRP-PHAT strategies.

  7. Multiple organ definition in CT using a Bayesian approach for 3D model fitting

    Science.gov (United States)

    Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.

    1995-08-01

    Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape encoded in a set of biometric organ models--specifically for the liver and kidney-- that accurately represents patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition both before and after the addition of a kidney model to the fitting; we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.

  8. The disconnected values model improves mental well-being and fitness in an employee wellness program.

    Science.gov (United States)

    Anshel, Mark H; Brinthaupt, Thomas M; Kang, Minsoo

    2010-01-01

    This study examined the effect of a 10-week wellness program on changes in physical fitness and mental well-being. The conceptual framework for this study was the Disconnected Values Model (DVM). According to the DVM, detecting the inconsistencies between negative habits and values (e.g., health, family, faith, character) and concluding that these "disconnects" are unacceptable promotes the need for health behavior change. Participants were 164 full-time employees at a university in the southeastern U.S. The program included fitness coaching and a 90-minute orientation based on the DVM. Multivariate Mixed Model analyses indicated significantly improved scores from pre- to post-intervention on selected measures of physical fitness and mental well-being. The results suggest that the Disconnected Values Model provides an effective cognitive-behavioral approach to generating health behavior change in a 10-week workplace wellness program.

  9. Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes

    Science.gov (United States)

    Leite, Walter L.; Stapleton, Laura M.

    2011-01-01

    In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…

  10. The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting

    Science.gov (United States)

    Tao, Zhang; Li, Zhang; Dingjun, Chen

    On the basis of second-order (quadratic) curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A predictive growth model is introduced in this paper, and the model parameters are solved with the Matlab software. The validity of the model is confirmed through a numerical experiment. The experimental results show that the precision of the model is satisfactory.
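    Second-order curve fitting of this kind is a one-liner with numpy.polyfit. A sketch with hypothetical yearly site counts (the paper's data are not public):

```python
import numpy as np

# Hypothetical yearly counts of e-commerce sites, in thousands.
years = np.arange(2002, 2010)
sites = np.array([1.1, 1.6, 2.4, 3.5, 4.9, 6.6, 8.6, 10.9])

# Fit a second-order polynomial and extrapolate one year ahead.
t = years - years[0]
coeffs = np.polyfit(t, sites, deg=2)        # highest-degree coefficient first
pred_next = np.polyval(coeffs, t[-1] + 1)   # prediction for 2010
```

Because the toy data are exactly quadratic (second differences are constant), the fit recovers the generating coefficients and the extrapolation is exact.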

  11. The global electroweak Standard Model fit after the Higgs discovery

    CERN Document Server

    Baak, Max

    2013-01-01

    We present an update of the global Standard Model (SM) fit to electroweak precision data under the assumption that the new particle discovered at the LHC is the SM Higgs boson. In this scenario all parameters entering the calculations of electroweak precision observables are known, allowing, for the first time, the SM to be over-constrained at the electroweak scale and its validity asserted. Within the SM the W boson mass and the effective weak mixing angle can be accurately predicted from the global fit. The results are compatible with, and exceed in precision, the direct measurements. An updated determination of the S, T and U parameters, which parametrize the oblique vacuum corrections, is given. The obtained values show good consistency with the SM expectation, and no direct signs of new physics are seen. We conclude with an outlook to the global electroweak fit for a future e+e- collider.

  12. Fit reduced GUTS models online: From theory to practice.

    Science.gov (United States)

    Baudrot, Virgile; Veber, Philippe; Gence, Guillaume; Charles, Sandrine

    2018-05-20

    Mechanistic modeling approaches, such as the toxicokinetic-toxicodynamic (TKTD) framework, are promoted by international institutions such as the European Food Safety Authority and the Organization for Economic Cooperation and Development to assess the environmental risk of chemical products generated by human activities. TKTD models can encompass a large set of mechanisms describing the kinetics of compounds inside organisms (e.g., uptake and elimination) and their effects at the level of individuals (e.g., damage accrual, recovery, and death mechanism). Compared to classical dose-response models, TKTD approaches have many advantages, including accounting for temporal aspects of exposure and toxicity, considering data points all along the experiment and not only at the end, and making predictions for untested situations such as realistic exposure scenarios. Among TKTD models, the general unified threshold model of survival (GUTS) is one of the most recent and innovative frameworks, but it is still underused in practice, especially by risk assessors, because specialist programming and statistical skills are necessary to run it. Making GUTS models easier to use through a new module freely available from the web platform MOSAIC (standing for MOdeling and StAtistical tools for ecotoxICology) should promote GUTS operability in support of the daily work of environmental risk assessors. This paper presents the main features of MOSAIC_GUTS: uploading of the experimental data, GUTS fitting analysis, and LCx estimates with their uncertainty. These features are exemplified with literature data. Integr Environ Assess Manag 2018;00:000-000. © 2018 SETAC.

  13. Assessing a moderating effect and the global fit of a PLS model on online trading

    Directory of Open Access Journals (Sweden)

    Juan J. García-Machado

    2017-12-01

    Full Text Available This paper proposes a PLS model for the study of online trading. Traditional investing has experienced a revolution due to the rise of e-trading services that enable investors to use the Internet to conduct secure trading. On the one hand, model results show that there is a positive, direct and statistically significant relationship between personal outcome expectations, perceived relative advantage, shared vision and economy-based trust with the quality of knowledge. On the other hand, trading frequency and portfolio performance also exhibit this relationship. After including the investor's income and financial wealth (IFW) as a moderating effect, the PLS model was enhanced, and we found that the interaction term is negative and statistically significant; thus, higher IFW levels entail a weaker relationship between trading frequency and portfolio performance, and vice versa. Finally, with regard to the goodness of overall model fit, the SRMR and dG measures indicate that the model fits, so it is likely that the model is true.
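
The moderating-effect logic described above (an interaction term between trading frequency and IFW) can be sketched with ordinary least squares standing in for PLS; all variable names, coefficients and data below are invented for illustration, not the paper's:

```python
import numpy as np

# Hedged sketch of testing a moderating effect: regress performance on
# frequency, IFW, and their interaction; a negative interaction
# coefficient means the frequency-performance slope weakens as IFW
# rises, matching the qualitative finding described in the abstract.
rng = np.random.default_rng(7)
n = 1000
freq = rng.normal(0.0, 1.0, n)   # trading frequency (standardised)
ifw = rng.normal(0.0, 1.0, n)    # income & financial wealth
# assumed truth: positive main effect, negative interaction
perf = 0.5 * freq + 0.2 * ifw - 0.3 * freq * ifw + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), freq, ifw, freq * ifw])
beta, *_ = np.linalg.lstsq(X, perf, rcond=None)
print(beta)  # [intercept, freq, ifw, interaction]
```

The sign and significance of `beta[3]` is what carries the moderation claim.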

  14. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DEFF Research Database (Denmark)

    Ding, Tao; Li, Cheng; Huang, Can

    2018-01-01

    In order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big-data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost... optimality. Numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.

  15. Using geometry to improve model fitting and experiment design for glacial isostasy

    Science.gov (United States)

    Kachuck, S. B.; Cathles, L. M.

    2017-12-01

    As scientists we routinely deal with models, which are geometric objects at their core - the manifestation of a set of parameters as predictions for comparison with observations. When the number of observations exceeds the number of parameters, the model is a hypersurface (the model manifold) in the space of all possible predictions. The object of parameter fitting is to find the parameters corresponding to the point on the model manifold as close to the vector of observations as possible. But the geometry of the model manifold can make this difficult: by curving, by ending abruptly (where, for instance, parameters go to zero or infinity), and by stretching and compressing the parameters together in unexpected directions, it can frustrate algorithms that try to adjust the parameters efficiently. Even at the optimal point on the model manifold, parameters might not be individually resolved well enough to be applied to new contexts. In our context of glacial isostatic adjustment, models of sparse surface observations have a broad spread of sensitivity to mixtures of the earth's viscous structure and the surface distribution of ice over the last glacial cycle. This impedes precise statements about crucial geophysical processes, such as the planet's thermal history or the climates that controlled the ice age. We employ geometric methods developed in the field of systems biology to improve the efficiency of fitting (geodesic-accelerated Levenberg-Marquardt) and to identify the maximally informative sources of additional data to make better predictions of sea levels and ice configurations (optimal experiment design). We demonstrate this in particular in reconstructions of the Barents Sea Ice Sheet, where we show that only certain kinds of data from the central Barents have the power to distinguish between proposed models.
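
The starting point of this abstract, plain Levenberg-Marquardt fitting on a model manifold, can be sketched as follows (without the geodesic acceleration it goes on to use); the decay model, parameter values and data are assumed for illustration:

```python
import numpy as np

# Bare-bones Levenberg-Marquardt for a two-parameter decay model:
# damped Gauss-Newton steps, with the damping mu relaxed on accepted
# steps and increased on rejected ones.
def residuals(theta, t, y):
    a, lam = theta
    return y - a * np.exp(-lam * t)

def jacobian(theta, t):
    a, lam = theta
    e = np.exp(-lam * t)
    return np.column_stack([-e, a * t * e])      # d(residual)/d(theta)

def lm_fit(theta, t, y, mu=1e-3, iters=50):
    for _ in range(iters):
        r = residuals(theta, t, y)
        J = jacobian(theta, t)
        H = J.T @ J + mu * np.eye(len(theta))    # damped normal equations
        step = np.linalg.solve(H, -J.T @ r)
        if np.sum(residuals(theta + step, t, y) ** 2) < np.sum(r ** 2):
            theta, mu = theta + step, mu * 0.5   # accept, relax damping
        else:
            mu *= 4.0                            # reject, damp harder
    return theta

rng = np.random.default_rng(6)
t = np.linspace(0.0, 5.0, 40)
y = 3.0 * np.exp(-0.8 * t) + rng.normal(0.0, 0.02, t.size)
theta_hat = lm_fit(np.array([2.0, 0.5]), t, y)
print(theta_hat)
```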

  16. Fitting the CDO correlation skew: a tractable structural jump-diffusion model

    DEFF Research Database (Denmark)

    Willemann, Søren

    2007-01-01

    We extend a well-known structural jump-diffusion model for credit risk to handle both correlations through diffusion of asset values and common jumps in asset value. Through a simplifying assumption on the default timing and efficient numerical techniques, we develop a semi-analytic framework allowing for instantaneous calibration to heterogeneous CDS curves and fast computation of CDO tranche spreads. We calibrate the model to CDX and iTraxx data from February 2007 and achieve a satisfactory fit. To price the senior tranches for both indices, we require a risk-neutral probability of a market...

  17. Models selection and fitting

    International Nuclear Information System (INIS)

    Martin Llorente, F.

    1990-01-01

    Models of atmospheric pollutant dispersion are based on mathematical algorithms that describe the transport, diffusion, elimination and chemical reactions of atmospheric contaminants. These models operate on contaminant emission data and produce an estimate of air quality in the area. Such models can be applied to several aspects of atmospheric contamination.

  18. An approximation to the adaptive exponential integrate-and-fire neuron model allows fast and predictive fitting to physiological data

    Directory of Open Access Journals (Sweden)

    Loreen Hertäg

    2012-09-01

    Full Text Available For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ('in-vivo-like') input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a 'high-throughput' model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
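
A minimal stand-in for this idea, fitting a closed-form f-I curve to noisy firing-rate data, is sketched below using the classic leaky integrate-and-fire expression rather than the paper's AdEx-derived one; the parameter names (tau, rheobase g, refractory period t_ref) and all values are assumptions:

```python
import numpy as np

# Closed-form f-I curve of a leaky integrate-and-fire neuron: zero
# below the rheobase g, otherwise 1 / (t_ref + time-to-threshold).
def fi_curve(I, tau, g, t_ref=0.002):
    rate = np.zeros_like(I)
    sup = I > g                                # suprathreshold inputs
    T = tau * np.log(I[sup] / (I[sup] - g))    # time to reach threshold
    rate[sup] = 1.0 / (t_ref + T)
    return rate

rng = np.random.default_rng(1)
I = np.linspace(0.1, 2.0, 30)
true_tau, true_g = 0.02, 0.4
f_obs = fi_curve(I, true_tau, true_g) + rng.normal(0.0, 0.5, I.size)

# Brute-force least squares over the two parameters; a real procedure
# would use a proper optimiser, but a grid keeps the sketch simple.
best = (np.inf, None, None)
for g in np.linspace(0.1, 0.9, 160):
    for tau in np.linspace(0.005, 0.05, 90):
        sse = float(np.sum((f_obs - fi_curve(I, tau, g)) ** 2))
        if sse < best[0]:
            best = (sse, tau, g)
_, tau_hat, g_hat = best
print(tau_hat, g_hat)
```

Fitting against the closed-form expression avoids integrating the model ODEs for every candidate parameter set, which is the speed-up the abstract describes.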

  19. Measuring fit of sequence data to phylogenetic model: gain of power using marginal tests.

    Science.gov (United States)

    Waddell, Peter J; Ota, Rissa; Penny, David

    2009-10-01

    Testing the fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (Unended quest: an intellectual autobiography. Fontana, London, 1976) once argued that evolutionary biology was unscientific as its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (Nature 297:197-200, 1982) to the present. We compare the general log-likelihood ratio statistic (the G or G² statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (P ≈ 0.5), but the marginalized tests do. Tests on pairwise frequency (F) matrices strongly (P < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (P < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa; not necessarily those expected from examining other regions of the genome. By marginalizing the 4^t patterns of the i.i.d. model to observed and expected parsimony counts, that is, from constant sites, to singletons, to parsimony-informative characters of a minimum possible length, the likelihood ratio test regains power, and it too rejects the evolutionary model with P < 0.001. Given such behavior over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published trees may really be far larger than the analytical methods (e.g., bootstrap) report.
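
The G statistic referred to above is straightforward to compute from observed and model-expected counts; the nucleotide counts below are invented for illustration:

```python
import numpy as np

# Log-likelihood ratio (G) statistic comparing observed counts with
# counts expected under a model: G = 2 * sum(O * ln(O / E)).
def g_statistic(observed, expected):
    o = np.asarray(observed, float)
    e = np.asarray(expected, float)
    mask = o > 0                     # a zero count contributes 0
    return 2.0 * np.sum(o[mask] * np.log(o[mask] / e[mask]))

# Toy example: base counts versus a model expecting equal frequencies.
obs = np.array([260, 240, 255, 245])
exp = np.full(4, obs.sum() / 4)      # 250 each
G = g_statistic(obs, exp)
print(G)
```

Here G is about 1.0 on 3 degrees of freedom, so this toy data set would not reject the equal-frequency model; the paper's marginalized tests apply the same statistic to targeted summaries of the alignment instead of the full pattern table.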

  20. UROX 2.0: an interactive tool for fitting atomic models into electron-microscopy reconstructions

    International Nuclear Information System (INIS)

    Siebert, Xavier; Navaza, Jorge

    2009-01-01

    UROX is software designed for the interactive fitting of atomic models into electron-microscopy reconstructions. The main features of the software are presented, along with a few examples. Electron microscopy of a macromolecular structure can lead to three-dimensional reconstructions with resolutions that are typically in the 30–10 Å range and sometimes even beyond 10 Å. Fitting atomic models of the individual components of the macromolecular structure (e.g. those obtained by X-ray crystallography or nuclear magnetic resonance) into an electron-microscopy map allows the interpretation of the latter at near-atomic resolution, providing insight into the interactions between the components. Graphical software is presented that was designed for the interactive fitting and refinement of atomic models into electron-microscopy reconstructions. Several characteristics enable it to be applied over a wide range of cases and resolutions. Firstly, calculations are performed in reciprocal space, which results in fast algorithms. This allows the entire reconstruction (or at least a sizeable portion of it) to be used by taking into account the symmetry of the reconstruction both in the calculations and in the graphical display. Secondly, atomic models can be placed graphically in the map while the correlation between the model-based electron density and the electron-microscopy reconstruction is computed and displayed in real time. The positions and orientations of the models are refined by a least-squares minimization. Thirdly, normal-mode calculations can be used to simulate conformational changes between the atomic model of an individual component and its corresponding density within a macromolecular complex determined by electron microscopy. These features are illustrated using three practical cases with different symmetries and resolutions. The software, together with examples and user instructions, is available free of charge at http://mem.ibs.fr/UROX/

  1. Development and Analysis of Volume Multi-Sphere Method Model Generation using Electric Field Fitting

    Science.gov (United States)

    Ingram, G. J.

    Electrostatic modeling of spacecraft has wide-reaching applications such as detumbling space debris in the Geosynchronous Earth Orbit regime before docking, servicing and tugging space debris to graveyard orbits, and Lorentz augmented orbits. The viability of electrostatic actuation control applications relies on faster-than-realtime characterization of the electrostatic interaction. The Volume Multi-Sphere Method (VMSM) seeks the optimal placement and radii of a small number of equipotential spheres to accurately model the electrostatic force and torque on a conducting space object. Current VMSM models tuned using force and torque comparisons with commercially available finite element software are subject to the modeled probe size and numerical errors of the software. This work first investigates fitting of VMSM models to Surface-MSM (SMSM) generated electrical field data, removing modeling dependence on probe geometry while significantly increasing performance and speed. A proposed electric field matching cost function is compared to a force and torque cost function, the inclusion of a self-capacitance constraint is explored and 4 degree-of-freedom VMSM models generated using electric field matching are investigated. The resulting E-field based VMSM development framework is illustrated on a box-shaped hub with a single solar panel, and convergence properties of select models are qualitatively analyzed. Despite the complex non-symmetric spacecraft geometry, elegantly simple 2-sphere VMSM solutions provide force and torque fits within a few percent.
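
The observation that makes electric-field matching attractive can be sketched simply: once candidate sphere positions are fixed, the sphere charges enter the field model linearly and can be solved in closed form against sampled field data. Everything below (point-charge idealisation, positions, charges, sample shell) is an invented toy, not the VMSM/SMSM tooling:

```python
import numpy as np

k = 8.9875e9  # Coulomb constant [N m^2 / C^2]

def efield(points, charges, positions):
    """E at each sample point from a set of point charges."""
    E = np.zeros((len(points), 3))
    for q, p in zip(charges, positions):
        d = points - p
        r = np.linalg.norm(d, axis=1, keepdims=True)
        E += k * q * d / r**3
    return E

rng = np.random.default_rng(2)
true_pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
true_q = np.array([2e-6, -1e-6])

# "Truth" field sampled on a surrounding shell (stand-in for SMSM data)
pts = rng.normal(size=(60, 3))
pts = 3.0 * pts / np.linalg.norm(pts, axis=1, keepdims=True)
E_data = efield(pts, true_q, true_pos)

# Each column is the unit-charge field of one candidate sphere, so the
# charges follow from a single linear least-squares solve.
A = np.column_stack([efield(pts, [1.0], [p]).ravel() for p in true_pos])
q_fit, *_ = np.linalg.lstsq(A, E_data.ravel(), rcond=None)
print(q_fit)
```

The nonlinear part of the real problem is choosing the sphere positions and radii; the linear charge solve above is the cheap inner step an E-field cost function exploits.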

  2. Invited commentary: Lost in estimation--searching for alternatives to Markov chains to fit complex Bayesian models.

    Science.gov (United States)

    Molitor, John

    2012-03-01

    Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.

  3. Phylogenetic tree reconstruction accuracy and model fit when proportions of variable sites change across the tree.

    Science.gov (United States)

    Shavit Grievink, Liat; Penny, David; Hendy, Michael D; Holland, Barbara R

    2010-05-01

    Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction.

  4. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    Science.gov (United States)

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely is largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision
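
The parametric bootstrap GOF logic evaluated above can be illustrated on a plain Poisson fit (deliberately simpler than an N-mixture model): fit, simulate replicate data sets from the fitted model, and compare a discrepancy statistic. The gamma-distributed rates below stand in for unmodeled heterogeneity; all values are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# "Truth" has unmodeled heterogeneity: Poisson rates vary across units.
rates = rng.gamma(shape=2.0, scale=3.0, size=200)
counts = rng.poisson(rates)

lam_hat = counts.mean()                  # Poisson MLE

def chi_sq(y, lam):
    return np.sum((y - lam) ** 2 / lam)  # Pearson-type discrepancy

# Parametric bootstrap: simulate data from the fitted model and see
# where the observed discrepancy falls among the replicates.
t_obs = chi_sq(counts, lam_hat)
t_rep = []
for _ in range(500):
    y_rep = rng.poisson(lam_hat, size=counts.size)
    t_rep.append(chi_sq(y_rep, y_rep.mean()))
p_value = np.mean(np.array(t_rep) >= t_obs)
print(p_value)
```

With this much overdispersion the bootstrap p-value is essentially zero, i.e. the Poisson fit is rejected; the paper's point is that at low detectability and small sample sizes the analogous test for N-mixture models loses this power.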

  5. Tanning Shade Gradations of Models in Mainstream Fitness and Muscle Enthusiast Magazines: Implications for Skin Cancer Prevention in Men.

    Science.gov (United States)

    Basch, Corey H; Hillyer, Grace Clarke; Ethan, Danna; Berdnik, Alyssa; Basch, Charles E

    2015-07-01

    Tanned skin has been associated with perceptions of fitness and social desirability. Portrayal of models in magazines may reflect and perpetuate these perceptions. Limited research has investigated tanning shade gradations of models in men's versus women's fitness and muscle enthusiast magazines. Such findings are relevant in light of increased incidence and prevalence of melanoma in the United States. This study evaluated and compared tanning shade gradations of adult Caucasian male and female model images in mainstream fitness and muscle enthusiast magazines. Sixty-nine U.S. magazine issues (spring and summer, 2013) were utilized. Two independent reviewers rated tanning shade gradations of adult Caucasian male and female model images on magazines' covers, advertisements, and feature articles. Shade gradations were assessed using stock photographs of Caucasian models with varying levels of tanned skin on an 8-shade scale. A total of 4,683 images were evaluated. Darkest tanning shades were found among males in muscle enthusiast magazines and lightest among females in women's mainstream fitness magazines. By gender, male model images were 54% more likely to portray a darker tanning shade. In this study, images in men's (vs. women's) fitness and muscle enthusiast magazines portrayed Caucasian models with darker skin shades. Despite these magazines' fitness-related messages, pro-tanning images may promote attitudes and behaviors associated with higher skin cancer risk. To date, this is the first study to explore tanning shades in men's magazines of these genres. Further research is necessary to identify effects of exposure to these images among male readers. © The Author(s) 2014.

  6. Fitted Hanbury-Brown Twiss radii versus space-time variances in flow-dominated models

    Science.gov (United States)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-04-01

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.
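
One simple way to see how a Gaussian can be fitted to a correlation function analytically (a 1D caricature of the algorithm mentioned above, with an invented two-Gaussian source): for C(q) = 1 + exp(-R²q²), ln(C - 1) is linear in q², so the radius follows from a closed-form least-squares slope.

```python
import numpy as np

# Closed-form Gaussian fit: regress ln(C - 1) on q^2 with no intercept
# and read the radius off the slope.
def gaussian_radius(q, C):
    x = q**2
    y = np.log(C - 1.0)
    slope = np.sum(x * y) / np.sum(x * x)   # no-intercept least squares
    return np.sqrt(-slope)

# Synthetic non-Gaussian source: a mixture of two Gaussians in q, so
# the fitted radius is an effective value between the two components.
q = np.linspace(0.01, 0.2, 50)
C = 1.0 + 0.6 * np.exp(-(5.0 * q) ** 2) + 0.4 * np.exp(-(2.0 * q) ** 2)

R_fit = gaussian_radius(q, C)
print(R_fit)
```

The fitted radius lands between the two component radii (2 and 5 here), illustrating why a Gaussian fit to a non-Gaussian correlator need not match any single space-time width of the source.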

  7. Fitted Hanbury-Brown-Twiss radii versus space-time variances in flow-dominated models

    International Nuclear Information System (INIS)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-01-01

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown-Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data

  8. Global fits of GUT-scale SUSY models with GAMBIT

    Science.gov (United States)

    Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin

    2017-12-01

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.

  9. Global fits of GUT-scale SUSY models with GAMBIT

    Energy Technology Data Exchange (ETDEWEB)

    Athron, Peter [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Balazs, Csaba [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Bringmann, Torsten; Dal, Lars A.; Krislock, Abram; Raklev, Are [University of Oslo, Department of Physics, Oslo (Norway); Buckley, Andy [University of Glasgow, SUPA, School of Physics and Astronomy, Glasgow (United Kingdom); Chrzaszcz, Marcin [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); H. Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Krakow (Poland); Conrad, Jan; Edsjoe, Joakim; Farmer, Ben [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Cornell, Jonathan M. [McGill University, Department of Physics, Montreal, QC (Canada); Jackson, Paul; White, Martin [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); University of Adelaide, Department of Physics, Adelaide, SA (Australia); Kvellestad, Anders; Savage, Christopher [NORDITA, Stockholm (Sweden); Mahmoudi, Farvah [Univ Lyon, Univ Lyon 1, CNRS, ENS de Lyon, Centre de Recherche Astrophysique de Lyon UMR5574, Saint-Genis-Laval (France); Theoretical Physics Department, CERN, Geneva (Switzerland); Martinez, Gregory D. 
[University of California, Physics and Astronomy Department, Los Angeles, CA (United States); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Rogan, Christopher [Harvard University, Department of Physics, Cambridge, MA (United States); Ruiz de Austri, Roberto [IFIC-UV/CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Saavedra, Aldo [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); The University of Sydney, Faculty of Engineering and Information Technologies, Centre for Translational Data Science, School of Physics, Camperdown, NSW (Australia); Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Serra, Nicola [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Weniger, Christoph [University of Amsterdam, GRAPPA, Institute of Physics, Amsterdam (Netherlands); Collaboration: The GAMBIT Collaboration

    2017-12-15

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos. (orig.)

  10. The regression-calibration method for fitting generalized linear models with additive measurement error

    OpenAIRE

    James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll

    2003-01-01

    This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
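
The regression-calibration recipe can be sketched for the simplest (linear) case: replace the error-prone covariate W by an estimate of E[X | W] and fit as usual. The data, and the assumption that the measurement-error variance is known (as if estimated from replicates), are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(0.0, 1.0, n)              # true covariate (unobserved)
w = x + rng.normal(0.0, 1.0, n)          # proxy with additive error, var 1
y = 2.0 * x + rng.normal(0.0, 0.5, n)    # outcome model: beta = 2

# Naive fit on W is attenuated toward zero by the reliability ratio.
beta_naive = np.sum(w * y) / np.sum(w * w)

# Calibration: E[X | W] = mu_w + lambda * (W - mu_w), with reliability
# ratio lambda = var(X) / var(W), here estimated assuming the
# measurement-error variance (1.0) is known from replicates.
lam = (w.var() - 1.0) / w.var()
x_cal = w.mean() + lam * (w - w.mean())
beta_rc = np.sum(x_cal * y) / np.sum(x_cal * x_cal)
print(beta_naive, beta_rc)
```

The naive slope is attenuated by the reliability ratio (about one half here), while the calibrated fit recovers the true coefficient.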

  11. Fit-for-purpose: species distribution model performance depends on evaluation criteria - Dutch Hoverflies as a case study.

    Science.gov (United States)

    Aguirre-Gutiérrez, Jesús; Carvalheiro, Luísa G; Polce, Chiara; van Loon, E Emiel; Raes, Niels; Reemer, Menno; Biesmeijer, Jacobus C

    2013-01-01

    Understanding species distributions and the factors limiting them is an important topic in ecology and conservation, including in nature reserve selection and predicting climate change impacts. While Species Distribution Models (SDM) are the main tool used for these purposes, choosing the best SDM algorithm is not straightforward as these are plentiful and can be applied in many different ways. SDM are used mainly to gain insight in 1) overall species distributions, 2) their past-present-future probability of occurrence and/or 3) to understand their ecological niche limits (also referred to as ecological niche modelling). The fact that these three aims may require different models and outputs is, however, rarely considered and has not been evaluated consistently. Here we use data from a systematically sampled set of species occurrences to specifically test the performance of Species Distribution Models across several commonly used algorithms. Species range in distribution patterns from rare to common and from local to widespread. We compare overall model fit (representing species distribution), the accuracy of the predictions at multiple spatial scales, and the consistency in selection of environmental correlations all across multiple modelling runs. As expected, the choice of modelling algorithm determines model outcome. However, model quality depends not only on the algorithm, but also on the measure of model fit used and the scale at which it is used. Although model fit was higher for the consensus approach and Maxent, Maxent and GAM models were more consistent in estimating local occurrence, while RF and GBM showed higher consistency in environmental variables selection. Model outcomes diverged more for narrowly distributed species than for widespread species. We suggest that matching study aims with modelling approach is essential in Species Distribution Models, and provide suggestions how to do this for different modelling aims and species' data

  12. Human X-chromosome inactivation pattern distributions fit a model of genetically influenced choice better than models of completely random choice

    Science.gov (United States)

    Renault, Nisa K E; Pritchett, Sonja M; Howell, Robin E; Greer, Wenda L; Sapienza, Carmen; Ørstavik, Karen Helene; Hamilton, David C

    2013-01-01

    In eutherian mammals, one X-chromosome in every XX somatic cell is transcriptionally silenced through the process of X-chromosome inactivation (XCI). Females are thus functional mosaics, where some cells express genes from the paternal X, and the others from the maternal X. The relative abundance of the two cell populations (X-inactivation pattern, XIP) can have significant medical implications for some females. In mice, the ‘choice' of which X to inactivate, maternal or paternal, in each cell of the early embryo is genetically influenced. In humans, the timing of XCI choice and whether choice occurs completely randomly or under a genetic influence is debated. Here, we explore these questions by analysing the distribution of XIPs in large populations of normal females. Models were generated to predict XIP distributions resulting from completely random or genetically influenced choice. Each model describes the discrete primary distribution at the onset of XCI, and the continuous secondary distribution accounting for changes to the XIP as a result of development and ageing. Statistical methods are used to compare models with empirical data from Danish and Utah populations. A rigorous data treatment strategy maximises information content and allows for unbiased use of unphased XIP data. The Anderson–Darling goodness-of-fit statistics and likelihood ratio tests indicate that a model of genetically influenced XCI choice better fits the empirical data than models of completely random choice. PMID:23652377
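
The two hypotheses can be contrasted with a quick simulation of the primary XIP distribution: completely random choice gives a binomial over the embryonic precursor pool, while a genetically variable per-cell probability widens the distribution. A minimal sketch (the pool size of 16 cells and the Beta(5, 5) distribution are illustrative assumptions, not the paper's fitted models):

```python
import numpy as np

rng = np.random.default_rng(0)

N_CELLS = 16  # embryonic precursor pool size: an illustrative assumption

def xip_random_choice(n_females):
    """Completely random choice: each precursor cell independently
    inactivates the paternal X with probability 0.5."""
    return rng.binomial(N_CELLS, 0.5, size=n_females) / N_CELLS

def xip_genetic_choice(n_females, a=5.0, b=5.0):
    """Genetically influenced choice: the per-cell probability itself
    varies between females (a Beta(5, 5) prior, purely illustrative)."""
    prob = rng.beta(a, b, size=n_females)
    return rng.binomial(N_CELLS, prob) / N_CELLS

random_xips = xip_random_choice(100_000)
genetic_xips = xip_genetic_choice(100_000)

# Genetic influence widens the primary XIP distribution relative to the
# pure binomial produced by completely random choice.
print(random_xips.std(), genetic_xips.std())
```

Comparing empirical XIP spread against these two primary distributions is, in essence, what the goodness-of-fit tests in the study formalize.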

  13. A fitting LEGACY – modelling Kepler's best stars

    Directory of Open Access Journals (Sweden)

    Aarslev Magnus J.

    2017-01-01

    Full Text Available The LEGACY sample represents the best solar-like stars observed in the Kepler mission[5, 8]. The 66 stars in the sample are all on the main sequence or only slightly more evolved. They each have more than one year's observation data in short cadence, allowing for precise extraction of individual frequencies. Here we present model fits using a modified ASTFIT procedure employing two different near-surface-effect corrections, one by Christensen-Dalsgaard[4] and a newer correction proposed by Ball & Gizon[1]. We then compare the results obtained using the different corrections. We find that using the latter correction yields lower masses and significantly lower χ2 values for a large part of the sample.

  14. Covariances for neutron cross sections calculated using a regional model based on local-model fits to experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D.L.; Guenther, P.T.

    1983-11-01

    We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references.
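
The a posteriori recipe can be sketched in a few lines: estimate a parameter covariance matrix from the scatter of local fits about the regional representation, then propagate it through a calculated quantity with a Jacobian. All numbers below are invented for illustration, and the toy function stands in for a model-calculated cross section:

```python
import numpy as np

# Toy local-model parameters fitted element by element across a mass
# region; the values are illustrative, not data from the paper.
local_params = np.array([
    [46.1, 1.24],
    [45.3, 1.22],
    [47.0, 1.27],
    [46.6, 1.26],
    [45.8, 1.23],
])

regional = local_params.mean(axis=0)             # regional representation
resid = local_params - regional
cov = resid.T @ resid / (len(local_params) - 1)  # a posteriori covariance

def f(p):
    """Toy 'calculated cross section' depending on both parameters."""
    return p[0] * np.exp(-p[1])

# Standard error propagation: var(f) = J @ cov @ J, numerical Jacobian.
eps = 1e-6
J = np.array([(f(regional + eps * np.eye(2)[i]) - f(regional)) / eps
              for i in range(2)])
var_corr = J @ cov @ J
var_uncorr = J @ np.diag(np.diag(cov)) @ J  # naive quadrature sum

# As in the abstract, parameter correlations can substantially shrink
# the propagated uncertainty relative to uncorrelated summation.
print(var_corr, var_uncorr)
```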

  16. A cautionary note on the use of information fit indexes in covariance structure modeling with means

    NARCIS (Netherlands)

    Wicherts, J.M.; Dolan, C.V.

    2004-01-01

    Information fit indexes such as Akaike Information Criterion, Consistent Akaike Information Criterion, Bayesian Information Criterion, and the expected cross validation index can be valuable in assessing the relative fit of structural equation models that differ regarding restrictiveness. In cases

  17. Fitting the two-compartment model in DCE-MRI by linear inversion.

    Science.gov (United States)

    Flouri, Dimitra; Lesnic, Daniel; Sourbron, Steven P

    2016-09-01

    Model fitting of dynamic contrast-enhanced MRI (DCE-MRI) data with nonlinear least squares (NLLS) methods is slow and may be biased by the choice of initial values. The aim of this study was to develop and evaluate a linear least squares (LLS) method to fit the two-compartment exchange and two-compartment filtration models. A second-order linear differential equation for the measured concentrations was derived where model parameters act as coefficients. Simulations of normal and pathological data were performed to determine calculation time, accuracy and precision under different noise levels and temporal resolutions. Performance of the LLS was evaluated by comparison against the NLLS. The LLS method is about 200 times faster, which reduces the calculation times for a 256 × 256 MR slice from 9 min to 3 s. For ideal data with low noise and high temporal resolution the LLS and NLLS were equally accurate and precise. The LLS was more accurate and precise than the NLLS at low temporal resolution, but less accurate at high noise levels. The data show that the LLS leads to a significant reduction in calculation times, and more reliable results at low noise levels. At higher noise levels the LLS becomes exceedingly inaccurate compared to the NLLS, but this may be improved using a suitable weighting strategy. Magn Reson Med 76:998-1006, 2016. © 2015 Wiley Periodicals, Inc.
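
The linearization idea is easiest to see in a one-compartment sketch (a simplification of the paper's two-compartment models, with made-up parameter values): integrating the rate equation dC/dt = Ktrans·ca(t) − kep·C(t) turns the parameters into coefficients of a model that is linear in them, so one least-squares solve replaces an iterative NLLS fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-compartment sketch: dC/dt = Ktrans*ca(t) - kep*C(t).
t = np.linspace(0.0, 5.0, 200)
dt = t[1] - t[0]
ca = t * np.exp(-t)                  # toy arterial input function
Ktrans_true, kep_true = 0.25, 0.9

C = np.zeros_like(t)                 # forward-Euler "measurement"
for i in range(1, t.size):
    C[i] = C[i-1] + dt * (Ktrans_true * ca[i-1] - kep_true * C[i-1])
C += rng.normal(0.0, 1e-4, size=C.size)   # small measurement noise

# Integrating the ODE gives C(t) = Ktrans*int(ca) - kep*int(C): linear
# in the parameters, so a single least-squares call fits both at once.
ca_int = np.concatenate(([0.0], np.cumsum(ca)[:-1])) * dt
C_int = np.concatenate(([0.0], np.cumsum(C)[:-1])) * dt
A = np.column_stack([ca_int, -C_int])
Ktrans_est, kep_est = np.linalg.lstsq(A, C, rcond=None)[0]
print(Ktrans_est, kep_est)
```

No starting values are needed, which is exactly why the LLS approach avoids the initialization bias mentioned above.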

  18. Neural network hydrological modelling: on questions of over-fitting, over-training and over-parameterisation

    Science.gov (United States)

    Abrahart, R. J.; Dawson, C. W.; Heppenstall, A. J.; See, L. M.

    2009-04-01

    The most critical issue in developing a neural network model is generalisation: how well will the preferred solution perform when it is applied to unseen datasets? The reported experiments used far-reaching sequences of model architectures and training periods to investigate the potential damage that could result from the impact of several interrelated items: (i) over-fitting - a machine learning concept related to exceeding some optimal architectural size; (ii) over-training - a machine learning concept related to the amount of adjustment that is applied to a specific model - based on the understanding that too much fine-tuning might result in a model that had accommodated random aspects of its training dataset - items that had no causal relationship to the target function; and (iii) over-parameterisation - a statistical modelling concept that is used to restrict the number of parameters in a model so as to match the information content of its calibration dataset. The last item in this triplet stems from an understanding that excessive computational complexities might permit an absurd and false solution to be fitted to the available material. Numerous feedforward multilayered perceptrons were trialled and tested. Two different methods of model construction were also compared and contrasted: (i) traditional Backpropagation of Error; and (ii) state-of-the-art Symbiotic Adaptive Neuro-Evolution. Modelling solutions were developed using the reported experimental set ups of Gaume & Gosset (2003). The models were applied to a near-linear hydrological modelling scenario in which past upstream and past downstream discharge records were used to forecast current discharge at the downstream gauging station [CS1: River Marne]; and a non-linear hydrological modelling scenario in which past river discharge measurements and past local meteorological records (precipitation and evaporation) were used to forecast current discharge at the river gauging station [CS2: Le Sauzay].

  19. A bipartite fitness model for online music streaming services

    Science.gov (United States)

    Pongnumkul, Suchit; Motohashi, Kazuyuki

    2018-01-01

    This paper proposes an evolution model and an analysis of the behavior of music consumers on online music streaming services. While previous studies have observed power-law degree distributions of usage in online music streaming services, the underlying behavior of users has not been well understood. Users and songs can be described using a bipartite network where an edge exists between a user node and a song node when the user has listened to that song. The growth mechanism of bipartite networks has been used to understand the evolution of online bipartite networks (Zhang et al., 2013). Existing bipartite models are based on a preferential attachment mechanism (Barabási and Albert, 1999) in which the probability that a user listens to a song is proportional to its current popularity. This mechanism does not allow for two types of real world phenomena. First, a newly released song with high quality sometimes quickly gains popularity. Second, the popularity of songs normally decreases as time goes by. Therefore, this paper proposes a new model that is more suitable for online music services by adding fitness and aging functions to the song nodes of the bipartite network proposed by Zhang et al. (2013). Theoretical analyses are performed for the degree distribution of songs. Empirical data from an online streaming service, Last.fm, are used to confirm the degree distribution of the object nodes. Simulation results show improvements from a previous model. Finally, to illustrate the application of the proposed model, a simplified royalty cost model for online music services is used to demonstrate how the changes in the proposed parameters can affect the costs for online music streaming providers. Managerial implications are also discussed.
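
The proposed growth mechanism, in which the attachment probability is proportional to popularity times fitness times an aging decay, can be sketched as a simple simulation (the release rate, fitness distribution and aging time scale below are illustrative assumptions, not the paper's calibrated functions):

```python
import numpy as np

rng = np.random.default_rng(4)

n_steps = 20000
deg = []      # play counts per song (degree of the song node)
fitness = []  # intrinsic song quality
birth = []    # release time, used by the aging factor

for t in range(n_steps):
    if not deg or rng.random() < 0.01:   # occasionally release a song
        deg.append(1)
        fitness.append(rng.pareto(2.0) + 1.0)  # heavy-tailed quality
        birth.append(t)
    else:
        # Attachment probability ~ degree * fitness * aging decay.
        age = t - np.array(birth)
        w = np.array(deg) * np.array(fitness) * np.exp(-age / 5000.0)
        i = rng.choice(len(deg), p=w / w.sum())
        deg[i] += 1

deg = np.array(deg)
print(deg.max(), np.median(deg))   # heavy-tailed play counts
```

Fitness lets a fresh high-quality song overtake older popular ones, and the exponential aging term lets popularity fade, the two phenomena plain preferential attachment cannot reproduce.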

  20. FITTING OF PARAMETRIC BUILDING MODELS TO OBLIQUE AERIAL IMAGES

    Directory of Open Access Journals (Sweden)

    U. S. Panday

    2012-09-01

    Full Text Available In the literature and in photogrammetric workstations, many approaches and systems to automatically reconstruct buildings from remote sensing data are described and available. Those building models are being used for instance in city modeling or in a cadastre context. If a roof overhang is present, the building walls cannot be estimated correctly from nadir-view aerial images or airborne laser scanning (ALS) data. This leads to inconsistent building outlines, which has a negative influence on visual impression, but more seriously also represents a wrong legal boundary in the cadastre. Oblique aerial images, as opposed to nadir-view images, reveal greater detail, making it possible to see different views of an object taken from different directions. Building walls are visible from oblique images directly, and those images are used for automated roof overhang estimation in this research. A fitting algorithm is employed to find roof parameters of simple buildings. It uses a least squares algorithm to fit projected wire frames to their corresponding edge lines extracted from the images. Self-occlusion is detected based on the intersection result of the viewing ray and the planes formed by the building, whereas occlusion from other objects is detected using an ALS point cloud. Overhang and ground height are obtained by sweeping vertical and horizontal planes respectively. Experimental results are verified with high resolution ortho-images, field survey, and ALS data. Planimetric accuracy of 1cm mean and 5cm standard deviation was obtained, while building orientations were accurate to a mean of 0.23° and standard deviation of 0.96° with ortho-images. Overhang parameters agreed to within approximately 10cm with the field survey. The ground and roof heights were accurate to means of –9cm and 8cm with standard deviations of 16cm and 8cm with ALS respectively. The developed approach reconstructs 3D building models well in cases of sufficient texture. More images should be acquired for

  1. Testing the goodness of fit of selected infiltration models on soils with different land use histories

    International Nuclear Information System (INIS)

    Mbagwu, J.S.C.

    1993-10-01

    Six infiltration models, some obtained by reformulating the fitting parameters of the classical Kostiakov (1932) and Philip (1957) equations, were investigated for their ability to describe water infiltration into highly permeable sandy soils from the Nsukka plains of SE Nigeria. The models were Kostiakov, Modified Kostiakov (A), Modified Kostiakov (B), Philip, Modified Philip (A) and Modified Philip (B). Infiltration data were obtained from double ring infiltrometers on field plots established on a Kandic Paleustult (Nkpologu series) to investigate the effects of land use on soil properties and maize yield. The treatments were: (i) tilled-mulched (TM), (ii) tilled-unmulched (TU), (iii) untilled-mulched (UM), (iv) untilled-unmulched (UU) and (v) continuous pasture (CP). Cumulative infiltration was highest on the TM and lowest on the CP plots. All estimated model parameters obtained by the best fit of measured data differed significantly among the treatments. Based on the magnitude of R² values, the Kostiakov, Modified Kostiakov (A), Philip and Modified Philip (A) models provided the best predictions of cumulative infiltration as a function of time. Comparing experimental with model-predicted cumulative infiltration showed, however, that on all treatments the values predicted by the classical Kostiakov, Philip and Modified Philip (A) models deviated most from the experimental data. The other models produced values that agreed very well with the measured data. Considering the ease of determining the fitting parameters, it is proposed that on soils with high infiltration rates either the Modified Kostiakov model (I = Kt^a + Ict) or the Modified Philip model (I = St^(1/2) + Ict), (where I is cumulative infiltration, K the time coefficient, t the time elapsed, 'a' the time exponent, Ic the equilibrium infiltration rate and S the soil water sorptivity), be used for routine characterization of the infiltration process. (author). 33 refs, 3 figs, 6 tabs
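
For the classical Kostiakov model the fitting parameters can be obtained by simple log-linear regression, since I = Kt^a implies log I = log K + a log t. A sketch on synthetic data (the K and a values are illustrative, not the paper's fitted parameters; the modified models with the +Ict term would instead need a nonlinear or multivariate fit):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic cumulative infiltration following the classical Kostiakov
# model I = K * t**a, with small multiplicative measurement noise.
K_true, a_true = 2.5, 0.6
t = np.linspace(0.1, 4.0, 40)   # elapsed time, hours
I = K_true * t**a_true * rng.lognormal(0.0, 0.02, size=t.size)

# log I = log K + a log t, so ordinary linear regression recovers both.
a_est, logK_est = np.polyfit(np.log(t), np.log(I), 1)
K_est = np.exp(logK_est)
print(K_est, a_est)
```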

  2. Modelling job support, job fit, job role and job satisfaction for school of nursing sessional academic staff.

    Science.gov (United States)

    Cowin, Leanne S; Moroney, Robyn

    2018-01-01

    Sessional academic staff are an important part of nursing education. Increases in casualisation of the academic workforce continue, and satisfaction with the job role is an important benchmark for quality curriculum delivery and influences recruitment and retention. This study examined relations between four job constructs - organisation fit, organisation support, staff role and job satisfaction - for Sessional Academic Staff at a School of Nursing by creating two path analysis models. A cross-sectional correlational survey design was utilised. Participants who were currently working as sessional or casual teaching staff members were invited to complete an online anonymous survey. The data represent a convenience sample of Sessional Academic Staff in 2016 at a large school of Nursing and Midwifery in Australia. After psychometric evaluation of each of the job construct measures in this study, we utilised Structural Equation Modelling to better understand the relations of the variables. The measures used in this study were found to be both valid and reliable for this sample. Job support and job fit are positively linked to job satisfaction. Although the hypothesised model did not meet model fit standards, a new 'nested' model made substantive sense. This small study explored a new scale for measuring academic job role, and demonstrated how it relates to the constructs of job fit and job support. All four job constructs are important in providing job satisfaction - an outcome that in turn supports staffing stability, retention, and motivation.

  3. THE HERSCHEL ORION PROTOSTAR SURVEY: SPECTRAL ENERGY DISTRIBUTIONS AND FITS USING A GRID OF PROTOSTELLAR MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Furlan, E. [Infrared Processing and Analysis Center, California Institute of Technology, 770 S. Wilson Ave., Pasadena, CA 91125 (United States); Fischer, W. J. [Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771 (United States); Ali, B. [Space Science Institute, 4750 Walnut Street, Boulder, CO 80301 (United States); Stutz, A. M. [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Stanke, T. [ESO, Karl-Schwarzschild-Strasse 2, D-85748 Garching bei München (Germany); Tobin, J. J. [National Radio Astronomy Observatory, Charlottesville, VA 22903 (United States); Megeath, S. T.; Booker, J. [Ritter Astrophysical Research Center, Department of Physics and Astronomy, University of Toledo, 2801 W. Bancroft Street, Toledo, OH 43606 (United States); Osorio, M. [Instituto de Astrofísica de Andalucía, CSIC, Camino Bajo de Huétor 50, E-18008 Granada (Spain); Hartmann, L.; Calvet, N. [Department of Astronomy, University of Michigan, 500 Church Street, Ann Arbor, MI 48109 (United States); Poteet, C. A. [New York Center for Astrobiology, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY 12180 (United States); Manoj, P. [Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India); Watson, D. M. [Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627 (United States); Allen, L., E-mail: furlan@ipac.caltech.edu [National Optical Astronomy Observatory, 950 N. Cherry Avenue, Tucson, AZ 85719 (United States)

    2016-05-01

    We present key results from the Herschel Orion Protostar Survey: spectral energy distributions (SEDs) and model fits of 330 young stellar objects, predominantly protostars, in the Orion molecular clouds. This is the largest sample of protostars studied in a single, nearby star formation complex. With near-infrared photometry from 2MASS, mid- and far-infrared data from Spitzer and Herschel , and submillimeter photometry from APEX, our SEDs cover 1.2–870 μ m and sample the peak of the protostellar envelope emission at ∼100 μ m. Using mid-IR spectral indices and bolometric temperatures, we classify our sample into 92 Class 0 protostars, 125 Class I protostars, 102 flat-spectrum sources, and 11 Class II pre-main-sequence stars. We implement a simple protostellar model (including a disk in an infalling envelope with outflow cavities) to generate a grid of 30,400 model SEDs and use it to determine the best-fit model parameters for each protostar. We argue that far-IR data are essential for accurate constraints on protostellar envelope properties. We find that most protostars, and in particular the flat-spectrum sources, are well fit. The median envelope density and median inclination angle decrease from Class 0 to Class I to flat-spectrum protostars, despite the broad range in best-fit parameters in each of the three categories. We also discuss degeneracies in our model parameters. Our results confirm that the different protostellar classes generally correspond to an evolutionary sequence with a decreasing envelope infall rate, but the inclination angle also plays a role in the appearance, and thus interpretation, of the SEDs.
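
Grid-based SED fitting of this kind reduces to precomputing model fluxes over the parameter grid and selecting the χ² minimum for each source. A toy two-parameter sketch (the model form, grid ranges and noise level are assumptions for illustration, not the survey's 30,400-model grid):

```python
import numpy as np

rng = np.random.default_rng(6)

wav = np.logspace(0.1, 2.9, 30)   # wavelengths, arbitrary units

def model_sed(norm, slope):
    """Toy two-parameter 'SED': power law with an exponential cutoff."""
    return norm * wav**slope * np.exp(-wav / 300.0)

# Precompute the model grid once, then reuse it for every source.
norms = np.linspace(0.5, 5.0, 40)
slopes = np.linspace(-2.0, 1.0, 40)
grid = np.array([[model_sed(n, s) for s in slopes] for n in norms])

# One synthetic "observed" source with 10% flux uncertainties.
obs = model_sed(2.0, -0.5) * rng.lognormal(0.0, 0.1, size=wav.size)
sigma = 0.1 * obs

# Chi-square over the whole grid via broadcasting; pick the minimum.
chi2 = (((grid - obs) / sigma) ** 2).sum(axis=-1)
i, j = np.unravel_index(chi2.argmin(), chi2.shape)
print(norms[i], slopes[j])
```

Inspecting the shape of the χ² surface around the minimum is also how parameter degeneracies, like those the authors discuss, become visible.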

  4. GRace: a MATLAB-based application for fitting the discrimination-association model.

    Science.gov (United States)

    Stefanutti, Luca; Vianello, Michelangelo; Anselmi, Pasquale; Robusto, Egidio

    2014-10-28

    The Implicit Association Test (IAT) is a computerized two-choice discrimination task in which stimuli have to be categorized as belonging to target categories or attribute categories by pressing, as quickly and accurately as possible, one of two response keys. The discrimination association model has been recently proposed for the analysis of reaction time and accuracy of an individual respondent to the IAT. The model disentangles the influences of three qualitatively different components on the responses to the IAT: stimuli discrimination, automatic association, and termination criterion. The article presents General Race (GRace), a MATLAB-based application for fitting the discrimination association model to IAT data. GRace has been developed for Windows as a standalone application. It is user-friendly and does not require any programming experience. The use of GRace is illustrated on the data of a Coca Cola-Pepsi Cola IAT, and the results of the analysis are interpreted and discussed.

  5. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    Science.gov (United States)

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG) so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J²) statistics can be applied directly. In a simulation study, TG, HL, and J² were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J² were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J². © 2015 John Wiley & Sons Ltd/London School of Economics.
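
The Hosmer-Lemeshow statistic mentioned above is straightforward to compute with a common grouping method: bin observations by fitted probability and compare observed with expected counts. A sketch on simulated logistic data (the group count and sample size are arbitrary choices; here the true probabilities are scored, so the statistic should look like a well-fitting model's):

```python
import numpy as np

rng = np.random.default_rng(3)

def hosmer_lemeshow(y, p_hat, n_groups=10):
    """Summary GOF statistic: group observations by deciles of fitted
    probability, then sum (observed - expected)^2 / variance per group."""
    order = np.argsort(p_hat)
    hl = 0.0
    for g in np.array_split(order, n_groups):
        obs = y[g].sum()
        exp = p_hat[g].sum()
        pbar = exp / len(g)
        hl += (obs - exp) ** 2 / (len(g) * pbar * (1 - pbar))
    return hl  # compared against a chi-square reference distribution

# Simulate a logistic model and score its own (true) probabilities.
x = rng.normal(size=2000)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x)))
y = rng.binomial(1, p)
hl = hosmer_lemeshow(y, p)
print(hl)
```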

  6. Fitting outbreak models to data from many small norovirus outbreaks

    Directory of Open Access Journals (Sweden)

    Eamon B. O’Dea

    2014-03-01

    Full Text Available Infectious disease often occurs in small, independent outbreaks in populations with varying characteristics. Each outbreak by itself may provide too little information for accurate estimation of epidemic model parameters. Here we show that using standard stochastic epidemic models for each outbreak and allowing parameters to vary between outbreaks according to a linear predictor leads to a generalized linear model that accurately estimates parameters from many small and diverse outbreaks. By estimating initial growth rates in addition to transmission rates, we are able to characterize variation in numbers of initially susceptible individuals or contact patterns between outbreaks. With simulation, we find that the estimates are fairly robust to the data being collected at discrete intervals and imputation of about half of all infectious periods. We apply the method by fitting data from 75 norovirus outbreaks in health-care settings. Our baseline regression estimates are 0.0037 transmissions per infective-susceptible day, an initial growth rate of 0.27 transmissions per infective day, and a symptomatic period of 3.35 days. Outbreaks in long-term-care facilities had significantly higher transmission and initial growth rates than outbreaks in hospitals.

  7. Minimal see-saw model predicting best fit lepton mixing angles

    International Nuclear Information System (INIS)

    King, Stephen F.

    2013-01-01

    We discuss a minimal predictive see-saw model in which the right-handed neutrino mainly responsible for the atmospheric neutrino mass has couplings to (ν_e, ν_μ, ν_τ) proportional to (0, 1, 1) and the right-handed neutrino mainly responsible for the solar neutrino mass has couplings to (ν_e, ν_μ, ν_τ) proportional to (1, 4, 2), with a relative phase η = −2π/5. We show how these patterns of couplings could arise from an A_4 family symmetry model of leptons, together with Z_3 and Z_5 symmetries which fix η = −2π/5 up to a discrete phase choice. The PMNS matrix is then completely determined by one remaining parameter, which is used to fix the neutrino mass ratio m_2/m_3. The model predicts the lepton mixing angles θ_12 ≈ 34°, θ_23 ≈ 41°, θ_13 ≈ 9.5°, which exactly coincide with the current best-fit values for a normal neutrino mass hierarchy, together with the distinctive prediction for the CP-violating oscillation phase δ ≈ 106°

  8. Direct fit of a theoretical model of phase transition in oscillatory finger motions.

    NARCIS (Netherlands)

    Newell, K.M.; Molenaar, P.C.M.

    2003-01-01

    This paper presents a general method to fit the Schoner-Haken-Kelso (SHK) model of human movement phase transitions directly to time series data. A robust variant of the extended Kalman filter technique is applied to the data of a single subject. The options of covariance resetting and iteration

  9. The FIT Model - Fuel-cycle Integration and Tradeoffs

    International Nuclear Information System (INIS)

    Piet, Steven J.; Soelberg, Nick R.; Bays, Samuel E.; Pereira, Candido; Pincock, Layne F.; Shaber, Eric L.; Teague, Melissa C.; Teske, Gregory M.; Vedros, Kurt G.

    2010-01-01

    All mass streams from fuel separation and fabrication are products that must meet some set of product criteria - fuel feedstock impurity limits, waste acceptance criteria (WAC), material storage (if any), or recycle material purity requirements such as zirconium for cladding or lanthanides for industrial use. These must be considered in a systematic and comprehensive way. The FIT model and the 'system losses study' team that developed it (Shropshire 2009, Piet 2010) are an initial step by the FCR&D program toward a global analysis that accounts for the requirements and capabilities of each component, as well as major material flows within an integrated fuel cycle. This will help the program identify near-term R&D needs and set longer-term goals. The question originally posed to the 'system losses study' was the cost of separation, fuel fabrication, waste management, etc. versus the separation efficiency. In other words, are the costs associated with marginal reductions in separations losses (or improvements in product recovery) justified by the gains in the performance of other systems? We have learned that that is the wrong question. The right question is: how does one adjust the compositions and quantities of all mass streams, given uncertain product criteria, to balance competing objectives including cost? FIT is a method to analyze different fuel cycles using common bases to determine how chemical performance changes in one part of a fuel cycle (say used fuel cooling times or separation efficiencies) affect other parts of the fuel cycle. FIT estimates impurities in fuel and waste via a rough estimate of physics and mass balance for a set of technologies. If feasibility is an issue for a set, as it is for 'minimum fuel treatment' approaches such as melt refining and AIROX, it can help to estimate how performance would have to change to achieve feasibility.

  10. Fitness voter model: Damped oscillations and anomalous consensus.

    Science.gov (United States)

    Woolcock, Anthony; Connaughton, Colm; Merali, Yasmin; Vazquez, Federico

    2017-09-01

    We study the dynamics of opinion formation in a heterogeneous voter model on a complete graph, in which each agent is endowed with an integer fitness parameter k ≥ 0, in addition to its + or - opinion state. The evolution of the distribution of k-values and the opinion dynamics are coupled together, so as to allow the system to dynamically develop heterogeneity and memory in a simple way. When two agents with different opinions interact, their k-values are compared, and with probability p the agent with the lower value adopts the opinion of the one with the higher value, while with probability 1-p the opposite happens. The agent that keeps its opinion (winning agent) increments its k-value by one. We study the dynamics of the system in the entire 0 ≤ p ≤ 1 range and compare with the case p = 1/2, in which opinions are decoupled from the k-values and the dynamics is equivalent to that of the standard voter model. When 0 ≤ p < 1/2, the system approaches exponentially fast to the consensus state of the initial majority opinion. The mean consensus time τ appears to grow logarithmically with the number of agents N, and it is greatly decreased relative to the linear behavior τ ∼ N found in the standard voter model. When 1/2 < p ≤ 1, the system initially relaxes to a state with an even coexistence of opinions, but eventually reaches consensus by finite-size fluctuations. The approach to the coexistence state is monotonic for values of p just above 1/2, while for p closer to 1 there are damped oscillations around the coexistence value. The final approach to coexistence is approximately a power law t^{-b(p)} in both regimes, where the exponent b increases with p. Also, τ increases with respect to the standard voter model, although it still scales linearly with N. The p = 1 case is special, with a relaxation to coexistence that scales as t^{-2.73} and a consensus time that scales as τ ∼ N^β, with β ≃ 1.45.
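
The update rule is easy to simulate directly. A minimal sketch on a complete graph (the agent count, p value and step budget are chosen for a quick run, not taken from the paper); for p < 1/2 the dynamics reaches consensus quickly:

```python
import numpy as np

rng = np.random.default_rng(5)

def fitness_voter(n=100, p=0.3, max_steps=500_000):
    """Heterogeneous voter model on a complete graph: when two agents
    disagree, the higher-k agent's opinion wins with probability p
    (the loser adopts it), and the winner increments its k-value."""
    opinion = rng.choice([-1, 1], size=n)
    k = np.zeros(n, dtype=int)
    for step in range(max_steps):
        i, j = rng.choice(n, size=2, replace=False)
        if opinion[i] == opinion[j]:
            continue                       # agreeing pair: nothing happens
        higher, lower = (i, j) if k[i] >= k[j] else (j, i)
        winner, loser = (higher, lower) if rng.random() < p else (lower, higher)
        opinion[loser] = opinion[winner]
        k[winner] += 1
        if abs(opinion.sum()) == n:        # consensus reached
            return step
    return max_steps

steps = fitness_voter()
print(steps)
```

Sweeping p across [0, 1] and averaging consensus times over runs reproduces the qualitative regimes the abstract describes.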

  11. A hands-on approach for fitting long-term survival models under the GAMLSS framework.

    Science.gov (United States)

    de Castro, Mário; Cancho, Vicente G; Rodrigues, Josemar

    2010-02-01

    In many data sets from clinical studies there are patients insusceptible to the occurrence of the event of interest. Survival models which ignore this fact are generally inadequate. The main goal of this paper is to describe an application of the generalized additive models for location, scale, and shape (GAMLSS) framework to the fitting of long-term survival models. In this work the number of competing causes of the event of interest follows the negative binomial distribution. In this way, some well known models found in the literature are characterized as particular cases of our proposal. The model is conveniently parameterized in terms of the cured fraction, which is then linked to covariates. We explore the use of the gamlss package in R as a powerful tool for inference in long-term survival models. The procedure is illustrated with a numerical example. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  12. Bifactor Models Show a Superior Model Fit: Examination of the Factorial Validity of Parent-Reported and Self-Reported Symptoms of Attention-Deficit/Hyperactivity Disorders in Children and Adolescents.

    Science.gov (United States)

    Rodenacker, Klaas; Hautmann, Christopher; Görtz-Dorten, Anja; Döpfner, Manfred

    2016-01-01

    Various studies have demonstrated that bifactor models yield better solutions than models with correlated factors. However, the kind of bifactor model that is most appropriate is yet to be examined. The current study is the first to test bifactor models across the full age range (11-18 years) of adolescents using self-reports, and the first to test bifactor models with German subjects and German questionnaires. The study sample included children and adolescents aged between 6 and 18 years recruited from a German clinical sample (n = 1,081) and a German community sample (n = 642). To examine the factorial validity, we compared unidimensional, correlated-factors, higher-order, and bifactor models and further tested a modified incomplete bifactor model for measurement invariance. Bifactor models displayed superior model fit statistics compared to correlated factor models or second-order models. However, a more parsimonious incomplete bifactor model with only 2 specific factors (inattention and impulsivity) showed a good model fit and a better factor structure than the other bifactor models. Scalar measurement invariance held in most group comparisons. An incomplete bifactor model would suggest that the specific inattention and impulsivity factors represent entities separable from the general attention-deficit/hyperactivity disorder construct and might, therefore, give way to a new approach to subtyping of children beyond and above attention-deficit/hyperactivity disorder. © 2016 S. Karger AG, Basel.

  13. Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model

    International Nuclear Information System (INIS)

    Edwards, Darrin C.; Kupinski, Matthew A.; Metz, Charles E.; Nishikawa, Robert M.

    2002-01-01

    We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per-image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well.
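The linear-scaling relationship described above is straightforward to express in code. A small sketch (the names `lam` and `nu` are ours for the FROC operating-point coordinates of the initial-candidate stage):

```python
def froc_from_roc(roc_points, lam, nu):
    """Under the model in the abstract, the FROC curve is a linearly scaled
    version of the candidate-analysis ROC curve: a candidate ROC point
    (FPF, TPF) maps to the FROC point (lam * FPF, nu * TPF), where lam is
    the mean number of non-lesion candidates per image and nu the fraction
    of lesions flagged as candidates (illustrative names, not the paper's
    notation).
    """
    return [(lam * fpf, nu * tpf) for fpf, tpf in roc_points]

roc = [(0.0, 0.0), (0.2, 0.7), (1.0, 1.0)]
froc = froc_from_roc(roc, lam=3.0, nu=0.9)
```

The endpoint of the scaled curve is exactly the initial-candidate operating point (lam, nu), as the abstract's geometric argument requires.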

  14. A classical regression framework for mediation analysis: fitting one model to estimate mediation effects.

    Science.gov (United States)

    Saunders, Christina T; Blume, Jeffrey D

    2017-10-26

    Mediation analysis explores the degree to which an exposure's effect on an outcome is diverted through a mediating variable. We describe a classical regression framework for conducting mediation analyses in which estimates of causal mediation effects and their variance are obtained from the fit of a single regression model. The vector of changes in exposure pathway coefficients, which we named the essential mediation components (EMCs), is used to estimate standard causal mediation effects. Because these effects are often simple functions of the EMCs, an analytical expression for their model-based variance follows directly. Given this formula, it is instructive to revisit the performance of routinely used variance approximations (e.g., delta method and resampling methods). Requiring the fit of only one model reduces the computation time required for complex mediation analyses and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations, as would be required in the Baron-Kenny framework. Using data from the BRAIN-ICU study, we provide examples to illustrate the advantages of this framework and compare it with the existing approaches. © The Author 2017. Published by Oxford University Press.
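The single-model framework rests on the classical OLS identity that the difference-in-coefficients estimate of the indirect effect (total minus direct) equals the product-of-coefficients estimate. A self-contained sketch verifying that identity on simulated data (this illustrates the underlying algebra, not the paper's EMC estimator):

```python
import random

def simple_slope(x, y):
    """OLS slope of y on a single centered predictor x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def two_pred_slopes(x, m, y):
    """OLS slopes for y ~ x + m via the 2x2 normal equations."""
    n = len(x)
    mx, mm, my = sum(x) / n, sum(m) / n, sum(y) / n
    xc = [a - mx for a in x]
    mc = [a - mm for a in m]
    yc = [a - my for a in y]
    sxx = sum(a * a for a in xc)
    smm = sum(a * a for a in mc)
    sxm = sum(a * b for a, b in zip(xc, mc))
    sxy = sum(a * b for a, b in zip(xc, yc))
    smy = sum(a * b for a, b in zip(mc, yc))
    det = sxx * smm - sxm * sxm
    bx = (sxy * smm - smy * sxm) / det   # direct effect of x
    bm = (smy * sxx - sxy * sxm) / det   # effect of the mediator
    return bx, bm

rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(200)]
m = [0.5 * xi + rng.gauss(0, 0.3) for xi in x]
y = [0.4 * mi + 0.2 * xi + rng.gauss(0, 0.3) for xi, mi in zip(x, m)]

total = simple_slope(x, y)                 # c
direct, b_hat = two_pred_slopes(x, m, y)   # c' and b
a_hat = simple_slope(x, m)                 # a
indirect = total - direct                  # equals a_hat * b_hat exactly for OLS
```

For linear OLS the two estimators coincide exactly in-sample, which is what makes a single-model formulation with an analytic variance attractive.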

  15. Keep Using My Health Apps: Discover Users' Perception of Health and Fitness Apps with the UTAUT2 Model.

    Science.gov (United States)

    Yuan, Shupei; Ma, Wenjuan; Kanthawala, Shaheen; Peng, Wei

    2015-09-01

    Health and fitness applications (apps) are one of the major app categories in the current mobile app market. Few studies have examined this area from the users' perspective. This study adopted the Extended Unified Theory of Acceptance and Use of Technology (UTAUT2) Model to examine the predictors of the users' intention to adopt health and fitness apps. A survey (n=317) was conducted with college-aged smartphone users at a Midwestern university in the United States. Performance expectancy, hedonic motivations, price value, and habit were significant predictors of users' intention of continued usage of health and fitness apps. However, effort expectancy, social influence, and facilitating conditions were not found to predict users' intention of continued usage of health and fitness apps. This study extends the UTAUT2 Model to the mobile apps domain and provides health professionals, app designers, and marketers with insights into user experience regarding continued use of health and fitness apps.

  16. Supersymmetric Fits after the Higgs Discovery and Implications for Model Building

    CERN Document Server

    Ellis, John

    2014-01-01

    The data from the first run of the LHC at 7 and 8 TeV, together with the information provided by other experiments such as precision electroweak measurements, flavour measurements, the cosmological density of cold dark matter and the direct search for the scattering of dark matter particles in the LUX experiment, provide important constraints on supersymmetric models. Important information is provided by the ATLAS and CMS measurements of the mass of the Higgs boson, as well as the negative results of searches at the LHC for events with missing transverse energy accompanied by jets, and the LHCb and CMS measurements of BR($B_s \to \mu^+ \mu^-$). Results are presented from frequentist analyses of the parameter spaces of the CMSSM and NUHM1. The global $\chi^2$ functions for the supersymmetric models vary slowly over most of the parameter spaces allowed by the Higgs mass and the missing transverse energy search, with best-fit values that are comparable to the $\chi^2$ for the Standard Model. The $95\%$ CL lower...

  17. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    2006-01-01

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal...... distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution...

  18. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal...... distribution of the simulated random effects coincides with the assumed random effects distribution. In practice the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution function...

  19. Modelling binary data

    CERN Document Server

    Collett, David

    2002-01-01

    INTRODUCTION Some Examples The Scope of this Book Use of Statistical Software STATISTICAL INFERENCE FOR BINARY DATA The Binomial Distribution Inference about the Success Probability Comparison of Two Proportions Comparison of Two or More Proportions MODELS FOR BINARY AND BINOMIAL DATA Statistical Modelling Linear Models Methods of Estimation Fitting Linear Models to Binomial Data Models for Binomial Response Data The Linear Logistic Model Fitting the Linear Logistic Model to Binomial Data Goodness of Fit of a Linear Logistic Model Comparing Linear Logistic Models Linear Trend in Proportions Comparing Stimulus-Response Relationships Non-Convergence and Overfitting Some other Goodness of Fit Statistics Strategy for Model Selection Predicting a Binary Response Probability BIOASSAY AND SOME OTHER APPLICATIONS The Tolerance Distribution Estimating an Effective Dose Relative Potency Natural Response Non-Linear Logistic Regression Models Applications of the Complementary Log-Log Model MODEL CHECKING Definition of Re...
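As a companion to the book's treatment of fitting the linear logistic model to binary data, here is a minimal one-predictor Newton-Raphson fit in pure Python (a generic textbook sketch, not code from the book; real analyses would use a statistical package):

```python
import math

def fit_logistic(x, y, iters=25):
    """Fit P(y=1|x) = 1/(1+exp(-(b0+b1*x))) by Newton-Raphson on the
    binomial log-likelihood -- the standard fitting method for the linear
    logistic model. Minimal sketch: no step control, so it assumes
    well-behaved (non-separable) data.
    """
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = 0.0            # score (gradient of the log-likelihood)
        h00 = h01 = h11 = 0.0    # observed information matrix
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            w = p * (1.0 - p)
            g0 += yi - p
            g1 += (yi - p) * xi
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Small illustrative dataset (not from the book).
x = [0, 1, 2, 3, 4, 5, 6, 7]
y = [0, 0, 0, 1, 0, 1, 1, 1]
b0, b1 = fit_logistic(x, y)
```

The fitted curve should rise with x here, crossing 0.5 between the low and high ends of the predictor range.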

  20. The fitting parameters extraction of conversion model of the low dose rate effect in bipolar devices

    International Nuclear Information System (INIS)

    Bakerenkov, Alexander

    2011-01-01

    The Enhanced Low Dose Rate Sensitivity (ELDRS) effect in bipolar devices consists of an increase in the base current degradation of NPN and PNP transistors as the dose rate is decreased. As a result of almost 20 years of study, several physical models of the effect have been developed and described in detail. Accelerated test methods based on these models are used in standards. The conversion model of the effect, which describes the inverse S-shaped dependence of excess base current on dose rate, was proposed earlier. This paper presents the problem of extracting the fitting parameters of the conversion model.

  1. Permutation tests for goodness-of-fit testing of mathematical models to experimental data.

    Science.gov (United States)

    Fişek, M Hamit; Barlas, Zeynep

    2013-03-01

    This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models to data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition which has been carried out in the "standardized experimental situation" associated with the program to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test which has been the primary statistical tool for assessing goodness of fit in the EST research program, but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test and is proposed as an alternative to the chi-square test, being simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives, and Kendall's tau(b), a rank order correlation coefficient. The fifth procedure is a new rank order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, which we believe may be more useful than other rank order tests for assessing goodness-of-fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study from an experimental paradigm different from the expectation states paradigm - the "network exchange" paradigm - and describe how our procedures may be applied to this data set. Copyright © 2012 Elsevier Inc. All rights reserved.
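A Monte-Carlo flavor of the "Average Absolute Deviation" idea can be sketched as follows. This is a parametric-resampling illustration in the spirit of the paper (compare the observed deviation statistic against deviations in datasets simulated from the model), not the paper's exact procedure:

```python
import random

def aad(counts, probs, n):
    """Average absolute deviation between observed proportions and model
    probabilities (illustrative statistic)."""
    return sum(abs(o / n - p) for o, p in zip(counts, probs)) / len(probs)

def monte_carlo_pvalue(observed_counts, probs, n_sims=2000, seed=0):
    """p-value: fraction of model-simulated datasets whose AAD is at least
    as large as the observed AAD (with the usual +1 correction)."""
    rng = random.Random(seed)
    n = sum(observed_counts)
    k = len(probs)
    t_obs = aad(observed_counts, probs, n)
    exceed = 0
    for _ in range(n_sims):
        counts = [0] * k
        for _ in range(n):                # draw one multinomial sample
            u = rng.random()
            c = 0.0
            for idx, p in enumerate(probs):
                c += p
                if u < c:
                    counts[idx] += 1
                    break
            else:
                counts[-1] += 1
        if aad(counts, probs, n) >= t_obs:
            exceed += 1
    return (exceed + 1) / (n_sims + 1)

# Data close to the model probabilities should give a large p-value.
p_val = monte_carlo_pvalue([48, 27, 25], [0.5, 0.25, 0.25])
```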

  2. A History of Regression and Related Model-Fitting in the Earth Sciences (1636?-2000)

    International Nuclear Information System (INIS)

    Howarth, Richard J.

    2001-01-01

    The (statistical) modeling of the behavior of a dependent variate as a function of one or more predictors provides examples of model-fitting which span the development of the earth sciences from the 17th Century to the present. The historical development of these methods and their subsequent application is reviewed. Bond's predictions (c. 1636 and 1668) of change in the magnetic declination at London may be the earliest attempt to fit such models to geophysical data. Following publication of Newton's theory of gravitation in 1726, analysis of data on the length of a 1° meridian arc, and the length of a pendulum beating seconds, as a function of sin²(latitude), was used to determine the ellipticity of the oblate spheroid defining the Figure of the Earth. The pioneering computational methods of Mayer in 1750, Boscovich in 1755, and Lambert in 1765, and the subsequent independent discoveries of the principle of least squares by Gauss in 1799, Legendre in 1805, and Adrain in 1808, and its later substantiation on the basis of probability theory by Gauss in 1809 were all applied to the analysis of such geodetic and geophysical data. Notable later applications include: the geomagnetic survey of Ireland by Lloyd, Sabine, and Ross in 1836, Gauss's model of the terrestrial magnetic field in 1838, and Airy's 1845 analysis of the residuals from a fit to pendulum lengths, from which he recognized the anomalous character of measurements of gravitational force which had been made on islands. In the early 20th Century applications to geological topics proliferated, but the computational burden effectively held back applications of multivariate analysis. Following World War II, the arrival of digital computers in universities in the 1950s facilitated computation, and fitting linear or polynomial models as a function of geographic coordinates, trend surface analysis, became popular during the 1950-60s. The inception of geostatistics in France at this time by Matheron had its
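The kind of fit described here, such as pendulum length regressed on sin²(latitude), is ordinary least squares for a straight line. A small sketch with clearly labeled synthetic demonstration data (the numbers are not historical measurements):

```python
import math

def least_squares(x, y):
    """Ordinary least squares for y = b0 + b1*x: the closed-form slope and
    intercept that minimize the sum of squared residuals, i.e. the principle
    discovered independently by Gauss, Legendre, and Adrain."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((a - mx) * (b - my) for a, b in zip(x, y))
          / sum((a - mx) ** 2 for a in x))
    return my - b1 * mx, b1

lats = [10, 25, 40, 55, 70]
x = [math.sin(math.radians(l)) ** 2 for l in lats]
# Synthetic-for-demo "pendulum lengths" placed exactly on a line:
y = [990.0 + 5.2 * xi for xi in x]
b0, b1 = least_squares(x, y)
```

On exactly linear data the fit recovers the generating coefficients, which is a convenient sanity check for any least-squares implementation.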

  3. Building Customer Churn Prediction Models in Fitness Industry with Machine Learning Methods

    OpenAIRE

    Shan, Min

    2017-01-01

    With the rapid growth of digital systems, churn management has become a major focus within customer relationship management in many industries. Ample research has been conducted for churn prediction in different industries with various machine learning methods. This thesis aims to combine feature selection and supervised machine learning methods for defining models of churn prediction and apply them to the fitness industry. Forward selection is chosen as the feature selection method. Support Vector ...

  4. A new fit-for-purpose model testing framework: Decision Crash Tests

    Science.gov (United States)

    Tolson, Bryan; Craig, James

    2016-04-01

    Decision-makers in water resources are often burdened with selecting appropriate multi-million dollar strategies to mitigate the impacts of climate or land use change. Unfortunately, the suitability of existing hydrologic simulation models to accurately inform decision-making is in doubt because the testing procedures used to evaluate model utility (i.e., model validation) are insufficient. For example, many authors have identified that a good standard framework for model testing called the Klemes Crash Tests (KCTs), which are the classic model validation procedures from Klemeš (1986) that Andréassian et al. (2009) rename as KCTs, have yet to become common practice in hydrology. Furthermore, Andréassian et al. (2009) claim that the progression of hydrological science requires widespread use of KCTs and the development of new crash tests. Existing simulation (not forecasting) model testing procedures such as KCTs look backwards (checking for consistency between simulations and past observations) rather than forwards (explicitly assessing if the model is likely to support future decisions). We propose a fundamentally different, forward-looking, decision-oriented hydrologic model testing framework based upon the concept of fit-for-purpose model testing that we call Decision Crash Tests or DCTs. Key DCT elements are i) the model purpose (i.e., the decision the model is meant to support) must be identified so that model outputs can be mapped to management decisions; and ii) the framework evaluates not just the selected hydrologic model but the entire suite of model-building decisions associated with model discretization, calibration, etc. The framework is constructed to directly and quantitatively evaluate model suitability. The DCT framework is applied to a model building case study on the Grand River in Ontario, Canada. A hypothetical binary decision scenario is analysed (upgrade or not upgrade the existing flood control structure) under two different sets of model building

  5. Describing the Process of Adopting Nutrition and Fitness Apps: Behavior Stage Model Approach.

    Science.gov (United States)

    König, Laura M; Sproesser, Gudrun; Schupp, Harald T; Renner, Britta

    2018-03-13

    Although mobile technologies such as smartphone apps are promising means for motivating people to adopt a healthier lifestyle (mHealth apps), previous studies have shown low adoption and continued use rates. Developing the means to address this issue requires further understanding of mHealth app nonusers and adoption processes. This study utilized a stage model approach based on the Precaution Adoption Process Model (PAPM), which proposes that people pass through qualitatively different motivational stages when adopting a behavior. To establish a better understanding of between-stage transitions during app adoption, this study aimed to investigate the adoption process of nutrition and fitness app usage, and the sociodemographic and behavioral characteristics and decision-making style preferences of people at different adoption stages. Participants (N=1236) were recruited onsite within the cohort study Konstanz Life Study. Use of mobile devices and nutrition and fitness apps, 5 behavior adoption stages of using nutrition and fitness apps, preference for intuition and deliberation in eating decision-making (E-PID), healthy eating style, sociodemographic variables, and body mass index (BMI) were assessed. Analysis of the 5 behavior adoption stages showed that stage 1 ("unengaged") was the most prevalent motivational stage for both nutrition and fitness app use, with half of the participants stating that they had never thought about using a nutrition app (52.41%, 533/1017), whereas less than one-third stated they had never thought about using a fitness app (29.25%, 301/1029). "Unengaged" nonusers (stage 1) showed a higher preference for an intuitive decision-making style when making eating decisions, whereas those who were already "acting" (stage 4) showed a greater preference for a deliberative decision-making style (F4,1012=21.83, P<.001), which has implications for the design of digital interventions. This study highlights that new user groups might be better reached by apps designed to address a more intuitive decision-making style.

  6. Hair length, facial attractiveness, personality attribution: A multiple fitness model of hairdressing

    OpenAIRE

    Bereczkei, Tamas; Mesko, Norbert

    2007-01-01

    Multiple Fitness Model states that attractiveness varies across multiple dimensions, with each feature representing a different aspect of mate value. In the present study, male raters judged the attractiveness of young females with neotenous and mature facial features, with various hair lengths. Results revealed that the physical appearance of long-haired women was rated high, regardless of their facial attractiveness being valued high or low. Women rated as most attractive were those whose f...

  7. FITTING A THREE DIMENSIONAL PEM FUEL CELL MODEL TO MEASUREMENTS BY TUNING THE POROSITY AND

    DEFF Research Database (Denmark)

    Bang, Mads; Odgaard, Madeleine; Condra, Thomas Joseph

    2004-01-01

    the distribution of current density and further how this affects the polarization curve. The porosity and conductivity of the catalyst layer are some of the most difficult parameters to measure, estimate and especially control. Yet the proposed model shows how these two parameters can have significant influence...... on the performance of the fuel cell. The two parameters are shown to be key elements in adjusting the three-dimensional model to fit measured polarization curves. Results from the proposed model are compared to single cell measurements on a test MEA from IRD Fuel Cells.......A three-dimensional, computational fluid dynamics (CFD) model of a PEM fuel cell is presented. The model consists of straight channels, porous gas diffusion layers, porous catalyst layers and a membrane. In this computational domain, most of the transport phenomena which govern the performance of the

  8. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh; Genton, Marc G.

    2014-01-01

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte

  9. Two Aspects of the Simplex Model: Goodness of Fit to Linear Growth Curve Structures and the Analysis of Mean Trends.

    Science.gov (United States)

    Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.

    1994-01-01

    Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)

  10. Are all models created equal? A content analysis of women in advertisements of fitness versus fashion magazines.

    Science.gov (United States)

    Wasylkiw, L; Emms, A A; Meuse, R; Poirier, K F

    2009-03-01

    The current study is a content analysis of women appearing in advertisements in two types of magazines, fitness/health versus fashion/beauty, chosen because of their large and predominantly female readerships. Women appearing in advertisements of the June 2007 issue of five fitness/health magazines were compared to women appearing in advertisements of the June 2007 issue of five beauty/fashion magazines. Female models appearing in advertisements of both types of magazines were primarily young, thin Caucasians; however, images of models were more likely to emphasize appearance over performance when they appeared in fashion magazines. This difference in emphasis has implications for future research.

  11. Optimized aerodynamic design process for subsonic transport wing fitted with winglets. [wind tunnel model

    Science.gov (United States)

    Kuhlman, J. M.

    1979-01-01

    The aerodynamic design of a wind-tunnel model of a wing representative of that of a subsonic jet transport aircraft, fitted with winglets, was performed using two recently developed optimal wing-design computer programs. Both potential flow codes use a vortex lattice representation of the near-field of the aerodynamic surfaces for determination of the required mean camber surfaces for minimum induced drag, and both codes use far-field induced drag minimization procedures to obtain the required spanloads. One code uses a discrete vortex wake model for this far-field drag computation, while the second uses a 2-D advanced panel wake model. Wing camber shapes for the two codes are very similar, but the resulting winglet camber shapes differ widely. Design techniques and considerations for these two wind-tunnel models are detailed, including a description of the necessary modifications of the design geometry to format it for use by a numerically controlled machine for the actual model construction.

  12. Development and design of a late-model fitness test instrument based on LabView

    Science.gov (United States)

    Xie, Ying; Wu, Feiqing

    2010-12-01

    Undergraduates are pioneers of China's modernization program and undertake the historic mission of rejuvenating our nation in the 21st century, so their physical fitness is vital. A smart fitness test system can help them understand their fitness and health conditions, so that they can choose more suitable approaches and make practical exercise plans according to their own situation. Following future trends, a late-model fitness test instrument based on LabView has been designed to remedy the defects of today's instruments. The system hardware consists of five types of sensors with their peripheral circuits, an NI USB-6251 acquisition card and a computer, while the system software, built on LabView, includes modules for user registration, data acquisition, data processing and display, and data storage. The system, featuring modularization and an open structure, can be revised according to actual needs. Test results have verified the system's stability and reliability.

  13. Fitting and Calibrating a Multilevel Mixed-Effects Stem Taper Model for Maritime Pine in NW Spain

    Science.gov (United States)

    Arias-Rodil, Manuel; Castedo-Dorado, Fernando; Cámara-Obregón, Asunción; Diéguez-Aranda, Ulises

    2015-01-01

    Stem taper data are usually hierarchical (several measurements per tree, and several trees per plot), making application of a multilevel mixed-effects modelling approach essential. However, correlation between trees in the same plot/stand has often been ignored in previous studies. Fitting and calibration of a variable-exponent stem taper function were conducted using data from 420 trees felled in even-aged maritime pine (Pinus pinaster Ait.) stands in NW Spain. In the fitting step, the tree level explained much more variability than the plot level, and therefore calibration at plot level was omitted. Several stem heights were evaluated for measurement of the additional diameter needed for calibration at tree level. Calibration with an additional diameter measured at between 40 and 60% of total tree height showed the greatest improvement in volume and diameter predictions. If additional diameter measurement is not available, the fixed-effects model fitted by the ordinary least squares technique should be used. Finally, we also evaluated how the expansion of parameters with random effects affects the stem taper prediction, as we consider this a key question when applying the mixed-effects modelling approach to taper equations. The results showed that correlation between random effects should be taken into account when assessing the influence of random effects in stem taper prediction. PMID:26630156
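Calibration with an extra diameter measurement amounts to predicting the tree-level random effect from the fixed-effects residuals. For a linear mixed model with a single random intercept, the empirical BLUP is a simple shrinkage estimator, sketched below (the paper's taper function is nonlinear and its random effects enter specific parameters, so this shows only the core idea):

```python
def calibrated_random_effect(residuals, var_b, var_e):
    """Empirical BLUP of a subject-level random intercept from m extra
    observations: the mean fixed-effects residual, shrunk toward zero by
    the ratio of between-subject to total variance.

        b_hat = var_b / (var_b + var_e / m) * mean(residuals)

    Generic linear-mixed-model calibration sketch; variance values below
    are illustrative, not the paper's estimates.
    """
    m = len(residuals)
    shrink = var_b / (var_b + var_e / m)
    return shrink * (sum(residuals) / m)

# One extra diameter measured at mid-stem: observed minus fixed-effects prediction.
b_hat = calibrated_random_effect([1.2], var_b=4.0, var_e=1.0)
```

With more calibration measurements the shrinkage factor approaches 1, i.e. the prediction trusts the tree's own residuals more, which mirrors the paper's finding that a well-placed extra diameter improves tree-level predictions.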

  14. Innovation Rather than Improvement: A Solvable High-Dimensional Model Highlights the Limitations of Scalar Fitness

    Science.gov (United States)

    Tikhonov, Mikhail; Monasson, Remi

    2018-01-01

    Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.

  15. Predicting the Best Fit: A Comparison of Response Surface Models for Midazolam and Alfentanil Sedation in Procedures With Varying Stimulation.

    Science.gov (United States)

    Liou, Jing-Yang; Ting, Chien-Kun; Mandell, M Susan; Chang, Kuang-Yi; Teng, Wei-Nung; Huang, Yu-Yin; Tsou, Mei-Yung

    2016-08-01

    Selecting an effective dose of sedative drugs in combined upper and lower gastrointestinal endoscopy is complicated by varying degrees of pain stimulation. We tested the ability of 5 response surface models to predict depth of sedation after administration of midazolam and alfentanil in this complex setting. The procedure was divided into 3 phases: esophagogastroduodenoscopy (EGD), colonoscopy, and the time interval between the 2 (intersession). The depth of sedation in 33 adult patients was monitored by Observer's Assessment of Alertness/Sedation scores. A total of 218 combinations of midazolam and alfentanil effect-site concentrations derived from pharmacokinetic models were used to test 5 response surface models in each of the 3 phases of endoscopy. Model fit was evaluated with objective function value, corrected Akaike Information Criterion (AICc), and Spearman ranked correlation. A model was arbitrarily defined as accurate if the difference between the predicted and observed probabilities was below a preset threshold. The effect-site concentrations tested ranged from 1 to 76 ng/mL and from 5 to 80 ng/mL for midazolam and alfentanil, respectively. Midazolam and alfentanil had synergistic effects in colonoscopy and EGD, but additivity was observed in the intersession group. Adequate prediction rates were 84% to 85% in the intersession group, 84% to 88% during colonoscopy, and 82% to 87% during EGD. The reduced Greco and Fixed-C50 Hierarchy models (C50 being the alfentanil concentration required for 50% of the patients to achieve the targeted response) performed better, with comparable predictive strength. The reduced Greco model had the lowest AICc with strong correlation in all 3 phases of endoscopy. Dynamic, rather than fixed, γ and γalf in the Hierarchy model improved model fit. The reduced Greco model had the lowest objective function value and AICc and thus the best fit. This model was reliable with acceptable predictive ability based on adequate clinical correlation. We suggest that this model has practical clinical value for patients undergoing procedures with varying stimulation.
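A reduced Greco response surface can be written compactly. The algebraic form and parameter names below are a common textbook parameterization assumed for illustration; the paper's fitted parameter values are not reproduced here:

```python
def greco_response(c_mid, c_alf, c50_mid, c50_alf, alpha, gamma):
    """Probability of the targeted sedation response under a (reduced) Greco
    response-surface model. One common parameterization (an assumption):

        U = Cm/C50m + Ca/C50a + alpha * (Cm/C50m) * (Ca/C50a)
        P = U**gamma / (1 + U**gamma)

    alpha > 0 encodes synergy between the two drugs; alpha = 0 is additivity.
    """
    um = c_mid / c50_mid
    ua = c_alf / c50_alf
    u = um + ua + alpha * um * ua
    return u ** gamma / (1.0 + u ** gamma)

# Illustrative parameter values (not the paper's estimates).
p_low = greco_response(20, 10, c50_mid=60, c50_alf=40, alpha=2.0, gamma=3.0)
p_high = greco_response(60, 40, c50_mid=60, c50_alf=40, alpha=2.0, gamma=3.0)
```

Fixing alpha = 0 recovers an additive surface, matching the intersession finding, while alpha > 0 captures the synergy observed during EGD and colonoscopy.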

  16. Fitting model-based psychometric functions to simultaneity and temporal-order judgment data: MATLAB and R routines.

    Science.gov (United States)

    Alcalá-Quintana, Rocío; García-Pérez, Miguel A

    2013-12-01

    Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.
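The independent-channels assumption of exponential arrival latencies yields closed-form psychometric functions. Here is a sketch of the order-judgment probability for a single delay, checked against Monte Carlo (our own derivation under the stated latency assumptions, not code from the MATLAB/R toolbox, which also models the trichotomous decision space):

```python
import math
import random

def p_first_arrives_first(d, lam1, lam2):
    """Probability that channel 1's signal arrives before channel 2's when
    stimulus 2 is delayed by d >= 0 and the two channels have independent
    exponential arrival latencies with rates lam1 and lam2. By the
    memoryless property:

        P(T1 < d + T2) = 1 - (lam2 / (lam1 + lam2)) * exp(-lam1 * d)
    """
    return 1.0 - (lam2 / (lam1 + lam2)) * math.exp(-lam1 * d)

# Monte-Carlo check of the closed form (illustrative rates and delay).
rng = random.Random(0)
lam1, lam2, d = 1.5, 1.0, 0.4
hits = sum(rng.expovariate(lam1) < d + rng.expovariate(lam2)
           for _ in range(100_000))
```

At d = 0 the expression reduces to lam1/(lam1+lam2), the classic race between two exponentials; the full model maps such arrival-order probabilities through a decision stage into SJ and TOJ responses.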

  17. Black Versus Gray T-Shirts: Comparison of Spectrophotometric and Other Biophysical Properties of Physical Fitness Uniforms and Modeled Heat Strain and Thermal Comfort

    Science.gov (United States)

    2016-09-01

    …the impact of the environment on the wearer. To model these impacts on human thermal sensation (e.g., thermal comfort) and thermoregulatory…

  18. Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy

    Science.gov (United States)

    Nabizadeh, Nooshin; John, Nigel

    2014-03-01

    Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. The local image fitting energy captures local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has lower computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from being trapped in a local minimum, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.

  19. FIT ANALYSIS OF INDOSAT DOMPETKU BUSINESS MODEL USING A STRATEGIC DIAGNOSIS APPROACH

    Directory of Open Access Journals (Sweden)

    Fauzi Ridwansyah

    2015-09-01

    Full Text Available Mobile payment is an industry response to global and regional technological drivers, as well as national socio-economic drivers, in the development of a less-cash society. The purposes of this study were 1) identifying the positioning of PT. Indosat in responding to the Indonesian mobile payment market, 2) analyzing the fit of Indosat's internal capabilities and business model with environmental turbulence, and 3) formulating the optimum mobile payment business model development design for Indosat. The method used in this study was a combination of qualitative and quantitative analysis through in-depth interviews with purposive judgment sampling. The analysis tools used in this study were the Business Model Canvas (BMC) and Ansoff's Strategic Diagnosis. The interviewees were representatives of PT. Indosat's internal management and mobile payment business value chain stakeholders. Based on the BMC mapping, which was then analyzed with the strategic diagnosis model, a considerable gap (>1) between the aggressiveness of Indosat's strategy in the current market environment and the expected future level of environmental turbulence was obtained. Therefore, changes in competitive strategy that need to be made include 1) developing a new customer segment, 2) shifting the value proposition towards the extensification of mobile payment, 3) monetizing an effective value proposition, and 4) integrating effective collaboration to harmonize the company's objectives with the government's vision. Keywords: business model canvas, Indosat, mobile payment, less cash society, strategic diagnosis

  20. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example

    Science.gov (United States)

    Helgesson, P.; Sjöstrand, H.

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
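
    As a toy version of the three-peak problem above, one can fit a sum of Gaussians to synthetic histogram data with a Levenberg-Marquardt least-squares solver and read off the mixing ratio r1 of the two overlapping peaks. This sketch uses plain unweighted least squares and omits the prior and the Gaussian-process defect term discussed in the paper; all numbers are synthetic:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)

def peaks(p, x):
    """Sum of three Gaussian peaks with a shared width s."""
    a1, m1, a2, m2, a3, m3, s = p
    g = lambda a, m: a * np.exp(-0.5 * ((x - m) / s) ** 2)
    return g(a1, m1) + g(a2, m2) + g(a3, m3)

# Synthetic histogram: peaks 1 and 2 overlap, peak 3 is well separated
true = np.array([100.0, 3.0, 60.0, 4.0, 80.0, 8.0, 0.5])
y = peaks(true, x) + rng.normal(0.0, 2.0, x.size)

fit = least_squares(lambda p: peaks(p, x) - y,
                    x0=[90.0, 2.8, 50.0, 4.2, 70.0, 7.9, 0.6], method="lm")
a1, m1, a2, m2, a3, m3, s = fit.x
# With a shared width, peak areas are proportional to amplitudes, so the
# probability of a reaction product landing in the first overlapping peak is:
r1 = a1 / (a1 + a2)
```

The true value here is r1 = 100/160 ≈ 0.625; with Poisson-like or relative uncertainties the paper's point applies and the data covariance should be updated from the fit rather than held fixed.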

  2. A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.

    Science.gov (United States)

    Glas, Cees A. W.; Meijer, Rob R.

    A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…

  3. A differential equation for the asymptotic fitness distribution in the Bak-Sneppen model with five species.

    Science.gov (United States)

    Schlemm, Eckhard

    2015-09-01

    The Bak-Sneppen model is an abstract representation of a biological system that evolves according to the Darwinian principles of random mutation and selection. The species in the system are characterized by a numerical fitness value between zero and one. We show that in the case of five species the steady-state fitness distribution can be obtained as a solution to a linear differential equation of order five with hypergeometric coefficients. Similar representations for the asymptotic fitness distribution in larger systems may help pave the way towards a resolution of the question of whether or not, in the limit of infinitely many species, the fitness is asymptotically uniformly distributed on the interval [fc, 1] with fc ≳ 2/3. Copyright © 2015 Elsevier Inc. All rights reserved.
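
    A direct simulation makes the steady-state fitness distribution discussed above easy to explore numerically. The ring topology and replacement rule are the standard Bak-Sneppen choices; the step count and burn-in length below are arbitrary assumptions:

```python
import random

def bak_sneppen(n_species=5, steps=100000, burn_in=20000, seed=42):
    """Bak-Sneppen model on a ring: at each step the least-fit species
    and its two neighbours receive fresh uniform random fitnesses;
    snapshots after burn-in approximate the steady-state distribution."""
    rng = random.Random(seed)
    f = [rng.random() for _ in range(n_species)]
    samples = []
    for t in range(steps):
        i = min(range(n_species), key=f.__getitem__)   # least-fit species
        for j in (i - 1, i, (i + 1) % n_species):      # it and its neighbours
            f[j] = rng.random()
        if t >= burn_in:
            samples.extend(f)
    return samples

samples = bak_sneppen()
mean_fitness = sum(samples) / len(samples)
```

Because survivors are conditioned on not being the minimum, the steady-state mean sits above 0.5, the qualitative skew toward high fitness that the hypergeometric solution for five species makes exact.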

  4. Inverse problem theory methods for data fitting and model parameter estimation

    CERN Document Server

    Tarantola, A

    2002-01-01

    Inverse Problem Theory is written for physicists, geophysicists and all scientists facing the problem of quantitative interpretation of experimental data. Although it contains a lot of mathematics, it is not intended as a mathematical book, but rather tries to explain how a method of acquisition of information can be applied to the actual world.The book provides a comprehensive, up-to-date description of the methods to be used for fitting experimental data, or to estimate model parameters, and to unify these methods into the Inverse Problem Theory. The first part of the book deals wi

  5. A bivariate contaminated binormal model for robust fitting of proper ROC curves to a pair of correlated, possibly degenerate, ROC datasets.

    Science.gov (United States)

    Zhai, Xuetong; Chakraborty, Dev P

    2017-06-01

    The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets, possibly degenerate, with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in the CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition, contributing two peaks, and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates and the covariance matrix of the parameters, and other statistics
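
    The univariate ingredients of the model above can be sketched directly: with nondiseased ratings from N(0,1) and diseased ratings from the two-component mixture, sweeping a decision threshold traces a proper ROC curve that stays on or above the chance diagonal. The values of μ and α below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.stats import norm

def cbm_roc(mu, alpha, n=2001):
    """Contaminated binormal model ROC: nondiseased ratings ~ N(0,1);
    diseased ratings ~ alpha*N(mu,1) + (1-alpha)*N(0,1), i.e. disease
    visible with probability alpha and invisible otherwise."""
    t = np.linspace(-8.0, 8.0 + mu, n)             # decision thresholds
    fpf = norm.sf(t)                               # false-positive fraction
    tpf = alpha * norm.sf(t - mu) + (1 - alpha) * norm.sf(t)
    return fpf, tpf

fpf, tpf = cbm_roc(mu=2.0, alpha=0.7)              # illustrative values
# Trapezoidal AUC; fpf decreases along t, hence the reversed differences
auc = float(np.sum(0.5 * (tpf[1:] + tpf[:-1]) * (fpf[:-1] - fpf[1:])))
```

For μ > 0 the curve is proper by construction (tpf ≥ fpf at every threshold), which is the key property the bivariate CORCBM extension preserves for paired data.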

  6. Fitting the Probability Distribution Functions to Model Particulate Matter Concentrations

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2017-01-01

    The main objective of this study is to identify the best probability distribution and plotting-position formula for modeling the concentrations of Total Suspended Particles (TSP) as well as Particulate Matter with an aerodynamic diameter <10 μm (PM10). The best distribution provides the estimated probabilities of exceeding the threshold limit given by the Egyptian Air Quality Limit Value (EAQLV), and the number of exceedance days is estimated. The standard limits of the EAQLV for TSP and PM10 concentrations are 24-h averages of 230 μg/m³ and 70 μg/m³, respectively. Five frequency distribution functions with seven plotting-position formulas (empirical cumulative distribution functions) are compared to fit the averages of daily TSP and PM10 concentrations in the year 2014 for Ain Sokhna city. The Quantile-Quantile plot (Q-Q plot) is used as a method for assessing how closely a data set fits a particular distribution. A proper probability distribution that represents the TSP and PM10 has been chosen based on the statistical performance indicator values. The results show that the Hosking and Wallis plotting position combined with the Frechet distribution gave the best fit for TSP and PM10 concentrations. The Burr distribution with the same plotting position follows the Frechet distribution. The exceedance probability and days over the EAQLV are predicted using the Frechet distribution. In 2014, the exceedance probability and days for TSP concentrations are 0.052 and 19 days, respectively. Furthermore, the PM10 concentration is found to exceed the threshold limit on 174 days
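
    The fitting-and-exceedance workflow above can be sketched with SciPy, where the Fréchet distribution is available as `invweibull`. The synthetic data, shape, and scale below are placeholders, not the Ain Sokhna measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Placeholder daily PM10 concentrations in ug/m3; SciPy's name for the
# Frechet distribution is `invweibull`
data = stats.invweibull.rvs(c=4.0, scale=60.0, size=365, random_state=rng)

c, loc, scale = stats.invweibull.fit(data, floc=0)    # location pinned at 0
p_exceed = stats.invweibull.sf(70.0, c, loc, scale)   # P(PM10 > 70 ug/m3 limit)
days_over = 365.0 * p_exceed                          # expected exceedance days
ks = stats.kstest(data, "invweibull", args=(c, loc, scale))
```

The Kolmogorov-Smirnov statistic plays the role of the goodness-of-fit screening in the study, and the survival function at the EAQLV limit gives the exceedance probability directly.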

  7. Applied stochastic modelling

    CERN Document Server

    Morgan, Byron JT; Tanner, Martin Abba; Carlin, Bradley P

    2008-01-01

    Introduction and Examples Introduction Examples of data sets Basic Model Fitting Introduction Maximum-likelihood estimation for a geometric model Maximum-likelihood for the beta-geometric model Modelling polyspermy Which model? What is a model for? Mechanistic models Function Optimisation Introduction MATLAB: graphs and finite differences Deterministic search methods Stochastic search methods Accuracy and a hybrid approach Basic Likelihood Tools Introduction Estimating standard errors and correlations Looking at surfaces: profile log-likelihoods Confidence regions from profiles Hypothesis testing in model selection Score and Wald tests Classical goodness of fit Model selection bias General Principles Introduction Parameterisation Parameter redundancy Boundary estimates Regression and influence The EM algorithm Alternative methods of model fitting Non-regular problems Simulation Techniques Introduction Simulating random variables Integral estimation Verification Monte Carlo inference Estimating sampling distributi...

  8. Lévy flights and self-similar exploratory behaviour of termite workers: beyond model fitting.

    Directory of Open Access Journals (Sweden)

    Octavio Miramontes

    Full Text Available Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show by analyzing over half a million movement displacements that isolated termite workers actually exhibit a range of very interesting dynamical properties--including Lévy flights--in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale.
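
    A standard way to estimate the scaling exponent of displacement statistics like those above is the maximum-likelihood (Hill-type) estimator for a power-law tail. The synthetic Pareto-distributed step lengths below stand in for the termite displacement data:

```python
import numpy as np

def powerlaw_mle(x, xmin):
    """Maximum-likelihood (Hill-type) estimate of mu for a power-law
    tail p(x) ~ x**(-mu), x >= xmin; a Levy-flight regime corresponds
    to 1 < mu <= 3."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.sum(np.log(x / xmin))

rng = np.random.default_rng(0)
# Synthetic step lengths with a Pareto tail of true exponent mu = 2
steps = rng.pareto(1.0, size=50_000) + 1.0     # support x >= 1
mu_hat = powerlaw_mle(steps, xmin=1.0)
```

This estimator avoids the well-known biases of fitting a straight line to a log-log histogram, which is part of what "beyond model fitting" alludes to in the study.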

  9. Estimation and prediction of maximum daily rainfall at Sagar Island using best fit probability models

    Science.gov (United States)

    Mandal, S.; Choudhury, B. U.

    2015-07-01

    Sagar Island, sitting on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-à-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best-fit distribution models for the annual, seasonal and monthly time series, based on maximum rank with minimum value of the test statistics, three statistical goodness-of-fit tests, viz. the Kolmogorov-Smirnov test (K-S), the Anderson-Darling test (A²) and the Chi-Square test (χ²), were employed. The best-fit probability distribution was identified from the highest overall score obtained from the three goodness-of-fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon and summer season MDR, while the Lognormal, Weibull and Pearson 5 distributions were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99%, 85%, 40%, 12% and 3% levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85%) for MDR of >100 mm and moderate probabilities (37 to 46%) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomalies can pose a climatic threat to the sustainability of agricultural production and thus adequate adaptation and mitigation measures are needed.
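
    The return-period arithmetic above follows directly from the fitted distribution: the T-year event is the quantile whose annual exceedance probability is 1/T. A sketch with a normal fit, using a synthetic rainfall series as a placeholder for the 1982-2010 record:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Placeholder annual maximum daily rainfall series (mm), 29 years
amdr = rng.normal(60.0, 25.0, size=29).clip(min=1.0)

mu, sigma = stats.norm.fit(amdr)                    # best-fit normal
return_periods = [2, 5, 10, 20, 25]
# The T-year event is the quantile with annual exceedance probability 1/T
estimates = {T: stats.norm.ppf(1.0 - 1.0 / T, mu, sigma)
             for T in return_periods}
p_gt_100 = stats.norm.sf(100.0, mu, sigma)          # P(annual MDR > 100 mm)
```

For a seasonal series the same code applies with the season's best-fit family (e.g., Lognormal or Weibull) substituted for `stats.norm`.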

  10. Chempy: A flexible chemical evolution model for abundance fitting. Do the Sun's abundances alone constrain chemical evolution models?

    Science.gov (United States)

    Rybizki, Jan; Just, Andreas; Rix, Hans-Walter

    2017-09-01

    Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of supernovae of type Ia (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6 (+2.1/-1.6)% of the IMF explodes as core-collapse supernovae (CC-SN), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10³ M⊙ to 0.5-1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels. Chempy could be a powerful tool to confront predictions from stellar

  11. Anticipating mismatches of HIT investments: Developing a viability-fit model for e-health services.

    Science.gov (United States)

    Mettler, Tobias

    2016-01-01

    Despite massive investments in recent years, the impact of health information technology (HIT) has been controversial and strongly disputed by both research and practice. While many studies are concerned with the development of new or the refinement of existing measurement models for assessing the impact of HIT adoption (ex post), this study presents an initial attempt to better understand the factors affecting the viability and fit of HIT, and thereby underscores the importance of also having instruments for managing expectations (ex ante). We extend prior research by undertaking a more granular investigation into the theoretical assumptions of viability and fit constructs. In doing so, we use a mixed-methods approach, conducting qualitative focus group discussions and a quantitative field study to improve and validate a viability-fit measurement instrument. Our findings suggest two issues for research and practice. First, the results indicate that different stakeholders perceive the HIT viability and fit of the same e-health services very unequally. Second, the analysis also demonstrates that there can be a great discrepancy between the organizational viability and individual fit of a particular e-health service. The findings of this study have a number of important implications, such as for health policy making, HIT portfolios, and stakeholder communication. Copyright © 2015. Published by Elsevier Ireland Ltd.

  12. Modeling of the physical fitness of young karatekas at the preliminary basic training stage

    Directory of Open Access Journals (Sweden)

    V. A. Galimskyi

    2014-09-01

    Full Text Available Purpose: to develop a program for correcting the physical fitness of young karatekas at the preliminary basic training stage on the basis of model performance characteristics. Material: 57 young karatekas aged 9-11 years took part in the research. Results: the level of general and special physical preparedness of the young karatekas was determined. The control group trained under the existing youth sports school program for Muay Thai (Thai boxing). For the experimental group, a program for the selective development of general and special physical qualities was developed on the basis of model training sessions. The special program comprises six directions: 1) development of static and dynamic balance; 2) development of vestibular stability (precision of movements after rotation); 3) development of movement rate; 4) development of the capacity for rapid restructuring of movements; 5) development of the ability to differentiate the force and spatial parameters of movement; 6) development of the ability to perform jumping movements with rotation. Work on the special physical qualities continued alongside improvement of the technique of complex striking motions in place and with movement. Conclusions: the selective development of special physical qualities based on model training sessions gave a significant performance advantage over the control group.

  13. Predicting and Modelling of Survival Data when Cox's Regression Model does not hold

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects

  14. Fitness for duty: A tried-and-true model for decision making

    International Nuclear Information System (INIS)

    Horn, G.L.

    1989-01-01

    The US Nuclear Regulatory Commission (NRC) rules and regulations pertaining to fitness for duty specify the development of programs designed to ensure that nuclear power plant personnel are not under the influence of legal or illegal substances that cause mental or physical impairment of work performance such that public safety is compromised. These regulations specify the type of decision loop to employ in managing the employee's movement through the process, from initial restriction of access to the point at which his access authorization is restored. Suggestions are also offered to determine the roles that various components of the organization should take in the decision loop. This paper discusses some implications and labor concerns arising from the suggested role of employee assistance programs (EAPs) in the decision loop for clinical assessment and return-to-work evaluation of chemical testing failures. A model for a decision loop addressing some of the issues raised is presented. The proposed model has been implemented in one nuclear facility and has withstood the scrutiny of an NRC audit.

  15. GARCH Modelling of Cryptocurrencies

    OpenAIRE

    Jeffrey Chu; Stephen Chan; Saralees Nadarajah; Joerg Osterrieder

    2017-01-01

    With the exception of Bitcoin, there appears to be little or no literature on GARCH modelling of cryptocurrencies. This paper provides the first GARCH modelling of the seven most popular cryptocurrencies. Twelve GARCH models are fitted to each cryptocurrency, and their fits are assessed in terms of five criteria. Conclusions are drawn on the best fitting models, forecasts and acceptability of value at risk estimates.
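
    A minimal GARCH(1,1) fit of the kind surveyed above can be written directly as a Gaussian quasi-likelihood optimization. Simulated returns stand in for the cryptocurrency data, and in practice a dedicated package (e.g., `arch` in Python or `rugarch` in R) would be used instead of this hand-rolled sketch:

```python
import numpy as np
from scipy.optimize import minimize

def garch11_nll(params, r):
    """Negative Gaussian log-likelihood of a GARCH(1,1) with
    sigma2[t] = omega + alpha*r[t-1]**2 + beta*sigma2[t-1]."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                       # outside the stationary region
    s2 = np.empty_like(r)
    s2[0] = r.var()                         # initialize with sample variance
    for t in range(1, r.size):
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    return 0.5 * np.sum(np.log(2.0 * np.pi * s2) + r ** 2 / s2)

# Simulate returns from a known GARCH(1,1), then refit it
rng = np.random.default_rng(1)
omega0, alpha0, beta0 = 0.05, 0.10, 0.85
n = 3000
r = np.empty(n)
s2 = omega0 / (1.0 - alpha0 - beta0)        # start at unconditional variance
for t in range(n):
    r[t] = rng.normal() * np.sqrt(s2)
    s2 = omega0 + alpha0 * r[t] ** 2 + beta0 * s2

res = minimize(garch11_nll, x0=[0.1, 0.05, 0.8], args=(r,),
               method="Nelder-Mead")
omega_hat, alpha_hat, beta_hat = res.x
```

The fitted persistence alpha_hat + beta_hat summarizes volatility clustering, and the one-step-ahead variance forecast follows from the same recursion.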

  16. Econometric modelling of risk adverse behaviours of entrepreneurs in the provision of house fittings in China

    Directory of Open Access Journals (Sweden)

    Rita Yi Man Li

    2012-03-01

    Full Text Available Entrepreneurs have always borne the risk of running their business. They reap a profit in return for their risk taking and work. Housing developers are no different. In many countries, such as Australia, the United Kingdom and the United States, they interpret the tastes of the buyers and provide the dwellings they develop with basic fittings such as floor and wall coverings, bathroom fittings and kitchen cupboards. In mainland China, however, in most developments, units or houses are sold without floor or wall coverings, or kitchen or bathroom fittings. What is the motive behind this choice? This paper analyses the factors affecting housing developers' decisions to provide fittings based on 1701 housing developments in Hangzhou, Chongqing and Hangzhou using a Probit model. The results show that developers build a higher proportion of bare units in mainland China when: 1) there is a shortage of housing; and 2) land costs are high, so that the comparative costs of providing fittings become relatively low.
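
    A probit model of the kind used above maximizes the Bernoulli likelihood with P(y=1|x) = Φ(x'β). The covariates and coefficients below are hypothetical stand-ins for the housing-shortage and land-cost factors, not the paper's data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def probit_nll(beta, X, y):
    """Negative log-likelihood of a probit model, P(y=1|x) = Phi(x @ beta)."""
    p = np.clip(norm.cdf(X @ beta), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(5)
# Hypothetical decision: provide fittings (y=1) given a housing-shortage
# index and a land-cost index (both standardized)
n = 1701
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([0.3, -0.8, -0.5])     # shortage and cost deter fittings
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

res = minimize(probit_nll, x0=np.zeros(3), args=(X, y), method="BFGS")
beta_hat = res.x
```

Negative coefficients on the shortage and cost indices reproduce the paper's qualitative finding: a higher proportion of bare units when housing is scarce or land is expensive.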

  17. Towards greater realism in inclusive fitness models: the case of worker reproduction in insect societies

    Science.gov (United States)

    Wenseleers, Tom; Helanterä, Heikki; Alves, Denise A.; Dueñez-Guzmán, Edgar; Pamilo, Pekka

    2013-01-01

    The conflicts over sex allocation and male production in insect societies have long served as an important test bed for Hamilton's theory of inclusive fitness, but have for the most part been considered separately. Here, we develop new coevolutionary models to examine the interaction between these two conflicts and demonstrate that sex ratio and colony productivity costs of worker reproduction can lead to vastly different outcomes even in species that show no variation in their relatedness structure. Empirical data on worker-produced males in eight species of Melipona bees support the predictions from a model that takes into account the demographic details of colony growth and reproduction. Overall, these models contribute significantly to explaining behavioural variation that previous theories could not account for. PMID:24132088

  18. Fitting Nonlinear Ordinary Differential Equation Models with Random Effects and Unknown Initial Conditions Using the Stochastic Approximation Expectation-Maximization (SAEM) Algorithm.

    Science.gov (United States)

    Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu

    2016-03-01

    The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.
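
    The benchmark model named above, the Van der Pol oscillator, is easy to reproduce without the random effects or the SAEM machinery; for μ = 1 trajectories settle onto a limit cycle of amplitude ≈ 2:

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, state, mu=1.0):
    """Van der Pol oscillator x'' - mu*(1 - x**2)*x' + x = 0 written as
    a first-order system in (x, v)."""
    x, v = state
    return [v, mu * (1.0 - x ** 2) * v - x]

sol = solve_ivp(van_der_pol, (0.0, 50.0), y0=[0.1, 0.0],
                rtol=1e-8, atol=1e-10)
x = sol.y[0]
amplitude = float(np.abs(x[sol.t > 25.0]).max())   # on the limit cycle
```

The fitting problem in the paper wraps this forward solver in a likelihood with subject-level random effects on the parameters and unknown initial states, which is what the SAEM algorithm estimates.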

  19. Comments on Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?"

    Science.gov (United States)

    McCluskey, Ken W.

    2010-01-01

    This article presents the author's comments on Hisham B. Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?" Ghassib's article focuses on the transformation of science from pre-modern times to the present. Ghassib (2010) notes that, unlike in an earlier era when the economy depended on static…

  20. Are transdiagnostic models of eating disorders fit for purpose? A consideration of the evidence for food addiction.

    Science.gov (United States)

    Treasure, Janet; Leslie, Monica; Chami, Rayane; Fernández-Aranda, Fernando

    2018-03-01

    Explanatory models for eating disorders have changed over time to account for changing clinical presentations. The transdiagnostic model evolved from the maintenance model, which provided the framework for cognitive behavioural therapy for bulimia nervosa. However, for many individuals (especially those at the extreme ends of the weight spectrum), this account does not fully fit. New evidence generated from research framed within the food addiction hypothesis is synthesized here into a model that can explain recurrent binge eating behaviour. New interventions that target core maintenance elements identified within the model may be useful additions to a complex model of treatment for eating disorders. Copyright © 2018 John Wiley & Sons, Ltd and Eating Disorders Association.

  1. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing.

    Science.gov (United States)

    Cai, Li

    2015-06-01

    Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.
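A minimal sketch of the base (unidimensional, dichotomous-item) Lord-Wingersky recursion that the record above extends to hierarchical models; the item probabilities below are made-up illustrative values, not values from the paper:

```python
def summed_score_distribution(p_correct):
    """Lord-Wingersky recursion: distribution of the summed score
    over dichotomous items, given each item's probability of a
    correct response at a fixed ability level.

    p_correct: list of per-item success probabilities.
    Returns a list dist where dist[s] = P(summed score == s).
    """
    dist = [1.0]  # before any item, the score is 0 with probability 1
    for p in p_correct:
        new = [0.0] * (len(dist) + 1)
        for s, mass in enumerate(dist):
            new[s] += mass * (1.0 - p)   # item answered incorrectly
            new[s + 1] += mass * p       # item answered correctly
        dist = new
    return dist
```

For two items each with success probability 0.5, the recursion returns the binomial distribution [0.25, 0.5, 0.25] over scores 0, 1, 2; in IRT scoring the same recursion is run at each quadrature point and the results are mixed over the ability distribution.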

  2. GARCH Modelling of Cryptocurrencies

    Directory of Open Access Journals (Sweden)

    Jeffrey Chu

    2017-10-01

    Full Text Available With the exception of Bitcoin, there appears to be little or no literature on GARCH modelling of cryptocurrencies. This paper provides the first GARCH modelling of the seven most popular cryptocurrencies. Twelve GARCH models are fitted to each cryptocurrency, and their fits are assessed in terms of five criteria. Conclusions are drawn on the best fitting models, forecasts and acceptability of value at risk estimates.
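As a rough sketch of what fitting one of these models involves, the GARCH(1,1) conditional-variance recursion and the Gaussian log-likelihood that estimation maximizes can be written in a few lines; the parameter values in the test are illustrative, not estimates from the paper:

```python
import math

def garch11_loglik(returns, omega, alpha, beta):
    """Gaussian log-likelihood of a GARCH(1,1) model:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.
    The recursion is initialized at the sample variance of the returns.
    """
    n = len(returns)
    sigma2 = sum(r * r for r in returns) / n
    ll = 0.0
    for t, r in enumerate(returns):
        if t > 0:
            sigma2 = omega + alpha * returns[t - 1] ** 2 + beta * sigma2
        ll += -0.5 * (math.log(2.0 * math.pi * sigma2) + r * r / sigma2)
    return ll
```

A fitting routine would maximize this function over (omega, alpha, beta) subject to positivity and alpha + beta < 1; the other GARCH variants in the paper change only the variance recursion.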

  3. Two-Stage Method Based on Local Polynomial Fitting for a Linear Heteroscedastic Regression Model and Its Application in Economics

    Directory of Open Access Journals (Sweden)

    Liyun Su

    2012-01-01

    Full Text Available We introduce the extension of local polynomial fitting to the linear heteroscedastic regression model. Firstly, the local polynomial fitting is applied to estimate heteroscedastic function, then the coefficients of regression model are obtained by using generalized least squares method. One noteworthy feature of our approach is that we avoid the testing for heteroscedasticity by improving the traditional two-stage method. Due to nonparametric technique of local polynomial estimation, we do not need to know the heteroscedastic function. Therefore, we can improve the estimation precision, when the heteroscedastic function is unknown. Furthermore, we focus on comparison of parameters and reach an optimal fitting. Besides, we verify the asymptotic normality of parameters based on numerical simulations. Finally, this approach is applied to a case of economics, and it indicates that our method is surely effective in finite-sample situations.
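The two-stage idea can be illustrated compactly. The sketch below substitutes a crude linear fit of the absolute residuals for the paper's local polynomial smoother (an assumption made purely to keep the example short), then refits by weighted least squares:

```python
def wls(x, y, w=None):
    """Closed-form (weighted) least-squares fit of y = a + b*x; returns (a, b)."""
    if w is None:
        w = [1.0] * len(x)
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    b = sxy / sxx
    return ybar - b * xbar, b

def two_stage_fit(x, y):
    """Stage 1: unweighted fit, then model the absolute residuals as a
    linear function of x (a stand-in for the local polynomial step).
    Stage 2: refit with weights 1/variance (generalized least squares)."""
    a0, b0 = wls(x, y)
    abs_res = [abs(yi - a0 - b0 * xi) for xi, yi in zip(x, y)]
    c, d = wls(x, abs_res)                       # heteroscedastic sd trend
    floor = 0.1 * sum(abs_res) / len(abs_res)    # guard against sd <= 0
    var_hat = [max(c + d * xi, floor) ** 2 for xi in x]
    return wls(x, y, [1.0 / v for v in var_hat])
```

Because the weights downweight the high-variance observations, the second-stage estimates are more efficient than ordinary least squares when the error variance really does change with x.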

  4. Goodness-of-fit tests and model diagnostics for negative binomial regression of RNA sequencing data.

    Science.gov (United States)

    Mi, Gu; Di, Yanming; Schafer, Daniel W

    2015-01-01

    This work is about assessing model adequacy for negative binomial (NB) regression, particularly (1) assessing the adequacy of the NB assumption, and (2) assessing the appropriateness of models for NB dispersion parameters. Tools for the first are appropriate for NB regression generally; those for the second are primarily intended for RNA sequencing (RNA-Seq) data analysis. The typically small number of biological samples and large number of genes in RNA-Seq analysis motivate us to address the trade-offs between robustness and statistical power using NB regression models. One widely-used power-saving strategy, for example, is to assume some commonalities of NB dispersion parameters across genes via simple models relating them to mean expression rates, and many such models have been proposed. As RNA-Seq analysis is becoming ever more popular, it is appropriate to make more thorough investigations into power and robustness of the resulting methods, and into practical tools for model assessment. In this article, we propose simulation-based statistical tests and diagnostic graphics to address model adequacy. We provide simulated and real data examples to illustrate that our proposed methods are effective for detecting the misspecification of the NB mean-variance relationship as well as judging the adequacy of fit of several NB dispersion models.

  5. GMTR: two-dimensional geo-fit multitarget retrieval model for michelson interferometer for passive atmospheric sounding/environmental satellite observations.

    Science.gov (United States)

    Carlotti, Massimo; Brizzi, Gabriele; Papandrea, Enzo; Prevedelli, Marco; Ridolfi, Marco; Dinelli, Bianca Maria; Magnani, Luca

    2006-02-01

    We present a new retrieval model designed to analyze the observations of the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS), which is on board the ENVironmental SATellite (ENVISAT). The new geo-fit multitarget retrieval model (GMTR) implements the geo-fit two-dimensional inversion for the simultaneous retrieval of several targets including a set of atmospheric constituents that are not considered by the ground processor of the MIPAS experiment. We describe the innovative solutions adopted in the inversion algorithm and the main functionalities of the corresponding computer code. The performance of GMTR is compared with that of the MIPAS ground processor in terms of accuracy of the retrieval products. Furthermore, we show the capability of GMTR to resolve the horizontal structures of the atmosphere. The new retrieval model is implemented in an optimized computer code that is distributed by the European Space Agency as "open source" in a package that includes a full set of auxiliary data for the retrieval of 28 atmospheric targets.

  6. Group Targets Tracking Using Multiple Models GGIW-CPHD Based on Best-Fitting Gaussian Approximation and Strong Tracking Filter

    Directory of Open Access Journals (Sweden)

    Yun Wang

    2016-01-01

    Full Text Available The Gamma Gaussian inverse Wishart cardinalized probability hypothesis density (GGIW-CPHD) algorithm is often used to track group targets in the presence of cluttered measurements and missing detections. A multiple-models GGIW-CPHD algorithm based on the best-fitting Gaussian approximation method (BFG) and the strong tracking filter (STF) is proposed to address the problem that the tracking error of the GGIW-CPHD algorithm increases when the group targets are maneuvering. The best-fitting Gaussian approximation method is used to implement the fusion of multiple models, with the strong tracking filter correcting the predicted covariance matrix of the GGIW component. The corresponding likelihood functions are deduced to update the probabilities of the multiple tracking models. The simulation results show that the proposed tracking algorithm MM-GGIW-CPHD can effectively deal with the combination/spawning of groups, and that the tracking error of group targets in the maneuvering stage is decreased.

  7. Landscape and flow metrics affecting the distribution of a federally-threatened fish: Improving management, model fit, and model transferability

    Science.gov (United States)

    Worthington, Thomas A.; Zhang, T.; Logue, Daniel R.; Mittelstet, Aaron R.; Brewer, Shannon K.

    2016-01-01

    Truncated distributions of pelagophilic fishes have been observed across the Great Plains of North America, with water use and landscape fragmentation implicated as contributing factors. Developing conservation strategies for these species is hindered by the existence of multiple competing flow regime hypotheses related to species persistence. Our primary study objective was to compare the predicted distributions of one pelagophil, the Arkansas River Shiner Notropis girardi, constructed using different flow regime metrics. Further, we investigated different approaches for improving temporal transferability of the species distribution model (SDM). We compared four hypotheses: mean annual flow (a baseline), the 75th percentile of daily flow, the number of zero-flow days, and the number of days above 55th percentile flows, to examine the relative importance of flows during the spawning period. Building on an earlier SDM, we added covariates that quantified wells in each catchment, point source discharges, and non-native species presence to a structured variable framework. We assessed the effects on model transferability and fit by reducing multicollinearity using Spearman’s rank correlations, variance inflation factors, and principal component analysis, as well as altering the regularization coefficient (β) within MaxEnt. The 75th percentile of daily flow was the most important flow metric related to structuring the species distribution. The number of wells and point source discharges were also highly ranked. At the default level of β, model transferability was improved using all methods to reduce collinearity; however, at higher levels of β, the correlation method performed best. Using β = 5 provided the best model transferability, while retaining the majority of variables that contributed 95% to the model. This study provides a workflow for improving model transferability and also presents water-management options that may be considered to improve the

  8. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
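The continuity constraints described above are easiest to see in the truncated power basis, where each piecewise term (x - k)^d_+ is zero below its knot and joins with d-1 continuous derivatives at it. A minimal sketch (the knot location and coefficients in the test are illustrative):

```python
def spline_basis(x, knots, degree=2):
    """Truncated-power basis for a fixed-knot regression spline:
    global polynomial terms 1, x, ..., x^degree plus one truncated
    term (x - k)^degree_+ per knot.  Any linear combination of these
    terms is a piecewise polynomial whose pieces join with continuous
    derivatives up to order degree-1 at each knot."""
    row = [x ** p for p in range(degree + 1)]
    row += [max(x - k, 0.0) ** degree for k in knots]
    return row

def spline_eval(x, coefs, knots, degree=2):
    """Evaluate the spline with the given basis coefficients."""
    return sum(c * b for c, b in zip(coefs, spline_basis(x, knots, degree)))
```

Fitting such a spline in a mixed model amounts to putting some of these basis columns in the fixed-effects design matrix and others in the random-effects design matrix, which is what the reparameterization in the record above simplifies.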

  9. Fitness, Sleep-Disordered Breathing, Symptoms of Depression, and Cognition in Inactive Overweight Children: Mediation Models.

    Science.gov (United States)

    Stojek, Monika M K; Montoya, Amanda K; Drescher, Christopher F; Newberry, Andrew; Sultan, Zain; Williams, Celestine F; Pollock, Norman K; Davis, Catherine L

    We used mediation models to examine the mechanisms underlying the relationships among physical fitness, sleep-disordered breathing (SDB), symptoms of depression, and cognitive functioning. We conducted a cross-sectional secondary analysis of the cohorts involved in the 2003-2006 project PLAY (a trial of the effects of aerobic exercise on health and cognition) and the 2008-2011 SMART study (a trial of the effects of exercise on cognition). A total of 397 inactive overweight children aged 7-11 received a fitness test, standardized cognitive test (Cognitive Assessment System, yielding Planning, Attention, Simultaneous, Successive, and Full Scale scores), and depression questionnaire. Parents completed a Pediatric Sleep Questionnaire. We used bootstrapped mediation analyses to test whether SDB mediated the relationship between fitness and depression and whether SDB and depression mediated the relationship between fitness and cognition. Fitness was negatively associated with depression ( B = -0.041; 95% CI, -0.06 to -0.02) and SDB ( B = -0.005; 95% CI, -0.01 to -0.001). SDB was positively associated with depression ( B = 0.99; 95% CI, 0.32 to 1.67) after controlling for fitness. The relationship between fitness and depression was mediated by SDB (indirect effect = -0.005; 95% CI, -0.01 to -0.0004). The relationship between fitness and the attention component of cognition was independently mediated by SDB (indirect effect = 0.058; 95% CI, 0.004 to 0.13) and depression (indirect effect = -0.071; 95% CI, -0.01 to -0.17). SDB mediates the relationship between fitness and depression, and SDB and depression separately mediate the relationship between fitness and the attention component of cognition.
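The product-of-coefficients logic behind a simple mediation model (X -> M -> Y) can be sketched in a few lines, with the effect of M on Y computed after partialling out X via the Frisch-Waugh residualization step; the data in the test are invented so that the indirect effect is exact, and bear no relation to the study's estimates:

```python
def slope_intercept(x, y):
    """OLS fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    return ybar - b * xbar, b

def residuals(x, y):
    """Residuals of y after regressing out x."""
    a, b = slope_intercept(x, y)
    return [yi - a - b * xi for xi, yi in zip(x, y)]

def indirect_effect(x, m, y):
    """a*b estimate of the X -> M -> Y indirect effect:
    a = effect of X on M; b = effect of M on Y controlling for X
    (regressing the X-residuals of Y on the X-residuals of M)."""
    _, a = slope_intercept(x, m)
    _, b = slope_intercept(residuals(x, m), residuals(x, y))
    return a * b
```

The bootstrapped analyses in the study wrap exactly this kind of estimate in resampling to obtain the reported confidence intervals for the indirect effects.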

  10. Adapted strategic planning model applied to small business: a case study in the fitness area

    Directory of Open Access Journals (Sweden)

    Eduarda Tirelli Hennig

    2012-06-01

    Full Text Available Strategic planning is an important management tool in the corporate scenario and should not be restricted to big companies. However, this kind of planning process in small businesses may need special adaptations due to their own characteristics. This paper aims to identify and adapt existing models of strategic planning to the scenario of a small business in the fitness area. Initially, a comparative study among models of different authors is carried out to identify their phases and activities. Then, it is defined which of these phases and activities should be present in a model to be used in a small business. That model was applied to a Pilates studio; it involves the establishment of an organizational identity, an environmental analysis, as well as the definition of strategic goals, strategies and actions to reach them. Finally, benefits to the organization could be identified, as well as hurdles in the implementation of the tool.

  11. On the fit of models to covariances and methodology to the Bulletin.

    Science.gov (United States)

    Bentler, P M

    1992-11-01

    It is noted that 7 of the 10 top-cited articles in the Psychological Bulletin deal with methodological topics. One of these is the Bentler-Bonett (1980) article on the assessment of fit in covariance structure models. Some context is provided on the popularity of this article. In addition, a citation study of methodology articles appearing in the Bulletin since 1978 was carried out. It verified that publications in design, evaluation, measurement, and statistics continue to be important to psychological research. Some thoughts are offered on the role of the journal in making developments in these areas more accessible to psychologists.

  12. The More, the Better? Curvilinear Effects of Job Autonomy on Well-Being From Vitamin Model and PE-Fit Theory Perspectives.

    Science.gov (United States)

    Stiglbauer, Barbara; Kovacs, Carrie

    2017-12-28

    In organizational psychology research, autonomy is generally seen as a job resource with a monotone positive relationship with desired occupational outcomes such as well-being. However, both Warr's vitamin model and person-environment (PE) fit theory suggest that negative outcomes may result from excesses of some job resources, including autonomy. Thus, the current studies used survey methodology to explore cross-sectional relationships between environmental autonomy, person-environment autonomy (mis)fit, and well-being. We found that autonomy and autonomy (mis)fit explained between 6% and 22% of variance in well-being, depending on type of autonomy (scheduling, method, or decision-making) and type of (mis)fit operationalization (atomistic operationalization through the separate assessment of actual and ideal autonomy levels vs. molecular operationalization through the direct assessment of perceived autonomy (mis)fit). Autonomy (mis)fit (PE-fit perspective) explained more unique variance in well-being than environmental autonomy itself (vitamin model perspective). Detrimental effects of autonomy excess on well-being were most evident for method autonomy and least consistent for decision-making autonomy. We argue that too-much-of-a-good-thing effects of job autonomy on well-being exist, but suggest that these may be dependent upon sample characteristics (range of autonomy levels), type of operationalization (molecular vs. atomistic fit), autonomy facet (method, scheduling, or decision-making), as well as individual and organizational moderators. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. The two-state dimer receptor model: a general model for receptor dimers.

    Science.gov (United States)

    Franco, Rafael; Casadó, Vicent; Mallol, Josefa; Ferrada, Carla; Ferré, Sergi; Fuxe, Kjell; Cortés, Antoni; Ciruela, Francisco; Lluis, Carmen; Canela, Enric I

    2006-06-01

    Nonlinear Scatchard plots are often found for agonist binding to G-protein-coupled receptors. Because there is clear evidence of receptor dimerization, these nonlinear Scatchard plots can reflect cooperativity on agonist binding to the two binding sites in the dimer. According to this, the "two-state dimer receptor model" has been recently derived. In this article, the performance of the model has been analyzed in fitting data of agonist binding to A(1) adenosine receptors, which are an example of receptor displaying concave downward Scatchard plots. Analysis of agonist/antagonist competition data for dopamine D(1) receptors using the two-state dimer receptor model has also been performed. Although fitting to the two-state dimer receptor model was similar to the fitting to the "two-independent-site receptor model", the former is simpler, and a discrimination test selects the two-state dimer receptor model as the best. This model was also very robust in fitting data of estrogen binding to the estrogen receptor, for which Scatchard plots are concave upward. On the one hand, the model would predict the already demonstrated existence of estrogen receptor dimers. On the other hand, the model would predict that concave upward Scatchard plots reflect positive cooperativity, which can be neither predicted nor explained by assuming the existence of two different affinity states. In summary, the two-state dimer receptor model is good for fitting data of binding to dimeric receptors displaying either linear, concave upward, or concave downward Scatchard plots.

  14. Estimation of error components in a multi-error linear regression model, with an application to track fitting

    International Nuclear Information System (INIS)

    Fruehwirth, R.

    1993-01-01

    We present an estimation procedure of the error components in a linear regression model with multiple independent stochastic error contributions. After solving the general problem we apply the results to the estimation of the actual trajectory in track fitting with multiple scattering. (orig.)

  15. Non-linear least squares curve fitting of a simple theoretical model to radioimmunoassay dose-response data using a mini-computer

    International Nuclear Information System (INIS)

    Wilkins, T.A.; Chadney, D.C.; Bryant, J.; Palmstroem, S.H.; Winder, R.L.

    1977-01-01

    Using the simple univalent antigen univalent-antibody equilibrium model the dose-response curve of a radioimmunoassay (RIA) may be expressed as a function of Y, X and the four physical parameters of the idealised system. A compact but powerful mini-computer program has been written in BASIC for rapid iterative non-linear least squares curve fitting and dose interpolation with this function. In its simplest form the program can be operated in an 8K byte mini-computer. The program has been extensively tested with data from 10 different assay systems (RIA and CPBA) for measurement of drugs and hormones ranging in molecular size from thyroxine to insulin. For each assay system the results have been analysed in terms of (a) curve fitting biases and (b) direct comparison with manual fitting. In all cases the quality of fitting was remarkably good in spite of the fact that the chemistry of each system departed significantly from one or more of the assumptions implicit in the model used. A mathematical analysis of departures from the model's principal assumption has provided an explanation for this somewhat unexpected observation. The essential features of this analysis are presented in this paper together with the statistical analyses of the performance of the program. From these and the results obtained to date in the routine quality control of these 10 assays, it is concluded that the method of curve fitting and dose interpolation presented in this paper is likely to be of general applicability. (orig.) [de
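The paper's dose-response function comes from the mass-action equilibrium model itself; as an illustration of the general shape of such calibration work, the four-parameter logistic commonly used for immunoassay curves, together with its closed-form dose interpolation, is shown below (the parameter values in the test are illustrative, not from any assay in the paper):

```python
def four_pl(x, a, b, c, d):
    """Four-parameter logistic calibration curve: the response at dose x,
    where a = response at zero dose, d = response at infinite dose,
    c = mid-range dose (ED50), b = slope factor.  For a competitive RIA
    the curve decreases, so a > d."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def dose_interpolate(y, a, b, c, d):
    """Invert the curve: recover the dose that produces response y."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)
```

Iterative non-linear least squares, as in the paper, adjusts (a, b, c, d) to the standards, after which unknowns are read off the fitted curve with the inverse function.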

  16. Applicability of Zero-Inflated Models to Fit the Torrential Rainfall Count Data with Extra Zeros in South Korea

    Directory of Open Access Journals (Sweden)

    Cheol-Eung Lee

    2017-02-01

    Full Text Available Several natural disasters occur because of torrential rainfalls. The change in global climate most likely increases the occurrences of such downpours. Hence, it is necessary to investigate the characteristics of the torrential rainfall events in order to introduce effective measures for mitigating disasters such as urban floods and landslides. However, one of the major problems is evaluating the number of torrential rainfall events from a statistical viewpoint. If the number of torrential rainfall occurrences during a month is considered as count data, their frequency distribution could be identified using a probability distribution. Generally, the number of torrential rainfall occurrences has been analyzed using the Poisson distribution (POI or the Generalized Poisson Distribution (GPD. However, it was reported that POI and GPD often overestimated or underestimated the observed count data when additional or fewer zeros were included. Hence, in this study, a zero-inflated model concept was applied to solve this problem existing in the conventional models. Zero-Inflated Poisson (ZIP model, Zero-Inflated Generalized Poisson (ZIGP model, and the Bayesian ZIGP model have often been applied to fit the count data having additional or fewer zeros. However, the applications of these models in water resource management have been very limited despite their efficiency and accuracy. The five models, namely, POI, GPD, ZIP, ZIGP, and Bayesian ZIGP, were applied to the torrential rainfall data having additional zeros obtained from two rain gauges in South Korea, and their applicability was examined in this study. In particular, the informative prior distributions evaluated via the empirical Bayes method using ten rain gauges were developed in the Bayesian ZIGP model. Finally, it was suggested to avoid using the POI and GPD models to fit the frequency of torrential rainfall data. In addition, it was concluded that the Bayesian ZIGP model used in this study
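The zero-inflation idea is a two-part mixture: with probability pi a month is a structural zero, otherwise the count is Poisson(lambda). A minimal sketch of the ZIP probability mass function and the log-likelihood a fit would maximize (the lambda and pi values in the test are illustrative):

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: with probability pi the count is a
    structural zero; otherwise it is drawn from Poisson(lam)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1.0 - pi) * poisson
    return (1.0 - pi) * poisson

def zip_loglik(counts, lam, pi):
    """Log-likelihood of monthly torrential-rainfall counts under ZIP;
    maximizing this over (lam, pi) gives the ZIP fit."""
    return sum(math.log(zip_pmf(k, lam, pi)) for k in counts)
```

The ZIGP model in the study replaces the Poisson component with a generalized Poisson to handle over- or under-dispersion, and the Bayesian ZIGP puts informative priors on its parameters.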

  17. An evolutionary algorithm for model selection

    Energy Technology Data Exchange (ETDEWEB)

    Bicker, Karl [CERN, Geneva (Switzerland); Chung, Suh-Urk; Friedrich, Jan; Grube, Boris; Haas, Florian; Ketzer, Bernhard; Neubert, Sebastian; Paul, Stephan; Ryabchikov, Dimitry [Technische Univ. Muenchen (Germany)

    2013-07-01

    When performing partial-wave analyses of multi-body final states, the choice of the fit model, i.e. the set of waves to be used in the fit, can significantly alter the results of the partial wave fit. Traditionally, the models were chosen based on physical arguments and by observing the changes in log-likelihood of the fits. To reduce possible bias in the model selection process, an evolutionary algorithm was developed based on a Bayesian goodness-of-fit criterion which takes into account the model complexity. Starting from systematically constructed pools of waves which contain significantly more waves than the typical fit model, the algorithm yields a model with an optimal log-likelihood and with a number of partial waves which is appropriate for the number of events in the data. Partial waves with small contributions to the total intensity are penalized and likely to be dropped during the selection process, as are models where excessive correlations between single waves occur. Due to the automated nature of the model selection, a much larger part of the model space can be explored than would be possible in a manual selection. In addition, the method makes it possible to assess the dependence of the fit result on the fit model, which is an important contribution to the systematic uncertainty.
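A toy version of the idea: wave sets encoded as bit vectors, an elitist population, single-bit mutations, and a fitness that rewards likelihood contribution while penalizing each extra wave. The per-wave contributions and the unit penalty below are invented for illustration; the actual criterion in the analysis is a Bayesian goodness-of-fit measure evaluated on real partial-wave fits.

```python
import random

def evolve_model(n_waves, fitness, pop_size=20, generations=100, seed=1):
    """Toy evolutionary model selection: individuals are wave sets
    encoded as bit lists; elitist selection plus one-bit mutation
    maximizes a penalized goodness-of-fit score."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_waves)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # survivors are never lost
        children = []
        for parent in elite:
            child = parent[:]
            i = rng.randrange(n_waves)
            child[i] = 1 - child[i]           # mutate one wave in or out
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Hypothetical per-wave contributions to the fit quality, plus a
# complexity penalty of 1.0 per included wave (BIC-like).
contrib = [5.0, 4.0, 3.0, 0.2, 0.1, 0.1, 0.05, 0.01]

def fitness(bits):
    return sum(c for c, b in zip(contrib, bits) if b) - 1.0 * sum(bits)

best = evolve_model(len(contrib), fitness)
```

With these made-up numbers the optimum keeps only the three waves whose contribution exceeds the penalty; small-contribution waves are dropped, mirroring the behavior described in the record.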

  18. Modelling the association between weight status and social deprivation in English school children: Can physical activity and fitness affect the relationship?

    Science.gov (United States)

    Nevill, Alan M; Duncan, Michael J; Lahart, Ian; Sandercock, Gavin

    2016-11-01

    The association between being overweight/obese and deprivation is a serious concern in English schoolchildren. To model this association incorporating known confounders and to discover whether physical fitness and physical activity may reduce or eliminate this association. Cross-sectional data were collected between 2007-2009, from 8053 10-16 year old children from the East-of-England Healthy Heart Study. Weight status was assessed using waist circumference (cm) and body mass (kg). Deprivation was measured using the Index of Multiple Deprivation (IMD). Confounding variables used in the proportional, allometric models were hip circumference, stature, age and sex. Children's fitness levels were assessed using predicted VO2max (20-metre shuttle-run test) and physical activity was estimated using the Physical Activity Questionnaire for Adolescents or Children. A strong association was found between both waist circumference and body mass and the IMD. These associations persisted after controlling for all confounding variables. When the children's physical activity and fitness levels were added to the models, the association was either greatly reduced or, in the case of body mass, absent. To reduce deprivation inequalities in children's weight-status, health practitioners should focus on increasing physical fitness via physical activity in areas of greater deprivation.

  19. Physician behavioral adaptability: A model to outstrip a "one size fits all" approach.

    Science.gov (United States)

    Carrard, Valérie; Schmid Mast, Marianne

    2015-10-01

    Based on a literature review, we propose a model of physician behavioral adaptability (PBA) with the goal of inspiring new research. PBA means that the physician adapts his or her behavior according to patients' different preferences. The PBA model shows how physicians infer patients' preferences and adapt their interaction behavior from one patient to the other. We claim that patients will benefit from better outcomes if their physicians show behavioral adaptability rather than a "one size fits all" approach. This literature review is based on a literature search of the PsycINFO(®) and MEDLINE(®) databases. The literature review and first results stemming from the authors' research support the validity and viability of parts of the PBA model. There is evidence suggesting that physicians are able to show behavioral flexibility when interacting with their different patients, that a match between patients' preferences and physician behavior is related to better consultation outcomes, and that physician behavioral adaptability is related to better consultation outcomes. Training of physicians' behavioral flexibility and their ability to infer patients' preferences can facilitate physician behavioral adaptability and positive patient outcomes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. Fit model between participation statement of exhibitors and visitors to improve the exhibition performance

    Directory of Open Access Journals (Sweden)

    Cristina García Magro

    2015-06-01

    Full Text Available Purpose: The aim of the paper is to offer a model of analysis that makes it possible to measure the impact on fair performance of whether or not exhibitors know the visitors' motives for participation. Design/methodology: A review of the literature is carried out concerning two of the principal agents involved, exhibitors and visitors, focusing on the line of research that addresses the motives for participating or not in a trade show. Based on the information provided by each perspective of study, a comparative analysis is carried out in order to determine the degree of mutual understanding between the two. Findings: Trade shows can be studied from an integrated strategic marketing approach. The fit model between the exhibitors' and the visitors' reasons for participation offers information on the lack of understanding between exhibitors and visitors, which leads to dissatisfaction with participation, a fact that is reflected in the fair's success. The model identified shows that a strategic plan must be designed in which the visitors' reason for participation is incorporated as a moderating variable of the exhibitors' reason for participation. The article concludes with a series of proposals for the improvement of fair results. Social implications: The fit model that improves the performance of trade shows implicitly leads to the successful achievement of targets for multiple stakeholders, beyond the consideration of visitors and exhibitors.
Originality/value: The integrated stakeholder perspective allows the study of the relationships between the principal groups of interest, so that knowledge of the state of the question on trade shows facilitates the task of future academic work and allows the interested groups to obtain a better return on their participation in fairs, as visitor or as

  1. Time series modeling and forecasting using memetic algorithms for regime-switching models.

    Science.gov (United States)

    Bergmeir, Christoph; Triguero, Isaac; Molina, Daniel; Aznarte, José Luis; Benitez, José Manuel

    2012-11-01

    In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR is a combination of initial parameter estimation by a grid search procedure with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive to models commonly used in the field.

  2. Fitting Data to Model: Structural Equation Modeling Diagnosis Using Two Scatter Plots

    Science.gov (United States)

    Yuan, Ke-Hai; Hayashi, Kentaro

    2010-01-01

    This article introduces two simple scatter plots for model diagnosis in structural equation modeling. One plot contrasts a residual-based M-distance of the structural model with the M-distance for the factor score. It contains information on outliers, good leverage observations, bad leverage observations, and normal cases. The other plot contrasts…

  3. Fitting Simpson's neutrino into the standard model

    International Nuclear Information System (INIS)

    Valle, J.W.F.

    1985-01-01

I show how to accommodate the 17 keV state recently reported by Simpson as one of the neutrinos of the standard model. Experimental constraints can only be satisfied if the muon and tau neutrinos combine, to a very good approximation, to form a Dirac neutrino of 17 keV, leaving a light ν_e. Neutrino oscillations will provide the most stringent test of the model. The cosmological bounds are also satisfied in a natural way in models with Goldstone bosons. Explicit examples are given in the framework of majoron-type models. Constraints on the lepton symmetry breaking scale which follow from astrophysics, cosmology and laboratory experiments are discussed. (orig.)

  4. Crushed Salt Constitutive Model

    International Nuclear Information System (INIS)

    Callahan, G.D.

    1999-01-01

The constitutive model used to describe the deformation of crushed salt is presented in this report. Two mechanisms -- dislocation creep and grain boundary diffusional pressure solution -- are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. Upon complete consolidation, the crushed-salt model reproduces the Multimechanism Deformation (M-D) model typically used for the Waste Isolation Pilot Plant (WIPP) host geological formation salt. New shear consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on WIPP and southeastern New Mexico salt. Nonlinear least-squares model fitting to the database produced two sets of material parameter values for the model -- one for the shear consolidation tests and one for a combination of the shear and hydrostatic consolidation tests. Using the parameter values determined from the fitted database, the constitutive model is validated against constant strain-rate tests. Shaft seal problems are analyzed to demonstrate model-predicted consolidation of the shaft seal crushed-salt component. Based on the fitting statistics, the ability of the model to predict the test data, and the ability of the model to predict load paths and test data outside of the fitted database, the model appears to capture the creep consolidation behavior of crushed salt reasonably well.

  5. Including irrigation in niche modelling of the invasive wasp Vespula germanica (Fabricius) improves model fit to predict potential for further spread.

    Science.gov (United States)

    de Villiers, Marelize; Kriticos, Darren J; Veldtman, Ruan

    2017-01-01

    The European wasp, Vespula germanica (Fabricius) (Hymenoptera: Vespidae), is of Palaearctic origin, being native to Europe, northern Africa and Asia, and introduced into North America, Chile, Argentina, Iceland, Ascension Island, South Africa, Australia and New Zealand. Due to its polyphagous nature and scavenging behaviour, V. germanica threatens agriculture and silviculture, and negatively affects biodiversity, while its aggressive nature and venomous sting pose a health risk to humans. In areas with warmer winters and longer summers, queens and workers can survive the winter months, leading to the build-up of large nests during the following season; thereby increasing the risk posed by this species. To prevent or prepare for such unwanted impacts it is important to know where the wasp may be able to establish, either through natural spread or through introduction as a result of human transport. Distribution data from Argentina and Australia, and seasonal phenology data from Argentina were used to determine the potential distribution of V. germanica using CLIMEX modelling. In contrast to previous models, the influence of irrigation on its distribution was also investigated. Under a natural rainfall scenario, the model showed similarities to previous models. When irrigation is applied, dry stress is alleviated, leading to larger areas modelled climatically suitable compared with previous models, which provided a better fit with the actual distribution of the species. The main areas at risk of invasion by V. germanica include western USA, Mexico, small areas in Central America and in the north-western region of South America, eastern Brazil, western Russia, north-western China, Japan, the Mediterranean coastal regions of North Africa, and parts of southern and eastern Africa.

  6. Including irrigation in niche modelling of the invasive wasp Vespula germanica (Fabricius improves model fit to predict potential for further spread.

    Directory of Open Access Journals (Sweden)

    Marelize de Villiers

Full Text Available The European wasp, Vespula germanica (Fabricius) (Hymenoptera: Vespidae), is of Palaearctic origin, being native to Europe, northern Africa and Asia, and introduced into North America, Chile, Argentina, Iceland, Ascension Island, South Africa, Australia and New Zealand. Due to its polyphagous nature and scavenging behaviour, V. germanica threatens agriculture and silviculture, and negatively affects biodiversity, while its aggressive nature and venomous sting pose a health risk to humans. In areas with warmer winters and longer summers, queens and workers can survive the winter months, leading to the build-up of large nests during the following season; thereby increasing the risk posed by this species. To prevent or prepare for such unwanted impacts it is important to know where the wasp may be able to establish, either through natural spread or through introduction as a result of human transport. Distribution data from Argentina and Australia, and seasonal phenology data from Argentina were used to determine the potential distribution of V. germanica using CLIMEX modelling. In contrast to previous models, the influence of irrigation on its distribution was also investigated. Under a natural rainfall scenario, the model showed similarities to previous models. When irrigation is applied, dry stress is alleviated, leading to larger areas modelled climatically suitable compared with previous models, which provided a better fit with the actual distribution of the species. The main areas at risk of invasion by V. germanica include western USA, Mexico, small areas in Central America and in the north-western region of South America, eastern Brazil, western Russia, north-western China, Japan, the Mediterranean coastal regions of North Africa, and parts of southern and eastern Africa.

  7. Modelling the Factors that Affect Individuals' Utilisation of Online Learning Systems: An Empirical Study Combining the Task Technology Fit Model with the Theory of Planned Behaviour

    Science.gov (United States)

    Yu, Tai-Kuei; Yu, Tai-Yi

    2010-01-01

    Understanding learners' behaviour, perceptions and influence in terms of learner performance is crucial to predict the use of electronic learning systems. By integrating the task-technology fit (TTF) model and the theory of planned behaviour (TPB), this paper investigates the online learning utilisation of Taiwanese students. This paper provides a…

  8. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh

    2014-04-03

Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate in the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
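
The plug-in step the article criticizes is easy to demonstrate in a toy setting. The sketch below implements the basic single-level Monte Carlo GOF test for a simple Poisson null (not a spatial point process); this is the plug-in variant whose empirical level the article shows to be biased and which the proposed nested simulations correct. All names and numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def dispersion_stat(x):
    # Variance-to-mean ratio: close to 1 under a Poisson model.
    return x.var(ddof=1) / x.mean()

def mc_gof_pvalue(data, n_sim=999):
    # Plug-in Monte Carlo GOF test: the estimated mean is plugged into
    # the null model to simulate reference datasets.
    lam_hat = data.mean()
    t_obs = dispersion_stat(data)
    t_sim = np.array([
        dispersion_stat(rng.poisson(lam_hat, size=data.size))
        for _ in range(n_sim)
    ])
    p_upper = (1 + np.sum(t_sim >= t_obs)) / (n_sim + 1)  # rank-based p-value
    return min(2 * min(p_upper, 1 - p_upper), 1.0)        # two-sided

overdispersed = rng.negative_binomial(5, 0.2, size=200)   # variance >> mean
p_val = mc_gof_pvalue(overdispersed)
print(p_val)
```

Nesting a second layer of simulations inside each reference dataset — refitting the parameter each time — is what removes the level bias in the article's procedure.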

  9. A Rigorous Test of the Fit of the Circumplex Model to Big Five Personality Data: Theoretical and Methodological Issues and Two Large Sample Empirical Tests.

    Science.gov (United States)

    DeGeest, David Scott; Schmidt, Frank

    2015-01-01

    Our objective was to apply the rigorous test developed by Browne (1992) to determine whether the circumplex model fits Big Five personality data. This test has yet to be applied to personality data. Another objective was to determine whether blended items explained correlations among the Big Five traits. We used two working adult samples, the Eugene-Springfield Community Sample and the Professional Worker Career Experience Survey. Fit to the circumplex was tested via Browne's (1992) procedure. Circumplexes were graphed to identify items with loadings on multiple traits (blended items), and to determine whether removing these items changed five-factor model (FFM) trait intercorrelations. In both samples, the circumplex structure fit the FFM traits well. Each sample had items with dual-factor loadings (8 items in the first sample, 21 in the second). Removing blended items had little effect on construct-level intercorrelations among FFM traits. We conclude that rigorous tests show that the fit of personality data to the circumplex model is good. This finding means the circumplex model is competitive with the factor model in understanding the organization of personality traits. The circumplex structure also provides a theoretically and empirically sound rationale for evaluating intercorrelations among FFM traits. Even after eliminating blended items, FFM personality traits remained correlated.

  10. Sustained fitness gains and variability in fitness trajectories in the long-term evolution experiment with Escherichia coli

    Science.gov (United States)

    Lenski, Richard E.; Wiser, Michael J.; Ribeck, Noah; Blount, Zachary D.; Nahum, Joshua R.; Morris, J. Jeffrey; Zaman, Luis; Turner, Caroline B.; Wade, Brian D.; Maddamsetti, Rohan; Burmeister, Alita R.; Baird, Elizabeth J.; Bundy, Jay; Grant, Nkrumah A.; Card, Kyle J.; Rowles, Maia; Weatherspoon, Kiyana; Papoulis, Spiridon E.; Sullivan, Rachel; Clark, Colleen; Mulka, Joseph S.; Hajela, Neerja

    2015-01-01

    Many populations live in environments subject to frequent biotic and abiotic changes. Nonetheless, it is interesting to ask whether an evolving population's mean fitness can increase indefinitely, and potentially without any limit, even in a constant environment. A recent study showed that fitness trajectories of Escherichia coli populations over 50 000 generations were better described by a power-law model than by a hyperbolic model. According to the power-law model, the rate of fitness gain declines over time but fitness has no upper limit, whereas the hyperbolic model implies a hard limit. Here, we examine whether the previously estimated power-law model predicts the fitness trajectory for an additional 10 000 generations. To that end, we conducted more than 1100 new competitive fitness assays. Consistent with the previous study, the power-law model fits the new data better than the hyperbolic model. We also analysed the variability in fitness among populations, finding subtle, but significant, heterogeneity in mean fitness. Some, but not all, of this variation reflects differences in mutation rate that evolved over time. Taken together, our results imply that both adaptation and divergence can continue indefinitely—or at least for a long time—even in a constant environment. PMID:26674951
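
The two competing trajectory shapes can be compared on synthetic data with any nonlinear least-squares fitter. The functional forms below follow the abstract's description (an unbounded power law versus a hyperbola with a hard limit); the parameter values and noise level are invented for illustration and are not the study's estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b):
    # Fitness keeps rising without bound, ever more slowly.
    return (1 + b * t) ** a

def hyperbolic(t, a, b):
    # Fitness approaches a hard upper limit of 1 + a.
    return 1 + a * t / (t + b)

rng = np.random.default_rng(2)
t = np.linspace(0, 60000, 121)                  # "generations"
w_obs = power_law(t, 0.1, 0.005) + rng.normal(0, 0.02, t.size)  # synthetic

pa, _ = curve_fit(power_law, t, w_obs, p0=[0.1, 0.001])
ph, _ = curve_fit(hyperbolic, t, w_obs, p0=[1.0, 5000.0])
sse_p = np.sum((w_obs - power_law(t, *pa)) ** 2)
sse_h = np.sum((w_obs - hyperbolic(t, *ph)) ** 2)
print(sse_p, sse_h)
```

Because this synthetic trajectory is generated from the power law, its residual sum of squares comes out smaller — the same direction of comparison the study reports for the real fitness assays.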

  11. Comparison of hypertabastic survival model with other unimodal hazard rate functions using a goodness-of-fit test.

    Science.gov (United States)

    Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S

    2017-05-30

We studied the problem of testing a hypothesized distribution in survival regression models when the data is right censored and survival times are influenced by covariates. A modified chi-squared type test, known as Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit for hypertabastic survival model and four other unimodal hazard rate functions. The results of simulation study showed that the hypertabastic distribution can be used as an alternative to log-logistic and log-normal distribution. In statistical modeling, because of its flexible shape of hazard functions, this distribution can also be used as a competitor of Birnbaum-Saunders and inverse Gaussian distributions. The results for the real data application are shown. Copyright © 2017 John Wiley & Sons, Ltd.

  12. Assessing the fit of the Dysphoric Arousal model across two nationally representative epidemiological surveys: The Australian NSMHWB and the United States NESARC

    DEFF Research Database (Denmark)

    Armour, C.; Carragher, N.; Elhai, J. D.

    2013-01-01

    samples. Results revealed that the Dysphoric Arousal model provided superior fit to the data compared to the alternative models. In conclusion, these findings suggest that items D1-D3 (sleeping difficulties; irritability; concentration difficulties) represent a separate, fifth factor within PTSD's latent...

  13. Expanding the Technology Acceptance Model with the Inclusion of Trust, Social Influence, and Health Valuation to Determine the Predictors of German Users’ Willingness to Continue using a Fitness App : A Structural Equation Modeling Approach

    NARCIS (Netherlands)

    Beldad, Ardion Daroca; Hegner, Sabrina

    2017-01-01

    According to one market research, fitness or running apps are hugely popular in Germany. Such a trend prompts the question concerning the factors influencing German users’ intention to continue using a specific fitness app. To address the research question, the expanded Technology Acceptance Model

  14. VizieR Online Data Catalog: GRB prompt emission fitted with the DREAM model (Ahlgren+, 2015)

    Science.gov (United States)

Ahlgren, B.; Larsson, J.; Nymark, T.; Ryde, F.; Pe'er, A.

    2018-01-01

    We illustrate the application of the DREAM model by fitting it to two different, bright Fermi GRBs; GRB 090618 and GRB 100724B. While GRB 090618 is well fitted by a Band function, GRB 100724B was the first example of a burst with a significant additional BB component (Guiriec et al. 2011ApJ...727L..33G). GRB 090618 is analysed using Gamma-ray Burst Monitor (GBM) data (Meegan et al. 2009ApJ...702..791M) from the NaI and BGO detectors. For GRB 100724B, we used GBM data from the NaI and BGO detectors as well as Large Area Telescope Low Energy (LAT-LLE) data. For both bursts we selected NaI detectors seeing the GRB at an off-axis angle lower than 60° and the BGO detector as being the best aligned of the two BGO detectors. The spectra were fitted in the energy ranges 8-1000 keV (NaI), 200-40000 keV (BGO) and 30-1000 MeV (LAT-LLE). (2 data files).

  15. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    Science.gov (United States)

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.

  16. Exploratory Analyses To Improve Model Fit: Errors Due to Misspecification and a Strategy To Reduce Their Occurrence.

    Science.gov (United States)

    Green, Samuel B.; Thompson, Marilyn S.; Poirier, Jennifer

    1999-01-01

    The use of Lagrange multiplier (LM) tests in specification searches and the efforts that involve the addition of extraneous parameters to models are discussed. Presented are a rationale and strategy for conducting specification searches in two stages that involve adding parameters to LM tests to maximize fit and then deleting parameters not needed…

  17. Model for fitting longitudinal traits subject to threshold response applied to genetic evaluation for heat tolerance

    Directory of Open Access Journals (Sweden)

    Misztal Ignacy

    2009-01-01

Full Text Available Abstract A semi-parametric non-linear longitudinal hierarchical model is presented. The model assumes that individual variation exists both in the degree of the linear change of performance (slope) beyond a particular threshold of the independent variable scale and in the magnitude of the threshold itself; these individual variations are attributed to genetic and environmental components. During implementation via a Bayesian MCMC approach, threshold levels were sampled using a Metropolis step because their fully conditional posterior distributions do not have a closed form. The model was tested by simulation following designs similar to previous studies on genetics of heat stress. Posterior means of parameters of interest, under all simulation scenarios, were close to their true values with the latter always being included in the uncertainty regions, indicating an absence of bias. The proposed models provide flexible tools for studying genotype by environmental interaction as well as for fitting other longitudinal traits subject to abrupt changes in the performance at particular points on the independent variable scale.

  18. The universal Higgs fit

    DEFF Research Database (Denmark)

    Giardino, P. P.; Kannike, K.; Masina, I.

    2014-01-01

We perform a state-of-the-art global fit to all Higgs data. We synthesise them into a 'universal' form, which allows one to easily test any desired model. We apply the proposed methodology to extract from data the Higgs branching ratios, production cross sections and couplings, and to analyse composite Higgs models, models with extra Higgs doublets, supersymmetry, extra particles in the loops, anomalous top couplings, and invisible Higgs decays into Dark Matter. Best fit regions lie around the Standard Model predictions and are well approximated by our 'universal' fit. Latest data exclude the dilaton as an alternative to the Higgs, and disfavour fits with negative Yukawa couplings. We derive for the first time the SM Higgs boson mass from the measured rates, rather than from the peak positions, obtaining M_h = 124.4 +/- 1.6 GeV.

  19. Fits of the baryon magnetic moments to the quark model and spectrum-generating SU(3)

    International Nuclear Information System (INIS)

    Bohm, A.; Teese, R.B.

    1982-01-01

We show that for theoretical as well as phenomenological reasons the baryon magnetic moments that fulfill simple group transformation properties should be taken in intrinsic rather than nuclear magnetons. A fit of the recent experimental data to the reduced matrix elements of the usual octet electromagnetic current is still not good, and in order to obtain acceptable agreement, one has to add correction terms to the octet current. We have tested two kinds of corrections: U-spin-scalar terms, which are singled out by the model-independent algebraic properties of the hadron electromagnetic current, and octet U-spin vectors, which could come from quark-mass breaking in a nonrelativistic quark model. We find that the U-spin-scalar terms are more important than the U-spin vectors for various levels of demanded theoretical accuracy.

  20. GEMSFITS: Code package for optimization of geochemical model parameters and inverse modeling

    International Nuclear Information System (INIS)

    Miron, George D.; Kulik, Dmitrii A.; Dmytrieva, Svitlana V.; Wagner, Thomas

    2015-01-01

Highlights: • Tool for generating consistent parameters against various types of experiments. • Handles a large number of experimental data and parameters (is parallelized). • Has a graphical interface and can perform statistical analysis on the parameters. • Tested on fitting the standard state Gibbs free energies of aqueous Al species. • Example on fitting interaction parameters of mixing models and thermobarometry. - Abstract: GEMSFITS is a new code package for fitting internally consistent input parameters of GEM (Gibbs Energy Minimization) geochemical–thermodynamic models against various types of experimental or geochemical data, and for performing inverse modeling tasks. It consists of the gemsfit2 (parameter optimizer) and gfshell2 (graphical user interface) programs, both accessing a NoSQL database, all developed with flexibility, generality, efficiency, and user friendliness in mind. The parameter optimizer gemsfit2 includes the GEMS3K chemical speciation solver (http://gems.web.psi.ch/GEMS3K), which features a comprehensive suite of non-ideal activity and equation-of-state models of solution phases (aqueous electrolyte, gas and fluid mixtures, solid solutions, (ad)sorption). The gemsfit2 code uses the robust open-source NLopt library for parameter fitting, which provides a selection between several nonlinear optimization algorithms (global, local, gradient-based), and supports large-scale parallelization. The gemsfit2 code can also perform comprehensive statistical analysis of the fitted parameters (basic statistics, sensitivity, Monte Carlo confidence intervals), thus supporting the user with powerful tools for evaluating the quality of the fits and the physical significance of the model parameters. The gfshell2 code provides menu-driven setup of optimization options (data selection, properties to fit and their constraints, measured properties to compare with computed counterparts, and statistics). The practical utility, efficiency, and

  1. Pengukuran tingkat kesuksesan penerapan website Penerimaan Mahasiswa Baru (PMB online di perguruan tinggi swasta dengan pendekatan Human Organization Technology (HOT Fit model

    Directory of Open Access Journals (Sweden)

    Ahmad Heru Mujianto

    2017-01-01

Abstract Private Higher Education (PHE) institutions in Jombang apply an online selection process for the admission of new students, so applicants simply register through the online admission website of their respective private university, without needing to visit the university. In practice, however, some prospective students still apply directly at the PHE admissions office, which makes it necessary to measure the success rate of the online admission website application in PHE. Moreover, the online admission websites of PHE in Jombang have so far never been evaluated to determine their success rate. The HOT (Human Organization Technology) Fit model is a success model that can be used for evaluating information systems. There are seven variables used by HOT Fit: system quality, information quality, service quality, system use, user satisfaction, net benefits, and organizational structure. The results show that three assessment indicators score below 85% satisfaction: response time at 76.1%; availability of help facilities at 71.6%; and display satisfaction at 64.2%. These three indicators therefore need to be improved to obtain better results and to optimize the implementation of the online admission websites of PHE in Jombang. Keywords: Admission of new students; HOT Fit; Human Organization Technology; Private Higher Education; PHE.

  2. The role of social capital and community belongingness for exercise adherence: An exploratory study of the CrossFit gym model.

    Science.gov (United States)

    Whiteman-Sandland, Jessica; Hawkins, Jemma; Clayton, Debbie

    2016-08-01

    This is the first study to measure the 'sense of community' reportedly offered by the CrossFit gym model. A cross-sectional study adapted Social Capital and General Belongingness scales to compare perceptions of a CrossFit gym and a traditional gym. CrossFit gym members reported significantly higher levels of social capital (both bridging and bonding) and community belongingness compared with traditional gym members. However, regression analysis showed neither social capital, community belongingness, nor gym type was an independent predictor of gym attendance. Exercise and health professionals may benefit from evaluating further the 'sense of community' offered by gym-based exercise programmes.

  3. A three-parameter langmuir-type model for fitting standard curves of sandwich enzyme immunoassays with special attention to the α-fetoprotein assay

    NARCIS (Netherlands)

    Kortlandt, W.; Endeman, H.J.; Hoeke, J.O.O.

    In a simplified approach to the reaction kinetics of enzyme-linked immunoassays, a Langmuir-type equation y = [ax/(b + x)] + c was derived. This model proved to be superior to logit-log and semilog models in the curve-fitting of standard curves. An assay for α-fetoprotein developed in our laboratory
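
A three-parameter curve of the form y = ax/(b + x) + c can be fitted to a standard curve with any nonlinear least-squares routine and then inverted to read unknowns off the calibration. The concentrations, responses and parameter values below are hypothetical, purely to illustrate the shape of such a fit — they are not assay data from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(x, a, b, c):
    # y = a*x/(b + x) + c: the signal saturates at a + c for large x.
    return a * x / (b + x) + c

def concentration(y, a, b, c):
    # Inverse of the fitted curve, for reading unknowns off the standard curve.
    return b * (y - c) / (a - (y - c))

# Hypothetical calibrator concentrations and responses.
conc = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
rng = np.random.default_rng(3)
response = langmuir(conc, 1.8, 30.0, 0.05) + rng.normal(0, 0.01, conc.size)

(a, b, c), _ = curve_fit(langmuir, conc, response, p0=[1.0, 20.0, 0.0])
print(a, b, c)
print(concentration(langmuir(50.0, a, b, c), a, b, c))  # recovers 50.0
```

The saturation built into the a/(b + x) term is what lets this form track sandwich-assay standard curves better than logit-log or semilog fits over the full concentration range.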

  4. An evaluation of the Bayesian approach to fitting the N-mixture model for use with pseudo-replicated count data

    Science.gov (United States)

    Toribo, S.G.; Gray, B.R.; Liang, S.

    2011-01-01

    The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
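
The N-mixture likelihood marginalizes the latent site abundance N out of replicated binomial counts. The paper studies a Bayesian fit; the sketch below instead maximizes the same marginal likelihood directly on simulated data, just to make the model structure concrete (all parameter values and names are assumptions):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

rng = np.random.default_rng(4)
R, T = 100, 4                                   # sites, replicate visits
lam_true, p_true = 5.0, 0.4                     # mean abundance, detection prob.
N = rng.poisson(lam_true, R)                    # latent abundance per site
y = rng.binomial(N[:, None], p_true, (R, T))    # replicated counts

def nll(theta, y, n_max=60):
    lam = np.exp(theta[0])                      # log / logit keep params valid
    p = 1 / (1 + np.exp(-theta[1]))
    Ns = np.arange(n_max + 1)
    prior = poisson.pmf(Ns, lam)                                  # P(N)
    site_like = np.prod(binom.pmf(y[:, :, None], Ns, p), axis=1)  # P(y | N)
    return -np.sum(np.log(site_like @ prior))   # marginalize N at each site

res = minimize(nll, x0=[np.log(3.0), 0.0], args=(y,), method="Nelder-Mead")
lam_hat, p_hat = np.exp(res.x[0]), 1 / (1 + np.exp(-res.x[1]))
print(lam_hat, p_hat)
```

The replication assumption enters through the product over the T visits: if the site is not closed, or replicates are defined spatially, that product is no longer a valid likelihood — the violation whose consequences the paper studies.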

  5. Universal fit to p-p elastic diffraction scattering from the Lorentz contracted geometrical model

    International Nuclear Information System (INIS)

    Hansen, P.H.; Krisch, A.D.

    1976-01-01

The prediction of the Lorentz contracted geometrical model for proton-proton elastic scattering at small angles is examined. The model assumes that when two high energy particles collide, each behaves as a geometrical object which has a Gaussian density and is spherically symmetric except for the Lorentz contraction in the incident direction. It is predicted that dσ/dt should be independent of energy when plotted against the variable β²P⊥²σ_TOT(s)/38.3. Thus the energy dependence of the diffraction peak slope (b in an e^(-b|t|) plot) is given by b(s) = A²β²σ_TOT(s)/38.3, where β is the proton's c.m. velocity and A is its radius. Recently measured values of σ_TOT(s) were used and an excellent fit obtained to the elastic slope in both t regions [-t < 0.1 (GeV/c)² and -t > 0.1 (GeV/c)²] at all energies from s = 6 to 4000 (GeV/c)². (Auth.)
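
As a worked example of the quoted slope formula, the snippet below evaluates b(s) = A²β²σ_TOT(s)/38.3 with σ_TOT in mb, so that σ_TOT/38.3 is dimensionless and b carries the units of A². The numerical inputs are illustrative assumptions, not values from the paper.

```python
def slope(A2, beta, sigma_tot_mb):
    # b(s) = A^2 * beta^2 * sigma_TOT(s) / 38.3, as quoted in the abstract.
    return A2 * beta**2 * sigma_tot_mb / 38.3

# Illustrative (assumed) numbers: beta ~ 1 at high energy and
# sigma_TOT ~ 43 mb give a diffraction-peak slope of about 12 GeV^-2
# if A^2 is taken as 10.7 GeV^-2.
print(slope(10.7, 1.0, 43.0))
```

Since β² and σ_TOT(s) both grow slowly with energy, the formula makes the well-known shrinkage of the diffraction peak (growth of b with s) automatic.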

  6. Temperature dependence of bulk respiration of crop stands. Measurement and model fitting

    International Nuclear Information System (INIS)

    Tani, Takashi; Arai, Ryuji; Tako, Yasuhiro

    2007-01-01

The objective of the present study was to examine whether the temperature dependence of respiration at a crop-stand scale could be directly represented by an Arrhenius function that was widely used for representing the temperature dependence of leaf respiration. We determined temperature dependences of bulk respiration of monospecific stands of rice and soybean within a range of the air temperature from 15 to 30 °C using large closed chambers. Measured responses of respiration rates of the two stands were well fitted by the Arrhenius function (R² = 0.99). In the existing model to assess the local radiological impact of the anthropogenic carbon-14, effects of the physical environmental factors on photosynthesis and respiration of crop stands are not taken into account for the calculation of the net amount of carbon per cultivation area in crops at harvest, which is the crucial parameter for the estimation of the activity concentration of carbon-14 in crops. Our result indicates that the Arrhenius function is useful for incorporating the effect of the temperature on respiration of crop stands into the model, which is expected to contribute to a more realistic estimate of the activity concentration of carbon-14 in crops. (author)
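
Fitting an Arrhenius temperature response to chamber measurements reduces to a two-parameter nonlinear least-squares problem. The sketch below uses a common reference-temperature parameterization; the respiration rates, reference rate and activation energy are entirely hypothetical, chosen only to span the study's 15–30 °C range.

```python
import numpy as np
from scipy.optimize import curve_fit

R_GAS = 8.314            # J mol^-1 K^-1
T_REF = 298.15           # 25 degC reference temperature, in kelvin

def arrhenius(T, r_ref, Ea):
    # Respiration rate relative to its value r_ref at T_REF;
    # Ea is the apparent activation energy in J mol^-1.
    return r_ref * np.exp((Ea / R_GAS) * (1.0 / T_REF - 1.0 / T))

# Hypothetical stand respiration rates over a 15-30 degC range.
T = np.array([15.0, 20.0, 25.0, 30.0]) + 273.15
rng = np.random.default_rng(5)
resp = arrhenius(T, 4.0, 60000.0) * (1 + rng.normal(0, 0.02, T.size))

(r_ref, Ea), _ = curve_fit(arrhenius, T, resp, p0=[3.0, 50000.0])
print(r_ref, Ea)
```

Writing the exponent as a difference of inverse temperatures decorrelates the two parameters, which keeps the fit stable over a narrow temperature window like 15–30 °C.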

  7. FitSKIRT: genetic algorithms to automatically fit dusty galaxies with a Monte Carlo radiative transfer code

    Science.gov (United States)

    De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.

    2013-02-01

We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage that they have better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi-dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies in the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need of human intervention and biasing. The high level of automation makes it an ideal tool to use on larger sets of observed data.

  8. Can the Stephani model be an alternative to FRW accelerating models?

    International Nuclear Information System (INIS)

    Godlowski, Wlodzimierz; Stelmach, Jerzy; Szydlowski, Marek

    2004-01-01

    A class of Stephani cosmological models as a prototype of a non-homogeneous universe is considered. The non-homogeneity can lead to accelerated evolution, which is now observed from the SNe Ia data. Three samples of type Ia supernovae obtained by Perlmutter et al, Tonry et al and Knop et al are taken into account. Different statistical methods (best fit as well as the maximum likelihood method) are used to estimate the model parameters. The Stephani model is considered as an alternative to the ΛCDM model in explaining the present acceleration of the universe. The model explains the acceleration of the universe at the same level of accuracy as the ΛCDM model (the χ² statistics are comparable). From the best-fit analysis it follows that the Stephani model is characterized by a higher value of the density parameter Ω_m0 than the ΛCDM model. It is also shown that the model is consistent with the location of CMB peaks

  9. A generalized multivariate regression model for modelling ocean wave heights

    Science.gov (United States)

    Wang, X. L.; Feng, Y.; Swail, V. R.

    2012-04-01

    In this study, a generalized multivariate linear regression model is developed to represent the relationship between 6-hourly ocean significant wave heights (Hs) and the corresponding 6-hourly mean sea level pressure (MSLP) fields. The model is calibrated using the ERA-Interim reanalysis of Hs and MSLP fields for 1981-2000, and is validated using the ERA-Interim reanalysis for 2001-2010 and the ERA40 reanalysis of Hs and MSLP for 1958-2001. The performance of the fitted model is evaluated in terms of the Pierce skill score, frequency bias index, and correlation skill score. Because wave heights are not normally distributed, they are subjected to a data-adaptive Box-Cox transformation before being used in the model fitting. Also, since 6-hourly data are modelled, lag-1 autocorrelation must be, and is, accounted for. The models with and without the Box-Cox transformation, and with and without accounting for autocorrelation, are inter-compared in terms of their prediction skills. The fitted MSLP-Hs relationship is then used to reconstruct the historical wave height climate from the 6-hourly MSLP fields taken from the Twentieth Century Reanalysis (20CR, Compo et al. 2011), and to project possible future wave height climates using CMIP5 model simulations of MSLP fields. The reconstructed and projected wave heights, both seasonal means and maxima, are subject to a trend analysis that allows for non-linear (polynomial) trends.
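
    The data-adaptive Box-Cox step described above is available directly in SciPy; the sketch below uses synthetic lognormal "wave heights" (not the ERA-Interim data) and shows the forward transform, the maximum-likelihood choice of λ, and the inverse transform needed to map model output back to physical units.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(1)
hs = rng.lognormal(mean=0.5, sigma=0.4, size=500)  # synthetic wave heights (m)

# With no lambda supplied, scipy chooses it by maximum likelihood so the
# transformed sample is as close to Gaussian as possible
z, lam = stats.boxcox(hs)

# After modelling in transformed space, predictions are mapped back
hs_back = inv_boxcox(z, lam)
```

    The round trip through `inv_boxcox` is what lets the regression operate on near-Gaussian data while the final wave-height predictions stay in metres.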

  10. Probabilistic model fitting for spatio-temporal variability studies of precipitation: the Sara-Brut system - a case study

    International Nuclear Information System (INIS)

    Dorado Delgado, Jennifer; Burbano Criollo, Juan Carlos; Molina Tabares, Jose Manuel; Carvajal Escobar, Yesid; Aristizabal, Hector Fabio

    2006-01-01

    In this study, the spatial and temporal variability of monthly and annual rainfall was analyzed for the downstream influence zone of a Colombian supply-regulation reservoir, Sara-Brut, located in the Valle del Cauca department. Monthly precipitation data from 18 gauge stations over a 29-year record (1975-2003) were used. These data were processed by means of time series completion, consistency analyses, and computation of sample statistics. Theoretical probability distribution models such as the Gumbel, normal, lognormal, and Wakeby distributions, together with empirical distributions such as Weibull and Landwehr, were applied in order to fit the historical precipitation data set. The fit standard error (FSE) was used to test the goodness of fit of the theoretical distribution models and to choose the best probabilistic function. The Wakeby approach showed the best goodness of fit at 89% of the gauges taken into account. Temporal variability was analyzed by means of Wakeby-estimated values of monthly and annual precipitation associated with return periods of 1.052, 1.25, 2, 10, 20, and 50 years. Spatial variability of precipitation is presented by means of ArcGIS v8.3, using kriging as the interpolation method. In general terms, the results obtained from this study show significant variability in the distribution of precipitation over the whole area; in particular, the formation of dry and humid nuclei over the northeastern strip and microclimates in the southwestern and central zones of the study area were observed, depending on the season of the year. This distribution pattern is likely caused by the influence of Pacific wind streams coming over the western Andean mountain range. It is expected that the results from this work will be helpful for future planning and hydrologic project design
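
    A fit-standard-error comparison of candidate distributions, in the spirit of the study, can be sketched as follows. SciPy has no Wakeby distribution, so this illustration compares only Gumbel, normal, and lognormal candidates on synthetic annual totals, ranking them by an FSE-style quantile error; all data and parameters here are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
annual_precip = stats.gumbel_r.rvs(loc=1200.0, scale=150.0, size=29, random_state=rng)

candidates = {"gumbel": stats.gumbel_r, "normal": stats.norm, "lognormal": stats.lognorm}

def fit_standard_error(dist, data):
    """RMSE between sorted data and fitted quantiles at Weibull plotting positions."""
    params = dist.fit(data)
    x = np.sort(data)
    p = np.arange(1, len(x) + 1) / (len(x) + 1)  # empirical non-exceedance probabilities
    predicted = dist.ppf(p, *params)
    dof = len(x) - len(params)                   # penalize extra parameters
    return np.sqrt(np.sum((x - predicted) ** 2) / dof)

fse = {name: fit_standard_error(d, annual_precip) for name, d in candidates.items()}
best = min(fse, key=fse.get)
```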

  11. Using R and WinBUGS to fit a generalized partial credit model for developing and evaluating patient-reported outcomes assessments.

    Science.gov (United States)

    Li, Yuelin; Baser, Ray

    2012-08-15

    The US Food and Drug Administration recently announced the final guidelines on the development and validation of patient-reported outcomes (PROs) assessments in drug labeling and clinical trials. This guidance paper may boost the demand for new PRO survey questionnaires. Henceforth, biostatisticians may encounter psychometric methods more frequently, particularly item response theory (IRT) models to guide the shortening of a PRO assessment instrument. This article aims to provide an introduction to the theory and the practical analytic skills needed to fit a generalized partial credit model (GPCM) in IRT. GPCM theory is explained first, with special attention to a clearer exposition of the formal mathematics than is typically available in the psychometric literature. Then, a worked example is presented, using self-reported responses taken from the International Personality Item Pool. The worked example contains step-by-step guides on using the statistical languages R and WinBUGS to fit the GPCM. Finally, the Fisher information function of the GPCM is derived and used to evaluate, as an illustrative example, the usefulness of assessment items by their information content. This article aims to encourage biostatisticians to apply IRT models in the re-analysis of existing data and in future research. Copyright © 2012 John Wiley & Sons, Ltd.
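
    The GPCM assigns each response category a probability through a softmax over cumulative step terms a(θ − b_j); the sketch below evaluates those probabilities for a single item with made-up parameters (it is a formula check, not the R/WinBUGS estimation code the article describes).

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Category response probabilities for one GPCM item.

    theta : latent trait value
    a     : item discrimination
    b     : step difficulties (length = number of categories - 1)
    """
    # cumulative sums of a*(theta - b_j); category 0 contributes an empty sum (0)
    steps = np.concatenate(([0.0], np.cumsum(a * (theta - np.asarray(b)))))
    expz = np.exp(steps - steps.max())  # numerically stable softmax
    return expz / expz.sum()

# Hypothetical 4-category item
p = gpcm_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.5])
```

    With θ above the first two step difficulties but below the third, most probability mass falls on the third category.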

  12. Estimating the fitness cost and benefit of cefixime resistance in Neisseria gonorrhoeae to inform prescription policy: A modelling study.

    Directory of Open Access Journals (Sweden)

    Lilith K Whittles

    2017-10-01

    Gonorrhoea is one of the most common bacterial sexually transmitted infections in England. Over 41,000 cases were recorded in 2015, more than half of which occurred in men who have sex with men (MSM). As the bacterium has developed resistance to each first-line antibiotic in turn, we need an improved understanding of the fitness benefits and costs of antibiotic resistance to inform control policy and planning. Cefixime was recommended as a single-dose treatment for gonorrhoea from 2005 to 2010, during which time resistance increased, and subsequently declined. We developed a stochastic compartmental model representing the natural history and transmission of cefixime-sensitive and cefixime-resistant strains of Neisseria gonorrhoeae in MSM in England, which was applied to data on diagnoses and prescriptions between 2008 and 2015. We estimated that asymptomatic carriers play a crucial role in overall transmission dynamics, with 37% (95% credible interval [CrI] 24%-52%) of infections remaining asymptomatic and untreated, accounting for 89% (95% CrI 82%-93%) of onward transmission. The fitness cost of cefixime resistance in the absence of cefixime usage was estimated to be such that the number of secondary infections caused by resistant strains is only about half that of the susceptible strains, which is insufficient to maintain persistence. However, we estimated that treatment of cefixime-resistant strains with cefixime was unsuccessful in 83% (95% CrI 53%-99%) of cases, representing a fitness benefit of resistance. This benefit was large enough to counterbalance the fitness cost when 31% (95% CrI 26%-36%) of cases were treated with cefixime, and when more than 55% (95% CrI 44%-66%) of cases were treated with cefixime, the resistant strain had a net fitness advantage over the susceptible strain. Limitations include sparse data leading to large intervals on key model parameters and necessary assumptions in the modelling of a complex epidemiological process.

  13. Damage Identification of Bridge Based on Chebyshev Polynomial Fitting and Fuzzy Logic without Considering Baseline Model Parameters

    Directory of Open Access Journals (Sweden)

    Yu-Bo Jiao

    2015-01-01

    The paper presents an effective approach for damage identification of bridges based on Chebyshev polynomial fitting and fuzzy logic systems, without requiring baseline model data. The modal curvature of the damaged bridge can be obtained through a central difference approximation based on the displacement mode shape. From the modal curvature of the damaged structure, Chebyshev polynomial fitting is applied to estimate the curvature of the undamaged one without requiring baseline parameters. The modal curvature difference can then be derived and used for damage localization. Subsequently, the normalized modal curvature difference is treated as the input variable of fuzzy logic systems for damage condition assessment. A numerical simulation on a simply supported bridge was carried out to demonstrate the feasibility of the proposed method.
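
    The two numerical steps named in the abstract, a central difference modal curvature and a smooth Chebyshev polynomial baseline, can be sketched on an idealized mode shape; the sine mode here stands in for measured displacement data and is purely illustrative.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical displacement mode shape sampled along a simply supported span
x = np.linspace(0.0, 1.0, 101)
phi = np.sin(np.pi * x)
h = x[1] - x[0]

# Modal curvature by central difference approximation (interior points only)
curv = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / h**2

# A low-order Chebyshev fit plays the role of the smooth "undamaged" baseline
coeffs = C.chebfit(x[1:-1], curv, deg=4)
baseline = C.chebval(x[1:-1], coeffs)

# Curvature difference of the kind used for damage localization
diff = np.abs(curv - baseline)
```

    On a real structure, localized damage produces a sharp dip in `curv` that the smooth polynomial cannot follow, so `diff` peaks near the damaged region.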

  14. Modelling noise in second generation sequencing forensic genetics STR data using a one-inflated (zero-truncated) negative binomial model

    DEFF Research Database (Denmark)

    Vilsen, Søren B.; Tvedebrink, Torben; Mogensen, Helle Smidt

    2015-01-01

    We present a model fitting the distribution of non-systematic errors, i.e. the noise, in STR second generation sequencing (SGS) analysis, using a one-inflated, zero-truncated negative binomial model. The model is a two component model...
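
    Under one common convention, such a model combines zero truncation (renormalizing the negative binomial over k ≥ 1) with extra probability mass placed on k = 1; the sketch below builds that PMF with illustrative parameters, not the fitted values from the paper.

```python
import numpy as np
from scipy import stats

def one_inflated_zt_nb_pmf(k, n, p, pi1):
    """PMF of a one-inflated, zero-truncated negative binomial.

    k    : counts (k >= 1)
    n, p : negative binomial parameters (scipy's parameterization)
    pi1  : extra probability mass placed on the count 1
    """
    k = np.asarray(k)
    # renormalize the negative binomial over k >= 1 (zero truncation)
    base = stats.nbinom.pmf(k, n, p) / (1.0 - stats.nbinom.pmf(0, n, p))
    # mix in the extra point mass at k == 1 (one inflation)
    return pi1 * (k == 1) + (1.0 - pi1) * base

ks = np.arange(1, 200)
pmf = one_inflated_zt_nb_pmf(ks, n=2.0, p=0.3, pi1=0.15)
```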

  15. The transtheoretical model and strategies of European fitness professionals to support clients in changing health-related behaviour: A survey study

    NARCIS (Netherlands)

    Middelkamp, P.J.C.; Wolfhagen, P.; Steenbergen, B.

    2015-01-01

    Introduction: The transtheoretical model of behaviour change (TTM) is often used to understand and predict changes in health related behaviour, for example exercise behaviour and eating behaviour. Fitness professionals like personal trainers typically service and support clients in improving

  16. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    Science.gov (United States)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors; the parameters were conditioned on climatic conditions. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. In the last model, positive and negative errors were modeled separately: the errors were first NQT-transformed before a model was constructed in which the mean values were conditioned on climate, forecasted inflow, and the previous day's error. To test the three models we applied three criteria: we wanted (a) the median values to be close to the observed values; (b) the forecast intervals to be narrow; (c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the autocorrelation in the errors. Models 1 and 2 gave similar results, and their main drawback is that the distributions are not correct: the 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated and larger intervals under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the autocorrelation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended; if the whole distribution is of interest, Model 3 is recommended.
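
    The NQT step used in models 2 and 3 can be sketched with an empirical rank-based implementation followed by a first-order autoregressive fit; the autocorrelated, skewed "forecast errors" below are simulated for illustration only and are not the Langvatn data.

```python
import numpy as np
from scipy import stats

def nqt(x):
    """Normal quantile transform: map values to standard normal scores via ranks."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf(ranks / (len(x) + 1))

# Simulate skewed, autocorrelated errors (illustrative only)
rng = np.random.default_rng(3)
innov = rng.gamma(shape=2.0, scale=1.0, size=400) - 2.0  # zero-mean, right-skewed
errors = np.zeros(400)
for t in range(1, 400):
    errors[t] = 0.5 * errors[t - 1] + innov[t]

z = nqt(errors)
# First-order autoregressive coefficient for the transformed errors
phi = np.corrcoef(z[:-1], z[1:])[0, 1]
```

    The transform removes the skewness by construction (the transformed values are a permutation of symmetric normal scores), while the lag-1 dependence that the abstract identifies as the main challenge survives the transform and must still be modelled.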

  17. A BRDF statistical model applying to space target materials modeling

    Science.gov (United States)

    Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen

    2017-10-01

    In order to solve the problem of the poor performance of the five-parameter semi-empirical model in fitting high-density measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, the model contains six simple parameters, which can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model is able to achieve parameter inversion quickly with no extra loss of accuracy. A genetic algorithm was used to invert the parameters for 11 different samples of materials commonly used in space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The effectiveness of the refined model is verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth angle. Finally, three-dimensional visualizations of these samples in the upper hemisphere space are given, in which the strength of the optical scattering of different materials can be clearly seen. This demonstrates the refined model's ability to characterize materials.

  18. Dwarf novae in outburst: modelling the observations

    International Nuclear Information System (INIS)

    Pringle, J.E.; Verbunt, F.

    1986-01-01

    Time-dependent accretion-disc models are constructed and used to calculate theoretical spectra in order to try to fit the ultraviolet and optical observations of outbursts of the two dwarf novae VW Hydri and CN Orionis. It is found that the behaviour on the rise to outburst is the strongest discriminator between theoretical models. The mass-transfer burst model is able to fit the spectral behaviour for both objects. The disc-instability model is unable to fit the rise to outburst in VW Hydri, and gives a poor fit to the observations of CN Orionis. (author)

  19. A CONTRASTIVE ANALYSIS OF THE FACTORIAL STRUCTURE OF THE PCL-R: WHICH MODEL FITS BEST THE DATA?

    Directory of Open Access Journals (Sweden)

    Beatriz Pérez

    2015-01-01

    The aim of this study was to determine which of the factorial solutions proposed for the Hare Psychopathy Checklist-Revised (PCL-R), namely the two-, three-, and four-factor and unidimensional solutions, fitted the data best. Two trained and experienced independent raters scored 197 prisoners from the Villabona Penitentiary (Asturias, Spain), age range 21 to 73 years (M = 36.0, SD = 9.7), of whom 60.12% were reoffenders and 73% had committed violent crimes. The results revealed that the two-factor correlational, three-factor hierarchical without testlets, four-factor correlational and hierarchical, and unidimensional models were a poor fit to the data (CFI ≤ .86), whereas the three-factor hierarchical model with testlets was a reasonable fit (CFI = .93). The scale resulting from the three-factor hierarchical model with testlets (13 items) classified psychopathy significantly higher than the original 20-item scale. The results are discussed in terms of their implications for theoretical models of psychopathy, decision-making, prison classification and intervention, and prevention.

  20. Fitting polytomous Rasch models in SAS

    DEFF Research Database (Denmark)

    Christensen, Karl Bang

    2006-01-01

    The item parameters of a polytomous Rasch model can be estimated using marginal and conditional approaches. This paper describes how this can be done in SAS (V8.2) for three item parameter estimation procedures: marginal maximum likelihood estimation, conditional maximum likelihood estimation, an...

  1. Respirometry techniques and activated sludge models

    NARCIS (Netherlands)

    Benes, O.; Spanjers, H.; Holba, M.

    2002-01-01

    This paper aims to explain results of respirometry experiments using Activated Sludge Model No. 1. In cases of insufficient fit of ASM No. 1, further modifications to the model were carried out and the so-called "Enzymatic model" was developed. The best-fit method was used to determine the effect of

  2. Using multistage models to describe radiation-induced leukaemia

    International Nuclear Information System (INIS)

    Little, M.P.; Muirhead, C.R.; Boice, J.D. Jr.; Kleinerman, R.A.

    1995-01-01

    The Armitage-Doll model of carcinogenesis is fitted to data on leukaemia mortality among the Japanese atomic bomb survivors with the DS86 dosimetry and on leukaemia incidence in the International Radiation Study of Cervical Cancer patients. Two different forms of model are fitted: the first postulates up to two radiation-affected stages and the second additionally allows for the presence at birth of a non-trivial population of cells which have already accumulated the first of the mutations leading to malignancy. Among models of the first form, a model with two adjacent radiation-affected stages appears to fit the data better than other models of the first form, including both models with two affected stages in any order and models with only one affected stage. The best fitting model predicts a linear-quadratic dose-response and reductions of relative risk with increasing time after exposure and age at exposure, in agreement with what has previously been observed in the Japanese and cervical cancer data. However, on the whole it does not provide an adequate fit to either dataset. The second form of model appears to provide a rather better fit, but the optimal models have biologically implausible parameters (the number of initiated cells at birth is negative) so that this model must also be regarded as providing an unsatisfactory description of the data. (author)

  3. Assessing Local Model Adequacy in Bayesian Hierarchical Models Using the Partitioned Deviance Information Criterion

    Science.gov (United States)

    Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.

    2010-01-01

    Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121
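
    The partitioning of the DIC into per-observation terms can be illustrated on a toy normal-mean model (not the Rwanda HIV data): each observation's local DIC is its posterior mean deviance plus a local effective-parameter count. Everything below, including the stand-in posterior draws, is an assumption for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
y = rng.normal(1.0, 1.0, size=50)

# Stand-in posterior draws for the mean of a N(theta, 1) model
theta_draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=2000)

# Per-observation deviance for each draw: -2 log N(y_i | theta, 1)
dev = -2.0 * stats.norm.logpdf(y[None, :], loc=theta_draws[:, None], scale=1.0)
dbar_i = dev.mean(axis=0)                                   # posterior mean deviance
dhat_i = -2.0 * stats.norm.logpdf(y, loc=theta_draws.mean(), scale=1.0)
pd_i = dbar_i - dhat_i                                      # local effective parameters
local_dic = dbar_i + pd_i                                   # local DIC per observation
dic = local_dic.sum()                                       # global DIC
```

    Mapping `local_dic` over space is the visualization idea the abstract describes: observations with unusually large local DIC are poorly served by the current model specification.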

  4. Discrete competing risk model with application to modeling bus-motor failure data

    International Nuclear Information System (INIS)

    Jiang, R.

    2010-01-01

    Failure data are often modeled using continuous distributions. However, a discrete distribution can be appropriate for modeling interval or grouped data. When failure data come from a complex system, a simple discrete model can be inappropriate for modeling such data. This paper presents two types of discrete distributions. One is formed by exponentiating an underlying distribution, and the other is a two-fold competing risk model. The paper focuses on two special distributions: (a) exponentiated Poisson distribution and (b) competing risk model involving a geometric distribution and an exponentiated Poisson distribution. The competing risk model has a decreasing-followed-by-unimodal mass function and a bathtub-shaped failure rate. Five classical data sets on bus-motor failures can be simultaneously and appropriately fitted by a general 5-parameter competing risk model with the parameters being functions of the number of successive failures. The lifetime and aging characteristics of the fitted distribution are analyzed.
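
    A two-fold competing risk construction of the kind described, where the observed lifetime is the minimum of two independent discrete lifetimes, multiplies survival functions. The sketch below pairs a geometric lifetime with an exponentiated Poisson one, using the common convention G(k) = F(k)^β for exponentiated distributions; the parameters are illustrative, not the paper's fitted values.

```python
import numpy as np
from scipy import stats

def exp_poisson_sf(k, mu, beta):
    """Survival function of an exponentiated Poisson: G(k) = F_Poisson(k)**beta."""
    return 1.0 - stats.poisson.cdf(k, mu) ** beta

def competing_risk_sf(k, p_geom, mu, beta):
    """Survival of min(geometric, exponentiated Poisson), assuming independence."""
    return stats.geom.sf(k, p_geom) * exp_poisson_sf(k, mu, beta)

k = np.arange(0, 50)
sf = competing_risk_sf(k, p_geom=0.05, mu=10.0, beta=2.0)

# PMF from successive survival differences: P(K = k) = S(k - 1) - S(k)
pmf = -np.diff(np.concatenate(([1.0], sf)))
```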

  5. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    Science.gov (United States)

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005 ), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010 ), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010 ). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.

  6. Electricity demand loads modeling using AutoRegressive Moving Average (ARMA) models

    Energy Technology Data Exchange (ETDEWEB)

    Pappas, S.S. [Department of Information and Communication Systems Engineering, University of the Aegean, Karlovassi, 83 200 Samos (Greece); Ekonomou, L.; Chatzarakis, G.E. [Department of Electrical Engineering Educators, ASPETE - School of Pedagogical and Technological Education, N. Heraklion, 141 21 Athens (Greece); Karamousantas, D.C. [Technological Educational Institute of Kalamata, Antikalamos, 24100 Kalamata (Greece); Katsikas, S.K. [Department of Technology Education and Digital Systems, University of Piraeus, 150 Androutsou Srt., 18 532 Piraeus (Greece); Liatsis, P. [Division of Electrical Electronic and Information Engineering, School of Engineering and Mathematical Sciences, Information and Biomedical Engineering Centre, City University, Northampton Square, London EC1V 0HB (United Kingdom)

    2008-09-15

    This study addresses the problem of modeling the electricity demand loads in Greece. The actual load data provided are deseasonalized, and an AutoRegressive Moving Average (ARMA) model is fitted to the data off-line using the Akaike Corrected Information Criterion (AICC). The developed model fits the data well. Difficulties occur when the provided data include noise or errors, and also when on-line/adaptive modeling is required. In both cases, and under the assumption that the provided data can be represented by an ARMA model, simultaneous order and parameter estimation of ARMA models in the presence of noise is performed. The produced results indicate that the proposed method, which is based on multi-model partitioning theory, tackles the studied problem successfully. For validation purposes the produced results are compared with three other established order selection criteria, namely AICC, Akaike's Information Criterion (AIC) and Schwarz's Bayesian Information Criterion (BIC). The developed model could be useful in studies concerning electricity consumption and electricity price forecasts. (author)
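
    Order selection with a corrected information criterion can be sketched without specialist libraries by fitting AR(p) candidates with ordinary least squares and scoring each with an AICC-style penalty; the AR(2) series below is simulated for illustration (the study's load data and full ARMA/multi-model machinery are not reproduced here).

```python
import numpy as np

def fit_ar_ols(x, p):
    """Fit an AR(p) model by ordinary least squares; return coefs and residual variance."""
    X = np.column_stack([x[p - i - 1 : len(x) - i - 1] for i in range(p)])
    y = x[p:]
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coefs
    return coefs, resid.var()

def aicc(n, k, sigma2):
    """Corrected Akaike information criterion for a Gaussian model with k parameters."""
    aic = n * np.log(sigma2) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)  # small-sample correction term

# Simulate a deseasonalized AR(2) series standing in for load data
rng = np.random.default_rng(4)
n = 600
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

# Score candidate orders p = 1..4 and keep the minimizer
scores = {p: aicc(n - p, p + 1, fit_ar_ols(x, p)[1]) for p in range(1, 5)}
best_p = min(scores, key=scores.get)
```

    The correction term matters most when the sample is short relative to the number of parameters; for long series AICC and AIC give nearly identical rankings.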

  7. The inert doublet model in the light of Fermi-LAT gamma-ray data: a global fit analysis

    Science.gov (United States)

    Eiteneuer, Benedikt; Goudelis, Andreas; Heisig, Jan

    2017-09-01

    We perform a global fit within the inert doublet model taking into account experimental observables from colliders, direct and indirect dark matter searches and theoretical constraints. In particular, we consider recent results from searches for dark matter annihilation-induced gamma-rays in dwarf spheroidal galaxies and relax the assumption that the inert doublet model should account for the entire dark matter in the Universe. We, moreover, study in how far the model is compatible with a possible dark matter explanation of the so-called Galactic center excess. We find two distinct parameter space regions that are consistent with existing constraints and can simultaneously explain the excess: One with dark matter masses near the Higgs resonance and one around 72 GeV where dark matter annihilates predominantly into pairs of virtual electroweak gauge bosons via the four-vertex arising from the inert doublet's kinetic term. We briefly discuss future prospects to probe these scenarios.

  8. The inert doublet model in the light of Fermi-LAT gamma-ray data: a global fit analysis

    Energy Technology Data Exchange (ETDEWEB)

    Eiteneuer, Benedikt; Heisig, Jan [RWTH Aachen University, Institute for Theoretical Particle Physics and Cosmology, Aachen (Germany); Goudelis, Andreas [UMR 7589 CNRS and UPMC, Laboratoire de Physique Theorique et Hautes Energies (LPTHE), Paris (France)

    2017-09-15

    We perform a global fit within the inert doublet model taking into account experimental observables from colliders, direct and indirect dark matter searches and theoretical constraints. In particular, we consider recent results from searches for dark matter annihilation-induced gamma-rays in dwarf spheroidal galaxies and relax the assumption that the inert doublet model should account for the entire dark matter in the Universe. We, moreover, study in how far the model is compatible with a possible dark matter explanation of the so-called Galactic center excess. We find two distinct parameter space regions that are consistent with existing constraints and can simultaneously explain the excess: One with dark matter masses near the Higgs resonance and one around 72 GeV where dark matter annihilates predominantly into pairs of virtual electroweak gauge bosons via the four-vertex arising from the inert doublet's kinetic term. We briefly discuss future prospects to probe these scenarios. (orig.)

  9. Genome-Enabled Modeling of Biogeochemical Processes Predicts Metabolic Dependencies that Connect the Relative Fitness of Microbial Functional Guilds

    Science.gov (United States)

    Brodie, E.; King, E.; Molins, S.; Karaoz, U.; Steefel, C. I.; Banfield, J. F.; Beller, H. R.; Anantharaman, K.; Ligocki, T. J.; Trebotich, D.

    2015-12-01

    Pore-scale processes mediated by microorganisms underlie a range of critical ecosystem services, regulating carbon stability, nutrient flux, and the purification of water. Advances in cultivation-independent approaches now provide us with the ability to reconstruct thousands of genomes from microbial populations from which functional roles may be assigned. With this capability to reveal microbial metabolic potential, the next step is to put these microbes back where they belong to interact with their natural environment, i.e. the pore scale. At this scale, microorganisms communicate, cooperate and compete across their fitness landscapes, with communities emerging that feed back on the physical and chemical properties of their environment, ultimately altering the fitness landscape and selecting for new microbial communities with new properties, and so on. We have developed a trait-based model of microbial activity that simulates coupled functional guilds that are parameterized with unique combinations of traits that govern fitness under dynamic conditions. Using a reactive transport framework, we simulate the thermodynamics of coupled electron donor-acceptor reactions to predict energy available for cellular maintenance, respiration, biomass development, and enzyme production. From metagenomics, we directly estimate some trait values related to growth and identify the linkage of key traits associated with respiration and fermentation, macromolecule depolymerizing enzymes, and other key functions such as nitrogen fixation. Our simulations were carried out to explore abiotic controls on community emergence such as seasonally fluctuating water table regimes across floodplain organic matter hotspots. Simulations and metagenomic/metatranscriptomic observations highlighted the many dependencies connecting the relative fitness of functional guilds and the importance of chemolithoautotrophic lifestyles. 
Using an X-Ray microCT-derived soil microaggregate physical model combined

  10. Pulmonary lobe segmentation based on ridge surface sampling and shape model fitting

    Energy Technology Data Exchange (ETDEWEB)

    Ross, James C., E-mail: jross@bwh.harvard.edu [Channing Laboratory, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Surgical Planning Lab, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Laboratory of Mathematics in Imaging, Brigham and Women's Hospital, Boston, Massachusetts 02126 (United States); Kindlmann, Gordon L. [Computer Science Department and Computation Institute, University of Chicago, Chicago, Illinois 60637 (United States); Okajima, Yuka; Hatabu, Hiroto [Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Díaz, Alejandro A. [Pulmonary and Critical Care Division, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts 02215 and Department of Pulmonary Diseases, Pontificia Universidad Católica de Chile, Santiago (Chile); Silverman, Edwin K. [Channing Laboratory, Brigham and Women's Hospital, Boston, Massachusetts 02215 and Pulmonary and Critical Care Division, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts 02215 (United States); Washko, George R. [Pulmonary and Critical Care Division, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts 02215 (United States); Dy, Jennifer [ECE Department, Northeastern University, Boston, Massachusetts 02115 (United States); Estépar, Raúl San José [Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Surgical Planning Lab, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Laboratory of Mathematics in Imaging, Brigham and Women's Hospital, Boston, Massachusetts 02126 (United States)

    2013-12-15

    Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory computed tomography (CT) datasets. The authors rely on ridge surface image features indicating fissure locations and a novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particle data to discriminate between fissure and nonfissure candidates. The resulting set of particle points is then used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low dose acquisition protocols. Dice scores indicate that only 29 out of 500 (5.85%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT. Conclusions: The
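The Dice overlap used above for evaluation is simple to compute for two voxel label sets; a minimal sketch (the helper is illustrative, not code from the paper):

```python
def dice(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for two sets of voxel indices."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))

# Example: two lobe masks of 4 voxels each sharing 3 voxels
print(dice({1, 2, 3, 4}, {2, 3, 4, 5}))  # → 0.75
```

A score of 0.9, the cutoff quoted above, thus means the two lobe masks share 90% of their combined volume.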

  11. Pulmonary lobe segmentation based on ridge surface sampling and shape model fitting

    International Nuclear Information System (INIS)

    Ross, James C.; Kindlmann, Gordon L.; Okajima, Yuka; Hatabu, Hiroto; Díaz, Alejandro A.; Silverman, Edwin K.; Washko, George R.; Dy, Jennifer; Estépar, Raúl San José

    2013-01-01

    Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory computed tomography (CT) datasets. The authors rely on ridge surface image features indicating fissure locations and a novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particle data to discriminate between fissure and nonfissure candidates. The resulting set of particle points is then used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low dose acquisition protocols. Dice scores indicate that only 29 out of 500 (5.85%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT. Conclusions: The proposed

  12. Experimental model for non-Newtonian fluid viscosity estimation: Fit to mathematical expressions

    Directory of Open Access Journals (Sweden)

    Guillem Masoliver i Marcos

    2017-01-01

    The construction process of a viscometer, developed in collaboration with a final-project student, is presented here. It is intended to be used by first-year students to learn about viscosity as a fluid property, for both Newtonian and non-Newtonian flows. Viscosity determination is crucial for understanding fluid behaviour in relation to rheological and physical properties, which have great implications for engineering aspects such as friction or lubrication. With the present experimental device, three different fluids are analyzed (water, ketchup, and a mixture of cornstarch and water). Tangential stress is measured versus velocity in order to characterize all the fluids under different thermal conditions. A mathematical fitting process is proposed to adjust the results to the expected analytical expressions, and good results are obtained for these fittings, with R2 greater than 0.88 in every case.
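A fit of the kind described can be sketched with the Ostwald-de Waele power-law model tau = K * rate**n, linearised by taking logarithms; all names are illustrative, since the article's exact expressions are not given in the abstract:

```python
import math

def fit_power_law(shear_rates, stresses):
    """Fit tau = K * rate**n by linear regression in log-log space.

    Returns (K, n, r_squared), with R^2 computed in log space.
    """
    xs = [math.log(g) for g in shear_rates]
    ys = [math.log(t) for t in stresses]
    n_pts = len(xs)
    mx, my = sum(xs) / n_pts, sum(ys) / n_pts
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx                 # flow behaviour index n
    intercept = my - slope * mx       # log K
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return math.exp(intercept), slope, 1.0 - ss_res / ss_tot

# Synthetic shear-thinning data: tau = 2.0 * rate**0.5 (n < 1)
rates = [1.0, 2.0, 4.0, 8.0, 16.0]
stresses = [2.0 * r ** 0.5 for r in rates]
K, n, r2 = fit_power_law(rates, stresses)
print(K, n, r2)  # ≈ 2.0, 0.5, 1.0
```

An exponent n below 1 indicates shear-thinning behaviour (as for ketchup), n above 1 shear-thickening (as for the cornstarch mixture), and n = 1 recovers a Newtonian fluid.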

  13. Extreme value modelling of Ghana stock exchange index.

    Science.gov (United States)

    Nortey, Ezekiel N N; Asare, Kwabena; Mettle, Felix Okoe

    2015-01-01

    Modelling of extreme events has always been of interest in fields such as hydrology and meteorology. However, after the recent global financial crises, appropriate models for modelling such rare events leading to these crises have become quite essential in the finance and risk management fields. This paper models the extreme values of the Ghana stock exchange all-shares index (2000-2010) by applying extreme value theory (EVT) to fit a model to the tails of the daily stock returns data. A conditional approach of the EVT was preferred, and hence an ARMA-GARCH model was fitted to the data to correct for the effects of autocorrelation and conditional heteroscedastic terms present in the returns series before the EVT method was applied. The Peak Over Threshold approach of the EVT, which fits a Generalized Pareto Distribution (GPD) model to excesses above a certain selected threshold, was employed. Maximum likelihood estimates of the model parameters were obtained and the model's goodness of fit was assessed graphically using Q-Q, P-P and density plots. The findings indicate that the GPD provides an adequate fit to the data of excesses. The sizes of extreme daily Ghanaian stock market movements were then computed using the value at risk and expected shortfall risk measures at some high quantiles, based on the fitted GPD model.
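The Peaks Over Threshold step can be sketched as follows. For brevity the GPD shape and scale are estimated by the method of moments rather than by maximum likelihood as in the paper, and the final line is the standard POT tail-quantile (VaR) estimator; all names and values are illustrative:

```python
import random

def pot_var(losses, threshold_q=0.95, p=0.99):
    """Peaks-Over-Threshold VaR sketch.

    Fit a GPD to excesses above a high threshold using moment estimators,
    then invert the POT tail estimator for the p-quantile of losses.
    """
    xs = sorted(losses)
    n = len(xs)
    u = xs[int(threshold_q * n)]                  # threshold at the 95% quantile
    excesses = [x - u for x in xs if x > u]
    nu = len(excesses)
    m = sum(excesses) / nu                        # sample mean of excesses
    v = sum((e - m) ** 2 for e in excesses) / (nu - 1)
    xi = 0.5 * (1.0 - m * m / v)                  # GPD shape (method of moments)
    sigma = 0.5 * m * (m * m / v + 1.0)           # GPD scale (method of moments)
    var_p = u + (sigma / xi) * (((n / nu) * (1.0 - p)) ** (-xi) - 1.0)
    return xi, sigma, var_p

# Exponential losses have a GPD tail with shape xi = 0; the true 99% VaR
# is -ln(0.01) ≈ 4.61 for unit-rate exponentials.
random.seed(1)
sample = [random.expovariate(1.0) for _ in range(5000)]
xi, sigma, var99 = pot_var(sample)
print(xi, sigma, var99)
```

In practice maximum likelihood estimation is preferred (as in the paper) and the threshold choice is checked with mean-excess and stability plots; the moment estimators above merely keep the sketch self-contained.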

  14. Modeling and forecasting petroleum futures volatility

    International Nuclear Information System (INIS)

    Sadorsky, Perry

    2006-01-01

    Forecasts of oil price volatility are important inputs into macroeconometric models, financial market risk assessment calculations like value at risk, and option pricing formulas for futures contracts. This paper uses several different univariate and multivariate statistical models to estimate forecasts of daily volatility in petroleum futures price returns. The out-of-sample forecasts are evaluated using forecast accuracy tests and market timing tests. The TGARCH model fits well for heating oil and natural gas volatility and the GARCH model fits well for crude oil and unleaded gasoline volatility. Simple moving average models seem to fit well in some cases provided the correct order is chosen. Despite the increased complexity, models like state space, vector autoregression and bivariate GARCH do not perform as well as the single equation GARCH model. Most models outperform a random walk and there is evidence of market timing. Parametric and non-parametric value at risk measures are calculated and compared. Non-parametric models outperform the parametric models in terms of number of exceedences in backtests. These results are useful for anyone needing forecasts of petroleum futures volatility. (author)
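The GARCH(1,1) recursion behind such volatility forecasts is compact enough to sketch directly; the parameter values below are illustrative, not the paper's estimates:

```python
def garch11_path(returns, omega, alpha, beta, sigma2_0):
    """Conditional variance path of a GARCH(1,1) model:

        sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}

    The last element is the one-step-ahead variance forecast.
    """
    sigma2 = [sigma2_0]
    for r in returns:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

# Illustrative parameters; a large return shock (r = 2) raises the forecast
path = garch11_path([1.0, 2.0], omega=0.1, alpha=0.2, beta=0.7, sigma2_0=1.0)
print(path)  # values close to [1.0, 1.0, 1.6]
```

In estimation, omega, alpha and beta are chosen by maximizing the likelihood of the observed returns; the recursion itself is all that is needed to turn fitted parameters into volatility forecasts.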

  15. Influence of a health-related physical fitness model on students' physical activity, perceived competence, and enjoyment.

    Science.gov (United States)

    Fu, You; Gao, Zan; Hannon, James; Shultz, Barry; Newton, Maria; Sibthorp, Jim

    2013-12-01

    This study was designed to explore the effects of a health-related physical fitness physical education model on students' physical activity, perceived competence, and enjoyment. 61 students (25 boys, 36 girls; M age = 12.6 yr., SD = 0.6) were assigned to two groups (health-related physical fitness physical education group, and traditional physical education group), and participated in one 50-min. weekly basketball class for 6 wk. Students' in-class physical activity was assessed using NL-1000 pedometers. The physical subscale of the Perceived Competence Scale for Children was employed to assess perceived competence, and children's enjoyment was measured using the Sport Enjoyment Scale. The findings suggest that students in the intervention group increased their perceived competence, enjoyment, and physical activity over a 6-wk. intervention, while the comparison group simply increased physical activity over time. Children in the intervention group had significantly greater enjoyment.

  16. Adjusting the Adjusted X²/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    Science.gov (United States)

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X²/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  17. Model Atmosphere Spectrum Fit to the Soft X-Ray Outburst Spectrum of SS Cyg

    Directory of Open Access Journals (Sweden)

    V. F. Suleimanov

    2015-02-01

    The X-ray spectrum of SS Cyg in outburst has a very soft component that can be interpreted as the fast-rotating, optically thick boundary layer on the white dwarf surface. This component was carefully investigated by Mauche (2004) using the Chandra LETG spectrum of this object in outburst. The spectrum shows broad (≈5 Å) spectral features that have been interpreted as a large number of absorption lines on a blackbody continuum with a temperature of ≈250 kK. Because the spectrum resembles the photospheric spectra of super-soft X-ray sources, we tried to fit it with high-gravity hot LTE stellar model atmospheres with solar chemical composition, specially computed for this purpose. We obtained a reasonably good fit to the 60–125 Å spectrum with the following parameters: Teff = 190 kK, log g = 6.2, and N_H = 8 × 10¹⁹ cm⁻², although at shorter wavelengths the observed spectrum has a much higher flux. The reasons for this are discussed. The hypothesis of a fast-rotating boundary layer is supported by the derived low surface gravity.

  18. CRAPONE, Optical Model Potential Fit of Neutron Scattering Data

    International Nuclear Information System (INIS)

    Fabbri, F.; Fratamico, G.; Reffo, G.

    2004-01-01

    1 - Description of problem or function: Automatic search for local and non-local optical potential parameters for neutrons. Total, elastic and differential elastic cross sections, l=0 and l=1 strength functions, and the scattering length can be considered. 2 - Method of solution: A fitting procedure is applied to different sets of experimental data depending on the local or non-local approximation chosen. In the non-local approximation the fitting procedure can be performed simultaneously over the whole energy range. The best fit is obtained when a set of parameters is found for which χ² is at its minimum. The solution of the system of equations is obtained by diagonalization of the matrix according to the Jacobi method.

  19. Fitting models of continuous trait evolution to incompletely sampled comparative data using approximate Bayesian computation.

    Science.gov (United States)

    Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E

    2012-03-01

    In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov-Chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking.
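MECCA's ABC machinery is elaborate, but the core ABC rejection idea it builds on can be sketched in a few lines: draw parameters from the prior, simulate data, and keep the draws whose summary statistic lands close to the observed one. The toy Gaussian-mean example below uses entirely illustrative names and values, not MECCA itself:

```python
import random

def abc_rejection(observed_mean, n_obs, prior_draw, n_sims=20000, eps=0.05):
    """ABC rejection sampler for the mean of a unit-variance Gaussian.

    Accepted prior draws approximate the posterior given the observed mean.
    """
    accepted = []
    for _ in range(n_sims):
        theta = prior_draw()
        sim_mean = sum(random.gauss(theta, 1.0) for _ in range(n_obs)) / n_obs
        if abs(sim_mean - observed_mean) < eps:
            accepted.append(theta)
    return accepted

random.seed(7)
post = abc_rejection(observed_mean=1.8, n_obs=50,
                     prior_draw=lambda: random.uniform(0.0, 4.0))
print(sum(post) / len(post))  # posterior mean, close to 1.8
```

The appeal for macroevolutionary models is exactly what the abstract notes: the simulation step only requires being able to generate data under the model, not to evaluate a likelihood.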

  20. Fitting and Testing Conditional Multinormal Partial Credit Models

    Science.gov (United States)

    Hessen, David J.

    2012-01-01

    A multinormal partial credit model for factor analysis of polytomously scored items with ordered response categories is derived using an extension of the Dutch Identity (Holland in "Psychometrika" 55:5-18, 1990). In the model, latent variables are assumed to have a multivariate normal distribution conditional on unweighted sums of item…

  1. HRM: HII Region Models

    Science.gov (United States)

    Wenger, Trey V.; Kepley, Amanda K.; Balser, Dana S.

    2017-07-01

    HII Region Models fits HII region models to observed radio recombination line and radio continuum data. The algorithm includes the calculation of departure coefficients to correct for non-LTE effects. HII Region Models has been used to model star formation in the nucleus of IC 342.

  2. Decision making on fitness landscapes

    Science.gov (United States)

    Arthur, R.; Sibani, P.

    2017-04-01

    We discuss fitness landscapes and how they can be modified to account for co-evolution. We are interested in using the landscape as a way to model rational decision making in a toy economic system. We develop a model very similar to the Tangled Nature Model of Christensen et al. that we call the Tangled Decision Model. This is a natural setting for our discussion of co-evolutionary fitness landscapes. We use a Monte Carlo step to simulate decision making and investigate two different decision making procedures.

  3. Decision Making on Fitness Landscapes

    DEFF Research Database (Denmark)

    Arthur, Rudy; Sibani, Paolo

    2017-01-01

    We discuss fitness landscapes and how they can be modified to account for co-evolution. We are interested in using the landscape as a way to model rational decision making in a toy economic system. We develop a model very similar to the Tangled Nature Model of Christensen et al. that we call the Tangled Decision Model. This is a natural setting for our discussion of co-evolutionary fitness landscapes. We use a Monte Carlo step to simulate decision making and investigate two different decision making procedures.

  4. GOSSIP: SED fitting code

    Science.gov (United States)

    Franzetti, Paolo; Scodeggio, Marco

    2012-10-01

    GOSSIP fits the electro-magnetic emission of an object (the SED, Spectral Energy Distribution) against synthetic models to find the simulated one that best reproduces the observed data. It builds up the observed SED of an object (or a large sample of objects) by combining magnitudes in different bands and possibly a spectrum; it then performs a chi-square minimization fitting procedure against a set of synthetic models. The fitting results are used to estimate a number of physical parameters like the Star Formation History, absolute magnitudes, stellar mass and their Probability Distribution Functions.

  5. Survival analysis of clinical mastitis data using a nested frailty Cox model fit as a mixed-effects Poisson model.

    Science.gov (United States)

    Elghafghuf, Adel; Dufour, Simon; Reyher, Kristen; Dohoo, Ian; Stryhn, Henrik

    2014-12-01

    Mastitis is a complex disease affecting dairy cows and is considered to be the most costly disease of dairy herds. The hazard of mastitis is a function of many factors, both managerial and environmental, making its control a difficult issue for milk producers. Observational studies of clinical mastitis (CM) often generate datasets with a number of characteristics which influence the analysis of those data: the outcome of interest may be the time to occurrence of a case of mastitis, predictors may change over time (time-dependent predictors), the effects of factors may change over time (time-dependent effects), there are usually multiple hierarchical levels, and datasets may be very large. Analysis of such data often requires expansion of the data into the counting-process format - leading to larger datasets - thus complicating the analysis and requiring excessive computing time. In this study, a nested frailty Cox model with time-dependent predictors and effects was applied to Canadian Bovine Mastitis Research Network data in which 10,831 lactations of 8035 cows from 69 herds were followed through lactation until the first occurrence of CM. The model was fit to the data as a Poisson model with nested normally distributed random effects at the cow and herd levels. Risk factors associated with the hazard of CM during the lactation were identified, such as parity, calving season, herd somatic cell score, pasture access, fore-stripping, and proportion of treated cases of CM in a herd. The analysis showed that most of the predictors had a strong effect early in lactation and also demonstrated substantial variation in the baseline hazard among cows and between herds. A small simulation study for a setting similar to the real data was conducted to evaluate the Poisson maximum likelihood estimation approach with both the Gaussian quadrature method and the Laplace approximation. Further, the performance of the two methods was compared with the performance of a widely used estimation

  6. In Search of Optimal Cognitive Diagnostic Model(s) for ESL Grammar Test Data

    Science.gov (United States)

    Yi, Yeon-Sook

    2017-01-01

    This study compares five cognitive diagnostic models in search of optimal one(s) for English as a Second Language grammar test data. Using a unified modeling framework that can represent specific models with proper constraints, the article first fit the full model (the log-linear cognitive diagnostic model, LCDM) and investigated which model…

  7. Ideas for fast accelerator model calibration

    International Nuclear Information System (INIS)

    Corbett, J.

    1997-05-01

    With the advent of a simple matrix inversion technique, measurement-based storage ring modeling has made rapid progress in recent years. Using fast computers with large memory, the matrix inversion procedure typically adjusts up to 10³ model variables to fit on the order of 10⁵ measurements. The results have been surprisingly accurate. Physics aside, one of the next frontiers is to simplify the process and to reduce computation time. In this paper, the authors discuss two approaches to speed up the model calibration process: recursive least-squares fitting and a piecewise fitting approach.
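The recursive least-squares idea mentioned above, updating the estimate one measurement at a time instead of re-solving the full system, can be sketched for a two-parameter linear model (the implementation is illustrative, not the authors' code):

```python
def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least-squares step for y = theta[0]*x[0] + theta[1]*x[1].

    theta is the current 2-vector estimate, P the 2x2 covariance matrix,
    lam a forgetting factor (1.0 = ordinary least squares).
    """
    Px = [P[0][0] * x[0] + P[0][1] * x[1],
          P[1][0] * x[0] + P[1][1] * x[1]]
    denom = lam + x[0] * Px[0] + x[1] * Px[1]
    K = [Px[0] / denom, Px[1] / denom]                  # gain vector
    err = y - (theta[0] * x[0] + theta[1] * x[1])       # prediction error
    theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
    # P <- (P - K x^T P) / lam
    xP = [x[0] * P[0][0] + x[1] * P[1][0],
          x[0] * P[0][1] + x[1] * P[1][1]]
    P = [[(P[0][0] - K[0] * xP[0]) / lam, (P[0][1] - K[0] * xP[1]) / lam],
         [(P[1][0] - K[1] * xP[0]) / lam, (P[1][1] - K[1] * xP[1]) / lam]]
    return theta, P

# Recover y = 3 + 2*t from noiseless samples, one measurement at a time
theta, P = [0.0, 0.0], [[1e6, 0.0], [0.0, 1e6]]
for t in range(10):
    theta, P = rls_update(theta, P, [1.0, float(t)], 3.0 + 2.0 * t)
print(theta)  # ≈ [3.0, 2.0]
```

Each update costs a fixed amount of work regardless of how many measurements have already been absorbed, which is the attraction when fitting 10³ variables to 10⁵ measurements incrementally.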

  8. Ignoring imperfect detection in biological surveys is dangerous: a response to 'fitting and interpreting occupancy models'.

    Directory of Open Access Journals (Sweden)

    Gurutzeta Guillera-Arroita

    In a recent paper, Welsh, Lindenmayer and Donnelly (WLD) question the usefulness of models that estimate species occupancy while accounting for detectability. WLD claim that these models are difficult to fit and argue that disregarding detectability can be better than trying to adjust for it. We think that this conclusion and the subsequent recommendations are not well founded and may negatively impact the quality of statistical inference in ecology and related management decisions. Here we respond to WLD's claims, evaluating their arguments in detail, using simulations and/or theory to support our points. In particular, WLD argue that both disregarding and accounting for imperfect detection lead to the same estimator performance regardless of sample size when detectability is a function of abundance. We show that this, the key result of their paper, only holds for cases of extreme heterogeneity like the single scenario they considered. Our results illustrate the dangers of disregarding imperfect detection. When ignored, occupancy and detection are confounded: the same naïve occupancy estimates can be obtained for very different true levels of occupancy, so the size of the bias is unknowable. Hierarchical occupancy models separate occupancy and detection, and imprecise estimates simply indicate that more data are required for robust inference about the system in question. As for any statistical method, when the underlying assumptions of simple hierarchical models are violated, their reliability is reduced. Resorting, in those instances where hierarchical occupancy models do not perform well, to the naïve occupancy estimator does not provide a satisfactory solution. The aim should instead be to achieve better estimation by minimizing the effect of these issues during design, data collection and analysis, ensuring that the right amount of data is collected and model assumptions are met, and considering model extensions where appropriate.
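The confounding the authors describe is easy to reproduce by simulation. With true occupancy 0.6, per-visit detection probability 0.3 and three visits per site (all values illustrative), the naive estimator, i.e. the fraction of sites with at least one detection, lands far below the true occupancy:

```python
import random

def naive_occupancy(psi, p, n_sites=10000, n_visits=3, seed=42):
    """Naive occupancy estimate: fraction of sites with >= 1 detection.

    Ignores imperfect detection, so its expectation is
    psi * (1 - (1 - p)**n_visits), not psi.
    """
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_sites):
        occupied = rng.random() < psi
        if occupied and any(rng.random() < p for _ in range(n_visits)):
            detected += 1
    return detected / n_sites

est = naive_occupancy(psi=0.6, p=0.3)
print(est)  # ≈ 0.6 * (1 - 0.7**3) ≈ 0.39, well below the true 0.6
```

The same estimate of about 0.39 would also arise from, say, psi = 0.8 with a lower detection probability, which is exactly the unknowable-bias point made above.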

  9. Potential application of digital image-processing method and fitted logistic model to the control of oriental fruit moths (Grapholita molesta Busck).

    Science.gov (United States)

    Zhao, Z G; Rong, E H; Li, S C; Zhang, L J; Zhang, Z W; Guo, Y Q; Ma, R Y

    2016-08-01

    Monitoring of oriental fruit moths (Grapholita molesta Busck) is a prerequisite for their control. This study introduced a digital image-processing method and a logistic model for the control of oriental fruit moths. First, five triangular sex pheromone traps were installed separately within each area of 667 m² in a peach orchard to monitor oriental fruit moths consecutively for 3 years. Next, full-view images of oriental fruit moths were collected via a digital camera and then subjected to graying, separation and morphological analysis for automatic counting using MATLAB software. Afterwards, the results of automatic counting were used to fit a logistic model to forecast the control threshold and key control period. There was a high consistency between automatic counting and manual counting (0.99). Based on the fitted logistic model, oriental fruit moths had four occurrence peaks during a year, with a time-lag of 15-18 days between the adult occurrence peak and the larval damage peak. Additionally, the key control period was from 28 June to 3 July each year, when the wormy fruit rate reached up to 5% and the trapping volume was approximately 10.2 per day per trap. The key control period for the overwintering generation was 25 April. This study provides an automatic counting method and a fitted logistic model with great potential for application to the control of oriental fruit moths.
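When the asymptote K is known, a logistic curve of this general kind can be fitted by a simple logit linearisation. The functional form K/(1 + exp(-(t - m)/s)) and all names and values here are illustrative, since the abstract does not give the study's exact model specification:

```python
import math

def fit_logistic(ts, ys, K):
    """Fit y = K / (1 + exp(-(t - m)/s)) given the asymptote K.

    Uses linear regression on logit(y/K) = (t - m)/s; returns (m, s).
    """
    zs = [math.log((y / K) / (1.0 - y / K)) for y in ys]  # logit transform
    n = len(ts)
    mt, mz = sum(ts) / n, sum(zs) / n
    slope = (sum((t - mt) * (z - mz) for t, z in zip(ts, zs))
             / sum((t - mt) ** 2 for t in ts))
    intercept = mz - slope * mt
    s = 1.0 / slope          # time scale of the rise
    m = -intercept * s       # inflection point (midpoint time)
    return m, s

# Synthetic cumulative trap counts: K = 100 moths, midpoint day 30, scale 5
ts = [10, 20, 25, 30, 35, 40, 50]
ys = [100 / (1 + math.exp(-(t - 30) / 5)) for t in ts]
m_hat, s_hat = fit_logistic(ts, ys, K=100)
print(m_hat, s_hat)  # ≈ 30.0, 5.0
```

With noisy counts, the same transform still gives usable starting values for a full non-linear least-squares fit of all three parameters.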

  10. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    Science.gov (United States)

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of the lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, the Wood, Dhanoa and Sikka mixed models provided the best fit of the lactation curve for FPR in third-parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for the Dijkstra model in the third lactation, under-predicted the test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
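The AIC/BIC ranking used to compare the candidate curves follows directly from each model's fit quality and parameter count. For least-squares fits the Gaussian log-likelihood reduces to a function of the residual sum of squares, giving the minimal sketch below (an illustrative helper, not the SAS NLMIXED machinery):

```python
import math

def aic_bic(sse, n, k):
    """AIC and BIC for a least-squares fit with n points and k parameters,
    using the Gaussian log-likelihood up to an additive constant:
        AIC = n*ln(SSE/n) + 2k,   BIC = n*ln(SSE/n) + k*ln(n)
    """
    base = n * math.log(sse / n)
    return base + 2 * k, base + k * math.log(n)

# A 4-parameter curve must cut the SSE enough to beat a 3-parameter one;
# here the small improvement (12.0 -> 11.9) does not pay for the penalty.
a3 = aic_bic(sse=12.0, n=200, k=3)
a4 = aic_bic(sse=11.9, n=200, k=4)
print(a3[0] < a4[0])  # True: the simpler model wins on AIC
```

BIC penalizes the extra parameter more heavily than AIC for n > 7 or so (ln(n) > 2), which is why the two criteria can disagree on close calls between curves like Wood and Dijkstra.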

  11. Hierarchical Bass model

    International Nuclear Information System (INIS)

    Tashiro, Tohru

    2014-01-01

    We propose a new model about diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter met, where (non-)adopters mean people (not) possessing the product. This effect is lacking in the Bass model. As an application, we utilize the model to fit the iPod sales data, and better agreement is obtained than with the Bass model.
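For reference, the standard Bass model that this memory effect extends can be simulated with its discrete-time recursion; the innovation and imitation coefficients p and q and the market size M below are illustrative:

```python
def bass_adoption(p, q, M, periods):
    """Discrete-time Bass diffusion model.

    New adopters per period: n_t = (p + q * N_{t-1} / M) * (M - N_{t-1}),
    where N is cumulative adoption; returns the cumulative path.
    """
    N = [0.0]
    for _ in range(periods):
        n_t = (p + q * N[-1] / M) * (M - N[-1])
        N.append(N[-1] + n_t)
    return N

# Typical textbook magnitudes: p ≈ 0.03 (innovation), q ≈ 0.38 (imitation)
N = bass_adoption(p=0.03, q=0.38, M=1000.0, periods=30)
print(round(N[-1]))  # near saturation, close to M = 1000
```

The proposed hierarchical variant would replace the single imitation term q*N/M with one that depends on each non-adopter's accumulated exposures, which the memoryless Bass recursion above cannot represent.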

  12. Hierarchical Bass model

    Science.gov (United States)

    Tashiro, Tohru

    2014-03-01

    We propose a new model about diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter met, where (non-)adopters mean people (not) possessing the product. This effect is lacking in the Bass model. As an application, we utilize the model to fit the iPod sales data, and better agreement is obtained than with the Bass model.

  13. The 'fitting problem' in cosmology

    International Nuclear Information System (INIS)

    Ellis, G.F.R.; Stoeger, W.

    1987-01-01

    The paper considers the best way to fit an idealised exactly homogeneous and isotropic universe model to a realistic ('lumpy') universe; whether made explicit or not, some such approach of necessity underlies the use of the standard Robertson-Walker models as models of the real universe. Approaches based on averaging, normal coordinates and null data are presented, the latter offering the best opportunity to relate the fitting procedure to data obtainable by astronomical observations. (author)

  14. Modeling Simple Driving Tasks with a One-Boundary Diffusion Model

    Science.gov (United States)

    Ratcliff, Roger; Strayer, David

    2014-01-01

    A one-boundary diffusion model was applied to the data from two experiments in which subjects were performing a simple simulated driving task. In the first experiment, the same subjects were tested on two driving tasks using a PC-based driving simulator and the psychomotor vigilance test (PVT). The diffusion model fit the response time (RT) distributions for each task and individual subject well. Model parameters were found to correlate across tasks, which suggests common component processes were being tapped in the three tasks. The model was also fit to a distracted driving experiment of Cooper and Strayer (2008). Results showed that distraction altered performance by affecting the rate of evidence accumulation (drift rate) and/or increasing the boundary settings. This provides an interpretation of cognitive distraction whereby conversing on a cell phone diverts attention from the normal accumulation of information in the driving environment. PMID:24297620
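A one-boundary diffusion process is easy to simulate to see the right-skewed RT distributions such a model produces. For a Wiener process with drift v and a single absorbing boundary a, the mean first-passage time is a/v; the names and parameter values below are illustrative, and the non-decision time component of a full RT model is omitted:

```python
import random

def simulate_rts(drift, boundary, n_trials=1000, dt=0.005, seed=11):
    """First-passage times of dx = drift*dt + sqrt(dt)*N(0,1) to a single
    absorbing boundary, via Euler-Maruyama simulation."""
    rng = random.Random(seed)
    sqrt_dt = dt ** 0.5
    rts = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while x < boundary:                       # accumulate evidence
            x += drift * dt + sqrt_dt * rng.gauss(0.0, 1.0)
            t += dt
        rts.append(t)
    return rts

rts = simulate_rts(drift=1.0, boundary=2.0)
print(sum(rts) / len(rts))  # ≈ boundary/drift = 2.0 for this parameterisation
```

Lowering the drift (as cell-phone distraction is argued to do above) stretches the right tail of the simulated RT distribution, while raising the boundary shifts and widens the whole distribution.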

  15. Standard Model updates and new physics analysis with the Unitarity Triangle fit

    International Nuclear Information System (INIS)

    Bevan, A.; Bona, M.; Ciuchini, M.; Derkach, D.; Franco, E.; Silvestrini, L.; Lubicz, V.; Tarantino, C.; Martinelli, G.; Parodi, F.; Schiavi, C.; Pierini, M.; Sordini, V.; Stocchi, A.; Vagnoni, V.

    2013-01-01

    We present the summer 2012 update of the Unitarity Triangle (UT) analysis performed by the UTfit Collaboration within the Standard Model (SM) and beyond. The increased accuracy on several of the fundamental constraints is now enhancing some of the tensions amongst and within the constraints themselves. In particular, the long-standing tension between exclusive and inclusive determinations of the V_ub and V_cb CKM matrix elements is now playing a major role. We then present the generalisation of the UT analysis to investigate new physics (NP) effects, updating the constraints on NP contributions to ΔF=2 processes. In the NP analysis, both CKM and NP parameters are fitted simultaneously to obtain the possible NP effects in any specific sector. Finally, based on the NP constraints, we derive upper bounds on the coefficients of the most general ΔF=2 effective Hamiltonian. These upper bounds can be translated into lower bounds on the scale of NP that contributes to these low-energy effective interactions.

  16. Geometric Modelling of Octagonal Lamp Poles

    Science.gov (United States)

    Chan, T. O.; Lichti, D. D.

    2014-06-01

    Lamp poles are one of the most abundant highway and community components in modern cities. Their supporting parts are primarily tapered octagonal cones specifically designed for wind resistance. The geometry and the positions of the lamp poles are important information for various applications. For example, they are important for monitoring deformation of aged lamp poles, maintaining an efficient highway GIS system, and facilitating possible feature-based calibration of mobile LiDAR systems. In this paper, we present a novel geometric model for octagonal lamp poles. The model consists of seven parameters in which a rotation about the z-axis is included, and points are constrained by the trigonometric property of 2D octagons after applying the rotations. For the geometric fitting of the lamp pole point cloud captured by a terrestrial LiDAR, accurate initial parameter values are essential. They can be estimated by first fitting the points to a circular cone model, followed by some basic point cloud processing techniques. The model was verified by fitting both simulated and real data. The real data include several lamp pole point clouds captured by: (1) Faro Focus 3D and (2) Velodyne HDL-32E. The fitting results using the proposed model are promising, and an improvement of up to 2.9 mm in fitting accuracy was realized for the real lamp pole point clouds compared to using the conventional circular cone model. The overall result suggests that the proposed model is appropriate and rigorous.

  17. The application of a social cognition model in explaining fruit intake in Austrian, Norwegian and Spanish schoolchildren using structural equation modelling

    Directory of Open Access Journals (Sweden)

    Pérez-Rodrigo Carmen

    2007-11-01

    Full Text Available Abstract Background The aim of this paper was to test the goodness of fit of the Attitude – Social influence – self-Efficacy (ASE) model in explaining schoolchildren's intentions to eat fruit and their actual fruit intake in Austria, Norway and Spain; to assess how well the model could explain the observed variance in intention to eat fruit and in reported fruit intake and to investigate whether the same model would fit data from all three countries. Methods Samples consisted of schoolchildren from three of the countries participating in the cross-sectional part of the Pro Children project. Sample size varied from 991 in Austria to 1297 in Spain. Mean age ranged from 11.3 to 11.4 years. The initial model was designed using items and constructs from the Pro Children study. Factor analysis was conducted to test the structure of the measures in the model. The Norwegian sample was used to test the latent variable structure, to make a preliminary assessment of model fit, and to modify the model to increase goodness of fit with the data. The original and modified models were then applied to the Austrian and Spanish samples. All model analyses were carried out using structural equation modelling techniques. Results The ASE model fitted the Norwegian and Spanish data well. For Austria, a slightly more complex model was needed. For this reason multi-sample analysis to test equality in factor structure and loadings across countries could not be used. The models explained between 51% and 69% of the variance in intention to eat fruit, and 27% to 38% of the variance in reported fruit intake. Conclusion Structural equation modelling showed that a rather parsimonious model was useful in explaining the variation in fruit intake of 11-year-old schoolchildren in Norway and Spain. For Austria, more modifications were needed to fit the data.

  18. Validation of an employee satisfaction model: A structural equation model approach

    Directory of Open Access Journals (Sweden)

    Ophillia Ledimo

    2015-01-01

    Full Text Available The purpose of this study was to validate an employee satisfaction model and to determine the relationships between the different dimensions of the concept, using the structural equation modelling approach (SEM). A cross-sectional quantitative survey design was used to collect data from a random sample (n = 759) of permanent employees of a parastatal organisation. Data was collected using the Employee Satisfaction Survey (ESS) to measure employee satisfaction dimensions. Following the steps of SEM analysis, the three domains and latent variables of employee satisfaction were specified as organisational strategy, policies and procedures, and outcomes. Confirmatory factor analysis of the latent variables was conducted, and the path coefficients of the latent variables of the employee satisfaction model indicated a satisfactory fit for all these variables. The goodness-of-fit measure of the model indicated both absolute and incremental goodness-of-fit, confirming the relationships between the latent and manifest variables. It also indicated that the latent variables, organisational strategy, policies and procedures, and outcomes, are the main indicators of employee satisfaction. This study adds to the knowledge base on employee satisfaction and makes recommendations for future research.

  19. Testing the validity of stock-recruitment curve fits

    International Nuclear Information System (INIS)

    Christensen, S.W.; Goodyear, C.P.

    1988-01-01

    The utilities relied heavily on the Ricker stock-recruitment model as the basis for quantifying biological compensation in the Hudson River power case. They presented many fits of the Ricker model to data derived from striped bass catch and effort records compiled by the National Marine Fisheries Service. Based on this curve-fitting exercise, a value of 4 was chosen for the parameter alpha in the Ricker model, and this value was used to derive the utilities' estimates of the long-term impact of power plants on striped bass populations. A technique was developed and applied to address a single fundamental question: if the Ricker model were applicable to the Hudson River striped bass population, could the estimates of alpha from the curve-fitting exercise be considered reliable? The technique involved constructing a simulation model that incorporated the essential biological features of the population and simulated the characteristics of the available actual catch-per-unit-effort data through time. The ability or failure to retrieve the known parameter values underlying the simulation model via the curve-fitting exercise was a direct test of the reliability of the results of fitting stock-recruitment curves to the real data. The results demonstrated that estimates of alpha from the curve-fitting exercise were not reliable. The simulation-modeling technique provides an effective way to identify whether or not particular data are appropriate for use in fitting such models. 39 refs., 2 figs., 3 tabs
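    The reliability test described above hinges on fitting the Ricker curve, R = α·S·exp(−β·S), to data whose true parameters are known. A minimal sketch of that idea with synthetic data; the units, parameter values and noise model are hypothetical, not the Hudson River data:

```python
import numpy as np
from scipy.optimize import curve_fit

def ricker(S, alpha, beta):
    # Ricker stock-recruitment curve: R = alpha * S * exp(-beta * S)
    return alpha * S * np.exp(-beta * S)

# Synthetic "truth": alpha = 4 (the value chosen in the case), beta = 0.01,
# with multiplicative lognormal noise standing in for observation error.
rng = np.random.default_rng(1)
S = np.linspace(5.0, 200.0, 40)
R = ricker(S, 4.0, 0.01) * rng.lognormal(0.0, 0.1, S.size)

(alpha_hat, beta_hat), _ = curve_fit(ricker, S, R, p0=[1.0, 0.005])
```

    With clean synthetic data the fit recovers alpha well; the record's point is that with realistic catch-per-unit-effort noise structures it did not, which is exactly what this kind of known-truth simulation can expose.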

  20. A nonlinear model of gold production in Malaysia

    Science.gov (United States)

    Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi

    2014-06-01

    Malaysia is a country rich in natural resources, and one of them is gold. Gold has become an important national commodity. This study was conducted to determine a model that fits the gold production in Malaysia well for the years 1995-2010. Five nonlinear models are presented in this study: the Logistic, Gompertz, Richards, Weibull and Chapman-Richards models. These models are used to fit the cumulative gold production in Malaysia. The best model is then selected based on model performance. The performance of the fitted models is measured by sum of squares error, root mean square error, coefficient of determination, mean relative error, mean absolute error and mean absolute percentage error. This study found that the Weibull model significantly outperforms the other models. To confirm that Weibull is the best model, the latest data were fitted to the model. Once again, the Weibull model gives the lowest readings on all types of measurement error. We conclude that future gold production in Malaysia can be predicted with the Weibull model, and this could be an important finding for Malaysia in planning its economic activities.
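    A Weibull growth curve of the kind compared in this study can be fitted with ordinary nonlinear least squares. A sketch, assuming the common three-parameter form y = a(1 − exp(−b·t^c)) and hypothetical synthetic data (the paper's actual production figures are not reproduced here; the year coding t = 1..16 for 1995-2010 is an assumption):

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_growth(t, a, b, c):
    # Cumulative-production asymptote a, scale b, shape c
    return a * (1.0 - np.exp(-b * t**c))

# Hypothetical cumulative series for years coded t = 1..16, with small noise
t = np.arange(1, 17, dtype=float)
rng = np.random.default_rng(2)
y = weibull_growth(t, 60.0, 0.02, 1.8) + rng.normal(0.0, 0.5, t.size)

popt, _ = curve_fit(weibull_growth, t, y, p0=[50.0, 0.05, 1.0], maxfev=10000)
rmse = np.sqrt(np.mean((y - weibull_growth(t, *popt))**2))
```

    The same error measures listed in the abstract (RMSE, MAE, MAPE, R²) can then be computed from the residuals to rank candidate models.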

  1. Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.

    Science.gov (United States)

    Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko

    2016-03-01

    In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.
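    The split-half protocol above (two random halves of N = 332 each, one for estimation and one for hold-out prediction) can be sketched for the regression side with plain least squares. The fuzzy logic model is not reproduced here, and the data below are a synthetic stand-in with hypothetical predictors:

```python
import numpy as np

# Synthetic stand-in for the N = 664 survey data: intercept + 3 predictors
rng = np.random.default_rng(3)
n = 664
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
beta_true = np.array([1.0, 0.5, -0.3, 0.2])
y = X @ beta_true + rng.normal(0.0, 1.0, n)

# Random split into an estimation half and a hold-out half
idx = rng.permutation(n)
train, hold = idx[:332], idx[332:]

# Ordinary least squares on the estimation half only
beta_hat, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

# Fit on the estimation data vs. predictive accuracy on "new" cases
mse_fit = np.mean((y[train] - X[train] @ beta_hat) ** 2)
mse_hold = np.mean((y[hold] - X[hold] @ beta_hat) ** 2)
```

    Comparing `mse_fit` with `mse_hold` for each competing model is the essence of the paper's second comparison.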

  2. Tumor Control Probability Modeling for Stereotactic Body Radiation Therapy of Early-Stage Lung Cancer Using Multiple Bio-physical Models

    Science.gov (United States)

    Liu, Feng; Tai, An; Lee, Percy; Biswas, Tithi; Ding, George X.; El Naqa, Isaam; Grimm, Jimm; Jackson, Andrew; Kong, Feng-Ming (Spring); LaCouture, Tamara; Loo, Billy; Miften, Moyed; Solberg, Timothy; Li, X Allen

    2017-01-01

    Purpose To analyze pooled clinical data using different radiobiological models and to understand the relationship between biologically effective dose (BED) and tumor control probability (TCP) for stereotactic body radiotherapy (SBRT) of early-stage non-small cell lung cancer (NSCLC). Methods and Materials The clinical data of 1-, 2-, 3-, and 5-year actuarial or Kaplan-Meier TCP from 46 selected studies in the literature were collected for SBRT of NSCLC. The TCP data were separated for Stage T1 and T2 tumors where possible, and otherwise collected for combined stages. BED was calculated at isocenters using six radiobiological models. For each model, the independent model parameters were determined from a fit to the TCP data using the least chi-square (χ²) method, with either one set of parameters regardless of tumor stage or two sets for T1 and T2 tumors separately. Results The fits to the clinical data yield consistent results of large α/β ratios of about 20 Gy for all models investigated. The regrowth model, which accounts for tumor repopulation and heterogeneity, leads to a better fit to the data; the fits of the other five models were indistinguishable from one another. Based on the fitted parameters, the models predict that T2 tumors require about an additional 1 Gy of physical dose at the isocenter per fraction (≤5 fractions) to achieve the optimal TCP compared to T1 tumors. Conclusion This systematic analysis of a large set of published clinical data using different radiobiological models shows that local TCP for SBRT of early-stage NSCLC has a strong dependence on BED, with large α/β ratios of about 20 Gy. The six models predict that a BED (calculated with α/β of 20 Gy) of 90 Gy is sufficient to achieve TCP ≥ 95%. Among the models considered, the regrowth model leads to a better fit to the clinical data. PMID:27871671
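    The BED referred to above follows from the standard linear-quadratic model, BED = n·d·(1 + d/(α/β)) for n fractions of dose d. A sketch with a logistic TCP curve in BED; the dose-response parameters (bed50, gamma50) are hypothetical placeholders, not the study's fitted values:

```python
def bed(n_fx, d, alpha_beta=20.0):
    # Linear-quadratic biologically effective dose: BED = n * d * (1 + d / (alpha/beta))
    return n_fx * d * (1.0 + d / alpha_beta)

def tcp_logistic(b, bed50=60.0, gamma50=1.0):
    # Hypothetical logistic dose-response in BED; bed50 and gamma50 are made-up
    # illustration values, not fitted parameters from the pooled analysis.
    return 1.0 / (1.0 + (bed50 / b) ** (4.0 * gamma50))

# Example: a common 3 x 18 Gy SBRT schedule with alpha/beta = 20 Gy
b = bed(3, 18.0)   # 54 * (1 + 18/20) = 102.6 Gy
p = tcp_logistic(b)
```

    With α/β = 20 Gy, large fraction doses inflate BED much less than they would with the conventional α/β = 10 Gy, which is why the fitted α/β ratio matters for schedule comparisons.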

  3. Random-Effects Models for Meta-Analytic Structural Equation Modeling: Review, Issues, and Illustrations

    Science.gov (United States)

    Cheung, Mike W.-L.; Cheung, Shu Fai

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…

  4. Modelling of Attentional Dwell Time

    DEFF Research Database (Denmark)

    Petersen, Anders; Kyllingsbæk, Søren; Bundesen, Claus

    2009-01-01

    . This confinement of attentional resources leads to the impairment in identifying the second target. With the model, we are able to produce close fits to data from the traditional two target dwell time paradigm. A dwell-time experiment with three targets has also been carried out for individual subjects...... and the model has been extended to fit these data....

  5. Reproductive fitness and dietary choice behavior of the genetic model organism Caenorhabditis elegans under semi-natural conditions.

    Science.gov (United States)

    Freyth, Katharina; Janowitz, Tim; Nunes, Frank; Voss, Melanie; Heinick, Alexander; Bertaux, Joanne; Scheu, Stefan; Paul, Rüdiger J

    2010-10-01

    Laboratory breeding conditions of the model organism C. elegans do not correspond with the conditions in its natural soil habitat. To assess the consequences of the differences in environmental conditions, the effects of air composition, medium and bacterial food on reproductive fitness and/or dietary-choice behavior of C. elegans were investigated. The reproductive fitness of C. elegans was maximal under oxygen deficiency and not influenced by a high fractional share of carbon dioxide. In media approximating natural soil structure, reproductive fitness was much lower than in standard laboratory media. In semi-natural media, the reproductive fitness of C. elegans was low with the standard laboratory food bacterium E. coli (γ-Proteobacteria), but significantly higher with C. arvensicola (Bacteroidetes) and B. tropica (β-Proteobacteria) as food. Dietary-choice experiments in semi-natural media revealed a low preference of C. elegans for E. coli but significantly higher preferences for C. arvensicola and B. tropica (among other bacteria). Dietary-choice experiments under quasi-natural conditions, which were feasible by fluorescence in situ hybridization (FISH) of bacteria, showed a high preference of C. elegans for Cytophaga-Flexibacter-Bacteroides, Firmicutes, and β-Proteobacteria, but a low preference for γ-Proteobacteria. The results show that data on C. elegans under standard laboratory conditions have to be carefully interpreted with respect to their biological significance.

  6. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda

    2009-05-12

    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. © 2009 Sociedad de Estadística e Investigación Operativa.

  7. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or

  8. The Electroweak Fit of the Standard Model after the Discovery of a New Boson at the LHC

    CERN Document Server

    Baak, M.

    2012-11-03

    In view of the discovery of a new boson by the ATLAS and CMS Collaborations at the LHC, we present an update of the global Standard Model (SM) fit to electroweak precision data. Assuming the new particle to be the SM Higgs boson, all fundamental parameters of the SM are known allowing, for the first time, to overconstrain the SM at the electroweak scale and assert its validity. Including the effects of radiative corrections and the experimental and theoretical uncertainties, the global fit exhibits a p-value of 0.07. The mass measurements by ATLAS and CMS agree within 1.3sigma with the indirect determination M_H=(94 +25 -22) GeV. Within the SM the W boson mass and the effective weak mixing angle can be accurately predicted to be M_W=(80.359 +- 0.011) GeV and sin^2(theta_eff^ell)=(0.23150 +- 0.00010) from the global fit. These results are compatible with, and exceed in precision, the direct measurements. For the indirect determination of the top quark mass we find m_t=(175.8 +2.7 -2.4) GeV, in agreement with t...

  9. The electroweak fit of the standard model after the discovery of a new boson at the LHC

    International Nuclear Information System (INIS)

    Baak, M.; Hoecker, A.; Schott, M.; Goebel, M.; Kennedy, D.; Moenig, K.; Haller, J.; Kogler, R.; Stelzer, J.

    2012-09-01

    In view of the discovery of a new boson by the ATLAS and CMS Collaborations at the LHC, we present an update of the global Standard Model (SM) fit to electroweak precision data. Assuming the new particle to be the SM Higgs boson, all fundamental parameters of the SM are known allowing, for the first time, to overconstrain the SM at the electroweak scale and assert its validity. Including the effects of radiative corrections and the experimental and theoretical uncertainties, the global fit exhibits a p-value of 0.07. The mass measurements by ATLAS and CMS agree within 1.3σ with the indirect determination M_H = 94 +25 -22 GeV. Within the SM the W boson mass and the effective weak mixing angle can be accurately predicted to be M_W = 80.359 ± 0.011 GeV and sin²θ_eff^l = 0.23150 ± 0.00010 from the global fit. These results are compatible with, and exceed in precision, the direct measurements. For the indirect determination of the top quark mass we find m_t = 175.8 +2.7 -2.4 GeV, in agreement with the kinematic and cross-section based measurements.

  10. Quadratic reactivity fuel cycle model

    International Nuclear Information System (INIS)

    Lewins, J.D.

    1985-01-01

    For educational purposes it is highly desirable to provide simple yet realistic models for fuel cycle and fuel economy. In particular, a lumped model without recourse to detailed spatial calculations would be very helpful in providing the student with a proper understanding of the purposes of fuel cycle calculations. A teaching model for fuel cycle studies based on a lumped model assuming the summability of partial reactivities with a linear dependence of reactivity usefully illustrates fuel utilization concepts. The linear burnup model does not satisfactorily represent natural enrichment reactors. A better model, showing the trend of initial plutonium production before subsequent fuel burnup and fission product generation, is a quadratic fit. The study of M-batch cycles, reloading 1/Mth of the core at end of cycle, is now complicated by nonlinear equations. A complete account of the asymptotic cycle for any order of M-batch refueling can be given and compared with the linear model. A complete account of the transient cycle can be obtained readily in the two-batch model and this exact solution would be useful in verifying numerical marching models. It is convenient to treat the parabolic fit ρ = 1 − τ² as a special case of the general quadratic fit ρ = 1 − Cτ − (1 − C)τ² in suitably normalized reactivity and cycle time units. The parabolic results are given in this paper.
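    Under the summability assumption above, the asymptotic M-batch cycle length follows from requiring the average of the batch partial reactivities to vanish at end of cycle, where batch i has accumulated burnup i·τ. A sketch in normalized units, assuming the quadratic model ρ(τ) = 1 − Cτ − (1 − C)τ² and equal batch weighting (an assumption of this sketch, not a detail given in the record):

```python
import numpy as np
from scipy.optimize import brentq

def eoc_core_reactivity(tau, M, C):
    # Mean of batch partial reactivities at end of cycle; batch i has burnup i*tau.
    # Quadratic reactivity model: rho(t) = 1 - C*t - (1 - C)*t**2, normalized units.
    i = np.arange(1, M + 1)
    rho = 1.0 - C * (i * tau) - (1.0 - C) * (i * tau) ** 2
    return rho.mean()

def cycle_length(M, C):
    # End-of-cycle condition: core reactivity (mean of partial reactivities) = 0
    return brentq(eoc_core_reactivity, 1e-6, 2.0, args=(M, C))
```

    Sanity checks: for M = 1 the cycle length is τ = 1 for both the linear (C = 1) and parabolic (C = 0) fits, while the linear M-batch case reduces to the closed form τ = 2/(M + 1), e.g. τ = 2/3 for two batches.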

  11. Universally sloppy parameter sensitivities in systems biology models.

    Directory of Open Access Journals (Sweden)

    Ryan N Gutenkunst

    2007-10-01

    Full Text Available Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.

  12. Universally sloppy parameter sensitivities in systems biology models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-10-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
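    The sloppy spectrum described above can be reproduced with a toy model; a sum of two exponentials is a standard illustration (chosen here for brevity, not one of the paper's biological models). The eigenvalues of the Gauss-Newton Hessian JᵀJ then span orders of magnitude:

```python
import numpy as np

def model(theta, t):
    # Sum of two exponential decays - a classic "sloppy" toy model
    return np.exp(-theta[0] * t) + np.exp(-theta[1] * t)

def jacobian(theta, t):
    # Analytic parameter sensitivities d f / d theta_k at each time point
    return np.column_stack([-t * np.exp(-theta[0] * t),
                            -t * np.exp(-theta[1] * t)])

t = np.linspace(0.0, 5.0, 50)
theta = np.array([1.0, 1.2])          # nearly degenerate decay rates
J = jacobian(theta, t)
eigvals = np.linalg.eigvalsh(J.T @ J)  # spectrum of the Gauss-Newton Hessian
ratio = eigvals.max() / eigvals.min()  # spans orders of magnitude
```

    The stiff eigendirection (roughly the sum of the two rates) is well constrained by data, while the sloppy direction (their difference) is not, which is why collective fits can pin down predictions without pinning down individual parameters.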

  13. Virtual Suit Fit Assessment Using Body Shape Model

    Data.gov (United States)

    National Aeronautics and Space Administration — Shoulder injury is one of the most serious risks for crewmembers in long-duration spaceflight. While suboptimal suit fit and contact pressures between the shoulder...

  14. Stand basal area model for Cunninghamia lanceolata (Lamb.) Hook ...

    African Journals Online (AJOL)

    When evaluating the predictive accuracy of the final model, the first measurement was used for estimation of random parameters. The Chapman–Richards model was finally selected for the basic model based on model-fitting statistics, and both the fitting model and validation data with site-, block- and plot-level random ...

  15. Collapsing Factors in Multitrait-Multimethod Models: Examining Consequences of a Mismatch Between Measurement Design and Model

    Directory of Open Access Journals (Sweden)

    Christian eGeiser

    2015-08-01

    Full Text Available Models of confirmatory factor analysis (CFA) are frequently applied to examine the convergent validity of scores obtained from multiple raters or methods in so-called multitrait-multimethod (MTMM) investigations. Many applications of CFA-MTMM and similarly structured models result in solutions in which at least one method (or specific) factor shows non-significant loading or variance estimates. Eid et al. (2008) distinguished between MTMM measurement designs with interchangeable (randomly selected) versus structurally different (fixed) methods and showed that each type of measurement design implies specific CFA-MTMM measurement models. In the current study, we hypothesized that some of the problems that are commonly seen in applications of CFA-MTMM models may be due to a mismatch between the underlying measurement design and the fitted models. Using simulations, we found that models with M method factors (where M is the total number of methods) and unconstrained loadings led to a higher proportion of solutions in which at least one method factor became empirically unstable when these models were fit to data generated from structurally different methods. The simulations also revealed that commonly used model goodness-of-fit criteria frequently failed to identify incorrectly specified CFA-MTMM models. We discuss implications of these findings for other complex CFA models in which similar issues occur, including nested (bifactor) and latent state-trait models.

  16. Exploratory structural equation modeling of personality data.

    Science.gov (United States)

    Booth, Tom; Hughes, David J

    2014-06-01

    The current article compares the use of exploratory structural equation modeling (ESEM) as an alternative to confirmatory factor analytic (CFA) models in personality research. We compare model fit, factor distinctiveness, and criterion associations of factors derived from ESEM and CFA models. In Sample 1 (n = 336) participants completed the NEO-FFI, the Trait Emotional Intelligence Questionnaire-Short Form, and the Creative Domains Questionnaire. In Sample 2 (n = 425) participants completed the Big Five Inventory and the depression and anxiety scales of the General Health Questionnaire. ESEM models provided better fit than CFA models, but ESEM solutions did not uniformly meet cutoff criteria for model fit. Factor scores derived from ESEM and CFA models correlated highly (.91 to .99), suggesting the additional factor loadings within the ESEM model add little in defining latent factor content. Lastly, criterion associations of each personality factor in CFA and ESEM models were near identical in both inventories. We provide an example of how ESEM and CFA might be used together in improving personality assessment. © The Author(s) 2014.

  17. SU-D-204-05: Fitting Four NTCP Models to Treatment Outcome Data of Salivary Glands Recorded Six Months After Radiation Therapy for Head and Neck Tumors

    Energy Technology Data Exchange (ETDEWEB)

    Mavroidis, P; Price, A; Kostich, M; Green, R; Das, S; Marks, L; Chera, B [University North Carolina, Chapel Hill, NC (United States); Amdur, R; Mendenhall, W [University of Florida, Gainesville, FL (United States); Sheets, N [University of North Carolina, Raleigh, NC (United States)

    2016-06-15

    Purpose: To estimate the radiobiological parameters of four popular NTCP models that describe the dose-response relations of salivary glands to the severity of patient-reported dry mouth 6 months post chemo-radiotherapy. To identify the glands which best correlate with the manifestation of those clinical endpoints. Finally, to evaluate the goodness-of-fit of the NTCP models. Methods: Forty-three patients were treated on a prospective multi-institutional phase II study for oropharyngeal squamous cell carcinoma. All the patients received 60 Gy IMRT and they reported symptoms using the novel patient-reported outcome version of the CTCAE. We derived the individual patient dosimetric data of the parotid and submandibular glands (SMG) as separate structures as well as combinations. The Lyman-Kutcher-Burman (LKB), Relative Seriality (RS), Logit and Relative Logit (RL) NTCP models were used to fit the patients' data. The fitting of the different models was assessed through the area under the receiver operating characteristic curve (AUC) and the Odds Ratio methods. Results: The AUC values were highest for the contralateral parotid for Grade ≥ 2 (0.762 for the LKB, RS, Logit and 0.753 for the RL). For the salivary glands the AUC values were: 0.725 for the LKB, RS, Logit and 0.721 for the RL. For the contralateral SMG the AUC values were: 0.721 for LKB, 0.714 for Logit and 0.712 for RS and RL. The Odds Ratio for the contralateral parotid was 5.8 (1.3–25.5) for all four NTCP models for the radiobiological dose threshold of 21 Gy. Conclusion: It was shown that all the examined NTCP models could fit the clinical data with very similar accuracy. The contralateral parotid gland appears to correlate best with the clinical endpoints of severe/very severe dry mouth. An EQD2 dose of 21 Gy appears to be a safe threshold to be used as a constraint in treatment planning.
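    The LKB model named above is, in its EUD form, a probit curve in dose: NTCP = Φ((EUD − TD50)/(m·TD50)), with Φ the standard normal CDF. A sketch with hypothetical TD50 and m values; the study's fitted parameters are not reported in this record:

```python
import math

def lkb_ntcp(eud, td50, m):
    # Lyman-Kutcher-Burman NTCP: probit curve in equivalent uniform dose,
    # NTCP = Phi((EUD - TD50) / (m * TD50)), Phi = standard normal CDF.
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical illustration values for a parotid gland (not the study's fit)
td50, m = 40.0, 0.45
for eud in (15.0, 21.0, 30.0):
    p = lkb_ntcp(eud, td50, m)
```

    By construction NTCP = 50% at EUD = TD50, and the slope parameter m sets how steeply complication risk rises around that dose, which is what a dose constraint such as the 21 Gy threshold above is trading off.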

  18. Loglinear Rasch model tests

    NARCIS (Netherlands)

    Kelderman, Hendrikus

    1984-01-01

    Existing statistical tests for the fit of the Rasch model have been criticized, because they are only sensitive to specific violations of its assumptions. Contingency table methods using loglinear models have been used to test various psychometric models. In this paper, the assumptions of the Rasch

  18. Next-to-leading order unitarity fits in Two-Higgs-Doublet models with soft ℤ₂ breaking

    Energy Technology Data Exchange (ETDEWEB)

    Cacchio, Vincenzo; Chowdhury, Debtosh; Eberhardt, Otto [Istituto Nazionale di Fisica Nucleare, Sezione di Roma,Piazzale Aldo Moro 2, I-00185 Roma (Italy); Murphy, Christopher W. [Scuola Normale Superiore,Piazza dei Cavalieri 7, I-56126 Pisa (Italy)

    2016-11-07

    We fit the next-to-leading order unitarity conditions to the Two-Higgs-Doublet model with a softly broken ℤ₂ symmetry. In doing so, we alleviate the existing uncertainty on how to treat higher order corrections to the quartic couplings of its Higgs potential. A simplified approach to implementing the next-to-leading order unitarity conditions is presented. These new bounds are then combined with all other relevant constraints, including the complete set of LHC Run I data. The upper 95% bounds we find are 4.2 on the absolute values of the quartic couplings, and 235 GeV (100 GeV) for the mass degeneracies between the heavy Higgs particles in the type I (type II) scenario. In type II, we exclude an unbroken ℤ₂ symmetry with a probability of 95%. All fits are performed using the open-source code HEPfit.

  20. Fitting non-gaussian Models to Financial data: An Empirical Study

    Directory of Open Access Journals (Sweden)

    Pablo Olivares

    2011-04-01

    Full Text Available This paper presents some experiences in modeling financial data with three classes of models as alternatives to Gaussian linear models: dynamic volatility, stable Lévy, and diffusion-with-jumps models. The techniques are illustrated with examples of financial series on currencies, futures and indexes.

  1. An Outcome-Based Action Study on Changes in Fitness, Blood Lipids, and Exercise Adherence, Using the Disconnected Values (Intervention) Model

    Science.gov (United States)

    Anshel, Mark H.; Kang, Minsoo

    2007-01-01

    The authors' purpose in this action study was to examine the effect of a 10-week intervention, using the Disconnected Values Model (DVM), on changes in selected measures of fitness, blood lipids, and exercise adherence among 51 university faculty (10 men and 41 women) from a school in the southeastern United States. The DVM is an intervention…

  2. Assessing Goodness of Fit in Item Response Theory with Nonparametric Models: A Comparison of Posterior Probabilities and Kernel-Smoothing Approaches

    Science.gov (United States)

    Sueiro, Manuel J.; Abad, Francisco J.

    2011-01-01

    The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…

  3. Identification and modelling of Lithium ion battery

    International Nuclear Information System (INIS)

    Tsang, K.M.; Sun, L.; Chan, W.L.

    2010-01-01

    A universal battery model for the charging process has been identified for Lithium ion batteries working at constant temperature. Mathematical models are fitted to different collected charging profiles using the least squares algorithm. With the removal of the component related to the DC resistance of the battery, a universal model can be fitted to predict profiles of different charging rates after time scaling. Experimental results are included to demonstrate the goodness of fit of the model at different charging rates and for batteries of different capacities. A comparison with a standard electrical-circuit model is also presented. With the proposed model, it is possible to derive more effective ways to monitor the status of Lithium ion batteries and to develop a universal quick charger for batteries of different capacities, resulting in more effective usage of Lithium ion batteries.
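
The identified model form is not given in the abstract; the sketch below fits an assumed saturating-exponential charging profile by least squares with scipy, in the spirit of the approach described. The function `charge_curve` and all numeric values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def charge_curve(t, v_max, v_drop, tau):
    # Assumed saturating-exponential form; the paper's identified universal
    # model is not reproduced in the abstract.
    return v_max - v_drop * np.exp(-t / tau)

# Synthetic charging profile (minutes, volts) with small measurement noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 120.0, 200)
v = charge_curve(t, 4.2, 1.0, 30.0) + rng.normal(0.0, 0.005, t.size)

# Least squares fit of the charging model to the observed profile.
popt, _ = curve_fit(charge_curve, t, v, p0=(4.0, 0.8, 20.0))
```

Time scaling, as described in the abstract, would amount to rescaling the time axis of profiles collected at other charging rates before fitting the same universal curve.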

  4. Lambert W-function based exact representation for double diode model of solar cells: Comparison on fitness and parameter extraction

    International Nuclear Information System (INIS)

    Gao, Xiankun; Cui, Yan; Hu, Jianjun; Xu, Guangyin; Yu, Yongchang

    2016-01-01

    Highlights: • Lambert W-function based exact representation (LBER) is presented for double diode model (DDM). • Fitness difference between LBER and DDM is verified by reported parameter values. • The proposed LBER can better represent the I–V and P–V characteristics of solar cells. • Parameter extraction difference between LBER and DDM is validated by two algorithms. • The parameter values extracted from LBER are more accurate than those from DDM. - Abstract: Accurate modeling and parameter extraction of solar cells play an important role in the simulation and optimization of PV systems. This paper presents a Lambert W-function based exact representation (LBER) for the traditional double diode model (DDM) of solar cells, and then compares their fitness and parameter extraction performance. Unlike existing works, the proposed LBER is rigorously derived from DDM, and in LBER the coefficients of the Lambert W-function are not extra parameters to be extracted or arbitrary scalars but the vectors of terminal voltage and current of solar cells. The fitness difference between LBER and DDM is objectively validated by the reported parameter values and experimental I–V data of a solar cell and four solar modules from different technologies. The comparison results indicate that under the same parameter values, the proposed LBER can better represent the I–V and P–V characteristics of solar cells and provide a closer representation to actual maximum power points of all module types. Two different algorithms are used to compare the parameter extraction performance of LBER and DDM. One is our restart-based bound constrained Nelder-Mead (rbcNM) algorithm implemented in Matlab, and the other is the reported Rcr-IJADE algorithm executed in Visual Studio. The comparison results reveal that the parameter values extracted from LBER using the two algorithms are always more accurate and robust than those from DDM, despite being more time consuming. As an improved version of DDM, the …
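
The LBER expression for the double diode model is not reproduced in the abstract. As a hedged illustration of the underlying idea, the sketch below evaluates the well-known exact Lambert W-function solution of the simpler single-diode model and verifies it against the implicit equation; parameter values are illustrative, not extracted from any real cell.

```python
import numpy as np
from scipy.special import lambertw

def single_diode_current(v, i_ph, i_0, r_s, r_sh, a):
    """Explicit terminal current of the single-diode model via the Lambert
    W-function; the double-diode representation in the paper follows the same
    idea. a = n*k*T/q is the modified ideality factor."""
    pre = r_s * i_0 * r_sh / (a * (r_s + r_sh))
    arg = r_sh * (r_s * (i_ph + i_0) + v) / (a * (r_s + r_sh))
    w = lambertw(pre * np.exp(arg)).real
    return (r_sh * (i_ph + i_0) - v) / (r_s + r_sh) - (a / r_s) * w

# Verify against the implicit equation I = Iph - I0*(exp((V+I*Rs)/a)-1) - (V+I*Rs)/Rsh
i_ph, i_0, r_s, r_sh, a = 5.0, 1e-9, 0.1, 100.0, 0.0388
v = 0.5
i = single_diode_current(v, i_ph, i_0, r_s, r_sh, a)
residual = i_ph - i_0 * (np.exp((v + i * r_s) / a) - 1.0) - (v + i * r_s) / r_sh - i
```

Because the Lambert W form is an exact algebraic rearrangement of the implicit equation, the residual is zero up to floating-point error.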

  5. Petascale Hierarchical Modeling VIA Parallel Execution

    Energy Technology Data Exchange (ETDEWEB)

    Gelman, Andrew [Principal Investigator

    2014-04-14

    The research allows more effective model building. By allowing researchers to fit complex models to large datasets in a scalable manner, our algorithms and software enable more effective scientific research. In the new area of “big data,” it is often necessary to fit “big models” to adjust for systematic differences between sample and population. For this task, scalable and efficient model-fitting tools are needed, and these have been achieved with our new Hamiltonian Monte Carlo algorithm, the no-U-turn sampler, and our new C++ program, Stan. In layman’s terms, our research enables researchers to create improved mathematical models for large and complex systems.

  6. Complex growing networks with intrinsic vertex fitness

    International Nuclear Information System (INIS)

    Bedogne, C.; Rodgers, G. J.

    2006-01-01

    One of the major questions in complex network research is to identify the range of mechanisms by which a complex network can self organize into a scale-free state. In this paper we investigate the interplay between a fitness linking mechanism and both random and preferential attachment. In our models, each vertex is assigned a fitness x, drawn from a probability distribution ρ(x). In Model A, at each time step a vertex is added and joined to an existing vertex, selected at random, with probability p and an edge is introduced between vertices with fitnesses x and y, with a rate f(x,y), with probability 1-p. Model B differs from Model A in that, with probability p, edges are added with preferential attachment rather than randomly. The analysis of Model A shows that, for every fixed fitness x, the network's degree distribution decays exponentially. In Model B we recover instead a power-law degree distribution whose exponent depends only on p, and we show how this result can be generalized. The properties of a number of particular networks are examined

  7. Extensions and Applications of the Cox-Aalen Survival Model

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2003-01-01

    Aalen additive risk model; competing risk; counting processes; Cox model; cumulative incidence function; goodness of fit; prediction of survival probability; time-varying effects

  8. Top ten accelerating cosmological models

    International Nuclear Information System (INIS)

    Szydlowski, Marek; Kurek, Aleksandra; Krawiec, Adam

    2006-01-01

    Recent astronomical observations indicate that the Universe is presently almost flat and undergoing a period of accelerated expansion. Based on Einstein's general relativity, all these observations can be explained by the hypothesis of a dark energy component in addition to cold dark matter (CDM). Because the nature of this dark energy is unknown, alternative scenarios have been proposed to explain the currently accelerating Universe. The key point of these scenarios is to modify the standard FRW equation instead of invoking a mysterious dark energy component. The standard approach to constraining model parameters, based on the likelihood method, gives a best-fit model and confidence ranges for those parameters. We always choose, somewhat arbitrarily, the set of parameters defining a model that we compare with observational data. Because in the generic case introducing new parameters improves the fit to the data set, the problem arises of eliminating model parameters that play an insufficient role. The Bayesian information criterion of model selection (BIC) is dedicated to promoting the set of parameters that should be incorporated in the model. We divide the class of all accelerating cosmological models into two groups according to the two types of explanation of the acceleration of the Universe. Then the Bayesian framework of model selection is used to determine the set of parameters that gives the preferred fit to the SNIa data. We find a few flat cosmological models that can be recommended by the Bayes factor. We show that models with dark energy as a new fluid are favoured over models featuring a modified FRW equation.

  9. Optical-model analysis of exotic atom data. Pt. 1

    International Nuclear Information System (INIS)

    Batty, C.J.

    1981-01-01

    Data for kaonic atoms are fitted using a simple optical model with a potential proportional to the nuclear density. Very satisfactory fits to strong interaction shift and width values are obtained but difficulties in fitting yield values indicate that the model is not completely satisfactory. The potential strength can be related to the free kaon-nucleon scattering lengths using a model due to Deloff. A good overall representation of the data is obtained with a black-sphere model. (orig.)

  10. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda; Hart, Jeffrey D.

    2009-01-01

    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors

  11. Formulation, construction and analysis of kinetic models of metabolism: A review of modelling frameworks

    DEFF Research Database (Denmark)

    Saa, Pedro A.; Nielsen, Lars K.

    2017-01-01

    Kinetic models are critical to predict the dynamic behaviour of metabolic networks. Mechanistic kinetic models for large networks remain uncommon due to the difficulty of fitting their parameters. Recent modelling frameworks promise new ways to overcome this obstacle while retaining predictive ca...

  12. Model building strategy for logistic regression: purposeful selection.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-03-01

    Logistic regression is one of the most commonly used models to account for confounders in the medical literature. The article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to see whether deleting a variable will have a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment for the remaining covariates. Interactions should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness-of-fit (GOF); in other words, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for logistic regression models.
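
A minimal sketch of the likelihood ratio test step described above, using a hand-rolled Newton-Raphson logistic fit on simulated data (in practice R's `glm` or statsmodels would be used; all data and variable names here are synthetic):

```python
import numpy as np
from scipy.stats import chi2

def fit_logistic(X, y, n_iter=25):
    """Maximum-likelihood logistic regression via Newton-Raphson (IRLS).
    Returns the coefficients and the maximized log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)
        beta += np.linalg.solve((X * w[:, None]).T @ X, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return beta, np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

# Simulated data: x2 has no true effect, so deleting it should usually not
# significantly worsen model fit.
rng = np.random.default_rng(2)
n = 500
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x1)))).astype(float)

_, ll_full = fit_logistic(np.column_stack([np.ones(n), x1, x2]), y)
_, ll_reduced = fit_logistic(np.column_stack([np.ones(n), x1]), y)

lr_stat = 2.0 * (ll_full - ll_reduced)   # asymptotically chi2(1) under H0
p_value = chi2.sf(lr_stat, df=1)
```

A large p-value supports dropping x2; a small one indicates the deletion significantly degrades model fit.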

  13. A new risk prediction model for critical care: the Intensive Care National Audit & Research Centre (ICNARC) model.

    Science.gov (United States)

    Harrison, David A; Parry, Gareth J; Carpenter, James R; Short, Alasdair; Rowan, Kathy

    2007-04-01

    To develop a new model to improve risk prediction for admissions to adult critical care units in the UK. Prospective cohort study. The setting was 163 adult, general critical care units in England, Wales, and Northern Ireland, December 1995 to August 2003. Patients were 216,626 critical care admissions. None. The performance of different approaches to modeling physiologic measurements was evaluated, and the best methods were selected to produce a new physiology score. This physiology score was combined with other information relating to the critical care admission (age, diagnostic category, source of admission, and cardiopulmonary resuscitation before admission) to develop a risk prediction model. Modeling interactions between diagnostic category and physiology score enabled the inclusion of groups of admissions that are frequently excluded from risk prediction models. The new model showed good discrimination (mean c index 0.870) and fit (mean Shapiro's R 0.665, mean Brier's score 0.132) in 200 repeated validation samples and performed well when compared with recalibrated versions of existing published risk prediction models in the cohort of patients eligible for all models. The hypothesis of perfect fit was rejected for all models, including the Intensive Care National Audit & Research Centre (ICNARC) model, as is to be expected in such a large cohort. The ICNARC model demonstrated better discrimination and overall fit than existing risk prediction models, even following recalibration of these models. We recommend it be used to replace previously published models for risk adjustment in the UK.
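
The discrimination and accuracy measures quoted above (c index, Brier score) can be computed directly. A minimal sketch for binary outcomes, using made-up predictions rather than the study's data:

```python
import numpy as np

def c_index(risk, outcome):
    """Concordance (c) index for binary outcomes: the fraction of
    (event, non-event) pairs in which the event received the higher
    predicted risk; ties count one half."""
    pos = risk[outcome == 1]
    neg = risk[outcome == 0]
    diff = pos[:, None] - neg[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / (len(pos) * len(neg))

def brier_score(risk, outcome):
    """Mean squared difference between predicted risk and observed outcome."""
    return np.mean((risk - outcome) ** 2)

risk = np.array([0.9, 0.8, 0.2, 0.1])     # hypothetical predicted risks
outcome = np.array([1, 1, 0, 0])          # hypothetical observed outcomes

print(c_index(risk, outcome))             # → 1.0 (perfectly ranked)
print(brier_score(risk, outcome))         # ≈ 0.025
```

A c index of 0.5 corresponds to random ranking; the 0.870 reported for the ICNARC model indicates strong discrimination.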

  14. Amino acids intake and physical fitness among adolescents.

    Science.gov (United States)

    Gracia-Marco, Luis; Bel-Serrat, Silvia; Cuenca-Garcia, Magdalena; Gonzalez-Gross, Marcela; Pedrero-Chamizo, Raquel; Manios, Yannis; Marcos, Ascensión; Molnar, Denes; Widhalm, Kurt; Polito, Angela; Vanhelst, Jeremy; Hagströmer, Maria; Sjöström, Michael; Kafatos, Anthony; de Henauw, Stefaan; Gutierrez, Ángel; Castillo, Manuel J; Moreno, Luis A

    2017-06-01

    The aim was to investigate whether there was an association between amino acid (AA) intake and physical fitness and if so, to assess whether this association was independent of carbohydrates intake. European adolescents (n = 1481, 12.5-17.5 years) were measured. Intake was assessed via two non-consecutive 24-h dietary recalls. Lower and upper limbs muscular fitness was assessed by standing long jump and handgrip strength tests, respectively. Cardiorespiratory fitness was assessed by the 20-m shuttle run test. Physical activity was objectively measured. Socioeconomic status was obtained via questionnaires. Lower limbs muscular fitness seems to be positively associated with tryptophan, histidine and methionine intake in boys, regardless of centre, age, socioeconomic status, physical activity and total energy intake (model 1). However, these associations disappeared once carbohydrates intake was controlled for (model 2). In girls, only proline intake seems to be positively associated with lower limbs muscular fitness (model 2) while cardiorespiratory fitness seems to be positively associated with leucine (model 1) and proline intake (models 1 and 2). None of the observed significant associations remained significant once multiple testing was controlled for. In conclusion, we failed to detect any associations between any of the evaluated AAs and physical fitness after taking into account the effect of multiple testing.

  15. A global fit of the γ-ray galactic center excess within the scalar singlet Higgs portal model

    International Nuclear Information System (INIS)

    Cuoco, Alessandro; Eiteneuer, Benedikt; Heisig, Jan; Krämer, Michael

    2016-01-01

    We analyse the excess in the γ-ray emission from the center of our galaxy observed by Fermi-LAT in terms of dark matter annihilation within the scalar Higgs portal model. In particular, we include the astrophysical uncertainties from the dark matter distribution and allow for unspecified additional dark matter components. We demonstrate through a detailed numerical fit that the strength and shape of the γ-ray spectrum can indeed be described by the model in various regions of dark matter masses and couplings. Constraints from invisible Higgs decays, direct dark matter searches, indirect searches in dwarf galaxies and for γ-ray lines, and constraints from the dark matter relic density reduce the parameter space to dark matter masses near the Higgs resonance. We find two viable regions: one where the Higgs-dark matter coupling is of O(10⁻²), and an additional dark matter component beyond the scalar WIMP of our model is preferred, and one region where the Higgs-dark matter coupling may be significantly smaller, but where the scalar WIMP constitutes a significant fraction or even all of dark matter. Both viable regions are hard to probe in future direct detection and collider experiments.

  16. Functionally unidimensional item response models for multivariate binary data

    DEFF Research Database (Denmark)

    Ip, Edward; Molenberghs, Geert; Chen, Shyh-Huei

    2013-01-01

    The problem of fitting unidimensional item response models to potentially multidimensional data has been extensively studied. The focus of this article is on response data that have a strong dimension but also contain minor nuisance dimensions. Fitting a unidimensional model to such multidimensional data is believed to result in ability estimates that represent a combination of the major and minor dimensions. We conjecture that the underlying dimension for the fitted unidimensional model, which we call the functional dimension, represents a nonlinear projection. In this article we investigate … tool. An example regarding a construct of desire for physical competency is used to illustrate the functional unidimensional approach…

  17. Tennis Elbow Diagnosis Using Equivalent Uniform Voltage to Fit the Logistic and the Probit Diseased Probability Models

    Directory of Open Access Journals (Sweden)

    Tsair-Fwu Lee

    2015-01-01

    Full Text Available To develop the logistic and the probit models to analyse electromyographic (EMG) equivalent uniform voltage- (EUV-) response for the tenderness of tennis elbow. In total, 78 hands from 39 subjects were enrolled. In this study, surface EMG (sEMG) signal is obtained by an innovative device with electrodes over forearm region. The analytical endpoint was defined as Visual Analog Score (VAS) 3+ tenderness of tennis elbow. The logistic and the probit diseased probability (DP) models were established for the VAS score and EMG absolute voltage-time histograms (AVTH). TV50 is the threshold equivalent uniform voltage predicting a 50% risk of disease. Twenty-one out of 78 samples (27%) developed VAS 3+ tenderness of tennis elbow reported by the subject and confirmed by the physician. The fitted DP parameters were TV50 = 153.0 mV (CI: 136.3–169.7 mV), γ50 = 0.84 (CI: 0.78–0.90) and TV50 = 155.6 mV (CI: 138.9–172.4 mV), m = 0.54 (CI: 0.49–0.59) for logistic and probit models, respectively. When the EUV ≥ 153 mV, the DP of the patient is greater than 50% and vice versa. The logistic and the probit models are valuable tools to predict the DP of VAS 3+ tenderness of tennis elbow.

  18. Tennis Elbow Diagnosis Using Equivalent Uniform Voltage to Fit the Logistic and the Probit Diseased Probability Models

    Science.gov (United States)

    Lin, Wei-Chun; Lin, Shu-Yuan; Wu, Li-Fu; Guo, Shih-Sian; Huang, Hsiang-Jui; Chao, Pei-Ju

    2015-01-01

    To develop the logistic and the probit models to analyse electromyographic (EMG) equivalent uniform voltage- (EUV-) response for the tenderness of tennis elbow. In total, 78 hands from 39 subjects were enrolled. In this study, surface EMG (sEMG) signal is obtained by an innovative device with electrodes over forearm region. The analytical endpoint was defined as Visual Analog Score (VAS) 3+ tenderness of tennis elbow. The logistic and the probit diseased probability (DP) models were established for the VAS score and EMG absolute voltage-time histograms (AVTH). TV50 is the threshold equivalent uniform voltage predicting a 50% risk of disease. Twenty-one out of 78 samples (27%) developed VAS 3+ tenderness of tennis elbow reported by the subject and confirmed by the physician. The fitted DP parameters were TV50 = 153.0 mV (CI: 136.3–169.7 mV), γ 50 = 0.84 (CI: 0.78–0.90) and TV50 = 155.6 mV (CI: 138.9–172.4 mV), m = 0.54 (CI: 0.49–0.59) for logistic and probit models, respectively. When the EUV ≥ 153 mV, the DP of the patient is greater than 50% and vice versa. The logistic and the probit models are valuable tools to predict the DP of VAS 3+ tenderness of tennis elbow. PMID:26380281
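
A sketch of logistic and probit diseased-probability curves using the fitted values quoted above. The exact functional forms used in the paper are not reproduced in the abstract, so the parameterizations below (a logistic curve with normalized slope γ50 and a Lyman-type probit curve with slope parameter m) are assumptions, chosen as common conventions for such sigmoid dose/response models:

```python
import numpy as np
from scipy.stats import norm

TV50_LOGISTIC, GAMMA50 = 153.0, 0.84   # fitted values reported in the abstract
TV50_PROBIT, M = 155.6, 0.54

def dp_logistic(euv):
    """Logistic diseased-probability curve. This parameterization, with gamma50
    the normalized slope at the 50% point, is an assumed common convention."""
    return 1.0 / (1.0 + (TV50_LOGISTIC / euv) ** (4.0 * GAMMA50))

def dp_probit(euv):
    """Probit (Lyman-type) curve; again an assumed standard parameterization."""
    return norm.cdf((euv - TV50_PROBIT) / (M * TV50_PROBIT))

print(dp_logistic(153.0))   # → 0.5 by construction at EUV = TV50
print(dp_probit(155.6))     # → 0.5
```

Both curves cross 50% diseased probability at their respective TV50, matching the interpretation given in the abstract.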

  19. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    Science.gov (United States)

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). 
Problems with …
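
A sketch of the broken-line linear (BLL) versus quadratic polynomial (QP) comparison via BIC, on synthetic data. This simplified version fits fixed-effects curves with scipy rather than the mixed models with heteroskedastic errors fitted in GLIMMIX/NLMIXED; all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def bll(x, plateau, slope, bp):
    """Broken-line linear ascending model: rises with `slope` until the
    breakpoint `bp`, then stays at `plateau`."""
    return np.where(x < bp, plateau + slope * (x - bp), plateau)

def qp(x, b0, b1, b2):
    """Quadratic polynomial."""
    return b0 + b1 * x + b2 * x ** 2

def bic(y, yhat, k):
    """Gaussian BIC from the residual sum of squares; k = number of parameters."""
    n = len(y)
    return n * np.log(np.sum((y - yhat) ** 2) / n) + k * np.log(n)

# Synthetic G:F-style data generated from a broken-line truth with a
# breakpoint at a 16.5% Trp:Lys ratio (illustrative, not the study's data).
rng = np.random.default_rng(3)
x = np.linspace(14.0, 24.0, 60)
y = bll(x, 0.70, 0.02, 16.5) + rng.normal(0.0, 0.002, x.size)

p_bll, _ = curve_fit(bll, x, y, p0=(0.7, 0.01, 17.0))
p_qp, _ = curve_fit(qp, x, y, p0=(0.5, 0.01, 0.0))

bic_bll = bic(y, bll(x, *p_bll), 3)
bic_qp = bic(y, qp(x, *p_qp), 3)    # lower BIC indicates the better fit
```

With data generated from a broken-line truth, the BLL model attains the lower BIC, mirroring the model comparison reported in the abstract.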

  20. Multilevel modeling using R

    CERN Document Server

    Finch, W Holmes; Kelley, Ken

    2014-01-01

    A powerful tool for analyzing nested designs in a variety of fields, multilevel/hierarchical modeling allows researchers to account for data collected at multiple levels. Multilevel Modeling Using R provides you with a helpful guide to conducting multilevel data modeling using the R software environment. After reviewing standard linear models, the authors present the basics of multilevel models and explain how to fit these models using R. They then show how to employ multilevel modeling with longitudinal data and demonstrate the valuable graphical options in R. The book also describes models fo…

  1. Non-linear Growth Models in Mplus and SAS

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam

    2013-01-01

    Non-linear growth curves or growth curves that follow a specified non-linear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
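
A minimal sketch of fitting one of the named sigmoid forms (Gompertz) to synthetic longitudinal-style data. This fits a single fixed-effects curve with scipy; the mixed-effects machinery of Mplus and NLMIXED is not reproduced, and all values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, asymptote, b, c):
    """Gompertz growth curve: y = asymptote * exp(-b * exp(-c * t))."""
    return asymptote * np.exp(-b * np.exp(-c * t))

# Synthetic achievement-style trajectory with measurement noise.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 50)
y = gompertz(t, 100.0, 3.0, 0.5) + rng.normal(0.0, 1.0, t.size)

# Non-linear least squares; starting values matter for sigmoid curves.
popt, _ = curve_fit(gompertz, t, y, p0=(90.0, 2.0, 0.3))
```

The logistic and Richards functions discussed in the paper can be fitted the same way by swapping in their functional forms.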

  2. A No-Scale Inflationary Model to Fit Them All

    CERN Document Server

    Ellis, John; Nanopoulos, Dimitri; Olive, Keith

    2014-01-01

    The magnitude of B-mode polarization in the cosmic microwave background as measured by BICEP2 favours models of chaotic inflation with a quadratic $m^2 \phi^2/2$ potential, whereas data from the Planck satellite favour a small value of the tensor-to-scalar perturbation ratio $r$ that is highly consistent with the Starobinsky $R + R^2$ model. Reality may lie somewhere between these two scenarios. In this paper we propose a minimal two-field no-scale supergravity model that interpolates between quadratic and Starobinsky-like inflation as limiting cases, while retaining the successful prediction $n_s \simeq 0.96$.

  3. On 4-degree-of-freedom biodynamic models of seated occupants: Lumped-parameter modeling

    Science.gov (United States)

    Bai, Xian-Xu; Xu, Shi-Xu; Cheng, Wei; Qian, Li-Jun

    2017-08-01

    It is useful to develop an effective biodynamic model of seated human occupants to help understand the human vibration exposure to transportation vehicle vibrations and to help design and improve the anti-vibration devices and/or test dummies. This study proposed and demonstrated a methodology for systematically identifying the best configuration or structure of a 4-degree-of-freedom (4DOF) human vibration model and for its parameter identification. First, an equivalent simplification expression for the models was made. Second, all of the possible 23 structural configurations of the models were identified. Third, each of them was calibrated using the frequency response functions recommended in a biodynamic standard. An improved version of non-dominated sorting genetic algorithm (NSGA-II) based on Pareto optimization principle was used to determine the model parameters. Finally, a model evaluation criterion proposed in this study was used to assess the models and to identify the best one, which was based on both the goodness of curve fits and comprehensive goodness of the fits. The identified top configurations were better than those reported in the literature. This methodology may also be extended and used to develop the models with other DOFs.

  4. Right-sizing statistical models for longitudinal data.

    Science.gov (United States)

    Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M

    2015-12-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved.

  5. Gene conversion at the gray locus of Sordaria fimicola: fit of the experimental data to a hybrid DNA model of recombination.

    Science.gov (United States)

    Kalogeropoulos, A; Thuriaux, P

    1985-03-01

    A hybrid DNA (hDNA) model of recombination has been algebraically formulated, which allows the prediction of frequencies of postmeiotic segregation and conversion of a given allele and their probability of being associated with a crossing over. The model considered is essentially the "Aviemore model." In contrast to some other interpretations of recombination, it states that gene conversion can only result from the repair of heteroduplex hDNA, with postmeiotic segregation resulting from unrepaired heteroduplexes. The model also postulates that crossing over always occurs distally to the initiation site of the hDNA. Eleven types of conversion and postmeiotic segregation with or without associated crossover were considered. Their theoretical frequencies are given by 11 linear equations with ten variables, four describing heteroduplex repair, four giving the probability of hDNA formation and its topological properties and two giving the probability that crossing over occurs at the left or right of the converting allele. Using the experimental data of Kitani and coworkers on conversion at the six best studied gray alleles of Sordaria fimicola, we found that the model considered fit the data at a P level above or very close (allele h4) to the 5% level of sampling error provided that the hDNA is partly asymmetric. The best fitting solutions are such that the hDNA has an equal probability of being formed on either chromatid or, alternatively, that both DNA strands have the same probability of acting as the invading strand during hDNA formation. The two mismatches corresponding to a given allele are repaired with different efficiencies. Optimal solutions are found if one allows for repair to be more efficient on the asymmetric hDNA than on the symmetric one. In the case of allele g1, our data imply that the direction of repair is nonrandom with respect to the strand on which it occurs.

  6. Comparison of a layered slab and an atlas head model for Monte Carlo fitting of time-domain near-infrared spectroscopy data of the adult head.

    Science.gov (United States)

    Selb, Juliette; Ogden, Tyler M; Dubb, Jay; Fang, Qianqian; Boas, David A

    2014-01-01

    Near-infrared spectroscopy (NIRS) estimations of the adult brain baseline optical properties based on a homogeneous model of the head are known to introduce significant contamination from extracerebral layers. More complex models have been proposed and occasionally applied to in vivo data, but their performances have never been characterized on realistic head structures. Here we implement a flexible fitting routine of time-domain NIRS data using graphics processing unit based Monte Carlo simulations. We compare the results for two different geometries: a two-layer slab with variable thickness of the first layer and a template atlas head registered to the subject's head surface. We characterize the performance of the Monte Carlo approaches for fitting the optical properties from simulated time-resolved data of the adult head. We show that both geometries provide better results than the commonly used homogeneous model, and we quantify the improvement in terms of accuracy, linearity, and cross-talk from extracerebral layers.

  7. Ultra high energy interaction models for Monte Carlo calculations: what model is the best fit

    Energy Technology Data Exchange (ETDEWEB)

    Stanev, Todor [Bartol Research Institute, University of Delaware, Newark DE 19716 (United States)

    2006-01-15

    We briefly outline two methods for extension of hadronic interaction models to extremely high energy. Then we compare the main characteristics of representative computer codes that implement the different models and give examples of air shower parameters predicted by those codes.

  8. Statistical Modelling of Extreme Rainfall in Taiwan

    NARCIS (Netherlands)

    L-F. Chu (Lan-Fen); M.J. McAleer (Michael); C-C. Chang (Ching-Chung)

    2012-01-01

    In this paper, the annual maximum daily rainfall data from 1961 to 2010 are modelled for 18 stations in Taiwan. We fit the rainfall data with stationary and non-stationary generalized extreme value distributions (GEV), and estimate their future behaviour based on the best fitting model.

  9. Statistical Modelling of Extreme Rainfall in Taiwan

    NARCIS (Netherlands)

    L. Chu (LanFen); M.J. McAleer (Michael); C-H. Chang (Chu-Hsiang)

    2013-01-01

    textabstractIn this paper, the annual maximum daily rainfall data from 1961 to 2010 are modelled for 18 stations in Taiwan. We fit the rainfall data with stationary and non-stationary generalized extreme value distributions (GEV), and estimate their future behaviour based on the best fitting model.

  10. Fitting diameter distribution models to data from forest inventories with concentric plot design

    Energy Technology Data Exchange (ETDEWEB)

    Nanos, N.; Sjöstedt de Luna, S.

    2017-11-01

    Aim: Several national forest inventories use a complex plot design based on multiple concentric subplots where smaller diameter trees are inventoried when lying in the smaller-radius subplots and ignored otherwise. Data from these plots are truncated with threshold (truncation) diameters varying according to the distance from the plot centre. In this paper we designed a maximum likelihood method to fit the Weibull diameter distribution to data from concentric plots. Material and methods: Our method (M1) was based on multiple truncated probability density functions to build the likelihood. In addition, we used an alternative method (M2) presented recently. We used methods M1 and M2 as well as two other reference methods to estimate the Weibull parameters in 40000 simulated plots. The spatial tree pattern of the simulated plots was generated using four models of spatial point patterns. Two error indices were used to assess the relative performance of M1 and M2 in estimating relevant stand-level variables. In addition, we estimated the Quadratic Mean plot Diameter (QMD) using Expansion Factors (EFs). Main results: Methods M1 and M2 produced comparable estimation errors in random and cluster tree spatial patterns. Method M2 produced biased parameter estimates in plots with inhomogeneous Poisson patterns. Estimation of QMD using EFs produced biased results in plots within inhomogeneous intensity Poisson patterns. Research highlights:We designed a new method to fit the Weibull distribution to forest inventory data from concentric plots that achieves high accuracy and precision in parameter estimates regardless of the within-plot spatial tree pattern.
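
A minimal sketch of the idea behind method M1, assuming each tree carries a truncation threshold set by the subplot in which it was recorded; the data, thresholds, and starting values below are invented for illustration:

```python
# Sketch: maximum-likelihood Weibull fit when each diameter d_i is only
# observed above a plot-dependent truncation threshold t_i (concentric design).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
shape_true, scale_true = 2.0, 25.0            # Weibull k and lambda (cm)
d = weibull_min.rvs(shape_true, scale=scale_true, size=2000, random_state=rng)
# Hypothetical design: outer subplot records only d >= 17.5 cm,
# inner subplot records d >= 7.5 cm
t = np.where(rng.random(d.size) < 0.5, 17.5, 7.5)
keep = d >= t
d, t = d[keep], t[keep]

def nll(params):
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    # Truncated-density log-likelihood: log f(d_i) - log S(t_i)
    logf = weibull_min.logpdf(d, k, scale=lam)
    logS = weibull_min.logsf(t, k, scale=lam)
    return -np.sum(logf - logS)

res = minimize(nll, x0=[1.5, 20.0], method="Nelder-Mead")
k_hat, lam_hat = res.x
print(round(k_hat, 2), round(lam_hat, 2))
```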

  11. Worm plot to diagnose fit in quantile regression

    NARCIS (Netherlands)

    Buuren, S. van

    2007-01-01

    The worm plot is a series of detrended Q-Q plots, split by covariate levels. The worm plot is a diagnostic tool for visualizing how well a statistical model fits the data, for finding locations at which the fit can be improved, and for comparing the fit of different models. This paper shows how the

  13. Mixed Portmanteau Test for Diagnostic Checking of Time Series Models

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2014-01-01

    Full Text Available Model criticism is an important stage of model building and thus goodness of fit tests provides a set of tools for diagnostic checking of the fitted model. Several tests are suggested in literature for diagnostic checking. These tests use autocorrelation or partial autocorrelation in the residuals to criticize the adequacy of fitted model. The main idea underlying these portmanteau tests is to identify if there is any dependence structure which is yet unexplained by the fitted model. In this paper, we suggest mixed portmanteau tests based on autocorrelation and partial autocorrelation functions of the residuals. We derived the asymptotic distribution of the mixture test and studied its size and power using Monte Carlo simulations.
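
The autocorrelation half of such a portmanteau check can be sketched as a Ljung-Box-type computation; the mixed test of the paper also folds in partial autocorrelations, which are omitted here, and the residuals below are simulated white noise:

```python
# Sketch: Ljung-Box-type portmanteau statistic from residual autocorrelations.
import numpy as np
from scipy import stats

def acf(x, m):
    """Sample autocorrelations at lags 1..m."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, m + 1)])

def ljung_box(x, m):
    n = len(x)
    r = acf(x, m)
    q = n * (n + 2) * np.sum(r**2 / (n - np.arange(1, m + 1)))
    return q, stats.chi2.sf(q, m)   # approximate chi-square p-value

rng = np.random.default_rng(1)
resid = rng.standard_normal(500)    # adequate-model residuals look like this
q, p = ljung_box(resid, m=10)
print(round(q, 2), round(p, 3))
```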

  14. Assessment of health surveys: fitting a multidimensional graded response model.

    Science.gov (United States)

    Depaoli, Sarah; Tiemensma, Jitske; Felt, John M

    The multidimensional graded response model, an item response theory (IRT) model, can be used to improve the assessment of surveys, even when sample sizes are restricted. Typically, health-based survey development utilizes classical statistical techniques (e.g. reliability and factor analysis). In a review of four prominent journals within the field of Health Psychology, we found that IRT-based models were used in less than 10% of the studies examining scale development or assessment. However, implementing IRT-based methods can provide more details about individual survey items, which is useful when determining the final item content of surveys. An example using a quality of life survey for Cushing's syndrome (CushingQoL) highlights the main components for implementing the multidimensional graded response model. Patients with Cushing's syndrome (n = 397) completed the CushingQoL. Results from the multidimensional graded response model supported a 2-subscale scoring process for the survey. All items were deemed as worthy contributors to the survey. The graded response model can accommodate unidimensional or multidimensional scales, be used with relatively lower sample sizes, and is implemented in free software (example code provided in online Appendix). Use of this model can help to improve the quality of health-based scales being developed within the Health Sciences.
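
The core of the graded response model (shown unidimensionally here for brevity) is that category probabilities are differences of adjacent cumulative logistic curves; the item parameters below are invented:

```python
# Sketch: category probabilities for one graded-response item.
import numpy as np

def grm_probs(theta, a, b):
    """P(X = k | theta) for an item with discrimination a and ordered
    thresholds b (length m-1), giving m response categories."""
    # Cumulative curves P(X >= k); by convention P(X >= 0) = 1, P(X >= m) = 0
    pstar = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))
    cum = np.concatenate(([1.0], pstar, [0.0]))
    return cum[:-1] - cum[1:]

p = grm_probs(theta=0.5, a=1.7, b=[-1.0, 0.0, 1.2])
print(np.round(p, 3), round(float(p.sum()), 6))
```

The multidimensional version replaces `a * (theta - b)` with a linear combination over several latent traits; fitting is typically done via marginal maximum likelihood in IRT software.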

  15. Assessing the fit of the Dysphoric Arousal model across two nationally representative epidemiological surveys: The Australian NSMHWB and the United States NESARC.

    Science.gov (United States)

    Armour, Cherie; Carragher, Natacha; Elhai, Jon D

    2013-01-01

    Since the initial inclusion of PTSD in the DSM nomenclature, PTSD symptomatology has been distributed across three symptom clusters. However, a wealth of empirical research has concluded that PTSD's latent structure is best represented by one of two four-factor models: Numbing or Dysphoria. Recently, a newly proposed five-factor Dysphoric Arousal model, which separates the DSM-IV's Arousal cluster into two factors of Anxious Arousal and Dysphoric Arousal, has gathered support across a variety of trauma samples. To date, the Dysphoric Arousal model has not been assessed using nationally representative epidemiological data. We employed confirmatory factor analysis to examine PTSD's latent structure in two independent population-based surveys from the United States (NESARC) and Australia (NSMHWB). We specified and estimated the Numbing model, the Dysphoria model, and the Dysphoric Arousal model in both samples. Results revealed that the Dysphoric Arousal model provided superior fit to the data compared to the alternative models. In conclusion, these findings suggest that items D1-D3 (sleeping difficulties; irritability; concentration difficulties) represent a separate, fifth factor within PTSD's latent structure using nationally representative epidemiological data in addition to individual trauma-specific samples. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Confidence of model based shape reconstruction from sparse data

    DEFF Research Database (Denmark)

    Baka, N.; de Bruijne, Marleen; Reiber, J. H. C.

    2010-01-01

    Statistical shape models (SSM) are commonly applied for plausible interpolation of missing data in medical imaging. However, when fitting a shape model to sparse information, many solutions may fit the available data. In this paper we derive a constrained SSM to fit noisy sparse input landmarks...

  17. Categorical marginal models: quite extensive package for the estimation of marginal models for categorical data

    OpenAIRE

    Wicher Bergsma; Andries van der Ark

    2015-01-01

    A package accompanying the book Marginal Models for Dependent, Clustered, and Longitudinal Categorical Data by Bergsma, Croon, & Hagenaars (2009). Its purpose is the fitting and testing of marginal models.

  18. GPCRM: a homology modeling web service with triple membrane-fitted quality assessment of GPCR models.

    Science.gov (United States)

    Miszta, Przemyslaw; Pasznik, Pawel; Jakowiecki, Jakub; Sztyler, Agnieszka; Latek, Dorota; Filipek, Slawomir

    2018-05-21

    Due to the involvement of G protein-coupled receptors (GPCRs) in most of the physiological and pathological processes in humans, they have been attracting a lot of attention from the pharmaceutical industry as well as from the scientific community. Therefore, the need for new, high-quality structures of GPCRs is enormous. The updated homology modeling service GPCRM (http://gpcrm.biomodellab.eu/) meets those expectations by greatly reducing the execution time of submissions (from days to hours/minutes) with nearly the same average quality of obtained models. Additionally, due to three different scoring functions (Rosetta, Rosetta-MP, BCL::Score) it is possible to select accurate models for the required purposes: the structure of the binding site, the transmembrane domain, or the overall shape of the receptor. Currently, no other web service for GPCR modeling provides this possibility. GPCRM is continually upgraded in a semi-automatic way, and the number of template structures has increased from 20 in 2013 to over 90, including structures of the same receptor with different ligands, which can influence the structure in more than a simple on/off manner. Two types of protein viewers can be used for visual inspection of obtained models. The extended sortable tables with available templates provide links to external databases and display ligand-receptor interactions in visual form.

  19. Measuring Quasar Spin via X-ray Continuum Fitting

    Science.gov (United States)

    Jenkins, Matthew; Pooley, David; Rappaport, Saul; Steiner, Jack

    2018-01-01

    We have identified several quasars whose X-ray spectra appear very soft. When fit with power-law models, the best-fit indices are greater than 3. This is very suggestive of thermal disk emission, indicating that the X-ray spectrum is dominated by the disk component. Galactic black hole binaries in such states have been successfully fit with disk-blackbody models to constrain the inner radius, which also constrains the spin of the black hole. We have fit those models to XMM-Newton spectra of several of our identified soft X-ray quasars to place constraints on the spins of the supermassive black holes.
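
As a rough sketch of one step in continuum fitting, the XSPEC `diskbb` normalization convention, norm = (R_in[km] / D[10 kpc])^2 cos(i), can be inverted for an apparent inner-disk radius, the quantity that constrains spin. All numbers below are invented, and color-correction and relativistic factors are deliberately ignored:

```python
# Sketch: apparent inner-disk radius from a disk-blackbody normalization.
import math

def diskbb_inner_radius_km(norm, distance_10kpc, inclination_deg):
    # diskbb convention: norm = (R_in[km] / D[10 kpc])^2 * cos(i)
    return math.sqrt(norm / math.cos(math.radians(inclination_deg))) * distance_10kpc

r_in = diskbb_inner_radius_km(norm=250.0, distance_10kpc=1.2, inclination_deg=30.0)
print(round(r_in, 1))
```

For a black hole of known mass, comparing R_in to the gravitational radius then bounds the spin, since the innermost stable circular orbit shrinks from 6 GM/c^2 (non-spinning) toward GM/c^2 (maximal prograde spin).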

  20. Discrete stochastic analogs of Erlang epidemic models.

    Science.gov (United States)

    Getz, Wayne M; Dougherty, Eric R

    2018-12-01

    Erlang differential equation models of epidemic processes provide more realistic disease-class transition dynamics from susceptible (S) to exposed (E) to infectious (I) and removed (R) categories than the ubiquitous SEIR model. The latter is itself at one end of the spectrum of Erlang SE[Formula: see text]I[Formula: see text]R models with [Formula: see text] concatenated E compartments and [Formula: see text] concatenated I compartments. Discrete-time models, however, are computationally much simpler to simulate and fit to epidemic outbreak data than continuous-time differential equations, and are also much more readily extended to include demographic and other types of stochasticity. Here we formulate discrete-time deterministic analogs of the Erlang models, and their stochastic extension, based on a time-to-go distributional principle. Depending on which distributions are used (e.g. discretized Erlang, Gamma, Beta, or Uniform distributions), we demonstrate that our formulation represents both a discretization of Erlang epidemic models and generalizations thereof. We consider the challenges of fitting SE[Formula: see text]I[Formula: see text]R models and our discrete-time analog to data (the recent outbreak of Ebola in Liberia). We demonstrate that the latter performs much better than the former; confining fits to strict SEIR formulations reduces the numerical challenges but sacrifices best-fit likelihood scores by at least 7%.
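
A deterministic sketch of the discrete-time boxcar construction, with four concatenated E-stages and four I-stages giving Erlang-distributed latent and infectious periods; all rates are illustrative, not fitted values:

```python
# Sketch: one deterministic step of a discrete-time SE(4)I(4)R boxcar model.
import numpy as np

def step(S, E, I, R, beta, sigma, gamma, N):
    m, n = len(E), len(I)
    new_inf = beta * S * I.sum() / N        # new exposures this time step
    Sn = S - new_inf
    outE = m * sigma * E                    # each E-stage passes m*sigma onward
    En = E - outE
    En[0] += new_inf
    En[1:] += outE[:-1]
    outI = n * gamma * I                    # each I-stage passes n*gamma onward
    In = I - outI
    In[0] += outE[-1]
    In[1:] += outI[:-1]
    Rn = R + outI[-1]
    return Sn, En, In, Rn

N = 1e6
S, E, I, R = N - 10, np.zeros(4), np.array([10.0, 0.0, 0.0, 0.0]), 0.0
for _ in range(600):
    S, E, I, R = step(S, E, I, R, beta=0.3, sigma=0.2, gamma=0.15, N=N)
print(round(R / N, 3))   # final epidemic size fraction
```

With beta/gamma = 2, the final size should land near the classic R0 = 2 attack fraction of about 0.8; the stage structure changes the epidemic's shape, not its final size.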

  1. Fit accuracy of metal partial removable dental prosthesis frameworks fabricated by traditional or light curing modeling material technique: An in vitro study

    Science.gov (United States)

    Anan, Mohammad Tarek M.; Al-Saadi, Mohannad H.

    2015-01-01

    Objective The aim of this study was to compare the fit accuracies of metal partial removable dental prosthesis (PRDP) frameworks fabricated by the traditional technique (TT) or the light-curing modeling material technique (LCMT). Materials and methods A metal model of a Kennedy class III modification 1 mandibular dental arch with two edentulous spaces of different spans, short and long, was used for the study. Thirty identical working casts were used to produce 15 PRDP frameworks each by TT and by LCMT. Every framework was transferred to a metal master cast to measure the gap between the metal base of the framework and the crest of the alveolar ridge of the cast. Gaps were measured at three points on each side by a USB digital intraoral camera at ×16.5 magnification. Images were transferred to a graphics editing program. A single examiner performed all measurements. The two-tailed t-test was performed at the 5% significance level. Results The mean gap value was significantly smaller in the LCMT group compared to the TT group. The mean value of the short edentulous span was significantly smaller than that of the long edentulous span in the LCMT group, whereas the opposite result was obtained in the TT group. Conclusion Within the limitations of this study, it can be concluded that the fit of the LCMT-fabricated frameworks was better than the fit of the TT-fabricated frameworks. The framework fit can differ according to the span of the edentate ridge and the fabrication technique for the metal framework. PMID:26236129

  2. Fitting a mixture of von Mises distributions in order to model data on wind direction in Peninsular Malaysia

    International Nuclear Information System (INIS)

    Masseran, N.; Razali, A.M.; Ibrahim, K.; Latif, M.T.

    2013-01-01

    Highlights: • We suggest a simple way of modeling wind direction using a mixture of von Mises distributions. • We determine the most suitable probability model for the wind direction regime in Malaysia. • We provide circular density plots to show the most prominent wind directions. - Abstract: A statistical distribution for describing wind direction provides information about the wind regime at a particular location. In addition, this information complements knowledge of wind speed, which allows researchers to draw some conclusions about the energy potential of wind and aids the development of efficient wind energy generation. This study focuses on modeling the frequency distribution of wind direction, including some characteristics of wind regimes that cannot be represented by a unimodal distribution. To identify the most suitable model, a finite mixture of von Mises distributions was fitted to the average hourly wind direction data for nine wind stations located in Peninsular Malaysia. The data used were from the years 2000 to 2009. The suitability of each mixture distribution was judged based on the R² coefficient and the histogram plot with a density line. The results showed that the finite mixture of von Mises distributions with H components was the best distribution for describing the wind direction distributions in Malaysia. In addition, the circular density plots of the selected model clearly distinguished the most prominent wind directions from the others.
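
A sketch of fitting a two-component von Mises mixture by direct likelihood maximization; the study fits H components to station data, whereas the angles below are synthetic and the starting values are invented:

```python
# Sketch: two-component von Mises mixture fit to synthetic wind directions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

rng = np.random.default_rng(7)
# Two prominent wind directions: ~45 deg and ~225 deg (in radians)
ang = np.concatenate([
    vonmises.rvs(4.0, loc=np.pi / 4, size=600, random_state=rng),
    vonmises.rvs(2.0, loc=5 * np.pi / 4, size=400, random_state=rng),
])

def nll(p):
    w = 1.0 / (1.0 + np.exp(-p[0]))          # mixture weight in (0, 1)
    k1, k2 = np.exp(p[1]), np.exp(p[2])      # concentrations > 0
    mu1, mu2 = p[3], p[4]
    dens = (w * vonmises.pdf(ang, k1, loc=mu1)
            + (1 - w) * vonmises.pdf(ang, k2, loc=mu2))
    return -np.sum(np.log(dens + 1e-300))

res = minimize(nll, x0=[0.0, 1.0, 1.0, 1.0, 4.0], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
w_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
print(round(w_hat, 2),
      round(res.x[3] % (2 * np.pi), 2), round(res.x[4] % (2 * np.pi), 2))
```

In practice an EM algorithm is often preferred for mixtures, but direct maximization keeps the sketch short.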

  3. Recovering stellar population parameters via two full-spectrum fitting algorithms in the absence of model uncertainties

    Science.gov (United States)

    Ge, Junqiang; Yan, Renbin; Cappellari, Michele; Mao, Shude; Li, Hongyu; Lu, Youjun

    2018-05-01

    Using mock spectra based on the Vazdekis/MILES library fitted within the wavelength region 3600-7350Å, we analyze the bias and scatter in the resulting physical parameters induced by the choice of fitting algorithm and by observational uncertainties, while avoiding the effects of model uncertainties. We consider two full-spectrum fitting codes, pPXF and STARLIGHT, in fitting for stellar population age, metallicity, mass-to-light ratio, and dust extinction. With pPXF we find that both the bias μ in the population parameters and the scatter σ in the recovered logarithmic values follow the expected trend μ ∝ σ ∝ 1/(S/N). The bias increases for younger ages and systematically makes recovered ages older, M*/Lr larger, and metallicities lower than the true values. For reference, at S/N=30, and for the worst case (t = 10^8 yr), the bias is 0.06 dex in M*/Lr and 0.03 dex in both age and [M/H]. There is no significant dependence on either E(B-V) or the shape of the error spectrum. Moreover, the results are consistent for both our 1-SSP and 2-SSP tests. With the STARLIGHT algorithm, we find trends similar to pPXF, depending on the input E(B-V) values, with significantly underestimated dust extinction and [M/H], and larger ages and M*/Lr. Results degrade when moving from our 1-SSP to the 2-SSP tests. The STARLIGHT convergence to the true values can be improved by increasing the number of Markov chains and annealing loops in the "slow mode". For the same input spectrum, pPXF is about two orders of magnitude faster than STARLIGHT's "default mode" and about three orders of magnitude faster than STARLIGHT's "slow mode".

  4. Modeling patterns in count data using loglinear and related models

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1995-12-01

    This report explains the use of loglinear and logit models, for analyzing Poisson and binomial counts in the presence of explanatory variables. The explanatory variables may be unordered categorical variables or numerical variables, or both. The report shows how to construct models to fit data, and how to test whether a model is too simple or too complex. The appropriateness of the methods with small data sets is discussed. Several example analyses, using the SAS computer package, illustrate the methods
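
The loglinear (Poisson) fit the report carries out in SAS can be sketched in NumPy via iteratively reweighted least squares on simulated counts; the covariate and coefficients below are invented:

```python
# Sketch: loglinear model log(mu) = X b for Poisson counts, fitted by IRLS.
import numpy as np

rng = np.random.default_rng(3)
n = 400
x = rng.uniform(0, 2, n)
X = np.column_stack([np.ones(n), x])       # intercept + one numerical covariate
b_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ b_true))

b = np.zeros(2)
for _ in range(50):                        # IRLS for Poisson / log link
    mu = np.exp(X @ b)
    W = mu                                 # Poisson: variance equals the mean
    z = X @ b + (y - mu) / mu              # working response
    b_new = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    if np.max(np.abs(b_new - b)) < 1e-10:
        b = b_new
        break
    b = b_new
print(np.round(b, 2))
```

A logit model for binomial counts follows the same IRLS scheme with W = mu(1-mu) and the logit working response.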

  5. Computational Software to Fit Seismic Data Using Epidemic-Type Aftershock Sequence Models and Modeling Performance Comparisons

    Science.gov (United States)

    Chu, A.

    2016-12-01

    Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work implements three of the homogeneous ETAS models described in Ogata (1998). With a model's log-likelihood function, my software finds the Maximum-Likelihood Estimates (MLEs) of the model's parameters to estimate the homogeneous background rate and the temporal and spatial parameters that govern triggering effects. The EM algorithm is employed for its advantages of stability and robustness (Veen and Schoenberg, 2008). My work also presents comparisons among the three models in robustness, convergence speed, and implementation from theory to computing practice. Up-to-date regional seismic data from seismically active areas such as Southern California and Japan are used to demonstrate the comparisons. Data analysis has been done using the computer languages Java and R. Java has the advantages of being strongly typed and offering ease of control over memory resources, while R has the advantage of numerous available functions for statistical computing. Comparisons are also made between the two programming languages in convergence and stability, computational speed, and ease of implementation. Issues that may affect convergence, such as spatial shapes, are discussed.
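
A greatly reduced sketch of the fitting task: maximum likelihood for a purely temporal Hawkes process with exponential triggering, standing in for the full spatio-temporal ETAS models; all parameter values are invented and no real catalog is used:

```python
# Sketch: simulate a temporal Hawkes process by its branching (cluster)
# construction, then recover (mu, alpha, beta) by maximum likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

def simulate_hawkes(mu, alpha, beta, T, rng):
    """Background events ~ Poisson(mu*T); each event triggers
    Poisson(alpha) offspring with Exp(1/beta) delays."""
    times = list(rng.uniform(0.0, T, rng.poisson(mu * T)))
    queue = list(times)
    while queue:
        parent = queue.pop()
        for d in rng.exponential(1.0 / beta, rng.poisson(alpha)):
            child = parent + d
            if child < T:
                times.append(child)
                queue.append(child)
    return np.sort(np.asarray(times))

def nll(p, t, T):
    mu, alpha, beta = np.exp(p)             # log-parameters stay positive
    A, loglam = 0.0, 0.0
    for i in range(t.size):                 # recursive intensity evaluation
        if i:
            A = np.exp(-beta * (t[i] - t[i - 1])) * (A + 1.0)
        loglam += np.log(mu + alpha * beta * A)
    compensator = mu * T + alpha * np.sum(1.0 - np.exp(-beta * (T - t)))
    return compensator - loglam

T = 400.0
t = simulate_hawkes(0.5, 0.6, 1.2, T, rng)
res = minimize(nll, np.log([0.3, 0.4, 1.0]), args=(t, T),
               method="Nelder-Mead", options={"maxiter": 4000, "fatol": 1e-8})
mu_hat, alpha_hat, beta_hat = np.exp(res.x)
print(round(mu_hat, 2), round(alpha_hat, 2), round(beta_hat, 2))
```

The ETAS models add a magnitude-dependent productivity term and a spatial kernel; the EM approach cited above treats the unknown parent of each event as missing data.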

  6. Experimental Characterization and Modeling of Thermal Contact Resistance of Electric Machine Stator-to-Cooling Jacket Interface Under Interference Fit Loading

    Energy Technology Data Exchange (ETDEWEB)

    Cousineau, Justine E [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bennion, Kevin S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Chieduko, Victor [UQM Technologies, Inc.; Lall, Rajiv [UQM Technologies, Inc.; Gilbert, Alan [UQM Technologies, Inc.

    2018-05-08

    Cooling of electric machines is a key to increasing power density and improving reliability. This paper focuses on the design of a machine using a cooling jacket wrapped around the stator. The thermal contact resistance (TCR) between the electric machine stator and cooling jacket is a significant factor in overall performance and is not well characterized. This interface is typically an interference fit subject to compressive pressure exceeding 5 MPa. An experimental investigation of this interface was carried out using a thermal transmittance setup using pressures between 5 and 10 MPa. The results were compared to currently available models for contact resistance, and one model was adapted for prediction of TCR in future motor designs.

  7. A model of diffraction scattering with unitary corrections

    International Nuclear Information System (INIS)

    Etim, E.; Malecki, A.; Satta, L.

    1989-01-01

    The inability of the multiple scattering model of Glauber and similar geometrical-picture models to fit data at Collider energies, to fit low-energy data at large momentum transfers, and to explain the absence of multiple diffraction dips in the data is noted. It is argued and shown that a unitary correction to the multiple scattering amplitude gives rise to a better model and allows fits to all available data on nucleon-nucleon and nucleus-nucleus collisions at all energies and all momentum transfers. There are no multiple diffraction dips.

  8. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    International Nuclear Information System (INIS)

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-01-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. (paper)
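
The separable (variable projection) idea above can be shown on a toy problem: for y = a1·exp(-k·t) + a2, the linear coefficients a1, a2 are solved exactly for each trial value of the nonlinear parameter k, reducing the search to one dimension. The model and data below are invented, not a PET compartment model:

```python
# Sketch: separable least squares (variable projection) on a toy model.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(11)
t = np.linspace(0, 10, 200)
y = 3.0 * np.exp(-0.7 * t) + 1.0 + 0.05 * rng.standard_normal(t.size)

def projected_rss(k):
    # For fixed k the model is linear: solve the linear part exactly
    B = np.column_stack([np.exp(-k * t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    r = y - B @ coef
    return r @ r

# One-dimensional search over the only nonlinear parameter
res = minimize_scalar(projected_rss, bounds=(0.01, 5.0), method="bounded")
k_hat = res.x
B = np.column_stack([np.exp(-k_hat * t), np.ones_like(t)])
a_hat, *_ = np.linalg.lstsq(B, y, rcond=None)
print(round(k_hat, 2), np.round(a_hat, 2))
```

The multi-tracer formulation in the record applies the same separation with several nonlinear rate constants and a larger linear basis, which is what makes exhaustive search over the reduced space feasible.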

  9. Log-binomial models: exploring failed convergence.

    Science.gov (United States)

    Williamson, Tyler; Eliasziw, Misha; Fick, Gordon Hilton

    2013-12-13

    Relative risk is a summary metric that is commonly used in epidemiological investigations. Increasingly, epidemiologists are using log-binomial models to study the impact of a set of predictor variables on a single binary outcome, as they naturally offer relative risks. However, standard statistical software may report failed convergence when attempting to fit log-binomial models in certain settings. The methods that have been proposed in the literature for dealing with failed convergence use approximate solutions to avoid the issue. This research looks directly at the log-likelihood function for the simplest log-binomial model where failed convergence has been observed, a model with a single linear predictor with three levels. The possible causes of failed convergence are explored and potential solutions are presented for some cases. Among the principal causes is a failure of the fitting algorithm to converge despite the log-likelihood function having a single finite maximum. Despite these limitations, log-binomial models are a viable option for epidemiologists wishing to describe the relationship between a set of predictors and a binary outcome where relative risk is the desired summary measure. Epidemiologists are encouraged to continue to use log-binomial models and advocate for improvements to the fitting algorithms to promote the widespread use of log-binomial models.
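
A sketch of why convergence is delicate: the log-binomial likelihood is only defined where Xb < 0 (fitted risks below 1), so a direct optimizer must guard that boundary. The single three-level predictor mirrors the simplest case studied in the record; the data are simulated:

```python
# Sketch: direct ML fit of a log-binomial model with a boundary guard.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 1000
x = rng.integers(0, 3, n)                  # one predictor with three levels
X = np.column_stack([np.ones(n), x])
b_true = np.array([-2.0, 0.5])             # log link: risks exp(Xb) in (0, 1)
y = rng.binomial(1, np.exp(X @ b_true))

def nll(b):
    eta = X @ b
    if np.any(eta >= 0):                   # risk >= 1: outside parameter space
        return np.inf
    p = np.exp(eta)
    return -np.sum(y * eta + (1 - y) * np.log1p(-p))

res = minimize(nll, x0=[-1.0, 0.0], method="Nelder-Mead")
rr = np.exp(res.x[1])                      # relative risk per predictor level
print(np.round(res.x, 2), round(rr, 2))
```

When the unconstrained maximum lies on or beyond the boundary eta = 0, standard IRLS-based GLM fitters report failed convergence, which is the situation the paper dissects.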

  10. Simple inhomogeneous cosmological (toy) models

    International Nuclear Information System (INIS)

    Isidro, Eddy G. Chirinos; Zimdahl, Winfried; Vargas, Cristofher Zuñiga

    2016-01-01

    Based on the Lemaître-Tolman-Bondi (LTB) metric we consider two flat inhomogeneous big-bang models. We aim at clarifying, as far as possible analytically, basic features of the dynamics of the simplest inhomogeneous models and to point out the potential usefulness of exact inhomogeneous solutions as generalizations of the homogeneous configurations of the cosmological standard model. We discuss explicitly partial successes but also potential pitfalls of these simplest models. Although primarily seen as toy models, the relevant free parameters are fixed by best-fit values using the Joint Light-curve Analysis (JLA)-sample data. On the basis of a likelihood analysis we find that a local hump with an extension of almost 2 Gpc provides a better description of the observations than a local void for which we obtain a best-fit scale of about 30 Mpc. Future redshift-drift measurements are discussed as a promising tool to discriminate between inhomogeneous configurations and the ΛCDM model.

  11. Predicting Barrett's Esophagus in Families: An Esophagus Translational Research Network (BETRNet) Model Fitting Clinical Data to a Familial Paradigm.

    Science.gov (United States)

    Sun, Xiangqing; Elston, Robert C; Barnholtz-Sloan, Jill S; Falk, Gary W; Grady, William M; Faulx, Ashley; Mittal, Sumeet K; Canto, Marcia; Shaheen, Nicholas J; Wang, Jean S; Iyer, Prasad G; Abrams, Julian A; Tian, Ye D; Willis, Joseph E; Guda, Kishore; Markowitz, Sanford D; Chandar, Apoorva; Warfe, James M; Brock, Wendy; Chak, Amitabh

    2016-05-01

    Barrett's esophagus is often asymptomatic and only a small portion of Barrett's esophagus patients are currently diagnosed and under surveillance. Therefore, it is important to develop risk prediction models to identify high-risk individuals with Barrett's esophagus. Familial aggregation of Barrett's esophagus and esophageal adenocarcinoma, and the increased risk of esophageal adenocarcinoma for individuals with a family history, raise the necessity of including genetic factors in the prediction model. Methods to determine risk prediction models using both risk covariates and ascertained family data are not well developed. We developed a Barrett's Esophagus Translational Research Network (BETRNet) risk prediction model from 787 singly ascertained Barrett's esophagus pedigrees and 92 multiplex Barrett's esophagus pedigrees, fitting a multivariate logistic model that incorporates family history and clinical risk factors. The eight risk factors, age, sex, education level, parental status, smoking, heartburn frequency, regurgitation frequency, and use of acid suppressant, were included in the model. The prediction accuracy was evaluated on the training dataset and an independent validation dataset of 643 multiplex Barrett's esophagus pedigrees. Our results indicate family information helps to predict Barrett's esophagus risk, and predicting in families improves both prediction calibration and discrimination accuracy. Our model can predict Barrett's esophagus risk for anyone with family members known to have, or not have, had Barrett's esophagus. It can predict risk for unrelated individuals without knowing any relatives' information. Our prediction model will shed light on effectively identifying high-risk individuals for Barrett's esophagus screening and surveillance, consequently allowing intervention at an early stage, and reducing mortality from esophageal adenocarcinoma. Cancer Epidemiol Biomarkers Prev; 25(5); 727-35. ©2016 AACR.
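
Schematically, such a multivariate logistic risk model combines clinical covariates with a family-history term on the logit scale. Every coefficient and covariate choice below is hypothetical, not the fitted BETRNet model:

```python
# Schematic logistic risk score with a family-history term.
# All coefficients are invented for illustration.
import math

def be_risk(age, male, smoker, heartburn_per_week, relatives_with_be,
            coefs=None):
    # Hypothetical coefficients: intercept, age per decade, sex, smoking,
    # heartburn frequency, affected first-degree relatives
    c = coefs or {"b0": -6.0, "age": 0.4, "male": 0.9, "smoke": 0.5,
                  "hb": 0.15, "fam": 1.2}
    logit = (c["b0"] + c["age"] * age / 10 + c["male"] * male
             + c["smoke"] * smoker + c["hb"] * heartburn_per_week
             + c["fam"] * relatives_with_be)
    return 1.0 / (1.0 + math.exp(-logit))

low = be_risk(age=45, male=0, smoker=0, heartburn_per_week=0,
              relatives_with_be=0)
high = be_risk(age=65, male=1, smoker=1, heartburn_per_week=5,
               relatives_with_be=2)
print(round(low, 4), round(high, 3))
```

The paper's contribution is fitting such a model from ascertained pedigrees, so that the family-history term is estimated consistently despite the sampling design.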

  12. Evolving Four Part Harmony Using a Multiple Worlds Model

    DEFF Research Database (Denmark)

    Scirea, Marco; Brown, Joseph Alexander

    2015-01-01

    This application of the Multiple Worlds Model examines a collaborative fitness model for generating four part harmonies. In this model we have multiple populations and the fitness of the individuals is based on the ability of a member from each population to work with the members of other...

  13. Multilevel models for longitudinal data

    OpenAIRE

    Fiona Steele

    2008-01-01

    Repeated measures and repeated events data have a hierarchical structure which can be analysed by using multilevel models. A growth curve model is an example of a multilevel random-coefficients model, whereas a discrete time event history model for recurrent events can be fitted as a multilevel logistic regression model. The paper describes extensions to the basic growth curve model to handle auto-correlated residuals, multiple-indicator latent variables and correlated growth processes, and e...

  14. Advanced impedance modeling of solid oxide electrochemical cells

    DEFF Research Database (Denmark)

    Graves, Christopher R.; Hjelm, Johan

    2014-01-01

    Impedance spectroscopy is a powerful technique for detailed study of the electrochemical and transport processes that take place in fuel cells and electrolysis cells, including solid oxide cells (SOCs). Meaningful analysis of impedance measurements is nontrivial, however, because a large number...... techniques to provide good guesses for the modeling parameters, like transforming the impedance data to the distribution of relaxation times (DRT), together with experimental parameter sensitivity studies, is the state-of-the-art approach to achieve good EC model fits. Here we present new impedance modeling...... electrode and 2-D gas transport models which have fewer unknown parameters for the same number of processes, (ii) use of a new model fitting algorithm, “multi-fitting”, in which multiple impedance spectra are fit simultaneously with parameters linked based on the variation of measurement conditions, (iii...

  15. UK energy policy ambition and UK energy modelling-fit for purpose?

    International Nuclear Information System (INIS)

    Strachan, Neil

    2011-01-01

    Aiming to lead amongst other G20 countries, the UK government has classified the twin energy policy priorities of decarbonisation and security of supply as a 'centennial challenge'. This viewpoint discusses the UK's capacity for energy modelling and scenario building as a critical underpinning of iterative decision making to meet these policy ambitions. From a nadir, over the last decade UK modelling expertise has been steadily built up. However extreme challenges remain in the level and consistency of funding of core model teams - critical to ensure a full scope of energy model types and hence insights, and in developing new state-of-the-art models to address evolving uncertainties. Meeting this challenge will facilitate a broad scope of types and geographical scale of UK's analytical tools to responsively deliver the evidence base for a range of public and private sector decision makers, and ensure that the UK contributes to global efforts to advance the field of energy-economic modelling. - Research highlights: → Energy modelling capacity is a critical underpinning for iterative energy policy making. → Full scope of energy models and analytical approaches is required. → Extreme challenges remain in consistent and sustainable funding of energy modelling teams. → National governments that lead in global energy policy also need to invest in modelling capacity.

  16. The spatial limitations of current neutral models of biodiversity.

    Directory of Open Access Journals (Sweden)

    Rampal S Etienne

Full Text Available The unified neutral theory of biodiversity and biogeography is increasingly accepted as an informative null model of community composition and dynamics. It has successfully produced macro-ecological patterns such as species-area relationships and species abundance distributions. However, the models employed make many unrealistic auxiliary assumptions. For example, the popular spatially implicit version assumes a local plot exchanging migrants with a large panmictic regional source pool. This simple structure allows rigorous testing of its fit to data. In contrast, spatially explicit models assume that offspring disperse only limited distances from their parents, but one cannot as yet test the significance of their fit to data. Here we compare the spatially explicit and the spatially implicit model, fitting the most-used implicit model (with two levels, local and regional) to data simulated by the most-used spatially explicit model (where offspring are distributed about their parent on a grid according to either a radially symmetric Gaussian or a 'fat-tailed' distribution). Based on these fits, we express spatially implicit parameters in terms of spatially explicit parameters. This suggests how we may obtain estimates of spatially explicit parameters from spatially implicit ones. The relationship between these parameters, however, makes no intuitive sense. Furthermore, the spatially implicit model usually fits observed species-abundance distributions better than those calculated from the spatially explicit model's simulated data. Current spatially explicit neutral models therefore have limited descriptive power. However, our results suggest that a fatter tail of the dispersal kernel seems to improve the fit, suggesting that dispersal kernels with even fatter tails should be studied in future. We conclude that more advanced spatially explicit models and tools to analyze them need to be developed.

  17. Testing the metacognitive model against the benchmark CBT model of social anxiety disorder: Is it time to move beyond cognition?

    Directory of Open Access Journals (Sweden)

    Henrik Nordahl

Full Text Available The recommended treatment for Social Phobia is individual Cognitive-Behavioural Therapy (CBT). CBT-treatments emphasize social self-beliefs (schemas) as the core underlying factor for maladaptive self-processing and social anxiety symptoms. However, the need for such beliefs in models of psychopathology has recently been questioned. Specifically, the metacognitive model of psychological disorders asserts that particular beliefs about thinking (metacognitive beliefs) are involved in most disorders, including social anxiety, and are a more important factor underlying pathology. Comparing the relative importance of these disparate underlying belief systems has the potential to advance conceptualization and treatment for SAD. In the cognitive model, unhelpful self-regulatory processes (self-attention and safety behaviours) arise from (e.g. correlate with) cognitive beliefs (schemas) whilst the metacognitive model proposes that such processes arise from metacognitive beliefs. In the present study we therefore set out to evaluate the absolute and relative fit of the cognitive and metacognitive models in a longitudinal data-set, using structural equation modelling. Five-hundred and five (505) participants completed a battery of self-report questionnaires at two time points approximately 8 weeks apart. We found that both models fitted the data, but that the metacognitive model was a better fit to the data than the cognitive model. Further, a specified metacognitive model, emphasising negative metacognitive beliefs about the uncontrollability and danger of thoughts and cognitive confidence improved the model fit further and was significantly better than the cognitive model. It would seem that advances in understanding and treating social anxiety could benefit from moving to a full metacognitive theory that includes negative metacognitive beliefs about the uncontrollability and danger of thoughts, and judgements of cognitive confidence.
These findings challenge

  18. Testing the metacognitive model against the benchmark CBT model of social anxiety disorder: Is it time to move beyond cognition?

    Science.gov (United States)

    Nordahl, Henrik; Wells, Adrian

    2017-01-01

    The recommended treatment for Social Phobia is individual Cognitive-Behavioural Therapy (CBT). CBT-treatments emphasize social self-beliefs (schemas) as the core underlying factor for maladaptive self-processing and social anxiety symptoms. However, the need for such beliefs in models of psychopathology has recently been questioned. Specifically, the metacognitive model of psychological disorders asserts that particular beliefs about thinking (metacognitive beliefs) are involved in most disorders, including social anxiety, and are a more important factor underlying pathology. Comparing the relative importance of these disparate underlying belief systems has the potential to advance conceptualization and treatment for SAD. In the cognitive model, unhelpful self-regulatory processes (self-attention and safety behaviours) arise from (e.g. correlate with) cognitive beliefs (schemas) whilst the metacognitive model proposes that such processes arise from metacognitive beliefs. In the present study we therefore set out to evaluate the absolute and relative fit of the cognitive and metacognitive models in a longitudinal data-set, using structural equation modelling. Five-hundred and five (505) participants completed a battery of self-report questionnaires at two time points approximately 8 weeks apart. We found that both models fitted the data, but that the metacognitive model was a better fit to the data than the cognitive model. Further, a specified metacognitive model, emphasising negative metacognitive beliefs about the uncontrollability and danger of thoughts and cognitive confidence improved the model fit further and was significantly better than the cognitive model. It would seem that advances in understanding and treating social anxiety could benefit from moving to a full metacognitive theory that includes negative metacognitive beliefs about the uncontrollability and danger of thoughts, and judgements of cognitive confidence. These findings challenge a core

  19. Fitness cost

    DEFF Research Database (Denmark)

    Nielsen, Karen L.; Pedersen, Thomas M.; Udekwu, Klas I.

    2012-01-01

phage types, predominantly only penicillin resistant. We investigated whether isolates of this epidemic were associated with a fitness cost, and we employed a mathematical model to ask whether these fitness costs could have led to the observed reduction in frequency. Bacteraemia isolates of S. aureus...... from Denmark have been stored since 1957. We chose 40 S. aureus isolates belonging to phage complex 83A, clonal complex 8 based on spa type, ranging in time of isolation from 1957 to 1980 and with various antibiograms, including both methicillin-resistant and -susceptible isolates. The relative fitness...... of each isolate was determined in a growth competition assay with a reference isolate. Significant fitness costs of 215 were determined for the MRSA isolates studied. There was a significant negative correlation between number of antibiotic resistances and relative fitness. Multiple regression analysis...

  20. Lotka-Volterra competition models for sessile organisms.

    Science.gov (United States)

    Spencer, Matthew; Tanner, Jason E

    2008-04-01

    Markov models are widely used to describe the dynamics of communities of sessile organisms, because they are easily fitted to field data and provide a rich set of analytical tools. In typical ecological applications, at any point in time, each point in space is in one of a finite set of states (e.g., species, empty space). The models aim to describe the probabilities of transitions between states. In most Markov models for communities, these transition probabilities are assumed to be independent of state abundances. This assumption is often suspected to be false and is rarely justified explicitly. Here, we start with simple assumptions about the interactions among sessile organisms and derive a model in which transition probabilities depend on the abundance of destination states. This model is formulated in continuous time and is equivalent to a Lotka-Volterra competition model. We fit this model and a variety of alternatives in which transition probabilities do not depend on state abundances to a long-term coral reef data set. The Lotka-Volterra model describes the data much better than all models we consider other than a saturated model (a model with a separate parameter for each transition at each time interval, which by definition fits the data perfectly). Our approach provides a basis for further development of stochastic models of sessile communities, and many of the methods we use are relevant to other types of community. We discuss possible extensions to spatially explicit models.
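The continuous-time Lotka-Volterra competition dynamics underlying the model above can be sketched with a simple forward-Euler integration; the growth rates, carrying capacities and competition coefficients below are illustrative values, not estimates from the coral reef data set.

```python
import numpy as np

# Two-species Lotka-Volterra competition integrated by forward Euler.
# All parameter values are illustrative, not taken from the paper.
r = np.array([0.5, 0.4])          # intrinsic growth rates
K = np.array([1.0, 0.8])          # carrying capacities
alpha = np.array([[1.0, 0.6],     # competition coefficients
                  [0.7, 1.0]])

x = np.array([0.1, 0.1])          # initial abundances
dt = 0.01
for _ in range(20_000):           # integrate to t = 200
    x = x + dt * r * x * (1.0 - (alpha @ x) / K)
```

With these coefficients the two species coexist, since each can invade the other's monoculture; the trajectory settles on the interior equilibrium where alpha @ x = K.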

  1. Statistical modelling for recurrent events: an application to sports injuries.

    Science.gov (United States)

    Ullah, Shahid; Gabbett, Tim J; Finch, Caroline F

    2014-09-01

    Injuries are often recurrent, with subsequent injuries influenced by previous occurrences and hence correlation between events needs to be taken into account when analysing such data. This paper compares five different survival models (Cox proportional hazards (CoxPH) model and the following generalisations to recurrent event data: Andersen-Gill (A-G), frailty, Wei-Lin-Weissfeld total time (WLW-TT) marginal, Prentice-Williams-Peterson gap time (PWP-GT) conditional models) for the analysis of recurrent injury data. Empirical evaluation and comparison of different models were performed using model selection criteria and goodness-of-fit statistics. Simulation studies assessed the size and power of each model fit. The modelling approach is demonstrated through direct application to Australian National Rugby League recurrent injury data collected over the 2008 playing season. Of the 35 players analysed, 14 (40%) players had more than 1 injury and 47 contact injuries were sustained over 29 matches. The CoxPH model provided the poorest fit to the recurrent sports injury data. The fit was improved with the A-G and frailty models, compared to WLW-TT and PWP-GT models. Despite little difference in model fit between the A-G and frailty models, in the interest of fewer statistical assumptions it is recommended that, where relevant, future studies involving modelling of recurrent sports injury data use the frailty model in preference to the CoxPH model or its other generalisations. The paper provides a rationale for future statistical modelling approaches for recurrent sports injury. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
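The recurrent-event generalisations differ mainly in how each injury interval is laid out before fitting. A minimal sketch of building a counting-process (start, stop] table of the kind used by the Andersen-Gill and PWP gap-time models, from hypothetical injury records (player IDs and match numbers invented for illustration):

```python
import pandas as pd

# Hypothetical recurrent-injury records: one row per injury per player.
events = pd.DataFrame({
    "player": [1, 1, 2],
    "injury_match": [5, 12, 8],   # match in which each injury occurred
})
follow_up = {1: 29, 2: 29}        # last match observed for each player

rows = []
for player, grp in events.groupby("player"):
    start = 0
    for t in sorted(grp["injury_match"]):
        rows.append((player, start, t, 1))              # interval ending in an event
        start = t
    rows.append((player, start, follow_up[player], 0))  # censored tail interval
ag = pd.DataFrame(rows, columns=["player", "start", "stop", "event"])
ag["gap"] = ag["stop"] - ag["start"]   # PWP gap-time models use this duration
```

The Andersen-Gill model treats the (start, stop] intervals on the total-time scale, while the PWP gap-time model conditions on the previous event and uses the `gap` column as the duration.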

  2. Bayesian analysis of CCDM models

    Science.gov (United States)

    Jesus, J. F.; Valentim, R.; Andrade-Oliveira, F.

    2017-09-01

Creation of Cold Dark Matter (CCDM), in the context of Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Bayesian Evidence (BE). These criteria allow models to be compared in terms of goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH0 model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.
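The AIC and BIC used in such comparisons are simple functions of each model's maximized log-likelihood; a minimal sketch (the likelihood values below are made up for illustration):

```python
import numpy as np

def aic_bic(log_likelihood, k, n):
    """Akaike and Bayesian information criteria.
    k: number of free parameters, n: number of data points."""
    aic = 2 * k - 2 * log_likelihood
    bic = k * np.log(n) - 2 * log_likelihood
    return aic, bic

# Two hypothetical fits to the same data set: the extra parameters of
# model 2 must buy enough likelihood to overcome the complexity penalty.
aic1, bic1 = aic_bic(log_likelihood=-100.0, k=2, n=50)
aic2, bic2 = aic_bic(log_likelihood=-99.0, k=4, n=50)
```

Here model 2 gains one unit of log-likelihood but pays for two extra parameters, so both criteria prefer model 1; BIC penalizes the extra parameters more heavily than AIC once n exceeds about 7.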

  3. Bayesian analysis of CCDM models

    Energy Technology Data Exchange (ETDEWEB)

    Jesus, J.F. [Universidade Estadual Paulista (Unesp), Câmpus Experimental de Itapeva, Rua Geraldo Alckmin 519, Vila N. Sra. de Fátima, Itapeva, SP, 18409-010 Brazil (Brazil); Valentim, R. [Departamento de Física, Instituto de Ciências Ambientais, Químicas e Farmacêuticas—ICAQF, Universidade Federal de São Paulo (UNIFESP), Unidade José Alencar, Rua São Nicolau No. 210, Diadema, SP, 09913-030 Brazil (Brazil); Andrade-Oliveira, F., E-mail: jfjesus@itapeva.unesp.br, E-mail: valentim.rodolfo@unifesp.br, E-mail: felipe.oliveira@port.ac.uk [Institute of Cosmology and Gravitation—University of Portsmouth, Burnaby Road, Portsmouth, PO1 3FX United Kingdom (United Kingdom)

    2017-09-01

Creation of Cold Dark Matter (CCDM), in the context of Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Bayesian Evidence (BE). These criteria allow models to be compared in terms of goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH0 model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.

  4. RATES OF FITNESS DECLINE AND REBOUND SUGGEST PERVASIVE EPISTASIS

    Science.gov (United States)

    Perfeito, L; Sousa, A; Bataillon, T; Gordo, I

    2014-01-01

Unraveling the factors that determine the rate of adaptation is a major question in evolutionary biology. One key parameter is the effect of a new mutation on fitness, which invariably depends on the environment and genetic background. The fate of a mutation also depends on population size, which determines the amount of drift it will experience. Here, we manipulate both population size and genotype composition and follow adaptation of 23 distinct Escherichia coli genotypes. These have previously accumulated mutations under intense genetic drift and encompass a substantial fitness variation. A simple rule is uncovered: the net fitness change is negatively correlated with the fitness of the genotype in which new mutations appear—a signature of epistasis. We find that Fisher's geometrical model can account for the observed patterns of fitness change and infer the parameters of this model that best fit the data, using Approximate Bayesian Computation. We estimate a genomic mutation rate of 0.01 per generation for fitness altering mutations, albeit with a large confidence interval, a mean fitness effect of mutations of −0.01, and an effective number of traits of nine in mutS− E. coli. This framework can be extended to confront a broader range of models with data and test different classes of fitness landscape models. PMID:24372601
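The Approximate Bayesian Computation step can be illustrated with a bare-bones rejection sampler. The Gaussian toy model, prior bounds and tolerance below are assumptions for illustration only, not the paper's Fisher-geometric setup:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in "observed" per-genotype fitness changes (not the paper's data).
observed = rng.normal(-0.01, 0.02, size=50)
s_obs = observed.mean()

def summary(mu):
    """Simulate a data set under mean effect mu and return its summary statistic."""
    return rng.normal(mu, 0.02, size=50).mean()

# Rejection ABC: draw from the prior, simulate, and keep draws whose
# simulated summary lands within a tolerance of the observed summary.
prior = rng.uniform(-0.05, 0.05, size=20_000)
sims = np.array([summary(mu) for mu in prior])
kept = prior[np.abs(sims - s_obs) < 0.002]
posterior_mean = kept.mean()
```

The accepted draws approximate the posterior of the mean effect; shrinking the tolerance sharpens the approximation at the cost of a lower acceptance rate.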

  5. Genomic Feature Models

    DEFF Research Database (Denmark)

    Sørensen, Peter; Edwards, Stefan McKinnon; Rohde, Palle Duun

    -additive genetic mechanisms. These modeling approaches have proven to be highly useful to determine population genetic parameters as well as prediction of genetic risk or value. We present a series of statistical modelling approaches that use prior biological information for evaluating the collective action......Whole-genome sequences and multiple trait phenotypes from large numbers of individuals will soon be available in many populations. Well established statistical modeling approaches enable the genetic analyses of complex trait phenotypes while accounting for a variety of additive and non...... regions and gene ontologies) that provide better model fit and increase predictive ability of the statistical model for this trait....

  6. Model for the sulfidation of calcined limestone and its use in reactor models.

    NARCIS (Netherlands)

    Heesink, Albertus B.M.; Brilman, Derk Willem Frederik; van Swaaij, Willibrordus Petrus Maria

    1998-01-01

    A mathematical model describing the sulfidation of a single calcined limestone particle was developed and experimentally verified. This model, which includes no fitting parameters, assumes a calcined limestone particle to consist of spherical grains of various sizes that react with H2S according to

  7. The influence of model parameters on catchment-response

    International Nuclear Information System (INIS)

    Shah, S.M.S.; Gabriel, H.F.; Khan, A.A.

    2002-01-01

This paper deals with the study of the influence of conceptual rainfall-runoff model parameters on catchment response (runoff). A conceptual modified watershed yield model is employed to study the effects of model parameters on catchment response, i.e. runoff. The model is calibrated using a manual parameter-fitting approach, also known as trial-and-error parameter fitting. In all, there are twenty-one (21) parameters that control the functioning of the model. A lumped parametric approach is used. The detailed analysis was performed on the Ling River near Kahuta, which has a catchment area of 56 sq. miles. The model includes physical parameters like GWSM, PETS, PGWRO, etc., fitting coefficients like CINF, CGWS, etc., and initial estimates of the surface-water and groundwater storages, i.e. srosp and gwsp. Sensitivity analysis offers a good way to establish, without repetitious computations, the proper weight and consideration that must be given when each influencing factor is evaluated. Sensitivity analysis was performed to evaluate the influence of model parameters on runoff. The sensitivity and relative contributions of the model parameters influencing catchment response are studied. (author)
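One-at-a-time sensitivity analysis of this kind can be sketched as follows. The `runoff()` response below is a hypothetical stand-in for the watershed yield model; the parameter names CINF and CGWS are borrowed from the text, but the functional form is invented:

```python
# One-at-a-time (OAT) relative sensitivity of a toy response function.
def runoff(params):
    # Hypothetical stand-in for the watershed yield model response.
    return params["CINF"] * 2.0 + params["CGWS"] * 0.5

base = {"CINF": 1.0, "CGWS": 1.0}
sensitivity = {}
for name in base:
    perturbed = dict(base)
    perturbed[name] *= 1.10                       # perturb one parameter by +10%
    rel_change = (runoff(perturbed) - runoff(base)) / runoff(base)
    sensitivity[name] = rel_change / 0.10         # response per unit relative change
```

The resulting dimensionless indices rank the parameters by influence on runoff without repeating full calibration runs for every combination.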

  8. 3D Face Appearance Model

    DEFF Research Database (Denmark)

    Lading, Brian; Larsen, Rasmus; Astrom, K

    2006-01-01

We build a 3D face shape model, including inter- and intra-shape variations, derive the analytical Jacobian of its resulting 2D rendered image, and show example of its fitting performance with light, pose, id, expression and texture variations...

  9. STARS: An ArcGIS Toolset Used to Calculate the Spatial Information Needed to Fit Spatial Statistical Models to Stream Network Data

    Directory of Open Access Journals (Sweden)

    Erin Peterson

    2014-01-01

Full Text Available This paper describes the STARS ArcGIS geoprocessing toolset, which is used to calculate the spatial information needed to fit spatial statistical models to stream network data using the SSN package. The STARS toolset is designed for use with a landscape network (LSN, which is a topological data model produced by the FLoWS ArcGIS geoprocessing toolset. An overview of the FLoWS LSN structure and a few particularly useful tools is also provided so that users will have a clear understanding of the underlying data structure that the STARS toolset depends on. This document may be used as an introduction to new users. The methods used to calculate the spatial information and format the final .ssn object are also explicitly described so that users may create their own .ssn object using other data models and software.

  10. Sustainability of TQM Implementation Model In The Indonesia’s Oil and Gas Industry: An Assessment of Structural Relations Model Fit

    Directory of Open Access Journals (Sweden)

    Wakhid Slamet Ciptono

    2011-02-01

Full Text Available The purpose of this study is to conduct an empirical analysis of the structural relations among critical factors of quality management practices (QMPs), world-class company practice (WCC), operational excellence practice (OE), and company performance (company non-financial performance, or CNFP, and company financial performance, or CFP) in the oil and gas companies operating in Indonesia. The current study additionally examines the relationships between QMPs and CFP through WCC, OE, and CNFP (as partial mediators) simultaneously. The study uses data from a survey of 140 strategic business units (SBUs) within 49 oil and gas contractor companies in Indonesia. The findings suggest that all six QMPs have positive and significant indirect relationships on CFP through WCC and CNFP. Only four of six QMPs have positive and significant indirect relationships on CFP through OE and CNFP. Hence, WCC, OE, and CNFP act as partial mediators between QMPs and CFP. CNFP has a significant influence on CFP. A major implication of this study is that oil and gas managers need to recognize the structural relations model fit by developing all of the research constructs simultaneously associated with a comprehensive TQM practice. Furthermore, the findings will assist oil and gas companies by improving CNFP, which is very critical to TQM, thereby contributing to a better achievement of CFP. The current study uses Deming's principles, the Hayes and Wheelwright dimensions of world-class company practice, Chevron Texaco's operational excellence practice, and the dimensions of company financial and non-financial performance. The paper also provides an insight into the sustainability of the TQM implementation model and its effect on company financial performance in oil and gas companies in Indonesia.

  11. Simulation on Poisson and negative binomial models of count road accident modeling

    Science.gov (United States)

    Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.

    2016-11-01

Accident count data have often been shown to exhibit overdispersion. In addition, the data may contain excess zero counts. A simulation study was conducted to create scenarios in which an accident happens at a T-junction, with the assumption that the dependent variable of the generated data follows a certain distribution, namely the Poisson or negative binomial distribution, with sample sizes ranging from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression and hurdle negative binomial models to the simulated data. Model validity was compared, and the simulation results show that, for each sample size, not all models fit the data well even when the data were generated from the model's own distribution, especially when the sample size is larger. Furthermore, larger sample sizes produced more zero accident counts in the dataset.
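A quick way to reproduce the overdispersion and excess-zero diagnosis in such a simulation is sketched below; the negative binomial parameters are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate overdispersed accident counts from a negative binomial:
# for n=1, p=1/3 the mean is n(1-p)/p = 2 and the variance n(1-p)/p^2 = 6.
counts = rng.negative_binomial(n=1, p=1/3, size=500)

lam_hat = counts.mean()                      # the Poisson MLE is the sample mean
dispersion = counts.var(ddof=1) / lam_hat    # ~1 if a Poisson model were adequate
zero_share = np.mean(counts == 0)            # check for excess zeros
```

A dispersion ratio well above 1 signals that a Poisson regression will understate the variance, motivating the negative binomial or hurdle alternatives compared in the study.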

  12. Modeling of uranium bioleaching by Acidithiobacillus ferrooxidans

    International Nuclear Information System (INIS)

    Rashidi, A.; Safdari, J.; Roosta-Azad, R.; Zokaei-Kadijani, S.

    2012-01-01

    Highlights: ► A mathematical model for the mesophilic bioleaching of uraninite is introduced. ► New rate expressions are used for the iron precipitation and uranium leaching rates. ► Good fits of the model are obtained, while the values of the parameters are within the range expected. ► The model can be applied to other bioleaching processes under the same conditions. - Abstract: In this paper, a mathematical model for the mesophilic bioleaching of uraninite is developed. The case of constant temperature, pH, and initial ore concentration is considered. The model is validated by comparing the calculated and measured values of uranium extraction, ferric and ferrous iron in solution, and cell concentration. Good fits of the model were obtained, while the values of the parameters were within the range expected. New rate expressions were used for the iron precipitation and uranium leaching rates. The rates of chemical leaching and ferric precipitation are related to the ratio of ferric to ferrous in solution. The fitted parameters can be considered applicable only to this study. In contrast, the model equation is general and can be applied to bioleaching under the same conditions.

  13. Global fits of the two-loop renormalized Two-Higgs-Doublet model with soft Z2 breaking

    Science.gov (United States)

    Chowdhury, Debtosh; Eberhardt, Otto

    2015-11-01

We determine the next-to-leading order renormalization group equations for the Two-Higgs-Doublet model with a softly broken Z2 symmetry and CP conservation in the scalar potential. We use them to identify the parameter regions which are stable up to the Planck scale and find that in this case the quartic couplings of the Higgs potential cannot be larger than 1 in magnitude and that the absolute values of the S-matrix eigenvalues cannot exceed 2.5 at the electroweak symmetry breaking scale. Interpreting the 125 GeV resonance as the light CP-even Higgs eigenstate, we combine stability constraints, electroweak precision and flavour observables with the latest ATLAS and CMS data on Higgs signal strengths and heavy Higgs searches in global parameter fits to all four types of Z2 symmetry. We quantify the maximal deviations from the alignment limit and find that in types II and Y the mass of the heavy CP-even (CP-odd) scalar cannot be smaller than 340 GeV (360 GeV). Also, we pinpoint the physical parameter regions compatible with a stable scalar potential up to the Planck scale. Motivated by the question of how natural a Higgs mass of 125 GeV can be in the context of a Two-Higgs-Doublet model, we also address the hierarchy problem and find that the Two-Higgs-Doublet model does not offer a perturbative solution to it beyond 5 TeV.

  14. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...

  15. Stochastic Modeling of Rainfall in Peninsular Malaysia Using Bartlett Lewis Rectangular Pulses Models

    Directory of Open Access Journals (Sweden)

    Ibrahim Suliman Hanaish

    2011-01-01

Full Text Available Three versions of Bartlett Lewis rectangular pulse rainfall models, namely, the Original Bartlett Lewis (OBL), Modified Bartlett Lewis (MBL), and 2N-cell-type Bartlett Lewis (BL2n) models, are considered. These models are fitted to the hourly rainfall data from 1970 to 2008 obtained from Petaling Jaya rain gauge station, located in Peninsular Malaysia. The generalized method of moments is used to estimate the model parameters. Under this method, minimization of two different objective functions, involving different weight functions (one weight inversely proportional to the variance, the other inversely proportional to the mean squared), is carried out using the Nelder-Mead optimization technique. For the purpose of comparing the performance of the three different models, the results found for the months of July and November are used for illustration. This performance is assessed based on the goodness of fit of the models. In addition, the sensitivity of the parameter estimates to the choice of the objective function is also investigated. It is found that BL2n slightly outperforms OBL. However, the best model is the Modified Bartlett Lewis (MBL), particularly when the objective function considered involves weight inversely proportional to the variance.
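The generalized method of moments with a Nelder-Mead search can be sketched on a simpler stand-in model. Here a two-parameter gamma distribution replaces the Bartlett-Lewis process, and the weights follow the "inversely proportional to the squared moment" idea from the text; all values are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Stand-in "hourly rainfall" depths; a gamma model replaces the
# Bartlett-Lewis process to keep the moment-matching idea self-contained.
data = rng.gamma(shape=2.0, scale=1.5, size=2000)

obs = np.array([data.mean(), data.var()])
weights = 1.0 / obs**2          # weights inversely proportional to squared moments

def objective(theta):
    k, s = theta                # gamma shape and scale
    if k <= 0 or s <= 0:
        return np.inf
    model = np.array([k * s, k * s**2])    # theoretical mean and variance
    return float(np.sum(weights * (obs - model) ** 2))

res = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
k_hat, s_hat = res.x
```

With as many moments as parameters the objective can be driven to zero; with more moments than parameters (as for Bartlett-Lewis models) the weights decide which moments the fit favours.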

  16. P11 πN scattering in a potential model and in the cloudy bag model

    International Nuclear Information System (INIS)

    Rinat, A.S.

    1982-01-01

We discuss P11 πN scattering in a model where the π is coupled to quark bags for baryons N, R, Δ. From the underlying qqπ couplings we derive B'Bπ vertices which are used in a solution of a πN, πΔ two-channel scattering problem. Using one bag radius from a fit to P33 πN data, we are unable to reproduce δ11. A fit requires a Roper radius Rsub(R) > Rsub(N). We discuss the sensitivity of the fit to small variations in other bag parameters. The theory is compared with a simple potential model and with field theories employing baryons instead of quark fields. (orig.)

  17. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    Science.gov (United States)

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision is hard to verify, this creates additional challenges to comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with a different acquisition protocol. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. 
The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of
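The algorithm comparison can be illustrated on a toy problem. The biexponential signal below is a generic stand-in, not NODDI or CHARMED, and the b-values and parameter values are invented for the sketch:

```python
import numpy as np
from scipy.optimize import minimize

# Noise-free toy biexponential decay standing in for a dMRI signal model;
# b-values (ms/um^2) and parameter values are illustrative only.
b = np.linspace(0.0, 3.0, 16)
f_true, d_slow, d_fast = 0.7, 0.9, 3.0
signal = f_true * np.exp(-b * d_slow) + (1 - f_true) * np.exp(-b * d_fast)

def sse(p):
    """Sum of squared errors between the model and the 'measured' signal."""
    f, d1, d2 = p
    model = f * np.exp(-b * d1) + (1 - f) * np.exp(-b * d2)
    return float(np.sum((signal - model) ** 2))

x0 = [0.5, 1.0, 2.0]   # initialization strategy: a single fixed starting point
fits = {m: minimize(sse, x0=x0, method=m) for m in ("Powell", "Nelder-Mead")}
```

Both are gradient-free methods; comparing `fits[m].fun` and `fits[m].nfev` across starting points gives a small-scale version of the fit-quality and run-time comparison the note describes.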

  18. Evaluation of statistical models for forecast errors from the HBV model

    Science.gov (United States)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) median values of the forecast distribution and the observations. For the first model observed and forecasted inflows were transformed by the Box-Cox transformation before a first order auto-regressive model was constructed for the forecast errors. The parameters were conditioned on weather classes. In the second model the Normal Quantile Transformation (NQT) was applied on observed and forecasted inflows before a similar first order auto-regressive model was constructed for the forecast errors. For the third model positive and negative errors were modeled separately. The errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe R_eff increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that the distributions are less reliable than Model 3. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal it gave on average wider forecast intervals than the two other models. At the same time Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
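Model 1's two building blocks, a Box-Cox transformation followed by a first-order autoregressive model of the errors, can be sketched as follows; the inflow series and the multiplicative error structure are synthetic, not the Langvatn data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic daily inflows and forecasts with multiplicative error.
obs = rng.gamma(3.0, 20.0, size=365)
fcst = obs * rng.lognormal(0.0, 0.2, size=365)

obs_t, lam = stats.boxcox(obs)            # lambda fitted on the observations
fcst_t = stats.boxcox(fcst, lmbda=lam)    # same lambda applied to the forecasts
err = obs_t - fcst_t

# AR(1) coefficient for the transformed forecast errors, by least squares.
phi = float(err[1:] @ err[:-1] / (err[:-1] @ err[:-1]))
```

In the real application phi would be conditioned on weather classes; here the errors are serially independent by construction, so the estimated coefficient sits near zero.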

  19. Some Improved Diagnostics for Failure of The Rasch Model.

    Science.gov (United States)

    Molenaar, Ivo W.

    1983-01-01

    Goodness of fit tests for the Rasch model are typically large-sample, global measures. This paper offers suggestions for small-sample exploratory techniques for examining the fit of item data to the Rasch model. (Author/JKS)

  20. Constitutive modeling of an electrospun tubular scaffold used for vascular tissue engineering.

    Science.gov (United States)

    Hu, Jin-Jia

    2015-08-01

    In this study, we sought to model the mechanical behavior of an electrospun tubular scaffold previously reported for vascular tissue engineering with hyperelastic constitutive equations. Specifically, the scaffolds were made by wrapping electrospun polycaprolactone membranes that contain aligned fibers around a mandrel in such a way that they have microstructure similar to the native arterial media. The biaxial stress-stretch data of the scaffolds made of moderately or highly aligned fibers with three different off-axis fiber angles α (30°, 45°, and 60°) were fit by a phenomenological Fung model and a series of structurally motivated models considering fiber directions and fiber angle distributions. In particular, two forms of fiber strain energy in the structurally motivated model for a linear and a nonlinear fiber stress-strain relation, respectively, were tested. An isotropic neo-Hookean strain energy function was also added to the structurally motivated models to examine its contribution. The two forms of fiber strain energy did not result in significantly different goodness of fit for most groups of the scaffolds. The absence of the neo-Hookean term in the structurally motivated model led to obvious nonlinear stress-stretch fits at a greater axial stretch, especially when fitting data from the scaffolds with a small α. Of the models considered, the Fung model had the overall best fitting results, but its applications are limited because of its phenomenological nature. Although a structurally motivated model using the nonlinear fiber stress-strain relation with the neo-Hookean term provided fits comparable in quality to those of the Fung model, the values of its model parameters exhibited large within-group variations. Prescribing the dispersion of fiber orientation in the structurally motivated model, however, reduced the variations without compromising the fits and was thus considered to be the best structurally motivated model for the scaffolds. It appeared that the

  1. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2006-01-01

    Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo

  2. Modeling the behavioral substrates of associate learning and memory - Adaptive neural models

    Science.gov (United States)

    Lee, Chuen-Chien

    1991-01-01

    Three adaptive single-neuron models based on neural analogies of behavior modification episodes are proposed, which attempt to bridge the gap between psychology and neurophysiology. The proposed models capture the predictive nature of Pavlovian conditioning, which is essential to the theory of adaptive/learning systems. The models learn to anticipate the occurrence of a conditioned response before the presence of a reinforcing stimulus when training is complete. Furthermore, each model can find the most nonredundant and earliest predictor of reinforcement. The behavior of the models accounts for several aspects of basic animal learning phenomena in Pavlovian conditioning beyond previous related models. Computer simulations show how well the models fit empirical data from various animal learning paradigms.

  3. Comprehensive process model of clinical information interaction in primary care: results of a "best-fit" framework synthesis.

    Science.gov (United States)

    Veinot, Tiffany C; Senteio, Charles R; Hanauer, David; Lowery, Julie C

    2018-06-01

    To describe a new, comprehensive process model of clinical information interaction in primary care (Clinical Information Interaction Model, or CIIM) based on a systematic synthesis of published research. We used the "best fit" framework synthesis approach. Searches were performed in PubMed, Embase, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), PsycINFO, Library and Information Science Abstracts, Library, Information Science and Technology Abstracts, and Engineering Village. Two authors reviewed articles according to inclusion and exclusion criteria. Data abstraction and content analysis of 443 published papers were used to create a model in which every element was supported by empirical research. The CIIM documents how primary care clinicians interact with information as they make point-of-care clinical decisions. The model highlights 3 major process components: (1) context, (2) activity (usual and contingent), and (3) influence. Usual activities include information processing, source-user interaction, information evaluation, selection of information, information use, clinical reasoning, and clinical decisions. Clinician characteristics, patient behaviors, and other professionals influence the process. The CIIM depicts the complete process of information interaction, enabling a grasp of relationships previously difficult to discern. The CIIM suggests potentially helpful functionality for clinical decision support systems (CDSSs) to support primary care, including a greater focus on information processing and use. The CIIM also documents the role of influence in clinical information interaction; influencers may affect the success of CDSS implementations. The CIIM offers a new framework for achieving CDSS workflow integration and new directions for CDSS design that can support the work of diverse primary care clinicians.

  4. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia; Harmandaris, Vagelis; Katsoulakis, Markos A.; Plechac, Petr

    2015-01-01

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics

  5. Atmospheric corrosion: statistical validation of models

    International Nuclear Information System (INIS)

    Diaz, V.; Martinez-Luaces, V.; Guineo-Cobs, G.

    2003-01-01

    In this paper we discuss two different methods for the validation of regression models, applied to corrosion data. One of them is based on the correlation coefficient and the other is the statistical test of lack of fit. Both methods are used here to analyse the fit of the bilogarithmic model in order to predict corrosion for very low carbon steel substrates in rural and urban-industrial atmospheres in Uruguay. Results for parameters A and n of the bilogarithmic model are reported here. For this purpose, all repeated values were used instead of using average values as usual. Modelling is carried out using experimental data corresponding to steel substrates under the same initial meteorological conditions (in fact, they were put in the rack at the same time). Results for the correlation coefficient are compared with the lack-of-fit test at two different significance levels (α=0.01 and α=0.05). Unexpected differences between them are explained and finally, it is possible to conclude, at least for the studied atmospheres, that the bilogarithmic model does not properly fit the experimental data. (Author) 18 refs

  6. Probabilistic Models for Solar Particle Events

    Science.gov (United States)

    Adams, James H., Jr.; Dietrich, W. F.; Xapsos, M. A.; Welton, A. M.

    2009-01-01

    Probabilistic Models of Solar Particle Events (SPEs) are used in space mission design studies to provide a description of the worst-case radiation environment that the mission must be designed to tolerate. The models determine the worst-case environment using a description of the mission and a user-specified confidence level that the provided environment will not be exceeded. This poster will focus on completing the existing suite of models by developing models for peak flux and event-integrated fluence elemental spectra for the Z>2 elements. It will also discuss methods to take into account uncertainties in the database and the uncertainties resulting from the limited number of solar particle events in the database. These new probabilistic models are based on an extensive survey of SPE measurements of peak and event-integrated elemental differential energy spectra. Attempts are made to fit the measured spectra with eight different published models. The model giving the best fit to each spectrum is chosen and used to represent that spectrum for any energy in the energy range covered by the measurements. The set of all such spectral representations for each element is then used to determine the worst-case spectrum as a function of confidence level. The spectral representation that best fits these worst-case spectra is found and its dependence on confidence level is parameterized. This procedure creates probabilistic models for the peak and event-integrated spectra.

  7. Modelling (18)O2 and (16)O2 unidirectional fluxes in plants. III: fitting of experimental data by a simple model.

    Science.gov (United States)

    André, Marcel J

    2013-08-01

    Photosynthetic assimilation of CO2 in plants results in the balance between the photochemical energy developed by light in chloroplasts, and the consumption of that energy by the oxygenation processes, mainly the photorespiration in C3 plants. The analysis of classical biological models shows the difficulty of bringing to the fore the oxygenation rate due to the photorespiration pathway. As for other parameters, the key point is the estimation of the electron transport rate (ETR or J), i.e. the flux of biochemical energy, which is shared between the reductive and oxidative cycles of carbon. The only reliable method to quantify the linear electron flux responsible for the production of reductive energy is to directly measure the O2 evolution by (18)O2 labelling and mass spectrometry. The hypothesis that the respective rates of the reductive and oxidative cycles of carbon are determined only by the kinetic parameters of Rubisco, the respective concentrations of CO2 and O2 at the Rubisco site and the available electron transport rate, ultimately leads to proposing new expressions of the biochemical model equations. The modelling of (18)O2 and (16)O2 unidirectional fluxes in plants shows that a simple model can fit the photosynthetic and photorespiration exchanges for a wide range of environmental conditions. Its originality is to express the carboxylation and the oxygenation as a function of external gas concentrations, by the definition of a plant specificity factor Sp that mimics the internal reactions of Rubisco in plants. The difference between the specificity factors of the plant (Sp) and of Rubisco (Sr) is directly related to the conductance values for CO2 transfer between the atmosphere and the Rubisco site. This clearly illustrates that the values and the variation of conductance are much more important, in higher C3 plants, than the small variations of the Rubisco specificity factor. The simple model systematically expresses the reciprocal variations of

  8. The effects of model and data complexity on predictions from species distributions models

    DEFF Research Database (Denmark)

    García-Callejas, David; Bastos, Miguel

    2016-01-01

    How complex a model needs to be to provide useful predictions is a matter of continuing debate across the environmental sciences. In the species distribution modelling literature, studies have demonstrated that more complex models tend to provide better fits. However, studies have also shown... that predictive performance does not always increase with complexity. Testing of species distribution models is challenging because independent data for testing are often lacking, but a more general problem is that model complexity has never been formally described in such studies. Here, we systematically...

  9. Eight New Luminous z > 6 Quasars Selected via SED Model Fitting of VISTA, WISE and Dark Energy Survey Year 1 Observations

    Energy Technology Data Exchange (ETDEWEB)

    Reed, S.L.; et al.

    2017-01-17

    We present the discovery and spectroscopic confirmation with the ESO NTT and Gemini South telescopes of eight new 6.0 < z < 6.5 quasars with z$_{AB}$ < 21.0. These quasars were photometrically selected without any star-galaxy morphological criteria from 1533 deg$^{2}$ using SED model fitting to photometric data from the Dark Energy Survey (g, r, i, z, Y), the VISTA Hemisphere Survey (J, H, K) and the Wide-Field Infrared Survey Explorer (W1, W2). The photometric data were fitted with a grid of quasar model SEDs with redshift-dependent Lyman-{\alpha} forest absorption and a range of intrinsic reddening, as well as a series of low-mass cool star models. Candidates were ranked using an SED-model-based $\chi^{2}$-statistic, which is extendable to other future imaging surveys (e.g. LSST, Euclid). Our spectral confirmation success rate is 100% without the need for follow-up photometric observations as used in other studies of this type. Combined with automatic removal of the main types of non-astrophysical contaminants, the method allows large data sets to be processed without human intervention and without being overrun by spurious false candidates. We also present a robust parametric redshift-estimating technique that gives comparable accuracy to MgII and CO based redshift estimators. We find two z $\sim$ 6.2 quasars with HII near zone sizes < 3 proper Mpc, which could indicate that these quasars may be young with ages < 10$^6$ - 10$^7$ years or lie in over-dense regions of the IGM. The z = 6.5 quasar VDESJ0224-4711 has J$_{AB}$ = 19.75 and is the second most luminous quasar known with z > 6.5.
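The candidate-ranking step above reduces to a $\chi^{2}$ comparison of each source's photometry against competing template SEDs. Below is a minimal sketch of that idea; the bands, flux values, noise level, and the two toy templates are all invented for illustration, and the real selection used full grids of redshifted quasar and cool-star models.

```python
import numpy as np

bands = ["g", "r", "i", "z", "Y", "J", "H", "K", "W1", "W2"]

def chi2(f_obs, sigma, f_model):
    # The best-fitting template amplitude has a closed form, so each
    # template contributes a single chi^2 value per candidate.
    a = np.sum(f_obs * f_model / sigma**2) / np.sum(f_model**2 / sigma**2)
    return np.sum((f_obs - a * f_model)**2 / sigma**2)

# Toy templates: a quasar-like SED with a strong g-band dropout,
# and a cool-star-like SED peaking in the near-infrared.
quasar_tmpl = np.array([0.0, 0.1, 1.0, 1.2, 1.1, 1.0, 0.9, 0.8, 0.7, 0.6])
star_tmpl = np.array([0.2, 0.5, 0.9, 1.1, 1.2, 1.4, 1.5, 1.4, 1.0, 0.8])

rng = np.random.default_rng(1)
sigma = np.full(len(bands), 0.05)
f_obs = quasar_tmpl + rng.normal(0.0, 0.05, len(bands))  # a "quasar-like" source

# Negative delta favours the quasar interpretation.
delta = chi2(f_obs, sigma, quasar_tmpl) - chi2(f_obs, sigma, star_tmpl)
print(delta < 0)
```

Ranking candidates by such a statistic, rather than by hard colour cuts, is what makes the approach transferable to other surveys.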

  10. Advanced ECP model for BWRs

    International Nuclear Information System (INIS)

    Ullberg, M.; Gott, K.; Lejon, J.; Granath, G.

    2007-01-01

    The new ECP model is based on one-electron transfers only (radical mechanism). It uses reaction rates as the sole fitting parameters and unifies several different aspects of BWR electrochemistry. ECP's dependence on O2, H2O2, H2 and flow rate is modeled, specifically the different influence of the levels of O2 and H2O2, respectively. The ECP's experimental dependence on the passive current in the case of O2, and independence in the case of H2O2, are also modeled. Decomposition of H2O2, corrosion under oxidizing/reducing conditions, and the electrochemical interactions of O2, H2O2 and H2 are modeled along a SS pipe. The predictive power of the model is demonstrated by the following example: When the model has been fitted to the H2O2 decomposition rate and the ECP in presence of H2O2, then the ECP in presence of O2 is effectively determined by the O2 level and the passive current. (author)

  11. Rcapture: Loglinear Models for Capture-Recapture in R

    Directory of Open Access Journals (Sweden)

    Sophie Baillargeon

    2007-04-01

    This article introduces Rcapture, an R package for capture-recapture experiments. The data for analysis consist of the frequencies of the observable capture histories over the t capture occasions of the experiment. A capture history is a vector of zeros and ones where one stands for a capture and zero for a miss. Rcapture can fit three types of models. With a closed population model, the goal of the analysis is to estimate the size N of the population, which is assumed to be constant throughout the experiment. The estimator depends on the way in which the capture probabilities of the animals vary. Rcapture features several models for these capture probabilities that lead to different estimators for N. In an open population model, immigration and death occur between sampling periods. The estimation of survival rates is of primary interest. Rcapture can fit the basic Cormack-Jolly-Seber and Jolly-Seber models to such data. The third type of model fitted by Rcapture is the robust design model. It features two levels of sampling; closed population models apply within primary periods and an open population model applies between periods. Most models in Rcapture have a loglinear form; they are fitted by carrying out a Poisson regression with the R function glm. Estimates of the demographic parameters of interest are derived from the loglinear parameter estimates; their variances are obtained by linearization. The novel feature of this package is the provision of several new options for modeling heterogeneity in capture probabilities between animals in both closed population models and the primary periods of a robust design. It also implements many of the techniques developed by R. M. Cormack for open population models.
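The loglinear route described above can be sketched for the simplest closed-population model (M0, equal capture probability for all animals and occasions): regress the capture-history frequencies on the number of captures with a Poisson GLM, then extrapolate to the unobservable all-zero history. This is an illustrative Python stand-in for what Rcapture does via R's glm(); the simulated population size and capture probability are invented.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
N, t, p = 200, 3, 0.4                       # invented truth for the simulation

captures = rng.random((N, t)) < p           # True = animal captured on occasion
histories = [tuple(int(v) for v in row) for row in captures if row.any()]

# Frequency y_h of each observable history h, and its number of captures k
all_hist = [h for h in itertools.product([0, 1], repeat=t) if sum(h) > 0]
y = np.array([histories.count(h) for h in all_hist], float)
k = np.array([sum(h) for h in all_hist], float)

# Under M0, log mu_h = alpha + beta * k is exactly loglinear in k.
# Fit the Poisson GLM by Newton-Raphson (what glm() does via IRLS).
X = np.column_stack([np.ones_like(k), k])
beta = np.array([np.log(y.mean() + 1.0), 0.0])
for _ in range(50):
    mu = np.exp(X @ beta)
    beta += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))

# The all-zero history has expected count exp(alpha), giving the N estimate.
n_obs = len(histories)
N_hat = n_obs + np.exp(beta[0])
print(n_obs, round(float(N_hat), 1))
```

Richer models (heterogeneous capture probabilities, robust designs) add columns to the design matrix but keep the same Poisson-regression machinery.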

  12. Evaluation of global solar radiation models for Shanghai, China

    International Nuclear Information System (INIS)

    Yao, Wanxiang; Li, Zhengrong; Wang, Yuyan; Jiang, Fujian; Hu, Lingzhou

    2014-01-01

    Highlights: • 108 existing models are compared and analyzed using 42 years of meteorological data. • Fitting models based on measured data are established from the 42 years of data. • All models are compared using the most recent 10 years of meteorological data. • The results show that polynomial models are the most accurate. - Abstract: In this paper, 89 existing monthly average daily global solar radiation models and 19 existing daily global solar radiation models are compared and analyzed using 42 years of meteorological data. The results show that, among the existing monthly average daily global solar radiation models, linear models and polynomial models are able to estimate global solar radiation accurately, and more complex equation types do not obviously improve the precision. Considering direct parameters such as latitude, altitude, solar altitude and sunshine duration can help improve the accuracy of the models, but indirect parameters cannot. For the existing daily global solar radiation models, multi-parameter models are more accurate than single-parameter models, and polynomial models are more accurate than linear models. Measured-data-fitted monthly average daily global solar radiation models (MADGSR models) and daily global solar radiation models (DGSR models) are then established from the 42 years of meteorological data. Finally, the existing models and the fitted models are comparatively analyzed using the most recent 10 years of meteorological data, and the results show that the polynomial models (MADGSR model 2, DGSR model 2 and Maduekwe model 2) are the most accurate models
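The linear-versus-polynomial comparison reported above can be sketched with a sunshine-duration model of the Angström-Prescott type. The monthly values below are invented for illustration; on the same training data a polynomial that nests the linear model can never fit worse, which is why the comparison in the paper is made on held-back recent years.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented monthly data: s = relative sunshine duration S/S0,
# h = clearness index H/H0, generated with a mild quadratic relationship.
s = rng.uniform(0.2, 0.8, 48)
h = 0.18 + 0.62 * s + 0.25 * s**2 + rng.normal(0, 0.02, 48)

lin = np.polyfit(s, h, 1)      # Angstrom-Prescott form: H/H0 = a + b*(S/S0)
quad = np.polyfit(s, h, 2)     # polynomial variant with a quadratic term

def rmse(coef):
    return np.sqrt(np.mean((np.polyval(coef, s) - h)**2))

print(rmse(quad) <= rmse(lin))   # nested least-squares model cannot fit worse
```

A fair ranking therefore requires scoring both models on data not used for fitting, as the paper does with its most recent 10 years.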

  13. Crushed-salt constitutive model update

    International Nuclear Information System (INIS)

    Callahan, G.D.; Loken, M.C.; Mellegard, K.D.; Hansen, F.D.

    1998-01-01

    Modifications to the constitutive model used to describe the deformation of crushed salt are presented in this report. Two mechanisms--dislocation creep and grain boundary diffusional pressure solutioning--defined previously but used separately are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. New creep consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant and southeastern New Mexico salt to determine material parameters for the constitutive model. Nonlinear least-squares model fitting to data from the shear consolidation tests and a combination of the shear and hydrostatic consolidation tests produced two sets of material parameter values for the model. The change in material parameter values from test group to test group indicates the empirical nature of the model but demonstrates improvement over earlier work with the previous models. Key improvements are the ability to capture lateral strain reversal and better resolve parameter values. To demonstrate the predictive capability of the model, each parameter value set was used to predict each of the tests in the database. Based on the fitting statistics and the ability of the model to predict the test data, the model appears to capture the creep consolidation behavior of crushed salt quite well

  14. Crushed-salt constitutive model update

    Energy Technology Data Exchange (ETDEWEB)

    Callahan, G.D.; Loken, M.C.; Mellegard, K.D. [RE/SPEC Inc., Rapid City, SD (United States); Hansen, F.D. [Sandia National Labs., Albuquerque, NM (United States)

    1998-01-01

    Modifications to the constitutive model used to describe the deformation of crushed salt are presented in this report. Two mechanisms--dislocation creep and grain boundary diffusional pressure solutioning--defined previously but used separately are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. New creep consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant and southeastern New Mexico salt to determine material parameters for the constitutive model. Nonlinear least-squares model fitting to data from the shear consolidation tests and a combination of the shear and hydrostatic consolidation tests produced two sets of material parameter values for the model. The change in material parameter values from test group to test group indicates the empirical nature of the model but demonstrates improvement over earlier work with the previous models. Key improvements are the ability to capture lateral strain reversal and better resolve parameter values. To demonstrate the predictive capability of the model, each parameter value set was used to predict each of the tests in the database. Based on the fitting statistics and the ability of the model to predict the test data, the model appears to capture the creep consolidation behavior of crushed salt quite well.

  15. Introduction: Occam’s Razor (SOT - Fit for Purpose workshop introduction)

    Science.gov (United States)

    Mathematical models provide important, reproducible, and transparent information for risk-based decision making. However, these models must be constructed to fit the needs of the problem to be solved. A “fit for purpose” model is an abstraction of a complicated problem that allow...

  16. Risk Estimation for Lung Cancer in Libya: Analysis Based on Standardized Morbidity Ratio, Poisson-Gamma Model, BYM Model and Mixture Model

    Science.gov (United States)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-03-01

    Cancer is the most rapidly spreading disease in the world, especially in developing countries, including Libya. Cancer represents a significant burden on patients, families, and their societies. This disease can be controlled if detected early. Therefore, disease mapping has recently become an important method in the fields of public health research and disease epidemiology. The correct choice of statistical model is a very important step in producing a good map of a disease. Libya was selected to perform this work and to examine its geographical variation in the incidence of lung cancer. The objective of this paper is to estimate the relative risk for lung cancer. Four statistical models to estimate the relative risk for lung cancer, together with population censuses of the study area for the period 2006 to 2011, were used in this work. They are the Standardized Morbidity Ratio (SMR), the most popular statistic used in the field of disease mapping; the Poisson-gamma model, one of the earliest applications of Bayesian methodology; the Besag, York and Mollie (BYM) model; and the Mixture model. As an initial step, this study begins by providing a review of all proposed models, which we then apply to lung cancer data in Libya. Maps, tables, graphs and goodness-of-fit (GOF) statistics were used to compare and present the preliminary results. GOF statistics are commonly used in statistical modelling to compare fitted models. The main general results presented in this study show that the Poisson-gamma model, the BYM model, and the Mixture model can overcome the problem of the first model (SMR) when there are no observed lung cancer cases in certain districts. Results show that the Mixture model is the most robust and provides better relative risk estimates across the range of models.
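The two simplest estimators in that comparison can be sketched directly. The district counts, expected cases, and gamma prior below are invented; the prior (a, b) is chosen so the prior mean a/b equals 1, a common empirical-Bayes convention.

```python
# Raw standardized morbidity ratio (SMR) versus Poisson-gamma smoothing.
observed = [3, 0, 12, 7, 1]          # invented lung cancer cases per district
expected = [2.5, 1.1, 9.0, 8.2, 0.6]  # invented expected cases
a, b = 2.0, 2.0                       # gamma prior on relative risk, mean a/b = 1

smr = [o / e for o, e in zip(observed, expected)]

# Posterior mean under O_i ~ Poisson(theta_i * E_i), theta_i ~ Gamma(a, b)
pg = [(o + a) / (e + b) for o, e in zip(observed, expected)]

for s_i, p_i in zip(smr, pg):
    # Each smoothed estimate lies between the raw SMR and the prior mean 1:
    # this shrinkage is what rescues districts with zero observed cases.
    assert min(s_i, 1.0) <= p_i <= max(s_i, 1.0)

print([round(x, 2) for x in pg])  # → [1.11, 0.65, 1.27, 0.88, 1.15]
```

The district with zero cases gets a finite, non-zero risk estimate from the Poisson-gamma model, which is exactly the failure mode of the plain SMR that the abstract describes.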

  17. AMS-02 fits dark matter

    Science.gov (United States)

    Balázs, Csaba; Li, Tong

    2016-05-01

    In this work we perform a comprehensive statistical analysis of the AMS-02 electron, positron fluxes and the antiproton-to-proton ratio in the context of a simplified dark matter model. We include known, standard astrophysical sources and a dark matter component in the cosmic ray injection spectra. To predict the AMS-02 observables we use propagation parameters extracted from observed fluxes of heavier nuclei and the low energy part of the AMS-02 data. We assume that the dark matter particle is a Majorana fermion coupling to third generation fermions via a spin-0 mediator, and annihilating to multiple channels at once. The simultaneous presence of various annihilation channels provides the dark matter model with additional flexibility, and this enables us to simultaneously fit all cosmic ray spectra using a simple particle physics model and coherent astrophysical assumptions. Our results indicate that AMS-02 observations are not only consistent with the dark matter hypothesis within the uncertainties, but adding a dark matter contribution improves the fit to the data. Assuming, however, that dark matter is solely responsible for this improvement of the fit, it is difficult to evade the latest CMB limits in this model.

  18. AMS-02 fits dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Balázs, Csaba; Li, Tong [ARC Centre of Excellence for Particle Physics at the Tera-scale,School of Physics and Astronomy, Monash University, Melbourne, Victoria 3800 (Australia)

    2016-05-05

    In this work we perform a comprehensive statistical analysis of the AMS-02 electron, positron fluxes and the antiproton-to-proton ratio in the context of a simplified dark matter model. We include known, standard astrophysical sources and a dark matter component in the cosmic ray injection spectra. To predict the AMS-02 observables we use propagation parameters extracted from observed fluxes of heavier nuclei and the low energy part of the AMS-02 data. We assume that the dark matter particle is a Majorana fermion coupling to third generation fermions via a spin-0 mediator, and annihilating to multiple channels at once. The simultaneous presence of various annihilation channels provides the dark matter model with additional flexibility, and this enables us to simultaneously fit all cosmic ray spectra using a simple particle physics model and coherent astrophysical assumptions. Our results indicate that AMS-02 observations are not only consistent with the dark matter hypothesis within the uncertainties, but adding a dark matter contribution improves the fit to the data. Assuming, however, that dark matter is solely responsible for this improvement of the fit, it is difficult to evade the latest CMB limits in this model.

  19. A new kinetic model based on the remote control mechanism to fit experimental data in the selective oxidation of propene into acrolein on biphasic catalysts

    Energy Technology Data Exchange (ETDEWEB)

    Abdeldayem, H.M.; Ruiz, P.; Delmon, B. [Unite de Catalyse et Chimie des Materiaux Divises, Universite Catholique de Louvain, Louvain-La-Neuve (Belgium); Thyrion, F.C. [Unite des Procedes Faculte des Sciences Appliquees, Universite Catholique de Louvain, Louvain-La-Neuve (Belgium)

    1998-12-31

    A new kinetic model for a more accurate and detailed fitting of the experimental data is proposed. The model is based on the remote control mechanism (RCM). The RCM assumes that some oxides (called 'donors') are able to activate molecular oxygen, transforming it into very active mobile species (spillover oxygen, O{sub OS}). O{sub OS} migrates onto the surface of the other oxide (called the 'acceptor') where it creates and/or regenerates the active sites during the reaction. The model contains two terms, one considering the creation of selective sites and the other the catalytic reaction at each site. The model has been tested in the selective oxidation of propene into acrolein (T=380, 400, 420 C; oxygen and propene partial pressures between 38 and 152 Torr). Catalysts were prepared as pure MoO{sub 3} (acceptor) and as mechanical mixtures of it with {alpha}-Sb{sub 2}O{sub 4} (donor) in different proportions. The presence of {alpha}-Sb{sub 2}O{sub 4} changes the reaction order, the activation energy of the reaction and the number of active sites of MoO{sub 3} produced by oxygen spillover. These changes are consistent with a modification in the degree of irrigation of the surface by oxygen spillover. The fitting of the model to the experimental results shows that the number of sites created by O{sub OS} increases with the amount of {alpha}-Sb{sub 2}O{sub 4}. (orig.)

  20. Relativistic direct interaction and hadron models

    International Nuclear Information System (INIS)

    Biswas, T.

    1984-01-01

    Direct interaction theories at a nonrelativistic level have been used successfully in several areas earlier (e.g. nuclear physics). But for hadron spectroscopy relativistic effects are important and hence the need for a relativistic direct interaction theory arises. It is the goal of this thesis to suggest such a theory which has the simplicity and the flexibility required for phenomenological model building. In general the introduction of relativity in a direct interaction theory is shown to be non-trivial. A first attempt leads to only an approximate form for the allowed interactions. Even this is far too complex for phenomenological applicability. To simplify the model an extra spacelike particle called the vertex is introduced in any set of physical (timelike) particles. The vertex model is successfully used to fit and to predict experimental data on hadron spectra; Υ and ψ states fit very well with an interaction function inspired by QCD. Light mesons also fit reasonably well. Better forms of the hyperfine interaction functions would be needed to improve the fitting of the light mesons. The unexpectedly low pi meson mass is partially explained. Baryon ground states are fitted with unprecedented accuracy with very few adjustable parameters. For baryon excited states it is shown that better QCD-motivated interaction functions are needed for a fit. Predictions for bb states in e+e- experiments are made to assist current experiments

  1. Parametric Explosion Spectral Model

    Energy Technology Data Exchange (ETDEWEB)

    Ford, S R; Walter, W R

    2012-01-19

    Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before occurred. We develop a parametric model of the nuclear explosion seismic source spectrum derived from regional phases that is compatible with earthquake-based geometrical spreading and attenuation. Earthquake spectra are fit with a generalized version of the Brune spectrum, which is a three-parameter model that describes the long-period level, corner frequency, and spectral slope at high frequencies. Explosion spectra can be fit with similar spectral models whose parameters are then correlated with near-source geology and containment conditions. We observe a correlation of high gas porosity (low strength) with increased spectral slope. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.
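A three-parameter generalized Brune spectrum of the kind described above can be written as Ω(f) = Ω0 / (1 + (f/fc)^γ), with long-period level Ω0, corner frequency fc, and high-frequency fall-off γ. The sketch below fits that form to a synthetic, noise-free spectrum with scipy; the "true" parameter values are invented, and real spectra would of course carry noise and require spreading/attenuation corrections first.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune(f, omega0, fc, gamma):
    # Generalized Brune source spectrum: flat below fc, power-law decay above.
    return omega0 / (1.0 + (f / fc)**gamma)

f = np.logspace(-1, 1.5, 60)        # 0.1 Hz to ~31.6 Hz
true = (5.0, 1.2, 2.0)              # invented omega0, fc, gamma
amp = brune(f, *true)               # synthetic noise-free "observed" spectrum

popt, _ = curve_fit(brune, f, amp, p0=(1.0, 1.0, 1.5))
print(np.round(popt, 3))
```

With the classic Brune model γ is fixed at 2; letting it float is what allows the fitted slope to be correlated with gas porosity as the abstract reports.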

  2. Modeling of Experimental Adsorption Isotherm Data

    Directory of Open Access Journals (Sweden)

    Xunjun Chen

    2015-01-01

    Full Text Available Adsorption is considered one of the most effective technologies widely used in global environmental protection. Modeling of experimental adsorption isotherm data is an essential way of predicting the mechanisms of adsorption, which will lead to improvements in the area of adsorption science. In this paper, we employed three isotherm models, namely Langmuir, Freundlich, and Dubinin-Radushkevich, to correlate four sets of experimental adsorption isotherm data, which were obtained by batch tests in the lab. The linearized and non-linearized isotherm models were compared and discussed. In order to determine the best-fit isotherm model, the correlation coefficient (r²) and standard errors (S.E.) of each parameter were used to evaluate the data. The modeling results showed that the non-linear Langmuir model fit the data better than the others, with relatively higher r² values and smaller S.E. The linear Langmuir model had the highest r² value; however, the maximum adsorption capacities estimated from the linear Langmuir model deviated from the experimental data.
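
    As a rough illustration of the non-linear fitting the abstract describes, a Langmuir isotherm can be fitted by non-linear least squares and scored with r². The equilibrium data points below are hypothetical, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k_l):
    """Langmuir isotherm: q = q_max * K_L * C / (1 + K_L * C)."""
    return q_max * k_l * c / (1.0 + k_l * c)

# Hypothetical equilibrium data (C: concentration, q: uptake)
c = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
q = np.array([1.8, 3.2, 5.4, 8.1, 10.6, 12.9, 14.2])

# Non-linear least-squares fit, avoiding the bias a linearization can introduce
popt, _ = curve_fit(langmuir, c, q, p0=(15.0, 0.2))
q_max_hat, k_l_hat = popt

# Coefficient of determination for the non-linear fit
resid = q - langmuir(c, *popt)
r2 = 1.0 - np.sum(resid**2) / np.sum((q - q.mean())**2)
```

    The Freundlich and Dubinin-Radushkevich models would be fitted the same way with their own model functions, and the r² and parameter standard errors compared across the three.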

  3. Models for Estimating Genetic Parameters of Milk Production Traits Using Random Regression Models in Korean Holstein Cattle

    Directory of Open Access Journals (Sweden)

    C. I. Cho

    2016-05-01

    Full Text Available The objectives of the study were to estimate genetic parameters for milk production traits of Holstein cattle using random regression models (RRMs), and to compare the goodness of fit of various RRMs with homogeneous and heterogeneous residual variances. A total of 126,980 test-day milk production records of first-parity Holstein cows between 2007 and 2014 from the Dairy Cattle Improvement Center of the National Agricultural Cooperative Federation in South Korea were used. These records included milk yield (MILK), fat yield (FAT), protein yield (PROT), and solids-not-fat yield (SNF). The statistical models included random effects of genetic and permanent environments using Legendre polynomials (LP) of the third to fifth order (L3–L5), fixed effects of herd-test day and year-season at calving, and a fixed regression for the test-day record (third to fifth order). The residual variances in the models were either homogeneous (HOM) or heterogeneous (15 classes, HET15; 60 classes, HET60). A total of nine models (3 orders of polynomials × 3 types of residual variance): L3-HOM, L3-HET15, L3-HET60, L4-HOM, L4-HET15, L4-HET60, L5-HOM, L5-HET15, and L5-HET60 were compared using the Akaike information criterion (AIC) and/or Schwarz Bayesian information criterion (BIC) to identify the model(s) of best fit for each trait. The lowest BIC values were observed for the models L5-HET15 (MILK, PROT, SNF) and L4-HET15 (FAT), which therefore fit best. In general, the BIC value of the HET15 model for a given polynomial order was lower than that of the HET60 model in most cases. This implies that the order of LP and the type of residual variance affect the goodness of fit of the models, and that heterogeneity of residual variances should be considered in test-day analysis. The heritability estimates from the best-fitted models ranged from 0.08 to 0.15 for MILK, 0.06 to 0.14 for FAT, 0.08 to 0.12 for PROT, and 0.07 to 0.13 for SNF according to days in milk of first
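
    The building blocks of such a comparison can be sketched: a Legendre-polynomial basis over standardized days in milk (the usual mapping of the lactation interval onto [−1, 1]) and the BIC used to rank models. The interval bounds and names below are illustrative:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(dim, order, d_min=5, d_max=305):
    """Evaluate Legendre polynomials P_0..P_order at days-in-milk `dim`,
    after mapping [d_min, d_max] onto the standard interval [-1, 1]."""
    x = 2.0 * (np.asarray(dim, float) - d_min) / (d_max - d_min) - 1.0
    return np.column_stack(
        [legendre.legval(x, np.eye(order + 1)[k]) for k in range(order + 1)]
    )

def bic(log_lik, n_params, n_obs):
    """Schwarz Bayesian information criterion: smaller is better."""
    return -2.0 * log_lik + n_params * np.log(n_obs)

# Third-order basis at the start, middle, and end of lactation
Z = legendre_basis([5, 155, 305], order=3)
```

    Each candidate model (L3–L5 bases crossed with HOM/HET15/HET60 residual structures) would contribute its own maximized log-likelihood and parameter count to `bic`, and the model with the smallest value wins.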

  4. Patient-centered medical home model: do school-based health centers fit the model?

    Science.gov (United States)

    Larson, Satu A; Chapman, Susan A

    2013-01-01

    School-based health centers (SBHCs) are an important component of health care reform. The SBHC model of care offers accessible, continuous, comprehensive, family-centered, coordinated, and compassionate care to infants, children, and adolescents. These same elements comprise the patient-centered medical home (PCMH) model of care being promoted by the Affordable Care Act with the hope of lowering health care costs by rewarding clinicians for primary care services. PCMH survey tools have been developed to help payers determine whether a clinician/site serves as a PCMH. Our concern is that current survey tools will be unable to capture how a SBHC may provide a medical home and therefore be denied needed funding. This article describes how SBHCs might meet the requirements of one PCMH tool. SBHC stakeholders need to advocate for the creation or modification of existing survey tools that allow the unique characteristics of SBHCs to qualify as PCMHs.

  5. Projective Item Response Model for Test-Independent Measurement

    Science.gov (United States)

    Ip, Edward Hak-Sing; Chen, Shyh-Huei

    2012-01-01

    The problem of fitting unidimensional item-response models to potentially multidimensional data has been extensively studied. The focus of this article is on response data that contains a major dimension of interest but that may also contain minor nuisance dimensions. Because fitting a unidimensional model to multidimensional data results in…

  6. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

    A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool

  7. Nonparametric Bayesian Modeling of Complex Networks

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Mørup, Morten

    2013-01-01

    Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: using an infinite mixture model as a running example, we go through the steps of deriving the model as an infinite limit of a finite parametric model, inferring the model parameters by Markov chain Monte Carlo, and checking the model's fit and predictive performance. We explain how advanced nonparametric models...

  8. A CAD System for Evaluating Footwear Fit

    Science.gov (United States)

    Savadkoohi, Bita Ture; de Amicis, Raffaele

    With the great growth in footwear demand, the footwear manufacturing industry, to achieve commercial success, must be able to provide footwear that fulfills consumers' requirements better than its competitors. Accurate fitting of shoes is an important factor in comfort and functionality. Footwear fitting has long relied on manual measurement, but the development of 3D acquisition devices and the advent of powerful 3D visualization and modeling techniques for automatically analyzing, searching, and interpreting models have now made automatic determination of different foot dimensions feasible. In this paper, we propose an approach for finding footwear fit within a shoe-last database. We first properly align the 3D models using "weighted" principal component analysis (WPCA). After solving the alignment problem we use an efficient algorithm for cutting the 3D model in order to find the footwear fit from the shoe-last database.

  9. EQUIVALENT MODELS IN COVARIANCE STRUCTURE-ANALYSIS

    NARCIS (Netherlands)

    LUIJBEN, TCW

    1991-01-01

    Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank

  10. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2014-01-01

    Growth mixture models (GMMs; Muthén & Muthén, 2000; Muthén & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models because of their common use, flexibility in modeling many types of change patterns, the availability of statistical programs to fit such models, and the ease of programming. In this paper, we present additional ways of modeling nonlinear change patterns with GMMs. Specifically, we show how LCMs that follow specific nonlinear functions can be extended to examine the presence of multiple latent classes using the Mplus and OpenMx computer programs. These models are fit to longitudinal reading data from the Early Childhood Longitudinal Study-Kindergarten Cohort to illustrate their use. PMID:25419006

  11. A mathematical model for predicting glucose levels in critically-ill patients: the PIGnOLI model

    Directory of Open Access Journals (Sweden)

    Zhongheng Zhang

    2015-06-01

    Full Text Available Background and Objectives. Glycemic control is of paramount importance in the intensive care unit. Presently, several BG control algorithms have been developed for clinical trials, but they are mostly based on experts' opinion and consensus. There are no validated models predicting how glucose levels will change after initiation of insulin infusion in critically ill patients. The study aimed to develop an equation for initial insulin dose setting. Methods. A large critical care database was employed for the study. Linear regression model fitting was employed. Retested blood glucose was used as the independent variable. Insulin rate was forced into the model. Multivariable fractional polynomials and interaction terms were used to explore the complex relationships among covariates. The overall fit of the model was examined using residuals and adjusted R-squared values. Regression diagnostics were used to explore the influence of outliers on the model. Main Results. A total of 6,487 ICU admissions requiring insulin pump therapy were identified. The dataset was randomly split into two subsets at a 7 to 3 ratio. The initial model comprised fractional polynomials and interaction terms. However, this model was not stable when several outliers were excluded, so I fitted a simple linear model without interaction. The selected prediction model (Predicting Glucose Levels in ICU, PIGnOLI) included the variables initial blood glucose, insulin rate, PO volume, total parenteral nutrition, body mass index (BMI), lactate, congestive heart failure, renal failure, liver disease, time interval of BS recheck, and dextrose rate. Insulin rate was significantly associated with blood glucose reduction (coefficient: −0.52, 95% CI [−1.03, −0.01]). The parsimonious model was well validated on the validation subset, with an adjusted R-squared value of 0.8259. Conclusion. The study developed the PIGnOLI model for initial insulin dose setting. Furthermore, experimental study is
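
    The fitting workflow the Methods describe (a 7:3 random split, ordinary least squares with insulin rate in the model, and an adjusted R-squared check on the held-out subset) can be sketched on synthetic data. The variables and coefficients below are illustrative stand-ins, not the PIGnOLI dataset:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in: retested glucose as a linear function of initial
# glucose and insulin rate (negative coefficient), plus noise
n = 6487
x = np.column_stack([np.ones(n),
                     rng.normal(180, 40, n),   # initial blood glucose (mg/dL)
                     rng.uniform(0, 10, n)])   # insulin rate (units/h)
beta_true = np.array([20.0, 0.8, -0.52])
y = x @ beta_true + rng.normal(0, 8, n)

# 7:3 random split into training and validation subsets
idx = rng.permutation(n)
train, valid = idx[:int(0.7 * n)], idx[int(0.7 * n):]

# Ordinary least squares on the training subset
beta_hat, *_ = np.linalg.lstsq(x[train], y[train], rcond=None)

# Adjusted R-squared on the validation subset
resid = y[valid] - x[valid] @ beta_hat
p = x.shape[1] - 1
n_v = valid.size
r2 = 1 - np.sum(resid**2) / np.sum((y[valid] - y[valid].mean())**2)
adj_r2 = 1 - (1 - r2) * (n_v - 1) / (n_v - p - 1)
```

    The paper's fractional-polynomial and interaction terms would enter as extra columns of `x`; the split-and-validate logic is unchanged.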

  12. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity across a finite number of latent classes; finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn statisticians' attention. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show that there is a negative effect between rubber price and stock market price for Malaysia, Thailand, the Philippines, and Indonesia.
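
    A two-component normal mixture of the kind described above is typically maximized with the EM algorithm. A minimal sketch on synthetic data (not the stock-market and rubber-price series used in the paper):

```python
import numpy as np

def em_two_normal(x, n_iter=200):
    """Maximum likelihood fit of a two-component normal mixture via EM."""
    x = np.asarray(x, float)
    # Crude initialisation from the sample extremes
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = (pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
                / (sigma * np.sqrt(2 * np.pi)))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted ML updates of the parameters
        nk = resp.sum(axis=0)
        pi = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 1500), rng.normal(3, 0.5, 500)])
pi_hat, mu_hat, sigma_hat = em_two_normal(data)
```

    Each EM iteration increases the likelihood, so the estimates converge to a stationary point of the mixture log-likelihood; multiple starts guard against poor local optima.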

  13. Premium analysis for copula model: A case study for Malaysian motor insurance claims

    Science.gov (United States)

    Resti, Yulia; Ismail, Noriszura; Jaaman, Saiful Hafizah

    2014-06-01

    This study performs premium analysis for copula models with regression marginals. For illustration, the copula models are fitted to Malaysian motor insurance claims data. We consider copula models from the Archimedean and elliptical families, and marginal distributions from Gamma and Inverse Gaussian regression models. The simulated results from the independent model, obtained by fitting regression models separately to each claim category, and the dependent model, obtained by fitting copula models to all claim categories jointly, are compared. The results show that the dependent model using the Frank copula is the best model, since the risk premiums estimated under it most closely approximate the actual claims experience relative to the other copula models.
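
    Simulating from a dependent model of this kind requires sampling from the fitted copula. A minimal sketch of conditional-inversion sampling from a Frank copula, with illustrative exponential marginals standing in for the paper's Gamma/Inverse Gaussian regression marginals:

```python
import numpy as np

def sample_frank(theta, n, rng):
    """Draw n pairs (u, v) from a Frank copula by conditional inversion:
    u and t are independent uniforms and v solves C(v | u) = t."""
    u = rng.uniform(size=n)
    t = rng.uniform(size=n)
    # Closed-form inverse of the conditional Frank copula
    num = t * (np.exp(-theta) - 1.0)
    den = t + (1.0 - t) * np.exp(-theta * u)
    v = -np.log1p(num / den) / theta
    return u, v

rng = np.random.default_rng(7)
u, v = sample_frank(theta=5.0, n=20000, rng=rng)

# Push the dependent uniforms through marginal quantile functions,
# e.g. exponential claim severities with hypothetical means
sev_a = -np.log(1 - u) * 1000.0
sev_b = -np.log(1 - v) * 2000.0
```

    Positive theta induces positive dependence between the two claim categories while leaving each marginal distribution intact, which is exactly what separates the dependent premium from the independent one.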

  14. Universality Classes of Interaction Structures for NK Fitness Landscapes

    Science.gov (United States)

    Hwang, Sungmin; Schmiegelt, Benjamin; Ferretti, Luca; Krug, Joachim

    2018-02-01

    Kauffman's NK-model is a paradigmatic example of a class of stochastic models of genotypic fitness landscapes that aim to capture generic features of epistatic interactions in multilocus systems. Genotypes are represented as sequences of L binary loci. The fitness assigned to a genotype is a sum of contributions, each of which is a random function defined on a subset of k ≤ L loci. These subsets or neighborhoods determine the genetic interactions of the model. Whereas earlier work on the NK model suggested that most of its properties are robust with regard to the choice of neighborhoods, recent work has revealed an important and sometimes counter-intuitive influence of the interaction structure on the properties of NK fitness landscapes. Here we review these developments and present new results concerning the number of local fitness maxima and the statistics of selectively accessible (that is, fitness-monotonic) mutational pathways. In particular, we develop a unified framework for computing the exponential growth rate of the expected number of local fitness maxima as a function of L, and identify two different universality classes of interaction structures that display different asymptotics of this quantity for large k. Moreover, we show that the probability that the fitness landscape can be traversed along an accessible path decreases exponentially in L for a large class of interaction structures that we characterize as locally bounded. Finally, we discuss the impact of the NK interaction structures on the dynamics of evolution using adaptive walk models.
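
    The quantities reviewed above (interaction neighborhoods, local fitness maxima) are easy to explore numerically. A minimal sketch of an NK landscape with adjacent cyclic neighborhoods and an exhaustive count of local maxima; all names and the neighborhood choice are illustrative:

```python
import itertools
import numpy as np

def nk_landscape(L, K, rng):
    """Random NK fitness landscape with adjacent neighbourhoods:
    locus i interacts with the next K loci (cyclic boundary)."""
    tables = rng.uniform(size=(L, 2 ** (K + 1)))   # one lookup table per locus
    def fitness(g):
        total = 0.0
        for i in range(L):
            idx = 0
            for j in range(K + 1):
                idx = (idx << 1) | g[(i + j) % L]
            total += tables[i, idx]
        return total / L
    return fitness

def count_local_maxima(L, fitness):
    """A genotype is a local maximum if no single-locus flip increases fitness."""
    count = 0
    for g in itertools.product((0, 1), repeat=L):
        f = fitness(g)
        if all(fitness(g[:i] + (1 - g[i],) + g[i + 1:]) <= f for i in range(L)):
            count += 1
    return count

rng = np.random.default_rng(3)
f_add = nk_landscape(L=8, K=0, rng=rng)   # additive landscape: no epistasis
f_epi = nk_landscape(L=8, K=4, rng=rng)   # strongly epistatic landscape
n_add = count_local_maxima(8, f_add)
n_epi = count_local_maxima(8, f_epi)
```

    With K = 0 the landscape is additive and has exactly one maximum; increasing K makes it rugged, which is the growth in the number of maxima the review quantifies.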

  15. Design of spatial experiments: Model fitting and prediction

    Energy Technology Data Exchange (ETDEWEB)

    Fedorov, V.V.

    1996-03-01

    The main objective of the paper is to describe and develop model oriented methods and algorithms for the design of spatial experiments. Unlike many other publications in this area, the approach proposed here is essentially based on the ideas of convex design theory.

  16. Tilted-ring modelling of disk galaxies : Anomalous gas

    NARCIS (Netherlands)

    Jozsa, G. I. G.; Niemczyk, C.; Klein, U.; Oosterloo, T. A.

    We report our ongoing work on kinematical modelling of HI in disk galaxies. We employ our new software TiRiFiC (Tilted-Ring-Fitting-Code) in order to derive tilted-ring models by fitting artificial HI data cubes to observed ones in an automated process. With this technique we derive very reliable

  17. EVALUATION OF RATIONAL FUNCTION MODEL FOR GEOMETRIC MODELING OF CHANG'E-1 CCD IMAGES

    Directory of Open Access Journals (Sweden)

    Y. Liu

    2012-08-01

    Full Text Available The Rational Function Model (RFM) is a generic geometric model that has been widely used in geometric processing of high-resolution earth-observation satellite images, due to its generality and excellent capability of fitting complex rigorous sensor models. In this paper, the feasibility and precision of the RFM for geometric modeling of China's Chang'E-1 (CE-1) lunar orbiter images are presented. The RFM parameters of forward-, nadir- and backward-looking CE-1 images are generated through a least-squares solution using virtual control points derived from the rigorous sensor model. The precision of the RFM is evaluated by comparison with the rigorous sensor model in both image space and object space. Experimental results using nine images from three orbits show that the RFM can precisely fit the rigorous sensor model of CE-1 CCD images, with RMS residual errors at the 1/100-pixel level in image space and less than 5 meters in object space. This indicates that it is feasible to use the RFM to describe the imaging geometry of CE-1 CCD images and spacecraft position and orientation. The RFM will enable planetary data centers to supply RFM parameters of orbital images while keeping the original orbit trajectory data confidential.
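
    An RFM expresses each image coordinate as a ratio of two cubic polynomials in normalized ground coordinates. A minimal sketch of the evaluation step, using an illustrative ordering of the 20 cubic terms (not the exact RPC00B convention) and toy coefficients:

```python
import numpy as np

def cubic_terms(x, y, z):
    """The 20 monomials up to degree 3 in (x, y, z); illustrative ordering."""
    return np.array([1, x, y, z, x*y, x*z, y*z, x*x, y*y, z*z,
                     x*y*z, x**3, x*y*y, x*z*z, x*x*y, y**3, y*z*z,
                     x*x*z, y*y*z, z**3], float)

def rfm_sample(lat, lon, h, num_coef, den_coef, norm):
    """Evaluate one RFM image coordinate as a ratio of two cubic
    polynomials in normalized ground coordinates."""
    lat_n = (lat - norm['lat_off']) / norm['lat_scale']
    lon_n = (lon - norm['lon_off']) / norm['lon_scale']
    h_n = (h - norm['h_off']) / norm['h_scale']
    t = cubic_terms(lat_n, lon_n, h_n)
    return t @ num_coef / (t @ den_coef)

# Toy coefficients: numerator encodes a nearly-affine mapping, denominator = 1
num = np.zeros(20); num[1], num[2] = 0.9, 0.1
den = np.zeros(20); den[0] = 1.0
norm = dict(lat_off=0.0, lat_scale=1.0, lon_off=120.0, lon_scale=1.0,
            h_off=0.0, h_scale=1000.0)
s = rfm_sample(0.5, 120.2, 500.0, num, den, norm)
```

    Fitting the 78 unknown coefficients to a grid of virtual control points from the rigorous sensor model, as the paper does, reduces to a (possibly regularized) least-squares problem in these terms.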

  18. Calibration of a biome-biogeochemical cycles model for modeling the net primary production of teak forests through inverse modeling of remotely sensed data

    Science.gov (United States)

    Imvitthaya, Chomchid; Honda, Kiyoshi; Lertlum, Surat; Tangtham, Nipon

    2011-01-01

    In this paper, we present the results of net primary production (NPP) modeling of teak (Tectona grandis Linn. f.), an important species in tropical deciduous forests. The biome-biogeochemical cycles (Biome-BGC) model was calibrated to estimate NPP through an inverse modeling approach. A genetic algorithm (GA) was linked with Biome-BGC to determine the optimal ecophysiological model parameters. Biome-BGC was calibrated by adjusting the ecophysiological parameters to fit the simulated LAI to the satellite-derived LAI (SPOT-Vegetation), and the best fitness value confirmed the high accuracy of the ecophysiological parameters generated by the GA. The modeled NPP, using the GA-optimized parameters as input data, was evaluated against daily NPP derived from the MODIS satellite and against annual field data in northern Thailand. The results showed that NPP estimates obtained with the optimized ecophysiological parameters were more accurate than those obtained with default literature parameterization, mainly because the optimized parameters reduced bias from systematic underestimation in the model. These Biome-BGC results can be effectively applied to teak forests in tropical areas. The study proposes a more effective method of using a GA to determine ecophysiological parameters at the site level and represents a first step toward analysis of the carbon budget of teak plantations at the regional scale.

  19. Comparison among cognitive diagnostic models for the TIMSS 2007 fourth grade mathematics assessment.

    Science.gov (United States)

    Yamaguchi, Kazuhiro; Okada, Kensuke

    2018-01-01

    A variety of cognitive diagnostic models (CDMs) have been developed in recent years to help with the diagnostic assessment and evaluation of students. Each model makes different assumptions about the relationship between students' achievement and skills, which makes it important to empirically investigate which CDMs better fit the actual data. In this study, we examined this question by comparatively fitting representative CDMs to the Trends in International Mathematics and Science Study (TIMSS) 2007 assessment data across seven countries. The following two major findings emerged. First, in accordance with former studies, CDMs had a better fit than did the item response theory models. Second, main effects models generally had a better fit than other parsimonious or the saturated models. Related to the second finding, the fit of traditional parsimonious models such as the DINA and DINO models was not optimal. The empirical educational implications of these findings are discussed.
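
    The DINA model mentioned above assigns each item a slip and a guess parameter: a student answers correctly with probability 1 − slip if they master every skill the item's Q-matrix row requires, and with probability guess otherwise. A minimal sketch with hypothetical students and items:

```python
import numpy as np

def dina_prob(alpha, q, slip, guess):
    """DINA item response probabilities for a students x items table.
    eta[i, j] = 1 iff student i masters every skill item j requires."""
    alpha = np.asarray(alpha)
    q = np.asarray(q)
    eta = np.all(alpha[:, None, :] >= q[None, :, :], axis=2)
    return np.where(eta, 1.0 - slip, guess)

# Two students, two items, three skills (all values hypothetical)
alpha = np.array([[1, 1, 0],    # masters skills 1-2
                  [1, 1, 1]])   # masters all skills
q = np.array([[1, 1, 0],        # item 1 requires skills 1-2
              [0, 1, 1]])       # item 2 requires skills 2-3
p = dina_prob(alpha, q, slip=np.array([0.1, 0.2]), guess=np.array([0.25, 0.2]))
```

    The DINO model replaces the conjunctive `all` with a disjunctive condition (mastering any required skill suffices); main-effects models let each mastered skill contribute additively instead.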

  20. Active component modeling for analog integrated circuit design. Model parametrization and implementation in the SPICE-PAC circuit simulator

    International Nuclear Information System (INIS)

    Marchal, Xavier

    1992-01-01

    In order to use CAD efficiently in the analysis and design of electronic integrated circuits, adequate models of active non-linear devices such as MOSFET transistors must be available to the designer. Many mathematical forms can be given to those models, such as explicit relations, or implicit equations to be solved. A major requirement in developing MOS transistor models for IC simulation is the availability of electrical characteristic curves over a wide range of channel widths and lengths, including the sub-micrometer range. To account in a convenient way for the influence of bulk charge on the I_DS = f(V_DS, V_GS, V_BS) device characteristics, all three standard SPICE MOS models use an empirical fitting parameter called the 'charge sharing factor'. Unfortunately, this formulation produces models which correctly describe only some of the short-channel phenomena, or only some particular operating conditions (low injection, avalanche effect, etc.). We present here a cellular model (CDM = Charge Distributed Model) implemented in the open modular SPICE-PAC simulator; this model is derived from the 4-terminal Wang charge-controlled MOSFET model, using the charge-sheet approximation. The CDM model describes device characteristics in all operating regions without introducing drain current discontinuities and without requiring a 'charge sharing factor'. A usual problem faced by designers when they simulate MOS ICs is to find a reliable source of model parameters. Though most models have a physical basis, some of their parameters cannot easily be estimated from physical considerations. It can also happen that physically determined parameter values do not produce a good fit to measured device characteristics. Thus it is generally necessary to extract model parameters from measured transistor data, to ensure that model equations approximate measured curves accurately enough. Model parameter extraction can be done in two different ways, described in this thesis. The first

  1. Fits combining hyperon semileptonic decays and magnetic moments and CVC

    International Nuclear Information System (INIS)

    Bohm, A.; Kielanowski, P.

    1982-10-01

    We have performed a test of CVC by determining the baryon charges and magnetic moments from the hyperon semileptonic data. CVC was then applied in order to make a joint fit of all baryon semileptonic decay data and baryon magnetic moments for the spectrum-generating group (SG) model as well as for the conventional model (Cabibbo theory with magnetic moments in nuclear magnetons). The SG model gives a very good fit with χ²/n_D = 25/20 (approximately 21% C.L.), whereas the conventional model gives a fit with χ²/n_D = 244/20.

  2. Quantitative reactive modeling and verification.

    Science.gov (United States)

    Henzinger, Thomas A

    Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness , which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.

  3. Modeling of IP scanning activities with Hidden Markov Models: Darknet case study

    OpenAIRE

    De Santis , Giulia; Lahmadi , Abdelkader; Francois , Jerome; Festor , Olivier

    2016-01-01

    We propose a methodology based on Hidden Markov Models (HMMs) to model scanning activities monitored by a darknet. The HMMs of scanning activities are built on the basis of the number of scanned IP addresses within a time window and fitted using mixtures of Poisson distributions. Our methodology is applied to real data traces collected from a darknet and generated by two large-scale scanners, ZMap and Shodan. We demonstrate that the built models are able to characteri...

  4. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    Science.gov (United States)

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
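
    The cdf–inverse-cdf construction described above can be sketched for the simplest case: a latent standard-normal AR(1) series is pushed through the normal cdf and then the inverse cdf of the target marginal, preserving the serial dependence. The Gamma marginal below is an illustrative choice, not the paper's nonparametric prior:

```python
import numpy as np
from scipy import stats

def copula_ar1(n, phi, marginal, rng):
    """Gaussian-copula transformed AR(1): latent N(0, 1) dynamics,
    arbitrary stationary marginal distribution."""
    z = np.empty(n)
    z[0] = rng.standard_normal()
    eps = rng.standard_normal(n) * np.sqrt(1.0 - phi ** 2)
    for t in range(1, n):
        z[t] = phi * z[t - 1] + eps[t]      # stationary N(0, 1) marginal
    u = stats.norm.cdf(z)                    # uniform marginal
    return marginal.ppf(u)                   # desired (non-Gaussian) marginal

rng = np.random.default_rng(11)
x = copula_ar1(50000, phi=0.8, marginal=stats.gamma(a=2.0), rng=rng)
```

    Because both transforms are monotone, rank-based dependence (e.g. Spearman autocorrelation) is inherited from the latent series exactly, while the marginal is whatever `marginal.ppf` specifies; replacing `marginal` with a posterior draw from a nonparametric prior recovers the flavor of the paper's construction.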

  5. Non-linear modelling to describe lactation curve in Gir crossbred cows

    Directory of Open Access Journals (Sweden)

    Yogesh C. Bangar

    2017-02-01

    Full Text Available Abstract Background The modelling of the lactation curve provides guidelines for formulating farm managerial practices in dairy cows. The aim of the present study was to determine the suitable non-linear model which most accurately fitted the lactation curves of five lactations in 134 Gir crossbred cows reared at the Research-cum-Development Project (RCDP) on Cattle farm, MPKV (Maharashtra). Four models, viz. the gamma-type function, quadratic model, mixed log function and Wilmink model, were fitted to each lactation separately and then compared on the basis of goodness-of-fit measures, viz. adjusted R², root mean square error (RMSE), Akaike's information criterion (AIC) and Bayesian information criterion (BIC). Results In general, the highest milk yield was observed in the fourth lactation whereas it was lowest in the first lactation. Among the models investigated, the mixed log function and the gamma-type function provided the best fit of the lactation curve for the first and the remaining lactations, respectively. The quadratic model gave the poorest fit to the lactation curve in almost all lactations. Peak yield was highest in the fourth lactation and lowest in the first. Further, the first lactation showed the highest persistency but a relatively longer time to peak yield than the other lactations. Conclusion Lactation curve modelling using the gamma-type function may be helpful in setting management strategies at the farm level; however, modelling must be optimized regularly before implementation to enhance productivity in Gir crossbred cows.
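
    The gamma-type (Wood) function that fit best here, y(t) = a·t^b·e^(−ct), peaks at t = b/c. A minimal fitting sketch on hypothetical test-day data (not the Gir herd records):

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Gamma-type (Wood) lactation curve: y(t) = a * t**b * exp(-c * t)."""
    return a * t ** b * np.exp(-c * t)

# Hypothetical weekly test-day yields (kg/day) over a 44-week lactation
t = np.arange(1.0, 45.0)
true = (12.0, 0.25, 0.04)
rng = np.random.default_rng(5)
y = wood(t, *true) + rng.normal(0.0, 0.3, t.size)

popt, _ = curve_fit(wood, t, y, p0=(10.0, 0.2, 0.03))
a_hat, b_hat, c_hat = popt

peak_time = b_hat / c_hat                        # weeks to peak yield
peak_yield = wood(peak_time, a_hat, b_hat, c_hat)
```

    The competing quadratic, mixed log, and Wilmink models would each be fitted the same way with their own model functions, and the adjusted R², RMSE, AIC, and BIC of the fits compared.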

  6. ModelMage: a tool for automatic model generation, selection and management.

    Science.gov (United States)

    Flöttmann, Max; Schaber, Jörg; Hoops, Stephan; Klipp, Edda; Mendes, Pedro

    2008-01-01

    Mathematical modeling of biological systems usually involves implementing, simulating, and discriminating several candidate models that represent alternative hypotheses. Generating and managing these candidate models is a tedious and difficult task and can easily lead to errors. ModelMage is a tool that facilitates management of candidate models. It is designed for the easy and rapid development, generation, simulation, and discrimination of candidate models. The main idea of the program is to automatically create a defined set of model alternatives from a single master model. The user provides only one SBML-model and a set of directives from which the candidate models are created by leaving out species, modifiers or reactions. After generating models the software can automatically fit all these models to the data and provides a ranking for model selection, in case data is available. In contrast to other model generation programs, ModelMage aims at generating only a limited set of models that the user can precisely define. ModelMage uses COPASI as a simulation and optimization engine. Thus, all simulation and optimization features of COPASI are readily incorporated. ModelMage can be downloaded from http://sysbio.molgen.mpg.de/modelmage and is distributed as free software.

  7. Modeling Implicit and Explicit Memory.

    NARCIS (Netherlands)

    Raaijmakers, J.G.W.; Ohta, N.; Izawa, C.

    2005-01-01

    Mathematical models of memory are useful for describing basic processes of memory in a way that enables generalization across a number of experimental paradigms. Models that have these characteristics do not just engage in empirical curve-fitting, but may also provide explanations for puzzling

  8. Models of πNN interactions

    International Nuclear Information System (INIS)

    Lee, T.S.H.

    1988-01-01

    A πNN model inspired by Quantum Chromodynamics is presented. The model gives an accurate fit to the most recent Arndt NN phase shifts up to 1 GeV and can be applied to study intermediate- and high-energy nuclear reactions. 20 refs., 2 figs

  9. In search of laterally heterogeneous viscosity models of Glacial Isostatic Adjustment with the ICE-6G_C global ice history model

    Science.gov (United States)

    Li, Tanghua; Wu, Patrick; Steffen, Holger; Wang, Hansheng

    2018-05-01

    Most models of Glacial Isostatic Adjustment (GIA) assume that the Earth is laterally homogeneous. However, seismic and geological observations clearly show that the Earth's mantle is laterally heterogeneous. Previous studies of GIA with lateral heterogeneity mostly focused on its effect or sensitivity on GIA predictions, and it is not clear to what extent can lateral heterogeneity solve the misfits between GIA predictions and observations. Our aim is to search for the best 3D viscosity models that can simultaneously fit the global relative sea-level (RSL) data, the peak uplift rates (u-dot from GNSS) and peak gravity-rate-of-change (g-dot from the GRACE satellite mission) in Laurentia and Fennoscandia. However, the search is dependent on the ice and viscosity model inputs - the latter depends on the background viscosity and the seismic tomography models used. In this paper, the ICE-6G_C ice model, with Bunge & Grand's seismic tomography model and background viscosity models close to VM5 will be assumed. A Coupled Laplace-Finite Element Method is used to compute gravitationally self-consistent sea level change with time dependent coastlines and rotational feedback in addition to changes in deformation, gravity and the state of stress. Several laterally heterogeneous models are found to fit the global sea level data better than laterally homogeneous models. Two of these laterally heterogeneous models also fit the ICE-6G_C peak g-dot and u-dot rates observed in Laurentia simultaneously. However, even with the introduction of lateral heterogeneity, no model that is able to fit the present-day g-dot and uplift rate data in Fennoscandia has been found. Therefore, either the ice history of ICE-6G_C in Fennoscandia and Barent Sea needs some modifications, or the sub-lithospheric property/non-thermal effect underneath northern Europe must be different from that underneath Laurentia.

  10. The Structure of Preschoolers' Emotion Knowledge: Model Equivalence and Validity Using a Structural Equation Modeling Approach

    Science.gov (United States)

    Bassett, Hideko Hamada; Denham, Susanne; Mincic, Melissa; Graling, Kelly

    2012-01-01

    Research Findings: A theory-based 2-factor structure of preschoolers' emotion knowledge (i.e., recognition of emotional expression and understanding of emotion-eliciting situations) was tested using confirmatory factor analysis. Compared to 1- and 3-factor models, the 2-factor model showed a better fit to the data. The model was found to be…

  11. Linear approximation model network and its formation via ...

    Indian Academy of Sciences (India)

To overcome the deficiency of 'local model network' (LMN) techniques, an alternative 'linear approximation model' (LAM) network approach is proposed. Such a network models a nonlinear or practical system with multiple linear models fitted along operating trajectories, where individual models are simply networked ...

  12. Comparison among cognitive diagnostic models for the TIMSS 2007 fourth grade mathematics assessment.

    Directory of Open Access Journals (Sweden)

    Kazuhiro Yamaguchi

A variety of cognitive diagnostic models (CDMs) have been developed in recent years to help with the diagnostic assessment and evaluation of students. Each model makes different assumptions about the relationship between students' achievement and skills, which makes it important to empirically investigate which CDMs better fit the actual data. In this study, we examined this question by comparatively fitting representative CDMs to the Trends in International Mathematics and Science Study (TIMSS) 2007 assessment data across seven countries. Two major findings emerged. First, in accordance with former studies, CDMs had a better fit than the item response theory models. Second, main effects models generally had a better fit than other parsimonious or saturated models. Related to the second finding, the fit of traditional parsimonious models such as the DINA and DINO models was not optimal. The empirical educational implications of these findings are discussed.
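
The relative-fit comparisons described in this record typically rest on information criteria. A minimal sketch of ranking candidate models by AIC; the model names, log-likelihoods, and parameter counts below are hypothetical, not values from the study:

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fitted models: (name, maximized log-likelihood, parameter count)
candidates = [
    ("saturated", -1210.4, 40),
    ("main-effects", -1225.9, 12),
    ("DINA", -1290.2, 8),
]
ranked = sorted(candidates, key=lambda m: aic(m[1], m[2]))
for name, ll, k in ranked:
    print(f"{name}: AIC = {aic(ll, k):.1f}")
```

With these illustrative numbers the main-effects model ranks first despite its smaller likelihood, mirroring the trade-off the abstract describes.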

  13. A cautionary note on the use of Ornstein Uhlenbeck models in macroevolutionary studies.

    Science.gov (United States)

    Cooper, Natalie; Thomas, Gavin H; Venditti, Chris; Meade, Andrew; Freckleton, Rob P

    2016-05-01

Phylogenetic comparative methods are increasingly used to give new insights into the dynamics of trait evolution in deep time. For continuous traits the core of these methods is a suite of models that attempt to capture evolutionary patterns by extending the Brownian constant variance model. However, the properties of these models are often poorly understood, which can lead to the misinterpretation of results. Here we focus on one of these models - the Ornstein Uhlenbeck (OU) model. We show that the OU model is frequently incorrectly favoured over simpler models when using likelihood ratio tests, and that many studies fitting this model use datasets that are small and prone to this problem. We also show that very small amounts of error in datasets can have profound effects on the inferences derived from OU models. Our results suggest that simulating fitted models and comparing with empirical results is critical when fitting OU and other extensions of the Brownian model. We conclude by making recommendations for best practice in fitting OU models in phylogenetic comparative analyses, and for interpreting the parameters of the OU model.
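
The pull toward an optimum that distinguishes an OU process from Brownian motion can be illustrated with a short simulation. A sketch (not the authors' code), assuming a simple Euler-Maruyama discretization with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ou(x0, alpha, theta, sigma, dt, n_steps):
    """Euler-Maruyama simulation of dX = alpha*(theta - X) dt + sigma dW."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        x[i + 1] = x[i] + alpha * (theta - x[i]) * dt \
                   + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

path = simulate_ou(x0=0.0, alpha=2.0, theta=1.0, sigma=0.5, dt=0.01, n_steps=200_000)
# The stationary variance of an OU process is sigma^2 / (2 * alpha)
print(np.var(path[50_000:]))   # close to 0.0625
```

Unlike Brownian motion, whose variance grows without bound, the simulated path settles around the optimum theta with bounded variance; on short, noisy paths this distinction is hard to detect, which is the caution the abstract raises.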

  14. The Evaluation of Feed-in Tariff Models for Photovoltaic System in Thailand

    Directory of Open Access Journals (Sweden)

    Sagulpongmalee Kangsadan

    2016-01-01

Thailand aims to reach 6,000 MW of total installed PV capacity by 2036. The feed-in tariff (FIT) has been one of Thailand's most successful mechanisms for promoting PV-generated electricity. Three FIT models have been used: a premium-price FIT (the "Adder") in the first FIT policy, introduced to attract investment in PV power plants; subsequently, a fixed-price FIT for ground-mounted PV and a front-end-loaded FIT for solar rooftops. In addition, Thailand uses project-specific tariff design, in which FIT payment levels are differentiated by technology, capacity size, and resource quality. As a result of these FIT policies, cumulative installed PV capacity reached 1,287 MW in 2014. Furthermore, a financial evaluation of a FIT-supported PV project in Thailand found a Net Present Value (NPV) of 32.97 million Baht, an Internal Rate of Return (IRR) of 13.22%, a payback period of 8.86 years, and a B/C ratio of 1.66; such projects should be implemented in conjunction with other financial support measures, such as low-interest loans and tax benefits. These incentives, especially the FIT, make PV projects profitable and attractive to investors.

  15. Volatility and what Lies Beneath: A Joint Model

    DEFF Research Database (Denmark)

    Cont, Rama; Kokholm, Thomas

In this paper a model for the joint dynamics of forward variance swap prices and the underlying stock index is proposed. It is shown how options on forward variance swaps, along with options on the underlying, can be priced consistently. The calibration of the model is done step-wise, first by fitting VIX option prices and then options on the underlying, making the model implementable from a calibration perspective. Finally, the model is implemented and it is shown how it fits VIX index option prices along with European options on the S&P 500 for various maturities.

  16. B physics beyond the Standard Model

    International Nuclear Information System (INIS)

    Hewett, J.A.L.

    1997-12-01

    The ability of present and future experiments to test the Standard Model in the B meson sector is described. The authors examine the loop effects of new interactions in flavor changing neutral current B decays and in Z → b anti b, concentrating on supersymmetry and the left-right symmetric model as specific examples of new physics scenarios. The procedure for performing a global fit to the Wilson coefficients which describe b → s transitions is outlined, and the results of such a fit from Monte Carlo generated data is compared to the predictions of the two sample new physics scenarios. A fit to the Zb anti b couplings from present data is also given

  17. Does Sluggish Cognitive Tempo Fit within a Bi-factor Model of Attention-Deficit/Hyperactivity Disorder?

    Science.gov (United States)

    Garner, Annie A.; Peugh, James; Becker, Stephen P.; Kingery, Kathleen M.; Tamm, Leanne; Vaughn, Aaron J.; Ciesielski, Heather; Simon, John O.; Loren, Richard E. A.; Epstein, Jeffery N.

    2014-01-01

Objective Studies demonstrate sluggish cognitive tempo (SCT) symptoms to be distinct from the inattentive and hyperactive-impulsive dimensions of Attention-Deficit/Hyperactivity Disorder (ADHD). No study has examined SCT within a bi-factor model of ADHD, whereby SCT may form a specific factor distinct from inattention and hyperactivity/impulsivity while still fitting within a general ADHD factor; this was the purpose of the current study. Method 168 children were recruited from an ADHD clinic. Most (92%) met diagnostic criteria for ADHD. Parents and teachers completed measures of ADHD and SCT. Results Although SCT symptoms were strongly associated with inattention, they loaded onto a factor independent of ADHD 'g'. Results were consistent across parent and teacher ratings. Conclusions SCT is structurally distinct from inattention as well as from the general ADHD latent symptom structure. Findings support a growing body of research suggesting SCT to be distinct and separate from ADHD. PMID:25005039

  18. A composite computational model of liver glucose homeostasis. I. Building the composite model.

    Science.gov (United States)

    Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A

    2012-04-07

    A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.

  19. The effects of a self-efficacy intervention on exercise behavior of fitness club members in 52 weeks and long-term relationships of transtheoretical model constructs

    NARCIS (Netherlands)

    Middelkamp, J.; Rooijen, M. Van; Wolfhagen, P.; Steenbergen, B.

    2017-01-01

    The transtheoretical model of behavior change (TTM) is often used to understand changes in health-related behavior, like exercise. Exercise behavior in fitness clubs is an understudied topic, but preliminary studies showed low frequencies and large numbers of drop-out. An initial 12-week

  20. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    Science.gov (United States)

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
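
As a toy illustration of what an LMM separates, the variance components of a balanced random-intercept model can be estimated by the method of moments with plain NumPy. This is a sketch on synthetic data, not the SPSS MIXED procedure (which uses ML/REML), and all parameter values are made up:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic longitudinal data: 100 subjects x 5 visits, random intercept per subject
n_subj, n_obs = 100, 5
subj_effect = rng.normal(0, 2.0, n_subj)        # between-subject sd = 2
y = 10 + subj_effect[:, None] + rng.normal(0, 1.0, (n_subj, n_obs))  # residual sd = 1

# Method-of-moments variance components for a balanced random-intercept model
within_ms = np.mean(np.var(y, axis=1, ddof=1))       # estimates sigma_e^2
between_ms = n_obs * np.var(y.mean(axis=1), ddof=1)  # n * Var(subject means)
sigma_e2 = within_ms
sigma_u2 = (between_ms - within_ms) / n_obs          # between-subject variance
print(sigma_u2, sigma_e2)   # close to (4.0, 1.0)
```

The point of the decomposition is the same one the LMMs procedure exploits: repeated measures on the same subject are correlated, so between-subject and within-subject variability must be modeled separately.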

  1. Modeling of secondary organic aerosol yields from laboratory chamber data

    Directory of Open Access Journals (Sweden)

    M. N. Chan

    2009-08-01

Laboratory chamber data serve as the basis for constraining models of secondary organic aerosol (SOA) formation. Current models fall into three categories: empirical two-product (Odum), product-specific, and volatility basis set. The product-specific and volatility basis set models are applied here to represent laboratory data on the ozonolysis of α-pinene under dry, dark, and low-NOx conditions in the presence of ammonium sulfate seed aerosol. Using five major identified products, the model is fit to the chamber data. From the optimal fitting, SOA oxygen-to-carbon (O/C) and hydrogen-to-carbon (H/C) ratios are modeled. The discrepancy between measured H/C ratios and those based on the oxidation products used in the model fitting suggests the potential importance of particle-phase reactions. Data fitting is also carried out using the volatility basis set, wherein oxidation products are parsed into volatility bins. The product-specific model is most likely hindered by the lack of explicit inclusion of particle-phase accretion compounds. While prospects for identification of the majority of SOA products for major volatile organic compound (VOC) classes remain promising, for the near future empirical product or volatility basis set models remain the approaches of choice.

  2. Modelling DW-MRI data from primary and metastatic ovarian tumours

    Energy Technology Data Exchange (ETDEWEB)

Winfield, Jessica M. [Institute of Cancer Research, CRUK and EPSRC Cancer Imaging Centre, Division of Radiotherapy and Imaging, Surrey (United Kingdom); Royal Marsden NHS Foundation Trust, Surrey (United Kingdom); Institute of Cancer Research and Royal Marsden Hospital, MRI Unit, Surrey (United Kingdom); DeSouza, Nandita M.; Collins, David J. [Institute of Cancer Research, CRUK and EPSRC Cancer Imaging Centre, Division of Radiotherapy and Imaging, Surrey (United Kingdom); Royal Marsden NHS Foundation Trust, Surrey (United Kingdom); Priest, Andrew N.; Hodgkin, Charlotte; Freeman, Susan [University of Cambridge, Department of Radiology, Addenbrooke's Hospital, Cambridge (United Kingdom); Wakefield, Jennifer C.; Orton, Matthew R. [Institute of Cancer Research, CRUK and EPSRC Cancer Imaging Centre, Division of Radiotherapy and Imaging, Surrey (United Kingdom)

    2015-07-15

    To assess goodness-of-fit and repeatability of mono-exponential, stretched exponential and bi-exponential models of diffusion-weighted MRI (DW-MRI) data in primary and metastatic ovarian cancer. Thirty-nine primary and metastatic lesions from thirty-one patients with stage III or IV ovarian cancer were examined before and after chemotherapy using DW-MRI with ten diffusion-weightings. The data were fitted with (a) a mono-exponential model to give the apparent diffusion coefficient (ADC), (b) a stretched exponential model to give the distributed diffusion coefficient (DDC) and stretching parameter (α), and (c) a bi-exponential model to give the diffusion coefficient (D), perfusion fraction (f) and pseudodiffusion coefficient (D*). Coefficients of variation, established from repeated baseline measurements, were: ADC 3.1 %, DDC 4.3 %, α 7.0 %, D 13.2 %, f 44.0 %, D* 165.1 %. The bi-exponential model was unsuitable in these data owing to poor repeatability. After excluding the bi-exponential model, analysis using Akaike Information Criteria showed that the stretched exponential model provided the better fit to the majority of pixels in 64 % of lesions. The stretched exponential model provides the optimal fit to DW-MRI data from ovarian, omental and peritoneal lesions and lymph nodes in pre-treatment and post-treatment measurements with good repeatability. (orig.)
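
The mono-exponential and stretched exponential fits described above can be sketched on synthetic signal-decay data. The b-values, signal level, and noise below are illustrative only, not the study's acquisition protocol:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic DW-MRI signal: S(b) = S0 * exp(-b * ADC), plus noise
b = np.array([0, 50, 100, 200, 300, 400, 600, 800, 1000, 1300], dtype=float)
true_S0, true_adc = 1000.0, 1.1e-3            # ADC in mm^2/s
rng = np.random.default_rng(1)
signal = true_S0 * np.exp(-b * true_adc) + rng.normal(0, 5, b.size)

def mono_exp(b, S0, adc):
    return S0 * np.exp(-b * adc)

def stretched_exp(b, S0, ddc, alpha):
    return S0 * np.exp(-(b * ddc) ** alpha)

popt_m, _ = curve_fit(mono_exp, b, signal, p0=[900.0, 1e-3])
popt_s, _ = curve_fit(stretched_exp, b, signal, p0=[900.0, 1e-3, 0.9],
                      bounds=([1.0, 1e-5, 0.3], [2000.0, 0.1, 1.5]))
print("fitted ADC:", popt_m[1])   # close to 1.1e-3
```

Because the synthetic decay here is truly mono-exponential, the stretched fit should return alpha near 1; in real tissue, alpha < 1 captures the multi-compartment behaviour the abstract reports.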

  3. Statistical physics of pairwise probability models

    Directory of Open Access Journals (Sweden)

    Yasser Roudi

    2009-11-01

Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models.
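
A toy version of fitting a pairwise maximum-entropy (Ising-type) model, reduced to just two binary units so the partition function can be summed exactly. Synthetic data, not the paper's simulated cortical networks:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
# Synthetic binary "spike" data for two units, states coded as -1/+1
data = rng.choice([-1, 1], size=(5000, 2), p=[0.7, 0.3])
target = np.array([data[:, 0].mean(), data[:, 1].mean(),
                   (data[:, 0] * data[:, 1]).mean()])

# Enumerate all four joint states and their sufficient statistics (s1, s2, s1*s2)
states = np.array(list(itertools.product([-1, 1], repeat=2)))
feats = np.column_stack([states[:, 0], states[:, 1],
                         states[:, 0] * states[:, 1]])

params = np.zeros(3)          # (h1, h2, J)
for _ in range(2000):         # gradient ascent on the log-likelihood
    logp = feats @ params
    p = np.exp(logp - logp.max())
    p /= p.sum()
    model_moments = p @ feats
    params += 0.1 * (target - model_moments)

print(np.round(p @ feats - target, 4))  # model moments match the data moments
```

Fitting means and pairwise correlations is exactly the moment-matching condition described in the abstract; for large networks the partition function cannot be enumerated and the approximate inference methods the paper studies take over.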

  4. Eight new luminous z ≥ 6 quasars discovered via SED model fitting of VISTA, WISE and Dark Energy Survey Year 1 observations

    International Nuclear Information System (INIS)

    Reed, S. L.; McMahon, R. G.; Martini, P.; Banerji, M.; Auger, M.

    2017-01-01

Here, we present the discovery and spectroscopic confirmation with the European Southern Observatory New Technology Telescope (NTT) and Gemini South telescopes of eight new, and the rediscovery of two previously known, 6.0 < z < 6.5 quasars with zAB < 21.0. These quasars were photometrically selected without any morphological criteria from 1533 deg2 using spectral energy distribution (SED) model fitting to photometric data from the Dark Energy Survey (g, r, i, z, Y), VISTA Hemisphere Survey (J, H, K) and Wide-field Infrared Survey Explorer (W1, W2). The photometric data were fitted with a grid of quasar model SEDs with redshift-dependent Ly α forest absorption and a range of intrinsic reddening, as well as a series of low-mass cool star models. Candidates were ranked using an SED-model-based χ2-statistic, which is extendable to other future imaging surveys (e.g. LSST and Euclid). Our spectral confirmation success rate is 100 per cent without the need for follow-up photometric observations as used in other studies of this type. Combined with automatic removal of the main types of non-astrophysical contaminants, the method allows large data sets to be processed without human intervention and without being overrun by spurious false candidates. We also present a robust parametric redshift estimator that gives comparable accuracy to Mg II and CO-based redshift estimators. We find two z ~ 6.2 quasars with H II near-zone sizes ≤3 proper Mpc, which could indicate that these quasars are young, with ages ≲ 10^6-10^7 years, or lie in overdense regions of the IGM. The z = 6.5 quasar VDES J0224-4711 has JAB = 19.75 and is the second most luminous quasar known with z ≥ 6.5.

  5. Probabilistic model for the simulation of secondary electron emission

    Directory of Open Access Journals (Sweden)

    M. A. Furman

    2002-12-01

We provide a detailed description of a model and its computational algorithm for the secondary electron emission process. The model is based on a broad phenomenological fit to data for the secondary-emission yield and the emitted-energy spectrum. We provide two sets of values for the parameters by fitting our model to two particular data sets, one for copper and the other one for stainless steel.
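
The phenomenological-fit idea can be sketched with a Vaughan-style yield curve fitted by least squares. This is an assumed simpler form, not the Furman-Pini model itself, and the "measured" yields below are made-up numbers:

```python
import numpy as np
from scipy.optimize import curve_fit

def yield_curve(E, delta_max, E_max, k):
    """Vaughan-style phenomenological secondary-emission yield vs. impact energy."""
    v = E / E_max
    return delta_max * (v * np.exp(1.0 - v)) ** k

# Synthetic "measured" yields for a copper-like surface (illustrative values)
E = np.linspace(50, 2000, 40)
rng = np.random.default_rng(6)
measured = yield_curve(E, 1.3, 300.0, 0.6) + rng.normal(0, 0.01, E.size)

popt, _ = curve_fit(yield_curve, E, measured, p0=[1.0, 250.0, 0.5],
                    bounds=([0.1, 50.0, 0.1], [5.0, 2000.0, 2.0]))
print(popt)   # close to (1.3, 300.0, 0.6)
```

The fitted peak yield delta_max and peak energy E_max are the kind of material-dependent parameters that differ between the copper and stainless-steel data sets mentioned in the abstract.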

  6. QCD ghost f(T)-gravity model

    Energy Technology Data Exchange (ETDEWEB)

    Karami, K.; Abdolmaleki, A.; Asadzadeh, S. [University of Kurdistan, Department of Physics, Sanandaj (Iran, Islamic Republic of); Safari, Z. [Research Institute for Astronomy and Astrophysics of Maragha (RIAAM), Maragha (Iran, Islamic Republic of)

    2013-09-15

Within the framework of modified teleparallel gravity, we reconstruct a f(T) model corresponding to the QCD ghost dark energy scenario. For a spatially flat FRW universe containing only pressureless matter, we obtain the time evolution of the torsion scalar T (or the Hubble parameter). Then, we calculate the effective torsion equation of state parameter of the QCD ghost f(T)-gravity model as well as the deceleration parameter of the universe. Furthermore, we fit the model parameters using the latest observational data, including SNeIa, CMB and BAO data. We also check the viability of our model using a cosmographic analysis approach. Moreover, we investigate the validity of the generalized second law (GSL) of gravitational thermodynamics for our model. Finally, we point out the growth rate of matter density perturbation. We conclude that in the QCD ghost f(T)-gravity model, the universe begins in a matter-dominated phase and approaches a de Sitter regime at late times, as expected. This model is also consistent with current data, passes the cosmographic test, satisfies the GSL and fits the growth-factor data as well as the ΛCDM model does. (orig.)

  7. Modelling the ethanol-induced sleeping time in mice through a zero inflated model

    OpenAIRE

    FOGAP, Njinju Tongwa

    2007-01-01

In statistical data analysis, it is imperative to select the most suitable model. A wrong choice of model leads to biased parameter estimates and standard errors. In the ethanol anesthesia data set used in this thesis, we observe more zero counts than expected, usually termed zero-inflation. Traditional application of the Poisson and negative binomial distributions for model fitting may not be adequate due to the presence of excess zeros. This zero-inflation comes from two sources;...
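
A zero-inflated Poisson can be fitted by direct maximum likelihood in a few lines. A minimal sketch on synthetic data (the mixture probability and rate below are illustrative, not the thesis's estimates):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(3)
# Synthetic ZIP data: structural zero with probability pi, else Poisson(lam)
pi_true, lam_true = 0.3, 4.0
n = 5000
is_zero = rng.random(n) < pi_true
y = np.where(is_zero, 0, rng.poisson(lam_true, n))

def neg_loglik(params):
    pi, lam = params
    # A zero can come from the structural-zero component or from Poisson(lam)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    ll_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

res = minimize(neg_loglik, x0=[0.5, 1.0], method="L-BFGS-B",
               bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
print(res.x)   # close to (0.3, 4.0)
```

The mixture likelihood is exactly what separates the two sources of zeros the abstract alludes to: structural zeros versus sampling zeros from the count process.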

  8. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    Science.gov (United States)

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.

  9. Modeling abundance using N-mixture models: the importance of considering ecological mechanisms.

    Science.gov (United States)

    Joseph, Liana N; Elkin, Ché; Martin, Tara G; Possingham, Hugh P

    2009-04-01

    Predicting abundance across a species' distribution is useful for studies of ecology and biodiversity management. Modeling of survey data in relation to environmental variables can be a powerful method for extrapolating abundances across a species' distribution and, consequently, calculating total abundances and ultimately trends. Research in this area has demonstrated that models of abundance are often unstable and produce spurious estimates, and until recently our ability to remove detection error limited the development of accurate models. The N-mixture model accounts for detection and abundance simultaneously and has been a significant advance in abundance modeling. Case studies that have tested these new models have demonstrated success for some species, but doubt remains over the appropriateness of standard N-mixture models for many species. Here we develop the N-mixture model to accommodate zero-inflated data, a common occurrence in ecology, by employing zero-inflated count models. To our knowledge, this is the first application of this method to modeling count data. We use four variants of the N-mixture model (Poisson, zero-inflated Poisson, negative binomial, and zero-inflated negative binomial) to model abundance, occupancy (zero-inflated models only) and detection probability of six birds in South Australia. We assess models by their statistical fit and the ecological realism of the parameter estimates. Specifically, we assess the statistical fit with AIC and assess the ecological realism by comparing the parameter estimates with expected values derived from literature, ecological theory, and expert opinion. We demonstrate that, despite being frequently ranked the "best model" according to AIC, the negative binomial variants of the N-mixture often produce ecologically unrealistic parameter estimates. The zero-inflated Poisson variant is preferable to the negative binomial variants of the N-mixture, as it models an ecological mechanism rather than a

  10. Model for macroevolutionary dynamics.

    Science.gov (United States)

    Maruvka, Yosef E; Shnerb, Nadav M; Kessler, David A; Ricklefs, Robert E

    2013-07-02

    The highly skewed distribution of species among genera, although challenging to macroevolutionists, provides an opportunity to understand the dynamics of diversification, including species formation, extinction, and morphological evolution. Early models were based on either the work by Yule [Yule GU (1925) Philos Trans R Soc Lond B Biol Sci 213:21-87], which neglects extinction, or a simple birth-death (speciation-extinction) process. Here, we extend the more recent development of a generic, neutral speciation-extinction (of species)-origination (of genera; SEO) model for macroevolutionary dynamics of taxon diversification. Simulations show that deviations from the homogeneity assumptions in the model can be detected in species-per-genus distributions. The SEO model fits observed species-per-genus distributions well for class-to-kingdom-sized taxonomic groups. The model's predictions for the appearance times (the time of the first existing species) of the taxonomic groups also approximately match estimates based on molecular inference and fossil records. Unlike estimates based on analyses of phylogenetic reconstruction, fitted extinction rates for large clades are close to speciation rates, consistent with high rates of species turnover and the relatively slow change in diversity observed in the fossil record. Finally, the SEO model generally supports the consistency of generic boundaries based on morphological differences between species and provides a comparator for rates of lineage splitting and morphological evolution.
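
The skewed species-per-genus distribution that such neutral models produce can be reproduced with a few lines. This is a toy Simon/Yule-type origination-speciation process, not the authors' SEO model (which also includes extinction); the rates are illustrative:

```python
import random

random.seed(4)

def simulate_genera(n_events=20000, p_new_genus=0.05):
    """Neutral toy model: each event is either an origination (new genus with
    one species) or a speciation within the genus of a randomly chosen
    existing species (growth proportional to genus size)."""
    genera = [1]                        # species counts per genus
    species = [0]                       # genus index of each existing species
    for _ in range(n_events):
        if random.random() < p_new_genus:
            genera.append(1)
            species.append(len(genera) - 1)
        else:
            g = random.choice(species)  # preferential attachment by genus size
            genera[g] += 1
            species.append(g)
    return genera

counts = simulate_genera()
singletons = sum(1 for c in counts if c == 1)
print(singletons / len(counts))   # roughly half of genera hold a single species
```

Even this stripped-down process yields the heavy-tailed, singleton-dominated distribution that motivates the macroevolutionary models in the abstract: most genera stay small while a few grow very large.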

  11. Prediction of Pressing Quality for Press-Fit Assembly Based on Press-Fit Curve and Maximum Press-Mounting Force

    Directory of Open Access Journals (Sweden)

    Bo You

    2015-01-01

In order to predict the pressing quality of precision press-fit assembly, press-fit curves and the maximum press-mounting force of press-fit assemblies were investigated by finite element analysis (FEA). The analysis was based on a 3D SolidWorks model using the real dimensions of the microparts and a subsequent FEA model built using ANSYS Workbench. The press-fit process could thus be simulated on the basis of static structural analysis. To verify the FEA results, experiments were carried out using a press-mounting apparatus. The results show that the press-fit curves obtained by FEA agree closely with the curves obtained experimentally. In addition, the maximum press-mounting force calculated by FEA agrees with that obtained by the experimental method, with a maximum deviation of 4.6%, a value that can be tolerated. The comparison shows that the press-fit curve and maximum press-mounting force calculated by FEA can be used for predicting the pressing quality of precision press-fit assembly.
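
The quantities the FEA predicts can also be estimated analytically as a cross-check. A sketch using the classical Lamé thick-cylinder press-fit formulas for same-material parts; the dimensions, interference, and friction coefficient below are hypothetical, not those of the microparts in the study:

```python
import math

def contact_pressure(delta, d, d_outer, d_inner, E, nu):
    """Lame thick-cylinder interface pressure for a same-material press fit.
    delta: diametral interference, d: interface diameter."""
    hub = (d_outer**2 + d**2) / (d_outer**2 - d**2) + nu
    shaft = ((d**2 + d_inner**2) / (d**2 - d_inner**2) - nu) if d_inner > 0 \
            else (1 - nu)   # solid shaft limit
    return delta * E / (d * (hub + shaft))

def max_press_force(p, d, length, mu):
    """Axial force needed to overcome interface friction while pressing."""
    return p * math.pi * d * length * mu

# Hypothetical steel pin (solid) pressed into a steel bushing
p = contact_pressure(delta=20e-6, d=0.02, d_outer=0.04, d_inner=0.0,
                     E=200e9, nu=0.3)
F = max_press_force(p, d=0.02, length=0.03, mu=0.15)
print(p / 1e6, F)   # interface pressure in MPa, press-mounting force in N
```

Such closed-form estimates ignore surface finish, plasticity, and edge effects, which is why the paper relies on FEA and experiments; but they give a quick sanity bound on the maximum press-mounting force.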

  12. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    Science.gov (United States)

The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors.
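
Magnitude-type goodness-of-fit measures of this kind can be computed in a few lines. A generic sketch (not MPESA's implementation) of two common ones, RMSE and Nash-Sutcliffe efficiency, on made-up observed/simulated series:

```python
import numpy as np

def rmse(obs, sim):
    """Root-mean-square error: magnitude of the misfit, in data units."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; <= 0 means the model
    is no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2))

obs = [2.1, 3.4, 4.0, 5.2, 4.8, 3.9]
sim = [2.0, 3.6, 4.1, 5.0, 4.9, 4.2]
print(rmse(obs, sim), nse(obs, sim))
```

Note that both of these are magnitude-only measures; sequence errors (values right but in the wrong order or timing) require separate diagnostics, which is why the tutorial distinguishes the two.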

  13. Fitting monthly Peninsula Malaysian rainfall using Tweedie distribution

    Science.gov (United States)

    Yunus, R. M.; Hasan, M. M.; Zubairi, Y. Z.

    2017-09-01

In this study, the Tweedie distribution was used to fit monthly rainfall data from 24 monitoring stations of Peninsula Malaysia for the period from January 2008 to April 2015. The aim of the study is to determine whether the distributions within the Tweedie family fit the monthly Malaysian rainfall data well. Within the Tweedie family, the gamma distribution is generally used for fitting rainfall totals; however, the Poisson-gamma distribution is more useful for describing two important features of rainfall patterns: occurrence (dry months) and amount (wet months). First, the appropriate distribution of the monthly rainfall was identified within the Tweedie family for each station. Then, the Tweedie Generalised Linear Model (GLM) with no explanatory variable was used to model the monthly rainfall data. Graphical representation was used to assess model appropriateness. The QQ plots of quantile residuals show that the Tweedie models fit the monthly rainfall data better for the majority of the stations on the west coast and in the interior than for those on the east coast of the Peninsula. This significant finding suggests that the best-fitting distribution depends on the geographical location of the monitoring station. In this paper, a simple model is developed for generating synthetic rainfall data for use in various areas, including agriculture and irrigation. We have shown that data simulated using the Tweedie distribution have a frequency histogram fairly similar to that of the actual data. Both the mean number of rainfall events and the mean amount of rain for a month were estimated simultaneously in cases where the Poisson-gamma distribution fits the data reasonably well. Thus, this work complements previous studies that fit the rainfall amount and the occurrence of rainfall events separately, each to a different distribution.
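
The Poisson-gamma member of the Tweedie family can be simulated directly: a Poisson number of rain events per month, each with a gamma-distributed amount, giving exact zeros for dry months plus a continuous positive part. A sketch with illustrative parameters (not fitted to the Malaysian stations):

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_poisson_gamma(n_months, lam, shape, scale):
    """Monthly totals: N ~ Poisson(lam) events, each ~ Gamma(shape, scale).
    Zero events gives an exactly-zero (dry) month."""
    n_events = rng.poisson(lam, n_months)
    return np.array([rng.gamma(shape, scale, k).sum() for k in n_events])

totals = simulate_poisson_gamma(n_months=10000, lam=2.0, shape=3.0, scale=30.0)
dry_fraction = np.mean(totals == 0)
print(dry_fraction)   # about exp(-2), i.e. ~0.135 of months are dry
print(totals.mean())  # about lam * shape * scale = 180
```

This is the property the abstract highlights: a single distribution captures both the occurrence of dry months (the point mass at zero) and the amount in wet months, rather than modelling the two separately.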

  14. Correctness-preserving configuration of business process models

    NARCIS (Netherlands)

    Aalst, van der W.M.P.; Dumas, M.; Gottschalk, F.; Hofstede, ter A.H.M.; La Rosa, M.; Mendling, J.; Fiadeiro, J.; Inverardi, P.

    2008-01-01

    Reference process models capture recurrent business operations in a given domain such as procurement or logistics. These models are intended to be configured to fit the requirements of specific organizations or projects, leading to individualized process models that are subsequently used for domain

  15. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    Energy Technology Data Exchange (ETDEWEB)

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu [Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States)

    2016-02-15

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.

  16. Engine Modelling for Control Applications

    DEFF Research Database (Denmark)

    Hendricks, Elbert

    1997-01-01

    In earlier work published by the author and co-authors, a dynamic engine model called a Mean Value Engine Model (MVEM) was developed. This model is physically based and is intended mainly for control applications. In its newer form, it is easy to fit to many different engines and requires little engine data for this purpose. It is especially well suited to embedded model applications in engine controllers, such as nonlinear observer based air/fuel ratio and advanced idle speed control. After a brief review of this model, it will be compared with other similar models which can be found...
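The core of a mean value engine model is a small set of ODEs such as the intake-manifold filling dynamics, driven by a speed-density port flow. The sketch below integrates that single state with forward Euler; all parameter values and the function name are illustrative assumptions, not data from the cited model.

```python
def simulate_manifold(p0, throttle_flow, n_rpm, dt=1e-3, steps=2000,
                      V_man=5e-3, V_d=2e-3, T=300.0, R=287.0, eta_v=0.8):
    """Forward-Euler integration of isothermal manifold filling dynamics.

    dp/dt = (R*T/V_man) * (m_dot_throttle - m_dot_port), with the port flow
    from the standard 4-stroke speed-density relation. Units: SI, rpm for n.
    """
    p = p0
    for _ in range(steps):
        m_dot_ap = eta_v * V_d * n_rpm * p / (120.0 * R * T)  # port mass flow
        p += dt * (R * T / V_man) * (throttle_flow - m_dot_ap)
    return p
```

At steady state the throttle and port flows balance, so the pressure settles at p = 120*R*T*throttle_flow / (eta_v*V_d*n_rpm) regardless of the initial condition, which illustrates why such models need only a handful of engine parameters.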

  17. Testing a Model of Diabetes Self-Care Management: A Causal Model Analysis with LISREL.

    Science.gov (United States)

    Nowacek, George A.; And Others

    1990-01-01

    A diabetes-management model is presented, which includes an attitudinal element and depicts relationships among causal elements. LISREL-VI was used to analyze data from 115 Type-I and 105 Type-II patients. The data did not closely fit the model. Results support the importance of the personal meaning of diabetes. (TJH)
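Covariance-structure programs of the LISREL family judge fit by a discrepancy between the sample covariance matrix and the covariance implied by the causal model. A minimal sketch of the maximum-likelihood discrepancy function, assuming NumPy (the function name is illustrative):

```python
import numpy as np

def ml_discrepancy(S, Sigma):
    """ML fit function F = ln|Sigma| - ln|S| + tr(S Sigma^-1) - p.

    S is the sample covariance, Sigma the model-implied covariance of p
    observed variables; F = 0 iff Sigma == S, and (n-1)*F gives the usual
    chi-square test statistic for model fit.
    """
    p = S.shape[0]
    _, logdet_S = np.linalg.slogdet(S)
    _, logdet_Sig = np.linalg.slogdet(Sigma)
    return logdet_Sig - logdet_S + np.trace(S @ np.linalg.inv(Sigma)) - p
```

A large value of (n-1)*F relative to the model's degrees of freedom is what "the data did not closely fit the model" expresses formally.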

  18. One-dimensional GIS-based model compared with a two-dimensional model in urban floods simulation.

    Science.gov (United States)

    Lhomme, J; Bouvier, C; Mignot, E; Paquier, A

    2006-01-01

    A GIS-based one-dimensional flood simulation model is presented and applied to the centre of the city of Nîmes (Gard, France), for mapping flow depths or velocities in the street network. The geometry of the one-dimensional elements is derived from the Digital Elevation Model (DEM). The flow is routed from one element to the next using the kinematic wave approximation. At the crossroads, the flows in the downstream branches are computed using a conceptual scheme. This scheme was previously designed to fit Y-shaped pipe junctions, and has been modified here to fit X-shaped crossroads. The results were compared with those of a two-dimensional hydrodynamic model based on the full shallow water equations. The comparison shows that good agreement can be found in the steepest streets of the study zone, but differences may be important in the other streets. Some reasons that can explain the differences between the two models are given, and some research possibilities are proposed.
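Kinematic-wave street routing typically converts discharge to depth with Manning's equation on each one-dimensional element. The sketch below inverts Manning for a wide rectangular street cross-section (hydraulic radius approximated by depth); the function name and numbers are illustrative, not taken from the Nîmes study.

```python
def street_depth(Q, width, slope, n_manning=0.015):
    """Normal depth (m) for discharge Q (m^3/s) in a wide rectangular street.

    Manning: Q = (1/n) * width * h^(5/3) * sqrt(S), with hydraulic radius ~ h,
    inverted for h. slope S is the streetwise bed slope (dimensionless).
    """
    return (Q * n_manning / (width * slope ** 0.5)) ** 0.6
```

Mapping such a depth onto each DEM-derived street element is what produces the flow-depth maps the abstract describes; the crossroads scheme then decides how Q splits between downstream branches.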

  19. Cluster Correlation in Mixed Models

    Science.gov (United States)

    Gardini, A.; Bonometto, S. A.; Murante, G.; Yepes, G.

    2000-10-01

    We evaluate the dependence of the cluster correlation length, rc, on the mean intercluster separation, Dc, for three models with critical matter density, vanishing vacuum energy (Λ=0), and COBE normalization: a tilted cold dark matter (tCDM) model (n=0.8) and two blue mixed models with two light massive neutrinos, yielding Ωh=0.26 and 0.14 (MDM1 and MDM2, respectively). All models approach the observational value of σ8 (and hence the observed cluster abundance) and are consistent with the observed abundance of damped Lyα systems. Mixed models have a motivation in recent results of neutrino physics; they also agree with the observed value of the ratio σ8/σ25, yielding the spectral slope parameter Γ, and nicely fit Las Campanas Redshift Survey (LCRS) reconstructed spectra. We use parallel AP3M simulations, performed in a wide box (of side 360 h-1 Mpc) and with high mass and distance resolution, enabling us to build artificial samples of clusters, whose total number and mass range allow us to cover the same Dc interval inspected through Automatic Plate Measuring Facility (APM) and Abell cluster clustering data. We find that the tCDM model performs substantially better than n=1 critical density CDM models. Our main finding, however, is that mixed models provide a surprisingly good fit to cluster clustering data.
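The cluster correlation length r_c is conventionally extracted by fitting a power law xi(r) = (r/r0)**(-gamma) to the measured two-point correlation function. A minimal log-log least-squares sketch, assuming NumPy (function name illustrative):

```python
import numpy as np

def fit_correlation_length(r, xi):
    """Fit xi(r) = (r/r0)**(-gamma) in log-log space; return (r0, gamma).

    ln(xi) = -gamma*ln(r) + gamma*ln(r0), so a degree-1 polynomial fit
    gives slope = -gamma and intercept = gamma*ln(r0).
    """
    slope, intercept = np.polyfit(np.log(r), np.log(xi), 1)
    gamma = -slope
    r0 = np.exp(intercept / gamma)
    return r0, gamma
```

Repeating such a fit for cluster samples of increasing mean separation Dc yields the r_c(Dc) relation the abstract compares against APM and Abell data.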

  20. Fast Newton active appearance models

    NARCIS (Netherlands)

    Kossaifi, Jean; Tzimiropoulos, Georgios; Pantic, Maja

    2014-01-01

    Active Appearance Models (AAMs) are statistical models of shape and appearance widely used in computer vision to detect landmarks on objects like faces. Fitting an AAM to a new image can be formulated as a non-linear least-squares problem which is typically solved using iterative methods. Owing to...
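The non-linear least-squares formulation mentioned above is classically attacked with Gauss-Newton-type iterations. A generic sketch of that loop, assuming NumPy (the residual and Jacobian in the test are a toy problem, not an AAM):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Minimize ||r(x)||^2 by Gauss-Newton iteration.

    Each step solves the normal equations J^T J dx = -J^T r, i.e. a
    linearization of the residual about the current parameters x.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
    return x
```

Fast AAM variants differ mainly in how they avoid recomputing the Jacobian J and the normal-equation solve at every iteration, which is where the cost of a naive loop like this one concentrates.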