WorldWideScience

Sample records for relative model-data fit

  1. Fitting Hidden Markov Models to Psychological Data

    Directory of Open Access Journals (Sweden)

    Ingmar Visser

    2002-01-01

    Markov models have been used extensively in the psychology of learning. Applications of hidden Markov models, however, are rare. This is partly because comprehensive statistics for model selection and model assessment are lacking in the psychological literature. We present model selection and model assessment statistics that are particularly useful when applying hidden Markov models in psychology. These statistics are presented and evaluated by simulation studies for a toy example. We compare AIC, BIC and related criteria and introduce a prediction error measure for assessing goodness-of-fit. In a simulation study, two methods of fitting equality constraints are compared. In two illustrative examples with experimental data we apply the selection criteria, fit models with constraints and assess goodness-of-fit. First, data from a concept identification task are analyzed. Hidden Markov models provide a flexible approach to analyzing such data when compared with other modeling methods. Second, a novel application of hidden Markov models in implicit learning is presented. Hidden Markov models are used in this context to quantify the knowledge that subjects express in an implicit learning task. This method of analyzing implicit learning data provides a comprehensive approach for addressing important theoretical issues in the field.
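
    A minimal sketch of the AIC/BIC comparison described above, assuming the third-party hmmlearn package and a synthetic placeholder observation sequence (the paper's concept-identification data are not reproduced here):

```python
# Fit hidden Markov models with different numbers of states and compare
# AIC/BIC, in the spirit of the model-selection statistics above.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1))              # placeholder observation sequence
d = X.shape[1]

for k in (1, 2, 3, 4):
    m = hmm.GaussianHMM(n_components=k, covariance_type="diag",
                        n_iter=200, random_state=0).fit(X)
    loglik = m.score(X)                    # log-likelihood of the whole sequence
    # free parameters: start probs, transition matrix, means, diagonal variances
    p = (k - 1) + k * (k - 1) + k * d + k * d
    aic = -2 * loglik + 2 * p
    bic = -2 * loglik + p * np.log(len(X))
    print(f"states={k}  logL={loglik:.1f}  AIC={aic:.1f}  BIC={bic:.1f}")
```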

  2. HDFITS: Porting the FITS data model to HDF5

    Science.gov (United States)

    Price, D. C.; Barsdell, B. R.; Greenhill, L. J.

    2015-09-01

    The FITS (Flexible Image Transport System) data format has been the de facto data format for astronomy-related data products since its inception in the late 1970s. While the FITS file format is widely supported, it lacks many of the features of more modern data serialization formats, such as the Hierarchical Data Format (HDF5). The HDF5 file format offers considerable advantages over FITS, such as improved I/O speed and compression, but has yet to gain widespread adoption within astronomy. One of the major obstacles is that HDF5 is not well supported by data reduction software packages and image viewers. Here, we present a comparison of FITS and HDF5 as a format for storage of astronomy datasets. We show that the underlying data model of FITS can be ported to HDF5 in a straightforward manner, and that by doing so the advantages of the HDF5 file format can be leveraged immediately. In addition, we present a software tool, fits2hdf, for converting between FITS and a new 'HDFITS' format, where data are stored in HDF5 in a FITS-like manner. We show that HDFITS allows faster reading of data (up to 100x faster than FITS in some use cases), and improved compression (higher compression ratios and higher throughput). Finally, we show that by only changing the import lines in Python-based FITS utilities, HDFITS formatted data can be presented transparently as an in-memory FITS equivalent.
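
    A minimal sketch of the FITS-to-HDF5 mapping the abstract describes, assuming astropy and h5py are available; this illustrates the idea rather than the fits2hdf tool itself, and the file names are placeholders:

```python
# Copy each FITS HDU into an HDF5 dataset (or group, for header-only HDUs)
# and carry the header keywords across as attributes.
from astropy.io import fits
import h5py

with fits.open("example.fits") as hdul, h5py.File("example.h5", "w") as h5:
    for i, hdu in enumerate(hdul):
        name = f"{i:02d}_{hdu.name or 'HDU'}"
        if hdu.data is not None:
            node = h5.create_dataset(name, data=hdu.data, compression="gzip")
        else:
            node = h5.create_group(name)           # header-only HDU
        for key, value in hdu.header.items():      # keep FITS keywords
            if key in ("COMMENT", "HISTORY", "") or not isinstance(
                    value, (str, int, float, bool)):
                continue
            node.attrs[key] = value
```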

  3. Contrast Gain Control Model Fits Masking Data

    Science.gov (United States)

    Watson, Andrew B.; Solomon, Joshua A.; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    We studied the fit of a contrast gain control model to data of Foley (JOSA 1994), consisting of thresholds for a Gabor patch masked by gratings of various orientations, or by compounds of two orientations. Our general model includes models of Foley and Teo & Heeger (IEEE 1994). Our specific model used a bank of Gabor filters with octave bandwidths at 8 orientations. Excitatory and inhibitory nonlinearities were power functions with exponents of 2.4 and 2. Inhibitory pooling was broad in orientation, but narrow in spatial frequency and space. Minkowski pooling used an exponent of 4. All of the data for observer KMF were well fit by the model. We have developed a contrast gain control model that fits masking data. Unlike Foley's, our model accepts images as inputs. Unlike Teo & Heeger's, our model did not require multiple channels for different dynamic ranges.
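
    A sketch, under stated assumptions, of the divisive gain-control response form summarized above (excitatory exponent 2.4, inhibitory exponent 2, Minkowski pooling exponent 4); the filter responses, pooling weights and saturation constant are placeholders, not the fitted values from the paper:

```python
import numpy as np

def gain_control_response(excitation, pool, p=2.4, q=2.0, z=0.1, weights=None):
    """Divisive normalization: R_i = E_i**p / (z + sum_j w_j * I_j**q)."""
    pool = np.abs(np.asarray(pool, dtype=float))
    w = np.ones_like(pool) if weights is None else np.asarray(weights, dtype=float)
    inhibition = z + np.sum(w * pool ** q)
    return np.abs(np.asarray(excitation, dtype=float)) ** p / inhibition

def minkowski_pool(responses, beta=4.0):
    """Combine channel responses into a single detection variable."""
    r = np.abs(np.asarray(responses, dtype=float))
    return np.sum(r ** beta) ** (1.0 / beta)

# Example: 8 orientation channels responding to a masked Gabor target
channels = gain_control_response(excitation=np.linspace(0.1, 0.8, 8),
                                 pool=np.linspace(0.2, 0.6, 8))
print(minkowski_pool(channels))
```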

  4. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
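
    A sketch of the comparison described above, assuming SciPy and synthetic placeholder data in place of the UTP measurements: fit two-term Gaussian and two-term sine models and compare RMSE and R².

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2(t, a1, b1, c1, a2, b2, c2):
    return (a1 * np.exp(-((t - b1) / c1) ** 2) +
            a2 * np.exp(-((t - b2) / c2) ** 2))

def sine2(t, a1, b1, c1, a2, b2, c2):
    return a1 * np.sin(b1 * t + c1) + a2 * np.sin(b2 * t + c2)

t = np.linspace(7, 19, 60)                                  # daylight hours
y = 900 * np.exp(-((t - 13) / 3) ** 2) \
    + 30 * np.random.default_rng(1).normal(size=t.size)     # stand-in radiation data

for name, f, p0 in [("two-term Gaussian", gauss2, (800, 13, 3, 100, 10, 2)),
                    ("two-term sine",     sine2,  (800, 0.3, -2, 100, 0.6, 0))]:
    try:
        popt, _ = curve_fit(f, t, y, p0=p0, maxfev=20000)
    except RuntimeError:
        print(name, "did not converge")
        continue
    resid = y - f(t, *popt)
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"{name}: RMSE={rmse:.1f}  R^2={r2:.3f}")
```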

  5. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  6. Curve fitting methods for solar radiation data modeling

    International Nuclear Information System (INIS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-01-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  7. Correcting Model Fit Criteria for Small Sample Latent Growth Models with Incomplete Data

    Science.gov (United States)

    McNeish, Daniel; Harring, Jeffrey R.

    2017-01-01

    To date, small sample problems with latent growth models (LGMs) have not received the amount of attention in the literature as related mixed-effect models (MEMs). Although many models can be interchangeably framed as a LGM or a MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…

  8. ITEM LEVEL DIAGNOSTICS AND MODEL - DATA FIT IN ITEM ...

    African Journals Online (AJOL)

    Global Journal

    Item response theory (IRT) is a framework for modeling and analyzing item response ... data. Though, there is an argument that the evaluation of fit in IRT modeling has been ... National Council on Measurement in Education ... model data fit should be based on three types of ... prediction should be assessed through the.

  9. The bystander effect model of Brenner and Sachs fitted to lung cancer data in 11 cohorts of underground miners, and equivalence of fit of a linear relative risk model with adjustment for attained age and age at exposure

    International Nuclear Information System (INIS)

    Little, M P

    2004-01-01

    Bystander effects following exposure to α-particles have been observed in many experimental systems, and imply that linearly extrapolating low dose risks from high dose data might materially underestimate risk. Brenner and Sachs (2002 Int. J. Radiat. Biol. 78 593-604; 2003 Health Phys. 85 103-8) have recently proposed a model of the bystander effect which they use to explain the inverse dose rate effect observed for lung cancer in underground miners exposed to radon daughters. In this paper we fit the model of the bystander effect proposed by Brenner and Sachs to 11 cohorts of underground miners, taking account of the covariance structure of the data and the period of latency between the development of the first pre-malignant cell and clinically overt cancer. We also fitted a simple linear relative risk model, with adjustment for age at exposure and attained age. The methods that we use for fitting both models are different from those used by Brenner and Sachs, in particular taking account of the covariance structure, which they did not, and omitting certain unjustifiable adjustments to the miner data. The fit of the original model of Brenner and Sachs (with 0 y period of latency) is generally poor, although it is much improved by assuming a 5 or 6 y period of latency from the first appearance of a pre-malignant cell to cancer. The fit of this latter model is equivalent to that of a linear relative risk model with adjustment for age at exposure and attained age. In particular, both models are capable of describing the observed inverse dose rate effect in this data set

  10. Efficient occupancy model-fitting for extensive citizen-science data

    Science.gov (United States)

    Morgan, Byron J. T.; Freeman, Stephen N.; Ridout, Martin S.; Brereton, Tom M.; Fox, Richard; Powney, Gary D.; Roy, David B.

    2017-01-01

    Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium data base, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species’ range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen
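
    A sketch of the classical approach described above, assuming SciPy and simulated data: occupancy probability is modelled as a logistic regression on a site covariate, detection probability as a constant, and both are estimated by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(2)
n_sites, n_visits = 200, 4
x = rng.normal(size=n_sites)                        # environmental covariate
z = rng.random(n_sites) < expit(-0.5 + 1.2 * x)     # true (latent) occupancy
y = (rng.random((n_sites, n_visits)) < 0.4) * z[:, None]   # detection histories

def negloglik(theta):
    b0, b1, logit_p = theta
    psi = expit(b0 + b1 * x)                        # occupancy probability per site
    p = expit(logit_p)                              # detection probability per visit
    det = y.sum(axis=1)
    lik = psi * p ** det * (1 - p) ** (n_visits - det)
    lik = lik + (det == 0) * (1 - psi)              # never-detected sites may be unoccupied
    return -np.sum(np.log(lik))

fit = minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
print(fit.x)    # estimates of (intercept, covariate effect, logit detection probability)
```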

  11. Globfit: Consistently fitting primitives by discovering global relations

    KAUST Repository

    Li, Yangyan; Wu, Xiaokun; Chrysathou, Yiorgos; Sharf, Andrei; Cohen-Or, Daniel; Mitra, Niloy J.

    2011-01-01

    Given a noisy and incomplete point set, we introduce a method that simultaneously recovers a set of locally fitted primitives along with their global mutual relations. We operate under the assumption that the data corresponds to a man-made engineering object consisting of basic primitives, possibly repeated and globally aligned under common relations. We introduce an algorithm to directly couple the local and global aspects of the problem. The local fit of the model is determined by how well the inferred model agrees to the observed data, while the global relations are iteratively learned and enforced through a constrained optimization. Starting with a set of initial RANSAC based locally fitted primitives, relations across the primitives such as orientation, placement, and equality are progressively learned and conformed to. In each stage, a set of feasible relations are extracted among the candidate relations, and then aligned to, while best fitting to the input data. The global coupling corrects the primitives obtained in the local RANSAC stage, and brings them to precise global alignment. We test the robustness of our algorithm on a range of synthesized and scanned data, with varying amounts of noise, outliers, and non-uniform sampling, and validate the results against ground truth, where available. © 2011 ACM.
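
    A minimal sketch of the local stage only (a RANSAC plane fit of the kind used to seed the primitives), assuming NumPy; the global-relation optimization that is GlobFit's contribution is not reproduced here, and the thresholds are illustrative.

```python
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.02, rng=None):
    """Return (point on plane, unit normal, inlier mask) for an (N, 3) cloud."""
    rng = rng or np.random.default_rng(0)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(normal) < 1e-12:          # degenerate sample, skip
            continue
        normal = normal / np.linalg.norm(normal)
        inliers = np.abs((points - p0) @ normal) < tol
        if inliers.sum() > best.sum():
            best = inliers
    q = points[best]                                # least-squares refit on the inliers
    centroid = q.mean(axis=0)
    _, _, vt = np.linalg.svd(q - centroid)
    return centroid, vt[-1], best
```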

  12. Globfit: Consistently fitting primitives by discovering global relations

    KAUST Repository

    Li, Yangyan

    2011-07-01

    Given a noisy and incomplete point set, we introduce a method that simultaneously recovers a set of locally fitted primitives along with their global mutual relations. We operate under the assumption that the data corresponds to a man-made engineering object consisting of basic primitives, possibly repeated and globally aligned under common relations. We introduce an algorithm to directly couple the local and global aspects of the problem. The local fit of the model is determined by how well the inferred model agrees to the observed data, while the global relations are iteratively learned and enforced through a constrained optimization. Starting with a set of initial RANSAC based locally fitted primitives, relations across the primitives such as orientation, placement, and equality are progressively learned and conformed to. In each stage, a set of feasible relations are extracted among the candidate relations, and then aligned to, while best fitting to the input data. The global coupling corrects the primitives obtained in the local RANSAC stage, and brings them to precise global alignment. We test the robustness of our algorithm on a range of synthesized and scanned data, with varying amounts of noise, outliers, and non-uniform sampling, and validate the results against ground truth, where available. © 2011 ACM.

  13. Fitting Equilibrium Search Models to Labour Market Data

    DEFF Research Database (Denmark)

    Bowlus, Audra J.; Kiefer, Nicholas M.; Neumann, George R.

    1996-01-01

    Specification and estimation of a Burdett-Mortensen type equilibrium search model is considered. The estimation is nonstandard. An estimation strategy asymptotically equivalent to maximum likelihood is proposed and applied. The results indicate that specifications with a small number of productivity types fit the data well compared to the homogeneous model.

  14. Item level diagnostics and model - data fit in item response theory ...

    African Journals Online (AJOL)

    Item response theory (IRT) is a framework for modeling and analyzing item response data. Item-level modeling gives IRT advantages over classical test theory. The fit of an item score pattern to an item response theory (IRT) models is a necessary condition that must be assessed for further use of item and models that best fit ...

  15. Rapid world modeling: Fitting range data to geometric primitives

    International Nuclear Information System (INIS)

    Feddema, J.; Little, C.

    1996-01-01

    For the past seven years, Sandia National Laboratories has been active in the development of robotic systems to help remediate DOE's waste sites and decommissioned facilities. Some of these facilities have high levels of radioactivity which prevent manual clean-up. Tele-operated and autonomous robotic systems have been envisioned as the only suitable means of removing the radioactive elements. World modeling is defined as the process of creating a numerical geometric model of a real world environment or workspace. This model is often used in robotics to plan robot motions which perform a task while avoiding obstacles. In many applications where the world model does not exist ahead of time, structured lighting, laser range finders, and even acoustical sensors have been used to create three dimensional maps of the environment. These maps consist of thousands of range points which are difficult to handle and interpret. This paper presents a least squares technique for fitting range data to planar and quadric surfaces, including cylinders and ellipsoids. Once fit to these primitive surfaces, the amount of data associated with a surface is greatly reduced up to three orders of magnitude, thus allowing for more rapid handling and analysis of world data
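
    A sketch of the data-reduction idea above, assuming NumPy: thousands of range points are reduced to a handful of primitive parameters. Here a sphere (one of the quadric surfaces mentioned) is fitted with a linear least-squares formulation; the range points are placeholders.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit; returns (center, radius) for an (N, 3) array."""
    x, y, z = points.T
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones(len(points))])
    b = x ** 2 + y ** 2 + z ** 2
    (cx, cy, cz, d), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(d + cx ** 2 + cy ** 2 + cz ** 2)
    return np.array([cx, cy, cz]), radius

# ~5000 noisy range points on a sphere reduced to 4 parameters
rng = np.random.default_rng(3)
u = rng.normal(size=(5000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 0.3]) + 0.5 * u + 0.005 * rng.normal(size=u.shape)
print(fit_sphere(pts))
```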

  16. Modelling binary data

    CERN Document Server

    Collett, David

    2002-01-01

    INTRODUCTION: Some Examples; The Scope of this Book; Use of Statistical Software. STATISTICAL INFERENCE FOR BINARY DATA: The Binomial Distribution; Inference about the Success Probability; Comparison of Two Proportions; Comparison of Two or More Proportions. MODELS FOR BINARY AND BINOMIAL DATA: Statistical Modelling; Linear Models; Methods of Estimation; Fitting Linear Models to Binomial Data; Models for Binomial Response Data; The Linear Logistic Model; Fitting the Linear Logistic Model to Binomial Data; Goodness of Fit of a Linear Logistic Model; Comparing Linear Logistic Models; Linear Trend in Proportions; Comparing Stimulus-Response Relationships; Non-Convergence and Overfitting; Some other Goodness of Fit Statistics; Strategy for Model Selection; Predicting a Binary Response Probability. BIOASSAY AND SOME OTHER APPLICATIONS: The Tolerance Distribution; Estimating an Effective Dose; Relative Potency; Natural Response; Non-Linear Logistic Regression Models; Applications of the Complementary Log-Log Model. MODEL CHECKING: Definition of Re...

  17. Supersymmetry with prejudice: Fitting the wrong model to LHC data

    Science.gov (United States)

    Allanach, B. C.; Dolan, Matthew J.

    2012-09-01

    We critically examine interpretations of hypothetical supersymmetric LHC signals, fitting to alternative wrong models of supersymmetry breaking. The signals we consider are some of the most constraining on the sparticle spectrum: invariant mass distributions with edges and endpoints from the golden decay chain $\tilde{q} \rightarrow q\chi_2^0(\rightarrow \tilde{l}^\pm l^\mp q) \rightarrow \chi_1^0 l^+ l^- q$. We assume a constrained minimal supersymmetric standard model (CMSSM) point to be the ‘correct’ one, but fit the signals instead with minimal gauge mediated supersymmetry breaking models (mGMSB) with a neutralino quasistable lightest supersymmetric particle, minimal anomaly mediation and large volume string compactification models. Minimal anomaly mediation and large volume scenario can be unambiguously discriminated against the CMSSM for the assumed signal and 1 fb$^{-1}$ of LHC data at $\sqrt{s} = 14$ TeV. However, mGMSB would not be discriminated on the basis of the kinematic endpoints alone. The best-fit point spectra of mGMSB and CMSSM look remarkably similar, making experimental discrimination at the LHC based on the edges or Higgs properties difficult. However, using rate information for the golden chain should provide the additional separation required.

  18. Fitting the Fractional Polynomial Model to Non-Gaussian Longitudinal Data

    Directory of Open Access Journals (Sweden)

    Ji Hoon Ryoo

    2017-08-01

    As in cross-sectional studies, longitudinal studies involve non-Gaussian data such as binomial, Poisson, gamma, and inverse-Gaussian distributions, and multivariate exponential families. A number of statistical tools have thus been developed to deal with non-Gaussian longitudinal data, including analytic techniques to estimate parameters in both fixed and random effects models. However, as yet growth modeling with non-Gaussian data is somewhat limited when considering the transformed expectation of the response via a linear predictor as a functional form of explanatory variables. In this study, we introduce a fractional polynomial model (FPM) that can be applied to model non-linear growth with non-Gaussian longitudinal data and demonstrate its use by fitting two empirical binary and count data models. The results clearly show the efficiency and flexibility of the FPM for such applications.

  19. Measuring fit of sequence data to phylogenetic model: gain of power using marginal tests.

    Science.gov (United States)

    Waddell, Peter J; Ota, Rissa; Penny, David

    2009-10-01

    Testing fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (Unended quest: an intellectual autobiography. Fontana, London, 1976) once argued that evolutionary biology was unscientific as its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (Nature 297:197-200, 1982) to the present. We compare the general log-likelihood ratio statistic (the G or G² statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (P approximately 0.5), but the marginalized tests do. Tests on pairwise frequency (F) matrices strongly (P < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (P < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa; not necessarily those expected from examining other regions of the genome. By marginalizing the 4^t patterns of the i.i.d. model to observed and expected parsimony counts, that is, from constant sites, to singletons, to parsimony informative characters of a minimum possible length, then the likelihood ratio test regains power, and it too rejects the evolutionary model with P < 0.001. Given such behavior over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published trees may really be far larger than the analytical methods (e.g., bootstrap) report.
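
    A short sketch of the G statistic referred to above, assuming SciPy and placeholder counts (not the mammal alignment): observed pattern counts are compared with counts expected under the fitted model, and the statistic is referred to a chi-square distribution.

```python
import numpy as np
from scipy.stats import chi2

def g_test(observed, expected, n_fitted_params=0):
    """General log-likelihood ratio (G) goodness-of-fit statistic and p-value."""
    o = np.asarray(observed, dtype=float)
    e = np.asarray(expected, dtype=float)
    mask = o > 0                                   # 0 * log(0) terms contribute nothing
    g = 2.0 * np.sum(o[mask] * np.log(o[mask] / e[mask]))
    dof = o.size - 1 - n_fitted_params
    return g, chi2.sf(g, dof)

obs = [120, 85, 60, 35, 22, 8]    # e.g. marginalized (parsimony-count) categories
exp = [110, 90, 65, 40, 18, 7]
print(g_test(obs, exp))
```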

  20. Multi-binding site model-based curve-fitting program for the computation of RIA data

    International Nuclear Information System (INIS)

    Malan, P.G.; Ekins, R.P.; Cox, M.G.; Long, E.M.R.

    1977-01-01

    In this paper, a comparison will be made of model-based and empirical curve-fitting procedures. The implementation of a multiple binding-site curve-fitting model which will successfully fit a wide range of assay data, and which can be run on a mini-computer is described. The latter sophisticated model also provides estimates of binding site concentrations and the values of the respective equilibrium constants present: the latter have been used for refining assay conditions using computer optimisation techniques. (orig./AJ)

  1. Using the Flipchem Photochemistry Model When Fitting Incoherent Scatter Radar Data

    Science.gov (United States)

    Reimer, A. S.; Varney, R. H.

    2017-12-01

    The North face Resolute Bay Incoherent Scatter Radar (RISR-N) routinely images the dynamics of the polar ionosphere, providing measurements of the plasma density, electron temperature, ion temperature, and line of sight velocity with seconds to minutes time resolution. RISR-N does not directly measure ionospheric parameters, but backscattered signals, recording them as voltage samples. Using signal processing techniques, radar autocorrelation functions (ACF) are estimated from the voltage samples. A model of the signal ACF is then fitted to the ACF using non-linear least-squares techniques to obtain the best-fit ionospheric parameters. The signal model, and therefore the fitted parameters, depend on the ionospheric ion composition that is used [e.g. Zettergren et al. (2010), Zou et al. (2017)]. The software used to process RISR-N ACF data includes the "flipchem" model, which is an ion photochemistry model developed by Richards [2011] that was adapted from the Field Line Interhemispheric Plasma (FLIP) model. Flipchem requires neutral densities, neutral temperatures, electron density, ion temperature, electron temperature, solar zenith angle, and F10.7 as inputs to compute ion densities, which are input to the signal model. A description of how the flipchem model is used in RISR-N fitting software will be presented. Additionally, a statistical comparison of the fitted electron density, ion temperature, electron temperature, and velocity obtained using a flipchem ionosphere, a pure O+ ionosphere, and a Chapman O+ ionosphere will be presented. The comparison covers nearly two years of RISR-N data (April 2015 - December 2016). Richards, P. G. (2011), Reexamination of ionospheric photochemistry, J. Geophys. Res., 116, A08307, doi:10.1029/2011JA016613. Zettergren, M., Semeter, J., Burnett, B., Oliver, W., Heinselman, C., Blelly, P.-L., and Diaz, M.: Dynamic variability in F-region ionospheric composition at auroral arc boundaries, Ann. Geophys., 28, 651-664, https

  2. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    Science.gov (United States)

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely is largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision
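
    A sketch of the Poisson N-mixture likelihood that the study evaluates, assuming SciPy and simulated counts: the latent site abundance is summed out up to an upper bound K, and λ and detection probability p are estimated by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, binom

rng = np.random.default_rng(4)
lam_true, p_true, n_sites, n_occ, K = 5.0, 0.4, 150, 3, 50
N = rng.poisson(lam_true, n_sites)                     # latent abundances
y = rng.binomial(N[:, None], p_true, (n_sites, n_occ)) # repeated counts per site

def negloglik(theta):
    lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
    Ns = np.arange(K + 1)                              # support of latent abundance
    prior = poisson.pmf(Ns, lam)                       # P(N)
    # P(y_i | N) for each site and candidate N, multiplied over occasions
    lik_yN = np.prod(binom.pmf(y[:, :, None], Ns[None, None, :], p), axis=1)
    site_lik = lik_yN @ prior                          # marginalize over N
    return -np.sum(np.log(site_lik + 1e-300))

fit = minimize(negloglik, x0=[np.log(3.0), 0.0], method="Nelder-Mead")
print(np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1])))   # estimated lambda and p
```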

  3. A Stepwise Fitting Procedure for automated fitting of Ecopath with Ecosim models

    Directory of Open Access Journals (Sweden)

    Erin Scott

    2016-01-01

    The Stepwise Fitting Procedure automates testing of alternative hypotheses used for fitting Ecopath with Ecosim (EwE) models to observation reference data (Mackinson et al. 2009). The calibration of EwE model predictions to observed data is important to evaluate any model that will be used for ecosystem based management. Thus far, the model fitting procedure in EwE has been carried out manually: a repetitive task involving setting >1000 specific individual searches to find the statistically ‘best fit’ model. The novel fitting procedure automates the manual procedure, thereby producing accurate results and letting the modeller concentrate on investigating the ‘best fit’ model for ecological accuracy.

  4. Older driver fitness-to-drive evaluation using naturalistic driving data.

    Science.gov (United States)

    Guo, Feng; Fang, Youjia; Antin, Jonathan F

    2015-09-01

    As our driving population continues to age, it is becoming increasingly important to find a small set of easily administered fitness metrics that can meaningfully and reliably identify at-risk seniors requiring more in-depth evaluation of their driving skills and weaknesses. Sixty driver assessment metrics related to fitness-to-drive were examined for 20 seniors who were followed for a year using the naturalistic driving paradigm. Principal component analysis and negative binomial regression modeling approaches were used to develop parsimonious models relating the most highly predictive of the driver assessment metrics to the safety-related outcomes observed in the naturalistic driving data. This study provides important confirmation using naturalistic driving methods of the relationship between contrast sensitivity and crash-related events. The results of this study provide crucial information on the continuing journey to identify metrics and protocols that could be applied to determine seniors' fitness to drive. Published by Elsevier Ltd.

  5. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

    Science.gov (United States)

    W. Hasan, W. Z.

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system’s modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are the ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.

  6. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Science.gov (United States)

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are the ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.

  7. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Directory of Open Access Journals (Sweden)

    A H Sabry

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are the ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.

  8. Induced subgraph searching for geometric model fitting

    Science.gov (United States)

    Xiao, Fan; Xiao, Guobao; Yan, Yan; Wang, Xing; Wang, Hanzi

    2017-11-01

    In this paper, we propose a novel model fitting method based on graphs to fit and segment multiple-structure data. In the graph constructed on data, each model instance is represented as an induced subgraph. Following the idea of pursuing the maximum consensus, the multiple geometric model fitting problem is formulated as searching for a set of induced subgraphs including the maximum union set of vertices. After the generation and refinement of the induced subgraphs that represent the model hypotheses, the searching process is conducted on the "qualified" subgraphs. Multiple model instances can be simultaneously estimated by solving a converted problem. Then, we introduce the energy evaluation function to determine the number of model instances in data. The proposed method is able to effectively estimate the number and the parameters of model instances in data severely corrupted by outliers and noises. Experimental results on synthetic data and real images validate the favorable performance of the proposed method compared with several state-of-the-art fitting methods.

  9. A scaled Lagrangian method for performing a least squares fit of a model to plant data

    International Nuclear Information System (INIS)

    Crisp, K.E.

    1988-01-01

    Due to measurement errors, even a perfect mathematical model will not be able to match all the corresponding plant measurements simultaneously. A further discrepancy may be introduced if an un-modelled change in conditions occurs within the plant which should have required a corresponding change in model parameters - e.g. a gradual deterioration in the performance of some component(s). Taking both these factors into account, what is required is that the overall discrepancy between the model predictions and the plant data is kept to a minimum. This process is known as 'model fitting'. A method is presented for minimising any function which consists of the sum of squared terms, subject to any constraints. Its most obvious application is in the process of model fitting, where a weighted sum of squares of the differences between model predictions and plant data is the function to be minimised. When implemented within existing Central Electricity Generating Board computer models, it will perform a least squares fit of a model to plant data within a single job submission. (author)

  10. Are Fit Indices Biased in Favor of Bi-Factor Models in Cognitive Ability Research?: A Comparison of Fit in Correlated Factors, Higher-Order, and Bi-Factor Models via Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Grant B. Morgan

    2015-02-01

    Bi-factor confirmatory factor models have been influential in research on cognitive abilities because they often better fit the data than correlated factors and higher-order models. They also instantiate a perspective that differs from that offered by other models. Motivated by previous work that hypothesized an inherent statistical bias of fit indices favoring the bi-factor model, we compared the fit of correlated factors, higher-order, and bi-factor models via Monte Carlo methods. When data were sampled from a true bi-factor structure, each of the approximate fit indices was more likely than not to identify the bi-factor solution as the best fitting. When samples were selected from a true multiple correlated factors structure, approximate fit indices were more likely overall to identify the correlated factors solution as the best fitting. In contrast, when samples were generated from a true higher-order structure, approximate fit indices tended to identify the bi-factor solution as best fitting. There was extensive overlap of fit values across the models regardless of true structure. Although one model may fit a given dataset best relative to the other models, each of the models tended to fit the data well in absolute terms. Given this variability, models must also be judged on substantive and conceptual grounds.

  11. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    Science.gov (United States)

    Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magunsson, Arni; Martell, Steve; Nash, John; Nielson, Anders; Regentz, Jim; Skaug, Hans; Zipkin, Elise

    2013-01-01

    1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.

  12. Permutation tests for goodness-of-fit testing of mathematical models to experimental data.

    Science.gov (United States)

    Fişek, M Hamit; Barlas, Zeynep

    2013-03-01

    This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models to data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition which has been carried out in the "standardized experimental situation" associated with the program to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test which has been the primary statistical tool for assessing goodness of fit in the EST research program, but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test and is proposed as an alternative to the chi square test, as being simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives, and Kendall's tau(b), a rank order correlation coefficient. The fifth procedure is a new rank order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, which we believe may be more useful than other rank order tests for assessing goodness-of-fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study from an experimental paradigm different from the expectation states paradigm - the "network exchange" paradigm, and describe how our procedures may be applied to this data set. Copyright © 2012 Elsevier Inc. All rights reserved.
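
    A sketch, with placeholder counts and model probabilities, of the resampling logic behind the modified goodness-of-fit procedures described above: the observed discrepancy (here the average absolute deviation) is referred to its Monte Carlo distribution under the fitted model.

```python
import numpy as np

def avg_abs_deviation(counts, probs):
    n = counts.sum()
    return np.mean(np.abs(counts / n - probs))

def resampling_gof(observed, probs, n_rep=10000, rng=None):
    rng = rng or np.random.default_rng(0)
    observed = np.asarray(observed)
    probs = np.asarray(probs, dtype=float)
    stat_obs = avg_abs_deviation(observed, probs)
    sims = rng.multinomial(observed.sum(), probs, size=n_rep)   # data sets under the model
    stat_null = np.array([avg_abs_deviation(s, probs) for s in sims])
    return stat_obs, np.mean(stat_null >= stat_obs)             # statistic and Monte Carlo p-value

counts = np.array([42, 31, 17, 10])           # e.g. observed decision patterns
model_p = np.array([0.45, 0.30, 0.15, 0.10])  # model-predicted probabilities
print(resampling_gof(counts, model_p))
```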

  13. Modeling patterns in count data using loglinear and related models

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1995-12-01

    This report explains the use of loglinear and logit models, for analyzing Poisson and binomial counts in the presence of explanatory variables. The explanatory variables may be unordered categorical variables or numerical variables, or both. The report shows how to construct models to fit data, and how to test whether a model is too simple or too complex. The appropriateness of the methods with small data sets is discussed. Several example analyses, using the SAS computer package, illustrate the methods
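
    The report works through such fits in SAS; a rough equivalent in Python, assuming statsmodels and SciPy and using placeholder counts and exposures, looks like this:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

events = np.array([3, 5, 9, 14, 22])                 # observed counts
exposure = np.array([100., 100., 120., 150., 180.])  # e.g. component-hours
year = np.arange(5.0)                                # numerical explanatory variable

X = sm.add_constant(year)                            # loglinear trend in time
fit = sm.GLM(events, X, family=sm.families.Poisson(),
             offset=np.log(exposure)).fit()
print(fit.params)                                    # intercept and trend on the log scale
# A deviance much larger than its degrees of freedom suggests the model
# is too simple; compare against a chi-square reference distribution.
print("deviance:", fit.deviance, " p ~", chi2.sf(fit.deviance, fit.df_resid))
```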

  14. Test and intercomparisons of data fitting with general least squares code GMA versus Bayesian code GLUCS

    International Nuclear Information System (INIS)

    Pronyaev, V.G.

    2003-01-01

    Data fitting with GMA and GLUCS gives consistent results. Differences in the evaluated central values obtained with different formalisms can be related to the general accuracy with which fits can be made in each formalism. They have a stochastic nature and should be accounted for in the final results of the data evaluation as a small SERC uncertainty. Some shift in the central values of data evaluated with GLUCS and GMA relative to the central values evaluated with the R-matrix model code RAC is observed for cases of fitting strongly varying data and is related to the PPP. A procedure of evaluation free from PPP should be elaborated. (author)

  15. Universal Linear Fit Identification: A Method Independent of Data, Outliers and Noise Distribution Model and Free of Missing or Removed Data Imputation.

    Science.gov (United States)

    Adikaram, K K L B; Hussein, M A; Effenberger, M; Becker, T

    2015-01-01

    Data processing requires a robust linear fit identification method. In this paper, we introduce a non-parametric robust linear fit identification method for time series. The method uses an indicator 2/n to identify linear fit, where n is the number of terms in the series. The ratio Rmax of amax - amin and Sn - amin*n and that of Rmin of amax - amin and amax*n - Sn are always equal to 2/n, where amax is the maximum element, amin is the minimum element and Sn is the sum of all elements. If any series expected to follow y = c consists of data that do not agree with y = c form, Rmax > 2/n and Rmin > 2/n imply that the maximum and minimum elements, respectively, do not agree with linear fit. We define threshold values for outliers and noise detection as 2/n * (1 + k1) and 2/n * (1 + k2), respectively, where k1 > k2 and 0 ≤ k1 ≤ n/2 - 1. Given this relation and transformation technique, which transforms data into the form y = c, we show that removing all data that do not agree with linear fit is possible. Furthermore, the method is independent of the number of data points, missing data, removed data points and nature of distribution (Gaussian or non-Gaussian) of outliers, noise and clean data. These are major advantages over the existing linear fit methods. Since having a perfect linear relation between two variables in the real world is impossible, we used artificial data sets with extreme conditions to verify the method. The method detects the correct linear fit when the percentage of data agreeing with linear fit is less than 50%, and the deviation of data that do not agree with linear fit is very small, of the order of ±10⁻⁴%. The method results in incorrect detections only when numerical accuracy is insufficient in the calculation process.
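
    A direct transcription of the indicator described above (NumPy assumed; the k1, k2 values and the test series are illustrative choices):

```python
import numpy as np

def extreme_ratios(series):
    """Rmax, Rmin and the reference value 2/n for a series expected to follow y = c."""
    a = np.asarray(series, dtype=float)
    n, s = a.size, a.sum()
    a_max, a_min = a.max(), a.min()
    denom_max, denom_min = s - a_min * n, a_max * n - s
    if denom_max == 0 or denom_min == 0:            # constant series: perfect fit y = c
        return 2.0 / n, 2.0 / n, 2.0 / n
    return (a_max - a_min) / denom_max, (a_max - a_min) / denom_min, 2.0 / n

def flag_extremes(series, k1=1.0, k2=0.2):
    r_max, r_min, base = extreme_ratios(series)
    return {"max_is_outlier": r_max > base * (1 + k1),
            "min_is_outlier": r_min > base * (1 + k1),
            "max_is_noise": base * (1 + k2) < r_max <= base * (1 + k1),
            "min_is_noise": base * (1 + k2) < r_min <= base * (1 + k1)}

print(flag_extremes([5.0, 5.1, 4.9, 5.0, 9.0]))     # the 9.0 is flagged as an outlier
```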

  16. Universal Linear Fit Identification: A Method Independent of Data, Outliers and Noise Distribution Model and Free of Missing or Removed Data Imputation.

    Directory of Open Access Journals (Sweden)

    K K L B Adikaram

    Data processing requires a robust linear fit identification method. In this paper, we introduce a non-parametric robust linear fit identification method for time series. The method uses an indicator 2/n to identify linear fit, where n is the number of terms in the series. The ratio Rmax of amax - amin and Sn - amin*n and that of Rmin of amax - amin and amax*n - Sn are always equal to 2/n, where amax is the maximum element, amin is the minimum element and Sn is the sum of all elements. If any series expected to follow y = c consists of data that do not agree with y = c form, Rmax > 2/n and Rmin > 2/n imply that the maximum and minimum elements, respectively, do not agree with linear fit. We define threshold values for outliers and noise detection as 2/n * (1 + k1) and 2/n * (1 + k2), respectively, where k1 > k2 and 0 ≤ k1 ≤ n/2 - 1. Given this relation and transformation technique, which transforms data into the form y = c, we show that removing all data that do not agree with linear fit is possible. Furthermore, the method is independent of the number of data points, missing data, removed data points and nature of distribution (Gaussian or non-Gaussian) of outliers, noise and clean data. These are major advantages over the existing linear fit methods. Since having a perfect linear relation between two variables in the real world is impossible, we used artificial data sets with extreme conditions to verify the method. The method detects the correct linear fit when the percentage of data agreeing with linear fit is less than 50%, and the deviation of data that do not agree with linear fit is very small, of the order of ±10⁻⁴%. The method results in incorrect detections only when numerical accuracy is insufficient in the calculation process.

  17. Covariances for neutron cross sections calculated using a regional model based on local-model fits to experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D.L.; Guenther, P.T.

    1983-11-01

    We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references.
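
    A sketch of the propagation step described above, assuming NumPy; the "model" and its parameter covariance matrix are placeholders, not the optical-statistical fit of the paper.

```python
import numpy as np

def predict(params, energies):
    """Placeholder cross-section model sigma(E) = a * exp(-b * E) + c."""
    a, b, c = params
    return a * np.exp(-b * energies) + c

def prediction_covariance(params, cov_params, energies, eps=1e-6):
    """Propagate parameter covariance to the prediction: V = J C J^T (first order)."""
    base = predict(params, energies)
    jac = np.empty((energies.size, len(params)))
    for j in range(len(params)):                      # numerical Jacobian, column by column
        dp = np.array(params, dtype=float)
        dp[j] += eps
        jac[:, j] = (predict(dp, energies) - base) / eps
    return jac @ cov_params @ jac.T

params = np.array([3.0, 0.5, 0.2])
cov = np.array([[0.04, 0.01, 0.00],                   # correlated parameter uncertainties
                [0.01, 0.01, 0.00],
                [0.00, 0.00, 0.0025]])
E = np.linspace(0.5, 5.0, 10)
V = prediction_covariance(params, cov, E)
print(np.sqrt(np.diag(V)))                            # 1-sigma prediction uncertainties
```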

  18. Covariances for neutron cross sections calculated using a regional model based on local-model fits to experimental data

    International Nuclear Information System (INIS)

    Smith, D.L.; Guenther, P.T.

    1983-11-01

    We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references

  19. Fitting oscillating string gas cosmology to supernova data

    International Nuclear Information System (INIS)

    Ferrer, Francesc; Multamaeki, Tuomas; Raesaenen, Syksy

    2009-01-01

    In string gas cosmology, extra dimensions are stabilised by a gas of strings. In the matter-dominated era, competition between matter pushing the extra dimensions to expand and the string gas pulling them back can lead to oscillations of the extra dimensions and acceleration in the visible dimensions. We fit this model to supernova data, taking into account the Big Bang Nucleosynthesis constraint on the energy density of the string gas. The fit to the Union set of supernova data is acceptable, but the fit to the ESSENCE data is poor.

  20. Assessing fit in Bayesian models for spatial processes

    KAUST Repository

    Jun, M.; Katzfuss, M.; Hu, J.; Johnson, V. E.

    2014-01-01

    © 2014 John Wiley & Sons, Ltd. Gaussian random fields are frequently used to model spatial and spatial-temporal data, particularly in geostatistical settings. As much of the attention of the statistics community has been focused on defining and estimating the mean and covariance functions of these processes, little effort has been devoted to developing goodness-of-fit tests to allow users to assess the models' adequacy. We describe a general goodness-of-fit test and related graphical diagnostics for assessing the fit of Bayesian Gaussian process models using pivotal discrepancy measures. Our method is applicable for both regularly and irregularly spaced observation locations on planar and spherical domains. The essential idea behind our method is to evaluate pivotal quantities defined for a realization of a Gaussian random field at parameter values drawn from the posterior distribution. Because the nominal distribution of the resulting pivotal discrepancy measures is known, it is possible to quantitatively assess model fit directly from the output of Markov chain Monte Carlo algorithms used to sample from the posterior distribution on the parameter space. We illustrate our method in a simulation study and in two applications.

  1. Assessing fit in Bayesian models for spatial processes

    KAUST Repository

    Jun, M.

    2014-09-16

    © 2014 John Wiley & Sons, Ltd. Gaussian random fields are frequently used to model spatial and spatial-temporal data, particularly in geostatistical settings. As much of the attention of the statistics community has been focused on defining and estimating the mean and covariance functions of these processes, little effort has been devoted to developing goodness-of-fit tests to allow users to assess the models' adequacy. We describe a general goodness-of-fit test and related graphical diagnostics for assessing the fit of Bayesian Gaussian process models using pivotal discrepancy measures. Our method is applicable for both regularly and irregularly spaced observation locations on planar and spherical domains. The essential idea behind our method is to evaluate pivotal quantities defined for a realization of a Gaussian random field at parameter values drawn from the posterior distribution. Because the nominal distribution of the resulting pivotal discrepancy measures is known, it is possible to quantitatively assess model fit directly from the output of Markov chain Monte Carlo algorithms used to sample from the posterior distribution on the parameter space. We illustrate our method in a simulation study and in two applications.

  2. A Data-Driven Method for Selecting Optimal Models Based on Graphical Visualisation of Differences in Sequentially Fitted ROC Model Parameters

    Directory of Open Access Journals (Sweden)

    K S Mwitondi

    2013-05-01

    Differences in modelling techniques and model performance assessments typically impinge on the quality of knowledge extraction from data. We propose an algorithm for determining optimal patterns in data by separately training and testing three decision tree models in the Pima Indians Diabetes and the Bupa Liver Disorders datasets. Model performance is assessed using ROC curves and the Youden Index. Moving differences between sequential fitted parameters are then extracted, and their respective probability density estimations are used to track their variability using an iterative graphical data visualisation technique developed for this purpose. Our results show that the proposed strategy separates the groups more robustly than the plain ROC/Youden approach, eliminates obscurity, and minimizes over-fitting. Further, the algorithm can easily be understood by non-specialists and demonstrates multi-disciplinary compliance.

  3. Universal Linear Fit Identification: A Method Independent of Data, Outliers and Noise Distribution Model and Free of Missing or Removed Data Imputation

    Science.gov (United States)

    Adikaram, K. K. L. B.; Becker, T.

    2015-01-01

    Data processing requires a robust linear fit identification method. In this paper, we introduce a non-parametric robust linear fit identification method for time series. The method uses an indicator 2/n to identify linear fit, where n is the number of terms in the series. The ratio Rmax of amax - amin and Sn - amin*n and that of Rmin of amax - amin and amax*n - Sn are always equal to 2/n, where amax is the maximum element, amin is the minimum element and Sn is the sum of all elements. If any series expected to follow y = c consists of data that do not agree with y = c form, Rmax > 2/n and Rmin > 2/n imply that the maximum and minimum elements, respectively, do not agree with linear fit. We define threshold values for outliers and noise detection as 2/n * (1 + k1) and 2/n * (1 + k2), respectively, where k1 > k2 and 0 ≤ k1 ≤ n/2 - 1. Given this relation and transformation technique, which transforms data into the form y = c, we show that removing all data that do not agree with linear fit is possible. Furthermore, the method is independent of the number of data points, missing data, removed data points and nature of distribution (Gaussian or non-Gaussian) of outliers, noise and clean data. These are major advantages over the existing linear fit methods. Since having a perfect linear relation between two variables in the real world is impossible, we used artificial data sets with extreme conditions to verify the method. The method detects the correct linear fit when the percentage of data agreeing with linear fit is less than 50%, and the deviation of data that do not agree with linear fit is very small, of the order of ±10⁻⁴%. The method results in incorrect detections only when numerical accuracy is insufficient in the calculation process.

  4. Inverse problem theory methods for data fitting and model parameter estimation

    CERN Document Server

    Tarantola, A

    2002-01-01

    Inverse Problem Theory is written for physicists, geophysicists and all scientists facing the problem of quantitative interpretation of experimental data. Although it contains a lot of mathematics, it is not intended as a mathematical book, but rather tries to explain how a method of acquisition of information can be applied to the actual world.The book provides a comprehensive, up-to-date description of the methods to be used for fitting experimental data, or to estimate model parameters, and to unify these methods into the Inverse Problem Theory. The first part of the book deals wi

  5. CRAPONE, Optical Model Potential Fit of Neutron Scattering Data

    International Nuclear Information System (INIS)

    Fabbri, F.; Fratamico, G.; Reffo, G.

    2004-01-01

    1 - Description of problem or function: Automatic search for local and non-local optical potential parameters for neutrons. Total, elastic, differential elastic cross sections, l=0 and l=1 strength functions and scattering length can be considered. 2 - Method of solution: A fitting procedure is applied to different sets of experimental data depending on the local or non-local approximation chosen. In the non-local approximation the fitting procedure can be simultaneously performed over the whole energy range. The best fit is obtained when a set of parameters is found where χ² is at its minimum. The solution of the system equations is obtained by diagonalization of the matrix according to the Jacobi method.

  6. Fitting PAC spectra with stochastic models: PolyPacFit

    Energy Technology Data Exchange (ETDEWEB)

    Zacate, M. O., E-mail: zacatem1@nku.edu [Northern Kentucky University, Department of Physics and Geology (United States); Evenson, W. E. [Utah Valley University, College of Science and Health (United States); Newhouse, R.; Collins, G. S. [Washington State University, Department of Physics and Astronomy (United States)

    2010-04-15

    PolyPacFit is an advanced fitting program for time-differential perturbed angular correlation (PAC) spectroscopy. It incorporates stochastic models and provides robust options for customization of fits. Notable features of the program include platform independence and support for (1) fits to stochastic models of hyperfine interactions, (2) user-defined constraints among model parameters, (3) fits to multiple spectra simultaneously, and (4) any spin nuclear probe.

  7. Does model fit decrease the uncertainty of the data in comparison with a general non-model least squares fit?

    International Nuclear Information System (INIS)

    Pronyaev, V.G.

    2003-01-01

    The information entropy is taken as a measure of knowledge about the object and the reduced univariate variance as a common measure of uncertainty. Covariances in model versus non-model least-squares fits are discussed.

  8. Improvements in Spectrum's fit to program data tool.

    Science.gov (United States)

    Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John

    2017-04-01

    The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived by either fitting to seroprevalence surveillance and survey data or generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections or people living with HIV and AIDS related deaths. This article describes development and application of the fit to program data (FPD) tool in Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV or AIDS-related deaths. Inputs can be adjusted for proportions undiagnosed or misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best fitting curve. Asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.
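
    A minimal sketch of the core fitting step, assuming synthetic yearly case counts and a single logistic incidence/reporting trend fitted by Poisson maximum likelihood; the real FPD tool additionally handles double-logistic curves, undiagnosed fractions and misclassified deaths, none of which are modelled here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Synthetic yearly counts of newly reported HIV cases (illustrative, not real surveillance data)
years = np.arange(2000, 2016)
t = (years - years[0]).astype(float)
rng = np.random.default_rng(1)
cases = rng.poisson(5000.0 / (1.0 + np.exp(-0.4 * (t - 7.0))))

def neg_loglik(params):
    """Poisson negative log-likelihood for a simple logistic trend."""
    log_a, r, t0 = params
    lam = np.exp(log_a) / (1.0 + np.exp(-r * (t - t0))) + 1e-9
    return np.sum(lam - cases * np.log(lam) + gammaln(cases + 1.0))

fit = minimize(neg_loglik, x0=(np.log(cases.max() + 1.0), 0.3, t.mean()),
               method="Nelder-Mead")
print("asymptote, rate, midpoint:", np.exp(fit.x[0]), fit.x[1], fit.x[2])
```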

  9. Fast fitting of non-Gaussian state-space models to animal movement data via Template Model Builder

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Whoriskey, Kim; Yurkowski, David

    2015-01-01

    recommend using the Laplace approximation combined with automatic differentiation (as implemented in the novel R package Template Model Builder; TMB) for the fast fitting of continuous-time multivariate non-Gaussian SSMs. Through Argos satellite tracking data, we demonstrate that the use of continuous...... are able to estimate additional parameters compared to previous methods, all without requiring a substantial increase in computational time. The model implementation is made available through the R package argosTrack....

  10. Local fit evaluation of structural equation models using graphical criteria.

    Science.gov (United States)

    Thoemmes, Felix; Rosseel, Yves; Textor, Johannes

    2018-03-01

    Evaluation of model fit is critically important for every structural equation model (SEM), and sophisticated methods have been developed for this task. Among them are the χ² goodness-of-fit test, decomposition of the χ², derived measures like the popular root mean square error of approximation (RMSEA) or comparative fit index (CFI), or inspection of residuals or modification indices. Many of these methods provide a global approach to model fit evaluation: A single index is computed that quantifies the fit of the entire SEM to the data. In contrast, graphical criteria like d-separation or trek-separation allow derivation of implications that can be used for local fit evaluation, an approach that is hardly ever applied. We provide an overview of local fit evaluation from the viewpoint of SEM practitioners. In the presence of model misfit, local fit evaluation can potentially help in pinpointing where the problem with the model lies. For models that do fit the data, local tests can identify the parts of the model that are corroborated by the data. Local tests can also be conducted before a model is fitted at all, and they can be used even for models that are globally underidentified. We discuss appropriate statistical local tests, and provide applied examples. We also present novel software in R that automates this type of local fit evaluation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
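
    One concrete local test of the kind discussed: a d-separation implication such as X independent of Y given M (for a chain X → M → Y) can be checked as a vanishing partial correlation. The data, the implied independence and the Fisher z-test below are illustrative assumptions, not the authors' R software.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
m = 0.8 * x + rng.normal(size=n)          # X -> M
y = 0.5 * m + rng.normal(size=n)          # M -> Y, so the model implies X _||_ Y | M

def partial_corr_test(a, b, given):
    """Fisher z-test of one vanishing partial correlation implied by d-separation."""
    Z = np.column_stack([np.ones_like(given), given])
    ra = a - Z @ np.linalg.lstsq(Z, a, rcond=None)[0]    # residualise on the conditioning set
    rb = b - Z @ np.linalg.lstsq(Z, b, rcond=None)[0]
    r = np.corrcoef(ra, rb)[0, 1]
    z = np.arctanh(r) * np.sqrt(len(a) - 1 - 3)          # one conditioning variable
    return r, 2.0 * stats.norm.sf(abs(z))

print(partial_corr_test(x, y, m))          # large p-value: this local implication is not rejected
```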

  11. Fit to Electroweak Precision Data

    International Nuclear Information System (INIS)

    Erler, Jens

    2006-01-01

    A brief review of electroweak precision data from LEP, SLC, the Tevatron, and low energies is presented. The global fit to all data including the most recent results on the masses of the top quark and the W boson reinforces the preference for a relatively light Higgs boson. I will also give an outlook on future developments at the Tevatron Run II, CEBAF, the LHC, and the ILC

  12. Pre-processing by data augmentation for improved ellipse fitting.

    Science.gov (United States)

    Kumar, Pankaj; Belchamber, Erika R; Miklavcic, Stanley J

    2018-01-01

    Ellipse fitting is a highly researched and mature topic. Surprisingly, however, no existing method has thus far considered the data point eccentricity in its ellipse fitting procedure. Here, we introduce the concept of eccentricity of a data point, in analogy with the idea of ellipse eccentricity. We then show empirically that, irrespective of ellipse fitting method used, the root mean square error (RMSE) of a fit increases with the eccentricity of the data point set. The main contribution of the paper is based on the hypothesis that if the data point set were pre-processed to strategically add additional data points in regions of high eccentricity, then the quality of a fit could be improved. Conditional validity of this hypothesis is demonstrated mathematically using a model scenario. Based on this confirmation we propose an algorithm that pre-processes the data so that data points with high eccentricity are replicated. The improvement of ellipse fitting is then demonstrated empirically in real-world application of 3D reconstruction of a plant root system for phenotypic analysis. The degree of improvement for different underlying ellipse fitting methods as a function of data noise level is also analysed. We show that almost every method tested, irrespective of whether it minimizes algebraic error or geometric error, shows improvement in the fit following data augmentation using the proposed pre-processing algorithm.

  13. Fitting diameter distribution models to data from forest inventories with concentric plot design

    Energy Technology Data Exchange (ETDEWEB)

    Nanos, N.; Sjöstedt de Luna, S.

    2017-11-01

    Aim: Several national forest inventories use a complex plot design based on multiple concentric subplots where smaller diameter trees are inventoried when lying in the smaller-radius subplots and ignored otherwise. Data from these plots are truncated with threshold (truncation) diameters varying according to the distance from the plot centre. In this paper we designed a maximum likelihood method to fit the Weibull diameter distribution to data from concentric plots. Material and methods: Our method (M1) was based on multiple truncated probability density functions to build the likelihood. In addition, we used an alternative method (M2) presented recently. We used methods M1 and M2 as well as two other reference methods to estimate the Weibull parameters in 40000 simulated plots. The spatial tree pattern of the simulated plots was generated using four models of spatial point patterns. Two error indices were used to assess the relative performance of M1 and M2 in estimating relevant stand-level variables. In addition, we estimated the Quadratic Mean plot Diameter (QMD) using Expansion Factors (EFs). Main results: Methods M1 and M2 produced comparable estimation errors in random and cluster tree spatial patterns. Method M2 produced biased parameter estimates in plots with inhomogeneous Poisson patterns. Estimation of QMD using EFs produced biased results in plots within inhomogeneous intensity Poisson patterns. Research highlights:We designed a new method to fit the Weibull distribution to forest inventory data from concentric plots that achieves high accuracy and precision in parameter estimates regardless of the within-plot spatial tree pattern.
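
    A hedged sketch of the multiple-truncation likelihood idea (method M1): each recorded tree contributes a left-truncated Weibull density with its own threshold diameter. The thresholds, sample sizes and starting values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(2)

# Synthetic stand: Weibull(shape=2.2, scale=25) diameters in cm
d_all = weibull_min.rvs(2.2, scale=25.0, size=2000, random_state=rng)
# Truncation diameter depends on which concentric subplot the tree falls in
thresholds = rng.choice([0.0, 10.0, 20.0], size=d_all.size, p=[0.2, 0.4, 0.4])
keep = d_all >= thresholds                  # smaller trees outside the small subplot are not recorded
d, c = d_all[keep], thresholds[keep]

def neg_loglik(params):
    """Sum of left-truncated Weibull log-densities, one truncation point per recorded tree."""
    shape, scale = np.exp(params)           # log-parameterised to keep both positive
    return -np.sum(weibull_min.logpdf(d, shape, scale=scale)
                   - weibull_min.logsf(c, shape, scale=scale))

fit = minimize(neg_loglik, x0=np.log([2.0, 20.0]), method="Nelder-Mead")
print("estimated shape and scale:", np.exp(fit.x))
```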

  14. Fitting outbreak models to data from many small norovirus outbreaks

    Directory of Open Access Journals (Sweden)

    Eamon B. O’Dea

    2014-03-01

    Full Text Available Infectious disease often occurs in small, independent outbreaks in populations with varying characteristics. Each outbreak by itself may provide too little information for accurate estimation of epidemic model parameters. Here we show that using standard stochastic epidemic models for each outbreak and allowing parameters to vary between outbreaks according to a linear predictor leads to a generalized linear model that accurately estimates parameters from many small and diverse outbreaks. By estimating initial growth rates in addition to transmission rates, we are able to characterize variation in numbers of initially susceptible individuals or contact patterns between outbreaks. With simulation, we find that the estimates are fairly robust to the data being collected at discrete intervals and imputation of about half of all infectious periods. We apply the method by fitting data from 75 norovirus outbreaks in health-care settings. Our baseline regression estimates are 0.0037 transmissions per infective-susceptible day, an initial growth rate of 0.27 transmissions per infective day, and a symptomatic period of 3.35 days. Outbreaks in long-term-care facilities had significantly higher transmission and initial growth rates than outbreaks in hospitals.

  15. Analytical fitting model for rough-surface BRDF.

    Science.gov (United States)

    Renhorn, Ingmar G E; Boreman, Glenn D

    2008-08-18

    A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.

  16. A global fitting code for multichordal neutral beam spectroscopic data

    International Nuclear Information System (INIS)

    Seraydarian, R.P.; Burrell, K.H.; Groebner, R.J.

    1992-05-01

    Knowledge of the heat deposition profile is crucial to all transport analysis of beam heated discharges. The heat deposition profile can be inferred from the fast ion birth profile which, in turn, is directly related to the loss of neutral atoms from the beam. This loss can be measured spectroscopically by the decrease in amplitude of spectral emissions from the beam as it penetrates the plasma. The spectra are complicated by the motional Stark effect which produces a manifold of nine bright peaks for each of the three beam energy components. A code has been written to analyze this kind of data. In the first phase of this work, spectra from tokamak shots are fit with a Stark splitting and Doppler shift model that ties together the geometry of several spatial positions when they are fit simultaneously. In the second phase, a relative position-to-position intensity calibration will be applied to these results to obtain the spectral amplitudes from which beam atom loss can be estimated. This paper reports on the computer code for the first phase. Sample fits to real tokamak spectral data are shown.

  17. Goodness of Fit of Skills Assessment Approaches: Insights from Patterns of Real vs. Synthetic Data Sets

    Science.gov (United States)

    Beheshti, Behzad; Desmarais, Michel C.

    2015-01-01

    This study investigates the issue of the goodness of fit of different skills assessment models using both synthetic and real data. Synthetic data is generated from the different skills assessment models. The results show wide differences of performances between the skills assessment models over synthetic data sets. The set of relative performances…

  18. Goodness-of-fit tests and model diagnostics for negative binomial regression of RNA sequencing data.

    Science.gov (United States)

    Mi, Gu; Di, Yanming; Schafer, Daniel W

    2015-01-01

    This work is about assessing model adequacy for negative binomial (NB) regression, particularly (1) assessing the adequacy of the NB assumption, and (2) assessing the appropriateness of models for NB dispersion parameters. Tools for the first are appropriate for NB regression generally; those for the second are primarily intended for RNA sequencing (RNA-Seq) data analysis. The typically small number of biological samples and large number of genes in RNA-Seq analysis motivate us to address the trade-offs between robustness and statistical power using NB regression models. One widely-used power-saving strategy, for example, is to assume some commonalities of NB dispersion parameters across genes via simple models relating them to mean expression rates, and many such models have been proposed. As RNA-Seq analysis is becoming ever more popular, it is appropriate to make more thorough investigations into power and robustness of the resulting methods, and into practical tools for model assessment. In this article, we propose simulation-based statistical tests and diagnostic graphics to address model adequacy. We provide simulated and real data examples to illustrate that our proposed methods are effective for detecting the misspecification of the NB mean-variance relationship as well as judging the adequacy of fit of several NB dispersion models.

  19. Modeling Evolution on Nearly Neutral Network Fitness Landscapes

    Science.gov (United States)

    Yakushkina, Tatiana; Saakian, David B.

    2017-08-01

    To describe virus evolution, it is necessary to define a fitness landscape. In this article, we consider the microscopic models with the advanced version of neutral network fitness landscapes. In this problem setting, we suppose a fitness difference between one-point mutation neighbors to be small. We construct a modification of the Wright-Fisher model, which is related to ordinary infinite population models with nearly neutral network fitness landscape at the large population limit. From the microscopic models in the realistic sequence space, we derive two versions of nearly neutral network models: with sinks and without sinks. We claim that the suggested model describes the evolutionary dynamics of RNA viruses better than the traditional Wright-Fisher model with few sequences.

  20. VizieR Online Data Catalog: GRB prompt emission fitted with the DREAM model (Ahlgren+, 2015)

    Science.gov (United States)

    Ahlgren, B.; Larsson, J.; Nymark, T.; Ryde, F.; Pe'Er, A.

    2018-01-01

    We illustrate the application of the DREAM model by fitting it to two different, bright Fermi GRBs; GRB 090618 and GRB 100724B. While GRB 090618 is well fitted by a Band function, GRB 100724B was the first example of a burst with a significant additional BB component (Guiriec et al. 2011ApJ...727L..33G). GRB 090618 is analysed using Gamma-ray Burst Monitor (GBM) data (Meegan et al. 2009ApJ...702..791M) from the NaI and BGO detectors. For GRB 100724B, we used GBM data from the NaI and BGO detectors as well as Large Area Telescope Low Energy (LAT-LLE) data. For both bursts we selected NaI detectors seeing the GRB at an off-axis angle lower than 60° and the BGO detector as being the best aligned of the two BGO detectors. The spectra were fitted in the energy ranges 8-1000 keV (NaI), 200-40000 keV (BGO) and 30-1000 MeV (LAT-LLE). (2 data files).

  1. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    DEFF Research Database (Denmark)

    Bolker, B.M.; Gardner, B.; Maunder, M.

    2013-01-01

    Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield...

  2. Combined fit of spectrum and composition data as measured by the Pierre Auger Observatory

    Energy Technology Data Exchange (ETDEWEB)

    Aab, A. [Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud Universiteit, Nijmegen (Netherlands); Abreu, P.; Andringa, S. [Laboratório de Instrumentação e Física Experimental de Partículas—LIP and Instituto Superior Técnico—IST, Universidade de Lisboa—UL (Portugal); Aglietta, M. [Osservatorio Astrofisico di Torino (INAF), Torino (Italy); Samarai, I. Al [Laboratoire de Physique Nucléaire et de Hautes Energies (LPNHE), Universités Paris 6 et Paris 7, CNRS-IN2P3 (France); Albuquerque, I.F.M. [Universidade de São Paulo, Inst. de Física, São Paulo (Brazil); Allekotte, I. [Centro Atómico Bariloche and Instituto Balseiro (CNEA-UNCuyo-CONICET) (Argentina); Almela, A.; Andrada, B. [Instituto de Tecnologías en Detección y Astropartículas (CNEA, CONICET, UNSAM), Centro Atómico Constituyentes, Comisión Nacional de Energía Atómica (Argentina); Castillo, J. Alvarez [Universidad Nacional Autónoma de México, México (Mexico); Alvarez-Muñiz, J. [Universidad de Santiago de Compostela (Spain); Anastasi, G.A. [Gran Sasso Science Institute (INFN), L' Aquila (Italy); Anchordoqui, L., E-mail: auger_spokespersons@fnal.gov [Department of Physics and Astronomy, Lehman College, City University of New York (United States); and others

    2017-04-01

    We present a combined fit of a simple astrophysical model of UHECR sources to both the energy spectrum and mass composition data measured by the Pierre Auger Observatory. The fit has been performed for energies above 5 ⋅ 10^18 eV, i.e. the region of the all-particle spectrum above the so-called 'ankle' feature. The astrophysical model we adopted consists of identical sources uniformly distributed in a comoving volume, where nuclei are accelerated through a rigidity-dependent mechanism. The fit results suggest sources characterized by relatively low maximum injection energies, hard spectra and heavy chemical composition. We also show that uncertainties about physical quantities relevant to UHECR propagation and shower development have a non-negligible impact on the fit results.

  3. Parametric fitting of data obtained from detectors with finite resolution and limited acceptance

    International Nuclear Information System (INIS)

    Gagunashvili, N.D.

    2011-01-01

    A goodness-of-fit test for fitting of a parametric model to data obtained from a detector with finite resolution and limited acceptance is proposed. The parameters of the model are found by minimization of a statistic that is used for comparing experimental data and simulated reconstructed data. Numerical examples are presented to illustrate and validate the fitting procedure.
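
    The same forward-folding idea can be sketched in a few lines: fold the parametric model through a response matrix encoding resolution and acceptance, then minimise a statistic comparing the folded prediction with the observed histogram. The exponential spectrum, Gaussian smearing and plain chi-squared statistic below are illustrative assumptions, not the statistic proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

edges = np.linspace(0.0, 10.0, 21)                 # binning of the true variable
centers = 0.5 * (edges[:-1] + edges[1:])

def model_counts(tau, n_tot=10000.0):
    """Expected counts per true bin for an exponential spectrum with slope tau."""
    cdf = 1.0 - np.exp(-edges / tau)
    return n_tot * np.diff(cdf) / (cdf[-1] - cdf[0])

def response_matrix(sigma=0.6, acceptance=0.8):
    """Toy detector: Gaussian bin-to-bin smearing plus a flat 80% acceptance."""
    R = np.exp(-0.5 * ((centers[:, None] - centers[None, :]) / sigma) ** 2)
    R /= R.sum(axis=0, keepdims=True)              # column j: true bin j spread over observed bins
    return acceptance * R

R = response_matrix()
observed = rng.poisson(R @ model_counts(2.5))      # pseudo-data after resolution and acceptance

def fit_statistic(params):
    """Compare the folded model prediction with the observed (reconstructed) histogram."""
    tau, n_tot = abs(params[0]), abs(params[1])    # keep parameters positive during the search
    expected = R @ model_counts(tau, n_tot)
    return np.sum((observed - expected) ** 2 / np.maximum(expected, 1e-9))

fit = minimize(fit_statistic, x0=(2.0, 9000.0), method="Nelder-Mead")
print("slope, normalisation:", np.abs(fit.x))
```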

  4. Parametric fitting of corneal height data to a biconic surface.

    Science.gov (United States)

    Janunts, Edgar; Kannengießer, Marc; Langenbucher, Achim

    2015-03-01

    As the average corneal shape can effectively be approximated by a conic section, a determination of the corneal shape by biconic parameters is desired. The purpose of the paper is to introduce a straightforward mathematical approach for extracting clinically relevant parameters of the corneal surface, such as radii of curvature and conic constants for the principal meridians and astigmatism. A general description for modeling the ocular surfaces in a biconic form is given, based on which an implicit parametric surface fitting algorithm is introduced. The solution of the biconic fitting is obtained by two sequential least-squares optimization steps with constraints. The data input can be raw information from any corneal topographer with a not necessarily uniform data distribution. Various simulated and clinical data are studied, including surfaces with rotationally symmetric and non-symmetric geometries. The clinical data were obtained from the Pentacam (Oculus) for a patient who had undergone refractive surgery. A sub-micrometer fitting accuracy was obtained for all simulated surfaces: at most 0.08 μm RMS fitting error for rotationally symmetric and 0.125 μm for non-symmetric surfaces. The astigmatism was recovered with sub-minute resolution. The presented model is shown to match the widely used quadric fitting model on rotationally symmetric surfaces and to outperform it on non-symmetric surfaces. The introduced biconic surface fitting algorithm is able to recover the apical radii of curvature and conic constants in the principal meridians. This methodology could be a platform for advanced IOL calculations and enhanced contact lens fitting. Copyright © 2014. Published by Elsevier GmbH.
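
    A rough stand-in for the biconic fit, assuming the standard biconic sag equation and a generic nonlinear least-squares solver rather than the paper's two-step constrained approach; the radii, conic constants, zone size and the keratometric index 337.5 are illustrative values.

```python
import numpy as np
from scipy.optimize import curve_fit

def biconic_sag(xy, Rx, Ry, kx, ky):
    """Sag of a biconic surface; Rx, Ry apical radii (mm), kx, ky conic constants."""
    x, y = xy
    cx, cy = 1.0 / Rx, 1.0 / Ry
    num = cx * x**2 + cy * y**2
    den = 1.0 + np.sqrt(1.0 - (1.0 + kx) * cx**2 * x**2 - (1.0 + ky) * cy**2 * y**2)
    return num / den

# Synthetic "topographer" heights over an 8 mm zone with small measurement noise (mm)
rng = np.random.default_rng(4)
x = rng.uniform(-4.0, 4.0, 2000)
y = rng.uniform(-4.0, 4.0, 2000)
z = biconic_sag((x, y), 7.8, 7.6, -0.25, -0.15) + rng.normal(0.0, 1e-4, x.size)

popt, _ = curve_fit(biconic_sag, np.vstack([x, y]), z, p0=(7.7, 7.7, -0.2, -0.2))
Rx, Ry, kx, ky = popt
print(f"Rx={Rx:.3f} mm, Ry={Ry:.3f} mm, kx={kx:.3f}, ky={ky:.3f}")
print("corneal astigmatism (D), keratometric index 1.3375:", 337.5 / Ry - 337.5 / Rx)
```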

  5. An approximation to the adaptive exponential integrate-and-fire neuron model allows fast and predictive fitting to physiological data

    Directory of Open Access Journals (Sweden)

    Loreen Hertäg

    2012-09-01

    Full Text Available For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ('in-vivo-like' input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a 'high-throughput' model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
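
    The closed-form firing-rate expressions used in the paper are specific to the AdEx approximation; as a hedged stand-in, the sketch below fits the analogous closed-form f-I curve of a plain leaky integrate-and-fire neuron to synthetic f-I data. Parameter values, units and bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def lif_fI(I, R, tau, v_th, t_ref):
    """Closed-form steady-state f-I curve of a LIF neuron that resets to rest (0 mV)."""
    v_inf = R * I                                   # asymptotic voltage for current I (nA * MOhm -> mV)
    rate = np.zeros_like(I, dtype=float)
    supra = v_inf > v_th                            # only supra-threshold currents make the cell fire
    isi = t_ref + tau * np.log(v_inf[supra] / (v_inf[supra] - v_th))
    rate[supra] = 1.0 / isi
    return rate

# Synthetic f-I measurements standing in for an in vitro current-clamp protocol
I = np.linspace(0.0, 1.0, 21)
rng = np.random.default_rng(5)
f_obs = lif_fI(I, 40.0, 0.02, 15.0, 0.002) + rng.normal(0.0, 1.0, I.size)

popt, _ = curve_fit(lif_fI, I, f_obs, p0=(30.0, 0.03, 10.0, 0.002),
                    bounds=([1.0, 1e-3, 1.0, 0.0], [200.0, 0.2, 50.0, 0.01]))
print("R (MOhm), tau (s), V_th (mV), t_ref (s):", popt)
```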

  6. PyCorrFit-generic data evaluation for fluorescence correlation spectroscopy.

    Science.gov (United States)

    Müller, Paul; Schwille, Petra; Weidemann, Thomas

    2014-09-01

    We present a graphical user interface (PyCorrFit) for the fitting of theoretical model functions to experimental data obtained by fluorescence correlation spectroscopy (FCS). The program supports many data file formats and features a set of tools specialized in FCS data evaluation. The Python source code is freely available for download from the PyCorrFit web page at http://pycorrfit.craban.de. We offer binaries for Ubuntu Linux, Mac OS X and Microsoft Windows. © The Author 2014. Published by Oxford University Press.
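
    For readers without the GUI, an equivalent hand-rolled fit of the standard single-component 3D diffusion FCS model (one of the model classes PyCorrFit ships with) can be done directly; the synthetic correlation curve and starting values below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def g_3d_diffusion(tau, N, tau_d, S):
    """Confocal FCS autocorrelation for one freely diffusing species in 3D."""
    return (1.0 / N) / ((1.0 + tau / tau_d) * np.sqrt(1.0 + tau / (S**2 * tau_d)))

# Synthetic correlation curve (lag times in s); stands in for an exported FCS data file
tau = np.logspace(-6, 0, 120)
rng = np.random.default_rng(6)
g_obs = g_3d_diffusion(tau, N=8.0, tau_d=3e-4, S=5.0) + rng.normal(0.0, 2e-3, tau.size)

popt, _ = curve_fit(g_3d_diffusion, tau, g_obs, p0=(5.0, 1e-4, 5.0),
                    bounds=([0.1, 1e-7, 1.0], [1e4, 1.0, 20.0]))
N_fit, tau_d_fit, S_fit = popt
print(f"N={N_fit:.2f}, tau_D={tau_d_fit:.2e} s, structure parameter={S_fit:.1f}")
```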

  7. Fitting models of continuous trait evolution to incompletely sampled comparative data using approximate Bayesian computation.

    Science.gov (United States)

    Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E

    2012-03-01

    In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov-Chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.

  8. Optical-model analysis of exotic atom data. Pt. 1

    International Nuclear Information System (INIS)

    Batty, C.J.

    1981-01-01

    Data for kaonic atoms are fitted using a simple optical model with a potential proportional to the nuclear density. Very satisfactory fits to strong interaction shift and width values are obtained but difficulties in fitting yield values indicate that the model is not completely satisfactory. The potential strength can be related to the free kaon-nucleon scattering lengths using a model due to Deloff. A good overall representation of the data is obtained with a black-sphere model. (orig.)

  9. Automatic fitting of spiking neuron models to electrophysiological recordings

    Directory of Open Access Journals (Sweden)

    Cyrille Rossant

    2010-03-01

    Full Text Available Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.

  10. topicmodels: An R Package for Fitting Topic Models

    Directory of Open Access Journals (Sweden)

    Bettina Grün

    2011-05-01

    Full Text Available Topic models allow the probabilistic modeling of term frequency occurrences in documents. The fitted model can be used to estimate the similarity between documents as well as between a set of specified keywords using an additional layer of latent variables which are referred to as topics. The R package topicmodels provides basic infrastructure for fitting topic models based on data structures from the text mining package tm. The package includes interfaces to two algorithms for fitting topic models: the variational expectation-maximization algorithm provided by David M. Blei and co-authors and an algorithm using Gibbs sampling by Xuan-Hieu Phan and co-authors.

  11. Measured, modeled, and causal conceptions of fitness

    Science.gov (United States)

    Abrams, Marshall

    2012-01-01

    This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution?What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804

  12. The FORCE Fitness Profile--Adding a Measure of Health-Related Fitness to the Canadian Armed Forces Operational Fitness Evaluation.

    Science.gov (United States)

    Gagnon, Patrick; Spivock, Michael; Reilly, Tara; Mattie, Paige; Stockbrugger, Barry

    2015-11-01

    In 2013, the Canadian Armed Forces (CAF) implemented the Fitness for Operational Requirements of Canadian Armed Forces Employment (FORCE), a field-expedient fitness test designed to predict the physical requirements of completing common military tasks. Given that attaining this minimal physical fitness standard may not represent a challenge to some personnel, a fitness incentive program was requested by the chain of command to recognize and reward fitness over and above the minimal standard. At the same time, it was determined that the CAF would benefit from a measure of general health-related fitness, in addition to this measure of operational fitness. The resulting incentive program structure is based on gender and 8 age categories. The results on the 4 elements of the FORCE evaluation were converted to a point scale from which normative scores were derived, where the median score corresponds to the bronze level, and silver, gold, and platinum correspond to a score which is 1, 2, and 3 SDs above this median, respectively. A suite of rewards was developed, including merit board points toward promotion, recognition on the uniform, and material rewards. A separate group rewards program was also tabled, to recognize achievements in fitness at the unit level. For general fitness, oxygen capacity was derived from FORCE evaluation results and combined with a measure of abdominal circumference. Fitness categories were determined based on relative risks of mortality and morbidity for each age and gender group. Pilot testing of this entire program was performed with 624 participants to assess participants' reactions to the enhanced test, and also to verify logistical aspects of the electronic data capture, calculation, and transfer system. The newly dubbed fitness profile program was subsequently approved by the senior leadership of the CAF and is scheduled to begin a phased implementation in June 2015.

  13. Nonlinear models for fitting growth curves of Nellore cows reared in the Amazon Biome

    Directory of Open Access Journals (Sweden)

    Kedma Nayra da Silva Marinho

    2013-09-01

    Full Text Available Growth curves of Nellore cows were estimated by comparing six nonlinear models: Brody, Logistic, two alternatives by Gompertz, Richards and Von Bertalanffy. The models were fitted to weight-age data, from birth to 750 days of age of 29,221 cows, born between 1976 and 2006 in the Brazilian states of Acre, Amapá, Amazonas, Pará, Rondônia, Roraima and Tocantins. The models were fitted by the Gauss-Newton method. The goodness of fit of the models was evaluated by using mean square error, adjusted coefficient of determination, prediction error and mean absolute error. Biological interpretation of parameters was accomplished by plotting estimated weights versus the observed weight means, instantaneous growth rate, absolute maturity rate, relative instantaneous growth rate, inflection point and magnitude of the parameters A (asymptotic weight and K (maturing rate. The Brody and Von Bertalanffy models fitted the weight-age data but the other models did not. The average weight (A and growth rate (K were: 384.6±1.63 kg and 0.0022±0.00002 (Brody and 313.40±0.70 kg and 0.0045±0.00002 (Von Bertalanffy. The Brody model provides better goodness of fit than the Von Bertalanffy model.
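
    A minimal sketch of this kind of growth-curve fitting, using synthetic weight-age records and scipy's Levenberg-Marquardt-style solver in place of the Gauss-Newton fits reported in the paper; the parameter values are illustrative, not the published estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

def brody(t, A, B, K):
    """Brody curve: asymptotic weight A, integration constant B, maturing rate K."""
    return A * (1.0 - B * np.exp(-K * t))

def von_bertalanffy(t, A, B, K):
    return A * (1.0 - B * np.exp(-K * t)) ** 3

# Synthetic weight-age records (age in days, weight in kg); not the Nellore field data
rng = np.random.default_rng(7)
age = rng.uniform(0.0, 750.0, 400)
weight = brody(age, 385.0, 0.92, 0.0022) + rng.normal(0.0, 15.0, age.size)

for name, f, p0 in [("Brody", brody, (400.0, 0.9, 0.003)),
                    ("Von Bertalanffy", von_bertalanffy, (330.0, 0.5, 0.004))]:
    popt, _ = curve_fit(f, age, weight, p0=p0, maxfev=10000)
    mse = np.mean((weight - f(age, *popt)) ** 2)
    print(f"{name}: A={popt[0]:.1f} kg, K={popt[2]:.4f}, MSE={mse:.1f}")
```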

  14. A History of Regression and Related Model-Fitting in the Earth Sciences (1636?-2000)

    International Nuclear Information System (INIS)

    Howarth, Richard J.

    2001-01-01

    The (statistical) modeling of the behavior of a dependent variate as a function of one or more predictors provides examples of model-fitting which span the development of the earth sciences from the 17th Century to the present. The historical development of these methods and their subsequent application is reviewed. Bond's predictions (c. 1636 and 1668) of change in the magnetic declination at London may be the earliest attempt to fit such models to geophysical data. Following publication of Newton's theory of gravitation in 1726, analysis of data on the length of a 1° meridian arc, and the length of a pendulum beating seconds, as a function of sin²(latitude), was used to determine the ellipticity of the oblate spheroid defining the Figure of the Earth. The pioneering computational methods of Mayer in 1750, Boscovich in 1755, and Lambert in 1765, and the subsequent independent discoveries of the principle of least squares by Gauss in 1799, Legendre in 1805, and Adrain in 1808, and its later substantiation on the basis of probability theory by Gauss in 1809 were all applied to the analysis of such geodetic and geophysical data. Notable later applications include: the geomagnetic survey of Ireland by Lloyd, Sabine, and Ross in 1836, Gauss's model of the terrestrial magnetic field in 1838, and Airy's 1845 analysis of the residuals from a fit to pendulum lengths, from which he recognized the anomalous character of measurements of gravitational force which had been made on islands. In the early 20th Century applications to geological topics proliferated, but the computational burden effectively held back applications of multivariate analysis. Following World War II, the arrival of digital computers in universities in the 1950s facilitated computation, and fitting linear or polynomial models as a function of geographic coordinates, trend surface analysis, became popular during the 1950-60s. The inception of geostatistics in France at this time by Matheron had its

  15. A Comparison of Item Fit Statistics for Mixed IRT Models

    Science.gov (United States)

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G², Orlando and Thissen's S-X² and S-G², and Stone's χ²* and G²*. To investigate the…

  16. Random-growth urban model with geographical fitness

    Science.gov (United States)

    Kii, Masanobu; Akimoto, Keigo; Doi, Kenji

    2012-12-01

    This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.

  17. LEP asymmetries and fits of the standard model

    International Nuclear Information System (INIS)

    Pietrzyk, B.

    1994-01-01

    The lepton and quark asymmetries measured at LEP are presented. The results of the Standard Model fits to the electroweak data presented at this conference are given. The top mass obtained from the fit to the LEP data is 172 (+13/−14) (+18/−20) GeV; it is 177 (+11/−11) (+18/−19) GeV when the collider, ν and A_LR data are also included. (author). 10 refs., 3 figs., 2 tabs

  18. Automated Model Fit Method for Diesel Engine Control Development

    NARCIS (Netherlands)

    Seykens, X.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.

    2014-01-01

    This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and structured approach to fit the required combustion model parameters. Only a data set is required that is

  19. Automated model fit method for diesel engine control development

    NARCIS (Netherlands)

    Seykens, X.L.J.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.J.H.

    2014-01-01

    This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and structured approach to fit the required combustion model parameters. Only a data set is required that is

  20. SU-D-204-05: Fitting Four NTCP Models to Treatment Outcome Data of Salivary Glands Recorded Six Months After Radiation Therapy for Head and Neck Tumors

    Energy Technology Data Exchange (ETDEWEB)

    Mavroidis, P; Price, A; Kostich, M; Green, R; Das, S; Marks, L; Chera, B [University North Carolina, Chapel Hill, NC (United States); Amdur, R; Mendenhall, W [University of Florida, Gainesville, FL (United States); Sheets, N [University of North Carolina, Raleigh, NC (United States)

    2016-06-15

    Purpose: To estimate the radiobiological parameters of four popular NTCP models that describe the dose-response relations of salivary glands to the severity of patient-reported dry mouth 6 months post chemo-radiotherapy. To identify the glands that best correlate with the manifestation of those clinical endpoints. Finally, to evaluate the goodness-of-fit of the NTCP models. Methods: Forty-three patients were treated on a prospective multi-institutional phase II study for oropharyngeal squamous cell carcinoma. All the patients received 60 Gy IMRT and they reported symptoms using the novel patient-reported outcome version of the CTCAE. We derived the individual patient dosimetric data of the parotid and submandibular glands (SMG) as separate structures as well as combinations. The Lyman-Kutcher-Burman (LKB), Relative Seriality (RS), Logit and Relative Logit (RL) NTCP models were used to fit the patient data. The fitting of the different models was assessed through the area under the receiver operating characteristic curve (AUC) and the Odds Ratio methods. Results: The AUC values were highest for the contralateral parotid for Grade ≥ 2 (0.762 for the LKB, RS, Logit and 0.753 for the RL). For the salivary glands the AUC values were: 0.725 for the LKB, RS, Logit and 0.721 for the RL. For the contralateral SMG the AUC values were: 0.721 for LKB, 0.714 for Logit and 0.712 for RS and RL. The Odds Ratio for the contralateral parotid was 5.8 (1.3–25.5) for all four NTCP models at the radiobiological dose threshold of 21 Gy. Conclusion: It was shown that all the examined NTCP models could fit the clinical data well with very similar accuracy. The contralateral parotid gland appears to correlate best with the clinical endpoints of severe/very severe dry mouth. An EQD2Gy dose of 21 Gy appears to be a safe threshold to be used as a constraint in treatment planning.
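
    A hedged sketch of how one of the four models (LKB) can be fitted to binary toxicity data by maximum likelihood and assessed with an AUC, assuming gEUD summaries are already available; the cohort below is synthetic and the numbers are not those of the study.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)

# Synthetic cohort: contralateral-parotid gEUD (Gy) and a binary dry-mouth outcome
geud = rng.uniform(5.0, 45.0, 43)
toxicity = rng.binomial(1, norm.cdf((geud - 25.0) / (0.45 * 25.0)))

def lkb_ntcp(geud, td50, m):
    """Lyman-Kutcher-Burman NTCP written as a probit in gEUD."""
    return norm.cdf((geud - td50) / (m * td50))

def neg_loglik(params):
    td50, m = abs(params[0]), abs(params[1])       # keep both parameters positive
    p = np.clip(lkb_ntcp(geud, td50, m), 1e-9, 1.0 - 1e-9)
    return -np.sum(toxicity * np.log(p) + (1 - toxicity) * np.log(1.0 - p))

fit = minimize(neg_loglik, x0=(30.0, 0.5), method="Nelder-Mead")
td50_hat, m_hat = np.abs(fit.x)
print(f"TD50 = {td50_hat:.1f} Gy, m = {m_hat:.2f}")
print("AUC:", roc_auc_score(toxicity, lkb_ntcp(geud, td50_hat, m_hat)))
```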

  1. An NCME Instructional Module on Item-Fit Statistics for Item Response Theory Models

    Science.gov (United States)

    Ames, Allison J.; Penfield, Randall D.

    2015-01-01

    Drawing valid inferences from item response theory (IRT) models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. This instructional module provides an overview of methods used for evaluating the fit of IRT models. Upon completing…

  2. Impact of CrossFit-Related Spinal Injuries.

    Science.gov (United States)

    Hopkins, Benjamin S; Cloney, Michael B; Kesavabhotla, Kartik; Yamaguchi, Jonathon; Smith, Zachary A; Koski, Tyler R; Hsu, Wellington K; Dahdaleh, Nader S

    2017-11-16

    Exercise-related injuries (ERIs) are a common cause of nonfatal emergency department and hospital visits. CrossFit is a high-intensity workout regimen whose popularity has grown rapidly. However, ERIs due to CrossFit remain under-investigated. All patients who presented to the main hospital at a major academic center complaining of an injury sustained performing CrossFit between June 2010 and June 2016 were identified. Injuries were classified by anatomical location (eg, knee, spine). For patients with spinal injuries, data were collected including age, sex, body mass index (BMI), CrossFit experience level, symptom duration, type of symptoms, type of clinic presentation, cause of injury, objective neurological examination findings, imaging type, number of clinic visits, and treatments prescribed. Four hundred ninety-eight patients with 523 CrossFit-related injuries were identified. Spine injuries were the most common injuries identified, accounting for 20.9%. Among spine injuries, the most common location of injury was the lumbar spine (83.1%). Average symptom duration was 6.4 ± 15.1 months, and radicular complaints were the most common symptom (53%). A total of 30 (32%) patients had positive findings on neurologic examination. Six patients (6.7%) required surgical intervention for treatment after failing an average of 9.66 months of conservative treatment. There was no difference in age, sex, BMI, or duration of symptoms between patients requiring surgery and those who did not. CrossFit is a popular, high-intensity workout with the potential to injure its participants. Spine injuries were the most common type of injury observed and frequently required surgical intervention.

  3. Model-fitting approach to kinetic analysis of non-isothermal oxidation of molybdenite

    International Nuclear Information System (INIS)

    Ebrahimi Kahrizsangi, R.; Abbasi, M. H.; Saidi, A.

    2007-01-01

    The kinetics of molybdenite oxidation was studied by non-isothermal TGA-DTA with a heating rate of 5 °C·min⁻¹. The model-fitting kinetic approach was applied to the TGA data, using the Coats-Redfern method of model fitting. This popular model-fitting approach gives an excellent fit to non-isothermal data in the chemically controlled regime. The apparent activation energy was determined to be about 34.2 kcal·mol⁻¹, with a pre-exponential factor of about 10⁸ s⁻¹, for extents of reaction less than 0.5.
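
    The Coats-Redfern linearisation itself is straightforward: plotting ln[g(α)/T²] against 1/T gives a slope of −E/R. The sketch below applies it to synthetic first-order TGA data generated to match the reported magnitudes; the temperature range and kinetic model are illustrative assumptions, not the study's measurements.

```python
import numpy as np

R_GAS = 8.314                      # J mol^-1 K^-1
beta = 5.0 / 60.0                  # heating rate in K s^-1 (5 degC per minute)

# Synthetic first-order TGA data generated from the Coats-Redfern approximation (illustrative)
E_true, A_true = 143_000.0, 1.0e8  # ~34.2 kcal/mol and 1e8 s^-1
T = np.linspace(620.0, 740.0, 40)  # temperatures in K
g = (A_true * R_GAS * T**2) / (beta * E_true) * np.exp(-E_true / (R_GAS * T))
alpha = 1.0 - np.exp(-g)           # conversion for the first-order model, g(alpha) = -ln(1 - alpha)

# Coats-Redfern: ln[g(alpha)/T^2] is linear in 1/T with slope -E/R
y = np.log(-np.log(1.0 - alpha) / T**2)
slope, intercept = np.polyfit(1.0 / T, y, 1)

E_fit = -slope * R_GAS                             # activation energy in J/mol
A_fit = np.exp(intercept) * beta * E_fit / R_GAS   # pre-exponential factor in s^-1
print(f"E = {E_fit / 4184:.1f} kcal/mol, A = {A_fit:.2e} s^-1")
```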

  4. The 'fitting problem' in cosmology

    International Nuclear Information System (INIS)

    Ellis, G.F.R.; Stoeger, W.

    1987-01-01

    The paper considers the best way to fit an idealised exactly homogeneous and isotropic universe model to a realistic ('lumpy') universe; whether made explicit or not, some such approach of necessity underlies the use of the standard Robertson-Walker models as models of the real universe. Approaches based on averaging, normal coordinates and null data are presented, the latter offering the best opportunity to relate the fitting procedure to data obtainable by astronomical observations. (author)

  5. ROBUST CYLINDER FITTING IN THREE-DIMENSIONAL POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    A. Nurunnabi

    2017-05-01

    Full Text Available This paper investigates the problems of cylinder fitting in laser scanning three-dimensional Point Cloud Data (PCD). Most existing methods require full cylinder data, do not study the presence of outliers, and are not statistically robust. But especially mobile laser scanning often has incomplete data, as street poles for example are only scanned from the road. Moreover, existence of outliers is common. Outliers may occur as random or systematic errors, and may be scattered and/or clustered. In this paper, we present a statistically robust cylinder fitting algorithm for PCD that combines Robust Principal Component Analysis (RPCA) with robust regression. Robust principal components as obtained by RPCA allow estimating cylinder directions more accurately, and an existing efficient circle fitting algorithm following robust regression principles properly fits the cylinder. We demonstrate the performance of the proposed method on artificial and real PCD. Results show that the proposed method provides more accurate and robust results: (i) in the presence of noise and a high percentage of outliers, (ii) for incomplete as well as complete data, (iii) for small and large numbers of points, and (iv) for different sizes of radius. On 1000 simulated quarter cylinders of 1 m radius with 10% outliers, a PCA-based method fitted cylinders with an average radius of 3.63 m; the proposed method, on the other hand, fitted cylinders with an average radius of 1.02 m. The algorithm has potential in applications such as fitting cylindrical objects (e.g., light and traffic poles), diameter at breast height estimation for trees, and building and bridge information modelling.
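
    A non-robust sketch of the two-stage idea (axis from a principal-component direction, then a circle fit in the perpendicular plane), using plain PCA and an algebraic Kasa circle fit on clean synthetic data; the paper's contribution, replacing both stages with robust estimators so the fit survives outliers, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic quarter-cylinder scan: radius 1 m, vertical axis, 2 m tall, small noise
theta = rng.uniform(0.0, np.pi / 2.0, 1500)
z = rng.uniform(0.0, 2.0, 1500)
pts = np.column_stack([np.cos(theta), np.sin(theta), z]) + rng.normal(0.0, 0.005, (1500, 3))

# 1) axis direction: principal direction of the centred points (plain PCA; the paper
#    would use a robust PCA here to resist outliers)
centred = pts - pts.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
axis = vt[0]                                  # elongated patch -> axis ~ direction of largest variance

# 2) project onto a plane perpendicular to the axis and fit a circle (Kasa algebraic fit)
u = np.cross(axis, [1.0, 0.0, 0.0])
u /= np.linalg.norm(u)
v = np.cross(axis, u)
x2d, y2d = centred @ u, centred @ v
A = np.column_stack([2.0 * x2d, 2.0 * y2d, np.ones_like(x2d)])
sol, *_ = np.linalg.lstsq(A, x2d**2 + y2d**2, rcond=None)
radius = np.sqrt(sol[2] + sol[0]**2 + sol[1]**2)
print("estimated axis:", np.round(axis, 3), " radius:", round(float(radius), 3))
```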

  6. Potential fitting biases resulting from grouping data into variable width bins

    International Nuclear Information System (INIS)

    Towers, S.

    2014-01-01

    When reading peer-reviewed scientific literature describing any analysis of empirical data, it is natural and correct to proceed with the underlying assumption that experiments have made good faith efforts to ensure that their analyses yield unbiased results. However, particle physics experiments are expensive and time consuming to carry out, thus if an analysis has inherent bias (even if unintentional), much money and effort can be wasted trying to replicate or understand the results, particularly if the analysis is fundamental to our understanding of the universe. In this note we discuss the significant biases that can result from data binning schemes. As we will show, if data are binned such that they provide the best comparison to a particular (but incorrect) model, the resulting model parameter estimates when fitting to the binned data can be significantly biased, leading us to too often accept the model hypothesis when it is not in fact true. When using binned likelihood or least squares methods there is of course no a priori requirement that data bin sizes need to be constant, but we show that fitting to data grouped into variable width bins is particularly prone to produce biased results if the bin boundaries are chosen to optimize the comparison of the binned data to a wrong model. The degree of bias that can be achieved simply with variable binning can be surprisingly large. Fitting the data with an unbinned likelihood method, when possible to do so, is the best way for researchers to show that their analyses are not biased by binning effects. Failing that, equal bin widths should be employed as a cross-check of the fitting analysis whenever possible

  7. Potential fitting biases resulting from grouping data into variable width bins

    Energy Technology Data Exchange (ETDEWEB)

    Towers, S., E-mail: smtowers@asu.edu

    2014-07-30

    When reading peer-reviewed scientific literature describing any analysis of empirical data, it is natural and correct to proceed with the underlying assumption that experiments have made good faith efforts to ensure that their analyses yield unbiased results. However, particle physics experiments are expensive and time consuming to carry out, thus if an analysis has inherent bias (even if unintentional), much money and effort can be wasted trying to replicate or understand the results, particularly if the analysis is fundamental to our understanding of the universe. In this note we discuss the significant biases that can result from data binning schemes. As we will show, if data are binned such that they provide the best comparison to a particular (but incorrect) model, the resulting model parameter estimates when fitting to the binned data can be significantly biased, leading us to too often accept the model hypothesis when it is not in fact true. When using binned likelihood or least squares methods there is of course no a priori requirement that data bin sizes need to be constant, but we show that fitting to data grouped into variable width bins is particularly prone to produce biased results if the bin boundaries are chosen to optimize the comparison of the binned data to a wrong model. The degree of bias that can be achieved simply with variable binning can be surprisingly large. Fitting the data with an unbinned likelihood method, when possible to do so, is the best way for researchers to show that their analyses are not biased by binning effects. Failing that, equal bin widths should be employed as a cross-check of the fitting analysis whenever possible.

  8. PLOTnFIT: A BASIC program for data plotting and curve fitting

    Energy Technology Data Exchange (ETDEWEB)

    Schiffgens, J O

    1989-10-01

    PLOTnFIT is a BASIC program to be used with an IBM or IBM-compatible personal computer (PC) for plotting and fitting curves to measured or observed data for both extrapolation and interpolation. It uses the Least Squares method to calculate the coefficients of nth degree polynomials (e.g., up to 10th degree) of Basis Functions so that each polynomial fits the data in a Least Squares sense, then plots the data and the polynomial that a user decides best represents them. PLOTnFIT is very versatile. It can be used to generate linear, semilog, and log-log graphs and can automatically scale the coordinate axes to suit the data. It can plot more than one data set on a graph (e.g., up to 8 data sets) and more data points than a user is likely to put on one graph (e.g., up to 225 points). A PC diskette containing (1) READIST.PNF (a summary of this NUREG), (2) INI06891.SIS and FOL06891.SIS (two data files), and 3) PLOTNFIT.4TH (the latest version of the program) may be obtained from the National Energy Software Center, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439. (author)
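
    The core of such a tool reduces to ordinary least-squares polynomial fitting, which today is a one-liner; the sketch below is a loose modern equivalent, not a port of the BASIC program, and the data points are made up.

```python
import numpy as np

# Illustrative re-creation of PLOTnFIT's central step: least-squares polynomial fits
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1, 5.8, 7.2, 7.9])    # "measured" data

for degree in (1, 2, 3):
    coeffs = np.polyfit(x, y, degree)          # least-squares coefficients, highest power first
    resid = y - np.polyval(coeffs, x)
    print(degree, coeffs.round(3), "RMS residual:", np.sqrt(np.mean(resid**2)).round(3))

# interpolation / extrapolation with the chosen polynomial
best = np.polyfit(x, y, 1)
print("extrapolated value at x = 5:", np.polyval(best, 5.0).round(2))
```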

  9. Fitting model-based psychometric functions to simultaneity and temporal-order judgment data: MATLAB and R routines.

    Science.gov (United States)

    Alcalá-Quintana, Rocío; García-Pérez, Miguel A

    2013-12-01

    Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.

  10. Using Fit Indexes to Select a Covariance Model for Longitudinal Data

    Science.gov (United States)

    Liu, Siwei; Rovine, Michael J.; Molenaar, Peter C. M.

    2012-01-01

    This study investigated the performance of fit indexes in selecting a covariance structure for longitudinal data. Data were simulated to follow a compound symmetry, first-order autoregressive, first-order moving average, or random-coefficients covariance structure. We examined the ability of the likelihood ratio test (LRT), root mean square error…

  11. Flexible competing risks regression modeling and goodness-of-fit

    DEFF Research Database (Denmark)

    Scheike, Thomas; Zhang, Mei-Jie

    2008-01-01

    In this paper we consider different approaches for estimation and assessment of covariate effects for the cumulative incidence curve in the competing risks model. The classic approach is to model all cause-specific hazards and then estimate the cumulative incidence curve based on these cause...... models that is easy to fit and contains the Fine-Gray model as a special case. One advantage of this approach is that our regression modeling allows for non-proportional hazards. This leads to a new simple goodness-of-fit procedure for the proportional subdistribution hazards assumption that is very easy...... of the flexible regression models to analyze competing risks data when non-proportionality is present in the data....

  12. Protein Simulation Data in the Relational Model.

    Science.gov (United States)

    Simms, Andrew M; Daggett, Valerie

    2012-10-01

    High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost: significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large, multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server.

  13. Health-related physical fitness for children with cerebral palsy

    Science.gov (United States)

    Maltais, Désirée B.; Wiart, Lesley; Fowler, Eileen; Verschuren, Olaf; Damiano, Diane L.

    2014-01-01

    Low levels of physical activity are a global health concern for all children. Children with cerebral palsy have even lower physical activity levels than their typically developing peers. Low levels of physical activity, and thus an increased risk for related chronic diseases, are associated with deficits in health-related physical fitness. Recent research has provided therapists with the resources to effectively perform physical fitness testing and physical activity training in clinical settings with children who have cerebral palsy, although most testing and training data to date pertains to those who walk. Nevertheless, based on the present evidence, all children with cerebral palsy should engage, to the extent they are able, in aerobic, anaerobic and muscle strengthening activities. Future research is required to determine the best ways to evaluate health-related physical fitness in non-ambulatory children with cerebral palsy and foster long-term changes in physical activity behavior in all children with this condition. PMID:24820339

  14. Direction Dependent Background Fitting for the Fermi GBM Data

    OpenAIRE

    Szécsi, Dorottya; Bagoly, Zsolt; Kóbori, József; Horváth, István; Balázs, Lajos G.

    2013-01-01

    We present a method for determining the background of Fermi GBM GRBs using the satellite positional information and a physical model. Since the polynomial fitting method typically used for GRBs is generally only indicative of the background over relatively short timescales, this method is particularly useful in the cases of long GRBs or those which have Autonomous Repoint Request (ARR) and a background with much variability on short timescales. We give a Direction Dependent Background Fitting...

  15. Bivariate copula in fitting rainfall data

    Science.gov (United States)

    Yee, Kong Ching; Suhaila, Jamaludin; Yusof, Fadhilah; Mean, Foo Hui

    2014-07-01

    The usage of copulas to determine the joint distribution between two variables is widespread in various areas. The joint distribution of rainfall characteristics obtained using a copula model is more suitable than standard bivariate modelling, as the copula approach is believed to overcome some of its limitations. Six copula models will be applied to obtain the most suitable bivariate distribution between two rain gauge stations. The copula models are Ali-Mikhail-Haq (AMH), Clayton, Frank, Galambos, Gumbel-Hougaard (GH) and Plackett. The rainfall data used in the study are selected from rain gauge stations located in the southern part of Peninsular Malaysia, covering the period from 1980 to 2011. The goodness-of-fit test in this study is based on the Akaike information criterion (AIC).
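
    As an illustration of how such a goodness-of-fit comparison via AIC can be carried out, the sketch below (a hypothetical Python example; the study does not specify its software or supply its data) fits one of the six candidate families, the Clayton copula, to pseudo-observations from two simulated rainfall series by maximum likelihood and computes its AIC. Repeating this for each family and keeping the smallest AIC reproduces the selection logic described in the abstract.

        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import rankdata

        def clayton_neg_loglik(theta, u, v):
            """Negative log-likelihood of the Clayton copula density (theta > 0)."""
            s = u**(-theta) + v**(-theta) - 1.0
            logc = (np.log1p(theta)
                    - (theta + 1.0) * (np.log(u) + np.log(v))
                    - (2.0 + 1.0 / theta) * np.log(s))
            return -logc.sum()

        # Hypothetical paired rainfall amounts from two stations.
        rng = np.random.default_rng(1)
        x = rng.gamma(2.0, 10.0, 500)
        y = 0.6 * x + rng.gamma(2.0, 8.0, 500)

        # Pseudo-observations: ranks rescaled to the unit interval.
        u = rankdata(x) / (len(x) + 1.0)
        v = rankdata(y) / (len(y) + 1.0)

        res = minimize_scalar(clayton_neg_loglik, bounds=(1e-3, 20.0),
                              method="bounded", args=(u, v))
        aic = 2 * 1 + 2 * res.fun          # one parameter: AIC = 2k - 2 log L
        print(f"theta = {res.x:.3f}, AIC = {aic:.1f}")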

  16. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    Science.gov (United States)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we show that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.

  17. Influence of a health-related physical fitness model on students' physical activity, perceived competence, and enjoyment.

    Science.gov (United States)

    Fu, You; Gao, Zan; Hannon, James; Shultz, Barry; Newton, Maria; Sibthorp, Jim

    2013-12-01

    This study was designed to explore the effects of a health-related physical fitness physical education model on students' physical activity, perceived competence, and enjoyment. 61 students (25 boys, 36 girls; M age = 12.6 yr., SD = 0.6) were assigned to two groups (health-related physical fitness physical education group, and traditional physical education group), and participated in one 50-min. weekly basketball class for 6 wk. Students' in-class physical activity was assessed using NL-1000 pedometers. The physical subscale of the Perceived Competence Scale for Children was employed to assess perceived competence, and children's enjoyment was measured using the Sport Enjoyment Scale. The findings suggest that students in the intervention group increased their perceived competence, enjoyment, and physical activity over a 6-wk. intervention, while the comparison group simply increased physical activity over time. Children in the intervention group had significantly greater enjoyment.

  18. Modeling of Experimental Adsorption Isotherm Data

    Directory of Open Access Journals (Sweden)

    Xunjun Chen

    2015-01-01

    Full Text Available Adsorption is considered to be one of the most effective technologies widely used in global environmental protection areas. Modeling of experimental adsorption isotherm data is an essential way of predicting the mechanisms of adsorption, which will lead to improvements in the area of adsorption science. In this paper, we employed three isotherm models, namely Langmuir, Freundlich, and Dubinin-Radushkevich, to correlate four sets of experimental adsorption isotherm data, which were obtained by batch tests in the lab. The linearized and non-linearized isotherm models were compared and discussed. In order to determine the best-fit isotherm model, the correlation coefficient (r2) and standard errors (S.E.) for each parameter were used to evaluate the data. The modeling results showed that the non-linear Langmuir model could fit the data better than the others, with relatively higher r2 values and smaller S.E. The linear Langmuir model had the highest value of r2; however, the maximum adsorption capacities estimated from the linear Langmuir model deviated from the experimental data.
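
    The comparison of linear and non-linear Langmuir fits described above can be reproduced with a short script. The following sketch (a hypothetical Python example using made-up equilibrium data, not the authors' data) fits the Langmuir isotherm q = qmax*K*C/(1 + K*C) both by non-linear least squares and through the common linearisation C/q = C/qmax + 1/(K*qmax), so the two estimates of qmax can be compared as in the paper.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import linregress

        def langmuir(C, qmax, K):
            """Langmuir isotherm: adsorbed amount as a function of equilibrium concentration."""
            return qmax * K * C / (1.0 + K * C)

        # Hypothetical equilibrium concentrations (mg/L) and adsorbed amounts (mg/g).
        C = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
        q = np.array([8.2, 14.5, 22.9, 31.8, 38.6, 42.1])

        # Non-linear least-squares fit.
        (qmax_nl, K_nl), _ = curve_fit(langmuir, C, q, p0=[50.0, 0.01])

        # Linearised fit: C/q = C/qmax + 1/(K*qmax), so slope = 1/qmax, intercept = 1/(K*qmax).
        slope, intercept, r, _, _ = linregress(C, C / q)
        qmax_lin, K_lin = 1.0 / slope, slope / intercept

        print(f"non-linear: qmax={qmax_nl:.1f}, K={K_nl:.4f}")
        print(f"linearised: qmax={qmax_lin:.1f}, K={K_lin:.4f}, r^2={r**2:.4f}")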

  19. Fitting measurement models to vocational interest data: are dominance models ideal?

    Science.gov (United States)

    Tay, Louis; Drasgow, Fritz; Rounds, James; Williams, Bruce A

    2009-09-01

    In this study, the authors examined the item response process underlying 3 vocational interest inventories: the Occupational Preference Inventory (C.-P. Deng, P. I. Armstrong, & J. Rounds, 2007), the Interest Profiler (J. Rounds, T. Smith, L. Hubert, P. Lewis, & D. Rivkin, 1999; J. Rounds, C. M. Walker, et al., 1999), and the Interest Finder (J. E. Wall & H. E. Baker, 1997; J. E. Wall, L. L. Wise, & H. E. Baker, 1996). Item response theory (IRT) dominance models, such as the 2-parameter and 3-parameter logistic models, assume that item response functions (IRFs) are monotonically increasing as the latent trait increases. In contrast, IRT ideal point models, such as the generalized graded unfolding model, have IRFs that peak where the latent trait matches the item. Ideal point models are expected to fit better because vocational interest inventories ask about typical behavior, as opposed to requiring maximal performance. Results show that across all 3 interest inventories, the ideal point model provided better descriptions of the response process. The importance of specifying the correct item response model for precise measurement is discussed. In particular, scores computed by a dominance model were shown to be sometimes illogical: individuals endorsing mostly realistic or mostly social items were given similar scores, whereas scores based on an ideal point model were sensitive to which type of items respondents endorsed.
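
    The distinction drawn above between dominance and ideal point response processes comes down to the shape of the item response function. The sketch below (a hypothetical Python illustration; the simple Gaussian-shaped unfolding curve is a stand-in for the generalized graded unfolding model, which has a more elaborate form) tabulates a monotone 2-parameter logistic IRF next to a single-peaked ideal point IRF for the same item location.

        import numpy as np

        theta = np.linspace(-3, 3, 121)    # latent trait grid
        a, b = 1.5, 0.5                    # discrimination and item location

        # Dominance model: monotone 2PL item response function.
        p_dominance = 1.0 / (1.0 + np.exp(-a * (theta - b)))

        # Ideal point model: endorsement peaks where the trait matches the item
        # (a simple squared-distance kernel, not the full GGUM).
        p_ideal_point = np.exp(-0.5 * a * (theta - b) ** 2)

        for t, pd, pi in zip(theta[::30], p_dominance[::30], p_ideal_point[::30]):
            print(f"theta={t:+.1f}  dominance={pd:.2f}  ideal point={pi:.2f}")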

  20. Peplau's Theory of Interpersonal Relations: An Alternate Factor Structure for Patient Experience Data?

    Science.gov (United States)

    Hagerty, Thomas A; Samuels, William; Norcini-Pala, Andrea; Gigliotti, Eileen

    2017-04-01

    A confirmatory factor analysis of data from the responses of 12,436 patients to 16 items on the Consumer Assessment of Healthcare Providers and Systems-Hospital survey was used to test a latent factor structure based on Peplau's middle-range theory of interpersonal relations. A two-factor model based on Peplau's theory fit these data well, whereas a three-factor model also based on Peplau's theory fit them excellently and provided a suitable alternate factor structure for the data. Though neither the two- nor three-factor model fit as well as the original factor structure, these results support using Peplau's theory to demonstrate nursing's extensive contribution to the experiences of hospitalized patients.

  1. arXiv Updated Global SMEFT Fit to Higgs, Diboson and Electroweak Data

    CERN Document Server

    Ellis, John; Sanz, Verónica; You, Tevong

    The ATLAS and CMS collaborations have recently released significant new data on Higgs and diboson production in LHC Run 2. Measurements of Higgs properties have improved in many channels, while kinematic information for $h \\to \\gamma\\gamma$ and $h \\to ZZ$ can now be more accurately incorporated in fits using the STXS method, and $W^+ W^-$ diboson production at high $p_T$ gives new sensitivity to deviations from the Standard Model. We have performed an updated global fit to precision electroweak data, $W^+W^-$ measurements at LEP, and Higgs and diboson data from Runs 1 and 2 of the LHC in the framework of the Standard Model Effective Field Theory (SMEFT), allowing all coefficients to vary across the combined dataset, and present the results in both the Warsaw and SILH operator bases. We exhibit the improvement in the constraints on operator coefficients provided by the LHC Run 2 data, and discuss the correlations between them. We also explore the constraints our fit results impose on several models of physics ...

  2. A flexible, interactive software tool for fitting the parameters of neuronal models.

    Science.gov (United States)

    Friedrich, Péter; Vella, Michael; Gulyás, Attila I; Freund, Tamás F; Káli, Szabolcs

    2014-01-01

    The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.
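
    Although Optimizer itself provides a graphical workflow and interfaces with NEURON, the underlying task, adjusting model parameters until a simulated trace matches a target trace under some cost function, can be sketched with generic tools. The example below (hypothetical Python; it does not use the Optimizer API) fits the passive parameters of a single-compartment RC membrane to a noisy target voltage trace by least squares.

        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0.0, 100.0, 501)           # time in ms
        I = 0.1                                    # injected current step, nA

        def membrane_response(params, t):
            """Voltage response of a passive RC compartment to a current step."""
            R, tau, E = params                     # MOhm, ms, mV
            return E + R * I * (1.0 - np.exp(-t / tau))

        # Target trace: "experimental" data simulated from known parameters plus noise.
        rng = np.random.default_rng(2)
        target = membrane_response([150.0, 20.0, -65.0], t) + rng.normal(0.0, 0.2, t.size)

        residuals = lambda p: membrane_response(p, t) - target
        fit = least_squares(residuals, x0=[100.0, 10.0, -60.0])
        print("fitted R, tau, E:", fit.x.round(2))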

  3. A flexible, interactive software tool for fitting the parameters of neuronal models

    Directory of Open Access Journals (Sweden)

    Péter eFriedrich

    2014-07-01

    Full Text Available The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problem of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting

  4. Three dimensional fuzzy influence analysis of fitting algorithms on integrated chip topographic modeling

    International Nuclear Information System (INIS)

    Liang, Zhong Wei; Wang, Yi Jun; Ye, Bang Yan; Brauwer, Richard Kars

    2012-01-01

    In inspecting the detailed performance results of surface precision modeling in different external parameter conditions, the integrated chip surfaces should be evaluated and assessed during topographic spatial modeling processes. The application of surface fitting algorithms exerts a considerable influence on topographic mathematical features. The influence mechanisms caused by different surface fitting algorithms on the integrated chip surface facilitate the quantitative analysis of different external parameter conditions. By extracting the coordinate information from the selected physical control points and using a set of precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used for constructing micro topographic models with the obtained point cloud. In computing for the newly proposed mathematical features on surface models, we construct the fuzzy evaluating data sequence and present a new three dimensional fuzzy quantitative evaluating method. Through this method, the value variation tendencies of topographic features can be clearly quantified. The fuzzy influence discipline among different surface fitting algorithms, topography spatial features, and the external science parameter conditions can be analyzed quantitatively and in detail. In addition, quantitative analysis can provide final conclusions on the inherent influence mechanism and internal mathematical relation in the performance results of different surface fitting algorithms, topographic spatial features, and their scientific parameter conditions in the case of surface micro modeling. The performance inspection of surface precision modeling will be facilitated and optimized as a new research idea for micro-surface reconstruction that will be monitored in a modeling process

  5. Three dimensional fuzzy influence analysis of fitting algorithms on integrated chip topographic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Zhong Wei; Wang, Yi Jun [Guangzhou Univ., Guangzhou (China); Ye, Bang Yan [South China Univ. of Technology, Guangzhou (China); Brauwer, Richard Kars [Indian Institute of Technology, Kanpur (India)

    2012-10-15

    In inspecting the detailed performance results of surface precision modeling in different external parameter conditions, the integrated chip surfaces should be evaluated and assessed during topographic spatial modeling processes. The application of surface fitting algorithms exerts a considerable influence on topographic mathematical features. The influence mechanisms caused by different surface fitting algorithms on the integrated chip surface facilitate the quantitative analysis of different external parameter conditions. By extracting the coordinate information from the selected physical control points and using a set of precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used for constructing micro topographic models with the obtained point cloud. In computing for the newly proposed mathematical features on surface models, we construct the fuzzy evaluating data sequence and present a new three dimensional fuzzy quantitative evaluating method. Through this method, the value variation tendencies of topographic features can be clearly quantified. The fuzzy influence discipline among different surface fitting algorithms, topography spatial features, and the external science parameter conditions can be analyzed quantitatively and in detail. In addition, quantitative analysis can provide final conclusions on the inherent influence mechanism and internal mathematical relation in the performance results of different surface fitting algorithms, topographic spatial features, and their scientific parameter conditions in the case of surface micro modeling. The performance inspection of surface precision modeling will be facilitated and optimized as a new research idea for micro-surface reconstruction that will be monitored in a modeling process.

  6. Knowledge in Action: Fitness Lesson Segments That Teach Health-Related Fitness in Elementary Physical Education

    Science.gov (United States)

    Hodges, Michael G.; Kulinna, Pamela Hodges; van der Mars, Hans; Lee, Chong

    2016-01-01

    The purpose of this study was to determine students' health-related fitness knowledge (HRFK) and physical activity levels after the implementation of a series of fitness lessons segments called Knowledge in Action (KIA). KIA aims to teach health-related fitness knowledge (HRFK) during short episodes of the physical education lesson. Teacher…

  7. Physical characteristics related to bra fit.

    Science.gov (United States)

    Chen, Chin-Man; LaBat, Karen; Bye, Elizabeth

    2010-04-01

    Producing well-fitting garments has been a challenge for retailers and manufacturers since mass production began. Poorly fitted bras can cause discomfort or pain and result in lost sales for retailers. Because body contours are important factors affecting bra fit, this study analyses the relationship of physical characteristics to bra-fit problems. This study has used 3-D body-scanning technology to extract upper body angles from a sample of 103 college women; these data were used to categorise physical characteristics into shoulder slope, bust prominence, back curvature and acromion placement. Relationships between these physical categories and bra-fit problems were then analysed. Results show that significant main effects and two-way interactions of the physical categories exist in the fit problems of poor bra support and bra-motion restriction. The findings are valuable in helping the apparel industry create better-fitting bras. STATEMENT OF RELEVANCE: Poorly fitted bras can cause discomfort or pain and result in lost sales for retailers. The findings regarding body-shape classification provide researchers with a statistics method to quantify physical characteristics and the findings regarding the relationship analysis between physical characteristics and bra fit offer bra companies valuable information about bra-fit perceptions attributable to women with figure variations.

  8. Improving health-related fitness in adolescents: the CrossFit Teens™ randomised controlled trial.

    Science.gov (United States)

    Eather, Narelle; Morgan, Philip James; Lubans, David Revalds

    2016-01-01

    The aim of this study was to evaluate the preliminary efficacy and feasibility of the CrossFit Teens™ resistance training programme for improving health-related fitness and resistance training skill competency in adolescents. This assessor-blinded randomised controlled trial was conducted in one secondary school in the Hunter Region, Australia, from July to September 2013. Ninety-six (96) students (age = 15.4 (.5) years, 51.5% female) were randomised into intervention (n = 51) or control (n = 45) conditions for 8-weeks (60 min twice per week). Waist circumference, body mass index (BMI), BMI-Z score (primary outcomes), cardiorespiratory fitness (shuttle run test), muscular fitness (standing jump, push-up, handgrip, curl-up test), flexibility (sit and reach) and resistance training skill competency were measured at baseline and immediate post-intervention. Feasibility measures of recruitment, retention, adherence and satisfaction were assessed. Significant group-by-time intervention effects were found for waist circumference [-3.1 cm, P CrossFit Teens™ is a feasible and efficacious programme for improving health-related fitness in adolescents.

  9. Applicability of Zero-Inflated Models to Fit the Torrential Rainfall Count Data with Extra Zeros in South Korea

    Directory of Open Access Journals (Sweden)

    Cheol-Eung Lee

    2017-02-01

    Full Text Available Several natural disasters occur because of torrential rainfalls. The change in global climate most likely increases the occurrences of such downpours. Hence, it is necessary to investigate the characteristics of the torrential rainfall events in order to introduce effective measures for mitigating disasters such as urban floods and landslides. However, one of the major problems is evaluating the number of torrential rainfall events from a statistical viewpoint. If the number of torrential rainfall occurrences during a month is considered as count data, their frequency distribution could be identified using a probability distribution. Generally, the number of torrential rainfall occurrences has been analyzed using the Poisson distribution (POI) or the Generalized Poisson Distribution (GPD). However, it was reported that POI and GPD often overestimated or underestimated the observed count data when additional or fewer zeros were included. Hence, in this study, a zero-inflated model concept was applied to solve this problem existing in the conventional models. The Zero-Inflated Poisson (ZIP) model, Zero-Inflated Generalized Poisson (ZIGP) model, and the Bayesian ZIGP model have often been applied to fit the count data having additional or fewer zeros. However, the applications of these models in water resource management have been very limited despite their efficiency and accuracy. The five models, namely, POI, GPD, ZIP, ZIGP, and Bayesian ZIGP, were applied to the torrential rainfall data having additional zeros obtained from two rain gauges in South Korea, and their applicability was examined in this study. In particular, the informative prior distributions evaluated via the empirical Bayes method using ten rain gauges were developed in the Bayesian ZIGP model. Finally, it was suggested to avoid using the POI and GPD models to fit the frequency of torrential rainfall data. In addition, it was concluded that the Bayesian ZIGP model used in this study
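
    To make the zero-inflation idea concrete, the sketch below (a hypothetical Python example with simulated monthly counts, not the Korean rain gauge data) writes down the Zero-Inflated Poisson log-likelihood, with mixing probability pi for the extra zeros and Poisson mean lambda, and maximises it with a general-purpose optimiser.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import poisson

        def zip_neg_loglik(params, counts):
            """Negative log-likelihood of the Zero-Inflated Poisson model."""
            pi, lam = params
            if not (0.0 < pi < 1.0) or lam <= 0.0:
                return np.inf
            zero = counts == 0
            ll_zero = np.log(pi + (1.0 - pi) * np.exp(-lam)) * zero.sum()
            ll_pos = (np.log(1.0 - pi) + poisson.logpmf(counts[~zero], lam)).sum()
            return -(ll_zero + ll_pos)

        # Simulated monthly numbers of torrential rainfall events with extra zeros.
        rng = np.random.default_rng(3)
        counts = rng.poisson(2.5, size=240)
        counts[rng.random(240) < 0.3] = 0          # structural zeros from the inflation part

        fit = minimize(zip_neg_loglik, x0=[0.2, 1.0], args=(counts,), method="Nelder-Mead")
        print("pi, lambda =", fit.x.round(3))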

  10. Modeling patterns in data using linear and related models

    International Nuclear Information System (INIS)

    Engelhardt, M.E.

    1996-06-01

    This report considers the use of linear models for analyzing data related to reliability and safety issues of the type usually associated with nuclear power plants. The report discusses some of the general results of linear regression analysis, such as the model assumptions and properties of the estimators of the parameters. The results are motivated with examples of operational data. Results about the important case of a linear regression model with one covariate are covered in detail. This case includes analysis of time trends. The analysis is applied with two different sets of time trend data. Diagnostic procedures and tests for the adequacy of the model are discussed. Some related methods such as weighted regression and nonlinear models are also considered. A discussion of the general linear model is also included. Appendix A gives some basic SAS programs and outputs for some of the analyses discussed in the body of the report. Appendix B is a review of some of the matrix theoretic results which are useful in the development of linear models
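
    The single-covariate time-trend analysis mentioned above is ordinary least squares with time as the regressor. The sketch below (hypothetical Python with invented yearly failure counts, not the operational data discussed in the report) fits such a trend and reports the slope with its confidence interval, which is the quantity of interest when asking whether reliability is improving or degrading over time.

        import numpy as np
        import statsmodels.api as sm

        # Hypothetical yearly counts of component failures at a plant.
        years = np.arange(1980, 1996)
        failures = np.array([12, 11, 13, 9, 10, 8, 9, 7, 8, 6, 7, 5, 6, 4, 5, 3])

        X = sm.add_constant(years - years[0])       # single covariate: elapsed time
        model = sm.OLS(failures, X).fit()
        print(model.params)                         # intercept and yearly trend
        print(model.conf_int())                     # confidence intervals, trend in second row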

  11. Tweedie distributions for fitting semicontinuous health care utilization cost data

    Directory of Open Access Journals (Sweden)

    Christoph F. Kurz

    2017-12-01

    Full Text Available Abstract Background The statistical analysis of health care cost data is often problematic because these data are usually non-negative, right-skewed and have excess zeros for non-users. This prevents the use of linear models based on the Gaussian or Gamma distribution. A common way to counter this is the use of Two-part or Tobit models, which makes interpretation of the results more difficult. In this study, I explore a statistical distribution from the Tweedie family of distributions that can simultaneously model the probability of zero outcome, i.e. of being a non-user of health care utilization and continuous costs for users. Methods I assess the usefulness of the Tweedie model in a Monte Carlo simulation study that addresses two common situations of low and high correlation of the users and the non-users of health care utilization. Furthermore, I compare the Tweedie model with several other models using a real data set from the RAND health insurance experiment. Results I show that the Tweedie distribution fits cost data very well and provides better fit, especially when the number of non-users is low and the correlation between users and non-users is high. Conclusion The Tweedie distribution provides an interesting solution to many statistical problems in health economic analyses.
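
    A Tweedie model with power parameter between 1 and 2 (the compound Poisson-Gamma case) places positive probability mass at zero while modelling continuous positive costs, which is exactly the semicontinuous situation described above. The sketch below (hypothetical Python with simulated cost data, not the RAND data analysed in the paper) fits such a model with statsmodels, assuming its GLM interface and Tweedie family behave as documented.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 1000
        age = rng.uniform(20, 80, n)

        # Hypothetical semicontinuous cost data: many exact zeros, skewed positive costs.
        is_user = rng.random(n) < 0.4 + 0.004 * (age - 20)
        costs = np.where(is_user, rng.gamma(2.0, 50.0 + 3.0 * age), 0.0)

        X = sm.add_constant(age)
        tweedie = sm.families.Tweedie(var_power=1.5)      # 1 < p < 2: compound Poisson-Gamma
        fit = sm.GLM(costs, X, family=tweedie).fit()
        print(fit.summary().tables[1])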

  12. Least Squares Data Fitting with Applications

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Pereyra, Víctor; Scherer, Godela

    As one of the classical statistical regression techniques, and often the first to be taught to new students, least squares fitting can be a very effective tool in data analysis. Given measured data, we establish a relationship between independent and dependent variables so that we can use the data....... In a number of applications, the accuracy and efficiency of the least squares fit is central, and Per Christian Hansen, Víctor Pereyra, and Godela Scherer survey modern computational methods and illustrate them in fields ranging from engineering and environmental sciences to geophysics. Anyone working...... with problems of linear and nonlinear least squares fitting will find this book invaluable as a hands-on guide, with accessible text and carefully explained problems. Included are • an overview of computational methods together with their properties and advantages • topics from statistical regression analysis...

  13. Milgrom Relation Models for Spiral Galaxies from Two-Dimensional Velocity Maps

    OpenAIRE

    Barnes, Eric I.; Kosowsky, Arthur; Sellwood, Jerry A.

    2007-01-01

    Using two-dimensional velocity maps and I-band photometry, we have created mass models of 40 spiral galaxies using the Milgrom relation (the basis of modified Newtonian dynamics, or MOND) to complement previous work. A Bayesian technique is employed to compare several different dark matter halo models to Milgrom and Newtonian models. Pseudo-isothermal dark matter halos provide the best statistical fits to the data in a majority of cases, while the Milgrom relation generally provides good fits...

  14. An intervention program to promote health-related physical fitness in nurses.

    Science.gov (United States)

    Yuan, Su-Chuan; Chou, Ming-Chih; Hwu, Lien-Jen; Chang, Yin-O; Hsu, Wen-Hsin; Kuo, Hsien-Wen

    2009-05-01

    To assess the effects of exercise intervention on nurses' health-related physical fitness. Regular exercise that includes gymnastics or aerobics has a positive effect on fitness. In Taiwan, there are few data assessing the effects of exercise intervention on nurses' health-related physical fitness. Many studies have reported the high incidence of musculoskeletal disorders (MSDs) in nurses. However, there has been limited research on intervention programs that are designed to improve the general physical fitness of nurses. A quasi-experimental study was conducted at a medical centre in central Taiwan. Ninety nurses from five different units of a hospital volunteered to participate in this study and were divided into an experimental group and a control group. The experimental group engaged in a three-month intervention program consisting of treadmill exercise. Indicators of the health-related physical fitness of both groups were established and assessed before and after the intervention. Before intervention, the control group had significantly better grasp strength, flexibility and durability of abdominal muscles than the experimental group (p work duration, regular exercise and workload and found that the experimental group performed significantly better (p flexibility, durability of abdominal and back muscles and cardiopulmonary function. This study demonstrates that the development and implementation of an intervention program can promote and improve the health-related physical fitness of nurses. It is suggested that nurses engage in an exercise program while in the workplace to lower the risk of MSDs and to promote working efficiency.

  15. Elevation data fitting and precision analysis of Google Earth in road survey

    Science.gov (United States)

    Wei, Haibin; Luan, Xiaohan; Li, Hanchao; Jia, Jiangkun; Chen, Zhao; Han, Leilei

    2018-05-01

    Objective: In order to improve efficiency of road survey and save manpower and material resources, this paper intends to apply Google Earth to the feasibility study stage of road survey and design. Limited by the problem that Google Earth elevation data lacks precision, this paper is focused on finding several different fitting or difference methods to improve the data precision, in order to make every effort to meet the accuracy requirements of road survey and design specifications. Method: On the basis of elevation difference of limited public points, any elevation difference of the other points can be fitted or interpolated. Thus, the precise elevation can be obtained by subtracting elevation difference from the Google Earth data. Quadratic polynomial surface fitting method, cubic polynomial surface fitting method, V4 interpolation method in MATLAB and neural network method are used in this paper to process elevation data of Google Earth. And internal conformity, external conformity and cross correlation coefficient are used as evaluation indexes to evaluate the data processing effect. Results: There is no fitting difference at the fitting point while using V4 interpolation method. Its external conformity is the largest and the effect of accuracy improvement is the worst, so V4 interpolation method is ruled out. The internal and external conformity of the cubic polynomial surface fitting method both are better than those of the quadratic polynomial surface fitting method. The neural network method has a similar fitting effect with the cubic polynomial surface fitting method, but its fitting effect is better in the case of a higher elevation difference. Because the neural network method is an unmanageable fitting model, the cubic polynomial surface fitting method should be mainly used and the neural network method can be used as the auxiliary method in the case of higher elevation difference. Conclusions: Cubic polynomial surface fitting method can obviously
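
    The quadratic and cubic surface fits mentioned above amount to ordinary least squares on polynomial terms in the horizontal coordinates. The sketch below (hypothetical Python with made-up control points, not the paper's data) fits a cubic polynomial surface to the elevation differences at known control points and then corrects the Google Earth elevation at a new point, which is the workflow the abstract describes.

        import numpy as np

        def cubic_design_matrix(x, y):
            """All monomials x**i * y**j with i + j <= 3 (10 terms)."""
            return np.column_stack([x**i * y**j
                                    for i in range(4) for j in range(4) if i + j <= 3])

        # Hypothetical control points: plane coordinates and the elevation difference
        # (Google Earth elevation minus surveyed elevation), in metres.
        rng = np.random.default_rng(5)
        x, y = rng.uniform(0, 1000, 50), rng.uniform(0, 1000, 50)
        dz = 2.0 + 0.003 * x - 0.002 * y + 1e-6 * x * y + rng.normal(0, 0.3, 50)

        coeffs, *_ = np.linalg.lstsq(cubic_design_matrix(x, y), dz, rcond=None)

        # Precise elevation at a new point = Google Earth elevation minus fitted difference.
        xn, yn = 420.0, 615.0
        ge_elevation = 87.4                                   # hypothetical Google Earth value
        dz_fit = cubic_design_matrix(np.array([xn]), np.array([yn])) @ coeffs
        print(float(ge_elevation - dz_fit))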

  16. Spline function fit for multi-sets of correlative data

    International Nuclear Information System (INIS)

    Liu Tingjin; Zhou Hongmo

    1992-01-01

    The Spline fit method for multi-sets of correlative data is developed. The properties of correlative data fit are investigated. The data of the $^{23}$Na(n,2n) cross section are fitted in the cases with and without correlation
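
    For readers unfamiliar with spline fitting of measured cross sections, the sketch below (hypothetical Python with made-up data; it ignores the correlation treatment that is the main point of the paper) shows a weighted smoothing-spline fit of a single data set, which is the uncorrelated baseline the method extends.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        # Hypothetical cross-section measurements: energy (MeV), value (mb), uncertainty (mb).
        E = np.array([13.0, 13.5, 14.0, 14.5, 15.0, 15.5, 16.0, 17.0, 18.0, 19.0])
        sigma = np.array([18.0, 25.0, 33.0, 40.0, 46.0, 52.0, 57.0, 66.0, 72.0, 76.0])
        err = np.full_like(sigma, 3.0)

        # Weighted cubic smoothing spline; weights are inverse uncertainties.
        spline = UnivariateSpline(E, sigma, w=1.0 / err, k=3, s=len(E))
        print(spline(np.array([14.1, 15.8, 17.5])).round(1))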

  17. ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction

    International Nuclear Information System (INIS)

    Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.

    2015-01-01

    An appropriate maintenance policy is essential to reduce expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is the ability to predict the reliability of the systems under study, based on a well-fitted model. In this paper, the classes of models Arithmetic Reduction of Age and Arithmetic Reduction of Intensity are explored. Likelihood functions for such models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed considering models with different memories. Parameters, namely the shape and scale of the Power Law Process and the efficiency of repair, were estimated for the best-fitted model. Estimation of the model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results are valuable information for the mining company and can be used to support decision making regarding preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks
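
    As a point of reference for the models above, the Power Law Process that underlies them (the minimal-repair special case, with no age or intensity reduction) has closed-form maximum-likelihood estimates for a system observed up to a fixed time T. The sketch below (hypothetical Python with made-up failure times, not the mining trucks data) computes those estimates; the full ARA/ARI models add a repair-efficiency parameter and require numerical maximisation of the likelihood.

        import numpy as np

        # Hypothetical cumulative failure times (hours) of one truck, observed up to T.
        failure_times = np.array([210.0, 480.0, 810.0, 1100.0, 1450.0, 1720.0, 2050.0])
        T = 2400.0                                   # end of observation window
        n = len(failure_times)

        # MLEs for the Power Law Process with intensity lambda(t) = (beta/theta)*(t/theta)**(beta-1),
        # time-truncated case.
        beta_hat = n / np.sum(np.log(T / failure_times))
        theta_hat = T / n ** (1.0 / beta_hat)

        # Expected number of failures in the next 500 hours under the fitted process.
        expected_next = ((T + 500.0) / theta_hat) ** beta_hat - (T / theta_hat) ** beta_hat
        print(round(beta_hat, 3), round(theta_hat, 1), round(expected_next, 2))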

  18. Health-Related Fitness Knowledge Development through Project-Based Learning

    Science.gov (United States)

    Hastle, Peter A.; Chen, Senlin; Guarino, Anthony J.

    2017-01-01

    Purpose: The purpose of this study was to examine the process and outcome of an intervention using the project-based learning (PBL) model to increase students' health-related fitness (HRF) knowledge. Method: The participants were 185 fifth-grade students from three schools in Alabama (PBL group: n = 109; control group: n = 76). HRF knowledge was…

  19. A versatile curve-fit model for linear to deeply concave rank abundance curves

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

    A new, flexible curve-fit model for linear to concave rank abundance curves was conceptualized and validated using observational data. The model links the geometric-series model and log-series model and can also fit deeply concave rank abundance curves. The model is based – in an unconventional way

  20. The transtheoretical model and strategies of European fitness professionals to support clients in changing health-related behaviour: A survey study

    NARCIS (Netherlands)

    Middelkamp, P.J.C.; Wolfhagen, P.; Steenbergen, B.

    2015-01-01

    Introduction: The transtheoretical model of behaviour change (TTM) is often used to understand and predict changes in health related behaviour, for example exercise behaviour and eating behaviour. Fitness professionals like personal trainers typically service and support clients in improving

  1. Kernel-density estimation and approximate Bayesian computation for flexible epidemiological model fitting in Python.

    Science.gov (United States)

    Irvine, Michael A; Hollingsworth, T Déirdre

    2018-05-26

    Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open access library, with examples to aid researchers to rapidly fit models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
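
    The core idea of approximate Bayesian computation, accepting a simulated parameter whenever the simulated data fall within a tolerance of the observed data under some distance metric, can be shown in a few lines. The sketch below (hypothetical Python; a fixed-tolerance rejection sampler rather than the adaptive kernel-density scheme developed in the paper) fits the transmission parameter of a toy stochastic epidemic from its observed final size.

        import numpy as np

        rng = np.random.default_rng(6)

        def simulate_final_size(beta, n_pop=100, n_days=30):
            """Toy stochastic epidemic: daily new infections drawn from a Poisson process."""
            infected = 1
            for _ in range(n_days):
                new = rng.poisson(beta * infected * (n_pop - infected) / n_pop)
                infected = min(n_pop, infected + new)
            return infected

        observed = 63                                    # hypothetical observed final size

        # ABC rejection: sample beta from the prior, keep draws whose simulation is close.
        prior_draws = rng.uniform(0.0, 1.0, 20000)
        distances = np.array([abs(simulate_final_size(b) - observed) for b in prior_draws])
        posterior = prior_draws[distances <= 3]          # tolerance of 3 cases

        print(len(posterior), posterior.mean().round(3),
              np.percentile(posterior, [2.5, 97.5]).round(3))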

  2. Identifying the Best-Fitting Factor Structure of the Experience of Close Relations

    DEFF Research Database (Denmark)

    Esbjørn, Barbara Hoff; Breinholst, Sonja; Niclasen, Janni

    2015-01-01

    . The present study used a Danish sample with the purpose of addressing limitations in previous studies, such as the lack of diversity in cultural back- ground, restricted sample characteristics, and poorly fitting structure models. Participants consisted of 253 parents of children between the ages of 7 and 12...... years, 53% being moth- ers. The parents completed the paper version of the questionnaire. Confirmatory Factor Analyses were carried out to determine whether theoretically and empirically established models including one and two factors would also provide adequate fits in a Danish sample. A previous...... study using the original ECR suggested that Scandinavian samples may best be described using a five-factor solution. Our results indicated that the one- and two-factor models of the ECR-R did not fit the data well. Exploratory Factor Analysis revealed a five- factor model. Our study provides evidence...

  3. When the model fits the frame: the impact of regulatory fit on efficacy appraisal and persuasion in health communication.

    Science.gov (United States)

    Bosone, Lucia; Martinez, Frédéric; Kalampalikis, Nikos

    2015-04-01

    In health-promotional campaigns, positive and negative role models can be deployed to illustrate the benefits or costs of certain behaviors. The main purpose of this article is to investigate why, how, and when exposure to role models strengthens the persuasiveness of a message, according to regulatory fit theory. We argue that exposure to a positive versus a negative model activates individuals' goals toward promotion rather than prevention. By means of two experiments, we demonstrate that high levels of persuasion occur when a message advertising healthy dietary habits offers a regulatory fit between its framing and the described role model. Our data also establish that the effects of such internal regulatory fit by vicarious experience depend on individuals' perceptions of response-efficacy and self-efficacy. Our findings constitute a significant theoretical complement to previous research on regulatory fit and contain valuable practical implications for health-promotional campaigns. © 2015 by the Society for Personality and Social Psychology, Inc.

  4. A Rigorous Test of the Fit of the Circumplex Model to Big Five Personality Data: Theoretical and Methodological Issues and Two Large Sample Empirical Tests.

    Science.gov (United States)

    DeGeest, David Scott; Schmidt, Frank

    2015-01-01

    Our objective was to apply the rigorous test developed by Browne (1992) to determine whether the circumplex model fits Big Five personality data. This test has yet to be applied to personality data. Another objective was to determine whether blended items explained correlations among the Big Five traits. We used two working adult samples, the Eugene-Springfield Community Sample and the Professional Worker Career Experience Survey. Fit to the circumplex was tested via Browne's (1992) procedure. Circumplexes were graphed to identify items with loadings on multiple traits (blended items), and to determine whether removing these items changed five-factor model (FFM) trait intercorrelations. In both samples, the circumplex structure fit the FFM traits well. Each sample had items with dual-factor loadings (8 items in the first sample, 21 in the second). Removing blended items had little effect on construct-level intercorrelations among FFM traits. We conclude that rigorous tests show that the fit of personality data to the circumplex model is good. This finding means the circumplex model is competitive with the factor model in understanding the organization of personality traits. The circumplex structure also provides a theoretically and empirically sound rationale for evaluating intercorrelations among FFM traits. Even after eliminating blended items, FFM personality traits remained correlated.

  5. Genome-Enabled Modeling of Biogeochemical Processes Predicts Metabolic Dependencies that Connect the Relative Fitness of Microbial Functional Guilds

    Science.gov (United States)

    Brodie, E.; King, E.; Molins, S.; Karaoz, U.; Steefel, C. I.; Banfield, J. F.; Beller, H. R.; Anantharaman, K.; Ligocki, T. J.; Trebotich, D.

    2015-12-01

    Pore-scale processes mediated by microorganisms underlie a range of critical ecosystem services, regulating carbon stability, nutrient flux, and the purification of water. Advances in cultivation-independent approaches now provide us with the ability to reconstruct thousands of genomes from microbial populations from which functional roles may be assigned. With this capability to reveal microbial metabolic potential, the next step is to put these microbes back where they belong to interact with their natural environment, i.e. the pore scale. At this scale, microorganisms communicate, cooperate and compete across their fitness landscapes with communities emerging that feedback on the physical and chemical properties of their environment, ultimately altering the fitness landscape and selecting for new microbial communities with new properties and so on. We have developed a trait-based model of microbial activity that simulates coupled functional guilds that are parameterized with unique combinations of traits that govern fitness under dynamic conditions. Using a reactive transport framework, we simulate the thermodynamics of coupled electron donor-acceptor reactions to predict energy available for cellular maintenance, respiration, biomass development, and enzyme production. From metagenomics, we directly estimate some trait values related to growth and identify the linkage of key traits associated with respiration and fermentation, macromolecule depolymerizing enzymes, and other key functions such as nitrogen fixation. Our simulations were carried out to explore abiotic controls on community emergence such as seasonally fluctuating water table regimes across floodplain organic matter hotspots. Simulations and metagenomic/metatranscriptomic observations highlighted the many dependencies connecting the relative fitness of functional guilds and the importance of chemolithoautotrophic lifestyles. Using an X-Ray microCT-derived soil microaggregate physical model combined

  6. Standard error propagation in R-matrix model fitting for light elements

    International Nuclear Information System (INIS)

    Chen Zhenpeng; Zhang Rui; Sun Yeying; Liu Tingjin

    2003-01-01

    The error propagation features of R-matrix model fitting of the $^7$Li, $^{11}$B and $^{17}$O systems were researched systematically. Some laws of error propagation were revealed, an empirical formula $P_j = U_j^c / U_j^d = K_j \cdot \bar{S} \cdot \sqrt{m}/\sqrt{N}$ for describing standard error propagation was established, and the most likely error ranges for the standard cross sections of $^6$Li(n,t), $^{10}$B(n,$\alpha_0$) and $^{10}$B(n,$\alpha_1$) were estimated. The problem that the standard errors of light nuclei standard cross sections may be too small results mainly from the R-matrix model fitting, which is not perfect. Yet R-matrix model fitting is the most reliable evaluation method for such data. The error propagation features of R-matrix model fitting for the compound nucleus systems of $^7$Li, $^{11}$B and $^{17}$O have been studied systematically, some laws of error propagation are revealed, and these findings are important in solving the problem mentioned above. Furthermore, these conclusions are applicable to similar model fitting in other scientific fields. (author)

  7. The lz(p)* Person-Fit Statistic in an Unfolding Model Context.

    Science.gov (United States)

    Tendeiro, Jorge N

    2017-01-01

    Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded unfolding model is used. Results from a simulation study indicate that the person-fit statistic performed relatively well in detecting midpoint response style patterns and not so well in detecting extreme response style patterns.

  8. Wavelet transform approach for fitting financial time series data

    Science.gov (United States)

    Ahmed, Amel Abdoullah; Ismail, Mohd Tahir

    2015-10-01

    This study investigates a newly developed technique, a combined wavelet filtering and VEC model, to study the dynamic relationships among financial time series. A wavelet filter is used to remove noise from the daily data sets of the NASDAQ stock market of the US and of three stock markets of the Middle East and North Africa (MENA) region, namely Egypt, Jordan, and Istanbul. The data cover the period from 6/29/2001 to 5/5/2009. The returns of the wavelet-filtered series and of the original series are then analyzed with a cointegration test and a VEC model. The results show that the cointegration test affirms the existence of cointegration between the studied series, and that there is a long-term relationship between the US stock market and the MENA stock markets. A comparison between the proposed model and the traditional model demonstrates that the proposed model (DWT with VEC model) outperforms the traditional model (VEC model) in fitting the financial stock market series, and reveals more meaningful information about the relationships among the stock markets.
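
    The noise-removal step described above is typically a discrete wavelet decomposition, thresholding of the detail coefficients, and reconstruction. The sketch below (hypothetical Python using PyWavelets and simulated returns, not the study's data, and omitting the subsequent cointegration/VEC analysis) shows that filtering step, assuming the standard pywt wavedec/threshold/waverec interface.

        import numpy as np
        import pywt

        rng = np.random.default_rng(7)
        returns = 0.0005 + 0.01 * rng.standard_normal(1024)      # simulated daily returns

        # Discrete wavelet decomposition of the return series.
        coeffs = pywt.wavedec(returns, "db4", level=4)

        # Soft-threshold the detail coefficients (universal threshold from the finest level).
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thresh = sigma * np.sqrt(2.0 * np.log(len(returns)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]

        denoised = pywt.waverec(coeffs, "db4")[: len(returns)]
        print(np.std(returns).round(5), np.std(denoised).round(5))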

  9. Multivariable modeling of pressure vessel and piping J-R data

    International Nuclear Information System (INIS)

    Eason, E.D.; Wright, J.E.; Nelson, E.E.

    1991-05-01

    Multivariable models were developed for predicting J-R curves from available data, such as material chemistry, radiation exposure, temperature, and Charpy V-notch energy. The present work involved collection of public test data, application of advanced pattern recognition tools, and calibration of improved multivariable models. Separate models were fitted for different material groups, including RPV welds, Linde 80 welds, RPV base metals, piping welds, piping base metals, and the combined database. Three different types of models were developed, involving different combinations of variables that might be available for applications: a Charpy model, a preirradiation Charpy model, and a copper-fluence model. In general, the best results were obtained with the preirradiation Charpy model. The copper-fluence model is recommended only if Charpy data are unavailable, and then only for Linde 80 welds. Relatively good fits were obtained, capable of predicting the values of J for pressure vessel steels to within a standard deviation of 13-18% over the range of test data. The models were qualified for predictive purposes by demonstrating their ability to predict validation data not used for fitting. 20 refs., 45 figs., 16 tabs

  10. ROC generated thresholds for field-assessed aerobic fitness related to body size and cardiometabolic risk in schoolchildren.

    Directory of Open Access Journals (Sweden)

    Lynne M Boddy

    Full Text Available OBJECTIVES: 1. To investigate whether 20 m multi-stage shuttle run performance (20mSRT), an indirect measure of aerobic fitness, could discriminate between healthy and overweight status in 9-10.9 yr old schoolchildren using Receiver Operating Characteristic (ROC) analysis; 2. To investigate whether cardiometabolic risk differed by aerobic fitness group by applying the ROC cut point to a second, cross-sectional cohort. DESIGN: Analysis of cross-sectional data. PARTICIPANTS: 16,619 9-10.9 year old participants from the SportsLinx project and 300 11-13.9 year old participants from the Welsh Schools Health and Fitness Study. OUTCOME MEASURES: SportsLinx: 20mSRT, body mass index (BMI), waist circumference, subscapular and suprailiac skinfold thicknesses. Welsh Schools Health and Fitness Study: 20mSRT performance, waist circumference, and clustered cardiometabolic risk. ANALYSES: Three ROC curve analyses were completed, each using 20mSRT performance, with ROC curve 1 related to BMI, curve 2 related to waist circumference and curve 3 related to skinfolds (estimated % body fat). These were repeated for both girls and boys. The mean of the three aerobic fitness thresholds was retained for analysis. The thresholds were subsequently applied to clustered cardiometabolic risk data from the Welsh Schools study to assess whether risk differed by aerobic fitness group. RESULTS: The diagnostic accuracy of the ROC generated thresholds was higher than would be expected by chance (all models AUC >0.7). The mean thresholds were 33 and 25 shuttles for boys and girls respectively. Participants classified as 'fit' had significantly lower cardiometabolic risk scores in comparison to those classed as unfit (p<0.001). CONCLUSION: The use of the ROC generated cut points by health professionals, teachers and coaches may provide the opportunity to apply population level 'risk identification and stratification' processes and plan for "at-risk" children to be referred onto intervention
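
    The threshold-finding step in the analysis above can be illustrated with standard tools. The sketch below (hypothetical Python with simulated data; the study's actual thresholds were 33 and 25 shuttles for boys and girls) computes a ROC curve for shuttle-run performance against overweight status and picks the cut point that maximises the Youden index (sensitivity + specificity - 1), one common way of selecting a cut point from a ROC curve.

        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        rng = np.random.default_rng(8)
        n = 2000

        # Simulated shuttle-run scores; overweight children tend to complete fewer shuttles.
        overweight = rng.random(n) < 0.25
        shuttles = np.where(overweight, rng.normal(22, 8, n), rng.normal(35, 10, n)).clip(1)

        # Lower performance should indicate overweight, so score by negated shuttle count.
        fpr, tpr, thresholds = roc_curve(overweight, -shuttles)
        youden = tpr - fpr
        cut = -thresholds[np.argmax(youden)]

        print("AUC:", round(roc_auc_score(overweight, -shuttles), 3))
        print("shuttle-run cut point:", round(float(cut), 1))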

  11. Twitter classification model: the ABC of two million fitness tweets.

    Science.gov (United States)

    Vickey, Theodore A; Ginis, Kathleen Martin; Dabrowski, Maciej

    2013-09-01

    The purpose of this project was to design and test data collection and management tools that can be used to study the use of mobile fitness applications and social networking within the context of physical activity. This project was conducted over a 6-month period and involved collecting publicly shared Twitter data from five mobile fitness apps (Nike+, RunKeeper, MyFitnessPal, Endomondo, and dailymile). During that time, over 2.8 million tweets were collected, processed, and categorized using an online tweet collection application and a customized JavaScript. Using grounded theory, a classification model was developed to categorize and understand the types of information being shared by application users. Our data show that by tracking mobile fitness app hashtags, a wealth of information can be gathered, including but not limited to daily use patterns, exercise frequency, location-based workouts, and overall workout sentiment.

  12. Exploring the relations among physical fitness, executive functioning, and low academic achievement.

    Science.gov (United States)

    de Bruijn, A G M; Hartman, E; Kostons, D; Visscher, C; Bosker, R J

    2018-03-01

    Physical fitness seems to be related to academic performance, at least when taking the role of executive functioning into account. This assumption is highly relevant for the vulnerable population of low academic achievers because their academic performance might benefit from enhanced physical fitness. The current study examined whether physical fitness and executive functioning are independent predictors of low mathematics and spelling achievement or whether the relation between physical fitness and low achievement is mediated by specific executive functions. In total, 477 students from second- and third-grade classes of 12 primary schools were classified as either low or average-to-high achievers in mathematics and spelling based on their scores on standardized achievement tests. Multilevel structural equation models were built with direct paths between physical fitness and academic achievement and added indirect paths via components of executive functioning: inhibition, verbal working memory, visuospatial working memory, and shifting. Physical fitness was only indirectly related to low achievement via specific executive functions, depending on the academic domain involved. Verbal working memory was a mediator between physical fitness and low achievement in both domains, whereas visuospatial working memory had a mediating role only in mathematics. Physical fitness interventions aiming to improve low academic achievement, thus, could potentially be successful. The mediating effect of executive functioning suggests that these improvements in academic achievement will be preceded by enhanced executive functions, either verbal working memory (in spelling) or both verbal and visuospatial working memory (in mathematics). Copyright © 2017 Elsevier Inc. All rights reserved.

  13. FITTING OF THE DATA FOR DIFFUSION COEFFICIENTS IN UNSATURATED POROUS MEDIA

    Energy Technology Data Exchange (ETDEWEB)

    B. Bullard

    1999-05-01

    The purpose of this calculation is to evaluate diffusion coefficients in unsaturated porous media for use in the TSPA-VA analyses. Using experimental data, regression techniques were used to curve fit the diffusion coefficient in unsaturated porous media as a function of volumetric water content. This calculation substantiates the model fit used in Total System Performance Assessment-1995 An Evaluation of the Potential Yucca Mountain Repository (TSPA-1995), Section 6.5.4.
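
    A minimal sketch of the kind of regression step described above, assuming (purely for illustration) a power-law dependence of the diffusion coefficient on volumetric water content; the functional form, parameter values and data below are hypothetical and are not the TSPA-VA fit.

```python
# Hedged sketch of a regression fit of D(theta); the power-law form and all
# numbers are illustrative stand-ins, not the TSPA-VA model or data.
import numpy as np
from scipy.optimize import curve_fit

def diffusion_model(theta, a, b):
    return a * theta**b

theta = np.array([0.05, 0.10, 0.15, 0.20, 0.30, 0.40])   # volumetric water content
noise = 0.05 * np.random.default_rng(1).normal(size=theta.size)
D_obs = 1.0e-9 * theta**1.8 * (1.0 + noise)              # synthetic diffusion coefficients

popt, pcov = curve_fit(diffusion_model, theta, D_obs, p0=[1e-9, 2.0])
perr = np.sqrt(np.diag(pcov))
print("a = %.3e +/- %.1e, b = %.2f +/- %.2f" % (popt[0], perr[0], popt[1], perr[1]))
```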

  14. FITTING OF THE DATA FOR DIFFUSION COEFFICIENTS IN UNSATURATED POROUS MEDIA

    International Nuclear Information System (INIS)

    B. Bullard

    1999-01-01

    The purpose of this calculation is to evaluate diffusion coefficients in unsaturated porous media for use in the TSPA-VA analyses. Using experimental data, regression techniques were used to curve fit the diffusion coefficient in unsaturated porous media as a function of volumetric water content. This calculation substantiates the model fit used in Total System Performance Assessment-1995 An Evaluation of the Potential Yucca Mountain Repository (TSPA-1995), Section 6.5.4.

  15. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development.

    Science.gov (United States)

    Tøndel, Kristin; Niederer, Steven A; Land, Sander; Smith, Nicolas P

    2014-05-20

    Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input-output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of on

  16. Relative Age Effect in Physical Fitness Among Elementary and Junior High School Students.

    Science.gov (United States)

    Nakata, Hiroki; Akido, Miki; Naruse, Kumi; Fujiwara, Motoko

    2017-10-01

    The present study investigated characteristics of the relative age effect (RAE) among a general sample of Japanese elementary and junior high school students. Japan applies a unique annual age-grouping by birthdates between April 1 and March 31 of the following year for sport and education. Anthropometric and physical fitness data were obtained from 3,610 Japanese students, including height, weight, the 50-m sprint, standing long jump, grip strength, bent-leg sit-ups, sit and reach, side steps, 20-m shuttle run, and ball throw. We examined RAE-related differences in these data using a one-way analysis of variance by comparing students with birthdates in the first (April-September) versus second (October-March of the following year) semesters. We observed a significant RAE for boys aged 7 to 15 years on both anthropometric and fitness data, but a significant RAE for girls was only evident for physical fitness tests among elementary school and not junior high school students. Thus, a significant RAE in anthropometry and physical fitness was evident in a general sample of school children, and there were RAE gender differences among adolescents.

  17. Legal issues relating to the Ontario FIT contract - An update

    International Nuclear Information System (INIS)

    Weizman, Michael

    2011-01-01

    The paper discusses the legal issues related to the Ontario FIT contract, including the FIT waiver agreement, the WTO challenge, the FIT extension, political risk assessment, and issues related to unforeseen events beyond human control (force majeure). The risk of termination of the FIT waiver is omitted for convenience by the OPA, but the timing implications relating to the FIT waiver are included. The binding agreement for the supply of generating equipment is also presented, and whether the term sheet for turbine equipment and the bill of purchase should be understood as binding agreements is questioned. Political risks relate to existing contracts, lawsuit risks and changes to the REA process; a change in government and the implications of a minority government can be added to the political risks. A successful WTO challenge is assumed and the possible implications are discussed, including risk to FIT contracts already issued, changes in domestic content (DC) requirements and in FIT contract pricing, and re-pricing of construction and turbine equipment supply contracts if DC requirements are relaxed.

  18. Introduction to Hierarchical Bayesian Modeling for Ecological Data

    CERN Document Server

    Parent, Eric

    2012-01-01

    Making statistical modeling and inference more accessible to ecologists and related scientists, Introduction to Hierarchical Bayesian Modeling for Ecological Data gives readers a flexible and effective framework to learn about complex ecological processes from various sources of data. It also helps readers get started on building their own statistical models. The text begins with simple models that progressively become more complex and realistic through explanatory covariates and intermediate hidden state variables. When fitting the models to data, the authors gradually present the concepts

  19. Confronting Theoretical Predictions With Experimental Data; Fitting Strategy For Multi-Dimensional Distributions

    Directory of Open Access Journals (Sweden)

    Tomasz Przedziński

    2015-01-01

    Full Text Available After developing a Resonance Chiral Lagrangian (RχL) model to describe hadronic τ lepton decays [18], the model was confronted with experimental data. This was accomplished using a fitting framework developed to take into account the complexity of the model and to ensure numerical stability for the algorithms used in the fitting. Since the model used in the fit contained 15 parameters and only three 1-dimensional distributions were available, we could expect multiple local minima or even whole regions of equal potential to appear. Our methods had to explore the whole parameter space thoroughly and ensure, as well as possible, that the result is a global minimum. This paper focuses on the technical aspects of the fitting strategy used. The first approach was based on the re-weighting algorithm published in [17] and produced results in around two weeks. A later approach, with an improved theoretical model and a simple parallelization algorithm based on the Inter-Process Communication (IPC) methods of the UNIX system, reduced computation time to 2-3 days. Additional approximations were introduced to the model, decreasing the time needed to obtain preliminary results to 8 hours. This allowed the results to be validated more thoroughly, leading to a more robust analysis published in [12].

  20. Covariance fitting of highly-correlated data in lattice QCD

    Science.gov (United States)

    Yoon, Boram; Jang, Yong-Chull; Jung, Chulwoo; Lee, Weonjong

    2013-07-01

    We address a frequently-asked question on the covariance fitting of highly-correlated data such as our B_K data based on the SU(2) staggered chiral perturbation theory. Basically, the essence of the problem is that we do not have a fitting function accurate enough to fit extremely precise data. When eigenvalues of the covariance matrix are small, even a tiny error in the fitting function yields a large chi-square value and spoils the fitting procedure. We have applied a number of prescriptions available in the market, such as the cut-off method, the modified covariance matrix method, and the Bayesian method. We also propose a brand new method, the eigenmode shift (ES) method, which allows a full covariance fitting without modifying the covariance matrix at all. We provide a pedagogical example of data analysis in which the cut-off method manifestly fails in fitting, but the rest work well. In our case of the B_K fitting, the diagonal approximation, the cut-off method, the ES method, and the Bayesian method all work reasonably well in an engineering sense. However, interpreting the meaning of χ² is theoretically and aesthetically easier in the case of the ES method and the Bayesian method. Hence, the ES method can be a useful alternative tool to check the systematic error caused by the covariance fitting procedure.
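
    The following is a minimal sketch of a full-covariance ("correlated") chi-square fit of the kind discussed in this record; the linear model, the synthetic covariance matrix and the data are stand-ins, and none of the prescriptions named above (cut-off, modified covariance, ES, Bayesian) are implemented here.

```python
# Hedged sketch of a correlated chi-square fit with a full covariance matrix.
# Model, covariance and data are synthetic illustrations, not lattice QCD results.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0, 1, 8)
lags = np.abs(np.subtract.outer(np.arange(8), np.arange(8)))
cov = 0.01 * 0.9**lags                            # highly correlated covariance matrix
cov_inv = np.linalg.inv(cov)
rng = np.random.default_rng(2)
y = 1.0 + 0.5 * x + np.linalg.cholesky(cov) @ rng.normal(size=8)

def chi2(params):
    r = y - (params[0] + params[1] * x)           # residual vector
    return r @ cov_inv @ r                        # full-covariance chi-square

fit = minimize(chi2, x0=[0.0, 0.0])
print("best fit:", fit.x, " chi2/dof:", fit.fun / (len(x) - 2))
```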

  1. Fitting neuron models to spike trains

    Directory of Open Access Journals (Sweden)

    Cyrille eRossant

    2011-02-01

    Full Text Available Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
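
    As a hedged illustration (not the Brian fitting toolbox itself), the snippet below computes a simple spike-time coincidence count, the kind of criterion used when fitting spiking models to recorded spike trains; the tolerance delta and the spike times are arbitrary example values.

```python
# Hedged sketch: count how many recorded spikes are matched by model spikes
# within a tolerance window. This is an illustrative criterion, not Brian's code.
import numpy as np

def coincidences(model_spikes, data_spikes, delta=0.004):
    """Count data spikes with a model spike within +/- delta seconds."""
    model_spikes = np.sort(np.asarray(model_spikes))
    hits = 0
    for t in data_spikes:
        i = np.searchsorted(model_spikes, t)
        neighbours = model_spikes[max(i - 1, 0):i + 1]
        if neighbours.size and np.min(np.abs(neighbours - t)) <= delta:
            hits += 1
    return hits

data = [0.012, 0.055, 0.100, 0.180]    # recorded spike times (s)
model = [0.010, 0.058, 0.140, 0.181]   # simulated spike times (s)
print(coincidences(model, data), "of", len(data), "data spikes matched")
```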

  2. The Fitbit Fault Line: Two Proposals to Protect Health and Fitness Data at Work.

    Science.gov (United States)

    Brown, Elizabeth A

    2016-01-01

    Employers are collecting and using their employees' health data, mined from wearable fitness devices and health apps, in new, profitable, and barely regulated ways. The importance of protecting employee health and fitness data will grow exponentially in the future. This is the moment for a robust discussion of how law can better protect employees from the potential misuse of their health data. While scholars have just begun to examine the problem of health data privacy, this Article contributes to the academic literature in three important ways. First, it analyzes the convergence of three trends resulting in an unprecedented growth of health-related data: the Internet of Things, the Quantified Self movement, and the Rise of Health Platforms. Second, it describes the insufficiencies of specific data privacy laws and federal agency actions in the context of protecting employee health data from employer misuse. Finally, it provides two detailed and workable solutions for remedying the current lack of protection of employee health data that will realign employer use with reasonable expectations of health and fitness privacy. The Article proceeds in four Parts. Part I describes the growth of self-monitoring apps, devices, and other sensor-enabled technology that can monitor a wide range of data related to an employee's health and fitness and the relationship of this growth to both the Quantified Self movement and the Internet of Things. Part II explains the increasing use of employee monitoring through a wide range of sensors, including wearable devices, and the potential uses of that health and fitness data. Part III explores the various regulations and agency actions that might protect employees from the potential misuse of their health and fitness data and the shortcomings of each. Part IV proposes two specific measures that would help ameliorate the ineffective legal protections that currently exist in this context. In order to improve employee notice of and control

  3. Fitting methods for baryon acoustic oscillations in the Lyman-α forest fluctuations in BOSS data release 9

    Energy Technology Data Exchange (ETDEWEB)

    Kirkby, David; Margala, Daniel; Blomqvist, Michael [Department of Physics and Astronomy, University of California, Irvine, 92697 (United States); Slosar, Anže [Brookhaven National Laboratory, Blgd 510, Upton NY 11375 (United States); Bailey, Stephen; Carithers, Bill [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Busca, Nicolás G.; Bautista, Julian E. [APC, Université Paris Diderot-Paris 7, CNRS/IN2P3, CEA, Observatoire de Paris, 10, rue A. Domon and L. Duquet, Paris (France); Delubac, Timothée; Rich, James; Palanque-Delabrouille, Nathalie [CEA, Centre de Saclay, IRFU, F-91191 Gif-sur-Yvette (France); Brownstein, Joel R.; Dawson, Kyle S. [Department of Physics and Astronomy, University of Utah, 115 S 1400 E, Salt Lake City, UT 84112 (United States); Croft, Rupert A.C. [Bruce and Astrid McWilliams Center for Cosmology, Carnegie Mellon University, Pittsburgh, PA 15213 (United States); Font-Ribera, Andreu [Institute of Theoretical Physics, University of Zurich, 8057 Zurich (Switzerland); Miralda-Escudé, Jordi [Institució Catalana de Recerca i Estudis Avançats, Barcelona, Catalonia (Spain); Myers, Adam D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Nichol, Robert C. [Institute of Cosmology and Gravitation, Dennis Sciama Building, University of Portsmouth, Portsmouth, PO1 3FX (United Kingdom); Pâris, Isabelle; Petitjean, Patrick, E-mail: dkirkby@uci.edu [Université Paris 6 et CNRS, Institut d' Astrophysique de Paris, 98bis blvd. Arago, 75014 Paris (France); and others

    2013-03-01

    We describe fitting methods developed to analyze fluctuations in the Lyman-α forest and measure the parameters of baryon acoustic oscillations (BAO). We apply our methods to BOSS Data Release 9. Our method is based on models of the three-dimensional correlation function in physical coordinate space, and includes the effects of redshift-space distortions, anisotropic non-linear broadening, and broadband distortions. We allow for independent scale factors along and perpendicular to the line of sight to minimize the dependence on our assumed fiducial cosmology and to obtain separate measurements of the BAO angular and relative velocity scales. Our fitting software and the input files needed to reproduce our main BOSS Data Release 9 results are publicly available.

  4. Fitting diameter distribution models to data from forest inventories with concentric plot design

    Directory of Open Access Journals (Sweden)

    Nikos Nanos

    2017-10-01

    Research highlights: We designed a new method to fit the Weibull distribution to forest inventory data from concentric plots that achieves high accuracy and precision in parameter estimates regardless of the within-plot spatial tree pattern.
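
    A minimal sketch of fitting a two-parameter Weibull diameter distribution by maximum likelihood with SciPy; the diameters are synthetic, and the concentric-plot weighting that motivates the article is not reproduced here.

```python
# Hedged sketch: maximum-likelihood fit of a Weibull diameter distribution.
# Synthetic diameters; the article's concentric-plot design is not modelled.
import numpy as np
from scipy import stats

diameters = stats.weibull_min.rvs(c=2.2, scale=25.0, size=200, random_state=3)  # cm

shape, loc, scale = stats.weibull_min.fit(diameters, floc=0)   # fix location at zero
print(f"shape = {shape:.2f}, scale = {scale:.1f} cm")
```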

  5. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    International Nuclear Information System (INIS)

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties are distributed with a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than ∼20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
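
    A minimal sketch of the idea: minimising the Poisson negative log-likelihood rather than chi-square for low-count data. The Gaussian peak model, starting values and synthetic counts are assumptions for the example, not the authors' diagnostic data.

```python
# Hedged sketch: Poisson maximum-likelihood fit of a low-count peak.
# Synthetic counts; the peak model and starting values are illustrative.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-1, 1, 20)
rng = np.random.default_rng(4)
counts = rng.poisson(lam=5.0 * np.exp(-x**2 / 0.2))          # low-signal Gaussian peak

def neg_log_likelihood(params):
    amp, width = np.abs(params)                               # keep parameters positive
    mu = np.clip(amp * np.exp(-x**2 / width), 1e-12, None)    # expected counts
    return np.sum(mu - counts * np.log(mu))                   # -log L up to a constant

fit = minimize(neg_log_likelihood, x0=[3.0, 0.1], method="Nelder-Mead")
print("amplitude, width =", np.abs(fit.x))
```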

  6. The application of bayesian statistic in data fit processing

    International Nuclear Information System (INIS)

    Guan Xingyin; Li Zhenfu; Song Zhaohui

    2010-01-01

    The rationality and disadvantages of the least squares fitting usually used in data processing are analyzed, and the theory and common methods for applying Bayesian statistics in data processing are presented in detail. As the analysis shows, the Bayesian approach avoids the limiting hypotheses that least squares fitting imposes in data processing, and its results are more scientific and more easily understood; it may therefore replace least squares fitting in data processing. (authors)
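
    As a hedged illustration of the contrast drawn above, the snippet compares ordinary least squares with a conjugate Bayesian straight-line fit (zero-mean Gaussian prior, known noise variance); the prior settings and data are invented for the example.

```python
# Hedged sketch: ordinary least squares vs. a conjugate Bayesian linear fit.
# Data, prior covariance and noise variance are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 10, 15)
y = 2.0 + 0.7 * x + rng.normal(0, 1.0, x.size)

X = np.column_stack([np.ones_like(x), x])      # design matrix [1, x]
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

sigma2 = 1.0                                   # assumed known noise variance
prior_cov = np.eye(2) * 10.0                   # broad zero-mean Gaussian prior
post_cov = np.linalg.inv(np.linalg.inv(prior_cov) + X.T @ X / sigma2)
post_mean = post_cov @ (X.T @ y / sigma2)

print("least squares:  ", beta_ls)
print("posterior mean: ", post_mean, " posterior sd:", np.sqrt(np.diag(post_cov)))
```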

  7. Study on fitness functions of genetic algorithm for dynamically correcting nuclide atmospheric diffusion model

    International Nuclear Information System (INIS)

    Ji Zhilong; Ma Yuanwei; Wang Dezhong

    2014-01-01

    Background: In atmospheric diffusion models for radioactive nuclides, the empirical dispersion coefficients were deduced under certain experimental conditions, and their difference from nuclear accident conditions is a source of deviation. A better estimate of a radioactive nuclide's actual dispersion process can be obtained by correcting the dispersion coefficients with observation data, and the Genetic Algorithm (GA) is an appropriate method for this correction procedure. Purpose: This study analyses the influence of the fitness function on the correction procedure and on the forecast ability of the diffusion model. Methods: GA, coupled with a Lagrangian dispersion model, was used in a numerical simulation to compare the impact of four fitness functions on the correction result. Results: In the numerical simulation, the fitness function that takes observation deviation into consideration stands out when significant deviation exists in the observed data. After performing the correction procedure on the Kincaid experiment data, a significant boost was observed in the diffusion model's forecast ability. Conclusion: As the results show, in order to improve a dispersion model's forecast ability using GA, observation data should be given different weights in the fitness function corresponding to their errors. (authors)
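
    A minimal sketch of the point made in the conclusion: a GA fitness function in which each observation is weighted by its measurement error. The function shape (inverse of a weighted mean squared deviation) and all numbers are illustrative assumptions, not the study's actual fitness functions.

```python
# Hedged sketch: an error-weighted GA fitness function (higher is better).
# Observations, errors and candidate predictions are invented for the example.
import numpy as np

def fitness(predicted, observed, obs_error):
    """Inverse of the error-weighted mean squared deviation."""
    w = 1.0 / np.asarray(obs_error)**2                 # weight by observation precision
    weighted_mse = np.sum(w * (predicted - observed)**2) / np.sum(w)
    return 1.0 / (1.0 + weighted_mse)

observed = np.array([1.2, 3.4, 0.8, 2.1])              # e.g. measured concentrations
obs_error = np.array([0.1, 0.5, 0.1, 0.3])             # larger error -> smaller weight
candidate = np.array([1.1, 2.9, 0.9, 2.0])             # model output for one GA individual
print(f"fitness = {fitness(candidate, observed, obs_error):.3f}")
```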

  8. Multivariate rational data fitting

    Science.gov (United States)

    Cuyt, Annie; Verdonk, Brigitte

    1992-12-01

    Sections 1 and 2 discuss the advantages of an object-oriented implementation combined with higher floating-point arithmetic, of the algorithms available for multivariate data fitting using rational functions. Section 1 will in particular explain what we mean by "higher arithmetic". Section 2 will concentrate on the concepts of "object orientation". In sections 3 and 4 we shall describe the generality of the data structure that can be dealt with: due to some new results virtually every data set is acceptable right now, with possible coalescence of coordinates or points. In order to solve the multivariate rational interpolation problem the data sets are fed to different algorithms depending on the structure of the interpolation points in the n-variate space.

  9. Non-linear least squares curve fitting of a simple theoretical model to radioimmunoassay dose-response data using a mini-computer

    International Nuclear Information System (INIS)

    Wilkins, T.A.; Chadney, D.C.; Bryant, J.; Palmstroem, S.H.; Winder, R.L.

    1977-01-01

    Using the simple univalent antigen-univalent antibody equilibrium model, the dose-response curve of a radioimmunoassay (RIA) may be expressed as a function of Y, X and the four physical parameters of the idealised system. A compact but powerful mini-computer program has been written in BASIC for rapid iterative non-linear least squares curve fitting and dose interpolation with this function. In its simplest form the program can be operated in an 8K byte mini-computer. The program has been extensively tested with data from 10 different assay systems (RIA and CPBA) for measurement of drugs and hormones ranging in molecular size from thyroxine to insulin. For each assay system the results have been analysed in terms of (a) curve fitting biases and (b) direct comparison with manual fitting. In all cases the quality of fitting was remarkably good in spite of the fact that the chemistry of each system departed significantly from one or more of the assumptions implicit in the model used. A mathematical analysis of departures from the model's principal assumption has provided an explanation for this somewhat unexpected observation. The essential features of this analysis are presented in this paper together with the statistical analyses of the performance of the program. From these and the results obtained to date in the routine quality control of these 10 assays, it is concluded that the method of curve fitting and dose interpolation presented in this paper is likely to be of general applicability.
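
    A hedged sketch of iterative non-linear least-squares fitting and dose interpolation for an RIA dose-response curve. A four-parameter logistic is used here as a generic stand-in for the article's univalent antigen-antibody equilibrium model, and all data are synthetic.

```python
# Hedged sketch: non-linear least-squares fit of an RIA dose-response curve and
# dose interpolation. The 4PL form is a stand-in, not the article's model.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, top, bottom, ed50, slope):
    return bottom + (top - bottom) / (1.0 + (dose / ed50)**slope)

dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
response = np.array([95, 92, 80, 55, 30, 12, 6.0])          # e.g. % bound tracer

popt, _ = curve_fit(four_pl, dose, response, p0=[100, 5, 3, 1],
                    bounds=([0, 0, 0, 0.1], [200, 50, 100, 5]))
print("top, bottom, ED50, slope =", np.round(popt, 2))

# Dose interpolation: invert the fitted curve for an unknown sample's response.
top, bottom, ed50, slope = popt
y_unknown = 40.0
dose_unknown = ed50 * ((top - bottom) / (y_unknown - bottom) - 1.0)**(1.0 / slope)
print(f"interpolated dose = {dose_unknown:.2f}")
```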

  10. A heteroscedastic measurement error model for method comparison data with replicate measurements.

    Science.gov (United States)

    Nawarathna, Lakshika S; Choudhary, Pankaj K

    2015-03-30

    Measurement error models offer a flexible framework for modeling data collected in studies comparing methods of quantitative measurement. These models generally make two simplifying assumptions: (i) the measurements are homoscedastic, and (ii) the unobservable true values of the methods are linearly related. One or both of these assumptions may be violated in practice. In particular, error variabilities of the methods may depend on the magnitude of measurement, or the true values may be nonlinearly related. Data with these features call for a heteroscedastic measurement error model that allows nonlinear relationships in the true values. We present such a model for the case when the measurements are replicated, discuss its fitting, and explain how to evaluate similarity of measurement methods and agreement between them, which are two common goals of data analysis, under this model. Model fitting involves dealing with lack of a closed form for the likelihood function. We consider estimation methods that approximate either the likelihood or the model to yield approximate maximum likelihood estimates. The fitting methods are evaluated in a simulation study. The proposed methodology is used to analyze a cholesterol dataset. Copyright © 2015 John Wiley & Sons, Ltd.

  11. [How to fit and interpret multilevel models using SPSS].

    Science.gov (United States)

    Pardo, Antonio; Ruiz, Miguel A; San Martín, Rafael

    2007-05-01

    Hierarchic or multilevel models are used to analyse data when cases belong to known groups and sample units are selected both from the individual level and from the group level. In this work, the multilevel models most commonly discussed in the statistic literature are described, explaining how to fit these models using the SPSS program (any version as of the 11th) and how to interpret the outcomes of the analysis. Five particular models are described, fitted, and interpreted: (1) one-way analysis of variance with random effects, (2) regression analysis with means-as-outcomes, (3) one-way analysis of covariance with random effects, (4) regression analysis with random coefficients, and (5) regression analysis with means- and slopes-as-outcomes. All models are explained, trying to make them understandable to researchers in health and behaviour sciences.

  12. Measures of relative fitness of social behaviors in finite structured population models.

    Science.gov (United States)

    Tarnita, Corina E; Taylor, Peter D

    2014-10-01

    How should we measure the relative selective advantage of different behavioral strategies? The various approaches to this question have fallen into one of the following categories: the fixation probability of a mutant allele in a wild type population, some measures of gene frequency and gene frequency change, and a formulation of the inclusive fitness effect. Countless theoretical studies have examined the relationship between these approaches, and it has generally been thought that, under standard simplifying assumptions, they yield equivalent results. Most of this theoretical work, however, has assumed homogeneity of the population interaction structure--that is, that all individuals are equivalent. We explore the question of selective advantage in a general (heterogeneous) population and show that, although appropriate measures of fixation probability and gene frequency change are equivalent, they are not, in general, equivalent to the inclusive fitness effect. The latter does not reflect effects of selection acting via mutation, which can arise on heterogeneous structures, even for low mutation. Our theoretical framework provides a transparent analysis of the different biological factors at work in the comparison of these fitness measures and suggests that their theoretical and empirical use needs to be revised and carefully grounded in a more general theory.

  13. Model Fit and Item Factor Analysis: Overfactoring, Underfactoring, and a Program to Guide Interpretation.

    Science.gov (United States)

    Clark, D Angus; Bowles, Ryan P

    2018-04-23

    In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation methods to investigate the ability of popular model fit statistics (chi-square, root mean square error of approximation, the comparative fit index, and the Tucker-Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics, and was conditional on many features of the underlying model. Together, results suggest that conventional fit thresholds offer questionable utility in the context of IFA.

  14. Different fits satisfy different needs: linking person-environment fit to employee commitment and performance using self-determination theory.

    Science.gov (United States)

    Greguras, Gary J; Diefendorff, James M

    2009-03-01

    Integrating and expanding upon the person-environment fit (PE fit) and the self-determination theory literatures, the authors hypothesized and tested a model in which the satisfaction of the psychological needs for autonomy, relatedness, and competence partially mediated the relations between different types of perceived PE fit (i.e., person-organization fit, person-group fit, and job demands-abilities fit) with employee affective organizational commitment and overall job performance. Data from 163 full-time working employees and their supervisors were collected across 3 time periods. Results indicate that different types of PE fit predicted different types of psychological need satisfaction and that psychological need satisfaction predicted affective commitment and performance. Further, person-organization fit and demands-abilities fit also evidenced direct effects on employee affective commitment. These results begin to explicate the processes through which different types of PE fit relate to employee attitudes and behaviors. (c) 2009 APA, all rights reserved.

  15. Revisiting the Global Electroweak Fit of the Standard Model and Beyond with Gfitter

    CERN Document Server

    Flächer, Henning; Haller, J; Höcker, A; Mönig, K; Stelzer, J

    2009-01-01

    The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter projec...

  16. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    Science.gov (United States)

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG) so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J²) statistics can be applied directly. In a simulation study, TG, HL, and J² were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J² were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J². © 2015 John Wiley & Sons Ltd/London School of Economics.

  17. A CONTRASTIVE ANALYSIS OF THE FACTORIAL STRUCTURE OF THE PCL-R: WHICH MODEL FITS BEST THE DATA?

    Directory of Open Access Journals (Sweden)

    Beatriz Pérez

    2015-01-01

    Full Text Available The aim of this study was to determine which of the factorial solutions proposed for the Hare Psychopathy Checklist-Revised (PCL-R), of two, three, and four factors, and unidimensional, fitted the data best. Two trained and experienced independent raters scored 197 prisoners from the Villabona Penitentiary (Asturias, Spain), age range 21 to 73 years (M = 36.0, SD = 9.7), of whom 60.12% were reoffenders and 73% had committed violent crimes. The results revealed that the two-factor correlational, three-factor hierarchical without testlets, four-factor correlational and hierarchical, and unidimensional models were a poor fit for the data (CFI ≤ .86), and the three-factor model with testlets was a reasonable fit for the data (CFI = .93). The scale resulting from the three-factor hierarchical model with testlets (13 items) classified psychopathy significantly higher than the original 20-item scale. The results are discussed in terms of their implications for theoretical models of psychopathy, decision-making, prison classification and intervention, and prevention.

  18. A goodness-of-fit test for occupancy models with correlated within-season revisits

    Science.gov (United States)

    Wright, Wilson; Irvine, Kathryn M.; Rodhouse, Thomas J.

    2016-01-01

    Occupancy modeling is important for exploring species distribution patterns and for conservation monitoring. Within this framework, explicit attention is given to species detection probabilities estimated from replicate surveys to sample units. A central assumption is that replicate surveys are independent Bernoulli trials, but this assumption becomes untenable when ecologists serially deploy remote cameras and acoustic recording devices over days and weeks to survey rare and elusive animals. Proposed solutions involve modifying the detection-level component of the model (e.g., first-order Markov covariate). Evaluating whether a model sufficiently accounts for correlation is imperative, but clear guidance for practitioners is lacking. Currently, an omnibus goodness-of-fit test using a chi-square discrepancy measure on unique detection histories is available for occupancy models (MacKenzie and Bailey, Journal of Agricultural, Biological, and Environmental Statistics, 9, 2004, 300; hereafter, MacKenzie-Bailey test). We propose a join count summary measure adapted from spatial statistics to directly assess correlation after fitting a model. We motivate our work with a dataset of multinight bat call recordings from a pilot study for the North American Bat Monitoring Program. We found in simulations that our join count test was more reliable than the MacKenzie-Bailey test for detecting inadequacy of a model that assumed independence, particularly when serial correlation was low to moderate. A model that included a Markov-structured detection-level covariate produced unbiased occupancy estimates except in the presence of strong serial correlation and a revisit design consisting only of temporal replicates. When applied to two common bat species, our approach illustrates that sophisticated models do not guarantee adequate fit to real data, underscoring the importance of model assessment. Our join count test provides a widely applicable goodness-of-fit test and

  19. Fitting experimental data by using weighted Monte Carlo events

    International Nuclear Information System (INIS)

    Stojnev, S.

    2003-01-01

    A method for fitting experimental data using modified Monte Carlo (MC) sample is developed. It is intended to help when a single finite MC source has to fit experimental data looking for parameters in a certain underlying theory. The extraction of the searched parameters, the errors estimation and the goodness-of-fit testing is based on the binned maximum likelihood method

  20. A Data Forward Stepwise Fitting Algorithm Based on Orthogonal Function System

    Directory of Open Access Journals (Sweden)

    Li Han-Ju

    2017-01-01

    Full Text Available Data fitting is the main method of functional data analysis, and it is widely used in the fields of economics, social science, engineering technology and so on. The least squares method is the main method of data fitting, but it is not convergent, has no memory property, can produce large fitting errors, and is prone to overfitting. Based on an orthogonal trigonometric function system, this paper presents a forward stepwise data fitting algorithm. The algorithm adopts a forward stepwise fitting strategy: at each step, the basis function closest to the current residual is used to fit the residual error generated by the previous fit, which minimizes the residual mean square error. In this paper, we theoretically prove the convergence, the memory property and the diminishing fitting error of the algorithm. Experimental results show that the proposed algorithm is effective, and its fitting performance is better than that of the least squares method and of the forward stepwise fitting algorithm based on a non-orthogonal function system.
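
    A minimal sketch of a forward stepwise fit on an orthogonal trigonometric basis, in the spirit described above (select, at each step, the basis function that most reduces the residual); the basis size, data and stopping rule are illustrative choices, not the authors' exact algorithm.

```python
# Hedged sketch: greedy forward stepwise fitting on an orthogonal trig basis.
# Data, number of steps and basis size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
y = 1.5 * np.sin(2 * t) + 0.8 * np.cos(5 * t) + 0.1 * rng.normal(size=t.size)

# Trigonometric basis, orthogonal on this uniform grid: cos(k t), sin(k t)
basis = [np.cos(k * t) for k in range(1, 11)] + [np.sin(k * t) for k in range(1, 11)]

residual = y.copy()
fit = np.zeros_like(y)
for _ in range(4):                                       # add four terms forward-stepwise
    gains = [(residual @ b)**2 / (b @ b) for b in basis] # residual reduction per candidate
    best = int(np.argmax(gains))
    coeff = (residual @ basis[best]) / (basis[best] @ basis[best])
    fit += coeff * basis[best]
    residual -= coeff * basis[best]
    print(f"added basis #{best}, residual MSE = {np.mean(residual**2):.4f}")
```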

  1. PLOTNFIT.4TH, Data Plotting and Curve Fitting by Polynomials

    International Nuclear Information System (INIS)

    Schiffgens, J.O.

    1990-01-01

    1 - Description of program or function: PLOTnFIT is used for plotting and analyzing data by fitting nth degree polynomials of basis functions to the data interactively and printing graphs of the data and the polynomial functions. It can be used to generate linear, semi-log, and log-log graphs and can automatically scale the coordinate axes to suit the data. Multiple data sets may be plotted on a single graph. An auxiliary program, READ1ST, is included which produces an on-line summary of the information contained in the PLOTnFIT reference report. 2 - Method of solution: PLOTnFIT uses the least squares method to calculate the coefficients of nth-degree (up to 10th degree) polynomials of 11 selected basis functions such that each polynomial fits the data in a least squares sense. The procedure incorporated in the code uses a linear combination of orthogonal polynomials to avoid 'ill-conditioning' and to perform the curve fitting task with single-precision arithmetic. 3 - Restrictions on the complexity of the problem: maxima of 225 data points per job (or graph), including all data sets; 8 data sets (or tasks) per job (or graph)

  2. Regression models for interval censored survival data: Application to HIV infection in Danish homosexual men

    DEFF Research Database (Denmark)

    Carstensen, Bendix

    1996-01-01

    This paper shows how to fit excess and relative risk regression models to interval censored survival data, and how to implement the models in standard statistical software. The methods developed are used for the analysis of HIV infection rates in a cohort of Danish homosexual men.

  3. Improving health-related fitness in children: the fit-4-Fun randomized controlled trial study protocol

    Directory of Open Access Journals (Sweden)

    Eather Narelle

    2011-12-01

    Full Text Available Abstract Background Declining levels of physical fitness in children are linked to an increased risk of developing poor physical and mental health. Physical activity programs for children that involve regular high intensity physical activity, along with muscle and bone strengthening activities, have been identified by the World Health Organisation as a key strategy to reduce the escalating burden of ill health caused by non-communicable diseases. This paper reports the rationale and methods for a school-based intervention designed to improve physical fitness and physical activity levels of Grades 5 and 6 primary school children. Methods/Design Fit-4-Fun is an 8-week multi-component school-based health-related fitness education intervention and will be evaluated using a group randomized controlled trial. Primary schools from the Hunter Region in NSW, Australia, will be invited to participate in the program in 2011 with a target sample size of 128 primary school children (age 10-13). The Fit-4-Fun program is theoretically grounded and will be implemented applying the Health Promoting Schools framework. Students will participate in weekly curriculum-based health and physical education lessons, daily break-time physical activities during recess and lunch, and will complete an 8-week (3 × per week) home activity program with their parents and/or family members. A battery of six health-related fitness assessments, four days of pedometer-assessed physical activity and a questionnaire, will be administered at baseline, immediate post-intervention (2 months) and at 6 months (from baseline) to determine intervention effects. Details of the methodological aspects of recruitment, inclusion criteria, randomization, intervention program, assessments, process evaluation and statistical analyses are described. Discussion The Fit-4-Fun program is an innovative school-based intervention targeting fitness improvements in primary school children. The program will

  4. Improving health-related fitness in children: the Fit-4-Fun randomized controlled trial study protocol.

    Science.gov (United States)

    Eather, Narelle; Morgan, Philip J; Lubans, David R

    2011-12-05

    Declining levels of physical fitness in children are linked to an increased risk of developing poor physical and mental health. Physical activity programs for children that involve regular high intensity physical activity, along with muscle and bone strengthening activities, have been identified by the World Health Organisation as a key strategy to reduce the escalating burden of ill health caused by non-communicable diseases. This paper reports the rationale and methods for a school-based intervention designed to improve physical fitness and physical activity levels of Grades 5 and 6 primary school children. Fit-4-Fun is an 8-week multi-component school-based health-related fitness education intervention and will be evaluated using a group randomized controlled trial. Primary schools from the Hunter Region in NSW, Australia, will be invited to participate in the program in 2011 with a target sample size of 128 primary schools children (age 10-13). The Fit-4-Fun program is theoretically grounded and will be implemented applying the Health Promoting Schools framework. Students will participate in weekly curriculum-based health and physical education lessons, daily break-time physical activities during recess and lunch, and will complete an 8-week (3 × per week) home activity program with their parents and/or family members. A battery of six health-related fitness assessments, four days of pedometery-assessed physical activity and a questionnaire, will be administered at baseline, immediate post-intervention (2-months) and at 6-months (from baseline) to determine intervention effects. Details of the methodological aspects of recruitment, inclusion criteria, randomization, intervention program, assessments, process evaluation and statistical analyses are described. The Fit-4-Fun program is an innovative school-based intervention targeting fitness improvements in primary school children. The program will involve a range of evidence-based behaviour change strategies to

  5. A GPS Satellite Clock Offset Prediction Method Based on Fitting Clock Offset Rates Data

    Directory of Open Access Journals (Sweden)

    WANG Fuhong

    2016-12-01

    Full Text Available A satellite atomic clock offset prediction method based on fitting and modeling clock offset rate data is proposed. This method builds a quadratic or linear model combined with periodic terms to fit the time series of clock offset rates, and computes the trend coefficients of the model by best estimation. The clock offset precisely estimated at the initial prediction epoch is adopted directly as the constant coefficient of the model. The clock offsets in the rapid ephemeris (IGR) provided by IGS are used as modeling data sets to perform experiments for different types of GPS satellite clocks. The results show that the clock prediction accuracies of the proposed method for 3, 6, 12 and 24 h reach 0.43, 0.58, 0.90 and 1.47 ns respectively, outperforming the traditional prediction method based on fitting the original clock offsets by 69.3%, 61.8%, 50.5% and 37.2%. Compared with the IGU real-time clock products provided by IGS, the prediction accuracies of the new method are improved by about 15.7%, 23.7%, 27.4% and 34.4% respectively.
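
    A hedged sketch of the general scheme: fit a trend-plus-periodic model to clock offset rates, anchor the constant term with the precisely known offset at the initial epoch, and predict future offsets by analytic integration. The period, units and data are invented for the example.

```python
# Hedged sketch: fit clock offset RATES with a linear-plus-periodic model, then
# predict offsets anchored at a known initial offset. All values are synthetic.
import numpy as np

t = np.arange(0, 24, 0.25)                       # hours of fitted rate data
period = 11.967                                  # assumed periodic term (h)
rng = np.random.default_rng(7)
true_rate = 0.05 + 0.002 * t + 0.01 * np.sin(2 * np.pi * t / period)
rate_obs = true_rate + 0.002 * rng.normal(size=t.size)        # e.g. ns per hour

# Design matrix for the rate model: constant, trend, and one periodic pair
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period)])
coef, *_ = np.linalg.lstsq(A, rate_obs, rcond=None)

def predict_offset(t_pred, offset0):
    """Integrate the fitted rate model analytically from t = 0, anchored at offset0."""
    c0, c1, cs, cc = coef
    w = 2 * np.pi / period
    return (offset0 + c0 * t_pred + 0.5 * c1 * t_pred**2
            - cs / w * (np.cos(w * t_pred) - 1.0) + cc / w * np.sin(w * t_pred))

print(predict_offset(np.array([3.0, 6.0, 12.0, 24.0]), offset0=120.0))
```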

  6. Gfitter - Revisiting the global electroweak fit of the Standard Model and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Flaecher, H.; Hoecker, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Goebel, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]|[Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Haller, J. [Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Moenig, K.; Stelzer, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2008-11-15

    The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project, and presents state-of-the-art results for the global electroweak fit in the Standard Model, and for a model with an extended Higgs sector (2HDM). Numerical and graphical results for fits with and without including the constraints from the direct Higgs searches at LEP and Tevatron are given. Perspectives for future colliders are analysed and discussed. Including the direct Higgs searches, we find M_H = 116.4 (+18.3, -1.3) GeV, and the 2σ and 3σ allowed regions [114, 145] GeV and [113, 168] and [180, 225] GeV, respectively.

  7. Brief communication: human cranial variation fits iterative founder effect model with African origin.

    Science.gov (United States)

    von Cramon-Taubadel, Noreen; Lycett, Stephen J

    2008-05-01

    Recent studies comparing craniometric and neutral genetic affinity matrices have concluded that, on average, human cranial variation fits a model of neutral expectation. While human craniometric and genetic data fit a model of isolation by geographic distance, it is not yet clear whether this is due to geographically mediated gene flow or human dispersal events. Recently, human genetic data have been shown to fit an iterative founder effect model of dispersal with an African origin, in line with the out-of-Africa replacement model for modern human origins, and Manica et al. (Nature 448 (2007) 346-349) have demonstrated that human craniometric data also fit this model. However, in contrast with the neutral model of cranial evolution suggested by previous studies, Manica et al. (2007) made the a priori assumption that cranial form has been subject to climatically driven natural selection and therefore correct for climate prior to conducting their analyses. Here we employ a modified theoretical and methodological approach to test whether human cranial variability fits the iterative founder effect model. In contrast with Manica et al. (2007) we employ size-adjusted craniometric variables, since climatic factors such as temperature have been shown to correlate with aspects of cranial size. Despite these differences, we obtain similar results to those of Manica et al. (2007), with up to 26% of global within-population craniometric variation being explained by geographic distance from sub-Saharan Africa. Comparative analyses using non-African origins do not yield significant results. The implications of these results are discussed in the light of the modern human origins debate. (c) 2007 Wiley-Liss, Inc.

  8. Significant uncertainty in global scale hydrological modeling from precipitation data errors

    NARCIS (Netherlands)

    Sperna Weiland, F.; Vrugt, J.A.; Beek, van P.H.; Weerts, A.H.; Bierkens, M.F.P.

    2015-01-01

    In the past decades significant progress has been made in the fitting of hydrologic models to data. Most of this work has focused on simple, CPU-efficient, lumped hydrologic models using discharge, water table depth, soil moisture, or tracer data from relatively small river basins. In this paper, we

  9. Significant uncertainty in global scale hydrological modeling from precipitation data errors

    NARCIS (Netherlands)

    Weiland, Frederiek C. Sperna; Vrugt, Jasper A.; van Beek, Rens (L. ) P. H.; Weerts, Albrecht H.; Bierkens, Marc F. P.

    2015-01-01

    In the past decades significant progress has been made in the fitting of hydrologic models to data. Most of this work has focused on simple, CPU-efficient, lumped hydrologic models using discharge, water table depth, soil moisture, or tracer data from relatively small river basins. In this paper, we

  10. Surface Fitting for Quasi Scattered Data from Coordinate Measuring Systems.

    Science.gov (United States)

    Mao, Qing; Liu, Shugui; Wang, Sen; Ma, Xinhui

    2018-01-13

    Non-uniform rational B-spline (NURBS) surface fitting from data points is widely used in the fields of computer aided design (CAD), medical imaging, cultural relic representation and object-shape detection. Usually, the measured data acquired from coordinate measuring systems is neither gridded nor completely scattered. The distribution of this kind of data is scattered in physical space, but the data points are stored in a way consistent with the order of measurement, so it is named quasi scattered data in this paper. Therefore they can be organized into rows easily but the number of points in each row is random. In order to overcome the difficulty of surface fitting from this kind of data, a new method based on resampling is proposed. It consists of three major steps: (1) NURBS curve fitting for each row, (2) resampling on the fitted curve and (3) surface fitting from the resampled data. An iterative projection optimization scheme is applied in the first and third steps to yield advisable parameterization and reduce the time cost of projection. A resampling approach based on parameters, local peaks and contour curvature is proposed to overcome the problems of node redundancy and high time consumption in the fitting of this kind of scattered data. Numerical experiments are conducted with both simulation and practical data, and the results show that the proposed method is fast, effective and robust. Moreover, by analyzing the fitting results acquired from data with different degrees of scatter it can be demonstrated that the error introduced by resampling is negligible and therefore it is feasible.
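
    A minimal sketch of the per-row step described above: fit a parametric smoothing spline to one row of quasi-scattered points and resample it at regular parameter values. SciPy B-splines are used as a stand-in for the article's NURBS machinery, and the row data are synthetic.

```python
# Hedged sketch: fit one row of quasi-scattered 3D points with a parametric
# B-spline and resample it uniformly in parameter. Synthetic row data.
import numpy as np
from scipy.interpolate import splev, splprep

rng = np.random.default_rng(8)
u_true = np.sort(rng.uniform(0, 1, 40))                   # uneven sampling along the row
x = np.cos(np.pi * u_true) + 0.01 * rng.normal(size=40)
y = np.sin(np.pi * u_true) + 0.01 * rng.normal(size=40)
z = 0.3 * u_true + 0.01 * rng.normal(size=40)

tck, u = splprep([x, y, z], s=0.01)                       # fit parametric spline to the row
u_new = np.linspace(0, 1, 25)                             # resample at regular parameters
x_r, y_r, z_r = splev(u_new, tck)
resampled_row = np.column_stack([x_r, y_r, z_r])          # input for the surface-fitting step
print(resampled_row.shape)
```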

  11. Modelling job support, job fit, job role and job satisfaction for school of nursing sessional academic staff.

    Science.gov (United States)

    Cowin, Leanne S; Moroney, Robyn

    2018-01-01

    Sessional academic staff are an important part of nursing education. Increases in casualisation of the academic workforce continue, and satisfaction with the job role is an important benchmark for quality curricula delivery and influences recruitment and retention. This study examined relations between four job constructs - organisation fit, organisation support, staff role and job satisfaction - for Sessional Academic Staff at a School of Nursing by creating two path analysis models. A cross-sectional correlational survey design was utilised. Participants who were currently working as sessional or casual teaching staff members were invited to complete an online anonymous survey. The data represent a convenience sample of Sessional Academic Staff in 2016 at a large school of Nursing and Midwifery in Australia. After psychometric evaluation of each of the job construct measures in this study we utilised Structural Equation Modelling to better understand the relations of the variables. The measures used in this study were found to be both valid and reliable for this sample. Job support and job fit are positively linked to job satisfaction. Although the hypothesised model did not meet model fit standards, a new 'nested' model made substantive sense. This small study explored a new scale for measuring academic job role, and demonstrated how it promotes the constructs of job fit and job support. All four job constructs are important in providing job satisfaction - an outcome that in turn supports staffing stability, retention, and motivation.

  12. Functionally unidimensional item response models for multivariate binary data

    DEFF Research Database (Denmark)

    Ip, Edward; Molenberghs, Geert; Chen, Shyh-Huei

    2013-01-01

    The problem of fitting unidimensional item response models to potentially multidimensional data has been extensively studied. The focus of this article is on response data that have a strong dimension but also contain minor nuisance dimensions. Fitting a unidimensional model to such multidimensional data is believed to result in ability estimates that represent a combination of the major and minor dimensions. We conjecture that the underlying dimension for the fitted unidimensional model, which we call the functional dimension, represents a nonlinear projection. In this article we investigate ... tool. An example regarding a construct of desire for physical competency is used to illustrate the functional unidimensional approach.

  13. The universal Higgs fit

    DEFF Research Database (Denmark)

    Giardino, P. P.; Kannike, K.; Masina, I.

    2014-01-01

    We perform a state-of-the-art global fit to all Higgs data. We synthesise them into a 'universal' form, which allows one to easily test any desired model. We apply the proposed methodology to extract from data the Higgs branching ratios, production cross sections, couplings, and to analyse composite Higgs models, models with extra Higgs doublets, supersymmetry, extra particles in the loops, anomalous top couplings, and invisible Higgs decays into Dark Matter. Best fit regions lie around the Standard Model predictions and are well approximated by our 'universal' fit. Latest data exclude the dilaton as an alternative to the Higgs, and disfavour fits with negative Yukawa couplings. We derive for the first time the SM Higgs boson mass from the measured rates, rather than from the peak positions, obtaining M_h = 124.4 ± 1.6 GeV.

  14. The Association of Health-Related Fitness and Chronic Absenteeism Status in New York City Middle School Youth.

    Science.gov (United States)

    D'Agostino, Emily M; Day, Sophia E; Konty, Kevin J; Larkin, Michael; Saha, Subir; Wyka, Katarzyna

    2018-03-23

    Extensive research demonstrates the benefits of fitness on children's health and academic performance. Although decreases in health-related fitness may increase school absenteeism, multiple years of prospective, child-level data are needed to examine whether fitness changes predict subsequent chronic absenteeism status. Six cohorts of New York City public school students were followed from grades 5-8 (2006/2007-2012/2013; N = 349,381). A longitudinal 3-level logistic generalized linear mixed model with random intercepts was used to test the association of individual children's changes in fitness and 1-year lagged chronic absenteeism. The odds of chronic absenteeism increased 27% [odds ratio (OR) 95% confidence interval (CI), 1.25-1.30], 15% (OR 95% CI, 1.13-1.18), 9% (OR 95% CI, 1.07-1.11), and 1% (OR 95% CI, 0.98-1.04), for students who had a >20% decrease, 10%-20% decrease, 20% fitness increase. These findings contribute important longitudinal evidence to a cross-sectional literature, demonstrating reductions in youth fitness may increase absenteeism. Given only 25% of youth aged 12-15 years achieve the recommended daily 60 minutes or more of moderate to vigorous physical activity, future work should examine the potential for youth fitness interventions to reduce absenteeism and foster positive attitudes toward lifelong physical activity.

  15. iFit: a new data analysis framework. Applications for data reduction and optimization of neutron scattering instrument simulations with McStas

    DEFF Research Database (Denmark)

    Farhi, E.; Y., Debab,; Willendrup, Peter Kjær

    2014-01-01

    … and noisy problems. These optimizers can then be used to fit models onto data objects, and to optimize McStas instrument simulations. As an application, we propose a methodology to analyse neutron scattering measurements in a pure Monte Carlo optimization procedure using McStas and iFit. As opposed …

  16. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example

    Science.gov (United States)

    Helgesson, P.; Sjöstrand, H.

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.

  17. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example.

    Science.gov (United States)

    Helgesson, P; Sjöstrand, H

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
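    The core fitting step described in these two records can be sketched with SciPy's Levenberg-Marquardt least-squares routine; the prior on the parameters and the Gaussian-process treatment of model defects are not reproduced here, and all data and starting values are synthetic.

```python
# Minimal sketch: Levenberg-Marquardt least-squares fit of three overlapping
# Gaussian peaks to a synthetic histogram, then an estimate of r1. The prior and
# the Gaussian-process defect treatment from the paper are not reproduced here.
import numpy as np
from scipy.optimize import curve_fit

def three_peaks(x, a1, m1, s1, a2, m2, s2, a3, m3, s3):
    g = lambda a, m, s: a * np.exp(-0.5 * ((x - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2) + g(a3, m3, s3)

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
truth = (100, 3.0, 0.5, 60, 4.0, 0.6, 80, 7.0, 0.4)     # two overlapping peaks + one isolated
y = rng.poisson(three_peaks(x, *truth)).astype(float)    # Poisson counting noise

p0 = (80, 2.8, 0.5, 50, 4.2, 0.5, 70, 7.1, 0.5)          # rough starting values
sigma = np.sqrt(np.maximum(y, 1.0))                      # approximate counting uncertainties
popt, pcov = curve_fit(three_peaks, x, y, p0=p0, sigma=sigma, method="lm")

# Probability of ending up in peak 1 out of the two overlapping peaks,
# estimated from the fitted areas (area of a Gaussian = amplitude * sigma * sqrt(2*pi)).
area = lambda a, s: a * s * np.sqrt(2 * np.pi)
r1 = area(popt[0], popt[2]) / (area(popt[0], popt[2]) + area(popt[3], popt[5]))
print(f"r1 estimate: {r1:.3f}")
```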

  18. Combined fit of spectrum and composition data as measured by the Pierre Auger Observatory

    OpenAIRE

    Aab, A.; Abreu, P.; Aglietta, M.; Samarai, I. Al; Albuquerque, I.F.M.; Allekotte, I.; Almela, A.; Castillo, J. Alvarez; Alvarez-Muñiz, J.; Anastasi, G.A.; Anchordoqui, L.; Andrada, B.; Andringa, S.; Aramo, C.; Arqueros, F.

    2017-01-01

    We present a combined fit of a simple astrophysical model of UHECR sources to both the energy spectrum and mass composition data measured by the Pierre Auger Observatory. The fit has been performed for energies above $5 \cdot 10^{18}$ eV, i.e. the region of the all-particle spectrum above the so-called "ankle" feature. The astrophysical model we adopted consists of identical sources uniformly distributed in a comoving volume, where nuclei are accelerated through a rigidity-dependent mechanism...

  19. Sustainability of TQM Implementation Model In The Indonesia’s Oil and Gas Industry: An Assessment of Structural Relations Model Fit

    Directory of Open Access Journals (Sweden)

    Wakhid Slamet Ciptono

    2011-02-01

    The purpose of this study is to conduct an empirical analysis of the structural relations among critical factors of quality management practices (QMPs), world-class company practice (WCC), operational excellence practice (OE), and company performance (company non-financial performance or CNFP and company financial performance or CFP) in the oil and gas companies operating in Indonesia. The current study additionally examines the relationships between QMPs and CFP through WCC, OE, and CNFP (as partial mediators) simultaneously. The study uses data from a survey of 140 strategic business units (SBUs) within 49 oil and gas contractor companies in Indonesia. The findings suggest that all six QMPs have positive and significant indirect effects on CFP through WCC and CNFP. Only four of the six QMPs have positive and significant indirect effects on CFP through OE and CNFP. Hence, WCC, OE, and CNFP act as partial mediators between QMPs and CFP. CNFP has a significant influence on CFP. A major implication of this study is that oil and gas managers need to recognize the structural relations model fit by developing all of the research constructs simultaneously, associated with a comprehensive TQM practice. Furthermore, the findings will assist oil and gas companies by improving CNFP, which is very critical to TQM, thereby contributing to a better achievement of CFP. The current study uses Deming’s principles, Hayes and Wheelwright’s dimensions of world-class company practice, Chevron Texaco’s operational excellence practice, and the dimensions of company financial and non-financial performance. The paper also provides an insight into the sustainability of the TQM implementation model and its effect on company financial performance in oil and gas companies in Indonesia.

  20. Robustly Fitting and Forecasting Dynamical Data With Electromagnetically Coupled Artificial Neural Network: A Data Compression Method.

    Science.gov (United States)

    Wang, Ziyin; Liu, Mandan; Cheng, Yicheng; Wang, Rubin

    2017-06-01

    In this paper, a dynamical recurrent artificial neural network (ANN) is proposed and studied. Inspired by recent research in neuroscience, we introduce nonsynaptic coupling to form a dynamical component of the network. We mathematically prove that, with adequate neurons provided, this dynamical ANN model is capable of approximating any continuous dynamic system with an arbitrarily small error in a limited time interval. Its extremely concise Jacobian matrix makes the local stability easy to control. We designed this ANN for fitting and forecasting dynamic data and obtained satisfactory results in simulation. The fitting performance is also compared with those of both the classic dynamic ANN and state-of-the-art models. Sufficient trials and the statistical results indicate that our model is superior to those compared. Moreover, we propose a robust approximation problem, which asks the ANN to approximate a cluster of input-output data pairs over large ranges and to forecast the output of the system under previously unseen input. Our model and learning scheme proposed in this paper successfully solve this problem, and through this, the approximation becomes much more robust and adaptive to noise, perturbation, and low-order harmonic waves. This approach is, in effect, an efficient method for compressing massive external data of a dynamic system into the weights of the ANN.

  1. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species.

    Science.gov (United States)

    Adams, Matthew P; Collier, Catherine J; Uthicke, Sven; Ow, Yan X; Langlois, Lucas; O'Brien, Katherine R

    2017-01-04

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.

  2. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    Science.gov (United States)

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O'Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
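    A minimal sketch of this kind of fit is shown below, using one illustrative Gaussian-shaped temperature response parameterised directly by Pmax and Topt; the functional form and the measurements are hypothetical and are not taken from the twelve models evaluated in the study.

```python
# Illustrative sketch only: fitting a Gaussian-shaped photosynthesis-temperature
# response parameterised directly by Pmax and Topt. Data and functional form are
# stand-ins, not the study's measurements or models.
import numpy as np
from scipy.optimize import curve_fit

def p_of_t(T, p_max, t_opt, width):
    return p_max * np.exp(-0.5 * ((T - t_opt) / width) ** 2)

# Hypothetical photosynthesis measurements across temperature (deg C)
T = np.array([15, 20, 25, 28, 30, 32, 35, 38, 40], dtype=float)
P = np.array([2.1, 3.4, 4.6, 5.2, 5.4, 5.1, 4.2, 2.8, 1.5])

popt, pcov = curve_fit(p_of_t, T, P, p0=[5.0, 30.0, 5.0])
perr = np.sqrt(np.diag(pcov))
print(f"Pmax = {popt[0]:.2f} +/- {perr[0]:.2f}")
print(f"Topt = {popt[1]:.1f} +/- {perr[1]:.1f} deg C")
```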

  3. Objectively measured daily physical activity related to aerobic fitness in young children

    DEFF Research Database (Denmark)

    Dencker, Magnus; Bugge, Anna; Hermansen, Bianca

    2010-01-01

    The purpose of this study was to investigate by direct measurement the cross-sectional relationship between accelerometer-measured physical activity and peak oxygen uptake (VO2peak: ml x min(-1) x kg(-1)), in a population-based cohort of young children, since such data are scarce. The study … analyses indicated that the various physical activity variables explained between 2 and 8% of the variance in VO2peak in boys. In this population-based cohort, most daily activity variables were positively related to aerobic fitness in boys, whereas less clear relationships were observed in girls. Our … finding that physical activity was only uniformly related to aerobic fitness in boys partly contradicts previous studies in older children and adolescents …

  4. Confidence of model based shape reconstruction from sparse data

    DEFF Research Database (Denmark)

    Baka, N.; de Bruijne, Marleen; Reiber, J. H. C.

    2010-01-01

    Statistical shape models (SSM) are commonly applied for plausible interpolation of missing data in medical imaging. However, when fitting a shape model to sparse information, many solutions may fit the available data. In this paper we derive a constrained SSM to fit noisy sparse input landmarks...

  5. IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.

    Science.gov (United States)

    Huang, Lihan

    2017-12-04

    The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for determination of kinetic parameters in predictive microbiology. The algorithm is combined with user-friendly graphical user interfaces (GUIs) to form a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide users through the data analysis process and to help them properly select the initial parameters for different combinations of mathematical models. The software performs one-step kinetic analysis to directly construct tertiary models by minimizing the global error between the experimental observations and the mathematical models. The current version of the software is specifically designed for constructing tertiary models with time and temperature as the independent variables. The software is tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results of data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve the experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.
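    The one-step global-regression idea can be sketched as follows (this is not the IPMP software itself): a log-linear primary survival model and a Bigelow-type secondary model are fitted simultaneously to all isothermal curves, with hypothetical data and parameter values.

```python
# Sketch of one-step "global" fitting (not the IPMP software): a log-linear primary
# survival model combined with a Bigelow-type secondary model is fitted to all
# isothermal curves at once. Data, parameter values and model choice are hypothetical.
import numpy as np
from scipy.optimize import least_squares

def log_survival(params, t, temp):
    logN0, logD_ref, z = params
    T_ref = 60.0                                 # reference temperature, deg C
    logD = logD_ref - (temp - T_ref) / z         # secondary model: log10 D vs temperature
    return logN0 - t / (10.0 ** logD)            # primary model: log-linear survival

# Hypothetical survival data (log10 CFU/g) at three temperatures
t = np.array([0, 5, 10, 15, 0, 2, 4, 6, 0, 1, 2, 3], dtype=float)
temp = np.array([55] * 4 + [60] * 4 + [65] * 4, dtype=float)
logN = np.array([7.0, 6.4, 5.8, 5.1, 7.0, 5.9, 4.8, 3.9, 7.1, 5.2, 3.4, 1.6])

# Minimise the global error over all curves simultaneously
res = least_squares(lambda p: log_survival(p, t, temp) - logN, x0=[7.0, 0.5, 6.0])
logN0, logD_ref, z = res.x
print(f"log10 N0 = {logN0:.2f}, log10 D(60C) = {logD_ref:.2f}, z = {z:.1f} C")
```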

  6. Comparison of machine learning techniques to predict all-cause mortality using fitness data: the Henry ford exercIse testing (FIT) project.

    Science.gov (United States)

    Sakr, Sherif; Elshawi, Radwa; Ahmed, Amjad M; Qureshi, Waqas T; Brawner, Clinton A; Keteyian, Steven J; Blaha, Michael J; Al-Mallah, Mouaz H

    2017-12-19

    Prior studies have demonstrated that cardiorespiratory fitness (CRF) is a strong marker of cardiovascular health. Machine learning (ML) can enhance the prediction of outcomes through classification techniques that classify the data into predetermined categories. The aim of this study is to present an evaluation and comparison of how machine learning techniques can be applied to medical records of cardiorespiratory fitness and how the various techniques differ in their ability to predict medical outcomes (e.g. mortality). We use data from 34,212 patients free of known coronary artery disease or heart failure who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009 and had a complete 10-year follow-up. Seven machine learning classification techniques were evaluated: Decision Tree (DT), Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naïve Bayesian Classifier (BC), Bayesian Network (BN), K-Nearest Neighbor (KNN) and Random Forest (RF). To handle the imbalanced dataset, the Synthetic Minority Over-Sampling Technique (SMOTE) is used. Two sets of experiments have been conducted, with and without the SMOTE sampling technique. On average, over different evaluation metrics, the SVM classifier showed the lowest performance while other models like BN, BC and DT performed better. The RF classifier showed the best performance (AUC = 0.97) among all models trained using SMOTE sampling. The results show that various ML techniques can vary significantly in their performance across the different evaluation metrics. It is also not necessarily the case that a more complex ML model achieves higher prediction accuracy. The prediction performance of all models trained with SMOTE is much better than the performance of models trained without SMOTE. The study shows the potential of machine learning methods for predicting all-cause mortality using cardiorespiratory fitness …
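    A minimal sketch of the modelling pipeline described above is given below, using scikit-learn and the imbalanced-learn implementation of SMOTE on synthetic imbalanced data; the FIT registry data are not public, so all numbers here are placeholders.

```python
# Sketch of the pipeline: SMOTE oversampling of the training set, a random-forest
# classifier, and AUC evaluation. Synthetic data stand in for the FIT registry.
# Assumes scikit-learn and the imbalanced-learn package are installed.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Imbalanced binary outcome, e.g. roughly 5% mortality
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class in the training set only
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_res, y_res)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"Random forest AUC with SMOTE-resampled training data: {auc:.3f}")
```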

  7. Choosing an optimal model for failure data analysis by graphical approach

    International Nuclear Information System (INIS)

    Zhang, Tieling; Dwight, Richard

    2013-01-01

    Many models involving combinations of multiple Weibull distributions, modifications of the Weibull distribution, or extensions of its modified forms have been developed to model a given set of failure data. The application of these models to a given data set can be based on plotting the data on Weibull probability paper (WPP). Two or more models may be appropriate for one typical shape of the fitted plot, whereas a specific model may be suitable for analyzing several different shapes of plot. Hence, a problem arises: how to choose an optimal model for a given data set and how to model the data. The motivation of this paper is to address this issue. This paper summarizes the characteristics of Weibull-related models with more than three parameters, including sectional models involving two or three Weibull distributions, the competing risk model and the mixed Weibull model. The models discussed in this paper are appropriate for modeling data whose plots on WPP can be concave, convex, S-shaped or inversely S-shaped. A method for model selection is then proposed, based on the shapes of the fitted plots, and the main procedure for parameter estimation of the models is described accordingly. In addition, the applicable range of data plots on WPP is clearly highlighted from a practical point of view. This is important to note because analyzing a model while neglecting the applicable range of its plot will introduce discrepancies or large errors in model selection and parameter estimates.
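    The WPP-based diagnostic underlying this approach can be sketched as follows: the failure times (hypothetical here) are plotted in Weibull probability paper coordinates and a straight line is fitted; systematic curvature of the residuals would point towards a sectional, mixed or competing-risk Weibull model instead of a simple two-parameter Weibull.

```python
# Sketch of the WPP diagnostic: failure data plotted in Weibull probability paper
# coordinates with a straight-line fit. Pronounced concave, convex or S-shaped
# residual patterns suggest a more complex Weibull-related model. Data are made up.
import numpy as np

failures = np.sort(np.array([12.0, 25.0, 31.0, 47.0, 60.0, 78.0, 95.0, 130.0, 210.0]))
n = failures.size
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)      # median-rank plotting positions

x = np.log(failures)                             # WPP abscissa
y = np.log(-np.log(1.0 - F))                     # WPP ordinate

beta, c = np.polyfit(x, y, 1)                    # straight-line fit on WPP
eta = np.exp(-c / beta)                          # scale parameter
residual = y - (beta * x + c)
print(f"shape beta = {beta:.2f}, scale eta = {eta:.1f}")
print("max |residual| on WPP:", np.abs(residual).max().round(3))
```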

  8. Tests of fit of historically-informed models of African American Admixture.

    Science.gov (United States)

    Gross, Jessica M

    2018-02-01

    African American populations in the U.S. formed primarily by mating between Africans and Europeans over the last 500 years. To date, studies of admixture have focused on either a one-time admixture event or continuous input into the African American population from Europeans only. Our goal is to gain a better understanding of the admixture process by examining models that take into account (a) assortative mating by ancestry in the African American population, (b) continuous input from both Europeans and Africans, and (c) historically informed variation in the rate of African migration over time. We used a model-based clustering method to generate distributions of African ancestry in three samples comprising 147 African Americans from two published sources. We used a log-likelihood method to examine the fit of four models to these distributions and used a log-likelihood ratio test to compare the relative fit of each model. The mean ancestry estimates for our datasets of 77% African/23% European to 83% African/17% European ancestry are consistent with previous studies. We find that admixture models that incorporate continuous gene flow from Europeans fit significantly better than one-time event models, and that a model involving continuous gene flow from Africans and Europeans fits better than one with continuous gene flow from Europeans only for two of the samples. Importantly, models that involve continuous input from Africans necessitate a higher level of gene flow from Europeans than previously reported. We demonstrate that models that take into account information about the rate of African migration over the past 500 years fit observed patterns of African ancestry better than alternative models. Our approach will enrich our understanding of the admixture process in extant and past populations. © 2017 Wiley Periodicals, Inc.

  9. Everything you wanted to know about data analysis and fitting but were afraid to ask

    CERN Document Server

    Young, Peter

    2015-01-01

    These notes describe how to average and fit numerical data that have been obtained either by simulation or measurement. Following an introduction on how to estimate various average values, they discuss how to determine error bars on those estimates, and how to proceed for combinations of measured values. Techniques for fitting data to a given set of models will be described in the second part of these notes. This primer equips readers to properly derive the results covered, presenting the content in a style suitable for a physics audience. It also includes scripts in python, perl and gnuplot for performing a number of tasks in data analysis and fitting, thereby providing readers with a useful reference guide.

  10. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    Science.gov (United States)

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  11. Model Fitting for Predicted Precipitation in Darwin: Some Issues with Model Choice

    Science.gov (United States)

    Farmer, Jim

    2010-01-01

    In Volume 23(2) of the "Australian Senior Mathematics Journal," Boncek and Harden present an exercise in fitting a Markov chain model to rainfall data for Darwin Airport (Boncek & Harden, 2009). Days are subdivided into those with precipitation and precipitation-free days. The author abbreviates these labels to wet days and dry days.
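    The exercise can be sketched in a few lines: estimate the wet/dry transition probabilities of a two-state Markov chain from a daily rainfall record. The example sequence below is invented; the actual Darwin Airport data would be substituted.

```python
# Minimal sketch of the exercise: estimate a two-state (wet/dry) Markov chain's
# transition matrix from a daily record. The sequence here is made up.
import numpy as np

days = np.array(list("WWWDWDDDDWWDDDWWWWDD"))   # W = wet day, D = dry day
states = {"D": 0, "W": 1}
seq = np.array([states[d] for d in days])

counts = np.zeros((2, 2))
for a, b in zip(seq[:-1], seq[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)  # row-normalised transition matrix

print("P(dry -> wet) =", P[0, 1].round(2))
print("P(wet -> wet) =", P[1, 1].round(2))
```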

  12. Physical Fitness Assessment.

    Science.gov (United States)

    Valdes, Alice

    This document presents baseline data on physical fitness that provides an outline for assessing the physical fitness of students. It consists of 4 tasks and a 13-item questionnaire on fitness-related behaviors. The fitness test evaluates cardiorespiratory endurance by a steady state jog; muscular strength and endurance with a two-minute bent-knee…

  13. Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.

    Science.gov (United States)

    Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E

    2007-02-15

    Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MatLab functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
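    The core idea, analytical solution of a linear (first-order) kinetic scheme via linear algebra, can be sketched as below using a matrix exponential; the scheme A <-> B -> C and its rate constants are hypothetical, and this is not the VisKin code.

```python
# Sketch of the underlying idea (analytical solution of a linear kinetic scheme),
# not the VisKin program: concentrations of a first-order scheme A <-> B -> C
# evolve as y(t) = expm(K t) y(0). Rate constants are hypothetical.
import numpy as np
from scipy.linalg import expm

k1, km1, k2 = 2.0, 0.5, 1.0          # A->B, B->A, B->C (per second)
K = np.array([[-k1,  km1,        0.0],
              [ k1, -(km1 + k2), 0.0],
              [0.0,  k2,         0.0]])   # each column sums to zero (mass conservation)

y0 = np.array([1.0, 0.0, 0.0])       # start with pure A
for t in (0.0, 0.5, 1.0, 2.0, 5.0):
    y = expm(K * t) @ y0
    print(f"t = {t:4.1f} s   [A] = {y[0]:.3f}  [B] = {y[1]:.3f}  [C] = {y[2]:.3f}")
```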

  14. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan; Krebs-Smith, Susan M.; Midthune, Douglas; Perez, Adriana; Buckman, Dennis W.; Kipnis, Victor; Freedman, Laurence S.; Dodd, Kevin W.; Carroll, Raymond J

    2011-01-01

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components.We have recently developed a nonlinear mixed effects model (Kipnis, et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Monte-Carlo (MCMC) computation of fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  15. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan

    2011-01-06

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components.We have recently developed a nonlinear mixed effects model (Kipnis, et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Monte-Carlo (MCMC) computation of fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  16. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    OpenAIRE

    Matthew P. Adams; Catherine J. Collier; Sven Uthicke; Yan X. Ow; Lucas Langlois; Katherine R. O’Brien

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluat...

  17. Human X-chromosome inactivation pattern distributions fit a model of genetically influenced choice better than models of completely random choice

    Science.gov (United States)

    Renault, Nisa K E; Pritchett, Sonja M; Howell, Robin E; Greer, Wenda L; Sapienza, Carmen; Ørstavik, Karen Helene; Hamilton, David C

    2013-01-01

    In eutherian mammals, one X-chromosome in every XX somatic cell is transcriptionally silenced through the process of X-chromosome inactivation (XCI). Females are thus functional mosaics, where some cells express genes from the paternal X, and the others from the maternal X. The relative abundance of the two cell populations (X-inactivation pattern, XIP) can have significant medical implications for some females. In mice, the ‘choice' of which X to inactivate, maternal or paternal, in each cell of the early embryo is genetically influenced. In humans, the timing of XCI choice and whether choice occurs completely randomly or under a genetic influence is debated. Here, we explore these questions by analysing the distribution of XIPs in large populations of normal females. Models were generated to predict XIP distributions resulting from completely random or genetically influenced choice. Each model describes the discrete primary distribution at the onset of XCI, and the continuous secondary distribution accounting for changes to the XIP as a result of development and ageing. Statistical methods are used to compare models with empirical data from Danish and Utah populations. A rigorous data treatment strategy maximises information content and allows for unbiased use of unphased XIP data. The Anderson–Darling goodness-of-fit statistics and likelihood ratio tests indicate that a model of genetically influenced XCI choice better fits the empirical data than models of completely random choice. PMID:23652377

  18. Physical Work Demands and Fitness

    DEFF Research Database (Denmark)

    Larsen, Mette Korshøj

    … The effects were evaluated with objective physiological or diurnal data in an intention-to-treat analysis using multi-adjusted mixed models. The results indicated that the intervention led to several improvements in risk factors for cardiovascular disease, e.g. enhanced cardiorespiratory fitness, reduced … exposed to high relative aerobic workloads obtained more pronounced increases of resting and 24-hour ambulatory blood pressure, an unaltered cardiorespiratory fitness and a reduced sleeping heart rate. The enhanced resting and 24-hour ambulatory blood pressure may be explained as a potential …

  19. Executive functions, visual-motor coordination, physical fitness and academic achievement: Longitudinal relations in typically developing children.

    Science.gov (United States)

    Oberer, Nicole; Gashaj, Venera; Roebers, Claudia M

    2018-04-01

    The present longitudinal study included different school readiness factors measured in kindergarten with the aim of predicting later academic achievement in second grade. Based on data from N = 134 children, the predictive power of executive functions, visual-motor coordination and physical fitness for later academic achievement was estimated using a latent variable approach. When all three predictors were entered simultaneously into the model to predict later academic achievement, significant effects of executive functions and visual-motor coordination on later academic achievement were found. The influence of physical fitness was found to be substantial but indirect, via executive functions. The cognitive stimulation hypothesis as well as the automaticity hypothesis are discussed as explanations for the reported relations. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  20. Ontological knowledge engine and health screening data enabled ubiquitous personalized physical fitness (UFIT).

    Science.gov (United States)

    Su, Chuan-Jun; Chiang, Chang-Yu; Chih, Meng-Chun

    2014-03-07

    Good physical fitness generally makes the body less prone to common diseases. A personalized exercise plan that promotes a balanced approach to fitness helps promote fitness, while inappropriate forms of exercise can have adverse consequences for health. This paper aims to develop an ontology-driven knowledge-based system for generating custom-designed exercise plans based on a user's profile and health status, incorporating international standard Health Level Seven International (HL7) data on physical fitness and health screening. The generated plans are exposed as Representational State Transfer (REST)-style web services, which can be accessed from any Internet-enabled device and deployed in cloud computing environments. To ensure the practicality of the generated exercise plans, the encapsulated knowledge used as a basis for inference in the system is acquired from domain experts. The proposed Ubiquitous Exercise Plan Generation for Personalized Physical Fitness (UFIT) will not only improve health-related fitness through generating personalized exercise plans, but also aid users in avoiding inappropriate workouts.

  1. Curve fitting for RHB Islamic Bank annual net profit

    Science.gov (United States)

    Nadarajan, Dineswary; Noor, Noor Fadiya Mohd

    2015-05-01

    The RHB Islamic Bank annual net profit data are obtained for 2004 to 2012. Curve fitting is done by assuming the data are exact or experimental due to the smoothing process. Higher-order Lagrange polynomial and cubic spline fits are constructed with a curve fitting procedure using Maple software. A normality test is performed to check the adequacy of the data. Regression analysis with curve estimation is conducted in the SPSS environment. All eleven models are found to be acceptable at the 10% significance level of ANOVA. Residual errors and absolute relative true errors are calculated and compared. The optimal model, based on the minimum average error, is proposed.
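    A Python analogue of this Maple workflow is sketched below with SciPy's Lagrange and cubic-spline interpolators; the yearly profit figures are invented placeholders rather than the bank's actual data.

```python
# Python analogue of the Maple workflow described above; the yearly profit figures
# are invented placeholders, not RHB Islamic Bank data.
import numpy as np
from scipy.interpolate import CubicSpline, lagrange

years = np.arange(2004, 2013, dtype=float)
profit = np.array([35, 42, 51, 66, 80, 74, 95, 110, 128], dtype=float)  # hypothetical, in millions

# High-order interpolating polynomial (x shifted to small values for conditioning)
poly = lagrange(years - 2004, profit)
spline = CubicSpline(years, profit)

x = 2008.5
print("Lagrange estimate for mid-2008:", round(float(poly(x - 2004)), 1))
print("Cubic-spline estimate for mid-2008:", round(float(spline(x)), 1))
```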

  2. Fitness cost

    DEFF Research Database (Denmark)

    Nielsen, Karen L.; Pedersen, Thomas M.; Udekwu, Klas I.

    2012-01-01

    … phage types, predominantly only penicillin resistant. We investigated whether isolates of this epidemic were associated with a fitness cost, and we employed a mathematical model to ask whether these fitness costs could have led to the observed reduction in frequency. Bacteraemia isolates of S. aureus … from Denmark have been stored since 1957. We chose 40 S. aureus isolates belonging to phage complex 83A, clonal complex 8 based on spa type, ranging in time of isolation from 1957 to 1980 and with various antibiograms, including both methicillin-resistant and -susceptible isolates. The relative fitness … of each isolate was determined in a growth competition assay with a reference isolate. Significant fitness costs of 215 were determined for the MRSA isolates studied. There was a significant negative correlation between the number of antibiotic resistances and relative fitness. Multiple regression analysis...

  3. Goodness-of-Fit Assessment of Item Response Theory Models

    Science.gov (United States)

    Maydeu-Olivares, Alberto

    2013-01-01

    The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate p-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…

  4. ParFit: A Python-Based Object-Oriented Program for Fitting Molecular Mechanics Parameters to ab Initio Data.

    Science.gov (United States)

    Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S; Windus, Theresa L; Dick-Perez, Marilu

    2017-03-27

    A newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit. ParFit is an open-source program available for free on GitHub (https://github.com/fzahari/ParFit).

  5. Assessing performance of Bayesian state-space models fit to Argos satellite telemetry locations processed with Kalman filtering.

    Directory of Open Access Journals (Sweden)

    Mónica A Silva

    Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with the KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with Argos satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to "true" GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, location and behavioural states estimated by switching state-space models (SSSM) fitted to data derived from KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6 ± 5.6 km) was nearly half that of LS estimates (11.6 ± 8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales' behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates.
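    As a toy illustration of the state-space filtering idea (not the Argos Kalman filter or the Bayesian SSMs used in the study), the sketch below applies a one-dimensional random-walk Kalman filter to simulated noisy position fixes.

```python
# Toy state-space illustration: a 1-D random-walk Kalman filter applied to noisy
# position fixes. This is not the Argos algorithm or the Bayesian SSMs above.
import numpy as np

rng = np.random.default_rng(2)
n = 50
true_pos = np.cumsum(rng.normal(scale=1.0, size=n))       # animal's true 1-D track
obs = true_pos + rng.normal(scale=4.0, size=n)            # noisy satellite fixes

q, r = 1.0, 16.0            # process and observation variances (assumed known)
x, p = obs[0], r            # initial state estimate and its variance
filtered = []
for z in obs:
    # predict
    p = p + q
    # update
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)
    p = (1.0 - k) * p
    filtered.append(x)

rmse_raw = np.sqrt(np.mean((obs - true_pos) ** 2))
rmse_filt = np.sqrt(np.mean((np.array(filtered) - true_pos) ** 2))
print(f"RMSE raw fixes: {rmse_raw:.2f}, RMSE filtered: {rmse_filt:.2f}")
```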

  6. Comparison of a layered slab and an atlas head model for Monte Carlo fitting of time-domain near-infrared spectroscopy data of the adult head.

    Science.gov (United States)

    Selb, Juliette; Ogden, Tyler M; Dubb, Jay; Fang, Qianqian; Boas, David A

    2014-01-01

    Near-infrared spectroscopy (NIRS) estimations of the adult brain baseline optical properties based on a homogeneous model of the head are known to introduce significant contamination from extracerebral layers. More complex models have been proposed and occasionally applied to in vivo data, but their performances have never been characterized on realistic head structures. Here we implement a flexible fitting routine of time-domain NIRS data using graphics processing unit based Monte Carlo simulations. We compare the results for two different geometries: a two-layer slab with variable thickness of the first layer and a template atlas head registered to the subject's head surface. We characterize the performance of the Monte Carlo approaches for fitting the optical properties from simulated time-resolved data of the adult head. We show that both geometries provide better results than the commonly used homogeneous model, and we quantify the improvement in terms of accuracy, linearity, and cross-talk from extracerebral layers.

  7. STARS: An ArcGIS Toolset Used to Calculate the Spatial Information Needed to Fit Spatial Statistical Models to Stream Network Data

    Directory of Open Access Journals (Sweden)

    Erin Peterson

    2014-01-01

    This paper describes the STARS ArcGIS geoprocessing toolset, which is used to calculate the spatial information needed to fit spatial statistical models to stream network data using the SSN package. The STARS toolset is designed for use with a landscape network (LSN), which is a topological data model produced by the FLoWS ArcGIS geoprocessing toolset. An overview of the FLoWS LSN structure and a few particularly useful tools is also provided so that users will have a clear understanding of the underlying data structure that the STARS toolset depends on. This document may be used as an introduction to new users. The methods used to calculate the spatial information and format the final .ssn object are also explicitly described so that users may create their own .ssn object using other data models and software.

  8. Universal Rate Model Selector: A Method to Quickly Find the Best-Fit Kinetic Rate Model for an Experimental Rate Profile

    Science.gov (United States)

    2017-08-01

    … Kinetic rate models range from pure chemical reactions to mass transfer … the rate model that best fits the experimental data is a first-order or homogeneous catalytic reaction … Avrami and intraparticle diffusion rate equations, to name a few. A single fitting algorithm (kinetic rate model) for a reaction does not …

  9. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    Directory of Open Access Journals (Sweden)

    Jinwei Wang

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia’s GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  10. Feature extraction through least squares fit to a simple model

    International Nuclear Information System (INIS)

    Demuth, H.B.

    1976-01-01

    The Oak Ridge National Laboratory (ORNL) presented the Los Alamos Scientific Laboratory (LASL) with 18 radiographs of fuel rod test bundles. The problem is to estimate the thickness of the gap between some cylindrical rods and a flat wall surface. The edges of the gaps are poorly defined due to finite source size, x-ray scatter, parallax, film grain noise, and other degrading effects. The radiographs were scanned and the scan-line data were averaged to reduce noise and to convert the problem to one dimension. A model of the ideal gap, convolved with an appropriate point-spread function, was fit to the averaged data with a least squares program; and the gap width was determined from the final fitted-model parameters. The least squares routine did converge and the gaps obtained are of reasonable size. The method is remarkably insensitive to noise. This report describes the problem, the techniques used to solve it, and the results and conclusions. Suggestions for future work are also given

  11. The FITS model office ergonomics program: a model for best practice.

    Science.gov (United States)

    Chim, Justine M Y

    2014-01-01

    An effective office ergonomics program can predict positive results in reducing musculoskeletal injury rates, enhancing productivity, and improving staff well-being and job satisfaction. Its objective is to provide a systematic solution to manage the potential risk of musculoskeletal disorders among computer users in an office setting. A FITS Model office ergonomics program is developed. The FITS Model Office Ergonomics Program has been developed which draws on the legislative requirements for promoting the health and safety of workers using computers for extended periods as well as previous research findings. The Model is developed according to the practical industrial knowledge in ergonomics, occupational health and safety management, and human resources management in Hong Kong and overseas. This paper proposes a comprehensive office ergonomics program, the FITS Model, which considers (1) Furniture Evaluation and Selection; (2) Individual Workstation Assessment; (3) Training and Education; (4) Stretching Exercises and Rest Break as elements of an effective program. An experienced ergonomics practitioner should be included in the program design and implementation. Through the FITS Model Office Ergonomics Program, the risk of musculoskeletal disorders among computer users can be eliminated or minimized, and workplace health and safety and employees' wellness enhanced.

  12. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  13. vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments

    Directory of Open Access Journals (Sweden)

    Demeter Lisa

    2010-05-01

    Background: The replication rate (or fitness) between viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Results: Based on a mathematical model and several statistical methods (a least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Conclusions: Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced to make the computational tool more flexible to accommodate various experimental conditions. This Web-based tool is implemented in C# language with Microsoft ASP.NET, and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/.
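    The regression idea behind such tools can be sketched as follows (this is not the vFitness code): the net growth-rate difference is the slope of ln(mutant/wild-type) over time, estimated from all time points rather than only the first and last; the viral-load numbers below are hypothetical.

```python
# Sketch of the regression idea (not the vFitness implementation): relative fitness
# is estimated from the slope of ln(mutant/wild-type) over time using all time points
# instead of only the first and last ("two-point") measurements. Data are made up.
import numpy as np

days = np.array([0, 2, 4, 6, 8], dtype=float)
mutant = np.array([1.0e4, 2.5e4, 6.8e4, 1.6e5, 4.1e5])      # hypothetical viral loads
wildtype = np.array([1.0e4, 3.6e4, 1.4e5, 5.3e5, 1.9e6])

log_ratio = np.log(mutant / wildtype)
slope, intercept = np.polyfit(days, log_ratio, 1)            # least squares over all points
print(f"net growth-rate difference d = {slope:.3f} per day (negative: mutant less fit)")

# Two-point estimate, for comparison
d2 = (log_ratio[-1] - log_ratio[0]) / (days[-1] - days[0])
print(f"two-point estimate: {d2:.3f} per day")
```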

  14. Assessing item fit for unidimensional item response theory models using residuals from estimated item response functions.

    Science.gov (United States)

    Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee

    2013-07-01

    Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.

  15. Efficient scheme for parametric fitting of data in arbitrary dimensions.

    Science.gov (United States)

    Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching

    2008-07-01

    We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for a large amount of data fitting. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
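    A minimal sketch of parametric fitting in a Legendre basis is shown below, using NumPy's Legendre class rather than the authors' own algorithm; the signal and noise levels are made up for illustration.

```python
# Minimal sketch of parametric fitting in a Legendre basis (using NumPy's Legendre
# class, not the authors' algorithm) to extract the smooth global trend of a noisy,
# fluctuating signal. Signal and noise are hypothetical.
import numpy as np
from numpy.polynomial import Legendre

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 400)
signal = 1.5 - 0.8 * x + 2.0 * x ** 3                 # smooth global structure
data = signal + rng.normal(scale=0.3, size=x.size)    # superimposed fluctuations

fit = Legendre.fit(x, data, deg=5)                    # least-squares fit in the Legendre basis
coeffs = fit.convert().coef                           # coefficients on the Legendre polynomials
print("Legendre coefficients:", np.round(coeffs, 2))
print("residual std:", np.round(np.std(data - fit(x)), 3))
```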

  16. Levy flights and self-similar exploratory behaviour of termite workers: beyond model fitting.

    Directory of Open Access Journals (Sweden)

    Octavio Miramontes

    Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show by analyzing over half a million movement displacements that isolated termite workers actually exhibit a range of very interesting dynamical properties--including Lévy flights--in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale.

  17. Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model

    International Nuclear Information System (INIS)

    Edwards, Darrin C.; Kupinski, Matthew A.; Metz, Charles E.; Nishikawa, Robert M.

    2002-01-01

We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to these data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well.

  18. Introducing the fit-criteria assessment plot - A visualisation tool to assist class enumeration in group-based trajectory modelling.

    Science.gov (United States)

    Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria

    2017-10-01

    Background and objective Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class-enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods An R-code that accepts universal data input is presented. The programme condenses relevant group-based trajectory modelling output information of model fit indices in automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate fit-criteria assessment plot's utility. Results Fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. Fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions Fit-criteria assessment plot is an exploratory, visualisation tool that can be employed to assist decisions in the initial and decisive phase of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster group-based trajectory modelling's adequate use.

  19. The fitness landscape of HIV-1 gag: advanced modeling approaches and validation of model predictions by in vitro testing.

    Directory of Open Access Journals (Sweden)

    Jaclyn K Mann

    2014-08-01

Full Text Available Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model) that is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple and 25 single mutations within these) predicted to be either harmful to HIV-1 viability or fitness-neutral were introduced into HIV-1 NL4-3 by site-directed mutagenesis and replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = -0.74, p = 3.6×10^-6) are strongly correlated, and this was further strengthened in the regularized Ising model (r = -0.83, p = 3.7×10^-12). Performance of the Potts model (r = -0.73, p = 9.7×10^-9) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion.

  20. Top 10 Research Questions Related to Youth Aerobic Fitness.

    Science.gov (United States)

    Armstrong, Neil

    2017-06-01

Peak oxygen uptake (V̇O2) is internationally recognized as the criterion measure of youth aerobic fitness, but despite pediatric data being available for almost 80 years, its measurement and interpretation in relation to growth, maturation, and health remain controversial. The trainability of youth aerobic fitness continues to be hotly debated, and causal mechanisms of training-induced changes and their modulation by chronological age, biological maturation, and sex are still to be resolved. The daily physical activity of youth is characterized by intermittent bouts and rapid changes in intensity, but physical activity of the intensity and duration required to determine peak V̇O2 is rarely (if ever) experienced by most youth. In this context, it may therefore be the transient kinetics of pulmonary V̇O2 that best reflect youth aerobic fitness. There are remarkably few rigorous studies of youth pulmonary V̇O2 kinetics at the onset of exercise in different intensity domains, and the influence of chronological age, biological maturation, and sex during step changes in exercise intensity are not confidently documented. Understanding the trainability of the parameters of youth pulmonary V̇O2 kinetics is primarily based on a few comparative studies of athletes and nonathletes. The underlying mechanisms of changes due to training require further exploration. The aims of the present article are therefore to provide a brief overview of aerobic fitness during growth and maturation, increase awareness of current controversies in its assessment and interpretation, identify gaps in knowledge, raise 10 relevant research questions, and indicate potential areas for future research.

  1. Stochastic modeling of sunshine number data

    Science.gov (United States)

    Brabec, Marek; Paulescu, Marius; Badescu, Viorel

    2013-11-01

In this paper, we will present a unified statistical modeling framework for estimation and forecasting of sunshine number (SSN) data. The sunshine number was proposed earlier to describe sunshine time series in qualitative terms (Theor Appl Climatol 72 (2002) 127-136) and has since been shown to be useful not only for theoretical purposes but also for practical considerations, e.g. those related to the development of photovoltaic energy production. Statistical modeling and prediction of SSN as a binary time series has, however, been a challenging problem. Our statistical model for SSN time series is based on an underlying stochastic process formulation of Markov chain type. We will show how its transition probabilities can be estimated efficiently within a logistic regression framework. In fact, our logistic Markovian model can be fitted relatively easily via a maximum likelihood approach. This is optimal in many respects, and it also enables us to use formalized statistical inference theory to obtain not only point estimates of the transition probabilities and functions of them, but also the related uncertainties, as well as to test various hypotheses of practical interest. It is straightforward to deal with non-homogeneous transition probabilities in this framework. Very importantly, from both physical and practical points of view, the logistic Markov model class allows us to test hypotheses about how SSN depends on various external covariates (e.g. elevation angle, solar time, etc.) and about details of the dynamic model (order and functional shape of the Markov kernel, etc.). Therefore, using a generalized additive model (GAM) approach, we can fit and compare models of various complexity while keeping a physical interpretation of the statistical model and its parts. After introducing the Markovian model and the general approach for identification of its parameters, we will illustrate its use and performance on high resolution SSN data from the Solar
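
    A minimal sketch of the core idea, assuming a first-order logistic Markov model whose transition probabilities are estimated by logistic regression on the lagged state plus one covariate, is shown below; the simulated series and the single covariate are hypothetical stand-ins for real SSN data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Hypothetical binary sunshine-number series with one covariate (elevation-angle proxy)
n = 5000
elev = np.sin(np.linspace(0, 60 * np.pi, n)) ** 2
ssn = np.zeros(n, dtype=int)
for t in range(1, n):
    p = 1.0 / (1.0 + np.exp(-(-1.0 + 2.5 * ssn[t - 1] + 1.5 * elev[t])))
    ssn[t] = rng.random() < p

# First-order logistic Markov model: regress SSN(t) on SSN(t-1) and the covariate
y = ssn[1:]
X = sm.add_constant(np.column_stack([ssn[:-1], elev[1:]]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)          # recovers the transition structure
print(fit.predict(X[:5]))  # fitted transition probabilities for the first epochs
```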

  2. Stochastic modeling of sunshine number data

    Energy Technology Data Exchange (ETDEWEB)

    Brabec, Marek, E-mail: mbrabec@cs.cas.cz [Department of Nonlinear Modeling, Institute of Computer Science, Academy of Sciences of the Czech Republic, Pod Vodarenskou vezi 2, 182 07 Prague 8 (Czech Republic); Paulescu, Marius [Physics Department, West University of Timisoara, V. Parvan 4, 300223 Timisoara (Romania); Badescu, Viorel [Candida Oancea Institute, Polytechnic University of Bucharest, Spl. Independentei 313, 060042 Bucharest (Romania)

    2013-11-13

In this paper, we will present a unified statistical modeling framework for estimation and forecasting of sunshine number (SSN) data. The sunshine number was proposed earlier to describe sunshine time series in qualitative terms (Theor Appl Climatol 72 (2002) 127-136) and has since been shown to be useful not only for theoretical purposes but also for practical considerations, e.g. those related to the development of photovoltaic energy production. Statistical modeling and prediction of SSN as a binary time series has, however, been a challenging problem. Our statistical model for SSN time series is based on an underlying stochastic process formulation of Markov chain type. We will show how its transition probabilities can be estimated efficiently within a logistic regression framework. In fact, our logistic Markovian model can be fitted relatively easily via a maximum likelihood approach. This is optimal in many respects, and it also enables us to use formalized statistical inference theory to obtain not only point estimates of the transition probabilities and functions of them, but also the related uncertainties, as well as to test various hypotheses of practical interest. It is straightforward to deal with non-homogeneous transition probabilities in this framework. Very importantly, from both physical and practical points of view, the logistic Markov model class allows us to test hypotheses about how SSN depends on various external covariates (e.g. elevation angle, solar time, etc.) and about details of the dynamic model (order and functional shape of the Markov kernel, etc.). Therefore, using a generalized additive model (GAM) approach, we can fit and compare models of various complexity while keeping a physical interpretation of the statistical model and its parts. After introducing the Markovian model and the general approach for identification of its parameters, we will illustrate its use and performance on high resolution SSN data from the Solar

  3. Stochastic modeling of sunshine number data

    International Nuclear Information System (INIS)

    Brabec, Marek; Paulescu, Marius; Badescu, Viorel

    2013-01-01

In this paper, we will present a unified statistical modeling framework for estimation and forecasting of sunshine number (SSN) data. The sunshine number was proposed earlier to describe sunshine time series in qualitative terms (Theor Appl Climatol 72 (2002) 127-136) and has since been shown to be useful not only for theoretical purposes but also for practical considerations, e.g. those related to the development of photovoltaic energy production. Statistical modeling and prediction of SSN as a binary time series has, however, been a challenging problem. Our statistical model for SSN time series is based on an underlying stochastic process formulation of Markov chain type. We will show how its transition probabilities can be estimated efficiently within a logistic regression framework. In fact, our logistic Markovian model can be fitted relatively easily via a maximum likelihood approach. This is optimal in many respects, and it also enables us to use formalized statistical inference theory to obtain not only point estimates of the transition probabilities and functions of them, but also the related uncertainties, as well as to test various hypotheses of practical interest. It is straightforward to deal with non-homogeneous transition probabilities in this framework. Very importantly, from both physical and practical points of view, the logistic Markov model class allows us to test hypotheses about how SSN depends on various external covariates (e.g. elevation angle, solar time, etc.) and about details of the dynamic model (order and functional shape of the Markov kernel, etc.). Therefore, using a generalized additive model (GAM) approach, we can fit and compare models of various complexity while keeping a physical interpretation of the statistical model and its parts. After introducing the Markovian model and the general approach for identification of its parameters, we will illustrate its use and performance on high resolution SSN data from the Solar

  4. Goodness-of-fit tests for multi-dimensional copulas: Expanding application to historical drought data

    Directory of Open Access Journals (Sweden)

    Ming-wei Ma

    2013-01-01

Full Text Available The question of how to choose a copula model that best fits a given dataset is a predominant limitation of the copula approach, and the present study aims to investigate the techniques of goodness-of-fit tests for multi-dimensional copulas. A goodness-of-fit test based on Rosenblatt's transformation was mathematically expanded from two dimensions to three dimensions and procedures of a bootstrap version of the test were provided. Through stochastic copula simulation, an empirical application of historical drought data at the Lintong Gauge Station shows that the goodness-of-fit tests perform well, revealing that both trivariate Gaussian and Student t copulas are acceptable for modeling the dependence structures of the observed drought duration, severity, and peak. Goodness-of-fit tests for multi-dimensional copulas can thus provide further support for the potential application of a wider range of copulas to describing the associations of correlated hydrological variables. However, for the application of copulas with more than three dimensions, greater computational effort as well as exploration and parameterization of the corresponding copulas are required.
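
    For intuition, a two-dimensional version of a Rosenblatt-based check for a Gaussian copula can be sketched as below (the study itself works in three dimensions and calibrates the test with a bootstrap; here the estimated parameter is simply plugged in, so the quoted p-value is only approximate and all data are simulated).

```python
import numpy as np
from scipy import stats

def rosenblatt_gaussian_gof(x, y):
    """Crude goodness-of-fit check for a bivariate Gaussian copula: form
    pseudo-observations, apply the Rosenblatt transform, and test the
    conditional margin for uniformity."""
    n = len(x)
    u = stats.rankdata(x) / (n + 1)            # pseudo-observations in (0, 1)
    v = stats.rankdata(y) / (n + 1)
    z1, z2 = stats.norm.ppf(u), stats.norm.ppf(v)
    rho = np.corrcoef(z1, z2)[0, 1]            # fitted copula parameter
    # Rosenblatt transform: U2' = Phi((z2 - rho*z1) / sqrt(1 - rho^2))
    e2 = stats.norm.cdf((z2 - rho * z1) / np.sqrt(1.0 - rho ** 2))
    return stats.kstest(e2, "uniform")          # should look uniform under H0

rng = np.random.default_rng(3)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=2000)
print(rosenblatt_gaussian_gof(z[:, 0], z[:, 1]))
```

    A proper application, as the abstract describes, would calibrate the test statistic through a parametric bootstrap because the copula parameter is estimated from the same data.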

  5. More Sophisticated Fits of the Orbits of Haumea's Interacting Moons

    Science.gov (United States)

    Oldroyd, William Jared; Ragozzine, Darin; Porter, Simon

    2018-04-01

Since the discovery of Haumea's moons, it has been a challenge to model the orbits of its moons, Hi’iaka and Namaka. With many precision HST observations, Ragozzine & Brown (2009) succeeded in calculating a three-point mass model, which was essential because Keplerian orbits were not a statistically acceptable fit. New data obtained in 2010 could be fit by adding a J2 and spin pole to Haumea, but new data from 2015 were far from the predicted locations, even after an extensive exploration using Bayesian Markov Chain Monte Carlo methods (using emcee). Here we report on continued investigations as to why our model cannot fit the full 10-year baseline of data. We note that ignoring Haumea and instead examining the relative motion of the two moons in the Hi’iaka-centered frame leads to adequate fits for the data. This suggests there are additional parameters connected to Haumea that will be required in a full model. These parameters are potentially related to photocenter-barycenter shifts which could be significant enough to affect the fitting process; these are unlikely to be caused by the newly discovered ring (Ortiz et al. 2017) or by unknown satellites (Burkhart et al. 2016). Additionally, we have developed a new SPIN+N-bodY integrator called SPINNY that self-consistently calculates the interactions between n-quadrupoles and is designed to test the importance of other possible effects (Haumea C22, satellite torques on the spin-pole, Sun, etc.) on our astrometric fits. By correctly determining the orbits of Haumea’s satellites, we develop a better understanding of the physical properties of each of the objects with implications for the formation of Haumea, its moons, and its collisional family.
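
    The Bayesian MCMC machinery mentioned (emcee) can be illustrated on a toy astrometric problem; the circular-orbit model below is a deliberately simplified stand-in for the three-point-mass, J2 and spin-pole model discussed above, and all epochs, uncertainties, and parameter values are invented.

```python
import numpy as np
import emcee

# Toy astrometric model: circular orbit projected on the sky
def model_xy(t, a, period, phase):
    ang = 2 * np.pi * t / period + phase
    return a * np.cos(ang), a * np.sin(ang)

def log_prob(params, t, x_obs, y_obs, sigma):
    a, period, phase = params
    if a <= 0 or period <= 0:
        return -np.inf
    x_mod, y_mod = model_xy(t, a, period, phase)
    return -0.5 * np.sum(((x_obs - x_mod) ** 2 + (y_obs - y_mod) ** 2) / sigma ** 2)

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 100, 40))              # sparse observation epochs
x_true, y_true = model_xy(t, 1.0, 49.5, 0.3)
sigma = 0.02
x_obs = x_true + sigma * rng.standard_normal(t.size)
y_obs = y_true + sigma * rng.standard_normal(t.size)

ndim, nwalkers = 3, 32
p0 = np.array([1.0, 50.0, 0.0]) + 1e-3 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(t, x_obs, y_obs, sigma))
sampler.run_mcmc(p0, 2000, progress=False)
print(sampler.get_chain(discard=500, flat=True).mean(axis=0))  # posterior means
```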

  6. Are Physical Education Majors Models for Fitness?

    Science.gov (United States)

    Kamla, James; Snyder, Ben; Tanner, Lori; Wash, Pamela

    2012-01-01

    The National Association of Sport and Physical Education (NASPE) (2002) has taken a firm stance on the importance of adequate fitness levels of physical education teachers stating that they have the responsibility to model an active lifestyle and to promote fitness behaviors. Since the NASPE declaration, national initiatives like Let's Move…

  7. Can the Social Vulnerability Index Be Used for More Than Emergency Preparedness? An Examination Using Youth Physical Fitness Data.

    Science.gov (United States)

    Gay, Jennifer L; Robb, Sara W; Benson, Kelsey M; White, Alice

    2016-02-01

    The Social Vulnerability Index (SVI), a publicly available dataset, is used in emergency preparedness to identify communities in greatest need of resources. The SVI includes multiple socioeconomic, demographic, and geographic indicators that also are associated with physical fitness and physical activity. This study examined the utility of using the SVI to explain variation in youth fitness, including aerobic capacity and body mass index. FITNESSGRAM data from 2,126 Georgia schools were matched at the census tract level with SVI themes of socioeconomic, household composition, minority status and language, and housing and transportation. Multivariate multiple regression models were used to test whether SVI factors explained fitness outcomes, controlling for grade level (ie, elementary, middle, high school) and stratified by gender. SVI themes explained the most variation in aerobic fitness and body mass index for both boys and girls (R2 values 11.5% to 26.6%). Socioeconomic, Minority Status and Language, and Housing and Transportation themes were salient predictors of fitness outcomes. Youth fitness in Georgia was related to socioeconomic, demographic, and geographic themes. The SVI may be a useful needs assessment tool for health officials and researchers examining multilevel influences on health behaviors or identifying communities for prevention efforts.

  8. An evaluation of the Bayesian approach to fitting the N-mixture model for use with pseudo-replicated count data

    Science.gov (United States)

    Toribo, S.G.; Gray, B.R.; Liang, S.

    2011-01-01

    The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
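
    The N-mixture likelihood underlying both the Bayesian and the classical fits marginalizes the latent site abundances out of a Poisson-Binomial hierarchy. The following is a maximum-likelihood sketch of that marginal likelihood (not the Bayesian hierarchical fit studied in the paper); the simulated counts and true parameter values are illustrative.

```python
import numpy as np
from scipy import stats, optimize

def nmix_negloglik(params, counts, n_max=200):
    """Negative log-likelihood of the Royle N-mixture model.
    counts: array of shape (n_sites, n_visits); abundance N_i ~ Poisson(lam),
    observations y_ij ~ Binomial(N_i, p), with N_i summed (marginalized) out."""
    log_lam, logit_p = params
    lam, p = np.exp(log_lam), 1.0 / (1.0 + np.exp(-logit_p))
    Ns = np.arange(n_max + 1)
    log_prior = stats.poisson.logpmf(Ns, lam)
    ll = 0.0
    for y in counts:                                    # loop over sites
        log_obs = stats.binom.logpmf(y[:, None], Ns[None, :], p).sum(axis=0)
        ll += np.logaddexp.reduce(log_prior + log_obs)  # marginalize over N
    return -ll

rng = np.random.default_rng(5)
true_lam, true_p = 8.0, 0.4
N = rng.poisson(true_lam, size=100)
counts = rng.binomial(N[:, None], true_p, size=(100, 3))

res = optimize.minimize(nmix_negloglik, x0=[1.0, 0.0], args=(counts,), method="Nelder-Mead")
print(np.exp(res.x[0]), 1.0 / (1.0 + np.exp(-res.x[1])))  # estimates of lambda and p
```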

  9. BIG DATA-Related Challenges and Opportunities in Earth System Modeling

    Science.gov (United States)

    Bamzai, A. S.

    2012-12-01

    Knowledge of the Earth's climate has increased immensely in recent decades, both through observational analysis and modeling. BIG DATA-related challenges emerge in our quest for understanding the variability and predictability of the climate and earth system on a range of time scales, as well as in our endeavor to improve predictive capability using state-of-the-science models. To enable further scientific discovery, bottlenecks in current paradigms need to be addressed. An overview of current NSF activities in Earth System Modeling with a focus on associated data-related challenges and opportunities, will be presented.

  10. Mutation Supply and Relative Fitness Shape the Genotypes of Ciprofloxacin-Resistant Escherichia coli.

    Science.gov (United States)

    Huseby, Douglas L; Pietsch, Franziska; Brandis, Gerrit; Garoff, Linnéa; Tegehall, Angelica; Hughes, Diarmaid

    2017-05-01

    Ciprofloxacin is an important antibacterial drug targeting Type II topoisomerases, highly active against Gram-negatives including Escherichia coli. The evolution of resistance to ciprofloxacin in E. coli always requires multiple genetic changes, usually including mutations affecting two different drug target genes, gyrA and parC. Resistant mutants selected in vitro or in vivo can have many different mutations in target genes and efflux regulator genes that contribute to resistance. Among resistant clinical isolates the genotype, gyrA S83L D87N, parC S80I is significantly overrepresented suggesting that it has a selective advantage. However, the evolutionary or functional significance of this high frequency resistance genotype is not fully understood. By combining experimental data and mathematical modeling, we addressed the reasons for the predominance of this specific genotype. The experimental data were used to model trajectories of mutational resistance evolution under different conditions of drug exposure and population bottlenecks. We identified the order in which specific mutations are selected in the clinical genotype, showed that the high frequency genotype could be selected over a range of drug selective pressures, and was strongly influenced by the relative fitness of alternative mutations and factors affecting mutation supply. Our data map for the first time the fitness landscape that constrains the evolutionary trajectories taken during the development of clinical resistance to ciprofloxacin and explain the predominance of the most frequently selected genotype. This study provides strong support for the use of in vitro competition assays as a tool to trace evolutionary trajectories, not only in the antibiotic resistance field. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  11. Fitting a function to time-dependent ensemble averaged data

    DEFF Research Database (Denmark)

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion...... method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.

  12. Alcohol advertising, consumption and abuse: a covariance-structural modelling look at Strickland's data.

    Science.gov (United States)

    Adlaf, E M; Kohn, P M

    1989-07-01

    Re-analysis employing covariance-structural models was conducted on Strickland's (1983) survey data on 772 drinking students from Grades 7, 9 and 11. These data bear on the relations among alcohol consumption, alcohol abuse, association with drinking peers and exposure to televised alcohol advertising. Whereas Strickland used a just-identified model which, therefore, could not be tested for goodness of fit, our re-analysis tested several alternative models, which could be contradicted by the data. One model did fit his data particularly well. Its major implications are as follows: (1) Symptomatic consumption, negative consequences and self-rated severity of alcohol-related problems apparently reflect a common underlying factor, namely alcohol abuse. (2) Use of alcohol to relieve distress and frequency of intoxication, however, appear not to reflect abuse, although frequent intoxication contributes substantially to it. (3). Alcohol advertising affects consumption directly and abuse indirectly, although peer association has far greater impact on both consumption and abuse. These findings are interpreted as lending little support to further restrictions on advertising.

  13. Predicting Barrett's Esophagus in Families: An Esophagus Translational Research Network (BETRNet) Model Fitting Clinical Data to a Familial Paradigm.

    Science.gov (United States)

    Sun, Xiangqing; Elston, Robert C; Barnholtz-Sloan, Jill S; Falk, Gary W; Grady, William M; Faulx, Ashley; Mittal, Sumeet K; Canto, Marcia; Shaheen, Nicholas J; Wang, Jean S; Iyer, Prasad G; Abrams, Julian A; Tian, Ye D; Willis, Joseph E; Guda, Kishore; Markowitz, Sanford D; Chandar, Apoorva; Warfe, James M; Brock, Wendy; Chak, Amitabh

    2016-05-01

Barrett's esophagus is often asymptomatic and only a small portion of Barrett's esophagus patients are currently diagnosed and under surveillance. Therefore, it is important to develop risk prediction models to identify high-risk individuals with Barrett's esophagus. Familial aggregation of Barrett's esophagus and esophageal adenocarcinoma, and the increased risk of esophageal adenocarcinoma for individuals with a family history, raise the necessity of including genetic factors in the prediction model. Methods to determine risk prediction models using both risk covariates and ascertained family data are not well developed. We developed a Barrett's Esophagus Translational Research Network (BETRNet) risk prediction model from 787 singly ascertained Barrett's esophagus pedigrees and 92 multiplex Barrett's esophagus pedigrees, fitting a multivariate logistic model that incorporates family history and clinical risk factors. The eight risk factors, age, sex, education level, parental status, smoking, heartburn frequency, regurgitation frequency, and use of acid suppressant, were included in the model. The prediction accuracy was evaluated on the training dataset and an independent validation dataset of 643 multiplex Barrett's esophagus pedigrees. Our results indicate family information helps to predict Barrett's esophagus risk, and predicting in families improves both prediction calibration and discrimination accuracy. Our model can predict Barrett's esophagus risk for anyone with family members known to have, or not have, had Barrett's esophagus. It can predict risk for unrelated individuals without knowing any relatives' information. Our prediction model will shed light on effectively identifying high-risk individuals for Barrett's esophagus screening and surveillance, consequently allowing intervention at an early stage, and reducing mortality from esophageal adenocarcinoma. Cancer Epidemiol Biomarkers Prev; 25(5); 727-35. ©2016 AACR.

  14. Survival analysis of clinical mastitis data using a nested frailty Cox model fit as a mixed-effects Poisson model.

    Science.gov (United States)

    Elghafghuf, Adel; Dufour, Simon; Reyher, Kristen; Dohoo, Ian; Stryhn, Henrik

    2014-12-01

    Mastitis is a complex disease affecting dairy cows and is considered to be the most costly disease of dairy herds. The hazard of mastitis is a function of many factors, both managerial and environmental, making its control a difficult issue to milk producers. Observational studies of clinical mastitis (CM) often generate datasets with a number of characteristics which influence the analysis of those data: the outcome of interest may be the time to occurrence of a case of mastitis, predictors may change over time (time-dependent predictors), the effects of factors may change over time (time-dependent effects), there are usually multiple hierarchical levels, and datasets may be very large. Analysis of such data often requires expansion of the data into the counting-process format - leading to larger datasets - thus complicating the analysis and requiring excessive computing time. In this study, a nested frailty Cox model with time-dependent predictors and effects was applied to Canadian Bovine Mastitis Research Network data in which 10,831 lactations of 8035 cows from 69 herds were followed through lactation until the first occurrence of CM. The model was fit to the data as a Poisson model with nested normally distributed random effects at the cow and herd levels. Risk factors associated with the hazard of CM during the lactation were identified, such as parity, calving season, herd somatic cell score, pasture access, fore-stripping, and proportion of treated cases of CM in a herd. The analysis showed that most of the predictors had a strong effect early in lactation and also demonstrated substantial variation in the baseline hazard among cows and between herds. A small simulation study for a setting similar to the real data was conducted to evaluate the Poisson maximum likelihood estimation approach with both Gaussian quadrature method and Laplace approximation. Further, the performance of the two methods was compared with the performance of a widely used estimation

  15. Modelling dense relational data

    DEFF Research Database (Denmark)

    Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2012-01-01

they are not naturally suited for kernel K-means. We propose a generative Bayesian model for dense matrices which generalizes kernel K-means to consider off-diagonal interactions in matrices of interactions, and demonstrate its ability to detect structure on both artificial data and two real data sets....

  16. Resolving overlapping peaks in ARXPS data: The effect of noise and fitting method

    International Nuclear Information System (INIS)

    Muñoz-Flores, Jaime; Herrera-Gomez, Alberto

    2012-01-01

Highlights: ► Noise is an important factor affecting the fitting of overlapping peaks in XPS data. ► The combined information in ARXPS data can be used to improve fitting reliability. ► The error on the estimation of the peak parameters depends on the peak-fitting method. ► Simultaneous fitting method is much more robust against noise than sequential fitting. ► The estimation of the error range is better done with ARXPS data than with XPS data. - Abstract: Peak-fitting of X-ray photoelectron spectroscopy (XPS) data can be very sensitive to noise when the difference in binding energy among the peaks is smaller than the width of the peaks. This sensitivity depends on the fitting algorithm. Angle-resolved XPS (ARXPS) analysis offers the opportunity of employing the combined information contained in the data at the various angles to reduce the sensitivity to noise. The assumption of shared peak parameters (center and width) among the spectra for the different angles, and how it is introduced into the analysis, plays a basic role. Sequential fitting is the usual practice in ARXPS data peak-fitting. It consists of first estimating the center and width of the peaks from the data acquired at one of the angles, and then using those parameters as a starting approximation for fitting the data for each of the rest of the angles. An improvement of this method consists of averaging the centers and widths of the peaks obtained at the different angles, and then employing these values to assess the areas of the peaks for each angle. Another strategy for using the combined information is by assessing the peak parameters from the sum of the experimental data. The complete use of the combined information contained in the data-set is optimized by the simultaneous fitting method. It consists of the assessment of the center and width of the peaks by fitting the data at all the angles simultaneously. Computer-generated data was employed to compare the sensitivity with respect
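
    The simultaneous-fitting idea can be sketched as follows: peak centers and widths are shared across all angle-resolved spectra while the areas vary by angle, and all residuals enter a single least-squares problem. The two-peak doublet, the noise level, and the starting values below are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian(x, area, center, width):
    return area * np.exp(-0.5 * ((x - center) / width) ** 2) / (width * np.sqrt(2 * np.pi))

def simultaneous_fit(energy, spectra, n_peaks, p0):
    """Fit all angle-resolved spectra at once: centers and widths are shared
    across angles, only the peak areas vary with angle."""
    n_spec = spectra.shape[0]

    def unpack(params):
        centers = params[:n_peaks]
        widths = params[n_peaks:2 * n_peaks]
        areas = params[2 * n_peaks:].reshape(n_spec, n_peaks)
        return centers, widths, areas

    def residuals(params):
        centers, widths, areas = unpack(params)
        res = []
        for s in range(n_spec):
            model = sum(gaussian(energy, areas[s, k], centers[k], widths[k])
                        for k in range(n_peaks))
            res.append(spectra[s] - model)
        return np.concatenate(res)

    return least_squares(residuals, p0), unpack

# Hypothetical two-peak doublet measured at three emission angles
rng = np.random.default_rng(6)
energy = np.linspace(280, 290, 400)
true_centers, true_widths = [284.0, 285.2], [0.5, 0.5]
true_areas = np.array([[3.0, 1.0], [2.0, 1.5], [1.0, 2.5]])
spectra = np.array([
    sum(gaussian(energy, a, c, w) for a, c, w in zip(row, true_centers, true_widths))
    + 0.02 * rng.standard_normal(energy.size)
    for row in true_areas
])

p0 = np.concatenate([[283.8, 285.4], [0.6, 0.6], np.full(6, 1.0)])
fit, unpack = simultaneous_fit(energy, spectra, n_peaks=2, p0=p0)
print(unpack(fit.x)[0])   # shared peak centers recovered from all angles jointly
```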

  17. Code REX to fit experimental data to exponential functions and graphics plotting

    International Nuclear Information System (INIS)

    Romero, L.; Travesi, A.

    1983-01-01

The REX code, written in Fortran IV, fits a set of experimental data to different kinds of functions, namely a straight line (Y = A + BX) and various exponential types (Y = A·B^X, Y = A·X^B, Y = A·exp(BX)), using the least squares criterion. The fitting can be done directly for one selected function or for the four simultaneously, and the code allows the user to choose the function that best fits the data, since it presents the fit statistics for all of them. Further, it plots the fitted function in the appropriate coordinate axis system. An additional option also allows graphic plotting of the experimental data used for the fitting. All the data necessary to execute this code are requested from the operator at the terminal screen through an interactive screen-operator dialogue, and the values are entered through the keyboard. This code can be executed on any computer provided with a graphics screen and keyboard terminal, with an X-Y plotter serially connected to the graphics terminal. (Author) 5 refs
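
    A rough modern analogue of this kind of multi-function fitting, assuming the usual log-linearization of the exponential forms (one plausible reading of a least-squares treatment, not a port of the Fortran code), might look as follows; the test data are simulated.

```python
import numpy as np

def fit_candidates(x, y):
    """Least-squares fits of the four REX-style function families, using
    log-linearization so each reduces to a straight-line fit."""
    results = {}

    b, a = np.polyfit(x, y, 1)                               # Y = A + B*X
    results["Y = A + B*X"] = (a, b)

    slope, intercept = np.polyfit(x, np.log(y), 1)           # ln Y = ln A + (ln B)*X
    results["Y = A*B**X"] = (np.exp(intercept), np.exp(slope))

    slope, intercept = np.polyfit(np.log(x), np.log(y), 1)   # ln Y = ln A + B*ln X
    results["Y = A*X**B"] = (np.exp(intercept), slope)

    slope, intercept = np.polyfit(x, np.log(y), 1)           # same regression, other form
    results["Y = A*exp(B*X)"] = (np.exp(intercept), slope)

    return results

def rss(name, a, b, x, y):
    models = {
        "Y = A + B*X": a + b * x,
        "Y = A*B**X": a * b ** x,
        "Y = A*X**B": a * x ** b,
        "Y = A*exp(B*X)": a * np.exp(b * x),
    }
    return np.sum((y - models[name]) ** 2)

rng = np.random.default_rng(7)
x = np.linspace(1.0, 10.0, 50)
y = 2.0 * np.exp(0.3 * x) * (1 + 0.02 * rng.standard_normal(x.size))

for name, (a, b) in fit_candidates(x, y).items():
    print(f"{name:15s}  A = {a:7.3f}  B = {b:7.3f}  RSS = {rss(name, a, b, x, y):9.3f}")
```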

  18. Fitted HBT radii versus space-time variances in flow-dominated models

    International Nuclear Information System (INIS)

    Lisa, Mike; Frodermann, Evan; Heinz, Ulrich

    2007-01-01

    The inability of otherwise successful dynamical models to reproduce the 'HBT radii' extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the 'RHIC HBT Puzzle'. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source which can be directly computed from the emission function, without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models some of which exhibit significant deviations from simple Gaussian behaviour. By Fourier transforming the emission function we compute the 2-particle correlation function and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and measured HBT radii remain, we show that a more 'apples-to-apples' comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data. (author)

  19. Emotional fit with culture: a predictor of individual differences in relational well-being.

    Science.gov (United States)

    De Leersnyder, Jozefien; Mesquita, Batja; Kim, Heejung; Eom, Kimin; Choi, Hyewon

    2014-04-01

    There is increasing evidence for emotional fit in couples and groups, but also within cultures. In the current research, we investigated the consequences of emotional fit at the cultural level. Given that emotions reflect people's view on the world, and that shared views are associated with good social relationships, we expected that an individual's fit to the average cultural patterns of emotion would be associated with relational well-being. Using an implicit measure of cultural fit of emotions, we found across 3 different cultural contexts (United States, Belgium, and Korea) that (1) individuals' emotional fit is associated with their level of relational well-being, and that (2) the link between emotional fit and relational well-being is particularly strong when emotional fit is measured for situations pertaining to relationships (rather than for situations that are self-focused). Together, the current studies suggest that people may benefit from emotionally "fitting in" to their culture.

  20. A cautionary note on the use of information fit indexes in covariance structure modeling with means

    NARCIS (Netherlands)

    Wicherts, J.M.; Dolan, C.V.

    2004-01-01

    Information fit indexes such as Akaike Information Criterion, Consistent Akaike Information Criterion, Bayesian Information Criterion, and the expected cross validation index can be valuable in assessing the relative fit of structural equation models that differ regarding restrictiveness. In cases

  1. Rail Track Detection and Modelling in Mobile Laser Scanner Data

    Directory of Open Access Journals (Sweden)

    S. Oude Elberink

    2013-10-01

Full Text Available We present a method for detecting and modelling rails in mobile laser scanner data. The detection is based on the properties of the rail tracks and contact wires such as relative height, linearity and relative position with respect to other objects. Points classified as rail track are used in a 3D modelling algorithm. The modelling is done by first fitting a parametric model of a rail piece to the points along each track, and estimating the position and orientation parameters of each piece model. For each position and orientation parameter a smooth low-order Fourier curve is interpolated. Using all interpolated parameters a mesh model of the rail is reconstructed. The method is explained using two areas from a dataset acquired by a LYNX mobile mapping system in a mountainous area. Residuals between railway laser points and 3D models are in the range of 2 cm. It is concluded that a curve fitting algorithm is essential to reliably and accurately model the rail tracks by using the knowledge that railways follow a continuous and smooth path.
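
    The "smooth low-order Fourier curve" step can be sketched generically: build a truncated Fourier design matrix along the track coordinate and solve by least squares. The heading-angle samples below are hypothetical and stand in for any of the per-piece position or orientation parameters.

```python
import numpy as np

def fit_fourier(s, values, n_harmonics=3):
    """Least-squares fit of a low-order Fourier series
    value(s) ~ a0 + sum_k [a_k cos(k*w*s) + b_k sin(k*w*s)] along track coordinate s."""
    w = 2 * np.pi / (s.max() - s.min())
    cols = [np.ones_like(s)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * w * s), np.sin(k * w * s)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    return lambda snew: np.column_stack(
        [np.ones_like(snew)]
        + [f(k * w * snew) for k in range(1, n_harmonics + 1) for f in (np.cos, np.sin)]
    ) @ coef

# Hypothetical heading-angle samples of rail pieces along 200 m of track
rng = np.random.default_rng(8)
s = np.linspace(0.0, 200.0, 80)
heading = 0.02 * np.sin(2 * np.pi * s / 150.0) + 0.002 * rng.standard_normal(s.size)
smooth_heading = fit_fourier(s, heading, n_harmonics=2)
print(np.std(heading - smooth_heading(s)))  # residual scatter after smoothing
```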

  2. Longitudinal beta regression models for analyzing health-related quality of life scores over time

    Directory of Open Access Journals (Sweden)

    Hunger Matthias

    2012-09-01

    Full Text Available Abstract Background Health-related quality of life (HRQL has become an increasingly important outcome parameter in clinical trials and epidemiological research. HRQL scores are typically bounded at both ends of the scale and often highly skewed. Several regression techniques have been proposed to model such data in cross-sectional studies, however, methods applicable in longitudinal research are less well researched. This study examined the use of beta regression models for analyzing longitudinal HRQL data using two empirical examples with distributional features typically encountered in practice. Methods We used SF-6D utility data from a German older age cohort study and stroke-specific HRQL data from a randomized controlled trial. We described the conceptual differences between mixed and marginal beta regression models and compared both models to the commonly used linear mixed model in terms of overall fit and predictive accuracy. Results At any measurement time, the beta distribution fitted the SF-6D utility data and stroke-specific HRQL data better than the normal distribution. The mixed beta model showed better likelihood-based fit statistics than the linear mixed model and respected the boundedness of the outcome variable. However, it tended to underestimate the true mean at the upper part of the distribution. Adjusted group means from marginal beta model and linear mixed model were nearly identical but differences could be observed with respect to standard errors. Conclusions Understanding the conceptual differences between mixed and marginal beta regression models is important for their proper use in the analysis of longitudinal HRQL data. Beta regression fits the typical distribution of HRQL data better than linear mixed models, however, if focus is on estimating group mean scores rather than making individual predictions, the two methods might not differ substantially.
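
    A marginal beta regression can be written down directly as a maximum-likelihood problem (logit link for the mean, constant precision); the sketch below uses hypothetical utility-like scores with a single time covariate and is not the mixed-model variant discussed in the paper.

```python
import numpy as np
from scipy import stats, optimize

def beta_regression_negloglik(params, X, y):
    """Beta regression with a logit link for the mean mu and a log link for the
    precision phi; y must lie strictly inside (0, 1)."""
    beta, log_phi = params[:-1], params[-1]
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    phi = np.exp(log_phi)
    return -np.sum(stats.beta.logpdf(y, mu * phi, (1.0 - mu) * phi))

# Hypothetical utility scores bounded in (0, 1) with one covariate (time)
rng = np.random.default_rng(9)
n = 1000
time = rng.uniform(0, 2, n)
X = np.column_stack([np.ones(n), time])
mu_true = 1.0 / (1.0 + np.exp(-(1.0 - 0.8 * time)))
phi_true = 20.0
y = rng.beta(mu_true * phi_true, (1.0 - mu_true) * phi_true)

res = optimize.minimize(beta_regression_negloglik, x0=[0.0, 0.0, 1.0],
                        args=(X, y), method="BFGS")
print(res.x[:2], np.exp(res.x[2]))   # intercept, slope (logit scale), precision
```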

  3. Gene conversion at the gray locus of Sordaria fimicola: fit of the experimental data to a hybrid DNA model of recombination.

    Science.gov (United States)

    Kalogeropoulos, A; Thuriaux, P

    1985-03-01

    A hybrid DNA (hDNA) model of recombination has been algebraically formulated, which allows the prediction of frequencies of postmeiotic segregation and conversion of a given allele and their probability of being associated with a crossing over. The model considered is essentially the "Aviemore model." In contrast to some other interpretations of recombination, it states that gene conversion can only result from the repair of heteroduplex hDNA, with postmeiotic segregation resulting from unrepaired heteroduplexes. The model also postulates that crossing over always occurs distally to the initiation site of the hDNA. Eleven types of conversion and postmeiotic segregation with or without associated crossover were considered. Their theoretical frequencies are given by 11 linear equations with ten variables, four describing heteroduplex repair, four giving the probability of hDNA formation and its topological properties and two giving the probability that crossing over occurs at the left or right of the converting allele. Using the experimental data of Kitani and coworkers on conversion at the six best studied gray alleles of Sordaria fimicola, we found that the model considered fit the data at a P level above or very close (allele h4) to the 5% level of sampling error provided that the hDNA is partly asymmetric. The best fitting solutions are such that the hDNA has an equal probability of being formed on either chromatid or, alternatively, that both DNA strands have the same probability of acting as the invading strand during hDNA formation. The two mismatches corresponding to a given allele are repaired with different efficiencies. Optimal solutions are found if one allows for repair to be more efficient on the asymmetric hDNA than on the symmetric one. In the case of allele g1, our data imply that the direction of repair is nonrandom with respect to the strand on which it occurs.

  4. Fitting Higgs data with nonlinear effective theory.

    Science.gov (United States)

    Buchalla, G; Catà, O; Celis, A; Krause, C

    2016-01-01

    In a recent paper we showed that the electroweak chiral Lagrangian at leading order is equivalent to the conventional [Formula: see text] formalism used by ATLAS and CMS to test Higgs anomalous couplings. Here we apply this fact to fit the latest Higgs data. The new aspect of our analysis is a systematic interpretation of the fit parameters within an EFT. Concentrating on the processes of Higgs production and decay that have been measured so far, six parameters turn out to be relevant: [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text]. A global Bayesian fit is then performed with the result [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text]. Additionally, we show how this leading-order parametrization can be generalized to next-to-leading order, thus improving the [Formula: see text] formalism systematically. The differences with a linear EFT analysis including operators of dimension six are also discussed. One of the main conclusions of our analysis is that since the conventional [Formula: see text] formalism can be properly justified within a QFT framework, it should continue to play a central role in analyzing and interpreting Higgs data.

  5. Health-related physical fitness in healthy untrained men

    DEFF Research Database (Denmark)

    Milanović, Zoran; Pantelić, Saša; Sporiš, Goran

    2015-01-01

    The purpose of this study was to determine the effects of recreational soccer (SOC) compared to moderate-intensity continuous running (RUN) on all health-related physical fitness components in healthy untrained men. Sixty-nine participants were recruited and randomly assigned to one of three groups...... weeks and consisted of three 60-min sessions per week. All participants were tested for each of the following physical fitness components: maximal aerobic power, minute ventilation, maximal heart rate, squat jump (SJ), countermovement jump with arm swing (CMJ), sit-and-reach flexibility, and body...... improvements in maximal aerobic power after 12 weeks of soccer training and moderate-intensity running, partly due to large decreases in body mass. Additionally soccer training induced pronounced positive effects on jump performance and flexibility, making soccer an effective broad-spectrum fitness training...

  6. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    Science.gov (United States)

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.

  7. The issue of statistical power for overall model fit in evaluating structural equation models

    Directory of Open Access Journals (Sweden)

    Richard HERMIDA

    2015-06-01

    Full Text Available Statistical power is an important concept for psychological research. However, examining the power of a structural equation model (SEM is rare in practice. This article provides an accessible review of the concept of statistical power for the Root Mean Square Error of Approximation (RMSEA index of overall model fit in structural equation modeling. By way of example, we examine the current state of power in the literature by reviewing studies in top Industrial-Organizational (I/O Psychology journals using SEMs. Results indicate that in many studies, power is very low, which implies acceptance of invalid models. Additionally, we examined methodological situations which may have an influence on statistical power of SEMs. Results showed that power varies significantly as a function of model type and whether or not the model is the main model for the study. Finally, results indicated that power is significantly related to model fit statistics used in evaluating SEMs. The results from this quantitative review imply that researchers should be more vigilant with respect to power in structural equation modeling. We therefore conclude by offering methodological best practices to increase confidence in the interpretation of structural equation modeling results with respect to statistical power issues.
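
    One common way to compute such power, in the spirit of MacCallum, Browne and Sugawara's noncentral chi-square approach to the test of close fit, is sketched below; the RMSEA values under the null and alternative are conventional choices, not values taken from the reviewed studies.

```python
from scipy.stats import ncx2

def rmsea_power(n, df, rmsea0=0.05, rmsea_a=0.08, alpha=0.05):
    """Approximate power of the test of close fit for a given sample size and
    model degrees of freedom, using noncentrality lambda = (n-1)*df*rmsea**2."""
    ncp0 = (n - 1) * df * rmsea0 ** 2       # noncentrality under H0 (close fit)
    ncp_a = (n - 1) * df * rmsea_a ** 2     # noncentrality under the alternative
    crit = ncx2.ppf(1.0 - alpha, df, ncp0)  # critical chi-square value
    return ncx2.sf(crit, df, ncp_a)         # probability of rejecting close fit

for n in (100, 200, 400, 800):
    print(n, round(rmsea_power(n, df=50), 3))
```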

  8. Fitting a function to time-dependent ensemble averaged data.

    Science.gov (United States)

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least square fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.
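
    The following sketch conveys the gist of correlation-aware error estimation for a weighted least-squares fit to an ensemble average (parameters from diagonal-weighted least squares, errors from a sandwich form that uses the full empirical covariance). It illustrates the idea only and is not the authors' WLS-ICE software; the Brownian-walk example is simulated.

```python
import numpy as np

def wls_with_full_covariance(X, y_trajectories):
    """Weighted least squares fit to the ensemble-averaged curve, with parameter
    errors propagated through the full empirical covariance of the average,
    so that temporal correlations are kept in the error estimate."""
    n_traj = y_trajectories.shape[0]
    y_mean = y_trajectories.mean(axis=0)
    C = np.cov(y_trajectories, rowvar=False) / n_traj      # covariance of the mean
    W = np.diag(1.0 / np.diag(C))                           # diagonal weights only
    A = np.linalg.inv(X.T @ W @ X)
    beta = A @ X.T @ W @ y_mean
    cov_beta = A @ X.T @ W @ C @ W @ X @ A                  # sandwich form
    return beta, np.sqrt(np.diag(cov_beta))

# Example: estimate a diffusion constant from squared displacements of Brownian walks
rng = np.random.default_rng(10)
n_traj, n_steps = 200, 400
positions = np.cumsum(rng.standard_normal((n_traj, n_steps)), axis=1)
sq_disp = positions ** 2                                    # MSD(t) = 2*D*t with D = 0.5
t = np.arange(1, n_steps + 1)
X = t[:, None].astype(float)                                # model: MSD = (2*D) * t
beta, err = wls_with_full_covariance(X, sq_disp)
print("2D =", beta[0], "+/-", err[0])
```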

  9. Importance of Health-Related Fitness Knowledge to Increasing Physical Activity and Physical Fitness

    Science.gov (United States)

    Ferkel, Rick C.; Judge, Lawrence W.; Stodden, David F.; Griffin, Kent

    2014-01-01

    Physical inactivity is expanding across all ages in the United States. Research has documented a deficiency in health-related fitness knowledge (HRFK) among elementary- through college-aged students. The need for a credible and reliable resource that provides research-based information regarding the importance of HRFK is significant. The purpose…

  10. Application of tanh curve fitting to toughness data

    International Nuclear Information System (INIS)

    Sakai, Yuzuru; Ogura, Nobukazu

    1985-01-01

Curve-fitting regression procedures for toughness data have been examined. The objectives of fitting a curve in the context of the study of nuclear pressure vessel steels are (1) convenient summarization of test data to permit comparison of materials and testing methods; (2) development of a statistical base concerning the data; (3) surveying the relationships between Charpy data and fracture toughness data; (4) estimation of the fracture toughness level from Charpy absorbed energy data. The computational procedures using the tanh function have been applied to the toughness data (Charpy absorbed energy, static fracture toughness, dynamic fracture toughness, crack arrest toughness) of A533B cl.1 and A508 cl.3 steels. The results of the analysis show the statistical features of the material toughness and give a method for estimating the fracture toughness level from Charpy absorbed energy data. (author)
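
    The tanh form typically used for such transition curves can be fitted directly with nonlinear least squares; the Charpy-like data below are simulated, and the parameterization (lower shelf, upper shelf, transition temperature, transition width) is one common convention rather than the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def tanh_toughness(T, A, B, T0, C):
    """Standard tanh transition-curve form: lower shelf A - B, upper shelf A + B,
    mid-transition temperature T0, transition width C."""
    return A + B * np.tanh((T - T0) / C)

# Hypothetical Charpy absorbed-energy data across the ductile-brittle transition
rng = np.random.default_rng(11)
T = np.linspace(-150, 150, 40)                 # temperature, degrees C
E = tanh_toughness(T, 80, 70, -20, 40) + 5 * rng.standard_normal(T.size)

popt, pcov = curve_fit(tanh_toughness, T, E, p0=[75, 60, 0, 30])
perr = np.sqrt(np.diag(pcov))
print(dict(zip(["A", "B", "T0", "C"], popt)))   # fitted parameters
print(dict(zip(["A", "B", "T0", "C"], perr)))   # one-sigma uncertainties
```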

  11. CrossFit Overview: Systematic Review and Meta-analysis.

    Science.gov (United States)

    Claudino, João Gustavo; Gabbett, Tim J; Bourgeois, Frank; Souza, Helton de Sá; Miranda, Rafael Chagas; Mezêncio, Bruno; Soncin, Rafael; Cardoso Filho, Carlos Alberto; Bottaro, Martim; Hernandez, Arnaldo Jose; Amadio, Alberto Carlos; Serrão, Julio Cerca

    2018-02-26

    CrossFit is recognized as one of the fastest growing high-intensity functional training modes in the world. However, scientific data regarding the practice of CrossFit is sparse. Therefore, the objective of this study is to analyze the findings of scientific literature related to CrossFit via systematic review and meta-analysis. Systematic searches of the PubMed, Web of Science, Scopus, Bireme/MedLine, and SciELO online databases were conducted for articles reporting the effects of CrossFit training. The systematic review followed the PRISMA guidelines. The Oxford Levels of Evidence was used for all included articles, and only studies that investigated the effects of CrossFit as a training program were included in the meta-analysis. For the meta-analysis, effect sizes (ESs) with 95% confidence interval (CI) were calculated and heterogeneity was assessed using a random-effects model. Thirty-one articles were included in the systematic review and four were included in the meta-analysis. However, only two studies had a high level of evidence at low risk of bias. Scientific literature related to CrossFit has reported on body composition, psycho-physiological parameters, musculoskeletal injury risk, life and health aspects, and psycho-social behavior. In the meta-analysis, significant results were not found for any variables. The current scientific literature related to CrossFit has few studies with high level of evidence at low risk of bias. However, preliminary data has suggested that CrossFit practice is associated with higher levels of sense of community, satisfaction, and motivation.

  12. An R package for fitting age, period and cohort models

    Directory of Open Access Journals (Sweden)

    Adriano Decarli

    2014-11-01

Full Text Available In this paper we present the R implementation of a GLIM macro which fits age-period-cohort models following Osmond and Gardner. In addition to the estimates of the corresponding model, owing to the programming capability of R as an object oriented language, methods for printing, plotting and summarizing the results are provided. Furthermore, the researcher has full access to the output of the main function (apc), which returns all the models fitted within the function. It is thus possible to critically evaluate the goodness of fit of the resulting model.

  13. Fitting non-gaussian Models to Financial data: An Empirical Study

    Directory of Open Access Journals (Sweden)

    Pablo Olivares

    2011-04-01

Full Text Available This paper presents some experience with the modeling of financial data by three classes of models considered as alternatives to Gaussian linear models: dynamic volatility, stable Lévy, and diffusion-with-jumps models. The techniques are illustrated with some examples of financial series on currencies, futures and indexes.

  14. A Bayesian goodness of fit test and semiparametric generalization of logistic regression with measurement data.

    Science.gov (United States)

    Schörgendorfer, Angela; Branscum, Adam J; Hanson, Timothy E

    2013-06-01

Logistic regression is a popular tool for risk analysis in medical and population health science. With continuous response data, it is common to create a dichotomous outcome for logistic regression analysis by specifying a threshold for positivity. Fitting a linear regression to the nondichotomized response variable assuming a logistic sampling model for the data has been empirically shown to yield more efficient estimates of odds ratios than ordinary logistic regression of the dichotomized endpoint. We illustrate that risk inference is not robust to departures from the parametric logistic distribution. Moreover, the model assumption of proportional odds is generally not satisfied when the condition of a logistic distribution for the data is violated, leading to biased inference from a parametric logistic analysis. We develop novel Bayesian semiparametric methodology for testing goodness of fit of parametric logistic regression with continuous measurement data. The testing procedures hold for any cutoff threshold and our approach simultaneously provides the ability to perform semiparametric risk estimation. Bayes factors are calculated using the Savage-Dickey ratio for testing the null hypothesis of logistic regression versus a semiparametric generalization. We propose a fully Bayesian and a computationally efficient empirical Bayesian approach to testing, and we present methods for semiparametric estimation of risks, relative risks, and odds ratios when parametric logistic regression fails. Theoretical results establish the consistency of the empirical Bayes test. Results from simulated data show that the proposed approach provides accurate inference irrespective of whether parametric assumptions hold or not. Evaluation of risk factors for obesity shows that different inferences are derived from an analysis of a real data set when deviations from a logistic distribution are permissible in a flexible semiparametric framework.

  15. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated-i.e. all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  16. Fitting of hadron spectrum in 5-dimensional conformal relativity

    International Nuclear Information System (INIS)

    Luna-Acosta, G.A.

    1984-11-01

    There is no well known kinematic theory of masses which can be used to compute masses of observed particles. The theory of mass of Conformal Relativity in 5-dimensions does promise to fulfill this need. Here we apply its theoretical results to hadrons and successfully fit their masses with a universal length l (the size of the 5th dimension) of 1.36 Fermi. Our fitting scheme shows a trend from which we can predict the observed masses. We conjecture about reasons our fitting distinguishes between hadrons in terms of their quark composition. The value of l suggests physical interpretations and possible means of detection. (author)

  17. An empirical analysis of the quantitative effect of data when fitting quadratic and cubic polynomials

    Science.gov (United States)

    Canavos, G. C.

    1974-01-01

    A study is made of the extent to which the size of the sample affects the accuracy of a quadratic or a cubic polynomial approximation of an experimentally observed quantity, and the trend with regard to improvement in the accuracy of the approximation as a function of sample size is established. The task is made possible through a simulated analysis carried out by the Monte Carlo method in which data are simulated by using several transcendental or algebraic functions as models. Contaminated data of varying amounts are fitted to either quadratic or cubic polynomials, and the behavior of the mean-squared error of the residual variance is determined as a function of sample size. Results indicate that the effect of the size of the sample is significant only for relatively small sizes and diminishes drastically for moderate and large amounts of experimental data.
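
    The following is a minimal sketch, in the spirit of the simulated analysis described above, of how such a Monte Carlo study can be set up: noisy samples of an assumed transcendental model (here a sine function, an assumption for illustration) are fitted with quadratic and cubic polynomials and the residual variance is tracked as the sample size grows. The noise level, sample sizes and replicate count are arbitrary choices, not values from the paper.

    ```python
    # Monte Carlo sketch: contaminated data from a transcendental model are fitted
    # with quadratic and cubic polynomials; residual variance is tracked versus n.
    import numpy as np

    rng = np.random.default_rng(1)
    true_f = np.sin                      # assumed "transcendental" model
    sigma = 0.1                          # noise level of the contaminated data

    for n in (10, 20, 50, 100, 500):
        for degree in (2, 3):
            resid_vars = []
            for _ in range(200):         # Monte Carlo replicates
                x = rng.uniform(0.0, np.pi, size=n)
                y = true_f(x) + rng.normal(0.0, sigma, size=n)
                coeffs = np.polyfit(x, y, deg=degree)
                resid = y - np.polyval(coeffs, x)
                resid_vars.append(resid.var(ddof=degree + 1))
            print(f"n={n:4d} degree={degree}  mean residual variance={np.mean(resid_vars):.5f}")
    ```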

  18. Multiple organ definition in CT using a Bayesian approach for 3D model fitting

    Science.gov (United States)

    Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.

    1995-08-01

    Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape encoded in a set of biometric organ models--specifically for the liver and kidney-- that accurately represents patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition both before and after the addition of a kidney model to the fitting; we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.

  19. Statistics of Data Fitting: Flaws and Fixes of Polynomial Analysis of Channeled Spectra

    Science.gov (United States)

    Karstens, William; Smith, David

    2013-03-01

    Starting from general statistical principles, we have critically examined Baumeister's procedure* for determining the refractive index of thin films from channeled spectra. Briefly, the method assumes that the index and interference fringe order may be approximated by polynomials quadratic and cubic in photon energy, respectively. The coefficients of the polynomials are related by differentiation, which is equivalent to comparing energy differences between fringes. However, we find that when the fringe order is calculated from the published IR index for silicon* and then analyzed with Baumeister's procedure, the results do not reproduce the original index. This problem has been traced to 1. Use of unphysical powers in the polynomials (e.g., time-reversal invariance requires that the index is an even function of photon energy), and 2. Use of insufficient terms of the correct parity. Exclusion of unphysical terms and addition of quartic and quintic terms to the index and order polynomials yields significantly better fits with fewer parameters. This represents a specific example of using statistics to determine if the assumed fitting model adequately captures the physics contained in experimental data. The use of analysis of variance (ANOVA) and the Durbin-Watson statistic to test criteria for the validity of least-squares fitting will be discussed. *D.F. Edwards and E. Ochoa, Appl. Opt. 19, 4130 (1980). Supported in part by the US Department of Energy, Office of Nuclear Physics under contract DE-AC02-06CH11357.

  20. Evaluation of Interpolants in Their Ability to Fit Seismometric Time Series

    OpenAIRE

    Basu, Kanadpriya; Mariani, Maria; Serpa, Laura; Sinha, Ritwik

    2015-01-01

    This article is devoted to the study of the ASARCO demolition seismic data. Two different classes of modeling techniques are explored: First, mathematical interpolation methods and second statistical smoothing approaches for curve fitting. We estimate the characteristic parameters of the propagation medium for seismic waves with multiple mathematical and statistical techniques, and provide the relative advantages of each approach to address fitting of such data. We conclude that mathematical ...

  1. The Impact of Values-Job Fit and Age on Work-Related Learning

    Science.gov (United States)

    Van Den Ouweland, Loth; Van den Bossche, Piet

    2017-01-01

    Research shows that both individual and job-related factors influence a worker's work-related learning. This study combines these factors, examining the impact of fit between one's work values and job characteristics on learning. Although research indicates that fit benefits multiple work-related outcomes, little is known about the impact of fit…

  2. The global electroweak Standard Model fit after the Higgs discovery

    CERN Document Server

    Baak, Max

    2013-01-01

    We present an update of the global Standard Model (SM) fit to electroweak precision data under the assumption that the new particle discovered at the LHC is the SM Higgs boson. In this scenario all parameters entering the calculations of electroweak precision observables are known, allowing, for the first time, the SM to be over-constrained at the electroweak scale and its validity to be asserted. Within the SM the W boson mass and the effective weak mixing angle can be accurately predicted from the global fit. The results are compatible with, and exceed in precision, the direct measurements. An updated determination of the S, T and U parameters, which parametrize the oblique vacuum corrections, is given. The obtained values show good consistency with the SM expectation and no direct signs of new physics are seen. We conclude with an outlook to the global electroweak fit for a future e+e- collider.

  3. Development and design of a late-model fitness test instrument based on LabView

    Science.gov (United States)

    Xie, Ying; Wu, Feiqing

    2010-12-01

    Undergraduates are pioneers of China's modernization program and undertake the historic mission of rejuvenating the nation in the 21st century, so their physical fitness is vital. A smart fitness test system can help them understand their fitness and health conditions, so that they can choose more suitable approaches and make practical exercise plans according to their own situation. Following future trends, a late-model fitness test instrument based on LabView has been designed to remedy the defects of today's instruments. The system hardware consists of five types of sensors with their peripheral circuits, an NI USB-6251 acquisition card and a computer, while the system software, based on LabView, includes modules for user registration, data acquisition, data processing and display, and data storage. The system, featuring a modular and open structure, can be revised according to actual needs. Test results have verified the system's stability and reliability.

  4. Fitting Diffusion Item Response Theory Models for Responses and Response Times Using the R Package diffIRT

    Directory of Open Access Journals (Sweden)

    Dylan Molenaar

    2015-08-01

    Full Text Available In the psychometric literature, item response theory models have been proposed that explicitly take the decision process underlying the responses of subjects to psychometric test items into account. Application of these models is, however, hampered by the absence of general and flexible software to fit them. In this paper, we present diffIRT, an R package that can be used to fit item response theory models that are based on a diffusion process. We discuss parameter estimation and model fit assessment, show the viability of the package in a simulation study, and illustrate the use of the package with two datasets pertaining to extraversion and mental rotation. In addition, we illustrate how the package can be used to fit the traditional diffusion model (as originally developed in experimental psychology) to data.

  5. The use of continuous data versus binary data in MTC models: a case study in rheumatoid arthritis.

    Science.gov (United States)

    Schmitz, Susanne; Adams, Roisin; Walsh, Cathal

    2012-11-06

    Estimates of relative efficacy between alternative treatments are crucial for decision making in health care. When sufficient head to head evidence is not available Bayesian mixed treatment comparison models provide a powerful methodology to obtain such estimates. While models can be fit to a broad range of efficacy measures, this paper illustrates the advantages of using continuous outcome measures compared to binary outcome measures. Using a case study in rheumatoid arthritis a Bayesian mixed treatment comparison model is fit to estimate the relative efficacy of five anti-TNF agents currently licensed in Europe. The model is fit for the continuous HAQ improvement outcome measure and a binary version thereof as well as for the binary ACR response measure and the underlying continuous effect. Results are compared regarding their power to detect differences between treatments. Sixteen randomized controlled trials were included for the analysis. For both analyses, based on the HAQ improvement as well as based on the ACR response, differences between treatments detected by the binary outcome measures are subsets of the differences detected by the underlying continuous effects. The information lost when transforming continuous data into a binary response measure translates into a loss of power to detect differences between treatments in mixed treatment comparison models. Binary outcome measures are therefore less sensitive to change than continuous measures. Furthermore the choice of cut-off point to construct the binary measure also impacts the relative efficacy estimates.
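
    The loss of power from dichotomizing a continuous endpoint can be illustrated with a toy frequentist simulation (a deliberate simplification; the paper itself uses Bayesian mixed treatment comparison models): compare how often a two-sample t-test on the continuous outcome and a chi-square test on an arbitrarily dichotomized version of the same data detect a fixed treatment effect. All sample sizes, effect sizes and the cut-off are invented for the example.

    ```python
    # Toy simulation: dichotomising a continuous treatment effect loses power
    # relative to analysing the continuous outcome directly.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n, effect, cutoff, reps = 100, 0.3, 0.0, 2000
    power_cont = power_bin = 0

    for _ in range(reps):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(effect, 1.0, n)
        # continuous analysis: two-sample t-test
        if stats.ttest_ind(treated, control).pvalue < 0.05:
            power_cont += 1
        # binary analysis: "responder" defined by an arbitrary cut-off
        table = [[(treated > cutoff).sum(), (treated <= cutoff).sum()],
                 [(control > cutoff).sum(), (control <= cutoff).sum()]]
        if stats.chi2_contingency(table)[1] < 0.05:
            power_bin += 1

    print(f"power, continuous outcome:   {power_cont / reps:.2f}")
    print(f"power, dichotomised outcome: {power_bin / reps:.2f}")
    ```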

  6. The use of continuous data versus binary data in MTC models: A case study in rheumatoid arthritis

    Directory of Open Access Journals (Sweden)

    Schmitz Susanne

    2012-11-01

    Full Text Available Abstract. Background: Estimates of relative efficacy between alternative treatments are crucial for decision making in health care. When sufficient head to head evidence is not available Bayesian mixed treatment comparison models provide a powerful methodology to obtain such estimates. While models can be fit to a broad range of efficacy measures, this paper illustrates the advantages of using continuous outcome measures compared to binary outcome measures. Methods: Using a case study in rheumatoid arthritis a Bayesian mixed treatment comparison model is fit to estimate the relative efficacy of five anti-TNF agents currently licensed in Europe. The model is fit for the continuous HAQ improvement outcome measure and a binary version thereof as well as for the binary ACR response measure and the underlying continuous effect. Results are compared regarding their power to detect differences between treatments. Results: Sixteen randomized controlled trials were included for the analysis. For both analyses, based on the HAQ improvement as well as based on the ACR response, differences between treatments detected by the binary outcome measures are subsets of the differences detected by the underlying continuous effects. Conclusions: The information lost when transforming continuous data into a binary response measure translates into a loss of power to detect differences between treatments in mixed treatment comparison models. Binary outcome measures are therefore less sensitive to change than continuous measures. Furthermore the choice of cut-off point to construct the binary measure also impacts the relative efficacy estimates.

  7. A Global Moving Hotspot Reference Frame: How well it fits?

    Science.gov (United States)

    Doubrovine, P. V.; Steinberger, B.; Torsvik, T. H.

    2010-12-01

    Since the early 1970s, when Jason Morgan proposed that hotspot tracks record motion of lithosphere over deep-seated mantle plumes, the concept of fixed hotspots has dominated the way we think about absolute plate reconstructions. In the last decade, with compelling evidence for southward drift of the Hawaiian hotspot from paleomagnetic studies, and for the relative motion between the Pacific and Indo-Atlantic hotspots from refined plate circuit reconstructions, the perception changed and a global moving hotspot reference frame (GMHRF) was introduced, in which numerical models of mantle convection and advection of plume conduits in the mantle flow were used to estimate hotspot motion. This reference frame showed qualitatively better performance in fitting hotspot tracks globally, but the error analysis and formal estimates of the goodness of fitted rotations were lacking in this model. Here we present a new generation of the GMHRF, in which updated plate circuit reconstructions and radiometric age data from the hotspot tracks were combined with numerical models of plume motion, and uncertainties of absolute plate rotations were estimated through spherical regression analysis. The overall quality of fit was evaluated using a formal statistical test, by comparing misfits produced by the model with uncertainties assigned to the data. Alternative plate circuit models linking the Pacific plate to the plates of Indo-Atlantic hemisphere were tested and compared to the fixed hotspot models with identical error budgets. Our results show that, with an appropriate choice of the Pacific plate circuit, it is possible to reconcile relative plate motions and modeled motions of mantle plumes globally back to Late Cretaceous time (80 Ma). In contrast, all fixed hotspot models failed to produce acceptable fits for Paleogene to Late Cretaceous time (30-80 Ma), highlighting significance of relative motion between the Pacific and Indo-Atlantic hotspots during this interval. The

  8. A Model Fit Statistic for Generalized Partial Credit Model

    Science.gov (United States)

    Liang, Tie; Wells, Craig S.

    2009-01-01

    Investigating the fit of a parametric model is an important part of the measurement process when implementing item response theory (IRT), but research examining it is limited. A general nonparametric approach for detecting model misfit, introduced by J. Douglas and A. S. Cohen (2001), has exhibited promising results for the two-parameter logistic…

  9. A multilevel shape fit analysis of neutron transmission data

    International Nuclear Information System (INIS)

    Naguib, K.; Sallam, O.H.; Adib, M.

    1989-01-01

    A multilevel shape fit analysis of neutron transmission data is presented. The multilevel computer code SHAPE is used to analyse clean transmission data obtained from time-of-flight (TOF) measurements. The shape analysis deduces the parameters of the observed resonances in the energy region covered by the measurements. The code is based upon a least-squares fit of a multilevel Breit-Wigner formula and includes both instrumental resolution and Doppler broadening. Applying the SHAPE code to a test example of measured transmission data for 151Eu, 153Eu and natural Eu in the energy range 0.025-1 eV gave good results for the analysis technique used. (author)
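
    As a much-simplified, hypothetical sketch of the underlying idea (a single resonance, no Doppler or resolution broadening, and invented parameter values, unlike the SHAPE code itself), a Breit-Wigner line shape can be fitted to noisy data by least squares with SciPy:

    ```python
    # Least-squares fit of a single-level Breit-Wigner line shape to noisy toy data.
    import numpy as np
    from scipy.optimize import curve_fit

    def breit_wigner(E, sigma0, E0, gamma):
        """Single-level Breit-Wigner cross-section (arbitrary units)."""
        return sigma0 * (gamma / 2.0) ** 2 / ((E - E0) ** 2 + (gamma / 2.0) ** 2)

    rng = np.random.default_rng(3)
    E = np.linspace(0.025, 1.0, 400)                        # energy grid in eV
    truth = (5.0, 0.46, 0.09)                               # sigma0, E0 (eV), Gamma (eV)
    data = breit_wigner(E, *truth) + rng.normal(0.0, 0.05, E.size)

    popt, pcov = curve_fit(breit_wigner, E, data, p0=(1.0, 0.5, 0.1))
    perr = np.sqrt(np.diag(pcov))
    for name, val, err in zip(("sigma0", "E0", "Gamma"), popt, perr):
        print(f"{name:6s} = {val:.3f} +/- {err:.3f}")
    ```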

  10. FitSKIRT: genetic algorithms to automatically fit dusty galaxies with a Monte Carlo radiative transfer code

    Science.gov (United States)

    De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.

    2013-02-01

    We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage that they have better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi-dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies in the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need of human intervention and biasing. The high level of automation makes it an ideal tool to use on larger sets of observed data.

  11. Direct fit of a theoretical model of phase transition in oscillatory finger motions.

    NARCIS (Netherlands)

    Newell, K.M.; Molenaar, P.C.M.

    2003-01-01

    This paper presents a general method to fit the Schoner-Haken-Kelso (SHK) model of human movement phase transitions directly to time series data. A robust variant of the extended Kalman filter technique is applied to the data of a single subject. The options of covariance resetting and iteration

  12. Discrete competing risk model with application to modeling bus-motor failure data

    International Nuclear Information System (INIS)

    Jiang, R.

    2010-01-01

    Failure data are often modeled using continuous distributions. However, a discrete distribution can be appropriate for modeling interval or grouped data. When failure data come from a complex system, a simple discrete model can be inappropriate for modeling such data. This paper presents two types of discrete distributions. One is formed by exponentiating an underlying distribution, and the other is a two-fold competing risk model. The paper focuses on two special distributions: (a) exponentiated Poisson distribution and (b) competing risk model involving a geometric distribution and an exponentiated Poisson distribution. The competing risk model has a decreasing-followed-by-unimodal mass function and a bathtub-shaped failure rate. Five classical data sets on bus-motor failures can be simultaneously and appropriately fitted by a general 5-parameter competing risk model with the parameters being functions of the number of successive failures. The lifetime and aging characteristics of the fitted distribution are analyzed.

  13. FITTER. The package for fitting experimental data of the YuMO spectrometer by theoretical form factors. Version 1.0. Long write-up and user's guide

    International Nuclear Information System (INIS)

    Solov'ev, A.G.; Stadnik, A.V.; Islamov, A.Kh.; Kuklin, A.I.

    2003-01-01

    FITTER is a C++ code aimed at fitting a chosen theoretical multi-parameter function to a set of data points. The method of fitting is chi-square minimization; a robust fitting method can also be applied. FITTER was designed for small-angle neutron scattering data analysis, using the corresponding theoretical models. Commonly used models (Gaussians and polynomials) are also implemented for wider applicability.

  14. Testing the validity of stock-recruitment curve fits

    International Nuclear Information System (INIS)

    Christensen, S.W.; Goodyear, C.P.

    1988-01-01

    The utilities relied heavily on the Ricker stock-recruitment model as the basis for quantifying biological compensation in the Hudson River power case. They presented many fits of the Ricker model to data derived from striped bass catch and effort records compiled by the National Marine Fisheries Service. Based on this curve-fitting exercise, a value of 4 was chosen for the parameter alpha in the Ricker model, and this value was used to derive the utilities' estimates of the long-term impact of power plants on striped bass populations. A technique was developed and applied to address a single fundamental question: if the Ricker model were applicable to the Hudson River striped bass population, could the estimates of alpha from the curve-fitting exercise be considered reliable. The technique involved constructing a simulation model that incorporated the essential biological features of the population and simulated the characteristics of the available actual catch-per-unit-effort data through time. The ability or failure to retrieve the known parameter values underlying the simulation model via the curve-fitting exercise was a direct test of the reliability of the results of fitting stock-recruitment curves to the real data. The results demonstrated that estimates of alpha from the curve-fitting exercise were not reliable. The simulation-modeling technique provides an effective way to identify whether or not particular data are appropriate for use in fitting such models. 39 refs., 2 figs., 3 tabs
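
    The mechanics of the reliability test described above can be sketched as follows (a hedged toy version, not the authors' simulation model): generate recruitment from a Ricker curve with known parameters and multiplicative noise, refit the curve, and check whether the true alpha is recovered. In this idealized setting the refit typically succeeds; the paper's point is that with realistic population dynamics and catch-per-unit-effort-derived data the same exercise fails, which is what makes the test informative.

    ```python
    # Toy reliability check: simulate data from a known Ricker curve, refit it,
    # and ask whether the true alpha is recovered.
    import numpy as np
    from scipy.optimize import curve_fit

    def ricker(S, alpha, beta):
        """Ricker stock-recruitment curve: R = alpha * S * exp(-beta * S)."""
        return alpha * S * np.exp(-beta * S)

    rng = np.random.default_rng(4)
    alpha_true, beta_true = 4.0, 0.002
    stock = rng.uniform(50, 1500, size=30)                    # observed spawning stock
    # multiplicative lognormal noise mimics scatter in catch-effort derived data
    recruits = ricker(stock, alpha_true, beta_true) * rng.lognormal(0.0, 0.5, stock.size)

    (alpha_hat, beta_hat), _ = curve_fit(ricker, stock, recruits, p0=(1.0, 0.001))
    print(f"true alpha = {alpha_true}, estimated alpha = {alpha_hat:.2f}")
    print(f"true beta  = {beta_true}, estimated beta  = {beta_hat:.5f}")
    ```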

  15. GOSSIP: SED fitting code

    Science.gov (United States)

    Franzetti, Paolo; Scodeggio, Marco

    2012-10-01

    GOSSIP fits the electromagnetic emission of an object (the SED, Spectral Energy Distribution) against synthetic models to find the simulated one that best reproduces the observed data. It builds up the observed SED of an object (or of a large sample of objects) by combining magnitudes in different bands and, optionally, a spectrum; it then performs a chi-square minimization fitting procedure against a set of synthetic models. The fitting results are used to estimate a number of physical parameters such as the star formation history, absolute magnitudes and stellar mass, along with their probability distribution functions.

  16. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    Science.gov (United States)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

    The goodness-of-fit indicator, i.e. the efficiency criterion, is very important for model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and a zero-mean generalized error distribution (GED) to fit the distribution of model errors after the BC transformation. The BC-GED model can unify all recent distance-based goodness-of-fit indicators, and reveals that the widely used mean square error (MSE) and mean absolute error (MAE) imply the statistical assumptions that model errors follow a zero-mean Gaussian distribution and a zero-mean Laplace distribution, respectively. Empirical knowledge about goodness-of-fit indicators can also be easily interpreted with the BC-GED model; for example, the sensitivity to high flow of indicators with a large power of model errors results from the low probability of large model errors in the distribution assumed by these indicators. In order to assess the effect of the BC-GED parameters (i.e. the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and model simulation results among the six indicators demonstrates that these indicators can be clearly separated into two classes by the GED kurtosis β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated by the class β > 1 of distance-based goodness-of-fit indicators captures high flow very well but mimics the baseflow very badly, whereas when calibrated by the class β ≤ 1 it mimics the baseflow very well, because the larger the value of β, the greater emphasis is put on
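
    A minimal sketch of the core idea, under the stated assumptions and ignoring the normalization constants of the GED likelihood: after a Box-Cox transform of observed and simulated values, calibration amounts to minimizing sum(|error|**beta), so beta = 2 recovers a least-squares (MSE-type) objective and beta = 1 an MAE-type objective. The toy data below are invented and do not come from the Baocun watershed study.

    ```python
    # BC-GED-style calibration objective: Box-Cox transform, then sum(|error|**beta).
    import numpy as np

    def box_cox(y, lam):
        return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

    def bc_ged_objective(obs, sim, lam, beta):
        err = box_cox(obs, lam) - box_cox(sim, lam)
        return np.sum(np.abs(err) ** beta)

    rng = np.random.default_rng(5)
    obs = rng.lognormal(1.0, 0.8, size=200)          # skewed "streamflow-like" data
    sim = obs * rng.lognormal(0.0, 0.2, size=200)    # an imperfect model simulation

    print("lam=1, beta=2 (sum of squared errors): ", bc_ged_objective(obs, sim, 1.0, 2.0))
    print("lam=1, beta=1 (sum of absolute errors):", bc_ged_objective(obs, sim, 1.0, 1.0))
    print("lam=0.3, beta=1 (downweights high flows):", bc_ged_objective(obs, sim, 0.3, 1.0))
    ```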

  17. Source Localization with Acoustic Sensor Arrays Using Generative Model Based Fitting with Sparse Constraints

    Directory of Open Access Journals (Sweden)

    Javier Macias-Guarasa

    2012-10-01

    Full Text Available This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints on the parameters of the model are included, enforcing the number of simultaneously active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with the SRP-PHAT strategies.

  18. XMRF: an R package to fit Markov Networks to high-throughput genetics data.

    Science.gov (United States)

    Wan, Ying-Wooi; Allen, Genevera I; Baker, Yulia; Yang, Eunho; Ravikumar, Pradeep; Anderson, Matthew; Liu, Zhandong

    2016-08-26

    Technological advances in medicine have led to a rapid proliferation of high-throughput "omics" data. Tools to mine this data and discover disrupted disease networks are needed as they hold the key to understanding complicated interactions between genes, mutations and aberrations, and epi-genetic markers. We developed an R software package, XMRF, that can be used to fit Markov Networks to various types of high-throughput genomics data. Encoding the models and estimation techniques of the recently proposed exponential family Markov Random Fields (Yang et al., 2012), our software can be used to learn genetic networks from RNA-sequencing data (counts via Poisson graphical models), mutation and copy number variation data (categorical via Ising models), and methylation data (continuous via Gaussian graphical models). XMRF is the only tool that allows network structure learning using the native distribution of the data instead of the standard Gaussian. Moreover, the parallelization feature of the implemented algorithms computes the large-scale biological networks efficiently. XMRF is available from CRAN and Github ( https://github.com/zhandong/XMRF ).

  19. Menopause and big data: Word Adjacency Graph modeling of menopause-related ChaCha data.

    Science.gov (United States)

    Carpenter, Janet S; Groves, Doyle; Chen, Chen X; Otte, Julie L; Miller, Wendy R

    2017-07-01

    To detect and visualize salient queries about menopause using Big Data from ChaCha. We used Word Adjacency Graph (WAG) modeling to detect clusters and visualize the range of menopause-related topics and their mutual proximity. The subset of relevant queries was fully modeled. We split each query into token words (ie, meaningful words and phrases) and removed stopwords (ie, not meaningful functional words). The remaining words were considered in sequence to build summary tables of words and two and three-word phrases. Phrases occurring at least 10 times were used to build a network graph model that was iteratively refined by observing and removing clusters of unrelated content. We identified two menopause-related subsets of queries by searching for questions containing menopause and menopause-related terms (eg, climacteric, hot flashes, night sweats, hormone replacement). The first contained 263,363 queries from individuals aged 13 and older and the second contained 5,892 queries from women aged 40 to 62 years. In the first set, we identified 12 topic clusters: 6 relevant to menopause and 6 less relevant. In the second set, we identified 15 topic clusters: 11 relevant to menopause and 4 less relevant. Queries about hormones were pervasive within both WAG models. Many of the queries reflected low literacy levels and/or feelings of embarrassment. We modeled menopause-related queries posed by ChaCha users between 2009 and 2012. ChaCha data may be used on its own or in combination with other Big Data sources to identify patient-driven educational needs and create patient-centered interventions.

  20. Fitting and interpreting continuous-time latent Markov models for panel data.

    Science.gov (United States)

    Lange, Jane M; Minin, Vladimir N

    2013-11-20

    Multistate models characterize disease processes within an individual. Clinical studies often observe the disease status of individuals at discrete time points, making exact times of transitions between disease states unknown. Such panel data pose considerable modeling challenges. Assuming the disease process progresses accordingly, a standard continuous-time Markov chain (CTMC) yields tractable likelihoods, but the assumption of exponential sojourn time distributions is typically unrealistic. More flexible semi-Markov models permit generic sojourn distributions yet yield intractable likelihoods for panel data in the presence of reversible transitions. One attractive alternative is to assume that the disease process is characterized by an underlying latent CTMC, with multiple latent states mapping to each disease state. These models retain analytic tractability due to the CTMC framework but allow for flexible, duration-dependent disease state sojourn distributions. We have developed a robust and efficient expectation-maximization algorithm in this context. Our complete data state space consists of the observed data and the underlying latent trajectory, yielding computationally efficient expectation and maximization steps. Our algorithm outperforms alternative methods measured in terms of time to convergence and robustness. We also examine the frequentist performance of latent CTMC point and interval estimates of disease process functionals based on simulated data. The performance of estimates depends on time, functional, and data-generating scenario. Finally, we illustrate the interpretive power of latent CTMC models for describing disease processes on a dataset of lung transplant patients. We hope our work will encourage wider use of these models in the biomedical setting. Copyright © 2013 John Wiley & Sons, Ltd.

  1. Item response theory and structural equation modelling for ordinal data: Describing the relationship between KIDSCREEN and Life-H.

    Science.gov (United States)

    Titman, Andrew C; Lancaster, Gillian A; Colver, Allan F

    2016-10-01

    Both item response theory and structural equation models are useful in the analysis of ordered categorical responses from health assessment questionnaires. We highlight the advantages and disadvantages of the item response theory and structural equation modelling approaches to modelling ordinal data, from within a community health setting. Using data from the SPARCLE project focussing on children with cerebral palsy, this paper investigates the relationship between two ordinal rating scales, the KIDSCREEN, which measures quality-of-life, and Life-H, which measures participation. Practical issues relating to fitting models, such as non-positive definite observed or fitted correlation matrices, and approaches to assessing model fit are discussed. Item response theory models allow properties such as the conditional independence of particular domains of a measurement instrument to be assessed. When, as with the SPARCLE data, the latent traits are multidimensional, structural equation models generally provide a much more convenient modelling framework. © The Author(s) 2013.

  2. Optimal weights for circle fitting with discrete granular data

    International Nuclear Information System (INIS)

    Chernov, N.; Kolganova, E.; Ososkov, G.

    1995-01-01

    The problem of approximating data measured along a circle by modern detectors in high energy physics, for example RICH (Ring Imaging Cherenkov) detectors, is considered. Such detectors, having a discrete cell structure, register the energy dissipation produced by a passing elementary particle not at a single point but in several adjacent cells over which the energy is distributed. The presence of background hits makes circle-fitting methods based on least squares inapplicable because of their noise sensitivity. In this paper it is shown that an efficient way to overcome these curve-fitting problems is a robust fitting technique based on a reweighted least-squares method with optimally chosen weights obtained from maximum likelihood estimates. Results of numerical experiments are given, demonstrating the high efficiency of the suggested method. 9 refs., 5 figs., 1 tab
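
    A hedged sketch of a robust reweighted circle fit is given below. It uses an algebraic (Kasa-style) linear fit inside an iteratively reweighted least-squares loop with Huber-type weights; the paper derives its weights from maximum-likelihood arguments specific to the detector response, so this stands in for, rather than reproduces, that scheme. The ring and background hits are simulated with invented values.

    ```python
    # Iteratively reweighted algebraic circle fit, robust to background hits.
    import numpy as np

    def fit_circle_irls(x, y, n_iter=10, k=1.5):
        """Kasa-style circle fit with Huber-type reweighting of radial residuals."""
        A = np.column_stack([x, y, np.ones_like(x)])
        b = x ** 2 + y ** 2
        w = np.ones_like(x)
        for _ in range(n_iter):
            sw = np.sqrt(w)
            c, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
            cx, cy = c[0] / 2.0, c[1] / 2.0
            r = np.sqrt(c[2] + cx ** 2 + cy ** 2)
            resid = np.hypot(x - cx, y - cy) - r            # radial residual of each hit
            s = np.median(np.abs(resid)) / 0.6745 + 1e-12   # robust scale estimate (MAD)
            w = np.where(np.abs(resid) <= k * s,
                         1.0, k * s / np.maximum(np.abs(resid), 1e-12))
        return cx, cy, r

    rng = np.random.default_rng(6)
    theta = rng.uniform(0.0, 2.0 * np.pi, 60)
    x = 3.0 + 2.0 * np.cos(theta) + rng.normal(0.0, 0.05, 60)   # hits on the ring
    y = -1.0 + 2.0 * np.sin(theta) + rng.normal(0.0, 0.05, 60)
    x = np.concatenate([x, rng.uniform(0.0, 6.0, 15)])          # uniform background hits
    y = np.concatenate([y, rng.uniform(-4.0, 2.0, 15)])
    print("centre and radius:", fit_circle_irls(x, y))
    ```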

  3. Effects of the Boy Scouts of America Personal Fitness Merit Badge on Cardio-Metabolic Risk, Health Related Fitness and Physical Activity in Adolescent Boys.

    Science.gov (United States)

    Maxwell, Justin; Burns, Ryan D; Brusseau, Timothy A

    2017-01-01

    A growing number of adolescents are more sedentary and have fewer formal opportunities to participate in physical activity. With the mounting evidence that sedentary time has a negative impact on cardiometabolic profiles, health related fitness and physical activity, there is a pressing need to find an affordable adolescent physical activity intervention. One possible intervention that has been overlooked in the past is Boy Scouts of America. There are nearly 900,000 adolescent boys who participate in Boy Scouts in the United States. The purpose of this research study was to evaluate the effect of the Personal Fitness merit badge system on physical activity, health-related fitness, and cardio-metabolic blood profiles in Boy Scouts 11-17 years of age. Participants were fourteen (N = 14) Boy Scouts from the Great Salt Lake Council of the Boy Scouts of America who earned their Personal Fitness merit badge. Classes were held in the Spring of 2016 where boys received the information needed to obtain the merit badge and data were collected. Results from the related-samples Wilcoxon signed rank test showed that the median of differences between VO2 peak pre-test and post-test scores was statistically significant (p = 0.004). However, it also showed that the differences between the Pre-MetS (metabolic syndrome) and Post-MetS scores (p = 0.917), average steps taken per day (p = 0.317), and BMI (p = 0.419) were not statistically significant. In conclusion, the merit badge program had a positive impact on cardiovascular endurance, suggesting this program has potential to improve cardiovascular fitness and should be considered for boys participating in Boy Scouts.

  4. Evaluation of Interpolants in Their Ability to Fit Seismometric Time Series

    Directory of Open Access Journals (Sweden)

    Kanadpriya Basu

    2015-08-01

    Full Text Available This article is devoted to the study of the ASARCO demolition seismic data. Two different classes of modeling techniques are explored: first, mathematical interpolation methods and, second, statistical smoothing approaches for curve fitting. We estimate the characteristic parameters of the propagation medium for seismic waves with multiple mathematical and statistical techniques, and provide the relative advantages of each approach to address fitting of such data. We conclude that mathematical interpolation techniques and statistical curve fitting techniques complement each other and can add value to the study of one dimensional time series seismographic data: they can be used to add more data to the system in case the data set is not large enough to perform standard statistical tests.

  5. A fit method for the determination of inherent filtration with diagnostic x-ray units

    International Nuclear Information System (INIS)

    Meghzifene, K; Nowotny, R; Aiginger, H

    2006-01-01

    A method for the determination of total inherent filtration for clinical x-ray units using attenuation curves was devised. A model for the calculation of x-ray spectra is used to calculate kerma values which are then adjusted to the experimental data in minimizing the sum of the squared relative differences in kerma using a modified simplex fit process. The model considers tube voltage, voltage ripple, anode angle and additional filters. Fit parameters are the thickness of an additional inherent Al filter and a general normalization factor. Nineteen sets of measurements including attenuation data for three tube voltages and five Al-filter settings each were obtained. Relative differences of experimental and calculated kerma using the data for the additional filter thickness are within a range of -7.6% to 6.4%. Quality curves, i.e. the relationship of additional filtration to HVL, are often used to determine filtration but the results show that standard quality curves do not reflect the variety of conditions encountered in practice. To relate the thickness of the additional filter to the condition of the anode surface, the data fits were also made using tungsten as the filter material. These fits gave an identical fit quality compared to aluminium with a tungsten filter thickness of 2.12-8.21 μm which is within the range of the additional absorbing layers determined for rough anodes

  6. Interferometric data modelling: issues in realistic data generation

    International Nuclear Information System (INIS)

    Mukherjee, Soma

    2004-01-01

    This study describes algorithms developed for modelling interferometric noise in a realistic manner, i.e. incorporating non-stationarity that can be seen in the data from the present generation of interferometers. The noise model is based on individual component models (ICM) with the application of auto regressive moving average (ARMA) models. The data obtained from the model are vindicated by standard statistical tests, e.g. the KS test and Akaike minimum criterion. The results indicate a very good fit. The advantage of using ARMA for ICMs is that the model parameters can be controlled and hence injection and efficiency studies can be conducted in a more controlled environment. This realistic non-stationary noise generator is intended to be integrated within the data monitoring tool framework
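
    A minimal sketch of the ARMA-based ingredient, using statsmodels rather than the detector-characterization pipeline itself: generate a synthetic ARMA noise stream with known coefficients, then refit candidate ARMA orders and compare them with the Akaike information criterion, in the spirit of the model-selection step the abstract describes. Orders and coefficients are invented for the example.

    ```python
    # Generate an ARMA noise stream and select among candidate ARMA orders by AIC.
    import numpy as np
    from statsmodels.tsa.arima_process import arma_generate_sample
    from statsmodels.tsa.arima.model import ARIMA

    np.random.seed(7)
    ar = np.array([1.0, -0.75, 0.25])     # AR polynomial: 1 - 0.75 L + 0.25 L^2
    ma = np.array([1.0, 0.4])             # MA polynomial: 1 + 0.4 L
    noise = arma_generate_sample(ar, ma, nsample=4000, scale=1.0)

    for order in [(1, 0, 0), (2, 0, 1), (3, 0, 2)]:
        res = ARIMA(noise, order=order).fit()
        print(f"ARMA{order}: AIC = {res.aic:.1f}")
    ```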

  7. Fitting the two-compartment model in DCE-MRI by linear inversion.

    Science.gov (United States)

    Flouri, Dimitra; Lesnic, Daniel; Sourbron, Steven P

    2016-09-01

    Model fitting of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data with nonlinear least squares (NLLS) methods is slow and may be biased by the choice of initial values. The aim of this study was to develop and evaluate a linear least squares (LLS) method to fit the two-compartment exchange and filtration models. A second-order linear differential equation for the measured concentrations was derived in which the model parameters act as coefficients. Simulations of normal and pathological data were performed to determine calculation time, accuracy and precision under different noise levels and temporal resolutions. Performance of the LLS was evaluated by comparison against the NLLS. The LLS method is about 200 times faster, which reduces the calculation times for a 256 × 256 MR slice from 9 min to 3 s. For ideal data with low noise and high temporal resolution the LLS and NLLS were equally accurate and precise. The LLS was more accurate and precise than the NLLS at low temporal resolution, but less accurate at high noise levels. The data show that the LLS leads to a significant reduction in calculation times, and more reliable results at low noise levels. At higher noise levels the LLS becomes exceedingly inaccurate compared to the NLLS, but this may be improved using a suitable weighting strategy. Magn Reson Med 76:998-1006, 2016. © 2015 Wiley Periodicals, Inc.

  8. Perceived versus used workplace flexibility in Singapore: predicting work-family fit.

    Science.gov (United States)

    Jones, Blake L; Scoville, D Phillip; Hill, E Jeffrey; Childs, Geniel; Leishman, Joan M; Nally, Kathryn S

    2008-10-01

    This study examined the relationship of 2 types of workplace flexibility to work-family fit and work, personal, and marriage-family outcomes using data (N = 1,601) representative of employed persons in Singapore. We hypothesized that perceived and used workplace flexibility would be positively related to the study variables. Results derived from structural equation modeling revealed that perceived flexibility predicted work-family fit; however, used flexibility did not. Work-family fit related positively to each work, personal, and marriage-family outcome; however, workplace flexibility only predicted work and personal outcomes. Findings suggest work-family fit may be an important facilitating factor in the interface between work and family life, relating directly to marital satisfaction and satisfaction in other family relationships. Implications of these findings are discussed. Copyright 2008 APA, all rights reserved.

  9. Modelling SANS and SAXS data

    International Nuclear Information System (INIS)

    Reynolds, P.

    1999-01-01

    Full text: Small angle scattering data, while on an absolute scale and relatively accurate over large ranges of observables (0.003 Å-1; 0.1 Å-1), are often relatively featureless. I will address some of the problems this causes, and some of the ways of minimising these, by reference to our recent SANS results. For the benefit of newer chums this will involve discussion of the strengths and weaknesses of data from ISIS (LOQ), Argonne (SAND) and the I.L.L. (D22), and the consequences these have for modelling. The use of simple portable or remote access systems for modelling will be discussed - in particular the IGOR-based NIST system of Dr. S. Kline and the VAX-based FISH system of Dr. R. Heenan, ISIS. I will illustrate that a wide variety of physically appealing and complete models are now available. If you have reason to believe in a particular microstructure, this belief can now be either falsified, or the microstructure quantified, by fitting to the entire set of scattering patterns over the entire Q-range. For example, only in cases of drastic ignorance need we use only Guinier and Porod analyses, although these may provide useful initial guidance in the modelling. We now rarely need to use oversimplified, logically incomplete models - such as spherical micelles with neglect of intermicellar correlation - now that we possess fast desktop/experimental computers

  10. A Four–Component Model of Age–Related Memory Change

    Science.gov (United States)

    Healey, M. Karl; Kahana, Michael J.

    2015-01-01

    We develop a novel, computationally explicit, theory of age–related memory change within the framework of the context maintenance and retrieval (CMR2) model of memory search. We introduce a set of benchmark findings from the free recall and recognition tasks that includes aspects of memory performance that show both age-related stability and decline. We test aging theories by lesioning the corresponding mechanisms in a model fit to younger adult free recall data. When effects are considered in isolation, many theories provide an adequate account, but when all effects are considered simultaneously, the existing theories fail. We develop a novel theory by fitting the full model (i.e., allowing all parameters to vary) to individual participants and comparing the distributions of parameter values for older and younger adults. This theory implicates four components: 1) the ability to sustain attention across an encoding episode, 2) the ability to retrieve contextual representations for use as retrieval cues, 3) the ability to monitor retrievals and reject intrusions, and 4) the level of noise in retrieval competitions. We extend CMR2 to simulate a recognition memory task using the same mechanisms the free recall model uses to reject intrusions. Without fitting any additional parameters, the four–component theory that accounts for age differences in free recall predicts the magnitude of age differences in recognition memory accuracy. Confirming a prediction of the model, free recall intrusion rates correlate positively with recognition false alarm rates. Thus we provide a four–component theory of a complex pattern of age differences across two key laboratory tasks. PMID:26501233

  11. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    Science.gov (United States)

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  13. Fitting a mixture of von Mises distributions in order to model data on wind direction in Peninsular Malaysia

    International Nuclear Information System (INIS)

    Masseran, N.; Razali, A.M.; Ibrahim, K.; Latif, M.T.

    2013-01-01

    Highlights: • We suggest a simple way to model wind direction using a mixture of von Mises distributions. • We determine the most suitable probability model for the wind direction regime in Malaysia. • We provide circular density plots to show the most prominent wind directions. - Abstract: A statistical distribution for describing wind direction provides information about the wind regime at a particular location. In addition, this information complements knowledge of wind speed, which allows researchers to draw some conclusions about the energy potential of wind and aids the development of efficient wind energy generation. This study focuses on modeling the frequency distribution of wind direction, including some characteristics of the wind regime that cannot be represented by a unimodal distribution. To identify the most suitable model, finite mixtures of von Mises distributions were fitted to the average hourly wind direction data for nine wind stations located in Peninsular Malaysia. The data used were from the years 2000 to 2009. The suitability of each mixture distribution was judged based on the R² coefficient and the histogram plot with a density line. The results showed that a finite mixture of von Mises distributions with H components was the best distribution to describe the wind direction regimes in Malaysia. In addition, the circular density plots of the selected model clearly showed the most prominent wind directions.
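
    A compact sketch of how such a mixture can be fitted is shown below: an EM loop for a two-component von Mises mixture, with the concentration parameter updated through the Best-Fisher approximation. The paper fits mixtures with H components to hourly wind directions and selects the model by fit quality; the two-component case, the synthetic directions and all starting values here are assumptions for illustration.

    ```python
    # EM fit of a two-component von Mises mixture to wind directions (radians).
    import numpy as np
    from scipy import stats

    def kappa_from_rbar(rbar):
        """Best-Fisher approximation to the ML estimate of the concentration kappa."""
        if rbar < 0.53:
            return 2 * rbar + rbar ** 3 + 5 * rbar ** 5 / 6
        if rbar < 0.85:
            return -0.4 + 1.39 * rbar + 0.43 / (1 - rbar)
        return 1.0 / (rbar ** 3 - 4 * rbar ** 2 + 3 * rbar)

    def fit_vonmises_mixture(theta, n_components=2, n_iter=100):
        rng = np.random.default_rng(0)
        mu = rng.uniform(-np.pi, np.pi, n_components)
        kappa = np.ones(n_components)
        pi = np.full(n_components, 1.0 / n_components)
        for _ in range(n_iter):
            # E-step: component responsibilities for each observation
            dens = np.stack([pi[j] * stats.vonmises.pdf(theta, kappa[j], loc=mu[j])
                             for j in range(n_components)], axis=1)
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: weighted circular means, concentrations and mixing weights
            for j in range(n_components):
                w = resp[:, j]
                C, S = np.sum(w * np.cos(theta)), np.sum(w * np.sin(theta))
                mu[j] = np.arctan2(S, C)
                kappa[j] = kappa_from_rbar(np.hypot(C, S) / w.sum())
            pi = resp.mean(axis=0)
        return mu, kappa, pi

    # toy data: two prevailing wind directions
    rng = np.random.default_rng(8)
    theta = np.concatenate([stats.vonmises.rvs(5.0, loc=0.5, size=600, random_state=rng),
                            stats.vonmises.rvs(3.0, loc=-2.0, size=400, random_state=rng)])
    print(fit_vonmises_mixture(theta))
    ```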

  14. Modeling of secondary organic aerosol yields from laboratory chamber data

    Directory of Open Access Journals (Sweden)

    M. N. Chan

    2009-08-01

    Full Text Available Laboratory chamber data serve as the basis for constraining models of secondary organic aerosol (SOA) formation. Current models fall into three categories: empirical two-product (Odum), product-specific, and volatility basis set. The product-specific and volatility basis set models are applied here to represent laboratory data on the ozonolysis of α-pinene under dry, dark, and low-NOx conditions in the presence of ammonium sulfate seed aerosol. Using five major identified products, the model is fit to the chamber data. From the optimal fitting, SOA oxygen-to-carbon (O/C) and hydrogen-to-carbon (H/C) ratios are modeled. The discrepancy between measured H/C ratios and those based on the oxidation products used in the model fitting suggests the potential importance of particle-phase reactions. Data fitting is also carried out using the volatility basis set, wherein oxidation products are parsed into volatility bins. The product-specific model is most likely hindered by lack of explicit inclusion of particle-phase accretion compounds. While prospects for identification of the majority of SOA products for major volatile organic compound (VOC) classes remain promising, for the near future empirical product or volatility basis set models remain the approaches of choice.

  15. Modelling DW-MRI data from primary and metastatic ovarian tumours

    Energy Technology Data Exchange (ETDEWEB)

    Winfield, Jessica M. [Institute of Cancer Research, CRUK and EPSRC Cancer Imaging Centre, Division of Radiotherapy and Imaging, Surrey (United Kingdom); Royal Marsden NHS Foundation Trust, Surrey (United Kingdom); Institute of Cancer Research and Royal Marsden Hospital, MRI Unit, Surrey (United Kingdom); DeSouza, Nandita M.; Collins, David J. [Institute of Cancer Research, CRUK and EPSRC Cancer Imaging Centre, Division of Radiotherapy and Imaging, Surrey (United Kingdom); Royal Marsden NHS Foundation Trust, Surrey (United Kingdom); Priest, Andrew N.; Hodgkin, Charlotte; Freeman, Susan [University of Cambridge, Department of Radiology, Addenbrooke' s Hospital, Cambridge (United Kingdom); Wakefield, Jennifer C.; Orton, Matthew R. [Institute of Cancer Research, CRUK and EPSRC Cancer Imaging Centre, Division of Radiotherapy and Imaging, Surrey (United Kingdom)

    2015-07-15

    To assess goodness-of-fit and repeatability of mono-exponential, stretched exponential and bi-exponential models of diffusion-weighted MRI (DW-MRI) data in primary and metastatic ovarian cancer. Thirty-nine primary and metastatic lesions from thirty-one patients with stage III or IV ovarian cancer were examined before and after chemotherapy using DW-MRI with ten diffusion-weightings. The data were fitted with (a) a mono-exponential model to give the apparent diffusion coefficient (ADC), (b) a stretched exponential model to give the distributed diffusion coefficient (DDC) and stretching parameter (α), and (c) a bi-exponential model to give the diffusion coefficient (D), perfusion fraction (f) and pseudodiffusion coefficient (D*). Coefficients of variation, established from repeated baseline measurements, were: ADC 3.1 %, DDC 4.3 %, α 7.0 %, D 13.2 %, f 44.0 %, D* 165.1 %. The bi-exponential model was unsuitable in these data owing to poor repeatability. After excluding the bi-exponential model, analysis using Akaike Information Criteria showed that the stretched exponential model provided the better fit to the majority of pixels in 64 % of lesions. The stretched exponential model provides the optimal fit to DW-MRI data from ovarian, omental and peritoneal lesions and lymph nodes in pre-treatment and post-treatment measurements with good repeatability. (orig.)
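
    A hedged sketch of the model comparison (not the study's fitting pipeline): fit mono-exponential and stretched-exponential signal decays to toy multi-b-value data with SciPy and compare them with an RSS-based Akaike information criterion, analogous to the criterion used in the study. The b-values, noise level and tissue parameters are invented for the example.

    ```python
    # Fit mono-exponential and stretched-exponential DW-MRI signal models, compare by AIC.
    import numpy as np
    from scipy.optimize import curve_fit

    def mono(b, s0, adc):
        return s0 * np.exp(-b * adc)

    def stretched(b, s0, ddc, alpha):
        return s0 * np.exp(-(b * ddc) ** alpha)

    def aic(y, yhat, k):
        rss = np.sum((y - yhat) ** 2)
        n = y.size
        return n * np.log(rss / n) + 2 * k

    rng = np.random.default_rng(9)
    b = np.array([0, 50, 100, 200, 300, 400, 600, 800, 1000, 1300], float)  # s/mm^2
    signal = stretched(b, 1.0, 1.2e-3, 0.8) + rng.normal(0, 0.01, b.size)   # toy lesion

    p_mono, _ = curve_fit(mono, b, signal, p0=(1.0, 1e-3))
    p_str, _ = curve_fit(stretched, b, signal, p0=(1.0, 1e-3, 0.9))
    print("mono-exponential AIC:     ", aic(signal, mono(b, *p_mono), 2))
    print("stretched-exponential AIC:", aic(signal, stretched(b, *p_str), 3))
    ```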

  16. An Improved Cognitive Model of the Iowa and Soochow Gambling Tasks With Regard to Model Fitting Performance and Tests of Parameter Consistency

    Directory of Open Access Journals (Sweden)

    Junyi Dai

    2015-03-01

    Full Text Available The Iowa Gambling Task (IGT) and the Soochow Gambling Task (SGT) are two experience-based risky decision-making tasks for examining decision-making deficits in clinical populations. Several cognitive models, including the expectancy-valence learning model (EVL) and the prospect valence learning model (PVL), have been developed to disentangle the motivational, cognitive, and response processes underlying the explicit choices in these tasks. The purpose of the current study was to develop an improved model that can fit empirical data better than the EVL and PVL models and, in addition, produce more consistent parameter estimates across the IGT and SGT. Twenty-six opiate users (mean age 34.23; SD 8.79) and 27 control participants (mean age 35; SD 10.44) completed both tasks. Eighteen cognitive models varying in evaluation, updating, and choice rules were fit to individual data and their performances were compared to that of a statistical baseline model to find a best fitting model. The results showed that the model combining the prospect utility function treating gains and losses separately, the decay-reinforcement updating rule, and the trial-independent choice rule performed the best in both tasks. Furthermore, the winning model produced more consistent individual parameter estimates across the two tasks than any of the other models.

  17. Changes in relative fit of human heat stress indices to cardiovascular, respiratory, and renal hospitalizations across five Australian urban populations

    Science.gov (United States)

    Goldie, James; Alexander, Lisa; Lewis, Sophie C.; Sherwood, Steven C.; Bambrick, Hilary

    2018-03-01

    Various human heat stress indices have been developed to relate atmospheric measures of extreme heat to human health impacts, but the usefulness of different indices across various health impacts and in different populations is poorly understood. This paper determines which heat stress indices best fit hospital admissions for sets of cardiovascular, respiratory, and renal diseases across five Australian cities. We hypothesized that the best indices would be largely dependent on location. We fit parent models to these admission counts in the summers (November-March) between 2001 and 2013 using negative binomial regression. We then added 15 heat stress indices to these models, ranking their goodness of fit using the Akaike information criterion. Admissions for each health outcome were nearly always higher in hot or humid conditions. Contrary to our hypothesis, we found that the best indices grouped largely by the health outcome of interest rather than by location. In particular, heatwave and temperature indices had the best fit to cardiovascular admissions, humidity indices had the best fit to respiratory admissions, and combined heat-humidity indices had the best fit to renal admissions. With a few exceptions, the results were similar across all five cities. The best-fitting heat stress indices appear to be useful across several Australian cities with differing climates, but they may have varying usefulness depending on the outcome of interest. These findings suggest that future research on heat and health impacts, and in particular hospital demand modeling, could better reflect reality if it avoided "all-cause" health outcomes and used heat stress indices appropriate to specific diseases and disease groups.
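
    A minimal sketch of the fitting strategy described (a parent negative binomial model to which candidate heat stress indices are added one at a time and ranked by AIC), in Python with statsmodels. Only three stand-in indices are shown rather than the 15 used in the study, and all variable names and synthetic values are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic daily summer data: admission counts plus three stand-in heat stress indices.
rng = np.random.default_rng(5)
n = 600
df = pd.DataFrame({
    "tmax": rng.normal(30, 5, n),            # daily maximum temperature
    "humidity": rng.uniform(20, 90, n),      # relative humidity
})
df["humidex"] = df.tmax + 0.1 * df.humidity  # toy combined heat-humidity index
df["dow"] = np.arange(n) % 7
df["renal_adm"] = rng.poisson(np.exp(0.5 + 0.03 * df.humidex))

candidates = ["tmax", "humidity", "humidex"]
results = []
for index in candidates:
    # Parent model (day of week) plus one heat stress index, negative binomial errors.
    fit = smf.glm(f"renal_adm ~ C(dow) + {index}", data=df,
                  family=sm.families.NegativeBinomial()).fit()
    results.append((index, fit.aic))

for index, aic in sorted(results, key=lambda r: r[1]):   # lower AIC = better fit
    print(f"{index:10s} AIC = {aic:.1f}")
```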

  19. Sustained fitness gains and variability in fitness trajectories in the long-term evolution experiment with Escherichia coli

    Science.gov (United States)

    Lenski, Richard E.; Wiser, Michael J.; Ribeck, Noah; Blount, Zachary D.; Nahum, Joshua R.; Morris, J. Jeffrey; Zaman, Luis; Turner, Caroline B.; Wade, Brian D.; Maddamsetti, Rohan; Burmeister, Alita R.; Baird, Elizabeth J.; Bundy, Jay; Grant, Nkrumah A.; Card, Kyle J.; Rowles, Maia; Weatherspoon, Kiyana; Papoulis, Spiridon E.; Sullivan, Rachel; Clark, Colleen; Mulka, Joseph S.; Hajela, Neerja

    2015-01-01

    Many populations live in environments subject to frequent biotic and abiotic changes. Nonetheless, it is interesting to ask whether an evolving population's mean fitness can increase indefinitely, and potentially without any limit, even in a constant environment. A recent study showed that fitness trajectories of Escherichia coli populations over 50 000 generations were better described by a power-law model than by a hyperbolic model. According to the power-law model, the rate of fitness gain declines over time but fitness has no upper limit, whereas the hyperbolic model implies a hard limit. Here, we examine whether the previously estimated power-law model predicts the fitness trajectory for an additional 10 000 generations. To that end, we conducted more than 1100 new competitive fitness assays. Consistent with the previous study, the power-law model fits the new data better than the hyperbolic model. We also analysed the variability in fitness among populations, finding subtle, but significant, heterogeneity in mean fitness. Some, but not all, of this variation reflects differences in mutation rate that evolved over time. Taken together, our results imply that both adaptation and divergence can continue indefinitely—or at least for a long time—even in a constant environment. PMID:26674951
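
    The hyperbolic and power-law trajectories compared in this record can be fitted and ranked with ordinary non-linear least squares. The sketch below uses two commonly quoted two-parameter forms (hyperbolic w = 1 + at/(t+b), which saturates at 1+a, and power law w = (bt+1)^a, which is unbounded) together with invented fitness values; both the exact parameterisation and the data are assumptions, not the study's.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical mean-fitness trajectory (generations, relative fitness); values are invented.
t = np.array([0, 2000, 5000, 10000, 20000, 30000, 40000, 50000, 60000], dtype=float)
w = np.array([1.00, 1.28, 1.39, 1.48, 1.60, 1.66, 1.71, 1.75, 1.78])

def hyperbolic(t, a, b):
    # Fitness approaches the hard limit 1 + a as t grows.
    return 1.0 + a * t / (t + b)

def power_law(t, a, b):
    # Fitness keeps increasing without bound, at an ever-slower rate.
    return (b * t + 1.0) ** a

for name, func, p0 in [("hyperbolic", hyperbolic, [0.8, 5000.0]),
                       ("power law", power_law, [0.1, 0.01])]:
    popt, _ = curve_fit(func, t, w, p0=p0)
    rss = np.sum((w - func(t, *popt)) ** 2)
    n, k = len(t), len(popt)
    bic = n * np.log(rss / n) + k * np.log(n)   # lower BIC = better fit
    print(f"{name:10s} params = {popt.round(4)}  BIC = {bic:.1f}")
```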

  20. Behavior and sensitivity of an optimal tree diameter growth model under data uncertainty

    Science.gov (United States)

    Don C. Bragg

    2005-01-01

    Using loblolly pine, shortleaf pine, white oak, and northern red oak as examples, this paper considers the behavior of potential relative increment (PRI) models of optimal tree diameter growth under data uncertainty. Recommendations on initial sample size and the PRI iterative curve fitting process are provided. Combining different state inventories prior to PRI model...

  1. Field strength correlators in QCD: new fits to the lattice data

    International Nuclear Information System (INIS)

    Meggiolaro, E.

    1999-01-01

    We discuss the results obtained by fitting the lattice data of the gauge-invariant field strength correlators in QCD with some particular functions which are commonly used in the literature in some phenomenological approaches to high-energy hadron-hadron scattering. A comparison is done with the results obtained in the original fits to the lattice data. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  2. Effects of a school-based intervention on active commuting to school and health-related fitness

    Directory of Open Access Journals (Sweden)

    Emilio Villa-González

    2017-01-01

    Full Text Available Abstract Background Active commuting to school has declined over time, and interventions are needed to reverse this trend. The main objective was to investigate the effects of a school-based intervention on active commuting to school and health-related fitness in school-age children of Southern Spain. Methods A total of 494 children aged 8 to 11 years were invited to participate in the study. The schools were non-randomly allocated (i.e., school-level allocation) into the experimental group (EG) or the control group (CG). The EG received an intervention program for 6 months (a monthly activity focused on increasing the level of active commuting to school and mainly targeting children’s perceptions and attitudes). Active commuting to school and health-related fitness (i.e., cardiorespiratory fitness, muscular fitness and speed-agility) were measured at baseline and at the end of the intervention. Children with valid data on commuting to school at baseline and follow-up, sex, age and distance from home to school were included in the final analysis (n = 251). Data were analyzed through a factorial ANOVA and the Bonferroni post-hoc test. Results At follow-up, the EG had higher rates of cycling to school than the CG for boys only (p = 0.04), but not for walking to school for boys or girls. The EG avoided increases in the rates of passive commuting at follow-up, which increased in the CG among girls for car (MD = 1.77; SE = 0.714; p = 0.010) and bus (MD = 1.77; SE = 0.714; p = 0.010) modes. Moreover, we observed significant interactions and main effects between independent variables (study group, sex and assessment time point) on health-related fitness (p < 0.05) over the 6-month period between groups, with higher values in the control group (mainly in boys). Conclusion A school-based intervention focused on increasing active commuting to school was associated with increases in rates of cycling to school among boys, but not for

  3. An analysis of the uncertainty in temperature and density estimates from fitting model spectra to data. 1998 summer research program for high school juniors at the University of Rochester's Laboratory for Laser Energetics. Student research reports

    International Nuclear Information System (INIS)

    Schubmehl, M.

    1999-03-01

    Temperature and density histories of direct-drive laser fusion implosions are important to an understanding of the reaction's progress. Such measurements also document phenomena such as preheating of the core and improper compression that can interfere with the thermonuclear reaction. Model x-ray spectra from the non-LTE (local thermodynamic equilibrium) radiation transport post-processor for LILAC have recently been fitted to OMEGA data. The spectrum fitting code reads in a grid of model spectra and uses an iterative weighted least-squares algorithm to perform a fit to experimental data, based on user-input parameter estimates. The purpose of this research was to upgrade the fitting code to compute formal uncertainties on fitted quantities, and to provide temperature and density estimates with error bars. A standard error-analysis process was modified to compute these formal uncertainties from information about the random measurement error in the data. Preliminary tests of the code indicate that the variances it returns are both reasonable and useful
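
    The record describes an iterative weighted least-squares fit that also returns formal uncertainties on the fitted quantities. A minimal Python sketch of that idea is shown below using a generic peak-plus-background model and synthetic data; propagating the known random measurement error through the fit's covariance matrix gives 1-sigma error bars. The model and numbers are illustrative, not the LILAC/OMEGA spectra.

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic model: a Gaussian feature on a flat background (stand-in for a model spectrum).
def model(x, amp, centre, width, bkg):
    return bkg + amp * np.exp(-0.5 * ((x - centre) / width) ** 2)

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 60)
sigma = 0.05 * np.ones_like(x)                   # known random measurement error
y = model(x, 2.0, 5.0, 0.8, 0.3) + rng.normal(0, sigma)

# Weighted least squares; absolute_sigma=True keeps sigma in physical units so that
# the returned covariance matrix yields formal parameter uncertainties.
popt, pcov = curve_fit(model, x, y, p0=[1.5, 4.5, 1.0, 0.0],
                       sigma=sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))                    # error bars on the fitted quantities

for name, value, err in zip(["amp", "centre", "width", "bkg"], popt, perr):
    print(f"{name:6s} = {value:.3f} +/- {err:.3f}")
```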

  4. Fitness and fatness in relation with attention capacity in European adolescents: The HELENA study.

    Science.gov (United States)

    Cadenas-Sanchez, Cristina; Vanhelst, Jeremy; Ruiz, Jonatan R; Castillo-Gualda, Ruth; Libuda, Lars; Labayen, Idoia; De Miguel-Etayo, Pilar; Marcos, Ascensión; Molnár, Eszter; Catena, Andrés; Moreno, Luis A; Sjöström, Michael; Gottrand, Frederic; Widhalm, Kurt; Ortega, Francisco B

    2017-04-01

    To examine the association of health-related physical fitness components and accurate measures of fatness with attention in European adolescents. Cross-sectional study. A sub-sample of 444 adolescents from the HELENA study (14.5 ± 1.2 years) from 6 different countries participated in this study. Adolescents underwent evaluations of fitness (20m shuttle run, handgrip strength, standing long jump and 4×10m shuttle run tests), fatness (body mass index, skinfold thicknesses, bioelectrical impedance, Bod Pod and dual-energy X-ray absorptiometry) and attention (d2-test). Higher cardiorespiratory fitness was positively associated with better attention capacity (β=0.1, p=0.03). Body mass index and fat mass index measured by Bod Pod and dual-energy X-ray absorptiometry in a subset were negatively associated with attention (β=-0.11, p=0.02; β=-0.36, p=0.02; β=-0.34, p=0.03; respectively). All models were adjusted for age, sex, family-affluence scale and mother education. When these models were additionally adjusted for cardiorespiratory fitness when fatness was the main predictor and vice versa, the associations were somewhat attenuated and were no longer statistically significant. Muscular strength, speed-agility and body fatness markers measured by bioelectrical impedance and skinfolds were not associated with attention. The fit and non-overweight adolescents presented the highest values of attention capacity whilst their unfit and overweight peers showed the lowest values of attention (47.31±2.34 vs. 33.74±4.39). Fitness and fatness were both associated with attention, yet these associations are not independent. A combined effect was also observed, with fit and non-overweight adolescents showing the highest levels of attention and those unfit and overweight the lowest. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  5. Health-related fitness profiles in adolescents with complex congenital heart disease

    DEFF Research Database (Denmark)

    Klausen, Susanne Hwiid; Wetterslev, Jørn; Søndergaard, Lars

    2015-01-01

    PURPOSE: This study investigates whether subgroups of different health-related fitness (HrF) profiles exist among girls and boys with complex congenital heart disease (ConHD) and how these are associated with lifestyle behaviors. METHODS: We measured the cardiorespiratory fitness, muscle strength...... in the Robust clusters reported leading a physically active lifestyle and participants in the Less robust cluster reported leading a sedentary lifestyle. Diagnoses were evenly distributed between clusters. CONCLUSIONS: The cluster analysis attributed some of the variability in cardiorespiratory fitness among...

  6. Multilevel models for longitudinal data

    OpenAIRE

    Fiona Steele

    2008-01-01

    Repeated measures and repeated events data have a hierarchical structure which can be analysed by using multilevel models. A growth curve model is an example of a multilevel random-coefficients model, whereas a discrete time event history model for recurrent events can be fitted as a multilevel logistic regression model. The paper describes extensions to the basic growth curve model to handle auto-correlated residuals, multiple-indicator latent variables and correlated growth processes, and e...

  7. [Study on HIV prevention related knowledge-motivation-psychological model in men who have sex with men, based on a structural equation model].

    Science.gov (United States)

    Jiang, Y; Dou, Y L; Cai, A J; Zhang, Z; Tian, T; Dai, J H; Huang, A L

    2016-02-01

    A knowledge-motivation-psychological model was set up and tested through structural equation modelling to provide evidence for HIV prevention strategies in men who have sex with men (MSM). Snowball sampling was used to recruit a total of 550 MSM volunteers from two MSM non-governmental organizations in Urumqi, Xinjiang province. HIV prevention related information was collected through a questionnaire survey, and 477 volunteers provided complete information. The HIV prevention related knowledge-motivation-psychological model was built on the basis of prior experience and the literature. Relations between knowledge, motivation and psychological effects were studied by fitting a structural equation model to the questionnaire data and then modifying the model. The structural equation model showed a good fit: after revision, RMSEA was 0.035, NFI was 0.965 and RFI was 0.920. The exogenous latent variables were knowledge, motivation and psychological effects; the endogenous latent variable was prevention related behavior. The standardized total effects of motivation, knowledge and psychological effects on prevention behavior were 0.44, 0.41 and 0.17, respectively. The correlation coefficient between motivation and psychological effects was 0.16, and the correlation coefficient between knowledge and psychological effects was -0.17, while the correlation between knowledge and motivation did not reach statistical significance. Knowledge of HIV and motivation for HIV prevention did not show accordance in the MSM population. It is necessary to increase awareness of, and improve motivation for, HIV prevention in the MSM population.

  8. Fitting monthly Peninsula Malaysian rainfall using Tweedie distribution

    Science.gov (United States)

    Yunus, R. M.; Hasan, M. M.; Zubairi, Y. Z.

    2017-09-01

    In this study, the Tweedie distribution was used to fit monthly rainfall data from 24 monitoring stations of Peninsula Malaysia for the period from January 2008 to April 2015. The aim of the study is to determine whether distributions within the Tweedie family fit the monthly Malaysian rainfall data well. Within the Tweedie family, the gamma distribution is generally used for fitting rainfall totals; however, the Poisson-gamma distribution is more useful for describing two important features of the rainfall pattern, namely the occurrences (dry months) and the amounts (wet months). First, the appropriate distribution of the monthly rainfall was identified within the Tweedie family for each station. Then, a Tweedie Generalised Linear Model (GLM) with no explanatory variable was used to model the monthly rainfall data. Graphical representation was used to assess model appropriateness. QQ plots of quantile residuals show that the Tweedie models fit the monthly rainfall data better for the majority of stations on the west coast and midland than for those on the east coast of the Peninsula. This finding suggests that the best-fitting distribution depends on the geographical location of the monitoring station. In this paper, a simple model is developed for generating synthetic rainfall data for use in various areas, including agriculture and irrigation. We showed that data simulated using the Tweedie distribution have frequency histograms fairly similar to those of the actual data. Both the mean number of rainfall events and the mean amount of rain in a month were estimated simultaneously in the case where the Poisson-gamma distribution fits the data reasonably well. Thus, this work complements previous studies that fit the rainfall amount and the occurrence of rainfall events separately, each to a different distribution.
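
    A sketch of fitting a Tweedie GLM with no explanatory variables, as described in the record, using statsmodels in Python. The synthetic compound Poisson-gamma data and the chosen variance power p = 1.5 (anywhere in 1 < p < 2 gives a Poisson-gamma member of the family) are assumptions; in practice p would be selected, for example by profile likelihood, rather than fixed in advance.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic monthly rainfall totals (mm): a compound Poisson-gamma draw, so some
# months are exactly zero (dry) and wet months have a continuous positive total.
rng = np.random.default_rng(4)
n_events = rng.poisson(lam=3.0, size=120)                     # rain events per month
rain = np.array([rng.gamma(shape=2.0, scale=40.0, size=k).sum() for k in n_events])

# Intercept-only Tweedie GLM (no explanatory variable); var_power = 1.5 is assumed.
X = np.ones((len(rain), 1))
res = sm.GLM(rain, X, family=sm.families.Tweedie(var_power=1.5)).fit()
print(res.summary())
# Model adequacy would then be judged from QQ plots of quantile residuals, as in the study.
```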

  9. Specific count model for investing the related factors of cost of GERD and functional dyspepsia

    Science.gov (United States)

    Abadi, Alireza; Chaibakhsh, Samira; Safaee, Azadeh; Moghimi-Dehkordi, Bijan

    2013-01-01

    Aim The purpose of this study is to analyze the cost of GERD and functional dyspepsia in order to investigate its related factors. Background Gastro-oesophageal reflux disease (GERD) and dyspepsia are the most common symptoms of gastrointestinal disorders. Recent studies showed that the high prevalence and varied clinical presentation of these two symptoms impose an enormous economic burden on society. Cost data related to this economic burden have specific characteristics, so they require specific models. Poisson regression (PR) and negative binomial regression (NB) are the models used to analyze the cost data in this paper. Patients and methods This study was designed as a cross-sectional household survey, conducted from May 2006 to December 2007 on a random sample of individuals in the Tehran province, Iran, to find the prevalence of gastrointestinal symptoms and disorders and their related factors. The cost of each item was counted. PR and NB models were fitted to the data. A likelihood ratio test was performed to compare the models, and the log-likelihood, Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) were used to compare their performance. Results According to the likelihood ratio test and all three criteria used to compare the performance of the models, NB was the best model for analyzing these cost data. Sex, age and insurance status were significant. Conclusion PR and NB models were fitted to these data; the improved fit of the NB model over PR clearly indicates that over-dispersion is involved, due to unobserved heterogeneity and/or clustering. The NB model fits cost data more appropriately than PR. PMID:24834282
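
    A minimal illustration of the comparison reported here: fit Poisson and negative binomial count models to the same data, then compare them with a likelihood ratio test, AIC and BIC. The synthetic data and covariate names below are invented; note also that the likelihood ratio test of no over-dispersion sits on the boundary of the parameter space, so the nominal chi-square p-value is conservative.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Synthetic over-dispersed count data (e.g. number of costed care items per person)
# with invented covariates; names and values are illustrative only.
rng = np.random.default_rng(6)
n_obs = 500
age = rng.uniform(20, 70, n_obs)
sex = rng.integers(0, 2, n_obs)
insured = rng.integers(0, 2, n_obs)
mu = np.exp(0.2 + 0.01 * age + 0.3 * sex - 0.4 * insured)
y = rng.negative_binomial(n=2, p=2 / (2 + mu))       # over-dispersed counts with mean mu

X = sm.add_constant(np.column_stack([age, sex, insured]))
poisson_fit = sm.Poisson(y, X).fit(disp=False)
negbin_fit = sm.NegativeBinomial(y, X).fit(disp=False)

# Likelihood ratio test for over-dispersion (NB reduces to Poisson as alpha -> 0).
lr = 2 * (negbin_fit.llf - poisson_fit.llf)
p_value = stats.chi2.sf(lr, df=1)

print(f"Poisson : llf={poisson_fit.llf:.1f}  AIC={poisson_fit.aic:.1f}  BIC={poisson_fit.bic:.1f}")
print(f"NegBin  : llf={negbin_fit.llf:.1f}  AIC={negbin_fit.aic:.1f}  BIC={negbin_fit.bic:.1f}")
print(f"LR test : stat={lr:.2f}, p={p_value:.4f}")
```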

  10. Birth order and physical fitness in early adulthood: evidence from Swedish military conscription data.

    Science.gov (United States)

    Barclay, Kieron; Myrskylä, Mikko

    2014-12-01

    Physical fitness at young adult ages is an important determinant of physical health, cognitive ability, and mortality. However, few studies have addressed the relationship between early life conditions and physical fitness in adulthood. An important potential factor influencing physical fitness is birth order, which prior studies associate with several early- and later-life outcomes such as height and mortality. This is the first study to analyse the association between birth order and physical fitness in late adolescence. We use military conscription data on 218,873 Swedish males born between 1965 and 1977. Physical fitness is measured by a test of maximal working capacity, a measure of cardiovascular fitness closely related to VO2max. We use linear regression with sibling fixed effects, meaning a within-family comparison, to eliminate the confounding influence of unobserved factors that vary between families. To understand the mechanism we further analyse whether the association between birth order and physical fitness varies by sibship size, parental socioeconomic status, birth cohort or length of the birth interval. We find a strong, negative and monotonic relationship between birth order and physical fitness. For example, third-born children have a maximal working capacity approximately 0.1 lower than that of first-born children. Although the birth order effect does not depend on the length of the birth intervals, in two-child families a longer birth interval strengthens the advantage of the first-born. Our results illustrate the importance of birth order for physical fitness, and suggest that the first-born advantage already arises in late adolescence. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Cardiorespiratory fitness protects against stress-related symptoms of burnout and depression.

    Science.gov (United States)

    Gerber, Markus; Lindwall, Magnus; Lindegård, Agneta; Börjesson, Mats; Jonsdottir, Ingibjörg H

    2013-10-01

    To examine how cardiorespiratory fitness and self-perceived stress are associated with burnout and depression. To determine if any relationship between stress and burnout/depression is mitigated among participants with high fitness levels. 197 participants (51% men, mean age=39.2 years) took part in the study. The Åstrand bicycle test was used to assess cardiorespiratory fitness. Burnout was measured with the Shirom-Melamed Burnout Questionnaire (SMBQ), depressive symptoms with the Hospital Anxiety and Depression Scale (HAD-D). A gender-matched stratified sample was used to ensure that participants with varying stress levels were equally represented. Participants with moderate and high fitness reported fewer symptoms of burnout and depression than participants with low fitness. Individuals with high stress who also had moderate or high fitness levels reported lower scores on the SMBQ Tension subscale and the HAD-D than individuals with high stress, but low fitness levels. Better cardiovascular fitness seems to be associated with decreased symptoms of burnout and a better capacity to cope with stress. Promoting and measuring cardiorespiratory fitness can motivate employees to adopt a more physically active lifestyle and thus strengthen their ability to cope with stress exposure and stress-related disorders. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  12. Beyond axial symmetry: An improved class of models for global data

    KAUST Repository

    Castruccio, Stefano

    2014-03-01

    An important class of models for data on a spherical domain, called axially symmetric, assumes stationarity across longitudes but not across latitudes. The main aim of this work is to introduce a new and more flexible class of models by relaxing the assumption of longitudinal stationarity in the context of regularly gridded climate model output. In this investigation, two other related topics are discussed: the lack of fit of an axially symmetric parametric model compared with a non-parametric model, and the relationship to longitudinally reversible processes, an important subclass of axially symmetric models.

  13. Beyond axial symmetry: An improved class of models for global data

    KAUST Repository

    Castruccio, Stefano; Genton, Marc G.

    2014-01-01

    An important class of models for data on a spherical domain, called axially symmetric, assumes stationarity across longitudes but not across latitudes. The main aim of this work is to introduce a new and more flexible class of models by relaxing the assumption of longitudinal stationarity in the context of regularly gridded climate model output. In this investigation, two other related topics are discussed: the lack of fit of an axially symmetric parametric model compared with a non-parametric model, and the relationship to longitudinally reversible processes, an important subclass of axially symmetric models.

  14. Analysis of chromosome aberration data by hybrid-scale models

    International Nuclear Information System (INIS)

    Indrawati, Iwiq; Kumazawa, Shigeru

    2000-02-01

    This paper presents a new methodology for analyzing chromosome aberration data, which is useful for understanding the characteristics of dose-response relationships and for constructing calibration curves for biological dosimetry. The hybrid scale of linear and logarithmic scales yields a particular plotting paper, on which normal section paper, two types of semi-log paper and log-log paper are continuously connected. The hybrid-hybrid plotting paper may contain nine kinds of linear relationships, and these are conveniently called hybrid-scale models. One can systematically select the best-fit model among the nine models by examining the conditions for a straight line through the data points. A biological interpretation is possible with some hybrid-scale models. In this report, the hybrid-scale models were applied to separately reported data on chromosome aberrations in human lymphocytes as well as on chromosome breaks in Tradescantia. The results proved that the proposed models fit the data better than the linear-quadratic model, despite the demerit of an increased number of model parameters. We showed that the hybrid-hybrid model (both dose and response variables on the hybrid scale) provides the best-fit straight lines to be used as reliable and readable calibration curves of chromosome aberrations. (author)

  15. Incorporating doubly resonant $W^\\pm$ data in a global fit of SMEFT parameters to lift flat directions

    CERN Document Server

    Berthier, Laure; Trott, Michael

    2016-09-27

    We calculate the double pole contribution to two to four fermion scattering through $W^{\\pm}$ currents at tree level in the Standard Model Effective Field Theory (SMEFT). We assume all fermions to be massless, $\\rm U(3)^5$ flavour and $\\rm CP$ symmetry. Using this result, we update the global constraint picture on SMEFT parameters including LEPII data on these charged current processes, and also include modifications to our fit procedure motivated by a companion paper focused on $W^{\\pm}$ mass extractions. The fit reported is now to 177 observables and emphasises the need for a consistent inclusion of theoretical errors, and a consistent treatment of observables. Including charged current data lifts the two-fold degeneracy previously encountered in LEP (and lower energy) data, and allows us to set simultaneous constraints on 20 of 53 Wilson coefficients in the SMEFT, consistent with our assumptions. This allows the model independent inclusion of LEP data in SMEFT studies at LHC, which are projected into the S...

  16. Fitting ARMA Time Series by Structural Equation Models.

    Science.gov (United States)

    van Buuren, Stef

    1997-01-01

    This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)

  17. Dust in the small Magellanic Cloud. 2: Dust models from interstellar polarization and extinction data

    Science.gov (United States)

    Rodrigues, C. V.; Magalhaes, A. M.; Coyne, G. V.

    1995-01-01

    We study the dust in the Small Magellanic Cloud using our polarization and extinction data (Paper 1) and existing dust models. The data suggest that the monotonic SMC extinction curve is related to values of λ_max, the wavelength of maximum polarization, which are on the average smaller than the mean for the Galaxy. On the other hand, AZV 456, a star with an extinction similar to that for the Galaxy, shows a value of λ_max similar to the mean for the Galaxy. We discuss simultaneous dust model fits to extinction and polarization. Fits to the wavelength dependent polarization data are possible for stars with small λ_max. In general, they imply dust size distributions which are narrower and have smaller mean sizes compared to typical size distributions for the Galaxy. However, stars with λ_max close to the Galactic norm, which also have a narrower polarization curve, cannot be fit adequately. This holds true for all of the dust models considered. The best fits to the extinction curves are obtained with a power law size distribution by assuming that the cylindrical and spherical silicate grains have a volume distribution which is continuous from the smaller spheres to the larger cylinders. The size distribution for the cylinders is taken from the fit to the polarization. The 'typical', monotonic SMC extinction curve can be fit well with graphite and silicate grains if a small fraction of the SMC carbon is locked up in the grain. However, amorphous carbon and silicate grains also fit the data well. AZV456, which has an extinction curve similar to that for the Galaxy, has a UV bump which is too blue to be fit by spherical graphite grains.

  18. Low physical activity work-related and other risk factors increased the risk of poor physical fitness in cement workers

    Directory of Open Access Journals (Sweden)

    Ditha Diana

    2009-09-01

    Full Text Available Aim Low physical activity causes poor physical fitness, which leads to low productivity. The objective of this study was to determine the effects of low work-related physical activity and other risk factors on physical fitness. Methods This study was done in February 2008. Subjects were workers from 15 departments in PT Semen Padang, West Sumatera (Indonesia). Data on physical activities were collected using the questionnaire from the Student Field Work I Guidebook and Hypertension – Geriatric Integrated Program of the Faculty of Medicine, Universitas Indonesia, 2003. Physical fitness was measured using the Harvard Step Test. Results A total of 937 male workers aged 18–56 years participated in this study. Poor physical fitness was found in 15.9% of the subjects. Low work-related physical activity, smoking, lack of exercise, hypertension, diabetes mellitus, and asthma were dominant risk factors related to poor physical fitness. Subjects with low compared to high work-related activity had a ten-fold risk of poor physical fitness [adjusted odds ratio (ORa) = 10.71; 95% confidence interval (CI) = 4.71–24.33]. In terms of physical exercise, subjects who did no physical exercise, compared to those who did, had a six-fold risk of poor physical fitness (ORa = 6.30; 95% CI = 3.69–10.75). Conclusion Low work-related physical activity, smoking, lack of exercise, hypertension, diabetes mellitus, and asthma were correlated with poor physical fitness. It is, among others, therefore necessary to implement exercise programs for workers with poor physical fitness. (Med J Indones. 2009;18:201-5) Key words: exercise test, occupational health, physical fitness

  19. A four-component model of age-related memory change.

    Science.gov (United States)

    Healey, M Karl; Kahana, Michael J

    2016-01-01

    We develop a novel, computationally explicit, theory of age-related memory change within the framework of the context maintenance and retrieval (CMR2) model of memory search. We introduce a set of benchmark findings from the free recall and recognition tasks that include aspects of memory performance that show both age-related stability and decline. We test aging theories by lesioning the corresponding mechanisms in a model fit to younger adult free recall data. When effects are considered in isolation, many theories provide an adequate account, but when all effects are considered simultaneously, the existing theories fail. We develop a novel theory by fitting the full model (i.e., allowing all parameters to vary) to individual participants and comparing the distributions of parameter values for older and younger adults. This theory implicates 4 components: (a) the ability to sustain attention across an encoding episode, (b) the ability to retrieve contextual representations for use as retrieval cues, (c) the ability to monitor retrievals and reject intrusions, and (d) the level of noise in retrieval competitions. We extend CMR2 to simulate a recognition memory task using the same mechanisms the free recall model uses to reject intrusions. Without fitting any additional parameters, the 4-component theory that accounts for age differences in free recall predicts the magnitude of age differences in recognition memory accuracy. Confirming a prediction of the model, free recall intrusion rates correlate positively with recognition false alarm rates. Thus, we provide a 4-component theory of a complex pattern of age differences across 2 key laboratory tasks. (c) 2015 APA, all rights reserved).

  20. Fitted Hanbury-Brown Twiss radii versus space-time variances in flow-dominated models

    Science.gov (United States)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-04-01

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.

  1. Fitted Hanbury-Brown-Twiss radii versus space-time variances in flow-dominated models

    International Nuclear Information System (INIS)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-01-01

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown-Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data

  2. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
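
    The Pareto frontier idea described here is straightforward to implement: an input set is retained if no other set fits every calibration target at least as well and at least one target strictly better. Below is a small Python sketch with a toy goodness-of-fit matrix (lower values mean a better fit to a target); the numbers are invented purely for illustration.

```python
import numpy as np

def pareto_frontier(gof):
    """Return indices of Pareto-optimal input sets.

    gof is an (n_sets, n_targets) array where each entry is a distance to a
    calibration target (lower = better). A set is kept unless some other set
    is at least as good on every target and strictly better on at least one.
    """
    gof = np.asarray(gof, dtype=float)
    keep = []
    for i in range(len(gof)):
        dominators = np.all(gof <= gof[i], axis=1) & np.any(gof < gof[i], axis=1)
        if not dominators.any():
            keep.append(i)
    return np.array(keep)

# Toy example: 3 candidate input sets, 2 calibration targets.
gof = np.array([[0.10, 0.80],
                [0.30, 0.20],
                [0.40, 0.90]])   # this set is dominated by the first one
print(pareto_frontier(gof))      # -> [0 1]
```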

  3. Right-sizing statistical models for longitudinal data.

    Science.gov (United States)

    Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M

    2015-12-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved).

  4. The interpretation of Charpy impact test data using hyper-logistic fitting functions

    International Nuclear Information System (INIS)

    Helm, J.L.

    1996-01-01

    The hyperbolic tangent function is used almost exclusively for computer assisted curve fitting of Charpy impact test data. Unfortunately, there is no physical basis to justify the use of this function and it cannot be generalized to test data that exhibits asymmetry. Using simple physical arguments, a semi-empirical model is derived and identified as a special case of the so called hyper-logistic equation. Although one solution of this equation is the hyperbolic tangent, other more physically interpretable solutions are provided. From the mathematics of the family of functions derived from the hyper-logistic equation, several useful generalizations are made such that asymmetric and wavy Charpy data can be physically interpreted

  5. SPSS macros to compare any two fitted values from a regression model.

    Science.gov (United States)

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests-particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
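
    The macros rest on a standard matrix-algebra identity: a comparison of two fitted values is a linear contrast d = x1 - x2 of the corresponding design rows, so its variance is d' Cov(b) d. The sketch below reproduces that calculation in Python with statsmodels (rather than SPSS) on an invented data set with an interaction term; the variable names and the two compared covariate patterns are arbitrary.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Invented data with an interaction, so no single coefficient gives the contrast of interest.
df = pd.DataFrame({"y": [3.1, 4.2, 5.0, 6.3, 7.1, 8.4, 9.2, 10.1],
                   "x": [1, 2, 3, 4, 5, 6, 7, 8],
                   "g": [0, 1, 0, 1, 0, 1, 0, 1]})
fit = smf.ols("y ~ x * g", data=df).fit()

# Design rows for the two fitted values being compared: (x=6, g=1) vs (x=2, g=0).
# Column order follows the fitted model: [Intercept, x, g, x:g].
x1 = np.array([1.0, 6.0, 1.0, 6.0])
x2 = np.array([1.0, 2.0, 0.0, 0.0])
d = x1 - x2

diff = d @ fit.params.values                      # difference between the two fitted values
se = np.sqrt(d @ fit.cov_params().values @ d)     # its standard error
tval = diff / se
p = 2 * stats.t.sf(abs(tval), fit.df_resid)
tcrit = stats.t.ppf(0.975, fit.df_resid)
print(f"diff = {diff:.3f}, SE = {se:.3f}, "
      f"95% CI = ({diff - tcrit * se:.3f}, {diff + tcrit * se:.3f}), p = {p:.4f}")
```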

  6. On the use of the covariance matrix to fit correlated data

    Science.gov (United States)

    D'Agostini, G.

    1994-07-01

    Best fits to data which are affected by systematic uncertainties on the normalization factor have the tendency to produce curves lower than expected if the covariance matrix of the data points is used in the definition of the χ2. This paper shows that the effect is a direct consequence of the hypothesis used to estimate the empirical covariance matrix, namely the linearization on which the usual error propagation relies. The bias can become unacceptable if the normalization error is large, or a large number of data points are fitted.
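
    The effect described in this record is easy to reproduce numerically: when a common normalisation uncertainty is propagated into the covariance matrix by linearisation, the chi-square (generalized least squares) estimate of a constant falls below both measurements. The numbers below are illustrative, chosen in the spirit of the example discussed in the paper.

```python
import numpy as np

# Two measurements of the same quantity with small independent errors and a
# common (fully correlated) normalisation uncertainty.
y = np.array([8.0, 8.5])
stat = 0.02 * y                    # 2% independent errors
norm = 0.10                        # 10% common normalisation error

# Covariance matrix built by linear error propagation of the normalisation factor.
V = np.diag(stat ** 2) + (norm ** 2) * np.outer(y, y)

# Best estimate of a constant: generalized least squares (the chi-square minimum).
ones = np.ones_like(y)
Vinv = np.linalg.inv(V)
mean = (ones @ Vinv @ y) / (ones @ Vinv @ ones)
print(f"weighted average = {mean:.3f}")   # falls below both data points
```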

  7. The impact of LHC jet data on the MMHT PDF fit at NNLO

    Science.gov (United States)

    Harland-Lang, L. A.; Martin, A. D.; Thorne, R. S.

    2018-03-01

    We investigate the impact of the high precision ATLAS and CMS 7 TeV measurements of inclusive jet production on the MMHT global PDF analysis at next-to-next-to-leading order (NNLO). This is made possible by the recent completion of the long-term project to calculate the NNLO corrections to the hard cross section. We find that a good description of the ATLAS data is not possible with the default treatment of experimental systematic errors, and propose a simplified solution that retains the dominant physical information of the data. We then investigate the fit quality and the impact on the gluon PDF central value and uncertainty when the ATLAS and CMS data are included in a MMHT fit. We consider both common choices for the factorization and renormalization scale, namely the inclusive jet transverse momentum, p_\\perp , and the leading jet p_\\perp , as well as the different jet radii for which the ATLAS and CMS data are made available. We find that the impact of these data on the gluon is relatively insensitive to these inputs, in particular the scale choice, while the inclusion of NNLO corrections tends to improve the data description somewhat and has a qualitatively similar though not identical impact on the gluon in comparison to NLO.

  8. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

    Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process the models allow for profound conclusions about the test takers. However, before such a model can be used its fit has to be checked carefully. In this manuscript we give an overview over existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study. In this study we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Boorsboom, Psychol Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to four alternative tests of model fit, namely the M 2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Asess. , 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.

  9. A data model for environmental scientists

    Science.gov (United States)

    Kapeljushnik, O.; Beran, B.; Valentine, D.; van Ingen, C.; Zaslavsky, I.; Whitenack, T.

    2008-12-01

    Environmental science encompasses a wide range of disciplines from water chemistry to microbiology, ecology and atmospheric sciences. Studies often require working across disciplines which differ in their ways of describing and storing data such that it is not possible to devise a monolithic one-size-fits-all data solution. Based on our experiences with Consortium of the Universities for the Advancement of Hydrologic Science Inc. (CUAHSI) Observations Data Model, Berkeley Water Center FLUXNET carbon-climate work and by examining standards like EPA's Water Quality Exchange (WQX), we have developed a flexible data model that allows extensions without need to altering the schema such that scientists can define custom metadata elements to describe their data including observations, analysis methods as well as sensors and geographical features. The data model supports various types of observations including fixed point and moving sensors, bottled samples, rasters from remote sensors and models, and categorical descriptions (e.g. taxonomy) by employing user-defined-types when necessary. It leverages ADO .NET Entity Framework to provide the semantic data models for differing disciplines, while maintaining a common schema below the entity layer. This abstraction layer simplifies data retrieval and manipulation by hiding the logic and complexity of the relational schema from users thus allows programmers and scientists to deal directly with objects such as observations, sensors, watersheds, river reaches, channel cross-sections, laboratory analysis methods and samples as opposed to table joins, columns and rows.

  10. Polarimetry data inversion in conditions of tokamak plasma: Model based tomography concept

    International Nuclear Information System (INIS)

    Bieg, B.; Chrzanowski, J.; Kravtsov, Yu. A.; Mazon, D.

    2015-01-01

    Highlights: • Model based plasma tomography is presented. • Minimization procedure for the error function is suggested to be performed using the gradient method. • model based procedure of data inversion in the case of joint polarimetry–interferometry data. - Abstract: Model based plasma tomography is studied which fits a hypothetical multi-parameter plasma model to polarimetry and interferometry experimental data. Fitting procedure implies minimization of the error function, defined as a sum of squared differences between theoretical and empirical values. Minimization procedure for the function is suggested to be performed using the gradient method. Contrary to traditional tomography, which deals exclusively with observational data, model-based tomography (MBT) operates also with reasonable model of inhomogeneous plasma distribution and verifies which profile of a given class better fits experimental data. Model based tomography (MBT) restricts itself by definite class of models for instance power series, Fourier expansion etc. The basic equations of MBT are presented which generalize the equations of model based procedure of polarimetric data inversion in the case of joint polarimetry–interferometry data.
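
    Model-based tomography as described here reduces to minimising an error function (the sum of squared differences between theoretical and empirical values) over a small set of profile parameters, using gradient information. The sketch below illustrates that minimisation in Python with an invented linear forward model and synthetic data; the real polarimetry-interferometry forward model is nonlinear and problem-specific, so everything named here is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical forward model: parameter vector p describes the plasma profile, and a
# fixed linear operator G maps it to predicted signals. G and the "measured" data d
# are invented for illustration only.
rng = np.random.default_rng(3)
G = rng.normal(size=(12, 4))             # 12 channels, 4 profile parameters
p_true = np.array([1.0, -0.5, 0.2, 0.05])
d = G @ p_true + rng.normal(0, 0.01, size=12)

def error_function(p):
    """Sum of squared differences between theoretical and empirical values."""
    r = G @ p - d
    return r @ r

def gradient(p):
    return 2.0 * G.T @ (G @ p - d)       # analytic gradient used by the descent method

result = minimize(error_function, x0=np.zeros(4), jac=gradient, method="BFGS")
print("recovered parameters:", result.x.round(3))
```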

  11. Polarimetry data inversion in conditions of tokamak plasma: Model based tomography concept

    Energy Technology Data Exchange (ETDEWEB)

    Bieg, B. [Maritime University of Szczecin, Waly Chrobrego 1-2, 70-500 Szczecin (Poland); Chrzanowski, J., E-mail: j.chrzanowski@am.szczecin.pl [Maritime University of Szczecin, Waly Chrobrego 1-2, 70-500 Szczecin (Poland); Kravtsov, Yu. A. [Maritime University of Szczecin, Waly Chrobrego 1-2, 70-500 Szczecin (Poland); Space Research Institute, Profsoyuznaya St. 82/34 Russian Academy of Science, Moscow 117997 (Russian Federation); Mazon, D. [CEA, IRFM, F-13108 Saint Paul-lez-Durance (France)

    2015-10-15

    Highlights: • Model based plasma tomography is presented. • Minimization procedure for the error function is suggested to be performed using the gradient method. • model based procedure of data inversion in the case of joint polarimetry–interferometry data. - Abstract: Model based plasma tomography is studied which fits a hypothetical multi-parameter plasma model to polarimetry and interferometry experimental data. Fitting procedure implies minimization of the error function, defined as a sum of squared differences between theoretical and empirical values. Minimization procedure for the function is suggested to be performed using the gradient method. Contrary to traditional tomography, which deals exclusively with observational data, model-based tomography (MBT) operates also with reasonable model of inhomogeneous plasma distribution and verifies which profile of a given class better fits experimental data. Model based tomography (MBT) restricts itself by definite class of models for instance power series, Fourier expansion etc. The basic equations of MBT are presented which generalize the equations of model based procedure of polarimetric data inversion in the case of joint polarimetry–interferometry data.

  12. Elementary Physical Education Teachers' Content Knowledge of Physical Activity and Health-Related Fitness

    Science.gov (United States)

    Santiago, Jose A.; Disch, James G.; Morales, Julio

    2012-01-01

    The purpose of this study was to examine elementary physical education teachers' content knowledge of physical activity and health-related fitness. Sixty-four female and 24 male teachers completed the Appropriate Physical Activity and Health-Related Fitness test. Descriptive statistics results indicated that the mean percentage score for the test…

  13. Lunar photometric modelling with SMART-1/AMIE imaging data

    International Nuclear Information System (INIS)

    Wilkman, O.; Muinonen, K.; Videen, G.; Josset, J.-L.; Souchon, A.

    2014-01-01

    We investigate the light-scattering properties of the lunar mare areas. A large photometric dataset was extracted from images taken by the AMIE camera on board the SMART-1 spacecraft. Inter-particle shadowing effects in the regolith are modelled using ray-tracing simulations, and then a phase function is fit to the data using Bayesian techniques and Markov chain Monte Carlo. Additionally, the data are fit with phase functions computed from radiative-transfer coherent-backscatter (RT-CB) simulations. The results indicate that the lunar photometry, including both the opposition effect and azimuthal effects, can be explained well with a combination of inter-particle shadowing and coherent backscattering. Our results produce loose constraints on the mare physical properties. The RT-CB results indicate that the scattering volume element is optically thick. In both the Bayesian analysis and the RT-CB fit, models with lower packing density and/or higher surface roughness always produce better fits to the data than densely packed, smoother ones

  14. Exponential Data Fitting and its Applications

    CERN Document Server

    Pereyra, Victor

    2010-01-01

    Real and complex exponential data fitting is an important activity in many different areas of science and engineering, ranging from Nuclear Magnetic Resonance Spectroscopy and Lattice Quantum Chromodynamics to Electrical and Chemical Engineering, Vision and Robotics. The most commonly used norm in the approximation by linear combinations of exponentials is the l2 norm (sum of squares of residuals), in which case one obtains a nonlinear separable least squares problem. A number of different methods have been proposed through the years to solve these types of problems and new applications appear

  15. Fitter. The package for fitting a chosen theoretical multi-parameter function through a set of data points. Application to experimental data of the YuMO spectrometer. Version 2.1.0. Long write-up and user's guide

    International Nuclear Information System (INIS)

    Solov'ev, A.G.; Stadnik, A.V.; Islamov, A.N.; Kuklin, A.I.

    2008-01-01

    Fitter is a C++ program designed to fit a chosen theoretical multi-parameter function through a set of data points. The method of fitting is chi-square minimization; in addition, a robust fitting method can be applied in Fitter. Fitter was designed for small-angle neutron scattering data analysis, and the respective theoretical models are implemented in it. Some commonly used models (Gaussian and polynomials) are also implemented for wider applicability.
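
    Fitter itself is a C++ package, but the fitting strategy it describes (chi-square minimisation of a multi-parameter model through error-weighted data points, optionally with a robust loss) can be sketched in a few lines of Python. The Guinier-type model, the synthetic data and the soft-L1 robust loss below are stand-ins chosen for illustration, not what Fitter ships.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical SANS-like intensity curve: a Guinier-type model as a stand-in for the
# theoretical functions implemented in Fitter.
def model(q, I0, Rg, bkg):
    return I0 * np.exp(-(q * Rg) ** 2 / 3.0) + bkg

def residuals(p, q, I, dI):
    return (I - model(q, *p)) / dI          # chi residuals: chi^2 = sum(residuals**2)

rng = np.random.default_rng(2)
q = np.linspace(0.01, 0.2, 40)
dI = 0.02 * np.ones_like(q)
I = model(q, 1.0, 20.0, 0.05) + rng.normal(0, dI)

# Plain chi-square minimisation ...
fit = least_squares(residuals, x0=[0.5, 15.0, 0.0], args=(q, I, dI))
# ... or a robust variant that down-weights outliers, analogous to a robust fitting option.
robust = least_squares(residuals, x0=[0.5, 15.0, 0.0], args=(q, I, dI), loss="soft_l1")

print("chi-square fit:", fit.x.round(3))
print("robust fit    :", robust.x.round(3))
```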

  16. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DEFF Research Database (Denmark)

    Ding, Tao; Li, Cheng; Huang, Can

    2018-01-01

    In order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost... optimality. Numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods....

  17. Testing the goodness of fit of selected infiltration models on soils with different land use histories

    International Nuclear Information System (INIS)

    Mbagwu, J.S.C.

    1993-10-01

    Six infiltration models, some obtained by reformulating the fitting parameters of the classical Kostiakov (1932) and Philip (1957) equations, were investigated for their ability to describe water infiltration into highly permeable sandy soils from the Nsukka plains of SE Nigeria. The models were Kostiakov, Modified Kostiakov (A), Modified Kostiakov (B), Philip, Modified Philip (A) and Modified Philip (B). Infiltration data were obtained from double ring infiltrometers on field plots established on a Knadic Paleustult (Nkpologu series) to investigate the effects of land use on soil properties and maize yield. The treatments were: (i) tilled-mulched (TM), (ii) tilled-unmulched (TU), (iii) untilled-mulched (UM), (iv) untilled-unmulched (UU) and (v) continuous pasture (CP). Cumulative infiltration was highest on the TM and lowest on the CP plots. All estimated model parameters obtained by the best fit of measured data differed significantly among the treatments. Based on the magnitude of R² values, the Kostiakov, Modified Kostiakov (A), Philip and Modified Philip (A) models provided the best predictions of cumulative infiltration as a function of time. Comparing experimental with model-predicted cumulative infiltration showed, however, that on all treatments the values predicted by the classical Kostiakov, Philip and Modified Philip (A) models deviated most from the experimental data. The other models produced values that agreed very well with the measured data. Considering the ease of determining the fitting parameters, it is proposed that on soils with high infiltration rates either the Modified Kostiakov model (I = K·t^a + Ic·t) or the Modified Philip model (I = S·t^(1/2) + Ic·t), where I is cumulative infiltration, K the time coefficient, t the time elapsed, 'a' the time exponent, Ic the equilibrium infiltration rate and S the soil water sorptivity, be used for routine characterization of the infiltration process. (author). 33 refs, 3 figs, 6 tabs
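
    Either recommended model can be fitted directly by nonlinear least squares. The sketch below does so for hypothetical double-ring infiltrometer readings (times and cumulative depths are made up); it is not the original analysis, only an illustration of how the fitting parameters and R² values might be obtained.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_kostiakov(t, K, a, Ic):
    return K * t**a + Ic * t              # I = K*t^a + Ic*t

def modified_philip(t, S, Ic):
    return S * np.sqrt(t) + Ic * t        # I = S*t^(1/2) + Ic*t

# Hypothetical double-ring infiltrometer readings: time (h), cumulative I (cm).
t = np.array([0.083, 0.25, 0.5, 1.0, 1.5, 2.0, 3.0])
I = np.array([2.1, 4.8, 7.6, 12.0, 15.5, 18.6, 24.2])

for name, func, p0 in [("Modified Kostiakov", modified_kostiakov, (5.0, 0.5, 3.0)),
                       ("Modified Philip", modified_philip, (6.0, 3.0))]:
    popt, _ = curve_fit(func, t, I, p0=p0, maxfev=10000)
    ss_res = np.sum((I - func(t, *popt)) ** 2)
    ss_tot = np.sum((I - I.mean()) ** 2)
    print(f"{name}: parameters = {np.round(popt, 3)}, R^2 = {1 - ss_res / ss_tot:.4f}")
```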

  18. Fitting the CDO correlation skew: a tractable structural jump-diffusion model

    DEFF Research Database (Denmark)

    Willemann, Søren

    2007-01-01

    We extend a well-known structural jump-diffusion model for credit risk to handle both correlations through diffusion of asset values and common jumps in asset value. Through a simplifying assumption on the default timing and efficient numerical techniques, we develop a semi-analytic framework allowing for instantaneous calibration to heterogeneous CDS curves and fast computation of CDO tranche spreads. We calibrate the model to CDX and iTraxx data from February 2007 and achieve a satisfactory fit. To price the senior tranches for both indices, we require a risk-neutral probability of a market...

  19. Checking the Adequacy of Fit of Models from Split-Plot Designs

    DEFF Research Database (Denmark)

    Almini, A. A.; Kulahci, Murat; Montgomery, D. C.

    2009-01-01

    One of the main features that distinguish split-plot experiments from other experiments is that they involve two types of experimental errors: the whole-plot (WP) error and the subplot (SP) error. Taking this into consideration is very important when computing measures of adequacy of fit for split-plot models. In this article, we propose the computation of two R-2, R-2-adjusted, prediction error sums of squares (PRESS), and R-2-prediction statistics to measure the adequacy of fit for the WP and the SP submodels in a split-plot design. This is complemented with the graphical analysis of the two types of errors to check for any violation of the underlying assumptions and the adequacy of fit of split-plot models. Using examples, we show how computing two measures of model adequacy of fit for each split-plot design model is appropriate and useful as they reveal whether the correct WP and SP effects have...

  20. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    Science.gov (United States)

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with

  1. Repair models of cell survival and corresponding computer program for survival curve fitting

    International Nuclear Information System (INIS)

    Shen Xun; Hu Yiwei

    1992-01-01

    Some basic concepts and formulations of two repair models of survival, the incomplete repair (IR) model and the lethal-potentially lethal (LPL) model, are introduced. An IBM-PC computer program for survival curve fitting with these models was developed and applied to fit the survivals of human melanoma cells HX118 irradiated at different dose rates. A comparison was made between the repair models and two non-repair models, the multitarget-single-hit model and the linear-quadratic model, in the fitting and analysis of the survival-dose curves. It was shown that either the IR model or the LPL model can fit a set of survival curves at different dose rates with the same parameters and provide information on the repair capacity of cells. These two mathematical models could be very useful in quantitative studies of the radiosensitivity and repair capacity of cells.
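
    The linear-quadratic model mentioned as one of the non-repair comparators is straightforward to fit with standard tools; the sketch below fits it to hypothetical clonogenic survival data. The IR and LPL repair models themselves are not implemented here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Linear-quadratic survival model: S(D) = exp(-(alpha*D + beta*D^2)).
def lq_survival(dose, alpha, beta):
    return np.exp(-(alpha * dose + beta * dose**2))

# Hypothetical clonogenic survival data (dose in Gy, surviving fraction).
dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
sf = np.array([1.0, 0.78, 0.55, 0.22, 0.065, 0.015])

(alpha, beta), _ = curve_fit(lq_survival, dose, sf, p0=[0.2, 0.03])
print(f"alpha = {alpha:.3f} /Gy, beta = {beta:.4f} /Gy^2, alpha/beta = {alpha / beta:.1f} Gy")
```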

  2. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction

    Science.gov (United States)

    Jarosch, H. S.

    1982-01-01

    A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.

  3. Application Mail Tracking Using RSA Algorithm As Security Data and HOT-Fit a Model for Evaluation System

    Science.gov (United States)

    Permadi, Ginanjar Setyo; Adi, Kusworo; Gernowo, Rahmad

    2018-02-01

    The RSA algorithm provides security in the process of sending messages or data by using two keys, a private key and a public key. In this research, the comprehensive HOT-Fit evaluation method is used to ensure and directly assess whether the system that was built meets its goals. The purpose of this research is to build a mail-tracking information system that applies the RSA algorithm for security and to evaluate it with the HOT-Fit method, producing a system suited to the physics faculty. The security of the RSA algorithm lies in the difficulty of factoring a large number into its prime factors; this prime factorization must be carried out to obtain the private key. HOT-Fit assesses three aspects: technology, judged from system status, system quality and service quality; human, judged from system use and user satisfaction; and organization, judged from structure and environment. The result is a mail-tracking system for sending messages, assessed on the basis of the evaluation obtained.
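
    For reference, a toy sketch of the RSA mechanics the abstract relies on: key generation from two deliberately tiny (and therefore insecure) primes, encryption with the public key, and decryption with the private key. This is not the paper's implementation; real keys use primes hundreds of digits long.

```python
from math import gcd

# Toy RSA with deliberately small primes (insecure; for illustration only).
p, q = 61, 53
n = p * q                      # public modulus; security rests on factoring n
phi = (p - 1) * (q - 1)

e = 17                         # public exponent, coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent: modular inverse (Python 3.8+)

message = 42                   # a message already encoded as an integer < n
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)
print(f"n={n}, e={e}, d={d}, ciphertext={ciphertext}, recovered={recovered}")
assert recovered == message
```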

  4. Application Mail Tracking Using RSA Algorithm As Security Data and HOT-Fit a Model for Evaluation System

    Directory of Open Access Journals (Sweden)

    Setyo Permadi Ginanjar

    2018-01-01

    Full Text Available The RSA algorithm provides security in the process of sending messages or data by using two keys, a private key and a public key. In this research, the comprehensive HOT-Fit evaluation method is used to ensure and directly assess whether the system that was built meets its goals. The purpose of this research is to build a mail-tracking information system that applies the RSA algorithm for security and to evaluate it with the HOT-Fit method, producing a system suited to the physics faculty. The security of the RSA algorithm lies in the difficulty of factoring a large number into its prime factors; this prime factorization must be carried out to obtain the private key. HOT-Fit assesses three aspects: technology, judged from system status, system quality and service quality; human, judged from system use and user satisfaction; and organization, judged from structure and environment. The result is a mail-tracking system for sending messages, assessed on the basis of the evaluation obtained.

  5. Bayesian inference and model comparison for metallic fatigue data

    KAUST Repository

    Babuska, Ivo

    2016-01-06

    In this work, we present a statistical treatment of stress-life (S-N) data drawn from a collection of records of fatigue experiments that were performed on 75S-T6 aluminum alloys. Our main objective is to predict the fatigue life of materials by providing a systematic approach to model calibration, model selection and model ranking with reference to S-N data. To this purpose, we consider fatigue-limit models and random fatigue-limit models that are specially designed to allow the treatment of the run-outs (right-censored data). We first fit the models to the data by maximum likelihood methods and estimate the quantiles of the life distribution of the alloy specimen. We then compare and rank the models by classical measures of fit based on information criteria. We also consider a Bayesian approach that provides, under the prior distribution of the model parameters selected by the user, their simulation-based posterior distributions.
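
    A much simplified sketch of the classical part of such a comparison: two candidate life distributions are fitted by maximum likelihood to hypothetical, uncensored fatigue lives and ranked by BIC. The run-out (right-censoring) handling and the Bayesian posterior analysis described in the abstract are omitted.

```python
import numpy as np
from scipy import stats

# Hypothetical fatigue lives (cycles to failure) at a single stress level;
# run-outs (right-censored specimens) are ignored in this sketch.
rng = np.random.default_rng(3)
lives = rng.lognormal(mean=11.0, sigma=0.4, size=40)

def bic(loglik, n_params, n_obs):
    return -2.0 * loglik + n_params * np.log(n_obs)

results = {}
shape, loc, scale = stats.lognorm.fit(lives, floc=0)       # lognormal MLE
ll = stats.lognorm.logpdf(lives, shape, loc, scale).sum()
results["lognormal"] = bic(ll, 2, lives.size)

c, loc, scale = stats.weibull_min.fit(lives, floc=0)       # Weibull MLE
ll = stats.weibull_min.logpdf(lives, c, loc, scale).sum()
results["weibull"] = bic(ll, 2, lives.size)

for name, value in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name:10s} BIC = {value:.1f}")                  # lower BIC ranks higher
```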

  6. Predictive modelling of fault related fracturing in carbonate damage-zones: analytical and numerical models of field data (Central Apennines, Italy)

    Science.gov (United States)

    Mannino, Irene; Cianfarra, Paola; Salvini, Francesco

    2010-05-01

    Permeability in carbonates is strongly influenced by the presence of brittle deformation patterns, i.e pressure-solution surfaces, extensional fractures, and faults. Carbonate rocks achieve fracturing both during diagenesis and tectonic processes. Attitude, spatial distribution and connectivity of brittle deformation features rule the secondary permeability of carbonatic rocks and therefore the accumulation and the pathway of deep fluids (ground-water, hydrocarbon). This is particularly true in fault zones, where the damage zone and the fault core show different hydraulic properties from the pristine rock as well as between them. To improve the knowledge of fault architecture and faults hydraulic properties we study the brittle deformation patterns related to fault kinematics in carbonate successions. In particular we focussed on the damage-zone fracturing evolution. Fieldwork was performed in Meso-Cenozoic carbonate units of the Latium-Abruzzi Platform, Central Apennines, Italy. These units represent field analogues of rock reservoir in the Southern Apennines. We combine the study of rock physical characteristics of 22 faults and quantitative analyses of brittle deformation for the same faults, including bedding attitudes, fracturing type, attitudes, and spatial intensity distribution by using the dimension/spacing ratio, namely H/S ratio where H is the dimension of the fracture and S is the spacing between two analogous fractures of the same set. Statistical analyses of structural data (stereonets, contouring and H/S transect) were performed to infer a focussed, general algorithm that describes the expected intensity of fracturing process. The analytical model was fit to field measurements by a Montecarlo-convergent approach. This method proved a useful tool to quantify complex relations with a high number of variables. It creates a large sequence of possible solution parameters and results are compared with field data. For each item an error mean value is

  7. Worm plot to diagnose fit in quantile regression

    NARCIS (Netherlands)

    Buuren, S. van

    2007-01-01

    The worm plot is a series of detrended Q-Q plots, split by covariate levels. The worm plot is a diagnostic tool for visualizing how well a statistical model fits the data, for finding locations at which the fit can be improved, and for comparing the fit of different models. This paper shows how the

  8. Worm plot to diagnose fit in quantile regression

    NARCIS (Netherlands)

    Buuren, S. van

    2007-01-01

    The worm plot is a series of detrended Q-Q plots, split by covariate levels. The worm plot is a diagnostic tool for visualizing how well a statistical model fits the data, for finding locations at which the fit can be improved, and for comparing the fit of different models. This paper shows how

  9. A data-driven model for influenza transmission incorporating media effects.

    Science.gov (United States)

    Mitchell, Lewis; Ross, Joshua V

    2016-10-01

    Numerous studies have attempted to model the effect of mass media on the transmission of diseases such as influenza; however, quantitative data on media engagement has until recently been difficult to obtain. With the recent explosion of 'big data' coming from online social media and the like, large volumes of data on a population's engagement with mass media during an epidemic are becoming available to researchers. In this study, we combine an online dataset comprising millions of shared messages relating to influenza with traditional surveillance data on flu activity to suggest a functional form for the relationship between the two. Using this data, we present a simple deterministic model for influenza dynamics incorporating media effects, and show that such a model helps explain the dynamics of historical influenza outbreaks. Furthermore, through model selection we show that the proposed media function fits historical data better than other media functions proposed in earlier studies.
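
    A minimal sketch of the idea of a media-modified transmission model: an SIR system in which the transmission rate is damped by a function of current prevalence. The specific media function and parameter values below are assumptions for illustration, not the functional form identified in the study.

```python
import numpy as np
from scipy.integrate import odeint

# SIR model in which media coverage, driven by current prevalence, damps the
# transmission rate through the (purely illustrative) factor exp(-k*I).
def sir_media(y, t, beta, gamma, k):
    S, I, R = y
    media_factor = np.exp(-k * I)
    dS = -beta * media_factor * S * I
    dI = beta * media_factor * S * I - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

t = np.linspace(0.0, 120.0, 600)             # days
y0 = [0.999, 0.001, 0.0]                     # S, I, R as population fractions
for k in (0.0, 20.0, 60.0):                  # increasing media responsiveness
    sol = odeint(sir_media, y0, t, args=(0.4, 0.2, k))
    print(f"k = {k:5.1f}  peak prevalence = {sol[:, 1].max():.3f}")
```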

  10. Sensitivity of goodness-of-fit statistics to rainfall data rounding off

    Science.gov (United States)

    Deidda, Roberto; Puliga, Michelangelo

    An analysis based on the L-moments theory suggests adopting the generalized Pareto distribution to interpret daily rainfall depths recorded by the rain-gauge network of the Hydrological Survey of the Sardinia Region. Nevertheless, a major problem, not yet completely resolved, arises in the estimation of a left-censoring threshold able to assure a good fit of rainfall data with the generalized Pareto distribution. In order to detect an optimal threshold, keeping the largest possible number of data, we chose to apply a “failure-to-reject” method based on goodness-of-fit tests, as proposed by Choulakian and Stephens [Choulakian, V., Stephens, M.A., 2001. Goodness-of-fit tests for the generalized Pareto distribution. Technometrics 43, 478-484]. Unfortunately, the application of the test, using percentage points provided by Choulakian and Stephens (2001), did not succeed in detecting a useful threshold value in most analyzed time series. A deeper analysis revealed that these failures are mainly due to the presence of large quantities of rounded-off values among the sample data, affecting the distribution of goodness-of-fit statistics and leading to significant departures from the percentage points expected for continuous random variables. A procedure based on Monte Carlo simulations is thus proposed to overcome these problems.
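
    The Monte Carlo idea can be sketched as follows: simulate generalized Pareto samples, round them off, refit the distribution, recompute a goodness-of-fit statistic, and compare the resulting percentage points with those obtained from exact (unrounded) data. The sample size, rounding step, and parameter values below are illustrative.

```python
import numpy as np
from scipy import stats

def anderson_darling(x, dist, *params):
    # A^2 statistic for a fully specified distribution.
    x = np.sort(x)
    n = x.size
    z = np.clip(dist.cdf(x, *params), 1e-10, 1 - 1e-10)
    i = np.arange(1, n + 1)
    return -n - np.sum((2 * i - 1) * (np.log(z) + np.log(1 - z[::-1]))) / n

rng = np.random.default_rng(4)
n, shape, scale, n_sim = 200, 0.1, 10.0, 500
a2_exact, a2_rounded = [], []
for _ in range(n_sim):
    sample = stats.genpareto.rvs(shape, scale=scale, size=n, random_state=rng)
    for data, store in ((sample, a2_exact),
                        (np.round(sample, 0), a2_rounded)):    # round off to 1 mm
        c, loc, sc = stats.genpareto.fit(data, floc=0)
        store.append(anderson_darling(data, stats.genpareto, c, 0, sc))

print("95th percentile of A^2, exact data  :", round(np.percentile(a2_exact, 95), 3))
print("95th percentile of A^2, rounded data:", round(np.percentile(a2_rounded, 95), 3))
```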

  11. Dynamic data analysis modeling data with differential equations

    CERN Document Server

    Ramsay, James

    2017-01-01

    This text focuses on the use of smoothing methods for developing and estimating differential equations following recent developments in functional data analysis and building on techniques described in Ramsay and Silverman (2005) Functional Data Analysis. The central concept of a dynamical system as a buffer that translates sudden changes in input into smooth controlled output responses has led to applications of previously analyzed data, opening up entirely new opportunities for dynamical systems. The technical level has been kept low so that those with little or no exposure to differential equations as modeling objects can be brought into this data analysis landscape. There are already many texts on the mathematical properties of ordinary differential equations, or dynamic models, and there is a large literature distributed over many fields on models for real world processes consisting of differential equations. However, a researcher interested in fitting such a model to data, or a statistician interested in...

  12. FITTING OF PARAMETRIC BUILDING MODELS TO OBLIQUE AERIAL IMAGES

    Directory of Open Access Journals (Sweden)

    U. S. Panday

    2012-09-01

    Full Text Available In the literature and in photogrammetric workstations, many approaches and systems to automatically reconstruct buildings from remote sensing data are described and available. Those building models are being used, for instance, in city modeling or in a cadastre context. If a roof overhang is present, the building walls cannot be estimated correctly from nadir-view aerial images or airborne laser scanning (ALS) data. This leads to inconsistent building outlines, which has a negative influence on visual impression, but more seriously also represents a wrong legal boundary in the cadastre. Oblique aerial images, as opposed to nadir-view images, reveal greater detail, enabling different views of an object taken from different directions to be seen. Building walls are visible from oblique images directly, and those images are used for automated roof overhang estimation in this research. A fitting algorithm is employed to find roof parameters of simple buildings. It uses a least squares algorithm to fit projected wire frames to their corresponding edge lines extracted from the images. Self-occlusion is detected based on the intersection of the viewing ray with the planes formed by the building, whereas occlusion from other objects is detected using an ALS point cloud. Overhang and ground height are obtained by sweeping vertical and horizontal planes respectively. Experimental results are verified with high resolution ortho-images, field survey, and ALS data. A planimetric accuracy of 1 cm mean and 5 cm standard deviation was obtained, while the buildings' orientations were accurate to a mean of 0.23° and a standard deviation of 0.96° with the ortho-image. Overhang parameters were aligned to approximately 10 cm with the field survey. The ground and roof heights were accurate to means of –9 cm and 8 cm, with standard deviations of 16 cm and 8 cm, with ALS respectively. The developed approach reconstructs 3D building models well in cases of sufficient texture. More images should be acquired for

  13. Tests of α_s running from QCD fits to collider data

    Energy Technology Data Exchange (ETDEWEB)

    Kuprash, Oleg; Geiser, Achim [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Hamburg University, Institute of Experimental Physics, Hamburg (Germany)

    2015-07-01

    The running of the strong coupling constant, α_s(μ), is tested in a QCD analysis using jet measurements at the LHC, Tevatron and HERA in combination with inclusive DIS data. Here μ is associated with the energy scale in the process, typically with the jet transverse energy. For the α_s running test, the parameter n_f of the running, which gives the number of active quarks contributing to loop corrections of the jet and DIS cross sections, is replaced by n_f + Δn_f at energy scales μ > μ_thresh. A series of simultaneous α_s(M_Z) + Δn_f + proton PDF fits to world collider cross section data is done at Next-to-Leading Order QCD, for μ_thresh values ranging from 1 GeV to 1 TeV. The fitted Δn_f is consistent with zero at all tested scales, which gives a precise quantitative confirmation of the QCD running of α_s over 3 orders of magnitude in energy scale. The presented study also provides a new way for indirect searches for physics beyond the Standard Model.

  14. Hawkes process model with a time-dependent background rate and its application to high-frequency financial data

    Science.gov (United States)

    Omi, Takahiro; Hirata, Yoshito; Aihara, Kazuyuki

    2017-07-01

    A Hawkes process model with a time-varying background rate is developed for analyzing the high-frequency financial data. In our model, the logarithm of the background rate is modeled by a linear model with a relatively large number of variable-width basis functions, and the parameters are estimated by a Bayesian method. Our model can capture not only the slow time variation, such as in the intraday seasonality, but also the rapid one, which follows a macroeconomic news announcement. By analyzing the tick data of the Nikkei 225 mini, we find that (i) our model is better fitted to the data than the Hawkes models with a constant background rate or a slowly varying background rate, which have been commonly used in the field of quantitative finance; (ii) the improvement in the goodness-of-fit to the data by our model is significant especially for sessions where considerable fluctuation of the background rate is present; and (iii) our model is statistically consistent with the data. The branching ratio, which quantifies the level of the endogeneity of markets, estimated by our model is 0.41, suggesting the relative importance of exogenous factors in the market dynamics. We also demonstrate that it is critically important to appropriately model the time-dependent background rate for the branching ratio estimation.
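
    A compact sketch of the simpler constant-background special case: the exponential-kernel Hawkes log-likelihood with the usual recursive term, synthetic events simulated via the cluster representation, and maximum-likelihood recovery of the branching ratio. The time-varying background and Bayesian estimation used in the paper are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Negative log-likelihood of a Hawkes process with constant background mu and
# exponential kernel alpha*beta*exp(-beta*t); alpha is the branching ratio.
def hawkes_neg_loglik(params, times, T):
    mu, alpha, beta = params
    if mu <= 0 or alpha <= 0 or beta <= 0 or alpha >= 1:
        return np.inf
    loglik, r, prev = 0.0, 0.0, None
    for t in times:
        r = np.exp(-beta * (t - prev)) * (r + 1.0) if prev is not None else 0.0
        loglik += np.log(mu + alpha * beta * r)
        prev = t
    loglik -= mu * T + alpha * np.sum(1.0 - np.exp(-beta * (T - times)))
    return -loglik

def simulate_hawkes(mu, alpha, beta, T, seed=5):
    # Cluster representation: Poisson immigrants, each event spawns
    # Poisson(alpha) offspring at exponential(beta) delays.
    rng = np.random.default_rng(seed)
    queue = list(rng.uniform(0, T, rng.poisson(mu * T)))
    events = []
    while queue:
        parent = queue.pop()
        events.append(parent)
        for _ in range(rng.poisson(alpha)):
            child = parent + rng.exponential(1.0 / beta)
            if child < T:
                queue.append(child)
    return np.sort(np.array(events))

T = 500.0
times = simulate_hawkes(mu=0.5, alpha=0.4, beta=2.0, T=T)
res = minimize(hawkes_neg_loglik, x0=[0.3, 0.2, 1.0], args=(times, T),
               method="Nelder-Mead")
mu_hat, alpha_hat, beta_hat = res.x
print(f"mu = {mu_hat:.3f}, branching ratio = {alpha_hat:.3f}, beta = {beta_hat:.3f}")
```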

  15. Hawkes process model with a time-dependent background rate and its application to high-frequency financial data.

    Science.gov (United States)

    Omi, Takahiro; Hirata, Yoshito; Aihara, Kazuyuki

    2017-07-01

    A Hawkes process model with a time-varying background rate is developed for analyzing the high-frequency financial data. In our model, the logarithm of the background rate is modeled by a linear model with a relatively large number of variable-width basis functions, and the parameters are estimated by a Bayesian method. Our model can capture not only the slow time variation, such as in the intraday seasonality, but also the rapid one, which follows a macroeconomic news announcement. By analyzing the tick data of the Nikkei 225 mini, we find that (i) our model is better fitted to the data than the Hawkes models with a constant background rate or a slowly varying background rate, which have been commonly used in the field of quantitative finance; (ii) the improvement in the goodness-of-fit to the data by our model is significant especially for sessions where considerable fluctuation of the background rate is present; and (iii) our model is statistically consistent with the data. The branching ratio, which quantifies the level of the endogeneity of markets, estimated by our model is 0.41, suggesting the relative importance of exogenous factors in the market dynamics. We also demonstrate that it is critically important to appropriately model the time-dependent background rate for the branching ratio estimation.

  16. Modelling population dynamics model formulation, fitting and assessment using state-space methods

    CERN Document Server

    Newman, K B; Morgan, B J T; King, R; Borchers, D L; Cole, D J; Besbeas, P; Gimenez, O; Thomas, L

    2014-01-01

    This book gives a unifying framework for estimating the abundance of open populations: populations subject to births, deaths and movement, given imperfect measurements or samples of the populations.  The focus is primarily on populations of vertebrates for which dynamics are typically modelled within the framework of an annual cycle, and for which stochastic variability in the demographic processes is usually modest. Discrete-time models are developed in which animals can be assigned to discrete states such as age class, gender, maturity,  population (within a metapopulation), or species (for multi-species models). The book goes well beyond estimation of abundance, allowing inference on underlying population processes such as birth or recruitment, survival and movement. This requires the formulation and fitting of population dynamics models.  The resulting fitted models yield both estimates of abundance and estimates of parameters characterizing the underlying processes.  

  17. The Alberta moving beyond breast cancer (AMBER) cohort study: a prospective study of physical activity and health-related fitness in breast cancer survivors

    Directory of Open Access Journals (Sweden)

    Courneya Kerry S

    2012-11-01

    Full Text Available Abstract Background Limited research has examined the association between physical activity, health-related fitness, and disease outcomes in breast cancer survivors. Here, we present the rationale and design of the Alberta Moving Beyond Breast Cancer (AMBER) Study, a prospective cohort study designed specifically to examine the role of physical activity and health-related fitness in breast cancer survivorship from the time of diagnosis and for the balance of life. The AMBER Study will examine the role of physical activity and health-related fitness in facilitating treatment completion, alleviating treatment side effects, hastening recovery after treatments, improving long term quality of life, and reducing the risks of disease recurrence, other chronic diseases, and premature death. Methods/Design The AMBER Study will enroll 1500 newly diagnosed, incident, stage I-IIIc breast cancer survivors in Alberta, Canada over a 5 year period. Assessments will be made at baseline (within 90 days of surgery), 1 year, and 3 years, consisting of objective and self-reported measurements of physical activity, health-related fitness, blood collection, lymphedema, patient-reported outcomes, and determinants of physical activity. A final assessment at 5 years will measure patient-reported data only. The cohort members will be followed for an additional 5 years for disease outcomes. Discussion The AMBER cohort will answer key questions related to physical activity and health-related fitness in breast cancer survivors including: (1) the independent and interactive associations of physical activity and health-related fitness with disease outcomes (e.g., recurrence, breast cancer-specific mortality, overall survival), treatment completion rates, symptoms and side effects (e.g., pain, lymphedema, fatigue, neuropathy), quality of life, and psychosocial functioning (e.g., anxiety, depression, self-esteem, happiness), (2) the determinants of physical activity and

  18. Fitting the Phenomenological MSSM

    CERN Document Server

    AbdusSalam, S S; Quevedo, F; Feroz, F; Hobson, M

    2010-01-01

    We perform a global Bayesian fit of the phenomenological minimal supersymmetric standard model (pMSSM) to current indirect collider and dark matter data. The pMSSM contains the most relevant 25 weak-scale MSSM parameters, which are simultaneously fit using `nested sampling' Monte Carlo techniques in more than 15 years of CPU time. We calculate the Bayesian evidence for the pMSSM and constrain its parameters and observables in the context of two widely different, but reasonable, priors to determine which inferences are robust. We make inferences about sparticle masses, the sign of the $\\mu$ parameter, the amount of fine tuning, dark matter properties and the prospects for direct dark matter detection without assuming a restrictive high-scale supersymmetry breaking model. We find the inferred lightest CP-even Higgs boson mass as an example of an approximately prior independent observable. This analysis constitutes the first statistically convergent pMSSM global fit to all current data.

  19. A fitting LEGACY – modelling Kepler's best stars

    Directory of Open Access Journals (Sweden)

    Aarslev Magnus J.

    2017-01-01

    Full Text Available The LEGACY sample represents the best solar-like stars observed in the Kepler mission[5, 8]. The 66 stars in the sample are all on the main sequence or only slightly more evolved. They each have more than one year's observation data in short cadence, allowing for precise extraction of individual frequencies. Here we present model fits using a modified ASTFIT procedure employing two different near-surface-effect corrections, one by Christensen-Dalsgaard[4] and a newer correction proposed by Ball & Gizon[1]. We then compare the results obtained using the different corrections. We find that using the latter correction yields lower masses and significantly lower χ2 values for a large part of the sample.

  20. Health-Related Physical Fitness in Dutch Children With Developmental Coordination Disorder

    NARCIS (Netherlands)

    van der Hoek, Frouwien D.; Stuive, Ilse; Reinders-Messelink, Heleen A.; Holty, Lian; de Blecourt, Alida C. E.; Maathuis, Carel G. B.; van Weert, Ellen

    2012-01-01

    Objective: To compare components of health-related physical fitness between Dutch children with clinically diagnosed developmental coordination disorder (DCD) and typically developing children (TDC), and to examine associations between motor performance problems and components of health-related

  1. The effect of measurement quality on targeted structural model fit indices: A comment on Lance, Beck, Fan, and Carter (2016).

    Science.gov (United States)

    McNeish, Daniel; Hancock, Gregory R

    2018-03-01

    Lance, Beck, Fan, and Carter (2016) recently advanced 6 new fit indices and associated cutoff values for assessing data-model fit in the structural portion of traditional latent variable path models. The authors appropriately argued that, although most researchers' theoretical interest rests with the latent structure, they still rely on indices of global model fit that simultaneously assess both the measurement and structural portions of the model. As such, Lance et al. proposed indices intended to assess the structural portion of the model in isolation of the measurement model. Unfortunately, although these strategies separate the assessment of the structure from the fit of the measurement model, they do not isolate the structure's assessment from the quality of the measurement model. That is, even with a perfectly fitting measurement model, poorer quality (i.e., less reliable) measurements will yield a more favorable verdict regarding structural fit, whereas better quality (i.e., more reliable) measurements will yield a less favorable structural assessment. This phenomenon, referred to by Hancock and Mueller (2011) as the reliability paradox, affects not only traditional global fit indices but also those structural indices proposed by Lance et al. as well. Fortunately, as this comment will clarify, indices proposed by Hancock and Mueller help to mitigate this problem and allow the structural portion of the model to be assessed independently of both the fit of the measurement model as well as the quality of indicator variables contained therein. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Inferring genetic interactions from comparative fitness data.

    Science.gov (United States)

    Crona, Kristina; Gavryushkin, Alex; Greene, Devin; Beerenwinkel, Niko

    2017-12-20

    Darwinian fitness is a central concept in evolutionary biology. In practice, however, it is hardly possible to measure fitness for all genotypes in a natural population. Here, we present quantitative tools to make inferences about epistatic gene interactions when the fitness landscape is only incompletely determined due to imprecise measurements or missing observations. We demonstrate that genetic interactions can often be inferred from fitness rank orders, where all genotypes are ordered according to fitness, and even from partial fitness orders. We provide a complete characterization of rank orders that imply higher order epistasis. Our theory applies to all common types of gene interactions and facilitates comprehensive investigations of diverse genetic interactions. We analyzed various genetic systems comprising HIV-1, the malaria-causing parasite Plasmodium vivax, the fungus Aspergillus niger, and the TEM-family of β-lactamase associated with antibiotic resistance. For all systems, our approach revealed higher order interactions among mutations.

  3. Health-Related Fitness and Young Children.

    Science.gov (United States)

    Gabbard, Carl; LeBlanc, Betty

    Because research indicates that American youth have become fatter since the 1960's, the development of fitness among young children should not be left to chance. Simple games, rhythms, and dance are not sufficient to insure fitness, for, during the regular free play situation, children very seldom experience physical activity of enough intensity…

  4. Height, Weight, and Aerobic Fitness Level in Relation to the Risk of Atrial Fibrillation.

    Science.gov (United States)

    Crump, Casey; Sundquist, Jan; Winkleby, Marilyn A; Sundquist, Kristina

    2018-03-01

    Tall stature and obesity have been associated with a higher risk of atrial fibrillation (AF), but there have been conflicting reports of the effects of aerobic fitness. We conducted a national cohort study to examine interactions between height or weight and level of aerobic fitness among 1,547,478 Swedish military conscripts during 1969-1997 (97%-98% of all 18-year-old men) in relation to AF identified from nationwide inpatient and outpatient diagnoses through 2012 (maximal age, 62 years). Increased height, weight, and aerobic fitness level (but not muscular strength) at age 18 years were all associated with a higher AF risk in adulthood. Positive additive and multiplicative interactions were found between height or weight and aerobic fitness level (for the highest tertiles of height and aerobic fitness level vs. the lowest, relative excess risk = 0.51, 95% confidence interval (CI): 0.40, 0.62; ratio of hazard ratios = 1.50, 95% CI: 1.34, 1.65). High aerobic fitness levels were associated with higher risk among men who were at least 186 cm (6 feet, 1 inch) tall but were protective among shorter men. Men with the combination of tall stature and high aerobic fitness level had the highest risk (for the highest tertiles vs. the lowest, adjusted hazard ratio = 1.70, 95% CI: 1.61, 1.80). These findings suggest important interactions between body size and aerobic fitness level in relation to AF and may help identify high-risk subgroups.

  5. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    Science.gov (United States)

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, based on 25-item dichotomous scales and sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  6. Lee-Carter state space modeling: Application to the Malaysia mortality data

    Science.gov (United States)

    Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.

    2014-06-01

    This article presents an approach that formalizes the Lee-Carter (LC) model as a state space model. Maximum likelihood via the Expectation-Maximization (EM) algorithm was used to estimate the model. The methodology is applied to Malaysia's total population mortality data. Malaysia's mortality was modeled based on age-specific death rate (ASDR) data from 1971-2009. The fitted ASDR are compared to the actual observed values. A comparison of the fitted and actual values shows that the fitted values from the LC-SS model and the original LC model are quite close. In addition, there is little difference in the root mean squared error (RMSE) and Akaike information criterion (AIC) values of the two models. The LC-SS model estimated in this study can be extended for forecasting ASDR in Malaysia. The accuracy of the LC-SS model relative to the original LC model can then be further examined by verifying its forecasting power in an out-of-sample comparison.
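
    For contrast with the state-space formulation, the classical Lee-Carter fit can be written in a few lines using an SVD of the centred log-rate matrix, with the usual identifiability constraints. The data below are simulated, not the Malaysian ASDR series.

```python
import numpy as np

# Classical Lee-Carter fit via SVD: log m(x,t) = a_x + b_x * k_t + error.
rng = np.random.default_rng(6)
ages, years = 20, 30
true_a = np.linspace(-6.0, -2.0, ages)
true_b = np.full(ages, 1.0 / ages)
true_k = np.cumsum(rng.normal(-0.5, 0.3, years))
log_m = true_a[:, None] + np.outer(true_b, true_k) + rng.normal(0, 0.02, (ages, years))

a_x = log_m.mean(axis=1)                             # age pattern
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x, k_t = U[:, 0], s[0] * Vt[0]

# Impose the usual constraints sum(b_x) = 1, sum(k_t) = 0 without changing the fit.
a_x = a_x + b_x * k_t.mean()
k_t = (k_t - k_t.mean()) * b_x.sum()
b_x = b_x / b_x.sum()

fitted = a_x[:, None] + np.outer(b_x, k_t)
print("RMSE of fitted log death rates:", np.sqrt(np.mean((fitted - log_m) ** 2)))
```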

  7. Data Modeling, Feature Extraction, and Classification of Magnetic and EMI Data, ESTCP Discrimination Study, Camp Sibert, AL. Demonstration Report

    Science.gov (United States)

    2008-09-01

    Indexed fragments from the report describe misfit-versus-depth curves for EM63 Pasion-Oldenburg model fits, and the fitting of 2- and 3-dipole Pasion-Oldenburg models to EM63 cued-interrogation data with selection of optimal models for discrimination (cf. Hart et al., 2001; Collins et al., 2001; Pasion & Oldenburg, 2001; Zhang et al., 2003a, 2003b; Billings, 2004).

  8. Automated ligand fitting by core-fragment fitting and extension into density

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.; Klei, Herbert; Adams, Paul D.; Moriarty, Nigel W.; Cohn, Judith D.

    2006-01-01

    An automated ligand-fitting procedure has been developed and tested on 9327 ligands and (F_o − F_c)exp(iϕ_c) difference density from macromolecular structures in the Protein Data Bank. A procedure for fitting of ligands to electron-density maps by first fitting a core fragment of the ligand to density and then extending the remainder of the ligand into density is presented. The approach was tested by fitting 9327 ligands over a wide range of resolutions (most are in the range 0.8-4.8 Å) from the Protein Data Bank (PDB) into (F_o − F_c)exp(iϕ_c) difference density calculated using entries from the PDB without these ligands. The procedure was able to place 58% of these 9327 ligands within 2 Å (r.m.s.d.) of the coordinates of the atoms in the original PDB entry for that ligand. The success of the fitting procedure was relatively insensitive to the size of the ligand in the range 10–100 non-H atoms and was only moderately sensitive to resolution, with the percentage of ligands placed near the coordinates of the original PDB entry for fits in the range 58–73% over all resolution ranges tested.

  9. Bayesian Evaluation of Dynamical Soil Carbon Models Using Soil Carbon Flux Data

    Science.gov (United States)

    Xie, H. W.; Romero-Olivares, A.; Guindani, M.; Allison, S. D.

    2017-12-01

    2016 was Earth's hottest year in the modern temperature record and the third consecutive record-breaking year. As the planet continues to warm, temperature-induced changes in respiration rates of soil microbes could reduce the amount of carbon sequestered in the soil organic carbon (SOC) pool, one of the largest terrestrial stores of carbon. This would accelerate temperature increases. In order to predict the future size of the SOC pool, mathematical soil carbon models (SCMs) describing interactions between the biosphere and atmosphere are needed. SCMs must be validated before they can be chosen for predictive use. In this study, we check two SCMs called CON and AWB for consistency with observed data using Bayesian goodness of fit testing that can be used in the future to compare other models. We compare the fit of the models to longitudinal soil respiration data from a meta-analysis of soil heating experiments using a family of Bayesian goodness of fit metrics called information criteria (ICs), including the Widely Applicable Information Criterion (WAIC), the Leave-One-Out Information Criterion (LOOIC), and the Log Pseudo Marginal Likelihood (LPML). These ICs take the entire posterior distribution into account, rather than just one outputted model fit line. A lower WAIC and LOOIC and a larger LPML indicate a better fit. We compare AWB and CON with fixed steady state model pool sizes. At equivalent SOC, dissolved organic carbon, and microbial pool sizes, CON always outperforms AWB quantitatively by all three ICs used. AWB monotonically improves in fit as we reduce the SOC steady state pool size while fixing all other pool sizes, and the same is almost true for CON. The AWB model with the lowest SOC is the best performing AWB model, while the CON model with the second lowest SOC is the best performing model. We observe that AWB displays more changes in slope sign and qualitatively displays more adaptive dynamics, which prevents AWB from being fully ruled out for

  10. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
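
    The core of the Isc case can be sketched as an ordinary straight-line regression near V = 0, with the parameter covariance giving the fit-only uncertainty of the intercept. The data window, noise level, and values below are invented, and the model-discrepancy and Bayesian window-selection aspects of the paper are not addressed.

```python
import numpy as np

# Straight-line fit of I-V points near V = 0; the intercept estimates Isc and
# the regression covariance gives its fit-only (model-valid) uncertainty.
rng = np.random.default_rng(7)
v = np.linspace(0.0, 0.05, 12)                            # voltages in the Isc window (V)
i_meas = 8.20 - 1.5 * v + rng.normal(0, 0.002, v.size)    # measured currents (A)

coeffs, cov = np.polyfit(v, i_meas, deg=1, cov=True)
slope, intercept = coeffs
isc, isc_std = intercept, np.sqrt(cov[1, 1])
print(f"Isc = {isc:.4f} A +/- {isc_std:.4f} A (1 sigma, fit uncertainty only)")
```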

  11. FIREFLY (Fitting IteRativEly For Likelihood analYsis): a full spectral fitting code

    Science.gov (United States)

    Wilkinson, David M.; Maraston, Claudia; Goddard, Daniel; Thomas, Daniel; Parikh, Taniya

    2017-12-01

    We present a new spectral fitting code, FIREFLY, for deriving the stellar population properties of stellar systems. FIREFLY is a chi-squared minimization fitting code that fits combinations of single-burst stellar population models to spectroscopic data, following an iterative best-fitting process controlled by the Bayesian information criterion. No priors are applied; rather, all solutions within a statistical cut are retained with their weight. Moreover, no additive or multiplicative polynomials are employed to adjust the spectral shape. This fitting freedom is envisaged in order to map out the effect of intrinsic spectral energy distribution degeneracies, such as age, metallicity, and dust reddening, on galaxy properties, and to quantify the effect of varying input model components on such properties. Dust attenuation is included using a new procedure, which was tested on Integral Field Spectroscopic data in a previous paper. The fitting method is extensively tested with a comprehensive suite of mock galaxies, real galaxies from the Sloan Digital Sky Survey and Milky Way globular clusters. We also assess the robustness of the derived properties as a function of signal-to-noise ratio (S/N) and adopted wavelength range. We show that FIREFLY is able to recover age, metallicity, stellar mass, and even the star formation history remarkably well down to an S/N ∼ 5, for moderately dusty systems. Code and results are publicly available.
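
    One common building block of such template fitting is a non-negative least-squares combination of model spectra. The sketch below uses fake Gaussian-shaped templates and SciPy's NNLS solver; it is not FIREFLY's iterative algorithm, but it illustrates the chi-squared template-combination step.

```python
import numpy as np
from scipy.optimize import nnls

# Fit a spectrum as a non-negative combination of (fake) SSP templates.
rng = np.random.default_rng(9)
wave = np.linspace(3500.0, 7500.0, 400)
templates = np.stack([np.exp(-0.5 * ((wave - c) / 800.0) ** 2)
                      for c in (4000.0, 5000.0, 6000.0, 7000.0)], axis=1)

true_weights = np.array([0.0, 2.0, 0.5, 0.0])
sigma = 0.02
spectrum = templates @ true_weights + rng.normal(0, sigma, wave.size)

# Dividing by the noise turns the NNLS objective into a chi-squared.
weights, rnorm = nnls(templates / sigma, spectrum / sigma)
chi2_ndf = rnorm**2 / (wave.size - templates.shape[1])
print("recovered weights:", np.round(weights, 3), " chi2/ndf:", round(chi2_ndf, 3))
```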

  12. Fitting Nonlinear Ordinary Differential Equation Models with Random Effects and Unknown Initial Conditions Using the Stochastic Approximation Expectation-Maximization (SAEM) Algorithm.

    Science.gov (United States)

    Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu

    2016-03-01

    The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.

  13. Health-related physical fitness and physical activity in elementary school students.

    Science.gov (United States)

    Chen, Weiyun; Hammond-Bennett, Austin; Hypnar, Andrew; Mason, Steve

    2018-01-30

    This study examined associations between students' physical fitness and physical activity (PA), as well as which specific physical fitness components were more significant correlates of being physically active in different settings for boys and girls. A total of 265 fifth-grade students with an average age of 11 voluntarily participated in this study. The students' physical fitness was assessed using four FitnessGram tests, including the Progressive Aerobic Cardiovascular Endurance Run (PACER), curl-up, push-up, and trunk lift tests. The students' daily PA was assessed in various settings using a daily PA log for 7 days. Data were analyzed with descriptive statistics, univariate analyses, and multiple R-squared linear regression methods. Performance on the four physical fitness tests was significantly associated with the PA minutes spent in physical education (PE) class and recess for the total sample and for girls, but not for boys. Performance on the four fitness tests was significantly linked to participation in sports/dances outside school and the total weekly PA minutes for the total sample, boys, and girls. Further, boys and girls who were the most physically fit spent significantly more time engaging in sports/dances and had greater total weekly PA than boys and girls who were not physically fit. In addition, the physically fit girls were more physically active in recess than girls who were not physically fit. Overall, students' performance on the four physical fitness tests was significantly associated with them being physically active during PE and in recess and engaging in sports/dances, as well as with their total weekly PA minutes, but not with their participation in non-organized physical play outside school. ClinicalTrials.gov ID: NCT03015337, registered date: 1/09/2017, as "retrospectively registered".

  14. Fit reduced GUTS models online: From theory to practice.

    Science.gov (United States)

    Baudrot, Virgile; Veber, Philippe; Gence, Guillaume; Charles, Sandrine

    2018-05-20

    Mechanistic modeling approaches, such as the toxicokinetic-toxicodynamic (TKTD) framework, are promoted by international institutions such as the European Food Safety Authority and the Organization for Economic Cooperation and Development to assess the environmental risk of chemical products generated by human activities. TKTD models can encompass a large set of mechanisms describing the kinetics of compounds inside organisms (e.g., uptake and elimination) and their effect at the level of individuals (e.g., damage accrual, recovery, and death mechanism). Compared to classical dose-response models, TKTD approaches have many advantages, including accounting for temporal aspects of exposure and toxicity, considering data points all along the experiment and not only at the end, and making predictions for untested situations such as realistic exposure scenarios. Among TKTD models, the general unified threshold model of survival (GUTS) is one of the most recent and innovative frameworks but is still underused in practice, especially by risk assessors, because specialist programming and statistical skills are necessary to run it. Making GUTS models easier to use through a new module freely available from the web platform MOSAIC (standing for MOdeling and StAtistical tools for ecotoxICology) should promote GUTS operability in support of the daily work of environmental risk assessors. This paper presents the main features of MOSAIC_GUTS: uploading of the experimental data, GUTS fitting analysis, and LCx estimates with their uncertainty. These features are exemplified with literature data. Integr Environ Assess Manag 2018;00:000-000. © 2018 SETAC.

  15. New ROOT Graphical User Interfaces for fitting

    International Nuclear Information System (INIS)

    Maline, D Gonzalez; Moneta, L; Antcheva, I

    2010-01-01

    ROOT, as a scientific data analysis framework, provides extensive capabilities via Graphical User Interfaces (GUI) for performing interactive analysis and visualizing data objects like histograms and graphs. A new interface for fitting has been developed for performing, exploring and comparing fits on data point sets such as histograms, multi-dimensional graphs or trees. With this new interface, users can build interactively the fit model function, set parameter values and constraints and select fit and minimization methods with their options. Functionality for visualizing the fit results is as well provided, with the possibility of drawing residuals or confidence intervals. Furthermore, the new fit panel reacts as a standalone application and it does not prevent users from interacting with other windows. We will describe in great detail the functionality of this user interface, covering as well new capabilities provided by the new fitting and minimization tools introduced recently in the ROOT framework.

  16. A comparison of fit of CNC-milled titanium and zirconia frameworks to implants.

    Science.gov (United States)

    Abduo, Jaafar; Lyons, Karl; Waddell, Neil; Bennani, Vincent; Swain, Michael

    2012-05-01

    Computer numeric controlled (CNC) milling has been proven to be a predictable method to fabricate accurately fitting implant titanium frameworks. However, no data are available regarding the fit of CNC-milled implant zirconia frameworks. To compare the precision of fit of implant frameworks milled from titanium and zirconia and relate it to peri-implant strain development after framework fixation. A partially edentulous epoxy resin model received two Branemark implants in the areas of the lower left second premolar and second molar. From this model, 10 identical frameworks were fabricated by means of CNC milling. Half of them were made from titanium and the other half from zirconia. Strain gauges were mounted close to the implants to qualitatively and quantitatively assess strain development as a result of framework fitting. In addition, the fit of the framework-implant interface was measured using an optical microscope, when only one screw was tightened (passive fit) and when all screws were tightened (vertical fit). The data were statistically analyzed using the Mann-Whitney test. All frameworks produced measurable amounts of peri-implant strain. The zirconia frameworks produced significantly less strain than titanium. Combining the qualitative and quantitative information indicates that the implants were under vertical displacement rather than horizontal. The vertical fit was similar for zirconia (3.7 µm) and titanium (3.6 µm) frameworks; however, the zirconia frameworks exhibited a significantly finer passive fit (5.5 µm) than the titanium frameworks (13.6 µm). CNC milling produced zirconia and titanium frameworks with high accuracy. The difference between the two materials in terms of fit is expected to be of minimal clinical significance. The strain developed around the implants was related more to the framework fit than to the framework material. © 2011 Wiley Periodicals, Inc.

  17. RATES OF FITNESS DECLINE AND REBOUND SUGGEST PERVASIVE EPISTASIS

    Science.gov (United States)

    Perfeito, L; Sousa, A; Bataillon, T; Gordo, I

    2014-01-01

    Unraveling the factors that determine the rate of adaptation is a major question in evolutionary biology. One key parameter is the effect of a new mutation on fitness, which invariably depends on the environment and genetic background. The fate of a mutation also depends on population size, which determines the amount of drift it will experience. Here, we manipulate both population size and genotype composition and follow adaptation of 23 distinct Escherichia coli genotypes. These have previously accumulated mutations under intense genetic drift and encompass substantial fitness variation. A simple rule is uncovered: the net fitness change is negatively correlated with the fitness of the genotype in which new mutations appear—a signature of epistasis. We find that Fisher's geometrical model can account for the observed patterns of fitness change and infer the parameters of this model that best fit the data, using Approximate Bayesian Computation. We estimate a genomic mutation rate of 0.01 per generation for fitness-altering mutations, albeit with a large confidence interval, a mean fitness effect of mutations of −0.01, and an effective number of traits of nine in mutS− E. coli. This framework can be extended to confront a broader range of models with data and test different classes of fitness landscape models. PMID:24372601
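
    A toy version of the ABC step: a simple simulator maps a candidate mean mutational effect to a predicted net fitness change, and candidates drawn from the prior are accepted when the prediction falls within a tolerance of the observed value. The simulator, prior, tolerance, and 'observed' value are all stand-ins, not Fisher's geometrical model or the paper's data.

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate_mean_fitness_change(mean_effect, n_lines=23, n_mut=50):
    # Each line accumulates n_mut deleterious mutations with exponentially
    # distributed effects of the given mean; return the mean net change.
    effects = -rng.exponential(mean_effect, size=(n_lines, n_mut))
    return effects.sum(axis=1).mean()

observed = -0.5                                   # stand-in "observed" summary
accepted = []
for _ in range(20000):
    candidate = rng.uniform(0.0, 0.05)            # prior on the mean effect size
    if abs(simulate_mean_fitness_change(candidate) - observed) < 0.05:
        accepted.append(candidate)

accepted = np.array(accepted)
print(f"ABC posterior mean effect: {accepted.mean():.4f} (accepted {accepted.size} of 20000)")
```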

  18. THE HERSCHEL ORION PROTOSTAR SURVEY: SPECTRAL ENERGY DISTRIBUTIONS AND FITS USING A GRID OF PROTOSTELLAR MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Furlan, E. [Infrared Processing and Analysis Center, California Institute of Technology, 770 S. Wilson Ave., Pasadena, CA 91125 (United States); Fischer, W. J. [Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771 (United States); Ali, B. [Space Science Institute, 4750 Walnut Street, Boulder, CO 80301 (United States); Stutz, A. M. [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Stanke, T. [ESO, Karl-Schwarzschild-Strasse 2, D-85748 Garching bei München (Germany); Tobin, J. J. [National Radio Astronomy Observatory, Charlottesville, VA 22903 (United States); Megeath, S. T.; Booker, J. [Ritter Astrophysical Research Center, Department of Physics and Astronomy, University of Toledo, 2801 W. Bancroft Street, Toledo, OH 43606 (United States); Osorio, M. [Instituto de Astrofísica de Andalucía, CSIC, Camino Bajo de Huétor 50, E-18008 Granada (Spain); Hartmann, L.; Calvet, N. [Department of Astronomy, University of Michigan, 500 Church Street, Ann Arbor, MI 48109 (United States); Poteet, C. A. [New York Center for Astrobiology, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY 12180 (United States); Manoj, P. [Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India); Watson, D. M. [Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627 (United States); Allen, L., E-mail: furlan@ipac.caltech.edu [National Optical Astronomy Observatory, 950 N. Cherry Avenue, Tucson, AZ 85719 (United States)

    2016-05-01

    We present key results from the Herschel Orion Protostar Survey: spectral energy distributions (SEDs) and model fits of 330 young stellar objects, predominantly protostars, in the Orion molecular clouds. This is the largest sample of protostars studied in a single, nearby star formation complex. With near-infrared photometry from 2MASS, mid- and far-infrared data from Spitzer and Herschel, and submillimeter photometry from APEX, our SEDs cover 1.2–870 μm and sample the peak of the protostellar envelope emission at ∼100 μm. Using mid-IR spectral indices and bolometric temperatures, we classify our sample into 92 Class 0 protostars, 125 Class I protostars, 102 flat-spectrum sources, and 11 Class II pre-main-sequence stars. We implement a simple protostellar model (including a disk in an infalling envelope with outflow cavities) to generate a grid of 30,400 model SEDs and use it to determine the best-fit model parameters for each protostar. We argue that far-IR data are essential for accurate constraints on protostellar envelope properties. We find that most protostars, and in particular the flat-spectrum sources, are well fit. The median envelope density and median inclination angle decrease from Class 0 to Class I to flat-spectrum protostars, despite the broad range in best-fit parameters in each of the three categories. We also discuss degeneracies in our model parameters. Our results confirm that the different protostellar classes generally correspond to an evolutionary sequence with a decreasing envelope infall rate, but the inclination angle also plays a role in the appearance, and thus interpretation, of the SEDs.
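
    Grid-based SED fitting of the kind described above reduces, at its core, to a chi-square comparison of the observed fluxes against each precomputed model. The sketch below illustrates that step with synthetic arrays; the band count, grid size and free per-model scaling are assumptions for illustration, not the survey's actual fitting code.

    ```python
    # Sketch of selecting the best-fit model SED from a precomputed grid by
    # chi-square. Observed fluxes, uncertainties and the model grid are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    n_bands, n_models = 12, 30400

    obs_flux = rng.lognormal(mean=0.0, sigma=1.0, size=n_bands)   # fake fluxes
    obs_err = 0.1 * obs_flux                                       # 10% errors
    model_grid = rng.lognormal(mean=0.0, sigma=1.0, size=(n_models, n_bands))

    # Allow each model a free scaling (e.g. luminosity/distance), chosen to
    # minimise chi-square analytically before comparing models.
    scale = (model_grid * obs_flux / obs_err**2).sum(axis=1) / \
            (model_grid**2 / obs_err**2).sum(axis=1)
    chi2 = (((obs_flux - scale[:, None] * model_grid) / obs_err)**2).sum(axis=1)

    best = np.argmin(chi2)
    print(f"best model index {best}, reduced chi2 {chi2[best] / (n_bands - 1):.2f}")
    ```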

  19. Fast Algorithms for Fitting Active Appearance Models to Unconstrained Images

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Pantic, Maja

    2016-01-01

    Fitting algorithms for Active Appearance Models (AAMs) are usually considered to be robust but slow or fast but less able to generalize well to unseen variations. In this paper, we look into AAM fitting algorithms and make the following orthogonal contributions: We present a simple “project-out” …

  20. A Probabilistic Approach to Fitting Period–luminosity Relations and Validating Gaia Parallaxes

    Energy Technology Data Exchange (ETDEWEB)

    Sesar, Branimir; Fouesneau, Morgan; Bailer-Jones, Coryn A. L.; Gould, Andy; Rix, Hans-Walter [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Price-Whelan, Adrian M., E-mail: bsesar@mpia.de [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States)

    2017-04-01

    Pulsating stars, such as Cepheids, Miras, and RR Lyrae stars, are important distance indicators and calibrators of the “cosmic distance ladder,” and yet their period–luminosity–metallicity (PLZ) relations are still constrained using simple statistical methods that cannot take full advantage of available data. To enable optimal usage of data provided by the Gaia mission, we present a probabilistic approach that simultaneously constrains parameters of PLZ relations and uncertainties in Gaia parallax measurements. We demonstrate this approach by constraining PLZ relations of type ab RR Lyrae stars in near-infrared W1 and W2 bands, using Tycho-Gaia Astrometric Solution (TGAS) parallax measurements for a sample of ≈100 type ab RR Lyrae stars located within 2.5 kpc of the Sun. The fitted PLZ relations are consistent with previous studies, and in combination with other data, deliver distances precise to 6% (once various sources of uncertainty are taken into account). To a precision of 0.05 mas (1σ), we do not find a statistically significant offset in TGAS parallaxes for this sample of distant RR Lyrae stars (median parallax of 0.8 mas and distance of 1.4 kpc). With only minor modifications, our probabilistic approach can be used to constrain PLZ relations of other pulsating stars, and we intend to apply it to Cepheid and Mira stars in the near future.
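
    The key idea of such a probabilistic approach, working in parallax space so that noisy (even negative) parallaxes never have to be inverted to distances, can be sketched as below. The PL relation form, its coefficients, the error model and the synthetic sample are placeholder assumptions, not the paper's full hierarchical model.

    ```python
    # Sketch of a likelihood that compares PLZ-predicted parallaxes with observed
    # noisy parallaxes instead of inverting parallaxes to distances.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)

    # Fake sample: log-periods, apparent magnitudes, observed parallaxes (mas).
    log_p = rng.uniform(-0.4, -0.1, size=100)
    true_M = -1.0 - 2.0 * log_p                       # assumed "true" PL relation
    true_plx = rng.uniform(0.4, 2.0, size=100)        # mas
    m_app = true_M + 5 * np.log10(100.0 / true_plx)   # m = M + 5 log10(d / 10 pc)
    plx_obs = true_plx + rng.normal(0.0, 0.2, size=100)
    plx_err = np.full(100, 0.2)

    def neg_log_like(theta):
        a, b = theta                                  # PL relation M = a + b*logP
        M = a + b * log_p
        d_pc = 10 ** ((m_app - M + 5.0) / 5.0)        # distance implied by the fit
        plx_pred = 1000.0 / d_pc                      # predicted parallax in mas
        return 0.5 * np.sum(((plx_obs - plx_pred) / plx_err) ** 2)

    fit = minimize(neg_log_like, x0=[-0.5, -1.5], method="Nelder-Mead")
    print("fitted PL coefficients:", fit.x)
    ```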

  1. SAMMY, Multilevel R-Matrix Fits to Neutron and Charged-Particle Cross-Section Data Using Bayes' Equations

    International Nuclear Information System (INIS)

    Larson, Nancy M.

    2007-01-01

    1 - Description of problem or function: The purpose of the code is to analyze time-of-flight cross section data in the resolved and unresolved resonance regions, where the incident particle is either a neutron or a charged particle (p, alpha, d,...). Energy-differential cross sections and angular-distribution data are treated, as are certain forms of energy-integrated data. In the resolved resonance region (RRR), theoretical cross sections are generated using the Reich-Moore approximation to R-matrix theory (and extensions thereof). Sophisticated models are used to describe the experimental situation: data-reduction parameters (e.g. normalization, background, sample thickness) are included. Several options are available for both resolution and Doppler broadening, including a crystal-lattice model for Doppler broadening. Self-shielding and multiple-scattering correction options are available for analysis of capture cross sections. Multiple isotopes and impurities within a sample are handled accurately. Cross sections in the unresolved resonance region (URR) can also be analyzed using SAMMY. This capability was borrowed from Froehner's FITACS code; SAMMY modifications for the URR include more exact calculation of partial derivatives, normalization options for the experimental data, increased flexibility for input of experimental data, and the introduction of user-friendly input options. In both energy regions, values for resonance parameters and for data-related parameters (such as normalization, sample thickness, effective temperature, resolution parameters) are determined via fits to the experimental data using Bayes' method (see below). Final results may be reported in ENDF format for inclusion in the evaluated nuclear data files. The manner in which SAMMY 7 (released in 2006) differs from the previous version (SAMMY-M6) is itemized in Section I.A of the SAMMY users' manual. Details of the 7.0.1 update are documented in the SAMMY 7.0.1 Errata (http://www.ornl.gov/sci/nuclear_science_technology/nuclear_data

  2. Fast and exact Newton and Bidirectional fitting of Active Appearance Models.

    Science.gov (United States)

    Kossaifi, Jean; Tzimiropoulos, Yorgos; Pantic, Maja

    2016-12-21

    Active Appearance Models (AAMs) are generative models of shape and appearance that have proven very attractive for their ability to handle wide changes in illumination, pose and occlusion when trained in the wild, while not requiring large training datasets like regression-based or deep learning methods. The problem of fitting an AAM is usually formulated as a non-linear least-squares problem, and the main way of solving it is a standard Gauss-Newton algorithm. In this paper we extend Active Appearance Models in two ways: we first extend the Gauss-Newton framework by formulating a bidirectional fitting method that deforms both the image and the template to fit a new instance. We then formulate a second-order method by deriving an efficient Newton method for AAM fitting. We derive both methods in a unified framework for two types of Active Appearance Models, holistic and part-based, and additionally show how to exploit the structure in the problem to derive fast yet exact solutions. We perform a thorough evaluation of all algorithms on three challenging and recently annotated in-the-wild datasets, and investigate fitting accuracy, convergence properties and the influence of noise in the initialisation. We compare our proposed methods to other algorithms and show that they yield state-of-the-art results, outperforming other methods while having superior convergence properties.
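
    AAM fitting is formulated above as a nonlinear least-squares problem solved with Gauss-Newton iterations. The sketch below shows the generic Gauss-Newton update on a toy exponential model rather than an actual shape/appearance objective; it is only an illustration of the baseline scheme the paper extends.

    ```python
    # Generic Gauss-Newton iteration for nonlinear least squares, applied to a
    # toy exponential fitting problem (not an AAM objective).
    import numpy as np

    def gauss_newton(residual, jacobian, x0, n_iter=20):
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            r = residual(x)
            J = jacobian(x)
            # Solve the normal equations J^T J dx = -J^T r for the update.
            dx = np.linalg.solve(J.T @ J, -J.T @ r)
            x = x + dx
        return x

    # Toy problem: fit y = a * exp(b * t) to noisy samples.
    t = np.linspace(0, 1, 50)
    rng = np.random.default_rng(3)
    y = 2.0 * np.exp(-1.5 * t) + rng.normal(0, 0.01, t.size)

    residual = lambda p: p[0] * np.exp(p[1] * t) - y
    jacobian = lambda p: np.column_stack([np.exp(p[1] * t),
                                          p[0] * t * np.exp(p[1] * t)])

    print(gauss_newton(residual, jacobian, x0=[1.0, -1.0]))
    ```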

  3. xMDFF: molecular dynamics flexible fitting of low-resolution X-ray structures.

    Science.gov (United States)

    McGreevy, Ryan; Singharoy, Abhishek; Li, Qufei; Zhang, Jingfen; Xu, Dong; Perozo, Eduardo; Schulten, Klaus

    2014-09-01

    X-ray crystallography remains the most dominant method for solving atomic structures. However, for relatively large systems, the availability of only medium-to-low-resolution diffraction data often limits the determination of all-atom details. A new molecular dynamics flexible fitting (MDFF)-based approach, xMDFF, for determining structures from such low-resolution crystallographic data is reported. xMDFF employs a real-space refinement scheme that flexibly fits atomic models into an iteratively updating electron-density map. It addresses significant large-scale deformations of the initial model to fit the low-resolution density, as tested with synthetic low-resolution maps of D-ribose-binding protein. xMDFF has been successfully applied to re-refine six low-resolution protein structures of varying sizes that had already been submitted to the Protein Data Bank. Finally, via systematic refinement of a series of data from 3.6 to 7 Å resolution, xMDFF refinements together with electrophysiology experiments were used to validate the first all-atom structure of the voltage-sensing protein Ci-VSP.

  4. Assessing the Goodness of Fit of Phylogenetic Comparative Methods: A Meta-Analysis and Simulation Study.

    Directory of Open Access Journals (Sweden)

    Dwueng-Chwuan Jhwueng

    Full Text Available Phylogenetic comparative methods (PCMs) have been applied widely in analyzing data from related species, but their fit to data is rarely assessed. Can one determine whether any particular comparative method is typically more appropriate than others by examining comparative data sets? I conducted a meta-analysis of 122 phylogenetic data sets found by searching all papers in JEB, Blackwell Synergy and JSTOR published in 2002-2005 for the purpose of assessing the fit of PCMs. The number of species in these data sets ranged from 9 to 117. I used the Akaike information criterion to compare PCMs, and then fit PCMs to bivariate data sets through REML analysis. Correlation estimates between two traits and bootstrapped confidence intervals of correlations from each model were also compared. For phylogenies of less than one hundred taxa, the Independent Contrast method and the independent, non-phylogenetic models provide the best fit. For bivariate analysis, correlations from different PCMs are qualitatively similar, so actual correlations from real data seem to be robust to the PCM chosen for the analysis. Therefore, researchers might apply the PCM they believe best describes the evolutionary mechanisms underlying their data.
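
    Model comparison in the meta-analysis rests on the Akaike information criterion, AIC = 2k - 2 ln L. A minimal sketch of that comparison is shown below; the model names, log-likelihoods and parameter counts are placeholders, not values from the 122 data sets.

    ```python
    # Sketch of comparing candidate models with AIC = 2k - 2 ln L.
    # Log-likelihoods and parameter counts below are illustrative placeholders.
    models = {
        "independent contrasts": {"loglik": -120.4, "k": 2},
        "non-phylogenetic":      {"loglik": -121.0, "k": 2},
        "another PCM":           {"loglik": -119.8, "k": 3},
    }

    aic = {name: 2 * m["k"] - 2 * m["loglik"] for name, m in models.items()}
    best = min(aic, key=aic.get)

    for name, value in sorted(aic.items(), key=lambda kv: kv[1]):
        print(f"{name:22s} AIC = {value:7.1f}  dAIC = {value - aic[best]:5.1f}")
    ```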

  5. Statistical modeling and extrapolation of carcinogenesis data

    International Nuclear Information System (INIS)

    Krewski, D.; Murdoch, D.; Dewanji, A.

    1986-01-01

    Mathematical models of carcinogenesis are reviewed, including pharmacokinetic models for metabolic activation of carcinogenic substances. Maximum likelihood procedures for fitting these models to epidemiological data are discussed, including situations where the time to tumor occurrence is unobservable. The plausibility of different possible shapes of the dose response curve at low doses is examined, and a robust method for linear extrapolation to low doses is proposed and applied to epidemiological data on radiation carcinogenesis

  6. A general theory for the construction of best-fit correlation equations for multi-dimensioned numerical data

    International Nuclear Information System (INIS)

    Moore, S.E.; Moffat, D.G.

    2007-01-01

    A general theory for the construction of best-fit correlation equations for multi-dimensioned sets of numerical data is presented. This new theory is based on the mathematics of n-dimensional surfaces and goodness-of-fit statistics. It is shown that orthogonal best-fit analytical trend lines for each of the independent parameters of the data can be used to construct an overall best-fit correlation equation that satisfies both physical boundary conditions and goodness-of-fit statistical measurements. Application of the theory is illustrated by fitting a three-parameter set of numerical finite-element maximum-stress data obtained earlier by Dr. Moffat for pressure vessel nozzles and/or piping system branch connections

  7. The Meaning of Goodness-of-Fit Tests: Commentary on "Goodness-of-Fit Assessment of Item Response Theory Models"

    Science.gov (United States)

    Thissen, David

    2013-01-01

    In this commentary, David Thissen states that "Goodness-of-fit assessment for IRT models is maturing; it has come a long way from zero." Thissen then references prior works on "goodness of fit" in the index of Lord and Novick's (1968) classic text; Yen (1984); Drasgow, Levine, Tsien, Williams, and Mead (1995); Chen and…

  8. Assessing a moderating effect and the global fit of a PLS model on online trading

    Directory of Open Access Journals (Sweden)

    Juan J. García-Machado

    2017-12-01

    Full Text Available This paper proposes a PLS model for the study of online trading. Traditional investing has experienced a revolution due to the rise of e-trading services that enable investors to use the Internet to conduct secure trading. On the one hand, model results show that there is a positive, direct and statistically significant relationship between personal outcome expectations, perceived relative advantage, shared vision and economy-based trust with the quality of knowledge; on the other hand, trading frequency and portfolio performance also show this relationship. After including the investor’s income and financial wealth (IFW) as a moderating effect, the PLS model was enhanced, and we found that the interaction term is negative and statistically significant, so higher IFW levels entail a weaker relationship between trading frequency and portfolio performance, and vice-versa. Finally, with regard to the goodness of overall model fit measures, they showed that the model fits according to the SRMR and dG measures, so it is likely that the model is true.

  9. Fitting statistical distributions the generalized lambda distribution and generalized bootstrap methods

    CERN Document Server

    Karian, Zaven A

    2000-01-01

    Throughout the physical and social sciences, researchers face the challenge of fitting statistical distributions to their data. Although the study of statistical modelling has made great strides in recent years, the number and variety of distributions to choose from-all with their own formulas, tables, diagrams, and general properties-continue to create problems. For a specific application, which of the dozens of distributions should one use? What if none of them fit well? Fitting Statistical Distributions helps answer those questions. Focusing on techniques used successfully across many fields, the authors present all of the relevant results related to the Generalized Lambda Distribution (GLD), the Generalized Bootstrap (GB), and Monte Carlo simulation (MC). They provide the tables, algorithms, and computer programs needed for fitting continuous probability distributions to data in a wide variety of circumstances-covering bivariate as well as univariate distributions, and including situations where moments do...

  10. Fit Gap Analysis – The Role of Business Process Reference Models

    Directory of Open Access Journals (Sweden)

    Dejan Pajk

    2013-12-01

    Full Text Available Enterprise resource planning (ERP) systems support solutions for standard business processes such as finance, sales, procurement and warehousing. In order to improve the understandability and efficiency of their implementation, ERP vendors have introduced reference models that describe the processes and underlying structure of an ERP system. To select and successfully implement an ERP system, the capabilities of that system have to be compared with a company’s business needs. Based on this comparison, all of the fits and gaps must be identified and further analysed. This step usually forms part of ERP implementation methodologies and is called fit gap analysis. The paper provides a theoretical overview of methods for applying reference models and describes fit gap analysis processes in detail. The paper’s first contribution is its presentation of a fit gap analysis using standard business process modelling notation. The second contribution is the demonstration of a process-based comparison approach between a supply chain process and an ERP system process reference model. In addition to its theoretical contributions, the results can also be practically applied to projects involving the selection and implementation of ERP systems.

  11. Application of Multilevel Models to Morphometric Data. Part 1. Linear Models and Hypothesis Testing

    Directory of Open Access Journals (Sweden)

    O. Tsybrovskyy

    2003-01-01

    Full Text Available Morphometric data usually have a hierarchical structure (i.e., cells are nested within patients), which should be taken into consideration in the analysis. In recent years, special methods for handling hierarchical data, called multilevel models (MM), as well as corresponding software, have received considerable development. However, there has been no application of these methods to morphometric data yet. In this paper we report our first experience of analyzing karyometric data by means of MLwiN – a dedicated program for multilevel modeling. Our data were obtained from 34 follicular adenomas and 44 follicular carcinomas of the thyroid. We show examples of fitting and interpreting MMs of different complexity, and draw a number of interesting conclusions about the differences in nuclear morphology between follicular thyroid adenomas and carcinomas. We also demonstrate substantial advantages of multilevel models over conventional, single‐level statistics, which have been adopted previously to analyze karyometric data. In addition, some theoretical issues related to MM as well as major statistical software for MM are briefly reviewed.

  12. physical fitness self-related by the elderly and its relationship

    African Journals Online (AJOL)

    CASA

    the majority of the elderly perceived their fitness as good or very good, with this variable being ... the benefits of physical activity (PA) and sport for the elderly, not only physically, but also psychologically ..... Relations of sex, age, perceived ...

  13. Issues and Importance of "Good" Starting Points for Nonlinear Regression for Mathematical Modeling with Maple: Basic Model Fitting to Make Predictions with Oscillating Data

    Science.gov (United States)

    Fox, William

    2012-01-01

    The purpose of our modeling effort is to predict future outcomes. We assume the data collected are both accurate and relatively precise. For our oscillating data, we examined several mathematical modeling forms for predictions. We also examined both ignoring the oscillations as an important feature and including the oscillations as an important…

  14. Adaptive memory: young children show enhanced retention of fitness-related information.

    Science.gov (United States)

    Aslan, Alp; Bäuml, Karl-Heinz T

    2012-01-01

    Evolutionary psychologists propose that human cognition evolved through natural selection to solve adaptive problems related to survival and reproduction, with its ultimate function being the enhancement of reproductive fitness. Following this proposal and the evolutionary-developmental view that ancestral selection pressures operated not only on reproductive adults, but also on pre-reproductive children, the present study examined whether young children show superior memory for information that is processed in terms of its survival value. In two experiments, we found such survival processing to enhance retention in 4- to 10-year-old children, relative to various control conditions that also required deep, meaningful processing but were not related to survival. These results suggest that, already in very young children, survival processing is a special and extraordinarily effective form of memory encoding. The results support the functional-evolutionary proposal that young children's memory is "tuned" to process and retain fitness-related information. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. An imprecise Dirichlet model for Bayesian analysis of failure data including right-censored observations

    International Nuclear Information System (INIS)

    Coolen, F.P.A.

    1997-01-01

    This paper is intended to make researchers in reliability theory aware of a recently introduced Bayesian model with imprecise prior distributions for statistical inference on failure data, that can also be considered as a robust Bayesian model. The model consists of a multinomial distribution with Dirichlet priors, making the approach basically nonparametric. New results for the model are presented, related to right-censored observations, where estimation based on this model is closely related to the product-limit estimator, which is an important statistical method to deal with reliability or survival data including right-censored observations. As for the product-limit estimator, the model considered in this paper aims at not using any information other than that provided by observed data, but our model fits into the robust Bayesian context which has the advantage that all inferences can be based on probabilities or expectations, or bounds for probabilities or expectations. The model uses a finite partition of the time-axis, and as such it is also related to life-tables

  16. Soil physical properties influencing the fitting parameters in Philip and Kostiakov infiltration models

    International Nuclear Information System (INIS)

    Mbagwu, J.S.C.

    1994-05-01

    Among the many models developed for monitoring the infiltration process those of Philip and Kostiakov have been studied in detail because of their simplicity and the ease of estimating their fitting parameters. The important soil physical factors influencing the fitting parameters in these infiltration models are reported in this study. The results of the study show that the single most important soil property affecting the fitting parameters in these models is the effective porosity. 36 refs, 2 figs, 5 tabs

  17. The trend odds model for ordinal data.

    Science.gov (United States)

    Capuano, Ana W; Dawson, Jeffrey D

    2013-06-15

    Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values. We consider a trend odds version of this constrained model, wherein the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc NLMIXED and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical data set is used to illustrate the interpretation of the trend odds model, and we apply this model to a swine influenza example wherein the proportional odds assumption appears to be violated. Copyright © 2012 John Wiley & Sons, Ltd.
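
    The paper fits the trend odds model with SAS Proc NLMIXED. The sketch below shows one plausible parameterization of the idea, a slope that changes linearly across the cut-points of an ordinal logistic model, fitted by maximum likelihood in Python on simulated data. The parameterization and the data are assumptions for illustration, not the authors' exact formulation.

    ```python
    # Sketch of an ordinal model whose slope varies linearly across cut-points
    # (a trend-odds-style relaxation of proportional odds), fitted by ML.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(4)
    n, n_cat = 500, 4
    x = rng.normal(size=n)

    # Simulate from a proportional-odds process for demonstration.
    alpha_true, beta_true = np.array([-1.0, 0.0, 1.0]), 0.8
    cum = expit(alpha_true[None, :] - beta_true * x[:, None])   # P(Y <= j | x)
    probs = np.diff(np.concatenate([np.zeros((n, 1)), cum, np.ones((n, 1))], axis=1))
    y = np.array([rng.choice(n_cat, p=p) for p in probs])

    def neg_log_like(theta):
        alpha = np.sort(theta[:n_cat - 1])              # keep cut-points ordered
        beta, gamma = theta[n_cat - 1], theta[n_cat]
        slopes = beta + gamma * np.arange(n_cat - 1)    # slope varies across cuts
        cum = expit(alpha[None, :] - slopes[None, :] * x[:, None])
        p = np.diff(np.concatenate([np.zeros((n, 1)), cum, np.ones((n, 1))], axis=1))
        return -np.sum(np.log(np.clip(p[np.arange(n), y], 1e-12, None)))

    fit = minimize(neg_log_like, x0=[-1.0, 0.0, 1.0, 0.5, 0.0], method="Nelder-Mead")
    print("alpha, beta, gamma estimates:", fit.x)
    ```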

  18. Improved signal analysis for motional Stark effect data

    International Nuclear Information System (INIS)

    Makowski, M.A.; Allen, S.L.; Ellis, R.; Geer, R.; Jayakumar, R.J.; Moller, J.M.; Rice, B.W.

    2005-01-01

    Nonideal effects in the optical train of the motional Stark effect diagnostic have been modeled using the Mueller matrix formalism. The effects examined are birefringence in the vacuum windows, an imperfect reflective mirror, and signal pollution due to the presence of a circularly polarized light component. Relations for the measured intensity ratio are developed for each case. These relations suggest fitting functions to more accurately model the calibration data. One particular function, termed the tangent offset model, is found to fit the data for all channels better than the currently used tangent slope function. Careful analysis of the calibration data with the fitting functions reveals that a nonideal effect is present in the edge array and is attributed to nonideal performance of a mirror in that system. The result of applying the fitting function to the analysis of our data has been to improve the equilibrium reconstruction
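
    Choosing between candidate fitting functions for the calibration data, such as the tangent slope versus tangent offset forms mentioned above, amounts to a nonlinear curve fit per channel. A hedged sketch with scipy.optimize.curve_fit follows; the exact functional form of the tangent offset model is not specified in the abstract, so the form used here is an assumption.

    ```python
    # Sketch of fitting calibration data with an assumed tangent-plus-offset form.
    import numpy as np
    from scipy.optimize import curve_fit

    def tangent_offset(angle, amplitude, phase, offset):
        """Assumed model: ratio = amplitude * tan(angle - phase) + offset."""
        return amplitude * np.tan(angle - phase) + offset

    rng = np.random.default_rng(5)
    angle = np.linspace(-0.5, 0.5, 40)                 # radians, synthetic scan
    ratio = 1.2 * np.tan(angle - 0.05) + 0.1 + rng.normal(0, 0.01, angle.size)

    popt, pcov = curve_fit(tangent_offset, angle, ratio, p0=[1.0, 0.0, 0.0])
    perr = np.sqrt(np.diag(pcov))
    print("amplitude, phase, offset:", popt, "1-sigma errors:", perr)
    ```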

  19. Using geometry to improve model fitting and experiment design for glacial isostasy

    Science.gov (United States)

    Kachuck, S. B.; Cathles, L. M.

    2017-12-01

    As scientists we routinely deal with models, which are geometric objects at their core - the manifestation of a set of parameters as predictions for comparison with observations. When the number of observations exceeds the number of parameters, the model is a hypersurface (the model manifold) in the space of all possible predictions. The object of parameter fitting is to find the parameters corresponding to the point on the model manifold as close to the vector of observations as possible. But the geometry of the model manifold can make this difficult. By curving, ending abruptly (where, for instance, parameters go to zero or infinity), and by stretching and compressing the parameters together in unexpected directions, it can be difficult to design algorithms that efficiently adjust the parameters. Even at the optimal point on the model manifold, parameters might not be individually resolved well enough to be applied to new contexts. In our context of glacial isostatic adjustment, models of sparse surface observations have a broad spread of sensitivity to mixtures of the earth's viscous structure and the surface distribution of ice over the last glacial cycle. This impedes precise statements about crucial geophysical processes, such as the planet's thermal history or the climates that controlled the ice age. We employ geometric methods developed in the field of systems biology to improve the efficiency of fitting (geodesic accelerated Levenberg-Marquardt) and to identify the maximally informative sources of additional data to make better predictions of sea levels and ice configurations (optimal experiment design). We demonstrate this in particular in reconstructions of the Barents Sea Ice Sheet, where we show that only certain kinds of data from the central Barents have the power to distinguish between proposed models.
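
    The fitting improvements discussed above build on the standard Levenberg-Marquardt treatment of nonlinear least squares. The sketch below shows that baseline using SciPy's MINPACK-backed solver on a toy two-exponential relaxation model, which is only a stand-in and not the authors' glacial isostatic adjustment model; geodesic acceleration itself is not implemented in SciPy and would need a dedicated solver.

    ```python
    # Baseline Levenberg-Marquardt least-squares fit on a toy relaxation model.
    import numpy as np
    from scipy.optimize import least_squares

    t = np.linspace(0, 10, 60)                          # synthetic times
    rng = np.random.default_rng(6)
    y = 3.0 * np.exp(-t / 2.0) + 1.0 * np.exp(-t / 8.0) + rng.normal(0, 0.05, t.size)

    def residuals(p):
        a1, tau1, a2, tau2 = p
        return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) - y

    fit = least_squares(residuals, x0=[1.0, 1.0, 1.0, 5.0], method="lm")
    print("parameters:", fit.x, "cost:", fit.cost)
    ```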

  20. AMS-02 fits dark matter

    Science.gov (United States)

    Balázs, Csaba; Li, Tong

    2016-05-01

    In this work we perform a comprehensive statistical analysis of the AMS-02 electron, positron fluxes and the antiproton-to-proton ratio in the context of a simplified dark matter model. We include known, standard astrophysical sources and a dark matter component in the cosmic ray injection spectra. To predict the AMS-02 observables we use propagation parameters extracted from observed fluxes of heavier nuclei and the low energy part of the AMS-02 data. We assume that the dark matter particle is a Majorana fermion coupling to third generation fermions via a spin-0 mediator, and annihilating to multiple channels at once. The simultaneous presence of various annihilation channels provides the dark matter model with additional flexibility, and this enables us to simultaneously fit all cosmic ray spectra using a simple particle physics model and coherent astrophysical assumptions. Our results indicate that AMS-02 observations are not only consistent with the dark matter hypothesis within the uncertainties, but adding a dark matter contribution improves the fit to the data. Assuming, however, that dark matter is solely responsible for this improvement of the fit, it is difficult to evade the latest CMB limits in this model.

  1. AMS-02 fits dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Balázs, Csaba; Li, Tong [ARC Centre of Excellence for Particle Physics at the Tera-scale,School of Physics and Astronomy, Monash University, Melbourne, Victoria 3800 (Australia)

    2016-05-05

    In this work we perform a comprehensive statistical analysis of the AMS-02 electron, positron fluxes and the antiproton-to-proton ratio in the context of a simplified dark matter model. We include known, standard astrophysical sources and a dark matter component in the cosmic ray injection spectra. To predict the AMS-02 observables we use propagation parameters extracted from observed fluxes of heavier nuclei and the low energy part of the AMS-02 data. We assume that the dark matter particle is a Majorana fermion coupling to third generation fermions via a spin-0 mediator, and annihilating to multiple channels at once. The simultaneous presence of various annihilation channels provides the dark matter model with additional flexibility, and this enables us to simultaneously fit all cosmic ray spectra using a simple particle physics model and coherent astrophysical assumptions. Our results indicate that AMS-02 observations are not only consistent with the dark matter hypothesis within the uncertainties, but adding a dark matter contribution improves the fit to the data. Assuming, however, that dark matter is solely responsible for this improvement of the fit, it is difficult to evade the latest CMB limits in this model.

  2. Theoretically unprejudiced fits to proton scattering

    International Nuclear Information System (INIS)

    Kobos, A.M.; Mackintosh, R.S.

    1979-01-01

    By using a spline interpolation method applied to all components of the proton optical potential we have fitted elastic scattering from 40Ca and from 16O at a range of energies. The potentials are highly oscillatory and we have shown that similar oscillations are found when the spline fitting procedure is applied to pseudo-data generated from potentials of known l-dependence. Moreover, we show how to find an l-independent potential equivalent to one that is l-dependent, and we find that it is oscillatory and that various characteristic features of empirical spline-fit potentials can be explained. Thus, by fitting the data with model-independent, l-independent potentials we have found support for the contention that the nucleon optical potential should be viewed as being l-dependent. This work may be regarded as an example of the kind of physical information that can be gained by pursuing exact fits to proton elastic scattering data

  3. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
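
    The continuity constraints between polynomial segments can be built directly into the design matrix, which is the spirit of the implicitly constrained reparameterization described above. A minimal sketch using a truncated power basis for a fixed-knot quadratic spline follows; the knots, degree and data are illustrative, and the random-effects part of the mixed model is omitted.

    ```python
    # Sketch of a fixed-knot regression spline via a truncated power basis,
    # which encodes continuity/smoothness at the knots in the basis itself.
    import numpy as np

    rng = np.random.default_rng(7)
    t = np.sort(rng.uniform(0, 10, 200))
    y = np.sin(t) + rng.normal(0, 0.2, t.size)
    knots = [2.5, 5.0, 7.5]
    degree = 2

    # Columns: 1, t, t^2, then (t - knot)_+^degree for each knot.
    X = np.column_stack(
        [t**d for d in range(degree + 1)]
        + [np.clip(t - k, 0, None) ** degree for k in knots]
    )
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("spline coefficients:", np.round(coef, 3))
    ```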

  4. Pulsar Polar Cap and Slot Gap Models: Confronting Fermi Data

    Science.gov (United States)

    Harding, Alice K.

    2012-01-01

    Rotation-powered pulsars are excellent laboratories for studying particle acceleration as well as fundamental physics of strong gravity, strong magnetic fields and relativity. I will review acceleration and gamma-ray emission from the pulsar polar cap and slot gap. Predictions of these models can be tested with the data set on pulsars collected by the Large Area Telescope on the Fermi Gamma-Ray Telescope over the last four years, using both detailed light curve fitting and population synthesis.

  5. NHL and RCGA Based Multi-Relational Fuzzy Cognitive Map Modeling for Complex Systems

    Directory of Open Access Journals (Sweden)

    Zhen Peng

    2015-11-01

    Full Text Available In order to model complex systems with multiple dimensions and granularities, this paper first proposes a kind of multi-relational Fuzzy Cognitive Map (FCM) to simulate the multi-relational system, together with an automatic construction algorithm integrating Nonlinear Hebbian Learning (NHL) and a Real Code Genetic Algorithm (RCGA). The multi-relational FCM is suited to modeling complex systems with multiple dimensions and granularities. The automatic construction algorithm can learn the multi-relational FCM from multi-relational data resources, eliminating human intervention. The Multi-Relational Data Mining (MRDM) algorithm integrates multi-instance-oriented NHL and the RCGA of the FCM. NHL is extended to mine the causal relationships between a coarse-granularity concept and its fine-granularity concepts, driven by multiple instances in the multi-relational system. The RCGA is used to establish a high-quality high-level FCM driven by data. The multi-relational FCM and the integrating algorithm have been applied to the complex system of Mutagenesis. The experiment demonstrates not only that they achieve better classification accuracy, but also that they reveal the causal relationships among the concepts of the system.

  6. Mr-Moose: An advanced SED-fitting tool for heterogeneous multi-wavelength datasets

    Science.gov (United States)

    Drouart, G.; Falkendal, T.

    2018-04-01

    We present the public release of Mr-Moose, a fitting procedure that is able to perform multi-wavelength and multi-object spectral energy distribution (SED) fitting in a Bayesian framework. This procedure is able to handle a large variety of cases, from an isolated source to blended multi-component sources from a heterogeneous dataset (i.e. a range of observation sensitivities and spectral/spatial resolutions). Furthermore, Mr-Moose handles upper limits during the fitting process in a continuous way, allowing models to become gradually less probable as upper limits are approached. The aim is to propose a simple-to-use yet highly versatile fitting tool for handling increasing source complexity when combining multi-wavelength datasets with fully customisable filter/model databases. The complete control of the user is one advantage, which avoids the traditional problems related to the "black box" effect, where parameter or model tunings are impossible and can lead to overfitting and/or over-interpretation of the results. Also, while a basic knowledge of Python and statistics is required, the code aims to be sufficiently user-friendly for non-experts. We demonstrate the procedure on three cases: two artificially-generated datasets and a previous result from the literature. In particular, the most complex case (inspired by a real source, combining Herschel, ALMA and VLA data) in the context of extragalactic SED fitting makes Mr-Moose a particularly attractive SED fitting tool when dealing with partially blended sources, without the need for data deconvolution.
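
    A common way to make upper limits "continuous" in an SED likelihood is to let each limit contribute the integral of a Gaussian up to the quoted limit, so that models become gradually less probable as they approach and exceed it. The sketch below illustrates that idea; it is not necessarily the exact scheme implemented in Mr-Moose, and the numbers are placeholders.

    ```python
    # Sketch of folding flux upper limits into an SED likelihood continuously:
    # detections contribute Gaussian terms, limits contribute an erf penalty.
    import numpy as np
    from scipy.special import erf

    def log_likelihood(model_flux, obs_flux, obs_err, is_upper_limit):
        detected = ~is_upper_limit
        ll = -0.5 * np.sum(((obs_flux[detected] - model_flux[detected])
                            / obs_err[detected]) ** 2)
        # For limits, integrate the Gaussian up to the quoted limit value.
        z = (obs_flux[is_upper_limit] - model_flux[is_upper_limit]) \
            / (np.sqrt(2.0) * obs_err[is_upper_limit])
        ll += np.sum(np.log(np.clip(0.5 * (1.0 + erf(z)), 1e-300, None)))
        return ll

    # Toy call: three detections and one upper limit (placeholder numbers).
    obs = np.array([10.0, 5.0, 2.0, 0.9])
    err = np.array([1.0, 0.5, 0.2, 0.3])
    limits = np.array([False, False, False, True])
    print(log_likelihood(np.array([9.5, 5.2, 2.1, 0.5]), obs, err, limits))
    ```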

  7. The Relationship between Preservice Teachers Health-Related Fitness and Movement Competency in Gymnastics

    Science.gov (United States)

    Webster, Collin Andrew; Webster, Liana; Cribbs, Jason; Wellborn, Benjamin; Lineberger, Matthew Blake; Doan, Rob

    2014-01-01

    The current National Initial Standards for Physical Education Teacher Education state that preservice teachers should achieve and maintain a level of health-related fitness consistent with that expected of K12 learners. However, little research has addressed the relevance of teacher fitness to effective physical education teaching. This study…

  8. Galactic cosmic-ray model in the light of AMS-02 nuclei data

    Science.gov (United States)

    Niu, Jia-Shu; Li, Tianjun

    2018-01-01

    Cosmic ray (CR) physics has entered a precision-driven era. With the latest AMS-02 nuclei data (boron-to-carbon ratio, proton flux, helium flux, and antiproton-to-proton ratio), we perform a global fitting and constrain the primary source and propagation parameters of cosmic rays in the Milky Way by considering 3 schemes with different data sets (with and without p̄/p data) and different propagation models (diffusion-reacceleration and diffusion-reacceleration-convection models). We find that the data set with p̄/p data can remove the degeneracy between the propagation parameters effectively, and it favors the model with a very small value of convection (or disfavors the model with convection). Separate injection spectrum parameters are used for protons and the other nucleus species, which reveal the different breaks and slopes among them. Moreover, the helium abundance, antiproton production cross sections, and solar modulation are parametrized in our global fitting. Benefiting from the self-consistency of the new data set, the fitting results show a little bias, and thus the disadvantages and limitations of the existing propagation models become apparent. Comparing the best-fit results for the local interstellar spectra (ϕ = 0) with the VOYAGER-1 data, we find that the primary sources or propagation mechanisms should be different between protons and helium (or other heavier nucleus species). Thus, how to explain these results properly is an interesting and challenging question.

  9. Fits combining hyperon semileptonic decays and magnetic moments and CVC

    International Nuclear Information System (INIS)

    Bohm, A.; Kielanowski, P.

    1982-10-01

    We have performed a test of CVC by determining the baryon charges and magnetic moments from the hyperon semileptonic data. CVC was then applied in order to make a joint fit of all baryon semileptonic decay data and baryon magnetic moments for the spectrum generating group (SG) model as well as for the conventional model (Cabibbo, with magnetic moments in nuclear magnetons). The SG model gives a very good fit with χ²/n_D = 25/20 (approximately 21% C.L.), whereas the conventional model gives a fit with χ²/n_D = 244/20.

  10. A CAD System for Evaluating Footwear Fit

    Science.gov (United States)

    Savadkoohi, Bita Ture; de Amicis, Raffaele

    With the great growth in footwear demand, the footwear manufacturing industry, to achieve commercial success, must be able to provide footwear that fulfills consumers' requirements better than its competitors. Accurate fitting of shoes is an important factor in comfort and functionality. Footwear fitting relied on manual measurement for a long time, but the development of 3D acquisition devices and the advent of powerful 3D visualization and modeling techniques for automatically analyzing, searching and interpreting the models have now made automatic determination of different foot dimensions feasible. In this paper, we propose an approach for finding the footwear fit within a shoe last database. We first properly align the 3D models using "Weighted" Principal Component Analysis (WPCA). After solving the alignment problem, we use an efficient algorithm for cutting the 3D model in order to find the footwear fit from the shoe last database.

  11. Statistical topography of fitness landscapes

    OpenAIRE

    Franke, Jasper

    2011-01-01

    Fitness landscapes are generalized energy landscapes that play an important conceptual role in evolutionary biology. These landscapes provide a relation between the genetic configuration of an organism and that organism’s adaptive properties. In this work, global topographical features of these fitness landscapes are investigated using theoretical models. The resulting predictions are compared to empirical landscapes. It is shown that these landscapes allow, at least with respe...

  12. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-09-28

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
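
    The localized straight-line fit near short-circuit current, viewed as a linear regression, yields Isc and a fit-only uncertainty from the regression covariance. A minimal sketch on synthetic I-V data follows; the data window and the diode-like curve are assumptions, and the model-discrepancy contribution discussed above is deliberately not captured by this uncertainty.

    ```python
    # Sketch: estimate Isc by a straight-line regression over I-V points near
    # V = 0 and propagate the fit covariance into an intercept uncertainty.
    import numpy as np

    rng = np.random.default_rng(8)
    v = np.linspace(0.0, 0.6, 200)                       # volts, synthetic sweep
    i = 9.0 - 0.5 * v - 3.4e-10 * np.expm1(v / 0.025) + rng.normal(0, 0.01, v.size)

    window = v < 0.1                                     # assumed local window
    A = np.column_stack([np.ones(window.sum()), v[window]])
    coef, residuals, *_ = np.linalg.lstsq(A, i[window], rcond=None)

    # Covariance of the least-squares estimate: sigma^2 (A^T A)^{-1}.
    dof = window.sum() - 2
    sigma2 = residuals[0] / dof
    cov = sigma2 * np.linalg.inv(A.T @ A)

    isc, isc_unc = coef[0], np.sqrt(cov[0, 0])
    print(f"Isc = {isc:.3f} A +/- {isc_unc:.3f} A (fit-only uncertainty)")
    ```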

  13. In Search of Optimal Cognitive Diagnostic Model(s) for ESL Grammar Test Data

    Science.gov (United States)

    Yi, Yeon-Sook

    2017-01-01

    This study compares five cognitive diagnostic models in search of optimal one(s) for English as a Second Language grammar test data. Using a unified modeling framework that can represent specific models with proper constraints, the article first fit the full model (the log-linear cognitive diagnostic model, LCDM) and investigated which model…

  14. Code ''Repol'' to fit experimental data with a polynomial and its graphics plotting

    International Nuclear Information System (INIS)

    Travesi, A.; Romero, L.

    1983-01-01

    The ''Repol'' code performs the fitting of a set of experimental data with a polynomial of mth degree (max. 10), using the least squares criterion. Further, it presents the graphic plotting of the fitted polynomial, in the appropriate coordinate axes system, on a plotter. An additional option also allows the graphic plotting of the experimental data used for the fit. The data necessary to execute this code are requested from the operator on the screen, in an iterative way, through a screen-operator dialogue, and the values are entered through the keyboard. This code is written in Fortran IV and, because of its structured programming in subroutine blocks, can be adapted to any computer with a graphic screen and keyboard terminal, with a serially connected plotter whose software has the Hewlett Packard ''Graphics 1000''. (author)
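
    The Repol workflow, a least-squares polynomial fit of degree m ≤ 10 plus a plot of the fitted curve and, optionally, the data, can be sketched in a few lines of modern Python; the degree and the data points below are placeholders.

    ```python
    # Sketch of the Repol workflow in modern terms: least-squares polynomial fit
    # (degree m <= 10) followed by a plot of the fit and the data points.
    import numpy as np
    import matplotlib.pyplot as plt

    x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
    y = np.array([1.02, 1.58, 2.71, 4.44, 7.30, 12.1, 20.2])   # fake measurements
    degree = 3                                                 # m-th degree, m <= 10

    coeffs = np.polyfit(x, y, degree)          # least squares criterion
    poly = np.poly1d(coeffs)

    xx = np.linspace(x.min(), x.max(), 200)
    plt.plot(xx, poly(xx), label=f"degree-{degree} fit")
    plt.plot(x, y, "o", label="experimental data")             # optional overlay
    plt.xlabel("x"); plt.ylabel("y"); plt.legend()
    plt.savefig("repol_fit.png")
    ```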

  15. GRace: a MATLAB-based application for fitting the discrimination-association model.

    Science.gov (United States)

    Stefanutti, Luca; Vianello, Michelangelo; Anselmi, Pasquale; Robusto, Egidio

    2014-10-28

    The Implicit Association Test (IAT) is a computerized two-choice discrimination task in which stimuli have to be categorized as belonging to target categories or attribute categories by pressing, as quickly and accurately as possible, one of two response keys. The discrimination association model has been recently proposed for the analysis of reaction time and accuracy of an individual respondent to the IAT. The model disentangles the influences of three qualitatively different components on the responses to the IAT: stimuli discrimination, automatic association, and termination criterion. The article presents General Race (GRace), a MATLAB-based application for fitting the discrimination association model to IAT data. GRace has been developed for Windows as a standalone application. It is user-friendly and does not require any programming experience. The use of GRace is illustrated on the data of a Coca Cola-Pepsi Cola IAT, and the results of the analysis are interpreted and discussed.

  16. Invited commentary: Lost in estimation--searching for alternatives to markov chains to fit complex Bayesian models.

    Science.gov (United States)

    Molitor, John

    2012-03-01

    Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.

  17. Tanning Shade Gradations of Models in Mainstream Fitness and Muscle Enthusiast Magazines: Implications for Skin Cancer Prevention in Men.

    Science.gov (United States)

    Basch, Corey H; Hillyer, Grace Clarke; Ethan, Danna; Berdnik, Alyssa; Basch, Charles E

    2015-07-01

    Tanned skin has been associated with perceptions of fitness and social desirability. Portrayal of models in magazines may reflect and perpetuate these perceptions. Limited research has investigated tanning shade gradations of models in men's versus women's fitness and muscle enthusiast magazines. Such findings are relevant in light of increased incidence and prevalence of melanoma in the United States. This study evaluated and compared tanning shade gradations of adult Caucasian male and female model images in mainstream fitness and muscle enthusiast magazines. Sixty-nine U.S. magazine issues (spring and summer, 2013) were utilized. Two independent reviewers rated tanning shade gradations of adult Caucasian male and female model images on magazines' covers, advertisements, and feature articles. Shade gradations were assessed using stock photographs of Caucasian models with varying levels of tanned skin on an 8-shade scale. A total of 4,683 images were evaluated. Darkest tanning shades were found among males in muscle enthusiast magazines and lightest among females in women's mainstream fitness magazines. By gender, male model images were 54% more likely to portray a darker tanning shade. In this study, images in men's (vs. women's) fitness and muscle enthusiast magazines portrayed Caucasian models with darker skin shades. Despite these magazines' fitness-related messages, pro-tanning images may promote attitudes and behaviors associated with higher skin cancer risk. To date, this is the first study to explore tanning shades in men's magazines of these genres. Further research is necessary to identify effects of exposure to these images among male readers. © The Author(s) 2014.

  18. Bringing Health and Fitness Data Together for Connected Health Care: Mobile Apps as Enablers of Interoperability.

    Science.gov (United States)

    Gay, Valerie; Leijdekkers, Peter

    2015-11-18

    A transformation is underway regarding how we deal with our health. Mobile devices make it possible to have continuous access to personal health information. Wearable devices, such as Fitbit and Apple's smartwatch, can collect data continuously and provide insights into our health and fitness. However, lack of interoperability and the presence of data silos prevent users and health professionals from getting an integrated view of health and fitness data. To provide better health outcomes, a complete picture is needed which combines informal health and fitness data collected by the user together with official health records collected by health professionals. Mobile apps are well positioned to play an important role in the aggregation since they can tap into these official and informal health and data silos. The objective of this paper is to demonstrate that a mobile app can be used to aggregate health and fitness data and can enable interoperability. It discusses various technical interoperability challenges encountered while integrating data into one place. For 8 years, we have worked with third-party partners, including wearable device manufacturers, electronic health record providers, and app developers, to connect an Android app to their (wearable) devices, back-end servers, and systems. The result of this research is a health and fitness app called myFitnessCompanion, which enables users to aggregate their data in one place. Over 6000 users use the app worldwide to aggregate their health and fitness data. It demonstrates that mobile apps can be used to enable interoperability. Challenges encountered in the research process included the different wireless protocols and standards used to communicate with wireless devices, the diversity of security and authorization protocols used to be able to exchange data with servers, and lack of standards usage, such as Health Level Seven, for medical information exchange. By limiting the negative effects of health data silos

  19. Predictors of "Liking" Three Types of Health and Fitness-Related Content on Social Media: A Cross-Sectional Study.

    Science.gov (United States)

    Carrotte, Elise R; Vella, Alyce M; Lim, Megan S C

    2015-08-21

    Adolescence and young adulthood are key periods for developing norms related to health behaviors and body image, and social media can influence these norms. Social media is saturated with content related to dieting, fitness, and health. Health and fitness-related social media content has received significant media attention for often containing objectifying and inaccurate health messages. Limited research has identified problematic features of such content, including stigmatizing language around weight, portraying guilt-related messages regarding food, and praising thinness. However, no research has identified who is "liking" or "following" (ie, consuming) such content. This exploratory study aimed to identify demographics, mental health, and substance use-related behaviors that predicted consuming 3 types of health and fitness-related social media content-weight loss/fitness motivation pages (ie, "fitspiration"), detox/cleanse pages, and diet/fitness plan pages-among young social media users. Participants (N=1001; age: median 21.06, IQR 17.64-24.64; female: 723/1001, 72.23%) completed a cross-sectional 112-question online survey aimed at social media users aged between 15-29 years residing in Victoria, Australia. Logistic regression was used to determine which characteristics predicted consuming the 3 types of health and fitness-related social media content. A total of 378 (37.76%) participants reported consuming at least 1 of the 3 types of health and fitness-related social media content: 308 (30.77%) fitspiration pages, 145 (14.49%) detox pages, and 235 (23.48%) diet/fitness plan pages. Of the health and fitness-related social media content consumers, 85.7% (324/378) identified as female and 44.8% (324/723) of all female participants consumed at least 1 type of health and fitness-related social media content. Predictors of consuming at least one type of health and fitness-related social media content in univariable analysis included female gender (OR 3.5, 95% CI

  20. Relative Age Effect: Relationship between Anthropometric and Fitness Skills in Youth Soccer

    Directory of Open Access Journals (Sweden)

    Aristotelis GIOLDASIS

    2015-12-01

    Full Text Available The aim of the study was to determine the relationship between anthropometric and fitness skills in youth soccer players according to their relative age. The existence of a relative age effect was also examined. Anthropometric as well as fitness variables such as height, weight, BMI, body mass, flexibility, balance, reaction time, jumping ability, and endurance of the lower limb were assessed in 347 amateur young players. Participants’ age ranged from 9 to 16 (M = 12.43, SD = 2.17). Analyses of variance indicated many significant differences among players of different birth quartiles (from P < .001 to P < .05) for all the skills that were examined. The chi-square test conducted to assess the distribution of players showed that for all four age groups no statistically significant difference was found regarding the birth quartile of players. In countries where training groups include two different age categories, anthropometric and fitness differences due to the relative age effect are heightened. However, physical and physiological variables are inaccurate in predicting later success of players. Thus, talent identification systems should provide equal opportunities for talented but relatively younger players. An on-going talent identification process using a multidimensional evaluation form including technical, physiological, physical, tactical, and psychological parameters is suggested.

  1. A person fit test for IRT models for polytomous items

    NARCIS (Netherlands)

    Glas, Cornelis A.W.; Dagohoy, A.V.

    2007-01-01

    A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability

  2. Econometric modelling of risk adverse behaviours of entrepreneurs in the provision of house fittings in China

    Directory of Open Access Journals (Sweden)

    Rita Yi Man Li

    2012-03-01

    Full Text Available Entrepreneurs have always borne the risk of running their business. They reap a profit in return for their risk taking and work. Housing developers are no different. In many countries, such as Australia, the United Kingdom and the United States, they interpret the tastes of the buyers and provide the dwellings they develop with basic fittings such as floor and wall coverings, bathroom fittings and kitchen cupboards. In mainland China, however, in most developments, units or houses are sold without floor or wall coverings, kitchen or bathroom fittings. What is the motive behind this choice? This paper analyses the factors affecting housing developers' decisions to provide fittings, based on 1701 housing developments in Hangzhou, Chongqing and Hangzhou, using a Probit model. The results show that developers build a higher proportion of bare units in mainland China when: (1) there is a shortage of housing; and (2) land costs are high, so that the comparative costs of providing fittings become relatively low.
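    A Probit regression of a binary decision on market covariates, as used above, can be sketched as follows; the predictor names and all values are hypothetical stand-ins for the paper's housing-shortage and land-cost variables.

```python
# Hedged sketch of a Probit model for "provides fittings" (1) vs "bare unit" (0).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1701
df = pd.DataFrame({
    "provides_fittings": rng.binomial(1, 0.4, n),
    "housing_shortage":  rng.normal(0, 1, n),   # e.g. a standardized supply-demand index
    "land_cost":         rng.normal(0, 1, n),   # e.g. standardized land price
})

X = sm.add_constant(df[["housing_shortage", "land_cost"]])
probit_fit = sm.Probit(df["provides_fittings"], X).fit(disp=False)
print(probit_fit.summary())
# Marginal effects are usually easier to interpret than raw Probit coefficients.
print(probit_fit.get_margeff().summary())
```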

  3. The lz(p)* Person-Fit Statistic in an Unfolding Model Context

    NARCIS (Netherlands)

    Tendeiro, Jorge N.

    2017-01-01

    Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded

  4. Automation of reverse engineering process in aircraft modeling and related optimization problems

    Science.gov (United States)

    Li, W.; Swetits, J.

    1994-01-01

    During 1994, the engineering problems in aircraft modeling were studied. The initial concern was to obtain a surface model with desirable geometric characteristics. Much of the effort during the first half of the year was devoted to finding an efficient way of solving a computationally difficult optimization model. Since the smoothing technique in the proposal 'Surface Modeling and Optimization Studies of Aerodynamic Configurations' requires solutions of a sequence of large-scale quadratic programming problems, it is important to design algorithms that can solve each quadratic program in a few iterations. This research led to three papers by Dr. W. Li, which were submitted to SIAM Journal on Optimization and Mathematical Programming. Two of these papers have been accepted for publication. Even though significant progress was made during this phase of research and the computation time was reduced from 30 min. to 2 min. for a sample problem, it was not good enough for on-line processing of digitized data points. After discussion with Dr. Robert E. Smith Jr., it was decided not to enforce shape constraints, in order to simplify the model. As a consequence, P. Dierckx's nonparametric spline fitting approach was adopted, in which there is only one control parameter for the fitting process, the error tolerance. At the same time, the surface modeling software developed by Imageware was tested. Research indicated that a substantially improved fit of digitized data points can be achieved if a proper parameterization of the spline surface is chosen. A winning strategy is to combine Dierckx's surface fitting with a natural parameterization for aircraft parts. The report consists of 4 chapters. Chapter 1 provides an overview of reverse engineering related to aircraft modeling and some preliminary findings of the effort in the second half of the year. Chapters 2-4 are the research results by Dr. W. Li on penalty functions and conjugate gradient methods for
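    Dierckx's smoothing-spline surface fitting, with a single error-tolerance control parameter, is available today through SciPy's FITPACK wrappers. The sketch below fits a surface to synthetic "digitized" points; the test function, noise level, and smoothing value are all assumptions for illustration.

```python
# Smoothing bivariate spline fit (Dierckx/FITPACK) with one control parameter s.
import numpy as np
from scipy import interpolate

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 400)
y = rng.uniform(0, 1, 400)
z = np.sin(2 * np.pi * x) * np.cos(np.pi * y) + rng.normal(0, 0.01, 400)

# s trades closeness of fit against smoothness (roughly n * sigma**2 here).
tck = interpolate.bisplrep(x, y, z, s=400 * 0.01**2)

# Evaluate the fitted spline surface on a regular grid.
xi = np.linspace(0, 1, 50)
yi = np.linspace(0, 1, 50)
zi = interpolate.bisplev(xi, yi, tck)
print(zi.shape)  # (50, 50)
```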

  5. Comprehensive ecosystem model-data synthesis using multiple data sets at two temperate forest free-air CO2 enrichment experiments: Model performance at ambient CO2 concentration

    Science.gov (United States)

    Walker, Anthony P.; Hanson, Paul J.; De Kauwe, Martin G.; Medlyn, Belinda E.; Zaehle, Sönke; Asao, Shinichi; Dietze, Michael; Hickler, Thomas; Huntingford, Chris; Iversen, Colleen M.; Jain, Atul; Lomas, Mark; Luo, Yiqi; McCarthy, Heather; Parton, William J.; Prentice, I. Colin; Thornton, Peter E.; Wang, Shusen; Wang, Ying-Ping; Warlind, David; Weng, Ensheng; Warren, Jeffrey M.; Woodward, F. Ian; Oren, Ram; Norby, Richard J.

    2014-05-01

    Free-air CO2 enrichment (FACE) experiments provide a remarkable wealth of data which can be used to evaluate and improve terrestrial ecosystem models (TEMs). In the FACE model-data synthesis project, 11 TEMs were applied to two decadelong FACE experiments in temperate forests of the southeastern U.S.—the evergreen Duke Forest and the deciduous Oak Ridge Forest. In this baseline paper, we demonstrate our approach to model-data synthesis by evaluating the models' ability to reproduce observed net primary productivity (NPP), transpiration, and leaf area index (LAI) in ambient CO2 treatments. Model outputs were compared against observations using a range of goodness-of-fit statistics. Many models simulated annual NPP and transpiration within observed uncertainty. We demonstrate, however, that high goodness-of-fit values do not necessarily indicate a successful model, because simulation accuracy may be achieved through compensating biases in component variables. For example, transpiration accuracy was sometimes achieved with compensating biases in leaf area index and transpiration per unit leaf area. Our approach to model-data synthesis therefore goes beyond goodness-of-fit to investigate the success of alternative representations of component processes. Here we demonstrate this approach by comparing competing model hypotheses determining peak LAI. Of three alternative hypotheses—(1) optimization to maximize carbon export, (2) increasing specific leaf area with canopy depth, and (3) the pipe model—the pipe model produced peak LAI closest to the observations. This example illustrates how data sets from intensive field experiments such as FACE can be used to reduce model uncertainty despite compensating biases by evaluating individual model assumptions.
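    The warning above about compensating biases can be made concrete with a toy calculation: a model may reproduce total transpiration while misestimating both of its components. The numbers below are invented purely to illustrate the arithmetic, not taken from the FACE synthesis.

```python
# Transpiration = LAI * transpiration per unit leaf area, so component biases
# of opposite sign can cancel in the aggregate goodness-of-fit.
obs_lai, obs_t_per_la = 5.0, 80.0
obs_transpiration = obs_lai * obs_t_per_la     # 400

mod_lai, mod_t_per_la = 6.5, 61.5              # biased components...
mod_transpiration = mod_lai * mod_t_per_la     # ...that still give ~400

def pct_bias(model, observed):
    return 100.0 * (model - observed) / observed

print(f"Transpiration bias: {pct_bias(mod_transpiration, obs_transpiration):+.1f}%")
print(f"LAI bias:           {pct_bias(mod_lai, obs_lai):+.1f}%")
print(f"Per-leaf-area bias: {pct_bias(mod_t_per_la, obs_t_per_la):+.1f}%")
```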

  6. IRI related data and model services at NSSDC

    Science.gov (United States)

    Bilitza, D.; Papitashvili, N.; King, J.

    NASA's National Space Science Data Center (NSSDC) provides internet access to a large number of space physics data sets and models. We will review and explain the different products and services that might be of interest to the IRI community. Data can be obtained directly through anonymous ftp or through the SPyCAT WWW interface to a large volume of space physics data on juke-box type mass storage devices. A newly developed WWW system, the ATMOWeb, provides browse and sub-setting capabilities for selected atmospheric and thermospheric data. NSSDC maintains an archive of space physics models that includes a subset of ionospheric models. The model software can be retrieved via anonymous ftp. A selection of the most frequently requested models can be run on-line through special WWW interfaces. Currently supported models include the International Reference Ionosphere (IRI), the Mass Spectrometer and Incoherent Scatter (MSIS) atmospheric model, the International Geomagnetic Reference Field (IGRF) and the AE-8/AP-8 radiation belt models. In this article special emphasis will be given to the IRI interface and its various input/output options. Several new options and a Java-based plotting capability were recently added to the Web interface.

  7. Measured PET Data Characterization with the Negative Binomial Distribution Model.

    Science.gov (United States)

    Santarelli, Maria Filomena; Positano, Vincenzo; Landini, Luigi

    2017-01-01

    An accurate statistical model of PET measurements is a prerequisite for correct image reconstruction when using statistical image reconstruction algorithms, or when pre-filtering operations must be performed. Although radioactive decay follows a Poisson distribution, deviation from Poisson statistics occurs in projection data prior to reconstruction due to physical effects, measurement errors, and correction of scatter and random coincidences. Modelling projection data can aid in understanding the statistical nature of the data in order to develop efficient processing methods and to reduce noise. This paper outlines the statistical behaviour of measured emission data by evaluating the goodness of fit of the negative binomial (NB) distribution model to PET data for a wide range of emission activity values. An NB distribution model is characterized by the mean of the data and the dispersion parameter α that describes the deviation from Poisson statistics. Monte Carlo simulations were performed to evaluate (a) the performance of the dispersion parameter α estimator and (b) the goodness of fit of the NB model for a wide range of activity values. We focused on the effect produced by correction for random and scatter events in the projection (sinogram) domain, due to their importance in quantitative analysis of PET data. The analysis developed herein allowed us to assess the accuracy of the NB distribution model in fitting corrected sinogram data, and to evaluate the sensitivity of the dispersion parameter α for quantifying deviation from Poisson statistics. The sinogram ROI-based analysis demonstrated that deviation of the measured data from Poisson statistics can be quantitatively characterized by the dispersion parameter α under any noise conditions and corrections.
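    An NB fit with an explicit dispersion parameter can be sketched as a maximum-likelihood problem, using the common parameterization Var = mu + alpha * mu**2 (alpha -> 0 recovers Poisson). The counts below are simulated, not sinogram data, and the parameterization is one standard textbook choice rather than necessarily the one used in the paper.

```python
# MLE of (mu, alpha) for a negative binomial model of count data.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
true_mu, true_alpha = 20.0, 0.15
n_param = 1.0 / true_alpha
p_param = n_param / (n_param + true_mu)
counts = stats.nbinom.rvs(n_param, p_param, size=5000, random_state=rng)

def neg_loglik(params):
    mu, alpha = params
    if mu <= 0 or alpha <= 0:
        return np.inf
    n = 1.0 / alpha
    p = n / (n + mu)
    return -np.sum(stats.nbinom.logpmf(counts, n, p))

res = optimize.minimize(neg_loglik, x0=[counts.mean(), 0.5], method="Nelder-Mead")
mu_hat, alpha_hat = res.x
print(f"mu ~ {mu_hat:.2f}, alpha ~ {alpha_hat:.3f}  (alpha near 0 means Poisson-like)")
```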

  8. Categorical marginal models: quite extensive package for the estimation of marginal models for categorical data

    OpenAIRE

    Wicher Bergsma; Andries van der Ark

    2015-01-01

    A package accompanying the book Marginal Models for Dependent, Clustered, and Longitudinal Categorical Data by Bergsma, Croon, and Hagenaars (2009). Its purpose is the fitting and testing of marginal models.

  9. xMDFF: molecular dynamics flexible fitting of low-resolution X-ray structures

    International Nuclear Information System (INIS)

    McGreevy, Ryan; Singharoy, Abhishek; Li, Qufei; Zhang, Jingfen; Xu, Dong; Perozo, Eduardo; Schulten, Klaus

    2014-01-01

    A new real-space refinement method for low-resolution X-ray crystallography is presented. The method is based on the molecular dynamics flexible fitting protocol targeted at addressing large-scale deformations of the search model to achieve refinement with minimal manual intervention. An explanation of the method is provided, augmented by results from the refinement of both synthetic and experimental low-resolution data, including an independent electrophysiological verification of the xMDFF-refined crystal structure of a voltage-sensor protein. X-ray crystallography remains the most dominant method for solving atomic structures. However, for relatively large systems, the availability of only medium-to-low-resolution diffraction data often limits the determination of all-atom details. A new molecular dynamics flexible fitting (MDFF)-based approach, xMDFF, for determining structures from such low-resolution crystallographic data is reported. xMDFF employs a real-space refinement scheme that flexibly fits atomic models into an iteratively updating electron-density map. It addresses significant large-scale deformations of the initial model to fit the low-resolution density, as tested with synthetic low-resolution maps of d-ribose-binding protein. xMDFF has been successfully applied to re-refine six low-resolution protein structures of varying sizes that had already been submitted to the Protein Data Bank. Finally, via systematic refinement of a series of data from 3.6 to 7 Å resolution, xMDFF refinements together with electrophysiology experiments were used to validate the first all-atom structure of the voltage-sensing protein Ci-VSP

  10. xMDFF: molecular dynamics flexible fitting of low-resolution X-ray structures

    Energy Technology Data Exchange (ETDEWEB)

    McGreevy, Ryan; Singharoy, Abhishek [University of Illinois at Urbana-Champaign, Urbana, IL 61801 (United States); Li, Qufei [The University of Chicago, Chicago, IL 60637 (United States); Zhang, Jingfen; Xu, Dong [University of Missouri, Columbia, MO 65211 (United States); Perozo, Eduardo [The University of Chicago, Chicago, IL 60637 (United States); Schulten, Klaus, E-mail: kschulte@ks.uiuc.edu [University of Illinois at Urbana-Champaign, Urbana, IL 61801 (United States); University of Illinois at Urbana-Champaign, Urbana, IL 61801 (United States)

    2014-09-01

    A new real-space refinement method for low-resolution X-ray crystallography is presented. The method is based on the molecular dynamics flexible fitting protocol targeted at addressing large-scale deformations of the search model to achieve refinement with minimal manual intervention. An explanation of the method is provided, augmented by results from the refinement of both synthetic and experimental low-resolution data, including an independent electrophysiological verification of the xMDFF-refined crystal structure of a voltage-sensor protein. X-ray crystallography remains the most dominant method for solving atomic structures. However, for relatively large systems, the availability of only medium-to-low-resolution diffraction data often limits the determination of all-atom details. A new molecular dynamics flexible fitting (MDFF)-based approach, xMDFF, for determining structures from such low-resolution crystallographic data is reported. xMDFF employs a real-space refinement scheme that flexibly fits atomic models into an iteratively updating electron-density map. It addresses significant large-scale deformations of the initial model to fit the low-resolution density, as tested with synthetic low-resolution maps of d-ribose-binding protein. xMDFF has been successfully applied to re-refine six low-resolution protein structures of varying sizes that had already been submitted to the Protein Data Bank. Finally, via systematic refinement of a series of data from 3.6 to 7 Å resolution, xMDFF refinements together with electrophysiology experiments were used to validate the first all-atom structure of the voltage-sensing protein Ci-VSP.

  11. A Data Flow Model to Solve the Data Distribution Changing Problem in Machine Learning

    Directory of Open Access Journals (Sweden)

    Shang Bo-Wen

    2016-01-01

    Full Text Available Continuous prediction is widely used in communities ranging from social applications to business, and machine learning is an important method for this problem. In machine learning, the data in the training set are used to fit the model and to estimate the distribution of data in the test set. In continuous prediction, however, new data arrive over time and are used to predict future data, which creates a problem: as the data set grows, its distribution changes and the training set accumulates garbage data that reduce prediction accuracy, so the garbage data should be removed. The main contribution of this article is using new data to detect the timeliness of historical data and to remove the garbage data. We build a data flow model that describes how data flow among the test set, training set, validation set and garbage set, and thereby improve the accuracy of prediction. Because the best machine learning model changes as the data set changes, we also design a hybrid voting algorithm that applies seven machine learning models to the same problem and uses the validation set to assign larger weights to better-performing models. Experimental results show that, when the distribution of the data set changes over time, our data flow model removes most of the garbage data and gives a better result than the traditional method of adding all data to the training set, and that our hybrid voting algorithm yields better predictions than the average accuracy of the individual models.
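    The weighted-voting idea can be sketched generically: train several models, weight each by its validation accuracy, and combine their votes. The model choice, weighting rule, and data below are illustrative assumptions, not the authors' exact seven-model algorithm.

```python
# Validation-weighted voting ensemble (illustrative sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

models = [LogisticRegression(max_iter=1000),
          DecisionTreeClassifier(max_depth=5),
          KNeighborsClassifier()]

weights, predictions = [], []
for model in models:
    model.fit(X_train, y_train)
    weights.append(model.score(X_val, y_val))   # validation accuracy as the weight
    predictions.append(model.predict(X_test))

weights = np.array(weights) / np.sum(weights)
vote_for_one = np.average(np.array(predictions), axis=0, weights=weights)
y_pred = (vote_for_one >= 0.5).astype(int)
print("Ensemble accuracy:", np.mean(y_pred == y_test))
```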

  12. An ARMA prediction model for electromagnetic radiation data preceding a rock burst

    Energy Technology Data Exchange (ETDEWEB)

    Liu Zhen-tang; Liu Xiao-fei; Wang En-yuan [China University of Mining & Technology, Xuzhou (China). School of Safety Engineering

    2009-03-15

    SAS statistical analysis software was used to test the randomness of electromagnetic radiation (EMR) observed during the '1.12 rock burst' of the number 237 working face in the Nanshan coal mine. An auto-regressive-moving-average (ARMA) model was fitted to the EMR data and used to forecast twelve observations into the future. The results show that the rock burst EMR data are non-white noise, stationary and can be fitted with an AR(3) model. Comparing the model EMR values to the real data, the similarity degree is about 66%. An ARMA model can use data preceding an event to describe changes in the EMR trends quantitatively. 10 refs., 4 figs., 4 tabs.
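    The AR(3) fit and twelve-step forecast described above can be reproduced in outline with standard time-series tooling; the series below is simulated, not the Nanshan mine EMR data, and the coefficients are arbitrary.

```python
# Fit an AR(3) model and forecast twelve observations ahead.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
n, phi = 300, [0.5, 0.2, 0.1]
x = np.zeros(n)
for t in range(3, n):   # simulate x_t = 0.5 x_{t-1} + 0.2 x_{t-2} + 0.1 x_{t-3} + e_t
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + phi[2] * x[t - 3] + rng.normal(0, 1)

model = ARIMA(x, order=(3, 0, 0))   # AR(3): no differencing, no MA terms
fit = model.fit()
print(fit.params)                   # constant, three AR coefficients, noise variance
print(fit.forecast(steps=12))       # twelve observations into the future
```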

  13. Code REPOL to fit experimental data with a polynomial, and its graphics plotting

    International Nuclear Information System (INIS)

    Romero, L.; Travesi, A.

    1983-01-01

    The REPOL code performs the fitting of a set of experimental data with a polynomial of mth degree (max. 10), using the least-squares criterion. It also produces a graphic plot of the fitted polynomial, in the appropriate coordinate axes, on a plotter. An additional option allows the experimental data used for the fit to be plotted as well. The data necessary to execute the code are requested from the operator interactively, through a screen dialogue, and the values are entered via the keyboard. The code is written in Fortran IV and, because it is structured in subroutine blocks, can be adapted to any computer with a graphics screen and keyboard terminal and a serially connected plotter whose software supports Hewlett Packard Graphics 1000. (Author) 5 refs
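    A compact modern equivalent of the same task (least-squares polynomial fit plus a plot of data and fitted curve) is sketched below; the data, degree, and output filename are illustrative assumptions.

```python
# Least-squares polynomial fit of degree m with a plot of data and fit.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 5, 30)
y = 2.0 + 1.5 * x - 0.3 * x**2 + np.random.default_rng(5).normal(0, 0.2, x.size)

degree = 2                              # user-selected degree (REPOL allows up to 10)
coeffs = np.polyfit(x, y, degree)       # least-squares criterion
poly = np.poly1d(coeffs)

plt.scatter(x, y, label="experimental data")
plt.plot(x, poly(x), label=f"degree-{degree} fit")
plt.legend()
plt.savefig("polynomial_fit.png")       # stand-in for the plotter output
```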

  14. Atomic data for integrated tokamak modelling

    International Nuclear Information System (INIS)

    Toekesi, K.

    2013-01-01

    scattering. Moreover, we present elastic cross sections of fusion-related materials: total and angular differential elastic cross sections of hydrogen atoms for a wide range of incident electron energies. A convenient way to represent these data is with analytical fit functions, which can be easily applied in various fields of science. The aim of this work is to develop a universal functional formula for the elastic cross sections of a hydrogen target. We consider the angular differential electron elastic cross sections over a wide range of incident electron energies and the entire angular range. The differential cross sections were calculated using the partial-wave expansion method. The fitted curves agree with the calculated ones to within 1% over the energy range between 1 eV and 100 keV. Our analytical formula shows its main virtues in Monte Carlo simulations that must evaluate the elastic cross sections many times, where it drastically reduces the computation time. The fitting technique can also be applied to other data. Acknowledgement: This work, supported by the European Communities under the contract of Association EURATOM-HAS, was carried out within the framework of the Task Force on Integrated Tokamak Modelling of the European Fusion Development Agreement. The work was also supported by the Hungarian Scientific Research Fund OTKA No. NN103279. (author)

  15. Calibration of a stochastic health evolution model using NHIS data

    Science.gov (United States)

    Gupta, Aparna; Li, Zhisheng

    2011-10-01

    This paper presents and calibrates an individual's stochastic health evolution model. In this health evolution model, the uncertainty of health incidents is described by a stochastic process with a finite number of possible outcomes. We construct a comprehensive health status index (HSI) to describe an individual's health status, as well as a health risk factor system (RFS) to classify individuals into different risk groups. Based on the maximum likelihood estimation (MLE) method and the method of nonlinear least squares fitting, model calibration is formulated in terms of two mixed-integer nonlinear optimization problems. Using the National Health Interview Survey (NHIS) data, the model is calibrated for specific risk groups. Longitudinal data from the Health and Retirement Study (HRS) is used to validate the calibrated model, which displays good validation properties. The end goal of this paper is to provide a model and methodology, whose output can serve as a crucial component of decision support for strategic planning of health related financing and risk management.

  16. Topics in modelling of clustered data

    CERN Document Server

    Aerts, Marc; Ryan, Louise M; Geys, Helena

    2002-01-01

    Many methods for analyzing clustered data exist, all with advantages and limitations in particular applications. Compiled from the contributions of leading specialists in the field, Topics in Modelling of Clustered Data describes the tools and techniques for modelling the clustered data often encountered in medical, biological, environmental, and social science studies. It focuses on providing a comprehensive treatment of marginal, conditional, and random effects models using, among others, likelihood, pseudo-likelihood, and generalized estimating equations methods. The authors motivate and illustrate all aspects of these models in a variety of real applications. They discuss several variations and extensions, including individual-level covariates and combined continuous and discrete outcomes. Flexible modelling with fractional and local polynomials, omnibus lack-of-fit tests, robustification against misspecification, exact, and bootstrap inferential procedures all receive extensive treatment. The application...

  17. Fit-for-purpose: species distribution model performance depends on evaluation criteria - Dutch Hoverflies as a case study.

    Science.gov (United States)

    Aguirre-Gutiérrez, Jesús; Carvalheiro, Luísa G; Polce, Chiara; van Loon, E Emiel; Raes, Niels; Reemer, Menno; Biesmeijer, Jacobus C

    2013-01-01

    Understanding species distributions and the factors limiting them is an important topic in ecology and conservation, including in nature reserve selection and predicting climate change impacts. While Species Distribution Models (SDM) are the main tool used for these purposes, choosing the best SDM algorithm is not straightforward as these are plentiful and can be applied in many different ways. SDM are used mainly to gain insight in 1) overall species distributions, 2) their past-present-future probability of occurrence and/or 3) to understand their ecological niche limits (also referred to as ecological niche modelling). The fact that these three aims may require different models and outputs is, however, rarely considered and has not been evaluated consistently. Here we use data from a systematically sampled set of species occurrences to specifically test the performance of Species Distribution Models across several commonly used algorithms. Species range in distribution patterns from rare to common and from local to widespread. We compare overall model fit (representing species distribution), the accuracy of the predictions at multiple spatial scales, and the consistency in selection of environmental correlations all across multiple modelling runs. As expected, the choice of modelling algorithm determines model outcome. However, model quality depends not only on the algorithm, but also on the measure of model fit used and the scale at which it is used. Although model fit was higher for the consensus approach and Maxent, Maxent and GAM models were more consistent in estimating local occurrence, while RF and GBM showed higher consistency in environmental variables selection. Model outcomes diverged more for narrowly distributed species than for widespread species. We suggest that matching study aims with modelling approach is essential in Species Distribution Models, and provide suggestions how to do this for different modelling aims and species' data

  18. ACCELERATED FITTING OF STELLAR SPECTRA

    Energy Technology Data Exchange (ETDEWEB)

    Ting, Yuan-Sen; Conroy, Charlie [Harvard–Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Rix, Hans-Walter [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany)

    2016-07-20

    Stellar spectra are often modeled and fitted by interpolating within a rectilinear grid of synthetic spectra to derive the stars’ labels: stellar parameters and elemental abundances. However, the number of synthetic spectra needed for a rectilinear grid grows exponentially with the label space dimensions, precluding the simultaneous and self-consistent fitting of more than a few elemental abundances. Shortcuts such as fitting subsets of labels separately can introduce unknown systematics and do not produce correct error covariances in the derived labels. In this paper we present a new approach—Convex Hull Adaptive Tessellation (chat)—which includes several new ideas for inexpensively generating a sufficient stellar synthetic library, using linear algebra and the concept of an adaptive, data-driven grid. A convex hull approximates the region where the data lie in the label space. A variety of tests with mock data sets demonstrate that chat can reduce the number of required synthetic model calculations by three orders of magnitude in an eight-dimensional label space. The reduction will be even larger for higher dimensional label spaces. In chat the computational effort increases only linearly with the number of labels that are fit simultaneously. Around each of these grid points in the label space an approximate synthetic spectrum can be generated through linear expansion using a set of “gradient spectra” that represent flux derivatives at every wavelength point with respect to all labels. These techniques provide new opportunities to fit the full stellar spectra from large surveys with 15–30 labels simultaneously.
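    Two ingredients of the approach described above can be illustrated generically: bounding the data region in label space with a convex hull, and approximating a spectrum near a grid point by linear expansion with gradient spectra. The labels, wavelength grid, and gradients below are synthetic assumptions, not the chat library itself.

```python
# Convex-hull membership test in label space + linear expansion with gradient spectra.
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(6)
labels = rng.normal(size=(500, 3))             # e.g. standardized (Teff, logg, [Fe/H])
hull = ConvexHull(labels)
tess = Delaunay(labels[hull.vertices])          # triangulate the hull for membership tests

candidate = np.array([0.1, -0.2, 0.3])
inside = tess.find_simplex(candidate) >= 0      # only build models where the data live
print("candidate inside hull:", inside)

# Linear expansion around a grid point l0: f(l) ~ f(l0) + G @ (l - l0),
# where each column of G is a "gradient spectrum" (dflux/dlabel per wavelength).
n_pix = 1000
base_spectrum = rng.normal(1.0, 0.01, n_pix)    # flux at the grid point l0
gradient_spectra = rng.normal(0.0, 0.05, (n_pix, 3))
l0 = np.zeros(3)
approx_spectrum = base_spectrum + gradient_spectra @ (candidate - l0)
print(approx_spectrum.shape)
```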

  19. Soccer Player Characteristics in English Lower-League Development Programmes: The Relationships between Relative Age, Maturation, Anthropometry and Physical Fitness.

    Science.gov (United States)

    Lovell, Ric; Towlson, Chris; Parkin, Guy; Portas, Matt; Vaeyens, Roel; Cobley, Stephen

    2015-01-01

    The relative age effect (RAE) and its relationships with maturation, anthropometry, and physical performance characteristics were examined across a representative sample of English youth soccer development programmes. Birth dates of 1,212 players, chronologically age-grouped (i.e., U9's-U18's), representing 17 professional clubs (i.e., playing in Leagues 1 & 2) were obtained and categorised into relative age quartiles from the start of the selection year (Q1 = Sep-Nov; Q2 = Dec-Feb; Q3 = Mar-May; Q4 = Jun-Aug). Players were measured for somatic maturation and performed a battery of physical tests to determine aerobic fitness (Multi-Stage Fitness Test [MSFT]), Maximal Vertical Jump (MVJ), sprint (10 & 20m), and agility (T-Test) performance capabilities. Odds ratio's (OR) revealed Q1 players were 5.3 times (95% confidence intervals [CI]: 4.08-6.83) more likely to be selected than Q4's, with a particularly strong RAE bias observed in U9 (OR: 5.56) and U13-U16 squads (OR: 5.45-6.13). Multivariate statistical models identified few between quartile differences in anthropometric and fitness characteristics, and confirmed chronological age-group and estimated age at peak height velocity (APHV) as covariates. Assessment of practical significance using magnitude-based inferences demonstrated body size advantages in relatively older players (Q1 vs. Q4) that were very-likely small (Effect Size [ES]: 0.53-0.57), and likely to very-likely moderate (ES: 0.62-0.72) in U12 and U14 squads, respectively. Relatively older U12-U14 players also demonstrated small advantages in 10m (ES: 0.31-0.45) and 20m sprint performance (ES: 0.36-0.46). The data identify a strong RAE bias at the entry-point to English soccer developmental programmes. RAE was also stronger circa-PHV, and relatively older players demonstrated anaerobic performance advantages during the pubescent period. Talent selectors should consider motor function and maturation status assessments to avoid premature and unwarranted

  20. Phylogenetic tree reconstruction accuracy and model fit when proportions of variable sites change across the tree.

    Science.gov (United States)

    Shavit Grievink, Liat; Penny, David; Hendy, Michael D; Holland, Barbara R

    2010-05-01

    Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction.

  1. Comparison of kinetic models for data from a positron emission tomograph

    International Nuclear Information System (INIS)

    Coxson, P.G.; Huesman, R.H.; Lim, S.; Klein, G.J.; Reutter, B.W.; Budinger, T.F.

    1995-01-01

    The purpose of this research was to compare a physiological model of 82Rb in the myocardium with two reduced-order models with regard to their ability to assess physiological parameters of diagnostic significance. A three-compartment physiological model of 82Rb uptake in the myocardium was used to simulate kinetic region-of-interest data from a positron emission tomograph (PET). Simulations were generated for eight different blood flow rates reflecting the physiological range of interest. Two reduced-order models which are commonly used with myocardial PET studies were fit to the simulated data and the parameters of the reduced-order models were compared with the physiological parameters. Then all three models were fit to the simulated data with noise added. Monte Carlo simulations were used to evaluate and compare the diagnostic utility of the reduced-order models.

  2. Lung volumes related to physical activity, physical fitness, aerobic capacity and body mass index in students

    Directory of Open Access Journals (Sweden)

    Mihailova A.

    2016-01-01

    Reduced lung volumes were associated with lower aerobic fitness, lower physical fitness and a lower amount of weekly physical activity. A healthier body mass index was associated with higher aerobic fitness (relative VO2max) in both females and males.

  3. Relationships between Health-Related Fitness Knowledge, Perceived Competence, Self- Determination, and Physical Activity Behaviors of High School Students

    Science.gov (United States)

    Haslem, Liz; Wilkinson, Carol; Prusak, Keven A.; Christensen, William F.; Pennington, Todd

    2016-01-01

    The purpose of this study was (a) to test a hypothesized model of motivation within the context of conceptual physical education (CPE), and (b) to explore the strength and directionality of perceived competence for physical activity as a possible mediator for health-related fitness knowledge (HRFK) and physical activity behaviors. High school…

  4. Sedentary patterns, physical activity and health-related physical fitness in youth: a cross-sectional study

    OpenAIRE

    Júdice, Pedro B.; Silva, Analiza M.; Berria, Juliane; Petroski, Edio L.; Ekelund, Ulf; Sardinha, Luís B.

    2017-01-01

    Background: Strong evidence indicates that moderate-vigorous physical activity (MVPA) is positively associated with fitness in youth, independent of total sedentary-time. Sedentary-time appears negatively associated with fitness only when it replaces MVPA. However, whether different sedentary-patterns affect health-related fitness is unknown. Methods: The associations between MVPA and sedentary-patterns with physical fitness were examined in 2698 youths (1262 boys) aged 13.4 ± 2.28 years. Sed...

  5. A Multilevel Shape Fit Analysis of Neutron Transmission Data

    Science.gov (United States)

    Naguib, K.; Sallam, O. H.; Adib, M.; Ashry, A.

    A multilevel shape fit analysis of neutron transmission data is presented. The multilevel computer code SHAPE is used to analyse clean transmission data obtained from time-of-flight (TOF) measurements. The shape analysis deduces the parameters of the observed resonances in the energy region covered by the measurements. The code is based upon a least-squares fit of a multilevel Breit-Wigner formula and includes both instrumental resolution and Doppler broadening. Applying the SHAPE code to a test example of measured transmission data for 151Eu, 153Eu and natural Eu in the energy range 0.025-1 eV gave good results for this technique of analysis.

  6. Fitting and Calibrating a Multilevel Mixed-Effects Stem Taper Model for Maritime Pine in NW Spain

    Science.gov (United States)

    Arias-Rodil, Manuel; Castedo-Dorado, Fernando; Cámara-Obregón, Asunción; Diéguez-Aranda, Ulises

    2015-01-01

    Stem taper data are usually hierarchical (several measurements per tree, and several trees per plot), making application of a multilevel mixed-effects modelling approach essential. However, correlation between trees in the same plot/stand has often been ignored in previous studies. Fitting and calibration of a variable-exponent stem taper function were conducted using data from 420 trees felled in even-aged maritime pine (Pinus pinaster Ait.) stands in NW Spain. In the fitting step, the tree level explained much more variability than the plot level, and therefore calibration at plot level was omitted. Several stem heights were evaluated for measurement of the additional diameter needed for calibration at tree level. Calibration with an additional diameter measured at between 40 and 60% of total tree height showed the greatest improvement in volume and diameter predictions. If additional diameter measurement is not available, the fixed-effects model fitted by the ordinary least squares technique should be used. Finally, we also evaluated how the expansion of parameters with random effects affects the stem taper prediction, as we consider this a key question when applying the mixed-effects modelling approach to taper equations. The results showed that correlation between random effects should be taken into account when assessing the influence of random effects in stem taper prediction. PMID:26630156

  7. The Impact of Cardiorespiratory Fitness on Age-Related Lipids and Lipoproteins

    Science.gov (United States)

    Park, Yong-Moon Mark; Sui, Xuemei; Liu, Junxiu; Zhou, Haiming; Kokkinos, Peter F.; Lavie, Carl J.; Hardin, James W.; Blair, Steven N.

    2015-01-01

    Background Evidence on the effect of cardiorespiratory fitness (CRF) on age-related longitudinal changes of lipids and lipoproteins is scarce. Objectives This study sought to assess the longitudinal, aging trajectory of lipids and lipoproteins for the life course in adults, and to determine whether CRF modifies the age-associated trajectory of lipids and lipoproteins. Methods Data came from 11,418 men, 20 to 90 years of age, without known high cholesterol, high triglycerides, cardiovascular disease, and cancer at baseline and during follow-up from the Aerobics Center Longitudinal Study. There were 43,821 observations spanning 2 to 25 (mean 3.5) health examinations between 1970 and 2006. CRF was quantified by a maximal treadmill exercise test. Marginal models using generalized estimating equations were applied. Results Total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), triglycerides (TG), and non-high-density lipoprotein cholesterol (non-HDL-C) presented similar inverted U-shaped quadratic trajectories with aging: gradual increases were noted until the mid-40s to early 50s, with subsequent declines (all p lipoproteins in young to middle-aged men than in older men. Conclusions Our investigation reveals a differential trajectory of lipids and lipoproteins with aging according to CRF in healthy men, and suggests that promoting increased CRF levels may help delay the development of dyslipidemia. PMID:25975472

  8. The effect of cardiorespiratory fitness on age-related lipids and lipoproteins.

    Science.gov (United States)

    Park, Yong-Moon Mark; Sui, Xuemei; Liu, Junxiu; Zhou, Haiming; Kokkinos, Peter F; Lavie, Carl J; Hardin, James W; Blair, Steven N

    2015-05-19

    Evidence on the effect of cardiorespiratory fitness (CRF) on age-related longitudinal changes of lipids and lipoproteins is scarce. This study sought to assess the longitudinal aging trajectory of lipids and lipoproteins for the life course in adults and to determine whether CRF modifies the age-associated trajectory of lipids and lipoproteins. Data came from 11,418 men, 20 to 90 years of age, without known high cholesterol, high triglycerides, cardiovascular disease, and cancer at baseline and during follow-up from the Aerobics Center Longitudinal Study. There were 43,821 observations spanning 2 to 25 health examinations (mean 3.5 examinations) between 1970 and 2006. CRF was quantified by a maximal treadmill exercise test. Marginal models using generalized estimating equations were applied. Total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), triglycerides, and non-high-density lipoprotein cholesterol (non-HDL-C) presented similar inverted U-shaped quadratic trajectories with aging: gradual increases were noted until age mid-40s to early 50s, with subsequent declines (all p lipoproteins in young to middle-age men than in older men. Our investigation reveals a differential trajectory of lipids and lipoproteins with aging according to CRF in healthy men and suggests that promoting increased CRF levels may help delay the development of dyslipidemia. Copyright © 2015 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  9. Bayesian inference and model comparison for metallic fatigue data

    KAUST Repository

    Babuška, Ivo

    2016-02-23

    In this work, we present a statistical treatment of stress-life (S-N) data drawn from a collection of records of fatigue experiments that were performed on 75S-T6 aluminum alloys. Our main objective is to predict the fatigue life of materials by providing a systematic approach to model calibration, model selection and model ranking with reference to S-N data. To this purpose, we consider fatigue-limit models and random fatigue-limit models that are specially designed to allow the treatment of the run-outs (right-censored data). We first fit the models to the data by maximum likelihood methods and estimate the quantiles of the life distribution of the alloy specimen. To assess the robustness of the estimation of the quantile functions, we obtain bootstrap confidence bands by stratified resampling with respect to the cycle ratio. We then compare and rank the models by classical measures of fit based on information criteria. We also consider a Bayesian approach that provides, under the prior distribution of the model parameters selected by the user, their simulation-based posterior distributions. We implement and apply Bayesian model comparison methods, such as Bayes factor ranking and predictive information criteria based on cross-validation techniques under various a priori scenarios.
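    The maximum-likelihood fitting of a fatigue-limit model with right-censored run-outs can be outlined as below. The S-N values are simulated, not the 75S-T6 records, and the parameterization (normal log-life above a fatigue limit gamma) is one common textbook choice, not necessarily the exact model family used in the paper.

```python
# MLE for a simple fatigue-limit model with right-censored run-outs.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)
S = rng.uniform(220, 400, 120)                       # stress amplitude (arbitrary units)
true = dict(b0=12.0, b1=-2.5, gamma=200.0, sigma=0.25)
logN = (true["b0"] + true["b1"] * np.log10(S - true["gamma"])
        + rng.normal(0, true["sigma"], S.size))
censor_at = 7.0                                      # run-out at 10^7 cycles
observed = np.minimum(logN, censor_at)
runout = logN >= censor_at                           # right-censored indicator

def neg_loglik(params):
    b0, b1, gamma, sigma = params
    if sigma <= 0 or gamma >= S.min():
        return np.inf
    mu = b0 + b1 * np.log10(S - gamma)
    ll_obs = stats.norm.logpdf(observed[~runout], mu[~runout], sigma)
    ll_cens = stats.norm.logsf(censor_at, mu[runout], sigma)   # P(life > run-out)
    return -(ll_obs.sum() + ll_cens.sum())

res = optimize.minimize(neg_loglik, x0=[10.0, -2.0, 150.0, 0.5], method="Nelder-Mead")
print(res.x)   # estimates of b0, b1, gamma, sigma
```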

  10. Bayesian inference and model comparison for metallic fatigue data

    KAUST Repository

    Babuška, Ivo; Sawlan, Zaid A; Scavino, Marco; Szabó, Barna; Tempone, Raul

    2016-01-01

    In this work, we present a statistical treatment of stress-life (S-N) data drawn from a collection of records of fatigue experiments that were performed on 75S-T6 aluminum alloys. Our main objective is to predict the fatigue life of materials by providing a systematic approach to model calibration, model selection and model ranking with reference to S-N data. To this purpose, we consider fatigue-limit models and random fatigue-limit models that are specially designed to allow the treatment of the run-outs (right-censored data). We first fit the models to the data by maximum likelihood methods and estimate the quantiles of the life distribution of the alloy specimen. To assess the robustness of the estimation of the quantile functions, we obtain bootstrap confidence bands by stratified resampling with respect to the cycle ratio. We then compare and rank the models by classical measures of fit based on information criteria. We also consider a Bayesian approach that provides, under the prior distribution of the model parameters selected by the user, their simulation-based posterior distributions. We implement and apply Bayesian model comparison methods, such as Bayes factor ranking and predictive information criteria based on cross-validation techniques under various a priori scenarios.

  11. Hamiltonian inclusive fitness: a fitter fitness concept.

    Science.gov (United States)

    Costa, James T

    2013-01-01

    In 1963-1964 W. D. Hamilton introduced the concept of inclusive fitness, the only significant elaboration of Darwinian fitness since the nineteenth century. I discuss the origin of the modern fitness concept, providing context for Hamilton's discovery of inclusive fitness in relation to the puzzle of altruism. While fitness conceptually originates with Darwin, the term itself stems from Spencer and crystallized quantitatively in the early twentieth century. Hamiltonian inclusive fitness, with Price's reformulation, provided the solution to Darwin's 'special difficulty'-the evolution of caste polymorphism and sterility in social insects. Hamilton further explored the roles of inclusive fitness and reciprocation to tackle Darwin's other difficulty, the evolution of human altruism. The heuristically powerful inclusive fitness concept ramified over the past 50 years: the number and diversity of 'offspring ideas' that it has engendered render it a fitter fitness concept, one that Darwin would have appreciated.

  12. Estimating daily climatologies for climate indices derived from climate model data and observations

    Science.gov (United States)

    Mahlstein, Irina; Spirig, Christoph; Liniger, Mark A; Appenzeller, Christof

    2015-01-01

    Climate indices help to describe the past, present, and future climate. They are usually more closely related to possible impacts and are therefore more illustrative to users than simple climate means. Indices are often based on daily data series and thresholds. It is shown that percentile-based thresholds are sensitive to the method of computation, and so are the climatological daily mean and the daily standard deviation, which are used for bias corrections of daily climate model data. Sample size issues in either the observed reference period or the model data lead to uncertainties in these estimates. A large number of past ensemble seasonal forecasts, called hindcasts, is used to explore these sampling uncertainties and to compare two different approaches. Based on a perfect model approach, it is shown that a fitting approach can substantially improve the estimates of daily climatologies of percentile-based thresholds over land areas, as well as of the mean and the variability. These improvements are relevant for bias removal in long-range forecasts or predictions of climate indices based on percentile thresholds, and the method also shows potential for use in climate change studies. Key points: (1) more robust estimates of daily climate characteristics; (2) a statistical fitting approach; (3) based on a perfect model approach. PMID:26042192

  13. Open and closed CDM isocurvature models contrasted with the CMB data

    International Nuclear Information System (INIS)

    Enqvist, Kari; Kurki-Suonio, Hannu; Vaeliviita, Jussi

    2002-01-01

    We consider pure isocurvature cold dark matter models in the case of open and closed universes. We allow for a large spectral tilt and scan the six-dimensional parameter space for the best fit to the COBE, Boomerang, and Maxima-1 data. Taking into account constraints from large-scale structure and big bang nucleosynthesis, we find a best fit with χ2 = 121, which is to be compared to χ2 = 44 of a flat adiabatic reference model. Hence the current data strongly disfavor pure isocurvature perturbations.

  14. Spreadsheets, Graphing Calculators and the Line of Best Fit

    Directory of Open Access Journals (Sweden)

    Bernie O'Sullivan

    2003-07-01

    One technique that can now be done, almost mindlessly, is the line of best fit. Both the graphing calculator and the Excel spreadsheet produce models for collected data that appear to be very good fits, but upon closer scrutiny, are revealed to be quite poor. This article will examine one such case. I will couch the paper within the framework of a very good classroom investigation that will help generate students’ understanding of the basic principles of curve fitting and will enable them to produce a very accurate model of collected data by combining the technology of the graphing calculator and the spreadsheet.

  15. Efficient Constrained Local Model Fitting for Non-Rigid Face Alignment.

    Science.gov (United States)

    Lucey, Simon; Wang, Yang; Cox, Mark; Sridharan, Sridha; Cohn, Jeffery F

    2009-11-01

    Active appearance models (AAMs) have demonstrated great utility when employed for non-rigid face alignment/tracking. The "simultaneous" algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real-time performance (2-3 fps). The "project-out" algorithm for fitting an AAM achieves faster than real-time performance (> 200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve superior performance to the "simultaneous" AAM algorithm along with real-time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criterion, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criterion for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the "exhaustive local search" (ELS) algorithm. Experiments were conducted on the CMU Multi-PIE database.
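    The weighted least-squares step, combining N simple per-landmark displacements into one global warp update, can be sketched numerically as below. The Jacobians, weights, displacements, and the number of warp parameters are synthetic stand-ins, not the paper's actual CLM parameterization.

```python
# Combine N per-landmark displacements into a single warp update by
# solving  min_p  sum_i w_i * || dx_i - J_i p ||^2  in closed form.
import numpy as np

rng = np.random.default_rng(8)
N, n_params = 68, 10                      # landmarks and warp parameters (assumed)
J = rng.normal(size=(N, 2, n_params))     # per-landmark warp Jacobians (2-D points)
dx = rng.normal(size=(N, 2))              # simple displacement found per landmark
w = rng.uniform(0.1, 1.0, N)              # patch-expert confidences as weights

A = sum(w[i] * J[i].T @ J[i] for i in range(N))
b = sum(w[i] * J[i].T @ dx[i] for i in range(N))
p = np.linalg.solve(A, b)                 # the single "complex" warp update
print(p.shape)                            # (10,)
```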

  16. AXAF FITS standard for ray trace interchange

    Science.gov (United States)

    Hsieh, Paul F.

    1993-07-01

    A standard data format for the archival and transport of x-ray events generated by ray trace models is described. Upon review and acceptance by the Advanced X-ray Astrophysics Facility (AXAF) Software Systems Working Group (SSWG), this standard shall become the official AXAF data format for ray trace events. The Flexible Image Transport System (FITS) is well suited for the purposes of the standard and was selected as its basis. FITS is both flexible and efficient and is also widely used within the astronomical community for storage and transfer of data. In addition, software to read and write FITS format files is widely available. In selecting quantities to be included within the ray trace standard, the AXAF Mission Support team, Science Instruments team, and the other contractor teams were surveyed. From the results of this survey, the following requirements were established: (1) for the scientific needs, each photon should have associated with it position, direction, energy, and statistical weight; the standard must also accommodate path length (relative phase) and polarization; (2) a unique photon identifier is necessary for bookkeeping purposes; (3) a log of individuals, organizations, and software packages that have modified the data must be maintained in order to create an audit trail; (4) a mechanism for extensions to the basic kernel should be provided; and (5) the ray trace standard should integrate with future AXAF data product standards.
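    A photon event list of this kind can be written as a FITS binary table with the widely used astropy library; the sketch below follows the surveyed requirements (position, direction, energy, weight, identifier, audit trail), but the column names and file layout are assumptions, not the actual AXAF standard.

```python
# Write a synthetic ray-trace photon event list as a FITS binary table.
import numpy as np
from astropy.io import fits

n = 1000
cols = [
    fits.Column(name="PHOTON_ID", format="J", array=np.arange(n)),
    fits.Column(name="X",      format="E", unit="mm",  array=np.random.rand(n)),
    fits.Column(name="Y",      format="E", unit="mm",  array=np.random.rand(n)),
    fits.Column(name="DIR_X",  format="E", array=np.random.rand(n)),
    fits.Column(name="DIR_Y",  format="E", array=np.random.rand(n)),
    fits.Column(name="DIR_Z",  format="E", array=np.random.rand(n)),
    fits.Column(name="ENERGY", format="E", unit="keV", array=np.random.rand(n)),
    fits.Column(name="WEIGHT", format="E", array=np.ones(n)),
]
hdu = fits.BinTableHDU.from_columns(cols, name="RAYEVENTS")
# An audit trail of who/what modified the data can be kept in HISTORY cards.
hdu.header.add_history("Generated by example ray-trace writer v0.1")
fits.HDUList([fits.PrimaryHDU(), hdu]).writeto("ray_events.fits", overwrite=True)
```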

  17. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
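    Fitting a two-component normal mixture by maximum likelihood (via the EM algorithm) can be sketched as below; the two simulated series merely stand in for paired stock-market and rubber-price observations and are not the paper's data.

```python
# Two-component bivariate Gaussian mixture fitted by maximum likelihood (EM).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(9)
# Two hypothetical latent regimes with different price relationships.
a = rng.multivariate_normal([0, 0], [[1.0, -0.6], [-0.6, 1.0]], 500)
b = rng.multivariate_normal([3, 3], [[1.0, -0.3], [-0.3, 1.0]], 500)
data = np.vstack([a, b])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(data)
print("weights:", gmm.weights_)
print("means:\n", gmm.means_)
print("covariances (the sign of the off-diagonal term shows the price relationship):\n",
      gmm.covariances_)
```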

  18. A new kinetic model based on the remote control mechanism to fit experimental data in the selective oxidation of propene into acrolein on biphasic catalysts

    Energy Technology Data Exchange (ETDEWEB)

    Abdeldayem, H.M.; Ruiz, P.; Delmon, B. [Unite de Catalyse et Chimie des Materiaux Divises, Universite Catholique de Louvain, Louvain-La-Neuve (Belgium); Thyrion, F.C. [Unite des Procedes Faculte des Sciences Appliquees, Universite Catholique de Louvain, Louvain-La-Neuve (Belgium)

    1998-12-31

    A new kinetic model for a more accurate and detailed fitting of the experimental data is proposed. The model is based on the remote control mechanism (RCM). The RCM assumes that some oxides (called `donors`) are able to activate molecular oxygen, transforming it into very active mobile species (spillover oxygen (O{sub OS})). O{sub OS} migrates onto the surface of the other oxide (called `acceptor`) where it creates and/or regenerates the active sites during the reaction. The model contains two terms, one considering the creation of selective sites and the other the catalytic reaction at each site. The model has been tested in the selective oxidation of propene into acrolein (T=380, 400, 420 C; oxygen and propene partial pressures between 38 and 152 Torr). Catalysts were prepared as pure MoO{sub 3} (acceptor) and their mechanical mixtures with {alpha}-Sb{sub 2}O{sub 4} (donor) in different proportions. The presence of {alpha}-Sb{sub 2}O{sub 4} changes the reaction order, the activation energy of the reaction and the number of active sites of MoO{sub 3} produced by oxygen spillover. These changes are consistent with a modification in the degree of irrigation of the surface by oxygen spillover. The fitting of the model to experimental results shows that the number of sites created by O{sub SO} increases with the amount of {alpha}-Sb{sub 2}O{sub 4}. (orig.)

  19. A realistic closed-form radiobiological model of clinical tumor-control data incorporating intertumor heterogeneity

    International Nuclear Information System (INIS)

    Roberts, Stephen A.; Hendry, Jolyon H.

    1998-01-01

    Purpose: To investigate the role of intertumor heterogeneity in clinical tumor control datasets and the relationship to in vitro measurements of tumor biopsy samples. Specifically, to develop a modified linear-quadratic (LQ) model incorporating such heterogeneity that it is practical to fit to clinical tumor-control datasets. Methods and Materials: We developed a modified version of the linear-quadratic (LQ) model for tumor control, incorporating a (lagged) time factor to allow for tumor cell repopulation. We explicitly took into account the interpatient heterogeneity in clonogen number, radiosensitivity, and repopulation rate. Using this model, we could generate realistic TCP curves using parameter estimates consistent with those reported from in vitro studies, subject to the inclusion of a radiosensitivity (or dose)-modifying factor. We then demonstrated that the model was dominated by the heterogeneity in α (tumor radiosensitivity) and derived an approximate simplified model incorporating this heterogeneity. This simplified model is expressible in a compact closed form, which it is practical to fit to clinical datasets. Using two previously analysed datasets, we fit the model using direct maximum-likelihood techniques and obtained parameter estimates that were, again, consistent with the experimental data on the radiosensitivity of primary human tumor cells. This heterogeneity model includes the same number of adjustable parameters as the standard LQ model. Results: The modified model provides parameter estimates that can easily be reconciled with the in vitro measurements. The simplified (approximate) form of the heterogeneity model is a compact, closed-form probit function that can readily be fitted to clinical series by conventional maximum-likelihood methodology. This heterogeneity model provides a slightly better fit to the datasets than the conventional LQ model, with the same numbers of fitted parameters. The parameter estimates of the clinically

  20. The moderating role of alienation on the relation between social dominance orientation, right-wing authoritarianism, and person-organization fit.

    Science.gov (United States)

    Nicol, Adelheid A M; Rounding, Kevin

    2014-12-01

    Right-Wing Authoritarianism and Social Dominance Orientation have been found to be related with Person-Organization fit. This study examined whether alienation also plays a role in the relation between Person-Organization fit and these two socio-political attitudes. Measures of Right-Wing Authoritarianism, Social Dominance Orientation, alienation, and Person-Organization fit were given to a sample of Officer Cadets (N = 99; M age = 22.8 yr., SD = 5.4). The findings suggest that when individuals felt alienated, Social Dominance Orientation and Right-Wing Authoritarianism were not related to Person-Organization fit. When alienation was low, Social Dominance Orientation and Right-Wing Authoritarianism interacted to predict Person-Organization fit. Therefore, feelings of alienation can influence the perception of fit within an organization and the relation between perception of fit with Social Dominance Orientation and Right-Wing Authoritarianism.

  1. Dynamic relationships between motor skill competence and health-related fitness in youth.

    Science.gov (United States)

    Stodden, David F; Gao, Zan; Goodway, Jacqueline D; Langendorfer, Stephen J

    2014-08-01

    This cross-sectional study examined associations among motor skill competence (MSC) and health-related fitness (HRF) in youth. A convenience sample of 253 boys and 203 girls (aged 4-13 years) participated in the study. Associations among measures of MSC (throwing and kicking speed and standing long jump distance) and a composite measure of HRF (push-ups, curl-ups, grip strength and PACER test) across five age groups (4-5, 6-7, 8-9, 10-11 and 12-13 yrs.) were assessed using hierarchical regression modeling. When including all children, throwing and jumping were significantly associated with the composite HRF factor for both boys and girls (throw, t = 5.33; jump, t = 4.49) beyond the significant age effect (t = 4.98) with kicking approaching significance (t = 1.73, p = .08). Associations between throwing and kicking speed and HRF appeared to increase from early to middle to late childhood age ranges. Associations between jumping and HRF were variable across age groups. These results support the notion that the relationship between MSC and HRF performance is dynamic and may change across childhood. These data suggest that the development of object control skills in childhood may be important for the development and maintenance of HRF across childhood and into adolescence.

  2. Beta-Poisson model for single-cell RNA-seq data analyses.

    Science.gov (United States)

    Vu, Trung Nghia; Wills, Quin F; Kalari, Krishna R; Niu, Nifang; Wang, Liewei; Rantalainen, Mattias; Pawitan, Yudi

    2016-07-15

    Single-cell RNA-sequencing technology allows detection of gene expression at the single-cell level. One typical feature of the data is a bimodality in the cellular distribution even for highly expressed genes, primarily caused by a proportion of non-expressing cells. The standard and the over-dispersed gamma-Poisson models that are commonly used in bulk-cell RNA-sequencing are not able to capture this property. We introduce a beta-Poisson mixture model that can capture the bimodality of the single-cell gene expression distribution. We further integrate the model into the generalized linear model framework in order to perform differential expression analyses. The whole analytical procedure is called BPSC. The results from several real single-cell RNA-seq datasets indicate that ∼90% of the transcripts are well characterized by the beta-Poisson model; the model-fit from BPSC is better than the fit of the standard gamma-Poisson model in > 80% of the transcripts. Moreover, in differential expression analyses of simulated and real datasets, BPSC performs well against edgeR, a conventional method widely used in bulk-cell RNA-sequencing data, and against scde and MAST, two recent methods specifically designed for single-cell RNA-seq data. An R package BPSC for model fitting and differential expression analyses of single-cell RNA-seq data is available under the GPL-3 license at https://github.com/nghiavtr/BPSC (contact: yudi.pawitan@ki.se or mattias.rantalainen@ki.se). Supplementary data are available at Bioinformatics online.
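
    The beta-Poisson idea can be sketched in a few lines: a Beta-distributed rate produces the non-expressing/expressing bimodality, and a scaled Poisson generates the counts. The snippet below is only an illustrative maximum-likelihood fit of that marginal distribution; it is not the BPSC package, and the scale parameterisation and starting values are assumptions.

```python
# Illustrative beta-Poisson fit (not the BPSC package): y ~ Poisson(s * p),
# p ~ Beta(a, b); the marginal likelihood integrates p out by quadrature.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Simulate one gene across 200 cells; a U-shaped Beta gives a bimodal profile.
a_true, b_true, s_true = 0.3, 0.5, 50.0
p = rng.beta(a_true, b_true, size=200)
y = rng.poisson(s_true * p)

# Gauss-Legendre nodes mapped from (-1, 1) to (0, 1) for integrating out p.
nodes, weights = np.polynomial.legendre.leggauss(64)
u = 0.5 * (nodes + 1.0)
w = 0.5 * weights

def neg_loglik(theta):
    a, b, s = np.exp(theta)                                 # work on the log scale
    pois = stats.poisson.pmf(y[:, None], s * u[None, :])    # (cells, nodes)
    beta_pdf = stats.beta.pdf(u, a, b)
    marginal = pois @ (w * beta_pdf)
    return -np.sum(np.log(marginal + 1e-300))

res = optimize.minimize(neg_loglik, x0=np.log([1.0, 1.0, y.mean() + 1.0]),
                        method="Nelder-Mead")
print("fitted (a, b, s):", np.round(np.exp(res.x), 2))
```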

  3. Reliability of Health-Related Physical Fitness Tests among Colombian Children and Adolescents: The FUPRECOL Study.

    Directory of Open Access Journals (Sweden)

    Robinson Ramírez-Vélez

    Full Text Available Substantial evidence indicates that youth physical fitness levels are an important marker of lifestyle and cardio-metabolic health profiles and predict future risk of chronic diseases. The reliability of physical fitness tests has not been explored in the Latino-American youth population. This study's aim was to examine the reliability of health-related physical fitness tests that were used in the Colombian health promotion "Fuprecol study". Participants were 229 Colombian youth (boys n = 124 and girls n = 105) aged 9 to 17.9 years old. Five components of health-related physical fitness were measured: (1) morphological component: height, weight, body mass index (BMI), waist circumference, triceps skinfold, subscapular skinfold, and body fat (%) via impedance; (2) musculoskeletal component: handgrip and standing long jump test; (3) motor component: speed/agility test (4x10 m shuttle run); (4) flexibility component (hamstring and lumbar extensibility, sit-and-reach test); (5) cardiorespiratory component: 20-meter shuttle-run test (SRT) to estimate maximal oxygen consumption. The tests were performed two times, 1 week apart on the same day of the week, except for the SRT which was performed only once. Intra-observer technical errors of measurement (TEMs) and inter-rater reliability were assessed in the morphological component. Reliability for the musculoskeletal, motor and cardiorespiratory fitness components was examined using Bland-Altman tests. For the morphological component, TEMs were small and reliability was greater than 95% of all cases. For the musculoskeletal, motor, flexibility and cardiorespiratory components, we found adequate reliability patterns in terms of systematic errors (bias) and random error (95% limits of agreement). When the fitness assessments were performed twice, the systematic error was nearly 0 for all tests, except for the sit and reach (mean difference: -1.03% [95% CI = -4.35% to -2.28%]). The results from this study indicate that the

  4. One Size Does Not Fit All: Best Practices for Data Governance

    OpenAIRE

    Otto, Boris

    2011-01-01

    Data Governance defines roles and responsibilities for the management and use of corporate data. While the need for Data Governance is undoubted, companies often encounter difficulties in designing Data Governance in their organization. There is no one size fits all solution. As companies are different in terms of their business strategy, their diversification breadth, their industry, IT strategy and application system landscape, Data Governance must take into account this diversity. What wor...

  5. Relative age effect in physical attributes and motor fitness at different ...

    African Journals Online (AJOL)

    Relative age effect in physical attributes and motor fitness at different birth-month quartile. S.M. Mat-Rasid, M.R. Abdullah, H Juahir, R.M. Musa, A.B.H.M. Maliki, A Adnan, N.A. Kosni, V Eswaramoorthi, N Alias ...

  6. Proton-proton total cross sections and the neglect of masses in data fitting in the Regge region

    International Nuclear Information System (INIS)

    Kamran, M.

    1981-01-01

    It is shown by taking the example of pp total cross sections that the use of the approximation s ≈ 2qs^(1/2) (obtained by neglecting particle masses) while fitting data in the Regge region can be misleading. Several standard fits to sigmasub(tot)pp data are based on the assumption of weak rho-f-ω-A2 exchange degeneracy (EXD). However, these fits involve the use of the approximation mentioned. It is found that it is impossible to fit the sigmasub(tot)pp data in the range 6 2 EXD. This investigation shows that sigmasub(tot)pp data alone seem to indicate either a breaking of weak rho-f-ω-A2 EXD or the presence of low-lying contributions, or both, provided the masses of the interacting particles in data fitting in the Regge region (p(lab) >= 5 GeV/c) are not ignored

  7. [Association between physical fitness parameters and health related quality of life in Chilean community-dwelling older adults].

    Science.gov (United States)

    Guede Rojas, Francisco; Chirosa Ríos, Luis Javier; Fuentealba Urra, Sergio; Vergara Ríos, César; Ulloa Díaz, David; Campos Jara, Christian; Barbosa González, Paola; Cuevas Aburto, Jesualdo

    2017-01-01

    There is no conclusive evidence about the association between physical fitness (PF) and health related quality of life (HRQOL) in older adults. To seek an association between PF and HRQOL in non-disabled community-dwelling Chilean older adults. One hundred and sixteen subjects participated in the study. PF was assessed using the Senior Fitness Test (SFT) and hand grip strength (HGS). HRQOL was assessed using eight dimensions provided by the SF-12v2 questionnaire. Binary multivariate logistic regression models were carried out considering the potential influence of confounder variables. Non-adjusted models indicated that subjects with better performance in arm curl test (ACT) were more likely to score higher on vitality dimension (OR > 1) and those with higher HGS were more likely to score higher on physical functioning, bodily pain, vitality and mental health (OR > 1). The adjusted models consistently showed that ACT and HGS predicted a favorable perception of vitality and mental health dimensions respectively (OR > 1). HGS and ACT have a predictive value for certain dimensions of HRQOL.

  8. Neural network hydrological modelling: on questions of over-fitting, over-training and over-parameterisation

    Science.gov (United States)

    Abrahart, R. J.; Dawson, C. W.; Heppenstall, A. J.; See, L. M.

    2009-04-01

    The most critical issue in developing a neural network model is generalisation: how well will the preferred solution perform when it is applied to unseen datasets? The reported experiments used far-reaching sequences of model architectures and training periods to investigate the potential damage that could result from the impact of several interrelated items: (i) over-fitting - a machine learning concept related to exceeding some optimal architectural size; (ii) over-training - a machine learning concept related to the amount of adjustment that is applied to a specific model - based on the understanding that too much fine-tuning might result in a model that had accommodated random aspects of its training dataset - items that had no causal relationship to the target function; and (iii) over-parameterisation - a statistical modelling concept that is used to restrict the number of parameters in a model so as to match the information content of its calibration dataset. The last item in this triplet stems from an understanding that excessive computational complexities might permit an absurd and false solution to be fitted to the available material. Numerous feedforward multilayered perceptrons were trialled and tested. Two different methods of model construction were also compared and contrasted: (i) traditional Backpropagation of Error; and (ii) state-of-the-art Symbiotic Adaptive Neuro-Evolution. Modelling solutions were developed using the reported experimental set ups of Gaume & Gosset (2003). The models were applied to a near-linear hydrological modelling scenario in which past upstream and past downstream discharge records were used to forecast current discharge at the downstream gauging station [CS1: River Marne]; and a non-linear hydrological modelling scenario in which past river discharge measurements and past local meteorological records (precipitation and evaporation) were used to forecast current discharge at the river gauging station [CS2: Le Sauzay].
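
    The same over-fitting/over-parameterisation effect can be demonstrated outside neural networks. The toy sketch below (not the reported experiments) fits polynomials of growing degree to noisy data and compares calibration error with error on an unseen set generated from the same underlying function.

```python
# Toy over-parameterisation demo (not the paper's neural-network runs):
# higher-degree polynomials fit the calibration noise and generalise worse.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 40)
y_true = np.sin(2 * np.pi * x)
y_train = y_true + rng.normal(scale=0.3, size=x.size)   # calibration data
y_valid = y_true + rng.normal(scale=0.3, size=x.size)   # unseen data

for degree in (1, 3, 9, 15):
    poly = np.polynomial.Polynomial.fit(x, y_train, degree)
    pred = poly(x)
    rmse_train = np.sqrt(np.mean((pred - y_train) ** 2))
    rmse_valid = np.sqrt(np.mean((pred - y_valid) ** 2))
    print(f"degree {degree:2d}: train RMSE {rmse_train:.3f}, "
          f"validation RMSE {rmse_valid:.3f}")
```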

  9. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh

    2014-04-03

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate in the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online.
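
    A minimal version of the nested ("double") Monte Carlo idea, for a homogeneous Poisson process on the unit square with a quadrat-count dispersion statistic, might look like the sketch below; the statistic, intensity estimator, and simulation sizes are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a nested ("double") Monte Carlo GOF test for a homogeneous
# Poisson process on the unit square. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)

def simulate_poisson(intensity):
    n = rng.poisson(intensity)
    return rng.uniform(size=(n, 2))

def statistic(points, k=4):
    # Variance-to-mean ratio of counts in a k x k quadrat grid.
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                  bins=k, range=[[0, 1], [0, 1]])
    return counts.var() / max(counts.mean(), 1e-12)

def mc_pvalue(points, n_sim):
    # Plug-in Monte Carlo p-value: estimate the intensity from the data,
    # then compare the observed statistic with simulated ones.
    lam_hat = len(points)
    t_obs = statistic(points)
    t_sim = np.array([statistic(simulate_poisson(lam_hat))
                      for _ in range(n_sim)])
    return (1 + np.sum(t_sim >= t_obs)) / (n_sim + 1)

def nested_mc_pvalue(points, n_outer=99, n_inner=99):
    # Outer level: plug-in p-value for the observed data.
    p_obs = mc_pvalue(points, n_inner)
    # Inner level: plug-in p-values for data simulated from the fitted model,
    # which calibrate the null distribution of the plug-in p-value.
    lam_hat = len(points)
    p_sim = np.array([mc_pvalue(simulate_poisson(lam_hat), n_inner)
                      for _ in range(n_outer)])
    return (1 + np.sum(p_sim <= p_obs)) / (n_outer + 1)

data = simulate_poisson(100)          # data truly from the null model
print("nested MC p-value:", nested_mc_pvalue(data))
```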

  10. High-degree Gravity Models from GRAIL Primary Mission Data

    Science.gov (United States)

    Lemoine, Frank G.; Goossens, Sander J.; Sabaka, Terence J.; Nicholas, Joseph B.; Mazarico, Erwan; Rowlands, David D.; Loomis, Bryant D.; Chinn, Douglas S.; Caprette, Douglas S.; Neumann, Gregory A.

    2013-01-01

    We have analyzed Ka-band range rate (KBRR) and Deep Space Network (DSN) data from the Gravity Recovery and Interior Laboratory (GRAIL) primary mission (1 March to 29 May 2012) to derive gravity models of the Moon to degree 420, 540, and 660 in spherical harmonics. For these models, GRGM420A, GRGM540A, and GRGM660PRIM, a Kaula constraint was applied only beyond degree 330. Variance-component estimation (VCE) was used to adjust the a priori weights and obtain a calibrated error covariance. The global root-mean-square error in the gravity anomalies computed from the error covariance to 320×320 is 0.77 mGal, compared to 29.0 mGal with the pre-GRAIL model derived with the SELENE mission data, SGM150J, only to 140×140. The global correlations with the Lunar Orbiter Laser Altimeter-derived topography are larger than 0.985 between l = 120 and 330. The free-air gravity anomalies, especially over the lunar farside, display a dramatic increase in detail compared to the pre-GRAIL models (SGM150J and LP150Q) and, through degree 320, are free of the orbit-track-related artifacts present in the earlier models. For GRAIL, we obtain an a posteriori fit to the S-band DSN data of 0.13 mm/s. The a posteriori fits to the KBRR data range from 0.08 to 1.5 micrometers/s for GRGM420A and from 0.03 to 0.06 micrometers/s for GRGM660PRIM. Using the GRAIL data, we obtain solutions for the degree 2 Love numbers, k20=0.024615+/-0.0000914, k21=0.023915+/-0.0000132, and k22=0.024852+/-0.0000167, and a preliminary solution for the k30 Love number of k30=0.00734+/-0.0015, where the Love number error sigmas are those obtained with VCE.

  11. Cardiorespiratory fitness and age-related arterial stiffness in women with systemic lupus erythematosus.

    Science.gov (United States)

    Montalbán-Méndez, Cristina; Soriano-Maldonado, Alberto; Vargas-Hitos, José A; Sáez-Urán, Luis M; Rosales-Castillo, Antonio; Morillas-de-Laguno, Pablo; Gavilán-Carrera, Blanca; Jiménez-Alonso, Juan

    2018-03-01

    The aim of this study was twofold: (i) to examine the association of cardiorespiratory fitness with arterial stiffness in women with systemic lupus erythematosus; (ii) to assess the potential interaction of cardiorespiratory fitness with age on arterial stiffness in this population. A total of 49 women with systemic lupus erythematosus (mean age 41.3 [standard deviation 13.8] years) and clinical stability during the previous 6 months were included in the study. Arterial stiffness was assessed through pulse wave velocity (Mobil-O-Graph® 24 hours pulse wave velocity monitor). Cardiorespiratory fitness was estimated with the Siconolfi step test and the 6-minute walk test. Cardiorespiratory fitness was inversely associated with pulse wave velocity in crude analyses. There was a significant cardiorespiratory fitness × age interaction effect on pulse wave velocity, regardless of the test used to estimate cardiorespiratory fitness, such that higher fitness was associated with a lower increase in pulse wave velocity per each year increase in age. The results of this study suggest that cardiorespiratory fitness might attenuate the age-related arterial stiffening in women with systemic lupus erythematosus and might thus contribute to the primary prevention of cardiovascular disease in this population. As the cross-sectional design precludes establishing causal relationships, future clinical trials should confirm or contrast these findings.

  12. Hybrid PSO-ASVR-based method for data fitting in the calibration of infrared radiometer

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Sen; Li, Chengwei, E-mail: heikuanghit@163.com [School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001 (China)

    2016-06-15

    The present paper describes a hybrid particle swarm optimization-adaptive support vector regression (PSO-ASVR)-based method for data fitting in the calibration of infrared radiometer. The proposed hybrid PSO-ASVR-based method is based on PSO in combination with Adaptive Processing and Support Vector Regression (SVR). The optimization technique involves setting parameters in the ASVR fitting procedure, which significantly improves the fitting accuracy. However, its use in the calibration of infrared radiometer has not yet been widely explored. Bearing this in mind, the PSO-ASVR-based method, which is based on the statistical learning theory, is successfully used here to get the relationship between the radiation of a standard source and the response of an infrared radiometer. Main advantages of this method are the flexible adjustment mechanism in data processing and the optimization mechanism in a kernel parameter setting of SVR. Numerical examples and applications to the calibration of infrared radiometer are performed to verify the performance of PSO-ASVR-based method compared to conventional data fitting methods.
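
    As a rough illustration of the PSO-plus-SVR combination (the paper's adaptive-processing stage is not reproduced), a plain global-best PSO loop can tune the C, gamma, and epsilon hyperparameters of an RBF support vector regressor on a synthetic calibration curve; all names, ranges, and PSO coefficients below are assumptions.

```python
# Sketch of PSO-tuned RBF support vector regression for a calibration-style
# curve (the paper's adaptive-processing stage is not reproduced here).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Synthetic calibration data: instrument response versus source level.
x = np.linspace(0.0, 1.0, 120).reshape(-1, 1)
level = x.ravel()
y = (0.8 * level ** 1.5 + 0.05 * np.sin(12 * level)
     + rng.normal(scale=0.01, size=level.size))

def objective(log10_params):
    # A particle encodes log10 of (C, gamma, epsilon).
    C, gamma, eps = 10.0 ** log10_params
    model = SVR(kernel="rbf", C=C, gamma=gamma, epsilon=eps)
    return -cross_val_score(model, x, y, cv=5,
                            scoring="neg_mean_squared_error").mean()

# Plain global-best PSO over the box [-3, 3]^3 in log10 space.
n_particles, n_iter, dim = 20, 30, 3
pos = rng.uniform(-3, 3, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.uniform(size=(2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    vals = np.array([objective(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("best (C, gamma, epsilon):", np.round(10.0 ** gbest, 4))
```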

  13. Neural Based Orthogonal Data Fitting The EXIN Neural Networks

    CERN Document Server

    Cirrincione, Giansalvo

    2008-01-01

    Written by three leaders in the field of neural based algorithms, Neural Based Orthogonal Data Fitting proposes several neural networks, all endowed with a complete theory which not only explains their behavior, but also compares them with the existing neural and traditional algorithms. The algorithms are studied from different points of view, including: as a differential geometry problem, as a dynamic problem, as a stochastic problem, and as a numerical problem. All algorithms have also been analyzed on real time problems (large dimensional data matrices) and have shown accurate solutions. Wh

  14. Educational Level Is Related to Physical Fitness in Patients with Type 2 Diabetes - A Cross-Sectional Study.

    Directory of Open Access Journals (Sweden)

    Lara Allet

    balance or flexibility. A main strength of the present study is that it addresses a population of importance and a factor (EL) whose understanding can influence future interventions. A second strength is its relatively large sample size of a high-risk population. Third, unlike studies that have shown an association between self-reported fitness and educational level, we assessed physical fitness measures by a quantitative and validated test battery using assessors blinded to other data. Another novelty is the extensive evaluation of the role of many relevant confounder variables. In conclusion, we show that in patients with type 2 diabetes EL correlates favorably and independently with important health-related physical fitness measures such as aerobic fitness, walking speed, and lower limb strength. Our findings underline that diabetic patients with low EL should be specifically encouraged to participate in physical activity intervention programs to further reduce social disparities in healthcare. Such programs should be structured and integrate the norms, needs and capacities (financial, time, physical capacities and self-efficacy) of this population, and their effectiveness should be tested in future studies. University of Lausanne clinicaltrials.gov NCT01289587.

  15. Educational Level Is Related to Physical Fitness in Patients with Type 2 Diabetes - A Cross-Sectional Study.

    Science.gov (United States)

    Allet, Lara; Giet, Olivier; Barral, Jérôme; Junod, Nicolas; Durrer, Dominique; Amati, Francesca; Sykiotis, Gerasimos P; Marques-Vidal, Pedro; Puder, Jardena J

    2016-01-01

    flexibility. A main strength of the present study is that it addresses a population of importance and a factor (EL) whose understanding can influence future interventions. A second strength is its relatively large sample size of a high-risk population. Third, unlike studies that have shown an association between self-reported fitness and educational level we assessed physical fitness measures by a quantitative and validated test battery using assessors blinded to other data. Another novelty is the extensive evaluation of the role of many relevant confounder variables. In conclusion, we show that in patients with type 2 diabetes EL correlates favorably and independently with important health-related physical fitness measures such as aerobic fitness, walking speed, and lower limb strength. Our findings underline that diabetic patients with low EL should be specifically encouraged to participate in physical activity intervention programs to further reduce social disparities in healthcare. Such programs should be structured and integrate the norms, needs and capacities (financial, time, physical capacities and self-efficacy) of this population, and their effectiveness should be tested in future studies. University of Lausanne clinicaltrials.gov NCT01289587.

  16. Fitting Latent Cluster Models for Networks with latentnet

    Directory of Open Access Journals (Sweden)

    Pavel N. Krivitsky

    2007-12-01

    Full Text Available latentnet is a package to fit and evaluate statistical latent position and cluster models for networks. Hoff, Raftery, and Handcock (2002) suggested an approach to modeling networks based on positing the existence of a latent space of characteristics of the actors. Relationships form as a function of distances between these characteristics as well as functions of observed dyadic level covariates. In latentnet social distances are represented in a Euclidean space. It also includes a variant of the extension of the latent position model to allow for clustering of the positions developed in Handcock, Raftery, and Tantrum (2007). The package implements Bayesian inference for the models based on a Markov chain Monte Carlo algorithm. It can also compute maximum likelihood estimates for the latent position model and a two-stage maximum likelihood method for the latent position cluster model. For latent position cluster models, the package provides a Bayesian way of assessing how many groups there are, and thus whether or not there is any clustering (since if the preferred number of groups is 1, there is little evidence for clustering). It also estimates which cluster each actor belongs to. These estimates are probabilistic, and provide the probability of each actor belonging to each cluster. It computes four types of point estimates for the coefficients and positions: maximum likelihood estimate, posterior mean, posterior mode and the estimator which minimizes Kullback-Leibler divergence from the posterior. You can assess the goodness-of-fit of the model via posterior predictive checks. It has a function to simulate networks from a latent position or latent position cluster model.
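
    latentnet itself is an R package; the Python sketch below only illustrates the underlying latent position model, in which the probability of a tie is a logistic function of an intercept minus the Euclidean distance between latent positions, fitted here by maximum likelihood rather than the package's MCMC. The simulated network and starting values are assumptions.

```python
# Minimal latent position model sketch: P(tie) = logistic(beta - ||z_i - z_j||),
# fitted by maximum likelihood on a simulated two-cluster network.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform
from scipy.special import expit

rng = np.random.default_rng(4)
n, d = 30, 2

# Simulate a network from two latent clusters.
z_true = np.vstack([rng.normal(-1.5, 0.5, (n // 2, d)),
                    rng.normal(+1.5, 0.5, (n - n // 2, d))])
p_true = expit(2.0 - squareform(pdist(z_true)))
adj = (rng.uniform(size=(n, n)) < p_true).astype(float)
np.fill_diagonal(adj, 0)

iu = np.triu_indices(n, k=1)          # treat the graph as undirected
y = adj[iu]

def neg_loglik(theta):
    beta, z = theta[0], theta[1:].reshape(n, d)
    eta = beta - squareform(pdist(z))[iu]
    p = expit(eta)
    return -np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

theta0 = np.concatenate([[0.0], rng.normal(size=n * d)])
fit = minimize(neg_loglik, theta0, method="L-BFGS-B")
z_hat = fit.x[1:].reshape(n, d)       # positions identified only up to rotation/translation
print("fitted beta:", round(fit.x[0], 2), " converged:", fit.success)
```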

  17. [The association between socioeconomic indicators and adolescents' physical activity and health-related fitness].

    Science.gov (United States)

    Constantino-Coledam, Diogo H; Ferraiol, Philippe Fanelli; Arruda, Gustavo Aires de; Pires-Júnior, Raymundo; Teixeira, Marcio; Greca, João Paulo de Aguiar; Oliveira, Arli Ramos de

    2013-01-01

    This study was aimed at analysing the association between socioeconomic indicators and adolescents' physical activity and health-related fitness. The study involved 716 adolescents of both genders aged 10 to 18 years (46.8% male) who answered a questionnaire estimating their habitual physical activity and socioeconomic status; two health-related physical fitness tests were also performed. The socioeconomic indicators analysed concerned their parents' educational level and the number of bathrooms, TVs, cars, housemaids, refrigerators and freezers in their homes. A positive association was found between paternal education (PR=1.61 (range 1.27-2.10) and 1.41 (1.10-1.83)) and housemaids (PR=1.97 (1.04-3.81) and 1.92 (1.05-3.52)) with recommended physical activity and leisure time physical activity, respectively. The number of cars (PR=1.48: 1.02-2.19) and freezers (PR=1.88: 1.12-3.18) was positively associated with leisure time physical activity and the number of TVs negatively so (PR=0.75: 0.63-0.89). The number of TVs (PR=0.80: 0.67-0.96) and cars (PR=0.70: 0.55-0.89) was negatively associated with cardiorespiratory fitness whilst paternal education (PR=1.17: 1.00-1.37) and the number of bathrooms in the home (PR=1.25: 1.02-1.54) were positively associated with muscular strength. Physical activity and health-related physical fitness were associated with socioeconomic status. However, such association depended on the socioeconomic indicator being analysed. Caution should be taken when analysing studies which use different socioeconomic indicators.

  18. Cardiorespiratory Fitness and Cognitive Function are Positively Related Among Participants with Mild and Subjective Cognitive Impairment.

    Science.gov (United States)

    Stuckenschneider, Tim; Askew, Christopher David; Rüdiger, Stefanie; Cristina Polidori, Maria; Abeln, Vera; Vogt, Tobias; Krome, Andreas; Olde Rikkert, Marcel; Lawlor, Brian; Schneider, Stefan

    2018-01-01

    By 2030, about 74 million people will be diagnosed with dementia, and many more will experience subjective (SCI) or mild cognitive impairment (MCI). As physical inactivity has been identified to be a strong modifiable risk factor for dementia, exercise and physical activity (PA) may be important parameters to predict the progression from MCI to dementia, but might also represent disease trajectory modifying strategies for SCI and MCI. A better understanding of the relationship between activity, fitness, and cognitive function across the spectrum of MCI and SCI would provide an insight into the potential utility of PA and fitness as early markers, and treatment targets to prevent cognitive decline. 121 participants were stratified into three groups, late MCI (LMCI), early MCI (EMCI), and SCI based on the Montreal Cognitive Assessment (MoCA). Cognitive function assessments also included the Trail Making Test A+B, and a verbal fluency test. PA levels were evaluated with an interviewer-administered questionnaire (LAPAQ) and an activity monitor. An incremental exercise test was performed to estimate cardiorespiratory fitness and to determine exercise capacity relative to population normative data. ANCOVA revealed that LMCI subjects had the lowest PA levels (LAPAQ, p = 0.018; activity monitor, p = 0.041), and the lowest exercise capacity in relation to normative values (p = 0.041). Moreover, a modest correlation between MoCA and cardiorespiratory fitness (r = 0.25) was observed. These findings suggest that in individuals with subjective and mild cognitive impairment, PA and exercise capacity might present a marker for the risk of further cognitive decline. This finding warrants further investigation using longitudinal cohort studies.

  19. Automatic generation of co-embeddings from relational data with adaptive shaping.

    Science.gov (United States)

    Mu, Tingting; Goulermas, John Yannis

    2013-10-01

    In this paper, we study the co-embedding problem of how to map different types of patterns into one common low-dimensional space, given only the associations (relation values) between samples. We conduct a generic analysis to discover the commonalities between existing co-embedding algorithms and indirectly related approaches and investigate possible factors controlling the shapes and distributions of the co-embeddings. The primary contribution of this work is a novel method for computing co-embeddings, termed the automatic co-embedding with adaptive shaping (ACAS) algorithm, based on an efficient transformation of the co-embedding problem. Its advantages include flexible model adaptation to the given data, an economical set of model variables leading to a parametric co-embedding formulation, and a robust model fitting criterion for model optimization based on a quantization procedure. The secondary contribution of this work is the introduction of a set of generic schemes for the qualitative analysis and quantitative assessment of the output of co-embedding algorithms, using existing labeled benchmark datasets. Experiments with synthetic and real-world datasets show that the proposed algorithm is very competitive compared to existing ones.
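
    A much simpler baseline than ACAS conveys what a co-embedding is: rows and columns of a relation matrix can be mapped into one common low-dimensional space with a correspondence-analysis-style truncated SVD, as in the sketch below (synthetic data; not the proposed algorithm).

```python
# Baseline co-embedding via a correspondence-analysis-style truncated SVD:
# rows and columns of a relation matrix share one 2-d space.
import numpy as np

rng = np.random.default_rng(5)

# Relation values between 40 "row" samples and 25 "column" samples,
# generated from two hidden groups so that a block structure exists.
row_grp = rng.integers(0, 2, size=40)
col_grp = rng.integers(0, 2, size=25)
R = ((row_grp[:, None] == col_grp[None, :]).astype(float)
     + 0.3 * rng.normal(size=(40, 25)))

# Normalise the relation matrix and form standardised residuals.
P = R - R.min()
P = P / P.sum()
r, c = P.sum(axis=1), P.sum(axis=0)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

U, sing, Vt = np.linalg.svd(S, full_matrices=False)
k = 2
row_embed = (U[:, :k] * sing[:k]) / np.sqrt(r)[:, None]   # row coordinates
col_embed = (Vt[:k].T * sing[:k]) / np.sqrt(c)[:, None]   # column coordinates

print("row embedding shape:", row_embed.shape)
print("column embedding shape:", col_embed.shape)
```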

  20. Aerobic fitness related to cardiovascular risk factors in young children

    DEFF Research Database (Denmark)

    Dencker, Magnus; Thorsson, Ola; Karlsson, Magnus K

    2012-01-01

    Low aerobic fitness (maximum oxygen uptake (VO(2PEAK))) is predictive for poor health in adults. In a cross-sectional study, we assessed if VO(2PEAK) is related to a composite risk factor score for cardiovascular disease (CVD) in 243 children (136 boys and 107 girls) aged 8 to 11 years. VO(2PEAK...

  1. Experimental Rugged Fitness Landscape in Protein Sequence Space

    Science.gov (United States)

    Hayashi, Yuuki; Aita, Takuyo; Toyota, Hitoshi; Husimi, Yuzuru; Urabe, Itaru; Yomo, Tetsuya

    2006-01-01

    The fitness landscape in sequence space determines the process of biomolecular evolution. To plot the fitness landscape of protein function, we carried out in vitro molecular evolution beginning with a defective fd phage carrying a random polypeptide of 139 amino acids in place of the g3p minor coat protein D2 domain, which is essential for phage infection. After 20 cycles of random substitution at sites 12–130 of the initial random polypeptide and selection for infectivity, the selected phage showed a 1.7×10^4-fold increase in infectivity, defined as the number of infected cells per ml of phage suspension. Fitness was defined as the logarithm of infectivity, and we analyzed (1) the dependence of stationary fitness on library size, which increased gradually, and (2) the time course of changes in fitness in transitional phases, based on an original theory regarding the evolutionary dynamics in Kauffman's n-k fitness landscape model. In the landscape model, single mutations at single sites among n sites affect the contribution of k other sites to fitness. Based on the results of these analyses, k was estimated to be 18–24. According to the estimated parameters, the landscape was plotted as a smooth surface up to a relative fitness of 0.4 of the global peak, whereas the landscape had a highly rugged surface with many local peaks above this relative fitness value. Based on the landscapes of these two different surfaces, it appears possible for adaptive walks with only random substitutions to climb with relative ease up to the middle region of the fitness landscape from any primordial or random sequence, whereas an enormous range of sequence diversity is required to climb further up the rugged surface above the middle region. PMID:17183728
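
    For readers unfamiliar with Kauffman's model, the sketch below builds a small n-k landscape and runs an adaptive walk that accepts only fitness-improving single-site substitutions; the parameters (n = 20, k = 4) are illustrative and unrelated to the paper's estimates.

```python
# Minimal sketch of Kauffman's n-k landscape and an adaptive walk by random
# single-site substitutions; parameters are illustrative, not the paper's fit.
import numpy as np

rng = np.random.default_rng(6)

def make_nk_landscape(n, k):
    # Each site's contribution depends on itself and k randomly chosen others.
    neighbours = [np.concatenate(([i], rng.choice(np.delete(np.arange(n), i),
                                                  size=k, replace=False)))
                  for i in range(n)]
    tables = [{tuple(bits): rng.uniform()
               for bits in np.ndindex(*([2] * (k + 1)))}
              for _ in range(n)]
    def fitness(genome):
        return np.mean([tables[i][tuple(genome[neighbours[i]])]
                        for i in range(n)])
    return fitness

n, k = 20, 4
fit = make_nk_landscape(n, k)
genome = rng.integers(0, 2, size=n)
best = fit(genome)

for step in range(2000):                 # adaptive walk: accept only improvements
    trial = genome.copy()
    trial[rng.integers(n)] ^= 1          # flip one random site
    f = fit(trial)
    if f > best:
        genome, best = trial, f

print(f"final fitness after the walk: {best:.3f}")
```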

  2. Experimental rugged fitness landscape in protein sequence space.

    Science.gov (United States)

    Hayashi, Yuuki; Aita, Takuyo; Toyota, Hitoshi; Husimi, Yuzuru; Urabe, Itaru; Yomo, Tetsuya

    2006-12-20

    The fitness landscape in sequence space determines the process of biomolecular evolution. To plot the fitness landscape of protein function, we carried out in vitro molecular evolution beginning with a defective fd phage carrying a random polypeptide of 139 amino acids in place of the g3p minor coat protein D2 domain, which is essential for phage infection. After 20 cycles of random substitution at sites 12-130 of the initial random polypeptide and selection for infectivity, the selected phage showed a 1.7x10(4)-fold increase in infectivity, defined as the number of infected cells per ml of phage suspension. Fitness was defined as the logarithm of infectivity, and we analyzed (1) the dependence of stationary fitness on library size, which increased gradually, and (2) the time course of changes in fitness in transitional phases, based on an original theory regarding the evolutionary dynamics in Kauffman's n-k fitness landscape model. In the landscape model, single mutations at single sites among n sites affect the contribution of k other sites to fitness. Based on the results of these analyses, k was estimated to be 18-24. According to the estimated parameters, the landscape was plotted as a smooth surface up to a relative fitness of 0.4 of the global peak, whereas the landscape had a highly rugged surface with many local peaks above this relative fitness value. Based on the landscapes of these two different surfaces, it appears possible for adaptive walks with only random substitutions to climb with relative ease up to the middle region of the fitness landscape from any primordial or random sequence, whereas an enormous range of sequence diversity is required to climb further up the rugged surface above the middle region.

  3. Experimental rugged fitness landscape in protein sequence space.

    Directory of Open Access Journals (Sweden)

    Yuuki Hayashi

    Full Text Available The fitness landscape in sequence space determines the process of biomolecular evolution. To plot the fitness landscape of protein function, we carried out in vitro molecular evolution beginning with a defective fd phage carrying a random polypeptide of 139 amino acids in place of the g3p minor coat protein D2 domain, which is essential for phage infection. After 20 cycles of random substitution at sites 12-130 of the initial random polypeptide and selection for infectivity, the selected phage showed a 1.7x10(4)-fold increase in infectivity, defined as the number of infected cells per ml of phage suspension. Fitness was defined as the logarithm of infectivity, and we analyzed (1) the dependence of stationary fitness on library size, which increased gradually, and (2) the time course of changes in fitness in transitional phases, based on an original theory regarding the evolutionary dynamics in Kauffman's n-k fitness landscape model. In the landscape model, single mutations at single sites among n sites affect the contribution of k other sites to fitness. Based on the results of these analyses, k was estimated to be 18-24. According to the estimated parameters, the landscape was plotted as a smooth surface up to a relative fitness of 0.4 of the global peak, whereas the landscape had a highly rugged surface with many local peaks above this relative fitness value. Based on the landscapes of these two different surfaces, it appears possible for adaptive walks with only random substitutions to climb with relative ease up to the middle region of the fitness landscape from any primordial or random sequence, whereas an enormous range of sequence diversity is required to climb further up the rugged surface above the middle region.

  4. High-intensity interval training for improving health-related fitness in adolescents: a systematic review and meta-analysis.

    Science.gov (United States)

    Costigan, S A; Eather, N; Plotnikoff, R C; Taaffe, D R; Lubans, D R

    2015-10-01

    High-intensity interval training (HIIT) may be a feasible and efficacious strategy for improving health-related fitness in young people. The objective of this systematic review and meta-analysis was to evaluate the utility of HIIT to improve health-related fitness in adolescents and to identify potential moderators of training effects. Studies were considered eligible if they: (1) examined adolescents (13-18 years); (2) examined health-related fitness outcomes; (3) involved an intervention of ≥4 weeks in duration; (4) included a control or moderate intensity comparison group; and (5) prescribed high-intensity activity for the HIIT condition. Meta-analyses were conducted to determine the effect of HIIT on health-related fitness components using Comprehensive Meta-analysis software and potential moderators were explored (ie, study duration, risk of bias and type of comparison group). The effects of HIIT on cardiorespiratory fitness and body composition were large, and medium, respectively. Study duration was a moderator for the effect of HIIT on body fat percentage. Intervention effects for waist circumference and muscular fitness were not statistically significant. HIIT is a feasible and time-efficient approach for improving cardiorespiratory fitness and body composition in adolescent populations.

  5. The Relation between Career Decision-Making Strategies and Person-Job Fit: A Study of Job Changers

    Science.gov (United States)

    Singh, Romila; Greenhaus, Jeffrey H.

    2004-01-01

    This study examined relations between three career decision-making strategies (rational, intuitive, and dependent) and person--job fit among 361 professionals who had recently changed jobs. We found that the relation between each decision-making strategy and fit was contingent upon the concurrent use of other strategies. A rational strategy…

  6. Modeling forest fire occurrences using count-data mixed models in Qiannan autonomous prefecture of Guizhou province in China.

    Science.gov (United States)

    Xiao, Yundan; Zhang, Xiongqing; Ji, Ping

    2015-01-01

    Forest fires can cause catastrophic damage to natural resources. They can also bring serious economic and social impacts. Meteorological factors play a critical role in establishing conditions favorable for a forest fire. Effective prediction of forest fire occurrences could prevent or minimize losses. This paper uses count data models to analyze fire occurrence data, which are likely to be over-dispersed and frequently contain an excess of zero counts (no fire occurrence). Such data have commonly been analyzed using count data models such as a Poisson model, negative binomial model (NB), zero-inflated models, and hurdle models. The data used in this paper were collected from Qiannan autonomous prefecture of Guizhou province in China. Using the fire occurrence data from January to April (spring fire season) for the years 1996 through 2007, we introduced random effects to the count data models. In this study, the results indicated that the prediction achieved through the NB model provided a more compelling and credible inferential basis for fitting actual forest fire occurrence, and the mixed-effects model performed better than the corresponding fixed-effects model in forest fire forecasting. Besides, among all meteorological factors, we found that relative humidity and wind speed are highly correlated with fire occurrence.
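
    A fixed-effects version of the Poisson-versus-negative-binomial comparison can be reproduced with standard tools; the sketch below uses statsmodels on synthetic over-dispersed counts driven by humidity and wind speed (the paper's random effects and real data are not included).

```python
# Illustrative fixed-effects Poisson vs. negative binomial fit to synthetic
# over-dispersed fire counts (random effects are omitted in this sketch).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300

humidity = rng.uniform(20, 90, n)        # relative humidity (%)
wind = rng.uniform(0, 10, n)             # wind speed (m/s)
mu = np.exp(1.0 - 0.03 * humidity + 0.15 * wind)

# Gamma-mixed Poisson draws give over-dispersed (NB-like) counts.
y = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))

X = sm.add_constant(np.column_stack([humidity, wind]))

poisson_fit = sm.Poisson(y, X).fit(disp=False)
negbin_fit = sm.NegativeBinomial(y, X).fit(disp=False)

print("Poisson AIC:", round(poisson_fit.aic, 1))
print("NegBin  AIC:", round(negbin_fit.aic, 1))   # usually lower for these data
```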

  7. Seepage Calibration Model and Seepage Testing Data

    International Nuclear Information System (INIS)

    Dixon, P.

    2004-01-01

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M and O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty

  8. Structural similarities between brain and linguistic data provide evidence of semantic relations in the brain.

    Directory of Open Access Journals (Sweden)

    Colleen E Crangle

    Full Text Available This paper presents a new method of analysis by which structural similarities between brain data and linguistic data can be assessed at the semantic level. It shows how to measure the strength of these structural similarities and so determine the relatively better fit of the brain data with one semantic model over another. The first model is derived from WordNet, a lexical database of English compiled by language experts. The second is given by the corpus-based statistical technique of latent semantic analysis (LSA), which detects relations between words that are latent or hidden in text. The brain data are drawn from experiments in which statements about the geography of Europe were presented auditorily to participants who were asked to determine their truth or falsity while electroencephalographic (EEG) recordings were made. The theoretical framework for the analysis of the brain and semantic data derives from axiomatizations of theories such as the theory of differences in utility preference. Using brain-data samples from individual trials time-locked to the presentation of each word, ordinal relations of similarity differences are computed for the brain data and for the linguistic data. In each case those relations that are invariant with respect to the brain and linguistic data, and are correlated with sufficient statistical strength, amount to structural similarities between the brain and linguistic data. Results show that many more statistically significant structural similarities can be found between the brain data and the WordNet-derived data than the LSA-derived data. The work reported here is placed within the context of other recent studies of semantics and the brain. The main contribution of this paper is the new method it presents for the study of semantics and the brain and the focus it permits on networks of relations detected in brain data and represented by a semantic model.

  9. Is Good Fit Related to Good Behaviour? Goodness of Fit between Daycare Teacher-Child Relationships, Temperament, and Prosocial Behaviour

    Science.gov (United States)

    Hipson, Will E.; Séguin, Daniel G.

    2016-01-01

    The Goodness-of-Fit model [Thomas, A., & Chess, S. (1977). Temperament and development. New York: Brunner/Mazel] proposes that a child's temperament interacts with the environment to influence child outcomes. In the past, researchers have shown how the association between the quality of the teacher-child relationship in daycare and child…

  10. GENFIT - a generic track-fitting toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Rauch, Johannes [Technische Universitaet Muenchen (Germany); Schlueter, Tobias [Ludwig-Maximilians-Universitaet Muenchen (Germany)

    2014-07-01

    GENFIT is an experiment-independent track-fitting toolkit, which combines fitting algorithms, track representations, and measurement geometries into a modular framework. We report on a significantly improved version of GENFIT, based on experience gained in the Belle II, PANDA, and FOPI experiments. Improvements concern the implementation of additional track-fitting algorithms, enhanced implementations of Kalman fitters, enhanced visualization capabilities, and additional implementations of measurement types suited for various kinds of tracking detectors. The data model has been revised, allowing for efficient track merging, smoothing, residual calculation and alignment.
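
    GENFIT itself is a C++ toolkit; purely to illustrate the idea behind a Kalman track fit, the sketch below filters simulated hits from a one-dimensional straight-line track with a two-component state (position, slope). The measurement and process noise values are assumptions.

```python
# Tiny Kalman-fit illustration (not GENFIT's API): a 1-d straight-line track
# with state (position, slope), measured as one hit per detector plane.
import numpy as np

rng = np.random.default_rng(8)

n_planes, dz, sigma_hit = 12, 1.0, 0.05
true_x0, true_slope = 0.2, 0.1
z = np.arange(1, n_planes + 1) * dz            # plane positions
hits = true_x0 + true_slope * z + rng.normal(scale=sigma_hit, size=n_planes)

F = np.array([[1.0, dz], [0.0, 1.0]])          # propagate state by one plane
H = np.array([[1.0, 0.0]])                     # only the position is measured
Q = 1e-6 * np.eye(2)                           # small process noise (scattering)
R = np.array([[sigma_hit ** 2]])

x = np.array([0.0, 0.0])                       # state guess at z = 0
P = 10.0 * np.eye(2)                           # large initial covariance

for m in hits:
    x = F @ x                                  # predict to the next plane
    P = F @ P @ F.T + Q
    resid = m - (H @ x)[0]                     # update with the measured hit
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K.ravel() * resid
    P = (np.eye(2) - K @ H) @ P

print(f"filtered position at last plane: {x[0]:.3f}, slope: {x[1]:.3f}")
```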

  11. Information in relational data bases

    Energy Technology Data Exchange (ETDEWEB)

    Abhyankar, R B

    1982-01-01

    A new knowledge representation scheme is proposed for representing incomplete information in relational data bases. The knowledge representation scheme introduces a novel convention for negative information based on modal logic and a novel data structure obtained by introducing tuple flags in the relational model of data. Standard and minimal forms are defined for relations conforming to the new data structure. The conventional relational operators, select, project and join, are redefined so they can be used to manipulate relations containing incomplete information. Conditions are presented for the lossless decomposition of relations containing incomplete information. 20 references.

  12. Development and Analysis of Volume Multi-Sphere Method Model Generation using Electric Field Fitting

    Science.gov (United States)

    Ingram, G. J.

    Electrostatic modeling of spacecraft has wide-reaching applications such as detumbling space debris in the Geosynchronous Earth Orbit regime before docking, servicing and tugging space debris to graveyard orbits, and Lorentz augmented orbits. The viability of electrostatic actuation control applications relies on faster-than-realtime characterization of the electrostatic interaction. The Volume Multi-Sphere Method (VMSM) seeks the optimal placement and radii of a small number of equipotential spheres to accurately model the electrostatic force and torque on a conducting space object. Current VMSM models tuned using force and torque comparisons with commercially available finite element software are subject to the modeled probe size and numerical errors of the software. This work first investigates fitting of VMSM models to Surface-MSM (SMSM) generated electrical field data, removing modeling dependence on probe geometry while significantly increasing performance and speed. A proposed electric field matching cost function is compared to a force and torque cost function, the inclusion of a self-capacitance constraint is explored and 4 degree-of-freedom VMSM models generated using electric field matching are investigated. The resulting E-field based VMSM development framework is illustrated on a box-shaped hub with a single solar panel, and convergence properties of select models are qualitatively analyzed. Despite the complex non-symmetric spacecraft geometry, elegantly simple 2-sphere VMSM solutions provide force and torque fits within a few percent.
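
    One piece of the VMSM problem is easy to sketch: once sphere positions are fixed, the charges that best reproduce sampled potential values from a detailed "truth" model follow from a linear least-squares solve. The toy example below uses made-up geometry and omits the search over sphere positions and radii and the self-capacitance constraint discussed in the abstract.

```python
# Toy sketch related to the VMSM idea: with sphere positions fixed, the charges
# that best reproduce sampled "truth" potential values come from a linear
# least-squares solve. All geometry here is invented for illustration.
import numpy as np

rng = np.random.default_rng(9)
k_e = 8.9875517923e9                      # Coulomb constant [N m^2 / C^2]

# "Truth" model: many small charges spread over a box-like body.
truth_pos = rng.uniform(-0.5, 0.5, size=(200, 3))
truth_q = rng.uniform(0, 1e-9, size=200)

# Candidate model: three spheres placed along the body axis (positions fixed).
sphere_pos = np.array([[-0.4, 0.0, 0.0], [0.0, 0.0, 0.0], [0.4, 0.0, 0.0]])

# Sample the truth potential on a shell of field points around the body.
pts = rng.normal(size=(500, 3))
pts = 3.0 * pts / np.linalg.norm(pts, axis=1, keepdims=True)
phi_truth = k_e * np.sum(truth_q / np.linalg.norm(
    pts[:, None, :] - truth_pos[None, :, :], axis=2), axis=1)

# The potential is linear in the sphere charges: phi = A q.
A = k_e / np.linalg.norm(pts[:, None, :] - sphere_pos[None, :, :], axis=2)
q_fit, *_ = np.linalg.lstsq(A, phi_truth, rcond=None)

rel_err = np.linalg.norm(A @ q_fit - phi_truth) / np.linalg.norm(phi_truth)
print("fitted sphere charges [C]:", q_fit)
print("relative potential-fit error:", f"{rel_err:.2e}")
```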

  13. Macular Carotenoids, Aerobic Fitness, and Central Adiposity Are Associated Differentially with Hippocampal-Dependent Relational Memory in Preadolescent Children.

    Science.gov (United States)

    Hassevoort, Kelsey M; Khazoum, Sarah E; Walker, John A; Barnett, Sasha M; Raine, Lauren B; Hammond, Billy R; Renzi-Hammond, Lisa M; Kramer, Arthur F; Khan, Naiman A; Hillman, Charles H; Cohen, Neal J

    2017-04-01

    To examine the associations of macular pigment carotenoids (lutein, meso-zeaxanthin, and zeaxanthin), aerobic fitness, and central adiposity with hippocampal-dependent relational memory in prepubescent children. Children between 7 and 10 years of age (n = 40) completed a task designed to assess relational memory performance and participated in aerobic fitness, adiposity, and macular pigment optical density (MPOD) assessment. Aerobic fitness was assessed via a modified Balke treadmill protocol designed to measure maximal oxygen volume. Central adiposity was assessed via dual-energy x-ray absorptiometry. MPOD was measured psychophysically by the use of customized heterochromatic flicker photometry. Statistical analyses included correlations and hierarchical linear regression. Aerobic fitness and MPOD were associated negatively with relational memory errors, and MPOD predicted relational memory performance even after we accounted for aerobic fitness (β = -0.388, P = .007). Even after we adjusted for aerobic fitness and central adiposity, factors known to relate to hippocampal-dependent memory, MPOD positively and significantly predicted hippocampal-dependent memory performance. ClinicalTrials.gov: NCT01619826.

  14. Health-related fitness of urban children in Suriname : an ethnic variety

    NARCIS (Netherlands)

    Walhain, Fenna; Declerck, Marlies; de Vries, J; Veeger, H.E.J.; Ledebt, A.

    Objective: The aim of our study was to investigate the health-related fitness (HRF) of 11-year-old children living in an urban area in Suriname, taking into account the difference between the five main ethnicities from Suriname. Design and Method: Cross-sectionally, performance on the HRF

  15. Bayesian Geostatistical Modeling of Malaria Indicator Survey Data in Angola

    Science.gov (United States)

    Gosoniu, Laura; Veta, Andre Mia; Vounatsou, Penelope

    2010-01-01

    The 2006–2007 Angola Malaria Indicator Survey (AMIS) is the first nationally representative household survey in the country assessing coverage of the key malaria control interventions and measuring malaria-related burden among children under 5 years of age. In this paper, the Angolan MIS data were analyzed to produce the first smooth map of parasitaemia prevalence based on contemporary nationwide empirical data in the country. Bayesian geostatistical models were fitted to assess the effect of interventions after adjusting for environmental, climatic and socio-economic factors. Non-linear relationships between parasitaemia risk and environmental predictors were modeled by categorizing the covariates and by employing two non-parametric approaches, the B-splines and the P-splines. The results of the model validation showed that the categorical model was able to better capture the relationship between parasitaemia prevalence and the environmental factors. Model fit and prediction were handled within a Bayesian framework using Markov chain Monte Carlo (MCMC) simulations. Combining estimates of parasitaemia prevalence with the number of children under 5 years of age, we obtained estimates of the number of infected children in the country. The population-adjusted prevalence ranges from in Namibe province to in Malanje province. The odds of parasitaemia in children living in a household with at least ITNs per person was 41% lower (CI: 14%, 60%) than in those with fewer ITNs. The estimates of the number of parasitaemic children produced in this paper are important for planning and implementing malaria control interventions and for monitoring the impact of prevention and control activities. PMID:20351775

  16. Model-independent partial wave analysis using a massively-parallel fitting framework

    Science.gov (United States)

    Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.

    2017-10-01

    The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D+ → h+h+h-. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m^2(h+h-) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.
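
    The spline-interpolated S-wave idea (though not GooFit's GPU implementation) can be illustrated with SciPy: anchor a complex amplitude at a handful of m^2 control points and interpolate its real and imaginary parts with cubic splines, as in the sketch below with made-up values.

```python
# Not GooFit itself: a small SciPy sketch of the model-independent S-wave idea,
# interpolating a complex amplitude between m^2 control points. Values invented.
import numpy as np
from scipy.interpolate import CubicSpline

# Control points in m^2(h+h-) with anchored magnitudes and phases (made up).
m2_knots = np.linspace(0.1, 3.0, 8)
mag_knots = np.array([0.2, 0.8, 1.5, 1.2, 0.9, 0.7, 0.5, 0.3])
phase_knots = np.linspace(0.0, 2.5, 8)          # radians
amp_knots = mag_knots * np.exp(1j * phase_knots)

spline_re = CubicSpline(m2_knots, amp_knots.real)
spline_im = CubicSpline(m2_knots, amp_knots.imag)

def s_wave_amplitude(m2):
    """Interpolated complex S-wave amplitude at invariant mass squared m2."""
    return spline_re(m2) + 1j * spline_im(m2)

for v in np.linspace(0.1, 3.0, 5):
    a = s_wave_amplitude(v)
    print(f"m^2 = {v:.2f}  |A| = {abs(a):.3f}  phase = {np.angle(a):.3f}")
```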

  17. Adherence to Voice Therapy Recommendations Is Associated With Preserved Employment Fitness Among Teachers With Work-Related Dysphonia.

    Science.gov (United States)

    Rinsky-Halivni, Lilah; Klebanov, Miriam; Lerman, Yehuda; Paltiel, Ora

    2017-05-01

    Referral to voice therapy and recommendations for voice rest and microphone use are common interventions in occupational medicine aimed at preserving the working capability of teachers with occupation-related voice problems. Research on the impact of such interventions in terms of employment is lacking. This study examined changes in fitness (ie, ability) to work of dysphonic teachers referred to an occupational clinic and evaluated employment outcomes following voice therapy, voice rest, and microphone use. A historical prospective study was carried out. Of 365 classroom teachers who were first referred to a regional occupational medicine clinic due to dysphonia between January 2007 and December 2012, 156 were sampled and 153 were followed-up for an average of 5 years (range 2-8). Data were collected from medical records and from interviews conducted in 2014 aimed at assessing employment status. Logistic regression models were used to assess associations between interventions and employment outcomes. Survival analyses were performed to evaluate the association between participating in voice therapy and length of retained employment fitness. Thirty-four (22.2%) teachers suffered declines in working capabilities due to dysphonia. Voice therapy was demonstrated as being a protective factor against such declines (odds ratio = 0.05 [0.01-0.27]). Adherence to recommendation of voice therapy was teachers occurred within 20 months after referral. Unlike voice therapy, voice rest and microphone use were not associated with retention of working capabilities. Voice therapy, especially when instituted early, is a strong predictor for retaining fitness for employment among dysphonic teachers.

  18. Computational Software to Fit Seismic Data Using Epidemic-Type Aftershock Sequence Models and Modeling Performance Comparisons

    Science.gov (United States)

    Chu, A.

    2016-12-01

    Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work implements three of the homogeneous ETAS models described in Ogata (1998). With a model's log-likelihood function, my software finds the Maximum-Likelihood Estimates (MLEs) of the model's parameters to estimate the homogeneous background rate and the temporal and spatial parameters that govern triggering effects. EM-algorithm is employed for its advantages of stability and robustness (Veen and Schoenberg, 2008). My work also presents comparisons among the three models in robustness, convergence speed, and implementations from theory to computing practice. Up-to-date regional seismic data of seismic active areas such as Southern California and Japan are used to demonstrate the comparisons. Data analysis has been done using computer languages Java and R. Java has the advantages of being strong-typed and easiness of controlling memory resources, while R has the advantages of having numerous available functions in statistical computing. Comparisons are also made between the two programming languages in convergence and stability, computational speed, and easiness of implementation. Issues that may affect convergence such as spatial shapes are discussed.
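
    A temporal-only ETAS conditional intensity and its log-likelihood (the quantity such software maximises) can be written compactly; the sketch below is illustrative Python rather than the author's Java/R implementations, uses a toy catalogue, and omits the spatial component.

```python
# Minimal temporal-only ETAS sketch: conditional intensity and log-likelihood,
# maximised here with a generic optimiser on a toy catalogue.
import numpy as np
from scipy.optimize import minimize

def etas_neg_loglik(log_params, t, m, T, m0):
    mu, K, c, alpha, p = np.exp(log_params)
    # Conditional intensity at each event time, summed over earlier events.
    lam = np.full_like(t, mu)
    for j in range(1, len(t)):
        dt = t[j] - t[:j]
        lam[j] += np.sum(K * np.exp(alpha * (m[:j] - m0)) * (dt + c) ** (-p))
    # Integrated intensity over [0, T]; closed form of the Omori kernel (p != 1).
    integral = mu * T + np.sum(
        K * np.exp(alpha * (m - m0))
        * (c ** (1 - p) - (T - t + c) ** (1 - p)) / (p - 1))
    return -(np.sum(np.log(lam)) - integral)

# Toy catalogue: event times (days) and magnitudes above completeness m0.
rng = np.random.default_rng(10)
T, m0 = 365.0, 3.0
t = np.sort(rng.uniform(0, T, 400))
m = m0 + rng.exponential(0.4, size=400)

x0 = np.log([1.0, 0.1, 0.01, 1.0, 1.2])        # mu, K, c, alpha, p
fit = minimize(etas_neg_loglik, x0, args=(t, m, T, m0), method="Nelder-Mead",
               options={"maxiter": 2000})
print("fitted (mu, K, c, alpha, p):", np.round(np.exp(fit.x), 4))
```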

  19. The inert doublet model in the light of Fermi-LAT gamma-ray data: a global fit analysis

    Science.gov (United States)

    Eiteneuer, Benedikt; Goudelis, Andreas; Heisig, Jan

    2017-09-01

    We perform a global fit within the inert doublet model taking into account experimental observables from colliders, direct and indirect dark matter searches and theoretical constraints. In particular, we consider recent results from searches for dark matter annihilation-induced gamma-rays in dwarf spheroidal galaxies and relax the assumption that the inert doublet model should account for the entire dark matter in the Universe. We, moreover, study in how far the model is compatible with a possible dark matter explanation of the so-called Galactic center excess. We find two distinct parameter space regions that are consistent with existing constraints and can simultaneously explain the excess: One with dark matter masses near the Higgs resonance and one around 72 GeV where dark matter annihilates predominantly into pairs of virtual electroweak gauge bosons via the four-vertex arising from the inert doublet's kinetic term. We briefly discuss future prospects to probe these scenarios.

  20. The inert doublet model in the light of Fermi-LAT gamma-ray data: a global fit analysis

    Energy Technology Data Exchange (ETDEWEB)

    Eiteneuer, Benedikt; Heisig, Jan [RWTH Aachen University, Institute for Theoretical Particle Physics and Cosmology, Aachen (Germany); Goudelis, Andreas [UMR 7589 CNRS and UPMC, Laboratoire de Physique Theorique et Hautes Energies (LPTHE), Paris (France)

    2017-09-15

    We perform a global fit within the inert doublet model taking into account experimental observables from colliders, direct and indirect dark matter searches and theoretical constraints. In particular, we consider recent results from searches for dark matter annihilation-induced gamma-rays in dwarf spheroidal galaxies and relax the assumption that the inert doublet model should account for the entire dark matter in the Universe. We, moreover, study in how far the model is compatible with a possible dark matter explanation of the so-called Galactic center excess. We find two distinct parameter space regions that are consistent with existing constraints and can simultaneously explain the excess: One with dark matter masses near the Higgs resonance and one around 72 GeV where dark matter annihilates predominantly into pairs of virtual electroweak gauge bosons via the four-vertex arising from the inert doublet's kinetic term. We briefly discuss future prospects to probe these scenarios. (orig.)