Stanley, Leanne M.; Edwards, Michael C.
2016-01-01
The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…
Defining fitness in evolutionary models
Indian Academy of Sciences (India)
jgen/087/04/0339-0348. Keywords: fitness; invasion exponent; adaptive dynamics; game theory; Lyapunov exponent; invasibility; Malthusian parameter. Abstract: The analysis of evolutionary models requires an appropriate definition for fitness.
International Nuclear Information System (INIS)
Martin Llorente, F.
1990-01-01
The models of atmospheric pollutant dispersion are based on mathematical algorithms that describe the transport, diffusion, elimination, and chemical reactions of atmospheric contaminants. These models operate on contaminant emission data and produce an estimate of air quality in the area. They can be applied to several aspects of atmospheric contamination
Defining fitness in evolutionary models
Indian Academy of Sciences (India)
2008-12-23
Dec 23, 2008 … The analysis of evolutionary models requires an appropriate definition for fitness. … of dimorphism for dormancy in plants (Cohen 1966). … analyses have assumed nonoverlapping generations (i.e. no age structure). The solution to defining fitness when the environment is spatially variable and there is a …
Students' Models of Curve Fitting: A Models and Modeling Perspective
Gupta, Shweta
2010-01-01
The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…
Measured, modeled, and causal conceptions of fitness
Abrams, Marshall
2012-01-01
This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804
Fitting State Space Models with EViews
Directory of Open Access Journals (Sweden)
Filip A. M. Van den Bossche
2011-05-01
Full Text Available This paper demonstrates how state space models can be fitted in EViews. We first briefly introduce EViews as an econometric software package. Next we fit a local level model to the Nile data. We then show how a multivariate “latent risk” model can be developed, making use of the EViews programming environment. We conclude by summarizing the possibilities and limitations of the software package when it comes to state space modeling.
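The local level model fitted to the Nile data in the record above reduces to a short Kalman filter recursion. The following Python sketch is illustrative only (it is not the EViews procedure the paper describes); the variance values and the handful of observations are assumptions, in the ballpark of published Nile analyses.

```python
# Minimal Kalman filter for the local level model:
#   y_t = mu_t + eps_t,      eps_t ~ N(0, var_eps)
#   mu_{t+1} = mu_t + eta_t, eta_t ~ N(0, var_eta)
# Illustrative sketch; variances and data are assumed, not estimated here.

def local_level_filter(y, var_eps, var_eta, a0=0.0, p0=1e9):
    """Return filtered state estimates for the local level model."""
    a, p = a0, p0                  # diffuse-ish prior on the level
    filtered = []
    for obs in y:
        v = obs - a                # one-step-ahead prediction error
        f = p + var_eps            # prediction error variance
        k = p / f                  # Kalman gain
        a = a + k * v              # measurement update of the level
        p = p * (1.0 - k) + var_eta  # predicted variance for next step
        filtered.append(a)
    return filtered

# A few Nile-like annual flow values (illustrative subset)
series = [1120.0, 1160.0, 963.0, 1210.0, 1160.0, 1160.0, 813.0, 1230.0]
states = local_level_filter(series, var_eps=15099.0, var_eta=1469.1)
```

Because the prior variance is large, the first filtered state essentially equals the first observation, after which the filter smooths across observations.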
LFLM (Local Fitting of Linear Models / Locally weighted Fitting of Linear Models)
DEFF Research Database (Denmark)
1997-01-01
LFLM (Local Fitting of Linear Models / Locally weighted Fitting of Linear Models) is an S-PLUS / R library for estimation in conditional parametric models. This class of models can briefly be described as linear models in which the parameters are replaced by smooth functions....
Contrast Gain Control Model Fits Masking Data
Watson, Andrew B.; Solomon, Joshua A.; Null, Cynthia H. (Technical Monitor)
1994-01-01
We studied the fit of a contrast gain control model to data of Foley (JOSA 1994), consisting of thresholds for a Gabor patch masked by gratings of various orientations, or by compounds of two orientations. Our general model includes models of Foley and Teo & Heeger (IEEE 1994). Our specific model used a bank of Gabor filters with octave bandwidths at 8 orientations. Excitatory and inhibitory nonlinearities were power functions with exponents of 2.4 and 2. Inhibitory pooling was broad in orientation, but narrow in spatial frequency and space. Minkowski pooling used an exponent of 4. All of the data for observer KMF were well fit by the model. We have developed a contrast gain control model that fits masking data. Unlike Foley's, our model accepts images as inputs. Unlike Teo & Heeger's, our model did not require multiple channels for different dynamic ranges.
Are Physical Education Majors Models for Fitness?
Kamla, James; Snyder, Ben; Tanner, Lori; Wash, Pamela
2012-01-01
The National Association of Sport and Physical Education (NASPE) (2002) has taken a firm stance on the importance of adequate fitness levels of physical education teachers stating that they have the responsibility to model an active lifestyle and to promote fitness behaviors. Since the NASPE declaration, national initiatives like Let's Move…
Model fit after pairwise maximum likelihood
Directory of Open Access Journals (Sweden)
M. T. Barendse
2016-04-01
Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations.
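The core trick in the abstract above, replacing the full multivariate likelihood by a sum of bivariate log-likelihoods over two-way contingency tables, can be shown with a toy computation. The sketch below uses binary items and a deliberately simplified working model (independent Bernoulli marginals, a hypothetical stand-in for the underlying-normal model); the response patterns and probabilities are invented for illustration.

```python
from itertools import combinations
from math import log

def pairwise_loglik(data, probs):
    """Sum of bivariate log-likelihoods over all item pairs.

    data:  list of binary response patterns (tuples of 0/1)
    probs: assumed marginal P(item == 1); independence between items is a
           simplifying (hypothetical) working model here.
    """
    total = 0.0
    n_items = len(probs)
    for i, j in combinations(range(n_items), 2):
        # Build the 2x2 contingency table for items i and j
        table = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
        for pattern in data:
            table[(pattern[i], pattern[j])] += 1
        # Add each cell's contribution under the working model
        for (a, b), count in table.items():
            p_a = probs[i] if a else 1 - probs[i]
            p_b = probs[j] if b else 1 - probs[j]
            total += count * log(p_a * p_b)
    return total

responses = [(1, 0, 1), (1, 1, 1), (0, 0, 1), (1, 0, 0)]
ll = pairwise_loglik(responses, probs=[0.75, 0.25, 0.75])
```

Probabilities matching the empirical marginals yield a higher pairwise log-likelihood than mismatched ones, which is the property PML estimation exploits.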
Hayduk, Leslie
2014-01-01
Researchers using factor analysis tend to dismiss the significant ill fit of factor models by presuming that if their factor model is close-to-fitting, it is probably close to being properly causally specified. Close fit may indeed result from a model being close to properly causally specified, but close-fitting factor models can also be seriously…
Exact Fit of Simple Finite Mixture Models
Directory of Open Access Journals (Sweden)
Dirk Tasche
2014-11-01
Full Text Available How to forecast next year’s portfolio-wide credit default rate based on last year’s default observations and the current score distribution? A classical approach to this problem consists of fitting a mixture of the conditional score distributions observed last year to the current score distribution. This is a special (simple case of a finite mixture model where the mixture components are fixed and only the weights of the components are estimated. The optimum weights provide a forecast of next year’s portfolio-wide default rate. We point out that the maximum-likelihood (ML approach to fitting the mixture distribution not only gives an optimum but even an exact fit if we allow the mixture components to vary but keep their density ratio fixed. From this observation we can conclude that the standard default rate forecast based on last year’s conditional default rates will always be located between last year’s portfolio-wide default rate and the ML forecast for next year. As an application example, cost quantification is then discussed. We also discuss how the mixture model based estimation methods can be used to forecast total loss. This involves the reinterpretation of an individual classification problem as a collective quantification problem.
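The "simple" finite mixture described above, with fixed components and only the weight estimated, admits a compact EM iteration. The following Python sketch uses invented Gaussian score densities and data; it illustrates the weight-estimation step, not the paper's exact-fit result.

```python
from math import exp, pi, sqrt

def normpdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def fit_mixture_weight(xs, f1, f0, w=0.5, iters=200):
    """EM for the weight w in  w*f1(x) + (1-w)*f0(x),
    with both component densities held fixed (the 'simple' mixture)."""
    for _ in range(iters):
        # E-step: posterior probability that each point came from f1
        post = [w * f1(x) / (w * f1(x) + (1 - w) * f0(x)) for x in xs]
        # M-step: the new weight is the mean posterior probability
        w = sum(post) / len(post)
    return w

# Invented conditional score densities: defaulters (f1) vs survivors (f0)
f1 = lambda x: normpdf(x, -1.0, 1.0)
f0 = lambda x: normpdf(x, 1.0, 1.0)
scores = [-1.2, -0.8, -1.5, 0.9, 1.1, 1.3, 0.7, 1.0]
w_hat = fit_mixture_weight(scores, f1, f0)
```

The converged weight is the ML forecast of the portfolio-wide default rate in this toy setting.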
A Stepwise Fitting Procedure for automated fitting of Ecopath with Ecosim models
Scott, Erin; Serpetti, Natalia; Steenbeek, Jeroen; Heymans, Johanna Jacomina
The Stepwise Fitting Procedure automates testing of alternative hypotheses used for fitting Ecopath with Ecosim (EwE) models to observation reference data (Mackinson et al. 2009). The calibration of EwE model predictions to observed data is important to evaluate any model that will be used for ecosystem based management. Thus far, the model fitting procedure in EwE has been carried out manually: a repetitive task involving setting >1000 specific individual searches to find the statistically 'best fit' model. The novel fitting procedure automates the manual procedure, thereby producing accurate results and letting the modeller concentrate on investigating the 'best fit' model for ecological accuracy.
Fitting statistical models in bivariate allometry.
Packard, Gary C; Birchard, Geoffrey F; Boardman, Thomas J
2011-08-01
Several attempts have been made in recent years to formulate a general explanation for what appear to be recurring patterns of allometric variation in morphology, physiology, and ecology of both plants and animals (e.g. the Metabolic Theory of Ecology, the Allometric Cascade, the Metabolic-Level Boundaries hypothesis). However, published estimates for parameters in allometric equations often are inaccurate, owing to undetected bias introduced by the traditional method for fitting lines to empirical data. The traditional method entails fitting a straight line to logarithmic transformations of the original data and then back-transforming the resulting equation to the arithmetic scale. Because of fundamental changes in distributions attending transformation of predictor and response variables, the traditional practice may cause influential outliers to go undetected, and it may result in an underparameterized model being fitted to the data. Also, substantial bias may be introduced by the insidious rotational distortion that accompanies regression analyses performed on logarithms. Consequently, the aforementioned patterns of allometric variation may be illusions, and the theoretical explanations may be wide of the mark. Problems attending the traditional procedure can be largely avoided in future research simply by performing preliminary analyses on arithmetic values and by validating fitted equations in the arithmetic domain. The goal of most allometric research is to characterize relationships between biological variables and body size, and this is done most effectively with data expressed in the units of measurement. Back-transforming from a straight line fitted to logarithms is not a generally reliable way to estimate an allometric equation in the original scale. © 2010 The Authors. Biological Reviews © 2010 Cambridge Philosophical Society.
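The traditional procedure criticized above, fitting a line to log-transformed data and back-transforming, looks like this in a minimal Python sketch with invented, noise-free data. With noise-free power-law data the two domains agree exactly; with multiplicative error, the back-transformed equation estimates the median rather than the mean response, one source of the bias the authors describe.

```python
from math import log, exp

def ols(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return my - slope * mx, slope

# Exact power-law data y = 2 * x^0.75 (no error), invented for illustration
xs = [1.0, 2.0, 4.0, 8.0, 16.0]
ys = [2.0 * x ** 0.75 for x in xs]

# Traditional allometric fit: straight line on logs, then back-transform
intercept, slope = ols([log(x) for x in xs], [log(y) for y in ys])
a_hat, b_hat = exp(intercept), slope   # y = a_hat * x^b_hat
```

Validating `a_hat` and `b_hat` against the arithmetic-scale data, as the authors recommend, would expose any back-transformation bias when real, noisy data are used.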
Local fit evaluation of structural equation models using graphical criteria.
Thoemmes, Felix; Rosseel, Yves; Textor, Johannes
2018-03-01
Evaluation of model fit is critically important for every structural equation model (SEM), and sophisticated methods have been developed for this task. Among them are the χ² goodness-of-fit test, decomposition of the χ², derived measures like the popular root mean square error of approximation (RMSEA) or comparative fit index (CFI), or inspection of residuals or modification indices. Many of these methods provide a global approach to model fit evaluation: A single index is computed that quantifies the fit of the entire SEM to the data. In contrast, graphical criteria like d-separation or trek-separation allow derivation of implications that can be used for local fit evaluation, an approach that is hardly ever applied. We provide an overview of local fit evaluation from the viewpoint of SEM practitioners. In the presence of model misfit, local fit evaluation can potentially help in pinpointing where the problem with the model lies. For models that do fit the data, local tests can identify the parts of the model that are corroborated by the data. Local tests can also be conducted before a model is fitted at all, and they can be used even for models that are globally underidentified. We discuss appropriate statistical local tests, and provide applied examples. We also present novel software in R that automates this type of local fit evaluation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
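One concrete local test of the kind described is checking a vanishing partial correlation implied by d-separation. The sketch below is an assumed toy example, not one from the paper: in the chain model X → M → Y, d-separation implies X ⟂ Y | M, so the sample partial correlation of X and Y given M should be near zero when the model is correctly specified.

```python
import random
from math import sqrt

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / sqrt(va * vb)

def partial_corr(a, b, c):
    """Correlation of a and b after partialling out c."""
    rab, rac, rbc = corr(a, b), corr(a, c), corr(b, c)
    return (rab - rac * rbc) / sqrt((1 - rac ** 2) * (1 - rbc ** 2))

# Simulate the chain X -> M -> Y (coefficients are invented)
random.seed(1)
X = [random.gauss(0, 1) for _ in range(5000)]
M = [0.8 * x + random.gauss(0, 1) for x in X]
Y = [0.8 * m + random.gauss(0, 1) for m in M]

local_fit = partial_corr(X, Y, M)  # should be near zero under the chain model
```

X and Y are marginally correlated, so only the conditional independence, the local implication of the model, is being tested.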
topicmodels: An R Package for Fitting Topic Models
Directory of Open Access Journals (Sweden)
Bettina Grün
2011-05-01
Topic models allow the probabilistic modeling of term frequency occurrences in documents. The fitted model can be used to estimate the similarity between documents as well as between a set of specified keywords using an additional layer of latent variables which are referred to as topics. The R package topicmodels provides basic infrastructure for fitting topic models based on data structures from the text mining package tm. The package includes interfaces to two algorithms for fitting topic models: the variational expectation-maximization algorithm provided by David M. Blei and co-authors and an algorithm using Gibbs sampling by Xuan-Hieu Phan and co-authors.
Model fit after pairwise maximum likelihood
Barendse, M. T.; Ligtvoet, R.; Timmerman, M. E.; Oort, F. J.
2016-01-01
Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response
Model fit after pairwise maximum likelihood
Barendse, M.T.; Ligtvoet, R.; Timmerman, M.E.; Oort, F.J.
Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response
Curve fitting methods for solar radiation data modeling
Energy Technology Data Exchange (ETDEWEB)
Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]
2014-10-24
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
Curve fitting methods for solar radiation data modeling
Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder
2014-10-01
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
Curve fitting methods for solar radiation data modeling
International Nuclear Information System (INIS)
Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder
2014-01-01
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
Automated Model Fit Method for Diesel Engine Control Development
Seykens, X.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.
2014-01-01
This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and a structured approach to fit the required combustion model parameters. Only a data set is required that is
A Stepwise Fitting Procedure for automated fitting of Ecopath with Ecosim models
Directory of Open Access Journals (Sweden)
Erin Scott
2016-01-01
The Stepwise Fitting Procedure automates testing of alternative hypotheses used for fitting Ecopath with Ecosim (EwE) models to observation reference data (Mackinson et al. 2009). The calibration of EwE model predictions to observed data is important to evaluate any model that will be used for ecosystem based management. Thus far, the model fitting procedure in EwE has been carried out manually: a repetitive task involving setting >1000 specific individual searches to find the statistically ‘best fit’ model. The novel fitting procedure automates the manual procedure, thereby producing accurate results and letting the modeller concentrate on investigating the ‘best fit’ model for ecological accuracy.
Fitting ARMA Time Series by Structural Equation Models.
van Buuren, Stef
1997-01-01
This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)
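As a reminder of what is being fitted, the simplest member of the ARMA(p, q) family, an AR(1), can be estimated by conditional least squares in a few lines. This Python sketch is a hypothetical illustration, not the structural equation formulation the paper develops.

```python
import random

def fit_ar1(y):
    """Conditional least-squares estimate of phi in y_t = phi*y_{t-1} + e_t,
    the simplest special case of the ARMA(p, q) family."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

# Simulate an AR(1) series with a known coefficient (values invented)
random.seed(7)
phi_true, y = 0.6, [0.0]
for _ in range(4000):
    y.append(phi_true * y[-1] + random.gauss(0, 1))

phi_hat = fit_ar1(y)  # should recover roughly 0.6
```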
A person fit test for IRT models for polytomous items
Glas, Cornelis A.W.; Dagohoy, A.V.
2007-01-01
A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability
An R package for fitting age, period and cohort models
Directory of Open Access Journals (Sweden)
Adriano Decarli
2014-11-01
In this paper we present the R implementation of a GLIM macro which fits the age-period-cohort model following Osmond and Gardner. In addition to the estimates of the corresponding model, owing to the programming capability of R as an object-oriented language, methods for printing, plotting and summarizing the results are provided. Furthermore, the researcher has full access to the output of the main function (apc), which returns all the models fitted within the function. It is thus possible to critically evaluate the goodness of fit of the resulting model.
Fan, Xitao; Wang, Lin; Thompson, Bruce
1999-01-01
A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)
Fitting polytomous Rasch models in SAS
DEFF Research Database (Denmark)
Christensen, Karl Bang
2006-01-01
The item parameters of a polytomous Rasch model can be estimated using marginal and conditional approaches. This paper describes how this can be done in SAS (V8.2) for three item parameter estimation procedures: marginal maximum likelihood estimation, conditional maximum likelihood estimation, an...
Lázaro, Ester; Escarmís, Cristina; Domingo, Esteban; Manrubia, Susanna C
2002-09-01
Evolution of fitness values upon replication of viral populations is strongly influenced by the size of the virus population that participates in the infections. While large population passages often result in fitness gains, repeated plaque-to-plaque transfers result in average fitness losses. Here we develop a numerical model that describes fitness evolution of viral clones subjected to serial bottleneck events. The model predicts a biphasic evolution of fitness values in that a period of exponential decrease is followed by a stationary state in which fitness values display large fluctuations around an average constant value. This biphasic evolution is in agreement with experimental results of serial plaque-to-plaque transfers carried out with foot-and-mouth disease virus (FMDV) in cell culture. The existence of a stationary phase of fitness values has been further documented by serial plaque-to-plaque transfers of FMDV clones that had reached very low relative fitness values. The statistical properties of the stationary state depend on several parameters of the model, such as the probability of advantageous versus deleterious mutations, initial fitness, and the number of replication rounds. In particular, the size of the bottleneck is critical for determining the trend of fitness evolution.
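A stripped-down version of the serial-bottleneck idea can be simulated directly. The sketch below is an assumption-laden toy (the mutation probability and effect sizes are invented, not FMDV measurements) and reproduces only the initial exponential decline; the stationary phase in the paper's model requires additional structure, such as fitness-dependent mutation effects.

```python
import random

def plaque_transfer_fitness(n_transfers, w0=1.0, p_adv=0.1, seed=0):
    """Simulate fitness along serial one-genome bottlenecks: each transfer
    applies one random mutation, deleterious with probability 1 - p_adv.
    Probabilities and effect sizes are illustrative assumptions."""
    rng = random.Random(seed)
    w, trajectory = w0, [w0]
    for _ in range(n_transfers):
        if rng.random() < p_adv:
            w *= 1.0 + rng.uniform(0.0, 0.05)   # rare beneficial mutation
        else:
            w *= 1.0 - rng.uniform(0.0, 0.10)   # common deleterious mutation
        trajectory.append(w)
    return trajectory

traj = plaque_transfer_fitness(200)  # fitness declines across transfers
```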
Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS
DEFF Research Database (Denmark)
Bolker, B.M.; Gardner, B.; Maunder, M.
2013-01-01
Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield…
Akaike information criterion to select well-fit resist models
Burbine, Andrew; Fryer, David; Sturtevant, John
2015-03-01
In the field of model design and selection, there is always a risk that a model is over-fit to the data used to train the model. A model is well suited when it describes the physical system and not the stochastic behavior of the particular data collected. K-fold cross validation is a method to check this potential over-fitting to the data by calibrating with k folds of the data, typically between 4 and 10. Model training is a computationally expensive operation, however, and given a wide choice of candidate models, calibrating each one repeatedly becomes prohibitively time-consuming. Akaike information criterion (AIC) is an information-theoretic approach to model selection based on the maximized log-likelihood for a given model that only needs a single calibration per model. It is used in this study to demonstrate model ranking and selection among compact resist modelforms that have various numbers and types of terms to describe photoresist behavior. It is shown that there is a good correspondence of AIC to K-fold cross validation in selecting the best modelform, and it is further shown that over-fitting is, in most cases, not indicated. In modelforms with more than 40 fitting parameters, the size of the calibration data set benefits from additional parameters, statistically validating the model complexity.
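For least-squares fits with Gaussian errors, AIC reduces to n·ln(RSS/n) + 2k up to an additive constant, which makes the single-calibration ranking described above easy to sketch. The toy data and candidate models below (intercept-only versus straight line) are invented for illustration; they are not resist models.

```python
from math import log

def aic(n, rss, k):
    """AIC for a least-squares fit with Gaussian errors: n*ln(RSS/n) + 2k."""
    return n * log(rss / n) + 2 * k

def rss_mean(ys):
    """Residual sum of squares for the intercept-only model."""
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def rss_line(xs, ys):
    """Residual sum of squares for an OLS straight-line fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

# Invented data: a linear trend plus a small wiggle
xs = list(range(20))
ys = [0.5 * x + ((-1) ** x) * 0.3 for x in xs]

aic_mean = aic(len(ys), rss_mean(ys), k=1)      # intercept only
aic_line = aic(len(ys), rss_line(xs, ys), k=2)  # slope + intercept
```

The model with the lower AIC is preferred; here the straight line wins, as its extra parameter buys a large drop in RSS.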
Robust discriminative response map fitting with constrained local models
Asthana, Akshay; Asthana, Ashish; Zafeiriou, Stefanos; Cheng, Shiyang; Pantic, Maja
We present a novel discriminative regression based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, which shows impressive performance in the generic face fitting scenario. The motivation behind this approach is that,
A comparison of two methods for fitting the INDCLUS model
Ten Berge, Jos M.F.; Kiers, Henk A.L.
2005-01-01
Chaturvedi and Carroll have proposed the SINDCLUS method for fitting the INDCLUS model. It is based on splitting the two appearances of the cluster matrix in the least squares fit function and relying on convergence to a solution where both cluster matrices coincide. Kiers has proposed an
An Algorithm for Optimally Fitting a Wiener Model
Directory of Open Access Journals (Sweden)
Lucas P. Beverlin
2011-01-01
The purpose of this work is to present a new methodology for fitting Wiener networks to datasets with a large number of variables. Wiener networks have the ability to model a wide range of data types, and their structures can yield parameters with phenomenological meaning. There are several challenges to fitting such a model: model stiffness, the nonlinear nature of a Wiener network, possible overfitting, and the large number of parameters inherent with large input sets. This work describes a methodology to overcome these challenges by using several iterative algorithms under supervised learning and fitting subsets of the parameters at a time. This methodology is applied to Wiener networks that are used to predict blood glucose concentrations. The predictions of validation sets from models fit to four subjects using this methodology yielded a higher correlation between observed and predicted observations than other algorithms, including the Gauss-Newton and Levenberg-Marquardt algorithms.
Automatic fitting of spiking neuron models to electrophysiological recordings
Directory of Open Access Journals (Sweden)
Cyrille Rossant
2010-03-01
Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.
HDFITS: Porting the FITS data model to HDF5
Price, D. C.; Barsdell, B. R.; Greenhill, L. J.
2015-09-01
The FITS (Flexible Image Transport System) data format has been the de facto data format for astronomy-related data products since its inception in the late 1970s. While the FITS file format is widely supported, it lacks many of the features of more modern data serialization, such as the Hierarchical Data Format (HDF5). The HDF5 file format offers considerable advantages over FITS, such as improved I/O speed and compression, but has yet to gain widespread adoption within astronomy. One of the major holdbacks is that HDF5 is not well supported by data reduction software packages and image viewers. Here, we present a comparison of FITS and HDF5 as a format for storage of astronomy datasets. We show that the underlying data model of FITS can be ported to HDF5 in a straightforward manner, and that by doing so the advantages of the HDF5 file format can be leveraged immediately. In addition, we present a software tool, fits2hdf, for converting between FITS and a new 'HDFITS' format, where data are stored in HDF5 in a FITS-like manner. We show that HDFITS allows faster reading of data (up to 100× faster than FITS in some use cases), and improved compression (higher compression ratios and higher throughput). Finally, we show that by only changing the import lines in Python-based FITS utilities, HDFITS formatted data can be presented transparently as an in-memory FITS equivalent.
A Note on Recurring Misconceptions When Fitting Nonlinear Mixed Models.
Harring, Jeffrey R; Blozis, Shelley A
2016-01-01
Nonlinear mixed-effects (NLME) models are used when analyzing continuous repeated measures data taken on each of a number of individuals where the focus is on characteristics of complex, nonlinear individual change. Challenges with fitting NLME models and interpreting analytic results have been well documented in the statistical literature. However, parameter estimates as well as fitted functions from NLME analyses in recent articles have been misinterpreted, suggesting the need for clarification of these issues before these misconceptions become fact. These misconceptions arise from the choice of popular estimation algorithms, namely, the first-order linearization method (FO) and Gaussian-Hermite quadrature (GHQ) methods, and how these choices necessarily lead to population-average (PA) or subject-specific (SS) interpretations of model parameters, respectively. These estimation approaches also affect the fitted function for the typical individual, the lack-of-fit of individuals' predicted trajectories, and vice versa.
Hyper-Fit: Fitting Linear Models to Multidimensional Data with Multivariate Gaussian Uncertainties
Robotham, A. S. G.; Obreschkow, D.
2015-09-01
Astronomical data is often uncertain with errors that are heteroscedastic (different for each data point) and covariant between different dimensions. Assuming that a set of D-dimensional data points can be described by a (D - 1)-dimensional plane with intrinsic scatter, we derive the general likelihood function to be maximised to recover the best fitting model. Alongside the mathematical description, we also release the hyper-fit package for the R statistical language (http://github.com/asgr/hyper.fit) and a user-friendly web interface for online fitting (http://hyperfit.icrar.org). The hyper-fit package offers access to a large number of fitting routines, includes visualisation tools, and is fully documented in an extensive user manual. Most of the hyper-fit functionality is accessible via the web interface. In this paper, we include applications to toy examples and to real astronomical data from the literature: the mass-size, Tully-Fisher, Fundamental Plane, and mass-spin-morphology relations. In most cases, the hyper-fit solutions are in good agreement with published values, but uncover more information regarding the fitted model.
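The likelihood being maximised above, a line with heteroscedastic Gaussian errors plus intrinsic scatter added in quadrature, can be written down compactly. The Python sketch below uses invented data and a coarse grid search as a stand-in for the optimisers the hyper-fit package actually provides.

```python
from math import log, pi

def nll(xs, ys, yerr, m, c, scatter):
    """Negative log-likelihood of a line y = m*x + c with per-point Gaussian
    errors yerr and an intrinsic scatter term added in quadrature."""
    total = 0.0
    for x, y, e in zip(xs, ys, yerr):
        var = e ** 2 + scatter ** 2
        r = y - (m * x + c)
        total += 0.5 * (log(2 * pi * var) + r * r / var)
    return total

# Invented data: roughly y = 2x with heteroscedastic uncertainties
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.9, 4.2, 5.8, 8.1]
yerr = [0.3, 0.1, 0.4, 0.2, 0.3]

# Coarse grid search over (slope, intercept, scatter)
best = min(
    ((m / 10, c / 10, s / 10)
     for m in range(0, 41) for c in range(-10, 11) for s in range(0, 11)),
    key=lambda p: nll(xs, ys, yerr, *p),
)
```

The recovered slope and intercept land near the generating values; with real data, the intrinsic-scatter parameter absorbs variance beyond the quoted measurement errors.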
Fitting Equilibrium Search Models to Labour Market Data
DEFF Research Database (Denmark)
Bowlus, Audra J.; Kiefer, Nicholas M.; Neumann, George R.
1996-01-01
Specification and estimation of a Burdett-Mortensen type equilibrium search model is considered. The estimation is nonstandard. An estimation strategy asymptotically equivalent to maximum likelihood is proposed and applied. The results indicate that specifications with a small number of productivity types fit the data well compared to the homogeneous model.
LEP asymmetries and fits of the standard model
International Nuclear Information System (INIS)
Pietrzyk, B.
1994-01-01
The lepton and quark asymmetries measured at LEP are presented. The results of the Standard Model fits to the electroweak data presented at this conference are given. The top mass obtained from the fit to the LEP data is 172 (+13, -14) (+18, -20) GeV; it is 177 (+11, -11) (+18, -19) GeV when the collider, ν and A_LR data are also included. (author). 10 refs., 3 figs., 2 tabs
[How to fit and interpret multilevel models using SPSS].
Pardo, Antonio; Ruiz, Miguel A; San Martín, Rafael
2007-05-01
Hierarchical or multilevel models are used to analyse data when cases belong to known groups and sample units are selected both at the individual level and at the group level. In this work, the multilevel models most commonly discussed in the statistical literature are described, explaining how to fit these models using the SPSS program (any version from 11 onward) and how to interpret the outcomes of the analysis. Five particular models are described, fitted, and interpreted: (1) one-way analysis of variance with random effects, (2) regression analysis with means-as-outcomes, (3) one-way analysis of covariance with random effects, (4) regression analysis with random coefficients, and (5) regression analysis with means- and slopes-as-outcomes. All models are explained, trying to make them understandable to researchers in health and behaviour sciences.
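The first of those five models, the one-way ANOVA with random effects, can be sketched without SPSS: for balanced groups the intraclass correlation follows from the between- and within-group mean squares as ICC = (MSB - MSW) / (MSB + (k - 1) * MSW). A minimal pure-Python illustration on simulated data (hypothetical values, not SPSS output):

```python
import random
import statistics

def icc_oneway(groups):
    # Intraclass correlation from a balanced one-way random-effects ANOVA:
    # ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), k = cases per group.
    k = len(groups[0])
    means = [statistics.mean(g) for g in groups]
    grand = statistics.mean(means)
    msb = k * sum((m - grand) ** 2 for m in means) / (len(groups) - 1)
    msw = statistics.mean([statistics.variance(g) for g in groups])
    return (msb - msw) / (msb + (k - 1) * msw)

random.seed(0)
# 200 groups of 10; between-group sd 1 and within-group sd 1 => true ICC = 0.5
data = [[random.gauss(u, 1.0) for _ in range(10)]
        for u in (random.gauss(0.0, 1.0) for _ in range(200))]
print(round(icc_oneway(data), 2))
```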
Correcting Model Fit Criteria for Small Sample Latent Growth Models with Incomplete Data
McNeish, Daniel; Harring, Jeffrey R.
2017-01-01
To date, small sample problems with latent growth models (LGMs) have not received the amount of attention in the literature as related mixed-effect models (MEMs). Although many models can be interchangeably framed as a LGM or a MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…
Twitter classification model: the ABC of two million fitness tweets.
Vickey, Theodore A; Ginis, Kathleen Martin; Dabrowski, Maciej
2013-09-01
The purpose of this project was to design and test data collection and management tools that can be used to study the use of mobile fitness applications and social networking within the context of physical activity. This project was conducted over a 6-month period and involved collecting publicly shared Twitter data from five mobile fitness apps (Nike+, RunKeeper, MyFitnessPal, Endomondo, and dailymile). During that time, over 2.8 million tweets were collected, processed, and categorized using an online tweet collection application and a customized JavaScript program. Using grounded theory, a classification model was developed to categorize and understand the types of information being shared by application users. Our data show that by tracking mobile fitness app hashtags, a wealth of information can be gathered, including but not limited to daily use patterns, exercise frequency, location-based workouts, and overall workout sentiment.
Person-fit to the Five Factor Model of personality
Czech Academy of Sciences Publication Activity Database
Allik, J.; Realo, A.; Mõttus, R.; Borkenau, P.; Kuppens, P.; Hřebíčková, Martina
2012-01-01
Roč. 71, č. 1 (2012), s. 35-45 ISSN 1421-0185 R&D Projects: GA ČR GAP407/10/2394 Institutional research plan: CEZ:AV0Z70250504 Keywords: Five Factor Model * cross-cultural comparison * person-fit Subject RIV: AN - Psychology Impact factor: 0.638, year: 2012
Assessing fit in Bayesian models for spatial processes
Jun, M.
2014-09-16
© 2014 John Wiley & Sons, Ltd. Gaussian random fields are frequently used to model spatial and spatial-temporal data, particularly in geostatistical settings. As much of the attention of the statistics community has been focused on defining and estimating the mean and covariance functions of these processes, little effort has been devoted to developing goodness-of-fit tests to allow users to assess the models' adequacy. We describe a general goodness-of-fit test and related graphical diagnostics for assessing the fit of Bayesian Gaussian process models using pivotal discrepancy measures. Our method is applicable for both regularly and irregularly spaced observation locations on planar and spherical domains. The essential idea behind our method is to evaluate pivotal quantities defined for a realization of a Gaussian random field at parameter values drawn from the posterior distribution. Because the nominal distribution of the resulting pivotal discrepancy measures is known, it is possible to quantitatively assess model fit directly from the output of Markov chain Monte Carlo algorithms used to sample from the posterior distribution on the parameter space. We illustrate our method in a simulation study and in two applications.
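The essential idea, stripped of the spatial covariance structure, can be sketched for an iid normal model (a toy sketch under simplified assumptions, not the paper's spherical-domain method): evaluate a pivotal discrepancy at posterior draws and compare it with its known nominal chi-square distribution.

```python
import math
import random
import statistics

def pivotal_discrepancy(y, mu, sigma):
    # Sum of squared standardized residuals: chi-square with len(y) df
    # when (mu, sigma) are the true parameters.
    return sum(((v - mu) / sigma) ** 2 for v in y)

def posterior_draws_mu(y, sigma, n_draws, prior_var=100.0):
    # Conjugate normal posterior for the mean (known sigma, N(0, prior_var) prior)
    n = len(y)
    post_var = 1.0 / (n / sigma ** 2 + 1.0 / prior_var)
    post_mean = post_var * sum(y) / sigma ** 2
    return [random.gauss(post_mean, math.sqrt(post_var)) for _ in range(n_draws)]

random.seed(2)
sigma = 1.0
good = [random.gauss(0.5, sigma) for _ in range(100)]        # matches the model
bad = [random.gauss(0.5, 2.0 * sigma) for _ in range(100)]   # misspecified variance

ratios = {}
for label, y in (("good", good), ("bad", bad)):
    draws = posterior_draws_mu(y, sigma, 500)
    d = [pivotal_discrepancy(y, mu, sigma) for mu in draws]
    # Nominal distribution is chi-square with 100 df (mean 100), so a fitted
    # model puts d/n near 1 and a misfit pushes it well above 1.
    ratios[label] = statistics.mean(d) / len(y)
print(ratios)
```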
Application of modified vector fitting to grounding system modeling
Energy Technology Data Exchange (ETDEWEB)
Jimenez, D.; Camargo, M.; Herrera, J.; Torres, H. [National University of Colombia (Colombia). Research Program on Acquisition and Analysis of Signals - PAAS], Emails: dyjimeneza@unal.edu.co, mpcamargom@unal.edu.co; Vargas, M. [Siemens S.A. - Power Transmission and Distribution - Energy Services (Colombia)
2007-07-01
The transient behavior of grounding systems (GS) influences greatly the performance of electrical networks under fault conditions. This fact has led the authors to present an application of the Modified Vector Fitting (MVF)1 methodology based upon the frequency response of the system, in order to find a rational function approximation and an equivalent electrical network whose transient behavior is similar to the original one of the GS. The obtained network can be introduced into the EMTP/ATP program for simulating the transient behavior of the GS. The MVF technique, which is a modification of the Vector Fitting (VF) technique, allows identifying state space models from the Frequency Domain Response for both single and multiple input-output systems. In this work, the methodology is used to fit the frequency response of a grounding grid, which is computed by means of the Hybrid Electromagnetic Model (HEM), finding the relation between voltages and input currents in two points of the grid in frequency domain. The model obtained with the MVF shows a good agreement with the frequency response of the GS. Besides, the model is tested in EMTP/ATP finding a good fitting with the calculated data, which demonstrates the validity and usefulness of the MVF. (author)
Fitting rainfall interception models to forest ecosystems of Mexico
Návar, José
2017-05-01
Models that accurately predict forest interception are essential both for water balance studies and for assessing watershed responses to changes in land use and long-term climate variability. This paper compares the performance of rainfall interception models (the sparse Gash (1995), Rutter et al. (1975), and Liu (1997) models, plus two new models, NvMxa and NvMxb) using data from four spatially extensive, structurally diverse forest ecosystems in Mexico. Ninety-eight case studies measuring interception in tropical dry (25), arid/semi-arid (29), temperate (26), and tropical montane cloud forests (18) were compiled and analyzed. Coefficients derived from raw data or published statistical relationships were used as model input to evaluate multi-storm forest interception at the case study scale. On average, empirical data showed that tropical montane cloud, temperate, arid/semi-arid and tropical dry forests intercepted 14%, 18%, 22% and 26% of total precipitation, respectively. The models performed well in predicting interception, with mean deviations between measured and modeled interception as a function of total precipitation (ME) generally 0.66. Model fitting precision depended on the forest ecosystem: arid/semi-arid forests exhibited the smallest ME deviations, while tropical montane cloud forests displayed the largest. Improved agreement between measured and modeled data requires modification of the in-storm evaporation rate in the Liu model, the canopy storage in the sparse Gash model, and the throughfall coefficient in the Rutter and NvMx models. This research concludes by recommending wide application of rainfall interception models, with some caution, as they provide mixed results. The extensive forest interception data source, the fitting and testing of the models, the introduction of a new model, and the availability of coefficient values for all four forest ecosystems are an important source of information and a benchmark for future investigations in this field.
O'Riordan, J F; Goldstick, T K; Vida, L N; Honig, G R; Ernest, J T
1985-01-01
The ability of nine different models, prominent in the literature, to meaningfully characterize the oxygen-hemoglobin equilibrium curve (OHEC) of normal individuals was examined. Previously reported data (N = 33), obtained using the DCA-1 (Radiometer, Copenhagen), and new data (N = 8), obtained using the Hemox-Analyzer (TCS, Southampton, PA), from blood samples of normal, non-smoking volunteers were used and these devices were found to give statistically similar results. The OHECs were digitized and fitted to the models using least-squares techniques developed in this laboratory. The "goodness-of-fit" was determined by the root-mean-squared (RMS) error, the number of parameters, and the parameter redundancy, i.e., correlation between the parameters. The best RMS error did not necessarily indicate the best model. Most literature models consist of ratios of similar-order polynomials. These showed considerable parameter redundancy which made the curve fitting difficult. The best fits gave RMS errors as low as 0.2% saturation. The Hill model gave a good characterization over the saturation range 20%-98% with RMS errors of about 0.6% saturation. On the other hand, good characterizations over the entire range were given by several other models. The relative advantages and disadvantages of each model have been compared as well as the difficulties in fitting several of the models. No single model is best under all circumstances. The best model depends upon the particular circumstances for which it is to be utilized.
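The Hill model singled out above has the closed form S(P) = P^n / (P50^n + P^n); the kind of least-squares fitting the authors describe can be sketched on a synthetic noiseless curve, where a grid search recovers the generating parameters exactly (illustrative code; P50 = 26.8 mmHg and n = 2.7 are typical adult values, not the study's fitted results):

```python
def hill(p, p50, n):
    # Hill model for fractional hemoglobin saturation at oxygen tension p (mmHg)
    return p ** n / (p50 ** n + p ** n)

def rms_error(p50, n, points):
    sq = [(s - hill(p, p50, n)) ** 2 for p, s in points]
    return (sum(sq) / len(sq)) ** 0.5

# Synthetic noiseless OHEC with P50 = 26.8 mmHg and Hill coefficient n = 2.7
curve = [(p, hill(p, 26.8, 2.7)) for p in range(1, 121)]

# Coarse grid search for the least-squares (P50, n)
best = min((rms_error(p50 / 10.0, n / 10.0, curve), p50 / 10.0, n / 10.0)
           for p50 in range(200, 351, 2)
           for n in range(20, 36))
rms, p50_hat, n_hat = best
print(p50_hat, n_hat, rms)
```

With real, noisy saturation data one would of course see a nonzero RMS error, which is the "goodness-of-fit" quantity the abstract compares across models.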
Supersymmetry with prejudice: Fitting the wrong model to LHC data
Allanach, B. C.; Dolan, Matthew J.
2012-09-01
We critically examine interpretations of hypothetical supersymmetric LHC signals, fitting to alternative wrong models of supersymmetry breaking. The signals we consider are some of the most constraining on the sparticle spectrum: invariant mass distributions with edges and endpoints from the golden decay chain q̃ → q χ̃₂⁰ (→ l̃± l∓ q) → χ̃₁⁰ l⁺ l⁻ q. We assume a constrained minimal supersymmetric standard model (CMSSM) point to be the 'correct' one, but fit the signals instead with minimal gauge mediated supersymmetry breaking models (mGMSB) with a neutralino quasistable lightest supersymmetric particle, minimal anomaly mediation, and large volume string compactification models. Minimal anomaly mediation and the large volume scenario can be unambiguously discriminated against the CMSSM for the assumed signal and 1 fb⁻¹ of LHC data at √s = 14 TeV. However, mGMSB would not be discriminated on the basis of the kinematic endpoints alone. The best-fit point spectra of mGMSB and the CMSSM look remarkably similar, making experimental discrimination at the LHC based on the edges or Higgs properties difficult. However, using rate information for the golden chain should provide the additional separation required.
Fitting Latent Cluster Models for Networks with latentnet
Directory of Open Access Journals (Sweden)
Pavel N. Krivitsky
2007-12-01
latentnet is a package to fit and evaluate statistical latent position and cluster models for networks. Hoff, Raftery, and Handcock (2002) suggested an approach to modeling networks based on positing the existence of a latent space of characteristics of the actors. Relationships form as a function of distances between these characteristics as well as functions of observed dyadic-level covariates. In latentnet, social distances are represented in a Euclidean space. It also includes a variant of the extension of the latent position model that allows for clustering of the positions, developed in Handcock, Raftery, and Tantrum (2007). The package implements Bayesian inference for the models based on a Markov chain Monte Carlo algorithm. It can also compute maximum likelihood estimates for the latent position model and a two-stage maximum likelihood method for the latent position cluster model. For latent position cluster models, the package provides a Bayesian way of assessing how many groups there are, and thus whether or not there is any clustering (since if the preferred number of groups is 1, there is little evidence for clustering). It also estimates which cluster each actor belongs to. These estimates are probabilistic and provide the probability of each actor belonging to each cluster. The package computes four types of point estimates for the coefficients and positions: maximum likelihood estimate, posterior mean, posterior mode, and the estimator that minimizes Kullback-Leibler divergence from the posterior. Goodness-of-fit can be assessed via posterior predictive checks, and the package includes a function to simulate networks from a latent position or latent position cluster model.
Evolution models with lethal mutations on symmetric or random fitness landscapes.
Kirakosyan, Zara; Saakian, David B; Hu, Chin-Kun
2010-07-01
We calculate the mean fitness for evolution models in which fitness is a function of the Hamming distance from a reference sequence and there is a probability that this fitness is nullified (Eigen model case) or tends to negative infinity (Crow-Kimura model case). The mean fitness is also calculated for random fitnesses with a logarithmic-normal distribution, which sometimes reasonably describes the situation with RNA viruses.
Atmospheric Turbulence Modeling for Aerospace Vehicles: Fractional Order Fit
Kopasakis, George (Inventor)
2015-01-01
An improved model for simulating atmospheric disturbances is disclosed. A Kolmogorov spectrum may be scaled to convert it into a finite-energy von Karman spectrum, and a fractional-order pole-zero transfer function (TF) may be derived from the von Karman spectrum. Fractional-order atmospheric turbulence may be approximated with an integer-order pole-zero TF fit, and the approximation may be stored in memory.
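The distinction the patent draws can be checked numerically: the von Karman spectrum has finite energy as frequency goes to zero, while its high-frequency tail follows Kolmogorov's -5/3 power law. A short sketch (the standard longitudinal von Karman form; the 762 m length scale is an illustrative choice, not a value from the patent):

```python
import math

def von_karman(omega, length=762.0, sigma=1.0):
    # Longitudinal von Karman turbulence spectrum: finite at omega = 0,
    # falling off like omega**(-5/3) (Kolmogorov) at high spatial frequency.
    return (sigma ** 2 * (2.0 * length / math.pi)
            / (1.0 + (1.339 * length * omega) ** 2) ** (5.0 / 6.0))

# Log-log slope between two high frequencies approaches Kolmogorov's -5/3
w1, w2 = 1.0, 10.0
slope = ((math.log(von_karman(w2)) - math.log(von_karman(w1)))
         / (math.log(w2) - math.log(w1)))
print(round(slope, 4))
```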
RFA: R-Squared Fitting Analysis Model for Power Attack
Directory of Open Access Journals (Sweden)
An Wang
2017-01-01
Correlation Power Analysis (CPA), introduced by Brier et al. in 2004, is an important side-channel attack method that over the last decade has enabled attackers to derive secret or private keys efficiently and at low cost. In this paper, we propose R-squared fitting model analysis (RFA), which is more appropriate for nonlinear correlation analysis. This model can also be applied to other side-channel methods such as second-order CPA and the collision-correlation power attack. Our experiments show that RFA-based attacks bring significant advantages in both time complexity and success rate.
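The ranking step of such an attack can be sketched on simulated Hamming-weight leakage: score every key guess by the R² of a straight-line fit between its predicted leakage and the measured traces, and keep the best-scoring guess. Everything below (toy substitution box, noise level, trace count) is an assumption of the sketch, not the paper's experimental setup:

```python
import random

random.seed(3)
SBOX = list(range(256))
random.shuffle(SBOX)   # toy substitution box (a random permutation, not AES)

def hamming_weight(v):
    return bin(v).count("1")

def r_squared(xs, ys):
    # Coefficient of determination of the least-squares line of ys on xs
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

true_key = 0x3C
plaintexts = [random.randrange(256) for _ in range(300)]
# Simulated traces: Hamming-weight leakage of the S-box output plus noise
traces = [hamming_weight(SBOX[p ^ true_key]) + random.gauss(0.0, 0.5)
          for p in plaintexts]

# Rank every key guess by how well its predicted leakage fits the traces
scores = [(r_squared([hamming_weight(SBOX[p ^ k]) for p in plaintexts], traces), k)
          for k in range(256)]
recovered = max(scores)[1]
print(hex(recovered))
```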
An NCME Instructional Module on Item-Fit Statistics for Item Response Theory Models
Ames, Allison J.; Penfield, Randall D.
2015-01-01
Drawing valid inferences from item response theory (IRT) models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. This instructional module provides an overview of methods used for evaluating the fit of IRT models. Upon completing…
Rapid world modeling: Fitting range data to geometric primitives
International Nuclear Information System (INIS)
Feddema, J.; Little, C.
1996-01-01
For the past seven years, Sandia National Laboratories has been active in the development of robotic systems to help remediate DOE's waste sites and decommissioned facilities. Some of these facilities have high levels of radioactivity that prevent manual clean-up; tele-operated and autonomous robotic systems have been envisioned as the only suitable means of removing the radioactive elements. World modeling is defined as the process of creating a numerical geometric model of a real-world environment or workspace. This model is often used in robotics to plan robot motions that perform a task while avoiding obstacles. In many applications where the world model does not exist ahead of time, structured lighting, laser range finders, and even acoustical sensors have been used to create three-dimensional maps of the environment. These maps consist of thousands of range points, which are difficult to handle and interpret. This paper presents a least squares technique for fitting range data to planar and quadric surfaces, including cylinders and ellipsoids. Once fit to these primitive surfaces, the amount of data associated with a surface is reduced by up to three orders of magnitude, thus allowing for more rapid handling and analysis of world data.
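For the planar case, the reduction described above can be sketched directly: a least-squares fit of z = a*x + b*y + c through the 3x3 normal equations collapses thousands of range points to three parameters (illustrative code with synthetic data, not the Sandia implementation):

```python
import random

def fit_plane(points):
    # Least-squares fit of z = a*x + b*y + c via the 3x3 normal equations
    sums = {k: 0.0 for k in ("xx", "xy", "x", "yy", "y", "n", "xz", "yz", "z")}
    for x, y, z in points:
        sums["xx"] += x * x; sums["xy"] += x * y; sums["x"] += x
        sums["yy"] += y * y; sums["y"] += y;      sums["n"] += 1
        sums["xz"] += x * z; sums["yz"] += y * z; sums["z"] += z
    A = [[sums["xx"], sums["xy"], sums["x"], sums["xz"]],
         [sums["xy"], sums["yy"], sums["y"], sums["yz"]],
         [sums["x"],  sums["y"],  sums["n"], sums["z"]]]
    # Gaussian elimination with partial pivoting on the augmented matrix
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for col in range(i, 4):
                A[r][col] -= f * A[i][col]
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):   # back-substitution
        coeffs[i] = (A[i][3] - sum(A[i][j] * coeffs[j]
                                   for j in range(i + 1, 3))) / A[i][i]
    return coeffs  # a, b, c

random.seed(4)
# 5000 noisy range points on the plane z = 0.2x - 0.5y + 3
cloud = [(x, y, 0.2 * x - 0.5 * y + 3.0 + random.gauss(0.0, 0.01))
         for x, y in ((random.uniform(-1, 1), random.uniform(-1, 1))
                      for _ in range(5000))]
a, b, c = fit_plane(cloud)
print(round(a, 2), round(b, 2), round(c, 2))
```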
Modelling population dynamics model formulation, fitting and assessment using state-space methods
Newman, K B; Morgan, B J T; King, R; Borchers, D L; Cole, D J; Besbeas, P; Gimenez, O; Thomas, L
2014-01-01
This book gives a unifying framework for estimating the abundance of open populations: populations subject to births, deaths and movement, given imperfect measurements or samples of the populations. The focus is primarily on populations of vertebrates for which dynamics are typically modelled within the framework of an annual cycle, and for which stochastic variability in the demographic processes is usually modest. Discrete-time models are developed in which animals can be assigned to discrete states such as age class, gender, maturity, population (within a metapopulation), or species (for multi-species models). The book goes well beyond estimation of abundance, allowing inference on underlying population processes such as birth or recruitment, survival and movement. This requires the formulation and fitting of population dynamics models. The resulting fitted models yield both estimates of abundance and estimates of parameters characterizing the underlying processes.
Feature extraction through least squares fit to a simple model
International Nuclear Information System (INIS)
Demuth, H.B.
1976-01-01
The Oak Ridge National Laboratory (ORNL) presented the Los Alamos Scientific Laboratory (LASL) with 18 radiographs of fuel rod test bundles. The problem is to estimate the thickness of the gap between some cylindrical rods and a flat wall surface. The edges of the gaps are poorly defined due to finite source size, x-ray scatter, parallax, film grain noise, and other degrading effects. The radiographs were scanned and the scan-line data were averaged to reduce noise and to convert the problem to one dimension. A model of the ideal gap, convolved with an appropriate point-spread function, was fit to the averaged data with a least squares program; and the gap width was determined from the final fitted-model parameters. The least squares routine did converge and the gaps obtained are of reasonable size. The method is remarkably insensitive to noise. This report describes the problem, the techniques used to solve it, and the results and conclusions. Suggestions for future work are also given
Empirical fitness models for hepatitis C virus immunogen design.
Hart, Gregory R; Ferguson, Andrew L
2015-11-24
Hepatitis C virus (HCV) afflicts 170 million people worldwide, 2%-3% of the global population, and kills 350 000 each year. Prophylactic vaccination offers the most realistic and cost-effective hope of controlling this epidemic in the developing world, where expensive drug therapies are not available. Despite 20 years of research, the high mutability of the virus and lack of knowledge of what constitutes effective immune responses have impeded development of an effective vaccine. Coupling data mining of sequence databases with spin glass models from statistical physics, we have developed a computational approach to translate clinical sequence databases into empirical fitness landscapes quantifying the replicative capacity of the virus as a function of its amino acid sequence. These landscapes explicitly connect viral genotype to phenotypic fitness, and reveal vulnerable immunological targets within the viral proteome that can be exploited to rationally design vaccine immunogens. We have recovered the empirical fitness landscape for the HCV RNA-dependent RNA polymerase (protein NS5B) responsible for viral genome replication, and validated the predictions of our model by demonstrating excellent accord with experimental measurements and clinical observations. We have used our landscapes to perform exhaustive in silico screening of 16.8 million T-cell immunogen candidates to identify 86 optimal formulations. By reducing the search space of immunogen candidates by over five orders of magnitude, our approach can offer valuable savings in time, expense, and labor for experimental vaccine development and accelerate the search for an HCV vaccine. Abbreviations: HCV, hepatitis C virus; HLA, human leukocyte antigen; CTL, cytotoxic T lymphocyte; NS5B, nonstructural protein 5B; MSA, multiple sequence alignment; PEG-IFN, pegylated interferon.
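The spin-glass form of such a fitness landscape can be sketched in a few lines: fitness is modeled through an Ising-like energy with per-site fields h and pairwise couplings J over the mutational state of a sequence, with lower energy corresponding to higher replicative capacity. The couplings below are random illustrative values, not the inferred NS5B parameters:

```python
import random

def landscape_energy(seq, h, J):
    # Ising-like energy E = sum_i h_i * s_i + sum_{i<j} J_ij * s_i * s_j, where
    # s_i = 1 marks a mutation away from the reference sequence at site i.
    e = sum(h[i] * s for i, s in enumerate(seq))
    e += sum(J[i][j] * seq[i] * seq[j]
             for i in range(len(seq)) for j in range(i + 1, len(seq)))
    return e

random.seed(5)
L = 20
h = [random.uniform(0.5, 1.5) for _ in range(L)]   # mutations cost fitness here
J = [[random.uniform(-0.2, 0.2) for _ in range(L)] for _ in range(L)]

wild_type = [0] * L        # reference sequence: zero energy by construction
single = [0] * L
single[3] = 1              # a single mutant pays its field cost h[3]
print(landscape_energy(wild_type, h, J), landscape_energy(single, h, J))
```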
Global fits of GUT-scale SUSY models with GAMBIT
Energy Technology Data Exchange (ETDEWEB)
Athron, Peter [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Balazs, Csaba [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Bringmann, Torsten; Dal, Lars A.; Krislock, Abram; Raklev, Are [University of Oslo, Department of Physics, Oslo (Norway); Buckley, Andy [University of Glasgow, SUPA, School of Physics and Astronomy, Glasgow (United Kingdom); Chrzaszcz, Marcin [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); H. Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Krakow (Poland); Conrad, Jan; Edsjoe, Joakim; Farmer, Ben [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Cornell, Jonathan M. [McGill University, Department of Physics, Montreal, QC (Canada); Jackson, Paul; White, Martin [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); University of Adelaide, Department of Physics, Adelaide, SA (Australia); Kvellestad, Anders; Savage, Christopher [NORDITA, Stockholm (Sweden); Mahmoudi, Farvah [Univ Lyon, Univ Lyon 1, CNRS, ENS de Lyon, Centre de Recherche Astrophysique de Lyon UMR5574, Saint-Genis-Laval (France); Theoretical Physics Department, CERN, Geneva (Switzerland); Martinez, Gregory D. 
[University of California, Physics and Astronomy Department, Los Angeles, CA (United States); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Rogan, Christopher [Harvard University, Department of Physics, Cambridge, MA (United States); Ruiz de Austri, Roberto [IFIC-UV/CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Saavedra, Aldo [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); The University of Sydney, Faculty of Engineering and Information Technologies, Centre for Translational Data Science, School of Physics, Camperdown, NSW (Australia); Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Serra, Nicola [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Weniger, Christoph [University of Amsterdam, GRAPPA, Institute of Physics, Amsterdam (Netherlands); Collaboration: The GAMBIT Collaboration
2017-12-15
We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos. (orig.)
A bipartite fitness model for online music streaming services
Pongnumkul, Suchit; Motohashi, Kazuyuki
2018-01-01
This paper proposes an evolution model and an analysis of the behavior of music consumers on online music streaming services. While previous studies have observed power-law degree distributions of usage in online music streaming services, the underlying behavior of users has not been well understood. Users and songs can be described using a bipartite network where an edge exists between a user node and a song node when the user has listened to that song. The growth mechanism of bipartite networks has been used to understand the evolution of online bipartite networks (Zhang et al. 2013). Existing bipartite models are based on a preferential attachment mechanism (Barabási and Albert 1999) in which the probability that a user listens to a song is proportional to its current popularity. This mechanism does not allow for two types of real-world phenomena. First, a newly released song with high quality sometimes quickly gains popularity. Second, the popularity of songs normally decreases as time goes by. Therefore, this paper proposes a new model that is more suitable for online music services by adding fitness and aging functions to the song nodes of the bipartite network proposed by Zhang et al. (2013). Theoretical analyses are performed for the degree distribution of songs. Empirical data from an online streaming service, Last.fm, are used to confirm the degree distribution of the object nodes. Simulation results show improvements over a previous model. Finally, to illustrate the application of the proposed model, a simplified royalty cost model for online music services is used to demonstrate how changes in the proposed parameters can affect the costs for online music streaming providers. Managerial implications are also discussed.
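The growth mechanism described can be sketched as a simulation: at each listen, a song is chosen with probability proportional to popularity multiplied by a fitness draw and an exponential aging factor, and new songs occasionally enter with one play. The rates and decay constant below are illustrative assumptions, not the paper's calibrated values:

```python
import random

def simulate(n_steps, new_song_rate=0.02, decay=0.995):
    # Each step: every song ages, a new song may be released (one play),
    # then one listen goes to a song chosen with probability proportional
    # to plays * fitness * decay**age.
    random.seed(6)
    songs = [{"plays": 1, "fitness": random.random(), "age": 0}]
    for _ in range(n_steps):
        for s in songs:
            s["age"] += 1
        if random.random() < new_song_rate:
            songs.append({"plays": 1, "fitness": random.random(), "age": 0})
        weights = [s["plays"] * s["fitness"] * decay ** s["age"] for s in songs]
        r = random.random() * sum(weights)
        for s, w in zip(songs, weights):
            r -= w
            if r <= 0:
                s["plays"] += 1
                break
        else:
            songs[-1]["plays"] += 1   # guard against floating-point round-off
    return songs

songs = simulate(5000)
plays = sorted((s["plays"] for s in songs), reverse=True)
print(len(songs), plays[0])
```

Removing the fitness and aging factors recovers pure preferential attachment, which is the baseline the paper improves on.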
The issue of statistical power for overall model fit in evaluating structural equation models
Directory of Open Access Journals (Sweden)
Richard HERMIDA
2015-06-01
Statistical power is an important concept for psychological research. However, examining the power of a structural equation model (SEM) is rare in practice. This article provides an accessible review of the concept of statistical power for the Root Mean Square Error of Approximation (RMSEA) index of overall model fit in structural equation modeling. By way of example, we examine the current state of power in the literature by reviewing studies in top Industrial-Organizational (I/O) Psychology journals using SEMs. Results indicate that in many studies, power is very low, which implies acceptance of invalid models. Additionally, we examined methodological situations which may have an influence on statistical power of SEMs. Results showed that power varies significantly as a function of model type and whether or not the model is the main model for the study. Finally, results indicated that power is significantly related to model fit statistics used in evaluating SEMs. The results from this quantitative review imply that researchers should be more vigilant with respect to power in structural equation modeling. We therefore conclude by offering methodological best practices to increase confidence in the interpretation of structural equation modeling results with respect to statistical power issues.
The FITS model office ergonomics program: a model for best practice.
Chim, Justine M Y
2014-01-01
An effective office ergonomics program can predict positive results in reducing musculoskeletal injury rates, enhancing productivity, and improving staff well-being and job satisfaction. Its objective is to provide a systematic solution to manage the potential risk of musculoskeletal disorders among computer users in an office setting. The FITS Model Office Ergonomics Program has been developed, drawing on the legislative requirements for promoting the health and safety of workers using computers for extended periods, on previous research findings, and on practical industrial knowledge in ergonomics, occupational health and safety management, and human resources management in Hong Kong and overseas. This paper proposes a comprehensive office ergonomics program, the FITS Model, whose elements are (1) Furniture Evaluation and Selection; (2) Individual Workstation Assessment; (3) Training and Education; and (4) Stretching Exercises and Rest Breaks. An experienced ergonomics practitioner should be included in the program design and implementation. Through the FITS Model Office Ergonomics Program, the risk of musculoskeletal disorders among computer users can be eliminated or minimized, and workplace health and safety and employees' wellness enhanced.
Fitting of Parametric Building Models to Oblique Aerial Images
Panday, U. S.; Gerke, M.
2011-09-01
In the literature and in photogrammetric workstations, many approaches and systems to automatically reconstruct buildings from remote sensing data are described and available. Those building models are used, for instance, in city modeling or in a cadastre context. If a roof overhang is present, the building walls cannot be estimated correctly from nadir-view aerial images or airborne laser scanning (ALS) data. This leads to inconsistent building outlines, which has a negative influence on visual impression, but more seriously also represents a wrong legal boundary in the cadastre. Oblique aerial images, as opposed to nadir-view images, reveal greater detail and allow an object to be seen from different directions. Building walls are directly visible in oblique images, and those images are used for automated roof overhang estimation in this research. A fitting algorithm is employed to find roof parameters of simple buildings. It uses a least squares algorithm to fit projected wire frames to their corresponding edge lines extracted from the images. Self-occlusion is detected based on the intersection of the viewing ray with the planes formed by the building, whereas occlusion by other objects is detected using an ALS point cloud. Overhang and ground height are obtained by sweeping vertical and horizontal planes, respectively. Experimental results are verified with high-resolution ortho-images, a field survey, and ALS data. A planimetric accuracy of 1 cm mean and 5 cm standard deviation was obtained, while building orientation was accurate to a mean of 0.23° and a standard deviation of 0.96° with respect to the ortho-image. Overhang parameters agreed to within approximately 10 cm of the field survey. The ground and roof heights were accurate to means of -9 cm and 8 cm, with standard deviations of 16 cm and 8 cm, respectively, with respect to ALS. The developed approach reconstructs 3D building models well in cases of sufficient texture. More images should be acquired for completeness of
The FIT Model - Fuel-cycle Integration and Tradeoffs
Energy Technology Data Exchange (ETDEWEB)
Steven J. Piet; Nick R. Soelberg; Samuel E. Bays; Candido Pereira; Layne F. Pincock; Eric L. Shaber; Meliisa C Teague; Gregory M Teske; Kurt G Vedros
2010-09-01
All mass streams from fuel separation and fabrication are products that must meet some set of product criteria – fuel feedstock impurity limits, waste acceptance criteria (WAC), material storage (if any), or recycle material purity requirements such as zirconium for cladding or lanthanides for industrial use. These must be considered in a systematic and comprehensive way. The FIT model and the “system losses study” team that developed it [Shropshire2009, Piet2010] are an initial step by the FCR&D program toward a global analysis that accounts for the requirements and capabilities of each component, as well as major material flows within an integrated fuel cycle. This will help the program identify near-term R&D needs and set longer-term goals. The question originally posed to the “system losses study” was the cost of separation, fuel fabrication, waste management, etc. versus the separation efficiency. In other words, are the costs associated with marginal reductions in separations losses (or improvements in product recovery) justified by the gains in the performance of other systems? We have learned that that is the wrong question. The right question is: how does one adjust the compositions and quantities of all mass streams, given uncertain product criteria, to balance competing objectives including cost? FIT is a method to analyze different fuel cycles using common bases to determine how chemical performance changes in one part of a fuel cycle (say used fuel cooling times or separation efficiencies) affect other parts of the fuel cycle. FIT estimates impurities in fuel and waste via a rough estimate of physics and mass balance for a set of technologies. If feasibility is an issue for a set, as it is for “minimum fuel treatment” approaches such as melt refining and AIROX, it can help to make an estimate of how performances would have to change to achieve feasibility.
A fitting LEGACY – modelling Kepler's best stars
Directory of Open Access Journals (Sweden)
Aarslev Magnus J.
2017-01-01
The LEGACY sample represents the best solar-like stars observed in the Kepler mission[5, 8]. The 66 stars in the sample are all on the main sequence or only slightly more evolved. They each have more than one year's observation data in short cadence, allowing for precise extraction of individual frequencies. Here we present model fits using a modified ASTFIT procedure employing two different near-surface-effect corrections, one by Christensen-Dalsgaard[4] and a newer correction proposed by Ball & Gizon[1]. We then compare the results obtained using the different corrections. We find that using the latter correction yields lower masses and significantly lower χ2 values for a large part of the sample.
Bosone, Lucia; Martinez, Frédéric; Kalampalikis, Nikos
2015-04-01
In health-promotional campaigns, positive and negative role models can be deployed to illustrate the benefits or costs of certain behaviors. The main purpose of this article is to investigate why, how, and when exposure to role models strengthens the persuasiveness of a message, according to regulatory fit theory. We argue that exposure to a positive versus a negative model activates individuals' goals toward promotion rather than prevention. By means of two experiments, we demonstrate that high levels of persuasion occur when a message advertising healthy dietary habits offers a regulatory fit between its framing and the described role model. Our data also establish that the effects of such internal regulatory fit by vicarious experience depend on individuals' perceptions of response-efficacy and self-efficacy. Our findings constitute a significant theoretical complement to previous research on regulatory fit and contain valuable practical implications for health-promotional campaigns. © 2015 by the Society for Personality and Social Psychology, Inc.
Item-level diagnostics and model–data fit in item response theory ...
African Journals Online (AJOL)
Item response theory (IRT) is a framework for modeling and analyzing item response data. Item-level modeling gives IRT advantages over classical test theory. The fit of an item score pattern to item response theory (IRT) models is a necessary condition that must be assessed before further use of the items and the models that best fit ...
Fitness voter model: Damped oscillations and anomalous consensus.
Woolcock, Anthony; Connaughton, Colm; Merali, Yasmin; Vazquez, Federico
2017-09-01
We study the dynamics of opinion formation in a heterogeneous voter model on a complete graph, in which each agent is endowed with an integer fitness parameter k≥0, in addition to its + or - opinion state. The evolution of the distribution of k-values and the opinion dynamics are coupled together, so as to allow the system to dynamically develop heterogeneity and memory in a simple way. When two agents with different opinions interact, their k-values are compared, and with probability p the agent with the lower value adopts the opinion of the one with the higher value, while with probability 1-p the opposite happens. The agent that keeps its opinion (winning agent) increments its k-value by one. We study the dynamics of the system in the entire 0≤p≤1 range and compare with the case p=1/2, in which opinions are decoupled from the k-values and the dynamics is equivalent to that of the standard voter model. When 0≤p<1/2, agents with higher k-values are less persuasive, and the system approaches the consensus state of the initial majority opinion exponentially fast. The mean consensus time τ appears to grow logarithmically with the number of agents N, and it is greatly decreased relative to the linear behavior τ∼N found in the standard voter model. When 1/2<p≤1, agents with higher k-values are more persuasive, and the consensus time is increased relative to the standard voter model, although it still scales linearly with N. The p=1 case is special, with a relaxation to coexistence that scales as t^{-2.73} and a consensus time that scales as
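The interaction rule described in this abstract — the lower-k agent adopts the higher-k agent's opinion with probability p, and the winner increments its k — can be sketched as a simulation. Tie-breaking and the update schedule are our own simplifying assumptions:

```python
import random

def fitness_voter(n_agents, p, n_steps, seed=7):
    """Minimal sketch of a fitness voter model on a complete graph: each
    agent holds opinion +1 or -1 and an integer fitness k >= 0. When two
    disagreeing agents meet, with probability p the lower-k agent adopts
    the higher-k agent's opinion (with probability 1-p the opposite), and
    whichever agent keeps its opinion increments its k. Ties in k are
    broken at random."""
    rng = random.Random(seed)
    opinion = [1 if i < n_agents // 2 else -1 for i in range(n_agents)]
    k = [0] * n_agents
    for _ in range(n_steps):
        i = rng.randrange(n_agents)
        j = rng.randrange(n_agents)
        if i == j or opinion[i] == opinion[j]:
            continue
        if k[i] == k[j]:
            hi, lo = (i, j) if rng.random() < 0.5 else (j, i)
        else:
            hi, lo = (i, j) if k[i] > k[j] else (j, i)
        if rng.random() < p:          # higher-fitness agent wins
            opinion[lo] = opinion[hi]
            k[hi] += 1
        else:                         # lower-fitness agent wins
            opinion[hi] = opinion[lo]
            k[lo] += 1
    return opinion, k
```

Setting p=1/2 makes the k-values irrelevant to the opinion update, recovering standard voter-model dynamics, which is the baseline case the abstract compares against.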
A quantitative confidence signal detection model: 1. Fitting psychometric functions
Yi, Yongwoo
2016-01-01
Perceptual thresholds are commonly assayed in the laboratory and clinic. When precision and accuracy are required, thresholds are quantified by fitting a psychometric function to forced-choice data. The primary shortcoming of this approach is that it typically requires 100 trials or more to yield accurate (i.e., small bias) and precise (i.e., small variance) psychometric parameter estimates. We show that confidence probability judgments combined with a model of confidence can yield psychometric parameter estimates that are markedly more precise and/or markedly more efficient than conventional methods. Specifically, both human data and simulations show that including confidence probability judgments for just 20 trials can yield psychometric parameter estimates that match the precision of those obtained from 100 trials using conventional analyses. Such an efficiency advantage would be especially beneficial for tasks (e.g., taste, smell, and vestibular assays) that require more than a few seconds for each trial, but this potential benefit could accrue for many other tasks. PMID:26763777
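The conventional forced-choice fit that this abstract takes as its baseline can be sketched as a grid-search maximum-likelihood fit of a cumulative-Gaussian psychometric function. The confidence-judgment extension itself is not reproduced here, and the grid ranges are arbitrary illustrative choices:

```python
import math
from statistics import NormalDist

def fit_psychometric(stimuli, n_correct, n_trials):
    """Grid-search maximum-likelihood fit of a cumulative-Gaussian
    psychometric function P(correct at x) = Phi((x - mu) / sigma) to
    forced-choice counts. A bare-bones conventional fit, illustrating the
    baseline method whose precision the confidence-based approach improves."""
    phi = NormalDist().cdf
    best, best_ll = None, -math.inf
    for mu in (i * 0.1 for i in range(-20, 21)):        # mu grid: -2.0 .. 2.0
        for sigma in (j * 0.1 for j in range(1, 41)):   # sigma grid: 0.1 .. 4.0
            ll = 0.0
            for x, k, n in zip(stimuli, n_correct, n_trials):
                p = min(max(phi((x - mu) / sigma), 1e-9), 1 - 1e-9)
                ll += k * math.log(p) + (n - k) * math.log(1 - p)
            if ll > best_ll:
                best, best_ll = (mu, sigma), ll
    return best
```

Feeding the fit counts generated from a known (mu, sigma) recovers those parameters to within the grid resolution; with few trials per stimulus level, the estimates become noisy, which is the precision problem the confidence-signal model addresses.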
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
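The "explode" step mentioned above — replicating each survival record once per baseline-hazard piece, with log-exposure time as the Poisson offset — can be sketched as follows. This is a minimal illustration of the piecewise-exponential data layout, not the %PCFrailty macro:

```python
import math

def explode(survival, cuts):
    """Split each (time, event) record into one row per baseline-hazard
    piece, with the exposure time spent in that piece; log(exposure)
    becomes the offset of the Poisson regression, and y indicates whether
    the event fell in that piece. `cuts` are the upper bounds of the
    pieces, starting from 0."""
    rows = []
    for sid, (time, event) in enumerate(survival):
        lower = 0.0
        for piece, upper in enumerate(cuts):
            if time <= lower:          # subject already exited
                break
            exposure = min(time, upper) - lower
            died = int(bool(event) and time <= upper)
            rows.append({"id": sid, "piece": piece,
                         "y": died, "offset": math.log(exposure)})
            lower = upper
    return rows
```

The exploded rows can then be handed to any generalized linear mixed model routine with a Poisson likelihood, piece-specific intercepts, and a per-cluster random effect for the log-normal frailty.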
Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten
2017-05-01
Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process, the models allow for profound conclusions about the test takers. However, before such a model can be used its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study. In this study we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to three alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
A cautionary note on the use of information fit indexes in covariance structure modeling with means
Wicherts, J.M.; Dolan, C.V.
2004-01-01
Information fit indexes such as Akaike Information Criterion, Consistent Akaike Information Criterion, Bayesian Information Criterion, and the expected cross validation index can be valuable in assessing the relative fit of structural equation models that differ regarding restrictiveness. In cases
Optimisation of Ionic Models to Fit Tissue Action Potentials: Application to 3D Atrial Modelling
Directory of Open Access Journals (Sweden)
Amr Al Abed
2013-01-01
A 3D model of atrial electrical activity has been developed with spatially heterogeneous electrophysiological properties. The atrial geometry, reconstructed from the male Visible Human dataset, included gross anatomical features such as the central and peripheral sinoatrial node (SAN), intra-atrial connections, pulmonary veins, inferior and superior vena cava, and the coronary sinus. Membrane potentials of myocytes from spontaneously active or electrically paced in vitro rabbit cardiac tissue preparations were recorded using intracellular glass microelectrodes. Action potentials of central and peripheral SAN, right and left atrial, and pulmonary vein myocytes were each fitted using a generic ionic model having three phenomenological ionic current components: one time-dependent inward, one time-dependent outward, and one leakage current. To bridge the gap between the single-cell ionic models and the gross electrical behaviour of the 3D whole-atrial model, a simplified 2D tissue disc with heterogeneous regions was optimised to arrive at parameters for each cell type under electrotonic load. Parameters were then incorporated into the 3D atrial model, which as a result exhibited a spontaneously active SAN able to rhythmically excite the atria. The tissue-based optimisation of ionic models and the modelling process outlined are generic and applicable to image-based computer reconstruction and simulation of excitable tissue.
Fitting a code-red virus spread model: An account of putting theory into practice
Kolesnichenko, A.V.; Haverkort, Boudewijn R.H.M.; Remke, Anne Katharina Ingrid; de Boer, Pieter-Tjerk
This paper is about fitting a model for the spreading of a computer virus to measured data, contributing not only the fitted model but, equally importantly, an account of the process of getting there. Over the last years, there has been an increased interest in epidemic models to study the speed of
Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.
DeCarlo, Lawrence T
2003-02-01
The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
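The unequal-variance normal model that the article fits via ordinal regression can also be illustrated with a hand-rolled zROC line fit, where the slope estimates the noise-to-signal standard-deviation ratio. This is an illustrative analogue built from first principles, not the SPSS PLUM procedure:

```python
from statistics import NormalDist

def zroc_fit(hit_rates, fa_rates):
    """Least-squares line through the zROC points: z(H) = a + b * z(F).
    Under the unequal-variance Gaussian signal detection model, the slope
    b estimates the noise/signal standard-deviation ratio, and a the
    intercept (related to sensitivity). Rates must lie strictly in (0, 1)."""
    z = NormalDist().inv_cdf
    zh = [z(h) for h in hit_rates]
    zf = [z(f) for f in fa_rates]
    n = len(zh)
    mf = sum(zf) / n
    mh = sum(zh) / n
    b = (sum((x - mf) * (y - mh) for x, y in zip(zf, zh))
         / sum((x - mf) ** 2 for x in zf))
    a = mh - b * mf
    return a, b
```

Feeding hit and false-alarm rates generated from a known line recovers its intercept and slope; a fitted slope below 1 is the classic signature of greater signal-distribution variance that motivates the unequal-variance model.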
Flexible competing risks regression modeling and goodness-of-fit
DEFF Research Database (Denmark)
Scheike, Thomas; Zhang, Mei-Jie
2008-01-01
In this paper we consider different approaches for estimation and assessment of covariate effects for the cumulative incidence curve in the competing risks model. The classic approach is to model all cause-specific hazards and then estimate the cumulative incidence curve based on these cause-specific hazards. Another recent approach is to model the cumulative incidence directly by a proportional model (Fine and Gray, J Am Stat Assoc 94:496-509, 1999), and then obtain direct estimates of how covariates influence the cumulative incidence curve. We consider a simple and flexible class of regression
Zhang, Yichen; Tan, Jonathan C.
2018-01-01
We present a continuum radiative transfer model grid for fitting observed spectral energy distributions (SEDs) of massive protostars. The model grid is based on the paradigm of core accretion theory for massive star formation, with pre-assembled gravitationally bound cores as initial conditions. In particular, following the turbulent core model, initial core properties are set primarily by their mass and the pressure of their ambient clump. We then model the evolution of the protostar and its surrounding structures in a self-consistent way. The model grid contains about 9000 SEDs with four free parameters: initial core mass, the mean surface density of the environment, the protostellar mass, and the inclination. The model grid is used to fit observed SEDs via χ² minimization, with the foreground extinction additionally estimated. We demonstrate the fitting process and results using the example of the massive protostar G35.20-0.74. Compared with other SED model grids currently used for massive star formation studies, the properties of the protostar and its surrounding structures are more physically connected in our model grid, which reduces the dimensionality of the parameter space and the total number of models. This excludes possible fitting of models that are physically unrealistic or are not internally self-consistent in the context of the turbulent core model. Thus, this model grid serves not only as a fitting tool to estimate properties of massive protostars, but also as a test of core accretion theory. The SED model grid is publicly released with this paper.
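The χ² grid search at the core of such SED fitting can be sketched in a few lines. This omits the flux scaling and foreground-extinction estimation of the released tool; the data layout and names are our own:

```python
def best_grid_model(obs_flux, obs_err, grid):
    """Pick the grid model minimizing chi^2 = sum(((obs - model)/err)**2).
    `grid` maps parameter tuples (e.g. core mass, clump surface density,
    protostellar mass, inclination) to model fluxes at the observed bands.
    A minimal sketch of grid-based SED fitting."""
    best, best_chi2 = None, float("inf")
    for params, model_flux in grid.items():
        chi2 = sum(((o - m) / e) ** 2
                   for o, m, e in zip(obs_flux, model_flux, obs_err))
        if chi2 < best_chi2:
            best, best_chi2 = params, chi2
    return best, best_chi2
```

Because the grid's parameters are physically linked by the underlying evolutionary model, only self-consistent parameter combinations appear as keys, which is the point the abstract makes about excluding unrealistic fits.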
A No-Scale Inflationary Model to Fit Them All
Ellis, John; Nanopoulos, Dimitri; Olive, Keith
2014-01-01
The magnitude of B-mode polarization in the cosmic microwave background as measured by BICEP2 favours models of chaotic inflation with a quadratic $m^2 \\phi^2/2$ potential, whereas data from the Planck satellite favour a small value of the tensor-to-scalar perturbation ratio $r$ that is highly consistent with the Starobinsky $R + R^2$ model. Reality may lie somewhere between these two scenarios. In this paper we propose a minimal two-field no-scale supergravity model that interpolates between quadratic and Starobinsky-like inflation as limiting cases, while retaining the successful prediction $n_s \\simeq 0.96$.
Revisiting the Global Electroweak Fit of the Standard Model and Beyond with Gfitter
Flächer, Henning; Haller, J; Höcker, A; Mönig, K; Stelzer, J
2009-01-01
The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter projec...
Atmospheric Turbulence Modeling for Aero Vehicles: Fractional Order Fits
Kopasakis, George
2015-01-01
Atmospheric turbulence models are necessary for the design of both inlet/engine and flight controls, as well as for studying coupling between the propulsion and the vehicle structural dynamics for supersonic vehicles. Models based on the Kolmogorov spectrum have previously been utilized to model atmospheric turbulence. In this paper, a more accurate model is developed in its representative fractional order form, typical of atmospheric disturbances. This is accomplished by first scaling the Kolmogorov spectra to convert them into finite-energy von Kármán forms and then by deriving an explicit fractional circuit-filter-type analog for this model. This circuit model is utilized to develop a generalized formulation in the frequency domain to approximate the fractional order with products of first-order transfer functions, which enables accurate time domain simulations. The objective of this work is as follows: given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.
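Approximating a fractional-order response with products of first-order transfer functions can be illustrated with an Oustaloup-style construction: zero and pole corner frequencies are log-spaced across a frequency band so that the cascade's magnitude follows w**alpha there. This is a generic textbook scheme under our own parameter choices, not the paper's specific formulation:

```python
def fractional_sections(alpha, w_lo, w_hi, n):
    """Oustaloup-style construction: n zero/pole corner frequencies,
    log-spaced over [w_lo, w_hi], whose cascade of first-order sections
    approximates the fractional-order response s**alpha in that band
    (0 < alpha < 1)."""
    r = w_hi / w_lo
    zeros = [w_lo * r ** ((k + 0.5 - alpha / 2) / n) for k in range(n)]
    poles = [w_lo * r ** ((k + 0.5 + alpha / 2) / n) for k in range(n)]
    return zeros, poles

def magnitude(zeros, poles, w):
    """|H(jw)| for the cascade of first-order sections (1 + s/z)/(1 + s/p)."""
    s = 1j * w
    h = 1.0 + 0j
    for z, p in zip(zeros, poles):
        h *= (1 + s / z) / (1 + s / p)
    return abs(h)
```

Each section contributes +20 dB/decade between its zero and pole; interleaving them every fraction of a decade makes the average slope alpha * 20 dB/decade, i.e. the magnitude grows approximately as w**alpha across the band.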
Using a Person-Environment Fit Model to Predict Job Involvement and Organizational Commitment.
Blau, Gary J.
1987-01-01
Using a sample of registered nurses (N=228) from a large urban hospital, this longitudinal study tested the applicability of a person-environment fit model for predicting job involvement and organizational commitment. Results indicated the proposed person-environment fit model is useful for predicting job involvement, but not organizational…
Counseling as a Stochastic Process: Fitting a Markov Chain Model to Initial Counseling Interviews
Lichtenberg, James W.; Hummel, Thomas J.
1976-01-01
The goodness of fit of a first-order Markov chain model to six counseling interviews was assessed by using chi-square tests of homogeneity and simulating sampling distributions of selected process characteristics against which the same characteristics in the actual interviews were compared. The model fit four of the interviews. Presented at AERA,…
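A basic version of such a goodness-of-fit check — comparing observed transition counts against the counts expected if successive states were independent — can be sketched as follows. This is a textbook chi-square computation, not the study's exact homogeneity procedure:

```python
def transition_counts(seq, states):
    """Count first-order transitions in a state sequence."""
    counts = {a: {b: 0 for b in states} for a in states}
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return counts

def chi2_first_order(seq, states):
    """Chi-square statistic comparing observed transition counts with the
    counts expected if successive states were independent. Large values
    indicate serial dependence, favoring a first-order Markov chain over
    a zero-order (independence) model."""
    counts = transition_counts(seq, states)
    row = {a: sum(counts[a].values()) for a in states}
    col = {b: sum(counts[a][b] for a in states) for b in states}
    n = sum(row.values())
    chi2 = 0.0
    for a in states:
        for b in states:
            expected = row[a] * col[b] / n
            if expected > 0:
                chi2 += (counts[a][b] - expected) ** 2 / expected
    return chi2
```

A strictly alternating sequence such as "ababab…" yields a large statistic (strong serial dependence), while a sequence whose transitions match the marginal state frequencies yields a statistic near zero.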
Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models
Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning
2012-01-01
The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…
Bauer, Daniel J.; Sterba, Sonya K.
2011-01-01
Previous research has compared methods of estimation for fitting multilevel models to binary data, but there are reasons to believe that the results will not always generalize to the ordinal case. This article thus evaluates (a) whether and when fitting multilevel linear models to ordinal outcome data is justified and (b) which estimator to employ…
Fitness model for the Italian interbank money market
de Masi, G.; Iori, G.; Caldarelli, G.
2006-12-01
We use the theory of complex networks to quantitatively characterize the formation of communities in a particular financial market. The system is composed of different banks exchanging loans and debts of liquidity on a daily basis. Through topological analysis and by means of a model of network growth, we can determine the formation of different groups of banks characterized by different business strategies. The model, based on Pareto's law, makes no use of growth or preferential attachment, and it correctly reproduces the various statistical properties of the system. We believe that this network modeling of the market could be an efficient way to evaluate the impact of different policies on the market for liquidity.
Information Theoretic Tools for Parameter Fitting in Coarse Grained Models
Kalligiannaki, Evangelia
2015-01-07
We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force matching methods to nonequilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.
Hierarchical shrinkage priors and model fitting for high-dimensional generalized linear models.
Yi, Nengjun; Ma, Shuangge
2012-11-26
Genetic and other scientific studies routinely generate very many predictor variables, which can be naturally grouped, with predictors in the same groups being highly correlated. It is desirable to incorporate the hierarchical structure of the predictor variables into generalized linear models for simultaneous variable selection and coefficient estimation. We propose two prior distributions: hierarchical Cauchy and double-exponential distributions, on coefficients in generalized linear models. The hierarchical priors include both variable-specific and group-specific tuning parameters, thereby not only adopting different shrinkage for different coefficients and different groups but also providing a way to pool the information within groups. We fit generalized linear models with the proposed hierarchical priors by incorporating flexible expectation-maximization (EM) algorithms into the standard iteratively weighted least squares as implemented in the general statistical package R. The methods are illustrated with data from an experiment to identify genetic polymorphisms for survival of mice following infection with Listeria monocytogenes. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/).
Extended Langmuir model fitting to the filter column adsorption data ...
African Journals Online (AJOL)
Leachate samples collected at different depths of the WQD column were analyzed for concentrations of zinc and copper ions using an atomic absorption spectrometer. The removal efficiency was around 94% and 92% for zinc and copper, respectively, using a column depth of 1 m at a flow rate of 12 ml/min. The adsorption model ...
Design of spatial experiments: Model fitting and prediction
Energy Technology Data Exchange (ETDEWEB)
Fedorov, V.V.
1996-03-01
The main objective of the paper is to describe and develop model oriented methods and algorithms for the design of spatial experiments. Unlike many other publications in this area, the approach proposed here is essentially based on the ideas of convex design theory.
Reducing uncertainty based on model fitness: Application to a ...
African Journals Online (AJOL)
A weakness of global sensitivity and uncertainty analysis methodologies is the often subjective definition of prior parameter probability distributions, especially ... The reservoir representing the central part of the wetland, where flood waters separate into several independent distributaries, is a keystone area within the model.
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.
Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin
2015-02-01
To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
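The notion of Pareto-optimality used here is simple to implement: keep an input set unless some other set fits every calibration target at least as well and at least one target strictly better. A minimal sketch, with labels and data layout of our own choosing:

```python
def pareto_frontier(input_sets):
    """Return labels of calibration input sets on the Pareto frontier.
    `input_sets` is a list of (label, gof_vector) pairs, where each
    goodness-of-fit entry is a distance to one calibration target
    (lower = better fit). A set is kept unless some other set dominates
    it: at least as good on every target and strictly better on one."""
    def dominates(u, v):
        return (all(a <= b for a, b in zip(u, v))
                and any(a < b for a, b in zip(u, v)))
    return [label for label, gof in input_sets
            if not any(dominates(other, gof) for _, other in input_sets)]
```

Unlike a weighted-sum goodness-of-fit score, this selection requires no weighting choices, which is exactly the arbitrariness the authors set out to eliminate.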
Virtual Suit Fit Assessment Using Body Shape Model
National Aeronautics and Space Administration — Shoulder injury is one of the most serious risks for crewmembers in long-duration spaceflight. While suboptimal suit fit and contact pressures between the shoulder...
Diploid biological evolution models with general smooth fitness landscapes and recombination.
Saakian, David B; Kirakosyan, Zara; Hu, Chin-Kun
2008-06-01
Using a Hamilton-Jacobi equation approach, we obtain analytic equations for steady-state population distributions and mean fitness functions for Crow-Kimura and Eigen-type diploid biological evolution models with general smooth hypergeometric fitness landscapes. Our numerical solutions of diploid biological evolution models confirm the analytic equations obtained. We also study the parallel diploid model for the simple case of recombination and calculate the variance of distribution, which is consistent with numerical results.
Goodness-of-fit tests in mixed models
Claeskens, Gerda
2009-05-12
Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. © 2009 Sociedad de Estadística e Investigación Operativa.
Goodness-of-fit test for proportional subdistribution hazards model.
Zhou, Bingqing; Fine, Jason; Laird, Glen
2013-09-30
This paper concerns using modified weighted Schoenfeld residuals to test the proportionality of subdistribution hazards for the Fine-Gray model, similar to the tests proposed by Grambsch and Therneau for independently censored data. We develop a score test for the time-varying coefficients based on the modified Schoenfeld residuals derived assuming a certain form of non-proportionality. The methods perform well in simulations and a real data analysis of breast cancer data, where the treatment effect exhibits non-proportional hazards. Copyright © 2013 John Wiley & Sons, Ltd.
Some Statistics for Assessing Person-Fit Based on Continuous-Response Models
Ferrando, Pere Joan
2010-01-01
This article proposes several statistics for assessing individual fit based on two unidimensional models for continuous responses: linear factor analysis and Samejima's continuous response model. Both models are approached using a common framework based on underlying response variables and are formulated at the individual level as fixed regression…
A simple model of group selection that cannot be analyzed with inclusive fitness
van Veelen, M.; Luo, S.; Simon, B.
2014-01-01
A widespread claim in evolutionary theory is that every group selection model can be recast in terms of inclusive fitness. Although there are interesting classes of group selection models for which this is possible, we show that it is not true in general. With a simple set of group selection models,
Mann, Jaclyn K; Barton, John P; Ferguson, Andrew L; Omarjee, Saleha; Walker, Bruce D; Chakraborty, Arup; Ndung'u, Thumbi
2014-08-01
Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model) that is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple and 25 single mutations within these) predicted to be either harmful to HIV-1 viability or fitness-neutral were introduced into HIV-1 NL4-3 by site-directed mutagenesis and replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = -0.74, p = 3.6×10-6) are strongly correlated, and this was further strengthened in the regularized Ising model (r = -0.83, p = 3.7×10-12). Performance of the Potts model (r = -0.73, p = 9.7×10-9) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion, and
Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver
2012-01-01
Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…
Cai, Li; Lee, Taehun
2009-01-01
We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a…
The lz(p)* Person-Fit Statistic in an Unfolding Model Context
Tendeiro, Jorge N.
2017-01-01
Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded
The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting
Tao, Zhang; Li, Zhang; Dingjun, Chen
On the basis of the idea of second-order curve fitting, the number and scale of Chinese E-commerce sites are analyzed. A preventing-increase model is introduced in this paper, and the model parameters are solved with Matlab. The validity of the preventing-increase model is confirmed through numerical experiments. The experimental results show that the precision of the preventing-increase model is good.
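The second-order least-squares curve fitting this abstract relies on can be sketched as follows (illustrative Python with invented yearly counts, standing in for the paper's Matlab implementation):

```python
import numpy as np

# Invented yearly counts of e-commerce sites (year index, site count).
years = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
sites = np.array([100.0, 180.0, 310.0, 400.0, 520.0, 600.0])

# Fit y = a*t^2 + b*t + c by least squares.
a, b, c = np.polyfit(years, sites, deg=2)
model = np.polyval([a, b, c], years)

# Root-mean-square error of the fitted curve against the data.
rms = np.sqrt(np.mean((sites - model) ** 2))
print("coefficients:", a, b, c)
print("RMS error:", rms)
```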
Revisiting the global electroweak fit of the Standard Model and beyond with Gfitter
Flächer, H.; Goebel, M.; Haller, J.; Hoecker, A.; Mönig, K.; Stelzer, J.
2009-04-01
The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing for flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plug-ins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project, and presents state-of-the-art results for the global electroweak fit in the Standard Model (SM), and for a model with an extended Higgs sector (2HDM). Numerical and graphical results for fits with and without including the constraints from the direct Higgs searches at LEP and Tevatron are given. Perspectives for future colliders are analysed and discussed. In the SM fit including the direct Higgs searches, we find M_H = 116.4^{+18.3}_{-1.3} GeV, and the 2σ and 3σ allowed regions [114,145] GeV and [[113,168] and [180,225
Model fitting in two dimensions to small angle diffraction patterns from soft tissue
International Nuclear Information System (INIS)
Wilkinson, S J; Rogers, K D; Hall, C J
2006-01-01
In our research programme small angle x-ray scattering (SAXS) is used to provide information on the axial arrangement of collagen molecules as well as data about the state of other components of the extra cellular matrix (ECM) in human tissues. Derivation of parameters to describe and simplify the data is required for much of the SAXS patterns analysis. A method is presented here to achieve function fitting to collagen diffraction peaks along with a representation of the underlying diffuse scatter. A simple model was used which proved reliable in fitting a variety of 2D diffraction patterns. The logarithm of the scatter intensity over the area of the scatter image was taken to reduce the range and improve fitting accuracy. Our model was then used to fit the log data. The model consisted of a radial exponential diffuse scatter component added to a specified number of Gaussian peaks. In 2D the peak model is toroidal, each component being rotated about a common specified centre. Initial search parameters from a 1D averaged sector were supplied to the iterative 2D fitting routine. With the aid of data weighting and basic wavelet filtering, successful and reliable fitting of a specified 2D model to real data is achievable. The process is easily automated. Multiple SAXS patterns can be fitted without operator intervention. As described the model is simple enough to converge rapidly and yet allows image data to be parameterized to a form suitable for extracting the requisite information. The fitting method is flexible enough to be extended to achieve a more comprehensive and complex pattern fitting in two dimensions if this turns out to be necessary. It is our intention to implement orientation distribution functions in the near future by including an angular scaling factor
International Nuclear Information System (INIS)
Liang, Zhong Wei; Wang, Yi Jun; Ye, Bang Yan; Brauwer, Richard Kars
2012-01-01
In inspecting the detailed performance results of surface precision modeling in different external parameter conditions, the integrated chip surfaces should be evaluated and assessed during topographic spatial modeling processes. The application of surface fitting algorithms exerts a considerable influence on topographic mathematical features. The influence mechanisms caused by different surface fitting algorithms on the integrated chip surface facilitate the quantitative analysis of different external parameter conditions. By extracting the coordinate information from the selected physical control points and using a set of precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used for constructing micro topographic models with the obtained point cloud. In computing for the newly proposed mathematical features on surface models, we construct the fuzzy evaluating data sequence and present a new three dimensional fuzzy quantitative evaluating method. Through this method, the value variation tendencies of topographic features can be clearly quantified. The fuzzy influence discipline among different surface fitting algorithms, topography spatial features, and the external science parameter conditions can be analyzed quantitatively and in detail. In addition, quantitative analysis can provide final conclusions on the inherent influence mechanism and internal mathematical relation in the performance results of different surface fitting algorithms, topographic spatial features, and their scientific parameter conditions in the case of surface micro modeling. The performance inspection of surface precision modeling will be facilitated and optimized as a new research idea for micro-surface reconstruction that will be monitored in a modeling process
SPSS macros to compare any two fitted values from a regression model.
Weaver, Bruce; Dubois, Sacha
2012-12-01
In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests-particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
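The matrix-algebra method underlying the macros can be illustrated outside SPSS. For an OLS model, the difference between two fitted values is c'β̂ with c = x1 − x2, and its standard error is √(σ̂² c'(X'X)⁻¹c). A minimal numpy sketch on invented data (the macro interfaces themselves are not reproduced; the quadratic model and comparison points are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x, x**2])   # quadratic model: 1, x, x^2
y = 1.0 + 2.0 * x - 0.1 * x**2 + rng.normal(0, 1, n)

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y                     # OLS estimates
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])    # residual variance estimate

# Compare the fitted values at x = 2 and x = 7.
x1 = np.array([1.0, 2.0, 4.0])
x2 = np.array([1.0, 7.0, 49.0])
c = x1 - x2
diff = c @ beta                              # difference of fitted values
se = np.sqrt(sigma2 * c @ XtX_inv @ c)       # its standard error
ci = (diff - 1.96 * se, diff + 1.96 * se)    # approximate 95% CI
print(diff, se, ci)
```

Note that this comparison (fitted value at x = 2 versus x = 7 in a model with a polynomial term) is exactly the kind not captured by any single regression coefficient.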
Directory of Open Access Journals (Sweden)
Y. Kakinami
2009-08-01
Empirical models of Total Electron Content (TEC) based on functional fitting over Taiwan (120° E, 24° N) have been constructed using Global Positioning System (GPS) data from 1998 to 2007 during geomagnetically quiet conditions (Dst > −30 nT). The models provide TEC as a function of local time (LT), day of year (DOY), and solar activity (F), represented by 1-162 day means of F10.7 and EUV. Other models based on median values have also been constructed and compared with the functional-fitting models. Under the same values of the F parameter, the functional-fitting models show better accuracy than the median-based models in all cases. The functional-fitting model using daily EUV is the most accurate, with a root mean square error (RMS) of 9.2 TECu, compared with 10.4 TECu for the 15-day running median and 14.7 TECu for the International Reference Ionosphere 2007 (IRI2007) model. IRI2007 overestimates TEC when solar activity is low and underestimates it when solar activity is high. Although the 81-day centered running mean of F10.7 averaged with daily F10.7 is often used as an indicator of EUV, our results suggest that the average of F10.7 over the current day and the 1-54 days prior reproduces TEC better than the 81-day centered running mean. This paper compares a median-based model with a functional-fitting model for the first time. The results indicate that the functional-fitting model performs better than the median-based one. We also find that EUV radiation is essential for deriving an optimal TEC.
Gfitter - Revisiting the global electroweak fit of the Standard Model and beyond
International Nuclear Information System (INIS)
Flaecher, H.; Hoecker, A.; Goebel, M.
2008-11-01
The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project, and presents state-of-the-art results for the global electroweak fit in the Standard Model, and for a model with an extended Higgs sector (2HDM). Numerical and graphical results for fits with and without including the constraints from the direct Higgs searches at LEP and Tevatron are given. Perspectives for future colliders are analysed and discussed. Including the direct Higgs searches, we find M_H = 116.4^{+18.3}_{-1.3} GeV, and the 2σ and 3σ allowed regions [114,145] GeV and [[113,168] and [180,225
Directory of Open Access Journals (Sweden)
Grant B. Morgan
2015-02-01
Bi-factor confirmatory factor models have been influential in research on cognitive abilities because they often better fit the data than correlated factors and higher-order models. They also instantiate a perspective that differs from that offered by other models. Motivated by previous work that hypothesized an inherent statistical bias of fit indices favoring the bi-factor model, we compared the fit of correlated factors, higher-order, and bi-factor models via Monte Carlo methods. When data were sampled from a true bi-factor structure, each of the approximate fit indices was more likely than not to identify the bi-factor solution as the best fitting. When samples were selected from a true multiple correlated factors structure, approximate fit indices were more likely overall to identify the correlated factors solution as the best fitting. In contrast, when samples were generated from a true higher-order structure, approximate fit indices tended to identify the bi-factor solution as best fitting. There was extensive overlap of fit values across the models regardless of true structure. Although one model may fit a given dataset best relative to the other models, each of the models tended to fit the data well in absolute terms. Given this variability, models must also be judged on substantive and conceptual grounds.
Assessing model fit in latent class analysis when asymptotics do not hold
van Kollenburg, Geert H.; Mulder, Joris; Vermunt, Jeroen K.
2015-01-01
The application of latent class (LC) analysis involves evaluating the LC model using goodness-of-fit statistics. To assess the misfit of a specified model, say with the Pearson chi-squared statistic, a p-value can be obtained using an asymptotic reference distribution. However, asymptotic p-values
Person-Fit Statistics for Joint Models for Accuracy and Speed
Fox, Jean Paul; Marianti, Sukaesi
2017-01-01
Response accuracy and response time data can be analyzed with a joint model to measure ability and speed of working, while accounting for relationships between item and person characteristics. In this study, person-fit statistics are proposed for joint models to detect aberrant response accuracy
Local and omnibus goodness-of-fit tests in classical measurement error models
Ma, Yanyuan
2010-09-14
We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated-i.e. all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.
An active contour model based on local fitted images for image segmentation.
Wang, Lei; Chang, Yan; Wang, Hui; Wu, Zhenzhou; Pu, Jiantao; Yang, Xiaodong
2017-12-01
Active contour models are popular and widely used for a variety of image segmentation applications with promising accuracy, but they may suffer from limited segmentation performances due to the presence of intensity inhomogeneity. To overcome this drawback, a novel region-based active contour model based on two different local fitted images is proposed by constructing a novel local hybrid image fitting energy, which is minimized in a variational level set framework to guide the evolving of contour curves toward the desired boundaries. The proposed model is evaluated and compared with several typical active contour models to segment synthetic and real images with different intensity characteristics. Experimental results demonstrate that the proposed model outperforms these models in terms of accuracy in image segmentation.
A goodness-of-fit test for occupancy models with correlated within-season revisits
Wright, Wilson; Irvine, Kathryn M.; Rodhouse, Thomas J.
2016-01-01
Occupancy modeling is important for exploring species distribution patterns and for conservation monitoring. Within this framework, explicit attention is given to species detection probabilities estimated from replicate surveys to sample units. A central assumption is that replicate surveys are independent Bernoulli trials, but this assumption becomes untenable when ecologists serially deploy remote cameras and acoustic recording devices over days and weeks to survey rare and elusive animals. Proposed solutions involve modifying the detection-level component of the model (e.g., first-order Markov covariate). Evaluating whether a model sufficiently accounts for correlation is imperative, but clear guidance for practitioners is lacking. Currently, an omnibus goodness-of-fit test using a chi-square discrepancy measure on unique detection histories is available for occupancy models (MacKenzie and Bailey, Journal of Agricultural, Biological, and Environmental Statistics, 9, 2004, 300; hereafter, MacKenzie–Bailey test). We propose a join count summary measure adapted from spatial statistics to directly assess correlation after fitting a model. We motivate our work with a dataset of multinight bat call recordings from a pilot study for the North American Bat Monitoring Program. We found in simulations that our join count test was more reliable than the MacKenzie–Bailey test for detecting inadequacy of a model that assumed independence, particularly when serial correlation was low to moderate. A model that included a Markov-structured detection-level covariate produced unbiased occupancy estimates except in the presence of strong serial correlation and a revisit design consisting only of temporal replicates. When applied to two common bat species, our approach illustrates that sophisticated models do not guarantee adequate fit to real data, underscoring the importance of model assessment. Our join count test provides a widely applicable goodness-of-fit test and
Tests of fit of historically-informed models of African American Admixture.
Gross, Jessica M
2018-02-01
African American populations in the U.S. formed primarily by mating between Africans and Europeans over the last 500 years. To date, studies of admixture have focused on either a one-time admixture event or continuous input into the African American population from Europeans only. Our goal is to gain a better understanding of the admixture process by examining models that take into account (a) assortative mating by ancestry in the African American population, (b) continuous input from both Europeans and Africans, and (c) historically informed variation in the rate of African migration over time. We used a model-based clustering method to generate distributions of African ancestry in three samples comprised of 147 African Americans from two published sources. We used a log-likelihood method to examine the fit of four models to these distributions and used a log-likelihood ratio test to compare the relative fit of each model. The mean ancestry estimates for our datasets of 77% African/23% European to 83% African/17% European ancestry are consistent with previous studies. We find admixture models that incorporate continuous gene flow from Europeans fit significantly better than one-time event models, and that a model involving continuous gene flow from Africans and Europeans fits better than one with continuous gene flow from Europeans only for two samples. Importantly, models that involve continuous input from Africans necessitate a higher level of gene flow from Europeans than previously reported. We demonstrate that models that take into account information about the rate of African migration over the past 500 years fit observed patterns of African ancestry better than alternative models. Our approach will enrich our understanding of the admixture process in extant and past populations. © 2017 Wiley Periodicals, Inc.
A scaled Lagrangian method for performing a least squares fit of a model to plant data
International Nuclear Information System (INIS)
Crisp, K.E.
1988-01-01
Due to measurement errors, even a perfect mathematical model will not be able to match all the corresponding plant measurements simultaneously. A further discrepancy may be introduced if an un-modelled change in conditions occurs within the plant which should have required a corresponding change in model parameters - e.g. a gradual deterioration in the performance of some component(s). Taking both these factors into account, what is required is that the overall discrepancy between the model predictions and the plant data is kept to a minimum. This process is known as 'model fitting', A method is presented for minimising any function which consists of the sum of squared terms, subject to any constraints. Its most obvious application is in the process of model fitting, where a weighted sum of squares of the differences between model predictions and plant data is the function to be minimised. When implemented within existing Central Electricity Generating Board computer models, it will perform a least squares fit of a model to plant data within a single job submission. (author)
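The objective described here, a weighted sum of squared differences between model predictions and plant data, is F(θ) = Σᵢ wᵢ (mᵢ(θ) − dᵢ)². For a model that is linear in its parameters this reduces to weighted least squares, sketched below on invented data (the paper's scaled Lagrangian machinery for handling constraints is not reproduced):

```python
import numpy as np

# Invented plant measurements d at operating points t, with weights
# reflecting confidence in each measurement.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
d = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
w = np.array([1.0, 1.0, 0.5, 1.0, 2.0])

# Linear-in-parameters model m(t; a, b) = a + b*t.
A = np.column_stack([np.ones_like(t), t])

# Minimise sum_i w_i * (A @ theta - d)_i**2 via the weighted normal
# equations: (A' W A) theta = A' W d.
W = np.diag(w)
theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ d)
print("fitted parameters:", theta)
```

For a nonlinear plant model, the same objective is typically minimised iteratively (e.g., Gauss-Newton), with each iteration solving a weighted linear problem of exactly this form.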
Efficient occupancy model-fitting for extensive citizen-science data
Morgan, Byron J. T.; Freeman, Stephen N.; Ridout, Martin S.; Brereton, Tom M.; Fox, Richard; Powney, Gary D.; Roy, David B.
2017-01-01
Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium data base, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species’ range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen
Fast fitting of non-Gaussian state-space models to animal movement data via Template Model Builder
DEFF Research Database (Denmark)
Albertsen, Christoffer Moesgaard; Whoriskey, Kim; Yurkowski, David
2015-01-01
recommend using the Laplace approximation combined with automatic differentiation (as implemented in the novel R package Template Model Builder; TMB) for the fast fitting of continuous-time multivariate non-Gaussian SSMs. Through Argos satellite tracking data, we demonstrate that the use of continuous...... are able to estimate additional parameters compared to previous methods, all without requiring a substantial increase in computational time. The model implementation is made available through the R package argosTrack....
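The speed of TMB rests on the Laplace approximation of the marginal likelihood over the random effects, with derivatives supplied by automatic differentiation. A toy sketch of the approximation itself on a normal-normal model, where it happens to be exact and can be checked against the closed form (this is an illustration of the idea, not the argosTrack implementation):

```python
import math

def laplace_log_marginal(joint_logpdf, d_joint, dd_joint, u0=0.0, iters=50):
    """Laplace approximation of log integral of exp(joint_logpdf(u)) du:
    find the mode by Newton's method, then use the curvature there."""
    u = u0
    for _ in range(iters):
        u -= d_joint(u) / dd_joint(u)
    return joint_logpdf(u) + 0.5 * math.log(2 * math.pi) - 0.5 * math.log(-dd_joint(u))

# Toy model: random effect u ~ N(0, s2u), observation y | u ~ N(u, s2e).
y, s2u, s2e = 1.3, 0.5, 0.2

def logjoint(u):
    return (-0.5 * math.log(2 * math.pi * s2u) - u * u / (2 * s2u)
            - 0.5 * math.log(2 * math.pi * s2e) - (y - u) ** 2 / (2 * s2e))

d  = lambda u: -u / s2u + (y - u) / s2e      # first derivative in u
dd = lambda u: -1 / s2u - 1 / s2e            # second derivative (constant, negative)

approx = laplace_log_marginal(logjoint, d, dd)
# For a Gaussian integrand the Laplace approximation is exact: y ~ N(0, s2u+s2e).
exact = -0.5 * math.log(2 * math.pi * (s2u + s2e)) - y * y / (2 * (s2u + s2e))
```

For non-Gaussian state-space models the integrand is not Gaussian, so the approximation is no longer exact, but the same mode-plus-curvature recipe applies per random effect.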
Shekhar, Karthik; Ruberman, Claire F; Ferguson, Andrew L; Barton, John P; Kardar, Mehran; Chakraborty, Arup K
2013-12-01
Mutational escape from vaccine-induced immune responses has thwarted the development of a successful vaccine against AIDS, whose causative agent is HIV, a highly mutable virus. Knowing the virus' fitness as a function of its proteomic sequence can enable rational design of potent vaccines, as this information can focus vaccine-induced immune responses to target mutational vulnerabilities of the virus. Spin models have been proposed as a means to infer intrinsic fitness landscapes of HIV proteins from patient-derived viral protein sequences. These sequences are the product of nonequilibrium viral evolution driven by patient-specific immune responses and are subject to phylogenetic constraints. How can such sequence data allow inference of intrinsic fitness landscapes? We combined computer simulations and variational theory à la Feynman to show that, in most circumstances, spin models inferred from patient-derived viral sequences reflect the correct rank order of the fitness of mutant viral strains. Our findings are relevant for diverse viruses.
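In the simplest special case of such spin models, independent sites with no couplings, the inferred fields follow directly from mutant frequencies, and the rank-order property discussed above is easy to see. An illustrative sketch with made-up frequencies (real inference of the kind the paper studies also fits pairwise couplings from sequence correlations):

```python
import math

# Hypothetical per-site mutant frequencies observed in a sequence alignment.
freqs = [0.05, 0.20, 0.40]

# Independent-site ("profile") model: field h_i = log(f_i / (1 - f_i)).
# The model "energy" of a binary sequence s is -sum(h_i * s_i),
# so lower energy corresponds to higher inferred fitness.
h = [math.log(f / (1 - f)) for f in freqs]

def energy(seq):
    return -sum(hi * si for hi, si in zip(h, seq))

wild_type = (0, 0, 0)
single_mutants = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
# Rank mutants from lowest energy (fittest) to highest (least fit):
ranked = sorted(single_mutants, key=energy)
```

Mutations that are common in the alignment get the least-costly fields, so the inferred rank order follows prevalence, which is the intuition behind the paper's result.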
A soluble model of evolution and extinction dynamics in a rugged fitness landscape
Sibani, Paolo
1997-01-01
We consider a continuum version of a previously introduced and numerically studied model of macroevolution (PRL 75, 2055, (1995)) in which agents evolve by an optimization process in a rugged fitness landscape and die due to their competitive interactions. We first formulate dynamical equations for the fitness distribution and the survival probability. Secondly we analytically derive the $t^{-2}$ law which characterizes the life time distribution of biological genera. Thirdly we discuss other...
Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.
Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei
2015-02-01
This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.
Fit Indexes, Lagrange Multipliers, Constraint Changes and Incomplete Data in Structural Models.
Bentler, P M
1990-04-01
Certain aspects of model modification and evaluation are discussed, with an emphasis on some points of view that expand upon or may differ from Kaplan (1990). The usefulness of Bentler-Bonett indexes is reiterated. When the degree of misspecification can be measured by the size of the noncentrality parameter of a χ² distribution, the comparative fit index provides a useful general index of model adequacy that does not require knowledge of sources of misspecification. The dependence of the Lagrange Multiplier χ² statistic on both the estimated multiplier parameter and the estimated constraint or parameter change is discussed. A sensitivity theorem that shows the effects of unit change in constraints on model fit is developed for model modification in structural models. Recent incomplete data methods, such as those developed by Kaplan and his collaborators, are extended to be applicable in a wider range of situations.
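The comparative fit index mentioned above has a standard closed form in terms of the noncentrality estimates of the target model and a null (baseline) model. A sketch assuming that usual CFI definition (the χ² and df inputs below are invented for illustration):

```python
def comparative_fit_index(chi2_model, df_model, chi2_null, df_null):
    """CFI = 1 - d_model / d_null, where d = max(chi2 - df, 0) estimates
    the noncentrality parameter; d_null is floored at d_model."""
    d_model = max(chi2_model - df_model, 0.0)
    d_null = max(chi2_null - df_null, d_model, 0.0)
    return 1.0 if d_null == 0 else 1.0 - d_model / d_null

# Hypothetical fit results: target model chi2 = 52 on 40 df, null chi2 = 900 on 45 df.
cfi = comparative_fit_index(52.0, 40, 900.0, 45)
```

A model whose χ² does not exceed its degrees of freedom gets a CFI of 1, reflecting zero estimated misspecification.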
Modelling metabolic evolution on phenotypic fitness landscapes: a case study on C4 photosynthesis.
Heckmann, David
2015-12-01
How did the complex metabolic systems we observe today evolve through adaptive evolution? The fitness landscape is the theoretical framework to answer this question. Since experimental data on natural fitness landscapes is scarce, computational models are a valuable tool to predict landscape topologies and evolutionary trajectories. Careful assumptions about the genetic and phenotypic features of the system under study can simplify the design of such models significantly. The analysis of C4 photosynthesis evolution provides an example for accurate predictions based on the phenotypic fitness landscape of a complex metabolic trait. The C4 pathway evolved multiple times from the ancestral C3 pathway and models predict a smooth 'Mount Fuji' landscape accordingly. The modelled phenotypic landscape implies evolutionary trajectories that agree with data on modern intermediate species, indicating that evolution can be predicted based on the phenotypic fitness landscape. Future work will have to address how the structure of metabolic fitness landscapes changes with changing environments. This will not only answer important evolutionary questions about the reversibility of metabolic traits, but also suggest strategies to increase crop yields by engineering the C4 pathway into C3 plants. © 2015 Authors; published by Portland Press Limited.
GOODNESS-OF-FIT TEST FOR THE ACCELERATED FAILURE TIME MODEL BASED ON MARTINGALE RESIDUALS
Czech Academy of Sciences Publication Activity Database
Novák, Petr
2013-01-01
Roč. 49, č. 1 (2013), s. 40-59 ISSN 0023-5954 R&D Projects: GA MŠk(CZ) 1M06047 Grant - others:GA MŠk(CZ) SVV 261315/2011 Keywords : accelerated failure time model * survival analysis * goodness-of-fit Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2013/SI/novak-goodness-of-fit test for the aft model based on martingale residuals.pdf
A flexible, interactive software tool for fitting the parameters of neuronal models
Directory of Open Access Journals (Sweden)
Péter eFriedrich
2014-07-01
The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problem of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting
von Cramon-Taubadel, Noreen; Lycett, Stephen J
2008-05-01
Recent studies comparing craniometric and neutral genetic affinity matrices have concluded that, on average, human cranial variation fits a model of neutral expectation. While human craniometric and genetic data fit a model of isolation by geographic distance, it is not yet clear whether this is due to geographically mediated gene flow or human dispersal events. Recently, human genetic data have been shown to fit an iterative founder effect model of dispersal with an African origin, in line with the out-of-Africa replacement model for modern human origins, and Manica et al. (Nature 448 (2007) 346-349) have demonstrated that human craniometric data also fit this model. However, in contrast with the neutral model of cranial evolution suggested by previous studies, Manica et al. (2007) made the a priori assumption that cranial form has been subject to climatically driven natural selection and therefore correct for climate prior to conducting their analyses. Here we employ a modified theoretical and methodological approach to test whether human cranial variability fits the iterative founder effect model. In contrast with Manica et al. (2007) we employ size-adjusted craniometric variables, since climatic factors such as temperature have been shown to correlate with aspects of cranial size. Despite these differences, we obtain similar results to those of Manica et al. (2007), with up to 26% of global within-population craniometric variation being explained by geographic distance from sub-Saharan Africa. Comparative analyses using non-African origins do not yield significant results. The implications of these results are discussed in the light of the modern human origins debate. (c) 2007 Wiley-Liss, Inc.
International Nuclear Information System (INIS)
Ji Zhilong; Ma Yuanwei; Wang Dezhong
2014-01-01
Background: In radioactive nuclide atmospheric diffusion models, the empirical dispersion coefficients were deduced under certain experimental conditions, whose difference from nuclear accident conditions is a source of deviation. A better estimation of a radioactive nuclide's actual dispersion process can be obtained by correcting the dispersion coefficients with observation data, and the Genetic Algorithm (GA) is an appropriate method for this correction procedure. Purpose: This study analyzes the influence of the fitness function on the correction procedure and on the forecast ability of the diffusion model. Methods: GA, coupled with a Lagrangian dispersion model, was used in a numerical simulation to compare the impact of four fitness functions on the correction result. Results: In the numerical simulation, the fitness function that takes observation deviation into consideration stands out when significant deviation exists in the observed data. After performing the correction procedure on the Kincaid experiment data, a significant boost was observed in the diffusion model's forecast ability. Conclusion: As the results show, in order to improve dispersion models' forecast ability using GA, observation data should be weighted in the fitness function according to their errors. (authors)
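The abstract's central point, giving observations weight in the GA fitness function according to their error, corresponds to a chi-square-style cost. A hedged sketch of such a fitness function with invented observations (not the Kincaid data, and not necessarily the authors' exact functional form):

```python
def weighted_fitness(observed, predicted, sigmas):
    """Chi-square-style cost: observations with larger error sigma get less
    weight. A GA would minimize this value (or maximize its negative)."""
    return sum(((o - p) / s) ** 2
               for o, p, s in zip(observed, predicted, sigmas))

obs = [10.0, 20.0, 30.0]
sig = [1.0, 1.0, 10.0]          # hypothetical: the third detector is far noisier
pred_a = [10.0, 20.0, 40.0]     # candidate A misses only the noisy observation
pred_b = [12.0, 20.0, 30.0]     # candidate B misses a precise observation
```

Under unweighted least squares the two candidates would tie (both miss by a squared error of 100 vs 4, favouring B even more strongly); the weighting makes candidate A, which errs only where the data are unreliable, clearly preferred.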
Shavit Grievink, Liat; Penny, David; Hendy, Michael D; Holland, Barbara R
2010-05-01
Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction.
The fitting parameters extraction of conversion model of the low dose rate effect in bipolar devices
International Nuclear Information System (INIS)
Bakerenkov, Alexander
2011-01-01
Enhanced low dose rate sensitivity (ELDRS) in bipolar devices consists in an increase of the base current degradation of NPN and PNP transistors as the dose rate is decreased. As a result of almost 20 years of study, several physical models of the effect have been developed and described in detail. Accelerated test methods based on these models are used in standards. A conversion model of the effect, which describes the inverse S-shaped dependence of the excess base current on dose rate, has been proposed. This paper addresses the extraction of the fitting parameters of this conversion model.
Lee, Min Jin; Hong, Helen; Chung, Jin Wook
2014-03-01
We propose an automatic vessel segmentation method for vertebral arteries in CT angiography using combined circular and cylindrical model fitting. First, to generate multi-segmented volumes, the whole volume is automatically divided into four segments by anatomical properties of bone structures along the z-axis of the head and neck. To define an optimal volume circumscribing the vertebral arteries, anterior-posterior bounding and side boundaries are defined as the initial extracted vessel region. Second, the initial vessel candidates are tracked using circular model fitting. Since the boundaries of the vertebral arteries are ambiguous where the arteries pass through the transverse foramen in the cervical vertebra, the circle model is extended along the z-axis to a cylinder model to incorporate vessel information from neighbouring slices. Finally, the boundaries of the vertebral arteries are detected using graph-cut optimization. Experiments show that the proposed method provides accurate results without bone artifacts and eroded vessels in the cervical vertebra.
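One common way to realize a circular model fitting step like the one above is the algebraic (Kåsa) least-squares circle fit, which is linear in its parameters. The abstract does not specify the authors' exact fitting procedure, so this is only an illustrative sketch:

```python
import math

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def fit_circle(pts):
    """Kåsa fit: x^2 + y^2 = 2ax + 2by + c is linear in (2a, 2b, c);
    centre (a, b), radius sqrt(c + a^2 + b^2). Solve the normal equations."""
    S = [[0.0] * 3 for _ in range(3)]
    t = [0.0] * 3
    for x, y in pts:
        row = (x, y, 1.0)
        z = x * x + y * y
        for i in range(3):
            t[i] += row[i] * z
            for j in range(3):
                S[i][j] += row[i] * row[j]
    A, B, C = solve3(S, t)
    a, b = A / 2.0, B / 2.0
    return a, b, math.sqrt(C + a * a + b * b)

# Four points lying exactly on the circle with centre (1, 2) and radius 3.
cx, cy, r = fit_circle([(4, 2), (1, 5), (-2, 2), (1, -1)])
```

Extending this to the cylindrical model amounts to fitting circles across neighbouring slices with a shared axis, as the abstract describes.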
James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll
2003-01-01
This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
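The core of regression calibration in the simplest linear case is replacing the error-prone proxy W with E[X | W] before regressing. A sketch assuming a known measurement-error variance (the variable names and values below are illustrative, not from Hardin and Carroll's notation):

```python
import random
from statistics import mean

random.seed(7)
n = 10000
sigma2_x, sigma2_u = 1.0, 0.5         # true-covariate and measurement-error variances
x = [random.gauss(0, sigma2_x ** 0.5) for _ in range(n)]
w = [xi + random.gauss(0, sigma2_u ** 0.5) for xi in x]   # error-prone proxy
y = [2.0 * xi + random.gauss(0, 0.3) for xi in x]         # outcome: true slope 2 on x

def slope(u, v):
    mu, mv = mean(u), mean(v)
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
            / sum((a - mu) ** 2 for a in u))

naive = slope(w, y)                       # attenuated towards zero
lam = sigma2_x / (sigma2_x + sigma2_u)    # reliability ratio
mw = mean(w)
x_hat = [mw + lam * (wi - mw) for wi in w]   # E[X | W]: the calibrated covariate
corrected = slope(x_hat, y)
```

Regressing on the proxy attenuates the slope by the reliability ratio (here to about 2·(2/3) ≈ 1.33); regressing on the calibrated covariate recovers a slope near 2. In practice the error variance is estimated from replicates or instrumental variables, as the paper discusses.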
Stojek, Monika M K; Montoya, Amanda K; Drescher, Christopher F; Newberry, Andrew; Sultan, Zain; Williams, Celestine F; Pollock, Norman K; Davis, Catherine L
We used mediation models to examine the mechanisms underlying the relationships among physical fitness, sleep-disordered breathing (SDB), symptoms of depression, and cognitive functioning. We conducted a cross-sectional secondary analysis of the cohorts involved in the 2003-2006 project PLAY (a trial of the effects of aerobic exercise on health and cognition) and the 2008-2011 SMART study (a trial of the effects of exercise on cognition). A total of 397 inactive overweight children aged 7-11 received a fitness test, standardized cognitive test (Cognitive Assessment System, yielding Planning, Attention, Simultaneous, Successive, and Full Scale scores), and depression questionnaire. Parents completed a Pediatric Sleep Questionnaire. We used bootstrapped mediation analyses to test whether SDB mediated the relationship between fitness and depression and whether SDB and depression mediated the relationship between fitness and cognition. Fitness was negatively associated with depression (B = -0.041; 95% CI, -0.06 to -0.02) and SDB (B = -0.005; 95% CI, -0.01 to -0.001). SDB was positively associated with depression (B = 0.99; 95% CI, 0.32 to 1.67) after controlling for fitness. The relationship between fitness and depression was mediated by SDB (indirect effect = -0.005; 95% CI, -0.01 to -0.0004). The relationship between fitness and the attention component of cognition was independently mediated by SDB (indirect effect = 0.058; 95% CI, 0.004 to 0.13) and depression (indirect effect = -0.071; 95% CI, -0.01 to -0.17).
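A bootstrapped mediation analysis of the kind reported above rests on estimating the indirect effect a·b (the X→M slope times the coefficient of M in Y ~ X + M) and bootstrapping its distribution. A self-contained sketch on simulated data with the same sign pattern as the fitness→SDB→depression chain (the effect sizes and sample are invented, not the PLAY/SMART data):

```python
import random
from statistics import mean

def simple_slope(x, y):
    mx, my = mean(x), mean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def partial_slope(x, m, y):
    """Coefficient of m in OLS of y on (1, x, m), via centered normal equations."""
    mx, mm, my = mean(x), mean(m), mean(y)
    sxx = sum((a - mx) ** 2 for a in x)
    smm = sum((a - mm) ** 2 for a in m)
    sxm = sum((a - mx) * (b - mm) for a, b in zip(x, m))
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    smy = sum((a - mm) * (b - my) for a, b in zip(m, y))
    return (sxx * smy - sxm * sxy) / (sxx * smm - sxm ** 2)

def indirect_effect(x, m, y):
    return simple_slope(x, m) * partial_slope(x, m, y)

# Hypothetical chain with a = -0.5 (fitness lowers SDB) and b = 1.0 (SDB raises depression).
random.seed(3)
x = [random.gauss(0, 1) for _ in range(2000)]
m = [-0.5 * xi + random.gauss(0, 0.2) for xi in x]
y = [1.0 * mi + 0.3 * xi + random.gauss(0, 0.2) for xi, mi in zip(x, m)]
ab = indirect_effect(x, m, y)

# Percentile bootstrap for a 95% confidence interval on the indirect effect.
boots, idx = [], list(range(len(x)))
for _ in range(200):
    s = [random.choice(idx) for _ in idx]
    boots.append(indirect_effect([x[i] for i in s], [m[i] for i in s],
                                 [y[i] for i in s]))
boots.sort()
ci = (boots[4], boots[194])
```

Mediation is supported when the bootstrap interval for a·b excludes zero, which is the criterion the reported CIs express.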
A mathematical model of actin filament turnover for fitting FRAP data.
Halavatyi, Aliaksandr A; Nazarov, Petr V; Al Tanoury, Ziad; Apanasovich, Vladimir V; Yatskou, Mikalai; Friederich, Evelyne
2010-03-01
A novel mathematical model of the actin dynamics in living cells under steady-state conditions has been developed for fluorescence recovery after photobleaching (FRAP) experiments. As opposed to other FRAP fitting models, which use the average lifetime of actins in filaments and the actin turnover rate as fitting parameters, our model operates with unbiased actin association/dissociation rate constants and accounts for the filament length. The mathematical formalism is based on a system of stochastic differential equations. The derived equations were validated on synthetic theoretical data generated by a stochastic simulation algorithm adapted for the simulation of FRAP experiments. Consistent with experimental findings, the results of this work showed that (1) fluorescence recovery is a function of the average filament length, (2) the F-actin turnover and the FRAP are accelerated in the presence of actin nucleating proteins, (3) the FRAP curves may exhibit both a linear and non-linear behaviour depending on the parameters of actin polymerisation, and (4) our model resulted in more accurate parameter estimations of actin dynamics as compared with other FRAP fitting models. Additionally, we provide a computational tool that integrates the model and that can be used for interpretation of FRAP data on actin cytoskeleton.
Comparison of Three Measures to Promote National Fitness in China by Mathematical Modeling
Directory of Open Access Journals (Sweden)
Pan Tang
2014-01-01
In this paper we established a mathematical model for national fitness in China. Based on a questionnaire and data from the General Administration of Sport of China and the National Bureau of Statistics of China, the dynamics of three classes of people are expressed by a system of three ordinary differential equations. Model parameters are estimated from the data. This study indicates that the national fitness policy put forward by the Chinese government is reasonable. By identifying the key parameter, we put forward the best measure to promote national fitness. In order to increase the number of people who frequently participate in sport exercise in a short period of time, if only one measure can be chosen, guiding people who never take part in physical exercise will be the best measure.
Development and design of a late-model fitness test instrument based on LabView
Xie, Ying; Wu, Feiqing
2010-12-01
Undergraduates are pioneers of China's modernization program and undertake the historic mission of rejuvenating our nation in the 21st century, so their physical fitness is vital. A smart fitness test system can help them understand their fitness and health conditions, so that they can choose more suitable approaches and make practical plans for exercising according to their own situation. Following future trends, a late-model fitness test instrument based on LabView has been designed to remedy the defects of today's instruments. The system hardware consists of five types of sensors with their peripheral circuits, an NI USB-6251 acquisition card and a computer, while the system software, built on LabView, includes modules for user registration, data acquisition, data processing and display, and data storage. The system, featured by modularization and an open structure, can be revised according to actual needs. Test results have verified the system's stability and reliability.
Evaluation of the uniformity of fit of general outcome prediction models
Moreno, R; Apolone, G; Miranda, DR
Objective: To compare the performance of the New Simplified Acute Physiology Score (SAPS II) and the New Admission Mortality Probability Model (MPM II0) within relevant subgroups using formal statistical assessment (uniformity of fit), Design: Analysis of the database of a multi-centre,
Checking the Adequacy of Fit of Models from Split-Plot Designs
DEFF Research Database (Denmark)
Almini, A. A.; Kulahci, Murat; Montgomery, D. C.
2009-01-01
One of the main features that distinguish split-plot experiments from other experiments is that they involve two types of experimental errors: the whole-plot (WP) error and the subplot (SP) error. Taking this into consideration is very important when computing measures of adequacy of fit for split-plot models. In this article, we propose the computation of two R², R²-adjusted, prediction error sum of squares (PRESS), and R²-prediction statistics to measure the adequacy of fit for the WP and the SP submodels in a split-plot design. This is complemented with the graphical analysis of the two types of errors to check for any violation of the underlying assumptions and the adequacy of fit of split-plot models. Using examples, we show how computing two measures of model adequacy of fit for each split-plot design model is appropriate and useful as they reveal whether the correct WP and SP effects have
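The R²-prediction statistic mentioned above is built from the PRESS (leave-one-out) residuals. A sketch for ordinary simple regression using the closed-form leverage shortcut; the article computes one such statistic for each of the WP and SP submodels, which is not reproduced here:

```python
from statistics import mean

def press(x, y):
    """Prediction error sum of squares for simple linear regression, via the
    shortcut PRESS = sum((e_i / (1 - h_ii))^2) with leverages h_ii."""
    n = len(x)
    mx, my = mean(x), mean(y)
    sxx = sum((a - mx) ** 2 for a in x)
    b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sxx
    b0 = my - b1 * mx
    total = 0.0
    for xi, yi in zip(x, y):
        e = yi - (b0 + b1 * xi)                 # ordinary residual
        h = 1.0 / n + (xi - mx) ** 2 / sxx      # leverage of this point
        total += (e / (1.0 - h)) ** 2           # deleted (leave-one-out) residual
    return total

# Small illustrative data set (invented).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 1.9, 3.2, 3.9, 5.1]
p = press(x, y)
r2_prediction = 1.0 - p / sum((yi - mean(y)) ** 2 for yi in y)
```

Because the deleted residuals never use the point being predicted, R²-prediction penalises overfitted models in a way the ordinary R² cannot.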
Fit Gap Analysis – The Role of Business Process Reference Models
Directory of Open Access Journals (Sweden)
Dejan Pajk
2013-12-01
Enterprise resource planning (ERP) systems support solutions for standard business processes such as financial, sales, procurement and warehouse processes. In order to improve the understandability and efficiency of their implementation, ERP vendors have introduced reference models that describe the processes and underlying structure of an ERP system. To select and successfully implement an ERP system, the capabilities of that system have to be compared with a company’s business needs. Based on this comparison, all of the fits and gaps must be identified and further analysed. This step usually forms part of ERP implementation methodologies and is called fit gap analysis. The paper gives a theoretical overview of methods for applying reference models and describes the fit gap analysis process in detail. The paper’s first contribution is its presentation of a fit gap analysis using standard business process modelling notation. The second contribution is the demonstration of a process-based comparison approach between a supply chain process and an ERP system process reference model. In addition to its theoretical contributions, the results can also be practically applied to projects involving the selection and implementation of ERP systems.
A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.
Glas, Cees A. W.; Meijer, Rob R.
A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…
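Mechanically, a posterior predictive check positions the observed discrepancy within the distribution of discrepancies computed from replicated data. A toy sketch for a Rasch-type person-fit check, where the item difficulties, the response pattern, and the "posterior" draws are all invented for illustration:

```python
import math, random

random.seed(11)
difficulties = [-1.5, -0.5, 0.0, 0.5, 1.5]   # hypothetical item difficulties
observed = [0, 1, 0, 1, 1]                   # a misfitting, reversed-looking pattern

def p_correct(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def discrepancy(resp, theta):
    """Minus log-likelihood of the pattern: large means surprising under the model."""
    return -sum(math.log(p_correct(theta, b) if r else 1 - p_correct(theta, b))
                for r, b in zip(resp, difficulties))

# Stand-in for MCMC draws from the posterior of this person's ability.
posterior_theta = [random.gauss(0.3, 0.4) for _ in range(2000)]

exceed = 0
for theta in posterior_theta:
    # Replicate a response pattern under the model at this posterior draw.
    rep = [1 if random.random() < p_correct(theta, b) else 0 for b in difficulties]
    if discrepancy(rep, theta) >= discrepancy(observed, theta):
        exceed += 1
ppp = exceed / len(posterior_theta)   # posterior predictive p-value
```

A small posterior predictive p-value flags the person as misfitting: model-generated patterns are rarely as surprising as the observed one.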
Flexible Fitting of Atomic Models into Cryo-EM Density Maps Guided by Helix Correspondences.
Dou, Hang; Burrows, Derek W; Baker, Matthew L; Ju, Tao
2017-06-20
Although electron cryo-microscopy (cryo-EM) has recently achieved resolutions of better than 3 Å, at which point molecular modeling can be done directly from the density map, analysis and annotation of a cryo-EM density map still primarily rely on fitting atomic or homology models to the density map. In this article, we present, to our knowledge, a new method for flexible fitting of known or modeled protein structures into cryo-EM density maps. Unlike existing methods that are guided by local density gradients, our method is guided by correspondences between the α-helices in the density map and model, and does not require an initial rigid-body fitting step. Compared with current methods on both simulated and experimental density maps, our method not only achieves greater accuracy for proteins with large deformations but also runs as fast or faster than many of the other flexible fitting routines. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Fitting the CDO correlation skew: a tractable structural jump-diffusion model
DEFF Research Database (Denmark)
Willemann, Søren
2007-01-01
allowing for instantaneous calibration to heterogeneous CDS curves and fast computation of CDO tranche spreads. We calibrate the model to CDX and iTraxx data from February 2007 and achieve a satisfactory fit. To price the senior tranches for both indices, we require a risk-neutral probability of a market...
Wang, Chee Keng John; Pyun, Do Young; Liu, Woon Chia; Lim, Boon San Coral; Li, Fuzhong
2013-01-01
Using a multilevel latent growth curve modeling (LGCM) approach, this study examined longitudinal change in levels of physical fitness performance over time (i.e. four years) in young adolescents aged from 12-13 years. The sample consisted of 6622 students from 138 secondary schools in Singapore. Initial analyses found between-school variation on…
Non-Uniqueness of the Geometry of Interplanetary Magnetic Flux Ropes Obtained from Model-Fitting
Marubashi, K.; Cho, K.-S.
2015-12-01
Since the early recognition of the important role of interplanetary magnetic flux ropes (IPFRs) in carrying southward magnetic fields to the Earth, many attempts have been made to determine the structure of IPFRs by model-fitting analyses of the interplanetary magnetic field variations. This paper describes the results of fitting analyses for three selected solar wind structures in the latter half of 2014. In the fitting analysis, special attention was paid to identifying all the possible models or geometries that can reproduce the observed magnetic field variation. As a result, three or four geometries have been found for each of the three cases. The non-uniqueness of the fitted results includes (1) the different geometries naturally stemming from the difference in the models used for fitting, and (2) an unexpected result that either magnetic field chirality, left-handed or right-handed, can reproduce the observation in some cases. Thus we conclude that model-fitting cannot always give us a unique geometry of the observed magnetic flux rope. In addition, we have found that the magnetic field chirality of a flux rope cannot be uniquely inferred from the sense of field vector rotation observed in the plane normal to the Earth-Sun line; the sense of rotation changes depending on the direction of the flux rope axis. These findings exert an important impact on studies aimed at the geometrical relationships between the flux ropes and the magnetic field structures in the solar corona where the flux ropes were produced, such studies being an important step toward predicting geomagnetic storms based on observations of solar eruption phenomena.
Development and Analysis of Volume Multi-Sphere Method Model Generation using Electric Field Fitting
Ingram, G. J.
Electrostatic modeling of spacecraft has wide-reaching applications such as detumbling space debris in the Geosynchronous Earth Orbit regime before docking, servicing and tugging space debris to graveyard orbits, and Lorentz augmented orbits. The viability of electrostatic actuation control applications relies on faster-than-realtime characterization of the electrostatic interaction. The Volume Multi-Sphere Method (VMSM) seeks the optimal placement and radii of a small number of equipotential spheres to accurately model the electrostatic force and torque on a conducting space object. Current VMSM models tuned using force and torque comparisons with commercially available finite element software are subject to the modeled probe size and numerical errors of the software. This work first investigates fitting of VMSM models to Surface-MSM (SMSM) generated electrical field data, removing modeling dependence on probe geometry while significantly increasing performance and speed. A proposed electric field matching cost function is compared to a force and torque cost function, the inclusion of a self-capacitance constraint is explored and 4 degree-of-freedom VMSM models generated using electric field matching are investigated. The resulting E-field based VMSM development framework is illustrated on a box-shaped hub with a single solar panel, and convergence properties of select models are qualitatively analyzed. Despite the complex non-symmetric spacecraft geometry, elegantly simple 2-sphere VMSM solutions provide force and torque fits within a few percent.
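An electric-field-matching cost of the kind proposed can be sketched by superposing point-charge fields for the model spheres and summing squared differences against reference field samples. The charges, positions, and sample points below are invented, and a real VMSM optimizer would also adjust sphere radii and enforce the self-capacitance constraint discussed above:

```python
import math

K = 8.99e9  # Coulomb constant [N*m^2/C^2]

def e_field(charges, positions, point):
    """Superposed point-charge E-field (each sphere treated as a point charge)."""
    ex = ey = ez = 0.0
    for q, (px, py, pz) in zip(charges, positions):
        dx, dy, dz = point[0] - px, point[1] - py, point[2] - pz
        r2 = dx * dx + dy * dy + dz * dz
        f = K * q / (r2 * math.sqrt(r2))
        ex += f * dx; ey += f * dy; ez += f * dz
    return (ex, ey, ez)

def field_cost(model_q, model_pos, ref_field, sample_points):
    """Sum of squared E-field differences at the sample points --
    the quantity a field-matching VMSM optimizer would minimize."""
    c = 0.0
    for pt, ref in zip(sample_points, ref_field):
        e = e_field(model_q, model_pos, pt)
        c += sum((a - b) ** 2 for a, b in zip(e, ref))
    return c

# "Truth": two charges; candidate A matches it, candidate B lumps them into one.
truth_q, truth_pos = [1e-6, 1e-6], [(-0.5, 0.0, 0.0), (0.5, 0.0, 0.0)]
pts = [(2.0, 1.0, 0.0), (0.0, 2.0, 0.5), (-1.5, -1.0, 1.0)]
ref = [e_field(truth_q, truth_pos, p) for p in pts]
cost_match = field_cost(truth_q, truth_pos, ref, pts)
cost_lumped = field_cost([2e-6], [(0.0, 0.0, 0.0)], ref, pts)
```

In this stand-in for SMSM reference data, the matching configuration scores zero cost while the lumped one does not, which is the discrimination the optimizer exploits.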
A hands-on approach for fitting long-term survival models under the GAMLSS framework.
de Castro, Mário; Cancho, Vicente G; Rodrigues, Josemar
2010-02-01
In many data sets from clinical studies there are patients insusceptible to the occurrence of the event of interest. Survival models which ignore this fact are generally inadequate. The main goal of this paper is to describe an application of the generalized additive models for location, scale, and shape (GAMLSS) framework to the fitting of long-term survival models. In this work the number of competing causes of the event of interest follows the negative binomial distribution. In this way, some well-known models found in the literature are characterized as particular cases of our proposal. The model is conveniently parameterized in terms of the cured fraction, which is then linked to covariates. We explore the use of the gamlss package in R as a powerful tool for inference in long-term survival models. The procedure is illustrated with a numerical example. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
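To make the cured-fraction parameterization concrete, here is a minimal Python sketch of the classical mixture cure model, one of the well-known special cases that the negative binomial family generalizes. The function names, the exponential latency distribution, and the logit link are illustrative assumptions, not the paper's code:

```python
import math

def mixture_cure_survival(t, p_cure, rate):
    """Population survival S(t) = p + (1 - p) * S0(t) under the classical
    mixture cure model, with an (assumed) exponential latency
    S0(t) = exp(-rate * t); p_cure is the long-term survival plateau."""
    return p_cure + (1.0 - p_cure) * math.exp(-rate * t)

def cured_fraction_logit(x, beta0, beta1):
    """Cured fraction linked to a covariate x through a logit link,
    one standard way of letting the cured fraction depend on covariates."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

# everyone survives at t = 0; survival plateaus at the cured fraction
s_start = mixture_cure_survival(0.0, 0.3, 0.5)    # 1.0
s_late = mixture_cure_survival(50.0, 0.3, 0.5)    # ~0.3
```

The plateau behavior is the defining feature of long-term survival models: the population survival curve levels off at the cured fraction instead of decaying to zero.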
Fitting additive hazards models for case-cohort studies: a multiple imputation approach.
Jung, Jinhyouk; Harel, Ofer; Kang, Sangwook
2016-07-30
In this paper, we consider fitting semiparametric additive hazards models for case-cohort studies using a multiple imputation approach. In a case-cohort study, main exposure variables are measured only on some selected subjects, but other covariates are often available for the whole cohort. We consider this a special case of a covariate missing by design. We propose to employ a popular incomplete-data method, multiple imputation, for estimation of the regression parameters in additive hazards models. For imputation models, an imputation modeling procedure based on rejection sampling is developed. A simple imputation model that can naturally be applied to a general missing-at-random situation is also considered and compared with the rejection sampling method via extensive simulation studies. In addition, misspecification of the imputation model is investigated. The proposed procedures are illustrated using a cancer data example. Copyright © 2015 John Wiley & Sons, Ltd.
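Once the imputed data sets are analyzed, the per-imputation estimates are pooled. A minimal Python sketch of the standard combining step (Rubin's rules); the numbers are illustrative, not from the paper:

```python
def rubins_rules(estimates, variances):
    """Pool M completed-data analyses: the combined estimate is the mean,
    and the total variance is W + (1 + 1/M) * B, where W is the mean
    within-imputation variance and B the between-imputation variance."""
    m = len(estimates)
    qbar = sum(estimates) / m
    w = sum(variances) / m                                   # within
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)    # between
    return qbar, w + (1.0 + 1.0 / m) * b

# illustrative estimates/variances from M = 5 imputed data sets
est = [1.10, 0.95, 1.05, 1.02, 0.98]
var = [0.04, 0.05, 0.04, 0.05, 0.04]
q, total_var = rubins_rules(est, var)
```

The extra (1 + 1/M)·B term is what inflates the pooled variance to reflect uncertainty about the missing values themselves.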
International Nuclear Information System (INIS)
Mbagwu, J.S.C.
1993-10-01
Six infiltration models, some obtained by reformulating the fitting parameters of the classical Kostiakov (1932) and Philip (1957) equations, were investigated for their ability to describe water infiltration into highly permeable sandy soils from the Nsukka plains of SE Nigeria. The models were Kostiakov, Modified Kostiakov (A), Modified Kostiakov (B), Philip, Modified Philip (A) and Modified Philip (B). Infiltration data were obtained from double-ring infiltrometers on field plots established on a Kandic Paleustult (Nkpologu series) to investigate the effects of land use on soil properties and maize yield. The treatments were: (i) tilled-mulched (TM), (ii) tilled-unmulched (TU), (iii) untilled-mulched (UM), (iv) untilled-unmulched (UU) and (v) continuous pasture (CP). Cumulative infiltration was highest on the TM and lowest on the CP plots. All estimated model parameters obtained by the best fit of measured data differed significantly among the treatments. Based on the magnitude of R² values, the Kostiakov, Modified Kostiakov (A), Philip and Modified Philip (A) models provided the best predictions of cumulative infiltration as a function of time. Comparing experimental with model-predicted cumulative infiltration showed, however, that on all treatments the values predicted by the classical Kostiakov, Philip and Modified Philip (A) models deviated most from experimental data. The other models produced values that agreed very well with measured data. Considering the ease of determining the fitting parameters, it is proposed that on soils with high infiltration rates either the Modified Kostiakov model (I = Kt^a + I_c t) or the Modified Philip model (I = St^(1/2) + I_c t), where I is cumulative infiltration, K the time coefficient, t the time elapsed, a the time exponent, I_c the equilibrium infiltration rate and S the soil water sorptivity, be used for routine characterization of the infiltration process. (author). 33 refs, 3 figs, 6 tabs
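Since I = Kt^a + I_c t is linear in K and I_c once the exponent a is fixed, the Modified Kostiakov parameters can be recovered by scanning a over a grid and solving a 2x2 least-squares problem at each step. A rough Python sketch on synthetic data (not the paper's measurements):

```python
def fit_modified_kostiakov(t, I):
    """Fit I = K*t**a + Ic*t by scanning the exponent a over a grid;
    for fixed a the model is linear in (K, Ic), so the 2x2 normal
    equations are solved in closed form."""
    best = None
    for k in range(5, 96):          # grid a = 0.05, 0.06, ..., 0.95
        a = k / 100.0
        s11 = sum(ti ** (2 * a) for ti in t)
        s12 = sum(ti ** (a + 1) for ti in t)
        s22 = sum(ti * ti for ti in t)
        b1 = sum(Ii * ti ** a for ti, Ii in zip(t, I))
        b2 = sum(Ii * ti for ti, Ii in zip(t, I))
        det = s11 * s22 - s12 * s12
        K = (b1 * s22 - b2 * s12) / det
        Ic = (s11 * b2 - s12 * b1) / det
        sse = sum((K * ti ** a + Ic * ti - Ii) ** 2 for ti, Ii in zip(t, I))
        if best is None or sse < best[0]:
            best = (sse, K, Ic, a)
    return best[1], best[2], best[3]

# synthetic cumulative-infiltration data with K = 2.0, a = 0.5, Ic = 0.3
t = [float(x) for x in range(1, 61)]
I = [2.0 * ti ** 0.5 + 0.3 * ti for ti in t]
K, Ic, a = fit_modified_kostiakov(t, I)
```

Profiling out the linear parameters this way avoids a full nonlinear optimization and cannot get stuck in a local minimum of the exponent.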
Wu, L.; Chow, D. S-L.; Tam, V.; Putcha, L.
2015-01-01
An intranasal gel formulation of scopolamine (INSCOP) was developed for the treatment of motion sickness. Bioavailability and pharmacokinetics (PK) were determined per Investigational New Drug (IND) evaluation guidance by the Food and Drug Administration. Earlier, we reported the development of a PK model that can predict the relationship between plasma, saliva and urinary scopolamine (SCOP) concentrations using data collected from an IND clinical trial with INSCOP. This data analysis project is designed to validate the reported best-fit PK model for SCOP by comparing observed and model-predicted SCOP concentration-time profiles after administration of INSCOP.
McNeish, Daniel; Hancock, Gregory R
2018-03-01
Lance, Beck, Fan, and Carter (2016) recently advanced six new fit indices and associated cutoff values for assessing data-model fit in the structural portion of traditional latent variable path models. The authors appropriately argued that, although most researchers' theoretical interest rests with the latent structure, they still rely on indices of global model fit that simultaneously assess both the measurement and structural portions of the model. As such, Lance et al. proposed indices intended to assess the structural portion of the model in isolation from the measurement model. Unfortunately, although these strategies separate the assessment of the structure from the fit of the measurement model, they do not isolate the structure's assessment from the quality of the measurement model. That is, even with a perfectly fitting measurement model, poorer quality (i.e., less reliable) measurements will yield a more favorable verdict regarding structural fit, whereas better quality (i.e., more reliable) measurements will yield a less favorable structural assessment. This phenomenon, referred to by Hancock and Mueller (2011) as the reliability paradox, affects not only traditional global fit indices but also the structural indices proposed by Lance et al. Fortunately, as this comment will clarify, indices proposed by Hancock and Mueller help to mitigate this problem and allow the structural portion of the model to be assessed independently of both the fit of the measurement model and the quality of the indicator variables contained therein. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
The FIT 2.0 Model - Fuel-cycle Integration and Tradeoffs
Energy Technology Data Exchange (ETDEWEB)
Steven J. Piet; Nick R. Soelberg; Layne F. Pincock; Eric L. Shaber; Gregory M Teske
2011-06-01
All mass streams from fuel separation and fabrication are products that must meet some set of product criteria: fuel feedstock impurity limits, waste acceptance criteria (WAC), material storage requirements (if any), or recycle material purity requirements such as zirconium for cladding or lanthanides for industrial use. These must be considered in a systematic and comprehensive way. The FIT model and the “system losses study” team that developed it [Shropshire2009, Piet2010b] are steps by the Fuel Cycle Technology program toward an analysis that accounts for the requirements and capabilities of each fuel cycle component, as well as major material flows within an integrated fuel cycle. This will help the program identify near-term R&D needs and set longer-term goals. This report describes FIT 2, an update of the original FIT model [Piet2010c]. FIT is a method to analyze different fuel cycles; in particular, to determine how changes in one part of a fuel cycle (say, fuel burnup, cooling, or separation efficiencies) chemically affect other parts of the fuel cycle. FIT provides the following: (1) a rough estimate of the physics and mass-balance feasibility of combinations of technologies (if feasibility is an issue, it provides an estimate of how performance would have to change to achieve feasibility); and (2) an estimate of impurities in fuel and in waste as a function of separation performance, fuel fabrication, reactor, uranium source, etc.
Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components
Zhang, Saijuan
2011-01-06
There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of the 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) method for fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole
Directory of Open Access Journals (Sweden)
Bońkowski T.
2017-12-01
This paper is focused on experimental testing and modeling of the genuine leather used in motorcycle personal protective equipment. Simulations of powered two-wheeler (PTW) accidents are usually performed using human body models (HBM) for injury assessment equipped only with a helmet model. However, the kinematics of the PTW rider during a real accident is affected by the stiffness of his suit, which is normally not taken into account during the reconstruction or simulation of the accident scenario. The material model proposed in this paper can be used in numerical simulations of crash scenarios that include the effect of the motorcycle rider's garment. The fitting procedure was conducted on 2 sets of samples: 5 uniaxial samples and 5 biaxial samples. The experimental characteristics were used to obtain a set of 25 constitutive material models in terms of Ogden parameters.
FITTING A THREE DIMENSIONAL PEM FUEL CELL MODEL TO MEASUREMENTS BY TUNING THE POROSITY AND
DEFF Research Database (Denmark)
Bang, Mads; Odgaard, Madeleine; Condra, Thomas Joseph
2004-01-01
…the distribution of current density and further how this affects the polarization curve. The porosity and conductivity of the catalyst layer are some of the most difficult parameters to measure, estimate and especially control. Yet the proposed model shows how these two parameters can have significant influence on the performance of the fuel cell. The two parameters are shown to be key elements in adjusting the three-dimensional model to fit measured polarization curves. Results from the proposed model are compared to single-cell measurements on a test MEA from IRD Fuel Cells. … A three-dimensional, computational fluid dynamics (CFD) model of a PEM fuel cell is presented. The model consists of straight channels, porous gas diffusion layers, porous catalyst layers and a membrane. In this computational domain, most of the transport phenomena which govern the performance of the…
Estimation of retinal vessel caliber using model fitting and random forests
Araújo, Teresa; Mendonça, Ana Maria; Campilho, Aurélio
2017-03-01
Retinal vessel caliber changes are associated with several major diseases, such as diabetes and hypertension. These caliber changes can be evaluated using eye fundus images. However, the clinical assessment is tiresome and prone to errors, motivating the development of automatic methods. An automatic method based on vessel cross-section intensity profile model fitting for the estimation of vessel caliber in retinal images is herein proposed. First, vessels are segmented from the image, vessel centerlines are detected, and individual segments are extracted and smoothed. Intensity profiles are extracted perpendicularly to the vessel, and the profile lengths are determined. Then, model fitting is applied to the smoothed profiles. A novel parametric model (DoG-L7) is used, consisting of a Difference-of-Gaussians multiplied by a line, which is able to describe profile asymmetry. Finally, the parameters of the best-fit model are used for determining the vessel width through regression using ensembles of bagged regression trees with random sampling of the predictors (random forests). The method is evaluated on the REVIEW public dataset. A precision close to that of the observers is achieved, outperforming other state-of-the-art methods. The method is robust and reliable for width estimation in images with pathologies and artifacts, with performance independent of the range of diameters.
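The abstract does not give the exact functional form of DoG-L7, so the sketch below is a hypothetical 7-parameter profile in the spirit described: a Difference-of-Gaussians multiplied by a line, where a nonzero slope introduces the asymmetry the model is meant to capture. All names and the parameterization are assumptions:

```python
import math

def dog_l7(x, a1, s1, a2, s2, mu, slope, intercept):
    """Hypothetical 7-parameter profile: a Difference-of-Gaussians
    (amplitudes a1, a2; widths s1, s2; shared center mu) multiplied by
    a line (slope, intercept). With slope = 0 the profile is symmetric
    about mu; a nonzero slope makes it asymmetric."""
    g1 = a1 * math.exp(-(x - mu) ** 2 / (2.0 * s1 * s1))
    g2 = a2 * math.exp(-(x - mu) ** 2 / (2.0 * s2 * s2))
    return (g1 - g2) * (slope * x + intercept)
```

The DoG factor captures the bright vessel wall / dark lumen contrast of a cross-section, while the line factor tilts the profile to absorb asymmetric background illumination.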
Energy Technology Data Exchange (ETDEWEB)
Furlan, E. [Infrared Processing and Analysis Center, California Institute of Technology, 770 S. Wilson Ave., Pasadena, CA 91125 (United States); Fischer, W. J. [Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771 (United States); Ali, B. [Space Science Institute, 4750 Walnut Street, Boulder, CO 80301 (United States); Stutz, A. M. [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Stanke, T. [ESO, Karl-Schwarzschild-Strasse 2, D-85748 Garching bei München (Germany); Tobin, J. J. [National Radio Astronomy Observatory, Charlottesville, VA 22903 (United States); Megeath, S. T.; Booker, J. [Ritter Astrophysical Research Center, Department of Physics and Astronomy, University of Toledo, 2801 W. Bancroft Street, Toledo, OH 43606 (United States); Osorio, M. [Instituto de Astrofísica de Andalucía, CSIC, Camino Bajo de Huétor 50, E-18008 Granada (Spain); Hartmann, L.; Calvet, N. [Department of Astronomy, University of Michigan, 500 Church Street, Ann Arbor, MI 48109 (United States); Poteet, C. A. [New York Center for Astrobiology, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY 12180 (United States); Manoj, P. [Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India); Watson, D. M. [Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627 (United States); Allen, L., E-mail: furlan@ipac.caltech.edu [National Optical Astronomy Observatory, 950 N. Cherry Avenue, Tucson, AZ 85719 (United States)
2016-05-01
We present key results from the Herschel Orion Protostar Survey: spectral energy distributions (SEDs) and model fits of 330 young stellar objects, predominantly protostars, in the Orion molecular clouds. This is the largest sample of protostars studied in a single, nearby star formation complex. With near-infrared photometry from 2MASS, mid- and far-infrared data from Spitzer and Herschel, and submillimeter photometry from APEX, our SEDs cover 1.2–870 μm and sample the peak of the protostellar envelope emission at ∼100 μm. Using mid-IR spectral indices and bolometric temperatures, we classify our sample into 92 Class 0 protostars, 125 Class I protostars, 102 flat-spectrum sources, and 11 Class II pre-main-sequence stars. We implement a simple protostellar model (including a disk in an infalling envelope with outflow cavities) to generate a grid of 30,400 model SEDs and use it to determine the best-fit model parameters for each protostar. We argue that far-IR data are essential for accurate constraints on protostellar envelope properties. We find that most protostars, and in particular the flat-spectrum sources, are well fit. The median envelope density and median inclination angle decrease from Class 0 to Class I to flat-spectrum protostars, despite the broad range in best-fit parameters in each of the three categories. We also discuss degeneracies in our model parameters. Our results confirm that the different protostellar classes generally correspond to an evolutionary sequence with a decreasing envelope infall rate, but the inclination angle also plays a role in the appearance, and thus interpretation, of the SEDs.
Fitting the HIV epidemic in Zambia: a two-sex micro-simulation model.
Directory of Open Access Journals (Sweden)
Pauline M Leclerc
BACKGROUND: In describing and understanding how the HIV epidemic spreads in African countries, previous studies have not taken into account the detailed periods at risk. This study is based on a micro-simulation (individual-based) model of the spread of the HIV epidemic in the population of Zambia, where women tend to marry early and where divorces are not frequent. The main aim of the model was to fit the HIV seroprevalence profiles by age and sex observed in the Demographic and Health Survey conducted in 2001. METHODS AND FINDINGS: A two-sex micro-simulation model of HIV transmission was developed. Particular attention was paid to precise age-specific estimates of exposure to risk through the modelling of the formation and dissolution of relationships: marriage (stable union), casual partnership, and commercial sex. HIV transmission was exclusively heterosexual for adults or vertical (mother-to-child) for children. Three stages of HIV infection were taken into account. All parameters were derived from empirical population-based data. Results show that basic parameters could not explain the dynamics of the HIV epidemic in Zambia. In order to fit the age and sex patterns, several assumptions were made: differential susceptibility of young women to HIV infection, differential susceptibility or a larger number of encounters for male clients of commercial sex workers, and a higher transmission rate. The model allowed us to quantify the role of each type of relationship in HIV transmission, the proportion of infections occurring at each stage of disease progression, and the net reproduction rate of the epidemic (R0 = 1.95). CONCLUSIONS: The simulation model reproduced the dynamics of the HIV epidemic in Zambia, and fitted the age and sex pattern of HIV seroprevalence in 2001. The same model could be used to measure the effect of changing behaviour in the future.
Measuring Fit of Sequence Data to Phylogenetic Model: Gain of Power Using Marginal Tests
Waddell, Peter J.; Ota, Rissa; Penny, David
2009-10-01
Testing the fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (1978) once argued that evolutionary biology was unscientific because its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (1982) to the present. We compare the general log-likelihood ratio statistic (the G or G² statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (p ≈ 0.5), but the marginalized tests do. Tests on pair-wise frequency (F) matrices strongly (p < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (p < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa, and not necessarily those expected from examining other regions of the genome. By marginalizing the 4^t site patterns of the i.i.d. model to observed and expected parsimony counts, that is, from constant sites, to singletons, to parsimony-informative characters of a minimum possible length, the likelihood ratio test regains power, and it too rejects the evolutionary model with p << 0.001. Given such behavior over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published analyses may really be far larger than the analytical methods (e.g., bootstrap) report.
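The general test statistic referred to above is the log-likelihood ratio G statistic, G = 2 Σ O ln(O/E) over observed and expected category counts. A minimal Python sketch with illustrative counts (not the paper's data):

```python
import math

def g_statistic(observed, expected):
    """Log-likelihood ratio statistic G = 2 * sum(O * ln(O / E));
    categories with O = 0 contribute zero to the sum."""
    return 2.0 * sum(o * math.log(o / e)
                     for o, e in zip(observed, expected) if o > 0)

# illustrative counts: 150 observations over five categories
obs = [30, 14, 34, 45, 27]
exp = [30.0] * 5                    # uniform expectation
g = g_statistic(obs, exp)           # ~17.97
```

Under the null hypothesis G is asymptotically chi-squared with degrees of freedom equal to the number of categories minus the number of fitted parameters minus one, which is how the p-values quoted above are obtained.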
UROX 2.0: an interactive tool for fitting atomic models into electron-microscopy reconstructions
International Nuclear Information System (INIS)
Siebert, Xavier; Navaza, Jorge
2009-01-01
UROX is software designed for the interactive fitting of atomic models into electron-microscopy reconstructions. The main features of the software are presented, along with a few examples. Electron microscopy of a macromolecular structure can lead to three-dimensional reconstructions with resolutions that are typically in the 30–10 Å range and sometimes even beyond 10 Å. Fitting atomic models of the individual components of the macromolecular structure (e.g. those obtained by X-ray crystallography or nuclear magnetic resonance) into an electron-microscopy map allows the interpretation of the latter at near-atomic resolution, providing insight into the interactions between the components. Graphical software is presented that was designed for the interactive fitting and refinement of atomic models into electron-microscopy reconstructions. Several characteristics enable it to be applied over a wide range of cases and resolutions. Firstly, calculations are performed in reciprocal space, which results in fast algorithms. This allows the entire reconstruction (or at least a sizeable portion of it) to be used by taking into account the symmetry of the reconstruction both in the calculations and in the graphical display. Secondly, atomic models can be placed graphically in the map while the correlation between the model-based electron density and the electron-microscopy reconstruction is computed and displayed in real time. The positions and orientations of the models are refined by a least-squares minimization. Thirdly, normal-mode calculations can be used to simulate conformational changes between the atomic model of an individual component and its corresponding density within a macromolecular complex determined by electron microscopy. These features are illustrated using three practical cases with different symmetries and resolutions. The software, together with examples and user instructions, is available free of charge at http://mem.ibs.fr/UROX/
Computational Software for Fitting Seismic Data to Epidemic-Type Aftershock Sequence Models
Chu, A.
2014-12-01
Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work introduces software to implement two of the ETAS models described in Ogata (1998). To find the maximum-likelihood estimates (MLEs), my software provides estimates of the homogeneous background rate parameter and the temporal and spatial parameters that govern triggering effects by applying the Expectation-Maximization (EM) algorithm introduced in Veen and Schoenberg (2008). Although other computer programs exist for similar data-modeling purposes, using the EM algorithm has the benefits of stability and robustness (Veen and Schoenberg, 2008). Spatial shapes that are very long and narrow cause difficulties with optimization convergence, and flat or multi-modal log-likelihood functions raise similar issues. My program uses a robust method of presetting a parameter to overcome this non-convergence issue. In addition to model fitting, the software is equipped with useful tools for examining model-fitting results, for example, visualization of the estimated conditional intensity and estimation of the expected number of triggered aftershocks. A simulation generator is also provided, with flexible spatial shapes that may be defined by the user. This open-source software has a very simple user interface. The user may execute it on a local computer, and the program also has the potential to be hosted online. The Java language is used for the software's core computation, and an optional interface to the statistical package R is provided.
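Two quantities mentioned above, the expected number of triggered aftershocks and a stability check on the fitted parameters, can be sketched under the commonly used ETAS productivity law. The parameter values below are illustrative assumptions, not outputs of the software:

```python
import math

def kappa(m, m0, K, alpha):
    """Expected number of directly triggered aftershocks of a
    magnitude-m event under the common ETAS productivity law
    kappa(m) = K * exp(alpha * (m - m0)), with m0 the catalog
    magnitude threshold."""
    return K * math.exp(alpha * (m - m0))

def branching_ratio(K, alpha, beta):
    """Mean offspring count averaged over a Gutenberg-Richter magnitude
    distribution with rate beta (requires beta > alpha); the sequence
    is subcritical, i.e. dies out, only when this ratio is below 1."""
    return K * beta / (beta - alpha)

n_direct = kappa(5.0, 3.0, 0.2, 1.1)   # expected direct aftershocks
```

The branching ratio is the standard diagnostic for whether a fitted ETAS model is explosive; estimates near or above 1 usually signal a fitting problem of the kind the EM approach is designed to avoid.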
Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy
Nabizadeh, Nooshin; John, Nigel
2014-03-01
Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. Local image fitting energy obtains the local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has less computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains the sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from being trapped in a local minimum, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.
Fitting a Two-Component Scattering Model to Polarimetric SAR Data from Forests
Freeman, Anthony
2007-01-01
Two simple scattering mechanisms are fitted to polarimetric synthetic aperture radar (SAR) observations of forests. The mechanisms are canopy scatter from a reciprocal medium with azimuthal symmetry and a ground scatter term that can represent either double-bounce scatter from a pair of orthogonal surfaces with different dielectric constants or Bragg scatter from a moderately rough surface, which is seen through a layer of vertically oriented scatterers. The model is shown to represent the behavior of polarimetric backscatter from a tropical forest and two temperate forest sites by applying it to data from the National Aeronautics and Space Administration/Jet Propulsion Laboratory's Airborne SAR (AIRSAR) system. Scattering contributions from the two basic scattering mechanisms are estimated for clusters of pixels in polarimetric SAR images. The solution involves the estimation of four parameters from four separate equations. This model-fit approach is justified as a simplification of more complicated scattering models, which require many inputs to solve the forward scattering problem. The model is used to develop an understanding of the ground-trunk double-bounce scattering that is present in the data, which is seen to vary considerably as a function of incidence angle. Two parameters in the model fit appear to exhibit sensitivity to vegetation canopy structure, which is worth further exploration. Results from the model fit for the ground scattering term are compared with estimates from a forward model and shown to be in good agreement. The behavior of the scattering from the ground-trunk interaction is consistent with the presence of a pseudo-Brewster angle effect at the air-trunk scattering interface. If the Brewster angle is known, it is possible to directly estimate the real part of the dielectric constant of the trunks, a key variable in forward modeling of backscatter from forests. It is also shown how, with a priori knowledge of the forest height, an estimate for the
International Nuclear Information System (INIS)
Smith, D.L.; Guenther, P.T.
1983-11-01
We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references
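The quadratic form behind the procedure is var(f) = J C Jᵀ, where J is the row of partial derivatives (sensitivities) and C the model-parameter covariance matrix. A toy Python illustration of how negative parameter correlations can halve the propagated error, echoing the mechanism (not the actual numbers) of the analysis above:

```python
def propagate(jacobian, cov):
    """First-order error propagation for a scalar function:
    var(f) = J C J^T, with J the row of partial derivatives."""
    n = len(jacobian)
    return sum(jacobian[i] * cov[i][j] * jacobian[j]
               for i in range(n) for j in range(n))

# toy model f = p1 + p2 with equal parameter variances 0.04
J = [1.0, 1.0]
C_corr = [[0.04, -0.03],
          [-0.03, 0.04]]             # strong negative correlation
C_uncorr = [[0.04, 0.0],
            [0.0, 0.04]]             # naive uncorrelated treatment
var_corr = propagate(J, C_corr)      # 0.02
var_uncorr = propagate(J, C_uncorr)  # 0.08
```

Here the correlated standard deviation is exactly half the uncorrelated one (√0.02 vs √0.08), showing how off-diagonal covariance terms can cut the propagated error well below a naive summation in quadrature.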
SpectrRelax: An application for Mössbauer spectra modeling and fitting
Matsnev, M. E.; Rusakov, V. S.
2012-10-01
The SpectrRelax application was created for the analysis and fitting of absorption and emission Mössbauer spectra of isotopes with 1/2 ↔ 3/2 transitions. Available models include a single pseudo-Voigt line, a doublet, and a sextet, a number of relaxation models, and a distribution of the hyperfine/relaxation parameters of any model. SpectrRelax can evaluate user-supplied analytical expressions of model parameters and their error estimates. Complex parameter constraints, or even new models, can be implemented by setting parameter values to analytical expressions. The search for optimal model parameters is performed using a maximum likelihood criterion in a Levenberg-Marquardt (L-M) algorithm. In the search process, a matrix of linear correlation coefficients between model parameters is calculated along with the error estimates, which allows better understanding of the optimized results. Partial derivatives of the model functions are evaluated using a "dual numbers" algorithm, which provides exact derivative values at any point and improves the convergence of the L-M method. SpectrRelax runs under Microsoft Windows. The application has a modern graphical user interface with extensive model editing and preview capabilities.
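The "dual numbers" trick evaluates f(a + ε) with ε² = 0, so the ε-coefficient of the result is the exact derivative f'(a), with no finite-difference truncation error. A minimal Python sketch of the idea (not SpectrRelax's implementation), supporting just addition and multiplication:

```python
class Dual:
    """Minimal dual number a + b*eps with eps**2 == 0: evaluating
    f(Dual(x, 1.0)) yields f(x) in .re and the exact f'(x) in .eps."""
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.re + o.re, self.eps + o.eps)
    __radd__ = __add__
    def __mul__(self, o):
        # product rule falls out of eps**2 == 0
        o = self._lift(o)
        return Dual(self.re * o.re, self.re * o.eps + self.eps * o.re)
    __rmul__ = __mul__

def f(x):
    # f(x) = 3x^2 + 2x, so f'(x) = 6x + 2
    return 3 * x * x + 2 * x

d = f(Dual(2.0, 1.0))   # d.re = f(2) = 16, d.eps = f'(2) = 14
```

Because the derivative is exact at every point, the Jacobian fed into the L-M step is noise-free, which is what improves convergence compared to numerical differencing.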
Anticipating mismatches of HIT investments: Developing a viability-fit model for e-health services.
Mettler, Tobias
2016-01-01
Despite massive investments in recent years, the impact of health information technology (HIT) has been controversial and strongly disputed by both research and practice. While many studies are concerned with the development of new, or the refinement of existing, measurement models for assessing the impact of HIT adoption (ex post), this study presents an initial attempt to better understand the factors affecting the viability and fit of HIT, and thereby underscores the importance of also having instruments for managing expectations (ex ante). We extend prior research by undertaking a more granular investigation into the theoretical assumptions of the viability and fit constructs. In doing so, we use a mixed-methods approach, conducting qualitative focus group discussions and a quantitative field study to improve and validate a viability-fit measurement instrument. Our findings suggest two issues for research and practice. First, the results indicate that different stakeholders perceive the HIT viability and fit of the same e-health services very unequally. Second, the analysis also demonstrates that there can be a great discrepancy between the organizational viability and individual fit of a particular e-health service. The findings of this study have a number of important implications, such as for health policy making, HIT portfolios, and stakeholder communication. Copyright © 2015. Published by Elsevier Ireland Ltd.
Directory of Open Access Journals (Sweden)
Rita Yi Man Li
2012-03-01
Full Text Available Entrepreneurs have always borne the risk of running their business. They reap a profit in return for their risk taking and work. Housing developers are no different. In many countries, such as Australia, the United Kingdom and the United States, they interpret the tastes of the buyers and provide the dwellings they develop with basic fittings such as floor and wall coverings, bathroom fittings and kitchen cupboards. In mainland China, however, in most developments, units or houses are sold without floor or wall coverings, kitchen or bathroom fittings. What is the motive behind this choice? This paper analyses the factors affecting housing developers' decisions to provide fittings, based on 1701 housing developments in Hangzhou and Chongqing, using a Probit model. The results show that developers build a higher proportion of bare units in mainland China when: (1) there is a shortage of housing; (2) land costs are high, so that the comparative cost of providing fittings becomes relatively low.
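As a reminder of what a Probit model computes, here is a minimal sketch of the link function with made-up coefficients and covariates (the paper's actual estimates are not reproduced here): the probability of the binary outcome, a unit being sold bare, is the standard normal CDF of a linear predictor.

```python
import math

def norm_cdf(z):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def probit_probability(x, beta):
    """P(y = 1 | x) = Phi(beta . x) under a Probit model."""
    z = sum(b * xi for b, xi in zip(beta, x))
    return norm_cdf(z)

# Hypothetical covariates: [intercept, housing-shortage index, land-cost index]
beta = [-0.5, 0.8, 0.6]          # made-up coefficients, not from the paper
x = [1.0, 1.2, 0.9]              # one hypothetical development
p = probit_probability(x, beta)  # probability the unit is sold bare
print(round(p, 3))
```

With these made-up numbers the linear predictor is z = 1.0, so the fitted probability is Phi(1.0) ≈ 0.841; positive coefficients on shortage and land cost would push developments toward bare units, matching the paper's findings.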
Building Customer Churn Prediction Models in Fitness Industry with Machine Learning Methods
Shan, Min
2017-01-01
With the rapid growth of digital systems, churn management has become a major focus within customer relationship management in many industries. Ample research has been conducted on churn prediction in different industries with various machine learning methods. This thesis aims to combine feature selection and supervised machine learning methods to define churn prediction models and apply them to the fitness industry. Forward selection is chosen as the feature selection method. Support Vector ...
Goodness-of-fit test in a multivariate errors-in-variables model $AX=B$
Kukush, Alexander; Tsaregorodtsev, Yaroslav
2016-01-01
We consider a multivariate functional errors-in-variables model $AX\approx B$, where the data matrices $A$ and $B$ are observed with errors and a matrix parameter $X$ is to be estimated. A goodness-of-fit test is constructed based on the total least squares estimator. The proposed test is asymptotically chi-squared under the null hypothesis. The power of the test under local alternatives is discussed.
Bereczkei, Tamas; Mesko, Norbert
2007-01-01
The Multiple Fitness Model states that attractiveness varies across multiple dimensions, with each feature representing a different aspect of mate value. In the present study, male raters judged the attractiveness of young females with neotenous and mature facial features and various hair lengths. Results revealed that the physical appearance of long-haired women was rated highly, regardless of whether their facial attractiveness was valued high or low. Women rated as most attractive were those whose f...
Directory of Open Access Journals (Sweden)
Riionheimo Janne
2003-01-01
Full Text Available We describe a technique for estimating control parameters for a plucked-string synthesis model using a genetic algorithm. The model has been used intensively for sound synthesis of various string instruments, but fine-tuning of the parameters has been carried out with a semiautomatic method that requires hand adjustment guided by human listening. This paper describes an automated method for extracting the parameters from recorded tones. The calculation of the fitness function utilizes knowledge of the properties of human hearing.
Fitting the CDO correlation skew: a tractable structural jump-diffusion model
DEFF Research Database (Denmark)
Willemann, Søren
2007-01-01
We extend a well-known structural jump-diffusion model for credit risk to handle both correlation through diffusion of asset values and common jumps in asset value. Through a simplifying assumption on the default timing and efficient numerical techniques, we develop a semi-analytic framework allowing for instantaneous calibration to heterogeneous CDS curves and fast computation of CDO tranche spreads. We calibrate the model to CDX and iTraxx data from February 2007 and achieve a satisfactory fit. To price the senior tranches for both indices, we require a risk-neutral probability of a market...
Adapted strategic planning model applied to small business: a case study in the fitness area
Directory of Open Access Journals (Sweden)
Eduarda Tirelli Hennig
2012-06-01
Full Text Available Strategic planning is an important management tool in the corporate scenario and should not be restricted to big companies. However, this kind of planning process in small businesses may need special adaptations due to their own characteristics. This paper aims to identify and adapt existing models of strategic planning to the scenario of a small business in the fitness area. Initially, a comparative study among models from different authors is carried out to identify their phases and activities. Then, it is defined which of these phases and activities should be present in a model to be used in a small business. That model was applied to a Pilates studio; it involves the establishment of an organizational identity, an environmental analysis, and the definition of strategic goals, strategies and actions to reach them. Finally, benefits to the organization could be identified, as well as hurdles in the implementation of the tool.
Diamond, Joshua M
2016-01-01
The conserved nature of sleep in Drosophila has allowed the fruit fly to emerge in the last decade as a powerful model organism in which to study sleep. Recent sleep studies in Drosophila have focused on the discovery and characterization of hyposomnolent mutants. One common feature of these animals is a change in sleep architecture: sleep bout count tends to be greater, and sleep bout length lower, in hyposomnolent mutants. I propose a mathematical model, produced by least-squares nonlinear regression to fit the form Y = aX^b, which can explain sleep behavior in the healthy animal as well as previously-reported changes in total sleep and sleep architecture in hyposomnolent mutants. This model, fit to sleep data, yields the coefficient of determination R squared, which describes goodness of fit. R squared is lower, as compared to control, in hyposomnolent mutants insomniac and fumin. My findings raise the possibility that low R squared is a feature of all hyposomnolent mutants, not just insomniac and fumin. If this were the case, R squared could emerge as a novel means by which sleep researchers might assess sleep dysfunction.
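A fit of the form Y = aX^b and its R squared can be sketched as follows. This toy uses a log-log linearization rather than the paper's direct nonlinear regression, and the data are synthetic, not the fly sleep measurements:

```python
import math

def fit_power_law(x, y):
    """Fit y = a * x**b by ordinary least squares on log-transformed data
    (a linearizing shortcut; the paper uses nonlinear regression directly).
    Returns (a, b, R squared on the original scale)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    # Coefficient of determination on the original scale
    yhat = [a * v ** b for v in x]
    ss_res = sum((v - w) ** 2 for v, w in zip(y, yhat))
    ss_tot = sum((v - sum(y) / n) ** 2 for v in y)
    return a, b, 1.0 - ss_res / ss_tot

# Synthetic data following y = 2 * x**-0.5 (e.g. bout length vs. bout count)
x = [1.0, 2.0, 4.0, 8.0]
y = [2.0, 1.414, 1.0, 0.707]
a, b, r2 = fit_power_law(x, y)
print(round(a, 2), round(b, 2), round(r2, 3))
```

On data that truly follow a power law, R squared is near 1; the paper's proposal is that departures from this form (lower R squared) flag disordered sleep architecture.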
Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU
Directory of Open Access Journals (Sweden)
Jinwei Wang
2014-01-01
Full Text Available The active appearance model (AAM) is one of the most powerful model-based object detection and tracking methods and has been widely used in various situations. However, its high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs), which feature a many-core, fine-grained parallel architecture, provides new and promising solutions to overcome this computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism, in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models with textures of different dimensionality. The experimental results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even with very high-dimensional textures.
Fitting a Two-Component Scattering Model to Polarimetric SAR Data
Freeman, A.
1998-01-01
Classification, decomposition and modeling of polarimetric SAR data has received a great deal of attention in the recent literature. The objective behind these efforts is to better understand the scattering mechanisms which give rise to the polarimetric signatures seen in SAR image data. In this paper an approach is described which involves fitting a combination of two simple scattering mechanisms to polarimetric SAR observations. The mechanisms are canopy scatter from a cloud of randomly oriented oblate spheroids, and a ground scatter term, which can represent double-bounce scatter from a pair of orthogonal surfaces with different dielectric constants or Bragg scatter from a moderately rough surface, seen through a layer of vertically oriented scatterers. An advantage of this model-fit approach is that the scattering contributions from the two basic scattering mechanisms can be estimated for clusters of pixels in polarimetric SAR images. The solution involves the estimation of four parameters from four separate equations. The model fit can be applied to polarimetric AIRSAR data at C-, L- and P-band.
Cardinal, Bradley J.; Cardinal, Marita K.
2002-01-01
Compared the role modeling attitudes and physical activity and fitness promoting behaviors of undergraduate students majoring in physical education and in elementary education. Student teacher surveys indicated that physical education majors had more positive attitudes toward role modeling physical activity and fitness promoting behaviors and…
Using geometry to improve model fitting and experiment design for glacial isostasy
Kachuck, S. B.; Cathles, L. M.
2017-12-01
As scientists we routinely deal with models, which are geometric objects at their core: the manifestation of a set of parameters as predictions for comparison with observations. When the number of observations exceeds the number of parameters, the model is a hypersurface (the model manifold) in the space of all possible predictions. The object of parameter fitting is to find the parameters corresponding to the point on the model manifold as close to the vector of observations as possible. But the geometry of the model manifold can make this difficult. Because the manifold curves, ends abruptly (where, for instance, parameters go to zero or infinity), and stretches and compresses the parameters together in unexpected directions, it can be difficult to design algorithms that efficiently adjust the parameters. Even at the optimal point on the model manifold, parameters might not be individually resolved well enough to be applied to new contexts. In our context of glacial isostatic adjustment, models of sparse surface observations have a broad spread of sensitivity to mixtures of the earth's viscous structure and the surface distribution of ice over the last glacial cycle. This impedes precise statements about crucial geophysical processes, such as the planet's thermal history or the climates that controlled the ice age. We employ geometric methods developed in the field of systems biology to improve the efficiency of fitting (geodesic-accelerated Levenberg-Marquardt) and to identify the maximally informative sources of additional data for making better predictions of sea levels and ice configurations (optimal experiment design). We demonstrate this in particular in reconstructions of the Barents Sea Ice Sheet, where we show that only certain kinds of data from the central Barents have the power to distinguish between proposed models.
Duarte, Adam; Adams, Michael J.; Peterson, James T.
2018-01-01
Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely are largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated whether the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored whether assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision
A new fit-for-purpose model testing framework: Decision Crash Tests
Tolson, Bryan; Craig, James
2016-04-01
Decision-makers in water resources are often burdened with selecting appropriate multi-million dollar strategies to mitigate the impacts of climate or land use change. Unfortunately, the suitability of existing hydrologic simulation models to accurately inform decision-making is in doubt, because the testing procedures used to evaluate model utility (i.e., model validation) are insufficient. For example, many authors have noted that a good standard framework for model testing, the Klemeš Crash Tests (KCTs; the classic model validation procedures from Klemeš (1986), renamed KCTs by Andréassian et al. (2009)), has yet to become common practice in hydrology. Furthermore, Andréassian et al. (2009) claim that the progression of hydrological science requires widespread use of KCTs and the development of new crash tests. Existing simulation (not forecasting) model testing procedures such as KCTs look backwards (checking for consistency between simulations and past observations) rather than forwards (explicitly assessing if the model is likely to support future decisions). We propose a fundamentally different, forward-looking, decision-oriented hydrologic model testing framework based upon the concept of fit-for-purpose model testing, which we call Decision Crash Tests or DCTs. Key DCT elements are (i) the model purpose (i.e., the decision the model is meant to support) must be identified so that model outputs can be mapped to management decisions, and (ii) the framework evaluates not just the selected hydrologic model but the entire suite of model-building decisions associated with model discretization, calibration, etc. The framework is constructed to directly and quantitatively evaluate model suitability. The DCT framework is applied to a model building case study on the Grand River in Ontario, Canada. A hypothetical binary decision scenario is analysed (upgrade or not upgrade the existing flood control structure) under two different sets of model building
Efficient Constrained Local Model Fitting for Non-Rigid Face Alignment.
Lucey, Simon; Wang, Yang; Cox, Mark; Sridharan, Sridha; Cohn, Jeffery F
2009-11-01
Active appearance models (AAMs) have demonstrated great utility when employed for non-rigid face alignment/tracking. The "simultaneous" algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real-time performance (2-3 fps). The "project-out" algorithm for fitting an AAM achieves faster than real-time performance (> 200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve superior performance to the "simultaneous" AAM algorithm along with real-time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways, by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criterion, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criterion for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the "exhaustive local search" (ELS) algorithm. Experiments were conducted on the CMU Multi-PIE database.
Fitting a 3-D analytic model of the coronal mass ejection to observations
Gibson, S. E.; Biesecker, D.; Fisher, R.; Howard, R. A.; Thompson, B. J.
1997-01-01
An analytic magnetohydrodynamic model is applied to observations of the time-dependent expulsion of three-dimensional coronal mass ejections (CMEs) out of the solar corona. This model relates the white-light appearance of the CME to its internal magnetic field, which takes the form of a closed bubble, filled with a partly anchored, twisted magnetic flux rope and embedded in an otherwise open background field. The density distribution frozen into the expanding CME field is fully 3D, and can be integrated along the line of sight to reproduce observations of scattered white light. The model is able to reproduce the three conspicuous features often associated with CMEs as observed with white-light coronagraphs: a surrounding high-density region, an internal low-density cavity, and a high-density core. The model also describes the self-similar radial expansion of these structures. By varying the model parameters, the model can be fitted directly to observations of CMEs. It is shown how the model can quantitatively match the polarized-brightness contrast of a dark cavity emerging through the lower corona, as observed by the HAO Mauna Loa K-coronameter, to within the noise level of the data.
Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model
International Nuclear Information System (INIS)
Edwards, Darrin C.; Kupinski, Matthew A.; Metz, Charles E.; Nishikawa, Robert M.
2002-01-01
We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per-image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate-analysis ROC curve, where the scaling factors are given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to these data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well.
Wenseleers, Tom; Helanterä, Heikki; Alves, Denise A; Dueñez-Guzmán, Edgar; Pamilo, Pekka
2013-01-01
The conflicts over sex allocation and male production in insect societies have long served as an important test bed for Hamilton's theory of inclusive fitness, but have for the most part been considered separately. Here, we develop new coevolutionary models to examine the interaction between these two conflicts and demonstrate that sex ratio and colony productivity costs of worker reproduction can lead to vastly different outcomes even in species that show no variation in their relatedness structure. Empirical data on worker-produced males in eight species of Melipona bees support the predictions from a model that takes into account the demographic details of colony growth and reproduction. Overall, these models contribute significantly to explaining behavioural variation that previous theories could not account for.
Fitting mathematical models to describe the rheological behaviour of chocolate pastes
Barbosa, Carla; Diogo, Filipa; Alves, M. Rui
2016-06-01
The flow behaviour is of utmost importance for the chocolate industry. The objective of this work was to study two mathematical models, the Casson and Windhab models, that can be used to fit chocolate rheological data, and to evaluate which better predicts the rheological behaviour of different chocolate pastes. Rheological properties (viscosity, shear stress and shear rate) were obtained with a rotational viscometer equipped with a concentric cylinder. The chocolate samples were white chocolate and chocolates with varying percentages of cocoa (55%, 70% and 83%). The results showed that the Windhab model was the best at describing the flow behaviour of all the studied samples, with higher determination coefficients (r² > 0.9).
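The Casson model is linear in square-root coordinates, sqrt(tau) = sqrt(tau0) + sqrt(eta) * sqrt(gamma_dot), so a minimal fit can be sketched with ordinary least squares. The data below are synthetic, and the Windhab model favoured by the paper (which adds a further parameter) is not shown:

```python
import math

def fit_casson(shear_rate, shear_stress):
    """Fit the Casson model sqrt(tau) = sqrt(tau0) + sqrt(eta)*sqrt(gamma_dot)
    by ordinary least squares in square-root coordinates.
    Returns (tau0, eta): yield stress and Casson viscosity."""
    xs = [math.sqrt(g) for g in shear_rate]
    ys = [math.sqrt(t) for t in shear_stress]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return intercept ** 2, slope ** 2

# Synthetic data generated from tau0 = 4 Pa, eta = 1 Pa.s
rates = [1.0, 4.0, 9.0, 16.0]
stresses = [(2.0 + math.sqrt(g)) ** 2 for g in rates]
tau0, eta = fit_casson(rates, stresses)
print(round(tau0, 3), round(eta, 3))  # -> 4.0 1.0
```

The intercept squared recovers the yield stress and the slope squared the Casson (plastic) viscosity, the two quantities chocolate rheologists usually report.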
GRace: a MATLAB-based application for fitting the discrimination-association model.
Stefanutti, Luca; Vianello, Michelangelo; Anselmi, Pasquale; Robusto, Egidio
2014-10-28
The Implicit Association Test (IAT) is a computerized two-choice discrimination task in which stimuli have to be categorized as belonging to target categories or attribute categories by pressing, as quickly and accurately as possible, one of two response keys. The discrimination association model has been recently proposed for the analysis of reaction time and accuracy of an individual respondent to the IAT. The model disentangles the influences of three qualitatively different components on the responses to the IAT: stimuli discrimination, automatic association, and termination criterion. The article presents General Race (GRace), a MATLAB-based application for fitting the discrimination association model to IAT data. GRace has been developed for Windows as a standalone application. It is user-friendly and does not require any programming experience. The use of GRace is illustrated on the data of a Coca Cola-Pepsi Cola IAT, and the results of the analysis are interpreted and discussed.
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLMs) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLMs. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLMs with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG) so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J²) statistics can be applied directly. In a simulation study, TG, HL, and J² were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J² were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J². © 2015 John Wiley & Sons Ltd/London School of Economics.
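The grouping idea behind the Hosmer-Lemeshow (HL) statistic can be sketched as follows. This is a simplified toy (equal-size groups, no chi-squared p-value), not the exact grouping method used in the paper:

```python
def hosmer_lemeshow(y, p, groups=4):
    """Minimal Hosmer-Lemeshow-style GOF statistic: sort by predicted
    probability, split into equal-size groups, and sum
    (observed - expected)^2 / (n * pbar * (1 - pbar)) over the groups.
    (Real implementations use decile grouping and compare the statistic
    to a chi-squared distribution with groups - 2 degrees of freedom.)"""
    pairs = sorted(zip(p, y))
    size = len(pairs) // groups
    stat = 0.0
    for g in range(groups):
        chunk = pairs[g * size:(g + 1) * size] if g < groups - 1 else pairs[g * size:]
        n = len(chunk)
        obs = sum(yi for _, yi in chunk)          # observed events in group
        pbar = sum(pi for pi, _ in chunk) / n     # mean predicted probability
        stat += (obs - n * pbar) ** 2 / (n * pbar * (1 - pbar))
    return stat

# Toy data: observed outcomes roughly track the predicted probabilities
y = [0, 0, 1, 0, 1, 1, 1, 1]
p = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
print(round(hosmer_lemeshow(y, p, groups=2), 3))  # -> 1.333
```

A small statistic indicates that observed event counts match the model's predicted counts within groups; large values flag lack of fit.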
Model-independent partial wave analysis using a massively-parallel fitting framework
Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.
2017-10-01
The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D+ → h+h+h−. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m²(h+h−) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.
Goldstein, R. A.
2013-01-01
The predicted effect of effective population size on the distribution of fitness effects and substitution rate is critically dependent on the relationship between sequence and fitness. This highlights the importance of using models that are informed by the molecular biology, biochemistry, and biophysics of the evolving systems. We describe a computational model based on fundamental aspects of biophysics, the requirement for (most) proteins to be thermodynamically stable. Using this model, we ...
UROX 2.0: an interactive tool for fitting atomic models into electron-microscopy reconstructions.
Siebert, Xavier; Navaza, Jorge
2009-07-01
Electron microscopy of a macromolecular structure can lead to three-dimensional reconstructions with resolutions that are typically in the 30-10 Å range and sometimes even beyond 10 Å. Fitting atomic models of the individual components of the macromolecular structure (e.g. those obtained by X-ray crystallography or nuclear magnetic resonance) into an electron-microscopy map allows the interpretation of the latter at near-atomic resolution, providing insight into the interactions between the components. Graphical software is presented that was designed for the interactive fitting and refinement of atomic models into electron-microscopy reconstructions. Several characteristics enable it to be applied over a wide range of cases and resolutions. Firstly, calculations are performed in reciprocal space, which results in fast algorithms. This allows the entire reconstruction (or at least a sizeable portion of it) to be used by taking into account the symmetry of the reconstruction both in the calculations and in the graphical display. Secondly, atomic models can be placed graphically in the map while the correlation between the model-based electron density and the electron-microscopy reconstruction is computed and displayed in real time. The positions and orientations of the models are refined by a least-squares minimization. Thirdly, normal-mode calculations can be used to simulate conformational changes between the atomic model of an individual component and its corresponding density within a macromolecular complex determined by electron microscopy. These features are illustrated using three practical cases with different symmetries and resolutions. The software, together with examples and user instructions, is available free of charge at http://mem.ibs.fr/UROX/.
Worthington, Thomas A.; Zhang, T.; Logue, Daniel R.; Mittelstet, Aaron R.; Brewer, Shannon K.
2016-01-01
Truncated distributions of pelagophilic fishes have been observed across the Great Plains of North America, with water use and landscape fragmentation implicated as contributing factors. Developing conservation strategies for these species is hindered by the existence of multiple competing flow regime hypotheses related to species persistence. Our primary study objective was to compare the predicted distributions of one pelagophil, the Arkansas River Shiner Notropis girardi, constructed using different flow regime metrics. Further, we investigated different approaches for improving temporal transferability of the species distribution model (SDM). We compared four hypotheses: mean annual flow (a baseline), the 75th percentile of daily flow, the number of zero-flow days, and the number of days above 55th percentile flows, to examine the relative importance of flows during the spawning period. Building on an earlier SDM, we added covariates that quantified wells in each catchment, point source discharges, and non-native species presence to a structured variable framework. We assessed the effects on model transferability and fit by reducing multicollinearity using Spearman’s rank correlations, variance inflation factors, and principal component analysis, as well as altering the regularization coefficient (β) within MaxEnt. The 75th percentile of daily flow was the most important flow metric related to structuring the species distribution. The number of wells and point source discharges were also highly ranked. At the default level of β, model transferability was improved using all methods to reduce collinearity; however, at higher levels of β, the correlation method performed best. Using β = 5 provided the best model transferability, while retaining the majority of variables that contributed 95% to the model. This study provides a workflow for improving model transferability and also presents water-management options that may be considered to improve the
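One of the collinearity screens mentioned above, checking covariates before model fitting, is the variance inflation factor (VIF), which reduces to a function of the pairwise correlation when only two covariates are involved. A sketch with hypothetical flow-metric values (not the study's data):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def vif_two_covariates(x1, x2):
    """VIF = 1 / (1 - R^2); with exactly two covariates, R^2 is just the
    squared Pearson correlation between them. A common screening rule
    drops one of a pair when VIF exceeds roughly 5-10."""
    r = pearson_r(x1, x2)
    return 1.0 / (1.0 - r * r)

flow_75th = [1.0, 2.0, 3.0, 4.0]   # hypothetical 75th-percentile flow values
mean_flow = [1.1, 1.9, 3.2, 3.8]   # strongly collinear companion metric
print(round(vif_two_covariates(flow_75th, mean_flow), 1))  # large VIF flags collinearity
```

Flow metrics derived from the same hydrograph are often nearly collinear, which is why the study compares VIF screening against correlation and principal-component approaches before fitting MaxEnt.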
Coping among individuals with multiple sclerosis: Evaluating a goodness-of-fit model.
Roubinov, Danielle S; Turner, Aaron P; Williams, Rhonda M
2015-05-01
Multiple sclerosis (MS) is a chronic illness involving both controllable and uncontrollable stressors. The goodness-of-fit hypothesis posits that managing stressors effectively requires the use of different coping approaches in the face of controllable and uncontrollable stressors. To test the applicability of the goodness-of-fit model in a sample of adults with MS, we evaluated the ratio of 2 types of coping (an active problem-solving approach and an emotion-based meaning-focused approach) as a moderator of the relations between stress uncontrollability and mental health outcomes. Participants were veterans with MS (N = 90) receiving medical services through the Veterans Health Administration who completed telephone-based interviews. Regression analyses tested the interaction of stress uncontrollability and the problem- and meaning-focused coping ratio on anxious and depressive symptoms. Significant interactions were probed at 1 SD above the mean of coping (use of predominantly problem-focused coping) and 1 SD below the mean of coping (use of predominantly meaning-focused coping). Findings largely supported the goodness-of-fit hypothesis. Anxiety and depressive symptoms were elevated when participants used more problem-focused strategies relative to meaning-focused strategies in the face of perceived uncontrollable stress. Conversely, symptoms of anxiety and depression were lower when uncontrollable stress was met with predominantly meaning-focused coping; however, the relations did not reach statistical significance. The impact of uncontrollable stressors on mental health outcomes for individuals with MS may vary depending on the degree to which problem-focused versus meaning-focused coping strategies are employed, lending support to the goodness-of-fit model. (c) 2015 APA, all rights reserved.
Directory of Open Access Journals (Sweden)
Matthew R Nassar
2013-04-01
Fitting models to behavior is commonly used to infer the latent computational factors responsible for generating behavior. However, the complexity of many behaviors can handicap the interpretation of such models. Here we provide perspectives on problems that can arise when interpreting parameter fits from models that provide incomplete descriptions of behavior. We illustrate these problems by fitting commonly used and neurophysiologically motivated reinforcement-learning models to simulated behavioral data sets from learning tasks. These model fits can pass a host of standard goodness-of-fit tests and other model-selection diagnostics even when the models do not provide a complete description of the behavioral data. We show that such incomplete models can be misleading by yielding biased estimates of the parameters explicitly included in the models. This problem is particularly pernicious when the neglected factors are unknown and therefore not easily identified by model comparisons and similar methods. An obvious conclusion is that a parsimonious description of behavioral data does not necessarily imply an accurate description of the underlying computations. Moreover, general goodness-of-fit measures are not a strong basis to support claims that a particular model can provide a generalized understanding of the computations that govern behavior. To help overcome these challenges, we advocate the design of tasks that provide direct reports of the computational variables of interest. Such direct reports complement model-fitting approaches by providing a more complete, albeit possibly more task-specific, representation of the factors that drive behavior. Computational models then provide a means to connect such task-specific results to a more general algorithmic understanding of the brain.
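The model-fitting procedure the authors scrutinize can be illustrated with a toy example: simulate choice data from a Rescorla-Wagner learner with a softmax rule, then recover its parameters by maximum likelihood (here a simple grid search; task parameters and grids are made up for illustration):

```python
import numpy as np

def simulate(alpha, beta, n_trials, p_reward=(0.8, 0.2), seed=1):
    """Simulate a two-armed bandit learner (Rescorla-Wagner + softmax)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros(2)
    choices, rewards = [], []
    for _ in range(n_trials):
        p1 = 1 / (1 + np.exp(-beta * (Q[1] - Q[0])))   # softmax choice prob
        c = int(rng.random() < p1)
        r = float(rng.random() < p_reward[c])
        Q[c] += alpha * (r - Q[c])                      # prediction-error update
        choices.append(c); rewards.append(r)
    return np.array(choices), np.array(rewards)

def neg_log_lik(alpha, beta, choices, rewards):
    """Negative log-likelihood of observed choices under the RW model."""
    Q = np.zeros(2)
    nll = 0.0
    for c, r in zip(choices, rewards):
        p1 = 1 / (1 + np.exp(-beta * (Q[1] - Q[0])))
        p = p1 if c == 1 else 1 - p1
        nll -= np.log(max(p, 1e-12))
        Q[c] += alpha * (r - Q[c])
    return nll

choices, rewards = simulate(alpha=0.3, beta=3.0, n_trials=500)
grid_a = np.linspace(0.05, 0.95, 19)
grid_b = np.linspace(0.5, 10.0, 20)
best = min((neg_log_lik(a, b, choices, rewards), a, b)
           for a in grid_a for b in grid_b)
print("fitted alpha=%.2f beta=%.2f nll=%.1f" % (best[1], best[2], best[0]))
```

The abstract's warning applies exactly here: if the generating process also contained, say, a lapse rate or choice perseveration that the fitted model omits, this fit could still look acceptable while the recovered alpha and beta were biased.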
Tikhonov, Mikhail; Monasson, Remi
2018-01-01
Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.
Fitting the two-compartment model in DCE-MRI by linear inversion.
Flouri, Dimitra; Lesnic, Daniel; Sourbron, Steven P
2016-09-01
Model fitting of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data with nonlinear least squares (NLLS) methods is slow and may be biased by the choice of initial values. The aim of this study was to develop and evaluate a linear least squares (LLS) method to fit the two-compartment exchange and two-compartment filtration models. A second-order linear differential equation for the measured concentrations was derived where model parameters act as coefficients. Simulations of normal and pathological data were performed to determine calculation time, accuracy and precision under different noise levels and temporal resolutions. Performance of the LLS was evaluated by comparison against the NLLS. The LLS method is about 200 times faster, which reduces the calculation times for a 256 × 256 MR slice from 9 min to 3 s. For ideal data with low noise and high temporal resolution the LLS and NLLS were equally accurate and precise. The LLS was more accurate and precise than the NLLS at low temporal resolution, but less accurate at high noise levels. The data show that the LLS leads to a significant reduction in calculation times, and more reliable results at low noise levels. At higher noise levels the LLS becomes exceedingly inaccurate compared to the NLLS, but this may be improved using a suitable weighting strategy. Magn Reson Med 76:998-1006, 2016. © 2015 Wiley Periodicals, Inc.
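The linear-inversion idea can be sketched with the simpler one-compartment (Tofts) model, where the tissue concentration satisfies C(t) = Ktrans·∫ca dτ − kep·∫C dτ, so the parameters appear as coefficients of a linear system solvable in one least-squares step. The synthetic arterial input and parameter values below are assumptions for illustration, not the paper's two-compartment implementation:

```python
import numpy as np

# Synthetic arterial input function (hypothetical gamma-variate-like bolus)
t = np.linspace(0, 5, 501)                  # time in minutes
ca = 5.0 * t * np.exp(-t / 0.5)

ktrans_true, kep_true = 0.25, 0.8           # 1/min
dt = t[1] - t[0]
C = np.zeros_like(t)
for i in range(1, len(t)):                  # forward-Euler tissue curve
    C[i] = C[i-1] + dt * (ktrans_true * ca[i-1] - kep_true * C[i-1])

def cumtrapz(y, dt):
    """Cumulative trapezoidal integral of y sampled at spacing dt."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum((y[1:] + y[:-1]) / 2) * dt
    return out

# Linear system: C(t) = ktrans * int(ca) - kep * int(C); solve by LLS
A = np.column_stack([cumtrapz(ca, dt), -cumtrapz(C, dt)])
(ktrans_fit, kep_fit), *_ = np.linalg.lstsq(A, C, rcond=None)
print(ktrans_fit, kep_fit)   # close to the true 0.25 and 0.8
```

No initial values are needed, which is the practical advantage the abstract highlights over NLLS fitting.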
Sih, Bryant L; Negus, Charles H
2016-05-01
The U.S. Army Basic Combat Training (BCT) is the first step in preparing soldier trainees for the physical demands of the military. Unfortunately, a substantial number of trainees fail BCT due to failure on the final Army Physical Fitness Test (also known as the "end of cycle" APFT). Current epidemiological studies have used statistics to identify several risk factors for poor APFT performance, but these studies have had limited utility for guiding regimen design to maximize APFT outcome. This is because such studies focus on intrinsic risks to APFT failure and do not utilize detailed BCT activity data to build models which offer guidance for optimizing the training regimen to improve graduation rates. In this study, a phenomenological run performance model that accounts for physiological changes in fitness and fatigue due to training was applied to recruits undergoing U.S. Army BCT using high resolution (minute-by-minute) activity data. The phenomenological model was better at predicting both the final as well as intermediate APFTs (R² range = 0.55-0.59) compared to linear regression models (LRMs) that used the same intrinsic input variables (R² range = 0.36-0.50). Unlike a statistical approach, a phenomenological model accounts for physiological changes and, therefore, has the potential to not only identify trainees at risk of failing BCT on novel training regimens, but offer guidance to regimen planners on how to change the regimen for maximizing physical performance. This paper is Part I of a 2-part series on physical training outcome predictions. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.
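Phenomenological fitness-fatigue models of this kind are typically in the spirit of the classic Banister impulse-response model, where predicted performance is a baseline plus a slowly decaying fitness term minus a quickly decaying fatigue term, both driven by past training loads. A minimal sketch with invented constants (the actual BCT model and its parameters are not reproduced here):

```python
import numpy as np

def banister(w, p0, k1, k2, tau1, tau2):
    """Performance from daily training loads w: baseline + fitness - fatigue."""
    n = len(w)
    days = np.arange(1, n + 1)
    p = np.empty(n)
    for t in range(n):
        lags = days[t] - days[:t]                    # days since each session
        fitness = np.sum(w[:t] * np.exp(-lags / tau1))   # slow decay
        fatigue = np.sum(w[:t] * np.exp(-lags / tau2))   # fast decay
        p[t] = p0 + k1 * fitness - k2 * fatigue
    return p

rng = np.random.default_rng(2)
load = rng.uniform(0, 100, size=70)    # 10 weeks of hypothetical daily load
perf = banister(load, p0=200, k1=0.10, k2=0.25, tau1=45, tau2=11)
print(perf[-1])
```

Given a minute-by-minute activity record aggregated into daily loads, the free constants (k1, k2, tau1, tau2) would be fitted to observed test scores, which is what lets such a model answer "what if the regimen changed" questions that purely statistical risk models cannot.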
FIT ANALYSIS OF INDOSAT DOMPETKU BUSINESS MODEL USING A STRATEGIC DIAGNOSIS APPROACH
Directory of Open Access Journals (Sweden)
Fauzi Ridwansyah
2015-09-01
Mobile payment is the industry's response to global and regional technological drivers, as well as national socio-economic drivers, in the development of a less-cash society. The purposes of this study were (1) to identify the positioning of PT. Indosat in responding to the Indonesian mobile payment market, (2) to analyze the fit of Indosat's internal capabilities and business model with environmental turbulence, and (3) to formulate the optimum mobile payment business model development design for Indosat. The method used in this study was a combination of qualitative and quantitative analysis through in-depth interviews with purposive judgment sampling. The analysis tools used in this study were the Business Model Canvas (BMC) and Ansoff's strategic diagnosis. The interviewees were representatives of PT. Indosat internal management and mobile payment business value chain stakeholders. Based on BMC mapping, which was then analyzed with the strategic diagnosis model, a considerable gap (>1) was found between the current market environment and Indosat's strategic aggressiveness relative to the expected future level of environmental turbulence. Therefore, the changes in competitive strategy that need to be made include (1) developing a new customer segment, (2) shifting the value proposition toward the extensification of mobile payment, (3) monetizing an effective value proposition, and (4) integrating effective collaboration to harmonize the company's objective with the government's vision. Keywords: business model canvas, Indosat, mobile payment, less cash society, strategic diagnosis
Alipoor, Mohammad; Maier, Stephan E; Gu, Irene Yu-Hua; Mehnert, Andrew; Kahl, Fredrik
2015-01-01
The monoexponential model is widely used in quantitative biomedical imaging. Notable applications include apparent diffusion coefficient (ADC) imaging and pharmacokinetics. The application of ADC imaging to the detection of malignant tissue has in turn prompted several studies concerning optimal experiment design for monoexponential model fitting. In this paper, we propose a new experiment design method that is based on minimizing the determinant of the covariance matrix of the estimated parameters (D-optimal design). In contrast to previous methods, D-optimal design is independent of the imaged quantities. Applying this method to ADC imaging, we demonstrate its steady performance for the whole range of input variables (imaged parameters, number of measurements, and range of b-values). Using Monte Carlo simulations we show that the D-optimal design outperforms existing experiment design methods in terms of accuracy and precision of the estimated parameters.
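A D-optimal design for the monoexponential model S(b) = S0·exp(−b·D) can be found numerically by maximizing det(JᵀJ), the determinant of the (unit-noise) Fisher information for the parameters (S0, D). A small sketch over an assumed candidate grid of b-values (the values below are illustrative, not the paper's protocol):

```python
import numpy as np
from itertools import combinations

def det_fisher(bvals, S0=1.0, D=1.0e-3):
    """det(J^T J) for the monoexponential S(b) = S0*exp(-b*D)."""
    b = np.asarray(bvals, float)
    e = np.exp(-b * D)
    J = np.column_stack([e, -S0 * b * e])   # derivatives w.r.t. S0 and D
    return np.linalg.det(J.T @ J)

candidates = np.arange(0, 3001, 100)        # candidate b-values, s/mm^2
best = max(combinations(candidates, 2), key=det_fisher)
print(best)
```

For two measurements this search lands on b = 0 and b = 1/D, matching the analytic result that the determinant e^(−2D(b1+b2))·(b2 − b1)² is maximized by a zero b-value plus one at the reciprocal of the diffusivity.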
A History of Regression and Related Model-Fitting in the Earth Sciences (1636?-2000)
International Nuclear Information System (INIS)
Howarth, Richard J.
2001-01-01
The (statistical) modeling of the behavior of a dependent variate as a function of one or more predictors provides examples of model-fitting which span the development of the earth sciences from the 17th Century to the present. The historical development of these methods and their subsequent application is reviewed. Bond's predictions (c. 1636 and 1668) of change in the magnetic declination at London may be the earliest attempt to fit such models to geophysical data. Following publication of Newton's theory of gravitation in 1726, analysis of data on the length of a 1° meridian arc, and the length of a pendulum beating seconds, as a function of sin²(latitude), was used to determine the ellipticity of the oblate spheroid defining the Figure of the Earth. The pioneering computational methods of Mayer in 1750, Boscovich in 1755, and Lambert in 1765, and the subsequent independent discoveries of the principle of least squares by Gauss in 1799, Legendre in 1805, and Adrain in 1808, and its later substantiation on the basis of probability theory by Gauss in 1809 were all applied to the analysis of such geodetic and geophysical data. Notable later applications include: the geomagnetic survey of Ireland by Lloyd, Sabine, and Ross in 1836, Gauss's model of the terrestrial magnetic field in 1838, and Airy's 1845 analysis of the residuals from a fit to pendulum lengths, from which he recognized the anomalous character of measurements of gravitational force which had been made on islands. In the early 20th Century applications to geological topics proliferated, but the computational burden effectively held back applications of multivariate analysis. Following World War II, the arrival of digital computers in universities in the 1950s facilitated computation, and fitting linear or polynomial models as a function of geographic coordinates, trend surface analysis, became popular during the 1950-60s. The inception of geostatistics in France at this time by Matheron had its
Lubke, Gitta H.; Campbell, Ian
2016-01-01
Inference and conclusions drawn from model fitting analyses are commonly based on a single "best-fitting" model. If model selection and inference are carried out using the same data, model selection uncertainty is ignored. We illustrate the Type I error inflation that can result from using the same data for model selection and inference, and we then propose a simple bootstrap-based approach to quantify model selection uncertainty in terms of model selection rates. A selection rate can be interpreted as an estimate of the replication probability of a fitted model. The benefits of bootstrapping model selection uncertainty are demonstrated in a growth mixture analysis of data from the National Longitudinal Study of Youth, and a 2-group measurement invariance analysis of the Holzinger-Swineford data. PMID:28663687
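Bootstrap model-selection rates can be sketched as follows: refit the candidate models on resampled data and count how often each one is selected. Here selection is by AIC between a linear and a quadratic model on invented data (the paper uses mixture and measurement-invariance models, so this is only an analogue of the procedure):

```python
import numpy as np

def aic(y, yhat, k):
    """AIC for a Gaussian regression with k estimated parameters."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

def fit_poly(x, y, deg):
    return np.polyval(np.polyfit(x, y, deg), x)

def selection_rates(x, y, n_boot=500, seed=3):
    """Bootstrap rate at which each candidate model wins on AIC."""
    rng = np.random.default_rng(seed)
    wins = np.zeros(2)                      # [linear, quadratic]
    n = len(x)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)         # resample cases with replacement
        xb, yb = x[idx], y[idx]
        aics = [aic(yb, fit_poly(xb, yb, d), d + 2) for d in (1, 2)]
        wins[int(np.argmin(aics))] += 1
    return wins / n_boot

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 80)
y = 1.0 + 2.0 * x + 0.3 * x**2 + rng.normal(0, 0.3, 80)  # weak curvature
print(selection_rates(x, y))
```

A selection rate well below 1 for the "winning" model is exactly the replication-probability warning the abstract describes: the best-fitting model on the observed sample may not be selected again on a fresh sample.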
VizieR Online Data Catalog: GRB prompt emission fitted with the DREAM model (Ahlgren+, 2015)
Ahlgren, B.; Larsson, J.; Nymark, T.; Ryde, F.; Pe'er, A.
2018-01-01
We illustrate the application of the DREAM model by fitting it to two different, bright Fermi GRBs; GRB 090618 and GRB 100724B. While GRB 090618 is well fitted by a Band function, GRB 100724B was the first example of a burst with a significant additional BB component (Guiriec et al. 2011ApJ...727L..33G). GRB 090618 is analysed using Gamma-ray Burst Monitor (GBM) data (Meegan et al. 2009ApJ...702..791M) from the NaI and BGO detectors. For GRB 100724B, we used GBM data from the NaI and BGO detectors as well as Large Area Telescope Low Energy (LAT-LLE) data. For both bursts we selected NaI detectors seeing the GRB at an off-axis angle lower than 60° and the BGO detector as being the best aligned of the two BGO detectors. The spectra were fitted in the energy ranges 8-1000 keV (NaI), 200-40000 keV (BGO) and 30-1000 MeV (LAT-LLE). (2 data files).
Directory of Open Access Journals (Sweden)
Gurutzeta Guillera-Arroita
In a recent paper, Welsh, Lindenmayer and Donnelly (WLD) question the usefulness of models that estimate species occupancy while accounting for detectability. WLD claim that these models are difficult to fit and argue that disregarding detectability can be better than trying to adjust for it. We think that this conclusion and subsequent recommendations are not well founded and may negatively impact the quality of statistical inference in ecology and related management decisions. Here we respond to WLD's claims, evaluating their arguments in detail and using simulations and/or theory to support our points. In particular, WLD argue that both disregarding and accounting for imperfect detection lead to the same estimator performance regardless of sample size when detectability is a function of abundance. We show that this, the key result of their paper, only holds for cases of extreme heterogeneity like the single scenario they considered. Our results illustrate the dangers of disregarding imperfect detection. When ignored, occupancy and detection are confounded: the same naïve occupancy estimates can be obtained for very different true levels of occupancy, so the size of the bias is unknowable. Hierarchical occupancy models separate occupancy and detection, and imprecise estimates simply indicate that more data are required for robust inference about the system in question. As for any statistical method, when the underlying assumptions of simple hierarchical models are violated, their reliability is reduced. Resorting to the naïve occupancy estimator in those instances where hierarchical occupancy models do not perform well does not provide a satisfactory solution. The aim should instead be to achieve better estimation, by minimizing the effect of these issues during design, data collection and analysis, ensuring that the right amount of data is collected and model assumptions are met, and considering model extensions where appropriate.
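The confounding of occupancy and detection in the naïve estimator is easy to demonstrate by simulation: with K visits per site, the expected naïve estimate is ψ(1 − (1 − p)^K), so very different true occupancy levels ψ can produce the same apparent occupancy. The parameter pairs below are chosen for illustration to give the same expected naïve value (~0.45):

```python
import numpy as np

def naive_occupancy(psi, p, n_sites=100_000, n_visits=3, seed=5):
    """Proportion of sites with >= 1 detection when detectability is ignored."""
    rng = np.random.default_rng(seed)
    occupied = rng.random(n_sites) < psi
    detections = rng.random((n_visits, n_sites)) < p
    return float((occupied & detections.any(axis=0)).mean())

# Two very different true occupancy levels tuned so that
# psi * (1 - (1 - p)**3) is ~0.45 in both cases
a = naive_occupancy(psi=0.5, p=0.536)
b = naive_occupancy(psi=0.9, p=0.206)
print(round(a, 2), round(b, 2))   # nearly identical despite psi = 0.5 vs 0.9
```

A hierarchical occupancy model fitted to the per-visit detection histories can separate ψ from p; the naïve estimator, as the response argues, cannot even bound the bias.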
Directory of Open Access Journals (Sweden)
Cristina García Magro
2015-06-01
Purpose: The aim of this paper is to offer a model of analysis that makes it possible to measure the impact of trade shows on performance, as well as whether exhibitors know the visitors' motives for participation. Design/methodology: A review of the literature is presented concerning two of the principal interested agents, exhibitors and visitors. The study focuses on the line of research concerning the motives for participating, or not, in a trade show. Based on the information provided by each perspective, a comparative analysis is carried out to determine the degree of mutual understanding between the two. Findings: Trade shows can be studied from an integrated strategic marketing approach. The fit model between the exhibitors' and the visitors' reasons for participation offers information on the lack of understanding between exhibitors and visitors, which leads to dissatisfaction with participation, a fact that is reflected in the success of the fair. The model identified shows that a strategic plan must be designed in which the visitors' reason for participation is incorporated as a moderating variable of the exhibitors' reason for participation. The article concludes with a series of proposals for improving fairground results. Social implications: A fit model that improves the performance of trade shows implicitly leads to the successful achievement of targets for multiple stakeholders, beyond visitors and exhibitors alone. Originality/value: The integrated stakeholder perspective allows the study of the relationships between the principal interest groups, so that knowledge of the state of the question of trade shows facilitates the task of the investigator in future academic works and allows the interested groups to obtain a better return on participation in fairs, as visitor or as
Rybizki, Jan; Just, Andreas; Rix, Hans-Walter
2017-09-01
Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of supernova of type Ia (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6 (+2.1/-1.6)% of the IMF explodes as core-collapse supernova (CC-SN), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10³ M⊙ to 0.5-1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels. Chempy could be a powerful tool to confront predictions from stellar
Lévy flights and self-similar exploratory behaviour of termite workers: beyond model fitting.
Directory of Open Access Journals (Sweden)
Octavio Miramontes
Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show by analyzing over half a million movement displacements that isolated termite workers actually exhibit a range of very interesting dynamical properties--including Lévy flights--in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale.
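The scaling-exponent estimation mentioned above can be sketched by computing the mean squared displacement (MSD) over increasing time lags and fitting MSD(t) ∝ t^α on log-log axes; ordinary Brownian motion gives α ≈ 1, while Lévy-like superdiffusion gives α > 1. The walkers below are simulated Brownian motion, not the termite data:

```python
import numpy as np

rng = np.random.default_rng(6)
steps = rng.normal(size=(500, 2000))        # 500 walkers, 2000 unit steps
paths = np.cumsum(steps, axis=1)

lags = np.array([1, 2, 4, 8, 16, 32, 64, 128])
msd = np.array([np.mean((paths[:, lag:] - paths[:, :-lag]) ** 2)
                for lag in lags])

# Scaling exponent alpha from MSD(t) ~ t^alpha via log-log regression
alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
print(round(alpha, 2))   # ~1 for normal diffusion; >1 signals superdiffusion
```

Replacing the Gaussian steps with heavy-tailed (e.g., Pareto-distributed) step lengths would push α above 1, which is the anomalous-diffusion signature the study probes alongside structure functions.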
A PID Positioning Controller with a Curve Fitting Model Based on RFID Technology
Directory of Open Access Journals (Sweden)
Young-Long Chen
2013-04-01
The global positioning system (GPS) is an important research topic for solving outdoor positioning problems, but GPS is unable to locate objects accurately and precisely indoors. Some available systems apply ultrasound or optical tracking. This paper presents an efficient proportional-integral-derivative (PID) controller with a curve fitting model for mobile robot localization and position estimation, which adopts passive radio frequency identification (RFID) tags in a space. The scheme is based on a mobile robot carrying an RFID reader module that reads low-cost passive tags installed under the floor in a grid-like pattern. The PID controller increases the efficiency of capturing RFID tags, and the curve fitting model is used to systematically identify the revolutions per minute (RPM) of the motor. We control and monitor the position of the robot from a remote location through a mobile phone via Wi-Fi and Bluetooth networks. Experimental results show that our proposed scheme captures more RFID tags than the previous scheme.
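A discrete PID controller of the kind described is straightforward to sketch; the gains, the toy first-order plant, and the RPM target below are invented for illustration, not taken from the paper:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a first-order motor model (inertia + friction) toward a target RPM
pid = PID(kp=0.8, ki=2.0, kd=0.05, dt=0.01)
rpm, target = 0.0, 120.0
for _ in range(2000):                      # 20 s of simulated time
    u = pid.update(target, rpm)
    rpm += (u - 0.5 * rpm) * 0.01          # toy plant dynamics
print(round(rpm, 1))
```

The integral term is what removes the steady-state error against the plant's friction; in the paper's setting the controlled quantity is the motor RPM identified through the curve fitting model.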
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, from 25-item dichotomous scales with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
Limited-information goodness-of-fit testing of hierarchical item factor models.
Cai, Li; Hansen, Mark
2013-05-01
In applications of item response theory, assessment of model fit is a critical issue. Recently, limited-information goodness-of-fit testing has received increased attention in the psychometrics literature. In contrast to full-information test statistics such as Pearson's X² or the likelihood ratio G², these limited-information tests utilize lower-order marginal tables rather than the full contingency table. A notable example is Maydeu-Olivares and colleagues' M2 family of statistics based on univariate and bivariate margins. When the contingency table is sparse, tests based on M2 retain better Type I error rate control than the full-information tests and can be more powerful. While in principle the M2 statistic can be extended to test hierarchical multidimensional item factor models (e.g., bifactor and testlet models), the computation is non-trivial. To obtain M2, a researcher often has to obtain (many thousands of) marginal probabilities, derivatives, and weights. Each of these must be approximated with high-dimensional numerical integration. We propose a dimension reduction method that can take advantage of the hierarchical factor structure so that the integrals can be approximated far more efficiently. We also propose a new test statistic that can be substantially better calibrated and more powerful than the original M2 statistic when the test is long and the items are polytomous. We use simulations to demonstrate the performance of our new methods and illustrate their effectiveness with applications to real data. © 2012 The British Psychological Society.
Evapotranspiration measurement and modeling without fitting parameters in high-altitude grasslands
Ferraris, Stefano; Previati, Maurizio; Canone, Davide; Dematteis, Niccolò; Boetti, Marco; Balocco, Jacopo; Bechis, Stefano
2016-04-01
Mountain grasslands are important, not least because one sixth of the world population lives within watersheds dominated by snowmelt, and because grasslands provide food for both domestic and wild animals. Global warming will probably accelerate the hydrological cycle and increase drought risk. The combination of measurements, modeling and remote sensing can provide knowledge of such remote areas (e.g., Brocca et al., 2013). Better knowledge of the water balance can also allow irrigation to be optimized (e.g., Canone et al., 2015). This work builds a water balance model for mountain grasslands ranging between 1500 and 2300 m a.s.l. The main input is the digital terrain model, which is more reliable in grasslands than in either woods or the built environment; it drives the spatial variability of shortwave solar radiation. The other atmospheric forcings, namely air temperature, wind and longwave radiation, are more problematic to estimate. Ad hoc routines have been written to interpolate the hourly meteorological variability in space. The soil hydraulic properties are less variable than in the plains, but soil depth estimation is still an open issue. The vertical soil profile has been modeled taking into account the main processes: soil evaporation, root uptake, and percolation into the fractured bedrock. The modeled time series of latent heat flux and soil moisture were compared with data measured at an eddy covariance station. The results are very good, given that the model has no fitting parameters. The spatial results were compared with those of a model based on Landsat 7 and 8 data, applied over an area of about 200 square kilometers; the spatial correlation between the two models is in good agreement. Brocca et al. (2013). "Soil moisture estimation in alpine catchments through modelling and satellite observations". Vadose Zone Journal, 12(3), 10 pp. Canone et al. (2015). "Field
Silva, Mónica A; Jonsen, Ian; Russell, Deborah J F; Prieto, Rui; Thompson, Dave; Baumgartner, Mark F
2014-01-01
Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with ARGOS satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to "true" GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, location and behavioural states estimated by switching state-space models (SSSM) fitted to data derived from KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales' behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates.
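The Kalman filter underlying the newer Argos algorithm can be illustrated in its simplest one-dimensional form: predict a random-walk state, then blend each noisy fix in proportion to the Kalman gain. The process and measurement variances below are assumptions for a toy track, not Argos's actual multi-dimensional implementation:

```python
import numpy as np

def kalman_1d(obs, q=0.01, r=1.0):
    """1D random-walk Kalman filter: x_t = x_{t-1} + N(0,q), z_t = x_t + N(0,r)."""
    x, p = obs[0], 1.0                 # initial state estimate and variance
    est = [x]
    for z in obs[1:]:
        p = p + q                      # predict: variance grows by process noise
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update toward the new observation
        p = (1 - k) * p
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(7)
true = np.cumsum(rng.normal(0, 0.1, 300))    # slowly drifting true position
obs = true + rng.normal(0, 1.0, 300)         # noisy location fixes
est = kalman_1d(obs, q=0.01, r=1.0)
rmse_raw = np.sqrt(np.mean((obs - true) ** 2))
rmse_kf = np.sqrt(np.mean((est - true) ** 2))
print(rmse_kf < rmse_raw)   # filtered track is closer to the truth
```

This accuracy gain over raw fixes is the same effect the study quantifies at full scale by validating SSM-filtered Argos tracks against paired GPS locations.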
Directory of Open Access Journals (Sweden)
L.R. Schaeffer
2010-04-01
The shape of individual deviations of milk yield for dairy cattle from the fixed part of a random regression test day model (RRTDM) was investigated. Data were 53,217 TD records for milk yield of 6,229 first lactation Canadian Holsteins in Ontario. Data were fitted with a model that included the fixed effects of herd-test date and of DIM interval nested within age and season of calving. Residuals of the model were then fitted with the following functions: the Ali and Schaeffer 5-parameter model, fourth-order Legendre polynomials, and cubic splines with three, four or five knots. Results confirm the great variability of shape that can be found when individual lactations are modeled. Cubic splines gave better fitting performances, although together with a marked tendency to yield aberrant estimates at the edges of the lactation trajectory.
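Fitting fourth-order Legendre polynomials to test-day residuals, as above, amounts to mapping days in milk onto [−1, 1] (the natural domain of Legendre polynomials) and solving a linear least-squares problem. A sketch with made-up residuals standing in for one cow's lactation deviations:

```python
import numpy as np
from numpy.polynomial import legendre

dim = np.array([5, 35, 65, 95, 125, 155, 185, 215, 245, 275, 305])  # days in milk
resid = np.array([-2.1, 0.8, 1.9, 2.3, 1.7, 1.1, 0.4, -0.2, -0.9, -1.6, -2.4])

# Map DIM onto [-1, 1] before fitting
x = 2 * (dim - dim.min()) / (dim.max() - dim.min()) - 1

coef = legendre.legfit(x, resid, deg=4)    # fourth-order Legendre fit
fitted = legendre.legval(x, coef)
print(np.round(fitted - resid, 2))         # per-test-day fit residuals
```

A cubic spline fit would replace `legfit` with a piecewise basis anchored at three to five knots; the abstract's point is that the extra local flexibility helps in mid-lactation but can misbehave at the edges of the trajectory.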
Yuan, Shupei; Ma, Wenjuan; Kanthawala, Shaheen; Peng, Wei
2015-09-01
Health and fitness applications (apps) are one of the major app categories in the current mobile app market. Few studies have examined this area from the users' perspective. This study adopted the Extended Unified Theory of Acceptance and Use of Technology (UTAUT2) Model to examine the predictors of the users' intention to adopt health and fitness apps. A survey (n=317) was conducted with college-aged smartphone users at a Midwestern university in the United States. Performance expectancy, hedonic motivations, price value, and habit were significant predictors of users' intention of continued usage of health and fitness apps. However, effort expectancy, social influence, and facilitating conditions were not found to predict users' intention of continued usage of health and fitness apps. This study extends the UTAUT2 Model to the mobile apps domain and provides health professionals, app designers, and marketers with insights into user experience in terms of continuously using health and fitness apps.
Directory of Open Access Journals (Sweden)
Loreen Hertäg
2012-09-01
Full Text Available For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ('in-vivo-like') input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a 'high-throughput' model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
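The paper's closed-form rate expressions come from an approximation to the AdEx model; as a hedged stand-in, the sketch below fits the standard closed-form LIF f-I curve to synthetic f-I points with a brute-force grid search. The units, parameter ranges, and grid search itself are invented for the demo and are not the authors' procedure:

```python
import math

def lif_rate(i, r_m, tau, v_th=20.0, t_ref=2.0):
    # closed-form LIF f-I curve; times in ms, so rate = 1000 / ISI gives Hz
    v_inf = r_m * i
    if v_inf <= v_th:
        return 0.0                      # subthreshold: no spiking
    t_isi = t_ref + tau * math.log(v_inf / (v_inf - v_th))
    return 1000.0 / t_isi

def fit_fi(currents, rates):
    # derivative-free grid search over (membrane resistance, time constant)
    best = None
    for r_m in [x * 0.5 for x in range(2, 101)]:
        for tau in [x * 0.5 for x in range(2, 81)]:
            sse = sum((lif_rate(i, r_m, tau) - f) ** 2
                      for i, f in zip(currents, rates))
            if best is None or sse < best[0]:
                best = (sse, r_m, tau)
    return best[1], best[2]

currents = [0.5 * k for k in range(1, 13)]     # injected currents (arbitrary units)
true_rm, true_tau = 10.0, 15.0                 # invented "ground truth" cell
rates = [lif_rate(i, true_rm, true_tau) for i in currents]
rm_hat, tau_hat = fit_fi(currents, rates)
```

Because the synthetic f-I points are noise-free and the true parameters lie on the grid, the search recovers them exactly; with real recordings one would add noise handling and a finer optimizer.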
Modeling of physical fitness of young karate athletes at the stage of preliminary basic training
Directory of Open Access Journals (Sweden)
V. A. Galimskyi
2014-09-01
Full Text Available Purpose: to develop a program for correcting the physical fitness of young karate athletes at the stage of preliminary basic training on the basis of model characteristics. Material: 57 young karate athletes aged 9-11 years took part in the research. Results: the level of general and special physical preparedness of young karate athletes aged 9-11 was determined. Classes in the control group followed the existing youth sports school program for Muay Thai (Thai boxing). For the experimental group, a program for the selective development of general and special physical qualities based on model training sessions was developed. The special program contains six directions: 1. Development of static and dynamic balance; 2. Development of vestibular stability (precision of movements after rotation); 3. Development of movement rate; 4. Development of the capacity for rapid restructuring of movements; 5. Development of the ability to differentiate force and spatial parameters of movement; 6. Development of the ability to perform jumping movements with rotation. Development of special physical qualities was continued with work on improving the technique of complex striking motions in place and in movement. Conclusions: the selective development of special physical qualities based on models of training sessions gives a significant performance advantage over the control group.
Supersymmetric Fits after the Higgs Discovery and Implications for Model Building
Ellis, John
2014-01-01
The data from the first run of the LHC at 7 and 8 TeV, together with the information provided by other experiments such as precision electroweak measurements, flavour measurements, the cosmological density of cold dark matter and the direct search for the scattering of dark matter particles in the LUX experiment, provide important constraints on supersymmetric models. Important information is provided by the ATLAS and CMS measurements of the mass of the Higgs boson, as well as the negative results of searches at the LHC for events with missing transverse energy accompanied by jets, and the LHCb and CMS measurements of BR($B_s \to \mu^+ \mu^-$). Results are presented from frequentist analyses of the parameter spaces of the CMSSM and NUHM1. The global $\chi^2$ functions for the supersymmetric models vary slowly over most of the parameter spaces allowed by the Higgs mass and the missing transverse energy search, with best-fit values that are comparable to the $\chi^2$ for the Standard Model. The $95\%$ CL lower...
Treasure, Janet; Leslie, Monica; Chami, Rayane; Fernández-Aranda, Fernando
2018-03-01
Explanatory models for eating disorders have changed over time to account for changing clinical presentations. The transdiagnostic model evolved from the maintenance model, which provided the framework for cognitive behavioural therapy for bulimia nervosa. However, for many individuals (especially those at the extreme ends of the weight spectrum), this account does not fully fit. New evidence generated from research framed within the food addiction hypothesis is synthesized here into a model that can explain recurrent binge eating behaviour. New interventions that target core maintenance elements identified within the model may be useful additions to a complex model of treatment for eating disorders. Copyright © 2018 John Wiley & Sons, Ltd and Eating Disorders Association.
Dai, Junyi; Kerestes, Rebecca; Upton, Daniel J; Busemeyer, Jerome R; Stout, Julie C
2015-01-01
The Iowa Gambling Task (IGT) and the Soochow Gambling Task (SGT) are two experience-based risky decision-making tasks for examining decision-making deficits in clinical populations. Several cognitive models, including the expectancy-valence learning (EVL) model and the prospect valence learning (PVL) model, have been developed to disentangle the motivational, cognitive, and response processes underlying the explicit choices in these tasks. The purpose of the current study was to develop an improved model that can fit empirical data better than the EVL and PVL models and, in addition, produce more consistent parameter estimates across the IGT and SGT. Twenty-six opiate users (mean age 34.23; SD 8.79) and 27 control participants (mean age 35; SD 10.44) completed both tasks. Eighteen cognitive models varying in evaluation, updating, and choice rules were fit to individual data and their performances were compared to that of a statistical baseline model to find a best fitting model. The results showed that the model combining the prospect utility function treating gains and losses separately, the decay-reinforcement updating rule, and the trial-independent choice rule performed the best in both tasks. Furthermore, the winning model produced more consistent individual parameter estimates across the two tasks than any of the other models.
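The three components of the winning model named above (a prospect utility evaluation rule, a decay-reinforcement updating rule, and a trial-independent choice rule) can be sketched as follows. All parameter values (alpha, lambda, decay, theta) are invented, and the softmax sensitivity is held constant to mimic trial-independent choice; this illustrates the model's structure, not the fitted model:

```python
import math

def prospect_utility(x, alpha=0.5, lam=2.0):
    # gains and losses treated separately, with loss aversion lam
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def decay_reinforce(ev, choice, payoff, decay=0.8):
    # all deck expectancies decay; the chosen deck also gains the outcome's utility
    ev = [decay * v for v in ev]
    ev[choice] += prospect_utility(payoff)
    return ev

def choice_probs(ev, theta=1.0):
    # trial-independent softmax choice rule (constant sensitivity theta)
    ex = [math.exp(theta * v) for v in ev]
    s = sum(ex)
    return [e / s for e in ex]

ev = [0.0, 0.0, 0.0, 0.0]                          # four decks, as in the IGT
ev = decay_reinforce(ev, choice=2, payoff=100.0)   # win 100 on deck C
ev = decay_reinforce(ev, choice=2, payoff=-50.0)   # then lose 50 on deck C
probs = choice_probs(ev)
```

After the loss, deck C's expectancy is 0.8·√100 − 2·√50 < 0, so the softmax assigns it the lowest choice probability, which is the intended qualitative behaviour of the update and choice rules.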
Model Atmosphere Spectrum Fit to the Soft X-Ray Outburst Spectrum of SS Cyg
Directory of Open Access Journals (Sweden)
V. F. Suleimanov
2015-02-01
Full Text Available The X-ray spectrum of SS Cyg in outburst has a very soft component that can be interpreted as the fast-rotating, optically thick boundary layer on the white dwarf surface. This component was carefully investigated by Mauche (2004) using the Chandra LETG spectrum of this object in outburst. The spectrum shows broad (≈5 Å) spectral features that have been interpreted as a large number of absorption lines on a blackbody continuum with a temperature of ≈250 kK. Because the spectrum resembles the photospheric spectra of super-soft X-ray sources, we tried to fit it with high-gravity hot LTE stellar model atmospheres with solar chemical composition, specially computed for this purpose. We obtained a reasonably good fit to the 60–125 Å spectrum with the following parameters: Teff = 190 kK, log g = 6.2, and NH = 8·10^19 cm^-2, although at shorter wavelengths the observed spectrum has a much higher flux. The reasons for this are discussed. The hypothesis of a fast-rotating boundary layer is supported by the derived low surface gravity.
Resilience of a FIT screening programme against screening fatigue: a modelling study
Directory of Open Access Journals (Sweden)
Marjolein J. E. Greuter
2016-09-01
Full Text Available Abstract Background Repeated participation is important in faecal immunochemical testing (FIT) screening for colorectal cancer (CRC). However, a large number of screening invitations over time may lead to screening fatigue and, consequently, decreased participation rates. We evaluated the impact of screening fatigue on overall screening programme effectiveness. Methods Using the ASCCA model, we simulated the Dutch CRC screening programme consisting of biennial FIT screening in individuals aged 55–75. We studied the resilience of the programme against heterogeneity in screening attendance and decrease in participation rate due to screening fatigue. Outcomes were reductions in CRC incidence and mortality compared to no screening. Results Assuming homogeneous 63% participation, i.e., each round each individual was equally likely to attend screening, 30 years of screening reduced CRC incidence and mortality by 39% and 53%, respectively, compared to no screening. When assuming clustered participation, i.e., three subgroups of individuals with a high (95%), moderate (65%) and low (5%) participation rate, screening was less effective; reductions were 33% for CRC incidence and 43% for CRC mortality. Screening fatigue considerably reduced screening effectiveness; if individuals refrained from screening after three negative screens, model-predicted incidence reductions decreased to 25% and 18% under homogeneous and clustered participation, respectively. Figures were 34% and 25% for mortality reduction. Conclusions Screening will substantially decrease CRC incidence and mortality. However, screening effectiveness can be seriously compromised if screening fatigue occurs. This warrants careful monitoring of individual screening behaviour and consideration of targeted invitation systems for individuals who have (repeatedly) missed screening rounds.
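A toy Monte Carlo version of the clustered-participation scenario can illustrate how fatigue erodes attendance. The assumptions here are inventions for the sketch, not the ASCCA model: all attended screens are negative, "fatigue" simply stops attendance after three attended screens, and the three subgroups are equal in size:

```python
import random

def simulate_attendance(n=10000, rounds=10, fatigue_after=None, seed=7):
    """Mean fraction of invited rounds actually attended, with subgroups of
    high / moderate / low per-round participation probability."""
    random.seed(seed)
    groups = [(0.95, n // 3), (0.65, n // 3), (0.05, n - 2 * (n // 3))]
    total = 0
    for p, size in groups:
        for _ in range(size):
            attended = 0
            for _ in range(rounds):
                if fatigue_after is not None and attended >= fatigue_after:
                    break           # screening fatigue: stops after enough negatives
                if random.random() < p:
                    attended += 1
            total += attended
    return total / (n * rounds)

base = simulate_attendance()                    # no fatigue
fatigued = simulate_attendance(fatigue_after=3) # stop after 3 negative screens
```

Under these invented assumptions, average attendance drops from roughly the subgroup mean (0.55) to well below it, mirroring the abstract's finding that fatigue can seriously compromise programme effectiveness.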
DEFF Research Database (Denmark)
Nielsen, P.; Jiang, L. P.; Rytter, N. G. M.
2014-01-01
This paper evaluates the influence of forecast horizon and observation fit on the robustness and performance of a specific freight rate forecast model used in the liner shipping industry. In the first stage of the research, a forecast model used to predict container freight rate development is pr...
Grilli, Leonardo; Innocenti, Francesco
2017-01-01
Fitting cross-classified multilevel models with binary response is challenging. In this setting a promising method is Bayesian inference through Integrated Nested Laplace Approximations (INLA), which performs well in several latent variable models. We devise a systematic simulation study to assess
DEFF Research Database (Denmark)
Vang, Jakob Rabjerg; Zhou, Fan; Andreasen, Søren Juhl
2015-01-01
A high temperature PEM (HTPEM) fuel cell model capable of simulating both steady state and dynamic operation is presented. The purpose is to enable extraction of unknown parameters from sets of impedance spectra and polarisation curves. The model is fitted to two polarisation curves and four...
Molecular mechanisms of protein aggregation from global fitting of kinetic models.
Meisl, Georg; Kirkegaard, Julius B; Arosio, Paolo; Michaels, Thomas C T; Vendruscolo, Michele; Dobson, Christopher M; Linse, Sara; Knowles, Tuomas P J
2016-02-01
The elucidation of the molecular mechanisms by which soluble proteins convert into their amyloid forms is a fundamental prerequisite for understanding and controlling disorders that are linked to protein aggregation, such as Alzheimer's and Parkinson's diseases. However, because of the complexity associated with aggregation reaction networks, the analysis of kinetic data of protein aggregation to obtain the underlying mechanisms represents a complex task. Here we describe a framework, using quantitative kinetic assays and global fitting, to determine and to verify a molecular mechanism for aggregation reactions that is compatible with experimental kinetic data. We implement this approach in a web-based software, AmyloFit. Our procedure starts from the results of kinetic experiments that measure the concentration of aggregate mass as a function of time. We illustrate the approach with results from the aggregation of the β-amyloid (Aβ) peptides measured using thioflavin T, but the method is suitable for data from any similar kinetic experiment measuring the accumulation of aggregate mass as a function of time; the input data are in the form of a tab-separated text file. We also outline general experimental strategies and practical considerations for obtaining kinetic data of sufficient quality to draw detailed mechanistic conclusions, and the procedure starts with instructions for extensive data quality control. For the core part of the analysis, we provide an online platform (http://www.amylofit.ch.cam.ac.uk) that enables robust global analysis of kinetic data without the need for extensive programming or detailed mathematical knowledge. The software automates repetitive tasks and guides users through the key steps of kinetic analysis: determination of constraints to be placed on the aggregation mechanism based on the concentration dependence of the aggregation reaction, choosing from several fundamental models describing assembly into linear aggregates and
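AmyloFit fits mechanistic rate laws for nucleation and growth; as a much simpler illustration of the global-fitting idea, the sketch below fits a single shared rate constant jointly to toy sigmoidal aggregation curves at three initial monomer concentrations. The functional form, concentration scaling, and grid-search optimizer are invented for the demo and do not reproduce the AmyloFit models:

```python
import math

def aggregate_mass(t, m0, k, n=2):
    # toy sigmoidal growth: the effective rate kappa scales with the initial
    # monomer concentration m0 (stand-in for a mechanistic rate law)
    kappa = k * m0 ** (n / 2)
    return m0 / (1.0 + math.exp(-kappa * (t - 5.0 / kappa)))

def global_fit_k(datasets):
    # one shared rate constant k fitted jointly to all concentrations
    best = None
    for k in [0.001 * j for j in range(1, 501)]:
        sse = 0.0
        for m0, ts, ms in datasets:
            sse += sum((aggregate_mass(t, m0, k) - m) ** 2
                       for t, m in zip(ts, ms))
        if best is None or sse < best[0]:
            best = (sse, k)
    return best[1]

true_k = 0.12
datasets = []
for m0 in (1.0, 2.0, 4.0):                      # three initial concentrations
    ts = [0.5 * j for j in range(200)]
    ms = [aggregate_mass(t, m0, true_k) for t in ts]
    datasets.append((m0, ts, ms))
k_hat = global_fit_k(datasets)
```

The key point is the joint objective: a single parameter must explain the concentration dependence of all curves at once, which is what gives global fitting its power to discriminate between mechanisms.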
Spädtke, P
2013-01-01
Modeling of technical machines became a standard technique once computers became powerful enough to handle the amount of data relevant to the specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary as well as magnetic fields caused by coils or permanent magnets have to be known. Internal sources for both fields are sometimes taken into account, such as space charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described, and some benchmarking is shown to estimate the necessary computing times for different problems. Different types of charged particle sources will be shown together with a suitable model to describe their physics. Electron guns are covered as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron resonance ion sources, and H$^-$ sources) together with some remarks on beam transport.
Temperature dependence of bulk respiration of crop stands. Measurement and model fitting
International Nuclear Information System (INIS)
Tani, Takashi; Arai, Ryuji; Tako, Yasuhiro
2007-01-01
The objective of the present study was to examine whether the temperature dependence of respiration at a crop-stand scale could be directly represented by an Arrhenius function, which is widely used for representing the temperature dependence of leaf respiration. We determined the temperature dependences of bulk respiration of monospecific stands of rice and soybean within a range of air temperature from 15 to 30 °C using large closed chambers. Measured responses of the respiration rates of the two stands were well fitted by the Arrhenius function (R² = 0.99). In the existing model to assess the local radiological impact of anthropogenic carbon-14, effects of the physical environmental factors on photosynthesis and respiration of crop stands are not taken into account in the calculation of the net amount of carbon per cultivation area in crops at harvest, which is the crucial parameter for the estimation of the activity concentration of carbon-14 in crops. Our result indicates that the Arrhenius function is useful for incorporating the effect of temperature on respiration of crop stands into the model, which is expected to contribute to a more realistic estimate of the activity concentration of carbon-14 in crops. (author)
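Fitting an Arrhenius function to respiration-temperature data reduces to linear regression after taking logarithms, since ln R = ln A − Ea/(R_gas·T). The activation energy and pre-exponential factor below are hypothetical, not the study's rice or soybean estimates:

```python
import math

R_GAS = 8.314  # gas constant, J mol^-1 K^-1

def fit_arrhenius(temps_c, resp):
    # linearise ln(R) = ln(A) - Ea / (R_GAS * T) and do OLS on x = 1/T
    xs = [1.0 / (t + 273.15) for t in temps_c]
    ys = [math.log(r) for r in resp]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    ea = -slope * R_GAS                 # activation energy, J/mol
    a = math.exp(my - slope * mx)       # pre-exponential factor
    return ea, a

temps = [15, 20, 25, 30]               # air temperatures used for the stands (°C)
ea_true, a_true = 55000.0, 2.0e9       # hypothetical Ea and A for the demo
resp = [a_true * math.exp(-ea_true / (R_GAS * (t + 273.15))) for t in temps]
ea_hat, a_hat = fit_arrhenius(temps, resp)
```

On noise-free data the regression recovers the parameters; the same linearised fit applied to chamber measurements yields the stand-scale Ea the abstract discusses.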
A Mathematical Images Group Model to Estimate the Sound Level in a Close-Fitting Enclosure
Directory of Open Access Journals (Sweden)
Michael J. Panza
2014-01-01
Full Text Available This paper describes a special mathematical images model to determine the sound level inside a close-fitting sound enclosure. Such an enclosure is defined as the internal air volume bounded by a machine vibration noise source at one wall and a parallel reflecting wall located very close to it, which acts as the outside radiating wall of the enclosure. Four smaller surfaces define a parallelepiped for the volume. The main reverberation group is between the two large parallel planes. Viewed as a discrete line-type source, the main group is extended by additional discrete line-type source image groups due to reflections from the four smaller surfaces. The images group approach provides a convergent solution for the case where hard reflective surfaces are modeled with absorption coefficients equal to zero. Numerical examples are used to calculate the sound pressure level incident on the outside wall and the effect of adding high absorption to the front wall. This is compared to the result from the general large-room diffuse reverberant field enclosure formula for several hard-wall absorption coefficients and distances between machine and front wall. The images group method is shown to have low sensitivity to the hard-wall absorption coefficient value and presents a method where zero sound absorption can be used for hard surfaces, rather than an initial hard-surface sound absorption estimate or measurement, to predict the internal sound levels and the effect of adding absorption.
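A one-dimensional caricature of the images idea: summing inverse-square contributions from image sources between the two parallel planes shows why the series converges even with zero absorption. The gap width, reflection bookkeeping, and image count are simplified inventions for the sketch, not the paper's full line-source group model:

```python
import math

def images_level(d=0.2, n_images=2000, alpha=0.0):
    """Relative SPL (dB) at the outer wall from a source at the machine wall.
    The k-th image lies at distance (2k+1)*d and has undergone k reflections,
    each attenuating its energy by a factor (1 - alpha)."""
    energy = 0.0
    for k in range(n_images):
        r = (2 * k + 1) * d
        energy += (1.0 - alpha) ** k / r ** 2   # inverse-square spreading
    return 10.0 * math.log10(energy)

hard = images_level(alpha=0.0)   # perfectly hard walls: sum of 1/(2k+1)^2 converges
soft = images_level(alpha=0.3)   # absorptive walls lower the level
```

With alpha = 0 the energy sum converges (it is proportional to sum 1/(2k+1)^2 = pi^2/8), which mirrors the paper's point that the images approach remains well defined for perfectly hard surfaces, while adding absorption reduces the level.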
Fitting Cox Models with Doubly Censored Data Using Spline-Based Sieve Marginal Likelihood
Li, Zhiguo; Owzar, Kouros
2015-01-01
In some applications, the failure time of interest is the time from an originating event to a failure event, while both event times are interval censored. We propose fitting Cox proportional hazards models to this type of data using a spline-based sieve maximum marginal likelihood, where the time to the originating event is integrated out in the empirical likelihood function of the failure time of interest. This greatly reduces the complexity of the objective function compared with the fully semiparametric likelihood. The dependence of the time of interest on time to the originating event is induced by including the latter as a covariate in the proportional hazards model for the failure time of interest. The use of splines results in a higher rate of convergence of the estimator of the baseline hazard function compared with the usual nonparametric estimator. The computation of the estimator is facilitated by a multiple imputation approach. Asymptotic theory is established and a simulation study is conducted to assess its finite sample performance. It is also applied to analyzing a real data set on AIDS incubation time. PMID:27239090
Directory of Open Access Journals (Sweden)
A H Sabry
Full Text Available The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is the ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models using frequency-domain data to provide a state-space or transfer function for the model.
Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S
2018-01-01
International Nuclear Information System (INIS)
Little, M P
2004-01-01
Bystander effects following exposure to α-particles have been observed in many experimental systems, and imply that linearly extrapolating low dose risks from high dose data might materially underestimate risk. Brenner and Sachs (2002 Int. J. Radiat. Biol. 78 593-604; 2003 Health Phys. 85 103-8) have recently proposed a model of the bystander effect which they use to explain the inverse dose rate effect observed for lung cancer in underground miners exposed to radon daughters. In this paper we fit the model of the bystander effect proposed by Brenner and Sachs to 11 cohorts of underground miners, taking account of the covariance structure of the data and the period of latency between the development of the first pre-malignant cell and clinically overt cancer. We also fitted a simple linear relative risk model, with adjustment for age at exposure and attained age. The methods that we use for fitting both models are different from those used by Brenner and Sachs, in particular taking account of the covariance structure, which they did not, and omitting certain unjustifiable adjustments to the miner data. The fit of the original model of Brenner and Sachs (with 0 y period of latency) is generally poor, although it is much improved by assuming a 5 or 6 y period of latency from the first appearance of a pre-malignant cell to cancer. The fit of this latter model is equivalent to that of a linear relative risk model with adjustment for age at exposure and attained age. In particular, both models are capable of describing the observed inverse dose rate effect in this data set
Basch, Corey H; Hillyer, Grace Clarke; Ethan, Danna; Berdnik, Alyssa; Basch, Charles E
2015-07-01
Tanned skin has been associated with perceptions of fitness and social desirability. Portrayal of models in magazines may reflect and perpetuate these perceptions. Limited research has investigated tanning shade gradations of models in men's versus women's fitness and muscle enthusiast magazines. Such findings are relevant in light of increased incidence and prevalence of melanoma in the United States. This study evaluated and compared tanning shade gradations of adult Caucasian male and female model images in mainstream fitness and muscle enthusiast magazines. Sixty-nine U.S. magazine issues (spring and summer, 2013) were utilized. Two independent reviewers rated tanning shade gradations of adult Caucasian male and female model images on magazines' covers, advertisements, and feature articles. Shade gradations were assessed using stock photographs of Caucasian models with varying levels of tanned skin on an 8-shade scale. A total of 4,683 images were evaluated. Darkest tanning shades were found among males in muscle enthusiast magazines and lightest among females in women's mainstream fitness magazines. By gender, male model images were 54% more likely to portray a darker tanning shade. In this study, images in men's (vs. women's) fitness and muscle enthusiast magazines portrayed Caucasian models with darker skin shades. Despite these magazines' fitness-related messages, pro-tanning images may promote attitudes and behaviors associated with higher skin cancer risk. To date, this is the first study to explore tanning shades in men's magazines of these genres. Further research is necessary to identify effects of exposure to these images among male readers. © The Author(s) 2014.
Experimental model for non-Newtonian fluid viscosity estimation: Fit to mathematical expressions
Directory of Open Access Journals (Sweden)
Guillem Masoliver i Marcos
2017-01-01
Full Text Available The construction process of a viscometer, developed in collaboration with a final-year project student, is presented here. It is intended to be used by first-year students to learn about viscosity as a fluid property, for both Newtonian and non-Newtonian fluids. Viscosity determination is crucial for understanding fluid behaviour in relation to rheological and physical properties. These have great implications for engineering aspects such as friction or lubrication. With the present experimental model device, three different fluids are analyzed (water, ketchup, and a mixture of cornstarch and water). Tangential stress is measured versus velocity in order to characterize all the fluids under different thermal conditions. A mathematical fitting process is proposed in order to adjust the results to the expected analytical expressions, obtaining good results for these fittings, with R² greater than 0.88 in all cases.
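For the non-Newtonian fits, a standard choice of analytical expression is the Ostwald-de Waele power-law model tau = K·gamma_dot^n, fitted by linear regression in log-log space. The fluid parameters below are hypothetical (a shear-thinning, ketchup-like exponent), not measurements from the device described:

```python
import math

def fit_power_law(shear_rates, stresses):
    # Ostwald-de Waele model: tau = K * gamma_dot**n, linearised as
    # ln(tau) = ln(K) + n * ln(gamma_dot), then ordinary least squares
    xs = [math.log(g) for g in shear_rates]
    ys = [math.log(t) for t in stresses]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    n = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    k = math.exp(my - n * mx)
    return k, n   # consistency index K, flow behaviour index n

gammas = [1, 2, 5, 10, 20, 50, 100]      # shear rates (1/s)
k_true, n_true = 8.0, 0.35               # hypothetical shear-thinning fluid
taus = [k_true * g ** n_true for g in gammas]
k_hat, n_hat = fit_power_law(gammas, taus)
```

A fitted n below 1 identifies shear-thinning behaviour (ketchup), n above 1 shear-thickening (the cornstarch mixture), and n = 1 recovers a Newtonian fluid (water), which is the classification the lab exercise aims at.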
Fitting diameter distribution models to data from forest inventories with concentric plot design
Energy Technology Data Exchange (ETDEWEB)
Nanos, N.; Sjöstedt de Luna, S.
2017-11-01
Aim: Several national forest inventories use a complex plot design based on multiple concentric subplots where smaller diameter trees are inventoried when lying in the smaller-radius subplots and ignored otherwise. Data from these plots are truncated, with threshold (truncation) diameters varying according to the distance from the plot centre. In this paper we designed a maximum likelihood method to fit the Weibull diameter distribution to data from concentric plots. Material and methods: Our method (M1) was based on multiple truncated probability density functions to build the likelihood. In addition, we used an alternative method (M2) presented recently. We used methods M1 and M2 as well as two other reference methods to estimate the Weibull parameters in 40,000 simulated plots. The spatial tree pattern of the simulated plots was generated using four models of spatial point patterns. Two error indices were used to assess the relative performance of M1 and M2 in estimating relevant stand-level variables. In addition, we estimated the Quadratic Mean plot Diameter (QMD) using Expansion Factors (EFs). Main results: Methods M1 and M2 produced comparable estimation errors in random and cluster tree spatial patterns. Method M2 produced biased parameter estimates in plots with inhomogeneous Poisson patterns. Estimation of QMD using EFs produced biased results in plots with inhomogeneous-intensity Poisson patterns. Research highlights: We designed a new method to fit the Weibull distribution to forest inventory data from concentric plots that achieves high accuracy and precision in parameter estimates regardless of the within-plot spatial tree pattern.
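A sketch of the core of a method like M1: maximizing a left-truncated Weibull likelihood in which each tree carries the truncation threshold of its subplot, so the contribution of a tree is f(d)/S(threshold). The thresholds, sample size, and coarse grid optimizer are invented for illustration and stand in for the paper's maximum likelihood machinery:

```python
import math
import random

def weibull_logpdf(d, shape, scale):
    z = d / scale
    return math.log(shape / scale) + (shape - 1) * math.log(z) - z ** shape

def weibull_logsf(d, shape, scale):
    # log survival function log(1 - F(d))
    return -(d / scale) ** shape

def fit_truncated_weibull(data):
    """data: list of (diameter, truncation threshold). Maximise the
    left-truncated log-likelihood sum[log f(d) - log S(threshold)]
    over a coarse (shape, scale) grid."""
    best = None
    for shape in [1.2 + 0.1 * i for i in range(21)]:
        for scale in [10.0 + 0.5 * j for j in range(21)]:
            ll = sum(weibull_logpdf(d, shape, scale)
                     - weibull_logsf(thr, shape, scale)
                     for d, thr in data)
            if best is None or ll > best[0]:
                best = (ll, shape, scale)
    return best[1], best[2]

random.seed(3)
thresholds = [7.5, 12.5, 22.5]   # per-subplot threshold diameters (cm, illustrative)
data = []
while len(data) < 800:
    # draw from Weibull(shape 2, scale 15) by inversion
    d = 15.0 * (-math.log(random.random())) ** (1 / 2.0)
    thr = random.choice(thresholds)
    if d >= thr:                 # tree recorded only if it exceeds its threshold
        data.append((d, thr))
shape_hat, scale_hat = fit_truncated_weibull(data)
```

Ignoring the truncation term would bias both parameters upward, since small trees are systematically missing; conditioning each observation on its own threshold is exactly what makes the concentric-plot likelihood correct.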
Pulmonary lobe segmentation based on ridge surface sampling and shape model fitting
International Nuclear Information System (INIS)
Ross, James C.; Kindlmann, Gordon L.; Okajima, Yuka; Hatabu, Hiroto; Díaz, Alejandro A.; Silverman, Edwin K.; Washko, George R.; Dy, Jennifer; Estépar, Raúl San José
2013-01-01
Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory computed tomography (CT) datasets. The authors rely on ridge surface image features indicating fissure locations and a novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particle data to discriminate between fissure and nonfissure candidates. The resulting set of particle points is used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low dose acquisition protocols. Dice scores indicate that only 29 out of 500 (5.8%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT. Conclusions: The proposed
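The final interpolation step the authors describe, fitting a thin plate spline through the accepted particle points, can be illustrated with a small self-contained sketch: U(r) = r² log r is the standard 2-D TPS kernel, and the sample points and heights below are invented:

```python
import numpy as np

def tps_fit(points, values, reg=0.0):
    """Solve for thin-plate-spline weights interpolating values at 2-D points."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d), 0.0)   # U(r) = r^2 log r, U(0) = 0
    K += reg * np.eye(n)                             # reg > 0 gives smoothing
    P = np.hstack([np.ones((n, 1)), points])         # affine part
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.concatenate([values, np.zeros(3)])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]                          # kernel weights, affine coeffs

def tps_eval(points, w, a, query):
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        U = np.where(d > 0, d**2 * np.log(d), 0.0)
    return U @ w + a[0] + query @ a[1:]

# Hypothetical fissure-candidate points: (x, y) locations with height z.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(25, 2))
z = np.sin(pts[:, 0] / 3.0) + 0.1 * pts[:, 1]
w, a = tps_fit(pts, z)
z_hat = tps_eval(pts, w, a, pts)
```

With `reg=0` the spline interpolates the points exactly, which is the behaviour wanted when the particle points have already been filtered by the shape model.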
Fitting non-gaussian Models to Financial data: An Empirical Study
Directory of Open Access Journals (Sweden)
Pablo Olivares
2011-04-01
Full Text Available This paper presents some experiences with modeling financial data by three classes of models as alternatives to Gaussian linear models. Dynamic Volatility, Stable Lévy, and Diffusion with Jumps models are considered. The techniques are illustrated with examples of financial series on currencies, futures, and indexes.
Wasylkiw, L; Emms, A A; Meuse, R; Poirier, K F
2009-03-01
The current study is a content analysis of women appearing in advertisements in two types of magazines: fitness/health versus fashion/beauty, chosen because of their large and predominantly female readerships. Women appearing in advertisements of the June 2007 issue of five fitness/health magazines were compared to women appearing in advertisements of the June 2007 issue of five beauty/fashion magazines. Female models appearing in advertisements of both types of magazines were primarily young, thin Caucasians; however, images of models were more likely to emphasize appearance over performance when they appeared in fashion magazines. This difference in emphasis has implications for future research.
The fitting of general force-of-infection models to wildlife disease prevalence data
Heisey, D.M.; Joly, D.O.; Messier, F.
2006-01-01
Researchers and wildlife managers increasingly find themselves in situations where they must deal with infectious wildlife diseases such as chronic wasting disease, brucellosis, tuberculosis, and West Nile virus. Managers are often charged with designing and implementing control strategies, and researchers often seek to determine factors that influence and control the disease process. All of these activities require the ability to measure some indication of a disease's foothold in a population and evaluate factors affecting that foothold. The most common type of data available to managers and researchers is apparent prevalence data. Apparent disease prevalence, the proportion of animals in a sample that are positive for the disease, might seem like a natural measure of disease's foothold, but several properties, in particular, its dependency on age structure and the biasing effects of disease-associated mortality, make it less than ideal. In quantitative epidemiology, the "force of infection," or infection hazard, is generally the preferred parameter for measuring a disease's foothold, and it can be viewed as the most appropriate way to "adjust" apparent prevalence for age structure. The typical ecology curriculum includes little exposure to quantitative epidemiological concepts such as cumulative incidence, apparent prevalence, and the force of infection. The goal of this paper is to present these basic epidemiological concepts and resulting models in an ecological context and to illustrate how they can be applied to understand and address basic epidemiological questions. We demonstrate a practical approach to solving the heretofore intractable problem of fitting general force-of-infection models to wildlife prevalence data using a generalized regression approach. We apply the procedures to Mycobacterium bovis (bovine tuberculosis) prevalence in bison (Bison bison) in Wood Buffalo National Park, Canada, and demonstrate strong age dependency in the force of
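The link between apparent prevalence and the force of infection can be illustrated with the simplest case, a constant hazard λ, where the probability of being positive by age a is 1 − exp(−λa). The age classes and sample sizes below are invented, and this toy binomial maximum-likelihood fit is far simpler than the paper's general regression approach:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical age-prevalence data generated under a constant force of
# infection: P(positive by age a) = 1 - exp(-lam * a).
ages = np.array([1.0, 2.0, 3.0, 5.0, 8.0, 12.0])
n_sampled = np.array([50, 60, 55, 40, 30, 20])
rng = np.random.default_rng(2)
n_pos = rng.binomial(n_sampled, 1.0 - np.exp(-0.15 * ages))

def neg_log_lik(lam):
    # Binomial log-likelihood of the observed positives in each age class
    p = 1.0 - np.exp(-lam * ages)
    return -np.sum(n_pos * np.log(p) + (n_sampled - n_pos) * np.log1p(-p))

res = minimize_scalar(neg_log_lik, bounds=(1e-4, 2.0), method="bounded")
lam_hat = res.x
```

The estimate is a hazard per unit age, so it is automatically "adjusted" for the age composition of the sample, which is exactly what raw apparent prevalence lacks.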
International Nuclear Information System (INIS)
Le Roy, S; Teyssedre, G; Laurent, C; Montanari, G C; Palmieri, F
2006-01-01
A numerical model for describing bipolar charge transport and storage in polyethylene has been developed recently. The present paper proposes a comparison of the model outputs with experimental data in three different direct current (DC) voltage application protocols (step field increase and polarization/depolarization schemes). Three kinds of measurement have been realized for the three different protocols: space charge distribution using the pulsed electro-acoustic method, external current and electroluminescence. Simulation under AC stress has also been attempted on the basis of the model parameters that were derived from the DC case. Model limitations and possible improvements are discussed
Directory of Open Access Journals (Sweden)
K S Mwitondi
2013-05-01
Full Text Available Differences in modelling techniques and model performance assessments typically impinge on the quality of knowledge extraction from data. We propose an algorithm for determining optimal patterns in data by separately training and testing three decision tree models in the Pima Indians Diabetes and the Bupa Liver Disorders datasets. Model performance is assessed using ROC curves and the Youden Index. Moving differences between sequential fitted parameters are then extracted, and their respective probability density estimations are used to track their variability using an iterative graphical data visualisation technique developed for this purpose. Our results show that the proposed strategy separates the groups more robustly than the plain ROC/Youden approach, eliminates obscurity, and minimizes over-fitting. Further, the algorithm can easily be understood by non-specialists and demonstrates multi-disciplinary compliance.
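The ROC/Youden assessment mentioned above reduces, for a single score, to sweeping thresholds and maximizing J = TPR − FPR. A minimal sketch with synthetic scores (the two Gaussian score distributions are assumptions, not the paper's Pima or Bupa datasets):

```python
import numpy as np

# Synthetic classifier scores: higher values should indicate the positive class.
rng = np.random.default_rng(3)
neg = rng.normal(0.0, 1.0, 500)   # scores for the negative class
pos = rng.normal(2.0, 1.0, 500)   # scores for the positive class

# Sweep every observed score as a threshold and compute TPR and FPR.
thresholds = np.unique(np.concatenate([neg, pos]))
tpr = np.array([(pos >= t).mean() for t in thresholds])
fpr = np.array([(neg >= t).mean() for t in thresholds])

youden = tpr - fpr                 # Youden Index J at each threshold
best = thresholds[np.argmax(youden)]
J = youden.max()
```

For two unit-variance Gaussians separated by 2, the population-optimal cut lies midway between the means, and J is roughly 0.68; the empirical maximum fluctuates around that value.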
Yu, Chung-Jong; Kim, Euikwoun; Kim, Jae-Yong
2011-05-01
A general-purpose fitting procedure is presented for X-ray reflectivity data. The Parratt formula was used to fit the low-angle region of the reflectivity data, and the resulting electron density profile (continuous base EDP or cbEDP) was then divided into a series of electron density slabs of width 1 angstrom (discrete base EDP or dbEDP), which is then easily incorporated into the Distorted Wave Born Approximation (DWBA). An additional series of density slabs of resolution-limited width is overlaid on the dbEDP, and the density value of each additional slab is allowed to vary to further fit the data model-independently using the DWBA. Because this procedure combines the Parratt formula and model-independent DWBA fitting, either fitting method can be employed, depending on the type of thin film. Moreover, it provides a way to overcome the difficulties that arise when neither fitting method works well for certain types of thin films. Simulations show that this procedure is suitable for nanoscale thin film characterization.
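The Parratt recursion used for the low-angle fit can be sketched for an absorption-free slab model: interface Fresnel coefficients are combined from the substrate upward, with a phase factor for each finite layer. The film and substrate critical wavevectors and the 100 Å thickness below are hypothetical:

```python
import numpy as np

def parratt_reflectivity(q, qc, thickness):
    """Specular reflectivity of a slab stack via the Parratt recursion.

    qc lists critical wavevectors for [ambient, layer_1, ..., substrate];
    thickness lists the finite layers layer_1..layer_{N-2}. Absorption is
    neglected, so this is only a simplified sketch of the method.
    """
    q = np.asarray(q, dtype=complex)
    kz = [np.sqrt(q**2 - qc_j**2 + 0j) / 2.0 for qc_j in qc]
    r = np.zeros_like(q)                   # no reflection below the substrate
    for j in range(len(qc) - 2, -1, -1):   # walk the interfaces bottom-up
        fresnel = (kz[j] - kz[j + 1]) / (kz[j] + kz[j + 1])
        if j < len(qc) - 2:                # phase across finite layer j+1
            r_below = r * np.exp(2j * kz[j + 1] * thickness[j])
        else:
            r_below = r
        r = (fresnel + r_below) / (1.0 + fresnel * r_below)
    return np.abs(r) ** 2

# Hypothetical 100 A film (qc = 0.020 1/A) on a substrate (qc = 0.0316 1/A).
q = np.linspace(0.005, 0.3, 500)
refl = parratt_reflectivity(q, qc=[0.0, 0.020, 0.0316], thickness=[100.0])
```

Below the critical wavevector the stack totally reflects (R = 1), and at high q the reflectivity falls off rapidly with Kiessig fringes from the film thickness, which is the regime the slab-based DWBA refinement then works on.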
Directory of Open Access Journals (Sweden)
Farhan Akram
Full Text Available This paper presents a region-based active contour method for the segmentation of intensity inhomogeneous images using an energy functional based on local and global fitted images. A square image fitted model is defined by using both local and global fitted differences. Moreover, local and global signed pressure force functions are introduced in the solution of the energy functional to stabilize the gradient descent flow. In the final gradient descent solution, the local fitted term helps extract regions with intensity inhomogeneity, whereas the global fitted term targets homogeneous regions. A Gaussian kernel is applied to regularize the contour at each step, which not only smoothes it but also avoids the computationally expensive re-initialization. Intensity inhomogeneous images contain undesired smooth intensity variations (bias field that alter the results of intensity-based segmentation methods. The bias field is approximated with a Gaussian distribution and the bias of intensity inhomogeneous regions is corrected by dividing the original image by the approximated bias field. In this paper, a two-phase model is first derived and then extended to a four-phase model to segment brain magnetic resonance (MR images into the desired regions of interest. Experimental results with both synthetic and real brain MR images are used for a quantitative and qualitative comparison with state-of-the-art active contour methods to show the advantages of the proposed segmentation technique in practical terms.
Zheng, Wenjun; Tekpinar, Mustafa
2014-01-01
To circumvent the difficulty of directly solving high-resolution biomolecular structures, low-resolution structural data from Cryo-electron microscopy (EM) and small angle solution X-ray scattering (SAXS) are increasingly used to explore multiple conformational states of biomolecular assemblies. One promising avenue to obtain high-resolution structural models from low-resolution data is via data-constrained flexible fitting. To this end, we have developed a new method based on a coarse-grained Cα-only protein representation, and a modified form of the elastic network model (ENM) that allows large-scale conformational changes while maintaining the integrity of local structures including pseudo-bonds and secondary structures. Our method minimizes a pseudo-energy which linearly combines various terms of the modified ENM energy with an EM/SAXS-fitting score and a collision energy that penalizes steric collisions. Unlike some previous flexible fitting efforts using the lowest few normal modes, our method effectively utilizes all normal modes so that both global and local structural changes can be fully modeled with accuracy. This method is also highly efficient in computing time. We have demonstrated our method using adenylate kinase as a test case which undergoes a large open-to-close conformational change. The EM-fitting method is available at a web server (http://enm.lobos.nih.gov), and the SAXS-fitting method is available as a pre-compiled executable upon request. © 2014 Elsevier Inc. All rights reserved.
Garcia, Miguel Angel; Puig, Domenec
2017-01-01
This paper presents a region-based active contour method for the segmentation of intensity inhomogeneous images using an energy functional based on local and global fitted images. A square image fitted model is defined by using both local and global fitted differences. Moreover, local and global signed pressure force functions are introduced in the solution of the energy functional to stabilize the gradient descent flow. In the final gradient descent solution, the local fitted term helps extract regions with intensity inhomogeneity, whereas the global fitted term targets homogeneous regions. A Gaussian kernel is applied to regularize the contour at each step, which not only smoothes it but also avoids the computationally expensive re-initialization. Intensity inhomogeneous images contain undesired smooth intensity variations (bias field) that alter the results of intensity-based segmentation methods. The bias field is approximated with a Gaussian distribution and the bias of intensity inhomogeneous regions is corrected by dividing the original image by the approximated bias field. In this paper, a two-phase model is first derived and then extended to a four-phase model to segment brain magnetic resonance (MR) images into the desired regions of interest. Experimental results with both synthetic and real brain MR images are used for a quantitative and qualitative comparison with state-of-the-art active contour methods to show the advantages of the proposed segmentation technique in practical terms. PMID:28376124
Directory of Open Access Journals (Sweden)
Terrapon Nicolas
2012-05-01
Full Text Available Abstract Background Hidden Markov Models (HMMs) are a powerful tool for protein domain identification. The Pfam database notably provides a large collection of HMMs which are widely used for the annotation of proteins in newly sequenced organisms. In Pfam, each domain family is represented by a curated multiple sequence alignment from which a profile HMM is built. In spite of their high specificity, HMMs may lack sensitivity when searching for domains in divergent organisms. This is particularly the case for species with a biased amino-acid composition, such as P. falciparum, the main causal agent of human malaria. In this context, fitting HMMs to the specificities of the target proteome can help identify additional domains. Results Using P. falciparum as an example, we compare approaches that have been proposed for this problem, and present two alternative methods. Because previous attempts strongly rely on known domain occurrences in the target species or its close relatives, they mainly improve the detection of domains which belong to already identified families. Our methods learn global correction rules that adjust amino-acid distributions associated with the match states of HMMs. These rules are applied to all match states of the whole HMM library, thus enabling the detection of domains from previously absent families. Additionally, we propose a procedure to estimate the proportion of false positives among the newly discovered domains. Starting with the Pfam standard library, we build several new libraries with the different HMM-fitting approaches. These libraries are first used to detect new domain occurrences with low E-values. Second, by applying the Co-Occurrence Domain Discovery (CODD) procedure we have recently proposed, the libraries are further used to identify likely occurrences among potential domains with higher E-values. Conclusion We show that the new approaches allow identification of several domain families previously absent in
Reike, Dennis; Schwarz, Wolf
2016-01-01
The time required to determine the larger of 2 digits decreases with their numerical distance, and, for a given distance, increases with their magnitude (Moyer & Landauer, 1967). One detailed quantitative framework to account for these effects is provided by random walk models. These chronometric models describe how number-related noisy…
Jbabdi, Saad; Sotiropoulos, Stamatios N; Savio, Alexander M; Graña, Manuel; Behrens, Timothy EJ
2012-01-01
In this article, we highlight an issue that arises when using multiple b-values in a model-based analysis of diffusion MR data for tractography. The non-mono-exponential decay, commonly observed in experimental data, is shown to induce over-fitting in the distribution of fibre orientations when not considered in the model. Extra fibre orientations perpendicular to the main orientation arise to compensate for the slower apparent signal decay at higher b-values. We propose a simple extension to the ball and stick model based on a continuous Gamma distribution of diffusivities, which significantly improves the fitting and reduces the over-fitting. Using in-vivo experimental data, we show that this model outperforms a simpler, noise floor model, especially at the interfaces between brain tissues, suggesting that partial volume effects are a major cause of the observed non-mono-exponential decay. This model may be helpful for future data acquisition strategies that may attempt to combine multiple shells to improve estimates of fibre orientations in white matter and near the cortex. PMID:22334356
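The Gamma extension described above has a convenient closed form: integrating exp(−bd) over diffusivities d ~ Gamma(k, θ) gives S/S0 = (1 + bθ)^(−k). A toy comparison against a mono-exponential fit (the b-values, parameters, and noise level are invented, and this sketch is not the authors' full ball-and-stick model):

```python
import numpy as np
from scipy.optimize import curve_fit

# Multi-shell signal generated from a Gamma distribution of diffusivities:
# the Laplace transform of a Gamma(k, theta) density is (1 + b*theta)^(-k).
b = np.array([0.0, 500.0, 1000.0, 2000.0, 3000.0, 5000.0]) * 1e-3  # toy units
k_true, theta_true = 3.0, 0.4
rng = np.random.default_rng(4)
signal = (1.0 + b * theta_true) ** (-k_true) + rng.normal(0, 0.005, b.size)

def mono_exp(b, d):
    return np.exp(-b * d)

def gamma_model(b, k, theta):
    return (1.0 + b * theta) ** (-k)

popt_m, _ = curve_fit(mono_exp, b, signal, p0=[1.0])
popt_g, _ = curve_fit(gamma_model, b, signal, p0=[1.0, 1.0],
                      bounds=([0.01, 0.01], [20.0, 10.0]))
sse_mono = np.sum((signal - mono_exp(b, *popt_m)) ** 2)
sse_gamma = np.sum((signal - gamma_model(b, *popt_g)) ** 2)
```

The mono-exponential cannot reproduce the slower apparent decay at high b-values, so its residual error stays well above that of the Gamma model, mirroring the over-fitting pressure the abstract describes.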
Directory of Open Access Journals (Sweden)
Liyun Su
2012-01-01
Full Text Available We introduce an extension of local polynomial fitting to the linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function; then the coefficients of the regression model are obtained using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the nonparametric technique of local polynomial estimation, we do not need to know the heteroscedastic function, and we can therefore improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we compare the parameter estimates and reach an optimal fit, and we verify the asymptotic normality of the parameters through numerical simulations. Finally, the approach is applied to a case in economics, indicating that our method is effective in finite-sample situations.
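The two-stage scheme can be sketched directly: an OLS pass, a local-linear kernel smooth of the squared residuals to estimate the variance function, then generalized least squares with the estimated weights. All data values and the bandwidth below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
x = np.sort(rng.uniform(0.0, 1.0, n))
sigma = 0.2 + 1.0 * x                       # noise level grows with x
y = 1.0 + 2.0 * x + sigma * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

# Stage 1: ordinary least squares, then squared residuals
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = (y - X @ beta_ols) ** 2

# Stage 2: local-linear (Gaussian-kernel) smooth of the squared residuals
def local_var(x0, h=0.15):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    Z = np.column_stack([np.ones(n), x - x0])
    coef = np.linalg.solve(Z.T @ (w[:, None] * Z), Z.T @ (w * r2))
    return coef[0]                           # fitted variance at x0

var_hat = np.clip([local_var(xi) for xi in x], 1e-3, None)

# Generalized least squares with the estimated variance function
W = 1.0 / var_hat
beta_gls = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
```

No parametric form for the variance function is assumed at any point, which is the feature that lets the method skip a formal heteroscedasticity test.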
International Nuclear Information System (INIS)
Fruehwirth, R.
1993-01-01
We present an estimation procedure for the error components in a linear regression model with multiple independent stochastic error contributions. After solving the general problem we apply the results to the estimation of the actual trajectory in track fitting with multiple scattering. (orig.)
Score, pseudo-score and residual diagnostics for goodness-of-fit of spatial point process models
DEFF Research Database (Denmark)
Baddeley, Adrian; Rubak, Ege H.; Møller, Jesper
theoretical support to the established practice of using functional summary statistics such as Ripley’s K-function, when testing for complete spatial randomness; and they provide new tools such as the compensator of the K-function for testing other fitted models. The results also support localisation methods...
Limited-information goodness-of-fit testing of item response theory models for sparse 2^P tables.
Cai, Li; Maydeu-Olivares, Albert; Coffman, Donna L; Thissen, David
2006-05-01
Bartholomew and Leung proposed a limited-information goodness-of-fit test statistic (Y) for models fitted to sparse 2^P contingency tables. The null distribution of Y was approximated using a chi-squared distribution by matching moments. The moments were derived under the assumption that the model parameters were known in advance and it was conjectured that the approximation would also be appropriate when the parameters were to be estimated. Using maximum likelihood estimation of the two-parameter logistic item response theory model, we show that the effect of parameter estimation on the distribution of Y is too large to be ignored. Consequently, we derive the asymptotic moments of Y for maximum likelihood estimation. We show using a simulation study that when the null distribution of Y is approximated using moments that take into account the effect of estimation, Y becomes a very useful statistic to assess the overall goodness of fit of models fitted to sparse 2^P tables.
DEFF Research Database (Denmark)
Ding, Tao; Li, Cheng; Huang, Can
2018-01-01
In order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave...... function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve the global...
A simulation-based goodness-of-fit test for random effects in generalized linear mixed models
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2006-01-01
The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal...... distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution...
Directory of Open Access Journals (Sweden)
Yun Wang
2016-01-01
Full Text Available The Gamma Gaussian inverse Wishart cardinalized probability hypothesis density (GGIW-CPHD) algorithm is widely used to track group targets in the presence of cluttered measurements and missed detections. A multiple-model GGIW-CPHD algorithm based on the best-fitting Gaussian approximation method (BFG) and the strong tracking filter (STF) is proposed to address the growth in tracking error of the GGIW-CPHD algorithm when the group targets are maneuvering. The best-fitting Gaussian approximation method implements the fusion of multiple models, using the strong tracking filter to correct the predicted covariance matrix of the GGIW component. The corresponding likelihood functions are derived to update the probabilities of the multiple tracking models. The simulation results show that the proposed MM-GGIW-CPHD algorithm can effectively deal with the combination/spawning of groups, and the tracking error for group targets in the maneuvering stage is decreased.
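The best-fitting Gaussian (BFG) step rests on moment matching: a mixture of Gaussians is replaced by a single Gaussian carrying the mixture's mean and variance. A minimal one-dimensional sketch (the weights, means, and variances below are invented):

```python
import numpy as np

def best_fit_gaussian(weights, means, variances):
    """Moment-match a 1-D Gaussian mixture with a single Gaussian."""
    weights = np.asarray(weights, float)
    means = np.asarray(means, float)
    variances = np.asarray(variances, float)
    mu = np.sum(weights * means)
    # Law of total variance: E[Var] + Var[E]
    var = np.sum(weights * (variances + means**2)) - mu**2
    return mu, var

# Three hypothetical model hypotheses with probabilities 0.5 / 0.3 / 0.2
mu, var = best_fit_gaussian([0.5, 0.3, 0.2], [0.0, 1.0, -2.0], [1.0, 0.5, 2.0])
```

The matched moments are exact by construction (here mu = −0.1 and var = 2.14), although the single Gaussian of course discards any multimodality of the mixture.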
Chu, A.
2016-12-01
Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work implements three of the homogeneous ETAS models described in Ogata (1998). With a model's log-likelihood function, my software finds the maximum-likelihood estimates (MLEs) of the model's parameters to estimate the homogeneous background rate and the temporal and spatial parameters that govern triggering effects. The EM algorithm is employed for its stability and robustness (Veen and Schoenberg, 2008). My work also presents comparisons among the three models in robustness, convergence speed, and implementation from theory to computing practice. Up-to-date regional seismic data from seismically active areas such as Southern California and Japan are used to demonstrate the comparisons. Data analysis has been done using the computer languages Java and R. Java has the advantages of strong typing and close control of memory resources, while R has the advantage of numerous available functions for statistical computing. Comparisons are also made between the two programming languages in convergence and stability, computational speed, and ease of implementation. Issues that may affect convergence, such as spatial shapes, are discussed.
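The flavour of ETAS estimation can be conveyed with the simplest temporal case, a Hawkes process with exponential triggering, fitted by maximizing the point-process log-likelihood directly (Ogata's models are spatial-temporal and use magnitudes; everything below, including all parameter values, is an invented toy):

```python
import numpy as np
from scipy.optimize import minimize

# Simulate a temporal Hawkes process by its branching representation:
# background events are Poisson(mu*T); each event spawns Poisson(alpha)
# children with Exp(beta)-distributed delays.
rng = np.random.default_rng(6)
mu_true, alpha_true, beta_true, T = 1.0, 0.5, 2.0, 200.0
events = list(rng.uniform(0, T, rng.poisson(mu_true * T)))
frontier = list(events)
while frontier:
    parent = frontier.pop()
    kids = parent + rng.exponential(1 / beta_true, rng.poisson(alpha_true))
    kids = [s for s in kids if s < T]
    events.extend(kids)
    frontier.extend(kids)
t = np.sort(np.array(events))

def neg_log_lik(log_params):
    mu, alpha, beta = np.exp(log_params)       # log-transform keeps params > 0
    diffs = t[:, None] - t[None, :]            # t_i - t_j for all pairs
    decay = np.exp(-beta * np.maximum(diffs, 0.0))
    excite = np.where(diffs > 0, decay, 0.0).sum(axis=1)
    lam = mu + alpha * beta * excite           # conditional intensity at each t_i
    compensator = mu * T + alpha * np.sum(1.0 - np.exp(-beta * (T - t)))
    return compensator - np.sum(np.log(lam))

res = minimize(neg_log_lik, x0=np.log([0.5, 0.3, 1.0]), method="Nelder-Mead",
               options={"maxiter": 4000, "xatol": 1e-6, "fatol": 1e-6})
mu_hat, alpha_hat, beta_hat = np.exp(res.x)
```

Here `alpha` is the branching ratio (mean offspring per event), so the log-likelihood separates into the intensity term at the events and the integrated compensator; the full ETAS case adds magnitude productivity and a spatial kernel on top of this structure.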
Aguilera, Luis U; Zimmer, Christoph; Kummer, Ursula
2017-02-20
Mathematical models are used to gain an integrative understanding of biochemical processes and networks. Commonly the models are based on deterministic ordinary differential equations. When molecular counts are low, stochastic formalisms like Monte Carlo simulations are more appropriate and well established. However, compared to the wealth of computational methods used to fit and analyze deterministic models, there is only little available to quantify the exactness of the fit of stochastic models compared to experimental data or to analyze different aspects of the modeling results. Here, we developed a method to fit stochastic simulations to experimental high-throughput data, meaning data that exhibit distributions. The method uses a comparison of the probability density functions that are computed based on Monte Carlo simulations and the experimental data. Multiple parameter values are iteratively evaluated using optimization routines. The method improves its performance by selecting parameter values after comparing the similarity between the deterministic stability of the system and the modes in the experimental data distribution. As a case study we fitted a model of the IRF7 gene expression circuit to time-course experimental data obtained by flow cytometry. IRF7 shows bimodal dynamics upon IFN stimulation. These dynamics occur due to the switching between active and basal states of the IRF7 promoter. However, the exact molecular mechanisms responsible for the bimodality of IRF7 are not fully understood. Our results allow us to conclude that the activation of the IRF7 promoter by the combination of IRF7 and ISGF3 is sufficient to explain the observed bimodal dynamics.
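The core of the fitting strategy, comparing densities computed from Monte Carlo simulations against the experimental distribution while scanning parameters, can be sketched with a toy stochastic model (a Poisson count model stands in for the simulator; the parameter grid and sample sizes are invented):

```python
import numpy as np

# Hypothetical experimental single-cell counts (here generated from a
# Poisson with rate 6, playing the role of flow-cytometry-like data).
rng = np.random.default_rng(7)
data = rng.poisson(6.0, 5000)

def density(samples, bins):
    hist, _ = np.histogram(samples, bins=bins, density=True)
    return hist

bins = np.arange(0, 21) - 0.5          # integer-centred count bins
data_pdf = density(data, bins)

best, best_dist = None, np.inf
for lam in [2.0, 4.0, 6.0, 8.0, 10.0]:
    sim = rng.poisson(lam, 5000)       # Monte Carlo simulation stand-in
    dist = np.abs(density(sim, bins) - data_pdf).sum()   # L1 density distance
    if dist < best_dist:
        best, best_dist = lam, dist
```

In the paper's setting the simulator is a full stochastic reaction network and the scan is driven by an optimizer rather than a grid, but the objective, a distance between distributions rather than between trajectories, is the same idea.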
Goldstein, Richard A
2013-01-01
The predicted effect of effective population size on the distribution of fitness effects and substitution rate is critically dependent on the relationship between sequence and fitness. This highlights the importance of using models that are informed by the molecular biology, biochemistry, and biophysics of the evolving systems. We describe a computational model based on fundamental aspects of biophysics, the requirement for (most) proteins to be thermodynamically stable. Using this model, we find that differences in population size have minimal impact on the distribution of population-scaled fitness effects, as well as on the rate of molecular evolution. This is because larger populations result in selection for more stable proteins that are less affected by mutations. This reduction in the magnitude of the fitness effects almost exactly cancels the greater selective pressure resulting from the larger population size. Conversely, changes in the population size in either direction cause transient increases in the substitution rate. As differences in population size often correspond to changes in population size, this makes comparisons of substitution rates in different lineages difficult to interpret.
Abrahart, R. J.; Dawson, C. W.; Heppenstall, A. J.; See, L. M.
2009-04-01
The most critical issue in developing a neural network model is generalisation: how well will the preferred solution perform when it is applied to unseen datasets? The reported experiments used far-reaching sequences of model architectures and training periods to investigate the potential damage that could result from the impact of several interrelated items: (i) over-fitting - a machine learning concept related to exceeding some optimal architectural size; (ii) over-training - a machine learning concept related to the amount of adjustment that is applied to a specific model - based on the understanding that too much fine-tuning might result in a model that had accommodated random aspects of its training dataset - items that had no causal relationship to the target function; and (iii) over-parameterisation - a statistical modelling concept that is used to restrict the number of parameters in a model so as to match the information content of its calibration dataset. The last item in this triplet stems from an understanding that excessive computational complexities might permit an absurd and false solution to be fitted to the available material. Numerous feedforward multilayered perceptrons were trialled and tested. Two different methods of model construction were also compared and contrasted: (i) traditional Backpropagation of Error; and (ii) state-of-the-art Symbiotic Adaptive Neuro-Evolution. Modelling solutions were developed using the reported experimental setups of Gaume & Gosset (2003). The models were applied to a near-linear hydrological modelling scenario in which past upstream and past downstream discharge records were used to forecast current discharge at the downstream gauging station [CS1: River Marne]; and a non-linear hydrological modelling scenario in which past river discharge measurements and past local meteorological records (precipitation and evaporation) were used to forecast current discharge at the river gauging station [CS2: Le Sauzay].
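The over-fitting effect discussed above is easy to reproduce outside neural networks: as model capacity grows, training error keeps falling while hold-out error does not. A polynomial-regression sketch on an invented near-linear target:

```python
import numpy as np

# Near-linear target with noise: training error always improves with model
# size, but hold-out error does not.
rng = np.random.default_rng(8)
x_train, x_val = rng.uniform(0, 1, 30), rng.uniform(0, 1, 200)
f = lambda x: 1.0 + 2.0 * x
y_train = f(x_train) + 0.3 * rng.normal(size=30)
y_val = f(x_val) + 0.3 * rng.normal(size=200)

degrees = range(1, 11)
train_err, val_err = [], []
for deg in degrees:
    coef = np.polyfit(x_train, y_train, deg)      # capacity grows with degree
    train_err.append(np.mean((np.polyval(coef, x_train) - y_train) ** 2))
    val_err.append(np.mean((np.polyval(coef, x_val) - y_val) ** 2))
```

Because the polynomial families are nested, the training error is (numerically) non-increasing in the degree, while the validation error is minimized at a small degree matching the near-linear target, the same gap the reported experiments probe with architecture size and training length.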
The PX-EM algorithm for fast stable fitting of Henderson's mixed model
Directory of Open Access Journals (Sweden)
Van Dyk David A
2000-03-01
This paper presents procedures for implementing the PX-EM algorithm of Liu, Rubin and Wu to compute REML estimates of variance-covariance components in Henderson's linear mixed models. The class of models considered encompasses several correlated random factors having the same vector length, e.g., as in random regression models for longitudinal data analysis and in sire-maternal grandsire models for genetic evaluation. Numerical examples are presented to illustrate the procedures. Much better results in terms of convergence characteristics (number of iterations and time required for convergence) are obtained for PX-EM relative to the basic EM algorithm in the random regression models.
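The basic EM algorithm that PX-EM accelerates can be sketched for the simplest mixed model, a one-way random-effects model. This is a minimal illustration with synthetic data using ML updates, not the REML procedure for Henderson's model described in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic one-way random-effects data: y_ij = mu + a_i + e_ij
n_groups, n_per = 50, 8
true_mu, true_sa2, true_se2 = 10.0, 4.0, 1.0
a = rng.normal(0, np.sqrt(true_sa2), n_groups)
y = true_mu + a[:, None] + rng.normal(0, np.sqrt(true_se2), (n_groups, n_per))

mu, sa2, se2 = y.mean(), 1.0, 1.0
for _ in range(200):
    # E-step: posterior mean and variance of each random effect a_i
    shrink = n_per * sa2 / (n_per * sa2 + se2)
    b = shrink * (y.mean(axis=1) - mu)           # E[a_i | y]
    v = sa2 * se2 / (n_per * sa2 + se2)          # Var[a_i | y]
    # M-step: update parameters from expected complete-data sufficient statistics
    mu = (y - b[:, None]).mean()
    sa2 = np.mean(b**2 + v)
    se2 = np.mean((y - mu - b[:, None])**2 + v)
```

PX-EM augments this scheme with an expansion parameter that rescales the random effects at each M-step, which is what yields the faster convergence reported in the abstract.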
Fitting macroevolutionary models to phylogenies: an example using vertebrate body sizes
Mooers, Arne Ø.; Schluter, Dolph
1998-01-01
How do traits change through time and with speciation? We present a simple and generally applicable method for comparing various models of the macroevolution of traits within a maximum likelihood framework. We illustrate four such models: 1) variance among species accumulates in direct proportion to
de Vries, S O; Fidler, Vaclav; Kuipers, Wietze D; Hunink, Maria G M
1998-01-01
The purpose of this study was to develop a model that predicts the outcome of supervised exercise for intermittent claudication. The authors present an example of the use of autoregressive logistic regression for modeling observed longitudinal data. Data were collected from 329 participants in a
Tarrés, J; Fina, M; Piedrafita, J
2010-09-01
The aim of this study was to compare the goodness of fit of the threshold models with homoscedasticity or heteroscedasticity and the grouped data model for the analysis of calving ease in beef cattle by using a parametric bootstrap procedure. Field data included 8,205 records of the Bruna dels Pirineus beef cattle breed in the Pyrenean mountain areas of Catalonia (Spain). The actual distribution was 81.81% of calvings without assistance, 11.02% slightly assisted by the farmer, 5.12% strongly assisted by the farmer, 0.89% assisted by the veterinarian, and 1.16% cesarean, but these percentages varied widely across herds. This can be explained partially by each farmer's subjective way of scoring. Primiparous cows had greater (P < 0.001) calving difficulty than cows with 5 or more parities (11.74 vs. 4.49% of calvings strongly assisted by the farmer or the veterinarian and 2.8 vs. 0.65% cesarean). Male calves caused greater (P < 0.001) calving difficulty than females (7.71% of male calvings strongly assisted by the farmer or the veterinarian vs. 4.25% of females, and 1.83% cesarean in males vs. 0.47% in females). The month and year of calving also had a strong influence on calving ease. These data were analyzed using 3 different models: the threshold models with homoscedasticity or heteroscedasticity and the grouped data model. The bootstrap comparison among models suggested that the threshold models, even allowing for heteroscedasticity, did not fit the herd effects well. In contrast, fitting deficiencies were not observed for the grouped data model in any factor. The variance of the direct effect of the calf was estimated using the 3 models, and the heritability estimate ranged from 0.165 for the grouped data model to 0.185 for the heteroscedastic threshold model. This heritability was moderate, but it would justify the inclusion of direct effects of the calf on calving ease in the breeding objective. Overall, results highlighted the
Casellas, J; Tarrés, J; Piedrafita, J; Varona, L
2006-10-01
Given that correct assumptions on the baseline survival function are decisive for the validity of further inferences, specific tools to test the fit of a model to real data become essential in proportional hazards models. In this sense, we have proposed a parametric bootstrap to test the fit of survival models. Monte Carlo simulations are used to generate new data sets from the estimates obtained through the assumed models, and then bootstrap intervals can be established for the survival function along the time space studied. Significant fitting deficiencies are revealed when the real survival function is not included within the bootstrap interval. We tested this procedure in a survival data set of Bruna dels Pirineus beef calves, assuming 4 parametric models (exponential, Weibull, exponential time-dependent, Weibull time-dependent) and the Cox's semiparametric model. Fitting deficiencies were not observed for the Cox's model and the exponential time-dependent model, whereas the Weibull time-dependent model suffered from moderate overestimation at different ages. Thus, the exponential time-dependent model appears to be preferable because of its correct fit for survival data of beef calves and its smaller computational and time requirements. Exponential and Weibull models were completely rejected due to the continuous over- and underestimation of the survival probability reported. Results here highlighted the flexibility of parametric models with time-dependent effects, achieving a fit comparable to nonparametric models.
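The parametric bootstrap idea described above can be sketched in a few lines (a minimal illustration under assumed distributions and with no censoring, not the authors' procedure): simulate data sets from the fitted model, form an envelope for the survival curve, and flag fitting deficiencies where the empirical curve falls outside it. Here an exponential model is deliberately fitted to Weibull data so that a misfit appears:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.weibull(1.5, 300) * 100.0     # "real" survival times (Weibull, not exponential)

# Fit an exponential model by maximum likelihood: rate = 1 / mean
rate = 1.0 / data.mean()
grid = np.linspace(1, 250, 50)

def ecdf_surv(t, times):
    """Empirical survival function evaluated on a grid of times t."""
    return np.mean(times[None, :] > t[:, None], axis=1)

# Parametric bootstrap: simulate from the fitted exponential and build an
# envelope for the survival curve implied by the assumed model
sims = np.array([ecdf_surv(grid, rng.exponential(1.0 / rate, data.size))
                 for _ in range(500)])
lo, hi = np.percentile(sims, [2.5, 97.5], axis=0)
observed = ecdf_surv(grid, data)
outside = np.mean((observed < lo) | (observed > hi))  # fraction of grid outside the band
```

A substantial fraction of the observed curve falling outside the bootstrap envelope signals that the assumed baseline model should be rejected, which mirrors how the exponential and Weibull models were rejected in the study.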
Fitness effects of beneficial mutations: the mutational landscape model in experimental evolution
DEFF Research Database (Denmark)
Betancourt, Andrea J.; Bollback, Jonathan Paul
2006-01-01
of beneficial mutations should be roughly exponentially distributed. The prediction appears to be borne out by most of these studies, at least qualitatively. Another study showed that a modified version of the model was able to predict, with reasonable accuracy, which of a ranked set of beneficial alleles… will be fixed next. Although it remains to be seen whether the mutational landscape model adequately describes adaptation in organisms other than microbes, together these studies suggest that adaptive evolution has surprisingly general properties that can be successfully captured by theoretical models…
Alcalá-Quintana, Rocío; García-Pérez, Miguel A
2013-12-01
Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.
Johnson, T. J.; Harding, A. K.; Venter, C.
2012-01-01
Pulsed gamma rays have been detected with the Fermi Large Area Telescope (LAT) from more than 20 millisecond pulsars (MSPs), some of which were discovered in radio observations of bright, unassociated LAT sources. We have fit the radio and gamma-ray light curves of 19 LAT-detected MSPs in the context of geometric, outer magnetospheric emission models assuming the retarded vacuum dipole magnetic field using a Markov chain Monte Carlo maximum likelihood technique. We find that, in many cases, the models are able to reproduce the observed light curves well and provide constraints on the viewing geometries that are in agreement with those from radio polarization measurements. Additionally, for some MSPs we constrain the altitudes of both the gamma-ray and radio emission regions. The best-fit magnetic inclination angles are found to cover a broader range than those of non-recycled gamma-ray pulsars.
Czech Academy of Sciences Publication Activity Database
Suda, Jan; Herben, Tomáš
2013-01-01
Roč. 280, č. 1751 (2013), no.20122387 ISSN 0962-8452 Institutional support: RVO:67985939 Keywords: cytometry * statistical modelling * polyploidy Subject RIV: EF - Botanics Impact factor: 5.292, year: 2013
Dubno, Judy R
2018-05-01
This manuscript provides a Commentary on a paper published in the current issue of the International Journal of Audiology and the companion paper published in Ear and Hearing by Soli et al. These papers report background, rationale and results of a novel modelling approach to assess "auditory fitness for duty," or an individual's ability to perform hearing-critical tasks related to their job, based on their likelihood of effective speech communication in the listening environment in which the task is performed.
Helgesson, P.; Sjöstrand, H.
2017-11-01
Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or has a model defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
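One standard way to combine least-squares fitting with a prior distribution, as discussed in this abstract, is to append prior residuals to the data residuals so that a Levenberg-Marquardt solver minimizes both together. A minimal sketch with a hypothetical single Gaussian peak (not the paper's three-peak histogram), using SciPy's LM implementation:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x = np.linspace(-5, 5, 200)
true = np.array([3.0, 0.5, 1.2])   # amplitude, centre, width (illustrative values)
y = true[0] * np.exp(-0.5 * ((x - true[1]) / true[2])**2) + rng.normal(0, 0.1, x.size)
sigma_y = 0.1                      # known measurement uncertainty

# Gaussian prior on the parameters (mean and standard deviation)
prior_mu = np.array([2.5, 0.0, 1.0])
prior_sd = np.array([1.0, 1.0, 0.5])

def residuals(p):
    model = p[0] * np.exp(-0.5 * ((x - p[1]) / p[2])**2)
    data_res = (y - model) / sigma_y
    prior_res = (p - prior_mu) / prior_sd   # prior enters as extra residual terms
    return np.concatenate([data_res, prior_res])

fit = least_squares(residuals, prior_mu, method="lm")
```

Minimizing the augmented residual vector is equivalent to maximizing the posterior under Gaussian noise and a Gaussian prior, which is the sense in which the abstract speaks of "a prior distribution for the parameters" within a least-squares framework.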
Diamond, Joshua M.
2016-01-01
The conserved nature of sleep in Drosophila has allowed the fruit fly to emerge in the last decade as a powerful model organism in which to study sleep. Recent sleep studies in Drosophila have focused on the discovery and characterization of hyposomnolent mutants. One common feature of these animals is a change in sleep architecture: sleep bout count tends to be greater, and sleep bout length lower, in hyposomnolent mutants. I propose a mathematical model, produced by least-squares nonlinear ...
2017-08-01
Final report covering March 2016 - March 2017. ... models does not take into consideration that a set of experimental data points may require more than one type of model to fit the entire data set. That is ... experimental data sets that consisted of different temperatures and pH values for an analyte were overlaid directly onto the blueprint platform (eight
Fitting a Turbulent Cloud Model to CO Observations of Starless Bok Globules
Hegmann, M.; Hengel, C.; Röllig, M.; Kegel, W. H.
We present observations of five starless Bok globules in transitions of 12CO (J=2-1 and {J=3-2}), 13CO (J=2-1), and C18O (J=2-1) which have been obtained at the Heinrich-Hertz-Telescope. For an analysis of the data we use the model of Kegel et al. (see e.g. Piehler & Kegel 1995, A&A 297, 841; Hegmann & Kegel 2000, A&A 359, 405) which describes an isothermal sphere stabilized by turbulent and thermal pressure. This approach deals with the full NLTE radiative transfer problem and accounts for a turbulent velocity field with finite correlation length. By a comparison of observed and calculated line profiles we are able not only to determine the kinetic temperature, hydrogen density and CO column density of the globules, but also to study the properties of the turbulent velocity field, i.e. the variance of its one-point distribution and its correlation length. We consider our model to be an alternative tool for the evaluation of molecular lines emitted by molecular clouds. The model assumptions are certainly closer to reality than the assumptions behind the standard evaluation models, as for example the LVG model. Our current study shows that the results obtained from our model can differ significantly from those obtained from a LVG analysis.
Fitting identity in the reasoned action framework: A meta-analysis and model comparison.
Paquin, Ryan S; Keating, David M
2017-01-01
Several competing models have been put forth regarding the role of identity in the reasoned action framework. The standard model proposes that identity is a background variable. Under a typical augmented model, identity is treated as an additional direct predictor of intention and behavior. Alternatively, it has been proposed that identity measures are inadvertent indicators of an underlying intention factor (e.g., a manifest-intention model). In order to test these competing hypotheses, we used data from 73 independent studies (total N = 23,917) to conduct a series of meta-analytic structural equation models. We also tested for moderation effects based on whether there was a match between identity constructs and the target behaviors examined (e.g., if the study examined a "smoker identity" and "smoking behavior," there would be a match; if the study examined a "health conscious identity" and "smoking behavior," there would not be a match). Average effects among primary reasoned action variables were all substantial, rs = .37-.69. Results gave evidence for the manifest-intention model over the other explanations, and a moderation effect by identity-behavior matching.
SDSS-II: Determination of shape and color parameter coefficients for SALT-II fit model
Energy Technology Data Exchange (ETDEWEB)
Dojcsak, L.; Marriner, J.; /Fermilab
2010-08-01
In this study we look at the SALT-II model of Type Ia supernova analysis, which determines the distance moduli based on the known absolute standard candle magnitude of the Type Ia supernovae. We take a look at the determination of the shape and color parameter coefficients, α and β respectively, in the SALT-II model with the intrinsic error that is determined from the data. Using the SNANA software package provided for the analysis of Type Ia supernovae, we use a standard Monte Carlo simulation to generate data with known parameters to use as a tool for analyzing the trends in the model based on certain assumptions about the intrinsic error. In order to find the best standard candle model, we try to minimize the residuals on the Hubble diagram by calculating the correct shape and color parameter coefficients. We can estimate the magnitude of the intrinsic errors required to obtain results with χ²/degree of freedom = 1. We can use the simulation to estimate the amount of color smearing as indicated by the data for our model. We find that the color smearing model works as a general estimate of the color smearing, and that we are able to use the RMS distribution in the variables as one method of estimating the correct intrinsic errors needed by the data to obtain the correct results for α and β. We then apply the resultant intrinsic error matrix to the real data and show our results.
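The intrinsic-error idea, tuning an added scatter term until χ²/degree of freedom = 1, can be sketched generically (synthetic Hubble residuals; this is not the SNANA/SALT-II implementation, and all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
meas_err = rng.uniform(0.08, 0.15, n)      # per-supernova measurement errors (mag)
sig_int_true = 0.12                        # intrinsic scatter to recover
resid = rng.normal(0, np.sqrt(meas_err**2 + sig_int_true**2))

def chi2_dof(sig_int):
    """Chi-square per degree of freedom of a constant fit to the residuals."""
    w = 1.0 / (meas_err**2 + sig_int**2)
    mu = np.sum(w * resid) / np.sum(w)     # weighted-mean residual
    return np.sum(w * (resid - mu)**2) / (n - 1)

# chi2/dof decreases as sig_int grows, so bisect until it equals 1
lo_s, hi_s = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo_s + hi_s)
    lo_s, hi_s = (mid, hi_s) if chi2_dof(mid) > 1.0 else (lo_s, mid)
sig_int_est = 0.5 * (lo_s + hi_s)
```

Because χ²/dof is monotone decreasing in the added scatter, simple bisection recovers the intrinsic error that makes the fit statistically consistent with the data.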
Wörz, Stefan; Rohr, Karl
2006-02-01
We introduce a new approach for the localization of 3D anatomical point landmarks. This approach is based on 3D parametric intensity models which are directly fitted to 3D images. To efficiently model tip-like, saddle-like, and sphere-like anatomical structures we introduce analytic intensity models based on the Gaussian error function in conjunction with 3D rigid transformations as well as deformations. To select a suitable size of the region-of-interest (ROI) where model fitting is performed, we also propose a new scheme for automatic selection of an optimal 3D ROI size based on the dominant gradient direction. In addition, to achieve a higher level of automation we present an algorithm for automatic initialization of the model parameters. Our approach has been successfully applied to accurately localize anatomical landmarks in 3D synthetic data as well as 3D MR and 3D CT image data. We have also compared the experimental results with the results of a previously proposed 3D differential approach. It turns out that the new approach significantly improves the localization accuracy.
Bhatnagar, Tarun; Dutta, Tapati; Stover, John; Godbole, Sheela; Sahu, Damodar; Boopathi, Kangusamy; Bembalkar, Shilpa; Singh, Kh Jitenkumar; Goyal, Rajat; Pandey, Arvind; Mehendale, Sanjay M
2016-01-01
Models are designed to provide evidence for strategic program planning by examining the impact of different interventions on projected HIV incidence. We employed the Goals Model to fit the HIV epidemic curves in the Andhra Pradesh, Maharashtra and Tamil Nadu states of India, where the HIV epidemic is considered to have matured and to be in a declining phase. Input data in the Goals Model consisted of demographic, epidemiological, transmission-related and risk-group-specific behavioral parameters. The HIV prevalence curves generated in the Goals Model for each risk group in the three states were compared with the epidemic curves generated by the Estimation and Projection Package (EPP) that the national program is routinely using. In all the three states, the HIV prevalence trends for high-risk populations simulated by the Goals Model matched well with those derived using state-level HIV surveillance data in the EPP. However, trends for the low- and medium-risk populations differed between the two models. This highlights the need to generate more representative and robust data in these sub-populations and consider some structural changes in the modeling equation and parameters in the Goals Model to effectively use it to assess the impact of future strategies of HIV control in various sub-populations in India at the sub-national level.
Loeb, M L G; Zink, A G
2006-05-01
Individuals within complex social groups often experience reduced reproduction owing to coercive or suppressive actions of other group members. However, the nature of social and ecological environments that favour individual acceptance of such costs of sociality is not well understood. Taxa with short periods of direct social interaction, such as some communal egg layers, are interesting models for studying the cost of social interaction because opportunities to control reproduction of others are limited to brief periods of reproduction. To understand the conditions under which communal egg layers are in fitness conflict and thus likely to influence each other's reproduction, we develop an optimality model involving a brood-guarding 'host' and a nonguarding disperser, or 'egg dumper'. The model shows that, where intermediate-sized broods have the highest survival, lifetime inclusive fitnesses of hosts and dumpers are often optimized with different numbers of dumped eggs. We hypothesize that resolution of this conflict may involve attempts by one party to manipulate the other's reproduction. To test model predictions we used a lace bug (Heteroptera: Tingidae) that shows both hosts and egg dumpers as well as increased offspring survival in response to communal egg laying. We found that egg-dumping lace bugs oviposit a number of eggs that very closely matches the predicted fitness optimum for hosts rather than the predicted optimum for dumpers. This result suggests that dumpers pay a social cost for communal egg laying, a cost that may occur through host suppression of dumper reproduction. Although dumper allocation of eggs is thus sub-optimal for dumpers, previous models show that the decision to egg dump is nevertheless evolutionarily stable, possibly because hosts permit just enough dumper oviposition to encourage commitment to the behaviour.
Fitting three-level meta-analytic models in R: A step-by-step tutorial
Directory of Open Access Journals (Sweden)
Assink, Mark
2016-10-01
Applying a multilevel approach to meta-analysis is a strong method for dealing with dependency of effect sizes. However, this method is relatively unknown among researchers and, to date, has not been widely used in meta-analytic research. Therefore, the purpose of this tutorial was to show how a three-level random effects model can be applied to meta-analytic models in R using the rma.mv function of the metafor package. This application is illustrated by taking the reader through a step-by-step guide to the multilevel analyses comprising the steps of (1) organizing a data file; (2) setting up the R environment; (3) calculating an overall effect; (4) examining heterogeneity of within-study variance and between-study variance; (5) performing categorical and continuous moderator analyses; and (6) examining a multiple moderator model. By example, the authors demonstrate how the multilevel approach can be applied to meta-analytically examining the association between mental health disorders of juveniles and juvenile offender recidivism. In our opinion, the rma.mv function of the metafor package provides an easy and flexible way of applying a multilevel structure to meta-analytic models in R. Further, the multilevel meta-analytic models can be easily extended so that the potential moderating influence of variables can be examined.
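The tutorial's three-level model (known sampling variance, within-study variance, between-study variance) is fitted in R with metafor's rma.mv; as a language-neutral sketch, the same marginal likelihood can be maximized directly. This is a toy ML (not REML) illustration with synthetic data, not the tutorial's procedure:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_study, n_es = 30, 4                        # 30 studies, 4 effect sizes each
tau2_b, tau2_w, mu_true = 0.10, 0.05, 0.4
study = np.repeat(np.arange(n_study), n_es)
v = rng.uniform(0.01, 0.03, study.size)      # known sampling variances
y = (mu_true + rng.normal(0, np.sqrt(tau2_b), n_study)[study]
     + rng.normal(0, np.sqrt(tau2_w), study.size)
     + rng.normal(0, np.sqrt(v)))

same_study = (study[:, None] == study[None, :]).astype(float)

def negloglik(log_tau2):
    """Profile negative log-likelihood over the two variance components."""
    t2b, t2w = np.exp(log_tau2)
    V = np.diag(v + t2w) + t2b * same_study  # marginal covariance of the effects
    Vinv = np.linalg.inv(V)
    mu = Vinv.sum(axis=0) @ y / Vinv.sum()   # GLS overall effect
    r = y - mu
    sign, logdet = np.linalg.slogdet(V)
    return 0.5 * (logdet + r @ Vinv @ r)

res = minimize(negloglik, np.log([0.05, 0.05]), method="Nelder-Mead")
t2b_hat, t2w_hat = np.exp(res.x)
V = np.diag(v + t2w_hat) + t2b_hat * same_study
Vinv = np.linalg.inv(V)
mu_hat = Vinv.sum(axis=0) @ y / Vinv.sum()
```

Separating the diagonal within-study term from the block between-study term is exactly the dependency structure that rma.mv's `random = ~ 1 | study/effect` formula encodes.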
Do telemonitoring projects of heart failure fit the Chronic Care Model?
Willemse, Evi; Adriaenssens, Jef; Dilles, Tinne; Remmen, Roy
2014-07-01
This study describes the characteristics of extramural and transmural telemonitoring projects on chronic heart failure in Belgium, and describes to what extent these telemonitoring projects coincide with the Chronic Care Model of Wagner. The Chronic Care Model describes essential components for high-quality health care. Telemonitoring can be used to optimise home care for chronic heart failure, and offers a potential avenue for changing the current organisation of care. This qualitative study describes seven non-invasive home-care telemonitoring projects in patients with heart failure in Belgium. A qualitative design, including interviews and literature review, was used to describe the correspondence of these home-care telemonitoring projects with the dimensions of the Chronic Care Model. The projects were situated in primary and secondary health care. Their primary goal was to reduce the number of readmissions for chronic heart failure. None of these projects succeeded in a final implementation of telemonitoring in home care after the pilot phase. Not all the projects were initiated to accomplish all of the dimensions of the Chronic Care Model. A central role for the patient was sparse. Limited financial resources hampered continuation after the pilot phase. Cooperation and coordination in telemonitoring appear to be major barriers but are, within primary care as well as between levels of care, important links in follow-up. This discrepancy can be prohibitive for deployment of good chronic care. The Chronic Care Model is recommended as a basis for the future.
Suttinger, Matthew; Go, Rowel; Figueiredo, Pedro; Todi, Ankesh; Shu, Hong; Leshin, Jason; Lyakh, Arkadiy
2018-01-01
Experimental and model results for 15-stage broad area quantum cascade lasers (QCLs) are presented. Continuous wave (CW) power scaling from 1.62 to 2.34 W has been experimentally demonstrated for 3.15-mm long, high reflection-coated QCLs for an active region width increased from 10 to 20 μm. A semiempirical model for broad area devices operating in CW mode is presented. The model uses measured pulsed transparency current, injection efficiency, waveguide losses, and differential gain as input parameters. It also takes into account active region self-heating and sublinearity of pulsed power versus current laser characteristic. The model predicts that an 11% improvement in maximum CW power and increased wall-plug efficiency can be achieved from 3.15 mm×25 μm devices with 21 stages of the same design, but half doping in the active region. For a 16-stage design with a reduced stage thickness of 300 Å, pulsed rollover current density of 6 kA/cm², and InGaAs waveguide layers, an optical power increase of 41% is projected. Finally, the model projects that power level can be increased to ~4.5 W from 3.15 mm×31 μm devices with the baseline configuration with T0 increased from 140 K for the present design to 250 K.
Asymptotic distribution for goodness-of-fit statistics in a sequence of multinomial models
Czech Academy of Sciences Publication Activity Database
Vajda, Igor; Gyorfi, L.
2002-01-01
Roč. 56, č. 1 (2002), s. 57-67 ISSN 0167-7152 R&D Projects: GA AV ČR IAA1075101 Institutional research plan: CEZ:AV0Z1075907 Keywords: goodness-of-fit statistics * disparity statistics * goodness-of-fit tests Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.364, year: 2002
Harring, Jeffrey R; Blozis, Shelley A
2014-06-01
Nonlinear mixed-effects (NLME) models remain popular among practitioners for analyzing continuous repeated measures data taken on each of a number of individuals when interest centers on characterizing individual-specific change. Within this framework, variation and correlation among the repeated measurements may be partitioned into interindividual variation and intraindividual variation components. The covariance structure of the residuals is, in many applications, constrained to be independent with homogeneous variances, [Formula: see text], not because it is believed that intraindividual variation adheres to this structure, but because many software programs that estimate parameters of such models are not well-equipped to handle other, possibly more realistic, patterns. In this article, we describe how the programmatic environment within SAS may be utilized to model residual structures for serial correlation and variance heterogeneity. An empirical example is used to illustrate the capabilities of the module.
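One residual structure of the kind discussed above, serial correlation with heterogeneous variances (a heterogeneous AR(1) pattern), corresponds to a simple covariance matrix that can be built directly. This is a sketch of the structure itself, not of the SAS syntax:

```python
import numpy as np

def ar1_hetero_cov(sds, rho):
    """Heterogeneous AR(1) residual covariance: R[i, j] = sds[i]*sds[j]*rho**|i-j|."""
    idx = np.arange(len(sds))
    return np.outer(sds, sds) * rho ** np.abs(idx[:, None] - idx[None, :])

# Four repeated measures with growing residual SD and correlation 0.6 per lag
R = ar1_hetero_cov(np.array([1.0, 1.5, 2.0, 2.5]), rho=0.6)
```

Correlation decays geometrically with the lag between measurement occasions while each occasion keeps its own variance, which is exactly what a homogeneous independent structure cannot capture.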
Fitting and interpreting continuous-time latent Markov models for panel data.
Lange, Jane M; Minin, Vladimir N
2013-11-20
Multistate models characterize disease processes within an individual. Clinical studies often observe the disease status of individuals at discrete time points, making exact times of transitions between disease states unknown. Such panel data pose considerable modeling challenges. If the disease process is assumed to be Markovian, a standard continuous-time Markov chain (CTMC) yields tractable likelihoods, but the assumption of exponential sojourn time distributions is typically unrealistic. More flexible semi-Markov models permit generic sojourn distributions yet yield intractable likelihoods for panel data in the presence of reversible transitions. One attractive alternative is to assume that the disease process is characterized by an underlying latent CTMC, with multiple latent states mapping to each disease state. These models retain analytic tractability due to the CTMC framework but allow for flexible, duration-dependent disease state sojourn distributions. We have developed a robust and efficient expectation-maximization algorithm in this context. Our complete data state space consists of the observed data and the underlying latent trajectory, yielding computationally efficient expectation and maximization steps. Our algorithm outperforms alternative methods measured in terms of time to convergence and robustness. We also examine the frequentist performance of latent CTMC point and interval estimates of disease process functionals based on simulated data. The performance of estimates depends on time, functional, and data-generating scenario. Finally, we illustrate the interpretive power of latent CTMC models for describing disease processes on a dataset of lung transplant patients. We hope our work will encourage wider use of these models in the biomedical setting. Copyright © 2013 John Wiley & Sons, Ltd.
Fitting inverse power-law quintessence models using the SNAP satellite
International Nuclear Information System (INIS)
Eriksson, Martin; Amanullah, Rahman
2002-01-01
We investigate the possibility of using the proposed SNAP satellite in combination with low-z supernova searches to distinguish between different inverse power-law quintessence models. If the true model is that of a cosmological constant, we determine the prospects of ruling out the inverse power-law potential. We show that SNAP combined with e.g., the SNfactory and an independent measurement of the mass energy density to 17% accuracy can distinguish between an inverse power-law potential and a cosmological constant and put severe constraints on the power-law exponent
International Nuclear Information System (INIS)
Rancati, T.; Fiorino, C.; Gagliardi, G.; Cattaneo, G.M.; Sanguineti, G.; Borca, V. Casanova; Cozzarini, C.; Fellin, G.; Foppiano, F.; Girelli, G.; Menegotti, L.; Piazzolla, A.; Vavassori, V.; Valdagni, R.
2004-01-01
Background and purpose: Recent investigations demonstrated a significant correlation between rectal dose-volume patterns and late rectal toxicity. The reduction of the DVH to a value expressing the probability of complication would be suitable. To fit different normal tissue complication probability (NTCP) models to clinical outcome on late rectal bleeding after external beam radiotherapy (RT) for prostate cancer. Patients and methods: Dose-volume histograms of the rectum (DVH) and clinical records of 547 prostate cancer patients (pts) pooled from five institutions previously collected and analyzed were considered. All patients were treated in supine position with 3 or 4-field techniques: 123 patients received an ICRU dose between 64 and 70 Gy, 255 patients between 70 and 74 Gy and 169 patients between 74 and 79.2 Gy; 457/547 patients were treated with conformal RT and 203/547 underwent radical prostatectomy before RT. Minimum follow-up was 18 months. Patients were considered as bleeders if showing grade 2/3 late bleeding (slightly modified RTOG/EORTC scoring system) within 18 months after the end of RT. Four NTCP models were considered: (a) the Lyman model with DVH reduced to the equivalent uniform dose (LEUD, coincident with the classical Lyman-Kutcher-Burman, LKB, model), (b) logistic with DVH reduced to EUD (LOGEUD), (c) Poisson coupled to EUD reduction scheme and (d) relative seriality (RS). The parameters for the different models were fit to the patient data using a maximum likelihood analysis. The 68% confidence intervals (CI) of each parameter were also derived. Results: Forty six out of five hundred and forty seven patients experienced grade 2/3 late bleeding: 38/46 developed rectal bleeding within 18 months and were then considered as bleeders. The risk of rectal bleeding can be well calculated with a 'smooth' function of EUD (with a seriality parameter n equal to 0.23 (CI 0.05), best fit result). Using LEUD the relationship between EUD and NTCP can
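The DVH-reduction and Lyman (LKB) model referred to above can be sketched as follows: the generalized EUD collapses a DVH to a single dose using the seriality parameter n, and the NTCP is a probit function of that dose. All numerical values below are illustrative, not the fitted parameters of the study:

```python
import numpy as np
from math import erf, sqrt

def geud(doses, volumes, n):
    """Generalized EUD: (sum v_i * d_i**(1/n))**n, with v_i fractional volumes."""
    v = np.asarray(volumes) / np.sum(volumes)
    a = 1.0 / n
    return float(np.sum(v * np.asarray(doses)**a) ** n)

def lkb_ntcp(eud, td50, m):
    """Lyman (LKB) model: NTCP = Phi((EUD - TD50) / (m * TD50))."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Illustrative differential DVH (dose bins in Gy, relative volumes)
doses = np.array([20.0, 40.0, 60.0, 70.0])
vols = np.array([0.4, 0.3, 0.2, 0.1])
eud = geud(doses, vols, n=0.23)   # n = 0.23 as in the best-fit result quoted above
p = lkb_ntcp(eud, td50=80.0, m=0.15)
```

A small n (strong seriality) weights the EUD toward the highest dose bins, which is why a serial organ such as the rectum is well described by n well below 1.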
International Nuclear Information System (INIS)
Syrmalenios, Panayotis
1973-01-01
This research thesis addresses the issue of safety of fast neutron reactors, and more particularly is a contribution to the study of mechanisms of interaction between molten fuel and sodium. It aims at developing tools for predicting the consequences of three main types of accidents: local melting of a fuel rod and contact of the fuel with the surrounding sodium, failure of an assembly due to the melting of several rods and fuel-coolant interaction within the assembly, and fuel-coolant interaction at the level of the reactor core. The author first proposes a bibliographical analysis of experimental and theoretical studies related to this issue of interaction between a hot body and a cold liquid, and of its consequences. Then, he introduces a mathematical model and its resolution method, and reports the use of the associated code (Corfou) for the interpretation of experimental results: expulsion of a cold sodium column by expansion of an overheated sodium mass, melting of a rod by Joule effect, and interaction between UO2 molten by high frequency and liquid sodium. Finally, the author discusses a comparison between the Corfou code and other models currently being developed. [fr]
Nakahara, Shinji; Kawamura, Takashi; Ichikawa, Masao; Wakai, Susumu
2006-01-01
Previous research has indicated that unbelted drivers are at higher risk of involvement in fatal crashes than belted drivers, suggesting selective recruitment: high-risk drivers are unlikely to become belt users. However, how the risk of involvement in fatal crashes among unbelted drivers varies according to the level of seat belt use among general drivers has yet to be clearly quantified. We therefore developed mathematical models describing the risk of fatal crashes in relation to seat belt use among the general public, and explored how well these models fitted changes in driver mortality and in observed seat belt use, using Japanese data. Mortality data between 1979 and 1994 were obtained from vital statistics, and mortality data in the daytime and nighttime between 1980 and 2001 and belt use data between 1979 and 2001 were obtained from the National Police Agency. Regardless of the data set analyzed, exponential models, which assume that high-risk drivers gradually become belt users in order of increasing risk as seat belt use among general motorists reaches high levels, showed the best fit. Our models provide an insight into behavioral changes among high-risk drivers and support the selective recruitment hypothesis.
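An exponential relation of the kind favoured in this record can be fitted by simple log-linear least squares. The (belt-use proportion, relative risk) pairs below are made-up illustrative values, not the Japanese data:

```python
import math

def fit_exponential(belt_use, rel_risk):
    """Least-squares fit of r = a * exp(b * p) via linear regression on ln r."""
    ys = [math.log(r) for r in rel_risk]
    n = len(belt_use)
    mx = sum(belt_use) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in belt_use)
    sxy = sum((x - mx) * (y - my) for x, y in zip(belt_use, ys))
    b = sxy / sxx
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical series: relative risk of unbelted drivers rising with belt use
p = [0.2, 0.4, 0.6, 0.8]
r = [math.exp(2.0 * x) for x in p]   # exact exponential with a = 1, b = 2
a, b = fit_exponential(p, r)
```

A positive fitted b is consistent with the selective recruitment hypothesis: the drivers who remain unbelted at high belt-use levels are the higher-risk ones.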
Joseph, Agnel P; Swapna, Lakshmipuram S; Rakesh, Ramachandran; Srinivasan, Narayanaswamy
2016-09-01
Protein-protein interface residues, especially those at the core of the interface, exhibit higher conservation than residues in solvent exposed regions. Here, we explore the ability of this differential conservation to evaluate fittings of atomic models in low-resolution cryo-EM maps and select models from the ensemble of solutions that are often proposed by different model fitting techniques. As a prelude, using a non-redundant and high-resolution structural dataset involving 125 permanent and 95 transient complexes, we confirm that core interface residues are conserved significantly better than nearby non-interface residues and this result is used in the cryo-EM map analysis. From the analysis of inter-component interfaces in a set of fitted models associated with low-resolution cryo-EM maps of ribosomes, chaperones and proteasomes we note that a few poorly conserved residues occur at interfaces. Interestingly a few conserved residues are not in the interface, though they are close to the interface. These observations raise the potential requirement of refitting the models in the cryo-EM maps. We show that sampling an ensemble of models and selection of models with high residue conservation at the interface and in good agreement with the density helps in improving the accuracy of the fit. This study indicates that evolutionary information can serve as an additional input to improve and validate fitting of atomic models in cryo-EM density maps. Copyright © 2016 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Lilith K Whittles
2017-10-01
Gonorrhoea is one of the most common bacterial sexually transmitted infections in England. Over 41,000 cases were recorded in 2015, more than half of which occurred in men who have sex with men (MSM). As the bacterium has developed resistance to each first-line antibiotic in turn, we need an improved understanding of the fitness benefits and costs of antibiotic resistance to inform control policy and planning. Cefixime was recommended as a single-dose treatment for gonorrhoea from 2005 to 2010, during which time resistance increased, and subsequently declined. We developed a stochastic compartmental model representing the natural history and transmission of cefixime-sensitive and cefixime-resistant strains of Neisseria gonorrhoeae in MSM in England, which was applied to data on diagnoses and prescriptions between 2008 and 2015. We estimated that asymptomatic carriers play a crucial role in overall transmission dynamics, with 37% (95% credible interval [CrI] 24%-52%) of infections remaining asymptomatic and untreated, accounting for 89% (95% CrI 82%-93%) of onward transmission. The fitness cost of cefixime resistance in the absence of cefixime usage was estimated to be such that the number of secondary infections caused by resistant strains is only about half that of the susceptible strains, which is insufficient to maintain persistence. However, we estimated that treatment of cefixime-resistant strains with cefixime was unsuccessful in 83% (95% CrI 53%-99%) of cases, representing a fitness benefit of resistance. This benefit was large enough to counterbalance the fitness cost when 31% (95% CrI 26%-36%) of cases were treated with cefixime, and when more than 55% (95% CrI 44%-66%) of cases were treated with cefixime, the resistant strain had a net fitness advantage over the susceptible strain. Limitations include sparse data leading to large intervals on key model parameters and necessary assumptions in the modelling of a complex epidemiological process…
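A heavily simplified deterministic analogue of such a two-strain transmission model can be sketched with Euler integration. The study itself used a stochastic compartmental model fitted to surveillance data; the rates below are arbitrary illustrative values showing how a transmission cost below the persistence threshold drives the resistant strain out:

```python
def two_strain_sis(beta_s, beta_r, gamma, days, dt=0.1):
    """Euler integration of a two-strain SIS model: fractions infected
    with the sensitive (i_s) and resistant (i_r) strains."""
    i_s, i_r = 0.01, 0.01
    for _ in range(int(days / dt)):
        s = 1.0 - i_s - i_r                          # susceptible fraction
        i_s += dt * (beta_s * s * i_s - gamma * i_s)
        i_r += dt * (beta_r * s * i_r - gamma * i_r)
    return i_s, i_r

# Resistant strain with beta_r < gamma cannot persist without a treatment benefit
i_s, i_r = two_strain_sis(beta_s=0.3, beta_r=0.08, gamma=0.1, days=200)
```

Adding strain-specific treatment failure, as in the paper, would effectively lower gamma for the resistant strain and can reverse this outcome.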
A Parametric Model of Shoulder Articulation for Virtual Assessment of Space Suit Fit
Kim, K. Han; Young, Karen S.; Bernal, Yaritza; Boppana, Abhishektha; Vu, Linh Q.; Benson, Elizabeth A.; Jarvis, Sarah; Rajulu, Sudhakar L.
2016-01-01
Suboptimal suit fit is a known risk factor for crewmember shoulder injury. Suit fit assessment is however prohibitively time consuming and cannot be generalized across wide variations of body shapes and poses. In this work, we have developed a new design tool based on the statistical analysis of body shape scans. This tool is aimed at predicting the skin deformation and shape variations for any body size and shoulder pose for a target population. This new process, when incorporated with CAD software, will enable virtual suit fit assessments, predictively quantifying the contact volume, and clearance between the suit and body surface at reduced time and cost.
Fitting Social Network Models Using Varying Truncation Stochastic Approximation MCMC Algorithm
Jin, Ick Hoon
2013-10-01
The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and the model degeneracy. The existing algorithms, such as Monte Carlo maximum likelihood estimation (MCMLE) and stochastic approximation, often fail for this problem in the presence of model degeneracy. In this article, we introduce the varying truncation stochastic approximation Markov chain Monte Carlo (SAMCMC) algorithm to tackle this problem. The varying truncation mechanism enables the algorithm to choose an appropriate starting point and an appropriate gain factor sequence, and thus to produce a reasonable parameter estimate for the ERGM even in the presence of model degeneracy. The numerical results indicate that the varying truncation SAMCMC algorithm can significantly outperform the MCMLE and stochastic approximation algorithms: for degenerate ERGMs, MCMLE and stochastic approximation often fail to produce any reasonable parameter estimates, while SAMCMC can; for nondegenerate ERGMs, SAMCMC can work as well as or better than MCMLE and stochastic approximation. The data and source codes used for this article are available online as supplementary materials. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
Understanding the Listening Process: Rethinking the "One Size Fits All" Model
Wolvin, Andrew
2013-01-01
Robert Bostrom's seminal contributions to listening theory and research represent an impressive legacy and provide listening scholars with important perspectives on the complexities of listening cognition and behavior. Bostrom's work provides a solid foundation on which to build models that more realistically explain how listeners function…
Fitting the Mixed Rasch Model to a Reading Comprehension Test: Identifying Reader Types
Baghaei, Purya; Carstensen, Claus H.
2013-01-01
Standard unidimensional Rasch models assume that persons with the same ability parameters are comparable. That is, the same interpretation applies to persons with identical ability estimates as regards the underlying mental processes triggered by the test. However, research in cognitive psychology shows that persons at the same trait level may…
Directory of Open Access Journals (Sweden)
Cheol-Eung Lee
2017-02-01
Several natural disasters occur because of torrential rainfalls, and the change in global climate most likely increases the occurrence of such downpours. Hence, it is necessary to investigate the characteristics of torrential rainfall events in order to introduce effective measures for mitigating disasters such as urban floods and landslides. However, one of the major problems is evaluating the number of torrential rainfall events from a statistical viewpoint. If the number of torrential rainfall occurrences during a month is treated as count data, their frequency distribution can be identified using a probability distribution. Generally, the number of torrential rainfall occurrences has been analyzed using the Poisson distribution (POI) or the Generalized Poisson Distribution (GPD). However, it has been reported that POI and GPD often overestimate or underestimate the observed count data when excess or too few zeros are present. Hence, in this study, a zero-inflated model concept was applied to solve this problem with the conventional models. The Zero-Inflated Poisson (ZIP) model, the Zero-Inflated Generalized Poisson (ZIGP) model, and the Bayesian ZIGP model have often been applied to fit count data having excess or too few zeros, yet the applications of these models in water resource management have been very limited despite their efficiency and accuracy. The five models, namely POI, GPD, ZIP, ZIGP, and Bayesian ZIGP, were applied to torrential rainfall data having excess zeros obtained from two rain gauges in South Korea, and their applicability was examined in this study. In particular, informative prior distributions evaluated via the empirical Bayes method using ten rain gauges were developed for the Bayesian ZIGP model. Finally, it was suggested to avoid using the POI and GPD models to fit the frequency of torrential rainfall data. In addition, it was concluded that the Bayesian ZIGP model used in this study…
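A Zero-Inflated Poisson model of the kind compared in this record can be fitted with a short EM iteration. The count data below are synthetic (structural zeros plus counts with mean 3), not the South Korean rainfall counts:

```python
import math

def fit_zip(counts, iters=200):
    """EM fit of a zero-inflated Poisson: P(0) = pi + (1-pi)e^-lam,
    P(k) = (1-pi) e^-lam lam^k / k! for k >= 1."""
    n = len(counts)
    pi, lam = 0.5, max(sum(counts) / n, 0.1)
    for _ in range(iters):
        # E-step: probability that each observed zero is a structural zero
        z0 = pi / (pi + (1.0 - pi) * math.exp(-lam))
        z = [z0 if c == 0 else 0.0 for c in counts]
        # M-step: update the zero-inflation weight and the Poisson mean
        pi = sum(z) / n
        w = [1.0 - zi for zi in z]
        lam = sum(wi * c for wi, c in zip(w, counts)) / sum(w)
    return pi, lam

data = [0] * 40 + [2, 3, 4] * 20   # excess zeros plus a Poisson-like body
pi, lam = fit_zip(data)
```

A plain Poisson fit to the same data would put its mean at 1.8 and badly underpredict the zeros, which is the overestimation/underestimation problem the abstract describes.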
Cich, Matthew J.; Guillaume, Alexandre; Drouin, Brian; Benner, D. Chris
2017-06-01
Multispectrum analysis can be a challenge for a variety of reasons. It can be computationally intensive to fit a proper line shape model, especially for high-resolution experimental data. Band-wide analyses including many transitions along with interactions, across many pressures and temperatures, are essential to accurately model, for example, atmospherically relevant systems. Labfit is a fast multispectrum analysis program originally developed by D. Chris Benner with a text-based interface. More recently, at JPL, a graphical user interface was developed with the goal of increasing the ease of use and the number of potential users. The HTP lineshape model has been added to Labfit, keeping it up to date with community standards. Recent analyses using Labfit will be shown to demonstrate its ability to competently handle large experimental datasets, including high-order lineshape effects, that are otherwise unmanageable.
The inert doublet model in the light of Fermi-LAT gamma-ray data: a global fit analysis
Energy Technology Data Exchange (ETDEWEB)
Eiteneuer, Benedikt; Heisig, Jan [RWTH Aachen University, Institute for Theoretical Particle Physics and Cosmology, Aachen (Germany); Goudelis, Andreas [UMR 7589 CNRS and UPMC, Laboratoire de Physique Theorique et Hautes Energies (LPTHE), Paris (France)
2017-09-15
We perform a global fit within the inert doublet model taking into account experimental observables from colliders, direct and indirect dark matter searches and theoretical constraints. In particular, we consider recent results from searches for dark matter annihilation-induced gamma-rays in dwarf spheroidal galaxies and relax the assumption that the inert doublet model should account for the entire dark matter in the Universe. We, moreover, study in how far the model is compatible with a possible dark matter explanation of the so-called Galactic center excess. We find two distinct parameter space regions that are consistent with existing constraints and can simultaneously explain the excess: One with dark matter masses near the Higgs resonance and one around 72 GeV where dark matter annihilates predominantly into pairs of virtual electroweak gauge bosons via the four-vertex arising from the inert doublet's kinetic term. We briefly discuss future prospects to probe these scenarios. (orig.)
O'Neill, James M; Clark, Jeffrey K; Jones, James A
2016-07-01
In elementary grades, comprehensive health education curricula have demonstrated effectiveness in addressing singular health issues. The Michigan Model for Health (MMH) was implemented and evaluated to determine its impact on nutrition, physical fitness, and safety knowledge and skills. Schools (N = 52) were randomly assigned to intervention and control conditions. Participants received MMH with 24 lessons in grade 4 and 28 more lessons in grade 5 including material focusing on nutrition, physical fitness, and safety attitudes and skills. The 40-minute lessons were taught by the classroom teacher who received curriculum training and provided feedback on implementation fidelity. Self-report survey data were collected from the fourth-grade students (N = 1983) prior to the intervention, immediately after the intervention, and 6 weeks after the intervention, with the same data collection schedule repeated in fifth grade. Analysis of the scales was conducted using a mixed-model approach. Students who received the curriculum had better nutrition, physical activity, and safety skills than the control-group students. Intervention students also reported higher consumption of fruits; however, no difference was reported for other types of food consumption. The effectiveness of the MMH in promoting fitness and safety supports the call for integrated strategies that begin in elementary grades, target multiple risk behaviors, and result in practical and financial benefits to schools. © 2016, American School Health Association.
Strategy for Fitting Neuronal Models to Dual Patch Data under Multiple Stimulation Protocols
National Research Council Canada - National Science Library
Shen, Gongyu
2001-01-01
… The authors separated the fitting of the intervening part between the two electrodes from that beyond the dendritic electrode to reduce the spatial complexity by using dendritic voltage clamp simulation…
Convis: A Toolbox to Fit and Simulate Filter-Based Models of Early Visual Processing.
Huth, Jacob; Masquelier, Timothée; Arleo, Angelo
2018-01-01
We developed Convis, a Python simulation toolbox for large-scale neural populations which offers arbitrary receptive fields by 3D convolutions executed on a graphics card. The resulting software proves to be flexible and easily extensible in Python, while building on the PyTorch library (The PyTorch Project, 2017), which was previously used successfully in deep learning applications, for just-in-time optimization and compilation of the model onto CPU or GPU architectures. An alternative implementation based on Theano (Theano Development Team, 2016) is also available, although not fully supported. Through automatic differentiation, any parameter of a specified model can be optimized to approach a desired output, which is a significant improvement over e.g., Monte Carlo or particle optimizations without gradients. We show that a number of models, including even complex non-linearities such as contrast gain control and spiking mechanisms, can be implemented easily. We show in this paper that we can in particular recreate the simulation results of a popular retina simulation software, VirtualRetina (Wohrer and Kornprobst, 2009), with the added benefit of providing (1) arbitrary linear filters instead of the product of Gaussian and exponential filters and (2) optimization routines utilizing the gradients of the model. We demonstrate the utility of 3D convolution filters with a simple direction-selective filter. Also, we show that it is possible to optimize the input for a certain goal, rather than the parameters, which can aid the design of experiments as well as closed-loop online stimulus generation. Yet, Convis is more than a retina simulator. For instance, it can also predict the response of V1 orientation-selective cells. Convis is open source under the GPL-3.0 license and available from https://github.com/jahuth/convis/ with documentation at https://jahuth.github.io/convis/.
Taking error into account when fitting models using Approximate Bayesian Computation.
van der Vaart, Elske; Prangle, Dennis; Sibly, Richard M
2018-03-01
Stochastic computer simulations are often the only practical way of answering questions relating to ecological management. However, due to their complexity, such models are difficult to calibrate and evaluate. Approximate Bayesian Computation (ABC) offers an increasingly popular approach to this problem, widely applied across a variety of fields. However, ensuring the accuracy of ABC's estimates has been difficult. Here, we obtain more accurate estimates by incorporating estimation of error into the ABC protocol. We show how this can be done where the data consist of repeated measures of the same quantity and errors may be assumed to be normally distributed and independent. We then derive the correct acceptance probabilities for a probabilistic ABC algorithm, and update the coverage test with which accuracy is assessed. We apply this method, which we call error-calibrated ABC, to a toy example and a realistic 14-parameter simulation model of earthworms that is used in environmental risk assessment. A comparison with exact methods and the diagnostic coverage test show that our approach improves estimation of parameter values and their credible intervals for both models. © 2017 by the Ecological Society of America.
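The probabilistic-acceptance idea behind this approach can be illustrated with a toy sketch: a model whose output is the parameter itself, observed with known normal error, where prior draws are accepted with probability given by the normal density of the mismatch. This is a simplified stand-in for the paper's error-calibrated ABC, not its actual algorithm:

```python
import math
import random

def probabilistic_abc(observed, sigma, n_draws=20000, seed=0):
    """Rejection ABC in which a prior draw theta is accepted with probability
    exp(-(theta - observed)^2 / (2 sigma^2)), i.e. the normal error model."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 10.0)       # flat prior on [0, 10]
        p_accept = math.exp(-((theta - observed) ** 2) / (2.0 * sigma ** 2))
        if rng.random() < p_accept:
            accepted.append(theta)
    return accepted

post = probabilistic_abc(observed=5.0, sigma=1.0)
post_mean = sum(post) / len(post)
```

Because acceptance weights draws by the error likelihood rather than a hard distance threshold, the accepted sample approximates the exact posterior under the assumed normal error, which is the property the error-calibrated method exploits.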
Does Vocational Education Model fit to Fulfil Prisoners’ Needs Based on Gender?
Hayzaki, S. H.; Nurhaeni, I. D. A.
2018-02-01
Men and women have different needs, based on their gender or socio-cultural construction. The government has issued a policy on accelerating gender equality since 2012 through gender-responsive planning and budgeting. Under the policy, every institution (including the institutions under the Ministry of Law and Human Rights) must integrate a gender perspective into planning and budgeting so that it can fulfill the different needs of men and women. One of the programs developed in prisons is vocational education and technology training, preparing prisoners for life after release. This article evaluates the vocational education and training given to prisoners, employing a gender perspective as the analytical tool. The result was then used as the basis for formulating a vocational education model integrating a gender perspective. The research was conducted at the Prison of Demak Regency, Indonesia, using a descriptive qualitative method with data collected through in-depth interviews, observation and documentation. The data analysis uses descriptive statistics based on Harvard's checklist category model combined with Moser's category model. The result shows that the vocational education and training given have not considered the differences between men and women. As a result, the prisoners were still not able to understand their different needs, which can cause gender injustice when they enter the job market. It is suggested that a gender perspective be included as teaching material in vocational education and training.
Brodie, E.; King, E.; Molins, S.; Karaoz, U.; Steefel, C. I.; Banfield, J. F.; Beller, H. R.; Anantharaman, K.; Ligocki, T. J.; Trebotich, D.
2015-12-01
Pore-scale processes mediated by microorganisms underlie a range of critical ecosystem services, regulating carbon stability, nutrient flux, and the purification of water. Advances in cultivation-independent approaches now provide us with the ability to reconstruct thousands of genomes from microbial populations from which functional roles may be assigned. With this capability to reveal microbial metabolic potential, the next step is to put these microbes back where they belong to interact with their natural environment, i.e. the pore scale. At this scale, microorganisms communicate, cooperate and compete across their fitness landscapes with communities emerging that feedback on the physical and chemical properties of their environment, ultimately altering the fitness landscape and selecting for new microbial communities with new properties and so on. We have developed a trait-based model of microbial activity that simulates coupled functional guilds that are parameterized with unique combinations of traits that govern fitness under dynamic conditions. Using a reactive transport framework, we simulate the thermodynamics of coupled electron donor-acceptor reactions to predict energy available for cellular maintenance, respiration, biomass development, and enzyme production. From metagenomics, we directly estimate some trait values related to growth and identify the linkage of key traits associated with respiration and fermentation, macromolecule depolymerizing enzymes, and other key functions such as nitrogen fixation. Our simulations were carried out to explore abiotic controls on community emergence such as seasonally fluctuating water table regimes across floodplain organic matter hotspots. Simulations and metagenomic/metatranscriptomic observations highlighted the many dependencies connecting the relative fitness of functional guilds and the importance of chemolithoautotrophic lifestyles. Using an X-Ray microCT-derived soil microaggregate physical model combined
Czech Academy of Sciences Publication Activity Database
Vinš, Václav; Jäger, A.; Hrubý, Jan; Span, R.
2017-01-01
Roč. 435, March (2017), s. 104-117 ISSN 0378-3812 R&D Projects: GA MŠk(CZ) 7F14466; GA ČR(CZ) GJ15-07129Y Institutional support: RVO:61388998 Keywords: carbon capture and storage * clathrate * parameter fitting Subject RIV: BJ - Thermodynamics Impact factor: 2.473, year: 2016 http://ac.els-cdn.com/S0378381216306069/1-s2.0-S0378381216306069-main.pdf
Modeling Invasion Dynamics with Spatial Random-Fitness Due to Micro-Environment.
Manem, V S K; Kaveh, K; Kohandel, M; Sivaloganathan, S
2015-01-01
Numerous experimental studies have demonstrated that the microenvironment is a key regulator influencing the proliferative and migrative potentials of species. Spatial and temporal disturbances lead to adverse and hazardous microenvironments for cellular systems, which is reflected in the phenotypic heterogeneity within the system. In this paper, we study the effect of the microenvironment on the invasive capability of species, or mutants, on structured grids (in particular, square lattices) under the influence of site-dependent random proliferation in addition to a migration potential. We discuss both continuous and discrete fitness distributions. Our results suggest that the invasion probability is negatively correlated with the variance of the fitness distribution of mutants (for both advantageous and neutral mutants) in the absence of migration of both types of cells. A similar behaviour is observed even in the presence of a random fitness distribution of host cells in a system with neutral fitness rate. In the case of a bimodal distribution, we observe zero invasion probability until the system reaches a (specific) proportion of advantageous phenotypes. Also, we find that the migrative potential amplifies the invasion probability as the variance of fitness of mutants increases in the system, the exact opposite of what happens in the absence of migration. Our computational framework captures harsh microenvironmental conditions through quenched random fitness distributions and migration of cells, and our analysis shows that they play an important role in the invasion dynamics of several biological systems such as bacterial micro-habitats, epithelial dysplasia, and metastasis. We believe that our results may lead to more experimental studies, which can in turn provide further insights into the role and impact of heterogeneous environments on invasion dynamics.
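As a non-spatial baseline for the invasion probabilities studied in this record, a single mutant of relative fitness r in a well-mixed Moran process can be simulated and checked against the classical closed form rho = (1 - 1/r) / (1 - 1/r^N). The spatial lattice structure and quenched random fitness of the paper are deliberately not captured by this sketch:

```python
import random

def moran_invasion(r, n_pop, trials=2000, seed=1):
    """Fraction of runs in which one mutant of relative fitness r fixes
    in a well-mixed Moran birth-death process of size n_pop."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        i = 1                                 # current number of mutants
        while 0 < i < n_pop:
            w = i * r + (n_pop - i)           # total population fitness
            if rng.random() < i * r / w:      # a mutant reproduces...
                if rng.random() < (n_pop - i) / n_pop:
                    i += 1                    # ...and replaces a resident
            else:                             # a resident reproduces...
                if rng.random() < i / n_pop:
                    i -= 1                    # ...and replaces a mutant
        fixed += (i == n_pop)
    return fixed / trials

rho_sim = moran_invasion(r=2.0, n_pop=10)
rho_exact = (1 - 1 / 2.0) / (1 - 2.0 ** -10)
```

Replacing the constant r with site-dependent random draws, as the paper does, is what produces the dependence of invasion probability on fitness variance.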
Beldad, Ardion Daroca; Hegner, Sabrina
2017-01-01
According to one market research report, fitness or running apps are hugely popular in Germany. Such a trend prompts the question concerning the factors influencing German users' intention to continue using a specific fitness app. To address the research question, the expanded Technology Acceptance Model (with the addition of trust, social influence, and health valuation) was tested with 476 German users of fitness apps. Structural equation modeling results reveal that respondents' intention to continue…
Using Fit Indexes to Select a Covariance Model for Longitudinal Data
Liu, Siwei; Rovine, Michael J.; Molenaar, Peter C. M.
2012-01-01
This study investigated the performance of fit indexes in selecting a covariance structure for longitudinal data. Data were simulated to follow a compound symmetry, first-order autoregressive, first-order moving average, or random-coefficients covariance structure. We examined the ability of the likelihood ratio test (LRT), root mean square error…
van der Niet, Anneke G.; Hartman, Esther; Smith, Joanne; Visscher, Chris
Objectives: The relationship between physical fitness and academic achievement in children has received much attention, however, whether executive functioning plays a mediating role in this relationship is unclear. The aim of this study therefore was to investigate the relationships between physical
Resilience of a FIT screening programme against screening fatigue: a modelling study
Greuter, Marjolein J. E.; Berkhof, Johannes; Canfell, Karen; Lew, Jie-Bin; Dekker, Evelien; Coupé, Veerle M. H.
2016-01-01
Repeated participation is important in faecal immunochemical testing (FIT) screening for colorectal cancer (CRC). However, a large number of screening invitations over time may lead to screening fatigue and consequently, decreased participation rates. We evaluated the impact of screening fatigue on
Roberts, James S.
Stone and colleagues (C. Stone, R. Ankenman, S. Lane, and M. Liu, 1993; C. Stone, R. Mislevy and J. Mazzeo, 1994; C. Stone, 2000) have proposed a fit index that explicitly accounts for the measurement error inherent in an estimated theta value, here called χ²_i*. The elements of this statistic are natural…
A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models
International Nuclear Information System (INIS)
Xu, Jin; Yu, Yaming; Van Dyk, David A.; Kashyap, Vinay L.; Siemiginowska, Aneta; Drake, Jeremy; Ratzlaff, Pete; Connors, Alanna; Meng, Xiao-Li
2014-01-01
Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
A closed-loop power controller model of series-resonant-inverter-fitted induction heating system
Directory of Open Access Journals (Sweden)
Pal Palash
2016-12-01
This paper presents a mathematical model of a power controller for a high-frequency induction heating system based on a modified half-bridge series resonant inverter. The real output power over the heating coil is measured precisely, and this power is processed as a feedback signal in a closed-loop topology with a proportional-integral-derivative controller. This technique enables both control of the closed-loop power and determination of the stability of the high-frequency inverter. Unlike the topologies of existing power controllers, the proposed topology enables direct control of the real power of the high-frequency inverter.
International Nuclear Information System (INIS)
Masseran, N.; Razali, A.M.; Ibrahim, K.; Latif, M.T.
2013-01-01
Highlights: • We suggest a simple way of modeling wind direction using a mixture of von Mises distributions. • We determine the most suitable probability model for the wind direction regime in Malaysia. • We provide circular density plots to show the most prominent wind directions. - Abstract: A statistical distribution for describing wind direction provides information about the wind regime at a particular location. In addition, this information complements knowledge of wind speed, which allows researchers to draw some conclusions about the energy potential of wind and aids the development of efficient wind energy generation. This study focuses on modeling the frequency distribution of wind direction, including some characteristics of the wind regime that cannot be represented by a unimodal distribution. To identify the most suitable model, finite mixtures of von Mises distributions were fitted to the average hourly wind direction data for nine wind stations located in Peninsular Malaysia. The data used were from the years 2000 to 2009. The suitability of each mixture distribution was judged based on the R² coefficient and the histogram plot with a density line. The results showed that a finite mixture of von Mises distributions with H components was the best distribution for describing the wind direction distributions in Malaysia. In addition, the circular density plots of the most suitable model clearly showed the most prominent wind directions.
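A mixture-of-von-Mises density of the kind fitted in this record can be evaluated in pure Python, with the modified Bessel function I0 computed from its power series. The two component parameter sets below are arbitrary illustrations of a bimodal wind regime, not the fitted Malaysian values:

```python
import math

def bessel_i0(kappa, terms=30):
    """Modified Bessel function of the first kind, order 0 (power series)."""
    return sum((kappa / 2.0) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def von_mises_pdf(x, mu, kappa):
    """Circular density exp(kappa*cos(x - mu)) / (2*pi*I0(kappa))."""
    return math.exp(kappa * math.cos(x - mu)) / (2.0 * math.pi * bessel_i0(kappa))

def mixture_pdf(x, components):
    """components: list of (weight, mu, kappa) with weights summing to 1."""
    return sum(w * von_mises_pdf(x, mu, kappa) for w, mu, kappa in components)

# Two prominent wind directions (illustrative parameters only)
mix = [(0.6, 0.5, 2.0), (0.4, 3.5, 4.0)]
```

In a full fit, the weights, mean directions mu, and concentrations kappa of H such components would be estimated (e.g. by EM) and the resulting density compared to the direction histogram via the R² coefficient mentioned above.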
Sun, Yanqing; Li, Mei; Gilbert, Peter B
2016-01-01
Motivated by the need to assess HIV vaccine efficacy, previous studies proposed an extension of the discrete competing risks proportional hazards model, in which the cause of failure is replaced by a continuous mark observed only at the failure time. However, the model assumptions may fail in several ways, and no diagnostic testing procedure for this situation has been proposed. We propose a goodness-of-fit test procedure for the stratified mark-specific proportional hazards model, in which the regression parameters depend nonparametrically on the mark and the baseline hazards depend nonparametrically on both time and the mark. The test statistics are constructed from the weighted cumulative mark-specific martingale residuals. The critical values of the proposed test statistics are approximated using the Gaussian multiplier method. The performance of the proposed tests is examined extensively in simulations for a variety of models under the null hypothesis and under different types of alternative models. An analysis of the 'Step' HIV vaccine efficacy trial using the proposed method is presented. The analysis suggests that the HIV vaccine candidate may increase susceptibility to HIV acquisition.
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
International Nuclear Information System (INIS)
Li Yupeng; Deutsch, Clayton V.
2012-01-01
In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportional fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iteratively modifying an initial estimate of the multivariate probability using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. The algorithm can be extended to higher-order marginal probability constraints, as used in multiple-point statistics. The theoretical framework is developed and illustrated with estimation and simulation examples.
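The core of iterative proportional fitting can be illustrated on a small bivariate table. The 3×3 size and the target marginals below are hypothetical, and the paper's sparse-matrix, higher-order machinery is not reproduced:

```python
import numpy as np

# Hypothetical facies-proportion constraints at two grouped locations
row_target = np.array([0.5, 0.3, 0.2])   # marginal proportions at location A
col_target = np.array([0.4, 0.4, 0.2])   # marginal proportions at location B

P = np.full((3, 3), 1 / 9.0)             # initial bivariate probability estimate
for _ in range(50):                      # alternate scaling to each marginal
    P *= (row_target / P.sum(axis=1))[:, None]   # match row marginals
    P *= (col_target / P.sum(axis=0))[None, :]   # match column marginals

print(P.sum(axis=1), P.sum(axis=0))      # both now reproduce the targets
```

Starting from a uniform table, the fixed point is the independence table `outer(row_target, col_target)`; with an informative starting table, IPF preserves its interaction structure while enforcing the marginals.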
Directory of Open Access Journals (Sweden)
Tao eWang
2015-03-01
The generalized linear mixed model (GLMM) is a useful tool for modeling genetic correlation among family data in genetic association studies. However, when dealing with families of varied sizes and diverse genetic relatedness, the GLMM has a special correlation structure which often makes it difficult to specify using standard statistical software. In this study, we propose a Cholesky decomposition based re-formulation of the GLMM so that the re-formulated GLMM can be specified conveniently via `proc nlmixed' and `proc glimmix' in SAS, or in OpenBUGS via the R package BRugs. The performance of these procedures in fitting the re-formulated GLMM is examined through simulation studies. We also apply the re-formulated GLMM to analyze a real data set from the Type 1 Diabetes Genetics Consortium (T1DGC).
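A minimal numeric sketch of the Cholesky idea behind the re-formulation: correlated family effects b = σ·L·u are generated from iid effects u, where K = L·Lᵀ is a kinship-based covariance. The nuclear-family relatedness matrix and the genetic SD here are hypothetical, and the SAS/OpenBUGS specification itself is omitted:

```python
import numpy as np

# Hypothetical relationship matrix for a nuclear family:
# parents unrelated; parent-offspring and sib-sib relatedness 0.5
K = np.array([[1.0, 0.0, 0.5, 0.5],
              [0.0, 1.0, 0.5, 0.5],
              [0.5, 0.5, 1.0, 0.5],
              [0.5, 0.5, 0.5, 1.0]])
L = np.linalg.cholesky(K)                 # K = L @ L.T

rng = np.random.default_rng(1)
sigma_g = 0.8                             # assumed genetic standard deviation
u = rng.standard_normal((4, 200_000))     # iid effects, easy for any mixed-model software
b = sigma_g * L @ u                       # correlated family effects

print(np.cov(b).round(2))                 # empirically close to sigma_g**2 * K
```

Because u is iid, the re-formulated model only needs independent random effects plus a fixed triangular transform, which is exactly what standard mixed-model software can express.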
The electroweak fit of the standard model after the discovery of a new boson at the LHC
International Nuclear Information System (INIS)
Baak, M.; Hoecker, A.; Schott, M.; Goebel, M.; Kennedy, D.; Moenig, K.; Haller, J.; Kogler, R.; Stelzer, J.
2012-09-01
In view of the discovery of a new boson by the ATLAS and CMS Collaborations at the LHC, we present an update of the global Standard Model (SM) fit to electroweak precision data. Assuming the new particle to be the SM Higgs boson, all fundamental parameters of the SM are known, allowing, for the first time, the SM to be overconstrained at the electroweak scale and its validity asserted. Including the effects of radiative corrections and the experimental and theoretical uncertainties, the global fit exhibits a p-value of 0.07. The mass measurements by ATLAS and CMS agree within 1.3σ with the indirect determination M_H = 94 (+25, −22) GeV. Within the SM, the W boson mass and the effective weak mixing angle can be accurately predicted from the global fit to be M_W = 80.359 ± 0.011 GeV and sin²θ^l_eff = 0.23150 ± 0.00010. These results are compatible with, and exceed in precision, the direct measurements. For the indirect determination of the top quark mass we find m_t = 175.8 (+2.7, −2.4) GeV, in agreement with the kinematic and cross-section based measurements.
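Two of the quoted numbers can be reproduced with elementary statistics. The χ²/ndof pair below is a hypothetical choice that lands near the quoted p = 0.07, and the direct Higgs mass of 125.7 GeV is an assumed input, not taken from this abstract:

```python
from scipy.stats import chi2

# p-value of a global fit from its minimum chi2 and degrees of freedom
chi2_min, ndof = 21.8, 14                 # hypothetical values giving p near 0.07
p = chi2.sf(chi2_min, ndof)

# "agree within 1.3 sigma": pull of an assumed direct mass measurement
# against the indirect determination 94 (+25, -22) GeV
m_direct, m_indirect, err_up = 125.7, 94.0, 25.0  # direct value lies above, so use +25
pull = (m_direct - m_indirect) / err_up
print(round(p, 2), round(pull, 2))
```

With asymmetric errors, the convention used here is to divide by the error on the side toward the other measurement; a full treatment would use the fit's Δχ² profile.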
STATISTICAL EVALUATION OF FITTING ACCURACY OF GLOBAL AND LOCAL DIGITAL ELEVATION MODELS IN IRAN
Directory of Open Access Journals (Sweden)
F. Alidoost
2013-09-01
Digital Elevation Models (DEMs) are among the most important data for various applications such as hydrological studies, topographic mapping and ortho-image generation. There are well-known DEMs of the whole world that represent the terrain surface at variable resolution, and they are freely available for 99% of the globe. However, it is necessary to assess the quality of these global DEMs for regional-scale applications. The models are evaluated by differencing against other reference DEMs or ground control points (GCPs) in order to estimate quality and accuracy parameters over different land cover types. In this paper, a comparison of ASTER GDEM ver2 and the SRTM DEM with more than 800 reference GCPs, and also with a local elevation model, over the area of Iran is presented. The study investigates DEM characteristics such as systematic error (bias), vertical accuracy and outliers, using both the usual descriptors (mean error, root mean square error, standard deviation) and robust ones (median, normalized median absolute deviation, sample quantiles). Visual assessment tools, such as normalized histograms and Q-Q plots, are also used to illustrate the quality of the DEMs. The results confirmed that GDEM ver2 has a negative elevation bias of approximately 5 meters. The measured RMSE and NMAD for the GDEM-GCP elevation differences are 7.1 m and 3.2 m, respectively, while these values for SRTM and the GCPs are 9.0 m and 4.4 m. On the other hand, in comparison with the local DEM, GDEM ver2 exhibits an RMSE of about 6.7 m, slightly higher than the RMSE of SRTM (5.1 m). The height-difference classification and other statistical analyses of GDEM ver2 versus the local DEM and SRTM versus the local DEM reveal that SRTM is slightly more accurate than GDEM ver2. Accordingly, SRTM shows no noticeable bias or shift from the local DEM and the two are more consistent with each other, while GDEM ver2 always has a negative bias.
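The usual and robust descriptors used in the comparison can be computed as follows. The six-point sample is invented purely to show how the NMAD resists an outlier that inflates the RMSE:

```python
import numpy as np

def dem_error_stats(dem, ref):
    """Usual and robust vertical-accuracy descriptors for DEM minus reference."""
    d = np.asarray(dem, float) - np.asarray(ref, float)
    bias = d.mean()                               # systematic error
    rmse = np.sqrt(np.mean(d**2))                 # usual accuracy measure
    med = np.median(d)                            # robust bias estimate
    nmad = 1.4826 * np.median(np.abs(d - med))    # robust analogue of the std. dev.
    return bias, rmse, med, nmad

# tiny illustrative sample: a DEM with a -5 m bias and one 12 m outlier
ref = np.array([100.0, 120.0, 135.0, 150.0, 160.0, 180.0])
dem = ref - 5 + np.array([0.5, -0.8, 0.3, 0.0, 12.0, -0.4])

bias, rmse, med, nmad = dem_error_stats(dem, ref)
print(round(bias, 2), round(rmse, 2), round(med, 2), round(nmad, 2))
```

The single outlier drives the RMSE well above the NMAD, which is exactly the contrast the paper exploits when reporting both descriptor families.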
Whiteman-Sandland, Jessica; Hawkins, Jemma; Clayton, Debbie
2016-01-01
This is the first study to measure the ‘sense of community’ reportedly offered by the CrossFit gym model. A cross-sectional study adapted Social Capital and General Belongingness scales to compare perceptions of a CrossFit gym and a traditional gym. CrossFit gym members reported significantly higher levels of social capital (both bridging and bonding) and community belongingness compared with traditional gym members. However, regression analysis showed neither social capital, community belong...
Volkmann, Niels
2012-02-01
A complete understanding of complex dynamic cellular processes such as cell migration or cell adhesion requires the integration of atomic level structural information into the larger cellular context. While direct atomic-level information at the cellular level remains inaccessible, electron microscopy, electron tomography and their associated computational image processing approaches have now matured to a point where sub-cellular structures can be imaged in three dimensions at the nanometer scale. Atomic-resolution information obtained by other means can be combined with this data to obtain three-dimensional models of large macromolecular assemblies in their cellular context. This article summarizes some recent advances in this field. Copyright © 2011 Elsevier Ltd. All rights reserved.
Hierarchical Winner-Take-All Particle Swarm Optimization Social Network for Neural Model Fitting
Coventry, Brandon S.; Parthasarathy, Aravindakshan; Sommer, Alexandra L.; Bartlett, Edward L.
2016-01-01
Particle swarm optimization (PSO) has gained widespread use as a general mathematical programming paradigm and seen use in a wide variety of optimization and machine learning problems. In this work, we introduce a new variant on the PSO social network and apply this method to the inverse problem of input parameter selection from recorded auditory neuron tuning curves. The topology of a PSO social network is a major contributor to optimization success. Here we propose a new social network which draws influence from winner-take-all coding found in visual cortical neurons. We show that the winner-take-all network performs exceptionally well on optimization problems with greater than 5 dimensions and runs at a lower iteration count as compared to other PSO topologies. Finally we show that this variant of PSO is able to recreate auditory frequency tuning curves and modulation transfer functions, making it a potentially useful tool for computational neuroscience models. PMID:27726048
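For orientation, here is a plain global-best PSO, the baseline scheme whose social topology the winner-take-all variant modifies. The swarm size, coefficients, and sphere objective are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda X: (X**2).sum(axis=1)          # 6-D sphere function to minimize

n, dim, iters = 30, 6, 300
X = rng.uniform(-5, 5, (n, dim))          # particle positions
V = np.zeros((n, dim))                    # particle velocities
pbest, pval = X.copy(), f(X)              # personal bests
g = pbest[pval.argmin()].copy()           # global best (the "social" attractor)

for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (g - X)  # inertia + cognitive + social
    X = X + V
    val = f(X)
    better = val < pval
    pbest[better], pval[better] = X[better], val[better]
    g = pbest[pval.argmin()].copy()

print(pval.min())
```

In the winner-take-all topology described above, the single global attractor `g` would be replaced by a competitive, hierarchical influence structure; only the social term changes.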
Middelkamp, P.J.C.; Wolfhagen, P.; Steenbergen, B.
2015-01-01
Introduction: The transtheoretical model of behaviour change (TTM) is often used to understand and predict changes in health related behaviour, for example exercise behaviour and eating behaviour. Fitness professionals like personal trainers typically service and support clients in improving
Falahati Marvast, Fatemeh; Arabalibeik, Hossein; Alipour, Fatemeh; Sheikhtaheri, Abbas; Nouri, Leila; Soozande, Mehdi; Yarmahmoodi, Masood
2016-01-01
Keratoconus is a progressive non-inflammatory disease of the cornea. Rigid gas permeable contact lenses (RGPs) are prescribed when the disease progresses. Contact lens fitting and assessment is very difficult in these patients and is a concern of ophthalmologists and optometrists. In this study, a hierarchical fuzzy system is used to capture the expertise of experienced ophthalmologists during the lens evaluation phase of prescription. The system is fine-tuned using genetic algorithms. Sensitivity, specificity and accuracy of the final system are 88.9%, 94.4% and 92.6% respectively.
A CONTRASTIVE ANALYSIS OF THE FACTORIAL STRUCTURE OF THE PCL-R: WHICH MODEL FITS BEST THE DATA?
Directory of Open Access Journals (Sweden)
Beatriz Pérez
2015-01-01
The aim of this study was to determine which of the factorial solutions proposed for the Hare Psychopathy Checklist-Revised (PCL-R), with two, three, or four factors, or unidimensional, fitted the data best. Two trained and experienced independent raters scored 197 prisoners from the Villabona Penitentiary (Asturias, Spain), age range 21 to 73 years (M = 36.0, SD = 9.7), of whom 60.12% were reoffenders and 73% had committed violent crimes. The results revealed that the two-factor correlational, three-factor hierarchical without testlets, four-factor correlational and hierarchical, and unidimensional models were a poor fit for the data (CFI ≤ .86), and the three-factor model with testlets was a reasonable fit (CFI = .93). The scale resulting from the three-factor hierarchical model with testlets (13 items) classified psychopathy significantly higher than the original 20-item scale. The results are discussed in terms of their implications for theoretical models of psychopathy, decision-making, prison classification and intervention, and prevention.
de Villiers, Marelize; Kriticos, Darren J; Veldtman, Ruan
2017-01-01
The European wasp, Vespula germanica (Fabricius) (Hymenoptera: Vespidae), is of Palaearctic origin, being native to Europe, northern Africa and Asia, and introduced into North America, Chile, Argentina, Iceland, Ascension Island, South Africa, Australia and New Zealand. Due to its polyphagous nature and scavenging behaviour, V. germanica threatens agriculture and silviculture, and negatively affects biodiversity, while its aggressive nature and venomous sting pose a health risk to humans. In areas with warmer winters and longer summers, queens and workers can survive the winter months, leading to the build-up of large nests during the following season; thereby increasing the risk posed by this species. To prevent or prepare for such unwanted impacts it is important to know where the wasp may be able to establish, either through natural spread or through introduction as a result of human transport. Distribution data from Argentina and Australia, and seasonal phenology data from Argentina were used to determine the potential distribution of V. germanica using CLIMEX modelling. In contrast to previous models, the influence of irrigation on its distribution was also investigated. Under a natural rainfall scenario, the model showed similarities to previous models. When irrigation is applied, dry stress is alleviated, leading to larger areas modelled climatically suitable compared with previous models, which provided a better fit with the actual distribution of the species. The main areas at risk of invasion by V. germanica include western USA, Mexico, small areas in Central America and in the north-western region of South America, eastern Brazil, western Russia, north-western China, Japan, the Mediterranean coastal regions of North Africa, and parts of southern and eastern Africa.
Directory of Open Access Journals (Sweden)
Mitsuru eMatsumoto
2013-07-01
The discovery of Aire-dependent transcriptional control of many tissue-restricted self-antigen (TRA) genes in medullary thymic epithelial cells (mTECs) has raised the intriguing question of how the single Aire gene can influence the transcription of such a large number of TRA genes within mTECs. From a mechanistic viewpoint, there are two possible models to explain the function of Aire in this action. In the first model, TRAs are considered to be the direct target genes of Aire's transcriptional activity. In this scenario, the lack of Aire protein within cells would result in defective TRA gene expression, while the maturation program of mTECs would in principle be unaffected. The second model hypothesizes that Aire is necessary for the maturation program of mTECs. In this case, we assume that the mTEC compartment does not mature normally in the absence of Aire. If acquisition of the properties of TRA gene expression depends on the maturation status of mTECs, a defect of such an Aire-dependent maturation program in Aire-deficient mTECs can also result in impaired TRA gene expression. In this brief review, we focus on these two contrasting models for the roles of Aire in controlling the expression of TRAs within mTECs.
Matsumoto, Mitsuru; Nishikawa, Yumiko; Nishijima, Hitoshi; Morimoto, Junko; Matsumoto, Minoru; Mouri, Yasuhiro
2013-01-01
The discovery of Aire-dependent transcriptional control of many tissue-restricted self-antigen (TRA) genes in thymic epithelial cells in the medulla (medullary thymic epithelial cells, mTECs) has raised the intriguing question of how the single Aire gene can influence the transcription of such a large number of TRA genes within mTECs. From a mechanistic viewpoint, there are two possible models to explain the function of Aire in this action. In the first model, TRAs are considered to be the direct target genes of Aire's transcriptional activity. In this scenario, the lack of Aire protein within cells would result in the defective TRA gene expression, while the maturation program of mTECs would be unaffected in principle. The second model hypothesizes that Aire is necessary for the maturation program of mTECs. In this case, we assume that the mTEC compartment does not mature normally in the absence of Aire. If acquisition of the properties of TRA gene expression depends on the maturation status of mTECs, a defect of such an Aire-dependent maturation program in Aire-deficient mTECs can also result in impaired TRA gene expression. In this brief review, we will focus on these two contrasting models for the roles of Aire in controlling the expression of TRAs within mTECs.
Nevin, John A; Craig, Andrew R; Cunningham, Paul J; Podlesnik, Christopher A; Shahan, Timothy A; Sweeney, Mary M
2017-08-01
We review quantitative accounts of behavioral momentum theory (BMT), its application to clinical treatment, and its extension to post-intervention relapse of target behavior. We suggest that its extension can account for relapse using reinstatement and renewal models, but that its application to resurgence is flawed both conceptually and in its failure to account for recent data. We propose that the enhanced persistence of target behavior engendered by alternative reinforcers is limited to their concurrent availability within a distinctive stimulus context. However, a failure to find effects of stimulus-correlated reinforcer rates in a Pavlovian-to-Instrumental Transfer (PIT) paradigm challenges even a straightforward Pavlovian account of alternative reinforcer effects. BMT has been valuable in understanding basic research findings and in guiding clinical applications and accounting for their data, but alternatives are needed that can account more effectively for resurgence while encompassing basic data on resistance to change as well as other forms of relapse. Copyright © 2017 Elsevier B.V. All rights reserved.
DEFF Research Database (Denmark)
Madsen, Jonas Stenløkke; Lin, Yu Cheng; Squyres, Georgia R.
2015-01-01
… response to electron acceptor limitation in both biofilm formation regimes, we found variation in the exploitability of its production and necessity for competitive fitness between the two systems. The wild type showed a competitive advantage against a non-Pel-producing mutant in pellicles but no advantage … in colonies. Adaptation to the pellicle environment selected for mutants with a competitive advantage against the wild type in pellicles but also caused a severe disadvantage in colonies, even in wrinkled colony centers. Evolution in the colony center produced divergent phenotypes, while adaptation … to the colony edge produced mutants with clear competitive advantages against the wild type in this O2-replete niche. In general, the structurally heterogeneous colony environment promoted more diversification than the more homogeneous pellicle. These results suggest that the role of Pel in community structure …
3D Product Development for Loose-Fitting Garments Based on Parametric Human Models
Krzywinski, S.; Siegmund, J.
2017-10-01
Researchers and commercial suppliers worldwide pursue the objective of achieving a more transparent garment construction process that is computationally linked to a virtual body, in order to save development costs over the long term. The current aim is not to transfer the complete pattern-making step to a 3D design environment, but to work out basic constructions in 3D that provide excellent fit due to their accurate construction and morphological pattern grading (automatic change of sizes in 3D) with respect to sizes and body types. After a computer-aided derivation of 2D pattern parts, these can be made available to the industry as a basis on which to create more fashionable variations.
Marconi, M.; Molinaro, R.; Ripepi, V.; Cioni, M.-R. L.; Clementini, G.; Moretti, M. I.; Ragosta, F.; de Grijs, R.; Groenewegen, M. A. T.; Ivanov, V. D.
2017-04-01
We present the results of the χ² minimization model-fitting technique applied to optical and near-infrared photometric and radial-velocity data for a sample of nine fundamental and three first-overtone classical Cepheids in the Small Magellanic Cloud (SMC). The near-infrared photometry (J, Ks filters) was obtained by the European Southern Observatory (ESO) public survey 'VISTA near-infrared Y, J, Ks survey of the Magellanic Clouds system' (VMC). For each pulsator, isoperiodic model sequences have been computed by adopting a non-linear convective hydrodynamical code in order to reproduce the multifilter light and (when available) radial velocity curve amplitudes and morphological details. The inferred individual distances provide an intrinsic mean value for the SMC distance modulus of 19.01 mag and a standard deviation of 0.08 mag, in agreement with the literature. Moreover, the intrinsic masses and luminosities of the best-fitting models show that all these pulsators are brighter than the canonical evolutionary mass-luminosity relation (MLR), suggesting a significant efficiency of core overshooting and/or mass loss. Assuming that the inferred deviation from the canonical MLR is due only to mass loss, we derive the expected distribution of percentage mass loss as a function of both the pulsation period and the canonical stellar mass. Finally, a good agreement is found between the predicted mean radii and current period-radius (PR) relations for the SMC available in the literature. The results of this investigation support the predictive capabilities of the adopted theoretical scenario and pave the way for application to other extensive databases at various chemical compositions, including the VMC Large Magellanic Cloud pulsators and Galactic Cepheids with Gaia parallaxes.
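The quoted distance modulus converts to a physical distance via μ = 5·log₁₀(d / 10 pc). A quick check, treating the 0.08 mag scatter as a symmetric uncertainty for the propagation (an assumption, since the paper reports it as a standard deviation over pulsators):

```python
import numpy as np

# mu = 5*log10(d / 10 pc)  =>  d = 10**(mu/5 + 1) pc
mu, sigma_mu = 19.01, 0.08
d_kpc = 10 ** (mu / 5 + 1) / 1e3               # distance in kpc
err_kpc = d_kpc * np.log(10) / 5 * sigma_mu    # first-order error propagation
print(round(d_kpc, 1), round(err_kpc, 1))      # ~63.4 kpc, consistent with SMC distances
```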
A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns
Dao, Ngocanh
2014-04-03
Assessing the goodness-of-fit (GOF) of intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate into the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
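The single-level plug-in procedure that the paper improves on looks like this parametric-bootstrap sketch. An exponential model stands in for a spatial point process model, and re-estimating the parameter inside the loop is one standard correction for the plug-in bias, distinct from the nested Monte Carlo scheme proposed here:

```python
import numpy as np
from scipy.stats import expon, kstest

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=100)          # observed data (single dataset)

lam_hat = x.mean()                                # plug-in parameter estimate
t_obs = kstest(x, expon(scale=lam_hat).cdf).statistic

B = 500
t_sim = np.empty(B)
for b in range(B):
    y = rng.exponential(scale=lam_hat, size=x.size)  # simulate under the fitted model
    # re-estimate on each simulated dataset; skipping this step is exactly
    # the "plug-in" shortcut whose empirical level falls below nominal
    t_sim[b] = kstest(y, expon(scale=y.mean()).cdf).statistic

p = (1 + (t_sim >= t_obs).sum()) / (B + 1)        # Monte Carlo p-value
print(p)
```

The nested version of the paper adds an inner Monte Carlo layer per simulated dataset to debias the level when even re-estimation is not enough for the statistic at hand.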
Directory of Open Access Journals (Sweden)
Xiaohong Chen
2017-05-01
The upper tail of a flood frequency distribution is of particular concern in flood control. However, different model selection criteria often give different optimal distributions when the focus is on the upper tail of the distribution. With emphasis on upper-tail behavior, five distribution selection criteria, including two hypothesis tests and three information-based criteria, are evaluated for selecting the best fitted distribution from eight widely used distributions, using datasets from the Thames River, Wabash River, Beijiang River and Huai River. The performance of the five selection criteria is verified using a composite criterion focused on upper-tail events. This paper demonstrates an approach for optimally selecting suitable flood frequency distributions. Results illustrate that (1) the hypothesis-test and information-based approaches select different frequency distributions in the four rivers. Hypothesis tests are more likely to choose complex parametric models, while information-based criteria prefer simple, effective models. Different selection criteria have no particular tendency toward the tail of the distribution; (2) the information-based criteria perform better than hypothesis tests in most cases when the focus is on the goodness of predictions of extreme upper-tail events. The distributions selected by information-based criteria are more likely to be close to true values than those selected by hypothesis-test methods in the upper tail of the frequency curve; (3) the proposed composite criterion not only selects the optimal distribution, but also evaluates the error of the estimated value, which often plays an important role in risk assessment and engineering design. To decide on a particular distribution to fit the high flows, it is better to use the composite criterion.
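Fitting several candidate distributions and comparing an information criterion with a hypothesis test can be sketched as follows. The GEV-generated sample and the four candidates are illustrative, not the paper's data or its full set of eight distributions and five criteria:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# synthetic annual-maximum flows, standing in for a real flood series
flows = stats.genextreme.rvs(-0.1, loc=1000, scale=300, size=80, random_state=rng)

candidates = {"gev": stats.genextreme, "gumbel": stats.gumbel_r,
              "lognorm": stats.lognorm, "pearson3": stats.pearson3}

results = {}
for name, dist in candidates.items():
    params = dist.fit(flows)                          # maximum-likelihood fit
    ll = dist.logpdf(flows, *params).sum()
    results[name] = {"aic": 2 * len(params) - 2 * ll,         # information criterion
                     "ks_p": stats.kstest(flows, dist(*params).cdf).pvalue}  # hypothesis test

for name, r in results.items():
    print(f"{name:9s} AIC={r['aic']:8.1f} KS p={r['ks_p']:.2f}")
```

Note that the KS p-value computed with fitted parameters is optimistic, which is one reason criteria can disagree; the paper's composite criterion additionally weights the upper-tail fit.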
Growing Fit: Georgia's model for engaging early care environments in preventing childhood obesity.
McDavid, Kelsey; Piedrahita, Catalina; Hashima, Patricia; Vall, Emily Anne; Kay, Christi; O'Connor, Jean
2016-01-01
In the United States, one in three children is overweight or obese by their fifth birthday. In Georgia, 35 percent of children are overweight or obese. Contrary to popular belief, children who are overweight or obese are likely to have the same weight status as adults, making early childhood an essential time to address weight status. An estimated 380,000 Georgia children attend early care and education environments, such as licensed child care centers, Head Start, and pre-kindergarten programs, which provide an opportunity to reach large numbers of children, including those at risk for obesity and overweight. To address this opportunity, the Georgia Department of Public Health, Georgia Shape (the Governor's initiative to prevent childhood obesity), and HealthMPowers, Inc., created the Growing Fit training and toolkit to assist early childhood educators in creating policy, systems, and environmental changes that support good nutrition and physical activity. This report, the first related to this project, describes the training and its dissemination between January and December 2015. A total of 103 early childcare educators from 39 early childcare education centers (22 individual childcare systems) in 19 counties in Georgia were trained. Fifteen systems completed a pre- and post-test assessment of their system, demonstrating slight improvements. Training for an additional 125 early childcare education centers is planned for 2016. Lessons learned from the first year of the training include the need for more robust assessment of the adoption and implementation of policy, systems, and environmental changes in trained centers.
International Nuclear Information System (INIS)
Gao, Xiankun; Cui, Yan; Hu, Jianjun; Xu, Guangyin; Yu, Yongchang
2016-01-01
Highlights: • A Lambert W-function based exact representation (LBER) is presented for the double diode model (DDM). • The fitness difference between LBER and DDM is verified using reported parameter values. • The proposed LBER can better represent the I–V and P–V characteristics of solar cells. • The parameter extraction difference between LBER and DDM is validated with two algorithms. • The parameter values extracted from LBER are more accurate than those from DDM. - Abstract: Accurate modeling and parameter extraction of solar cells play an important role in the simulation and optimization of PV systems. This paper presents a Lambert W-function based exact representation (LBER) for the traditional double diode model (DDM) of solar cells, and then compares their fitness and parameter extraction performance. Unlike existing works, the proposed LBER is rigorously derived from the DDM, and in LBER the coefficients of the Lambert W-function are not extra parameters to be extracted or arbitrary scalars, but the vectors of terminal voltage and current of solar cells. The fitness difference between LBER and DDM is objectively validated using the reported parameter values and experimental I–V data of a solar cell and four solar modules from different technologies. The comparison results indicate that, under the same parameter values, the proposed LBER can better represent the I–V and P–V characteristics of solar cells and provides a closer representation of the actual maximum power points of all module types. Two different algorithms are used to compare the parameter extraction performance of LBER and DDM: our restart-based bound-constrained Nelder-Mead (rbcNM) algorithm implemented in Matlab, and the reported Rcr-IJADE algorithm executed in Visual Studio. The comparison results reveal that the parameter values extracted from LBER using the two algorithms are always more accurate and robust than those from DDM, despite being more time consuming. As an improved version of DDM, the
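For the simpler single-diode model, the exact Lambert W representation can be written in closed form, which conveys the idea behind the LBER (the paper extends it to two diodes). The parameter values below are illustrative assumptions:

```python
import numpy as np
from scipy.special import lambertw

# Illustrative single-diode parameters: photocurrent, saturation current,
# ideality factor, series and shunt resistance, thermal voltage at 300 K
Iph, I0, n, Rs, Rsh, Vt = 5.0, 1e-9, 1.3, 0.05, 100.0, 0.02585

def current(V):
    """Explicit I(V) solving I = Iph - I0*(exp((V+I*Rs)/(n*Vt)) - 1) - (V+I*Rs)/Rsh."""
    a = n * Vt
    c = a * (Rs + Rsh) / (Rs * Rsh)
    D = Iph + I0 + V / Rs
    # the implicit equation reduces to  c*x + I0*exp(x) = D  with x = (V + I*Rs)/a,
    # which Lambert W solves exactly:
    y = c * lambertw((I0 / c) * np.exp(D / c)).real
    x = (D - y) / c
    return (a * x - V) / Rs

V = np.linspace(0, 0.7, 8)
I = current(V)
print(np.round(I, 3))
```

Because the representation is exact, no inner Newton iteration is needed during parameter extraction, which is what makes curve fitting against measured I-V data straightforward.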
Lepping, R. P.; Wu, C.-C.; Berdichevsky, D. B.; Szabo, A.
2018-04-01
We give the results of parameter fitting of the magnetic clouds (MCs) observed by the Wind spacecraft for the three-year period 2013 to the end of 2015 (called the "Present" period) using the MC model of Lepping, Jones, and Burlaga (J. Geophys. Res. 95, 11957, 1990). The Present period is almost coincident with the solar maximum of the sunspot number, which has a broad peak starting in about 2012 and extending to almost 2015. There were 49 MCs identified in the Present period. The modeling gives MC quantities such as size, axial attitude, field handedness, axial magnetic-field strength, center time, and closest-approach vector. Derived quantities are also estimated, such as axial magnetic flux, axial current density, and total axial current. Quality estimates are assigned representing excellent, fair/good, and poor. We provide error estimates on the specific fit parameters for the individual MCs, where the poor cases are excluded. Model-fitting results that are based on the Present period are compared to the results of the full Wind mission from 1995 to the end of 2015 (Long-term period), and compared to the results of two other recent studies that encompassed the periods 2007 - 2009 and 2010 - 2012, inclusive. We see that during the Present period, the MCs are, on average, slightly slower, slightly weaker in axial magnetic field (by 8.7%), and larger in diameter (by 6.5%) than those in the Long-term period. In most respects, however, the MCs in the Present period are significantly closer in characteristics to those of the Long-term period than to those of the two recent three-year periods. However, the rate of occurrence of MCs for the Long-term period is 10.3 year^{-1}, whereas this rate for the Present period is 16.3 year^{-1}, similar to that of the period 2010 - 2012. Hence, the MC occurrence rate has increased appreciably in the last six years. MC Type (N-S, S-N, All N, All S, etc.) is assigned to each MC; there is an inordinately large percentage of All S…
Global fits of the two-loop renormalized Two-Higgs-Doublet model with soft Z2 breaking
Chowdhury, Debtosh; Eberhardt, Otto
2015-11-01
We determine the next-to-leading order renormalization group equations for the Two-Higgs-Doublet model with a softly broken Z2 symmetry and CP conservation in the scalar potential. We use them to identify the parameter regions which are stable up to the Planck scale and find that in this case the quartic couplings of the Higgs potential cannot be larger than 1 in magnitude and that the absolute values of the S-matrix eigenvalues cannot exceed 2.5 at the electroweak symmetry breaking scale. Interpreting the 125 GeV resonance as the light CP-even Higgs eigenstate, we combine stability constraints, electroweak precision and flavour observables with the latest ATLAS and CMS data on Higgs signal strengths and heavy Higgs searches in global parameter fits to all four types of Z2 symmetry. We quantify the maximal deviations from the alignment limit and find that in types II and Y the mass of the heavy CP-even (CP-odd) scalar cannot be smaller than 340 GeV (360 GeV). Also, we pinpoint the physical parameter regions compatible with a stable scalar potential up to the Planck scale. Motivated by the question how natural a Higgs mass of 125 GeV can be in the context of a Two-Higgs-Doublet model, we also address the hierarchy problem and find that the Two-Higgs-Doublet model does not offer a perturbative solution to it beyond 5 TeV.
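The "stable up to the Planck scale" check amounts to integrating renormalization group equations from the electroweak scale upward and rejecting couplings that blow up along the way. A toy stand-in (a single quartic coupling with its one-loop pure-scalar beta function, not the coupled two-loop 2HDM system of the paper) shows the mechanics:

```python
import numpy as np
from scipy.integrate import solve_ivp

M_Z, M_PLANCK = 91.19, 1.22e19      # GeV

def run_quartic(lam0):
    """Run a single quartic coupling from M_Z to the Planck scale with the
    one-loop pure-scalar beta function beta(lambda) = 3*lambda^2/(16*pi^2)."""
    beta = lambda t, lam: 3.0 * lam**2 / (16.0 * np.pi**2)
    sol = solve_ivp(beta, (0.0, np.log(M_PLANCK / M_Z)), [lam0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Analytic solution of the same ODE, for comparison:
t_max = np.log(M_PLANCK / M_Z)
exact = lambda lam0: lam0 / (1.0 - 3.0 * lam0 * t_max / (16.0 * np.pi**2))
```

In this toy, couplings above roughly 16*pi^2/(3*t_max) ≈ 1.3 at M_Z hit a Landau pole below the Planck scale; the paper's order-one bound on the 2HDM quartics arises from the same kind of requirement applied to the full coupled system.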
Directory of Open Access Journals (Sweden)
Erida Gjini
2016-03-01
The efficacy of vaccines is typically estimated prior to implementation, on the basis of randomized controlled trials. This does not preclude, however, subsequent assessment post-licensure, while mass-immunization and nonlinear transmission feedbacks are in place. In this paper we show how cross-sectional prevalence data post-vaccination can be interpreted in terms of pathogen transmission processes and vaccine parameters, using a dynamic epidemiological model. We advocate the use of such frameworks for model-based vaccine evaluation in the field, fitting trajectories of cross-sectional prevalence of pathogen strains before and after intervention. Using SI and SIS models, we illustrate how prevalence ratios in vaccinated and non-vaccinated hosts depend on true vaccine efficacy, the absolute and relative strength of competition between target and non-target strains, the time post follow-up, and transmission intensity. We argue that a mechanistic approach should be added to vaccine efficacy estimation against multi-type pathogens, because it naturally accounts for inter-strain competition and indirect effects, leading to a robust measure of individual protection per contact. Our study calls for systematic attention to epidemiological feedbacks when interpreting population level impact. At a broader level, our parameter estimation procedure provides a promising proof of principle for a generalizable framework to infer vaccine efficacy post-licensure.
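The core point, that prevalence ratios in vaccinated versus unvaccinated hosts do not directly equal vaccine efficacy once transmission feedbacks operate, can be illustrated with a minimal one-strain SIS model (illustrative parameter values; the paper's framework additionally covers SI dynamics and strain competition):

```python
import numpy as np
from scipy.integrate import solve_ivp

def prevalence_ratio(beta=2.0, gamma=1.0, p=0.5, efficacy=0.6, t_end=300.0):
    """Endemic prevalence in vaccinated vs. unvaccinated hosts for a leaky
    vaccine (susceptibility reduced by `efficacy`) in a simple SIS model.
    p is the vaccinated fraction of the population."""
    def rhs(t, y):
        iu, iv = y                       # infected fractions of the population
        lam = beta * (iu + iv)           # force of infection
        diu = lam * ((1 - p) - iu) - gamma * iu
        div = (1 - efficacy) * lam * (p - iv) - gamma * iv
        return [diu, div]
    iu, iv = solve_ivp(rhs, (0, t_end), [1e-3, 1e-3], rtol=1e-9).y[:, -1]
    return (iv / p) / (iu / (1 - p))     # prevalence ratio, vaccinated:unvaccinated

ratio = prevalence_ratio()
```

With these values the endemic force of infection settles at 0.5, giving a prevalence ratio of 0.5, so the naive estimate 1 − ratio = 0.5 understates the true per-contact efficacy of 0.6; a mechanistic fit recovers the difference.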
A global fit of the γ-ray galactic center excess within the scalar singlet Higgs portal model
Energy Technology Data Exchange (ETDEWEB)
Cuoco, Alessandro; Eiteneuer, Benedikt; Heisig, Jan; Krämer, Michael [Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen University,Sommerfeldstr. 16, 52056 Aachen (Germany)
2016-06-28
We analyse the excess in the γ-ray emission from the center of our galaxy observed by Fermi-LAT in terms of dark matter annihilation within the scalar Higgs portal model. In particular, we include the astrophysical uncertainties from the dark matter distribution and allow for unspecified additional dark matter components. We demonstrate through a detailed numerical fit that the strength and shape of the γ-ray spectrum can indeed be described by the model in various regions of dark matter masses and couplings. Constraints from invisible Higgs decays, direct dark matter searches, indirect searches in dwarf galaxies and for γ-ray lines, and constraints from the dark matter relic density reduce the parameter space to dark matter masses near the Higgs resonance. We find two viable regions: one where the Higgs-dark matter coupling is of O(10^−2), and an additional dark matter component beyond the scalar WIMP of our model is preferred, and one region where the Higgs-dark matter coupling may be significantly smaller, but where the scalar WIMP constitutes a significant fraction or even all of dark matter. Both viable regions are hard to probe in future direct detection and collider experiments.
A global fit of the γ-ray galactic center excess within the scalar singlet Higgs portal model
International Nuclear Information System (INIS)
Cuoco, Alessandro; Eiteneuer, Benedikt; Heisig, Jan; Krämer, Michael
2016-01-01
We analyse the excess in the γ-ray emission from the center of our galaxy observed by Fermi-LAT in terms of dark matter annihilation within the scalar Higgs portal model. In particular, we include the astrophysical uncertainties from the dark matter distribution and allow for unspecified additional dark matter components. We demonstrate through a detailed numerical fit that the strength and shape of the γ-ray spectrum can indeed be described by the model in various regions of dark matter masses and couplings. Constraints from invisible Higgs decays, direct dark matter searches, indirect searches in dwarf galaxies and for γ-ray lines, and constraints from the dark matter relic density reduce the parameter space to dark matter masses near the Higgs resonance. We find two viable regions: one where the Higgs-dark matter coupling is of O(10^−2), and an additional dark matter component beyond the scalar WIMP of our model is preferred, and one region where the Higgs-dark matter coupling may be significantly smaller, but where the scalar WIMP constitutes a significant fraction or even all of dark matter. Both viable regions are hard to probe in future direct detection and collider experiments.
Harris, Judi
2008-01-01
Educational technology-related professional development (ETPD) can be designed in many different ways. It varies by general purposes and goals, specific learning objectives, curriculum content, the student grade levels for which the strategies and tools presented are appropriate, professional development model(s) used, how it is matched to…
Directory of Open Access Journals (Sweden)
James M McCaw
2011-04-01
We present a method to measure the relative transmissibility ("transmission fitness") of one strain of a pathogen compared to another. The model is applied to data from "competitive mixtures" experiments in which animals are co-infected with a mixture of two strains. We observe the mixture in each animal over time and over multiple generations of transmission. We use data from influenza experiments in ferrets to demonstrate the approach. Assessment of the relative transmissibility between two strains of influenza is important in at least three contexts: (1) Within the human population, antigenically novel strains of influenza arise and compete for susceptible hosts. (2) During a pandemic event, a novel sub-type of influenza competes with the existing seasonal strain(s). The unfolding epidemiological dynamics are dependent upon both the population's susceptibility profile and the inherent transmissibility of the novel strain compared to the existing strain(s). (3) Neuraminidase inhibitors (NAIs), while providing significant potential to reduce transmission of influenza, exert selective pressure on the virus and so promote the emergence of drug-resistant strains. Any adverse outcome due to selection and subsequent spread of an NAI-resistant strain is exquisitely dependent upon the transmission fitness of that strain. Measurement of the transmission fitness of two competing strains of influenza is thus of critical importance in determining the likely time-course and epidemiology of an influenza outbreak, or the potential impact of an intervention measure such as NAI distribution. The mathematical framework introduced here also provides an estimate for the size of the transmitted inoculum. We demonstrate the framework's behaviour using data from ferret transmission studies, and through simulation suggest how to optimise experimental design for assessment of transmissibility. The method introduced here for assessment of mixed transmission events has…
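The simplest deterministic caricature of competitive-mixture data is that the log-odds of one strain's proportion changes linearly across generations, with slope equal to the log of the relative fitness. A sketch of estimating fitness that way (the paper fits a full stochastic transmission model, so this is only the underlying intuition):

```python
import numpy as np

def relative_fitness(proportions):
    """Estimate the per-generation fitness w of strain A relative to strain B
    from the proportion of strain A observed across successive generations.
    Under simple competitive growth, logit(proportion) is linear in generation
    number with slope log(w)."""
    g = np.arange(len(proportions))
    p = np.asarray(proportions, float)
    logodds = np.log(p / (1.0 - p))
    slope = np.polyfit(g, logodds, 1)[0]
    return np.exp(slope)

# Synthetic check: start at 30% strain A with a true fitness advantage w = 1.5
w_true, p0 = 1.5, 0.3
odds = (p0 / (1 - p0)) * w_true ** np.arange(5)
props = odds / (1 + odds)
w_est = relative_fitness(props)
```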
International Nuclear Information System (INIS)
Brenna, M; Colò, G; Roca-Maza, X; Bortignon, P F; Moghrabi, K; Grasso, M
2014-01-01
Self-consistent mean-field models are able to reproduce well the overall properties of nuclei for a wide range of masses. Nevertheless, they are intrinsically unsuitable for the description of some important observables like the single-particle strength distribution or, in connection with collective states, their damping width and their gamma decay to the ground state or to low-lying states. For this reason, a completely microscopic approach beyond mean-field has been implemented recently, based on the Skyrme functional. When beyond mean-field theories are handled, the mean-field-fitted effective interaction should be refitted at the desired level of approximation. If zero-range interactions are used, divergences arise. We present some steps towards the refitting of Skyrme interactions for their application in finite nuclei.
Beldad, Ardion Daroca; Hegner, Sabrina
2017-01-01
According to market research, fitness and running apps are hugely popular in Germany. Such a trend prompts the question concerning the factors influencing German users’ intention to continue using a specific fitness app. To address the research question, the expanded Technology Acceptance Model…
Veinot, Tiffany C; Senteio, Charles R; Hanauer, David; Lowery, Julie C
2017-09-02
To describe a new, comprehensive process model of clinical information interaction in primary care (Clinical Information Interaction Model, or CIIM) based on a systematic synthesis of published research. We used the "best fit" framework synthesis approach. Searches were performed in PubMed, Embase, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), PsycINFO, Library and Information Science Abstracts, Library, Information Science and Technology Abstracts, and Engineering Village. Two authors reviewed articles according to inclusion and exclusion criteria. Data abstraction and content analysis of 443 published papers were used to create a model in which every element was supported by empirical research. The CIIM documents how primary care clinicians interact with information as they make point-of-care clinical decisions. The model highlights 3 major process components: (1) context, (2) activity (usual and contingent), and (3) influence. Usual activities include information processing, source-user interaction, information evaluation, selection of information, information use, clinical reasoning, and clinical decisions. Clinician characteristics, patient behaviors, and other professionals influence the process. The CIIM depicts the complete process of information interaction, enabling a grasp of relationships previously difficult to discern. The CIIM suggests potentially helpful functionality for clinical decision support systems (CDSSs) to support primary care, including a greater focus on information processing and use. The CIIM also documents the role of influence in clinical information interaction; influencers may affect the success of CDSS implementations. The CIIM offers a new framework for achieving CDSS workflow integration and new directions for CDSS design that can support the work of diverse primary care clinicians. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved.
Kang, Yun; Castillo-Chavez, Carlos
2014-01-01
The study of the dynamics of human infectious disease using deterministic models is typically carried out under the assumption that a critical mass of individuals is available and involved in the transmission process. However, in the study of animal disease dynamics, where demographic considerations often play a significant role, this assumption must be weakened. Models of the dynamics of animal populations often naturally assume that the presence of a minimal number of individuals is essential to avoid extinction. In the ecological literature, this a priori requirement is commonly incorporated as an Allee effect. The focus here is on the study of disease dynamics under the assumption that a critical mass of susceptible individuals is required to guarantee the population's survival. Specifically, the emphasis is on the role of an Allee effect in a Susceptible-Infectious (SI) model in which both susceptible and infected individuals may reproduce, with the S-class the better fit. It is further assumed that infected individuals lose some of their ability to compete for resources, the cost imposed by the disease. These features are set in motion in as simple a model as possible. They turn out to lead to a rich set of dynamical outcomes. This toy model supports the possibility of multi-stability (hysteresis), saddle-node and Hopf bifurcations, and catastrophic events (disease-induced extinction). The analyses provide a full picture of the system under disease-free dynamics, including disease-induced extinction, and proceed to identify required conditions for disease persistence. We conclude that increases in (i) the maximum birth rate of a species, or (ii) in the relative reproductive ability of infected individuals, or (iii) in the competitive ability of infected individuals at low density levels, or (iv) in the per-capita death rate (including disease-induced) of infected individuals, can stabilize the system (resulting in disease persistence). We…
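The signature of an Allee effect is bistability: populations starting below a threshold density go extinct while those above it persist. A minimal SI caricature with an Allee effect in net population growth makes this concrete (illustrative parameter values and functional forms, not the authors' exact system):

```python
from scipy.integrate import solve_ivp

def allee_si(t, y, r=1.0, A=10.0, K=100.0, beta=0.05, mu=0.2, alpha=0.8):
    """Minimal SI model with Allee-type net recruitment: growth is negative
    below the Allee threshold A and logistic-like between A and the carrying
    capacity K.  beta is the transmission rate, mu + alpha the removal rate
    of infected individuals (alpha = disease-induced mortality)."""
    S, I = y
    N = S + I
    growth = r * N * (N / A - 1.0) * (1.0 - N / K)   # Allee-type net recruitment
    dS = growth - beta * S * I
    dI = beta * S * I - (mu + alpha) * I
    return [dS, dI]

# Bistability of the disease-free dynamics: below the Allee threshold the
# population collapses, above it the population reaches carrying capacity.
low = solve_ivp(allee_si, (0, 100), [5.0, 0.0], rtol=1e-8).y[0, -1]
high = solve_ivp(allee_si, (0, 100), [20.0, 0.0], rtol=1e-8).y[0, -1]
```

Adding infection (I > 0) to the high branch lets the disease drag N toward the threshold, which is the route to the disease-induced extinction the abstract describes.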
Stiglbauer, Barbara; Kovacs, Carrie
2017-12-28
In organizational psychology research, autonomy is generally seen as a job resource with a monotone positive relationship with desired occupational outcomes such as well-being. However, both Warr's vitamin model and person-environment (PE) fit theory suggest that negative outcomes may result from excesses of some job resources, including autonomy. Thus, the current studies used survey methodology to explore cross-sectional relationships between environmental autonomy, person-environment autonomy (mis)fit, and well-being. We found that autonomy and autonomy (mis)fit explained between 6% and 22% of variance in well-being, depending on type of autonomy (scheduling, method, or decision-making) and type of (mis)fit operationalization (atomistic operationalization through the separate assessment of actual and ideal autonomy levels vs. molecular operationalization through the direct assessment of perceived autonomy (mis)fit). Autonomy (mis)fit (PE-fit perspective) explained more unique variance in well-being than environmental autonomy itself (vitamin model perspective). Detrimental effects of autonomy excess on well-being were most evident for method autonomy and least consistent for decision-making autonomy. We argue that too-much-of-a-good-thing effects of job autonomy on well-being exist, but suggest that these may be dependent upon sample characteristics (range of autonomy levels), type of operationalization (molecular vs. atomistic fit), autonomy facet (method, scheduling, or decision-making), as well as individual and organizational moderators. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Directory of Open Access Journals (Sweden)
Shidrokh Goudarzi
2015-01-01
The vertical handover mechanism is an essential issue in heterogeneous wireless environments, where selection of an efficient network that provides seamless connectivity involves complex scenarios. This study uses two modules that utilize the particle swarm optimization (PSO) algorithm to predict and make an intelligent vertical handover decision. In this paper, we predict the received signal strength indicator parameter using curve-fitting based particle swarm optimization (CF-PSO) and RBF neural networks. The results of the proposed methodology compare the predictive capabilities in terms of the coefficient of determination (R2) and mean square error (MSE) based on the validation dataset. The results show that the model based on CF-PSO is better than the model based on the RBF neural network in predicting the received signal strength indicator situation. In addition, we present a novel network selection algorithm to select the best candidate access point among the various access technologies based on PSO. Simulation results indicate that using the CF-PSO algorithm can decrease the number of unnecessary handovers and prevent the “Ping-Pong” effect. Moreover, it is demonstrated that the multiobjective particle swarm optimization based method finds an optimal network selection in a heterogeneous wireless environment.
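Curve fitting by PSO means treating the sum of squared residuals of a parametric model as the swarm's objective function. A generic sketch of that idea (a minimal PSO, not the authors' CF-PSO scheme; the exponential test model and all tuning constants are assumptions):

```python
import numpy as np

def pso_fit(x, y, model, bounds, n_particles=40, iters=600, seed=0):
    """Minimal particle swarm optimizer for least-squares curve fitting."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    pos = rng.uniform(lo, hi, (n_particles, lo.size))
    vel = np.zeros_like(pos)
    sse = lambda p: np.sum((model(x, p) - y) ** 2)   # objective: squared error
    pbest = pos.copy()
    pbest_f = np.array([sse(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, acceleration
    for _ in range(iters):
        r1, r2 = rng.random((2, *pos.shape))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([sse(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest

# Recover the parameters of y = a * exp(b * x) from clean synthetic data
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.3 * x)
a, b = pso_fit(x, y, lambda x, p: p[0] * np.exp(p[1] * x), [(0, 5), (0, 3)])
```

The same swarm machinery, with a multiobjective cost over candidate networks, underlies the selection step described in the abstract.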
International Nuclear Information System (INIS)
Taylor, J.M.G.
1987-01-01
In radiobiology, functional endpoints are frequently used as an indirect measure of cellular survival. For some of these assays the endpoint is an ordered categorical variable, e.g. breathing rate or degree of epilation. In fitting the linear-quadratic (L.Q.) model, unless the detailed cellular structure causing the endpoint is well understood, it is only possible to estimate α/β but not α and β separately. The statistical method used is maximum likelihood on an extension of logistic regression to multiple categories. The model assumes that log[Q_j(n,x)/(1 − Q_j(n,x))] = ν_j − Anx − Bnx², for n fractions and x = dose/fraction, where Q_j(n,x) = P(response ≥ j), j = 1, …, k, for k categories. The method makes efficient use of all the observations and gives estimates and confidence intervals for α/β. It is available using PROC LOGIST in the SAS package. The technique can accommodate up to about 8 levels of response. The estimate of α/β is unbiased even if the data arise from sigmoid probability response curves other than the logistic. Using all the levels of response is more efficient than an analysis based on reducing the response to binary. The technique is applied to some multifraction data for epilation and depigmentation of mouse hair.
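The cumulative-logit model above yields category probabilities by differencing the cumulative probabilities Q_j, and the dose-response slopes give α/β = A/B. A small sketch with made-up coefficients (not the paper's estimates):

```python
import numpy as np

def category_probs(n, x, nu, A, B):
    """Category probabilities implied by the cumulative-logit model
    logit Q_j(n,x) = nu_j - A*n*x - B*n*x**2, with Q_j = P(response >= j).
    nu must be strictly decreasing so the cumulative probabilities are ordered."""
    nu = np.asarray(nu, float)
    Q = 1.0 / (1.0 + np.exp(-(nu - A * n * x - B * n * x**2)))  # P(resp >= j), j=1..k
    Q = np.concatenate(([1.0], Q, [0.0]))                       # P(resp >= 0) = 1
    return -np.diff(Q)                                          # P(resp = j), j=0..k

A, B = 0.05, 0.01     # illustrative per-fraction coefficients; alpha/beta = A/B = 5
probs = category_probs(n=4, x=2.0, nu=[2.0, 0.0, -2.0], A=A, B=B)
```

Because the same A and B appear at every threshold ν_j (proportional odds), every response level contributes information about the single ratio A/B, which is why the multi-category fit is more efficient than collapsing to a binary endpoint.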
Whiteman-Sandland, Jessica; Hawkins, Jemma; Clayton, Debbie
2016-08-01
This is the first study to measure the 'sense of community' reportedly offered by the CrossFit gym model. A cross-sectional study adapted Social Capital and General Belongingness scales to compare perceptions of a CrossFit gym and a traditional gym. CrossFit gym members reported significantly higher levels of social capital (both bridging and bonding) and community belongingness compared with traditional gym members. However, regression analysis showed neither social capital, community belongingness, nor gym type was an independent predictor of gym attendance. Exercise and health professionals may benefit from evaluating further the 'sense of community' offered by gym-based exercise programmes.
Directory of Open Access Journals (Sweden)
Nele Goeyvaerts
2015-12-01
Dynamic transmission models are essential to design and evaluate control strategies for airborne infections. Our objective was to develop a dynamic transmission model for seasonal influenza allowing us to evaluate the impact of vaccinating specific age groups on the incidence of infection, disease and mortality. Projections based on such models heavily rely on assumed ‘input’ parameter values. In previous seasonal influenza models, these parameter values were commonly chosen ad hoc, ignoring between-season variability and without formal model validation or sensitivity analyses. We propose to directly estimate the parameters by fitting the model to age-specific influenza-like illness (ILI) incidence data over multiple influenza seasons. We used a weighted least squares (WLS) criterion to assess model fit and applied our method to Belgian ILI data over six influenza seasons. After exploring parameter importance using symbolic regression, we evaluated a set of candidate models of differing complexity according to the number of season-specific parameters. The transmission parameters (average R0, seasonal amplitude and timing of the seasonal peak), waning rates and the scale factor used for WLS optimization influenced the fit to the observed ILI incidence the most. Our results demonstrate the importance of between-season variability in influenza transmission and our estimates are in line with the classification of influenza seasons according to intensity and vaccine matching.
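Weighted least squares fitting of an incidence curve simply scales each residual by the square root of its weight before minimizing. A generic sketch (fitting a seasonal sinusoid to synthetic weekly counts; the paper fits a full dynamic transmission model, and the curve, weights and starting values here are assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_seasonal(weeks, ili, weights, p0=(80.0, 0.5, 5.0)):
    """Weighted least squares fit of a seasonal incidence curve
    y(t) = base * (1 + amp * cos(2*pi*(t - peak)/52))."""
    def resid(p):
        base, amp, peak = p
        model = base * (1.0 + amp * np.cos(2.0 * np.pi * (weeks - peak) / 52.0))
        return np.sqrt(weights) * (model - ili)     # sqrt-weights give WLS
    return least_squares(resid, p0,
                         bounds=([1.0, 0.0, 0.0], [1e4, 1.0, 52.0])).x

# Exact recovery from clean synthetic data:
weeks = np.arange(52.0)
ili = 100.0 * (1.0 + 0.8 * np.cos(2.0 * np.pi * (weeks - 10.0) / 52.0))
base, amp, peak = fit_seasonal(weeks, ili, weights=1.0 / ili)
```

With count-like data, weights inversely proportional to the expected incidence keep high-incidence weeks from dominating the objective, which is the role of the WLS scale factor discussed in the abstract.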
Directory of Open Access Journals (Sweden)
Eric Boyd
2012-06-01
The extent to which geochemical variation constrains the distribution of phototrophic metabolisms was modeled based on 439 observations in geothermal springs in Yellowstone National Park (YNP, Wyoming. Generalized additive models (GAMs were developed to predict the distribution of photosynthesis as a function of spring temperature, pH, and total sulfide. GAMs comprised of temperature explained 42.7% of the variation in the distribution of phototrophic metabolisms whereas GAMs comprised of sulfide and pH explained 20.7% and 11.7% of the variation, respectively. These results suggest that of the measured variables, temperature is the primary constraint on the distribution of phototrophic metabolism in YNP. GAMs comprised of multiple variables explained a larger percentage of the variation in the distribution of phototrophic metabolism, indicating additive interactions among variables. A GAM that combined temperature and sulfide explained the greatest variation in the dataset (54.8% while minimizing the introduction of degrees of freedom. In an effort to verify the extent to which phototroph distribution reflects constraints on activity, we examined the influence of sulfide and temperature on dissolved inorganic carbon (DIC uptake rates under both light and dark conditions. Light-driven DIC uptake decreased systematically with increasing concentrations of sulfide in acidic, algal-dominated systems, but was unaffected in alkaline, bacterial-dominated systems. In both alkaline and acidic systems, light-driven DIC uptake was suppressed in cultures incubated at temperatures 10°C greater than their in situ temperature. Collectively, these results suggest that the habitat range of phototrophs in YNP springs, specifically that of cyanobacteria and algae, largely results from constraints imposed by temperature and sulfide on the activity and fitness of these populations, a finding that is consistent with the predictions from GAMs.
Lewandowski, Damian; Dubińska-Magiera, Magda; Posyniak, Ewelina; Rupik, Weronika; Daczewska, Małgorzata
2017-07-01
In the grass snake (Natrix natrix), the newly developed somites form vesicles that are located on both sides of the neural tube. The walls of the vesicles are composed of tightly connected epithelial cells surrounding the cavity (the somitocoel). Also, in the newly formed somites, the Pax3 protein can be observed in the somite wall cells. Subsequently, the somite splits into three compartments: the sclerotome, dermomyotome (with the dorsomedial [DM] and the ventrolateral [VL] lips) and the myotome. At this stage, the Pax3 protein is detected in both the DM and VL lips of the dermomyotome and in the mononucleated cells of the myotome, whereas the Pax7 protein is observed in the medial part of the dermomyotome and in some of the mononucleated cells of the myotome. The mononucleated cells then become elongated and form myotubes. As myogenesis proceeds, the myotome is filled with multinucleated myotubes accompanied by mononucleated, Pax7-positive cells (satellite cells) that are involved in muscle growth. The Pax3-positive progenitor muscle cells are no longer observed. Moreover, we have observed unique features in the differentiation of the muscles in these snakes. Specifically, our studies have revealed the presence of two classes of muscles in the myotomes. The first class is characterised by fast muscle fibres, with myofibrils equally distributed throughout the sarcoplasm. In the second class, composed of slow muscle fibres, the sarcoplasm is filled with lipid droplets. We assume that their storage could play a crucial role during hibernation in the adult snakes. We suggest that the model of myotomal myogenesis in reptiles, birds and mammals shows the same morphological and molecular character. We therefore believe that the grass snake, in spite of the unique features of its myogenesis, fits into the amniotes-specific model of trunk muscle development.
Directory of Open Access Journals (Sweden)
Wakhid Slamet Ciptono
2011-02-01
The purpose of this study is to conduct an empirical analysis of the structural relations among critical factors of quality management practices (QMPs), world-class company practice (WCC), operational excellence practice (OE), and company performance (company non-financial performance, or CNFP, and company financial performance, or CFP) in the oil and gas companies operating in Indonesia. The current study additionally examines the relationships between QMPs and CFP through WCC, OE, and CNFP (as partial mediators) simultaneously. The study uses data from a survey of 140 strategic business units (SBUs) within 49 oil and gas contractor companies in Indonesia. The findings suggest that all six QMPs have positive and significant indirect relationships on CFP through WCC and CNFP. Only four of six QMPs have positive and significant indirect relationships on CFP through OE and CNFP. Hence, WCC, OE, and CNFP act as partial mediators between QMPs and CFP. CNFP has a significant influence on CFP. A major implication of this study is that oil and gas managers need to recognize the structural relations model fit by developing all of the research constructs simultaneously associated with a comprehensive TQM practice. Furthermore, the findings will assist oil and gas companies by improving CNFP, which is very critical to TQM, thereby contributing to a better achievement of CFP. The current study uses Deming’s principles, Hayes and Wheelwright's dimensions of world-class company practice, Chevron Texaco’s operational excellence practice, and the dimensions of company financial and non-financial performances. The paper also provides an insight into the sustainability of the TQM implementation model and its effect on company financial performance in oil and gas companies in Indonesia.
Swider, P; Guérin, G; Baas, Joergen; Søballe, Kjeld; Bechtold, Joan E
2009-08-07
Orthopaedic implant fixation is strongly dependent upon the effective mechanical properties of newly formed tissue. In this study, we evaluated the potential of modal analysis to derive viscoelastic properties of periprosthetic tissue. We hypothesized that Young's modulus and loss factor could be obtained by a combined theoretical, computational and experimental modal analysis approach. This procedure was applied to ex vivo specimens from a cylindrical experimental implant placed in cancellous bone in an unloaded press-fit configuration, obtained after a four-week observation period. Four sections each from seven textured titanium implants were investigated. The first resonant frequency and loss factor were measured. The average experimentally determined loss factor was 2% (SD: 0.4%) and the average first resonant frequency was 2.1 kHz (SD: 50 Hz). A 2D axisymmetric finite element (FE) model identified the effective Young's modulus of tissue using the experimental resonant frequencies as input. The average value was 42 MPa (SD: 2.4 MPa) and no significant difference between specimens was observed. In this pilot study, the non-destructive method allowed accurate measurement of the dynamic loss factor and resonant frequency and derivation of the effective Young's modulus. Prior to implementing this dynamic protocol for broader mechanical evaluation of experimental implant fixation, further work is needed to determine if this affects results from subsequent destructive shear push-out tests.
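The inverse step, turning a measured resonant frequency into an effective modulus, can be seen in closed form for an idealized geometry. A 1-D stand-in for the paper's 2D axisymmetric FE identification (the length and density below are assumptions, not the study's data):

```python
def youngs_modulus_from_resonance(f1, length, density):
    """Effective Young's modulus from the first longitudinal resonance of a
    fixed-free rod: f1 = (1/(4L)) * sqrt(E/rho), so E = rho * (4*L*f1)**2.
    A closed-form 1-D idealization, not the paper's FE-based identification."""
    return density * (4.0 * length * f1) ** 2

# Order-of-magnitude check against the reported values (f1 ~ 2.1 kHz, E ~ 42 MPa),
# assuming a ~23 mm specimen and soft-tissue density ~1100 kg/m^3:
E = youngs_modulus_from_resonance(f1=2100.0, length=0.023, density=1100.0)
```

Even this crude idealization lands in the tens of MPa for kHz-range resonances at these dimensions, which is why resonant frequency is such a sensitive probe of periprosthetic tissue stiffness.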
International Nuclear Information System (INIS)
Hameeteman, K; Niessen, W J; Klein, S; Van 't Klooster, R; Selwaness, M; Van der Lugt, A; Witteman, J C M
2013-01-01
We present a method for carotid vessel wall volume quantification from magnetic resonance imaging (MRI). The method combines lumen and outer wall segmentation based on deformable model fitting with a learning-based segmentation correction step. After selecting two initialization points, the vessel wall volume in a region around the bifurcation is automatically determined. The method was trained on eight datasets (16 carotids) from a population-based study in the elderly for which one observer manually annotated both the lumen and outer wall. An evaluation was carried out on a separate set of 19 datasets (38 carotids) from the same study for which two observers made annotations. Wall volume and normalized wall index measurements resulting from the manual annotations were compared to the automatic measurements. Our experiments show that the automatic method performs comparably to the manual measurements. All image data and annotations used in this study together with the measurements are made available through the website http://ergocar.bigr.nl.
Directory of Open Access Journals (Sweden)
André F. De Champlain
2015-04-01
Purpose: This study aims to assess the fit of a number of exploratory and confirmatory factor analysis models to the 2010 Medical Council of Canada Qualifying Examination Part I (MCCQE1) clinical decision-making (CDM) cases. The outcomes of this study have important implications for a range of domains, including scoring and test development. Methods: The examinees included all first-time Canadian medical graduates and international medical graduates who took the MCCQE1 in spring or fall 2010. The fit of one- to five-factor exploratory models was assessed for the item response matrix of the 2010 CDM cases. Five confirmatory factor analytic models were also examined with the same CDM response matrix. The structural equation modeling software program Mplus was used for all analyses. Results: Out of the five exploratory factor analytic models that were evaluated, a three-factor model provided the best fit. Factor 1 loaded on three medicine cases, two obstetrics and gynecology cases, and two orthopedic surgery cases. Factor 2 corresponded to pediatrics, and the third factor loaded on psychiatry cases. Among the five confirmatory factor analysis models examined in this study, the three- and four-factor lifespan period models and the five-factor discipline model provided the best fit. Conclusion: The results suggest that knowledge of broad disciplinary domains best accounts for performance on CDM cases. In test development, particular effort should be placed on developing CDM cases according to broad discipline and patient age domains; CDM testlets should be assembled largely using the criteria of discipline and age.
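Comparing one- to five-factor exploratory models comes down to asking where added factors stop improving the likelihood. A sketch on synthetic data (the CDM response matrix is not public, so a three-factor synthetic stand-in is used here; this illustrates the comparison, not the study's Mplus analysis):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Synthetic responses driven by 3 latent factors (12 items, 600 examinees)
n, p, k_true = 600, 12, 3
loadings = rng.normal(0.0, 1.0, (p, k_true))
scores = rng.normal(0.0, 1.0, (n, k_true))
X = scores @ loadings.T + rng.normal(0.0, 0.5, (n, p))

# Average log-likelihood of 1- to 5-factor exploratory models: improvements
# are large up to the true dimension, then level off.
ll = [FactorAnalysis(n_components=k, random_state=0).fit(X).score(X)
      for k in range(1, 6)]
```

In practice the leveling-off point is judged with fit indices or information criteria rather than the raw likelihood, since likelihood alone never decreases as factors are added.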
DeGeest, David Scott; Schmidt, Frank
2015-01-01
Our objective was to apply the rigorous test developed by Browne (1992) to determine whether the circumplex model fits Big Five personality data. This test has yet to be applied to personality data. Another objective was to determine whether blended items explained correlations among the Big Five traits. We used two working adult samples, the Eugene-Springfield Community Sample and the Professional Worker Career Experience Survey. Fit to the circumplex was tested via Browne's (1992) procedure. Circumplexes were graphed to identify items with loadings on multiple traits (blended items), and to determine whether removing these items changed five-factor model (FFM) trait intercorrelations. In both samples, the circumplex structure fit the FFM traits well. Each sample had items with dual-factor loadings (8 items in the first sample, 21 in the second). Removing blended items had little effect on construct-level intercorrelations among FFM traits. We conclude that rigorous tests show that the fit of personality data to the circumplex model is good. This finding means the circumplex model is competitive with the factor model in understanding the organization of personality traits. The circumplex structure also provides a theoretically and empirically sound rationale for evaluating intercorrelations among FFM traits. Even after eliminating blended items, FFM personality traits remained correlated.
André, Marcel J
2013-08-01
Photosynthetic assimilation of CO2 in plants results in the balance between the photochemical energy developed by light in chloroplasts, and the consumption of that energy by the oxygenation processes, mainly the photorespiration in C3 plants. The analysis of classical biological models shows how difficult it is to bring to the fore the oxygenation rate due to the photorespiration pathway. As for other parameters, the most important key point is the estimation of the electron transport rate (ETR or J), i.e. the flux of biochemical energy, which is shared between the reductive and oxidative cycles of carbon. The only reliable method to quantify the linear electron flux responsible for the production of reductive energy is to directly measure the O2 evolution by (18)O2 labelling and mass spectrometry. The hypothesis that the respective rates of the reductive and oxidative cycles of carbon are determined only by the kinetic parameters of Rubisco, the respective concentrations of CO2 and O2 at the Rubisco site, and the available electron transport rate ultimately leads us to propose new expressions of the biochemical model equations. The modelling of (18)O2 and (16)O2 unidirectional fluxes in plants shows that a simple model can fit the photosynthetic and photorespiration exchanges for a wide range of environmental conditions. Its originality is to express the carboxylation and the oxygenation as functions of external gas concentrations, by the definition of a plant specificity factor Sp that mimics the internal reactions of Rubisco in plants. The difference between the specificity factors of the plant (Sp) and of Rubisco (Sr) is directly related to the conductance values for CO2 transfer between the atmosphere and the Rubisco site. This clearly illustrates that the values and the variation of conductance are much more important, in higher C3 plants, than the small variations of the Rubisco specificity factor. The simple model systematically expresses the reciprocal variations of
Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu
2016-03-01
The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.
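The Van der Pol benchmark named above can be simulated at irregularly spaced observation times in a few lines. This is only a minimal sketch of generating such data (the stochastic approximation EM estimation itself is beyond a short example), and all parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, state, mu):
    """Van der Pol oscillator: x'' - mu*(1 - x^2)*x' + x = 0."""
    x, v = state
    return [v, mu * (1.0 - x**2) * v - x]

# Irregularly spaced observation times, mimicking ambulatory sampling
rng = np.random.default_rng(1)
t_obs = np.sort(rng.uniform(0.0, 20.0, 40))

sol = solve_ivp(van_der_pol, (0.0, 20.0), y0=[2.0, 0.0],
                args=(1.0,), t_eval=t_obs, rtol=1e-8)
# sol.y holds the two state components at each irregular time point
```

Fitting would then treat `mu`, the initial conditions, and subject-level random effects as unknowns to be estimated from noisy observations of `sol.y`.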
International Nuclear Information System (INIS)
Wilkins, T.A.; Chadney, D.C.; Bryant, J.; Palmstroem, S.H.; Winder, R.L.
1977-01-01
Using the simple univalent antigen univalent-antibody equilibrium model the dose-response curve of a radioimmunoassay (RIA) may be expressed as a function of Y, X and the four physical parameters of the idealised system. A compact but powerful mini-computer program has been written in BASIC for rapid iterative non-linear least squares curve fitting and dose interpolation with this function. In its simplest form the program can be operated in an 8K byte mini-computer. The program has been extensively tested with data from 10 different assay systems (RIA and CPBA) for measurement of drugs and hormones ranging in molecular size from thyroxine to insulin. For each assay system the results have been analysed in terms of (a) curve fitting biases and (b) direct comparison with manual fitting. In all cases the quality of fitting was remarkably good in spite of the fact that the chemistry of each system departed significantly from one or more of the assumptions implicit in the model used. A mathematical analysis of departures from the model's principal assumption has provided an explanation for this somewhat unexpected observation. The essential features of this analysis are presented in this paper together with the statistical analyses of the performance of the program. From these and the results obtained to date in the routine quality control of these 10 assays, it is concluded that the method of curve fitting and dose interpolation presented in this paper is likely to be of general applicability. (orig.)
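The iterative non-linear least squares fitting and dose interpolation described above can be sketched with a modern least-squares routine. The four-parameter logistic below is a common stand-in for the idealised mass-action dose-response function (not the paper's exact equation), and all numbers are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic dose-response curve."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Synthetic standards: doses and simulated bound counts
doses = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
rng = np.random.default_rng(0)
true_params = (1000.0, 1.2, 5.0, 50.0)
counts = four_pl(doses, *true_params) + rng.normal(0, 10, doses.size)

popt, _ = curve_fit(four_pl, doses, counts, p0=(900, 1, 4, 40))

def interpolate_dose(y, a, b, c, d):
    """Dose interpolation: invert the fitted curve for an unknown's response."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

unknown_response = four_pl(7.0, *popt)
est_dose = interpolate_dose(unknown_response, *popt)
```

Inverting the fitted curve recovers the dose that produced a given response, which is exactly the interpolation step an assay program performs for each unknown sample.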
Maydeu-Olivares, Alberto; Montano, Rosa
2013-01-01
We investigate the performance of three statistics, R_1, R_2 (Glas in "Psychometrika" 53:525-546, 1988), and M_2 (Maydeu-Olivares & Joe in "J. Am. Stat. Assoc." 100:1009-1020, 2005; "Psychometrika" 71:713-732, 2006), to assess the overall fit of a one-parameter logistic model…
Anshel, Mark H.; Kang, Minsoo
2007-01-01
The authors' purpose in this action study was to examine the effect of a 10-week intervention, using the Disconnected Values Model (DVM), on changes in selected measures of fitness, blood lipids, and exercise adherence among 51 university faculty (10 men and 41 women) from a school in the southeastern United States. The DVM is an intervention…
Lichtenberg, James W.; Hummel, Thomas J.
This investigation tested the hypothesis that the probabilistic structure underlying psychotherapy interviews is Markovian. The "goodness of fit" of a first-order Markov chain model to actual therapy interviews was assessed using a chi-squared test of homogeneity, and by generating by Monte Carlo methods empirical sampling distributions of…
Is a 'one size fits all' taphonomic model appropriate for the Mazon Creek Lagerstätte?
Clements, Thomas; Purnell, Mark; Gabbott, Sarah
2017-04-01
The Late Carboniferous Mazon Creek Lagerstätte (Illinois, USA) is a world renowned fossil deposit with a huge diversity of preserved flora and fauna. It is widely considered to represent the most complete Late Carboniferous river delta ecosystem because researchers have identified that the deposit preserves organisms from multiple habitats including coastal swamps, brackish lagoons and oceanic environments. Often these fossils have exquisite soft tissue preservation yielding far more information than the 'normal' skeletal fossil record, while some soft-bodied animals, such as the notorious Tully Monster (Tullimonstrum gregarium), are only known from this locality. However, constraining a 'one-size fits all' taphonomic model for the Mazon Creek is difficult because of our poor understanding of sideritic concretion formation or preservation (i.e. the presence of large numbers of unfossiliferous concretions), the large geographical area, the influences of fresh, brackish and saline waters during burial and the subsequent complicated diagenetic processes. To determine the preservational pathways of Mazon Creek fossils, we have compiled data on the mode of preservation of morphological characters for all major groups of fossil organisms found in this Lagerstätte. These data can be used to test for variance in mode of preservation between taxa and also between specific tissue types. Furthermore, experimental decay data is used to constrain the impact of decay prior to fossilisation. Our analysis indicates that there are variations in preservation potential of specific characters shared by taxa. Modes of preservation, however, seem to be consistent across the majority of taxa dependent on locality. This quantitative approach is being utilised as part of a larger ongoing investigation which combines taphonomy with geochemical analysis of siderite concretions from across the vast geographical area of the Mazon Creek. Together this approach will allow us to elucidate the
One size does not fit all - understanding the front-end and back-end of business model innovation
DEFF Research Database (Denmark)
Günzel, Franziska; Holm, Anna B.
2013-01-01
Business model innovation is becoming a central research topic in management. However, a lack of a common understanding of the nature of the business model leads to disregarding its multifaceted structure when analyzing the business model innovation process. This article proposes a more detailed understanding of the business model innovation process by drawing on existing knowledge from the new product development literature and examining the front-end and the back-end of business model innovation of three leading Danish newspapers. We studied how changes introduced during the development of digital news production and delivery have affected key components of these business models, namely value creation, proposition, delivery and capture, in the period 2002–2011. Our findings suggest the need to distinguish between front-end and back-end business model innovation processes, and to recognize the importance…
DEFF Research Database (Denmark)
Cameron, Ian; Gani, Rafiqul
2011-01-01
This chapter deals with the practicalities of building, testing, deploying and maintaining models. It gives specific advice for each phase of the modelling cycle. To do this, a modelling framework is introduced which covers: problem and model definition; model conceptualization; model data requirements; model construction; model solution; model verification; model validation; and finally model deployment and maintenance. Within the adopted methodology, each step is discussed through the consideration of key issues and questions relevant to the modelling activity. Practical advice, based on many years of experience, is provided to direct the reader in their activities. Traps and pitfalls are discussed and strategies are also given to improve model development towards “fit-for-purpose” models. The emphasis in this chapter is the adoption and exercise of a modelling methodology that has proven very…
Directory of Open Access Journals (Sweden)
Ahmad Heru Mujianto
2017-01-01
Private Higher Education (PHE) institutions in Jombang apply an online selection process for the admission of new students, so applicants simply register through the online admission website owned by their respective private university, without needing to visit the university. In practice, however, some prospective students still apply directly at the PHE admission office, which makes it necessary to measure the success rate of the online admission website application at PHE institutions. Moreover, the online admission websites of PHE institutions in Jombang have so far never been evaluated to determine their success rate. The HOT (Human Organization Technology) Fit model is a success model that can be used for evaluating information systems. HOT Fit uses seven variables: system quality, information quality, service quality, system use, user satisfaction, net benefits, and organizational structure. The results of the research show that three assessment indicators have satisfaction values below 85%: response time at 76.1%, availability of help facilities at 71.6%, and display satisfaction at 64.2%. These three indicators need to be improved to obtain better results and to optimize the implementation of the online admission website of PHE institutions in Jombang. Keywords: admission of new students; HOT Fit; Human Organization Technology; Private Higher Education; PHE.
Toribo, S.G.; Gray, B.R.; Liang, S.
2011-01-01
The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
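The replicated-count structure that the N-mixture model assumes can be simulated directly, which also makes the closed-site assumption concrete: the latent abundance at each site stays fixed across repeat visits. A minimal sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, p = 5.0, 0.4          # true abundance rate and detection probability
n_sites, n_reps = 200, 4   # replicated visits assumed by the model

N = rng.poisson(lam, n_sites)                        # latent abundance per site
y = rng.binomial(N[:, None], p, (n_sites, n_reps))   # repeated counts per site

# Under the closed-site assumption, E[y] = lam * p (here 2.0)
mean_count = y.mean()
```

Pseudo-replication of the kind the paper studies corresponds to `N` changing between visits (or nominal replicates being spatial), which breaks the binomial-given-N structure above.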
Berry, Prudence Jane
2014-01-01
This article looks at the range of financial reporting models available for use in the Australian higher education sector, the possible application of activity-based costing (ABC) in faculties and the eventual rejection of ABC in favour of a more qualitative model designed specifically for use in one institution, in a particular Faculty. The…
Wampold, Bruce E.; And Others
1995-01-01
Describes qualitative study of chemistry laboratory groups of undergraduates to explore notion that a critical aspect of the environment in person-environment models is the nature and density of the social interactions of the people in the environment. Holland's hexagonal model of personality types was the framework used to study related…
Christiansen, Bo
2015-04-01
Linear regression methods are without doubt the most used approaches to describe and predict data in the physical sciences. They are often good first order approximations and they are in general easier to apply and interpret than more advanced methods. However, even the properties of univariate regression can lead to debate over the appropriateness of various models, as witnessed by the recent discussion about climate reconstruction methods. Before linear regression is applied, important choices have to be made regarding the origins of the noise terms and regarding which of the two variables under consideration should be treated as the independent variable. These decisions are often not easy to make but they may have a considerable impact on the results. We seek to give a unified probabilistic - Bayesian with flat priors - treatment of univariate linear regression and prediction by taking, as starting point, the general errors-in-variables model (Christiansen, J. Clim., 27, 2014-2031, 2014). Other versions of linear regression can be obtained as limits of this model. We derive the likelihood of the model parameters and predictands of the general errors-in-variables model by marginalizing over the nuisance parameters. The resulting likelihood is relatively simple and easy to analyze and calculate. The well known unidentifiability of the errors-in-variables model is manifested as the absence of a well-defined maximum in the likelihood. However, this does not mean that probabilistic inference cannot be made; the marginal likelihoods of model parameters and the predictands have, in general, well-defined maxima. We also include a probabilistic version of classical calibration and show how it is related to the errors-in-variables model. The results are illustrated by an example from the coupling between the lower stratosphere and the troposphere in the Northern Hemisphere winter.
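The practical stakes of the noise-origin choice can be seen in the classic attenuation effect: ordinary least squares with measurement error in the predictor biases the slope toward zero by the reliability ratio var(x_true)/(var(x_true)+var(error)). A minimal simulation (all variances illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
x_true = rng.normal(0.0, 1.0, n)
y = 2.0 * x_true + rng.normal(0.0, 0.5, n)   # noise in y only
x_obs = x_true + rng.normal(0.0, 1.0, n)     # measurement error in x

# OLS slope on the observed predictor is attenuated toward zero
slope_ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs)
# Expected: true slope 2.0 times reliability 1/(1+1) = 0.5, i.e. about 1.0
```

An errors-in-variables treatment models the noise in both variables explicitly, which is why the choice of noise origins (and of the independent variable) changes the answer.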
DEFF Research Database (Denmark)
Stein, Wilfred D; Litman, Thomas
2006-01-01
We successfully modeled the recurrence of tumors in breast cancer patients, assuming that: (i) A breast cancer patient is likely to have some circulating metastatic cells, even after initial surgery. (ii) These metastatic cells are dormant. (iii) The dormant cells are subject to attrition by the body's immune system, or by random apoptosis or senescence. (iv) Recurrence suppressor mechanisms exist. (v) When such genes are disabled by random mutations, the dormant metastatic cell is activated, and will develop to a cancer recurrence. The model was also fitted to data on the survival…
Energy Technology Data Exchange (ETDEWEB)
Abdeldayem, H.M.; Ruiz, P.; Delmon, B. [Unite de Catalyse et Chimie des Materiaux Divises, Universite Catholique de Louvain, Louvain-La-Neuve (Belgium); Thyrion, F.C. [Unite des Procedes Faculte des Sciences Appliquees, Universite Catholique de Louvain, Louvain-La-Neuve (Belgium)
1998-12-31
A new kinetic model for a more accurate and detailed fitting of the experimental data is proposed. The model is based on the remote control mechanism (RCM). The RCM assumes that some oxides (called 'donors') are able to activate molecular oxygen, transforming it to very active mobile species (spillover oxygen, O{sub OS}). O{sub OS} migrates onto the surface of the other oxide (called 'acceptor') where it creates and/or regenerates the active sites during the reaction. The model contains two terms, one considering the creation of selective sites and the other the catalytic reaction at each site. The model has been tested in the selective oxidation of propene into acrolein (T=380, 400, 420 C; oxygen and propene partial pressures between 38 and 152 Torr). Catalysts were prepared as pure MoO{sub 3} (acceptor) and their mechanical mixtures with {alpha}-Sb{sub 2}O{sub 4} (donor) in different proportions. The presence of {alpha}-Sb{sub 2}O{sub 4} changes the reaction order, the activation energy of the reaction and the number of active sites of MoO{sub 3} produced by oxygen spillover. These changes are consistent with a modification in the degree of irrigation of the surface by oxygen spillover. The fitting of the model to experimental results shows that the number of sites created by O{sub OS} increases with the amount of {alpha}-Sb{sub 2}O{sub 4}. (orig.)
Armour, Cherie; Carragher, Natacha; Elhai, Jon D
2013-01-01
Since the initial inclusion of PTSD in the DSM nomenclature, PTSD symptomatology has been distributed across three symptom clusters. However, a wealth of empirical research has concluded that PTSD's latent structure is best represented by one of two four-factor models: Numbing or Dysphoria. Recently, a newly proposed five-factor Dysphoric Arousal model, which separates the DSM-IV's Arousal cluster into two factors of Anxious Arousal and Dysphoric Arousal, has gathered support across a variety of trauma samples. To date, the Dysphoric Arousal model has not been assessed using nationally representative epidemiological data. We employed confirmatory factor analysis to examine PTSD's latent structure in two independent population-based surveys from the United States (NESARC) and Australia (NSWHWB). We specified and estimated the Numbing model, the Dysphoria model, and the Dysphoric Arousal model in both samples. Results revealed that the Dysphoric Arousal model provided superior fit to the data compared to the alternative models. In conclusion, these findings suggest that items D1-D3 (sleeping difficulties; irritability; concentration difficulties) represent a separate, fifth factor within PTSD's latent structure using nationally representative epidemiological data in addition to single trauma specific samples. Copyright © 2012 Elsevier Ltd. All rights reserved.
Hamada, K.; Yoshizawa, K.
2013-12-01
Anelastic attenuation of seismic waves provides us with valuable information on temperature and water content in the Earth's mantle. While seismic velocity models have been investigated by many researchers, anelastic attenuation (or Q) models have yet to be investigated in detail, mainly due to the intrinsic difficulties and uncertainties in the amplitude analysis of observed seismic waveforms. To increase the horizontal resolution of surface wave attenuation models on a regional scale, we have developed a new method of fully non-linear waveform fitting to measure inter-station phase velocities and amplitude ratios simultaneously, using the Neighborhood Algorithm (NA) as a global optimizer. Model parameter space (perturbations of phase speed and amplitude ratio) is explored to fit two observed waveforms on a common great-circle path by perturbing both phase and amplitude of the fundamental-mode surface waves. This method has been applied to observed waveform data of the USArray from 2007 to 2008, and a large number of inter-station amplitude and phase speed data were collected in a period range from 20 to 200 seconds. We have constructed preliminary phase speed and attenuation models using the observed phase and amplitude data, with careful consideration of the effects of elastic focusing and station correction factors for amplitude data. The phase velocity models indicate good correlation with the conventional tomographic results in North America on a large scale; e.g., a significant slow velocity anomaly in volcanic regions in the western United States. The preliminary results of surface-wave attenuation achieved a better variance reduction when the amplitude data are inverted for attenuation models in conjunction with corrections for receiver factors. We have also taken into account the amplitude correction for elastic focusing based on a geometrical ray theory, but its effects on the final model are somewhat limited, and our attenuation model shows anti
Rabin, Sam S.; Ward, Daniel S.; Malyshev, Sergey L.; Magi, Brian I.; Shevliakova, Elena; Pacala, Stephen W.
2018-03-01
This study describes and evaluates the Fire Including Natural & Agricultural Lands model (FINAL) which, for the first time, explicitly simulates cropland and pasture management fires separately from non-agricultural fires. The non-agricultural fire module uses empirical relationships to simulate burned area in a quasi-mechanistic framework, similar to past fire modeling efforts, but with a novel optimization method that improves the fidelity of simulated fire patterns to new observational estimates of non-agricultural burning. The agricultural fire components are forced with estimates of cropland and pasture fire seasonality and frequency derived from observational land cover and satellite fire datasets. FINAL accurately simulates the amount, distribution, and seasonal timing of burned cropland and pasture over 2001-2009 (global totals: 0.434×106 and 2.02×106 km2 yr-1 modeled, 0.454×106 and 2.04×106 km2 yr-1 observed), but carbon emissions for cropland and pasture fire are overestimated (global totals: 0.295 and 0.706 PgC yr-1 modeled, 0.194 and 0.538 PgC yr-1 observed). The non-agricultural fire module underestimates global burned area (1.91×106 km2 yr-1 modeled, 2.44×106 km2 yr-1 observed) and carbon emissions (1.14 PgC yr-1 modeled, 1.84 PgC yr-1 observed). The spatial pattern of total burned area and carbon emissions is generally well reproduced across much of sub-Saharan Africa, Brazil, Central Asia, and Australia, whereas the boreal zone sees underestimates. FINAL represents an important step in the development of global fire models, and offers a strategy for fire models to consider human-driven fire regimes on cultivated lands. At the regional scale, simulations would benefit from refinements in the parameterizations and improved optimization datasets. We include an in-depth discussion of the lessons learned from using the Levenberg-Marquardt algorithm in an interactive optimization for a dynamic global vegetation model.
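The Levenberg-Marquardt optimization discussed at the end can be sketched with SciPy's `least_squares`. The saturating burned-area response and all data below are hypothetical stand-ins, not the FINAL model's actual parameterization:

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, x):
    """Hypothetical saturating response of burned area to a dryness index."""
    a, b = params
    return a * (1.0 - np.exp(-b * x))

def residuals(params, x, obs):
    return model(params, x) - obs

x = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(9)
obs = model((3.0, 0.5), x) + rng.normal(0.0, 0.05, x.size)

# method="lm" selects the Levenberg-Marquardt algorithm
fit = least_squares(residuals, x0=(1.0, 1.0), args=(x, obs), method="lm")
```

In an interactive optimization against a dynamic vegetation model, each residual evaluation would require a model run, which is what makes the choice of optimizer and optimization dataset so consequential.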
Narlikar, Leelavati; Mehta, Nidhi; Galande, Sanjeev; Arjunwadkar, Mihir
2013-01-01
The structural simplicity and ability to capture serial correlations make Markov models a popular modeling choice in several genomic analyses, such as identification of motifs, genes and regulatory elements. A critical, yet relatively unexplored, issue is the determination of the order of the Markov model. Most biological applications use a predetermined order for all data sets indiscriminately. Here, we show the vast variation in the performance of such applications with the order. To identify the ‘optimal’ order, we investigated two model selection criteria: Akaike information criterion and Bayesian information criterion (BIC). The BIC optimal order delivers the best performance for mammalian phylogeny reconstruction and motif discovery. Importantly, this order is different from orders typically used by many tools, suggesting that a simple additional step determining this order can significantly improve results. Further, we describe a novel classification approach based on BIC optimal Markov models to predict functionality of tissue-specific promoters. Our classifier discriminates between promoters active across 12 different tissues with remarkable accuracy, yielding 3 times the precision expected by chance. Application to the metagenomics problem of identifying the taxon from a short DNA fragment yields accuracies at least as high as the more complex mainstream methodologies, while retaining conceptual and computational simplicity. PMID:23267010
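The BIC-based order selection described above amounts to penalizing the Markov log-likelihood by the number of free transition parameters. A minimal sketch on a toy DNA-like sequence (the toy sequence is purely illustrative):

```python
import math
from collections import Counter

def markov_bic(seq, order, alphabet="ACGT"):
    """Maximum-likelihood fit of a fixed-order Markov model; returns (logL, BIC)."""
    ctx_counts, trans_counts = Counter(), Counter()
    for i in range(order, len(seq)):
        ctx = seq[i - order:i]
        ctx_counts[ctx] += 1
        trans_counts[(ctx, seq[i])] += 1
    # Log-likelihood under ML transition probabilities count/context_count
    ll = sum(c * math.log(c / ctx_counts[ctx])
             for (ctx, _), c in trans_counts.items())
    k = (len(alphabet) ** order) * (len(alphabet) - 1)  # free parameters
    n = len(seq) - order                                # number of transitions
    return ll, -2.0 * ll + k * math.log(n)

seq = "ACGT" * 250  # toy sequence with perfect first-order structure
best = min(range(0, 3), key=lambda m: markov_bic(seq, m)[1])
```

On this sequence the first-order model explains the data perfectly, so BIC picks order 1 rather than paying the parameter penalty of order 2: the same trade-off the paper exploits on real genomic data.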
Directory of Open Access Journals (Sweden)
Qian Jing-Guang
2014-07-01
The purpose of the study was to establish a dynamics model and a three-dimensional (3D) finite element model to analyze loading characteristics of the femoral neck during walking, squat, single-leg standing, and forward and lateral lunges. One male volunteer performed three trials of the five movements. The 3D kinematic data were captured and imported into LifeMOD to establish a musculoskeletal dynamics model to obtain joint reaction and muscle forces of the iliacus, gluteus medius, gluteus maximus, psoas major and adductor magnus. The loading data from LifeMOD were imported and transformed into a hip finite-element model. The results of the finite element femur model showed that stress was localized along the compression arc and the tension arc. In addition, the trabecular bone and tension lines of the Ward's triangle also demonstrated high stress. The compact bone received the greatest peak stress in the forward lunge and the least stress in the squat. However, the spongy bone in the femoral neck region had the greatest stress during the walk and the least stress in the squat. The results from this study indicate that the forward lunge may be an effective method to prevent femoral neck fractures. Walking is another effective and simple method that may improve bone mass of the Ward's triangle and prevent osteoporosis and femoral neck fracture.
DEFF Research Database (Denmark)
Deforche, Koen; Cozzi-Lepri, Alessandro; Theys, Kristof
2008-01-01
BACKGROUND: A method has been developed to estimate a fitness landscape experienced by HIV-1 under treatment selective pressure as a function of the genotypic sequence, thereby also estimating the genetic barrier to resistance. METHODS: We evaluated the performance of two estimated fitness landscapes…
Blakeley-Smith, Audrey; Carr, Edward G.; Cale, Sanja I.; Owen-DeSchryver, Jamie S.
2009-01-01
Theoretical considerations suggest that problem behavior should increase when a child's competency does not match the curricular demands of the environment (i.e., when there is poor environmental fit). In the present study, environmental fit was examined for six children with autism spectrum disorders. Results indicated that the children exhibited…
NonpModelCheck: An R Package for Nonparametric Lack-of-Fit Testing and Variable Selection
Directory of Open Access Journals (Sweden)
Adriano Zanin Zambom
2017-05-01
We describe the R package NonpModelCheck for hypothesis testing and variable selection in nonparametric regression. This package implements functions to perform hypothesis testing for the significance of a predictor or a group of predictors in a fully nonparametric heteroscedastic regression model using high-dimensional one-way ANOVA. Based on the p values from the test of each covariate, three different algorithms allow the user to perform variable selection using false discovery rate corrections. A function for classical local polynomial regression is implemented for the multivariate context, where the degree of the polynomial can be as large as needed and bandwidth selection strategies are built in.
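Classical local polynomial regression of the kind the package implements (the package itself is in R) reduces to a kernel-weighted polynomial fit around each evaluation point. A language-agnostic sketch, with the Gaussian kernel and bandwidth chosen purely for illustration:

```python
import numpy as np

def local_poly(x0, x, y, degree=2, bandwidth=0.5):
    """Local polynomial estimate of E[y|x=x0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)   # kernel weights
    # Centering at x0 makes the intercept the fitted value at x0;
    # sqrt(w) because polyfit weights the unsquared residuals.
    coeffs = np.polyfit(x - x0, y, degree, w=np.sqrt(w))
    return coeffs[-1]

x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + np.random.default_rng(5).normal(0.0, 0.1, x.size)
y_hat = local_poly(np.pi / 2.0, x, y)  # should be near sin(pi/2) = 1
```

Automatic bandwidth selection, as in the package, would replace the fixed `bandwidth` with a data-driven choice such as cross-validation.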
Zhao, Z G; Rong, E H; Li, S C; Zhang, L J; Zhang, Z W; Guo, Y Q; Ma, R Y
2016-08-01
Monitoring of oriental fruit moths (Grapholita molesta Busck) is a prerequisite for its control. This study introduced a digital image-processing method and logistic model for the control of oriental fruit moths. First, five triangular sex pheromone traps were installed separately within each area of 667 m2 in a peach orchard to monitor oriental fruit moths consecutively for 3 years. Next, full view images of oriental fruit moths were collected via a digital camera and then subjected to graying, separation and morphological analysis for automatic counting using MATLAB software. Afterwards, the results of automatic counting were used for fitting a logistic model to forecast the control threshold and key control period. There was a high consistency between automatic counting and manual counting (0.99, P < 0.05). According to the logistic model, oriental fruit moths had four occurrence peaks during a year, with a time-lag of 15-18 days between adult occurrence peak and the larval damage peak. Additionally, the key control period was from 28 June to 3 July each year, when the wormy fruit rate reached up to 5% and the trapping volume was approximately 10.2 per day per trap. Additionally, the key control period for the overwintering generation was 25 April. This study provides an automatic counting method and fitted logistic model with a great potential for application to the control of oriental fruit moths.
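The logistic-model forecasting step described above can be sketched by fitting a logistic curve to cumulative trap catch; the inflection point then marks the period of fastest catch growth, a natural control cue. All data and parameter values below are synthetic, not the orchard's:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic curve for cumulative trap catch over time (days)."""
    return K / (1.0 + np.exp(-r * (t - t0)))

days = np.arange(0.0, 60.0, 3.0)
rng = np.random.default_rng(7)
catch = logistic(days, 300.0, 0.2, 30.0) + rng.normal(0.0, 5.0, days.size)

(K, r, t0), _ = curve_fit(logistic, days, catch, p0=(250, 0.1, 25))
peak_day = t0  # inflection point: fastest daily catch
```

A lag of 15-18 days, as reported above, would then be added to the adult peak to forecast the larval damage peak.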
Li, Yuelin; Baser, Ray
2013-01-01
The US Food and Drug Administration recently announced the final guidelines on the development and validation of Patient-Reported Outcomes (PROs) assessments in drug labeling and clinical trials. This guidance paper may boost the demand for new PRO survey questionnaires. Henceforth biostatisticians may encounter psychometric methods more frequently, particularly Item Response Theory (IRT) models to guide the shortening of a PRO assessment instrument. This article aims to provide an introduction on the theory and practical analytic skills in fitting a Generalized Partial Credit Model in IRT (GPCM). GPCM theory is explained first, with special attention to a clearer exposition of the formal mathematics than what is typically available in the psychometric literature. Then a worked example is presented, using self-reported responses taken from the International Personality Item Pool. The worked example contains step-by-step guides on using the statistical languages R and WinBUGS in fitting the GPCM. Finally, the Fisher information function of the GPCM model is derived and used to evaluate, as an illustrative example, the usefulness of assessment items by their information contents. This article aims to encourage biostatisticians to apply IRT models in the re-analysis of existing data and in future research. PMID:22362655
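The GPCM category probabilities have a closed, softmax-like form: the exponent for category k accumulates the step terms a(theta - b_j) for j up to k. A minimal sketch with hypothetical item parameters (the article's worked example uses R and WinBUGS instead):

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Category response probabilities for one GPCM item.

    theta: latent trait; a: discrimination; b: step difficulties b_1..b_K.
    Returns probabilities for categories 0..K.
    """
    steps = np.concatenate(([0.0], a * (theta - np.asarray(b))))
    z = np.cumsum(steps)        # cumulative step sums, the log-numerators
    p = np.exp(z - z.max())     # numerically stabilized softmax
    return p / p.sum()

probs = gpcm_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.5])
```

Fitting the model then means finding the `a` and `b` values (and person `theta`s) that maximize the likelihood of the observed category responses under these probabilities.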
Spears, Janine L.; Parrish, James L., Jr.
2013-01-01
This teaching case introduces students to a relatively simple approach to identifying and documenting security requirements within conceptual models that are commonly taught in systems analysis and design courses. An introduction to information security is provided, followed by a classroom example of a fictitious company, "Fun &…
Cherchye, L.J.H.; Vermeulen, F.M.P.
2006-01-01
We compare the empirical performance of unitary and collective labor supply models, using representative data from the Dutch DNB Household Survey. We conduct a nonparametric analysis that avoids the distortive impact of an erroneously specified functional form for the preferences and/or the…
Directory of Open Access Journals (Sweden)
Jesús Montes
2015-01-01
data, though a deeper analysis from a biological perspective reveals that some are better suited for this purpose, as they more accurately represent the biological process. Based on the results of this analysis, we propose a set of mathematical equations and a methodology adequate for modeling several aspects of biochemical synaptic behavior.
Directory of Open Access Journals (Sweden)
Erin Peterson
2014-01-01
This paper describes the STARS ArcGIS geoprocessing toolset, which is used to calculate the spatial information needed to fit spatial statistical models to stream network data using the SSN package. The STARS toolset is designed for use with a landscape network (LSN), which is a topological data model produced by the FLoWS ArcGIS geoprocessing toolset. An overview of the FLoWS LSN structure and a few particularly useful tools is also provided so that users will have a clear understanding of the underlying data structure that the STARS toolset depends on. This document may be used as an introduction for new users. The methods used to calculate the spatial information and format the final .ssn object are also explicitly described so that users may create their own .ssn object using other data models and software.
DEFF Research Database (Denmark)
Andersen, Andreas; Rieckmann, Andreas
2016-01-01
In this article, we illustrate how to use mi impute chained with intreg to fit an analysis-of-covariance model to censored and nondetectable immunological concentrations measured in a randomized pretest–posttest design.
Sakamoto, Toshihiro
2018-04-01
Crop phenological information is a critical variable in evaluating the influence of environmental stress on the final crop yield in spatio-temporal dimensions. Although the MODIS (Moderate Resolution Imaging Spectroradiometer) Land Cover Dynamics product (MCD12Q2) is widely used in place of crop phenological information, the definitions of MCD12Q2-derived phenological events (e.g. green-up date, dormancy date) are not completely consistent with those of the crop development stages used in statistical surveys (e.g. emerged date, harvested date). It has therefore been necessary to devise an alternative method focused on detecting continental-scale crop development stages using a different approach. This study aimed to refine the Shape Model Fitting (SMF) method to improve its applicability to multiple major U.S. crops. The newly refined SMF methods could estimate the timing of 36 crop-development stages of major U.S. crops, including corn, soybeans, winter wheat, spring wheat, barley, sorghum, rice, and cotton. The newly developed calibration process did not require any long-term field observation data, and could calibrate the crop-specific phenological parameters, used as coefficients in the estimation equations, from freely accessible public data alone. The calibration of phenological parameters was conducted in two steps. In the first step, the national common phenological parameters, referred to as X0[base], were calibrated using the statistical data of 2008. The SMF method coupled with X0[base] was named the rSMF[base] method. The second step was a further calibration to obtain regionally adjusted phenological parameters for each state, referred to as X0[local], using additional statistical data of 2015 and 2016. The rSMF method using X0[local] was named the rSMF[local] method. This second calibration process improved the estimation accuracy for all tested crops. When applying the rSMF[base] method to the validation data set (2009-2014), the root
Tang, Chuanning; Lew, Scott; He, Dacheng
2016-04-01
In vitro protein stability studies are commonly conducted via thermal or chemical denaturation/renaturation of a protein. Conventional data analyses of protein unfolding/(re)folding require well-defined pre- and post-transition baselines to evaluate the Gibbs free-energy change associated with unfolding/(re)folding. This evaluation becomes problematic when there are insufficient data to determine the pre- or post-transition baselines. In this study, fitting of such partial data obtained in protein chemical denaturation is established by introducing second-order differential (SOD) analysis, which overcomes the limitations of the conventional fitting method. By reducing the number of baseline-related fitting parameters, SOD analysis can successfully fit incomplete chemical denaturation data sets in close agreement with the conventional evaluation of the equivalent complete data, where conventional fitting fails. This SOD fitting of abbreviated isothermal chemical denaturation data complements the data analysis methods used in the two prevalent types of protein stability study. © 2016 The Protein Society.
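The conventional two-state analysis that SOD abbreviates can be written down compactly: a linear-extrapolation free energy ΔG(d) = ΔG₀ − m·d plus linear folded and unfolded baselines. A minimal sketch with illustrative parameter values (the four baseline parameters yf, mf, yu, mu are exactly the ones SOD removes by differentiating the signal twice):

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol*K)

def frac_unfolded(d, dG0, m, T=298.15):
    """Two-state linear extrapolation model: dG(d) = dG0 - m*d,
    where d is denaturant concentration (M)."""
    dG = dG0 - m * d
    K_eq = math.exp(-dG / (R * T))
    return K_eq / (1.0 + K_eq)

def signal(d, dG0, m, yf, mf, yu, mu, T=298.15):
    # Observed signal: linear baselines weighted by folded/unfolded fractions.
    # These four baseline parameters are what the SOD approach eliminates.
    fu = frac_unfolded(d, dG0, m, T)
    return (yf + mf * d) * (1.0 - fu) + (yu + mu * d) * fu

# Illustrative stability parameters (kJ/mol and kJ/(mol*M))
dG0, m = 20.0, 8.0
Cm = dG0 / m  # denaturation midpoint, where dG = 0 and half is unfolded
half = frac_unfolded(Cm, dG0, m)
```

At the midpoint Cm = ΔG₀/m, the unfolded fraction is exactly one half, which is the anchor point a partial data set must still constrain for either fitting approach to work.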
Daivadanam, Meena; Wahlström, Rolf; Ravindran, T K Sundari; Thankappan, K R; Ramanathan, Mala
2014-06-09
Interventions having a strong theoretical basis are more efficacious, providing a strong argument for incorporating theory into intervention planning. The objective of this study was to develop a conceptual model to facilitate the planning of dietary intervention strategies at the household level in rural Kerala. Three focus group discussions and 17 individual interviews were conducted among men and women aged between 23 and 75 years. An interview guide facilitated the process to understand: 1) the feasibility and acceptability of a proposed dietary behaviour change intervention; 2) beliefs about foods, particularly fruits and vegetables; 3) decision-making in households with reference to food choices and access; and 4) the kinds of intervention strategies that may be practical at community and household level. The data were analysed using a modified form of qualitative framework analysis, which combined both deductive and inductive reasoning. A priori themes were identified from relevant behaviour change theories using construct definitions, and used to index the meaning units identified from the primary qualitative data. In addition, new themes emerging from the data were included. The associations between the themes were mapped into four main factors and their components, which contributed to construction of the conceptual model. Thirteen of the a priori themes from three behaviour change theories (Trans-theoretical Model, Health Belief Model and Theory of Planned Behaviour) were confirmed or slightly modified, while four new themes emerged from the data. The conceptual model had four main factors and their components: impact factors (decisional balance, risk perception, attitude); change processes (action-oriented, cognitive); background factors (personal modifiers, societal norms); and overarching factors (accessibility, perceived needs and preferences), built around a three-stage change spiral (pre-contemplation, intention, action). Decisional…
Energy Technology Data Exchange (ETDEWEB)
Lim, Chang Seon [Dept. of Radiological Science, Konyang University College of Medical Sciences, Daejeon (Korea, Republic of); Cho, A Ra [Dept. of Medical Education, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Hur, Yera [Dept. of Medical Education, Konyang University College of Medicine, Daejeon (Korea, Republic of); Choi, Seong Youl [Dept. of Occupational Therapy, Kwangju women’s University, Gwangju (Korea, Republic of)
2017-09-15
Radiological technologists deal with human life, which means professional competency is essential for the job. Nevertheless, there have been no studies in Korea that identified the job competencies of radiological technologists. In order to define the core job competencies of Korean radiological technologists and to present factor models, 147 questionnaires on job competency were analyzed using PASW Statistics Version 18.0 and AMOS Version 18.0. The valid model consisted of five core job competencies ('Patient management', 'Health and safety', 'Operation of equipment', 'Procedures and management') and 17 sub-competencies. In the factor analysis of the measurement model for the five core job competencies, the RMSEA value was 0.1 and the CFI and TLI values were close to 0.9. The validity analysis showed that the average variance extracted was 0.5 or more and the construct reliability value was 0.7 or more, and there was a high correlation between the sub-competencies included in each core competency. The results of this study are expected to provide specific information necessary for competency-centered training and management of human resources by clearly showing the job competencies required of radiological technologists in Korea's health environment.
Anan, Mohammad Tarek M.; Al-Saadi, Mohannad H.
2015-01-01
Objective The aim of this study was to compare the fit accuracies of metal partial removable dental prosthesis (PRDP) frameworks fabricated by the traditional technique (TT) or the light-curing modeling material technique (LCMT). Materials and methods A metal model of a Kennedy class III modification 1 mandibular dental arch with two edentulous spaces of different spans, short and long, was used for the study. Thirty identical working casts were used to produce 15 PRDP frameworks each by TT and by LCMT. Every framework was transferred to a metal master cast to measure the gap between the metal base of the framework and the crest of the alveolar ridge of the cast. Gaps were measured at three points on each side by a USB digital intraoral camera at ×16.5 magnification. Images were transferred to a graphics editing program. A single examiner performed all measurements. The two-tailed t-test was performed at the 5% significance level. Results The mean gap value was significantly smaller in the LCMT group compared to the TT group. The mean value of the short edentulous span was significantly smaller than that of the long edentulous span in the LCMT group, whereas the opposite result was obtained in the TT group. Conclusion Within the limitations of this study, it can be concluded that the fit of the LCMT-fabricated frameworks was better than the fit of the TT-fabricated frameworks. The framework fit can differ according to the span of the edentate ridge and the fabrication technique for the metal framework. PMID:26236129
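The group comparison reported here is a standard two-sample t-test on mean gap values. A minimal sketch with made-up gap measurements (hypothetical values in micrometres, not the study's data):

```python
import math

def two_sample_t(x, y):
    """Student's two-sample t statistic with pooled variance, as used to
    compare mean gap values between two framework groups.
    Returns (t, degrees of freedom)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)  # pooled variance
    t = (mx - my) / math.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
    return t, nx + ny - 2

# Hypothetical gap measurements (micrometres) for the two techniques
tt_gaps = [310.0, 295.0, 330.0, 305.0, 320.0]
lcmt_gaps = [250.0, 240.0, 265.0, 255.0, 248.0]
t, df = two_sample_t(tt_gaps, lcmt_gaps)
```

A large positive t here would indicate that the first group's mean gap is larger; the two-tailed p-value would then be read from the t distribution with the returned degrees of freedom.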
Selb, Juliette; Ogden, Tyler M.; Dubb, Jay; Fang, Qianqian; Boas, David A.
2014-01-01
Abstract. Near-infrared spectroscopy (NIRS) estimations of the adult brain baseline optical properties based on a homogeneous model of the head are known to introduce significant contamination from extracerebral layers. More complex models have been proposed and occasionally applied to in vivo data, but their performances have never been characterized on realistic head structures. Here we implement a flexible fitting routine of time-domain NIRS data using graphics processing unit based Monte Carlo simulations. We compare the results for two different geometries: a two-layer slab with variable thickness of the first layer and a template atlas head registered to the subject’s head surface. We characterize the performance of the Monte Carlo approaches for fitting the optical properties from simulated time-resolved data of the adult head. We show that both geometries provide better results than the commonly used homogeneous model, and we quantify the improvement in terms of accuracy, linearity, and cross-talk from extracerebral layers. PMID:24407503
Carlotti, Massimo; Brizzi, Gabriele; Papandrea, Enzo; Prevedelli, Marco; Ridolfi, Marco; Dinelli, Bianca Maria; Magnani, Luca
2006-02-01
We present a new retrieval model designed to analyze the observations of the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS), which is on board the ENVironmental SATellite (ENVISAT). The new geo-fit multitarget retrieval model (GMTR) implements the geo-fit two-dimensional inversion for the simultaneous retrieval of several targets including a set of atmospheric constituents that are not considered by the ground processor of the MIPAS experiment. We describe the innovative solutions adopted in the inversion algorithm and the main functionalities of the corresponding computer code. The performance of GMTR is compared with that of the MIPAS ground processor in terms of accuracy of the retrieval products. Furthermore, we show the capability of GMTR to resolve the horizontal structures of the atmosphere. The new retrieval model is implemented in an optimized computer code that is distributed by the European Space Agency as "open source" in a package that includes a full set of auxiliary data for the retrieval of 28 atmospheric targets.
Kuroda, Natsuha; Gary, Dale E.; Wang, Haimin; Fleishman, Gregory D.; Nita, Gelu M.; Jing, Ju
2018-01-01
The well-established notion of a “common population” of the accelerated electrons simultaneously producing the hard X-ray (HXR) and microwave (MW) emission during the flare impulsive phase has been challenged by some studies reporting the discrepancies between the HXR-inferred and MW-inferred electron energy spectra. The traditional methods of spectral inversion have some problems that can be mainly attributed to the unrealistic and oversimplified treatment of the flare emission. To properly address this problem, we use a nonlinear force-free field (NLFFF) model extrapolated from an observed photospheric magnetogram as input to the three-dimensional, multiwavelength modeling platform GX Simulator and create a unified electron population model that can simultaneously reproduce the observed HXR and MW observations. We model the end of the impulsive phase of the 2015 June 22 M6.5 flare and constrain the modeled electron spatial and energy parameters using observations made by the highest-resolving instruments currently available in two wavelengths, the Reuven Ramaty High Energy Solar Spectroscopic Imager for HXR and the Expanded Owens Valley Solar Array for MW. Our results suggest that the HXR-emitting electron population model fits the standard flare model with a broken power-law spectrum (E_break ≈ 200 keV) that simultaneously produces the HXR footpoint emission and the MW high-frequency emission. The model also includes an “HXR-invisible” population of nonthermal electrons that are trapped in a large volume of magnetic field above the HXR-emitting loops, which is observable by its gyrosynchrotron radiation emitting mainly in the MW low-frequency range.
International Nuclear Information System (INIS)
Halpern, J.; Whittemore, A.S.
1987-01-01
Two methods were used to examine how lung cancer death rates vary with cumulative exposures to radiation and tobacco among uranium miners. The two methods produced similar results when death rate ratios were taken to be the product of radiation and tobacco effects. The estimates were discrepant when death rate ratios were taken to be the sum of radiation and tobacco effects. Both methods indicated a better fit for the multiplicative model. It may be that cumulative exposures are inappropriate measures of the effects of radiation and tobacco on lung cancer death rates, as they may be of other pollutants for which the assumption of cumulative dose is the basis for risk assessments.
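The two competing forms can be stated directly: under the multiplicative model the joint death-rate ratio is the product of the radiation and tobacco rate ratios, while under the additive model the excess relative risks sum. A minimal sketch with illustrative numbers (not the miners' data):

```python
def multiplicative_rate(base, rr_radiation, rr_tobacco):
    # Joint death rate when relative risks multiply: RR = RR_r * RR_t
    return base * rr_radiation * rr_tobacco

def additive_rate(base, er_radiation, er_tobacco):
    # Joint death rate when excess relative risks add: RR = 1 + ER_r + ER_t
    return base * (1.0 + er_radiation + er_tobacco)

base = 0.001          # baseline lung cancer death rate (illustrative)
rr_rad, rr_tob = 3.0, 10.0  # hypothetical relative risks for each exposure

mult = multiplicative_rate(base, rr_rad, rr_tob)
addv = additive_rate(base, rr_rad - 1.0, rr_tob - 1.0)
```

With these numbers the multiplicative model predicts a joint rate of 0.030 versus 0.012 under the additive model, illustrating why jointly exposed groups are the data that discriminate between the two fits.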
Rakesh, Ramachandran; Srinivasan, Narayanaswamy
2016-01-01
Cryo-electron microscopy (cryo-EM) has become an important technique for obtaining structural insights into large macromolecular assemblies. However, the resolution of the density maps does not allow for their interpretation at the atomic level. Hence they are combined with high-resolution structures, along with information from other experimental or bioinformatics techniques, to obtain pseudo-atomic models. Here, we describe the use of evolutionary conservation of residues, as obtained from protein structures and alignments of homologous proteins, to detect errors in the fitting of atomic structures as well as to improve the accuracy of the protein-protein interfacial regions in cryo-EM density maps.
Energy Technology Data Exchange (ETDEWEB)
Walker, M D; Matthews, J C; Asselin, M-C; Julyan, P J [School of Cancer and Enabling Sciences, Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, M20 3LJ (United Kingdom); Watson, C C [Siemens Medical Solutions Molecular Imaging, Knoxville, TN 37932 (United States); Saleem, A; Dickinson, C; Charnley, N; Price, P M; Jones, T, E-mail: matthew.walker@manchester.ac.u [Academic Department of Radiation Oncology, Christie NHS Foundation Trust, University of Manchester, M20 4BX (United Kingdom)
2010-11-21
The precision of biological parameter estimates derived from dynamic PET data can be limited by the number of acquired coincidence events (prompts and randoms). These numbers are affected by the injected activity (A₀). The benefits of optimizing A₀ were assessed using a new model of data variance which is formulated as a function of A₀. Seven cancer patients underwent dynamic [¹⁵O]H₂O PET scans (32 scans) using a Biograph PET-CT scanner (Siemens), with A₀ varied (142-839 MBq). These data were combined with simulations to (1) determine the accuracy of the new variance model, (2) estimate the improvements in parameter estimate precision gained by optimizing A₀, and (3) examine changes in precision for different size regions of interest (ROIs). The new variance model provided a good estimate of the relative variance in dynamic PET data across a wide range of A₀ values and time frames for FBP reconstruction. Patient data showed that relative changes in estimate precision with A₀ were in reasonable agreement with the changes predicted by the model: Pearson's correlation coefficients were 0.73 and 0.62 for perfusion (F) and the volume of distribution (V_T), respectively. The between-scan variability in the parameter estimates agreed with the estimated precision for small ROIs (<5 mL). An A₀ of 500-700 MBq was near optimal for estimating F and V_T from abdominal [¹⁵O]H₂O scans on this scanner. This optimization improved the precision of parameter estimates for small ROIs (<5 mL), with an injection of 600 MBq reducing the standard error on F by a factor of 1.13 compared to an injection of 250 MBq, but by the more modest factor of 1.03 compared to A₀ = 400 MBq.
Energy Technology Data Exchange (ETDEWEB)
Lopez-Pino, N.; Padilla-Cabal, F.; Garcia-Alvarez, J. A.; Vazquez, L.; D' Alessandro, K.; Correa-Alfonso, C. M. [Departamento de Fisica Nuclear, Instituto Superior de Tecnologia y Ciencias Aplicadas (InSTEC) Ave. Salvador Allende y Luaces. Quinta de los Molinos. Habana 10600. A.P. 6163, La Habana (Cuba); Godoy, W.; Maidana, N. L.; Vanin, V. R. [Laboratorio do Acelerador Linear, Instituto de Fisica - Universidade de Sao Paulo Rua do Matao, Travessa R, 187, 05508-900, SP (Brazil)
2013-05-06
A detailed characterization of an X-ray Si(Li) detector was performed to obtain the energy dependence of its efficiency in the photon energy range 6.4-59.5 keV, which was measured and reproduced by Monte Carlo (MC) simulations. Significant discrepancies between MC and experimental values were found when the manufacturer's parameters for the detector were used in the simulation. A complete computerized tomography (CT) scan of the detector made it possible to find the correct crystal dimensions and position inside the capsule. The efficiencies computed with the resulting detector model differed from the measured values by no more than 10% over most of the energy range.
Permadi, Ginanjar Setyo; Adi, Kusworo; Gernowo, Rahmad
2018-02-01
The RSA algorithm provides security when sending messages or data by using two keys: a private key and a public key. The purpose of this research is to build an information system for sending mail that applies RSA security, and to evaluate it directly and comprehensively with the HOT-Fit method to ensure the system meets its goals, producing a system suited to the physics faculty. The security of the RSA algorithm rests on the difficulty of factoring large numbers into their prime factors; recovering those prime factors is what would be required to obtain the private key. HOT-Fit assesses three aspects: technology, judged by system status, system quality and service quality; human, judged by system use and user satisfaction; and organization, judged by structure and environment. The result is a message-sending system with tracking, refined on the basis of the evaluation obtained.
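The factoring argument can be made concrete with a textbook-sized RSA example. A minimal sketch using small primes (real deployments use primes hundreds of digits long; these numbers are purely illustrative):

```python
def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, n):
    # Modular inverse of a modulo n (requires gcd(a, n) == 1)
    g, x, _ = egcd(a, n)
    assert g == 1, "a must be coprime with n"
    return x % n

# Toy key generation: anyone who can factor n back into p and q
# can recompute phi and hence the private exponent d.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public exponent, coprime with phi
d = modinv(e, phi)        # private exponent

msg = 65                  # message encoded as an integer < n
cipher = pow(msg, e, n)   # encrypt with the public key (e, n)
plain = pow(cipher, d, n) # decrypt with the private key (d, n)
```

Encryption and decryption are both modular exponentiation; the asymmetry comes entirely from d being easy to derive from p and q but hard to derive from n alone.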
Paul, Fabian; Noé, Frank; Weikl, Thomas R
2018-03-27
Unstructured proteins and peptides typically fold during binding to ligand proteins. A challenging problem is to identify the mechanism and kinetics of these binding-induced folding processes in experiments and atomistic simulations. In this article, we present a detailed picture of the folding of the inhibitor peptide PMI into a helix during binding to the oncoprotein fragment Mdm2(25-109), obtained from atomistic, explicit-water simulations and Markov state modeling. We find that binding-induced folding of PMI is highly parallel and can occur along a multitude of pathways. Some pathways are induced-fit-like, with binding occurring prior to PMI helix formation, while other pathways are conformational-selection-like, with binding after helix formation. On the majority of pathways, however, binding is intricately coupled to folding, without clear temporal ordering. A central feature of these pathways is PMI motion on the Mdm2 surface, along the binding groove of Mdm2 or over the rim of this groove. The native binding groove of Mdm2 thus appears as an asymmetric funnel for PMI binding. Overall, binding-induced folding of PMI does not fit into the classical picture of induced fit or conformational selection, which implies a clear temporal ordering of binding and folding events. We argue that this holds in general for binding-induced folding processes, because binding and folding events in these processes likely occur on similar time scales and do not exhibit the time-scale separation required for temporal ordering.
Energy Technology Data Exchange (ETDEWEB)
Reed, S.L.; et al.
2017-01-17
We present the discovery and spectroscopic confirmation with the ESO NTT and Gemini South telescopes of eight new 6.0 < z < 6.5 quasars with z_AB < 21.0. These quasars were photometrically selected, without any star-galaxy morphological criteria, from 1533 deg² using SED model fitting to photometric data from the Dark Energy Survey (g, r, i, z, Y), the VISTA Hemisphere Survey (J, H, K) and the Wide-Field Infrared Survey Explorer (W1, W2). The photometric data were fitted with a grid of quasar model SEDs with redshift-dependent Lyman-α forest absorption and a range of intrinsic reddening, as well as a series of low-mass cool star models. Candidates were ranked using an SED-model-based χ²-statistic, which is extendable to other future imaging surveys (e.g. LSST, Euclid). Our spectral confirmation success rate is 100% without the need for follow-up photometric observations as used in other studies of this type. Combined with automatic removal of the main types of non-astrophysical contaminants, the method allows large data sets to be processed without human intervention and without being overrun by spurious false candidates. We also present a robust parametric redshift estimating technique that gives comparable accuracy to MgII and CO based redshift estimators. We find two z ~ 6.2 quasars with HII near zone sizes < 3 proper Mpc, which could indicate that these quasars may be young with ages < 10⁶-10⁷ years or lie in overdense regions of the IGM. The z = 6.5 quasar VDES J0224-4711 has J_AB = 19.75 and is the second most luminous quasar known with z > 6.5.
Reed, S. L.; McMahon, R. G.; Martini, P.; Banerji, M.; Auger, M.; Hewett, P. C.; Koposov, S. E.; Gibbons, S. L. J.; Gonzalez-Solares, E.; Ostrovski, F.; Tie, S. S.; Abdalla, F. B.; Allam, S.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Evrard, A. E.; Finley, D. A.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gaztanaga, E.; Goldstein, D. A.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; James, D. J.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Lima, M.; Maia, M. A. G.; Marshall, J. L.; Melchior, P.; Miller, C. J.; Miquel, R.; Nord, B.; Ogando, R.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Tucker, D. L.; Walker, A. R.; Wester, W.
2017-07-01
We present the discovery and spectroscopic confirmation with the European Southern Observatory New Technology Telescope (NTT) and Gemini South telescopes of eight new, and the rediscovery of two previously known, 6.0 < z < 6.5 quasars, photometrically selected using spectral energy distribution (SED) model fitting to photometric data from the Dark Energy Survey (g, r, i, z, Y), VISTA Hemisphere Survey (J, H, K) and Wide-field Infrared Survey Explorer (W1, W2). The photometric data were fitted with a grid of quasar model SEDs with redshift-dependent Ly α forest absorption and a range of intrinsic reddening, as well as a series of low-mass cool star models. Candidates were ranked using an SED-model-based χ²-statistic, which is extendable to other future imaging surveys (e.g. LSST and Euclid). Our spectral confirmation success rate is 100 per cent without the need for follow-up photometric observations as used in other studies of this type. Combined with automatic removal of the main types of non-astrophysical contaminants, the method allows large data sets to be processed without human intervention and without being overrun by spurious false candidates. We also present a robust parametric redshift estimator that gives comparable accuracy to Mg II and CO-based redshift estimators. We find two z ˜ 6.2 quasars with H II near zone sizes ≤3 proper Mpc that could indicate that these quasars may be young with ages ≲ 10⁶-10⁷ years or lie in overdense regions of the IGM. The z = 6.5 quasar VDES J0224-4711 has J_AB = 19.75 and is the second most luminous quasar known with z ≥ 6.5.
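The χ²-based ranking of candidates against a template grid can be sketched as follows, with hypothetical band fluxes and templates standing in for the survey's actual SED models:

```python
def chi2(obs, err, model):
    # Chi-squared of one model SED against observed band fluxes
    return sum(((o - m) / e) ** 2 for o, m, e in zip(obs, model, err))

# Hypothetical template grid: relative band fluxes for quasar models at
# trial redshifts. Bluer bands drop out at higher z as the Lyman-alpha
# forest absorbs more of the spectrum (toy numbers only).
templates = {
    5.8: [0.0, 0.1, 0.9, 1.0, 1.0],
    6.2: [0.0, 0.0, 0.4, 1.0, 1.0],
    6.6: [0.0, 0.0, 0.0, 0.6, 1.0],
}

# One candidate's observed fluxes in (g, r, i, z, Y) with uniform errors
obs = [0.01, 0.02, 0.38, 0.95, 1.02]
err = [0.05] * 5

# The best-fitting template redshift minimizes chi-squared; a full pipeline
# would also compare against cool-star templates before ranking candidates.
best_z = min(templates, key=lambda z: chi2(obs, err, templates[z]))
```

The same statistic, compared across quasar and contaminant template families, is what lets such a selection run without human vetting of each candidate.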
Ricketts, T Alexander; Sui, Xuemei; Lavie, Carl J; Blair, Steven N; Ross, Robert
2016-05-01
Guidelines for the identification of obesity-related risk stratify disease risk using specific combinations of body mass index and waist circumference. Whether the addition of cardiorespiratory fitness, an independent predictor of disease risk, provides better risk prediction of all-cause mortality within current body mass index and waist circumference categories is unknown. The study objective was to determine whether the addition of cardiorespiratory fitness improves prediction of all-cause mortality risk classified by the combination of body mass index and waist circumference. We performed a prospective observational study using data from the Aerobics Center Longitudinal Study. A total of 31,267 men (mean age, 43.9 years; standard deviation, 9.4 years) who completed a baseline medical examination between 1974 and 2002 were included. The main outcome measure was all-cause mortality. Participants were grouped using body mass index- and waist circumference-specific threshold combinations: normal body mass index (18.5-24.9 kg/m², waist circumference threshold of 90 cm); overweight (25.0-29.9 kg/m², waist circumference threshold of 100 cm); and obese (30.0-34.9 kg/m², waist circumference threshold of 110 cm). Participants were classified using cardiorespiratory fitness as unfit or fit, where unfit was the lowest fifth of the age-specific distribution of maximal exercise test time on the treadmill among the entire Aerobics Center Longitudinal Study population. A total of 1399 deaths occurred over a follow-up of 14.1 ± 7.4 years, for a total of 439,991 person-years of observation. Men who were unfit with a normal body mass index had higher mortality risk than men who were fit. Men who were unfit and overweight had 41% higher mortality risk (HR, 1.41; 95% CI, 1.04-1.90), and men who were unfit and obese were not at increased mortality risk (HR, 1.37; 95% CI, 0.90-2.09)…
http://www.girlshealth.gov/ — What is physical fitness? Physical fitness means you can do everyday …
Directory of Open Access Journals (Sweden)
Palle Duun Rohde
2016-01-01
The ability of natural populations to withstand environmental stresses relies partly on their adaptive ability. In this study, we used a subset of the Drosophila Genetic Reference Panel, a population of inbred, genome-sequenced lines derived from a natural population of Drosophila melanogaster, to investigate whether this population harbors genetic variation for a set of stress resistance and life history traits. Using a genomic approach, we found substantial genetic variation for metabolic rate, heat stress resistance, expression of a major heat shock protein, and egg-to-adult viability investigated at a benign and a higher, stressful temperature. This suggests that these traits will be able to evolve. In addition, we outline an approach for conducting pathway associations based on genomic linear models, which has the potential to identify adaptive genes and pathways, and can therefore be a valuable tool in conservation genomics.
DEFF Research Database (Denmark)
Rohde, Palle Duun; Krag, Kristian; Loeschcke, Volker
2016-01-01
The ability of natural populations to withstand environmental stresses relies partly on their adaptive ability. In this study, we used a subset of the Drosophila Genetic Reference Panel, a population of inbred, genome-sequenced lines derived from a natural population of Drosophila melanogaster, to investigate whether this population harbors genetic variation for a set of stress resistance and life history traits. Using a genomic approach, we found substantial genetic variation for metabolic rate, heat stress resistance, expression of a major heat shock protein, and egg-to-adult viability investigated at a benign and a higher stressful temperature. This suggests that these traits will be able to evolve. In addition, we outline an approach to conduct pathway associations based on genomic linear models, which has potential to identify adaptive genes and pathways, and therefore can be a valuable tool in conservation genomics.
Koral, Kenneth F.; Avram, Anca M.; Kaminski, Mark S.; Dewaraja, Yuni K.
2012-01-01
Abstract Background For individualized treatment planning in radioimmunotherapy (RIT), correlations must be established between tracer-predicted and therapy-delivered absorbed doses. The focus of this work was to investigate this correlation for tumors. Methods The study analyzed 57 tumors in 19 follicular lymphoma patients treated with I-131 tositumomab and imaged with SPECT/CT multiple times after tracer and therapy administrations. Instead of the typical least-squares fit to a single tumor's measured time-activity data, estimation was accomplished via a biexponential mixed model in which the curves from multiple subjects were jointly estimated. The tumor-absorbed dose estimates were determined by patient-specific Monte Carlo calculation. Results The mixed model gave realistic tumor time-activity fits that showed the expected uptake and clearance phases even with noisy data or missing time points. Strong correlations between tracer and therapy tumor-residence times (r = 0.98) and between tracer-predicted and therapy-delivered mean tumor-absorbed doses (r = 0.86) support the use of the tracer study for tumor dosimetry-based treatment planning in RIT. PMID:22947086
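For a single tumor, the biexponential time-activity form underlying the mixed model can be illustrated with an ordinary least-squares fit (the mixed model jointly estimates curves across subjects, which is beyond this sketch). All time points, parameter values, and data below are simulated, not patient data:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a, lam_up, lam_dn):
    # uptake phase (fast rate lam_up) and clearance phase (slow rate lam_dn)
    return a * (np.exp(-lam_dn * t) - np.exp(-lam_up * t))

# simulated imaging time points (hours post-injection) and noisy activity data
t = np.array([0.5, 2.0, 6.0, 12.0, 24.0, 48.0, 72.0, 96.0, 120.0, 144.0, 168.0])
rng = np.random.default_rng(0)
y = biexp(t, 100.0, 0.5, 0.01) + rng.normal(0.0, 2.0, t.size)

popt, _ = curve_fit(biexp, t, y, p0=(80.0, 0.3, 0.02))
# area under the fitted curve, a*(1/lam_dn - 1/lam_up): residence time in arbitrary units
residence_time = popt[0] * (1.0 / popt[2] - 1.0 / popt[1])
```

The mixed-model version would instead share population-level parameters across tumors, stabilizing fits when individual curves are noisy or have missing time points.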
Collett, David
2002-01-01
INTRODUCTION Some Examples The Scope of this Book Use of Statistical Software STATISTICAL INFERENCE FOR BINARY DATA The Binomial Distribution Inference about the Success Probability Comparison of Two Proportions Comparison of Two or More Proportions MODELS FOR BINARY AND BINOMIAL DATA Statistical Modelling Linear Models Methods of Estimation Fitting Linear Models to Binomial Data Models for Binomial Response Data The Linear Logistic Model Fitting the Linear Logistic Model to Binomial Data Goodness of Fit of a Linear Logistic Model Comparing Linear Logistic Models Linear Trend in Proportions Comparing Stimulus-Response Relationships Non-Convergence and Overfitting Some other Goodness of Fit Statistics Strategy for Model Selection Predicting a Binary Response Probability BIOASSAY AND SOME OTHER APPLICATIONS The Tolerance Distribution Estimating an Effective Dose Relative Potency Natural Response Non-Linear Logistic Regression Models Applications of the Complementary Log-Log Model MODEL CHECKING Definition of Re...
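The central fitting step the book covers, a linear logistic model for binomial response data, can be sketched with iteratively reweighted least squares on a simulated bioassay; the doses, trial counts, and true parameters below are invented for illustration:

```python
import numpy as np

def fit_logistic_binomial(x, successes, trials, n_iter=25):
    """IRLS fit of logit(p) = b0 + b1*x to grouped binomial counts."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(n_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        W = trials * p * (1.0 - p)                   # binomial working weights
        z = eta + (successes - trials * p) / W       # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    return beta, p

# simulated bioassay: 5 dose levels, 40 subjects per level
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
trials = np.full(5, 40)
true_p = 1.0 / (1.0 + np.exp(-(-3.0 + 1.5 * x)))
rng = np.random.default_rng(1)
successes = rng.binomial(trials, true_p)

beta, fitted = fit_logistic_binomial(x, successes, trials)

# deviance goodness-of-fit statistic against the saturated model
yhat = trials * fitted
with np.errstate(divide="ignore", invalid="ignore"):
    t1 = np.where(successes > 0, successes * np.log(successes / yhat), 0.0)
    t2 = np.where(trials - successes > 0,
                  (trials - successes) * np.log((trials - successes) / (trials - yhat)), 0.0)
deviance = 2.0 * (t1 + t2).sum()
```

A small deviance relative to its degrees of freedom (here, 5 groups minus 2 parameters) indicates an adequate fit, mirroring the book's goodness-of-fit discussion.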
Jones, Kyle M.; Randtke, Edward A.; Howison, Christine M.; Pagel, Mark D.
2016-03-01
We have developed an MRI method that can measure extracellular pH in tumor tissues, known as acidoCEST MRI. This method relies on the detection of Chemical Exchange Saturation Transfer (CEST) of iopamidol, an FDA-approved CT contrast agent that has two CEST signals. A log10 ratio of the two CEST signals is linearly correlated with pH, but independent of agent concentration, endogenous T1 relaxation time, and B1 inhomogeneity. Therefore, detecting both CEST effects of iopamidol during in vivo studies can be used to accurately measure the extracellular pH in tumor tissues. Past in vivo studies using acidoCEST MRI have suffered from respiration artifacts in orthotopic and lung tumor models that have corrupted pH measurements. In addition, the non-linear fitting method used to analyze results is unreliable, as it is subject to over-fitting, especially with noisy CEST spectra. To improve the technique, we have recently developed a respiration-gated CEST MRI pulse sequence that has greatly reduced motion artifacts, and we have included both a pre-scan and a post-scan to remove endogenous CEST effects. In addition, we fit the results by parameterizing the contrast of the exogenous agent with respect to pH via the Bloch equations modified for chemical exchange, which is less subject to over-fitting than the non-linear method. These advances in the acidoCEST MRI technique and analysis methods have made pH measurements more reliable, especially in areas of the body subject to respiratory motion.
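The ratiometric readout described here, pH linear in the log10 ratio of the agent's two CEST signals, amounts to a simple linear calibration. A toy sketch; the calibration points below are invented for illustration and are not iopamidol's published values:

```python
import numpy as np

# hypothetical phantom calibration: pH vs. log10 ratio of the two CEST signals
cal_log_ratio = np.array([-0.20, -0.05, 0.10, 0.25, 0.40])
cal_ph = np.array([6.2, 6.5, 6.8, 7.1, 7.4])
m, b = np.polyfit(cal_log_ratio, cal_ph, 1)  # linear calibration: pH = m*log10(r) + b

def ph_from_cest(signal_a, signal_b):
    """Estimate extracellular pH from the amplitudes of the agent's two CEST
    signals; the ratio cancels agent concentration, which is why the readout
    is concentration-independent."""
    return m * np.log10(signal_a / signal_b) + b
```

Because only the ratio enters, the same pH estimate is returned whether the agent concentration doubles or halves, as long as both signals scale together.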
DEFF Research Database (Denmark)
de Vries, Stefan P. W.; Gupta, Srishti; Baig, Abiyad
2017-01-01
Campylobacter is the most common cause of foodborne bacterial illness worldwide. Faecal contamination of meat, especially chicken, during processing represents a key route of transmission to humans. There is a lack of insight into the mechanisms driving C. jejuni growth and survival within hosts … -rich and -poor conditions at 4 degrees C and infection of human gut epithelial cells was assessed by Tn-insertion site sequencing (Tn-seq). A total of 331 homologous gene clusters were essential for fitness during in vitro growth in three C. jejuni strains, revealing that a large part of its genome is dedicated …
Chen, Ping-Shun; Yu, Chun-Jen; Chen, Gary Yu-Hsin
2015-08-01
With the growth in the number of elderly and people with chronic diseases, the number of hospital services will need to increase in the near future. With a myriad of information technologies utilized daily and crucial information-sharing tasks performed at hospitals, understanding the relationship between task performance and information systems has become a critical topic. This research explored the resource pooling of hospital management and considered a computed tomography (CT) patient-referral mechanism between two hospitals using the information system theory framework of the Task-Technology Fit (TTF) model. The TTF model can be used to assess the 'match' between task and technology characteristics. The patient-referral process involved an integrated information framework consisting of a hospital information system (HIS), radiology information system (RIS), and picture archiving and communication system (PACS). A formal interview was conducted with the director of the case image center on the applicable characteristics of the TTF model. Next, the ICAM DEFinition (IDEF0) method was utilized to depict the As-Is and To-Be models for CT patient-referral medical operational processes. Further, the study used the 'leagility' concept to remove non-value-added activities and increase the agility of hospitals. The results indicated that hospital information systems could support the CT patient-referral mechanism, increase hospital performance, reduce patient wait time, and enhance the quality of care for patients.
Burnham, Andrew J; Armstrong, Jianling; Lowen, Anice C; Webster, Robert G; Govorkova, Elena A
2015-04-01
Influenza B viruses cause a considerable proportion of seasonal influenza virus infections worldwide. The development of resistance to a single class of available antivirals, the neuraminidase (NA) inhibitors (NAIs), is a public health concern. Amino acid substitutions in the NA glycoprotein of influenza B virus not only can confer antiviral resistance but also can alter viral fitness. Here we used normal human bronchial epithelial (NHBE) cells, a model of the human upper respiratory tract, to examine the replicative capacities and fitness of NAI-resistant influenza B viruses. We show that virus with an E119A NA substitution can replicate efficiently in NHBE cells in the presence of oseltamivir or zanamivir and that virus with the H274Y NA substitution has a relative fitness greater than that of the wild-type NAI-susceptible virus. This study is the first to use NHBE cells to determine the fitness of NAI-resistant influenza B viruses. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Ghiorso, M. S.
2013-12-01
Internally consistent thermodynamic databases are critical resources that facilitate the calculation of heterogeneous phase equilibria and thereby support geochemical, petrological, and geodynamical modeling. These 'databases' are actually derived data/model systems that depend on a diverse suite of physical property measurements, calorimetric data, and experimental phase equilibrium brackets. In addition, such databases are calibrated with the adoption of various models for extrapolation of heat capacities and volumetric equations of state to elevated temperature and pressure conditions. Finally, these databases require specification of thermochemical models for the mixing properties of solid, liquid, and fluid solutions, which are often rooted in physical theory and, in turn, depend on additional experimental observations. The process of 'calibrating' a thermochemical database involves considerable effort and an extensive computational infrastructure. Because of these complexities, the community tends to rely on a small number of thermochemical databases, generated by a few researchers; these databases often have limited longevity and are universally difficult to maintain. ThermoFit is a software framework and user interface whose aim is to provide a modeling environment that facilitates creation, maintenance and distribution of thermodynamic data/model collections. Underlying ThermoFit are data archives of fundamental physical property, calorimetric, crystallographic, and phase equilibrium constraints that provide the essential experimental information from which thermodynamic databases are traditionally calibrated. ThermoFit standardizes schema for accessing these data archives and provides web services for data mining these collections. Beyond simple data management and interoperability, ThermoFit provides a collection of visualization and software modeling tools that streamline the model/database generation process. Most notably, ThermoFit facilitates the
Poitevin, Frédéric; Orland, Henri; Doniach, Sebastian; Koehl, Patrice; Delarue, Marc
2011-01-01
Small Angle X-ray Scattering (SAXS) techniques are becoming more and more useful for structural biologists and biochemists, thanks to better access to dedicated synchrotron beamlines, better detectors and the relative ease of sample preparation. The ability to compute the theoretical SAXS profile of a given structural model, and to compare this profile with the measured scattering intensity, yields crucial structural information about the macromolecule under study and/or its complexes in solution. An important contribution to the profile, besides the macromolecule itself and its solvent-excluded volume, is the excess density due to the hydration layer. AquaSAXS takes advantage of recently developed methods, such as AquaSol, that give the equilibrium solvent density map around macromolecules, to compute an accurate SAXS/WAXS profile of a given structure and to compare it to the experimental one. Here, we describe the interface architecture and capabilities of the AquaSAXS web server (http://lorentz.dynstr.pasteur.fr/aquasaxs.php). PMID:21665925
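The forward step, computing a theoretical SAXS profile from a structural model, can be illustrated in its crudest form with the Debye formula for identical point scatterers. AquaSAXS additionally models the solvent-excluded volume and the hydration-layer density, which this sketch deliberately ignores:

```python
import numpy as np

def debye_profile(coords, q):
    """Debye formula I(q) = sum_ij sin(q*r_ij)/(q*r_ij) for identical point
    scatterers with unit form factors (no solvent or hydration terms)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    intensity = np.empty_like(q)
    for k, qk in enumerate(q):
        x = qk * d
        safe = np.where(x > 0, x, 1.0)          # avoid 0/0 on the diagonal
        intensity[k] = np.where(x > 0, np.sin(safe) / safe, 1.0).sum()
    return intensity

# four dummy "atoms"; as q -> 0 the profile tends to N**2 = 16
atoms = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
profile = debye_profile(atoms, np.array([1e-8, 0.5, 1.0, 2.0]))
```

The q -> 0 limit equals the squared number of scatterers (squared total scattering length in general), a standard sanity check for any SAXS forward calculator.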
International Nuclear Information System (INIS)
Brenna, M.; Colo, G.; Roca-Maza, X.; Bortignon, P.F.; Moghrabi, K.; Grasso, M.
2014-01-01
A completely microscopic beyond mean-field approach has been elaborated to overcome some intrinsic limitations of self-consistent mean-field schemes applied to nuclear systems, such as the incapability to produce some properties of single-particle states (e.g. spectroscopic factors), as well as of collective states (e.g. their damping width and their gamma decay to the ground state or to low lying states). Since commonly used effective interactions are fitted at the mean-field level, one should aim at refitting them including the desired beyond mean-field contributions in the refitting procedure. If zero-range interactions are used, divergences arise. We present some steps towards the refitting of Skyrme interactions, for its application in finite nuclei. (authors)
Xu, Shuqing; Schlüter, Philipp M
2015-01-01
Divergent selection by pollinators can bring about strong reproductive isolation via changes at few genes of large effect. This has recently been demonstrated in sexually deceptive orchids, where studies (1) quantified the strength of reproductive isolation in the field; (2) identified genes that appear to be causal for reproductive isolation; and (3) demonstrated selection by analysis of natural variation in gene sequence and expression. In a group of closely related Ophrys orchids, specific floral scent components, namely n-alkenes, are the key floral traits that control specific pollinator attraction by chemical mimicry of insect sex pheromones. The genetic basis of species-specific differences in alkene production mainly lies in two biosynthetic genes encoding stearoyl-acyl carrier protein desaturases (SAD) that are associated with floral scent variation and reproductive isolation between closely related species, and evolve under pollinator-mediated selection. However, the implications of this genetic architecture of key floral traits on the evolutionary processes of pollinator adaptation and speciation in this plant group remain unclear. Here, we expand on these recent findings to model scenarios of adaptive evolutionary change at SAD2 and SAD5, their effects on plant fitness (i.e., offspring number), and the dynamics of speciation. Our model suggests that the two-locus architecture of reproductive isolation allows for rapid sympatric speciation by pollinator shift; however, the likelihood of such pollinator-mediated speciation is asymmetric between the two orchid species O. sphegodes and O. exaltata due to different fitness effects of their predominant SAD2 and SAD5 alleles. Our study not only provides insight into pollinator adaptation and speciation mechanisms of sexually deceptive orchids but also demonstrates the power of applying a modeling approach to the study of pollinator-driven ecological speciation.
GARCH Modelling of Cryptocurrencies
Directory of Open Access Journals (Sweden)
Jeffrey Chu
2017-10-01
Full Text Available With the exception of Bitcoin, there appears to be little or no literature on GARCH modelling of cryptocurrencies. This paper provides the first GARCH modelling of the seven most popular cryptocurrencies. Twelve GARCH models are fitted to each cryptocurrency, and their fits are assessed in terms of five criteria. Conclusions are drawn on the best fitting models, forecasts and acceptability of value at risk estimates.
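A minimal maximum-likelihood sketch of one member of that family, a Gaussian GARCH(1,1), fitted to simulated returns and scored with AIC as one fit criterion. The paper fits twelve GARCH variants to real cryptocurrency data and uses five criteria; everything below is simulated for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def garch11_nll(params, r):
    """Negative Gaussian log-likelihood of a GARCH(1,1):
    sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                       # enforce positivity/stationarity
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()
    for t in range(1, r.size):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

# simulate returns from a known GARCH(1,1)
rng = np.random.default_rng(7)
omega, alpha, beta = 0.05, 0.10, 0.85
n = 3000
r = np.empty(n)
s2 = omega / (1 - alpha - beta)             # start at the stationary variance
for t in range(n):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2

res = minimize(garch11_nll, x0=(0.1, 0.05, 0.8), args=(r,), method="Nelder-Mead")
aic = 2 * 3 + 2 * res.fun                   # AIC = 2k + 2*NLL, for model comparison
```

Refitting competing specifications and comparing their AIC (or similar criteria) is the model-selection logic the abstract describes.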
O'Connell, Matthew; Delgado, Kristin; Lawrence, Amie; Kung, Mavis; Tristan, Esteban
2017-06-01
A growing body of applied research has identified certain psychological traits that are predictive of worker safety. However, most of these studies suffer from an overreliance on common method bias caused by self-report measures of both: (a) personal factors such as personality traits; and (b) outcomes such as safety behaviors and injuries. This study utilized archival data from 796 employees at a large U.S. automobile manufacturer. Data were gathered on a pre-employment assessment, SecureFit®, that measured key personality characteristics such as conscientiousness, locus of control, and risk taking. In addition, objective measures of workers' compensation claims and disciplinary actions were also gathered. The results indicated that disciplinary actions and workers' compensation claims were strongly correlated. They also demonstrated that the pre-employment assessment was able to predict both disciplinary actions and workers' compensation claims up to 12 months in the future. Screening out just 8% of the applicant sample using the assessment would have resulted in a 35% reduction in disciplinary actions and a 46% reduction in workers' compensation claims. The study found a very strong relationship between counterproductive work behaviors (CWBs), such as not following rules, and workers' compensation claims. It also found a strong relationship between both outcomes and a combination of personality traits previously shown to be associated with them, which the current study was able to demonstrate with objective measures of both variables. Individuals who receive disciplinary actions for things such as not following rules, not coming to work on time, etc. are significantly more likely to also be involved in serious safety incidents, and vice versa. Identifying those individuals early in the hiring process and screening them out can significantly reduce the number of CWBs as well as workers' compensation claims. Copyright © 2017 Elsevier Ltd and
Warner, Daniel A
2014-11-01
Environmental factors strongly influence phenotypic variation within populations. The environment contributes to this variation in two ways: (1) by acting as a determinant of phenotypic variation (i.e., plastic responses) and (2) as an agent of selection that "chooses" among existing phenotypes. Understanding how these two environmental forces contribute to phenotypic variation is a major goal in the field of evolutionary biology and a primary objective of my research program. The objective of this article is to provide a framework to guide studies of environmental sources of phenotypic variation (specifically, developmental plasticity and maternal effects, and their adaptive significance). Two case studies from my research on reptiles are used to illustrate the general approaches I have taken to address these conceptual topics. Some key points for advancing our understanding of environmental influences on phenotypic variation include (1) merging laboratory-based research that identifies specific environmental effects with field studies to validate ecological relevance; (2) using controlled experimental approaches that mimic complex environments found in nature; (3) integrating data across biological fields (e.g., genetics, morphology, physiology, behavior, and ecology) under an evolutionary framework to provide novel insights into the underlying mechanisms that generate phenotypic variation; (4) assessing fitness consequences using measurements of survival and/or reproductive success across ontogeny (from embryos to adults) and under multiple ecologically-meaningful contexts; and (5) quantifying the strength and form of natural selection in multiple populations over multiple periods of time to understand the spatial and temporal consistency of phenotypic selection. Research programs that focus on organisms that are amenable to these approaches will provide the most promise for advancing our understanding of the environmental factors that generate the remarkable phenotypic variation observed in natural populations.
Granacher, Urs; Lesinski, Melanie; Büsch, Dirk; Muehlbauer, Thomas; Prieske, Olaf; Puta, Christian; Gollhofer, Albert; Behm, David G.
2016-01-01
During the stages of long-term athlete development (LTAD), resistance training (RT) is an important means for (i) stimulating athletic development, (ii) tolerating the demands of long-term training and competition, and (iii) inducing long-term health promoting effects that are robust over time and track into adulthood. However, there is a gap in the literature with regards to optimal RT methods during LTAD and how RT is linked to biological age. Thus, the aims of this scoping review were (i) to describe and discuss the effects of RT on muscular fitness and athletic performance in youth athletes, (ii) to introduce a conceptual model on how to appropriately implement different types of RT within LTAD stages, and (iii) to identify research gaps from the existing literature by deducing implications for future research. In general, RT produced small-to-moderate effects on muscular fitness and athletic performance in youth athletes with muscular strength showing the largest improvement. Free weight, complex, and plyometric training appear to be well-suited to improve muscular fitness and athletic performance. In addition, balance training appears to be an important preparatory (facilitating) training program during all stages of LTAD but particularly during the early stages. As youth athletes become more mature, specificity, and intensity of RT methods increase. This scoping review identified research gaps that are summarized in the following and that should be addressed in future studies: (i) to elucidate the influence of gender and biological age on the adaptive potential following RT in youth athletes (especially in females), (ii) to describe RT protocols in more detail (i.e., always report stress and strain-based parameters), and (iii) to examine neuromuscular and tendomuscular adaptations following RT in youth athletes. PMID:27242538
Directory of Open Access Journals (Sweden)
Urs Granacher
2016-05-01
Full Text Available During the stages of long-term athlete development (LTAD), resistance training (RT) is an important means for (i) stimulating athletic development, (ii) tolerating the demands of long-term training and competition, and (iii) inducing long-term health promoting effects that are robust over time and track into adulthood. However, there is a gap in the literature with regards to optimal RT methods during LTAD and how RT is linked to biological age. Thus, the aims of this scoping review were (i) to describe and discuss the effects of RT on muscular fitness and athletic performance in youth athletes, (ii) to introduce a conceptual model on how to appropriately implement different types of RT within LTAD stages, and (iii) to identify research gaps from the existing literature by deducing implications for future research. In general, RT produced small-to-moderate effects on muscular fitness and athletic performance in youth athletes with muscular strength showing the largest improvement. Free weight, complex, and plyometric training appear to be well-suited to improve muscular fitness and athletic performance. In addition, balance training appears to be an important preparatory (facilitating) training program during all stages of LTAD but particularly during the early stages. As youth athletes become more mature, specificity and intensity of RT methods increase. This scoping review identified research gaps that are summarized in the following and that should be addressed in future studies: (i) to elucidate the influence of gender and biological age on the adaptive potential following RT in youth athletes (especially in females), (ii) to describe RT protocols in more detail (i.e., always report stress and strain-based parameters), and (iii) to examine neuromuscular and tendomuscular adaptations following RT in youth athletes.
Directory of Open Access Journals (Sweden)
Renata Pires Gonçalves
2012-02-01
Experiments of the dosage x response type are very common in the determination of nutrient levels for optimal feed balance and typically use regression models to achieve this objective. Nevertheless, routine regression analysis generally does not use a priori information about a possible order relationship in the response variable. Isotonic regression is a least-squares estimation method that generates estimates preserving the data ordering; in the theory of isotonic regression this information is essential and is expected to increase fitting efficiency. The objective of this work was to use an isotonic regression methodology as an alternative way of analyzing data on Zn deposition in the tibia of male birds of the Hubbard lineage. We considered plateau-response models of quadratic polynomial and linear exponential forms. In addition to these models, we also proposed fitting a logarithmic model to the data, and the efficiency of the methodology was evaluated by Monte Carlo simulations considering different scenarios for the parametric values. The isotonization of the data yielded an improvement in all the fitting quality parameters evaluated. Among the models used, the logarithmic one presented parameter estimates most consistent with the values reported in the literature.
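Isotonic least-squares estimates of the kind used in the paper are produced by the pool-adjacent-violators algorithm (PAVA). A self-contained sketch of the algorithm, unrelated to the Zn data themselves:

```python
def pava(y, w=None):
    """Pool-adjacent-violators: the least-squares fit to y constrained to be
    non-decreasing in index order (optionally weighted by w)."""
    if w is None:
        w = [1.0] * len(y)
    blocks = []  # each block: [fitted level, total weight, point count]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # merge backwards while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            lvl2, w2, n2 = blocks.pop()
            lvl1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * lvl1 + w2 * lvl2) / wt, wt, n1 + n2])
    out = []
    for lvl, _, n in blocks:
        out.extend([lvl] * n)
    return out
```

On already monotone data PAVA returns the data unchanged; each violation pools adjacent observations into their weighted mean, which is what "estimates that preserve data ordering" means operationally.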
Everaars, Jeroen; Settele, Josef; Dormann, Carsten F
2018-01-01
Solitary bees are important but declining wild pollinators. During daily foraging in agricultural landscapes, they encounter a mosaic of patches with nest and foraging habitat and unsuitable matrix. It is insufficiently clear how spatial allocation of nesting and foraging resources and foraging traits of bees affect their daily foraging performance. We investigated potential brood cell construction (as proxy of fitness), number of visited flowers, foraging habitat visitation and foraging distance (pollination proxies) with the model SOLBEE (simulating pollen transport by solitary bees, tested and validated in an earlier study), for landscapes varying in landscape fragmentation and spatial allocation of nesting and foraging resources. Simulated bees varied in body size and nesting preference. We aimed to understand effects of landscape fragmentation and bee traits on bee fitness and the pollination services bees provide, as well as interactions between them, and the general consequences it has to our understanding of the system. This broad scope gives multiple key results. 1) Body size determines fitness more than landscape fragmentation, with large bees building fewer brood cells. High pollen requirements for large bees and the related high time budgets for visiting many flowers may not compensate for faster flight speeds and short handling times on flowers, giving them overall a disadvantage compared to small bees. 2) Nest preference does affect distribution of bees over the landscape, with cavity-nesting bees being restricted to nesting along field edges, which inevitably leads to performance reductions. Fragmentation mitigates this for cavity-nesting bees through increased edge habitat. 3) Landscape fragmentation alone had a relatively small effect on all responses. Instead, the local ratio of nest to foraging habitat affected bee fitness positively through reduced local competition. The spatial coverage of pollination increases steeply in response to this ratio
Directory of Open Access Journals (Sweden)
Jeroen Everaars
Full Text Available Solitary bees are important but declining wild pollinators. During daily foraging in agricultural landscapes, they encounter a mosaic of patches with nest and foraging habitat and unsuitable matrix. It is insufficiently clear how spatial allocation of nesting and foraging resources and foraging traits of bees affect their daily foraging performance. We investigated potential brood cell construction (as proxy of fitness), number of visited flowers, foraging habitat visitation and foraging distance (pollination proxies) with the model SOLBEE (simulating pollen transport by solitary bees, tested and validated in an earlier study), for landscapes varying in landscape fragmentation and spatial allocation of nesting and foraging resources. Simulated bees varied in body size and nesting preference. We aimed to understand effects of landscape fragmentation and bee traits on bee fitness and the pollination services bees provide, as well as interactions between them, and the general consequences it has to our understanding of the system. This broad scope gives multiple key results. 1) Body size determines fitness more than landscape fragmentation, with large bees building fewer brood cells. High pollen requirements for large bees and the related high time budgets for visiting many flowers may not compensate for faster flight speeds and short handling times on flowers, giving them overall a disadvantage compared to small bees. 2) Nest preference does affect distribution of bees over the landscape, with cavity-nesting bees being restricted to nesting along field edges, which inevitably leads to performance reductions. Fragmentation mitigates this for cavity-nesting bees through increased edge habitat. 3) Landscape fragmentation alone had a relatively small effect on all responses. Instead, the local ratio of nest to foraging habitat affected bee fitness positively through reduced local competition. The spatial coverage of pollination increases steeply in response to this ratio
Energy Technology Data Exchange (ETDEWEB)
Milani, G., E-mail: gabriele.milani@polimi.it [Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan (Italy); Hanel, T.; Donetti, R. [Pirelli Tyre, Via Alberto e Piero Pirelli 25, 20126 Milan (Italy); Milani, F. [Chem. Co, Via J.F. Kennedy 2, 45030 Occhiobello (Italy)
2016-06-08
The paper studies the possible interaction between two different accelerators (DPG and TBBS) in the chemical kinetics of Natural Rubber (NR) vulcanized with sulphur. The same blend with several DPG and TBBS concentrations is analyzed in depth from an experimental point of view, varying the curing temperature in the range 150-180°C and obtaining rheometer curves in steps of 10°C. In order to study any possible interaction between the two accelerators (and to evaluate its engineering relevance), rheometer data are normalized by means of the well-known Sun and Isayev normalization approach, and two output parameters are assumed as meaningful to give an insight into the possible interaction, namely the time at maximum torque and the reversion percentage. Two different numerical meta-models, which belong to the family of so-called response surfaces (RS), are compared. The first is linear in TBBS and DPG and therefore reproduces no interaction between the accelerators, whereas the second is a non-linear RS with a bilinear term. Both RS are deduced from standard best fitting of the available experimental data. It is found that, generally, there is some interaction between TBBS and DPG, but that the error introduced by using a linear model (no interaction) is generally lower than 10%, i.e. fully acceptable from an engineering standpoint.
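As a rough illustration of the two response-surface (RS) forms compared above, here is a minimal least-squares sketch: a linear RS (no interaction) versus an RS with a bilinear x*y term. The data, variable names, and helper functions are illustrative, not the Pirelli rheometer data or the authors' code.

```python
# Sketch: fit z = a + b*x + c*y (linear RS) and z = a + b*x + c*y + d*x*y
# (bilinear RS) by the normal equations, in pure Python.

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_rs(points, bilinear=False):
    """Least-squares fit of a response surface; returns its coefficients."""
    feats = [((1.0, x, y, x * y) if bilinear else (1.0, x, y))
             for x, y, _ in points]
    k = len(feats[0])
    A = [[sum(f[i] * f[j] for f in feats) for j in range(k)] for i in range(k)]
    rhs = [sum(f[i] * z for f, (_, _, z) in zip(feats, points)) for i in range(k)]
    return solve(A, rhs)

# Synthetic "rheometer output" with a weak interaction term (0.05 * x * y):
data = [(x, y, 2.0 + 0.5 * x + 0.3 * y + 0.05 * x * y)
        for x in (0.5, 1.0, 1.5, 2.0) for y in (0.5, 1.0, 1.5, 2.0)]
lin = fit_rs(data)                  # 3 coefficients, forces no interaction
bil = fit_rs(data, bilinear=True)   # 4 coefficients, recovers the 0.05 term
```

Comparing the residuals of the two fits is one way to judge, as the abstract does, whether the interaction term earns its keep.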
Miyamoto, H.; Shoji, Y.; Akasaka, R.; Lemmon, E. W.
2017-10-01
Natural working fluid mixtures, including combinations of CO2, hydrocarbons, water, and ammonia, are expected to have applications in energy conversion processes such as heat pumps and organic Rankine cycles. However, the available literature data, much of which was published between 1975 and 1992, do not incorporate the recommendations of the Guide to the Expression of Uncertainty in Measurement. Therefore, new and more reliable thermodynamic property measurements obtained with state-of-the-art technology are required. The goal of the present study was to obtain accurate vapor-liquid equilibrium (VLE) properties for complex mixtures based on two different gases with significant variations in their boiling points. Precise VLE data were measured with a recirculation-type apparatus with a 380 cm3 equilibration cell and two windows allowing observation of the phase behavior. This cell was equipped with recirculating and expansion loops that were immersed in temperature-controlled liquid and air baths, respectively. Following equilibration, the composition of the sample in each loop was ascertained by gas chromatography. VLE data were acquired for CO2/ethanol and CO2/isopentane binary mixtures within the temperature range from 300 K to 330 K and at pressures up to 7 MPa. These data were used to fit interaction parameters in a Helmholtz energy mixture model. Comparisons were made with the available literature data and values calculated by thermodynamic property models.
Pärna, Kersti; Pürjer, Mari-Liis; Ringmets, Inge; Tekkel, Mare
2014-07-10
In developed countries, smoking spreads through society like an epidemic in which adults from higher socioeconomic groups are the first to adopt and the earliest to quit smoking, and in which there is a lag in adoption of smoking between men and women. The objective of this study was to describe trends in daily and occasional smoking, to investigate the association between smoking status and education, and to examine whether the associations in 1990-2010 in Estonia fit the pattern predicted by the model of the tobacco epidemic. The study was based on a 20-64-year-old subsample (n = 18,740) of nationally representative postal cross-sectional surveys conducted every second year in Estonia during 1990-2010. Cigarette smoking and education were examined. The χ2 test for trend was used to determine daily and occasional smoking trends over the study years. A multinomial logistic regression model was used to test educational differences in daily and occasional smoking for every study year. Adjusted relative risk ratios (RRRs) with 95% confidence intervals were calculated. In 1990-2010, daily smoking varied largely between genders, showing a decreasing trend among men, but not among women. In 2010, one third of men and one fifth of women were daily smokers. Daily smoking was not clearly associated with education among men in 1990-1994 and among women in 1990-2000. Men revealed an inverse relationship between daily smoking and education from 1996, women from 2002. In 2010, compared to men and women with higher education, the relative risk ratio of daily smoking was 2.92 (95% CI = 2.01-4.25) among men and 2.29 (95% CI = 1.65-3.17) among women with secondary education, but 4.98 (95% CI = 3.12-7.94) among men and 6.62 (95% CI = 4.07-10.76) among women with basic education. In 1990-2010, occasional smoking was stable and similar (varying between 7% and 10%) among men and women, and no association with education was found. Daily smoking patterns in Estonia fit the model of the tobacco epidemic in developed countries
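The relative risk ratios reported above come from a multinomial logistic model. For a single binary smoking outcome compared against a reference education group, the exponentiated coefficient reduces to a ratio computable directly from a 2x2 table, with a Wald confidence interval. A hedged sketch follows; the counts and the function name are illustrative, not the Estonian survey data.

```python
# Sketch: odds-form risk ratio for smoking in an exposure group vs a
# reference group, with a 95% Wald CI on the log scale.
import math

def rrr_with_ci(smokers_exp, total_exp, smokers_ref, total_ref, z=1.96):
    """Ratio for an education group vs the reference group, with Wald CI."""
    odds_exp = smokers_exp / (total_exp - smokers_exp)
    odds_ref = smokers_ref / (total_ref - smokers_ref)
    rrr = odds_exp / odds_ref
    # Wald standard error of log(rrr) from the four cell counts
    se = math.sqrt(1 / smokers_exp + 1 / (total_exp - smokers_exp)
                   + 1 / smokers_ref + 1 / (total_ref - smokers_ref))
    lo = math.exp(math.log(rrr) - z * se)
    hi = math.exp(math.log(rrr) + z * se)
    return rrr, (lo, hi)
```

In the full multinomial model the same exp(coefficient) logic applies, with adjustment for covariates handled by the regression.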
Directory of Open Access Journals (Sweden)
Erxu Pi
Temperature is one of the most significant environmental factors that affects germination of grass seeds. Reliable prediction of the optimal temperature for seed germination is crucial for determining the suitable regions and favorable sowing timing for turf grass cultivation. In this study, a back-propagation-artificial-neural-network-aided dual quintic equation (BP-ANN-QE) model was developed to improve the prediction of the optimal temperature for seed germination. This BP-ANN-QE model was used to determine optimal sowing times and suitable regions for three Cynodon dactylon cultivars (C. dactylon, 'Savannah' and 'Princess VII'). Prediction of the optimal temperature for these seeds was based on comprehensive germination tests using 36 day/night (high/low) temperature regimes (both ranging from 5/5 to 40/40°C with 5°C increments). Seed germination data from these temperature regimes were used to construct temperature-germination correlation models for estimating germination percentage with confidence intervals. Our tests revealed that the optimal high/low temperature regimes required for all the three bermudagrass cultivars are 30/5, 30/10, 35/5, 35/10, 35/15, 35/20, 40/15 and 40/20°C; constant temperatures ranging from 5 to 40°C inhibited the germination of all three cultivars. While comparing different simulating methods, including DQEM, Bisquare ANN-QE, and BP-ANN-QE in establishing temperature-based germination percentage rules, we found that the R(2) values of the germination prediction function could be significantly improved from about 0.6940-0.8177 (DQEM approach) to 0.9439-0.9813 (BP-ANN-QE). These results indicated that our BP-ANN-QE model has better performance than the rest of the compared models. Furthermore, data of the national temperature grids generated from monthly-average temperature for 25 years were fit into these functions and we were able to map the germination percentage of these C. dactylon cultivars on the national scale
Pi, Erxu; Mantri, Nitin; Ngai, Sai Ming; Lu, Hongfei; Du, Liqun
2013-01-01
Temperature is one of the most significant environmental factors that affects germination of grass seeds. Reliable prediction of the optimal temperature for seed germination is crucial for determining the suitable regions and favorable sowing timing for turf grass cultivation. In this study, a back-propagation-artificial-neural-network-aided dual quintic equation (BP-ANN-QE) model was developed to improve the prediction of the optimal temperature for seed germination. This BP-ANN-QE model was used to determine optimal sowing times and suitable regions for three Cynodon dactylon cultivars (C. dactylon, 'Savannah' and 'Princess VII'). Prediction of the optimal temperature for these seeds was based on comprehensive germination tests using 36 day/night (high/low) temperature regimes (both ranging from 5/5 to 40/40°C with 5°C increments). Seed germination data from these temperature regimes were used to construct temperature-germination correlation models for estimating germination percentage with confidence intervals. Our tests revealed that the optimal high/low temperature regimes required for all the three bermudagrass cultivars are 30/5, 30/10, 35/5, 35/10, 35/15, 35/20, 40/15 and 40/20°C; constant temperatures ranging from 5 to 40°C inhibited the germination of all three cultivars. While comparing different simulating methods, including DQEM, Bisquare ANN-QE, and BP-ANN-QE in establishing temperature-based germination percentage rules, we found that the R(2) values of the germination prediction function could be significantly improved from about 0.6940-0.8177 (DQEM approach) to 0.9439-0.9813 (BP-ANN-QE). These results indicated that our BP-ANN-QE model has better performance than the rest of the compared models. Furthermore, data of the national temperature grids generated from monthly-average temperature for 25 years were fit into these functions and we were able to map the germination percentage of these C. dactylon cultivars on the national scale of China, and
Defining fitness in evolutionary models
Indian Academy of Sciences (India)
2008-12-23
Dec 23, 2008 … [re]cently, by the work of Charlesworth (1994, for the collected analyses) … offspring produced by a female at the end of the season that survive to … Kawecki T. J. and Stearns S. C. 1993 The evolution of life histories in spatially heterogeneous environments: optimal reaction norms revisited. Evol. Ecol.
Vedøy, Tord Finne
2012-01-01
Objectives. The aim was (1) to investigate the association between education and smoking status (current, former and never-smoking) among non-western immigrants in Norway and (2) to examine if these associations fit the pattern predicted by the model of the cigarette epidemic. Design. Data came from the Oslo Health Study and the Oslo Immigrant Health Study (2000–2002). The first included all Oslo citizens from seven selected birth cohorts. The second included all Oslo citizens born in Turkey, Iran, Pakistan, Vietnam and Sri Lanka. 14,768 respondents answered questions on smoking, education and relevant background variables (overall response rate 43.3%). Two gender-specific multinomial logistic regression models with smoking status [current, former or never-smoker (reference)] as dependent variable were computed, and predicted probabilities of smoking status among groups with different levels of education were calculated. Results. Smoking prevalence among men ranged from 19% among Sri Lankans to 56% among Turks. Compared to the smoking prevalence among Norwegian men (27%), smoking was widespread among Iranians (42%) and Vietnamese (36%). Higher education was associated with lower probability of current smoking among all male immigrant groups except Sri Lankans. Never having smoked was positively associated with education among Pakistani and Norwegian men. Among women, the probability of current smoking among those with higher education was higher than for other levels of education. The probability of being a never-smoker was high among Turkish and Iranian women with primary education. Conclusions. High smoking prevalence among Turkish and Iranian men highlights the importance of addressing smoking behaviour in subgroups of the general population. Smoking was almost non-existent among Pakistani, Vietnamese and Sri Lankan women, indicating strong persistent social norms against smoking. PMID:22762415
Vedøy, Tord Finne
2013-01-01
The aim was (1) to investigate the association between education and smoking status (current, former and never-smoking) among non-western immigrants in Norway and (2) to examine if these associations fit the pattern predicted by the model of the cigarette epidemic. Data came from the Oslo Health Study and the Oslo Immigrant Health Study (2000-2002). The first included all Oslo citizens from seven selected birth cohorts. The second included all Oslo citizens born in Turkey, Iran, Pakistan, Vietnam and Sri Lanka. 14,768 respondents answered questions on smoking, education and relevant background variables (overall response rate 43.3%). Two gender-specific multinomial logistic regression models with smoking status [current, former or never-smoker (reference)] as dependent variable were computed, and predicted probabilities of smoking status among groups with different levels of education were calculated. Smoking prevalence among men ranged from 19% among Sri Lankans to 56% among Turks. Compared to the smoking prevalence among Norwegian men (27%), smoking was widespread among Iranians (42%) and Vietnamese (36%). Higher education was associated with lower probability of current smoking among all male immigrant groups except Sri Lankans. Never having smoked was positively associated with education among Pakistani and Norwegian men. Among women, the probability of current smoking among those with higher education was higher than for other levels of education. The probability of being a never-smoker was high among Turkish and Iranian women with primary education. High smoking prevalence among Turkish and Iranian men highlights the importance of addressing smoking behaviour in subgroups of the general population. Smoking was almost non-existent among Pakistani, Vietnamese and Sri Lankan women and indicates strong persistent social norms against smoking.
Directory of Open Access Journals (Sweden)
Moshirfar M
2011-08-01
Majid Moshirfar(1), Charles M Calvo(2), Krista I Kinard(1), Lloyd B Williams(1), Shameema Sikder(3), Marcus C Neuffer(1). (1)University of Utah, Department of Ophthalmology and Visual Sciences, Salt Lake City, UT, USA; (2)University of Nevada, School of Medicine, Las Vegas, NV, USA; (3)Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, USA. Background: This study analyzes the characteristics of donor and recipient tissue preparation between the Hessburg-Barron and Hanna punch and trephine systems by using elliptical curve-fitting models, light microscopy, and anterior segment optical coherence tomography (AS-OCT). Methods: Eight-millimeter Hessburg-Barron and Hanna vacuum trephines and punches were used on six cadaver globes and six corneal-scleral rims, respectively. Eccentricity data were generated using measurements from photographs of the corneal buttons and were used to generate an elliptical curve fit to calculate properties of the corneal button. The trephination angle and punch angle were measured by digital protractor software from light microscopy and AS-OCT images to evaluate the consistency with which each device cuts the cornea. Results: The Hanna trephine showed a trend towards producing a more circular recipient button than the Barron trephine (ratio of major axis to minor axis, ie, 1.059 ± 0.041 versus 1.110 ± 0.027; P = 0.147), and the Hanna punch showed a trend towards producing a more circular donor cut than the Barron punch (ie, 1.021 ± 0.022 versus 1.046 ± 0.039; P = 0.445). The Hanna trephine was demonstrated to have a more consistent trephination angle than the Barron trephine when assessing light microscopy images (ie, ±14.39° [95% confidence interval (CI) 111.9–157.7] versus ±19.38° [95% CI 101.9–150.2]; P = 0.492) and OCT images (ie, ±8.08° [95% CI 106.2–123.3] versus ±11.16° [95% CI 109.3–132.6]; P = 0.306). The angle created by the Hanna punch had less variability than the Barron punch from both the light microscopy
Energy Technology Data Exchange (ETDEWEB)
Campione, Salvatore [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Warne, Larry K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sainath, Kamalesh [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Basilio, Lorena I. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-10-01
In this report we overview the fundamental concepts for a pair of techniques which together greatly hasten computational predictions of electromagnetic pulse (EMP) excitation of finite-length dissipative conductors over a ground plane. In a time-domain, transmission line (TL) model implementation, predictions are computationally bottlenecked time-wise, either for late-time predictions (about the 100 ns-10000 ns range) or predictions concerning EMP excitation of long TLs (on the order of kilometers or more). This is because the method requires a temporal convolution to account for the losses in the ground. Addressing this to facilitate practical simulation of EMP excitation of TLs, we first apply a technique to extract an (approximate) complex exponential function basis-fit to the ground/Earth's impedance function, followed by incorporating this into a recursion-based convolution acceleration technique. Because the recursion-based method only requires the evaluation of the most recent voltage history data (versus the entire history in a "brute-force" convolution evaluation), we achieve the necessary time speed-ups across a variety of TL/Earth geometry/material scenarios.
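The speed-up described above rests on a standard identity: once the impedance kernel is fitted by exponentials, the convolution of each term c*exp(-a*t) with the excitation obeys a one-step recursion, so only the most recent state is kept. A minimal single-term sketch (illustrative names and data, not the report's code):

```python
# Sketch: O(N) recursive convolution with an exponential kernel versus the
# O(N^2) brute-force sum it replaces.
import math

def recursive_conv(x, c, a, dt):
    """Convolve samples x with c*exp(-a*t) using the one-step recursion."""
    decay = math.exp(-a * dt)
    out, acc = [], 0.0
    for xn in x:
        acc = decay * acc + c * xn * dt   # only the latest state is retained
        out.append(acc)
    return out

def brute_conv(x, c, a, dt):
    """Reference evaluation that rescans the full history at every step."""
    return [sum(c * math.exp(-a * (n - m) * dt) * x[m] * dt
                for m in range(n + 1))
            for n in range(len(x))]
```

A multi-term exponential fit simply runs one such recursion per term and sums the outputs, which is what makes late-time and long-line predictions tractable.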
Morgan, Byron JT; Tanner, Martin Abba; Carlin, Bradley P
2008-01-01
Introduction and Examples: Introduction; Examples of data sets. Basic Model Fitting: Introduction; Maximum-likelihood estimation for a geometric model; Maximum-likelihood for the beta-geometric model; Modelling polyspermy; Which model?; What is a model for?; Mechanistic models. Function Optimisation: Introduction; MATLAB: graphs and finite differences; Deterministic search methods; Stochastic search methods; Accuracy and a hybrid approach. Basic Likelihood Tools: Introduction; Estimating standard errors and correlations; Looking at surfaces: profile log-likelihoods; Confidence regions from profiles; Hypothesis testing in model selection; Score and Wald tests; Classical goodness of fit; Model selection bias. General Principles: Introduction; Parameterisation; Parameter redundancy; Boundary estimates; Regression and influence; The EM algorithm; Alternative methods of model fitting; Non-regular problems. Simulation Techniques: Introduction; Simulating random variables; Integral estimation; Verification; Monte Carlo inference; Estimating sampling distributi...
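One of the outline's first topics, maximum-likelihood estimation for a geometric model, has a closed form worth sketching: for observations k_1..k_n (trials to first success), the MLE is the reciprocal of the sample mean. The helper names and data below are illustrative, not from the book.

```python
# Sketch: MLE and log-likelihood for geometric data with support 1, 2, 3, ...
import math

def geometric_mle(ks):
    """Closed-form MLE of the success probability: p_hat = n / sum(k_i)."""
    return len(ks) / sum(ks)

def geometric_loglik(p, ks):
    """Log-likelihood sum(log p + (k-1) log(1-p)); useful for the profile
    log-likelihood and confidence-region methods the outline lists."""
    return sum(math.log(p) + (k - 1) * math.log(1 - p) for k in ks)
```

Evaluating the log-likelihood on a grid around the MLE is exactly the "looking at surfaces" exercise the contents describe.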
Modeling the Accidental Deaths
Directory of Open Access Journals (Sweden)
Mariyam Hafeez
2008-01-01
The model for accidental deaths in the city of Lahore has been developed using a class of Generalized Linear Models. Various link functions have been used in developing the model. Diagnostic checks have been carried out to assess the validity of the fitted model.
Zumbo, Bruno D.; Ochieng, Charles O.
Many measures found in educational research are ordered categorical response variables that are empirical realizations of an underlying normally distributed variate. These ordered categorical variables are commonly referred to as Likert or rating scale data. Regression models are commonly fit using these ordered categorical variables as the…
International Nuclear Information System (INIS)
Schubmehl, M.
1999-03-01
Temperature and density histories of direct-drive laser fusion implosions are important to an understanding of the reaction's progress. Such measurements also document phenomena such as preheating of the core and improper compression that can interfere with the thermonuclear reaction. Model x-ray spectra from the non-LTE (local thermodynamic equilibrium) radiation transport post-processor for LILAC have recently been fitted to OMEGA data. The spectrum fitting code reads in a grid of model spectra and uses an iterative weighted least-squares algorithm to perform a fit to experimental data, based on user-input parameter estimates. The purpose of this research was to upgrade the fitting code to compute formal uncertainties on fitted quantities, and to provide temperature and density estimates with error bars. A standard error-analysis process was modified to compute these formal uncertainties from information about the random measurement error in the data. Preliminary tests of the code indicate that the variances it returns are both reasonable and useful
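The formal-uncertainty step described above can be sketched for the simplest case, a weighted least-squares fit of y = a*x, where the parameter variance follows directly from the per-point measurement errors: var(a) = 1 / sum(x_i^2 / sigma_i^2). The function name and data are illustrative; the fitting code generalizes this to its iterative multi-parameter fit.

```python
# Sketch: weighted least-squares slope with a propagated 1-sigma error bar.
import math

def fit_slope_with_error(xs, ys, sigmas):
    """Fit y = a*x by weighted least squares; return (a, sigma_a)."""
    w = [1.0 / s ** 2 for s in sigmas]
    a = (sum(wi * x * y for wi, x, y in zip(w, xs, ys))
         / sum(wi * x * x for wi, x in zip(w, xs)))
    # Formal uncertainty from random measurement error in the data
    sigma_a = 1.0 / math.sqrt(sum(wi * x * x for wi, x in zip(w, xs)))
    return a, sigma_a
```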
DEFF Research Database (Denmark)
Carlson, Kerstin
The International Criminal Tribunal for the former Yugoslavia (ICTY) was the first and most celebrated of a wave of international criminal tribunals (ICTs) built in the 1990s designed to advance liberalism through international criminal law. Model(ing) Justice examines the case law of the ICTY...
ten Cate, J.M.
2015-01-01
Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine, which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of
Kobrin, Jennifer L.; Sinharay, Sandip; Haberman, Shelby J.; Chajewski, Michael
2011-01-01
This study examined the adequacy of a multiple linear regression model for predicting first-year college grade point average (FYGPA) using SAT[R] scores and high school grade point average (HSGPA). A variety of techniques, both graphical and statistical, were used to examine if it is possible to improve on the linear regression model. The results…
Finch, W Holmes; Kelley, Ken
2014-01-01
A powerful tool for analyzing nested designs in a variety of fields, multilevel/hierarchical modeling allows researchers to account for data collected at multiple levels. Multilevel Modeling Using R provides you with a helpful guide to conducting multilevel data modeling using the R software environment.After reviewing standard linear models, the authors present the basics of multilevel models and explain how to fit these models using R. They then show how to employ multilevel modeling with longitudinal data and demonstrate the valuable graphical options in R. The book also describes models fo
DEFF Research Database (Denmark)
Knudsen, Torben
2011-01-01
model structure suggested by University of Lund, the WP4 leader. This particular model structure has the advantage that it fits better into the control design framework used by WP3-4 compared to the model structures previously developed in WP2. The different model structures are first summarised....... Then issues dealing with optimal experimental design are considered. Finally the parameters are estimated in the chosen static and dynamic models and a validation is performed. Two of the static models, one of them the additive model, explain the data well. In case of dynamic models the suggested additive...
Kelderman, Hendrikus
1984-01-01
Existing statistical tests for the fit of the Rasch model have been criticized, because they are only sensitive to specific violations of its assumptions. Contingency table methods using loglinear models have been used to test various psychometric models. In this paper, the assumptions of the Rasch
DEFF Research Database (Denmark)
Andreasen, Martin Møller; Meldrum, Andrew
pricing factors using the sequential regression approach. Our findings suggest that the two models largely provide the same in-sample fit, but loadings from ordinary and risk-adjusted Campbell-Shiller regressions are generally best matched by the shadow rate models. We also find that the shadow rate...... models perform better than the QTSMs when forecasting bond yields out of sample....
Current models for the acute toxicity of cationic metals to fish focus on the binding of free metal ions to the gill surface. This binding, and the consequent metal toxicity, can be reduced by metal-complexing ligands...
Directory of Open Access Journals (Sweden)
Ghada R. El Said
2015-12-01
The suggested integrated model helps provide a better understanding of KMS from the perspective of users' motivation, system design, and tasks. This paper contributes academic and practical implications for KMS researchers, developers, and managers.
Zandbelt, Bram
2017-01-01
Introductory presentation on cognitive modeling for the course ‘Cognitive control’ of the MSc program Cognitive Neuroscience at Radboud University. It addresses basic questions, such as 'What is a model?', 'Why use models?', and 'How to use models?'
Anaïs Schaeffer
2012-01-01
By analysing the production of mesons in the forward region of LHC proton-proton collisions, the LHCf collaboration has provided key information needed to calibrate extremely high-energy cosmic ray models. [Figure: Average transverse momentum (pT) as a function of rapidity loss ∆y. Black dots represent LHCf data and the red diamonds represent SPS experiment UA7 results. The predictions of hadronic interaction models are shown by open boxes (sibyll 2.1), open circles (qgsjet II-03) and open triangles (epos 1.99). Among these models, epos 1.99 shows the best overall agreement with the LHCf data.] LHCf is dedicated to the measurement of neutral particles emitted at extremely small angles in the very forward region of LHC collisions. Two imaging calorimeters – Arm1 and Arm2 – take data 140 m either side of the ATLAS interaction point. “The physics goal of this type of analysis is to provide data for calibrating the hadron interaction models – the well-known …
Knight, Gwenan M.; Colijn, Caroline; Shrestha, Sourya; Fofana, Mariam; Cobelens, Frank; White, Richard G.; Dowdy, David W.; Cohen, Ted
2015-01-01
Drug resistance poses a serious challenge for the control of tuberculosis in many settings. It is well established that the expected future trend in resistance depends on the reproductive fitness of drug-resistant Mycobacterium tuberculosis. However, the variability in fitness between strains with
Directory of Open Access Journals (Sweden)
Mehmet YEŞİLBUDAK
2018-03-01
The information about solar parameters is important in the installation of photovoltaic energy systems that are reliable, environmentally friendly and sustainable. In this study, long-term global solar radiation, sunshine duration and air temperature data for Ankara are first analyzed on an annual, monthly and daily basis. Afterwards, three different empirical methods (polynomial, Gaussian and Fourier) are used to model the long-term monthly total global solar radiation, monthly total sunshine duration and monthly mean air temperature data. The coefficient of determination and the root mean square error are computed as statistical test metrics in order to compare the data-modeling performance of the mentioned empirical methods. The empirical methods that provide the best results make it possible to model the solar characteristics of Ankara more accurately, and the achieved outcomes constitute a significant resource for other locations with similar climatic conditions.
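The two test metrics used to rank the polynomial, Gaussian and Fourier fits are standard and easy to state exactly. A minimal sketch, with illustrative data rather than the Ankara measurements:

```python
# Sketch: coefficient of determination (R^2) and root mean square error (RMSE)
# for comparing fitted values against observations.
import math

def r_squared(observed, predicted):
    """R^2 = 1 - SS_res / SS_tot; 1.0 means a perfect fit."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def rmse(observed, predicted):
    """Root mean square error in the units of the observations."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted))
                     / len(observed))
```

A model with higher R^2 and lower RMSE on the same data is preferred, which is the selection rule the study applies.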
Tashiro, Tohru
2014-03-01
We propose a new model of diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters means people (not) possessing the product. This effect is lacking in the Bass model. As an application, we use the model to fit the iPod sales data, and better agreement is obtained than with the Bass model.
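For reference, the baseline the abstract extends is the classic Bass diffusion recursion, which has no memory of past contacts. A hedged sketch of that baseline (parameter values are illustrative, not the iPod fit):

```python
# Sketch: discrete Bass model, N_{t+1} = N_t + (p + q*N_t/M) * (M - N_t),
# where M is market size, p the innovation rate, q the imitation rate.

def bass(M, p, q, steps):
    """Cumulative adopters over time under the classic Bass model."""
    N, path = 0.0, [0.0]
    for _ in range(steps):
        N = N + (p + q * N / M) * (M - N)
        path.append(N)
    return path
```

The proposed memory model replaces the fixed (p + q*N/M) hazard with one that depends on each non-adopter's accumulated contacts, which is where the improved fit comes from.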
International Nuclear Information System (INIS)
Tashiro, Tohru
2014-01-01
We propose a new model of diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters means people (not) possessing the product. This effect is lacking in the Bass model. As an application, we use the model to fit the iPod sales data, and better agreement is obtained than with the Bass model.
Aydinli, A.; Bender, M.; Chong, A.; Yue, X.
2016-01-01
The present research investigates the applicability of prominent Western volunteering frameworks in Hong Kong. Two cross-sectional surveys involving a total of 268 respondents were conducted. In Study 1, we tested a model of volunteering among 149 Hong Kong Chinese adult individuals (Mage = 34.8
Zane, S.; Rea, N.; Turolla, R.; Nobili, L.
2009-01-01
Within the magnetar scenario, the ‘twisted magnetosphere’ model appears very promising in explaining the persistent X-ray emission from soft gamma repeaters (SGRs) and anomalous X-ray pulsars (AXPs). In the first two papers of the series, we have presented a 3D Monte Carlo code for solving radiation
DEFF Research Database (Denmark)
Armour, C.; Carragher, N.; Elhai, J. D.
2013-01-01
Since the initial inclusion of PTSD in the DSM nomenclature, PTSD symptomatology has been distributed across three symptom clusters. However, a wealth of empirical research has concluded that PTSD's latent structure is best represented by one of two four-factor models: Numbing or Dysphoria. Recen...
International Nuclear Information System (INIS)
Gorecki, Paul K.
2013-01-01
The all-island wholesale electricity market, SEM, has to comply with the Target Model by 2016. SEM has worked well for consumers through mitigating market power, facilitating entry and ensuring adequate generation capacity, problems that will persist. But the SEM is a mandatory pool with central dispatch, whereas the Target Model is self-dispatch with bilateral contracts. Minimal change to the SEM in complying with the Target Model is preferable to reinvention of SEM. The latter option might be appropriate when the EU internal electricity market is complete and the all-island market has sufficient interconnection to participate fully in that market. - Highlights: ► The Single Electricity Market (SEM) has worked well for consumers in Ireland. ► The SEM has to conform to the Target Model (TM) by 2016. ► The SEM is mandatory pool/central dispatch; the TM is bilateral contracts/self dispatch. ► Ensuring compliance with TM is best achieved through minimal change to SEM. ► Far reaching change is more appropriate once SEM is fully integrated in the EU electricity market.
Energy Technology Data Exchange (ETDEWEB)
El-Badry, Kareem; Quataert, Eliot [Department of Astronomy, University of California, Berkeley, CA (United States); Wetzel, Andrew R.; Hopkins, Philip F. [TAPIR, California Institute of Technology, Pasadena, CA (United States); Geha, Marla [Department of Astronomy, Yale University, New Haven, CT (United States); Kereš, Dusan; Chan, T. K. [Department of Physics, Center for Astrophysics and Space Sciences, University of California at San Diego, La Jolla (United States); Faucher-Giguère, Claude-André, E-mail: kelbadry@berkeley.edu [Department of Physics and Astronomy and CIERA, Northwestern University, Evanston, IL (United States)
2017-02-01
In low-mass galaxies, stellar feedback can drive gas outflows that generate non-equilibrium fluctuations in the gravitational potential. Using cosmological zoom-in baryonic simulations from the Feedback in Realistic Environments project, we investigate how these fluctuations affect stellar kinematics and the reliability of Jeans dynamical modeling in low-mass galaxies. We find that stellar velocity dispersion and anisotropy profiles fluctuate significantly over the course of galaxies’ starburst cycles. We therefore predict an observable correlation between star formation rate and stellar kinematics: dwarf galaxies with higher recent star formation rates should have systematically higher stellar velocity dispersions. This prediction provides an observational test of the role of stellar feedback in regulating both stellar and dark-matter densities in dwarf galaxies. We find that Jeans modeling, which treats galaxies as virialized systems in dynamical equilibrium, overestimates a galaxy’s dynamical mass during periods of post-starburst gas outflow and underestimates it during periods of net inflow. Short-timescale potential fluctuations lead to typical errors of ∼20% in dynamical mass estimates, even if full three-dimensional stellar kinematics—including the orbital anisotropy—are known exactly. When orbital anisotropy is not known a priori, typical mass errors arising from non-equilibrium fluctuations in the potential are larger than those arising from the mass-anisotropy degeneracy. However, Jeans modeling alone cannot reliably constrain the orbital anisotropy, and problematically, it often favors anisotropy models that do not reflect the true profile. If galaxies completely lose their gas and cease forming stars, fluctuations in the potential subside, and Jeans modeling becomes much more reliable.
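The equilibrium assumption behind Jeans modeling can be illustrated with the simplest virial-style estimator, M_dyn ≈ k * sigma^2 * R / G: if a feedback-driven outflow temporarily inflates sigma by ~10%, the inferred mass is biased high by ~21%, the order of the ~20% errors quoted above. The constant k and the inputs below are illustrative, not the paper's estimator.

```python
# Sketch: virial-style dynamical mass in solar masses from a line-of-sight
# velocity dispersion (km/s) and a radius (pc).
G_PC = 4.301e-3  # gravitational constant in pc * (km/s)^2 / Msun

def jeans_mass(sigma_kms, r_pc, k=5.0):
    """M_dyn ~ k * sigma^2 * R / G; valid only in dynamical equilibrium."""
    return k * sigma_kms ** 2 * r_pc / G_PC
```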
Directory of Open Access Journals (Sweden)
Magnus Benetti
2012-05-01
Full Text Available Cardiopulmonary exercise testing (CPET) is the most accurate tool for assessing cardiorespiratory fitness. However, CPET requires expensive equipment, trained technicians and time, which limits its use in population studies. In view of this issue, the present study aimed to develop regression equations for predicting the cardiorespiratory fitness of adults from simple measurement variables. The study used data from 8,293 subjects, 5,291 male and 3,235 female (age range, 18 to 65 years), recruited in Florianopolis, Santa Catarina. To develop equations for prediction of peak oxygen uptake (VO2peak), the following variables were considered: physical fitness, age, body mass, height, resting heart rate, hypertension, diabetes, dyslipidemia and smoking. After statistical analysis, two equations for men and two for women were developed. The complete equations showed an adjusted R2 = 0.531 and a standard error of estimate (SEE) of 7.15 ml∙kg⁻¹∙min⁻¹ for men, and R2 = 0.436 and SEE = 5.68 ml∙kg⁻¹∙min⁻¹ for women. We conclude that the model developed is feasible and practical for predicting VO2peak in epidemiological studies or when CPET cannot be performed.
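A minimal sketch of the kind of least-squares fit behind such prediction equations, using synthetic data rather than the study's sample; the predictors are a subset of those listed above, and the coefficients carry no clinical meaning.

```python
# Ordinary least squares of a synthetic VO2peak on simple measurements,
# reporting the two fit statistics quoted in the abstract: adjusted R^2
# and the standard error of estimate (SEE). Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(18, 65, n)
body_mass = rng.normal(75, 12, n)
rest_hr = rng.normal(70, 8, n)
# Synthetic "true" relation plus noise, in ml/kg/min
vo2peak = 60 - 0.3 * age - 0.1 * body_mass - 0.1 * rest_hr + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), age, body_mass, rest_hr])
beta, *_ = np.linalg.lstsq(X, vo2peak, rcond=None)
resid = vo2peak - X @ beta

p = X.shape[1] - 1                                  # number of predictors
r2 = 1 - resid.var() / vo2peak.var()
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)       # adjusted R^2
see = np.sqrt(resid @ resid / (n - p - 1))          # standard error of estimate
```

With noise of standard deviation 5, the recovered SEE lands near 5, mirroring how the study's SEE quantifies typical prediction error in ml∙kg⁻¹∙min⁻¹.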
Regularized Structural Equation Modeling
Jacobucci, Ross; Grimm, Kevin J.; McArdle, John J.
2016-01-01
A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating simpler, easier-to-understand models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers gain a high level of flexibility in reducing model complexity, overcoming poorly fitting models, and creating models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM’s utility. PMID:27398019
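The lasso penalty at the heart of RegSEM shrinks small parameters exactly to zero via soft-thresholding, which is what simplifies the model. A generic illustration on a toy regression follows (ordinary lasso by coordinate descent, not a full SEM likelihood; all data are simulated).

```python
# Lasso via coordinate descent: each coordinate update is a
# soft-thresholded univariate least-squares fit, so coefficients whose
# signal falls below the penalty are set exactly to zero.
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of the L1 penalty lam * |theta|."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
true_beta = np.array([2.0, 0.0, 0.0, -1.5, 0.0])   # sparse truth
y = X @ true_beta + rng.normal(0, 0.5, 200)

lam, beta = 20.0, np.zeros(5)
for _ in range(100):                                # sweep until converged
    for j in range(5):
        r = y - X @ beta + X[:, j] * beta[j]        # partial residual
        beta[j] = soft_threshold(X[:, j] @ r, lam) / (X[:, j] @ X[:, j])
```

The truly-zero coefficients are driven exactly to zero while the active ones survive with a small shrinkage bias; RegSEM applies the same penalty to selected loadings or paths inside the SEM fitting function.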
Roh, Soonhee; Burnette, Catherine E; Lee, Kyoung Hag; Lee, Yeon-Shim; Martin, James I; Lawler, Michael J
2017-01-01
American Indian (AI) older adults are vulnerable to mental health disparities, yet very little is known about the factors associated with help-seeking for mental health services among them. The purpose of this study was to investigate the utility of Andersen's Behavioral Model in explaining AI older adults' help-seeking attitudes toward professional mental health services. Hierarchical regression analysis was used to examine predisposing, enabling, and need variables as predictors of help-seeking attitudes toward mental health services in a sample of 233 AI older adults from the Midwest. The model was found to have limited utility in the context of older AI help-seeking attitudes, as the proportion of explained variance was low. Gender, perceived stigma, social support, and physical health were significant predictors, whereas age, perceived mental health, and health insurance were not. © The Author(s) 2014.
Directory of Open Access Journals (Sweden)
Mohammed Koubiti
2014-07-01
Full Text Available Various codes of line-shape modeling are compared to each other through the profile of the C ii 723-nm line for typical plasma conditions encountered in the ablation clouds of carbon pellets injected in magnetic fusion devices. Calculations were performed for a single electron density of 10¹⁷ cm⁻³ and two plasma temperatures (T = 2 and 4 eV). Ion and electron temperatures were assumed to be equal (Te = Ti = T). The magnetic field, B, was set equal to either zero or 4 T. Comparisons between the line-shape modeling codes and two experimental spectra of the C ii 723-nm line, measured perpendicularly to the B-field in the Large Helical Device (LHD) using linear polarizers, are also discussed.
2013-09-30
Group decide that we should address additional species, e.g. bottlenose dolphins, we will take those up in turn. For elephant seals our objective...2012), the right whale analysis provides a framework for analyzing many different mammalian species, including humans. By integrating sporadic...observations with an underlying process model, we can infer how individuals are interacting with their environment, and how their health and condition is
Pradip Saud; Thomas B. Lynch; Anup K. C.; James M. Guldin
2016-01-01
The inclusion of quadratic mean diameter (QMD) and relative spacing index (RSI) substantially improved the predictive capacity of height–diameter at breast height (d.b.h.) and crown ratio (CR) models, respectively. Data were obtained from 208 permanent plots established in western Arkansas and eastern Oklahoma during 1985–1987 and remeasured for the sixth time (2012–...
DEFF Research Database (Denmark)
Stein, Wilfred D; Litman, Thomas
2006-01-01
appears to be a random event. Inasmuch as the kinetics of cancer recurrence in published data sets closely follows the model found for the appearance of sporadic retinoblastoma, tumor recurrence could be triggered by mutations in awakening- suppressor mechanisms. The retinoblastoma tumor suppressor gene...... was identified by tracing its occurrence in familial retinoblastoma pedigrees. Will it be possible to track the postulated cancer recurrence, awakening suppressor gene(s) in early recurrence breast cancer patients?...
Adikaram, K. K. L. B.; Becker, T.
2015-01-01
Data processing requires a robust linear fit identification method. In this paper, we introduce a non-parametric robust linear fit identification method for time series. The method uses an indicator 2/n to identify linear fit, where n is the number of terms in a series. The ratio R_max of (a_max − a_min) to (S_n − a_min·n), and the ratio R_min of (a_max − a_min) to (a_max·n − S_n), are always equal to 2/n, where a_max is the maximum element, a_min is the minimum element and S_n is the sum of all elements. If a series expected to follow y = c contains data that do not agree with the form y = c, then R_max > 2/n and R_min > 2/n imply that the maximum and minimum elements, respectively, do not agree with the linear fit. We define threshold values for outlier and noise detection as (2/n)·(1 + k_1) and (2/n)·(1 + k_2), respectively, where k_1 > k_2 and 0 ≤ k_1 ≤ n/2 − 1. Given this relation and a transformation technique that transforms data into the form y = c, we show that removing all data that do not agree with the linear fit is possible. Furthermore, the method is independent of the number of data points, missing data, removed data points, and the nature of the distribution (Gaussian or non-Gaussian) of the outliers, noise and clean data. These are major advantages over existing linear fit methods. Since a perfect linear relation between two variables is impossible in the real world, we used artificial data sets with extreme conditions to verify the method. The method detects the correct linear fit even when the percentage of data agreeing with the linear fit is less than 50% and the deviation of the data that do not agree with the linear fit is very small, of the order of ±10⁻⁴%. The method results in incorrect detections only when numerical accuracy is insufficient in the calculation process. PMID:26571035
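A sketch of the 2/n identity for a perfectly linear (arithmetic) series, under a simplified reading of the method rather than the authors' full transformation-based algorithm: both ratios equal exactly 2/n for a linear series, and corrupting an element pushes the corresponding ratio above 2/n.

```python
# The 2/n indicator: for a series a_i = a_1 + i*d (a perfect linear fit),
# R_max = (max - min)/(S - min*n) and R_min = (max - min)/(max*n - S)
# both equal 2/n; an element violating the fit raises R_max (or R_min).

def two_over_n_ratios(a):
    """Return (R_max, R_min) for a numeric series a."""
    n, s = len(a), sum(a)
    a_max, a_min = max(a), min(a)
    r_max = (a_max - a_min) / (s - a_min * n)
    r_min = (a_max - a_min) / (a_max * n - s)
    return r_max, r_min

linear = [3 + 2 * i for i in range(10)]       # perfect linear series, n = 10
r_max, r_min = two_over_n_ratios(linear)       # both equal 2/10

corrupted = linear[:]
corrupted[9] = 100                             # maximum no longer fits the line
r_max_out, _ = two_over_n_ratios(corrupted)    # exceeds 2/n, flagging the outlier
```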
Directory of Open Access Journals (Sweden)
Glickman Andrea
2010-01-01
Full Text Available Abstract Needle exchange programs chase political as well as epidemiological dragons, carrying within them both implicit moral and political goals. In the exchange model of syringe distribution, injection drug users (IDUs must provide used needles in order to receive new needles. Distribution and retrieval are co-existent in the exchange model. Likewise, limitations on how many needles can be received at a time compel addicts to have multiple points of contact with professionals where the virtues of treatment and detox are impressed upon them. The centre of gravity for syringe distribution programs needs to shift from needle exchange to needle distribution, which provides unlimited access to syringes. This paper provides a case study of the Washington Needle Depot, a program operating under the syringe distribution model, showing that the distribution and retrieval of syringes can be separated with effective results. Further, the experience of IDUs is utilized, through paid employment, to provide a vulnerable population of people with clean syringes to prevent HIV and HCV.
de Jong, Johan; Lemmink, Koen A. P. M.; King, Abby C.; Huisman, Mark; Stevens, Martin
Objective: To determine the effects on energy expenditure, health and fitness outcomes after 12 months of GALM. Methods: Subjects from matched neighbourhoods were assigned to an intervention (IG) (n = 79) or a waiting-list control group (CG) (n = 102). During the 12 months the IG attended two series
Rafal Podlaski; Francis .A. Roesch
2013-01-01
The goals of this study are (1) to analyse the accuracy of the approximation of empirical distributions of diameter at breast height (dbh) using two-component mixtures of either the Weibull distribution or the gamma distribution in two-cohort stands, and (2) to discuss the procedure of choosing goodness-of-fit tests. The study plots were...
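A hedged sketch of fitting a two-component Weibull mixture by direct likelihood maximization on synthetic dbh data; the study's data and exact fitting procedure are not reproduced here, and all parameter values are invented.

```python
# Two-cohort stand stand-in: simulate dbh values from two Weibull
# components, then recover the mixture by maximizing the log-likelihood
# over an unconstrained parameterization (logit weight, log shapes/scales).
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
dbh = np.concatenate([
    stats.weibull_min.rvs(2.0, scale=12.0, size=600, random_state=rng),  # young cohort
    stats.weibull_min.rvs(4.0, scale=35.0, size=400, random_state=rng),  # old cohort
])

def neg_log_lik(theta):
    w = 1.0 / (1.0 + np.exp(-theta[0]))         # mixing weight in (0, 1)
    c1, s1, c2, s2 = np.exp(theta[1:])           # shapes and scales > 0
    pdf = (w * stats.weibull_min.pdf(dbh, c1, scale=s1)
           + (1 - w) * stats.weibull_min.pdf(dbh, c2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))

x0 = np.array([0.0, np.log(2), np.log(10), np.log(3), np.log(30)])
res = optimize.minimize(neg_log_lik, x0, method="Nelder-Mead",
                        options={"maxiter": 5000, "xatol": 1e-6})
w_hat = 1.0 / (1.0 + np.exp(-res.x[0]))          # recovered mixing weight
```

The same skeleton applies to a gamma mixture by swapping the component density, and the maximized log-likelihood feeds directly into the goodness-of-fit comparisons the study discusses.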
de Jong, Johan; Lemmink, Koen A. P. M.; Stevens, Martin; de Greef, Mathieu H. G.; Rispens, Pieter; King, Abby C.; Mulder, Theo
Objective: To determine the effects on energy expenditure, health and fitness outcomes in sedentary older adults aged 55-65 after 6-month participation in the GALM program. Methods: In three Dutch communities, subjects from matched neighbourhoods were assigned to an intervention (n = 79) or a
DEFF Research Database (Denmark)
Bukh, Jens; Meuleman, Philip; Tellier, Raymond
2010-01-01
Chimpanzees represent the only animal model for studies of the natural history of hepatitis C virus (HCV). To generate virus stocks of important HCV variants, we infected chimpanzees with HCV strains of genotypes 1-6 and determined the infectivity titer of acute-phase plasma pools in additional...... animals. The courses of first- and second-passage infections were similar, with early appearance of viremia, HCV RNA titers of >10(4.7) IU/mL, and development of acute hepatitis; the chronicity rate was 56%. The challenge pools had titers of 10(3)-10(5) chimpanzee infectious doses/mL. Human liver...
Residual-based model diagnosis methods for mixture cure models.
Peng, Yingwei; Taylor, Jeremy M G
2017-06-01
Model diagnosis, an important issue in statistical modeling, has not yet been addressed adequately for cure models. We focus on mixture cure models in this work and propose some residual-based methods to examine the fit of the mixture cure model, particularly the fit of the latency part of the mixture cure model. The new methods extend the classical residual-based methods to the mixture cure model. Numerical work shows that the proposed methods are capable of detecting lack-of-fit of a mixture cure model, particularly in the latency part, such as outliers, improper covariate functional form, or nonproportionality in hazards if the proportional hazards assumption is employed in the latency part. The methods are illustrated with two real data sets that were previously analyzed with mixture cure models. © 2016, The International Biometric Society.
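The classical residual idea the authors extend is the Cox-Snell residual r_i = Ĥ(t_i), the estimated cumulative hazard at each event time, which behaves like a unit-exponential sample when the model fits. A minimal sketch for a plain exponential survival model with no censoring and no cure fraction, not the mixture cure model itself:

```python
# Cox-Snell residuals for an exponential model: fit the rate by maximum
# likelihood, evaluate the cumulative hazard at each observed time, and
# check that the residuals look like Exp(1) draws (mean near 1).
import numpy as np

rng = np.random.default_rng(3)
t = rng.exponential(scale=2.0, size=2000)   # uncensored event times
lam_hat = 1.0 / t.mean()                    # exponential-rate MLE
cox_snell = lam_hat * t                     # H_hat(t_i) = lam_hat * t_i

mean_resid = cox_snell.mean()               # ~1 under a well-fitting model
```

Departures of these residuals from the unit-exponential shape, for example heavy tails from outliers or curvature from a wrong covariate form, are the lack-of-fit signals the proposed methods generalize to the latency part of mixture cure models.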
Directory of Open Access Journals (Sweden)
Tea Ya. Danelyan
2014-01-01
Full Text Available The article states the general principles of structural modeling from the perspective of systems theory and relates it to other types of modeling in order to situate it among the main directions of modeling. Mathematical methods of structural modeling, in particular the method of expert evaluations, are considered.
African Journals Online (AJOL)
Moatez Billah HARIDA
The use of the simulator “Hybrid Electrical Vehicle Model Balances Fidelity and Speed (HEVMBFS)” and the global control strategy make it possible to achieve encouraging results. Key words: series-parallel hybrid vehicle - nonlinear model - linear model - Diesel engine - engine modelling - HEV simulator - predictive ...
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Piccolo, Chiara; Heitzig, Martina
2011-01-01
This chapter presents various types of constitutive models and their applications. Three aspects are dealt with in this chapter, namely: the creation and solution of property models, the application of parameter estimation, and application examples of constitutive models. A systematic...... procedure is introduced for the analysis and solution of property models. Models that capture and represent the temperature-dependent behaviour of physical properties are introduced, as well as equation of state (EOS) models such as the SRK EOS. Modelling of liquid-phase activity coefficients is also...
Directory of Open Access Journals (Sweden)
Leonardo Machado Pires
2007-10-01
Full Text Available Polynomial models are the most widespread in Brazilian forestry for describing tree stem profiles (taper) because of their ease of fitting and precision. The same is not true of nonlinear models, which are more difficult to fit. Among the classical nonlinear models used for profile description are the Gompertz, Logistic and Weibull models. This study therefore compared linear and nonlinear models for describing tree profiles. The comparison criteria were the coefficient of determination (R²), the residual standard error (s_yx), the adjusted coefficient of determination (adjusted R²), residual plots and ease of fitting. The results showed that, among the nonlinear models, the Logistic model performed best overall, although the Gompertz model was better in terms of residual standard error. Among the linear models, the polynomial proposed by Pires & Calegario was superior to the others. When nonlinear and linear models were compared, the Logistic model was better, mainly because of the nonlinear behaviour of the data, the low correlation between its parameters and their easy interpretation, which facilitates convergence and fitting.
Chang, CC
2012-01-01
Model theory deals with a branch of mathematical logic showing connections between a formal language and its interpretations or models. This is the first and most successful textbook in logical model theory. Extensively updated and corrected in 1990 to accommodate developments in model theoretic methods - including classification theory and nonstandard analysis - the third edition added entirely new sections, exercises, and references. Each chapter introduces an individual method and discusses specific applications. Basic methods of constructing models include constants, elementary chains, Skolem functions
2015-09-30
behaviour and in turn to a change in vital rates. In addition, we aim to build statistical tools that can be applied to real-world management situations in...vital rates (Fleishman et al. 2015). Schick participated in a workshop at the IMCC in Glasgow, 2014, which was organised by Leslie New. He focused on...L. Schwarz, S. E. Simmons, L. Thomas, P. L. Tyack, and J. Harwood. 2014. Using short-term measures of behaviour to estimate long-term fitness of
Directory of Open Access Journals (Sweden)
Euzebio Medrado da Silva
2004-04-01
Full Text Available The particle-size distribution of solid particles is essential in areas such as construction materials, soil mechanics, soil physics and sediment transport in rivers, among others. The techniques used to evaluate the particle-size distribution of a sample yield point values, and subsequent interpolation is needed to trace the particle-size distribution curve and obtain specific characteristic diameters. The transformation of point values into continuous functions can be accomplished with mathematical models; however, few studies have sought to determine the best model for fitting particle-size distribution curves. The objective of this work was to test and compare 14 different models that can be used to trace the particle-size distribution curve of solid particles from four measured points. The models were compared by the sum of squared errors between measured and calculated values. The models most recommended for tracing the particle-size distribution curve from four points are those of Skaggs et al. (3P), Lima & Silva (3P), Weibull (3P) and Morgan et al. (3P), all with three fitting parameters.
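A sketch of fitting one three-parameter model to four measured points, as the comparison above requires. The Weibull-type functional form and the four (diameter, fraction finer) points below are hypothetical stand-ins, not the study's models or data.

```python
# Least-squares fit of a three-parameter Weibull-type cumulative curve,
# F(d) = c + (1 - c) * (1 - exp(-a * d^b)), to four (diameter, fraction
# finer) points, then evaluation of the fit residuals.
import numpy as np
from scipy.optimize import curve_fit

d = np.array([0.002, 0.02, 0.2, 2.0])   # diameters in mm (hypothetical)
f = np.array([0.08, 0.28, 0.60, 0.95])  # mass fraction finer (hypothetical)

def psd_weibull3(d, a, b, c):
    """Three-parameter Weibull-type cumulative particle-size curve."""
    return c + (1.0 - c) * (1.0 - np.exp(-a * d**b))

popt, _ = curve_fit(psd_weibull3, d, f, p0=[1.0, 0.5, 0.05], maxfev=10000)
f_fit = psd_weibull3(d, *popt)
rss = np.sum((f_fit - f) ** 2)           # the study's comparison criterion
```

Repeating this fit for each candidate model and ranking by the residual sum of squares is exactly the comparison procedure the abstract describes.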
International Nuclear Information System (INIS)
Buchler, J.R.; Gottesman, S.T.; Hunter, J.H. Jr.
1990-01-01
Various papers on galactic models are presented. Individual topics addressed include: observations relating to galactic mass distributions; the structure of the Galaxy; mass distribution in spiral galaxies; rotation curves of spiral galaxies in clusters; grand design, multiple arm, and flocculent spiral galaxies; observations of barred spirals; ringed galaxies; elliptical galaxies; the modal approach to models of galaxies; self-consistent models of spiral galaxies; dynamical models of spiral galaxies; N-body models. Also discussed are: two-component models of galaxies; simulations of cloudy, gaseous galactic disks; numerical experiments on the stability of hot stellar systems; instabilities of slowly rotating galaxies; spiral structure as a recurrent instability; model gas flows in selected barred spiral galaxies; bar shapes and orbital stochasticity; three-dimensional models; polar ring galaxies; dynamical models of polar rings
Model-Based Enterprise Summit Report
2014-02-01
Models become much more efficient and effective when coupled with knowledge. (Slide residue: lists of model types in the model-based enterprise, including design advisors, CAD fit, machine motion, KanBan trigger, and tolerance models, spanning geometry, kinematics, control, physics and planning system models.)
Ternent, Lucy; Dyson, Rosemary J.; Krachler, Anne-Marie; Jabbari, Sara
2015-01-01
Bacterial resistance to antibiotic treatment is a huge concern: introduction of any new antibiotic is shortly followed by the emergence of resistant bacterial isolates in the clinic. This issue is compounded by a severe lack of new antibiotics reaching the market. The significant rise in clinical resistance to antibiotics is especially problematic in nosocomial infections, where already vulnerable patients may fail to respond to treatment, causing even greater health concern. A recent focus has been on the development of anti-virulence drugs as a second line of defence in the treatment of antibiotic-resistant infections. This treatment, which weakens bacteria by reducing their virulence rather than killing them, should allow infections to be cleared through the body's natural defence mechanisms. In this way there should be little to no selective pressure exerted on the organism and, as such, a predominantly resistant population should be less likely to emerge. However, before the likelihood of resistance to these novel drugs emerging can be predicted, we must first establish whether such drugs can actually be effective. Many believe that anti-virulence drugs would not be powerful enough to clear existing infections, restricting their potential application to prophylaxis. We have developed a mathematical model that provides a theoretical framework to reveal the circumstances under which anti-virulence drugs may or may not be successful. We demonstrate that by harnessing and combining the advantages of antibiotics with those provided by anti-virulence drugs, given infection-specific parameters, it is possible to identify treatment strategies that would efficiently clear bacterial infections, while preventing the emergence of antibiotic-resistant subpopulations. Our findings strongly support the continuation of research into anti-virulence drugs and demonstrate that their applicability may reach beyond infection prevention. PMID:25701634
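A toy version of the modeling idea, far simpler than the authors' framework and with invented rates: logistic bacterial growth against a net clearance rate that combines antibiotic killing with immune clearance boosted by virulence suppression. Clearance succeeds only when the combined rate exceeds the growth rate.

```python
# Logistic growth/clearance caricature: dB/dt = B * (r*(1 - B/K) - delta),
# where delta is the total clearance rate. If delta < r the infection
# persists at a chronic equilibrium; if delta > r it is cleared.
import numpy as np
from scipy.integrate import solve_ivp

r, K, B0 = 1.0, 1e9, 1e6   # growth rate, carrying capacity, inoculum (invented)

def rhs(t, y, clearance):
    B = y[0]
    return [B * (r * (1.0 - B / K) - clearance)]

def final_load(clearance, t_end=50.0):
    sol = solve_ivp(rhs, (0.0, t_end), [B0], args=(clearance,),
                    rtol=1e-8, atol=1e-9)
    return sol.y[0, -1]

b_antibiotic = final_load(0.8)       # antibiotic alone: clearance < growth, chronic
b_combined = final_load(0.8 + 0.8)   # plus anti-virulence-boosted immune clearance
```

In this caricature the antibiotic alone leaves a persistent population near K·(1 − 0.8/r), while the combined strategy drives the load toward zero, echoing the paper's conclusion that combination strategies can clear infections neither drug clears alone.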