WorldWideScience

Sample records for models parametric model-fitting

  1. Classification rates: non‐parametric versus parametric models using ...

    African Journals Online (AJOL)

    This research sought to establish whether non-parametric modeling achieves a higher correct classification ratio than a parametric model. The local likelihood technique was used to fit the data sets. The same sets of data were modeled using the parametric logit, and the abilities of the two models to correctly predict the binary ...

  2. Parametric Explosion Spectral Model

    Energy Technology Data Exchange (ETDEWEB)

    Ford, S R; Walter, W R

    2012-01-19

    Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before occurred. We develop a parametric model of the nuclear explosion seismic source spectrum derived from regional phases that is compatible with earthquake-based geometrical spreading and attenuation. Earthquake spectra are fit with a generalized version of the Brune spectrum, which is a three-parameter model that describes the long-period level, corner-frequency, and spectral slope at high-frequencies. Explosion spectra can be fit with similar spectral models whose parameters are then correlated with near-source geology and containment conditions. We observe a correlation of high gas-porosity (low-strength) with increased spectral slope. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.
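
    The generalized Brune spectrum mentioned above is, in essence, a three-parameter curve (long-period level, corner frequency, high-frequency slope). As a rough illustration only (not the authors' code), a least-squares fit of such a spectral shape might look like the sketch below; the functional form, synthetic data, and starting values are assumptions for demonstration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def brune_like(f, omega0, fc, p):
        """Generalized Brune-type spectrum: long-period level omega0,
        corner frequency fc, high-frequency fall-off exponent p."""
        return omega0 / (1.0 + (f / fc) ** p)

    # Hypothetical spectral samples (frequency in Hz, amplitude in arbitrary units).
    freqs = np.logspace(-1, 1, 50)
    rng = np.random.default_rng(0)
    obs = brune_like(freqs, 100.0, 1.5, 2.2) * rng.lognormal(sigma=0.1, size=freqs.size)

    # Fit in log-amplitude, with log-parametrized level and corner frequency
    # so that both stay positive during the iterations.
    def log_model(f, log_omega0, log_fc, p):
        return np.log(brune_like(f, np.exp(log_omega0), np.exp(log_fc), p))

    popt, _ = curve_fit(log_model, freqs, np.log(obs), p0=[np.log(50.0), np.log(1.0), 2.0])
    omega0, fc, slope = np.exp(popt[0]), np.exp(popt[1]), popt[2]
    print("long-period level:", omega0, "corner frequency:", fc, "slope:", slope)
    ```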

  3. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh; Genton, Marc G.

    2014-01-01

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte

  4. Further Empirical Results on Parametric Versus Non-Parametric IRT Modeling of Likert-Type Personality Data

    Science.gov (United States)

    Maydeu-Olivares, Albert

    2005-01-01

    Chernyshenko, Stark, Chan, Drasgow, and Williams (2001) investigated the fit of Samejima's logistic graded model and Levine's non-parametric MFS model to the scales of two personality questionnaires and found that the graded model did not fit well. We attribute the poor fit of the graded model to small amounts of multidimensionality present in…

  5. Parametric and Non-Parametric System Modelling

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg

    1999-01-01

    the focus is on combinations of parametric and non-parametric methods of regression. This combination can be in terms of additive models where e.g. one or more non-parametric term is added to a linear regression model. It can also be in terms of conditional parametric models where the coefficients...... considered. It is shown that adaptive estimation in conditional parametric models can be performed by combining the well known methods of local polynomial regression and recursive least squares with exponential forgetting. The approach used for estimation in conditional parametric models also highlights how...... networks is included. In this paper, neural networks are used for predicting the electricity production of a wind farm. The results are compared with results obtained using an adaptively estimated ARX-model. Finally, two papers on stochastic differential equations are included. In the first paper, among...
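
    The adaptive estimation described above combines local polynomial regression with recursive least squares (RLS) and exponential forgetting. A minimal sketch of plain RLS with a forgetting factor is given below (the local-weighting part is omitted, and the regressors, forgetting factor and drifting coefficients are illustrative assumptions).

    ```python
    import numpy as np

    def rls_exponential_forgetting(X, y, lam=0.98, delta=1000.0):
        """Recursive least squares with exponential forgetting factor lam.
        X: (n, p) regressor matrix, y: (n,) responses.
        Returns the trajectory of coefficient estimates, one row per observation."""
        n, p = X.shape
        theta = np.zeros(p)
        P = delta * np.eye(p)          # large initial covariance
        history = np.empty((n, p))
        for t in range(n):
            x = X[t]
            err = y[t] - x @ theta     # one-step prediction error
            Px = P @ x
            gain = Px / (lam + x @ Px)
            theta = theta + gain * err
            P = (P - np.outer(gain, Px)) / lam
            history[t] = theta
        return history

    # Example: coefficients drift over time; forgetting lets the estimate track them.
    rng = np.random.default_rng(1)
    n = 500
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    b = np.column_stack([np.linspace(1.0, 2.0, n), np.linspace(-1.0, 1.0, n)])
    y = np.sum(X * b, axis=1) + 0.1 * rng.normal(size=n)
    est = rls_exponential_forgetting(X, y)
    print("final estimate:", est[-1], "true final coefficients:", b[-1])
    ```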

  6. Measured, modeled, and causal conceptions of fitness

    Science.gov (United States)

    Abrams, Marshall

    2012-01-01

    This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804

  7. A Model Fit Statistic for Generalized Partial Credit Model

    Science.gov (United States)

    Liang, Tie; Wells, Craig S.

    2009-01-01

    Investigating the fit of a parametric model is an important part of the measurement process when implementing item response theory (IRT), but research examining it is limited. A general nonparametric approach for detecting model misfit, introduced by J. Douglas and A. S. Cohen (2001), has exhibited promising results for the two-parameter logistic…

  8. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh

    2014-04-03

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate in the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
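
    The point of the article is that the naive plug-in Monte Carlo test is miscalibrated and that a nested (double) simulation layer fixes the level. The toy sketch below, which uses a one-dimensional exponential model and a Kolmogorov-Smirnov-type statistic instead of a spatial point process, only illustrates the single-level parametric bootstrap with re-estimation at each replicate; the article's nested procedure would wrap a second simulation loop around each replicate to calibrate the p-value exactly.

    ```python
    import numpy as np
    from scipy import stats

    def gof_statistic(x, rate):
        """KS-type distance between the data and the fitted exponential CDF."""
        return stats.kstest(x, 'expon', args=(0, 1.0 / rate)).statistic

    def parametric_bootstrap_pvalue(x, n_rep=499, seed=0):
        rng = np.random.default_rng(seed)
        rate_hat = 1.0 / np.mean(x)                 # MLE of the exponential rate
        t_obs = gof_statistic(x, rate_hat)
        t_sim = np.empty(n_rep)
        for j in range(n_rep):
            xs = rng.exponential(1.0 / rate_hat, size=len(x))
            rate_j = 1.0 / np.mean(xs)              # re-estimate on each replicate
            t_sim[j] = gof_statistic(xs, rate_j)
        return (1 + np.sum(t_sim >= t_obs)) / (n_rep + 1)

    rng = np.random.default_rng(42)
    data = rng.exponential(2.0, size=200)           # data truly exponential
    print("bootstrap GOF p-value:", parametric_bootstrap_pvalue(data))
    ```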

  9. Study on Semi-Parametric Statistical Model of Safety Monitoring of Cracks in Concrete Dams

    Directory of Open Access Journals (Sweden)

    Chongshi Gu

    2013-01-01

    Cracks are one of the hidden dangers in concrete dams, and the study of safety monitoring models for concrete dam cracks has always been difficult. Starting from the parametric statistical model of safety monitoring of cracks in concrete dams, with the help of semi-parametric statistical theory, and considering the abnormal behaviors of these cracks, the semi-parametric statistical model of safety monitoring of concrete dam cracks is established to overcome the limitation of the parametric model in expressing the objective model. Previous projects show that the semi-parametric statistical model has a stronger fitting effect and explains cracks in concrete dams better than the parametric statistical model. When used for forecasting, however, the forecast capability of the semi-parametric statistical model is equivalent to that of the parametric statistical model. The modeling of the semi-parametric statistical model is simple, its principle is reasonable, and it is highly practical, with a good application prospect in actual projects.

  10. Image sequence analysis in nuclear medicine: (1) Parametric imaging using statistical modelling

    International Nuclear Information System (INIS)

    Liehn, J.C.; Hannequin, P.; Valeyre, J.

    1989-01-01

    This is a review of parametric imaging methods in Nuclear Medicine. A Parametric Image is an image in which each pixel value is a function of the values of the same pixel across an image sequence. The Local Model Method is the fitting of each pixel's time-activity curve by a model whose parameter values form the Parametric Images. The Global Model Method is the modelling of the changes between two images; it is applied to image comparison. For both methods, the different models, the identification criterion, the optimization methods and the statistical properties of the images are discussed. The analysis of one or more Parametric Images is performed using 1D or 2D histograms. The statistically significant Parametric Images (Images of significant Variances, Amplitudes and Differences) are also proposed.
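
    As a toy illustration of the Local Model Method described above (not the authors' implementation), one can fit a simple mono-exponential washout model to every pixel's time-activity curve and collect the fitted parameters into parametric images; the image size, frame times and model form below are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def washout(t, a, k):
        """Mono-exponential time-activity model for one pixel."""
        return a * np.exp(-k * t)

    # Hypothetical dynamic study: 16 frames of a 32x32 image.
    t = np.linspace(0.5, 20.0, 16)                      # frame mid-times (min)
    rng = np.random.default_rng(3)
    a_true = rng.uniform(50, 150, size=(32, 32))
    k_true = rng.uniform(0.05, 0.3, size=(32, 32))
    frames = washout(t[:, None, None], a_true, k_true) + rng.normal(0, 2, (16, 32, 32))

    amplitude_img = np.zeros((32, 32))                  # the parametric images
    rate_img = np.zeros((32, 32))
    for i in range(32):
        for j in range(32):
            tac = frames[:, i, j]                       # this pixel's time-activity curve
            popt, _ = curve_fit(washout, t, tac, p0=[tac[0], 0.1], maxfev=2000)
            amplitude_img[i, j], rate_img[i, j] = popt
    print("mean fitted washout rate:", rate_img.mean())
    ```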

  11. Bayesian non parametric modelling of Higgs pair production

    Directory of Open Access Journals (Sweden)

    Scarpa Bruno

    2017-01-01

    Statistical classification models are commonly used to separate a signal from a background. In this talk we face the problem of isolating the signal of Higgs pair production using the decay channel in which each boson decays into a pair of b-quarks. Typically in this context non-parametric methods are used, such as Random Forests or different types of boosting tools. We remain in the same non-parametric framework, but we propose to face the problem following a Bayesian approach. A Dirichlet process is used as the prior for the random effects in a logit model which is fitted by leveraging the Polya-Gamma data augmentation. Refinements of the model include the insertion of P-splines into the simple model to relate explanatory variables to the response, and the use of Bayesian trees (BART) to describe the atoms in the Dirichlet process.
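
    For context, the Polya-Gamma data augmentation mentioned above rests on the integral identity of Polson, Scott and Windle (2013); written out in the standard notation (not taken from this talk):

    ```latex
    \frac{(e^{\psi})^{a}}{(1+e^{\psi})^{b}}
      \;=\; 2^{-b}\, e^{\kappa\psi}
      \int_{0}^{\infty} e^{-\omega\psi^{2}/2}\, p(\omega \mid b, 0)\, \mathrm{d}\omega,
      \qquad \kappa = a - \tfrac{b}{2},\quad \omega \sim \mathrm{PG}(b, 0).
    ```

    Conditioning on a draw of the latent ω therefore makes the logit likelihood Gaussian in the linear predictor, which is what allows the Dirichlet-process random effects and the spline coefficients to be updated with conjugate steps.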

  12. Parametric fitting of data obtained from detectors with finite resolution and limited acceptance

    International Nuclear Information System (INIS)

    Gagunashvili, N.D.

    2011-01-01

    A goodness-of-fit test for fitting of a parametric model to data obtained from a detector with finite resolution and limited acceptance is proposed. The parameters of the model are found by minimization of a statistic that is used for comparing experimental data and simulated reconstructed data. Numerical examples are presented to illustrate and validate the fitting procedure.

  13. Latest astronomical constraints on some non-linear parametric dark energy models

    Science.gov (United States)

    Yang, Weiqiang; Pan, Supriya; Paliathanasis, Andronikos

    2018-04-01

    We consider non-linear redshift-dependent equation of state parameters as dark energy models in a spatially flat Friedmann-Lemaître-Robertson-Walker universe. To depict the expansion history of the universe in such cosmological scenarios, we take into account the large-scale behaviour of such parametric models and fit them using a set of the latest observational data of distinct origins, which includes cosmic microwave background radiation, Type Ia Supernovae, baryon acoustic oscillations, redshift space distortion, weak gravitational lensing, Hubble parameter measurements from cosmic chronometers, and finally the local Hubble constant from the Hubble Space Telescope. The fitting technique uses the publicly available code Cosmological Monte Carlo (COSMOMC) to extract the cosmological information from these parametric dark energy models. From our analysis, it follows that those models could describe the late-time accelerating phase of the universe, while they are distinguished from the Λ-cosmology.

  14. Parametric uncertainty modeling for robust control

    DEFF Research Database (Denmark)

    Rasmussen, K.H.; Jørgensen, Sten Bay

    1999-01-01

    The dynamic behaviour of a non-linear process can often be approximated with a time-varying linear model. In the presented methodology the dynamics is modeled non-conservatively as parametric uncertainty in linear time-invariant models. The obtained uncertainty description makes it possible...... to perform robustness analysis on a control system using the structured singular value. The idea behind the proposed method is to fit a rational function to the parameter variation. The parameter variation can then be expressed as a linear fractional transformation (LFT). It is discussed how the proposed...... point changes. It is shown that a diagonal PI control structure provides robust performance towards variations in feed flow rate or feed concentrations. However, when both liquid and vapor flow delays are included, robust performance specifications cannot be satisfied with this simple diagonal control structure...

  15. Parametric resonance in the early Universe—a fitting analysis

    Energy Technology Data Exchange (ETDEWEB)

    Figueroa, Daniel G. [Theoretical Physics Department, CERN, Geneva (Switzerland); Torrentí, Francisco, E-mail: daniel.figueroa@cern.ch, E-mail: f.torrenti@csic.es [Instituto de Física Teórica IFT-UAM/CSIC, Universidad Autónoma de Madrid, Cantoblanco 28049, Madrid (Spain)

    2017-02-01

    Particle production via parametric resonance in the early Universe is a non-perturbative, non-linear and out-of-equilibrium phenomenon. Although it is a well studied topic, whenever a new scenario exhibits parametric resonance, a full re-analysis is normally required. To avoid this tedious task, many works often present only a simplified linear treatment of the problem. In order to surpass this circumstance in the future, we provide a fitting analysis of parametric resonance through all its relevant stages: initial linear growth, non-linear evolution, and relaxation towards equilibrium. Using lattice simulations in an expanding grid in 3+1 dimensions, we parametrize the dynamics' outcome, scanning over the relevant ingredients: role of the oscillatory field, particle coupling strength, initial conditions, and background expansion rate. We emphasize the inaccuracy of the linear calculation of the decay time of the oscillatory field, and propose a more appropriate definition of this scale based on the subsequent non-linear dynamics. We provide simple fits to the relevant time scales and particle energy fractions at each stage. Our fits can be applied to post-inflationary preheating scenarios, where the oscillatory field is the inflaton, or to spectator-field scenarios, where the oscillatory field can be e.g. a curvaton, or the Standard Model Higgs.

  16. Parametric resonance in the early Universe—a fitting analysis

    International Nuclear Information System (INIS)

    Figueroa, Daniel G.; Torrentí, Francisco

    2017-01-01

    Particle production via parametric resonance in the early Universe is a non-perturbative, non-linear and out-of-equilibrium phenomenon. Although it is a well studied topic, whenever a new scenario exhibits parametric resonance, a full re-analysis is normally required. To avoid this tedious task, many works often present only a simplified linear treatment of the problem. In order to surpass this circumstance in the future, we provide a fitting analysis of parametric resonance through all its relevant stages: initial linear growth, non-linear evolution, and relaxation towards equilibrium. Using lattice simulations in an expanding grid in 3+1 dimensions, we parametrize the dynamics' outcome, scanning over the relevant ingredients: role of the oscillatory field, particle coupling strength, initial conditions, and background expansion rate. We emphasize the inaccuracy of the linear calculation of the decay time of the oscillatory field, and propose a more appropriate definition of this scale based on the subsequent non-linear dynamics. We provide simple fits to the relevant time scales and particle energy fractions at each stage. Our fits can be applied to post-inflationary preheating scenarios, where the oscillatory field is the inflaton, or to spectator-field scenarios, where the oscillatory field can be e.g. a curvaton, or the Standard Model Higgs.

  17. Parametrization of Seesaw Models and Light Sterile Neutrinos

    CERN Document Server

    Blennow, Mattias

    2011-01-01

    The recent recomputation of the neutrino fluxes from nuclear reactors relaxes the tension between the LSND and MiniBooNE anomalies and disappearance data when interpreted in terms of sterile neutrino oscillations. The simplest extension of the Standard Model with such fermion singlets is the addition of right-handed sterile neutrinos with small Majorana masses. Even when introducing three right-handed neutrinos, this scenario has less free parameters than the 3+2 scenarios studied in the literature. This begs the question whether the best fit regions obtained can be reproduced by this simplest extension of the Standard Model. In order to address this question, we devise an exact parametrization of Standard Model extensions with right-handed neutrinos. Apart from the usual 3x3 neutrino mixing matrix and the 3 masses of the lightest neutrinos, the extra degrees of freedom are encoded in another 3x3 unitary matrix and 3 additional mixing angles. The parametrization includes all the correlations among masses and ...

  18. Two-parametric model of electron beam in computational dosimetry for radiation processing

    International Nuclear Information System (INIS)

    Lazurik, V.M.; Lazurik, V.T.; Popov, G.; Zimek, Z.

    2016-01-01

    Computer simulation of the irradiation of various materials with an electron beam (EB) can be applied to correct and control the performance of radiation processing installations. Electron beam energy measurement methods are described in the international standards. The obtained measurement results can be extended by implementing computational dosimetry. The authors have developed a computational method for determination of EB energy on the basis of a two-parametric fit of a semi-empirical model for the depth-dose distribution initiated by a mono-energetic electron beam. The analysis of a number of experiments shows that the described method can effectively account for random displacements arising from the use of an aluminum wedge with a continuous strip of dosimetric film and minimize the uncertainty of the electron energy evaluation calculated from the experimental data. A two-parametric fitting method is proposed for determination of the electron beam model parameters. These model parameters are as follows: E0, the energy of a mono-energetic and mono-directional electron source, and X0, the thickness of the aluminum layer located in front of the irradiated object. This allows baseline data related to the characteristics of the electron beam to be obtained, which can later be applied for computer modeling of the irradiation process. Model parameters defined in the international standards (such as Ep, the most probable energy, and Rp, the practical range) can be linked with the characteristics of the two-parametric model (E0, X0), which allows the electron irradiation process to be simulated. The data obtained from the semi-empirical model were checked against a set of experimental results. The proposed two-parametric model for electron beam energy evaluation and the estimation of accuracy for computational dosimetry methods based on the developed model are discussed. - Highlights: • Experimental and computational methods of electron energy evaluation. • Development

  19. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated, i.e. all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  20. The lz(p)* Person-Fit Statistic in an Unfolding Model Context

    NARCIS (Netherlands)

    Tendeiro, Jorge N.

    2017-01-01

    Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded

  1. Hyperbolic and semi-parametric models in finance

    Science.gov (United States)

    Bingham, N. H.; Kiesel, Rüdiger

    2001-02-01

    The benchmark Black-Scholes-Merton model of mathematical finance is parametric, based on the normal/Gaussian distribution. Its principal parametric competitor, the hyperbolic model of Barndorff-Nielsen, Eberlein and others, is briefly discussed. Our main theme is the use of semi-parametric models, incorporating the mean vector and covariance matrix as in the Markowitz approach, plus a non-parametric part, a scalar function incorporating features such as tail-decay. Implementation is also briefly discussed.

  2. Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data.

    Science.gov (United States)

    Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao

    2013-01-01

    Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each of the covariates is nonparametric and additive. However, in practice, there is often prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure where the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we then estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trend. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic bias of the estimates can be reduced significantly while keeping the asymptotic variance the same as that of the unguided estimator. We observe the performance of our method via a simulation study and demonstrate our method by applying it to a real data set on mergers and acquisitions.
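
    A minimal one-covariate sketch of the guided idea described above, under clearly labeled assumptions (a quadratic parametric guide and a local-linear smoother standing in for the full generalized additive model): fit the guide, smooth the residuals nonparametrically, then add the guide back.

    ```python
    import numpy as np

    def local_linear(x_train, y_train, x_eval, bandwidth):
        """Local-linear (kernel-weighted least squares) smoother."""
        preds = np.empty_like(x_eval)
        for i, x0 in enumerate(x_eval):
            w = np.exp(-0.5 * ((x_train - x0) / bandwidth) ** 2)
            X = np.column_stack([np.ones_like(x_train), x_train - x0])
            W = np.diag(w)
            beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)
            preds[i] = beta[0]
        return preds

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(-2, 2, 300))
    m = np.sin(2 * x) + 0.5 * x ** 2                   # true regression function
    y = m + 0.3 * rng.normal(size=x.size)

    # Step 1: parametric guide (a quadratic, an assumption for illustration).
    guide = np.polyval(np.polyfit(x, y, deg=2), x)

    # Step 2: smooth the residuals nonparametrically, then add the guide back.
    guided_estimate = guide + local_linear(x, y - guide, x, bandwidth=0.3)
    unguided_estimate = local_linear(x, y, x, bandwidth=0.3)

    print("guided RMSE:  ", np.sqrt(np.mean((guided_estimate - m) ** 2)))
    print("unguided RMSE:", np.sqrt(np.mean((unguided_estimate - m) ** 2)))
    ```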

  3. The l z ( p ) * Person-Fit Statistic in an Unfolding Model Context.

    Science.gov (United States)

    Tendeiro, Jorge N

    2017-01-01

    Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded unfolding model is used. Results from a simulation study indicate that the person-fit statistic performed relatively well in detecting midpoint response style patterns and not so well in detecting extreme response style patterns.

  4. PRESS-based EFOR algorithm for the dynamic parametrical modeling of nonlinear MDOF systems

    Science.gov (United States)

    Liu, Haopeng; Zhu, Yunpeng; Luo, Zhong; Han, Qingkai

    2017-09-01

    In response to the identification problem concerning multi-degree of freedom (MDOF) nonlinear systems, this study presents the extended forward orthogonal regression (EFOR) based on predicted residual sums of squares (PRESS) to construct a nonlinear dynamic parametrical model. The proposed parametrical model is based on the non-linear autoregressive with exogenous inputs (NARX) model and aims to explicitly reveal the physical design parameters of the system. The PRESS-based EFOR algorithm is proposed to identify such a model for MDOF systems. By using the algorithm, we built a common-structured model based on the fundamental concept of evaluating its generalization capability through cross-validation. The resulting model aims to prevent over-fitting with poor generalization performance caused by the average error reduction ratio (AERR)-based EFOR algorithm. Then, a functional relationship is established between the coefficients of the terms and the design parameters of the unified model. Moreover, a 5-DOF nonlinear system is taken as a case to illustrate the modeling of the proposed algorithm. Finally, a dynamic parametrical model of a cantilever beam is constructed from experimental data. Results indicate that the dynamic parametrical model of nonlinear systems, which depends on the PRESS-based EFOR, can accurately predict the output response, thus providing a theoretical basis for the optimal design of modeling methods for MDOF nonlinear systems.
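
    For a model that is linear in its parameters, the PRESS statistic used by the algorithm above can be computed without refitting, via the standard leave-one-out identity e_(-i) = e_i / (1 - h_ii), with h_ii the leverage. A small generic sketch follows (the regressor matrix is an arbitrary example, not a NARX model):

    ```python
    import numpy as np

    def press_statistic(X, y):
        """Predicted residual sum of squares for a linear-in-parameters model,
        using the leave-one-out shortcut e_loo = e / (1 - h), h = diag of hat matrix."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        residuals = y - X @ beta
        H = X @ np.linalg.pinv(X.T @ X) @ X.T          # hat matrix
        loo_residuals = residuals / (1.0 - np.diag(H))
        return np.sum(loo_residuals ** 2)

    rng = np.random.default_rng(0)
    n = 100
    x = rng.normal(size=n)
    y = 1.0 + 2.0 * x + 0.5 * x ** 2 + 0.2 * rng.normal(size=n)

    # Compare candidate term sets by PRESS, as model-structure selection would.
    X_linear = np.column_stack([np.ones(n), x])
    X_quad = np.column_stack([np.ones(n), x, x ** 2])
    print("PRESS linear:   ", press_statistic(X_linear, y))
    print("PRESS quadratic:", press_statistic(X_quad, y))
    ```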

  5. Simulation of parametric model towards the fixed covariate of right censored lung cancer data

    Science.gov (United States)

    Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Ridwan Olaniran, Oyebayo; Enera Amran, Syahila

    2017-09-01

    In this study, a simulation procedure was applied to measure the fixed covariate of right-censored data by using a parametric survival model. The scale and shape parameters were modified to differentiate the analysis of the parametric regression survival model. Statistically, the biases, mean biases and coverage probabilities were used in this analysis. Consequently, different sample sizes of 50, 100, 150 and 200 were employed to distinguish the impact of the parametric regression model on right-censored data. The R statistical software was utilised to develop the simulation code for right-censored data. Besides, the final model of the right-censored simulation was compared with right-censored lung cancer data from Malaysia. It was found that different values of the shape and scale parameters with different sample sizes help to improve the simulation strategy for right-censored data, and that the Weibull regression survival model provides a suitable fit for the survival data of lung cancer patients in Malaysia.
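
    A hedged sketch of this kind of simulation exercise (the true parameter values, the censoring mechanism, and the use of a plain Weibull likelihood rather than a full regression model are all illustrative assumptions): generate right-censored Weibull data, fit by maximum likelihood, and summarise the bias over replications for each sample size.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_loglik(params, t, event):
        """Negative log-likelihood of right-censored Weibull(shape k, scale lam) data."""
        k, lam = np.exp(params)                  # log scale keeps parameters positive
        logf = np.log(k) - np.log(lam) + (k - 1) * (np.log(t) - np.log(lam)) - (t / lam) ** k
        logS = -(t / lam) ** k
        return -np.sum(np.where(event == 1, logf, logS))

    def simulate_once(n, shape=1.5, scale=10.0, censor_scale=15.0, rng=None):
        t_true = scale * rng.weibull(shape, size=n)
        c = rng.exponential(censor_scale, size=n)     # independent right censoring
        t_obs = np.minimum(t_true, c)
        event = (t_true <= c).astype(int)
        res = minimize(neg_loglik, x0=np.log([1.0, np.mean(t_obs)]), args=(t_obs, event))
        return np.exp(res.x)                          # fitted (shape, scale)

    rng = np.random.default_rng(7)
    for n in (50, 100, 150, 200):
        fits = np.array([simulate_once(n, rng=rng) for _ in range(200)])
        bias = fits.mean(axis=0) - np.array([1.5, 10.0])
        print(f"n={n:3d}  bias(shape, scale) = {bias.round(3)}")
    ```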

  6. Magnetorheological fluid dampers: a review of parametric modelling

    International Nuclear Information System (INIS)

    Wang, D H; Liao, W H

    2011-01-01

    Due to the inherent nonlinear nature of magnetorheological (MR) dampers, one of the challenging aspects for developing and utilizing these devices to achieve high performance is the development of models that can accurately describe their unique characteristics. In this review, the characteristics of MR dampers are summarized according to the measured responses under different conditions. On these bases, the considerations and methods of the parametric dynamic modelling for MR dampers are given and the state-of-the-art parametric dynamic modelling, identification and validation techniques for MR dampers are reviewed. In the past two decades, the models for MR dampers have been focused on how to improve the modelling accuracy. Although the force–displacement behaviour is well represented by most of the proposed dynamic models for MR dampers, no simple parametric models with high accuracy for MR dampers can be found. In addition, the parametric dynamic models for MR dampers with on-line updating ability and the inverse parametric models for MR dampers are scarcely explored. Moreover, whether one dynamic model for MR dampers can portray the force–displacement and force–velocity behaviour is not only determined by the dynamic model itself but also determined by the identification method. (topical review)

  7. Model Adaptation in Parametric Space for POD-Galerkin Models

    Science.gov (United States)

    Gao, Haotian; Wei, Mingjun

    2017-11-01

    The development of low-order POD-Galerkin models is largely motivated by the expectation to use the model developed with a set of parameters at their native values to predict the dynamic behaviors of the same system under different parametric values, in other words, a successful model adaptation in parametric space. However, most of the time, even a small deviation of parameters from their original values may lead to large deviations or unstable results. It has been shown that adding more information (e.g. a steady state, the mean value of a different unsteady state, or an entirely different set of POD modes) may improve the prediction of flow at other parametric states. For a simple case of the flow passing a fixed cylinder, an orthogonal mean mode at a different Reynolds number may stabilize the POD-Galerkin model when the Reynolds number is changed. For a more complicated case of the flow passing an oscillatory cylinder, a global POD-Galerkin model is first applied to handle the moving boundaries, then more information (e.g. more POD modes) is required to predict the flow under different oscillatory frequencies. Supported by ARL.

  8. Parametric cost models for space telescopes

    Science.gov (United States)

    Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtnay

    2017-11-01

    Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And, they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescopes cost models. An effort is underway to develop single variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.

  9. Parametric Cost Models for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney

    2010-01-01

    Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And, they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescopes cost models. An effort is underway to develop single variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.

  10. FITTING OF PARAMETRIC BUILDING MODELS TO OBLIQUE AERIAL IMAGES

    Directory of Open Access Journals (Sweden)

    U. S. Panday

    2012-09-01

    In the literature and in photogrammetric workstations many approaches and systems to automatically reconstruct buildings from remote sensing data are described and available. Those building models are used, for instance, in city modelling or in a cadastre context. If a roof overhang is present, the building walls cannot be estimated correctly from nadir-view aerial images or airborne laser scanning (ALS) data. This leads to inconsistent building outlines, which has a negative influence on visual impression, but more seriously also represents a wrong legal boundary in the cadastre. Oblique aerial images, as opposed to nadir-view images, reveal greater detail, enabling different views of an object taken from different directions to be seen. Building walls are visible from oblique images directly, and those images are used for automated roof overhang estimation in this research. A fitting algorithm is employed to find roof parameters of simple buildings. It uses a least squares algorithm to fit projected wire frames to their corresponding edge lines extracted from the images. Self-occlusion is detected based on the intersection of the viewing ray with the planes formed by the building, whereas occlusion from other objects is detected using an ALS point cloud. Overhang and ground height are obtained by sweeping vertical and horizontal planes respectively. Experimental results are verified with high resolution ortho-images, field survey, and ALS data. A planimetric accuracy of 1 cm mean and 5 cm standard deviation was obtained, while building orientations were accurate to a mean of 0.23° and standard deviation of 0.96° with the ortho-image. Overhang parameters were aligned to approximately 10 cm with the field survey. The ground and roof heights were accurate to means of -9 cm and 8 cm with standard deviations of 16 cm and 8 cm with ALS, respectively. The developed approach reconstructs 3D building models well in cases of sufficient texture. More images should be acquired for

  11. Parametric structural modeling of insect wings

    International Nuclear Information System (INIS)

    Mengesha, T E; Vallance, R R; Barraja, M; Mittal, R

    2009-01-01

    Insects produce thrust and lift forces via coupled fluid-structure interactions that bend and twist their compliant wings during flapping cycles. Insight into this fluid-structure interaction is achieved with numerical modeling techniques such as coupled finite element analysis and computational fluid dynamics, but these methods require accurate and validated structural models of insect wings. Structural models of insect wings depend principally on the shape, dimensions and material properties of the veins and membrane cells. This paper describes a method for parametric modeling of wing geometry using digital images and demonstrates the use of the geometric models in constructing three-dimensional finite element (FE) models and simple reduced-order models. The FE models are more complete and accurate than previously reported models since they accurately represent the topology of the vein network, as well as the shape and dimensions of the veins and membrane cells. The methods are demonstrated by developing a parametric structural model of a cicada forewing.

  12. Housing price prediction: parametric versus semi-parametric spatial hedonic models

    Science.gov (United States)

    Montero, José-María; Mínguez, Román; Fernández-Avilés, Gema

    2018-01-01

    House price prediction is a hot topic in the economic literature. House price prediction has traditionally been approached using a-spatial linear (or intrinsically linear) hedonic models. It has been shown, however, that spatial effects are inherent in house pricing. This article considers parametric and semi-parametric spatial hedonic model variants that account for spatial autocorrelation, spatial heterogeneity and (smooth and nonparametrically specified) nonlinearities using penalized splines methodology. The models are represented as a mixed model that allows for the estimation of the smoothing parameters along with the other parameters of the model. To assess the out-of-sample performance of the models, the paper uses a database containing the price and characteristics of 10,512 homes in Madrid, Spain (Q1 2010). The results obtained suggest that the nonlinear models accounting for spatial heterogeneity and flexible nonlinear relationships between some of the individual or areal characteristics of the houses and their prices are the best strategies for house price prediction.

  13. Toward an Empirically-based Parametric Explosion Spectral Model

    Science.gov (United States)

    Ford, S. R.; Walter, W. R.; Ruppert, S.; Matzel, E.; Hauk, T. F.; Gok, R.

    2010-12-01

    Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never occurred. We develop a parametric model of the nuclear explosion seismic source spectrum derived from regional phases (Pn, Pg, and Lg) that is compatible with earthquake-based geometrical spreading and attenuation. Earthquake spectra are fit with a generalized version of the Brune spectrum, which is a three-parameter model that describes the long-period level, corner-frequency, and spectral slope at high-frequencies. These parameters are then correlated with near-source geology and containment conditions. There is a correlation of high gas-porosity (low strength) with increased spectral slope. However, there are trade-offs between the slope and corner-frequency, which we try to independently constrain using Mueller-Murphy relations and coda-ratio techniques. The relationship between the parametric equation and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source, and aid in the prediction of observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing.

  14. Analysis of survival in breast cancer patients by using different parametric models

    Science.gov (United States)

    Enera Amran, Syahila; Asrul Afendi Abdullah, M.; Kek, Sie Long; Afiqah Muhamad Jamil, Siti

    2017-09-01

    In biomedical applications or clinical trials, right censoring often arises when studying time-to-event data. In this case, some individuals are still alive at the end of the study or are lost to follow-up at a certain time. Handling the censored data properly is an important issue in order to prevent biased information in the analysis. Therefore, this study was carried out to analyze right-censored data with three different parametric models: the exponential model, the Weibull model and the log-logistic model. Data on breast cancer patients from Hospital Sultan Ismail, Johor Bahru, from 30 December 2008 until 15 February 2017 were used in this study to illustrate right-censored data. The covariates included in this study are the survival time t of the breast cancer patients, the age of each patient X1, and the treatment given to the patient X2. In order to determine the best parametric model for analysing the survival of breast cancer patients, the performance of each model was compared based on the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the log-likelihood value using the statistical software R. When analysing the breast cancer data, all three distributions showed consistency with the data, with the line graph of the cumulative hazard function resembling a straight line through the origin. As a result, the log-logistic model was the best-fitting parametric model compared with the exponential and Weibull models, since it has the smallest AIC and BIC values and the largest log-likelihood.
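
    To make the model comparison concrete, here is a minimal sketch of how an AIC/BIC comparison between the exponential and Weibull distributions could be done on right-censored data by direct maximum likelihood (the data are synthetic, and the log-logistic case would follow the same pattern with its own log-density and log-survival function):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def weibull_nll(logp, t, d):
        k, lam = np.exp(logp)
        logf = np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam) ** k
        return -np.sum(d * logf + (1 - d) * (-(t / lam) ** k))

    def exponential_nll(logp, t, d):
        lam = np.exp(logp[0])                          # mean (scale) parameter
        return -np.sum(d * (-np.log(lam) - t / lam) + (1 - d) * (-t / lam))

    def ic(nll_value, n_params, n_obs):
        aic = 2 * n_params + 2 * nll_value
        bic = n_params * np.log(n_obs) + 2 * nll_value
        return aic, bic

    rng = np.random.default_rng(11)
    n = 300
    t_true = 8.0 * rng.weibull(1.7, size=n)            # synthetic survival times
    c = rng.exponential(12.0, size=n)
    t, d = np.minimum(t_true, c), (t_true <= c).astype(int)

    res_w = minimize(weibull_nll, np.log([1.0, t.mean()]), args=(t, d))
    res_e = minimize(exponential_nll, np.log([t.mean()]), args=(t, d))
    print("Weibull     AIC/BIC:", ic(res_w.fun, 2, n))
    print("Exponential AIC/BIC:", ic(res_e.fun, 1, n))
    ```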

  15. Incorporating parametric uncertainty into population viability analysis models

    Science.gov (United States)

    McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.

    2011-01-01

    Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
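
    A skeletal version of the two-step simulation structure described above (all demographic values below are placeholders, not piping plover estimates): parameters are drawn once per replicate from their sampling distributions (parametric uncertainty), and environmental noise is drawn inside the yearly loop (temporal variance).

    ```python
    import numpy as np

    def project_population(n0=100, years=50, reps=5000, quasi_ext=10, seed=0):
        rng = np.random.default_rng(seed)
        extinct = 0
        final_sizes = np.empty(reps)
        for r in range(reps):
            # Replication loop: draw this replicate's mean growth rate from its
            # sampling distribution (parametric uncertainty).
            mean_log_lambda = rng.normal(loc=0.00, scale=0.03)
            n = float(n0)
            for _ in range(years):
                # Time-step loop: temporal (environmental) variance around that mean.
                log_lambda = rng.normal(loc=mean_log_lambda, scale=0.10)
                n *= np.exp(log_lambda)
                if n < quasi_ext:
                    n = 0.0
                    break
            extinct += n == 0.0
            final_sizes[r] = n
        return extinct / reps, np.median(final_sizes)

    p_ext, med_n = project_population()
    print(f"quasi-extinction probability: {p_ext:.3f}, median final N: {med_n:.1f}")
    ```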

  16. Interactive Dimensioning of Parametric Models

    KAUST Repository

    Kelly, T.

    2015-06-22

    We propose a solution for the dimensioning of parametric and procedural models. Dimensioning has long been a staple of technical drawings, and we present the first solution for interactive dimensioning: A dimension line positioning system that adapts to the view direction, given behavioral properties. After proposing a set of design principles for interactive dimensioning, we describe our solution consisting of the following major components. First, we describe how an author can specify the desired interactive behavior of a dimension line. Second, we propose a novel algorithm to place dimension lines at interactive speeds. Third, we introduce multiple extensions, including chained dimension lines, controls for different parameter types (e.g. discrete choices, angles), and the use of dimension lines for interactive editing. Our results show the use of dimension lines in an interactive parametric modeling environment for architectural, botanical, and mechanical models.

  17. Parametric fitting of corneal height data to a biconic surface.

    Science.gov (United States)

    Janunts, Edgar; Kannengießer, Marc; Langenbucher, Achim

    2015-03-01

    As the average corneal shape can effectively be approximated by a conic section, a determination of the corneal shape by biconic parameters is desired. The purpose of the paper is to introduce a straightforward mathematical approach for extracting clinically relevant parameters of the corneal surface, such as radii of curvature and conic constants for the principal meridians, and astigmatism. A general description for modeling the ocular surfaces in a biconic form is given, based on which an implicit parametric surface fitting algorithm is introduced. The solution of the biconic fitting is obtained by a two-step sequential least squares optimization approach with constraints. The data input can be raw information from any corneal topographer, with a not necessarily uniform data distribution. Various simulated and clinical data are studied, including surfaces with rotationally symmetric and non-symmetric geometries. The clinical data were obtained from the Pentacam (Oculus) for a patient who had undergone refractive surgery. A sub-micrometer fitting accuracy was obtained for all simulated surfaces: at most 0.08 μm RMS fitting error for rotationally symmetric and 0.125 μm for non-symmetric surfaces. The astigmatism was recovered at sub-minute resolution. The presented model is shown to match the widely used quadric fitting model for rotationally symmetric surfaces and to outperform it for non-symmetric surfaces. The introduced biconic surface fitting algorithm is able to recover the apical radii of curvature and conic constants in the principal meridians. This methodology could be a platform for advanced IOL calculations and enhanced contact lens fitting. Copyright © 2014. Published by Elsevier GmbH.
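
    For orientation, the standard biconic sag equation has the form z = (c_x x² + c_y y²) / (1 + sqrt(1 - (1 + k_x) c_x² x² - (1 + k_y) c_y² y²)). The sketch below fits all four parameters in a single nonlinear least-squares step, rather than the paper's two-step constrained procedure, and uses synthetic height data with illustrative parameter values.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def biconic_sag(x, y, cx, cy, kx, ky):
        """Sag of a biconic surface with principal curvatures cx, cy (1/radius)
        and conic constants kx, ky (standard biconic sag equation)."""
        num = cx * x ** 2 + cy * y ** 2
        root_arg = 1.0 - (1.0 + kx) * cx ** 2 * x ** 2 - (1.0 + ky) * cy ** 2 * y ** 2
        den = 1.0 + np.sqrt(np.maximum(root_arg, 1e-9))   # guard during iterations
        return num / den

    # Synthetic "corneal" height data over an 8 mm zone (parameters are illustrative).
    rng = np.random.default_rng(5)
    x = rng.uniform(-4, 4, 2000)
    y = rng.uniform(-4, 4, 2000)
    z = biconic_sag(x, y, 1 / 7.8, 1 / 7.6, -0.2, -0.1) + rng.normal(0, 1e-4, x.size)

    def residuals(p):
        cx, cy, kx, ky = p
        return biconic_sag(x, y, cx, cy, kx, ky) - z

    fit = least_squares(residuals, x0=[1 / 8.0, 1 / 8.0, 0.0, 0.0])
    cx, cy, kx, ky = fit.x
    print("apical radii (mm):", 1 / cx, 1 / cy, "  conic constants:", kx, ky)
    ```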

  18. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    Science.gov (United States)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

    One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock prices and their changes and react to them. In this regard, we used the multivariate adaptive regression splines (MARS) model and the semi-parametric splines technique for predicting stock price in this study. The MARS model, as a nonparametric method, is an adaptive method for regression and is suited to problems with high dimensions and several variables. Smoothing splines, used for the semi-parametric part, is a nonparametric regression method. In this study, we used 40 variables (30 accounting variables and 10 economic variables) for predicting stock price using the MARS model and the semi-parametric splines technique. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock price using the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS Forecast and P/E Ratio) were selected as variables effective in forecasting stock prices.

  19. Parametric Cost and Schedule Modeling for Early Technology Development

    Science.gov (United States)

    2018-04-02

    Research Note (National Security Report): Parametric Cost and Schedule Modeling for Early Technology Development, by Chuck Alexander.

  20. Incident Duration Modeling Using Flexible Parametric Hazard-Based Models

    Directory of Open Access Journals (Sweden)

    Ruimin Li

    2014-01-01

    Assessing and prioritizing the duration and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all models with gamma heterogeneity and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, whose best distributions were diverse. Given the best hazard-based models of each incident time phase, the prediction result can be reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration time.

  1. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    NARCIS (Netherlands)

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)}, where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e.

  2. Improved parametric fits for the HeH2 ab initio energy surface

    International Nuclear Information System (INIS)

    Muchnick, P.

    1992-01-01

    A brief history of the development of ab initio calculations for the HeH2 quasi-molecule energy surface, and the parametric fits to these ab initio calculations, is presented. The concept of 'physical reasonableness' of the parametric fit is discussed. Several new improved parametric fits for the energy surface, meeting these requirements, are then proposed. One fit extends the Russek-Garcia parametric fit for the deep repulsion region to include r-dependent parameters, resulting in a more physically reasonable fit with smaller average error. This improved surface fit is applied to quasi-elastic collisions of He on H2 in the impulse approximation. Previous classical calculations of the scaled inelastic vibro-rotational excitation energy distributions are improved with this more accurate parametric fit of the energy surface and with the incorporation of quantum effects in vibrational excitation. It is shown that Sigmund's approach in developing his scaling law is incomplete as far as the contribution of three-body interactions to vibrational excitation of the H2 molecule is concerned. The Sigmund theory is extended to take into account the r-dependency of three-body interactions. A parametric fit for the entire energy surface from essentially 0 ≤ R ≤ ∞ and 1.2 ≤ r ≤ 1.6 a.u., where R is the intermolecular spacing and r is the hydrogen bond length, is also presented. This fit is physically reasonable in all asymptotic limits. This first full-surface parametric fit is based primarily upon a composite of ab initio studies by Russek and Garcia and by Meyer, Hariharan and Kutzelnigg. Parametric fits for the H2 (1sσg)², H2+ (1sσg), H2+ (2pσu) and (LiH2)+ energy surfaces are also presented. The new parametric fits for H2 and H2+ (1sσg) are shown to be improvements over the well-known Morse potentials for these surfaces.

  3. Parametric Modelling of As-Built Beam Framed Structure in Bim Environment

    Science.gov (United States)

    Yang, X.; Koehl, M.; Grussenmeyer, P.

    2017-02-01

    A complete documentation and conservation of a historic timber roof requires the integration of geometry modelling, attributional and dynamic information management, and the results of structural analysis. The recently developed as-built Building Information Modelling (BIM) technique has the potential to provide a uniform platform that integrates traditional geometry modelling, parametric element management and structural analysis. The main objective of the project presented in this paper is to develop a parametric modelling tool for a timber roof structure whose elements form a leaning and crossing beam frame. Since Autodesk Revit, as typical BIM software, provides the platform for parametric modelling and information management, an API plugin was developed that automatically creates the parametric beam elements and links them together with strict relationships. The plugin under development is introduced in the paper; it can obtain the parametric beam model via the Autodesk Revit API from total station points and terrestrial laser scanning data. The results show the potential of automating parametric modelling by interactive API development in a BIM environment. It also integrates the separate data processing and different platforms into the uniform Revit software.

  4. Parametric uncertainty in optical image modeling

    Science.gov (United States)

    Potzick, James; Marx, Egon; Davidson, Mark

    2006-10-01

    Optical photomask feature metrology and wafer exposure process simulation both rely on optical image modeling for accurate results. While it is fair to question the accuracies of the available models, model results also depend on several input parameters describing the object and imaging system. Errors in these parameter values can lead to significant errors in the modeled image. These parameters include wavelength, illumination and objective NA's, magnification, focus, etc. for the optical system, and topography, complex index of refraction n and k, etc. for the object. In this paper each input parameter is varied over a range about its nominal value and the corresponding images simulated. Second order parameter interactions are not explored. Using the scenario of the optical measurement of photomask features, these parametric sensitivities are quantified by calculating the apparent change of the measured linewidth for a small change in the relevant parameter. Then, using reasonable values for the estimated uncertainties of these parameters, the parametric linewidth uncertainties can be calculated and combined to give a lower limit to the linewidth measurement uncertainty for those parameter uncertainties.
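
    The procedure described (vary each input about its nominal value, convert the response to an apparent linewidth change, then combine the contributions) is a one-at-a-time sensitivity budget. A generic sketch follows, with a stand-in analytic "model" and made-up parameter values in place of a real optical image simulation; the root-sum-square combination assumes independent, roughly linear effects, consistent with the neglect of second-order interactions noted above.

    ```python
    import numpy as np

    def measured_linewidth(params):
        """Stand-in for an optical image model: returns an apparent linewidth (nm)
        as a function of the input parameters (purely illustrative)."""
        return (params["true_cd"]
                + 40.0 * (params["wavelength"] / params["na"]) * params["defocus_um"]
                + 5.0 * (params["n_index"] - 1.5))

    nominal = {"true_cd": 250.0, "wavelength": 0.365, "na": 0.9,
               "defocus_um": 0.05, "n_index": 1.48}
    uncertainty = {"wavelength": 0.001, "na": 0.005, "defocus_um": 0.02, "n_index": 0.01}

    contributions = {}
    for name, sigma in uncertainty.items():
        perturbed = dict(nominal)
        perturbed[name] = nominal[name] + sigma          # one-at-a-time perturbation
        # Apparent linewidth change caused by a one-sigma change of this parameter.
        contributions[name] = measured_linewidth(perturbed) - measured_linewidth(nominal)

    combined = np.sqrt(sum(c ** 2 for c in contributions.values()))  # RSS combination
    for name, c in contributions.items():
        print(f"{name:12s} contributes {c:+.3f} nm")
    print(f"combined parametric linewidth uncertainty (1-sigma, RSS): {combined:.3f} nm")
    ```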

  5. A non-parametric consistency test of the ΛCDM model with Planck CMB data

    Energy Technology Data Exchange (ETDEWEB)

    Aghamousa, Amir; Shafieloo, Arman [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of); Hamann, Jan, E-mail: amir@aghamousa.com, E-mail: jan.hamann@unsw.edu.au, E-mail: shafieloo@kasi.re.kr [School of Physics, The University of New South Wales, Sydney NSW 2052 (Australia)

    2017-09-01

    Non-parametric reconstruction methods, such as Gaussian process (GP) regression, provide a model-independent way of estimating an underlying function and its uncertainty from noisy data. We demonstrate how GP-reconstruction can be used as a consistency test between a given data set and a specific model by looking for structures in the residuals of the data with respect to the model's best-fit. Applying this formalism to the Planck temperature and polarisation power spectrum measurements, we test their global consistency with the predictions of the base ΛCDM model. Our results do not show any serious inconsistencies, lending further support to the interpretation of the base ΛCDM model as cosmology's gold standard.
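
    A toy version of the residual test described above, using scikit-learn's GP regression on synthetic one-dimensional residuals rather than CMB band powers: fit a GP with an RBF-plus-white-noise kernel to the residuals and check whether the marginal likelihood prefers a non-trivial correlated component over pure noise (a heuristic stand-in for the paper's calibrated test).

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

    rng = np.random.default_rng(2)
    x = np.linspace(0, 10, 200)[:, None]
    # Residuals of data minus best-fit model: noise plus a small smooth structure
    # term that a consistency test should be able to flag.
    residuals = 0.1 * rng.normal(size=200) + 0.05 * np.sin(1.5 * x[:, 0])

    kernel = ConstantKernel(0.1) * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
    gp = GaussianProcessRegressor(kernel=kernel)
    gp.fit(x, residuals)

    # A noise-only description of the same residuals, for comparison.
    gp_noise = GaussianProcessRegressor(kernel=WhiteKernel(noise_level=0.01))
    gp_noise.fit(x, residuals)

    print("log marginal likelihood, RBF + noise:", gp.log_marginal_likelihood_value_)
    print("log marginal likelihood, noise only: ", gp_noise.log_marginal_likelihood_value_)
    ```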

  6. Fitting PAC spectra with stochastic models: PolyPacFit

    Energy Technology Data Exchange (ETDEWEB)

    Zacate, M. O., E-mail: zacatem1@nku.edu [Northern Kentucky University, Department of Physics and Geology (United States); Evenson, W. E. [Utah Valley University, College of Science and Health (United States); Newhouse, R.; Collins, G. S. [Washington State University, Department of Physics and Astronomy (United States)

    2010-04-15

    PolyPacFit is an advanced fitting program for time-differential perturbed angular correlation (PAC) spectroscopy. It incorporates stochastic models and provides robust options for customization of fits. Notable features of the program include platform independence and support for (1) fits to stochastic models of hyperfine interactions, (2) user-defined constraints among model parameters, (3) fits to multiple spectra simultaneously, and (4) any spin nuclear probe.

  7. A parametric model order reduction technique for poroelastic finite element models.

    Science.gov (United States)

    Lappano, Ettore; Polanz, Markus; Desmet, Wim; Mundo, Domenico

    2017-10-01

    This research presents a parametric model order reduction approach for vibro-acoustic problems in the frequency domain for systems containing poroelastic materials (PEM). The method is applied to the Finite Element (FE) discretization of the weak u-p integral formulation based on the Biot-Allard theory and makes use of reduced basis (RB) methods typically employed for parametric problems. The parametric reduction is obtained by rewriting the Biot-Allard FE equations for poroelastic materials using an affine representation of the frequency (therefore allowing for RB methods) and by projecting the frequency-dependent PEM system on a global reduced order basis generated with the proper orthogonal decomposition instead of standard modal approaches, which has proven to be better suited to describe the nonlinear frequency dependence and the strong coupling introduced by damping. The methodology presented is tested on two three-dimensional systems: in the first experiment, the surface impedance of a PEM layer sample is calculated and compared with results from the literature; in the second, the reduced order model of a multilayer system coupled to an air cavity is assessed and the results are compared to those of the reference FE model.

  8. Ground-Based Telescope Parametric Cost Model

    Science.gov (United States)

    Stahl, H. Philip; Rowell, Ginger Holmes

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.

  9. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    International Nuclear Information System (INIS)

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-01-01

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in an L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and the correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions.
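
    The filtering step can be illustrated schematically: a density field is multiplied in Fourier space by a scale-dependent bias and transformed back to obtain a spatially varying reionization-redshift field. The bias form b(k) = b0/(1 + k/k0)^alpha, its parameter values, the mean redshift, and the mapping to z_re below are all placeholders rather than the fitted simulation values.

```python
import numpy as np

n, box = 64, 100.0                         # grid cells per side, box size [Mpc/h]
rng = np.random.default_rng(1)
delta = rng.normal(size=(n, n, n))         # stand-in for the large-scale density contrast field

k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)          # wavenumbers [h/Mpc]
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
k = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2)

def bias(k, b0=1.0, k0=0.2, alpha=1.0):
    """Hypothetical scale-dependent linear bias; the paper fits its own parametric form."""
    return b0 / (1.0 + k / k0) ** alpha

delta_z = np.real(np.fft.ifftn(bias(k) * np.fft.fftn(delta)))   # bias-filtered field

z_mean = 8.0                                                    # assumed mean reionization redshift
z_re = z_mean + (1.0 + z_mean) * delta_z                        # illustrative mapping to a z_re field
print("reionization redshift field: mean %.2f, std %.2f" % (z_re.mean(), z_re.std()))
```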

  10. Nonlinear Container Ship Model for the Study of Parametric Roll Resonance

    Directory of Open Access Journals (Sweden)

    Christian Holden

    2007-10-01

    Full Text Available Parametric roll is a critical phenomenon for ships, whose onset may cause roll oscillations up to ±40 degrees, leading to very dangerous situations and possibly capsizing. Container ships have been shown to be particularly prone to parametric roll resonance when they are sailing in moderate to heavy head seas. A Matlab/Simulink parametric roll benchmark model for a large container ship has been implemented and validated against a wide set of experimental data. The model is a part of a Matlab/Simulink Toolbox (MSS, 2007). The benchmark implements a 3rd-order nonlinear model where the dynamics of roll is strongly coupled with the heave and pitch dynamics. The implemented model has shown good accuracy in predicting the container ship motions, both in the vertical plane and in the transversal one. Parametric roll has been reproduced for all the data sets in which it happened, and the model provides realistic results which are in good agreement with the model tank experiments.
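
    As a much simpler illustration of the underlying mechanism (not the benchmark's 3rd-order coupled heave-pitch-roll model), a single-degree-of-freedom roll equation with a periodically modulated restoring term exhibits parametric resonance when the encounter frequency is close to twice the natural roll frequency; all coefficients below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

omega_phi = 0.3             # natural roll frequency [rad/s]
zeta = 0.02                 # linear damping ratio
h = 0.35                    # relative amplitude of the wave-induced restoring variation
omega_e = 2.0 * omega_phi   # encounter frequency at the 2:1 resonance condition

def roll_rhs(t, y):
    phi, dphi = y
    restoring = omega_phi ** 2 * (1.0 + h * np.cos(omega_e * t)) * phi
    # cubic term bounds the growth once large roll angles are reached
    return [dphi, -2.0 * zeta * omega_phi * dphi - restoring - 0.1 * phi ** 3]

sol = solve_ivp(roll_rhs, (0.0, 1200.0), [0.05, 0.0], max_step=0.5)
print("peak roll angle [deg]:", round(float(np.degrees(np.abs(sol.y[0]).max())), 1))
```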

  11. Linear Parametric Model Checking of Timed Automata

    DEFF Research Database (Denmark)

    Hune, Tohmas Seidelin; Romijn, Judi; Stoelinga, Mariëlle

    2001-01-01

    We present an extension of the model checker Uppaal capable of synthesizing linear parameter constraints for the correctness of parametric timed automata. The symbolic representation of the (parametric) state-space is shown to be correct. A second contribution of this paper is the identification...... of a subclass of parametric timed automata (L/U automata), for which the emptiness problem is decidable, contrary to the full class, where it is known to be undecidable. Also we present a number of lemmas enabling the verification effort to be reduced for L/U automata in some cases. We illustrate our approach...

  12. Modeling and forecasting petroleum futures volatility

    International Nuclear Information System (INIS)

    Sadorsky, Perry

    2006-01-01

    Forecasts of oil price volatility are important inputs into macroeconometric models, financial market risk assessment calculations like value at risk, and option pricing formulas for futures contracts. This paper uses several different univariate and multivariate statistical models to estimate forecasts of daily volatility in petroleum futures price returns. The out-of-sample forecasts are evaluated using forecast accuracy tests and market timing tests. The TGARCH model fits well for heating oil and natural gas volatility and the GARCH model fits well for crude oil and unleaded gasoline volatility. Simple moving average models seem to fit well in some cases provided the correct order is chosen. Despite the increased complexity, models like state space, vector autoregression and bivariate GARCH do not perform as well as the single equation GARCH model. Most models outperform a random walk and there is evidence of market timing. Parametric and non-parametric value at risk measures are calculated and compared. Non-parametric models outperform the parametric models in terms of the number of exceedances in backtests. These results are useful for anyone needing forecasts of petroleum futures volatility. (author)
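
    A minimal sketch of fitting a GARCH(1,1) model and producing a one-step-ahead volatility forecast, here using the third-party `arch` Python package on simulated returns rather than the petroleum futures data of the paper.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = 0.8 * rng.standard_t(df=6, size=2000)    # placeholder daily returns in percent

am = arch_model(returns, mean="Constant", vol="Garch", p=1, q=1, dist="normal")
res = am.fit(disp="off")
print(res.params)                                  # mu, omega, alpha[1], beta[1]

forecast = res.forecast(horizon=1)
print("one-step-ahead variance forecast:", float(forecast.variance.iloc[-1, 0]))
```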

  13. Nonscaling parametrization of hadronic spectra and dual parton model

    International Nuclear Information System (INIS)

    Gaponenko, O.N.

    2001-01-01

    Using the popular Wdowczyk-Wolfendale (WW) parametrization as an example, the restrictions imposed by the dual parton model on different nonscaling parametrizations of the momentum spectra of hadrons in soft hadron-hadron and hadron-nucleus interactions are studied. A new parametrization free from the basic drawbacks of the WW formulae is derived. In the central region the derived parametrization agrees with the Wdowczyk-Wolfendale formula but, in contrast to the latter, does not contradict experiment, owing to the rapid decrease of the inelasticity factor with increasing energy.

  14. PARAMETRIC MODEL OF LUMBAR VERTEBRA

    Directory of Open Access Journals (Sweden)

    CAPPETTI Nicola

    2010-11-01

    Full Text Available The present work proposes the realization of a parametric/variational CAD model of a normotype lumbar vertebra, which could be used to improve the effectiveness of current imaging techniques by augmenting the information available for orthopaedic and traumatological diagnosis. In addition, it could be used for static and dynamic ergonomic analysis of the lumbar region and vertebral column.

  15. A parametric model of muscle moment arm as a function of joint angle: application to the dorsiflexor muscle group in mice.

    Science.gov (United States)

    Miller, S W; Dennis, R G

    1996-12-01

    A parametric model was developed to describe the relationship between muscle moment arm and joint angle. The model was applied to the dorsiflexor muscle group in mice, for which the moment arm was determined as a function of ankle angle. The moment arm was calculated from the torque measured about the ankle upon application of a known force along the line of action of the dorsiflexor muscle group. The dependence of the dorsiflexor moment arm on ankle angle was modeled as r = R sin(a + delta), where r is the moment arm calculated from the measured torque and a is the joint angle. A least-squares curve fit yielded values for R, the maximum moment arm, and delta, the angle at which the maximum moment arm occurs as offset from 90 degrees. Parametric models were developed for two strains of mice, and no differences were found between the moment arms determined for each strain. Values for the maximum moment arm, R, for the two different strains were 0.99 and 1.14 mm, in agreement with the limited data available from the literature. While in some cases moment arm data may be better fitted by a polynomial, use of the parametric model provides a moment arm relationship with meaningful anatomical constants, allowing for the direct comparison of moment arm characteristics between different strains and species.
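
    A sketch of the least-squares fit of r = R sin(a + delta) with scipy; the angle and moment-arm samples below are synthetic placeholders, not the measured mouse data.

```python
import numpy as np
from scipy.optimize import curve_fit

def moment_arm(a_rad, R, delta_rad):
    return R * np.sin(a_rad + delta_rad)

a_deg = np.array([60.0, 75.0, 90.0, 105.0, 120.0, 135.0])   # hypothetical ankle angles [deg]
r_mm = np.array([0.86, 0.95, 1.00, 0.99, 0.93, 0.82])       # hypothetical moment arms [mm]

popt, pcov = curve_fit(moment_arm, np.radians(a_deg), r_mm, p0=[1.0, 0.0])
R_fit, delta_fit = popt
print(f"R = {R_fit:.2f} mm, delta = {np.degrees(delta_fit):.1f} deg (offset of the peak from 90 deg)")
```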

  16. APT cost scaling: Preliminary indications from a Parametric Costing Model (PCM)

    International Nuclear Information System (INIS)

    Krakowski, R.A.

    1995-01-01

    A Parametric Costing Model has been created and evaluated as a first step in quantitatively understanding important design options for the Accelerator Production of Tritium (APT) concept. This model couples key economic and technical elements of APT in a two-parameter search over beam energy and beam power that minimizes costs within a range of operating constraints. The costing and engineering depth of the Parametric Costing Model is minimal at the present "entry level", and is intended only to demonstrate the potential of a more detailed, cost-based integrating design tool. After describing the present basis of the Parametric Costing Model and giving an example of a single parametric scaling run derived therefrom, the impacts of choices related to resistive versus superconducting accelerator structures and of cost of electricity versus plant availability ("load curve") are reported. Areas of further development and application are suggested.
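
    The two-parameter cost search can be sketched as a constrained grid minimization over beam energy and beam power. The cost surface and constraint values below are purely illustrative stand-ins, not the APT costing relations.

```python
import numpy as np

energy = np.linspace(0.5, 2.0, 151)       # beam energy [GeV]
power = np.linspace(20.0, 200.0, 181)     # beam power [MW]
E, P = np.meshgrid(energy, power, indexing="ij")

current = P / E                                         # beam current [mA] for P in MW, E in GeV
cost = 1.5 * E + 0.02 * P + 0.6 * np.sqrt(current)      # made-up capital-cost surface [$B]

feasible = (current <= 120.0) & (P >= 60.0)             # illustrative current limit and production floor
cost = np.where(feasible, cost, np.inf)

i, j = np.unravel_index(np.argmin(cost), cost.shape)
print(f"minimum-cost point: E = {E[i, j]:.2f} GeV, P = {P[i, j]:.0f} MW, cost = {cost[i, j]:.2f}")
```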

  17. Probabilistic Reachability for Parametric Markov Models

    DEFF Research Database (Denmark)

    Hahn, Ernst Moritz; Hermanns, Holger; Zhang, Lijun

    2011-01-01

    Given a parametric Markov model, we consider the problem of computing the rational function expressing the probability of reaching a given set of states. To attack this principal problem, Daws has suggested to first convert the Markov chain into a finite automaton, from which a regular expression...
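
    A tiny worked example of the problem statement, computed symbolically for an invented four-state parametric Markov chain; the paper's actual method proceeds via a finite-automaton/regular-expression construction rather than direct equation solving.

```python
import sympy as sp

p, q = sp.symbols("p q", positive=True)
x0, x1 = sp.symbols("x0 x1")          # reachability probabilities from states s0 and s1

# Chain: s0 -> s1 with prob p (else sink); s1 -> target with prob q (else back to s0).
eqs = [sp.Eq(x0, p * x1),
       sp.Eq(x1, q * 1 + (1 - q) * x0)]

sol = sp.solve(eqs, [x0, x1], dict=True)[0]
print(sp.simplify(sol[x0]))           # p*q/(p*q - p + 1): a rational function of the parameters
```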

  18. MTL-Model Checking of One-Clock Parametric Timed Automata is Undecidable

    Directory of Open Access Journals (Sweden)

    Karin Quaas

    2014-03-01

    Full Text Available Parametric timed automata extend timed automata (Alur and Dill, 1991) in that they allow the specification of parametric bounds on the clock values. Since their introduction in 1993 by Alur, Henzinger, and Vardi, it is known that the emptiness problem for parametric timed automata with one clock is decidable, whereas it is undecidable if the automaton uses three or more parametric clocks. The problem is open for parametric timed automata with two parametric clocks. Metric temporal logic, MTL for short, is a widely used specification language for real-time systems. MTL-model checking of timed automata is decidable, no matter how many clocks are used in the timed automaton. In this paper, we prove that MTL-model checking for parametric timed automata is undecidable, even if the automaton uses only one clock and one parameter and is deterministic.

  19. Parametric, nonparametric and parametric modelling of a chaotic circuit time series

    Science.gov (United States)

    Timmer, J.; Rust, H.; Horbelt, W.; Voss, H. U.

    2000-09-01

    The determination of a differential equation underlying a measured time series is a frequently arising task in nonlinear time series analysis. In the validation of a proposed model one often faces the dilemma that it is hard to decide whether possible discrepancies between the time series and model output are caused by an inappropriate model or by bad estimates of parameters in a correct type of model, or both. We propose a combination of parametric modelling based on Bock's multiple shooting algorithm and nonparametric modelling based on optimal transformations as a strategy to test proposed models and if rejected suggest and test new ones. We exemplify this strategy on an experimental time series from a chaotic circuit where we obtain an extremely accurate reconstruction of the observed attractor.

  20. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    Science.gov (United States)

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely remain largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated whether the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored whether assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision

  1. The global electroweak Standard Model fit after the Higgs discovery

    CERN Document Server

    Baak, Max

    2013-01-01

    We present an update of the global Standard Model (SM) fit to electroweak precision data under the assumption that the new particle discovered at the LHC is the SM Higgs boson. In this scenario all parameters entering the calculations of electroweak precision observables are known, allowing, for the first time, the SM to be over-constrained at the electroweak scale and its validity to be asserted. Within the SM the W boson mass and the effective weak mixing angle can be accurately predicted from the global fit. The results are compatible with, and exceed in precision, the direct measurements. An updated determination of the S, T and U parameters, which parametrize the oblique vacuum corrections, is given. The obtained values show good consistency with the SM expectation and no direct signs of new physics are seen. We conclude with an outlook to the global electroweak fit for a future e+e- collider.

  2. Semi-parametric estimation for ARCH models

    Directory of Open Access Journals (Sweden)

    Raed Alzghool

    2018-03-01

    Full Text Available In this paper, we conduct semi-parametric estimation for the autoregressive conditional heteroscedasticity (ARCH) model with quasi-likelihood (QL) and asymptotic quasi-likelihood (AQL) estimation methods. The QL approach relaxes the distributional assumptions of ARCH processes. The AQL technique is obtained from the QL method when the process conditional variance is unknown. We present an application of the methods to a daily exchange rate series. Keywords: ARCH model, Quasi likelihood (QL), Asymptotic Quasi-likelihood (AQL), Martingale difference, Kernel estimator

  3. A Novel Parametric Model For The Human Respiratory System

    Directory of Open Access Journals (Sweden)

    Clara Mihaela IONESCU

    2003-12-01

    Full Text Available The purpose of this work is to present some recent results in an ongoing research project between Ghent University and the Chess Medical Technology Company, Belgium. The overall aim of the project is to provide a fast method for identification of the human respiratory system in order to allow an instantaneous diagnosis of the patient by the medical staff. A novel parametric model of the human respiratory system, as well as the experimental results obtained, is presented in this paper. A prototype apparatus developed by the company, based on the forced oscillation technique, is used to record experimental data from 4 patients in this paper. Signal processing is based on spectral analysis and is followed by the parametric identification of a non-linear mechanistic model. The parametric model is equivalent to the structure of a simple electrical RLC circuit containing a non-linear capacitor. These parameters have a useful and easy-to-interpret physical meaning for the medical staff members.

  4. Research on Mixer Parametric Modeling System Based on Redevelopment of ANSYS

    Directory of Open Access Journals (Sweden)

    Bin Zheng

    2015-01-01

    Full Text Available In this paper, a mixer parametric modeling system was developed using VB as the foreground development program, combined with ANSYS to create the finite element model of the mixer blade and cylinder for the subsequent numerical simulation of the flow field and parameter optimization of the mixer. The user interface was developed in VB, and the pre-processing model was created by invoking the APDL of ANSYS in the background. The modeling, meshing and component-building operations for the mixer blade and cylinder were therefore completed using APDL, and the graphic and text output was displayed in the user interface of the mixer parametric modeling system developed in VB. Practice proved that it is convenient to modify the mixer solid model created with the parametric design language of ANSYS because of the similar structure.

  5. Comparison of parametric, orthogonal, and spline functions to model individual lactation curves for milk yield in Canadian Holsteins

    Directory of Open Access Journals (Sweden)

    Corrado Dimauro

    2010-11-01

    Full Text Available Test day records for milk yield of 57,390 first lactation Canadian Holsteins were analyzed with a linear model that included the fixed effects of herd-test date and days in milk (DIM) interval nested within age and calving season. Residuals from this model were analyzed as a new variable and fitted with a five-parameter model, fourth-order Legendre polynomials, and linear, quadratic and cubic spline models with three knots. The fit of the models was rather poor, with about 30-40% of the curves showing an adjusted R-square lower than 0.20 across all models. Results underline a great difficulty in modelling individual deviations around the mean curve for milk yield. However, the Ali and Schaeffer (five-parameter) model and the fourth-order Legendre polynomials were able to detect two basic shapes of individual deviations around the mean curve. Quadratic and, especially, cubic spline functions had better fitting performance but poor predictive ability due to their great flexibility, which results in abrupt changes of the estimated curve when data are missing. Parametric and orthogonal polynomials seem to be robust and affordable from this standpoint.
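
    A sketch of fitting fourth-order Legendre polynomials to one animal's test-day residuals (deviations from the mean curve), as in the comparison above; the days-in-milk and residual values are synthetic placeholders.

```python
import numpy as np
from numpy.polynomial import Legendre

dim = np.array([10, 40, 70, 100, 130, 160, 190, 220, 250, 280])               # days in milk
residual = np.array([1.2, 0.6, 0.1, -0.3, -0.5, -0.4, -0.1, 0.2, 0.4, 0.3])   # deviation [kg/day]

leg = Legendre.fit(dim, residual, deg=4, domain=[5, 305])   # DIM is mapped to [-1, 1] internally
fitted = leg(dim)

ss_res = np.sum((residual - fitted) ** 2)
ss_tot = np.sum((residual - residual.mean()) ** 2)
print("coefficients:", np.round(leg.coef, 3), " R^2 =", round(1.0 - ss_res / ss_tot, 3))
```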

  6. Optimal parametric modelling of measured short waves

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    the importance of selecting a suitable sampling interval for better estimates of parametric modelling and also for better statistical representation. Implementation of the above algorithms in a structural monitoring system has the potential advantage of storing...

  7. Active component modeling for analog integrated circuit design. Model parametrization and implementation in the SPICE-PAC circuit simulator

    International Nuclear Information System (INIS)

    Marchal, Xavier

    1992-01-01

    In order to use CAD efficiently in the analysis and design of electronic integrated circuits, adequate modeling of active non-linear devices such as MOSFET transistors must be available to the designer. Many mathematical forms can be given to those models, such as explicit relations, or implicit equations to be solved. A major requirement in developing MOS transistor models for IC simulation is the availability of electrical characteristic curves over a wide range of channel width and length, including the sub-micrometer range. To account in a convenient way for bulk charge influence on I_DS = f(V_DS, V_GS, V_BS) device characteristics, all 3 standard SPICE MOS models use an empirical fitting parameter called the 'charge sharing factor'. Unfortunately, this formulation produces models which only describe correctly either some of the short channel phenomena, or some particular operating conditions (low injection, avalanche effect, etc.). We present here a cellular model (CDM = Charge Distributed Model) implemented in the open modular SPICE-PAC simulator; this model is derived from the 4-terminal WANG charge controlled MOSFET model, using the charge sheet approximation. The CDM model describes device characteristics in all operating regions without introducing drain current discontinuities and without requiring a 'charge sharing factor'. A usual problem to be faced by designers when they simulate MOS ICs is to find a reliable source of model parameters. Though most models have a physical basis, some of their parameters cannot be easily estimated from physical considerations. It can also happen that physically determined parameter values do not produce a good fit to measured device characteristics. Thus it is generally necessary to extract model parameters from measured transistor data, to ensure that model equations approximate measured curves accurately enough. Model parameter extraction can be done in two different ways, presented in this thesis. The first

  8. PARAMETRIC MODELING, CREATIVITY, AND DESIGN: TWO EXPERIENCES WITH ARCHITECTURE’ STUDENTS

    Directory of Open Access Journals (Sweden)

    Wilson Florio

    2012-02-01

    Full Text Available The aim of this article is to reflect on the use of parametric modeling in two didactic experiences. The first experiment involved resources of the Paracloud program and its relation with the Rhinoceros program, which resulted in the production of physical models produced with the aid of laser cutting. In the second experiment, the students produced algorithms in Grasshopper, resulting in families of structures and coverings. The study objects are both the physical models and the digital algorithms resulting from this experimentation. For the analysis and synthesis of the results, we adopted four important assumptions: 1. the value of attitudes and the work environment; 2. the importance of experimentation and improvisation; 3. the understanding of the design process as a situated act and as an ill-defined problem; 4. the inclusion of creative and critical thought in the disciplines. The results allow us to affirm that parametric modeling stimulates creativity by allowing the combination of different parameters, which results in unexpected discoveries. Keywords: Teaching-Learning, Parametric Modeling, Laser Cutter, Grasshopper, Design Process, Creativity.

  9. Bim and Gis: when Parametric Modeling Meets Geospatial Data

    Science.gov (United States)

    Barazzetti, L.; Banfi, F.

    2017-12-01

    Geospatial data have a crucial role in several projects related to infrastructures and land management. GIS software are able to perform advanced geospatial analyses, but they lack several instruments and tools for parametric modelling typically available in BIM. At the same time, BIM software designed for buildings have limited tools to handle geospatial data. As things stand at the moment, BIM and GIS could appear as complementary solutions, notwithstanding research work is currently under development to ensure a better level of interoperability, especially at the scale of the building. On the other hand, the transition from the local (building) scale to the infrastructure (where geospatial data cannot be neglected) has already demonstrated that parametric modelling integrated with geoinformation is a powerful tool to simplify and speed up some phases of the design workflow. This paper reviews such mixed approaches with both simulated and real examples, demonstrating that integration is already a reality at specific scales, which are not dominated by "pure" GIS or BIM. The paper will also demonstrate that some traditional operations carried out with GIS software are also available in parametric modelling software for BIM, such as transformation between reference systems, DEM generation, feature extraction, and geospatial queries. A real case study is illustrated and discussed to show the advantage of a combined use of both technologies. BIM and GIS integration can generate greater usage of geospatial data in the AECOO (Architecture, Engineering, Construction, Owner and Operator) industry, as well as new solutions for parametric modelling with additional geoinformation.

  10. BIM AND GIS: WHEN PARAMETRIC MODELING MEETS GEOSPATIAL DATA

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2017-12-01

    Full Text Available Geospatial data have a crucial role in several projects related to infrastructures and land management. GIS software are able to perform advanced geospatial analyses, but they lack several instruments and tools for parametric modelling typically available in BIM. At the same time, BIM software designed for buildings have limited tools to handle geospatial data. As things stand at the moment, BIM and GIS could appear as complementary solutions, notwithstanding research work is currently under development to ensure a better level of interoperability, especially at the scale of the building. On the other hand, the transition from the local (building) scale to the infrastructure (where geospatial data cannot be neglected) has already demonstrated that parametric modelling integrated with geoinformation is a powerful tool to simplify and speed up some phases of the design workflow. This paper reviews such mixed approaches with both simulated and real examples, demonstrating that integration is already a reality at specific scales, which are not dominated by “pure” GIS or BIM. The paper will also demonstrate that some traditional operations carried out with GIS software are also available in parametric modelling software for BIM, such as transformation between reference systems, DEM generation, feature extraction, and geospatial queries. A real case study is illustrated and discussed to show the advantage of a combined use of both technologies. BIM and GIS integration can generate greater usage of geospatial data in the AECOO (Architecture, Engineering, Construction, Owner and Operator) industry, as well as new solutions for parametric modelling with additional geoinformation.

  11. Conformally parametrized surfaces associated with CP^(N-1) sigma models

    International Nuclear Information System (INIS)

    Grundland, A M; Hereman, W A; Yurdusen, İ

    2008-01-01

    Two-dimensional parametrized surfaces immersed in the su(N) algebra are investigated. The focus is on surfaces parametrized by solutions of the equations for the CP^(N-1) sigma model. The Lie-point symmetries of the CP^(N-1) model are computed for arbitrary N. The Weierstrass formula for immersion is determined and an explicit formula for a moving frame on a surface is constructed. This allows us to determine the structural equations and geometrical properties of surfaces in R^(N^2-1). The fundamental forms, Gaussian and mean curvatures, Willmore functional and topological charge of surfaces are given explicitly in terms of any holomorphic solution of the CP^2 model. The approach is illustrated through several examples, including surfaces immersed in low-dimensional su(N) algebras.

  12. Study of the long-term values and prices of plutonium; a simplified parametrized model

    Energy Technology Data Exchange (ETDEWEB)

    Gaussens, J; Paillot, H [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1965-07-01

    The authors define the notions of use value and price of plutonium. They give a 'simplified parametrized model' simulating the equilibrium of supply and demand over time for plutonium and the price deriving from the relative scarcity of this metal, taking into account the technical and economic operating parameters of the various reactors considered. This model is simple enough to allow direct computations and to establish clear relations between the various parameters. The use of the linear programming method also allows a considerable extension of the model. This report includes three main parts: I - General description of the study (without detailed calculations); II - Mathematical development of the simplified parametrized model and application (the basic data and the results of the calculations are given); III - Appendices (giving the detailed computations of part II). (authors)

  13. Monitoring coastal marshes biomass with CASI: a comparison of parametric and non-parametric models

    Science.gov (United States)

    Mo, Y.; Kearney, M.

    2017-12-01

    Coastal marshes are important carbon sinks that face multiple natural and anthropogenic stresses. Optical remote sensing is a powerful tool for closely monitoring the biomass of coastal marshes. However, application of hyperspectral sensors to assessing the biomass of diverse coastal marsh ecosystems is limited. This study samples spectral and biophysical data from coastal freshwater, intermediate, brackish, and saline marshes in Louisiana, and develops parametric and non-parametric models for using the Compact Airborne Spectrographic Imager (CASI) to retrieve the marshes' biomass. Linear models and random forest models are developed from simulated CASI data (48 bands, 380-1050 nm, bandwidth 14 nm). Linear models are also developed using narrowband vegetation indices computed from all possible band combinations from the blue, red, and near infrared wavelengths. It is found that the linear models derived from the optimal narrowband vegetation indices provide strong predictions for the marshes' Leaf Area Index (LAI; R2 > 0.74 for ARVI), but not for their Aboveground Green Biomass (AGB; R2 > 0.25). The linear models derived from the simulated CASI data strongly predict the marshes' LAI (R2 = 0.93) and AGB (R2 = 0.71) and have 27 and 30 bands/variables in the final models through stepwise regression, respectively. The random forest models derived from the simulated CASI data also strongly predict the marshes' LAI and AGB (R2 = 0.91 and 0.84, respectively), where the most important variables for predicting LAI are near infrared bands at 784 and 756 nm and for predicting AGB are red bands at 684 and 670 nm. In sum, the random forest model is preferable for assessing coastal marsh biomass using CASI data as it offers high R2 for both LAI and AGB. The superior performance of the random forest model is likely due to the fact that it fully utilizes the full-spectrum data and makes no assumption of the approximate normality of the sampling population. This study offers solutions
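
    A schematic version of the linear-versus-random-forest comparison, using simulated band reflectances and LAI values in place of the CASI and field data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_bands = 300, 48                                   # 48 simulated CASI bands
X = rng.uniform(0.0, 0.6, size=(n_samples, n_bands))           # placeholder band reflectances
lai = 4.0 * X[:, 30] - 2.0 * X[:, 20] + rng.normal(0.0, 0.2, n_samples)   # toy NIR/red dependence

for name, model in [("linear", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=300, random_state=0))]:
    r2 = cross_val_score(model, X, lai, cv=5, scoring="r2").mean()
    print(f"{name:13s} mean cross-validated R^2 = {r2:.2f}")

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, lai)
print("most important simulated bands:", np.argsort(rf.feature_importances_)[::-1][:3])
```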

  14. Parametric reduced models for the nonlinear Schrödinger equation.

    Science.gov (United States)

    Harlim, John; Li, Xiantao

    2015-05-01

    Reduced models for the (defocusing) nonlinear Schrödinger equation are developed. In particular, we develop reduced models that only involve the low-frequency modes given noisy observations of these modes. The ansatz of the reduced parametric models is obtained by employing a rational approximation and a colored-noise approximation, respectively, on the memory terms and the random noise of a generalized Langevin equation that is derived from the standard Mori-Zwanzig formalism. The parameters in the resulting reduced models are inferred from noisy observations with a recently developed ensemble Kalman filter-based parametrization method. The forecasting skill across different temperature regimes is verified by comparing the moments up to order four, a two-time correlation function statistic, and marginal densities of the coarse-grained variables.

  15. A Hybrid Wind-Farm Parametrization for Mesoscale and Climate Models

    Science.gov (United States)

    Pan, Yang; Archer, Cristina L.

    2018-04-01

    To better understand the potential impact of wind farms on weather and climate at the regional to global scales, a new hybrid wind-farm parametrization is proposed for mesoscale and climate models. The proposed parametrization is a hybrid model because it is not based on physical processes or conservation laws, but on the multiple linear regression of the results of large-eddy simulations (LES) with the geometric properties of the wind-farm layout (e.g., the blockage ratio and blockage distance). The innovative aspect is that each wind turbine is treated individually based on its position in the farm and on the wind direction by predicting the velocity upstream of each turbine. The turbine-induced forces and added turbulence kinetic energy (TKE) are first derived analytically and then implemented in the Weather Research and Forecasting model. Idealized simulations of the offshore Lillgrund wind farm are conducted. The wind-speed deficit and TKE predicted with the hybrid model are in excellent agreement with those from the LES results, while the wind-power production estimated with the hybrid model is within 10% of that observed. Three additional wind farms with larger inter-turbine spacing than at Lillgrund are also considered, and a similar agreement with LES results is found, proving that the hybrid parametrization works well with any wind farm regardless of the spacing between turbines. These results indicate the wind-turbine position, wind direction, and added TKE are essential in accounting for the wind-farm effects on the surroundings, for which the hybrid wind-farm parametrization is a promising tool.
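
    The regression step at the core of the hybrid parametrization can be sketched as follows; the blockage-ratio and blockage-distance features follow the abstract, but the numbers stand in for LES results and are not from the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per turbine/wind-direction case: [blockage ratio, blockage distance in rotor diameters].
X = np.array([[0.00, 20.0], [0.15, 6.0], [0.30, 4.0], [0.45, 3.5], [0.60, 3.0], [0.60, 6.0]])
u_norm = np.array([1.00, 0.93, 0.86, 0.80, 0.74, 0.79])   # upstream velocity / free-stream velocity

reg = LinearRegression().fit(X, u_norm)
print("coefficients:", np.round(reg.coef_, 3), "intercept:", round(float(reg.intercept_), 3))

# The fitted relation supplies the per-turbine inflow from which turbine-induced
# forces and added TKE can then be computed in the mesoscale model.
print("predicted normalized upstream velocity:", round(float(reg.predict([[0.40, 5.0]])[0]), 3))
```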

  16. Fast, Sequence Adaptive Parcellation of Brain MR Using Parametric Models

    DEFF Research Database (Denmark)

    Puonti, Oula; Iglesias, Juan Eugenio; Van Leemput, Koen

    2013-01-01

    In this paper we propose a method for whole brain parcellation using the type of generative parametric models typically used in tissue classification. Compared to the non-parametric, multi-atlas segmentation techniques that have become popular in recent years, our method obtains state-of-the-art ...

  17. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

    Science.gov (United States)

    W. Hasan, W. Z.

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system’s modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is the ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models using frequency-domain data to provide a state-space or transfer function for the model. PMID:29351554

  18. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Science.gov (United States)

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is the ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models using frequency-domain data to provide a state-space or transfer function for the model.

  19. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Directory of Open Access Journals (Sweden)

    A H Sabry

    Full Text Available The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is the ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models using frequency-domain data to provide a state-space or transfer function for the model.
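
    A highly simplified sketch of the pole-residue identification idea underlying vector fitting: with a set of assumed, fixed poles, the residues and constant term of a rational model follow from a linear least-squares solve on frequency-domain samples. The full VF algorithm additionally relocates the poles iteratively, and the data below are synthetic rather than the wind, solar or demand curves of the papers above.

```python
import numpy as np

omega = np.linspace(0.1, 100.0, 400)
s = 1j * omega

# Synthetic "measured" frequency response: two real poles plus a constant term.
f_meas = 5.0 / (s + 2.0) + 1.0 / (s + 30.0) + 0.05

poles = np.array([-1.5, -25.0])                                  # assumed (fixed) starting poles
A = np.column_stack([1.0 / (s - p) for p in poles] + [np.ones_like(s)])
x, *_ = np.linalg.lstsq(A, f_meas, rcond=None)
residues, d = x[:-1], x[-1]

rms = np.sqrt(np.mean(np.abs(A @ x - f_meas) ** 2))
print("residues:", np.round(residues, 3), "constant term:", np.round(d, 3), "RMS error:", rms)
```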

  20. Parametric analysis of ATM solar array.

    Science.gov (United States)

    Singh, B. K.; Adkisson, W. B.

    1973-01-01

    The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.
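
    The polynomial-fit step (the third and fourth programs described above) can be sketched as follows; the temperature and open-circuit-voltage samples are invented placeholders, not the solar simulator test data.

```python
import numpy as np

temp_c = np.array([-50.0, -20.0, 0.0, 25.0, 50.0, 75.0])   # cell temperature [deg C]
v_oc = np.array([0.72, 0.66, 0.62, 0.57, 0.52, 0.47])      # open-circuit voltage [V] (placeholders)

coeffs = np.polyfit(temp_c, v_oc, deg=2)                    # polynomial characteristic vs. temperature
print("polynomial coefficients:", np.round(coeffs, 6))
print("predicted Voc at 60 deg C:", round(float(np.polyval(coeffs, 60.0)), 3), "V")
```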

  1. A Stepwise Fitting Procedure for automated fitting of Ecopath with Ecosim models

    Directory of Open Access Journals (Sweden)

    Erin Scott

    2016-01-01

    Full Text Available The Stepwise Fitting Procedure automates testing of alternative hypotheses used for fitting Ecopath with Ecosim (EwE) models to observation reference data (Mackinson et al. 2009). The calibration of EwE model predictions to observed data is important to evaluate any model that will be used for ecosystem based management. Thus far, the model fitting procedure in EwE has been carried out manually: a repetitive task involving setting >1000 specific individual searches to find the statistically ‘best fit’ model. The novel fitting procedure automates this manual procedure, therefore producing accurate results, and lets the modeller concentrate on investigating the ‘best fit’ model for ecological accuracy.

  2. Parametric packet-based audiovisual quality model for IPTV services

    CERN Document Server

    Garcia, Marie-Neige

    2014-01-01

    This volume presents a parametric packet-based audiovisual quality model for Internet Protocol TeleVision (IPTV) services. The model is composed of three quality modules for the respective audio, video and audiovisual components. The audio and video quality modules take as input a parametric description of the audiovisual processing path, and deliver an estimate of the audio and video quality. These outputs are sent to the audiovisual quality module which provides an estimate of the audiovisual quality. Estimates of perceived quality are typically used both in the network planning phase and as part of the quality monitoring. The same audio quality model is used for both these phases, while two variants of the video quality model have been developed for addressing the two application scenarios. The addressed packetization scheme is MPEG2 Transport Stream over Real-time Transport Protocol over Internet Protocol. In the case of quality monitoring, that is the case for which the network is already set-up, the aud...

  3. Explicit/multi-parametric model predictive control (MPC) of linear discrete-time systems by dynamic and multi-parametric programming

    KAUST Repository

    Kouramas, K.I.; Faísca, N.P.; Panos, C.; Pistikopoulos, E.N.

    2011-01-01

    This work presents a new algorithm for solving the explicit/multi-parametric model predictive control (or mp-MPC) problem for linear, time-invariant discrete-time systems, based on dynamic programming and multi-parametric programming techniques

  4. Prevalence Incidence Mixture Models

    Science.gov (United States)

    The R package and webtool fits Prevalence Incidence Mixture models to left-censored and irregularly interval-censored time to event data that is commonly found in screening cohorts assembled from electronic health records. Absolute and relative risk can be estimated for simple random sampling, and stratified sampling (the two approaches of superpopulation and a finite population are supported for target populations). Non-parametric (absolute risks only), semi-parametric, weakly-parametric (using B-splines), and some fully parametric (such as the logistic-Weibull) models are supported.

  5. Parametrization of contrails in a comprehensive climate model

    Energy Technology Data Exchange (ETDEWEB)

    Ponater, M; Brinkop, S; Sausen, R; Schumann, U [Deutsche Forschungs- und Versuchsanstalt fuer Luft- und Raumfahrt e.V., Oberpfaffenhofen (Germany). Inst. fuer Physik der Atmosphaere

    1998-12-31

    A contrail parametrization scheme for a general circulation model (GCM) is presented. Guidelines for its development were that it should be based on the thermodynamic theory of contrail formation and that it should be consistent with the cloud parametrization scheme of the GCM. Results of a six-year test integration indicate reasonable results concerning the spatial and temporal development of both contrail coverage and contrail optical properties. Hence, the scheme forms a promising basis for the quantitative estimation of the contrail climatic impact. (author) 9 refs.

  6. Parametrization of contrails in a comprehensive climate model

    Energy Technology Data Exchange (ETDEWEB)

    Ponater, M.; Brinkop, S.; Sausen, R.; Schumann, U. [Deutsche Forschungs- und Versuchsanstalt fuer Luft- und Raumfahrt e.V., Oberpfaffenhofen (Germany). Inst. fuer Physik der Atmosphaere

    1997-12-31

    A contrail parametrization scheme for a general circulation model (GCM) is presented. Guidelines for its development were that it should be based on the thermodynamic theory of contrail formation and that it should be consistent with the cloud parametrization scheme of the GCM. Results of a six-year test integration indicate reasonable results concerning the spatial and temporal development of both contrail coverage and contrail optical properties. Hence, the scheme forms a promising basis for the quantitative estimation of the contrail climatic impact. (author) 9 refs.

  7. A Parametric Computational Model of the Action Potential of Pacemaker Cells.

    Science.gov (United States)

    Ai, Weiwei; Patel, Nitish D; Roop, Partha S; Malik, Avinash; Andalam, Sidharta; Yip, Eugene; Allen, Nathan; Trew, Mark L

    2018-01-01

    A flexible, efficient, and verifiable pacemaker cell model is essential to the design of real-time virtual hearts that can be used for closed-loop validation of cardiac devices. A new parametric model of the pacemaker action potential is developed to address this need. The action potential phases are modeled using a hybrid automaton with one piecewise-linear continuous variable. The model can capture rate-dependent dynamics, such as action potential duration restitution, conduction velocity restitution, and overdrive suppression, by incorporating nonlinear update functions. Simulated dynamics of the model compared well with previous models and clinical data. The results show that the parametric model can reproduce the electrophysiological dynamics of a variety of pacemaker cells, such as the sinoatrial node, atrioventricular node, and the His-Purkinje system, under varying cardiac conditions. This is an important contribution toward closed-loop validation of cardiac devices using real-time heart models.
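
    A toy sketch of the modelling idea, i.e. a hybrid automaton whose single continuous variable (membrane potential) evolves piecewise linearly and switches phase at thresholds; the slopes and thresholds are illustrative, not the published model's parameters.

```python
import numpy as np

dt = 1.0                                                                # time step [ms]
slopes = {"diastolic": 0.02, "upstroke": 5.0, "repolarization": -0.15}  # mV/ms (illustrative)
v, phase, trace = -60.0, "diastolic", []

for _ in range(5000):
    v += slopes[phase] * dt
    # Guard conditions of the automaton: switch phase when a threshold is crossed.
    if phase == "diastolic" and v >= -40.0:
        phase = "upstroke"
    elif phase == "upstroke" and v >= 20.0:
        phase = "repolarization"
    elif phase == "repolarization" and v <= -60.0:
        phase = "diastolic"
    trace.append(v)

trace = np.array(trace)
beats = int(np.sum((trace[1:] >= 20.0) & (trace[:-1] < 20.0)))   # upstroke peaks reached
print(f"{beats} action potentials in {len(trace) * dt:.0f} ms")
```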

  8. Fitting neuron models to spike trains

    Directory of Open Access Journals (Sweden)

    Cyrille eRossant

    2011-02-01

    Full Text Available Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.

  9. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    DEFF Research Database (Denmark)

    Bolker, B.M.; Gardner, B.; Maunder, M.

    2013-01-01

    Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. R is convenient and (relatively) easy...... to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield...

  10. Comparison of parametric methods for modeling corneal surfaces

    Science.gov (United States)

    Bouazizi, Hala; Brunette, Isabelle; Meunier, Jean

    2017-02-01

    Corneal topography is a medical imaging technique to get the 3D shape of the cornea as a set of 3D points of its anterior and posterior surfaces. From these data, topographic maps can be derived to assist the ophthalmologist in the diagnosis of disorders. In this paper, we compare three different mathematical parametric representations of the corneal surfaces least-squares fitted to the data provided by corneal topography. The parameters obtained from these models reduce the dimensionality of the data from several thousand 3D points to only a few parameters and could eventually be useful for diagnosis, biometry, implant design, etc. The first representation is based on Zernike polynomials that are commonly used in optics. A variant of these polynomials, named Bhatia-Wolf, will also be investigated. These two sets of polynomials are defined over a circular domain, which is convenient to model the elevation (height) of the corneal surface. The third representation uses Spherical Harmonics that are particularly well suited for nearly-spherical object modeling, which is the case for the cornea. We compared the three methods using the following three criteria: the root-mean-square error (RMSE), the number of parameters and the visual accuracy of the reconstructed topographic maps. A large dataset of more than 2000 corneal topographies was used. Our results showed that Spherical Harmonics were superior, with a mean RMSE lower than 2.5 microns with 36 coefficients (order 5) for normal corneas and lower than 5 microns for two diseases affecting the corneal shape: keratoconus and Fuchs' dystrophy.
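
    A sketch of the least-squares fitting step for one of the representations, using a handful of unnormalized low-order Zernike terms on synthetic elevation data; the paper fits full order-5 expansions to measured topographies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
rho = np.sqrt(rng.uniform(0.0, 1.0, n))              # uniform sampling of the unit disk
theta = rng.uniform(0.0, 2.0 * np.pi, n)
# Synthetic elevation [microns]: mostly defocus plus a little astigmatism and noise.
z = 12.0 * (2.0 * rho ** 2 - 1.0) + 1.5 * rho ** 2 * np.cos(2.0 * theta) + rng.normal(0.0, 0.5, n)

# Design matrix of unnormalized Zernike terms: piston, tilts, defocus, astigmatisms.
A = np.column_stack([
    np.ones(n),                        # Z(0,0)
    rho * np.cos(theta),               # Z(1,1)
    rho * np.sin(theta),               # Z(1,-1)
    2.0 * rho ** 2 - 1.0,              # Z(2,0) defocus
    rho ** 2 * np.cos(2.0 * theta),    # Z(2,2)
    rho ** 2 * np.sin(2.0 * theta),    # Z(2,-2)
])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
rmse = np.sqrt(np.mean((A @ coeffs - z) ** 2))
print("Zernike coefficients:", np.round(coeffs, 2), " RMSE [microns]:", round(float(rmse), 2))
```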

  11. Comparison of radiation parametrizations within the HARMONIE-AROME NWP model

    Science.gov (United States)

    Rontu, Laura; Lindfors, Anders V.

    2018-05-01

    Downwelling shortwave radiation at the surface (SWDS, global solar radiation flux), given by three different parametrization schemes, was compared to observations in the HARMONIE-AROME numerical weather prediction (NWP) model experiments over Finland in spring 2017. Simulated fluxes agreed well with each other and with the observations in the clear-sky cases. In the cloudy-sky conditions, all schemes tended to underestimate SWDS at the daily level, as compared to the measurements. Large local and temporal differences between the model results and observations were seen, related to the variations and uncertainty of the predicted cloud properties. The results suggest a possibility to benefit from the use of different radiative transfer parametrizations in a NWP model to obtain perturbations for the fine-resolution ensemble prediction systems. In addition, we recommend usage of the global radiation observations for the standard validation of the NWP models.

  12. Parametric models to relate spike train and LFP dynamics with neural information processing.

    Science.gov (United States)

    Banerjee, Arpan; Dean, Heather L; Pesaran, Bijan

    2012-01-01

    Spike trains and local field potentials (LFPs) resulting from extracellular current flows provide a substrate for neural information processing. Understanding the neural code from simultaneous spike-field recordings and subsequent decoding of information processing events will have widespread applications. One way to demonstrate an understanding of the neural code, with particular advantages for the development of applications, is to formulate a parametric statistical model of neural activity and its covariates. Here, we propose a set of parametric spike-field models (unified models) that can be used with existing decoding algorithms to reveal the timing of task or stimulus specific processing. Our proposed unified modeling framework captures the effects of two important features of information processing: time-varying stimulus-driven inputs and ongoing background activity that occurs even in the absence of environmental inputs. We have applied this framework for decoding neural latencies in simulated and experimentally recorded spike-field sessions obtained from the lateral intraparietal area (LIP) of awake, behaving monkeys performing cued look-and-reach movements to spatial targets. Using both simulated and experimental data, we find that estimates of trial-by-trial parameters are not significantly affected by the presence of ongoing background activity. However, including background activity in the unified model improves goodness of fit for predicting individual spiking events. Uncovering the relationship between the model parameters and the timing of movements offers new ways to test hypotheses about the relationship between neural activity and behavior. We obtained significant spike-field onset time correlations from single trials using a previously published data set where significantly strong correlation was only obtained through trial averaging. We also found that unified models extracted a stronger relationship between neural response latency and trial

  13. Modeling of Acoustic Field for a Parametric Focusing Source Using the Spheroidal Beam Equation

    Directory of Open Access Journals (Sweden)

    Yu Lili

    2015-09-01

    A theoretical model of the acoustic field for a parametric focusing source on a concave spherical surface is proposed. In this model, the source boundary conditions of the Spheroidal Beam Equation (SBE) for difference-frequency wave excitation were studied. Propagation curves and beam patterns for the difference-frequency component of the acoustic field are compared with those obtained for the Khokhlov-Zabolotskaya-Kuznetsov (KZK) model. The results demonstrate that the focused parametric SBE model remains valid for large aperture angles in strongly focused acoustic fields. It is also shown that the focused parametric SBE model achieves high directivity and good focusing ability as the downshift ratio decreases and the half-aperture angle increases.

  14. Induced subgraph searching for geometric model fitting

    Science.gov (United States)

    Xiao, Fan; Xiao, Guobao; Yan, Yan; Wang, Xing; Wang, Hanzi

    2017-11-01

    In this paper, we propose a novel model fitting method based on graphs to fit and segment multiple-structure data. In the graph constructed on the data, each model instance is represented as an induced subgraph. Following the idea of pursuing the maximum consensus, the multiple geometric model fitting problem is formulated as a search for a set of induced subgraphs covering the maximum union set of vertices. After the generation and refinement of the induced subgraphs that represent the model hypotheses, the search is conducted on the "qualified" subgraphs. Multiple model instances can be simultaneously estimated by solving a converted problem. Then, we introduce an energy evaluation function to determine the number of model instances in the data. The proposed method is able to effectively estimate the number and the parameters of model instances in data severely corrupted by outliers and noise. Experimental results on synthetic data and real images validate the favorable performance of the proposed method compared with several state-of-the-art fitting methods.

  15. Exchange of parametric bridge models using a neutral data format

    NARCIS (Netherlands)

    Ji, Y.; Borrmann, André; Beetz, J.; Obergrießer, M.

    2013-01-01

    Parametric modeling is a well-established methodology in the field of mechanical engineering. It allows the creation of flexible geometric models using parameters for dimensions and makes it possible to define numeric relationships between these parameters by means of mathematical formulas and

  16. SEMIPARAMETRIC VERSUS PARAMETRIC CLASSIFICATION MODELS - AN APPLICATION TO DIRECT MARKETING

    NARCIS (Netherlands)

    Bult

    In this paper we are concerned with estimation of a classification model using semiparametric and parametric methods. Benefits and limitations of semiparametric models in general, and of Manski's maximum score method in particular, are discussed. The maximum score method yields consistent estimates

  17. Active component modeling for analog integrated circuit design. Model parametrization and implementation in the SPICE-PAC circuit simulator

    Energy Technology Data Exchange (ETDEWEB)

    Marchal, Xavier

    1992-06-19

    ... powerful simulators give reliable results only if the component models implemented in the simulator are sufficiently accurate. MOSFET transistors are the most common devices; SPICE offers three different models, classified according to the trade-off between simulation time and accuracy of the results. For some applications, however, these models only partially capture the real physical phenomena (avalanche effect or weak inversion); the results can then be refined with the distributed-charge model (MCD). Through a cell-by-cell description of the transistor, this model allows a finer evaluation of physical quantities along the channel (charges, mobility, electric fields). The modeling cannot be complete and usable without determining the model parameters; the most complete MOS transistor models may involve more than forty of them. We propose several programs, referred to interchangeably as 'adjustment', 'fitting' or 'parametrization' programs, which estimate the parameters of a model so that the electrical characteristics it computes are in good agreement with the measured ones. These are multi-criteria, nonlinear, multidimensional optimization programs; the criteria are expressed as objective functions whose variables are the parameters to be determined, within a hyper-rectangular domain. We show that for DC 'parametrization' only direct measurements of a device are used; to obtain the AC and TRAN parameters, indirect measurements are required. In the first case only the device itself is simulated; in the second, a more complex circuit surrounding the device must be simulated. These tasks are handled by two different applications, FIT-PAC and OPT-PAC, associated with the SPICE-PAC simulator. We present and discuss the results of
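
    Neither the MCD model nor the FIT-PAC/OPT-PAC programs are reproduced here. Purely to illustrate the general idea of DC parameter extraction by nonlinear least squares, the sketch below fits a textbook square-law saturation model to synthetic drain-current data with SciPy; the model form and the parameter names K, VT and lambda are illustrative assumptions, not the thesis' formulation.

        import numpy as np
        from scipy.optimize import curve_fit

        def id_saturation(vgs, k, vt, lam, vds=1.8):
            """Textbook square-law drain current in saturation (not the MCD model)."""
            vov = np.clip(vgs - vt, 0.0, None)        # zero below threshold
            return k * vov**2 * (1.0 + lam * vds)

        # Synthetic "measured" characteristics standing in for DC measurements.
        rng = np.random.default_rng(1)
        vgs = np.linspace(0.4, 1.8, 40)
        i_meas = id_saturation(vgs, k=2e-4, vt=0.55, lam=0.08) * (1 + rng.normal(0, 0.02, vgs.size))

        # Nonlinear least-squares extraction of (K, VT, lambda).
        p0 = [1e-4, 0.5, 0.05]                         # rough initial guess
        popt, pcov = curve_fit(id_saturation, vgs, i_meas, p0=p0)
        perr = np.sqrt(np.diag(pcov))                  # 1-sigma parameter uncertainties
        print("K, VT, lambda:", popt)
        print("std errors   :", perr)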

  18. Fitting Hidden Markov Models to Psychological Data

    Directory of Open Access Journals (Sweden)

    Ingmar Visser

    2002-01-01

    Markov models have been used extensively in the psychology of learning. Applications of hidden Markov models are rare, however. This is partially due to the fact that comprehensive statistics for model selection and model assessment are lacking in the psychological literature. We present model selection and model assessment statistics that are particularly useful in applying hidden Markov models in psychology. These statistics are presented and evaluated by simulation studies for a toy example. We compare AIC, BIC and related criteria and introduce a prediction error measure for assessing goodness-of-fit. In a simulation study, two methods of fitting equality constraints are compared. In two illustrative examples with experimental data we apply selection criteria, fit models with constraints and assess goodness-of-fit. First, data from a concept identification task are analyzed. Hidden Markov models provide a flexible approach to analyzing such data when compared to other modeling methods. Second, a novel application of hidden Markov models in implicit learning is presented. Hidden Markov models are used in this context to quantify the knowledge that subjects express in an implicit learning task. This method of analyzing implicit learning data provides a comprehensive approach for addressing important theoretical issues in the field.
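
    To make the model-selection step concrete, the sketch below evaluates the log-likelihood of a discrete-output hidden Markov model with the scaled forward algorithm and turns it into AIC and BIC. The toy sequence and the candidate parameter values are invented; in practice the parameters would first be estimated by EM (for example with a package such as hmmlearn in Python, or an R package for dependent mixture models), and the criteria would be compared across models with different numbers of states.

        import numpy as np

        def hmm_loglik(obs, pi, A, B):
            """Log-likelihood of a discrete-output HMM via the scaled forward algorithm.
            obs: integer symbol sequence; pi: initial probs; A: transitions; B: emissions."""
            alpha = pi * B[:, obs[0]]
            loglik = np.log(alpha.sum())
            alpha /= alpha.sum()
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
                s = alpha.sum()
                loglik += np.log(s)
                alpha /= s
            return loglik

        def aic_bic(loglik, n_free_params, n_obs):
            aic = -2 * loglik + 2 * n_free_params
            bic = -2 * loglik + n_free_params * np.log(n_obs)
            return aic, bic

        # Toy binary response sequence (e.g., correct/incorrect trials).
        rng = np.random.default_rng(2)
        obs = rng.integers(0, 2, size=200)

        # One candidate model whose parameters would normally come from EM estimation.
        pi2 = np.array([0.5, 0.5])
        A2 = np.array([[0.9, 0.1], [0.2, 0.8]])
        B2 = np.array([[0.8, 0.2], [0.3, 0.7]])
        ll2 = hmm_loglik(obs, pi2, A2, B2)
        k2 = 1 + 2 + 2   # free params: initial, transition and emission probabilities
        print("2-state model: loglik=%.1f AIC=%.1f BIC=%.1f" % (ll2, *aic_bic(ll2, k2, obs.size)))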

  19. The numerical model for parametric studies of forest haul roads pavements

    Directory of Open Access Journals (Sweden)

    Lenka Ševelová

    2010-01-01

    Forest road pavement structures are considered low-volume roads. These roads serve as a means of transporting wood and people, and they are also often used nowadays for recreational purposes. The construction of the pavements should be suitable for forest transportation despite their low bearing capacity. These pavement structures are very specific because of the special unbound materials used in their construction. To meet the requirements of pavement design and simulation analysis, an FEM model was created in the ANSYS software. This paper compares two material models used to describe the behaviour of unbound materials. The first is linear elastic according to Hooke's theory (H model) and the second one is the nonlinear plastic Drucker-Prager model (D-P model). The ANSYS model is parametric, so that repeated calculations useful for optimization analysis can be carried out.

  20. Contrast Gain Control Model Fits Masking Data

    Science.gov (United States)

    Watson, Andrew B.; Solomon, Joshua A.; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    We studied the fit of a contrast gain control model to data of Foley (JOSA 1994), consisting of thresholds for a Gabor patch masked by gratings of various orientations, or by compounds of two orientations. Our general model includes models of Foley and Teo & Heeger (IEEE 1994). Our specific model used a bank of Gabor filters with octave bandwidths at 8 orientations. Excitatory and inhibitory nonlinearities were power functions with exponents of 2.4 and 2. Inhibitory pooling was broad in orientation, but narrow in spatial frequency and space. Minkowski pooling used an exponent of 4. All of the data for observer KMF were well fit by the model. We have developed a contrast gain control model that fits masking data. Unlike Foley's, our model accepts images as inputs. Unlike Teo & Heeger's, our model did not require multiple channels for different dynamic ranges.

  1. Functional summary statistics for the Johnson-Mehl model

    DEFF Research Database (Denmark)

    Møller, Jesper; Ghorbani, Mohammad

    The Johnson-Mehl germination-growth model is a spatio-temporal point process model which, among other things, has been used for the description of neurotransmitter datasets. However, for such datasets parametric Johnson-Mehl models fitted by maximum likelihood have not yet been evaluated by means of functional summary statistics. This paper therefore invents four functional summary statistics adapted to the Johnson-Mehl model, with two of them based on the second-order properties and the other two on the nuclei-boundary distances for the associated Johnson-Mehl tessellation. The theoretical properties of the functional summary statistics are investigated, non-parametric estimators are suggested, and their usefulness for model checking is examined in a simulation study. The functional summary statistics are also used for checking fitted parametric Johnson-Mehl models for a neurotransmitter dataset.

  2. Wind Farm parametrization in the mesoscale model WRF

    DEFF Research Database (Denmark)

    Volker, Patrick; Badger, Jake; Hahmann, Andrea N.

    2012-01-01

    ..., but are parametrized as another sub-grid scale process. In order to appropriately capture the wind farm wake recovery and its direction, two properties are important, among others: the total energy extracted by the wind farm and its velocity deficit distribution. In the considered parametrization the individual turbines produce a thrust dependent on the background velocity. For the sub-grid scale velocity deficit, the entrainment from the free atmospheric flow into the wake region, which is responsible for the expansion, is taken into account. Furthermore, since the model horizontal distance is several times... the extracted force is proportional to the turbine area interfacing a grid cell. The sub-grid scale wake expansion is achieved by adding turbulence kinetic energy (proportional to the extracted power) to the flow. The validity of both wind farm parametrizations has been verified against observational data. We...

  3. Multivariable Parametric Cost Model for Ground Optical Telescope Assembly

    Science.gov (United States)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2005-01-01

    A parametric cost model for ground-based telescopes is developed using multivariable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction-limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature are examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived.
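
    The published model's actual functional form and coefficients are not reproduced here. As a generic sketch of how such a multivariable parametric cost relation can be fitted, the snippet below estimates a multiplicative power law, cost ~ a * D^b * lambda^c, by ordinary least squares in log space on synthetic data; all numbers are invented for illustration.

        import numpy as np

        # Synthetic telescope data (diameter in m, diffraction-limited wavelength in
        # micrometres, cost in $M); not the data or coefficients from the paper.
        rng = np.random.default_rng(3)
        diameter = rng.uniform(2.0, 10.0, 40)
        wavelength = rng.uniform(0.5, 10.0, 40)
        cost = 4.0 * diameter**1.7 * wavelength**-0.3 * np.exp(rng.normal(0, 0.15, 40))

        # A multiplicative power-law model becomes linear after taking logarithms.
        X = np.column_stack([np.ones_like(diameter), np.log(diameter), np.log(wavelength)])
        y = np.log(cost)
        beta, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
        a, b, c = np.exp(beta[0]), beta[1], beta[2]
        print(f"cost ~ {a:.2f} * D^{b:.2f} * lambda^{c:.2f}")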

  4. Multivariable Parametric Cost Model for Ground Optical: Telescope Assembly

    Science.gov (United States)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature were examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e. multi-telescope phased-array systems). Additionally, single variable models based on aperture diameter were derived.

  5. DEVELOPING PARAMETRIC BUILDING MODELS – THE GANDIS USE CASE

    Directory of Open Access Journals (Sweden)

    W. Thaller

    2012-09-01

    In the course of a project related to green building design, we have created a group of eight parametric building models that can be manipulated interactively with respect to dimensions, number of floors, and a few other parameters. We report on the commonalities and differences between the models and the abstractions that we were able to identify.

  6. A parametric costing model for wave energy technology

    International Nuclear Information System (INIS)

    1992-01-01

    This document describes the philosophy and technical approach to a parametric cost model for offshore wave energy systems. Consideration is given both to existing known devices and other devices yet to be conceptualised. The report is complementary to a spreadsheet based cost estimating model. The latter permits users to derive capital cost estimates using either inherent default data or user provided data, if a particular scheme provides sufficient design definition for more accurate estimation. The model relies on design default data obtained from wave energy device designs and a set of specifically collected cost data. (author)

  7. Parametric model measurement: reframing traditional measurement ideas in neuropsychological practice and research.

    Science.gov (United States)

    Brown, Gregory G; Thomas, Michael L; Patt, Virginie

    Neuropsychology is an applied measurement field with its psychometric work primarily built upon classical test theory (CTT). We describe a series of psychometric models to supplement the use of CTT in neuropsychological research and test development. We introduce increasingly complex psychometric models as measurement algebras, which include model parameters that represent abilities and item properties. Within this framework of parametric model measurement (PMM), neuropsychological assessment involves the estimation of model parameters with ability parameter values assuming the role of test 'scores'. Moreover, the traditional notion of measurement error is replaced by the notion of parameter estimation error, and the definition of reliability becomes linked to notions of item and test information. The more complex PMM approaches incorporate into the assessment of neuropsychological performance formal parametric models of behavior validated in the experimental psychology literature, along with item parameters. These PMM approaches endorse the use of experimental manipulations of model parameters to assess a test's construct representation. Strengths and weaknesses of these models are evaluated by their implications for measurement error conditional upon ability level, sensitivity to sample characteristics, computational challenges to parameter estimation, and construct validity. A family of parametric psychometric models can be used to assess latent processes of interest to neuropsychologists. By modeling latent abilities at the item level, psychometric studies in neuropsychology can investigate construct validity and measurement precision within a single framework and contribute to a unification of statistical methods within the framework of generalized latent variable modeling.

  8. Modeling Evolution on Nearly Neutral Network Fitness Landscapes

    Science.gov (United States)

    Yakushkina, Tatiana; Saakian, David B.

    2017-08-01

    To describe virus evolution, it is necessary to define a fitness landscape. In this article, we consider the microscopic models with the advanced version of neutral network fitness landscapes. In this problem setting, we suppose a fitness difference between one-point mutation neighbors to be small. We construct a modification of the Wright-Fisher model, which is related to ordinary infinite population models with nearly neutral network fitness landscape at the large population limit. From the microscopic models in the realistic sequence space, we derive two versions of nearly neutral network models: with sinks and without sinks. We claim that the suggested model describes the evolutionary dynamics of RNA viruses better than the traditional Wright-Fisher model with few sequences.
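
    A minimal numerical sketch of the classical resampling step that the article builds on is given below: a plain multinomial Wright-Fisher update with genotype fitnesses, not the authors' nearly neutral network extension. The genotype set, fitness values and population size are illustrative assumptions.

        import numpy as np

        def wright_fisher(fitness, x0, n_pop, n_gen, rng):
            """Discrete Wright-Fisher dynamics: each generation, offspring are drawn
            multinomially with probabilities proportional to (frequency * fitness)."""
            x = np.asarray(x0, dtype=float)
            traj = [x.copy()]
            for _ in range(n_gen):
                w = x * fitness
                p = w / w.sum()
                counts = rng.multinomial(n_pop, p)
                x = counts / n_pop
                traj.append(x)
            return np.array(traj)

        rng = np.random.default_rng(4)
        # Three genotypes: two nearly neutral "network" types and one deleterious type.
        fitness = np.array([1.000, 0.999, 0.95])
        traj = wright_fisher(fitness, x0=[0.4, 0.4, 0.2], n_pop=1000, n_gen=200, rng=rng)
        print("final frequencies:", np.round(traj[-1], 3))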

  9. Analytical fitting model for rough-surface BRDF.

    Science.gov (United States)

    Renhorn, Ingmar G E; Boreman, Glenn D

    2008-08-18

    A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.

  10. Evaluating Portfolio Value-At-Risk Using Semi-Parametric GARCH Models

    NARCIS (Netherlands)

    J.V.K. Rombouts; M.J.C.M. Verbeek (Marno)

    2009-01-01

    textabstractIn this paper we examine the usefulness of multivariate semi-parametric GARCH models for evaluating the Value-at-Risk (VaR) of a portfolio with arbitrary weights. We specify and estimate several alternative multivariate GARCH models for daily returns on the S&P 500 and Nasdaq indexes.

  11. Parametric model to estimate containment loads following an ex-vessel steam spike

    International Nuclear Information System (INIS)

    Lopez, R.; Hernandez, J.; Huerta, A.

    1998-01-01

    This paper describes the use of a relatively simple parametric model to estimate containment loads following an ex-vessel steam spike. The study was motivated because several PSAs have identified containment loads accompanying reactor vessel failures as a major contributor to early containment failure. The paper includes a detailed description of the simple but physically sound parametric model which was adopted to estimate containment loads following a steam spike into the reactor cavity. (author)

  12. Update on Parametric Cost Models for Space Telescopes

    Science.gov (United States)

    Stahl. H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda

    2011-01-01

    Since the June 2010 Astronomy Conference, an independent review of our cost data base discovered some inaccuracies and inconsistencies which can modify our previously reported results. This paper will review changes to the data base, our confidence in those changes and their effect on various parametric cost models

  13. Spatial variability and parametric uncertainty in performance assessment models

    International Nuclear Information System (INIS)

    Pensado, Osvaldo; Mancillas, James; Painter, Scott; Tomishima, Yasuo

    2011-01-01

    The problem of defining an appropriate treatment of distribution functions (which could represent spatial variability or parametric uncertainty) is examined based on a generic performance assessment model for a high-level waste repository. The generic model incorporated source term models available in GoldSim ® , the TDRW code for contaminant transport in sparse fracture networks with a complex fracture-matrix interaction process, and a biosphere dose model known as BDOSE TM . Using the GoldSim framework, several Monte Carlo sampling approaches and transport conceptualizations were evaluated to explore the effect of various treatments of spatial variability and parametric uncertainty on dose estimates. Results from a model employing a representative source and ensemble-averaged pathway properties were compared to results from a model allowing for stochastic variation of transport properties along streamline segments (i.e., explicit representation of spatial variability within a Monte Carlo realization). We concluded that the sampling approach and the definition of an ensemble representative do influence consequence estimates. In the examples analyzed in this paper, approaches considering limited variability of a transport resistance parameter along a streamline increased the frequency of fast pathways resulting in relatively high dose estimates, while those allowing for broad variability along streamlines increased the frequency of 'bottlenecks' reducing dose estimates. On this basis, simplified approaches with limited consideration of variability may suffice for intended uses of the performance assessment model, such as evaluation of site safety. (author)

  14. Random-growth urban model with geographical fitness

    Science.gov (United States)

    Kii, Masanobu; Akimoto, Keigo; Doi, Kenji

    2012-12-01

    This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
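
    The authors' exact dynamical equations are not reproduced here, but the flavour of fitness-weighted growth with randomness only at city creation can be sketched as follows; the growth rule, the lognormal fitness distribution and all parameter values are assumptions made for illustration.

        import numpy as np

        rng = np.random.default_rng(5)
        growth_total = 50.0      # population added per time step (illustrative)
        p_new = 0.05             # probability that a new city is founded in a step

        sizes = np.array([10.0])
        fitness = np.array([rng.lognormal(0.0, 0.3)])   # positive geographical fitness

        for step in range(5000):
            # Randomness enters only through city creation, as in the article.
            if rng.random() < p_new:
                sizes = np.append(sizes, 1.0)
                fitness = np.append(fitness, rng.lognormal(0.0, 0.3))
            # Deterministic growth: each city receives a share of the new population
            # proportional to (fitness * current size), i.e. fitness-weighted
            # preferential attachment, giving smooth individual growth trajectories.
            w = sizes * fitness
            sizes = sizes + growth_total * w / w.sum()

        ranked = np.sort(sizes)[::-1]
        print("number of cities:", ranked.size)
        print("largest five populations:", np.round(ranked[:5], 1))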

  15. Identification of parametric models with a priori knowledge of process properties

    Directory of Open Access Journals (Sweden)

    Janiszowski Krzysztof B.

    2016-12-01

    An approach to estimation of a parametric discrete-time model of a process in the case of some a priori knowledge of the investigated process properties is presented. The knowledge of plant properties is introduced in the form of linear bounds, which can be determined for the coefficient vector of the parametric model studied. The approach yields a special biased estimation of model coefficients that preserves the demanded properties. A formula for estimation of the model coefficients is derived and combined with a recursive scheme determined for minimization of the sum of absolute model errors. The estimation problem for a model with known static gains of the inputs is discussed and the proper formulas are derived. This approach can overcome the non-identifiability problem which has been observed during estimation based on measurements recorded in industrial closed-loop control systems. The application of the proposed approach to estimation of a model for an industrial plant (a water injector into the steam flow in a power plant) is presented and discussed.
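
    The paper's recursive absolute-error scheme is not reproduced here, but the general idea of estimating an ARX model subject to a known static gain can be sketched as an equality-constrained least-squares problem solved through its KKT system; the model orders, the data and the gain value below are invented for illustration.

        import numpy as np

        def arx_regressors(y, u, na, nb):
            """Build the ARX regression y(t) = phi(t).theta with
            theta = [a_1..a_na, b_1..b_nb] and phi(t) = [-y(t-1..t-na), u(t-1..t-nb)]."""
            n0 = max(na, nb)
            rows, targets = [], []
            for t in range(n0, len(y)):
                rows.append(np.r_[-y[t - np.arange(1, na + 1)], u[t - np.arange(1, nb + 1)]])
                targets.append(y[t])
            return np.array(rows), np.array(targets)

        def constrained_arx_fit(y, u, na, nb, dc_gain):
            """Least squares subject to the linear constraint sum(b) - g*sum(a) = g,
            i.e. a fixed static gain g = B(1)/A(1), solved through the KKT system."""
            Phi, Y = arx_regressors(y, u, na, nb)
            C = np.r_[np.full(na, -dc_gain), np.ones(nb)][None, :]
            d = np.array([dc_gain])
            K = np.block([[Phi.T @ Phi, C.T], [C, np.zeros((1, 1))]])
            rhs = np.r_[Phi.T @ Y, d]
            sol = np.linalg.solve(K, rhs)
            return sol[: na + nb]          # drop the Lagrange multiplier

        # Simulated first-order plant y(t) = 0.8*y(t-1) + 0.4*u(t-1), static gain 2.0.
        rng = np.random.default_rng(6)
        u = rng.normal(size=500)
        y = np.zeros(500)
        for t in range(1, 500):
            y[t] = 0.8 * y[t - 1] + 0.4 * u[t - 1] + 0.02 * rng.normal()

        theta = constrained_arx_fit(y, u, na=1, nb=1, dc_gain=2.0)
        print("a1, b1:", np.round(theta, 3))   # about -0.8 and 0.4, with gain exactly 2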

  16. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Science.gov (United States)

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.
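
    The Bayesian LASSO, Bayes A/B and neural-network models compared in the study are not reproduced here. As a simple stand-in for the linear-versus-RKHS contrast, the sketch below compares an additive ridge regression with a Gaussian-kernel ridge regression using scikit-learn on synthetic marker data; the data, the kernel choice and the hyperparameter values are assumptions.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import cross_val_score

        # Synthetic marker matrix (rows = lines, columns = 0/1/2 marker scores) and a
        # trait with a non-additive component, standing in for the DArT data.
        rng = np.random.default_rng(7)
        X = rng.integers(0, 3, size=(300, 500)).astype(float)
        beta = rng.normal(0, 0.05, 500)
        y = X @ beta + 0.5 * np.sin(X[:, 0] * X[:, 1]) + rng.normal(0, 0.5, 300)

        models = {
            "linear ridge (additive)": Ridge(alpha=10.0),
            "RKHS-style Gaussian kernel ridge": KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3),
        }
        # Cross-validated predictive ability, analogous in spirit to the comparison above.
        for name, model in models.items():
            r = cross_val_score(model, X, y, cv=5, scoring="r2")
            print(f"{name}: mean CV R^2 = {r.mean():.3f}")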

  17. Parametric Analysis of Flexible Logic Control Model

    Directory of Open Access Journals (Sweden)

    Lihua Fu

    2013-01-01

    Based on a deep analysis of the essential relation between the two input variables of a normal two-dimensional fuzzy controller, we used a universal combinatorial operation model to describe the logic relationship and gave a flexible logic control method to realize effective control of complex systems. In practical control applications, how to determine the general correlation coefficient of the flexible logic control model remains a problem for further study. First, the conventional universal combinatorial operation model has been limited to the interval [0,1]. Consequently, this paper studies a kind of universal combinatorial operation model based on the interval [a,b]. Some important theorems are given and proved, which provide a foundation for the flexible logic control method. To deal reasonably with the complex relations among the factors in a complex system, a kind of universal combinatorial operation model with unequal weights is put forward. Then, this paper carries out the parametric analysis of the flexible logic control model. Some research results are given, which provide important guidance for determining the values of the general correlation coefficients in practical control applications.

  18. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    Science.gov (United States)

    Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magunsson, Arni; Martell, Steve; Nash, John; Nielson, Anders; Regentz, Jim; Skaug, Hans; Zipkin, Elise

    2013-01-01

    1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.
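
    R, AD Model Builder and BUGS are compared in the article itself. Purely to illustrate the kind of nonlinear estimation problem discussed, the sketch below fits a logistic growth curve to synthetic abundance data with SciPy, fitting on the log scale with bounds, which is one example of reformulating a model so that estimation is better behaved; the data and starting values are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic_growth(t, K, r, n0):
            """Logistic growth solution N(t) = K / (1 + (K/n0 - 1) * exp(-r t))."""
            return K / (1.0 + (K / n0 - 1.0) * np.exp(-r * t))

        # Synthetic abundance time series with lognormal observation noise.
        rng = np.random.default_rng(8)
        t = np.linspace(0, 20, 40)
        n_obs = logistic_growth(t, K=100.0, r=0.5, n0=5.0) * np.exp(rng.normal(0, 0.1, t.size))

        # Fitting on the log scale often behaves better for strictly positive data.
        popt, pcov = curve_fit(
            lambda tt, K, r, n0: np.log(logistic_growth(tt, K, r, n0)),
            t, np.log(n_obs), p0=[80.0, 0.3, 3.0],
            bounds=([1.0, 0.01, 0.1], [1000.0, 5.0, 50.0]))
        print("K, r, n0:", np.round(popt, 2))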

  19. Bioprocess iterative batch-to-batch optimization based on hybrid parametric/nonparametric models.

    Science.gov (United States)

    Teixeira, Ana P; Clemente, João J; Cunha, António E; Carrondo, Manuel J T; Oliveira, Rui

    2006-01-01

    This paper presents a novel method for iterative batch-to-batch dynamic optimization of bioprocesses. The relationship between process performance and control inputs is established by means of hybrid grey-box models combining parametric and nonparametric structures. The bioreactor dynamics are defined by material balance equations, whereas the cell population subsystem is represented by an adjustable mixture of nonparametric and parametric models. Thus optimizations are possible without detailed mechanistic knowledge concerning the biological system. A clustering technique is used to supervise the reliability of the nonparametric subsystem during the optimization. Whenever the nonparametric outputs are unreliable, the objective function is penalized. The technique was evaluated with three simulation case studies. The overall results suggest that the convergence to the optimal process performance may be achieved after a small number of batches. The model unreliability risk constraint along with sampling scheduling are crucial to minimize the experimental effort required to attain a given process performance. In general terms, it may be concluded that the proposed method broadens the application of the hybrid parametric/nonparametric modeling technique to "newer" processes with higher potential for optimization.

  20. Crossing statistic: Bayesian interpretation, model selection and resolving dark energy parametrization problem

    International Nuclear Information System (INIS)

    Shafieloo, Arman

    2012-01-01

    By introducing Crossing functions and hyper-parameters, I show that the Bayesian interpretation of the Crossing Statistics [1] can be used straightforwardly for the purpose of model selection among cosmological models. In this approach, to falsify a cosmological model there is no need to compare it with other models or to assume any particular parametrization of cosmological quantities like the luminosity distance, the Hubble parameter or the equation of state of dark energy. Instead, the hyper-parameters of the Crossing functions act as discriminators between correct and wrong models. Using this approach one can falsify any assumed cosmological model without putting priors on the underlying actual model of the universe and its parameters, hence the issue of dark energy parametrization is resolved. It is also shown that the sensitivity of the method to the intrinsic dispersion of the data is small, which is another important characteristic of the method when testing cosmological models against data with large uncertainties.

  1. The FITS model office ergonomics program: a model for best practice.

    Science.gov (United States)

    Chim, Justine M Y

    2014-01-01

    An effective office ergonomics program can predict positive results in reducing musculoskeletal injury rates, enhancing productivity, and improving staff well-being and job satisfaction. Its objective is to provide a systematic solution to manage the potential risk of musculoskeletal disorders among computer users in an office setting. The FITS Model Office Ergonomics Program has been developed, drawing on the legislative requirements for promoting the health and safety of workers using computers for extended periods as well as on previous research findings. The Model is based on practical industrial knowledge in ergonomics, occupational health and safety management, and human resources management in Hong Kong and overseas. This paper proposes a comprehensive office ergonomics program, the FITS Model, which considers (1) Furniture Evaluation and Selection; (2) Individual Workstation Assessment; (3) Training and Education; and (4) Stretching Exercises and Rest Breaks as the elements of an effective program. An experienced ergonomics practitioner should be included in the program design and implementation. Through the FITS Model Office Ergonomics Program, the risk of musculoskeletal disorders among computer users can be eliminated or minimized, and workplace health and safety and employees' wellness enhanced.

  2. Identifying Some Risk Factors for the Time to Death of the Elderly Using the Semi-Parametric Blended Model of Survival Analysis With Competing Risks

    Directory of Open Access Journals (Sweden)

    Samane Hajiabbasi

    2018-01-01

    Conclusion: In single-variable fitting, age, history of myocardial infarction, history of stroke, and kidney problems were identified as having significant effects on the time to death of the elderly. The one-variable semi-parametric competing-risk mixture models identified more significant risk factors for the time to death of the elderly than a multivariate model fitted to the data. This implies that the role of some independent variables can be explained by other independent variables.

  3. Reinforcement Toolbox, a Parametric Reinforcement Modelling Tool for Curved Surface Structures

    NARCIS (Netherlands)

    Lauppe, J.; Rolvink, A.; Coenders, J.L.

    2013-01-01

    This paper presents a computational strategy and parametric modelling toolbox which aim at enhancing the design- and production process of reinforcement in freeform curved surface structures. The computational strategy encompasses the necessary steps of raising an architectural curved surface model

  4. Four-parametric two-layer algebraic model of transition boundary layer at a planar plate

    International Nuclear Information System (INIS)

    Labusov, A.N.; Lapin, Yu.V.

    1996-01-01

    Consideration is given to a four-parametric two-layer algebraic model of the transition boundary layer on a flat plate, based on a generalization of the one-parametric algebraic Prandtl-Loitsjansky-Klauzer-3 model. The algebraic model uses Prandtl formulas for the mixing length with the Loitsjansky damping multiplier in the inner region and, for the turbulent viscosity, a relation based on the universal scales of the external region known as the Klauzer-3 formula. 12 refs., 10 figs.

  5. Facial Performance Transfer via Deformable Models and Parametric Correspondence.

    Science.gov (United States)

    Asthana, Akshay; de la Hunty, Miles; Dhall, Abhinav; Goecke, Roland

    2012-09-01

    The issue of transferring facial performance from one person's face to another's has been an area of interest for the movie industry and the computer graphics community for quite some time. In recent years, deformable face models, such as the Active Appearance Model (AAM), have made it possible to track and synthesize faces in real time. Not surprisingly, deformable face model-based approaches for facial performance transfer have gained tremendous interest in the computer vision and graphics community. In this paper, we focus on the problem of real-time facial performance transfer using the AAM framework. We propose a novel approach of learning the mapping between the parameters of two completely independent AAMs, using them to facilitate the facial performance transfer in a more realistic manner than previous approaches. The main advantage of modeling this parametric correspondence is that it allows a "meaningful" transfer of both the nonrigid shape and texture across faces irrespective of the speakers' gender, shape, and size of the faces, and illumination conditions. We explore linear and nonlinear methods for modeling the parametric correspondence between the AAMs and show that the sparse linear regression method performs the best. Moreover, we show the utility of the proposed framework for a cross-language facial performance transfer that is an area of interest for the movie dubbing industry.

  6. Developmental models for estimating ecological responses to environmental variability: structural, parametric, and experimental issues.

    Science.gov (United States)

    Moore, Julia L; Remais, Justin V

    2014-03-01

    Developmental models that account for the metabolic effect of temperature variability on poikilotherms, such as degree-day models, have been widely used to study organism emergence, range and development, particularly in agricultural and vector-borne disease contexts. Though such models are simple and easy to use, structural and parametric issues can influence their outputs, often substantially. Because the underlying assumptions and limitations of these models have rarely been considered, this paper reviews the structural, parametric, and experimental issues that arise when using degree-day models, including the implications of particular structural or parametric choices, as well as the assumptions that underlie commonly used models. Linear and non-linear developmental functions are compared, as are common methods used to incorporate temperature thresholds and calculate daily degree-days. Substantial differences in predicted emergence time arose when using linear versus non-linear developmental functions to model the emergence time of a model organism. The optimal method for calculating degree-days depends upon where key temperature threshold parameters fall relative to the daily minimum and maximum temperatures, as well as on the shape of the daily temperature curve. No method is shown to be universally superior, though one commonly used method, the daily average method, consistently provides accurate results. The sensitivity of model projections to these methodological issues highlights the need to make structural and parametric selections based on a careful consideration of the specific biological response of the organism under study, and the specific temperature conditions of the geographic regions of interest. When degree-day model limitations are considered and model assumptions met, the models can be a powerful tool for studying temperature-dependent development.
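
    The daily average method mentioned above accumulates each day's heat units from the daily minimum and maximum temperatures. A minimal sketch follows, in which the temperatures and thresholds are invented and the upper threshold is applied as a simple horizontal cutoff (one of several conventions discussed in the literature).

        import numpy as np

        def degree_days_average(t_min, t_max, t_base, t_upper=None):
            """Daily degree-days with the daily-average method:
            DD = max(mean(Tmin, Tmax) - Tbase, 0), optionally capped at an upper threshold."""
            t_mean = (np.asarray(t_min, float) + np.asarray(t_max, float)) / 2.0
            if t_upper is not None:
                t_mean = np.minimum(t_mean, t_upper)     # horizontal-cutoff convention
            return np.maximum(t_mean - t_base, 0.0)

        # Example: a week of daily minima/maxima (deg C) with a 10 C lower threshold.
        t_min = [6, 8, 9, 12, 14, 11, 7]
        t_max = [15, 18, 22, 26, 28, 20, 16]
        dd = degree_days_average(t_min, t_max, t_base=10.0, t_upper=30.0)
        print("daily degree-days:", dd)
        print("accumulated:", dd.sum())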

  7. Parametric amplification of metric fluctuations during reheating in two field models

    International Nuclear Information System (INIS)

    Finelli, F.; Brandenberger, R.

    2000-01-01

    We study the parametric amplification of super-Hubble-scale scalar metric fluctuations at the end of inflation in some specific two-field models of inflation, a class of which is motivated by hybrid inflation. We demonstrate that there can indeed be a large growth of fluctuations due to parametric resonance and that this effect is not taken into account by the conventional theory of isocurvature perturbations. Scalar field interactions play a crucial role in this analysis. We discuss the conditions under which there can be nontrivial parametric resonance effects on large scales

  8. A non-parametric hierarchical model to discover behavior dynamics from tracks

    NARCIS (Netherlands)

    Kooij, J.F.P.; Englebienne, G.; Gavrila, D.M.

    2012-01-01

    We present a novel non-parametric Bayesian model to jointly discover the dynamics of low-level actions and high-level behaviors of tracked people in open environments. Our model represents behaviors as Markov chains of actions which capture high-level temporal dynamics. Actions may be shared by

  9. A parametric model for the global thermodynamic behavior of fluids in the critical region

    International Nuclear Information System (INIS)

    Luettmer-Strathmann, J.; Tang, S.; Sengers, J.V.

    1992-01-01

    The asymptotic thermodynamic behavior of fluids near the critical point is described by scaling laws with universal scaling functions that can be represented by parametric equations. In this paper, we derive a more general parametric model that incorporates the crossover from singular thermodynamic behavior near the critical point to regular classical thermodynamic behavior far away from the critical point. Using ethane as an example, we show that such a parametric crossover model yields an accurate representation of the thermodynamic properties of fluids in a large region around the critical point

  10. topicmodels: An R Package for Fitting Topic Models

    Directory of Open Access Journals (Sweden)

    Bettina Grun

    2011-05-01

    Topic models allow the probabilistic modeling of term frequency occurrences in documents. The fitted model can be used to estimate the similarity between documents as well as between a set of specified keywords using an additional layer of latent variables which are referred to as topics. The R package topicmodels provides basic infrastructure for fitting topic models based on data structures from the text mining package tm. The package includes interfaces to two algorithms for fitting topic models: the variational expectation-maximization algorithm provided by David M. Blei and co-authors and an algorithm using Gibbs sampling by Xuan-Hieu Phan and co-authors.
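
    The record describes an R workflow (tm plus topicmodels); a rough Python analog of the same two steps, building a document-term matrix and fitting an LDA model by variational EM, is sketched below with scikit-learn. The tiny corpus and the number of topics are illustrative only.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        docs = [
            "parametric model fitting with maximum likelihood",
            "bayesian model selection and posterior inference",
            "solar radiation measurements and curve fitting",
            "radiation flux observations in weather models",
            "hidden markov models for behavioural data",
            "markov chain monte carlo for parameter estimation",
        ]

        # Document-term matrix (the role played by the tm package in the R workflow).
        vectorizer = CountVectorizer(stop_words="english")
        dtm = vectorizer.fit_transform(docs)

        # Variational-EM LDA, broadly analogous to the VEM algorithm in topicmodels.
        lda = LatentDirichletAllocation(n_components=2, random_state=0)
        doc_topics = lda.fit_transform(dtm)

        terms = vectorizer.get_feature_names_out()
        for k, comp in enumerate(lda.components_):
            top = [terms[i] for i in comp.argsort()[::-1][:5]]
            print(f"topic {k}: {top}")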

  11. Variance in parametric images: direct estimation from parametric projections

    International Nuclear Information System (INIS)

    Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.

    2000-01-01

    Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)

  12. Parametric Resonance in the Early Universe - A Fitting Analysis

    CERN Document Server

    Figueroa, Daniel G.

    2017-02-01

    Particle production via parametric resonance in the early Universe, is a non-perturbative, non-linear and out-of-equilibrium phenomenon. Although it is a well studied topic, whenever a new scenario exhibits parametric resonance, a full re-analysis is normally required. To avoid this tedious task, many works present often only a simplified linear treatment of the problem. In order to surpass this circumstance in the future, we provide a fitting analysis of parametric resonance through all its relevant stages: initial linear growth, non-linear evolution, and relaxation towards equilibrium. Using lattice simulations in an expanding grid in $3+1$ dimensions, we parametrise the dynamics' outcome scanning over the relevant ingredients: role of the oscillatory field, particle coupling strength, initial conditions, and background expansion rate. We emphasise the inaccuracy of the linear calculation of the decay time of the oscillatory field, and propose a more appropriate definition of this scale based on the subsequ...

  13. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia; Harmandaris, Vagelis; Katsoulakis, Markos A.; Plechac, Petr

    2015-01-01

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics

  14. Parametric modeling for damped sinusoids from multiple channels

    DEFF Research Database (Denmark)

    Zhou, Zhenhua; So, Hing Cheung; Christensen, Mads Græsbøll

    2013-01-01

    The problem of parametric modeling for noisy damped sinusoidal signals from multiple channels is addressed. Utilizing the shift invariance property of the signal subspace, the number of distinct sinusoidal poles in the multiple channels is first determined. With the estimated number, the distinct frequencies and damping factors are then computed with the multi-channel weighted linear prediction method. The estimated sinusoidal poles are then matched to each channel according to the extreme value theory of distribution of random fields. Simulations are performed to show the performance advantages of the proposed multi-channel sinusoidal modeling methodology compared with existing methods.

  15. Assessing Goodness of Fit in Item Response Theory with Nonparametric Models: A Comparison of Posterior Probabilities and Kernel-Smoothing Approaches

    Science.gov (United States)

    Sueiro, Manuel J.; Abad, Francisco J.

    2011-01-01

    The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…

  16. A simple non-parametric goodness-of-fit test for elliptical copulas

    Directory of Open Access Journals (Sweden)

    Jaser Miriam

    2017-12-01

    In this paper, we propose a simple non-parametric goodness-of-fit test for elliptical copulas of any dimension. It is based on the equality of Kendall's tau and Blomqvist's beta for all bivariate margins. Nominal level and power of the proposed test are investigated in a Monte Carlo study. An empirical application illustrates our goodness-of-fit test at work.
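
    The test statistic itself is not reproduced here, but the population identity it exploits is easy to check empirically: for an elliptical (e.g. Gaussian) copula, Kendall's tau and Blomqvist's beta of every bivariate margin coincide. The sketch below compares sample estimates of the two on simulated bivariate Gaussian data; the sample size and correlation are arbitrary.

        import numpy as np
        from scipy.stats import kendalltau

        def blomqvist_beta(x, y):
            """Blomqvist's beta (medial correlation): concordance relative to the medians."""
            cx = np.sign(x - np.median(x))
            cy = np.sign(y - np.median(y))
            return np.mean(cx * cy)

        # Bivariate Gaussian sample: an elliptical copula, so tau and beta should agree.
        rng = np.random.default_rng(9)
        rho = 0.6
        data = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=5000)
        x, y = data[:, 0], data[:, 1]

        tau, _pvalue = kendalltau(x, y)
        beta = blomqvist_beta(x, y)
        print(f"Kendall tau = {tau:.3f}, Blomqvist beta = {beta:.3f}")
        print(f"population value 2/pi*arcsin(rho) = {2/np.pi*np.arcsin(rho):.3f}")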

  17. Integrating acoustic analysis in the architectural design process using parametric modelling

    DEFF Research Database (Denmark)

    Peters, Brady

    2011-01-01

    This paper discusses how parametric modeling techniques can be used to provide architectural designers with a better understanding of the acoustic performance of their designs and provide acoustic engineers with models that can be analyzed using computational acoustic analysis software. Architects......, acoustic performance can inform the geometry and material logic of the design. In this way, the architectural design and the acoustic analysis model become linked....

  18. Local fit evaluation of structural equation models using graphical criteria.

    Science.gov (United States)

    Thoemmes, Felix; Rosseel, Yves; Textor, Johannes

    2018-03-01

    Evaluation of model fit is critically important for every structural equation model (SEM), and sophisticated methods have been developed for this task. Among them are the χ² goodness-of-fit test, decomposition of the χ², derived measures like the popular root mean square error of approximation (RMSEA) or comparative fit index (CFI), or inspection of residuals or modification indices. Many of these methods provide a global approach to model fit evaluation: A single index is computed that quantifies the fit of the entire SEM to the data. In contrast, graphical criteria like d-separation or trek-separation allow derivation of implications that can be used for local fit evaluation, an approach that is hardly ever applied. We provide an overview of local fit evaluation from the viewpoint of SEM practitioners. In the presence of model misfit, local fit evaluation can potentially help in pinpointing where the problem with the model lies. For models that do fit the data, local tests can identify the parts of the model that are corroborated by the data. Local tests can also be conducted before a model is fitted at all, and they can be used even for models that are globally underidentified. We discuss appropriate statistical local tests, and provide applied examples. We also present novel software in R that automates this type of local fit evaluation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  19. Explicit/multi-parametric model predictive control (MPC) of linear discrete-time systems by dynamic and multi-parametric programming

    KAUST Repository

    Kouramas, K.I.

    2011-08-01

    This work presents a new algorithm for solving the explicit/multi- parametric model predictive control (or mp-MPC) problem for linear, time-invariant discrete-time systems, based on dynamic programming and multi-parametric programming techniques. The algorithm features two key steps: (i) a dynamic programming step, in which the mp-MPC problem is decomposed into a set of smaller subproblems in which only the current control, state variables, and constraints are considered, and (ii) a multi-parametric programming step, in which each subproblem is solved as a convex multi-parametric programming problem, to derive the control variables as an explicit function of the states. The key feature of the proposed method is that it overcomes potential limitations of previous methods for solving multi-parametric programming problems with dynamic programming, such as the need for global optimization for each subproblem of the dynamic programming step. © 2011 Elsevier Ltd. All rights reserved.

  20. Two-parametric model of metals hardening during cold working

    International Nuclear Information System (INIS)

    Khajkin, B.E.

    1985-01-01

    Mathematical models of the cold-working resistance σ of metals as a function of the degree of deformation have been analyzed. The advantage of a two-parametric formula combining simplicity with satisfactory accuracy in approximating experimental data is noted. The formula is convenient for determining the value of σ averaged over the deformation region as a geometric mean value.

  1. Using multinomial and imprecise probability for non-parametric modelling of rainfall in Manizales (Colombia

    Directory of Open Access Journals (Sweden)

    Ibsen Chivatá Cárdenas

    2008-05-01

    This article presents a rainfall model constructed by applying non-parametric modelling and imprecise probabilities; these tools were used because there was not enough homogeneous information in the study area. The area's hydrological information regarding rainfall was scarce and existing hydrological time series were not uniform. A distributed extended rainfall model was constructed from so-called probability boxes (p-boxes), multinomial probability distributions and confidence intervals (a friendly algorithm was constructed for non-parametric modelling by combining the last two tools). This model confirmed the high level of uncertainty involved in local rainfall modelling. Uncertainty encompassed the whole range (domain) of probability values, thereby showing the severe limitations on information and leading to the conclusion that a detailed estimation of probability would lead to significant error. Nevertheless, relevant information was extracted: it was estimated that the maximum daily rainfall threshold (70 mm) would be surpassed at least once every three years, along with the magnitude of uncertainty affecting hydrological parameter estimation. This paper's conclusions may be of interest to non-parametric modellers and decision-makers, as such modelling and imprecise probability represent an alternative for hydrological variable assessment and maybe an obligatory procedure in the future. Its potential lies in treating scarce information and represents a robust modelling strategy for non-seasonal stochastic modelling conditions.

  2. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  3. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  4. Curve fitting methods for solar radiation data modeling

    International Nuclear Information System (INIS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-01-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
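
    One of the fitted forms named above, a two-term Gaussian, can be fitted and scored with RMSE and R2 in a few lines; the synthetic daily radiation curve and starting values below stand in for the UTP measurements, which are not reproduced here.

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2(t, a1, b1, c1, a2, b2, c2):
            """Sum of two Gaussian terms, one of the fitted forms considered in the paper."""
            return a1 * np.exp(-((t - b1) / c1) ** 2) + a2 * np.exp(-((t - b2) / c2) ** 2)

        # Synthetic daily global-radiation curve (W/m^2 vs hour of day).
        rng = np.random.default_rng(10)
        hour = np.linspace(6, 19, 60)
        g_meas = 850 * np.exp(-((hour - 13) / 3.2) ** 2) + rng.normal(0, 30, hour.size)

        p0 = [800, 12, 3, 100, 15, 2]                    # rough starting values
        popt, _ = curve_fit(gauss2, hour, g_meas, p0=p0, maxfev=20000)
        resid = g_meas - gauss2(hour, *popt)
        rmse = np.sqrt(np.mean(resid**2))
        r2 = 1 - np.sum(resid**2) / np.sum((g_meas - g_meas.mean()) ** 2)
        print(f"RMSE = {rmse:.1f} W/m^2, R^2 = {r2:.3f}")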

  5. Suitability of parametric models to describe the hydraulic properties of an unsaturated coarse sand and gravel

    Science.gov (United States)

    Mace, Andy; Rudolph, David L.; Kachanoski , R. Gary

    1998-01-01

    The performance of parametric models used to describe soil water retention (SWR) properties and predict unsaturated hydraulic conductivity (K) as a function of volumetric water content (θ) is examined using SWR and K(θ) data for coarse sand and gravel sediments. Six 70 cm long, 10 cm diameter cores of glacial outwash were instrumented at eight depths with porous cup tensiometers and time domain reflectometry probes to measure soil water pressure head (h) and θ, respectively, for seven unsaturated and one saturated steady-state flow conditions. Forty-two θ(h) and K(θ) relationships were measured from the infiltration tests on the cores. Of the four SWR models compared in the analysis, the van Genuchten (1980) equation with parameters m and n restricted according to the Mualem (m = 1 - 1/n) criterion is best suited to describe the θ(h) relationships. The accuracy of two models that predict K(θ) using parameter values derived from the SWR models was also evaluated. The model developed by van Genuchten (1980) based on the theoretical expression of Mualem (1976) predicted K(θ) more accurately than the van Genuchten (1980) model based on the theory of Burdine (1953). A sensitivity analysis shows that more accurate predictions of K(θ) are achieved using SWR model parameters derived with residual water content (θr) specified according to independent measurements of θ at values of h where ∂θ/∂h ∼ 0 rather than model-fit θr values. The accuracy of the model K(θ) function improves markedly when at least one value of unsaturated K is used to scale the K(θ) function predicted using the saturated K. The results of this investigation indicate that the hydraulic properties of coarse-grained sediments can be accurately described using the parametric models. In addition, data collection efforts should focus on measuring at least one value of unsaturated hydraulic conductivity and as complete a set of SWR data as possible, particularly in the dry range.
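
    The van Genuchten retention curve with the Mualem restriction, and the Mualem-based K(θ) prediction it feeds, can be sketched as a standard nonlinear least-squares problem; the retention data, parameter bounds and starting values below are synthetic stand-ins for the outwash cores, not the study's results.

        import numpy as np
        from scipy.optimize import curve_fit

        def vg_theta(h, theta_r, theta_s, alpha, n):
            """van Genuchten retention curve with the Mualem restriction m = 1 - 1/n
            (h is suction head, taken positive)."""
            m = 1.0 - 1.0 / n
            return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

        def mualem_k(theta, theta_r, theta_s, n, k_s):
            """Mualem-van Genuchten unsaturated conductivity predicted from SWR parameters."""
            m = 1.0 - 1.0 / n
            se = np.clip((theta - theta_r) / (theta_s - theta_r), 1e-9, 1.0)
            return k_s * np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

        # Synthetic retention data for a coarse material (suction in cm, theta in cm3/cm3).
        rng = np.random.default_rng(11)
        h = np.logspace(0, 3, 30)
        theta_obs = vg_theta(h, 0.03, 0.35, 0.08, 2.5) + rng.normal(0, 0.005, h.size)

        popt, _ = curve_fit(vg_theta, h, theta_obs, p0=[0.05, 0.3, 0.05, 2.0],
                            bounds=([0, 0.2, 1e-3, 1.1], [0.15, 0.5, 1.0, 6.0]))
        theta_r, theta_s, alpha, n = popt
        print("theta_r, theta_s, alpha, n:", np.round(popt, 3))
        print("relative K at theta = 0.2:", mualem_k(0.2, theta_r, theta_s, n, k_s=1.0))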

  6. Parametric laws to model urban pollutant dispersion with a street network approach

    Science.gov (United States)

    Soulhac, L.; Salizzoni, P.; Mejean, P.; Perkins, R. J.

    2013-03-01

    This study discusses the reliability of the street network approach for pollutant dispersion modelling in urban areas. The approach is essentially based on a box model, with parametric relations that explicitly model the main phenomena contributing to street canyon ventilation: the mass exchanges between the street and the atmosphere, the pollutant advection along the street axes and the pollutant transfer at street intersections. In the first part of the paper the focus is on the development of a model for the bulk street/atmosphere transfer, which represents the main ventilation mechanism for wind directions that are almost perpendicular to the axis of the street. We then discuss the role of advective transfer along the street axis in its ventilation, depending on the length of the street and the direction of the external wind. Finally, we evaluate the performance of a box model integrating parametric exchange laws for these transfer phenomena. To that purpose we compare the predictions of the model to wind tunnel experiments of pollutant dispersion within a street canyon placed in an idealised urban district.

  7. Automatic fitting of spiking neuron models to electrophysiological recordings

    Directory of Open Access Journals (Sweden)

    Cyrille Rossant

    2010-03-01

    Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming, both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present, the library uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.
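
    The GPU-accelerated Brian fitting library itself is not reproduced here. As a minimal sketch of the underlying idea, the code below simulates a leaky integrate-and-fire model driven by an injected current and scores candidate parameters by a simple spike-coincidence measure; the model, target data and grid search are illustrative assumptions.

```python
import numpy as np

def lif_spike_times(i_inj, dt, tau, r, v_th, v_reset=0.0):
    """Leaky integrate-and-fire response to an injected current trace."""
    v, spikes = 0.0, []
    for k, i in enumerate(i_inj):
        v += dt / tau * (-v + r * i)
        if v >= v_th:
            spikes.append(k * dt)
            v = v_reset
    return np.array(spikes)

def coincidence_score(pred, target, window=2e-3):
    """Fraction of target spikes matched by a predicted spike within +/- window seconds."""
    if len(target) == 0:
        return 0.0
    hits = sum(np.any(np.abs(pred - t) <= window) for t in target)
    return hits / len(target)

# Illustrative fit: grid-search two parameters against "recorded" spikes that are
# generated here from a known model (a stand-in for electrophysiological data).
dt = 1e-4
t = np.arange(0.0, 1.0, dt)
i_inj = 1.5 + 0.5 * np.sin(2 * np.pi * 3 * t)
target = lif_spike_times(i_inj, dt, tau=20e-3, r=1.0, v_th=1.0)

best = max(((tau, v_th, coincidence_score(lif_spike_times(i_inj, dt, tau, 1.0, v_th), target))
            for tau in np.linspace(5e-3, 40e-3, 8)
            for v_th in np.linspace(0.6, 1.4, 9)),
           key=lambda x: x[2])
print("best (tau, v_th, score):", best)
```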

  8. An Evaluation of Parametric and Nonparametric Models of Fish Population Response.

    Energy Technology Data Exchange (ETDEWEB)

    Haas, Timothy C.; Peterson, James T.; Lee, Danny C.

    1999-11-01

    Predicting the distribution or status of animal populations at large scales often requires the use of broad-scale information describing landforms, climate, vegetation, etc. These data, however, often consist of mixtures of continuous and categorical covariates and nonmultiplicative interactions among covariates, complicating statistical analyses. Using data from the interior Columbia River Basin, USA, we compared four methods for predicting the distribution of seven salmonid taxa using landscape information. Subwatersheds (mean size, 7800 ha) were characterized using a set of 12 covariates describing physiography, vegetation, and current land use. The techniques included generalized logit modeling, classification trees, a nearest-neighbor technique, and a modular neural network. We evaluated model performance using out-of-sample prediction accuracy via leave-one-out cross-validation, and we introduce a computer-intensive Monte Carlo hypothesis-testing approach for examining the statistical significance of landscape covariates with the non-parametric methods. We found the modular neural network and the nearest-neighbor technique to be the most accurate, but they were difficult to summarize in ways that provided ecological insight. The modular neural network also required the most extensive computer resources for model fitting and hypothesis testing. The generalized logit models were readily interpretable but were the least accurate, possibly due to nonlinear relationships and nonmultiplicative interactions among covariates. Substantial overlap among the statistically significant (P<0.05) covariates for each method suggested that each is capable of detecting similar relationships between responses and covariates. Consequently, we believe that employing one or more of these methods may provide greater biological insight without sacrificing prediction accuracy.
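
    A minimal sketch of this kind of comparison, using scikit-learn classifiers and leave-one-out cross-validation on synthetic stand-in data (the salmonid data set and the exact model configurations are not reproduced):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Stand-in for the subwatershed covariates (12 landscape variables, presence/absence response).
X, y = make_classification(n_samples=120, n_features=12, n_informative=6, random_state=0)

models = {
    "logit": LogisticRegression(max_iter=2000),
    "classification tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "nearest neighbour": KNeighborsClassifier(n_neighbors=5),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
}

for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()  # out-of-sample accuracy
    print(f"{name:20s} leave-one-out accuracy: {acc:.3f}")
```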

  9. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    Science.gov (United States)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
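
    A rough sketch of the Model Robust Regression idea as described: fit a predetermined parametric (here polynomial) model, fit a nonparametric (locally weighted, "locally parametric") model to its residuals, and add a portion λ of that residual fit back. The mixing proportion is fixed here rather than chosen data-adaptively as in Mays, Birch, and Starnes, and the data are synthetic.

```python
import numpy as np

def local_linear(x_eval, x, y, bandwidth):
    """Locally weighted linear regression (the nonparametric / locally parametric part)."""
    y_hat = np.empty(len(x_eval))
    for i, x0 in enumerate(x_eval):
        sw = np.sqrt(np.exp(-0.5 * ((x - x0) / bandwidth) ** 2))  # sqrt of kernel weights for WLS
        A = np.vstack([np.ones_like(x), x - x0]).T
        beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        y_hat[i] = beta[0]                                        # local intercept = fitted value
    return y_hat

def model_robust_fit(x, y, degree=2, lam=0.5, bandwidth=0.08):
    """Parametric fit augmented by a portion lam of a nonparametric fit to its residuals."""
    parametric = np.polyval(np.polyfit(x, y, degree), x)          # predetermined parametric model
    residual_fit = local_linear(x, x, y - parametric, bandwidth)  # nonparametric residual fit
    return parametric + lam * residual_fit

# Illustrative calibration-like data: a polynomial trend plus a small local distortion.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 80)
y = 1.0 + 2.0 * x + 0.5 * x**2 + 0.05 * np.sin(12.0 * x) + 0.01 * rng.normal(size=x.size)
y_hat = model_robust_fit(x, y)
print("RMSE of augmented fit:", np.sqrt(np.mean((y - y_hat) ** 2)))
```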

  10. Comparison of Parametrization Techniques for an Electrical Circuit Model of Lithium-Sulfur Batteries

    DEFF Research Database (Denmark)

    Knap, Vaclav; Stroe, Daniel Loan; Teodorescu, Remus

    2015-01-01

    on the comparison of different parametrization methods of electrical circuit models (ECMs) for Li-S batteries. These methods are used to parametrize an ECM based on laboratory measurements performed on a Li-S pouch cell. Simulation results of ECMs are presented and compared against measurement values...

  11. Theory of fluctuations and parametric noise in a point nuclear reactor model

    International Nuclear Information System (INIS)

    Rodriguez, M.A.; San Miguel, M.; Sancho, J.M.

    1984-01-01

    We present a joint description of internal fluctuations and parametric noise in a point nuclear reactor model in which delayed neutrons and a detector are considered. We obtain kinetic equations for the first moments and define effective kinetic parameters which take into account the effect of parametric Gaussian white noise. We comment on the validity of Langevin approximations for this problem. We propose a general method to deal with weak but otherwise arbitrary non-white parametric noise. Exact kinetic equations are derived for Gaussian non-white noise. (author)

  12. A simple parametric model observer for quality assurance in computer tomography

    Science.gov (United States)

    Anton, M.; Khanin, A.; Kretz, T.; Reginatto, M.; Elster, C.

    2018-04-01

    Model observers are mathematical classifiers that are used for the quality assessment of imaging systems such as computer tomography. The quality of the imaging system is quantified by means of the performance of a selected model observer. For binary classification tasks, the performance of the model observer is defined by the area under its ROC curve (AUC). Typically, the AUC is estimated by applying the model observer to a large set of training and test data. However, the recording of these large data sets is not always practical for routine quality assurance. In this paper we propose as an alternative a parametric model observer that is based on a simple phantom, and we provide a Bayesian estimation of its AUC. It is shown that a limited number of repeatedly recorded images (10–15) is already sufficient to obtain results suitable for the quality assessment of an imaging system. A MATLAB® function is provided for the calculation of the results. The performance of the proposed model observer is compared to that of the established channelized Hotelling observer and the nonprewhitening matched filter for simulated images as well as for images obtained from a low-contrast phantom on an x-ray tomography scanner. The results suggest that the proposed parametric model observer, along with its Bayesian treatment, can provide an efficient, practical alternative for the quality assessment of CT imaging systems.

  13. Process simulation and parametric modeling for strategic project management

    CERN Document Server

    Morales, Peter J

    2013-01-01

    Process Simulation and Parametric Modeling for Strategic Project Management will offer CIOs, CTOs, software development managers and IT graduate students an introduction to a set of technologies that will help them understand how to better plan software development projects, manage risk and have better insight into the complexities of the software development process. A novel methodology will be introduced that allows a software development manager to better plan and assess risks in the early planning of a project. By providing a better model for early software development estimation and softw…

  14. Modeling and Validation across Scales: Parametrizing the effect of the forested landscape

    DEFF Research Database (Denmark)

    Dellwik, Ebba; Badger, Merete; Angelou, Nikolas

    be transferred into a parametrization of forests in wind models. The presentation covers three scales: the single tree, the forest edges and clearings, and the large-scale forested landscape in which the forest effects are parameterized with a roughness length. Flow modeling results and validation against...

  15. Weighted fit of parametric functions to distributions: The new interface of HBOOK with MINUIT

    International Nuclear Information System (INIS)

    Lessner, E.S.

    1989-08-01

    The fitting routines of the HBOOK package allow a weighted fit of parametric functions to the contents of a one-, two- or N-dimensional distribution, and analysis of the function in the neighborhood of its minimum, through an interface with the MINUIT package. These routines have been rewritten so as to interface with the new version of MINUIT and to allow for smooth transitions to future versions of both packages. We discuss the interface and its capabilities: it is more stable than the previous version and provides a more accurate error analysis. The fitting algorithm is based on the Fletcher method, known for its reliability. Exponential, Gaussian and polynomial fitting are provided, as well as arbitrary user-defined fitting, to one-, two- and N-dimensional distributions. For the latter, the user is required to provide a smooth parametric function and is given the ability to guide the algorithm in finding the desired minimum. Examples are given. 6 refs., 1 fig
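
    The HBOOK/MINUIT interface itself is not shown here. As a generic illustration of a weighted fit of a parametric function to a binned distribution, the sketch below performs a chi-square Gaussian fit to histogram contents with scipy, using Poisson bin errors as weights; the data and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amplitude, mean, sigma):
    return amplitude * np.exp(-0.5 * ((x - mean) / sigma) ** 2)

# Illustrative one-dimensional "histogram": bin the samples, use sqrt(N) bin errors as weights.
rng = np.random.default_rng(2)
samples = rng.normal(loc=5.0, scale=1.2, size=5000)
counts, edges = np.histogram(samples, bins=40, range=(0.0, 10.0))
centres = 0.5 * (edges[:-1] + edges[1:])
errors = np.sqrt(np.clip(counts, 1, None))            # Poisson errors, floored at 1

popt, pcov = curve_fit(gauss, centres, counts, p0=[300.0, 4.0, 1.0],
                       sigma=errors, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))                          # parameter uncertainties from the fit
print("fitted (amplitude, mean, sigma):", popt)
print("uncertainties:", perr)
```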

  16. A Neural Parametric Singing Synthesizer Modeling Timbre and Expression from Natural Songs

    Directory of Open Access Journals (Sweden)

    Merlijn Blaauw

    2017-12-01

    We recently presented a new model for singing synthesis based on a modified version of the WaveNet architecture. Instead of modeling raw waveform, we model features produced by a parametric vocoder that separates the influence of pitch and timbre. This allows conveniently modifying pitch to match any target melody, facilitates training on more modest dataset sizes, and significantly reduces training and generation times. Nonetheless, compared to modeling waveform directly, ways of effectively handling higher-dimensional outputs, multiple feature streams and regularization become more important with our approach. In this work, we extend our proposed system to include additional components for predicting F0 and phonetic timings from a musical score with lyrics. These expression-related features are learned together with timbrical features from a single set of natural songs. We compare our method to existing statistical parametric, concatenative, and neural network-based approaches using quantitative metrics as well as listening tests.

  17. Parametric analysis of the statistical model of the stick-slip process

    Science.gov (United States)

    Lima, Roberta; Sampaio, Rubens

    2017-06-01

    In this paper, a parametric analysis of the statistical model of the response of a dry-friction oscillator is performed. The oscillator is a spring-mass system which moves over a base with a rough surface. Due to this roughness, the mass is subject to a dry-frictional force modeled as Coulomb friction. The system is stochastically excited by an imposed bang-bang base motion. The base velocity is modeled by a Poisson process for which a probabilistic model is fully specified. The excitation induces stochastic stick-slip oscillations in the system. The system response is composed of a random sequence alternating between stick and slip modes. With realizations of the system, a statistical model is constructed for this sequence. In this statistical model, the variables of interest of the sequence are modeled as random variables, such as the number of time intervals in which stick or slip occur, the instants at which they begin, and their duration. Samples of the system response are computed by integration of the dynamic equation of the system using independent samples of the base motion. Statistics and histograms of the random variables which characterize the stick-slip process are estimated from the generated samples. The objective of the paper is to analyze how these estimated statistics and histograms vary with the system parameters, i.e., to make a parametric analysis of the statistical model of the stick-slip process.

  18. ASPEN: A fully kinetic, reduced-description particle-in-cell model for simulating parametric instabilities

    International Nuclear Information System (INIS)

    Vu, H.X.; Bezzerides, B.; DuBois, D.F.

    1999-01-01

    A fully kinetic, reduced-description particle-in-cell (RPIC) model is presented in which deviations from quasineutrality, electron and ion kinetic effects, and nonlinear interactions between low-frequency and high-frequency parametric instabilities are modeled correctly. The model is based on a reduced description where the electromagnetic field is represented by three separate temporal envelopes in order to model parametric instabilities with low-frequency and high-frequency daughter waves. Because temporal envelope approximations are invoked, the simulation can be performed on the electron time scale instead of the time scale of the light waves. The electrons and ions are represented by discrete finite-size particles, permitting electron and ion kinetic effects to be modeled properly. The Poisson equation is utilized to ensure that space-charge effects are included. The RPIC model is fully three dimensional and has been implemented in two dimensions on the Accelerated Strategic Computing Initiative (ASCI) parallel computer at Los Alamos National Laboratory, and the resulting simulation code has been named ASPEN. The authors believe this code is the first particle-in-cell code capable of simulating the interaction between low-frequency and high-frequency parametric instabilities in multiple dimensions. Test simulations of stimulated Raman scattering, stimulated Brillouin scattering, and Langmuir decay instability are presented

  19. Thermodynamic model and parametric analysis of a tubular SOFC module

    Science.gov (United States)

    Campanari, Stefano

    Solid oxide fuel cells (SOFCs) have been considered in recent years as one of the most promising technologies for very high-efficiency electric energy generation from natural gas, both in simple fuel cell plants and in integrated gas turbine-fuel cell systems. Among SOFC technologies, tubular SOFC stacks with internal reforming have emerged as one of the most mature, with serious potential for future commercialization. In this paper, a thermodynamic model of a tubular SOFC stack with natural gas feeding, internal reforming of hydrocarbons and internal air preheating is proposed. In the first section of the paper, the model is discussed in detail, analyzing its calculating equations and tracing its logical steps; the model is then calibrated on the available data for a recently demonstrated tubular SOFC prototype plant. In the second section of the paper, a detailed parametric analysis of the stack working conditions is carried out as a function of the main operating parameters. The discussion of the results of the thermodynamic and parametric analysis yields interesting considerations about partial-load SOFC operation and load regulation, and about system design and integration with gas turbine cycles.

  20. A comparative study of non-parametric models for identification of ...

    African Journals Online (AJOL)

    However, the frequency response method using random binary signals was good for unpredicted white noise characteristics and considered the best method for non-parametric system identification. The autoregressive external input (ARX) model was very useful for system identification, but on application, few input ...

  1. PULSE: Integrated Parametric Modeling for a Shading System : From Daylight Optimization to Additive Manufacturing

    NARCIS (Netherlands)

    Teeling, M.V.M.T.; Turrin, M.; de Ruiter, P.; Turrin, Michela; Peters, Brady; O'Brien, William; Stouffs, Rudi; Dogan, Timur

    2017-01-01

    This paper presents a parametric approach to an integrated and performance-oriented design, from the conceptual design phase towards materialization. The novelty lies in the use of parametric models as a way of integrating multidisciplinary design constraints, from daylight optimization to the

  2. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  3. Assessment and Reduction of Model Parametric Uncertainties: A Case Study with A Distributed Hydrological Model

    Science.gov (United States)

    Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.

    2017-12-01

    The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method to screen out insensitive parameters, followed by MARS-based Sobol' sensitivity indices to quantify each parameter's contribution to the response variance through its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by an adaptive surrogate-based multi-objective optimization procedure, using a MARS model to approximate the parameter-response relationship and the SCE-UA algorithm to search for the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions for reproducing the observed streamflow in all watersheds. The final optimal solutions showed significant improvement over the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. The validation exercise indicated a large improvement in model performance with about 40
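
    The full LH-OAT/MARS/Sobol'/SCE-UA chain is beyond a short example, but the screening step can be illustrated with a simple one-at-a-time (OAT) perturbation scheme around random base points. The toy response function below stands in for a CREST streamflow run and is purely an assumption; parameters with near-zero mean effect would be fixed rather than calibrated.

```python
import numpy as np

def toy_response(p):
    """Stand-in for a hydrological model run that returns a single performance score."""
    return p[0] ** 2 + 0.5 * p[1] - 0.01 * p[2] + 0.0 * p[3]   # p[3] is deliberately insensitive

def oat_screening(response, bounds, n_base=50, step=0.1, seed=0):
    """One-at-a-time screening: mean |change| in the response per perturbed parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    effects = np.zeros(bounds.shape[0])
    for _ in range(n_base):
        base = lo + rng.random(bounds.shape[0]) * (hi - lo)     # Latin-hypercube-like base point
        f0 = response(base)
        for j in range(bounds.shape[0]):
            perturbed = base.copy()
            perturbed[j] += step * (hi[j] - lo[j])
            effects[j] += abs(response(perturbed) - f0)
    return effects / n_base

bounds = np.array([[0.0, 1.0]] * 4)
print("mean OAT effect per parameter:", oat_screening(toy_response, bounds))
```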

  4. ITEM LEVEL DIAGNOSTICS AND MODEL - DATA FIT IN ITEM ...

    African Journals Online (AJOL)

    Global Journal

    Item response theory (IRT) is a framework for modeling and analyzing item response ... data. Though, there is an argument that the evaluation of fit in IRT modeling has been ... National Council on Measurement in Education ... model data fit should be based on three types of ... prediction should be assessed through the.

  5. Evaluation of parametric and nonparametric models to predict water flow; Avaliacao entre modelos parametricos e nao parametricos para previsao de vazoes afluentes

    Energy Technology Data Exchange (ETDEWEB)

    Marques, T.C.; Cruz Junior, G.; Vinhal, C. [Universidade Federal de Goias (UFG), Goiania, GO (Brazil). Escola de Engenharia Eletrica e de Computacao], Emails: thyago@eeec.ufg.br, gcruz@eeec.ufg.br, vinhal@eeec.ufg.br

    2009-07-01

    The goal of this paper is to present a methodology for seasonal stream flow forecasting using a database of average monthly inflows of Brazilian hydroelectric plants located on the Grande, Tocantins, Paranaiba, Sao Francisco and Iguacu rivers. The model is based on the Adaptive Network Based Fuzzy Inference System (ANFIS), a non-parametric model. The performance of this model was compared with that of a periodic autoregressive model, a parametric model. The results show that the forecasting errors of the non-parametric model are significantly lower than those of the parametric model. (author)

  6. Goodness-of-Fit Assessment of Item Response Theory Models

    Science.gov (United States)

    Maydeu-Olivares, Alberto

    2013-01-01

    The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…

  7. Assessing scenario and parametric uncertainties in risk analysis: a model uncertainty audit

    International Nuclear Information System (INIS)

    Tarantola, S.; Saltelli, A.; Draper, D.

    1999-01-01

    In the present study, a model audit is performed on a computational model used for predicting maximum radiological doses to humans in the field of nuclear waste disposal. Global uncertainty and sensitivity analyses are employed to assess output uncertainty and to quantify the contribution of parametric and scenario uncertainties to the model output. These tools are of fundamental importance for risk analysis and decision-making purposes.

  8. Mixing parametrizations for ocean climate modelling

    Science.gov (United States)

    Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir

    2016-04-01

    An algorithm is presented for splitting the total evolutionary equations for the turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which are used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, the following schemes are implemented: an explicit-implicit numerical scheme, the analytical solution, and the asymptotic behavior of the analytical solution. Experiments were performed with different mixing parameterizations for modelling Arctic and Atlantic climate decadal variability with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. The proposed model with split equations for the turbulence characteristics is similar to contemporary differential turbulence models in its physical formulation, while its algorithm has high computational efficiency. Parameterizations using the split turbulence model make it possible to obtain a more adequate structure of temperature and salinity at decadal timescales, compared to the simpler Pacanowski-Philander (PP) turbulence parameterization. Parameterizations using the analytical solution or the numerical scheme at the generation-dissipation step of the turbulence model lead to a better representation of the ocean climate than the faster parameterization using the asymptotic behavior of the analytical solution, while the computational efficiency remains almost unchanged relative to the simple PP parameterization. Usage of the PP parametrization in the circulation model leads to a realistic simulation of density and circulation but with violation of the T,S-relationships; this error is largely avoided with the proposed parameterizations containing the split turbulence model.

  9. Parametric modeling of energy filtering by energy barriers in thermoelectric nanocomposites

    Energy Technology Data Exchange (ETDEWEB)

    Zianni, Xanthippi, E-mail: xzianni@teiste.gr, E-mail: xzianni@gmail.com [Department of Aircraft Technology, Technological Educational Institution of Sterea Ellada, 34400 Psachna (Greece); Department of Microelectronics, INN, NCSR “Demokritos,” 15310 Athens (Greece); Narducci, Dario [Department of Materials Science, University of Milano Bicocca, 20125 Milano (Italy)

    2015-01-21

    We present a parametric modeling of the thermoelectric transport coefficients based on a model previously used to interpret experimental measurements on the conductivity, σ, and Seebeck coefficient, S, in highly Boron-doped polycrystalline Si, where a very significant thermoelectric power factor (TPF) enhancement was observed. We have derived analytical formalism for the transport coefficients in the presence of an energy barrier assuming thermionic emission over the barrier for (i) non-degenerate and (ii) degenerate one-band semiconductor. Simple generic parametric equations are found that are in agreement with the exact Boltzmann transport formalism in a wide range of parameters. Moreover, we explore the effect of energy barriers in 1-d composite semiconductors in the presence of two phases: (a) the bulk-like phase and (b) the barrier phase. It is pointed out that significant TPF enhancement can be achieved in the composite structure of two phases with different thermal conductivities. The TPF enhancement is estimated as a function of temperature, the Fermi energy position, the type of scattering, and the barrier height. The derived modeling provides guidance for experiments and device design.

  10. Parametric overdispersed frailty models for current status data.

    Science.gov (United States)

    Abrams, Steven; Aerts, Marc; Molenberghs, Geert; Hens, Niel

    2017-12-01

    Frailty models have a prominent place in survival analysis to model univariate and multivariate time-to-event data, often complicated by the presence of different types of censoring. In recent years, frailty modeling gained popularity in infectious disease epidemiology to quantify unobserved heterogeneity using Type I interval-censored serological data or current status data. In a multivariate setting, frailty models prove useful to assess the association between infection times related to multiple distinct infections acquired by the same individual. In addition to dependence among individual infection times, overdispersion can arise when the observed variability in the data exceeds the one implied by the model. In this article, we discuss parametric overdispersed frailty models for time-to-event data under Type I interval-censoring, building upon the work by Molenberghs et al. (2010) and Hens et al. (2009). The proposed methodology is illustrated using bivariate serological data on hepatitis A and B from Flanders, Belgium anno 1993-1994. Furthermore, the relationship between individual heterogeneity and overdispersion at a stratum-specific level is studied through simulations. Although it is important to account for overdispersion, one should be cautious when modeling both individual heterogeneity and overdispersion based on current status data as model selection is hampered by the loss of information due to censoring. © 2017, The International Biometric Society.

  11. Parametric model of the scala tympani for haptic-rendered cochlear implantation.

    Science.gov (United States)

    Todd, Catherine; Naghdy, Fazel

    2005-01-01

    A parametric model of the human scala tympani has been designed for use in a haptic-rendered computer simulation of cochlear implant surgery. It will be the first surgical simulator of this kind. A geometric model of the Scala Tympani has been derived from measured data for this purpose. The model is compared with two existing descriptions of the cochlear spiral. A first approximation of the basilar membrane is also produced. The structures are imported into a force-rendering software application for system development.

  12. Fast and Sequence-Adaptive Whole-Brain Segmentation Using Parametric Bayesian Modeling

    DEFF Research Database (Denmark)

    Puonti, Oula; Iglesias, Juan Eugenio; Van Leemput, Koen

    2016-01-01

    the performance of a segmentation algorithm designed to meet these requirements, building upon generative parametric models previously used in tissue classification. The method is tested on four different datasets acquired with different scanners, field strengths and pulse sequences, demonstrating comparable...

  13. Application of semi-parametric modelling to time series forecasting: the case of electricity consumption

    International Nuclear Information System (INIS)

    Lefieux, V.

    2007-10-01

    Reseau de Transport d'Electricite (RTE), in charge of operating the French electricity transmission grid, needs accurate forecasts of power consumption in order to operate it correctly. The forecasts used every day result from a model combining a nonlinear parametric regression and a SARIMA model. In order to obtain an adaptive forecasting model, nonparametric forecasting methods have already been tested without real success. In particular, it is known that a nonparametric predictor behaves badly with a great number of explanatory variables, which is commonly called the curse of dimensionality. Recently, semi-parametric methods which improve on the pure nonparametric approach have been proposed to estimate a regression function. Based on the concept of 'dimension reduction', one of those methods (called MAVE: Moving Average -conditional- Variance Estimate) can be applied to time series. We study empirically its effectiveness for predicting the future values of an autoregressive time series. We then adapt this method, from a practical point of view, to forecast power consumption. We propose a partially linear semi-parametric model, based on the MAVE method, which allows the autoregressive aspect of the problem and the exogenous variables to be taken into account simultaneously. The proposed estimation procedure is practically efficient. (author)

  14. Comparison of computed tomography based parametric and patient-specific finite element models of the healthy and metastatic spine using a mesh-morphing algorithm.

    Science.gov (United States)

    O'Reilly, Meaghan Anne; Whyne, Cari Marisa

    2008-08-01

    A comparative analysis of parametric and patient-specific finite element (FE) modeling of spinal motion segments. To develop patient-specific FE models of spinal motion segments using mesh-morphing methods applied to a parametric FE model. To compare strain and displacement patterns in parametric and morphed models for both healthy and metastatically involved vertebrae. Parametric FE models may be limited in their ability to fully represent patient-specific geometries and material property distributions. Generation of multiple patient-specific FE models has been limited because of computational expense. Morphing methods have been successfully used to generate multiple specimen-specific FE models of caudal rat vertebrae. FE models of a healthy and a metastatic T6-T8 spinal motion segment were analyzed with and without patient-specific material properties. Parametric and morphed models were compared using a landmark-based morphing algorithm. Morphing of the parametric FE model and including patient-specific material properties both had a strong impact on magnitudes and patterns of vertebral strain and displacement. Small but important geometric differences can be represented through morphing of parametric FE models. The mesh-morphing algorithm developed provides a rapid method for generating patient-specific FE models of spinal motion segments.

  15. Modelling population dynamics model formulation, fitting and assessment using state-space methods

    CERN Document Server

    Newman, K B; Morgan, B J T; King, R; Borchers, D L; Cole, D J; Besbeas, P; Gimenez, O; Thomas, L

    2014-01-01

    This book gives a unifying framework for estimating the abundance of open populations: populations subject to births, deaths and movement, given imperfect measurements or samples of the populations.  The focus is primarily on populations of vertebrates for which dynamics are typically modelled within the framework of an annual cycle, and for which stochastic variability in the demographic processes is usually modest. Discrete-time models are developed in which animals can be assigned to discrete states such as age class, gender, maturity,  population (within a metapopulation), or species (for multi-species models). The book goes well beyond estimation of abundance, allowing inference on underlying population processes such as birth or recruitment, survival and movement. This requires the formulation and fitting of population dynamics models.  The resulting fitted models yield both estimates of abundance and estimates of parameters characterizing the underlying processes.  

  16. Does model fit decrease the uncertainty of the data in comparison with a general non-model least squares fit?

    International Nuclear Information System (INIS)

    Pronyaev, V.G.

    2003-01-01

    The information entropy is taken as a measure of knowledge about the object, and the reduced univariate variance as a common measure of uncertainty. Covariances in model versus non-model least squares fits are discussed.

  17. On selection of optimal stochastic model for accelerated life testing

    International Nuclear Information System (INIS)

    Volf, P.; Timková, J.

    2014-01-01

    This paper deals with the problem of proper lifetime model selection in the context of statistical reliability analysis. Namely, we consider regression models describing the dependence of failure intensities on a covariate, for instance, a stressor. Testing the model fit is standardly based on the so-called martingale residuals. Their analysis has already been studied by many authors. Nevertheless, the Bayes approach to the problem, in spite of its advantages, is just developing. We shall present the Bayes procedure of estimation in several semi-parametric regression models of failure intensity. Then, our main concern is the Bayes construction of residual processes and goodness-of-fit tests based on them. The method is illustrated with both artificial and real-data examples. - Highlights: • Statistical survival and reliability analysis and Bayes approach. • Bayes semi-parametric regression modeling in Cox's and AFT models. • Bayes version of martingale residuals and goodness-of-fit test

  18. Exploring location influences on firm survival rates using parametric duration models

    NARCIS (Netherlands)

    Manzato, G.G.; Arentze, T.A.; Timmermans, H.J.P.; Ettema, D.F.; Timmermans, H.J.P.; Vries, de B.

    2010-01-01

    Using parametric duration models applied to an office firm dataset, we carried out an exploratory study about the location influences on firm survival rates. Amongst the variables included, we found that accessibility to infrastructure supply, regional effects, demographic and economic aspects, and

  19. Exploring location influences on firm survival rates using parametric duration models

    NARCIS (Netherlands)

    Manzato, G.G.; Arentze, T.A.; Timmermans, H.J.P.; Ettema, D.F.

    2011-01-01

    Using parametric duration models applied to an office firm dataset, we carried out an exploratory study about the location influences on firm survival rates. Amongst the variables included, we found that accessibility to infrastructure supply, regional effects, demographic and economic aspects, and

  20. Assessing fit in Bayesian models for spatial processes

    KAUST Repository

    Jun, M.; Katzfuss, M.; Hu, J.; Johnson, V. E.

    2014-01-01

    © 2014 John Wiley & Sons, Ltd. Gaussian random fields are frequently used to model spatial and spatial-temporal data, particularly in geostatistical settings. As much of the attention of the statistics community has been focused on defining and estimating the mean and covariance functions of these processes, little effort has been devoted to developing goodness-of-fit tests to allow users to assess the models' adequacy. We describe a general goodness-of-fit test and related graphical diagnostics for assessing the fit of Bayesian Gaussian process models using pivotal discrepancy measures. Our method is applicable for both regularly and irregularly spaced observation locations on planar and spherical domains. The essential idea behind our method is to evaluate pivotal quantities defined for a realization of a Gaussian random field at parameter values drawn from the posterior distribution. Because the nominal distribution of the resulting pivotal discrepancy measures is known, it is possible to quantitatively assess model fit directly from the output of Markov chain Monte Carlo algorithms used to sample from the posterior distribution on the parameter space. We illustrate our method in a simulation study and in two applications.

  1. Assessing fit in Bayesian models for spatial processes

    KAUST Repository

    Jun, M.

    2014-09-16

    © 2014 John Wiley & Sons, Ltd. Gaussian random fields are frequently used to model spatial and spatial-temporal data, particularly in geostatistical settings. As much of the attention of the statistics community has been focused on defining and estimating the mean and covariance functions of these processes, little effort has been devoted to developing goodness-of-fit tests to allow users to assess the models' adequacy. We describe a general goodness-of-fit test and related graphical diagnostics for assessing the fit of Bayesian Gaussian process models using pivotal discrepancy measures. Our method is applicable for both regularly and irregularly spaced observation locations on planar and spherical domains. The essential idea behind our method is to evaluate pivotal quantities defined for a realization of a Gaussian random field at parameter values drawn from the posterior distribution. Because the nominal distribution of the resulting pivotal discrepancy measures is known, it is possible to quantitatively assess model fit directly from the output of Markov chain Monte Carlo algorithms used to sample from the posterior distribution on the parameter space. We illustrate our method in a simulation study and in two applications.

  2. X-1 to X-Wings: Developing a Parametric Cost Model

    Science.gov (United States)

    Sterk, Steve; McAtee, Aaron

    2015-01-01

    In today's cost-constrained environment, NASA needs an X-Plane database and parametric cost model that can quickly provide rough-order-of-magnitude predictions of cost from initial concept to first flight of potential X-Plane aircraft. This paper takes a look at the steps taken in developing such a model and reports the results. The challenges encountered in the collection of historical data and recommendations for future database management are discussed. A step-by-step discussion of the development of Cost Estimating Relationships (CERs) is then covered.
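
    Parametric cost models of this kind are usually built from cost estimating relationships (CERs) such as a power law fitted in log space. The sketch below fits cost ≈ a·weight^b to made-up aircraft data; the variables and numbers are placeholders, not the NASA X-Plane database or its actual CERs.

```python
import numpy as np

# Hypothetical (weight in lb, cost in $M) pairs -- illustrative only, not the X-Plane database.
weight = np.array([3000.0, 5200.0, 8000.0, 12000.0, 20000.0])
cost = np.array([18.0, 27.0, 41.0, 55.0, 88.0])

# Fit the CER cost = a * weight**b by ordinary least squares in log-log space.
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)
predict = lambda w: a * w ** b

print(f"CER: cost ~= {a:.3f} * weight^{b:.3f}")
print("rough-order-of-magnitude estimate for a 15,000 lb concept ($M):", round(predict(15000.0), 1))
```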

  3. Lumped parametric model of the human ear for sound transmission.

    Science.gov (United States)

    Feng, Bin; Gan, Rong Z

    2004-09-01

    A lumped parametric model of the human auditory periphery consisting of six masses suspended with six springs and ten dashpots was proposed. This model will provide the quantitative basis for the construction of a physical model of the human middle ear. The lumped model parameters were first identified using published anatomical data and then determined through a parameter optimization process. The transfer function of the middle ear, obtained from human temporal bone experiments with laser Doppler interferometers, was used to create the target function during the optimization process. It was found that, among the 14 spring and dashpot parameters, five had pronounced effects on the dynamic behavior of the model. A detailed discussion of the sensitivity of those parameters is provided, with applications to sound transmission in the ear. We expect that the methods for characterizing the lumped model of the human ear and the model parameters will be useful for theoretical modeling of ear function and construction of a physical ear model.
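
    The six-mass model itself is not reproduced here. As a one-element analogue, the sketch below evaluates the frequency response of a single mass-spring-dashpot element and shows how changing one stiffness value shifts the response; all parameter values are illustrative, not the optimized middle-ear parameters.

```python
import numpy as np

def sdof_transfer(freq_hz, m, k, c):
    """Displacement-per-force magnitude of a single mass-spring-dashpot element."""
    w = 2.0 * np.pi * freq_hz
    return np.abs(1.0 / (k - m * w ** 2 + 1j * c * w))

freq = np.logspace(2, 4, 5)                      # 100 Hz to 10 kHz
for k in (1e3, 2e3):                             # illustrative stiffness values (N/m)
    mag = sdof_transfer(freq, m=1e-5, k=k, c=0.05)
    print(f"k={k:.0f} N/m ->", np.round(mag, 6))
```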

  4. An R package for fitting age, period and cohort models

    Directory of Open Access Journals (Sweden)

    Adriano Decarli

    2014-11-01

    In this paper we present the R implementation of a GLIM macro which fits the age-period-cohort model following Osmond and Gardner. In addition to the estimates of the corresponding model, owing to the programming capability of R as an object-oriented language, methods for printing, plotting and summarizing the results are provided. Furthermore, the researcher has full access to the output of the main function (apc), which returns all the models fitted within the function. It is thus possible to critically evaluate the goodness of fit of the resulting model.

  5. Parametric Model for Astrophysical Proton-Proton Interactions and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Karlsson, Niklas [KTH Royal Institute of Technology, Stockholm (Sweden)

    2007-01-01

    Observations of gamma-rays have been made from celestial sources such as active galaxies, gamma-ray bursts and supernova remnants, as well as the Galactic ridge. The study of gamma rays can provide information about production mechanisms and cosmic-ray acceleration. In the high-energy regime, one of the dominant mechanisms for gamma-ray production is the decay of neutral pions produced in interactions of ultra-relativistic cosmic-ray nuclei and interstellar matter. Presented here is a parametric model for calculations of inclusive cross sections and transverse momentum distributions for secondary particles (gamma rays, e±, νe, ν̄e, νμ and ν̄μ) produced in proton-proton interactions. This parametric model is derived from the proton-proton interaction model proposed by Kamae et al.; it includes the diffraction dissociation process, Feynman-scaling violation and the logarithmically rising inelastic proton-proton cross section. To improve fidelity to experimental data at lower energies, two baryon resonance excitation processes were added: one representing the Δ(1232) and the other multiple resonances with masses around 1600 MeV/c². The model predicts the power-law spectral index for all secondary particles to be about 0.05 lower in absolute value than that of the incident proton, and their inclusive cross sections to be larger than those predicted by previous models based on the Feynman-scaling hypothesis. The applications of the presented model in astrophysics are plentiful. It has been implemented in the Galprop code to calculate the contribution due to pion decays in the Galactic plane. The model has also been used to estimate the cosmic-ray flux in the Large Magellanic Cloud based on HI, CO and gamma-ray observations. The transverse momentum distributions enable calculations when the proton distribution is anisotropic. It is shown that the gamma-ray spectrum and flux due to a

  6. Study of the long-term values and prices of plutonium; a simplified parametrized model

    International Nuclear Information System (INIS)

    Gaussens, J.; Paillot, H.

    1965-01-01

    The authors define the notions of the use value and price of plutonium. They give a 'simplified parametrized model' simulating the equilibrium of supply and demand over time for plutonium, and the price deriving from the relative scarcity of this metal, taking into account the technical and economic operating parameters of the various reactors considered. This model is simple enough to allow direct computations and to establish clear relations between the various parameters. The use of linear programming methods, on the other hand, allows a wide extension of the model. This report includes three main parts: I - General description of the study (without detailed calculations); II - Mathematical development of the simplified parametrized model and application (the basic data and the results of the calculations are given); III - Appendices (giving the detailed computations of part II). (authors) [fr]

  7. Environmental Parametric Cost Model in Oil and Gas EPC Contracts

    Directory of Open Access Journals (Sweden)

    Madjid Abbaspour

    2018-01-01

    This study aims at identifying the parameters that govern environmental costs in oil and gas projects. An initial conceptual model was proposed. Next, the costs of environmental management work packages were estimated separately and applied in project control tools (WBS/CBS). Then, an environmental parametric cost model was designed to determine the environmental costs and the relevant weighting factors. The suggested model can be considered an innovative approach to designating environmental indicators in oil and gas projects. The validity of the variables was investigated using the Delphi method. The results indicated that the project environmental management weighting factor is 0.87% of the total project weighting factor.

  8. Efficient occupancy model-fitting for extensive citizen-science data

    Science.gov (United States)

    Morgan, Byron J. T.; Freeman, Stephen N.; Ridout, Martin S.; Brereton, Tom M.; Fox, Richard; Powney, Gary D.; Roy, David B.

    2017-01-01

    Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium data base, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species’ range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen

  9. Automated Model Fit Method for Diesel Engine Control Development

    NARCIS (Netherlands)

    Seykens, X.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.

    2014-01-01

    This paper presents an automated fit for a control-oriented, physics-based diesel engine combustion model. The method is based on the combination of a dedicated measurement procedure and a structured approach to fitting the required combustion model parameters. Only a data set is required that is

  10. Automated model fit method for diesel engine control development

    NARCIS (Netherlands)

    Seykens, X.L.J.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.J.H.

    2014-01-01

    This paper presents an automated fit for a control-oriented, physics-based diesel engine combustion model. The method is based on the combination of a dedicated measurement procedure and a structured approach to fitting the required combustion model parameters. Only a data set is required that is

  11. A methodology for the parametric modelling of the flow coefficients and flow rate in hydraulic valves

    International Nuclear Information System (INIS)

    Valdés, José R.; Rodríguez, José M.; Saumell, Javier; Pütz, Thomas

    2014-01-01

    Highlights: • We develop a methodology for the parametric modelling of flow in hydraulic valves. • We characterize the flow coefficients with a generic function with two parameters. • The parameters are derived from CFD simulations of the generic geometry. • We apply the methodology to two cases from the automotive brake industry. • We validate by comparing with CFD results while varying the original dimensions. - Abstract: The main objective of this work is to develop a methodology for the parametric modelling of the flow rate in hydraulic valve systems. This methodology is based on the derivation, from CFD simulations, of the flow coefficient of the critical restrictions as a function of the Reynolds number, using a generalized square root function with two parameters. The methodology is then demonstrated by applying it to two completely different hydraulic systems: a brake master cylinder and an ABS valve. This type of parametric valve model facilitates implementation in dynamic simulation models of complex hydraulic systems.
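
    The paper's generalized square-root law is not reproduced. The sketch below assumes a simple two-parameter saturating form for the discharge coefficient, Cd(Re) = Cd_max·sqrt(Re/(Re + a)), and inserts it into the standard orifice equation to obtain a parametric flow-rate model; the functional form and all numbers are assumptions, not CFD-derived values.

```python
import numpy as np

def discharge_coefficient(re, cd_max, a):
    """Assumed two-parameter law: Cd rises with Reynolds number and saturates at cd_max."""
    return cd_max * np.sqrt(re / (re + a))

def valve_flow_rate(dp, area, rho, re, cd_max=0.7, a=500.0):
    """Standard orifice equation with a Reynolds-dependent discharge coefficient."""
    cd = discharge_coefficient(re, cd_max, a)
    return cd * area * np.sqrt(2.0 * dp / rho)

# Illustrative brake-fluid-like conditions (placeholder values, not from the paper).
dp, area, rho = 5e5, 2e-6, 850.0                 # Pa, m^2, kg/m^3
for re in (50.0, 500.0, 5000.0):
    print(f"Re={re:6.0f}  Q={valve_flow_rate(dp, area, rho, re):.3e} m^3/s")
```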

  12. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    Science.gov (United States)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

    The goodness-of-fit indicator, i.e. the efficiency criterion, is very important for model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and a zero-mean generalized error distribution (GED) to fit the distribution of the transformed model errors. The BC-GED model can unify all recent distance-based goodness-of-fit indicators, and reveals that the widely used mean square error (MSE) and mean absolute error (MAE) imply the statistical assumptions that the model errors follow a Gaussian distribution and a zero-mean Laplace distribution, respectively. Empirical knowledge about goodness-of-fit indicators can also be easily interpreted with the BC-GED model; e.g. the sensitivity to high flow of indicators with a large power of model errors results from the low probability of large model errors in the distribution assumed by these indicators. In order to assess the effect of the BC-GED parameters (i.e. the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and simulation results among the six indicators demonstrates that the indicators can be clearly separated into two classes by the GED kurtosis β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated with the class β > 1 captures high flow very well but mimics baseflow poorly, whereas calibration with the class β ≤ 1 mimics baseflow very well, because the larger the value of β, the greater the emphasis put on
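
    A minimal sketch of the BC-GED ingredients as described: Box-Cox transform the observed and simulated series, then score the transformed errors with a zero-mean generalized error distribution whose scale is profiled out by maximum likelihood. Setting β = 2 behaves like a (transformed) mean-square criterion and β = 1 like a mean-absolute one. The exact objective used in the paper may differ, and the data below are synthetic.

```python
import numpy as np
from scipy.special import gammaln

def box_cox(y, lam, eps=1e-9):
    """Box-Cox transformation; lam -> 0 reduces to the logarithm."""
    y = np.asarray(y, dtype=float)
    return np.log(y + eps) if abs(lam) < 1e-8 else ((y + eps) ** lam - 1.0) / lam

def bc_ged_nll(obs, sim, lam, beta):
    """Negative log-likelihood of zero-mean GED errors after a Box-Cox transformation.
    The GED scale alpha is replaced by its maximum-likelihood estimate."""
    e = box_cox(obs, lam) - box_cox(sim, lam)
    n = e.size
    alpha = (beta / n * np.sum(np.abs(e) ** beta)) ** (1.0 / beta)   # ML scale estimate
    return (n * np.log(2.0 * alpha / beta) + n * gammaln(1.0 / beta)
            + np.sum((np.abs(e) / alpha) ** beta))

# beta = 2 behaves like a (Box-Cox) mean-square criterion, beta = 1 like a mean-absolute one.
rng = np.random.default_rng(3)
obs = np.abs(rng.normal(10.0, 3.0, 200)) + 0.1                 # synthetic "observed flows"
sim = obs * np.exp(0.1 * rng.normal(size=obs.size))            # synthetic "simulated flows"
for beta in (1.0, 2.0):
    print(f"beta={beta:.0f}: BC-GED NLL = {bc_ged_nll(obs, sim, lam=0.3, beta=beta):.2f}")
```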

  13. Correcting Model Fit Criteria for Small Sample Latent Growth Models with Incomplete Data

    Science.gov (United States)

    McNeish, Daniel; Harring, Jeffrey R.

    2017-01-01

    To date, small sample problems with latent growth models (LGMs) have not received the amount of attention in the literature as related mixed-effect models (MEMs). Although many models can be interchangeably framed as a LGM or a MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…

  14. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species.

    Science.gov (United States)

    Adams, Matthew P; Collier, Catherine J; Uthicke, Sven; Ow, Yan X; Langlois, Lucas; O'Brien, Katherine R

    2017-01-04

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.

  15. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    Science.gov (United States)

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O'Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
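
    None of the twelve published photosynthesis-temperature models evaluated in the study above is reproduced here. As an illustration of fitting a model whose parameters are directly Pmax and Topt, the sketch below assumes a Gaussian-shaped temperature response and fits it with scipy; the functional form and the data are assumptions, not the seagrass measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def photo_temp(temp, p_max, t_opt, width):
    """Illustrative temperature response parameterised directly by Pmax and Topt."""
    return p_max * np.exp(-0.5 * ((temp - t_opt) / width) ** 2)

# Made-up photosynthesis-temperature observations (not the seagrass data).
temp = np.array([15.0, 20.0, 25.0, 30.0, 33.0, 36.0, 40.0])
photo = np.array([2.1, 3.4, 4.6, 5.1, 4.9, 3.8, 1.9])

popt, pcov = curve_fit(photo_temp, temp, photo, p0=[5.0, 30.0, 8.0])
p_max, t_opt, width = popt
print(f"fitted Pmax = {p_max:.2f}, Topt = {t_opt:.1f} degC (+/- {np.sqrt(pcov[1, 1]):.1f})")
```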

  16. New Parametric Imaging Algorithm for Quantification of Binding Parameter in non-reversible compartment model: MLAIR

    International Nuclear Information System (INIS)

    Kim, Su Jin; Lee, Jae Sung; Kim, Yu Kyeong; Lee, Dong Soo

    2007-01-01

    Parametric imaging allows analysis of the entire brain or body image. Graphical approaches are commonly employed to generate parametric images through linear or multilinear regression. However, such linear regression has limited accuracy due to bias at high noise levels. Several methods have been proposed to reduce the bias of linear regression estimation, especially for reversible models. In this study, we focus on generating parametric images of the net accumulation rate (Ki), which is related to the binding parameter in brain receptor studies, for an irreversible compartment model using multiple linear analysis. The reliability of the newly developed multiple linear analysis method (MLAIR) was assessed through Monte Carlo simulation, and we applied it to a [11C]MeNTI PET for opioid receptor

  17. Fitting ARMA Time Series by Structural Equation Models.

    Science.gov (United States)

    van Buuren, Stef

    1997-01-01

    This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)

  18. Self-organising mixture autoregressive model for non-stationary time series modelling.

    Science.gov (United States)

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In such a way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.
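    The core idea of fitting local linear autoregressive models to clusters of time-series segments can be sketched as follows; the clustering step performed by the self-organising network/neural gas is assumed to have been done already, and all function and variable names here are hypothetical.

```python
import numpy as np

def fit_local_ar(segments, order=2):
    """Fit one linear AR(order) model per cluster of segments by least squares.
    `segments` maps a cluster label to a list of 1-D numpy arrays."""
    models = {}
    for label, series_list in segments.items():
        rows, targets = [], []
        for x in series_list:
            for t in range(order, len(x)):
                rows.append(x[t - order:t][::-1])  # lagged values, most recent first
                targets.append(x[t])
        coef, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
        models[label] = coef
    return models

def predict_next(model_coef, recent):
    """One-step-ahead forecast from the most recent observations (chronological order)."""
    order = len(model_coef)
    return float(np.dot(model_coef, np.asarray(recent)[::-1][:order]))
```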

  19. Are Physical Education Majors Models for Fitness?

    Science.gov (United States)

    Kamla, James; Snyder, Ben; Tanner, Lori; Wash, Pamela

    2012-01-01

    The National Association of Sport and Physical Education (NASPE) (2002) has taken a firm stance on the importance of adequate fitness levels of physical education teachers stating that they have the responsibility to model an active lifestyle and to promote fitness behaviors. Since the NASPE declaration, national initiatives like Let's Move…

  20. Flexible competing risks regression modeling and goodness-of-fit

    DEFF Research Database (Denmark)

    Scheike, Thomas; Zhang, Mei-Jie

    2008-01-01

    In this paper we consider different approaches for estimation and assessment of covariate effects for the cumulative incidence curve in the competing risks model. The classic approach is to model all cause-specific hazards and then estimate the cumulative incidence curve based on these cause...... models that is easy to fit and contains the Fine-Gray model as a special case. One advantage of this approach is that our regression modeling allows for non-proportional hazards. This leads to a new simple goodness-of-fit procedure for the proportional subdistribution hazards assumption that is very easy...... of the flexible regression models to analyze competing risks data when non-proportionality is present in the data....

  1. Dynamic modeling and explicit/multi-parametric MPC control of pressure swing adsorption systems

    KAUST Repository

    Khajuria, Harish

    2011-01-01

    Pressure swing adsorption (PSA) is a flexible, albeit complex gas separation system. Due to its inherent nonlinear nature and discontinuous operation, the design of a model based PSA controller, especially with varying operating conditions, is a challenging task. This work focuses on the design of an explicit/multi-parametric model predictive controller for a PSA system. Based on a system involving four adsorbent beds separating 70% H2, 30% CH4 mixture into high purity hydrogen, the key controller objective is to fast track H2 purity to a set point value of 99.99%. To perform this task, a rigorous and systematic framework is employed. First, a high fidelity detailed dynamic model is built to represent the system's real operation, and understand its dynamic behavior. The model is then used to derive appropriate linear models by applying suitable system identification techniques. For the reduced models, a model predictive control (MPC) step is formulated, where latest developments in multi-parametric programming and control are applied to derive a novel explicit MPC controller. To test the performance of the designed controller, closed loop simulations are performed where the dynamic model is used as the virtual plant. Comparison studies of the derived explicit MPC controller are also performed with conventional PID controllers. © 2010 Elsevier Ltd. All rights reserved.

  2. Improved stratospheric atmosphere forecasts in the general circulation model through a methane oxidation parametrization

    Science.gov (United States)

    Wang, S.; Jun, Z.

    2017-12-01

    Climatic characteristics of tropical stratospheric methane have been well researched using various satellite data, and numerical simulations have further been conducted using chemistry-climate models, while the impact of stratospheric methane oxidation on the distribution of water vapour has not received enough attention in general circulation models. Simulated values of water vapour in the tropical upper stratosphere, and throughout much of the extratropical stratosphere, were too low. This deficiency must be remedied in order to produce realistic stratospheric water vapour with a general circulation model that includes the whole stratosphere. A simple parametrization of the upper-stratospheric moisture source due to methane oxidation, and of a sink due to photolysis in the mesosphere, was therefore introduced. Numerical simulations and analysis of the influence of stratospheric methane on the prediction of tropical stratospheric moisture and temperature fields were carried out. This study presents the advantages of the methane oxidation parametrization in producing a realistic distribution of water vapour in the tropical stratosphere and analyzes the impact of the methane chemical process on the general circulation model using two storm cases: a heavy rain event in South China and a typhoon-induced tropical storm. The general circulation model with the methane oxidation parametrization succeeds in simulating stratospheric water vapour and temperature. The simulated rain-centre value of the contrast experiment is up to 10% higher than that of the control experiment. Introduction of the methane oxidation parametrization has modified the distribution of water vapour, thereby producing a broadly realistic distribution of temperature. Objective weather forecast verifications have been performed using one month of simulation results, which demonstrate a somewhat positive effect on model skill. There is a certain impact of methane oxidation

  3. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    Science.gov (United States)

    Han, Feng; Zheng, Yi

    2018-06-01

    Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead the modeling-based management decisions, if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
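    A minimal sketch of the kind of error model described above is given below: residuals are AR(1)-decorrelated, standardized with a linear heteroscedastic scale, and scored under a symmetric exponential-power (GED) density. The skewness parameter of the full SEP model and the coupling to SWAT/DREAM(ZS) are omitted; parameter names are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.special import gammaln

def innovation_nll(obs, sim, phi, sigma0, sigma1, beta):
    """Negative log-likelihood of a lag-1 autocorrelated, heteroscedastic
    error model.  Residuals are AR(1)-decorrelated, divided by a linear
    heteroscedastic scale, and scored under a symmetric exponential-power
    (GED) density; the skew parameter of the full SEP model is omitted."""
    e = np.asarray(obs, float) - np.asarray(sim, float)
    eta = e[1:] - phi * e[:-1]                    # remove lag-1 autocorrelation
    scale = sigma0 + sigma1 * np.abs(np.asarray(sim, float)[1:])  # heteroscedastic scale
    a = eta / scale
    # exponential-power log-density plus the Jacobian of the standardization
    c = np.log(beta) - np.log(2.0) - gammaln(1.0 / beta)
    return -np.sum(c - np.log(scale) - np.abs(a)**beta)
```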

  4. Towards the generation of a parametric foot model using principal component analysis: A pilot study.

    Science.gov (United States)

    Scarton, Alessandra; Sawacha, Zimi; Cobelli, Claudio; Li, Xinshan

    2016-06-01

    There have been many recent developments in patient-specific models with their potential to provide more information on the human pathophysiology and the increase in computational power. However they are not yet successfully applied in a clinical setting. One of the main challenges is the time required for mesh creation, which is difficult to automate. The development of parametric models by means of Principal Component Analysis (PCA) represents an appealing solution. In this study PCA has been applied to the feet of a small cohort of diabetic and healthy subjects, in order to evaluate the possibility of developing parametric foot models, and to use them to identify variations and similarities between the two populations. Both the skin and the first metatarsal bones have been examined. Despite the small sample of subjects considered in the analysis, the results demonstrated that the method adopted herein constitutes a first step towards the realization of a parametric foot model for biomechanical analysis. Furthermore the study showed that the methodology can successfully describe features in the foot, and evaluate differences in the shape of healthy and diabetic subjects. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
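    A minimal sketch of building a PCA-based parametric shape model from registered scans follows; correspondence between vertices is assumed to have been established beforehand, and the function names and data layout are illustrative assumptions.

```python
import numpy as np

def build_pca_shape_model(shapes, n_modes=5):
    """PCA of a set of registered shapes.
    `shapes` is (n_subjects, 3 * n_vertices): each row stacks the x, y, z
    coordinates of corresponding vertices."""
    mean_shape = shapes.mean(axis=0)
    X = shapes - mean_shape
    # SVD of the centred data; rows of Vt are the principal modes of variation
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    variance = s**2 / (shapes.shape[0] - 1)
    return mean_shape, Vt[:n_modes], variance[:n_modes]

def synthesize(mean_shape, modes, variance, weights):
    """Generate a new shape as mean + weighted modes (weights in standard deviations)."""
    return mean_shape + (np.asarray(weights) * np.sqrt(variance)) @ modes
```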

  5. Model for fitting longitudinal traits subject to threshold response applied to genetic evaluation for heat tolerance

    Directory of Open Access Journals (Sweden)

    Misztal Ignacy

    2009-01-01

    Full Text Available A semi-parametric non-linear longitudinal hierarchical model is presented. The model assumes that individual variation exists both in the degree of the linear change of performance (slope) beyond a particular threshold of the independent variable scale and in the magnitude of the threshold itself; these individual variations are attributed to genetic and environmental components. During implementation via a Bayesian MCMC approach, threshold levels were sampled using a Metropolis step because their fully conditional posterior distributions do not have a closed form. The model was tested by simulation following designs similar to previous studies on genetics of heat stress. Posterior means of parameters of interest, under all simulation scenarios, were close to their true values with the latter always being included in the uncertainty regions, indicating an absence of bias. The proposed models provide flexible tools for studying genotype-by-environment interaction as well as for fitting other longitudinal traits subject to abrupt changes in the performance at particular points on the independent variable scale.

  6. Bayesian spatial semi-parametric modeling of HIV variation in Kenya.

    Directory of Open Access Journals (Sweden)

    Oscar Ngesa

    Full Text Available Spatial statistics has seen rapid application in many fields, especially epidemiology and public health. Many studies, nonetheless, make limited use of the geographical location information and also usually assume that the covariates, which are related to the response variable, have linear effects. We develop a Bayesian semi-parametric regression model for HIV prevalence data. Model estimation and inference is based on a fully Bayesian approach via Markov Chain Monte Carlo (MCMC). The model is applied to HIV prevalence data among men in Kenya, derived from the Kenya AIDS indicator survey, with n = 3,662. Past studies have concluded that HIV infection has a nonlinear association with age. In this study a smooth function based on penalized regression splines is used to estimate this nonlinear effect. Other covariates were assumed to have a linear effect. Spatial references to the counties were modeled as both structured and unstructured spatial effects. We observe that circumcision reduces the risk of HIV infection. The results also indicate that men in the urban areas were more likely to be infected by HIV as compared to their rural counterparts. Men with higher education had the lowest risk of HIV infection. A nonlinear relationship between HIV infection and age was established. Risk of HIV infection increases with age up to the age of 40 and then declines with increasing age. Men who had an STI in the last 12 months were more likely to be infected with HIV. Also, men who had ever used a condom were found to have a higher likelihood of being infected by HIV. A significant spatial variation of HIV infection in Kenya was also established. The study shows the practicality and flexibility of the Bayesian semi-parametric regression model in analyzing epidemiological data.

  7. Are Fit Indices Biased in Favor of Bi-Factor Models in Cognitive Ability Research?: A Comparison of Fit in Correlated Factors, Higher-Order, and Bi-Factor Models via Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Grant B. Morgan

    2015-02-01

    Full Text Available Bi-factor confirmatory factor models have been influential in research on cognitive abilities because they often better fit the data than correlated factors and higher-order models. They also instantiate a perspective that differs from that offered by other models. Motivated by previous work that hypothesized an inherent statistical bias of fit indices favoring the bi-factor model, we compared the fit of correlated factors, higher-order, and bi-factor models via Monte Carlo methods. When data were sampled from a true bi-factor structure, each of the approximate fit indices was more likely than not to identify the bi-factor solution as the best fitting. When samples were selected from a true multiple correlated factors structure, approximate fit indices were more likely overall to identify the correlated factors solution as the best fitting. In contrast, when samples were generated from a true higher-order structure, approximate fit indices tended to identify the bi-factor solution as best fitting. There was extensive overlap of fit values across the models regardless of true structure. Although one model may fit a given dataset best relative to the other models, each of the models tended to fit the data well in absolute terms. Given this variability, models must also be judged on substantive and conceptual grounds.

  8. Adaptive parametric model order reduction technique for optimization of vibro-acoustic models: Application to hearing aid design

    Science.gov (United States)

    Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin

    2018-06-01

    Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
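    The projection step that pMOR relies on, together with a cheap residual-based error indicator, can be sketched as follows for a frequency-domain system (K + iωC − ω²M)u = f; the adaptive basis enrichment described in the paper is not reproduced here, and all names are illustrative.

```python
import numpy as np

def reduced_response(K, C, M, f, V, omegas):
    """Project a frequency-domain FE system (K + i*omega*C - omega^2*M) u = f
    onto a basis V (n x r) and solve the reduced system at each frequency.
    The norm of the full-space residual is returned as a cheap error indicator."""
    Kr, Cr, Mr = (V.conj().T @ A @ V for A in (K, C, M))
    fr = V.conj().T @ f
    responses, indicators = [], []
    for w in omegas:
        Ar = Kr + 1j * w * Cr - w**2 * Mr
        ur = np.linalg.solve(Ar, fr)
        u = V @ ur                               # lift back to the full space
        r = (K + 1j * w * C - w**2 * M) @ u - f  # full-space residual
        responses.append(u)
        indicators.append(np.linalg.norm(r) / np.linalg.norm(f))
    return np.array(responses), np.array(indicators)
```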

  9. Parametric model of ventilators simulated in OpenFOAM and Elmer

    Science.gov (United States)

    Čibera, Václav; Matas, Richard; Sedláček, Jan

    2016-03-01

    The main goal of the presented work was to develop a parametric model of a ventilator for CFD and structural analysis. The whole model was designed and scripted in freely available open-source programmes, in particular OpenFOAM and Elmer. The main script, which runs or generates the other scripts and controls the course of the simulation, was written in the bash scripting language in a Linux environment. Further, the scripts needed for mesh generation and for running a simulation were prepared using the m4 word pre-processor. The use of m4 allowed comfortable setup of the large number of scripts. Consequently, the mesh was generated for the fluid and solid parts of the ventilator within OpenFOAM. Although OpenFOAM also offers a few tools for structural analysis, the mesh of the solid parts was transferred into the Elmer mesh format with the aim of performing structural analysis in this software. This paper deals mainly with the part concerning fluid flow through the parametrized geometry with different initial conditions. As an example, two simulations were conducted for the same geometric parameters and mesh but for different angular velocities of ventilator rotation.

  10. Parametric model of ventilators simulated in OpenFOAM and Elmer

    Directory of Open Access Journals (Sweden)

    Čibera Václav

    2016-01-01

    Full Text Available The main goal of the presented work was to develop a parametric model of a ventilator for CFD and structural analysis. The whole model was designed and scripted in freely available open-source programmes, in particular OpenFOAM and Elmer. The main script, which runs or generates the other scripts and controls the course of the simulation, was written in the bash scripting language in a Linux environment. Further, the scripts needed for mesh generation and for running a simulation were prepared using the m4 word pre-processor. The use of m4 allowed comfortable setup of the large number of scripts. Consequently, the mesh was generated for the fluid and solid parts of the ventilator within OpenFOAM. Although OpenFOAM also offers a few tools for structural analysis, the mesh of the solid parts was transferred into the Elmer mesh format with the aim of performing structural analysis in this software. This paper deals mainly with the part concerning fluid flow through the parametrized geometry with different initial conditions. As an example, two simulations were conducted for the same geometric parameters and mesh but for different angular velocities of ventilator rotation.

  11. A Comparison of Item Fit Statistics for Mixed IRT Models

    Science.gov (United States)

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G[superscript 2], Orlando and Thissen's S-X[superscript 2] and S-G[superscript 2], and Stone's chi[superscript 2*] and G[superscript 2*]. To investigate the…

  12. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example

    Science.gov (United States)

    Helgesson, P.; Sjöstrand, H.

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
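    One common way to combine a Gaussian prior with non-linear generalized least squares, as discussed above, is to append whitened prior residuals to the whitened data residuals and hand the stacked vector to a Levenberg-Marquardt solver. The sketch below does this with SciPy; it illustrates the idea only, is not the authors' code, and the Gaussian-process treatment of model defects is not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_with_prior(model, x, y, cov_y, p0, p_prior, cov_prior):
    """Non-linear least squares with a Gaussian prior on the parameters,
    implemented by stacking whitened data residuals and whitened prior
    residuals (prior treated as pseudo-observations)."""
    Ly = np.linalg.cholesky(np.linalg.inv(cov_y))       # whitens data residuals
    Lp = np.linalg.cholesky(np.linalg.inv(cov_prior))   # whitens prior residuals

    def residuals(p):
        r_data = Ly.T @ (y - model(x, p))
        r_prior = Lp.T @ (p - p_prior)
        return np.concatenate([r_data, r_prior])

    return least_squares(residuals, p0, method="lm")    # Levenberg-Marquardt
```

For the peak-separation example in the abstract, `model(x, p)` would return the sum of the (assumed Gaussian) peak functions evaluated at the histogram bin centres.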

  13. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example.

    Science.gov (United States)

    Helgesson, P; Sjöstrand, H

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.

  14. Parametric study of the Incompletely Stirred Reactor modeling

    Energy Technology Data Exchange (ETDEWEB)

    Mobini, K. [Department of Mechanical Engineering, Shahid Rajaee University, Lavizan, Tehran (Iran); Bilger, R.W. [School of Aerospace, Mechanical and Mechatronic Engineering, University of Sydney, Sydney (Australia)

    2009-09-15

    The Incompletely Stirred Reactor (ISR) is a generalization of the widely-used Perfectly Stirred Reactor (PSR) model and allows for incomplete mixing within the reactor. Its formulation is based on the Conditional Moment Closure (CMC) method. This model is applicable to nonpremixed combustion with strong recirculation such as in a gas turbine combustor primary zone. The model uses the simplifying assumption that the conditionally-averaged reactive-scalar concentrations are independent of position in the reactor: this results in ordinary differential equations in mixture fraction space. The simplicity of the model permits the use of very complex chemical mechanisms. The effects of the detailed chemistry can be found while still including the effects of micromixing. A parametric study is performed here on an ISR for combustion of methane at overall stoichiometric conditions to investigate the sensitivity of the model to different parameters. The focus here is on emissions of nitric oxide and carbon monoxide. It is shown that the most important parameters in the ISR model are reactor residence time, the chemical mechanism and the core-averaged Probability Density Function (PDF). Using several different shapes for the core-averaged PDF, it is shown that use of a bimodal PDF with a low minimum at stoichiometric mixture fraction and a large variance leads to lower nitric oxide formation. The 'rich-plus-lean' mixing, or staged combustion, strategy is thus supported. (author)

  15. Beyond axial symmetry: An improved class of models for global data

    KAUST Repository

    Castruccio, Stefano

    2014-03-01

    An important class of models for data on a spherical domain, called axially symmetric, assumes stationarity across longitudes but not across latitudes. The main aim of this work is to introduce a new and more flexible class of models by relaxing the assumption of longitudinal stationarity in the context of regularly gridded climate model output. In this investigation, two other related topics are discussed: the lack of fit of an axially symmetric parametric model compared with a non-parametric model, and longitudinally reversible processes, an important subclass of axially symmetric models.

  16. Beyond axial symmetry: An improved class of models for global data

    KAUST Repository

    Castruccio, Stefano; Genton, Marc G.

    2014-01-01

    An important class of models for data on a spherical domain, called axially symmetric, assumes stationarity across longitudes but not across latitudes. The main aim of this work is to introduce a new and more flexible class of models by relaxing the assumption of longitudinal stationarity in the context of regularly gridded climate model output. In this investigation, two other related topics are discussed: the lack of fit of an axially symmetric parametric model compared with a non-parametric model, and longitudinally reversible processes, an important subclass of axially symmetric models.

  17. Evaluation of model-based versus non-parametric monaural noise-reduction approaches for hearing aids.

    Science.gov (United States)

    Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker

    2012-08-01

    Single channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, a speech-model-based and an auditory-model-based algorithm, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to the usage of the similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms perform better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. Data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.

  18. Parametric System Model for a Stirling Radioisotope Generator

    Science.gov (United States)

    Schmitz, Paul C.

    2015-01-01

    A Parametric System Model (PSM) was created in order to explore conceptual designs, the impact of component changes and power level on the performance of the Stirling Radioisotope Generator (SRG). Using the General Purpose Heat Source (GPHS, approximately 250 Wth) modules as the thermal building block from which an SRG is conceptualized, trade studies are performed to understand the importance of individual component scaling on isotope usage. Mathematical relationships based on heat and power throughput, temperature, mass, and volume were developed for each of the required subsystems. The PSM uses these relationships to perform component- and system-level trades.

  19. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    Science.gov (United States)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem on fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on simulation studies conducted, we have shown that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrated the sensitivity to a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily given any data set.

  20. Developing integrated parametric planning models for budgeting and managing complex projects

    Science.gov (United States)

    Etnyre, Vance A.; Black, Ken U.

    1988-01-01

    The applicability of integrated parametric models for the budgeting and management of complex projects is investigated. Methods for building a very flexible, interactive prototype for a project planning system, and software resources available for this purpose, are discussed and evaluated. The prototype is required to be sensitive to changing objectives, changing target dates, changing cost relationships, and changing budget constraints. To achieve the integration of costs and project and task durations, parametric cost functions are defined by a process of trapezoidal segmentation, where the total cost for the project is the sum of the various project cost segments, and each project cost segment is the integral of a linearly segmented cost loading function over a specific interval. The cost can thus be expressed algebraically. The prototype was designed using Lotus-123 as the primary software tool. This prototype implements a methodology for interactive project scheduling that provides a model of a system that meets most of the goals for the first phase of the study and some of the goals for the second phase.
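    The trapezoidal cost segmentation described above amounts to integrating a piecewise-linear cost-loading rate over time; a small numerical sketch (with made-up numbers) is:

```python
import numpy as np

def project_cost(breakpoints, rates):
    """Total project cost: integral of a piecewise-linear cost-loading rate
    (cost per unit time), i.e. the sum of trapezoidal segments."""
    t = np.asarray(breakpoints, dtype=float)
    r = np.asarray(rates, dtype=float)
    return float(np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(t)))

# illustrative numbers only: ramp-up, plateau, ramp-down loading profile
print(project_cost([0, 2, 8, 10], [0.0, 5.0, 5.0, 0.0]))  # -> 40.0
```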

  1. Item level diagnostics and model - data fit in item response theory ...

    African Journals Online (AJOL)

    Item response theory (IRT) is a framework for modeling and analyzing item response data. Item-level modeling gives IRT advantages over classical test theory. The fit of an item score pattern to item response theory (IRT) models is a necessary condition that must be assessed before further use of the items and the models that best fit ...

  2. Exploration of location influences on firm survival rates using parametric duration models

    NARCIS (Netherlands)

    Manzato, G.G.; Arentze, T.A.; Timmermans, H.J.P.; Ettema, D.F.

    2011-01-01

    This study explored the influences of location on business firm survival rates with the use of parametric duration models applied to a data set. Of the variables included, those found to be the most significant were accessibility to infrastructure supply, regional effects, demographic and economic

  3. Nonparametric Bayesian Modeling of Complex Networks

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Mørup, Morten

    2013-01-01

    an infinite mixture model as running example, we go through the steps of deriving the model as an infinite limit of a finite parametric model, inferring the model parameters by Markov chain Monte Carlo, and checking the model's fit and predictive performance. We explain how advanced nonparametric models......Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: Using

  4. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    Science.gov (United States)

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  5. Parametric design and analysis framework with integrated dynamic models

    DEFF Research Database (Denmark)

    Negendahl, Kristoffer

    2014-01-01

    of building energy and indoor environment, are generally confined to late in the design process. Consequence based design is a framework intended for the early design stage. It involves interdisciplinary expertise that secures validity and quality assurance with a simulationist while sustaining autonomous...... control with the building designer. Consequence based design is defined by the specific use of integrated dynamic modeling, which includes the parametric capabilities of a scripting tool and building simulation features of a building performance simulation tool. The framework can lead to enhanced...

  6. An NCME Instructional Module on Item-Fit Statistics for Item Response Theory Models

    Science.gov (United States)

    Ames, Allison J.; Penfield, Randall D.

    2015-01-01

    Drawing valid inferences from item response theory (IRT) models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. This instructional module provides an overview of methods used for evaluating the fit of IRT models. Upon completing…

  7. Using an Autonomous Scale Ship Model for Resistance and Parametric Roll Tests

    Directory of Open Access Journals (Sweden)

    Fernando LOPEZ PEŇA

    2015-04-01

    Full Text Available This work presents the development of a self-propelled scale ship model aimed at performing resistance and parametric roll tests in towing tanks. The main characteristic of the proposed system is that it doesn't have any material link to a towing device to carry out the tests. This ship model has been fully instrumented in order to acquire all the significant raw data, process them onboard and communicate with an inshore station. This work presents a description of the proposed model as well as some results obtained by its use during a towing tank testing campaign.

  8. The fitness landscape of HIV-1 gag: advanced modeling approaches and validation of model predictions by in vitro testing.

    Directory of Open Access Journals (Sweden)

    Jaclyn K Mann

    2014-08-01

    Full Text Available Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model) that is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple and 25 single mutations within these) predicted to be either harmful to HIV-1 viability or fitness-neutral were introduced into HIV-1 NL4-3 by site-directed mutagenesis and replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = -0.74, p = 3.6×10⁻⁶) are strongly correlated, and this was further strengthened in the regularized Ising model (r = -0.83, p = 3.7×10⁻¹²). Performance of the Potts model (r = -0.73, p = 9.7×10⁻⁹) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion
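    The binary (Ising-like) fitness predictor referred to above can be sketched as an energy function over mutation patterns, with lower energy interpreted as higher predicted fitness; the fields h and couplings J below are random placeholders, not the published model parameters.

```python
import numpy as np

def ising_energy(s, h, J):
    """Energy of a binary mutation pattern s (1 = mutated, 0 = wild type)
    under per-site fields h and pairwise couplings J; lower energy is read
    as higher predicted fitness."""
    s = np.asarray(s, dtype=float)
    return float(h @ s + 0.5 * s @ J @ s)

# rank two hypothetical Gag mutants by predicted (relative) fitness
rng = np.random.default_rng(0)
L = 10
h = rng.normal(size=L)
J = rng.normal(scale=0.1, size=(L, L))
J = 0.5 * (J + J.T)            # symmetrize the couplings
np.fill_diagonal(J, 0.0)       # no self-coupling
single = np.zeros(L); single[3] = 1
double = np.zeros(L); double[[3, 7]] = 1
print(ising_energy(single, h, J), ising_energy(double, h, J))
```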

  9. A generalized Jaynes-Cummings model: The relativistic parametric amplifier and a single trapped ion

    Energy Technology Data Exchange (ETDEWEB)

    Ojeda-Guillén, D., E-mail: dojedag@ipn.mx [Escuela Superior de Cómputo, Instituto Politécnico Nacional, Av. Juan de Dios Bátiz esq. Av. Miguel Othón de Mendizábal, Col. Lindavista, Delegación Gustavo A. Madero, C.P. 07738 Ciudad de México (Mexico); Mota, R. D. [Escuela Superior de Ingeniería Mecánica y Eléctrica, Unidad Culhuacán, Instituto Politécnico Nacional, Av. Santa Ana No. 1000, Col. San Francisco Culhuacán, Delegación Coyoacán, C.P. 04430 Ciudad de México (Mexico); Granados, V. D. [Escuela Superior de Física y Matemáticas, Instituto Politécnico Nacional, Ed. 9, Unidad Profesional Adolfo López Mateos, Delegación Gustavo A. Madero, C.P. 07738 Ciudad de México (Mexico)

    2016-06-15

    We introduce a generalization of the Jaynes-Cummings model and study some of its properties. We obtain the energy spectrum and eigenfunctions of this model by using the tilting transformation and the squeezed number states of the one-dimensional harmonic oscillator. As physical applications, we connect this new model to two important and novel problems: the relativistic parametric amplifier and the quantum simulation of a single trapped ion.

  10. Onset patterns in a simple model of localized parametric forcing.

    Science.gov (United States)

    Porter, J; Tinao, I; Laverón-Simavilla, A; Rodríguez, J

    2013-10-01

    We investigate pattern selection at onset in a parametrically and inhomogeneously forced partial differential equation obtained by generalizing Mathieu's equation to include spatial interactions. No separation of scales is assumed. The proposed model is directly relevant to the case of parametrically forced surface waves, such as cross-waves, excited by the horizontal vibration of a fluid, where the forcing is localized to a finite region near the endwall or wavemaker. The availability of analytical solutions in the limit of piecewise constant forcing allows us to investigate in detail the dependence of selected eigenfunctions on spatial detuning, forcing width, damping, boundary conditions, and container size. A wide range of onset patterns are located and described, many of which are rotated, modulated, or both, and deviate far from simple crosswise oriented standing waves. The linear selection mechanisms governing this multiplicity of potential onset patterns are discussed.

  11. Standard error propagation in R-matrix model fitting for light elements

    International Nuclear Information System (INIS)

    Chen Zhenpeng; Zhang Rui; Sun Yeying; Liu Tingjin

    2003-01-01

    The error propagation features of R-matrix model fitting for the ⁷Li, ¹¹B and ¹⁷O systems were studied systematically. Some laws of error propagation were revealed, an empirical formula P_j = U_j^c / U_j^d = K_j · S̄ · √m / √N for describing standard error propagation was established, and the most likely error ranges for the standard cross sections of ⁶Li(n,t), ¹⁰B(n,α0) and ¹⁰B(n,α1) were estimated. The problem that the standard errors of the light-nuclei standard cross sections may be too small results mainly from the R-matrix model fitting, which is not perfect. Yet R-matrix model fitting is the most reliable evaluation method for such data. The error propagation features of R-matrix model fitting for the compound nucleus systems of ⁷Li, ¹¹B and ¹⁷O have been studied systematically, some laws of error propagation are revealed, and these findings are important in solving the problem mentioned above. Furthermore, these conclusions are suitable for similar model fitting in other scientific fields. (author)

  12. Efficient scheme for parametric fitting of data in arbitrary dimensions.

    Science.gov (United States)

    Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching

    2008-07-01

    We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for a large amount of data fitting. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
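    For comparison with the scheme proposed in the abstract, the standard NumPy route for least-squares fitting in the Legendre basis looks as follows; this is the generic library approach, not the authors' algorithm, and the data are synthetic.

```python
import numpy as np

# noisy samples of a smooth trend on [-1, 1]
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(2.5 * x) + 0.05 * np.random.default_rng(1).normal(size=x.size)

# least-squares fit in the Legendre basis up to degree 6
coef = np.polynomial.legendre.legfit(x, y, deg=6)
y_fit = np.polynomial.legendre.legval(x, coef)

print("max deviation from the noise-free trend:",
      float(np.max(np.abs(y_fit - np.sin(2.5 * x)))))
```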

  13. Comparison of Parametric and Nonparametric Methods for Analyzing the Bias of a Numerical Model

    Directory of Open Access Journals (Sweden)

    Isaac Mugume

    2016-01-01

    Full Text Available Numerical models are presently applied in many fields for simulation and prediction, operation, or research. The output from these models normally has both systematic and random errors. The study compared January 2015 temperature data for Uganda, as simulated using the Weather Research and Forecasting model, with observed station temperature data to analyze the bias using parametric measures (the root mean square error (RMSE), the mean absolute error (MAE), the mean error (ME), skewness, and the bias easy estimate (BES)) and a nonparametric measure (the sign test, STM). The RMSE normally overestimates the error compared to the MAE. The RMSE and MAE are not sensitive to the direction of the bias. The ME gives both the direction and magnitude of the bias but can be distorted by extreme values, while the BES is insensitive to extreme values. The STM is robust for giving the direction of the bias; it is not sensitive to extreme values but does not give the magnitude of the bias. Graphical tools (such as time series and cumulative curves) show the performance of the model over time. It is recommended to integrate parametric and nonparametric methods along with graphical methods for a comprehensive analysis of the bias of a numerical model.
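    The parametric and non-parametric bias measures named above can be computed as in the sketch below; the bias easy estimate (BES) is omitted because its definition is not given here, and the sign test uses scipy.stats.binomtest, which requires SciPy 1.7 or later.

```python
import numpy as np
from scipy import stats

def bias_summary(sim, obs):
    """Parametric and non-parametric bias measures for paired model/station data."""
    d = np.asarray(sim, float) - np.asarray(obs, float)
    rmse = float(np.sqrt(np.mean(d**2)))
    mae = float(np.mean(np.abs(d)))
    me = float(np.mean(d))                       # signed mean error
    skew = float(stats.skew(d))
    # sign test: are positive and negative errors equally likely?
    n_pos = int(np.sum(d > 0))
    n_nonzero = int(np.sum(d != 0))
    p_sign = stats.binomtest(n_pos, n_nonzero, 0.5).pvalue
    return {"RMSE": rmse, "MAE": mae, "ME": me,
            "skewness": skew, "sign_test_p": p_sign}
```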

  14. A Parametric Model of Shoulder Articulation for Virtual Assessment of Space Suit Fit

    Science.gov (United States)

    Kim, K. Han; Young, Karen S.; Bernal, Yaritza; Boppana, Abhishektha; Vu, Linh Q.; Benson, Elizabeth A.; Jarvis, Sarah; Rajulu, Sudhakar L.

    2016-01-01

    Suboptimal suit fit is a known risk factor for crewmember shoulder injury. Suit fit assessment is however prohibitively time consuming and cannot be generalized across wide variations of body shapes and poses. In this work, we have developed a new design tool based on the statistical analysis of body shape scans. This tool is aimed at predicting the skin deformation and shape variations for any body size and shoulder pose for a target population. This new process, when incorporated with CAD software, will enable virtual suit fit assessments, predictively quantifying the contact volume, and clearance between the suit and body surface at reduced time and cost.

  15. Modeling the World Health Organization Disability Assessment Schedule II using non-parametric item response models.

    Science.gov (United States)

    Galindo-Garre, Francisca; Hidalgo, María Dolores; Guilera, Georgina; Pino, Oscar; Rojo, J Emilio; Gómez-Benito, Juana

    2015-03-01

    The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities and participation in society). The main purpose of this paper is the evaluation of the psychometric properties for each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology. Copyright © 2014 John Wiley & Sons, Ltd.

  16. Estimation of the lifetime distribution of mechatronic systems in the presence of a covariate: A comparison among parametric, semiparametric and nonparametric models

    International Nuclear Information System (INIS)

    Bobrowski, Sebastian; Chen, Hong; Döring, Maik; Jensen, Uwe; Schinköthe, Wolfgang

    2015-01-01

    In practice manufacturers may have lots of failure data of similar products using the same technology basis under different operating conditions. Thus, one can try to derive predictions for the distribution of the lifetime of newly developed components or new application environments through the existing data using regression models based on covariates. Three categories of such regression models are considered: a parametric, a semiparametric and a nonparametric approach. First, we assume that the lifetime is Weibull distributed, where its parameters are modelled as linear functions of the covariate. Second, the Cox proportional hazards model, well-known in Survival Analysis, is applied. Finally, a kernel estimator is used to interpolate between empirical distribution functions. In particular the last case is new in the context of reliability analysis. We propose a goodness of fit measure (GoF), which can be applied to all three types of regression models. Using this GoF measure we discuss a new model selection procedure. To illustrate this method of reliability prediction, the three classes of regression models are applied to real test data of motor experiments. Further the performance of the approaches is investigated by Monte Carlo simulations. - Highlights: • We estimate the lifetime distribution in the presence of a covariate. • Three types of regression models are considered and compared. • A new nonparametric estimator based on our particular data structure is introduced. • We propose a goodness of fit measure and show a new model selection procedure. • A case study with real data and Monte Carlo simulations are performed
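    A minimal sketch of the first (parametric) approach, a Weibull lifetime model whose scale depends on a covariate, fitted by maximum likelihood, is given below. For numerical stability the scale is taken log-linear in the covariate and the shape is parametrized through an exponential, whereas the paper models the parameters as linear functions of the covariate, so this is only an illustrative variant with hypothetical names.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_covariate_fit(t, z):
    """Maximum-likelihood fit of a Weibull lifetime model whose scale
    depends on a covariate z (complete, uncensored lifetimes t > 0)."""
    t = np.asarray(t, dtype=float)
    z = np.asarray(z, dtype=float)

    def nll(params):
        a0, a1, logk = params
        k = np.exp(logk)                 # Weibull shape
        lam = np.exp(a0 + a1 * z)        # covariate-dependent Weibull scale
        u = (t / lam)**k
        return -np.sum(np.log(k) - k * np.log(lam) + (k - 1) * np.log(t) - u)

    res = minimize(nll, x0=np.array([np.log(t.mean()), 0.0, 0.0]),
                   method="Nelder-Mead")
    return res.x, res.fun
```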

  17. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development.

    Science.gov (United States)

    Tøndel, Kristin; Niederer, Steven A; Land, Sander; Smith, Nicolas P

    2014-05-20

    Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input-output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of on

  18. The issue of statistical power for overall model fit in evaluating structural equation models

    Directory of Open Access Journals (Sweden)

    Richard HERMIDA

    2015-06-01

    Full Text Available Statistical power is an important concept for psychological research. However, examining the power of a structural equation model (SEM) is rare in practice. This article provides an accessible review of the concept of statistical power for the Root Mean Square Error of Approximation (RMSEA) index of overall model fit in structural equation modeling. By way of example, we examine the current state of power in the literature by reviewing studies in top Industrial-Organizational (I/O) Psychology journals using SEMs. Results indicate that in many studies, power is very low, which implies acceptance of invalid models. Additionally, we examined methodological situations which may have an influence on statistical power of SEMs. Results showed that power varies significantly as a function of model type and whether or not the model is the main model for the study. Finally, results indicated that power is significantly related to model fit statistics used in evaluating SEMs. The results from this quantitative review imply that researchers should be more vigilant with respect to power in structural equation modeling. We therefore conclude by offering methodological best practices to increase confidence in the interpretation of structural equation modeling results with respect to statistical power issues.

  19. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Science.gov (United States)

    Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2018-03-01

    We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  20. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Directory of Open Access Journals (Sweden)

    Jinchao Feng

    2018-03-01

    Full Text Available We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  1. Model developments in TERRA_URB, the upcoming standard urban parametrization of the atmospheric numerical model COSMO(-CLM)

    Science.gov (United States)

    Wouters, Hendrik; Blahak, Ulrich; Helmert, Jürgen; Raschendorfer, Matthias; Demuzere, Matthias; Fay, Barbara; Trusilova, Kristina; Mironov, Dmitrii; Reinert, Daniel; Lüthi, Daniel; Machulskaya, Ekaterina

    2015-04-01

    In order to address urban climate at regional scales, a new efficient urban land-surface parametrization, TERRA_URB, has been developed and coupled to the atmospheric numerical model COSMO-CLM. Several new advancements for urban land-surface models are introduced that are crucial for capturing the urban surface-energy balance and its seasonal dependency in the mid-latitudes. These include a new PDF-based water-storage parametrization for impervious land, the representation of radiative absorption and emission by greenhouse gases in the infra-red spectrum in the urban canopy layer, and the inclusion of heat emission from human activity. TERRA_URB has been applied in offline urban-climate studies during observation campaigns at Basel (BUBBLE), Toulouse (CAPITOUL), and Singapore, and is currently applied in online studies for urban areas in Belgium, Germany, Switzerland, Helsinki, Singapore, and Melbourne. Because of its computational efficiency, high accuracy and conceptual simplicity, TERRA_URB has been selected to become the standard urban parametrization of the atmospheric numerical model COSMO(-CLM). This allows for better weather forecasts of temperature and precipitation in cities with COSMO, and an improved assessment of urban outdoor hazards in the context of global climate change and urban expansion with COSMO-CLM. We propose additional extensions to TERRA_URB towards a more robust representation of cities over the world, including their structural design. In a first step, COSMO's standard EXTernal PARameter (EXTPAR) tool is updated to represent cities in the land cover over the entire globe. Global datasets in the standard EXTPAR tool are used to retrieve the 'Paved' or 'sealed' surface Fraction (PF) referring to the presence of buildings and streets. Furthermore, new global data sets are incorporated in EXTPAR for describing the Anthropogenic Heat Flux (AHF) due to human activity, and optionally the

  2. Repair models of cell survival and corresponding computer program for survival curve fitting

    International Nuclear Information System (INIS)

    Shen Xun; Hu Yiwei

    1992-01-01

    Some basic concepts and formulations of two repair models of survival, the incomplete repair (IR) model and the lethal-potentially lethal (LPL) model, are introduced. An IBM-PC computer program for survival curve fitting with these models was developed and applied to fit the survival of human melanoma cells HX118 irradiated at different dose rates. A comparison was made between the repair models and two non-repair models, the multitarget-single hit model and the linear-quadratic model, in the fitting and analysis of the survival-dose curves. It was shown that either the IR model or the LPL model can fit a set of survival curves at different dose rates with the same parameters and provide information on the repair capacity of cells. These two mathematical models could be very useful in quantitative studies of the radiosensitivity and repair capacity of cells.
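
    The linear-quadratic model mentioned above as one of the non-repair comparators has a simple closed form, S(D) = exp(-(αD + βD²)), and fitting it is straightforward with SciPy. The dose-survival values below are invented for illustration and are not the HX118 measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def lq_log_survival(dose, alpha, beta):
            """Log of the linear-quadratic survival: ln S(D) = -(alpha*D + beta*D^2)."""
            return -(alpha * dose + beta * dose**2)

        # Invented example data (not the HX118 measurements).
        dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])          # Gy
        surv = np.array([1.0, 0.75, 0.52, 0.21, 0.07, 0.02])     # surviving fraction

        # Fit in log space so the low survival fractions are not swamped.
        (alpha, beta), _ = curve_fit(lq_log_survival, dose, np.log(surv), p0=[0.2, 0.02])
        print(f"alpha = {alpha:.3f} Gy^-1, beta = {beta:.3f} Gy^-2")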

  3. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    OpenAIRE

    Matthew P. Adams; Catherine J. Collier; Sven Uthicke; Yan X. Ow; Lucas Langlois; Katherine R. O’Brien

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluat...

  4. Scalability of the muscular action in a parametric 3D model of the index finger.

    Science.gov (United States)

    Sancho-Bru, Joaquín L; Vergara, Margarita; Rodríguez-Cervantes, Pablo-Jesús; Giurintano, David J; Pérez-González, Antonio

    2008-01-01

    A method for scaling the muscle action is proposed and used to achieve a 3D inverse dynamic model of the human finger with all its components scalable. This method is based on scaling the physiological cross-sectional area (PCSA) in a Hill muscle model. Different anthropometric parameters and maximal grip force data have been measured and their correlations have been analyzed and used for scaling the PCSA of each muscle. A linear relationship between the normalized PCSA and the product of the length and breadth of the hand has been finally used for scaling, with a slope of 0.01315 cm⁻², with the length and breadth of the hand expressed in centimeters. The parametric muscle model has been included in a parametric finger model previously developed by the authors, and it has been validated by reproducing the results of an experiment in which subjects from different population groups exerted maximal voluntary forces with their index finger in a controlled posture.
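
    One plausible reading of the scaling rule quoted above (normalized PCSA proportional to hand length times hand breadth, slope 0.01315 cm⁻²) is sketched below; the reference PCSA and the specific tension used to convert PCSA into a maximal Hill-type muscle force are assumed illustrative values, not parameters reported in the paper.

        def scaled_pcsa(pcsa_ref_cm2, hand_length_cm, hand_breadth_cm, slope=0.01315):
            """Scale a reference PCSA by hand size (slope in cm^-2, per the abstract).

            Assumes normalized PCSA = slope * hand length * hand breadth.
            """
            return pcsa_ref_cm2 * slope * hand_length_cm * hand_breadth_cm

        def max_muscle_force(pcsa_cm2, specific_tension_n_per_cm2=35.0):
            """Hill-type maximal isometric force; 35 N/cm^2 is an assumed textbook value."""
            return pcsa_cm2 * specific_tension_n_per_cm2

        # Illustrative numbers only.
        pcsa = scaled_pcsa(pcsa_ref_cm2=1.7, hand_length_cm=18.5, hand_breadth_cm=8.0)
        print(f"PCSA ~ {pcsa:.2f} cm^2, Fmax ~ {max_muscle_force(pcsa):.0f} N")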

  5. [How to fit and interpret multilevel models using SPSS].

    Science.gov (United States)

    Pardo, Antonio; Ruiz, Miguel A; San Martín, Rafael

    2007-05-01

    Hierarchic or multilevel models are used to analyse data when cases belong to known groups and sample units are selected both at the individual level and at the group level. In this work, the multilevel models most commonly discussed in the statistical literature are described, explaining how to fit these models using the SPSS program (version 11 or later) and how to interpret the outcomes of the analysis. Five particular models are described, fitted, and interpreted: (1) one-way analysis of variance with random effects, (2) regression analysis with means-as-outcomes, (3) one-way analysis of covariance with random effects, (4) regression analysis with random coefficients, and (5) regression analysis with means- and slopes-as-outcomes. All models are explained, trying to make them understandable to researchers in the health and behavioural sciences.
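
    Although the article works in SPSS, the same specifications can be cross-checked in Python with statsmodels' mixed-effects routines; the sketch below covers three of the five models on synthetic grouped data, with all variable and group names as placeholders.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic grouped data: 30 groups ("school"), 20 cases each.
        rng = np.random.default_rng(0)
        school = np.repeat(np.arange(30), 20)
        u = rng.normal(0, 1, 30)[school]            # group-level random intercepts
        x = rng.normal(size=school.size)            # level-1 predictor
        z = rng.normal(size=30)[school]             # group-level predictor
        y = 2 + 0.5 * x + 0.3 * z + u + rng.normal(0, 1, school.size)
        df = pd.DataFrame(dict(y=y, x=x, z=z, school=school))

        # (1) One-way ANOVA with random effects (random intercept only).
        m1 = smf.mixedlm("y ~ 1", df, groups=df["school"]).fit()

        # (4) Regression with random coefficients (random intercept and slope for x).
        m4 = smf.mixedlm("y ~ x", df, groups=df["school"], re_formula="~x").fit()

        # (5) Means- and slopes-as-outcomes: group-level predictor z and its
        #     cross-level interaction with x.
        m5 = smf.mixedlm("y ~ x * z", df, groups=df["school"], re_formula="~x").fit()

        print(m4.summary())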

  6. Edge Sharpness Assessment by Parametric Modeling: Application to Magnetic Resonance Imaging.

    Science.gov (United States)

    Ahmad, R; Ding, Y; Simonetti, O P

    2015-05-01

    In biomedical imaging, edge sharpness is an important yet often overlooked image quality metric. In this work, a semi-automatic method to quantify edge sharpness in the presence of significant noise is presented, with application to magnetic resonance imaging (MRI). The method is based on parametric modeling of image edges. First, an edge map is automatically generated and one or more edges-of-interest (EOI) are manually selected using a graphical user interface. Multiple exclusion criteria are then enforced to eliminate edge pixels that are potentially not suitable for sharpness assessment. Second, at each pixel of the EOI, an image intensity profile is read along a small line segment that runs locally normal to the EOI. Third, the profiles corresponding to all EOI pixels are individually fitted with a sigmoid function characterized by four parameters, including one that represents edge sharpness. Finally, the distribution of the sharpness parameter is used to quantify edge sharpness. For validation, the method is applied to simulated data as well as MRI data from both phantom imaging and cine imaging experiments. This method allows for fast, quantitative evaluation of edge sharpness even in images with a poor signal-to-noise ratio. Although the utility of this method is demonstrated for MRI, it can be adapted for other medical imaging applications.
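
    The per-pixel profile fit described above can be sketched with a four-parameter logistic sigmoid in which one parameter plays the role of edge width (the inverse of sharpness); the profile below is synthetic, and the exact parametrization used by the authors may differ.

        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(x, base, height, center, width):
            """Four-parameter logistic edge profile; 'width' is inversely related to sharpness."""
            return base + height / (1.0 + np.exp(-(x - center) / width))

        # Synthetic noisy intensity profile read across an edge (arbitrary units).
        x = np.linspace(-5, 5, 41)
        rng = np.random.default_rng(1)
        profile = sigmoid(x, 100.0, 50.0, 0.3, 0.8) + rng.normal(0, 3, x.size)

        p0 = [profile.min(), np.ptp(profile), 0.0, 1.0]
        (base, height, center, width), _ = curve_fit(sigmoid, x, profile, p0=p0)
        print(f"estimated edge width ~ {width:.2f} pixels (smaller = sharper)")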

  7. Model-fitting approach to kinetic analysis of non-isothermal oxidation of molybdenite

    International Nuclear Information System (INIS)

    Ebrahimi Kahrizsangi, R.; Abbasi, M. H.; Saidi, A.

    2007-01-01

    The kinetics of molybdenite oxidation was studied by non-isothermal TGA-DTA with a heating rate of 5 °C·min⁻¹. The model-fitting kinetic approach was applied to the TGA data, with the Coats-Redfern method used for model fitting. The popular model-fitting approach gives an excellent fit to the non-isothermal data in the chemically controlled regime. The apparent activation energy was determined to be about 34.2 kcal·mol⁻¹, with a pre-exponential factor of about 10⁸ s⁻¹, for extents of reaction less than 0.5.
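
    The Coats-Redfern method mentioned here reduces, in the chemically controlled regime, to a straight-line fit: ln[g(α)/T²] plotted against 1/T has slope -Ea/R. The reaction model g(α) and the synthetic TGA points below are illustrative assumptions only.

        import numpy as np

        R = 8.314  # J mol^-1 K^-1

        def coats_redfern_fit(T, alpha, g=lambda a: -np.log(1.0 - a)):
            """Fit ln[g(alpha)/T^2] against 1/T; the slope gives -Ea/R.

            g defaults to the first-order model g(alpha) = -ln(1 - alpha);
            other reaction models can be passed in instead.
            """
            y = np.log(g(alpha) / T**2)
            slope, intercept = np.polyfit(1.0 / T, y, 1)
            return -slope * R, intercept        # Ea in J/mol

        # Synthetic non-isothermal data (temperature in K, conversion alpha).
        T = np.array([700.0, 720.0, 740.0, 760.0, 780.0])
        alpha = np.array([0.10, 0.18, 0.30, 0.45, 0.62])

        Ea, _ = coats_redfern_fit(T, alpha)
        print(f"apparent activation energy ~ {Ea / 4184:.1f} kcal/mol")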

  8. Parametric analysis of LIBRETTO-4 and 5 in-pile tritium transport model on EcosimPro

    Energy Technology Data Exchange (ETDEWEB)

    Alcalde, Pablo Martínez, E-mail: pablomiguel.martinez@externos.ciemat.es [Universidad Nacional de Educación a Distancia (UNED), c/Juan del Rosal 12, 28040 Madrid (Spain); Moreno, Carlos; Ibarra, Ángel [CIEMAT, Avda. Complutense 40, 28040 Madrid (Spain)

    2014-10-15

    Highlights: • Introduction of a new tritium transport model of LIBRETTO-4 and 5 on EcosimPro®. • Analysis of model input parameter and variable sensitivities and their effects on simulated tritium fluxes. • Demonstration of the strong dependence of the tritium out-flux on lead-lithium parameters. • Rough fits achieved by increasing the Li17Pb solubility or the recombination rate. - Abstract: A new model of the LIBRETTO-4/1, 4/2 and 5 experiments has been developed with the EcosimPro© tool to simulate in-pile tritium breeding and transport into two separate purge-gas channels with He + 0.1% H₂. Release from the lead-lithium eutectic plenum, with coupled permeation through an austenitic steel wall in the first case and single permeation through EUROFER-97, can be simulated in the temperature range 300–550 °C by tuning the transport parameters involved. A parametric study has been performed to reduce the degrees of freedom and to determine the error caused in the simulation by the uncertainty in the experimental input data. This information is essential for the experimental benchmarking. The Tritium Permeation Percentage (TPP) is a calculated output parameter that varies only between 2 and 6% over the whole experimental time (730 Full Power Days for LIBRETTO-4 and 520 for LIBRETTO-5), which makes it easy to compare. Tritium transport parameter ranges verifying this output are defined herein.

  9. Parametric sensitivity analysis of an agro-economic model of management of irrigation water

    Science.gov (United States)

    El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse

    2015-04-01

    The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the issue of the valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin across this area under changes in public policy and climatic conditions, taking into account the competition for collective resources. To identify the model input parameters that influence the results of the model, a parametric sensitivity analysis is performed with the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that among the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence they are: i) coefficient of crop yield response to water, ii) average daily gain in weight of livestock, iii) exchange of livestock reproduction, iv) maximum yield of crops, v) supply of irrigation water and vi) precipitation. These 6 parameters have sensitivity indexes ranging between 0.22 and 1.28. The results reveal high uncertainties in these parameters that can dramatically skew the results of the model, and point to the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
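
    The "One-Factor-At-A-Time" screening used in the study can be sketched generically: each parameter is perturbed from its nominal value in turn and a normalized effect on the objective function is recorded. The toy gross-margin function and parameter names below are placeholders, not the agro-economic model itself.

        import numpy as np

        def oat_screening(model, nominal, rel_change=0.10):
            """One-Factor-At-A-Time sensitivity: relative change in output per
            relative change in each input, all other inputs held at nominal values."""
            nominal = np.asarray(nominal, dtype=float)
            y0 = model(nominal)
            indices = []
            for i in range(nominal.size):
                perturbed = nominal.copy()
                perturbed[i] *= 1.0 + rel_change
                indices.append(((model(perturbed) - y0) / abs(y0)) / rel_change)
            return np.array(indices)

        # Placeholder "gross margin" model with three inputs.
        gross_margin = lambda p: 100.0 * p[0] * p[1] - 20.0 * p[2]
        s = oat_screening(gross_margin, nominal=[1.2, 0.8, 1.0])
        for name, si in zip(["yield response", "weight gain", "water supply"], s):
            print(f"{name:15s} sensitivity index = {si:+.2f}")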

  10. Parametric study of a turbocompound diesel engine based on an analytical model

    International Nuclear Information System (INIS)

    Zhao, Rongchao; Zhuge, Weilin; Zhang, Yangjun; Yin, Yong; Zhao, Yanting; Chen, Zhen

    2016-01-01

    Turbocompounding is an important technique for recovering waste heat from engine exhaust and reducing CO₂ emissions. This paper presents a parametric study of a turbocompound diesel engine based on an analytical model. The analytical model was developed to investigate the influence of system parameters on engine fuel consumption. The model is based on thermodynamics and empirical sub-models, and can consider the impact of each parameter independently. The effects of turbine efficiency, back pressure, exhaust temperature, pressure ratio and engine speed on the recovered energy, pumping loss and engine fuel reduction were studied. Results show that turbine efficiency, exhaust temperature and back pressure have a great influence on the fuel reduction and the optimal power turbine (PT) expansion ratio. However, engine operating speed has little impact on the fuel savings obtained by turbocompounding. The interaction mechanism between the PT recovery power and the engine pumping loss is presented in the paper. Due to the nonlinear characteristic of turbine power, there is an optimum value of the PT expansion ratio that achieves the largest power gain. Finally, the fuel-saving potential of a high-performance turbocompound engine and the requirements for achieving it are proposed. - Highlights: • An analytical model for a turbocompound engine is developed and validated. • A parametric study is performed to obtain the lowest BSFC and the optimal expansion ratio. • The influence of each parameter on the fuel-saving potential is presented. • The impact mechanisms of each parameter on the energy tradeoff are disclosed. • It provides an effective tool to guide the preliminary design of turbocompounding.

  11. A local non-parametric model for trade sign inference

    Science.gov (United States)

    Blazejewski, Adam; Coggins, Richard

    2005-03-01

    We investigate a regularity in market order submission strategies for 12 stocks with large market capitalization on the Australian Stock Exchange. The regularity is evidenced by a predictable relationship between the trade sign (trade initiator), size of the trade, and the contents of the limit order book before the trade. We demonstrate this predictability by developing an empirical inference model to classify trades into buyer-initiated and seller-initiated. The model employs a local non-parametric method, k-nearest neighbor, which in the past was used successfully for chaotic time series prediction. The k-nearest neighbor with three predictor variables achieves an average out-of-sample classification accuracy of 71.40%, compared to 63.32% for the linear logistic regression with seven predictor variables. The result suggests that a non-linear approach may produce a more parsimonious trade sign inference model with a higher out-of-sample classification accuracy. Furthermore, for most of our stocks the observed regularity in market order submissions seems to have a memory of at least 30 trading days.
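
    A minimal version of the k-nearest-neighbour trade-sign classifier can be set up with scikit-learn; the three synthetic predictors below merely stand in for the trade size and order-book features used in the paper, so the accuracy printed is not comparable to the reported 71.40%.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier

        # Synthetic stand-ins for the paper's predictors (e.g. trade size and
        # bid/ask queue imbalance observed before the trade).
        rng = np.random.default_rng(2)
        n = 5000
        X = rng.normal(size=(n, 3))
        # Ground-truth sign loosely driven by the first (synthetic) feature.
        y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)   # 1 = buyer-initiated

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        knn = KNeighborsClassifier(n_neighbors=25).fit(X_tr, y_tr)
        print(f"out-of-sample classification accuracy: {knn.score(X_te, y_te):.3f}")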

  12. Chempy: A flexible chemical evolution model for abundance fitting. Do the Sun's abundances alone constrain chemical evolution models?

    Science.gov (United States)

    Rybizki, Jan; Just, Andreas; Rix, Hans-Walter

    2017-09-01

    Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of supernova of type Ia (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6 (+2.1/−1.6)% of the IMF explodes as core-collapse supernova (CC-SN), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10³ M⊙ to 0.5-1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels. Chempy could be a powerful tool to confront predictions from stellar

  13. Principles of parametric estimation in modeling language competition.

    Science.gov (United States)

    Zhang, Menghan; Gong, Tao

    2013-06-11

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.
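
    The Lotka-Volterra competition system underlying the language model can be integrated directly once the parameters are fixed; the growth rates, carrying capacities and competition coefficients below are illustrative placeholders for the impacts and inheritance rates estimated from census and survey data in the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        def competition(t, n, r, K, a12, a21):
            """Two-language Lotka-Volterra competition equations."""
            n1, n2 = n
            dn1 = r[0] * n1 * (1 - (n1 + a12 * n2) / K[0])
            dn2 = r[1] * n2 * (1 - (n2 + a21 * n1) / K[1])
            return [dn1, dn2]

        # Illustrative parameter values only.
        sol = solve_ivp(competition, t_span=(0, 200), y0=[0.6, 0.4],
                        args=([0.05, 0.04], [1.0, 1.0], 1.2, 0.8))
        n1_final, n2_final = sol.y[:, -1]
        print(f"speaker fractions after 200 time units: {n1_final:.2f}, {n2_final:.2f}")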

  14. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia

    2015-01-07

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse-grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force-matching methods to non-equilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.

  15. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
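
    The notion of Pareto-optimality used above is easy to make concrete: an input set stays on the frontier if no other set fits every calibration target at least as well and at least one target strictly better. The sketch below assumes each row holds one input set's goodness-of-fit values, with smaller meaning better.

        import numpy as np

        def pareto_frontier(gof):
            """Boolean mask of non-dominated rows.

            gof : (n_sets, n_targets) array of goodness-of-fit values,
                  smaller = better fit to that calibration target.
            """
            gof = np.asarray(gof, dtype=float)
            on_frontier = np.ones(gof.shape[0], dtype=bool)
            for i in range(gof.shape[0]):
                # Row j dominates row i if it is <= on every target and < on at least one.
                dominated = np.all(gof <= gof[i], axis=1) & np.any(gof < gof[i], axis=1)
                if dominated.any():
                    on_frontier[i] = False
            return on_frontier

        # Five candidate input sets scored against three targets (illustrative numbers).
        gof = np.array([[1.0, 2.0, 3.0],
                        [0.8, 2.5, 3.1],
                        [1.1, 2.1, 3.2],   # dominated by the first row
                        [0.9, 1.9, 3.5],
                        [1.0, 2.0, 3.0]])  # tie with the first row, also kept
        print(pareto_frontier(gof))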

  16. Tests of fit of historically-informed models of African American Admixture.

    Science.gov (United States)

    Gross, Jessica M

    2018-02-01

    African American populations in the U.S. formed primarily by mating between Africans and Europeans over the last 500 years. To date, studies of admixture have focused on either a one-time admixture event or continuous input into the African American population from Europeans only. Our goal is to gain a better understanding of the admixture process by examining models that take into account (a) assortative mating by ancestry in the African American population, (b) continuous input from both Europeans and Africans, and (c) historically informed variation in the rate of African migration over time. We used a model-based clustering method to generate distributions of African ancestry in three samples comprised of 147 African Americans from two published sources. We used a log-likelihood method to examine the fit of four models to these distributions and used a log-likelihood ratio test to compare the relative fit of each model. The mean ancestry estimates for our datasets of 77% African/23% European to 83% African/17% European ancestry are consistent with previous studies. We find admixture models that incorporate continuous gene flow from Europeans fit significantly better than one-time event models, and that a model involving continuous gene flow from Africans and Europeans fits better than one with continuous gene flow from Europeans only for two samples. Importantly, models that involve continuous input from Africans necessitate a higher level of gene flow from Europeans than previously reported. We demonstrate that models that take into account information about the rate of African migration over the past 500 years fit observed patterns of African ancestry better than alternative models. Our approach will enrich our understanding of the admixture process in extant and past populations. © 2017 Wiley Periodicals, Inc.

  17. Fitting Equilibrium Search Models to Labour Market Data

    DEFF Research Database (Denmark)

    Bowlus, Audra J.; Kiefer, Nicholas M.; Neumann, George R.

    1996-01-01

    Specification and estimation of a Burdett-Mortensen type equilibrium search model is considered. The estimation is nonstandard. An estimation strategy asymptotically equivalent to maximum likelihood is proposed and applied. The results indicate that specifications with a small number of productivity types fit the data well compared to the homogeneous model.

  18. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

    Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process, the models allow for profound conclusions about the test takers. However, before such a model can be used its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters, which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study, in which we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
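
    The Hausman comparison described above reduces to a quadratic form in the difference between the two estimates; a generic sketch of the statistic (not tied to the diffusion models discussed) is:

        import numpy as np
        from scipy.stats import chi2

        def hausman_statistic(b_eff, V_eff, b_cons, V_cons):
            """Generic Hausman misspecification statistic.

            b_eff,  V_eff  : efficient estimate and covariance (valid only under the model)
            b_cons, V_cons : consistent but less efficient estimate and covariance
            """
            d = np.asarray(b_eff) - np.asarray(b_cons)
            V = np.asarray(V_cons) - np.asarray(V_eff)
            stat = float(d @ np.linalg.pinv(V) @ d)
            return stat, chi2.sf(stat, d.size)

        # Illustrative numbers only.
        b1, V1 = np.array([0.50, 1.20]), np.diag([0.010, 0.020])
        b2, V2 = np.array([0.55, 1.10]), np.diag([0.015, 0.030])
        stat, p = hausman_statistic(b1, V1, b2, V2)
        print(f"H = {stat:.2f}, p = {p:.3f}")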

  19. Super learning to hedge against incorrect inference from arbitrary parametric assumptions in marginal structural modeling.

    Science.gov (United States)

    Neugebauer, Romain; Fireman, Bruce; Roy, Jason A; Raebel, Marsha A; Nichols, Gregory A; O'Connor, Patrick J

    2013-08-01

    Clinical trials are unlikely to ever be launched for many comparative effectiveness research (CER) questions. Inferences from hypothetical randomized trials may however be emulated with marginal structural modeling (MSM) using observational data, but success in adjusting for time-dependent confounding and selection bias typically relies on parametric modeling assumptions. If these assumptions are violated, inferences from MSM may be inaccurate. In this article, we motivate the application of a data-adaptive estimation approach called super learning (SL) to avoid reliance on arbitrary parametric assumptions in CER. Using the electronic health records data from adults with new-onset type 2 diabetes, we implemented MSM with inverse probability weighting (IPW) estimation to evaluate the effect of three oral antidiabetic therapies on the worsening of glomerular filtration rate. Inferences from IPW estimation were noticeably sensitive to the parametric assumptions about the associations between both the exposure and censoring processes and the main suspected source of confounding, that is, time-dependent measurements of hemoglobin A1c. SL was successfully implemented to harness flexible confounding and selection bias adjustment from existing machine learning algorithms. Erroneous IPW inference about clinical effectiveness because of arbitrary and incorrect modeling decisions may be avoided with SL. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Developing Parametric Models for the Assembly of Machine Fixtures for Virtual Multiaxial CNC Machining Centers

    Science.gov (United States)

    Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.

    2018-01-01

    This paper dwells upon a variance parameterization method. Variance or dimensional parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated in a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the build tree. In this research the authors consider a parameterization method for the machine tooling used to manufacture parts on multiaxial CNC machining centers in a real manufacturing process. The developed method makes it possible to significantly reduce tooling design time when a part's geometric parameters are changed. The method can also reduce the time needed for design and engineering preproduction, in particular for the development of control programs for CNC equipment and for control and measuring machines, and can automate the release of design and engineering documentation. Variance parameterization helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.

  1. Evaluation of treatment response in depression studies using a Bayesian parametric cure rate model.

    Science.gov (United States)

    Santen, Gijs; Danhof, Meindert; Della Pasqua, Oscar

    2008-10-01

    Efficacy trials with antidepressant drugs often fail to show a significant treatment effect even though efficacious treatments are investigated. This failure can, amongst other factors, be attributed to the lack of sensitivity of the statistical method as well as of the endpoints to pharmacological activity. For regulatory purposes the most widely used efficacy endpoint is still the mean change in HAM-D score at the end of the study, despite evidence from the literature showing that the HAM-D scale might not be a sensitive tool to assess drug effect and that changes from baseline at the end of treatment may not reflect the extent of response. In the current study, we evaluate the prospect of applying a Bayesian parametric cure rate model (CRM) to analyse the antidepressant effect in efficacy trials with paroxetine. The model is based on a survival approach, which allows for a fraction of surviving patients indefinitely after the completion of treatment. Data were extracted from GlaxoSmithKline's clinical databases. Response was defined as a 50% change from baseline HAM-D at any assessment time after the start of therapy. Survival times were described by a log-normal distribution and the drug effect was parameterised as a covariate on the fraction of non-responders. The model was able to fit the data from different studies accurately, and the results show that the response to treatment does not lag by two weeks, as is commonly believed. In conclusion, we demonstrate how the parameterisation of a survival model can be used to characterise treatment response in depression trials. The method contrasts with the long-established snapshot of changes from baseline, as it incorporates the time course of response throughout treatment.

  2. A versatile curve-fit model for linear to deeply concave rank abundance curves

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

    A new, flexible curve-fit model for linear to concave rank abundance curves was conceptualized and validated using observational data. The model links the geometric-series model and log-series model and can also fit deeply concave rank abundance curves. The model is based – in an unconventional way

  3. Validation of statistical models for creep rupture by parametric analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)

    2012-01-15

    Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. - Highlights: • The paper discusses the validation of creep rupture models derived from statistical analysis. • It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. • The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. • The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).

  4. Gfitter - Revisiting the global electroweak fit of the Standard Model and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Flaecher, H.; Hoecker, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Goebel, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]|[Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Haller, J. [Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Moenig, K.; Stelzer, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2008-11-15

    The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project, and presents state-of-the-art results for the global electroweak fit in the Standard Model, and for a model with an extended Higgs sector (2HDM). Numerical and graphical results for fits with and without including the constraints from the direct Higgs searches at LEP and Tevatron are given. Perspectives for future colliders are analysed and discussed. Including the direct Higgs searches, we find M_H = 116.4 (+18.3/−1.3) GeV, and the 2σ and 3σ allowed regions [114,145] GeV and [113,168] and [180,225

  5. A person fit test for IRT models for polytomous items

    NARCIS (Netherlands)

    Glas, Cornelis A.W.; Dagohoy, A.V.

    2007-01-01

    A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability

  6. Parametric Thermal Models of the Transient Reactor Test Facility (TREAT)

    Energy Technology Data Exchange (ETDEWEB)

    Bradley K. Heath

    2014-03-01

    This work supports the restart of transient testing in the United States using the Department of Energy's Transient Reactor Test Facility at the Idaho National Laboratory. It also supports the Global Threat Reduction Initiative by reducing the proliferation risk of high-enriched uranium fuel. The work involves the creation of a nuclear fuel assembly model using the fuel performance code known as BISON. The model simulates the thermal behavior of a nuclear fuel assembly during steady-state and transient operational modes. Additional models of the same geometry but differing material properties were created to perform parametric studies. The results show that fuel and cladding thermal conductivity have the greatest effect on fuel temperature under the steady-state operational mode. Fuel density and fuel specific heat have the greatest effect in the transient operational mode. When considering a new fuel type it is recommended to use materials that decrease the specific heat of the fuel and the thermal conductivity of the fuel's cladding in order to deal with the higher-density fuels that accompany the LEU conversion process. Data on the latest operating conditions of TREAT need to be obtained in order to validate BISON's results. BISON's models for TREAT (material models, boundary convection models) are modest and need additional work to ensure accuracy and confidence in the results.

  7. Checking the Adequacy of Fit of Models from Split-Plot Designs

    DEFF Research Database (Denmark)

    Almini, A. A.; Kulahci, Murat; Montgomery, D. C.

    2009-01-01

    One of the main features that distinguish split-plot experiments from other experiments is that they involve two types of experimental errors: the whole-plot (WP) error and the subplot (SP) error. Taking this into consideration is very important when computing measures of adequacy of fit for split-plot models. In this article, we propose the computation of two R², R²-adjusted, prediction error sums of squares (PRESS), and R²-prediction statistics to measure the adequacy of fit for the WP and the SP submodels in a split-plot design. This is complemented with the graphical analysis of the two types of errors to check for any violation of the underlying assumptions and the adequacy of fit of split-plot models. Using examples, we show how computing two measures of model adequacy of fit for each split-plot design model is appropriate and useful as they reveal whether the correct WP and SP effects have

  8. Parametric estimation of covariance function in Gaussian-process based Kriging models. Application to uncertainty quantification for computer experiments

    International Nuclear Information System (INIS)

    Bachoc, F.

    2013-01-01

    The parametric estimation of the covariance function of a Gaussian process is studied, in the framework of the Kriging model. Maximum Likelihood and Cross Validation estimators are considered. The correctly specified case, in which the covariance function of the Gaussian process does belong to the parametric set used for estimation, is first studied in an increasing-domain asymptotic framework. The sampling considered is a randomly perturbed multidimensional regular grid. Consistency and asymptotic normality are proved for the two estimators. It is then shown that strong perturbations of the regular grid are always beneficial to Maximum Likelihood estimation. The incorrectly specified case, in which the covariance function of the Gaussian process does not belong to the parametric set used for estimation, is then studied. It is shown that Cross Validation is more robust than Maximum Likelihood in this case. Finally, two applications of the Kriging model with Gaussian processes are carried out on industrial data. First, for a validation problem of the friction model of the thermal-hydraulic code FLICA 4, where experimental results are available, it is shown that Gaussian process modeling of the FLICA 4 code model error considerably improves its predictions. Second, for a metamodeling problem of the GERMINAL thermal-mechanical code, the benefit of the Kriging model with Gaussian processes, compared to neural network methods, is shown. (author)
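
    Maximum-likelihood estimation of Kriging covariance parameters, as studied in the thesis, is what scikit-learn's GaussianProcessRegressor performs when it fits kernel hyperparameters; the one-dimensional toy data below simply stand in for code-simulation outputs.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        # Toy "computer experiment": noisy observations of a smooth function.
        rng = np.random.default_rng(3)
        X = rng.uniform(0, 10, size=(25, 1))
        y = np.sin(X).ravel() + rng.normal(0, 0.1, 25)

        # The covariance (kernel) parameters are estimated by maximizing the
        # marginal likelihood during fit().
        kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
        gp = GaussianProcessRegressor(kernel=kernel, alpha=0.01, n_restarts_optimizer=5)
        gp.fit(X, y)

        print("estimated kernel:", gp.kernel_)
        mean, std = gp.predict(np.array([[5.0]]), return_std=True)
        print(f"prediction at x = 5: {mean[0]:.2f} +/- {std[0]:.2f}")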

  9. The linear transformation model with frailties for the analysis of item response times.

    Science.gov (United States)

    Wang, Chun; Chang, Hua-Hua; Douglas, Jeffrey A

    2013-02-01

    The item response times (RTs) collected from computerized testing represent an underutilized source of information about items and examinees. In addition to knowing the examinees' responses to each item, we can investigate the amount of time examinees spend on each item. In this paper, we propose a semi-parametric model for RTs, the linear transformation model with a latent speed covariate, which combines the flexibility of non-parametric modelling and the brevity as well as interpretability of parametric modelling. In this new model, the RTs, after some non-parametric monotone transformation, become a linear model with latent speed as covariate plus an error term. The distribution of the error term implicitly defines the relationship between the RT and examinees' latent speeds; whereas the non-parametric transformation is able to describe various shapes of RT distributions. The linear transformation model represents a rich family of models that includes the Cox proportional hazards model, the Box-Cox normal model, and many other models as special cases. This new model is embedded in a hierarchical framework so that both RTs and responses are modelled simultaneously. A two-stage estimation method is proposed. In the first stage, the Markov chain Monte Carlo method is employed to estimate the parametric part of the model. In the second stage, an estimating equation method with a recursive algorithm is adopted to estimate the non-parametric transformation. Applicability of the new model is demonstrated with a simulation study and a real data application. Finally, methods to evaluate the model fit are suggested. © 2012 The British Psychological Society.

  10. The relevance of parametric U-uptake models in ESR age calculations

    International Nuclear Information System (INIS)

    Gruen, Rainer

    2009-01-01

    In ESR dating, three basic parametric U-uptake models have been applied for dating teeth: early U-uptake (EU: closed system), linear U-uptake (LU) and recent U-uptake (RU, for which the dose rate contribution of U in the dental tissues is assumed to be zero). In many ESR dating publications it is still assumed that samples comply with one or the other parametric U-uptake model, or that their correct age lies somewhere between EU and LU. Observations of the spatial distribution of uranium in dental tissues show that it is difficult to predict any relationship between the relative uptake in the different dental tissues. Combined U-series/ESR age estimates can give insights into the actual U-uptake. An evaluation of published data shows that for cave sites, a significant number of results fall outside the EU-LU bracket, while for open-air sites, the majority of data are outside this bracket, particularly showing greatly delayed U-uptake. This may be due to changes in the hydrological system, leading to erosion which exposes the open-air site. U-leaching has also been observed in samples from open-air sites, in which case any reasonable age calculation is impossible.

  11. Modeling critical episodes of air pollution by PM10 in Santiago, Chile: Comparison of the predictive efficiency of parametric and non-parametric statistical models

    Directory of Open Access Journals (Sweden)

    Sergio A. Alvarado

    2010-12-01

    Full Text Available Objective: To evaluate the predictive efficiency of two statistical models (one parametric and the other non-parametric) for predicting next-day critical episodes of air pollution by particulate matter (PM10) that exceed the daily air quality standard in Santiago, Chile, using the next-day maximum 24-h PM10 value. Accurate prediction of such episodes would allow the health authorities to apply restrictive measures that reduce their severity and protect the community's health. Methods: We used the PM10 concentrations registered by a station of the MACAM-2 Air Quality Monitoring Network (152 daily observations of 14 variables) and meteorological information gathered from 2001 to 2004. To construct the predictive models, we fitted a parametric Gamma model using the STATA v11 statistical package and a non-parametric MARS model using a demo version of MARS v2.0 distributed by Salford Systems. Results: Both modeling methods show a high correlation between observed and predicted values. The Gamma models predict better than MARS for PM10 concentrations with values

  12. The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models

    OpenAIRE

    GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.

    2008-01-01

    In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.

  13. SPSS macros to compare any two fitted values from a regression model.

    Science.gov (United States)

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests-particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
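
    The matrix-algebra method the macros implement can be reproduced directly: for covariate vectors x1 and x2, the difference between the two fitted values is (x1 - x2)'b and its variance is (x1 - x2)'Cov(b)(x1 - x2). The OLS example below uses statsmodels and invented data rather than the macros themselves.

        import numpy as np
        import statsmodels.api as sm
        from scipy import stats

        # Invented data: y regressed on x and x^2.
        rng = np.random.default_rng(4)
        x = rng.uniform(0, 10, 200)
        y = 1.0 + 0.5 * x - 0.03 * x**2 + rng.normal(0, 1, 200)
        X = sm.add_constant(np.column_stack([x, x**2]))
        fit = sm.OLS(y, X).fit()

        # Compare the fitted value at x = 2 with the fitted value at x = 7.
        x1 = np.array([1.0, 2.0, 4.0])     # [const, x, x^2] at x = 2
        x2 = np.array([1.0, 7.0, 49.0])    # [const, x, x^2] at x = 7
        contrast = x1 - x2

        diff = contrast @ fit.params
        se = np.sqrt(contrast @ fit.cov_params() @ contrast)
        t = diff / se
        p = 2 * stats.t.sf(abs(t), fit.df_resid)
        print(f"difference = {diff:.3f}, SE = {se:.3f}, t = {t:.2f}, p = {p:.3g}")
        print(f"95% CI: ({diff - 1.96 * se:.3f}, {diff + 1.96 * se:.3f})")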

  14. Non-parametric identification of multivariable systems : a local rational modeling approach with application to a vibration isolation benchmark

    NARCIS (Netherlands)

    Voorhoeve, R.J.; van der Maas, A.; Oomen, T.A.J.

    2018-01-01

    Frequency response function (FRF) identification is often used as a basis for control systems design and as a starting point for subsequent parametric system identification. The aim of this paper is to develop a multiple-input multiple-output (MIMO) local parametric modeling approach for FRF

  15. Initialization of the Euler model MODIS with field data from the 'EPRI plume model validation project'

    International Nuclear Information System (INIS)

    Petersen, G.; Eppel, D.; Lautenschlager, M.; Mueller, A.

    1985-01-01

    The program deck MODIS ("MOment DIStribution") is designed to be used as an operational tool for modelling the dispersion of a point source under general atmospheric conditions. The concentration distribution is determined by calculating its cross-wind moments on a vertical grid oriented in the main wind direction. The model contains a parametrization of the horizontal and vertical coefficients based on a second-order closure model. The Eulerian time scales, preliminarily determined by fitting measured plume cross-sections, are confirmed by comparison with data from the EPRI plume model validation project. (orig.)

  16. Parametric and semiparametric models with applications to reliability, survival analysis, and quality of life

    CERN Document Server

    Nikulin, M; Mesbah, M; Limnios, N

    2004-01-01

    Parametric and semiparametric models are tools with a wide range of applications to reliability, survival analysis, and quality of life. This self-contained volume examines these tools in survey articles written by experts currently working on the development and evaluation of models and methods. While a number of chapters deal with general theory, several explore more specific connections and recent results in "real-world" reliability theory, survival analysis, and related fields.

  17. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    Science.gov (United States)

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected

  18. Modelling present-day basal melt rates for Antarctic ice shelves using a parametrization of buoyant meltwater plumes

    Science.gov (United States)

    Lazeroms, Werner M. J.; Jenkins, Adrian; Hilmar Gudmundsson, G.; van de Wal, Roderik S. W.

    2018-01-01

    Basal melting below ice shelves is a major factor in mass loss from the Antarctic Ice Sheet, which can contribute significantly to possible future sea-level rise. Therefore, it is important to have an adequate description of the basal melt rates for use in ice-dynamical models. Most current ice models use rather simple parametrizations based on the local balance of heat between ice and ocean. In this work, however, we use a recently derived parametrization of the melt rates based on a buoyant meltwater plume travelling upward beneath an ice shelf. This plume parametrization combines a non-linear ocean temperature sensitivity with an inherent geometry dependence, which is mainly described by the grounding-line depth and the local slope of the ice-shelf base. For the first time, this type of parametrization is evaluated on a two-dimensional grid covering the entire Antarctic continent. In order to apply the essentially one-dimensional parametrization to realistic ice-shelf geometries, we present an algorithm that determines effective values for the grounding-line depth and basal slope in any point beneath an ice shelf. Furthermore, since detailed knowledge of temperatures and circulation patterns in the ice-shelf cavities is sparse or absent, we construct an effective ocean temperature field from observational data with the purpose of matching (area-averaged) melt rates from the model with observed present-day melt rates. Our results qualitatively replicate large-scale observed features in basal melt rates around Antarctica, not only in terms of average values, but also in terms of the spatial pattern, with high melt rates typically occurring near the grounding line. The plume parametrization and the effective temperature field presented here are therefore promising tools for future simulations of the Antarctic Ice Sheet requiring a more realistic oceanic forcing.

  19. Parametric Linear Hybrid Automata for Complex Environmental Systems Modeling

    Directory of Open Access Journals (Sweden)

    Samar Hayat Khan Tareen

    2015-07-01

    Full Text Available Environmental systems, whether they be weather patterns or predator-prey relationships, are dependent on a number of different variables, each directly or indirectly affecting the system at large. Since not all of these factors are known, these systems take on non-linear dynamics, making it difficult to accurately predict meaningful behavioral trends far into the future. However, such dynamics do not warrant complete ignorance of different efforts to understand and model close approximations of these systems. Towards this end, we have applied a logical modeling approach to model and analyze the behavioral trends and systematic trajectories that these systems exhibit without delving into their quantification. This approach, formalized by René Thomas for discrete logical modeling of Biological Regulatory Networks (BRNs) and further extended in our previous studies as parametric biological linear hybrid automata (Bio-LHA), has been previously employed for the analysis of different molecular regulatory interactions occurring across various cells and microbial species. As relationships between different interacting components of a system can be simplified as positive or negative influences, we can employ the Bio-LHA framework to represent different components of an environmental system as positive or negative feedbacks. In the present study, we highlight the benefits of hybrid (discrete/continuous) modeling, which leads to refinements among the forecasted behaviors in order to find out which ones are actually possible. We have taken two case studies, an interaction of three microbial species in a freshwater pond and a more complex atmospheric system, to show the applications of the Bio-LHA methodology for the timed hybrid modeling of environmental systems. Results show that the approach using the Bio-LHA is a viable method for behavioral modeling of complex environmental systems by finding timing constraints while keeping the complexity of the model

  20. Fitting Latent Cluster Models for Networks with latentnet

    Directory of Open Access Journals (Sweden)

    Pavel N. Krivitsky

    2007-12-01

    Full Text Available latentnet is a package to fit and evaluate statistical latent position and cluster models for networks. Hoff, Raftery, and Handcock (2002) suggested an approach to modeling networks based on positing the existence of a latent space of characteristics of the actors. Relationships form as a function of distances between these characteristics as well as functions of observed dyadic level covariates. In latentnet social distances are represented in a Euclidean space. It also includes a variant of the extension of the latent position model to allow for clustering of the positions developed in Handcock, Raftery, and Tantrum (2007). The package implements Bayesian inference for the models based on a Markov chain Monte Carlo algorithm. It can also compute maximum likelihood estimates for the latent position model and a two-stage maximum likelihood method for the latent position cluster model. For latent position cluster models, the package provides a Bayesian way of assessing how many groups there are, and thus whether or not there is any clustering (since if the preferred number of groups is 1, there is little evidence for clustering). It also estimates which cluster each actor belongs to. These estimates are probabilistic, and provide the probability of each actor belonging to each cluster. It computes four types of point estimates for the coefficients and positions: maximum likelihood estimate, posterior mean, posterior mode and the estimator which minimizes Kullback-Leibler divergence from the posterior. You can assess the goodness-of-fit of the model via posterior predictive checks. It has a function to simulate networks from a latent position or latent position cluster model.
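
    In the notation of the papers cited in this record, the latent distance model that latentnet fits can be sketched as

        \[
        \operatorname{logit} P(y_{ij} = 1 \mid Z, X, \beta) \;=\; \beta^{\top} x_{ij} \;-\; \lVert z_i - z_j \rVert ,
        \qquad z_i \in \mathbb{R}^d ,
        \]

    and the latent position cluster variant additionally assumes that the positions are drawn from a finite mixture of spherical Gaussians, \( z_i \sim \sum_{g=1}^{G} \lambda_g \, \mathrm{MVN}_d(\mu_g, \sigma_g^2 I_d) \), so that the posterior over G and over cluster memberships supports the clustering assessments described above.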

  1. Collins fragmentation function for pions and kaons in a spectator model

    Energy Technology Data Exchange (ETDEWEB)

    Bacchetta, A. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Gamberg, L.P. [Penn State Univ., Berks, PA (United States). Dept. of Physics; Goldstein, G.R. [Tufts Univ., Medford, MA (United States). Dept. of Physics and Astronomy; Mukherjee, A. [Indian Institute of Technology Bombay, Mumbai (India). Physics Dept.

    2007-07-15

    We calculate the Collins fragmentation function in the framework of a spectator model with pseudoscalar pion-quark coupling and a Gaussian form factor at the vertex. We determine the model parameters by fitting the unpolarized fragmentation function for pions and kaons. We show that the Collins function for the pions in this model is in reasonable agreement with recent parametrizations obtained by fits of the available data. In addition, we compute for the first time the Collins function for the kaons. (orig.)

  2. A model of parametric X-ray radiation for application to diagnostic radiology

    International Nuclear Information System (INIS)

    Di Domenico, G.; Cardarelli, P.; Gambaccini, M.; Marziani, M.; Taibi, A.; Comandini, A.

    2011-01-01

    Parametric X-ray Radiation (PXR) is well known as an intense, tunable and quasi-monochromatic X-ray source. From the very first work of Ter-Mikaelian, who proposed the interaction phenomenon for Parametric X-rays, many theoretical and experimental studies have investigated the characteristics of such a novel X-ray source. Within the framework of classical electrodynamics, we have thoroughly studied the physical implications of electrons moving through a medium at relativistic speed and then developed an analytical model of X-ray diffraction based on the PXR phenomenon. The model has been used to obtain information on the characteristics of the PXR diffracted beam in terms of X-ray intensity, energy spectrum and angular distribution. Several crystals have been studied in both Bragg and Laue geometry and their relative yields have been compared. Preliminary results on the diagnostic potential of PXR have shown that, at a distance from the crystal that yields an X-ray field size useful for imaging applications, the photon yield of PXR is higher than that produced by a conventional X-ray tube, provided that a similar electron current is available.

  3. Economic policy optimization based on both one stochastic model and the parametric control theory

    Science.gov (United States)

    Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit

    2016-06-01

    A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. The parameters of the nonlinear model are estimated from its log-linearization by a Bayesian approach. The nonlinear model is verified by retro-prognosis, by estimating stability indicators of the mappings specified by the model, and by estimating the degree of coincidence between the effects of internal and external shocks on macroeconomic indicators obtained from the estimated nonlinear model and from its log-linearization. On the basis of the nonlinear model, parametric control problems for economic growth and for the volatility of Kazakhstan's macroeconomic indicators are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).

  4. LEP asymmetries and fits of the standard model

    International Nuclear Information System (INIS)

    Pietrzyk, B.

    1994-01-01

    The lepton and quark asymmetries measured at LEP are presented. The results of the Standard Model fits to the electroweak data presented at this conference are given. The top mass obtained from the fit to the LEP data is 172 +13/−14 +18/−20 GeV; it is 177 +11/−11 +18/−19 GeV when the collider, ν and A_LR data are also included. (author). 10 refs., 3 figs., 2 tabs

  5. The Borromini's helicoidal staircase in Barberini Palace: scan laser survey and parametric modeling.

    Directory of Open Access Journals (Sweden)

    Leonardo Paris

    2015-07-01

    model, first of all measurable, but also able to disclose shapes and geometries that are otherwise hardly perceptible, by selecting parts or showing details. A first level of reading of the models, obtained through consolidated point-cloud processing procedures, highlighted the formal matrix of the oval, with the identification of the centres of the polycentric curve, the resulting three-dimensional development of the various helices belonging to adjacent cylindrical surface portions of different radius, and their relationship with the slope. Within this first-level formal matrix sits the formal matrix of the architectural order: the rhythm of the six pairs of alternating columns and of the balustrades, and the entablature that develops as a ribbon around the central space illuminated by a skylight. The modular structure of the staircase also suggested an experiment with parametric modeling techniques to trace the ideal model of Borromini's helical staircase. The digital parametric model tested here is a new mode of representation compared with the already consolidated models, whether analog or digital. Through a critical selection of remarkable points belonging to the geometries present, and through a statistical, normalized observation of the recurring measures, we identified a generative algorithm able to represent the design intentions of the author. The entire path followed to define the parametric generative model – from the analysis of the real model to the design of the virtual parametric model, and from the development of the generative algorithm to its verification by comparing the generated model with the survey model – proved to be a rigorous methodological constraint for a more complete and appropriate knowledge of the work.

  6. Model Fit and Item Factor Analysis: Overfactoring, Underfactoring, and a Program to Guide Interpretation.

    Science.gov (United States)

    Clark, D Angus; Bowles, Ryan P

    2018-04-23

    In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation methods to investigate the ability of popular model fit statistics (chi-square, root mean square error of approximation, the comparative fit index, and the Tucker-Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics, and was conditional on many features of the underlying model. Together, results suggest that conventional fit thresholds offer questionable utility in the context of IFA.
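
    As a concrete illustration of one of the indices in question, a minimal sketch of the commonly used sample RMSEA computed from a fitted model's chi-square statistic is given below; the numerical values are hypothetical, and the 0.06 value in the comment is only the conventional cutoff whose utility the study questions.

        import math

        def rmsea(chi2, df, n):
            """One common point estimate of the root mean square error of approximation."""
            return math.sqrt(max(0.0, chi2 - df) / (df * (n - 1)))

        # Hypothetical fit: chi-square = 183.4 on 90 df with n = 500 respondents.
        print(round(rmsea(chi2=183.4, df=90, n=500), 3))   # ~0.046, below the usual 0.06 cutoff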

  7. Fitting measurement models to vocational interest data: are dominance models ideal?

    Science.gov (United States)

    Tay, Louis; Drasgow, Fritz; Rounds, James; Williams, Bruce A

    2009-09-01

    In this study, the authors examined the item response process underlying 3 vocational interest inventories: the Occupational Preference Inventory (C.-P. Deng, P. I. Armstrong, & J. Rounds, 2007), the Interest Profiler (J. Rounds, T. Smith, L. Hubert, P. Lewis, & D. Rivkin, 1999; J. Rounds, C. M. Walker, et al., 1999), and the Interest Finder (J. E. Wall & H. E. Baker, 1997; J. E. Wall, L. L. Wise, & H. E. Baker, 1996). Item response theory (IRT) dominance models, such as the 2-parameter and 3-parameter logistic models, assume that item response functions (IRFs) are monotonically increasing as the latent trait increases. In contrast, IRT ideal point models, such as the generalized graded unfolding model, have IRFs that peak where the latent trait matches the item. Ideal point models are expected to fit better because vocational interest inventories ask about typical behavior, as opposed to requiring maximal performance. Results show that across all 3 interest inventories, the ideal point model provided better descriptions of the response process. The importance of specifying the correct item response model for precise measurement is discussed. In particular, scores computed by a dominance model were shown to be sometimes illogical: individuals endorsing mostly realistic or mostly social items were given similar scores, whereas scores based on an ideal point model were sensitive to which type of items respondents endorsed.
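
    To make the contrast concrete, the sketch below compares a dominance item response function (the 2-parameter logistic) with a toy single-peaked ideal-point function; the Gaussian-shaped kernel is only an illustration of the unfolding idea, not the generalized graded unfolding model used in the study.

        import numpy as np

        def irf_dominance(theta, a=1.5, b=0.0):
            """2PL: endorsement probability rises monotonically with the latent trait."""
            return 1.0 / (1.0 + np.exp(-a * (theta - b)))

        def irf_ideal_point(theta, b=0.0, width=1.0):
            """Toy ideal-point IRF: endorsement is most likely when the trait matches the item."""
            return np.exp(-((theta - b) ** 2) / (2.0 * width ** 2))

        theta = np.linspace(-3, 3, 7)
        print(np.round(irf_dominance(theta), 2))    # keeps increasing with theta
        print(np.round(irf_ideal_point(theta), 2))  # peaks at theta = b and falls off on both sides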

  8. Revisiting the Global Electroweak Fit of the Standard Model and Beyond with Gfitter

    CERN Document Server

    Flächer, Henning; Haller, J; Höcker, A; Mönig, K; Stelzer, J

    2009-01-01

    The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter projec...

  9. Power-law neuronal fluctuations in a recurrent network model of parametric working memory.

    Science.gov (United States)

    Miller, Paul; Wang, Xiao-Jing

    2006-02-01

    In a working memory system, persistent activity maintains information in the absence of external stimulation, therefore the time scale and structure of correlated neural fluctuations reflect the intrinsic microcircuit dynamics rather than direct responses to sensory inputs. Here we show that a parametric working memory model capable of graded persistent activity is characterized by arbitrarily long correlation times, with Fano factors and power spectra of neural activity described by the power laws of a random walk. Collective drifts of the mnemonic firing pattern induce long-term noise correlations between pairs of cells, with the sign (positive or negative) and amplitude proportional to the product of the gradients of their tuning curves. None of the power-law behavior was observed in a variant of the model endowed with discrete bistable neural groups, where noise fluctuations were unable to cause long-term changes in rate. Therefore such behavior can serve as a probe for a quasi-continuous attractor. We propose that the unusual correlated fluctuations have important implications for neural coding in parametric working memory circuits.

  10. Economic modeling and parametric studies for SOMBRERO - a laser-driven IFE power plant

    International Nuclear Information System (INIS)

    Meier, W.R.; Rosenberg, C.W. Jr. von

    1992-01-01

    Economic modeling and parametric studies for the SOMBRERO laser-driven inertial fusion energy (IFE) electric power plant have been conducted to determine the most attractive operating point. Cost scaling relationships have been developed and integrated into a cost-performance model of the plant. The figure-of-merit for determining the most attractive design point is the constant-dollar cost of electricity. Results are presented as a function of the driver energy. The sensitivity of the results to variations in the assumed net electric output and target performance is also examined
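
    The cost scalings themselves are not reproduced in this record, so the sketch below only illustrates the generic structure of a constant-dollar cost-of-electricity figure-of-merit for a power-plant cost-performance model; all input numbers are hypothetical and the formula is a textbook form, not the SOMBRERO cost model.

        def cost_of_electricity(capital_cost, fixed_charge_rate, annual_om, annual_fuel,
                                net_electric_mw, availability):
            """Constant-dollar cost of electricity in $/kWh (generic textbook form)."""
            annual_cost = capital_cost * fixed_charge_rate + annual_om + annual_fuel
            annual_kwh = net_electric_mw * 1000.0 * 8760.0 * availability
            return annual_cost / annual_kwh

        # Hypothetical inputs: $3.5B capital, 10% fixed charge rate, $80M/yr O&M and
        # target costs, no fuel charge, 1000 MWe net output, 75% availability.
        print(round(cost_of_electricity(3.5e9, 0.10, 8.0e7, 0.0, 1000.0, 0.75), 3))  # ~0.065 $/kWh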

  11. Economic modeling and parametric studies for OSIRIS - a HIB-driven IFE power plant

    International Nuclear Information System (INIS)

    Meier, W.R.; Bieri, R.L.

    1992-01-01

    Economic modeling and parametric studies for the Osiris HIB-driven inertial fusion energy (IFE) electric power plant have been conducted to determine the most attractive operating point. Cost scaling relationships have been developed and integrated into a cost-performance model of the plant. The figure-of-merit for determining the most attractive design point is the constant-dollar cost of electricity. Results are presented as a function of the driver energy. The sensitivity of the results to variations in the assumed net electric output and target performance is also examined

  12. Modeling the Earth's magnetospheric magnetic field confined within a realistic magnetopause

    Science.gov (United States)

    Tsyganenko, N. A.

    1995-01-01

    Empirical data-based models of the magnetospheric magnetic field have been widely used during recent years. However, the existing models (Tsyganenko, 1987, 1989a) have three serious deficiencies: (1) an unstable de facto magnetopause, (2) a crude parametrization by the K_p index, and (3) inaccuracies in the equatorial magnetotail B_z values. This paper describes a new approach to the problem; the essential new features are (1) a realistic shape and size of the magnetopause, based on fits to a large number of observed crossings (allowing a parametrization by the solar wind pressure), (2) fully controlled shielding of the magnetic field produced by all magnetospheric current systems, (3) new flexible representations for the tail and ring currents, and (4) a new directional criterion for fitting the model field to spacecraft data, providing improved accuracy for field line mapping. Results are presented from initial efforts to create models assembled from these modules and calibrated against spacecraft data sets.

  13. Fit Gap Analysis – The Role of Business Process Reference Models

    Directory of Open Access Journals (Sweden)

    Dejan Pajk

    2013-12-01

    Full Text Available Enterprise resource planning (ERP) systems support solutions for standard business processes such as finance, sales, procurement and warehousing. In order to improve the understandability and efficiency of their implementation, ERP vendors have introduced reference models that describe the processes and underlying structure of an ERP system. To select and successfully implement an ERP system, the capabilities of that system have to be compared with a company’s business needs. Based on this comparison, all of the fits and gaps must be identified and further analysed. This step usually forms part of ERP implementation methodologies and is called fit gap analysis. The paper gives a theoretical overview of methods for applying reference models and describes fit gap analysis processes in detail. The paper’s first contribution is its presentation of a fit gap analysis using standard business process modelling notation. The second contribution is the demonstration of a process-based comparison approach between a supply chain process and an ERP system process reference model. In addition to its theoretical contributions, the results can also be practically applied to projects involving the selection and implementation of ERP systems.

  14. Supersymmetry with prejudice: Fitting the wrong model to LHC data

    Science.gov (United States)

    Allanach, B. C.; Dolan, Matthew J.

    2012-09-01

    We critically examine interpretations of hypothetical supersymmetric LHC signals, fitting to alternative wrong models of supersymmetry breaking. The signals we consider are some of the most constraining on the sparticle spectrum: invariant mass distributions with edges and endpoints from the golden decay chain q̃ → qχ̃₂⁰ (→ l̃^±l^∓q) → χ̃₁⁰l⁺l⁻q. We assume a constrained minimal supersymmetric standard model (CMSSM) point to be the ‘correct’ one, but fit the signals instead with minimal gauge mediated supersymmetry breaking models (mGMSB) with a neutralino quasistable lightest supersymmetric particle, minimal anomaly mediation and large volume string compactification models. Minimal anomaly mediation and the large volume scenario can be unambiguously discriminated against the CMSSM for the assumed signal and 1 fb⁻¹ of LHC data at √s = 14 TeV. However, mGMSB would not be discriminated on the basis of the kinematic endpoints alone. The best-fit point spectra of mGMSB and CMSSM look remarkably similar, making experimental discrimination at the LHC based on the edges or Higgs properties difficult. However, using rate information for the golden chain should provide the additional separation required.

  15. Fast and exact Newton and Bidirectional fitting of Active Appearance Models.

    Science.gov (United States)

    Kossaifi, Jean; Tzimiropoulos, Yorgos; Pantic, Maja

    2016-12-21

    Active Appearance Models (AAMs) are generative models of shape and appearance that have proven very attractive for their ability to handle wide changes in illumination, pose and occlusion when trained in the wild, while not requiring large training datasets like regression-based or deep learning methods. The problem of fitting an AAM is usually formulated as a non-linear least squares one and the main way of solving it is a standard Gauss-Newton algorithm. In this paper we extend Active Appearance Models in two ways: we first extend the Gauss-Newton framework by formulating a bidirectional fitting method that deforms both the image and the template to fit a new instance. We then formulate a second order method by deriving an efficient Newton method for AAM fitting. We derive both methods in a unified framework for two types of Active Appearance Models, holistic and part-based, and additionally show how to exploit the structure in the problem to derive fast yet exact solutions. We perform a thorough evaluation of all algorithms on three challenging and recently annotated in-the-wild datasets, and investigate fitting accuracy, convergence properties and the influence of noise in the initialisation. We compare our proposed methods to other algorithms and show that they yield state-of-the-art results, out-performing other methods while having superior convergence properties.
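
    To make the optimization framework concrete, the following is a generic damped Gauss-Newton iteration for a nonlinear least-squares problem min_p ||r(p)||^2; the AAM-specific residual (warped image minus the appearance model) and the bidirectional and Newton variants of the paper are not implemented here, so this is only a sketch of the underlying solver.

        import numpy as np

        def gauss_newton(residual, jacobian, p0, n_iter=20, damping=1e-6):
            """Minimize ||residual(p)||^2 with damped Gauss-Newton updates."""
            p = np.asarray(p0, dtype=float)
            for _ in range(n_iter):
                r = residual(p)                              # residual vector at current parameters
                J = jacobian(p)                              # Jacobian of the residual
                H = J.T @ J + damping * np.eye(p.size)       # Gauss-Newton Hessian approximation
                p = p - np.linalg.solve(H, J.T @ r)          # parameter update
            return p

        # Toy usage: recover k in y = exp(-k*t) from noiseless samples.
        t = np.linspace(0.0, 4.0, 20)
        y = np.exp(-0.7 * t)
        res = lambda p: np.exp(-p[0] * t) - y
        jac = lambda p: (-t * np.exp(-p[0] * t)).reshape(-1, 1)
        print(gauss_newton(res, jac, [0.1]))   # converges to ~0.7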

  16. When the model fits the frame: the impact of regulatory fit on efficacy appraisal and persuasion in health communication.

    Science.gov (United States)

    Bosone, Lucia; Martinez, Frédéric; Kalampalikis, Nikos

    2015-04-01

    In health-promotional campaigns, positive and negative role models can be deployed to illustrate the benefits or costs of certain behaviors. The main purpose of this article is to investigate why, how, and when exposure to role models strengthens the persuasiveness of a message, according to regulatory fit theory. We argue that exposure to a positive versus a negative model activates individuals' goals toward promotion rather than prevention. By means of two experiments, we demonstrate that high levels of persuasion occur when a message advertising healthy dietary habits offers a regulatory fit between its framing and the described role model. Our data also establish that the effects of such internal regulatory fit by vicarious experience depend on individuals' perceptions of response-efficacy and self-efficacy. Our findings constitute a significant theoretical complement to previous research on regulatory fit and contain valuable practical implications for health-promotional campaigns. © 2015 by the Society for Personality and Social Psychology, Inc.

  17. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan; Krebs-Smith, Susan M.; Midthune, Douglas; Perez, Adriana; Buckman, Dennis W.; Kipnis, Victor; Freedman, Laurence S.; Dodd, Kevin W.; Carroll, Raymond J

    2011-01-01

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis, et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) computation for fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole grains.

  19. Soil physical properties influencing the fitting parameters in Philip and Kostiakov infiltration models

    International Nuclear Information System (INIS)

    Mbagwu, J.S.C.

    1994-05-01

    Among the many models developed for monitoring the infiltration process, those of Philip and Kostiakov have been studied in detail because of their simplicity and the ease of estimating their fitting parameters. The important soil physical factors influencing the fitting parameters in these infiltration models are reported in this study. The results of the study show that the single most important soil property affecting the fitting parameters in these models is the effective porosity. 36 refs, 2 figs, 5 tabs
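
    For reference, the two models take their standard forms for cumulative infiltration I as a function of time t:

        \[
        I(t) \;=\; S\,t^{1/2} + A\,t \quad \text{(Philip)} ,
        \qquad
        I(t) \;=\; a\,t^{b} \quad \text{(Kostiakov)} ,
        \]

    where S (sorptivity) and A are the Philip fitting parameters and a and b are the empirical Kostiakov parameters; these are the parameters whose dependence on soil physical properties, chiefly effective porosity, is examined in the study.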

  20. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
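
    The correspondence can be stated compactly. With a baseline hazard that is constant and equal to \(\lambda_k\) on each piece \((c_{k-1}, c_k]\), the log-normal frailty model has hazard

        \[
        \lambda_{ij}(t) \;=\; \lambda_k \exp\!\left(x_{ij}^{\top}\beta + b_i\right) ,
        \qquad b_i \sim N(0, \sigma^2) , \quad t \in (c_{k-1}, c_k] ,
        \]

    and its likelihood coincides with that of a Poisson generalized linear mixed model in which the event indicator \(d_{ijk}\) of subject \(j\) in cluster \(i\) and piece \(k\) has mean \(\mu_{ijk}\) with

        \[
        \log \mu_{ijk} \;=\; \log t_{ijk} + \log \lambda_k + x_{ij}^{\top}\beta + b_i ,
        \]

    where \(t_{ijk}\) is the time at risk in that piece and \(\log t_{ijk}\) enters as the offset mentioned above.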

  1. The influence of model parameters on catchment-response

    International Nuclear Information System (INIS)

    Shah, S.M.S.; Gabriel, H.F.; Khan, A.A.

    2002-01-01

    This paper studies the influence of conceptual rainfall-runoff model parameters on catchment response (runoff). A conceptual modified watershed yield model is employed to study the effects of the model parameters on catchment response, i.e. runoff. The model is calibrated using a manual parameter-fitting approach, also known as trial-and-error parameter fitting. In all, there are twenty-one (21) parameters that control the functioning of the model. A lumped parametric approach is used. The detailed analysis was performed on the Ling River near Kahuta, which has a catchment area of 56 sq. miles. The model includes physical parameters (e.g. GWSM, PETS, PGWRO), fitting coefficients (e.g. CINF, CGWS), and initial estimates of the surface-water and groundwater storages, i.e. srosp and gwsp. Sensitivity analysis offers a good way, without repetitious computations, to establish the proper weight and consideration to be given to each influencing factor. Sensitivity analysis was performed to evaluate the influence of the model parameters on runoff. The sensitivity and relative contributions of the model parameters influencing catchment response are studied. (author)

  2. Novel 3D modeling methods for virtual fabrication and EDA compatible design of MEMS via parametric libraries

    International Nuclear Information System (INIS)

    Schröpfer, Gerold; Lorenz, Gunar; Rouvillois, Stéphane; Breit, Stephen

    2010-01-01

    This paper provides a brief summary of the state-of-the-art of MEMS-specific modeling techniques and describes the validation of new models for a parametric component library. Two recently developed 3D modeling tools are described in more detail. The first one captures a methodology for designing MEMS devices and simulating them together with integrated electronics within a standard electronic design automation (EDA) environment. The MEMS designer can construct the MEMS model directly in a 3D view. The resulting 3D model differs from a typical feature-based 3D CAD modeling tool in that there is an underlying behavioral model and parametric layout associated with each MEMS component. The model of the complete MEMS device that is shared with the standard EDA environment can be fully parameterized with respect to manufacturing- and design-dependent variables. Another recent innovation is a process modeling tool that allows accurate and highly realistic visualization of the step-by-step creation of 3D micro-fabricated devices. The novelty of the tool lies in its use of voxels (3D pixels) rather than conventional 3D CAD techniques to represent the 3D geometry. Case studies for experimental devices are presented showing how the examination of these virtual prototypes can reveal design errors before mask tape out, support process development before actual fabrication and also enable failure analysis after manufacturing.

  3. Parametric Packet-Layer Model for Evaluation Audio Quality in Multimedia Streaming Services

    Science.gov (United States)

    Egi, Noritsugu; Hayashi, Takanori; Takahashi, Akira

    We propose a parametric packet-layer model for monitoring audio quality in multimedia streaming services such as Internet protocol television (IPTV). This model estimates audio quality of experience (QoE) on the basis of quality degradation due to coding and packet loss of an audio sequence. The input parameters of this model are audio bit rate, sampling rate, frame length, packet-loss frequency, and average burst length. Audio bit rate, packet-loss frequency, and average burst length are calculated from header information in received IP packets. For sampling rate, frame length, and audio codec type, the values or the names used in monitored services are input into this model directly. We performed a subjective listening test to examine the relationships between these input parameters and perceived audio quality. The codec used in this test was the Advanced Audio Codec-Low Complexity (AAC-LC), which is one of the international standards for audio coding. On the basis of the test results, we developed an audio quality evaluation model. The verification results indicate that audio quality estimated by the proposed model has a high correlation with perceived audio quality.

  4. Model Fitting for Predicted Precipitation in Darwin: Some Issues with Model Choice

    Science.gov (United States)

    Farmer, Jim

    2010-01-01

    In Volume 23(2) of the "Australian Senior Mathematics Journal," Boncek and Harden present an exercise in fitting a Markov chain model to rainfall data for Darwin Airport (Boncek & Harden, 2009). Days are subdivided into those with precipitation and precipitation-free days. The author abbreviates these labels to wet days and dry days.…
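
    The exercise amounts to estimating a two-state transition matrix from the observed day-to-day transitions, which the sketch below does for a made-up wet/dry sequence (the real Darwin Airport data are not reproduced here).

        from collections import Counter

        days = "DDWWDWDDDWWWDDWDDD"           # hypothetical record: D = dry day, W = wet day
        pairs = Counter(zip(days, days[1:]))   # counts of each observed day-to-day transition

        for today in "DW":
            total = sum(pairs[(today, nxt)] for nxt in "DW")
            probs = {nxt: pairs[(today, nxt)] / total for nxt in "DW"}
            print(today, probs)   # e.g. P(wet tomorrow | dry today) and P(dry tomorrow | wet today)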

  5. Semi-parametric modelling of investments in heating installations: The case of the Dutch glasshouse industry

    NARCIS (Netherlands)

    Oude Lansink, A.G.J.M.; Pietola, K.

    2005-01-01

    This paper applies a semi-parametric approach to estimating a generalised model of investments in heating installations. The results suggest that marginal costs of investments in heating installations increase quickly at small investment levels, whereas the increase slows down at higher investment levels.

  6. New parametrization for the scale dependent growth function in general relativity

    International Nuclear Information System (INIS)

    Dent, James B.; Dutta, Sourish; Perivolaropoulos, Leandros

    2009-01-01

    We study the scale-dependent evolution of the growth function δ(a,k) of cosmological perturbations in dark energy models based on general relativity. This scale dependence is more prominent on cosmological scales of 100 h⁻¹ Mpc or larger. We derive a new scale-dependent parametrization which generalizes the well-known Newtonian approximation result f₀(a) ≡ d ln δ₀/d ln a = Ω(a)^γ (γ = 6/11 for ΛCDM), which is a good approximation on scales less than 50 h⁻¹ Mpc. Our generalized parametrization is of the form f(a) = f₀(a)/[1 + ξ(a,k)], where ξ(a,k) = 3H₀²Ω₀ₘ/(ak²). We demonstrate that this parametrization fits the exact result of a full general relativistic evaluation of the growth function up to horizon scales for both ΛCDM and dynamical dark energy. In contrast, the scale-independent parametrization does not provide a good fit on scales beyond 5% of the horizon scale (k ≃ 0.01 h Mpc⁻¹).

  7. Universal Rate Model Selector: A Method to Quickly Find the Best-Fit Kinetic Rate Model for an Experimental Rate Profile

    Science.gov (United States)

    2017-08-01

    Kinetic rate models range from pure chemical reactions to mass transfer processes, with the first-order, homogeneous catalytic, Avrami (7), and intraparticle diffusion (6) rate equations among the examples. A single fitting algorithm (kinetic rate model) for a reaction does not

  8. Multidimensional Rank Reduction Estimator for Parametric MIMO Channel Models

    Directory of Open Access Journals (Sweden)

    Marius Pesavento

    2004-08-01

    Full Text Available A novel algebraic method for the simultaneous estimation of MIMO channel parameters from channel sounder measurements is developed. We consider a parametric multipath propagation model with P discrete paths where each path is characterized by its complex path gain, its directions of arrival and departure, time delay, and Doppler shift. This problem is treated as a special case of the multidimensional harmonic retrieval problem. While the well-known ESPRIT-type algorithms exploit shift-invariance between specific partitions of the signal matrix, the rank reduction estimator (RARE) algorithm exploits their internal Vandermonde structure. A multidimensional extension of the RARE algorithm is developed, analyzed, and applied to measurement data recorded with the RUSK vector channel sounder in the 2 GHz band.

  9. Nonlinear models for fitting growth curves of Nellore cows reared in the Amazon Biome

    Directory of Open Access Journals (Sweden)

    Kedma Nayra da Silva Marinho

    2013-09-01

    Full Text Available Growth curves of Nellore cows were estimated by comparing six nonlinear models: Brody, Logistic, two alternative Gompertz forms, Richards and Von Bertalanffy. The models were fitted to weight-age data, from birth to 750 days of age, of 29,221 cows born between 1976 and 2006 in the Brazilian states of Acre, Amapá, Amazonas, Pará, Rondônia, Roraima and Tocantins. The models were fitted by the Gauss-Newton method. The goodness of fit of the models was evaluated using the mean square error, adjusted coefficient of determination, prediction error and mean absolute error. Biological interpretation of the parameters was accomplished by plotting estimated weights versus the observed weight means, the instantaneous growth rate, the absolute maturity rate, the relative instantaneous growth rate, the inflection point, and the magnitude of the parameters A (asymptotic weight) and K (maturing rate). The Brody and Von Bertalanffy models fitted the weight-age data, but the other models did not. The average weight (A) and growth rate (K) were: 384.6±1.63 kg and 0.0022±0.00002 (Brody) and 313.40±0.70 kg and 0.0045±0.00002 (Von Bertalanffy). The Brody model provides a better goodness of fit than the Von Bertalanffy model.
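
    As an illustration of the kind of fit performed, the sketch below fits the standard Brody form W(t) = A(1 - B e^{-Kt}) to a handful of synthetic weight-age points with SciPy; the data values and starting guesses are invented and are not the Nellore records analysed in the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def brody(t, A, B, K):
            """Brody growth function: asymptotic weight A, integration constant B, maturing rate K."""
            return A * (1.0 - B * np.exp(-K * t))

        age_days = np.array([0, 120, 240, 365, 550, 750], dtype=float)    # synthetic ages
        weight_kg = np.array([30, 110, 170, 230, 300, 345], dtype=float)  # synthetic weights

        params, _ = curve_fit(brody, age_days, weight_kg, p0=[400.0, 0.9, 0.003])
        A, B, K = params
        print(f"asymptotic weight A = {A:.1f} kg, maturing rate K = {K:.5f}")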

  10. Fast Algorithms for Fitting Active Appearance Models to Unconstrained Images

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Pantic, Maja

    2016-01-01

    Fitting algorithms for Active Appearance Models (AAMs) are usually considered to be robust but slow, or fast but less able to generalize well to unseen variations. In this paper, we look into AAM fitting algorithms and make the following orthogonal contributions: We present a simple “project-out”

  11. The LMDZ4 general circulation model: climate performance and sensitivity to parametrized physics with emphasis on tropical convection

    Energy Technology Data Exchange (ETDEWEB)

    Hourdin, Frederic; Musat, Ionela; Bony, Sandrine; Codron, Francis; Dufresne, Jean-Louis; Fairhead, Laurent; Grandpeix, Jean-Yves; LeVan, Phu; Li, Zhao-Xin; Lott, Francois [CNRS/UPMC, Laboratoire de Meteorologie Dynamique (LMD/IPSL), Paris Cedex 05 (France); Braconnot, Pascale; Friedlingstein, Pierre [Laboratoire des Sciences du Climat et de l' Environnement (LSCE/IPSL), Saclay (France); Filiberti, Marie-Angele [Institut Pierre Simon Laplace (IPSL), Paris (France); Krinner, Gerhard [Laboratoire de Glaciologie et Geophysique de l' Environnement, Grenoble (France)

    2006-12-15

    The LMDZ4 general circulation model is the atmospheric component of the IPSL-CM4 coupled model which has been used to perform climate change simulations for the 4th IPCC assessment report. The main aspects of the model climatology (forced by observed sea surface temperature) are documented here, as well as the major improvements with respect to the previous versions, which mainly come from the parametrization of tropical convection. A methodology is proposed to help analyse the sensitivity of the tropical Hadley-Walker circulation to the parametrization of cumulus convection and clouds. The tropical circulation is characterized using scalar potentials associated with the horizontal wind and horizontal transport of geopotential (the Laplacian of which is proportional to the total vertical momentum in the atmospheric column). The effect of parametrized physics is analysed in a regime sorted framework using the vertical velocity at 500 hPa as a proxy for large scale vertical motion. Compared to Tiedtke's convection scheme, used in previous versions, Emanuel's scheme improves the representation of the Hadley-Walker circulation, with a relatively stronger and deeper large scale vertical ascent over tropical continents, and suppresses the marked patterns of concentrated rainfall over oceans. Thanks to the regime sorted analyses, these differences are attributed to intrinsic differences in the vertical distribution of convective heating, and to the lack of self-inhibition by precipitating downdraughts in Tiedtke's parametrization. Both the convection and cloud schemes are shown to control the relative importance of large scale convection over land and ocean, an important point for the behaviour of the coupled model. (orig.)

  12. Three dimensional fuzzy influence analysis of fitting algorithms on integrated chip topographic modeling

    International Nuclear Information System (INIS)

    Liang, Zhong Wei; Wang, Yi Jun; Ye, Bang Yan; Brauwer, Richard Kars

    2012-01-01

    To inspect the detailed performance of surface precision modeling under different external parameter conditions, integrated chip surfaces should be evaluated and assessed during topographic spatial modeling. The choice of surface fitting algorithm exerts a considerable influence on topographic mathematical features, and the influence mechanisms of different surface fitting algorithms on the integrated chip surface enable a quantitative analysis of different external parameter conditions. By extracting coordinate information from selected physical control points with a set of precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used to construct micro-topographic models from the obtained point cloud. To compute the newly proposed mathematical features of the surface models, we construct a fuzzy evaluation data sequence and present a new three-dimensional fuzzy quantitative evaluation method. Through this method, the variation tendencies of topographic feature values can be clearly quantified, and the fuzzy influence relationships among the surface fitting algorithms, the topographic spatial features, and the external parameter conditions can be analyzed quantitatively and in detail. In addition, the quantitative analysis provides conclusions on the inherent influence mechanisms and internal mathematical relations between the performance of different surface fitting algorithms, the topographic spatial features, and their parameter conditions in surface micro-modeling. The performance inspection of surface precision modeling is thereby facilitated and optimized, offering a new research approach for micro-surface reconstruction monitored during the modeling process.

  14. Hybrid Model of Inhomogeneous Solar Wind Plasma Heating by Alfven Wave Spectrum: Parametric Studies

    Science.gov (United States)

    Ofman, L.

    2010-01-01

    Observations of the solar wind plasma at 0.3 AU and beyond show that a turbulent spectrum of magnetic fluctuations is present. Remote sensing observations of the corona indicate that heavy ions are hotter than protons and their temperature is anisotropic (T⊥/T∥ >> 1). We study the heating and the acceleration of multi-ion plasma in the solar wind by a turbulent spectrum of Alfvenic fluctuations using a 2-D hybrid numerical model. In the hybrid model the protons and heavy ions are treated kinetically as particles, while the electrons are included as neutralizing background fluid. This is the first two-dimensional hybrid parametric study of the solar wind plasma that includes an input turbulent wave spectrum guided by observation with inhomogeneous background density. We also investigate the effects of He++ ion beams in the inhomogeneous background plasma density on the heating of the solar wind plasma. The 2-D hybrid model treats parallel and oblique waves, together with cross-field inhomogeneity, self-consistently. We investigate the parametric dependence of the perpendicular heating, and the temperature anisotropy in the H+-He++ solar wind plasma. It was found that the scaling of the magnetic fluctuations power spectrum steepens in the higher-density regions, and the heating is channeled to these regions from the surrounding lower-density plasma due to wave refraction. The model parameters are applicable to the expected solar wind conditions at about 10 solar radii.

  15. Rapid world modeling: Fitting range data to geometric primitives

    International Nuclear Information System (INIS)

    Feddema, J.; Little, C.

    1996-01-01

    For the past seven years, Sandia National Laboratories has been active in the development of robotic systems to help remediate DOE's waste sites and decommissioned facilities. Some of these facilities have high levels of radioactivity which prevent manual clean-up. Tele-operated and autonomous robotic systems have been envisioned as the only suitable means of removing the radioactive elements. World modeling is defined as the process of creating a numerical geometric model of a real world environment or workspace. This model is often used in robotics to plan robot motions which perform a task while avoiding obstacles. In many applications where the world model does not exist ahead of time, structured lighting, laser range finders, and even acoustical sensors have been used to create three dimensional maps of the environment. These maps consist of thousands of range points which are difficult to handle and interpret. This paper presents a least squares technique for fitting range data to planar and quadric surfaces, including cylinders and ellipsoids. Once fit to these primitive surfaces, the amount of data associated with a surface is greatly reduced up to three orders of magnitude, thus allowing for more rapid handling and analysis of world data
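
    A minimal example of the kind of least-squares primitive fitting described, for the simplest case of a plane: the centroid of the points lies on the best-fit plane and the normal is the singular vector of the centred point cloud with the smallest singular value. The synthetic point cloud below stands in for real range data.

        import numpy as np

        def fit_plane(points):
            """points: (N, 3) array of range points. Returns (centroid, unit normal)."""
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid)
            normal = vt[-1]                    # direction of least variance
            return centroid, normal

        rng = np.random.default_rng(0)
        xy = rng.uniform(-1.0, 1.0, size=(200, 2))
        z = 0.3 * xy[:, 0] - 0.2 * xy[:, 1] + 0.01 * rng.standard_normal(200)
        pts = np.column_stack([xy, z])

        centroid, normal = fit_plane(pts)
        print(np.round(normal / normal[2], 3))  # ~[-0.3, 0.2, 1], i.e. the plane z = 0.3x - 0.2y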

  16. Rate-dependent extensions of the parametric magneto-dynamic model with magnetic hysteresis

    Directory of Open Access Journals (Sweden)

    S. Steentjes

    2017-05-01

    Full Text Available This paper extends the parametric magneto-dynamic model of soft magnetic steel sheets to account for the phase shift between local magnetic flux density and magnetic field strength. This phase shift originates from the damped motion of domain walls and is strongly dependent on the microstructure of the material. In this regard, two different approaches to include the rate-dependent effects are investigated: a purely phenomenological, mathematical approach and a physical-based one.

  17. Dynamic heat transfer modeling and parametric study of thermoelectric radiant cooling and heating panel system

    International Nuclear Information System (INIS)

    Luo, Yongqiang; Zhang, Ling; Liu, Zhongbing; Wang, Yingzi; Wu, Jing; Wang, Xiliang

    2016-01-01

    Highlights: • A dynamic model of the thermoelectric radiant panel system is established. • The internal parameters of the thermoelectric module are calculated dynamically in the simulation. • Both the artificial neural network model and the system model are verified against experimental data. • An optimized system structure is obtained through a parametric study. - Abstract: Radiant panel systems can optimize indoor thermal comfort with lower energy consumption. The thermoelectric radiant panel (TERP) system is a new and effective prototype of radiant system that uses thermoelectric modules (TEMs) instead of conventional water pipes as the heat source. The TERP allows more stable and easier system control as well as lower initial and operating costs. In this study, an improved dynamic system model was established by combining an analytical system model with artificial neural networks (ANN) and with dynamic calculation of the internal parameters of the TEM. A double integral was used to calculate the average surface temperature of the TERP. Both the ANN model and the system model were in good agreement with experimental data in cooling and heating modes. To optimize the system design, a parametric study was conducted on the thickness of the aluminum panel and insulation, as well as on the arrangement of TEMs on the panel surface. The simulation results indicate that the optimum thicknesses of the aluminum panel and insulation are around 1–2 mm and 40–50 mm, respectively. In addition, TEMs should be installed uniformly on the panel surface, with each TEM at the center of a square-shaped region with side length around 0.387–0.548 m.

  18. Parametric soil water retention models: a critical evaluation of expressions for the full moisture range

    Science.gov (United States)

    Madi, Raneem; Huibert de Rooij, Gerrit; Mielenz, Henrike; Mai, Juliane

    2018-02-01

    Few parametric expressions for the soil water retention curve are suitable for dry conditions. Furthermore, expressions for the soil hydraulic conductivity curves associated with parametric retention functions can behave unrealistically near saturation. We developed a general criterion for water retention parameterizations that ensures physically plausible conductivity curves. Only 3 of the 18 tested parameterizations met this criterion without restrictions on the parameters of a popular conductivity curve parameterization. A fourth required one parameter to be fixed. We estimated parameters by shuffled complex evolution (SCE) with the objective function tailored to various observation methods used to obtain retention curve data. We fitted the four parameterizations with physically plausible conductivities as well as the most widely used parameterization. The performance of the resulting 12 combinations of retention and conductivity curves was assessed in a numerical study with 751 days of semiarid atmospheric forcing applied to unvegetated, uniform, 1 m freely draining columns for four textures. Choosing different parameterizations had a minor effect on evaporation, but cumulative bottom fluxes varied by up to an order of magnitude between them. This highlights the need for a careful selection of the soil hydraulic parameterization that ideally does not only rely on goodness of fit to static soil water retention data but also on hydraulic conductivity measurements. Parameter fits for 21 soils showed that extrapolations into the dry range of the retention curve often became physically more realistic when the parameterization had a logarithmic dry branch, particularly in fine-textured soils where high residual water contents would otherwise be fitted.

  19. The dielectric properties of biological tissues: III. Parametric models for the dielectric spectrum of tissues

    International Nuclear Information System (INIS)

    Gabriel, S.; Lau, R.W.; Gabriel, C.

    1996-01-01

    A parametric model was developed to describe the variation of dielectric properties of tissues as a function of frequency. The experimental spectrum from 10 Hz to 100 GHz was modelled with four dispersion regions. The development of the model was based on recently acquired data, complemented by data surveyed from the literature. The purpose is to enable the prediction of dielectric data that are in line with those contained in the vast body of literature on the subject. The analysis was carried out on a Microsoft Excel spreadsheet. Parameters are given for 17 tissue types. (author)
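
    The parametric form referred to here represents the complex relative permittivity as a sum of Cole-Cole dispersion terms plus a static ionic conductivity contribution; with four dispersion regions it is typically written as

        \[
        \hat{\varepsilon}(\omega) \;=\; \varepsilon_{\infty}
        \;+\; \sum_{n=1}^{4} \frac{\Delta\varepsilon_n}{1 + (j\omega\tau_n)^{1-\alpha_n}}
        \;+\; \frac{\sigma_i}{j\omega\varepsilon_0} ,
        \]

    where each dispersion is characterized by its magnitude \(\Delta\varepsilon_n\), relaxation time \(\tau_n\) and broadening parameter \(\alpha_n\), and \(\sigma_i\) is the static ionic conductivity; the parameters reported for the 17 tissue types are values of these quantities.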

  20. Using the Flipchem Photochemistry Model When Fitting Incoherent Scatter Radar Data

    Science.gov (United States)

    Reimer, A. S.; Varney, R. H.

    2017-12-01

    The north-face Resolute Bay Incoherent Scatter Radar (RISR-N) routinely images the dynamics of the polar ionosphere, providing measurements of the plasma density, electron temperature, ion temperature, and line-of-sight velocity with seconds-to-minutes time resolution. RISR-N does not directly measure ionospheric parameters, but backscattered signals, recording them as voltage samples. Using signal processing techniques, radar autocorrelation functions (ACF) are estimated from the voltage samples. A model of the signal ACF is then fitted to the ACF using non-linear least-squares techniques to obtain the best-fit ionospheric parameters. The signal model, and therefore the fitted parameters, depend on the ionospheric ion composition that is used [e.g. Zettergren et al. (2010), Zou et al. (2017)]. The software used to process RISR-N ACF data includes the "flipchem" model, which is an ion photochemistry model developed by Richards [2011] that was adapted from the Field Line Interhemispheric Plasma (FLIP) model. Flipchem requires neutral densities, neutral temperatures, electron density, ion temperature, electron temperature, solar zenith angle, and F10.7 as inputs to compute ion densities, which are input to the signal model. A description of how the flipchem model is used in the RISR-N fitting software will be presented. Additionally, a statistical comparison of the fitted electron density, ion temperature, electron temperature, and velocity obtained using a flipchem ionosphere, a pure O+ ionosphere, and a Chapman O+ ionosphere will be presented. The comparison covers nearly two years of RISR-N data (April 2015 - December 2016). Richards, P. G. (2011), Reexamination of ionospheric photochemistry, J. Geophys. Res., 116, A08307, doi:10.1029/2011JA016613. Zettergren, M., Semeter, J., Burnett, B., Oliver, W., Heinselman, C., Blelly, P.-L., and Diaz, M.: Dynamic variability in F-region ionospheric composition at auroral arc boundaries, Ann. Geophys., 28, 651-664, https

  1. Modelling road accident blackspots data with the discrete generalized Pareto distribution.

    Science.gov (United States)

    Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María

    2014-10-01

    This study shows how road traffic networks events, in particular road accidents on blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities on Spanish blackspots in the period 2003-2007, from Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and particular case of the previous model). For that, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two estimation methods of their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: Chi-square test and discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that those probabilistic models can be useful to describe the road accident blackspots datasets analyzed. Copyright © 2014 Elsevier Ltd. All rights reserved.
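
    For illustration, a maximum-likelihood fit of a discretised Lomax (Pareto II) distribution to crash counts might look like the sketch below. The pmf is built by differencing a continuous survival function, which is one common construction; the paper's exact three-parameter discrete generalized Pareto and its μ/(μ+1) frequency estimator are not reproduced, and the count data are invented.

        # Minimal sketch: maximum-likelihood fit of a discretised Lomax
        # (Pareto II) distribution to counts of crashes per blackspot.
        import numpy as np
        from scipy.optimize import minimize

        def surv(k, shape, scale):
            """Continuous Lomax survival function evaluated at k >= 0."""
            return (1.0 + k / scale) ** (-shape)

        def neg_loglik(params, data):
            shape, scale = np.exp(params)          # work on log scale for positivity
            pmf = surv(data, shape, scale) - surv(data + 1, shape, scale)
            return -np.sum(np.log(pmf + 1e-300))

        counts = np.array([0, 1, 1, 2, 0, 3, 1, 0, 5, 2, 1, 0, 0, 4, 1])  # crashes per site (made up)
        res = minimize(neg_loglik, x0=np.log([1.0, 2.0]), args=(counts,), method="Nelder-Mead")
        shape_hat, scale_hat = np.exp(res.x)
        print(f"shape={shape_hat:.3f}, scale={scale_hat:.3f}")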

  2. The relationship between multilevel models and non-parametric multilevel mixture models: Discrete approximation of intraclass correlation, random coefficient distributions, and residual heteroscedasticity.

    Science.gov (United States)

    Rights, Jason D; Sterba, Sonya K

    2016-11-01

    Multilevel data structures are common in the social sciences. Often, such nested data are analysed with multilevel models (MLMs) in which heterogeneity between clusters is modelled by continuously distributed random intercepts and/or slopes. Alternatively, the non-parametric multilevel regression mixture model (NPMM) can accommodate the same nested data structures through discrete latent class variation. The purpose of this article is to delineate analytic relationships between NPMM and MLM parameters that are useful for understanding the indirect interpretation of the NPMM as a non-parametric approximation of the MLM, with relaxed distributional assumptions. We define how seven standard and non-standard MLM specifications can be indirectly approximated by particular NPMM specifications. We provide formulas showing how the NPMM can serve as an approximation of the MLM in terms of intraclass correlation, random coefficient means and (co)variances, heteroscedasticity of residuals at level 1, and heteroscedasticity of residuals at level 2. Further, we discuss how these relationships can be useful in practice. The specific relationships are illustrated with simulated graphical demonstrations, and direct and indirect interpretations of NPMM classes are contrasted. We provide an R function to aid in implementing and visualizing an indirect interpretation of NPMM classes. An empirical example is presented and future directions are discussed. © 2016 The British Psychological Society.

  3. truncSP: An R Package for Estimation of Semi-Parametric Truncated Linear Regression Models

    Directory of Open Access Journals (Sweden)

    Maria Karlsson

    2014-05-01

    Full Text Available Problems with truncated data occur in many areas, complicating estimation and inference. Regarding linear regression models, the ordinary least squares estimator is inconsistent and biased for these types of data and is therefore unsuitable for use. Alternative estimators, designed for the estimation of truncated regression models, have been developed. This paper presents the R package truncSP. The package contains functions for the estimation of semi-parametric truncated linear regression models using three different estimators: the symmetrically trimmed least squares, quadratic mode, and left truncated estimators, all of which have been shown to have good asymptotic and finite sample properties. The package also provides functions for the analysis of the estimated models. Data from the environmental sciences are used to illustrate the functions in the package.

  4. Mechanistic site-based emulation of a global ocean biogeochemical model (MEDUSA 1.0) for parametric analysis and calibration: an application of the Marine Model Optimization Testbed (MarMOT 1.1)

    Directory of Open Access Journals (Sweden)

    J. C. P. Hemmings

    2015-03-01

    Full Text Available Biogeochemical ocean circulation models used to investigate the role of plankton ecosystems in global change rely on adjustable parameters to capture the dominant biogeochemical dynamics of a complex biological system. In principle, optimal parameter values can be estimated by fitting models to observational data, including satellite ocean colour products such as chlorophyll that achieve good spatial and temporal coverage of the surface ocean. However, comprehensive parametric analyses require large ensemble experiments that are computationally infeasible with global 3-D simulations. Site-based simulations provide an efficient alternative but can only be used to make reliable inferences about global model performance if robust quantitative descriptions of their relationships with the corresponding 3-D simulations can be established. The feasibility of establishing such a relationship is investigated for an intermediate complexity biogeochemistry model (MEDUSA) coupled with a widely used global ocean model (NEMO). A site-based mechanistic emulator is constructed for surface chlorophyll output from this target model as a function of model parameters. The emulator comprises an array of 1-D simulators and a statistical quantification of the uncertainty in their predictions. The unknown parameter-dependent biogeochemical environment, in terms of initial tracer concentrations and lateral flux information required by the simulators, is a significant source of uncertainty. It is approximated by a mean environment derived from a small ensemble of 3-D simulations representing variability of the target model behaviour over the parameter space of interest. The performance of two alternative uncertainty quantification schemes is examined: a direct method based on comparisons between simulator output and a sample of known target model "truths" and an indirect method that is only partially reliant on knowledge of the target model output. In general, chlorophyll

  5. Dynamic Human Body Modeling Using a Single RGB Camera.

    Science.gov (United States)

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-03-18

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones.

  6. Estrategia para el diseño paramétrico basado en modelos. // Strategy for model-based parametric design.

    Directory of Open Access Journals (Sweden)

    S. A. Marrero Osorio

    2008-09-01

    Full Text Available This article presents a way to design parametrically using the computer programs (CAD, CAE, PMS) that have spread among designers over the last 20 years. The proposal is based on mathematical models that capture the engineering knowledge about the design object and everything related to the construction of its three-dimensional virtual model, drawings and other aspects, using the dichromatic graph method to solve the computational problems that arise in parametric design. The points of view of different authors on the general design process are analysed, parametric design is located within that process, and a formal explanation is given that leads to interesting conclusions. Key words: parametric design, computer aided design (CAD), computer aided engineering (CAE), parametric modeling software (PMS), problem solving.

  7. A parametric duration model of the reaction times of drivers distracted by mobile phone conversations.

    Science.gov (United States)

    Haque, Md Mazharul; Washington, Simon

    2014-01-01

    The use of mobile phones while driving is more prevalent among young drivers, a less experienced cohort with elevated crash risk. The objective of this study was to examine and better understand the reaction times of young drivers to a traffic event originating in their peripheral vision whilst engaged in a mobile phone conversation. The CARRS-Q advanced driving simulator was used to test a sample of young drivers on various simulated driving tasks, including an event that originated within the driver's peripheral vision, whereby a pedestrian enters a zebra crossing from a sidewalk. Thirty-two licensed drivers drove the simulator in three phone conditions: baseline (no phone conversation), hands-free and handheld. In addition to driving the simulator each participant completed questionnaires related to driver demographics, driving history, usage of mobile phones while driving, and general mobile phone usage history. The participants were 21-26 years old and split evenly by gender. Drivers' reaction times to a pedestrian in the zebra crossing were modelled using a parametric accelerated failure time (AFT) duration model with a Weibull distribution. Also tested were two different model specifications to account for the structured heterogeneity arising from the repeated measures experimental design. The Weibull AFT model with gamma heterogeneity was found to be the best fitting model and identified four significant variables influencing the reaction times, including phone condition, driver's age, license type (provisional license holder or not), and self-reported frequency of usage of handheld phones while driving. The reaction times of drivers were more than 40% longer in the distracted condition compared to baseline (not distracted). Moreover, the impairment of reaction times due to mobile phone conversations was almost double for provisional compared to open license holders. A reduction in the ability to detect traffic events in the periphery whilst distracted
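
    A basic Weibull accelerated failure time fit of this kind can be sketched with the lifelines package as below; the gamma frailty term used in the paper to capture repeated-measures heterogeneity is omitted, and the data frame is hypothetical.

        # Minimal sketch: a Weibull AFT model for reaction times, fitted with
        # lifelines. The gamma frailty used in the paper is not included, and
        # the data below are invented.
        import pandas as pd
        from lifelines import WeibullAFTFitter

        df = pd.DataFrame({
            "reaction_time": [1.2, 1.8, 2.5, 1.1, 2.9, 1.6, 2.2, 3.1, 1.4, 2.0, 1.7, 2.6],  # seconds
            "observed":      [1] * 12,                                   # 1 = event observed
            "handheld":      [0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1],       # phone condition dummy
            "provisional":   [1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0],       # licence type dummy
        })

        aft = WeibullAFTFitter()
        aft.fit(df, duration_col="reaction_time", event_col="observed")
        aft.print_summary()   # exp(coef) > 1 indicates longer (slower) reaction times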

  8. Multi-binding site model-based curve-fitting program for the computation of RIA data

    International Nuclear Information System (INIS)

    Malan, P.G.; Ekins, R.P.; Cox, M.G.; Long, E.M.R.

    1977-01-01

    In this paper, a comparison will be made of model-based and empirical curve-fitting procedures. The implementation of a multiple binding-site curve-fitting model which will successfully fit a wide range of assay data, and which can be run on a mini-computer is described. The latter sophisticated model also provides estimates of binding site concentrations and the values of the respective equilibrium constants present: the latter have been used for refining assay conditions using computer optimisation techniques. (orig./AJ) [de

  9. Parametric Adjustments to the Rankine Vortex Wind Model for Gulf of Mexico Hurricanes

    Science.gov (United States)

    2012-11-01

    Only fragments of the report documentation page and abstract were recovered: "... may be used to construct spatially varying wind fields for the GOM region (e.g., Thompson and Cardone [12]), but this requires using a complicated ... Storm Damage Reduction, and Dredging Operations and Environmental Research (DOER). The USACE Headquarters granted permission to publish this paper."

  10. Estimating solar ultraviolet irradiance (290-385 nm) by means of the spectral parametric models: SPCTRAL2 and SMARTS2

    Directory of Open Access Journals (Sweden)

    I. Foyo-Moreno

    2000-11-01

    Full Text Available Since the discovery of the ozone depletion in the Antarctic and the globally declining trend of stratospheric ozone concentration, public and scientific concern has been raised in the last decades. A very important consequence of this fact is the increased broadband and spectral UV radiation in the environment and the biological effects and health risks that may take place in the near future. The absence of widespread measurements of this radiometric flux has led to the development and use of alternative estimation procedures such as the parametric approaches. Parametric models compute the radiant energy using available atmospheric parameters. Some parametric models compute the global solar irradiance at surface level by addition of its direct beam and diffuse components. In the present work, we have developed a comparison between two cloudless sky parametrization schemes. Both methods provide an estimation of the solar spectral irradiance that can be integrated spectrally within the limits of interest. For this test we have used data recorded in a radiometric station located at Granada (37.180°N, 3.580°W, 660 m a.m.s.l.), an inland location. The database includes hourly values of the relevant variables covering the years 1994-95. The performance of the models has been tested in relation to their predictive capability of global solar irradiance in the UV range (290–385 nm). After our study, it appears that information concerning the aerosol radiative effects is fundamental in order to obtain a good estimation. The original version of SPCTRAL2 provides estimates of the experimental values with negligible mean bias deviation. This suggests not only the appropriateness of the model but also the convenience of the aerosol features fixed in it to Granada conditions. SMARTS2 model offers increased flexibility concerning the selection of different aerosol models included in the code and provides the best results when the selected models are those

  12. ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction

    International Nuclear Information System (INIS)

    Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.

    2015-01-01

    An appropriate maintenance policy is essential to reduce expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is to be able to predict the reliability of the systems under study, based on a well fitted model. In this paper, the classes of models Arithmetic Reduction of Age and Arithmetic Reduction of Intensity are explored. Likelihood functions for such models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed considering models with different memories. Parameters, namely the shape and scale of the Power Law Process and the efficiency of repair, were estimated for the best fitted model. Estimation of model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results are valuable information for the mining company and can be used to support decision making regarding preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks
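
    For orientation, the minimal-repair (power law process) baseline that the ARA/ARI classes generalize has closed-form maximum-likelihood estimates for time-truncated data, sketched below; the imperfect-repair likelihoods derived in the paper require numerical maximisation and are not reproduced, and the failure times are invented.

        # Minimal sketch: closed-form MLEs for a power law process ("as bad as
        # old" minimal repair) observed over (0, T], time truncated.
        import numpy as np

        t = np.array([210.0, 480.0, 900.0, 1270.0, 1710.0, 1980.0])  # failure times (hours, made up)
        T = 2200.0                                                   # end of observation window

        n = len(t)
        beta_hat = n / np.sum(np.log(T / t))          # shape
        theta_hat = T / n ** (1.0 / beta_hat)         # scale
        intensity_T = (beta_hat / theta_hat) * (T / theta_hat) ** (beta_hat - 1.0)

        print(f"beta = {beta_hat:.3f}, theta = {theta_hat:.1f} h, "
              f"intensity at T = {intensity_T:.4f} failures/h")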

  13. Detection and modelling of time-dependent QTL in animal populations

    DEFF Research Database (Denmark)

    Lund, Mogens S; Sørensen, Peter; Madsen, Per

    2008-01-01

    A longitudinal approach is proposed to map QTL affecting function-valued traits and to estimate their effect over time. The method is based on fitting mixed random regression models. The QTL allelic effects are modelled with random coefficient parametric curves and using a gametic relationship...... matrix. A simulation study was conducted in order to assess the ability of the approach to fit different patterns of QTL over time. It was found that this longitudinal approach was able to adequately fit the simulated variance functions and considerably improved the power of detection of time-varying QTL...... effects compared to the traditional univariate model. This was confirmed by an analysis of protein yield data in dairy cattle, where the model was able to detect QTL with high effect either at the beginning or the end of the lactation, that were not detected with a simple 305 day model....

  14. Helicopter model rotor-blade vortex interaction impulsive noise: Scalability and parametric variations

    Science.gov (United States)

    Splettstoesser, W. R.; Schultz, K. J.; Boxwell, D. A.; Schmitz, F. H.

    1984-01-01

    Acoustic data taken in the anechoic Deutsch-Niederlaendischer Windkanal (DNW) have documented the blade vortex interaction (BVI) impulsive noise radiated from a 1/7-scale model main rotor of the AH-1 series helicopter. Averaged model scale data were compared with averaged full scale, inflight acoustic data under similar nondimensional test conditions. At low advance ratios (mu = 0.164 to 0.194), the data scale remarkably well in level and waveform shape, and also duplicate the directivity pattern of BVI impulsive noise. At moderate advance ratios (mu = 0.224 to 0.270), the scaling deteriorates, suggesting that the model scale rotor is not adequately simulating the full scale BVI noise; presently, no proven explanation of this discrepancy exists. Carefully performed parametric variations over a complete matrix of testing conditions have shown that BVI noise radiation is highly sensitive to all four governing nondimensional parameters: tip Mach number at hover, advance ratio, local inflow ratio, and thrust coefficient.

  15. Constrained Convolutional Sparse Coding for Parametric Based Reconstruction of Line Drawings

    KAUST Repository

    Shaheen, Sara

    2017-12-25

    Convolutional sparse coding (CSC) plays an essential role in many computer vision applications ranging from image compression to deep learning. In this work, we highlight a new application where CSC can serve effectively, namely line drawing analysis. The process of drawing a line drawing can be approximated as the sparse spatial localization of a number of typical basic strokes, which in turn can be cast as a non-standard CSC model that considers the line drawing formation process from parametric curves. These curves are learned to optimize the fit between the model and a specific set of line drawings. Parametric representation of sketches is vital in enabling automatic sketch analysis, synthesis and manipulation. A couple of sketch manipulation examples are demonstrated in this work. Consequently, our novel method is expected to provide a reliable and automatic method for parametric sketch description. Through experiments, we empirically validate the convergence of our method to a feasible solution.

  16. A goodness-of-fit test for occupancy models with correlated within-season revisits

    Science.gov (United States)

    Wright, Wilson; Irvine, Kathryn M.; Rodhouse, Thomas J.

    2016-01-01

    Occupancy modeling is important for exploring species distribution patterns and for conservation monitoring. Within this framework, explicit attention is given to species detection probabilities estimated from replicate surveys to sample units. A central assumption is that replicate surveys are independent Bernoulli trials, but this assumption becomes untenable when ecologists serially deploy remote cameras and acoustic recording devices over days and weeks to survey rare and elusive animals. Proposed solutions involve modifying the detection-level component of the model (e.g., first-order Markov covariate). Evaluating whether a model sufficiently accounts for correlation is imperative, but clear guidance for practitioners is lacking. Currently, an omnibus goodness-of-fit test using a chi-square discrepancy measure on unique detection histories is available for occupancy models (MacKenzie and Bailey, Journal of Agricultural, Biological, and Environmental Statistics, 9, 2004, 300; hereafter, MacKenzie–Bailey test). We propose a join count summary measure adapted from spatial statistics to directly assess correlation after fitting a model. We motivate our work with a dataset of multinight bat call recordings from a pilot study for the North American Bat Monitoring Program. We found in simulations that our join count test was more reliable than the MacKenzie–Bailey test for detecting inadequacy of a model that assumed independence, particularly when serial correlation was low to moderate. A model that included a Markov-structured detection-level covariate produced unbiased occupancy estimates except in the presence of strong serial correlation and a revisit design consisting only of temporal replicates. When applied to two common bat species, our approach illustrates that sophisticated models do not guarantee adequate fit to real data, underscoring the importance of model assessment. Our join count test provides a widely applicable goodness-of-fit test and
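
    The idea behind such a test can be sketched as a parametric bootstrap: simulate detection histories from the fitted model that assumes independent revisits and compare an observed join-count statistic with its simulated distribution. The snippet below is only schematic; psi_hat, p_hat and the detection matrix are hypothetical, and the paper's actual statistic and conditioning differ in detail.

        # Schematic join-count goodness-of-fit check via parametric bootstrap.
        import numpy as np

        rng = np.random.default_rng(1)
        psi_hat, p_hat = 0.6, 0.3                      # fitted occupancy and detection probabilities
        y_obs = rng.binomial(1, 0.3, size=(50, 10))    # stand-in detection histories (sites x visits)

        def join_count(y):
            """Number of adjacent visit pairs with identical outcomes, summed over sites."""
            return int(np.sum(y[:, 1:] == y[:, :-1]))

        def simulate(n_sites, n_visits):
            z = rng.binomial(1, psi_hat, size=(n_sites, 1))        # latent occupancy state
            return rng.binomial(1, p_hat, size=(n_sites, n_visits)) * z

        obs = join_count(y_obs)
        sims = np.array([join_count(simulate(*y_obs.shape)) for _ in range(999)])
        p_value = (1 + np.sum(sims >= obs)) / (len(sims) + 1)
        print(f"observed join count = {obs}, bootstrap p-value = {p_value:.3f}")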

  17. Brief communication: human cranial variation fits iterative founder effect model with African origin.

    Science.gov (United States)

    von Cramon-Taubadel, Noreen; Lycett, Stephen J

    2008-05-01

    Recent studies comparing craniometric and neutral genetic affinity matrices have concluded that, on average, human cranial variation fits a model of neutral expectation. While human craniometric and genetic data fit a model of isolation by geographic distance, it is not yet clear whether this is due to geographically mediated gene flow or human dispersal events. Recently, human genetic data have been shown to fit an iterative founder effect model of dispersal with an African origin, in line with the out-of-Africa replacement model for modern human origins, and Manica et al. (Nature 448 (2007) 346-349) have demonstrated that human craniometric data also fit this model. However, in contrast with the neutral model of cranial evolution suggested by previous studies, Manica et al. (2007) made the a priori assumption that cranial form has been subject to climatically driven natural selection and therefore correct for climate prior to conducting their analyses. Here we employ a modified theoretical and methodological approach to test whether human cranial variability fits the iterative founder effect model. In contrast with Manica et al. (2007) we employ size-adjusted craniometric variables, since climatic factors such as temperature have been shown to correlate with aspects of cranial size. Despite these differences, we obtain similar results to those of Manica et al. (2007), with up to 26% of global within-population craniometric variation being explained by geographic distance from sub-Saharan Africa. Comparative analyses using non-African origins do not yield significant results. The implications of these results are discussed in the light of the modern human origins debate. (c) 2007 Wiley-Liss, Inc.

  18. On the sensitivity of teleseismic full-waveform inversion to earth parametrization, initial model and acquisition design

    Science.gov (United States)

    Beller, S.; Monteiller, V.; Combe, L.; Operto, S.; Nolet, G.

    2018-02-01

    Full-waveform inversion (FWI) is not yet a mature imaging technology for lithospheric imaging from teleseismic data. Therefore, its promise and pitfalls need to be assessed more accurately according to the specifications of teleseismic experiments. Three important issues are related to (1) the choice of the lithospheric parametrization for optimization and visualization, (2) the initial model and (3) the acquisition design, in particular in terms of receiver spread and sampling. These three issues are investigated with a realistic synthetic example inspired by the CIFALPS experiment in the Western Alps. Isotropic elastic FWI is implemented with an adjoint-state formalism and aims to update three parameter classes by minimization of a classical least-squares difference-based misfit function. Three different subsurface parametrizations, combining density (ρ) with P and S wave speeds (Vp and Vs), P and S impedances (Ip and Is), or elastic moduli (λ and μ), are first discussed based on their radiation patterns before their assessment by FWI. We conclude that the (ρ, λ, μ) parametrization provides the FWI models that best correlate with the true ones after recombining a posteriori the (ρ, λ, μ) optimization parameters into Ip and Is. Owing to the low frequency content of teleseismic data, 1-D reference global models such as PREM provide sufficiently accurate initial models for FWI after the smoothing that is necessary to remove the imprint of the layering. Two kinds of station deployments are assessed: a coarse areal geometry versus a dense linear one. We unambiguously conclude that a coarse areal geometry should be favoured as it dramatically increases the penetration depth of the imaging as well as the horizontal resolution. This is because the areal geometry significantly increases local wavenumber coverage, through a broader sampling of the scattering and dip angles, compared to a linear deployment.

  19. Improvements in Sensible Heat-Flux Parametrization in the High-Resolution Regional Model (HRM) Through the Modified Treatment of the Roughness Length for Heat

    Science.gov (United States)

    Anurose, T. J.; Subrahamanyam, D. Bala

    2013-06-01

    We discuss the impact of the differential treatment of the roughness lengths for momentum and heat (z_{0m} and z_{0h}) in the flux parametrization scheme of the high-resolution regional model (HRM) for a heterogeneous terrain centred around Thiruvananthapuram, India (8.5°N, 76.9°E). The magnitudes of sensible heat flux (H) obtained from HRM simulations using the original parametrization scheme differed drastically from the concurrent in situ observations. With a view to improving the performance of this parametrization scheme, two distinct modifications are incorporated: (1) in the first method, a constant value of 100 is assigned to the z_{0m}/z_{0h} ratio; (2) in the second approach, this ratio is treated as a function of time. Both these modifications in the HRM model showed significant improvements in the H simulations for Thiruvananthapuram and its adjoining regions. Results obtained from the present study provide a first-ever comparison of H simulations using the modified parametrization scheme in the HRM model with in situ observations for the Indian coastal region, and suggest a differential treatment of z_{0m} and z_{0h} in the flux parametrization scheme.
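
    The sensitivity to the z_{0m}/z_{0h} ratio can be illustrated with the neutral bulk transfer formula for sensible heat flux (stability corrections neglected); the HRM scheme itself is more elaborate, and all numbers below are invented.

        # Minimal sketch of why the z0m/z0h ratio matters: neutral bulk formula
        # for sensible heat flux, H = rho*cp*k^2*U*dtheta / (ln(z/z0m)*ln(z/z0h)).
        import numpy as np

        rho, cp, k = 1.15, 1005.0, 0.4        # air density, heat capacity, von Karman constant
        z, U, dtheta = 10.0, 4.0, 3.0         # reference height (m), wind (m/s), theta_s - theta_a (K)
        z0m = 0.1                             # momentum roughness length (m)

        for ratio in (1.0, 100.0):            # z0h = z0m versus z0h = z0m / 100
            z0h = z0m / ratio
            H = rho * cp * k**2 * U * dtheta / (np.log(z / z0m) * np.log(z / z0h))
            print(f"z0m/z0h = {ratio:>5.0f}:  H = {H:6.1f} W/m^2")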

  20. Design and Parametric Study of the Magnetic Sensor for Position Detection in Linear Motor Based on Nonlinear Parametric model order reduction.

    Science.gov (United States)

    Paul, Sarbajit; Chang, Junghwan

    2017-07-01

    This paper presents a design approach for a magnetic sensor module to detect mover position using the proper orthogonal decomposition-dynamic mode decomposition (POD-DMD)-based nonlinear parametric model order reduction (PMOR). The parameterization of the sensor module is achieved by using the multipolar moment matching method. Several geometric variables of the sensor module are considered while developing the parametric study. The operation of the sensor module is based on the principle of detecting the airgap flux density distribution with a Hall effect IC. Therefore, the design objective is to achieve a peak flux density (PFD) greater than 0.1 T and total harmonic distortion (THD) less than 3%. To fulfill these constraint conditions, the specifications for the sensor module are obtained by using the POD-DMD-based reduced model. The POD-DMD-based reduced model provides a platform to analyze a large number of design models very quickly, with less computational burden. Finally, with the final specifications, the experimental prototype is designed and tested. Two different modes, 90° and 120°, are used to obtain the position information of the linear motor mover. The position information thus obtained is compared with the linear scale data, used as a reference signal. The position information obtained using the 120° mode has a standard deviation of 0.10 mm from the reference linear scale signal, whereas the 90° mode position signal shows a deviation of 0.23 mm from the reference. The deviation in the output arises due to the mechanical tolerances introduced into the specification during the manufacturing process. This provides a scope for coupling reliability based design optimization into the design process as a future extension.

  1. Development of a parametric containment event tree model for a severe BWR accident

    Energy Technology Data Exchange (ETDEWEB)

    Okkonen, T [OTO-Consulting Ay, Helsinki (Finland)

    1995-04-01

    A containment event tree (CET) is built for analysis of severe accidents at the TVO boiling water reactor (BWR) units. Parametric models of severe accident progression and fission product behaviour are developed and integrated in order to construct a compact and self-contained Level 2 PSA model. The model can be easily updated to correspond to new research results. The analyses of the study are limited to severe accidents starting from full-power operation and leading to core melting, and are focused mainly on the use and effects of the dedicated severe accident management (SAM) systems. Severe accident progression from eight plant damage states (PDS), involving different pre-core-damage accident evolution, is examined, but the inclusion of their relative or absolute probabilities, by integration with Level 1, is deferred to integral safety assessments. (33 refs., 5 figs., 7 tabs.).

  2. Parametric modelling design applied to weft knitted surfaces and its effects in their physical properties

    Science.gov (United States)

    Oliveira, N. P.; Maciel, L.; Catarino, A. P.; Rocha, A. M.

    2017-10-01

    This work proposes the creation of models of surfaces using a parametric computer modelling software to obtain three-dimensional structures in weft knitted fabrics produced on single needle system machines. Digital prototyping, another feature of digital modelling software, was also explored in three-dimensional drawings generated using the Rhinoceros software. With this approach, different 3D structures were developed and produced. Physical characterization tests were then performed on the resulting 3D weft knitted structures to assess their ability to promote comfort. From the obtained results, it is apparent that the developed structures have potential for application in different market segments, such as clothing and interior textiles.

  3. Parametric modeling of the intervertebral disc space in 3D: application to CT images of the lumbar spine.

    Science.gov (United States)

    Korez, Robert; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2014-10-01

    Gradual degeneration of intervertebral discs of the lumbar spine is one of the most common causes of low back pain. Although conservative treatment for low back pain may provide relief to most individuals, surgical intervention may be required for individuals with significant continuing symptoms, which is usually performed by replacing the degenerated intervertebral disc with an artificial implant. For designing implants with good bone contact and continuous force distribution, the morphology of the intervertebral disc space and vertebral body endplates is of considerable importance. In this study, we propose a method for parametric modeling of the intervertebral disc space in three dimensions (3D) and show its application to computed tomography (CT) images of the lumbar spine. The initial 3D model of the intervertebral disc space is generated according to the superquadric approach and therefore represented by a truncated elliptical cone, which is initialized by parameters obtained from 3D models of adjacent vertebral bodies. In an optimization procedure, the 3D model of the intervertebral disc space is incrementally deformed by adding parameters that provide a more detailed morphometric description of the observed shape, and aligned to the observed intervertebral disc space in the 3D image. By applying the proposed method to CT images of 20 lumbar spines, the shape and pose of each of the 100 intervertebral disc spaces were represented by a 3D parametric model. The resulting mean (±standard deviation) accuracy of modeling was 1.06±0.98mm in terms of radial Euclidean distance against manually defined ground truth points, with the corresponding success rate of 93% (i.e. 93 out of 100 intervertebral disc spaces were modeled successfully). As the resulting 3D models provide a description of the shape of intervertebral disc spaces in a complete parametric form, morphometric analysis was straightforwardly enabled and allowed the computation of the corresponding

  4. Right-sizing statistical models for longitudinal data.

    Science.gov (United States)

    Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M

    2015-12-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved).

  5. Motion Imitation and Recognition using Parametric Hidden Markov Models

    DEFF Research Database (Denmark)

    Herzog, Dennis; Ude, Ales; Krüger, Volker

    2008-01-01

    ) are important. Only together they convey the whole meaning of an action. Similarly, to imitate a movement, the robot needs to select the proper action and parameterize it, e.g., by the relative position of the object that needs to be grasped. We propose to utilize parametric hidden Markov models (PHMMs), which...... extend the classical HMMs by introducing a joint parameterization of the observation densities, to simultaneously solve the problems of action recognition, parameterization of the observed actions, and action synthesis. The proposed approach was fully implemented on a humanoid robot HOAP-3. To evaluate...... the approach, we focused on reaching and pointing actions. Even though the movements are very similar in appearance, our approach is able to distinguish the two movement types and discover the parameterization, and is thus enabling both, action recognition and action synthesis. Through parameterization we...

  6. Fitting and comparing competing models of the species abundance distribution: assessment and prospect

    Directory of Open Access Journals (Sweden)

    Thomas J Matthews

    2014-06-01

    Full Text Available A species abundance distribution (SAD) characterises patterns in the commonness and rarity of all species within an ecological community. As such, the SAD provides the theoretical foundation for a number of other biogeographical and macroecological patterns, such as the species–area relationship, as well as being an interesting pattern in its own right. While there has been a resurgence in the study of SADs in the last decade, less focus has been placed on methodology in SAD research, and few attempts have been made to synthesise the vast array of methods which have been employed in SAD model evaluation. As such, our review has two aims. First, we provide a general overview of SADs, including descriptions of the commonly used distributions, plotting methods and issues with evaluating SAD models. Second, we review a number of recent advances in SAD model fitting and comparison. We conclude by providing a list of recommendations for fitting and evaluating SAD models. We argue that it is time for SAD studies to move away from many of the traditional methods available for fitting and evaluating models, such as sole reliance on the visual examination of plots, and embrace statistically rigorous techniques. In particular, we recommend the use of both goodness-of-fit tests and model-comparison analyses because each provides unique information which one can use to draw inferences.
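
    In the spirit of these recommendations, the sketch below fits two simple candidate distributions to an invented abundance vector by maximum likelihood and compares them with AIC plus a rough goodness-of-fit check; the lognormal and geometric are stand-ins for the SAD models discussed in the review (e.g. Poisson lognormal, logseries).

        # Minimal sketch: comparing two candidate abundance distributions by AIC.
        import numpy as np
        from scipy import stats

        abund = np.array([1, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 10, 13, 21, 34, 55, 90, 150])

        # Lognormal (continuous approximation), location fixed at zero
        s, loc, scale = stats.lognorm.fit(abund, floc=0)
        ll_ln = np.sum(stats.lognorm.logpdf(abund, s, loc, scale))
        aic_ln = 2 * 2 - 2 * ll_ln

        # Geometric on {1, 2, ...}; MLE is p = 1 / mean
        p_hat = 1.0 / abund.mean()
        ll_geom = np.sum(stats.geom.logpmf(abund, p_hat))
        aic_geom = 2 * 1 - 2 * ll_geom

        ks = stats.kstest(abund, "lognorm", args=(s, loc, scale))  # rough check only
        print(f"AIC lognormal = {aic_ln:.1f}, AIC geometric = {aic_geom:.1f}, KS p = {ks.pvalue:.2f}")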

  7. Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.

    Science.gov (United States)

    Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei

    2015-02-01

    This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.

  8. Impact Response Comparison Between Parametric Human Models and Postmortem Human Subjects with a Wide Range of Obesity Levels.

    Science.gov (United States)

    Zhang, Kai; Cao, Libo; Wang, Yulong; Hwang, Eunjoo; Reed, Matthew P; Forman, Jason; Hu, Jingwen

    2017-10-01

    Field data analyses have shown that obesity significantly increases the occupant injury risks in motor vehicle crashes, but the injury assessment tools for people with obesity are largely lacking. The objectives of this study were to use a mesh morphing method to rapidly generate parametric finite element models with a wide range of obesity levels and to evaluate their biofidelity against impact tests using postmortem human subjects (PMHS). Frontal crash tests using three PMHS seated in a vehicle rear seat compartment with body mass index (BMI) from 24 to 40 kg/m 2 were selected. To develop the human models matching the PMHS geometry, statistical models of external body shape, rib cage, pelvis, and femur were applied to predict the target geometry using age, sex, stature, and BMI. A mesh morphing method based on radial basis functions was used to rapidly morph a baseline human model into the target geometry. The model-predicted body excursions and injury measures were compared to the PMHS tests. Comparisons of occupant kinematics and injury measures between the tests and simulations showed reasonable correlations across the wide range of BMI levels. The parametric human models have the capability to account for the obesity effects on the occupant impact responses and injury risks. © 2017 The Obesity Society.

  9. A flexible, interactive software tool for fitting the parameters of neuronal models.

    Science.gov (United States)

    Friedrich, Péter; Vella, Michael; Gulyás, Attila I; Freund, Tamás F; Káli, Szabolcs

    2014-01-01

    The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.

  10. A flexible, interactive software tool for fitting the parameters of neuronal models

    Directory of Open Access Journals (Sweden)

    Péter eFriedrich

    2014-07-01

    Full Text Available The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problem of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting

  11. A Novel Parametric Modeling Method and Optimal Design for Savonius Wind Turbines

    Directory of Open Access Journals (Sweden)

    Baoshou Zhang

    2017-03-01

    Full Text Available Under the inspiration of polar coordinates, a novel parametric modeling and optimization method for Savonius wind turbines was proposed to obtain the highest power output, in which a quadratic polynomial curve was bent to describe a blade. Only two design parameters are needed for the shape-complicated blade. Therefore, this novel method reduces sampling scale. A series of transient simulations was run to get the optimal performance coefficient (power coefficient Cp) for different modified turbines based on the computational fluid dynamics (CFD) method. Then, a global response surface model and a more precise local response surface model were created according to the Kriging method. These models defined the relationship between the optimization objective Cp and the design parameters. The particle swarm optimization (PSO) algorithm was applied to find the optimal design based on these response surface models. Finally, the optimal Savonius blade, shaped like a "hook", was obtained. Cm (torque coefficient), Cp and flow structure were compared for the optimal design and the classical design. The results demonstrate that the optimal Savonius turbine has excellent comprehensive performance. The power coefficient Cp is significantly increased from 0.247 to 0.262 (6% higher). The weight of the optimal blade is reduced by 17.9%.
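
    A schematic version of the surrogate-based loop is sketched below, with a Gaussian process regressor standing in for the Kriging model and differential evolution standing in for PSO; the design parameters, bounds and Cp samples are invented rather than taken from CFD runs.

        # Minimal sketch: train a surrogate on a few (pretend) CFD samples and
        # search it for the blade parameters with the highest predicted Cp.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel
        from scipy.optimize import differential_evolution

        X = np.array([[0.2, 0.1], [0.4, 0.3], [0.6, 0.5], [0.8, 0.2],
                      [0.3, 0.7], [0.7, 0.8], [0.5, 0.4], [0.9, 0.6]])   # two blade-shape parameters
        cp = np.array([0.21, 0.24, 0.25, 0.22, 0.23, 0.20, 0.26, 0.19])  # Cp from (pretend) CFD runs

        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.3),
                                      normalize_y=True).fit(X, cp)

        res = differential_evolution(lambda x: -gp.predict(x.reshape(1, -1))[0],
                                     bounds=[(0.0, 1.0), (0.0, 1.0)], seed=0)
        print("best parameters:", res.x, " predicted Cp:", -res.fun)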

  12. Parametric modeling and stagger angle optimization of an axial flow fan

    International Nuclear Information System (INIS)

    Li, M X; Zhang, C H; Liu, Y; Zheng, S Y

    2013-01-01

    Axial flow fans are widely used in every field of social production. Improving their efficiency is a sustained and urgent demand of domestic industry. The optimization of stagger angle is an important method to improve fan performance. Parametric modeling and calculation process automation are realized in this paper to improve optimization efficiency. Geometric modeling and mesh division are parameterized based on GAMBIT. Parameter setting and flow field calculation are completed in the batch mode of FLUENT. A control program is developed in Visual C++ to dominate the data exchange of mentioned software. It also extracts calculation results for optimization algorithm module (provided by Matlab) to generate directive optimization control parameters, which as feedback are transferred upwards to modeling module. The center line of the blade airfoil, based on CLARK y profile, is constructed by non-constant circulation and triangle discharge method. Stagger angles of six airfoil sections are optimized, to reduce the influence of inlet shock loss as well as gas leak in blade tip clearance and hub resistance at blade root. Finally an optimal solution is obtained, which meets the total pressure requirement under given conditions and improves total pressure efficiency by about 6%

  13. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    International Nuclear Information System (INIS)

    Berry, Tyrus; Harlim, John

    2016-01-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.

  14. An Online Method for Interpolating Linear Parametric Reduced-Order Models

    KAUST Repository

    Amsallem, David; Farhat, Charbel

    2011-01-01

    A two-step online method is proposed for interpolating projection-based linear parametric reduced-order models (ROMs) in order to construct a new ROM for a new set of parameter values. The first step of this method transforms each precomputed ROM into a consistent set of generalized coordinates. The second step interpolates the associated linear operators on their appropriate matrix manifold. Real-time performance is achieved by precomputing inner products between the reduced-order bases underlying the precomputed ROMs. The proposed method is illustrated by applications in mechanical and aeronautical engineering. In particular, its robustness is demonstrated by its ability to handle the case where the sampled parameter set values exhibit a mode veering phenomenon. © 2011 Society for Industrial and Applied Mathematics.

  15. Kernel-density estimation and approximate Bayesian computation for flexible epidemiological model fitting in Python.

    Science.gov (United States)

    Irvine, Michael A; Hollingsworth, T Déirdre

    2018-05-26

    Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open access library, with examples to aid researchers to rapidly fit models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
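
    A stripped-down rejection version of the idea is sketched below: draw parameters from a prior, simulate, and keep the draws whose summaries fall closest to the observed ones (an implicit, adaptive tolerance). The paper's kernel density summaries and the lymphatic filariasis simulator are not reproduced; the toy simulator and priors are invented.

        # Minimal sketch: rejection ABC with a data-driven tolerance.
        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.negative_binomial(3, 0.3, size=200)      # pretend field data
        obs_summary = np.array([data.mean(), data.var()])

        def simulate(mean, overdisp, size=200):
            """Toy count simulator parameterised by mean and overdispersion."""
            p = overdisp / (overdisp + mean)
            return rng.negative_binomial(overdisp, p, size=size)

        def abc(n_keep=200, n_draws=20000):
            draws, dists = [], []
            for _ in range(n_draws):
                theta = rng.uniform([1.0, 0.5], [15.0, 10.0])   # prior on (mean, overdispersion)
                s = simulate(*theta)
                summary = np.array([s.mean(), s.var()])
                draws.append(theta)
                dists.append(np.linalg.norm((summary - obs_summary) / obs_summary))
            keep = np.argsort(dists)[:n_keep]                   # keep the closest draws
            return np.array(draws)[keep]

        posterior = abc()
        print("posterior means:", posterior.mean(axis=0))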

  16. Modelling biochemical networks with intrinsic time delays: a hybrid semi-parametric approach

    Directory of Open Access Journals (Sweden)

    Oliveira Rui

    2010-09-01

    Full Text Available Abstract Background This paper presents a method for modelling dynamical biochemical networks with intrinsic time delays. Since the fundamental mechanisms leading to such delays are many times unknown, non conventional modelling approaches become necessary. Herein, a hybrid semi-parametric identification methodology is proposed in which discrete time series are incorporated into fundamental material balance models. This integration results in hybrid delay differential equations which can be applied to identify unknown cellular dynamics. Results The proposed hybrid modelling methodology was evaluated using two case studies. The first of these deals with dynamic modelling of transcriptional factor A in mammalian cells. The protein transport from the cytosol to the nucleus introduced a delay that was accounted for by discrete time series formulation. The second case study focused on a simple network with distributed time delays that demonstrated that the discrete time delay formalism has broad applicability to both discrete and distributed delay problems. Conclusions Significantly better prediction qualities of the novel hybrid model were obtained when compared to dynamical structures without time delays, being the more distinctive the more significant the underlying system delay is. The identification of the system delays by studies of different discrete modelling delays was enabled by the proposed structure. Further, it was shown that the hybrid discrete delay methodology is not limited to discrete delay systems. The proposed method is a powerful tool to identify time delays in ill-defined biochemical networks.

  17. Parametric Study of Synthetic-Jet-Based Flow Control on a Vertical Tail Model

    Science.gov (United States)

    Monastero, Marianne; Lindstrom, Annika; Beyar, Michael; Amitay, Michael

    2015-11-01

    Separation control over the rudder of the vertical tail of a commercial airplane using synthetic-jet-based flow control can lead to a reduction in tail size, with an associated decrease in drag and increase in fuel savings. A parametric, experimental study was undertaken using an array of finite span synthetic jets to investigate the sensitivity of the enhanced vertical tail side force to jet parameters, such as jet spanwise spacing and jet momentum coefficient. A generic wind tunnel model was designed and fabricated to fundamentally study the effects of the jet parameters at varying rudder deflection and model sideslip angles. Wind tunnel results obtained from pressure measurements and tuft flow visualization in the Rensselaer Polytechnic Subsonic Wind Tunnel show a decrease in separation severity and increase in model performance in comparison to the baseline, non-actuated case. The sensitivity to various parameters will be presented.

  18. Parametric geometric model and hydrodynamic shape optimization of a flying-wing structure underwater glider

    Science.gov (United States)

    Wang, Zhen-yu; Yu, Jian-cheng; Zhang, Ai-qun; Wang, Ya-xing; Zhao, Wen-tao

    2017-12-01

    Combining high-precision numerical analysis methods with optimization algorithms to systematically explore a design space has become an important topic in modern design methods. During the design process of an underwater glider's flying-wing structure, a surrogate model is introduced to decrease the computation time required for high-precision analysis. By these means, the contradiction between precision and efficiency is resolved effectively. Based on parametric geometry modeling, mesh generation and computational fluid dynamics analysis, a surrogate model is constructed by adopting design of experiments (DOE) theory to solve the multi-objective design optimization problem of the underwater glider. The procedure of surrogate model construction is presented, and the Gaussian kernel function is specifically discussed. The Particle Swarm Optimization (PSO) algorithm is applied to the hydrodynamic design optimization. The hydrodynamic performance of the optimized flying-wing structure underwater glider increases by 9.1%.

  19. Moment stability for a predator–prey model with parametric dichotomous noises

    International Nuclear Information System (INIS)

    Jin Yan-Fei

    2015-01-01

    In this paper, we investigate the solution moment stability for a Harrison-type predator–prey model with parametric dichotomous noises. Using the Shapiro–Loginov formula, the equations for the first-order and second-order moments are obtained and the corresponding stable conditions are given. It is found that the solution moment stability depends on the noise intensity and correlation time of noise. The first-order and second-order moments become unstable with the decrease of correlation time. That is, the dichotomous noise can improve the solution moment stability with respect to Gaussian white noise. Finally, some numerical results are presented to verify the theoretical analyses. (paper)
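
    For readers unfamiliar with the Shapiro–Loginov formula invoked above, a commonly quoted form for a zero-mean, exponentially correlated (e.g. dichotomous) noise is given below; the specific predator–prey moment equations of the paper are not reproduced here.

    ```latex
    % Shapiro-Loginov formula for a zero-mean, exponentially correlated noise \xi(t)
    % (e.g. dichotomous noise with \langle\xi(t)\xi(t')\rangle = \sigma^2 e^{-|t-t'|/\tau}):
    \[
      \frac{d}{dt}\,\bigl\langle \xi(t)\, f(t) \bigr\rangle
      \;=\; \Bigl\langle \xi(t)\, \frac{df}{dt} \Bigr\rangle
      \;-\; \frac{1}{\tau}\,\bigl\langle \xi(t)\, f(t) \bigr\rangle ,
    \]
    % where f(t) is any functional of the noise history and \tau is the noise
    % correlation time; it is used to close the first- and second-order moment equations.
    ```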

  20. An Improved Cognitive Model of the Iowa and Soochow Gambling Tasks With Regard to Model Fitting Performance and Tests of Parameter Consistency

    Directory of Open Access Journals (Sweden)

    Junyi eDai

    2015-03-01

    Full Text Available The Iowa Gambling Task (IGT) and the Soochow Gambling Task (SGT) are two experience-based risky decision-making tasks for examining decision-making deficits in clinical populations. Several cognitive models, including the expectancy-valence learning model (EVL) and the prospect valence learning model (PVL), have been developed to disentangle the motivational, cognitive, and response processes underlying the explicit choices in these tasks. The purpose of the current study was to develop an improved model that can fit empirical data better than the EVL and PVL models and, in addition, produce more consistent parameter estimates across the IGT and SGT. Twenty-six opiate users (mean age 34.23; SD 8.79) and 27 control participants (mean age 35; SD 10.44) completed both tasks. Eighteen cognitive models varying in evaluation, updating, and choice rules were fit to individual data and their performances were compared to that of a statistical baseline model to find a best fitting model. The results showed that the model combining the prospect utility function treating gains and losses separately, the decay-reinforcement updating rule, and the trial-independent choice rule performed the best in both tasks. Furthermore, the winning model produced more consistent individual parameter estimates across the two tasks than any of the other models.
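
    The abstract names the three components of the winning model (prospect utility treating gains and losses separately, decay-reinforcement updating, a trial-independent choice rule) without giving their equations. The sketch below follows one common parameterization from the PVL literature and, for brevity, applies the utility to net payoffs; all parameter values and the theta = 3**c - 1 sensitivity mapping are illustrative assumptions, not the authors' exact specification.

    ```python
    import numpy as np

    def prospect_utility(x, alpha=0.5, lam=1.5):
        """Prospect-type utility; applied here to the net payoff for brevity
        (the model described above evaluates gains and losses separately)."""
        return x**alpha if x >= 0 else -lam * abs(x)**alpha

    def softmax(values, theta):
        e = np.exp(theta * (values - values.max()))
        return e / e.sum()

    def simulate_choices(payoffs, alpha=0.5, lam=1.5, decay=0.8, c=1.0, seed=0):
        """Decay-reinforcement updating with a trial-independent choice rule:
           Ev_j <- decay * Ev_j + 1{j chosen} * u(x_t)
           P(choose j) = softmax(theta * Ev), theta = 3**c - 1  (illustrative mapping)."""
        rng = np.random.default_rng(seed)
        n_trials, n_decks = payoffs.shape
        ev = np.zeros(n_decks)
        theta = 3.0**c - 1.0
        choices = np.empty(n_trials, dtype=int)
        for t in range(n_trials):
            j = rng.choice(n_decks, p=softmax(ev, theta))
            ev *= decay                                          # all expectancies decay ...
            ev[j] += prospect_utility(payoffs[t, j], alpha, lam) # ... chosen deck is reinforced
            choices[t] = j
        return choices

    # Toy demonstration with hypothetical net payoffs for 100 trials and 4 decks
    payoffs = np.random.default_rng(1).choice([100, -250, 50, -50], size=(100, 4))
    print(simulate_choices(payoffs)[:10])
    ```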

  1. HDFITS: Porting the FITS data model to HDF5

    Science.gov (United States)

    Price, D. C.; Barsdell, B. R.; Greenhill, L. J.

    2015-09-01

    The FITS (Flexible Image Transport System) data format has been the de facto data format for astronomy-related data products since its inception in the late 1970s. While the FITS file format is widely supported, it lacks many of the features of more modern data serialization formats, such as the Hierarchical Data Format (HDF5). The HDF5 file format offers considerable advantages over FITS, such as improved I/O speed and compression, but has yet to gain widespread adoption within astronomy. One of the major obstacles is that HDF5 is not well supported by data reduction software packages and image viewers. Here, we present a comparison of FITS and HDF5 as a format for storage of astronomy datasets. We show that the underlying data model of FITS can be ported to HDF5 in a straightforward manner, and that by doing so the advantages of the HDF5 file format can be leveraged immediately. In addition, we present a software tool, fits2hdf, for converting between FITS and a new 'HDFITS' format, where data are stored in HDF5 in a FITS-like manner. We show that HDFITS allows faster reading of data (up to 100x faster than FITS in some use cases), and improved compression (higher compression ratios and higher throughput). Finally, we show that by only changing the import lines in Python-based FITS utilities, HDFITS formatted data can be presented transparently as an in-memory FITS equivalent.
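
    The fits2hdf tool itself is not reproduced here; the following is a minimal, assumption-laden sketch of the FITS-to-HDF5 porting idea using astropy and h5py, aimed at image HDUs, with placeholder file names.

    ```python
    from astropy.io import fits
    import h5py

    def fits_to_hdf5(fits_path, h5_path):
        """Port each FITS HDU to an HDF5 group: the data array becomes a compressed
        dataset and header cards become attributes (image HDUs; repeated
        COMMENT/HISTORY cards are collapsed in this simplified sketch)."""
        with fits.open(fits_path) as hdulist, h5py.File(h5_path, "w") as h5:
            for i, hdu in enumerate(hdulist):
                grp = h5.create_group(f"HDU{i}")
                if hdu.data is not None:
                    grp.create_dataset("DATA", data=hdu.data,
                                       compression="gzip", compression_opts=4)
                for key, value in hdu.header.items():
                    if key:                               # skip blank cards
                        try:
                            grp.attrs[key] = value
                        except TypeError:
                            grp.attrs[key] = str(value)   # e.g. undefined header values
        return h5_path

    # fits_to_hdf5("example.fits", "example.h5")          # placeholder file names
    ```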

  2. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    Science.gov (United States)

    DeCarlo, Lawrence T

    2003-02-01

    The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.

  3. Application of semi-parametric modelling to time series forecasting: the case of electricity consumption; Modeles semi-parametriques appliques a la prevision des series temporelles. Cas de la consommation d'electricite

    Energy Technology Data Exchange (ETDEWEB)

    Lefieux, V

    2007-10-15

    Reseau de Transport d'Electricite (RTE), in charge of operating the French electricity transmission grid, needs an accurate forecast of power consumption in order to operate it correctly. The forecasts used every day result from a model combining a nonlinear parametric regression and a SARIMA model. In order to obtain an adaptive forecasting model, nonparametric forecasting methods have already been tested without real success. In particular, it is known that a nonparametric predictor behaves badly with a great number of explanatory variables, which is commonly called the curse of dimensionality. Recently, semi-parametric methods which improve on the pure nonparametric approach have been proposed to estimate a regression function. Based on the concept of 'dimension reduction', one of those methods (called MAVE: Moving Average -conditional- Variance Estimate) can be applied to time series. We study empirically its effectiveness in predicting the future values of an autoregressive time series. We then adapt this method, from a practical point of view, to forecast power consumption. We propose a partially linear semi-parametric model, based on the MAVE method, which allows the autoregressive aspect of the problem and the exogenous variables to be taken into account simultaneously. The proposed estimation procedure is practically efficient. (author)

  4. Study on fitness functions of genetic algorithm for dynamically correcting nuclide atmospheric diffusion model

    International Nuclear Information System (INIS)

    Ji Zhilong; Ma Yuanwei; Wang Dezhong

    2014-01-01

    Background: In atmospheric diffusion models for radioactive nuclides, the empirical dispersion coefficients were deduced under certain experimental conditions, and their difference from nuclear accident conditions is a source of deviation. A better estimate of the actual dispersion process of radioactive nuclides can be obtained by correcting the dispersion coefficients with observation data, and the Genetic Algorithm (GA) is an appropriate method for this correction procedure. Purpose: This study analyzes the influence of the fitness function on the correction procedure and on the forecast ability of the diffusion model. Methods: GA, coupled with a Lagrangian dispersion model, was used in a numerical simulation to compare the impact of four fitness functions on the correction result. Results: In the numerical simulation, the fitness function that takes observation deviation into consideration stands out when significant deviation exists in the observed data. After performing the correction procedure on the Kincaid experiment data, a significant improvement was observed in the diffusion model's forecast ability. Conclusion: As the results show, in order to improve a dispersion model's forecast ability using GA, observation data should be given different weights in the fitness function corresponding to their errors. (authors)
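
    The abstract does not spell out the four fitness functions that were compared. As a purely illustrative sketch of the conclusion — that observations should be weighted according to their errors — one plausible weighted form is contrasted with a plain least-squares fitness below; the function names and exact form are assumptions, not the study's definitions.

    ```python
    import numpy as np

    def weighted_fitness(predicted, observed, obs_sigma):
        """Fitness that accounts for observation deviation: each residual is scaled
        by the estimated error of its observation, so noisier measurements weigh
        less (illustrative form only)."""
        chi2 = np.sum(((predicted - observed) / obs_sigma) ** 2)
        return -chi2                  # a GA maximizes fitness, so return the negative misfit

    def plain_fitness(predicted, observed):
        """Unweighted least-squares fitness for comparison."""
        return -np.sum((predicted - observed) ** 2)
    ```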

  5. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    Science.gov (United States)

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  6. Parametric Accuracy: Building Information Modeling Process Applied to the Cultural Heritage Preservation

    Science.gov (United States)

    Garagnani, S.; Manferdini, A. M.

    2013-02-01

    Since their introduction, modeling tools aimed at architectural design have evolved into today's "digital multi-purpose drawing boards" based on enhanced parametric elements able to originate whole buildings within virtual environments. Semantic splitting and element topology are features that allow objects to be "intelligent" (i.e. self-aware of what kind of element they are and with whom they can interact), representing in this way the basics of Building Information Modeling (BIM), a coordinated, consistent and always up-to-date workflow designed to reach higher quality, reliability and cost reductions all over the design process. Even if BIM was originally intended for new architectures, its aptitude for storing semantically inter-related information can be successfully applied to existing buildings as well, especially if they deserve particular care such as Cultural Heritage sites. BIM engines can easily manage simple parametric geometries, collapsing them to standard primitives connected through hierarchical relationships: however, when components are generated from existing morphologies, for example by acquiring point clouds with digital photogrammetry or laser scanning equipment, complex abstractions have to be introduced while remodeling elements by hand, since automatic feature extraction in available software is still not effective. In order to introduce a methodology intended to process point cloud data in a BIM environment with high accuracy, this paper describes some experiences in monumental site documentation, generated through a plug-in written for Autodesk Revit and codenamed GreenSpider after its capability to lay out points in space as if they were nodes of an ideal cobweb.

  7. Function parametrization by using 4-point transforms

    International Nuclear Information System (INIS)

    Dikusar, N.D.

    1996-01-01

    A continuous parametrization of the smooth curve f(x)=f(x;R) is suggested on the basis of four-point transformations. Coordinates of three reference points of the curve are chosen as the parameters R. This approach yields a number of advantages in function approximation and in fitting empirical data. The transformations have made it possible to derive a new class of polynomials (monosplines) with better approximation quality than the monomials {x^n}. The approximation error behaves uniformly. A three-point model of the cubic spline (TPS) is proposed. The model allows the number of unknown parameters to be halved and offers a computational advantage. The new approach to function approximation and fitting is illustrated on a number of examples. The proposed approach provides a new mathematical tool and new possibilities in both practical applications and theoretical research on numerical and computational methods. 13 refs., 13 figs., 2 tabs

  8. Efficient Constrained Local Model Fitting for Non-Rigid Face Alignment.

    Science.gov (United States)

    Lucey, Simon; Wang, Yang; Cox, Mark; Sridharan, Sridha; Cohn, Jeffery F

    2009-11-01

    Active appearance models (AAMs) have demonstrated great utility when being employed for non-rigid face alignment/tracking. The "simultaneous" algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real time performance (2-3 fps). The "project-out" algorithm for fitting an AAM achieves faster than real time performance (> 200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve superior performance to the "simultaneous" AAM algorithm along with real time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criteria, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criteria for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization lends from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the "exhaustive local search" (ELS) algorithm. Experiments were conducted on the CMU Multi-PIE database.

  9. Parametric Cost Modeling of Space Missions Using the Develop New Projects (DNP) Implementation Process

    Science.gov (United States)

    Rosenberg, Leigh; Hihn, Jairus; Roust, Kevin; Warfield, Keith

    2000-01-01

    This paper presents an overview of a parametric cost model that has been built at JPL to estimate costs of future, deep space, robotic science missions. Due to the recent dramatic changes in JPL business practices brought about by an internal reengineering effort known as develop new products (DNP), high-level historic cost data is no longer considered analogous to future missions. Therefore, the historic data is of little value in forecasting costs for projects developed using the DNP process. This has led to the development of an approach for obtaining expert opinion and also for combining actual data with expert opinion to provide a cost database for future missions. In addition, the DNP cost model relies on objective cost drivers as much as possible, which reduces the likelihood of model input error. Version 2, now under development, expands the model capabilities, links it more tightly with key technical design parameters, and is grounded in more rigorous statistical techniques. The challenges faced in building this model will be discussed, as well as its background, development approach, status, validation, and future plans.

  10. Person-fit to the Five Factor Model of personality

    Czech Academy of Sciences Publication Activity Database

    Allik, J.; Realo, A.; Mõttus, R.; Borkenau, P.; Kuppens, P.; Hřebíčková, Martina

    2012-01-01

    Roč. 71, č. 1 (2012), s. 35-45 ISSN 1421-0185 R&D Projects: GA ČR GAP407/10/2394 Institutional research plan: CEZ:AV0Z70250504 Keywords: Five Factor Model * cross-cultural comparison * person-fit Subject RIV: AN - Psychology Impact factor: 0.638, year: 2012

  11. Model-independent partial wave analysis using a massively-parallel fitting framework

    Science.gov (United States)

    Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.

    2017-10-01

    The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D⁺ → h⁺h⁺h⁻. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m²(h⁺h⁻) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.

  12. Automating the Simulation of SME Processes through a Discrete Event Parametric Model

    Directory of Open Access Journals (Sweden)

    Francesco Aggogeri

    2015-02-01

    Full Text Available At the factory level, the manufacturing system can be described as a group of processes governed by complex weaves of engineering strategies and technologies. Decision-making processes involve a lot of information, driven by managerial strategies, technological implications and layout constraints. Many factors affect decisions, and their combination must be carefully managed to determine the best solutions to optimize performances. In this way, advanced simulation tools could support the decision-making process of many SMEs. The accessibility of these tools is limited by knowledge, cost, data availability and development time. These tools should be used to support strategic decisions rather than specific situations. In this paper, a novel approach is proposed that aims to facilitate the simulation of manufacturing processes by fast modelling and evaluation. The idea is to realize a model that is able to be automatically adapted to the user's specific needs. The model must be characterized by a high degree of flexibility, configurability and adaptability in order to automatically simulate multiple/heterogeneous industrial scenarios. In this way, even an SME can easily access a complex tool, perform thorough analyses and be supported in taking strategic decisions. The parametric DES model is part of a greater software platform developed during the COPERNICO EU-funded project.

  13. Moment stability for a predator-prey model with parametric dichotomous noises

    Science.gov (United States)

    Jin, Yan-Fei

    2015-06-01

    In this paper, we investigate the solution moment stability for a Harrison-type predator-prey model with parametric dichotomous noises. Using the Shapiro-Loginov formula, the equations for the first-order and second-order moments are obtained and the corresponding stable conditions are given. It is found that the solution moment stability depends on the noise intensity and correlation time of noise. The first-order and second-order moments become unstable with the decrease of correlation time. That is, the dichotomous noise can improve the solution moment stability with respect to Gaussian white noise. Finally, some numerical results are presented to verify the theoretical analyses. Project supported by the National Natural Science Foundation of China (Grant No. 11272051).

  14. A Model for Straight and Helical Solar Jets: II. Parametric Study of the Plasma Beta

    Science.gov (United States)

    Pariat, E.; Dalmasse, K.; DeVore, C. R.; Antiochos, S. K.; Karpen, J. T.

    2016-01-01

    Context. Jets are dynamic, impulsive, well-collimated plasma events that develop at many different scales and in different layers of the solar atmosphere. Aims. Jets are believed to be induced by magnetic reconnection, a process central to many astrophysical phenomena. Within the solar atmosphere, jet-like events develop in many different environments, e.g. in the vicinity of active regions as well as in coronal holes, and at various scales, from small photospheric spicules to large coronal jets. In all these events, signatures of helical structure and/or twisting/rotating motions are regularly observed. The present study aims to establish that a single model can generally reproduce the observed properties of these jet-like events. Methods. In this study, using our state-of-the-art numerical solver ARMS, we present a parametric study of a three-dimensional numerical magnetohydrodynamic (MHD) model of solar jet-like events. Within the MHD paradigm, we study the impact of varying the atmospheric plasma beta on the generation and properties of solar-like jets. Results. The parametric study validates our model of jets for plasma beta ranging from 10⁻³ to 1, typical of the different layers and magnetic environments of the solar atmosphere. Our model of jets can robustly explain the generation of helical solar jet-like events at various beta less than or equal to 1. We show that the plasma beta modifies the morphology of the helical jet, explaining the different observed shapes of jets at different scales and in different layers of the solar atmosphere. Conclusions. Our results allow us to understand the energisation, triggering, and driving processes of jet-like events. Our model allows us to make predictions of the impulsiveness and energetics of jets as determined by the surrounding environment, as well as the morphological properties of the resulting jets.

  15. Introducing the fit-criteria assessment plot - A visualisation tool to assist class enumeration in group-based trajectory modelling.

    Science.gov (United States)

    Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria

    2017-10-01

    Background and objective: Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods: An R-code that accepts universal data input is presented. The programme condenses the relevant group-based trajectory modelling output, namely the model fit indices, into automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate the fit-criteria assessment plot's utility. Results: The fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. The fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions: The fit-criteria assessment plot is an exploratory visualisation tool that can be employed to assist decisions in the initial and decisive phases of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster its adequate use.

  16. Fast fitting of non-Gaussian state-space models to animal movement data via Template Model Builder

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Whoriskey, Kim; Yurkowski, David

    2015-01-01

    recommend using the Laplace approximation combined with automatic differentiation (as implemented in the novel R package Template Model Builder; TMB) for the fast fitting of continuous-time multivariate non-Gaussian SSMs. Through Argos satellite tracking data, we demonstrate that the use of continuous...... are able to estimate additional parameters compared to previous methods, all without requiring a substantial increase in computational time. The model implementation is made available through the R package argosTrack....

  17. Thermal modeling and parametric studies of a greenhouse fish pond in the Central Himalayan Region

    International Nuclear Information System (INIS)

    Sarkar, Bikash; Tiwari, G.N.

    2006-01-01

    This study describes the thermal modeling of greenhouse fish pond systems and its validation. Numerical computations have been performed for a typical day in the month of June, 2005, for the climatic conditions of Champawat in the Central Himalayan Region. The energy balance equations have been written considering the effects of conduction, convection, radiation, evaporation and ventilation. The governing equations are numerically solved with Matlab 7.0 software to predict the water temperature. A parametric study has also been performed to find the effects of various parameters, namely the number of air changes per hour, the transmissivity (τ) and the isothermal mass and height of the greenhouse. The parametric studies show no significant effect on water temperature, owing to the large isothermal mass. The model has been validated with experimental data. On average, the even-span passive greenhouse fish pond can maintain an inside water temperature 4.14 °C higher than that of an outdoor pond. Statistical analysis shows that the predicted and experimental values of water temperature exhibit fair agreement, with a coefficient of correlation r = 0.90 and a root mean square percent deviation e = 1.67%.

  18. Modeling of constituent redistribution in U-Pu-Zr metallic fuel

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yeon Soo [Argonne National Laboratory, Nuclear Engineering, RERTR, 9700 South Cass Avenue, Argonne, IL 60439 (United States)]. E-mail: yskim@anl.gov; Hayes, S.L. [Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-6188 (United States); Hofman, G.L. [Argonne National Laboratory, Nuclear Engineering, RERTR, 9700 South Cass Avenue, Argonne, IL 60439 (United States); Yacout, A.M. [Argonne National Laboratory, Nuclear Engineering, RERTR, 9700 South Cass Avenue, Argonne, IL 60439 (United States)

    2006-12-01

    A computer model was developed to analyze constituent redistribution in U-Pu-Zr metallic nuclear fuels. Diffusion and thermochemical properties were parametrically determined to fit the postirradiation data from a fuel test performed in the Experimental Breeder Reactor II (EBR-II). The computer model was used to estimate redistribution profiles of fuels proposed for the conceptual designs of small modular fast reactors. The model results showed that the level of redistribution of the fuel constituents of the designs was similar to the measured data from EBR-II.

  19. A methodological approach to parametric product modelling in motor car development; Ein methodischer Ansatz zur parametrischen Produktmodellierung in der Fahrzeugentwicklung

    Energy Technology Data Exchange (ETDEWEB)

    Boehme, M.

    2004-07-01

    Continuous improvement of processes and methodologies is one key element in shortening development time, reducing costs, and improving quality, and therefore in responding to growing customer demands and global competition. This work describes a new concept for introducing the principles of parametric modeling to the entire product data model in the area of automotive development. Based on the idea that not only geometric dimensions can be described by parameters, the method of parametric modeling is applied to the complete product model. The concept assumes four major principles: First, the parameters of the product model are handled independently from their proprietary data formats. Secondly, a strictly hierarchical structure is required for the parametric description of the product. The third principle demands an object-based parameterization. Finally, the use of parameter sets for the description of logical units of the product model tree is part of the concept. Those four principles address the following main objectives: supporting and improving Simultaneous Engineering, achieving data consistency over all development phases, digital approval of product properties, and incorporation of the design intent into the product model. Further improvement of the automotive development process can be achieved with the introduction of parametric product modeling using the principles described in this paper. (orig.) [German] The demand for shorter development times, cost reduction and improved quality requires a continuous improvement of processes and methods in product development. In this work, a new approach is presented that applies the methodology of parametric design to the entire product model in vehicle development and thus opens up further potential for improving the product development process. Starting from the assumption that not only geometric dimensions can be described as parameters, the

  20. Realistic modelling of the seismic input: Site effects and parametric studies

    International Nuclear Information System (INIS)

    Romanelli, F.; Vaccari, F.; Panza, G.F.

    2002-11-01

    We illustrate the work done in the framework of a large international cooperation, showing the very recent numerical experiments carried out within the EC project 'Advanced methods for assessing the seismic vulnerability of existing motorway bridges' (VAB) to assess the importance of non-synchronous seismic excitation of long structures. The definition of the seismic input at the Warth bridge site, i.e. the determination of the seismic ground motion due to an earthquake with a given magnitude and epicentral distance from the site, has been done following a theoretical approach. In order to perform an accurate and realistic estimate of site effects and of differential motion it is necessary to make a parametric study that takes into account the complex combination of the source and propagation parameters, in realistic geological structures. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different sources and structural models, allows the construction of damage scenarios that are out of the reach of stochastic models, at a very low cost/benefit ratio. (author)

  1. A Data-Driven Method for Selecting Optimal Models Based on Graphical Visualisation of Differences in Sequentially Fitted ROC Model Parameters

    Directory of Open Access Journals (Sweden)

    K S Mwitondi

    2013-05-01

    Full Text Available Differences in modelling techniques and model performance assessments typically impinge on the quality of knowledge extraction from data. We propose an algorithm for determining optimal patterns in data by separately training and testing three decision tree models in the Pima Indians Diabetes and the Bupa Liver Disorders datasets. Model performance is assessed using ROC curves and the Youden Index. Moving differences between sequential fitted parameters are then extracted, and their respective probability density estimations are used to track their variability using an iterative graphical data visualisation technique developed for this purpose. Our results show that the proposed strategy separates the groups more robustly than the plain ROC/Youden approach, eliminates obscurity, and minimizes over-fitting. Further, the algorithm can easily be understood by non-specialists and demonstrates multi-disciplinary compliance.

  2. Incorporating prior belief in the general path model: A comparison of information sources

    International Nuclear Information System (INIS)

    Coble, Jamie; Hines, Wesley

    2014-01-01

    The general path model (GPM) is one approach for performing degradation-based, or Type III, prognostics. The GPM fits a parametric function to the collected observations of a prognostic parameter and extrapolates the fit to a failure threshold. This approach has been successfully applied to a variety of systems when a sufficient number of prognostic parameter observations are available. However, the parametric fit can suffer significantly when few data are available or the data are very noisy. In these instances, it is beneficial to include additional information to influence the fit to conform to a prior belief about the evolution of system degradation. Bayesian statistical approaches have been proposed to include prior information in the form of distributions of expected model parameters. This requires a number of run-to-failure cases with tracked prognostic parameters; these data may not be readily available for many systems. Reliability information and stressor-based (Type I and Type II, respectively) prognostic estimates can provide the necessary prior belief for the GPM. This article presents the Bayesian updating framework to include prior information in the GPM and compares the efficacy of including different information sources on two data sets.
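
    As a concrete illustration of the basic (non-Bayesian) GPM step described above — fit a parametric function to the prognostic parameter and extrapolate it to a failure threshold — the following sketch uses a simple polynomial path on synthetic data; the degradation model, threshold and data are hypothetical, and the Bayesian prior-updating of the article is not implemented here.

    ```python
    import numpy as np

    def gpm_rul(times, prognostic_parameter, failure_threshold, degree=2):
        """Fit a polynomial degradation path to the prognostic parameter and
        extrapolate it to the failure threshold (basic, non-Bayesian GPM step)."""
        coeffs = np.polyfit(times, prognostic_parameter, degree)
        shifted = coeffs.copy()
        shifted[-1] -= failure_threshold          # roots of p(t) - threshold = 0
        roots = np.roots(shifted)
        future = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > times[-1]]
        if not future:
            return None                           # fitted path never reaches the threshold
        return min(future) - times[-1]            # estimated remaining useful life

    # Hypothetical example: noisy quadratic degradation, failure at a parameter value of 1.2
    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 10.0, 30)
    y = 0.008 * t**2 + 0.02 * t + 0.01 * rng.standard_normal(30)
    print(gpm_rul(t, y, failure_threshold=1.2))
    ```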

  3. The Meaning of Goodness-of-Fit Tests: Commentary on "Goodness-of-Fit Assessment of Item Response Theory Models"

    Science.gov (United States)

    Thissen, David

    2013-01-01

    In this commentary, David Thissen states that "Goodness-of-fit assessment for IRT models is maturing; it has come a long way from zero." Thissen then references prior works on "goodness of fit" in the index of Lord and Novick's (1968) classic text; Yen (1984); Drasgow, Levine, Tsien, Williams, and Mead (1995); Chen and…

  4. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    Directory of Open Access Journals (Sweden)

    Jinwei Wang

    2014-01-01

    Full Text Available The active appearance model (AAM is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA on the Nvidia’s GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  5. A parametric investigation of hydrogen hcci combustion using a multi-zone model approach

    International Nuclear Information System (INIS)

    Komninos, N.P.; Hountalas, D.T.; Rakopoulos, C.D.

    2007-01-01

    The purpose of the present study is to examine the effect of various operating variables of a homogeneous charge compression ignition (HCCI) engine fueled with hydrogen, using a multi-zone model developed by the authors. The multi-zone model consists of zones, which are allotted spatial locations within the combustion chamber. The model takes into account heat transfer between the zones and the combustion chamber walls, providing a spatial temperature distribution during the closed part of the engine cycle, i.e. compression, combustion and expansion. Mass transfer between zones is also accounted for, based on the geometric configuration of the zones, and includes the flow of mass in and out of the crevice regions, represented by the crevice zone. Combustion is incorporated using chemical kinetics based on a chemical reaction mechanism for the oxidation of hydrogen. This chemical reaction mechanism also includes the reactions for nitrogen oxides formation. Using the multi-zone model a parametric investigation is conducted, in order to determine the effect of engine speed, equivalence ratio, compression ratio, inlet pressure and inlet temperature, on the performance, combustion characteristics and emissions of an HCCI engine fueled with hydrogen

  6. Fitting Diffusion Item Response Theory Models for Responses and Response Times Using the R Package diffIRT

    Directory of Open Access Journals (Sweden)

    Dylan Molenaar

    2015-08-01

    Full Text Available In the psychometric literature, item response theory models have been proposed that explicitly take the decision process underlying the responses of subjects to psychometric test items into account. Application of these models is however hampered by the absence of general and flexible software to fit these models. In this paper, we present diffIRT, an R package that can be used to fit item response theory models that are based on a diffusion process. We discuss parameter estimation and model fit assessment, show the viability of the package in a simulation study, and illustrate the use of the package with two datasets pertaining to extraversion and mental rotation. In addition, we illustrate how the package can be used to fit the traditional diffusion model (as it was originally developed in experimental psychology) to data.

  7. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically,three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.

  8. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically,three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available
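
    The two records above list probability plotting, method of moments, maximum likelihood estimation and least squares as techniques for fitting continuous distributions to data. The sketch below illustrates only the maximum-likelihood step together with a goodness-of-fit check on synthetic data; it is not tied to the Yucca Mountain study, and the candidate distribution is an arbitrary choice.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.lognormal(mean=1.0, sigma=0.5, size=200)   # synthetic stand-in for an input parameter

    # Maximum likelihood fit of a candidate parametric model (lognormal, location fixed at 0)
    shape, loc, scale = stats.lognorm.fit(data, floc=0)

    # Quick cross-check from the log-data (simple moment-style estimates)
    mu_hat, sigma_hat = np.log(data).mean(), np.log(data).std(ddof=1)

    # Goodness of fit: Kolmogorov-Smirnov test against the fitted CDF.
    # (Strictly, the p-value is only approximate because the parameters were
    #  estimated from the same data.)
    ks_stat, p_value = stats.kstest(data, "lognorm", args=(shape, loc, scale))
    print(f"MLE sigma={shape:.3f}, scale={scale:.3f}; "
          f"check sigma={sigma_hat:.3f}, scale={np.exp(mu_hat):.3f}; KS p={p_value:.3f}")
    ```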

  9. Parametrization in models of subcritical glass fracture: Activation offset and concerted activation

    Science.gov (United States)

    Rodrigues, Bruno Poletto; Hühn, Carolin; Erlebach, Andreas; Mey, Dorothea; Sierka, Marek; Wondraczek, Lothar

    2017-08-01

    There are two established but fundamentally different empirical approaches to parametrizing the rate of subcritical fracture in brittle materials. While both rely on a thermally activated reaction of bond rupture, the difference lies in how the externally applied stresses affect the local energy landscape. For inorganic glasses, the strain energy is typically taken as an offset on the activation barrier. As an alternative interpretation, the system's volumetric strain energy is added to its thermal energy. Such an interpretation is consistent with the democratic fiber bundle model. Here, we test this approach of concerted activation against macroscopic data on bond cleavage activation energy, and also against ab initio quantum chemical simulation of the energy barrier for cracking in silica. The fact that both models are able to reproduce experimental observations to a remarkable degree highlights the importance of a holistic consideration on the path towards a non-empirical understanding.

  10. Detailed Physical Trough Model for NREL's Solar Advisor Model: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, M. J.; Blair, N.; Dobos, A.

    2010-10-01

    Solar Advisor Model (SAM) is a free software package made available by the National Renewable Energy Laboratory (NREL), Sandia National Laboratory, and the US Department of Energy. SAM contains hourly system performance and economic models for concentrating solar power (CSP) systems, photovoltaic, solar hot-water, and generic fuel-use technologies. Versions of SAM prior to 2010 included only the parabolic trough model based on Excelergy. This model uses top-level empirical performance curves to characterize plant behavior, and thus is limited in predictive capability for new technologies or component configurations. To address this and other functionality challenges, a new trough model, derived from physical first principles, was commissioned to supplement the Excelergy-based empirical model. This new 'physical model' approaches the task of characterizing the performance of the whole parabolic trough plant by replacing empirical curve-fit relationships with more detailed calculations where practical. The resulting model matches the annual performance of the SAM empirical model (which has been previously verified with plant data) while maintaining run-times compatible with parametric analysis, adding additional flexibility in modeled system configurations, and providing more detailed performance calculations in the solar field, power block, piping, and storage subsystems.

  11. Assessing model fit in latent class analysis when asymptotics do not hold

    NARCIS (Netherlands)

    van Kollenburg, Geert H.; Mulder, Joris; Vermunt, Jeroen K.

    2015-01-01

    The application of latent class (LC) analysis involves evaluating the LC model using goodness-of-fit statistics. To assess the misfit of a specified model, say with the Pearson chi-squared statistic, a p-value can be obtained using an asymptotic reference distribution. However, asymptotic p-values

  12. Parametric level correlations in random-matrix models

    International Nuclear Information System (INIS)

    Weidenmueller, Hans A

    2005-01-01

    We show that parametric level correlations in random-matrix theories are closely related to a breaking of the symmetry between the advanced and the retarded Green functions. The form of the parametric level correlation function is the same as for the disordered case considered earlier by Simons and Altshuler and is given by the graded trace of the commutator of the saddle-point solution with the particular matrix that describes the symmetry breaking in the actual case of interest. The strength factor differs from the case of disorder. It is determined solely by the Goldstone mode. It is essentially given by the number of levels that are strongly mixed as the external parameter changes. The factor can easily be estimated in applications

  13. PARAMETRIC DISTRIBUTION FAMILIES USEFUL FOR MODELLING NON - LIFE INSURANCE PAYMENTS DATA. TAIL BEHAVIOUR

    Directory of Open Access Journals (Sweden)

    Sandra Teodorescu

    2013-11-01

    Full Text Available The present paper describes a series of parametric distributions used for modeling non-life insurance payments data. Of those listed, special attention is paid to the transformed Beta distribution family. This distribution, as well as those which are obtained from it (special cases of the four-parameter transformed Beta distribution), are used in the modeling of high costs, or even extreme ones. The literature follows the tail behaviour of these distributions as a function of their parameters, because insurance payments data are typically highly positively skewed and distributed with large upper tails. The paper describes the tail behaviour of the distribution on the left and right side, respectively, and deduces a general case from it. Some graphs of the probability density function for one of the transformed Beta family members are also included to reinforce the comments made.

  14. Fitting the Fractional Polynomial Model to Non-Gaussian Longitudinal Data

    Directory of Open Access Journals (Sweden)

    Ji Hoon Ryoo

    2017-08-01

    Full Text Available As in cross-sectional studies, longitudinal studies involve non-Gaussian data such as binomial, Poisson, gamma, and inverse-Gaussian distributions, and multivariate exponential families. A number of statistical tools have thus been developed to deal with non-Gaussian longitudinal data, including analytic techniques to estimate parameters in both fixed and random effects models. However, growth modeling with non-Gaussian data is as yet somewhat limited when the transformed expectation of the response, via a linear predictor, is considered as a functional form of the explanatory variables. In this study, we introduce a fractional polynomial model (FPM) that can be applied to model non-linear growth with non-Gaussian longitudinal data and demonstrate its use by fitting two empirical binary and count data models. The results clearly show the efficiency and flexibility of the FPM for such applications.
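
    For reference, the standard second-degree fractional polynomial (FP2) form from the fractional polynomial literature is sketched below; the paper's exact notation may differ.

    ```latex
    % Second-degree fractional polynomial (FP2) for a positive covariate x, with link g:
    \[
      g\!\bigl(\mathbb{E}[y \mid x]\bigr) \;=\; \beta_0 + \beta_1 x^{(p_1)} + \beta_2 x^{(p_2)},
      \qquad p_1, p_2 \in \{-2,\,-1,\,-0.5,\,0,\,0.5,\,1,\,2,\,3\},
    \]
    % with the conventions x^{(0)} = \ln x and, for a repeated power p_1 = p_2 = p,
    \[
      g\!\bigl(\mathbb{E}[y \mid x]\bigr) \;=\; \beta_0 + \beta_1 x^{p} + \beta_2 x^{p}\ln x .
    \]
    ```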

  15. Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models

    Science.gov (United States)

    Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning

    2012-01-01

    The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…

  16. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    Science.gov (United States)

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, with 25-item dichotomous scales and sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  17. Reliable single chip genotyping with semi-parametric log-concave mixtures.

    Directory of Open Access Journals (Sweden)

    Ralph C A Rippe

    Full Text Available The common approach to SNP genotyping is to use (model-based) clustering per individual SNP, on a set of arrays. Genotyping all SNPs on a single array is much more attractive, in terms of flexibility, stability and applicability, when developing new chips. A new semi-parametric method, named SCALA, is proposed. It is based on a mixture model using semi-parametric log-concave densities. Instead of using the raw data, the mixture is fitted on a two-dimensional histogram, thereby making computation time almost independent of the number of SNPs. Furthermore, the algorithm is effective in low-MAF situations. Comparisons between SCALA and CRLMM on HapMap genotypes show very reliable calling of single arrays. Some heterozygous genotypes from HapMap are called homozygous by SCALA and, to a lesser extent, by CRLMM too. Furthermore, HapMap's NoCalls (NN) could be genotyped by SCALA, mostly with high probability. The software is available as R scripts from the website www.math.leidenuniv.nl/~rrippe.

  18. A scaled Lagrangian method for performing a least squares fit of a model to plant data

    International Nuclear Information System (INIS)

    Crisp, K.E.

    1988-01-01

    Due to measurement errors, even a perfect mathematical model will not be able to match all the corresponding plant measurements simultaneously. A further discrepancy may be introduced if an un-modelled change in conditions occurs within the plant which should have required a corresponding change in model parameters - e.g. a gradual deterioration in the performance of some component(s). Taking both these factors into account, what is required is that the overall discrepancy between the model predictions and the plant data is kept to a minimum. This process is known as 'model fitting'. A method is presented for minimising any function which consists of the sum of squared terms, subject to any constraints. Its most obvious application is in the process of model fitting, where a weighted sum of squares of the differences between model predictions and plant data is the function to be minimised. When implemented within existing Central Electricity Generating Board computer models, it will perform a least squares fit of a model to plant data within a single job submission. (author)
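
    The record above describes minimising a constrained, weighted sum of squared differences between model predictions and plant data. The sketch below is not the Central Electricity Generating Board implementation; it poses the same kind of problem generically with SciPy, using a hypothetical two-parameter plant model and simple bound constraints in place of general constraints.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    x_grid = np.linspace(0.0, 1.0, 20)

    def plant_model(params):
        """Hypothetical two-parameter model of some plant quantity."""
        a, b = params
        return a * np.exp(b * x_grid)

    def weighted_residuals(params, data, weights):
        """Weighted differences between model predictions and plant measurements."""
        return np.sqrt(weights) * (plant_model(params) - data)

    rng = np.random.default_rng(3)
    plant_data = plant_model([2.0, 1.5]) + 0.05 * rng.standard_normal(x_grid.size)
    weights = np.full(x_grid.size, 1.0 / 0.05**2)     # inverse-variance weighting

    # Bound constraints stand in for the general constrained case described above
    fit = least_squares(weighted_residuals, x0=[1.0, 1.0],
                        bounds=([0.0, 0.0], [10.0, 5.0]),
                        args=(plant_data, weights))
    print(fit.x)    # should recover values near [2.0, 1.5]
    ```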

  19. Modeling river discharge and sediment transport in the Wax Lake-Atchafalaya basin with remote sensing parametrization.

    Science.gov (United States)

    Simard, M.; Liu, K.; Denbina, M. W.; Jensen, D.; Rodriguez, E.; Liao, T. H.; Christensen, A.; Jones, C. E.; Twilley, R.; Lamb, M. P.; Thomas, N. A.

    2017-12-01

    Our goal is to estimate the fluxes of water and sediments throughout the Wax Lake-Atchafalaya basin. This was achieved by parametrization of a set of 1D (HEC-RAS) and 2D (DELFT3D) hydrology models with state of the art remote sensing measurements of water surface elevation, water surface slope and total suspended sediment (TSS) concentrations. The model implementations are spatially explicit, simulating river currents, lateral flows to distributaries and marshes, and spatial variations of sediment concentrations. Three remote sensing instruments were flown simultaneously to collect data over the Wax Lake-Atchafalaya basin, complemented by in situ field data. A Riegl Lidar was used to measure water surface elevation and slope, while the UAVSAR L-band radar collected data in repeat-pass interferometric mode to measure water level change within adjacent marshes and islands. These data were collected several times as the tide rose and fell. AVIRIS-NG instruments measured water surface reflectance spectra, used to estimate TSS. Bathymetry was obtained from sonar transects and water level changes were recorded by 19 water level pressure transducers. We used several Acoustic Doppler Current Profiler (ADCP) transects to estimate river discharge. The remotely sensed measurements of water surface slope were small (~1 cm/km) and varied slightly along the channel, especially at the confluence with bayous and the intra-coastal waterway. The slope also underwent significant changes during the tidal cycle. Lateral fluxes to island marshes were mainly observed by UAVSAR close to the distributaries. The extensive remote sensing measurements showed significant disparity with the hydrology model outputs. Observed variations in water surface slopes were unmatched by the model, and the tidal wave propagated much faster than indicated by gauge measurements. The slope variations were compensated for in the models by tuning local lateral fluxes, bathymetry and riverbed friction. Overall, the simpler 1D

  20. A Spline-Based Lack-Of-Fit Test for Independent Variable Effect in Poisson Regression.

    Science.gov (United States)

    Li, Chin-Shang; Tu, Wanzhu

    2007-05-01

    In regression analysis of count data, independent variables are often modeled by their linear effects under the assumption of log-linearity. In reality, the validity of such an assumption is rarely tested, and its use is at times unjustifiable. A lack-of-fit test is proposed for the adequacy of a postulated functional form of an independent variable within the framework of semiparametric Poisson regression models based on penalized splines. It offers added flexibility in accommodating the potentially non-loglinear effect of the independent variable. A likelihood ratio test is constructed for the adequacy of the postulated parametric form, for example log-linearity, of the independent variable effect. Simulations indicate that the proposed test performs well, and that a misspecified parametric model has much reduced power. An example is given.
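
    A minimal sketch of the testing idea, assuming simulated data: a log-linear Poisson fit is compared with a more flexible spline fit by a likelihood ratio test. The paper uses penalized splines; an unpenalized B-spline basis in statsmodels is used here purely as a stand-in, so the chi-square reference distribution is only approximate.

    ```python
    # Lack-of-fit check for log-linearity in Poisson regression:
    # compare a log-linear GLM against a B-spline GLM via a likelihood ratio test.
    import numpy as np
    import statsmodels.api as sm
    from patsy import dmatrix
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 3, size=500)
    y = rng.poisson(np.exp(0.3 + 0.5 * np.sin(2 * x)))   # truly non-loglinear effect of x

    # Null model: log-linear effect of x
    fit0 = sm.GLM(y, sm.add_constant(x), family=sm.families.Poisson()).fit()

    # Alternative model: cubic B-spline expansion of x (intercept added by patsy)
    X1 = np.asarray(dmatrix("bs(x, df=6, degree=3)", {"x": x}))
    fit1 = sm.GLM(y, X1, family=sm.families.Poisson()).fit()

    lr = 2 * (fit1.llf - fit0.llf)                  # likelihood ratio statistic
    df = fit1.df_model - fit0.df_model              # extra spline degrees of freedom
    print("LR statistic:", lr, "approximate p-value:", stats.chi2.sf(lr, df))
    ```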

  1. Thermodynamic modeling of the power plant based on the SOFC with internal steam reforming of methane

    International Nuclear Information System (INIS)

    Ivanov, Peter

    2007-01-01

    A mathematical model based on the thermodynamic modeling of gaseous mixtures is developed for an SOFC with internal steam reforming of methane. Macroscopic porous-electrode theory, including non-linear kinetics and gas-phase diffusion, is used to calculate the reforming reaction and the concentration polarization. Provided with data on the properties and costs of materials, the model is suited to a wide range of parametric analyses of thermodynamic cycles including the SOFC

  2. Correlation between a Student's Performance on the Mental Cutting Test and Their 3D Parametric Modeling Ability

    Science.gov (United States)

    Steinhauer, H. M.

    2012-01-01

    Engineering graphics has historically been viewed as a challenging course to teach as students struggle to grasp and understand the fundamental concepts and then to master their proper application. The emergence of stable, fast, affordable 3D parametric modeling platforms such as CATIA, Pro-E, and AutoCAD while providing several pedagogical…

  3. Model-free prediction and regression a transformation-based approach to inference

    CERN Document Server

    Politis, Dimitris N

    2015-01-01

    The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, co...

  4. 3D Product Development for Loose-Fitting Garments Based on Parametric Human Models

    Science.gov (United States)

    Krzywinski, S.; Siegmund, J.

    2017-10-01

    Researchers and commercial suppliers worldwide pursue the objective of achieving a more transparent garment construction process that is computationally linked to a virtual body, in order to save development costs over the long term. The current aim is not to transfer the complete pattern making step to a 3D design environment but to work out basic constructions in 3D that provide excellent fit due to their accurate construction and morphological pattern grading (automatic change of sizes in 3D) in respect of sizes and body types. After a computer-aided derivation of 2D pattern parts, these can be made available to the industry as a basis on which to create more fashionable variations.

  5. Fitted HBT radii versus space-time variances in flow-dominated models

    International Nuclear Information System (INIS)

    Lisa, Mike; Frodermann, Evan; Heinz, Ulrich

    2007-01-01

    The inability of otherwise successful dynamical models to reproduce the 'HBT radii' extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the 'RHIC HBT Puzzle'. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source which can be directly computed from the emission function, without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models some of which exhibit significant deviations from simple Gaussian behaviour. By Fourier transforming the emission function we compute the 2-particle correlation function and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and measured HBT radii remain, we show that a more 'apples-to-apples' comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data. (author)
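
    A one-dimensional toy calculation illustrates the point of the comparison: for a non-Gaussian source, the radius returned by a Gaussian fit to the correlation function differs from the source's space-time width. The exponential source below is illustrative and is not one of the hydrodynamic models of the paper.

    ```python
    # Toy 1D comparison: Gaussian-fit "HBT radius" vs. space-time variance
    # for a non-Gaussian (exponential) source.
    import numpy as np
    from scipy.optimize import curve_fit

    x = np.linspace(-60.0, 60.0, 4001)               # source coordinate (fm)
    dx = x[1] - x[0]
    source = np.exp(-np.abs(x) / 4.0)                # non-Gaussian source profile
    source /= source.sum() * dx                      # normalise to unit integral

    hbarc = 0.19733                                  # GeV fm
    q = np.linspace(0.0, 0.25, 120)                  # relative momentum (GeV/c)
    ft = np.array([(source * np.exp(1j * qi * x / hbarc)).sum() * dx for qi in q])
    corr = 1.0 + np.abs(ft) ** 2                     # idealised two-particle correlator

    def gauss_corr(q, lam, R):                       # Gaussian fit ansatz, R in fm
        return 1.0 + lam * np.exp(-(R * q / hbarc) ** 2)

    (lam, R_fit), _ = curve_fit(gauss_corr, q, corr, p0=[1.0, 4.0])
    R_rms = np.sqrt((source * x ** 2).sum() * dx)    # space-time width of the source
    print(f"Gaussian-fit radius: {R_fit:.2f} fm,  sqrt(variance): {R_rms:.2f} fm")
    ```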

  6. MEASURING DARK MATTER PROFILES NON-PARAMETRICALLY IN DWARF SPHEROIDALS: AN APPLICATION TO DRACO

    International Nuclear Information System (INIS)

    Jardel, John R.; Gebhardt, Karl; Fabricius, Maximilian H.; Williams, Michael J.; Drory, Niv

    2013-01-01

    We introduce a novel implementation of orbit-based (or Schwarzschild) modeling that allows dark matter density profiles to be calculated non-parametrically in nearby galaxies. Our models require no assumptions to be made about velocity anisotropy or the dark matter profile. The technique can be applied to any dispersion-supported stellar system, and we demonstrate its use by studying the Local Group dwarf spheroidal galaxy (dSph) Draco. We use existing kinematic data at larger radii and also present 12 new radial velocities within the central 13 pc obtained with the VIRUS-W integral field spectrograph on the 2.7 m telescope at McDonald Observatory. Our non-parametric Schwarzschild models find strong evidence that the dark matter profile in Draco is cuspy for 20 ≤ r ≤ 700 pc. The profile for r ≥ 20 pc is well fit by a power law with slope α = –1.0 ± 0.2, consistent with predictions from cold dark matter simulations. Our models confirm that, despite its low baryon content relative to other dSphs, Draco lives in a massive halo.
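
    The quoted slope can be illustrated by the standard way such an exponent is estimated: a straight-line fit to the binned density profile in log-log space. The binned densities below are synthetic stand-ins, not the non-parametric model output of the paper.

    ```python
    # Estimating the slope of a power-law density profile rho(r) ~ r**alpha
    # by linear regression in log-log space; all numbers are illustrative.
    import numpy as np

    r = np.logspace(np.log10(20), np.log10(700), 15)     # radius bins (pc)
    rho_true = 5.0e7 * r ** (-1.0)                       # cuspy profile, alpha = -1
    rng = np.random.default_rng(1)
    rho_obs = rho_true * rng.lognormal(sigma=0.15, size=r.size)   # noisy densities

    slope, intercept = np.polyfit(np.log10(r), np.log10(rho_obs), deg=1)
    print("fitted power-law slope alpha ~", round(slope, 2))
    ```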

  7. Feature extraction through least squares fit to a simple model

    International Nuclear Information System (INIS)

    Demuth, H.B.

    1976-01-01

    The Oak Ridge National Laboratory (ORNL) presented the Los Alamos Scientific Laboratory (LASL) with 18 radiographs of fuel rod test bundles. The problem is to estimate the thickness of the gap between some cylindrical rods and a flat wall surface. The edges of the gaps are poorly defined due to finite source size, x-ray scatter, parallax, film grain noise, and other degrading effects. The radiographs were scanned and the scan-line data were averaged to reduce noise and to convert the problem to one dimension. A model of the ideal gap, convolved with an appropriate point-spread function, was fit to the averaged data with a least squares program; and the gap width was determined from the final fitted-model parameters. The least squares routine did converge and the gaps obtained are of reasonable size. The method is remarkably insensitive to noise. This report describes the problem, the techniques used to solve it, and the results and conclusions. Suggestions for future work are also given
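
    The fitting idea - an idealised gap profile convolved with a point-spread function, adjusted by least squares until it matches the averaged scan line - can be sketched as follows; the profile shape, PSF width and noise level are illustrative rather than taken from the report.

    ```python
    # Fit a blurred rectangular "gap" model to a noisy averaged scan line and
    # read the gap width off the fitted parameters.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    x = np.linspace(-5.0, 5.0, 400)          # position along the averaged scan line (mm)

    def blurred_gap(x, centre, width, depth, sigma, background):
        """Rectangular gap convolved analytically with a Gaussian PSF."""
        lo, hi = centre - width / 2.0, centre + width / 2.0
        box = 0.5 * (erf((x - lo) / (np.sqrt(2) * sigma))
                     - erf((x - hi) / (np.sqrt(2) * sigma)))
        return background - depth * box

    true = dict(centre=0.3, width=1.2, depth=0.8, sigma=0.4, background=2.0)
    rng = np.random.default_rng(2)
    data = blurred_gap(x, **true) + rng.normal(scale=0.05, size=x.size)

    popt, _ = curve_fit(blurred_gap, x, data, p0=[0.0, 1.0, 1.0, 0.5, 2.0])
    print("estimated gap width:", popt[1])
    ```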

  8. Twitter classification model: the ABC of two million fitness tweets.

    Science.gov (United States)

    Vickey, Theodore A; Ginis, Kathleen Martin; Dabrowski, Maciej

    2013-09-01

    The purpose of this project was to design and test data collection and management tools that can be used to study the use of mobile fitness applications and social networking within the context of physical activity. This project was conducted over a 6-month period and involved collecting publicly shared Twitter data from five mobile fitness apps (Nike+, RunKeeper, MyFitnessPal, Endomondo, and dailymile). During that time, over 2.8 million tweets were collected, processed, and categorized using an online tweet collection application and a customized JavaScript. Using the grounded theory, a classification model was developed to categorize and understand the types of information being shared by application users. Our data show that by tracking mobile fitness app hashtags, a wealth of information can be gathered, including but not limited to daily use patterns, exercise frequency, location-based workouts, and overall workout sentiment.

  9. Rail Track Detection and Modelling in Mobile Laser Scanner Data

    Directory of Open Access Journals (Sweden)

    S. Oude Elberink

    2013-10-01

    We present a method for detecting and modelling rails in mobile laser scanner data. The detection is based on the properties of the rail tracks and contact wires such as relative height, linearity and relative position with respect to other objects. Points classified as rail track are used in a 3D modelling algorithm. The modelling is done by first fitting a parametric model of a rail piece to the points along each track, and estimating the position and orientation parameters of each piece model. For each position and orientation parameter a smooth low-order Fourier curve is interpolated. Using all interpolated parameters a mesh model of the rail is reconstructed. The method is explained using two areas from a dataset acquired by a LYNX mobile mapping system in a mountainous area. Residuals between railway laser points and 3D models are in the range of 2 cm. It is concluded that a curve fitting algorithm is essential to reliably and accurately model the rail tracks by using the knowledge that railways are following a continuous and smooth path.
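
    The final modelling step - interpolating a smooth low-order Fourier curve through the per-piece position and orientation parameters - can be illustrated with a small least-squares sketch; the "heading along the track" data below are synthetic, not LYNX measurements.

    ```python
    # Fit a smooth low-order Fourier curve (plus linear trend) to noisy per-piece
    # parameter estimates by ordinary least squares.
    import numpy as np

    s = np.linspace(0.0, 200.0, 120)                 # distance along the track (m)
    rng = np.random.default_rng(3)
    heading = (0.02 * s + 0.5 * np.sin(2 * np.pi * s / 150.0)
               + rng.normal(0, 0.05, s.size))        # noisy per-piece parameter

    def fourier_design(s, n_harmonics, period):
        cols = [np.ones_like(s), s]                  # constant + linear trend
        for k in range(1, n_harmonics + 1):
            cols += [np.cos(2 * np.pi * k * s / period),
                     np.sin(2 * np.pi * k * s / period)]
        return np.column_stack(cols)

    A = fourier_design(s, n_harmonics=3, period=s[-1])
    coef, *_ = np.linalg.lstsq(A, heading, rcond=None)
    smooth_heading = A @ coef                        # interpolated parameter curve
    print("residual RMS:", np.sqrt(np.mean((smooth_heading - heading) ** 2)))
    ```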

  10. Assessing pupil and school performance by non-parametric and parametric techniques

    NARCIS (Netherlands)

    de Witte, K.; Thanassoulis, E.; Simpson, G.; Battisti, G.; Charlesworth-May, A.

    2010-01-01

    This paper discusses the use of the non-parametric free disposal hull (FDH) and the parametric multi-level model (MLM) as alternative methods for measuring pupil and school attainment where hierarchical structured data are available. Using robust FDH estimates, we show how to decompose the overall

  11. Eight New Luminous z > 6 Quasars Selected via SED Model Fitting of VISTA, WISE and Dark Energy Survey Year 1 Observations

    Energy Technology Data Exchange (ETDEWEB)

    Reed, S.L.; et al.

    2017-01-17

    We present the discovery and spectroscopic confirmation with the ESO NTT and Gemini South telescopes of eight new 6.0 < z < 6.5 quasars with z$_{AB}$ < 21.0. These quasars were photometrically selected without any star-galaxy morphological criteria from 1533 deg$^{2}$ using SED model fitting to photometric data from the Dark Energy Survey (g, r, i, z, Y), the VISTA Hemisphere Survey (J, H, K) and the Wide-Field Infrared Survey Explorer (W1, W2). The photometric data were fitted with a grid of quasar model SEDs with redshift-dependent Lyman-$\alpha$ forest absorption and a range of intrinsic reddening, as well as a series of low-mass cool-star models. Candidates were ranked using an SED-model-based $\chi^{2}$-statistic, which is extendable to other future imaging surveys (e.g. LSST, Euclid). Our spectral confirmation success rate is 100% without the need for follow-up photometric observations as used in other studies of this type. Combined with automatic removal of the main types of non-astrophysical contaminants, the method allows large data sets to be processed without human intervention and without being overrun by spurious false candidates. We also present a robust parametric redshift-estimating technique that gives comparable accuracy to MgII- and CO-based redshift estimators. We find two z $\sim$ 6.2 quasars with HII near-zone sizes < 3 proper Mpc, which could indicate that these quasars may be young with ages < 10$^6$ - 10$^7$ years or lie in overdense regions of the IGM. The z = 6.5 quasar VDESJ0224-4711 has J$_{AB}$ = 19.75 and is the second most luminous quasar known with z > 6.5.
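
    The core of the selection - ranking each candidate by the χ² of its photometry against a grid of model SEDs - can be sketched in a few lines. The fluxes, errors and model grid below are placeholders rather than DES/VHS/WISE values, and the per-template normalisation uses the usual closed form for a fixed template shape.

    ```python
    # Chi-square ranking of a candidate's photometry against a grid of model SEDs.
    import numpy as np

    bands    = ["g", "r", "i", "z", "Y", "J", "H", "K", "W1", "W2"]
    flux     = np.array([0.0, 0.1, 2.0, 5.0, 5.5, 6.0, 6.2, 6.5, 7.0, 7.5])  # arbitrary units
    flux_err = np.full(flux.size, 0.5)

    # model_grid[i] = predicted fluxes of template i (quasar templates over redshift
    # and reddening plus cool-star templates); random placeholders here.
    rng = np.random.default_rng(4)
    model_grid = np.abs(rng.normal(5.0, 2.0, size=(500, flux.size)))

    def chi2(model, flux, err):
        # best-fitting overall normalisation has a closed form for a fixed template
        scale = np.sum(model * flux / err ** 2) / np.sum(model ** 2 / err ** 2)
        return np.sum(((flux - scale * model) / err) ** 2)

    scores = np.array([chi2(m, flux, flux_err) for m in model_grid])
    best = np.argsort(scores)[:5]
    print("best-ranked templates:", best, "chi2:", scores[best])
    ```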

  12. Update on Multi-Variable Parametric Cost Models for Ground and Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda

    2012-01-01

    Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper reports on recent revisions and improvements to our ground telescope cost model and refinements of our understanding of space telescope cost models. One interesting observation is that while space telescopes are 50X to 100X more expensive than ground telescopes, their respective scaling relationships are similar. Another interesting speculation is that the role of technology development may be different between ground and space telescopes. For ground telescopes, the data indicates that technology development tends to reduce cost by approximately 50% every 20 years. But for space telescopes, there appears to be no such cost reduction because we do not tend to re-fly similar systems. Thus, instead of reducing cost, 20 years of technology development may be required to enable a doubling of space telescope capability. Other findings include: mass should not be used to estimate cost; spacecraft and science instrument costs account for approximately 50% of total mission cost; and, integration and testing accounts for only about 10% of total mission cost.

  13. Towards a Multi-Variable Parametric Cost Model for Ground and Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Henrichs, Todd

    2016-01-01

    Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper hypothesizes a single model, based on published models and engineering intuition, for both ground and space telescopes: OTA Cost ≈ X · D^(1.75 ± 0.05) · λ^(−0.5 ± 0.25) · T^(−0.25) · e^(−0.04 Y). Specific findings include: space telescopes cost 50X to 100X more than ground telescopes; diameter is the most important CER; cost is reduced by approximately 50% every 20 years (presumably because of technology advance and process improvements); and, for space telescopes, cost associated with wavelength performance is balanced by cost associated with operating temperature. Finally, duplication only reduces cost for the manufacture of identical systems (i.e. multiple aperture sparse arrays or interferometers). And, while duplication does reduce the cost of manufacturing the mirrors of a segmented primary mirror, this cost saving does not appear to manifest itself in the final primary mirror assembly (presumably because the structure for a segmented mirror is more complicated than for a monolithic mirror).
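
    A quick way to read the hypothesised cost estimating relationship is as a relative-cost calculator. The proportionality constant X is not given in the abstract, so the sketch below only compares cost ratios, with illustrative inputs.

    ```python
    # Relative cost comparison using the hypothesised scaling relation
    # OTA cost ~ X * D^1.75 * lambda^-0.5 * T^-0.25 * exp(-0.04 * Y);
    # X is unknown, so only cost ratios are meaningful here.
    import math

    def relative_ota_cost(D_m, wavelength_um, T_K, year_of_development):
        return (D_m ** 1.75) * (wavelength_um ** -0.5) * (T_K ** -0.25) \
               * math.exp(-0.04 * year_of_development)

    # Example: a 4 m vs. an 8 m visible-light telescope, same wavelength,
    # operating temperature and technology year.
    c4 = relative_ota_cost(4.0, 0.5, 280.0, 0)
    c8 = relative_ota_cost(8.0, 0.5, 280.0, 0)
    print("cost ratio 8 m / 4 m ~", round(c8 / c4, 2))   # ~ 2**1.75 ~ 3.4
    ```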

  14. Modeling gene expression measurement error: a quasi-likelihood approach

    Directory of Open Access Journals (Sweden)

    Strimmer Korbinian

    2003-03-01

    Background: Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results: Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions: The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also

  15. Monte Carlo tests of the Rasch model based on scalability coefficients

    DEFF Research Database (Denmark)

    Christensen, Karl Bang; Kreiner, Svend

    2010-01-01

    For item responses fitting the Rasch model, the assumptions underlying the Mokken model of double monotonicity are met. This makes non-parametric item response theory a natural starting-point for Rasch item analysis. This paper studies scalability coefficients based on Loevinger's H coefficient that summarizes the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model using p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence…

  16. NASA's X-Plane Database and Parametric Cost Model v 2.0

    Science.gov (United States)

    Sterk, Steve; Ogluin, Anthony; Greenberg, Marc

    2016-01-01

    The NASA Armstrong Cost Engineering Team, with technical assistance from NASA HQ (SID), has gone through the full process of developing new CERs, moving from Version #1 to Version #2. We took a step backward and re-examined all of the data collected, such as dependent and independent variables, cost, dry weight, length, wingspan, manned versus unmanned, altitude, Mach number, thrust, and skin. We used a well-known statistical analysis tool called CO$TAT instead of multiple linear regression in "R" or the "Regression" tool found in Microsoft Excel™. We set up an "array of data" by adding 21 "dummy variables"; we analyzed the standard error (SE) and then determined the "best fit." We have parametrically priced out several future X-planes and compared our results to those of other resources. More work needs to be done in getting "accurate and traceable cost data" from historical X-plane records.

  17. Assessing item fit for unidimensional item response theory models using residuals from estimated item response functions.

    Science.gov (United States)

    Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee

    2013-07-01

    Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
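
    A simplified residual check in the same spirit (though not the exact ratio-estimator construction of the paper) compares observed proportions correct within ability groups to the model item characteristic curve, standardised by the binomial standard error. All item parameters and responses below are simulated, and ability is treated as known for simplicity.

    ```python
    # Simplified item-fit residuals: observed vs. model-predicted proportions
    # correct in ability deciles, standardised by the binomial standard error.
    import numpy as np

    rng = np.random.default_rng(5)
    n_persons = 2000
    theta = rng.normal(size=n_persons)                  # abilities
    a_true, b_true = 1.2, 0.3                           # "true" 2PL item parameters
    p_true = 1.0 / (1.0 + np.exp(-a_true * (theta - b_true)))
    responses = rng.binomial(1, p_true)

    # group examinees into ability deciles
    edges = np.quantile(theta, np.linspace(0, 1, 11))
    groups = np.digitize(theta, edges[1:-1])            # values 0..9

    a_hat, b_hat = 1.1, 0.25                            # assumed fitted item parameters
    for g in range(10):
        idx = groups == g
        n_g = idx.sum()
        obs = responses[idx].mean()
        exp = np.mean(1.0 / (1.0 + np.exp(-a_hat * (theta[idx] - b_hat))))
        z = (obs - exp) / np.sqrt(exp * (1 - exp) / n_g)
        print(f"group {g}: observed={obs:.3f} expected={exp:.3f} residual={z:+.2f}")
    ```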

  18. A general three-dimensional parametric geometry of the native aortic valve and root for biomechanical modeling.

    Science.gov (United States)

    Haj-Ali, Rami; Marom, Gil; Ben Zekry, Sagit; Rosenfeld, Moshe; Raanani, Ehud

    2012-09-21

    The complex three-dimensional (3D) geometry of the native tricuspid aortic valve (AV) is represented by select parametric curves allowing for a general construction and representation of the 3D-AV structure including the cusps, commissures and sinuses. The proposed general mathematical description is performed by using three independent parametric curves, two for the cusp and one for the sinuses. These curves are used to generate different surfaces that form the structure of the AV. Additional dependent curves are also generated and utilized in this process, such as the joint curve between the cusps and the sinuses. The model's ability to generate patient-specific parametric geometry is examined against 3D-transesophageal echocardiogram (3D-TEE) measurements from a non-pathological AV. A computational finite-element (FE) mesh can then be easily constructed from these surfaces. Examples are given for constructing several 3D-AV geometries by estimating the needed parameters from echocardiographic measurements. The average distance (error) between the calculated geometry and the 3D-TEE measurements was only 0.78 ± 0.63 mm. The proposed general 3D parametric method is very effective in quantitatively representing a wide range of native AV structures, with and without pathology. It can also facilitate a methodical quantitative investigation of the effect of pathology and mechanical loading on these major AV parameters. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. GOODNESS-OF-FIT TEST FOR THE ACCELERATED FAILURE TIME MODEL BASED ON MARTINGALE RESIDUALS

    Czech Academy of Sciences Publication Activity Database

    Novák, Petr

    2013-01-01

    Roč. 49, č. 1 (2013), s. 40-59 ISSN 0023-5954 R&D Projects: GA MŠk(CZ) 1M06047 Grant - others:GA MŠk(CZ) SVV 261315/2011 Keywords : accelerated failure time model * survival analysis * goodness-of-fit Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2013/SI/novak-goodness-of-fit test for the aft model based on martingale residuals.pdf

  20. Inflation Rate Modelling in Indonesia

    Directory of Open Access Journals (Sweden)

    Rezzy Eko Caraka

    2016-10-01

    The purposes of this research were: (i) to model the inflation rate in Indonesia with parametric regression; (ii) to model the inflation rate in Indonesia using non-parametric multivariable spline regression; (iii) to determine the best model for the inflation rate in Indonesia; and (iv) to explain the relationship between the parametric and the non-parametric multivariable spline regression models of inflation. Based on the analysis using the two methods mentioned, the coefficient of determination (R2) of the parametric regression is 65.1%, while that of the non-parametric regression amounts to 99.39%. To begin with, the money supply (money stock), crude oil prices and the rupiah exchange rate against the dollar are significant for the rate of inflation. The stability of inflation is essential to support sustainable economic development and improve people's welfare. In conclusion, unstable inflation will complicate the planning of business activities, both in production and investment as well as in the pricing of goods and services produced. DOI: 10.15408/etk.v15i2.3260

  1. A predictor-corrector algorithm to estimate the fractional flow in oil-water models

    International Nuclear Information System (INIS)

    Savioli, Gabriela B; Berdaguer, Elena M Fernandez

    2008-01-01

    We introduce a predictor-corrector algorithm to estimate parameters in a nonlinear hyperbolic problem. It can be used to estimate the oil-fractional flow function from the Buckley-Leverett equation. The forward model is non-linear: the sought-for parameter is a function of the solution of the equation. Traditionally, the estimation of functions requires the selection of a fitting parametric model. The algorithm that we develop does not require a predetermined parameter model. Therefore, the estimation problem is carried out over a set of parameters which are functions. The algorithm is based on the linearization of the parameter-to-output mapping. This technique is new in the field of nonlinear estimation. It has the advantage of laying aside parametric models. The algorithm is iterative and is of predictor-corrector type. We present theoretical results on the inverse problem. We use synthetic data to test the new algorithm.

  2. Assessing performance of Bayesian state-space models fit to Argos satellite telemetry locations processed with Kalman filtering.

    Directory of Open Access Journals (Sweden)

    Mónica A Silva

    Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with the KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with ARGOS satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to "true" GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, location and behavioural states estimated by switching state-space models (SSSM) fitted to data derived from KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6 ± 5.6 km) was nearly half that of LS estimates (11.6 ± 8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales' behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates.

  3. The effect of measurement quality on targeted structural model fit indices: A comment on Lance, Beck, Fan, and Carter (2016).

    Science.gov (United States)

    McNeish, Daniel; Hancock, Gregory R

    2018-03-01

    Lance, Beck, Fan, and Carter (2016) recently advanced 6 new fit indices and associated cutoff values for assessing data-model fit in the structural portion of traditional latent variable path models. The authors appropriately argued that, although most researchers' theoretical interest rests with the latent structure, they still rely on indices of global model fit that simultaneously assess both the measurement and structural portions of the model. As such, Lance et al. proposed indices intended to assess the structural portion of the model in isolation of the measurement model. Unfortunately, although these strategies separate the assessment of the structure from the fit of the measurement model, they do not isolate the structure's assessment from the quality of the measurement model. That is, even with a perfectly fitting measurement model, poorer quality (i.e., less reliable) measurements will yield a more favorable verdict regarding structural fit, whereas better quality (i.e., more reliable) measurements will yield a less favorable structural assessment. This phenomenon, referred to by Hancock and Mueller (2011) as the reliability paradox, affects not only traditional global fit indices but also those structural indices proposed by Lance et al. as well. Fortunately, as this comment will clarify, indices proposed by Hancock and Mueller help to mitigate this problem and allow the structural portion of the model to be assessed independently of both the fit of the measurement model as well as the quality of indicator variables contained therein. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  4. Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.

    Science.gov (United States)

    Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E

    2007-02-15

    Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MatLab functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
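
    The core computation that such a tool relies on - the analytical solution of a linear first-order kinetic scheme followed by Levenberg-Marquardt fitting of the rate constants - can be sketched for a two-state scheme. The scheme, data and noise level are illustrative, and SciPy stands in for the MatLab/Visual Basic implementation described above.

    ```python
    # Analytical solution of a linear kinetic scheme dc/dt = K c via the matrix
    # exponential, with rate constants fitted by a Levenberg-Marquardt routine.
    import numpy as np
    from scipy.linalg import expm
    from scipy.optimize import least_squares

    def concentrations(t, k12, k21, c0=(1.0, 0.0)):
        """Concentrations of a two-state scheme A <-> B over time."""
        c0 = np.asarray(c0)
        K = np.array([[-k12,  k21],
                      [ k12, -k21]])
        return np.array([expm(K * ti) @ c0 for ti in t])   # shape (len(t), 2)

    t = np.linspace(0.0, 5.0, 40)
    rng = np.random.default_rng(6)
    data = concentrations(t, 1.5, 0.4) + rng.normal(0, 0.01, size=(t.size, 2))

    def residuals(params):
        k12, k21 = params
        return (concentrations(t, k12, k21) - data).ravel()

    fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")
    print("fitted rate constants:", fit.x)
    ```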

  5. Prior and posterior probabilistic models of uncertainties in a model for producing voice

    International Nuclear Information System (INIS)

    Cataldo, Edson; Sampaio, Rubens; Soize, Christian

    2010-01-01

    The aim of this paper is to use Bayesian statistics to update a probability density function related to the tension parameter, which is one of the main parameters responsible for the changing of the fundamental frequency of a voice signal, generated by a mechanical/mathematical model for producing voiced sounds. We follow a parametric approach for stochastic modeling, which requires the adoption of random variables to represent the uncertain parameters present in the cited model. For each random variable, a probability density function is constructed using the Maximum Entropy Principle and the Monte Carlo method is used to generate voice signals as the output of the model. Then, a probability density function of the voice fundamental frequency is constructed. The random variables are fit to experimental data so that the probability density function of the fundamental frequency obtained by the model can be as near as possible to a probability density function obtained from experimental data. New values are obtained experimentally for the fundamental frequency and they are used to update the probability density function of the tension parameter, via Bayes's Theorem.
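
    The updating step itself can be sketched with a grid-based application of Bayes's theorem. The frequency-versus-tension relation, prior and measurements below are stand-ins for the paper's mechanical model and experimental data.

    ```python
    # Grid-based Bayesian update of a prior density on the tension parameter
    # using new fundamental-frequency measurements.
    import numpy as np

    tension = np.linspace(0.5, 2.0, 600)                  # normalised tension grid
    dt = tension[1] - tension[0]
    prior = np.exp(-0.5 * ((tension - 1.0) / 0.3) ** 2)   # prior density (stand-in)
    prior /= prior.sum() * dt

    def f0_model(tension):
        """Stand-in for the voice model: fundamental frequency vs. tension (Hz)."""
        return 120.0 * np.sqrt(tension)

    measured_f0 = np.array([128.0, 131.5, 126.0])         # new experimental data (Hz)
    sigma = 4.0                                           # measurement noise (Hz)

    # Gaussian likelihood of the data for each grid value of the tension parameter
    loglik = -0.5 * ((measured_f0[:, None] - f0_model(tension)[None, :]) / sigma) ** 2
    likelihood = np.exp(loglik.sum(axis=0))

    posterior = prior * likelihood                        # Bayes's theorem on the grid
    posterior /= posterior.sum() * dt
    print("posterior mean tension:", (tension * posterior).sum() * dt)
    ```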

  6. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies.

    Science.gov (United States)

    Häggström, Ida; Beattie, Bradley J; Schmidtlein, C Ross

    2016-06-01

    The aim was to develop and evaluate a fast and simple tool called dpetstep (Dynamic PET Simulator of Tracers via Emission Projection) for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and for evaluating the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. The tool was developed in MATLAB using both new and previously reported modules of petstep (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user-specified method, settings, and corrections. Reconstructed images were compared to MC data and to simple Gaussian-noised time activity curves (GAUSS). dpetstep was 8000 times faster than MC. Dynamic images from dpetstep had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dpetstep and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dpetstep images conformed well to MC images, confirmed visually by scatter plot histograms and statistically by tumor region-of-interest histogram comparisons that showed no significant differences. The authors developed dpetstep to simulate dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dpetstep to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and random models it may not be suitable for studies investigating these phenomena. dpetstep can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.

  7. Estimating genetic covariance functions assuming a parametric correlation structure for environmental effects

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2001-11-01

    A random regression model for the analysis of "repeated" records in animal breeding is described which combines a random regression approach for additive genetic and other random effects with the assumption of a parametric correlation structure for within-animal covariances. Both stationary and non-stationary correlation models involving a small number of parameters are considered. Heterogeneity in within-animal variances is modelled through polynomial variance functions. Estimation of the parameters describing the dispersion structure of such a model by restricted maximum likelihood via an "average information" algorithm is outlined. An application to mature weight records of beef cows is given, and results are contrasted to those from analyses fitting sets of random regression coefficients for permanent environmental effects.

  8. Mathematical model of combined parametrical analysis of indicator process and thermal loading on the Diesel engine piston

    Directory of Open Access Journals (Sweden)

    G. Lebedeva

    2004-06-01

    The publication considers the methodological aspects of a mathematical model for the combined parametric analysis of the indicator process and of the thermal loading on the diesel engine piston. A thermodynamic model of a diesel engine cycle is developed. The model is intended for use in research and in the initial stages of design work. Its application to prospective high-speed diesel engines of type CHN15/15 made it possible to choose rational variants for the organization of the indicator process and to substantiate the power ranges of application for uncooled pistons and for the newly developed oil-cooled welded pistons.

  9. Checking Fine and Gray subdistribution hazards model with cumulative sums of residuals

    DEFF Research Database (Denmark)

    Li, Jianing; Scheike, Thomas; Zhang, Mei Jie

    2015-01-01

    Recently, Fine and Gray (J Am Stat Assoc 94:496–509, 1999) proposed a semi-parametric proportional regression model for the subdistribution hazard function which has been used extensively for analyzing competing risks data. However, failure of model adequacy could lead to severe bias in parameter estimation, and only a limited contribution has been made to check the model assumptions. In this paper, we present a class of analytical methods and graphical approaches for checking the assumptions of Fine and Gray's model. The proposed goodness-of-fit test procedures are based on the cumulative sums…

  10. Eight new luminous z ≥ 6 quasars discovered via SED model fitting of VISTA, WISE and Dark Energy Survey Year 1 observations

    International Nuclear Information System (INIS)

    Reed, S. L.; McMahon, R. G.; Martini, P.; Banerji, M.; Auger, M.

    2017-01-01

    Here, we present the discovery and spectroscopic confirmation with the European Southern Observatory New Technology Telescope (NTT) and Gemini South telescopes of eight new, and the rediscovery of two previously known, 6.0 < z < 6.5 quasars with zAB < 21.0. These quasars were photometrically selected without any morphological criteria from 1533 deg2 using spectral energy distribution (SED) model fitting to photometric data from Dark Energy Survey (g, r, i, z, Y), VISTA Hemisphere Survey (J, H, K) and Wide-field Infrared Survey Explorer (W1, W2). The photometric data were fitted with a grid of quasar model SEDs with redshift-dependent Ly α forest absorption and a range of intrinsic reddening as well as a series of low-mass cool star models. Candidates were ranked using an SED-model-based χ2-statistic, which is extendable to other future imaging surveys (e.g. LSST and Euclid). Our spectral confirmation success rate is 100 per cent without the need for follow-up photometric observations as used in other studies of this type. Combined with automatic removal of the main types of non-astrophysical contaminants, the method allows large data sets to be processed without human intervention and without being overrun by spurious false candidates. We also present a robust parametric redshift estimator that gives comparable accuracy to Mg ii and CO-based redshift estimators. We find two z ~ 6.2 quasars with H ii near-zone sizes ≤ 3 proper Mpc that could indicate that these quasars may be young with ages ≲ 10^6-10^7 years or lie in overdense regions of the IGM. The z = 6.5 quasar VDES J0224–4711 has JAB = 19.75 and is the second most luminous quasar known with z ≥ 6.5.

  11. Computer-aided diagnosis of prostate cancer using multi-parametric MRI: comparison between PUN and Tofts models

    Science.gov (United States)

    Mazzetti, S.; Giannini, V.; Russo, F.; Regge, D.

    2018-05-01

    Computer-aided diagnosis (CAD) systems are increasingly being used in clinical settings to report multi-parametric magnetic resonance imaging (mp-MRI) of the prostate. Usually, CAD systems automatically highlight cancer-suspicious regions to the radiologist, reducing reader variability and interpretation errors. Nevertheless, implementing this software requires the selection of which mp-MRI parameters can best discriminate between malignant and non-malignant regions. To exploit functional information, some parameters are derived from dynamic contrast-enhanced (DCE) acquisitions. In particular, much CAD software employs pharmacokinetic features, such as K trans and k ep, derived from the Tofts model, to estimate a likelihood map of malignancy. However, non-pharmacokinetic models can be also used to describe DCE-MRI curves, without any requirement for prior knowledge or measurement of the arterial input function, which could potentially lead to large errors in parameter estimation. In this work, we implemented an empirical function derived from the phenomenological universalities (PUN) class to fit DCE-MRI. The parameters of the PUN model are used in combination with T2-weighted and diffusion-weighted acquisitions to feed a support vector machine classifier to produce a voxel-wise malignancy likelihood map of the prostate. The results were all compared to those for a CAD system based on Tofts pharmacokinetic features to describe DCE-MRI curves, using different quality aspects of image segmentation, while also evaluating the number and size of false positive (FP) candidate regions. This study included 61 patients with 70 biopsy-proven prostate cancers (PCa). The metrics used to evaluate segmentation quality between the two CAD systems were not statistically different, although the PUN-based CAD reported a lower number of FP, with reduced size compared to the Tofts-based CAD. In conclusion, the CAD software based on PUN parameters is a feasible means with which to

  12. The bystander effect model of Brenner and Sachs fitted to lung cancer data in 11 cohorts of underground miners, and equivalence of fit of a linear relative risk model with adjustment for attained age and age at exposure

    International Nuclear Information System (INIS)

    Little, M P

    2004-01-01

    Bystander effects following exposure to α-particles have been observed in many experimental systems, and imply that linearly extrapolating low dose risks from high dose data might materially underestimate risk. Brenner and Sachs (2002 Int. J. Radiat. Biol. 78 593-604; 2003 Health Phys. 85 103-8) have recently proposed a model of the bystander effect which they use to explain the inverse dose rate effect observed for lung cancer in underground miners exposed to radon daughters. In this paper we fit the model of the bystander effect proposed by Brenner and Sachs to 11 cohorts of underground miners, taking account of the covariance structure of the data and the period of latency between the development of the first pre-malignant cell and clinically overt cancer. We also fitted a simple linear relative risk model, with adjustment for age at exposure and attained age. The methods that we use for fitting both models are different from those used by Brenner and Sachs, in particular taking account of the covariance structure, which they did not, and omitting certain unjustifiable adjustments to the miner data. The fit of the original model of Brenner and Sachs (with 0 y period of latency) is generally poor, although it is much improved by assuming a 5 or 6 y period of latency from the first appearance of a pre-malignant cell to cancer. The fit of this latter model is equivalent to that of a linear relative risk model with adjustment for age at exposure and attained age. In particular, both models are capable of describing the observed inverse dose rate effect in this data set

  13. The impact of parametrized convection on cloud feedback

    Science.gov (United States)

    Webb, Mark J.; Lock, Adrian P.; Bretherton, Christopher S.; Bony, Sandrine; Cole, Jason N. S.; Idelkadi, Abderrahmane; Kang, Sarah M.; Koshiro, Tsuyoshi; Kawai, Hideaki; Ogura, Tomoo; Roehrig, Romain; Shin, Yechul; Mauritsen, Thorsten; Sherwood, Steven C.; Vial, Jessica; Watanabe, Masahiro; Woelfle, Matthew D.; Zhao, Ming

    2015-01-01

    We investigate the sensitivity of cloud feedbacks to the use of convective parametrizations by repeating the CMIP5/CFMIP-2 AMIP/AMIP + 4K uniform sea surface temperature perturbation experiments with 10 climate models which have had their convective parametrizations turned off. Previous studies have suggested that differences between parametrized convection schemes are a leading source of inter-model spread in cloud feedbacks. We find however that ‘ConvOff’ models with convection switched off have a similar overall range of cloud feedbacks compared with the standard configurations. Furthermore, applying a simple bias correction method to allow for differences in present-day global cloud radiative effects substantially reduces the differences between the cloud feedbacks with and without parametrized convection in the individual models. We conclude that, while parametrized convection influences the strength of the cloud feedbacks substantially in some models, other processes must also contribute substantially to the overall inter-model spread. The positive shortwave cloud feedbacks seen in the models in subtropical regimes associated with shallow clouds are still present in the ConvOff experiments. Inter-model spread in shortwave cloud feedback increases slightly in regimes associated with trade cumulus in the ConvOff experiments but is quite similar in the most stable subtropical regimes associated with stratocumulus clouds. Inter-model spread in longwave cloud feedbacks in strongly precipitating regions of the tropics is substantially reduced in the ConvOff experiments however, indicating a considerable local contribution from differences in the details of convective parametrizations. In both standard and ConvOff experiments, models with less mid-level cloud and less moist static energy near the top of the boundary layer tend to have more positive tropical cloud feedbacks. The role of non-convective processes in contributing to inter-model spread in cloud

  14. Semi-Automatic Modelling of Building Façades with Shape Grammars Using Historic Building Information Modelling

    Science.gov (United States)

    Dore, C.; Murphy, M.

    2013-02-01

    This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.

  15. Source Localization with Acoustic Sensor Arrays Using Generative Model Based Fitting with Sparse Constraints

    Directory of Open Access Journals (Sweden)

    Javier Macias-Guarasa

    2012-10-01

    This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints in the parameters of the model are included, enforcing the number of simultaneous active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with the SRP-PHAT strategies.

  16. Multiple organ definition in CT using a Bayesian approach for 3D model fitting

    Science.gov (United States)

    Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.

    1995-08-01

    Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape encoded in a set of biometric organ models--specifically for the liver and kidney-- that accurately represents patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition both before and after the addition of a kidney model to the fitting; we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.

  17. The disconnected values model improves mental well-being and fitness in an employee wellness program.

    Science.gov (United States)

    Anshel, Mark H; Brinthaupt, Thomas M; Kang, Minsoo

    2010-01-01

    This study examined the effect of a 10-week wellness program on changes in physical fitness and mental well-being. The conceptual framework for this study was the Disconnected Values Model (DVM). According to the DVM, detecting the inconsistencies between negative habits and values (e.g., health, family, faith, character) and concluding that these "disconnects" are unacceptable promotes the need for health behavior change. Participants were 164 full-time employees at a university in the southeastern U.S. The program included fitness coaching and a 90-minute orientation based on the DVM. Multivariate Mixed Model analyses indicated significantly improved scores from pre- to post-intervention on selected measures of physical fitness and mental well-being. The results suggest that the Disconnected Values Model provides an effective cognitive-behavioral approach to generating health behavior change in a 10-week workplace wellness program.

  18. Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes

    Science.gov (United States)

    Leite, Walter L.; Stapleton, Laura M.

    2011-01-01

    In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…

  19. The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting

    Science.gov (United States)

    Tao, Zhang; Li, Zhang; Dingjun, Chen

    On the basis of the idea of second-order curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A "preventing increase" model is introduced in this paper, and the model parameters are solved with the Matlab software. The validity of the preventing-increase model is confirmed through a numerical experiment. The experimental results show that the precision of the preventing-increase model is good.

  20. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies

    DEFF Research Database (Denmark)

    Thompson, Wesley K.; Wang, Yunpeng; Schork, Andrew J.

    2015-01-01

    An empirical Bayes mixture model is presented for the effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local… analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While…
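
    A toy version of the two-component model is easy to write down: fit a scale mixture of two zero-mean normals to z-scores and read off the non-null proportion and a local false discovery rate. The sketch below uses simulated z-scores and an EM maximum-likelihood fit rather than the paper's discrepancy-minimisation procedure.

    ```python
    # EM fit of a two-component scale mixture of zero-mean normals to z-scores.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    z = np.concatenate([rng.normal(0, 1.0, 95000),        # null associations
                        rng.normal(0, 2.5, 5000)])        # non-null associations

    pi1, s0, s1 = 0.01, 1.0, 2.0                          # initial guesses
    for _ in range(200):
        f0 = (1 - pi1) * norm.pdf(z, 0, s0)
        f1 = pi1 * norm.pdf(z, 0, s1)
        w = f1 / (f0 + f1)                                # E-step: non-null responsibilities
        pi1 = w.mean()                                    # M-step updates
        s0 = np.sqrt(np.sum((1 - w) * z ** 2) / np.sum(1 - w))
        s1 = np.sqrt(np.sum(w * z ** 2) / np.sum(w))

    # Local false discovery rate under the fitted mixture
    local_fdr = ((1 - pi1) * norm.pdf(z, 0, s0)
                 / ((1 - pi1) * norm.pdf(z, 0, s0) + pi1 * norm.pdf(z, 0, s1)))
    print("estimated non-null proportion:", round(pi1, 4),
          "sigmas:", round(s0, 3), round(s1, 3))
    ```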

  1. Density Forecasts of Crude-Oil Prices Using Option-Implied and ARCH-Type Models

    DEFF Research Database (Denmark)

    Tsiaras, Leonidas; Høg, Esben

      The predictive accuracy of competing crude-oil price forecast densities is investigated for the 1994-2006 period. Moving beyond standard ARCH models that rely exclusively on past returns, we examine the benefits of utilizing the forward-looking information that is embedded in the prices...... as for regions and intervals that are of special interest for the economic agent. We find that non-parametric adjustments of risk-neutral density forecasts perform significantly better than their parametric counterparts. Goodness-of-fit tests and out-of-sample likelihood comparisons favor forecast densities...

  2. Parametric Covariance Model for Horizon-Based Optical Navigation

    Science.gov (United States)

    Hikes, Jacob; Liounis, Andrew J.; Christian, John A.

    2016-01-01

    This Note presents an entirely parametric version of the covariance for horizon-based optical navigation measurements. The covariance can be written as a function of only the spacecraft position, two sensor design parameters, the illumination direction, the size of the observed planet, the size of the lit arc to be used, and the total number of observed horizon points. As a result, one may now more clearly understand the sensitivity of horizon-based optical navigation performance as a function of these key design parameters, which is insight that was obscured in previous (and nonparametric) versions of the covariance. Finally, the new parametric covariance is shown to agree with both the nonparametric analytic covariance and results from a Monte Carlo analysis.

  3. Parametric and non-parametric models for lifespan modeling of insulation systems in electrical machines

    OpenAIRE

    Salameh , Farah; Picot , Antoine; Chabert , Marie; Maussion , Pascal

    2017-01-01

    This paper describes an original statistical approach for the lifespan modeling of electric machine insulation materials. The presented models aim to study the effect of three main stress factors (voltage, frequency and temperature) and their interactions on the insulation lifespan. The proposed methodology is applied to two different insulation materials tested in partial discharge regime. Accelerated ageing tests are organized according to experimental optimization m...

  4. Fit reduced GUTS models online: From theory to practice.

    Science.gov (United States)

    Baudrot, Virgile; Veber, Philippe; Gence, Guillaume; Charles, Sandrine

    2018-05-20

    Mechanistic modeling approaches, such as the toxicokinetic-toxicodynamic (TKTD) framework, are promoted by international institutions such as the European Food Safety Authority and the Organization for Economic Cooperation and Development to assess the environmental risk of chemical products generated by human activities. TKTD models can encompass a large set of mechanisms describing the kinetics of compounds inside organisms (e.g., uptake and elimination) and their effect at the level of individuals (e.g., damage accrual, recovery, and death mechanism). Compared to classical dose-response models, TKTD approaches have many advantages, including accounting for temporal aspects of exposure and toxicity, considering data points all along the experiment and not only at the end, and making predictions for untested situations such as realistic exposure scenarios. Among TKTD models, the general unified threshold model of survival (GUTS) is within the most recent and innovative framework but is still underused in practice, especially by risk assessors, because specialist programming and statistical skills are necessary to run it. Making GUTS models easier to use through a new module freely available from the web platform MOSAIC (standing for MOdeling and StAtistical tools for ecotoxICology) should promote GUTS operability in support of the daily work of environmental risk assessors. This paper presents the main features of MOSAIC_GUTS: uploading of the experimental data, GUTS fitting analysis, and LCx estimates with their uncertainty. These features will be exemplified from literature data. Integr Environ Assess Manag 2018;00:000-000. © 2018 SETAC.
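
    The abstract names the reduced GUTS framework but gives no equations. As a rough, hedged illustration of what a reduced GUTS stochastic-death (GUTS-RED-SD) survival calculation involves, the sketch below integrates scaled damage and a threshold hazard for a constant exposure. Parameter names (kd, z, kk, hb) follow common GUTS notation, but the values and the simple Euler integration are illustrative only, not the MOSAIC_GUTS implementation.

        import numpy as np

        def guts_red_sd_survival(conc, times, kd, z, kk, hb):
            """Survival probability under a reduced GUTS stochastic-death model.
            conc: exposure concentration per time step; kd: dominant rate constant (1/d);
            z: damage threshold; kk: killing rate; hb: background hazard rate (1/d)."""
            dt = np.diff(times, prepend=times[0])
            damage = np.zeros_like(times)
            hazard = np.zeros_like(times)
            for i in range(1, len(times)):
                # scaled damage follows first-order toxicokinetics: dD/dt = kd * (C - D)
                damage[i] = damage[i - 1] + kd * (conc[i - 1] - damage[i - 1]) * dt[i]
                # the hazard rate rises linearly once damage exceeds the threshold z
                hazard[i] = kk * max(damage[i] - z, 0.0) + hb
            # survival is the exponential of minus the cumulative hazard
            return np.exp(-np.cumsum(hazard * dt))

        times = np.linspace(0.0, 10.0, 101)       # days
        conc = np.full_like(times, 5.0)           # constant exposure, arbitrary units
        S = guts_red_sd_survival(conc, times, kd=0.3, z=2.0, kk=0.05, hb=0.01)
        print(S[-1])                              # predicted fraction surviving at day 10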

  5. Assessing a moderating effect and the global fit of a PLS model on online trading

    Directory of Open Access Journals (Sweden)

    Juan J. García-Machado

    2017-12-01

    This paper proposes a PLS model for the study of online trading. Traditional investing has experienced a revolution due to the rise of e-trading services that enable investors to use the Internet to conduct secure trading. On the one hand, model results show that there is a positive, direct and statistically significant relationship of personal outcome expectations, perceived relative advantage, shared vision and economy-based trust with the quality of knowledge. On the other hand, trading frequency and portfolio performance also show this relationship. After including the investor’s income and financial wealth (IFW) as a moderating effect, the PLS model was enhanced, and we found that the interaction term is negative and statistically significant, so higher IFW levels entail a weaker relationship between trading frequency and portfolio performance, and vice versa. Finally, with regard to the overall goodness-of-fit measures, the model fits according to the SRMR and dG criteria, so it is likely that the model is true.

  6. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DEFF Research Database (Denmark)

    Ding, Tao; Li, Cheng; Huang, Can

    2018-01-01

    –slave structure and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost......In order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master...... optimality. Numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods....

  7. Using geometry to improve model fitting and experiment design for glacial isostasy

    Science.gov (United States)

    Kachuck, S. B.; Cathles, L. M.

    2017-12-01

    As scientists we routinely deal with models, which are geometric objects at their core - the manifestation of a set of parameters as predictions for comparison with observations. When the number of observations exceeds the number of parameters, the model is a hypersurface (the model manifold) in the space of all possible predictions. The object of parameter fitting is to find the parameters corresponding to the point on the model manifold as close to the vector of observations as possible. But the geometry of the model manifold can make this difficult. By curving, ending abruptly (where, for instance, parameters go to zero or infinity), and by stretching and compressing the parameters together in unexpected directions, it can be difficult to design algorithms that efficiently adjust the parameters. Even at the optimal point on the model manifold, parameters might not be individually resolved well enough to be applied to new contexts. In our context of glacial isostatic adjustment, models of sparse surface observations have a broad spread of sensitivity to mixtures of the earth's viscous structure and the surface distribution of ice over the last glacial cycle. This impedes precise statements about crucial geophysical processes, such as the planet's thermal history or the climates that controlled the ice age. We employ geometric methods developed in the field of systems biology to improve the efficiency of fitting (geodesic accelerated Levenberg-Marquardt) and to identify the maximally informative sources of additional data to make better predictions of sea levels and ice configurations (optimal experiment design). We demonstrate this in particular in reconstructions of the Barents Sea Ice Sheet, where we show that only certain kinds of data from the central Barents have the power to distinguish between proposed models.
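
    The authors use a geodesic-accelerated variant of Levenberg-Marquardt, which is not available off the shelf; the sketch below shows only the ordinary Levenberg-Marquardt step of moving along a model manifold toward the observations, using SciPy and an invented two-parameter exponential model as a stand-in for a glacial isostatic adjustment model.

        import numpy as np
        from scipy.optimize import least_squares

        def model(theta, t):
            # hypothetical forward model: a point on the model manifold for parameters theta
            a, tau = theta
            return a * np.exp(-t / tau)

        def residuals(theta, t, y_obs):
            # the vector whose squared norm Levenberg-Marquardt minimizes
            return model(theta, t) - y_obs

        t = np.linspace(0.0, 5.0, 30)
        rng = np.random.default_rng(1)
        y_obs = model([2.0, 1.5], t) + rng.normal(0.0, 0.05, t.size)

        # method="lm" selects the classical Levenberg-Marquardt algorithm in SciPy
        fit = least_squares(residuals, x0=[1.0, 1.0], args=(t, y_obs), method="lm")
        print(fit.x)     # estimated parameters, close to (2.0, 1.5)
        print(fit.cost)  # half the sum of squared residuals at the optimum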

  8. Validation and sensitivity tests on improved parametrizations of a land surface process model (LSPM) in the Po Valley

    International Nuclear Information System (INIS)

    Cassardo, C.; Carena, E.; Longhetto, A.

    1998-01-01

    The Land Surface Process Model (LSPM) has been improved with respect to the first version of 1994. The modifications involve the parametrizations of the radiation terms and of the turbulent heat fluxes. A parametrization of runoff has also been developed in order to close the hydrologic balance. This second version of LSPM has been validated against experimental data gathered at Mottarone (Verbania, Northern Italy) during a field experiment. The results of this validation show that the new version is able to apportion the energy into sensible and latent heat fluxes. LSPM has also been submitted to a series of sensitivity tests in order to investigate the hydrological part of the model. The physical quantities selected in these sensitivity experiments are the initial soil moisture content and the rainfall intensity. In each experiment, the model has been forced with the observations carried out at the synoptic stations of San Pietro Capofiume (Po Valley, Italy). The observed characteristics of soil and vegetation (not involved in the sensitivity tests) have been used as initial and boundary conditions. The results of the simulations show that LSPM reproduces the energy, heat and water budgets well, as well as their behaviour as the selected parameters vary. A careful analysis of the LSPM output also shows the importance of identifying the effective soil type.

  9. Fitting the CDO correlation skew: a tractable structural jump-diffusion model

    DEFF Research Database (Denmark)

    Willemann, Søren

    2007-01-01

    We extend a well-known structural jump-diffusion model for credit risk to handle both correlations through diffusion of asset values and common jumps in asset value. Through a simplifying assumption on the default timing and efficient numerical techniques, we develop a semi-analytic framework...... allowing for instantaneous calibration to heterogeneous CDS curves and fast computation of CDO tranche spreads. We calibrate the model to CDX and iTraxx data from February 2007 and achieve a satisfactory fit. To price the senior tranches for both indices, we require a risk-neutral probability of a market...

  10. On the parallelization of a three-parametric log-logistic estimation algorithm

    OpenAIRE

    Asenjo-Plaza, Rafael; Rodríguez, Andrés; Navarro, Ángeles; Fernández-Madrigal, Juan Antonio; Cruz-Martin, Ana Maria

    2014-01-01

    Networked telerobots transmit data from their sensors to the remote controller. To provide guarantees on the time requirements of these systems it is mandatory to keep the transmission time delays below a given threshold, and to that end we need to predict them. In this paper we tackle the parallelization of a procedure that models these stochastic time delays. More precisely, we focus on fitting the time delay signal with a three-parameter log-logistic distribution. Since the robot and the...
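
    A three-parameter log-logistic fit of the kind described can be reproduced, for instance, with SciPy's Fisk distribution (the log-logistic under another name), which carries a shape, a location and a scale parameter. The delay samples below are synthetic placeholders for measured transmission delays.

        import numpy as np
        from scipy import stats

        # synthetic round-trip delays in seconds, standing in for measured telerobot delays
        rng = np.random.default_rng(42)
        delays = 0.020 + stats.fisk.rvs(c=3.0, scale=0.015, size=5000, random_state=rng)

        # stats.fisk is the log-logistic distribution; fit() returns (shape, loc, scale)
        c, loc, scale = stats.fisk.fit(delays)
        print(f"shape={c:.2f}, loc={loc * 1e3:.1f} ms, scale={scale * 1e3:.1f} ms")

        # e.g. the delay below which 99% of transmissions are expected to fall
        print(stats.fisk.ppf(0.99, c, loc=loc, scale=scale))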

  11. A generalized additive regression model for survival times

    DEFF Research Database (Denmark)

    Scheike, Thomas H.

    2001-01-01

    Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models

  12. The Stellar Initial Mass Function in Early-type Galaxies from Absorption Line Spectroscopy. IV. A Super-Salpeter IMF in the Center of NGC 1407 from Non-parametric Models

    Energy Technology Data Exchange (ETDEWEB)

    Conroy, Charlie [Department of Astronomy, Harvard University, Cambridge, MA, 02138 (United States); Van Dokkum, Pieter G. [Department of Astronomy, Yale University, New Haven, CT, 06511 (United States); Villaume, Alexa [Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States)

    2017-03-10

    It is now well-established that the stellar initial mass function (IMF) can be determined from the absorption line spectra of old stellar systems, and this has been used to measure the IMF and its variation across the early-type galaxy population. Previous work focused on measuring the slope of the IMF over one or more stellar mass intervals, implicitly assuming that this is a good description of the IMF and that the IMF has a universal low-mass cutoff. In this work we consider more flexible IMFs, including two-component power laws with a variable low-mass cutoff and a general non-parametric model. We demonstrate with mock spectra that the detailed shape of the IMF can be accurately recovered as long as the data quality is high (S/N ≳ 300 Å⁻¹) and cover a wide wavelength range (0.4–1.0 μm). We apply these flexible IMF models to a high S/N spectrum of the center of the massive elliptical galaxy NGC 1407. Fitting the spectrum with non-parametric IMFs, we find that the IMF in the center shows a continuous rise extending toward the hydrogen-burning limit, with a behavior that is well-approximated by a power law with an index of −2.7. These results provide strong evidence for the existence of extreme (super-Salpeter) IMFs in the cores of massive galaxies.

  13. A genetic algorithm for optimizing multi-pole Debye models of tissue dielectric properties

    International Nuclear Information System (INIS)

    Clegg, J; Robinson, M P

    2012-01-01

    Models of tissue dielectric properties (permittivity and conductivity) enable the interactions of tissues and electromagnetic fields to be simulated, which has many useful applications in microwave imaging, radio propagation, and non-ionizing radiation dosimetry. Parametric formulae are available, based on a multi-pole model of tissue dispersions, but although they give the dielectric properties over a wide frequency range, they do not convert easily to the time domain. An alternative is the multi-pole Debye model which works well in both time and frequency domains. Genetic algorithms are an evolutionary approach to optimization, and we found that this technique was effective at finding the best values of the multi-Debye parameters. Our genetic algorithm optimized these parameters to fit to either a Cole–Cole model or to measured data, and worked well over wide or narrow frequency ranges. Over 10 Hz–10 GHz the best fits for muscle, fat or bone were each found for ten dispersions or poles in the multi-Debye model. The genetic algorithm is a fast and effective method of developing tissue models that compares favourably with alternatives such as the rational polynomial fit. (paper)
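
    As a hedged illustration of the optimization step, the sketch below fits a two-pole Debye model to complex permittivity samples generated from an assumed single-dispersion Cole-Cole target. It uses SciPy's differential evolution, a related evolutionary method, rather than the authors' genetic algorithm, and every parameter value is invented.

        import numpy as np
        from scipy.optimize import differential_evolution

        EPS0 = 8.854e-12
        f = np.logspace(2, 9, 200)            # 100 Hz to 1 GHz
        w = 2.0 * np.pi * f

        def cole_cole(w, eps_inf=5.0, d_eps=45.0, tau=8e-9, alpha=0.1, sigma=0.02):
            # assumed single-dispersion Cole-Cole "measurement" to be approximated
            return eps_inf + d_eps / (1 + (1j * w * tau) ** (1 - alpha)) + sigma / (1j * w * EPS0)

        def multi_debye(w, params):
            # params = [eps_inf, sigma, d_eps_1, tau_1, d_eps_2, tau_2]
            eps_inf, sigma = params[0], params[1]
            eps = eps_inf + sigma / (1j * w * EPS0)
            for d_eps, tau in zip(params[2::2], params[3::2]):
                eps = eps + d_eps / (1 + 1j * w * tau)
            return eps

        target = cole_cole(w)

        def cost(params):
            # relative error on the complex permittivity over the whole frequency range
            diff = multi_debye(w, params) - target
            return np.sum(np.abs(diff / target) ** 2)

        bounds = [(1, 20), (0, 1),             # eps_inf, static conductivity
                  (0, 60), (1e-10, 1e-6),      # pole 1: d_eps, tau
                  (0, 60), (1e-10, 1e-6)]      # pole 2: d_eps, tau
        result = differential_evolution(cost, bounds, seed=3, tol=1e-8)
        print(result.x, result.fun)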

  14. Comparison of biospheric models of radionuclides transfer

    International Nuclear Information System (INIS)

    Garcia-Olivares, A.; Carrasco, E.

    1992-01-01

    The international BIOMOVS A4 exercise made it possible for a set of biospheric transfer models to predict the daily radionuclide concentration in soils, forage and some animal products (cow milk and beef) after the Chernobyl accident. The aim was to compare these predictions with experimental results at 13 locations around the world. The data provided were essentially the daily air contamination and precipitation and some site-dependent parameters. It was a blind test: the locations and experimental measurements were not revealed in advance. Twenty-three models (quasi-steady-state and time-dependent) were involved in the study. In this paper an explicit criterion has been used to select the models that best fitted the experimental results. For nine selected locations a comparative analysis of these models has been carried out to identify the structural and parametric commonalities that could explain their relatively good performance. The first finding was that a wide set of models were able to predict the order of magnitude of the time-integrated nuclide concentrations in several important biospheric compartments. However, only a few models, all of them with a 'dynamical' structure, fitted the daily behavior with reasonable agreement. The dynamical structure of the five models most successful at predicting Caesium-137 (CIRCLE, ECOSYS, PATHWAY, PRYMA and RAGTIME) shows some common patterns that may be relevant for better modelling of nuclear accident scenarios. (author)

  15. Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling

    Science.gov (United States)

    Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw

    2005-01-01

    The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.

  16. Modeling Uncertainty in Climate Change: A Multi-Model Comparison

    Energy Technology Data Exchange (ETDEWEB)

    Gillingham, Kenneth; Nordhaus, William; Anthoff, David; Blanford, Geoffrey J.; Bosetti, Valentina; Christensen, Peter; McJeon, Haewon C.; Reilly, J. M.; Sztorc, Paul

    2015-10-01

    The economics of climate change involves a vast array of uncertainties, complicating both the analysis and development of climate policy. This study presents the results of the first comprehensive study of uncertainty in climate change using multiple integrated assessment models. The study looks at model and parametric uncertainties for population, total factor productivity, and climate sensitivity and estimates the pdfs of key output variables, including CO2 concentrations, temperature, damages, and the social cost of carbon (SCC). One key finding is that parametric uncertainty is more important than uncertainty in model structure. Our resulting pdfs also provide insight on tail events.

  17. Model selection and fitting

    International Nuclear Information System (INIS)

    Martin Llorente, F.

    1990-01-01

    Models of atmospheric pollutant dispersion are based on mathematical algorithms that describe the transport, diffusion, elimination and chemical reactions of atmospheric contaminants. These models operate on contaminant emission data and produce an estimate of air quality in the area. Such models can be applied to several aspects of atmospheric contamination.

  18. An approximation to the adaptive exponential integrate-and-fire neuron model allows fast and predictive fitting to physiological data

    Directory of Open Access Journals (Sweden)

    Loreen eHertäg

    2012-09-01

    For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ('in-vivo-like') input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a 'high-throughput' model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
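
    The closed-form AdEx firing-rate expressions derived by the authors are not given in the abstract. As a stand-in that conveys the same idea of fitting a rate equation instead of integrating the differential equations, the sketch below fits the textbook leaky integrate-and-fire f-I curve to synthetic f-I data with scipy.optimize.curve_fit; all parameter values and units are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def lif_rate(I, R, tau_m, t_ref, v_th=20.0, v_reset=0.0):
            """Firing rate (Hz) of a leaky integrate-and-fire neuron for input current I (pA).
            R in GOhm (so R*I is in mV), tau_m and t_ref in seconds, voltages relative to rest."""
            drive = R * I
            rate = np.zeros_like(I, dtype=float)
            supra = drive > v_th    # below threshold the neuron stays silent
            isi = t_ref + tau_m * np.log((drive[supra] - v_reset) / (drive[supra] - v_th))
            rate[supra] = 1.0 / isi
            return rate

        # synthetic f-I data standing in for current-clamp measurements
        I = np.linspace(0.0, 500.0, 26)                         # pA
        rng = np.random.default_rng(7)
        f_obs = np.clip(lif_rate(I, 0.08, 0.02, 0.002) + rng.normal(0.0, 1.0, I.size), 0.0, None)

        popt, _ = curve_fit(lif_rate, I, f_obs, p0=[0.06, 0.015, 0.002],
                            bounds=([0.01, 0.001, 0.0], [1.0, 0.1, 0.01]))
        print(dict(zip(["R", "tau_m", "t_ref"], popt)))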

  19. Mechanistic models of bone cancer induction by radium and plutonium in animals compared to humans

    International Nuclear Information System (INIS)

    Bijwaard, H.

    2006-01-01

    Two-mutation carcinogenesis models of mice and rats injected with 239Pu and 226Ra have been derived, extending previous models of beagle dogs injected with 239Pu and 226Ra and of radium dial painters. In all cases statistically significant parameters could be derived by fitting data from several research groups jointly. This also led to similarly parametrized models for 239Pu and 226Ra for all species. For each data set not more than five free model parameters were needed to fit the data adequately. From the toxicity ratios of the animal models for 239Pu and 226Ra, together with the human model for 226Ra, an approximate model for the exposure of humans to 239Pu has been derived. Relative risk calculations with this approximate model are in good agreement with epidemiological findings for the plutonium-exposed Mayak workers. This promising result may indicate new possibilities for estimating risks for humans from animal experiments. (authors)

  20. Logistic regression model for diagnosis of transition zone prostate cancer on multi-parametric MRI

    Energy Technology Data Exchange (ETDEWEB)

    Dikaios, Nikolaos; Halligan, Steve; Taylor, Stuart; Atkinson, David; Punwani, Shonit [University College London, Centre for Medical Imaging, London (United Kingdom); University College London Hospital, Departments of Radiology, London (United Kingdom); Alkalbani, Jokha; Sidhu, Harbir Singh; Fujiwara, Taiki [University College London, Centre for Medical Imaging, London (United Kingdom); Abd-Alazeez, Mohamed; Ahmed, Hashim; Emberton, Mark [University College London, Research Department of Urology, London (United Kingdom); Kirkham, Alex; Allen, Clare [University College London Hospital, Departments of Radiology, London (United Kingdom); Freeman, Alex [University College London Hospital, Department of Histopathology, London (United Kingdom)

    2014-09-17

    We aimed to develop logistic regression (LR) models for classifying prostate cancer within the transition zone on multi-parametric magnetic resonance imaging (mp-MRI). One hundred and fifty-five patients (training cohort, 70 patients; temporal validation cohort, 85 patients) underwent mp-MRI and transperineal-template-prostate-mapping (TPM) biopsy. Positive cores were classified by cancer definitions: (1) any-cancer; (2) definition-1 [≥Gleason 4 + 3 or ≥ 6 mm cancer core length (CCL)] [high risk significant]; and (3) definition-2 (≥Gleason 3 + 4 or ≥ 4 mm CCL) cancer [intermediate-high risk significant]. For each, logistic-regression mp-MRI models were derived from the training cohort and validated internally and with the temporal cohort. Sensitivity/specificity and the area under the receiver operating characteristic (ROC-AUC) curve were calculated. LR model performance was compared to radiologists' performance. Twenty-eight of 70 patients from the training cohort, and 25/85 patients from the temporal validation cohort had significant cancer on TPM. The ROC-AUC of the LR model for classification of cancer was 0.73/0.67 at internal/temporal validation. The radiologist A/B ROC-AUC was 0.65/0.74 (temporal cohort). For patients scored by radiologists as Prostate Imaging Reporting and Data System (Pi-RADS) score 3, sensitivity/specificity of radiologist A 'best guess' and LR model was 0.14/0.54 and 0.71/0.61, respectively; and radiologist B 'best guess' and LR model was 0.40/0.34 and 0.50/0.76, respectively. LR models can improve classification of Pi-RADS score 3 lesions similar to experienced radiologists. (orig.)

  1. Logistic regression model for diagnosis of transition zone prostate cancer on multi-parametric MRI

    International Nuclear Information System (INIS)

    Dikaios, Nikolaos; Halligan, Steve; Taylor, Stuart; Atkinson, David; Punwani, Shonit; Alkalbani, Jokha; Sidhu, Harbir Singh; Fujiwara, Taiki; Abd-Alazeez, Mohamed; Ahmed, Hashim; Emberton, Mark; Kirkham, Alex; Allen, Clare; Freeman, Alex

    2015-01-01

    We aimed to develop logistic regression (LR) models for classifying prostate cancer within the transition zone on multi-parametric magnetic resonance imaging (mp-MRI). One hundred and fifty-five patients (training cohort, 70 patients; temporal validation cohort, 85 patients) underwent mp-MRI and transperineal-template-prostate-mapping (TPM) biopsy. Positive cores were classified by cancer definitions: (1) any-cancer; (2) definition-1 [≥Gleason 4 + 3 or ≥ 6 mm cancer core length (CCL)] [high risk significant]; and (3) definition-2 (≥Gleason 3 + 4 or ≥ 4 mm CCL) cancer [intermediate-high risk significant]. For each, logistic-regression mp-MRI models were derived from the training cohort and validated internally and with the temporal cohort. Sensitivity/specificity and the area under the receiver operating characteristic (ROC-AUC) curve were calculated. LR model performance was compared to radiologists' performance. Twenty-eight of 70 patients from the training cohort, and 25/85 patients from the temporal validation cohort had significant cancer on TPM. The ROC-AUC of the LR model for classification of cancer was 0.73/0.67 at internal/temporal validation. The radiologist A/B ROC-AUC was 0.65/0.74 (temporal cohort). For patients scored by radiologists as Prostate Imaging Reporting and Data System (Pi-RADS) score 3, sensitivity/specificity of radiologist A 'best guess' and LR model was 0.14/0.54 and 0.71/0.61, respectively; and radiologist B 'best guess' and LR model was 0.40/0.34 and 0.50/0.76, respectively. LR models can improve classification of Pi-RADS score 3 lesions similar to experienced radiologists. (orig.)

  2. Measuring fit of sequence data to phylogenetic model: gain of power using marginal tests.

    Science.gov (United States)

    Waddell, Peter J; Ota, Rissa; Penny, David

    2009-10-01

    Testing fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (Unended quest: an intellectual autobiography. Fontana, London, 1976) once argued that evolutionary biology was unscientific as its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (Nature 297:197-200, 1982) to the present. We compare the general log-likelihood ratio statistic (the G or G² statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (P ≈ 0.5), but the marginalized tests do. Tests on pairwise frequency (F) matrices strongly (P < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (P < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa; not necessarily those expected from examining other regions of the genome. By marginalizing the 4^t patterns of the i.i.d. model to observed and expected parsimony counts, that is, from constant sites, to singletons, to parsimony informative characters of a minimum possible length, the likelihood ratio test regains power, and it too rejects the evolutionary model with P < 0.001. Given such behavior over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published trees may really be far larger than the analytical methods (e.g., bootstrap) report.

  3. UROX 2.0: an interactive tool for fitting atomic models into electron-microscopy reconstructions

    International Nuclear Information System (INIS)

    Siebert, Xavier; Navaza, Jorge

    2009-01-01

    UROX is software designed for the interactive fitting of atomic models into electron-microscopy reconstructions. The main features of the software are presented, along with a few examples. Electron microscopy of a macromolecular structure can lead to three-dimensional reconstructions with resolutions that are typically in the 30–10 Å range and sometimes even beyond 10 Å. Fitting atomic models of the individual components of the macromolecular structure (e.g. those obtained by X-ray crystallography or nuclear magnetic resonance) into an electron-microscopy map allows the interpretation of the latter at near-atomic resolution, providing insight into the interactions between the components. Graphical software is presented that was designed for the interactive fitting and refinement of atomic models into electron-microscopy reconstructions. Several characteristics enable it to be applied over a wide range of cases and resolutions. Firstly, calculations are performed in reciprocal space, which results in fast algorithms. This allows the entire reconstruction (or at least a sizeable portion of it) to be used by taking into account the symmetry of the reconstruction both in the calculations and in the graphical display. Secondly, atomic models can be placed graphically in the map while the correlation between the model-based electron density and the electron-microscopy reconstruction is computed and displayed in real time. The positions and orientations of the models are refined by a least-squares minimization. Thirdly, normal-mode calculations can be used to simulate conformational changes between the atomic model of an individual component and its corresponding density within a macromolecular complex determined by electron microscopy. These features are illustrated using three practical cases with different symmetries and resolutions. The software, together with examples and user instructions, is available free of charge at http://mem.ibs.fr/UROX/

  4. A parametric reconstruction of the deceleration parameter

    Energy Technology Data Exchange (ETDEWEB)

    Al Mamon, Abdulla [Manipal University, Manipal Centre for Natural Sciences, Manipal (India); Visva-Bharati, Department of Physics, Santiniketan (India); Das, Sudipta [Visva-Bharati, Department of Physics, Santiniketan (India)

    2017-07-15

    The present work is based on a parametric reconstruction of the deceleration parameter q(z) in a model for the spatially flat FRW universe filled with dark energy and non-relativistic matter. In cosmology, the parametric reconstruction technique deals with an attempt to build up a model by choosing some specific evolution scenario for a cosmological parameter and then estimate the values of the parameters with the help of different observational datasets. In this paper, we have proposed a logarithmic parametrization of q(z) to probe the evolution history of the universe. Using the type Ia supernova, baryon acoustic oscillation and the cosmic microwave background datasets, the constraints on the arbitrary model parameters q₀ and q₁ are obtained (within 1σ and 2σ confidence limits) by χ²-minimization technique. We have then reconstructed the deceleration parameter, the total EoS parameter ω_tot, the jerk parameter and have compared the reconstructed results of q(z) with other well-known parametrizations of q(z). We have also shown that two model selection criteria (namely, the Akaike information criterion and Bayesian information criterion) provide a clear indication that our reconstructed model is well consistent with other popular models. (orig.)
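
    The exact logarithmic form of q(z) is not reproduced in the abstract, so the sketch below adopts a hypothetical two-parameter form q(z) = q0 + q1 ln(1+z) purely to illustrate the χ²-minimization step against mock distance-modulus data; it is not the authors' parametrization, and the mock data stand in for the SNIa/BAO/CMB sets.

        import numpy as np
        from scipy.integrate import quad
        from scipy.optimize import minimize

        C_KM_S = 299792.458     # speed of light, km/s
        H0 = 70.0               # Hubble constant, km/s/Mpc, held fixed for the illustration

        def E(z, q0, q1):
            # For the hypothetical q(z) = q0 + q1*ln(1+z), the relation
            # E(z) = H(z)/H0 = exp( int_0^z (1 + q(z')) / (1 + z') dz' ) integrates in closed form.
            x = np.log(1.0 + z)
            return np.exp((1.0 + q0) * x + 0.5 * q1 * x**2)

        def distance_modulus(z, q0, q1):
            # flat FRW luminosity distance in Mpc, then mu = 5*log10(d_L/Mpc) + 25
            comoving, _ = quad(lambda zp: 1.0 / E(zp, q0, q1), 0.0, z)
            d_L = (1.0 + z) * (C_KM_S / H0) * comoving
            return 5.0 * np.log10(d_L) + 25.0

        # mock supernova-like data (z, mu, sigma)
        z_obs = np.linspace(0.05, 1.2, 20)
        mu_obs = np.array([distance_modulus(z, -0.6, 0.8) for z in z_obs])
        sigma = np.full_like(z_obs, 0.15)

        def chi2(params):
            mu_model = np.array([distance_modulus(z, params[0], params[1]) for z in z_obs])
            return np.sum(((mu_model - mu_obs) / sigma) ** 2)

        best = minimize(chi2, x0=[-0.4, 0.5], method="Nelder-Mead")
        print(best.x)   # recovers (-0.6, 0.8) since the mock data are noiseless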

  5. Development and Analysis of Volume Multi-Sphere Method Model Generation using Electric Field Fitting

    Science.gov (United States)

    Ingram, G. J.

    Electrostatic modeling of spacecraft has wide-reaching applications such as detumbling space debris in the Geosynchronous Earth Orbit regime before docking, servicing and tugging space debris to graveyard orbits, and Lorentz augmented orbits. The viability of electrostatic actuation control applications relies on faster-than-realtime characterization of the electrostatic interaction. The Volume Multi-Sphere Method (VMSM) seeks the optimal placement and radii of a small number of equipotential spheres to accurately model the electrostatic force and torque on a conducting space object. Current VMSM models tuned using force and torque comparisons with commercially available finite element software are subject to the modeled probe size and numerical errors of the software. This work first investigates fitting of VMSM models to Surface-MSM (SMSM) generated electrical field data, removing modeling dependence on probe geometry while significantly increasing performance and speed. A proposed electric field matching cost function is compared to a force and torque cost function, the inclusion of a self-capacitance constraint is explored and 4 degree-of-freedom VMSM models generated using electric field matching are investigated. The resulting E-field based VMSM development framework is illustrated on a box-shaped hub with a single solar panel, and convergence properties of select models are qualitatively analyzed. Despite the complex non-symmetric spacecraft geometry, elegantly simple 2-sphere VMSM solutions provide force and torque fits within a few percent.

  6. Logistic regression model for diagnosis of transition zone prostate cancer on multi-parametric MRI.

    Science.gov (United States)

    Dikaios, Nikolaos; Alkalbani, Jokha; Sidhu, Harbir Singh; Fujiwara, Taiki; Abd-Alazeez, Mohamed; Kirkham, Alex; Allen, Clare; Ahmed, Hashim; Emberton, Mark; Freeman, Alex; Halligan, Steve; Taylor, Stuart; Atkinson, David; Punwani, Shonit

    2015-02-01

    We aimed to develop logistic regression (LR) models for classifying prostate cancer within the transition zone on multi-parametric magnetic resonance imaging (mp-MRI). One hundred and fifty-five patients (training cohort, 70 patients; temporal validation cohort, 85 patients) underwent mp-MRI and transperineal-template-prostate-mapping (TPM) biopsy. Positive cores were classified by cancer definitions: (1) any-cancer; (2) definition-1 [≥Gleason 4 + 3 or ≥ 6 mm cancer core length (CCL)] [high risk significant]; and (3) definition-2 (≥Gleason 3 + 4 or ≥ 4 mm CCL) cancer [intermediate-high risk significant]. For each, logistic-regression mp-MRI models were derived from the training cohort and validated internally and with the temporal cohort. Sensitivity/specificity and the area under the receiver operating characteristic (ROC-AUC) curve were calculated. LR model performance was compared to radiologists' performance. Twenty-eight of 70 patients from the training cohort, and 25/85 patients from the temporal validation cohort had significant cancer on TPM. The ROC-AUC of the LR model for classification of cancer was 0.73/0.67 at internal/temporal validation. The radiologist A/B ROC-AUC was 0.65/0.74 (temporal cohort). For patients scored by radiologists as Prostate Imaging Reporting and Data System (Pi-RADS) score 3, sensitivity/specificity of radiologist A 'best guess' and LR model was 0.14/0.54 and 0.71/0.61, respectively; and radiologist B 'best guess' and LR model was 0.40/0.34 and 0.50/0.76, respectively. LR models can improve classification of Pi-RADS score 3 lesions similar to experienced radiologists. • MRI helps find prostate cancer in the anterior of the gland • Logistic regression models based on mp-MRI can classify prostate cancer • Computers can help confirm cancer in areas doctors are uncertain about.
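
    The modelling step itself is ordinary logistic regression; a minimal sketch of deriving such a model on a training cohort and checking ROC-AUC on a held-out cohort (here a random split rather than the study's temporal split) might look as follows, with synthetic features standing in for the mp-MRI parameters.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        # synthetic stand-ins for per-patient mp-MRI features (e.g. ADC, T2 signal, enhancement)
        rng = np.random.default_rng(0)
        n = 155
        X = rng.normal(size=(n, 3))
        # simulated cancer status loosely driven by the first two features
        logits = 1.5 * X[:, 0] - 1.0 * X[:, 1] - 0.5
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

        # 70 training / 85 validation patients, mirroring the cohort sizes in the study
        X_train, X_val, y_train, y_val = train_test_split(X, y, train_size=70, random_state=1)

        model = LogisticRegression().fit(X_train, y_train)
        auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        print(f"validation ROC-AUC: {auc:.2f}")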

  7. Parametric CAD and Fea Model of a Saddle Tapping Tee

    DEFF Research Database (Denmark)

    A. Kristensen, Anders Schmidt; Lund Jepsen, Kristian

    2007-01-01

    Often it is necessary to branch off a pipe section on an oilrig. This operation is often performed by making a so-called "Hot Tapping", which involves welding a pipe and a flange on to the pipe section. A spherical valve and a gate are mounted on to the flange, i.e. weld-o-let. In order to perform...... the welding operations a so-called habitat must be constructed. This habitat encapsulates the "Hot Tapping" spot and is relatively costly. Thus, to avoid weld operations on to the pipeline, a solution with clamps has been developed, i.e. a Saddle Tapping Tee. The Saddle Tapping Tee is clamped on the pipe...... is determined from paragraph K302.3.2 in ASME B31.3. A full parametric 3D CAD model of the Saddle Tapping Tee is developed where a number of user-defined parameters are controlled from an Excel spreadsheet allowing parameter studies and technical documentation to be generated effectively. The same Excel spread...

  8. Predictive power of theoretical modelling of the nuclear mean field: examples of improving predictive capacities

    Science.gov (United States)

    Dedes, I.; Dudek, J.

    2018-03-01

    We examine the effects of the parametric correlations on the predictive capacities of the theoretical modelling keeping in mind the nuclear structure applications. The main purpose of this work is to illustrate the method of establishing the presence and determining the form of parametric correlations within a model as well as an algorithm of elimination by substitution (see text) of parametric correlations. We examine the effects of the elimination of the parametric correlations on the stabilisation of the model predictions further and further away from the fitting zone. It follows that the choice of the physics case and the selection of the associated model are of secondary importance in this case. Under these circumstances we give priority to the relative simplicity of the underlying mathematical algorithm, provided the model is realistic. Following such criteria, we focus specifically on an important but relatively simple case of doubly magic spherical nuclei. To profit from the algorithmic simplicity we chose working with the phenomenological spherically symmetric Woods–Saxon mean-field. We employ two variants of the underlying Hamiltonian, the traditional one involving both the central and the spin orbit potential in the Woods–Saxon form and the more advanced version with the self-consistent density-dependent spin–orbit interaction. We compare the effects of eliminating of various types of correlations and discuss the improvement of the quality of predictions (‘predictive power’) under realistic parameter adjustment conditions.

  9. Invited commentary: Lost in estimation--searching for alternatives to markov chains to fit complex Bayesian models.

    Science.gov (United States)

    Molitor, John

    2012-03-01

    Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.
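
    One of the simplest non-Markov-chain approaches alluded to in the commentary is direct evaluation of the posterior on a grid, which works whenever the parameter space is small. The toy example below does this for a binomial proportion with a Beta prior, where the grid answer can be checked against the known conjugate posterior; the data values are invented.

        import numpy as np
        from scipy import stats

        k, n = 7, 20        # invented data: 7 events out of 20 trials

        # prior Beta(2, 2); evaluate the posterior on a grid instead of sampling it by MCMC
        theta = np.linspace(1e-6, 1.0 - 1e-6, 2001)
        log_post = stats.beta.logpdf(theta, 2, 2) + stats.binom.logpmf(k, n, theta)
        post = np.exp(log_post - log_post.max())

        dtheta = theta[1] - theta[0]
        post /= post.sum() * dtheta                 # normalize numerically
        grid_mean = np.sum(theta * post) * dtheta

        exact_mean = (2 + k) / (2 + 2 + n)          # conjugate Beta(2+k, 2+n-k) posterior mean
        print(grid_mean, exact_mean)                # the two agree closely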

  10. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies

    Energy Technology Data Exchange (ETDEWEB)

    Häggström, Ida, E-mail: haeggsti@mskcc.org [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 and Department of Radiation Sciences, Umeå University, Umeå 90187 (Sweden); Beattie, Bradley J.; Schmidtlein, C. Ross [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States)

    2016-06-15

    Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user specified method, settings, and corrections. Reconstructed images were compared to MC data, and simple Gaussian noised time activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results, however since it uses simple scatter and random models it may not be suitable for

  11. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies

    International Nuclear Information System (INIS)

    Häggström, Ida; Beattie, Bradley J.; Schmidtlein, C. Ross

    2016-01-01

    Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user specified method, settings, and corrections. Reconstructed images were compared to MC data, and simple Gaussian noised time activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results, however since it uses simple scatter and random models it may not be suitable for

  12. Phylogenetic tree reconstruction accuracy and model fit when proportions of variable sites change across the tree.

    Science.gov (United States)

    Shavit Grievink, Liat; Penny, David; Hendy, Michael D; Holland, Barbara R

    2010-05-01

    Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction.

  13. Application of photometric models to asteroids

    International Nuclear Information System (INIS)

    Bowell, E.; Dominque, D.; Hapke, B.

    1989-01-01

    The way an asteroid or other atmosphereless solar system body varies in brightness in response to changing illumination and viewing geometry depends in a very complicated way on the physical and optical properties of its surface and on its overall shape. The authors summarize the formulation and application of recent photometric models by Hapke and by Lumme and Bowell. In both models, the brightness of a rough and porous surface is parametrized in terms of the optical properties of individual particles, shadowing between particles, and the way in which light is scattered among collections of particles. Both models succeed in their goal of fitting the observed photometric behavior of a wide variety of bodies, but neither has led to a very complete understanding of the properties of asteroid regoliths, primarily because in most cases the parameters in the present models cannot be adequately constrained by observations of integral brightness alone over a restricted range of phase angles.

  14. On concurvity in nonlinear and nonparametric regression models

    Directory of Open Access Journals (Sweden)

    Sonia Amodio

    2014-12-01

    When data are affected by multicollinearity in the linear regression framework, concurvity will be present when fitting a generalized additive model (GAM). The term concurvity describes nonlinear dependencies among the predictor variables. Just as collinearity results in inflated variance of the estimated regression coefficients in the linear regression model, the presence of concurvity leads to instability of the estimated coefficients in GAMs. Even if the backfitting algorithm always converges to a solution, in the case of concurvity the final solution of the backfitting procedure in fitting a GAM is influenced by the starting functions. While exact concurvity is highly unlikely, approximate concurvity, the analogue of multicollinearity, is of practical concern as it can lead to upwardly biased estimates of the parameters and to underestimation of their standard errors, increasing the risk of committing type I errors. We compare the existing approaches to detect concurvity, pointing out their advantages and drawbacks, using simulated and real data sets. As a result, this paper provides a general criterion for detecting concurvity in nonlinear and non-parametric regression models.

  15. Model structure learning: A support vector machine approach for LPV linear-regression models

    NARCIS (Netherlands)

    Toth, R.; Laurain, V.; Zheng, W-X.; Poolla, K.

    2011-01-01

    Accurate parametric identification of Linear Parameter-Varying (LPV) systems requires an optimal prior selection of a set of functional dependencies for the parametrization of the model coefficients. Inaccurate selection leads to structural bias while over-parametrization results in a variance

  16. Tanning Shade Gradations of Models in Mainstream Fitness and Muscle Enthusiast Magazines: Implications for Skin Cancer Prevention in Men.

    Science.gov (United States)

    Basch, Corey H; Hillyer, Grace Clarke; Ethan, Danna; Berdnik, Alyssa; Basch, Charles E

    2015-07-01

    Tanned skin has been associated with perceptions of fitness and social desirability. Portrayal of models in magazines may reflect and perpetuate these perceptions. Limited research has investigated tanning shade gradations of models in men's versus women's fitness and muscle enthusiast magazines. Such findings are relevant in light of increased incidence and prevalence of melanoma in the United States. This study evaluated and compared tanning shade gradations of adult Caucasian male and female model images in mainstream fitness and muscle enthusiast magazines. Sixty-nine U.S. magazine issues (spring and summer, 2013) were utilized. Two independent reviewers rated tanning shade gradations of adult Caucasian male and female model images on magazines' covers, advertisements, and feature articles. Shade gradations were assessed using stock photographs of Caucasian models with varying levels of tanned skin on an 8-shade scale. A total of 4,683 images were evaluated. Darkest tanning shades were found among males in muscle enthusiast magazines and lightest among females in women's mainstream fitness magazines. By gender, male model images were 54% more likely to portray a darker tanning shade. In this study, images in men's (vs. women's) fitness and muscle enthusiast magazines portrayed Caucasian models with darker skin shades. Despite these magazines' fitness-related messages, pro-tanning images may promote attitudes and behaviors associated with higher skin cancer risk. To date, this is the first study to explore tanning shades in men's magazines of these genres. Further research is necessary to identify effects of exposure to these images among male readers. © The Author(s) 2014.

  17. A study of composite models at LEP with ALEPH

    International Nuclear Information System (INIS)

    Badaud, F.

    1992-04-01

    Tests of composite models are performed in e+e− collisions in the vicinity of the Z⁰ pole using the ALEPH detector. Two kinds of substructure effects are searched for: deviations of the differential cross sections for the reactions e+e− → l+l− and e+e− → γγ from Standard Model predictions, and a direct search for an excited neutrino. A new interaction, parametrized by a four-fermion contact term, is studied in lepton pair production reactions, assuming different chiralities of the currents. Lower limits on the compositeness scale Λ are obtained by fitting model predictions to the data; they are in the range from 1 to a few TeV, depending on the model and lepton flavour. Searches for the lightest excited particle, which could be the excited neutrino, are presented.

  18. Improved scheme for parametrization of convection in the Met Office's Numerical Atmospheric-dispersion Modelling Environment (NAME)

    Science.gov (United States)

    Meneguz, Elena; Thomson, David; Witham, Claire; Kusmierczyk-Michulec, Jolanta

    2015-04-01

    NAME is a Lagrangian atmospheric dispersion model used by the Met Office to predict the dispersion of both natural and man-made contaminants in the atmosphere, e.g. volcanic ash, radioactive particles and chemical species. Atmospheric convection is responsible for transport and mixing of air, resulting in a large exchange of heat and energy above the boundary layer. Although convection can transport material through the whole troposphere, convective clouds have a small horizontal length scale (of the order of a few kilometres). Therefore, for large-scale transport the horizontal scale on which the convection exists is below the global NWP resolution used as input to NAME and convection must be parametrized. Prior to the work presented here, the enhanced vertical mixing generated by non-resolved convection was reproduced by randomly redistributing Lagrangian particles between the cloud base and cloud top with probability equal to 1/25th of the NWP predicted convective cloud fraction. Such a scheme is essentially diffusive and it does not make optimal use of all the information provided by the driving meteorological model. To make up for these shortcomings and make the parametrization more physically based, the convection scheme has been recently revised. The resulting version, presented in this paper, is now based on the balance equation between upward, entrainment and detrainment fluxes. In particular, upward mass fluxes are calculated with empirical formulas derived from Cloud Resolving Models and using the NWP convective precipitation diagnostic as closure. The fluxes are used to estimate how many particles entrain, move upward and detrain. Lastly, the scheme is completed by applying a compensating subsidence flux. The performance of the updated convection scheme is benchmarked against available observational data of passive tracers. In particular, radioxenon is a noble gas that can undergo significant long range transport: this study makes use of observations of
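
    The earlier, diffusive scheme described above is concrete enough to sketch directly: at each time step a particle in a convective column is, with probability equal to 1/25 of the convective cloud fraction, given a new random height between cloud base and cloud top. The sketch below is an illustration of that description only (not NAME code), and restricting the move to particles already inside the cloud layer is an assumption made here.

        import numpy as np

        def redistribute_particles(z, cloud_base, cloud_top, cloud_fraction, rng):
            """Diffusive convective mixing: particles in the cloud layer are moved, with
            probability cloud_fraction/25 per time step, to a uniform random height
            between cloud base and cloud top."""
            z = z.copy()
            in_layer = (z >= cloud_base) & (z <= cloud_top)   # assumption: only in-layer particles mix
            move = in_layer & (rng.random(z.size) < cloud_fraction / 25.0)
            z[move] = rng.uniform(cloud_base, cloud_top, size=move.sum())
            return z

        rng = np.random.default_rng(0)
        heights = rng.uniform(0.0, 2000.0, 10_000)            # particle heights (m), invented
        heights = redistribute_particles(heights, cloud_base=800.0, cloud_top=6000.0,
                                         cloud_fraction=0.4, rng=rng)
        print(heights.max())    # some boundary-layer particles now sit near cloud top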

  19. Nonlinear Container Ship Model for the Study of Parametric Roll Resonance

    DEFF Research Database (Denmark)

    Holden, Christian; Galeazzi, Roberto; Rodríguez, Claudio

    2007-01-01

    Parametric roll is a critical phenomenon for ships, whose onset may cause roll oscillations up to 40°, leading to very dangerous situations and possibly capsizing. Container ships have been shown to be particularly prone to parametric roll resonance when they are sailing in moderate to heavy head ...

  20. Model reduction of parametrized systems

    CERN Document Server

    Ohlberger, Mario; Patera, Anthony; Rozza, Gianluigi; Urban, Karsten

    2017-01-01

    The special volume offers a global guide to new concepts and approaches concerning the following topics: reduced basis methods, proper orthogonal decomposition, proper generalized decomposition, approximation theory related to model reduction, learning theory and compressed sensing, stochastic and high-dimensional problems, system-theoretic methods, nonlinear model reduction, reduction of coupled problems/multiphysics, optimization and optimal control, state estimation and control, reduced order models and domain decomposition methods, Krylov-subspace and interpolatory methods, and applications to real industrial and complex problems. The book represents the state of the art in the development of reduced order methods. It contains contributions from internationally respected experts, guaranteeing a wide range of expertise and topics. Further, it reflects an important effort, carried out over the last 12 years, to build a growing research community in this field. Though not a textbook, some of the chapters ca...

  1. Fitted Hanbury-Brown Twiss radii versus space-time variances in flow-dominated models

    Science.gov (United States)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-04-01

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.
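
    As a minimal sketch of the fitting step described above (not the authors' analytic Gaussian-fit algorithm), one can construct a correlator from a slightly non-Gaussian source and fit it with a one-dimensional Gaussian parametrisation; the λ and R values below are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian_correlator(q, lam, R):
        # 1D Gaussian parametrisation C(q) = 1 + lambda * exp(-R^2 q^2)
        return 1.0 + lam * np.exp(-(R * q) ** 2)

    # Synthetic "measured" correlation function from a slightly non-Gaussian source
    q = np.linspace(0.0, 0.3, 60)                                    # GeV/c
    true = 1.0 + 0.5 * np.exp(-(6.0 * q) ** 2) * (1.0 + 0.05 * (6.0 * q) ** 4)
    noisy = true + np.random.default_rng(1).normal(0.0, 0.005, q.size)

    (lam_fit, R_fit), _ = curve_fit(gaussian_correlator, q, noisy, p0=[0.5, 5.0])
    print(f"fitted lambda = {lam_fit:.3f}, fitted R = {R_fit:.2f} GeV^-1")
    ```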

  2. Fitted Hanbury-Brown-Twiss radii versus space-time variances in flow-dominated models

    International Nuclear Information System (INIS)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-01-01

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown-Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data

  3. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference while making as few restrictive parametric assumptions as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in large differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  4. Theoretical and algorithmic advances in multi-parametric programming and control

    KAUST Repository

    Pistikopoulos, Efstratios N.; Dominguez, Luis; Panos, Christos; Kouramas, Konstantinos; Chinchuluun, Altannar

    2012-01-01

    This paper presents an overview of recent theoretical and algorithmic advances, and applications in the areas of multi-parametric programming and explicit/multi-parametric model predictive control (mp-MPC). In multi-parametric programming, advances include areas such as nonlinear multi-parametric programming (mp-NLP), bi-level programming, dynamic programming and global optimization for multi-parametric mixed-integer linear programming problems (mp-MILPs). In multi-parametric/explicit MPC (mp-MPC), advances include areas such as robust multi-parametric control, multi-parametric nonlinear MPC (mp-NMPC) and model reduction in mp-MPC. A comprehensive framework for multi-parametric programming and control is also presented. Recent applications include a hydrogen storage device, a fuel cell power generation system, an unmanned autonomous vehicle (UAV) and a hybrid pressure swing adsorption (PSA) system. © 2012 Springer-Verlag.

  5. Theoretical and algorithmic advances in multi-parametric programming and control

    KAUST Repository

    Pistikopoulos, Efstratios N.

    2012-04-21

    This paper presents an overview of recent theoretical and algorithmic advances, and applications in the areas of multi-parametric programming and explicit/multi-parametric model predictive control (mp-MPC). In multi-parametric programming, advances include areas such as nonlinear multi-parametric programming (mp-NLP), bi-level programming, dynamic programming and global optimization for multi-parametric mixed-integer linear programming problems (mp-MILPs). In multi-parametric/explicit MPC (mp-MPC), advances include areas such as robust multi-parametric control, multi-parametric nonlinear MPC (mp-NMPC) and model reduction in mp-MPC. A comprehensive framework for multi-parametric programming and control is also presented. Recent applications include a hydrogen storage device, a fuel cell power generation system, an unmanned autonomous vehicle (UAV) and a hybrid pressure swing adsorption (PSA) system. © 2012 Springer-Verlag.

  6. Global fits of GUT-scale SUSY models with GAMBIT

    Science.gov (United States)

    Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin

    2017-12-01

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.

  7. Global fits of GUT-scale SUSY models with GAMBIT

    Energy Technology Data Exchange (ETDEWEB)

    Athron, Peter [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Balazs, Csaba [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Bringmann, Torsten; Dal, Lars A.; Krislock, Abram; Raklev, Are [University of Oslo, Department of Physics, Oslo (Norway); Buckley, Andy [University of Glasgow, SUPA, School of Physics and Astronomy, Glasgow (United Kingdom); Chrzaszcz, Marcin [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); H. Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Krakow (Poland); Conrad, Jan; Edsjoe, Joakim; Farmer, Ben [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Cornell, Jonathan M. [McGill University, Department of Physics, Montreal, QC (Canada); Jackson, Paul; White, Martin [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); University of Adelaide, Department of Physics, Adelaide, SA (Australia); Kvellestad, Anders; Savage, Christopher [NORDITA, Stockholm (Sweden); Mahmoudi, Farvah [Univ Lyon, Univ Lyon 1, CNRS, ENS de Lyon, Centre de Recherche Astrophysique de Lyon UMR5574, Saint-Genis-Laval (France); Theoretical Physics Department, CERN, Geneva (Switzerland); Martinez, Gregory D. [University of California, Physics and Astronomy Department, Los Angeles, CA (United States); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Rogan, Christopher [Harvard University, Department of Physics, Cambridge, MA (United States); Ruiz de Austri, Roberto [IFIC-UV/CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Saavedra, Aldo [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); The University of Sydney, Faculty of Engineering and Information Technologies, Centre for Translational Data Science, School of Physics, Camperdown, NSW (Australia); Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Serra, Nicola [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Weniger, Christoph [University of Amsterdam, GRAPPA, Institute of Physics, Amsterdam (Netherlands); Collaboration: The GAMBIT Collaboration

    2017-12-15

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos. (orig.)

  8. Parametric Hidden Markov Models for Recognition and Synthesis of Movements

    DEFF Research Database (Denmark)

    Herzog, Dennis; Krüger, Volker; Grest, Daniel

    2008-01-01

    In humanoid robotics, the recognition and synthesis of parametric movements plays an extraordinary role in robot-human interaction. Such a parametric movement is a movement of a particular type (semantic), for example, similar pointing movements performed at different table-top positions. ... For understanding the whole meaning of a human movement, the recognition of its type and, likewise, of its parameterization are important; only both together convey the whole meaning. Vice versa, for mimicry, the synthesis of movements for the motor control of a robot needs to be parameterized, e.g., by the relative ... the applicability for online recognition based on very noisy 3D tracking data. The use of a parametric representation of movements is shown in a robot demo, where a robot removes objects from a table as demonstrated by an advisor. The synthesis for motor control is performed for arbitrary table-top positions...

  9. The regression-calibration method for fitting generalized linear models with additive measurement error

    OpenAIRE

    James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll

    2003-01-01

    This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
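
    A minimal sketch of the regression-calibration idea (not the Stata implementation discussed in the paper): replace the error-prone proxy by its best linear predictor of the true covariate, then fit the GLM as usual. The variance components below are assumed known for illustration; in practice they would be estimated from replicates or instrumental variables.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 2000
    x = rng.normal(0.0, 1.0, n)            # true covariate (unobserved)
    w = x + rng.normal(0.0, 0.7, n)        # error-prone proxy, measurement sd 0.7 (assumed)
    y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.0 * x))))   # logistic outcome

    # Regression calibration: E[x | w] under additive normal measurement error
    sigma_x2, sigma_u2 = 1.0, 0.7 ** 2     # assumed/estimated variance components
    x_hat = w.mean() + (sigma_x2 / (sigma_x2 + sigma_u2)) * (w - w.mean())

    naive = sm.GLM(y, sm.add_constant(w), family=sm.families.Binomial()).fit()
    calib = sm.GLM(y, sm.add_constant(x_hat), family=sm.families.Binomial()).fit()
    print("naive slope:     ", naive.params[1])    # attenuated towards zero
    print("calibrated slope:", calib.params[1])    # closer to the true value 1.0
    ```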

  10. Non-parametric model selection for subject-specific topological organization of resting-state functional connectivity.

    Science.gov (United States)

    Ferrarini, Luca; Veer, Ilya M; van Lew, Baldur; Oei, Nicole Y L; van Buchem, Mark A; Reiber, Johan H C; Rombouts, Serge A R B; Milles, J

    2011-06-01

    In recent years, graph theory has been successfully applied to study functional and anatomical connectivity networks in the human brain. Most of these networks have shown small-world topological characteristics: high efficiency in long distance communication between nodes, combined with highly interconnected local clusters of nodes. Moreover, functional studies performed at high resolutions have presented convincing evidence that resting-state functional connectivity networks exhibit (exponentially truncated) scale-free behavior. Such evidence, however, was mostly presented qualitatively, in terms of linear regressions of the degree distributions on log-log plots. Even when quantitative measures were given, these were usually limited to the r² correlation coefficient. However, the r² statistic is not an optimal estimator of explained variance when dealing with (truncated) power-law models. Recent developments in statistics have introduced new non-parametric approaches, based on the Kolmogorov-Smirnov test, for the problem of model selection. In this work, we have built on this idea to statistically tackle the issue of model selection for the degree distribution of functional connectivity at rest. The analysis, performed at voxel level and in a subject-specific fashion, confirmed the superiority of a truncated power-law model, showing high consistency across subjects. Moreover, the most highly connected voxels were found to be consistently part of the default mode network. Our results provide statistically sound support to the evidence previously presented in literature for a truncated power-law model of resting-state functional connectivity. Copyright © 2010 Elsevier Inc. All rights reserved.
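
    The Kolmogorov-Smirnov-based approach referenced above follows the spirit of distance-based power-law fitting; a minimal sketch for a continuous, pure power law with a known lower cut-off (not the subject-specific truncated model of the paper) is:

    ```python
    import numpy as np

    def fit_power_law_ks(x, xmin):
        """MLE of the exponent of a continuous power law p(x) ~ x^-alpha for x >= xmin,
        plus the Kolmogorov-Smirnov distance between data and fitted model."""
        x = np.asarray(x, dtype=float)
        x = x[x >= xmin]
        alpha = 1.0 + x.size / np.sum(np.log(x / xmin))     # maximum-likelihood estimator
        xs = np.sort(x)
        cdf_emp = np.arange(1, xs.size + 1) / xs.size       # empirical CDF
        cdf_mod = 1.0 - (xs / xmin) ** (1.0 - alpha)        # fitted power-law CDF
        ks = np.max(np.abs(cdf_emp - cdf_mod))
        return alpha, ks

    # Example on synthetic power-law data with alpha = 2.5 and xmin = 1
    u = np.random.default_rng(3).random(5000)
    data = 1.0 * (1.0 - u) ** (-1.0 / (2.5 - 1.0))          # inverse-CDF sampling
    print(fit_power_law_ks(data, xmin=1.0))
    ```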

  11. Fit-for-purpose: species distribution model performance depends on evaluation criteria - Dutch Hoverflies as a case study.

    Science.gov (United States)

    Aguirre-Gutiérrez, Jesús; Carvalheiro, Luísa G; Polce, Chiara; van Loon, E Emiel; Raes, Niels; Reemer, Menno; Biesmeijer, Jacobus C

    2013-01-01

    Understanding species distributions and the factors limiting them is an important topic in ecology and conservation, including in nature reserve selection and predicting climate change impacts. While Species Distribution Models (SDM) are the main tool used for these purposes, choosing the best SDM algorithm is not straightforward as these are plentiful and can be applied in many different ways. SDM are used mainly to gain insight in 1) overall species distributions, 2) their past-present-future probability of occurrence and/or 3) their ecological niche limits (also referred to as ecological niche modelling). The fact that these three aims may require different models and outputs is, however, rarely considered and has not been evaluated consistently. Here we use data from a systematically sampled set of species occurrences to specifically test the performance of Species Distribution Models across several commonly used algorithms. Species range in distribution patterns from rare to common and from local to widespread. We compare overall model fit (representing species distribution), the accuracy of the predictions at multiple spatial scales, and the consistency in the selection of environmental correlates, all across multiple modelling runs. As expected, the choice of modelling algorithm determines model outcome. However, model quality depends not only on the algorithm, but also on the measure of model fit used and the scale at which it is used. Although model fit was higher for the consensus approach and Maxent, Maxent and GAM models were more consistent in estimating local occurrence, while RF and GBM showed higher consistency in environmental variable selection. Model outcomes diverged more for narrowly distributed species than for widespread species. We suggest that matching study aims with modelling approach is essential in Species Distribution Models, and provide suggestions how to do this for different modelling aims and species' data

  12. CONSTRUCTION THE BRIDGE PIER AS PARAMETRIC OBJECT USING AUTODESK REVIT

    Directory of Open Access Journals (Sweden)

    K. I. Hladka

    2017-08-01

    Full Text Available Purpose. The work aims to solve the following tasks: 1) to investigate the possibilities of Autodesk Revit for creating parametric objects; 2) to create an information model of a bridge pier whose dimensions can be changed without changing the geometry of the object; 3) to assess the complexity and feasibility of using parametric models when designing bridge elements. Methodology. The studies were carried out using spatial modeling in Autodesk Revit. The ratios between the parameters of the object were set, the relationships between individual geometric elements were determined, and the changes made to the model when the specified parameters change were checked. Findings. Pier models of two types were created: for railway bridges and for road bridges. Both types of model change their dimensions and the number of constituent elements in accordance with the entered parameters. The performed work confirms the possibility of creating parametric information models of complex form and the expediency of using them in the design of bridges and beyond. Originality. The creation of information models is a modern and relevant topic, but both in the literature and in online resources parametrization is usually illustrated with simple objects. The model proposed in the article consists of several dependent geometric bodies and therefore treats the topic of object parametrization more fully and in more detail than existing sources. As for parametric models of bridge elements, such information is not found in the literature; it is proposed here for the first time. Practical value. Parametrization of spatial models significantly accelerates and simplifies the design of objects, since typical parametric models can be reused in many projects. This especially concerns the design of bridges, since their standard elements, such as piers or spans, are not

  13. Human X-chromosome inactivation pattern distributions fit a model of genetically influenced choice better than models of completely random choice

    Science.gov (United States)

    Renault, Nisa K E; Pritchett, Sonja M; Howell, Robin E; Greer, Wenda L; Sapienza, Carmen; Ørstavik, Karen Helene; Hamilton, David C

    2013-01-01

    In eutherian mammals, one X-chromosome in every XX somatic cell is transcriptionally silenced through the process of X-chromosome inactivation (XCI). Females are thus functional mosaics, where some cells express genes from the paternal X, and the others from the maternal X. The relative abundance of the two cell populations (X-inactivation pattern, XIP) can have significant medical implications for some females. In mice, the 'choice' of which X to inactivate, maternal or paternal, in each cell of the early embryo is genetically influenced. In humans, the timing of XCI choice and whether choice occurs completely randomly or under a genetic influence is debated. Here, we explore these questions by analysing the distribution of XIPs in large populations of normal females. Models were generated to predict XIP distributions resulting from completely random or genetically influenced choice. Each model describes the discrete primary distribution at the onset of XCI, and the continuous secondary distribution accounting for changes to the XIP as a result of development and ageing. Statistical methods are used to compare models with empirical data from Danish and Utah populations. A rigorous data treatment strategy maximises information content and allows for unbiased use of unphased XIP data. The Anderson–Darling goodness-of-fit statistics and likelihood ratio tests indicate that a model of genetically influenced XCI choice better fits the empirical data than models of completely random choice. PMID:23652377
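
    A loose, hypothetical sketch of the kind of likelihood-ratio comparison described (not the authors' primary/secondary distribution models): completely random choice gives a binomial XIP at a fixed cell count, whereas genetically influenced choice adds between-female heterogeneity, here represented by a beta-binomial. All counts and parameters below are synthetic assumptions.

    ```python
    import numpy as np
    from scipy import stats, optimize

    # Hypothetical data: for each female, k of n sampled cells express the paternal X.
    rng = np.random.default_rng(4)
    n = 100
    p_i = rng.beta(8.0, 8.0, size=500)              # heterogeneous skewing propensities
    k = rng.binomial(n, p_i)

    # Model 1: completely random choice -> Binomial(n, 0.5)
    ll_binom = stats.binom.logpmf(k, n, 0.5).sum()

    # Model 2: influenced choice -> Beta-Binomial(n, a, b), fitted by maximum likelihood
    def neg_ll_betabinom(log_ab):
        a, b = np.exp(log_ab)
        return -stats.betabinom.logpmf(k, n, a, b).sum()

    res = optimize.minimize(neg_ll_betabinom, x0=np.log([2.0, 2.0]))
    ll_bb = -res.fun

    # Larger statistic => the heterogeneous-choice model fits better
    print("likelihood-ratio statistic:", 2.0 * (ll_bb - ll_binom))
    ```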

  14. A fitting LEGACY – modelling Kepler's best stars

    Directory of Open Access Journals (Sweden)

    Aarslev Magnus J.

    2017-01-01

    Full Text Available The LEGACY sample represents the best solar-like stars observed in the Kepler mission [5, 8]. The 66 stars in the sample are all on the main sequence or only slightly more evolved. They each have more than one year's observation data in short cadence, allowing for precise extraction of individual frequencies. Here we present model fits using a modified ASTFIT procedure employing two different near-surface-effect corrections, one by Christensen-Dalsgaard [4] and a newer correction proposed by Ball & Gizon [1]. We then compare the results obtained using the different corrections. We find that using the latter correction yields lower masses and significantly lower χ² values for a large part of the sample.
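
    For context, the Ball & Gizon-style two-term surface correction shifts each model frequency by δν = (a₋₁(ν/ν_ac)⁻¹ + a₃(ν/ν_ac)³)/𝓘, with the two coefficients obtained by linear least squares. A minimal sketch, where frequencies, inertias and the acoustic cut-off are placeholder numbers, not LEGACY values:

    ```python
    import numpy as np

    def two_term_surface_correction(nu_model, nu_obs, inertia, nu_ac):
        """Fit d_nu = (a_m1*(nu/nu_ac)**-1 + a_3*(nu/nu_ac)**3) / inertia by linear
        least squares and return corrected model frequencies and the coefficients."""
        x = nu_model / nu_ac
        design = np.column_stack([x ** -1 / inertia, x ** 3 / inertia])
        coeffs, *_ = np.linalg.lstsq(design, nu_obs - nu_model, rcond=None)
        return nu_model + design @ coeffs, coeffs

    # Placeholder numbers purely for illustration (frequencies in muHz)
    nu_model = np.linspace(1500.0, 3500.0, 20)
    inertia = np.linspace(5e-8, 1e-8, 20)
    nu_obs = nu_model - 3e-9 * (nu_model / 3100.0) ** 3 / inertia   # synthetic surface term
    corrected, (a_m1, a_3) = two_term_surface_correction(nu_model, nu_obs, inertia,
                                                         nu_ac=3100.0)
    ```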

  15. Simulation and parametric study of a film-coated controlled-release pharmaceutical.

    Science.gov (United States)

    Borgquist, Per; Zackrisson, Gunnar; Nilsson, Bernt; Axelsson, Anders

    2002-04-23

    Pharmaceutical formulations can be designed as Multiple Unit Systems, such as Roxiam CR, studied in this work. The dose is administered as a capsule, which contains about 100 individual pellets, which in turn contain the active drug remoxipride. Experimental data for a large number of single pellets can be obtained by studying the release using microtitre plates. This makes it possible to study the release of the individual subunits making up the total dose. A mathematical model for simulating the release of remoxipride from single film-coated pellets is presented, including internal and external mass-transfer resistance in addition to the most important film resistance. The model can successfully simulate the release of remoxipride from single film-coated pellets if the lag phase of the experimental data is ignored; this was shown to have a minor influence on the release rate. The use of the present model is demonstrated by a parametric study showing that the release process is film-controlled, i.e. limited by mass transport through the polymer coating. The model was used to fit the film thickness and the drug loading to the experimental release data. The variation in the fitted values was similar to that obtained in the experiments.
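
    A generic film-controlled release model (an illustrative assumption, not the authors' remoxipride model) treats the coating as the rate-limiting barrier, with the flux proportional to the concentration difference across the film; all parameter values below are placeholders rather than fitted values from the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative parameters (placeholders, not fitted values)
    A, h = 2.0e-6, 20e-6          # film area [m^2] and film thickness [m]
    D, K = 1.0e-12, 1.0           # diffusivity in film [m^2/s], partition coefficient
    V_core, M0 = 5.0e-10, 1.0e-6  # pellet core volume [m^3], drug load [kg]
    c_sat = 50.0                  # solubility-limited core concentration [kg/m^3]

    def release_rate(t, m):
        # m[0]: drug remaining in the core. Core solution stays saturated while excess
        # solid drug remains; the external bulk is treated as a perfect sink.
        c_core = min(c_sat, m[0] / V_core)
        return [-A * D * K * c_core / h]

    sol = solve_ivp(release_rate, (0.0, 24 * 3600.0), [M0], max_step=60.0)
    released_fraction = 1.0 - sol.y[0] / M0
    ```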

  16. Covariances for neutron cross sections calculated using a regional model based on local-model fits to experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D.L.; Guenther, P.T.

    1983-11-01

    We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references.
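
    A minimal numerical sketch of the propagation step described above (generic, not the authors' optical-statistical-model code): estimate an a posteriori parameter covariance from the scatter of locally fitted parameter sets, then propagate it to a calculated cross section through a numerical sensitivity (Jacobian) matrix. The model function and parameter values are hypothetical.

    ```python
    import numpy as np

    def cross_section(params, energy):
        """Stand-in model sigma(E; p); a real application would call a nuclear-model
        code here."""
        a, b = params
        return a * np.exp(-b * energy)

    # Parameter sets from local-model fits to individual elements (hypothetical numbers)
    local_fits = np.array([[5.1, 0.30], [4.8, 0.33], [5.3, 0.29], [4.9, 0.31]])
    p_regional = local_fits.mean(axis=0)
    cov_p = np.cov(local_fits, rowvar=False)        # a posteriori parameter covariance

    # Sensitivity matrix S_ij = d sigma(E_i) / d p_j by central finite differences
    energies = np.linspace(0.5, 5.0, 10)
    eps = 1e-6
    S = np.empty((energies.size, p_regional.size))
    for j in range(p_regional.size):
        dp = np.zeros_like(p_regional); dp[j] = eps
        S[:, j] = (cross_section(p_regional + dp, energies)
                   - cross_section(p_regional - dp, energies)) / (2 * eps)

    cov_sigma = S @ cov_p @ S.T                     # propagated cross-section covariance
    uncert = np.sqrt(np.diag(cov_sigma))
    ```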

  17. Covariances for neutron cross sections calculated using a regional model based on local-model fits to experimental data

    International Nuclear Information System (INIS)

    Smith, D.L.; Guenther, P.T.

    1983-11-01

    We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references

  18. A cautionary note on the use of information fit indexes in covariance structure modeling with means

    NARCIS (Netherlands)

    Wicherts, J.M.; Dolan, C.V.

    2004-01-01

    Information fit indexes such as the Akaike Information Criterion, the Consistent Akaike Information Criterion, the Bayesian Information Criterion, and the expected cross-validation index can be valuable in assessing the relative fit of structural equation models that differ in restrictiveness. In cases
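
    For reference, these criteria trade off the maximised log-likelihood against model complexity. A minimal sketch of the standard textbook forms (specific SEM packages may differ by additive constants, and the expected cross-validation index is omitted here):

    ```python
    import numpy as np

    def information_criteria(loglik, n_params, n_obs):
        """Generic AIC/BIC/CAIC from a maximised log-likelihood."""
        aic = -2.0 * loglik + 2.0 * n_params
        bic = -2.0 * loglik + n_params * np.log(n_obs)
        caic = -2.0 * loglik + n_params * (np.log(n_obs) + 1.0)
        return {"AIC": aic, "BIC": bic, "CAIC": caic}

    # Example: two models fitted to the same data; lower values favour a model
    print(information_criteria(loglik=-512.3, n_params=12, n_obs=300))
    print(information_criteria(loglik=-510.9, n_params=15, n_obs=300))
    ```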

  19. Mathematical algorithm development and parametric studies with the GEOFRAC three-dimensional stochastic model of natural rock fracture systems

    Science.gov (United States)

    Ivanova, Violeta M.; Sousa, Rita; Murrihy, Brian; Einstein, Herbert H.

    2014-06-01

    This paper presents results from research conducted at MIT during 2010-2012 on modeling of natural rock fracture systems with the GEOFRAC three-dimensional stochastic model. Following a background summary of discrete fracture network models and a brief introduction of GEOFRAC, the paper provides a thorough description of the newly developed mathematical and computer algorithms for fracture intensity, aperture, and intersection representation, which have been implemented in MATLAB. The new methods optimize, in particular, the representation of fracture intensity in terms of cumulative fracture area per unit volume, P32, via the Poisson-Voronoi Tessellation of planes into polygonal fracture shapes. In addition, fracture apertures now can be represented probabilistically or deterministically whereas the newly implemented intersection algorithms allow for computing discrete pathways of interconnected fractures. In conclusion, results from a statistical parametric study, which was conducted with the enhanced GEOFRAC model and the new MATLAB-based Monte Carlo simulation program FRACSIM, demonstrate how fracture intensity, size, and orientations influence fracture connectivity.
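
    As a simple illustration of the P32 intensity measure mentioned above (a toy disc-based discrete fracture network, not GEOFRAC's Poisson-Voronoi polygonal fractures; all parameter values are assumptions):

    ```python
    import numpy as np

    def sample_p32(volume_edge, area_intensity, mean_radius, rng=None):
        """Generate a toy DFN of penny-shaped (disc) fractures in a cube and return the
        realised P32 = total fracture area / volume. Disc counts follow a Poisson law;
        radii are exponential with the given mean, so E[disc area] = 2*pi*mean_radius^2."""
        rng = np.random.default_rng() if rng is None else rng
        volume = volume_edge ** 3
        # Expected number of discs chosen so that E[P32] matches the target intensity
        n = rng.poisson(area_intensity * volume / (2.0 * np.pi * mean_radius ** 2))
        radii = rng.exponential(mean_radius, size=n)
        return np.sum(np.pi * radii ** 2) / volume

    print(sample_p32(volume_edge=50.0, area_intensity=0.1, mean_radius=2.0,
                     rng=np.random.default_rng(5)))
    ```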

  20. Automated prostate cancer detection via comprehensive multi-parametric magnetic resonance imaging texture feature models

    International Nuclear Information System (INIS)

    Khalvati, Farzad; Wong, Alexander; Haider, Masoom A.

    2015-01-01

    Prostate cancer is the most common form of cancer and the second leading cause of cancer death in North American men. Auto-detection of prostate cancer can play a major role in early detection of prostate cancer, which has a significant impact on patient survival rates. While multi-parametric magnetic resonance imaging (MP-MRI) has shown promise in the diagnosis of prostate cancer, existing auto-detection algorithms do not take advantage of the abundance of data available in MP-MRI to improve detection accuracy. The goal of this research was to design a radiomics-based auto-detection method for prostate cancer using MP-MRI data. In this work, we present new MP-MRI texture feature models for radiomics-driven detection of prostate cancer. In addition to commonly used non-invasive imaging sequences in conventional MP-MRI, namely T2-weighted MRI (T2w) and diffusion-weighted imaging (DWI), our proposed MP-MRI texture feature models incorporate computed high-b DWI (CHB-DWI) and a new diffusion imaging modality called correlated diffusion imaging (CDI). Moreover, the proposed texture feature models incorporate features from individual b-value images. A comprehensive set of texture features was calculated for both the conventional MP-MRI and the new MP-MRI texture feature models. We performed feature selection analysis for each individual modality and then combined the best features from each modality to construct the optimized texture feature models. The performance of the proposed MP-MRI texture feature models was evaluated via leave-one-patient-out cross-validation using a support vector machine (SVM) classifier trained on 40,975 cancerous and healthy tissue samples obtained from real clinical MP-MRI datasets. The proposed MP-MRI texture feature models outperformed the conventional model (i.e., T2w+DWI) with regard to cancer detection accuracy. Comprehensive texture feature models were developed for improved radiomics-driven detection of prostate cancer using MP-MRI. Using a
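
    A minimal sketch of the evaluation protocol described above using scikit-learn's grouped cross-validation; the features, labels and patient groupings are random placeholders, not the paper's radiomics pipeline.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(6)
    n_samples, n_features, n_patients = 600, 20, 20
    X = rng.normal(size=(n_samples, n_features))          # placeholder texture features
    y = rng.integers(0, 2, size=n_samples)                # cancerous vs healthy labels
    groups = rng.integers(0, n_patients, size=n_samples)  # patient ID of each sample

    # Leave-one-patient-out: every fold holds out all samples from one patient
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
    print("leave-one-patient-out accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
    ```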