WorldWideScience

Sample records for surrogate fitness model

  1. Surrogate Modeling for Geometry Optimization

    DEFF Research Database (Denmark)

    Rojas Larrazabal, Marielba de la Caridad; Abraham, Yonas; Holzwarth, Natalie;

    2009-01-01

    A new approach for optimizing the nuclear geometry of an atomic system is described. Instead of the original expensive objective function (energy functional), a small number of simpler surrogates is used.

  2. Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models

    Directory of Open Access Journals (Sweden)

    Scott E. Field

    2014-07-01

    Full Text Available We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform’s value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL + mc_{fit}) online operations, where c_{fit} denotes the fitting function operation count and, typically, m ≪ L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10^{5}M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in…
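
    The greedy reduced-basis construction of the first offline step can be sketched in a few lines. This is an illustrative sketch, not the paper's code: numpy stands in for the authors' tooling, and a toy family of sinusoids (parameterized by frequency) stands in for gravitational waveforms.

```python
import numpy as np

def greedy_reduced_basis(training_set, tol=1e-8):
    """Greedily select waveforms until the worst projection error falls below tol.

    training_set: array of shape (n_params, n_times), one waveform per row.
    Returns an orthonormal basis (rows) and the indices of the selected parameters.
    """
    residual = training_set.astype(float).copy()
    basis, selected = [], []
    while True:
        errs = np.linalg.norm(residual, axis=1)
        worst = int(np.argmax(errs))
        if errs[worst] < tol:
            break
        e = residual[worst] / errs[worst]                 # new orthonormal basis vector
        basis.append(e)
        selected.append(worst)
        residual = residual - np.outer(residual @ e, e)   # project out that direction
    return np.array(basis), selected

# Toy "waveform family": sinusoids whose frequency stands in for the physical parameter.
t = np.linspace(0.0, 1.0, 200)
train = np.array([np.sin(2 * np.pi * f * t) for f in (1.0, 1.5, 2.0)])
B, idx = greedy_reduced_basis(train)
```

    The empirical interpolant (step two) and the parameter-space fit (step three) would then be built on top of the basis `B` and the selected parameter indices `idx`.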

  3. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, Cristian; Alfonsi, Andrea; Huang, Dongli; Gleicher, Frederick; Wang, Bei; Adbel-Khalik, Hany S.; Pascucci, Valerio; Smith, Curtis L.

    2015-11-01

    This report collects the effort performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the use of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and to the construction of surrogate models for high-dimensionality fields.

  4. Space Mapping Optimization of Microwave Circuits Exploiting Surrogate Models

    DEFF Research Database (Denmark)

    Bakr, M. H.; Bandler, J. W.; Madsen, Kaj

    2000-01-01

    A powerful new space-mapping (SM) optimization algorithm is presented in this paper. It draws upon recent developments in both surrogate-model-based optimization and the modeling of microwave devices. SM optimization is formulated as a general optimization problem of a surrogate model. This model…

  5. Preclinical and human surrogate models of itch

    DEFF Research Database (Denmark)

    Hoeck, Emil August; Marker, Jens Broch; Gazerani, Parisa;

    2016-01-01

    Pruritus, or simply itch, is a debilitating symptom that significantly decreases the quality of life in a wide range of clinical conditions. While histamine remains the most studied mediator of itch in humans, treatment options for chronic itch, in particular antihistamine-resistant itch, are limited. Relevant preclinical and human surrogate models of non-histaminergic itch are needed to accelerate the development of novel antipruritics and diagnostic tools. Advances in basic itch research have facilitated the development of diverse models of itch and associated dysesthesiae. While … currently applied in animals and humans. This article is protected by copyright. All rights reserved.

  6. Surrogate Modeling for Geometry Optimization in Material Design

    DEFF Research Database (Denmark)

    Rojas Larrazabal, Marielba de la Caridad; Abraham, Yonas B.; Holzwarth, Natalie A.W.;

    2007-01-01

    We propose a new approach based on surrogate modeling for geometry optimization in material design. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

  7. Human surrogate models of neuropathic pain: validity and limitations.

    Science.gov (United States)

    Binder, Andreas

    2016-02-01

    Human surrogate models of neuropathic pain in healthy subjects are used to study symptoms, signs, and the hypothesized underlying mechanisms. Although different models are available and different spontaneous and evoked symptoms and signs are inducible, two key questions need to be answered: are human surrogate models conceptually valid, i.e., do they share the sensory phenotype of neuropathic pain states, and are they sufficiently reliable to allow consistent translational research?

  8. A Method of Surrogate Model Construction which Leverages Lower-Fidelity Information using Space Mapping Techniques

    Science.gov (United States)

    2014-03-27

    [No abstract indexed; the excerpt consists of figure-caption residue: errors of the least-squares polynomial response surrogate (LS PRM) overlaid on data from the space-mapped (SM) surrogate; nonlinear space-mapped surrogate responses with the least-squares PRM response plotted for comparison; and a percent-error comparison between the least-squares space-mapping and PRM surrogate models derived from samples in the second dataset.]

  9. Two-dimensional surrogate contact modeling for computationally efficient dynamic simulation of total knee replacements.

    Science.gov (United States)

    Lin, Yi-Chung; Haftka, Raphael T; Queipo, Nestor V; Fregly, Benjamin J

    2009-04-01

    Computational speed is a major limiting factor for performing design sensitivity and optimization studies of total knee replacements. Much of this limitation arises from extensive geometry calculations required by contact analyses. This study presents a novel surrogate contact modeling approach to address this limitation. The approach involves fitting contact forces from a computationally expensive contact model (e.g., a finite element model) as a function of the relative pose between the contacting bodies. Because contact forces are much more sensitive to displacements in some directions than others, standard surrogate sampling and modeling techniques do not work well, necessitating the development of special techniques for contact problems. We present a computational evaluation and practical application of the approach using dynamic wear simulation of a total knee replacement constrained to planar motion in a Stanmore machine. The sample points needed for surrogate model fitting were generated by an elastic foundation (EF) contact model. For the computational evaluation, we performed nine different dynamic wear simulations with both the surrogate contact model and the EF contact model. In all cases, the surrogate contact model accurately reproduced the contact force, motion, and wear volume results from the EF model, with computation time being reduced from 13 min to 13 s. For the practical application, we performed a series of Monte Carlo analyses to determine the sensitivity of predicted wear volume to Stanmore machine setup issues. Wear volume was highly sensitive to small variations in motion and load inputs, especially femoral flexion angle, but not to small variations in component placements. Computational speed was reduced from an estimated 230 h to 4 h per analysis. Surrogate contact modeling can significantly improve the computational speed of dynamic contact and wear simulations of total knee replacements and is appropriate for use in design sensitivity…
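
    The core idea above, fitting contact forces from an expensive model as a function of relative pose, can be illustrated with a small sketch. The force formula, sampling ranges, and quadratic basis below are assumptions for illustration, not the study's EF model.

```python
import numpy as np

# Hypothetical stand-in for the expensive contact model: normal force as a stiff
# function of penetration depth d (mm) and anterior-posterior translation x (mm).
def expensive_contact_force(d, x):
    return 1500.0 * np.maximum(d, 0.0) ** 1.5 + 40.0 * x * np.maximum(d, 0.0)

# Sample the relative-pose space: dense in the force-sensitive direction d,
# coarse in the insensitive direction x (the directional sensitivity the paper notes).
d = np.linspace(0.0, 2.0, 40)
x = np.linspace(-5.0, 5.0, 9)
D, X = np.meshgrid(d, x)
F = expensive_contact_force(D, X)

# Quadratic polynomial surrogate: F ~ c0 + c1 d + c2 x + c3 d^2 + c4 d x + c5 x^2.
A = np.column_stack([np.ones(D.size), D.ravel(), X.ravel(),
                     D.ravel() ** 2, D.ravel() * X.ravel(), X.ravel() ** 2])
coef, *_ = np.linalg.lstsq(A, F.ravel(), rcond=None)

def surrogate_force(d, x):
    """Cheap surrogate evaluation replacing the expensive contact solve."""
    return float(coef @ np.array([1.0, d, x, d * d, d * x, x * x]))
```

    In a dynamic simulation, `surrogate_force` would be called inside the integration loop in place of the expensive contact solve.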

  10. Parameter identification and calibration of the Xin'anjiang model using the surrogate modeling approach

    Science.gov (United States)

    Ye, Yan; Song, Xiaomeng; Zhang, Jianyun; Kong, Fanzhe; Ma, Guangwen

    2014-06-01

    Practical experience has demonstrated that single objective functions, no matter how carefully chosen, prove to be inadequate in providing proper measurements for all of the characteristics of the observed data. One strategy to circumvent this problem is to define multiple fitting criteria that measure different aspects of system behavior, and to use multi-criteria optimization to identify non-dominated optimal solutions. Unfortunately, these analyses require running the original simulation model thousands of times and thus demand prohibitively large computational budgets. As a result, surrogate models have been used in combination with a variety of multi-objective optimization algorithms to approximate the true Pareto front within a limited number of evaluations of the original model. In this study, multi-objective optimization based on surrogate modeling (multivariate adaptive regression splines, MARS) was proposed for a conceptual rainfall-runoff model (the Xin'anjiang model, XAJ). Taking the Yanduhe basin of the Three Gorges region in the upper reaches of the Yangtze River in China as a case study, three evaluation criteria were selected to quantify the goodness-of-fit of observations against values calculated by the simulation model: the Nash-Sutcliffe efficiency coefficient and the relative errors of peak flow and runoff volume (REPF and RERV). The efficacy of this method is demonstrated on the calibration of the XAJ model. Compared to the single-objective optimization results, the multi-objective optimization method can infer the most probable parameter set. The results also demonstrate that the use of surrogate modeling enables much more efficient optimization; the total computational cost is reduced by about 92.5% compared to optimization without surrogate modeling. The results obtained with the proposed method support the feasibility of applying parameter optimization to computationally intensive simulation…

  11. Surrogate Modeling of Deformable Joint Contact using Artificial Neural Networks

    Science.gov (United States)

    Eskinazi, Ilan; Fregly, Benjamin J.

    2016-01-01

    Deformable joint contact models can be used to estimate loading conditions for cartilage-cartilage, implant-implant, human-orthotic, and foot-ground interactions. However, contact evaluations are often so expensive computationally that they can be prohibitive for simulations or optimizations requiring thousands or even millions of contact evaluations. To overcome this limitation, we developed a novel surrogate contact modeling method based on artificial neural networks (ANNs). The method uses special sampling techniques to gather input-output data points from an original (slow) contact model in multiple domains of input space, where each domain represents a different physical situation likely to be encountered. For each contact force and torque output by the original contact model, a multi-layer feed-forward ANN is defined, trained, and incorporated into a surrogate contact model. As an evaluation problem, we created an ANN-based surrogate contact model of an artificial tibiofemoral joint using over 75,000 evaluations of a fine-grid elastic foundation (EF) contact model. The surrogate contact model computed contact forces and torques about 1000 times faster than a less accurate coarse grid EF contact model. Furthermore, the surrogate contact model was seven times more accurate than the coarse grid EF contact model within the input domain of a walking motion. For larger input domains, the surrogate contact model showed the expected trend of increasing error with increasing domain size. In addition, the surrogate contact model was able to identify out-of-contact situations with high accuracy. Computational contact models created using our proposed ANN approach may remove an important computational bottleneck from musculoskeletal simulations or optimizations incorporating deformable joint contact models. PMID:26220591
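
    A minimal stand-in for the ANN-surrogate idea above (illustrative only): here the hidden weights are fixed at random and only the output layer is fit by least squares, for brevity and determinism, whereas the study trains full multi-layer feed-forward ANNs on samples drawn from multiple input domains. The contact-force formula and sampling ranges are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "slow" deformable-contact model: force vs. penetration depth and tilt.
def slow_contact_force(depth, tilt):
    return 800.0 * depth ** 1.5 * (1.0 + 0.3 * np.sin(tilt))

# Gather input-output samples from the slow model (a single sampling domain here).
depth = rng.uniform(0.0, 2.0, 500)
tilt = rng.uniform(-0.5, 0.5, 500)
X = np.column_stack([depth, tilt])
y = slow_contact_force(depth, tilt)

# Single hidden layer of 50 tanh units with fixed random weights; the linear
# output layer is fit by least squares.
W = rng.normal(size=(2, 50))
b = rng.normal(size=50)
H = np.tanh(X @ W + b)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

def ann_surrogate(depth, tilt):
    """Fast surrogate evaluation: one hidden layer, then a linear readout."""
    return float(np.tanh(np.array([depth, tilt]) @ W + b) @ beta)
```

    The study's out-of-contact detection and multi-domain sampling would sit on top of this basic fit-then-evaluate pattern.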

  12. Reliability-based design optimization with progressive surrogate models

    Science.gov (United States)

    Kanakasabai, Pugazhendhi; Dhingra, Anoop K.

    2014-12-01

    Reliability-based design optimization (RBDO) has traditionally been solved as a nested (bilevel) optimization problem, which is a computationally expensive approach. Unilevel and decoupled approaches for solving the RBDO problem have also been suggested in the past to improve the computational efficiency. However, these approaches also require a large number of response evaluations during optimization. To alleviate the computational burden, surrogate models have been used for reliability evaluation. These approaches involve construction of surrogate models for the reliability computation at each point visited by the optimizer in the design variable space. In this article, a novel approach to solving the RBDO problem is proposed based on a progressive sensitivity surrogate model. The sensitivity surrogate models are built in the design variable space outside the optimization loop using the kriging method or the moving least squares (MLS) method based on sample points generated from low-discrepancy sampling (LDS) to estimate the most probable point of failure (MPP). During the iterative deterministic optimization, the MPP is estimated from the surrogate model for each design point visited by the optimizer. The surrogate sensitivity model is also progressively updated for each new iteration of deterministic optimization by adding new points and their responses. Four example problems are presented showing the relative merits of the kriging and MLS approaches and the overall accuracy and improved efficiency of the proposed approach.

  13. Optimization using surrogate models - by the space mapping technique

    DEFF Research Database (Denmark)

    Søndergaard, Jacob

    2003-01-01

    Approximation abilities of the space mapping surrogate are compared with those of a Taylor model of the expensive model. The space mapping surrogate has a lower approximation error for long steps. For short steps, however, the Taylor model of the expensive model is best, due to exact interpolation at the model origin. Five algorithms for space mapping optimization are presented and the numerical performance is evaluated. Three … conditions are satisfied. So hybrid methods, combining the space mapping technique with classical optimization methods, should be used if convergence to high accuracy is wanted.

  15. Surrogate modeling for initial rotational stiffness of welded tubular joints

    Directory of Open Access Journals (Sweden)

    M.R. Garifullin

    2016-10-01

    Full Text Available In recent years, buildings and structures erected in Russia and abroad have had to comply with stringent economic requirements. Buildings should not only be reliable and safe and have an attractive architectural design, but should also meet criteria of rationality and energy efficiency. In practice, this usually means that additional comparative analysis is needed to determine the optimal solution to an engineering task. Such analysis is usually time-consuming and requires huge computational effort. In this regard, surrogate modeling can be an effective tool for solving such problems. This article provides a brief description of surrogate models and the basic techniques of their construction, and describes the process of constructing a surrogate model for calculating the initial rotational stiffness of welded RHS joints made of high-strength steel (HSS).

  16. Strength Reliability Analysis of Turbine Blade Using Surrogate Models

    Directory of Open Access Journals (Sweden)

    Wei Duan

    2014-05-01

    Full Text Available There are many stochastic parameters that affect the reliability of steam turbine blade performance in practical operation. In order to improve the reliability of blade design, it is necessary to take these stochastic parameters into account. In this study, a variable cross-section twisted blade is investigated, and geometrical parameters, material parameters and load parameters are considered as random variables. A reliability analysis method combining the Finite Element Method (FEM), a surrogate model and Monte Carlo Simulation (MCS) is applied to the blade reliability analysis. Based on the blade finite element parametric model and the experimental design, two kinds of surrogate models, Polynomial Response Surface (PRS) and Artificial Neural Network (ANN), are applied to construct approximate analytical expressions between the blade responses (maximum stress and deflection) and the random input variables, which act as a surrogate of the finite element solver to drastically reduce the number of simulations required. The surrogate then supplies most of the samples needed by the Monte Carlo method, and the statistical parameters and cumulative distribution functions of the maximum stress and deflection are obtained by Monte Carlo simulation. Finally, probabilistic sensitivity analysis, which combines the magnitude of the gradient and the width of the scatter range of the random input variables, is applied to evaluate how much the maximum stress and deflection of the blade are influenced by the random nature of the input parameters.
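
    The FEM → response surface → Monte Carlo pattern described above reduces, in outline, to the following sketch. The closed-form "stress" function, DOE grid, random-variable distributions, and allowable-stress limit are invented stand-ins for the blade FEM model, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the FEM solver: maximum blade stress (MPa) as a function of a
# load factor L and a thickness factor t (illustrative closed form, not a real FEM).
def fem_max_stress(L, t):
    return 120.0 * L / t ** 2

# Design of experiments: a small grid of "FEM" runs.
Lg, tg = np.meshgrid(np.linspace(0.8, 1.2, 5), np.linspace(0.9, 1.1, 5))
S = fem_max_stress(Lg, tg)

# Quadratic polynomial response surface (PRS) fitted to the DOE results.
A = np.column_stack([np.ones(Lg.size), Lg.ravel(), tg.ravel(),
                     Lg.ravel() ** 2, Lg.ravel() * tg.ravel(), tg.ravel() ** 2])
c, *_ = np.linalg.lstsq(A, S.ravel(), rcond=None)

def prs_stress(L, t):
    """Cheap surrogate of the FEM maximum stress for arrays L, t."""
    return np.column_stack([np.ones_like(L), L, t, L ** 2, L * t, t ** 2]) @ c

# Monte Carlo simulation on the cheap surrogate instead of the FEM model.
L = rng.normal(1.0, 0.05, 100_000)
t = rng.normal(1.0, 0.02, 100_000)
stress = prs_stress(L, t)
p_exceed = float(np.mean(stress > 140.0))   # probability of exceeding the allowable stress
```

    With 25 "FEM" runs buying 100,000 surrogate evaluations, the sketch mirrors the study's motivation: the surrogate absorbs nearly all of the Monte Carlo sampling cost.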

  17. Very Short Literature Survey From Supervised Learning To Surrogate Modeling

    CERN Document Server

    Brusan, Altay

    2012-01-01

    The past century was the era of linear systems. Either systems (especially industrial ones) were simple or quasi-linear, or linear approximations were accurate enough. In addition, computing devices only became abundant in the closing decades of the century; before then, a lack of computational resources made it difficult to evaluate the available nonlinear system studies. Both of these conditions have now changed: systems are highly complex, and vast amounts of computational power are cheap and easy to obtain. In recent years, a new branch of supervised learning known as surrogate modeling (also meta-modeling or surface modeling) has been devised to answer the new needs of the modeling realm. This short literature survey introduces surrogate modeling to readers familiar with the concepts of supervised learning. The necessity, challenges and visions of the topic are considered.

  18. Multi-objective optimisation of a vehicle energy absorption structure based on surrogate model

    Institute of Scientific and Technical Information of China (English)

    谢素超; 周辉

    2014-01-01

    In order to optimize the crashworthiness of energy-absorbing structures, surrogate models of specific energy absorption (SEA) and of the ratio of SEA to initial peak force (REAF) with respect to the design parameters were constructed using two surrogate-model optimization methods, the polynomial response surface method (PRSM) and the Kriging method (KM). Firstly, the sample data were prepared through design of experiments (DOE). Then, the test data models were set up based on surrogate-model theory, and the data samples were trained to obtain the response relationship between SEA & REAF and the design parameters. Finally, the optimal structural parameters were obtained by visual analysis and a genetic algorithm (GA). The results indicate that the KM, which uses a local interpolation method with a Gaussian correlation function, has the highest fitting accuracy; the optimal structural parameters are an SEA of 29.8558 kJ/kg (corresponding to a = 70 mm and t = 3.5 mm) and an REAF of 0.2896 (corresponding to a = 70 mm and t = 1.9615 mm). The basis functions of the quartic PRSM are of higher order than those of the quadratic PRSM and account for the mutual influence of the design variables, so the fitting accuracy of the quartic PRSM is higher than that of the quadratic PRSM.

  19. Beauty and the beast: Some perspectives on efficient model analysis, surrogate models, and the future of modeling

    Science.gov (United States)

    Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.

    2015-12-01

    For many environmental systems, model runtimes have remained very long as more capable computers have been used to add more processes and finer time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs, divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal the consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing these challenges include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods; both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are weighed against the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit relative to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical…

  20. Sequential optimization of strip bending process using multiquadric radial basis function surrogate models

    NARCIS (Netherlands)

    Havinga, Gosse Tjipke; van den Boogaard, Antonius H.; Klaseboer, G.

    2013-01-01

    Surrogate models are used within the sequential optimization strategy for forming processes. A sequential improvement (SI) scheme is used to refine the surrogate model in the optimal region. One of the popular surrogate modeling methods for SI is Kriging. However, the global response of Kriging models…
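
    A one-dimensional sketch of a multiquadric RBF surrogate inside a sequential improvement loop. The forming response, shape parameter, and infill rule (surrogate minimizer, skipping near-duplicate points) are illustrative assumptions, not the authors' strategy.

```python
import numpy as np

def mq_rbf_fit(X, y, c=0.5):
    """Interpolate 1-D data (X, y) with multiquadric radial basis functions."""
    Phi = np.sqrt((X[:, None] - X[None, :]) ** 2 + c ** 2)
    w = np.linalg.solve(Phi, y)
    return lambda x: np.sqrt((np.atleast_1d(x)[:, None] - X[None, :]) ** 2 + c ** 2) @ w

# Hypothetical forming response: a cheap stand-in for the expensive FEM model.
f = lambda x: (x - 0.6) ** 2 + 0.05 * np.sin(8 * x)

X = np.linspace(0.0, 1.0, 5)         # initial design of experiments
grid = np.linspace(0.0, 1.0, 401)
for _ in range(5):                    # sequential improvement (SI) iterations
    s = mq_rbf_fit(X, f(X))
    # propose the surrogate minimizer, skipping points too close to existing samples
    far = np.min(np.abs(grid[:, None] - X[None, :]), axis=1) > 0.01
    x_new = grid[far][np.argmin(s(grid[far]))]
    X = np.append(X, x_new)           # run the "expensive" model at the infill point
```

    Each iteration spends one expensive evaluation where the current surrogate predicts the optimum, concentrating samples in the optimal region as the abstract describes.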

  1. A Parallel and Distributed Surrogate Model Implementation for Computational Steering

    KAUST Repository

    Butnaru, Daniel

    2012-06-01

    Understanding the influence of multiple parameters in a complex simulation setting is a difficult task. In the ideal case, the scientist can freely steer such a simulation and is immediately presented with the results for a certain configuration of the input parameters. Such an exploration process is however not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute for the time-consuming simulation. The surrogate model we propose is based on the sparse grid technique, and we identify the main computational tasks associated with its evaluation and its extension. We further show how distributed data management combined with the specific use of accelerators allows us to approximate and deliver simulation results to a high-resolution visualization system in real-time. This significantly enhances the steering workflow and facilitates the interactive exploration of large datasets. © 2012 IEEE.

  2. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    Science.gov (United States)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

    Simulation-optimization methods entail a large number of model simulations, which is computationally intensive or even prohibitive if each model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of a MARS model can be improved by bootstrap aggregating (bagging). In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which was developed to simulate groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by a statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve the computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.

  3. Uncertainty quantification of squeal instability via surrogate modelling

    Science.gov (United States)

    Nobari, Amir; Ouyang, Huajiang; Bannister, Paul

    2015-08-01

    One of the major issues that car manufacturers are facing is the noise and vibration of brake systems. Of the different sorts of noise and vibration that a brake system may generate, squeal, an irritating high-frequency noise, costs the manufacturers significantly. Despite considerable research on brake squeal, its root cause is still not fully understood; the most common assumption, however, is mode-coupling. Complex eigenvalue analysis is the most widely used approach to the analysis of brake squeal problems. One of its major drawbacks, nevertheless, is that the effects of variability and uncertainty are not included in the results. Uncertainty and variability are two inseparable parts of any brake system: uncertainty is mainly caused by friction, contact, wear and thermal effects, while variability mostly stems from the manufacturing process, material properties and component geometries. Evaluating the effects of uncertainty and variability in the complex eigenvalue analysis improves the predictability of noise propensity and helps produce a more robust design. The biggest hurdle in the uncertainty analysis of brake systems is the computational cost and time. Most uncertainty analysis techniques rely on the results of many deterministic analyses. A full finite element model of a brake system typically consists of millions of degrees of freedom and many load cases; the running time of such models is so long that the automotive industry is reluctant to do many deterministic analyses. This paper, instead, proposes an efficient method of uncertainty propagation via surrogate modelling. A surrogate model of a brake system is constructed in order to reproduce the outputs of the large-scale finite element model and overcome the issue of computational workloads. The probability distribution of the real part of an unstable mode can then be obtained by using the surrogate model with a massive saving of…

  4. Efficient Calibration of Computationally Intensive Groundwater Models through Surrogate Modelling with Lower Levels of Fidelity

    Science.gov (United States)

    Razavi, S.; Anderson, D.; Martin, P.; MacMillan, G.; Tolson, B.; Gabriel, C.; Zhang, B.

    2012-12-01

    Many sophisticated groundwater models tend to be computationally intensive as they rigorously represent detailed scientific knowledge about the groundwater systems. Calibration (model inversion), which is a vital step of groundwater model development, can require hundreds or thousands of model evaluations (runs) for different sets of parameters and as such demand prohibitively large computational time and resources. One common strategy to circumvent this computational burden is surrogate modelling which is concerned with developing and utilizing fast-to-run surrogates of the original computationally intensive models (also called fine models). Surrogates can be either based on statistical and data-driven models such as kriging and neural networks or simplified physically-based models with lower fidelity to the original system (also called coarse models). Fidelity in this context refers to the degree of the realism of a simulation model. This research initially investigates different strategies for developing lower-fidelity surrogates of a fine groundwater model and their combinations. These strategies include coarsening the fine model, relaxing the numerical convergence criteria, and simplifying the model geological conceptualisation. Trade-offs between model efficiency and fidelity (accuracy) are of special interest. A methodological framework is developed for coordinating the original fine model with its lower-fidelity surrogates with the objective of efficiently calibrating the parameters of the original model. This framework is capable of mapping the original model parameters to the corresponding surrogate model parameters and also mapping the surrogate model response for the given parameters to the original model response. This framework is general in that it can be used with different optimization and/or uncertainty analysis techniques available for groundwater model calibration and parameter/predictive uncertainty assessment. A real-world computationally

  5. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.

  6. Bayesian Calibration of the Community Land Model using Surrogates

    Energy Technology Data Exchange (ETDEWEB)

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Sargsyan, K.; Swiler, Laura P.

    2015-01-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditioned on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that accurate surrogate models can be created for CLM in most cases. The posterior distributions lead to better prediction than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters’ distributions significantly. The structural error model reveals a correlation time-scale which can potentially be used to identify physical processes that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
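    The calibration loop described here, building a cheap surrogate offline and then running MCMC against it, can be sketched compactly. The one-parameter stand-in model, the cubic-polynomial surrogate, the flat prior and the noise level below are all illustrative assumptions, not the actual CLM setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the expensive model: latent heat flux as a
# nonlinear function of a single hydrological parameter theta.
def expensive_model(theta):
    return 50.0 + 30.0 * np.sin(theta) + 5.0 * theta ** 2

# Offline stage: evaluate the expensive model at a few design points and
# fit a cheap cubic-polynomial surrogate.
design = np.linspace(-2.0, 2.0, 12)
coeffs = np.polyfit(design, expensive_model(design), deg=3)
surrogate = lambda theta: np.polyval(coeffs, theta)

# Synthetic observations at the "true" parameter value.
theta_true, sigma = 0.8, 2.0
obs = expensive_model(theta_true) + rng.normal(0.0, sigma, size=20)

def log_post(theta):  # flat prior on [-2, 2], Gaussian likelihood
    if not -2.0 <= theta <= 2.0:
        return -np.inf
    return -0.5 * np.sum((obs - surrogate(theta)) ** 2) / sigma ** 2

# Online stage: random-walk Metropolis using only surrogate evaluations.
chain, theta, lp = [], 0.0, log_post(0.0)
for _ in range(20000):
    prop = theta + 0.2 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
post = np.array(chain[5000:])
post_mean = post.mean()
```

    With a Gaussian-process surrogate in place of the polynomial, the structure of the loop is unchanged; only the fit/predict step differs.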

  7. Bayesian calibration of the Community Land Model using surrogates

    Energy Technology Data Exchange (ETDEWEB)

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Swiler, Laura Painton

    2014-02-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.

  8. Surrogate model based iterative ensemble smoother for subsurface flow data assimilation

    Science.gov (United States)

    Chang, Haibin; Liao, Qinzhuo; Zhang, Dongxiao

    2017-02-01

    Subsurface geological formation properties often involve some degree of uncertainty. Thus, for most conditions, uncertainty quantification and data assimilation are necessary for predicting subsurface flow. The surrogate-model-based method is one common type of uncertainty quantification method, in which a surrogate model is constructed to approximate the relationship between model output and model input. Based on its prediction ability, the constructed surrogate model can be utilized for data assimilation. In this work, we develop an algorithm for implementing an iterative ensemble smoother (ES) using a surrogate model. We first derive an iterative ES scheme using a regular routine. In order to utilize surrogate models, we then borrow the idea of Chen and Oliver (2013) to modify the Hessian, and further develop an independent-parameter-based iterative ES formula. Finally, we establish the algorithm for implementing the iterative ES using surrogate models. Two surrogate models, the PCE surrogate and the interpolation surrogate, are introduced for illustration. The performance of the proposed algorithm is tested on synthetic cases. The results show that satisfactory data assimilation results can be obtained using surrogate models of sufficient accuracy.
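    A minimal numpy sketch of one surrogate-driven ensemble smoother (here an ES-MDA-style multiple data assimilation update rather than the paper's exact iterative scheme; the quadratic "surrogate" forward model and all dimensions are assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

# Cheap stand-in surrogate for the subsurface flow simulator: three
# pressure-like observables as a mildly nonlinear function of two
# log-permeability parameters (an assumption of this sketch, not the
# paper's PCE or interpolation surrogates).
G = np.array([[1.0, 0.5], [0.2, 1.3], [0.8, 0.8]])
def surrogate(m):                       # m: (Ne, 2) -> data: (Ne, 3)
    return m @ G.T + 0.1 * m[:, :1] ** 2

m_true = np.array([1.0, -0.5])
sigma = 0.05
obs = surrogate(m_true[None, :])[0] + rng.normal(0.0, sigma, size=3)
R = sigma ** 2 * np.eye(3)

Ne, n_iter = 200, 4                     # ensemble size, ES-MDA iterations
m = rng.normal(0.0, 1.0, size=(Ne, 2))  # prior ensemble
for _ in range(n_iter):
    d = surrogate(m)                    # forward all members via surrogate
    dm, dd = m - m.mean(0), d - d.mean(0)
    C_md = dm.T @ dd / (Ne - 1)         # cross-covariance (2, 3)
    C_dd = dd.T @ dd / (Ne - 1)         # data covariance  (3, 3)
    # Inflated observation noise (alpha = n_iter) as in ES-MDA.
    K = C_md @ np.linalg.inv(C_dd + n_iter * R)
    perturbed = obs + rng.normal(0.0, np.sqrt(n_iter) * sigma, size=(Ne, 3))
    m = m + (perturbed - d) @ K.T
post_mean = m.mean(0)
```

    Because every forward run goes through the surrogate, the whole smoother costs only matrix algebra; swapping in a trained PCE or interpolation surrogate changes nothing structurally.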

  9. Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation

    KAUST Repository

    Kadoura, Ahmad Salim

    2016-06-01

    In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous reduction in computational time, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data for several pure components, comprising a set of supercritical isotherms and part of the two-phase envelope, were used for tuning the LJ parameters (ε, σ). Based on the conducted optimization, excellent fits were obtained for the noble gases (Ar, Kr, and Xe) and other small molecules (CH4, N2, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO2 and C2H6.
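    The two-stage workflow, non-intrusive PC projection of the simulation response followed by cheap large-scale parameter tuning, can be illustrated as follows. The "MC simulation" below is a smooth analytic stand-in, and the tensor Legendre basis and grid search are assumptions of this sketch:

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(2)

# Stand-in for the expensive MC molecular simulation: three smooth
# "thermodynamic" responses over normalized LJ parameters
# x = (eps, sig) scaled to [-1, 1]^2 (purely illustrative).
def mc_simulation(x):
    e, s = x[:, 0], x[:, 1]
    return np.column_stack([np.exp(0.3 * e) * (1.0 + 0.5 * s),
                            e - 0.4 * s,
                            e * s])

# Non-intrusive PC surrogate: least-squares projection of each response
# onto a tensor-product Legendre basis of degree 3 per dimension.
deg = 3
train = rng.uniform(-1, 1, size=(200, 2))
def basis(x):
    Ve = L.legvander(x[:, 0], deg)
    Vs = L.legvander(x[:, 1], deg)
    return np.einsum('ni,nj->nij', Ve, Vs).reshape(len(x), -1)
coef, *_ = np.linalg.lstsq(basis(train), mc_simulation(train), rcond=None)
pc_surrogate = lambda x: basis(x) @ coef

# Parameter tuning is now cheap: pick the (eps, sig) whose surrogate
# response best matches pseudo-experimental data, over a dense grid.
x_true = np.array([0.4, -0.3])
data = mc_simulation(x_true[None, :])[0]
g1, g2 = np.meshgrid(np.linspace(-1, 1, 101), np.linspace(-1, 1, 101))
grid = np.column_stack([g1.ravel(), g2.ravel()])
best = grid[np.argmin(np.linalg.norm(pc_surrogate(grid) - data, axis=1))]
```

    Once `coef` is stored, each of the 10201 grid evaluations is a small matrix product, which is the "seconds instead of simulations" advantage the abstract describes.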

  10. Proper Orthogonal Decomposition as Surrogate Model for Aerodynamic Optimization

    Directory of Open Access Journals (Sweden)

    Valentina Dolci

    2016-01-01

    Full Text Available A surrogate model based on the proper orthogonal decomposition is developed in order to enable fast and reliable evaluations of aerodynamic fields. The proposed method is applied to subsonic turbulent flows and the proper orthogonal decomposition is based on an ensemble of high-fidelity computations. For the construction of the ensemble, fractional and full factorial planes together with central composite design-of-experiment strategies are applied. For the continuous representation of the projection coefficients in the parameter space, response surface methods are employed. Three case studies are presented. In the first case, the boundary shape of the problem is deformed and the flow past a backward facing step with variable step slope is studied. In the second case, a two-dimensional flow past a NACA 0012 airfoil is considered and the surrogate model is constructed in the (Mach, angle of attack) parameter space. In the last case, the aerodynamic optimization of an automotive shape is considered. The results demonstrate how a reduced-order model based on the proper orthogonal decomposition applied to a small number of high-fidelity solutions can be used to generate aerodynamic data with good accuracy at a low cost.
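    A POD surrogate of the kind described here can be assembled in a few lines: SVD of a snapshot matrix for the basis, plus a polynomial response surface for each projection coefficient. The 1-D "high-fidelity" field and the mode/degree choices are illustrative assumptions:

```python
import numpy as np

# Hypothetical "high-fidelity" solver: a 1-D pressure-like profile that
# depends on one design parameter mu (a stand-in for a CFD solution).
x = np.linspace(0.0, 1.0, 200)
def high_fidelity(mu):
    return np.exp(-mu * x) * np.sin(2 * np.pi * x) + 0.3 * mu * x

# Ensemble of snapshots from a simple design of experiments over mu.
mus = np.linspace(0.5, 2.5, 9)
S = np.stack([high_fidelity(m) for m in mus], axis=1)   # (200, 9)

# POD basis: left singular vectors of the snapshot matrix, r modes kept.
U, sv, _ = np.linalg.svd(S, full_matrices=False)
r = 4
Phi = U[:, :r]

# Projection coefficients of each snapshot, and a polynomial response
# surface for each coefficient as a function of mu.
A = Phi.T @ S                                           # (r, 9)
fits = [np.polyfit(mus, A[i], deg=4) for i in range(r)]

def pod_surrogate(mu):
    a = np.array([np.polyval(f, mu) for f in fits])
    return Phi @ a

mu_test = 1.7    # not in the training set
rel_err = (np.linalg.norm(pod_surrogate(mu_test) - high_fidelity(mu_test))
           / np.linalg.norm(high_fidelity(mu_test)))
```

    In a real application the response surfaces would span several parameters (e.g. Mach and angle of attack), but the basis/coefficient split is the same.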

  11. Fast and accurate prediction of numerical relativity waveforms from binary black hole mergers using surrogate models

    CERN Document Server

    Blackman, Jonathan; Galley, Chad R; Szilagyi, Bela; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-01-01

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. In this paper, we construct an accurate and fast-to-evaluate surrogate model for numerical relativity (NR) waveforms from non-spinning binary black hole coalescences with mass ratios from 1 to 10 and durations corresponding to about 15 orbits before merger. Our surrogate, which is built using reduced order modeling techniques, is distinct from traditional modeling efforts. We find that the full multi-mode surrogate model agrees with waveforms generated by NR to within the numerical error of the NR code. In particular, we show that our modeling strategy produces surrogates which can correctly predict NR waveforms that were not used for the surrogate's training. For all practical purposes, then, the surrogate waveform model is equivalent to the high-accuracy, large-scale simulation waveform but can be evaluated in a millisecond to a second dependin...

  12. On Using Surrogates with Genetic Programming.

    Science.gov (United States)

    Hildebrandt, Torsten; Branke, Jürgen

    2015-01-01

    One way to accelerate evolutionary algorithms with expensive fitness evaluations is to combine them with surrogate models. Surrogate models are efficiently computable approximations of the fitness function, derived by means of statistical or machine learning techniques from samples of fully evaluated solutions. But these models usually require a numerical representation, and therefore cannot be used with the tree representation of genetic programming (GP). In this paper, we present a new way to use surrogate models with GP. Rather than using the genotype directly as input to the surrogate model, we propose using a phenotypic characterization. This phenotypic characterization can be computed efficiently and allows us to define approximate measures of equivalence and similarity. Using a stochastic, dynamic job shop scenario as an example of simulation-based GP with an expensive fitness evaluation, we show how these ideas can be used to construct surrogate models and improve the convergence speed and solution quality of GP.
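    The core idea, characterize an individual by its decisions on fixed reference situations and predict fitness from a nearest neighbour in that phenotype space, can be sketched as below. Linear priority rules stand in for GP trees, and the "expensive" fitness is a cheap toy; both are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in candidates: linear priority rules over two job attributes
# (processing time, due date); a real GP system would evolve tree rules.
def make_rule():
    w = rng.normal(size=2)
    return lambda jobs, w=w: jobs @ w

# Phenotypic characterization: which of 5 queued jobs the rule would pick
# in each of 20 fixed reference decision situations.
situations = rng.uniform(0.0, 1.0, size=(20, 5, 2))
def phenotype(rule):
    return np.array([np.argmin(rule(s)) for s in situations])

# "Expensive" fitness stand-in: agreement with a hidden target rule's
# decisions (the paper runs a full job-shop simulation instead).
target = make_rule()
target_pheno = phenotype(target)
def true_fitness(rule):
    return float(np.mean(phenotype(rule) == target_pheno))

# Surrogate: fitness of the nearest archive member in phenotype
# (Hamming-distance) space, over an archive of evaluated individuals.
archive = [(phenotype(r), true_fitness(r))
           for r in (make_rule() for _ in range(50))]
def surrogate_fitness(rule):
    p = phenotype(rule)
    return min(archive, key=lambda e: int(np.sum(e[0] != p)))[1]
```

    The phenotype sidesteps the tree representation entirely: two syntactically different rules that make the same decisions get the same surrogate fitness.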

  13. Comparison of surrogate models with different methods in groundwater remediation process

    Indian Academy of Sciences (India)

    Jiannan Luo; Wenxi Lu

    2014-10-01

    Surrogate modelling is an effective tool for reducing the computational burden of simulation-optimization. In this article, polynomial regression (PR), radial basis function artificial neural network (RBFANN), and kriging methods were compared for building surrogate models of a multiphase flow simulation model in a simplified nitrobenzene-contaminated aquifer remediation problem. In the model accuracy analysis, a 10-fold cross-validation method was adopted to evaluate the approximation accuracy of the three surrogate models. The results demonstrated that the RBFANN and kriging surrogate models had acceptable approximation accuracy, with the kriging model slightly more accurate than the RBFANN model, whereas the PR model's approximation accuracy was unacceptably poor. Therefore, the RBFANN and kriging surrogates were selected and used in the optimization process to identify the most cost-effective remediation strategy at a nitrobenzene-contaminated site. The optimal remediation costs obtained with the two surrogate-based optimization models were similar, as were their computational burdens. These two surrogate-based optimization models are efficient tools for identifying optimal groundwater remediation strategies.
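    A k-fold comparison of surrogate families along these lines is easy to reproduce. Below, quadratic polynomial regression is compared against a Gaussian RBF surrogate (a stand-in for the RBFANN) on a toy "simulator"; all functions and settings are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for the multiphase flow simulator: remediation "cost" as a
# nonlinear function of two pumping rates.
def simulator(X):
    return np.sin(3.0 * X[:, 0]) + (X[:, 1] - 0.5) ** 2

X = rng.uniform(0.0, 1.0, size=(100, 2))
y = simulator(X)

def fit_pr(Xtr, ytr):
    # Quadratic polynomial regression (PR) with an interaction term.
    P = lambda Z: np.column_stack([np.ones(len(Z)), Z, Z ** 2,
                                   Z[:, :1] * Z[:, 1:]])
    c, *_ = np.linalg.lstsq(P(Xtr), ytr, rcond=None)
    return lambda Z: P(Z) @ c

def fit_rbf(Xtr, ytr, eps=10.0, nugget=1e-6):
    # Gaussian RBF surrogate (stand-in for the RBFANN), lightly regularized.
    K = lambda A, B: np.exp(-eps * ((A[:, None] - B[None]) ** 2).sum(-1))
    w = np.linalg.solve(K(Xtr, Xtr) + nugget * np.eye(len(Xtr)), ytr)
    return lambda Z: K(Z, Xtr) @ w

def cv_rmse(fit, k=10):
    # 10-fold cross-validation of a surrogate-fitting routine.
    idx = np.arange(len(X))
    sq_errs = []
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        model = fit(X[tr], y[tr])
        sq_errs.append((model(X[fold]) - y[fold]) ** 2)
    return float(np.sqrt(np.mean(np.concatenate(sq_errs))))

rmse_pr, rmse_rbf = cv_rmse(fit_pr), cv_rmse(fit_rbf)
```

    As in the article, the low-order polynomial cannot follow the oscillatory part of the response, so its cross-validated error is the larger of the two.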

  14. Sheet metal forming optimization by using surrogate modeling techniques

    Science.gov (United States)

    Wang, Hu; Ye, Fan; Chen, Lei; Li, Enying

    2017-01-01

    Surrogate-assisted optimization has been widely applied in sheet metal forming design due to its efficiency. To improve design efficiency and shorten the product development cycle, it is therefore important for scholars and engineers to understand the performance of each surrogate-assisted optimization method and to make these methods more flexible in practice. For this purpose, the state-of-the-art surrogate-assisted optimization methods are investigated. Furthermore, in view of the bottlenecks and development of surrogate-assisted optimization and sheet metal forming design, some important issues concerning surrogate-assisted optimization in support of sheet metal forming design are analyzed and discussed, including the description of the sheet metal forming design problem, off-line and online sampling strategies, space mapping algorithms, high-dimensional problems, robust design, and some challenges and potentially feasible methods. Overall, this paper provides insightful observations on the performance and potential development of these methods in sheet metal forming design.

  15. Numerical investigation for erratic behavior of Kriging surrogate model

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Hyun Gil; Yi, Seul Gi [KAIST, Daejeon (Korea, Republic of); Choi, Seong Im [Virginia Polytechnic Institute and State University, Blacksburg (United States)

    2014-09-15

    The Kriging model is a popular spatial/temporal interpolation model in engineering, since it can reduce the time required for expensive analyses. However, constructing a Kriging model is hardly straightforward, because its internal semi-variogram structure often exhibits numerically unstable or erratic behavior. In the present study, issues in the maximum likelihood estimation, a vital part of the construction of the Kriging model, are investigated. These issues fall into two categories: Issue I concerns the erratic response of the likelihood function itself, and Issue II concerns numerically unstable behavior of the correlation matrix. For both issues, the specific circumstances that can raise them, and the underlying reasons, are studied, and some practical ways to cope with them are suggested. The issues are further examined in a practical problem: the aerodynamic performance coefficients of a two-dimensional airfoil predicted by CFD analysis. The results show that such erratic behavior of the Kriging surrogate model can be effectively resolved by the proposed solutions. In conclusion, this paper is expected to be helpful in preventing such erratic and unstable behavior.
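    Issue II, the numerically unstable correlation matrix, is easy to demonstrate: nearly coincident sample sites make the Gaussian correlation matrix almost singular, and a small nugget term restores a workable condition number. The sites, range parameter and nugget value below are assumptions of this sketch:

```python
import numpy as np

# Gaussian correlation matrix over 1-D sample sites; theta is the range
# hyperparameter tuned by maximum likelihood in Kriging.
def corr(xs, theta, nugget=0.0):
    d2 = (xs[:, None] - xs[None, :]) ** 2
    return np.exp(-theta * d2) + nugget * np.eye(len(xs))

# Concentrated negative log-likelihood (up to constants) for data y.
def neg_log_lik(theta, xs, y, nugget=0.0):
    R = corr(xs, theta, nugget)
    _, logdet = np.linalg.slogdet(R)
    s2 = y @ np.linalg.solve(R, y) / len(xs)
    return 0.5 * (len(xs) * np.log(s2) + logdet)

# Two nearly coincident sample sites make R numerically singular, one
# common source of the erratic likelihood behaviour discussed above;
# a small nugget restores a workable condition number.
xs = np.array([0.0, 0.1, 0.100001, 0.5, 0.9, 1.0])
y = np.sin(2 * np.pi * xs)
kappa_raw = np.linalg.cond(corr(xs, theta=1.0))
kappa_reg = np.linalg.cond(corr(xs, theta=1.0, nugget=1e-6))
nll = neg_log_lik(1.0, xs, y, nugget=1e-6)
```

    During MLE, an optimizer sweeping theta repeatedly evaluates `neg_log_lik`, so an ill-conditioned `R` at some theta values is exactly what produces the erratic likelihood surface the paper describes.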

  16. Multi-objective optimization of gear forging process based on adaptive surrogate meta-models

    Science.gov (United States)

    Meng, Fanjuan; Labergere, Carl; Lafon, Pascal; Daniel, Laurent

    2013-05-01

    In the forging industry, net shape or near net shape forging of gears has been the subject of considerable research effort in the last few decades. In this paper, a multi-objective optimization methodology for net shape gear forging process design is discussed. The study consists of four main parts: building a parametric CAD geometry model, simulating the forging process, fitting surrogate meta-models, and optimizing the process using an advanced algorithm. In order to make the meta-models approximate the real response as closely as possible, an adaptive meta-model-based design strategy has been applied. This is an iterative process: first, build a preliminary version of the meta-models after the initial simulated calculations; second, improve their accuracy and update the meta-models by adding new representative samples. Using this iterative strategy, the number of initial sample points for real numerical simulations is greatly decreased and the time for the forged gear design is significantly shortened. Finally, an optimal design for an industrial application of a 27-teeth gear forging process is introduced, which includes three optimization variables and two objective functions. A 3D FE numerical simulation model is used to realize the process, and an advanced thermo-elasto-visco-plastic constitutive equation is considered to represent the material behavior. The meta-model applied in this example is kriging and the optimization algorithm is NSGA-II. Finally, a relatively good Pareto optimal front (POF) is obtained by gradually improving the surrogate meta-models.

  17. Surrogate Models for Online Monitoring and Process Troubleshooting of NBR Emulsion Copolymerization

    Directory of Open Access Journals (Sweden)

    Chandra Mouli R. Madhuranthakam

    2016-03-01

    Full Text Available Chemical processes with complex reaction mechanisms generally lead to dynamic models which, while beneficial for predicting and capturing the detailed process behavior, are not readily amenable for direct use in online applications related to process operation, optimisation, control, and troubleshooting. Surrogate models can help overcome this problem. In this research article, the first part focuses on obtaining surrogate models for emulsion copolymerization of nitrile butadiene rubber (NBR), which is usually produced in a train of continuous stirred tank reactors. The predictions and/or profiles for several performance characteristics such as conversion, number of polymer particles, copolymer composition, and weight-average molecular weight, obtained using surrogate models are compared with those obtained using the detailed mechanistic model. In the second part of this article, optimal flow profiles based on dynamic optimisation using the surrogate models are obtained for the production of NBR emulsions with the objective of minimising the off-specification product generated during grade transitions.

  18. Finite Element-Derived Surrogate Models of Locked Plate Fracture Fixation Biomechanics.

    Science.gov (United States)

    Wee, Hwabok; Reid, J Spence; Chinchilli, Vernon M; Lewis, Gregory S

    2017-03-01

    Internal fixation of bone fractures using plates and screws involves many choices, including implant type, material, sizes, and geometric configuration, made by the surgeon. These decisions can be important for providing adequate stability to promote healing and prevent implant mechanical failure. The purpose of this study was to develop mathematical models of the relationships between fracture fixation construct parameters and the resulting 3D biomechanics, based on parametric computer simulations. Finite element models of hundreds of different locked plate fixation constructs for midshaft diaphyseal fractures were systematically assembled using custom algorithms, and axial, torsional, and bending loadings were simulated. Multivariate regression was used to fit response surface polynomial equations relating fixation design parameters to outputs including maximum implant stresses, axial and shear strain at the fracture site, and construct stiffness. Surrogate models with as few as three regressors showed good fit (R^2 = 0.62-0.97). Inner working length was the strongest predictor of maximum plate and screw stresses, and a variety of quadratic and interaction terms influenced the resulting biomechanics. The framework presented in this study can be applied to additional types of bone fractures to provide clinicians and implant designers with clinical insight, surgical optimization, and a comprehensive mathematical description of biomechanics.
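    Fitting such a response surface by multivariate least squares is straightforward; the sketch below regresses a toy "FE stress" on three construct parameters with quadratic and interaction terms (the response function and parameter ranges are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)

# Invented stand-in for the parametric FE study: maximum plate stress as a
# function of inner working length (wl), plate thickness (t) and number of
# screws (n), with a little simulation "noise".
def fe_stress(X):
    wl, t, n = X.T
    return (120.0 * wl - 15.0 * t * wl + 3.0 * wl ** 2 - 4.0 * n
            + rng.normal(0.0, 2.0, len(X)))

X = np.column_stack([rng.uniform(1.0, 5.0, 300),   # inner working length
                     rng.uniform(2.0, 6.0, 300),   # plate thickness
                     rng.integers(4, 10, 300)])    # number of screws
y = fe_stress(X)

# Quadratic response surface with interaction terms, fitted by
# multivariate least squares.
def features(X):
    wl, t, n = X.T
    return np.column_stack([np.ones(len(X)), wl, t, n, wl * t, wl * n,
                            t * n, wl ** 2, t ** 2, n ** 2])

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)
pred = features(X) @ beta
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

    Inspecting the magnitudes of the entries of `beta` is the analogue of the paper's finding that working length and certain interaction terms dominate the response.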

  19. Transport of Pathogen Surrogates in Soil Treatment Units: Numerical Modeling

    Directory of Open Access Journals (Sweden)

    Ivan Morales

    2014-04-01

    Full Text Available Segmented mesocosms (n = 3) packed with sand, sandy loam or clay loam soil were used to determine the effect of soil texture and depth on transport of two septic tank effluent (STE)-borne microbial pathogen surrogates—green fluorescent protein-labeled E. coli (GFPE) and MS-2 coliphage—in soil treatment units. HYDRUS 2D/3D software was used to model the transport of these microbes from the infiltrative surface. Mesocosms were spiked with GFPE and MS-2 coliphage at 10^5 cfu/mL STE and 10^5-10^6 pfu/mL STE, respectively. In all soils, removal rates were >99.99% at 25 cm. The transport simulation compared (1) optimization and (2) trial-and-error modeling approaches. Only slight differences in the transport parameters were observed between these approaches. Treating both the die-off rates and attachment/detachment rates as variables resulted in an overall better model fit, particularly for the tailing phase of the experiments. Independent of the fitting procedure, attachment rates computed by the model were higher in sandy and sandy loam soils than in clay, which was attributed to unsaturated flow conditions at lower water content in the coarser-textured soils. Early breakthrough of the bacteria and virus indicated the presence of preferential flow in the structured clay loam soil, resulting in faster movement of water and microbes through the soil relative to a conservative tracer (bromide).

  20. An Efficient Constraint Boundary Sampling Method for Sequential RBDO Using Kriging Surrogate Model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jihoon; Jang, Junyong; Kim, Shinyu; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Cho, Sugil; Kim, Hyung Woo; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Busan (Korea, Republic of)

    2016-06-15

    Reliability-based design optimization (RBDO) requires a high computational cost owing to its reliability analysis, and a surrogate model is therefore introduced to reduce this cost. In surrogate-model-based RBDO, the accuracy of the reliability estimate depends on the accuracy of the surrogate model along the constraint boundaries. In earlier research, constraint boundary sampling (CBS) was proposed to approximate the constraint boundaries accurately by locating sample points on them. However, because CBS uses sample points on all constraint boundaries, it creates superfluous sample points. In this paper, efficient constraint boundary sampling (ECBS) is proposed to enhance the efficiency of CBS. ECBS uses the statistical information of a kriging surrogate model to locate sample points on or near the RBDO solution. The efficiency of ECBS is verified by mathematical examples.
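    The flavour of kriging-guided boundary sampling can be sketched with a simple zero-mean kriging model: candidates whose predicted constraint value is near zero relative to the kriging uncertainty are preferred as new samples. The toy constraint, kernel and infill score below are assumptions of this sketch, not the paper's ECBS criterion:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy constraint whose zero level set should be sampled accurately
# (ECBS would additionally concentrate points near the RBDO optimum).
def g(X):
    return X[:, 0] ** 2 + X[:, 1] - 1.0    # boundary: x2 = 1 - x1^2

# Simple (zero-mean) kriging predictor and variance, Gaussian kernel.
def krig_fit(Xtr, ytr, theta=2.0, nug=1e-8):
    K = lambda A, B: np.exp(-theta * ((A[:, None] - B[None]) ** 2).sum(-1))
    Kinv = np.linalg.inv(K(Xtr, Xtr) + nug * np.eye(len(Xtr)))
    mean = lambda X: K(X, Xtr) @ Kinv @ ytr
    var = lambda X: 1.0 - np.einsum('ij,jk,ik->i', K(X, Xtr), Kinv,
                                    K(X, Xtr))
    return mean, var

Xtr = rng.uniform(-1.0, 1.0, size=(15, 2))
mean, var = krig_fit(Xtr, g(Xtr))

# Boundary-focused infill criterion: prefer candidates whose predicted
# constraint value is close to zero relative to the kriging uncertainty.
cand = rng.uniform(-1.0, 1.0, size=(2000, 2))
s = np.sqrt(np.maximum(var(cand), 1e-12))
score = np.exp(-0.5 * (mean(cand) / s) ** 2)
x_new = cand[np.argmax(score)]
```

    Adding `x_new` to `Xtr`, refitting, and repeating concentrates samples along the predicted boundary, which is the mechanism both CBS and ECBS exploit.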

  1. Alternative cokriging model for variable-fidelity surrogate modeling

    DEFF Research Database (Denmark)

    Han, Zhong Hua; Zimmermann, Ralf; Goertz, Stefan

    2012-01-01

    to construct global approximation models of the aerodynamic coefficients as well as the drag polar of an RAE 2822 airfoil. The kriging and cokriging models for the moment coefficient show that the poor space-filling properties of the quasi Monte Carlo sampling of the RANS simulations leaves a noticeable gap...

  2. Coastal aquifer management under parameter uncertainty: Ensemble surrogate modeling based simulation-optimization

    Science.gov (United States)

    Janardhanan, S.; Datta, B.

    2011-12-01

    Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence based models are most often used for this purpose where they are trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain. Under these circumstances the application of such approximation surrogates becomes limited. In our study we develop a surrogate model based coupled simulation optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and the aquifer recharge are considered as uncertain values. Three dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters to generate input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives, viz, maximizing total pumping from beneficial wells and minimizing the total pumping from barrier wells for hydraulic control of
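    The ensemble-surrogate construction, bootstrap resampling of the simulator's input-output data set and training one cheap surrogate per resample, can be sketched as follows (a polynomial surrogate replaces genetic programming, and the "simulator" is a toy stand-in; both are assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy stand-in for the FEMWATER simulator: salinity response as a function
# of a pumping rate q and an uncertain hydraulic conductivity k.
def simulator(X):
    q, k = X.T
    return np.exp(-k) * q + 0.1 * q ** 2

X = rng.uniform(0.0, 1.0, size=(120, 2))   # design (Latin hypercube in the paper)
y = simulator(X)

# Ensemble of cheap quadratic surrogates, each trained on a nonparametric
# bootstrap resample of the original input-output data set.
def fit_one(idx):
    P = lambda Z: np.column_stack([np.ones(len(Z)), Z, Z ** 2,
                                   Z[:, :1] * Z[:, 1:]])
    c, *_ = np.linalg.lstsq(P(X[idx]), y[idx], rcond=None)
    return lambda Z: P(Z) @ c

ensemble = [fit_one(rng.integers(0, len(X), len(X))) for _ in range(25)]

# Ensemble prediction: mean across members, with the spread as a simple
# indicator of surrogate uncertainty for the downstream optimizer.
Xq = rng.uniform(0.0, 1.0, size=(200, 2))
preds = np.stack([mdl(Xq) for mdl in ensemble])   # (25, 200)
mean_pred, spread = preds.mean(0), preds.std(0)
```

    In the management problem described above, each ensemble member would be queried inside the multi-objective genetic algorithm, so the uncertainty in the surrogate carries through to the Pareto front.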

  3. Real-time characterization of partially observed epidemics using surrogate models.

    Energy Technology Data Exchange (ETDEWEB)

    Safta, Cosmin; Ray, Jaideep; Lefantzi, Sophia; Crary, David (Applied Research Associates, Arlington, VA); Sargsyan, Khachik; Cheng, Karen (Applied Research Associates, Arlington, VA)

    2011-09-01

    We present a statistical method, predicated on the use of surrogate models, for the 'real-time' characterization of partially observed epidemics. Observations consist of counts of symptomatic patients, diagnosed with the disease, that may be available in the early epoch of an ongoing outbreak. Characterization, in this context, refers to the estimation of epidemiological parameters that can be used to provide short-term forecasts of the ongoing epidemic, as well as gross information on the dynamics of the etiologic agent in the affected population, e.g., the time-dependent infection rate. The characterization problem is formulated as a Bayesian inverse problem, and epidemiological parameters are estimated as distributions using a Markov chain Monte Carlo (MCMC) method, thus quantifying the uncertainty in the estimates. In some cases, the inverse problem can be computationally expensive, primarily due to the epidemic simulator used inside the inversion algorithm. We present a method, based on replacing the epidemiological model with computationally inexpensive surrogates, that can reduce the computational time to minutes without a significant loss of accuracy. The surrogates are created by projecting the output of an epidemiological model onto a set of polynomial chaos bases; thereafter, computations involving the surrogate model reduce to evaluations of a polynomial. We find that the epidemic characterizations obtained with the surrogate models are very close to those obtained with the original model. We also find that the number of projections required to construct a surrogate model is O(10)-O(10^2) less than the number of samples required by the MCMC to construct a stationary posterior distribution; thus, depending upon the epidemiological models in question, it may be possible to omit the offline creation and caching of surrogate models prior to their use in an inverse problem. The technique is demonstrated on synthetic data as well as

  4. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    Science.gov (United States)

    Peng, Haijun; Wang, Wei

    2016-10-01

    An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control toward developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system is proposed in this paper. A new structure for the multi-objective transfer trajectory optimization model is established, which divides the transfer trajectory into several segments and assigns the dominant roles of invariant manifolds and low-thrust control in different segments. To reduce the computational cost of multi-objective transfer trajectory optimization, an adaptive surrogate model based on a mixed sampling strategy is proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization are in agreement with those obtained using direct multi-objective optimization methods, while the computational workload of the adaptive surrogate-based multi-objective optimization is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of the Pareto points of the adaptive surrogate-based multi-objective optimization is approximately 8 times that of the direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization provides obvious advantages over direct multi-objective optimization methods.

  5. Using multiscale spatial models to assess potential surrogate habitat for an imperiled reptile.

    Directory of Open Access Journals (Sweden)

    Jennifer M Fill

    Full Text Available In evaluating conservation and management options for species, practitioners might consider surrogate habitats at multiple scales when estimating available habitat or modeling species' potential distributions based on suitable habitats, especially when native environments are rare. Species' dependence on surrogates likely increases as optimal habitat is degraded and lost due to anthropogenic landscape change, and thus surrogate habitats may be vital for an imperiled species' survival in highly modified landscapes. We used spatial habitat models to examine a potential surrogate habitat for an imperiled ambush predator (eastern diamondback rattlesnake, Crotalus adamanteus; EDB) at two scales. The EDB is an apex predator indigenous to imperiled longleaf pine ecosystems (Pinus palustris) of the southeastern United States. Loss of native open-canopy pine savannas and woodlands has been suggested as the principal cause of the species' extensive decline. We examined EDB habitat selection in the Coastal Plain tidewater region to evaluate the role of marsh as a potential surrogate habitat and to further quantify the species' habitat requirements at two scales: home range (HR) and within the home range (WHR). We studied EDBs using radiotelemetry and employed an information-theoretic approach and logistic regression to model habitat selection as use vs. availability. We failed to detect a positive association with marsh as a surrogate habitat at the HR scale; rather, EDBs exhibited significantly negative associations with all landscape patches except pine savanna. Within-home-range selection was characterized by a negative association with forest and a positive association with ground cover, which suggests that EDBs may use surrogate habitats of similar structure, including marsh, within their home ranges. While our HR analysis did not support tidal marsh as a surrogate habitat, marsh may still provide resources for EDBs at smaller scales.

  6. Adaptive surrogate modeling for response surface approximations with application to bayesian inference

    KAUST Repository

    Prudhomme, Serge

    2015-09-17

    Parameter estimation for complex models using Bayesian inference is usually a very costly process as it requires a large number of solves of the forward problem. We show here how the construction of adaptive surrogate models using a posteriori error estimates for quantities of interest can significantly reduce the computational cost in problems of statistical inference. As surrogate models provide only approximations of the true solutions of the forward problem, it is nevertheless necessary to control these errors in order to construct an accurate reduced model with respect to the observables utilized in the identification of the model parameters. Effectiveness of the proposed approach is demonstrated on a numerical example dealing with the Spalart–Allmaras model for the simulation of turbulent channel flows. In particular, we illustrate how Bayesian model selection using the adapted surrogate model in place of solving the coupled nonlinear equations leads to the same quality of results while requiring fewer nonlinear PDE solves.

  7. Modeling of Heating and Evaporation of FACE I Gasoline Fuel and its Surrogates

    KAUST Repository

    Elwardani, Ahmed Elsaid

    2016-04-05

    The US Department of Energy has formulated different gasoline fuels, called "Fuels for Advanced Combustion Engines (FACE)", to standardize their compositions. FACE I is a low-octane-number gasoline fuel with a research octane number (RON) of approximately 70. The detailed hydrocarbon analysis (DHA) of FACE I shows that it contains 33 components. This large number of components cannot be handled in fuel spray simulations, where thousands of droplets are injected directly into the combustion chamber and undergo heating, breakup, collision, and evaporation simultaneously. The heating and evaporation of a single FACE I fuel droplet were investigated. The heating and evaporation model accounts for the effects of finite thermal conductivity, finite liquid diffusivity, and recirculation inside the droplet, and is referred to as the effective thermal conductivity/effective diffusivity (ETC/ED) model. The temporal variations of the liquid mass fractions of the droplet components were used to characterize the evaporation process. Components with similar evaporation characteristics were merged, with a representative component initially chosen based on the highest initial mass fraction. Three six-component surrogates (Surrogates 1-3) that match the evaporation characteristics of FACE I were formulated without preserving the mass fractions of the different hydrocarbon types. Two further surrogates (Surrogates 4 and 5) were formulated keeping the same hydrocarbon-type concentrations. A distillation-based surrogate that matches the measured distillation profile was also proposed. The calculated molar mass, hydrogen-to-carbon (H/C) ratio, and RON of Surrogate 4 and of the distillation-based surrogate are close to those of FACE I.

  8. Development of a multi-objective optimization algorithm using surrogate models for coastal aquifer management

    Science.gov (United States)

    Kourakos, George; Mantoglou, Aristotelis

    2013-02-01

    The demand for fresh water in coastal areas and islands can be very high due to increased local needs and tourism. A multi-objective optimization methodology is developed, involving minimization of economic and environmental costs while satisfying water demand. The methodology considers desalinization of pumped water and injection of treated water into the aquifer. Variable-density aquifer models are computationally intractable when integrated in optimization algorithms. To alleviate this problem, a multi-objective optimization algorithm is developed combining surrogate models based on Modular Neural Networks [MOSA(MNNs)]. The surrogate models are trained adaptively during optimization based on a genetic algorithm. In the crossover step, each pair of parents generates a pool of offspring, which are evaluated using the fast surrogate model. Then, the most promising offspring are evaluated using the exact numerical model. This procedure eliminates errors in the Pareto solution due to imprecise predictions of the surrogate model. The method offers important advances over previous methods, such as precise evaluation of the Pareto set and the alleviation of error propagation due to surrogate-model approximations. The method is applied to an aquifer on the Greek island of Santorini. The results show that the new MOSA(MNN) algorithm offers a significant reduction in computational time compared to previous methods (in the case study it requires only 5% of the time required by other methods). Further, the Pareto solution is better than those obtained by alternative algorithms.
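The crossover-step pre-screening described above lends itself to a compact sketch. The quadratic toy objective, the nearest-neighbour surrogate, and all names below are illustrative stand-ins for the paper's variable-density model and modular-neural-network surrogates, not its implementation:

```python
import random

random.seed(1)

def exact_model(x):
    # Stand-in for one expensive run of the variable-density aquifer
    # model; deliberately cheap so the sketch runs instantly.
    return (x - 2.0) ** 2

def surrogate_predict(x, archive):
    # Cheap surrogate: nearest-neighbour prediction from the archive of
    # exact evaluations (the paper trains modular neural networks).
    nearest = min(archive, key=lambda pt: abs(pt[0] - x))
    return nearest[1]

# Archive of (input, exact output) pairs; the surrogate is "trained" on it.
archive = [(x, exact_model(x)) for x in (-4.0, 0.0, 4.0, 8.0)]

# Crossover step: a pair of parents generates a pool of candidate offspring.
parents = (-3.0, 6.0)
pool = [random.uniform(min(parents), max(parents)) for _ in range(20)]

# Pre-screen the pool with the fast surrogate ...
pool.sort(key=lambda x: surrogate_predict(x, archive))
promising = pool[:3]

# ... then evaluate only the most promising offspring with the exact
# model, feeding the results back so the surrogate adapts.
for x in promising:
    archive.append((x, exact_model(x)))

best = min(archive, key=lambda pt: pt[1])
print("best candidate so far: x=%.3f, f=%.3f" % best)
```

Only three of the twenty offspring cost an exact evaluation here; the same ratio is what drives the reported 95% reduction in computational time.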

  9. A Bayesian Surrogate Model for Rapid Time Series Analysis and Application to Exoplanet Observations

    CERN Document Server

    Ford, Eric B; Veras, Dimitri

    2011-01-01

    We present a Bayesian surrogate model for the analysis of periodic or quasi-periodic time series data. We describe a computationally efficient implementation that enables Bayesian model comparison. We apply this model to simulated and real exoplanet observations. We discuss the results and demonstrate some of the challenges for applying our surrogate model to realistic exoplanet data sets. In particular, we find that analyses of real world data should pay careful attention to the effects of uneven spacing of observations and the choice of prior for the "jitter" parameter.

  10. Stochastic structural optimization using particle swarm optimization, surrogate models and Bayesian statistics

    Institute of Scientific and Technical Information of China (English)

    Jongbin Im; Jungsun Park

    2013-01-01

    This paper focuses on a method for solving structural optimization problems using particle swarm optimization (PSO), surrogate models, and Bayesian statistics. PSO is a random/stochastic search algorithm designed to find the global optimum. However, PSO needs many evaluations compared to gradient-based optimization, which increases the analysis cost of structural optimization. One way to reduce computing costs in stochastic optimization is to use approximation techniques. In this work, surrogate models are used, including the response surface method (RSM) and kriging. When surrogate models are used, there are errors between exact and approximated values; these errors reduce the reliability of the optimum values and undermine the realism of the surrogate approximation. In this paper, Bayesian statistics is used to obtain more reliable results. To verify and confirm the efficiency of the proposed method combining surrogate models and Bayesian statistics for stochastic structural optimization, two numerical examples are optimized, and the optimization of a hub sleeve is demonstrated as a practical problem.

  11. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    Science.gov (United States)

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2017-01-01

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and for making sound agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, strong parameter correlations, and significant computational requirements. Only a limited number of simulations can therefore be afforded in any attempt to find a near-optimal solution within a reasonable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation budget, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid-based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate of the actual RZWQM2, and then calibrated the surrogate model using a global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial that is fast to evaluate, it can be evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method achieves a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
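The surrogate-then-optimize workflow can be sketched in miniature. The toy one-parameter "model", the interpolating quadratic standing in for the sparse-grid surrogate, and the grid search standing in for QPSO are all assumptions made for illustration:

```python
def expensive_model(p):
    # Stand-in for one RZWQM2 run: maps a parameter to a calibration
    # misfit (e.g., squared error against observed yield); illustrative.
    return 3.0 * (p - 1.4) ** 2 + 0.2

# Step 1: a handful of "expensive" runs at sparse sample points.
samples = [0.0, 1.0, 2.0]
values = [expensive_model(p) for p in samples]
x0, x1, x2 = samples
f0, f1, f2 = values

def surrogate(p):
    # Step 2: cheap polynomial surrogate -- the Lagrange form of the
    # quadratic interpolating the three samples (the paper uses
    # sparse-grid interpolation; this is a minimal stand-in).
    l0 = (p - x1) * (p - x2) / ((x0 - x1) * (x0 - x2))
    l1 = (p - x0) * (p - x2) / ((x1 - x0) * (x1 - x2))
    l2 = (p - x0) * (p - x1) / ((x2 - x0) * (x2 - x1))
    return f0 * l0 + f1 * l1 + f2 * l2

# Step 3: calibrate on the surrogate with many cheap evaluations
# (QPSO in the paper; a dense grid search suffices for the sketch).
grid = [i / 1000.0 for i in range(0, 2001)]
best_p = min(grid, key=surrogate)
print("calibrated parameter: %.3f" % best_p)  # prints: calibrated parameter: 1.400
```

The key economy is that the 2001 evaluations in step 3 hit only the polynomial; the expensive model is called just three times.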

  12. Surrogate POD models for building forming limit diagrams of parameterized sheet metal forming applications

    Science.gov (United States)

    Hamdaoui, M.; Le Quilliec, Guénhaël; Breitkopf, Piotr; Villon, Pierre

    2013-05-01

    The aim of this work is to present a surrogate POD (Proper Orthogonal Decomposition) approach for building forming limit diagrams at minimum cost for parameterized sheet-metal-formed workpieces. First, Latin Hypercube Sampling is performed on the design parameter space. At each design site, displacement fields are computed using the popular open-source finite element software Code_Aster. The method of snapshots is then used for POD mode determination, and the POD coefficients are interpolated using kriging. Furthermore, an error analysis of the surrogate POD model is performed on a validation set. It is shown that, for the considered use case, the accuracy of the surrogate POD model is excellent for the representation of finite element displacement fields. The validated surrogate POD model is then used to build forming limit diagrams (FLD) for any design parameter to assess the quality of stamped metal sheets. Using the surrogate POD model, the Green-Lagrange strain tensor is derived, and the major and minor principal deformations are determined at Gauss points for each mesh element. Furthermore, a signed distance between the forming limit curve at rupture and the obtained cloud of points in the plane (ɛ2, ɛ1) is computed to assess the quality of the formed workpiece. Minimizing this signed distance allows the safest design to be determined for the chosen use case.
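The method of snapshots at the core of this workflow can be sketched as follows. The tiny snapshot matrix is invented, only the dominant mode is extracted by power iteration, and the kriging interpolation of POD coefficients is omitted:

```python
import math

# Toy snapshot matrix: each column is a displacement field computed at
# one design site (5 nodes x 3 snapshots; values are illustrative).
snapshots = [
    [1.0, 2.1, 2.9],
    [2.0, 4.2, 5.8],
    [3.0, 6.1, 8.9],
    [4.0, 8.3, 11.8],
    [5.0, 10.2, 14.9],
]
n_nodes = len(snapshots)
n_snap = len(snapshots[0])

# Method of snapshots: work with the small n_snap x n_snap correlation
# matrix C = S^T S instead of the large spatial eigenproblem.
C = [[sum(snapshots[k][i] * snapshots[k][j] for k in range(n_nodes))
      for j in range(n_snap)] for i in range(n_snap)]

# Power iteration for the dominant eigenvector of C.
v = [1.0] * n_snap
for _ in range(200):
    w = [sum(C[i][j] * v[j] for j in range(n_snap)) for i in range(n_snap)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

# Dominant POD mode: phi = S v, normalized over the spatial nodes.
phi = [sum(snapshots[k][j] * v[j] for j in range(n_snap))
       for k in range(n_nodes)]
norm = math.sqrt(sum(x * x for x in phi))
phi = [x / norm for x in phi]
print("dominant POD mode:", [round(x, 3) for x in phi])
```

Because the toy columns are nearly proportional, a single mode captures almost all of the variance; in the actual application, enough modes are kept to meet the validation-set error tolerance.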

  13. The Model Characteristics of Physical Fitness in CrossFit

    Directory of Open Access Journals (Sweden)

    Vasilii V. Volkov

    2014-06-01

    Full Text Available The aim of the study is to work out the model characteristics of the physical fitness of CrossFit athletes based on laboratory functional testing (n=10). The analysis of body composition was conducted using dual-energy absorptiometry. The morpho-functional characteristics of the heart were explored using a high-resolution ultrasound scanner. Oxygen consumption at the aerobic-anaerobic threshold and maximum oxygen consumption were determined in a step test on arm and leg cycle ergometers using a gas analyzer. The physical fitness level of the leg muscles in the males and females who took part in the study was satisfactory, yet considerably higher than the norm for untrained people. The physical fitness level of the arm muscles was above average and matched the Master of Sport of International Class standards. The productivity of the cardiovascular system was much higher than in healthy males and females who do not work out, and comparable to the standards for advanced soccer players.

  14. Multimission Fuel-Burn Minimization in Aircraft Design: A Surrogate-Modeling Approach

    Science.gov (United States)

    Liem, Rhea Patricia

    Aerodynamic shape and aerostructural design optimizations that maximize the performance at a single flight condition result in designs with unacceptable off-design performance. While considering multiple flight conditions in the optimization improves the robustness of the designs, there is a need to develop a rational strategy for choosing the flight conditions and their relative emphases such that multipoint optimizations reflect the true objective function. In addition, there is a need to consider uncertain missions and flight conditions. In this thesis, the strategies to formulate the multipoint objective functions for aerodynamic shape and aerostructural optimization are presented. To determine the flight conditions and their corresponding weights, a novel surrogate-based mission analysis is developed to efficiently analyze hundreds of actual mission data to emulate their flight condition distribution. Using accurate and reliable surrogate models to approximate the aerodynamic coefficients used in the analysis makes this procedure computationally tractable. A mixture of experts (ME) approach is developed to overcome the limitations of conventional surrogate models in modeling the complex transonic drag profile. The ME approach combines multiple surrogate models probabilistically based on the divide-and-conquer strategy. Using this model in the mission analysis significantly improves the range estimation accuracy, as compared to other conventional surrogate models. As expected, the multipoint aerodynamic shape and aerostructural optimizations demonstrate a consistent drag reduction, instead of the localized improvement by the single-point optimizations. The improved robustness in the multipoint optimized designs was also observed in terms of the improved range performance and more consistent fuel-burn reduction across the different missions. The results presented in this thesis show that the surrogate-model-assisted multipoint optimization produces a robust

  15. Kinetic Modeling of Gasoline Surrogate Components and Mixtures under Engine Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Mehl, M; Pitz, W J; Westbrook, C K; Curran, H J

    2010-01-11

    Real fuels are complex mixtures of thousands of hydrocarbon compounds including linear and branched paraffins, naphthenes, olefins and aromatics. It is generally agreed that their behavior can be effectively reproduced by simpler fuel surrogates containing a limited number of components. In this work, an improved version of the kinetic model by the authors is used to analyze the combustion behavior of several components relevant to gasoline surrogate formulation. Particular attention is devoted to linear and branched saturated hydrocarbons (PRF mixtures), olefins (1-hexene) and aromatics (toluene). Model predictions for pure components, binary mixtures and multicomponent gasoline surrogates are compared with recent experimental information collected in rapid compression machine, shock tube and jet stirred reactors covering a wide range of conditions pertinent to internal combustion engines (3-50 atm, 650-1200K, stoichiometric fuel/air mixtures). Simulation results are discussed focusing attention on the mixing effects of the fuel components.

  16. Effective use of integrated hydrological models in basin-scale water resources management: surrogate modeling approaches

    Science.gov (United States)

    Zheng, Y.; Wu, B.; Wu, X.

    2015-12-01

    Integrated hydrological models (IHMs) consider surface water and subsurface water as a unified system, and have been widely adopted in basin-scale water resources studies. However, due to IHMs' mathematical complexity and high computational cost, it is difficult to implement them in an iterative model evaluation process (e.g., Monte Carlo Simulation, simulation-optimization analysis, etc.), which diminishes their applicability for supporting decision-making in real-world situations. Our studies investigated how to effectively use complex IHMs to address real-world water issues via surrogate modeling. Three surrogate modeling approaches were considered, including 1) DYCORS (DYnamic COordinate search using Response Surface models), a well-established response surface-based optimization algorithm; 2) SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), a response surface-based optimization algorithm that we developed specifically for IHMs; and 3) Probabilistic Collocation Method (PCM), a stochastic response surface approach. Our investigation was based on a modeling case study in the Heihe River Basin (HRB), China's second largest endorheic river basin. The GSFLOW (Coupled Ground-Water and Surface-Water Flow Model) model was employed. Two decision problems were discussed. One is to optimize, both in time and in space, the conjunctive use of surface water and groundwater for agricultural irrigation in the middle HRB region; the other is to cost-effectively collect hydrological data based on a data-worth evaluation. Overall, our study results highlight the value of incorporating an IHM in making decisions of water resources management and hydrological data collection. An IHM like GSFLOW can provide great flexibility in formulating proper objective functions and constraints for various optimization problems.
On the other hand, it has been demonstrated that surrogate modeling approaches can pave the path for such incorporation in real

  17. Evaluation of Model Fit in Cognitive Diagnosis Models

    Science.gov (United States)

    Hu, Jinxiang; Miller, M. David; Huggins-Manley, Anne Corinne; Chen, Yi-Hsin

    2016-01-01

    Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of some popular model fit statistics to detect CDM fit including relative fit indices (AIC, BIC, and CAIC),…

  18. Biomedical model fitting and error analysis.

    Science.gov (United States)

    Costa, Kevin D; Kleinstein, Steven H; Hershberg, Uri

    2011-09-20

    This Teaching Resource introduces students to curve fitting and error analysis; it is the second of two lectures on developing mathematical models of biomedical systems. The first focused on identifying, extracting, and converting required constants--such as kinetic rate constants--from experimental literature. To understand how such constants are determined from experimental data, this lecture introduces the principles and practice of fitting a mathematical model to a series of measurements. We emphasize using nonlinear models for fitting nonlinear data, avoiding problems associated with linearization schemes that can distort and misrepresent the data. To help ensure proper interpretation of model parameters estimated by inverse modeling, we describe a rigorous six-step process: (i) selecting an appropriate mathematical model; (ii) defining a "figure-of-merit" function that quantifies the error between the model and data; (iii) adjusting model parameters to get a "best fit" to the data; (iv) examining the "goodness of fit" to the data; (v) determining whether a much better fit is possible; and (vi) evaluating the accuracy of the best-fit parameter values. Implementation of the computational methods is based on MATLAB, with example programs provided that can be modified for particular applications. The problem set allows students to use these programs to develop practical experience with the inverse-modeling process in the context of determining the rates of cell proliferation and death for B lymphocytes using data from BrdU-labeling experiments.
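Steps (ii) and (iii) of the six-step process, defining a figure-of-merit and fitting the nonlinear model directly rather than via a linearization scheme, can be sketched outside MATLAB as well. The exponential-decay model and the synthetic measurements below are illustrative assumptions, not the lecture's BrdU data:

```python
import math

# Synthetic measurements (t, y) roughly following y = 100 * exp(-0.5 t).
data = [(0.0, 101.0), (1.0, 60.2), (2.0, 37.1), (3.0, 22.0), (4.0, 13.8)]

def sse(A, k):
    # Step (ii): figure-of-merit -- sum of squared errors between the
    # nonlinear model y = A*exp(-k*t) and the measurements.
    return sum((y - A * math.exp(-k * t)) ** 2 for t, y in data)

def best_amplitude(k):
    # For fixed k the model is linear in A, so the least-squares A has
    # a closed form: A = sum(y*e) / sum(e*e) with e = exp(-k*t).
    num = sum(y * math.exp(-k * t) for t, y in data)
    den = sum(math.exp(-2.0 * k * t) for t, _ in data)
    return num / den

# Step (iii): adjust parameters for the best fit -- a grid search over
# k that fits the nonlinear model directly instead of linearizing the
# data via log(y), which would distort the error weighting.
best_k = min((i / 1000.0 for i in range(1, 2000)),
             key=lambda k: sse(best_amplitude(k), k))
best_A = best_amplitude(best_k)
print("A = %.1f, k = %.3f, SSE = %.3f" % (best_A, best_k,
                                          sse(best_A, best_k)))
```

Steps (iv)-(vi) would follow by inspecting the residuals, perturbing the starting point, and bootstrapping the parameter uncertainties.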

  19. Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position

    Energy Technology Data Exchange (ETDEWEB)

    Malinowski, Kathleen T. [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States); Fischell Department of Bioengineering, University of Maryland, College Park, MD (United States); McAvoy, Thomas J. [Fischell Department of Bioengineering, University of Maryland, College Park, MD (United States); Department of Chemical and Biomolecular Engineering and Institute of Systems Research, University of Maryland, College Park, MD (United States); George, Rohini [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States); Dieterich, Sonja [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States); D' Souza, Warren D., E-mail: wdsou001@umaryland.edu [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States); Fischell Department of Bioengineering, University of Maryland, College Park, MD (United States)

    2012-04-01

    Purpose: To investigate the effect of tumor site, measurement precision, tumor-surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor-surrogate correlation and the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased during the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor-surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3-3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for the tumor site and fraction index.

  20. Surrogate model approach for improving the performance of reactive transport simulations

    Science.gov (United States)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2016-04-01

    Reactive transport models serve a large number of important geoscientific applications involving underground resources in industry and scientific research. A reactive transport simulation commonly consists of at least two coupled simulation models. The first is a hydrodynamics simulator responsible for simulating groundwater flow and solute transport. Hydrodynamics simulators are well-established technology and can be very efficient; when run without coupled geochemistry, their spatial geometries can span millions of elements even on desktop workstations. The second is a geochemical simulation model coupled to the hydrodynamics simulator. Geochemical simulation models are much more computationally costly, which makes reactive transport simulations spanning millions of spatial elements very difficult to achieve. To address this problem we propose to replace the coupled geochemical simulation model with a surrogate model. A surrogate is a statistical model created to include only the necessary subset of simulator complexity for a particular scenario. To demonstrate the viability of such an approach we tested it on a popular reactive transport benchmark problem involving 1D calcite transport, a published benchmark for simulation models (Kolditz, 2012). We trained a number of statistical models, available through the caret and DiceEval packages for R, as surrogate models on a randomly sampled subset of the input-output data from the geochemical simulation model used in the original reactive transport simulation. For validation, we used the surrogate model to predict the simulator output on the part of the sampled input data that was not used for training the statistical model.
For this scenario we find that the multivariate adaptive regression splines
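The train-then-validate surrogate workflow described here can be sketched with a stand-in simulator. The toy geochemistry function and the ordinary-least-squares "surrogate" below are assumptions for illustration; the study itself compares model families from the R packages caret and DiceEval:

```python
import math
import random

random.seed(0)

def geochemical_simulator(c):
    # Stand-in for one costly geochemistry call (e.g., calcite
    # equilibrium as a function of solute concentration); illustrative.
    return 2.0 * c + 0.1 * math.sin(10.0 * c)

# Sample simulator input-output pairs.
data = [(c, geochemical_simulator(c))
        for c in (random.uniform(0.0, 1.0) for _ in range(200))]

# Random train/validation split, as in the described workflow.
random.shuffle(data)
train, valid = data[:150], data[150:]

# Train a minimal statistical surrogate: an ordinary least-squares line.
n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
slope = (sum((x - mx) * (y - my) for x, y in train)
         / sum((x - mx) ** 2 for x, _ in train))
intercept = my - slope * mx

# Validate on the held-out pairs the surrogate never saw.
rmse = math.sqrt(sum((y - (slope * x + intercept)) ** 2
                     for x, y in valid) / len(valid))
print("validation RMSE: %.4f" % rmse)
```

The held-out RMSE is the quantity that decides whether a surrogate of a given family is accurate enough to stand in for the geochemical solver inside the coupled simulation.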

  1. Are Physical Education Majors Models for Fitness?

    Science.gov (United States)

    Kamla, James; Snyder, Ben; Tanner, Lori; Wash, Pamela

    2012-01-01

    The National Association of Sport and Physical Education (NASPE) (2002) has taken a firm stance on the importance of adequate fitness levels of physical education teachers stating that they have the responsibility to model an active lifestyle and to promote fitness behaviors. Since the NASPE declaration, national initiatives like Let's Move…

  2. Enhanced surrogate models for statistical design exploiting space mapping technology

    DEFF Research Database (Denmark)

    Koziel, Slawek; Bandler, John W.; Mohamed, Achmed S.;

    2005-01-01

    We present advances in microwave and RF device modeling exploiting Space Mapping (SM) technology. We propose new SM modeling formulations utilizing input mappings, output mappings, frequency scaling and quadratic approximations. Our aim is to enhance circuit models for statistical analysis...

  3. Adaptive Surrogate Modeling for Response Surface Approximations with Application to Bayesian Inference

    KAUST Repository

    Prudhomme, Serge

    2015-01-07

    The need for surrogate models and adaptive methods can be best appreciated if one is interested in parameter estimation using a Bayesian calibration procedure for validation purposes. We extend here our latest work on error decomposition and adaptive refinement for response surfaces to the development of surrogate models that can be substituted for the full models to estimate the parameters of Reynolds-averaged Navier-Stokes models. The error estimates and adaptive schemes are driven by a quantity of interest and are thus based on the approximation of an adjoint problem. We focus in particular on the accurate estimation of evidences to facilitate model selection. The methodology is illustrated on the Spalart-Allmaras RANS model for turbulence simulation.

  4. Fitting Neuron Models to Spike Trains

    Science.gov (United States)

    Rossant, Cyrille; Goodman, Dan F. M.; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K.; Brette, Romain

    2011-01-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925

  5. Contrast Gain Control Model Fits Masking Data

    Science.gov (United States)

    Watson, Andrew B.; Solomon, Joshua A.; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    We studied the fit of a contrast gain control model to data of Foley (JOSA 1994), consisting of thresholds for a Gabor patch masked by gratings of various orientations, or by compounds of two orientations. Our general model includes models of Foley and Teo & Heeger (IEEE 1994). Our specific model used a bank of Gabor filters with octave bandwidths at 8 orientations. Excitatory and inhibitory nonlinearities were power functions with exponents of 2.4 and 2. Inhibitory pooling was broad in orientation, but narrow in spatial frequency and space. Minkowski pooling used an exponent of 4. All of the data for observer KMF were well fit by the model. We have developed a contrast gain control model that fits masking data. Unlike Foley's, our model accepts images as inputs. Unlike Teo & Heeger's, our model did not require multiple channels for different dynamic ranges.
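The divisive gain-control stage with the quoted exponents can be sketched as follows. The semisaturation constant sigma and the contrast values are invented for illustration; only the exponents (2.4, 2, and 4) come from the abstract:

```python
# Divisive gain control for one filter: excitation raised to exponent p,
# divided by a semisaturation constant plus pooled inhibition raised to
# exponent q; responses across filters combine by Minkowski pooling.
p, q, beta, sigma = 2.4, 2.0, 4.0, 0.01  # sigma is an assumed constant

def response(target_contrast, mask_contrasts):
    excitation = target_contrast ** p
    inhibition = sigma + sum(c ** q for c in mask_contrasts)
    return excitation / inhibition

def minkowski_pool(responses):
    # Minkowski pooling with exponent beta over filter responses.
    return sum(r ** beta for r in responses) ** (1.0 / beta)

# Adding a mask raises the inhibitory pool and compresses the target
# response -- the signature of masking in gain-control models.
r_unmasked = response(0.2, [0.2])
r_masked = response(0.2, [0.2, 0.5])
pooled = minkowski_pool([r_unmasked, r_masked])
print("unmasked %.4f  masked %.4f  pooled %.4f"
      % (r_unmasked, r_masked, pooled))
```

In the full model the inhibitory sum runs over a bank of Gabor filters, broad in orientation but narrow in spatial frequency and space, rather than over scalar contrasts.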

  6. Surrogate modelling and optimization using shape-preserving response prediction: A review

    Science.gov (United States)

    Leifsson, Leifur; Koziel, Slawomir

    2016-03-01

    Computer simulation models are ubiquitous in modern engineering design. In many cases, they are the only way to evaluate a given design with sufficient fidelity. Unfortunately, an added computational expense is associated with higher fidelity models. Moreover, the systems being considered are often highly nonlinear and may feature a large number of designable parameters. Therefore, it may be impractical to solve the design problem with conventional optimization algorithms. A promising approach to alleviate these difficulties is surrogate-based optimization (SBO). Among proven SBO techniques, the methods utilizing surrogates constructed from corrected physics-based low-fidelity models are, in many cases, the most efficient. This article reviews a particular technique of this type, namely, shape-preserving response prediction (SPRP), which works on the level of the model responses to correct the underlying low-fidelity models. The formulation and limitations of SPRP are discussed. Applications to several engineering design problems are provided.

  7. Space Mapping Optimization of Microwave Circuits Exploiting Surrogate Models

    DEFF Research Database (Denmark)

    Bakr, M. H.; Bandler, J. W.; Madsen, Kaj

    2000-01-01

    is a convex combination of a mapped coarse model and a linearized fine model. It exploits, in a novel way, a linear frequency-sensitive mapping. During the optimization iterates, the coarse and fine models are simulated at different sets of frequencies. This approach is shown to be especially powerful...

  8. Development of Depletion Code Surrogate Models for Uncertainty Propagation in Scenario Studies

    Science.gov (United States)

    Krivtchik, Guillaume; Coquelet-Pascal, Christine; Blaise, Patrick; Garzenne, Claude; Le Mer, Joël; Freynet, David

    2014-06-01

    The results of transition scenario studies, which enable the comparison of different options for reactor fleet evolution and for the management of future fuel-cycle materials, support technical and economic feasibility studies. The COSI code, developed by CEA, is used to perform scenario calculations. It can model any fuel type, reactor fleet, and fuel facility, and tracks U, Pu, minor actinide, and fission-product nuclides over large time scales. COSI is coupled with the CESAR code, which performs the depletion calculations based on one-group cross-section libraries and nuclear data. Different types of uncertainties affect scenario studies: nuclear data and scenario assumptions. It is therefore necessary to evaluate their impact on the major scenario results. The methodology adopted to propagate these uncertainties through the scenario calculations is a stochastic approach. Considering the number of inputs to be sampled in a stochastic calculation of the propagated uncertainty, it is necessary to reduce the calculation time. Given that depletion calculations represent approximately 95% of the total scenario simulation time, an optimization is possible: the development and implementation in COSI of a library of surrogate models of CESAR. The input parameters of CESAR are sampled with URANIE, the CEA uncertainty platform, and for every sample the isotopic composition after depletion computed by CESAR is stored. Statistical analysis of the input and output tables then allows the behavior of CESAR to be modeled for each CESAR library, i.e., a surrogate model to be built. Several quality tests are performed on each surrogate model to ensure that its predictive power is satisfactory. Afterward, a new routine implemented in COSI reads these surrogate models and uses them in place of CESAR calculations. A preliminary study of the calculation time gain shows that the use of surrogate models allows stochastic

  9. Comparative Numerical Study of Four Biodiesel Surrogates for Application on Diesel 0D Phenomenological Modeling

    Directory of Open Access Journals (Sweden)

    Claude Valery Ngayihi Abbe

    2016-01-01

Full Text Available To meet increasingly stringent norms and standards for engine performance and emissions, engine manufacturers need to develop new technologies enhancing the nonpolluting properties of fuels. In that sense, the testing and development of alternative fuels such as biodiesel are of great importance. Fuel testing is nowadays a matter of both experimental and numerical work. Research on diesel engine fuels involves the use of surrogates, for which the combustion mechanisms are well known and relatively similar to those of the investigated fuel. Biodiesel, due to its complex molecular configuration, is still the subject of numerous investigations in this area. This study presents a comparison of four biodiesel surrogates, methyl butanoate, ethyl butyrate, methyl decanoate, and methyl 9-decenoate, in a 0D phenomenological combustion model. They were evaluated for in-cylinder pressure, thermal efficiency, and NOx emissions. Experiments were performed on a six-cylinder turbocharged DI diesel engine fuelled by methyl ester (MEB) and ethyl ester (EEB) biodiesel from waste frying oil. Results showed that, among the four surrogates, methyl butanoate gave the best results for all the studied parameters. In-cylinder pressure and thermal efficiency were predicted with good accuracy by all four surrogates. NOx emissions were well predicted for methyl butanoate, but the other three surrogates gave approximation errors of over 50%.

  10. Fitting Hidden Markov Models to Psychological Data

    Directory of Open Access Journals (Sweden)

    Ingmar Visser

    2002-01-01

Full Text Available Markov models have been used extensively in the psychology of learning. Applications of hidden Markov models, however, are rare, partially because comprehensive statistics for model selection and model assessment are lacking in the psychological literature. We present model selection and model assessment statistics that are particularly useful when applying hidden Markov models in psychology. These statistics are presented and evaluated through simulation studies on a toy example. We compare AIC, BIC, and related criteria, and introduce a prediction error measure for assessing goodness-of-fit. In a simulation study, two methods of fitting equality constraints are compared. In two illustrative examples with experimental data we apply the selection criteria, fit models with constraints, and assess goodness-of-fit. First, data from a concept identification task are analyzed. Hidden Markov models provide a flexible approach to analyzing such data when compared to other modeling methods. Second, a novel application of hidden Markov models in implicit learning is presented. Hidden Markov models are used in this context to quantify the knowledge that subjects express in an implicit learning task. This method of analyzing implicit learning data provides a comprehensive approach for addressing important theoretical issues in the field.
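To make the model-selection criteria concrete, the sketch below evaluates a toy two-state, two-symbol HMM's log-likelihood with the forward algorithm and computes AIC and BIC from it. All parameter values and the observation sequence are invented for illustration; this is not the paper's model or data.

```python
import math

# Toy 2-state, 2-symbol hidden Markov model (all numbers illustrative).
init = [0.6, 0.4]                      # initial state probabilities
trans = [[0.7, 0.3], [0.4, 0.6]]       # state transition matrix
emit = [[0.9, 0.1], [0.2, 0.8]]        # P(symbol | state)

def log_likelihood(obs):
    """Forward algorithm: log P(obs), summing over all hidden state paths."""
    alpha = [init[s] * emit[s][obs[0]] for s in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[r] * trans[r][s] for r in range(2)) * emit[s][o]
                 for s in range(2)]
    return math.log(sum(alpha))

obs = [0, 0, 1, 1, 0, 1, 0, 0]
ll = log_likelihood(obs)
k = 1 + 2 + 2          # free parameters: 1 initial + 2 transition + 2 emission
n = len(obs)
aic = 2 * k - 2 * ll   # AIC penalizes each parameter by 2
bic = k * math.log(n) - 2 * ll  # BIC penalty grows with sample size
print(round(aic, 2), round(bic, 2))
```

Since ln(8) > 2, BIC penalizes the five parameters more heavily than AIC here, which is the usual reason the two criteria can rank models differently.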

  11. Multiobjective Optimization Design of Double-Row Blades Hydraulic Retarder with Surrogate Model

    Directory of Open Access Journals (Sweden)

    Liu Chunbao

    2015-02-01

Full Text Available Because the design of a double-row-blade hydraulic retarder involves many parameters, solving for the optimal parameter combination entails a large computational load, long calculation times, and high cost. In this paper, we propose a multiobjective optimization method to obtain the optimal balanced solution between the braking torque and the volume of a double-row-blade hydraulic retarder. We establish a surrogate model for the objective function using radial basis functions (RBF), thus avoiding time-consuming three-dimensional modeling and fluid simulation. The nondominated sorting genetic algorithm-II (NSGA-II) is then adopted to obtain the optimal combination of design variables. Comparison of the computational fluid dynamics (CFD) values for the optimal combination parameters and the original design parameters indicates that the multiobjective optimization method based on the surrogate model is applicable to the design of double-row-blade hydraulic retarders.
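An RBF surrogate of the kind used here interpolates a small set of expensive objective evaluations. The sketch below fits a 1-D Gaussian-RBF interpolant to a cheap stand-in function (a sine, playing the role of a CFD-evaluated objective); the kernel width, sample points, and stand-in are all invented for illustration.

```python
import math

def rbf(r, eps=1.0):
    return math.exp(-(eps * r) ** 2)   # Gaussian radial basis function

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

# Expensive objective sampled at a few design points (illustrative stand-in
# for CFD evaluations of braking torque vs. one geometry parameter).
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [math.sin(2 * math.pi * x) for x in xs]

# Interpolation weights: solve  A w = y  with A_ij = rbf(|x_i - x_j|).
A = [[rbf(abs(xi - xj)) for xj in xs] for xi in xs]
w = solve(A, ys)

def surrogate(x):
    return sum(wi * rbf(abs(x - xi)) for wi, xi in zip(w, xs))

# The surrogate reproduces the training data exactly (interpolation).
print(max(abs(surrogate(x) - y) for x, y in zip(xs, ys)) < 1e-9)
```

Once the weights are fitted, each surrogate evaluation is a cheap sum of kernels, which is what makes wrapping it in a genetic algorithm such as NSGA-II affordable.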

  12. 'Tissue surrogates' as a model for archival formalin-fixed paraffin-embedded tissues.

    Science.gov (United States)

    Fowler, Carol B; Cunningham, Robert E; O'Leary, Timothy J; Mason, Jeffrey T

    2007-08-01

    High-throughput proteomic studies of archival formalin-fixed paraffin-embedded (FFPE) tissues have the potential to be a powerful tool for examining the clinical course of disease. However, advances in FFPE tissue-based proteomics have been hampered by inefficient methods to extract proteins from archival tissue and by an incomplete knowledge of formaldehyde-induced modifications in proteins. To help address these problems, we have developed a procedure for the formation of 'tissue surrogates' to model FFPE tissues. Cytoplasmic proteins, such as lysozyme or ribonuclease A, at concentrations approaching the protein content in whole cells, are fixed with 10% formalin to form gelatin-like plugs. These plugs have sufficient physical integrity to be processed through graded alcohols, xylene, and embedded in paraffin according to standard histological procedures. In this study, we used tissue surrogates formed from one or two proteins to evaluate extraction protocols for their ability to quantitatively extract proteins from the surrogates. Optimal protein extraction was obtained using a combination of heat, a detergent, and a protein denaturant. The addition of a reducing agent did not improve protein recovery; however, recovery varied significantly with pH. Protein extraction of >80% was observed for pH 4 buffers containing 2% (w/v) sodium dodecyl sulfate (SDS) when heated at 100 degrees C for 20 min, followed by incubation at 60 degrees C for 2 h. SDS-polyacrylamide gel electrophoresis of the extracted proteins revealed that the surrogate extracts contained a mixture of monomeric and multimeric proteins, regardless of the extraction protocol employed. Additionally, protein extracts from surrogates containing carbonic anhydrase:lysozyme (1:2 mol/mol) had disproportionate percentages of lysozyme, indicating that selective protein extraction in complex multiprotein systems may be a concern in proteomic studies of FFPE tissues.

  13. Blast Loading Experiments of Surrogate Models for Tbi Scenarios

    Science.gov (United States)

    Alley, M. D.; Son, S. F.

    2009-12-01

    This study aims to characterize the interaction of explosive blast waves through simulated anatomical models. We have developed physical models and a systematic approach for testing traumatic brain injury (TBI) mechanisms and occurrences. A simplified series of models consisting of spherical PMMA shells housing synthetic gelatins as brain simulants have been utilized. A series of experiments was conducted to compare the sensitivity of the system response to mechanical properties of the simulants under high strain-rate explosive blasts. Small explosive charges were directed at the models to produce a realistic blast wave in a scaled laboratory test cell setting. Blast profiles were measured and analyzed to compare system response severity. High-speed shadowgraph imaging captured blast wave interaction with the head model while particle tracking captured internal response for displacement and strain correlation. The results suggest amplification of shock waves inside the head near material interfaces due to impedance mismatches. In addition, significant relative displacement was observed between the interacting materials suggesting large strain values of nearly 5%. Further quantitative results were obtained through shadowgraph imaging of the blasts confirming a separation of time scales between blast interaction and bulk movement. These results lead to the conclusion that primary blast effects could cause TBI occurrences.

  14. Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing

    Science.gov (United States)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.

    2015-12-01

    For computationally expensive climate models, Monte-Carlo approaches of exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction with posterior uncertainty quantified due to insufficient data. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
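The curse of dimensionality claimed above is easy to quantify: a total-order-p polynomial chaos expansion in d inputs has C(d+p, p) basis terms. The short computation below tabulates this count for a few dimensions, including the d = 65 parameters varied in this study (the other d, p values are illustrative).

```python
from math import comb

# Number of basis terms in a total-order-p polynomial chaos expansion of
# d random inputs: C(d + p, p).  The growth shows why a non-adaptive PC
# basis is infeasible at the 50-100 parameter range discussed above,
# motivating sparse (compressive-sensing) constructions.
for d in (5, 20, 65, 100):
    for p in (2, 3):
        print(f"d={d:3d}  p={p}  terms={comb(d + p, p)}")
```

At d = 65 and p = 3 the full basis already has 50,116 terms, each of whose coefficients would nominally require model evaluations to pin down; a sparse surrogate keeps only the few terms the data support.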

  15. Surrogate-based modeling and dimension reduction techniques for multi-scale mechanics problems

    Institute of Scientific and Technical Information of China (English)

    Wei Shyy; Young-Chang Cho; Wenbo Du; Amit Gupta; Chien-Chou Tseng; Ann Marie Sastry

    2011-01-01

Successful modeling and/or design of engineering systems often requires one to address the impact of multiple “design variables” on the prescribed outcome. There are often multiple, competing objectives based on which we assess the outcome of optimization. Since accurate, high fidelity models are typically time consuming and computationally expensive, comprehensive evaluations can be conducted only if an efficient framework is available. Furthermore, informed decisions of the model/hardware's overall performance rely on an adequate understanding of the global, not local, sensitivity of the individual design variables on the objectives. The surrogate-based approach, which involves approximating the objectives as continuous functions of design variables from limited data, offers a rational framework to reduce the number of important input variables, i.e., the dimension of a design or modeling space. In this paper, we review the fundamental issues that arise in surrogate-based analysis and optimization, highlighting concepts, methods, techniques, as well as modeling implications for mechanics problems. To aid the discussions of the issues involved, we summarize recent efforts in investigating cryogenic cavitating flows, active flow control based on dielectric barrier discharge concepts, and lithium (Li)-ion batteries. It is also stressed that many multi-scale mechanics problems can naturally benefit from the surrogate approach for “scale bridging.”

  16. Surrogate models of precessing numerical relativity gravitational waveforms for use in parameter estimation

    Science.gov (United States)

    Blackman, Jonathan; Field, Scott; Galley, Chad; Hemberger, Daniel; Scheel, Mark; Schmidt, Patricia; Smith, Rory; SXS Collaboration Collaboration

    2016-03-01

    We are now in the advanced detector era of gravitational wave astronomy, and the merger of two black holes (BHs) is one of the most promising sources of gravitational waves that could be detected on earth. To infer the BH masses and spins, the observed signal must be compared to waveforms predicted by general relativity for millions of binary configurations. Numerical relativity (NR) simulations can produce accurate waveforms, but are prohibitively expensive to use for parameter estimation. Other waveform models are fast enough but may lack accuracy in portions of the parameter space. Numerical relativity surrogate models attempt to rapidly predict the results of a NR code with a small or negligible modeling error, after being trained on a set of input waveforms. Such surrogate models are ideal for parameter estimation, as they are both fast and accurate, and have already been built for the case of non-spinning BHs. Using 250 input waveforms, we build a surrogate model for waveforms from the Spectral Einstein Code (SpEC) for a subspace of precessing systems.

  17. Blast Loading Experiments of Developed Surrogate Models for TBI Scenarios

    Science.gov (United States)

    Alley, Matthew; Son, Steven

    2009-06-01

    This study aims to characterize the interaction of explosive blast waves through simulated anatomical systems. We have developed physical models and a systematic approach for testing traumatic brain injury (TBI) mechanisms and occurrences. A simplified series of models consisting of spherical PMMA shells followed by SLA prototyped skulls housing synthetic gelatins as brain simulants have been utilized. A series of experiments was conducted with the simple geometries to compare the sensitivity of the system response to mechanical properties of the simulants under high strain-rate explosive blasts. Small explosive charges were directed at the models to produce a realistic blast wave in a scaled laboratory setting. Blast profiles were measured and analyzed to compare system response severity. High-speed shadowgraph imaging captured blast wave interaction with the head model while particle tracking captured internal response for displacement and strain correlation. The results suggest amplification of shock waves inside the head due to impedance mismatches. Results from the strain correlations added to the theory of internal shearing between tissues.

  18. Laboratory animals as surrogate models of human obesity

    Institute of Scientific and Technical Information of China (English)

    Cecilia NILSSON; Kirsten RAUN; Fei-fei YAN; Marianne O LARSEN; Mads TANG-CHRISTENSEN

    2012-01-01

Obesity and obesity-related metabolic diseases represent a growing socioeconomic problem throughout the world. Great emphasis has been put on establishing treatments for this condition, including pharmacological intervention. However, there are many obstacles and pitfalls in the development process from pre-clinical research to the pharmacy counter, and there is no certainty that what has been observed pre-clinically will translate into an improvement in human health. Hence, it is important to test potential new drugs in a valid translational model early in their development. In the current mini-review, a number of monogenetic and polygenic models of obesity will be discussed in view of their translational character.

  19. Efficient stochastic EMC/EMI analysis using HDMR-generated surrogate models

    KAUST Repository

    Yücel, Abdulkadir C.

    2011-08-01

Stochastic methods have been used extensively to quantify the effects of uncertainty in system parameters (e.g. material, geometrical, and electrical constants) and/or excitation on observables pertinent to electromagnetic compatibility and interference (EMC/EMI) analysis (e.g. voltages across mission-critical circuit elements) [1]. In recent years, stochastic collocation (SC) methods, especially those leveraging generalized polynomial chaos (gPC) expansions, have received significant attention [2, 3]. SC-gPC methods probe surrogate models (i.e. compact polynomial input-output representations) to statistically characterize observables. They are nonintrusive, that is, they use existing deterministic simulators, and often cost only a fraction of what direct Monte Carlo (MC) methods cost. Unfortunately, SC-gPC-generated surrogate models often lack accuracy (i) when the number of uncertain/random system variables is large and/or (ii) when the observables exhibit rapid variations. © 2011 IEEE.

  20. Uncertainty propagation through an aeroelastic wind turbine model using polynomial surrogates

    DEFF Research Database (Denmark)

    Murcia Leon, Juan Pablo; Réthoré, Pierre-Elouan Mikael; Dimitrov, Nikolay Krasimirov

    2017-01-01

Polynomial surrogates are used to characterize the energy production and lifetime equivalent fatigue loads for different components of the DTU 10 MW reference wind turbine under realistic atmospheric conditions. The variability caused by different turbulent inflow fields is captured by creating......-alignment. The methodology presented extends the deterministic power and thrust coefficient curves to uncertainty models and adds new variables like damage equivalent fatigue loads in different components of the turbine. These surrogate models can then be implemented inside other work-flows such as: estimation...... of the uncertainty in annual energy production due to wind resource variability and/or robust wind power plant layout optimization. It can be concluded that it is possible to capture the global behavior of a modern wind turbine and its uncertainty under realistic inflow conditions using polynomial response surfaces...

  1. Kriging-based generation of optimal databases as forward and inverse surrogate models

    Science.gov (United States)

    Bilicz, S.; Lambert, M.; Gyimóthy, Sz

    2010-07-01

    Numerical methods are used to simulate mathematical models for a wide range of engineering problems. The precision provided by such simulators is usually fine, but at the price of computational cost. In some applications this cost might be crucial. This leads us to consider cheap surrogate models in order to reduce the computation time still meeting the precision requirements. Among all available surrogate models, we deal herein with the generation of an 'optimal' database of pre-calculated results combined with a simple interpolator. A database generation approach is investigated which is intended to achieve an optimal sampling. Such databases can be used for the approximate solution of both forward and inverse problems. Their structure carries some meta-information about the involved physical problem. In the case of the inverse problem, an approach for predicting the uncertainty of the solution (due to the applied surrogate model and/or the uncertainty of the measured data) is presented. All methods are based on kriging—a stochastic tool for function approximation. Illustrative examples are drawn from eddy current non-destructive evaluation.
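The kriging machinery behind such a database can be sketched in a few lines. The example below implements simple kriging (zero prior mean) with a Gaussian covariance over a handful of pre-calculated samples of a cheap stand-in simulator; it returns both the interpolated mean and the kriging variance, the quantity the authors use to reason about uncertainty. The covariance parameters, sample points, and stand-in function are all invented for illustration.

```python
import math

def cov(a, b, sigma2=1.0, ell=0.3):
    return sigma2 * math.exp(-((a - b) / ell) ** 2)  # Gaussian covariance

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

# Pre-calculated "database": an expensive simulator sampled at a few points.
xs = [0.0, 0.3, 0.6, 1.0]
ys = [math.cos(3 * x) for x in xs]
K = [[cov(a, b) for b in xs] for a in xs]
alpha = solve(K, ys)          # K^-1 y, reused for every prediction

def predict(x):
    """Simple-kriging mean and variance at x."""
    k = [cov(x, xi) for xi in xs]
    mean = sum(ai * ki for ai, ki in zip(alpha, k))
    var = cov(x, x) - sum(vi * ki for vi, ki in zip(solve(K, k), k))
    return mean, var

m, v = predict(0.15)
print(round(m, 3), v >= -1e-9)
```

At a database point the variance collapses to (numerically) zero, and it grows between points; an "optimal" database in the paper's sense places samples so this predicted uncertainty stays acceptably small everywhere.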

  2. Crack Identification of Cantilever Plates Based on a Kriging Surrogate Model.

    Science.gov (United States)

    Gao, Haiyang; Guo, Xinglin; Ouyang, Huajiang; Han, Fang

    2013-10-01

    This work presents an effective method to identify the tip locations of an internal crack in cantilever plates based on a Kriging surrogate model. Samples of varying crack parameters (tip locations) and their corresponding root mean square (RMS) of random responses are used to construct the initial Kriging surrogate model. Moreover, the pseudo excitation method (PEM) is employed to speed up the spectral analysis. For identifying crack parameters based on the constructed Kriging model, a robust stochastic particle swarm optimization (SPSO) algorithm is adopted for enhancing the global searching ability. To improve the accuracy of the surrogate model without using extensive samples, a small number of samples are first used. Then an optimal point-adding process is carried out to reduce computational cost. Numerical studies of a cantilever plate with an internal crack are performed. The effectiveness and efficiency of this method are demonstrated by the identified results. The effect of initial sampling size on the precision of the identified results is also investigated.
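The global search step can be illustrated with a particle swarm. The sketch below is a plain global-best PSO (the paper uses a stochastic PSO variant) minimizing an analytic bowl that stands in for the Kriging response surface over two crack-tip coordinates; swarm size, coefficients, and the stand-in surface are all illustrative.

```python
import random

random.seed(1)

def surrogate(x, y):
    # Analytic stand-in for the Kriging RMS-response surface; the "true"
    # crack-tip location is placed at (0.6, 0.3) for this illustration.
    return (x - 0.6) ** 2 + (y - 0.3) ** 2

n, iters = 20, 60
pos = [[random.random(), random.random()] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]                      # personal best positions
gbest = min(pbest, key=lambda p: surrogate(*p))[:]  # global best position

for _ in range(iters):
    for i in range(n):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]                       # inertia
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
            pos[i][d] += vel[i][d]
        if surrogate(*pos[i]) < surrogate(*pbest[i]):
            pbest[i] = pos[i][:]
            if surrogate(*pbest[i]) < surrogate(*gbest):
                gbest = pbest[i][:]

print(round(gbest[0], 2), round(gbest[1], 2))
```

Because every fitness call hits the cheap surrogate rather than a full spectral analysis, thousands of swarm evaluations cost essentially nothing, which is the point of pairing PSO with a Kriging model.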

  3. An Efficient Variable Screening Method for Effective Surrogate Models for Reliability-Based Design Optimization

    Science.gov (United States)

    2014-04-01

reliability-based design optimization (RBDO) process, surrogate models are frequently used to reduce the number of simulations because analysis of a...the RBDO problem and thus mitigate the curse of dimensionality. Therefore, it is desirable to develop an efficient and effective variable...screening method for reduction of the dimension of the RBDO problem. In this paper, requirements of the variable screening method for deterministic design

  4. Multiobjective adaptive surrogate modeling-based optimization for parameter estimation of large, complex geophysical models

    Science.gov (United States)

    Gong, Wei; Duan, Qingyun; Li, Jianduo; Wang, Chen; Di, Zhenhua; Ye, Aizhong; Miao, Chiyuan; Dai, Yongjiu

    2016-03-01

    Parameter specification is an important source of uncertainty in large, complex geophysical models. These models generally have multiple model outputs that require multiobjective optimization algorithms. Although such algorithms have long been available, they usually require a large number of model runs and are therefore computationally expensive for large, complex dynamic models. In this paper, a multiobjective adaptive surrogate modeling-based optimization (MO-ASMO) algorithm is introduced that aims to reduce computational cost while maintaining optimization effectiveness. Geophysical dynamic models usually have a prior parameterization scheme derived from the physical processes involved, and our goal is to improve all of the objectives by parameter calibration. In this study, we developed a method for directing the search processes toward the region that can improve all of the objectives simultaneously. We tested the MO-ASMO algorithm against NSGA-II and SUMO with 13 test functions and a land surface model - the Common Land Model (CoLM). The results demonstrated the effectiveness and efficiency of MO-ASMO.
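The adaptive idea (evaluate the expensive model only where the current surrogate predicts an optimum, then refit) can be shown in miniature. The sketch below is a single-objective, 1-D stand-in: the "surrogate" is a local quadratic through the three best samples, whereas MO-ASMO itself is multiobjective and uses far richer surrogates; the objective function and sample counts are invented.

```python
def expensive_model(x):
    # Stand-in for a costly geophysical simulation (single objective here;
    # the method in the paper handles multiple objectives).
    return (x - 0.37) ** 2 + 0.05 * (x - 0.37) ** 4

def parabola_vertex(p):
    """Minimizer of the quadratic through three (x, y) points."""
    (x0, y0), (x1, y1), (x2, y2) = p
    num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
    den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
    return x1 - 0.5 * num / den

# Adaptive surrogate loop: fit a local quadratic surrogate to the three
# best samples, evaluate the true model at the surrogate's minimizer, refit.
samples = [(x, expensive_model(x)) for x in (0.0, 0.5, 1.0)]
for _ in range(3):
    best3 = sorted(samples, key=lambda s: s[1])[:3]
    x_new = parabola_vertex(sorted(best3))
    samples.append((x_new, expensive_model(x_new)))

x_opt = min(samples, key=lambda s: s[1])[0]
print(round(x_opt, 3), "found with", len(samples), "expensive evaluations")
```

Six expensive evaluations land essentially on the optimum; a non-adaptive search would spend its budget uniformly. The adaptive sampling in MO-ASMO plays the same role, but steers toward regions that improve all objectives at once.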

  5. Coastal aquifer management based on surrogate models and multi-objective optimization

    Science.gov (United States)

    Mantoglou, A.; Kourakos, G.

    2011-12-01

The demand for fresh water in coastal areas and islands can be very high, especially in the summer months, due to increased local needs and tourism. In order to satisfy demand, a combined management plan is proposed which involves: i) desalinization (if needed) of pumped water to a potable level using reverse osmosis and ii) injection of biologically treated waste water into the aquifer. The management plan is formulated as a multiobjective optimization problem in which simultaneous minimization of economic and environmental costs is desired, subject to a constraint that demand be satisfied. The method requires modeling tools that are able to predict the salinity levels of the aquifer in response to different alternative management scenarios. Variable-density models can simulate the interaction between fresh water and saltwater; however, they are computationally intractable when integrated in optimization algorithms. To alleviate this problem, a multiobjective optimization algorithm is developed combining surrogate models based on Modular Neural Networks [MOSA(MNN)]. The surrogate models are trained adaptively during optimization based on a genetic algorithm. In the crossover step of the genetic algorithm, each pair of parents generates a pool of offspring. All offspring are evaluated with the fast surrogate model; then only the most promising offspring are evaluated with the exact numerical model. This eliminates errors in the Pareto solution due to imprecise predictions of the surrogate model. Three new criteria for selecting the most promising offspring are proposed, which improve the Pareto set and maintain the diversity of the optimum solutions. The method offers important advances over previous methods, e.g., it alleviates the propagation of errors due to surrogate model approximations. The method is applied to a real coastal aquifer on the island of Santorini, a popular tourist island with high water demand. The results show that the algorithm

  6. Surrogate models for identifying robust, high yield regions of parameter space for ICF implosion simulations

    Science.gov (United States)

    Humbird, Kelli; Peterson, J. Luc; Brandon, Scott; Field, John; Nora, Ryan; Spears, Brian

    2016-10-01

    Next-generation supercomputer architecture and in-transit data analysis have been used to create a large collection of 2-D ICF capsule implosion simulations. The database includes metrics for approximately 60,000 implosions, with x-ray images and detailed physics parameters available for over 20,000 simulations. To map and explore this large database, surrogate models for numerous quantities of interest are built using supervised machine learning algorithms. Response surfaces constructed using the predictive capabilities of the surrogates allow for continuous exploration of parameter space without requiring additional simulations. High performing regions of the input space are identified to guide the design of future experiments. In particular, a model for the yield built using a random forest regression algorithm has a cross validation score of 94.3% and is consistently conservative for high yield predictions. The model is used to search for robust volumes of parameter space where high yields are expected, even given variations in other input parameters. Surrogates for additional quantities of interest relevant to ignition are used to further characterize the high yield regions. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, Lawrence Livermore National Security, LLC. LLNL-ABS-697277.

  7. Effective-one-body waveforms for binary neutron stars using surrogate models

    CERN Document Server

    Lackey, Benjamin D; Galley, Chad R; Meidam, Jeroen; Broeck, Chris Van Den

    2016-01-01

    Gravitational-wave observations of binary neutron star systems can provide information about the masses, spins, and structure of neutron stars. However, this requires accurate and computationally efficient waveform models that take <1s to evaluate for use in Bayesian parameter estimation codes that perform 10^7 - 10^8 waveform evaluations. We present a surrogate model of a nonspinning effective-one-body waveform model with l = 2, 3, and 4 tidal multipole moments that reproduces waveforms of binary neutron star numerical simulations up to merger. The surrogate is built from compact sets of effective-one-body waveform amplitude and phase data that each form a reduced basis. We find that 12 amplitude and 7 phase basis elements are sufficient to reconstruct any binary neutron star waveform with a starting frequency of 10Hz. The surrogate has maximum errors of 3.8% in amplitude (0.04% excluding the last 100M before merger) and 0.043 radians in phase. The version implemented in the LIGO Algorithm Library takes ~...
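The reduced-basis construction underlying such surrogates can be sketched with a toy family of signals: repeatedly add to the basis the family member that the current basis represents worst, until the worst projection error drops below tolerance. The sinusoid family, sampling, and tolerance below are illustrative stand-ins for the EOB amplitude/phase data sets used in the paper.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

# Toy "waveform family": sinusoids parameterized by frequency, 21 members.
ts = [i / 200 for i in range(200)]
family = [[math.sin(2 * math.pi * (1 + 0.05 * k) * t) for t in ts]
          for k in range(21)]

def proj_error(w, basis):
    """Norm of the residual after projecting w onto the orthonormal basis."""
    r = w[:]
    for e in basis:
        c = dot(r, e)
        r = [ri - c * ei for ri, ei in zip(r, e)]
    return norm(r)

# Greedy reduced-basis construction (tolerance ~1% of the signal norm ~10).
basis, tol = [], 0.1
while True:
    errs = [proj_error(w, basis) for w in family]
    worst = max(range(len(family)), key=lambda i: errs[i])
    if errs[worst] < tol:
        break
    r = family[worst][:]
    for e in basis:          # Gram-Schmidt orthogonalize, then normalize
        c = dot(r, e)
        r = [ri - c * ei for ri, ei in zip(r, e)]
    n = norm(r)
    basis.append([ri / n for ri in r])

print(len(basis), "basis elements represent", len(family), "waveforms")
```

Because neighboring family members are highly correlated, far fewer basis elements than family members are needed, which is the same compression that lets 12 amplitude and 7 phase elements cover the full binary neutron star waveform space.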

  8. Probabilistic Fatigue Damage Prognosis Using a Surrogate Model Trained Via 3D Finite Element Analysis

    Science.gov (United States)

    Leser, Patrick E.; Hochhalter, Jacob D.; Newman, John A.; Leser, William P.; Warner, James E.; Wawrzynek, Paul A.; Yuan, Fuh-Gwo

    2015-01-01

    Utilizing inverse uncertainty quantification techniques, structural health monitoring can be integrated with damage progression models to form probabilistic predictions of a structure's remaining useful life. However, damage evolution in realistic structures is physically complex. Accurately representing this behavior requires high-fidelity models which are typically computationally prohibitive. In the present work, a high-fidelity finite element model is represented by a surrogate model, reducing computation times. The new approach is used with damage diagnosis data to form a probabilistic prediction of remaining useful life for a test specimen under mixed-mode conditions.

  9. Surrogate-based Multi-Objective Optimization and Uncertainty Quantification Methods for Large, Complex Geophysical Models

    Science.gov (United States)

    Gong, Wei; Duan, Qingyun

    2016-04-01

Parameterization schemes have a significant influence on the simulation ability of large, complex dynamic geophysical models, such as distributed hydrological models, land surface models, and weather and climate models. With growing knowledge of the physical processes involved, these dynamic geophysical models include more and more processes and produce more and more output variables. Consequently, the parameter optimization / uncertainty quantification algorithms should also be multi-objective compatible. Although such algorithms have long been available, they usually require a large number of model runs and are therefore computationally expensive for large, complex dynamic models. In this research, we have developed a surrogate-based multi-objective optimization method (MO-ASMO) and a surrogate-based Markov chain Monte Carlo method (MC-ASMO) for uncertainty quantification of these expensive dynamic models. The aim of MO-ASMO and MC-ASMO is to reduce the total number of model runs with an appropriate adaptive sampling strategy assisted by surrogate modeling. Moreover, we developed a method that can steer the search process with the help of a prior parameterization scheme derived from the physical processes involved, so that all of the objectives can be improved simultaneously. The proposed algorithms have been evaluated with test problems and a land surface model - the Common Land Model (CoLM). The results demonstrated their effectiveness and efficiency.

  10. Application of Design of Experiments and Surrogate Modeling within the NASA Advanced Concepts Office, Earth-to-Orbit Design Process

    Science.gov (United States)

    Zwack, Mathew R.; Dees, Patrick D.; Holt, James B.

    2016-01-01

    Decisions made during early conceptual design have a large impact upon the expected life-cycle cost (LCC) of a new program. It is widely accepted that up to 80% of such cost is committed during these early design phases. Therefore, to help minimize LCC, decisions made during conceptual design must be based upon as much information as possible. To aid in the decision making for new launch vehicle programs, the Advanced Concepts Office (ACO) at NASA Marshall Space Flight Center (MSFC) provides rapid turnaround pre-phase A and phase A concept definition studies. The ACO team utilizes a proven set of tools to provide customers with a full vehicle mass breakdown to tertiary subsystems, preliminary structural sizing based upon worst-case flight loads, and trajectory optimization to quantify integrated vehicle performance for a given mission. Although the team provides rapid turnaround for single vehicle concepts, the scope of the trade space can be limited due to analyst availability and the manpower requirements for manual execution of the analysis tools. In order to enable exploration of a broader design space, the ACO team has implemented an advanced design methods (ADM) based approach. This approach applies the concepts of design of experiments (DOE) and surrogate modeling to more exhaustively explore the trade space and provide the customer with additional design information to inform decision making. This paper will first discuss the automation of the ACO tool set, which represents a majority of the development effort. In order to fit a surrogate model within tolerable error bounds a number of DOE cases are needed. This number will scale with the number of variable parameters desired and the complexity of the system's response to those variables. For all but the smallest design spaces, the number of cases required cannot be produced within an acceptable timeframe using a manual process. Therefore, automation of the tools was a key enabler for the successful
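A standard DOE construction for surrogate fitting is Latin hypercube sampling: each variable's range is split into equal-probability bins and each bin is hit exactly once, so n cases stratify every dimension. The sketch below is a generic LHS generator, not the ACO team's tooling; the case counts are illustrative.

```python
import random

random.seed(0)

def latin_hypercube(n_samples, n_vars):
    """One stratified sample per equal-probability bin in every dimension."""
    design = []
    for _ in range(n_vars):
        # One point in each of n_samples bins of [0, 1), then shuffled so
        # the pairing across dimensions is random.
        col = [(i + random.random()) / n_samples for i in range(n_samples)]
        random.shuffle(col)
        design.append(col)
    return list(zip(*design))  # rows = DOE cases in the unit hypercube

cases = latin_hypercube(8, 3)   # e.g. 8 vehicle-concept cases, 3 variables
for c in cases:
    print(tuple(round(v, 2) for v in c))
```

Each unit-cube row would then be scaled to the physical variable ranges and run through the automated tool chain; the resulting input/output pairs are what the surrogate model is fit to.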

  11. Efficient Calibration/Uncertainty Analysis Using Paired Complex/Surrogate Models.

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2015-01-01

    The use of detailed groundwater models to simulate complex environmental processes can be hampered by (1) long run-times and (2) a penchant for solution convergence problems. Collectively, these can undermine the ability of a modeler to reduce and quantify predictive uncertainty, and therefore limit the use of such detailed models in the decision-making context. We explain and demonstrate a novel approach to calibration and the exploration of posterior predictive uncertainty, of a complex model, that can overcome these problems in many modelling contexts. The methodology relies on conjunctive use of a simplified surrogate version of the complex model in combination with the complex model itself. The methodology employs gradient-based subspace analysis and is thus readily adapted for use in highly parameterized contexts. In its most basic form, one or more surrogate models are used for calculation of the partial derivatives that collectively comprise the Jacobian matrix. Meanwhile, testing of parameter upgrades and the making of predictions is done by the original complex model. The methodology is demonstrated using a density-dependent seawater intrusion model in which the model domain is characterized by a heterogeneous distribution of hydraulic conductivity.
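
A minimal sketch of the paired complex/surrogate idea: derivatives come from a cheap surrogate, while residuals and the testing of parameter upgrades always use the complex model. Everything here (the two toy models, two parameters, three observations) is invented for illustration; the actual methodology operates on highly parameterized groundwater models via gradient-based subspace analysis.

```python
import numpy as np

def complex_model(p):
    """Stand-in for the long-running simulator (2 parameters -> 3 observations)."""
    return np.array([p[0] ** 2 + p[1], np.sin(p[0]) * p[1], p[0] - 0.5 * p[1]])

def surrogate_model(p):
    """Cheap, slightly biased approximation used only to compute derivatives."""
    return complex_model(p) + 0.01 * np.array([p[1], -p[0], 0.0])

def surrogate_jacobian(p, h=1e-6):
    """Finite-difference Jacobian assembled from surrogate runs only."""
    J = np.empty((3, 2))
    for j in range(2):
        dp = np.zeros(2)
        dp[j] = h
        J[:, j] = (surrogate_model(p + dp) - surrogate_model(p - dp)) / (2 * h)
    return J

obs = complex_model(np.array([1.2, 0.7]))   # synthetic "field" observations
p = np.array([0.5, 0.5])                    # initial parameter estimate

for _ in range(20):
    r = obs - complex_model(p)              # residuals: always the complex model
    step, *_ = np.linalg.lstsq(surrogate_jacobian(p), r, rcond=None)
    t = 1.0                                 # test the upgrade with the complex model
    while t > 1e-3 and np.linalg.norm(obs - complex_model(p + t * step)) >= np.linalg.norm(r):
        t *= 0.5
    p = p + t * step
```

The payoff is in the call counts: each Gauss-Newton iteration spends its many model runs (the Jacobian columns) on the cheap surrogate and only a handful on the expensive model.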

  12. A predictive fitness model for influenza

    Science.gov (United States)

    Łuksza, Marta; Lässig, Michael

    2014-03-01

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
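
The core prediction step, propagating strain frequencies by their inferred fitness, can be sketched in a few lines. The frequencies and log-fitness values below are made up for illustration; they are not the paper's influenza data.

```python
import numpy as np

# Strain frequencies in year t and an inferred log-fitness for each strain
# (adaptive epitope gains minus deleterious mutational load).
freq_t = np.array([0.50, 0.30, 0.15, 0.05])
log_fitness = np.array([-0.2, 0.1, 0.6, 1.0])

# Discrete-generation selection: x_i(t+1) is proportional to x_i(t) * exp(f_i)
unnorm = freq_t * np.exp(log_fitness)
freq_next = unnorm / unnorm.sum()
```

Under this update, high-fitness strains gain frequency even from a small base, which is the property that makes the model useful for anticipating which clades will dominate the next season.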

  13. Quality assessment of coarse models and surrogates for space mapping optimization

    DEFF Research Database (Denmark)

    Koziel, Slawomir; Bandler, John W.; Madsen, Kaj

    2008-01-01

    in lack of convergence. Although similarity requirements can be expressed with proper analytical conditions, it is difficult to verify such conditions beforehand for real-world engineering optimization problems. In this paper, we provide methods of assessing the quality of coarse/surrogate models....... These methods can be used to predict whether a given model might be successfully used in space mapping optimization, to compare the quality of different coarse models, or to choose the proper type of space mapping which would be suitable to a given engineering design problem. Our quality estimation methods...

  14. Development of surrogate correlation models to predict trace organic contaminant oxidation and microbial inactivation during ozonation.

    Science.gov (United States)

    Gerrity, Daniel; Gamage, Sujanie; Jones, Darryl; Korshin, Gregory V; Lee, Yunho; Pisarenko, Aleksey; Trenholm, Rebecca A; von Gunten, Urs; Wert, Eric C; Snyder, Shane A

    2012-12-01

    The performance of ozonation in wastewater depends on water quality and the ability to form hydroxyl radicals (·OH) to meet disinfection or contaminant transformation objectives. Since there are no on-line methods to assess ozone and ·OH exposure in wastewater, many agencies are now embracing indicator frameworks and surrogate monitoring for regulatory compliance. Two of the most promising surrogate parameters for ozone-based treatment of secondary and tertiary wastewater effluents are differential UV(254) absorbance (ΔUV(254)) and total fluorescence (ΔTF). In the current study, empirical correlations for ΔUV(254) and ΔTF were developed for the oxidation of 18 trace organic contaminants (TOrCs), including 1,4-dioxane, atenolol, atrazine, bisphenol A, carbamazepine, diclofenac, gemfibrozil, ibuprofen, meprobamate, naproxen, N,N-diethyl-meta-toluamide (DEET), para-chlorobenzoic acid (pCBA), phenytoin, primidone, sulfamethoxazole, triclosan, trimethoprim, and tris-(2-chloroethyl)-phosphate (TCEP) (R(2) = 0.50-0.83) and the inactivation of three microbial surrogates, including Escherichia coli, MS2, and Bacillus subtilis spores (R(2) = 0.46-0.78). Nine wastewaters were tested in laboratory systems, and eight wastewaters were evaluated at pilot- and full-scale. A predictive model for OH exposure based on ΔUV(254) or ΔTF was also proposed.

  15. Statistical Surrogate Models for Estimating Probability of High-Consequence Climate Change

    Science.gov (United States)

    Field, R.; Constantine, P.; Boslough, M.

    2011-12-01

    We have posed the climate change problem in a framework similar to that used in safety engineering, by acknowledging that probabilistic risk assessments focused on low-probability, high-consequence climate events are perhaps more appropriate than studies focused simply on best estimates. To properly explore the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We have developed specialized statistical surrogate models (SSMs) that can be used to make predictions about the tails of the associated probability distributions. A SSM is different than a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field, that is, a random variable for every fixed location in the atmosphere at all times. The SSM can be calibrated to available spatial and temporal data from existing climate databases, or to a collection of outputs from general circulation models. Because of its reduced size and complexity, the realization of a large number of independent model outputs from a SSM becomes computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework was also developed to provide quantitative measures of confidence, via Bayesian credible intervals, to assess these risks. To illustrate the use of the SSM, we considered two collections of NCAR CCSM 3.0 output data. The first collection corresponds to average December surface temperature for years 1990-1999 based on a collection of 8 different model runs obtained from the Program for Climate Model Diagnosis and Intercomparison (PCMDI). We calibrated the surrogate model to the available model data and make various point predictions. We also analyzed average precipitation rate in June, July, and August over a 54-year period assuming a cyclic Y2K ocean model. We

  16. Modeling and Fitting Exoplanet Transit Light Curves

    Science.gov (United States)

    Millholland, Sarah; Ruch, G. T.

    2013-01-01

    We present a numerical model along with an original fitting routine for the analysis of transiting extra-solar planet light curves. Our light curve model is unique in several ways from other available transit models, such as the analytic eclipse formulae of Mandel & Agol (2002) and Giménez (2006), the modified Eclipsing Binary Orbit Program (EBOP) model implemented in Southworth’s JKTEBOP code (Popper & Etzel 1981; Southworth et al. 2004), or the transit model developed as a part of the EXOFAST fitting suite (Eastman et al. in prep.). Our model employs Keplerian orbital dynamics about the system’s center of mass to properly account for stellar wobble and orbital eccentricity, uses a unique analytic solution derived from Kepler’s Second Law to calculate the projected distance between the centers of the star and planet, and calculates the effect of limb darkening using a simple technique that is different from the commonly used eclipse formulae. We have also devised a unique Monte Carlo style optimization routine for fitting the light curve model to observed transits. We demonstrate that, while the effect of stellar wobble on transit light curves is generally small, it becomes significant as the planet to stellar mass ratio increases and the semi-major axes of the orbits decrease. We also illustrate the appreciable effects of orbital ellipticity on the light curve and the necessity of accounting for its impacts for accurate modeling. We show that our simple limb darkening calculations are as accurate as the analytic equations of Mandel & Agol (2002). Although our Monte Carlo fitting algorithm is not as mathematically rigorous as the Markov Chain Monte Carlo based algorithms most often used to determine exoplanetary system parameters, we show that it is straightforward and returns reliable results. Finally, we show that analyses performed with our model and optimization routine compare favorably with exoplanet characterizations published by groups such as the

  17. Performance comparison of several response surface surrogate models and ensemble methods for water injection optimization under uncertainty

    Science.gov (United States)

    Babaei, Masoud; Pan, Indranil

    2016-06-01

    In this paper we defined a relatively complex reservoir engineering optimization problem: maximizing the net present value of hydrocarbon production in a water-flooding process by controlling the water injection rates over multiple control periods. We assessed the performance of a number of response surface surrogate models and of their ensembles, combined by Dempster-Shafer theory and Weighted Averaged Surrogates as found in the contemporary literature. Most of these ensemble methods are based on the philosophy that multiple weak learners can be leveraged to obtain one strong learner that is better than the individual weak ones. Even though these techniques have been shown to work well for test bench functions, we found that they did not offer a considerable improvement over an individually used cubic radial basis function surrogate model. Our simulations on two- and three-dimensional cases, with a varying number of optimization variables, suggest that the cubic radial basis function surrogate model is reliable, outperforms Kriging surrogates and multivariate adaptive regression splines, and, when it does not outperform the ensemble surrogate models, is rarely outperformed by them.
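
For reference, a cubic radial basis function surrogate of the kind the study favors can be written compactly: an interpolant with kernel phi(r) = r^3 plus the linear polynomial tail that cubic RBFs require for solvability. This is a generic sketch on toy data, not the authors' implementation.

```python
import numpy as np

def fit_cubic_rbf(X, y):
    """Fit a cubic RBF interpolant phi(r) = r^3 with a linear polynomial tail."""
    n, d = X.shape
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    P = np.hstack([np.ones((n, 1)), X])                      # 1, x1, ..., xd
    A = np.block([[r ** 3, P],
                  [P.T, np.zeros((d + 1, d + 1))]])          # saddle-point system
    w = np.linalg.solve(A, np.concatenate([y, np.zeros(d + 1)]))
    return w[:n], w[n:]                                      # RBF weights, poly coefs

def eval_cubic_rbf(X_train, lam, beta, X_new):
    r = np.linalg.norm(X_new[:, None, :] - X_train[None, :, :], axis=-1)
    P = np.hstack([np.ones((len(X_new), 1)), X_new])
    return r ** 3 @ lam + P @ beta

# Example: interpolate a toy response at 20 scattered design points
rng = np.random.default_rng(0)
X_train = rng.random((20, 2))
y_train = np.sin(3 * X_train[:, 0]) + X_train[:, 1]
lam, beta = fit_cubic_rbf(X_train, y_train)
```

By construction the surrogate reproduces the training responses exactly, which is the behavior expected of an interpolating surrogate in optimization loops.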

  18. A surrogate modelling framework for the optimal deployment of check dams in erosion-prone areas

    Science.gov (United States)

    Pal, Debasish; Tang, Honglei; Galelli, Stefano; Ran, Qihua

    2017-04-01

    Despite the great progress made in recent decades, the control of soil erosion remains a key challenge for land-use planning. The nonlinear interactions between hydrologic and morphologic processes, together with the increase in extreme rainfall events predicted under climate change, create new areas of concern and leave the problem unresolved. Spatially distributed models are a useful tool for modelling such processes and for assessing the effect of large-scale engineering measures, but their computational requirements prevent the resolution of problems requiring several model evaluations, such as sensitivity analysis or optimization. In this study, we tackle this problem by developing a surrogate modelling framework for the optimal deployment of check dams. The framework combines a spatially distributed model (WaTEM/SEDEM), a multi-objective evolutionary algorithm, and artificial neural networks as the surrogate model. We test the framework on the Shejiagou catchment, a 14 km2 area located in the Loess Plateau, China, where we optimize check dam locations by maximizing the mass of sediment retained in the catchment and minimizing the total number of dams. Preliminary results show that the performance of the existing check dam system could be improved by changing the dam locations.

  19. Multi-model polynomial chaos surrogate dictionary for Bayesian inference in elasticity problems

    KAUST Repository

    Contreras, Andres A.

    2016-09-19

    A method is presented for inferring the presence of an inclusion inside a domain; the proposed approach is suitable for use in a diagnostic device with low computational power. Specifically, we use the Bayesian framework for the inference of stiff inclusions embedded in a soft matrix, mimicking tumors in soft tissues. We rely on a polynomial chaos (PC) surrogate to accelerate the inference process. The PC surrogate predicts the dependence of the displacement field on the random elastic moduli of the materials, and is computed by means of the stochastic Galerkin (SG) projection method. Moreover, the inclusion's geometry is assumed to be unknown, and this is addressed by using a dictionary consisting of several geometrical models with different configurations. A model selection approach based on the evidence provided by the data (Bayes factors) is used to discriminate among the different geometrical models and select the most suitable one. The idea of using a dictionary of pre-computed geometrical models helps to keep the computational cost of the inference process very low, as most of the computational burden is carried out off-line in the resolution of the SG problems. Numerical tests are used to validate the methodology, assess its performance, and analyze the robustness to model errors. © 2016 Elsevier Ltd
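
A non-intrusive sketch of a PC surrogate in one random dimension: regress outputs of a toy forward model on a probabilists' Hermite basis. The paper itself computes its surrogates by intrusive stochastic Galerkin projection and in more dimensions; the model and all values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def hermite_basis(xi, order):
    """Probabilists' Hermite polynomials He_0..He_order via the recurrence
    He_{k+1}(x) = x * He_k(x) - k * He_{k-1}(x)."""
    H = [np.ones_like(xi), xi]
    for k in range(1, order):
        H.append(xi * H[k] - k * H[k - 1])
    return np.column_stack(H[: order + 1])

def forward_model(xi):
    """Stand-in for a forward solve depending on a random elastic modulus."""
    return np.exp(0.3 * xi)

# Non-intrusive PCE: least-squares regression of model outputs on the basis
xi = rng.standard_normal(200)
Psi = hermite_basis(xi, 4)
c, *_ = np.linalg.lstsq(Psi, forward_model(xi), rcond=None)

def surrogate(x):
    """Cheap replacement for forward_model, e.g. inside an MCMC loop."""
    return hermite_basis(np.atleast_1d(np.asarray(x, dtype=float)), 4) @ c
```

For exp(a*xi) the exact PC coefficients are c_k = exp(a^2/2) * a^k / k!, so the fitted c[0] should sit near exp(0.045), a useful sanity check on the regression.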

  20. Model-based estimation of individual fitness

    Science.gov (United States)

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw & Caswell, 1996).

  1. Seeing Perfectly Fitting Factor Models That Are Causally Misspecified: Understanding That Close-Fitting Models Can Be Worse

    Science.gov (United States)

    Hayduk, Leslie

    2014-01-01

    Researchers using factor analysis tend to dismiss the significant ill fit of factor models by presuming that if their factor model is close-to-fitting, it is probably close to being properly causally specified. Close fit may indeed result from a model being close to properly causally specified, but close-fitting factor models can also be seriously…

  3. Modeling of NO sensitization of IC engines surrogate fuels auto-ignition and combustion

    CERN Document Server

    Anderlohr, Jörg; Bounaceur, Roda; Battin-Leclerc, Frédérique

    2009-01-01

    This paper presents a new chemical kinetic model developed for the simulation of auto-ignition and combustion of engine surrogate fuel mixtures sensitized by the presence of NOx. The chemical mechanism is based on the PRF auto-ignition model (n-heptane/iso-octane) of Buda et al. [1] and the NO/n-butane/n-pentane model of Glaude et al. [2]. The latter mechanism has been taken as a reference for the reactions of NOx with larger alkanes (n-heptane, iso-octane). A coherent two-component engine fuel surrogate mechanism has been generated which accounts for the influence of NOx on auto-ignition. The mechanism has been validated for temperatures between 700 K and 1100 K and pressures between 1 and 10 atm, covering the temperature and pressure ranges characteristic of engine post-oxidation thermodynamic conditions. Experiments used for validation include jet-stirred reactor measurements of species evolution as a function of temperature, as well as diesel HCCI engine experiments for auto-ignition delay time measurements...

  4. Near Real-Time Probabilistic Damage Diagnosis Using Surrogate Modeling and High Performance Computing

    Science.gov (United States)

    Warner, James E.; Zubair, Mohammad; Ranjan, Desh

    2017-01-01

    This work investigates novel approaches to probabilistic damage diagnosis that utilize surrogate modeling and high performance computing (HPC) to achieve substantial computational speedup. Motivated by Digital Twin, a structural health management (SHM) paradigm that integrates vehicle-specific characteristics with continual in-situ damage diagnosis and prognosis, the methods studied herein yield near real-time damage assessments that could enable monitoring of a vehicle's health while it is operating (i.e. online SHM). High-fidelity modeling and uncertainty quantification (UQ), both critical to Digital Twin, are incorporated using finite element method simulations and Bayesian inference, respectively. The crux of the proposed Bayesian diagnosis methods, however, is the reformulation of the numerical sampling algorithms (e.g. Markov chain Monte Carlo) used to generate the resulting probabilistic damage estimates. To this end, three distinct methods are demonstrated for rapid sampling that utilize surrogate modeling and exploit various degrees of parallelism for leveraging HPC. The accuracy and computational efficiency of the methods are compared on the problem of strain-based crack identification in thin plates. While each approach has inherent problem-specific strengths and weaknesses, all approaches are shown to provide accurate probabilistic damage diagnoses and several orders of magnitude computational speedup relative to a baseline Bayesian diagnosis implementation.
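
The reformulated samplers are specific to this work, but the baseline pattern, random-walk Metropolis in which each likelihood evaluation calls a surrogate instead of a finite element solve, looks roughly as follows. The strain response, sensor count, and all constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def surrogate_strain(damage):
    """Cheap surrogate for the FEM strain response at 3 sensors vs. crack size."""
    return np.array([1.0, 2.0, 0.5]) * damage + np.array([0.1, 0.0, 0.2]) * damage ** 2

true_damage = 0.8
sigma = 0.05
data = surrogate_strain(true_damage) + rng.normal(0.0, sigma, 3)  # synthetic sensors

def log_post(d):
    if not 0.0 < d < 2.0:              # uniform prior on the damage parameter
        return -np.inf
    r = data - surrogate_strain(d)     # the surrogate replaces the FEM solve
    return -0.5 * np.sum(r ** 2) / sigma ** 2

# Random-walk Metropolis: each step costs one surrogate call, not one FEM run
samples, d = [], 0.5
lp = log_post(d)
for _ in range(5000):
    d_new = d + 0.05 * rng.standard_normal()
    lp_new = log_post(d_new)
    if np.log(rng.random()) < lp_new - lp:
        d, lp = d_new, lp_new
    samples.append(d)
post = np.array(samples[1000:])        # discard burn-in
```

The posterior samples in `post` summarize the probabilistic damage estimate; the parallelization strategies in the paper speed up exactly this kind of loop.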

  5. An Integrated Optimization Design Method Based on Surrogate Modeling Applied to Diverging Duct Design

    Science.gov (United States)

    Hanan, Lu; Qiushi, Li; Shaobin, Li

    2016-12-01

    This paper presents an integrated optimization design method in which uniform design, response surface methodology and genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to acquire the optimal solution subject to constraints. The method has been applied to the optimization design of an axisymmetric diverging duct, dealing with three design variables including one qualitative variable and two quantitative variables. The method performs well in improving the duct's aerodynamic performance and can also be applied to wider fields of mechanical design; it can serve as a useful tool for engineering designers by reducing design time and computational cost.

  6. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate-model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of the optimization. However, earlier algorithms have drawbacks: the optimization loop consists of three phases and relies on empirical parameters. We propose a unified sampling criterion that simplifies the algorithm and achieves the global optimum of constrained problems without any empirical parameters. It is able to select points located in a feasible region with high model uncertainty, as well as points along the constraint boundary at the lowest objective value. The mean squared error determines which criterion is more dominant, the infill sampling criterion or the boundary sampling criterion. The method also guarantees the accuracy of the surrogate model, because the sample points are not clustered within extremely small regions as in super-EGO. The performance of the proposed method, including the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
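
Infill sampling criteria of this kind are usually built on the expected improvement used by EGO-type methods. A minimal, generic sketch for a minimization problem (not the paper's proposed criterion) is:

```python
import math

def expected_improvement(mu, sigma, y_best):
    """EI for minimization: E[max(y_best - Y, 0)] with Y ~ N(mu, sigma^2),
    where (mu, sigma) are the surrogate's prediction and uncertainty."""
    if sigma <= 0.0:
        return max(y_best - mu, 0.0)   # no uncertainty: plain improvement
    z = (y_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (y_best - mu) * cdf + sigma * pdf

# A highly uncertain point can beat one with a better predicted mean:
ei_exploit = expected_improvement(mu=0.9, sigma=0.01, y_best=1.0)
ei_explore = expected_improvement(mu=1.1, sigma=0.5, y_best=1.0)
```

The closed form balances exploitation (the first term rewards a low predicted mean) against exploration (the second term rewards high model uncertainty), which is the trade-off infill criteria automate.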

  7. Estimation of k-ε parameters using surrogate models and jet-in-crossflow data

    Energy Technology Data Exchange (ETDEWEB)

    Lefantzi, Sophia [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Arunajatesan, Srinivasan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Dechant, Lawrence [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2014-11-01

    We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds Averaged Navier-Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDFs), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently, a quick-running surrogate is used instead of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-ε parameters (Cμ, Cε2, Cε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data. Thus the primary reason for poor predictive skill of RANS, when using nominal

  8. Estimation of k-ε parameters using surrogate models and jet-in-crossflow data.

    Energy Technology Data Exchange (ETDEWEB)

    Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan; Dechant, Lawrence

    2015-02-01

    We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds Averaged Navier-Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDFs), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently, a quick-running surrogate is used instead of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-ε parameters (Cμ, Cε2, Cε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data. Thus the primary reason for poor predictive skill of RANS, when using nominal values of the turbulence model

  9. Hydrodynamic surrogate models for bio-inspired micro-swimming robots

    CERN Document Server

    Tabak, Ahmet Fatih

    2013-01-01

    Research on untethered micro-swimming robots is growing fast owing to their potential impact on minimally invasive medical procedures. Candidate propulsion mechanisms for such robots are based on the flagellar mechanisms of microorganisms, such as rotating rigid helices and traveling plane waves on flexible rods. For the design and control of swimming robots, accurate real-time models are necessary to compute trajectories, velocities and the hydrodynamic forces acting on the robots. Resistive force theory (RFT) provides an excellent framework for the development of real-time six degrees-of-freedom surrogate models for design optimization and control. However, the accuracy of RFT-based models depends strongly on hydrodynamic interactions. Here, we introduce interaction coefficients that only multiply the body resistance coefficients, with no modification to the local resistance coefficients on the tail. Interaction coefficients are obtained for a single specimen of Vibrio Algino reported in the literature, and used in the RFT model for compariso...

  10. Modeling of autoignition and NO sensitization for the oxidation of IC engine surrogate fuels

    CERN Document Server

    Anderlohr, Jörg; Da Cruz, A Pires; Battin-Leclerc, Frédérique; 10.1016/j.combustflame.2008.09.009

    2009-01-01

    This paper presents an approach for modeling, with one single kinetic mechanism, the chemistry of the autoignition and combustion processes inside an internal combustion engine, as well as the chemical kinetics governing the post-oxidation of unburned hydrocarbons in engine exhaust gases. To this end, a new kinetic model was developed, valid over a wide range of temperatures including the negative temperature coefficient regime. The model simulates the autoignition and oxidation of engine surrogate fuels composed of n-heptane, iso-octane and toluene, sensitized by the presence of nitric oxides. The new model was obtained from previously published mechanisms for the oxidation of alkanes and toluene, to which the coupling reactions describing interactions between hydrocarbons and NOx were added. The mechanism was validated against a wide range of experimental data obtained in jet-stirred reactors, rapid compression machines, shock tubes and homogeneous charge compression ignition engines. Flow rate and sensi...

  11. Evaluation of model fit in nonlinear multilevel structural equation modeling

    Directory of Open Access Journals (Sweden)

    Karin eSchermelleh-Engel

    2014-03-01

    Evaluating model fit in nonlinear multilevel structural equation models (MSEM) presents a challenge as no adequate test statistic is available. Nevertheless, using a product indicator approach, a likelihood ratio test for linear models is provided which may also be useful for nonlinear MSEM. The main problem with nonlinear models is that product variables are non-normally distributed. Although robust test statistics have been developed for linear SEM to ensure valid results under the condition of non-normality, they have not yet been investigated for nonlinear MSEM. In a Monte Carlo study, the performance of the robust likelihood ratio test was investigated for models with single-level latent interaction effects using the unconstrained product indicator approach. As overall model fit evaluation has a potential limitation in detecting the lack of fit at a single level even for linear models, level-specific model fit evaluation was also investigated using partially saturated models. Four population models were considered: a model with interaction effects at both levels, a model with an interaction effect at the within-group level, a model with an interaction effect at the between-group level, and a model with no interaction effects at either level. For these models, the number of groups, predictor correlation, and model misspecification were varied. The results indicate that the robust test statistic performed sufficiently well. Advantages of level-specific model fit evaluation for the detection of model misfit are demonstrated.

  12. Evaluation of model fit in nonlinear multilevel structural equation modeling.

    Science.gov (United States)

    Schermelleh-Engel, Karin; Kerwer, Martin; Klein, Andreas G

    2014-01-01

    Evaluating model fit in nonlinear multilevel structural equation models (MSEM) presents a challenge as no adequate test statistic is available. Nevertheless, using a product indicator approach, a likelihood ratio test for linear models is provided which may also be useful for nonlinear MSEM. The main problem with nonlinear models is that product variables are non-normally distributed. Although robust test statistics have been developed for linear SEM to ensure valid results under the condition of non-normality, they have not yet been investigated for nonlinear MSEM. In a Monte Carlo study, the performance of the robust likelihood ratio test was investigated for models with single-level latent interaction effects using the unconstrained product indicator approach. As overall model fit evaluation has a potential limitation in detecting the lack of fit at a single level even for linear models, level-specific model fit evaluation was also investigated using partially saturated models. Four population models were considered: a model with interaction effects at both levels, a model with an interaction effect at the within-group level, a model with an interaction effect at the between-group level, and a model with no interaction effects at either level. For these models, the number of groups, predictor correlation, and model misspecification were varied. The results indicate that the robust test statistic performed sufficiently well. Advantages of level-specific model fit evaluation for the detection of model misfit are demonstrated.

  13. Design optimization of stent and its dilatation balloon using kriging surrogate model.

    Science.gov (United States)

    Li, Hongxia; Liu, Tao; Wang, Minjie; Zhao, Danyang; Qiao, Aike; Wang, Xue; Gu, Junfeng; Li, Zheng; Zhu, Bao

    2017-01-11

    Although stents have achieved great success in treating cardiovascular disease, their performance is undermined by in-stent restenosis and long-term fatigue failure. The geometry of a stent affects its service performance and ultimately its fatigue life. In addition, an improper balloon length leads to transient mechanical injury to the vessel wall and in-stent restenosis. The conventional approach of optimizing a stent and its dilatation balloon by comparing several designs and choosing the best one cannot find the global optimum in the design space. In this study, an adaptive optimization method based on a Kriging surrogate model was proposed to optimize the structure of the stent and the length of the stent dilatation balloon, so as to prolong stent service life and improve stent performance. A finite element simulation-based optimization method combined with a Kriging surrogate model is proposed to optimize the geometry of the stent and the length of the dilatation balloon step by step. The Kriging surrogate model, coupled with a design of experiments method, is employed to construct the approximate functional relationship between optimization objectives and design variables. A modified rectangular grid is used to select initial training samples in the design space, and an expected improvement function is used to balance local and global searches to find the global optimum. The finite element method is adopted to simulate the free expansion of the balloon-expandable stent and the expansion of the stent in a stenotic artery. The well-known Goodman diagram is used for stent fatigue life prediction, while the dogboning effect is used to measure stent expansion performance. As real design cases, a diamond-shaped stent and an sv-shaped stent were studied to demonstrate how the proposed method can be harnessed to computationally design and refine stent fatigue life and expansion performance. The fatigue life and expansion performance of both the diamond-shaped stent and sv
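
    The adaptive Kriging-plus-expected-improvement loop described above can be sketched in one dimension. This is an illustrative sketch only: a cheap analytic function stands in for the finite element model, and the kernel hyperparameters are fixed rather than tuned.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def kriging_fit(X, y, length=0.3, noise=1e-6):
    """Simple Kriging (Gaussian process) model with a fixed-length-scale kernel."""
    K = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2 * length ** 2))
    L = np.linalg.cholesky(K + noise * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return X, L, alpha, length

def kriging_predict(model, x):
    X, L, alpha, length = model
    k = np.exp(-((x - X) ** 2) / (2 * length ** 2))
    v = np.linalg.solve(L, k)
    return k @ alpha, sqrt(max(1.0 - v @ v, 1e-12))

def expected_improvement(model, x, y_best):
    """EI for minimisation: trades off low predicted mean against high variance."""
    mu, s = kriging_predict(model, x)
    z = (y_best - mu) / s
    Phi = 0.5 * (1.0 + erf(z / sqrt(2)))
    phi = exp(-0.5 * z * z) / sqrt(2 * pi)
    return (y_best - mu) * Phi + s * phi

f = lambda x: (x - 0.65) ** 2 + 0.1 * np.sin(12 * x)  # stand-in for the FE model
X = np.array([0.0, 0.25, 0.5, 0.75, 1.0])             # initial training design
y = f(X)

for _ in range(10):                                   # adaptive infill loop
    model = kriging_fit(X, y)
    grid = np.linspace(0.0, 1.0, 201)
    ei = [expected_improvement(model, x, y.min()) for x in grid]
    x_new = grid[int(np.argmax(ei))]
    if np.min(np.abs(X - x_new)) < 1e-9:              # point already sampled: stop
        break
    X, y = np.append(X, x_new), np.append(y, f(x_new))

print(f"best design found: x = {X[y.argmin()]:.3f}, objective = {y.min():.4f}")
```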

  14. Optimum design of vortex generator elements using Kriging surrogate modelling and genetic algorithm

    Science.gov (United States)

    Neelakantan, Rithwik; Balu, Raman; Saji, Abhinav

    Vortex Generators (VGs) are small angled plates located in a spanwise fashion aft of the leading edge of an aircraft wing. They control airflow over the upper surface of the wing by creating vortices which energise the boundary layer. The parameters considered for the optimisation study of the VGs are height, orientation angle and location along the chord, in a low subsonic flow over a NACA0012 airfoil. The objective function to be maximised is the L/D ratio of the airfoil. The design data are generated using the commercially available ANSYS FLUENT software and are modelled using a Kriging-based interpolator. This surrogate model is used along with genetic algorithm software to arrive at the optimum shape of the VGs. The results of this study will be confirmed with actual wind tunnel tests on scaled models.
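
    A minimal real-coded genetic algorithm of the kind used in such surrogate-based studies can be sketched as follows. The quadratic `surrogate_ld` function is a hypothetical stand-in for a Kriging model fitted to CFD samples, not actual aerodynamic data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical smooth surrogate of L/D over normalised VG parameters
# (height, angle, chordwise location in [0, 1]); a stand-in for a Kriging
# model fitted to ANSYS FLUENT samples.
def surrogate_ld(p):
    h, a, c = p
    return 20 - 30 * (h - 0.4) ** 2 - 25 * (a - 0.6) ** 2 - 20 * (c - 0.3) ** 2

def genetic_maximise(f, dim=3, pop=40, gens=60, sigma=0.1):
    """Minimal real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    P = rng.random((pop, dim))
    for _ in range(gens):
        fit = np.array([f(p) for p in P])
        # binary tournament selection of parents
        idx = rng.integers(0, pop, size=(pop, 2))
        parents = P[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # blend crossover with a random mate, then Gaussian mutation
        w = rng.random((pop, dim))
        children = w * parents + (1 - w) * parents[rng.permutation(pop)]
        children += rng.normal(scale=sigma, size=(pop, dim))
        P = np.clip(children, 0.0, 1.0)
        sigma *= 0.95                       # anneal mutation strength
    fit = np.array([f(p) for p in P])
    return P[fit.argmax()], fit.max()

best, ld = genetic_maximise(surrogate_ld)
print("optimum (h, a, c):", np.round(best, 2), " L/D:", round(ld, 2))
```

    Because the surrogate is cheap, the GA can afford thousands of evaluations that would be prohibitive on the CFD model itself.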

  15. Surrogate runner model for draft tube losses computation within a wide range of operating points

    Science.gov (United States)

    Susan-Resiga, R.; Muntean, S.; Ciocan, T.; de Colombel, T.; Leroy, P.

    2014-03-01

    We introduce a quasi two-dimensional (Q2D) methodology for assessing the swirling flow exiting the runner of hydraulic turbines at arbitrary operating points within a wide operating range. The Q2D model does not need actual runner computations, and as a result it represents a surrogate runner model for a priori assessment of the swirling flow ingested by the draft tube. The axial, radial and circumferential velocity components are computed on a conical section located immediately downstream of the runner blade trailing edge, then used as inlet conditions for regular draft tube computations. The main advantage of our model is that it allows the determination of the draft tube losses within the intended turbine operating range in the early design stages of a new or refurbished runner, thus providing a robust and systematic methodology to meet the optimal requirements for the flow at the runner outlet.

  16. A neural network construction method for surrogate modeling of physics-based analysis

    Science.gov (United States)

    Sung, Woong Je

    connection as a zero-weight connection, the potential contribution to training error reduction of any present or absent connection can readily be evaluated using the BP algorithm. Instead of being broken, the connections that contribute less remain frozen with constant weight values optimized up to that point, but they are excluded from further weight optimization until reselected. In this way, a selective weight optimization is executed only for the dynamically maintained pool of high-gradient connections. By identifying the rapidly changing weights and concentrating optimization resources on them, the learning process is accelerated without either a significant increase in computational cost or a need for re-training. This results in a more task-adapted network connection structure. Combined with another important criterion for the division of a neuron, which adds a new computational unit to a network, a highly fitted network can be grown out of a minimal random structure. This particular learning strategy belongs to a broader class of variable-connectivity learning schemes, and the devised algorithm has been named Optimal Brain Growth (OBG). The OBG algorithm has been tested on two canonical problems: a regression analysis using the Complicated Interaction Regression Function and a classification of the Two-Spiral Problem. A comparative study with conventional Multilayer Perceptrons (MLPs) consisting of single and double hidden layers shows that OBG is less sensitive to random initial conditions and generalizes better with only a minimal increase in computational time. This partially demonstrates that a variable-connectivity learning scheme has great potential to enhance computational efficiency and reduce the effort needed to select a proper network architecture.
To investigate the applicability of the OBG to more practical surrogate modeling tasks, the geometry-to-pressure mapping of a particular class of airfoils in the transonic flow regime has been sought using both the
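
    The selective weight optimization idea (update only a pool of high-gradient connections, freeze the rest) can be caricatured in a tiny NumPy network. This is a simplified sketch, not the OBG algorithm itself: the network is fixed-size and the pool is reselected every step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task standing in for a physics-based response surface
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1]

W1, b1 = rng.normal(scale=0.5, size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=16), 0.0

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, pred0 = forward(X)
mse0 = np.mean((pred0 - y) ** 2)

step_size, k = 0.05, 24   # optimise only the k highest-|gradient| connections per step
for _ in range(2000):
    H, pred = forward(X)
    err = pred - y
    gW2 = H.T @ err / len(X)
    dH = np.outer(err, W2) * (1 - H ** 2)      # backprop through tanh
    gW1 = X.T @ dH / len(X)
    # high-gradient pool: keep the k largest-|gradient| connections active,
    # freeze (zero the update of) all other connections this step
    g = np.concatenate([gW1.ravel(), gW2])
    mask = np.zeros_like(g)
    mask[np.argsort(np.abs(g))[-k:]] = 1.0
    W1 -= step_size * (mask[: gW1.size].reshape(gW1.shape) * gW1)
    W2 -= step_size * (mask[gW1.size :] * gW2)
    b1 -= step_size * dH.mean(axis=0)
    b2 -= step_size * err.mean()

_, pred = forward(X)
mse = np.mean((pred - y) ** 2)
print(f"MSE before training: {mse0:.4f}, after selective-weight training: {mse:.4f}")
```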

  17. An Investigation of Item Fit Statistics for Mixed IRT Models

    Science.gov (United States)

    Chon, Kyong Hee

    2009-01-01

    The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…

  18. Persistence and decontamination of surrogate radioisotopes in a model drinking water distribution system.

    Science.gov (United States)

    Szabo, Jeffrey G; Impellitteri, Christopher A; Govindaswamy, Shekar; Hall, John S

    2009-12-01

    Contamination of a model drinking water system with surrogate radioisotopes was examined with respect to persistence on and decontamination of infrastructure surfaces. Cesium and cobalt chloride salts were used as surrogates for cesium-137 and cobalt-60. Studies were conducted in biofilm annular reactors containing heavily corroded iron surfaces formed under shear and constantly submerged in drinking water. Cesium was not detected on the corroded iron surface after equilibration with 10 and 100 mg L(-1) solutions of cesium chloride, but cobalt was detected on corroded iron coupons at both initial concentrations. The amount of adhered cobalt decreased over the next six weeks, but cobalt was still present when monitoring stopped. X-ray absorption near-edge spectroscopy (XANES) showed that the adhered cobalt was in the III oxidation state. The adsorbed cobalt was strongly resistant to decontamination by various physicochemical methods. Simulated flushing and the use of free chlorine and dilute ammonia were found to be ineffective, whereas aggressive methods such as 14.5 M ammonia and 0.36 M sulfuric acid removed 37 and 92% of the sorbed cobalt, respectively.

  19. A reduced order aerothermodynamic modeling framework for hypersonic vehicles based on surrogate and POD

    Directory of Open Access Journals (Sweden)

    Chen Xin

    2015-10-01

    Full Text Available Aerothermoelasticity is one of the key technologies for hypersonic vehicles. Accurate and efficient computation of the aerothermodynamics is one of the primary challenges for hypersonic aerothermoelastic analysis. To address the shortcomings of engineering calculations, computational fluid dynamics (CFD) and experimental investigation, a reduced order modeling (ROM) framework for aerothermodynamics based on CFD predictions is developed using an enhanced algorithm of fast maximin Latin hypercube design. Both proper orthogonal decomposition (POD) and surrogate approaches are considered and compared to construct ROMs. Two surrogate approaches, Kriging and optimized radial basis function (ORBF), are utilized to construct ROMs. Furthermore, an enhanced algorithm of fast maximin Latin hypercube design is proposed, which proves helpful in improving the precision of the ROMs. Test results for the three-dimensional aerothermodynamics over a hypersonic surface indicate that the precision of ROMs based on Kriging is better than that of ROMs based on ORBF, and that ROMs based on Kriging are marginally more accurate than ROMs based on POD-Kriging. In summary, the ROM framework for hypersonic aerothermodynamics has good precision and efficiency.
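
    The POD half of such a ROM framework reduces to a singular value decomposition of a mean-centered snapshot matrix. A minimal sketch with synthetic snapshots standing in for CFD aerothermal fields:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic snapshot matrix: each column is a "flow field" sampled at 500 points,
# generated from 3 underlying spatial modes (a stand-in for CFD snapshots).
x = np.linspace(0, 1, 500)
modes = np.stack([np.sin(np.pi * x), np.sin(2 * np.pi * x), x ** 2])
coeffs = rng.normal(size=(40, 3)) * np.array([3.0, 1.0, 0.3])
S = (coeffs @ modes).T                      # 500 spatial points x 40 snapshots

# POD: subtract the mean field, then take the SVD of the snapshot matrix
mean = S.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(S - mean, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.99) + 1)  # modes capturing 99% of the energy

# Rank-r reconstruction error of the truncated POD basis
S_r = mean + U[:, :r] * s[:r] @ Vt[:r]
rel_err = np.linalg.norm(S - S_r) / np.linalg.norm(S)
print(f"{r} POD modes capture 99% energy; relative reconstruction error = {rel_err:.2e}")
```

    In the full framework, a surrogate (e.g. Kriging) is then fitted to the r modal coefficients as functions of the flight-condition parameters.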

  20. A reduced order aerothermodynamic modeling framework for hypersonic vehicles based on surrogate and POD

    Institute of Scientific and Technical Information of China (English)

    Chen Xin; Liu Li; Long Teng; Yue Zhenjiang

    2015-01-01

    Aerothermoelasticity is one of the key technologies for hypersonic vehicles. Accurate and efficient computation of the aerothermodynamics is one of the primary challenges for hypersonic aerothermoelastic analysis. To address the shortcomings of engineering calculations, computational fluid dynamics (CFD) and experimental investigation, a reduced order modeling (ROM) framework for aerothermodynamics based on CFD predictions is developed using an enhanced algorithm of fast maximin Latin hypercube design. Both proper orthogonal decomposition (POD) and surrogate approaches are considered and compared to construct ROMs. Two surrogate approaches, Kriging and optimized radial basis function (ORBF), are utilized to construct ROMs. Furthermore, an enhanced algorithm of fast maximin Latin hypercube design is proposed, which proves helpful in improving the precision of the ROMs. Test results for the three-dimensional aerothermodynamics over a hypersonic surface indicate that the precision of ROMs based on Kriging is better than that of ROMs based on ORBF, and that ROMs based on Kriging are marginally more accurate than ROMs based on POD-Kriging. In summary, the ROM framework for hypersonic aerothermodynamics has good precision and efficiency.

  1. On improving Efficiency and Accuracy of Variable-Fidelity Surrogate Modeling in Aero-data for Loads Context

    DEFF Research Database (Denmark)

    Han, Zhonghua; Zimmermann, Ralf; Goertz, Stefan

    2009-01-01

    Variable-fidelity surrogate modeling offers an efficient way to generate aerodynamic data for aero-loads prediction based on a set of CFD methods with varying degree of fidelity and computational expense. In this paper, new algorithms, such as a Gradient-Enhanced Kriging method (direct GEK) and a generalized hybrid bridge function, have been developed to improve the efficiency and accuracy of the existing Variable-Fidelity Modeling (VFM) approach. These new algorithms and features are demonstrated and evaluated for analytical functions and used to construct a global surrogate model for the aerodynamic...
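
    The bridge-function idea behind VFM can be sketched as an additive correction: fit the discrepancy between a few high-fidelity samples and a cheap low-fidelity model, here with a simple RBF interpolant. The functions are illustrative stand-ins, not the paper's direct-GEK method.

```python
import numpy as np

def f_hi(x):   # expensive "truth" (stand-in for high-fidelity CFD)
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def f_lo(x):   # cheap, biased low-fidelity model
    return 0.5 * f_hi(x) + 10 * (x - 0.5) - 5

x_hi = np.array([0.0, 0.4, 0.6, 1.0])        # few high-fidelity samples
delta = f_hi(x_hi) - f_lo(x_hi)              # bridge-function data

# Gaussian RBF interpolant of the additive bridge function delta(x)
eps = 2.0
Phi = np.exp(-eps * (x_hi[:, None] - x_hi[None, :]) ** 2)
w = np.linalg.solve(Phi, delta)

def f_vfm(x):
    """Variable-fidelity prediction: low-fidelity model plus bridge correction."""
    phi = np.exp(-eps * (x[:, None] - x_hi[None, :]) ** 2)
    return f_lo(x) + phi @ w

x = np.linspace(0, 1, 200)
err_lo = np.max(np.abs(f_lo(x) - f_hi(x)))
err_vfm = np.max(np.abs(f_vfm(x) - f_hi(x)))
print(f"max error: low-fidelity alone {err_lo:.2f}, bridged VFM {err_vfm:.2f}")
```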

  2. Gasoline surrogate modeling of gasoline ignition in a rapid compression machine and comparison to experiments

    Energy Technology Data Exchange (ETDEWEB)

    Mehl, M; Kukkadapu, G; Kumar, K; Sarathy, S M; Pitz, W J; Sung, S J

    2011-09-15

    The use of gasoline in homogeneous charge compression ignition (HCCI) engines and in dual-fuel diesel-gasoline engines has increased the need to understand its compression ignition processes under engine-like conditions. These processes need to be studied under well-controlled conditions in order to quantify low temperature heat release and to provide fundamental validation data for chemical kinetic models. With this in mind, an experimental campaign has been undertaken in a rapid compression machine (RCM) to measure the ignition of gasoline mixtures over a wide range of compression temperatures and for different compression pressures. By measuring the pressure history during ignition, information on the first stage ignition (when observed) and second stage ignition is captured, along with information on the phasing of the heat release. Heat release processes during ignition are important because gasoline is known to exhibit low temperature, intermediate temperature and high temperature heat release. In an HCCI engine, the occurrence of low-temperature and intermediate-temperature heat release can be exploited to obtain higher load operation and has become a topic of much interest for engine researchers. Consequently, it is important to understand these processes under well-controlled conditions. A four-component gasoline surrogate model (including n-heptane, iso-octane, toluene, and 2-pentene) has been developed to simulate real gasolines. An appropriate surrogate mixture of the four components has been developed to simulate the specific gasoline used in the RCM experiments. This chemical kinetic surrogate model was then used to simulate the RCM experimental results for real gasoline. The experimental and modeling results covered ultra-lean to stoichiometric mixtures, compressed temperatures of 640-950 K, and compression pressures of 20 and 40 bar. The agreement between the experiments and model is encouraging in terms of first
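
    The blending arithmetic behind a multi-component surrogate can be sketched as a small linear system: match target fuel properties subject to the fractions summing to one. The component property values below are rough, rounded illustrative numbers, not data from the study.

```python
import numpy as np

# Illustrative blending arithmetic only: property values are rough rounded
# numbers for the four palette species, not measurements from the paper.
components = ["n-heptane", "iso-octane", "toluene", "2-pentene"]
props = np.array([
    [0.0, 100.0, 120.0, 90.0],      # research octane number (approx.)
    [2.29, 2.25, 1.14, 2.00],       # H/C atomic ratio (approx.)
    [100.2, 114.2, 92.1, 70.1],     # molar mass, g/mol
])
target = np.array([89.5, 1.966, 104.37])   # hypothetical target gasoline

# Three property-matching equations plus the sum-to-one constraint give a
# square linear system for the four blend fractions.
A = np.vstack([props, np.ones(4)])
b = np.append(target, 1.0)
x = np.linalg.solve(A, b)

print("blend fractions:", dict(zip(components, np.round(x, 3))))
print("achieved properties:", np.round(props @ x, 3))
```

    With more properties than components, the same setup becomes a constrained least-squares fit instead of an exact solve.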

  3. Diesel Surrogate Fuels for Engine Testing and Chemical-Kinetic Modeling: Compositions and Properties

    Science.gov (United States)

    Mueller, Charles J.; Cannella, William J.; Bays, J. Timothy; Bruno, Thomas J.; DeFabio, Kathy; Dettman, Heather D.; Gieleciak, Rafal M.; Huber, Marcia L.; Kweon, Chol-Bum; McConnell, Steven S.; Pitz, William J.; Ratcliff, Matthew A.

    2016-01-01

    The primary objectives of this work were to formulate, blend, and characterize a set of four ultralow-sulfur diesel surrogate fuels in quantities sufficient to enable their study in single-cylinder-engine and combustion-vessel experiments. The surrogate fuels feature increasing levels of compositional accuracy (i.e., increasing exactness in matching hydrocarbon structural characteristics) relative to the single target diesel fuel upon which the surrogate fuels are based. This approach was taken to assist in determining the minimum level of surrogate-fuel compositional accuracy that is required to adequately emulate the performance characteristics of the target fuel under different combustion modes. For each of the four surrogate fuels, an approximately 30 L batch was blended, and a number of the physical and chemical properties were measured. This work documents the surrogate-fuel creation process and the results of the property measurements. PMID:27330248

  4. Cognitive theories as reinforcement history surrogates: the case of likelihood ratio models of human recognition memory.

    Science.gov (United States)

    Wixted, John T; Gaitan, Santino C

    2002-11-01

    B. F. Skinner (1977) once argued that cognitive theories are essentially surrogates for the organism's (usually unknown) reinforcement history. In this article, we argue that this notion applies rather directly to a class of likelihood ratio models of human recognition memory. The point is not that such models are fundamentally flawed or that they are not useful and should be abandoned. Instead, the point is that the role of reinforcement history in shaping memory decisions could help to explain what otherwise must be explained by assuming that subjects are inexplicably endowed with the relevant distributional information and computational abilities. To the degree that a role for an organism's reinforcement history is appreciated, the importance of animal memory research in understanding human memory comes into clearer focus. As Skinner was also fond of pointing out, it is only in the animal laboratory that an organism's history of reinforcement can be precisely controlled and its effects on behavior clearly understood.

  5. Surrogate-driven deformable motion model for organ motion tracking in particle radiation therapy

    Science.gov (United States)

    Fassi, Aurora; Seregni, Matteo; Riboldi, Marco; Cerveri, Pietro; Sarrut, David; Battista Ivaldi, Giovanni; Tabarelli de Fatis, Paola; Liotta, Marco; Baroni, Guido

    2015-02-01

    The aim of this study is the development and experimental testing of a tumor tracking method for particle radiation therapy, providing the daily respiratory dynamics of the patient’s thoraco-abdominal anatomy as a function of an external surface surrogate combined with an a priori motion model. The proposed tracking approach is based on a patient-specific breathing motion model, estimated from the four-dimensional (4D) planning computed tomography (CT) through deformable image registration. The model is adapted to the interfraction baseline variations in the patient’s anatomical configuration. The driving amplitude and phase parameters are obtained intrafractionally from a respiratory surrogate signal derived from the external surface displacement. The developed technique was assessed on a dataset of seven lung cancer patients, who underwent two repeated 4D CT scans. The first 4D CT was used to build the respiratory motion model, which was tested on the second scan. The geometric accuracy in localizing lung lesions, averaged over all breathing phases, ranged between 0.6 and 1.7 mm across all patients. Errors in tracking the surrounding organs at risk, such as lungs, trachea and esophagus, were lower than 1.3 mm on average. The median absolute variation in water equivalent path length (WEL) within the target volume did not exceed 1.9 mm-WEL for simulated particle beams. A significant improvement was achieved compared with error compensation based on standard rigid alignment. The present work can be regarded as a feasibility study for the potential extension of tumor tracking techniques to particle treatments. Unlike current tracking methods applied in conventional radiotherapy, the proposed approach allows for the dynamic localization of all anatomical structures scanned in the planning CT, thus providing complete information on the density and WEL variations required for particle beam range adaptation.

  6. Modeling Pancreatic Tumor Motion Using 4-Dimensional Computed Tomography and Surrogate Markers

    Energy Technology Data Exchange (ETDEWEB)

    Huguet, Florence [Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York (United States); Department of Radiation Oncology, Hôpitaux Universitaires Paris Est, Hôpital Tenon, University Paris VI, Paris (France); Yorke, Ellen D.; Davidson, Margaret [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York (United States); Zhang, Zhigang [Department of Biostatistics, Memorial Sloan Kettering Cancer Center, New York, New York (United States); Jackson, Andrew; Mageras, Gig S. [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York (United States); Wu, Abraham J. [Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York (United States); Goodman, Karyn A., E-mail: GoodmanK@mskcc.org [Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York (United States)

    2015-03-01

    Purpose: To assess intrafractional positional variations of pancreatic tumors using 4-dimensional computed tomography (4D-CT), their impact on gross tumor volume (GTV) coverage, and the reliability of the biliary stent, fiducial seeds, and the real-time position management (RPM) external marker as tumor surrogates for setup of respiratory gated treatment, and to build a correlative model of tumor motion. Methods and Materials: We analyzed the respiration-correlated 4D-CT images acquired during simulation of 36 patients with either a biliary stent (n=16) or implanted fiducials (n=20) who were treated with RPM respiratory gated intensity modulated radiation therapy for locally advanced pancreatic cancer. Respiratory displacement relative to end-exhalation was measured for the GTV, the biliary stent or fiducial seeds, and the RPM marker. The results were compared between the full respiratory cycle and the gating interval. A linear mixed model was used to assess the correlation of GTV motion with the potential surrogate markers. Results: The average ± SD GTV excursions were 0.3 ± 0.2 cm in the left-right direction, 0.6 ± 0.3 cm in the anterior-posterior direction, and 1.3 ± 0.7 cm in the superior-inferior direction. Gating around end-exhalation reduced GTV motion by 46% to 60%. D95% was at least the prescribed 56 Gy in 76% of patients. GTV displacement was associated with the RPM marker, the biliary stent, and the fiducial seeds. The correlation was better with the fiducial seeds and the biliary stent. Conclusions: Respiratory gating reduced the margin necessary for radiation therapy of pancreatic tumors. GTV motion was well correlated with biliary stent or fiducial seed displacements, validating their use as surrogates for daily assessment of GTV position during treatment. A patient-specific internal target volume based on 4D-CT is recommended for both gated and non-gated treatment; otherwise, our model can be used to predict the degree of GTV motion.

  7. TU-CD-BRA-05: Atlas Selection for Multi-Atlas-Based Image Segmentation Using Surrogate Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, T; Ruan, D [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: The growing size and heterogeneity of training atlases necessitate sophisticated schemes to identify only the most relevant atlases for a specific multi-atlas-based image segmentation problem. This study aims to develop a model to infer the inaccessible oracle geometric relevance metric from surrogate image similarity metrics and, based on such a model, to provide guidance for atlas selection in multi-atlas-based image segmentation. Methods: We relate the oracle geometric relevance metric in label space to the surrogate metric in image space by a monotonically non-decreasing function with additive random perturbations. Subsequently, a surrogate’s ability to prognosticate the oracle order for atlas subset selection is quantified probabilistically. Finally, important insights and guidance are provided for the design of the fusion set size, balancing the competing demands to include the most relevant atlases and to exclude the most irrelevant ones. A systematic solution is derived based on an optimization framework. Model verification and performance assessment are performed based on clinical prostate MR images. Results: The proposed surrogate model was exemplified by a linear map with normally distributed perturbation, and verified with several commonly used surrogates, including MSD, NCC and (N)MI. The derived behaviors of different surrogates in atlas selection and their corresponding performance in the ultimate label estimate were validated. The performance of NCC and (N)MI was similarly superior to MSD, with a 10% higher atlas selection probability and a segmentation performance increase in DSC by 0.10, with first and third quartiles of (0.83, 0.89) compared to (0.81, 0.89). The derived optimal fusion set size, valued at 7/8/8/7 for MSD/NCC/MI/NMI, agreed well with the appropriate range [4, 9] from empirical observation. Conclusion: This work has developed an efficacious probabilistic model to characterize the image-based surrogate metric on atlas selection

  8. Geometric Generalisation of Surrogate Model-Based Optimisation to Combinatorial and Program Spaces

    Directory of Open Access Journals (Sweden)

    Yong-Hyuk Kim

    2014-01-01

    Full Text Available Surrogate models (SMs can profitably be employed, often in conjunction with evolutionary algorithms, in optimisation in which it is expensive to test candidate solutions. The spatial intuition behind SMs makes them naturally suited to continuous problems, and the only combinatorial problems that have been previously addressed are those with solutions that can be encoded as integer vectors. We show how radial basis functions can provide a generalised SM for combinatorial problems which have a geometric solution representation, through the conversion of that representation to a different metric space. This approach allows an SM to be cast in a natural way for the problem at hand, without ad hoc adaptation to a specific representation. We test this adaptation process on problems involving binary strings, permutations, and tree-based genetic programs.
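
    The core trick, an RBF surrogate whose distance is computed in the representation's own metric space, can be sketched for binary strings using Hamming distance, with the OneMax function standing in for an expensive objective.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

def hamming(A, B):
    """Pairwise Hamming distances between two sets of binary strings."""
    return (A[:, None, :] != B[None, :, :]).sum(axis=2)

n = 10
X_train = np.unique(rng.integers(0, 2, size=(40, n)), axis=0)
y_train = X_train.sum(axis=1).astype(float)     # OneMax values (number of ones)

# RBF interpolant phi(d) = exp(-gamma * d) on Hamming distances
gamma = 0.5
Phi = np.exp(-gamma * hamming(X_train, X_train))
w = np.linalg.solve(Phi + 1e-6 * np.eye(len(X_train)), y_train)

def surrogate(X):
    return np.exp(-gamma * hamming(X, X_train)) @ w

# Check how well the surrogate ranks the entire 2^10 space
X_all = np.array(list(product([0, 1], repeat=n)))
r = np.corrcoef(surrogate(X_all), X_all.sum(axis=1))[0, 1]
print(f"correlation of surrogate with true objective over {len(X_all)} strings: {r:.3f}")
```

    For permutations or trees, only the `hamming` function changes (e.g. to Kendall tau or tree edit distance); the surrogate machinery is untouched.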

  9. A comparative research of different ensemble surrogate models based on set pair analysis for the DNAPL-contaminated aquifer remediation strategy optimization

    Science.gov (United States)

    Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin

    2017-08-01

    Surrogate-based simulation-optimization is an effective technique for optimizing the surfactant enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which is used to replace the simulation model in order to reduce the computational burden, is key to such research. However, previous studies are generally based on a stand-alone surrogate model and rarely attempt to improve the approximation accuracy of the surrogate model sufficiently by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and conduct a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance, and the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while simultaneously maintaining high computational accuracy.
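
    The ensemble idea can be sketched with a simple inverse-error weighting standing in for the set pair analysis weights, and cheap 1-D base surrogates standing in for the RBFANN/SVR/Kriging models and the SEAR simulation model.

```python
import numpy as np

rng = np.random.default_rng(4)

def truth(x):                        # stand-in for the expensive simulation model
    return np.sin(3 * x) + 0.5 * x

x_tr = np.sort(rng.uniform(0, 3, 25)); y_tr = truth(x_tr)   # training data
x_va = rng.uniform(0, 3, 50);          y_va = truth(x_va)   # validation data

def make_rbf(eps):                   # base surrogates: RBF fits of two widths
    Phi = np.exp(-eps * (x_tr[:, None] - x_tr[None, :]) ** 2)
    w = np.linalg.solve(Phi + 1e-6 * np.eye(len(x_tr)), y_tr)
    return lambda x: np.exp(-eps * (x[:, None] - x_tr[None, :]) ** 2) @ w

def make_poly(deg):                  # base surrogate: polynomial fit
    c = np.polyfit(x_tr, y_tr, deg)
    return lambda x: np.polyval(c, x)

models = [make_rbf(5.0), make_rbf(0.5), make_poly(3)]
errs = np.array([np.sqrt(np.mean((m(x_va) - y_va) ** 2)) for m in models])
wts = (1 / errs) / np.sum(1 / errs)  # inverse-RMSE ensemble weights

def ensemble(x):
    return sum(wt * m(x) for wt, m in zip(wts, models))

x_te = rng.uniform(0, 3, 200); y_te = truth(x_te)
rmse_base = np.array([np.sqrt(np.mean((m(x_te) - y_te) ** 2)) for m in models])
rmse_ens = np.sqrt(np.mean((ensemble(x_te) - y_te) ** 2))
print("base test RMSEs:", np.round(rmse_base, 4), " ensemble test RMSE:", round(rmse_ens, 4))
```

    The convex weighting guarantees the ensemble is never worse than the worst member, and weighting by validation skill usually places it near the best one.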

  10. Prediction models of long-term leaching behavior and leaching mechanism of glass components and surrogated nuclides in radioactive vitrified waste forms

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Y. C.; Lee, K. S. [Department of Industrial Environment and Health, Yonsei University, Wonju (Korea, Republic of); Kim, I. T.; Kim, H. T.; Kim, J. H. [Korea Atomic Energy Research Institute (KAERI), Taejon (Korea, Republic of)

    1999-07-01

    Melting solidification is considered to be a prospective technology for stabilizing the ash remaining after incineration of combustible radioactive waste, since it has the advantage of improving the physicochemical properties of waste forms. Final waste forms should be characterized to determine the degree to which they fulfill the acceptance criteria of the disposal facility. Chemical durability (leaching resistance) is known to be the most important factor in the assessment of waste forms. In this study, vitrified waste forms were manufactured and characterized. Feed materials consisted of simulated radioactive incineration ash and base glass in different mixing ratios. To assess the chemical durability of the vitrified waste forms, the International Standard Organization (ISO) leach test was conducted at 70 degrees C with deionized distilled water as the leachant for 820 days, and the concentrations of glass components and surrogates in the leachates were then analyzed. Two models for predicting the long-term leaching behavior of glass components and radionuclides in a glass form were applied to the 820-day leaching data. The model that includes a parameter fitted from the longer-term experimental data is more accurate; however, the model based on shorter leaching tests offers the advantage of being able to predict long-term behavior from short-term experimental data. The leaching mechanisms of the surrogates and glass components were also investigated using two semi-empirical kinetic models and were found to be dissolution combined with diffusion. (author)
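
    The two-mechanism kinetic fit (diffusion plus matrix dissolution) can be sketched as a linear least-squares problem in sqrt(t) and t. The data below are synthetic, not the ISO test measurements.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic cumulative-fraction-leached data: diffusion term (sqrt t) plus a
# slow matrix-dissolution term (linear t), with small measurement noise.
t = np.array([1, 3, 7, 14, 28, 56, 112, 224, 448, 820], dtype=float)  # days
true_a, true_b = 0.002, 2e-6
cf = true_a * np.sqrt(t) + true_b * t + rng.normal(scale=2e-4, size=t.size)

# Fit CF(t) = a*sqrt(t) + b*t by linear least squares
A = np.column_stack([np.sqrt(t), t])
(a, b), *_ = np.linalg.lstsq(A, cf, rcond=None)

# Extrapolate the fitted model to a disposal-relevant horizon
t_long = 365.0 * 100                  # 100 years, in days
pred = a * np.sqrt(t_long) + b * t_long
print(f"diffusion coefficient a = {a:.4g}, dissolution coefficient b = {b:.4g}")
print(f"predicted cumulative fraction leached at 100 years: {pred:.3f}")
```

    The relative size of the two fitted terms indicates which mechanism dominates at a given time scale, which is the basis for the mechanism identification described above.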

  11. A model identification technique to characterize the low frequency behaviour of surrogate explosive materials

    Science.gov (United States)

    Paripovic, Jelena; Davies, Patricia

    2016-09-01

    The mechanical response of energetic materials, especially those used in improvised explosive devices, is of great interest for improving understanding of how mechanical excitations may lead to improved detection or to detonation. The materials are comprised of crystals embedded in a binder. Microstructural modelling can give insight into the interactions between the binder and the crystals, and thus into the mechanisms that may lead to material heating, but such models need validation and also require estimates of constituent material properties. Addressing these issues, nonlinear viscoelastic models of the low frequency behavior of a surrogate material-mass system undergoing base excitation have been constructed, and experimental data have been collected and used to estimate the order of the components in the system model and the parameters in the model. The estimation technique is described and examples of its application to both simulated and experimental data are given. From the estimated system model the material properties are extracted. Material properties are estimated for a variety of materials, and the effect of aging on the estimated material properties is shown.

  12. The best-fit universe. [cosmological models

    Science.gov (United States)

    Turner, Michael S.

    1991-01-01

    Inflation provides very strong motivation for a flat Universe, Harrison-Zel'dovich (constant-curvature) perturbations, and cold dark matter. However, there are a number of cosmological observations that conflict with the predictions of the simplest such model: one with zero cosmological constant. They include the age of the Universe, dynamical determinations of Omega, galaxy-number counts, and the apparent abundance of large-scale structure in the Universe. While the discrepancies are not yet serious enough to rule out the simplest and most well motivated model, the current data point to a best-fit model with the following parameters: Omega(sub B) approximately equal to 0.03, Omega(sub CDM) approximately equal to 0.17, Omega(sub Lambda) approximately equal to 0.8, and H(sub 0) approximately equal to 70 km/(sec x Mpc), which improves significantly the concordance with observations. While there is no good reason to expect such a value for the cosmological constant, there is no physical principle that would rule it out.

  14. Constructing Surrogate Models of Complex Systems with Enhanced Sparsity: Quantifying the influence of conformational uncertainty in biomolecular solvation

    Energy Technology Data Exchange (ETDEWEB)

    Lei, Huan; Yang, Xiu; Zheng, Bin; Baker, Nathan A.

    2015-11-05

    Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.

  15. Utilisation of transparent synthetic soil surrogates in geotechnical physical models: A review

    Directory of Open Access Journals (Sweden)

    Abideen Adekunle Ganiyu

    2016-08-01

    Full Text Available Efforts to obtain non-intrusive measurements of deformation and spatial flow within a soil mass prior to the advent of transparent soils had perceptible limitations. A transparent soil is a two-phase medium composed of synthetic aggregate and fluid components of identical refractive indices, so that the resulting soil is transparent. The transparency facilitates real-life visualisation of the soil continuum in physical models. When applied in conjunction with advanced photogrammetry and image processing techniques, transparent soils enable the quantification of spatial deformation, displacement and multi-phase flow in physical model tests. Transparent synthetic soils have been successfully employed in geotechnical model tests as soil surrogates, based on testing results showing that their geotechnical properties replicate those of natural soils. This paper presents a review of transparent synthetic soils and their numerous applications in geotechnical physical models. The properties of the aggregate materials are outlined and the features of the various transparent clays and sands available in the literature are described. The merits of transparent soils are highlighted and the need to broaden their application in geotechnical physical model research is emphasised. This paper will serve as a concise compendium on the subject of transparent soils for future researchers in this field.

  16. Utilisation of transparent synthetic soil surrogates in geotechnical physical models: A review

    Institute of Scientific and Technical Information of China (English)

    Abideen Adekunle Ganiyu; Ahmad Safuan A. Rashid; Mohd Hanim Osman

    2016-01-01

    Efforts to obtain non-intrusive measurements of deformation and spatial flow within a soil mass prior to the advent of transparent soils had perceptible limitations. A transparent soil is a two-phase medium composed of synthetic aggregate and fluid components of identical refractive indices, so that the resulting soil is transparent. The transparency facilitates real-life visualisation of the soil continuum in physical models. When applied in conjunction with advanced photogrammetry and image processing techniques, transparent soils enable the quantification of spatial deformation, displacement and multi-phase flow in physical model tests. Transparent synthetic soils have been successfully employed in geotechnical model tests as soil surrogates, based on testing results showing that their geotechnical properties replicate those of natural soils. This paper presents a review of transparent synthetic soils and their numerous applications in geotechnical physical models. The properties of the aggregate materials are outlined and the features of the various transparent clays and sands available in the literature are described. The merits of transparent soils are highlighted and the need to broaden their application in geotechnical physical model research is emphasised. This paper will serve as a concise compendium on the subject of transparent soils for future researchers in this field.

  17. Modelling metal accumulation using humic acid as a surrogate for plant roots.

    Science.gov (United States)

    Le, T T Yen; Swartjes, Frank; Römkens, Paul; Groenenberg, Jan E; Wang, Peng; Lofts, Stephen; Hendriks, A Jan

    2015-04-01

    Metal accumulation in roots was modelled with WHAM VII using humic acid (HA) as a surrogate for the root surface. Metal accumulation was simulated as a function of computed metal binding to HA, with a correction term (E(HA)) to account for the differences in binding site density between HA and the root surface. The approach was able to model metal accumulation in roots to within one order of magnitude for 95% of the data points. Total concentrations of Mn in roots of Vigna unguiculata, total concentrations of Ni, Zn, Cu and Cd in roots of Pisum sativum, as well as internalized concentrations of Cd, Ni, Pb and Zn in roots of Lolium perenne, were significantly correlated with the computed metal binding to HA. The method was less successful at modelling metal accumulation at low concentrations and in soil experiments. Measured concentrations of Cu internalized in L. perenne roots were not related to the modelled Cu binding to HA and deviated from the predictions by over one order of magnitude. The results indicate that metal uptake by roots may, under certain conditions, be influenced by physiological processes that cannot be simulated by geochemical equilibrium models. Processes occurring during chronic exposure of plants grown in soil to metals at low concentrations complicate the relationship between computed metal binding to HA and measured metal accumulation in roots.

  18. A Multi-Fidelity Surrogate Model for Handling Real Gas Equations of State

    Science.gov (United States)

    Ouellet, Frederick; Park, Chanyoung; Rollin, Bertrand; Balachandar, S."bala"

    2016-11-01

    The explosive dispersal of particles is an example of a complex multiphase and multi-species fluid flow problem. This problem has many engineering applications including particle-laden explosives. In these flows, the detonation products of the explosive cannot be treated as a perfect gas so a real gas equation of state is used to close the governing equations (unlike air, which uses the ideal gas equation for closure). As the products expand outward from the detonation point, they mix with ambient air and create a mixing region where both of the state equations must be satisfied. One of the more accurate, yet computationally expensive, methods to deal with this is a scheme that iterates between the two equations of state until pressure and thermal equilibrium are achieved inside of each computational cell. This work strives to create a multi-fidelity surrogate model of this process. We then study the performance of the model with respect to the iterative method by performing both gas-only and particle laden flow simulations using an Eulerian-Lagrangian approach with a finite volume code. Specifically, the model's (i) computational speed, (ii) memory requirements and (iii) computational accuracy are analyzed to show the benefits of this novel modeling approach. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA00023.

  19. Epistasis and the Structure of Fitness Landscapes: Are Experimental Fitness Landscapes Compatible with Fisher's Geometric Model?

    Science.gov (United States)

    Blanquart, François; Bataillon, Thomas

    2016-06-01

    The fitness landscape defines the relationship between genotypes and fitness in a given environment and underlies fundamental quantities such as the distribution of selection coefficients and the magnitude and type of epistasis. A better understanding of variation in landscape structure across species and environments is thus necessary to understand and predict how populations will adapt. An increasing number of experiments investigate the properties of fitness landscapes by identifying mutations, constructing genotypes with combinations of these mutations, and measuring the fitness of these genotypes. Yet these empirical landscapes represent a very small sample of the vast space of all possible genotypes, and this sample is often biased by the protocol used to identify mutations. Here we develop a rigorous statistical framework based on Approximate Bayesian Computation to address these concerns and use this flexible framework to fit a broad class of phenotypic fitness models (including Fisher's model) to 26 empirical landscapes representing nine diverse biological systems. Despite uncertainty owing to the small size of most published empirical landscapes, the inferred landscapes have similar structure in similar biological systems. Surprisingly, goodness-of-fit tests reveal that this class of phenotypic models, which has been successful so far in interpreting experimental data, is plausible in only three of nine biological systems. More precisely, although Fisher's model was able to explain several statistical properties of the landscapes, including the mean and SD of selection and epistasis coefficients, it was often unable to explain the full structure of fitness landscapes.

  20. Predictive Model for Inactivation of Feline Calicivirus, a Norovirus Surrogate, by Heat and High Hydrostatic Pressure

    Science.gov (United States)

    Buckow, Roman; Isbarn, Sonja; Knorr, Dietrich; Heinz, Volker; Lehmacher, Anselm

    2008-01-01

    Noroviruses, which are members of the Caliciviridae family, represent the leading cause of nonbacterial gastroenteritis in developed countries; such norovirus infections result in high economic costs for health protection. Person-to-person contact, contaminated water, and foods, especially raw shellfish, vegetables, and fruits, can transmit noroviruses. We inactivated feline calicivirus, a surrogate for the nonculturable norovirus, in cell culture medium and mineral water by heat and high hydrostatic pressure. Incubation at ambient pressure and 75°C for 2 min as well as treatment at 450 MPa and 15°C for 1 min inactivated more than 7 log10 PFU of calicivirus per ml in cell culture medium or mineral water. The heat and pressure time-inactivation curves obtained with the calicivirus showed tailing in the logarithmic scale. Modeling by nth-order kinetics of the virus inactivation was successful in predicting the inactivation of the infective virus particles. The developed model enables the prediction of the calicivirus reduction in response to pressures up to 500 MPa, temperatures ranging from 5 to 75°C, and various treatment times. We suggest high pressure for processing of foods to reduce the health threat posed by noroviruses. PMID:18156330
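
    An nth-order inactivation model of the kind fitted above has a closed-form survivor curve, and for an order n > 1 it reproduces the tailing seen in the log-scale inactivation data. The rate constant and reaction order used below are illustrative placeholders, not the values fitted by the authors.

```python
# Sketch of nth-order inactivation kinetics: dN/dt = -k * N**n.
import numpy as np

def survivors(t, n0, k, n):
    """Closed-form solution of dN/dt = -k N^n for n != 1."""
    return (n0 ** (1.0 - n) + (n - 1.0) * k * t) ** (1.0 / (1.0 - n))

t = np.linspace(0.0, 2.0, 9)    # treatment time (min), illustrative
n0 = 1e7                        # initial titre (PFU/ml), as in the abstract

# log10 reduction of infective particles over time (k, n are invented)
log_red = np.log10(n0 / survivors(t, n0, k=5.0, n=1.3))
```

    Because n > 1, the log-reduction curve flattens with time (tailing), unlike first-order kinetics, whose log-survivor plot is a straight line.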

  1. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using curve fitting methods, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R{sup 2}. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
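
    The fit-and-score loop described above can be sketched with scipy: fit a two-term Gaussian model (the form the record reports as best) to radiation-like data, then compute RMSE and R2. The data-generating profile below is invented; the UTP measurements are not reproduced here.

```python
# Two-term Gaussian fit with RMSE and R^2 goodness-of-fit scores (sketch).
import numpy as np
from scipy.optimize import curve_fit

def gauss2(t, a1, b1, c1, a2, b2, c2):
    """Two-term Gaussian model: sum of two Gaussian bumps."""
    return (a1 * np.exp(-((t - b1) / c1) ** 2)
            + a2 * np.exp(-((t - b2) / c2) ** 2))

rng = np.random.default_rng(1)
t = np.linspace(7.0, 19.0, 60)                       # daylight hours
y_true = (600 * np.exp(-((t - 12) / 2.5) ** 2)
          + 300 * np.exp(-((t - 15) / 2.0) ** 2))    # synthetic irradiance
y = y_true + 5.0 * rng.standard_normal(t.size)       # noisy "measurements"

popt, _ = curve_fit(gauss2, t, y, p0=[500, 12, 2, 200, 15, 2])
resid = y - gauss2(t, *popt)
rmse = np.sqrt(np.mean(resid ** 2))
r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - np.mean(y)) ** 2)
```

    Comparing RMSE and R2 across candidate model forms (Gaussian, sine, polynomial, ...) is exactly the selection procedure the abstract describes.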

  2. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using curve fitting methods, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  3. Selecting a Conservation Surrogate Species for Small Fragmented Habitats Using Ecological Niche Modelling

    Directory of Open Access Journals (Sweden)

    K. Anne-Isola Nekaris

    2015-01-01

    Full Text Available Flagship species are traditionally large, charismatic animals used to rally conservation efforts. Accepted flagship definitions suggest they need only fulfil a strategic role, unlike umbrella species that are used to shelter cohabitant taxa. The criteria used to select both flagship and umbrella species may not stand up in the face of dramatic forest loss, where remaining fragments may only contain species that do not suit either set of criteria. The Cinderella species concept covers aesthetically pleasing but overlooked species that fulfil the criteria of flagships or umbrellas. Such species are also more likely to occur in fragmented habitats. We tested Cinderella criteria on mammals in the fragmented forests of the Sri Lankan Wet Zone. We selected taxa that fulfilled both strategic and ecological roles. We created a shortlist of ten species, and from a survey of local perceptions highlighted two finalists. We tested these for umbrella characteristics against the original shortlist, utilizing Maximum Entropy (MaxEnt) modelling, and analysed distribution overlap using ArcGIS. The criteria highlighted Loris tardigradus tardigradus and Prionailurus viverrinus as finalists, with the former having the highest flagship potential. We suggest Cinderella species can be effective conservation surrogates, especially in habitats where traditional flagship species have been extirpated.

  4. A Comparison of Item Fit Statistics for Mixed IRT Models

    Science.gov (United States)

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G[superscript 2], Orlando and Thissen's S-X[superscript 2] and S-G[superscript 2], and Stone's chi[superscript 2*] and G[superscript 2*]. To investigate the…

  5. Mixed butanols addition to gasoline surrogates: Shock tube ignition delay time measurements and chemical kinetic modeling

    KAUST Repository

    AlRamadan, Abdullah S.

    2015-10-01

    The demand for fuels with high anti-knock quality has historically been rising, and will continue to increase with the development of downsized and turbocharged spark-ignition engines. Butanol isomers, such as 2-butanol and tert-butanol, have high octane ratings (RON of 105 and 107, respectively), and thus mixed butanols (68.8% by volume of 2-butanol and 31.2% by volume of tert-butanol) can be added to conventional petroleum-derived gasoline fuels to improve octane performance. In the present work, the effect of mixed butanols addition to gasoline surrogates has been investigated in a high-pressure shock tube facility. The ignition delay times of mixed butanols stoichiometric mixtures were measured at 20 and 40 bar over a temperature range of 800-1200 K. Next, 10 vol% and 20 vol% of mixed butanols (MB) were blended with two different toluene/n-heptane/iso-octane (TPRF) fuel blends having octane ratings of RON 90/MON 81.7 and RON 84.6/MON 79.3. These MB/TPRF mixtures were investigated at shock tube conditions similar to those mentioned above. A chemical kinetic model was developed to simulate the low- and high-temperature oxidation of mixed butanols and MB/TPRF blends. The proposed model is in good agreement with the experimental data, with some deviations at low temperatures. The effect of mixed butanols addition to TPRFs is marginal when examining the ignition delay times at high temperatures. However, when extended to lower temperatures (T < 850 K), the model shows that the mixed butanols addition to TPRFs causes the ignition delay times to increase and hence behaves like an octane booster at engine-like conditions. © 2015 The Combustion Institute.
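
    Shock-tube ignition-delay data such as these are commonly summarised by an Arrhenius-type correlation, tau = A * exp(Ea / (R*T)), which becomes a straight line in ln(tau) versus 1/T. This is a generic sketch with invented A and Ea, not the MB/TPRF kinetic model of the record.

```python
# Arrhenius-type ignition-delay correlation fitted by linear regression (sketch).
import numpy as np

R = 8.314                                   # gas constant, J/(mol K)
T = np.linspace(800.0, 1200.0, 9)           # K, the range used in the study
tau = 1e-9 * np.exp(1.2e5 / (R * T))        # s, synthetic delays (A, Ea invented)

# ln(tau) = ln(A) + (Ea/R) * (1/T): ordinary straight-line fit
slope, intercept = np.polyfit(1.0 / T, np.log(tau), 1)
Ea_fit = slope * R                          # recovered activation energy, J/mol
A_fit = np.exp(intercept)                   # recovered pre-exponential factor, s
```

    Deviations of measured delays from such a straight line at low temperatures are one signature of the low-temperature chemistry that the record's detailed model targets.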

  6. Hyper-Fit: Fitting Linear Models to Multidimensional Data with Multivariate Gaussian Uncertainties

    CERN Document Server

    Robotham, A S G

    2015-01-01

    Astronomical data is often uncertain with errors that are heteroscedastic (different for each data point) and covariant between different dimensions. Assuming that a set of D-dimensional data points can be described by a (D - 1)-dimensional plane with intrinsic scatter, we derive the general likelihood function to be maximised to recover the best fitting model. Alongside the mathematical description, we also release the hyper-fit package for the R statistical language (github.com/asgr/hyper.fit) and a user-friendly web interface for online fitting (hyperfit.icrar.org). The hyper-fit package offers access to a large number of fitting routines, includes visualisation tools, and is fully documented in an extensive user manual. Most of the hyper-fit functionality is accessible via the web interface. In this paper we include applications to toy examples and to real astronomical data from the literature: the mass-size, Tully-Fisher, Fundamental Plane, and mass-spin-morphology relations. In most cases the hyper-fit ...
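
    In the two-dimensional case (a line with intrinsic scatter and heteroscedastic x and y errors), the likelihood described above reduces to a familiar effective-variance form that can be maximised numerically. This is a sketch of the general approach on synthetic data, not the hyper.fit implementation.

```python
# Maximum-likelihood line fit with heteroscedastic errors and intrinsic scatter.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 200
m_true, b_true, s_int = 1.5, 0.3, 0.2        # slope, intercept, intrinsic scatter

x_true = rng.uniform(0.0, 5.0, n)
y_true = m_true * x_true + b_true + rng.normal(0.0, s_int, n)
sx = rng.uniform(0.05, 0.2, n)               # per-point (heteroscedastic) errors
sy = rng.uniform(0.05, 0.2, n)
x = x_true + rng.normal(0.0, sx)
y = y_true + rng.normal(0.0, sy)

def nll(theta):
    """Negative log-likelihood with effective variance along y."""
    m, b, log_s = theta
    var = m**2 * sx**2 + sy**2 + np.exp(log_s) ** 2
    return 0.5 * np.sum((y - m * x - b) ** 2 / var + np.log(2.0 * np.pi * var))

res = minimize(nll, x0=[1.0, 0.0, np.log(0.1)], method="Nelder-Mead")
m_fit, b_fit, s_fit = res.x[0], res.x[1], np.exp(res.x[2])
```

    The hyper-fit package generalises this to (D - 1)-dimensional planes with full per-point covariance matrices; the fitted intrinsic scatter is what distinguishes this from ordinary weighted least squares.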

  7. Analytical approximations for temperature dependent thermophysical properties of supercritical diesel fuel surrogates used in combustion modeling

    Science.gov (United States)

    Kumar, Abhinav; Saini, Vishnu; Dondapati, Raja Sekhar; Usurumarti, Preeti Rao

    2017-07-01

    Supercritical fluid technology has been introduced to address critical challenges related to emissions and incomplete diesel fuel combustion. The chemical kinetics of diesel fuel is a strong function of temperature. Because surrogate fuels can represent a real diesel fuel, the thermophysical properties of such fuels are studied in the present work as functions of temperature. Two diesel surrogate fuels identified as components of actual diesel fuel for jet engines are examined, and their thermophysical properties are evaluated as functions of temperature at the critical pressure. In addition, the accuracy and reliability of the developed correlations are estimated using two statistical parameters, the Absolute Average of Relative Error (AARE) and the Sum of Average Residues (SAR). Results show excellent agreement between the standard data and the correlated property values.

  8. Development of a surrogate model for analysis of ex-vessel steam explosion in Nordic type BWRs

    Energy Technology Data Exchange (ETDEWEB)

    Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se; Basso, Simone, E-mail: simoneb@kth.se; Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se

    2016-12-15

    Highlights: • Severe accident. • Steam explosion. • Surrogate model. • Sensitivity study. • Artificial neural networks. - Abstract: The severe accident mitigation strategy adopted in Nordic type Boiling Water Reactors (BWRs) employs ex-vessel core melt cooling in a deep pool of water below the reactor vessel. Energetic fuel–coolant interaction (steam explosion) can occur during molten core release into water. Dynamic loads can threaten containment integrity, increasing the risk of fission product release to the environment. Comprehensive uncertainty analysis is necessary in order to assess the risks. The computational costs of the existing fuel–coolant interaction (FCI) codes are often prohibitive for addressing the uncertainties, including the effect of stochastic triggering time. This paper discusses the development of a computationally efficient surrogate model (SM) for prediction of statistical characteristics of steam explosion impulses in Nordic BWRs. The TEXAS-V code was used as the Full Model (FM) for the calculation of explosion impulses. The surrogate model was developed using artificial neural networks (ANNs) and the database of FM solutions. Statistical analysis was employed in order to treat the chaotic response of the steam explosion impulse to variations in the triggering time. Details of the FM and SM implementation and their verification are discussed in the paper.
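
    The full-model/surrogate-model split described above can be illustrated by training a small neural network on a database of runs of a cheap stand-in function. The stand-in below is an invented analytic function, not TEXAS-V, and the input names are hypothetical.

```python
# ANN surrogate trained on a database of "full model" runs (sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

def full_model(x):
    """Cheap stand-in for an expensive FCI simulation: inputs -> impulse."""
    return np.sin(3.0 * x[:, 0]) * np.exp(-x[:, 1]) + 0.5 * x[:, 1]

# Database of FM solutions (inputs could be e.g. melt superheat, water depth)
X = rng.uniform(0.0, 1.0, size=(500, 2))
y = full_model(X)

sm = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                  random_state=0).fit(X, y)

# The surrogate is cheap enough for the many samples an uncertainty study needs
X_test = rng.uniform(0.0, 1.0, size=(200, 2))
rmse = np.sqrt(np.mean((sm.predict(X_test) - full_model(X_test)) ** 2))
```

    Once trained, the surrogate can be sampled millions of times, which is what makes a statistical treatment of the chaotic triggering-time response tractable.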

  9. Surrogate models and optimal design of experiments for chemical kinetics applications

    KAUST Repository

    Bisetti, Fabrizio

    2015-01-07

    Kinetic models for reactive flow applications comprise hundreds of reactions describing the complex interaction among many chemical species. Detailed knowledge of the reaction parameters is a key component of the design cycle of next-generation combustion devices, which aim at improving conversion efficiency and reducing pollutant emissions. Shock tubes are a laboratory-scale experimental configuration widely used for the study of reaction rate parameters. Important uncertainties exist in the values of the thousands of parameters included in the most advanced kinetic models. This talk discusses the application of uncertainty quantification (UQ) methods to the analysis of shock tube data as well as the design of shock tube experiments. Attention is focused on a spectral framework in which uncertain inputs are parameterized in terms of canonical random variables, and quantities of interest (QoIs) are expressed in terms of a mean-square convergent series of orthogonal polynomials acting on these variables. We outline the implementation of a recent spectral collocation approach for determining the unknown coefficients of the expansion, namely a sparse, adaptive pseudo-spectral construction that enables us to obtain surrogates for the QoIs accurately and efficiently. We first discuss the utility of the resulting expressions in quantifying the sensitivity of QoIs to uncertain inputs, and in the Bayesian inference of key physical parameters from experimental measurements. We then discuss the application of these techniques to the analysis of shock-tube data and the optimal design of shock-tube experiments for two key reactions in combustion kinetics: the chain-branching reaction H + O2 ←→ OH + O and the reaction of furans with the hydroxyl radical OH.

  10. Automated Model Fit Method for Diesel Engine Control Development

    NARCIS (Netherlands)

    Seykens, X.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.

    2014-01-01

    This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and structured approach to fit the required combustion model parameters. Only a data set is required that is consider

  11. Model-Free CUSUM Methods for Person Fit

    Science.gov (United States)

    Armstrong, Ronald D.; Shi, Min

    2009-01-01

    This article demonstrates the use of a new class of model-free cumulative sum (CUSUM) statistics to detect person fit given the responses to a linear test. The fundamental statistic being accumulated is the likelihood ratio of two probabilities. The detection performance of this CUSUM scheme is compared to other model-free person-fit statistics…

  13. A fitness screening model for increasing fitness assessment and research experiences in undergraduate exercise science students.

    Science.gov (United States)

    Brown, Gregory A; Lynott, Frank; Heelan, Kate A

    2008-09-01

    When students analyze and present original data they have collected, and hence have a cultivated sense of curiosity about the data, student learning is enhanced. It is often difficult to provide students an opportunity to practice their skills, use their knowledge, and gain research experiences during a typical course laboratory. This article describes a model of an out-of-classroom experience during which undergraduate exercise science students provide a free health and fitness screening to the campus community. Although some evidence of the effectiveness of this experience is presented, this is not a detailed evaluation of either the service or learning benefits of the fitness screening. Working in small learning groups in the classroom, students develop hypotheses about the health and fitness of the population to be screened. Then, as part of the health and fitness screening, participants are evaluated for muscular strength, aerobic fitness, body composition, blood pressure, physical activity, and blood cholesterol levels. Students then analyze the data collected during the screening, accept or reject their hypotheses based on statistical analyses of the data, and make in-class presentations of their findings. This learning experience has been used successfully to illustrate the levels of obesity, hypercholesterolemia, and lack of physical fitness in the campus community as well as provide an opportunity for students to use statistical procedures to analyze data. It has also provided students with an opportunity to practice fitness assessment and interpersonal skills that will enhance their future careers.

  14. topicmodels: An R Package for Fitting Topic Models

    Directory of Open Access Journals (Sweden)

    Bettina Grun

    2011-05-01

    Full Text Available Topic models allow the probabilistic modeling of term frequency occurrences in documents. The fitted model can be used to estimate the similarity between documents as well as between a set of specified keywords using an additional layer of latent variables which are referred to as topics. The R package topicmodels provides basic infrastructure for fitting topic models based on data structures from the text mining package tm. The package includes interfaces to two algorithms for fitting topic models: the variational expectation-maximization algorithm provided by David M. Blei and co-authors and an algorithm using Gibbs sampling by Xuan-Hieu Phan and co-authors.
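
    The record describes an R package, but the same workflow, a term-frequency matrix feeding a fitted topic model that yields per-document topic weights, can be sketched in Python with scikit-learn's variational LDA (the analogue of the Blei-style VEM algorithm the package interfaces). The toy corpus is invented.

```python
# Term counts -> fitted LDA topic model -> per-document topic weights (sketch).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["solar radiation model fitting",
        "radiation data curve fitting model",
        "virus infection antiviral replication",
        "hepatitis virus replication inhibitors"]

# Term-frequency matrix, playing the role of tm's document-term matrix
counts = CountVectorizer().fit_transform(docs)

# Fit a two-topic model with variational inference
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Rows are the latent topic weights for each document (they sum to 1)
doc_topics = lda.transform(counts)
```

    The topic-weight rows are the "additional layer of latent variables" the abstract refers to; document similarity can then be computed in topic space rather than raw term space.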

  15. Cutthroat trout virus as a surrogate in vitro infection model for testing inhibitors of hepatitis E virus replication

    Science.gov (United States)

    Debing, Yannick; Winton, James; Neyts, Johan; Dallmeier, Kai

    2013-01-01

    Hepatitis E virus (HEV) is one of the most important causes of acute hepatitis worldwide. Although most infections are self-limiting, mortality is particularly high in pregnant women. Chronic infections can occur in transplant and other immune-compromised patients. Successful treatment of chronic hepatitis E has been reported with ribavirin and pegylated interferon-alpha, however severe side effects were observed. We employed the cutthroat trout virus (CTV), a non-pathogenic fish virus with remarkable similarities to HEV, as a potential surrogate for HEV and established an antiviral assay against this virus using the Chinook salmon embryo (CHSE-214) cell line. Ribavirin and the respective trout interferon were found to efficiently inhibit CTV replication. Other known broad-spectrum inhibitors of RNA virus replication such as the nucleoside analog 2′-C-methylcytidine resulted only in a moderate antiviral activity. In its natural fish host, CTV levels largely fluctuate during the reproductive cycle with the virus detected mainly during spawning. We wondered whether this aspect of CTV infection may serve as a surrogate model for the peculiar pathogenesis of HEV in pregnant women. To that end the effect of three sex steroids on in vitro CTV replication was evaluated. Whereas progesterone resulted in marked inhibition of virus replication, testosterone and 17β-estradiol stimulated viral growth. Our data thus indicate that CTV may serve as a surrogate model for HEV, both for antiviral experiments and studies on the replication biology of the Hepeviridae.

  16. Experimental and numerical studies of burning velocities and kinetic modeling for practical and surrogate fuels

    Science.gov (United States)

    Zhao, Zhenwei

    To help understand the fuel oxidation process in practical combustion environments, laminar flame speeds and high temperature chemical kinetic models were studied for several practical fuels and "surrogate" fuels, such as propane, dimethyl ether (DME), and primary reference fuel (PRF) mixtures, gasoline and n-decane. The PIV system developed for the present work is described. The general principles for PIV measurements are outlined and the specific considerations are also reported. Laminar flame speeds were determined for propane/air over a range of equivalence ratios at initial temperatures of 298 K, 500 K and 650 K and atmospheric pressure. Several data sets for propane/air laminar flame speeds with N 2 dilution are also reported. These results are compared to the literature data collected at the same conditions. The propane flame speed is also numerically calculated with a detailed kinetic model and multi-component diffusion, including Soret effects. This thesis also presents experimentally determined laminar flame speeds for primary reference fuel (PRF) mixtures of n-heptane/iso-octane and real gasoline fuel at different initial temperatures and at atmospheric pressure. Nitrogen dilution effects on the laminar flame speed are also studied for selected equivalence ratios at the same conditions. A minimization of a detailed kinetic model for PRF mixtures at laminar flame speed conditions was performed and the measured flame speeds were compared with numerical predictions using this model. The measured laminar flame speeds of n-decane/air mixtures at 500 K and at atmospheric pressure with and without dilution were determined. The measured flame speeds are significantly different from those predicted using existing published kinetic models, including a model validated previously against high temperature data from flow reactor, jet-stirred reactor, shock tube ignition delay, and burner stabilized flame experiments. A significant update of this model is described which

  17. An R package for fitting age, period and cohort models

    Directory of Open Access Journals (Sweden)

    Adriano Decarli

    2014-11-01

    Full Text Available In this paper we present the R implementation of a GLIM macro which fits age-period-cohort models following Osmond and Gardner. In addition to the estimates of the corresponding model, owing to the programming capabilities of R as an object-oriented language, methods for printing, plotting and summarizing the results are provided. Furthermore, the researcher has full access to the output of the main function (apc), which returns all the models fitted within the function. It is thus possible to critically evaluate the goodness of fit of the resulting model.

  18. Robust discriminative response map fitting with constrained local models

    NARCIS (Netherlands)

    Asthana, Akshay; Zafeiriou, Stefanos; Cheng, Shiyang; Pantic, Maja

    2013-01-01

    We present a novel discriminative regression-based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, which shows impressive performance in the generic face fitting scenario. The motivation behind this approach is that…

  19. SU-E-J-73: Generation of Volumetric Images with a Respiratory Motion Model Based On An External Surrogate Signal

    Energy Technology Data Exchange (ETDEWEB)

    Hurwitz, M; Williams, C; Mishra, P; Dhou, S; Lewis, J [Brigham and Women's Hospital, Dana-Farber Cancer Center, Harvard Medical School, Boston, MA (United States)]

    2014-06-01

    Purpose: Respiratory motion during radiotherapy treatment can differ significantly from motion observed during imaging for treatment planning. Our goal is to use an initial 4DCT scan and the trace of an external surrogate marker to generate 3D images of patient anatomy during treatment. Methods: Deformable image registration is performed on images from an initial 4DCT scan. The deformation vectors are used to develop a patient-specific linear relationship between the motion of each voxel and the trajectory of an external surrogate signal. Correlations in motion are taken into account with principal component analysis, reducing the number of free parameters. This model is tested with digital phantoms reproducing the breathing patterns of ten measured patient tumor trajectories, using five seconds of data to develop the model and the subsequent thirty seconds to test its predictions. The model is also tested with a breathing physical anthropomorphic phantom programmed to reproduce a patient breathing pattern. Results: The error (mean absolute, 95th percentile) over 30 seconds in the predicted tumor centroid position ranged from (0.8, 1.3) mm to (2.2, 4.3) mm for the ten patient breathing patterns. The model reproduced changes in both phase and amplitude of the breathing pattern. Agreement between prediction and truth over the entire image was confirmed by assessing the global voxel intensity RMS error. In the physical phantom, the error in the tumor centroid position was less than 1 mm for all images. Conclusion: We are able to reconstruct 3D images of patient anatomy with a model correlating internal respiratory motion with motion of an external surrogate marker, reproducing the expected tumor centroid position with an average accuracy of 1.4 mm. The images generated by this model could be used to improve dose calculations for treatment planning and delivered dose estimates. This work was partially funded by a research grant from Varian Medical Systems.
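
    The correlation model described above (deformation fields reduced by principal component analysis and driven linearly by an external surrogate trace) can be sketched in a few lines. This is an illustrative reconstruction on toy data, not the authors' implementation; all function names and the synthetic deformations are assumptions.

    ```python
    import numpy as np

    def fit_motion_model(deformations, surrogate, n_components=2):
        """Fit a PCA-reduced linear model mapping an external surrogate
        signal to internal deformation fields (one row per time point)."""
        mean = deformations.mean(axis=0)
        centered = deformations - mean
        # Principal components of the deformation fields
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        comps = vt[:n_components]                  # (k, n_voxels)
        scores = centered @ comps.T                # (T, k)
        # Linear fit per component: score_j(t) ~ a_j * s(t) + b_j
        A = np.vstack([surrogate, np.ones_like(surrogate)]).T
        coef, *_ = np.linalg.lstsq(A, scores, rcond=None)
        return mean, comps, coef

    def predict_deformation(model, s):
        """Reconstruct a full deformation field from one surrogate value."""
        mean, comps, coef = model
        scores = np.array([s, 1.0]) @ coef         # (k,)
        return mean + scores @ comps
    ```

    Restricting the fit to a few principal components is what keeps the number of free parameters small, as the abstract notes.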

  20. Improving Mixed Variable Optimization of Computational and Model Parameters Using Multiple Surrogate Functions

    Science.gov (United States)

    2008-03-01

    Mathematics and Theoretical Physics, Cambridge University, August 1992. 23. Booker, A. J., J. E. Dennis, Jr., P. D. Frank, D. B. Serafini, V. Torczon, and M...J., J. E. Dennis, Jr., P. D. Frank, D. W. Moore, and D. B. Serafini. Managing Surrogate Objectives to Optimize a Helicopter Rotor Design - Further...L. A., Jr., and H. Miura. “Approximation Concepts for Efficient Structural Synthesis”. Technical Report CR-2552, NASA, 1976. 78. Serafini, D. B. A

  1. Generation of Comprehensive Surrogate Kinetic Models and Validation Databases for Simulating Large Molecular Weight Hydrocarbon Fuels

    Science.gov (United States)

    2012-10-25

    composed of highly isomerized paraffinic kerosene (denoted IPK). Comparisons of a 2nd generation surrogate formulated to match all four of the above...and dimethyl alkanes was emulated using mixtures of n-dodecane and iso-octane that replicated its combustion property targets, demonstrating that...methyl heptane was shown to replicate the global combustion properties of the weakly branched isomer, further supporting that distinct functional

  2. Optimizing water resources management in large river basins with integrated surface water-groundwater modeling: A surrogate-based approach

    Science.gov (United States)

    Wu, Bin; Zheng, Yi; Wu, Xin; Tian, Yong; Han, Feng; Liu, Jie; Zheng, Chunmiao

    2015-04-01

    Integrated surface water-groundwater modeling can provide a comprehensive and coherent understanding of the basin-scale water cycle, but its high computational cost has impeded its application in real-world management. This study developed a new surrogate-based approach, SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), to incorporate integrated modeling into water management optimization. Its applicability and advantages were evaluated and validated through an optimization study of the conjunctive use of surface water (SW) and groundwater (GW) for irrigation in a semiarid region in northwest China. GSFLOW, an integrated SW-GW model developed by the USGS, was employed. The study results show that, due to the strong and complicated SW-GW interactions, basin-scale water saving could be achieved by spatially optimizing the ratios of groundwater use in different irrigation districts. The water-saving potential essentially stems from the reduction of nonbeneficial evapotranspiration from the aqueduct system and shallow groundwater, and its magnitude largely depends on both water management schemes and hydrological conditions. Important implications for water resources management in general include: first, environmental flow regulation needs to take into account interannual variation of hydrological conditions, as well as the spatial complexity of SW-GW interactions; and second, to resolve water use conflicts between upstream and downstream users, a system approach is highly desired to reflect ecological, economic, and social concerns in water management decisions. Overall, this study highlights that surrogate-based approaches like SOIM represent a promising solution for filling the gap between complex environmental modeling and real-world management decision-making.
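
    The generic surrogate-based optimization loop that approaches like SOIM instantiate (sample the expensive model, fit a cheap surrogate, minimize the surrogate, evaluate the true model at the candidate, refine) can be sketched as follows. The quadratic surrogate and the one-dimensional `expensive_model` stand-in are illustrative assumptions; the actual study couples a management optimizer to the GSFLOW simulator.

    ```python
    import numpy as np

    def expensive_model(x):
        # Stand-in for a costly integrated SW-GW simulation (hypothetical)
        return (x - 2.0) ** 2 + 1.0

    def surrogate_optimize(f, lo, hi, n_init=5, n_iter=10):
        """Surrogate-based optimization loop: fit a cheap quadratic
        surrogate to expensive samples, minimize it on a grid, evaluate
        the true model at the candidate, and refine the surrogate."""
        xs = list(np.linspace(lo, hi, n_init))
        ys = [f(x) for x in xs]
        for _ in range(n_iter):
            coeffs = np.polyfit(xs, ys, deg=2)        # cheap surrogate
            grid = np.linspace(lo, hi, 1001)
            cand = grid[np.argmin(np.polyval(coeffs, grid))]
            xs.append(cand)
            ys.append(f(cand))                         # one expensive call
        return xs[int(np.argmin(ys))]
    ```

    The point of the pattern is the budget: the expensive model is called only n_init + n_iter times, while the surrogate absorbs the thousands of evaluations the optimizer needs.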

  3. Bayesian item fit analysis for unidimensional item response theory models.

    Science.gov (United States)

    Sinharay, Sandip

    2006-11-01

    Assessing item fit for unidimensional item response theory models for dichotomous items has always been an issue of enormous interest, but there exists no unanimously agreed item fit diagnostic for these models, and hence there is room for further investigation of the area. This paper employs the posterior predictive model-checking method, a popular Bayesian model-checking tool, to examine item fit for the above-mentioned models. An item fit plot, comparing the observed and predicted proportion-correct scores of examinees with different raw scores, is suggested. This paper also suggests how to obtain posterior predictive p-values (which are natural Bayesian p-values) for the item fit statistics of Orlando and Thissen that summarize numerically the information in the above-mentioned item fit plots. A number of simulation studies and a real data application demonstrate the effectiveness of the suggested item fit diagnostics. The suggested techniques seem to have adequate power and reasonable Type I error rate, and psychometricians will find them promising.
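
    The posterior predictive p-value machinery mentioned above is generic: simulate replicated data sets from posterior draws, compute a discrepancy measure for each, and report the fraction at least as extreme as the observed value. A minimal sketch, using a single Bernoulli item as an illustrative stand-in for a fitted IRT model (this is not the Orlando-Thissen statistic):

    ```python
    import numpy as np

    def ppp_value(observed, posterior_draws, simulate, discrepancy):
        """Posterior predictive p-value: the fraction of replicated data
        sets whose discrepancy is at least as extreme as the observed one."""
        t_obs = discrepancy(observed)
        t_rep = np.array([discrepancy(simulate(p)) for p in posterior_draws])
        return float(np.mean(t_rep >= t_obs))

    # Toy check: one dichotomous item answered by 200 examinees, modeled
    # as a Bernoulli success probability with a conjugate Beta posterior.
    rng = np.random.default_rng(42)
    obs = rng.binomial(1, 0.6, size=200)
    k, n = obs.sum(), obs.size
    draws = rng.beta(1 + k, 1 + n - k, size=500)        # posterior draws
    simulate = lambda p: rng.binomial(1, p, size=n)     # replicated data
    ppp = ppp_value(obs, draws, simulate, np.mean)
    ```

    For a well-specified model the p-value should be unremarkable (near 0.5); extreme values near 0 or 1 flag misfit.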

  4. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    Science.gov (United States)

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564
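
    The regression models referred to above, accounting for single and pairwise mutation effects, amount to a least-squares fit on a design matrix of main-effect and interaction columns. A minimal sketch under the assumption of binary (presence/absence) mutations; the function names and toy landscape are illustrative:

    ```python
    import numpy as np

    def design_matrix(genotypes):
        """Columns: intercept, one term per site, and all pairwise products."""
        g = np.asarray(genotypes, dtype=float)
        n, L = g.shape
        pairs = [g[:, i] * g[:, j] for i in range(L) for j in range(i + 1, L)]
        return np.column_stack([np.ones(n), g] + pairs)

    def fit_landscape(genotypes, fitness):
        """Least-squares fit of single and pairwise mutation effects."""
        X = design_matrix(genotypes)
        beta, *_ = np.linalg.lstsq(X, fitness, rcond=None)
        return beta

    def predict(genotypes, beta):
        return design_matrix(genotypes) @ beta
    ```

    The sampling-regime question studied in the paper is then which rows of the full genotype space enter `fit_landscape`.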

  6. Fitting polytomous Rasch models in SAS

    DEFF Research Database (Denmark)

    Christensen, Karl Bang

    2006-01-01

    The item parameters of a polytomous Rasch model can be estimated using marginal and conditional approaches. This paper describes how this can be done in SAS (V8.2) for three item parameter estimation procedures: marginal maximum likelihood estimation, conditional maximum likelihood estimation, and pairwise conditional estimation. The use of the procedures for extensions of the Rasch model is also discussed. The accuracy of the methods is evaluated using a simulation study.

  7. FITTING PHOTOIONIZATION MODELS TO PLANETARY NEBULAE

    Directory of Open Access Journals (Sweden)

    J. Bohigas

    2009-01-01

    Full Text Available Good to excellent photoionization models based on the Cloudy code were obtained for 13 out of 19 spectra of planetary nebulae. The two most important assumptions are that the photoionizing continuum is a Rauch model star, with gravity set by the condition that the stellar mass must be 1 M⊙, and that density is constant and determined from the observed [S II] 6717/6731 ratio. The temperature and luminosity of the central star, the inner radius of the nebula and the abundance of carbon are treated as free parameters in each model run, in order to obtain the best possible fit to the relative intensities of He II 4686, [O III] 5007 and [N II] 6584. Observed and modeled nebular temperatures derived from [N II] (6548+6584)/5755 agree within 10%, but models usually underestimate temperatures found from [O III] (4959+5007)/4363, more so when the slit does not cover the in-depth extent of the ionized region. Helium, nitrogen, oxygen, neon, sulfur and argon model abundances are uncertain at the 15%, 15%, 10%, 7%, 30% and 7% level. It is shown that the neon abundance in PNe has been consistently overestimated, and an alternative ionization correction factor is recommended.

  8. An Algorithm for Optimally Fitting a Wiener Model

    Directory of Open Access Journals (Sweden)

    Lucas P. Beverlin

    2011-01-01

    Full Text Available The purpose of this work is to present a new methodology for fitting Wiener networks to datasets with a large number of variables. Wiener networks have the ability to model a wide range of data types, and their structures can yield parameters with phenomenological meaning. There are several challenges to fitting such a model: model stiffness, the nonlinear nature of a Wiener network, possible overfitting, and the large number of parameters inherent with large input sets. This work describes a methodology to overcome these challenges by using several iterative algorithms under supervised learning and by fitting subsets of the parameters at a time. This methodology is applied to Wiener networks that are used to predict blood glucose concentrations. For models fit to four subjects with this methodology, the predictions on validation sets yielded a higher correlation between observed and predicted values than other algorithms, including the Gauss-Newton and Levenberg-Marquardt algorithms.

  9. Predictive models for population performance on real biological fitness landscapes.

    Science.gov (United States)

    Rowe, William; Wedge, David C; Platt, Mark; Kell, Douglas B; Knowles, Joshua

    2010-09-01

    Directed evolution, in addition to its principal application of obtaining novel biomolecules, offers significant potential as a vehicle for obtaining useful information about the topologies of biomolecular fitness landscapes. In this article, we make use of a special type of model of fitness landscapes, based on finite state machines, which can be inferred from directed evolution experiments. Importantly, the model is constructed only from the fitness data and phylogeny, not sequence or structural information, which is often absent. The model, called a landscape state machine (LSM), has already been used successfully in the evolutionary computation literature to model the landscapes of artificial optimization problems. Here, we use the method for the first time to simulate a biological fitness landscape based on experimental evaluation. We demonstrate in this study that LSMs are capable not only of representing the structure of model fitness landscapes such as NK-landscapes, but also the fitness landscape of real DNA oligomers binding to a protein (allophycocyanin), data we derived from experimental evaluations on microarrays. The LSMs prove adept at modelling the progress of evolution as a function of various controlling parameters, as validated by evaluations on the real landscapes. Specifically, the ability of the model to 'predict' optimal mutation rates and other parameters of the evolution is demonstrated. A modification to the standard LSM also proves accurate at predicting the effects of recombination on the evolution.

  10. Relative and Absolute Fit Evaluation in Cognitive Diagnosis Modeling

    Science.gov (United States)

    Chen, Jinsong; de la Torre, Jimmy; Zhang, Zao

    2013-01-01

    As with any psychometric model, the validity of inferences from cognitive diagnosis models (CDMs) determines the extent to which these models can be useful. For inferences from CDMs to be valid, it is crucial that the fit of the model to the data is ascertained. Based on a simulation study, this study investigated the sensitivity of various fit…

  11. Fitting ARMA Time Series by Structural Equation Models.

    Science.gov (United States)

    van Buuren, Stef

    1997-01-01

    This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)

  12. Improving variable-fidelity surrogate modeling via gradient-enhanced kriging and a generalized hybrid bridge function

    DEFF Research Database (Denmark)

    Han, Zhong Hua; Goertz, Stefan; Zimmermann, Ralf

    2013-01-01

    Variable-fidelity surrogate modeling offers an efficient way to generate aerodynamic data for aero-loads prediction based on a set of CFD methods with varying degrees of fidelity and computational expense. In this paper, direct Gradient-Enhanced Kriging (GEK) and a newly developed Generalized Hybrid Bridge Function (GHBF)... for the aerodynamic coefficients and drag polar of an RAE 2822 airfoil. It is shown that the gradient-enhanced GHBF proposed in this paper is very promising and can be used to significantly improve the efficiency, accuracy and robustness of VFM in the context of aero-loads prediction.

  13. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    Science.gov (United States)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters, and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we show that when using the Univariate Poisson Model (UPM) estimates as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.

  14. Surrogate data modeling the relationship between high frequency amplitudes and Higuchi fractal dimension of EEG signals in anesthetized rats.

    Science.gov (United States)

    Spasic, Sladjana; Kalauzi, Aleksandar; Kesic, Srdjan; Obradovic, Milica; Saponjic, Jasna

    2011-11-21

    We used spectral analysis and the Higuchi fractal dimension (FD) to correlate the EEG spectral characteristics of the sensorimotor cortex, hippocampus, and pons with their corresponding EEG signal complexities in anesthetized rats. We explored the quantitative relationship between the mean FDs and EEG wide-range high-frequency (8-50 Hz) activity during ketamine/xylazine versus nembutal anesthesia at surgical plane. Using FD we detected distinct inter-structure complexity patterns and uncovered for the first time that the anesthetized state at surgical plane, although polygraphically and behaviorally defined as equivalent under the two anesthetic regimens during the experiment, is not the same with respect to the degree of neuronal activity (the degree of generalized neuronal inhibition achieved) at different brain levels. Using the correlation of each brain structure's EEG spectral characteristics with its corresponding FDs, together with surrogate data modeling, we determined which particular frequency bands contribute to EEG complexity in ketamine/xylazine versus nembutal anesthesia. In this study we show that the quantitative relationship between higher-frequency EEG amplitude and EEG complexity is best modeled by surrogate data as a third-order polynomial. On the basis of our EEG amplitude/EEG complexity relationship model, and the evidenced spectral differences in ketamine versus nembutal anesthesia, we show that the higher amplitudes of sigma, beta, and gamma frequencies in ketamine anesthesia yield higher FDs.
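
    The Higuchi fractal dimension used here can be computed directly from a signal: average the normalized curve lengths of decimated sub-series over a range of scales k, then take the slope of log L(k) against log(1/k). A minimal sketch (the normalization follows Higuchi's 1988 scheme; `k_max` is a user choice, not a value from the paper):

    ```python
    import numpy as np

    def higuchi_fd(x, k_max=8):
        """Higuchi fractal dimension of a 1-D signal. A straight line
        gives FD ~ 1; white noise gives FD ~ 2."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        ks = np.arange(1, k_max + 1)
        lengths = []
        for k in ks:
            Lk = []
            for m in range(k):
                idx = np.arange(m, N, k)          # decimated sub-series
                if len(idx) < 2:
                    continue
                # normalized curve length of the sub-series
                dist = np.abs(np.diff(x[idx])).sum()
                norm = (N - 1) / ((len(idx) - 1) * k)
                Lk.append(dist * norm / k)
            lengths.append(np.mean(Lk))
        # FD is the slope of log L(k) versus log(1/k)
        slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
        return slope
    ```

    The EEG analysis above applies exactly this kind of estimator per epoch and brain structure, then relates the resulting FDs to band amplitudes.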

  15. Surrogate modeling-based optimization for the integration of static and dynamic data into a reservoir description

    Energy Technology Data Exchange (ETDEWEB)

    Queipo, Nestor V.; Pintos, Salvador; Rincon, Nestor; Contreras, Nemrod; Colmenares, Juan [Applied Computing Institute, Faculty of Engineering, University of Zulia, Zulia (Venezuela)

    2002-08-01

    This paper presents a solution methodology for the inverse problem of estimating the distributions of permeability and porosity in heterogeneous and multiphase petroleum reservoirs by matching the available static and dynamic data. The solution methodology includes the construction of a 'fast surrogate' of an objective function whose evaluation involves the execution of a time-consuming mathematical model (i.e., a reservoir numerical simulator); the surrogate is based on neural networks, DACE (design and analysis of computer experiments) modeling, and adaptive sampling. Using adaptive sampling, promising areas are searched considering the information provided by the surrogate model and the expected value of the errors. The proposed methodology provides a global optimization method, hence avoiding the potential convergence to a local minimum of the objective function exhibited by the commonly used Gauss-Newton methods. Furthermore, it exhibits an affordable computational cost, is amenable to parallel processing, and is expected to outperform other general-purpose global optimization methods such as simulated annealing and genetic algorithms. The methodology is evaluated using two case studies of increasing complexity (from 6 to 23 independent parameters). From the results, it is concluded that the methodology can be used effectively and efficiently for reservoir characterization purposes. In addition, the optimization approach holds promise for the optimization of objective functions involving the execution of computationally expensive reservoir numerical simulators, such as those found not only in reservoir characterization but also in other areas of petroleum engineering (e.g., EOR optimization).

  16. A Kriging surrogate model coupled in simulation-optimization approach for identifying release history of groundwater sources

    Science.gov (United States)

    Zhao, Ying; Lu, Wenxi; Xiao, Chuanning

    2016-02-01

    As the incidence of groundwater pollution increases, many methods for identifying the source characteristics of pollutants are being developed. In this study, a simulation-optimization approach was applied to determine the duration and magnitude of pollutant sources. Such problems are time consuming because the optimization model requires thousands of runs of the simulation model. To address this challenge, a Kriging surrogate model was proposed to increase computational efficiency. The accuracy, time consumption, and robustness of the Kriging model were tested on both homogeneous and non-uniform media, as well as under steady-state and transient flow and transport conditions. The results of three hypothetical cases demonstrate that the Kriging model can solve groundwater contaminant source identification problems, such as those arising at field sites, with a high degree of accuracy and short computation times, and is thus very robust.
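
    A Kriging surrogate of the kind used here is, in its simplest form, Gaussian-process interpolation with a stationary kernel. A minimal one-dimensional sketch with a fixed RBF kernel and a small nugget for numerical stability (the paper's actual surrogate construction, trend treatment, and hyperparameter fitting are not reproduced; the class name is an illustrative assumption):

    ```python
    import numpy as np

    def rbf(a, b, length=1.0):
        """Squared-exponential (RBF) kernel between two 1-D point sets."""
        d2 = (a[:, None] - b[None, :]) ** 2
        return np.exp(-0.5 * d2 / length**2)

    class Kriging1D:
        """Minimal kriging-style surrogate: zero-mean GP regression with
        an RBF kernel. Predictions interpolate the training samples."""
        def __init__(self, length=1.0, nugget=1e-10):
            self.length, self.nugget = length, nugget

        def fit(self, X, y):
            K = rbf(X, X, self.length) + self.nugget * np.eye(len(X))
            self.X = X
            self.alpha = np.linalg.solve(K, y)     # kernel weights
            return self

        def predict(self, Xs):
            return rbf(Xs, self.X, self.length) @ self.alpha
    ```

    In the simulation-optimization setting, the expensive transport simulator supplies the (X, y) training pairs and the optimizer queries `predict` thousands of times at negligible cost.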

  17. Time-varying surrogate data to assess nonlinearity in nonstationary time series: application to heart rate variability.

    Science.gov (United States)

    Faes, Luca; Zhao, He; Chon, Ki H; Nollo, Giandomenico

    2009-03-01

    We propose a method to extend to time-varying (TV) systems the procedure for generating typical surrogate time series, in order to test the presence of nonlinear dynamics in potentially nonstationary signals. The method is based on fitting a TV autoregressive (AR) model to the original series and then regressing the model coefficients with random replacements of the model residuals to generate TV AR surrogate series. The proposed surrogate series were used in combination with a TV sample entropy (SE) discriminating statistic to assess nonlinearity in both simulated and experimental time series, in comparison with traditional time-invariant (TIV) surrogates combined with the TIV SE discriminating statistic. Analysis of simulated time series showed that using TIV surrogates, linear nonstationary time series may be erroneously regarded as nonlinear and weak TV nonlinearities may remain unrevealed, while the use of TV AR surrogates markedly increases the probability of a correct interpretation. Application to short (500 beats) heart rate variability (HRV) time series recorded at rest (R), after head-up tilt (T), and during paced breathing (PB) showed: 1) modifications of the SE statistic that were well interpretable with the known cardiovascular physiology; 2) significant contribution of nonlinear dynamics to HRV in all conditions, with significant increase during PB at 0.2 Hz respiration rate; and 3) a disagreement between TV AR surrogates and TIV surrogates in about a quarter of the series, suggesting that nonstationarity may affect HRV recordings and bias the outcome of the traditional surrogate-based nonlinearity test.
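
    The surrogate-generation step described above (fit an AR model, then re-drive it with randomly shuffled residuals) can be sketched in its simpler time-invariant form. The paper's contribution is the time-varying extension, which this sketch does not implement; the function name is an illustrative assumption.

    ```python
    import numpy as np

    def ar_surrogate(x, order=2, rng=None):
        """Time-invariant AR surrogate: fit AR(p) by least squares, then
        regenerate the series by driving the fitted model with a random
        permutation of its own residuals."""
        if rng is None:
            rng = np.random.default_rng()
        x = np.asarray(x, dtype=float)
        p, n = order, len(x)
        # Embedding matrix: row t holds [x[t-1], ..., x[t-p]]
        X = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
        y = x[p:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        shuffled = rng.permutation(resid)
        s = list(x[:p])                    # seed with the original start
        for e in shuffled:
            s.append(np.dot(coef, s[-1:-p - 1:-1]) + e)
        return np.array(s)
    ```

    Shuffling the residuals destroys any nonlinear structure while preserving the linear (AR) correlation, which is exactly the null hypothesis the surrogate test needs.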

  18. Surrogate Model Application to the Identification of Optimal Groundwater Exploitation Scheme Based on Regression Kriging Method—A Case Study of Western Jilin Province

    Directory of Open Access Journals (Sweden)

    Yongkai An

    2015-07-01

    Full Text Available This paper introduces a surrogate model to identify an optimal exploitation scheme, with the western Jilin province selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu county and Qian Gorlos county respectively, so as to supply water to Daan county. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region for the input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, which is a high approximation accuracy. A contrast between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours, while the latter needs 25 days. The above results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process, but also maintain high computational accuracy. This can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately.
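
    Latin Hypercube Sampling, used above to collect training data for the surrogate, stratifies each input dimension into n equal bins and places exactly one sample in every bin of every dimension. A minimal sketch on the unit hypercube (scaling to the actual feasible region of the pumping rates is left out):

    ```python
    import numpy as np

    def latin_hypercube(n_samples, n_dims, rng=None):
        """Latin Hypercube Sampling on [0, 1)^d: each dimension is split
        into n_samples strata and every stratum is sampled exactly once."""
        if rng is None:
            rng = np.random.default_rng()
        # Random position within each stratum
        u = rng.random((n_samples, n_dims))
        # Independent random stratum ordering per dimension
        strata = np.array([rng.permutation(n_samples)
                           for _ in range(n_dims)]).T
        return (strata + u) / n_samples
    ```

    Compared with plain random sampling, this guarantees even marginal coverage of each input, which is why LHS is the standard design for training surrogate models.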

  20. Automatic fitting of spiking neuron models to electrophysiological recordings

    Directory of Open Access Journals (Sweden)

    Cyrille Rossant

    2010-03-01

    Full Text Available Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present, it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.

  1. Scalability of surrogate-assisted multi-objective optimization of antenna structures exploiting variable-fidelity electromagnetic simulation models

    Science.gov (United States)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2016-10-01

    Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.

  2. HDFITS: porting the FITS data model to HDF5

    CERN Document Server

    Price, D C; Greenhill, L J

    2015-01-01

    The FITS (Flexible Image Transport System) data format has been the de facto data format for astronomy-related data products since its inception in the late 1970s. While the FITS file format is widely supported, it lacks many of the features of more modern data serialization formats, such as the Hierarchical Data Format (HDF5). The HDF5 file format offers considerable advantages over FITS, such as improved I/O speed and compression, but has yet to gain widespread adoption within astronomy. One of the major obstacles is that HDF5 is not well supported by data reduction software packages and image viewers. Here, we present a comparison of FITS and HDF5 as formats for the storage of astronomy datasets. We show that the underlying data model of FITS can be ported to HDF5 in a straightforward manner, and that by doing so the advantages of the HDF5 file format can be leveraged immediately. In addition, we present a software tool, fits2hdf, for converting between FITS and a new `HDFITS' format, where data are stored in HDF5 in...

  3. A new surrogate modeling technique combining Kriging and polynomial chaos expansions - Application to uncertainty analysis in computational dosimetry

    Science.gov (United States)

    Kersaudy, Pierric; Sudret, Bruno; Varsier, Nadège; Picon, Odile; Wiart, Joe

    2015-04-01

    In numerical dosimetry, the recent advances in high performance computing led to a strong reduction of the required computational time to assess the specific absorption rate (SAR) characterizing the human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can request several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performances of the LARS-Kriging-PC are compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to have better performances than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
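
    The leave-one-out selection step can be sketched in isolation. The example below is not the LARS-Kriging-PC method itself; it only illustrates choosing the size of a regression basis (here, a one-dimensional polynomial basis on synthetic data) via the closed-form leave-one-out error of linear least squares:

```python
import numpy as np

def loo_rmse(Phi, y):
    """Leave-one-out RMSE of a linear least-squares fit, using the
    closed form resid_i / (1 - H_ii) so no refitting is needed."""
    beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    resid = y - Phi @ beta
    H = Phi @ np.linalg.pinv(Phi.T @ Phi) @ Phi.T   # hat matrix
    return np.sqrt(np.mean((resid / (1.0 - np.diag(H))) ** 2))

def select_degree(x, y, max_degree=8):
    """Polynomial degree minimising the leave-one-out error."""
    errs = [loo_rmse(np.vander(x, p + 1), y) for p in range(1, max_degree + 1)]
    return 1 + int(np.argmin(errs))

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 40)
y = 1.0 + 2.0 * x - 3.0 * x**3 + 0.01 * rng.standard_normal(40)  # cubic + noise
degree = select_degree(x, y)
```

    The same cross-validation criterion carries over to choosing which chaos polynomials enter the deterministic part of a Kriging model.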

  4. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com [Orange Labs, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée (France); Sudret, Bruno [ETH Zürich, Chair of Risk, Safety and Uncertainty Quantification, Stefano-Franscini-Platz 5, 8093 Zürich (Switzerland); Varsier, Nadège [Orange Labs, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); Picon, Odile [ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée (France); Wiart, Joe [Orange Labs, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France)

    2015-04-01

    In numerical dosimetry, the recent advances in high performance computing led to a strong reduction of the required computational time to assess the specific absorption rate (SAR) characterizing the human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can request several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performances of the LARS-Kriging-PC are compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to have better performances than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.

  5. Fitting Equilibrium Search Models to Labour Market Data

    DEFF Research Database (Denmark)

    Bowlus, Audra J.; Kiefer, Nicholas M.; Neumann, George R.

    1996-01-01

    Specification and estimation of a Burdett-Mortensen type equilibrium search model is considered. The estimation is nonstandard. An estimation strategy asymptotically equivalent to maximum likelihood is proposed and applied. The results indicate that specifications with a small number of productivity types fit the data well compared to the homogeneous model.

  6. Inactivation modeling of human enteric virus surrogates, MS2, Qβ, and ΦX174, in water using UVC-LEDs, a novel disinfecting system.

    Science.gov (United States)

    Kim, Do-Kyun; Kim, Soo-Ji; Kang, Dong-Hyun

    2017-01-01

    In order to assure the microbial safety of drinking water, UVC-LED treatment has emerged as a possible technology to replace the use of conventional low pressure (LP) mercury vapor UV lamps. In this investigation, inactivation of Human Enteric Virus (HuEV) surrogates with UVC-LEDs was investigated in a water disinfection system, and kinetic model equations were applied to depict the surviving infectivities of the viruses. MS2, Qβ, and ΦX174 bacteriophages were inoculated into sterile distilled water (DW) and irradiated with UVC-LED printed circuit boards (PCBs) (266 nm and 279 nm) or conventional LP lamps. Infectivities of bacteriophages were effectively reduced by up to 7 log after 9 mJ/cm² of treatment for MS2 and Qβ, and 1 mJ/cm² for ΦX174. UVC-LEDs showed a superior viral inactivation effect compared to conventional LP lamps at the same dose (1 mJ/cm²). Non-log-linear plot patterns were observed, so the Weibull, Biphasic, Log linear-tail, and Weibull-tail model equations were used to fit the virus survival curves. For MS2 and Qβ, the Weibull and Biphasic models fit well, with R² values of approximately 0.97-0.99, and the Weibull-tail equation accurately described survival of ΦX174. The level of UV susceptibility among coliphages, measured by the inactivation rate constant k, was statistically different (ΦX174 (ssDNA) > MS2, Qβ (ssRNA)), indicating that sensitivity to UV was attributable to the viral genetic material. Copyright © 2016 Elsevier Ltd. All rights reserved.
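
    As an illustration of the Weibull fit used above, the model log10(N/N0) = -(d/δ)^p can be linearised and fitted by ordinary least squares. The sketch below uses synthetic survival data; the dose values and parameters are made up, not the paper's measurements:

```python
import numpy as np

def fit_weibull_survival(dose, log10_survival):
    """Fit log10(N/N0) = -(d/delta)^p by linearising:
    log(-log10 S) = p*log(d) - p*log(delta)."""
    y = np.log(-np.asarray(log10_survival, dtype=float))
    p, c = np.polyfit(np.log(dose), y, 1)
    return p, np.exp(-c / p)          # shape p, scale delta

# synthetic survival curve generated with p = 0.8, delta = 1.5 mJ/cm^2
dose = np.array([1.0, 2.0, 4.0, 6.0, 9.0])
log_surv = -(dose / 1.5) ** 0.8
p_hat, delta_hat = fit_weibull_survival(dose, log_surv)
```

    On real data one would fit by nonlinear least squares and compare models via R² or a similar criterion, as the authors do.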

  7. Kriging Surrogate Models for Predicting the Complex Eigenvalues of Mechanical Systems Subjected to Friction-Induced Vibration

    Directory of Open Access Journals (Sweden)

    E. Denimal

    2016-01-01

    Full Text Available This study focuses on kriging-based metamodeling for the prediction of parameter-dependent mode coupling instabilities. The high cost of the currently used parameter-dependent Complex Eigenvalue Analysis (CEA) has induced a growing need for alternative methods. Hence, this study investigates the capability of kriging metamodels to serve as a suitable alternative. To this end, kriging metamodels are proposed to predict the stability behavior of a four-degree-of-freedom mechanical system subjected to friction-induced vibrations. This system is considered under two configurations defining two stability behaviors with coalescence patterns of different complexities. The efficiency of kriging is then assessed on both configurations. In this framework, the proposed kriging surrogate approach includes a mode tracking method based on the Modal Assurance Criterion (MAC) in order to follow the physical modes of the mechanical system. Based on the numerical simulations, a comparison with the reference parameter-dependent CEA demonstrates that the proposed kriging surrogate model can provide efficient and reliable predictions of mode coupling instabilities with different complex patterns.
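
    A minimal kriging predictor is a few lines of linear algebra. The sketch below is a generic simple-kriging interpolator (zero mean, squared-exponential covariance) on a toy function, not the stability metamodel of the study, and the hyperparameters are fixed by hand rather than estimated:

```python
import numpy as np

def kriging_predict(X, y, X_new, length=1.0, sigma2=1.0, nugget=1e-10):
    """Simple kriging (zero mean) with a squared-exponential covariance;
    `nugget` regularises the covariance matrix."""
    def cov(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sigma2 * np.exp(-0.5 * d2 / length**2)
    K = cov(X, X) + nugget * np.eye(len(X))
    weights = np.linalg.solve(K, cov(X, X_new))
    return weights.T @ y

X_train = np.array([[0.0], [0.5], [1.0], [1.5], [2.0]])
y_train = np.sin(X_train[:, 0])
pred = kriging_predict(X_train, y_train, np.array([[0.75]]))
```

    With a negligible nugget the predictor reproduces the training points exactly, which is the property exploited when a kriging surrogate stands in for expensive eigenvalue analyses.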

  8. Curve Fitting And Interpolation Model Applied In Nonel Dosage Detection

    Directory of Open Access Journals (Sweden)

    Jiuling Li

    2013-06-01

    Full Text Available The curve fitting and interpolation models are applied to Nonel dosage detection in this paper to forecast the gray value of the continuous explosive in the Nonel tube. Traditional infrared equipment establishes a relationship between explosive dosage and light intensity, but its forecast accuracy is very low. Therefore, gray prediction models based on curve fitting and interpolation are framed separately, and the deviations of the different models are compared. Drawing on the features of the sample library, the cubic polynomial fitting curve, which has higher precision, is used to predict gray values, and 5 mg-28 mg Nonel gray values are calculated in MATLAB. Through the predicted values, the dosage detection operations are simplified, and the defect missing rate of the Nonel is reduced. Finally, the quality of the Nonel is improved.
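
    The cubic-polynomial fit at the core of the method is a one-liner in most environments. The sketch below uses NumPy instead of MATLAB, and the calibration data are synthetic, generated from a known cubic rather than taken from the paper's sample library:

```python
import numpy as np

# synthetic calibration data from a known cubic (coefficients are made up)
true_coeffs = np.array([-0.005, 0.1, 6.0, 80.0])   # highest power first
dosage = np.linspace(5.0, 28.0, 12)                # mg
gray = np.polyval(true_coeffs, dosage)

coeffs = np.polyfit(dosage, gray, 3)               # cubic least-squares fit
predict_gray = np.poly1d(coeffs)
g21 = predict_gray(21.0)                           # gray value for 21 mg
```

    Evaluating the fitted polynomial at an arbitrary dosage then replaces a table lookup, which is what simplifies the detection operations.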

  9. Effects of Sample Size, Estimation Methods, and Model Specification on Structural Equation Modeling Fit Indexes.

    Science.gov (United States)

    Fan, Xitao; Wang, Lin; Thompson, Bruce

    1999-01-01

    A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)

  10. Time-domain fitting of battery electrochemical impedance models

    Science.gov (United States)

    Alavi, S. M. M.; Birkl, C. R.; Howey, D. A.

    2015-08-01

    Electrochemical impedance spectroscopy (EIS) is an effective technique for diagnosing the behaviour of electrochemical devices such as batteries and fuel cells, usually by fitting data to an equivalent circuit model (ECM). The common approach in the laboratory is to measure the impedance spectrum of a cell in the frequency domain using a single sine sweep signal, then fit the ECM parameters in the frequency domain. This paper focuses instead on estimation of the ECM parameters directly from time-domain data. This may be advantageous for parameter estimation in practical applications such as automotive systems including battery-powered vehicles, where the data may be heavily corrupted by noise. The proposed methodology is based on the simplified refined instrumental variable for continuous-time fractional systems method ('srivcf'), provided by the Crone toolbox [1,2], combined with gradient-based optimisation to estimate the order of the fractional term in the ECM. The approach was tested first on synthetic data and then on real data measured from a 26650 lithium-ion iron phosphate cell with low-cost equipment. The resulting Nyquist plots from the time-domain fitted models match the impedance spectrum closely (much more accurately than when a Randles model is assumed), and the fitted parameters as separately determined through a laboratory potentiostat with frequency domain fitting match to within 13%.
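
    The idea of fitting circuit parameters directly to time-domain data can be illustrated with a deliberately simplified model: an integer-order R0 + (R1 parallel C1) circuit under a current step, rather than the fractional-order model and srivcf method used in the paper. All values are synthetic:

```python
import numpy as np

def fit_rc_step(t, v, i_amp, tau_grid):
    """Fit V(t) = I*R0 + I*R1*(1 - exp(-t/tau)) to a current-step response:
    linear least squares for (R0, R1) at each candidate tau, grid over tau."""
    best = None
    for tau in tau_grid:
        A = np.column_stack([i_amp * np.ones_like(t),
                             i_amp * (1.0 - np.exp(-t / tau))])
        sol, *_ = np.linalg.lstsq(A, v, rcond=None)
        err = np.sum((v - A @ sol) ** 2)
        if best is None or err < best[0]:
            best = (err, sol[0], sol[1], tau)
    return best[1:]                    # R0, R1, tau

t = np.linspace(0.0, 10.0, 200)                        # seconds
v = 2.0 * (0.05 + 0.03 * (1.0 - np.exp(-t / 1.2)))     # I=2 A, R0=50 mOhm, R1=30 mOhm
r0, r1, tau = fit_rc_step(t, v, 2.0, np.linspace(0.5, 2.0, 16))
```

    Separating the linear parameters (resistances) from the nonlinear one (the time constant) mirrors the structure of the instrumental-variable approach, which also alternates linear estimation with optimisation of the remaining nonlinear term.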

  11. Kompaneets Model Fitting of the Orion-Eridanus Superbubble

    CERN Document Server

    Pon, Andy; Bally, John; Heiles, Carl

    2014-01-01

    Winds and supernovae from OB associations create large cavities in the interstellar medium referred to as superbubbles. The Orion molecular clouds are the nearest high-mass star-forming region and have created a highly elongated, 20° × 45°, superbubble. We fit Kompaneets models to the Orion-Eridanus superbubble and find that a model where the Eridanus side of the superbubble is oriented away from the Sun provides a marginal fit. Because this model requires an unusually small scale height of 40 pc and has the superbubble inclined 35 degrees from the normal to the Galactic plane, we propose that this model should be treated as a general framework for modeling the Orion-Eridanus superbubble, with a secondary physical mechanism not included in the Kompaneets model required to fully account for the orientation and elongation of the superbubble.

  12. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Kunkun, E-mail: ktg@illinois.edu [The Center for Exascale Simulation of Plasma-Coupled Combustion (XPACC), University of Illinois at Urbana–Champaign, 1308 W Main St, Urbana, IL 61801 (United States); Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence (France); Congedo, Pietro M. [Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence (France); Abgrall, Rémi [Institut für Mathematik, Universität Zürich, Winterthurerstrasse 190, CH-8057 Zürich (Switzerland)

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  13. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    Science.gov (United States)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
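
    The stepwise-regression adaptivity (level 3 above) can be sketched in isolation. The code below is a generic greedy forward selection on a one-dimensional polynomial basis with synthetic data, not the sparse PDD machinery itself:

```python
import numpy as np

def forward_stepwise(Phi, y, k):
    """Greedily add the basis column that most reduces the residual
    sum of squares, until k columns are retained."""
    active = []
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in range(Phi.shape[1]):
            if j in active:
                continue
            cols = Phi[:, active + [j]]
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = np.sum((y - cols @ beta) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        active.append(best_j)
    return sorted(active)

rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, (60, 1))
Phi = np.hstack([x**p for p in range(6)])      # columns 1, x, ..., x^5
y = 4.0 * Phi[:, 1] - 2.0 * Phi[:, 3]          # sparse truth: only x and x^3
picked = forward_stepwise(Phi, y, k=2)
```

    Because only the retained columns enter each least-squares solve, the linear systems stay small throughout the adaptive procedure, which is the cost argument made above.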

  14. Ongoing Processes in a Fitness Network Model under Restricted Resources.

    Directory of Open Access Journals (Sweden)

    Takayuki Niizato

    Full Text Available In real networks, the resources that make up the nodes and edges are finite. This constraint poses a serious problem for network modeling, namely, reconciling robustness and efficiency, two goals that are generally in conflict with each other. In this study, we propose a new fitness-driven network model for finite resources. In our model, each individual has its own fitness, which it tries to increase. The main assumption in fitness-driven networks is that incomplete estimation of fitness results in a dynamically growing network. By taking these internal dynamics into account, nodes and edges emerge as a result of exchanges between finite resources. We show that our network model exhibits exponential in- and out-degree distributions and a power law distribution of edge weights. Furthermore, our network model resolves the trade-off between robustness and efficiency. Our results suggest that growing and anti-growing networks are the result of resolving the trade-off problem itself.

  15. Topological performance measures as surrogates for physical flow models for risk and vulnerability analysis for electric power systems.

    Science.gov (United States)

    LaRocca, Sarah; Johansson, Jonas; Hassel, Henrik; Guikema, Seth

    2015-04-01

    Critical infrastructure systems must be both robust and resilient in order to ensure the functioning of society. To improve the performance of such systems, we often use risk and vulnerability analysis to find and address system weaknesses. A critical component of such analyses is the ability to accurately determine the negative consequences of various types of failures in the system. Numerous mathematical and simulation models exist that can be used to this end. However, there are relatively few studies comparing the implications of using different modeling approaches in the context of comprehensive risk analysis of critical infrastructures. In this article, we suggest a classification of these models, which span from simple topologically-oriented models to advanced physical-flow-based models. Here, we focus on electric power systems and present a study aimed at understanding the tradeoffs between simplicity and fidelity in models used in the context of risk analysis. Specifically, the purpose of this article is to compare performance estimates achieved with a spectrum of approaches typically used for risk and vulnerability analysis of electric power systems and evaluate if more simplified topological measures can be combined using statistical methods to be used as a surrogate for physical flow models. The results of our work provide guidance as to appropriate models or combinations of models to use when analyzing large-scale critical infrastructure systems, where simulation times quickly become insurmountable when using more advanced models, severely limiting the extent of analyses that can be performed.

  16. [How to fit and interpret multilevel models using SPSS].

    Science.gov (United States)

    Pardo, Antonio; Ruiz, Miguel A; San Martín, Rafael

    2007-05-01

    Hierarchical or multilevel models are used to analyse data when cases belong to known groups and sample units are selected both at the individual level and at the group level. In this work, the multilevel models most commonly discussed in the statistical literature are described, explaining how to fit these models using the SPSS program (version 11 or later) and how to interpret the outcomes of the analysis. Five particular models are described, fitted, and interpreted: (1) one-way analysis of variance with random effects, (2) regression analysis with means-as-outcomes, (3) one-way analysis of covariance with random effects, (4) regression analysis with random coefficients, and (5) regression analysis with means- and slopes-as-outcomes. All models are explained, trying to make them understandable to researchers in the health and behavioural sciences.

  17. A neutrino model fit to the CMB power spectrum

    Science.gov (United States)

    Shanks, T.; Johnson, R. W. F.; Schewtschenko, J. A.; Whitbourn, J. R.

    2014-12-01

    The standard cosmological model, Λ cold dark matter (ΛCDM), provides an excellent fit to cosmic microwave background (CMB) data. However, the model has well-known problems. For example, the cosmological constant, Λ, is fine-tuned to 1 part in 10^100 and the CDM particle is not yet detected in the laboratory. Shanks previously investigated a model which assumed neither exotic particles nor a cosmological constant but instead postulated a low Hubble constant (H0) to allow a baryon density compatible with inflation and zero spatial curvature. However, recent Planck results make it more difficult to reconcile such a model with CMB power spectra. Here, we relax the previous assumptions to assess the effects of assuming three active neutrinos of mass ≈ 5 eV. If we assume a low H0 ≈ 45 km s^-1 Mpc^-1 then, compared to the previous purely baryonic model, we find a significantly improved fit to the first three peaks of the Planck power spectrum. Nevertheless, the goodness of fit is still significantly worse than for ΛCDM and would require appeal to unknown systematic effects for the fit ever to be considered acceptable. A further serious problem is that the amplitude of fluctuations is low (σ8 ≈ 0.2), making it difficult to form galaxies by the present day. This might then require seeds, perhaps from a primordial magnetic field, to be invoked for galaxy formation. These and other problems demonstrate the difficulties faced by models other than ΛCDM in fitting ever more precise cosmological data.

  18. Fuzzy Partition Models for Fitting a Set of Partitions.

    Science.gov (United States)

    Gordon, A. D.; Vichi, M.

    2001-01-01

    Describes methods for fitting a fuzzy consensus partition to a set of partitions of the same set of objects. Describes and illustrates three models defining median partitions and compares these methods to an alternative approach to obtaining a consensus fuzzy partition. Discusses interesting differences in the results. (SLD)

  19. Assessing fit in Bayesian models for spatial processes

    KAUST Repository

    Jun, M.

    2014-09-16

    © 2014 John Wiley & Sons, Ltd. Gaussian random fields are frequently used to model spatial and spatial-temporal data, particularly in geostatistical settings. As much of the attention of the statistics community has been focused on defining and estimating the mean and covariance functions of these processes, little effort has been devoted to developing goodness-of-fit tests to allow users to assess the models' adequacy. We describe a general goodness-of-fit test and related graphical diagnostics for assessing the fit of Bayesian Gaussian process models using pivotal discrepancy measures. Our method is applicable for both regularly and irregularly spaced observation locations on planar and spherical domains. The essential idea behind our method is to evaluate pivotal quantities defined for a realization of a Gaussian random field at parameter values drawn from the posterior distribution. Because the nominal distribution of the resulting pivotal discrepancy measures is known, it is possible to quantitatively assess model fit directly from the output of Markov chain Monte Carlo algorithms used to sample from the posterior distribution on the parameter space. We illustrate our method in a simulation study and in two applications.

  20. The Gold Medal Fitness Program: A Model for Teacher Change

    Science.gov (United States)

    Wright, Jan; Konza, Deslea; Hearne, Doug; Okely, Tony

    2008-01-01

    Background: Following the 2000 Sydney Olympics, the NSW Premier, Mr Bob Carr, launched a school-based initiative in NSW government primary schools called the "Gold Medal Fitness Program" to encourage children to be fitter and more active. The Program was introduced into schools through a model of professional development, "Quality…

  1. Application of modified vector fitting to grounding system modeling

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, D.; Camargo, M.; Herrera, J.; Torres, H. [National University of Colombia (Colombia). Research Program on Acquisition and Analysis of Signals - PAAS], Emails: dyjimeneza@unal.edu.co, mpcamargom@unal.edu.co; Vargas, M. [Siemens S.A. - Power Transmission and Distribution - Energy Services (Colombia)

    2007-07-01

    The transient behavior of grounding systems (GS) greatly influences the performance of electrical networks under fault conditions. This fact has led the authors to present an application of the Modified Vector Fitting (MVF) methodology, based upon the frequency response of the system, in order to find a rational function approximation and an equivalent electrical network whose transient behavior is similar to that of the original GS. The obtained network can be introduced into the EMTP/ATP program for simulating the transient behavior of the GS. The MVF technique, a modification of the Vector Fitting (VF) technique, allows identifying state-space models from the frequency-domain response for both single and multiple input-output systems. In this work, the methodology is used to fit the frequency response of a grounding grid, which is computed by means of the Hybrid Electromagnetic Model (HEM), finding the relation between voltages and input currents at two points of the grid in the frequency domain. The model obtained with the MVF shows good agreement with the frequency response of the GS. Moreover, the model is tested in EMTP/ATP, showing a good fit to the calculated data, which demonstrates the validity and usefulness of the MVF. (author)
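
    The linear subproblem at the heart of vector fitting, solving for residues once a set of poles is fixed, is a plain least-squares problem. The sketch below recovers the residues of a synthetic frequency response; it deliberately omits the pole-relocation iteration that distinguishes VF and MVF, and all pole and residue values are invented for illustration:

```python
import numpy as np

def fit_residues(s, H, poles):
    """Given fixed poles, fit H(s) = d + sum_k r_k / (s - p_k) by
    complex linear least squares (the inner linear step of VF)."""
    A = np.column_stack([1.0 / (s[:, None] - poles[None, :]),
                         np.ones_like(s)])
    x, *_ = np.linalg.lstsq(A, H, rcond=None)
    return x[:-1], x[-1]               # residues, direct term d

poles = np.array([-1.0 + 5.0j, -1.0 - 5.0j, -3.0 + 0.0j])
res_true = np.array([2.0 - 1.0j, 2.0 + 1.0j, 4.0 + 0.0j])
s = 1j * np.linspace(0.1, 20.0, 100)                # samples on the jw axis
H = (res_true / (s[:, None] - poles)).sum(axis=1) + 0.5
residues, d = fit_residues(s, H, poles)
```

    The resulting pole-residue form maps directly onto an RLC network, which is how the fitted model is taken into EMTP/ATP.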

  2. Raindrop size distribution: Fitting performance of common theoretical models

    Science.gov (United States)

    Adirosi, E.; Volpi, E.; Lombardo, F.; Baldini, L.

    2016-10-01

    Modelling the raindrop size distribution (DSD) is a fundamental issue in connecting remote sensing observations with reliable precipitation products for hydrological applications. To date, various standard probability distributions have been proposed to build DSD models. Relevant questions to ask are how often and how well such models fit empirical data, given that advances in both data availability and the technology used to estimate DSDs have allowed many of the deficiencies of early analyses to be mitigated. Therefore, we present a comprehensive follow-up of a previous study on the statistical fitting of three common DSD models against 2D-Video Distrometer (2DVD) data, which are unique in that the size of individual drops is determined accurately. Using the maximum likelihood method, we fit models based on the lognormal, gamma, and Weibull distributions to more than 42,000 one-minute drop-by-drop records taken from the field campaigns of the NASA Ground Validation program of the Global Precipitation Measurement (GPM) mission. In order to check the adequacy between the models and the measured data, we investigate the goodness of fit of each distribution using the Kolmogorov-Smirnov test. Then, we apply a specific model selection technique to evaluate the relative quality of each model. Results show that the gamma distribution has the lowest KS rejection rate, while the Weibull distribution is the most frequently rejected. Ranking for each minute the statistical models that pass the KS test, it can be argued that probability distributions whose tails are exponentially bounded, i.e. light-tailed distributions, seem adequate to model the natural variability of DSDs. However, in line with our previous study, we also found that frequency distributions of empirical DSDs can be heavy-tailed in a number of cases, which may result in severe uncertainty in estimating statistical moments and bulk variables.
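
    As a small illustration of one of the three fits, a lognormal model can be fitted by maximum likelihood and checked with the Kolmogorov-Smirnov statistic using only NumPy and the standard library. The drop-size sample below is synthetic, not 2DVD data:

```python
import math
import numpy as np

def fit_lognormal(d):
    """Lognormal MLE: mu and sigma are the mean and std of log(d)."""
    logs = np.log(d)
    return logs.mean(), logs.std()

def ks_statistic(d, mu, sigma):
    """KS distance between the empirical CDF and the fitted lognormal CDF."""
    x = np.sort(d)
    cdf = np.array([0.5 * (1.0 + math.erf((math.log(v) - mu)
                                          / (sigma * math.sqrt(2.0))))
                    for v in x])
    n = len(x)
    upper = np.arange(1, n + 1) / n - cdf   # ECDF jumps above the model CDF
    lower = cdf - np.arange(0, n) / n       # model CDF above the ECDF
    return float(max(upper.max(), lower.max()))

rng = np.random.default_rng(3)
diameters = np.exp(rng.normal(0.2, 0.5, 500))   # synthetic "drop sizes" (mm)
mu, sigma = fit_lognormal(diameters)
D = ks_statistic(diameters, mu, sigma)
```

    In practice D would be compared against a critical value (with a correction for the fact that the parameters were estimated from the same sample) to accept or reject the model for each one-minute spectrum.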

  3. MNP: R Package for Fitting the Multinomial Probit Model

    Directory of Open Access Journals (Sweden)

    Kosuke Imai

    2005-05-01

    Full Text Available MNP is a publicly available R package that fits the Bayesian multinomial probit model via Markov chain Monte Carlo. The multinomial probit model is often used to analyze the discrete choices made by individuals recorded in survey data. Examples where the multinomial probit model may be useful include the analysis of product choice by consumers in market research and the analysis of candidate or party choice by voters in electoral studies. The MNP software can also fit the model with different choice sets for each individual, and complete or partial individual choice orderings of the available alternatives from the choice set. The estimation is based on the efficient marginal data augmentation algorithm that is developed by Imai and van Dyk (2005).

  4. Survival model construction guided by fit and predictive strength.

    Science.gov (United States)

    Chauvel, Cécile; O'Quigley, John

    2016-10-05

    Survival model construction can be guided by goodness-of-fit techniques as well as measures of predictive strength. Here, we aim to bring together these distinct techniques within the context of a single framework. The goal is to determine how best to characterize and code the effects of the variables, in particular time dependencies, when taken either singly or in combination with other related covariates. Simple graphical techniques can provide an immediate visual indication as to the goodness-of-fit but, in cases of departure from model assumptions, will point in the direction of a more involved and richer alternative model. These techniques appear to be intuitive. This intuition is backed up by formal theorems that underlie the process of building richer models from simpler ones. Measures of predictive strength are used in conjunction with these goodness-of-fit techniques and, again, formal theorems show that these measures can be used to help identify models closest to the unknown non-proportional hazards mechanism that we can suppose generates the observations. Illustrations from studies in breast cancer show how these tools can be of help in guiding the practical problem of efficient model construction for survival data.

  5. Quantifying model structural error: Efficient Bayesian calibration of a regional groundwater flow model using surrogates and a data-driven error model

    Science.gov (United States)

    Xu, Tianfang; Valocchi, Albert J.; Ye, Ming; Liang, Feng

    2017-05-01

    Groundwater model structural error is ubiquitous, due to simplification and/or misrepresentation of real aquifer systems. During model calibration, the basic hydrogeological parameters may be adjusted to compensate for structural error. This may result in biased predictions when such calibrated models are used to forecast aquifer responses to new forcing. We investigate the impact of model structural error on calibration and prediction of a real-world groundwater flow model, using a Bayesian method with a data-driven error model to explicitly account for model structural error. The error-explicit Bayesian method jointly infers model parameters and structural error and thereby reduces parameter compensation. In this study, Bayesian inference is facilitated using high performance computing and fast surrogate models (based on machine learning techniques) as a substitute for the computationally expensive groundwater model. We demonstrate that with explicit treatment of model structural error, the Bayesian method yields parameter posterior distributions that are substantially different from those derived using classical Bayesian calibration that does not account for model structural error. We also found that the error-explicit Bayesian method gives significantly more accurate prediction along with reasonable credible intervals. Finally, through variance decomposition, we provide a comprehensive assessment of prediction uncertainty contributed from parameter, model structure, and measurement uncertainty. The results suggest that the error-explicit Bayesian approach provides a solution to real-world modeling applications for which data support the presence of model structural error, yet model deficiency cannot be specifically identified or corrected.
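
    The surrogate-plus-MCMC machinery can be illustrated in miniature: a one-parameter toy function stands in for the expensive groundwater simulator, a polynomial stands in for the machine-learning surrogate, and a Metropolis sampler evaluates only the surrogate. Everything here (the function, the prior, the noise level) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(k):
    # Stand-in for a costly simulator: observable response vs. parameter k.
    return np.exp(-0.5 * k)

# 1) Small design of "expensive" runs, used once to train a cheap surrogate.
design = np.linspace(0.0, 3.0, 15)
coeffs = np.polyfit(design, expensive_model(design), deg=3)

def surrogate(k):
    return np.polyval(coeffs, k)

# 2) Metropolis sampling that only ever calls the surrogate.
obs = expensive_model(1.2) + rng.normal(0.0, 0.01)   # one noisy observation
sigma = 0.01

def log_post(k):
    if not 0.0 <= k <= 3.0:                          # uniform prior on [0, 3]
        return -np.inf
    return -0.5 * ((obs - surrogate(k)) / sigma) ** 2

chain, k = [], 1.5
lp = log_post(k)
for _ in range(5000):
    prop = k + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        k, lp = prop, lp_prop
    chain.append(k)

print("posterior mean:", round(float(np.mean(chain[1000:])), 3))
```

    A data-driven structural-error term, as in the paper, would add a second component (e.g. a bias model with its own parameters) to log_post; it is omitted here for brevity.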

  6. A model for emergency department end-of-life communications after acute devastating events--part I: decision-making capacity, surrogates, and advance directives.

    Science.gov (United States)

    Limehouse, Walter E; Feeser, V Ramana; Bookman, Kelly J; Derse, Arthur

    2012-09-01

    Making decisions for a patient affected by sudden devastating illness or injury traumatizes a patient's family and loved ones. Even in the absence of an emergency, surrogates making end-of-life treatment decisions may experience negative emotional effects. Helping surrogates with these end-of-life decisions under emergent conditions requires the emergency physician (EP) to communicate clearly and to make medical recommendations with sensitivity. This model for emergency department (ED) end-of-life communications after acute devastating events comprises the following steps: 1) determine the patient's decision-making capacity; 2) identify the legal surrogate; 3) elicit patient values as expressed in completed advance directives; 4) determine patient/surrogate understanding of the life-limiting event and expectant treatment goals; 5) convey physician understanding of the event, including prognosis, treatment options, and recommendation; 6) share decisions regarding withdrawing or withholding of resuscitative efforts, using available resources and considering options for organ donation; and 7) revise treatment goals as needed. Emergency physicians should break bad news compassionately, yet sufficiently, so that surrogate and family understand both the gravity of the situation and the lack of long-term benefit of continued life-sustaining interventions. EPs should also help the surrogate and family understand that palliative care addresses comfort needs of the patient including adequate treatment for pain, dyspnea, or anxiety. Part I of this communications model reviews determination of decision-making capacity, surrogacy laws, and advance directives, including legal definitions and application of these steps; Part II (which will appear in a future issue of AEM) covers communication moving from resuscitative to end-of-life and palliative treatment.
EPs should recognize acute devastating illness or injuries, when appropriate, as opportunities to initiate end-of-life discussions and to

  7. Growth and reproductive effects from dietary exposure to Aroclor 1268 in mink (Neovison vison), a surrogate model for marine mammals.

    Science.gov (United States)

    Folland, William R; Newsted, John L; Fitzgerald, Scott D; Fuchsman, Phyllis C; Bradley, Patrick W; Kern, John; Kannan, Kurunthachalam; Remington, Richard E; Zwiernik, Matthew J

    2016-03-01

    Polychlorinated biphenyls (PCBs) from the commercial mixture Aroclor 1268 were historically released into the Turtle-Brunswick River estuary (southeastern Georgia, USA) from industrial operations. Sum PCBs (ΣPCBs) in blubber samples from Turtle-Brunswick River estuary bottlenose dolphins (Tursiops truncatus) have been reported at concentrations more than 10-fold higher than those observed in dolphins from adjacent regional estuaries. Given that toxicity data specific to Aroclor 1268 and applicable to marine mammals are limited, predicting the toxic effects of Aroclor 1268 in dolphins is uncertain, particularly because of its unique congener profile and associated physicochemical characteristics compared with other PCB mixtures. American mink (Neovison vison) were chosen as a surrogate model for cetaceans to develop marine mammalian PCB toxicity benchmarks. Mink are a suitable surrogate species for cetaceans in toxicity studies because of similarities in diet and taxonomic class, and a characteristic sensitivity to PCBs provides a potential safety factor when using mink toxicology data for cross-species extrapolations. Effects of dietary exposure to Aroclor 1268 on reproduction, growth, and mortality in mink were compared with both a negative control and a positive control (3,3',4,4',5-pentachlorobiphenyl, PCB 126). Aroclor 1268 dietary ΣPCB concentrations ranged from 1.8 µg/g feed wet weight to 29 µg/g feed wet weight. Whelp success was unaffected by Aroclor 1268 exposure at any level. Treatment mean litter size, kit growth, and kit survival were adversely affected relative to the negative control at dietary ΣPCB concentrations of 10.6 µg/g feed wet weight and greater.

  8. Evaluation of Murine Norovirus, Feline Calicivirus, Poliovirus, and MS2 as Surrogates for Human Norovirus in a Model of Viral Persistence in Surface Water and Groundwater

    Science.gov (United States)

    Human noroviruses (NoV) are a significant cause of nonbacterial gastroenteritis worldwide, with contaminated drinking water a potential transmission route. The absence of a cell culture infectivity model for NoV necessitates the use of molecular methods and/or viral surrogate mod...

  10. Supersymmetry with prejudice: Fitting the wrong model to LHC data

    Science.gov (United States)

    Allanach, B. C.; Dolan, Matthew J.

    2012-09-01

    We critically examine interpretations of hypothetical supersymmetric LHC signals, fitting to alternative wrong models of supersymmetry breaking. The signals we consider are some of the most constraining on the sparticle spectrum: invariant mass distributions with edges and endpoints from the golden decay chain q̃ → qχ₂⁰(→ l̃±l∓q) → χ₁⁰l⁺l⁻q. We assume a constrained minimal supersymmetric standard model (CMSSM) point to be the ‘correct’ one, but fit the signals instead with minimal gauge mediated supersymmetry breaking models (mGMSB) with a neutralino quasistable lightest supersymmetric particle, minimal anomaly mediation, and large volume string compactification models. The minimal anomaly mediation and large volume scenarios can be unambiguously discriminated against the CMSSM for the assumed signal and 1 fb⁻¹ of LHC data at √s = 14 TeV. However, mGMSB would not be discriminated on the basis of the kinematic endpoints alone. The best-fit point spectra of mGMSB and CMSSM look remarkably similar, making experimental discrimination at the LHC based on the edges or Higgs properties difficult. However, using rate information for the golden chain should provide the additional separation required.

  11. Fitting and Comparison of Models of Radio Spectra

    CERN Document Server

    Nikolic, Bojan

    2009-01-01

    I describe an approach to fitting and comparison of radio spectra based on Bayesian analysis and realised using a new implementation of the nested sampling algorithm. Such an approach improves on the commonly used maximum-likelihood fitting of radio spectra by allowing objective model selection, calculation of the full probability distributions of the model parameters and provides a natural mechanism for including information other than the measured spectra through priors. In this paper I cover the theoretical background, the algorithms used and the implementation details of the computer code. I also briefly illustrate the method with some previously published data for three near-by galaxies. In forthcoming papers we will present the results of applying this analysis to larger data sets, including some new observations, and the physical conclusions that can be made. The computer code as well as the overall approach described here may also be useful for analysis of other multi-chromatic broad-band observations an...

  12. Geometrical model fitting for interferometric data: GEM-FIND

    CERN Document Server

    Klotz, D; Paladini, C; Hron, J; Wachter, G

    2012-01-01

    We developed the tool GEM-FIND, which allows one to constrain the morphology and brightness distribution of objects. The software fits geometrical models to spectrally dispersed interferometric visibility measurements in the N-band using the Levenberg-Marquardt minimization method. Each geometrical model describes the brightness distribution of the object in Fourier space using a set of wavelength-independent and/or wavelength-dependent parameters. In this contribution we numerically analyze the stability of our nonlinear fitting approach by applying it to sets of synthetic visibilities with statistically applied errors, answering the following questions: How stable is the parameter determination with respect to (i) the number of uv-points, (ii) the distribution of points in the uv-plane, (iii) the noise level of the observations?
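
    The kind of stability experiment described — fitting a geometrical model to synthetic visibilities with statistically applied errors via Levenberg-Marquardt — can be sketched with a uniform-disk model (a standard geometrical model, used here purely as an assumed example; this is not GEM-FIND's own code):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import j1

MAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1000.0)

def uniform_disk(spatial_freq, theta_mas):
    # Visibility amplitude of a uniform disk of angular diameter theta (mas);
    # spatial_freq is baseline/wavelength in cycles per radian.
    x = np.pi * theta_mas * MAS_TO_RAD * spatial_freq
    return np.abs(2.0 * j1(x) / x)

rng = np.random.default_rng(2)
freq = np.linspace(1e6, 2.2e7, 30)       # synthetic uv sampling
true_theta = 10.0                        # angular diameter in mas
vis = uniform_disk(freq, true_theta) + rng.normal(0.0, 0.02, freq.size)

# curve_fit defaults to Levenberg-Marquardt for unbounded problems.
popt, pcov = curve_fit(uniform_disk, freq, vis, p0=[8.0])
print(f"fitted diameter: {popt[0]:.2f} mas")
```

    Repeating the fit over many noise realizations, uv samplings and noise levels answers questions (i)-(iii) empirically.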

  13. Atmospheric Turbulence Modeling for Aerospace Vehicles: Fractional Order Fit

    Science.gov (United States)

    Kopasakis, George (Inventor)

    2015-01-01

    An improved model for simulating atmospheric disturbances is disclosed. A Kolmogorov spectrum may be scaled to convert it into a finite-energy von Karman spectrum, and a fractional-order pole-zero transfer function (TF) may be derived from the von Karman spectrum. The fractional-order atmospheric turbulence may be approximated with an integer-order pole-zero TF fit, and the approximation may be stored in memory.

  14. Ectromelia Virus Disease Characterization in the BALB/c Mouse: A Surrogate Model for Assessment of Smallpox Medical Countermeasures

    Directory of Open Access Journals (Sweden)

    Jennifer Garver

    2016-07-01

    Full Text Available In 2007, the United States Food and Drug Administration (FDA) issued guidance concerning animal models for testing the efficacy of medical countermeasures against variola virus (VARV), the etiologic agent for smallpox. Ectromelia virus (ECTV) is naturally-occurring and responsible for severe mortality and morbidity as a result of mousepox disease in the murine model, displaying similarities to variola infection in humans. Due to the increased need of acceptable surrogate animal models for poxvirus disease, we have characterized ECTV infection in the BALB/c mouse. Mice were inoculated intranasally with a high lethal dose (125 PFU) of ECTV, resulting in complete mortality 10 days after infection. Decreases in weight and temperature from baseline were observed eight to nine days following infection. Viral titers via quantitative polymerase chain reaction (qPCR) and plaque assay were first observed in the blood at 4.5 days post-infection and in tissue (spleen and liver) at 3.5 days post-infection. Adverse clinical signs of disease were first observed four and five days post-infection, with severe signs occurring on day 7. Pathological changes consistent with ECTV infection were first observed five days after infection. Examination of data obtained from these parameters suggests the ECTV BALB/c model is suitable for potential use in medical countermeasures (MCMs) development and efficacy testing.

  15. Ectromelia Virus Disease Characterization in the BALB/c Mouse: A Surrogate Model for Assessment of Smallpox Medical Countermeasures.

    Science.gov (United States)

    Garver, Jennifer; Weber, Lauren; Vela, Eric M; Anderson, Mike; Warren, Richard; Merchlinsky, Michael; Houchens, Christopher; Rogers, James V

    2016-07-22

    In 2007, the United States Food and Drug Administration (FDA) issued guidance concerning animal models for testing the efficacy of medical countermeasures against variola virus (VARV), the etiologic agent for smallpox. Ectromelia virus (ECTV) is naturally-occurring and responsible for severe mortality and morbidity as a result of mousepox disease in the murine model, displaying similarities to variola infection in humans. Due to the increased need of acceptable surrogate animal models for poxvirus disease, we have characterized ECTV infection in the BALB/c mouse. Mice were inoculated intranasally with a high lethal dose (125 PFU) of ECTV, resulting in complete mortality 10 days after infection. Decreases in weight and temperature from baseline were observed eight to nine days following infection. Viral titers via quantitative polymerase chain reaction (qPCR) and plaque assay were first observed in the blood at 4.5 days post-infection and in tissue (spleen and liver) at 3.5 days post-infection. Adverse clinical signs of disease were first observed four and five days post-infection, with severe signs occurring on day 7. Pathological changes consistent with ECTV infection were first observed five days after infection. Examination of data obtained from these parameters suggests the ECTV BALB/c model is suitable for potential use in medical countermeasures (MCMs) development and efficacy testing.

  16. The Meaning of Goodness-of-Fit Tests: Commentary on "Goodness-of-Fit Assessment of Item Response Theory Models"

    Science.gov (United States)

    Thissen, David

    2013-01-01

    In this commentary, David Thissen states that "Goodness-of-fit assessment for IRT models is maturing; it has come a long way from zero." Thissen then references prior works on "goodness of fit" in the index of Lord and Novick's (1968) classic text; Yen (1984); Drasgow, Levine, Tsien, Williams, and Mead (1995); Chen and…

  17. Supersymmetry With Prejudice: Fitting the Wrong Model to LHC Data

    CERN Document Server

    Allanach, B C

    2011-01-01

    We critically examine interpretations of hypothetical supersymmetric LHC signals, fitting to alternative wrong models of supersymmetry breaking. The signals we consider are some of the most constraining on the sparticle spectrum: invariant mass distributions with edges and end-points from the golden cascade decay chain \\tilde{q}_L -> q \\chi_2^0 (-> \\tilde{l}^{\\pm} l^{\\mp} q) -> \\chi_1^0 l^+ l^- q. We assume a CMSSM point to be the `correct' one, and fit the signals instead to minimal gauge mediated supersymmetry breaking models (mGMSB) with a neutralino quasi-stable lightest supersymmetric particle, minimal anomaly mediation (mAMSB) and large volume string compactification models (LVS). mAMSB and LVS can be unambiguously discriminated against the CMSSM for the parameter point assumed and 1 inverse femtobarn of LHC data at 14 TeV. However, mGMSB would not be discriminated on the basis of the kinematic end-points alone, and would require further, more detailed investigation. The best-fit points of mGMSB and CMS...

  18. Surrogate gas prediction model as a proxy for Δ14C-based measurements of fossil fuel CO2

    Science.gov (United States)

    Coakley, Kevin J.; Miller, John B.; Montzka, Stephen A.; Sweeney, Colm; Miller, Ben R.

    2016-06-01

    The measured 14C:12C isotopic ratio of atmospheric CO2 (and its associated derived Δ14C value) is an ideal tracer for determination of the fossil fuel derived CO2 enhancement contributing to any atmospheric CO2 measurement (Cff). Given enough such measurements, independent top-down estimation of U.S. fossil fuel CO2 emissions should be possible. However, the number of Δ14C measurements is presently constrained by cost, available sample volume, and availability of mass spectrometer measurement facilities. Δ14C is therefore measured in just a small fraction of samples obtained by flask air sampling networks around the world. Here we develop a projection pursuit regression (PPR) model to predict Cff as a function of multiple surrogate gases acquired within the NOAA/Earth System Research Laboratory (ESRL) Global Greenhouse Gas Reference Network (GGGRN). The surrogates consist of measured enhancements of various anthropogenic trace gases, including CO, SF6, and halocarbons and hydrocarbons acquired in vertical airborne sampling profiles near Cape May, NJ and Portsmouth, NH from 2005 to 2010. Model performance for these sites is quantified based on predicted values corresponding to test data excluded from the model building process. Chi-square hypothesis test analysis indicates that these predictions and corresponding observations are consistent given our uncertainty budget which accounts for random effects and one particular systematic effect. However, quantification of the combined uncertainty of the prediction due to all relevant systematic effects is difficult because of the limited range of the observations and their relatively high fractional uncertainties at the sampling sites considered here. To account for the possibility of additional systematic effects, we incorporate another component of uncertainty into our budget. Expanding the number of Δ14C measurements in the NOAA GGGRN and building new PPR models at additional sites would improve our understanding of
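
    The core of a projection pursuit regression — finding a direction in surrogate-gas space whose projection, passed through a fitted ridge function, predicts Cff — can be sketched with a single ridge on synthetic data. The two "gas" columns, the ridge shape and the noise level are illustrative assumptions, not GGGRN measurements:

```python
import numpy as np

rng = np.random.default_rng(3)
# Two synthetic surrogate-gas enhancements and a Cff-like target driven by
# a single linear combination of them (a single-index model).
X = rng.normal(size=(300, 2))
w_true = np.array([0.8, 0.6])            # unit-length true direction
y = np.tanh(X @ w_true) + rng.normal(0.0, 0.05, 300)

def ridge_loss(angle):
    # One PPR ridge: project onto direction w(angle), fit a cubic ridge
    # function g by least squares, and score the residual.
    w = np.array([np.cos(angle), np.sin(angle)])
    t = X @ w
    g = np.polyfit(t, y, deg=3)
    return float(np.mean((y - np.polyval(g, t)) ** 2))

# Coarse search over directions; full PPR alternates direction updates with
# ridge-function refits and adds further ridges to the residual.
angles = np.linspace(0.0, np.pi, 181)
best = angles[int(np.argmin([ridge_loss(a) for a in angles]))]
w_hat = np.array([np.cos(best), np.sin(best)])
print("recovered direction:", np.round(w_hat, 2))
```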

  19. Surrogate gas prediction model as a proxy for Δ(14)C-based measurements of fossil fuel-CO2.

    Science.gov (United States)

    Coakley, Kevin J; Miller, John B; Montzka, Stephen A; Sweeney, Colm; Miller, Ben R

    2016-06-27

    The measured (14)C:(12)C isotopic ratio of atmospheric CO2 (and its associated derived Δ(14)C value) is an ideal tracer for determination of the fossil fuel derived CO2 enhancement contributing to any atmospheric CO2 measurement (Cff). Given enough such measurements, independent top-down estimation of US fossil fuel-CO2 emissions should be possible. However, the number of Δ(14)C measurements is presently constrained by cost, available sample volume, and availability of mass spectrometer measurement facilities. Δ(14)C is therefore measured in just a small fraction of samples obtained by flask air sampling networks around the world. Here, we develop a Projection Pursuit Regression (PPR) model to predict Cff as a function of multiple surrogate gases acquired within the NOAA/ESRL Global Greenhouse Gas Reference Network (GGGRN). The surrogates consist of measured enhancements of various anthropogenic trace gases, including CO, SF6, and halo- and hydrocarbons acquired in vertical airborne sampling profiles near Cape May, NJ and Portsmouth, NH from 2005 through 2010. Model performance for these sites is quantified based on predicted values corresponding to test data excluded from the model building process. Chi-square hypothesis test analysis indicates that these predictions and corresponding observations are consistent given our uncertainty budget which accounts for random effects and one particular systematic effect. However, quantification of the combined uncertainty of the prediction due to all relevant systematic effects is difficult because of the limited range of the observations and their relatively high fractional uncertainties at the sampling sites considered here. To account for the possibility of additional systematic effects, we incorporate another component of uncertainty into our budget. Expanding the number of Δ(14)C measurements in the NOAA GGGRN and building new PPR models at additional sites would improve our understanding of uncertainties and

  20. Fitting Additive Binomial Regression Models with the R Package blm

    Directory of Open Access Journals (Sweden)

    Stephanie Kovalchik

    2013-09-01

    Full Text Available The R package blm provides functions for fitting a family of additive regression models to binary data. The included models are the binomial linear model, in which all covariates have additive effects, and the linear-expit (lexpit) model, which allows some covariates to have additive effects and other covariates to have logistic effects. Additive binomial regression is a model of event probability, and the coefficients of linear terms estimate covariate-adjusted risk differences. Thus, in contrast to logistic regression, additive binomial regression puts the focus on absolute risk and risk differences. In this paper, we give an overview of the methodology we have developed to fit the binomial linear and lexpit models to binary outcomes from cohort and population-based case-control studies. We illustrate the blm package's methods for additive model estimation, diagnostics, and inference with risk association analyses of a bladder cancer nested case-control study in the NIH-AARP Diet and Health Study.
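
    The binomial linear model itself — identity link, so each coefficient is a covariate-adjusted risk difference — can be fit by direct maximum likelihood. The sketch below is an illustrative Python re-implementation on simulated data, not the blm package's API:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
x = rng.uniform(0.0, 1.0, 400)
p_true = 0.1 + 0.3 * x                  # additive (identity-link) risk model
y = rng.binomial(1, p_true)

def negloglik(beta):
    # Event probability is linear in x; clip to keep the likelihood finite.
    p = np.clip(beta[0] + beta[1] * x, 1e-6, 1.0 - 1e-6)
    return -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

fit = minimize(negloglik, x0=[0.2, 0.1], method="Nelder-Mead")
print("estimated risk difference per unit x:", round(float(fit.x[1]), 3))
```

    Here fit.x[1] estimates the absolute risk difference directly, which is the contrast with logistic regression emphasized in the abstract.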

  1. Bayesian Data-Model Fit Assessment for Structural Equation Modeling

    Science.gov (United States)

    Levy, Roy

    2011-01-01

    Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…

  2. Fitting rainfall interception models to forest ecosystems of Mexico

    Science.gov (United States)

    Návar, José

    2017-05-01

    Models that accurately predict forest interception are essential both for water balance studies and for assessing watershed responses to changes in land use and the long-term climate variability. This paper compares the performance of four rainfall interception models-the sparse Gash (1995), Rutter et al. (1975), Liu (1997) and two new models (NvMxa and NvMxb)-using data from four spatially extensive, structurally diverse forest ecosystems in Mexico. Ninety-eight case studies measuring interception in tropical dry (25), arid/semi-arid (29), temperate (26), and tropical montane cloud forests (18) were compiled and analyzed. Coefficients derived from raw data or published statistical relationships were used as model input to evaluate multi-storm forest interception at the case study scale. On average, empirical data showed that tropical montane cloud, temperate, arid/semi-arid and tropical dry forests intercepted 14%, 18%, 22% and 26% of total precipitation, respectively. The models performed well in predicting interception, with mean deviations between measured and modeled interception as a function of total precipitation (ME) generally 0.66. Model fitting precision was dependent on the forest ecosystem. Arid/semi-arid forests exhibited the smallest, while tropical montane cloud forest displayed the largest ME deviations. Improved agreement between measured and modeled data requires modification of the in-storm evaporation rate in the Liu model; the canopy storage in the sparse Gash model; and the throughfall coefficient in the Rutter and the NvMx models. This research concludes by recommending the wide application of rainfall interception models, with some caution, as they provide mixed results. The extensive forest interception data source, the fitting and testing of four models, the introduction of a new model, and the availability of coefficient values for all four forest ecosystems are an important source of information and a benchmark for future investigations in this

  3. Surrogate Analysis and Index Developer (SAID) tool

    Science.gov (United States)

    Domanski, Marian M.; Straub, Timothy D.; Landers, Mark N.

    2015-10-01

    The use of acoustic and other parameters as surrogates for suspended-sediment concentrations (SSC) in rivers has been successful in multiple applications across the Nation. Tools to process and evaluate the data are critical to advancing the operational use of surrogates along with the subsequent development of regression models from which real-time sediment concentrations can be made available to the public. Recent developments in both areas are having an immediate impact on surrogate research and on surrogate monitoring sites currently (2015) in operation.
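
    A typical surrogate rating of the kind such tools support — regressing log-transformed SSC on an acoustic parameter and inverting the fit for real-time estimates — reduces to ordinary least squares. The coefficients and data below are hypothetical, not output of the SAID tool:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical paired record: acoustic backscatter (dB) vs. measured SSC (mg/L).
backscatter = rng.uniform(60.0, 100.0, 50)
log10_ssc = 0.04 * backscatter - 1.0 + rng.normal(0.0, 0.05, 50)

# Surrogate rating: log10(SSC) = a * backscatter + b, fit by least squares.
a, b = np.polyfit(backscatter, log10_ssc, deg=1)

def predict_ssc(db):
    # Invert the rating to report a concentration in mg/L.
    return 10.0 ** (a * db + b)

print(f"slope={a:.3f}, SSC at 80 dB ~ {predict_ssc(80.0):.1f} mg/L")
```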

  4. Broadband distortion modeling in Lyman-$\\alpha$ forest BAO fitting

    CERN Document Server

    Blomqvist, Michael; Bautista, Julian E; Ariño, Andreu; Busca, Nicolás G; Miralda-Escudé, Jordi; Slosar, Anže; Font-Ribera, Andreu; Margala, Daniel; Schneider, Donald P; Vazquez, Jose A

    2015-01-01

    In recent years, the Lyman-$\\alpha$ absorption observed in the spectra of high-redshift quasars has been used as a tracer of large-scale structure by means of the three-dimensional Lyman-$\\alpha$ forest auto-correlation function at redshift $z\\simeq 2.3$, but the need to fit the quasar continuum in every absorption spectrum introduces a broadband distortion that is difficult to correct and causes a systematic error for measuring any broadband properties. We describe a $k$-space model for this broadband distortion based on a multiplicative correction to the power spectrum of the transmitted flux fraction that suppresses power on scales corresponding to the typical length of a Lyman-$\\alpha$ forest spectrum. Implementing the distortion model in fits for the baryon acoustic oscillation (BAO) peak position in the Lyman-$\\alpha$ forest auto-correlation, we find that the fitting method recovers the input values of the linear bias parameter $b_{F}$ and the redshift-space distortion parameter $\\beta_{F}$ for mock dat...

  5. Chempy: A flexible chemical evolution model for abundance fitting

    Science.gov (United States)

    Rybizki, J.; Just, A.; Rix, H.-W.; Fouesneau, M.

    2017-02-01

    Chempy models Galactic chemical evolution (GCE); it is a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of 5-10 parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: e.g. the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF) and the incidence of supernova of type Ia (SN Ia). Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets, performing essentially as a chemical evolution fitting tool. Chempy can be used to confront predictions from stellar nucleosynthesis with complex abundance data sets and to refine the physical processes governing the chemical evolution of stellar systems.

  6. When the model fits the frame: the impact of regulatory fit on efficacy appraisal and persuasion in health communication.

    Science.gov (United States)

    Bosone, Lucia; Martinez, Frédéric; Kalampalikis, Nikos

    2015-04-01

    In health-promotional campaigns, positive and negative role models can be deployed to illustrate the benefits or costs of certain behaviors. The main purpose of this article is to investigate why, how, and when exposure to role models strengthens the persuasiveness of a message, according to regulatory fit theory. We argue that exposure to a positive versus a negative model activates individuals' goals toward promotion rather than prevention. By means of two experiments, we demonstrate that high levels of persuasion occur when a message advertising healthy dietary habits offers a regulatory fit between its framing and the described role model. Our data also establish that the effects of such internal regulatory fit by vicarious experience depend on individuals' perceptions of response-efficacy and self-efficacy. Our findings constitute a significant theoretical complement to previous research on regulatory fit and contain valuable practical implications for health-promotional campaigns. © 2015 by the Society for Personality and Social Psychology, Inc.

  7. Fitting Latent Cluster Models for Networks with latentnet

    Directory of Open Access Journals (Sweden)

    Pavel N. Krivitsky

    2007-12-01

    Full Text Available latentnet is a package to fit and evaluate statistical latent position and cluster models for networks. Hoff, Raftery, and Handcock (2002) suggested an approach to modeling networks based on positing the existence of a latent space of characteristics of the actors. Relationships form as a function of distances between these characteristics as well as functions of observed dyadic-level covariates. In latentnet social distances are represented in a Euclidean space. It also includes a variant of the extension of the latent position model to allow for clustering of the positions developed in Handcock, Raftery, and Tantrum (2007). The package implements Bayesian inference for the models based on a Markov chain Monte Carlo algorithm. It can also compute maximum likelihood estimates for the latent position model and a two-stage maximum likelihood method for the latent position cluster model. For latent position cluster models, the package provides a Bayesian way of assessing how many groups there are, and thus whether or not there is any clustering (since if the preferred number of groups is 1, there is little evidence for clustering). It also estimates which cluster each actor belongs to. These estimates are probabilistic, and provide the probability of each actor belonging to each cluster. It computes four types of point estimates for the coefficients and positions: maximum likelihood estimate, posterior mean, posterior mode, and the estimator which minimizes Kullback-Leibler divergence from the posterior. You can assess the goodness-of-fit of the model via posterior predictive checks. It has a function to simulate networks from a latent position or latent position cluster model.

  8. Rapid world modeling: Fitting range data to geometric primitives

    Energy Technology Data Exchange (ETDEWEB)

    Feddema, J.; Little, C.

    1996-12-31

For the past seven years, Sandia National Laboratories has been active in the development of robotic systems to help remediate DOE's waste sites and decommissioned facilities. Some of these facilities have high levels of radioactivity which prevent manual clean-up. Tele-operated and autonomous robotic systems have been envisioned as the only suitable means of removing the radioactive elements. World modeling is defined as the process of creating a numerical geometric model of a real world environment or workspace. This model is often used in robotics to plan robot motions which perform a task while avoiding obstacles. In many applications where the world model does not exist ahead of time, structured lighting, laser range finders, and even acoustical sensors have been used to create three-dimensional maps of the environment. These maps consist of thousands of range points which are difficult to handle and interpret. This paper presents a least squares technique for fitting range data to planar and quadric surfaces, including cylinders and ellipsoids. Once fit to these primitive surfaces, the amount of data associated with a surface is reduced by up to three orders of magnitude, thus allowing for more rapid handling and analysis of world data.
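The least-squares fitting idea in this abstract can be sketched for the simplest primitive, a plane. The points, plane coefficients, and noise level below are synthetic stand-ins, not the paper's range data; quadrics (cylinders, ellipsoids) follow the same pattern with a larger design matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated range data: 2000 noisy points near the plane z = 0.5x - 0.2y + 3.
x = rng.uniform(-1.0, 1.0, 2000)
y = rng.uniform(-1.0, 1.0, 2000)
z = 0.5 * x - 0.2 * y + 3.0 + rng.normal(0.0, 0.01, 2000)

# Design matrix for the linear least-squares problem A @ [a, b, c] ~ z.
A = np.column_stack([x, y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
print(a, b, c)  # close to 0.5, -0.2, 3.0
```

Thousands of points are thereby summarized by three coefficients, which is the data reduction the abstract describes.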

  9. Direct model fitting to combine dithered ACS images

    CERN Document Server

    Mahmoudian, Haniyeh

    2013-01-01

The information lost in images of undersampled CCD cameras can be recovered with the technique of `dithering'. A number of subexposures are taken with sub-pixel shifts in order to record structures on scales smaller than a pixel. The standard method to combine such exposures, `Drizzle', averages after reversing the displacements, including rotations and distortions. More sophisticated methods are available to produce, e.g., Nyquist-sampled representations of band-limited inputs. While the combined images produced by these methods can be of high quality, their use as input for forward-modelling techniques in gravitational lensing is still not optimal, because the residual artefacts affect the modelling results in unpredictable ways. In this paper we argue for an overall modelling approach that takes into account the dithering and the lensing without the intermediate product of a combined image. As one building block we introduce an alternative approach to combine dithered images by direct model fitting wi...

  10. Integration of computational modeling and experimental techniques to design fuel surrogates

    DEFF Research Database (Denmark)

    Choudhury, H.A.; Intikhab, S.; Kalakul, Sawitree

    2017-01-01

Conventional gasoline comprises a large number of hydrocarbons, which makes it difficult to utilize in a model for prediction of its properties. Modeling is needed for a better understanding of the fuel flow and combustion behavior that are essential to enhance fuel quality and improve engine pe...

  11. A Commentary on the Relationship between Model Fit and Saturated Path Models in Structural Equation Modeling Applications

    Science.gov (United States)

    Raykov, Tenko; Lee, Chun-Lung; Marcoulides, George A.; Chang, Chi

    2013-01-01

The relationship between saturated path-analysis models and their fit to data is revisited. It is demonstrated that a saturated model need not fit a given data set perfectly, or even well, when fit to the raw data is examined, a criterion frequently overlooked by researchers utilizing path analysis modeling techniques. The potential of…

  12. Issues in Evaluating Model Fit With Missing Data

    Science.gov (United States)

    Davey, Adam

    2005-01-01

Effects of incomplete data on fit indexes remain relatively unexplored. We evaluate a wide set of fit indexes (chi-squared, root mean squared error of approximation, Normed Fit Index [NFI], Tucker-Lewis Index, comparative fit index, gamma-hat, and McDonald's Centrality Index) varying conditions of sample size (100-1,000 in increments of 50),…

  13. Stationary flow fields prediction of variable physical domain based on proper orthogonal decomposition and kriging surrogate model

    Institute of Scientific and Technical Information of China (English)

    Qiu Yasong; Bai Junqiang

    2015-01-01

In this paper a new flow field prediction method, independent of the governing equations, is developed to predict stationary flow fields of variable physical domain. Predicted flow fields come from a linear superposition of selected basis modes generated by proper orthogonal decomposition (POD). Instead of traditional projection methods, a kriging surrogate model is used to calculate the superposition coefficients by building approximate functional relationships between the profile geometry parameters of the physical domain and these coefficients. In this context, the problem that troubles the traditional POD-projection method due to viscosity and compressibility is avoided in the whole process. Moreover, there are no constraints on the inner product form, so two simple forms are applied to improve computational efficiency and to cope with the variable physical domain problem. An iterative algorithm is developed to determine how many of the leading basis modes should be used in the prediction. Testing results prove the feasibility of this new method for subsonic flow fields, but also show that it is not suitable for transonic flow fields because of poorly predicted shock waves.
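The POD-plus-surrogate pipeline described above can be sketched minimally as follows. The one-parameter "flow field", the number of retained modes, and the quadratic polynomial fit (standing in for the paper's kriging surrogate) are all illustrative assumptions.

```python
import numpy as np

xs = np.linspace(0.0, 2.0 * np.pi, 200)
params = np.linspace(0.5, 2.0, 10)            # training geometry parameters

def field(p):
    # toy "flow field" depending nonlinearly on the parameter p
    return p * np.sin(xs) + p**2 * np.cos(2.0 * xs)

snapshots = np.stack([field(p) for p in params], axis=1)  # 200 x 10 snapshot matrix

# POD basis from the SVD of the snapshot matrix.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
k = 2                                          # retained basis modes
modes = U[:, :k]

# Superposition coefficients of each training snapshot in the POD basis.
coeffs = modes.T @ snapshots                   # k x 10

# Surrogate: coefficient-versus-parameter fit (quadratic polynomial here).
surrogates = [np.polyfit(params, coeffs[i], 2) for i in range(k)]

# Predict the field at an unseen parameter by superposing the modes.
p_new = 1.3
c_new = np.array([np.polyval(sur, p_new) for sur in surrogates])
prediction = modes @ c_new
error = np.linalg.norm(prediction - field(p_new)) / np.linalg.norm(field(p_new))
print(error)  # essentially zero for this rank-2 toy problem
```

The key point mirrored from the abstract is that prediction never touches governing equations: only the snapshot matrix and the parameter-to-coefficient map are used.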

  14. Stationary flow fields prediction of variable physical domain based on proper orthogonal decomposition and kriging surrogate model

    Directory of Open Access Journals (Sweden)

    Qiu Yasong

    2015-02-01

Full Text Available In this paper a new flow field prediction method, independent of the governing equations, is developed to predict stationary flow fields of variable physical domain. Predicted flow fields come from a linear superposition of selected basis modes generated by proper orthogonal decomposition (POD). Instead of traditional projection methods, a kriging surrogate model is used to calculate the superposition coefficients by building approximate functional relationships between the profile geometry parameters of the physical domain and these coefficients. In this context, the problem that troubles the traditional POD-projection method due to viscosity and compressibility is avoided in the whole process. Moreover, there are no constraints on the inner product form, so two simple forms are applied to improve computational efficiency and to cope with the variable physical domain problem. An iterative algorithm is developed to determine how many of the leading basis modes should be used in the prediction. Testing results prove the feasibility of this new method for subsonic flow fields, but also show that it is not suitable for transonic flow fields because of poorly predicted shock waves.

  15. Assessing Model Data Fit of Unidimensional Item Response Theory Models in Simulated Data

    Science.gov (United States)

    Kose, Ibrahim Alper

    2014-01-01

The purpose of this paper is to give an example of how to assess the model-data fit of unidimensional IRT models in simulated data. The present research also aims to explain the importance of fit and the consequences of misfit by using simulated data sets. Responses of 1000 examinees to a dichotomously scored 20-item test were simulated with 25…

  16. A person-fit index for polytomous Rasch models, latent class models, and their mixture generalizations

    NARCIS (Netherlands)

    von Davier, M; Molenaar, IW

    2003-01-01

A normally distributed person-fit index is proposed for detecting aberrant response patterns in latent class models and mixture distribution IRT models for dichotomous and polytomous data. This article extends previous work on the null distribution of person-fit indices for the dichotomous Rasch model.

  17. Regression calibration with more surrogates than mismeasured variables

    KAUST Repository

    Kipnis, Victor

    2012-06-29

    In a recent paper (Weller EA, Milton DK, Eisen EA, Spiegelman D. Regression calibration for logistic regression with multiple surrogates for one exposure. Journal of Statistical Planning and Inference 2007; 137: 449-461), the authors discussed fitting logistic regression models when a scalar main explanatory variable is measured with error by several surrogates, that is, a situation with more surrogates than variables measured with error. They compared two methods of adjusting for measurement error using a regression calibration approximate model as if it were exact. One is the standard regression calibration approach consisting of substituting an estimated conditional expectation of the true covariate given observed data in the logistic regression. The other is a novel two-stage approach when the logistic regression is fitted to multiple surrogates, and then a linear combination of estimated slopes is formed as the estimate of interest. Applying estimated asymptotic variances for both methods in a single data set with some sensitivity analysis, the authors asserted superiority of their two-stage approach. We investigate this claim in some detail. A troubling aspect of the proposed two-stage method is that, unlike standard regression calibration and a natural form of maximum likelihood, the resulting estimates are not invariant to reparameterization of nuisance parameters in the model. We show, however, that, under the regression calibration approximation, the two-stage method is asymptotically equivalent to a maximum likelihood formulation, and is therefore in theory superior to standard regression calibration. However, our extensive finite-sample simulations in the practically important parameter space where the regression calibration model provides a good approximation failed to uncover such superiority of the two-stage method. We also discuss extensions to different data structures.

  18. Mechanical Response of Polycarbonate with Strength Model Fits

    Science.gov (United States)

    2012-02-01

…is used as a free parameter to improve the quality of the fit. ε̇ is the strain rate and ε̇₀ is the reference strain rate, for which 1/s was used…experimental data. Table 3. ZA model parameters: B0 = 0.006715948 1/K; B1 = 0.00009503 1/K; Bpa = 550 MPa; B0pa = 48 MPa; ωa = -8; ωb = -0.01; β = 0.5…Hybrid Hard/Ductile All-Plastic- and Glass-Plastic-Based Composites; ARL-TR-3155; U.S. Army Research Laboratory: Aberdeen Proving Ground, MD, February

  19. An NCME Instructional Module on Item-Fit Statistics for Item Response Theory Models

    Science.gov (United States)

    Ames, Allison J.; Penfield, Randall D.

    2015-01-01

    Drawing valid inferences from item response theory (IRT) models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. This instructional module provides an overview of methods used for evaluating the fit of IRT models. Upon completing…

  20. Cavity approach for modeling and fitting polymer stretching

    CERN Document Server

    Massucci, Francesco Alessandro; Vicente, Conrad J Pérez

    2014-01-01

The mechanical properties of molecules are today captured by single-molecule manipulation experiments, so that polymer features are tested at a nanometric scale. Yet devising mathematical models to get further insight beyond the commonly studied force-elongation relation is typically hard. Here we draw from techniques developed in the context of disordered systems to solve models for single- and double-stranded DNA stretching in the limit of a long polymeric chain. Since we directly derive the marginals for the molecule local orientation, our approach allows us to readily calculate the experimental elongation as well as other observables at will. As an example, we evaluate the correlation length as a function of the stretching force. Furthermore, we are able to fit successfully our solution to real experimental data. Although the model is admittedly phenomenological, our findings are very sound. For single-stranded DNA our solution yields the correct (monomer) scale and, yet more importantly, the right pers...

  1. Empirical fitness models for hepatitis C virus immunogen design

    Science.gov (United States)

    Hart, Gregory R.; Ferguson, Andrew L.

    2015-12-01

Hepatitis C virus (HCV) afflicts 170 million people worldwide, 2%-3% of the global population, and kills 350 000 each year. Prophylactic vaccination offers the most realistic and cost-effective hope of controlling this epidemic in the developing world where expensive drug therapies are not available. Despite 20 years of research, the high mutability of the virus and lack of knowledge of what constitutes effective immune responses have impeded development of an effective vaccine. Coupling data mining of sequence databases with spin glass models from statistical physics, we have developed a computational approach to translate clinical sequence databases into empirical fitness landscapes quantifying the replicative capacity of the virus as a function of its amino acid sequence. These landscapes explicitly connect viral genotype to phenotypic fitness, and reveal vulnerable immunological targets within the viral proteome that can be exploited to rationally design vaccine immunogens. We have recovered the empirical fitness landscape for the HCV RNA-dependent RNA polymerase (protein NS5B) responsible for viral genome replication, and validated the predictions of our model by demonstrating excellent accord with experimental measurements and clinical observations. We have used our landscapes to perform exhaustive in silico screening of 16.8 million T-cell immunogen candidates to identify 86 optimal formulations. By reducing the search space of immunogen candidates by over five orders of magnitude, our approach can offer valuable savings in time, expense, and labor for experimental vaccine development and accelerate the search for an HCV vaccine. Abbreviations: HCV—hepatitis C virus, HLA—human leukocyte antigen, CTL—cytotoxic T lymphocyte, NS5B—nonstructural protein 5B, MSA—multiple sequence alignment, PEG-IFN—pegylated interferon.

  2. WE-AB-303-11: Verification of a Deformable 4DCT Motion Model for Lung Tumor Tracking Using Different Driving Surrogates

    Energy Technology Data Exchange (ETDEWEB)

    Woelfelschneider, J [University Hospital Erlangen, Erlangen, DE (Germany); Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, DE (Germany); Seregni, M; Fassi, A; Baroni, G; Riboldi, M [Politecnico di Milano, Milano (Italy); Bert, C [University Hospital Erlangen, Erlangen, DE (Germany); Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, DE (Germany); GSI - Helmholtz Centre for Heavy Ion Research, Darmstadt, DE (Germany)

    2015-06-15

Purpose: Tumor tracking is an advanced technique to treat intra-fractionally moving tumors. The aim of this study is to validate a surrogate-driven model based on four-dimensional computed tomography (4DCT) that is able to predict CT volumes corresponding to arbitrary respiratory states. Further, the comparison of three different driving surrogates is evaluated. Methods: This study is based on multiple 4DCTs of two patients treated for bronchial carcinoma and metastasis. Analyses for 18 additional patients are currently ongoing. The motion model was estimated from the planning 4DCT through deformable image registration. To predict a certain phase of a follow-up 4DCT, the model accounts for inter-fractional variations (baseline correction) and intra-fractional respiratory parameters (amplitude and phase) derived from surrogates. In this evaluation, three different approaches were used to extract the motion surrogate: for each 4DCT phase, the 3D thoraco-abdominal surface motion, the body volume, and the anterior-posterior motion of a virtual single external marker defined on the sternum were investigated. The estimated volumes resulting from the model were compared to the ground-truth clinical 4DCTs using absolute HU differences in the lung volume and landmarks localized using the Scale Invariant Feature Transform (SIFT). Results: The results show absolute HU differences between estimated and ground-truth images with median values limited to 55 HU and inter-quartile ranges (IQR) lower than 100 HU. Median 3D distances between about 1500 matching landmarks are below 2 mm for the 3D surface motion and body volume methods. The single-marker surrogate results in median distances increased by up to 0.6 mm. Analyses for the extended database including 20 patients are currently in progress. Conclusion: The results depend mainly on the image quality of the initial 4DCTs and the deformable image registration. All investigated surrogates can be used to estimate follow-up 4DCT phases.

  3. Identification of a Surrogate Marker for Infection in the African Green Monkey Model of Inhalation Anthrax

    Science.gov (United States)

    2008-12-01

…model of inhalational anthrax. Infection and Immunity 76:5790-5801. Rossi, C. A.; Ulrich, M.; Norris, S.; Reed, D. S.; Pitt, M. L. M.; Leffel, E. K.

  4. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    Science.gov (United States)

    Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magunsson, Arni; Martell, Steve; Nash, John; Nielson, Anders; Regentz, Jim; Skaug, Hans; Zipkin, Elise

    2013-01-01

    1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.
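The paper's practical advice on nonlinear fitting (choose sensible starting values, stabilize the optimization) can be illustrated with a hand-rolled Gauss-Newton fit in Python. The Michaelis-Menten model, synthetic data, and backtracking safeguard below are illustrative choices, not taken from the paper or its companion site.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.1, 5.0, 50)
y = 2.0 * x / (0.5 + x) + rng.normal(0.0, 0.02, x.size)

def sse(a, b):
    # sum of squared residuals for the Michaelis-Menten model y = a*x/(b+x)
    return np.sum((y - a * x / (b + x)) ** 2)

a, b = 1.0, 1.0                      # starting values matter for nonlinear fits
for _ in range(100):
    r = y - a * x / (b + x)
    # Jacobian of the model predictions with respect to (a, b)
    J = np.column_stack([x / (b + x), -a * x / (b + x) ** 2])
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    t = 1.0
    # backtracking line search keeps the iteration from overshooting
    while not (sse(a + t * step[0], b + t * step[1]) < sse(a, b)) and t > 1e-6:
        t /= 2
    a, b = a + t * step[0], b + t * step[1]

print(a, b)  # close to the true values 2.0 and 0.5
```

Dedicated tools such as R's `nls`, AD Model Builder, or BUGS wrap exactly this kind of iteration with better diagnostics, which is the paper's point.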

  5. Nested by design: model fitting and interpretation in a mixed model era

    National Research Council Canada - National Science Library

    Schielzeth, Holger; Nakagawa, Shinichi; Freckleton, Robert

    2013-01-01

    ...‐effects models offer a powerful framework to do so. Nested effects can usually be fitted using the syntax for crossed effects in mixed models, provided that the coding reflects implicit nesting...

  6. Robust goodness-of-fit tests for AR(p) models based on L1-norm fitting

    Institute of Scientific and Technical Information of China (English)

    蒋建成; 郑忠国

    1999-01-01

A robustified residual autocorrelation is defined based on L1-regression. Under very general conditions, the asymptotic distribution of the robust residual autocorrelation is obtained. A robustified portmanteau statistic is then constructed which can be used for checking the goodness-of-fit of AR(p) models when using L1-norm fitting. Empirical results show that the L1-norm estimators and the proposed portmanteau statistic are robust against outliers, insensitive to the error distribution, and accurate for a given finite sample.
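A rough numpy-only sketch of L1-norm fitting for an AR(1) process, using iteratively reweighted least squares as one common way to compute a least-absolute-deviations estimate; the sign-based lag-1 autocorrelation at the end is a simplified stand-in for the paper's robustified residual autocorrelation, not its exact statistic.

```python
import numpy as np

rng = np.random.default_rng(3)
n, phi_true = 500, 0.6

# Simulate an AR(1) series, then inject two gross outliers.
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()
y[100] += 15.0
y[300] -= 15.0

X, z = y[:-1], y[1:]

# Least-absolute-deviations fit of the AR coefficient via IRLS.
phi = np.sum(X * z) / np.sum(X * X)           # ordinary least squares start
for _ in range(50):
    r = z - phi * X
    w = 1.0 / np.maximum(np.abs(r), 1e-8)     # IRLS weights for the L1 loss
    phi = np.sum(w * X * z) / np.sum(w * X * X)

# Sign-based lag-1 residual autocorrelation (resistant to outliers).
s = np.sign(z - phi * X)
rho1 = np.mean(s[1:] * s[:-1])
print(phi, rho1)
```

The IRLS weights downweight the outlier-contaminated pairs, so the estimate stays near the true coefficient and the residual autocorrelation stays near zero for a correctly specified model.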

  7. Cartilage regeneration and repair testing in a surrogate large animal model.

    Science.gov (United States)

    Simon, Timothy M; Aberman, Harold M

    2010-02-01

The aging human population is experiencing increasing numbers of symptoms related to its degenerative articular cartilage (AC), which has stimulated the investigation of methods to regenerate or repair AC. However, the seemingly inherent limited capacity for AC to regenerate persists to confound the various repair treatment strategies proposed or studied. Animal models for testing AC implant devices and reparative materials are an important and required part of the Food and Drug Administration approval process. Although final testing is ultimately performed in humans, animal testing allows for a wider range of parameters and combinations of test materials subjected to all the biological interactions of a living system. We review here considerations, evaluations, and experiences with selection and use of animal models and describe two untreated lesion models useful for testing AC repair strategies. These created lesion models, one deep (6 mm, through the subchondral plate), the other shallow (to the level of the subchondral bone plate), were placed in the middle one-third of the medial femoral condyle of the knee joints of goats. At 1 year, neither the deep nor the shallow full-thickness chondral defects generated a repair that duplicated natural AC. Moreover, progressive deleterious changes occurred in the AC surrounding the defects. There are challenges in translation from animals to humans as anatomy and structures are different and immobilization to protect delicate repairs can be difficult. The tissues potentially generated by proposed cartilage repair strategies must be compared with the spontaneous changes that occur in similarly created untreated lesions. The prevention of the secondary changes in the surrounding cartilage and subchondral bone described in this article should be addressed with the introduction of treatments for repairs of the articulating surface.

  8. A neutrino model fit to the CMB power spectrum

    CERN Document Server

    Shanks, T; Schewtschenko, J A; Whitbourn, J R

    2014-01-01

The current standard cosmological model, LCDM, provides an excellent fit to the WMAP and Planck CMB data. However, the model has well known problems. For example, the cosmological constant is fine-tuned to 1 part in 10^100 and the cold dark matter (CDM) particle is not yet detected in the laboratory. Here we seek an alternative model to LCDM which makes minimal assumptions about new physics. This is based on previous work by Shanks who investigated a model which assumed neither exotic particles nor a cosmological constant but instead postulated a low Hubble constant (H_0) to help allow a baryon density which was compatible with an inflationary model with zero spatial curvature. However, the recent Planck results make it more difficult to reconcile such a model with the cosmic microwave background (CMB) temperature fluctuations. Here we relax the previous assumptions to assess the effects of assuming standard model neutrinos of moderate mass (~5eV) but with no CDM and no cosmological constant. If we assume a l...

  9. Online model checking for monitoring surrogate-based respiratory motion tracking in radiation therapy.

    Science.gov (United States)

    Antoni, Sven-Thomas; Rinast, Jonas; Ma, Xintao; Schupp, Sibylle; Schlaefer, Alexander

    2016-11-01

    Correlation between internal and external motion is critical for respiratory motion compensation in radiosurgery. Artifacts like coughing, sneezing or yawning or changes in the breathing pattern can lead to misalignment between beam and tumor and need to be detected to interrupt the treatment. We propose online model checking (OMC), a model-based verification approach from the field of formal methods, to verify that the breathing motion is regular and the correlation holds. We demonstrate that OMC may be more suitable for artifact detection than the prediction error. We established a sinusoidal model to apply OMC to the verification of respiratory motion. The method was parameterized to detect deviations from typical breathing motion. We analyzed the performance on synthetic data and on clinical episodes showing large correlation error. In comparison, we considered the prediction error of different state-of-the-art methods based on least mean squares (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS), recursive least squares (RLSpred) and support vector regression (SVRpred). On synthetic data, OMC outperformed wLMS by at least 30 % and SVRpred by at least 141 %, detecting 70 % of transitions. No artifacts were detected by nLMS and RLSpred. On patient data, OMC detected 23-49 % of the episodes correctly, outperforming nLMS, wLMS, RLSpred and SVRpred by up to 544, 491, 408 and 258 %, respectively. On selected episodes, OMC detected up to 94 % of all events. OMC is able to detect changes in breathing as well as artifacts which previously would have gone undetected, outperforming prediction error-based detection. Synthetic data analysis supports the assumption that prediction is very insensitive to specific changes in breathing. We suggest using OMC as an additional safety measure ensuring reliable and fast stopping of irradiation.
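The sinusoidal breathing model underlying the abstract can be sketched with a residual-based artifact flag. The sampling rate, breathing frequency, injected cough artifact, and 6-sigma threshold below are invented for illustration, and real online model checking verifies formal properties of the model rather than a simple residual test.

```python
import numpy as np

rng = np.random.default_rng(4)
fs, f_breath = 25.0, 0.25                     # 25 Hz sampling, 15 breaths/min
t = np.arange(0.0, 60.0, 1.0 / fs)
signal = 5.0 * np.sin(2.0 * np.pi * f_breath * t) + rng.normal(0.0, 0.1, t.size)
signal[(t > 30.0) & (t < 31.0)] += 8.0        # simulated cough artifact

# Fit the regular-breathing model y = A sin + B cos + C by linear least squares.
basis = np.column_stack([np.sin(2.0 * np.pi * f_breath * t),
                         np.cos(2.0 * np.pi * f_breath * t),
                         np.ones_like(t)])
coef, *_ = np.linalg.lstsq(basis, signal, rcond=None)
resid = signal - basis @ coef

# Flag samples that deviate from regular breathing by more than 6 sigma.
sigma = np.median(np.abs(resid)) / 0.6745     # robust noise-scale estimate
flags = np.abs(resid) > 6.0 * sigma
print(flags.mean())
```

Flagged intervals would correspond to the treatment interruptions the paper argues for: the cough second is flagged while regular breathing is not.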

  10. 3D Building Model Fitting Using A New Kinetic Framework

    CERN Document Server

    Brédif, Mathieu; Pierrot-Deseilligny, Marc; Maître, Henri

    2008-01-01

We describe a new approach to fit the polyhedron describing a 3D building model to the point cloud of a Digital Elevation Model (DEM). We introduce a new kinetic framework that hides from its user the combinatorial complexity of determining or maintaining the polyhedron topology, allowing the design of a simple variational optimization. This new kinetic framework allows the manipulation of a bounded polyhedron with simple faces by specifying the target plane equations of each of its faces. It proceeds by evolving continuously from the polyhedron defined by its initial topology and its initial plane equations to a polyhedron that is as topologically close as possible to the initial polyhedron but with the new plane equations. This kinetic framework handles internally the necessary topological changes that may be required to keep the faces simple and the polyhedron bounded. For each intermediate configuration where the polyhedron loses the simplicity of its faces or its boundedness, the simplest topological mod...

  11. Estimation of Semi-Varying Coefficient Model with Surrogate Data and Validation Sampling

    Institute of Scientific and Technical Information of China (English)

    Ya-zhao L(U); Ri-quan ZHANG; Zhen-sheng HUANG

    2013-01-01

In this paper, we investigate the estimation of semi-varying coefficient models when the nonlinear covariates are prone to measurement error. With the help of validation sampling, we propose two estimators of the parameter and the coefficient functions by combining dimension reduction and the profile likelihood methods without any error structure equation specification or error distribution assumption. We establish the asymptotic normality of the proposed estimators for both the parametric and nonparametric parts and show that the proposed estimators achieve the best convergence rate. Data-driven bandwidth selection methods are also discussed. Simulations are conducted to evaluate the finite-sample properties of the proposed estimation methods.

  12. Efficient Bayesian inference of subsurface flow models using nested sampling and sparse polynomial chaos surrogates

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    An efficient Bayesian calibration method based on the nested sampling (NS) algorithm and non-intrusive polynomial chaos method is presented. Nested sampling is a Bayesian sampling algorithm that builds a discrete representation of the posterior distributions by iteratively re-focusing a set of samples to high likelihood regions. NS allows representing the posterior probability density function (PDF) with a smaller number of samples and reduces the curse of dimensionality effects. The main difficulty of the NS algorithm is in the constrained sampling step which is commonly performed using a random walk Markov Chain Monte-Carlo (MCMC) algorithm. In this work, we perform a two-stage sampling using a polynomial chaos response surface to filter out rejected samples in the Markov Chain Monte-Carlo method. The combined use of nested sampling and the two-stage MCMC based on approximate response surfaces provides significant computational gains in terms of the number of simulation runs. The proposed algorithm is applied for calibration and model selection of subsurface flow models. © 2013.
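The two-stage sampling described above is closely related to delayed-acceptance MCMC, which can be sketched in a few lines. The deliberately biased Gaussian "surrogate" below stands in for the polynomial chaos response surface, and the toy one-dimensional likelihood for the subsurface flow model; neither is from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def loglike(theta):
    # "expensive" true log-likelihood: Gaussian centered at 2.0, variance 0.25
    return -0.5 * (theta - 2.0) ** 2 / 0.25

def surrogate(theta):
    # cheap, deliberately biased approximation of the true log-likelihood
    return -0.5 * (theta - 1.9) ** 2 / 0.25

theta = 0.0
cur_s, cur_l = surrogate(theta), loglike(theta)
chain, n_true_evals = [], 0
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.5)
    prop_s = surrogate(prop)
    # Stage 1: screen the proposal using only the cheap surrogate.
    if np.log(rng.random()) < prop_s - cur_s:
        # Stage 2: correct with the true likelihood (the expensive call).
        prop_l = loglike(prop)
        n_true_evals += 1
        if np.log(rng.random()) < (prop_l - cur_l) - (prop_s - cur_s):
            theta, cur_s, cur_l = prop, prop_s, prop_l
    chain.append(theta)

post_mean = np.mean(chain[2000:])
print(post_mean, n_true_evals)
```

The second-stage correction factor cancels the surrogate's bias, so the chain still targets the true posterior while spending expensive evaluations only on proposals that survive the cheap filter.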

  13. The FIT Model - Fuel-cycle Integration and Tradeoffs

    Energy Technology Data Exchange (ETDEWEB)

    Steven J. Piet; Nick R. Soelberg; Samuel E. Bays; Candido Pereira; Layne F. Pincock; Eric L. Shaber; Meliisa C Teague; Gregory M Teske; Kurt G Vedros

    2010-09-01

    All mass streams from fuel separation and fabrication are products that must meet some set of product criteria – fuel feedstock impurity limits, waste acceptance criteria (WAC), material storage (if any), or recycle material purity requirements such as zirconium for cladding or lanthanides for industrial use. These must be considered in a systematic and comprehensive way. The FIT model and the “system losses study” team that developed it [Shropshire2009, Piet2010] are an initial step by the FCR&D program toward a global analysis that accounts for the requirements and capabilities of each component, as well as major material flows within an integrated fuel cycle. This will help the program identify near-term R&D needs and set longer-term goals. The question originally posed to the “system losses study” was the cost of separation, fuel fabrication, waste management, etc. versus the separation efficiency. In other words, are the costs associated with marginal reductions in separations losses (or improvements in product recovery) justified by the gains in the performance of other systems? We have learned that that is the wrong question. The right question is: how does one adjust the compositions and quantities of all mass streams, given uncertain product criteria, to balance competing objectives including cost? FIT is a method to analyze different fuel cycles using common bases to determine how chemical performance changes in one part of a fuel cycle (say used fuel cooling times or separation efficiencies) affect other parts of the fuel cycle. FIT estimates impurities in fuel and waste via a rough estimate of physics and mass balance for a set of technologies. If feasibility is an issue for a set, as it is for “minimum fuel treatment” approaches such as melt refining and AIROX, it can help to make an estimate of how performances would have to change to achieve feasibility.

  15. A Method to Model Season of Birth as a Surrogate Environmental Risk Factor for Disease

    Directory of Open Access Journals (Sweden)

    Susan Searles Nielsen

    2008-03-01

    Environmental exposures, including some that vary seasonally, may play a role in the development of many types of childhood diseases such as cancer. Those observed in children are unique in that the relevant period of exposure is inherently limited, or perhaps even specific, to a very short window during prenatal development or early infancy. As such, researchers have investigated whether specific childhood cancers are associated with season of birth. Typically a basic method of analysis has been used, for example categorization of births into one of four seasons, followed by simple comparisons between categories, such as via logistic regression, to obtain odds ratios (ORs), confidence intervals (CIs), and p-values. In this paper we present an alternative method, based upon an iterative trigonometric logistic regression model, used to analyze the cyclic nature of birth dates related to disease occurrence. Disease birth-date results are presented using a sinusoidal graph with a peak date of relative risk and a single p-value that tests whether an overall seasonal association is present. An OR and CI comparing children born in the 3-month period around the peak to the symmetrically opposite 3-month period can also be obtained. Advantages of this derivative-free method include ease of use, increased statistical power to detect associations, and the ability to avoid potentially arbitrary, subjective demarcation of seasons.
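
    The core of the trigonometric approach described above can be sketched as an ordinary logistic regression on sine and cosine terms of birth day-of-year; the peak date of relative risk then falls out of the fitted coefficients. The data, the assumed peak near day 172, and all parameter values below are simulated for illustration, not taken from the paper.

```python
# Harmonic ("trigonometric") logistic regression sketch: regress case status
# on sin/cos of birth day-of-year, then recover the peak-risk date from the
# fitted coefficients. All data here are simulated; the true peak (day 172)
# is an assumption for the demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
day = rng.integers(1, 366, size=n)              # birth day of year
theta = 2 * np.pi * day / 365.25
true_peak = 2 * np.pi * 172 / 365.25            # assumed peak near late June
p = 1 / (1 + np.exp(-(-2.0 + 0.5 * np.cos(theta - true_peak))))
case = rng.random(n) < p                        # simulated case/control status

X = np.column_stack([np.sin(theta), np.cos(theta)])
model = LogisticRegression().fit(X, case)
b_sin, b_cos = model.coef_[0]

# logit = b0 + A*cos(theta - phi), with A*sin(phi) = b_sin, A*cos(phi) = b_cos,
# so the peak angle is atan2(b_sin, b_cos).
peak_angle = np.arctan2(b_sin, b_cos) % (2 * np.pi)
peak_day = peak_angle * 365.25 / (2 * np.pi)
print(round(peak_day))
```

    A likelihood-ratio test of the sin/cos pair against an intercept-only model would give the single overall p-value for seasonality mentioned in the abstract.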

  16. The Secondary Organic Aerosol Processor (SOAP v1.0) model: a unified model with different ranges of complexity based on the molecular surrogate approach

    Science.gov (United States)

    Couvidat, F.; Sartelet, K.

    2015-04-01

    In this paper the Secondary Organic Aerosol Processor (SOAP v1.0) model is presented. This model determines the partitioning of organic compounds between the gas and particle phases. It is designed to be modular with different user options depending on the computation time and the complexity required by the user. This model is based on the molecular surrogate approach, in which each surrogate compound is associated with a molecular structure to estimate some properties and parameters (hygroscopicity, absorption into the aqueous phase of particles, activity coefficients and phase separation). Each surrogate can be hydrophilic (condenses only into the aqueous phase of particles), hydrophobic (condenses only into the organic phases of particles) or both (condenses into both the aqueous and the organic phases of particles). Activity coefficients are computed with the UNIFAC (UNIversal Functional group Activity Coefficient; Fredenslund et al., 1975) thermodynamic model for short-range interactions and with the Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients (AIOMFAC) parameterization for medium- and long-range interactions between electrolytes and organic compounds. Phase separation is determined by Gibbs energy minimization. The user can choose between an equilibrium representation and a dynamic representation of organic aerosols (OAs). In the equilibrium representation, compounds in the particle phase are assumed to be at equilibrium with the gas phase. However, recent studies show that the organic aerosol is not at equilibrium with the gas phase because the organic phases could be semi-solid (very viscous liquid phase). The condensation-evaporation of organic compounds could then be limited by the diffusion in the organic phases due to the high viscosity. An implicit dynamic representation of secondary organic aerosols (SOAs) is available in SOAP with OAs divided into layers, the first layer being at the center of the particle (slowly

  17. Direct model fitting to combine dithered ACS images

    Science.gov (United States)

    Mahmoudian, H.; Wucknitz, O.

    2013-08-01

    The information lost in images of undersampled CCD cameras can be recovered with the technique of "dithering". A number of subexposures are taken with sub-pixel shifts in order to record structures on scales smaller than a pixel. The standard method to combine such exposures, "Drizzle", averages after reversing the displacements, including rotations and distortions. More sophisticated methods are available to produce, e.g., Nyquist-sampled representations of band-limited inputs. While the combined images produced by these methods can be of high quality, their use as input for forward-modelling techniques in gravitational lensing is still not optimal, because residual artefacts affect the modelling results in unpredictable ways. In this paper we argue for an overall modelling approach that takes into account the dithering and the lensing without the intermediate product of a combined image. As one building block we introduce an alternative approach to combine dithered images by direct model fitting with a least-squares approach including a regularization constraint. We present tests with simulated and real data that show the quality of the results. The additional effects of gravitational lensing and the convolution with an instrumental point spread function can be included in a natural way, avoiding the possible systematic errors of previous procedures.
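
    A minimal one-dimensional sketch of the direct model-fitting idea: each subexposure is a shifted box-average of a fine-grid model, the exposures are stacked into one linear system, and a smoothness regularization constraint closes the problem. Grid sizes, the noise level, and the regularization weight are illustrative assumptions, not values from the paper.

```python
# 1-D sketch of combining dithered, undersampled exposures by direct
# regularized least-squares fitting of a fine-grid model.
import numpy as np

fine_n, factor = 40, 4                  # fine-grid model, 4x undersampled detector
rng = np.random.default_rng(1)
truth = np.exp(-0.5 * ((np.arange(fine_n) - 17.3) / 3.0) ** 2)

def downsample_matrix(shift):
    """Detector pixel = average of `factor` fine cells, offset by a sub-pixel shift."""
    rows = (fine_n - factor) // factor
    A = np.zeros((rows, fine_n))
    for i in range(rows):
        A[i, i * factor + shift : i * factor + shift + factor] = 1.0 / factor
    return A

# Four sub-pixel-shifted exposures with small noise.
mats = [downsample_matrix(s) for s in range(factor)]
data = [A @ truth + 0.01 * rng.standard_normal(A.shape[0]) for A in mats]

# Stack all exposures and solve (A^T A + lam * L^T L) x = A^T b,
# where L is the second-difference (smoothness) operator.
A = np.vstack(mats)
b = np.concatenate(data)
L = np.diff(np.eye(fine_n), n=2, axis=0)
lam = 1e-3
x = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)
print(float(np.max(np.abs(x - truth))))
```

    In the 2-D imaging case the same normal equations apply, with the forward operator additionally encoding rotations, distortions, the lensing mapping, and the instrumental point spread function.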

  18. Modeling Percentile Rank of Cardiorespiratory Fitness Across the Lifespan

    Science.gov (United States)

    Graves, Rasinio S.; Mahnken, Jonathan D.; Perea, Rodrigo D.; Billinger, Sandra A.; Vidoni, Eric D.

    2016-01-01

    Purpose The purpose of this investigation was to create an equation for continuous percentile rank of maximal oxygen consumption (VO2 max) from ages 20 to 99. Methods We used a two-stage modeling approach with existing normative data from the American College of Sports Medicine for VO2 max. First, we estimated intercept and slope parameters for each decade of life as a logistic function (stage one). We then modeled change in intercept and slope as functions of age (stage two) using weighted least squares regression. The resulting equations were used to predict fitness percentile rank based on age, sex, and VO2 max, and included estimates for individuals beyond 79 years old. Results We created a continuous, sex-specific model of VO2 max percentile rank across the lifespan. Conclusions Percentile ranking of VO2 max can be made continuous and can account for adults aged 20 to 99 with reasonable accuracy, improving the utility of this normalization procedure in practical and research settings, particularly in aging populations. PMID:26778922
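
    The two-stage structure described above can be sketched as a logistic function of VO2 max whose midpoint and scale drift linearly with age. The coefficients below are invented placeholders, not the published estimates.

```python
# Sketch of a continuous percentile-rank model: stage one is a logistic
# function of VO2 max; stage two lets the logistic midpoint and scale vary
# linearly with age. Coefficients a0, a1, b0, b1 are illustrative only.
import math

def percentile_rank(vo2max, age, a0=58.0, a1=-0.4, b0=9.0, b1=0.02):
    """Percentile rank (0-100) of VO2 max for a given age (placeholder fit)."""
    midpoint = a0 + a1 * age        # VO2 max at the 50th percentile for this age
    scale = b0 + b1 * age           # spread of the distribution at this age
    return 100.0 / (1.0 + math.exp(-(vo2max - midpoint) / scale))

print(round(percentile_rank(vo2max=42.0, age=40.0), 1))
```

    In the actual study, stage-two intercept and slope functions would be fit by weighted least squares to the decade-specific logistic parameters, and separate coefficient sets would be used for each sex.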

  19. Prediction of naphthenic acid species degradation by kinetic and surrogate models during the ozonation of oil sands process-affected water.

    Science.gov (United States)

    Islam, Md Shahinoor; Moreira, Jesús; Chelme-Ayala, Pamela; Gamal El-Din, Mohamed

    2014-09-15

    Oil sands process-affected water (OSPW) is a complex mixture of organic and inorganic contaminants, and suspended solids, generated by the oil sands industry during the bitumen extraction process. OSPW contains a large number of structurally diverse organic compounds, and due to variability of the water quality of different OSPW matrices, there is a need to select a group of easily measured surrogate parameters for monitoring and treatment process control. In this study, kinetic and surrogate correlation models were developed to predict the degradation of naphthenic acids (NAs) species during the ozonation of OSPW. Additionally, the speciation and distribution of classical and oxidized NA species in raw and ozonated OSPW were also examined. The structure-reactivity of NA species indicated that the reactivity of individual NA species increased as the carbon and hydrogen deficiency numbers increased. The kinetic parameters obtained in this study allowed calculating the evolution of the concentrations of the acid-extractable fraction (AEF), chemical oxygen demand (COD), and NA distributions for a given ozonation process. High correlations between the AEF and COD and NA species were found, suggesting that AEF and COD can be used as surrogate parameters to predict the degradation of NAs during the ozonation of OSPW.
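
    The kinetic idea can be illustrated with a pseudo-first-order decay of each NA species against ozone dose, with a rate constant that grows with carbon number and hydrogen-deficiency, mirroring the structure-reactivity trend reported above. The functional form, rate constants, and doses below are invented for illustration, not the study's fitted parameters.

```python
# Illustrative pseudo-first-order kinetic sketch for NA species degradation
# during ozonation. rate_constant() encodes an assumed structure-reactivity
# trend (reactivity rising with carbon number n and |Z|); all numbers are
# made up for demonstration.
import numpy as np

def na_remaining(c0, k, dose):
    """Pseudo-first-order decay of one NA species with applied ozone dose."""
    return c0 * np.exp(-k * dose)

def rate_constant(n_carbon, z, k_base=0.005):
    """Assumed trend: k increases with carbon number and hydrogen deficiency -Z."""
    return k_base * (1 + 0.1 * (n_carbon - 10) + 0.05 * abs(z))

doses = np.array([0.0, 30.0, 60.0, 100.0])    # ozone doses, mg/L (illustrative)
k = rate_constant(n_carbon=14, z=-4)
profile = na_remaining(1.0, k, doses)
print(profile.round(3))
```

    A surrogate correlation in the spirit of the abstract would then regress total NA concentration on the easily measured AEF or COD values and use that linear fit for process monitoring.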

  20. Methodical fitting for mathematical models of rubber-like materials

    Science.gov (United States)

    Destrade, Michel; Saccomandi, Giuseppe; Sgura, Ivonne

    2017-02-01

    A great variety of models can describe the nonlinear response of rubber to uniaxial tension. Yet an in-depth understanding of the successive stages of large extension is still lacking. We show that the response can be broken down into three steps, which we delineate by relying on a simple formatting of the data, the so-called Mooney plot transform. First, the small-to-moderate regime, where the polymeric chains unfold easily and the Mooney plot is almost linear. Second, the strain-hardening regime, where blobs of bundled chains unfold to stiffen the response in correspondence to the `upturn' of the Mooney plot. Third, the limiting-chain regime, with a sharp stiffening occurring as the chains extend towards their limit. We provide strain-energy functions with terms accounting for each stage that (i) give an accurate local and then global fitting of the data; (ii) are consistent with weak nonlinear elasticity theory; and (iii) can be interpreted in the framework of statistical mechanics. We apply our method to Treloar's classical experimental data and also to some more recent data. Our method not only provides models that describe the experimental data with a very low quantitative relative error, but also shows that the theory of nonlinear elasticity is much more robust than it seemed at first sight.
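
    The Mooney plot transform mentioned above maps the nominal uniaxial stress f to the reduced stress g = f / (λ − λ⁻²), plotted against 1/λ; for a neo-Hookean solid g is constant, so departures from a flat line expose the successive regimes. The data below are synthetic Mooney-Rivlin stresses with assumed constants, just to show the transform.

```python
# Mooney plot transform sketch: for Mooney-Rivlin, the reduced stress
# g = f/(lam - lam**-2) = 2*C1 + 2*C2/lam is exactly linear in 1/lam,
# as in the small-to-moderate regime described above. C1, C2 are assumed.
import numpy as np

lam = np.linspace(1.1, 7.0, 60)                    # stretch ratios
C1, C2 = 0.20, 0.05                                # Mooney-Rivlin constants (MPa, assumed)
f = 2 * (C1 + C2 / lam) * (lam - lam**-2)          # nominal stress

g = f / (lam - lam**-2)                            # reduced (Mooney) stress
x = 1 / lam
slope, intercept = np.polyfit(x, g, 1)             # should recover 2*C2 and 2*C1
print(round(slope, 3), round(intercept, 3))
```

    Real rubber data would show the fitted line failing at small 1/λ (the strain-hardening upturn), which is where the additional strain-energy terms of the paper take over.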

  1. Aerodynamic wind-turbine rotor design using surrogate modeling and three-dimensional viscous-inviscid interaction technique

    DEFF Research Database (Denmark)

    Sessarego, Matias; Ramos García, Néstor; Yang, Hua;

    2016-01-01

    In this paper a surrogate optimization methodology using a three-dimensional viscous-inviscid interaction code for the aerodynamic design of wind-turbine rotors is presented. The framework presents a unique approach because it does not require the commonly-used blade element momentum (BEM) method. … performance can be achieved using the new design method, and the methodology is effective for the aerodynamic design of wind-turbine rotors.

  2. A cautionary note on the use of information fit indexes in covariance structure modeling with means

    NARCIS (Netherlands)

    Wicherts, J.M.; Dolan, C.V.

    2004-01-01

    Information fit indexes such as Akaike Information Criterion, Consistent Akaike Information Criterion, Bayesian Information Criterion, and the expected cross validation index can be valuable in assessing the relative fit of structural equation models that differ regarding restrictiveness. In cases i

  3. How to apply case reports in clinical practice using surrogate models via example of the trigeminocardiac reflex.

    Science.gov (United States)

    Sandu, Nora; Chowdhury, Tumul; Schaller, Bernhard J

    2016-04-06

    Case reports are an increasing source of evidence in clinical medicine. Until a few years ago, such case reports were merged into systematic reviews; nowadays they often feed the development of clinical (thinking) models. We describe this modern progress of knowledge creation using the example of the trigeminocardiac reflex, which was first described in 1999 in a case series and was developed via cause-and-effect relationships and triangulation into systematic reviews, and finally into thinking models. This editorial therefore underlines the increasing and outstanding importance of (unique) case reports not only in current science, but also in current clinical decision-making, and hence also the importance of specific journals like the Journal of Medical Case Reports.

  4. A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting

    Science.gov (United States)

    Soltani-Mohammadi, Saeed; Safa, Mohammad

    2016-09-01

    Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies, because if the variogram model parameters are tainted with uncertainty, the latter will spread into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases use is made of automatic fitting, which combines geostatistical principles with optimization techniques to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty in the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) used in automatic fitting. Also, since the variogram model function (f) and the number of structures (m) affect the model quality as well, a program has been provided in the MATLAB software that can present optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single- and multi-structured fitted models, use has been made of the cross-validation method, and the best model is introduced to the user as the output. In order to check the capability of the proposed objective function and the procedure, 3 case studies have been presented.
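
    A toy version of the automatic fitting idea above: simulated annealing minimizes a weighted least-squares objective to fit a spherical variogram model (nugget, sill, range) to experimental variogram points. The cooling schedule, weights, and synthetic data are simple illustrative choices, not the paper's implementation.

```python
# Simulated-annealing fit of a spherical variogram to synthetic experimental
# points, minimizing pair-count-weighted least squares. All settings assumed.
import math, random

def spherical(h, nugget, sill, a):
    """Spherical variogram model with range a."""
    if h >= a:
        return nugget + sill
    r = h / a
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)

def wls(params, lags, gammas, npairs):
    """Weighted least-squares objective (weights = number of pairs per lag)."""
    nug, sill, a = params
    if nug < 0 or sill <= 0 or a <= 0:
        return float("inf")
    return sum(n * (g - spherical(h, nug, sill, a)) ** 2
               for h, g, n in zip(lags, gammas, npairs))

random.seed(0)
lags = [10, 20, 30, 40, 60, 80, 100]
gammas = [spherical(h, 0.1, 0.9, 70.0) + random.gauss(0, 0.02) for h in lags]
npairs = [50] * len(lags)

state = (0.0, 1.0, 50.0)                      # initial (nugget, sill, range)
cost = wls(state, lags, gammas, npairs)
T = 1.0                                       # annealing temperature
for step in range(20000):
    cand = tuple(max(1e-6, s + random.gauss(0, 0.05 if i < 2 else 2.0))
                 for i, s in enumerate(state))
    c = wls(cand, lags, gammas, npairs)
    if c < cost or random.random() < math.exp(-(c - cost) / T):
        state, cost = cand, c                 # accept better, or worse w.p. exp(-d/T)
    T = max(1e-4, T * 0.999)                  # geometric cooling

print([round(s, 3) for s in state], round(cost, 3))
```

    The paper's fuller procedure would additionally search over the model function and the number of nested structures, then rank candidates by cross-validation.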

  5. A quantitative confidence signal detection model: 1. Fitting psychometric functions.

    Science.gov (United States)

    Yi, Yongwoo; Merfeld, Daniel M

    2016-04-01

    Perceptual thresholds are commonly assayed in the laboratory and clinic. When precision and accuracy are required, thresholds are quantified by fitting a psychometric function to forced-choice data. The primary shortcoming of this approach is that it typically requires 100 trials or more to yield accurate (i.e., small bias) and precise (i.e., small variance) psychometric parameter estimates. We show that confidence probability judgments combined with a model of confidence can yield psychometric parameter estimates that are markedly more precise and/or markedly more efficient than conventional methods. Specifically, both human data and simulations show that including confidence probability judgments for just 20 trials can yield psychometric parameter estimates that match the precision of those obtained from 100 trials using conventional analyses. Such an efficiency advantage would be especially beneficial for tasks (e.g., taste, smell, and vestibular assays) that require more than a few seconds for each trial, but this potential benefit could accrue for many other tasks. Copyright © 2016 the American Physiological Society.
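
    The conventional baseline referenced above, fitting a psychometric function to forced-choice data by maximum likelihood, can be sketched as follows; the paper's contribution (adding confidence probability judgments) is not reproduced here. Stimulus levels, trial counts, and the simulated observer are assumptions.

```python
# Maximum-likelihood fit of a cumulative-Gaussian psychometric function to
# simulated binary forced-choice data (~100 trials, as in conventional use).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
levels = np.repeat(np.array([-4, -2, -1, 0, 1, 2, 4], float), 15)  # 105 trials
true_mu, true_sigma = 0.5, 1.5                  # simulated observer (assumed)
resp = rng.random(levels.size) < norm.cdf((levels - true_mu) / true_sigma)

def nll(params):
    """Negative log-likelihood; sigma parameterized on the log scale."""
    mu, log_sigma = params
    p = norm.cdf((levels - mu) / np.exp(log_sigma))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(resp * np.log(p) + (~resp) * np.log(1 - p))

fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(round(mu_hat, 2), round(sigma_hat, 2))
```

    The abstract's claim is that augmenting the likelihood with per-trial confidence probabilities can match this precision with roughly 20 trials instead of 100.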

  6. RNA virus evolution via a fitness-space model

    Energy Technology Data Exchange (ETDEWEB)

    Tsimring, L.S.; Levine, H. [Institute for Nonlinear Science, University of California, San Diego, La Jolla, California 92093-0402 (United States); Kessler, D.A. [Department of Physics, Bar-Ilan University, Ramat Gan 52900 (Israel)

    1996-06-01

    We present a mean-field theory for the evolution of RNA virus populations. The theory operates with a distribution of the population in a one-dimensional fitness space, and is valid for sufficiently smooth fitness landscapes. Our approach explains naturally the recent experimental observation [I. S. Novella {ital et} {ital al}., Proc. Natl. Acad. Sci. U.S.A. {bold 92}, 5841{endash}5844 (1995)] of two distinct stages in the growth of virus fitness. {copyright} {ital 1995 The American Physical Society.}

  7. Fitness voter model: Damped oscillations and anomalous consensus

    Science.gov (United States)

    Woolcock, Anthony; Connaughton, Colm; Merali, Yasmin; Vazquez, Federico

    2017-09-01

    We study the dynamics of opinion formation in a heterogeneous voter model on a complete graph, in which each agent is endowed with an integer fitness parameter k ≥ 0, in addition to its + or − opinion state. The evolution of the distribution of k-values and the opinion dynamics are coupled together, so as to allow the system to dynamically develop heterogeneity and memory in a simple way. When two agents with different opinions interact, their k-values are compared, and with probability p the agent with the lower value adopts the opinion of the one with the higher value, while with probability 1 − p the opposite happens. The agent that keeps its opinion (winning agent) increments its k-value by one. We study the dynamics of the system in the entire 0 ≤ p ≤ 1 range and compare with the case p = 1/2, in which opinions are decoupled from the k-values and the dynamics is equivalent to that of the standard voter model. When 0 ≤ p < 1/2, the mean consensus time τ appears to grow logarithmically with the number of agents N, and it is greatly decreased relative to the linear behavior τ ∼ N found in the standard voter model. When 1/2 < p < 1, the consensus time is reduced relative to that of the standard voter model, although it still scales linearly with N. The p = 1 case is special, with a relaxation to coexistence that scales as t^(−2.73) and a consensus time that scales as τ ∼ N^β, with β ≃ 1.45.
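
    The interaction rule above translates directly into a Monte Carlo simulation on a complete graph. The tie-breaking rule for equal k-values and all sizes below are assumptions of this sketch, not specifications from the paper.

```python
# Monte Carlo sketch of the fitness voter model on a complete graph: agents
# hold an opinion (+1/-1) and an integer fitness k; when disagreeing agents
# meet, the higher-k agent wins with probability p, and the winner's k grows.
import random

def consensus_time(n, p, seed=0, max_steps=10**6):
    """Number of pair-interaction steps until consensus (or max_steps)."""
    rng = random.Random(seed)
    opinion = [1 if i < n // 2 else -1 for i in range(n)]
    k = [0] * n                       # fitness counters
    plus = n // 2                     # number of +1 agents
    for t in range(max_steps):
        if plus == 0 or plus == n:
            return t                  # consensus reached
        i, j = rng.sample(range(n), 2)
        if opinion[i] == opinion[j]:
            continue
        # Higher-fitness agent wins with probability p (ties broken by pick order).
        hi, lo = (i, j) if k[i] >= k[j] else (j, i)
        winner, loser = (hi, lo) if rng.random() < p else (lo, hi)
        plus += 1 if opinion[winner] == 1 else -1
        opinion[loser] = opinion[winner]
        k[winner] += 1                # winner increments its fitness
    return max_steps

t_fast = consensus_time(50, p=0.8)
t_voter = consensus_time(50, p=0.5)   # p = 1/2 reduces to the standard voter model
print(t_fast, t_voter)
```

    Averaging such runs over many seeds and system sizes is how the τ ∼ N, logarithmic, and τ ∼ N^β scaling regimes quoted in the abstract would be measured.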

  8. Convergence, Admissibility, and Fit of Alternative Confirmatory Factor Analysis Models for MTMM Data

    Science.gov (United States)

    Lance, Charles E.; Fan, Yi

    2016-01-01

    We compared six different analytic models for multitrait-multimethod (MTMM) data in terms of convergence, admissibility, and model fit to 258 samples of previously reported data. Two well-known models, the correlated trait-correlated method (CTCM) and the correlated trait-correlated uniqueness (CTCU) models, were fit for reference purposes in…

  10. An Application of M[subscript 2] Statistic to Evaluate the Fit of Cognitive Diagnostic Models

    Science.gov (United States)

    Liu, Yanlou; Tian, Wei; Xin, Tao

    2016-01-01

    The fit of cognitive diagnostic models (CDMs) to response data needs to be evaluated, since CDMs might yield misleading results when they do not fit the data well. Limited-information statistic M[subscript 2] and the associated root mean square error of approximation (RMSEA[subscript 2]) in item factor analysis were extended to evaluate the fit of…

  11. A conceptual model of family surrogate end-of-life decision-making process in the nursing home setting: goals of care as guiding stars.

    Science.gov (United States)

    Bern-Klug, Mercedes

    2014-01-01

    An increasing proportion of dying is occurring in America's nursing homes (NH). Family members are involved in (and affected by) medical decision-making on behalf of NH residents approaching the end of life, especially when the resident is cognitively impaired. This article proposes an empirically derived conceptual model of the key factors NH family surrogate decision-makers consider when establishing or changing goals of care and the iterative process as applied to the NH setting. This model also establishes the importance of family social role expectations toward their loved one as well as the concept, "stance toward dying," as key in establishing or changing the main goal of care. NH staff and physicians can use the model as a framework for providing information and support to family members. Research is needed to better understand how to prepare staff and settings to support family surrogate decision-makers, in particular around setting goals of care. The model can be generalized beyond nursing homes.

  12. Keratoconus, cross-link-induction, comparison between fitting exponential function and a fitting equation obtained by a mathematical model.

    Science.gov (United States)

    Albanese, A; Urso, R; Bianciardi, L; Rigato, M; Battisti, E

    2009-11-01

    With reference to experimental data in the literature, we present a model consisting of two elastic elements, conceived to simulate resistance to stretching, at constant velocity of elongation, of corneal tissue affected by keratoconus, treated with riboflavin and ultraviolet irradiation to induce cross-linking. The function describing the model's behaviour was fitted to the stress and strain values. It was found that the Young's moduli of the two elastic elements increased in cross-linked tissues and that cross-linking treatment therefore increased corneal rigidity. This observation is substantially in line with the conclusion reported in the literature, obtained using an exponential fitting function. It is observed, however, that the latter function implies a condition of non-zero stress without strain, and does not provide interpretative insights, for lack of any biomechanical basis. Above all, the function fits a singular trend, inexplicably claimed to be viscoelastic, with surprising perfection. In any case, using the reported data, the study demonstrates that a fitting equation obtained by a modelling approach not only shows the evident efficacy of the treatment, but also provides orientations for studying the modifications induced in cross-linked fibres.

  13. Evolution of N-species Kimura/voter models towards criticality, a surrogate for general models of accidental pathogens

    Science.gov (United States)

    Ghaffari, Peyman; Stollenwerk, Nico

    2012-09-01

    In models for accidental pathogens, with the paradigmatic epidemiological system of bacterial meningitis, evolution towards states exhibiting critical fluctuations with power-law behaviour was observed [1]. This is a model with many possibly pathogenic strains essentially evolving independently to low pathogenicity. A first and previous study had shown that in the limit of vanishing pathogenicity, critical fluctuations with power-law distributions are observed already when only two strains interact [2]. This earlier version of a two-strain model was very recently reinvestigated [3] and named the Stollenwerk-Jansen (SJ) model. Muñoz et al. demonstrated that this two-strain model for accidental pathogens is in the universality class of the so-called voter model. Though this model clearly shows criticality, its control parameter, the pathogenicity, is not self-tuning towards criticality. However, the multi-strain version mentioned above [1] does evolve towards criticality, as does a spatially explicit version of it, shown in [4] p. 155. These models of multi-strain type, explicitly including mutations of the pathogenicity, can be called SJ-models of type II [5]. Since the original epidemiological model is of SIRYX-type, the evolution to zero pathogenicity is slow and perturbed by large population noise. In the present article we now show, on the basis of the notion of voter-model universality classes, the evolution of n-voter models with mutation towards criticality, now much less perturbed by population noise, hence demonstrating a clear mechanism of self-organized criticality in the sense of [6, 7]. The present results have wide implications for many diseases in which a large proportion of infections is asymptomatic, meaning that the system has already evolved towards an average low pathogenicity. This holds not only for the original paradigmatic case of bacterial meningitis, but was recently also suggested, for example, for dengue fever (DENFREE).

  14. Regularization Methods for Fitting Linear Models with Small Sample Sizes: Fitting the Lasso Estimator Using R

    Science.gov (United States)

    Finch, W. Holmes; Finch, Maria E. Hernandez

    2016-01-01

    Researchers and data analysts are sometimes faced with the problem of very small samples, where the number of variables approaches or exceeds the overall sample size; i.e. high dimensional data. In such cases, standard statistical models such as regression or analysis of variance cannot be used, either because the resulting parameter estimates…

  15. Birds as biodiversity surrogates

    DEFF Research Database (Denmark)

    Larsen, Frank Wugt; Bladt, Jesper Stentoft; Balmford, Andrew

    2012-01-01

    1. Most biodiversity is still unknown, and therefore, priority areas for conservation typically are identified based on the presence of surrogates, or indicator groups. Birds are commonly used as surrogates of biodiversity owing to the wide availability of relevant data and their broad popular appeal. However, some studies have found birds to perform relatively poorly as indicators. We therefore ask how the effectiveness of this approach can be improved by supplementing data on birds with information on other taxa. 2. Here, we explore two strategies using (i) species data for other taxa … areas identified on the basis of birds alone performed well in representing overall species diversity where birds were relatively speciose compared to the other taxa in the data sets. Adding species data for one taxon increased surrogate effectiveness better than adding genus- and family-level data…

  16. Regularization Methods for Fitting Linear Models with Small Sample Sizes: Fitting the Lasso Estimator Using R

    Directory of Open Access Journals (Sweden)

    W. Holmes Finch

    2016-05-01

    Researchers and data analysts are sometimes faced with the problem of very small samples, where the number of variables approaches or exceeds the overall sample size; i.e. high dimensional data. In such cases, standard statistical models such as regression or analysis of variance cannot be used, either because the resulting parameter estimates exhibit very high variance and can therefore not be trusted, or because the statistical algorithm cannot converge on parameter estimates at all. There exists an alternative set of model estimation procedures, known collectively as regularization methods, which can be used in such circumstances, and which have been shown through simulation research to yield accurate parameter estimates. The purpose of this paper is to describe, for those unfamiliar with them, the most popular of these regularization methods, the lasso, and to demonstrate its use on an actual high dimensional dataset involving adults with autism, using the R software language. Results of analyses relating measures of executive functioning with a full scale intelligence test score are presented, and implications of using these models are discussed.
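
    The abstract demonstrates the lasso in R; an equivalent sketch in Python with scikit-learn is shown below for a high-dimensional setting (more predictors than observations), with the penalty chosen by cross-validation. The data are simulated, not the autism dataset described above.

```python
# Lasso in an n << p setting: ordinary least squares is unusable here, but
# the L1 penalty yields a sparse, estimable model. Data and sparsity pattern
# are simulated for illustration.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
n, p_dim = 40, 100                       # 40 observations, 100 predictors
X = rng.standard_normal((n, p_dim))
beta = np.zeros(p_dim)
beta[:3] = [2.0, -1.5, 1.0]              # only 3 truly active predictors
y = X @ beta + 0.5 * rng.standard_normal(n)

lasso = LassoCV(cv=5).fit(X, y)          # cross-validation picks the penalty
active = np.flatnonzero(lasso.coef_)     # predictors kept by the L1 penalty
print(len(active), round(lasso.alpha_, 3))
```

    The same analysis in R would use `glmnet::cv.glmnet`; the key point in both languages is that the L1 penalty zeroes out most coefficients, so the fitted model remains interpretable despite p exceeding n.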

  17. Fitting Item Response Theory Models to Two Personality Inventories: Issues and Insights.

    Science.gov (United States)

    Chernyshenko, Oleksandr S.; Stark, Stephen; Chan, Kim-Yin; Drasgow, Fritz; Williams, Bruce

    2001-01-01

    Compared the fit of several Item Response Theory (IRT) models to two personality assessment instruments using data from 13,059 individuals responding to one instrument and 1,770 individuals responding to the other. Two- and three-parameter logistic models fit some scales reasonably well, but not others, and the graded response model generally did…

  18. Computationally efficient and flexible modular modelling approach for river and urban drainage systems based on surrogate conceptual models

    Science.gov (United States)

    Wolfs, Vincent; Willems, Patrick

    2015-04-01

    Water managers rely increasingly on mathematical simulation models that represent individual parts of the water system, such as the river, sewer system or waste water treatment plant. The current evolution towards integral water management requires the integration of these distinct components, leading to an increased model scale and scope. Besides this growing model complexity, certain applications gained interest and importance, such as uncertainty and sensitivity analyses, auto-calibration of models and real time control. All these applications share the need for models with a very limited calculation time, either for performing a large number of simulations, or a long term simulation followed by a statistical post-processing of the results. The use of the commonly applied detailed models that solve (part of) the de Saint-Venant equations is infeasible for these applications or such integrated modelling due to several reasons, of which a too long simulation time and the inability to couple submodels made in different software environments are the main ones. Instead, practitioners must use simplified models for these purposes. These models are characterized by empirical relationships and sacrifice model detail and accuracy for increased computational efficiency. The presented research discusses the development of a flexible integral modelling platform that complies with the following three key requirements: (1) Include a modelling approach for water quantity predictions for rivers, floodplains, sewer systems and rainfall runoff routing that require a minimal calculation time; (2) A fast and semi-automatic model configuration, thereby making maximum use of data of existing detailed models and measurements; (3) Have a calculation scheme based on open source code to allow for future extensions or the coupling with other models. First, a novel and flexible modular modelling approach based on the storage cell concept was developed. This approach divides each

  19. A fitted neoprene garment to cover dressings in swine models.

    Science.gov (United States)

    Mino, Matthew J; Mauskar, Neil A; Matt, Sara E; Pavlovich, Anna R; Prindeze, Nicholas J; Moffatt, Lauren T; Shupp, Jeffrey W

    2012-12-17

    Domesticated porcine species are commonly used in studies of wound healing, owing to similarities between porcine skin and human skin. Such studies often involve wound dressings, and keeping these dressings intact on the animal can be a challenge. The authors describe a novel and simple technique for constructing a fitted neoprene garment for pigs that covers dressings and maintains their integrity during experiments.

  20. Development of multi-component diesel surrogate fuel models – Part II: Validation of the integrated mechanisms in 0-D kinetic and 2-D CFD spray combustion simulations

    DEFF Research Database (Denmark)

    Poon, Hiew Mun; Pang, Kar Mun; Ng, Hoon Kiat;

    2016-01-01

    The aim of this study is to develop compact yet comprehensive multi-component diesel surrogate fuel models for computational fluid dynamics (CFD) spray combustion modelling studies. The fuel constituent reduced mechanisms, including n-hexadecane (HXN), 2,2,4,4,6,8,8-heptamethylnonane (HMN), cyclohexane (CHX) and toluene, developed in Part I are applied in this work. They are combined to produce two different versions of multi-component diesel surrogate models in the form of MCDS1 (HXN + HMN) and MCDS2 (HXN + HMN + toluene + CHX). The integrated mechanisms are then comprehensively validated in zero-dimensional kinetic and two-dimensional CFD spray combustion simulations. … fuel model for diesel fuels with CN values ranging from 15 to 100. It also shows that MCDS2 is a more appropriate surrogate model for fuels with aromatics and cyclo-paraffinic contents, particularly when soot calculation is of main interest.

  1. Plutonium radiation surrogate

    Science.gov (United States)

    Frank, Michael I [Dublin, CA

    2010-02-02

    A self-contained source of gamma-ray and neutron radiation suitable for use as a radiation surrogate for weapons-grade plutonium is described. The source generates a radiation spectrum similar to that of weapons-grade plutonium at 5% energy resolution between 59 and 2614 keV, but contains no special nuclear material and emits little .alpha.-particle radiation. The weapons-grade plutonium radiation surrogate also emits neutrons having fluxes commensurate with the gamma-radiation intensities employed.

  2. The FITS model office ergonomics program: a model for best practice.

    Science.gov (United States)

    Chim, Justine M Y

    2014-01-01

    An effective office ergonomics program can predict positive results in reducing musculoskeletal injury rates, enhancing productivity, and improving staff well-being and job satisfaction. Its objective is to provide a systematic solution to manage the potential risk of musculoskeletal disorders among computer users in an office setting. The FITS Model Office Ergonomics Program has been developed, drawing on the legislative requirements for promoting the health and safety of workers using computers for extended periods, on previous research findings, and on practical industrial knowledge of ergonomics, occupational health and safety management, and human resources management in Hong Kong and overseas. This paper proposes a comprehensive office ergonomics program, the FITS Model, which considers (1) Furniture Evaluation and Selection; (2) Individual Workstation Assessment; (3) Training and Education; (4) Stretching Exercises and Rest Break as elements of an effective program. An experienced ergonomics practitioner should be included in the program design and implementation. Through the FITS Model Office Ergonomics Program, the risk of musculoskeletal disorders among computer users can be eliminated or minimized, and workplace health and safety and employees' wellness enhanced.

  3. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    CERN Document Server

    Mead, Alexander; Heymans, Catherine; Joudaki, Shahab; Heavens, Alan

    2015-01-01

    We present an optimised variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically-motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of $\Lambda$CDM and $w$CDM models the halo-model power is accurate to $\simeq 5$ per cent for $k\leq 10h\,\mathrm{Mpc}^{-1}$ and $z\leq 2$. We compare our results with recent revisions of the popular HALOFIT model and show that our predictions are more accurate. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limi...

  4. The effects of post-exposure smallpox vaccination on clinical disease presentation: addressing the data gaps between historical epidemiology and modern surrogate model data.

    Science.gov (United States)

    Keckler, M Shannon; Reynolds, Mary G; Damon, Inger K; Karem, Kevin L

    2013-10-25

    Decades after public health interventions - including pre- and post-exposure vaccination - were used to eradicate smallpox, zoonotic orthopoxvirus outbreaks and the potential threat of a release of variola virus remain public health concerns. Routine prophylactic smallpox vaccination of the public ceased worldwide in 1980, and the adverse event rate associated with the currently licensed live vaccinia virus vaccine makes reinstatement of policies recommending routine pre-exposure vaccination unlikely in the absence of an orthopoxvirus outbreak. Consequently, licensing of safer vaccines and therapeutics that can be used post-orthopoxvirus exposure is necessary to protect the global population from these threats. Variola virus is a solely human pathogen that does not naturally infect any other known animal species. Therefore, the use of surrogate viruses in animal models of orthopoxvirus infection is important for the development of novel vaccines and therapeutics. Major complications involved with the use of surrogate models include both the absence of a model that accurately mimics all aspects of human smallpox disease and a lack of reproducibility across model species. These complications limit our ability to model post-exposure vaccination with newer vaccines for application to human orthopoxvirus outbreaks. This review seeks to (1) summarize conclusions about the efficacy of post-exposure smallpox vaccination from historic epidemiological reports and modern animal studies; (2) identify data gaps in these studies; and (3) summarize the clinical features of orthopoxvirus-associated infections in various animal models to identify those models that are most useful for post-exposure vaccination studies. The ultimate purpose of this review is to provide observations and comments regarding available model systems and data gaps for use in improving post-exposure medical countermeasures against orthopoxviruses.

  5. Modelling population dynamics model formulation, fitting and assessment using state-space methods

    CERN Document Server

    Newman, K B; Morgan, B J T; King, R; Borchers, D L; Cole, D J; Besbeas, P; Gimenez, O; Thomas, L

    2014-01-01

    This book gives a unifying framework for estimating the abundance of open populations: populations subject to births, deaths and movement, given imperfect measurements or samples of the populations.  The focus is primarily on populations of vertebrates for which dynamics are typically modelled within the framework of an annual cycle, and for which stochastic variability in the demographic processes is usually modest. Discrete-time models are developed in which animals can be assigned to discrete states such as age class, gender, maturity,  population (within a metapopulation), or species (for multi-species models). The book goes well beyond estimation of abundance, allowing inference on underlying population processes such as birth or recruitment, survival and movement. This requires the formulation and fitting of population dynamics models.  The resulting fitted models yield both estimates of abundance and estimates of parameters characterizing the underlying processes.  

  6. Effectiveness of external respiratory surrogates for in vivo liver motion estimation

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Kai-Hsiang; Ho, Ming-Chih; Yeh, Chi-Chuan; Chen, Yu-Chien; Lian, Feng-Li; Lin, Win-Li; Yen, Jia-Yush; Chen, Yung-Yaw [Department of Electrical Engineering, National Taiwan University, Taipei 10617, Taiwan (China); Department of Surgery, National Taiwan University Hospital and College of Medicine, National Taiwan University, Taipei 10041, Taiwan (China); Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei 10617, Taiwan (China); Department of Electrical Engineering, National Taiwan University, Taipei 10617, Taiwan (China); Institute of Biomedical Engineering, National Taiwan University, Taipei 10041, Taiwan (China); Department of Mechanical Engineering, National Taiwan University, Taipei 10617, Taiwan (China); Department of Electrical Engineering and Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei 10617, Taiwan (China)

    2012-08-15

    Purpose: Due to the low frame rate of MRI and the high radiation dose from fluoroscopy and CT, liver motion estimation using external respiratory surrogate signals seems to be a better approach to track liver motion in real time for liver tumor treatments in radiotherapy and thermotherapy. This work proposes a liver motion estimation method based on external respiratory surrogate signals. Animal experiments are also conducted to investigate related issues, such as the sensor arrangement, multisensor fusion, and the effective time period. Methods: Liver motion and abdominal motion are both induced by respiration and have been shown to be highly correlated. Unlike liver motion, which is difficult to measure directly, abdominal motion can be measured easily. Based on this idea, our study is split into the model-fitting stage and the motion estimation stage. In the first stage, the correlation between the surrogates and the liver motion is studied and established via a linear regression method. In the second stage, the liver motion is estimated from the surrogate signals with the correlation model. Animal experiments on cases of single surrogate signal, multisurrogate signals, and long-term surrogate signals are conducted and discussed to verify the practical use of this approach. Results: The results show that the best single sensor location is at the middle of the upper abdomen, while multisurrogate models are generally better than the single ones. The estimation error is reduced from 0.6 mm for the single surrogate models to 0.4 mm for the multisurrogate models. The long-term validity of the estimation models is quite satisfactory within the period of 10 min with the estimation error less than 1.4 mm. Conclusions: External respiratory surrogate signals from abdominal motion produce good performance for liver motion estimation in real time. Multisurrogate signals enhance estimation accuracy, and the estimation model can maintain its accuracy for at least 10 min. This
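    The two-stage procedure described in this abstract (fit a linear correlation model on a training window, then estimate liver motion from the surrogates alone) can be sketched with synthetic signals standing in for the animal data; all amplitudes, noise levels, and sensor roles below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the animal data: a common breathing cycle drives
# two abdominal surrogate sensors and the liver displacement (in mm).
t = np.linspace(0.0, 60.0, 1200)                 # 60 s sampled at 20 Hz
breath = np.sin(2 * np.pi * 0.25 * t)            # ~15 breaths per minute
s1 = breath + 0.05 * rng.standard_normal(t.size)        # upper-abdomen sensor
s2 = 0.6 * breath + 0.05 * rng.standard_normal(t.size)  # lower-abdomen sensor
liver = 8.0 * breath + 0.1 * rng.standard_normal(t.size)

# Stage 1 (model fitting): least-squares regression of liver motion on the
# surrogate signals over a 30 s training window.
X = np.column_stack([np.ones(t.size), s1, s2])
train = slice(0, 600)
coef, *_ = np.linalg.lstsq(X[train], liver[train], rcond=None)

# Stage 2 (motion estimation): predict liver motion from surrogates alone
# and evaluate on the held-out half of the recording.
pred = X @ coef
rmse = np.sqrt(np.mean((pred[600:] - liver[600:]) ** 2))
```

    In this toy setup the held-out error with two surrogates is a fraction of a millimetre, mirroring the abstract's finding that multisurrogate models outperform single-sensor ones.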

  7. A New Finite Interval Lifetime Distribution Model for Fitting Bathtub-Shaped Failure Rate Curve

    Directory of Open Access Journals (Sweden)

    Xiaohong Wang

    2015-01-01

    Full Text Available This paper raises a new four-parameter fitting model to describe the bathtub curve, which is widely used in research on component life analysis; it explains the model parameters and provides a parameter estimation method as well as application examples using some well-known lifetime data. Comparative analysis between the new model and some existing bathtub-curve fitting models shows that the new model is convenient and its parameters are clearly interpretable; moreover, it is universally applicable, suiting not only bathtub-shaped failure rate curves but also constant, increasing, and decreasing failure rate curves.
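    The paper's own four-parameter model is not given in this record, so as a stand-in illustration here is the classical additive Weibull hazard, a well-known form that produces the bathtub shape discussed above (parameter values are arbitrary).

```python
import numpy as np

def additive_weibull_hazard(t, a, b, c, d):
    """Additive Weibull failure rate h(t) = a*b*(a*t)**(b-1) + c*d*(c*t)**(d-1).

    With shape b < 1 the first term decreases (early failures) and with
    d > 1 the second term increases (wear-out), giving a bathtub curve.
    """
    return a * b * (a * t) ** (b - 1) + c * d * (c * t) ** (d - 1)

# Arbitrary illustrative parameters; the minimum of h(t) is the bathtub bottom.
t = np.linspace(0.01, 10.0, 1000)
h = additive_weibull_hazard(t, a=0.5, b=0.5, c=0.15, d=4.0)
i_min = int(np.argmin(h))
```

    The same four parameters cover the degenerate cases the abstract mentions: dropping one term (or setting b = 1 or d = 1) yields purely decreasing, constant, or increasing failure rates.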

  8. Fitting Multilevel Models with Ordinal Outcomes: Performance of Alternative Specifications and Methods of Estimation

    Science.gov (United States)

    Bauer, Daniel J.; Sterba, Sonya K.

    2011-01-01

    Previous research has compared methods of estimation for fitting multilevel models to binary data, but there are reasons to believe that the results will not always generalize to the ordinal case. This article thus evaluates (a) whether and when fitting multilevel linear models to ordinal outcome data is justified and (b) which estimator to employ…

  9. A Cautionary Note on the Use of Information Fit Indexes in Covariance Structure Modeling with Means

    Science.gov (United States)

    Wicherts, Jelte M.; Dolan, Conor V.

    2004-01-01

    Information fit indexes such as Akaike Information Criterion, Consistent Akaike Information Criterion, Bayesian Information Criterion, and the expected cross validation index can be valuable in assessing the relative fit of structural equation models that differ regarding restrictiveness. In cases in which models without mean restrictions (i.e.,…

  10. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    Science.gov (United States)

    DeCarlo, Lawrence T

    2003-02-01

    The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
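    Outside SPSS, the same unequal-variance normal model can be estimated via the classical zROC regression (a different route from the PLUM ordinal-regression procedure the article describes): plotting z(hit rate) against z(false-alarm rate) across confidence criteria gives a line with slope 1/σ_signal and intercept μ/σ_signal. A sketch with simulated rating data:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simulated rating experiment: noise ~ N(0, 1), signal ~ N(mu, sigma_s).
mu, sigma_s = 1.5, 1.3
criteria = np.array([-1.0, -0.25, 0.5, 1.25, 2.0])  # ordered confidence criteria
noise = rng.normal(0.0, 1.0, 200_000)
signal = rng.normal(mu, sigma_s, 200_000)

# Cumulative "yes" rates above each criterion.
fa = np.array([(noise > c).mean() for c in criteria])    # false-alarm rates
hit = np.array([(signal > c).mean() for c in criteria])  # hit rates

# zROC line: z(hit) = (1/sigma_s) * z(fa) + mu/sigma_s, so slope and
# intercept recover the unequal-variance parameters.
slope, intercept = np.polyfit(norm.ppf(fa), norm.ppf(hit), 1)
sigma_hat = 1.0 / slope
mu_hat = intercept / slope
```

    A zROC slope different from 1 is the signature of unequal variances, which is exactly what the extended ordinal-regression models in the article accommodate.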

  11. Effect of Correlations Between Model Parameters and Nuisance Parameters When Model Parameters are Fit to Data

    CERN Document Server

    Roe, Byron

    2013-01-01

    The effect of correlations between model parameters and nuisance parameters is discussed, in the context of fitting model parameters to data. Modifications to the usual $\chi^2$ method are required. Fake data studies, as used at present, will not be optimum. Problems will occur for applications of the Maltoni-Schwetz \cite{ms} theorem. Neutrino oscillations are used as examples, but the problems discussed here are general ones, which are often not addressed.

  12. Refractive Index of Humid Air in the Infrared: Model Fits

    CERN Document Server

    Mathar, R J

    2006-01-01

    The theory of summation of electromagnetic line transitions is used to tabulate the Taylor expansion of the refractive index of humid air over the basic independent parameters (temperature, pressure, humidity, wavelength) in five separate infrared regions from the H to the Q band at a fixed percentage of Carbon Dioxide. These are least-squares fits to raw, highly resolved spectra for a set of temperatures from 10 to 25 C, a set of pressures from 500 to 1023 hPa, and a set of relative humidities from 5 to 60%. These choices reflect the prospective application to characterize ambient air at mountain altitudes of astronomical telescopes.

  13. The Thorny Relation Between Measurement Quality and Fit Index Cutoffs in Latent Variable Models.

    Science.gov (United States)

    McNeish, Daniel; An, Ji; Hancock, Gregory R

    2017-03-02

    Latent variable modeling is a popular and flexible statistical framework. Concomitant with fitting latent variable models is assessment of how well the theoretical model fits the observed data. Although firm cutoffs for these fit indexes are often cited, recent statistical proofs and simulations have shown that these fit indexes are highly susceptible to measurement quality. For instance, a root mean square error of approximation (RMSEA) value of 0.06 (conventionally thought to indicate good fit) can actually indicate poor fit with poor measurement quality (e.g., standardized factor loadings of around 0.40). Conversely, an RMSEA value of 0.20 (conventionally thought to indicate very poor fit) can indicate acceptable fit with very high measurement quality (standardized factor loadings around 0.90). Despite the wide-ranging effect on applications of latent variable models, the high level of technical detail involved with this phenomenon has curtailed the exposure of these important findings to empirical researchers who are employing these methods. This article briefly reviews these methodological studies in minimal technical detail and provides a demonstration to easily quantify the large influence measurement quality has on fit index values and how greatly the cutoffs would change if they were derived under an alternative level of measurement quality. Recommendations for best practice are also discussed.

  14. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
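    The Pareto-optimality criterion used in this abstract is straightforward to implement: an input set is on the frontier if no other set is at least as good on every calibration target and strictly better on at least one. A minimal sketch (the error values are illustrative, not the paper's models):

```python
import numpy as np

def pareto_frontier(errors):
    """Return a boolean mask of Pareto-optimal rows.

    `errors` is an (n_input_sets, n_targets) array of calibration-target
    errors (lower is better). A row is on the frontier if no other row is
    at least as good on every target and strictly better on at least one.
    """
    errors = np.asarray(errors, dtype=float)
    n = errors.shape[0]
    on_frontier = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(errors, i, axis=0)
        dominated = np.any(
            np.all(others <= errors[i], axis=1) & np.any(others < errors[i], axis=1)
        )
        on_frontier[i] = not dominated
    return on_frontier

# Three calibration targets, five candidate input sets.
errs = np.array([
    [0.10, 0.30, 0.20],   # on the frontier
    [0.20, 0.10, 0.40],   # on the frontier (trades targets against row 0)
    [0.15, 0.35, 0.25],   # dominated by row 0
    [0.10, 0.30, 0.20],   # exact tie with row 0 -> also on the frontier
    [0.50, 0.50, 0.50],   # dominated
])
mask = pareto_frontier(errs)
```

    Note that no weights appear anywhere: trade-offs between targets survive onto the frontier instead of being collapsed into a single weighted score.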

  15. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    Science.gov (United States)

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O’Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.

  16. The issue of statistical power for overall model fit in evaluating structural equation models

    Directory of Open Access Journals (Sweden)

    Richard HERMIDA

    2015-06-01

    Full Text Available Statistical power is an important concept for psychological research. However, examining the power of a structural equation model (SEM is rare in practice. This article provides an accessible review of the concept of statistical power for the Root Mean Square Error of Approximation (RMSEA index of overall model fit in structural equation modeling. By way of example, we examine the current state of power in the literature by reviewing studies in top Industrial-Organizational (I/O Psychology journals using SEMs. Results indicate that in many studies, power is very low, which implies acceptance of invalid models. Additionally, we examined methodological situations which may have an influence on statistical power of SEMs. Results showed that power varies significantly as a function of model type and whether or not the model is the main model for the study. Finally, results indicated that power is significantly related to model fit statistics used in evaluating SEMs. The results from this quantitative review imply that researchers should be more vigilant with respect to power in structural equation modeling. We therefore conclude by offering methodological best practices to increase confidence in the interpretation of structural equation modeling results with respect to statistical power issues.
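    The RMSEA-based power computation this article reviews is commonly carried out with the noncentral chi-square approach of MacCallum, Browne and Sugawara (1996); a sketch of that standard calculation (the function name and example values are illustrative):

```python
from scipy.stats import ncx2

def rmsea_power(n, df, rmsea0=0.05, rmsea1=0.08, alpha=0.05):
    """Power of the RMSEA test of close fit via noncentral chi-square
    (MacCallum, Browne & Sugawara, 1996).

    rmsea0: RMSEA under the null hypothesis (e.g. close fit, 0.05)
    rmsea1: RMSEA under the alternative (true misfit, e.g. 0.08)
    """
    lam0 = (n - 1) * df * rmsea0 ** 2     # noncentrality under H0
    lam1 = (n - 1) * df * rmsea1 ** 2     # noncentrality under H1
    crit = ncx2.ppf(1 - alpha, df, lam0)  # rejection threshold under H0
    return 1 - ncx2.cdf(crit, df, lam1)   # P(reject | H1)

power_small = rmsea_power(n=100, df=30)
power_large = rmsea_power(n=500, df=30)
```

    Running this for typical SEM sample sizes makes the article's point concrete: with modest N the probability of detecting a truly misspecified model can be quite low.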

  17. Envelope: interactive software for modeling and fitting complex isotope distributions

    Directory of Open Access Journals (Sweden)

    Sykes Michael T

    2008-10-01

    Full Text Available Abstract Background An important aspect of proteomic mass spectrometry involves quantifying and interpreting the isotope distributions arising from mixtures of macromolecules with different isotope labeling patterns. These patterns can be quite complex, in particular with in vivo metabolic labeling experiments producing fractional atomic labeling or fractional residue labeling of peptides or other macromolecules. In general, it can be difficult to distinguish the contributions of species with different labeling patterns to an experimental spectrum and difficult to calculate a theoretical isotope distribution to fit such data. There is a need for interactive and user-friendly software that can calculate and fit the entire isotope distribution of a complex mixture while comparing these calculations with experimental data and extracting the contributions from the differently labeled species. Results Envelope has been developed to be user-friendly while still being as flexible and powerful as possible. Envelope can simultaneously calculate the isotope distributions for any number of different labeling patterns for a given peptide or oligonucleotide, while automatically summing these into a single overall isotope distribution. Envelope can handle fractional or complete atom or residue-based labeling, and the contribution from each different user-defined labeling pattern is clearly illustrated in the interactive display and is individually adjustable. At present, Envelope supports labeling with 2H, 13C, and 15N, and supports adjustments for baseline correction, an instrument accuracy offset in the m/z domain, and peak width. Furthermore, Envelope can display experimental data superimposed on calculated isotope distributions, and calculate a least-squares goodness of fit between the two. All of this information is displayed on the screen in a single graphical user interface. Envelope supports high-quality output of experimental and calculated
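    The core calculation behind such software, the isotope distribution of a molecule under a given labeling pattern, can be sketched generically by repeated convolution of per-atom isotope abundance patterns. This is a generic sketch, not Envelope's code; the abundances are approximate and the 50% 13C enrichment is a hypothetical labeling pattern.

```python
import numpy as np

# Approximate natural isotope abundances indexed by extra neutrons
# (nominal mass offsets 0, 1, 2, ...).
NATURAL = {
    "C": [0.9893, 0.0107],             # 12C, 13C
    "H": [0.99988, 0.00012],           # 1H, 2H
    "N": [0.99636, 0.00364],           # 14N, 15N
    "O": [0.99757, 0.00038, 0.00205],  # 16O, 17O, 18O
}

def isotope_distribution(formula, patterns):
    """Aggregated isotope distribution by repeated convolution.

    `formula` maps element symbol -> atom count; `patterns` maps element
    -> abundance list over mass offsets. Returns abundances over the
    nominal mass offsets 0, 1, 2, ... of the whole molecule.
    """
    dist = np.array([1.0])
    for element, count in formula.items():
        p = np.asarray(patterns[element])
        for _ in range(count):
            dist = np.convolve(dist, p)
    return dist

# Unlabeled vs. hypothetical 50% fractional 13C labeling for a small
# peptide-like formula.
formula = {"C": 20, "H": 32, "N": 6, "O": 7}
natural = isotope_distribution(formula, NATURAL)
labeled = isotope_distribution(formula, dict(NATURAL, C=[0.5, 0.5]))
```

    Summing weighted contributions of several such distributions, one per labeling pattern, gives the kind of overall envelope the software fits to experimental spectra.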

  18. Assessing Fit of Cognitive Diagnostic Models: A Case Study

    Science.gov (United States)

    Sinharay, Sandip; Almond, Russell G.

    2007-01-01

    A cognitive diagnostic model uses information from educational experts to describe the relationships between item performances and posited proficiencies. When the cognitive relationships can be described using a fully Bayesian model, Bayesian model checking procedures become available. Checking models tied to cognitive theory of the domains…

  19. Model Fitting Versus Curve Fitting: A Model of Renormalization Provides a Better Account of Age Aftereffects Than a Model of Local Repulsion.

    Science.gov (United States)

    O'Neil, Sean F; Mac, Amy; Rhodes, Gillian; Webster, Michael A

    2015-12-01

    Recently, we proposed that the aftereffects of adapting to facial age are consistent with a renormalization of the perceived age (e.g., so that after adapting to a younger or older age, all ages appear slightly older or younger, respectively). This conclusion has been challenged by arguing that the aftereffects can also be accounted for by an alternative model based on repulsion (in which facial ages above or below the adapting age are biased away from the adaptor). However, we show here that this challenge was based on allowing the fitted functions to take on values which are implausible and incompatible across the different adapting conditions. When the fits are constrained or interpreted in terms of standard assumptions about normalization and repulsion, then the two analyses both agree in pointing to a pattern of renormalization in age aftereffects.

  20. Developments in Surrogating Methods

    Directory of Open Access Journals (Sweden)

    Hans van Dormolen

    2005-11-01

    Full Text Available In this paper, I would like to talk about the developments in surrogating methods for preservation. My main focus will be on the technical aspects of preservation surrogates. This means that I will tell you something about my job as Quality Manager Microfilming for the Netherlands’ national preservation program, Metamorfoze, which is coordinated by the National Library. I am responsible for the quality of the preservation microfilms, which are produced for Metamorfoze. Firstly, I will elaborate on developments in preservation methods in relation to the following subjects: · Preservation microfilms · Scanning of preservation microfilms · Preservation scanning · Computer Output Microfilm. In the closing paragraphs of this paper, I would like to tell you something about the methylene blue test. This is an important test for long-term storage of preservation microfilms. Also, I will give you a brief report on the Cellulose Acetate Microfilm Conference that was held in the British Library in London, May 2005.

  1. DiskFit: a code to fit simple non-axisymmetric galaxy models either to photometric images or to kinematic maps

    CERN Document Server

    Sellwood, J A

    2015-01-01

    This posting announces public availability of version 1.2 of the DiskFit software package developed by the authors, which may be used to fit simple non-axisymmetric models either to images or to velocity fields of disk galaxies. Here we give an outline of the capability of the code and provide the link to downloading executables, the source code, and a comprehensive on-line manual. We argue that in important respects the code is superior to rotcur for fitting kinematic maps and to galfit for fitting multi-component models to photometric images.

  2. Antiviral Activity of Bacillus sp. Isolated from the Marine Sponge Petromica citrina against Bovine Viral Diarrhea Virus, a Surrogate Model of the Hepatitis C Virus

    Science.gov (United States)

    Bastos, Juliana Cristina Santiago; Kohn, Luciana Konecny; Fantinatti-Garboggini, Fabiana; Padilla, Marina Aiello; Flores, Eduardo Furtado; da Silva, Bárbara Pereira; de Menezes, Cláudia Beatriz Afonso; Arns, Clarice Weis

    2013-01-01

    The Hepatitis C virus causes chronic infections in humans, which can develop to liver cirrhosis and hepatocellular carcinoma. The Bovine viral diarrhea virus is used as a surrogate model for antiviral assays for the HCV. From marine invertebrates and microorganisms isolated from them, extracts were prepared for assessment of their possible antiviral activity. Of the 128 tested, 2 were considered active and 1 was considered promising. The best result was obtained from the extracts produced from the Bacillus sp. isolated from the sponge Petromica citrina. The extracts 555 (500 µg/mL, SI>18) and 584 (150 µg/mL, SI 27) showed a percentage of protection of 98% against BVDV, and the extract 616, 90% of protection. All of them showed activity during the viral adsorption. Thus, various substances are active on these studied organisms and may lead to the development of drugs which ensure an alternative therapy for the treatment of hepatitis C. PMID:23628828

  3. Antiviral Activity of Bacillus sp. Isolated from the Marine Sponge Petromica citrina against Bovine Viral Diarrhea Virus, a Surrogate Model of the Hepatitis C Virus

    Directory of Open Access Journals (Sweden)

    Clarice Weis Arns

    2013-04-01

    Full Text Available The Hepatitis C virus causes chronic infections in humans, which can develop to liver cirrhosis and hepatocellular carcinoma. The Bovine viral diarrhea virus is used as a surrogate model for antiviral assays for the HCV. From marine invertebrates and microorganisms isolated from them, extracts were prepared for assessment of their possible antiviral activity. Of the 128 tested, 2 were considered active and 1 was considered promising. The best result was obtained from the extracts produced from the Bacillus sp. isolated from the sponge Petromica citrina. The extracts 555 (500 µg/mL, SI>18 and 584 (150 µg/mL, SI 27 showed a percentage of protection of 98% against BVDV, and the extract 616, 90% of protection. All of them showed activity during the viral adsorption. Thus, various substances are active on these studied organisms and may lead to the development of drugs which ensure an alternative therapy for the treatment of hepatitis C.

  4. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    Science.gov (United States)

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a…

  5. A model for programmatic assessment fit for purpose.

    NARCIS (Netherlands)

    Vleuten, C.P.M. van der; Schuwirth, L.W.; Driessen, E.W.; Dijkstra, J.; Tigelaar, D.; Baartman, L.K.; Tartwijk, J. van

    2012-01-01

    We propose a model for programmatic assessment in action, which simultaneously optimises assessment for learning and assessment for decision making about learner progress. This model is based on a set of assessment principles that are interpreted from empirical research. It specifies cycles of train

  6. Fitting a Stochastic Model for Golden-Ten

    NARCIS (Netherlands)

    de Vos, J.C.; van der Genugten, B.B.

    1996-01-01

    Golden-Ten is an observation game in which players try to predict the outcome of the motion of a ball rolling down the surface of a drum. This paper describes the motion of the ball as a stochastic model, based on a deterministic, mechanical model. To this end, the motion is split into several stages,

  7. Atmospheric Turbulence Modeling for Aero Vehicles: Fractional Order Fits

    Science.gov (United States)

    Kopasakis, George

    2015-01-01

    Atmospheric turbulence models are necessary for the design of both inlet/engine and flight controls, as well as for studying coupling between the propulsion and the vehicle structural dynamics for supersonic vehicles. Models based on the Kolmogorov spectrum have been previously utilized to model atmospheric turbulence. In this paper, a more accurate model is developed in its representative fractional order form, typical of atmospheric disturbances. This is accomplished by first scaling the Kolmogorov spectra to convert them into finite-energy von Karman forms and then by deriving an explicit fractional circuit-filter type analog for this model. This circuit model is utilized to develop a generalized formulation in the frequency domain to approximate the fractional order with the products of first order transfer functions, which enables accurate time domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.
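    Approximating a fractional-order magnitude slope by products of first-order transfer functions can be sketched with Oustaloup-style log-spaced pole/zero interlacing; here the target is |H| ∝ f^(−5/6), i.e. the −5/3 power-spectrum slope of Kolmogorov turbulence. The section count and band edges are illustrative choices, not the paper's values.

```python
import numpy as np

def fractional_filter_response(freqs, alpha, f_lo, f_hi, n_sections=8):
    """Magnitude response of a product of first-order sections approximating
    |H| ~ f**(-alpha) between f_lo and f_hi.

    Pole/zero corner frequencies are interlaced on a log grid (Oustaloup-style);
    each pole-below-zero pair contributes a -20 dB/decade segment, so the
    band-average slope is -alpha * 20 dB/decade.
    """
    ratio = f_hi / f_lo
    k = np.arange(n_sections)
    f_zeros = f_lo * ratio ** ((k + 0.5 + alpha / 2) / n_sections)
    f_poles = f_lo * ratio ** ((k + 0.5 - alpha / 2) / n_sections)
    s = 1j * freqs[:, None]       # frequencies in the same units as the corners
    h = np.prod((1 + s / f_zeros) / (1 + s / f_poles), axis=1)
    return np.abs(h)

# Target: amplitude ~ f**(-5/6), the Kolmogorov -5/3 power-spectrum slope.
freqs = np.logspace(0.0, 3.0, 400)
mag = fractional_filter_response(freqs, alpha=5 / 6, f_lo=1.0, f_hi=1000.0)

# Fitted log-log slope over the interior of the approximation band.
interior = (freqs > 5.0) & (freqs < 200.0)
slope, _ = np.polyfit(np.log10(freqs[interior]), np.log10(mag[interior]), 1)
```

    Because each section is an ordinary first-order transfer function, the cascade can be simulated directly in the time domain, which is the point of the formulation described in the abstract.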

  8. Fitting the Two-Higgs-Doublet model of type II

    CERN Document Server

    Eberhardt, Otto

    2014-01-01

    We present the current status of the Two-Higgs-Doublet model of type II. Taking into account all available relevant information, we exclude at $95$% CL sizeable deviations from the so-called alignment limit, in which all couplings of the light CP-even Higgs boson $h$ are Standard-Model-like. While we can set a lower limit of $240$ GeV on the mass of the pseudoscalar Higgs boson at $95$% CL, the heavy CP-even Higgs boson $H$ can be even lighter than $200$ GeV. The strong constraints on the model parameters also set limits on the triple Higgs couplings: the $hhh$ coupling in the Two-Higgs-Doublet model of type II cannot be larger than in the Standard Model, while the $hhH$ coupling can be at most $2.5$ times the size of the Standard Model $hhh$ coupling, assuming an $H$ mass below $1$ TeV. The selection of benchmark scenarios that maximize specific effects within the allowed regions for further collider studies is illustrated for the $H$ branching fraction to fermions and gauge bosons. As an exampl...

  9. SPSS macros to compare any two fitted values from a regression model.

    Science.gov (United States)

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests, particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
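    Outside SPSS, the matrix-algebra method the macros implement can be sketched directly: for a contrast vector c = x1 − x2, the difference between two fitted values is c'β̂ with standard error √(c' Cov(β̂) c). A minimal NumPy illustration on simulated data (the model, names, and numbers are ours, not the macros'):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x, x**2])       # design with a quadratic term
y = 1.0 + 0.5 * x - 0.1 * x**2 + rng.normal(0, 1, n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS estimates
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])        # residual variance
cov_beta = sigma2 * np.linalg.inv(X.T @ X)       # Cov(beta_hat)

# Compare fitted Y at x = 2 versus x = 7 -- a contrast no single
# coefficient captures once the quadratic term is in the model.
x1 = np.array([1.0, 2.0, 4.0])
x2 = np.array([1.0, 7.0, 49.0])
c = x1 - x2
diff = c @ beta                                  # difference of fitted values
se = np.sqrt(c @ cov_beta @ c)                   # its standard error
lo, hi = diff - 1.96 * se, diff + 1.96 * se      # 95% confidence interval
print(diff, se, (lo, hi))
```

With the coefficients above, the true difference of fitted values is 2.0, so the estimate should land near that with a small standard error.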

  10. Human surrogate neck response to +Gz vertical impact

    NARCIS (Netherlands)

    Rooij, L. van; Uittenbogaard, J.

    2011-01-01

    For the evaluation of impact scenarios with a substantial vertical component, the performance of current human surrogates - the RID 3D hardware dummy and two numerical human models - was evaluated. Volunteer tests with 10G and 6G pulses were compared to reconstructed tests with human surrogates.

  11. Inactivation of Tulane virus, a novel surrogate for human norovirus

    Science.gov (United States)

    Human noroviruses (HuNoVs) are the major cause of non-bacterial epidemics of gastroenteritis. Due to the inability to cultivate HuNoVs and the lack of an efficient small animal model, surrogates are used to study HuNoV biology. Two such surrogates, the feline calicivirus (FCV) and the murine norovir...

  13. Using proper regression methods for fitting the Langmuir model to sorption data

    Science.gov (United States)

    The Langmuir model, originally developed for the study of gas sorption to surfaces, is one of the most commonly used models for fitting phosphorus sorption data. There are good theoretical reasons, however, against applying this model to describe P sorption to soils. Nevertheless, the Langmuir model...

  14. Efficient parallel Levenberg-Marquardt model fitting towards real-time automated parametric imaging microscopy.

    Science.gov (United States)

    Zhu, Xiang; Zhang, Dianwen

    2013-01-01

    We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, which is implemented on graphics processing unit for high performance scalable parallel model fitting processing. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for the applications in superresolution localization microscopy and fluorescence lifetime imaging microscopy.
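    The abstract does not show the GPU kernel, but the Levenberg-Marquardt step it parallelises per pixel, solving the damped normal equations (JᵀJ + λI)δ = Jᵀr and adapting λ, is compact. A serial NumPy sketch fitting a single-exponential decay (the model used per pixel in lifetime imaging); this is our toy illustration, not GPU-LMFit's implementation:

```python
import numpy as np

def lm_fit(f, jac, p0, t, y, n_iter=100, lam=1e-3):
    """Minimal serial Levenberg-Marquardt: damped Gauss-Newton steps."""
    p = np.asarray(p0, float)
    for _ in range(n_iter):
        r = y - f(t, p)                           # residuals
        J = jac(t, p)                             # Jacobian of f w.r.t. p
        A = J.T @ J + lam * np.eye(p.size)        # damped normal equations
        step = np.linalg.solve(A, J.T @ r)
        if np.sum((y - f(t, p + step))**2) < np.sum(r**2):
            p, lam = p + step, lam * 0.5          # accept step, trust more
        else:
            lam *= 10.0                           # reject step, damp harder
    return p

# Single-exponential decay model A*exp(-t/tau), as in a FLIM pixel fit.
f = lambda t, p: p[0] * np.exp(-t / p[1])
jac = lambda t, p: np.column_stack([np.exp(-t / p[1]),
                                    p[0] * t * np.exp(-t / p[1]) / p[1]**2])

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 100)
y = 2.0 * np.exp(-t / 3.0) + rng.normal(0, 0.01, t.size)
A, tau = lm_fit(f, jac, [1.0, 1.0], t, y)
print(A, tau)   # near the true (2.0, 3.0)
```

In the massively parallel setting, thousands of such small independent fits (one per pixel) run concurrently, which is where the reported speed-up comes from.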

  15. Efficient Parallel Levenberg-Marquardt Model Fitting towards Real-Time Automated Parametric Imaging Microscopy

    OpenAIRE

    Xiang Zhu; Dianwen Zhang

    2013-01-01

    We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, which is implemented on graphics processing unit for high performance scalable parallel model fitting processing. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for the applications in superresolution localization microscopy and fluorescence lifetim...

  16. The empirical likelihood goodness-of-fit test for regression model

    Institute of Scientific and Technical Information of China (English)

    Li-xing ZHU; Yong-song QIN; Wang-li XU

    2007-01-01

    Goodness-of-fit testing for regression models has received much attention in the literature. In this paper, empirical likelihood (EL) goodness-of-fit tests for regression models, including classical parametric and autoregressive (AR) time series models, are proposed. Unlike the existing locally smoothing and globally smoothing methodologies, the new method has the advantage that the tests are self-scale invariant and that the asymptotic null distribution is chi-squared. Simulations are carried out to illustrate the methodology.

  17. A fitness model for the Italian Interbank Money Market

    CERN Document Server

    De Masi, G; Iori, G

    2006-01-01

    We use the theory of complex networks to quantitatively characterize the formation of communities in a particular financial market. The system is composed of different banks exchanging loans and debts of liquidity on a daily basis. Through topological analysis and by means of a model of network growth, we can determine the formation of different groups of banks characterized by different business strategies. The model, based on Pareto's Law, makes no use of growth or preferential attachment, and it correctly reproduces the various statistical properties of the system. We believe that this network modeling of the market could be an efficient way to evaluate the impact of different policies on the market for liquidity.

  18. Fitness model for the Italian interbank money market

    Science.gov (United States)

    de Masi, G.; Iori, G.; Caldarelli, G.

    2006-12-01

    We use the theory of complex networks to quantitatively characterize the formation of communities in a particular financial market. The system is composed of different banks exchanging loans and debts of liquidity on a daily basis. Through topological analysis and by means of a model of network growth, we can determine the formation of different groups of banks characterized by different business strategies. The model, based on Pareto's law, makes no use of growth or preferential attachment, and it correctly reproduces the various statistical properties of the system. We believe that this network modeling of the market could be an efficient way to evaluate the impact of different policies on the market for liquidity.

  19. A no-scale inflationary model to fit them all

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John [Theoretical Particle Physics and Cosmology Group, Department of Physics, King's College London, WC2R 2LS London (United Kingdom); García, Marcos A.G.; Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States); Nanopoulos, Dimitri V., E-mail: john.ellis@cern.ch, E-mail: garciagarcia@physics.umn.edu, E-mail: dimitri@physics.tamu.edu, E-mail: olive@physics.umn.edu [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Texas A&M University, College Station, 77843 Texas (United States)

    2014-08-01

    The magnitude of B-mode polarization in the cosmic microwave background as measured by BICEP2 favours models of chaotic inflation with a quadratic m^2 φ^2/2 potential, whereas data from the Planck satellite favour a small value of the tensor-to-scalar perturbation ratio r that is highly consistent with the Starobinsky R + R^2 model. Reality may lie somewhere between these two scenarios. In this paper we propose a minimal two-field no-scale supergravity model that interpolates between quadratic and Starobinsky-like inflation as limiting cases, while retaining the successful prediction n_s ≃ 0.96.

  1. BOUSSINESQ MODELLING OF NEARSHORE WAVES UNDER BODY FITTED COORDINATE

    Institute of Scientific and Technical Information of China (English)

    FANG Ke-zhao; ZOU Zhi-li; LIU Zhong-bo; YIN Ji-wei

    2012-01-01

    A set of Boussinesq equations with full nonlinearity is solved numerically in generalized coordinates, to develop a Boussinesq-type wave model that can handle irregular computational boundaries in complex nearshore regions and facilitate grid refinement in simulations. The governing equations, expressed in contravariant components of the velocity vectors under curvilinear coordinates, are derived, and a high-order finite difference scheme on a staggered grid is employed for the numerical implementation. The developed model is used to simulate nearshore wave propagation under curvilinear coordinates, and the numerical results are compared against analytical or experimental data with good agreement.

  2. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia

    2015-01-07

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse-grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force-matching methods to nonequilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.

  3. Design of spatial experiments: Model fitting and prediction

    Energy Technology Data Exchange (ETDEWEB)

    Fedorov, V.V.

    1996-03-01

    The main objective of the paper is to describe and develop model oriented methods and algorithms for the design of spatial experiments. Unlike many other publications in this area, the approach proposed here is essentially based on the ideas of convex design theory.

  4. Reducing uncertainty based on model fitness: Application to a ...

    African Journals Online (AJOL)

    2015-01-07

    This general methodology is applied to a reservoir model of the Okavango ... Global sensitivity and uncertainty analysis (GSA/UA) ... weighing risks between decisions (Saltelli et al., 2008).

  5. A case study on the use of appropriate surrogates for antecedent moisture conditions (AMCs)

    Directory of Open Access Journals (Sweden)

    G. A. Ali

    2010-06-01

    Full Text Available While a large number of non-linear hillslope and catchment rainfall-runoff responses have been attributed to the temporal variability in antecedent moisture conditions (AMCs), two problems emerge: (1) the difficulty of measuring AMCs, and (2) the absence of explicit guidelines for the choice of surrogates or proxies for AMCs. This paper aims at determining whether or not multiple surrogates for AMCs should be used in order not to bias our understanding of a system's hydrological behaviour. We worked in a small forested catchment, the Hermine, where soil moisture has been measured at 121 different locations at four depths on 16 occasions. Without making any assumption on active processes, we used various linear and nonlinear regression models to evaluate the point-scale temporal relations between actual soil moisture contents and selected meteorologically based surrogates for AMCs. We then mapped the nature of the "best fit" model to identify (1) spatial clusters of soil moisture monitoring sites whose hydrological behaviour was similar, and (2) potential topographic influences on these behaviours. Two conclusions stood out. Firstly, it was shown that the sole reference to the AMC indices traditionally used in catchment hydrology, namely antecedent rainfall amounts summed over periods of seven or ten days, would have led to an incomplete understanding of the Hermine catchment dynamics. Secondly, the relationships between point-scale soil moisture content and surrogates for AMCs were not spatially homogeneous, thus revealing a mosaic of linear and nonlinear catchment "active" and "contributing" sources whose location was often controlled by surface terrain attributes or the topography of a soil-confining layer interface. These results represent a step forward in developing a hydrological conceptual model for the Hermine catchment, as they indicate depth-specific processes and spatially variable triggering conditions. Further investigations are, however, necessary

  6. On assessing model fit for distribution-free longitudinal models under missing data.

    Science.gov (United States)

    Wu, P; Tu, X M; Kowalski, J

    2014-01-15

    The generalized estimating equation (GEE), a distribution-free, or semi-parametric, approach for modeling longitudinal data, is used in a wide range of behavioral, psychotherapy, pharmaceutical drug safety, and healthcare-related research studies. Most popular methods for assessing model fit are based on the likelihood function for parametric models, rendering them inappropriate for distribution-free GEE. One rare exception is a score statistic initially proposed by Tsiatis (1980) for logistic regression and later extended to GEE by Barnhart and Williamson (1998). Because GEE only provides valid inference under the missing completely at random assumption and missing values arising in most longitudinal studies do not follow such a restricted mechanism, this GEE-based score test has very limited applications in practice. We propose extensions of this goodness-of-fit test to address missing data under the missing at random assumption, a more realistic model that applies to most studies in practice. We examine the performance of the proposed tests using simulated data and demonstrate the utilities of such tests with data from a real study on geriatric depression and associated medical comorbidities.

  7. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda

    2009-05-12

    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. © 2009 Sociedad de Estadística e Investigación Operativa.

  8. Network growth models: A behavioural basis for attachment proportional to fitness

    Science.gov (United States)

    Bell, Michael; Perera, Supun; Piraveenan, Mahendrarajah; Bliemer, Michiel; Latty, Tanya; Reid, Chris

    2017-01-01

    Several growth models have been proposed in the literature for scale-free complex networks, with a range of fitness-based attachment models gaining prominence recently. However, the processes by which such fitness-based attachment behaviour can arise are less well understood, making it difficult to compare the relative merits of such models. This paper analyses an evolutionary mechanism that would give rise to a fitness-based attachment process. In particular, it is proven by analytical and numerical methods that in homogeneous networks, the minimisation of maximum exposure to node unfitness leads to attachment probabilities that are proportional to node fitness. This result is then extended to heterogeneous networks, with supply chain networks being used as an example. PMID:28205599
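    The attachment rule analysed, incoming links choosing a target node with probability proportional to its fitness, is straightforward to simulate. A generic sketch (the exposure-minimisation derivation of the paper is not reproduced; all parameters are arbitrary illustrative choices):

```python
import random

random.seed(7)
n_nodes, n_links = 50, 2000
fitness = [random.uniform(0.1, 1.0) for _ in range(n_nodes)]   # fixed node fitnesses
degree = [0] * n_nodes

# Grow n_links links, each attaching to node i with P(i) proportional to fitness[i].
for _ in range(n_links):
    target = random.choices(range(n_nodes), weights=fitness)[0]
    degree[target] += 1

# Under fitness-proportional attachment, degree should track fitness.
fittest = max(range(n_nodes), key=fitness.__getitem__)
least_fit = min(range(n_nodes), key=fitness.__getitem__)
print(degree[fittest], degree[least_fit])
```

Over many links, each node's expected degree is n_links · f_i / Σf, so the fittest node accumulates roughly ten times the degree of the least fit one here.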

  9. Structural model of in-group dynamic of 6-10 years old boys’ motor fitness

    Directory of Open Access Journals (Sweden)

    Ivashchenko O.V.

    2015-10-01

    Full Text Available Purpose: to determine a structural model of the in-group dynamic of 6-10 year old boys' motor fitness. Material: boys aged 6 years (n=48), 7 years (n=45), 8 years (n=60), 9 years (n=47) and 10 years (n=40) participated in the research. We carried out an analysis of the factorial model of the schoolchildren's motor fitness. Results: we obtained information for decision making in the monitoring of physical education. This information is also necessary for working out effective programs of children's and adolescents' physical training. We determined a model of motor fitness and specified informative tests for pedagogic control in every age group. In the factorial model of the boys' motor fitness, the most significant factor is: at 6 years, complex development of motor skills; at 7 years, also complex development of motor skills; at 8 years, strength and coordination; at 9 years, complex development of motor skills; at 10 years, complex development of motor skills. Conclusions: in the factorial model of 6-10 year old boys' motor fitness, the most significant elements are backbone and shoulder joint mobility, complex manifestation of motor skills, and motor coordination. The most informative tests for assessment of motor fitness in boys of different ages have been determined.

  10. Adaptation in tunably rugged fitness landscapes: the rough Mount Fuji model.

    Science.gov (United States)

    Neidhart, Johannes; Szendro, Ivan G; Krug, Joachim

    2014-10-01

    Much of the current theory of adaptation is based on Gillespie's mutational landscape model (MLM), which assumes that the fitness values of genotypes linked by single mutational steps are independent random variables. On the other hand, a growing body of empirical evidence shows that real fitness landscapes, while possessing a considerable amount of ruggedness, are smoother than predicted by the MLM. In the present article we propose and analyze a simple fitness landscape model with tunable ruggedness based on the rough Mount Fuji (RMF) model originally introduced by Aita et al. in the context of protein evolution. We provide a comprehensive collection of results pertaining to the topographical structure of RMF landscapes, including explicit formulas for the expected number of local fitness maxima, the location of the global peak, and the fitness correlation function. The statistics of single and multiple adaptive steps on the RMF landscape are explored mainly through simulations, and the results are compared to the known behavior in the MLM model. Finally, we show that the RMF model can explain the large number of second-step mutations observed on a highly fit first-step background in a recent evolution experiment with a microvirid bacteriophage.
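    The RMF construction is simple enough to simulate directly: fitness is an additive slope toward a reference genotype plus i.i.d. random roughness, F(σ) = −c·d(σ, σ*) + η_σ, and local fitness maxima can be counted by exhaustive enumeration over short binary genotypes. A sketch under these assumptions (not the authors' code; parameter values are arbitrary):

```python
import random, itertools

random.seed(42)
L = 10            # genotype length (binary alleles)
c = 0.3           # strength of the additive "Fuji" slope
sigma_star = (0,) * L   # reference (fittest-on-average) genotype

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Rough Mount Fuji: additive part plus i.i.d. Gaussian roughness per genotype.
fitness = {g: -c * hamming(g, sigma_star) + random.gauss(0, 1)
           for g in itertools.product((0, 1), repeat=L)}

def neighbours(g):
    for i in range(L):
        yield g[:i] + (1 - g[i],) + g[i + 1:]

# A genotype is a local maximum if it beats every one-mutant neighbour.
maxima = [g for g in fitness
          if all(fitness[g] > fitness[n] for n in neighbours(g))]
print(len(maxima))
```

Setting c = 0 recovers a maximally rugged (House-of-Cards-like) landscape with many local peaks, while a large c produces a smooth landscape with a single peak at the reference genotype, which is the tunable-ruggedness property the abstract describes.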

  11. Gfitter - Revisiting the global electroweak fit of the Standard Model and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Flaecher, H.; Hoecker, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland)]; Goebel, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg and Zeuthen (Germany); Hamburg Univ. (Germany), Inst. fuer Experimentalphysik]; Haller, J. [Hamburg Univ. (Germany), Inst. fuer Experimentalphysik]; Moenig, K.; Stelzer, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg and Zeuthen (Germany)]

    2008-11-15

    The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project, and presents state-of-the-art results for the global electroweak fit in the Standard Model, and for a model with an extended Higgs sector (2HDM). Numerical and graphical results for fits with and without including the constraints from the direct Higgs searches at LEP and Tevatron are given. Perspectives for future colliders are analysed and discussed. Including the direct Higgs searches, we find M_H = 116.4^{+18.3}_{-1.3} GeV, and the 2σ and 3σ allowed regions [114,145] GeV and [113,168] and [180,225

  12. Model independent analysis of dark energy I: Supernova fitting result

    CERN Document Server

    Gong, Y

    2004-01-01

    The nature of dark energy is a mystery to us. This paper uses the supernova data to explore the properties of dark energy by some model-independent methods. We first Taylor expand the scale factor $a(t)$ to find the deceleration parameter $q_0<0$. This result invokes only the Robertson-Walker metric. Then we discuss several different parameterizations used in the literature. We find that $\Omega_{\rm DE0}$ is almost less than -1 at the $1\sigma$ level. We also find that the transition redshift from the deceleration phase to the acceleration phase is $z_{\rm T}\sim 0.3$.
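    The kinematic expansion referred to is the standard one and assumes only the Robertson-Walker metric, with no dynamical model for dark energy:

```latex
a(t) = a_0\left[1 + H_0\,(t - t_0) - \tfrac{1}{2}\,q_0 H_0^2\,(t - t_0)^2 + \cdots\right],
\qquad
H_0 \equiv \left.\frac{\dot a}{a}\right|_{t_0},
\qquad
q_0 \equiv -\left.\frac{\ddot a\, a}{\dot a^2}\right|_{t_0},
```

so an observed late-time acceleration ($\ddot a > 0$) corresponds to $q_0 < 0$, as quoted in the abstract.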

  13. Comparative model accuracy of a data-fitted generalized Aw-Rascle-Zhang model

    CERN Document Server

    Fan, Shimao; Seibold, Benjamin

    2013-01-01

    The Aw-Rascle-Zhang (ARZ) model can be interpreted as a generalization of the Lighthill-Whitham-Richards (LWR) model, possessing a family of fundamental diagram curves, each of which represents a class of drivers with a different empty road velocity. A weakness of this approach is that different drivers possess vastly different densities at which traffic flow stagnates. This drawback can be overcome by modifying the pressure relation in the ARZ model, leading to the generalized Aw-Rascle-Zhang (GARZ) model. We present an approach to determine the parameter functions of the GARZ model from fundamental diagram measurement data. The predictive accuracy of the resulting data-fitted GARZ model is compared to other traffic models by means of a three-detector test setup, employing two types of data: vehicle trajectory data, and sensor data. This work also considers the extension of the ARZ and the GARZ models to models with a relaxation term, and conducts an investigation of the optimal relaxation time.

  14. Assessing Fit of Alternative Unidimensional Polytomous IRT Models Using Posterior Predictive Model Checking.

    Science.gov (United States)

    Li, Tongyun; Xie, Chao; Jiao, Hong

    2016-05-30

    This article explored the application of the posterior predictive model checking (PPMC) method in assessing fit for unidimensional polytomous item response theory (IRT) models, specifically the divide-by-total models (e.g., the generalized partial credit model). Previous research has primarily focused on using PPMC in model checking for unidimensional and multidimensional IRT models for dichotomous data, and has paid little attention to polytomous models. A Monte Carlo simulation was conducted to investigate the performance of PPMC in detecting different sources of misfit for the partial credit model family. Results showed that the PPMC method, in combination with appropriate discrepancy measures, had adequate power in detecting different sources of misfit for the partial credit model family. Global odds ratio and item total correlation exhibited specific patterns in detecting the absence of the slope parameter, whereas Yen's Q1 was found to be promising in the detection of misfit caused by the constant category intersection parameter constraint across items.

  15. Are Fit Indices Biased in Favor of Bi-Factor Models in Cognitive Ability Research?: A Comparison of Fit in Correlated Factors, Higher-Order, and Bi-Factor Models via Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Grant B. Morgan

    2015-02-01

    Full Text Available Bi-factor confirmatory factor models have been influential in research on cognitive abilities because they often better fit the data than correlated factors and higher-order models. They also instantiate a perspective that differs from that offered by other models. Motivated by previous work that hypothesized an inherent statistical bias of fit indices favoring the bi-factor model, we compared the fit of correlated factors, higher-order, and bi-factor models via Monte Carlo methods. When data were sampled from a true bi-factor structure, each of the approximate fit indices was more likely than not to identify the bi-factor solution as the best fitting. When samples were selected from a true multiple correlated factors structure, approximate fit indices were more likely overall to identify the correlated factors solution as the best fitting. In contrast, when samples were generated from a true higher-order structure, approximate fit indices tended to identify the bi-factor solution as best fitting. There was extensive overlap of fit values across the models regardless of true structure. Although one model may fit a given dataset best relative to the other models, each of the models tended to fit the data well in absolute terms. Given this variability, models must also be judged on substantive and conceptual grounds.

  16. Modeling Individual Damped Linear Oscillator Processes with Differential Equations: Using Surrogate Data Analysis to Estimate the Smoothing Parameter

    Science.gov (United States)

    Deboeck, Pascal R.; Boker, Steven M.; Bergeman, C. S.

    2008-01-01

    Among the many methods available for modeling intraindividual time series, differential equation modeling has several advantages that make it promising for applications to psychological data. One interesting differential equation model is that of the damped linear oscillator (DLO), which can be used to model variables that have a tendency to…

  17. Revisiting a Statistical Shortcoming When Fitting the Langmuir Model to Sorption Data

    Science.gov (United States)

    The Langmuir model is commonly used for describing sorption behavior of reactive solutes to surfaces. Fitting the Langmuir model to sorption data requires either the use of nonlinear regression or, alternatively, linear regression using one of the linearized versions of the model. Statistical limit...

  18. A simple model of group selection that cannot be analyzed with inclusive fitness

    NARCIS (Netherlands)

    M. van Veelen; S. Luo; B. Simon

    2014-01-01

    A widespread claim in evolutionary theory is that every group selection model can be recast in terms of inclusive fitness. Although there are interesting classes of group selection models for which this is possible, we show that it is not true in general. With a simple set of group selection models,

  19. Development of a program to fit data to a new logistic model for microbial growth.

    Science.gov (United States)

    Fujikawa, Hiroshi; Kano, Yoshihiro

    2009-06-01

    Recently we developed a mathematical model for microbial growth in food. The model successfully predicted microbial growth under various patterns of temperature. In this study, we developed a program to fit data to the model with the spreadsheet program Microsoft Excel. Users can instantly obtain curves fitted to the model by inputting growth data and choosing the slope portion of a curve. The program also estimates growth parameters, including the rate constant of growth and the lag period. This program would be a useful tool for analyzing growth data and further predicting microbial growth.
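    The authors' extended model and Excel implementation are not reproduced here, but the core fitting step, recovering the growth rate constant from the slope portion of a logistic curve, can be sketched in plain Python using the classical logistic form (an illustration only, not the paper's modified model):

```python
import math

# Classical logistic growth N(t) = K / (1 + exp(-r*(t - t0))).
K, r_true, t0 = 1e9, 0.8, 6.0
times = [i * 0.5 for i in range(1, 24)]
counts = [K / (1 + math.exp(-r_true * (t - t0))) for t in times]

# With K known (the stationary-phase level), the logit transform is linear in t:
# ln(N / (K - N)) = r*(t - t0), so r is the least-squares slope of that line.
ts, ys = [], []
for t, n in zip(times, counts):
    if 0 < n < K:                        # keep points where the logit is defined
        ts.append(t)
        ys.append(math.log(n / (K - n)))

mt = sum(ts) / len(ts)
my = sum(ys) / len(ys)
r_hat = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / \
        sum((t - mt) ** 2 for t in ts)
print(round(r_hat, 3))   # -> 0.8 on this noise-free data
```

A spreadsheet does the equivalent with a built-in linear-regression formula over the user-selected slope portion; the lag period then follows from where the fitted line crosses the initial population level.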

  20. ergm: A Package to Fit, Simulate and Diagnose Exponential-Family Models for Networks

    Directory of Open Access Journals (Sweden)

    David R. Hunter

    2008-12-01

    Full Text Available We describe some of the capabilities of the ergm package and the statistical theory underlying it. This package contains tools for accomplishing three important, and inter-related, tasks involving exponential-family random graph models (ERGMs: estimation, simulation, and goodness of fit. More precisely, ergm has the capability of approximating a maximum likelihood estimator for an ERGM given a network data set; simulating new network data sets from a fitted ERGM using Markov chain Monte Carlo; and assessing how well a fitted ERGM does at capturing characteristics of a particular network data set.

  1. Efficient fitting of multiplanet Keplerian models to radial velocity and astrometry data

    CERN Document Server

    Wright, J. T.; Howard, A. W.

    2009-01-01

    We describe a technique for solving for the orbital elements of multiple planets from radial velocity (RV) and/or astrometric data taken with 1 m/s and microarcsecond precision, appropriate for efforts to detect Earth-mass planets in their stars' habitable zones, such as NASA's proposed Space Interferometry Mission. We include details of calculating analytic derivatives for use in the Levenberg-Marquardt (LM) algorithm for the problems of fitting RV and astrometric data separately and jointly. We also explicate the general method of separating the linear and nonlinear components of a model fit in the context of an LM fit, show how explicit derivatives can be calculated in such a model, and demonstrate the speed-up and convergence improvements of such a scheme in the case of a five-planet fit to published radial velocity data for 55 Cnc.

  2. Spin models inferred from patient data faithfully describe HIV fitness landscapes and enable rational vaccine design

    CERN Document Server

    Shekhar, Karthik; Ferguson, Andrew L; Barton, John P; Kardar, Mehran; Chakraborty, Arup K

    2013-01-01

    Mutational escape from vaccine induced immune responses has thwarted the development of a successful vaccine against AIDS, whose causative agent is HIV, a highly mutable virus. Knowing the virus' fitness as a function of its proteomic sequence can enable rational design of potent vaccines, as this information can focus vaccine induced immune responses to target mutational vulnerabilities of the virus. Spin models have been proposed as a means to infer intrinsic fitness landscapes of HIV proteins from patient-derived viral protein sequences. These sequences are the product of non-equilibrium viral evolution driven by patient-specific immune responses, and are subject to phylogenetic constraints. How can such sequence data allow inference of intrinsic fitness landscapes? We combined computer simulations and variational theory à la Feynman to show that, in most circumstances, spin models inferred from patient-derived viral sequences reflect the correct rank order of the fitness of mutant viral strains. Our f...

  3. Spin models inferred from patient-derived viral sequence data faithfully describe HIV fitness landscapes

    Science.gov (United States)

    Shekhar, Karthik; Ruberman, Claire F.; Ferguson, Andrew L.; Barton, John P.; Kardar, Mehran; Chakraborty, Arup K.

    2013-12-01

    Mutational escape from vaccine-induced immune responses has thwarted the development of a successful vaccine against AIDS, whose causative agent is HIV, a highly mutable virus. Knowing the virus' fitness as a function of its proteomic sequence can enable rational design of potent vaccines, as this information can focus vaccine-induced immune responses to target mutational vulnerabilities of the virus. Spin models have been proposed as a means to infer intrinsic fitness landscapes of HIV proteins from patient-derived viral protein sequences. These sequences are the product of nonequilibrium viral evolution driven by patient-specific immune responses and are subject to phylogenetic constraints. How can such sequence data allow inference of intrinsic fitness landscapes? We combined computer simulations and variational theory à la Feynman to show that, in most circumstances, spin models inferred from patient-derived viral sequences reflect the correct rank order of the fitness of mutant viral strains. Our findings are relevant for diverse viruses.
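
    A minimal sketch of the inference idea, under the strong simplifying assumption of an independent-site ("fields-only") spin model rather than the full model with couplings: fields are inferred from per-site mutation frequencies in a binary alignment, and mutant strains are then rank-ordered by the resulting energies (lower energy corresponding to higher inferred fitness).

```python
import numpy as np

rng = np.random.default_rng(2)
L, N = 12, 5000
freq_true = rng.uniform(0.05, 0.4, L)                # per-site mutation rates
seqs = (rng.random((N, L)) < freq_true).astype(int)  # synthetic binary "alignment"

f = seqs.mean(axis=0)                                # observed site frequencies
h = np.log(f / (1.0 - f))                            # maximum-likelihood fields

def energy(s):
    """Lower energy ~ higher prevalence ~ (by hypothesis) higher fitness."""
    return -float(np.dot(h, s))

rare, common = int(np.argmin(f)), int(np.argmax(f))
e_rare = energy(np.eye(L, dtype=int)[rare])          # single mutant at rarest site
e_common = energy(np.eye(L, dtype=int)[common])      # single mutant at commonest site
print(e_rare > e_common)                             # rarer mutation costs more energy
```

    The paper's point is precisely that this rank order survives even though the sequences come from non-equilibrium, immune-driven evolution rather than from equilibrium sampling of the fitness landscape.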

  4. Kinetic modelling of a surrogate diesel fuel applied to 3D auto-ignition in HCCI engines

    CERN Document Server

    Bounaceur, Roda; Fournet, René; Battin-Leclerc, Frédérique; Jay, S; Da Cruz, A Pires

    2007-01-01

    The prediction of auto-ignition delay times in HCCI engines has raised interest in detailed chemical models. This paper describes a validated kinetic mechanism for the oxidation of a model Diesel fuel (n-decane and α-methylnaphthalene). A 3D model for the description of low- and high-temperature auto-ignition in engines is presented. The behavior of the model fuel is compared with that of n-heptane. Simulations show that the 3D model coupled with the kinetic mechanism can reproduce experimental HCCI and Diesel engine results, and that correct modeling of auto-ignition in the cool-flame region is essential under HCCI conditions.

  5. Note: curve fit models for atomic force microscopy cantilever calibration in water.

    Science.gov (United States)

    Kennedy, Scott J; Cole, Daniel G; Clark, Robert L

    2011-11-01

    Atomic force microscopy stiffness calibrations performed on commercial instruments using the thermal noise method on the same cantilever in both air and water can vary by as much as 20% when a simple harmonic oscillator model and white noise are used in curve fitting. In this note, several fitting strategies are described that reduce this difference to about 11%. © 2011 American Institute of Physics
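
    A hedged sketch of the kind of curve fit involved: fitting a damped simple-harmonic-oscillator spectrum plus a white-noise floor to a synthetic thermal-noise power spectral density. The functional form and parameter values are illustrative; the note's actual fitting strategies differ in detail.

```python
import numpy as np
from scipy.optimize import curve_fit

def sho_psd(f, a, f0, q, white):
    """Damped SHO thermal-noise spectrum plus a white-noise floor (f in kHz)."""
    return white + a * f0**4 / ((f**2 - f0**2)**2 + (f * f0 / q)**2)

rng = np.random.default_rng(3)
f = np.linspace(1.0, 30.0, 1500)                       # kHz
truth = sho_psd(f, a=2.0, f0=12.0, q=3.0, white=0.05)  # low Q, as in liquid
psd = truth * (1 + 0.05 * rng.normal(size=f.size))     # multiplicative noise

popt, _ = curve_fit(sho_psd, f, psd, p0=[1.0, 10.0, 2.0, 0.1])
print(f"resonance = {abs(popt[1]):.2f} kHz, Q = {abs(popt[2]):.2f}")
```

    In water the quality factor is low and the resonance broad, which is why the choice between a pure SHO model and variants with colored noise changes the inferred stiffness appreciably, as the note reports.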

  6. Empirical models of Total Electron Content based on functional fitting over Taiwan during geomagnetic quiet condition

    Directory of Open Access Journals (Sweden)

    Y. Kakinami

    2009-08-01

    Full Text Available Empirical models of Total Electron Content (TEC) over Taiwan (120° E, 24° N) have been constructed by functional fitting, using Global Positioning System (GPS) data from 1998 to 2007 under geomagnetically quiet conditions (Dst > −30 nT). The models give TEC as a function of local time (LT), day of year (DOY), and solar activity (F), the latter represented by 1- to 162-day means of F10.7 and EUV. Models based on median values were also constructed and compared with the functional-fitting models. For the same F parameter, the functional-fitting models are more accurate than the median-based models in all cases. The functional-fitting model using daily EUV is the most accurate, with a root mean square error (RMSE) of 9.2 TECu, compared with 10.4 TECu for the 15-day running median and 14.7 TECu for the International Reference Ionosphere 2007 (IRI2007) model. IRI2007 overestimates TEC when solar activity is low and underestimates it when solar activity is high. Although the average of the 81-day centered running mean of F10.7 and daily F10.7 is often used as an indicator of EUV, our results suggest that the mean of F10.7 over the current day and the 1 to 54 days prior reproduces TEC better than the 81-day centered running mean. This paper compares a median-based model with a functional-fitting model for the first time; the results indicate that the functional-fitting model performs better. We also find that the EUV radiation is essential to derive an optimal TEC.
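
    The two F10.7 averaging windows compared in the abstract can be written down directly; the exact index conventions below (an 81-day centered window versus the current day plus the 54 prior days) are my reading of the text, shown on a toy series where the means are easy to check by hand.

```python
import numpy as np

def centered_81(f107, day):
    """81-day centered running mean of F10.7 around the given day."""
    return f107[day - 40:day + 41].mean()

def prior_55(f107, day):
    """Mean of F10.7 over the current day and the 1-54 days prior."""
    return f107[day - 54:day + 1].mean()

f107 = np.arange(200.0)      # toy series: the value equals the day index
print(centered_81(f107, 100), prior_55(f107, 100))
```

    On the toy series the centered mean returns the day itself (100.0) while the prior-window mean lags it (73.0), which makes concrete why the two proxies respond differently to the solar activity history that drives TEC.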

  7. Optimisation of Ionic Models to Fit Tissue Action Potentials: Application to 3D Atrial Modelling

    Directory of Open Access Journals (Sweden)

    Amr Al Abed

    2013-01-01

    Full Text Available A 3D model of atrial electrical activity has been developed with spatially heterogeneous electrophysiological properties. The atrial geometry, reconstructed from the male Visible Human dataset, included gross anatomical features such as the central and peripheral sinoatrial node (SAN, intra-atrial connections, pulmonary veins, inferior and superior vena cava, and the coronary sinus. Membrane potentials of myocytes from spontaneously active or electrically paced in vitro rabbit cardiac tissue preparations were recorded using intracellular glass microelectrodes. Action potentials of central and peripheral SAN, right and left atrial, and pulmonary vein myocytes were each fitted using a generic ionic model having three phenomenological ionic current components: one time-dependent inward, one time-dependent outward, and one leakage current. To bridge the gap between the single-cell ionic models and the gross electrical behaviour of the 3D whole-atrial model, a simplified 2D tissue disc with heterogeneous regions was optimised to arrive at parameters for each cell type under electrotonic load. Parameters were then incorporated into the 3D atrial model, which as a result exhibited a spontaneously active SAN able to rhythmically excite the atria. The tissue-based optimisation of ionic models and the modelling process outlined are generic and applicable to image-based computer reconstruction and simulation of excitable tissue.

  8. Incidence of Changes in Respiration-Induced Tumor Motion and Its Relationship With Respiratory Surrogates During Individual Treatment Fractions

    Energy Technology Data Exchange (ETDEWEB)

    Malinowski, Kathleen [Department of Bioengineering, A. James Clark School of Engineering, University of Maryland, College Park, MD (United States); Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States); McAvoy, Thomas J. [Department of Bioengineering, A. James Clark School of Engineering, University of Maryland, College Park, MD (United States); Institute of Systems Research, University of Maryland, College Park, MD (United States); George, Rohini [Department of Bioengineering, A. James Clark School of Engineering, University of Maryland, College Park, MD (United States); Dietrich, Sonja [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, CA (United States); D'Souza, Warren D., E-mail: wdsou001@umaryland.edu [Department of Bioengineering, A. James Clark School of Engineering, University of Maryland, College Park, MD (United States); Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States)

    2012-04-01

    Purpose: To determine how frequently (1) tumor motion and (2) the spatial relationship between tumor and respiratory surrogate markers change during a treatment fraction in lung and pancreas cancer patients. Methods and Materials: A Cyberknife Synchrony system radiographically localized the tumor and simultaneously tracked three respiratory surrogate markers fixed to a form-fitting vest. Data in 55 lung and 29 pancreas fractions were divided into successive 10-min blocks. Mean tumor positions and tumor position distributions were compared across 10-min blocks of data. Treatment margins were calculated from both 10 and 30 min of data. Partial least squares (PLS) regression models of tumor positions as a function of external surrogate marker positions were created from the first 10 min of data in each fraction; the incidence of significant PLS model degradation was used to assess changes in the spatial relationship between tumors and surrogate markers. Results: The absolute change in mean tumor position from first to third 10-min blocks was >5 mm in 13% and 7% of lung and pancreas cases, respectively. Superior-inferior and medial-lateral differences in mean tumor position were significantly associated with the lobe of lung. In 61% and 54% of lung and pancreas fractions, respectively, margins calculated from 30 min of data were larger than margins calculated from 10 min of data. The change in treatment margin magnitude for superior-inferior motion was >1 mm in 42% of lung and 45% of pancreas fractions. Significantly increasing tumor position prediction model error (mean ± standard deviation rates of change of 1.6 ± 2.5 mm per 10 min) over 30 min indicated tumor-surrogate relationship changes in 63% of fractions. Conclusions: Both tumor motion and the relationship between tumor and respiratory surrogate displacements change in most treatment fractions for patient in-room time of 30 min.
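
    The modeling idea can be sketched with ordinary least squares standing in for partial least squares for brevity: train a linear map from surrogate-marker positions to tumor position on the first block of data, then flag model degradation when prediction error grows in later blocks. All data below are synthetic, with a deliberate relationship shift in the final third.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 900                                   # e.g. 30 min sampled at 0.5 Hz
markers = rng.normal(0.0, 1.0, (n, 3))    # external surrogate signals
w_true = np.array([2.0, -1.0, 0.5])
tumor = markers @ w_true + 0.1 * rng.normal(size=n)
tumor[600:] += 3.0                        # tumor-surrogate relationship shifts

X = np.column_stack([markers, np.ones(n)])
train = slice(0, 300)                     # "first 10 minutes"
w, *_ = np.linalg.lstsq(X[train], tumor[train], rcond=None)

def block_rmse(sl):
    r = tumor[sl] - X[sl] @ w
    return float(np.sqrt(np.mean(r ** 2)))

rmse = [block_rmse(slice(i, i + 300)) for i in (0, 300, 600)]
degraded = rmse[2] > 2 * rmse[0]          # simple degradation flag
print([round(e, 2) for e in rmse], degraded)
```

    PLS is preferred over plain OLS in the study because the three marker signals are strongly correlated with one another; the monitoring logic is the same either way.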

  9. Surrogate Endpoints in Suicide Research

    Science.gov (United States)

    Wortzel, Hal S.; Gutierrez, Peter M.; Homaifar, Beeta Y.; Breshears, Ryan E.; Harwood, Jeri E.

    2010-01-01

    Surrogate endpoints frequently substitute for rare outcomes in research. The ability to learn about completed suicides by investigating more readily available and proximate outcomes, such as suicide attempts, has obvious appeal. However, concerns with surrogates from the statistical science perspective exist, and mounting evidence from…

  10. Is Model Fitting Necessary for Model-Based fMRI?

    Directory of Open Access Journals (Sweden)

    Robert C Wilson

    2015-06-01

    Full Text Available Model-based analysis of fMRI data is an important tool for investigating the computational role of different brain regions. With this method, theoretical models of behavior can be leveraged to find the brain structures underlying variables from specific algorithms, such as prediction errors in reinforcement learning. One potential weakness with this approach is that models often have free parameters and thus the results of the analysis may depend on how these free parameters are set. In this work we asked whether this hypothetical weakness is a problem in practice. We first developed general closed-form expressions for the relationship between results of fMRI analyses using different regressors, e.g., one corresponding to the true process underlying the measured data and one a model-derived approximation of the true generative regressor. Then, as a specific test case, we examined the sensitivity of model-based fMRI to the learning rate parameter in reinforcement learning, both in theory and in two previously-published datasets. We found that even gross errors in the learning rate lead to only minute changes in the neural results. Our findings thus suggest that precise model fitting is not always necessary for model-based fMRI. They also highlight the difficulty in using fMRI data for arbitrating between different models or model parameters. While these specific results pertain only to the effect of learning rate in simple reinforcement learning models, we provide a template for testing for effects of different parameters in other models.

  11. Is Model Fitting Necessary for Model-Based fMRI?

    Science.gov (United States)

    Wilson, Robert C; Niv, Yael

    2015-06-01

    Model-based analysis of fMRI data is an important tool for investigating the computational role of different brain regions. With this method, theoretical models of behavior can be leveraged to find the brain structures underlying variables from specific algorithms, such as prediction errors in reinforcement learning. One potential weakness with this approach is that models often have free parameters and thus the results of the analysis may depend on how these free parameters are set. In this work we asked whether this hypothetical weakness is a problem in practice. We first developed general closed-form expressions for the relationship between results of fMRI analyses using different regressors, e.g., one corresponding to the true process underlying the measured data and one a model-derived approximation of the true generative regressor. Then, as a specific test case, we examined the sensitivity of model-based fMRI to the learning rate parameter in reinforcement learning, both in theory and in two previously-published datasets. We found that even gross errors in the learning rate lead to only minute changes in the neural results. Our findings thus suggest that precise model fitting is not always necessary for model-based fMRI. They also highlight the difficulty in using fMRI data for arbitrating between different models or model parameters. While these specific results pertain only to the effect of learning rate in simple reinforcement learning models, we provide a template for testing for effects of different parameters in other models.
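
    The sensitivity test described above can be reproduced in miniature: generate reinforcement-learning prediction errors from the same reward sequence under two quite different learning rates, and check how correlated the resulting regressors are. High correlation between the regressors is why gross errors in the learning rate barely change the neural results. The parameter values are illustrative, not those of the published datasets.

```python
import numpy as np

def prediction_errors(rewards, alpha):
    """Rescorla-Wagner prediction errors for a fixed learning rate."""
    v, pes = 0.0, []
    for r in rewards:
        pe = r - v
        pes.append(pe)
        v += alpha * pe
    return np.array(pes)

rng = np.random.default_rng(5)
rewards = rng.binomial(1, 0.7, 500).astype(float)
pe_a = prediction_errors(rewards, alpha=0.3)   # "true" learning rate
pe_b = prediction_errors(rewards, alpha=0.6)   # grossly mis-specified
rho = np.corrcoef(pe_a, pe_b)[0, 1]
print(f"regressor correlation = {rho:.3f}")
```

    The correlation stays high because the reward term dominates the prediction error while the value estimate varies only modestly with the learning rate, which is also why fMRI struggles to arbitrate between parameter settings.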

  12. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    Science.gov (United States)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

    The goodness-of-fit indicator, i.e., the efficiency criterion, is central to model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and a zero-mean generalized error distribution (GED) to fit the distribution of the transformed errors. The BC-GED model unifies all recent distance-based goodness-of-fit indicators and reveals that the widely used mean square error (MSE) and mean absolute error (MAE) implicitly assume that model errors follow a Gaussian distribution and a zero-mean Laplace distribution, respectively. Empirical knowledge about goodness-of-fit indicators is also easily interpreted within the BC-GED framework: for example, the sensitivity to high flows of indicators with a large power of model errors results from the low probability of large model errors under the distributions those indicators assume. To assess the effect of the BC-GED parameters (the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and simulation results among the six indicators shows that the indicators separate clearly into two classes by the GED kurtosis β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated with the β > 1 class of indicators captures high flows very well but mimics baseflow poorly, whereas calibration with the β ≤ 1 class mimics baseflow very well, because the larger the value of β, the greater the emphasis placed on
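
    The core of the BC-GED objective can be written compactly: Box-Cox transform the observed and simulated series, then aggregate the absolute errors raised to the power β. With λ = 1 the distance reduces to the MSE for β = 2 and to the MAE for β = 1, matching the Gaussian and Laplace special cases noted above. The function below is a sketch of that structure, not the authors' exact likelihood.

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform (requires x > 0)."""
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def bc_ged_distance(obs, sim, lam, beta):
    """Distance implied by the BC-GED model: transform both series,
    then aggregate |error|**beta (beta is the GED kurtosis coefficient)."""
    e = boxcox(obs, lam) - boxcox(sim, lam)
    return np.mean(np.abs(e) ** beta)

obs = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
sim = np.array([1.2, 1.8, 4.5, 7.0, 18.0])
mse_like = bc_ged_distance(obs, sim, lam=1.0, beta=2.0)   # equals the MSE
mae_like = bc_ged_distance(obs, sim, lam=1.0, beta=1.0)   # equals the MAE
print(mse_like, mae_like)
```

    Small λ compresses large flows before the errors are aggregated, and small β down-weights large errors; both choices therefore shift the calibration's emphasis from peak flows toward baseflow, consistent with the two classes reported above.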

  13. Topological Performance Measures as Surrogates for Physical Flow Models for Risk and Vulnerability Analysis for Electric Power Systems

    CERN Document Server

    LaRocca, Sarah; Hassel, Henrik; Guikema, Seth

    2013-01-01

    Critical infrastructure systems must be both robust and resilient in order to ensure the functioning of society. To improve the performance of such systems, we often use risk and vulnerability analysis to find and address system weaknesses. A critical component of such analyses is the ability to accurately determine the negative consequences of various types of failures in the system. Numerous mathematical and simulation models exist which can be used to this end. However, there are relatively few studies comparing the implications of using different modeling approaches in the context of comprehensive risk analysis of critical infrastructures. Thus in this paper, we suggest a classification of these models, which span from simple topologically-oriented models to advanced physical flow-based models. Here, we focus on electric power systems and present a study aimed at understanding the tradeoffs between simplicity and fidelity in models used in the context of risk analysis. Specifically, the purpose of this pa...

  14. Soft X-ray spectral fits of Geminga with model neutron star atmospheres

    Science.gov (United States)

    Meyer, R. D.; Pavlov, G. G.; Meszaros, P.

    1994-01-01

    The spectrum of the soft X-ray pulsar Geminga consists of two components: a softer one, which can be interpreted as thermal-like radiation from the surface of the neutron star, and a harder one, interpreted as radiation from a polar cap heated by relativistic particles. We have fitted the soft spectrum using a detailed magnetized hydrogen atmosphere model. The fitting parameters are the hydrogen column density, the effective temperature T_eff, the gravitational redshift z, and the distance-to-radius ratio, for different values of the magnetic field B. The best fits for this model are obtained when B ≲ 1 × 10^12 G and z lies on the upper boundary of the explored range (z = 0.45). The values of T_eff ≈ (2-3) × 10^5 K are a factor of 2-3 lower than the value of T_eff obtained for blackbody fits with the same z. The lower T_eff increases the compatibility with some proposed schemes for fast neutrino cooling of neutron stars (NSs) by the direct Urca process or by exotic matter, but conventional cooling cannot be excluded. The hydrogen atmosphere fits also imply a smaller distance to Geminga than that inferred from a blackbody fit. An accurate evaluation of the distance would require a better knowledge of the ROSAT Position Sensitive Proportional Counter (PSPC) response to the low-energy region of the incident spectrum. Our modeling of the soft component with a cooler magnetized atmosphere also implies that the hard-component fit requires a characteristic temperature which is higher (by a factor of approximately 2-3) and a surface area which is smaller (by a factor of ~10^3), compared to previous blackbody fits.

  15. Finite population size effects in quasispecies models with single-peak fitness landscape

    Science.gov (United States)

    Saakian, David B.; Deem, Michael W.; Hu, Chin-Kun

    2012-04-01

    We consider finite population size effects for the Crow-Kimura and Eigen quasispecies models with a single-peak fitness landscape. We accurately formulate the iteration procedure for the finite population models, then derive the Hamilton-Jacobi equation (HJE) to describe the dynamics of the probability distribution. The steady-state solution of the HJE gives the variance of the mean fitness. Our results are useful for understanding for which population sizes of viruses the infinite population models can give reliable results for biological evolution problems.

  16. Fit of different linear models to the lactation curve of Italian water buffalo

    Directory of Open Access Journals (Sweden)

    N.P.P. Macciotta

    2010-01-01

    Full Text Available Mathematical modelling of the lactation curve by suitable functions of time, widely used in the dairy cattle industry, can also represent a fundamental tool for buffaloes: for management and breeding decisions, where average curves are considered, and for genetic evaluation by random regression models, where individual patterns are fitted.

  17. A fungal growth model fitted to carbon-limited dynamics of Rhizoctonia solani

    NARCIS (Netherlands)

    Jeger, M.J.; Lamour, A.; Gilligan, C.A.; Otten, W.

    2008-01-01

    Here, a quasi-steady-state approximation was used to simplify a mathematical model for fungal growth in carbon-limiting systems, and this was fitted to growth dynamics of the soil-borne plant pathogen and saprotroph Rhizoctonia solani. The model identified a criterion for invasion into

  18. An Assessment of the Nonparametric Approach for Evaluating the Fit of Item Response Models

    Science.gov (United States)

    Liang, Tie; Wells, Craig S.; Hambleton, Ronald K.

    2014-01-01

    As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…

  19. Modelling metabolic evolution on phenotypic fitness landscapes: a case study on C4 photosynthesis.

    Science.gov (United States)

    Heckmann, David

    2015-12-01

    How did the complex metabolic systems we observe today evolve through adaptive evolution? The fitness landscape is the theoretical framework to answer this question. Since experimental data on natural fitness landscapes is scarce, computational models are a valuable tool to predict landscape topologies and evolutionary trajectories. Careful assumptions about the genetic and phenotypic features of the system under study can simplify the design of such models significantly. The analysis of C4 photosynthesis evolution provides an example for accurate predictions based on the phenotypic fitness landscape of a complex metabolic trait. The C4 pathway evolved multiple times from the ancestral C3 pathway and models predict a smooth 'Mount Fuji' landscape accordingly. The modelled phenotypic landscape implies evolutionary trajectories that agree with data on modern intermediate species, indicating that evolution can be predicted based on the phenotypic fitness landscape. Future directions will have to include structural changes of metabolic fitness landscape structure with changing environments. This will not only answer important evolutionary questions about reversibility of metabolic traits, but also suggest strategies to increase crop yields by engineering the C4 pathway into C3 plants.

  20. Fitting of adaptive neuron model to electrophysiological recordings using particle swarm optimization algorithm

    Science.gov (United States)

    Shan, Bonan; Wang, Jiang; Zhang, Lvxia; Deng, Bin; Wei, Xile

    2017-02-01

    In order to fit a neural model's spiking features to electrophysiological recordings, this paper proposes a fitting framework based on the particle swarm optimization (PSO) algorithm to estimate the parameters of an augmented multi-timescale adaptive threshold (AugMAT) model. PSO is an iterative evolutionary computation method, and selecting a reasonable criterion function ensures its effectiveness. In this work, firing-rate information is used as the main spiking feature, and the estimation error of the firing rate is selected as the fitting criterion. A series of simulations verifies the performance of the framework. The first step is model validation: artificial training data are introduced to test the fitting procedure. We then discuss suitable PSO parameters, which offer an adequate compromise between speed and accuracy. Finally, the framework is used to fit the electrophysiological recordings; after three adjustment steps, the features of the experimental data are translated into a realistic spiking neuron model.
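
    A minimal global-best PSO illustrates the structure of such a framework. It fits two parameters of a stand-in rectified-linear f-I curve (not the AugMAT model) by minimizing a firing-rate error criterion; all coefficients and the model form are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
I = np.linspace(0.0, 2.0, 40)                     # injected-current grid

def rate(params, I):
    """Stand-in rectified-linear f-I curve (gain a, threshold b)."""
    a, b = params
    return a * np.maximum(I - b, 0.0)

target = rate((35.0, 0.6), I)                     # "recorded" firing rates

def cost(p):                                      # firing-rate error criterion
    return float(np.mean((rate(p, I) - target) ** 2))

n_particles, iters = 30, 200
pos = rng.uniform([0.0, 0.0], [100.0, 2.0], (n_particles, 2))
vel = np.zeros((n_particles, 2))
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    better = costs < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], costs[better]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print(f"estimated gain = {gbest[0]:.1f}, threshold = {gbest[1]:.2f}")
```

    Because PSO only needs cost evaluations, the same loop works unchanged when the criterion is the firing-rate error of a full spiking model driven by the experimental stimulus.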

  1. Can a first-order exponential decay model fit heart rate recovery after resistance exercise?

    Science.gov (United States)

    Bartels-Ferreira, Rhenan; de Sousa, Élder D; Trevizani, Gabriela A; Silva, Lilian P; Nakamura, Fábio Y; Forjaz, Cláudia L M; Lima, Jorge Roberto P; Peçanha, Tiago

    2015-03-01

    The time-constant of postexercise heart rate recovery (HRRτ), obtained by fitting the heart rate decay curve with a first-order exponential, has been used to assess cardiac autonomic recovery after endurance exercise. The feasibility of this model had not been tested after resistance exercise (RE). The aim of this study was to test the goodness of fit of the first-order exponential decay model for heart rate recovery (HRR) after RE. Ten healthy subjects participated in the study. The experimental sessions occurred on two separate days and consisted of the performance of 1 set of 10 repetitions at 50% or 80% of the load achieved on the one-repetition maximum test [low-intensity (LI) and high-intensity (HI) sessions, respectively]. Heart rate (HR) was continuously registered before and during exercise and also for 10 min of recovery. A monoexponential equation was used to fit the HRR curve during the postexercise period using different time windows (i.e. 30, 60, 90, … 600 s). For each time window, (i) HRRτ was calculated and (ii) the variation of HR explained by the model (R^2 goodness-of-fit index) was assessed. The HRRτ showed stabilization from 360 and 420 s on LI and HI, respectively. Acceptable R^2 values were observed from 360 s on LI (R^2 > 0.65) and at all tested time windows on HI (R^2 > 0.75). In conclusion, this study showed that, using a minimum length of monitoring (~420 s), HRR after RE can be adequately modelled by a first-order exponential fitting. © 2014 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
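
    The window-dependent monoexponential fitting procedure can be sketched as follows on synthetic recovery data; the heart-rate values and time constant are invented, not the study's.

```python
import numpy as np
from scipy.optimize import curve_fit

def hrr(t, hr_end, amp, tau):
    """First-order exponential recovery: HR(t) = hr_end + amp*exp(-t/tau)."""
    return hr_end + amp * np.exp(-t / tau)

rng = np.random.default_rng(7)
t = np.arange(0.0, 600.0, 5.0)                 # 10 min of recovery, 5 s samples
hr = hrr(t, 70.0, 50.0, 60.0) + rng.normal(0.0, 2.0, t.size)

for window in (60, 180, 420):                  # seconds of data used in the fit
    m = t <= window
    popt, _ = curve_fit(hrr, t[m], hr[m], p0=[80.0, 40.0, 30.0])
    ss_res = np.sum((hr[m] - hrr(t[m], *popt)) ** 2)
    r2 = 1.0 - ss_res / np.sum((hr[m] - hr[m].mean()) ** 2)
    print(f"{window:3d} s window: tau = {popt[2]:6.1f} s, R^2 = {r2:.3f}")
```

    Repeating the fit over progressively longer windows and watching where tau and R^2 stabilize mirrors the study's procedure for finding the minimum monitoring length.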

  2. The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting

    Science.gov (United States)

    Tao, Zhang; Li, Zhang; Dingjun, Chen

    Based on the idea of second-order curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A restrained-increase model is introduced in this paper, and the model parameters are solved with Matlab. The validity of the restrained-increase model is confirmed through numerical experiments, whose results show that its precision is satisfactory.

  3. Does the Foreign Income Shock in a Small Open Economy DSGE Model Fit Croatian Data?

    OpenAIRE

    Arčabić, Vladimir; Globan, Tomislav; Nadoveza, Ozana; Rogić Dumančić, Lucija; Tica, Josip

    2016-01-01

    The paper compares theoretical impulse response functions from a DSGE model for a small open economy with an empirical VAR model estimated for the Croatian economy. The theoretical model fits the data well as long as monetary policy is modelled as a fixed exchange rate regime. The paper considers only a foreign output gap shock. A positive foreign shock increases domestic GDP and prices and decreases terms of trade, which is in compliance with theoretical assumptions. Interest rates behave di...

  4. A goodness-of-fit test for occupancy models with correlated within-season revisits

    Science.gov (United States)

    Wright, Wilson; Irvine, Kathryn M.; Rodhouse, Thomas J.

    2016-01-01

    Occupancy modeling is important for exploring species distribution patterns and for conservation monitoring. Within this framework, explicit attention is given to species detection probabilities estimated from replicate surveys to sample units. A central assumption is that replicate surveys are independent Bernoulli trials, but this assumption becomes untenable when ecologists serially deploy remote cameras and acoustic recording devices over days and weeks to survey rare and elusive animals. Proposed solutions involve modifying the detection-level component of the model (e.g., first-order Markov covariate). Evaluating whether a model sufficiently accounts for correlation is imperative, but clear guidance for practitioners is lacking. Currently, an omnibus goodness-of-fit test using a chi-square discrepancy measure on unique detection histories is available for occupancy models (MacKenzie and Bailey, Journal of Agricultural, Biological, and Environmental Statistics, 9, 2004, 300; hereafter, MacKenzie–Bailey test). We propose a join count summary measure adapted from spatial statistics to directly assess correlation after fitting a model. We motivate our work with a dataset of multinight bat call recordings from a pilot study for the North American Bat Monitoring Program. We found in simulations that our join count test was more reliable than the MacKenzie–Bailey test for detecting inadequacy of a model that assumed independence, particularly when serial correlation was low to moderate. A model that included a Markov-structured detection-level covariate produced unbiased occupancy estimates except in the presence of strong serial correlation and a revisit design consisting only of temporal replicates. When applied to two common bat species, our approach illustrates that sophisticated models do not guarantee adequate fit to real data, underscoring the importance of model assessment. Our join count test provides a widely applicable goodness-of-fit test and

  5. Mathematical Modeling of Allelopathy. III. A Model for Curve-Fitting Allelochemical Dose Responses

    Science.gov (United States)

    Liu, De Li; An, Min; Johnson, Ian R.; Lovett, John V.

    2003-01-01

    Bioassay techniques are often used to study the effects of allelochemicals on plant processes, and it is generally observed that the processes are stimulated at low allelochemical concentrations and inhibited as the concentrations increase. A simple empirical model is presented to analyze this type of response. The stimulation-inhibition properties of allelochemical-dose responses can be described by the parameters in the model. The indices, p% reductions, are calculated to assess the allelochemical effects. The model is compared with experimental data for the response of lettuce seedling growth to Centaurepensin, the olfactory response of weevil larvae to α-terpineol, and the responses of annual ryegrass (Lolium multiflorum Lam.), creeping red fescue (Festuca rubra L., cv. Ensylva), Kentucky bluegrass (Poa pratensis L., cv. Kenblue), perennial ryegrass (L. perenne L., cv. Manhattan), and Rebel tall fescue (F. arundinacea Schreb) seedling growth to leachates of Rebel and Kentucky 31 tall fescue. The results show that the model gives a good description of observations and can be used to fit a wide range of dose responses. Assessments of the effects of leachates of Rebel and Kentucky 31 tall fescue clearly differentiate the properties of the allelopathic sources and the relative sensitivities of indicators such as the length of root and leaf. PMID:19330111

  7. Fitting simulated random events to experimental histograms by means of parametric models

    Energy Technology Data Exchange (ETDEWEB)

    Kortner, Oliver E-mail: oliver.kortner@cern.ch, kortner@mppmu.mpg.de; Zupancic, Crtomir

    2003-05-11

    Classical chi-square quantities are appropriate tools for fitting analytical parameter-dependent models to (multidimensional) measured histograms. In contrast, this article proposes a family of special chi-squares suitable for fits with models which simulate experimental data by Monte Carlo methods, thus introducing additional randomness. We investigate the dependence of such chi-squares on the number of experimental and simulated events in each bin, and on the theoretical parameter-dependent weight linking the two kinds of events. We identify the unknown probability distributions of the weights and their inter-bin correlations as the main obstacle to a general performance analysis of the proposed chi-square quantities.
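The per-bin comparison of experimental counts with weighted Monte Carlo counts can be illustrated with a simple discrepancy measure. This is a generic sketch of the idea, not the family of special chi-squares the article proposes; the variance term combining data and scaled simulation fluctuations is a common approximation.

```python
# Toy chi-square comparing an experimental histogram n_i with a Monte
# Carlo histogram m_i whose events carry a parameter-dependent weight w.
# The variance term includes both the data fluctuations (n_i) and the
# simulated sample's own statistical fluctuations (w**2 * m_i).

def chi2_data_vs_mc(data, mc, w):
    total = 0.0
    for n, m in zip(data, mc):
        expected = w * m
        var = n + w * w * m     # data variance + scaled MC variance
        if var > 0:
            total += (n - expected) ** 2 / var
    return total
```

Minimizing such a quantity over the model parameters (which enter through w) is the fit; the article's point is that the randomness of the simulated events makes the statistic's distribution harder to characterize than in the classical case.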

  8. Developing a Robust Surrogate Model of Chemical Flooding Based on the Artificial Neural Network for Enhanced Oil Recovery Implications

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Ahmadi

    2015-01-01

    Application of chemical flooding in petroleum reservoirs has become a hot topic in recent research. Development strategies for this technique are more robust and precise when both economic (net present value, NPV) and technical (recovery factor, RF) points of view are considered. In the current study, a predictive model is proposed for estimating the efficiency of chemical flooding in oil reservoirs. To this end, swarm intelligence is coupled with an artificial neural network (ANN). High-precision chemical flooding data banks reported in previous studies are used to test and validate the proposed intelligent model. According to the mean square error (MSE), correlation coefficient, and average absolute relative deviation, the suggested swarm approach has acceptable reliability, integrity and robustness. Thus, the proposed intelligent model can be considered an alternative for predicting the efficiency of chemical flooding in oil reservoirs when the required experimental data are not available or accessible.
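The surrogate idea itself is simple to illustrate: train a cheap approximation on a handful of expensive simulator runs, then query the approximation instead. The sketch below uses a quadratic least-squares fit rather than the paper's swarm-trained ANN, and both the input (a hypothetical scaled injection parameter x) and the recovery-factor values are made up.

```python
# Toy surrogate: fit RF(x) ≈ a + b*x + c*x**2 to a few (x, RF) samples by
# solving the 3x3 normal equations with Gaussian elimination.

def fit_quadratic(xs, ys):
    """Least-squares coefficients [a, b, c] for a + b*x + c*x**2."""
    s = [sum(x ** k for x in xs) for k in range(5)]       # power sums
    a_mat = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    r = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    for col in range(3):                                   # elimination
        piv = max(range(col, 3), key=lambda i: abs(a_mat[i][col]))
        a_mat[col], a_mat[piv] = a_mat[piv], a_mat[col]
        r[col], r[piv] = r[piv], r[col]
        for i in range(col + 1, 3):
            f = a_mat[i][col] / a_mat[col][col]
            for j in range(col, 3):
                a_mat[i][j] -= f * a_mat[col][j]
            r[i] -= f * r[col]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                                    # back-substitution
        coef[i] = (r[i] - sum(a_mat[i][j] * coef[j]
                              for j in range(i + 1, 3))) / a_mat[i][i]
    return coef

def surrogate(coef, x):
    a, b, c = coef
    return a + b * x + c * x * x
```

An ANN surrogate plays the same role with a far more flexible function class; the trade-off is that it needs more training runs and careful validation, which is why the abstract stresses MSE and correlation checks.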

  9. A new analytical edge spread function fitting model for modulation transfer function measurement

    Institute of Scientific and Technical Information of China (English)

    Tiecheng Li; Huajun Feng; Zhihai Xu

    2011-01-01

    We propose a new analytical edge spread function (ESF) fitting model to measure the modulation transfer function (MTF). The ESF data obtained from a slanted-edge image are fitted to our model through the non-linear least squares (NLLSQ) method. The differentiation of the ESF yields the line spread function (LSF), the Fourier transform of which gives the profile of the two-dimensional MTF. Compared with previous methods, the MTF estimate determined by our method conforms more closely to the reference. A practical application of our MTF measurement in degraded image restoration also validates the accuracy of our model.
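The ESF → LSF → MTF pipeline can be sketched end to end. The logistic edge profile below is an assumed stand-in, not the paper's analytical ESF model, and the sampling parameters are arbitrary; the point is the chain of operations: differentiate the edge, Fourier-transform the result, normalize at zero frequency.

```python
import cmath, math

def esf(x, width=1.0):
    """Illustrative logistic edge profile (0 -> 1 transition)."""
    return 1.0 / (1.0 + math.exp(-x / width))

n, step = 256, 0.25
samples = [esf((i - n // 2) * step) for i in range(n)]

# Differentiating the ESF gives the LSF (here a finite difference).
lsf = [samples[i + 1] - samples[i] for i in range(n - 1)]

def dft_mag(seq, k):
    """Magnitude of the k-th DFT coefficient of seq."""
    m = len(seq)
    return abs(sum(v * cmath.exp(-2j * math.pi * k * i / m)
                   for i, v in enumerate(seq)))

# MTF profile: |DFT(LSF)| normalized to unity at zero frequency.
mtf = [dft_mag(lsf, k) / dft_mag(lsf, 0) for k in range(8)]
```

For a smooth symmetric edge like this one, the resulting MTF starts at 1 and falls off monotonically over the low frequencies, as expected.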

  10. Fitting and comparing competing models of the species abundance distribution: assessment and prospect

    Directory of Open Access Journals (Sweden)

    Thomas J Matthews

    2014-06-01

    A species abundance distribution (SAD) characterises patterns in the commonness and rarity of all species within an ecological community. As such, the SAD provides the theoretical foundation for a number of other biogeographical and macroecological patterns, such as the species–area relationship, as well as being an interesting pattern in its own right. While there has been a resurgence in the study of SADs in the last decade, less focus has been placed on methodology in SAD research, and few attempts have been made to synthesise the vast array of methods which have been employed in SAD model evaluation. As such, our review has two aims. First, we provide a general overview of SADs, including descriptions of the commonly used distributions, plotting methods and issues with evaluating SAD models. Second, we review a number of recent advances in SAD model fitting and comparison. We conclude by providing a list of recommendations for fitting and evaluating SAD models. We argue that it is time for SAD studies to move away from many of the traditional methods available for fitting and evaluating models, such as sole reliance on the visual examination of plots, and embrace statistically rigorous techniques. In particular, we recommend the use of both goodness-of-fit tests and model-comparison analyses because each provides unique information which one can use to draw inferences.
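The model-comparison step the review recommends is typically an information-criterion comparison. The sketch below uses assumed log-likelihood values for two hypothetical candidate SAD models (not fits to real abundance data) to show AIC and Akaike weights.

```python
import math

def aic(log_lik, n_params):
    """Akaike information criterion: 2k - 2 ln(L)."""
    return 2 * n_params - 2 * log_lik

def akaike_weights(aics):
    """Relative support for each model, summing to 1."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    tot = sum(rel)
    return [r / tot for r in rel]

# Hypothetical fits: (model name, maximized log-likelihood, parameter count)
fits = [("log-series", -210.3, 1), ("lognormal", -208.9, 2)]
aics = [aic(ll, k) for _, ll, k in fits]
weights = akaike_weights(aics)
```

The weights quantify relative support; combined with an absolute goodness-of-fit test, this gives the two complementary pieces of information the review argues for.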

  11. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated-i.e. all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  12. Hierarchical Shrinkage Priors and Model Fitting for High-dimensional Generalized Linear Models

    Science.gov (United States)

    Yi, Nengjun; Ma, Shuangge

    2013-01-01

    Genetic and other scientific studies routinely generate very many predictor variables, which can be naturally grouped, with predictors in the same groups being highly correlated. It is desirable to incorporate the hierarchical structure of the predictor variables into generalized linear models for simultaneous variable selection and coefficient estimation. We propose two prior distributions: hierarchical Cauchy and double-exponential distributions, on coefficients in generalized linear models. The hierarchical priors include both variable-specific and group-specific tuning parameters, thereby not only adopting different shrinkage for different coefficients and different groups but also providing a way to pool the information within groups. We fit generalized linear models with the proposed hierarchical priors by incorporating flexible expectation-maximization (EM) algorithms into the standard iteratively weighted least squares as implemented in the general statistical package R. The methods are illustrated with data from an experiment to identify genetic polymorphisms for survival of mice following infection with Listeria monocytogenes. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). PMID:23192052

  13. Timing of blunt force injuries in long bones: the effects of the environment, PMI length and human surrogate model.

    Science.gov (United States)

    Coelho, Luís; Cardoso, Hugo F V

    2013-12-10

    Timing of blunt force trauma in human bone is a critical forensic issue, but there is limited knowledge on how different environmental conditions, the duration of postmortem interval (PMI), different bone types and different animal models influence fracture morphology. This study aims at evaluating the influence of the type of postmortem environment and the duration of the postmortem period on fracture morphology, for distinguishing perimortem from postmortem fractures on different types of long bones from different species. Fresh limb segments from pig and goat were sequentially left to decompose, under 3 different environmental circumstances (surface, buried and submerged), resulting in sets with different PMI lengths (0, 28, 56, 84, 112, 140, 168 and 196 days), which were then fractured. Fractured bones (total=325; pig tibia=110; pig fibula=110; goat metatarsals=105) were classified according to the Fracture Freshness Index (FFI). Climatic data for the experiment location was collected. Statistical analysis included descriptive statistics, correlation analysis between FFI and PMI, Mann-Whitney U tests comparing FFI medians for different PMI's and linear regression analysis using PMI, pluviosity and temperature as predictors for FFI. Surface samples presented increases in FFI with increasing PMI, with positive correlations for all bone types. The same results were observed in submerged samples, except for pig tibia. Median FFI values for surface samples could distinguish bones with PMI=0 days from PMI≥56 days. Buried samples presented no significant correlation between FFI and PMI, and nonsignificant regression models. Regression analysis of surface and submerged samples suggested differences in FFI variation with PMI between bone types, although without statistical significance. Adding climatic data to surface regression models resulted in PMI no longer predicting FFI. When comparing different animal models, linear regressions suggested greater increases in
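The correlation analysis between FFI and PMI used in the study can be illustrated with a Pearson correlation. All values below are made up for illustration (the PMI grid mirrors the study's design, but the FFI medians are hypothetical, not the study's data).

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

pmi = [0, 28, 56, 84, 112, 140, 168, 196]        # days, as in the design
ffi = [0.2, 0.8, 1.4, 2.1, 2.3, 3.0, 3.6, 4.1]   # hypothetical FFI medians
r = pearson_r(pmi, ffi)                          # strongly positive here
```

A strong positive r, as the surface samples showed, is what makes FFI usable for distinguishing perimortem from postmortem fracture timing in that environment.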

  14. Comparing PyMorph and SDSS photometry. I. Background sky and model fitting effects

    Science.gov (United States)

    Fischer, J.-L.; Bernardi, M.; Meert, A.

    2017-01-01

    A number of recent estimates of the total luminosities of galaxies in the SDSS are significantly larger than those reported by the SDSS pipeline. This is because of a combination of three effects: one is simply a matter of defining the scale out to which one integrates the fit when defining the total luminosity, and amounts on average to ≤0.1 mags even for the most luminous galaxies. The other two are less trivial and tend to be larger; they are due to differences in how the background sky is estimated and what model is fit to the surface brightness profile. We show that PyMorph sky estimates are fainter than those of the SDSS DR7 or DR9 pipelines, but are in excellent agreement with the estimates of Blanton et al. (2011). Using the SDSS sky biases luminosities by more than a few tenths of a magnitude for objects with half-light radii ≥7 arcseconds. In the SDSS main galaxy sample these are typically luminous galaxies, so they are not necessarily nearby. This bias becomes worse when allowing the model more freedom to fit the surface brightness profile. When PyMorph sky values are used, then two component Sersic-Exponential fits to E+S0s return more light than single component deVaucouleurs fits (up to ˜0.2 mag), but less light than single Sersic fits (0.1 mag). Finally, we show that PyMorph fits of Meert et al. (2015) to DR7 data remain valid for DR9 images. Our findings show that, especially at large luminosities, these PyMorph estimates should be preferred to the SDSS pipeline values.

  16. Non-Uniqueness of the Geometry of Interplanetary Magnetic Flux Ropes Obtained from Model-Fitting

    Science.gov (United States)

    Marubashi, K.; Cho, K.-S.

    2015-12-01

    Since the early recognition of the important role of interplanetary magnetic flux ropes (IPFRs) to carry the southward magnetic fields to the Earth, many attempts have been made to determine the structure of the IPFRs by model-fitting analyses to the interplanetary magnetic field variations. This paper describes the results of fitting analyses for three selected solar wind structures in the latter half of 2014. In the fitting analysis a special attention was paid to identification of all the possible models or geometries that can reproduce the observed magnetic field variation. As a result, three or four geometries have been found for each of the three cases. The non-uniqueness of the fitted results include (1) the different geometries naturally stemming from the difference in the models used for fitting, and (2) an unexpected result that either of magnetic field chirality, left-handed and right-handed, can reproduce the observation in some cases. Thus we conclude that the model-fitting cannot always give us a unique geometry of the observed magnetic flux rope. In addition, we have found that the magnetic field chirality of a flux rope cannot be uniquely inferred from the sense of field vector rotation observed in the plane normal to the Earth-Sun line; the sense of rotation changes depending on the direction of the flux rope axis. These findings exert an important impact on the studies aimed at the geometrical relationships between the flux ropes and the magnetic field structures in the solar corona where the flux ropes were produced, such studies being an important step toward predicting geomagnetic storms based on observations of solar eruption phenomena.

  17. Assessing the weighted multi-objective adaptive surrogate model optimization to derive large-scale reservoir operating rules with sensitivity analysis

    Science.gov (United States)

    Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao

    2017-01-01

    The optimization of a large-scale reservoir system is time-consuming due to its intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to solve the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules by the aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensional reduction, and (3) reducing computational cost and speeding up the search by WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). An intercomparison of the non-dominated sorting genetic algorithm II (NSGAII), WNSGAII and WMO-ASMO is conducted in the large-scale reservoir system of the Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median of annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and in the median of the ecological index, improved by 3.87% (from 1.879 to 1.809) with 500 simulations, because of the weighted crowding distance, and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation of 530.032 billion kW h and ecological index of 1.675) with 1000 simulations and computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method is proved to be more efficient and could provide a better Pareto frontier.
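The non-dominated sorting at the heart of NSGAII and its weighted variants can be sketched with a simple Pareto filter. The objective pairs below are toy values (maximize power generation, minimize ecological index), not solutions from the reservoir system.

```python
def dominates(a, b):
    """a dominates b on (maximize power, minimize eco index):
    at least as good in both objectives and strictly better in one."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(solutions):
    """Keep the solutions not dominated by any other candidate."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# (annual power generation, ecological index) -- hypothetical candidates
cands = [(523.3, 1.700), (528.7, 1.809), (530.0, 1.875), (525.0, 1.900)]
front = pareto_front(cands)   # (525.0, 1.900) is dominated and dropped
```

NSGAII repeats this filtering in layers and adds a crowding-distance tiebreak; the weighted variant in the paper biases that tiebreak toward preferred regions of the front.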

  18. Testing the fitness consequences of the thermoregulatory and parental care models for the origin of endothermy.

    Directory of Open Access Journals (Sweden)

    Sabrina Clavijo-Baquet

    The origin of endothermy is a puzzling phenomenon in the evolution of vertebrates. To address this issue several explicative models have been proposed. The main models proposed for the origin of endothermy are the aerobic capacity, the thermoregulatory and the parental care models. Our main proposal is that to compare the alternative models, a critical aspect is to determine how strongly natural selection was influenced by body temperature, and basal and maximum metabolic rates during the evolution of endothermy. We evaluate these relationships in the context of three main hypotheses aimed at explaining the evolution of endothermy, namely the parental care hypothesis and two hypotheses related to the thermoregulatory model (thermogenic capacity and higher body temperature models). We used data on basal and maximum metabolic rates and body temperature from 17 rodent populations, and used intrinsic population growth rate (Rmax) as a global proxy of fitness. We found greater support for the thermogenic capacity version of the thermoregulatory model: greater thermogenic capacity is associated with increased fitness in rodent populations. To our knowledge, this is the first test of the fitness consequences of the thermoregulatory and parental care models for the origin of endothermy.

  19. The FIT 2.0 Model - Fuel-cycle Integration and Tradeoffs

    Energy Technology Data Exchange (ETDEWEB)

    Steven J. Piet; Nick R. Soelberg; Layne F. Pincock; Eric L. Shaber; Gregory M Teske

    2011-06-01

    All mass streams from fuel separation and fabrication are products that must meet some set of product criteria – fuel feedstock impurity limits, waste acceptance criteria (WAC), material storage (if any), or recycle material purity requirements such as zirconium for cladding or lanthanides for industrial use. These must be considered in a systematic and comprehensive way. The FIT model and the “system losses study” team that developed it [Shropshire2009, Piet2010b] are steps by the Fuel Cycle Technology program toward an analysis that accounts for the requirements and capabilities of each fuel cycle component, as well as major material flows within an integrated fuel cycle. This will help the program identify near-term R&D needs and set longer-term goals. This report describes FIT 2, an update of the original FIT model.[Piet2010c] FIT is a method to analyze different fuel cycles; in particular, to determine how changes in one part of a fuel cycle (say, fuel burnup, cooling, or separation efficiencies) chemically affect other parts of the fuel cycle. FIT provides the following: Rough estimate of physics and mass balance feasibility of combinations of technologies. If feasibility is an issue, it provides an estimate of how performance would have to change to achieve feasibility. Estimate of impurities in fuel and impurities in waste as function of separation performance, fuel fabrication, reactor, uranium source, etc.

  1. Source Localization with Acoustic Sensor Arrays Using Generative Model Based Fitting with Sparse Constraints

    Directory of Open Access Journals (Sweden)

    Javier Macias-Guarasa

    2012-10-01

    This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints on the parameters of the model are included, enforcing the number of simultaneously active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with the SRP-PHAT strategies.

  2. A Nonparametric Approach for Assessing Goodness-of-Fit of IRT Models in a Mixed Format Test

    Science.gov (United States)

    Liang, Tie; Wells, Craig S.

    2015-01-01

    Investigating the fit of a parametric model plays a vital role in validating an item response theory (IRT) model. An area that has received little attention is the assessment of multiple IRT models used in a mixed-format test. The present study extends the nonparametric approach, proposed by Douglas and Cohen (2001), to assess model fit of three…

  3. On Fitting Nonlinear Latent Curve Models to Multiple Variables Measured Longitudinally

    Science.gov (United States)

    Blozis, Shelley A.

    2007-01-01

    This article shows how nonlinear latent curve models may be fitted for simultaneous analysis of multiple variables measured longitudinally using Mx statistical software. Longitudinal studies often involve observation of several variables across time with interest in the associations between change characteristics of different variables measured…

  4. Assessing item fit for unidimensional item response theory models using residuals from estimated item response functions.

    Science.gov (United States)

    Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee

    2013-07-01

    Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
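The residual idea, comparing an observed proportion correct with the model's item characteristic curve, can be sketched for a 2PL model. The counts, ability level and item parameters below are hypothetical, and the standardization uses a plain binomial standard error rather than the authors' exact statistic.

```python
import math

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: P(correct | ability theta)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def standardized_residual(n_correct, n_total, theta, a, b):
    """(observed proportion - model probability) / binomial SE."""
    p = icc_2pl(theta, a, b)
    se = math.sqrt(p * (1.0 - p) / n_total)
    return (n_correct / n_total - p) / se

# Hypothetical group of 400 examinees near theta = 0.5 on an item with
# discrimination a = 1.2 and difficulty b = 0.0:
z = standardized_residual(260, 400, 0.5, 1.2, 0.0)
```

Under a well-fitting model such residuals behave approximately like standard normal deviates across ability groups, which is the large-sample property the paper establishes for its proposed residual.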

  5. Longitudinal Changes in Physical Fitness Performance in Youth: A Multilevel Latent Growth Curve Modeling Approach

    Science.gov (United States)

    Wang, Chee Keng John; Pyun, Do Young; Liu, Woon Chia; Lim, Boon San Coral; Li, Fuzhong

    2013-01-01

    Using a multilevel latent growth curve modeling (LGCM) approach, this study examined longitudinal change in levels of physical fitness performance over time (i.e. four years) in young adolescents aged from 12-13 years. The sample consisted of 6622 students from 138 secondary schools in Singapore. Initial analyses found between-school variation on…

  6. Checking the Adequacy of Fit of Models from Split-Plot Designs

    DEFF Research Database (Denmark)

    Almini, A. A.; Kulahci, Murat; Montgomery, D. C.

    2009-01-01

    One of the main features that distinguish split-plot experiments from other experiments is that they involve two types of experimental errors: the whole-plot (WP) error and the subplot (SP) error. Taking this into consideration is very important when computing measures of adequacy of fit for split-plot models. In this article, we propose the computation of two R-2, R-2-adjusted, prediction error sums of squares (PRESS), and R-2-prediction statistics to measure the adequacy of fit for the WP and the SP submodels in a split-plot design. This is complemented with the graphical analysis of the two types of errors to check for any violation of the underlying assumptions and the adequacy of fit of split-plot models. Using examples, we show how computing two measures of model adequacy of fit for each split-plot design model is appropriate and useful as they reveal whether the correct WP and SP effects have…

  7. Fit Gap Analysis – The Role of Business Process Reference Models

    Directory of Open Access Journals (Sweden)

    Dejan Pajk

    2013-12-01

    Enterprise resource planning (ERP) systems support solutions for standard business processes such as financial, sales, procurement and warehouse processes. In order to improve the understandability and efficiency of their implementation, ERP vendors have introduced reference models that describe the processes and underlying structure of an ERP system. To select and successfully implement an ERP system, the capabilities of that system have to be compared with a company’s business needs. Based on this comparison, all of the fits and gaps must be identified and further analysed. This step usually forms part of ERP implementation methodologies and is called fit gap analysis. The paper theoretically overviews methods for applying reference models and describes fit gap analysis processes in detail. The paper’s first contribution is its presentation of a fit gap analysis using standard business process modelling notation. The second contribution is the demonstration of a process-based comparison approach between a supply chain process and an ERP system process reference model. In addition to its theoretical contributions, the results can also be practically applied to projects involving the selection and implementation of ERP systems.

  8. Impact of Missing Data on Person-Model Fit and Person Trait Estimation

    Science.gov (United States)

    Zhang, Bo; Walker, Cindy M.

    2008-01-01

    The purpose of this research was to examine the effects of missing data on person-model fit and person trait estimation in tests with dichotomous items. Under the missing-completely-at-random framework, four missing data treatment techniques were investigated including pairwise deletion, coding missing responses as incorrect, hotdeck imputation,…

  10. Modeling of pharmaceuticals mixtures toxicity with deviation ratio and best-fit functions models.

    Science.gov (United States)

    Wieczerzak, Monika; Kudłak, Błażej; Yotova, Galina; Nedyalkova, Miroslava; Tsakovski, Stefan; Simeonov, Vasil; Namieśnik, Jacek

    2016-11-15

    The present study deals with the assessment of ecotoxicological parameters of 9 drugs (diclofenac (sodium salt), oxytetracycline hydrochloride, fluoxetine hydrochloride, chloramphenicol, ketoprofen, progesterone, estrone, androstenedione and gemfibrozil), present in the environmental compartments at specific concentration levels, and their mutual combinations by couples against Microtox® and XenoScreen YES/YAS® bioassays. As the quantitative assessment of the ecotoxicity of drug mixtures is a complex and sophisticated topic, in the present study we used two major approaches to gain specific information on the mutual impact of two separate drugs present in a mixture. The first approach is well documented in many toxicological studies and follows the procedure for assessing three types of models, namely concentration addition (CA), independent action (IA) and simple interaction (SI), by calculation of a model deviation ratio (MDR) for each one of the experiments carried out. The second approach was based on the assumption that the mutual impact in each mixture of two drugs could be described by a best-fit model function, with calculation of a weight (regression coefficient or other model parameter) for each of the participants in the mixture, or by correlation analysis. It was shown that the sign and the absolute value of the weight or the correlation coefficient could be a reliable measure of the impact of either drug A on drug B or, vice versa, of B on A. The results justify the statement that both approaches give a similar assessment of the mode of mutual interaction of the drugs studied. It was found that most of the drug mixtures exhibit independent action, and only a few of the mixtures show synergistic or dependent action. Copyright © 2016. Published by Elsevier B.V.
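
The first approach can be sketched in a few lines of Python. The EC50 values, mixture fractions, and the two-fold classification factor below are illustrative assumptions, not data from the study; the concentration-addition (CA) formula and the model-deviation-ratio (MDR) classification rule are the standard textbook forms.

```python
def ca_predicted_ec50(fractions, ec50s):
    # Concentration addition (Loewe): 1/EC50_mix = sum_i p_i / EC50_i
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

def model_deviation_ratio(predicted, observed):
    # MDR = predicted effect concentration / observed effect concentration
    return predicted / observed

def classify(mdr, factor=2.0):
    # Common rule of thumb: MDR within [1/factor, factor] -> additivity
    if mdr > factor:
        return "synergism"
    if mdr < 1.0 / factor:
        return "antagonism"
    return "additivity"

# Equitoxic binary mixture of hypothetical drugs A (EC50 = 10) and B (EC50 = 40)
pred = ca_predicted_ec50([0.5, 0.5], [10.0, 40.0])
print(pred)  # 16.0
print(classify(model_deviation_ratio(pred, observed=15.0)))  # additivity
```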

  11. A Fundamental Study of Convective Mixing of CO2 in Layered Heterogeneous Saline Aquifers with Low Permeability Zones using Surrogate Fluids and Numerical Modeling

    Science.gov (United States)

    Agartan, E.; Illangasekare, T. H.; Cihan, A.; Birkholzer, J. T.; Zhou, Q.; Trevisan, L.

    2013-12-01

    Dissolution trapping is one of the primary mechanisms contributing to long-term and stable storage of supercritical CO2 (scCO2) in deep saline geologic formations. When entrapped scCO2 dissolves in formation brine, density-driven convective fingers are expected to be generated due to the higher density of the solution compared to brine. These fingers enhance mixing of dissolved scCO2 in brine (Ennis-King & Paterson, 2003). The goal of this study is to evaluate the contribution of convective mixing to dissolution trapping of CO2 in naturally layered heterogeneous formations with low permeability zones via experimental and numerical analyses. To understand the fundamental process of dissolution trapping in the laboratory under ambient pressure and temperature conditions, a group of surrogate fluids were selected according to their density and viscosity values before and after dissolution. The fluids were tested in a variety of porous media systems. After selection of the appropriate fluid mixture, based on the closest behavior to scCO2-brine systems, a set of experiments in a small homogeneously packed test tank was performed to analyze the fingering behavior. A second set of experiments was conducted in the same test tank with layered soil systems to study the effects of formation heterogeneity on convective mixing. A finite-volume-based numerical code was developed to capture the dominant processes observed in the experiments. This model was then used to simulate more complex heterogeneous systems that were not represented in the limited set of experiments. Results of these analyses suggest that convective fingers developed in homogeneous formations may not contribute significantly to mixing, and hence to dissolution trapping, in heterogeneous formations, depending on the permeability contrast and the thickness of the low-permeability layers.

  12. Automatic segmentation of vertebral arteries in CT angiography using combined circular and cylindrical model fitting

    Science.gov (United States)

    Lee, Min Jin; Hong, Helen; Chung, Jin Wook

    2014-03-01

    We propose an automatic vessel segmentation method for vertebral arteries in CT angiography using combined circular and cylindrical model fitting. First, to generate multi-segmented volumes, the whole volume is automatically divided into four segments based on the anatomical properties of bone structures along the z-axis of the head and neck. To define an optimal volume circumscribing the vertebral arteries, anterior-posterior bounding and side boundaries are defined as the initial extracted vessel region. Second, the initial vessel candidates are tracked using circular model fitting. Since the boundaries of the vertebral arteries are ambiguous where the arteries pass through the transverse foramen in the cervical vertebra, the circular model is extended along the z-axis to a cylindrical model to incorporate additional vessel information from neighboring slices. Finally, the boundaries of the vertebral arteries are detected using graph-cut optimization. In our experiments, the proposed method provides accurate results without bone artifacts or eroded vessels in the cervical vertebra.

  13. Erroneous Arrhenius: modified Arrhenius model best explains the temperature dependence of ectotherm fitness.

    Science.gov (United States)

    Knies, Jennifer L; Kingsolver, Joel G

    2010-08-01

    The initial rise of fitness that occurs with increasing temperature is attributed to Arrhenius kinetics, in which rates of reaction increase exponentially with increasing temperature. Models based on Arrhenius typically assume single rate-limiting reactions over some physiological temperature range for which all the rate-limiting enzymes are in 100% active conformation. We test this assumption using data sets for microbes that have measurements of fitness (intrinsic rate of population growth) at many temperatures and over a broad temperature range and for diverse ectotherms that have measurements at fewer temperatures. When measurements are available at many temperatures, strictly Arrhenius kinetics are rejected over the physiological temperature range. However, over a narrower temperature range, we cannot reject strictly Arrhenius kinetics. The temperature range also affects estimates of the temperature dependence of fitness. These results indicate that Arrhenius kinetics only apply over a narrow range of temperatures for ectotherms, complicating attempts to identify general patterns of temperature dependence.
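
The strictly Arrhenius null model tested above reduces to a straight line in (1/T, ln r) space, so it can be fitted by ordinary least squares. A minimal sketch on synthetic data with an assumed activation energy of 50 kJ/mol (not values from the paper):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def fit_arrhenius(temps_K, rates):
    # Linearize ln(r) = ln(A) - Ea/(R*T) and solve by least squares
    # with x = 1/T and y = ln(r); returns (Ea, A).
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    slope = sxy / sxx
    return -slope * R, math.exp(ybar - slope * xbar)

# Synthetic, strictly Arrhenius data with an assumed Ea of 50 kJ/mol
Ea_true, A_true = 50e3, 1e7
temps = [280.0 + 5.0 * i for i in range(9)]
rates = [A_true * math.exp(-Ea_true / (R * T)) for T in temps]
Ea_hat, A_hat = fit_arrhenius(temps, rates)
print(round(Ea_hat))  # recovers 50000 J/mol on exact data
```

On real fitness data, systematic curvature in this plot over a broad temperature range is exactly the deviation from strict Arrhenius behavior the paper reports.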

  14. Modelling of the toe trajectory during normal gait using circle-fit approximation.

    Science.gov (United States)

    Fang, Juan; Hunt, Kenneth J; Xie, Le; Yang, Guo-Yuan

    2016-10-01

    This work aimed to validate the approach of using a circle to fit the toe trajectory relative to the hip and to investigate linear regression models for describing such toe trajectories in normal gait. Twenty-four subjects walked at seven speeds. Best-fit circle algorithms were developed to approximate the relative toe trajectory with a circle. The mean approximation error between the toe trajectory and its best-fit circle was less than 4%. Regarding the best-fit circles for the toe trajectories of all subjects, the normalised radius was constant, while the normalised centre offset decreased as the walking cadence increased; the curve range generally had a positive linear relationship with walking cadence. The regression functions of the circle radius, the centre offset and the curve range with leg length and walking cadence were definitively defined. This study demonstrated that circle-fit approximation of relative toe trajectories is generally applicable in normal gait. The functions provide a quantitative description of the relative toe trajectories. These results have potential application in the design of gait rehabilitation technologies.
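
A common way to compute such a best-fit circle is the algebraic (Kåsa) fit, which is linear least squares in the coefficients of x² + y² + ax + by + c = 0. The sketch below is a generic implementation under that assumption; the paper's exact algorithm is not specified in the abstract.

```python
import math

def fit_circle(points):
    # Kasa algebraic fit: minimize sum (x^2 + y^2 + a*x + b*y + c)^2,
    # a linear least-squares problem in (a, b, c).
    Sxx = Sxy = Syy = Sx = Sy = 0.0
    Sxz = Syz = Sz = 0.0
    n = len(points)
    for x, y in points:
        z = x * x + y * y
        Sxx += x * x; Sxy += x * y; Syy += y * y
        Sx += x; Sy += y
        Sxz += x * z; Syz += y * z; Sz += z
    M = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, float(n)]]
    a, b, c = solve3(M, [-Sxz, -Syz, -Sz])
    cx, cy = -a / 2.0, -b / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - c)

def solve3(M, v):
    # Gaussian elimination with partial pivoting for a 3x3 system
    A = [row[:] + [val] for row, val in zip(M, v)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for j in range(i, 4):
                A[r][j] -= f * A[i][j]
    x = [0.0] * 3
    for i in range(2, -1, -1):
        x[i] = (A[i][3] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

# Noise-free arc of the circle centred at (1, 2) with radius 3
pts = [(1 + 3 * math.cos(t / 10), 2 + 3 * math.sin(t / 10)) for t in range(20)]
cx, cy, r = fit_circle(pts)
print(round(cx, 3), round(cy, 3), round(r, 3))
```

The Kåsa fit is exact on noise-free points; on short, noisy arcs it is known to underestimate the radius, which is one reason more robust geometric fits exist.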

  15. Econometric modelling of risk adverse behaviours of entrepreneurs in the provision of house fittings in China

    Directory of Open Access Journals (Sweden)

    Rita Yi Man Li

    2012-03-01

    Full Text Available Entrepreneurs have always borne the risk of running their business. They reap a profit in return for their risk taking and work. Housing developers are no different. In many countries, such as Australia, the United Kingdom and the United States, they interpret the tastes of the buyers and provide the dwellings they develop with basic fittings such as floor and wall coverings, bathroom fittings and kitchen cupboards. In mainland China, however, in most developments, units or houses are sold without floor or wall coverings, kitchen or bathroom fittings. What is the motive behind this choice? This paper analyses the factors affecting housing developers’ decisions to provide fittings based on 1701 housing developments in Hangzhou, Chongqing and Hangzhou using a Probit model. The results show that developers build a higher proportion of bare units in mainland China when: (1) there is a shortage of housing; (2) land costs are high, so that the comparative costs of providing fittings become relatively low.
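
A Probit specification like the one described models the probability of a bare unit as Φ(β₀ + β₁x). A minimal sketch of the link function and its log-likelihood, with purely hypothetical coefficients (the study's estimates are not given in the abstract):

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def probit_prob(beta0, beta1, x):
    # P(unit is sold bare | x) under a probit link
    return norm_cdf(beta0 + beta1 * x)

def log_likelihood(beta0, beta1, data):
    # data: iterable of (x, y) pairs, with y = 1 if the unit is sold bare
    ll = 0.0
    for x, y in data:
        p = probit_prob(beta0, beta1, x)
        ll += math.log(p if y == 1 else 1.0 - p)
    return ll

# Hypothetical coefficients: x is a standardized land-cost index, and a
# positive beta1 means higher land cost -> more bare units.
print(round(probit_prob(-0.5, 0.8, 0.0), 3))  # 0.309 at the average land cost
print(round(probit_prob(-0.5, 0.8, 2.0), 3))  # larger probability at high cost
```

In practice the coefficients would be found by maximizing this log-likelihood over the full dataset.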

  16. The fitness landscape of HIV-1 gag: advanced modeling approaches and validation of model predictions by in vitro testing.

    Directory of Open Access Journals (Sweden)

    Jaclyn K Mann

    2014-08-01

    Full Text Available Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model), which is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple and 25 single mutations within these) predicted to be either harmful to HIV-1 viability or fitness-neutral were introduced into HIV-1 NL4-3 by site-directed mutagenesis, and the replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = -0.74, p = 3.6×10-6) are strongly correlated, and this was further strengthened in the regularized Ising model (r = -0.83, p = 3.7×10-12). Performance of the Potts model (r = -0.73, p = 9.7×10-9) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion
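
An Ising-type fitness model of this kind assigns each sequence an energy built from single-site fields and pairwise couplings, with higher energy implying lower predicted replicative fitness. A toy sketch with made-up parameters (not the paper's inferred values):

```python
def ising_fitness(seq, fields, couplings):
    # Ising-type energy of a binary sequence (0 = wild type, 1 = mutant);
    # higher energy means lower predicted replicative fitness, so the
    # negated energy serves as a fitness proxy.
    energy = sum(h * s for h, s in zip(fields, seq))
    for (i, j), J in couplings.items():
        energy += J * seq[i] * seq[j]
    return -energy

# Toy 4-site model (made-up parameters): mutations at site 2 are very costly,
# while mutations at sites 0 and 1 partially compensate each other (J < 0).
h = [1.0, 1.2, 3.0, 0.5]
J = {(0, 1): -1.5}
wild_type = ising_fitness([0, 0, 0, 0], h, J)
single = ising_fitness([0, 0, 1, 0], h, J)
double = ising_fitness([1, 1, 0, 0], h, J)
print(wild_type, single, double)  # rank order: wild type > double > single
```

Rank-order predictions like these are what the in vitro replication assays in the paper were compared against; the Potts generalization replaces the binary state at each site with one state per amino acid.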

  17. Fitting parametric models of diffusion MRI in regions of partial volume

    Science.gov (United States)

    Eaton-Rosen, Zach; Cardoso, M. J.; Melbourne, Andrew; Orasanu, Eliza; Bainbridge, Alan; Kendall, Giles S.; Robertson, Nicola J.; Marlow, Neil; Ourselin, Sebastien

    2016-03-01

    Regional analysis is normally done by fitting models per voxel and then averaging over a region, accounting for partial volume (PV) only to some degree. In thin, folded regions such as the cerebral cortex, such methods do not work well, as the partial volume confounds parameter estimation. Instead, we propose to fit the models per region directly with explicit PV modeling. In this work we robustly estimate region-wise parameters whilst explicitly accounting for partial volume effects. We use a high-resolution segmentation from a T1 scan to assign each voxel in the diffusion image a probabilistic membership to each of k tissue classes. We rotate the DW signal at each voxel so that it aligns with the z-axis, then model the signal at each voxel as a linear superposition of a representative signal from each of the k tissue types. Fitting involves optimising these representative signals to best match the data, given the known probabilities of belonging to each tissue type that we obtained from the segmentation. We demonstrate this method improves parameter estimation in digital phantoms for the diffusion tensor (DT) and `Neurite Orientation Dispersion and Density Imaging' (NODDI) models. The method provides accurate parameter estimates even in regions where the normal approach fails completely, for example where partial volume is present in every voxel. Finally, we apply this model to brain data from preterm infants, where the thin, convoluted, maturing cortex necessitates such an approach.
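
The core of the proposed method is a linear least-squares problem: each voxel's signal is modeled as a membership-weighted sum of per-tissue representative signals. A minimal two-tissue sketch under that assumption (single measurement per voxel; synthetic memberships and signals, not the diffusion models themselves):

```python
def fit_tissue_signals(memberships, signals):
    # Least-squares solution of y_v ~ p_v1*S1 + p_v2*S2 over all voxels v,
    # via the 2x2 normal equations (memberships come from a segmentation).
    a11 = sum(p1 * p1 for p1, _ in memberships)
    a12 = sum(p1 * p2 for p1, p2 in memberships)
    a22 = sum(p2 * p2 for _, p2 in memberships)
    b1 = sum(p1 * y for (p1, _), y in zip(memberships, signals))
    b2 = sum(p2 * y for (_, p2), y in zip(memberships, signals))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

# Every voxel is mixed (no pure voxel anywhere), yet the region-wise fit
# still recovers both representative signals exactly on noise-free data.
memb = [(0.8, 0.2), (0.6, 0.4), (0.3, 0.7), (0.5, 0.5)]
true_s1, true_s2 = 100.0, 40.0
ys = [p1 * true_s1 + p2 * true_s2 for p1, p2 in memb]
s1, s2 = fit_tissue_signals(memb, ys)
print(round(s1, 6), round(s2, 6))
```

This illustrates the key point of the abstract: the per-region fit remains well-posed even when partial volume is present in every voxel, which defeats per-voxel fitting.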

  18. Improved cosmological model fitting of Planck data with a dark energy spike

    Science.gov (United States)

    Park, Chan-Gyung

    2015-06-01

    The Λ cold dark matter (ΛCDM) model is currently known as the simplest cosmology model that best describes observations with a minimal number of parameters. Here we introduce a cosmology model that is preferred over the conventional ΛCDM one by constructing dark energy as the sum of the cosmological constant Λ and an additional fluid that is designed to have an extremely short transient spike in energy density during the radiation-matter equality era and an early scaling behavior with radiation and matter densities. The density parameter of the additional fluid is defined as a Gaussian function plus a constant in logarithmic scale-factor space. Searching for the best-fit cosmological parameters in the presence of such a dark energy spike gives a far smaller chi-square value, by about 5 times the number of additional parameters introduced, and narrower constraints on the matter density and Hubble constant compared with the best-fit ΛCDM model. The significant improvement in reducing the chi-square mainly comes from the better fitting of the Planck temperature power spectrum around the third (ℓ≈800) and sixth (ℓ≈1800) acoustic peaks. The likelihood ratio test and the Akaike information criterion suggest that the model of a dark energy spike is strongly favored by the current cosmological observations over the conventional ΛCDM model. However, based on the Bayesian information criterion, which penalizes models with more parameters, the strong evidence supporting the presence of a dark energy spike disappears. Our result emphasizes that alternative cosmological parameter estimation with even better fitting of the same observational data is allowed in Einstein's gravity.
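
The model-selection logic in the abstract (chi-square improvement versus parameter penalties) can be made concrete with the standard AIC and BIC definitions. The chi-square values, parameter counts, and data size below are hypothetical, chosen only to reproduce the qualitative situation described (a chi-square drop of about 5 per extra parameter):

```python
import math

def aic(chi2, k):
    # Akaike information criterion: chi^2 + 2k (Gaussian likelihood up to a constant)
    return chi2 + 2 * k

def bic(chi2, k, n):
    # Bayesian information criterion: penalizes extra parameters more for large n
    return chi2 + k * math.log(n)

# Hypothetical numbers: a spike model with 3 extra parameters lowers chi^2 by 15
base = {"chi2": 10000.0, "k": 6}
spike = {"chi2": 9985.0, "k": 9}
n = 2500  # number of data points
print(aic(spike["chi2"], spike["k"]) - aic(base["chi2"], base["k"]))        # < 0: spike favored
print(bic(spike["chi2"], spike["k"], n) - bic(base["chi2"], base["k"], n))  # > 0: base favored
```

This reproduces the abstract's tension in miniature: AIC rewards the chi-square gain, while BIC's log(n) penalty erases it.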

  19. A flexible, interactive software tool for fitting the parameters of neuronal models

    Directory of Open Access Journals (Sweden)

    Péter eFriedrich

    2014-07-01

    Full Text Available The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problem of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting
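
The core loop of any such tool is a cost function plus an optimizer over parameter space. A minimal sketch, assuming a passive exponential membrane response and a golden-section search over a single time constant (this is an illustration of the idea, not Optimizer's actual code):

```python
import math

def cost(tau, times, trace, v0):
    # Sum-of-squares cost between the recorded trace and V(t) = v0 * exp(-t/tau)
    return sum((v0 * math.exp(-t / tau) - v) ** 2 for t, v in zip(times, trace))

def fit_tau(times, trace, v0, lo=1.0, hi=100.0, steps=60):
    # Golden-section search over the single parameter tau (cost assumed unimodal)
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(steps):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if cost(c, times, trace, v0) < cost(d, times, trace, v0):
            b = d
        else:
            a = c
    return (a + b) / 2.0

# Synthetic "recorded" trace from a passive decay with tau = 20 ms
times = [float(i) for i in range(100)]
trace = [10.0 * math.exp(-t / 20.0) for t in times]
tau_hat = fit_tau(times, trace, v0=10.0)
print(round(tau_hat, 2))  # 20.0
```

Real tools replace the one-dimensional search with general nonlinear optimizers and run the candidate model in a simulator such as NEURON at each cost evaluation.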

  20. Kompaneets Model Fitting of the Orion-Eridanus Superbubble II: Thinking Outside of Barnard's Loop

    CERN Document Server

    Pon, Andy; Alves, Joao; Bally, John; Basu, Shantanu; Tielens, Alexander G G M

    2016-01-01

    The Orion star-forming region is the nearest active high-mass star-forming region and has created a large superbubble, the Orion-Eridanus superbubble. Recent work by Ochsendorf et al. (2015) has extended the accepted boundary of the superbubble. We fit Kompaneets models of superbubbles expanding in exponential atmospheres to the new, larger shape of the Orion-Eridanus superbubble. We find that this larger morphology of the superbubble is consistent with the evolution of the superbubble being primarily controlled by expansion into the exponential Galactic disk ISM if the superbubble is oriented with the Eridanus side farther from the Sun than the Orion side. Unlike previous Kompaneets model fits that required abnormally small scale heights for the Galactic disk (<40 pc), we find morphologically consistent models with scale heights of 80 pc, similar to that expected for the Galactic disk.

  1. Fitting a mixture model by expectation maximization to discover motifs in biopolymers

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, T.L.; Elkan, C. [Univ. of California, La Jolla, CA (United States)

    1994-12-31

    The algorithm described in this paper discovers one or more motifs in a collection of DNA or protein sequences by using the technique of expectation maximization to fit a two-component finite mixture model to the set of sequences. Multiple motifs are found by fitting a mixture model to the data, probabilistically erasing the occurrences of the motif thus found, and repeating the process to find successive motifs. The algorithm requires only a set of unaligned sequences and a number specifying the width of the motifs as input. It returns a model of each motif and a threshold which together can be used as a Bayes-optimal classifier for searching for occurrences of the motif in other databases. The algorithm estimates how many times each motif occurs in each sequence in the dataset and outputs an alignment of the occurrences of the motif. The algorithm is capable of discovering several different motifs with differing numbers of occurrences in a single dataset.
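
The E- and M-steps of the algorithm can be illustrated on a simpler two-component mixture. The sketch below fits a univariate two-Gaussian mixture with fixed unit variances, which shows the same expectation-maximization machinery in miniature (it is not the motif model itself):

```python
import math
import random

def em_two_gaussians(data, iters=50):
    # EM for a two-component univariate Gaussian mixture with fixed unit
    # variances (for brevity); returns the mixing weight and the two means.
    w, m1, m2 = 0.5, min(data), max(data)
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        resp = []
        for x in data:
            p1 = w * math.exp(-0.5 * (x - m1) ** 2)
            p2 = (1.0 - w) * math.exp(-0.5 * (x - m2) ** 2)
            resp.append(p1 / (p1 + p2))
        # M-step: re-estimate the weight and means from the responsibilities
        n1 = sum(resp)
        w = n1 / len(data)
        m1 = sum(r * x for r, x in zip(resp, data)) / n1
        m2 = sum((1.0 - r) * x for r, x in zip(resp, data)) / (len(data) - n1)
    return w, m1, m2

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]
w, m1, m2 = em_two_gaussians(data)
print(round(m1, 1), round(m2, 1))  # close to the true means 0 and 5
```

In the motif-discovery setting, the "components" are motif versus background positions, and the probabilistic erasing step removes found occurrences before the mixture is refitted.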

  2. Spin models inferred from patient-derived viral sequence data faithfully describe HIV fitness landscapes

    Science.gov (United States)

    Shekhar, Karthik; Ruberman, Claire F.; Ferguson, Andrew L.; Barton, John P.; Kardar, Mehran; Chakraborty, Arup K.

    2017-01-01

    Mutational escape from vaccine-induced immune responses has thwarted the development of a successful vaccine against AIDS, whose causative agent is HIV, a highly mutable virus. Knowing the virus’ fitness as a function of its proteomic sequence can enable rational design of potent vaccines, as this information can focus vaccine-induced immune responses to target mutational vulnerabilities of the virus. Spin models have been proposed as a means to infer intrinsic fitness landscapes of HIV proteins from patient-derived viral protein sequences. These sequences are the product of nonequilibrium viral evolution driven by patient-specific immune responses and are subject to phylogenetic constraints. How can such sequence data allow inference of intrinsic fitness landscapes? We combined computer simulations and variational theory à la Feynman to show that, in most circumstances, spin models inferred from patient-derived viral sequences reflect the correct rank order of the fitness of mutant viral strains. Our findings are relevant for diverse viruses. PMID:24483484

  3. Assessing the Fit of Structural Equation Models With Multiply Imputed Data.

    Science.gov (United States)

    Enders, Craig K; Mansolf, Maxwell

    2016-11-28

    Multiple imputation has enjoyed widespread use in social science applications, yet the application of imputation-based inference to structural equation modeling has received virtually no attention in the literature. Thus, this study has 2 overarching goals: evaluate the application of Meng and Rubin's (1992) pooling procedure for the likelihood ratio statistic to the SEM test of model fit, and explore the possibility of using this test statistic to define imputation-based versions of common fit indices such as the TLI, CFI, and RMSEA. Computer simulation results suggested that, when applied to a correctly specified model, the pooled likelihood ratio statistic performed well as a global test of model fit and was closely calibrated to the corresponding full information maximum likelihood (FIML) test statistic. However, when applied to misspecified models with high rates of missingness (30%-40%), the imputation-based test statistic generally exhibited lower power than that of FIML. Using the pooled test statistic to construct imputation-based versions of the TLI, CFI, and RMSEA worked well and produced indices that were well-calibrated with those of full information maximum likelihood estimation. This article gives Mplus and R code to implement the pooled test statistic, and it offers a number of recommendations for future research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Double-sigmoid model for fitting fatigue profiles in mouse fast- and slow-twitch muscle.

    Science.gov (United States)

    Cairns, S P; Robinson, D M; Loiselle, D S

    2008-07-01

    We present a curve-fitting approach that permits quantitative comparisons of fatigue profiles obtained with different stimulation protocols in isolated slow-twitch soleus and fast-twitch extensor digitorum longus (EDL) muscles of mice. Profiles from our usual stimulation protocol (125 Hz for 500 ms, evoked once every second for 100-300 s) could be fitted by single-term functions (sigmoids or exponentials) but not by a double exponential. A clearly superior fit, as confirmed by the Akaike Information Criterion, was achieved using a double-sigmoid function. Fitting accuracy was exceptional, with r² values typically >0.9995. The first sigmoid (early fatigue) involved approximately 10% decline of isometric force to an intermediate plateau in both muscle types; the second sigmoid (late fatigue) involved a reduction of force to a final plateau, the decline being 83% of initial force in EDL and 63% of initial force in soleus. The maximal slope of each sigmoid was seven- to eightfold greater in EDL than in soleus. The general applicability of the model was tested by fitting profiles with a severe force loss arising from repeated tetanic stimulation evoked at different frequencies or rest periods, or with excitation via nerve terminals in soleus. Late fatigue, which was absent at 30 Hz, occurred earlier and to a greater extent at 125 than at 50 Hz. The model captured small changes in the rate of late fatigue for nerve-terminal versus sarcolemmal stimulation. We conclude that a double-sigmoid expression is a useful and accurate model to characterize fatigue in isolated muscle preparations.
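
The double-sigmoid form can be written as a baseline force minus two sigmoidal drops, one for early and one for late fatigue. The parameter values below are chosen only to mimic the EDL numbers quoted in the abstract (roughly 10% early drop and 83% total decline); the authors' exact parameterization is an assumption:

```python
import math

def double_sigmoid(t, F0, d1, t1, k1, d2, t2, k2):
    # F(t) = F0 - d1*S((t-t1)/k1) - d2*S((t-t2)/k2), with S the logistic
    # function: the first drop is early fatigue, the second late fatigue.
    s1 = d1 / (1.0 + math.exp(-(t - t1) / k1))
    s2 = d2 / (1.0 + math.exp(-(t - t2) / k2))
    return F0 - s1 - s2

# EDL-like profile (assumed parameters): ~10% early drop to an intermediate
# plateau, then a late decline to ~17% of initial force.
params = dict(F0=1.0, d1=0.10, t1=20.0, k1=3.0, d2=0.73, t2=120.0, k2=8.0)
early_plateau = double_sigmoid(70.0, **params)
final_plateau = double_sigmoid(300.0, **params)
print(round(early_plateau, 2), round(final_plateau, 2))
```

Fitting this model to a recorded force trace is then an ordinary nonlinear least-squares problem in the seven parameters.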

  5. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan

    2011-01-06

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) method for fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned.
Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  6. Kinetic modelling of RDF pyrolysis: Model-fitting and model-free approaches.

    Science.gov (United States)

    Çepelioğullar, Özge; Haykırı-Açma, Hanzade; Yaman, Serdar

    2016-02-01

    In this study, refuse derived fuel (RDF) was selected as the solid fuel and was pyrolyzed in a thermal analyzer from room temperature to 900°C at heating rates of 5, 10, 20, and 50°C/min in an N2 atmosphere. The thermal data obtained were used to calculate the kinetic parameters using the Coats-Redfern, Friedman, Flynn-Wall-Ozawa (FWO) and Kissinger-Akahira-Sunose (KAS) methods. In the Coats-Redfern model, the decomposition process was assumed to consist of four independent reactions with different reaction orders. On the other hand, the model-free methods demonstrated that the activation energy trend had similarities for reaction progresses of 0.1, 0.2-0.7 and 0.8-0.9. The average activation energies were found to be between 73 and 161 kJ/mol, and the FWO and KAS models produced results closer to the average activation energies than the Friedman model. The experimental studies showed that RDF may be a sustainable and promising feedstock for alternative processes in terms of waste management strategies.
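
The Coats-Redfern method linearizes the integral rate law so the activation energy comes from the slope of ln(g(α)/T²) against 1/T. A sketch assuming a first-order model, g(α) = -ln(1-α), with synthetic conversions generated from an assumed E = 120 kJ/mol (not the RDF data):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def coats_redfern_E(temps_K, alphas):
    # First-order model g(a) = -ln(1 - a); Coats-Redfern linearization:
    # ln(g(a)/T^2) = const - E/(R*T), so the slope against 1/T gives -E/R.
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(-math.log(1.0 - a) / T ** 2) for T, a in zip(temps_K, alphas)]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    return -slope * R

# Synthetic conversions consistent with an assumed E = 120 kJ/mol
E_true, C = 120e3, 14.0
temps = [500.0 + 5.0 * i for i in range(10)]
alphas = [1.0 - math.exp(-(T ** 2) * math.exp(C - E_true / (R * T))) for T in temps]
E_hat = coats_redfern_E(temps, alphas)
print(round(E_hat / 1000.0, 1))  # ~120.0 kJ/mol
```

Model-free methods such as FWO and KAS instead use data at several heating rates for each fixed conversion, avoiding the choice of g(α).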

  7. A Fundamental Study of Convective Mixing Contributing to Dissolution Trapping of CO2 in Heterogeneous Geologic Media using Surrogate Fluids and Numerical Modeling

    Science.gov (United States)

    Illangasekare, Tissa; Agartan, Eliff; Trevisan, Luca; Cihan, Abdullah; Birkholzer, Jens; Zhou, Quanlin

    2013-04-01

    Geologic sequestration of carbon dioxide is considered an important strategy to slow down global warming and hence climate change. Dissolution trapping is one of the primary mechanisms contributing to long-term storage of supercritical CO2 (scCO2) in deep saline geologic formations. When liquid scCO2 is injected into the formation, its density is lower than that of brine. As the injected scCO2 migrates under buoyancy forces, part of it is immobilized by capillary forces. With the progress of time, entrapped scCO2 dissolves in formation brine, and density-driven convective fingers are expected to be generated due to the higher density of the solution compared to brine. These fingers enhance mixing of dissolved CO2 in brine. The development and role of these convective fingers in mixing in homogeneous formations have been studied in past investigations. The goal of this study is to evaluate the contribution of convective mixing to dissolution trapping of scCO2 in naturally heterogeneous geologic formations via laboratory experiments and numerical analyses. To mimic the dissolution of scCO2 in formation brine under ambient laboratory conditions, a group of surrogate fluids were selected according to their density and viscosity ratios, and tested in different fluid/fluid mixtures and a variety of porous media test systems. After selection of the appropriate fluid mixture, a set of experiments in a small test tank packed in homogeneous configurations was performed in order to analyze the fingering behavior. A second set of experiments was conducted with layered systems to study the effects of formation heterogeneity on convective mixing. To capture the dominant processes observed in the experiments, a finite-volume-based numerical code was developed. The model was then used to simulate more complex heterogeneous systems that were not represented in the experiments. 
    Results of these analyses suggest that density-driven convective fingers that contribute

  8. Multiple phase transitions in an agent-based evolutionary model with neutral fitness.

    Science.gov (United States)

    King, Dawn M; Scott, Adam D; Bahar, Sonya

    2017-04-01

    Null models are crucial for understanding evolutionary processes such as speciation and adaptive radiation. We analyse an agent-based null model, considering a case without selection (neutral evolution), in which organisms are defined only by phenotype. Universal dynamics has previously been demonstrated in a related model on a neutral fitness landscape, showing that this system belongs to the directed percolation (DP) universality class. The traditional null condition of neutral fitness (where fitness is defined as the number of offspring each organism produces) is extended here to include equal probability of death among organisms. We identify two types of phase transition: (i) a non-equilibrium DP transition through generational time (i.e. survival), and (ii) an equilibrium ordinary percolation transition through the phenotype space (based on links between mating organisms). Owing to the dynamical rules of the DP reaction-diffusion process, organisms can only sparsely fill the phenotype space, resulting in significant phenotypic diversity within a cluster of mating organisms. This highlights the necessity of understanding hierarchical evolutionary relationships, rather than merely developing taxonomies based on phenotypic similarity, in order to develop models that can explain phylogenetic patterns found in the fossil record or to make hypotheses for the incomplete fossil record of deep time.

  9. llc: a collection of R functions for fitting a class of Lee-Carter mortality models using iterative fitting algorithms

    OpenAIRE

    Butt, Z.; Haberman, S

    2009-01-01

    We implement a specialised iterative regression methodology in R for the analysis of age-period mortality data based on a class of generalised Lee-Carter (LC) type modelling structures. The LC-based modelling framework is viewed in the current literature as among the most efficient and transparent methods of modelling and projecting mortality improvements. Thus, we make use of the modelling approach discussed in Renshaw and Haberman (2006), which extends the basic LC model and proposes to ma...
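The basic LC decomposition, log m(x,t) = a_x + b_x * k_t, can be estimated in its simplest (non-iterative) form from a singular value decomposition. The sketch below uses NumPy on synthetic data and illustrates only the model structure and identifiability constraints, not the iterative regression algorithms of the llc package:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic log-mortality surface (ages x years) obeying log m = a_x + b_x * k_t
ages, years = 10, 30
a_x = np.linspace(-6.0, -2.0, ages)      # age pattern
b_x = np.full(ages, 1.0 / ages)          # age sensitivities, sum to 1
k_t = np.linspace(5.0, -5.0, years)      # declining period index, sums to 0
log_m = a_x[:, None] + np.outer(b_x, k_t) + rng.normal(0.0, 0.01, (ages, years))

# Basic LC fit: a_x as row means, then b_x and k_t from the leading SVD term
a_hat = log_m.mean(axis=1)
U, s, Vt = np.linalg.svd(log_m - a_hat[:, None], full_matrices=False)
b_hat = U[:, 0] / U[:, 0].sum()          # identifiability: sum(b_x) = 1
k_hat = s[0] * Vt[0] * U[:, 0].sum()     # rescale so b_hat * k_hat is unchanged
k_hat -= k_hat.mean()                    # identifiability: sum(k_t) = 0

recon = a_hat[:, None] + np.outer(b_hat, k_hat)
max_err = np.abs(recon - log_m).max()
print(max_err < 0.1)
```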

  10. Impact of Economic Development Model on the Fitting Effect of the Mathematical Model of Changes in Cultivated Land Resources

    Institute of Scientific and Technical Information of China (English)

    Xin; YAO; Min; ZHANG

    2014-01-01

    The mathematical model is often used for fitting the trend of changes in cultivated land resources in land use planning, but the fitting effect differs between study areas. In this paper, we take two geographically adjacent cities with great differences in their economic development model, Xinghua City and Jingjiang City, as the research object. Using the logarithmic model (M1), Kuznets model (M2), logistic model (M3) and multivariate linear model (M4), we fit the process of changes in cultivated land resources during the period 1980-2009, and compare the differences in the fitting effect between the models. In terms of model fitting effect in Xinghua City, the order is M3 > M4 > M1 > M2, which is related to the fact that the local areas lay great emphasis on agricultural development and pay close attention to ensuring the cultivated land area; in Jingjiang City, the order is M1 > M3 > M4 > M2, and the deep-seated cause is that its development model is dominated by extended trade expansion, with the level of intensive land use constantly improving. In addition, we discuss the multi-stage characteristics of changes in cultivated land resources, and propose a solution of using the same model to simulate each phase separately. The research results in Jingjiang City show that the coefficient of determination in the first phase (R2 = 0.958) and the standard error (SE = 0.261) are both better than those of the original model (R2 = 0.945, SE = 0.312); the coefficient of determination in the second phase is slightly low (R2 = 0.851), but the standard error is greatly improved (SE = 0.137). Compared with the research conclusions of other scholars, it can be believed that this method can better solve the problems that the scatter plot of the logistic model presents a wave shape and the scatter plot of the Kuznets model presents an "M" shape, in order to improve the applicability of mathematical models.
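As an illustration of the kind of fitting compared in the study, the sketch below fits a logistic model (M3) to a synthetic cultivated-land series with SciPy's curve_fit and reports the R2 and standard error used as goodness-of-fit measures. All data, units, and parameter values are hypothetical, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cultivated-land series (10^4 ha), 1980-2009
years = np.arange(1980, 2010)
t = (years - years[0]).astype(float)

def logistic(t, K, A, r):
    """Declining logistic: area falls from K + A/2 toward the floor K."""
    return K + A / (1.0 + np.exp(r * t))

rng = np.random.default_rng(1)
area = logistic(t, 40.0, 15.0, 0.2) + rng.normal(0.0, 0.2, t.size)

popt, _ = curve_fit(logistic, t, area, p0=(35.0, 12.0, 0.15))
pred = logistic(t, *popt)

# Goodness-of-fit measures quoted in the abstract: R2 and standard error
ss_res = np.sum((area - pred) ** 2)
r2 = 1.0 - ss_res / np.sum((area - area.mean()) ** 2)
se = np.sqrt(ss_res / (t.size - len(popt)))
print(round(r2, 3), round(se, 3))
```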

  11. Tectonic plate under a localized boundary stress: fitting of a zero-range solvable model

    CERN Document Server

    Petrova, L

    2008-01-01

    We suggest a method for fitting a zero-range model of a tectonic plate under a boundary stress, based on comparing the theoretical formulae for the corresponding eigenfunctions/eigenvalues with results extracted from monitoring, in the remote zone, of non-random (regular) oscillations of the Earth with periods of 0.2-6 hours against the background seismic process, during times of low seismic activity. Observations of changes in the characteristics of the oscillations (frequency, amplitude and polarization) over time, together with theoretical analysis of the fitted model, would enable us to localize the stressed zone on the boundary of the plate and estimate the risk of a powerful earthquake in that zone.

  12. Validation of a Best-Fit Pharmacokinetic Model for Scopolamine Disposition after Intranasal Administration

    Science.gov (United States)

    Wu, L.; Chow, D. S-L.; Tam, V.; Putcha, L.

    2015-01-01

    An intranasal gel formulation of scopolamine (INSCOP) was developed for the treatment of motion sickness. Bioavailability and pharmacokinetics (PK) were determined per Investigational New Drug (IND) evaluation guidance by the Food and Drug Administration. Earlier, we reported the development of a PK model that can predict the relationship between plasma, saliva and urinary scopolamine (SCOP) concentrations using data collected from an IND clinical trial with INSCOP. This data analysis project is designed to validate the reported best-fit PK model for SCOP by comparing observed and model-predicted SCOP concentration-time profiles after administration of INSCOP.

  13. Nonlocal nonlinear refractive index of gold nanoparticles synthesized by ascorbic acid reduction: comparison of fitting models.

    Science.gov (United States)

    Balbuena Ortega, A; Arroyo Carrasco, M L; Méndez Otero, M M; Gayou, V L; Delgado Macuil, R; Martínez Gutiérrez, H; Iturbe Castillo, M D

    2014-12-12

    In this paper, the nonlinear refractive index of colloidal gold nanoparticles under continuous wave illumination is investigated with the z-scan technique. Gold nanoparticles were synthesized using ascorbic acid as reductant, phosphates as stabilizer and cetyltrimethylammonium chloride (CTAC) as surfactant agent. The nanoparticle size was controlled with the CTAC concentration. Experiments varying incident power and sample concentration were done. The experimental z-scan results were fitted with three models: thermal lens, aberrant thermal lens and the nonlocal model. It is shown that the nonlocal model reproduces the observed experimental behaviour with exceptionally good agreement.

  14. Construction and validation of detailed kinetic models for the combustion of gasoline surrogates; Construction et validation de modeles cinetiques detailles pour la combustion de melanges modeles des essences

    Energy Technology Data Exchange (ETDEWEB)

    Touchard, S.

    2005-10-15

    The irreversible reduction of oil resources, the control of CO2 emissions and the application of increasingly strict standards on pollutant emissions lead researchers worldwide to work on reducing pollutant formation and improving engine yields, especially by using homogeneous charge combustion of lean mixtures. The numerical simulation of fuel blend oxidation is an essential tool for studying the influence of fuel formulation and engine conditions on auto-ignition and on pollutant emissions. Automatic generation helps to obtain detailed kinetic models, especially at low temperature, where the number of reactions quickly exceeds a thousand. The main purpose of this study is the generation and validation of detailed kinetic models for the oxidation of gasoline blends using the EXGAS software. This work implied an improvement of the computation rules for thermodynamic and kinetic data, which were validated by numerical simulation using the CHEMKIN II software. A large part of this work concerned the understanding of the low-temperature oxidation chemistry of C5 and larger alkenes. Low- and high-temperature mechanisms were proposed and validated for 1-pentene, 1-hexene, the binary mixtures 1-hexene/iso-octane, 1-hexene/toluene and iso-octane/toluene, and the ternary mixture 1-hexene/toluene/iso-octane. Simulations were also done for propene, 1-butene and iso-octane with former models including the modifications proposed in this PhD work. While the generated models allowed us to simulate the auto-ignition delays of the studied molecules and blends with good agreement, some uncertainties still remain for some reaction paths leading to the formation of cyclic products in the case of alkene oxidation at low temperature. It would also be interesting to carry on this work with combustion models of gasoline blends at low temperature. (author)

  15. Effects of new mutations on fitness: insights from models and data.

    Science.gov (United States)

    Bataillon, Thomas; Bailey, Susan F

    2014-07-01

    The rates and properties of new mutations affecting fitness have implications for a number of outstanding questions in evolutionary biology. Obtaining estimates of mutation rates and effects has historically been challenging, and little theory has been available for predicting the distribution of fitness effects (DFE); however, there have been recent advances on both fronts. Extreme-value theory predicts the DFE of beneficial mutations in well-adapted populations, while phenotypic fitness landscape models make predictions for the DFE of all mutations as a function of the initial level of adaptation and the strength of stabilizing selection on traits underlying fitness. Direct experimental evidence confirms predictions on the DFE of beneficial mutations and favors distributions that are roughly exponential but bounded on the right. A growing number of studies infer the DFE using genomic patterns of polymorphism and divergence, recovering a wide range of DFE. Future work should be aimed at identifying factors driving the observed variation in the DFE. We emphasize the need for further theory explicitly incorporating the effects of partial pleiotropy and heterogeneity in the environment on the expected DFE.

  16. Hair length, facial attractiveness, personality attribution: A multiple fitness model of hairdressing

    OpenAIRE

    Bereczkei, Tamas; Mesko, Norbert

    2007-01-01

    Multiple Fitness Model states that attractiveness varies across multiple dimensions, with each feature representing a different aspect of mate value. In the present study, male raters judged the attractiveness of young females with neotenous and mature facial features, with various hair lengths. Results revealed that the physical appearance of long-haired women was rated high, regardless of their facial attractiveness being valued high or low. Women rated as most attractive were those whose f...

  17. SCAN-based hybrid and double-hybrid density functionals from models without fitted parameters

    OpenAIRE

    Hui, Kerwin; Chai, Jeng-Da

    2015-01-01

    By incorporating the nonempirical SCAN semilocal density functional [Sun, Ruzsinszky, and Perdew, Phys. Rev. Lett. 115, 036402 (2015)] in the underlying expression of four existing hybrid and double-hybrid models, we propose one hybrid (SCAN0) and three double-hybrid (SCAN0-DH, SCAN-QIDH, and SCAN0-2) density functionals, which are free from any fitted parameters. The SCAN-based double-hybrid functionals consistently outperform their parent SCAN semilocal functional for self-interaction probl...

  18. Model fitting of kink waves in the solar atmosphere: Gaussian damping and time-dependence

    CERN Document Server

    Morton, R J

    2016-01-01

    Observations of the solar atmosphere have shown that magnetohydrodynamic waves are ubiquitous throughout it. Improvements in instrumentation and in the techniques used for measurement of the waves now enable subtleties of competing theoretical models to be compared with the observed wave behaviour. Some studies have already begun to undertake this process. However, the techniques employed for model comparison have generally been unsuitable and can lead to erroneous conclusions about the best model. The aim here is to introduce some robust statistical techniques for model comparison to the solar waves community, drawing on experience from other areas of astrophysics. In the process, we also aim to investigate the physics of coronal loop oscillations. The methodology exploits least-squares fitting to compare models to observational data. We demonstrate that the residuals between the model and observations contain significant information about the ability of the model to describe the observations, and show...

  19. Goodness of fit to a mathematical model for Drosophila sleep behavior is reduced in hyposomnolent mutants

    Directory of Open Access Journals (Sweden)

    Joshua M. Diamond

    2016-01-01

    Full Text Available The conserved nature of sleep in Drosophila has allowed the fruit fly to emerge in the last decade as a powerful model organism in which to study sleep. Recent sleep studies in Drosophila have focused on the discovery and characterization of hyposomnolent mutants. One common feature of these animals is a change in sleep architecture: sleep bout count tends to be greater, and sleep bout length lower, in hyposomnolent mutants. I propose a mathematical model, produced by least-squares nonlinear regression to fit the form Y = aX^b, which can explain sleep behavior in the healthy animal as well as previously reported changes in total sleep and sleep architecture in hyposomnolent mutants. This model, fit to sleep data, yields a coefficient of determination, R squared, which describes goodness of fit. R squared is lower, as compared to control, in the hyposomnolent mutants insomniac and fumin. My findings raise the possibility that low R squared is a feature of all hyposomnolent mutants, not just insomniac and fumin. If this were the case, R squared could emerge as a novel means by which sleep researchers might assess sleep dysfunction.
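The proposed analysis can be sketched as follows: fit Y = aX^b by nonlinear least squares and compare R squared between a tightly and a loosely power-law-distributed data set. The data below are synthetic and the noise levels hypothetical; this is an illustration of the method, not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b):
    return a * np.power(x, b)

def r_squared(x, y):
    """Fit Y = aX^b by nonlinear least squares and return R squared."""
    popt, _ = curve_fit(power_law, x, y, p0=(100.0, -1.0))
    resid = y - power_law(x, *popt)
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

# Hypothetical sleep-architecture data: X = sleep bout count, Y = bout length (min)
rng = np.random.default_rng(2)
x = np.linspace(5, 60, 40)
y_clean = power_law(x, 300.0, -0.9)                 # more bouts -> shorter bouts
y_ctrl = y_clean * rng.normal(1.0, 0.05, x.size)    # control: tight power law
y_mut = y_clean * rng.normal(1.0, 0.40, x.size)     # "mutant": noisier relation

r2_ctrl = r_squared(x, y_ctrl)
r2_mut = r_squared(x, y_mut)
print(r2_ctrl > r2_mut)    # goodness of fit drops in the noisier "mutant" data
```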

  20. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    Directory of Open Access Journals (Sweden)

    Jinwei Wang

    2014-01-01

    Full Text Available The active appearance model (AAM is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA on the Nvidia’s GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  1. Design and verifications of an eye model fitted with contact lenses for wavefront measurement systems

    Science.gov (United States)

    Cheng, Yuan-Chieh; Chen, Jia-Hong; Chang, Rong-Jie; Wang, Chung-Yen; Hsu, Wei-Yao; Wang, Pei-Jen

    2015-09-01

    Contact lenses are typically measured by the wet-box method because of the high optical power resulting from the anterior central curvature of the cornea, even though the back vertex power of the lenses is small. In this study, an optical measurement system based on the Shack-Hartmann wavefront principle was established to investigate the aberrations of soft contact lenses. Fitting conditions were mimicked to study the optical design of an eye model with various topographical shapes of the anterior cornea. Initially, the contact lenses were measured by the wet-box method, and then by fitting the various topographical shapes of the cornea to the eye model. In addition, an optics simulation program was employed to determine the sources of errors and assess the accuracy of the system. Finally, samples of soft contact lenses with various diopters were measured, and both simulations and experimental results were compared to resolve the controversies of fitting contact lenses to an eye model for optical measurements. More importantly, the results show that the proposed system can be employed for the study of primary aberrations in contact lenses.

  2. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    Science.gov (United States)

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  3. Evaluating smooth muscle cells from CaCl2-induced rat aortal expansions as a surrogate culture model for study of elastogenic induction of human aneurysmal cells.

    Science.gov (United States)

    Gacchina, Carmen; Brothers, Thomas; Ramamurthi, Anand

    2011-08-01

    Regression of abdominal aortic aneurysms (AAAs) via regeneration of new elastic matrix is constrained by poor elastin synthesis by adult vascular cells and absence of methods to stimulate the same. We recently showed hyaluronan oligomers (HA-o) and TGF-β1 (termed elastogenic factors) to enhance elastin synthesis and matrix formation by healthy rat aortic smooth muscle cells (RASMCs). We also determined that these factors could likewise elastogenically induce aneurysmal RASMCs isolated from periadventitial CaCl2-injury induced rat AAAs (aRASMCs). However, the factor doses should be increased for these diseased cell types, as even when induced, elastic matrix amounts are roughly one order of magnitude lower than those produced by healthy RASMCs. We presently investigate the dose-specific elastogenic effects of HA-o (0-20 μg/mL) and TGF-β1 (0-10 ng/mL) factors on aRASMCs and compare their phenotype and elastogenic responses to those of human AAA-derived SMCs (aHASMCs); we seek to determine whether aRASMCs are appropriate surrogate cell types to study in the context of inducing elastic matrix regeneration within human AAAs. The periadventitial CaCl2-injury model of AAAs exhibits many of the pathological characteristics of human AAAs, including similarities in terms of decreased SMC contractile activity, enhanced proliferation, and reduced elastogenic capacity of aneurysmal SMCs (relative to healthy SMCs) when isolated and expanded in culture. Both aRASMCs and aHASMCs can be elastogenically stimulated by HA-o and TGF-β1 and show broadly similar trends in their dose-specific responses to these factors. However, compared with aHASMCs, aRASMCs appear to be far less elastogenically inducible. This may be due to differences in maturity of the AAAs studied, with the CaCl2-injury induced aortal expansion barely qualifying as an aneurysm and the human AAA representing a more well-developed condition. Further study of SMCs from stage-matched CaCl2-injury

  4. Role Modeling Attitudes, Physical Activity and Fitness Promoting Behaviors of Prospective Physical Education Specialists and Non-Specialists.

    Science.gov (United States)

    Cardinal, Bradley J.; Cardinal, Marita K.

    2002-01-01

    Compared the role modeling attitudes and physical activity and fitness promoting behaviors of undergraduate students majoring in physical education and in elementary education. Student teacher surveys indicated that physical education majors had more positive attitudes toward role modeling physical activity and fitness promoting behaviors and…

  5. Fitting the HIV epidemic in Zambia: a two-sex micro-simulation model.

    Directory of Open Access Journals (Sweden)

    Pauline M Leclerc

    Full Text Available BACKGROUND: In describing and understanding how the HIV epidemic spreads in African countries, previous studies have not taken into account the detailed periods at risk. This study is based on a micro-simulation (individual-based) model of the spread of the HIV epidemic in the population of Zambia, where women tend to marry early and where divorces are not frequent. The main target of the model was to fit the HIV seroprevalence profiles by age and sex observed at the Demographic and Health Survey conducted in 2001. METHODS AND FINDINGS: A two-sex micro-simulation model of HIV transmission was developed. Particular attention was paid to precise age-specific estimates of exposure to risk through the modelling of the formation and dissolution of relationships: marriage (stable union), casual partnership, and commercial sex. HIV transmission was exclusively heterosexual for adults or vertical (mother-to-child) for children. Three stages of HIV infection were taken into account. All parameters were derived from empirical population-based data. Results show that basic parameters could not explain the dynamics of the HIV epidemic in Zambia. In order to fit the age and sex patterns, several assumptions were made: differential susceptibility of young women to HIV infection, differential susceptibility or larger number of encounters for male clients of commercial sex workers, and higher transmission rate. The model allowed us to quantify the role of each type of relationship in HIV transmission, the proportion of infections occurring at each stage of disease progression, and the net reproduction rate of the epidemic (R0 = 1.95). CONCLUSIONS: The simulation model reproduced the dynamics of the HIV epidemic in Zambia, and fitted the age and sex pattern of HIV seroprevalence in 2001. The same model could be used to measure the effect of changing behaviour in the future.

  6. Measuring fit of sequence data to phylogenetic model: gain of power using marginal tests.

    Science.gov (United States)

    Waddell, Peter J; Ota, Rissa; Penny, David

    2009-10-01

    Testing fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (Unended quest: an intellectual autobiography. Fontana, London, 1976) once argued that evolutionary biology was unscientific as its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (Nature 297:197-200, 1982) to the present. We compare the general log-likelihood ratio statistic (the G or G^2 statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (P approximately 0.5), but the marginalized tests do. Tests on pairwise frequency (F) matrices strongly (P < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (P < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa; not necessarily those expected from examining other regions of the genome. By marginalizing the 4^t patterns of the i.i.d. model to observed and expected parsimony counts, that is, from constant sites, to singletons, to parsimony informative characters of a minimum possible length, the likelihood ratio test regains power, and it too rejects the evolutionary model with P < 0.001. Given such behavior over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published trees may really be far larger than the analytical methods (e.g., bootstrap) report.
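The G statistic named above is computed directly from observed and expected category counts as G = 2 * sum(O * ln(O/E)). A minimal sketch with hypothetical counts (not the paper's alignment data):

```python
import numpy as np
from scipy.stats import chi2

def g_statistic(observed, expected):
    """Log-likelihood ratio statistic G = 2 * sum(O * ln(O / E))."""
    o = np.asarray(observed, dtype=float)
    e = np.asarray(expected, dtype=float)
    mask = o > 0                      # categories with O = 0 contribute nothing
    return 2.0 * np.sum(o[mask] * np.log(o[mask] / e[mask]))

# Hypothetical site-pattern counts: observed vs. expected under a fitted model
obs_counts = np.array([520, 230, 160, 90])
exp_counts = np.array([500, 250, 150, 100])

G = g_statistic(obs_counts, exp_counts)
df = len(obs_counts) - 1              # degrees of freedom for this toy test
p = chi2.sf(G, df)                    # upper-tail p-value from the chi-square
print(round(G, 3), round(p, 3))
```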

  7. Love as a regulative ideal in surrogate decision making.

    Science.gov (United States)

    Stonestreet, Erica Lucast

    2014-10-01

    This discussion aims to give a normative theoretical basis for a "best judgment" model of surrogate decision making rooted in a regulative ideal of love. Currently, there are two basic models of surrogate decision making for incompetent patients: the "substituted judgment" model and the "best interests" model. The former draws on the value of autonomy and responds with respect; the latter draws on the value of welfare and responds with beneficence. It can be difficult to determine which of these two models is more appropriate for a given patient, and both approaches may seem inadequate for a surrogate who loves the patient. The proposed "best judgment" model effectively draws on the values incorporated in each of the traditional standards, but does so because these values are important to someone who loves a patient, since love responds to the patient as the specific person she is.

  8. Computational Software for Fitting Seismic Data to Epidemic-Type Aftershock Sequence Models

    Science.gov (United States)

    Chu, A.

    2014-12-01

    Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work introduces software implementing two of the ETAS models described in Ogata (1998). To find the maximum-likelihood estimates (MLEs), the software provides estimates of the homogeneous background rate parameter and of the temporal and spatial parameters that govern triggering effects by applying the expectation-maximization (EM) algorithm introduced in Veen and Schoenberg (2008). Although other computer programs exist for similar data modeling purposes, the EM algorithm has the benefits of stability and robustness (Veen and Schoenberg, 2008). Spatial shapes that are very long and narrow cause difficulties in optimization convergence, and flat or multi-modal log-likelihood functions raise similar issues. My program uses a robust method of presetting a parameter to overcome this non-convergence issue. In addition to model fitting, the software is equipped with useful tools for examining model-fitting results, for example, visualization of the estimated conditional intensity and estimation of the expected number of triggered aftershocks. A simulation generator is also provided, with flexible spatial shapes that may be defined by the user. This open-source software has a very simple user interface. The user may execute it on a local computer, and the program also has the potential to be hosted online. The Java language is used for the software's core computing part, and an optional interface to the statistical package R is provided.

  9. Surrogate markers of visceral adiposity in young adults: waist circumference and body mass index are more accurate than waist hip ratio, model of adipose distribution and visceral adiposity index.

    Directory of Open Access Journals (Sweden)

    Susana Borruel

    Full Text Available Surrogate indexes of visceral adiposity, a major risk factor for metabolic and cardiovascular disorders, are routinely used in clinical practice because objective measurements of visceral adiposity are expensive, may involve exposure to radiation, and their availability is limited. We compared several surrogate indexes of visceral adiposity with ultrasound assessment of subcutaneous and visceral adipose tissue depots in 99 young Caucasian adults, including 20 women without androgen excess, 53 women with polycystic ovary syndrome, and 26 men. Obesity was present in 7, 21, and 7 subjects, respectively. We obtained body mass index (BMI), waist circumference (WC), waist-hip ratio (WHR), model of adipose distribution (MOAD), visceral adiposity index (VAI), and ultrasound measurements of subcutaneous and visceral adipose tissue depots and hepatic steatosis. WC and BMI showed the strongest correlations with ultrasound measurements of visceral adiposity. Only WHR correlated with sex hormones. Linear stepwise regression models including VAI were only slightly stronger than models including BMI or WC in explaining the variability in the insulin sensitivity index (yet BMI and WC had higher individual standardized coefficients of regression), and these models were superior to those including WHR and MOAD. WC showed a 0.94 (95% confidence interval 0.88-0.99) and BMI a 0.91 (0.85-0.98) probability of identifying the presence of hepatic steatosis according to receiver operating characteristic curve analysis. In conclusion, WC and BMI are not only the simplest to obtain, but also the most accurate surrogate markers of visceral adiposity in young adults, and are good indicators of insulin resistance and powerful predictors of the presence of hepatic steatosis.

  10. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites: SURROGATE-BASED MCMC FOR CLM

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Maoyi [Earth System Analysis and Modeling Group, Pacific Northwest National Laboratory, Richland Washington USA; Ray, Jaideep [Sandia National Laboratories, Livermore California USA; Hou, Zhangshuan [Hydrology Technical Group, Pacific Northwest National Laboratory, Richland Washington USA; Ren, Huiying [Hydrology Technical Group, Pacific Northwest National Laboratory, Richland Washington USA; Liu, Ying [Earth System Analysis and Modeling Group, Pacific Northwest National Laboratory, Richland Washington USA; Swiler, Laura [Sandia National Laboratories, Albuquerque New Mexico USA

    2016-07-04

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions. In our previous work, a subset of hydrological parameters was identified as having significant impact on surface energy fluxes at selected flux tower sites, based on parameter screening and sensitivity analysis, indicating that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically averaged latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
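The surrogate-based MCMC idea can be sketched on a one-dimensional toy problem: train a cheap emulator on a few runs of an "expensive" model, then run Metropolis-Hastings against the emulator instead of the model itself. Everything below (the forward model, prior, noise level) is a hypothetical stand-in, unrelated to CLM or the paper's actual surrogates:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in "expensive" forward model (e.g., flux as a function of one parameter)
def forward(theta):
    return np.sin(theta) + 0.5 * theta

# Cheap polynomial surrogate trained on a handful of forward-model runs
train_x = np.linspace(-2.0, 2.0, 9)
coef = np.polyfit(train_x, forward(train_x), 5)
surrogate = lambda theta: np.polyval(coef, theta)

# One noisy observation generated at a "true" parameter of 0.8
obs = forward(0.8) + 0.05
sd = 0.1                              # assumed observation noise sd

def log_post(theta):
    if not -2.0 <= theta <= 2.0:      # flat prior on [-2, 2]
        return -np.inf
    return -0.5 * ((obs - surrogate(theta)) / sd) ** 2

# Metropolis-Hastings that only ever evaluates the surrogate
theta, lp, chain = 0.0, log_post(0.0), []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta)

post_mean = np.mean(chain[1000:])     # discard burn-in
print(round(post_mean, 2))            # posterior mean near the true 0.8
```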

  11. Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.

    Science.gov (United States)

    Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E

    2007-02-15

    Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods, implemented as MATLAB functions behind a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of schemes. To ensure microscopic balance, graph theory algorithms are used to detect violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations allow fast comparison of first-order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher-order kinetics, a solution using numerical integration is also implemented. To determine rate constants from experimental data, fitting algorithms that adjust the rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt or Broyden-Fletcher-Goldfarb-Shanno methods. The software also supports global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
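
    The simplest instance of the analytical approach described (an eigenvalue solution of a linear ODE system) is the two-state scheme A <-> B, for which the solution collapses to a single relaxation rate. A sketch, not taken from VisKin itself:

```python
import numpy as np

def two_state_relaxation(k_f, k_r, b0, t):
    """Fraction of molecules in state B for A <-> B first-order kinetics.

    Analytic solution of the linear ODE system: for two states the
    eigenvalue structure reduces to one observed relaxation rate.
    """
    k_obs = k_f + k_r          # single nonzero eigenvalue (relaxation rate)
    b_eq = k_f / k_obs         # equilibrium occupancy of B
    return b_eq + (b0 - b_eq) * np.exp(-k_obs * t)

# Starting from pure A, the system relaxes to [B]/[A] = k_f/k_r.
b_final = two_state_relaxation(2.0, 1.0, 0.0, 50.0)
```

For larger interconnected schemes the same idea generalizes to a matrix exponential of the full rate matrix, which is what makes comparisons across ligand concentrations fast.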

  12. Fitting the CDO correlation skew: a tractable structural jump-diffusion model

    DEFF Research Database (Denmark)

    Willemann, Søren

    2007-01-01

    We extend a well-known structural jump-diffusion model for credit risk to handle both correlations through diffusion of asset values and common jumps in asset value. Through a simplifying assumption on the default timing and efficient numerical techniques, we develop a semi-analytic framework...... allowing for instantaneous calibration to heterogeneous CDS curves and fast computation of CDO tranche spreads. We calibrate the model to CDX and iTraxx data from February 2007 and achieve a satisfactory fit. To price the senior tranches for both indices, we require a risk-neutral probability of a market...

  13. Fitting of different models for water vapour sorption on potato starch granules

    Science.gov (United States)

    Czepirski, L.; Komorowska-Czepirska, E.; Szymońska, J.

    2002-08-01

    Water vapour adsorption isotherms for native and modified potato starch were investigated. To obtain the best fit to the experimental data, several models based on the BET approach were evaluated. The hypothesis that water is adsorbed on the starch granules at primary and secondary adsorption sites, as well as a concept considering adsorbent fractality, were also tested. It was found that the equilibrium adsorption points in the examined range of relative humidity (0.03-0.90) were most accurately predicted by the three-parameter model proposed by Kats and Kutarov.
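
    The Kats-Kutarov model itself is not reproduced here; as a stand-in, the GAB equation (another common three-parameter BET-type isotherm) illustrates the model family being compared, and reduces to the two-parameter BET form when its third parameter K equals 1:

```python
def bet(aw, wm, c):
    # Classical two-parameter BET isotherm: moisture content as a
    # function of water activity aw, monolayer capacity wm, constant c.
    return wm * c * aw / ((1.0 - aw) * (1.0 - aw + c * aw))

def gab(aw, wm, c, k):
    # Three-parameter GAB isotherm, a BET-type extension; all parameter
    # values used below are illustrative, not fitted to starch data.
    return wm * c * k * aw / ((1.0 - k * aw) * (1.0 - k * aw + c * k * aw))
```

In a fit, the three parameters would be adjusted (e.g. by least squares) to the measured equilibrium points over the relative-humidity range.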

  14. Parameter Estimation of a Plucked String Synthesis Model Using a Genetic Algorithm with Perceptual Fitness Calculation

    Directory of Open Access Journals (Sweden)

    Riionheimo Janne

    2003-01-01

    We describe a technique for estimating control parameters for a plucked string synthesis model using a genetic algorithm. The model has been used intensively for sound synthesis of various string instruments, but fine-tuning of its parameters has previously relied on a semiautomatic method requiring hand adjustment guided by human listening. This paper describes an automated method for extracting the parameters from recorded tones. The calculation of the fitness function exploits knowledge of the properties of human hearing.
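
    The skeleton of such a genetic algorithm can be sketched as follows; in the paper the cost would be a perceptual error between synthesized and recorded tones, whereas here it is an arbitrary scalar function, and all GA settings are illustrative:

```python
import random

def ga_minimize(cost, bounds, pop=40, gens=60, seed=0):
    # Tiny real-coded genetic algorithm: tournament selection, blend
    # crossover, Gaussian mutation. 'cost' plays the role of the
    # (perceptual) fitness function; lower is better.
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            a = min(rng.sample(xs, 3), key=cost)       # tournament pick 1
            b = min(rng.sample(xs, 3), key=cost)       # tournament pick 2
            child = 0.5 * (a + b)                      # blend crossover
            child += rng.gauss(0.0, 0.05 * (hi - lo))  # Gaussian mutation
            nxt.append(min(max(child, lo), hi))        # clamp to bounds
        xs = nxt
    return min(xs, key=cost)

# Toy cost with a known optimum at 1.3, standing in for perceptual error.
best = ga_minimize(lambda x: (x - 1.3) ** 2, (0.0, 5.0))
```

A real parameter-estimation run would optimize several string-model parameters jointly, with the fitness weighting errors according to a model of human hearing.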

  15. A NON-UNIFORM SEDIMENT TRANSPORT MODEL WITH THE BOUNDARY-FITTING ORTHOGONAL COORDINATE SYSTEM

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A 2-D non-uniform sediment mathematical model in a boundary-fitted orthogonal coordinate system was developed in this paper. The governing equations, the numerical scheme, the boundary conditions, the movable-boundary technique and the numerical solutions are presented. The model was verified against data from the 25 km reach upstream of the Jialingjiang estuary and the 44 km main stream of the Chongqing reach of the Yangtze River. The calculated results show that the water elevation, the velocity distribution and the river bed deformation are in agreement with the measured data.

  16. Improved fitting of solution X-ray scattering data to macromolecular structures and structural ensembles by explicit water modeling.

    Science.gov (United States)

    Grishaev, Alexander; Guo, Liang; Irving, Thomas; Bax, Ad

    2010-11-10

    A new procedure, AXES, is introduced for fitting small-angle X-ray scattering (SAXS) data to macromolecular structures and ensembles of structures. By using explicit water models to account for the effect of solvent, and by restricting the adjustable fitting parameters to those that dominate experimental uncertainties, including sample/buffer rescaling, detector dark current, and, within a narrow range, hydration layer density, superior fits between experimental high resolution structures and SAXS data are obtained. AXES results are found to be more discriminating than standard Crysol fitting of SAXS data when evaluating poorly or incorrectly modeled protein structures. AXES results for ensembles of structures previously generated for ubiquitin show improved fits over fitting of the individual members of these ensembles, indicating these ensembles capture the dynamic behavior of proteins in solution.

  17. Testing Lack-of-fit for a Polynomial Errors-in-variables Model

    Institute of Scientific and Technical Information of China (English)

    Li-xing Zhu; Wei-xing Song; Heng-jian Gui

    2003-01-01

    When a regression model is applied as an approximation to the underlying model of the data, model checking is important and relevant. In this paper, we investigate the lack-of-fit test for a polynomial errors-in-variables model. As the ordinary residuals are biased when there are measurement errors in the covariables, we correct them and then construct a residual-based test of score type. The constructed test is asymptotically chi-squared under the null hypothesis. A simulation study shows that the test maintains the significance level well. The choice of the weight functions involved in the test statistic and the related power are also investigated. The approach is illustrated on two examples and can be readily extended to handle more general models.

  18. Model fitting of kink waves in the solar atmosphere: Gaussian damping and time-dependence

    Science.gov (United States)

    Morton, R. J.; Mooroogen, K.

    2016-09-01

    Aims: Observations of the solar atmosphere have shown that magnetohydrodynamic waves are ubiquitous throughout it. Improvements in instrumentation and in the techniques used for measuring the waves now enable subtleties of competing theoretical models to be compared with the observed wave behaviour. Some studies have already begun this process. However, the techniques employed for model comparison have generally been unsuitable and can lead to erroneous conclusions about the best model. The aim here is to introduce robust statistical techniques for model comparison to the solar waves community, drawing on experience from other areas of astrophysics. In the process, we also investigate the physics of coronal loop oscillations. Methods: The methodology exploits least-squares fitting to compare models to observational data. We demonstrate that the residuals between model and observations contain significant information about the ability of the model to describe the observations, and show how they can be assessed using various statistical tests. In particular we discuss the Kolmogorov-Smirnov one- and two-sample tests, as well as the runs test. We also highlight the importance of including any observational trend line in the model-fitting process. Results: To demonstrate the methodology, an observation of an oscillating coronal loop undergoing standing kink motion is used. The model comparison techniques provide evidence that a Gaussian damping profile describes the observed wave attenuation better than the often-used exponential profile. This supports previous analysis by Pascoe et al. (2016, A&A, 585, L6). Further, we use the model comparison to provide evidence of time-dependent wave properties of a kink oscillation, attributing the behaviour to the thermodynamic evolution of the local plasma.

  19. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    Science.gov (United States)

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG) so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J(2)) statistics can be applied directly. In a simulation study, TG, HL, and J(2) were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J(2) were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J(2).

  20. Blowout Jets: Hinode X-Ray Jets that Don't Fit the Standard Model

    Science.gov (United States)

    Moore, Ronald L.; Cirtain, Jonathan W.; Sterling, Alphonse C.; Falconer, David A.

    2010-01-01

    Nearly half of all H-alpha macrospicules in polar coronal holes appear to be miniature filament eruptions. This suggests that there is a large class of X-ray jets in which the jet-base magnetic arcade undergoes a blowout eruption as in a CME, instead of remaining static as in most solar X-ray jets, the standard jets that fit the model advocated by Shibata. Along with a cartoon depicting the standard model, we present a cartoon depicting the signatures expected of blowout jets in coronal X-ray images. From Hinode/XRT movies and STEREO/EUVI snapshots in polar coronal holes, we present examples of (1) X-ray jets that fit the standard model, and (2) X-ray jets that do not fit the standard model but do have features appropriate for blowout jets. These features are (1) a flare arcade inside the jet-base arcade in addition to the small flare arcade (bright point) outside that standard jets have, (2) a filament of cool (T approximately 80,000 K) plasma that erupts from the core of the jet-base arcade, and (3) an extra jet strand that should not be made by the reconnection for standard jets but could be made by reconnection between the ambient unipolar open field and the opposite-polarity leg of the filament-carrying flux-rope core field of the erupting jet-base arcade. We therefore infer that these non-standard jets are blowout jets, jets made by miniature versions of the sheared-core-arcade eruptions that make CMEs.

  1. A Parametric Model of Shoulder Articulation for Virtual Assessment of Space Suit Fit

    Science.gov (United States)

    Kim, K. Han; Young, Karen S.; Bernal, Yaritza; Boppana, Abhishektha; Vu, Linh Q.; Benson, Elizabeth A.; Jarvis, Sarah; Rajulu, Sudhakar L.

    2016-01-01

    Shoulder injury is one of the most severe risks that have the potential to impair crewmembers' performance and health in long duration space flight. Overall, 64% of crewmembers experience shoulder pain after extra-vehicular training in a space suit, and 14% of symptomatic crewmembers require surgical repair (Williams & Johnson, 2003). Suboptimal suit fit, in particular at the shoulder region, has been identified as one of the predominant risk factors. However, traditional suit fit assessments and laser scans represent only a single person's data, and thus may not be generalized across wide variations of body shapes and poses. The aim of this work is to develop a software tool based on a statistical analysis of a large dataset of crewmember body shapes. This tool can accurately predict the skin deformation and shape variations for any body size and shoulder pose for a target population, from which the geometry can be exported and evaluated against suit models in commercial CAD software. A preliminary software tool was developed by statistically analyzing 150 body shapes matched with body dimension ranges specified in the Human-Systems Integration Requirements of NASA ("baseline model"). Further, the baseline model was incorporated with shoulder joint articulation ("articulation model"), using additional subjects scanned in a variety of shoulder poses across a pre-specified range of motion. Scan data was cleaned and aligned using body landmarks. The skin deformation patterns were dimensionally reduced and the co-variation with shoulder angles was analyzed. A software tool is currently in development and will be presented in the final proceeding. This tool would allow suit engineers to parametrically generate body shapes in strategically targeted anthropometry dimensions and shoulder poses. This would also enable virtual fit assessments, with which the contact volume and clearance between the suit and body surface can be predictively quantified at reduced time and

  2. Correlated parameter fit of arrhenius model for thermal denaturation of proteins and cells.

    Science.gov (United States)

    Qin, Zhenpeng; Balasubramanian, Saravana Kumar; Wolkers, Willem F; Pearce, John A; Bischof, John C

    2014-12-01

    Thermal denaturation of proteins is critical to cell injury, food science and other biomaterial processing. For example, protein denaturation correlates strongly with cell death by heating, and is increasingly of interest in focal thermal therapies of cancer and other diseases at temperatures which often exceed 50 °C. The Arrhenius model is a simple yet widely used model for both protein denaturation and cell injury. To establish the utility of the Arrhenius model for protein denaturation at 50 °C and above, its sensitivities to the kinetic parameters (activation energy E_a and frequency factor A) were carefully examined. We propose a simplified correlated-parameter fit to the Arrhenius model by treating E_a as an independent fitting parameter and allowing A to follow dependently. The utility of the correlated-parameter fit is demonstrated on thermal denaturation of proteins and cells from the literature as a validation, and on new experimental measurements in our lab using FTIR spectroscopy to demonstrate the broad applicability of the method. Finally, we demonstrate that the end-temperature within which the denaturation is measured is important and changes the kinetics. Specifically, higher E_a and A parameters were found at a low end-temperature (50 °C) and reduce as end-temperatures increase to 70 °C. This trend is consistent with Arrhenius parameters for cell injury in the literature that are significantly higher for clonogenics (45-50 °C) vs. membrane dye assays (60-70 °C). Future opportunities to monitor cell injury by spectroscopic measurement of protein denaturation are discussed.
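
    The Arrhenius damage model discussed here is compact enough to sketch directly. For constant-temperature exposure the damage integral collapses to Omega = A t exp(-E_a/RT); the parameter values below are illustrative stand-ins, not the fitted values from the paper:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def denatured_fraction(ea, ln_a, temp_c, minutes):
    """Fraction denatured under the Arrhenius model at constant temperature.

    ea is the activation energy (J/mol); ln_a is ln of the frequency
    factor A (per minute). Working with ln_a reflects the correlated-
    parameter idea: E_a and ln A trade off almost linearly in fits.
    """
    t_kelvin = temp_c + 273.15
    omega = math.exp(ln_a - ea / (R * t_kelvin)) * minutes  # damage integral
    return 1.0 - math.exp(-omega)

# Illustrative parameters: Ea = 400 kJ/mol, ln A = 146 (min^-1).
f50 = denatured_fraction(4.0e5, 146.0, 50.0, 10.0)
f60 = denatured_fraction(4.0e5, 146.0, 60.0, 10.0)
```

The steep sensitivity to temperature (a 10 °C rise driving the fraction from partial to essentially complete denaturation) is exactly why the choice of end-temperature changes the fitted kinetics.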

  3. Finding the right fit: A comparison of process assumptions underlying popular drift-diffusion models.

    Science.gov (United States)

    Ashby, Nathaniel J S; Jekel, Marc; Dickert, Stephan; Glöckner, Andreas

    2016-12-01

    Recent research makes increasing use of eye-tracking methodologies to generate and test process models. Overall, such research suggests that attention, generally indexed by fixations (gaze duration), plays a critical role in the construction of preference, although the methods used to support this supposition differ substantially. In two studies we empirically test prototypical versions of prominent processing assumptions against one another and against several base models. We find that general evidence accumulation processes provide a good fit to the data. An accumulation process that assumes leakage and temporal variability in evidence weighting (i.e., a primacy effect) fits the aggregate data well, both in terms of choices and decision times, across varying types of choices (e.g., charitable giving and hedonic consumption) and numbers of options. However, when comparing models at the level of the individual, simpler models capture choice data better for a majority of participants. The theoretical and practical implications of these findings are discussed.
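
    A single trial of a leaky evidence accumulator of the general kind compared in such studies can be simulated directly; this is a generic Ornstein-Uhlenbeck-style race sketch with made-up parameters, not the specific models tested in the paper:

```python
import numpy as np

def leaky_race(drift_a, drift_b, leak=0.2, noise=0.3, thresh=1.0,
               dt=0.01, max_t=5.0, rng=None):
    """One trial of a two-option leaky accumulator race.

    Each accumulator integrates its drift (evidence rate) minus a leak
    term plus Gaussian noise; the first to reach the threshold wins.
    """
    rng = rng if rng is not None else np.random.default_rng()
    a = b = 0.0
    for _ in range(int(max_t / dt)):
        a += (drift_a - leak * a) * dt + noise * np.sqrt(dt) * rng.standard_normal()
        b += (drift_b - leak * b) * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if a >= thresh:
            return "A"
        if b >= thresh:
            return "B"
    return "A" if a > b else "B"   # timeout: pick the leader

rng = np.random.default_rng(0)
choices = [leaky_race(1.0, 0.4, rng=rng) for _ in range(200)]
p_a = choices.count("A") / len(choices)
```

With a clearly higher drift for option A, the simulated choice proportion favours A, and the distribution of termination times stands in for decision times when fitting such models to data.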

  4. The Herschel Orion Protostar Survey: Spectral Energy Distributions and Fits Using a Grid of Protostellar Models

    CERN Document Server

    Furlan, E; Ali, B; Stutz, A M; Stanke, T; Tobin, J J; Megeath, S T; Osorio, M; Hartmann, L; Calvet, N; Poteet, C A; Booker, J; Manoj, P; Watson, D M; Allen, L

    2016-01-01

    We present key results from the Herschel Orion Protostar Survey (HOPS): spectral energy distributions (SEDs) and model fits of 330 young stellar objects, predominantly protostars, in the Orion molecular clouds. This is the largest sample of protostars studied in a single, nearby star-formation complex. With near-infrared photometry from 2MASS, mid- and far-infrared data from Spitzer and Herschel, and sub-millimeter photometry from APEX, our SEDs cover 1.2-870 µm and sample the peak of the protostellar envelope emission at ~100 µm. Using mid-IR spectral indices and bolometric temperatures, we classify our sample into 92 Class 0 protostars, 125 Class I protostars, 102 flat-spectrum sources, and 11 Class II pre-main-sequence stars. We implement a simple protostellar model (including a disk in an infalling envelope with outflow cavities) to generate a grid of 30400 model SEDs and use it to determine the best-fit model parameters for each protostar. We argue that far-IR data are essential for accurate cons...

  5. Implicit Active Contour Model with Local and Global Intensity Fitting Energies

    Directory of Open Access Journals (Sweden)

    Xiaozeng Xu

    2013-01-01

    We propose a new active contour model which integrates a local intensity fitting (LIF) energy with an auxiliary global intensity fitting (GIF) energy. The LIF energy is responsible for attracting the contour toward object boundaries and is dominant near them, while the GIF energy incorporates global image information to improve robustness to the initialization of the contours. The proposed model not only provides desirable segmentation results in the presence of intensity inhomogeneity but also allows for more flexible initialization of the contour compared to the RSF and LIF models. We give a theoretical proof that the model converges to a unique steady state regardless of initialization; that is, the zero-level line converges irrespective of the initial function. This means that we obtain the same zero-level line in the steady state whenever the initial function is bounded. In particular, our proposed model is capable of detecting multiple objects and objects with interior holes or blurred edges.

  6. Empirical evaluation reveals best fit of a logistic mutation model for human Y-chromosomal microsatellites.

    Science.gov (United States)

    Jochens, Arne; Caliebe, Amke; Rösler, Uwe; Krawczak, Michael

    2011-12-01

    The rate of microsatellite mutation is dependent upon both the allele length and the repeat motif, but the exact nature of this relationship is still unknown. We analyzed data on the inheritance of human Y-chromosomal microsatellites in father-son duos, taken from 24 published reports and comprising 15,285 directly observable meioses. At the six microsatellites analyzed (DYS19, DYS389I, DYS390, DYS391, DYS392, and DYS393), a total of 162 mutations were observed. For each locus, we employed a maximum-likelihood approach to evaluate one of several single-step mutation models on the basis of the data. For five of the six loci considered, a novel logistic mutation model was found to provide the best fit according to Akaike's information criterion. This implies that the mutation probability at the loci increases (nonlinearly) with allele length at a rate that differs between upward and downward mutations. For DYS392, the best fit was provided by a linear model in which upward and downward mutation probabilities increase equally with allele length. This is the first study to empirically compare different microsatellite mutation models in a locus-specific fashion.
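
    The logistic form found to fit best can be written down directly: the per-meiosis mutation probability rises nonlinearly with allele length while staying bounded below 1. The coefficients below are invented for illustration, not the fitted locus-specific values:

```python
import math

def logistic_mutation_prob(repeats, a, b):
    """Mutation probability per meiosis as a logistic function of allele
    length (repeat count). a is an intercept, b a slope on length; in the
    paper's model, upward and downward mutations get separate parameters.
    """
    return 1.0 / (1.0 + math.exp(-(a + b * repeats)))

# Illustrative coefficients: probability rises with repeat number.
probs = [logistic_mutation_prob(n, -9.0, 0.25) for n in range(10, 31)]
```

Unlike a linear model, the logistic curve cannot predict probabilities outside (0, 1) for extreme allele lengths, which is one practical appeal of this parameterization.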

  7. An improved cosmological model fitting of Planck data with a dark energy spike

    CERN Document Server

    Park, Chan-Gyung

    2015-01-01

    The Λ cold dark matter (ΛCDM) model is currently known as the simplest cosmology model that best describes observations with a minimal number of parameters. Here we introduce a cosmology model that is preferred over the conventional ΛCDM one by constructing dark energy as the sum of the cosmological constant Λ and an additional fluid that is designed to have an extremely short transient spike in energy density during the radiation-matter equality era and early scaling behavior with the radiation and matter densities. The density parameter of the additional fluid is defined as a Gaussian function plus a constant in logarithmic scale-factor space. Searching for the best-fit cosmological parameters in the presence of such a dark energy spike gives a far smaller chi-square value, by about five times the number of additional parameters introduced, and narrower constraints on matter density and Hubble constant compared with the best-fit ΛCDM model. The...

  8. A Multiple Criteria Decision Modelling approach to selection of estimation techniques for fitting extreme floods

    Science.gov (United States)

    Duckstein, L.; Bobée, B.; Ashkar, F.

    1991-09-01

    The problem of fitting a probability distribution, here the log-Pearson Type III distribution, to extreme floods is considered from the point of view of two numerical and three non-numerical criteria. The six fitting techniques considered include classical techniques (maximum likelihood, moments of logarithms of flows) and newer methods such as mixed moments and the generalized method of moments developed by two of the co-authors. The latter method fits the distribution using moments of different orders; in particular the SAM method (Sundry Averages Method) uses the moments of order 0 (geometric mean), 1 (arithmetic mean), and -1 (harmonic mean), and leads to a smaller variance of the parameters. The criteria used to select the method of parameter estimation are: the two statistical criteria of mean square error and bias; the two computational criteria of program availability and ease of use; and the user-related criterion of acceptability. These criteria are transformed into value functions or fuzzy set membership functions, and then three Multiple Criteria Decision Modelling (MCDM) techniques, namely composite programming, ELECTRE, and MCQA, are applied to rank the estimation techniques.
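
    The three "sundry averages" used by the SAM method are the sample moments of order 1, 0 and -1. A small sketch with made-up flow values:

```python
import numpy as np

def sundry_averages(flows):
    """Arithmetic, geometric and harmonic means of a flow sample: the
    moments of order 1, 0 and -1 used by the SAM variant of the
    generalized method of moments."""
    x = np.asarray(flows, dtype=float)
    arith = x.mean()                    # moment of order 1
    geom = np.exp(np.log(x).mean())     # moment of order 0
    harm = len(x) / (1.0 / x).sum()     # moment of order -1
    return arith, geom, harm

# Hypothetical annual peak flows (m^3/s), for illustration only.
am, gm, hm = sundry_averages([120.0, 310.0, 95.0, 480.0, 210.0])
```

Matching all three moments constrains both the central tendency and the lower tail of the fitted distribution, since the harmonic mean is dominated by the smallest observations.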

  9. Systematic effects on the size-luminosity relation: dependence on model fitting and morphology

    CERN Document Server

    Bernardi, M; Vikram, V; Huertas-Company, M; Mei, S; Shankar, F; Sheth, R K

    2012-01-01

    We quantify the systematics in the size-luminosity relation of galaxies in the SDSS main sample which arise from fitting different 1- and 2-component model profiles to the images. In objects brighter than L*, fitting a single Sersic profile to what is really a two-component SerExp system leads to biases: the half-light radius is increasingly overestimated as n of the fitted single component increases; it is also overestimated at B/T ~ 0.6. However, the net effect on the R-L relation is small, except for the most luminous tail, where it curves upwards towards larger sizes. We also study how this relation depends on morphological type. Our analysis is one of the first to use Bayesian-classifier derived weights, rather than hard cuts, to define morphology. Crudely, there appear to be only two relations: one for early-types (Es, S0s and Sa's) and another for late-types (Sbs and Scds). However, closer inspection shows that within the early-type sample S0s tend to be 15% smaller than Es of the same luminosity, and,...

  10. Adapted strategic planning model applied to small business: a case study in the fitness area

    Directory of Open Access Journals (Sweden)

    Eduarda Tirelli Hennig

    2012-06-01

    Strategic planning is an important management tool in the corporate world and should not be restricted to big companies. However, this kind of planning process may need special adaptation for small businesses, given their particular characteristics. This paper aims to identify and adapt existing models of strategic planning to the scenario of a small business in the fitness area. Initially, a comparative study of models from different authors is carried out to identify their phases and activities. It is then determined which of these phases and activities should be present in a model to be used in a small business. That model was applied to a Pilates studio; it involves the establishment of an organizational identity, an environmental analysis, and the definition of strategic goals, strategies and actions to reach them. Finally, benefits to the organization could be identified, as well as hurdles in the implementation of the tool.

  11. The Shape of Dark Matter Haloes II. The Galactus HI Modelling & Fitting Tool

    CERN Document Server

    Peters, S P C; Allen, R J; Freeman, K C

    2016-01-01

    We present a new HI modelling tool called Galactus. The program has been designed to perform automated fits of disc-galaxy models to observations. It includes a treatment of the self-absorption of the gas. The software has been released into the public domain. We describe the design philosophy and inner workings of the program. We then model the face-on galaxy NGC2403, using both self-absorption and optically thin models, showing that self-absorption occurs even in face-on galaxies. It is shown that the maximum surface brightness plateaus seen in Paper I of this series are indeed signs of self-absorption. The apparent HI mass of an edge-on galaxy can be drastically lower than that of the same galaxy seen face-on. The Tully-Fisher relation is found to be relatively free from self-absorption issues.

  12. Revisiting algorithms for generating surrogate time series

    CERN Document Server

    Raeth, C; Papadakis, I E; Brinkmann, W

    2011-01-01

    The method of surrogates is one of the key concepts of nonlinear data analysis. Here, we demonstrate that commonly used algorithms for generating surrogates often fail to generate truly linear time series. Rather, they create surrogate realizations with Fourier phase correlations, leading to non-detections of nonlinearities. We argue that reliable surrogates can only be generated if one tests separately for static and dynamic nonlinearities.
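
    The baseline algorithm being critiqued here is the classical phase-randomized Fourier surrogate, which preserves the power spectrum exactly while replacing the phases. A minimal sketch:

```python
import numpy as np

def phase_randomized_surrogate(x, seed=0):
    """Classical Fourier surrogate: keep the amplitude spectrum, draw
    i.i.d. uniform phases. Linear (second-order) properties survive;
    any genuine phase structure is destroyed."""
    rng = np.random.default_rng(seed)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spec.size)
    phases[0] = 0.0    # keep the mean term real
    phases[-1] = 0.0   # keep the Nyquist term real (even-length input)
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))

x = np.sin(np.linspace(0, 20, 256)) + 0.1 * np.cos(np.linspace(0, 200, 256))
s = phase_randomized_surrogate(x)
```

The point of the paper is that a surrogate built this way can still carry residual phase correlations relative to the null hypothesis being tested, so spectrum preservation alone is not a sufficient check.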

  13. Beyond multi-fractals: surrogate time series and fields

    Science.gov (United States)

    Venema, V.; Simmer, C.

    2007-12-01

    Most natural complex systems are characterised by variability over a large range of temporal and spatial scales. The two main methodologies for generating such structures are Fourier/FARIMA-based algorithms and multifractal methods. The former is restricted to Gaussian data, whereas the latter requires the structure to be self-similar. This work presents so-called surrogate data as an alternative that works with any (empirical) distribution and power spectrum. The best-known surrogate algorithm is the iterative amplitude adjusted Fourier transform (IAAFT) algorithm. We have studied six different geophysical time series (two clouds, runoff of a small and a large river, temperature and rain) and their surrogates. The power spectra, and consequently the second-order structure functions, were replicated accurately. Even the fourth-order structure function was more accurately reproduced by the surrogates than would be possible with a fractal method, because the measured structure deviated too strongly from fractal scaling. Only for the daily rain sums could a fractal method have been more accurate. Just like Fourier and multifractal methods, the current surrogates are not able to model the asymmetric increment distributions observed for runoff, i.e., they cannot reproduce nonlinear dynamical processes that are asymmetric in time. Furthermore, we have found differences in the structure functions on small scales. Surrogate methods are especially valuable for empirical studies, because the generated time series and fields mimic measured variables accurately. Our main application is radiative transfer through structured clouds. Like many geophysical fields, clouds can only be sampled sparsely, e.g. with in-situ airborne instruments. However, for radiative transfer calculations we need full 3-dimensional cloud fields. A first study relating the measured properties of the cloud droplets and the radiative properties of the cloud field by generating surrogate cloud
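
    The IAAFT algorithm named above alternates between imposing the target power spectrum and the target value distribution. A compact sketch of the standard scheme:

```python
import numpy as np

def iaaft(x, iterations=100, seed=0):
    """Iterative amplitude adjusted Fourier transform surrogate:
    alternately enforce the original amplitude spectrum and the original
    value distribution (by rank-ordering)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    target_amp = np.abs(np.fft.rfft(x))
    sorted_x = np.sort(x)
    s = rng.permutation(x)                 # start from a random shuffle
    for _ in range(iterations):
        # Step 1: impose the target spectrum, keeping current phases.
        spec = np.fft.rfft(s)
        s = np.fft.irfft(target_amp * np.exp(1j * np.angle(spec)), n=len(x))
        # Step 2: impose the target distribution by rank-ordering.
        ranks = np.argsort(np.argsort(s))
        s = sorted_x[ranks]
    return s

x = np.cumsum(np.sin(np.linspace(0, 30, 200)))
s = iaaft(x)
```

Because the final step is a rank-order replacement, the surrogate's value distribution matches the original exactly, while the spectrum is matched approximately at convergence.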

  14. A gamma variate model that includes stretched exponential is a better fit for gastric emptying data from mice.

    Science.gov (United States)

    Bajzer, Željko; Gibbons, Simon J; Coleman, Heidi D; Linden, David R; Farrugia, Gianrico

    2015-08-01

    Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes stretched exponential and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and when appropriate a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data.
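
    The two single-peak models compared can be sketched side by side. The stretched-exponential extension is written here in the commonly used form where the exponential argument is raised to a power d; this form is assumed for illustration and reduces to the standard gamma variate at d = 1:

```python
import math

def gamma_variate(t, a, b, c):
    # Standard gamma variate breath-excretion curve: a * t^b * exp(-t/c).
    return a * t ** b * math.exp(-t / c)

def stretched_gamma_variate(t, a, b, c, d):
    # Gamma variate with a stretched-exponential tail; recovers the
    # standard model when d = 1. Parameter values are illustrative.
    return a * t ** b * math.exp(-((t / c) ** d))
```

In a model comparison such as the one in the paper, both curves would be fitted to the same emptying data and ranked by the Akaike Information Criterion, which penalizes the extra parameter d.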

  15. Fitting parametric random effects models in very large data sets with application to VHA national data.

    Science.gov (United States)

    Gebregziabher, Mulugeta; Egede, Leonard; Gilbert, Gregory E; Hunt, Kelly; Nietert, Paul J; Mauldin, Patrick

    2012-10-24

    With the current focus on personalized medicine, patient/subject level inference is often of key interest in translational research. As a result, random effects models (REM) are becoming popular for patient level inference. However, for very large data sets that are characterized by large sample size, it can be difficult to fit REM using commonly available statistical software such as SAS, since fitting requires inordinate amounts of computer time and memory beyond what is available, preventing model convergence. For example, in a retrospective cohort study of over 800,000 Veterans with type 2 diabetes with longitudinal data over 5 years, fitting REM via generalized linear mixed modeling using currently available standard procedures in SAS (e.g. PROC GLIMMIX) was very difficult, and the same problems exist in Stata's gllamm and R's lme packages. Thus, this study proposes and assesses the performance of a meta-regression approach and compares it with methods based on sampling of the full data. We use both simulated and real data from a national cohort of Veterans with type 2 diabetes (n=890,394), which was created by linking multiple patient and administrative files, resulting in a cohort with longitudinal data collected over 5 years. The outcome of interest was mean annual HbA1c measured over a 5-year period. Using this outcome, we compared parameter estimates from the proposed random effects meta-regression (REMR) with estimates based on simple random sampling and VISN (Veterans Integrated Service Networks) based stratified sampling of the full data. Our results indicate that REMR provides parameter estimates that are less likely to be biased, with tighter confidence intervals, when the VISN-level estimates are homogeneous. When the interest is to fit REM in repeated-measures data with a very large sample size, REMR can be used as a good alternative. It leads to reasonable inference for both Gaussian and non-Gaussian responses if parameter estimates are
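    The combining step of such a meta-regression approach can be illustrated with fixed-effect inverse-variance pooling of partition-level estimates (a generic sketch of the pooling idea only; REMR itself first fits the random effects model within each partition, which is not shown):

```python
import math

def inverse_variance_pool(estimates, std_errors):
    """Fixed-effect inverse-variance pooling of per-partition estimates.

    Each partition contributes with weight 1/SE^2, so more precisely
    estimated partitions dominate the pooled coefficient."""
    weights = [1.0 / se**2 for se in std_errors]
    wsum = sum(weights)
    pooled = sum(w * b for w, b in zip(weights, estimates)) / wsum
    pooled_se = math.sqrt(1.0 / wsum)
    return pooled, pooled_se
```

    Splitting a cohort of ~900,000 patients into partitions that each fit in memory and then pooling this way is what makes the approach feasible where a single GLIMMIX run is not.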

  16. An amino acid substitution-selection model adjusts residue fitness to improve phylogenetic estimation.

    Science.gov (United States)

    Wang, Huai-Chun; Susko, Edward; Roger, Andrew J

    2014-04-01

    Standard protein phylogenetic models use fixed rate matrices of amino acid interchange derived from analyses of large databases. Differences between the stationary amino acid frequencies of these rate matrices and those of a data set of interest are typically adjusted for by a matrix multiplication that converts the empirical rate matrix to an exchangeability matrix, which is then postmultiplied by the amino acid frequencies in the alignment. The result is a time-reversible rate matrix with stationary amino acid frequencies equal to the data set frequencies. On the basis of population genetics principles, we develop an amino acid substitution-selection model that parameterizes the fitness of an amino acid as the logarithm of the ratio of the frequency of the amino acid to the frequency of the same amino acid under no selection. The model gives rise to a different sequence of matrix multiplications to convert an empirical rate matrix to one that has stationary amino acid frequencies equal to the data set frequencies. We incorporated the substitution-selection model with an improved amino acid class frequency mixture (cF) model to partially take into account site-specific amino acid frequencies in the phylogenetic models. We show that 1) the selection models fit data significantly better than corresponding models without selection for most of the 21 test data sets; 2) both cF and cF selection models favored the phylogenetic trees that were inferred under current sophisticated models and methods for three difficult phylogenetic problems (the positions of microsporidia and breviates in eukaryote phylogeny and the position of the root of the angiosperm tree); and 3) for data simulated under site-specific residue frequencies, the cF selection models estimated trees closer to the generating trees than a standard Γ model or cF without selection. We also explored several ways of estimating amino acid frequencies under neutral evolution that are required for these selection
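    The standard frequency-adjustment step described above (a symmetric exchangeability matrix postmultiplied by data-set frequencies) can be sketched as follows; the selection model's alternative sequence of multiplications is not reproduced here:

```python
import numpy as np

def reversible_rate_matrix(S, pi):
    """Build a time-reversible rate matrix Q from a symmetric
    exchangeability matrix S and stationary frequencies pi:
    Q_ij = S_ij * pi_j for i != j, with rows summing to zero."""
    Q = S * pi[np.newaxis, :]
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q
```

    Because S is symmetric, the construction automatically satisfies detailed balance (pi_i Q_ij = pi_j Q_ji), which is what makes the resulting model time-reversible with the data-set frequencies as its stationary distribution.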

  17. Current status of the Standard Model CKM fit and constraints on $\\Delta F=2$ New Physics

    CERN Document Server

    Charles, J; Descotes-Genon, S; Lacker, H; Menzel, A; Monteil, S; Niess, V; Ocariz, J; Orloff, J; Perez, A; Qian, W; Tisserand, V; Trabelsi, K; Urquijo, P; Silva, L Vale

    2015-01-01

    This letter summarises the status of the global fit of the CKM parameters within the Standard Model performed by the CKMfitter group. Special attention is paid to the inputs for the CKM angles $\\alpha$ and $\\gamma$ and the status of $B_s\\to\\mu\\mu$ and $B_d\\to \\mu\\mu$ decays. We illustrate the current situation for other unitarity triangles. We also discuss the constraints on generic $\\Delta F=2$ New Physics. All results have been obtained with the CKMfitter analysis package, featuring the frequentist statistical approach and using Rfit to handle theoretical uncertainties.

  18. Inverse problem theory methods for data fitting and model parameter estimation

    CERN Document Server

    Tarantola, A

    2002-01-01

    Inverse Problem Theory is written for physicists, geophysicists and all scientists facing the problem of quantitative interpretation of experimental data. Although it contains a lot of mathematics, it is not intended as a mathematical book, but rather tries to explain how a method of acquisition of information can be applied to the actual world.The book provides a comprehensive, up-to-date description of the methods to be used for fitting experimental data, or to estimate model parameters, and to unify these methods into the Inverse Problem Theory. The first part of the book deals wi

  19. Understanding Systematics in ZZ Ceti Model Fitting to Enable Differential Seismology

    Science.gov (United States)

    Fuchs, J. T.; Dunlap, B. H.; Clemens, J. C.; Meza, J. A.; Dennihy, E.; Koester, D.

    2017-03-01

    We are conducting a large spectroscopic survey of over 130 Southern ZZ Cetis with the Goodman Spectrograph on the SOAR Telescope. Because it employs a single instrument with high UV throughput, this survey will both improve the signal-to-noise of the sample of SDSS ZZ Cetis and provide a uniform dataset for model comparison. We are paying special attention to systematics in the spectral fitting and quantify three of those systematics here. We show that relative positions in the log g–Teff plane are consistent for these three systematics.

  20. Understanding Systematics in ZZ Ceti Model Fitting to Enable Differential Seismology

    CERN Document Server

    Fuchs, J T; Clemens, J C; Meza, J A; Dennihy, E; Koester, D

    2016-01-01

    We are conducting a large spectroscopic survey of over 130 Southern ZZ Cetis with the Goodman Spectrograph on the SOAR Telescope. Because it employs a single instrument with high UV throughput, this survey will both improve the signal-to-noise of the sample of SDSS ZZ Cetis and provide a uniform dataset for model comparison. We are paying special attention to systematics in the spectral fitting and quantify three of those systematics here. We show that relative positions in the $\\log{g}$-$T_{\\rm eff}$ plane are consistent for these three systematics.

  1. Evaluating Fit Indices for Multivariate t-Based Structural Equation Modeling with Data Contamination

    Directory of Open Access Journals (Sweden)

    Mark H. C. Lai

    2017-07-01

    Full Text Available In conventional structural equation modeling (SEM), with the presence of even a tiny amount of data contamination due to outliers or influential observations, normal-theory maximum likelihood (ML-Normal) is not efficient and can be severely biased. The multivariate-t-based SEM, which was recently implemented in Mplus as an approach for mixture modeling, represents a robust estimation alternative to downweight the impact of outliers and influential observations. To our knowledge, the use of maximum likelihood estimation with a multivariate-t model (ML-t) to handle outliers has not been shown in the SEM literature. In this paper we demonstrate the use of ML-t using the classic Holzinger and Swineford (1939) data set with a few observations modified as outliers or influential observations. A simulation study is then conducted to examine the performance of fit indices and information criteria under ML-Normal and ML-t in the presence of outliers. Results showed that whereas all fit indices got worse for ML-Normal with an increasing amount of outliers and influential observations, their values were relatively stable with ML-t, and the use of information criteria was effective in selecting ML-Normal without data contamination and selecting ML-t with data contamination, especially when the sample size was at least 200.

  2. Analytical Light Curve Models of Super-Luminous Supernovae: chi^2-Minimizations of Parameter Fits

    CERN Document Server

    Chatzopoulos, E; Vinko, J; Horvath, Z L; Nagy, A

    2013-01-01

    We present fits of generalized semi-analytic supernova (SN) light curve (LC) models for a variety of power inputs including Ni-56 and Co-56 radioactive decay, magnetar spin-down, and forward and reverse shock heating due to supernova ejecta-circumstellar matter (CSM) interaction. We apply our models to the observed LCs of the H-rich Super Luminous Supernovae (SLSN-II) SN 2006gy, SN 2006tf, SN 2008am, SN 2008es, CSS100217, the H-poor SLSN-I SN 2005ap, SCP06F6, SN 2007bi, SN 2010gx and SN 2010kd as well as to the interacting SN 2008iy and PTF09uj. Our goal is to determine the dominant mechanism that powers the LCs of these extraordinary events and the physical conditions involved in each case. We also present a comparison of our semi-analytical results with recent results from numerical radiation hydrodynamics calculations in the particular case of SN 2006gy in order to explore the strengths and weaknesses of our models. We find that CS shock heating produced by ejecta-CSM interaction provides a better fit to t...

  3. A global fit study on the new agegraphic dark energy model

    CERN Document Server

    Zhang, Jing-Fei; Zhang, Xin

    2012-01-01

    We perform a global fit study on the new agegraphic dark energy (NADE) model in a non-flat universe by using the MCMC method with the full CMB power spectra data from the WMAP 7-yr observations, the SNIa data from the Union2.1 sample, BAO data from SDSS DR7 and the WiggleZ Dark Energy Survey, and the latest measurements of $H_0$ from HST. We find that the value of $\Omega_{k0}$ is greater than 0 at least at the 3$\sigma$ confidence level (CL), which implies that the NADE model distinctly favors an open universe. Moreover, our results give the key parameter of the NADE model as $n=2.673^{+0.053+0.127+0.199}_{-0.077-0.151-0.222}$ at the 1--3$\sigma$ CLs, where its best-fit value is significantly smaller than those obtained in previous works. We find that the reason for such a change lies in the different SNIa samples used. Our further test indicates that there is a distinct tension between the Union2 sample of SNIa and other observations, and the tension will be relieved once the Union2 sample i...

  4. Estimating Predictability Redundancy and Surrogate Data Method

    CERN Document Server

    Pecen, L

    1995-01-01

    A method for estimating the theoretical predictability of time series is presented, based on information-theoretic functionals (redundancies) and the surrogate data technique. The redundancy, designed for a chosen model and a prediction horizon, evaluates the amount of information, in bits, between a model input (e.g., lagged versions of the series) and a model output (i.e., a series lagged by the prediction horizon from the model input). This value, however, is influenced by the method and precision of redundancy estimation, and therefore it is a) normalized by the maximum possible redundancy (given by the precision used), and b) compared to the redundancies obtained from two types of surrogate data in order to obtain a reliable classification of a series as either unpredictable or predictable. The type of predictability (linear or nonlinear) and its level can be further evaluated. The method is demonstrated using a numerically generated time series as well as high-frequency foreign exchange data and the theoretical ...

  5. Using Finite Model Analysis and Out of Hot Cell Surrogate Rod Testing to Analyze High Burnup Used Nuclear Fuel Mechanical Properties

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jy-An John [ORNL; Jiang, Hao [ORNL; Wang, Hong [ORNL

    2014-07-01

    Based on a series of FEA simulations, this report provides discussion and conclusions concerning the impact of interface bonding efficiency on SNF vibration integrity; this includes the moment carrying capacity distribution between pellets and clad, and the impact of cohesion bonding on the flexural rigidity of the surrogate rod system. As progressive de-bonding occurs at the pellet-pellet interfaces and at the pellet-clad interface, the load ratio of the bending moment carrying capacity gradually shifts from the pellets to the clad; the clad starts to carry a significant portion of the bending moment resistance until reaching the full de-bonding state at the pellet-pellet interface regions. This results in localized plastic deformation of the clad at the pellet-pellet-clad interface region; the associated plastic deformation of the SS clad leads to a significant degradation in the stiffness of the surrogate rod. For instance, the flexural rigidity was reduced by 39% from the perfect bond state to the de-bonded state at the pellet-pellet interfaces.

  6. Fitting mathematical models to describe the rheological behaviour of chocolate pastes

    Science.gov (United States)

    Barbosa, Carla; Diogo, Filipa; Alves, M. Rui

    2016-06-01

    Flow behavior is of utmost importance for the chocolate industry. The objective of this work was to study two mathematical models, the Casson and Windhab models, that can be used to fit chocolate rheological data, and to evaluate which better describes and predicts the rheological behaviour of different chocolate pastes. Rheological properties (viscosity, shear stress and shear rates) were obtained with a rotational viscometer equipped with a concentric cylinder. The chocolate samples were white chocolate and chocolate with varying percentages of cacao (55%, 70% and 83%). The results showed that the Windhab model was the best at describing the flow behaviour of all the studied samples, with higher coefficients of determination (r2 > 0.9).
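    The Casson model is linear in square-root coordinates, sqrt(tau) = sqrt(tau0) + sqrt(eta_c)*sqrt(gamma_dot), so it can be fitted by ordinary least squares (a minimal sketch; the Windhab model has more parameters and needs a nonlinear fit, not shown):

```python
import numpy as np

def fit_casson(shear_rate, shear_stress):
    """Fit the Casson model by linear least squares in sqrt coordinates.

    Returns (tau0, eta_c): the Casson yield stress and plastic
    viscosity, recovered by squaring the intercept and slope."""
    x = np.sqrt(np.asarray(shear_rate, dtype=float))
    y = np.sqrt(np.asarray(shear_stress, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)   # highest power first
    return intercept**2, slope**2
```

    On noise-free data generated from known parameters, the fit recovers them exactly, which makes the transformation a convenient sanity check before fitting real viscometer data.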

  7. GRace: a MATLAB-based application for fitting the discrimination-association model.

    Science.gov (United States)

    Stefanutti, Luca; Vianello, Michelangelo; Anselmi, Pasquale; Robusto, Egidio

    2014-10-28

    The Implicit Association Test (IAT) is a computerized two-choice discrimination task in which stimuli have to be categorized as belonging to target categories or attribute categories by pressing, as quickly and accurately as possible, one of two response keys. The discrimination association model has been recently proposed for the analysis of reaction time and accuracy of an individual respondent to the IAT. The model disentangles the influences of three qualitatively different components on the responses to the IAT: stimuli discrimination, automatic association, and termination criterion. The article presents General Race (GRace), a MATLAB-based application for fitting the discrimination association model to IAT data. GRace has been developed for Windows as a standalone application. It is user-friendly and does not require any programming experience. The use of GRace is illustrated on the data of a Coca Cola-Pepsi Cola IAT, and the results of the analysis are interpreted and discussed.

  8. Towards greater realism in inclusive fitness models: the case of worker reproduction in insect societies.

    Science.gov (United States)

    Wenseleers, Tom; Helanterä, Heikki; Alves, Denise A; Dueñez-Guzmán, Edgar; Pamilo, Pekka

    2013-01-01

    The conflicts over sex allocation and male production in insect societies have long served as an important test bed for Hamilton's theory of inclusive fitness, but have for the most part been considered separately. Here, we develop new coevolutionary models to examine the interaction between these two conflicts and demonstrate that sex ratio and colony productivity costs of worker reproduction can lead to vastly different outcomes even in species that show no variation in their relatedness structure. Empirical data on worker-produced males in eight species of Melipona bees support the predictions from a model that takes into account the demographic details of colony growth and reproduction. Overall, these models contribute significantly to explaining behavioural variation that previous theories could not account for.

  9. Goodness-of-fit tests and model diagnostics for negative binomial regression of RNA sequencing data.

    Directory of Open Access Journals (Sweden)

    Gu Mi

    Full Text Available This work is about assessing model adequacy for negative binomial (NB) regression, particularly (1) assessing the adequacy of the NB assumption, and (2) assessing the appropriateness of models for NB dispersion parameters. Tools for the first are appropriate for NB regression generally; those for the second are primarily intended for RNA sequencing (RNA-Seq) data analysis. The typically small number of biological samples and large number of genes in RNA-Seq analysis motivate us to address the trade-offs between robustness and statistical power using NB regression models. One widely-used power-saving strategy, for example, is to assume some commonalities of NB dispersion parameters across genes via simple models relating them to mean expression rates, and many such models have been proposed. As RNA-Seq analysis is becoming ever more popular, it is appropriate to make more thorough investigations into power and robustness of the resulting methods, and into practical tools for model assessment. In this article, we propose simulation-based statistical tests and diagnostic graphics to address model adequacy. We provide simulated and real data examples to illustrate that our proposed methods are effective for detecting the misspecification of the NB mean-variance relationship as well as judging the adequacy of fit of several NB dispersion models.

  10. Goodness-of-fit tests and model diagnostics for negative binomial regression of RNA sequencing data.

    Science.gov (United States)

    Mi, Gu; Di, Yanming; Schafer, Daniel W

    2015-01-01

    This work is about assessing model adequacy for negative binomial (NB) regression, particularly (1) assessing the adequacy of the NB assumption, and (2) assessing the appropriateness of models for NB dispersion parameters. Tools for the first are appropriate for NB regression generally; those for the second are primarily intended for RNA sequencing (RNA-Seq) data analysis. The typically small number of biological samples and large number of genes in RNA-Seq analysis motivate us to address the trade-offs between robustness and statistical power using NB regression models. One widely-used power-saving strategy, for example, is to assume some commonalities of NB dispersion parameters across genes via simple models relating them to mean expression rates, and many such models have been proposed. As RNA-Seq analysis is becoming ever more popular, it is appropriate to make more thorough investigations into power and robustness of the resulting methods, and into practical tools for model assessment. In this article, we propose simulation-based statistical tests and diagnostic graphics to address model adequacy. We provide simulated and real data examples to illustrate that our proposed methods are effective for detecting the misspecification of the NB mean-variance relationship as well as judging the adequacy of fit of several NB dispersion models.
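    The simulation-based testing idea can be sketched as a parametric bootstrap of the Pearson statistic under an NB2 model with variance function Var(Y) = mu + phi*mu^2 (a generic illustration of the approach, not the authors' exact tests):

```python
import numpy as np

def nb_pearson_stat(y, mu, phi):
    """Pearson statistic under the NB2 variance function."""
    var = mu + phi * mu**2
    return float(np.sum((np.asarray(y, dtype=float) - mu) ** 2 / var))

def simulation_gof_pvalue(y, mu, phi, n_sim=499, rng=None):
    """Parametric-bootstrap goodness-of-fit p-value (sketch): simulate
    data sets from the fitted NB model and compare their Pearson
    statistics with the observed one."""
    rng = np.random.default_rng(rng)
    obs = nb_pearson_stat(y, mu, phi)
    r = 1.0 / phi           # NB 'size' parameter
    p = r / (r + mu)        # per-observation success probability
    count = 0
    for _ in range(n_sim):
        y_sim = rng.negative_binomial(r, p)
        if nb_pearson_stat(y_sim, mu, phi) >= obs:
            count += 1
    return (count + 1) / (n_sim + 1)
```

    A small p-value signals that the observed dispersion is inconsistent with the assumed mean-variance relationship, which is the kind of misspecification the abstract's diagnostics are designed to detect.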

  11. A diffusion process to model generalized von Bertalanffy growth patterns: fitting to real data.

    Science.gov (United States)

    Román-Román, Patricia; Romero, Desirée; Torres-Ruiz, Francisco

    2010-03-07

    The von Bertalanffy growth curve has been commonly used for modeling animal growth (particularly fish). Both deterministic and stochastic models exist in association with this curve, the latter allowing for the inclusion of fluctuations or disturbances that might exist in the system under consideration which are not always quantifiable or may even be unknown. This curve is mainly used for modeling the length variable, whereas a generalized version, including a new parameter b ≥ 1, allows for modeling both length and weight for some animal species in both isometric (b = 3) and allometric (b ≠ 3) situations. In this paper a stochastic model related to the generalized von Bertalanffy growth curve is proposed. This model allows us to investigate the time evolution of growth variables associated both with individual behaviors and mean population behavior. Also, with the purpose of fitting the above-mentioned model to real data, so as to be able to forecast and analyze particular characteristics, we study the maximum likelihood estimation of the parameters of the model. In addition, and regarding the numerical problems posed by solving the likelihood equations, a strategy is developed for obtaining initial solutions for the usual numerical procedures. Such strategy is validated by means of simulated examples. Finally, an application to real data of mean weight of swordfish is presented. 2009 Elsevier Ltd. All rights reserved.
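    The deterministic backbone of the model can be sketched as follows (this is a common generalized parameterisation, assumed here; the paper's stochastic diffusion version adds random fluctuations around it and is not shown):

```python
import math

def von_bertalanffy(t, x_inf, k, t0, b=1.0):
    """Generalized von Bertalanffy curve:
    x(t) = x_inf * (1 - exp(-k*(t - t0)))**b.
    b = 1 gives the classical length curve, b = 3 the isometric
    weight curve, and other b values allometric growth."""
    return x_inf * (1.0 - math.exp(-k * (t - t0))) ** b
```

    The isometric case b = 3 is exactly the cube of the corresponding length curve, which is why the same k and t0 can describe both variables for isometrically growing species.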

  12. An experimentally informed evolutionary model improves phylogenetic fit to divergent lactamase homologs.

    Science.gov (United States)

    Bloom, Jesse D

    2014-10-01

    Phylogenetic analyses of molecular data require a quantitative model for how sequences evolve. Traditionally, the details of the site-specific selection that governs sequence evolution are not known a priori, making it challenging to create evolutionary models that adequately capture the heterogeneity of selection at different sites. However, recent advances in high-throughput experiments have made it possible to quantify the effects of all single mutations on gene function. I have previously shown that such high-throughput experiments can be combined with knowledge of underlying mutation rates to create a parameter-free evolutionary model that describes the phylogeny of influenza nucleoprotein far better than commonly used existing models. Here, I extend this work by showing that published experimental data on TEM-1 beta-lactamase (Firnberg E, Labonte JW, Gray JJ, Ostermeier M. 2014. A comprehensive, high-resolution map of a gene's fitness landscape. Mol Biol Evol. 31:1581-1592) can be combined with a few mutation rate parameters to create an evolutionary model that describes beta-lactamase phylogenies much better than most common existing models. This experimentally informed evolutionary model is superior even for homologs that are substantially diverged (about 35% divergence at the protein level) from the TEM-1 parent that was the subject of the experimental study. These results suggest that experimental measurements can inform phylogenetic evolutionary models that are applicable to homologs that span a substantial range of sequence divergence.

  13. Bootstrapping Topological Properties and Systemic Risk of Complex Networks Using the Fitness Model

    Science.gov (United States)

    Musmeci, Nicolò; Battiston, Stefano; Caldarelli, Guido; Puliga, Michelangelo; Gabrielli, Andrea

    2013-05-01

    In this paper we present a novel method to reconstruct global topological properties of a complex network starting from limited information. We assume that a non-topological quantity, which we interpret as fitness, is known for all the nodes. In contrast, we assume that the degree, i.e. the number of connections, is known only for a subset of the nodes in the network. We then use a fitness model, calibrated on the subset of nodes for which degrees are known, in order to generate ensembles of networks. Here, we focus on topological properties that are relevant for processes of contagion and distress propagation in networks, i.e. network density and k-core structure, and we study how well these properties can be estimated as a function of the size of the subset of nodes utilized for the calibration. Finally, we also study how well the resilience to distress propagation in the network can be estimated using our method. We perform a first test on ensembles of synthetic networks generated with the Exponential Random Graph model, which allows us to apply common tools from statistical mechanics. We then perform a second test on empirical networks taken from economic and financial contexts. In both cases, we find that a subset as small as 10% of nodes can be enough to estimate the properties of the network along with its resilience with an error of 5%.

  14. Reconstructing topological properties of complex networks from partial information using the Fitness Model

    Science.gov (United States)

    Gabrielli, Andrea; Battiston, Stefano; Caldarelli, Guido; Musmeci, Nicoló; Puliga, Michelangelo

    2014-03-01

    We present a new method to reconstruct global topological properties of complex networks starting from limited information. We assume that a non-topological quantity, which we interpret as fitness, is known for all nodes, while the degree is known only for a subset of the nodes. We then use a fitness model, calibrated on the subset of nodes for which degrees are known, to generate ensembles of networks. We focus on topological properties relevant for processes of contagion and distress propagation in networks, i.e. network density and k-core structure. We study how well these properties can be estimated as a function of the size of the subset of nodes utilized for the calibration. We perform a first test on ensembles of synthetic networks generated with the Exponential Random Graph model. We then perform a second test on empirical networks taken from economic and financial contexts (World Trade Web and e-mid interbank network). In both cases, we find that a subset as small as 10% of nodes can be enough to estimate the properties of the network with an error of 5%.
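    The calibration step common to both records can be sketched as follows: with fitnesses f_i known for all nodes and degrees known only on a subset, a single parameter z of the connection probability p_ij = z*f_i*f_j / (1 + z*f_i*f_j) is tuned so that the expected degrees on the subset match the observed ones (the function name and log-space bisection scheme are ours):

```python
def calibrate_z(fitness, known, z_lo=1e-12, z_hi=1e12, iters=200):
    """Calibrate z so that the expected degrees of the nodes in `known`
    (a dict node -> observed degree) match their observed total.

    Expected total degree on the subset is monotone in z, so a
    bisection in log space converges to the unique root."""
    nodes = range(len(fitness))
    target = sum(known.values())

    def expected_total(z):
        tot = 0.0
        for i in known:
            for j in nodes:
                if j != i:
                    x = z * fitness[i] * fitness[j]
                    tot += x / (1.0 + x)
        return tot

    for _ in range(iters):
        z_mid = (z_lo * z_hi) ** 0.5   # geometric midpoint
        if expected_total(z_mid) < target:
            z_lo = z_mid
        else:
            z_hi = z_mid
    return (z_lo * z_hi) ** 0.5
```

    With z in hand, the expected density is the average of p_ij over all pairs, and network ensembles can be sampled by independent Bernoulli draws on each pair.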

  15. A simulation study of person-fit in the Rasch model

    Directory of Open Access Journals (Sweden)

    Richard Artner

    2016-09-01

    Full Text Available The validation of individual test scores in the Rasch model (1-PL model) is of primary importance, but the question of which person-fit index to choose is still not sufficiently answered. In this work, a simulation study was conducted in order to compare five well-known person-fit indices in terms of specificity and sensitivity, under different testing conditions. Furthermore, this study analyzed the decrease in specificity of Andersen's Likelihood-Ratio test in the case of person-misfit, using the median of the raw score as an internal criterion, as well as the positive effect of removing suspicious respondents with the index C*. The three non-parametric indices Ht, C* and U3 performed slightly better than the parametric indices OUTFIT and INFIT. All indices performed better with a higher number of respondents and a higher number of items. Ht, OUTFIT, and INFIT showed huge deviations between nominal and actual specificity levels. The simulation revealed that person-misfit has a huge negative impact on the specificity of Andersen's Likelihood-Ratio test. However, the removal of suspicious respondents with C* worked quite well, and the nominal specificity can be almost attained if the specificity level of C* is set to 0.95.
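    The parametric indices compared above, INFIT and OUTFIT, are mean-square residual statistics under the Rasch model; a minimal sketch for a single respondent (standard textbook formulas, not the study's simulation code):

```python
import math

def rasch_p(theta, b):
    """Rasch (1-PL) probability of a correct response for ability
    theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def infit_outfit(responses, theta, difficulties):
    """Person-fit statistics for one respondent: OUTFIT is the mean
    squared standardized residual; INFIT is its information-weighted
    counterpart. Both have expectation 1 under the model."""
    ps = [rasch_p(theta, b) for b in difficulties]
    ws = [p * (1.0 - p) for p in ps]
    z2 = [(x - p) ** 2 / w for x, p, w in zip(responses, ps, ws)]
    outfit = sum(z2) / len(z2)
    infit = sum((x - p) ** 2 for x, p in zip(responses, ps)) / sum(ws)
    return infit, outfit
```

    OUTFIT is dominated by surprising responses on items far from the respondent's ability (e.g. failing a very easy item), which is why it is the more outlier-sensitive of the two.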

  16. A PID Positioning Controller with a Curve Fitting Model Based on RFID Technology

    Directory of Open Access Journals (Sweden)

    Young-Long Chen

    2013-04-01

    Full Text Available The global positioning system (GPS) is an important research topic for solving outdoor positioning problems, but GPS is unable to locate objects accurately and precisely indoors. Some available systems apply ultrasound or optical tracking. This paper presents an efficient proportional-integral-derivative (PID) controller with a curve fitting model for mobile robot localization and position estimation which adopts passive radio frequency identification (RFID) tags in a space. The scheme is based on a mobile robot that carries an RFID reader module, which reads low-cost passive tags installed under the floor in a grid-like pattern. The PID controller increases the efficiency of RFID tag capture, and the curve fitting model is used to systematically identify the revolutions per minute (RPM) of the motor. We control and monitor the position of the robot from a remote location through a mobile phone via Wi-Fi and Bluetooth network. Experimental results show that our proposed scheme captures more RFID tags than the previous scheme.
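    A textbook discrete PID loop of the kind referenced above looks like the following (a generic sketch; the paper's gains, motor model, and curve fitting step are not given here):

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        """Return the control output for the current measurement."""
        error = setpoint - measured
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

    Driving a simple integrator plant toward a hypothetical 100 RPM setpoint with modest gains settles the output close to the target; the integral term removes the steady-state error that a pure proportional controller would leave.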

  17. Lévy Flights and Self-Similar Exploratory Behaviour of Termite Workers: Beyond Model Fitting

    Science.gov (United States)

    Miramontes, Octavio; DeSouza, Og; Paiva, Leticia Ribeiro; Marins, Alessandra; Orozco, Sirio

    2014-01-01

    Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show by analyzing over half a million movement displacements that isolated termite workers actually exhibit a range of very interesting dynamical properties, including Lévy flights, in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale. PMID:25353958

  18. Levy flights and self-similar exploratory behaviour of termite workers: beyond model fitting.

    Directory of Open Access Journals (Sweden)

    Octavio Miramontes

    Full Text Available Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show by analyzing over half a million movement displacements that isolated termite workers actually exhibit a range of very interesting dynamical properties--including Lévy flights--in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale.
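    Power-law step lengths of the kind tested in such movement studies can be generated by inverse-transform sampling (a generic sketch for simulation purposes, not the authors' analysis code):

```python
import random

def levy_steps(n, mu=2.0, l_min=1.0, rng=None):
    """Draw n step lengths from a power law P(l) ~ l**(-mu), l >= l_min,
    by inverting the CDF F(l) = 1 - (l/l_min)**(1 - mu).
    Values 1 < mu <= 3 correspond to the Levy-flight regime."""
    rng = rng or random.Random()
    return [l_min * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0))
            for _ in range(n)]
```

    For mu = 2 the theoretical median step is 2*l_min, a quick check that the sampler reproduces the intended distribution before it is combined with uniformly random turning angles into a trajectory.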

  19. Fitting multilevel models in complex survey data with design weights: Recommendations

    Directory of Open Access Journals (Sweden)

    Carle Adam C

    2009-07-01

    Full Text Available Abstract Background Multilevel models (MLM) offer complex survey data analysts a unique approach to understanding individual and contextual determinants of public health. However, little summarized guidance exists with regard to fitting MLM to complex survey data with design weights. Simulation work suggests that analysts should scale design weights using two methods and fit the MLM using unweighted and scaled-weighted data. This article examines the performance of scaled-weighted and unweighted analyses across a variety of MLM and software programs. Methods Using data from the 2005–2006 National Survey of Children with Special Health Care Needs (NS-CSHCN; n = 40,723), which collected data from children clustered within states, I examine the performance of scaling methods across outcome type (categorical vs. continuous), model type (level-1, level-2, or combined), and software (Mplus, MLwiN, and GLLAMM). Results Scaled-weighted estimates and standard errors differed slightly from unweighted analyses, agreeing more with each other than with unweighted analyses. However, observed differences were minimal and did not lead to different inferential conclusions. Likewise, results demonstrated minimal differences across software programs, increasing confidence in results and inferential conclusions independent of software choice. Conclusion If design weights are included in MLM, analysts should scale the weights and use software that properly includes the scaled weights in the estimation.
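The two scaling methods referenced in this literature (often attributed to Pfeffermann and colleagues) rescale the level-1 design weights within each cluster. A sketch, with illustrative function and variable names rather than anything from the article's software:

```python
import numpy as np

def scale_weights(weights, clusters, method="size"):
    """Scale level-1 design weights within each cluster.

    'size':      scaled weights sum to the cluster sample size n_j.
    'effective': scaled weights sum to the effective cluster size,
                 (sum w)**2 / sum(w**2).
    """
    w = np.asarray(weights, dtype=float)
    cl = np.asarray(clusters)
    out = np.empty_like(w)
    for c in np.unique(cl):
        idx = cl == c
        if method == "size":
            factor = idx.sum() / w[idx].sum()
        else:  # 'effective'
            factor = w[idx].sum() / (w[idx] ** 2).sum()
        out[idx] = w[idx] * factor
    return out
```

Fitting the MLM with both scalings, plus an unweighted run, mirrors the sensitivity check the article recommends.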

  20. Spectral observations of Ellerman bombs and fitting with a two-cloud model

    CERN Document Server

    Hong, Jie; Li, Ying; Fang, Cheng; Cao, Wenda

    2014-01-01

    We study the Hα and Ca II 8542 Å line spectra of four typical Ellerman bombs (EBs) in active region NOAA 11765 on 2013 June 6, observed with the Fast Imaging Solar Spectrograph installed at the 1.6-meter New Solar Telescope at Big Bear Solar Observatory. Considering that EBs may occur in a restricted region in the lower atmosphere, and that their spectral lines show particular features, we propose a two-cloud model to fit the observed line profiles. The lower cloud can account for the wing emission, and the upper cloud is mainly responsible for the absorption at line center. After carefully choosing the free parameters, we get satisfactory fitting results. As expected, the lower cloud shows an increase of the source function, corresponding to a temperature increase of 400--1000 K in EBs relative to the quiet Sun. This is consistent with previous results deduced from semi-empirical models and confirms that a local heating occurs in the lower atmosphere during the appearance of EBs. We also find that...
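The fit rests on the standard cloud-model expression I(Δλ) = I₀(Δλ)e^(−τ(Δλ)) + S(1 − e^(−τ(Δλ))); stacking two such clouds gives a profile with wing emission from the lower cloud and central absorption from the upper one. A schematic sketch, in which the Gaussian opacity profile, the constant source function, and all parameter values are illustrative assumptions rather than the authors' actual parameterization:

```python
import numpy as np

def cloud(i_in, dl, s, tau0, v, w):
    """One cloud: constant source function s, Gaussian opacity of
    amplitude tau0, Doppler shift v and width w (all illustrative)."""
    tau = tau0 * np.exp(-((dl - v) / w) ** 2)
    return i_in * np.exp(-tau) + s * (1.0 - np.exp(-tau))

def two_cloud(dl, i_bg, lower, upper):
    """Background profile i_bg passes through the lower cloud (wing
    emission) and then the upper cloud (absorption at line center)."""
    return cloud(cloud(i_bg, dl, *lower), dl, *upper)

dl = np.linspace(-2.0, 2.0, 201)              # wavelength offset, arbitrary units
i_bg = 1.0 - 0.8 * np.exp(-(dl / 0.5) ** 2)   # toy quiet-Sun absorption profile
prof = two_cloud(dl, i_bg,
                 lower=(1.2, 0.6, 0.0, 1.0),  # bright, broad lower cloud
                 upper=(0.2, 1.5, 0.0, 0.3))  # dark, narrow upper cloud
```

The eight cloud parameters could then be fitted to an observed profile with, for example, `scipy.optimize.least_squares`.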

  1. A PID Positioning Controller with a Curve Fitting Model Based on RFID Technology

    Directory of Open Access Journals (Sweden)

    Young-Long Chen

    2013-03-01

    Full Text Available The global positioning system (GPS) is an important research topic for solving outdoor positioning problems, but GPS is unable to locate objects accurately and precisely indoors. Some available systems apply ultrasound or optical tracking. This paper presents an efficient proportional-integral-derivative (PID) controller with a curve fitting model for mobile robot localization and position estimation, which adopts passive radio frequency identification (RFID) tags in a space. The scheme is based on a mobile robot carrying an RFID reader module that reads low-cost passive tags installed under the floor in a grid-like pattern. The PID controllers increase the efficiency of captured RFID tags, and the curve fitting model is used to systematically identify the revolutions per minute (RPM) of the motor. We control and monitor the position of the robot from a remote location through a mobile phone via Wi-Fi and Bluetooth networks. Experimental results show that the number of captured RFID tags of our proposed scheme outperforms that of the previous scheme.
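The PID control loop at the heart of such a scheme is standard. A minimal discrete sketch driving a toy motor model toward a target RPM; the gains, time step, and plant model are illustrative, not values from the paper:

```python
class PID:
    """Minimal discrete PID controller (illustrative gains and time step)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt                  # I term accumulates
        derivative = (error - self.prev_error) / self.dt  # D term damps
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Drive a toy motor (modeled as a pure integrator) toward a target RPM.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
rpm, target = 0.0, 120.0
for _ in range(300):
    rpm += pid.update(target, rpm) * pid.dt
```

In the paper's setting the measured variable would come from the RFID-derived position estimate rather than a simulated plant.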

  2. Energy-dependent fitness: a quantitative model for the evolution of yeast transcription factor binding sites.

    Science.gov (United States)

    Mustonen, Ville; Kinney, Justin; Callan, Curtis G; Lässig, Michael

    2008-08-26

    We present a genomewide cross-species analysis of regulation for broad-acting transcription factors in yeast. Our model for binding site evolution is founded on biophysics: the binding energy between transcription factor and site is a quantitative phenotype of regulatory function, and selection is given by a fitness landscape that depends on this phenotype. The model quantifies conservation, as well as loss and gain, of functional binding sites in a coherent way. Its predictions are supported by direct cross-species comparison between four yeast species. We find ubiquitous compensatory mutations within functional sites, such that the energy phenotype and the function of a site evolve in a significantly more constrained way than does its sequence. We also find evidence for substantial evolution of regulatory function involving point mutations as well as sequence insertions and deletions within binding sites. Genes lose their regulatory link to a given transcription factor at a rate similar to the neutral point mutation rate, from which we infer a moderate average fitness advantage of functional over nonfunctional sites. In a wider context, this study provides an example of inference of selection acting on a quantitative molecular trait.

  3. A healthy fear of the unknown: perspectives on the interpretation of parameter fits from computational models in neuroscience.

    Directory of Open Access Journals (Sweden)

    Matthew R Nassar

    2013-04-01

    Full Text Available Fitting models to behavior is commonly used to infer the latent computational factors responsible for generating behavior. However, the complexity of many behaviors can handicap the interpretation of such models. Here we provide perspectives on problems that can arise when interpreting parameter fits from models that provide incomplete descriptions of behavior. We illustrate these problems by fitting commonly used and neurophysiologically motivated reinforcement-learning models to simulated behavioral data sets from learning tasks. These model fits can pass a host of standard goodness-of-fit tests and other model-selection diagnostics even when the models do not provide a complete description of the behavioral data. We show that such incomplete models can be misleading by yielding biased estimates of the parameters explicitly included in the models. This problem is particularly pernicious when the neglected factors are unknown and therefore not easily identified by model comparisons and similar methods. An obvious conclusion is that a parsimonious description of behavioral data does not necessarily imply an accurate description of the underlying computations. Moreover, general goodness-of-fit measures are not a strong basis to support claims that a particular model can provide a generalized understanding of the computations that govern behavior. To help overcome these challenges, we advocate the design of tasks that provide direct reports of the computational variables of interest. Such direct reports complement model-fitting approaches by providing a more complete, albeit possibly more task-specific, representation of the factors that drive behavior. Computational models then provide a means to connect such task-specific results to a more general algorithmic understanding of the brain.
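The kind of model-fitting exercise probed here can be reproduced in a few lines. This sketch simulates a two-armed-bandit learner with a delta-rule (Rescorla-Wagner) update and recovers its learning rate by maximum likelihood; the inverse temperature is fixed only to keep the sketch one-dimensional, and, per the article's warning, a good fit of this kind does not show that the model captures everything generating the data:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_lik(alpha, choices, rewards, beta=3.0):
    """Negative log-likelihood of a delta-rule learner with softmax choice."""
    q = np.zeros(2)
    nll = 0.0
    for c, r in zip(choices, rewards):
        p = np.exp(beta * q)
        p /= p.sum()
        nll -= np.log(p[c])
        q[c] += alpha * (r - q[c])  # delta-rule value update
    return nll

# Simulate a learner, then recover its learning rate by maximum likelihood.
rng = np.random.default_rng(0)
true_alpha, beta, q = 0.3, 3.0, np.zeros(2)
choices, rewards = [], []
for _ in range(1000):
    p = np.exp(beta * q)
    p /= p.sum()
    c = rng.choice(2, p=p)
    r = float(rng.random() < (0.8 if c == 0 else 0.2))  # bandit payoffs
    q[c] += true_alpha * (r - q[c])
    choices.append(c)
    rewards.append(r)

fit = minimize_scalar(neg_log_lik, bounds=(0.01, 1.0), method="bounded",
                      args=(choices, rewards))
```

If the simulated learner included a factor the fitted model omits (e.g. choice perseveration), this same procedure would still converge, but to a biased alpha, which is exactly the failure mode the article describes.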

  4. Observations from using models to fit the gas production of varying volume test cells and landfills.

    Science.gov (United States)

    Lamborn, Julia

    2012-12-01

    Landfill operators are looking for more accurate models to predict waste degradation and landfill gas production. The simple microbial growth and decay models, whilst being easy to use, have been shown to be inaccurate. Many of the newer and more complex (component) models are highly parameter-hungry, and many of the required parameters have not been collected or measured at full-scale landfills. This paper compares the results of using different models (LANDGEM, HBM, and two Monod models developed by the author) to fit the gas production of laboratory-scale, field test cell and full-scale landfills, and discusses some observations that can be made regarding the scalability of gas generation rates. The comparison of these results shows that the fast degradation rate that occurs at laboratory scale is not replicated at field test cell and full-scale landfills. At small scale, all the models predict a slower rate of gas generation than actually occurs. At field test cell and full scale, a number of models predict faster gas generation than actually occurs. Areas for future work have been identified, which include investigations into the capture efficiency of gas extraction systems and into the parameter sensitivity and identification of the critical parameters for field test cell and full-scale landfill prediction.
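Models of the LANDGEM family are first-order decay models: each annual waste deposit contributes gas at a rate that decays exponentially from the year it is placed. A sketch of that equation, with illustrative default parameter values rather than the paper's calibrated ones:

```python
import math

def first_order_gas(t, deposits, k=0.05, L0=100.0):
    """First-order-decay (LandGEM-style) gas generation.

    deposits: list of (year_deposited, mass_Mg) tuples.
    k:  decay rate constant (1/yr); L0: methane generation potential
    (m^3/Mg).  Values here are illustrative defaults.
    Q(t) = sum_i k * L0 * M_i * exp(-k * (t - t_i))
    """
    return sum(k * L0 * m * math.exp(-k * (t - ty))
               for ty, m in deposits if t >= ty)

# 1000 Mg/yr deposited for 10 years; generation peaks around closure.
deposits = [(y, 1000.0) for y in range(10)]
peak = first_order_gas(10, deposits)
```

The scale effect discussed in the paper shows up here as a single k value failing to describe both laboratory cells (fast decay) and full-scale landfills (slow decay).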

  5. The challenges of fitting an item response theory model to the Social Anhedonia Scale.

    Science.gov (United States)

    Reise, Steven P; Horan, William P; Blanchard, Jack J

    2011-05-01

    This study explored the application of latent variable measurement models to the Social Anhedonia Scale (SAS; Eckblad, Chapman, Chapman, & Mishlove, 1982), a widely used and influential measure in schizophrenia-related research. Specifically, we applied unidimensional and bifactor item response theory (IRT) models to data from a community sample of young adults (n = 2,227). Ordinal factor analyses revealed that identifying a coherent latent structure in the 40-item SAS data was challenging due to (a) the presence of multiple small content clusters (e.g., doublets); (b) modest relations between those clusters, which, in turn, implies a general factor of only modest strength; (c) items that shared little variance with the majority of items; and (d) cross-loadings in bifactor solutions. Consequently, we conclude that SAS responses cannot be modeled accurately by either unidimensional or bifactor IRT models. Although the application of a bifactor model to a reduced 17-item set met with better success, significant psychometric and substantive problems remained. Results highlight the challenges of applying latent variable models to scales that were not originally designed to fit these models.

  6. Comprehensive two-dimensional river ice model based on boundary-fitted coordinate transformation method

    Directory of Open Access Journals (Sweden)

    Ze-yu MAO

    2014-01-01

    Full Text Available River ice is a natural phenomenon in cold regions, influenced by meteorology, geomorphology, and hydraulic conditions. River ice processes involve complex interactions between hydrodynamic, mechanical, and thermal processes, and they are also influenced by weather and hydrologic conditions. Because natural rivers are serpentine, with bends, narrows, and straight reaches, the commonly-used one-dimensional river ice models and two-dimensional models based on the rectangular Cartesian coordinates are incapable of simulating the physical phenomena accurately. In order to accurately simulate the complicated river geometry and overcome the difficulties of numerical simulation resulting from both complex boundaries and differences between length and width scales, a two-dimensional river ice numerical model based on a boundary-fitted coordinate transformation method was developed. The presented model considers the influence of the frazil ice accumulation under ice cover and the shape of the leading edge of ice cover during the freezing process. The model is capable of determining the velocity field, the distribution of water temperature, the concentration distribution of frazil ice, the transport of floating ice, the progression, stability, and thawing of ice cover, and the transport, accumulation, and erosion of ice under ice cover. A MacCormack scheme was used to solve the equations numerically. The model was validated with field observations from the Hequ Reach of the Yellow River. Comparison of simulation results with field data indicates that the model is capable of simulating the river ice process with high accuracy.

  7. Tanning Shade Gradations of Models in Mainstream Fitness and Muscle Enthusiast Magazines: Implications for Skin Cancer Prevention in Men

    National Research Council Canada - National Science Library

    Basch, Corey H; Hillyer, Grace Clarke; Ethan, Danna; Berdnik, Alyssa; Basch, Charles E

    2015-01-01

    .... This study evaluated and compared tanning shade gradations of adult Caucasian male and female model images in mainstream fitness and muscle enthusiast magazines. Sixty-nine U.S. magazine issues...

  8. The use of the Levenberg-Marquardt curve-fitting algorithm in pharmacokinetic modelling of DCE-MRI data.

    Science.gov (United States)

    Ahearn, T S; Staff, R T; Redpath, T W; Semple, S I K

    2005-05-07

    The use of curve-fitting and compartmental modelling for calculating physiological parameters from measured data has increased in popularity in recent years. Finding the 'best fit' of a model to data involves the minimization of a merit function. An example of a merit function is the sum of the squares of the differences between the data points and the model estimated points. This is facilitated by curve-fitting algorithms. Two curve-fitting methods, Levenberg-Marquardt and MINPACK-1, are investigated with respect to the search start points that they require and the accuracy of the returned fits. We have simulated one million dynamic contrast enhanced MRI curves using a range of parameters and investigated the use of single and multiple search starting points. We found that both algorithms, when used with a single starting point, return unreliable fits. When multiple start points are used, we found that both algorithms returned reliable parameters. However the MINPACK-1 method generally outperformed the Levenberg-Marquardt method. We conclude that the use of a single starting point when fitting compartmental modelling data such as this produces unsafe results and we recommend the use of multiple start points in order to find the global minima.
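The multiple-start-point strategy the authors recommend is straightforward to implement with any Levenberg-Marquardt routine. A sketch using SciPy; the model here is a simple gamma-variate-like uptake curve standing in for a pharmacokinetic model, and all names and parameter ranges are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

def model(p, t):
    # Illustrative uptake curve; p = (amplitude, rate).
    a, b = p
    return a * t * np.exp(-b * t)

def fit_multistart(t, y, n_starts=20, seed=0):
    """Run Levenberg-Marquardt from several random starting points and
    keep the solution with the smallest residual norm."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        p0 = rng.uniform([0.1, 0.01], [10.0, 5.0])  # random start point
        res = least_squares(lambda p: model(p, t) - y, p0, method="lm")
        if best is None or res.cost < best.cost:
            best = res
    return best.x

t = np.linspace(0.1, 10.0, 50)
y = model((3.0, 0.8), t) + 0.01 * np.random.default_rng(1).normal(size=t.size)
params = fit_multistart(t, y)
```

A single unlucky start point can leave the optimizer in a local minimum; keeping the lowest-cost fit over many starts is the safeguard the article argues for.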

  9. Evaluation of the use of surrogate Laminaria digitata in eco-hydraulic laboratory experiments

    Institute of Scientific and Technical Information of China (English)

    PAUL Maike; HENRY Pierre-Yves T

    2014-01-01

    Inert surrogates can avoid husbandry and adaptation problems of live vegetation in laboratories. Surrogates are generally used for experiments on vegetation-hydrodynamics interactions, but it is unclear how well they replicate field conditions. Here, surrogates for the brown macroalgae Laminaria digitata were developed to reproduce its hydraulic roughness. Plant shape, stiffness and buoyancy of L. digitata were evaluated and compared to the properties of inert materials. Different surrogate materials and shapes were exposed to unidirectional flow. It is concluded that buoyancy is an important factor in low flow conditions and a basic shape might be sufficient to model complex shaped plants resulting in the same streamlined shape.

  10. Fit model between participation statement of exhibitors and visitors to improve the exhibition performance

    Directory of Open Access Journals (Sweden)

    Cristina García Magro

    2015-06-01

    Full Text Available Purpose: The aim of the paper is to offer an analysis model that measures the impact on trade-show performance of whether or not exhibitors know the visitors' motives for participation. Design/methodology: A review of the literature concerning two of the principal interested agents, exhibitors and visitors, is presented. The study focuses on the line of research concerning the motives for participating, or not, in a trade show. Based on the findings from each perspective, a comparative analysis is carried out to determine the degree of mutual understanding between the two groups. Findings: Trade shows can be studied from an integrated strategic marketing approach. The fit model between the reasons for participation of exhibitors and visitors reveals a lack of understanding between them, leading to dissatisfaction with participation, a fact that is reflected in the fair's success. The model indicates that a strategic plan should be designed in which the visitors' reason for participation is incorporated as a moderating variable of the exhibitors' reason for participation. The article concludes with a series of proposals for improving fairground results. Social implications: A fit model that improves the performance of trade shows implicitly leads to the successful achievement of targets for multiple stakeholders, beyond the consideration of visitors and exhibitors.
Originality/value: The integrated stakeholder perspective allows the study of the relationships between the principal interest groups, so that knowledge of the state of the question on trade shows facilitates the task of researchers in future academic work and allows the interested groups to obtain better performance from participation in fairs, whether as a visitor or as an exhibitor.

  11. GENERATING SOPHISTICATED SPATIAL SURROGATES USING THE MIMS SPATIAL ALLOCATOR

    Science.gov (United States)

    The Multimedia Integrated Modeling System (MIMS) Spatial Allocator is open-source software for generating spatial surrogates for emissions modeling, changing the map projection of Shapefiles, and performing other types of spatial allocation that does not require the use of a comm...

  12. Improving the Fit of a Land-Surface Model to Data Using its Adjoint

    Science.gov (United States)

    Raoult, Nina; Jupp, Tim; Cox, Peter; Luke, Catherine

    2016-04-01

    Land-surface models (LSMs) are crucial components of the Earth System Models (ESMs), which are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. In this study, JULES is automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. We present an introduction to the adJULES system and demonstrate its ability to improve the model-data fit using eddy covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 90% of the FLUXNET sites used in the study. These reductions in error are shown and compared to reductions found due to site-specific optimisations. Finally, we show that calculating the second derivative of JULES allows us to produce posterior probability density functions of the parameters, showing how knowledge of parameter values is constrained by observations.

  13. Variance analysis for model updating with a finite element based subspace fitting approach

    Science.gov (United States)

    Gautier, Guillaume; Mevel, Laurent; Mencik, Jean-Mathieu; Serra, Roger; Döhler, Michael

    2017-07-01

    Recently, a subspace fitting approach has been proposed for vibration-based finite element model updating. The approach makes use of subspace-based system identification, where the extended observability matrix is estimated from vibration measurements. Finite element model updating is performed by correlating the model-based observability matrix with the estimated one, by using a single set of experimental data. Hence, the updated finite element model only reflects this single test case. However, estimates from vibration measurements are inherently exposed to uncertainty due to unknown excitation, measurement noise and finite data length. In this paper, a covariance estimation procedure for the updated model parameters is proposed, which propagates the data-related covariance to the updated model parameters by considering a first-order sensitivity analysis. In particular, this propagation is performed through each iteration step of the updating minimization problem, by taking into account the covariance between the updated parameters and the data-related quantities. Simulated vibration signals are used to demonstrate the accuracy and practicability of the derived expressions. Furthermore, an application is shown on experimental data of a beam.

  14. Explicit finite element modelling of the impaction of metal press-fit acetabular components.

    Science.gov (United States)

    Hothi, H S; Busfield, J J C; Shelton, J C

    2011-03-01

    Metal press-fit cups and shells are widely used in hip resurfacing and total hip replacement procedures. These acetabular components are inserted into a reamed acetabular cavity by impacting either their inner polar surface (shells) or outer rim (cups). Two-dimensional explicit dynamics axisymmetric finite element models were developed to simulate these impaction methods. Greater impact velocities were needed to insert the components when the interference fit was increased; a minimum velocity of 2 m/s was required to fully seat a component with a 2 mm interference between the bone and outer diameter. Changing the component material from cobalt-chromium to titanium alloy reduced the number of impacts on the pole needed to seat it from 14 to nine. Of greatest significance, it was found that locking a rigid cap to the cup or shell rim resulted in up to nine fewer impactions being necessary to seat it than impacting directly on the polar surface or using a cap free from the rim of the component, as is the case with many commercial resurfacing cup impaction devices currently used. This is important to impactor design and could make insertion easier and also reduce acetabular bone damage.