WorldWideScience

Sample records for surrogate fitness model

  1. Surrogate waveform models

    Science.gov (United States)

    Blackman, Jonathan; Field, Scott; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel

    2015-04-01

With the advanced detector era just around the corner, there is a strong need for fast and accurate models of gravitational waveforms from compact binary coalescence. Fast surrogate models can be built from an accurate but slow waveform model with minimal to no loss in accuracy, but they may require a large number of evaluations of the underlying model. This may be prohibitively expensive if the underlying model is extremely slow, for example if we wish to build a surrogate for numerical relativity. We examine alternative approaches to building surrogate models that allow for a sparser set of input waveforms. Research supported in part by NSERC.

  2. Surrogate Modeling for Geometry Optimization

    DEFF Research Database (Denmark)

    Rojas Larrazabal, Marielba de la Caridad; Abraham, Yonas; Holzwarth, Natalie

    2009-01-01

A new approach for optimizing the nuclear geometry of an atomic system is described. Instead of the original expensive objective function (energy functional), a small number of simpler surrogates is used.

  3. Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models

    Directory of Open Access Journals (Sweden)

    Scott E. Field

    2014-07-01

Full Text Available We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform’s value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL+mc_{fit}) online operations, where c_{fit} denotes the fitting function operation count and, typically, m ≪ L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10^{5}M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in

  4. Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models

    Science.gov (United States)

    Field, Scott E.; Galley, Chad R.; Hesthaven, Jan S.; Kaye, Jason; Tiglio, Manuel

    2014-07-01

We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform's value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL+mc_fit) online operations, where c_fit denotes the fitting function operation count and, typically, m ≪ L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10^5 M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in generating new waveforms with a
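The three offline steps described in this abstract (greedy reduced basis, empirical interpolation, parametric fit) can be sketched in miniature with NumPy. This is a toy illustration only: the "waveform" family here is a damped sinusoid parameterized by a frequency q, and names such as `greedy_basis` and `eim_nodes` are our own, not the paper's code.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 500)          # L = 500 time samples
train_q = np.linspace(1.0, 3.0, 200)     # densely sampled training parameters

def waveform(q):                          # toy stand-in for the fiducial model
    return np.exp(-0.1 * t) * np.sin(q * t)

train = np.array([waveform(q) for q in train_q])   # snapshot matrix (200, 500)

# Step 1: greedy selection of parameters -> orthonormal reduced basis of size m.
def greedy_basis(snapshots, tol=1e-8):
    basis, residual = [], snapshots.copy()
    for _ in range(len(snapshots)):
        norms = np.linalg.norm(residual, axis=1)
        i = int(np.argmax(norms))
        if norms[i] < tol:
            break
        e = residual[i] / norms[i]
        basis.append(e)
        residual -= np.outer(residual @ e, e)      # deflate all snapshots
    return np.array(basis)

B = greedy_basis(train)                   # (m, L)
m = len(B)

# Step 2: empirical interpolation -- greedily pick m time indices.
def eim_nodes(B):
    nodes = [int(np.argmax(np.abs(B[0])))]
    for j in range(1, len(B)):
        V = B[:j][:, nodes].T                      # (j, j) interpolation system
        c = np.linalg.solve(V, B[j][nodes])
        r = B[j] - c @ B[:j]                       # interpolation residual
        nodes.append(int(np.argmax(np.abs(r))))
    return nodes

nodes = eim_nodes(B)
Vn = B[:, nodes]                          # (m, m) node matrix

# Step 3: fit the waveform's value at each EIM node across the parameter
# dimension (here simply 1-D interpolation over the training grid).
node_data = train[:, nodes]               # (200, m)

def surrogate(q):
    vals = np.array([np.interp(q, train_q, node_data[:, k]) for k in range(m)])
    c = np.linalg.solve(Vn.T, vals)       # coefficients so (c @ B)[nodes] = vals
    return c @ B                          # O(mL) online reconstruction

err = np.max(np.abs(surrogate(2.17) - waveform(2.17)))
```

For this analytic family the greedy loop stops at m far below the 200 training snapshots, and the online evaluation touches only the m fitted node values, matching the O(mL + m c_fit) scaling quoted in the abstract.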

  5. Reduced cost mission design using surrogate models

    Science.gov (United States)

    Feldhacker, Juliana D.; Jones, Brandon A.; Doostan, Alireza; Hampton, Jerrad

    2016-01-01

This paper uses surrogate models to reduce the computational cost associated with spacecraft mission design in three-body dynamical systems. Sampling-based least squares regression is used to project the system response onto a set of orthogonal bases, providing a representation of the ΔV required for rendezvous as a reduced-order surrogate model. Models are presented for mid-field rendezvous of spacecraft in orbits in the Earth-Moon circular restricted three-body problem, including a halo orbit about the Earth-Moon L2 libration point (EML-2) and a distant retrograde orbit (DRO) about the Moon. In each case, the initial position of the spacecraft, the time of flight, and the separation between the chaser and the target vehicles are all considered as design inputs. The results show that sample sizes on the order of 10^2 are sufficient to produce accurate surrogates, with RMS errors reaching 0.2 m/s for the halo orbit and falling below 0.01 m/s for the DRO. A single function call to the resulting surrogate is up to two orders of magnitude faster than computing the same solution using full fidelity propagators. The expansion coefficients solved for in the surrogates are then used to conduct a global sensitivity analysis of the ΔV on each of the input parameters, which identifies the separation between the spacecraft as the primary contributor to the ΔV cost. Finally, the models are demonstrated to be useful for cheap evaluation of the cost function in constrained optimization problems seeking to minimize the ΔV required for rendezvous. These surrogate models show significant advantages for mission design in three-body systems, in terms of both computational cost and capabilities, over traditional Monte Carlo methods.
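The workflow in this abstract (least-squares projection onto orthogonal bases, then a variance-based sensitivity analysis read directly off the expansion coefficients) can be sketched with a Legendre basis. Everything below is a hypothetical stand-in: `delta_v` is a cheap analytic function playing the role of the expensive ΔV evaluation, and the two inputs are merely illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import legval
from itertools import product

rng = np.random.default_rng(0)

def delta_v(X):                            # stand-in for the expensive cost
    return X[:, 1] ** 2 + 0.1 * X[:, 0]

X = rng.uniform(-1.0, 1.0, (200, 2))       # sample size on the order of 10^2
y = delta_v(X)

def leg(n, x):                             # Legendre polynomial P_n(x)
    c = np.zeros(n + 1); c[n] = 1.0
    return legval(x, c)

deg = 3
multis = [(i, j) for i, j in product(range(deg + 1), repeat=2) if i + j <= deg]

def design(X):                             # tensor Legendre design matrix
    return np.column_stack([leg(i, X[:, 0]) * leg(j, X[:, 1]) for i, j in multis])

coef, *_ = np.linalg.lstsq(design(X), y, rcond=None)   # regression projection

def surrogate(X):
    return design(X) @ coef

# Global sensitivity from the coefficients: for uniform inputs each basis term
# contributes coef^2 * ||P_i||^2 ||P_j||^2 to the output variance,
# with ||P_n||^2 = 1/(2n+1).
norm2 = np.array([1.0 / ((2 * i + 1) * (2 * j + 1)) for i, j in multis])
var_terms = coef ** 2 * norm2
total = sum(v for v, (i, j) in zip(var_terms, multis) if (i, j) != (0, 0))
S = [sum(v for v, mi in zip(var_terms, multis) if mi[d] > 0) / total
     for d in range(2)]                    # total variance share per input
```

In this toy, input 2 dominates the variance, mirroring how the paper identifies spacecraft separation as the primary contributor to ΔV.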

  6. Active Subspaces for Wind Plant Surrogate Modeling

    Energy Technology Data Exchange (ETDEWEB)

    King, Ryan N [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Quick, Julian [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dykes, Katherine L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Adcock, Christiane [Massachusetts Institute of Technology]

    2018-01-12

Understanding the uncertainty in wind plant performance is crucial to their cost-effective design and operation. However, conventional approaches to uncertainty quantification (UQ), such as Monte Carlo techniques or surrogate modeling, are often computationally intractable for utility-scale wind plants because of poor convergence rates or the curse of dimensionality. In this paper we demonstrate that wind plant power uncertainty can be well represented with a low-dimensional active subspace, thereby achieving a significant reduction in the dimension of the surrogate modeling problem. We apply the active subspaces technique to UQ of plant power output with respect to uncertainty in turbine axial induction factors, and find a single active subspace direction dominates the sensitivity in power output. When this single active subspace direction is used to construct a quadratic surrogate model, the number of model unknowns can be reduced by up to 3 orders of magnitude without compromising performance on unseen test data. We conclude that the dimension reduction achieved with active subspaces makes surrogate-based UQ approaches tractable for utility-scale wind plants.
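The active subspaces recipe in this abstract (average the outer product of output gradients, eigendecompose, keep the dominant direction, then fit a low-order surrogate in that single variable) can be sketched as follows. The "plant power" function below is a hypothetical toy constructed to have a one-dimensional active subspace; it is not NREL's model.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10                                    # e.g. induction factors of 10 turbines
w = rng.normal(size=d)
w /= np.linalg.norm(w)

def power(A):                             # toy power: varies only along w
    return (A @ w) ** 2

def grad_power(A):                        # analytic gradients of the toy model
    return 2.0 * (A @ w)[:, None] * w

A = rng.uniform(-1.0, 1.0, (500, d))      # sampled induction-factor vectors
G = grad_power(A)

# Active subspace: eigenvectors of C = E[grad f grad f^T]
C = G.T @ G / len(A)
eigvals, eigvecs = np.linalg.eigh(C)      # ascending eigenvalues
u = eigvecs[:, -1]                        # dominant active direction

# Quadratic surrogate in the single active variable s = a . u:
# d unknowns collapse to 3 polynomial coefficients.
s = A @ u
quad = np.polyfit(s, power(A), 2)
pred = np.polyval(quad, s)
```

Because the toy output depends on the inputs only through w·a, a single eigenvalue carries essentially all of the gradient variance, and the quadratic surrogate in s reproduces the power output.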

  7. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Huang, Dongli [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gleicher, Frederick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Adbel-Khalik, Hany S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

This report collects the efforts performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the use of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and to the construction of surrogate models for high-dimensionality fields.

  8. Airfoil Shape Optimization based on Surrogate Model

    Science.gov (United States)

    Mukesh, R.; Lingadurai, K.; Selvakumar, U.

    2018-02-01

Engineering design problems always require enormous amounts of real-time experiments and computational simulations in order to assess and ensure that the design objectives are met subject to various constraints. In most cases, the computational resources and time required per simulation are large. In cases such as sensitivity analysis and design optimisation, where thousands or millions of simulations have to be carried out, this creates lasting difficulties for designers. Nowadays approximation models, also known as surrogate models (SM), are widely employed in order to reduce the computational resources and time needed to analyse various engineering systems. Various approaches such as Kriging, neural networks, polynomials and Gaussian processes are used to construct the approximation models. The primary intention of this work is to employ the k-fold cross-validation approach to study and evaluate the influence of various theoretical variogram models on the accuracy of the surrogate model construction. Ordinary Kriging and design of experiments (DOE) approaches are used to construct the SMs by approximating panel and viscous solution algorithms, which are primarily used to solve the flow around airfoils and aircraft wings. The method of coupling the SMs with a suitable optimisation scheme to carry out an aerodynamic design optimisation process for airfoil shapes is also discussed.
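The k-fold cross-validation step described here, used to choose among candidate variogram/correlation models, can be illustrated with a minimal Kriging-style interpolator in NumPy. The `response` function is a hypothetical stand-in for an expensive aerodynamic quantity, and this sketch omits the trend estimation of full ordinary Kriging.

```python
import numpy as np

rng = np.random.default_rng(2)

def response(x):                          # stand-in for an expensive solver
    return np.sin(3.0 * x) + 0.5 * x

X = np.sort(rng.uniform(0.0, 3.0, 40))    # DOE over one normalized variable
y = response(X)

# Two candidate correlation ("variogram") models
def k_gauss(a, b, ell=0.5):
    return np.exp(-((a[:, None] - b[None, :]) / ell) ** 2)

def k_exp(a, b, ell=0.5):
    return np.exp(-np.abs(a[:, None] - b[None, :]) / ell)

def predict(Xtr, ytr, Xte, kern):
    K = kern(Xtr, Xtr) + 1e-8 * np.eye(len(Xtr))   # jitter for conditioning
    return kern(Xte, Xtr) @ np.linalg.solve(K, ytr)

# k-fold cross validation of each correlation model
folds = np.array_split(rng.permutation(len(X)), 5)

def cv_rmse(kern):
    sq = []
    for fold in folds:
        mask = np.ones(len(X), dtype=bool)
        mask[fold] = False                 # hold out this fold
        pred = predict(X[mask], y[mask], X[fold], kern)
        sq.append(np.mean((pred - y[fold]) ** 2))
    return float(np.sqrt(np.mean(sq)))

scores = {"gaussian": cv_rmse(k_gauss), "exponential": cv_rmse(k_exp)}
best = min(scores, key=scores.get)        # CV-selected correlation model
```

For a smooth response like this one, cross validation favors the squared-exponential model; a rougher response would shift the choice, which is exactly the model-selection question the paper studies.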

  9. On Design Mining: Coevolution and Surrogate Models.

    Science.gov (United States)

    Preen, Richard J; Bull, Larry

    2017-01-01

    Design mining is the use of computational intelligence techniques to iteratively search and model the attribute space of physical objects evaluated directly through rapid prototyping to meet given objectives. It enables the exploitation of novel materials and processes without formal models or complex simulation. In this article, we focus upon the coevolutionary nature of the design process when it is decomposed into concurrent sub-design-threads due to the overall complexity of the task. Using an abstract, tunable model of coevolution, we consider strategies to sample subthread designs for whole-system testing and how best to construct and use surrogate models within the coevolutionary scenario. Drawing on our findings, we then describe the effective design of an array of six heterogeneous vertical-axis wind turbines.

  10. Multiple Surrogate Modeling for Wire-Wrapped Fuel Assembly Optimization

    International Nuclear Information System (INIS)

    Raza, Wasim; Kim, Kwang-Yong

    2007-01-01

In this work, shape optimization of a seven-pin wire-wrapped fuel assembly has been carried out in conjunction with RANS analysis in order to evaluate the performances of surrogate models. Previously, Ahmad and Kim performed the flow and heat transfer analysis based on three-dimensional RANS analysis, but numerical optimization has not yet been applied to the design of wire-wrapped fuel assemblies. Surrogate models are widely used in multidisciplinary optimization. Queipo et al. reviewed various surrogate-based models used in aerospace applications. Goel et al. developed a weighted-average surrogate model based on response surface approximation (RSA), radial basis neural network (RBNN), and Kriging (KRG) models. In addition to the three basic models, RSA, RBNN, and KRG, the multiple surrogate model PBA has also been employed. Two geometric design variables and a multi-objective function with a weighting factor have been considered for this problem.

  11. Nonspinning numerical relativity waveform surrogates: assessing the model

    Science.gov (United States)

    Field, Scott; Blackman, Jonathan; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel

    2015-04-01

    Recently, multi-modal gravitational waveform surrogate models have been built directly from data numerically generated by the Spectral Einstein Code (SpEC). I will describe ways in which the surrogate model error can be quantified. This task, in turn, requires (i) characterizing differences between waveforms computed by SpEC with those predicted by the surrogate model and (ii) estimating errors associated with the SpEC waveforms from which the surrogate is built. Both pieces can have numerous sources of numerical and systematic errors. We make an attempt to study the most dominant error sources and, ultimately, the surrogate model's fidelity. These investigations yield information about the surrogate model's uncertainty as a function of time (or frequency) and parameter, and could be useful in parameter estimation studies which seek to incorporate model error. Finally, I will conclude by comparing the numerical relativity surrogate model to other inspiral-merger-ringdown models. A companion talk will cover the building of multi-modal surrogate models.

  12. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  13. Adaptive surrogate model based multiobjective optimization for coastal aquifer management

    Science.gov (United States)

    Song, Jian; Yang, Yun; Wu, Jianfeng; Wu, Jichun; Sun, Xiaomin; Lin, Jin

    2018-06-01

In this study, a novel surrogate model assisted multiobjective memetic algorithm (SMOMA) is developed for optimal pumping strategies of large-scale coastal groundwater problems. The proposed SMOMA integrates an efficient data-driven surrogate model with an improved non-dominated sorted genetic algorithm-II (NSGAII) that employs a local search operator to accelerate its convergence in optimization. The surrogate model, based on the Kernel Extreme Learning Machine (KELM), is developed and evaluated as an approximate simulator that generates the patterns of regional groundwater flow and salinity levels in coastal aquifers while reducing the huge computational burden. The KELM model is adaptively trained during the evolutionary search to satisfy the desired fidelity level of the surrogate, so that it inhibits error accumulation in forecasting and converges correctly to the true Pareto-optimal front. The proposed methodology is then applied to a large-scale coastal aquifer management problem in Baldwin County, Alabama. Objectives of minimizing the saltwater mass increase and maximizing the total pumping rate in the coastal aquifers are considered. The optimal solutions achieved by the proposed adaptive surrogate model are compared against those obtained from a one-shot surrogate model and from the original simulation model. The adaptive surrogate model not only improves the prediction accuracy of Pareto-optimal solutions compared with the one-shot surrogate model, but also maintains Pareto-optimal solutions of quality equivalent to those from NSGAII coupled with the original simulation model, while retaining the advantage of surrogate models in reducing the computational burden, with time savings of up to 94%. This study shows that the proposed methodology is a computationally efficient and promising tool for multiobjective optimization of coastal aquifer management.
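The kernel extreme learning machine at the heart of this abstract reduces, in its kernelized form, to ridge regression in an RBF kernel space, where only the output weights are solved for; that cheap training step is what makes the adaptive retraining during the evolutionary search affordable. Below is a minimal sketch under assumed toy inputs; `simulator` is a hypothetical stand-in for the groundwater model, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulator(X):                         # toy stand-in for the expensive model
    return np.sin(3.0 * X[:, 0]) * np.cos(2.0 * X[:, 1])

X = rng.uniform(-1.0, 1.0, (300, 2))      # e.g. normalized pumping variables
y = simulator(X)

def kernel(A, B, gamma=2.0):              # RBF kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# KELM training: solve (K + I/C) alpha = y for the output weights only
C = 1e6                                   # regularization constant
alpha = np.linalg.solve(kernel(X, X) + np.eye(len(X)) / C, y)

def kelm(Xnew):                           # fast surrogate evaluation
    return kernel(Xnew, X) @ alpha

# Adaptive retraining amounts to appending new simulator runs to (X, y)
# and re-solving the linear system above.
```

A one-shot surrogate would stop after the first solve; the adaptive scheme in the paper keeps refreshing the training set wherever the search needs higher fidelity.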

  14. Error modeling for surrogates of dynamical systems using machine learning

    Science.gov (United States)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-12-01

    A machine-learning-based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests, LASSO) to map a large set of inexpensively computed `error indicators' (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering), and subsequently constructs a `local' regression model to predict the time-instantaneous error within each identified region of feature space. We consider two uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance, and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We apply the proposed framework to model errors in reduced-order models of nonlinear oil--water subsurface flow simulations. The reduced-order models used in this work entail application of trajectory piecewise linearization with proper orthogonal decomposition. When the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
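The core idea above — regress cheaply computable error indicators against the observed surrogate-model error, then use the fitted model as a correction — can be shown in a few lines. The paper uses random forests or LASSO; plain least squares keeps this sketch short. The model pair and the features below are hypothetical toys constructed so the error varies smoothly with the parameter mu.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 2.0, 50)

def hifi(mu):                             # toy high-fidelity model
    return np.exp(-mu * t) * np.cos(5.0 * t)

def surro(mu):                            # toy surrogate with a smooth bias
    return hifi(mu) + 0.02 * mu - 0.01 * mu ** 2

def qoi(traj):                            # quantity of interest: final value
    return traj[-1]

def features(mu):                         # cheap "error indicators"
    return np.array([1.0, mu, mu ** 2])   # (no high-fidelity solves needed)

# Training set: parameter instances where both models are simulated
train_mu = rng.uniform(0.5, 2.0, 30)
Phi = np.array([features(m) for m in train_mu])
err = np.array([qoi(hifi(m)) - qoi(surro(m)) for m in train_mu])

# Regression error model mapping indicators -> surrogate-model error
coef, *_ = np.linalg.lstsq(Phi, err, rcond=None)

def corrected_qoi(mu):                    # use (1): correct the QoI prediction
    return qoi(surro(mu)) + features(mu) @ coef
```

At an unseen parameter the corrected prediction removes nearly all of the surrogate bias, which is the "use (1)" of the error model described in the abstract.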

  15. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    Science.gov (United States)

    Mitry, Mina

Often, computationally expensive engineering simulations can hinder the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high-dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as to a model of a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
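The linear variant of the approach described above (principal component analysis of the high-dimensional outputs, followed by radial basis function interpolation of the reduced coordinates over the parameters) can be sketched as follows. The `field` function is a hypothetical low-rank toy, not the thesis data.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
train_mu = np.linspace(0.2, 0.8, 25)

def field(mu):                            # toy high-dimensional output
    return (mu * np.sin(np.pi * x)
            + mu ** 2 * np.sin(2 * np.pi * x)
            + np.exp(-mu) * np.sin(3 * np.pi * x))

S = np.array([field(m) for m in train_mu])         # snapshots (25, 200)

# Step 1: linear PCA (POD) of the snapshots
mean = S.mean(axis=0)
U, sv, Vt = np.linalg.svd(S - mean, full_matrices=False)
energy = np.cumsum(sv ** 2) / np.sum(sv ** 2)
r = int(np.searchsorted(energy, 1.0 - 1e-10)) + 1  # retained modes
modes = Vt[:r]                                     # (r, 200)
coeffs = (S - mean) @ modes.T                      # reduced coordinates (25, r)

# Step 2: RBF interpolation of each reduced coordinate over the parameter
def rbf(a, b):
    return np.abs(a[:, None] - b[None, :])         # linear RBF, robust in 1-D

W = np.linalg.solve(rbf(train_mu, train_mu), coeffs)

def rosm(mu):                                      # reduced order surrogate
    k = rbf(np.atleast_1d(float(mu)), train_mu)    # (1, 25)
    return (mean + (k @ W) @ modes)[0]
```

The surrogate works entirely in the r-dimensional reduced space, which is what sidesteps the curse of dimensionality for high-dimensional outputs; the kernel-PCA variant replaces Step 1 with a nonlinear feature map.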

  16. A Parallel and Distributed Surrogate Model Implementation for Computational Steering

    KAUST Repository

    Butnaru, Daniel; Buse, Gerrit; Pfluger, Dirk

    2012-01-01

    of the input parameters. Such an exploration process is however not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute

  17. Surrogate-Based Optimization of Biogeochemical Transport Models

    Science.gov (United States)

    Prieß, Malte; Slawig, Thomas

    2010-09-01

First approaches towards a surrogate-based optimization method for a one-dimensional marine biogeochemical model of NPZD type are presented. The model, developed by Oschlies and Garcon [1], simulates the distribution of nitrogen, phytoplankton, zooplankton and detritus in a water column and is driven by ocean circulation data. A key issue is to minimize the misfit between the model output and given observational data. Our aim is to reduce the overall optimization cost, avoiding expensive function and derivative evaluations, by using a surrogate model in place of the high-fidelity model. This becomes particularly important for more complex three-dimensional models. We analyse a coarsening in the discretization of the model equations as one way to create such a surrogate. Here the numerical stability crucially depends upon the discrete stepsize in time and space and on the biochemical terms. We show that for given model parameters the level of grid coarsening can be chosen accordingly, yielding a stable and satisfactory surrogate. As one example of a surrogate-based optimization method we present results of the Aggressive Space Mapping technique (developed by John W. Bandler [2, 3]) applied to the optimization of this one-dimensional biogeochemical transport model.
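The idea of using a coarser discretization of the same equations as the surrogate in a misfit-minimization problem can be illustrated with a deliberately tiny stand-in: a single tracer decaying at rate k, integrated with explicit Euler on fine and coarse time steps. This toy is ours, not the NPZD model, and the brute-force scan stands in for a real optimizer.

```python
import numpy as np

def simulate(k, dt, T=5.0, p0=1.0):
    steps = int(round(T / dt))
    p = np.empty(steps + 1)
    p[0] = p0
    for n in range(steps):
        p[n + 1] = p[n] + dt * (-k * p[n])         # explicit Euler step
    return np.linspace(0.0, T, steps + 1), p

# Synthetic observations from the exact solution with k_true = 0.7
t_obs = np.linspace(0.0, 5.0, 11)
obs = np.exp(-0.7 * t_obs)

def misfit(k, dt):                                  # data-model misfit
    t, p = simulate(k, dt)
    return float(np.sum((np.interp(t_obs, t, p) - obs) ** 2))

# Minimize once against the fine model and once against the coarse surrogate
ks = np.linspace(0.3, 1.1, 81)
k_fine = ks[np.argmin([misfit(k, 0.005) for k in ks])]   # expensive model
k_surr = ks[np.argmin([misfit(k, 0.5) for k in ks])]     # coarse surrogate
```

The coarse surrogate's discrete decay factor is biased, so its optimum lands near but not on the fine-model optimum; that systematic misalignment between surrogate and high-fidelity optima is precisely what space-mapping techniques such as Aggressive Space Mapping are designed to correct.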

  18. Black-hole kicks from numerical-relativity surrogate models

    Science.gov (United States)

    Gerosa, Davide; Hébert, François; Stein, Leo C.

    2018-05-01

    Binary black holes radiate linear momentum in gravitational waves as they merge. Recoils imparted to the black-hole remnant can reach thousands of km /s , thus ejecting black holes from their host galaxies. We exploit recent advances in gravitational waveform modeling to quickly and reliably extract recoils imparted to generic, precessing, black-hole binaries. Our procedure uses a numerical-relativity surrogate model to obtain the gravitational waveform given a set of binary parameters; then, from this waveform we directly integrate the gravitational-wave linear momentum flux. This entirely bypasses the need for fitting formulas which are typically used to model black-hole recoils in astrophysical contexts. We provide a thorough exploration of the black-hole kick phenomenology in the parameter space, summarizing and extending previous numerical results on the topic. Our extraction procedure is made publicly available as a module for the Python programming language named surrkick. Kick evaluations take ˜0.1 s on a standard off-the-shelf machine, thus making our code ideal to be ported to large-scale astrophysical studies.

  19. Surrogate Model for Recirculation Phase LBLOCA and DET Application

    International Nuclear Information System (INIS)

    Fynan, Douglas A; Ahn, Kwang-Il; Lee, John C.

    2014-01-01

In the nuclear safety field, response surfaces were used in the first demonstration of the code scaling, applicability, and uncertainty (CSAU) methodology to quantify the uncertainty of the peak clad temperature (PCT) during a large-break loss-of-coolant accident (LBLOCA). Surrogates could have applications in other nuclear safety areas such as dynamic probabilistic safety assessment (PSA). Dynamic PSA attempts to couple the probabilistic nature of failure events, component transitions, and human reliability to deterministic calculations of time-dependent nuclear power plant (NPP) responses, usually through the use of thermal-hydraulic (TH) system codes. The overall mathematical complexity of dynamic PSA architectures, with many embedded computationally expensive TH code calculations and large input/output data streams, has limited realistic studies of NPPs. This paper presents a time-dependent surrogate model for the recirculation phase of a hot leg LBLOCA in the OPR-1000. The surrogate model is developed through the ACE algorithm, a powerful nonparametric regression technique, trained on RELAP5 simulations of the LBLOCA. Benchmarking of the surrogate is presented, along with an application to a simplified dynamic event tree (DET). A time-dependent surrogate model to predict core subcooling during the recirculation phase of a hot leg LBLOCA in the OPR-1000 has been developed. The surrogate assumed the structure of a general discrete time dynamic model and learned the nonlinear functional form by performing nonparametric regression on RELAP5 simulations with the ACE algorithm. The surrogate model input parameters represent mass and energy flux terms to the RCS that appeared as user supplied or code calculated boundary conditions in the RELAP5 model. The surrogate accurately predicted the TH behavior of the core for a variety of HPSI system performance and containment conditions when compared with RELAP5 simulations. The surrogate was applied in a DET application replacing

  20. Emulating facial biomechanics using multivariate partial least squares surrogate models.

    Science.gov (United States)

    Wu, Tim; Martens, Harald; Hunter, Peter; Mithraratne, Kumar

    2014-11-01

A detailed biomechanical model of the human face driven by a network of muscles is a useful tool for relating muscle activities to facial deformations. However, lengthy computational times often hinder its applications in practical settings. The objective of this study is to replace the precise but computationally demanding biomechanical model with a much faster multivariate meta-model (surrogate model), such that a significant speedup (to real-time interactive speed) can be achieved. Using a multilevel fractional factorial design, the parameter space of the biomechanical system was probed from a set of sample points chosen to satisfy maximal rank optimality and volume filling. The input-output relationship at these sampled points was then statistically emulated using linear and nonlinear, cross-validated, partial least squares regression models. It was demonstrated that these surrogate models can mimic facial biomechanics efficiently and reliably in real-time. Copyright © 2014 John Wiley & Sons, Ltd.
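A linear multivariate partial least squares emulator of the kind described above can be sketched with a SIMPLS-style extraction of latent components. The dimensions and data below are a hypothetical toy (6 "muscle activations" mapped to a 60-dimensional "deformation"), not the face model itself.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data: deformation depends linearly on activations, plus small noise
B_true = rng.normal(size=(6, 60))
X = rng.uniform(0.0, 1.0, (120, 6))       # muscle activations
Y = X @ B_true + 0.001 * rng.normal(size=(120, 60))

Xm, Ym = X.mean(axis=0), Y.mean(axis=0)
Xc, Yc = X - Xm, Y - Ym                   # center both blocks

def pls_fit(Xc, Yc, ncomp):
    Xr, Yr = Xc.copy(), Yc.copy()
    W, P, Q = [], [], []
    for _ in range(ncomp):
        u, s, vt = np.linalg.svd(Xr.T @ Yr, full_matrices=False)
        w = u[:, 0]                       # weight: dominant covariance direction
        tsc = Xr @ w                      # latent score
        tt = tsc @ tsc
        p = Xr.T @ tsc / tt               # X loading
        q = Yr.T @ tsc / tt               # Y loading
        Xr = Xr - np.outer(tsc, p)        # deflate both blocks
        Yr = Yr - np.outer(tsc, q)
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W), np.array(P), np.array(Q)
    # Regression coefficients B = W (P^T W)^{-1} Q^T (row-stacked convention)
    return W.T @ np.linalg.solve(P @ W.T, Q)

B = pls_fit(Xc, Yc, ncomp=6)

def emulate(Xnew):                        # real-time surrogate evaluation
    return (Xnew - Xm) @ B + Ym           # one matrix multiply per query
```

Once fitted, each query is a single matrix multiply, which is what delivers the real-time interactive speed the paper targets; cross-validation over `ncomp` (omitted here) guards against overfitting, and the nonlinear variants transform the scores before regression.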

  1. Progress in Chemical Kinetic Modeling for Surrogate Fuels

    Energy Technology Data Exchange (ETDEWEB)

    Pitz, W J; Westbrook, C K; Herbinet, O; Silke, E J

    2008-06-06

    Gasoline, diesel, and other alternative transportation fuels contain hundreds to thousands of compounds. It is currently not possible to represent all these compounds in detailed chemical kinetic models. Instead, these fuels are represented by surrogate fuel models which contain a limited number of representative compounds. We have been extending the list of compounds for detailed chemical models that are available for use in fuel surrogate models. Detailed models for components with larger and more complicated fuel molecular structures are now available. These advancements are allowing a more accurate representation of practical and alternative fuels. We have developed detailed chemical kinetic models for fuels with higher molecular weight fuel molecules such as n-hexadecane (C16). Also, we can consider more complicated fuel molecular structures like cyclic alkanes and aromatics that are found in practical fuels. For alternative fuels, the capability to model large biodiesel fuels that have ester structures is becoming available. These newly addressed cyclic and ester structures in fuels profoundly affect the reaction rate of the fuel predicted by the model. Finally, these surrogate fuel models contain large numbers of species and reactions and must be reduced for use in multi-dimensional models for spark-ignition, HCCI and diesel engines.

  2. Fitting PAC spectra with stochastic models: PolyPacFit

    Energy Technology Data Exchange (ETDEWEB)

    Zacate, M. O., E-mail: zacatem1@nku.edu [Northern Kentucky University, Department of Physics and Geology (United States); Evenson, W. E. [Utah Valley University, College of Science and Health (United States); Newhouse, R.; Collins, G. S. [Washington State University, Department of Physics and Astronomy (United States)

    2010-04-15

PolyPacFit is an advanced fitting program for time-differential perturbed angular correlation (PAC) spectroscopy. It incorporates stochastic models and provides robust options for customization of fits. Notable features of the program include platform independence and support for (1) fits to stochastic models of hyperfine interactions, (2) user-defined constraints among model parameters, (3) fits to multiple spectra simultaneously, and (4) nuclear probes of any spin.

  3. Models selection and fitting

    International Nuclear Information System (INIS)

    Martin Llorente, F.

    1990-01-01

Models of atmospheric pollutant dispersion are based on mathematical algorithms that describe the transport, diffusion, elimination, and chemical reactions of atmospheric contaminants. These models operate on contaminant emission data and produce an estimate of air quality in the area. Such models can be applied to several aspects of atmospheric contamination.

  4. A Parallel and Distributed Surrogate Model Implementation for Computational Steering

    KAUST Repository

    Butnaru, Daniel

    2012-06-01

    Understanding the influence of multiple parameters in a complex simulation setting is a difficult task. In the ideal case, the scientist can freely steer such a simulation and is immediately presented with the results for a certain configuration of the input parameters. Such an exploration process is however not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute for the time-consuming simulation. The surrogate model we propose is based on the sparse grid technique, and we identify the main computational tasks associated with its evaluation and its extension. We further show how distributed data management combined with the specific use of accelerators allows us to approximate and deliver simulation results to a high-resolution visualization system in real-time. This significantly enhances the steering workflow and facilitates the interactive exploration of large datasets. © 2012 IEEE.

  5. A Multi-Fidelity Surrogate Model for the Equation of State for Mixtures of Real Gases

    Science.gov (United States)

    Ouellet, Frederick; Park, Chanyoung; Koneru, Rahul; Balachandar, S.; Rollin, Bertrand

    2017-11-01

    The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the products of detonated explosives must be treated as real gases while the ideal gas equation of state is used for the ambient air. As the products expand outward, they mix with the air and create a region where both state equations must be satisfied. One of the most accurate, yet expensive, methods to handle this problem is an algorithm that iterates between both state equations until both pressure and thermal equilibrium are achieved inside of each computational cell. This work creates a multi-fidelity surrogate model to replace this process. This is achieved by using a Kriging model to produce a curve fit which interpolates selected data from the iterative algorithm. The surrogate is optimized for computing speed and model accuracy by varying the number of sampling points chosen to construct the model. The performance of the surrogate with respect to the iterative method is tested in simulations using a finite volume code. The model's computational speed and accuracy are analyzed to show the benefits of this novel approach. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA00023.
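
The Kriging step described above interpolates selected samples of the expensive iterative algorithm. As a rough sketch of the idea (zero-mean "simple" Kriging with a Gaussian correlation kernel, a hypothetical untuned `theta`, and a cheap 1D stand-in for the equilibrium solver):

```python
import numpy as np

def kriging_fit(X, y, theta=10.0, nugget=1e-10):
    """Fit a zero-mean (simple) Kriging interpolant with a Gaussian
    correlation kernel. `theta` is a hypothetical, untuned correlation
    parameter; production Kriging codes estimate it from the samples."""
    d = X[:, None] - X[None, :]
    R = np.exp(-theta * d ** 2) + nugget * np.eye(len(X))
    return np.linalg.solve(R, y)

def kriging_predict(Xnew, X, weights, theta=10.0):
    r = np.exp(-theta * (Xnew[:, None] - X[None, :]) ** 2)
    return r @ weights

# Stand-in for the expensive iterative equilibrium solver: a cheap 1D function
expensive_model = lambda x: np.sin(2.0 * np.pi * x)
X = np.linspace(0.0, 1.0, 8)             # sampling points for the surrogate
w = kriging_fit(X, expensive_model(X))
dense = np.linspace(0.0, 1.0, 101)
pred = kriging_predict(dense, X, w)
```

Varying the number of sampling points, as the abstract describes, trades the cost of the `kriging_fit` solve against interpolation accuracy.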

  6. Optimal design of hydraulic excavator working device based on multiple surrogate models

    Directory of Open Access Journals (Sweden)

    Qingying Qiu

    2016-05-01

    Full Text Available The optimal design of hydraulic excavator working device is often characterized by computationally expensive analysis methods such as finite element analysis. Significant difficulties also exist when using a sensitivity-based decomposition approach to such practical engineering problems because explicit mathematical formulas between the objective function and design variables are impossible to formulate. An effective alternative is known as the surrogate model. The purpose of this article is to provide a comparative study on multiple surrogate models, including the response surface methodology, Kriging, radial basis function, and support vector machine, and select the one that best fits the optimization of the working device. In this article, a new modeling strategy based on the combination of the dimension variables between hinge joints and the forces loaded on hinge joints of the working device is proposed. In addition, the extent to which the accuracy of the surrogate models depends on different design variables is presented. The bionic intelligent optimization algorithm is then used to obtain the optimal results, which demonstrate that the maximum stresses calculated by the predicted method and finite element analysis are quite similar, but the efficiency of the former is much higher than that of the latter.

  7. Rapid Optimization of External Quantum Efficiency of Thin Film Solar Cells Using Surrogate Modeling of Absorptivity.

    Science.gov (United States)

    Kaya, Mine; Hajimirza, Shima

    2018-05-25

    This paper uses surrogate modeling for very fast design of thin film solar cells with improved solar-to-electricity conversion efficiency. We demonstrate that the wavelength-specific optical absorptivity of a thin film multi-layered amorphous-silicon-based solar cell can be modeled accurately with Neural Networks and can be efficiently approximated as a function of cell geometry and wavelength. Consequently, the external quantum efficiency can be computed by averaging surrogate absorption and carrier recombination contributions over the entire irradiance spectrum in an efficient way. Using this framework, we optimize a multi-layer structure consisting of ITO front coating, metallic back-reflector and oxide layers for achieving maximum efficiency. Our required computation time for an entire model fitting and optimization is 5 to 20 times less than the best previous optimization results based on direct Finite Difference Time Domain (FDTD) simulations, therefore proving the value of surrogate modeling. The resulting optimization solution suggests at least 50% improvement in the external quantum efficiency compared to bare silicon, and 25% improvement compared to a random design.

  8. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
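
The MCMC route mentioned above can be sketched with a random-walk Metropolis sampler over a surrogate posterior. Here `g(theta) = theta**2` is a hypothetical stand-in for a spectral-projection surrogate of the model response, and the data are synthetic; none of this is the talk's actual setup:

```python
import math, random

def log_post(theta, data, sigma=0.5):
    """Unnormalized log-posterior: uniform prior on [0, 2] and a Gaussian
    likelihood around a cheap surrogate g(theta) = theta**2, a hypothetical
    stand-in for a spectral surrogate of the model response."""
    if not 0.0 <= theta <= 2.0:
        return -math.inf
    g = theta ** 2
    return -sum((d - g) ** 2 for d in data) / (2.0 * sigma ** 2)

def metropolis(data, n_steps=20000, step=0.2, seed=1):
    """Random-walk Metropolis sampler over the surrogate posterior."""
    random.seed(seed)
    theta = 1.0
    lp = log_post(theta, data)
    samples = []
    for _ in range(n_steps):
        prop = theta + random.gauss(0.0, step)
        lp_prop = log_post(prop, data)
        # Accept with probability min(1, exp(lp_prop - lp))
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

data = [1.0, 1.1, 0.9]                  # synthetic observations near g(1) = 1
samples = metropolis(data)
post_mean = sum(samples[5000:]) / len(samples[5000:])  # discard burn-in
```

Because each posterior evaluation hits only the cheap surrogate, the chain can afford many thousands of steps.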

  10. A fast surrogate model tailor-made for real time control

    DEFF Research Database (Denmark)

    Borup, Morten; Thrysøe, Cecilie; Arnbjerg-Nielsen, Karsten

    A surrogate model of a detailed hydraulic urban drainage model is created for supplying inflow forecasts to an MPC model for 31 separate locations. The original model is subdivided into 66 relationships extracted from the original model. The surrogate model is 9000 times faster than the original model, with just a minor deviation from the original model results.

  11. Surrogate reservoir models for CSI well probabilistic production forecast

    Directory of Open Access Journals (Sweden)

    Saúl Buitrago

    2017-09-01

    Full Text Available The aim of this work is to present the construction and use of Surrogate Reservoir Models capable of accurately predicting cumulative oil production for every well stimulated with cyclic steam injection at any given time in a heavy oil reservoir in Mexico considering uncertain variables. The central composite experimental design technique was selected to capture the maximum amount of information from the model response with a minimum number of reservoir model simulations. Four input uncertain variables (the dead oil viscosity with temperature, the reservoir pressure, the reservoir permeability, and the oil sand thickness hydraulically connected to the well) were selected as the ones with most impact on the initial hot oil production rate according to an analytical production prediction model. Twenty-five runs were designed and performed with the STARS simulator for each well type on the reservoir model. The results show that the use of Surrogate Reservoir Models is a fast, viable alternative for performing probabilistic production forecasting of the reservoir.
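
A central composite design in coded units is simple to generate; for the four uncertain variables above it yields exactly the 25 runs mentioned. A generic sketch (not the authors' code):

```python
import itertools

def central_composite_design(k, alpha=None):
    """Central composite design in coded units for k factors: all 2**k
    factorial corners at +/-1, 2*k axial (star) points at +/-alpha, and one
    center point. alpha defaults to the rotatable choice (2**k)**0.25."""
    if alpha is None:
        alpha = (2.0 ** k) ** 0.25
    corners = [list(p) for p in itertools.product((-1.0, 1.0), repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            point = [0.0] * k
            point[i] = s
            axial.append(point)
    center = [[0.0] * k]
    return corners + axial + center

# Four uncertain variables, as in the study: 2**4 + 2*4 + 1 = 25 runs
design = central_composite_design(4)
```

Each coded point is then mapped to the physical ranges of the four reservoir variables before running the simulator.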

  12. Surrogate modeling of joint flood risk across coastal watersheds

    Science.gov (United States)

    Bass, Benjamin; Bedient, Philip

    2018-03-01

    This study discusses the development and performance of a rapid prediction system capable of representing the joint rainfall-runoff and storm surge flood response of tropical cyclones (TCs) for probabilistic risk analysis. Due to the computational demand required for accurately representing storm surge with the high-fidelity ADvanced CIRCulation (ADCIRC) hydrodynamic model and its coupling with additional numerical models to represent rainfall-runoff, a surrogate or statistical model was trained to represent the relationship between hurricane wind- and pressure-field characteristics and their peak joint flood response typically determined from physics-based numerical models. This builds upon past studies that have only evaluated surrogate models for predicting peak surge, and provides the first system capable of probabilistically representing joint flood levels from TCs. The utility of this joint flood prediction system is then demonstrated by improving upon probabilistic TC flood risk products, which currently account for storm surge but do not take into account TC-associated rainfall-runoff. Results demonstrate the source apportionment of rainfall-runoff versus storm surge and highlight that slight increases in flood risk levels may occur due to the interaction between rainfall-runoff and storm surge as compared to the Federal Emergency Management Agency's (FEMA's) current practices.

  13. Bayesian calibration of the Community Land Model using surrogates

    Energy Technology Data Exchange (ETDEWEB)

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Swiler, Laura Painton

    2014-02-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.

  14. Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kadoura, Ahmad, E-mail: ahmad.kadoura@kaust.edu.sa; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa [Computational Transport Phenomena Laboratory, The Earth Sciences and Engineering Department, The Physical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal 23955-6900 (Saudi Arabia)]; Siripatana, Adil, E-mail: adil.siripatana@kaust.edu.sa; Hoteit, Ibrahim, E-mail: ibrahim.hoteit@kaust.edu.sa [Earth Fluid Modeling and Predicting Group, The Earth Sciences and Engineering Department, The Physical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal 23955-6900 (Saudi Arabia)]; Knio, Omar, E-mail: omar.knio@kaust.edu.sa [Uncertainty Quantification Center, The Applied Mathematics and Computational Science Department, The Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal 23955-6900 (Saudi Arabia)]

    2016-06-07

    In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous computational time reduction, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data, a set of supercritical isotherms, and part of the two-phase envelope, of several pure components were used for tuning the LJ parameters (ε, σ). Based on the conducted optimization, excellent fit was obtained for different noble gases (Ar, Kr, and Xe) and other small molecules (CH4, N2, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO2 and C2H6.

  15. Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation

    KAUST Repository

    Kadoura, Ahmad Salim

    2016-06-01

    In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous computational time reduction, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data, a set of supercritical isotherms, and part of the two-phase envelope, of several pure components were used for tuning the LJ parameters (ε, σ). Based on the conducted optimization, excellent fit was obtained for different noble gases (Ar, Kr, and Xe) and other small molecules (CH4, N2, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO2 and C2H6.
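
A regression-based polynomial chaos surrogate in one uniform input can be sketched with NumPy's Legendre helpers. The reduced LJ pair energy below is a hypothetical stand-in for the MC-simulated response; the paper's PC surrogates are multivariate, but the idea is the same:

```python
import numpy as np

def pc_surrogate(model, order=8, n_train=50):
    """Regression-based polynomial chaos sketch: a Legendre expansion in a
    single uniformly distributed input on [-1, 1]."""
    x = np.linspace(-1.0, 1.0, n_train)
    coeffs = np.polynomial.legendre.legfit(x, model(x), order)
    return lambda xs: np.polynomial.legendre.legval(xs, coeffs)

# Hypothetical "expensive" response: reduced LJ pair energy over r in [1, 1.5]
lj = lambda r: 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)
model = lambda x: lj(1.0 + 0.25 * (x + 1.0))    # map x in [-1, 1] to r in [1, 1.5]
surrogate = pc_surrogate(model)
xs = np.linspace(-1.0, 1.0, 201)
max_err = np.max(np.abs(surrogate(xs) - model(xs)))
```

Once the coefficients are fitted, evaluating the surrogate is a fast polynomial evaluation, which is what makes the large-scale (ε, σ) optimization affordable.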

  16. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    International Nuclear Information System (INIS)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-01-01

    A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well

  17. Development of a surrogate model for elemental analysis using a natural gamma ray spectroscopy tool

    International Nuclear Information System (INIS)

    Zhang, Qiong

    2015-01-01

    • A surrogate model was developed that is based on fitting a semi-empirical model to GEANT4-computed spectra at various casing and cement thicknesses within the borehole. • Future work will involve case studies and an extension of the Monte Carlo computed elemental standards to a variety of nuclear logging tool designs.

  18. Optimization of inlet plenum of A PBMR using surrogate modeling

    International Nuclear Information System (INIS)

    Lee, Sang-Moon; Kim, Kwang-Yong

    2009-01-01

    The purpose of the present work is to optimize the design of the inlet plenum of a PBMR-type gas-cooled nuclear reactor numerically, by combining three-dimensional Reynolds-averaged Navier-Stokes (RANS) analysis with a surrogate modeling technique. The shear stress transport (SST) turbulence model is used as a turbulence closure. Three geometric design variables are selected, namely, the rising-channel diameter to plenum height ratio, the aspect ratio of the plenum cross section, and the inlet port angle. The objective function is defined as a linear combination, with a weighting factor, of a term for the uniformity of the three-dimensional flow distribution and a term for the pressure drop in the inlet plenum and rising channels of the PBMR. Twenty design points are selected using the Latin-hypercube method of design of experiments, and objective function values are obtained at each design point using the RANS solver. (author)

  19. Conservative strategy-based ensemble surrogate model for optimal groundwater remediation design at DNAPLs-contaminated sites

    Science.gov (United States)

    Ouyang, Qi; Lu, Wenxi; Lin, Jin; Deng, Wenbing; Cheng, Weiguo

    2017-08-01

    The surrogate-based simulation-optimization techniques are frequently used for optimal groundwater remediation design. When this technique is used, surrogate errors caused by surrogate-modeling uncertainty may lead to generation of infeasible designs. In this paper, a conservative strategy that pushes the optimal design into the feasible region was used to address surrogate-modeling uncertainty. In addition, chance-constrained programming (CCP) was adopted to compare with the conservative strategy in addressing this uncertainty. Three methods, multi-gene genetic programming (MGGP), Kriging (KRG) and support vector regression (SVR), were used to construct surrogate models for a time-consuming multi-phase flow model. To improve the performance of the surrogate model, ensemble surrogates were constructed based on combinations of different stand-alone surrogate models. The results show that: (1) the surrogate-modeling uncertainty was successfully addressed by the conservative strategy, which means that this method is promising for addressing surrogate-modeling uncertainty. (2) The ensemble surrogate model that combines MGGP with KRG showed the most favorable performance, which indicates that this ensemble surrogate can utilize both stand-alone surrogate models to improve the performance of the surrogate model.
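
A common way to combine stand-alone surrogates into an ensemble is a weighted average with weights inversely proportional to each surrogate's estimated (e.g. cross-validation) error. This is a generic sketch of that idea, not the paper's specific MGGP+KRG combination:

```python
import numpy as np

def ensemble_predict(preds, errors):
    """Weighted-average ensemble of stand-alone surrogate predictions,
    with weights inversely proportional to each surrogate's estimated
    (e.g. cross-validation) error."""
    w = 1.0 / np.asarray(errors, dtype=float)
    w /= w.sum()
    return np.asarray(preds).T @ w

# Two hypothetical stand-alone surrogates of f(x) = sin(x) on a test point set
x = np.linspace(0.0, np.pi, 9)
truth = np.sin(x)
pred_a = truth + 0.05        # surrogate A: small constant bias
pred_b = truth - 0.15        # surrogate B: larger constant bias
# Error estimates (e.g. from cross-validation) drive the weights
combined = ensemble_predict([pred_a, pred_b], errors=[0.05, 0.15])
```

In this toy case the 0.75/0.25 weighting cancels the two biases exactly; in practice the ensemble simply downweights, rather than cancels, the weaker surrogate.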

  20. Measured, modeled, and causal conceptions of fitness

    Science.gov (United States)

    Abrams, Marshall

    2012-01-01

    This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804

  1. Adaptive Test Selection for Factorization-based Surrogate Fitness in Genetic Programming

    Directory of Open Access Journals (Sweden)

    Krawiec Krzysztof

    2017-12-01

    Full Text Available Genetic programming (GP) is a variant of evolutionary algorithm where the entities undergoing simulated evolution are computer programs. A fitness function in GP is usually based on a set of tests, each of which defines the desired output a correct program should return for an exemplary input. The outcomes of interactions between programs and tests in GP can be represented as an interaction matrix, with rows corresponding to programs in the current population and columns corresponding to tests. In previous work, we proposed SFIMX, a method that performs only a fraction of interactions and employs non-negative matrix factorization to estimate the outcomes of remaining ones, shortening GP’s runtime. In this paper, we build upon that work and propose three extensions of SFIMX, in which the subset of tests drawn to perform interactions is selected with respect to test difficulty. The conducted experiment indicates that the proposed extensions surpass the original SFIMX on a suite of discrete GP benchmarks.
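
The factorization step behind SFIMX can be sketched with classic Lee-Seung multiplicative updates on a low-rank non-negative "interaction matrix". This is a generic NMF sketch, not the authors' code; SFIMX additionally restricts the fit to the subset of interactions actually evaluated:

```python
import numpy as np

def nmf(M, rank, n_iter=2000, seed=0):
    """Non-negative matrix factorization by Lee-Seung multiplicative
    updates, minimizing the Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    eps = 1e-9                      # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ M) / (W.T @ W @ H + eps)
        W *= (M @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic low-rank "interaction matrix": rows = programs, columns = tests
rng = np.random.default_rng(42)
M = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf(M, rank=2)
rel_err = np.linalg.norm(W @ H - M) / np.linalg.norm(M)
```

Entries of `W @ H` at positions that were never evaluated serve as the surrogate fitness estimates.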

  2. Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites

    Science.gov (United States)

    Hou, Zeyu; Lu, Wenxi

    2018-05-01

    Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to enrich the content of the surrogate model. The surrogate model was itself key in replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique to solve GCSI problems, especially in GCSI problems of aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, there is analysis of the influence of parameter optimization and the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses in given operation conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process and also maintained high computation accuracy.
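
An extreme learning machine reduces surrogate training to a single least-squares solve after a random hidden layer. The sketch below is a simplified (non-kernel) stand-in for the KELM surrogate used in the study, trained on a hypothetical smooth two-input response rather than the DNAPL simulator:

```python
import numpy as np

def elm_fit(X, y, n_hidden=100, seed=0):
    """Extreme learning machine sketch: a fixed random hidden layer
    followed by a linear least-squares readout."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    beta, *_ = np.linalg.lstsq(np.tanh(X @ W + b), y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Hypothetical smooth two-input simulator response
rng = np.random.default_rng(1)
X_train = rng.uniform(-1.0, 1.0, size=(200, 2))
y_train = X_train[:, 0] + X_train[:, 1] ** 2
W, b, beta = elm_fit(X_train, y_train)
X_test = rng.uniform(-1.0, 1.0, size=(100, 2))
y_test = X_test[:, 0] + X_test[:, 1] ** 2
mae = np.mean(np.abs(elm_predict(X_test, W, b, beta) - y_test))
```

Because only `beta` is trained, refitting on a new training sample set is nearly free, which matters when the training-set structure itself is being varied as in the study.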

  3. Surrogate Models for Online Monitoring and Process Troubleshooting of NBR Emulsion Copolymerization

    Directory of Open Access Journals (Sweden)

    Chandra Mouli R. Madhuranthakam

    2016-03-01

    Full Text Available Chemical processes with complex reaction mechanisms generally lead to dynamic models which, while beneficial for predicting and capturing the detailed process behavior, are not readily amenable for direct use in online applications related to process operation, optimisation, control, and troubleshooting. Surrogate models can help overcome this problem. In this research article, the first part focuses on obtaining surrogate models for emulsion copolymerization of nitrile butadiene rubber (NBR, which is usually produced in a train of continuous stirred tank reactors. The predictions and/or profiles for several performance characteristics such as conversion, number of polymer particles, copolymer composition, and weight-average molecular weight, obtained using surrogate models are compared with those obtained using the detailed mechanistic model. In the second part of this article, optimal flow profiles based on dynamic optimisation using the surrogate models are obtained for the production of NBR emulsions with the objective of minimising the off-specification product generated during grade transitions.

  4. Experimental Validation of Surrogate Models for Predicting the Draping of Physical Interpolating Surfaces

    DEFF Research Database (Denmark)

    Christensen, Esben Toke; Lund, Erik; Lindgaard, Esben

    2018-01-01

    This paper concerns the experimental validation of two surrogate models through a benchmark study involving two different variable shape mould prototype systems. The surrogate models in question are different methods based on kriging and proper orthogonal decomposition (POD), which were developed ... It is concluded that for a variable shape mould prototype ... hypercube approach. This sampling method allows for generating a space-filling and high-quality sample plan that respects mechanical constraints of the variable shape mould systems. Through the benchmark study, it is found that mechanical freeplay in the modeled system is severely detrimental ... to the performance of the studied surrogate models. By comparing surrogate model performance for the two variable shape mould systems, and through a numerical study involving simple finite element models, the underlying cause of this effect is explained.

  5. Transport of Pathogen Surrogates in Soil Treatment Units: Numerical Modeling

    Directory of Open Access Journals (Sweden)

    Ivan Morales

    2014-04-01

    Full Text Available Segmented mesocosms (n = 3) packed with sand, sandy loam or clay loam soil were used to determine the effect of soil texture and depth on transport of two septic tank effluent (STE)-borne microbial pathogen surrogates—green fluorescent protein-labeled E. coli (GFPE) and MS-2 coliphage—in soil treatment units. HYDRUS 2D/3D software was used to model the transport of these microbes from the infiltrative surface. Mesocosms were spiked with GFPE and MS-2 coliphage at 10^5 cfu/mL STE and 10^5–10^6 pfu/mL STE, respectively. In all soils, removal rates were >99.99% at 25 cm. The transport simulation compared (1) optimization and (2) trial-and-error modeling approaches. Only slight differences between the transport parameters were observed between these approaches. Treating both the die-off rates and attachment/detachment rates as variables resulted in an overall better model fit, particularly for the tailing phase of the experiments. Independent of the fitting procedure, attachment rates computed by the model were higher in sandy and sandy loam soils than clay, which was attributed to unsaturated flow conditions at lower water content in the coarser-textured soils. Early breakthrough of the bacteria and virus indicated the presence of preferential flow in the system in the structured clay loam soil, resulting in faster movement of water and microbes through the soil relative to a conservative tracer (bromide).

  6. Greedy Sampling and Incremental Surrogate Model-Based Tailoring of Aeroservoelastic Model Database for Flexible Aircraft

    Science.gov (United States)

    Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.

    2018-01-01

    This paper presents a data analysis and modeling framework to tailor and develop linear parameter-varying (LPV) aeroservoelastic (ASE) model databases for flexible aircraft in a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of grid points within the original model database, and then the ASE model at any flight condition can be obtained simply through surrogate model interpolation. The greedy sampling algorithm is developed to select the next sample point that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy till a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database constructed directly from the physics-based tool, with the worst relative error far below 1%. The interpolated ASE model exhibits continuously-varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, (a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and (b) reiterating the need and utility of adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter space, and can be used to guide ASE model development, model order reduction, robust control synthesis and novel vehicle design of flexible aircraft.
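
The greedy loop described above (add the sample where the surrogate disagrees most with the benchmark, refit, repeat) can be sketched in one dimension. Plain linear interpolation stands in for the Kriging surrogate, and the sharp-featured test function is hypothetical:

```python
import numpy as np

def greedy_sample(f, candidates, n_samples, tol=1e-3):
    """Greedy sampling sketch: repeatedly query the benchmark model f at the
    candidate point where the current surrogate disagrees with it most."""
    xs = [candidates[0], candidates[-1]]          # start from the endpoints
    ys = [f(xs[0]), f(xs[1])]
    for _ in range(n_samples - 2):
        order = np.argsort(xs)
        pred = np.interp(candidates, np.array(xs)[order], np.array(ys)[order])
        errors = np.abs(pred - f(candidates))
        if errors.max() < tol:
            break                                 # tolerance met early
        x_new = candidates[int(np.argmax(errors))]
        xs.append(x_new)
        ys.append(f(x_new))
    return np.sort(np.array(xs))

# Hypothetical response with a sharp feature near x = 0.3
f = lambda x: np.exp(-20.0 * (x - 0.3) ** 2)
candidates = np.linspace(0.0, 1.0, 201)
points = greedy_sample(f, candidates, n_samples=20)
```

As in the paper, the selected points cluster non-uniformly where the response varies fastest, which is the source of the database compaction.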

  7. An Efficient Constraint Boundary Sampling Method for Sequential RBDO Using Kriging Surrogate Model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jihoon; Jang, Junyong; Kim, Shinyu; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Cho, Sugil; Kim, Hyung Woo; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Busan (Korea, Republic of)

    2016-06-15

    Reliability-based design optimization (RBDO) incurs a high computational cost owing to its reliability analysis, so a surrogate model is introduced to reduce this cost. In surrogate-model-based RBDO, the accuracy of the reliability estimate depends on the accuracy of the surrogate model at the constraint boundaries. In earlier research, constraint boundary sampling (CBS) was proposed to accurately approximate the boundaries of constraints by locating sample points on them. However, because CBS places sample points on all constraint boundaries, it creates superfluous sample points. In this paper, efficient constraint boundary sampling (ECBS) is proposed to enhance the efficiency of CBS. ECBS uses the statistical information of a Kriging surrogate model to locate sample points on or near the RBDO solution. The efficiency of ECBS is verified with mathematical examples.

  8. Estimation of Model's Marginal Likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    Science.gov (United States)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through a model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment with a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: marginal likelihoods repeatedly estimated by TIE show significantly less variability than those estimated by the other estimators. In addition, the SG surrogates efficiently facilitate BMA predictions, especially BMA-TIE. The number of model executions needed for building the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
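
    As a minimal illustration of two of the estimators compared in this record, the sketch below computes the arithmetic mean estimator (AME) and harmonic mean estimator (HME) for a toy conjugate-normal problem whose marginal likelihood is known in closed form; the groundwater models in the study are, of course, far more expensive, and all numbers here are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy conjugate example with an analytic marginal likelihood:
    # y ~ N(theta, 1) with prior theta ~ N(0, 1), so p(y) = N(y; 0, 2).
    y = 1.3

    def normal_pdf(x, mu, var):
        return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

    # AME: average the likelihood over draws from the prior.
    theta_prior = rng.normal(0.0, 1.0, 200_000)
    ame = normal_pdf(y, theta_prior, 1.0).mean()

    # HME: harmonic-average the likelihood over draws from the posterior
    # (here N(y/2, 1/2) by conjugacy); known to be less stable than AME/TIE.
    theta_post = rng.normal(y / 2, np.sqrt(0.5), 200_000)
    hme = 1.0 / (1.0 / normal_pdf(y, theta_post, 1.0)).mean()

    exact = normal_pdf(y, 0.0, 2.0)   # analytic marginal likelihood
    ```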

  9. A conceptual model of the role of communication in surrogate decision making for hospitalized adults.

    Science.gov (United States)

    Torke, Alexia M; Petronio, Sandra; Sachs, Greg A; Helft, Paul R; Purnell, Christianna

    2012-04-01

    To build a conceptual model of the role of communication in decision making, based on literature from medicine, communication studies, and medical ethics, we proposed a model and described each construct in detail. We review what is known about interpersonal and patient-physician communication, describe the literature on surrogate-clinician communication, and discuss implications for our developing model. The communication literature proposes two major elements of interpersonal communication: information processing and relationship building. These elements are composed of constructs such as information disclosure and emotional support that are likely to be relevant to decision making. We propose that these elements of communication affect decision making, which in turn affects outcomes for both patients and surrogates. Decision-making quality may also mediate the relationship between communication and outcomes. Although many elements of the model have been studied in relation to patient-clinician communication, there are limited data about surrogate decision making. There is evidence of high surrogate distress associated with decision making that may be alleviated by communication-focused interventions. More research is needed to test the relationships proposed in the model. Good communication with surrogates may improve both the quality of medical decisions and outcomes for the patient and surrogate. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  10. Coastal aquifer management under parameter uncertainty: Ensemble surrogate modeling based simulation-optimization

    Science.gov (United States)

    Janardhanan, S.; Datta, B.

    2011-12-01

    Surrogate models are widely used to develop computationally efficient simulation-optimization models for solving complex groundwater management problems. Artificial-intelligence-based models are most often used for this purpose, trained on predictor-predictand data obtained from a numerical simulation model. Most often this is implemented under the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain, which limits the application of such approximation surrogates. In this study we develop a surrogate-model-based coupled simulation-optimization methodology for determining optimal pumping strategies for coastal aquifers under parameter uncertainty. An ensemble surrogate modeling approach is used together with multiple-realization optimization. The methodology is applied to a multi-objective coastal aquifer management problem with two conflicting objectives. Hydraulic conductivity and aquifer recharge are treated as uncertain. The three-dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping rates and uncertain parameters, generating input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets that belong to different regions of the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives, namely maximizing total pumping from beneficial wells and minimizing the total pumping from barrier wells for hydraulic control of
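
    The bootstrap-ensemble surrogate idea in this record can be sketched in a few lines. Below, a cheap quadratic fit stands in for the genetic-programming surrogates and an analytic function stands in for FEMWATER; the ensemble spread gives a rough uncertainty measure. All names and numbers are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def aquifer_response(pumping):              # stand-in for a simulation run
        return 50.0 - 0.8 * pumping + 0.01 * pumping ** 2

    # "Model runs": noisy input-output patterns from the expensive simulator
    pump = rng.uniform(0, 30, 40)
    head = aquifer_response(pump) + rng.normal(0, 0.5, 40)

    # Bootstrap resample the training set and fit one surrogate per resample
    ensemble = []
    for _ in range(50):
        idx = rng.integers(0, len(pump), len(pump))   # non-parametric bootstrap
        ensemble.append(np.polyfit(pump[idx], head[idx], deg=2))

    # Ensemble prediction at a candidate pumping rate, with spread
    preds = np.array([np.polyval(c, 15.0) for c in ensemble])
    mean, spread = preds.mean(), preds.std()
    ```

    In the study, each ensemble member would instead be a genetic-programming surrogate, and the ensemble prediction would feed the multi-objective genetic algorithm.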

  11. Developing a particle tracking surrogate model to improve inversion of ground water - Surface water models

    Science.gov (United States)

    Cousquer, Yohann; Pryet, Alexandre; Atteia, Olivier; Ferré, Ty P. A.; Delbart, Célestine; Valois, Rémi; Dupuy, Alain

    2018-03-01

    The inverse problem of groundwater models is often ill-posed and model parameters are likely to be poorly constrained. Identifiability is improved if diverse data types are used for parameter estimation. However, some models, including detailed solute transport models, are further limited by prohibitive computation times. This often precludes the use of concentration data for parameter estimation, even if those data are available. In the case of surface water-groundwater (SW-GW) models, concentration data can provide SW-GW mixing ratios, which efficiently constrain the estimate of exchange flow, but are rarely used. We propose to reduce computational limits by simulating SW-GW exchange at a sink (well or drain) based on particle tracking under steady state flow conditions. Particle tracking is used to simulate advective transport. A comparison between the particle tracking surrogate model and an advective-dispersive model shows that dispersion can often be neglected when the mixing ratio is computed for a sink, allowing for use of the particle tracking surrogate model. The surrogate model was implemented to solve the inverse problem for a real SW-GW transport problem with heads and concentrations combined in a weighted hybrid objective function. The resulting inversion showed markedly reduced uncertainty in the transmissivity field compared to calibration on head data alone.

  12. Real-time characterization of partially observed epidemics using surrogate models.

    Energy Technology Data Exchange (ETDEWEB)

    Safta, Cosmin; Ray, Jaideep; Lefantzi, Sophia; Crary, David (Applied Research Associates, Arlington, VA); Sargsyan, Khachik; Cheng, Karen (Applied Research Associates, Arlington, VA)

    2011-09-01

    We present a statistical method, predicated on the use of surrogate models, for the 'real-time' characterization of partially observed epidemics. Observations consist of counts of symptomatic patients, diagnosed with the disease, that may be available in the early epoch of an ongoing outbreak. Characterization, in this context, refers to the estimation of epidemiological parameters that can be used to provide short-term forecasts of the ongoing epidemic, as well as gross information on the dynamics of the etiologic agent in the affected population, e.g., the time-dependent infection rate. The characterization problem is formulated as a Bayesian inverse problem, and epidemiological parameters are estimated as distributions using a Markov chain Monte Carlo (MCMC) method, thus quantifying the uncertainty in the estimates. In some cases, the inverse problem can be computationally expensive, primarily due to the epidemic simulator used inside the inversion algorithm. We present a method, based on replacing the epidemiological model with computationally inexpensive surrogates, that can reduce the computational time to minutes without a significant loss of accuracy. The surrogates are created by projecting the output of an epidemiological model onto a set of polynomial chaos bases; thereafter, computations involving the surrogate model reduce to evaluations of a polynomial. We find that the epidemic characterizations obtained with the surrogate models are very close to those obtained with the original model. We also find that the number of projections required to construct a surrogate model is O(10)-O(10^2) less than the number of samples required by the MCMC to construct a stationary posterior distribution; thus, depending upon the epidemiological models in question, it may be possible to omit the offline creation and caching of surrogate models prior to their use in an inverse problem. The technique is demonstrated on synthetic data as well as
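
    A minimal sketch of the polynomial chaos projection step described in this record, assuming a one-parameter stand-in "simulator" and probabilists' Hermite bases fitted by least squares (the paper's surrogates are built for a full epidemiological model; all names here are illustrative):

    ```python
    import numpy as np
    from numpy.polynomial import hermite_e as H

    rng = np.random.default_rng(1)

    # Stand-in "epidemic simulator": a smooth scalar response of one uncertain,
    # normally distributed parameter (the real simulator is far more expensive).
    def simulator(theta):
        return np.exp(0.3 * theta) + 0.1 * theta ** 2

    # Projection step: fit probabilists' Hermite (polynomial chaos) coefficients
    # by least squares on a modest training ensemble.
    theta_train = rng.normal(size=200)
    coeffs = H.hermefit(theta_train, simulator(theta_train), deg=6)

    # Thereafter, computations involving the surrogate reduce to evaluating
    # a polynomial, here checked against the "simulator" on fresh points.
    theta_test = rng.uniform(-2.5, 2.5, 1000)
    err = np.max(np.abs(H.hermeval(theta_test, coeffs) - simulator(theta_test)))
    ```

    Inside an MCMC loop, each likelihood evaluation would call `H.hermeval` instead of the simulator, which is where the reported speed-up comes from.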

  13. Adaptive surrogate modeling for response surface approximations with application to bayesian inference

    KAUST Repository

    Prudhomme, Serge; Bryant, Corey M.

    2015-01-01

    Parameter estimation for complex models using Bayesian inference is usually a very costly process as it requires a large number of solves of the forward problem. We show here how the construction of adaptive surrogate models using a posteriori error estimates for quantities of interest can significantly reduce the computational cost in problems of statistical inference. As surrogate models provide only approximations of the true solutions of the forward problem, it is nevertheless necessary to control these errors in order to construct an accurate reduced model with respect to the observables utilized in the identification of the model parameters. Effectiveness of the proposed approach is demonstrated on a numerical example dealing with the Spalart–Allmaras model for the simulation of turbulent channel flows. In particular, we illustrate how Bayesian model selection using the adapted surrogate model in place of solving the coupled nonlinear equations leads to the same quality of results while requiring fewer nonlinear PDE solves.

  15. Using multiscale spatial models to assess potential surrogate habitat for an imperiled reptile.

    Directory of Open Access Journals (Sweden)

    Jennifer M Fill

    Full Text Available In evaluating conservation and management options for species, practitioners might consider surrogate habitats at multiple scales when estimating available habitat or modeling species' potential distributions based on suitable habitats, especially when native environments are rare. Species' dependence on surrogates likely increases as optimal habitat is degraded and lost due to anthropogenic landscape change, and thus surrogate habitats may be vital for an imperiled species' survival in highly modified landscapes. We used spatial habitat models to examine a potential surrogate habitat for an imperiled ambush predator (eastern diamondback rattlesnake, Crotalus adamanteus; EDB) at two scales. The EDB is an apex predator indigenous to imperiled longleaf pine (Pinus palustris) ecosystems of the southeastern United States. Loss of native open-canopy pine savannas and woodlands has been suggested as the principal cause of the species' extensive decline. We examined EDB habitat selection in the Coastal Plain tidewater region to evaluate the role of marsh as a potential surrogate habitat and to further quantify the species' habitat requirements at two scales: home range (HR) and within the home range (WHR). We studied EDBs using radiotelemetry and employed an information-theoretic approach and logistic regression to model habitat selection as use vs. availability. We failed to detect a positive association with marsh as a surrogate habitat at the HR scale; rather, EDBs exhibited significantly negative associations with all landscape patches except pine savanna. Within-home-range selection was characterized by a negative association with forest and a positive association with ground cover, which suggests that EDBs may use surrogate habitats of similar structure, including marsh, within their home ranges. While our HR analysis did not support tidal marsh as a surrogate habitat, marsh may still provide resources for EDBs at smaller scales.

  16. A response-modeling alternative to surrogate models for support in computational analyses

    International Nuclear Information System (INIS)

    Rutherford, Brian

    2006-01-01

    Often, the objectives in a computational analysis involve characterization of system performance based on some function of the computed response. In general, this characterization includes (at least) an estimate or prediction for some performance measure and an estimate of the associated uncertainty. Surrogate models can be used to approximate the response in regions where simulations were not performed. For most surrogate modeling approaches, however, (1) estimates are based on smoothing of available data, and (2) uncertainty in the response is specified in a point-wise (in the input space) fashion. These aspects of surrogate model construction might limit their capabilities. One alternative is to construct a probability measure, G(r), for the computer response, r, based on available data. This 'response-modeling' approach permits probability estimation for an arbitrary event, E(r), based on the computer response. In this general setting, event probabilities can be computed as prob(E) = ∫ I(E(r)) dG(r), where I is the indicator function. Furthermore, one can use G(r) to calculate an induced distribution on a performance measure, pm. For prediction problems where the performance measure is a scalar, its distribution F_pm is determined by F_pm(z) = ∫ I(pm(r) ≤ z) dG(r). We introduce response models for scalar computer output and then generalize the approach to more complicated responses that utilize multiple response models.
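
    Both integrals in this record reduce to simple Monte Carlo averages once samples from G(r) are available. A sketch, assuming for illustration that G is a standard normal response distribution and the performance measure is pm(r) = r² (the paper builds G from computed response data):

    ```python
    import math
    import numpy as np

    rng = np.random.default_rng(5)

    # Draws from an illustrative response measure G(r); the paper constructs
    # G from available simulation data rather than assuming a normal.
    r = rng.standard_normal(1_000_000)

    # prob(E) = ∫ I(E(r)) dG(r), for the event E(r): r > 1
    prob_E = np.mean(r > 1.0)

    # F_pm(z) = ∫ I(pm(r) <= z) dG(r), with pm(r) = r^2 evaluated at z = 1
    F_pm_at_1 = np.mean(r ** 2 <= 1.0)

    # Closed-form references for the standard normal case
    exact_E = 0.5 * (1.0 - math.erf(1.0 / math.sqrt(2.0)))   # 1 - Phi(1)
    exact_F = math.erf(1.0 / math.sqrt(2.0))                 # Phi(1) - Phi(-1)
    ```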

  17. Modeling of Heating and Evaporation of FACE I Gasoline Fuel and its Surrogates

    KAUST Repository

    Elwardani, Ahmed Elsaid

    2016-04-05

    The US Department of Energy has formulated different gasoline fuels called "Fuels for Advanced Combustion Engines (FACE)" to standardize their compositions. FACE I is a low-octane-number gasoline fuel with a research octane number (RON) of approximately 70. The detailed hydrocarbon analysis (DHA) of FACE I shows that it contains 33 components. This large number of components cannot be handled in fuel spray simulation, where thousands of droplets are directly injected into the combustion chamber and are heated, broken up, collided, and evaporated simultaneously. Heating and evaporation of a single droplet of FACE I fuel was therefore investigated. The heating and evaporation model accounts for the effects of finite thermal conductivity, finite liquid diffusivity, and recirculation inside the droplet, referred to as the effective thermal conductivity/effective diffusivity (ETC/ED) model. The temporal variations of the liquid mass fractions of the droplet components were used to characterize the evaporation process. Components with similar evaporation characteristics were merged together, with a representative component initially chosen based on the highest initial mass fraction. Three six-component surrogates (Surrogates 1-3) that match the evaporation characteristics of FACE I were formulated without keeping the same mass fractions of the different hydrocarbon types. Another two surrogates (Surrogates 4 and 5) were considered that keep the same hydrocarbon-type concentrations. A distillation-based surrogate that matches the measured distillation profile was also proposed. The calculated molar mass, hydrogen-to-carbon (H/C) ratio, and RON of Surrogate 4 and the distillation-based one are close to those of FACE I.

  18. Reduced Gasoline Surrogate (Toluene/n-Heptane/iso-Octane) Chemical Kinetic Model for Compression Ignition Simulations

    KAUST Repository

    Sarathy, Mani; Atef, Nour; Alfazazi, Adamu; Badra, Jihad; Zhang, Yu; Tzanetakis, Tom; Pei, Yuanjiang

    2018-04-03

    Toluene primary reference fuel (TPRF, a mixture of toluene, iso-octane, and n-heptane) is a suitable surrogate to represent a wide spectrum of real fuels with varying octane sensitivity. Investigating different surrogates in engine simulations is a prerequisite to identifying the best-matching mixture. However, running 3D engine simulations using detailed models is currently impossible, so reduction of detailed models is essential. This work presents a reduced AramcoMech kinetic model developed at King Abdullah University of Science and Technology (KAUST) for simulating complex TPRF surrogate blends. A semi-decoupling approach was used together with species and reaction lumping to obtain the reduced kinetic model. The model was widely validated against experimental data, including shock tube ignition delay times and premixed laminar flame speeds. Finally, the model was utilized to simulate the combustion of a low-reactivity gasoline fuel under partially premixed combustion conditions.

  20. The Patient-Worker: A Model for Human Research Subjects and Gestational Surrogates.

    Science.gov (United States)

    Ryman, Emma; Fulfer, Katy

    2017-01-13

    We propose the 'patient-worker' as a theoretical construct that responds to moral problems arising with the globalization of healthcare and medical research. The patient-worker model recognizes that some participants in global medical industries are workers and are owed workers' rights. Further, these participants are patient-like insofar as they are beneficiaries of fiduciary relationships with healthcare professionals. We apply the patient-worker model to human subjects research and commercial gestational surrogacy. In human subjects research, subjects are usually characterized either as patients or as workers. Through questioning this dichotomy, we argue that some subject populations fit into both categories. With respect to commercial surrogacy, we enrich feminist discussions of embodied labor by describing how surrogates are beneficiaries of fiduciary obligations. They are not just workers, but patient-workers. Through these applications, the patient-worker model offers a helpful normative framework for exploring what globalized medical industries owe to the individuals who bear the bodily burdens of medical innovation. © 2017 John Wiley & Sons Ltd.

  1. Accelerated Monte Carlo system reliability analysis through machine-learning-based surrogate models of network connectivity

    International Nuclear Information System (INIS)

    Stern, R.E.; Song, J.; Work, D.B.

    2017-01-01

    The two-terminal reliability problem in system reliability analysis is known to be computationally intractable for large infrastructure graphs. Monte Carlo techniques can estimate the probability of a disconnection between two points in a network by selecting a representative sample of network component failure realizations and determining the source-terminal connectivity of each realization. To reduce the runtime required for the Monte Carlo approximation, this article proposes an approximate framework in which the connectivity check of each sample is estimated using a machine-learning-based classifier. The framework is implemented using both a support vector machine (SVM) and a logistic-regression-based surrogate model. Numerical experiments are performed on the California gas distribution network using the epicenter and magnitude of the 1989 Loma Prieta earthquake as well as randomly generated earthquakes. It is shown that the SVM and logistic regression surrogate models are able to predict network connectivity with accuracies of 99% for both methods, and are 1-2 orders of magnitude faster than using a Monte Carlo method with an exact connectivity check. - Highlights: • Surrogate models of network connectivity are developed using machine-learning algorithms. • The developed surrogate models can reduce the runtime required for Monte Carlo simulations. • Support vector machines and logistic regressions are employed to develop the surrogate models. • A numerical example on the California gas distribution network demonstrates the proposed approach. • The developed models have accuracies of 99% and are 1-2 orders of magnitude faster than MCS.
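
    The surrogate-classifier idea in this record can be sketched on a deliberately simple series network, where source-terminal connectivity is linearly separable and a plain-numpy logistic regression can match the exact breadth-first-search check. Real infrastructure graphs need richer features and, as in the article, SVM or regularized models; everything below is an illustrative toy.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Series network: path 0-1-2-3-4; connectivity requires every edge to survive.
    edges = [(0, 1), (1, 2), (2, 3), (3, 4)]

    def connected(up, source=0, terminal=4):
        # exact connectivity check by graph search over surviving edges
        adj = {n: [] for n in range(5)}
        for (a, b), alive in zip(edges, up):
            if alive:
                adj[a].append(b); adj[b].append(a)
        seen, stack = {source}, [source]
        while stack:
            for m in adj[stack.pop()]:
                if m not in seen:
                    seen.add(m); stack.append(m)
        return terminal in seen

    # Monte Carlo failure realizations labeled by the exact check
    X = (rng.random((2000, len(edges))) < 0.8).astype(float)   # 0.8 survival prob.
    y = np.array([connected(row) for row in X], dtype=float)

    # Plain-numpy logistic regression as the surrogate classifier
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(3000):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= 0.5 * X.T @ (p - y) / len(y)
        b -= 0.5 * np.mean(p - y)

    acc = np.mean(((X @ w + b) > 0) == (y > 0.5))
    ```

    Once trained, each Monte Carlo sample costs one dot product instead of a graph traversal, which is the source of the speed-up reported above.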

  2. Are Physical Education Majors Models for Fitness?

    Science.gov (United States)

    Kamla, James; Snyder, Ben; Tanner, Lori; Wash, Pamela

    2012-01-01

    The National Association of Sport and Physical Education (NASPE) (2002) has taken a firm stance on the importance of adequate fitness levels of physical education teachers stating that they have the responsibility to model an active lifestyle and to promote fitness behaviors. Since the NASPE declaration, national initiatives like Let's Move…

  3. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    Science.gov (United States)

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2017-01-01

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and for making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, strong parameter correlations, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid-based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate of the actual RZWQM2, and then calibrated the surrogate model using a global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial that is fast to evaluate, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
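
    The surrogate-then-optimize pattern in this record can be sketched in one dimension. Here a polynomial fitted to a handful of "expensive" runs stands in for the sparse-grid surrogate, and a grid search stands in for QPSO; the function and all names are illustrative, not from the paper.

    ```python
    import numpy as np

    # Stand-in for one expensive RZWQM2 run: a smooth objective of one parameter.
    def expensive_model(x):
        return np.exp(0.2 * x) + (x - 1.0) ** 2

    # Build the cheap surrogate from only 15 "simulator" runs.
    x_train = np.linspace(-2.0, 3.0, 15)
    coeffs = np.polyfit(x_train, expensive_model(x_train), 6)

    # Optimize the surrogate: thousands of polynomial evaluations cost almost
    # nothing, whereas the same search on the simulator would be prohibitive.
    grid = np.linspace(-2.0, 3.0, 5001)
    x_opt = grid[np.argmin(np.polyval(coeffs, grid))]

    # Reference: brute-force minimum of the "expensive" model on the same grid.
    x_true = grid[np.argmin(expensive_model(grid))]
    ```

    Replacing the grid search with a global optimizer such as QPSO, and the polynomial fit with sparse-grid interpolation in seven dimensions, recovers the structure of the study's procedure.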

  4. Surrogate screening models for the low physical activity criterion of frailty.

    Science.gov (United States)

    Eckel, Sandrah P; Bandeen-Roche, Karen; Chaves, Paulo H M; Fried, Linda P; Louis, Thomas A

    2011-06-01

    Low physical activity, one of five criteria in a validated clinical phenotype of frailty, is assessed by a standardized, semiquantitative questionnaire on up to 20 leisure-time activities. Because of the time demanded to collect the interview data, it has been challenging to translate to studies other than the Cardiovascular Health Study (CHS), for which it was developed. Considering subsets of activities, we identified and evaluated streamlined surrogate assessment methods and compared them to one implemented in the Women's Health and Aging Study (WHAS). Using data on men and women ages 65 and older from the CHS, we applied logistic regression models to rank activities by "relative influence" in predicting low physical activity. We considered subsets of the most influential activities as inputs to potential surrogate models (logistic regressions). We evaluated predictive accuracy and predictive validity using the area under receiver operating characteristic curves, and assessed criterion validity using proportional hazards models relating frailty status (defined using the surrogate) to mortality. Walking for exercise and moderately strenuous household chores were highly influential for both genders. Women required fewer activities than men for accurate classification. The WHAS model (8 CHS activities) was an effective surrogate, but a surrogate using 6 activities (walking, chores, gardening, general exercise, mowing, and golfing) was also highly predictive. We recommend a 6-activity questionnaire to assess physical activity for men and women. If efficiency is essential and the study involves only women, fewer activities can be included.
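
    A toy version of the influence-ranking step in this record, on synthetic data constructed so that "walking" drives the low-activity label most strongly; a simple correlation-based ranking stands in for the paper's logistic-regression relative influence, and all data and activity names here are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    n = 5000
    activities = ["walking", "chores", "gardening", "golf"]
    X = rng.random((n, 4))   # synthetic activity scores, one column per activity

    # Construct the label so "walking" dominates and "chores" matters a little
    logit = -4.0 * X[:, 0] - 1.0 * X[:, 1] + 2.0
    low_activity = (1.0 / (1.0 + np.exp(-logit)) > rng.random(n)).astype(float)

    # Rank activities by strength of association with the low-activity label
    influence = [abs(np.corrcoef(X[:, j], low_activity)[0, 1]) for j in range(4)]
    ranking = [activities[j] for j in np.argsort(influence)[::-1]]
    ```

    The most influential activities found this way would then become the inputs of the streamlined surrogate questionnaire, as in the study.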

  5. Integrating surrogate models into subsurface simulation framework allows computation of complex reactive transport scenarios

    Science.gov (United States)

    De Lucia, Marco; Kempka, Thomas; Jatnieks, Janis; Kühn, Michael

    2017-04-01

    Reactive transport simulations - where geochemical reactions are coupled with hydrodynamic transport of reactants - are extremely time-consuming and suffer from significant numerical issues. Given the high uncertainties inherently associated with the geochemical models, which also constitute the major computational bottleneck, such computational demands may seem inappropriate and probably constitute the main limitation to their wide application. A promising way to ease and speed up such coupled simulations is to employ statistical surrogates instead of "full-physics" geochemical models [1]. Data-driven surrogates are reduced models obtained from a set of pre-calculated "full-physics" simulations, capturing their principal features while being extremely fast to compute. Model reduction of course comes at the price of a precision loss; however, this appears justified in the presence of large uncertainties in the parametrization of geochemical processes. This contribution illustrates the integration of surrogates into the flexible simulation framework currently being developed by the authors' research group [2]. The high-level language of choice for obtaining and dealing with surrogate models is R, which profits from state-of-the-art methods for statistical analysis of large simulation ensembles. A stand-alone advective mass transport module was furthermore developed in order to add this capability to any multiphase finite volume hydrodynamic simulator within the simulation framework. We present 2D and 3D case studies benchmarking the performance of surrogates and "full-physics" chemistry in scenarios pertaining to the assessment of geological subsurface utilization. [1] Jatnieks, J., De Lucia, M., Dransch, D., Sips, M.: "Data-driven surrogate model approach for improving the performance of reactive transport simulations.", Energy Procedia 97, 2016, p. 447-453. [2] Kempka, T., Nakaten, B., De Lucia, M., Nakaten, N., Otto, C., Pohl, M., Chabab [Tillner], E., Kühn, M

  6. Fog Density Estimation and Image Defogging Based on Surrogate Modeling for Optical Depth.

    Science.gov (United States)

    Jiang, Yutong; Sun, Changming; Zhao, Yu; Yang, Li

    2017-05-03

    In order to estimate fog density correctly and to remove fog from foggy images appropriately, a surrogate model for optical depth is presented in this paper. We comprehensively investigate various fog-relevant features and propose a novel feature, based on the hue, saturation, and value color space, which correlates well with the perception of fog density. We use a surrogate-based method to learn a refined polynomial regression model for optical depth with informative fog-relevant features such as dark channel, saturation-value, and chroma, which are selected on the basis of sensitivity analysis. Based on the obtained accurate surrogate model for optical depth, an effective method for fog density estimation and image defogging is proposed. The effectiveness of our proposed method is verified quantitatively and qualitatively by experimental results on both synthetic and real-world foggy images.

  7. Contrast Gain Control Model Fits Masking Data

    Science.gov (United States)

    Watson, Andrew B.; Solomon, Joshua A.; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    We studied the fit of a contrast gain control model to data of Foley (JOSA 1994), consisting of thresholds for a Gabor patch masked by gratings of various orientations, or by compounds of two orientations. Our general model includes models of Foley and Teo & Heeger (IEEE 1994). Our specific model used a bank of Gabor filters with octave bandwidths at 8 orientations. Excitatory and inhibitory nonlinearities were power functions with exponents of 2.4 and 2. Inhibitory pooling was broad in orientation, but narrow in spatial frequency and space. Minkowski pooling used an exponent of 4. All of the data for observer KMF were well fit by the model. We have developed a contrast gain control model that fits masking data. Unlike Foley's, our model accepts images as inputs. Unlike Teo & Heeger's, our model did not require multiple channels for different dynamic ranges.

  8. Effective use of integrated hydrological models in basin-scale water resources management: surrogate modeling approaches

    Science.gov (United States)

    Zheng, Y.; Wu, B.; Wu, X.

    2015-12-01

    Integrated hydrological models (IHMs) consider surface water and subsurface water as a unified system, and have been widely adopted in basin-scale water resources studies. However, due to IHMs' mathematical complexity and high computational cost, it is difficult to implement them in an iterative model evaluation process (e.g., Monte Carlo Simulation, simulation-optimization analysis, etc.), which diminishes their applicability for supporting decision-making in real-world situations. Our studies investigated how to effectively use complex IHMs to address real-world water issues via surrogate modeling. Three surrogate modeling approaches were considered, including 1) DYCORS (DYnamic COordinate search using Response Surface models), a well-established response surface-based optimization algorithm; 2) SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), a response surface-based optimization algorithm that we developed specifically for IHMs; and 3) Probabilistic Collocation Method (PCM), a stochastic response surface approach. Our investigation was based on a modeling case study in the Heihe River Basin (HRB), China's second largest endorheic river basin. The GSFLOW (Coupled Ground-Water and Surface-Water Flow Model) model was employed. Two decision problems were discussed. One is to optimize, both in time and in space, the conjunctive use of surface water and groundwater for agricultural irrigation in the middle HRB region; and the other is to cost-effectively collect hydrological data based on a data-worth evaluation. Overall, our study results highlight the value of incorporating an IHM in making decisions of water resources management and hydrological data collection. An IHM like GSFLOW can provide great flexibility to formulating proper objective functions and constraints for various optimization problems. 
On the other hand, it has been demonstrated that surrogate modeling approaches can pave the path for such incorporation in real
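The response-surface idea behind algorithms such as DYCORS and SOIM can be caricatured in a few lines: fit a cheap surface to all expensive evaluations so far, optimize the surface, and spend the next expensive evaluation at the surface's minimizer. The objective below is an invented one-dimensional stand-in, not GSFLOW:

```python
import numpy as np

# Hypothetical expensive objective (stand-in for an integrated
# hydrological model run), to be minimized over [0, 4].
def expensive_model(x):
    return (x - 2.7) ** 2 + 0.3 * np.cos(3.0 * x)

rng = np.random.default_rng(1)
X = list(rng.uniform(0.0, 4.0, 5))           # initial design
Y = [expensive_model(x) for x in X]

for _ in range(15):
    # Fit a quadratic response surface to all evaluations so far.
    c = np.polyfit(X, Y, deg=2)
    # Minimize the (cheap) surrogate on a dense grid, then evaluate the
    # expensive model only at the surrogate's minimizer.
    grid = np.linspace(0.0, 4.0, 401)
    x_new = grid[np.argmin(np.polyval(c, grid))]
    X.append(x_new)
    Y.append(expensive_model(x_new))

x_best = X[int(np.argmin(Y))]
print(x_best, min(Y))
```

Real response-surface optimizers add restarts, trust regions, and space-filling candidate points; the sketch only shows why few expensive evaluations are needed.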

  9. Fitting neuron models to spike trains

    Directory of Open Access Journals (Sweden)

    Cyrille Rossant

    2011-02-01

    Full Text Available Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.

  10. Development of surrogate models using artificial neural network for building shell energy labelling

    NARCIS (Netherlands)

    Melo, A.P.; Costola, D.; Lamberts, R.; Hensen, J.L.M.

    2014-01-01

    Surrogate models are an important part of building energy labelling programs, but these models still present low accuracy, particularly in cooling-dominated climates. The objective of this study was to evaluate the feasibility of using an artificial neural network (ANN) to improve the accuracy of

  11. Fitting Hidden Markov Models to Psychological Data

    Directory of Open Access Journals (Sweden)

    Ingmar Visser

    2002-01-01

    Full Text Available Markov models have been used extensively in psychology of learning. Applications of hidden Markov models are rare however. This is partially due to the fact that comprehensive statistics for model selection and model assessment are lacking in the psychological literature. We present model selection and model assessment statistics that are particularly useful in applying hidden Markov models in psychology. These statistics are presented and evaluated by simulation studies for a toy example. We compare AIC, BIC and related criteria and introduce a prediction error measure for assessing goodness-of-fit. In a simulation study, two methods of fitting equality constraints are compared. In two illustrative examples with experimental data we apply selection criteria, fit models with constraints and assess goodness-of-fit. First, data from a concept identification task is analyzed. Hidden Markov models provide a flexible approach to analyzing such data when compared to other modeling methods. Second, a novel application of hidden Markov models in implicit learning is presented. Hidden Markov models are used in this context to quantify knowledge that subjects express in an implicit learning task. This method of analyzing implicit learning data provides a comprehensive approach for addressing important theoretical issues in the field.
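As a toy version of the model-selection statistics discussed here, the snippet below computes the log-likelihood of a two-state hidden Markov model with the scaled forward algorithm and compares AIC/BIC against a one-state (i.i.d.) alternative. The parameters are illustrative, and the "fitted" HMM simply reuses the generating parameters instead of running EM:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 2-state hidden Markov model with binary observations.
A = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition matrix
B = np.array([[0.8, 0.2], [0.1, 0.9]])   # emission probs P(obs | state)
pi = np.array([0.5, 0.5])                # initial distribution

# Simulate an observation sequence of length T.
T = 500
states = np.empty(T, dtype=int)
obs = np.empty(T, dtype=int)
states[0] = rng.choice(2, p=pi)
for t in range(1, T):
    states[t] = rng.choice(2, p=A[states[t - 1]])
for t in range(T):
    obs[t] = rng.choice(2, p=B[states[t]])

def hmm_loglik(obs, A, B, pi):
    """Log-likelihood via the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

# Candidate 1: the 2-state HMM (2 free transition + 2 emission +
# 1 initial parameter = 5).
ll2 = hmm_loglik(obs, A, B, pi)
# Candidate 2: a 1-state i.i.d. Bernoulli model (1 free parameter).
p = obs.mean()
ll1 = obs.sum() * np.log(p) + (T - obs.sum()) * np.log(1 - p)

aic = lambda ll, k: 2 * k - 2 * ll
bic = lambda ll, k: k * np.log(T) - 2 * ll
print(aic(ll2, 5), aic(ll1, 1), bic(ll2, 5), bic(ll1, 1))
```

With persistent hidden states and distinct emission distributions, both criteria prefer the 2-state model despite its larger penalty.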

  12. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    Science.gov (United States)

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  13. Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position

    International Nuclear Information System (INIS)

    Malinowski, Kathleen T.; McAvoy, Thomas J.; George, Rohini; Dieterich, Sonja; D'Souza, Warren D.

    2012-01-01

    Purpose: To investigate the effect of tumor site, measurement precision, tumor–surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor–surrogate correlation and the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased during the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor–surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3–3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for the tumor site and fraction index.
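An ordinary-least-squares inferential model of the kind evaluated here can be sketched as follows, with synthetic marker and tumor traces (invented geometry, not CyberKnife data), including the noise-injection experiment:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in data: three external marker coordinates linearly
# related to a 1-D tumor position, as assumed by an OLS inferential model.
n = 200
t = np.linspace(0, 4 * np.pi, n)
tumor = np.sin(t)                                  # tumor motion trace
markers = np.column_stack([0.8 * tumor,
                           0.5 * tumor + 0.1,
                           1.2 * tumor - 0.2])

def ols_rmse(noise_sd):
    """Fit tumor position from (optionally noisy) markers; return RMSE."""
    X = markers + rng.normal(0.0, noise_sd, markers.shape)
    X1 = np.column_stack([np.ones(n), X])          # add intercept column
    beta, *_ = np.linalg.lstsq(X1, tumor, rcond=None)
    return float(np.sqrt(np.mean((X1 @ beta - tumor) ** 2)))

# Uncorrelated noise added to the marker inputs degrades the fitted model,
# echoing the noise-sensitivity experiment in the paper.
clean, noisy = ols_rmse(0.0), ols_rmse(0.3)
print(clean, noisy)
```

Partial least squares would replace the `lstsq` step with a fit on a few latent components, which is what makes it less sensitive to such input noise.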

  14. Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position

    Energy Technology Data Exchange (ETDEWEB)

    Malinowski, Kathleen T. [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States); Fischell Department of Bioengineering, University of Maryland, College Park, MD (United States); McAvoy, Thomas J. [Fischell Department of Bioengineering, University of Maryland, College Park, MD (United States); Department of Chemical and Biomolecular Engineering and Institute of Systems Research, University of Maryland, College Park, MD (United States); George, Rohini [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States); Dieterich, Sonja [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States); D' Souza, Warren D., E-mail: wdsou001@umaryland.edu [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States); Fischell Department of Bioengineering, University of Maryland, College Park, MD (United States)

    2012-04-01

    Purpose: To investigate the effect of tumor site, measurement precision, tumor-surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor-surrogate correlation and the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased during the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor-surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3-3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for the tumor site and fraction index.

  15. Theoretical investigations of the new Cokriging method for variable-fidelity surrogate modeling

    DEFF Research Database (Denmark)

    Zimmermann, Ralf; Bertram, Anna

    2018-01-01

    Cokriging is a variable-fidelity surrogate modeling technique which emulates a target process based on the spatial correlation of sampled data of different levels of fidelity. In this work, we address two theoretical questions associated with the so-called new Cokriging method for variable fidelity...

  16. Efficient stochastic EMC/EMI analysis using HDMR-generated surrogate models

    KAUST Repository

    Yücel, Abdulkadir C.; Bagci, Hakan; Michielssen, Eric

    2011-01-01

    of direct Monte-Carlo (MC) methods. Unfortunately, SC-gPC-generated surrogate models often lack accuracy (i) when the number of uncertain/random system variables is large and/or (ii) when the observables exhibit rapid variations. © 2011 IEEE.

  17. Surrogate model approach for improving the performance of reactive transport simulations

    Science.gov (United States)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2016-04-01

    Reactive transport models can serve a large number of important geoscientific applications involving underground resources in industry and scientific research. It is common for simulation of reactive transport to consist of at least two coupled simulation models. First is a hydrodynamics simulator that is responsible for simulating the flow of groundwaters and transport of solutes. Hydrodynamics simulators are well established technology and can be very efficient. When hydrodynamics simulations are performed without coupled geochemistry, their spatial geometries can span millions of elements even when running on desktop workstations. Second is a geochemical simulation model that is coupled to the hydrodynamics simulator. Geochemical simulation models are much more computationally costly. This is a problem that makes reactive transport simulations spanning millions of spatial elements very difficult to achieve. To address this problem we propose to replace the coupled geochemical simulation model with a surrogate model. A surrogate is a statistical model created to include only the necessary subset of simulator complexity for a particular scenario. To demonstrate the viability of such an approach we tested it on a popular reactive transport benchmark problem that involves 1D Calcite transport. This is a published benchmark problem (Kolditz, 2012) for simulation models and for this reason we use it to test the surrogate model approach. To do this we tried a number of statistical models available through the caret and DiceEval packages for R, to be used as surrogate models. These were trained on randomly sampled subset of the input-output data from the geochemical simulation model used in the original reactive transport simulation. For validation we use the surrogate model to predict the simulator output using the part of sampled input data that was not used for training the statistical model. 
For this scenario we find that the multivariate adaptive regression splines

  18. Induced subgraph searching for geometric model fitting

    Science.gov (United States)

    Xiao, Fan; Xiao, Guobao; Yan, Yan; Wang, Xing; Wang, Hanzi

    2017-11-01

    In this paper, we propose a novel model fitting method based on graphs to fit and segment multiple-structure data. In the graph constructed on data, each model instance is represented as an induced subgraph. Following the idea of pursuing the maximum consensus, the multiple geometric model fitting problem is formulated as searching for a set of induced subgraphs including the maximum union set of vertices. After the generation and refinement of the induced subgraphs that represent the model hypotheses, the searching process is conducted on the "qualified" subgraphs. Multiple model instances can be simultaneously estimated by solving a converted problem. Then, we introduce the energy evaluation function to determine the number of model instances in data. The proposed method is able to effectively estimate the number and the parameters of model instances in data severely corrupted by outliers and noises. Experimental results on synthetic data and real images validate the favorable performance of the proposed method compared with several state-of-the-art fitting methods.

  19. Enhanced surrogate models for statistical design exploiting space mapping technology

    DEFF Research Database (Denmark)

    Koziel, Slawek; Bandler, John W.; Mohamed, Achmed S.

    2005-01-01

    We present advances in microwave and RF device modeling exploiting Space Mapping (SM) technology. We propose new SM modeling formulations utilizing input mappings, output mappings, frequency scaling and quadratic approximations. Our aim is to enhance circuit models for statistical analysis...

  20. Adaptive Surrogate Modeling for Response Surface Approximations with Application to Bayesian Inference

    KAUST Repository

    Prudhomme, Serge

    2015-01-07

    The need for surrogate models and adaptive methods can be best appreciated if one is interested in parameter estimation using a Bayesian calibration procedure for validation purposes. We extend here our latest work on error decomposition and adaptive refinement for response surfaces to the development of surrogate models that can be substituted for the full models to estimate the parameters of Reynolds-averaged Navier-Stokes models. The error estimates and adaptive schemes are driven here by a quantity of interest and are thus based on the approximation of an adjoint problem. We will focus in particular to the accurate estimation of evidences to facilitate model selection. The methodology will be illustrated on the Spalart-Allmaras RANS model for turbulence simulation.

  1. Adaptive Surrogate Modeling for Response Surface Approximations with Application to Bayesian Inference

    KAUST Repository

    Prudhomme, Serge

    2015-01-01

    The need for surrogate models and adaptive methods can be best appreciated if one is interested in parameter estimation using a Bayesian calibration procedure for validation purposes. We extend here our latest work on error decomposition and adaptive refinement for response surfaces to the development of surrogate models that can be substituted for the full models to estimate the parameters of Reynolds-averaged Navier-Stokes models. The error estimates and adaptive schemes are driven here by a quantity of interest and are thus based on the approximation of an adjoint problem. We will focus in particular to the accurate estimation of evidences to facilitate model selection. The methodology will be illustrated on the Spalart-Allmaras RANS model for turbulence simulation.

  2. Fitness

    Science.gov (United States)

    http://www.girlshealth.gov/ Want to look and feel your best? What is physical fitness? Physical fitness means you can do everyday ...

  3. Comparative Numerical Study of Four Biodiesel Surrogates for Application on Diesel 0D Phenomenological Modeling

    Directory of Open Access Journals (Sweden)

    Claude Valery Ngayihi Abbe

    2016-01-01

    Full Text Available To meet more stringent norms and standards concerning engine performance and emissions, engine manufacturers need to develop new technologies enhancing the nonpolluting properties of fuels. In that sense, the testing and development of alternative fuels such as biodiesel are of great importance. Fuel testing is nowadays a matter of both experimental and numerical work. Research on diesel engine fuels involves the use of surrogates, for which the combustion mechanisms are well known and relatively similar to those of the investigated fuel. Biodiesel, due to its complex molecular configuration, is still the subject of numerous investigations in that area. This study presents the comparison of four biodiesel surrogates, methyl-butanoate, ethyl-butyrate, methyl-decanoate, and methyl-9-decenoate, in a 0D phenomenological combustion model. They were investigated for in-cylinder pressure, thermal efficiency, and NOx emissions. Experiments were performed on a six-cylinder turbocharged DI diesel engine fuelled by methyl ester (MEB) and ethyl ester (EEB) biodiesel from wasted frying oil. Results showed that, among the four surrogates, methyl-butanoate presented the best results for all the studied parameters. In-cylinder pressure and thermal efficiency were predicted with good accuracy by all four surrogates. NOx emissions were well predicted with methyl-butanoate, but the other three gave approximation errors of over 50%.

  4. Evaluation of a surrogate contact model of TKA

    NARCIS (Netherlands)

    Marra, M.A.; Andersen, M.S.; Koopman, H.F.J.M.; Janssen, D.; Verdonschot, N.

    2016-01-01

    INTRODUCTION: Simultaneous prediction of body-level dynamics and detailed joint mechanics in the frame of musculoskeletal (MS) modeling represents still a highly computationally demanding task. Marra et al. (2014) recently presented and validated a MS model capable of concurrent prediction of muscle

  5. Comparison of surrogate models with different methods in ...

    Indian Academy of Sciences (India)

    In this article, polynomial regression (PR), radial basis function artificial neural network (RBFANN), and kriging ..... 10 kriging models with different parameters were also obtained. ..... shapes using stochastic optimization methods and com-.

  6. Emulating facial biomechanics using multivariate partial least squares surrogate models

    OpenAIRE

    Martens, Harald; Wu, Tim; Hunter, Peter; Mithraratne, Kumar

    2014-01-01

    A detailed biomechanical model of the human face driven by a network of muscles is a useful tool in relating muscle activities to facial deformations. However, lengthy computational times often hinder its applications in practical settings. The objective of this study is to replace the precise but computationally demanding biomechanical model by a much faster multivariate meta-mode...

  7. Evaluation of kriging based surrogate models constructed from mesoscale computations of shock interaction with particles

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Oishik, E-mail: oishik-sen@uiowa.edu [Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Gaul, Nicholas J., E-mail: nicholas-gaul@ramdosolutions.com [RAMDO Solutions, LLC, Iowa City, IA 52240 (United States); Choi, K.K., E-mail: kyung-choi@uiowa.edu [Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Jacobs, Gustaaf, E-mail: gjacobs@sdsu.edu [Aerospace Engineering, San Diego State University, San Diego, CA 92115 (United States); Udaykumar, H.S., E-mail: hs-kumar@uiowa.edu [Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2017-05-01

    Macro-scale computations of shocked particulate flows require closure laws that model the exchange of momentum/energy between the fluid and particle phases. Closure laws are constructed in this work in the form of surrogate models derived from highly resolved mesoscale computations of shock-particle interactions. The mesoscale computations are performed to calculate the drag force on a cluster of particles for different values of Mach Number and particle volume fraction. Two Kriging-based methods, viz. the Dynamic Kriging Method (DKG) and the Modified Bayesian Kriging Method (MBKG) are evaluated for their ability to construct surrogate models with sparse data; i.e. using the least number of mesoscale simulations. It is shown that if the input data is noise-free, the DKG method converges monotonically; convergence is less robust in the presence of noise. The MBKG method converges monotonically even with noisy input data and is therefore more suitable for surrogate model construction from numerical experiments. This work is the first step towards a full multiscale modeling of interaction of shocked particle laden flows.
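A bare-bones kriging-style surrogate (plain Gaussian-process interpolation with a Gaussian kernel, not the DKG or MBKG methods themselves) built from a handful of invented "mesoscale" drag values might look like:

```python
import numpy as np

# Gaussian (squared-exponential) covariance kernel.
def kernel(a, b, length=0.5):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# Stand-in "mesoscale" data: drag coefficient vs Mach number
# (a hypothetical smooth trend, not values from the paper).
x_train = np.array([0.5, 0.8, 1.1, 1.4, 1.7, 2.0])
y_train = 1.0 + 0.4 * np.tanh(3.0 * (x_train - 1.0))

# Solve the kriging system (small jitter for numerical stability).
K = kernel(x_train, x_train) + 1e-10 * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)

def surrogate(x):
    """Kriging prediction at new Mach numbers x."""
    return kernel(np.atleast_1d(x), x_train) @ alpha

# In the noise-free case kriging interpolates the training data exactly,
# and predicts smoothly in between.
assert np.allclose(surrogate(x_train), y_train)
print(float(surrogate(1.25)[0]))
```

The DKG/MBKG methods discussed in the abstract add adaptive hyperparameter selection and noise handling on top of this basic interpolation structure.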

  8. Uncertainty propagation through an aeroelastic wind turbine model using polynomial surrogates

    DEFF Research Database (Denmark)

    Murcia Leon, Juan Pablo; Réthoré, Pierre-Elouan; Dimitrov, Nikolay Krasimirov

    2018-01-01

    Polynomial surrogates are used to characterize the energy production and lifetime equivalent fatigue loads for different components of the DTU 10 MW reference wind turbine under realistic atmospheric conditions. The variability caused by different turbulent inflow fields is captured by creating …-alignment. The methodology presented extends the deterministic power and thrust coefficient curves to uncertainty models and adds new variables like damage equivalent fatigue loads in different components of the turbine. These surrogate models can then be implemented inside other work-flows such as: estimation of the uncertainty in annual energy production due to wind resource variability and/or robust wind power plant layout optimization. It can be concluded that it is possible to capture the global behavior of a modern wind turbine and its uncertainty under realistic inflow conditions using polynomial response surfaces.

  9. Efficient stochastic EMC/EMI analysis using HDMR-generated surrogate models

    KAUST Repository

    Yücel, Abdulkadir C.

    2011-08-01

    Stochastic methods have been used extensively to quantify effects due to uncertainty in system parameters (e.g. material, geometrical, and electrical constants) and/or excitation on observables pertinent to electromagnetic compatibility and interference (EMC/EMI) analysis (e.g. voltages across mission-critical circuit elements) [1]. In recent years, stochastic collocation (SC) methods, especially those leveraging generalized polynomial chaos (gPC) expansions, have received significant attention [2, 3]. SC-gPC methods probe surrogate models (i.e. compact polynomial input-output representations) to statistically characterize observables. They are nonintrusive, that is they use existing deterministic simulators, and often cost only a fraction of direct Monte-Carlo (MC) methods. Unfortunately, SC-gPC-generated surrogate models often lack accuracy (i) when the number of uncertain/random system variables is large and/or (ii) when the observables exhibit rapid variations. © 2011 IEEE.
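For intuition, the snippet below builds a one-dimensional polynomial-chaos surrogate for an observable f(Z) = exp(Z) of a standard-normal input, projecting onto probabilists' Hermite polynomials with Gauss-Hermite quadrature. This is a generic gPC illustration, not the HDMR construction of the paper:

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

# Stand-in "observable" of a standard-normal random input Z.
f = np.exp
deg = 8

# Gauss-Hermite(e) quadrature nodes/weights for expectations under N(0,1).
x, w = He.hermegauss(30)
w = w / np.sqrt(2 * np.pi)          # normalize so the weights sum to 1

# gPC coefficients: c_k = E[f(Z) He_k(Z)] / k!  (He_k orthogonal with
# norm k! under the standard normal weight).
coeffs = np.array([np.sum(w * f(x) * He.hermeval(x, np.eye(deg + 1)[k]))
                   / factorial(k) for k in range(deg + 1)])

def surrogate(z):
    """Evaluate the truncated chaos expansion sum_k c_k He_k(z)."""
    return He.hermeval(z, coeffs)

# c_0 is the mean E[f(Z)] = exp(1/2); the surrogate is cheap to evaluate
# anywhere in the input space.
print(coeffs[0], float(surrogate(1.0)))
```

SC-gPC methods build exactly this kind of compact polynomial input-output representation, but in many stochastic dimensions, which is where the accuracy issues noted in the abstract arise.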

  10. Generator Approach to Evolutionary Optimization of Catalysts and its Integration with Surrogate Modeling

    Czech Academy of Sciences Publication Activity Database

    Holeňa, Martin; Linke, D.; Rodemerck, U.

    2011-01-01

    Roč. 159, č. 1 (2011), s. 84-95 ISSN 0920-5861 R&D Projects: GA ČR GA201/08/0802 Institutional research plan: CEZ:AV0Z10300504 Keywords : optimization of catalytic materials * evolutionary optimization * surrogate modeling * artificial neural networks * multilayer perceptron * regression boosting Subject RIV: IN - Informatics, Computer Science Impact factor: 3.407, year: 2011

  11. Coastal aquifer management based on surrogate models and multi-objective optimization

    Science.gov (United States)

    Mantoglou, A.; Kourakos, G.

    2011-12-01

    The demand for fresh water in coastal areas and islands can be very high, especially in summer months, due to increased local needs and tourism. In order to satisfy demand, a combined management plan is proposed which involves: i) desalinization (if needed) of pumped water to a potable level using reverse osmosis and ii) injection of biologically treated waste water into the aquifer. The management plan is formulated into a multiobjective optimization framework, where simultaneous minimization of economic and environmental costs is desired; subject to a constraint to satisfy demand. The method requires modeling tools, which are able to predict the salinity levels of the aquifer in response to different alternative management scenarios. Variable density models can simulate the interaction between fresh and saltwater; however, they are computationally intractable when integrated in optimization algorithms. In order to alleviate this problem, a multi objective optimization algorithm is developed combining surrogate models based on Modular Neural Networks [MOSA(MNN)]. The surrogate models are trained adaptively during optimization based on a Genetic Algorithm. In the crossover step of the genetic algorithm, each pair of parents generates a pool of offspring. All offspring are evaluated based on the fast surrogate model. Then only the most promising offspring are evaluated based on the exact numerical model. This eliminates errors in Pareto solution due to imprecise predictions of the surrogate model. Three new criteria for selecting the most promising offspring were proposed, which improve the Pareto set and maintain the diversity of the optimum solutions. The method has important advancements compared to previous methods, e.g. alleviation of propagation of errors due to surrogate model approximations. The method is applied to a real coastal aquifer in the island of Santorini which is a very touristy island with high water demands. 
The results show that the algorithm
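The offspring pre-screening step described above can be sketched generically: evaluate the whole offspring pool with the cheap surrogate, and pass only the most promising few to the exact, expensive model. Both models below are invented one-dimensional stand-ins, not the variable-density simulator or the MNN surrogate:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical expensive "exact model" (stand-in for a numerical simulator).
def exact(x):
    return (x - 0.6) ** 2

# Cheap surrogate: the exact model plus a systematic approximation error.
def cheap_surrogate(x):
    return (x - 0.55) ** 2 + 0.05 * np.sin(20 * x)

# One crossover step: a pool of offspring is screened with the surrogate;
# only the most promising few receive an exact evaluation, which keeps
# surrogate errors out of the final selection.
pool = rng.uniform(0.0, 1.0, 40)
promising = pool[np.argsort(cheap_surrogate(pool))[:5]]
selected = promising[np.argmin(exact(promising))]

exact_calls_saved = len(pool) - len(promising)
print(exact_calls_saved, float(selected))
```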

  12. Efficient surrogate models for reliability analysis of systems with multiple failure modes

    International Nuclear Information System (INIS)

    Bichon, Barron J.; McFarland, John M.; Mahadevan, Sankaran

    2011-01-01

    Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and accurate method for system-level reliability analysis. - Highlights: → Extends efficient global reliability analysis to systems with multiple failure modes. → Constructs locally accurate Gaussian process models of each response. → Highly efficient and accurate method for assessing system reliability. → Effectiveness is demonstrated on several test problems from the literature.

  13. Using surrogate biomarkers to improve measurement error models in nutritional epidemiology

    Science.gov (United States)

    Keogh, Ruth H; White, Ian R; Rodwell, Sheila A

    2013-01-01

    Nutritional epidemiology relies largely on self-reported measures of dietary intake, errors in which give biased estimated diet–disease associations. Self-reported measurements come from questionnaires and food records. Unbiased biomarkers are scarce; however, surrogate biomarkers, which are correlated with intake but not unbiased, can also be useful. It is important to quantify and correct for the effects of measurement error on diet–disease associations. Challenges arise because there is no gold standard, and errors in self-reported measurements are correlated with true intake and each other. We describe an extended model for error in questionnaire, food record, and surrogate biomarker measurements. The focus is on estimating the degree of bias in estimated diet–disease associations due to measurement error. In particular, we propose using sensitivity analyses to assess the impact of changes in values of model parameters which are usually assumed fixed. The methods are motivated by and applied to measures of fruit and vegetable intake from questionnaires, 7-day diet diaries, and surrogate biomarker (plasma vitamin C) from over 25000 participants in the Norfolk cohort of the European Prospective Investigation into Cancer and Nutrition. Our results show that the estimated effects of error in self-reported measurements are highly sensitive to model assumptions, resulting in anything from a large attenuation to a small amplification in the diet–disease association. Commonly made assumptions could result in a large overcorrection for the effects of measurement error. Increased understanding of relationships between potential surrogate biomarkers and true dietary intake is essential for obtaining good estimates of the effects of measurement error in self-reported measurements on observed diet–disease associations. Copyright © 2013 John Wiley & Sons, Ltd. PMID:23553407
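The attenuation ("regression dilution") effect at the heart of such measurement-error models is easy to demonstrate numerically; the numbers below are synthetic, not EPIC-Norfolk data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Classical measurement error attenuates an estimated diet-disease
# association: observed slope ~ lambda * true slope, where
# lambda = var(true intake) / (var(true intake) + var(error)).
n = 100_000
true_intake = rng.normal(0.0, 1.0, n)
reported = true_intake + rng.normal(0.0, 1.0, n)   # self-report with error
outcome = 0.5 * true_intake + rng.normal(0.0, 1.0, n)

# Regressing the outcome on the error-prone report gives a biased slope.
slope_obs = np.cov(reported, outcome)[0, 1] / np.var(reported)
lam = 1.0 / (1.0 + 1.0)        # attenuation factor for unit variances
print(slope_obs, lam * 0.5)
```

Correction methods attempt to recover the true slope by estimating lambda, which is exactly where assumptions about surrogate biomarkers enter.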

  14. Probabilistic Fatigue Damage Prognosis Using a Surrogate Model Trained Via 3D Finite Element Analysis

    Science.gov (United States)

    Leser, Patrick E.; Hochhalter, Jacob D.; Newman, John A.; Leser, William P.; Warner, James E.; Wawrzynek, Paul A.; Yuan, Fuh-Gwo

    2015-01-01

    Utilizing inverse uncertainty quantification techniques, structural health monitoring can be integrated with damage progression models to form probabilistic predictions of a structure's remaining useful life. However, damage evolution in realistic structures is physically complex. Accurately representing this behavior requires high-fidelity models which are typically computationally prohibitive. In the present work, a high-fidelity finite element model is represented by a surrogate model, reducing computation times. The new approach is used with damage diagnosis data to form a probabilistic prediction of remaining useful life for a test specimen under mixed-mode conditions.

  15. Integration of computational modeling and experimental techniques to design fuel surrogates

    DEFF Research Database (Denmark)

    Choudhury, H.A.; Intikhab, S.; Kalakul, Sawitree

    2017-01-01

    performance. A simplified alternative is to develop surrogate fuels that have fewer compounds and emulate certain important desired physical properties of the target fuels. Six gasoline blends were formulated through a computer aided model based technique “Mixed Integer Non-Linear Programming” (MINLP...... Virtual Process-Product Design Laboratory (VPPD-Lab) are applied onto the defined compositions of the surrogate gasoline. The aim is to primarily verify the defined composition of gasoline by means of VPPD-Lab. ρ, η and RVP are calculated with more accuracy and constraints such as distillation curve...... and flash point on the blend design are also considered. A post-design experiment-based verification step is proposed to further improve and fine-tune the “best” selected gasoline blends following the computation work. Here, advanced experimental techniques are used to measure the RVP, ρ, η, RON...

  16. Application of Design of Experiments and Surrogate Modeling within the NASA Advanced Concepts Office, Earth-to-Orbit Design Process

    Science.gov (United States)

    Zwack, Mathew R.; Dees, Patrick D.; Holt, James B.

    2016-01-01

    Decisions made during early conceptual design have a large impact upon the expected life-cycle cost (LCC) of a new program. It is widely accepted that up to 80% of such cost is committed during these early design phases. Therefore, to help minimize LCC, decisions made during conceptual design must be based upon as much information as possible. To aid in the decision making for new launch vehicle programs, the Advanced Concepts Office (ACO) at NASA Marshall Space Flight Center (MSFC) provides rapid turnaround pre-phase A and phase A concept definition studies. The ACO team utilizes a proven set of tools to provide customers with a full vehicle mass breakdown to tertiary subsystems, preliminary structural sizing based upon worst-case flight loads, and trajectory optimization to quantify integrated vehicle performance for a given mission. Although the team provides rapid turnaround for single vehicle concepts, the scope of the trade space can be limited due to analyst availability and the manpower requirements for manual execution of the analysis tools. In order to enable exploration of a broader design space, the ACO team has implemented an advanced design methods (ADM) based approach. This approach applies the concepts of design of experiments (DOE) and surrogate modeling to more exhaustively explore the trade space and provide the customer with additional design information to inform decision making. This paper will first discuss the automation of the ACO tool set, which represents a majority of the development effort. In order to fit a surrogate model within tolerable error bounds a number of DOE cases are needed. This number will scale with the number of variable parameters desired and the complexity of the system's response to those variables. For all but the smallest design spaces, the number of cases required cannot be produced within an acceptable timeframe using a manual process. Therefore, automation of the tools was a key enabler for the successful

  17. Development of surrogate models using artificial neural network for building shell energy labelling

    International Nuclear Information System (INIS)

    Melo, A.P.; Cóstola, D.; Lamberts, R.; Hensen, J.L.M.

    2014-01-01

    Surrogate models are an important part of building energy labelling programs, but these models still present low accuracy, particularly in cooling-dominated climates. The objective of this study was to evaluate the feasibility of using an artificial neural network (ANN) to improve the accuracy of surrogate models for labelling purposes. An ANN was applied to model the building stock of a city in Brazil, based on the results of extensive simulations using the high-resolution building energy simulation program EnergyPlus. Sensitivity and uncertainty analyses were carried out to evaluate the behaviour of the ANN model, and the variations in the best and worst performance for several typologies were analysed in relation to variations in the input parameters and building characteristics. The results obtained indicate that an ANN can represent the interaction between input and output data for a vast and diverse building stock. Sensitivity analysis showed that no single input parameter can be identified as the main factor responsible for the building energy performance. The uncertainty associated with several parameters plays a major role in assessing building energy performance, together with the facade area and the shell-to-floor ratio. The results of this study may have a profound impact as ANNs could be applied in the future to define regulations in many countries, with positive effects on optimizing the energy consumption. - Highlights: • We model several typologies which have variation in input parameters. • We evaluate the accuracy of surrogate models for labelling purposes. • ANN is applied to model the building stock. • Uncertainty in building plays a major role in the building energy performance. • Results show that ANN could help to develop building energy labelling systems

  18. Statistical surrogate models for prediction of high-consequence climate change.

    Energy Technology Data Exchange (ETDEWEB)

    Constantine, Paul; Field, Richard V., Jr.; Boslough, Mark Bruce Elrick

    2011-09-01

In safety engineering, performance metrics are defined using probabilistic risk assessments focused on the low-probability, high-consequence tail of the distribution of possible events, as opposed to best estimates based on central tendencies. We frame the climate change problem and its associated risks in a similar manner. To properly explore the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We therefore propose the use of specialized statistical surrogate models (SSMs) for the purpose of exploring the probability law of various climate variables of interest. An SSM is different from a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field. The SSM can be calibrated to available spatial and temporal data from existing climate databases, e.g., the Program for Climate Model Diagnosis and Intercomparison (PCMDI), or to a collection of outputs from a General Circulation Model (GCM), e.g., the Community Earth System Model (CESM) and its predecessors. Because of its reduced size and complexity, the realization of a large number of independent model outputs from an SSM becomes computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework is developed to provide quantitative measures of confidence, via Bayesian credible intervals, in the use of the proposed approach to assess these risks.

  19. Multi-model polynomial chaos surrogate dictionary for Bayesian inference in elasticity problems

    KAUST Repository

    Contreras, Andres A.; Le Maî tre, Olivier P.; Aquino, Wilkins; Knio, Omar

    2016-01-01

    of stiff inclusions embedded in a soft matrix, mimicking tumors in soft tissues. We rely on a polynomial chaos (PC) surrogate to accelerate the inference process. The PC surrogate predicts the dependence of the displacements field with the random elastic

  20. Multi-model polynomial chaos surrogate dictionary for Bayesian inference in elasticity problems

    KAUST Repository

    Contreras, Andres A.

    2016-09-19

A method is presented for inferring the presence of an inclusion inside a domain; the proposed approach is suitable to be used in a diagnostic device with low computational power. Specifically, we use the Bayesian framework for the inference of stiff inclusions embedded in a soft matrix, mimicking tumors in soft tissues. We rely on a polynomial chaos (PC) surrogate to accelerate the inference process. The PC surrogate predicts the dependence of the displacements field with the random elastic moduli of the materials, and are computed by means of the stochastic Galerkin (SG) projection method. Moreover, the inclusion's geometry is assumed to be unknown, and this is addressed by using a dictionary consisting of several geometrical models with different configurations. A model selection approach based on the evidence provided by the data (Bayes factors) is used to discriminate among the different geometrical models and select the most suitable one. The idea of using a dictionary of pre-computed geometrical models helps to maintain the computational cost of the inference process very low, as most of the computational burden is carried out off-line for the resolution of the SG problems. Numerical tests are used to validate the methodology, assess its performance, and analyze the robustness to model errors. © 2016 Elsevier Ltd
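The dictionary-based selection step can be sketched as follows. This is a toy sketch with assumed forward maps and noise level, not the paper's pre-computed PC surrogates: each dictionary entry is a candidate geometrical model, its evidence is the likelihood integrated over a prior on a stiffness parameter, and Bayes factors compare the entries.

```python
import math

data = [1.9, 2.1, 2.0]               # hypothetical displacement measurements
sigma = 0.1                          # assumed measurement noise std

def loglik(pred):
    # Gaussian log-likelihood of the data given a predicted displacement
    return sum(-0.5 * ((d - pred) / sigma) ** 2
               - math.log(sigma * math.sqrt(2.0 * math.pi)) for d in data)

models = {                           # hypothetical geometry -> displacement maps
    "circular": lambda r: 4.0 / r,         # can match the data near r = 2
    "elliptic": lambda r: 1.0 + 0.2 * r,   # never predicts more than 1.8
}

grid = [0.5 + 0.01 * i for i in range(351)]     # uniform prior on [0.5, 4.0]
w = 0.01 / 3.5                                  # prior mass per grid cell

# Evidence of each model = integral of likelihood over its parameter prior
evidence = {name: sum(math.exp(loglik(f(r))) * w for r in grid)
            for name, f in models.items()}
bayes_factor = evidence["circular"] / evidence["elliptic"]
best = max(evidence, key=evidence.get)          # model selected by the data
```

In the paper the expensive part (the forward maps) is done off-line via stochastic Galerkin; only the cheap evidence comparison runs on the diagnostic device.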

  1. Development of surrogate models for the prediction of the flow around an aircraft propeller

    Science.gov (United States)

    Salpigidou, Christina; Misirlis, Dimitris; Vlahostergios, Zinon; Yakinthos, Kyros

    2018-05-01

In the present work, the derivation of two surrogate models (SMs) for modelling the flow around a propeller for small aircraft is presented. Both methodologies use derived functions based on computations with the detailed propeller geometry. The computations were performed using the k-ω shear stress transport model for turbulence. In the SMs, the propeller was modelled in a computational domain of disk-like geometry, where source terms were introduced in the momentum equations. In the first SM, the source terms were polynomial functions of swirl and thrust, mainly related to the propeller radius. In the second SM, regression analysis was used to correlate the source terms with the velocity distribution through the propeller. The proposed SMs achieved faster convergence than the detailed model, while also providing results closer to the available operational data. The regression-based model was the most accurate and required less computational time to converge.

  2. Estimation of k-ε parameters using surrogate models and jet-in-crossflow data

    Energy Technology Data Exchange (ETDEWEB)

    Lefantzi, Sophia [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Arunajatesan, Srinivasan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Dechant, Lawrence [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2014-11-01

We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds Averaged Navier Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDF), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently, a quick-running surrogate is used instead of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating 3 k-ε parameters (C_μ, C_ε2, C_ε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that the calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data. Thus the primary reason for poor predictive skill of RANS, when using nominal
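The core speed-up — running MCMC against a cheap surrogate instead of the simulator — can be sketched in a few lines. Everything here is illustrative (a single parameter, a quadratic stand-in for a RANS run, a noise-free observation), not the paper's setup:

```python
import math, random

random.seed(0)

def expensive_model(c):              # stand-in for a 3D RANS run
    return 1.0 + 2.0 * c + 0.5 * c * c

# Three "training runs" of the expensive code give an exact quadratic
# surrogate via Lagrange interpolation (a real study uses many runs).
xs = [0.0, 1.0, 2.0]
ys = [expensive_model(x) for x in xs]

def surrogate(c):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (c - xj) / (xi - xj)
        total += yi * li
    return total

c_true, sigma = 1.2, 0.1
observed = expensive_model(c_true)   # noise-free observation for clarity

def logpost(c):
    if not 0.0 <= c <= 2.0:          # uniform prior on [0, 2]
        return -math.inf
    return -0.5 * ((observed - surrogate(c)) / sigma) ** 2

# Metropolis sampling: every likelihood call hits the cheap surrogate.
samples, c = [], 1.0
lp = logpost(c)
for _ in range(20000):
    prop = c + random.gauss(0, 0.2)
    lp_prop = logpost(prop)
    if math.log(random.random()) < lp_prop - lp:
        c, lp = prop, lp_prop
    samples.append(c)

tail = samples[5000:]                # discard burn-in
posterior_mean = sum(tail) / len(tail)
```

The posterior mean recovers the value used to generate the observation; in the paper the extra ingredient is the treed-linear-model classifier restricting the prior to the region where such a surrogate is trustworthy.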

  3. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of the optimization. However, earlier algorithms have drawbacks: their optimization loop involves three phases and relies on empirical parameters. We propose a united sampling criterion to simplify the algorithm and to achieve the global optimum of problems with constraints without any empirical parameters. It is able to select points located in a feasible region with high model uncertainty as well as points along the boundary of a constraint at the lowest objective value. The mean squared error determines which criterion is more dominant between the infill sampling criterion and the boundary sampling criterion. Also, the method guarantees the accuracy of the surrogate model because the sample points are not located within extremely small regions, as they can be in super-EGO. The performance of the proposed method, such as the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
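A standard ingredient of such sequential sampling criteria is the expected-improvement (EI) infill measure, which trades off a low surrogate mean against high surrogate uncertainty. A minimal sketch follows; the mean/std inputs are illustrative numbers, whereas in practice they come from the kriging surrogate:

```python
import math

def expected_improvement(mu, sd, y_best):
    """EI for minimization at a point with surrogate mean mu and std sd."""
    if sd <= 0.0:
        return max(y_best - mu, 0.0)
    z = (y_best - mu) / sd
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))        # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (y_best - mu) * cdf + sd * pdf

# A point with a worse mean but high uncertainty can outrank a point with a
# slightly better mean and low uncertainty -- the exploration term at work.
ei_explore = expected_improvement(mu=1.2, sd=0.8, y_best=1.0)
ei_exploit = expected_improvement(mu=0.95, sd=0.01, y_best=1.0)
```

The paper's united criterion adds a constraint-boundary term on top of this kind of infill measure, with the surrogate's mean squared error arbitrating between the two.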

  4. Active learning surrogate models for the conception of systems with multiple failure modes

    International Nuclear Information System (INIS)

    Perrin, G.

    2016-01-01

Due to performance and certification criteria, complex mechanical systems have to take into account several constraints, which can be associated with a series of performance functions. Different software packages are generally used to evaluate such functions, and their computational costs can vary greatly. In design or reliability analysis, we are thus interested in identifying the boundaries of the domain where all these constraints are satisfied, at minimal total computational cost. To this end, the present work proposes an iterative method to maximize the knowledge about these limits while trying to minimize the required number of evaluations of each performance function. This method is based, first, on Gaussian process surrogate models that are defined on nested sub-spaces and, second, on an original selection criterion that takes into account the computational cost associated with each performance function. After presenting the theoretical basis of this approach, this paper compares its efficiency to alternative methods on an example. - Highlights: • An iterative method to identify the limits of a system is proposed. • The method is based on nested Gaussian process surrogate models. • A new selection criterion that is adapted to the system case is presented. • The interest of the method is illustrated on an analytical example.
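The cost-aware selection idea can be caricatured very simply: among candidate evaluations, prefer the one offering the most surrogate uncertainty per unit of simulation cost. The variances and costs below are made up for illustration and do not reproduce the paper's criterion:

```python
# Candidate (point, performance-function) pairs; in practice the variance
# would come from the Gaussian process surrogate of each performance
# function, and the cost from the runtime of the software evaluating it.
candidates = [
    {"point": "x1", "function": "stress",  "variance": 0.40, "cost": 10.0},
    {"point": "x2", "function": "stress",  "variance": 0.10, "cost": 10.0},
    {"point": "x1", "function": "thermal", "variance": 0.30, "cost": 1.0},
]

# Evaluate next wherever uncertainty reduction per unit cost is largest.
best = max(candidates, key=lambda c: c["variance"] / c["cost"])
```

Here the cheap thermal run wins despite the stress surrogate being more uncertain in absolute terms, which is the trade-off the paper's criterion formalizes.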

  5. A Model Fit Statistic for Generalized Partial Credit Model

    Science.gov (United States)

    Liang, Tie; Wells, Craig S.

    2009-01-01

    Investigating the fit of a parametric model is an important part of the measurement process when implementing item response theory (IRT), but research examining it is limited. A general nonparametric approach for detecting model misfit, introduced by J. Douglas and A. S. Cohen (2001), has exhibited promising results for the two-parameter logistic…

  6. Goodness-of-Fit Assessment of Item Response Theory Models

    Science.gov (United States)

    Maydeu-Olivares, Alberto

    2013-01-01

    The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…

  7. A neural network construction method for surrogate modeling of physics-based analysis

    Science.gov (United States)

    Sung, Woong Je

    connection as a zero-weight connection, the potential contribution to training error reduction of any present or absent connection can readily be evaluated using the BP algorithm. Instead of being broken, the connections that contribute less remain frozen with constant weight values optimized to that point but they are excluded from further weight optimization until reselected. In this way, a selective weight optimization is executed only for the dynamically maintained pool of high gradient connections. By searching the rapidly changing weights and concentrating optimization resources on them, the learning process is accelerated without either a significant increase in computational cost or a need for re-training. This results in a more task-adapted network connection structure. Combined with another important criterion for the division of a neuron which adds a new computational unit to a network, a highly fitted network can be grown out of the minimal random structure. This particular learning strategy can belong to a more broad class of the variable connectivity learning scheme and the devised algorithm has been named Optimal Brain Growth (OBG). The OBG algorithm has been tested on two canonical problems; a regression analysis using the Complicated Interaction Regression Function and a classification of the Two-Spiral Problem. A comparative study with conventional Multilayer Perceptrons (MLPs) consisting of single- and double-hidden layers shows that OBG is less sensitive to random initial conditions and generalizes better with only a minimal increase in computational time. This partially proves that a variable connectivity learning scheme has great potential to enhance computational efficiency and reduce efforts to select proper network architecture. 
To investigate the applicability of the OBG to more practical surrogate modeling tasks, the geometry-to-pressure mapping of a particular class of airfoils in the transonic flow regime has been sought using both the

  8. Surrogate runner model for draft tube losses computation within a wide range of operating points

    International Nuclear Information System (INIS)

    Susan-Resiga, R; Ciocan, T; Muntean, S; De Colombel, T; Leroy, P

    2014-01-01

We introduce a quasi two-dimensional (Q2D) methodology for assessing the swirling flow exiting the runner of hydraulic turbines at arbitrary operating points, within a wide operating range. The Q2D model does not need actual runner computations; as a result, it represents a surrogate runner model for a priori assessment of the swirling flow ingested by the draft tube. The axial, radial and circumferential velocity components are computed on a conical section located immediately downstream of the runner blade trailing edge, then used as inlet conditions for regular draft tube computations. The main advantage of our model is that it allows the determination of the draft tube losses within the intended turbine operating range in the early design stages of a new or refurbished runner, thus providing a robust and systematic methodology to meet the optimal requirements for the flow at the runner outlet.

  9. A reduced order aerothermodynamic modeling framework for hypersonic vehicles based on surrogate and POD

    Directory of Open Access Journals (Sweden)

    Chen Xin

    2015-10-01

Aerothermoelasticity is one of the key technologies for hypersonic vehicles. Accurate and efficient computation of the aerothermodynamics is one of the primary challenges for hypersonic aerothermoelastic analysis. To address the shortcomings of engineering calculation, computational fluid dynamics (CFD) and experimental investigation, a reduced order modeling (ROM) framework for aerothermodynamics based on CFD predictions is developed using an enhanced algorithm of fast maximin Latin hypercube design. Both proper orthogonal decomposition (POD) and surrogate approaches are considered and compared to construct ROMs. Two surrogate approaches, Kriging and optimized radial basis function (ORBF), are utilized to construct ROMs. Furthermore, an enhanced algorithm of fast maximin Latin hypercube design is proposed, which proves helpful in improving the precision of the ROMs. Test results for the three-dimensional aerothermodynamics over a hypersonic surface indicate that ROMs based on Kriging are more precise than those based on ORBF, and marginally more accurate than ROMs based on POD-Kriging. In summary, the ROM framework for hypersonic aerothermodynamics has good precision and efficiency.

  10. Gasoline surrogate modeling of gasoline ignition in a rapid compression machine and comparison to experiments

    Energy Technology Data Exchange (ETDEWEB)

    Mehl, M; Kukkadapu, G; Kumar, K; Sarathy, S M; Pitz, W J; Sung, S J

    2011-09-15

The use of gasoline in homogeneous charge compression ignition (HCCI) engines and in dual-fuel diesel–gasoline engines has increased the need to understand its compression ignition processes under engine-like conditions. These processes need to be studied under well-controlled conditions in order to quantify low temperature heat release and to provide fundamental validation data for chemical kinetic models. With this in mind, an experimental campaign has been undertaken in a rapid compression machine (RCM) to measure the ignition of gasoline mixtures over a wide range of compression temperatures and for different compression pressures. By measuring the pressure history during ignition, information on the first stage ignition (when observed) and second stage ignition is captured along with information on the phasing of the heat release. Heat release processes during ignition are important because gasoline is known to exhibit low temperature heat release, intermediate temperature heat release and high temperature heat release. In an HCCI engine, the occurrence of low-temperature and intermediate-temperature heat release can be exploited to obtain higher load operation and has become a topic of much interest for engine researchers. Consequently, it is important to understand these processes under well-controlled conditions. A four-component gasoline surrogate model (including n-heptane, iso-octane, toluene, and 2-pentene) has been developed to simulate real gasolines. An appropriate surrogate mixture of the four components has been developed to simulate the specific gasoline used in the RCM experiments. This chemical kinetic surrogate model was then used to simulate the RCM experimental results for real gasoline. The experimental and modeling results covered ultra-lean to stoichiometric mixtures, compressed temperatures of 640-950 K, and compression pressures of 20 and 40 bar. The agreement between the experiments and model is encouraging in terms of first

  11. A Stepwise Fitting Procedure for automated fitting of Ecopath with Ecosim models

    Directory of Open Access Journals (Sweden)

    Erin Scott

    2016-01-01

The Stepwise Fitting Procedure automates the testing of alternative hypotheses used for fitting Ecopath with Ecosim (EwE) models to observational reference data (Mackinson et al. 2009). The calibration of EwE model predictions to observed data is important for evaluating any model that will be used for ecosystem-based management. Thus far, the model fitting procedure in EwE has been carried out manually: a repetitive task involving setting up >1000 specific individual searches to find the statistically ‘best fit’ model. The novel fitting procedure automates this manual procedure, producing accurate results and letting the modeller concentrate on investigating the ‘best fit’ model for ecological accuracy.

  12. Local fit evaluation of structural equation models using graphical criteria.

    Science.gov (United States)

    Thoemmes, Felix; Rosseel, Yves; Textor, Johannes

    2018-03-01

    Evaluation of model fit is critically important for every structural equation model (SEM), and sophisticated methods have been developed for this task. Among them are the χ² goodness-of-fit test, decomposition of the χ², derived measures like the popular root mean square error of approximation (RMSEA) or comparative fit index (CFI), or inspection of residuals or modification indices. Many of these methods provide a global approach to model fit evaluation: A single index is computed that quantifies the fit of the entire SEM to the data. In contrast, graphical criteria like d-separation or trek-separation allow derivation of implications that can be used for local fit evaluation, an approach that is hardly ever applied. We provide an overview of local fit evaluation from the viewpoint of SEM practitioners. In the presence of model misfit, local fit evaluation can potentially help in pinpointing where the problem with the model lies. For models that do fit the data, local tests can identify the parts of the model that are corroborated by the data. Local tests can also be conducted before a model is fitted at all, and they can be used even for models that are globally underidentified. We discuss appropriate statistical local tests, and provide applied examples. We also present novel software in R that automates this type of local fit evaluation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
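A concrete d-separation-style local test can be sketched as follows (simulated data, not from the article): the chain model X → M → Y implies that X and Y are independent given M, so the sample partial correlation r(X, Y | M) should be near zero even though X and Y are strongly correlated marginally. Checking this single implication requires no global model fit.

```python
import math, random

random.seed(3)

# Simulate data consistent with the chain X -> M -> Y
n = 5000
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.8 * x + random.gauss(0, 1) for x in X]
Y = [0.8 * m + random.gauss(0, 1) for m in M]

def corr(a, b):
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    return num / math.sqrt(sum((u - ma) ** 2 for u in a)
                           * sum((v - mb) ** 2 for v in b))

def partial_corr(a, b, given):
    # First-order partial correlation from the three pairwise correlations
    rab, rag, rbg = corr(a, b), corr(a, given), corr(b, given)
    return (rab - rag * rbg) / math.sqrt((1 - rag ** 2) * (1 - rbg ** 2))

r_marginal = corr(X, Y)          # clearly nonzero: X affects Y through M
r_local = partial_corr(X, Y, M)  # implied to vanish under the chain model
```

A large |r_local| would pinpoint misfit at this specific part of the model, which is exactly the localization the article advocates; its R software automates the derivation and testing of all such implications.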

  13. Diesel Surrogate Fuels for Engine Testing and Chemical-Kinetic Modeling: Compositions and Properties.

    Science.gov (United States)

    Mueller, Charles J; Cannella, William J; Bays, J Timothy; Bruno, Thomas J; DeFabio, Kathy; Dettman, Heather D; Gieleciak, Rafal M; Huber, Marcia L; Kweon, Chol-Bum; McConnell, Steven S; Pitz, William J; Ratcliff, Matthew A

    2016-02-18

    The primary objectives of this work were to formulate, blend, and characterize a set of four ultralow-sulfur diesel surrogate fuels in quantities sufficient to enable their study in single-cylinder-engine and combustion-vessel experiments. The surrogate fuels feature increasing levels of compositional accuracy (i.e., increasing exactness in matching hydrocarbon structural characteristics) relative to the single target diesel fuel upon which the surrogate fuels are based. This approach was taken to assist in determining the minimum level of surrogate-fuel compositional accuracy that is required to adequately emulate the performance characteristics of the target fuel under different combustion modes. For each of the four surrogate fuels, an approximately 30 L batch was blended, and a number of the physical and chemical properties were measured. This work documents the surrogate-fuel creation process and the results of the property measurements.

  14. A surrogate-based sensitivity quantification and Bayesian inversion of a regional groundwater flow model

    Science.gov (United States)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor

    2018-02-01

    Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, the MCMC sampling entails a large number of model calls, and could easily become computationally unwieldy if the high-fidelity hydrogeologic model simulation is time consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using Bagging Multivariate Adaptive Regression Spline (BMARS) algorithm, and hence the MCMC sampling can be efficiently performed. In this study, the MODFLOW model is developed to simulate the groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and used to run representative simulations to generate training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank their importance for the groundwater flow model system. According to sensitivity analysis, insensitive parameters are screened out of Bayesian inversion of the MODFLOW model, further saving computing efforts. The posterior probability distribution of input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool to reduce parameter uncertainties of a groundwater system.
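The screening step — estimating first-order sensitivity indices on a cheap surrogate and dropping insensitive inputs before inversion — can be sketched with a brute-force Sobol'-style estimator. The response function, inputs, and threshold below are all made up; the paper uses a BMARS-based Sobol' method on a MODFLOW surrogate:

```python
import random

random.seed(7)

def surrogate_head(k1, k2, recharge):
    # Hypothetical fast surrogate of a groundwater head response
    return 5.0 * k1 + 0.1 * k2 + 2.0 * recharge

def first_order_index(which):
    """Crude estimate of Var[E(f | x_which)] / Var[f] for U(0,1)^3 inputs."""
    n = 2000
    sample = [[random.random() for _ in range(3)] for _ in range(n)]
    f = [surrogate_head(*row) for row in sample]
    mean_f = sum(f) / n
    var_f = sum((v - mean_f) ** 2 for v in f) / n
    cond_means = []
    for _ in range(50):                      # outer loop: freeze x_which
        fixed = random.random()
        inner = []
        for _ in range(100):                 # inner loop: average over the rest
            row = [random.random() for _ in range(3)]
            row[which] = fixed
            inner.append(surrogate_head(*row))
        cond_means.append(sum(inner) / len(inner))
    m = sum(cond_means) / len(cond_means)
    var_cond = sum((v - m) ** 2 for v in cond_means) / len(cond_means)
    return var_cond / var_f

indices = [first_order_index(i) for i in range(3)]
keep = [i for i, s in enumerate(indices) if s > 0.03]   # screen out the rest
```

Only the retained inputs would enter the MCMC inversion, which is the computational saving the abstract describes.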

  15. Surrogate-driven deformable motion model for organ motion tracking in particle radiation therapy

    Science.gov (United States)

    Fassi, Aurora; Seregni, Matteo; Riboldi, Marco; Cerveri, Pietro; Sarrut, David; Battista Ivaldi, Giovanni; Tabarelli de Fatis, Paola; Liotta, Marco; Baroni, Guido

    2015-02-01

    The aim of this study is the development and experimental testing of a tumor tracking method for particle radiation therapy, providing the daily respiratory dynamics of the patient’s thoraco-abdominal anatomy as a function of an external surface surrogate combined with an a priori motion model. The proposed tracking approach is based on a patient-specific breathing motion model, estimated from the four-dimensional (4D) planning computed tomography (CT) through deformable image registration. The model is adapted to the interfraction baseline variations in the patient’s anatomical configuration. The driving amplitude and phase parameters are obtained intrafractionally from a respiratory surrogate signal derived from the external surface displacement. The developed technique was assessed on a dataset of seven lung cancer patients, who underwent two repeated 4D CT scans. The first 4D CT was used to build the respiratory motion model, which was tested on the second scan. The geometric accuracy in localizing lung lesions, mediated over all breathing phases, ranged between 0.6 and 1.7 mm across all patients. Errors in tracking the surrounding organs at risk, such as lungs, trachea and esophagus, were lower than 1.3 mm on average. The median absolute variation in water equivalent path length (WEL) within the target volume did not exceed 1.9 mm-WEL for simulated particle beams. A significant improvement was achieved compared with error compensation based on standard rigid alignment. The present work can be regarded as a feasibility study for the potential extension of tumor tracking techniques in particle treatments. Differently from current tracking methods applied in conventional radiotherapy, the proposed approach allows for the dynamic localization of all anatomical structures scanned in the planning CT, thus providing complete information on density and WEL variations required for particle beam range adaptation.

  16. TU-CD-BRA-05: Atlas Selection for Multi-Atlas-Based Image Segmentation Using Surrogate Modeling

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

    Purpose: The growing size and heterogeneity of training atlases necessitate sophisticated schemes to identify only the most relevant atlases for a specific multi-atlas-based image segmentation problem. This study aims to develop a model to infer the inaccessible oracle geometric relevance metric from surrogate image similarity metrics, and based on such a model, provide guidance for atlas selection in multi-atlas-based image segmentation. Methods: We relate the oracle geometric relevance metric in label space to the surrogate metric in image space by a monotonically non-decreasing function with additive random perturbations. Subsequently, a surrogate’s ability to prognosticate the oracle order for atlas subset selection is quantified probabilistically. Finally, important insights and guidance are provided for the design of fusion set size, balancing the competing demands to include the most relevant atlases and to exclude the most irrelevant ones. A systematic solution is derived based on an optimization framework. Model verification and performance assessment are performed on clinical prostate MR images. Results: The proposed surrogate model was exemplified by a linear map with normally distributed perturbation, and verified with several commonly used surrogates, including MSD, NCC and (N)MI. The derived behaviors of different surrogates in atlas selection and their corresponding performance in the ultimate label estimate were validated. The performance of NCC and (N)MI was similarly superior to MSD, with a 10% higher atlas selection probability and a segmentation performance increase in DSC by 0.10 with the first and third quartiles of (0.83, 0.89), compared to (0.81, 0.89). The derived optimal fusion set size, valued at 7/8/8/7 for MSD/NCC/MI/NMI, agreed well with the appropriate range [4, 9] from empirical observation. Conclusion: This work has developed an efficacious probabilistic model to characterize the image-based surrogate metric on atlas selection

  17. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my, E-mail: balbir@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: samsul-ariffin@petronas.com.my, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
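    A fixed-frequency sine fit is linear in its coefficients, which makes the pipeline easy to sketch. The pure-Python illustration below (not the authors' code; the single-term sine basis and the function names are assumptions) fits y ≈ a·sin(ωt) + b·cos(ωt) + c by linear least squares and reports the RMSE and R2 goodness-of-fit statistics used in the paper:

```python
import math

def fit_sine(t, y, omega):
    # Linear least squares for y ≈ a*sin(omega*t) + b*cos(omega*t) + c.
    # Basis columns: sin, cos, constant; solve the 3x3 normal equations.
    X = [[math.sin(omega * ti), math.cos(omega * ti), 1.0] for ti in t]
    A = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    r = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda q: abs(A[q][col]))
        A[col], A[piv] = A[piv], A[col]
        r[col], r[piv] = r[piv], r[col]
        for q in range(col + 1, 3):
            f = A[q][col] / A[col][col]
            for c in range(col, 3):
                A[q][c] -= f * A[col][c]
            r[q] -= f * r[col]
    p = [0.0, 0.0, 0.0]
    for q in range(2, -1, -1):
        p[q] = (r[q] - sum(A[q][c] * p[c] for c in range(q + 1, 3))) / A[q][q]
    return p  # [a, b, c]

def goodness_of_fit(t, y, omega, p):
    # RMSE and coefficient of determination R^2 for the fitted curve
    a, b, c = p
    pred = [a * math.sin(omega * ti) + b * math.cos(omega * ti) + c for ti in t]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return math.sqrt(ss_res / len(y)), 1.0 - ss_res / ss_tot
```

A model with more terms (e.g. the two-term Gaussian or sine fits the paper favours) extends the basis matrix in the same way, one column per term.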

  18. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  19. Curve fitting methods for solar radiation data modeling

    International Nuclear Information System (INIS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-01-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  20. Geometric Generalisation of Surrogate Model-Based Optimisation to Combinatorial and Program Spaces

    Directory of Open Access Journals (Sweden)

    Yong-Hyuk Kim

    2014-01-01

    Full Text Available Surrogate models (SMs can profitably be employed, often in conjunction with evolutionary algorithms, in optimisation in which it is expensive to test candidate solutions. The spatial intuition behind SMs makes them naturally suited to continuous problems, and the only combinatorial problems that have been previously addressed are those with solutions that can be encoded as integer vectors. We show how radial basis functions can provide a generalised SM for combinatorial problems which have a geometric solution representation, through the conversion of that representation to a different metric space. This approach allows an SM to be cast in a natural way for the problem at hand, without ad hoc adaptation to a specific representation. We test this adaptation process on problems involving binary strings, permutations, and tree-based genetic programs.
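    The core construction can be sketched in a few lines: define a distance on the combinatorial space and interpolate the known fitness values with radial basis functions centred on the evaluated solutions. The sketch below (an illustration only, not the paper's implementation; the exponential kernel, the gamma value and the OneMax toy fitness are assumptions) does this for binary strings under Hamming distance:

```python
import math

def hamming(a, b):
    # distance between two equal-length binary strings
    return sum(x != y for x, y in zip(a, b))

def rbf_fit(samples, values, gamma=0.5):
    # Solve sum_j w_j * exp(-gamma * d(x_i, x_j)) = y_i for the weights w
    # (plain Gaussian elimination; fine for small evaluated sets).
    n = len(samples)
    A = [[math.exp(-gamma * hamming(samples[i], samples[j])) for j in range(n)]
         for i in range(n)]
    b = list(values)
    for col in range(n):
        piv = max(range(col, n), key=lambda q: abs(A[q][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for q in range(col + 1, n):
            f = A[q][col] / A[col][col]
            for c in range(col, n):
                A[q][c] -= f * A[col][c]
            b[q] -= f * b[col]
    w = [0.0] * n
    for q in range(n - 1, -1, -1):
        w[q] = (b[q] - sum(A[q][c] * w[c] for c in range(q + 1, n))) / A[q][q]
    return w

def rbf_predict(x, samples, w, gamma=0.5):
    # surrogate prediction for an unevaluated candidate x
    return sum(wj * math.exp(-gamma * hamming(x, sj))
               for wj, sj in zip(w, samples))

# toy expensive fitness: OneMax (number of ones) on 5-bit strings
samples = ["00000", "11111", "10101", "00111"]
values = [s.count("1") for s in samples]
w = rbf_fit(samples, values)
```

Swapping Hamming distance for an edit distance on permutations or trees generalizes the same surrogate to other representations, which is the paper's central point.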

  1. A comparative research of different ensemble surrogate models based on set pair analysis for the DNAPL-contaminated aquifer remediation strategy optimization

    Science.gov (United States)

    Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin

    2017-08-01

    Surrogate-based simulation-optimization is an effective technique for optimizing the surfactant enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is key to such research. However, previous studies have generally relied on a stand-alone surrogate model, and have rarely combined multiple methods to further improve the surrogate model's approximation accuracy with respect to the simulation model. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and conducted a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance; the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while maintaining high computational accuracy.
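    The weighted-ensemble idea can be illustrated independently of SPA itself. In the sketch below (hypothetical; SPA derives its weights differently, and simple inverse-validation-error weights stand in here), several surrogate predictors are combined into one ensemble prediction:

```python
def ensemble_weights(val_errors):
    # weight each surrogate inversely to its validation error, normalized
    inv = [1.0 / e for e in val_errors]
    total = sum(inv)
    return [v / total for v in inv]

def ensemble_predict(x, surrogates, weights):
    # ensemble output = weighted sum of individual surrogate predictions
    return sum(w * s(x) for w, s in zip(weights, surrogates))

# toy stand-ins for RBFANN / SVR / Kriging surrogates of f(x) = x^2
rbfann = lambda x: x * x + 0.5   # biased high
svr    = lambda x: x * x - 0.1   # nearly exact
krig   = lambda x: x * x + 1.0   # worst of the three
weights = ensemble_weights([0.5, 0.1, 1.0])  # assumed validation errors
```

The ensemble then tracks the best individual surrogate closely while damping the bias of the weaker ones.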

  2. A review on design of experiments and surrogate models in aircraft real-time and many-query aerodynamic analyses

    Science.gov (United States)

    Yondo, Raul; Andrés, Esther; Valero, Eusebio

    2018-01-01

    Full scale aerodynamic wind tunnel testing, numerical simulation of high-dimensional (full-order) aerodynamic models and flight testing are some of the fundamental but complex steps in the various design phases of recent civil transport aircraft. Current aircraft aerodynamic designs have increased in complexity (multidisciplinary, multi-objective or multi-fidelity) and need to address the challenges posed by the nonlinearity of the objective functions and constraints, uncertainty quantification in aerodynamic problems, and restrained computational budgets. With the aim of reducing the computational burden and generating low-cost but accurate models that mimic those full-order models at different values of the design variables, recent work has introduced surrogate-based approaches into real-time and many-query analyses as rapid and cheaper-to-simulate models. In this paper, a comprehensive state-of-the-art survey on common surrogate modeling techniques and surrogate-based optimization methods is given, with an emphasis on model selection and validation, dimensionality reduction, sensitivity analyses, constraint handling, and infill and stopping criteria. Benefits, drawbacks and comparative discussions in applying those methods are described. Furthermore, the paper familiarizes the readers with surrogate models that have been successfully applied to the general field of fluid dynamics, but not yet in the aerospace industry. Additionally, the review revisits the most popular sampling strategies used in conducting physical and simulation-based experiments in aircraft aerodynamic design. Attractive or smart designs infrequently used in the field and discussions on advanced sampling methodologies are presented, to give a glimpse of the various efficient possibilities for a priori sampling of the parameter space. Closing remarks focus on future perspectives, challenges and shortcomings associated with the use of surrogate models by the aircraft industry.

  3. ITEM LEVEL DIAGNOSTICS AND MODEL - DATA FIT IN ITEM ...

    African Journals Online (AJOL)

    Global Journal

    Item response theory (IRT) is a framework for modeling and analyzing item response ... data. Though, there is an argument that the evaluation of fit in IRT modeling has been ... National Council on Measurement in Education ... model data fit should be based on three types of ... prediction should be assessed through the.

  4. A Comparison of Item Fit Statistics for Mixed IRT Models

    Science.gov (United States)

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G[superscript 2], Orlando and Thissen's S-X[superscript 2] and S-G[superscript 2], and Stone's chi[superscript 2*] and G[superscript 2*]. To investigate the…

  5. A combined sensitivity analysis and kriging surrogate modeling for early validation of health indicators

    International Nuclear Information System (INIS)

    Lamoureux, Benjamin; Mechbal, Nazih; Massé, Jean-Rémi

    2014-01-01

    To increase the dependability of complex systems, one solution is to assess their state of health continuously through the monitoring of variables sensitive to potential degradation modes. When computed in an operating environment, these variables, known as health indicators, are subject to many uncertainties. Hence, the stochastic nature of health assessment combined with the lack of data in design stages makes it difficult to evaluate the efficiency of a health indicator before the system enters into service. This paper introduces a method for early validation of health indicators during the design stages of a system development process. This method uses physics-based modeling and uncertainty propagation to create simulated stochastic data. However, because of the large number of parameters defining the model and its computation duration, the necessary runtime for uncertainty propagation is prohibitive. Thus, kriging is used to obtain low-computation-time estimations of the model outputs. Moreover, sensitivity analysis techniques are performed upstream to determine the hierarchization of the model parameters and to reduce the dimension of the input space. The validation is based on three types of numerical key performance indicators corresponding to the detection, identification and prognostic processes. After having introduced and formalized the framework of uncertain systems modeling and the different performance metrics, the issues of sensitivity analysis and surrogate modeling are addressed. The method is subsequently applied to the validation of a set of health indicators for the monitoring of an aircraft engine’s pumping unit

  6. Utilisation of transparent synthetic soil surrogates in geotechnical physical models: A review

    Directory of Open Access Journals (Sweden)

    Abideen Adekunle Ganiyu

    2016-08-01

    Full Text Available Efforts to obtain non-intrusive measurements of deformations and spatial flow within a soil mass prior to the advent of transparent soils had perceptible limitations. A transparent soil is a two-phase medium composed of synthetic aggregate and fluid components of identical refractive indices, so that the resulting soil is transparent. The transparency facilitates real-life visualisation of the soil continuum in physical models. When applied in conjunction with advanced photogrammetry and image processing techniques, transparent soils enable the quantification of spatial deformation, displacement and multi-phase flow in physical model tests. Transparent synthetic soils have been successfully employed in geotechnical model tests as soil surrogates, based on test results showing that their geotechnical properties replicate those of natural soils. This paper presents a review of transparent synthetic soils and their numerous applications in geotechnical physical models. The properties of the aggregate materials are outlined and the features of the various transparent clays and sands available in the literature are described. The merits of transparent soils are highlighted and the need to broaden their application in geotechnical physical model research is emphasised. This paper will serve as a concise compendium on the subject of transparent soils for future researchers in this field.

  7. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    Science.gov (United States)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

    A multi-fidelity optimization technique by an efficient global optimization process using a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained using the high-fidelity, evaluation-based single-fidelity optimization. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
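    The expected-improvement criterion mentioned above has a standard closed form under a Gaussian predictive distribution. A compact sketch (the generic EGO formula for minimization; not the authors' code) is:

```python
import math

def normal_pdf(z):
    # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    # standard normal cumulative distribution via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, f_best):
    # EI(x) = E[max(f_best - Y, 0)] with Y ~ N(mu, sigma^2), where mu and
    # sigma are the surrogate's prediction and uncertainty at a candidate x
    # and f_best is the best objective value observed so far (minimization)
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    return (f_best - mu) * normal_cdf(z) + sigma * normal_pdf(z)
```

The next high-fidelity sample is placed where EI is largest, balancing exploitation (low predicted mu) against exploration (high sigma).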

  8. Automated Model Fit Method for Diesel Engine Control Development

    NARCIS (Netherlands)

    Seykens, X.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.

    2014-01-01

    This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and structured approach to fit the required combustion model parameters. Only a data set is required that is

  9. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    Science.gov (United States)

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  10. Automated model fit method for diesel engine control development

    NARCIS (Netherlands)

    Seykens, X.L.J.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.J.H.

    2014-01-01

    This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and structured approach to fit the required combustion model parameters. Only a data set is required that is

  11. topicmodels: An R Package for Fitting Topic Models

    Directory of Open Access Journals (Sweden)

    Bettina Grun

    2011-05-01

    Full Text Available Topic models allow the probabilistic modeling of term frequency occurrences in documents. The fitted model can be used to estimate the similarity between documents as well as between a set of specified keywords using an additional layer of latent variables which are referred to as topics. The R package topicmodels provides basic infrastructure for fitting topic models based on data structures from the text mining package tm. The package includes interfaces to two algorithms for fitting topic models: the variational expectation-maximization algorithm provided by David M. Blei and co-authors and an algorithm using Gibbs sampling by Xuan-Hieu Phan and co-authors.

  12. Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation

    KAUST Repository

    Kadoura, Ahmad Salim; Siripatana, Adil; Sun, Shuyu; Knio, Omar; Hoteit, Ibrahim

    2016-01-01

    In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard

  13. HDFITS: Porting the FITS data model to HDF5

    Science.gov (United States)

    Price, D. C.; Barsdell, B. R.; Greenhill, L. J.

    2015-09-01

    The FITS (Flexible Image Transport System) data format has been the de facto data format for astronomy-related data products since its inception in the late 1970s. While the FITS file format is widely supported, it lacks many of the features of more modern data serialization, such as the Hierarchical Data Format (HDF5). The HDF5 file format offers considerable advantages over FITS, such as improved I/O speed and compression, but has yet to gain widespread adoption within astronomy. One of the major holdbacks is that HDF5 is not well supported by data reduction software packages and image viewers. Here, we present a comparison of FITS and HDF5 as a format for storage of astronomy datasets. We show that the underlying data model of FITS can be ported to HDF5 in a straightforward manner, and that by doing so the advantages of the HDF5 file format can be leveraged immediately. In addition, we present a software tool, fits2hdf, for converting between FITS and a new 'HDFITS' format, where data are stored in HDF5 in a FITS-like manner. We show that HDFITS allows faster reading of data (up to 100x faster than FITS in some use cases), and improved compression (higher compression ratios and higher throughput). Finally, we show that by only changing the import lines in Python-based FITS utilities, HDFITS formatted data can be presented transparently as an in-memory FITS equivalent.

  14. Blueberry proanthocyanidins against human norovirus surrogates in model foods and under simulated gastric conditions.

    Science.gov (United States)

    Joshi, Snehal; Howell, Amy B; D'Souza, Doris H

    2017-05-01

    Blueberry proanthocyanidins (B-PAC) are known to decrease titers of human norovirus surrogates in vitro. The application of B-PAC as therapeutic or preventive options against foodborne viral illness needs to be determined using model foods and simulated gastric conditions in vitro. The objective of this study was to evaluate the antiviral effect of B-PAC in model foods (apple juice (AJ) and 2% reduced fat milk) and simulated gastrointestinal fluids against cultivable human norovirus surrogates (feline calicivirus; FCV-F9 and murine norovirus; MNV-1) over 24 h at 37 °C. Equal amounts of each virus (5 log PFU/ml) was mixed with B-PAC (1, 2 and 5 mg/ml) prepared either in AJ, or 2% milk, or simulated gastric fluids and incubated over 24 h at 37 °C. Controls included phosphate buffered saline, malic acid (pH 7.2), AJ, 2% milk or simulated gastric and intestinal fluids incubated with virus over 24 h at 37 °C. The tested viruses were reduced to undetectable levels within 15 min with B-PAC (1, 2 and 5 mg/ml) in AJ (pH 3.6). However, antiviral activity of B-PAC was reduced in milk. FCV-F9 was reduced by 0.4 and 1.09 log PFU/ml with 2 and 5 mg/ml B-PAC in milk, respectively and MNV-1 titers were reduced by 0.81 log PFU/ml with 5 mg/ml B-PAC in milk after 24 h. B-PAC at 5 mg/ml in simulated intestinal fluid reduced titers of the tested viruses to undetectable levels within 30 min. Overall, these results show the potential of B-PAC as preventive and therapeutic options for foodborne viral illnesses. Copyright © 2016. Published by Elsevier Ltd.

  15. Exposure assessment of mobile phone base station radiation in an outdoor environment using sequential surrogate modeling.

    Science.gov (United States)

    Aerts, Sam; Deschrijver, Dirk; Joseph, Wout; Verloock, Leen; Goeminne, Francis; Martens, Luc; Dhaene, Tom

    2013-05-01

    Human exposure to background radiofrequency electromagnetic fields (RF-EMF) has been increasing with the introduction of new technologies. There is a definite need for the quantification of RF-EMF exposure but a robust exposure assessment is not yet possible, mainly due to the lack of a fast and efficient measurement procedure. In this article, a new procedure is proposed for accurately mapping the exposure to base station radiation in an outdoor environment based on surrogate modeling and sequential design, an entirely new approach in the domain of dosimetry for human RF exposure. We tested our procedure in an urban area of about 0.04 km(2) for Global System for Mobile Communications (GSM) technology at 900 MHz (GSM900) using a personal exposimeter. Fifty measurement locations were sufficient to obtain a coarse street exposure map, locating regions of high and low exposure; 70 measurement locations were sufficient to characterize the electric field distribution in the area and build an accurate predictive interpolation model. Hence, accurate GSM900 downlink outdoor exposure maps (for use in, e.g., governmental risk communication and epidemiological studies) are developed by combining the proven efficiency of sequential design with the speed of exposimeter measurements and their ease of handling. Copyright © 2013 Wiley Periodicals, Inc.
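    Sequential design loops of this kind share a simple skeleton: measure, update the model, choose the next location, repeat. The sketch below is a deliberate simplification (the paper's criterion is driven by surrogate prediction uncertainty; a greedy maximin space-filling rule stands in here) showing the location-selection step:

```python
import math

def next_location(candidates, measured):
    # greedy maximin: pick the candidate farthest from its nearest
    # already-measured point (a space-filling stand-in for the paper's
    # uncertainty-driven sequential criterion)
    best, best_d = None, -1.0
    for c in candidates:
        if c in measured:
            continue
        d = min(math.dist(c, m) for m in measured)
        if d > best_d:
            best, best_d = c, d
    return best

def sequential_design(candidates, start, k):
    # grow the measurement plan one location at a time
    measured = [start]
    while len(measured) < k:
        measured.append(next_location(candidates, measured))
    return measured

# toy street grid: candidate measurement locations on a 3x3 lattice
grid = [(x, y) for x in range(3) for y in range(3)]
plan = sequential_design(grid, (0, 0), 4)
```

In the full procedure the surrogate is refitted after each exposimeter measurement, so the selection criterion tightens as the exposure map improves.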

  16. Analytical fitting model for rough-surface BRDF.

    Science.gov (United States)

    Renhorn, Ingmar G E; Boreman, Glenn D

    2008-08-18

    A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.

  17. An R package for fitting age, period and cohort models

    Directory of Open Access Journals (Sweden)

    Adriano Decarli

    2014-11-01

    Full Text Available In this paper we present the R implementation of a GLIM macro which fits age-period-cohort models following Osmond and Gardner. In addition to the estimates of the corresponding model, owing to the programming capability of R as an object-oriented language, methods for printing, plotting and summarizing the results are provided. Furthermore, the researcher has full access to the output of the main function (apc), which returns all the models fitted within the function. It is thus possible to critically evaluate the goodness of fit of the resulting model.

  18. Modeling Evolution on Nearly Neutral Network Fitness Landscapes

    Science.gov (United States)

    Yakushkina, Tatiana; Saakian, David B.

    2017-08-01

    To describe virus evolution, it is necessary to define a fitness landscape. In this article, we consider the microscopic models with the advanced version of neutral network fitness landscapes. In this problem setting, we suppose a fitness difference between one-point mutation neighbors to be small. We construct a modification of the Wright-Fisher model, which is related to ordinary infinite population models with nearly neutral network fitness landscape at the large population limit. From the microscopic models in the realistic sequence space, we derive two versions of nearly neutral network models: with sinks and without sinks. We claim that the suggested model describes the evolutionary dynamics of RNA viruses better than the traditional Wright-Fisher model with few sequences.
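    A minimal Wright-Fisher step on such a landscape is easy to write down. In this sketch (illustrative only; the genotype length, mutation rate and the toy neutral set are assumptions, not the authors' parameters), the next generation is resampled proportionally to fitness and then mutated:

```python
import random

def wright_fisher_step(pop, fitness, mu, rng):
    # resample a constant-size next generation proportional to fitness,
    # then apply a one-point mutation with per-individual probability mu
    weights = [fitness(g) for g in pop]
    offspring = rng.choices(pop, weights=weights, k=len(pop))
    out = []
    for g in offspring:
        if rng.random() < mu:
            i = rng.randrange(len(g))
            g = g[:i] + ("1" if g[i] == "0" else "0") + g[i + 1:]
        out.append(g)
    return out

# nearly neutral network: genotypes on the network share fitness 1.0,
# genotypes off the network are slightly deleterious (toy values)
network = {"0000", "0001", "0011", "0111"}
fitness = lambda g: 1.0 if g in network else 0.95
```

Iterating wright_fisher_step traces the population's drift along the neutral network, the regime the paper analyses in the infinite-population limit.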

  19. Does model fit decrease the uncertainty of the data in comparison with a general non-model least squares fit?

    International Nuclear Information System (INIS)

    Pronyaev, V.G.

    2003-01-01

    The information entropy is taken as a measure of knowledge about the object and the reduced univariate variance as a common measure of uncertainty. Covariances in the model versus non-model least squares fits are discussed

  20. Fast Algorithms for Fitting Active Appearance Models to Unconstrained Images

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Pantic, Maja

    2016-01-01

    Fitting algorithms for Active Appearance Models (AAMs) are usually considered to be robust but slow or fast but less able to generalize well to unseen variations. In this paper, we look into AAM fitting algorithms and make the following orthogonal contributions: We present a simple “project-out”

  1. Selecting a Conservation Surrogate Species for Small Fragmented Habitats Using Ecological Niche Modelling

    Directory of Open Access Journals (Sweden)

    K. Anne-Isola Nekaris

    2015-01-01

    Full Text Available Flagship species are traditionally large, charismatic animals used to rally conservation efforts. Accepted flagship definitions suggest they need only fulfil a strategic role, unlike umbrella species that are used to shelter cohabitant taxa. The criteria used to select both flagship and umbrella species may not stand up in the face of dramatic forest loss, where remaining fragments may only contain species that do not suit either set of criteria. The Cinderella species concept covers aesthetically pleasing and overlooked species that fulfil the criteria of flagships or umbrellas. Such species are also more likely to occur in fragmented habitats. We tested Cinderella criteria on mammals in the fragmented forests of the Sri Lankan Wet Zone. We selected taxa that fulfilled both strategic and ecological roles. We created a shortlist of ten species, and from a survey of local perceptions highlighted two finalists. We tested these for umbrella characteristics against the original shortlist, utilizing Maximum Entropy (MaxEnt) modelling, and analysed distribution overlap using ArcGIS. The criteria highlighted Loris tardigradus tardigradus and Prionailurus viverrinus as finalists, with the former having highest flagship potential. We suggest Cinderella species can be effective conservation surrogates especially in habitats where traditional flagship species have been extirpated.

  2. Development and application of surrogate model for assessment of ex-vessel debris bed dryout probability - 15157

    International Nuclear Information System (INIS)

    Yakush, S.E.; Lubchenko, N.T.; Kudinov, P.

    2015-01-01

    In this work we consider a water-cooled power reactor severe accident scenario with pressure vessel failure and subsequent release of molten corium. A surrogate model for prediction of the dryout heat flux for ex-vessel debris beds of different shapes is developed. The functional form of the dryout heat flux dependence on the problem parameters is derived by analysing the coolability problem in non-dimensional variables. It is shown that for a flat debris bed the dryout heat flux can be represented in terms of three 1-dimensional functions for which approximating formulas are found. For two-dimensional debris beds (cylindrical, conical, Gaussian heap, mound-shaped), an additional function taking into account the bed shape geometry is obtained from numerical simulations using the DECOSIM code as the full model. With the surrogate model in hand, risk analysis of debris bed coolability is carried out by Monte Carlo sampling of the input parameters within selected ranges, with assumed distribution functions
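    Once a cheap surrogate for the dryout heat flux (DHF) is available, the risk analysis reduces to straightforward Monte Carlo sampling. The sketch below (with a hypothetical surrogate and parameter ranges chosen only for illustration, not taken from the paper) estimates the probability that the decay heat flux exceeds the surrogate-predicted DHF:

```python
import random

def dryout_probability(dhf, heat_flux, sample_inputs, n=20000, seed=0):
    # Monte Carlo estimate of P(decay heat flux > dryout heat flux)
    # under the assumed input-parameter distributions
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        x = sample_inputs(rng)
        if heat_flux(x) > dhf(x):
            failures += 1
    return failures / n

# hypothetical surrogate: DHF grows linearly with bed porosity (kW/m^2)
dhf = lambda x: 2000.0 * x["porosity"]
heat_flux = lambda x: x["decay_heat"]
sample = lambda rng: {"porosity": rng.uniform(0.3, 0.5),
                      "decay_heat": rng.uniform(500.0, 900.0)}
p_dryout = dryout_probability(dhf, heat_flux, sample)
```

Because each surrogate evaluation is trivially cheap, tens of thousands of samples replace full DECOSIM runs in the sampling loop, which is what makes the risk analysis tractable.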

  3. Phenotypic and genomic comparison of Mycobacterium aurum and surrogate model species to Mycobacterium tuberculosis: implications for drug discovery.

    Science.gov (United States)

    Namouchi, Amine; Cimino, Mena; Favre-Rochex, Sandrine; Charles, Patricia; Gicquel, Brigitte

    2017-07-13

    Tuberculosis (TB) is caused by Mycobacterium tuberculosis and represents one of the major challenges facing drug discovery initiatives worldwide. The considerable rise in bacterial drug resistance in recent years has led to the need for new drugs and drug regimens. Model systems are regularly used to speed up the drug discovery process and circumvent the biosafety issues associated with manipulating M. tuberculosis. These include the use of strains such as Mycobacterium smegmatis and Mycobacterium marinum that can be handled in biosafety level 2 facilities, making high-throughput screening feasible. However, each of these model species has its own limitations. We report and describe the first complete genome sequence of Mycobacterium aurum ATCC23366, an environmental mycobacterium that can also grow in the gut of humans and animals as part of the microbiota. This species shows a resistance profile comparable to that of M. tuberculosis for several anti-TB drugs. The aims of this study were to (i) determine the drug resistance profile of a recently proposed model species, Mycobacterium aurum, strain ATCC23366, for anti-TB drug discovery, as well as of Mycobacterium smegmatis and Mycobacterium marinum; (ii) sequence and annotate the complete genome of this species obtained using Pacific Biosciences technology; (iii) perform comparative genomics analyses of the various surrogate strains with M. tuberculosis; and (iv) discuss how the choice of the surrogate model used for drug screening can affect the drug discovery process. We describe the complete genome sequence of M. aurum, a surrogate model for anti-tuberculosis drug discovery. Most of the genes already reported to be associated with drug resistance are shared between all the surrogate strains and M. tuberculosis. We consider that M. aurum might be used in high-throughput screening for tuberculosis drug discovery. We also highly recommend the use of different model species during the drug discovery screening process.

  4. Mixed butanols addition to gasoline surrogates: Shock tube ignition delay time measurements and chemical kinetic modeling

    KAUST Repository

    AlRamadan, Abdullah S.

    2015-10-01

    The demand for fuels with high anti-knock quality has historically been rising, and will continue to increase with the development of downsized and turbocharged spark-ignition engines. Butanol isomers, such as 2-butanol and tert-butanol, have high octane ratings (RON of 105 and 107, respectively), and thus mixed butanols (68.8% by volume of 2-butanol and 31.2% by volume of tert-butanol) can be added to conventional petroleum-derived gasoline fuels to improve octane performance. In the present work, the effect of mixed butanols addition to gasoline surrogates has been investigated in a high-pressure shock tube facility. The ignition delay times of stoichiometric mixed butanols mixtures were measured at 20 and 40 bar over a temperature range of 800–1200 K. Next, 10 vol% and 20 vol% of mixed butanols (MB) were blended with two different toluene/n-heptane/iso-octane (TPRF) fuel blends having octane ratings of RON 90/MON 81.7 and RON 84.6/MON 79.3. These MB/TPRF mixtures were investigated in the shock tube at conditions similar to those mentioned above. A chemical kinetic model was developed to simulate the low- and high-temperature oxidation of mixed butanols and MB/TPRF blends. The proposed model is in good agreement with the experimental data, with some deviations at low temperatures. The effect of mixed butanols addition to TPRFs is marginal when examining the ignition delay times at high temperatures. However, when extended to lower temperatures (T < 850 K), the model shows that mixed butanols addition to TPRFs causes the ignition delay times to increase, and hence mixed butanols behave like an octane booster at engine-like conditions. © 2015 The Combustion Institute.
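    At high temperatures, ignition delay times such as these are commonly correlated with an Arrhenius expression, tau = A·exp(Ea/(R·T)). As a minimal sketch (with hypothetical values of A and Ea, not the paper's measurements), the parameters can be recovered from data by a linear fit of ln(tau) against 1/T:

```python
import numpy as np

# Hypothetical Arrhenius parameters (for illustration only, not measured data)
R = 8.314        # gas constant, J/(mol K)
A_true = 1e-4    # pre-exponential factor, ms
Ea_true = 150e3  # activation energy, J/mol

T = np.linspace(800.0, 1200.0, 9)          # temperature range of the study, K
tau = A_true * np.exp(Ea_true / (R * T))   # ignition delay times, ms

# ln(tau) = ln(A) + (Ea/R)*(1/T): a straight line in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(tau), 1)
Ea_fit = slope * R         # recovered activation energy
A_fit = np.exp(intercept)  # recovered pre-exponential factor
print(Ea_fit, A_fit)
```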

  5. Efficient Bayesian parameter estimation with implicit sampling and surrogate modeling for a vadose zone hydrological problem

    Science.gov (United States)

    Liu, Y.; Pau, G. S. H.; Finsterle, S.

    2015-12-01

    Parameter inversion involves inferring model parameter values from sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model that can be used in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that need to be run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with only about 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), whose coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed over the prior parameter space perform poorly. It is thus impractical to replace the hydrological model by a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure.
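    The PCE idea can be sketched in one dimension: expand the model response in orthogonal polynomials of the uncertain input and fit the coefficients by least squares. This is a simplification — the study uses sparse Bayesian learning for the coefficients, and the forward model below is a cheap hypothetical function, not TOUGH2:

```python
import numpy as np

def forward_model(x):
    # Cheap stand-in for the expensive simulator (hypothetical response)
    return np.exp(0.5 * x) + 0.1 * x**2

# Training runs at inputs drawn from the prior x ~ Uniform(-1, 1)
rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, 50)
y_train = forward_model(x_train)

# Legendre basis up to degree 5 (orthogonal on [-1, 1]); fit by least squares
degree = 5
Phi = np.polynomial.legendre.legvander(x_train, degree)
coef, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

def surrogate(x):
    # Evaluate the PCE surrogate: a Legendre series with the fitted coefficients
    return np.polynomial.legendre.legval(x, coef)

# The surrogate reproduces the model closely at unseen inputs
x_test = np.linspace(-1.0, 1.0, 11)
err = float(np.max(np.abs(surrogate(x_test) - forward_model(x_test))))
print(err)
```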

  6. Fitting Simpson's neutrino into the standard model

    International Nuclear Information System (INIS)

    Valle, J.W.F.

    1985-01-01

    I show how to accomodate the 17 keV state recently by Simpson as one of the neutrinos of the standard model. Experimental constraints can only be satisfied if the μ and tau neutrino combine to a very good approximation to form a Dirac neutrino of 17 keV leaving a light νsub(e). Neutrino oscillations will provide the most stringent test of the model. The cosmological bounds are also satisfied in a natural way in models with Goldstone bosons. Explicit examples are given in the framework of majoron-type models. Constraints on the lepton symmetry breaking scale which follow from astrophysics, cosmology and laboratory experiments are discussed. (orig.)

  7. Development of a surrogate model for analysis of ex-vessel steam explosion in Nordic type BWRs

    Energy Technology Data Exchange (ETDEWEB)

    Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se; Basso, Simone, E-mail: simoneb@kth.se; Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se

    2016-12-15

    Highlights: • Severe accident. • Steam explosion. • Surrogate model. • Sensitivity study. • Artificial neural networks. - Abstract: The severe accident mitigation strategy adopted in Nordic type Boiling Water Reactors (BWRs) employs ex-vessel core melt cooling in a deep pool of water below the reactor vessel. Energetic fuel–coolant interaction (steam explosion) can occur during molten core release into water. Dynamic loads can threaten containment integrity, increasing the risk of fission products release to the environment. Comprehensive uncertainty analysis is necessary in order to assess the risks. The computational costs of the existing fuel–coolant interaction (FCI) codes are often prohibitive for addressing the uncertainties, including the effect of stochastic triggering time. This paper discusses development of a computationally efficient surrogate model (SM) for prediction of statistical characteristics of steam explosion impulses in Nordic BWRs. The TEXAS-V code was used as the Full Model (FM) for the calculation of explosion impulses. The surrogate model was developed using artificial neural networks (ANNs) and the database of FM solutions. Statistical analysis was employed in order to treat the chaotic response of the steam explosion impulse to variations in the triggering time. Details of the FM and SM implementation and their verification are discussed in the paper.
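    The ANN-surrogate idea can be caricatured with a one-hidden-layer network on a one-dimensional toy response. For a self-contained sketch, the hidden layer is left at its random initialization and only the output layer is solved by least squares (an extreme-learning-machine-style shortcut, not the training scheme used for the TEXAS-V database; the response function is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "full model" response: impulse as a smooth function of one normalized
# input parameter (hypothetical stand-in for the expensive FM solutions)
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(np.pi * x)

# Tiny ANN surrogate: random tanh hidden layer, least-squares output layer
n_hidden = 50
W = rng.normal(0.0, 2.0, n_hidden)    # hidden weights (fixed at random)
b = rng.uniform(-2.0, 2.0, n_hidden)  # hidden biases (fixed at random)
H = np.tanh(np.outer(x, W) + b)       # hidden activations, shape (200, 50)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

# Once trained, evaluating the surrogate is just one matrix product
mse = float(np.mean((H @ beta - y) ** 2))
print(mse)
```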

  8. Lagrangian Modeling of Evaporating Sprays at Diesel Engine Conditions: Effects of Multi-Hole Injector Nozzles With JP-8 Surrogates

    Science.gov (United States)

    2014-05-01

    In this study, three-dimensional numerical simulations of single- and two-hole injector nozzles under diesel conditions are conducted to study the spray behavior and the effect of...

  9. Surrogate models and optimal design of experiments for chemical kinetics applications

    KAUST Repository

    Bisetti, Fabrizio

    2015-01-07

    Kinetic models for reactive flow applications comprise hundreds of reactions describing the complex interaction among many chemical species. Detailed knowledge of the reaction parameters is a key component of the design cycle of next-generation combustion devices, which aim at improving conversion efficiency and reducing pollutant emissions. Shock tubes are a laboratory-scale experimental configuration widely used for the study of reaction rate parameters. Important uncertainties exist in the values of the thousands of parameters included in the most advanced kinetic models. This talk discusses the application of uncertainty quantification (UQ) methods to the analysis of shock tube data as well as the design of shock tube experiments. Attention is focused on a spectral framework in which uncertain inputs are parameterized in terms of canonical random variables, and quantities of interest (QoIs) are expressed in terms of a mean-square convergent series of orthogonal polynomials acting on these variables. We outline the implementation of a recent spectral collocation approach for determining the unknown coefficients of the expansion, namely a sparse, adaptive pseudo-spectral construction that enables us to obtain surrogates for the QoIs accurately and efficiently. We first discuss the utility of the resulting expressions in quantifying the sensitivity of QoIs to uncertain inputs, and in the Bayesian inference of key physical parameters from experimental measurements. We then discuss the application of these techniques to the analysis of shock-tube data and the optimal design of shock-tube experiments for two key reactions in combustion kinetics: the chain-branching reaction H + O2 ↔ OH + O and the reaction of furans with the hydroxyl radical OH.

  10. Fitting ARMA Time Series by Structural Equation Models.

    Science.gov (United States)

    van Buuren, Stef

    1997-01-01

    This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)
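    Fitting the model through structural equations requires dedicated SEM software, but the flavor of ARMA parameter estimation can be shown with conditional least squares for a pure AR(2) process (a simplification for illustration, not the paper's SEM method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(2) process: y_t = 0.6*y_{t-1} - 0.2*y_{t-2} + e_t
phi_true = np.array([0.6, -0.2])
n = 5000
e = rng.normal(0.0, 1.0, n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = phi_true[0] * y[t - 1] + phi_true[1] * y[t - 2] + e[t]

# Conditional least squares: regress y_t on its own two lags
X = np.column_stack([y[1:-1], y[:-2]])
phi_hat, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print(phi_hat)  # close to the true coefficients (0.6, -0.2)
```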

  11. A person fit test for IRT models for polytomous items

    NARCIS (Netherlands)

    Glas, Cornelis A.W.; Dagohoy, A.V.

    2007-01-01

    A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability

  12. Fitting polytomous Rasch models in SAS

    DEFF Research Database (Denmark)

    Christensen, Karl Bang

    2006-01-01

    The item parameters of a polytomous Rasch model can be estimated using marginal and conditional approaches. This paper describes how this can be done in SAS (V8.2) for three item parameter estimation procedures: marginal maximum likelihood estimation, conditional maximum likelihood estimation, an...

  13. Surrogate motherhood

    OpenAIRE

    Arteta-Acosta Cindy

    2011-01-01

    Surrogate motherhood, also known as surrogacy, has recently become a chance for some people to exercise the right to parenthood. Surrogacy in itself did not involve a problematic idea, but when it is coupled with scientific experiments and economic and personal interests, it requires intervention of the State to legislate on the consequences arising from the unlimited exercise of this practice. Since the 70's, developed countries have been creating laws, decrees and regulations to regulate assisted reprodu...

  14. Mixed butanols addition to gasoline surrogates: Shock tube ignition delay time measurements and chemical kinetic modeling

    KAUST Repository

    AlRamadan, Abdullah S.; Badra, Jihad; Javed, Tamour; Alabbad, Mohammed; Bokhumseen, Nehal; Gaillard, Patrick; Babiker, Hassan; Farooq, Aamir; Sarathy, Mani

    2015-01-01

    work, the effect of mixed butanols addition to gasoline surrogates has been investigated in a high-pressure shock tube facility. The ignition delay times of mixed butanols stoichiometric mixtures were measured at 20 and 40bar over a temperature range

  15. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    Science.gov (United States)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we show that when Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily given any data set.
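    The link between tuning parameters and acceptance rate can be sketched with a random-walk Metropolis sampler whose proposal scale is adapted during burn-in. The target here is a standard normal stand-in, not the MPL posterior, and the adaptation rule is a deliberately crude illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # Standard normal target standing in for the MPL posterior
    return -0.5 * x * x

scale = 1.0   # proposal standard deviation: the tuning parameter
x = 0.0
accepted = 0
samples = []
for i in range(20000):
    prop = x + rng.normal(0.0, scale)
    if np.log(rng.uniform()) < log_post(prop) - log_post(x):
        x = prop
        accepted += 1
    if i < 5000 and (i + 1) % 500 == 0:
        # Crude adaptation during burn-in only: steer the empirical
        # acceptance rate toward roughly one half for this 1-D target
        rate = accepted / (i + 1)
        scale *= 1.1 if rate > 0.5 else 0.9
    if i >= 5000:
        samples.append(x)

samples = np.asarray(samples)
print(samples.mean(), samples.std(), scale)
```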

  16. Cutthroat trout virus as a surrogate in vitro infection model for testing inhibitors of hepatitis E virus replication

    Science.gov (United States)

    Debing, Yannick; Winton, James; Neyts, Johan; Dallmeier, Kai

    2013-01-01

    Hepatitis E virus (HEV) is one of the most important causes of acute hepatitis worldwide. Although most infections are self-limiting, mortality is particularly high in pregnant women. Chronic infections can occur in transplant and other immune-compromised patients. Successful treatment of chronic hepatitis E has been reported with ribavirin and pegylated interferon-alpha, however severe side effects were observed. We employed the cutthroat trout virus (CTV), a non-pathogenic fish virus with remarkable similarities to HEV, as a potential surrogate for HEV and established an antiviral assay against this virus using the Chinook salmon embryo (CHSE-214) cell line. Ribavirin and the respective trout interferon were found to efficiently inhibit CTV replication. Other known broad-spectrum inhibitors of RNA virus replication such as the nucleoside analog 2′-C-methylcytidine resulted only in a moderate antiviral activity. In its natural fish host, CTV levels largely fluctuate during the reproductive cycle with the virus detected mainly during spawning. We wondered whether this aspect of CTV infection may serve as a surrogate model for the peculiar pathogenesis of HEV in pregnant women. To that end the effect of three sex steroids on in vitro CTV replication was evaluated. Whereas progesterone resulted in marked inhibition of virus replication, testosterone and 17β-estradiol stimulated viral growth. Our data thus indicate that CTV may serve as a surrogate model for HEV, both for antiviral experiments and studies on the replication biology of the Hepeviridae.

  17. Random-growth urban model with geographical fitness

    Science.gov (United States)

    Kii, Masanobu; Akimoto, Keigo; Doi, Kenji

    2012-12-01

    This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
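    A toy version of the mechanism — random city creation plus fitness-weighted preferential growth — can be simulated directly. Parameter values are hypothetical, and growth increments are sampled stochastically here, whereas the paper's dynamical equations make post-creation growth smooth:

```python
import numpy as np

rng = np.random.default_rng(0)

n_steps = 20000       # total population units added (hypothetical)
new_city_prob = 0.01  # chance that a step founds a new city instead

pops = [1.0]                       # city populations
fitness = [rng.uniform(0.5, 1.5)]  # geographical fitness of each city

for _ in range(n_steps):
    if rng.uniform() < new_city_prob:
        # Randomness in the paper's sense enters when a new city is created
        pops.append(1.0)
        fitness.append(rng.uniform(0.5, 1.5))
    else:
        # Preferential attachment: growth proportional to population * fitness
        w = np.array(pops) * np.array(fitness)
        i = rng.choice(len(pops), p=w / w.sum())
        pops[i] += 1.0

# Old, high-fitness cities end up far larger than the typical city,
# producing the heavy-tailed (Pareto-like) size distribution
print(len(pops), max(pops))
```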

  18. Experimental and numerical studies of burning velocities and kinetic modeling for practical and surrogate fuels

    Science.gov (United States)

    Zhao, Zhenwei

    To help understand the fuel oxidation process in practical combustion environments, laminar flame speeds and high-temperature chemical kinetic models were studied for several practical and "surrogate" fuels, such as propane, dimethyl ether (DME), primary reference fuel (PRF) mixtures, gasoline and n-decane. The PIV system developed for the present work is described. The general principles for PIV measurements are outlined and the specific considerations are also reported. Laminar flame speeds were determined for propane/air over a range of equivalence ratios at initial temperatures of 298 K, 500 K and 650 K and atmospheric pressure. Several data sets for propane/air laminar flame speeds with N2 dilution are also reported. These results are compared to literature data collected at the same conditions. The propane flame speed is also numerically calculated with a detailed kinetic model and multi-component diffusion, including Soret effects. This thesis also presents experimentally determined laminar flame speeds for primary reference fuel (PRF) mixtures of n-heptane/iso-octane and real gasoline fuel at different initial temperatures and at atmospheric pressure. Nitrogen dilution effects on the laminar flame speed are also studied for selected equivalence ratios at the same conditions. A minimized detailed kinetic model for PRF mixtures was developed for laminar flame speed conditions, and the measured flame speeds were compared with numerical predictions using this model. Laminar flame speeds of n-decane/air mixtures at 500 K and atmospheric pressure, with and without dilution, were also determined. The measured flame speeds differ significantly from those predicted using existing published kinetic models, including a model validated previously against high-temperature data from flow reactor, jet-stirred reactor, shock tube ignition delay, and burner-stabilized flame experiments. A significant update of this model is described which...

  19. LEP asymmetries and fits of the standard model

    International Nuclear Information System (INIS)

    Pietrzyk, B.

    1994-01-01

    The lepton and quark asymmetries measured at LEP are presented. The results of the Standard Model fits to the electroweak data presented at this conference are given. The top mass obtained from the fit to the LEP data is 172 (+13+18 / −14−20) GeV; it is 177 (+11+18 / −11−19) GeV when the collider, ν and A_LR data are also included. (author). 10 refs., 3 figs., 2 tabs

  20. Automatic fitting of spiking neuron models to electrophysiological recordings

    Directory of Open Access Journals (Sweden)

    Cyrille Rossant

    2010-03-01

    Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming, both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present, it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.
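    As a toy version of such model fitting (far simpler than the GPU-based library described), one can simulate a leaky integrate-and-fire neuron and recover a parameter by brute-force search over candidate values; the current trace, parameter values and fitting criterion below are all hypothetical:

```python
import numpy as np

def lif_spike_count(I, R, tau=0.02, v_th=1.0, dt=0.001):
    # Leaky integrate-and-fire: tau * dV/dt = -V + R*I, reset to 0 at threshold
    v, spikes = 0.0, 0
    for i_t in I:
        v += dt / tau * (-v + R * i_t)
        if v >= v_th:
            v = 0.0
            spikes += 1
    return spikes

rng = np.random.default_rng(0)
I = 1.2 + 0.3 * rng.standard_normal(2000)  # 2 s of noisy injected current

# "Recorded" spike count generated with a hidden ground-truth parameter
target = lif_spike_count(I, R=1.5)

# Brute-force fit: choose the membrane resistance matching the recording best
grid = np.linspace(0.5, 2.5, 41)
R_fit = float(grid[np.argmin([abs(lif_spike_count(I, R) - target) for R in grid])])
print(R_fit, target)
```

    In practice, criteria such as coincidence factors between predicted and recorded spike trains replace the raw spike count, and the search runs in parallel over many parameter sets.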

  1. Using an external surrogate for predictor model training in real-time motion management of lung tumors

    Energy Technology Data Exchange (ETDEWEB)

    Rottmann, Joerg; Berbeco, Ross [Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States)

    2014-12-15

    Purpose: Precise prediction of respiratory motion is a prerequisite for real-time motion compensation techniques such as beam, dynamic couch, or dynamic multileaf collimator tracking. Collection of tumor motion data to train the prediction model is required for most algorithms. To avoid exposure of patients to additional dose from imaging during this procedure, the feasibility of training a linear respiratory motion prediction model with an external surrogate signal is investigated and its performance benchmarked against training the model with tumor positions directly. Methods: The authors implement a lung tumor motion prediction algorithm based on linear ridge regression that is suitable to overcome system latencies up to about 300 ms. Its performance is investigated on a data set of 91 patient breathing trajectories recorded from fiducial marker tracking during radiotherapy delivery to the lung of ten patients. The expected 3D geometric error is quantified as a function of predictor lookahead time, signal sampling frequency and history vector length. Additionally, adaptive model retraining is evaluated, i.e., repeatedly updating the prediction model after initial training. Training length for this is gradually increased with incoming (internal) data availability. To assess practical feasibility model calculation times as well as various minimum data lengths for retraining are evaluated. Relative performance of model training with external surrogate motion data versus tumor motion data is evaluated. However, an internal–external motion correlation model is not utilized, i.e., prediction is solely driven by internal motion in both cases. Results: Similar prediction performance was achieved for training the model with external surrogate data versus internal (tumor motion) data. Adaptive model retraining can substantially boost performance in the case of external surrogate training while it has little impact for training with internal motion data. A minimum
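    The core idea — predicting a future sample from a history vector by ridge regression — can be sketched on a synthetic quasi-periodic trace. The signal, sampling rate, history length and regularization below are hypothetical stand-ins, not the patient data or the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "breathing" trace at 10 Hz: 4 s period plus measurement noise
t = np.arange(0.0, 60.0, 0.1)
sig = np.sin(2 * np.pi * t / 4.0) + 0.05 * rng.standard_normal(len(t))

m, k = 20, 3  # history length and lookahead (3 samples ~ 300 ms at 10 Hz)
idx = range(m, len(sig) - k)
X = np.array([sig[i - m:i] for i in idx])    # history vectors
y = np.array([sig[i + k - 1] for i in idx])  # value k samples ahead

# Ridge regression: w = (X'X + lam*I)^{-1} X'y, trained on the first half
n = len(X)
Xtr, ytr, Xte, yte = X[:n // 2], y[:n // 2], X[n // 2:], y[n // 2:]
lam = 1e-3
w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(m), Xtr.T @ ytr)

# Prediction error on the held-out half approaches the noise floor
rmse = float(np.sqrt(np.mean((Xte @ w - yte) ** 2)))
print(rmse)
```

    Adaptive retraining, as evaluated in the paper, would correspond to periodically recomputing w as new samples arrive.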

  2. Fitting Equilibrium Search Models to Labour Market Data

    DEFF Research Database (Denmark)

    Bowlus, Audra J.; Kiefer, Nicholas M.; Neumann, George R.

    1996-01-01

    Specification and estimation of a Burdett-Mortensen type equilibrium search model is considered. The estimation is nonstandard. An estimation strategy asymptotically equivalent to maximum likelihood is proposed and applied. The results indicate that specifications with a small number of productivity types fit the data well compared to the homogeneous model.

  3. Twitter classification model: the ABC of two million fitness tweets.

    Science.gov (United States)

    Vickey, Theodore A; Ginis, Kathleen Martin; Dabrowski, Maciej

    2013-09-01

    The purpose of this project was to design and test data collection and management tools that can be used to study the use of mobile fitness applications and social networking within the context of physical activity. This project was conducted over a 6-month period and involved collecting publicly shared Twitter data from five mobile fitness apps (Nike+, RunKeeper, MyFitnessPal, Endomondo, and dailymile). During that time, over 2.8 million tweets were collected, processed, and categorized using an online tweet collection application and a customized JavaScript. Using grounded theory, a classification model was developed to categorize and understand the types of information being shared by application users. Our data show that by tracking mobile fitness app hashtags, a wealth of information can be gathered, including but not limited to daily use patterns, exercise frequency, location-based workouts, and overall workout sentiment.

  4. Inverse uncertainty quantification of reactor simulations under the Bayesian framework using surrogate models constructed by polynomial chaos expansion

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Xu, E-mail: xuwu2@illinois.edu; Kozlowski, Tomasz

    2017-03-15

    Modeling and simulations are naturally augmented by extensive Uncertainty Quantification (UQ) and sensitivity analysis requirements in nuclear reactor system design, in which uncertainties must be quantified in order to prove that the investigated design stays within acceptance criteria. Historically, expert judgment has been used to specify the nominal values, probability density functions and upper and lower bounds of the simulation code random input parameters for the forward UQ process. The purpose of this paper is to replace such ad-hoc expert judgment of the statistical properties of input model parameters with an inverse UQ process. Inverse UQ seeks statistical descriptions of the model random input parameters that are consistent with the experimental data. Bayesian analysis is used to establish the inverse UQ problems based on experimental data, with systematically and rigorously derived surrogate models based on Polynomial Chaos Expansion (PCE). The methods developed here are demonstrated with the Point Reactor Kinetics Equation (PRKE) coupled with a lumped parameter thermal-hydraulics feedback model. Three input parameters, external reactivity, Doppler reactivity coefficient and coolant temperature coefficient, are modeled as uncertain input parameters. Their uncertainties are inversely quantified based on synthetic experimental data. Compared with direct numerical simulation, the PCE surrogate model shows high efficiency and accuracy. In addition, inverse UQ with Bayesian analysis can calibrate the random input parameters such that the simulation results are in better agreement with the experimental data.
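    The inverse UQ workflow — replacing the forward model with a cheap surrogate inside a Bayesian update — can be sketched in one dimension. The forward model, parameter values and synthetic data below are hypothetical stand-ins, not the PRKE system:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(theta):
    # Hypothetical forward model standing in for the coupled PRKE solver
    return np.sin(theta) + 0.5 * theta

# Step 1: polynomial surrogate from a handful of "expensive" model runs
theta_design = np.linspace(-2.0, 2.0, 9)
coef = np.polyfit(theta_design, model(theta_design), 5)

def surrogate(theta):
    return np.polyval(coef, theta)

# Step 2: synthetic experimental data generated at a "true" parameter value
theta_true, sigma = 0.8, 0.05
data = model(theta_true) + sigma * rng.standard_normal(20)

# Step 3: grid-based Bayesian update, evaluating only the cheap surrogate
grid = np.linspace(-2.0, 2.0, 401)
loglik = np.array([-0.5 * np.sum((data - surrogate(g)) ** 2) / sigma**2
                   for g in grid])
post = np.exp(loglik - loglik.max())
post /= post.sum()  # flat prior on the grid
theta_map = float(grid[np.argmax(post)])
print(theta_map)  # near the hidden true value
```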

  5. Optimizing water resources management in large river basins with integrated surface water-groundwater modeling: A surrogate-based approach

    Science.gov (United States)

    Wu, Bin; Zheng, Yi; Wu, Xin; Tian, Yong; Han, Feng; Liu, Jie; Zheng, Chunmiao

    2015-04-01

    Integrated surface water-groundwater modeling can provide a comprehensive and coherent understanding on basin-scale water cycle, but its high computational cost has impeded its application in real-world management. This study developed a new surrogate-based approach, SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), to incorporate the integrated modeling into water management optimization. Its applicability and advantages were evaluated and validated through an optimization research on the conjunctive use of surface water (SW) and groundwater (GW) for irrigation in a semiarid region in northwest China. GSFLOW, an integrated SW-GW model developed by USGS, was employed. The study results show that, due to the strong and complicated SW-GW interactions, basin-scale water saving could be achieved by spatially optimizing the ratios of groundwater use in different irrigation districts. The water-saving potential essentially stems from the reduction of nonbeneficial evapotranspiration from the aqueduct system and shallow groundwater, and its magnitude largely depends on both water management schemes and hydrological conditions. Important implications for water resources management in general include: first, environmental flow regulation needs to take into account interannual variation of hydrological conditions, as well as spatial complexity of SW-GW interactions; and second, to resolve water use conflicts between upper stream and lower stream, a system approach is highly desired to reflect ecological, economic, and social concerns in water management decisions. Overall, this study highlights that surrogate-based approaches like SOIM represent a promising solution to filling the gap between complex environmental modeling and real-world management decision-making.
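    The surrogate-based optimization loop can be caricatured in one dimension: fit a cheap surrogate to a handful of expensive evaluations, jump to the surrogate's optimum, evaluate the true model there, and refit. This is a generic sketch with a hypothetical objective, not the SOIM algorithm or the GSFLOW model:

```python
import numpy as np

def expensive_model(x):
    # Hypothetical stand-in for the costly integrated SW-GW simulation:
    # a smooth trend with small-scale wiggles
    return (x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

# Initial design: a few expensive evaluations across the decision range
xs = list(np.linspace(0.0, 1.0, 5))
ys = [expensive_model(x) for x in xs]

for _ in range(10):
    # Fit a quadratic surrogate to everything evaluated so far
    a, b, c = np.polyfit(xs, ys, 2)
    # Jump to the surrogate's stationary point (clipped to the feasible range)
    x_new = float(np.clip(-b / (2 * a), 0.0, 1.0))
    xs.append(x_new)
    ys.append(expensive_model(x_new))

x_best = float(xs[int(np.argmin(ys))])
print(x_best)  # near the basin of the true minimum around x ~ 0.3
```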

  6. Generation of fluoroscopic 3D images with a respiratory motion model based on an external surrogate signal

    International Nuclear Information System (INIS)

    Hurwitz, Martina; Williams, Christopher L; Mishra, Pankaj; Rottmann, Joerg; Dhou, Salam; Wagar, Matthew; Mannarino, Edward G; Mak, Raymond H; Lewis, John H

    2015-01-01

    Respiratory motion during radiotherapy can cause uncertainties in definition of the target volume and in estimation of the dose delivered to the target and healthy tissue. In this paper, we generate volumetric images of the internal patient anatomy during treatment using only the motion of a surrogate signal. Pre-treatment four-dimensional CT imaging is used to create a patient-specific model correlating internal respiratory motion with the trajectory of an external surrogate placed on the chest. The performance of this model is assessed with digital and physical phantoms reproducing measured irregular patient breathing patterns. Ten patient breathing patterns are incorporated in a digital phantom. For each patient breathing pattern, the model is used to generate images over the course of thirty seconds. The tumor position predicted by the model is compared to ground truth information from the digital phantom. Over the ten patient breathing patterns, the average absolute error in the tumor centroid position predicted by the motion model is 1.4 mm. The corresponding error for one patient breathing pattern implemented in an anthropomorphic physical phantom was 0.6 mm. The global voxel intensity error was used to compare the full image to the ground truth and demonstrates good agreement between predicted and true images. The model also generates accurate predictions for breathing patterns with irregular phases or amplitudes. (paper)

  7. Generation of fluoroscopic 3D images with a respiratory motion model based on an external surrogate signal

    Science.gov (United States)

    Hurwitz, Martina; Williams, Christopher L.; Mishra, Pankaj; Rottmann, Joerg; Dhou, Salam; Wagar, Matthew; Mannarino, Edward G.; Mak, Raymond H.; Lewis, John H.

    2015-01-01

    Respiratory motion during radiotherapy can cause uncertainties in definition of the target volume and in estimation of the dose delivered to the target and healthy tissue. In this paper, we generate volumetric images of the internal patient anatomy during treatment using only the motion of a surrogate signal. Pre-treatment four-dimensional CT imaging is used to create a patient-specific model correlating internal respiratory motion with the trajectory of an external surrogate placed on the chest. The performance of this model is assessed with digital and physical phantoms reproducing measured irregular patient breathing patterns. Ten patient breathing patterns are incorporated in a digital phantom. For each patient breathing pattern, the model is used to generate images over the course of thirty seconds. The tumor position predicted by the model is compared to ground truth information from the digital phantom. Over the ten patient breathing patterns, the average absolute error in the tumor centroid position predicted by the motion model is 1.4 mm. The corresponding error for one patient breathing pattern implemented in an anthropomorphic physical phantom was 0.6 mm. The global voxel intensity error was used to compare the full image to the ground truth and demonstrates good agreement between predicted and true images. The model also generates accurate predictions for breathing patterns with irregular phases or amplitudes.

  8. Flexible competing risks regression modeling and goodness-of-fit

    DEFF Research Database (Denmark)

    Scheike, Thomas; Zhang, Mei-Jie

    2008-01-01

    In this paper we consider different approaches for estimation and assessment of covariate effects for the cumulative incidence curve in the competing risks model. The classic approach is to model all cause-specific hazards and then estimate the cumulative incidence curve based on these cause...... models that is easy to fit and contains the Fine-Gray model as a special case. One advantage of this approach is that our regression modeling allows for non-proportional hazards. This leads to a new simple goodness-of-fit procedure for the proportional subdistribution hazards assumption that is very easy...... of the flexible regression models to analyze competing risks data when non-proportionality is present in the data....

  9. [How to fit and interpret multilevel models using SPSS].

    Science.gov (United States)

    Pardo, Antonio; Ruiz, Miguel A; San Martín, Rafael

    2007-05-01

Hierarchical or multilevel models are used to analyse data when cases belong to known groups and sample units are selected both from the individual level and from the group level. In this work, the multilevel models most commonly discussed in the statistics literature are described, explaining how to fit these models using the SPSS program (version 11 onward) and how to interpret the outcomes of the analysis. Five particular models are described, fitted, and interpreted: (1) one-way analysis of variance with random effects, (2) regression analysis with means-as-outcomes, (3) one-way analysis of covariance with random effects, (4) regression analysis with random coefficients, and (5) regression analysis with means- and slopes-as-outcomes. All models are explained in a way intended to be understandable to researchers in the health and behavioural sciences.
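
The first model in the list above (one-way ANOVA with random effects) can be sketched outside SPSS as well. The following illustrative estimate uses the classical method-of-moments decomposition into within-group and between-group variance; the data are synthetic and the approach is a stand-in, not the restricted maximum likelihood procedure SPSS uses.

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_per = 30, 20
u = rng.normal(0, 2.0, n_groups)                 # random group effects, sd = 2
y = 10 + u[:, None] + rng.normal(0, 1.0, (n_groups, n_per))  # residual sd = 1

group_means = y.mean(axis=1)
msw = y.var(axis=1, ddof=1).mean()               # within-group mean square
msb = n_per * group_means.var(ddof=1)            # between-group mean square
sigma2_e = msw                                   # residual variance estimate
sigma2_u = max((msb - msw) / n_per, 0.0)         # group-level variance estimate
icc = sigma2_u / (sigma2_u + sigma2_e)           # intraclass correlation
```

The intraclass correlation quantifies how much of the total variance is attributable to group membership — the quantity that justifies a multilevel model in the first place.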

  10. Assessing fit in Bayesian models for spatial processes

    KAUST Repository

    Jun, M.

    2014-09-16

    © 2014 John Wiley & Sons, Ltd. Gaussian random fields are frequently used to model spatial and spatial-temporal data, particularly in geostatistical settings. As much of the attention of the statistics community has been focused on defining and estimating the mean and covariance functions of these processes, little effort has been devoted to developing goodness-of-fit tests to allow users to assess the models' adequacy. We describe a general goodness-of-fit test and related graphical diagnostics for assessing the fit of Bayesian Gaussian process models using pivotal discrepancy measures. Our method is applicable for both regularly and irregularly spaced observation locations on planar and spherical domains. The essential idea behind our method is to evaluate pivotal quantities defined for a realization of a Gaussian random field at parameter values drawn from the posterior distribution. Because the nominal distribution of the resulting pivotal discrepancy measures is known, it is possible to quantitatively assess model fit directly from the output of Markov chain Monte Carlo algorithms used to sample from the posterior distribution on the parameter space. We illustrate our method in a simulation study and in two applications.

  11. Assessing fit in Bayesian models for spatial processes

    KAUST Repository

    Jun, M.; Katzfuss, M.; Hu, J.; Johnson, V. E.

    2014-01-01

    © 2014 John Wiley & Sons, Ltd. Gaussian random fields are frequently used to model spatial and spatial-temporal data, particularly in geostatistical settings. As much of the attention of the statistics community has been focused on defining and estimating the mean and covariance functions of these processes, little effort has been devoted to developing goodness-of-fit tests to allow users to assess the models' adequacy. We describe a general goodness-of-fit test and related graphical diagnostics for assessing the fit of Bayesian Gaussian process models using pivotal discrepancy measures. Our method is applicable for both regularly and irregularly spaced observation locations on planar and spherical domains. The essential idea behind our method is to evaluate pivotal quantities defined for a realization of a Gaussian random field at parameter values drawn from the posterior distribution. Because the nominal distribution of the resulting pivotal discrepancy measures is known, it is possible to quantitatively assess model fit directly from the output of Markov chain Monte Carlo algorithms used to sample from the posterior distribution on the parameter space. We illustrate our method in a simulation study and in two applications.
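
A hedged sketch of the pivotal-quantity idea described above: for a zero-mean Gaussian field with covariance Σ(θ), the quadratic form (y − μ)ᵀ Σ⁻¹ (y − μ) is chi-square with n degrees of freedom at the true parameters, so its probability transform evaluated over parameter draws provides a fit diagnostic. The "posterior draws" below are a synthetic stand-in for actual MCMC output.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 50
x = np.linspace(0, 1, n)

def cov(ell, var=1.0):
    """Exponential covariance on a 1-D grid, with a small nugget."""
    d = np.abs(x[:, None] - x[None, :])
    return var * np.exp(-d / ell) + 1e-8 * np.eye(n)

ell_true = 0.2
L = np.linalg.cholesky(cov(ell_true))
y = L @ rng.normal(size=n)                      # one GP realization, mean 0

# Stand-in for posterior draws of the length-scale (assumed, for illustration).
draws = rng.normal(ell_true, 0.01, 200).clip(0.05)
pvals = []
for ell in draws:
    Q = y @ np.linalg.solve(cov(ell), y)        # pivotal discrepancy
    pvals.append(stats.chi2(df=n).cdf(Q))
pvals = np.asarray(pvals)
```

If the model fits, the transformed discrepancies should not pile up near 0 or 1; systematic extremes flag covariance misspecification.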

  12. Person-fit to the Five Factor Model of personality

    Czech Academy of Sciences Publication Activity Database

    Allik, J.; Realo, A.; Mõttus, R.; Borkenau, P.; Kuppens, P.; Hřebíčková, Martina

    2012-01-01

    Roč. 71, č. 1 (2012), s. 35-45 ISSN 1421-0185 R&D Projects: GA ČR GAP407/10/2394 Institutional research plan: CEZ:AV0Z70250504 Keywords : Five Factor Model * cross-cultural comparison * person-fit Subject RIV: AN - Psychology Impact factor: 0.638, year: 2012

  13. Cutthroat trout virus as a surrogate in vitro infection model for testing inhibitors of hepatitis E virus replication.

    Science.gov (United States)

    Debing, Yannick; Winton, James; Neyts, Johan; Dallmeier, Kai

    2013-10-01

    Hepatitis E virus (HEV) is one of the most important causes of acute hepatitis worldwide. Although most infections are self-limiting, mortality is particularly high in pregnant women. Chronic infections can occur in transplant and other immune-compromised patients. Successful treatment of chronic hepatitis E has been reported with ribavirin and pegylated interferon-alpha; however, severe side effects were observed. We employed the cutthroat trout virus (CTV), a non-pathogenic fish virus with remarkable similarities to HEV, as a potential surrogate for HEV and established an antiviral assay against this virus using the Chinook salmon embryo (CHSE-214) cell line. Ribavirin and the respective trout interferon were found to efficiently inhibit CTV replication. Other known broad-spectrum inhibitors of RNA virus replication, such as the nucleoside analog 2'-C-methylcytidine, showed only moderate antiviral activity. In its natural fish host, CTV levels fluctuate markedly during the reproductive cycle, with the virus detected mainly during spawning. We wondered whether this aspect of CTV infection may serve as a surrogate model for the peculiar pathogenesis of HEV in pregnant women. To that end, the effect of three sex steroids on in vitro CTV replication was evaluated. Whereas progesterone resulted in marked inhibition of virus replication, testosterone and 17β-estradiol stimulated viral growth. Our data thus indicate that CTV may serve as a surrogate model for HEV, both for antiviral experiments and for studies on the replication biology of the Hepeviridae. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Local-metrics error-based Shepard interpolation as surrogate for highly non-linear material models in high dimensions

    Science.gov (United States)

    Lorenzi, Juan M.; Stecher, Thomas; Reuter, Karsten; Matera, Sebastian

    2017-10-01

    Many problems in computational materials science and chemistry require the evaluation of expensive functions with locally rapid changes, such as the turn-over frequency of first principles kinetic Monte Carlo models for heterogeneous catalysis. Because of the high computational cost, it is often desirable to replace the original with a surrogate model, e.g., for use in coupled multiscale simulations. The construction of surrogates becomes particularly challenging in high dimensions. Here, we present a novel version of the modified Shepard interpolation method, which can overcome the curse of dimensionality for such functions to give faithful reconstructions even from very modest numbers of function evaluations. The introduction of local metrics allows us to take advantage of the fact that, on a local scale, rapid variation often occurs only across a small number of directions. Furthermore, we use local error estimates to weigh different local approximations, which helps avoid artificial oscillations. Finally, we test our approach on a number of challenging analytic functions as well as a realistic kinetic Monte Carlo model. Our method not only outperforms existing isotropic metric Shepard methods but also state-of-the-art Gaussian process regression.
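
For reference, the baseline the paper improves upon — plain Shepard (inverse-distance-weighted) interpolation with an isotropic metric and no local error weighting — can be sketched in a few lines. This is an illustrative stand-in, not the authors' local-metric variant.

```python
import numpy as np

def shepard(x_query, x_data, y_data, p=2, eps=1e-12):
    """Inverse-distance-weighted estimate of f(x_query) from scattered data."""
    d = np.linalg.norm(x_data - x_query, axis=1)
    if d.min() < eps:                       # query coincides with a data point
        return float(y_data[d.argmin()])
    w = 1.0 / d**p
    return float(w @ y_data / w.sum())

# Interpolate f(x) = sum(x) from scattered samples in 4-D.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (200, 4))
y = X.sum(axis=1)
q = np.full(4, 0.5)
est = shepard(q, X, y)        # true value at q is 2.0
```

The method interpolates the data exactly at the sample points; the local-metric and error-weighting machinery in the paper addresses its well-known artifacts (flat spots and oscillations) in high dimensions.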

  15. Role of pseudo-turbulent stresses in shocked particle clouds and construction of surrogate models for closure

    Science.gov (United States)

    Sen, O.; Gaul, N. J.; Davis, S.; Choi, K. K.; Jacobs, G.; Udaykumar, H. S.

    2018-05-01

    Macroscale models of shock-particle interactions require closure terms for unresolved solid-fluid momentum and energy transfer. These comprise the effects of mean as well as fluctuating fluid-phase velocity fields in the particle cloud. Mean drag and Reynolds stress equivalent terms (also known as pseudo-turbulent terms) appear in the macroscale equations. Closure laws for the pseudo-turbulent terms are constructed in this work from ensembles of high-fidelity mesoscale simulations. The computations are performed over a wide range of Mach numbers (M) and particle volume fractions (φ) and are used to explicitly compute the pseudo-turbulent stresses from the Favre average of the velocity fluctuations in the flow field. The computed stresses are then used as inputs to a Modified Bayesian Kriging method to generate surrogate models. The surrogates can be used as closure models for the pseudo-turbulent terms in macroscale computations of shock-particle interactions. It is found that the kinetic energy associated with the velocity fluctuations is comparable to that of the mean flow, especially for increasing M and φ. This work is a first attempt to quantify and evaluate the effect of velocity fluctuations for problems of shock-particle interactions.
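
The Favre (density-weighted) averaging used above to extract pseudo-turbulent stresses from a mesoscale field can be sketched as follows; the density and velocity samples here are synthetic stand-ins for a resolved flow field.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 100_000
rho = 1.0 + 0.1 * rng.normal(size=n).clip(-5, 5)    # fluid density samples
u = 2.0 + 0.5 * rng.normal(size=n)                  # streamwise velocity samples

u_favre = np.sum(rho * u) / np.sum(rho)             # Favre mean  ũ = <ρu>/<ρ>
u_fluct = u - u_favre                               # Favre fluctuation u''
R_uu = np.sum(rho * u_fluct**2) / np.sum(rho)       # pseudo-turbulent stress <ρu''u''>/<ρ>
```

In the synthetic case the density-weighted and plain averages nearly coincide because density and velocity are independent; in shocked particle clouds the correlation between the two is exactly what Favre averaging is designed to handle.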

  16. Time-varying surrogate data to assess nonlinearity in nonstationary time series: application to heart rate variability.

    Science.gov (United States)

    Faes, Luca; Zhao, He; Chon, Ki H; Nollo, Giandomenico

    2009-03-01

    We propose a method to extend to time-varying (TV) systems the procedure for generating typical surrogate time series, in order to test the presence of nonlinear dynamics in potentially nonstationary signals. The method is based on fitting a TV autoregressive (AR) model to the original series and then regressing the model coefficients with random replacements of the model residuals to generate TV AR surrogate series. The proposed surrogate series were used in combination with a TV sample entropy (SE) discriminating statistic to assess nonlinearity in both simulated and experimental time series, in comparison with traditional time-invariant (TIV) surrogates combined with the TIV SE discriminating statistic. Analysis of simulated time series showed that using TIV surrogates, linear nonstationary time series may be erroneously regarded as nonlinear and weak TV nonlinearities may remain unrevealed, while the use of TV AR surrogates markedly increases the probability of a correct interpretation. Application to short (500 beats) heart rate variability (HRV) time series recorded at rest (R), after head-up tilt (T), and during paced breathing (PB) showed: 1) modifications of the SE statistic that were well interpretable with the known cardiovascular physiology; 2) significant contribution of nonlinear dynamics to HRV in all conditions, with significant increase during PB at 0.2 Hz respiration rate; and 3) a disagreement between TV AR surrogates and TIV surrogates in about a quarter of the series, suggesting that nonstationarity may affect HRV recordings and bias the outcome of the traditional surrogate-based nonlinearity test.
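
The core of the surrogate-generation step above — fit an AR model, then re-drive it with randomly shuffled residuals so the linear structure is preserved while any nonlinear determinism is destroyed — can be sketched as follows. For simplicity this sketch is time-invariant, not the paper's time-varying scheme, and all names are illustrative.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model; returns coefficients and residuals."""
    X = np.column_stack([x[p - k - 1: len(x) - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, y - X @ a

def ar_surrogate(x, p, rng):
    """Regenerate the series from the fitted AR model with shuffled residuals."""
    a, resid = fit_ar(x, p)
    e = rng.permutation(resid)
    s = list(x[:p])
    for t in range(len(e)):
        s.append(np.dot(a, s[-1:-p - 1:-1]) + e[t])  # last p values, newest first
    return np.array(s)

rng = np.random.default_rng(3)
x = np.zeros(600)                 # AR(2) test signal, coefficients 1.6, -0.8
for t in range(2, 600):
    x[t] = 1.6 * x[t - 1] - 0.8 * x[t - 2] + rng.normal()
surr = ar_surrogate(x, 2, rng)
```

A discriminating statistic (such as the sample entropy used in the paper) computed on the original and on an ensemble of such surrogates then tests for nonlinearity.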

  17. The global electroweak Standard Model fit after the Higgs discovery

    CERN Document Server

    Baak, Max

    2013-01-01

    We present an update of the global Standard Model (SM) fit to electroweak precision data under the assumption that the new particle discovered at the LHC is the SM Higgs boson. In this scenario all parameters entering the calculations of electroweak precision observables are known, allowing, for the first time, the SM to be over-constrained at the electroweak scale and its validity to be asserted. Within the SM the W boson mass and the effective weak mixing angle can be accurately predicted from the global fit. The results are compatible with, and exceed in precision, the direct measurements. An updated determination of the S, T and U parameters, which parametrize the oblique vacuum corrections, is given. The obtained values show good consistency with the SM expectation and no direct signs of new physics are seen. We conclude with an outlook to the global electroweak fit for a future e+e- collider.

  18. Surrogate Model Application to the Identification of Optimal Groundwater Exploitation Scheme Based on Regression Kriging Method—A Case Study of Western Jilin Province

    Directory of Open Access Journals (Sweden)

    Yongkai An

    2015-07-01

    This paper introduces a surrogate model to identify an optimal exploitation scheme; the western Jilin province was selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu county and Qian Gorlos county respectively so as to supply water to Daan county. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region for the input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, indicating high approximation accuracy. A contrast between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours while the latter needs 25 days. These results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process but also maintain high computational accuracy. This provides an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately.
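
The workflow above — sample the decision space with LHS, run the expensive model at the samples, fit a kriging-style surrogate, then query it cheaply — can be sketched as follows. Everything here is illustrative: the "groundwater model" is a toy function and the interpolator is a bare Gaussian-kernel fit, not the paper's regression kriging.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """One stratified sample per cell along each dimension."""
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (strata + rng.uniform(size=(n, d))) / n

def drawdown(x):              # toy stand-in for the groundwater simulation model
    return 2.0 * x[..., 0] + np.sin(3 * x[..., 1])

def kriging_fit(X, y, ell=0.2):
    K = np.exp(-((X[:, None, :] - X[None, :, :])**2).sum(-1) / (2 * ell**2))
    w = np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)   # nugget for stability
    return X, w, ell

def kriging_predict(model, Xq):
    X, w, ell = model
    k = np.exp(-((Xq[:, None, :] - X[None, :, :])**2).sum(-1) / (2 * ell**2))
    return k @ w

rng = np.random.default_rng(4)
X = latin_hypercube(60, 2, rng)
y = drawdown(X)                                # 60 "expensive" model runs
model = kriging_fit(X, y)
Xq = rng.uniform(0, 1, (200, 2))               # cheap surrogate queries
rel_err = (np.abs(kriging_predict(model, Xq) - drawdown(Xq)).mean()
           / np.abs(drawdown(Xq)).mean())
```

The surrogate is then the objective handed to the optimizer, which is what yields the reported speedup from days to hours.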

  19. Structural Damage Detection using Frequency Response Function Index and Surrogate Model Based on Optimized Extreme Learning Machine Algorithm

    Directory of Open Access Journals (Sweden)

    R. Ghiasi

    2017-09-01

    Utilizing surrogate models based on artificial intelligence methods for detecting structural damage has attracted the attention of many researchers in recent decades. In this study, a new kernel based on the Littlewood-Paley Wavelet (LPW) is proposed for the Extreme Learning Machine (ELM) algorithm to improve the accuracy of detecting multiple damages in structural systems. ELM is used as a metamodel (surrogate model) of the exact finite element analysis of structures in order to efficiently reduce the computational cost of the updating process. In the proposed two-step method, a damage index based on the Frequency Response Function (FRF) of the structure is first used to identify the location of damage. In the second step, the severity of damage in the identified elements is detected using ELM. In order to evaluate the efficacy of ELM, the results obtained with the proposed kernel were compared with those of other kernels proposed for ELM, as well as with the Least Squares Support Vector Machine algorithm. The numerical problems solved indicate that the accuracy of the ELM algorithm in detecting structural damage increases drastically when the LPW kernel is used.
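
The ELM idea used above is simple enough to sketch: input weights and biases of a single hidden layer are drawn at random and frozen, so only the output weights are trained, by one least-squares solve. This sketch uses a standard sigmoid hidden layer, not the paper's Littlewood-Paley wavelet kernel, and a synthetic response surface.

```python
import numpy as np

def elm_train(X, y, n_hidden, rng):
    W = rng.normal(size=(X.shape[1], n_hidden))  # random, never trained
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None) # only these are trained
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, (300, 2))
y = X[:, 0]**2 + np.sin(2 * X[:, 1])             # stand-in damage response
model = elm_train(X, y, n_hidden=80, rng=rng)
rmse = np.sqrt(np.mean((elm_predict(model, X) - y)**2))
```

Because training reduces to a single linear solve, ELM is orders of magnitude cheaper to retrain inside a model-updating loop than iteratively trained networks, which is why it suits surrogate-based damage detection.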

  20. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com [Orange Labs, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée (France); Sudret, Bruno [ETH Zürich, Chair of Risk, Safety and Uncertainty Quantification, Stefano-Franscini-Platz 5, 8093 Zürich (Switzerland); Varsier, Nadège [Orange Labs, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); Picon, Odile [ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée (France); Wiart, Joe [Orange Labs, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France); Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux (France)

    2015-04-01

    In numerical dosimetry, the recent advances in high performance computing led to a strong reduction of the required computational time to assess the specific absorption rate (SAR) characterizing the human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performance of the LARS-Kriging-PC approach is compared to that of an ordinary Kriging model and of a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to have better performance than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
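
The selection step described above — retain only the most influential polynomial terms before using them as the Kriging trend — can be illustrated with a greedy forward-selection stand-in for least-angle regression. Plain monomials replace the orthogonal chaos basis here, and the data are synthetic, so this only sketches the idea.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, (120, 3))
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] * X[:, 2] + 0.01 * rng.normal(size=120)

# Candidate basis: all monomials of total degree <= 2 in three variables.
powers = [p for p in product(range(3), repeat=3) if sum(p) <= 2]
Phi = np.column_stack([np.prod(X**np.array(p), axis=1) for p in powers])

selected, resid = [], y.copy()
for _ in range(3):                      # retain the 3 most influential terms
    corr = np.abs(Phi.T @ resid)        # greedy correlation criterion
    selected.append(int(np.argmax(corr)))
    A = Phi[:, selected]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef                # refit and update the residual
```

In the full LARS-Kriging-PC method the retained polynomials become the deterministic trend of a universal Kriging model, with their number chosen by leave-one-out cross validation.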

  1. Advanced surrogate model and sensitivity analysis methods for sodium fast reactor accident assessment

    International Nuclear Information System (INIS)

    Marrel, A.; Marie, N.; De Lozzo, M.

    2015-01-01

    Within the framework of generation IV Sodium Fast Reactors, the safety in case of severe accidents is assessed. From this statement, CEA has developed a new physical tool to model the accident initiated by the Total Instantaneous Blockage (TIB) of a sub-assembly. This TIB simulator depends on many uncertain input parameters. This paper proposes a global methodology combining several advanced statistical techniques in order to perform a global sensitivity analysis of this TIB simulator. The objective is to identify the most influential uncertain inputs for the various TIB outputs involved in the safety analysis. The proposed methodology makes it possible to take into account the constraints on the TIB simulator outputs (positivity constraints) and to deal simultaneously with the various outputs. To do this, a space-filling design is used and the corresponding TIB model simulations are performed. Based on this learning sample, an efficient constrained Gaussian process metamodel is fitted to each TIB model output. Then, using the metamodels, classical sensitivity analyses are made for each TIB output. Multivariate global sensitivity analyses based on aggregated indices are also performed, providing additional valuable information. Main conclusions on the influence of each uncertain input are derived. - Highlights: • Physical-statistical tool for Sodium Fast Reactors TIB accident. • 27 uncertain parameters (core state, lack of physical knowledge) are highlighted. • Constrained Gaussian process efficiently predicts TIB outputs (safety criteria). • Multivariate sensitivity analyses reveal that three inputs are mainly influential. • The type of corium propagation (thermal or hydrodynamic) is the most influential.

  2. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    DEFF Research Database (Denmark)

    Bolker, B.M.; Gardner, B.; Maunder, M.

    2013-01-01

    Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. R is convenient and (relatively) easy...... to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield...

  3. Supersymmetry with prejudice: Fitting the wrong model to LHC data

    Science.gov (United States)

    Allanach, B. C.; Dolan, Matthew J.

    2012-09-01

    We critically examine interpretations of hypothetical supersymmetric LHC signals, fitting to alternative wrong models of supersymmetry breaking. The signals we consider are some of the most constraining on the sparticle spectrum: invariant mass distributions with edges and endpoints from the golden decay chain q̃ → q χ̃₂⁰ (→ l̃± l∓ q) → χ̃₁⁰ l⁺ l⁻ q. We assume a constrained minimal supersymmetric standard model (CMSSM) point to be the ‘correct’ one, but fit the signals instead with minimal gauge mediated supersymmetry breaking models (mGMSB) with a neutralino quasistable lightest supersymmetric particle, minimal anomaly mediation and large volume string compactification models. Minimal anomaly mediation and the large volume scenario can be unambiguously discriminated against the CMSSM for the assumed signal and 1 fb⁻¹ of LHC data at √s = 14 TeV. However, mGMSB would not be discriminated on the basis of the kinematic endpoints alone. The best-fit point spectra of mGMSB and CMSSM look remarkably similar, making experimental discrimination at the LHC based on the edges or Higgs properties difficult. However, using rate information for the golden chain should provide the additional separation required.

  4. Surrogate-Assisted Genetic Programming With Simplified Models for Automated Design of Dispatching Rules.

    Science.gov (United States)

    Nguyen, Su; Zhang, Mengjie; Tan, Kay Chen

    2017-09-01

    Automated design of dispatching rules for production systems has been an interesting research topic over the last several years. Machine learning, especially genetic programming (GP), has been a powerful approach to dealing with this design problem. However, intensive computational requirements, accuracy and interpretability are still its limitations. This paper aims at developing a new surrogate-assisted GP to help improve the quality of the evolved rules without significant computational costs. The experiments have verified the effectiveness and efficiency of the proposed algorithms as compared to those in the literature. Furthermore, new simplification and visualisation approaches have also been developed to improve the interpretability of the evolved rules. These approaches have shown great potential and proved to be a critical part of the automated design system.

  5. The Meaning of Goodness-of-Fit Tests: Commentary on "Goodness-of-Fit Assessment of Item Response Theory Models"

    Science.gov (United States)

    Thissen, David

    2013-01-01

    In this commentary, David Thissen states that "Goodness-of-fit assessment for IRT models is maturing; it has come a long way from zero." Thissen then references prior works on "goodness of fit" in the index of Lord and Novick's (1968) classic text; Yen (1984); Drasgow, Levine, Tsien, Williams, and Mead (1995); Chen and…

  6. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    International Nuclear Information System (INIS)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-01-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
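
The quantity the sparse PDD surrogate ultimately delivers — Sobol' sensitivity indices — can be illustrated by brute-force conditional-variance Monte Carlo on a toy additive model with known answers. This is a sketch of the definition S_i = Var(E[Y|X_i]) / Var(Y), not of the PDD machinery itself.

```python
import numpy as np

def model(x):                      # additive test function, variances known
    return x[..., 0] + 4.0 * x[..., 1]          # inputs uniform on [0, 1]

rng = np.random.default_rng(7)
n_outer, n_inner = 500, 500
total_var = model(rng.uniform(0, 1, (200_000, 2))).var()

S = []
for i in range(2):
    xi = rng.uniform(0, 1, n_outer)
    cond_means = []
    for v in xi:
        x = rng.uniform(0, 1, (n_inner, 2))
        x[:, i] = v                              # freeze input i
        cond_means.append(model(x).mean())       # E[Y | X_i = v]
    S.append(np.var(cond_means) / total_var)     # Var_i(E[Y|X_i]) / Var(Y)
```

For this model the exact values are S₁ = 1/17 and S₂ = 16/17; a PDD or PC surrogate yields the same indices analytically from its coefficients, without the nested sampling loop.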

  7. The influence of surrogate blood vessels on the impact response of a physical model of the brain.

    Science.gov (United States)

    Parnaik, Yednesh; Beillas, Philippe; Demetropoulos, Constantine K; Hardy, Warren N; Yang, King H; King, Albert I

    2004-11-01

    Cerebral blood vessels are an integral part of the brain and may play a role in the response of the brain to impact. The purpose of this study was to quantify the effects of surrogate vessels on the deformation patterns of a physical model of the brain under various impact conditions. Silicone gel and tubing were used as surrogates for brain tissue and blood vessels, respectively. Two aluminum cylinders representing a coronal section of the brain were constructed. One cylinder was filled with silicone gel only, and the other was filled with silicone gel and silicone tubing arranged in the radial direction in the peripheral region. An array of markers was embedded in the gel in both cylinders to facilitate strain calculation via high-speed video analysis. Both cylinders were simultaneously subjected to a combination of linear and angular acceleration using a two-segment pendulum. Marker motion was tracked, and maximum shear strain (MSS) and maximum principal strain (MPS) were calculated using markers clustered in groups of three. Four test series were conducted. Peak angular acceleration varied from 2,600 to 26,000 rad/s², and peak angular speed varied from 17 to 29 rad/s. For a given impact condition, the test-to-test variation of these values was less than 5.5%. For all clusters, the peak MSS and peak MPS for both physical models were less than 26% and 32%, respectively. For 90% of the cluster locations, the absolute value of the difference in peak MSS and peak MPS between the physical models was 4% and 6%, respectively. In the physical model with tubing, strain tended to decrease in the periphery (near the tubing), while it tended to increase toward the center (away from the tubing). Strain amplitudes were found to be sensitive to the peak angular speeds. In general, this study suggests that the vasculature could influence the deformation response of the brain.
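
One plausible way to obtain MPS and MSS from a three-marker cluster, as described above, is to treat the triangle's edge vectors as defining a local deformation gradient and take eigenvalues of the resulting strain tensor. This is a hedged sketch of that kinematics, not the authors' video-analysis pipeline.

```python
import numpy as np

def cluster_strain(ref, cur):
    """ref, cur: 3x2 arrays of marker positions before/after deformation.
    Returns (maximum principal strain, maximum shear strain)."""
    D0 = np.column_stack([ref[1] - ref[0], ref[2] - ref[0]])
    D = np.column_stack([cur[1] - cur[0], cur[2] - cur[0]])
    F = D @ np.linalg.inv(D0)                  # deformation gradient
    E = 0.5 * (F.T @ F - np.eye(2))            # Green-Lagrange strain tensor
    e1, e2 = np.linalg.eigvalsh(E)             # principal strains, ascending
    return e2, (e2 - e1) / 2.0

# 10% uniaxial stretch along x: principal strain = 0.5 * (1.1**2 - 1) = 0.105.
ref = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
cur = ref * np.array([1.1, 1.0])
mps, mss = cluster_strain(ref, cur)
```

Sweeping this over every marker triad in every video frame gives the per-cluster peak MPS and MSS values reported in the study.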

  8. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Kunkun, E-mail: ktg@illinois.edu [The Center for Exascale Simulation of Plasma-Coupled Combustion (XPACC), University of Illinois at Urbana–Champaign, 1308 W Main St, Urbana, IL 61801 (United States); Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence (France); Congedo, Pietro M. [Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence (France); Abgrall, Rémi [Institut für Mathematik, Universität Zürich, Winterthurerstrasse 190, CH-8057 Zürich (Switzerland)

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  9. Toward a Psychology of Surrogate Decision Making.

    Science.gov (United States)

    Tunney, Richard J; Ziegler, Fenja V

    2015-11-01

    In everyday life, many of the decisions that we make are made on behalf of other people. A growing body of research suggests that we often, but not always, make different decisions on behalf of other people than the other person would choose. This is problematic in the practical case of legally designated surrogate decision makers, who may not meet the substituted judgment standard. Here, we review evidence from studies of surrogate decision making and examine the extent to which surrogate decision making accurately predicts the recipient's wishes, or if it is an incomplete or distorted application of the surrogate's own decision-making processes. We find no existing domain-general model of surrogate decision making. We propose a framework by which surrogate decision making can be assessed and a novel domain-general theory as a unifying explanatory concept for surrogate decisions. © The Author(s) 2015.

  10. Development of correlations for combustion modelling with supercritical surrogate jet fuels

    Directory of Open Access Journals (Sweden)

    Raja Sekhar Dondapati

    2017-12-01

    Supercritical fluid technology finds application in almost every engineering field in one way or another. Clean jet-fuel combustion likewise regards supercritical fluids as a contender for mitigating the global-warming and health challenges posed by the unwanted emissions that are by-products of conventional jet-engine combustion. Because jet fuel is a blend of hundreds of hydrocarbons, estimating chemical kinetics and emission characteristics in simulation becomes very complex. Advancing supercritical jet-fuel combustion technology demands reliable property data for jet fuel as a function of temperature and pressure. Therefore, in the present work one jet-fuel surrogate (n-dodecane), recognized as a constituent of real jet fuel, is studied, and its thermophysical properties are evaluated in the supercritical regime. Correlations have been developed for two properties, namely density and viscosity, at the critical pressure and over a wide range of temperatures (up to TC + 100 K). Further, to endorse the reliability of the developed correlations, two statistical parameters have been evaluated, which illustrate excellent agreement between the data obtained from the online NIST WebBook and the developed correlations.
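
    A hedged illustration of developing such a correlation. The density values below are synthetic stand-ins (not NIST data), and the exponential functional form is an assumption for demonstration, not the paper's actual correlation.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic density data standing in for NIST WebBook values of n-dodecane
    # near its critical pressure (all numbers hypothetical).
    T = np.linspace(660.0, 760.0, 21)   # K, roughly Tc to Tc + 100 K
    rng = np.random.default_rng(1)
    rho = 240.0 * np.exp(-0.004 * (T - 660.0)) + 5.0 * rng.normal(size=T.size)  # kg/m^3

    def correlation(T, a, b, c):
        """Assumed form: rho(T) = a * exp(-b * (T - 660)) + c at the critical pressure."""
        return a * np.exp(-b * (T - 660.0)) + c

    popt, _ = curve_fit(correlation, T, rho, p0=(200.0, 0.003, 0.0))
    pred = correlation(T, *popt)

    # Two statistical measures of agreement, of the kind used to endorse a correlation
    r2 = 1.0 - np.sum((rho - pred) ** 2) / np.sum((rho - rho.mean()) ** 2)
    aad = 100.0 * np.mean(np.abs((pred - rho) / rho))   # average absolute deviation, %
    print(f"R^2 = {r2:.3f}, AAD = {aad:.2f}%")
    ```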

  11. When the model fits the frame: the impact of regulatory fit on efficacy appraisal and persuasion in health communication.

    Science.gov (United States)

    Bosone, Lucia; Martinez, Frédéric; Kalampalikis, Nikos

    2015-04-01

    In health-promotional campaigns, positive and negative role models can be deployed to illustrate the benefits or costs of certain behaviors. The main purpose of this article is to investigate why, how, and when exposure to role models strengthens the persuasiveness of a message, according to regulatory fit theory. We argue that exposure to a positive versus a negative model activates individuals' goals toward promotion rather than prevention. By means of two experiments, we demonstrate that high levels of persuasion occur when a message advertising healthy dietary habits offers a regulatory fit between its framing and the described role model. Our data also establish that the effects of such internal regulatory fit by vicarious experience depend on individuals' perceptions of response-efficacy and self-efficacy. Our findings constitute a significant theoretical complement to previous research on regulatory fit and contain valuable practical implications for health-promotional campaigns. © 2015 by the Society for Personality and Social Psychology, Inc.

  12. Fitting Latent Cluster Models for Networks with latentnet

    Directory of Open Access Journals (Sweden)

    Pavel N. Krivitsky

    2007-12-01

    latentnet is a package to fit and evaluate statistical latent position and cluster models for networks. Hoff, Raftery, and Handcock (2002) suggested an approach to modeling networks based on positing the existence of a latent space of characteristics of the actors. Relationships form as a function of distances between these characteristics as well as functions of observed dyadic-level covariates. In latentnet, social distances are represented in a Euclidean space. The package also includes a variant of the extension of the latent position model that allows for clustering of the positions, developed in Handcock, Raftery, and Tantrum (2007). The package implements Bayesian inference for the models based on a Markov chain Monte Carlo algorithm. It can also compute maximum likelihood estimates for the latent position model and a two-stage maximum likelihood method for the latent position cluster model. For latent position cluster models, the package provides a Bayesian way of assessing how many groups there are, and thus whether or not there is any clustering (since if the preferred number of groups is 1, there is little evidence for clustering). It also estimates which cluster each actor belongs to; these estimates are probabilistic, giving the probability of each actor belonging to each cluster. The package computes four types of point estimates for the coefficients and positions: the maximum likelihood estimate, the posterior mean, the posterior mode, and the estimator that minimizes Kullback-Leibler divergence from the posterior. Goodness-of-fit can be assessed via posterior predictive checks, and the package includes a function to simulate networks from a latent position or latent position cluster model.

  13. Rapid world modeling: Fitting range data to geometric primitives

    International Nuclear Information System (INIS)

    Feddema, J.; Little, C.

    1996-01-01

    For the past seven years, Sandia National Laboratories has been active in the development of robotic systems to help remediate DOE's waste sites and decommissioned facilities. Some of these facilities have high levels of radioactivity that prevent manual clean-up. Tele-operated and autonomous robotic systems have been envisioned as the only suitable means of removing the radioactive elements. World modeling is defined as the process of creating a numerical geometric model of a real-world environment or workspace. This model is often used in robotics to plan robot motions that perform a task while avoiding obstacles. In many applications where the world model does not exist ahead of time, structured lighting, laser range finders, and even acoustical sensors have been used to create three-dimensional maps of the environment. These maps consist of thousands of range points, which are difficult to handle and interpret. This paper presents a least squares technique for fitting range data to planar and quadric surfaces, including cylinders and ellipsoids. Once fit to these primitive surfaces, the amount of data associated with a surface is reduced by up to three orders of magnitude, allowing for more rapid handling and analysis of world data.
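
    A hypothetical example of the primitive-fitting idea: a noisy range scan of a flat wall is reduced to a three-parameter plane z = a·x + b·y + c by linear least squares (quadrics such as cylinders and ellipsoids add squared and cross terms to the same basis).

    ```python
    import numpy as np

    # Synthetic range points sampled from a slightly noisy planar wall
    rng = np.random.default_rng(42)
    n = 5000
    x, y = rng.uniform(-1.0, 1.0, n), rng.uniform(-1.0, 1.0, n)
    z = 0.3 * x - 0.7 * y + 2.0 + 0.01 * rng.normal(size=n)

    # Linear least squares: design matrix [x, y, 1] against measured z
    A = np.column_stack([x, y, np.ones(n)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    print(f"plane: z = {a:.3f}*x + {b:.3f}*y + {c:.3f}   ({n} points -> 3 parameters)")
    ```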

  14. Dynamic simulation of knee-joint loading during gait using force-feedback control and surrogate contact modelling.

    Science.gov (United States)

    Walter, Jonathan P; Pandy, Marcus G

    2017-10-01

    The aim of this study was to perform multi-body, muscle-driven, forward-dynamics simulations of human gait using a 6-degree-of-freedom (6-DOF) model of the knee in tandem with a surrogate model of articular contact and force control. A forward-dynamics simulation incorporating position, velocity, and contact force-feedback control (FFC) was used to track full-body motion capture data recorded for multiple trials of level walking and stair descent performed by two individuals with instrumented knee implants. Tibiofemoral contact force errors for FFC were compared against those obtained from a standard computed muscle control algorithm (CMC) with a 6-DOF knee contact model (CMC6); CMC with a 1-DOF translating hinge-knee model (CMC1); and static optimization with a 1-DOF translating hinge-knee model (SO). Tibiofemoral joint loads predicted by FFC and CMC6 were comparable for level walking; however, FFC produced more accurate results for stair descent. SO yielded reasonable predictions of joint contact loading for level walking, but significant differences between model and experiment were observed for stair descent. CMC1 produced the least accurate predictions of tibiofemoral contact loads for both tasks. Our findings suggest that reliable estimates of knee-joint loading may be obtained by incorporating position, velocity, and force-feedback control with a multi-DOF model of joint contact in a forward-dynamics simulation of gait. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  15. A model for emergency department end-of-life communications after acute devastating events--part I: decision-making capacity, surrogates, and advance directives.

    Science.gov (United States)

    Limehouse, Walter E; Feeser, V Ramana; Bookman, Kelly J; Derse, Arthur

    2012-09-01

    Making decisions for a patient affected by sudden devastating illness or injury traumatizes a patient's family and loved ones. Even in the absence of an emergency, surrogates making end-of-life treatment decisions may experience negative emotional effects. Helping surrogates with these end-of-life decisions under emergent conditions requires the emergency physician (EP) to be clear, making medical recommendations with sensitivity. This model for emergency department (ED) end-of-life communications after acute devastating events comprises the following steps: 1) determine the patient's decision-making capacity; 2) identify the legal surrogate; 3) elicit patient values as expressed in completed advance directives; 4) determine patient/surrogate understanding of the life-limiting event and expectant treatment goals; 5) convey physician understanding of the event, including prognosis, treatment options, and recommendation; 6) share decisions regarding withdrawing or withholding of resuscitative efforts, using available resources and considering options for organ donation; and 7) revise treatment goals as needed. Emergency physicians should break bad news compassionately, yet sufficiently, so that surrogate and family understand both the gravity of the situation and the lack of long-term benefit of continued life-sustaining interventions. EPs should also help the surrogate and family understand that palliative care addresses comfort needs of the patient including adequate treatment for pain, dyspnea, or anxiety. Part I of this communications model reviews determination of decision-making capacity, surrogacy laws, and advance directives, including legal definitions and application of these steps; Part II (which will appear in a future issue of AEM) covers communication moving from resuscitative to end-of-life and palliative treatment. 
EPs should recognize acute devastating illness or injuries, when appropriate, as opportunities to initiate end-of-life discussions and to

  16. An NCME Instructional Module on Item-Fit Statistics for Item Response Theory Models

    Science.gov (United States)

    Ames, Allison J.; Penfield, Randall D.

    2015-01-01

    Drawing valid inferences from item response theory (IRT) models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. This instructional module provides an overview of methods used for evaluating the fit of IRT models. Upon completing…

  17. Evaluation of Murine Norovirus, Feline Calicivirus, Poliovirus, and MS2 as Surrogates for Human Norovirus in a Model of Viral Persistence in Surface Water and Groundwater

    Science.gov (United States)

    Human noroviruses (NoV) are a significant cause of nonbacterial gastroenteritis worldwide, with contaminated drinking water a potential transmission route. The absence of a cell culture infectivity model for NoV necessitates the use of molecular methods and/or viral surrogate mod...

  18. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    Science.gov (United States)

    Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magunsson, Arni; Martell, Steve; Nash, John; Nielson, Anders; Regentz, Jim; Skaug, Hans; Zipkin, Elise

    2013-01-01

    1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.
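
    The same kind of estimation problem can be sketched in Python (with hypothetical data): fitting a logistic growth curve, a typical nonlinear ecological model, using sensible starting values derived from the data, in the spirit of the paper's suggestions for making models amenable to parameter estimation.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Logistic growth: carrying capacity K, growth rate r, initial abundance n0
    def logistic(t, K, r, n0):
        return K / (1.0 + (K / n0 - 1.0) * np.exp(-r * t))

    # Synthetic abundance observations with multiplicative noise
    t = np.linspace(0.0, 20.0, 30)
    rng = np.random.default_rng(5)
    obs = logistic(t, 100.0, 0.5, 5.0) * np.exp(0.05 * rng.normal(size=t.size))

    # Data-derived starting values help the optimizer converge
    popt, _ = curve_fit(logistic, t, obs, p0=(obs.max(), 0.3, max(obs[0], 1e-6)))
    K, r, n0 = popt
    print(f"K = {K:.1f}, r = {r:.2f}, n0 = {n0:.1f}")
    ```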

  19. Feature extraction through least squares fit to a simple model

    International Nuclear Information System (INIS)

    Demuth, H.B.

    1976-01-01

    The Oak Ridge National Laboratory (ORNL) presented the Los Alamos Scientific Laboratory (LASL) with 18 radiographs of fuel rod test bundles. The problem is to estimate the thickness of the gap between some cylindrical rods and a flat wall surface. The edges of the gaps are poorly defined due to finite source size, x-ray scatter, parallax, film grain noise, and other degrading effects. The radiographs were scanned and the scan-line data were averaged to reduce noise and to convert the problem to one dimension. A model of the ideal gap, convolved with an appropriate point-spread function, was fit to the averaged data with a least squares program; and the gap width was determined from the final fitted-model parameters. The least squares routine did converge and the gaps obtained are of reasonable size. The method is remarkably insensitive to noise. This report describes the problem, the techniques used to solve it, and the results and conclusions. Suggestions for future work are also given
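
    A hedged sketch of the report's approach with synthetic numbers: an ideal dark gap convolved with a Gaussian point-spread function has a closed form via the error function, and nonlinear least squares then recovers the gap width from a noisy scan line. The profile shape, PSF, and all parameter values here are illustrative assumptions, not the report's actual model.

    ```python
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.special import erf

    def blurred_gap(p, x):
        """Bright background with a dark band (the gap) blurred by a Gaussian PSF."""
        center, width, depth, base, sigma = p
        lo, hi = center - width / 2.0, center + width / 2.0
        band = 0.5 * (erf((x - lo) / (sigma * np.sqrt(2))) - erf((x - hi) / (sigma * np.sqrt(2))))
        return base - depth * band

    # Synthetic averaged scan-line data with a known gap width of 18 pixels
    x = np.arange(200.0)
    true_params = (100.0, 18.0, 0.6, 1.0, 4.0)
    data = blurred_gap(true_params, x) + 0.02 * np.random.default_rng(0).normal(size=x.size)

    # Fit all five parameters; the gap width is read off the fitted model
    fit = least_squares(lambda p: blurred_gap(p, x) - data,
                        x0=[90.0, 10.0, 0.5, 1.0, 3.0])
    print("estimated gap width (pixels):", round(fit.x[1], 2))
    ```

    As the report notes, fitting the whole model profile rather than locating edges directly makes the estimate insensitive to noise.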

  20. Fit reduced GUTS models online: From theory to practice.

    Science.gov (United States)

    Baudrot, Virgile; Veber, Philippe; Gence, Guillaume; Charles, Sandrine

    2018-05-20

    Mechanistic modeling approaches, such as the toxicokinetic-toxicodynamic (TKTD) framework, are promoted by international institutions such as the European Food Safety Authority and the Organization for Economic Cooperation and Development to assess the environmental risk of chemical products generated by human activities. TKTD models can encompass a large set of mechanisms describing the kinetics of compounds inside organisms (e.g., uptake and elimination) and their effect at the level of individuals (e.g., damage accrual, recovery, and death mechanism). Compared to classical dose-response models, TKTD approaches have many advantages, including accounting for temporal aspects of exposure and toxicity, considering data points all along the experiment and not only at the end, and making predictions for untested situations such as realistic exposure scenarios. Among TKTD models, the general unified threshold model of survival (GUTS) is one of the most recent and innovative frameworks but is still underused in practice, especially by risk assessors, because specialist programming and statistical skills are necessary to run it. Making GUTS models easier to use through a new module freely available from the web platform MOSAIC (standing for MOdeling and StAtistical tools for ecotoxICology) should promote GUTS operability in support of the daily work of environmental risk assessors. This paper presents the main features of MOSAIC_GUTS: uploading of the experimental data, GUTS fitting analysis, and LCx estimates with their uncertainty. These features are exemplified with literature data. Integr Environ Assess Manag 2018;00:000-000. © 2018 SETAC.

  1. Fitting the Probability Distribution Functions to Model Particulate Matter Concentrations

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2017-01-01

    The main objective of this study is to identify the best probability distribution and plotting-position formula for modeling the concentrations of Total Suspended Particles (TSP) as well as Particulate Matter with an aerodynamic diameter <10 μm (PM10). The best distribution provides the estimated probabilities of exceeding the threshold limit given by the Egyptian Air Quality Limit Value (EAQLV), and the number of exceedance days is estimated. The EAQLV standard limits for TSP and PM10 concentrations are 24-h averages of 230 μg/m³ and 70 μg/m³, respectively. Five frequency distribution functions combined with seven plotting-position formulas (empirical cumulative distribution functions) are compared in fitting the daily average TSP and PM10 concentrations for Ain Sokhna city in 2014. The Quantile-Quantile plot (Q-Q plot) is used as a method for assessing how closely a data set fits a particular distribution. A probability distribution that represents the TSP and PM10 data is chosen based on statistical performance indicator values. The results show that the Hosking and Wallis plotting position combined with the Frechet distribution gave the best fit for TSP and PM10 concentrations, followed by the Burr distribution with the same plotting position. The exceedance probability and days over the EAQLV are predicted using the Frechet distribution. In 2014, the exceedance probability and number of exceedance days for TSP concentrations are 0.052 and 19 days, respectively. Furthermore, the PM10 concentration is found to exceed the threshold limit on 174 days.
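
    A hedged sketch of the exceedance calculation with synthetic concentrations (not the study's data): scipy's `invweibull` is the Frechet distribution, fitted here by maximum likelihood rather than the paper's plotting-position approach.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic daily TSP-like concentrations (μg/m³), drawn from a Frechet
    # distribution with hypothetical shape and scale parameters
    rng = np.random.default_rng(3)
    tsp = stats.invweibull.rvs(4.0, loc=0.0, scale=120.0, size=365, random_state=rng)

    # Fit the Frechet distribution (location fixed at zero), then evaluate
    # the probability of exceeding the 230 μg/m³ threshold
    c, loc, scale = stats.invweibull.fit(tsp, floc=0.0)
    p_exceed = stats.invweibull.sf(230.0, c, loc, scale)   # P(TSP > 230 μg/m³)
    expected_days = 365.0 * p_exceed
    print(f"exceedance probability = {p_exceed:.3f}, expected exceedance days = {expected_days:.0f}")
    ```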

  2. The FIT Model - Fuel-cycle Integration and Tradeoffs

    International Nuclear Information System (INIS)

    Piet, Steven J.; Soelberg, Nick R.; Bays, Samuel E.; Pereira, Candido; Pincock, Layne F.; Shaber, Eric L.; Teague, Melissa C.; Teske, Gregory M.; Vedros, Kurt G.

    2010-01-01

    All mass streams from fuel separation and fabrication are products that must meet some set of product criteria - fuel feedstock impurity limits, waste acceptance criteria (WAC), material storage (if any), or recycle material purity requirements such as zirconium for cladding or lanthanides for industrial use. These must be considered in a systematic and comprehensive way. The FIT model and the 'system losses study' team that developed it (Shropshire 2009; Piet 2010) are an initial step by the FCR&D program toward a global analysis that accounts for the requirements and capabilities of each component, as well as major material flows within an integrated fuel cycle. This will help the program identify near-term R&D needs and set longer-term goals. The question originally posed to the 'system losses study' was the cost of separation, fuel fabrication, waste management, etc. versus the separation efficiency. In other words, are the costs associated with marginal reductions in separations losses (or improvements in product recovery) justified by the gains in the performance of other systems? We have learned that that is the wrong question. The right question is: how does one adjust the compositions and quantities of all mass streams, given uncertain product criteria, to balance competing objectives including cost? FIT is a method to analyze different fuel cycles on common bases to determine how chemical performance changes in one part of a fuel cycle (say, used-fuel cooling times or separation efficiencies) affect other parts of the fuel cycle. FIT estimates impurities in fuel and waste via a rough estimate of physics and mass balance for a set of technologies. If feasibility is an issue for a set, as it is for 'minimum fuel treatment' approaches such as melt refining and AIROX, FIT can help estimate how performance would have to change to achieve feasibility.

  3. Evaluation of the pentylenetetrazole seizure threshold test in epileptic mice as surrogate model for drug testing against pharmacoresistant seizures.

    Science.gov (United States)

    Töllner, Kathrin; Twele, Friederike; Löscher, Wolfgang

    2016-04-01

    Resistance to antiepileptic drugs (AEDs) is a major problem in epilepsy therapy, so that development of more effective AEDs is an unmet clinical need. Several rat and mouse models of epilepsy with spontaneous difficult-to-treat seizures exist, but because testing of antiseizure drug efficacy is extremely laborious in such models, they are only rarely used in the development of novel AEDs. Recently, the use of acute seizure tests in epileptic rats or mice has been proposed as a novel strategy for evaluating novel AEDs for increased antiseizure efficacy. In the present study, we compared the effects of five AEDs (valproate, phenobarbital, diazepam, lamotrigine, levetiracetam) on the pentylenetetrazole (PTZ) seizure threshold in mice that were made epileptic by pilocarpine. Experiments were started 6 weeks after a pilocarpine-induced status epilepticus. At this time, control seizure threshold was significantly lower in epileptic than in nonepileptic animals. Unexpectedly, only one AED (valproate) was less effective at increasing seizure threshold in epileptic vs. nonepileptic mice, and this difference was restricted to doses of 200 and 300 mg/kg, whereas the difference disappeared at 400 mg/kg. All other AEDs exerted similar seizure threshold increases in epileptic and nonepileptic mice. Thus, induction of acute seizures with PTZ in mice pretreated with pilocarpine does not provide an effective and valuable surrogate method to screen drugs for antiseizure efficacy in a model of difficult-to-treat chronic epilepsy, as previously suggested from experiments with this approach in rats. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Surrogate Analysis and Index Developer (SAID) tool

    Science.gov (United States)

    Domanski, Marian M.; Straub, Timothy D.; Landers, Mark N.

    2015-10-01

    The use of acoustic and other parameters as surrogates for suspended-sediment concentrations (SSC) in rivers has been successful in multiple applications across the Nation. Tools to process and evaluate the data are critical to advancing the operational use of surrogates along with the subsequent development of regression models from which real-time sediment concentrations can be made available to the public. Recent developments in both areas are having an immediate impact on surrogate research and on surrogate monitoring sites currently (2015) in operation.

  5. A fitting LEGACY – modelling Kepler's best stars

    Directory of Open Access Journals (Sweden)

    Aarslev Magnus J.

    2017-01-01

    The LEGACY sample represents the best solar-like stars observed in the Kepler mission [5, 8]. The 66 stars in the sample are all on the main sequence or only slightly more evolved. Each has more than one year of short-cadence observation data, allowing for precise extraction of individual frequencies. Here we present model fits using a modified ASTFIT procedure employing two different near-surface-effect corrections: one by Christensen-Dalsgaard [4] and a newer correction proposed by Ball & Gizon [1]. We then compare the results obtained using the different corrections. We find that using the latter correction yields lower masses and significantly lower χ² values for a large part of the sample.

  6. Global fits of GUT-scale SUSY models with GAMBIT

    Science.gov (United States)

    Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin

    2017-12-01

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.

  7. Global fits of GUT-scale SUSY models with GAMBIT

    Energy Technology Data Exchange (ETDEWEB)

    Athron, Peter [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Balazs, Csaba [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Bringmann, Torsten; Dal, Lars A.; Krislock, Abram; Raklev, Are [University of Oslo, Department of Physics, Oslo (Norway); Buckley, Andy [University of Glasgow, SUPA, School of Physics and Astronomy, Glasgow (United Kingdom); Chrzaszcz, Marcin [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); H. Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Krakow (Poland); Conrad, Jan; Edsjoe, Joakim; Farmer, Ben [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Cornell, Jonathan M. [McGill University, Department of Physics, Montreal, QC (Canada); Jackson, Paul; White, Martin [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); University of Adelaide, Department of Physics, Adelaide, SA (Australia); Kvellestad, Anders; Savage, Christopher [NORDITA, Stockholm (Sweden); Mahmoudi, Farvah [Univ Lyon, Univ Lyon 1, CNRS, ENS de Lyon, Centre de Recherche Astrophysique de Lyon UMR5574, Saint-Genis-Laval (France); Theoretical Physics Department, CERN, Geneva (Switzerland); Martinez, Gregory D. 
[University of California, Physics and Astronomy Department, Los Angeles, CA (United States); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Rogan, Christopher [Harvard University, Department of Physics, Cambridge, MA (United States); Ruiz de Austri, Roberto [IFIC-UV/CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Saavedra, Aldo [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); The University of Sydney, Faculty of Engineering and Information Technologies, Centre for Translational Data Science, School of Physics, Camperdown, NSW (Australia); Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Serra, Nicola [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Weniger, Christoph [University of Amsterdam, GRAPPA, Institute of Physics, Amsterdam (Netherlands); Collaboration: The GAMBIT Collaboration

    2017-12-15

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos. (orig.)

  8. A bipartite fitness model for online music streaming services

    Science.gov (United States)

    Pongnumkul, Suchit; Motohashi, Kazuyuki

    2018-01-01

    This paper proposes an evolution model and an analysis of the behavior of music consumers on online music streaming services. While previous studies have observed power-law degree distributions of usage in online music streaming services, the underlying behavior of users has not been well understood. Users and songs can be described using a bipartite network in which an edge exists between a user node and a song node when the user has listened to that song. The growth mechanism of bipartite networks has been used to understand the evolution of online bipartite networks (Zhang et al., 2013). Existing bipartite models are based on a preferential attachment mechanism (Barabási and Albert, 1999) in which the probability that a user listens to a song is proportional to its current popularity. This mechanism does not capture two types of real-world phenomena. First, a newly released song of high quality sometimes gains popularity quickly. Second, the popularity of songs normally decreases over time. This paper therefore proposes a new model, better suited to online music services, that adds fitness and aging functions to the song nodes of the bipartite network proposed by Zhang et al. (2013). Theoretical analyses are performed for the degree distribution of songs. Empirical data from an online streaming service, Last.fm, are used to confirm the degree distribution of the object nodes. Simulation results show improvements over a previous model. Finally, to illustrate the application of the proposed model, a simplified royalty cost model for online music services demonstrates how changes in the proposed parameters can affect costs for online music streaming providers. Managerial implications are also discussed.
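
    A toy simulation of the proposed mechanism with hypothetical parameters (the paper's actual fitness and aging functions and parameter values are not reproduced here): each listen goes to a song with probability proportional to degree × fitness × aging, where the aging factor decays exponentially with the song's age.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_songs, n_steps = 50, 5000
    fitness = rng.lognormal(0.0, 0.5, n_songs)        # intrinsic song quality
    birth = rng.integers(0, n_steps // 2, n_songs)    # staggered release times
    birth[0] = 0                                      # at least one song available from the start
    degree = np.ones(n_songs)                         # each song starts with one listen

    for t in range(n_steps):
        alive = birth <= t
        age = np.where(alive, t - birth, 0)
        # attachment weight: popularity * fitness * exponential aging
        weight = np.where(alive, degree * fitness * np.exp(-age / 2000.0), 0.0)
        song = rng.choice(n_songs, p=weight / weight.sum())
        degree[song] += 1.0

    # Under fitness-driven attachment, popularity should track intrinsic quality
    print("corr(degree, fitness) =", round(float(np.corrcoef(degree, fitness)[0, 1]), 2))
    ```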

  9. Fitting outbreak models to data from many small norovirus outbreaks

    Directory of Open Access Journals (Sweden)

    Eamon B. O’Dea

    2014-03-01

    Full Text Available Infectious disease often occurs in small, independent outbreaks in populations with varying characteristics. Each outbreak by itself may provide too little information for accurate estimation of epidemic model parameters. Here we show that using standard stochastic epidemic models for each outbreak and allowing parameters to vary between outbreaks according to a linear predictor leads to a generalized linear model that accurately estimates parameters from many small and diverse outbreaks. By estimating initial growth rates in addition to transmission rates, we are able to characterize variation in numbers of initially susceptible individuals or contact patterns between outbreaks. With simulation, we find that the estimates are fairly robust to the data being collected at discrete intervals and imputation of about half of all infectious periods. We apply the method by fitting data from 75 norovirus outbreaks in health-care settings. Our baseline regression estimates are 0.0037 transmissions per infective-susceptible day, an initial growth rate of 0.27 transmissions per infective day, and a symptomatic period of 3.35 days. Outbreaks in long-term-care facilities had significantly higher transmission and initial growth rates than outbreaks in hospitals.
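
    As a rough illustration of why pooling many small outbreaks matters, the sketch below simulates discrete-day stochastic SIR outbreaks and recovers a shared per-pair-per-day transmission rate. All parameter values and the discrete-day formulation are hypothetical, and the covariate (linear-predictor) part of the authors' GLM approach is omitted:

```python
import random

def simulate_outbreak(beta, gamma, S0, seed, max_days=1000):
    """Discrete-day stochastic SIR outbreak started by a single case.
    Each susceptible is infected on a given day with probability
    1 - (1 - beta)**I; each infective recovers with probability gamma."""
    rng = random.Random(seed)
    S, I = S0, 1
    exposure = transmissions = 0
    for _ in range(max_days):
        if I == 0:
            break
        exposure += S * I                  # infective-susceptible days at risk
        p_inf = 1 - (1 - beta) ** I
        new_inf = sum(1 for _ in range(S) if rng.random() < p_inf)
        recov = sum(1 for _ in range(I) if rng.random() < gamma)
        transmissions += new_inf
        S -= new_inf
        I += new_inf - recov
    return transmissions, exposure

# Each outbreak alone is too small to pin down beta; pooling 75 is not.
beta_true, tot_t, tot_e = 0.004, 0, 0
for s in range(75):
    t, e = simulate_outbreak(beta_true, gamma=0.3, S0=30, seed=s)
    tot_t += t
    tot_e += e
beta_hat = tot_t / tot_e   # MLE of a single shared transmission rate
```

    Under a constant-rate model, the pooled estimator is simply total transmissions divided by total infective-susceptible days, the same unit in which the paper reports its baseline estimate of 0.0037.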

  10. Correcting Model Fit Criteria for Small Sample Latent Growth Models with Incomplete Data

    Science.gov (United States)

    McNeish, Daniel; Harring, Jeffrey R.

    2017-01-01

    To date, small sample problems with latent growth models (LGMs) have not received as much attention in the literature as related mixed-effect models (MEMs). Although many models can be interchangeably framed as an LGM or an MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…

  11. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    Science.gov (United States)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is scaled by a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient of an aircraft are provided to demonstrate the approximation capability of the proposed approach in comparison with three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
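
    The core idea, correcting a cheap low-fidelity model with a scaling term plus a Gaussian-process (kriging) interpolant of the remaining discrepancy, can be sketched with NumPy. The test functions and the linear (rather than polynomial) scaling are illustrative assumptions; the adaptive sampling loop of ASM-IHK is not reproduced:

```python
import numpy as np

def rbf(X1, X2, ls=0.3):
    # Squared-exponential correlation between two 1-D sample sets.
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# Hypothetical stand-ins for an expensive simulation and a cheap approximation.
def f_hi(x):
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def f_lo(x):
    return 0.5 * f_hi(x) + 10 * (x - 0.5) - 5

X_hi = np.array([0.0, 0.3, 0.6, 1.0])     # few expensive high-fidelity samples

# Step 1: scale the low-fidelity trend to the high-fidelity data
# (a linear map rho * f_lo + c standing in for a polynomial surface).
A = np.vstack([f_lo(X_hi), np.ones_like(X_hi)]).T
rho, c = np.linalg.lstsq(A, f_hi(X_hi), rcond=None)[0]

# Step 2: interpolate the remaining discrepancy with a Gaussian process.
K = rbf(X_hi, X_hi) + 1e-10 * np.eye(len(X_hi))
alpha = np.linalg.solve(K, f_hi(X_hi) - (rho * f_lo(X_hi) + c))

def surrogate(x):
    return rho * f_lo(x) + c + rbf(x, X_hi) @ alpha
```

    By construction the surrogate interpolates the high-fidelity samples exactly; an adaptive scheme such as ASM-IHK would then choose where to place the next expensive sample.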

  12. Artificial neural network surrogate development of equivalence models for nuclear data uncertainty propagation in scenario studies

    Directory of Open Access Journals (Sweden)

    Krivtchik Guillaume

    2017-01-01

    Full Text Available Scenario studies simulate the whole fuel cycle over a period of time, from extraction of natural resources to geological storage. Through the comparison of different reactor fleet evolutions and fuel management options, they constitute a decision-making support. Consequently, uncertainty propagation studies, which are necessary to assess the robustness of the studies, are strategic. Among the numerous types of physical model in scenario computation that generate uncertainty, the equivalence models, built for calculating fresh fuel enrichment (for instance, plutonium content in PWR MOX) so as to be representative of nominal fuel behavior, are very important. The equivalence condition is generally formulated in terms of end-of-cycle mean core reactivity. As this results from a physical computation, it is therefore associated with an uncertainty. A state of the art of equivalence models is exposed and discussed. It is shown that the existing equivalence models implemented in scenario codes, such as COSI6, are not suited to uncertainty propagation computation, for the following reasons: (i) existing analytical models neglect irradiation, which has a strong impact on the result and its uncertainty; (ii) current black-box models are not suited to cross-section perturbations management; and (iii) models based on transport and depletion codes are too time-consuming for stochastic uncertainty propagation. A new type of equivalence model based on Artificial Neural Networks (ANN) has been developed, constructed with data calculated with neutron transport and depletion codes. The model inputs are the fresh fuel isotopy, the irradiation parameters (burnup, core fractionation, etc.), cross-section perturbations and the equivalence criterion (for instance, the core target reactivity in pcm at the end of the irradiation cycle). The model output is the fresh fuel content such that the target reactivity is reached at the end of the irradiation cycle. Those models are built and

  13. Regression calibration with more surrogates than mismeasured variables

    KAUST Repository

    Kipnis, Victor

    2012-06-29

    In a recent paper (Weller EA, Milton DK, Eisen EA, Spiegelman D. Regression calibration for logistic regression with multiple surrogates for one exposure. Journal of Statistical Planning and Inference 2007; 137: 449-461), the authors discussed fitting logistic regression models when a scalar main explanatory variable is measured with error by several surrogates, that is, a situation with more surrogates than variables measured with error. They compared two methods of adjusting for measurement error using a regression calibration approximate model as if it were exact. One is the standard regression calibration approach consisting of substituting an estimated conditional expectation of the true covariate given observed data in the logistic regression. The other is a novel two-stage approach when the logistic regression is fitted to multiple surrogates, and then a linear combination of estimated slopes is formed as the estimate of interest. Applying estimated asymptotic variances for both methods in a single data set with some sensitivity analysis, the authors asserted superiority of their two-stage approach. We investigate this claim in some detail. A troubling aspect of the proposed two-stage method is that, unlike standard regression calibration and a natural form of maximum likelihood, the resulting estimates are not invariant to reparameterization of nuisance parameters in the model. We show, however, that, under the regression calibration approximation, the two-stage method is asymptotically equivalent to a maximum likelihood formulation, and is therefore in theory superior to standard regression calibration. However, our extensive finite-sample simulations in the practically important parameter space where the regression calibration model provides a good approximation failed to uncover such superiority of the two-stage method. We also discuss extensions to different data structures.
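
    The standard regression calibration approach discussed above can be sketched with NumPy on synthetic data: estimate E[X|W] on a validation subsample where the true exposure is observed (an assumption about the data structure made for this illustration), then substitute the calibrated value into an ordinary logistic fit. All numerical values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                                # true exposure
w = x[:, None] + rng.normal(scale=0.8, size=(n, 2))   # two noisy surrogates
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.0 * x))))  # binary outcome

# Calibration step: linear regression of x on the surrogates, fitted on a
# validation subsample (first 500 records) where x is observed.
V = slice(0, 500)
A = np.column_stack([np.ones(500), w[V]])
coef, *_ = np.linalg.lstsq(A, x[V], rcond=None)
x_hat = np.column_stack([np.ones(n), w]) @ coef       # estimated E[x | w]

# Outcome step: logistic regression of y on x_hat via Newton-Raphson.
X = np.column_stack([np.ones(n), x_hat])
beta = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-(X @ beta)))
    Wd = mu * (1 - mu)
    beta += np.linalg.solve(X.T @ (X * Wd[:, None]), X.T @ (y - mu))
```

    Under the regression calibration approximation, the fitted slope stays close to the true value of 1.0, with only mild residual attenuation.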

  14. Regression calibration with more surrogates than mismeasured variables

    KAUST Repository

    Kipnis, Victor; Midthune, Douglas; Freedman, Laurence S.; Carroll, Raymond J.

    2012-01-01

    In a recent paper (Weller EA, Milton DK, Eisen EA, Spiegelman D. Regression calibration for logistic regression with multiple surrogates for one exposure. Journal of Statistical Planning and Inference 2007; 137: 449-461), the authors discussed fitting logistic regression models when a scalar main explanatory variable is measured with error by several surrogates, that is, a situation with more surrogates than variables measured with error. They compared two methods of adjusting for measurement error using a regression calibration approximate model as if it were exact. One is the standard regression calibration approach consisting of substituting an estimated conditional expectation of the true covariate given observed data in the logistic regression. The other is a novel two-stage approach when the logistic regression is fitted to multiple surrogates, and then a linear combination of estimated slopes is formed as the estimate of interest. Applying estimated asymptotic variances for both methods in a single data set with some sensitivity analysis, the authors asserted superiority of their two-stage approach. We investigate this claim in some detail. A troubling aspect of the proposed two-stage method is that, unlike standard regression calibration and a natural form of maximum likelihood, the resulting estimates are not invariant to reparameterization of nuisance parameters in the model. We show, however, that, under the regression calibration approximation, the two-stage method is asymptotically equivalent to a maximum likelihood formulation, and is therefore in theory superior to standard regression calibration. However, our extensive finite-sample simulations in the practically important parameter space where the regression calibration model provides a good approximation failed to uncover such superiority of the two-stage method. We also discuss extensions to different data structures.

  15. FITTING OF PARAMETRIC BUILDING MODELS TO OBLIQUE AERIAL IMAGES

    Directory of Open Access Journals (Sweden)

    U. S. Panday

    2012-09-01

    Full Text Available In the literature and in photogrammetric workstations, many approaches and systems to automatically reconstruct buildings from remote sensing data are described and available. Those building models are being used, for instance, in city modelling or in a cadastre context. If a roof overhang is present, the building walls cannot be estimated correctly from nadir-view aerial images or airborne laser scanning (ALS) data. This leads to inconsistent building outlines, which has a negative influence on visual impression, but more seriously also represents a wrong legal boundary in the cadastre. Oblique aerial images, as opposed to nadir-view images, reveal greater detail, enabling one to see different views of an object taken from different directions. Building walls are visible from oblique images directly, and those images are used for automated roof overhang estimation in this research. A fitting algorithm is employed to find roof parameters of simple buildings. It uses a least squares algorithm to fit projected wire frames to their corresponding edge lines extracted from the images. Self-occlusion is detected based on the intersection of the viewing ray and the planes formed by the building, whereas occlusion from other objects is detected using an ALS point cloud. Overhang and ground height are obtained by sweeping vertical and horizontal planes, respectively. Experimental results are verified with high-resolution ortho-images, field survey, and ALS data. Planimetric accuracy of 1 cm mean and 5 cm standard deviation was obtained, while building orientations were accurate to a mean of 0.23° and a standard deviation of 0.96° with respect to the ortho-image. Overhang parameters agreed to approximately 10 cm with the field survey. The ground and roof heights were accurate to means of -9 cm and 8 cm, with standard deviations of 16 cm and 8 cm, with respect to ALS. The developed approach reconstructs 3D building models well in cases of sufficient texture. More images should be acquired for

  16. WE-AB-303-11: Verification of a Deformable 4DCT Motion Model for Lung Tumor Tracking Using Different Driving Surrogates

    Energy Technology Data Exchange (ETDEWEB)

    Woelfelschneider, J [University Hospital Erlangen, Erlangen, DE (Germany); Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, DE (Germany); Seregni, M; Fassi, A; Baroni, G; Riboldi, M [Politecnico di Milano, Milano (Italy); Bert, C [University Hospital Erlangen, Erlangen, DE (Germany); Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, DE (Germany); GSI - Helmholtz Centre for Heavy Ion Research, Darmstadt, DE (Germany)

    2015-06-15

    Purpose: Tumor tracking is an advanced technique to treat intra-fractionally moving tumors. The aim of this study is to validate a surrogate-driven model based on four-dimensional computed tomography (4DCT) that is able to predict CT volumes corresponding to arbitrary respiratory states. Further, a comparison of three different driving surrogates is evaluated. Methods: This study is based on multiple 4DCTs of two patients treated for bronchial carcinoma and metastasis. Analyses for 18 additional patients are currently ongoing. The motion model was estimated from the planning 4DCT through deformable image registration. To predict a certain phase of a follow-up 4DCT, the model accounts for inter-fractional variations (baseline correction) and intra-fractional respiratory parameters (amplitude and phase) derived from surrogates. In this evaluation, three different approaches were used to extract the motion surrogate: for each 4DCT phase, the 3D thoraco-abdominal surface motion, the body volume, and the anterior-posterior motion of a virtual single external marker defined on the sternum were investigated. The estimated volumes resulting from the model were compared to the ground-truth clinical 4DCTs using absolute HU differences in the lung volume and landmarks localized using the Scale Invariant Feature Transform (SIFT). Results: The results show absolute HU differences between estimated and ground-truth images with median values limited to 55 HU and inter-quartile ranges (IQR) lower than 100 HU. Median 3D distances between about 1500 matching landmarks are below 2 mm for the 3D surface motion and body volume methods. The single-marker surrogate results in median distances increased by up to 0.6 mm. Analyses for the extended database including 20 patients are currently in progress. Conclusion: The results depend mainly on the image quality of the initial 4DCTs and the deformable image registration. All investigated surrogates can be used to estimate follow-up 4DCT phases

  17. A cautionary note on the use of information fit indexes in covariance structure modeling with means

    NARCIS (Netherlands)

    Wicherts, J.M.; Dolan, C.V.

    2004-01-01

    Information fit indexes such as Akaike Information Criterion, Consistent Akaike Information Criterion, Bayesian Information Criterion, and the expected cross validation index can be valuable in assessing the relative fit of structural equation models that differ regarding restrictiveness. In cases

  18. Evaluation of a Surrogate Contact Model in Force-Dependent Kinematic Simulations of Total Knee Replacement

    NARCIS (Netherlands)

    Marra, M.A.; Andersen, M.S.; Damsgaard, M.; Koopman, B.; Janssen, D.W.; Verdonschot, N.J.

    2017-01-01

    Knowing the forces in the human body is of great clinical interest and musculoskeletal (MS) models are the most commonly used tool to estimate them in vivo. Unfortunately, the process of computing muscle, joint contact, and ligament forces simultaneously is computationally highly demanding. The goal

  19. Evaluation of a surrogate contact model in force-dependent kinematic simulations of total knee replacement

    NARCIS (Netherlands)

    Marra, Marco Antonio; Andersen, Michael S.; Damsgaard, Michael; Koopman, Bart F.J.M.; Janssen, Dennis; Verdonschot, Nico

    2017-01-01

    Knowing the forces in the human body is of great clinical interest and musculoskeletal (MS) models are the most commonly used tool to estimate them in vivo. Unfortunately, the process of computing muscle, joint contact, and ligament forces simultaneously is computationally highly demanding. The goal

  20. A reduced order aerothermodynamic modeling framework for hypersonic vehicles based on surrogate and POD

    OpenAIRE

    Chen Xin; Liu Li; Long Teng; Yue Zhenjiang

    2015-01-01

    Aerothermoelasticity is one of the key technologies for hypersonic vehicles. Accurate and efficient computation of the aerothermodynamics is one of the primary challenges for hypersonic aerothermoelastic analysis. To address the shortcomings of engineering calculation, computational fluid dynamics (CFD) and experimental investigation, a reduced order modeling (ROM) framework for aerothermodynamics based on CFD predictions using an enhanced algorithm of fast maximin Latin hypercube design ...

  1. Estimating future temperature maxima in lakes across the United States using a surrogate modeling approach.

    Directory of Open Access Journals (Sweden)

    Jonathan B Butcher

    Full Text Available A warming climate increases thermal inputs to lakes, with potential implications for water quality and aquatic ecosystems. In a previous study, we used a dynamic water column temperature and mixing simulation model to simulate chronic (7-day average) maximum temperatures under a range of potential future climate projections at selected sites representative of different U.S. regions. Here, to extend results to lakes where dynamic models have not been developed, we apply a novel machine learning approach that uses Gaussian Process regression to describe the model response surface as a function of simplified lake characteristics (depth, surface area, water clarity) and climate forcing (winter and summer air temperatures and potential evapotranspiration). We use this approach to extrapolate predictions from the simulation model to the statistical sample of U.S. lakes in the National Lakes Assessment (NLA) database. Results provide a national-scale scoping assessment of the potential thermal risk to lake water quality and ecosystems across the U.S. We suggest a small fraction of lakes will experience less risk of summer thermal stress events due to changes in stratification and mixing dynamics, but most will experience increases. The percentage of lakes in the NLA with simulated 7-day average maximum water temperatures in excess of 30°C is projected to increase from less than 2% to approximately 22% by the end of the 21st century, which could significantly reduce the number of lakes that can support cold water fisheries. Site-specific analysis of the full range of factors that influence thermal profiles in individual lakes is needed to develop appropriate adaptation strategies.

  2. Optimising resolution for a preparative separation of Chinese herbal medicine using a surrogate model sample system.

    Science.gov (United States)

    Ye, Haoyu; Ignatova, Svetlana; Peng, Aihua; Chen, Lijuan; Sutherland, Ian

    2009-06-26

    This paper builds on previous modelling research with short single layer columns to develop rapid methods for optimising high-performance counter-current chromatography at constant stationary phase retention. Benzyl alcohol and p-cresol are used as model compounds to rapidly optimise first flow and then rotational speed operating conditions at a preparative scale with long columns for a given phase system using a Dynamic Extractions Midi-DE centrifuge. The transfer to a high value extract such as the crude ethanol extract of Chinese herbal medicine Millettia pachycarpa Benth. is then demonstrated and validated using the same phase system. The results show that constant stationary phase modelling of flow and speed with long multilayer columns works well as a cheap, quick and effective method of optimising operating conditions for the chosen phase system, hexane-ethyl acetate-methanol-water (1:0.8:1:0.6, v/v). Optimum conditions for resolution were a flow of 20 ml/min and speed of 1200 rpm, but for throughput were 80 ml/min at the same speed. The results show that 80 ml/min gave the best throughputs for tephrosin (518 mg/h), pyranoisoflavone (47.2 mg/h) and dehydrodeguelin (10.4 mg/h), whereas for deguelin (100.5 mg/h), the best flow rate was 40 ml/min.

  3. Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach

    Science.gov (United States)

    Chowdhury, R.; Adhikari, S.

    2012-10-01

    Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional, as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with a commercial finite element software package. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
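
    The linear (rather than exponential) scaling behind HDMR can be illustrated with a first-order cut-HDMR expansion, combined with a single alpha-cut interval propagation for a triangular fuzzy input. The three-parameter response function and all numerical values below are invented for illustration:

```python
import numpy as np

def first_order_hdmr(f, x0, grids):
    """Cut-HDMR: f(x) ~ f(x0) + sum_i f_i(x_i), where each component f_i
    is built by varying one input at a time around the cut point x0.
    The number of model evaluations grows linearly with dimension."""
    f0 = f(x0)
    comps = []
    for i, g in enumerate(grids):
        vals = []
        for xi in g:
            x = x0.copy()
            x[i] = xi
            vals.append(f(x) - f0)
        comps.append((g, np.array(vals)))

    def approx(x):
        s = f0
        for i, (g, v) in enumerate(comps):
            s += np.interp(x[i], g, v)
        return s

    return approx

# Hypothetical 3-parameter response; first-order HDMR is exact for
# additive models like this one.
f = lambda x: x[0] ** 2 + np.sin(x[1]) + 2 * x[2]
x0 = np.array([0.5, 0.5, 0.5])
fh = first_order_hdmr(f, x0, [np.linspace(0.0, 1.0, 11)] * 3)

# Alpha-cut propagation: a triangular fuzzy first input tri(0.2, 0.5, 0.8)
# becomes the interval [lo, hi] at membership level alpha. Evaluating only
# the endpoints is an illustration-only shortcut that assumes monotonicity.
alpha = 0.5
lo, hi = 0.2 + alpha * 0.3, 0.8 - alpha * 0.3
bounds = (min(fh([v, 0.5, 0.5]) for v in (lo, hi)),
          max(fh([v, 0.5, 0.5]) for v in (lo, hi)))
```

    Repeating the interval propagation over a ladder of alpha levels reconstructs the fuzzy membership function of the output.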

  4. A versatile curve-fit model for linear to deeply concave rank abundance curves

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

    A new, flexible curve-fit model for linear to concave rank abundance curves was conceptualized and validated using observational data. The model links the geometric-series model and log-series model and can also fit deeply concave rank abundance curves. The model is based – in an unconventional way

  5. Efficient Bayesian inference of subsurface flow models using nested sampling and sparse polynomial chaos surrogates

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    An efficient Bayesian calibration method based on the nested sampling (NS) algorithm and non-intrusive polynomial chaos method is presented. Nested sampling is a Bayesian sampling algorithm that builds a discrete representation of the posterior distributions by iteratively re-focusing a set of samples to high likelihood regions. NS allows representing the posterior probability density function (PDF) with a smaller number of samples and reduces the curse of dimensionality effects. The main difficulty of the NS algorithm is in the constrained sampling step which is commonly performed using a random walk Markov Chain Monte-Carlo (MCMC) algorithm. In this work, we perform a two-stage sampling using a polynomial chaos response surface to filter out rejected samples in the Markov Chain Monte-Carlo method. The combined use of nested sampling and the two-stage MCMC based on approximate response surfaces provides significant computational gains in terms of the number of simulation runs. The proposed algorithm is applied for calibration and model selection of subsurface flow models. © 2013.
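
    The two-stage idea, screening Metropolis proposals with a cheap response surface before calling the expensive model, can be sketched in pure Python as a delayed-acceptance sampler. Both likelihoods here are analytic stand-ins; in the paper the surrogate is a polynomial chaos expansion and the outer algorithm is nested sampling rather than plain MCMC:

```python
import math
import random

def expensive_loglike(theta):
    # Stand-in for a costly simulator-based likelihood (hypothetical).
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

def surrogate_loglike(theta):
    # Cheap, deliberately imperfect approximation, playing the role of a
    # polynomial chaos response surface.
    return -0.5 * ((theta - 1.9) / 0.6) ** 2

def two_stage_mh(n_steps=5000, step_sd=0.5, seed=3):
    rng = random.Random(seed)
    theta = 0.0
    ll, sl = expensive_loglike(theta), surrogate_loglike(theta)
    chain, expensive_calls = [], 0
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step_sd)
        sl_prop = surrogate_loglike(prop)
        # Stage 1: screen the proposal with the surrogate only.
        if math.log(rng.random()) < sl_prop - sl:
            # Stage 2: delayed-acceptance correction with the expensive
            # model, which keeps the exact target as the stationary law.
            ll_prop = expensive_loglike(prop)
            expensive_calls += 1
            if math.log(rng.random()) < (ll_prop - ll) - (sl_prop - sl):
                theta, ll, sl = prop, ll_prop, sl_prop
        chain.append(theta)
    return chain, expensive_calls

chain, calls = two_stage_mh()
mean = sum(chain[1000:]) / len(chain[1000:])
```

    Proposals rejected at stage 1 never touch the expensive model, which is the source of the computational gain reported in the abstract.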

  6. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    OpenAIRE

    Matthew P. Adams; Catherine J. Collier; Sven Uthicke; Yan X. Ow; Lucas Langlois; Katherine R. O’Brien

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluat...

  7. The Secondary Organic Aerosol Processor (SOAP v1.0) model: a unified model with different ranges of complexity based on the molecular surrogate approach

    Science.gov (United States)

    Couvidat, F.; Sartelet, K.

    2015-04-01

    In this paper the Secondary Organic Aerosol Processor (SOAP v1.0) model is presented. This model determines the partitioning of organic compounds between the gas and particle phases. It is designed to be modular with different user options depending on the computation time and the complexity required by the user. This model is based on the molecular surrogate approach, in which each surrogate compound is associated with a molecular structure to estimate some properties and parameters (hygroscopicity, absorption into the aqueous phase of particles, activity coefficients and phase separation). Each surrogate can be hydrophilic (condenses only into the aqueous phase of particles), hydrophobic (condenses only into the organic phases of particles) or both (condenses into both the aqueous and the organic phases of particles). Activity coefficients are computed with the UNIFAC (UNIversal Functional group Activity Coefficient; Fredenslund et al., 1975) thermodynamic model for short-range interactions and with the Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients (AIOMFAC) parameterization for medium- and long-range interactions between electrolytes and organic compounds. Phase separation is determined by Gibbs energy minimization. The user can choose between an equilibrium representation and a dynamic representation of organic aerosols (OAs). In the equilibrium representation, compounds in the particle phase are assumed to be at equilibrium with the gas phase. However, recent studies show that the organic aerosol is not at equilibrium with the gas phase because the organic phases could be semi-solid (very viscous liquid phase). The condensation-evaporation of organic compounds could then be limited by the diffusion in the organic phases due to the high viscosity. An implicit dynamic representation of secondary organic aerosols (SOAs) is available in SOAP with OAs divided into layers, the first layer being at the center of the particle (slowly

  8. Virtual Suit Fit Assessment Using Body Shape Model

    Data.gov (United States)

    National Aeronautics and Space Administration — Shoulder injury is one of the most serious risks for crewmembers in long-duration spaceflight. While suboptimal suit fit and contact pressures between the shoulder...

  9. Fitness voter model: Damped oscillations and anomalous consensus.

    Science.gov (United States)

    Woolcock, Anthony; Connaughton, Colm; Merali, Yasmin; Vazquez, Federico

    2017-09-01

    We study the dynamics of opinion formation in a heterogeneous voter model on a complete graph, in which each agent is endowed with an integer fitness parameter k≥0, in addition to its + or - opinion state. The evolution of the distribution of k-values and the opinion dynamics are coupled together, so as to allow the system to dynamically develop heterogeneity and memory in a simple way. When two agents with different opinions interact, their k-values are compared, and with probability p the agent with the lower value adopts the opinion of the one with the higher value, while with probability 1-p the opposite happens. The agent that keeps its opinion (winning agent) increments its k-value by one. We study the dynamics of the system in the entire 0≤p≤1 range and compare with the case p=1/2, in which opinions are decoupled from the k-values and the dynamics is equivalent to that of the standard voter model. When 0≤p<1/2, the system approaches exponentially fast the consensus state of the initial majority opinion. The mean consensus time τ appears to grow logarithmically with the number of agents N, and it is greatly decreased relative to the linear behavior τ∼N found in the standard voter model. When 1/2<p≤1, the system initially relaxes to a state with an even coexistence of opinions, but eventually reaches consensus by finite-size fluctuations. The approach to the coexistence state is monotonic for p just above 1/2, while for larger p there are damped oscillations around the coexistence value. The final approach to coexistence is approximately a power law t^{-b(p)} in both regimes, where the exponent b increases with p. Also, τ increases with respect to the standard voter model, although it still scales linearly with N. The p=1 case is special, with a relaxation to coexistence that scales as t^{-2.73} and a consensus time that scales as τ∼N^{β}, with β≃1.45.
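
    The interaction rule in the abstract is easy to simulate directly. The sketch below uses a small complete-graph population with a 60/40 initial split; the population size, tie-breaking rule, and step cap are arbitrary choices made for this illustration:

```python
import random

def run(p, N=100, seed=0, max_steps=200_000):
    """Fitness voter model on a complete graph: opinions +/-1 and an
    integer fitness k per agent; the winner of each disagreement keeps
    its opinion and increments its k."""
    rng = random.Random(seed)
    opinion = [1] * (3 * N // 5) + [-1] * (N - 3 * N // 5)  # + majority
    k = [0] * N
    for step in range(1, max_steps + 1):
        i, j = rng.randrange(N), rng.randrange(N)
        if opinion[i] == opinion[j]:
            continue
        # identify the higher- and lower-fitness agent (random tie-break)
        if k[i] == k[j]:
            hi, lo = (i, j) if rng.random() < 0.5 else (j, i)
        elif k[i] > k[j]:
            hi, lo = i, j
        else:
            hi, lo = j, i
        # with probability p the lower-k agent adopts the higher-k opinion
        winner, loser = (hi, lo) if rng.random() < p else (lo, hi)
        opinion[loser] = opinion[winner]
        k[winner] += 1
        if abs(sum(opinion)) == N:       # all agents agree
            return step
    return max_steps

t_fast = run(p=0.25)   # lower-k agents win more often
t_std = run(p=0.5)     # decoupled case: standard voter model
```

    For p below 1/2 the run is expected to reach the initial-majority consensus quickly, consistent with the exponential approach described in the abstract.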

  10. Biomechanical investigation of impact induced rib fractures of a porcine infant surrogate model.

    Science.gov (United States)

    Blackburne, William B; Waddell, J Neil; Swain, Michael V; Alves de Sousa, Ricardo J; Kieser, Jules A

    2016-09-01

    This study investigated the structural, biomechanical and fractographic features of rib fractures in a piglet model, to test the hypothesis that fist impact, apart from thoracic squeezing, may result in lateral costal fractures as observed in abused infants. A mechanical fist with an accelerometer was constructed and fixed to a custom jig. Twenty stillborn piglets in the supine position were impacted on the thoracic cage. The resultant force versus time curves from the accelerometer data showed a number of steps indicative of rib fracture. The correlation between impact force and number of fractures was statistically significant (Pearson's r=0.528). Of the fractures visualized, 15 completely pierced the parietal pleura of the thoracic wall, and 5 had butterfly fracture patterning. Scanning electron microscopy showed complete bone fractures, at the zone of impact, were normal to the axis of the ribs. Incomplete vertical fractures, with bifurcation, occurred on the periphery of the contact zone. This work suggests the mechanism of rib failure during a fist impact is typical of the transverse fracture pattern in the anterolateral region associated with cases of non-accidental rib injury. The impact events investigated have a velocity of ~2-3 m/s, approximately 2×10⁴ times faster than previous quasi-static axial and bending tests. While squeezing the infant may induce buckle fractures in the anterior as well as posterior regions of the highly flexible bones, a fist-punch impact event may result in anterolateral transverse fractures. Hence, these findings suggest that the presence of anterolateral rib fractures may result from impact rather than manual compression. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Response of mouse skin to tattooing: use of SKH-1 mice as a surrogate model for human tattooing

    International Nuclear Information System (INIS)

    Gopee, Neera V.; Cui, Yanyan; Olson, Greg; Warbritton, Alan R.; Miller, Barbara J.; Couch, Letha H.; Wamer, Wayne G.; Howard, Paul C.

    2005-01-01

    Tattooing is a popular cosmetic practice involving more than 45 million US citizens. Since the toxicology of tattoo inks and pigments used to formulate tattoo inks has not been reported, we studied the immunological impact of tattooing and determined recovery time from this trauma. SKH-1 hairless mice were tattooed using commercial tattoo inks or suspensions of titanium dioxide, cadmium sulfide, or iron oxide, and sacrificed at 0.5, 1, 3, 4, 7, or 14 days post-tattooing. Histological evaluation revealed dermal hemorrhage at 0.5 and 1 day. Acute inflammation and epidermal necrosis were initiated at 0.5 day decreasing in incidence by day 14. Dermal necrosis and epidermal hyperplasia were prominent by day 3, reducing in severity by day 14. Chronic active inflammation persisted in all tattooed mice from day 3 to 14 post-tattooing. Inguinal and axillary lymph nodes were pigmented, the inguinal being most reactive as evidenced by lymphoid hyperplasia and polymorphonuclear infiltration. Cutaneous nuclear protein concentrations of nuclear factor-kappa B were elevated between 0.5 and 4 days. Inflammatory and proliferative biomarkers, cyclooxygenase-1, cyclooxygenase-2, and ornithine decarboxylase protein levels were elevated between 0.5 and 4 days in the skin and decreased to control levels by day 14. Interleukin-1 beta and interleukin-10 were elevated in the lymph nodes but suppressed in the tattooed skin, with maximal suppression occurring between days 0.5 and 4. These data demonstrate that mice substantially recover from the tattooing insult by 14 days, leaving behind pigment in the dermis and the regional lymph nodes. The response seen in mice is similar to acute injury seen in humans, suggesting that the murine model might be a suitable surrogate for investigating the toxicological and phototoxicological properties of ingredients used in tattooing

  12. Item-level diagnostics and model-data fit in item response theory ...

    African Journals Online (AJOL)

    Item response theory (IRT) is a framework for modeling and analyzing item response data. Item-level modeling gives IRT advantages over classical test theory. The fit of an item score pattern to an item response theory (IRT) model is a necessary condition that must be assessed before further use of the items and models that best fit ...

  13. CRAPONE, Optical Model Potential Fit of Neutron Scattering Data

    International Nuclear Information System (INIS)

    Fabbri, F.; Fratamico, G.; Reffo, G.

    2004-01-01

    1 - Description of problem or function: Automatic search for local and non-local optical potential parameters for neutrons. Total, elastic, and differential elastic cross sections, l=0 and l=1 strength functions, and the scattering length can be considered. 2 - Method of solution: A fitting procedure is applied to different sets of experimental data depending on the local or non-local approximation chosen. In the non-local approximation the fitting procedure can be performed simultaneously over the whole energy range. The best fit is obtained when a set of parameters is found for which chi-square is at its minimum. The system of equations is solved by diagonalization of the matrix according to the Jacobi method

  14. Birds as biodiversity surrogates

    DEFF Research Database (Denmark)

    Larsen, Frank Wugt; Bladt, Jesper Stentoft; Balmford, Andrew

    2012-01-01

    1. Most biodiversity is still unknown, and therefore, priority areas for conservation typically are identified based on the presence of surrogates, or indicator groups. Birds are commonly used as surrogates of biodiversity owing to the wide availability of relevant data and their broad popular... and applications. Good surrogates of biodiversity are necessary to help identify conservation areas that will be effective in preventing species extinctions. Birds perform fairly well as surrogates in cases where birds are relatively speciose, but overall effectiveness will be improved by adding additional data... from other taxa, in particular from range-restricted species. Conservation solutions with a focus on birds as biodiversity surrogates could therefore benefit from also incorporating species data from other taxa.

  15. Soil physical properties influencing the fitting parameters in Philip and Kostiakov infiltration models

    International Nuclear Information System (INIS)

    Mbagwu, J.S.C.

    1994-05-01

    Among the many models developed for monitoring the infiltration process, those of Philip and Kostiakov have been studied in detail because of their simplicity and the ease of estimating their fitting parameters. The important soil physical factors influencing the fitting parameters in these infiltration models are reported in this study. The results show that the single most important soil property affecting the fitting parameters in these models is the effective porosity. 36 refs, 2 figs, 5 tabs
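    The two cumulative-infiltration equations named in this record can be fitted by nonlinear least squares. A minimal sketch on synthetic (hypothetical) data, assuming the usual forms I(t) = k·t^a (Kostiakov) and I(t) = S·√t + A·t (Philip):

```python
import numpy as np
from scipy.optimize import curve_fit

def kostiakov(t, k, a):
    # Kostiakov cumulative infiltration: I(t) = k * t**a
    return k * t**a

def philip(t, S, A):
    # Philip two-term model: I(t) = S*sqrt(t) + A*t (S: sorptivity, A: steady-state term)
    return S * np.sqrt(t) + A * t

# Synthetic record (hypothetical units: minutes vs. cm); real field data would replace this
t = np.linspace(1.0, 120.0, 40)
rng = np.random.default_rng(0)
I_obs = philip(t, 0.8, 0.05) + rng.normal(0.0, 0.02, t.size)

(k_fit, a_fit), _ = curve_fit(kostiakov, t, I_obs, p0=[1.0, 0.5])
(S_fit, A_fit), _ = curve_fit(philip, t, I_obs, p0=[1.0, 0.1])
print(f"Philip: S={S_fit:.2f}, A={A_fit:.3f}")
```

    The fitted parameters can then be regressed against measured soil properties such as effective porosity, as the study describes.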

  16. The FITS model office ergonomics program: a model for best practice.

    Science.gov (United States)

    Chim, Justine M Y

    2014-01-01

    An effective office ergonomics program can produce positive results in reducing musculoskeletal injury rates, enhancing productivity, and improving staff well-being and job satisfaction. Its objective is to provide a systematic solution for managing the potential risk of musculoskeletal disorders among computer users in an office setting. The FITS Model Office Ergonomics Program has been developed, drawing on the legislative requirements for promoting the health and safety of workers using computers for extended periods as well as previous research findings. The Model is built on practical industrial knowledge in ergonomics, occupational health and safety management, and human resources management in Hong Kong and overseas. This paper proposes a comprehensive office ergonomics program, the FITS Model, which considers (1) Furniture Evaluation and Selection; (2) Individual Workstation Assessment; (3) Training and Education; and (4) Stretching Exercises and Rest Breaks as elements of an effective program. An experienced ergonomics practitioner should be included in the program design and implementation. Through the FITS Model Office Ergonomics Program, the risk of musculoskeletal disorders among computer users can be eliminated or minimized, and workplace health and safety and employees' wellness enhanced.

  17. Revisiting the Global Electroweak Fit of the Standard Model and Beyond with Gfitter

    CERN Document Server

    Flächer, Henning; Haller, J; Höcker, A; Mönig, K; Stelzer, J

    2009-01-01

    The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter projec...

  18. Modelling population dynamics model formulation, fitting and assessment using state-space methods

    CERN Document Server

    Newman, K B; Morgan, B J T; King, R; Borchers, D L; Cole, D J; Besbeas, P; Gimenez, O; Thomas, L

    2014-01-01

    This book gives a unifying framework for estimating the abundance of open populations: populations subject to births, deaths and movement, given imperfect measurements or samples of the populations.  The focus is primarily on populations of vertebrates for which dynamics are typically modelled within the framework of an annual cycle, and for which stochastic variability in the demographic processes is usually modest. Discrete-time models are developed in which animals can be assigned to discrete states such as age class, gender, maturity,  population (within a metapopulation), or species (for multi-species models). The book goes well beyond estimation of abundance, allowing inference on underlying population processes such as birth or recruitment, survival and movement. This requires the formulation and fitting of population dynamics models.  The resulting fitted models yield both estimates of abundance and estimates of parameters characterizing the underlying processes.  

  19. Model Fitting for Predicted Precipitation in Darwin: Some Issues with Model Choice

    Science.gov (United States)

    Farmer, Jim

    2010-01-01

    In Volume 23(2) of the "Australian Senior Mathematics Journal," Boncek and Harden present an exercise in fitting a Markov chain model to rainfall data for Darwin Airport (Boncek & Harden, 2009). Days are subdivided into those with precipitation and precipitation-free days. The author abbreviates these labels to wet days and dry days.…

  20. Model-fitting approach to kinetic analysis of non-isothermal oxidation of molybdenite

    International Nuclear Information System (INIS)

    Ebrahimi Kahrizsangi, R.; Abbasi, M. H.; Saidi, A.

    2007-01-01

    The kinetics of molybdenite oxidation was studied by non-isothermal TGA-DTA with a heating rate of 5 °C min⁻¹. The model-fitting kinetic approach was applied to the TGA data, using the Coats-Redfern method. The popular model-fitting approach gives an excellent fit to non-isothermal data in the chemically controlled regime. The apparent activation energy was determined to be about 34.2 kcal mol⁻¹, with a pre-exponential factor of about 10⁸ s⁻¹, for extents of reaction less than 0.5
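    The Coats-Redfern method mentioned here linearizes the integral rate law: plotting ln[g(α)/T²] against 1/T yields a line of slope −E/R. A sketch with synthetic first-order data, taking the abstract's 34.2 kcal mol⁻¹ as the ground-truth activation energy:

```python
import numpy as np

R = 8.314            # gas constant, J/(mol K)
E_true = 143_000     # ~34.2 kcal/mol, the value reported above (J/mol)
A = 1e8              # pre-exponential factor, 1/s
beta = 5.0 / 60.0    # heating rate: 5 deg C/min expressed in K/s

T = np.linspace(600.0, 700.0, 50)   # temperature range keeping alpha below ~0.5, K
# First-order mechanism: g(alpha) = -ln(1 - alpha); Coats-Redfern approximation
g = (A * R * T**2 / (beta * E_true)) * np.exp(-E_true / (R * T))
y = np.log(g / T**2)                # ln[g(alpha)/T^2], linear in 1/T

slope, intercept = np.polyfit(1.0 / T, y, 1)
E_fit = -slope * R                  # recovered activation energy, J/mol
print(f"E = {E_fit / 4184:.1f} kcal/mol")
```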

  1. Repair models of cell survival and corresponding computer program for survival curve fitting

    International Nuclear Information System (INIS)

    Shen Xun; Hu Yiwei

    1992-01-01

    Some basic concepts and formulations of two repair models of cell survival, the incomplete repair (IR) model and the lethal-potentially lethal (LPL) model, are introduced. An IBM-PC computer program for survival curve fitting with these models was developed and applied to fit the survival of human melanoma cells HX118 irradiated at different dose rates. A comparison was made between the repair models and two non-repair models, the multitarget-single hit model and the linear-quadratic model, in the fitting and analysis of the survival-dose curves. It was shown that either the IR model or the LPL model can fit a set of survival curves at different dose rates with the same parameters and provide information on the repair capacity of cells. These two mathematical models could be very useful in quantitative studies of the radiosensitivity and repair capacity of cells

  2. The lz(p)* Person-Fit Statistic in an Unfolding Model Context.

    Science.gov (United States)

    Tendeiro, Jorge N

    2017-01-01

    Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded unfolding model is used. Results from a simulation study indicate that the person-fit statistic performed relatively well in detecting midpoint response style patterns and not so well in detecting extreme response style patterns.
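    The dichotomous, dominance-model lz statistic that lz(p)* generalizes can be sketched directly from its definition as a standardized log-likelihood of the response pattern; the item probabilities below are hypothetical:

```python
import numpy as np

def lz(u, p):
    """Standardized log-likelihood person-fit statistic lz for dichotomous items.
    u: 0/1 responses; p: model-implied success probabilities at the trait estimate."""
    u = np.asarray(u, dtype=float)
    p = np.asarray(p, dtype=float)
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))  # observed log-likelihood
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))   # its expected value
    v = np.sum(p * (1 - p) * np.log(p / (1 - p))**2)      # its variance
    return (l0 - e) / np.sqrt(v)

p = np.array([0.9, 0.8, 0.7, 0.4, 0.2])    # hypothetical item probabilities
model_consistent = lz([1, 1, 1, 0, 0], p)  # pattern matching the probabilities
aberrant = lz([0, 0, 0, 1, 1], p)          # reversed pattern, strongly negative
```

    Large negative values flag aberrant patterns, which is how the statistic detects response styles.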

  3. Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models

    Science.gov (United States)

    Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning

    2012-01-01

    The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…

  4. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    Science.gov (United States)

    DeCarlo, Lawrence T

    2003-02-01

    The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
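    For intuition, the equal-variance special case of the normal signal detection model (the case the ordinal probit approach generalizes) can be computed in closed form from hit and false-alarm rates; the rates below are hypothetical:

```python
from scipy.stats import norm

# Equal-variance Gaussian signal detection: d' and criterion from two rates.
# The unequal variance model discussed above additionally estimates a variance ratio.
hit_rate, fa_rate = 0.84, 0.16               # hypothetical response proportions
zH, zF = norm.ppf(hit_rate), norm.ppf(fa_rate)
d_prime = zH - zF                            # sensitivity
criterion = -(zH + zF) / 2.0                 # response bias
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```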

  5. Evaluation of the validity of treatment decisions based on surrogate country models before introduction of the Polish FRAX and recommendations in comparison to current practice.

    Science.gov (United States)

    Glinkowski, Wojciech M; Narloch, Jerzy; Glinkowska, Bożena; Bandura, Małgorzata

    2018-03-01

    Patients diagnosed before the Polish FRAX was introduced may require re-evaluation and treatment changes if the diagnosis was established according to a surrogate country FRAX score. The aim of the study was to evaluate the validity of treatment decisions based on the surrogate country model before introduction of the Polish FRAX and to provide recommendations based on the current practice. We evaluated a group of 142 postmenopausal women (70.7 ±8.9 years) who underwent bone mineral density measurements. We used 22 country-specific FRAX models and compared these to the Polish model. The mean risk values for hip and major osteoporotic fractures within 10 years were 4.575 (from 0.82 to 8.46) and 12.47% (from 2.18 to 21.65), respectively. In the case of a major fracture, 94.4% of women would receive lifestyle advice, and 5.6% would receive treatment according to the Polish FRAX using the guidelines of the National Osteoporosis Foundation (NOF). Polish treatment thresholds would implement pharmacotherapy in 32.4% of the study group. In the case of hip fractures, 45% of women according to the NOF would require pharmacotherapy but only 9.8% of women would qualify according to Polish guidelines. Nearly all surrogate FRAX calculator scores proved significantly different from the Polish model (p < 0.05). More patients might have received antiresorptive medication before the Polish FRAX. This study recommends re-evaluation of patients who received medical therapy before the Polish FRAX was introduced and a review of the recommendations, considering the side effects of antiresorptive medication.

  6. The issue of statistical power for overall model fit in evaluating structural equation models

    Directory of Open Access Journals (Sweden)

    Richard HERMIDA

    2015-06-01

    Statistical power is an important concept for psychological research. However, examining the power of a structural equation model (SEM) is rare in practice. This article provides an accessible review of the concept of statistical power for the Root Mean Square Error of Approximation (RMSEA) index of overall model fit in structural equation modeling. By way of example, we examine the current state of power in the literature by reviewing studies in top Industrial-Organizational (I/O) Psychology journals using SEMs. Results indicate that in many studies, power is very low, which implies acceptance of invalid models. Additionally, we examined methodological situations which may have an influence on statistical power of SEMs. Results showed that power varies significantly as a function of model type and whether or not the model is the main model for the study. Finally, results indicated that power is significantly related to model fit statistics used in evaluating SEMs. The results from this quantitative review imply that researchers should be more vigilant with respect to power in structural equation modeling. We therefore conclude by offering methodological best practices to increase confidence in the interpretation of structural equation modeling results with respect to statistical power issues.

  7. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species.

    Science.gov (United States)

    Adams, Matthew P; Collier, Catherine J; Uthicke, Sven; Ow, Yan X; Langlois, Lucas; O'Brien, Katherine R

    2017-01-04

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (T opt ) for maximum photosynthetic rate (P max ). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.

  8. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    Science.gov (United States)

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O'Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
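    As a toy illustration of fitting a temperature-response model parameterised directly by the biologically meaningful Topt and Pmax, here is a symmetric Gaussian form (one simple candidate shape, not one of the twelve models evaluated in the paper) fitted to synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_pt(T, Pmax, Topt, sigma):
    # Symmetric thermal response with interpretable parameters:
    # Pmax = maximum photosynthetic rate, Topt = thermal optimum
    return Pmax * np.exp(-((T - Topt)**2) / (2.0 * sigma**2))

T = np.arange(15.0, 45.0, 2.0)     # temperature, deg C
rng = np.random.default_rng(1)
P_obs = gaussian_pt(T, 12.0, 31.0, 6.0) + rng.normal(0.0, 0.2, T.size)

(Pmax, Topt, sigma), _ = curve_fit(gaussian_pt, T, P_obs, p0=[10.0, 30.0, 5.0])
print(f"Topt = {Topt:.1f} deg C, Pmax = {Pmax:.1f}")
```

    Because Topt and Pmax appear explicitly as parameters, they are directly interpretable and transferable, which is the property the paper advocates.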

  9. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
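    The notion of Pareto-optimality used here (no other input set fits all calibration targets as well or better) can be sketched as a non-dominated filter over per-target error scores; the error values below are hypothetical:

```python
import numpy as np

def pareto_front(errors):
    """Indices of non-dominated input sets (lower error on every target is better)."""
    front = []
    for i, e in enumerate(errors):
        dominated = any(
            np.all(other <= e) and np.any(other < e)
            for j, other in enumerate(errors) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Hypothetical fits of five candidate input sets to two calibration targets
errors = np.array([
    [0.10, 0.90],   # best on target 1
    [0.20, 0.20],   # balanced
    [0.90, 0.10],   # best on target 2
    [0.50, 0.50],   # dominated by the balanced set
    [0.30, 0.25],   # dominated by the balanced set
])
print(pareto_front(errors))
```

    No weights are needed: the frontier keeps every trade-off between targets, which is the transparency argument the paper makes against weighted-sum GOF scores.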

  10. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development.

    Science.gov (United States)

    Tøndel, Kristin; Niederer, Steven A; Land, Sander; Smith, Nicolas P

    2014-05-20

    Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input-output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of on

  11. Computationally efficient and flexible modular modelling approach for river and urban drainage systems based on surrogate conceptual models

    Science.gov (United States)

    Wolfs, Vincent; Willems, Patrick

    2015-04-01

    Water managers rely increasingly on mathematical simulation models that represent individual parts of the water system, such as the river, sewer system or waste water treatment plant. The current evolution towards integral water management requires the integration of these distinct components, leading to an increased model scale and scope. Besides this growing model complexity, certain applications gained interest and importance, such as uncertainty and sensitivity analyses, auto-calibration of models and real time control. All these applications share the need for models with a very limited calculation time, either for performing a large number of simulations, or a long term simulation followed by a statistical post-processing of the results. The use of the commonly applied detailed models that solve (part of) the de Saint-Venant equations is infeasible for these applications or such integrated modelling due to several reasons, of which a too long simulation time and the inability to couple submodels made in different software environments are the main ones. Instead, practitioners must use simplified models for these purposes. These models are characterized by empirical relationships and sacrifice model detail and accuracy for increased computational efficiency. The presented research discusses the development of a flexible integral modelling platform that complies with the following three key requirements: (1) Include a modelling approach for water quantity predictions for rivers, floodplains, sewer systems and rainfall runoff routing that require a minimal calculation time; (2) A fast and semi-automatic model configuration, thereby making maximum use of data of existing detailed models and measurements; (3) Have a calculation scheme based on open source code to allow for future extensions or the coupling with other models. First, a novel and flexible modular modelling approach based on the storage cell concept was developed. This approach divides each

  12. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia; Harmandaris, Vagelis; Katsoulakis, Markos A.; Plechac, Petr

    2015-01-01

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics

  13. Self- and surrogate-reported communication functioning in aphasia.

    Science.gov (United States)

    Doyle, Patrick J; Hula, William D; Austermann Hula, Shannon N; Stone, Clement A; Wambaugh, Julie L; Ross, Katherine B; Schumacher, James G

    2013-06-01

    To evaluate the dimensionality and measurement invariance of the aphasia communication outcome measure (ACOM), a self- and surrogate-reported measure of communicative functioning in aphasia. Responses to a large pool of items describing communication activities were collected from 133 community-dwelling persons with aphasia of ≥ 1 month post-onset and their associated surrogate respondents. These responses were evaluated using confirmatory and exploratory factor analysis. Chi-square difference tests of nested factor models were used to evaluate patient-surrogate measurement invariance and the equality of factor score means and variances. Association and agreement between self- and surrogate reports were examined using correlation and scatterplots of pairwise patient-surrogate differences. Three single-factor scales (Talking, Comprehension, and Writing) approximating patient-surrogate measurement invariance were identified. The variance of patient-reported scores on the Talking and Writing scales was higher than surrogate-reported variances on these scales. Correlations between self- and surrogate reports were moderate-to-strong, but there were significant disagreements in a substantial number of individual cases. Despite minimal bias and relatively strong association, surrogate reports of communicative functioning in aphasia are not reliable substitutes for self-reports by persons with aphasia. Furthermore, although measurement invariance is necessary for direct comparison of self- and surrogate reports, the costs of obtaining invariance in terms of scale reliability and content validity may be substantial. Development of non-invariant self- and surrogate report scales may be preferable for some applications.

  14. The lz(p)* Person-Fit Statistic in an Unfolding Model Context

    NARCIS (Netherlands)

    Tendeiro, Jorge N.

    2017-01-01

    Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded

  15. The effects of post-exposure smallpox vaccination on clinical disease presentation: addressing the data gaps between historical epidemiology and modern surrogate model data.

    Science.gov (United States)

    Keckler, M Shannon; Reynolds, Mary G; Damon, Inger K; Karem, Kevin L

    2013-10-25

    Decades after public health interventions - including pre- and post-exposure vaccination - were used to eradicate smallpox, zoonotic orthopoxvirus outbreaks and the potential threat of a release of variola virus remain public health concerns. Routine prophylactic smallpox vaccination of the public ceased worldwide in 1980, and the adverse event rate associated with the currently licensed live vaccinia virus vaccine makes reinstatement of policies recommending routine pre-exposure vaccination unlikely in the absence of an orthopoxvirus outbreak. Consequently, licensing of safer vaccines and therapeutics that can be used post-orthopoxvirus exposure is necessary to protect the global population from these threats. Variola virus is a solely human pathogen that does not naturally infect any other known animal species. Therefore, the use of surrogate viruses in animal models of orthopoxvirus infection is important for the development of novel vaccines and therapeutics. Major complications involved with the use of surrogate models include both the absence of a model that accurately mimics all aspects of human smallpox disease and a lack of reproducibility across model species. These complications limit our ability to model post-exposure vaccination with newer vaccines for application to human orthopoxvirus outbreaks. This review seeks to (1) summarize conclusions about the efficacy of post-exposure smallpox vaccination from historic epidemiological reports and modern animal studies; (2) identify data gaps in these studies; and (3) summarize the clinical features of orthopoxvirus-associated infections in various animal models to identify those models that are most useful for post-exposure vaccination studies. The ultimate purpose of this review is to provide observations and comments regarding available model systems and data gaps for use in improving post-exposure medical countermeasures against orthopoxviruses. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Fitting and Testing Conditional Multinormal Partial Credit Models

    Science.gov (United States)

    Hessen, David J.

    2012-01-01

    A multinormal partial credit model for factor analysis of polytomously scored items with ordered response categories is derived using an extension of the Dutch Identity (Holland in "Psychometrika" 55:5-18, 1990). In the model, latent variables are assumed to have a multivariate normal distribution conditional on unweighted sums of item…

  17. Model Fit and Item Factor Analysis: Overfactoring, Underfactoring, and a Program to Guide Interpretation.

    Science.gov (United States)

    Clark, D Angus; Bowles, Ryan P

    2018-04-23

    In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation methods to investigate the ability of popular model fit statistics (chi-square, root mean square error of approximation, the comparative fit index, and the Tucker-Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics, and was conditional on many features of the underlying model. Together, results suggest that conventional fit thresholds offer questionable utility in the context of IFA.

  18. Assessment of health surveys: fitting a multidimensional graded response model.

    Science.gov (United States)

    Depaoli, Sarah; Tiemensma, Jitske; Felt, John M

    The multidimensional graded response model, an item response theory (IRT) model, can be used to improve the assessment of surveys, even when sample sizes are restricted. Typically, health-based survey development utilizes classical statistical techniques (e.g. reliability and factor analysis). In a review of four prominent journals within the field of Health Psychology, we found that IRT-based models were used in less than 10% of the studies examining scale development or assessment. However, implementing IRT-based methods can provide more details about individual survey items, which is useful when determining the final item content of surveys. An example using a quality of life survey for Cushing's syndrome (CushingQoL) highlights the main components for implementing the multidimensional graded response model. Patients with Cushing's syndrome (n = 397) completed the CushingQoL. Results from the multidimensional graded response model supported a 2-subscale scoring process for the survey. All items were deemed as worthy contributors to the survey. The graded response model can accommodate unidimensional or multidimensional scales, be used with relatively lower sample sizes, and is implemented in free software (example code provided in online Appendix). Use of this model can help to improve the quality of health-based scales being developed within the Health Sciences.

  19. Surrogate-based optimization of hydraulic fracturing in pre-existing fracture networks

    Science.gov (United States)

    Chen, Mingjie; Sun, Yunwei; Fu, Pengcheng; Carrigan, Charles R.; Lu, Zhiming; Tong, Charles H.; Buscheck, Thomas A.

    2013-08-01

    Hydraulic fracturing has been used widely to stimulate production of oil, natural gas, and geothermal energy in formations with low natural permeability. Numerical optimization of fracture stimulation often requires a large number of evaluations of objective functions and constraints from forward hydraulic fracturing models, which are computationally expensive and even prohibitive in some situations. Moreover, there are a variety of uncertainties associated with the pre-existing fracture distributions and rock mechanical properties, which affect the optimized decisions for hydraulic fracturing. In this study, a surrogate-based approach is developed for efficient optimization of hydraulic fracturing well design in the presence of natural-system uncertainties. The fractal dimension is derived from the simulated fracturing network as the objective for maximizing energy recovery sweep efficiency. The surrogate model, which is constructed using training data from high-fidelity fracturing models for mapping the relationship between uncertain input parameters and the fractal dimension, provides fast approximation of the objective functions and constraints. A suite of surrogate models constructed using different fitting methods is evaluated and validated for fast predictions. Global sensitivity analysis is conducted to gain insights into the impact of the input variables on the output of interest, and further used for parameter screening. The high efficiency of the surrogate-based approach is demonstrated for three optimization scenarios with different and uncertain ambient conditions. Our results suggest the critical importance of considering uncertain pre-existing fracture networks in optimization studies of hydraulic fracturing.
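
The surrogate workflow described here (run the expensive forward model on a limited training sample, fit a cheap approximation, then optimize the approximation) can be sketched generically. In this toy example a simple analytic function stands in for the fracturing simulator, and a quadratic response surface stands in for the suite of surrogate fitting methods evaluated in the paper:

```python
import numpy as np
from scipy.optimize import minimize

def expensive_model(x):
    # Stand-in for a costly forward simulation (hypothetical objective)
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2 + 0.1 * np.sin(5 * x[0])

# 1. Run the expensive model on a modest sample of input points
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(40, 2))
y = np.array([expensive_model(x) for x in X])

# 2. Fit a cheap quadratic response-surface surrogate by least squares
def features(X2d):
    x1, x2 = X2d[:, 0], X2d[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

def surrogate(x):
    return (features(np.atleast_2d(x)) @ coef)[0]

# 3. Optimize the cheap surrogate instead of the expensive model
res = minimize(surrogate, x0=np.zeros(2))
print(res.x)  # close to the true optimum near (1, -0.5)
```

A validation step, as in the paper, would compare surrogate predictions against held-out runs of the expensive model before trusting the optimized design.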

  20. A No-Scale Inflationary Model to Fit Them All

    CERN Document Server

    Ellis, John; Nanopoulos, Dimitri; Olive, Keith

    2014-01-01

    The magnitude of B-mode polarization in the cosmic microwave background as measured by BICEP2 favours models of chaotic inflation with a quadratic $m^2 \phi^2/2$ potential, whereas data from the Planck satellite favour a small value of the tensor-to-scalar perturbation ratio $r$ that is highly consistent with the Starobinsky $R + R^2$ model. Reality may lie somewhere between these two scenarios. In this paper we propose a minimal two-field no-scale supergravity model that interpolates between quadratic and Starobinsky-like inflation as limiting cases, while retaining the successful prediction $n_s \simeq 0.96$.

  1. Effectiveness of external respiratory surrogates for in vivo liver motion estimation

    International Nuclear Information System (INIS)

    Chang, Kai-Hsiang; Ho, Ming-Chih; Yeh, Chi-Chuan; Chen, Yu-Chien; Lian, Feng-Li; Lin, Win-Li; Yen, Jia-Yush; Chen, Yung-Yaw

    2012-01-01

    Purpose: Due to the low frame rate of MRI and the high radiation dose from fluoroscopy and CT, liver motion estimation using external respiratory surrogate signals is an attractive approach for tracking liver motion in real time during liver tumor treatments in radiotherapy and thermotherapy. This work proposes a liver motion estimation method based on external respiratory surrogate signals. Animal experiments are also conducted to investigate related issues, such as the sensor arrangement, multisensor fusion, and the effective time period. Methods: Liver motion and abdominal motion are both induced by respiration and have been shown to be highly correlated. In contrast to the difficult direct measurement of liver motion, abdominal motion can be measured easily. Based on this idea, our study is split into a model-fitting stage and a motion estimation stage. In the first stage, the correlation between the surrogates and the liver motion is studied and established via linear regression. In the second stage, the liver motion is estimated from the surrogate signals with the correlation model. Animal experiments on cases of single, multiple, and long-term surrogate signals are conducted and discussed to verify the practical use of this approach. Results: The results show that the best single sensor location is at the middle of the upper abdomen, while multisurrogate models are generally better than single ones. The estimation error is reduced from 0.6 mm for the single surrogate models to 0.4 mm for the multisurrogate models. The long-term validity of the estimation models is quite satisfactory within a period of 10 min, with an estimation error of less than 1.4 mm. Conclusions: External respiratory surrogate signals from abdominal motion produce good performance for liver motion estimation in real time. Multisurrogate signals enhance estimation accuracy, and the estimation model can maintain its accuracy for at least 10 min.
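
The two-stage scheme described above (fit a linear regression of liver position on abdominal surrogate signals, then estimate liver motion from the surrogates alone) can be sketched with synthetic breathing signals; the amplitudes, frequencies, and noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 60, 600)                    # one minute sampled at 10 Hz
liver = 8.0 * np.sin(2 * np.pi * t / 4.0)      # hypothetical liver motion (mm)
# Two abdominal surrogate signals, correlated with the liver but noisy
s1 = 0.6 * liver + rng.normal(0, 0.3, t.size)
s2 = 0.4 * liver + rng.normal(0, 0.3, t.size)

# Model-fitting stage: linear regression of liver motion on the surrogates
train = slice(0, 300)
S = np.column_stack([np.ones(t.size), s1, s2])
coef, *_ = np.linalg.lstsq(S[train], liver[train], rcond=None)

# Motion-estimation stage: predict liver motion from the surrogates only
pred = S @ coef
rmse = np.sqrt(np.mean((pred[300:] - liver[300:]) ** 2))
print(round(rmse, 3))  # held-out estimation error in mm
```

Combining the two surrogates reduces the held-out error relative to either alone, mirroring the multisurrogate improvement reported in the study.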

  2. SPSS macros to compare any two fitted values from a regression model.

    Science.gov (United States)

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests, particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
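
The matrix algebra method the macros implement rests on a standard result: any fitted-value difference is a linear combination d'b of the coefficients, with variance d' Cov(b) d. A numpy sketch for the OLS case, using a simulated quadratic model (invented for illustration) where no single coefficient captures the comparison of interest:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0, 10, n)
# Quadratic model: no single coefficient gives the change from x = 2 to x = 5
X = np.column_stack([np.ones(n), x, x**2])
y = 1.0 + 0.5 * x - 0.03 * x**2 + rng.normal(0, 1, n)

b, *_ = np.linalg.lstsq(X, y, rcond=None)
df = n - X.shape[1]
resid = y - X @ b
sigma2 = resid @ resid / df
cov_b = sigma2 * np.linalg.inv(X.T @ X)

# Design vectors for the two fitted values being compared
x1 = np.array([1.0, 5.0, 25.0])   # fitted value at x = 5
x2 = np.array([1.0, 2.0, 4.0])    # fitted value at x = 2
d = x1 - x2
diff = d @ b                       # difference between the two fitted values
se = np.sqrt(d @ cov_b @ d)        # its standard error
tcrit = stats.t.ppf(0.975, df)
ci = (diff - tcrit * se, diff + tcrit * se)
print(round(diff, 3), round(se, 3), tuple(round(c, 3) for c in ci))
```

The same contrast-vector logic carries over to maximum likelihood models, with the estimated coefficient covariance matrix in place of the OLS one.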

  3. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia

    2015-01-07

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is carried out by proposing parametrized coarse-grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force-matching methods to non-equilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.

  4. Design of spatial experiments: Model fitting and prediction

    Energy Technology Data Exchange (ETDEWEB)

    Fedorov, V.V.

    1996-03-01

    The main objective of the paper is to describe and develop model oriented methods and algorithms for the design of spatial experiments. Unlike many other publications in this area, the approach proposed here is essentially based on the ideas of convex design theory.

  5. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda; Hart, Jeffrey D.

    2009-01-01

    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed.

  6. Reducing uncertainty based on model fitness: Application to a ...

    African Journals Online (AJOL)

    A weakness of global sensitivity and uncertainty analysis methodologies is the often subjective definition of prior parameter probability distributions, especially ... The reservoir representing the central part of the wetland, where flood waters separate into several independent distributaries, is a keystone area within the model.

  7. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda

    2009-05-12

    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. © 2009 Sociedad de Estadística e Investigación Operativa.

  8. The sensitivity of Alpine summer convection to surrogate climate change: an intercomparison between convection-parameterizing and convection-resolving models

    Directory of Open Access Journals (Sweden)

    M. Keller

    2018-04-01

    Climate models project an increase in heavy precipitation events in response to greenhouse gas forcing. Important elements of such events are rain showers and thunderstorms, which are poorly represented in models with parameterized convection. In this study, simulations with 12 km horizontal grid spacing (convection-parameterizing model, CPM) and 2 km grid spacing (convection-resolving model, CRM) are employed to investigate the change in the diurnal cycle of convection with warmer climate. For this purpose, simulations of 11 days in June 2007 with a pronounced diurnal cycle of convection are compared with surrogate simulations from the same period. The surrogate climate simulations mimic a future climate with increased temperatures but unchanged relative humidity and similar synoptic-scale circulation. Two temperature scenarios are compared: one with homogeneous warming (HW) using a vertically uniform warming and the other with vertically dependent warming (VW) that enables changes in lapse rate. The two sets of simulations with parameterized and explicit convection exhibit substantial differences, some of which are well known from the literature. These include differences in the timing and amplitude of the diurnal cycle of convection, and the frequency of precipitation with low intensities. The response to climate change is much less studied. We can show that stratification changes have a strong influence on the changes in convection. Precipitation is strongly increasing for HW but decreasing for the VW simulations. For cloud type frequencies, virtually no changes are found for HW, but a substantial reduction in high clouds is found for VW. Further, we can show that the climate change signal strongly depends upon the horizontal resolution. In particular, significant differences between CPM and CRM are found in terms of the radiative feedbacks, with CRM exhibiting a stronger negative feedback in the top-of-the-atmosphere energy budget.

  9. The sensitivity of Alpine summer convection to surrogate climate change: an intercomparison between convection-parameterizing and convection-resolving models

    Science.gov (United States)

    Keller, Michael; Kröner, Nico; Fuhrer, Oliver; Lüthi, Daniel; Schmidli, Juerg; Stengel, Martin; Stöckli, Reto; Schär, Christoph

    2018-04-01

    Climate models project an increase in heavy precipitation events in response to greenhouse gas forcing. Important elements of such events are rain showers and thunderstorms, which are poorly represented in models with parameterized convection. In this study, simulations with 12 km horizontal grid spacing (convection-parameterizing model, CPM) and 2 km grid spacing (convection-resolving model, CRM) are employed to investigate the change in the diurnal cycle of convection with warmer climate. For this purpose, simulations of 11 days in June 2007 with a pronounced diurnal cycle of convection are compared with surrogate simulations from the same period. The surrogate climate simulations mimic a future climate with increased temperatures but unchanged relative humidity and similar synoptic-scale circulation. Two temperature scenarios are compared: one with homogeneous warming (HW) using a vertically uniform warming and the other with vertically dependent warming (VW) that enables changes in lapse rate. The two sets of simulations with parameterized and explicit convection exhibit substantial differences, some of which are well known from the literature. These include differences in the timing and amplitude of the diurnal cycle of convection, and the frequency of precipitation with low intensities. The response to climate change is much less studied. We can show that stratification changes have a strong influence on the changes in convection. Precipitation is strongly increasing for HW but decreasing for the VW simulations. For cloud type frequencies, virtually no changes are found for HW, but a substantial reduction in high clouds is found for VW. Further, we can show that the climate change signal strongly depends upon the horizontal resolution. In particular, significant differences between CPM and CRM are found in terms of the radiative feedbacks, with CRM exhibiting a stronger negative feedback in the top-of-the-atmosphere energy budget.

  10. Gfitter - Revisiting the global electroweak fit of the Standard Model and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Flaecher, H.; Hoecker, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland)]; Goebel, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg and Zeuthen (Germany); Hamburg Univ. (Germany), Inst. fuer Experimentalphysik]; Haller, J. [Hamburg Univ. (Germany), Inst. fuer Experimentalphysik]; Moenig, K.; Stelzer, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg and Zeuthen (Germany)]

    2008-11-15

    The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project, and presents state-of-the-art results for the global electroweak fit in the Standard Model, and for a model with an extended Higgs sector (2HDM). Numerical and graphical results for fits with and without including the constraints from the direct Higgs searches at LEP and Tevatron are given. Perspectives for future colliders are analysed and discussed. Including the direct Higgs searches, we find M_H = 116.4^{+18.3}_{-1.3} GeV, and the 2σ and 3σ allowed regions [114,145] GeV and [113,168] and [180,225

  11. Fitting measurement models to vocational interest data: are dominance models ideal?

    Science.gov (United States)

    Tay, Louis; Drasgow, Fritz; Rounds, James; Williams, Bruce A

    2009-09-01

    In this study, the authors examined the item response process underlying 3 vocational interest inventories: the Occupational Preference Inventory (C.-P. Deng, P. I. Armstrong, & J. Rounds, 2007), the Interest Profiler (J. Rounds, T. Smith, L. Hubert, P. Lewis, & D. Rivkin, 1999; J. Rounds, C. M. Walker, et al., 1999), and the Interest Finder (J. E. Wall & H. E. Baker, 1997; J. E. Wall, L. L. Wise, & H. E. Baker, 1996). Item response theory (IRT) dominance models, such as the 2-parameter and 3-parameter logistic models, assume that item response functions (IRFs) are monotonically increasing as the latent trait increases. In contrast, IRT ideal point models, such as the generalized graded unfolding model, have IRFs that peak where the latent trait matches the item. Ideal point models are expected to fit better because vocational interest inventories ask about typical behavior, as opposed to requiring maximal performance. Results show that across all 3 interest inventories, the ideal point model provided better descriptions of the response process. The importance of specifying the correct item response model for precise measurement is discussed. In particular, scores computed by a dominance model were shown to be sometimes illogical: individuals endorsing mostly realistic or mostly social items were given similar scores, whereas scores based on an ideal point model were sensitive to which type of items respondents endorsed.

  12. Nonlinear models for fitting growth curves of Nellore cows reared in the Amazon Biome

    Directory of Open Access Journals (Sweden)

    Kedma Nayra da Silva Marinho

    2013-09-01

    Growth curves of Nellore cows were estimated by comparing six nonlinear models: Brody, Logistic, two alternatives by Gompertz, Richards and Von Bertalanffy. The models were fitted to weight-age data, from birth to 750 days of age, of 29,221 cows born between 1976 and 2006 in the Brazilian states of Acre, Amapá, Amazonas, Pará, Rondônia, Roraima and Tocantins. The models were fitted by the Gauss-Newton method. The goodness of fit of the models was evaluated by using mean square error, adjusted coefficient of determination, prediction error and mean absolute error. Biological interpretation of parameters was accomplished by plotting estimated weights versus the observed weight means, instantaneous growth rate, absolute maturity rate, relative instantaneous growth rate, inflection point and magnitude of the parameters A (asymptotic weight) and K (maturing rate). The Brody and Von Bertalanffy models fitted the weight-age data but the other models did not. The average weight (A) and growth rate (K) were: 384.6±1.63 kg and 0.0022±0.00002 (Brody) and 313.40±0.70 kg and 0.0045±0.00002 (Von Bertalanffy). The Brody model provides better goodness of fit than the Von Bertalanffy model.
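
As an illustration of the fitting procedure, the Brody curve W(t) = A(1 - B e^(-Kt)) can be fitted by nonlinear least squares. This sketch uses synthetic weight-age data generated near the reported parameter estimates rather than the actual Nellore records:

```python
import numpy as np
from scipy.optimize import curve_fit

def brody(t, A, B, K):
    """Brody curve: asymptotic weight A, integration constant B, maturing rate K."""
    return A * (1.0 - B * np.exp(-K * t))

# Synthetic weight-age data generated near the reported Nellore estimates
rng = np.random.default_rng(42)
age = np.linspace(1, 750, 40)                       # days
weight = brody(age, A=384.6, B=0.9, K=0.0022) + rng.normal(0, 3, age.size)

popt, pcov = curve_fit(brody, age, weight, p0=[400.0, 0.9, 0.003])
A_hat, B_hat, K_hat = popt
print(round(A_hat, 1), round(B_hat, 3), round(K_hat, 5))
```

`curve_fit` uses a Levenberg-Marquardt-type iteration rather than the paper's Gauss-Newton method, but both minimize the same least-squares criterion; with data covering only part of the growth trajectory, the asymptotic weight A is the hardest parameter to pin down.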

  13. Development of multi-component diesel surrogate fuel models – Part II:Validation of the integrated mechanisms in 0-D kinetic and 2-D CFD spray combustion simulations

    DEFF Research Database (Denmark)

    Poon, Hiew Mun; Pang, Kar Mun; Ng, Hoon Kiat

    2016-01-01

    The mechanisms for n-hexadecane (HXN), heptamethylnonane (HMN), cyclohexane (CHX) and toluene developed in Part I are applied in this work. They are combined to produce two different versions of multi-component diesel surrogate models in the form of MCDS1 (HXN + HMN) and MCDS2 (HXN + HMN + toluene + CHX). The integrated mechanisms are then comprehensively validated in zero-dimensional chemical kinetic simulations under a wide range of shock tube and jet-stirred reactor conditions. Subsequently, the fidelity of the surrogate models is further evaluated in two-dimensional CFD spray combustion simulations. Simulation results show that ignition delay (ID) prediction corresponds well with the experimental data. … MCDS2 predicts an increase of maximum local soot volume fraction by a factor of 2.1 when the ambient temperature increases from 900 K to 1000 K, while the prediction by MCDS1 is lower at 1.6. This trend qualitatively agrees with the experimental observation. This work demonstrates that MCDS1 serves as a potential surrogate …

  14. Are Fit Indices Biased in Favor of Bi-Factor Models in Cognitive Ability Research?: A Comparison of Fit in Correlated Factors, Higher-Order, and Bi-Factor Models via Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Grant B. Morgan

    2015-02-01

    Bi-factor confirmatory factor models have been influential in research on cognitive abilities because they often better fit the data than correlated factors and higher-order models. They also instantiate a perspective that differs from that offered by other models. Motivated by previous work that hypothesized an inherent statistical bias of fit indices favoring the bi-factor model, we compared the fit of correlated factors, higher-order, and bi-factor models via Monte Carlo methods. When data were sampled from a true bi-factor structure, each of the approximate fit indices was more likely than not to identify the bi-factor solution as the best fitting. When samples were selected from a true multiple correlated factors structure, approximate fit indices were more likely overall to identify the correlated factors solution as the best fitting. In contrast, when samples were generated from a true higher-order structure, approximate fit indices tended to identify the bi-factor solution as best fitting. There was extensive overlap of fit values across the models regardless of true structure. Although one model may fit a given dataset best relative to the other models, each of the models tended to fit the data well in absolute terms. Given this variability, models must also be judged on substantive and conceptual grounds.

  15. Three dimensional fuzzy influence analysis of fitting algorithms on integrated chip topographic modeling

    International Nuclear Information System (INIS)

    Liang, Zhong Wei; Wang, Yi Jun; Ye, Bang Yan; Brauwer, Richard Kars

    2012-01-01

    In inspecting the detailed performance results of surface precision modeling in different external parameter conditions, the integrated chip surfaces should be evaluated and assessed during topographic spatial modeling processes. The application of surface fitting algorithms exerts a considerable influence on topographic mathematical features. The influence mechanisms caused by different surface fitting algorithms on the integrated chip surface facilitate the quantitative analysis of different external parameter conditions. By extracting the coordinate information from the selected physical control points and using a set of precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used for constructing micro topographic models with the obtained point cloud. In computing for the newly proposed mathematical features on surface models, we construct the fuzzy evaluating data sequence and present a new three dimensional fuzzy quantitative evaluating method. Through this method, the value variation tendencies of topographic features can be clearly quantified. The fuzzy influence discipline among different surface fitting algorithms, topography spatial features, and the external science parameter conditions can be analyzed quantitatively and in detail. In addition, quantitative analysis can provide final conclusions on the inherent influence mechanism and internal mathematical relation in the performance results of different surface fitting algorithms, topographic spatial features, and their scientific parameter conditions in the case of surface micro modeling. The performance inspection of surface precision modeling will be facilitated and optimized as a new research idea for micro-surface reconstruction that will be monitored in a modeling process.

  16. Three dimensional fuzzy influence analysis of fitting algorithms on integrated chip topographic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Zhong Wei; Wang, Yi Jun [Guangzhou Univ., Guangzhou (China); Ye, Bang Yan [South China Univ. of Technology, Guangzhou (China); Brauwer, Richard Kars [Indian Institute of Technology, Kanpur (India)

    2012-10-15

    In inspecting the detailed performance results of surface precision modeling in different external parameter conditions, the integrated chip surfaces should be evaluated and assessed during topographic spatial modeling processes. The application of surface fitting algorithms exerts a considerable influence on topographic mathematical features. The influence mechanisms caused by different surface fitting algorithms on the integrated chip surface facilitate the quantitative analysis of different external parameter conditions. By extracting the coordinate information from the selected physical control points and using a set of precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used for constructing micro topographic models with the obtained point cloud. In computing for the newly proposed mathematical features on surface models, we construct the fuzzy evaluating data sequence and present a new three dimensional fuzzy quantitative evaluating method. Through this method, the value variation tendencies of topographic features can be clearly quantified. The fuzzy influence discipline among different surface fitting algorithms, topography spatial features, and the external science parameter conditions can be analyzed quantitatively and in detail. In addition, quantitative analysis can provide final conclusions on the inherent influence mechanism and internal mathematical relation in the performance results of different surface fitting algorithms, topographic spatial features, and their scientific parameter conditions in the case of surface micro modeling. The performance inspection of surface precision modeling will be facilitated and optimized as a new research idea for micro-surface reconstruction that will be monitored in a modeling process.

  17. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DEFF Research Database (Denmark)

    Ding, Tao; Li, Cheng; Huang, Can

    2018-01-01

    In order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master-slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost … optimality. Numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.

  18. Fitness, Sleep-Disordered Breathing, Symptoms of Depression, and Cognition in Inactive Overweight Children: Mediation Models.

    Science.gov (United States)

    Stojek, Monika M K; Montoya, Amanda K; Drescher, Christopher F; Newberry, Andrew; Sultan, Zain; Williams, Celestine F; Pollock, Norman K; Davis, Catherine L

    We used mediation models to examine the mechanisms underlying the relationships among physical fitness, sleep-disordered breathing (SDB), symptoms of depression, and cognitive functioning. We conducted a cross-sectional secondary analysis of the cohorts involved in the 2003-2006 project PLAY (a trial of the effects of aerobic exercise on health and cognition) and the 2008-2011 SMART study (a trial of the effects of exercise on cognition). A total of 397 inactive overweight children aged 7-11 received a fitness test, standardized cognitive test (Cognitive Assessment System, yielding Planning, Attention, Simultaneous, Successive, and Full Scale scores), and depression questionnaire. Parents completed a Pediatric Sleep Questionnaire. We used bootstrapped mediation analyses to test whether SDB mediated the relationship between fitness and depression and whether SDB and depression mediated the relationship between fitness and cognition. Fitness was negatively associated with depression (B = -0.041; 95% CI, -0.06 to -0.02) and SDB (B = -0.005; 95% CI, -0.01 to -0.001). SDB was positively associated with depression (B = 0.99; 95% CI, 0.32 to 1.67) after controlling for fitness. The relationship between fitness and depression was mediated by SDB (indirect effect = -0.005; 95% CI, -0.01 to -0.0004). The relationship between fitness and the attention component of cognition was independently mediated by SDB (indirect effect = 0.058; 95% CI, 0.004 to 0.13) and depression (indirect effect = -0.071; 95% CI, -0.01 to -0.17). SDB mediates the relationship between fitness and depression, and SDB and depression separately mediate the relationship between fitness and the attention component of cognition.
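
The bootstrapped indirect effect used in such analyses can be sketched for the simplest X -> M -> Y case: resample the data, estimate the a and b paths by OLS, and take percentiles of the a*b products. Simulated data (invented effect sizes) stand in for the study variables:

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=2000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b in a simple
    X -> M -> Y mediation model, estimated with two OLS regressions."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                 # a path: M on X
        Xmat = np.column_stack([np.ones(n), ms, xs]) # b path: Y on M, controlling X
        b = np.linalg.lstsq(Xmat, ys, rcond=None)[0][1]
        est.append(a * b)
    lo, hi = np.percentile(est, [2.5, 97.5])
    return lo, hi

# Simulated data with a true indirect effect of 0.5 * 0.6 = 0.3
rng = np.random.default_rng(1)
x = rng.normal(size=300)
m = 0.5 * x + rng.normal(size=300)
y = 0.6 * m + 0.2 * x + rng.normal(size=300)
lo, hi = bootstrap_indirect(x, m, y)
print(round(lo, 3), round(hi, 3))  # a CI excluding 0 indicates mediation
```

This is the logic behind the confidence intervals reported in the abstract; the study's models additionally handled covariates and multiple mediators.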

  19. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
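
The "explode" step and the Poisson likelihood with a log-exposure offset can be sketched without specialized software. This toy example (no frailty term; the %PCFrailty macro itself is SAS-only) recovers a constant hazard split into three pieces, with scipy standing in for a GLM routine:

```python
import numpy as np
from scipy.optimize import minimize

def explode(time, event, cuts):
    """Split each subject's follow-up at the cut points (piecewise hazard).
    Returns one row per (subject, piece): piece index, exposure, event flag."""
    rows = []
    edges = np.concatenate([[0.0], cuts, [np.inf]])
    for t, d in zip(time, event):
        for j in range(len(edges) - 1):
            lo, hi = edges[j], edges[j + 1]
            if t <= lo:
                break
            rows.append((j, min(t, hi) - lo, int(d and t <= hi)))
    return np.array(rows)

# Simulated exponential survival (constant hazard 0.1) censored at t = 15
rng = np.random.default_rng(0)
t_true = rng.exponential(1 / 0.1, size=500)
time = np.minimum(t_true, 15.0)
event = (t_true <= 15.0).astype(int)

data = explode(time, event, cuts=np.array([5.0, 10.0]))
piece, exposure, died = data[:, 0].astype(int), data[:, 1], data[:, 2]

# Poisson negative log-likelihood with log-exposure offset; one rate per piece
def negloglik(log_rates):
    eta = log_rates[piece] + np.log(exposure)
    return float(np.sum(np.exp(eta) - died * eta))

res = minimize(negloglik, x0=np.zeros(3), method="BFGS")
print(np.exp(res.x).round(3))  # piecewise hazard estimates, all near 0.1
```

Adding a log-normal frailty amounts to inserting a cluster-level random intercept into `eta`, which is exactly what a GLMM routine supplies.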

  20. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    Science.gov (United States)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

    The goodness-of-fit indicator, i.e. the efficiency criterion, is central to model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and a zero-mean generalized error distribution (GED) to fit the distribution of model errors after the BC transformation. The BC-GED model unifies all recent distance-based goodness-of-fit indicators, and reveals that the widely used mean square error (MSE) and mean absolute error (MAE) imply the statistical assumptions that the model errors follow the Gaussian distribution and the zero-mean Laplace distribution, respectively. Empirical knowledge about goodness-of-fit indicators can also be interpreted easily through the BC-GED model; e.g. the sensitivity to high flows of indicators with a large power of model errors results from the low probability of large model errors in the distribution assumed by these indicators. In order to assess the effect of the BC-GED model parameters (the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and model simulation results among the six indicators demonstrates that these indicators can be clearly separated into two classes by the GED kurtosis β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated by the class β > 1 of distance-based goodness-of-fit indicators captures high flow very well but mimics the baseflow very badly, whereas calibrated by the class β ≤ 1 it mimics the baseflow very well, because the larger the value of β, the greater emphasis is put on
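
The BC-GED objective can be written down directly: Box-Cox-transform the observed and simulated series, then sum the absolute errors raised to the power β. The sketch below checks the two special cases named in the abstract (β = 2 recovers a squared-error criterion, β = 1 an absolute-error criterion); the data values are illustrative:

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transformation (lam = 1 shifts the data but leaves errors unchanged)."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

def bc_ged_distance(obs, sim, lam=1.0, beta=2.0):
    """Distance implied by the BC-GED likelihood: sum of absolute
    Box-Cox-transformed errors raised to the GED power beta."""
    err = boxcox(obs, lam) - boxcox(sim, lam)
    return float(np.sum(np.abs(err) ** beta))

obs = np.array([1.0, 2.0, 4.0, 8.0])
sim = np.array([1.5, 1.8, 4.5, 7.0])

# beta = 2, lam = 1 recovers the (unscaled) sum of squared errors ...
assert np.isclose(bc_ged_distance(obs, sim, 1.0, 2.0), np.sum((obs - sim) ** 2))
# ... and beta = 1, lam = 1 recovers the (unscaled) sum of absolute errors
assert np.isclose(bc_ged_distance(obs, sim, 1.0, 1.0), np.sum(np.abs(obs - sim)))
print(bc_ged_distance(obs, sim, lam=0.3, beta=1.0))
```

Smaller λ compresses large flows before the errors are computed, and smaller β down-weights large errors, which is why the β ≤ 1 class emphasizes baseflow.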

  1. Recent progress in gasoline surrogate fuels

    KAUST Repository

    Sarathy, Mani

    2017-12-06

    Petroleum-derived gasoline is currently the most widely used fuel for transportation propulsion. The design and operation of gasoline fuels are governed by specific physical and chemical kinetic fuel properties. These must be thoroughly understood in order to improve sustainable gasoline fuel technologies in the face of economic, technological, and societal challenges. For this reason, surrogate mixtures are formulated to emulate the thermophysical, thermochemical, and chemical kinetic properties of the real fuel, so that fundamental experiments and predictive simulations can be conducted. Early studies on gasoline combustion typically adopted single component or binary mixtures (n-heptane/isooctane) as surrogates. However, the last decade has seen rapid progress in the formulation and utilization of ternary mixtures (n-heptane/isooctane/toluene), as well as multicomponent mixtures that span the entire carbon number range of gasoline fuels (C4–C10). The increased use of oxygenated fuels (ethanol, butanol, MTBE, etc.) as blending components/additives has also motivated studies on their addition to gasoline fuels. This comprehensive review presents the available experimental and chemical kinetic studies which have been performed to better understand the combustion properties of gasoline fuels and their surrogates. Focus is on the development and use of surrogate fuels that emulate real fuel properties governing the design and operation of engines. A detailed analysis is presented for the various classes of compounds used in formulating gasoline surrogate fuels, including n-paraffins, isoparaffins, olefins, naphthenes, and aromatics. Chemical kinetic models for individual molecules and mixtures of molecules to emulate gasoline surrogate fuels are presented. Despite the recent progress in gasoline surrogate fuel combustion research, there are still major gaps remaining; these are critically discussed, as well as their implications on fuel formulation and engine

  2. Recent progress in gasoline surrogate fuels

    KAUST Repository

    Sarathy, Mani; Farooq, Aamir; Kalghatgi, Gautam T.

    2017-01-01

    Petroleum-derived gasoline is currently the most widely used fuel for transportation propulsion. The design and operation of gasoline fuels are governed by specific physical and chemical kinetic fuel properties. These must be thoroughly understood in order to improve sustainable gasoline fuel technologies in the face of economic, technological, and societal challenges. For this reason, surrogate mixtures are formulated to emulate the thermophysical, thermochemical, and chemical kinetic properties of the real fuel, so that fundamental experiments and predictive simulations can be conducted. Early studies on gasoline combustion typically adopted single component or binary mixtures (n-heptane/isooctane) as surrogates. However, the last decade has seen rapid progress in the formulation and utilization of ternary mixtures (n-heptane/isooctane/toluene), as well as multicomponent mixtures that span the entire carbon number range of gasoline fuels (C4–C10). The increased use of oxygenated fuels (ethanol, butanol, MTBE, etc.) as blending components/additives has also motivated studies on their addition to gasoline fuels. This comprehensive review presents the available experimental and chemical kinetic studies which have been performed to better understand the combustion properties of gasoline fuels and their surrogates. Focus is on the development and use of surrogate fuels that emulate real fuel properties governing the design and operation of engines. A detailed analysis is presented for the various classes of compounds used in formulating gasoline surrogate fuels, including n-paraffins, isoparaffins, olefins, naphthenes, and aromatics. Chemical kinetic models for individual molecules and mixtures of molecules to emulate gasoline surrogate fuels are presented. Despite the recent progress in gasoline surrogate fuel combustion research, there are still major gaps remaining; these are critically discussed, as well as their implications on fuel formulation and engine

  3. Developments in Surrogating Methods

    Directory of Open Access Journals (Sweden)

    Hans van Dormolen

    2005-11-01

    Full Text Available In this paper, I would like to talk about the developments in surrogating methods for preservation. My main focus will be on the technical aspects of preservation surrogates. This means that I will tell you something about my job as Quality Manager Microfilming for the Netherlands’ national preservation program, Metamorfoze, which is coordinated by the National Library. I am responsible for the quality of the preservation microfilms, which are produced for Metamorfoze. Firstly, I will elaborate on developments in preservation methods in relation to the following subjects: · Preservation microfilms · Scanning of preservation microfilms · Preservation scanning · Computer Output Microfilm. In the closing paragraphs of this paper, I would like to tell you something about the methylene blue test. This is an important test for long-term storage of preservation microfilms. Also, I will give you a brief report on the Cellulose Acetate Microfilm Conference that was held in the British Library in London, May 2005.

  4. Kernel-density estimation and approximate Bayesian computation for flexible epidemiological model fitting in Python.

    Science.gov (United States)

    Irvine, Michael A; Hollingsworth, T Déirdre

    2018-05-26

    Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open access library, with examples to aid researchers to rapidly fit models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
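
    As a loose illustration of the rejection flavour of approximate Bayesian computation (the paper's scheme is adaptive and uses kernel density estimation; this sketch uses a fixed acceptance quantile and made-up Poisson data, with all names ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(observed_summary, simulate, prior_sample, n_draws=2000, quantile=0.05):
    """Basic rejection-ABC: draw parameters from the prior, simulate a summary
    for each draw, and keep the draws whose summary lies closest to the observed
    one. An adaptive scheme would shrink the tolerance over generations; here a
    fixed data-driven quantile stands in."""
    thetas = np.array([prior_sample() for _ in range(n_draws)])
    dists = np.array([abs(simulate(t) - observed_summary) for t in thetas])
    eps = np.quantile(dists, quantile)  # tolerance chosen from the distances
    return thetas[dists <= eps]

# Toy example: infer the mean of a Poisson count process from its sample mean.
true_mean = 4.0
observed = rng.poisson(true_mean, size=200).mean()
post = abc_rejection(observed,
                     simulate=lambda t: rng.poisson(t, size=200).mean(),
                     prior_sample=lambda: rng.uniform(0.0, 10.0))
```

The accepted draws in `post` approximate the posterior; their spread narrows as the tolerance quantile is tightened.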

  5. Standard error propagation in R-matrix model fitting for light elements

    International Nuclear Information System (INIS)

    Chen Zhenpeng; Zhang Rui; Sun Yeying; Liu Tingjin

    2003-01-01

    The error propagation features of R-matrix model fitting for the ⁷Li, ¹¹B and ¹⁷O systems were researched systematically. Some laws of error propagation were revealed, an empirical formula P_j = U_j^c / U_j^d = K_j · S̄ · √m / √N describing the standard error propagation was established, and the most likely error ranges for the standard cross sections of ⁶Li(n,t), ¹⁰B(n,α₀) and ¹⁰B(n,α₁) were estimated. The problem that the standard errors of the light-nuclei standard cross sections may be too small results mainly from the R-matrix model fitting, which is not perfect; yet R-matrix model fitting is the most reliable evaluation method for such data. The error propagation features of R-matrix model fitting for the compound nucleus systems of ⁷Li, ¹¹B and ¹⁷O have been studied systematically, some laws of error propagation are revealed, and these findings are important in solving the problem mentioned above. Furthermore, these conclusions are suitable for similar model fitting in other scientific fields. (author)

  6. Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes

    Science.gov (United States)

    Leite, Walter L.; Stapleton, Laura M.

    2011-01-01

    In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…

  7. Assessing model fit in latent class analysis when asymptotics do not hold

    NARCIS (Netherlands)

    van Kollenburg, Geert H.; Mulder, Joris; Vermunt, Jeroen K.

    2015-01-01

    The application of latent class (LC) analysis involves evaluating the LC model using goodness-of-fit statistics. To assess the misfit of a specified model, say with the Pearson chi-squared statistic, a p-value can be obtained using an asymptotic reference distribution. However, asymptotic p-values

  8. Development and design of a late-model fitness test instrument based on LabView

    Science.gov (United States)

    Xie, Ying; Wu, Feiqing

    2010-12-01

    Undergraduates are pioneers of China's modernization program and undertake the historic mission of rejuvenating the nation in the 21st century, so their physical fitness is vital. A smart fitness test system can help them understand their fitness and health conditions, so that they can choose more suitable exercise approaches and make practical plans according to their own situation. Following future trends, a late-model fitness test instrument based on LabView has been designed to remedy the defects of today's instruments. The system hardware consists of five types of sensors with their peripheral circuits, an NI USB-6251 acquisition card and a computer, while the system software, built on LabView, includes modules for user registration, data acquisition, data processing and display, and data storage. The system, featuring modularization and an open structure, can be revised according to actual needs. Test results have verified the system's stability and reliability.

  9. Fast and exact Newton and Bidirectional fitting of Active Appearance Models.

    Science.gov (United States)

    Kossaifi, Jean; Tzimiropoulos, Yorgos; Pantic, Maja

    2016-12-21

    Active Appearance Models (AAMs) are generative models of shape and appearance that have proven very attractive for their ability to handle wide changes in illumination, pose and occlusion when trained in the wild, while not requiring large training datasets like regression-based or deep learning methods. The problem of fitting an AAM is usually formulated as a non-linear least squares one, and the main way of solving it is a standard Gauss-Newton algorithm. In this paper we extend Active Appearance Models in two ways: we first extend the Gauss-Newton framework by formulating a bidirectional fitting method that deforms both the image and the template to fit a new instance. We then formulate a second order method by deriving an efficient Newton method for AAM fitting. We derive both methods in a unified framework for two types of Active Appearance Models, holistic and part-based, and additionally show how to exploit the structure in the problem to derive fast yet exact solutions. We perform a thorough evaluation of all algorithms on three challenging and recently annotated in-the-wild datasets, and investigate fitting accuracy, convergence properties and the influence of noise in the initialisation. We compare our proposed methods to other algorithms and show that they yield state-of-the-art results, outperforming other methods while having superior convergence properties.

  10. The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting

    Science.gov (United States)

    Tao, Zhang; Li, Zhang; Dingjun, Chen

    On the basis of the idea of second-order (quadratic) curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A preventing-increase model is introduced in this paper, and the model parameters are solved with the Matlab software. The validity of the preventing-increase model is confirmed through a numerical experiment. The experimental results show that the precision of the preventing-increase model is ideal.
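
    The quadratic curve-fitting step can be sketched as follows (illustrative site counts, not the paper's data; the paper solves its model in Matlab, and NumPy's `polyfit` stands in here):

```python
import numpy as np

# Hypothetical yearly counts of e-commerce sites, in thousands (made-up numbers)
years = np.arange(2000, 2008, dtype=float)
sites = np.array([1.1, 1.9, 3.2, 4.8, 7.1, 9.9, 13.2, 17.0])

# Second-order (quadratic) least-squares fit, following the curve-fitting idea
coeffs = np.polyfit(years - years[0], sites, deg=2)
predict = np.poly1d(coeffs)

# One-step-ahead extrapolation for the next year
next_year = float(predict(len(years)))
```

A growth-limiting ("preventing-increase") model would replace the open-ended quadratic with a saturating curve, but the least-squares parameter-fitting step is analogous.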

  11. The disconnected values model improves mental well-being and fitness in an employee wellness program.

    Science.gov (United States)

    Anshel, Mark H; Brinthaupt, Thomas M; Kang, Minsoo

    2010-01-01

    This study examined the effect of a 10-week wellness program on changes in physical fitness and mental well-being. The conceptual framework for this study was the Disconnected Values Model (DVM). According to the DVM, detecting the inconsistencies between negative habits and values (e.g., health, family, faith, character) and concluding that these "disconnects" are unacceptable promotes the need for health behavior change. Participants were 164 full-time employees at a university in the southeastern U.S. The program included fitness coaching and a 90-minute orientation based on the DVM. Multivariate Mixed Model analyses indicated significantly improved scores from pre- to post-intervention on selected measures of physical fitness and mental well-being. The results suggest that the Disconnected Values Model provides an effective cognitive-behavioral approach to generating health behavior change in a 10-week workplace wellness program.

  12. A goodness-of-fit test for occupancy models with correlated within-season revisits

    Science.gov (United States)

    Wright, Wilson; Irvine, Kathryn M.; Rodhouse, Thomas J.

    2016-01-01

    Occupancy modeling is important for exploring species distribution patterns and for conservation monitoring. Within this framework, explicit attention is given to species detection probabilities estimated from replicate surveys to sample units. A central assumption is that replicate surveys are independent Bernoulli trials, but this assumption becomes untenable when ecologists serially deploy remote cameras and acoustic recording devices over days and weeks to survey rare and elusive animals. Proposed solutions involve modifying the detection-level component of the model (e.g., first-order Markov covariate). Evaluating whether a model sufficiently accounts for correlation is imperative, but clear guidance for practitioners is lacking. Currently, an omnibus goodness-of-fit test using a chi-square discrepancy measure on unique detection histories is available for occupancy models (MacKenzie and Bailey, Journal of Agricultural, Biological, and Environmental Statistics, 9, 2004, 300; hereafter, MacKenzie–Bailey test). We propose a join count summary measure adapted from spatial statistics to directly assess correlation after fitting a model. We motivate our work with a dataset of multinight bat call recordings from a pilot study for the North American Bat Monitoring Program. We found in simulations that our join count test was more reliable than the MacKenzie–Bailey test for detecting inadequacy of a model that assumed independence, particularly when serial correlation was low to moderate. A model that included a Markov-structured detection-level covariate produced unbiased occupancy estimates except in the presence of strong serial correlation and a revisit design consisting only of temporal replicates. When applied to two common bat species, our approach illustrates that sophisticated models do not guarantee adequate fit to real data, underscoring the importance of model assessment. Our join count test provides a widely applicable goodness-of-fit test and
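
    The join count idea, counting adjacent detection pairs in a binary survey history, can be sketched as follows (a simplified scalar version of the summary statistic; names are ours):

```python
import numpy as np

def join_count(history):
    """Count adjacent (1, 1) pairs in a binary detection history.
    Under independent detections this count is small on average; an excess of
    joins relative to model-based simulations suggests serial correlation
    between consecutive survey nights."""
    h = np.asarray(history)
    return int(np.sum((h[:-1] == 1) & (h[1:] == 1)))
```

In a goodness-of-fit test, the observed join count would be compared against counts from histories simulated under the fitted model.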

  13. Tests of fit of historically-informed models of African American Admixture.

    Science.gov (United States)

    Gross, Jessica M

    2018-02-01

    African American populations in the U.S. formed primarily by mating between Africans and Europeans over the last 500 years. To date, studies of admixture have focused on either a one-time admixture event or continuous input into the African American population from Europeans only. Our goal is to gain a better understanding of the admixture process by examining models that take into account (a) assortative mating by ancestry in the African American population, (b) continuous input from both Europeans and Africans, and (c) historically informed variation in the rate of African migration over time. We used a model-based clustering method to generate distributions of African ancestry in three samples comprised of 147 African Americans from two published sources. We used a log-likelihood method to examine the fit of four models to these distributions and used a log-likelihood ratio test to compare the relative fit of each model. The mean ancestry estimates for our datasets of 77% African/23% European to 83% African/17% European ancestry are consistent with previous studies. We find admixture models that incorporate continuous gene flow from Europeans fit significantly better than one-time event models, and that a model involving continuous gene flow from Africans and Europeans fits better than one with continuous gene flow from Europeans only for two samples. Importantly, models that involve continuous input from Africans necessitate a higher level of gene flow from Europeans than previously reported. We demonstrate that models that take into account information about the rate of African migration over the past 500 years fit observed patterns of African ancestry better than alternative models. Our approach will enrich our understanding of the admixture process in extant and past populations. © 2017 Wiley Periodicals, Inc.
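
    The log-likelihood ratio comparison between nested admixture models can be sketched for the one-extra-parameter case (a generic textbook statistic, not the paper's code; the χ²(1) tail probability is computed via the complementary error function):

```python
from math import erfc, sqrt

def lrt_pvalue_df1(ll_null, ll_alt):
    """Likelihood-ratio test p-value for one nested model with one extra
    parameter (df = 1). Uses the chi-square(1) survival function, which
    equals erfc(sqrt(x / 2)) for test statistic x."""
    stat = 2.0 * (ll_alt - ll_null)
    return erfc(sqrt(stat / 2.0))
```

For example, a statistic near the 3.84 critical value gives a p-value near 0.05, so a continuous-gene-flow model would be preferred over a one-time-event model whenever its log-likelihood gain pushes the statistic well past this threshold.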

  14. GOODNESS-OF-FIT TEST FOR THE ACCELERATED FAILURE TIME MODEL BASED ON MARTINGALE RESIDUALS

    Czech Academy of Sciences Publication Activity Database

    Novák, Petr

    2013-01-01

    Roč. 49, č. 1 (2013), s. 40-59 ISSN 0023-5954 R&D Projects: GA MŠk(CZ) 1M06047 Grant - others:GA MŠk(CZ) SVV 261315/2011 Keywords : accelerated failure time model * survival analysis * goodness-of-fit Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2013/SI/novak-goodness-of-fit test for the aft model based on martingale residuals.pdf

  15. Efficient occupancy model-fitting for extensive citizen-science data

    Science.gov (United States)

    Morgan, Byron J. T.; Freeman, Stephen N.; Ridout, Martin S.; Brereton, Tom M.; Fox, Richard; Powney, Gary D.; Roy, David B.

    2017-01-01

    Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium database, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species' range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen
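
    The classical likelihood that such occupancy model-fitting maximizes can be sketched in its simplest single-season form (constant ψ and p, without the paper's logistic-regression covariates; names are ours):

```python
import numpy as np

def occupancy_loglik(psi, p, histories):
    """Log-likelihood of a basic single-season occupancy model with constant
    occupancy probability psi and per-visit detection probability p.
    histories: sites x visits array of 0/1 detections."""
    h = np.asarray(histories, dtype=float)
    det = h.sum(axis=1)
    K = h.shape[1]
    # Sites with at least one detection are certainly occupied; all-zero sites
    # are either occupied-but-missed or genuinely unoccupied.
    ll_hit = np.log(psi) + det * np.log(p) + (K - det) * np.log(1.0 - p)
    ll_zero = np.log(psi * (1.0 - p) ** K + (1.0 - psi))
    return float(np.sum(np.where(det > 0, ll_hit, ll_zero)))
```

Replacing the constant ψ with a logistic function of environmental covariates yields the covariate-based variant described in the abstract, still fitted by standard maximum likelihood.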

  16. Fitting and comparing competing models of the species abundance distribution: assessment and prospect

    Directory of Open Access Journals (Sweden)

    Thomas J Matthews

    2014-06-01

    Full Text Available A species abundance distribution (SAD characterises patterns in the commonness and rarity of all species within an ecological community. As such, the SAD provides the theoretical foundation for a number of other biogeographical and macroecological patterns, such as the species–area relationship, as well as being an interesting pattern in its own right. While there has been a resurgence in the study of SADs in the last decade, less focus has been placed on methodology in SAD research, and few attempts have been made to synthesise the vast array of methods which have been employed in SAD model evaluation. As such, our review has two aims. First, we provide a general overview of SADs, including descriptions of the commonly used distributions, plotting methods and issues with evaluating SAD models. Second, we review a number of recent advances in SAD model fitting and comparison. We conclude by providing a list of recommendations for fitting and evaluating SAD models. We argue that it is time for SAD studies to move away from many of the traditional methods available for fitting and evaluating models, such as sole reliance on the visual examination of plots, and embrace statistically rigorous techniques. In particular, we recommend the use of both goodness-of-fit tests and model-comparison analyses because each provides unique information which one can use to draw inferences.
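
    The model-comparison recommendation can be illustrated with a toy abundance vector and two candidate SADs scored by maximum likelihood and AIC (illustrative distributions and data, not taken from the review):

```python
import numpy as np

# Toy abundance vector for a 10-species community (illustrative, not real data)
abund = np.array([1, 1, 2, 2, 3, 5, 8, 13, 30, 120], dtype=float)

def aic(loglik, n_params):
    """Akaike information criterion: lower is better."""
    return 2.0 * n_params - 2.0 * loglik

# Candidate 1: lognormal SAD (two parameters), fitted by MLE on log-abundances
la = np.log(abund)
mu, sigma = la.mean(), la.std()
ll_lognorm = float(np.sum(-np.log(abund * sigma * np.sqrt(2.0 * np.pi))
                          - (la - mu) ** 2 / (2.0 * sigma ** 2)))

# Candidate 2: exponential SAD (one parameter), fitted by MLE
lam = 1.0 / abund.mean()
ll_expon = float(np.sum(np.log(lam) - lam * abund))

best = "lognormal" if aic(ll_lognorm, 2) < aic(ll_expon, 1) else "exponential"
```

A fuller analysis would add an absolute goodness-of-fit check for the winning model, since a relatively better model can still fit the data poorly.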

  17. Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.

    Science.gov (United States)

    Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei

    2015-02-01

    This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.
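
    A compound-symmetry pattern like the one tested via MSTRUCT can be constructed directly (a NumPy sketch of the covariance pattern itself, not SAS syntax; the function name is ours):

```python
import numpy as np

def compound_symmetry(p, variance, covariance):
    """Build a p x p compound-symmetry covariance pattern: a common variance on
    the diagonal and a common covariance in every off-diagonal cell -- the
    direct covariance structure being specified element by element."""
    return np.full((p, p), covariance) + np.eye(p) * (variance - covariance)

cs = compound_symmetry(4, 1.0, 0.3)
```

Fitting then amounts to estimating the two free parameters and testing the patterned matrix against the sample covariance matrix.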

  18. Human surrogate neck response to +Gz vertical impact

    NARCIS (Netherlands)

    Rooij, L. van; Uittenbogaard, J.

    2011-01-01

    For the evaluation of impact scenarios with a substantial vertical component, the performance of current human surrogates - the RID 3D hardware dummy and two numerical human models - was evaluated. Volunteer tests with 10G and 6G pulses were compared to reconstructed tests with human surrogates.

  19. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

    Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process, the models allow for profound conclusions about the test takers. However, before such a model can be used its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters, which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study, in which we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to four alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests: it closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
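
    The core of a Hausman-type comparison, two estimates of the same parameter scaled by the variance of their difference, can be sketched in scalar form (our simplification of the general quadratic-form statistic):

```python
def hausman_stat(est_a, est_b, var_a, var_b):
    """Scalar Hausman-type statistic: squared difference between two estimators
    of the same parameter, scaled by var_a - var_b (est_b taken as the efficient
    estimator, so the variance of the difference reduces to var_a - var_b).
    Under a correctly specified model the statistic is approximately
    chi-square(1); large values signal misspecification."""
    v = var_a - var_b
    if v <= 0:
        raise ValueError("variance of the difference must be positive")
    return (est_a - est_b) ** 2 / v
```

The vector version replaces the squared difference by a quadratic form with the inverse covariance difference, yielding a chi-square statistic with one degree of freedom per compared parameter.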

  20. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; i.e., all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  1. ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction

    International Nuclear Information System (INIS)

    Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.

    2015-01-01

    An appropriate maintenance policy is essential to reduce expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is the ability to predict the reliability of the systems under study, based on a well-fitted model. In this paper, the Arithmetic Reduction of Age and Arithmetic Reduction of Intensity classes of models are explored. Likelihood functions for such models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed considering models with different memories. Parameters, namely the shape and scale of the Power Law Process and the repair efficiency, were estimated for the best-fitted model. Estimation of the model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results are valuable information for the mining company and can be used to support decision making regarding its preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks
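
    The Power Law Process quantities estimated above can be sketched as follows (standard textbook formulas for the PLP with shape β and scale θ, not the paper's code):

```python
def plp_expected_failures(t, shape, scale):
    """Expected cumulative number of failures of a Power Law Process by time t:
    (t / theta) ** beta."""
    return (t / scale) ** shape

def plp_intensity(t, shape, scale):
    """Failure intensity (ROCOF) of the PLP at time t:
    (beta / theta) * (t / theta) ** (beta - 1); increasing when beta > 1,
    i.e. a deteriorating system."""
    return (shape / scale) * (t / scale) ** (shape - 1)
```

Imperfect-repair models such as ARA and ARI modify the effective age or the intensity after each repair, but both reduce to these baseline PLP expressions when repairs have no effect.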

  2. Model-independent partial wave analysis using a massively-parallel fitting framework

    Science.gov (United States)

    Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.

    2017-10-01

    The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D⁺ → h⁺h⁺h⁻. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m²(h⁺h⁻) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.

  3. Study on fitness functions of genetic algorithm for dynamically correcting nuclide atmospheric diffusion model

    International Nuclear Information System (INIS)

    Ji Zhilong; Ma Yuanwei; Wang Dezhong

    2014-01-01

    Background: In radioactive nuclide atmospheric diffusion models, the empirical dispersion coefficients were deduced under certain experimental conditions, whose differences from nuclear accident conditions are a source of deviation. A better estimate of a radioactive nuclide's actual dispersion process can be obtained by correcting the dispersion coefficients with observation data, and the Genetic Algorithm (GA) is an appropriate method for this correction procedure. Purpose: This study analyzes the fitness functions' influence on the correction procedure and on the forecast ability of the diffusion model. Methods: GA, coupled with a Lagrangian dispersion model, was used in a numerical simulation to compare the impact of four fitness functions on the correction result. Results: In the numerical simulation, the fitness function that takes observation deviation into consideration stands out when significant deviation exists in the observed data. After performing the correction procedure on the Kincaid experiment data, a significant boost was observed in the diffusion model's forecast ability. Conclusion: As the results show, in order to improve dispersion models' forecast ability using GA, observation data should be given different weights in the fitness function corresponding to their errors. (authors)
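
    The error-weighted fitness idea, down-weighting observations with larger measurement error, can be sketched as follows (our illustrative form, not the paper's exact function):

```python
import numpy as np

def weighted_fitness(simulated, observed, obs_sigma):
    """Fitness for GA-based calibration: the inverse of an error-weighted sum of
    squared deviations, so that observations with larger measurement error
    (obs_sigma) contribute less to the score. A perfect match yields 1.0."""
    sim = np.asarray(simulated, dtype=float)
    obs = np.asarray(observed, dtype=float)
    w = 1.0 / np.asarray(obs_sigma, dtype=float) ** 2
    return 1.0 / (1.0 + float(np.sum(w * (sim - obs) ** 2)))
```

A GA would maximize this score over candidate dispersion-coefficient corrections, with `obs_sigma` encoding the confidence assigned to each measurement.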

  4. Brief communication: human cranial variation fits iterative founder effect model with African origin.

    Science.gov (United States)

    von Cramon-Taubadel, Noreen; Lycett, Stephen J

    2008-05-01

    Recent studies comparing craniometric and neutral genetic affinity matrices have concluded that, on average, human cranial variation fits a model of neutral expectation. While human craniometric and genetic data fit a model of isolation by geographic distance, it is not yet clear whether this is due to geographically mediated gene flow or human dispersal events. Recently, human genetic data have been shown to fit an iterative founder effect model of dispersal with an African origin, in line with the out-of-Africa replacement model for modern human origins, and Manica et al. (Nature 448 (2007) 346-349) have demonstrated that human craniometric data also fit this model. However, in contrast with the neutral model of cranial evolution suggested by previous studies, Manica et al. (2007) made the a priori assumption that cranial form has been subject to climatically driven natural selection and therefore correct for climate prior to conducting their analyses. Here we employ a modified theoretical and methodological approach to test whether human cranial variability fits the iterative founder effect model. In contrast with Manica et al. (2007) we employ size-adjusted craniometric variables, since climatic factors such as temperature have been shown to correlate with aspects of cranial size. Despite these differences, we obtain similar results to those of Manica et al. (2007), with up to 26% of global within-population craniometric variation being explained by geographic distance from sub-Saharan Africa. Comparative analyses using non-African origins do not yield significant results. The implications of these results are discussed in the light of the modern human origins debate. (c) 2007 Wiley-Liss, Inc.

  5. A scaled Lagrangian method for performing a least squares fit of a model to plant data

    International Nuclear Information System (INIS)

    Crisp, K.E.

    1988-01-01

    Due to measurement errors, even a perfect mathematical model will not be able to match all the corresponding plant measurements simultaneously. A further discrepancy may be introduced if an un-modelled change in conditions occurs within the plant which should have required a corresponding change in model parameters - e.g. a gradual deterioration in the performance of some component(s). Taking both these factors into account, what is required is that the overall discrepancy between the model predictions and the plant data is kept to a minimum. This process is known as 'model fitting'. A method is presented for minimising any function which consists of the sum of squared terms, subject to any constraints. Its most obvious application is in the process of model fitting, where a weighted sum of squares of the differences between model predictions and plant data is the function to be minimised. When implemented within existing Central Electricity Generating Board computer models, it will perform a least squares fit of a model to plant data within a single job submission. (author)
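A weighted, constrained least-squares model fit of the kind described can be sketched with scipy; the decay model, data, and bounds below are invented stand-ins for a plant model and its measurements:

```python
import numpy as np
from scipy.optimize import least_squares

# Idealized "plant" data generated from a two-parameter model a*exp(-k*t);
# in practice meas would be real measurements with known uncertainties.
t = np.linspace(0.0, 10.0, 21)
true_a, true_k = 3.0, 0.4
meas = true_a * np.exp(-true_k * t)
sigma = np.full_like(t, 0.05)  # measurement uncertainties -> weights

def weighted_residuals(p):
    a, k = p
    return (a * np.exp(-k * t) - meas) / sigma

# bounds stand in for the physical constraints on the model parameters
fit = least_squares(weighted_residuals, x0=[1.0, 1.0],
                    bounds=([0.0, 0.0], [10.0, 5.0]))
```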

  6. Modeling Individual Damped Linear Oscillator Processes with Differential Equations: Using Surrogate Data Analysis to Estimate the Smoothing Parameter

    Science.gov (United States)

    Deboeck, Pascal R.; Boker, Steven M.; Bergeman, C. S.

    2008-01-01

    Among the many methods available for modeling intraindividual time series, differential equation modeling has several advantages that make it promising for applications to psychological data. One interesting differential equation model is that of the damped linear oscillator (DLO), which can be used to model variables that have a tendency to…

  7. Meet the surrogate fish

    International Nuclear Information System (INIS)

    Johnson, Bob; Neitzel, Duane; Moxon, Suzanne

    1999-01-01

    This article gives details of the US Department of Energy's innovative research into the development of a sensor system that will work as a surrogate fish to provide information to aid the design of fish-friendly turbines for hydroelectric power plants. The selection of the dams for the testing of sensor fish, the release and recovery of the sensor fish, the recording of the physical forces exerted on fish as they pass through the turbines, and use of the information gathered to build more sensor fish are discussed. Fish investigations conducted at the Pacific Northwest National Laboratory are briefly described. (UK)

  8. Source Localization with Acoustic Sensor Arrays Using Generative Model Based Fitting with Sparse Constraints

    Directory of Open Access Journals (Sweden)

    Javier Macias-Guarasa

    2012-10-01

    Full Text Available This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints in the parameters of the model are included, enforcing the number of simultaneous active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with the SRP-PHAT strategies.
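The sparse-constraint idea can be illustrated with a toy one-dimensional version: a dictionary of idealized (assumed Gaussian) SRP kernels is fitted to an observed power map under an L1 penalty that limits the number of simultaneously active sources. The grid, kernel width, and the ISTA solver are illustrative choices, not the paper's algorithm:

```python
import numpy as np

# 1-D toy version: candidate source positions on a grid, each explaining the
# power map through an idealized (assumed Gaussian) SRP kernel.
grid = np.linspace(0.0, 10.0, 101)

def kernel(center):
    return np.exp(-0.5 * ((grid - center) / 0.4) ** 2)

D = np.column_stack([kernel(c) for c in grid])  # dictionary, one column per candidate
y = kernel(3.0)                                 # observed map: one source at 3.0

# ISTA for min 0.5*||Dx - y||^2 + lam*||x||_1, x >= 0: the L1 term enforces
# that only a few candidate sources are simultaneously active.
lam = 0.01
step = 1.0 / np.linalg.norm(D, 2) ** 2
x = np.zeros(grid.size)
for _ in range(2000):
    x = np.maximum(x - step * (D.T @ (D @ x - y)) - step * lam, 0.0)

est_position = grid[np.argmax(x)]
```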

  9. Comments on Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?"

    Science.gov (United States)

    McCluskey, Ken W.

    2010-01-01

    This article presents the author's comments on Hisham B. Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?" Ghassib's article focuses on the transformation of science from pre-modern times to the present. Ghassib (2010) notes that, unlike in an earlier era when the economy depended on static…

  10. Checking the Adequacy of Fit of Models from Split-Plot Designs

    DEFF Research Database (Denmark)

    Almini, A. A.; Kulahci, Murat; Montgomery, D. C.

    2009-01-01

    One of the main features that distinguish split-plot experiments from other experiments is that they involve two types of experimental errors: the whole-plot (WP) error and the subplot (SP) error. Taking this into consideration is very important when computing measures of adequacy of fit for split-plot models. In this article, we propose the computation of two R-2, R-2-adjusted, prediction error sums of squares (PRESS), and R-2-prediction statistics to measure the adequacy of fit for the WP and the SP submodels in a split-plot design. This is complemented with the graphical analysis of the two types of errors to check for any violation of the underlying assumptions and the adequacy of fit of split-plot models. Using examples, we show how computing two measures of model adequacy of fit for each split-plot design model is appropriate and useful as they reveal whether the correct WP and SP effects have…

  11. Direct fit of a theoretical model of phase transition in oscillatory finger motions.

    NARCIS (Netherlands)

    Newell, K.M.; Molenaar, P.C.M.

    2003-01-01

    This paper presents a general method to fit the Schoner-Haken-Kelso (SHK) model of human movement phase transitions directly to time series data. A robust variant of the extended Kalman filter technique is applied to the data of a single subject. The options of covariance resetting and iteration

  12. A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.

    Science.gov (United States)

    Glas, Cees A. W.; Meijer, Rob R.

    A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…

  13. Assessing item fit for unidimensional item response theory models using residuals from estimated item response functions.

    Science.gov (United States)

    Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee

    2013-07-01

    Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
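The binned form of such a residual check can be sketched as follows, assuming a 2PL model; the binning scheme and the exact standardization used in the paper differ in detail:

```python
import numpy as np

def icc_2pl(theta, a, b):
    """2PL item characteristic curve."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_fit_residuals(theta, resp, a, b, n_bins=5):
    """Standardized residuals comparing the observed proportion correct per
    ability bin (a ratio-type estimate of the ICC) with the model ICC."""
    edges = np.quantile(theta, np.linspace(0.0, 1.0, n_bins + 1))
    edges[-1] += 1e-9  # include the maximum in the last bin
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (theta >= lo) & (theta < hi)
        n = m.sum()
        p_exp = icc_2pl(theta[m], a, b).mean()
        out.append((resp[m].mean() - p_exp) * np.sqrt(n / (p_exp * (1.0 - p_exp))))
    return np.array(out)

# When responses really follow the model, residuals should look roughly N(0, 1)
rng = np.random.default_rng(7)
theta = rng.normal(size=2000)
resp = (rng.random(2000) < icc_2pl(theta, a=1.2, b=0.3)).astype(float)
res = item_fit_residuals(theta, resp, a=1.2, b=0.3)
```

Large absolute residuals in some ability range would flag item misfit.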

  14. Fit Gap Analysis – The Role of Business Process Reference Models

    Directory of Open Access Journals (Sweden)

    Dejan Pajk

    2013-12-01

    Full Text Available Enterprise resource planning (ERP) systems support solutions for standard business processes such as financial, sales, procurement and warehouse. In order to improve the understandability and efficiency of their implementation, ERP vendors have introduced reference models that describe the processes and underlying structure of an ERP system. To select and successfully implement an ERP system, the capabilities of that system have to be compared with a company’s business needs. Based on a comparison, all of the fits and gaps must be identified and further analysed. This step usually forms part of ERP implementation methodologies and is called fit gap analysis. The paper theoretically overviews methods for applying reference models and describes fit gap analysis processes in detail. The paper’s first contribution is its presentation of a fit gap analysis using standard business process modelling notation. The second contribution is the demonstration of a process-based comparison approach between a supply chain process and an ERP system process reference model. In addition to its theoretical contributions, the results can also be practically applied to projects involving the selection and implementation of ERP systems.

  15. Phylogenetic tree reconstruction accuracy and model fit when proportions of variable sites change across the tree.

    Science.gov (United States)

    Shavit Grievink, Liat; Penny, David; Hendy, Michael D; Holland, Barbara R

    2010-05-01

    Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction.

  16. Incidence of Changes in Respiration-Induced Tumor Motion and Its Relationship With Respiratory Surrogates During Individual Treatment Fractions

    International Nuclear Information System (INIS)

    Malinowski, Kathleen; McAvoy, Thomas J.; George, Rohini; Dietrich, Sonja; D’Souza, Warren D.

    2012-01-01

    Purpose: To determine how frequently (1) tumor motion and (2) the spatial relationship between tumor and respiratory surrogate markers change during a treatment fraction in lung and pancreas cancer patients. Methods and Materials: A Cyberknife Synchrony system radiographically localized the tumor and simultaneously tracked three respiratory surrogate markers fixed to a form-fitting vest. Data in 55 lung and 29 pancreas fractions were divided into successive 10-min blocks. Mean tumor positions and tumor position distributions were compared across 10-min blocks of data. Treatment margins were calculated from both 10 and 30 min of data. Partial least squares (PLS) regression models of tumor positions as a function of external surrogate marker positions were created from the first 10 min of data in each fraction; the incidence of significant PLS model degradation was used to assess changes in the spatial relationship between tumors and surrogate markers. Results: The absolute change in mean tumor position from first to third 10-min blocks was >5 mm in 13% and 7% of lung and pancreas cases, respectively. Superior–inferior and medial–lateral differences in mean tumor position were significantly associated with the lobe of lung. In 61% and 54% of lung and pancreas fractions, respectively, margins calculated from 30 min of data were larger than margins calculated from 10 min of data. The change in treatment margin magnitude for superior–inferior motion was >1 mm in 42% of lung and 45% of pancreas fractions. Significantly increasing tumor position prediction model error (mean ± standard deviation rates of change of 1.6 ± 2.5 mm per 10 min) over 30 min indicated tumor–surrogate relationship changes in 63% of fractions. Conclusions: Both tumor motion and the relationship between tumor and respiratory surrogate displacements change in most treatment fractions for patient in-room time of 30 min.
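A minimal PLS1 (NIPALS) fit of tumor position to surrogate-marker positions can be sketched as follows; the marker data are synthetic and the implementation is a bare-bones stand-in for the paper's PLS models:

```python
import numpy as np

def pls1_fit(X, y, n_comp=2):
    """Minimal PLS1 (NIPALS) regression sketch: returns coefficients and
    intercept for predicting tumor position y from marker matrix X."""
    xm, ym = X.mean(0), y.mean()
    Xr, yr = X - xm, y - ym
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr
        w = w / np.linalg.norm(w)
        t = Xr @ w
        tt = t @ t
        p = Xr.T @ t / tt
        q = (yr @ t) / tt
        Xr = Xr - np.outer(t, p)   # deflate predictors
        yr = yr - q * t            # deflate response
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)
    return B, ym - xm @ B

rng = np.random.default_rng(0)
markers = rng.normal(size=(200, 3))                 # 3 surrogate markers
tumor = markers @ np.array([2.0, -1.0, 0.5]) + 4.0  # toy linear relationship
B, b0 = pls1_fit(markers, tumor, n_comp=3)
```

Monitoring the prediction error of such a model on new data over time is what reveals tumor-surrogate relationship changes.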

  17. Incidence of Changes in Respiration-Induced Tumor Motion and Its Relationship With Respiratory Surrogates During Individual Treatment Fractions

    Energy Technology Data Exchange (ETDEWEB)

    Malinowski, Kathleen [Department of Bioengineering, A. James Clark School of Engineering, University of Maryland, College Park, MD (United States); Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States); McAvoy, Thomas J. [Department of Bioengineering, A. James Clark School of Engineering, University of Maryland, College Park, MD (United States); Institute of Systems Research, University of Maryland, College Park, MD (United States); George, Rohini [Department of Bioengineering, A. James Clark School of Engineering, University of Maryland, College Park, MD (United States); Dietrich, Sonja [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, CA (United States); D' Souza, Warren D., E-mail: wdsou001@umaryland.edu [Department of Bioengineering, A. James Clark School of Engineering, University of Maryland, College Park, MD (United States); Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States)

    2012-04-01

    Purpose: To determine how frequently (1) tumor motion and (2) the spatial relationship between tumor and respiratory surrogate markers change during a treatment fraction in lung and pancreas cancer patients. Methods and Materials: A Cyberknife Synchrony system radiographically localized the tumor and simultaneously tracked three respiratory surrogate markers fixed to a form-fitting vest. Data in 55 lung and 29 pancreas fractions were divided into successive 10-min blocks. Mean tumor positions and tumor position distributions were compared across 10-min blocks of data. Treatment margins were calculated from both 10 and 30 min of data. Partial least squares (PLS) regression models of tumor positions as a function of external surrogate marker positions were created from the first 10 min of data in each fraction; the incidence of significant PLS model degradation was used to assess changes in the spatial relationship between tumors and surrogate markers. Results: The absolute change in mean tumor position from first to third 10-min blocks was >5 mm in 13% and 7% of lung and pancreas cases, respectively. Superior-inferior and medial-lateral differences in mean tumor position were significantly associated with the lobe of lung. In 61% and 54% of lung and pancreas fractions, respectively, margins calculated from 30 min of data were larger than margins calculated from 10 min of data. The change in treatment margin magnitude for superior-inferior motion was >1 mm in 42% of lung and 45% of pancreas fractions. Significantly increasing tumor position prediction model error (mean ± standard deviation rates of change of 1.6 ± 2.5 mm per 10 min) over 30 min indicated tumor-surrogate relationship changes in 63% of fractions. Conclusions: Both tumor motion and the relationship between tumor and respiratory surrogate displacements change in most treatment fractions for patient in-room time of 30 min.

  18. Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy

    Science.gov (United States)

    Nabizadeh, Nooshin; John, Nigel

    2014-03-01

    Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. Local image fitting energy obtains the local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has less computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains the sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust against the energy functional being trapped in a local minimum, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.

  19. The fitness landscape of HIV-1 gag: advanced modeling approaches and validation of model predictions by in vitro testing.

    Directory of Open Access Journals (Sweden)

    Jaclyn K Mann

    2014-08-01

    Full Text Available Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model), which is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple) and 25 single mutations within these, predicted to be either harmful to HIV-1 viability or fitness-neutral, were introduced into HIV-1 NL4-3 by site-directed mutagenesis and the replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = -0.74, p = 3.6×10⁻⁶) are strongly correlated, and this was further strengthened in the regularized Ising model (r = -0.83, p = 3.7×10⁻¹²). Performance of the Potts model (r = -0.73, p = 9.7×10⁻⁹) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion
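The Ising-type energy underlying such fitness predictions can be sketched as follows; the fields and couplings below are invented illustrative values, not inferred HIV-1 parameters:

```python
import numpy as np

# Illustrative (invented) fields and couplings; the real models infer these
# from HIV-1 Gag sequence statistics.
h = np.array([0.2, 1.1, 0.5])              # single-site mutational costs
J = np.array([[0.0, 0.4, 0.0],
              [0.4, 0.0, -0.2],
              [0.0, -0.2, 0.0]])           # pairwise couplings

def ising_energy(s):
    """Energy of a mutant; s[i] = 1 if site i is mutated, else 0.
    Higher energy corresponds to lower predicted fitness, matching the
    negative correlations reported above."""
    return h @ s + 0.5 * s @ J @ s

mutants = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [1, 0, 1]])
energies = np.array([ising_energy(s) for s in mutants])
```

Validation as in the paper would correlate these energies with measured replication capacities.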

  20. Econometric modelling of risk adverse behaviours of entrepreneurs in the provision of house fittings in China

    Directory of Open Access Journals (Sweden)

    Rita Yi Man Li

    2012-03-01

    Full Text Available Entrepreneurs have always born the risk of running their business. They reap a profit in return for their risk taking and work. Housing developers are no different. In many countries, such as Australia, the United Kingdom and the United States, they interpret the tastes of the buyers and provide the dwellings they develop with basic fittings such as floor and wall coverings, bathroom fittings and kitchen cupboards. In mainland China, however, in most of the developments, units or houses are sold without floor or wall coverings, kitchen or bathroom fittings. What is the motive behind this choice? This paper analyses the factors affecting housing developers’ decisions to provide fittings based on 1701 housing developments in Hangzhou, Chongqing and Hangzhou using a Probit model. The results show that developers build a higher proportion of bare units in mainland China when: (1) there is a shortage of housing; (2) land costs are high, so that the comparative costs of providing fittings become relatively low.
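A probit fit of a binary provide-fittings decision can be sketched by direct maximum likelihood; the covariates here are synthetic placeholders for the paper's development-level variables:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic stand-ins for the development-level covariates; y = 1 would mean
# "developer provides fittings".
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([-0.3, 1.2, -0.8])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

def neg_loglik(beta):
    """Probit negative log-likelihood: P(y=1|x) = Phi(x'beta)."""
    p = np.clip(norm.cdf(X @ beta), 1e-10, 1.0 - 1e-10)
    return -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

fit = minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
```

The signs of the fitted coefficients carry the economic interpretation (e.g. higher land cost raising the probability of providing fittings).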

  1. Anticipating mismatches of HIT investments: Developing a viability-fit model for e-health services.

    Science.gov (United States)

    Mettler, Tobias

    2016-01-01

    Despite massive investments in recent years, the impact of health information technology (HIT) has been controversial and strongly disputed by both research and practice. While many studies are concerned with the development of new or the refinement of existing measurement models for assessing the impact of HIT adoption (ex post), this study presents an initial attempt to better understand the factors affecting viability and fit of HIT and thereby underscores the importance of also having instruments for managing expectations (ex ante). We extend prior research by undertaking a more granular investigation into the theoretical assumptions of viability and fit constructs. In doing so, we use a mixed-methods approach, conducting qualitative focus group discussions and a quantitative field study to improve and validate a viability-fit measurement instrument. Our findings suggest two issues for research and practice. First, the results indicate that different stakeholders perceive HIT viability and fit of the same e-health services very unequally. Second, the analysis also demonstrates that there can be a great discrepancy between the organizational viability and individual fit of a particular e-health service. The findings of this study have a number of important implications such as for health policy making, HIT portfolios, and stakeholder communication. Copyright © 2015. Published by Elsevier Ireland Ltd.

  2. The regression-calibration method for fitting generalized linear models with additive measurement error

    OpenAIRE

    James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll

    2003-01-01

    This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
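The replicate-proxy case of regression calibration can be sketched as follows; a plain linear outcome model and synthetic data stand in for the GLMs the paper treats:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=n)                  # true covariate (never observed)
w1 = x + 0.5 * rng.normal(size=n)       # two error-prone replicate proxies
w2 = x + 0.5 * rng.normal(size=n)
y = 1.0 + 2.0 * x + 0.1 * rng.normal(size=n)

# Regression calibration: replace the unobserved x by its best linear
# prediction given the replicate mean, then fit the outcome model as usual.
wbar = (w1 + w2) / 2.0
err_var = np.var(w1 - w2) / 4.0          # error variance of the 2-replicate mean
lam = (np.var(wbar) - err_var) / np.var(wbar)   # reliability ratio
x_hat = wbar.mean() + lam * (wbar - wbar.mean())

slope_rc = np.polyfit(x_hat, y, 1)[0]    # calibrated slope, near the true 2.0
slope_naive = np.polyfit(wbar, y, 1)[0]  # attenuated toward zero
```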

  3. Fitness cost

    DEFF Research Database (Denmark)

    Nielsen, Karen L.; Pedersen, Thomas M.; Udekwu, Klas I.

    2012-01-01

    phage types, predominantly only penicillin resistant. We investigated whether isolates of this epidemic were associated with a fitness cost, and we employed a mathematical model to ask whether these fitness costs could have led to the observed reduction in frequency. Bacteraemia isolates of S. aureus from Denmark have been stored since 1957. We chose 40 S. aureus isolates belonging to phage complex 83A, clonal complex 8 based on spa type, ranging in time of isolation from 1957 to 1980 and with various antibiograms, including both methicillin-resistant and -susceptible isolates. The relative fitness of each isolate was determined in a growth competition assay with a reference isolate. Significant fitness costs of 215 were determined for the MRSA isolates studied. There was a significant negative correlation between number of antibiotic resistances and relative fitness. Multiple regression analysis...
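A growth-competition estimate of relative fitness is commonly computed as a ratio of realized (Malthusian) growth rates; the sketch below assumes that convention rather than the authors' exact protocol:

```python
import math

def relative_fitness(test_start, test_end, ref_start, ref_end):
    """Relative (Malthusian) fitness of a test isolate competed head-to-head
    against a reference: ratio of realized growth rates over the assay."""
    return math.log(test_end / test_start) / math.log(ref_end / ref_start)
```

A value below 1.0 indicates a fitness cost relative to the reference isolate.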

  4. WE-FG-206-12: Enhanced Laws Textures: A Potential MRI Surrogate Marker of Hepatic Fibrosis in a Murine Model

    International Nuclear Information System (INIS)

    Li, B; Yu, H; Jara, H; Soto, J; Anderson, S

    2016-01-01

    Purpose: To compare enhanced Laws texture derived from parametric proton density (PD) maps to other MRI-based surrogate markers (T2, PD, ADC) in assessing degrees of liver fibrosis in a murine model of hepatic fibrosis using an 11.7T scanner. Methods: This animal study was IACUC approved. Fourteen mice were divided into control (n=1) and experimental (n=13). The latter were fed a DDC-supplemented diet to induce hepatic fibrosis. Liver specimens were imaged using an 11.7T scanner; the parametric PD, T2, and ADC maps were generated from spin-echo pulsed field gradient and multi-echo spin-echo acquisitions. Enhanced Laws texture analysis was applied to the PD maps: first, hepatic blood vessels and liver margins were segmented/removed using an automated dual-clustering algorithm; secondly, an optimal thresholding algorithm was applied to reduce the partial volume artifact; next, mean and stdev were corrected to minimize grayscale variation across images; finally, Laws texture was extracted. Degrees of fibrosis were assessed by an experienced pathologist and digital image analysis (%Area Fibrosis). Scatterplots comparing enhanced Laws texture, T2, PD, and ADC values to degrees of fibrosis were generated and correlation coefficients were calculated. Unenhanced Laws texture was also compared to assess the effectiveness of the proposed enhancements. Results: Hepatic fibrosis and the enhanced Laws texture were strongly correlated with higher %Area Fibrosis associated with higher Laws texture (r=0.89). Only a moderate correlation was detected between %Area Fibrosis and unenhanced Laws texture (r=0.70). Strong correlation also existed between ADC and %Area Fibrosis (r=0.86). Moderate correlations were seen between %Area Fibrosis and PD (r=0.65) and T2 (r=0.66). Conclusions: Higher degrees of hepatic fibrosis are associated with increased Laws texture. The proposed enhancements improve the accuracy of Laws texture. Enhanced Laws texture features are more accurate than PD and T2 in
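The core Laws texture-energy step can be sketched as follows; the vessel segmentation, optimal thresholding, and grayscale normalization enhancements described above are omitted:

```python
import numpy as np
from scipy.signal import convolve2d

# Classic 1-D Laws vectors: level, edge, spot
L5 = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
E5 = np.array([-1.0, -2.0, 0.0, 2.0, 1.0])
S5 = np.array([-1.0, 0.0, 2.0, 0.0, -1.0])

def laws_energy(img, v1, v2, win=15):
    """Laws texture energy: convolve with the 2-D mask outer(v1, v2), then
    take a local mean of the absolute response."""
    mask = np.outer(v1, v2)
    resp = convolve2d(img - img.mean(), mask, mode="same", boundary="symm")
    box = np.ones((win, win)) / float(win * win)
    return convolve2d(np.abs(resp), box, mode="same", boundary="symm")

# Textured regions score higher than flat ones
rng = np.random.default_rng(2)
img = np.zeros((64, 64))
img[:, 32:] = rng.normal(size=(64, 32))   # textured right half, flat left half
energy = laws_energy(img, E5, L5)
```

Other mask combinations (e.g. S5 with L5) respond to different texture structures; fibrotic tissue would show up as elevated energy.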

  5. WE-FG-206-12: Enhanced Laws Textures: A Potential MRI Surrogate Marker of Hepatic Fibrosis in a Murine Model

    Energy Technology Data Exchange (ETDEWEB)

    Li, B; Yu, H; Jara, H; Soto, J; Anderson, S [Boston University Medical Center, Boston, MA (United States)

    2016-06-15

    Purpose: To compare enhanced Laws texture derived from parametric proton density (PD) maps to other MRI-based surrogate markers (T2, PD, ADC) in assessing degrees of liver fibrosis in a murine model of hepatic fibrosis using an 11.7T scanner. Methods: This animal study was IACUC approved. Fourteen mice were divided into control (n=1) and experimental (n=13). The latter were fed a DDC-supplemented diet to induce hepatic fibrosis. Liver specimens were imaged using an 11.7T scanner; the parametric PD, T2, and ADC maps were generated from spin-echo pulsed field gradient and multi-echo spin-echo acquisitions. Enhanced Laws texture analysis was applied to the PD maps: first, hepatic blood vessels and liver margins were segmented/removed using an automated dual-clustering algorithm; secondly, an optimal thresholding algorithm was applied to reduce the partial volume artifact; next, mean and stdev were corrected to minimize grayscale variation across images; finally, Laws texture was extracted. Degrees of fibrosis were assessed by an experienced pathologist and digital image analysis (%Area Fibrosis). Scatterplots comparing enhanced Laws texture, T2, PD, and ADC values to degrees of fibrosis were generated and correlation coefficients were calculated. Unenhanced Laws texture was also compared to assess the effectiveness of the proposed enhancements. Results: Hepatic fibrosis and the enhanced Laws texture were strongly correlated with higher %Area Fibrosis associated with higher Laws texture (r=0.89). Only a moderate correlation was detected between %Area Fibrosis and unenhanced Laws texture (r=0.70). Strong correlation also existed between ADC and %Area Fibrosis (r=0.86). Moderate correlations were seen between %Area Fibrosis and PD (r=0.65) and T2 (r=0.66). Conclusions: Higher degrees of hepatic fibrosis are associated with increased Laws texture. The proposed enhancements improve the accuracy of Laws texture. Enhanced Laws texture features are more accurate than PD and T2 in

  6. A flexible, interactive software tool for fitting the parameters of neuronal models.

    Science.gov (United States)

    Friedrich, Péter; Vella, Michael; Gulyás, Attila I; Freund, Tamás F; Káli, Szabolcs

    2014-01-01

    The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.
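A toy version of such a parameter fit, with a passive-membrane step response standing in for a NEURON simulation, can be sketched as:

```python
import numpy as np
from scipy.optimize import curve_fit

# A passive membrane step response stands in for a simulated neuron; R (input
# resistance) and tau (membrane time constant) are the parameters to fit.
t = np.linspace(0.0, 100.0, 201)     # ms
I_inj, v_rest = 0.1, -70.0           # nA, mV

def step_response(t, R, tau):
    return v_rest + R * I_inj * (1.0 - np.exp(-t / tau))

target = step_response(t, 150.0, 20.0)   # the "recorded" trace to match
(R_fit, tau_fit), _ = curve_fit(step_response, t, target, p0=[50.0, 5.0])
```

Tools like Optimizer automate exactly this loop, substituting simulator output for `step_response` and offering a choice of cost functions and global optimizers.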

  7. A flexible, interactive software tool for fitting the parameters of neuronal models

    Directory of Open Access Journals (Sweden)

    Péter Friedrich

    2014-07-01

    Full Text Available The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.

  8. Perinatal outcomes after natural conception versus in vitro fertilization (IVF) in gestational surrogates: a model to evaluate IVF treatment versus maternal effects.

    Science.gov (United States)

    Woo, Irene; Hindoyan, Rita; Landay, Melanie; Ho, Jacqueline; Ingles, Sue Ann; McGinnis, Lynda K; Paulson, Richard J; Chung, Karine

    2017-12-01

    To compare perinatal outcomes between singleton live births achieved with the use of commissioned versus spontaneously conceived embryos carried by the same gestational surrogate. Retrospective cohort study. Academic in vitro fertilization center. Gestational surrogates. None. Pregnancy outcome, gestational age at birth, birth weight, perinatal complications. We identified 124 gestational surrogates who achieved a total of 494 pregnancies. Pregnancy outcomes for surrogate and spontaneous pregnancies were significantly different, with surrogate pregnancies more likely to result in twin pregnancies: 33% vs. 1%. Miscarriage and ectopic rates were similar. Of these pregnancies, there were 352 singleton live births: 103 achieved from commissioned embryos and 249 conceived spontaneously. Surrogate births had a lower mean gestational age at delivery (38.8 ± 2.1 vs. 39.7 ± 1.4 weeks), higher rates of preterm birth (10.7% vs. 3.1%), and higher rates of low birth weight (7.8% vs. 2.4%). Neonates from surrogacy had birth weights that were, on average, 105 g lower. Surrogate births had significantly more obstetrical complications, including gestational diabetes, hypertension, use of amniocentesis, placenta previa, antibiotic requirement during labor, and cesarean section. Neonates born from commissioned embryos and carried by gestational surrogates have increased adverse perinatal outcomes, including preterm birth, low birth weight, hypertension, maternal gestational diabetes, and placenta previa, compared with singletons conceived spontaneously and carried by the same woman. Our data suggest that assisted reproductive procedures may affect embryo quality and that this negative impact cannot be overcome even by a proven healthy uterine environment. Copyright © 2017 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  9. Assessment of the 3He pressure inside the CABRI transient rods - Development of a surrogate model based on measurements and complementary CFD calculations

    Science.gov (United States)

    Clamens, Olivier; Lecerf, Johann; Hudelot, Jean-Pascal; Duc, Bertrand; Cadiou, Thierry; Blaise, Patrick; Biard, Bruno

    2018-01-01

    CABRI is an experimental pulse reactor, funded by the French Nuclear Safety and Radioprotection Institute (IRSN) and operated by CEA at the Cadarache research center. It is designed to study fuel behavior under RIA (reactivity-initiated accident) conditions. In order to produce the power transients, reactivity is injected by depressurization of a neutron absorber (3He) contained in transient rods inside the reactor core. The shape of a power transient depends on the total amount of reactivity injected and on the injection speed. The injected reactivity can be calculated by converting the 3He gas density into units of reactivity, so it is of utmost importance to properly characterize the gas density evolution in the transient rods during a power transient. The 3He depressurization was studied by CFD calculations, complemented by measurements using pressure transducers. The CFD calculations show that the density evolution is slower than the pressure drop. Surrogate models were built from the CFD calculations and validated against preliminary tests in the CABRI transient system. The studies also show that the depressurization is harder to predict during the power transients because neutron/3He capture reactions heat the gas. This phenomenon can be studied by a multiphysics approach: reaction rates are calculated with a Monte Carlo code, and the resulting heating effect is then studied with the validated CFD simulation.
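As a hedged illustration of the surrogate-model idea (not the actual CABRI models), the sketch below fits a cheap analytic surrogate to a handful of samples from an assumed exponential depressurization curve standing in for expensive CFD runs. The density law, time constants, and sample counts are invented:

```python
import numpy as np

# Stand-in for expensive CFD runs: 3He density during depressurization,
# assumed here to decay exponentially, rho(t) = rho0 * exp(-t/tau).
def cfd_density(t, rho0=1.0, tau=0.08):
    return rho0 * np.exp(-t / tau)

# Sample a handful of "CFD" points and fit a cheap surrogate in log
# space, where the assumed exponential becomes a straight line.
t_samples = np.linspace(0.0, 0.3, 8)
rho_samples = cfd_density(t_samples)
coeffs = np.polyfit(t_samples, np.log(rho_samples), 1)

def surrogate_density(t):
    # Instant-to-evaluate replacement for the full CFD calculation.
    return np.exp(np.polyval(coeffs, t))

# Check the surrogate against the "CFD" curve on a dense grid.
t_query = np.linspace(0.0, 0.3, 50)
max_err = np.max(np.abs(surrogate_density(t_query) - cfd_density(t_query)))
```

Real surrogates for this problem would be trained on many CFD runs over a parameter range (initial pressure, valve timing) and validated against transducer measurements, as the abstract describes.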

  10. Assessment of the 3He pressure inside the CABRI transient rods - Development of a surrogate model based on measurements and complementary CFD calculations

    Directory of Open Access Journals (Sweden)

    Clamens Olivier

    2018-01-01

    Full Text Available CABRI is an experimental pulse reactor, funded by the French Nuclear Safety and Radioprotection Institute (IRSN) and operated by CEA at the Cadarache research center. It is designed to study fuel behavior under RIA (reactivity-initiated accident) conditions. In order to produce the power transients, reactivity is injected by depressurization of a neutron absorber (3He) contained in transient rods inside the reactor core. The shape of a power transient depends on the total amount of reactivity injected and on the injection speed. The injected reactivity can be calculated by converting the 3He gas density into units of reactivity, so it is of utmost importance to properly characterize the gas density evolution in the transient rods during a power transient. The 3He depressurization was studied by CFD calculations, complemented by measurements using pressure transducers. The CFD calculations show that the density evolution is slower than the pressure drop. Surrogate models were built from the CFD calculations and validated against preliminary tests in the CABRI transient system. The studies also show that the depressurization is harder to predict during the power transients because neutron/3He capture reactions heat the gas. This phenomenon can be studied by a multiphysics approach: reaction rates are calculated with a Monte Carlo code, and the resulting heating effect is then studied with the validated CFD simulation.

  11. The fitting parameters extraction of conversion model of the low dose rate effect in bipolar devices

    International Nuclear Information System (INIS)

    Bakerenkov, Alexander

    2011-01-01

    Enhanced Low Dose Rate Sensitivity (ELDRS) in bipolar devices manifests as an increase in the base current degradation of NPN and PNP transistors as the dose rate is decreased. As a result of almost 20 years of study, several physical models of the effect have been developed and described in detail, and accelerated test methods based on these models are used in standards. A conversion model of the effect, which describes the inverse S-shaped dependence of the excess base current on dose rate, has been proposed. This paper presents the extraction of the fitting parameters of the conversion model.
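The inverse S-shaped dependence of excess base current on dose rate invites a standard nonlinear fit for parameter extraction. The logistic form, parameter names, and numbers below are assumptions for illustration, not the conversion model from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed stand-in for the conversion model: excess base current vs.
# dose rate follows an inverse S-shape, modeled here as a logistic
# function of log10(dose rate). K_max (plateau), x0 (midpoint) and
# s (slope) are the fitting parameters to be extracted.
def excess_base_current(log_rate, K_max, x0, s):
    return K_max / (1.0 + np.exp((log_rate - x0) / s))

log_rates = np.linspace(-3, 2, 20)                     # log10 of dose rate
data = excess_base_current(log_rates, 5.0, -0.5, 0.6)  # synthetic "measurements"

# Nonlinear least-squares extraction of the fitting parameters.
popt, _ = curve_fit(excess_base_current, log_rates, data, p0=(1.0, 0.0, 1.0))
K_fit, x0_fit, s_fit = popt
```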

  12. Validation of the inverse pulse wave transit time series as surrogate of systolic blood pressure in MVAR modeling.

    Science.gov (United States)

    Giassi, Pedro; Okida, Sergio; Oliveira, Maurício G; Moraes, Raimes

    2013-11-01

    Short-term cardiovascular regulation mediated by the sympathetic and parasympathetic branches of the autonomic nervous system has been investigated by multivariate autoregressive (MVAR) modeling, providing insightful analysis. MVAR models employ, as inputs, heart rate (HR), systolic blood pressure (SBP) and respiratory waveforms. ECG (from which HR series is obtained) and respiratory flow waveform (RFW) can be easily sampled from the patients. Nevertheless, the available methods for acquisition of beat-to-beat SBP measurements during exams hamper the wider use of MVAR models in clinical research. Recent studies show an inverse correlation between pulse wave transit time (PWTT) series and SBP fluctuations. PWTT is the time interval between the ECG R-wave peak and photoplethysmography waveform (PPG) base point within the same cardiac cycle. This study investigates the feasibility of using inverse PWTT (IPWTT) series as an alternative input to SBP for MVAR modeling of the cardiovascular regulation. For that, HR, RFW, and IPWTT series acquired from volunteers during postural changes and autonomic blockade were used as input of MVAR models. Obtained results show that IPWTT series can be used as input of MVAR models, replacing SBP measurements in order to overcome practical difficulties related to the continuous sampling of the SBP during clinical exams.
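The PWTT construction described above — the interval from each ECG R-peak to the PPG base point within the same cardiac cycle, then inverted beat by beat — can be sketched directly. The beat times below are invented for illustration:

```python
import numpy as np

# Illustrative event times, in seconds (not real recordings).
r_peaks = np.array([0.00, 0.82, 1.65, 2.46])    # ECG R-peak times
ppg_feet = np.array([0.21, 1.04, 1.85, 2.69])   # PPG base-point times

# PWTT per beat: for each R-peak, take the first PPG foot that follows
# it within the same cardiac cycle.
pwtt = np.array([ppg_feet[ppg_feet > r][0] - r for r in r_peaks])

# Inverse PWTT series, the proposed surrogate for beat-to-beat SBP,
# usable as an MVAR model input alongside HR and respiration.
ipwtt = 1.0 / pwtt
```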

  13. The effect of measurement quality on targeted structural model fit indices: A comment on Lance, Beck, Fan, and Carter (2016).

    Science.gov (United States)

    McNeish, Daniel; Hancock, Gregory R

    2018-03-01

    Lance, Beck, Fan, and Carter (2016) recently advanced 6 new fit indices and associated cutoff values for assessing data-model fit in the structural portion of traditional latent variable path models. The authors appropriately argued that, although most researchers' theoretical interest rests with the latent structure, they still rely on indices of global model fit that simultaneously assess both the measurement and structural portions of the model. As such, Lance et al. proposed indices intended to assess the structural portion of the model in isolation of the measurement model. Unfortunately, although these strategies separate the assessment of the structure from the fit of the measurement model, they do not isolate the structure's assessment from the quality of the measurement model. That is, even with a perfectly fitting measurement model, poorer quality (i.e., less reliable) measurements will yield a more favorable verdict regarding structural fit, whereas better quality (i.e., more reliable) measurements will yield a less favorable structural assessment. This phenomenon, referred to by Hancock and Mueller (2011) as the reliability paradox, affects not only traditional global fit indices but also those structural indices proposed by Lance et al. as well. Fortunately, as this comment will clarify, indices proposed by Hancock and Mueller help to mitigate this problem and allow the structural portion of the model to be assessed independently of both the fit of the measurement model as well as the quality of indicator variables contained therein. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  14. Two Aspects of the Simplex Model: Goodness of Fit to Linear Growth Curve Structures and the Analysis of Mean Trends.

    Science.gov (United States)

    Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.

    1994-01-01

    Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)

  15. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan

    2011-01-06

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) computation for fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. 
Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  16. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan; Krebs-Smith, Susan M.; Midthune, Douglas; Perez, Adriana; Buckman, Dennis W.; Kipnis, Victor; Freedman, Laurence S.; Dodd, Kevin W.; Carroll, Raymond J

    2011-01-01

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) computation for fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. 
Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  17. THE HERSCHEL ORION PROTOSTAR SURVEY: SPECTRAL ENERGY DISTRIBUTIONS AND FITS USING A GRID OF PROTOSTELLAR MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Furlan, E. [Infrared Processing and Analysis Center, California Institute of Technology, 770 S. Wilson Ave., Pasadena, CA 91125 (United States); Fischer, W. J. [Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771 (United States); Ali, B. [Space Science Institute, 4750 Walnut Street, Boulder, CO 80301 (United States); Stutz, A. M. [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Stanke, T. [ESO, Karl-Schwarzschild-Strasse 2, D-85748 Garching bei München (Germany); Tobin, J. J. [National Radio Astronomy Observatory, Charlottesville, VA 22903 (United States); Megeath, S. T.; Booker, J. [Ritter Astrophysical Research Center, Department of Physics and Astronomy, University of Toledo, 2801 W. Bancroft Street, Toledo, OH 43606 (United States); Osorio, M. [Instituto de Astrofísica de Andalucía, CSIC, Camino Bajo de Huétor 50, E-18008 Granada (Spain); Hartmann, L.; Calvet, N. [Department of Astronomy, University of Michigan, 500 Church Street, Ann Arbor, MI 48109 (United States); Poteet, C. A. [New York Center for Astrobiology, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY 12180 (United States); Manoj, P. [Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India); Watson, D. M. [Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627 (United States); Allen, L., E-mail: furlan@ipac.caltech.edu [National Optical Astronomy Observatory, 950 N. Cherry Avenue, Tucson, AZ 85719 (United States)

    2016-05-01

    We present key results from the Herschel Orion Protostar Survey: spectral energy distributions (SEDs) and model fits of 330 young stellar objects, predominantly protostars, in the Orion molecular clouds. This is the largest sample of protostars studied in a single, nearby star formation complex. With near-infrared photometry from 2MASS, mid- and far-infrared data from Spitzer and Herschel, and submillimeter photometry from APEX, our SEDs cover 1.2–870 μm and sample the peak of the protostellar envelope emission at ∼100 μm. Using mid-IR spectral indices and bolometric temperatures, we classify our sample into 92 Class 0 protostars, 125 Class I protostars, 102 flat-spectrum sources, and 11 Class II pre-main-sequence stars. We implement a simple protostellar model (including a disk in an infalling envelope with outflow cavities) to generate a grid of 30,400 model SEDs and use it to determine the best-fit model parameters for each protostar. We argue that far-IR data are essential for accurate constraints on protostellar envelope properties. We find that most protostars, and in particular the flat-spectrum sources, are well fit. The median envelope density and median inclination angle decrease from Class 0 to Class I to flat-spectrum protostars, despite the broad range in best-fit parameters in each of the three categories. We also discuss degeneracies in our model parameters. Our results confirm that the different protostellar classes generally correspond to an evolutionary sequence with a decreasing envelope infall rate, but the inclination angle also plays a role in the appearance, and thus interpretation, of the SEDs.
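Grid-based SED fitting of this kind reduces, at its core, to selecting the grid model that minimizes chi-square against the observed photometry. The fluxes and uncertainties below are invented; the survey's actual grid contains 30,400 model SEDs:

```python
import numpy as np

# Observed photometry per band (arbitrary illustrative numbers).
observed = np.array([1.2, 3.4, 5.0, 2.1])   # fluxes
sigma = np.array([0.1, 0.2, 0.3, 0.2])      # 1-sigma uncertainties

# Tiny stand-in for the model grid: rows are models, columns are bands.
model_grid = np.array([
    [1.0, 3.0, 4.5, 2.5],
    [1.2, 3.5, 5.1, 2.0],
    [2.0, 2.0, 6.0, 1.0],
])

# Chi-square of each model against the observations; the best-fit
# model is simply the minimizer over the grid.
chi2 = np.sum(((model_grid - observed) / sigma) ** 2, axis=1)
best = int(np.argmin(chi2))
```

Degeneracies of the kind the paper discusses show up as multiple grid models with nearly equal chi-square.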

  18. Testing the goodness of fit of selected infiltration models on soils with different land use histories

    International Nuclear Information System (INIS)

    Mbagwu, J.S.C.

    1993-10-01

    Six infiltration models, some obtained by reformulating the fitting parameters of the classical Kostiakov (1932) and Philip (1957) equations, were investigated for their ability to describe water infiltration into highly permeable sandy soils from the Nsukka plains of SE Nigeria. The models were Kostiakov, Modified Kostiakov (A), Modified Kostiakov (B), Philip, Modified Philip (A) and Modified Philip (B). Infiltration data were obtained from double ring infiltrometers on field plots established on a Kandic Paleustult (Nkpologu series) to investigate the effects of land use on soil properties and maize yield. The treatments were: (i) tilled-mulched (TM), (ii) tilled-unmulched (TU), (iii) untilled-mulched (UM), (iv) untilled-unmulched (UU) and (v) continuous pasture (CP). Cumulative infiltration was highest on the TM and lowest on the CP plots. All estimated model parameters obtained by the best fit of measured data differed significantly among the treatments. Based on the magnitude of R² values, the Kostiakov, Modified Kostiakov (A), Philip and Modified Philip (A) models provided the best predictions of cumulative infiltration as a function of time. Comparing experimental with model-predicted cumulative infiltration showed, however, that on all treatments the values predicted by the classical Kostiakov, Philip and Modified Philip (A) models deviated most from the experimental data. The other models produced values that agreed very well with the measured data. Considering the ease of determining the fitting parameters, it is proposed that on soils with high infiltration rates, either the Modified Kostiakov model (I = Kt^a + I_c t) or the Modified Philip model (I = St^(1/2) + I_c t), where I is cumulative infiltration, K the time coefficient, t the time elapsed, a the time exponent, I_c the equilibrium infiltration rate, and S the soil water sorptivity, be used for routine characterization of the infiltration process. (author). 33 refs, 3 figs, 6 tabs
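The Modified Kostiakov form I = Kt^a + I_c·t can be fitted to cumulative infiltration data with standard nonlinear least squares, and the R² criterion used to rank the models follows directly. The data values and starting guesses below are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

# Modified Kostiakov model: I(t) = K*t**a + Ic*t, where I is cumulative
# infiltration, K the time coefficient, a the time exponent, and Ic the
# equilibrium infiltration rate.
def modified_kostiakov(t, K, a, Ic):
    return K * t**a + Ic * t

t = np.linspace(0.1, 120.0, 40)                   # elapsed time, minutes
I_obs = modified_kostiakov(t, 2.5, 0.45, 0.08)    # synthetic "measured" data

# Best-fit parameters by nonlinear least squares.
popt, _ = curve_fit(modified_kostiakov, t, I_obs, p0=(1.0, 0.5, 0.05))
K_fit, a_fit, Ic_fit = popt

# Goodness of fit via R², the criterion used to rank the models.
ss_res = np.sum((I_obs - modified_kostiakov(t, *popt)) ** 2)
ss_tot = np.sum((I_obs - I_obs.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```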

  19. Estimation and prediction of maximum daily rainfall at Sagar Island using best fit probability models

    Science.gov (United States)

    Mandal, S.; Choudhury, B. U.

    2015-07-01

    Sagar Island, situated on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best-fit distribution models for the annual, seasonal and monthly time series based on maximum rank with minimum value of test statistics, three statistical goodness-of-fit tests were employed, viz. the Kolmogorov-Smirnov test (K-S), the Anderson-Darling test (A²) and the Chi-square test (χ²). The best-fit probability distribution was then identified from the highest overall score obtained from the three goodness-of-fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon and summer season MDR, while lognormal, Weibull and Pearson 5 were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3 % levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85 %) for MDR of >100 mm and moderate probabilities (37 to 46 %) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
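The model-selection procedure — fit each candidate distribution, score it with a goodness-of-fit statistic, and keep the best — can be sketched with the Kolmogorov-Smirnov test alone. The candidate set and synthetic rainfall data below are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy import stats

# Synthetic maximum-daily-rainfall values (mm), drawn lognormal so a
# skewed candidate should win the ranking.
rng = np.random.default_rng(42)
mdr = rng.lognormal(mean=4.0, sigma=0.4, size=300)

# Candidate distributions (a subset of the six used in the study).
candidates = {"norm": stats.norm, "lognorm": stats.lognorm,
              "weibull_min": stats.weibull_min}

# Fit each candidate by maximum likelihood, then score it with the
# K-S statistic; smaller is better.
scores = {}
for name, dist in candidates.items():
    params = dist.fit(mdr)
    scores[name] = stats.kstest(mdr, name, args=params).statistic

best_fit = min(scores, key=scores.get)
```

The study additionally ranks candidates with Anderson-Darling and Chi-square statistics and combines the three ranks into an overall score; the selection step is the same.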

  20. Efficient Constrained Local Model Fitting for Non-Rigid Face Alignment.

    Science.gov (United States)

    Lucey, Simon; Wang, Yang; Cox, Mark; Sridharan, Sridha; Cohn, Jeffrey F

    2009-11-01

    Active appearance models (AAMs) have demonstrated great utility when being employed for non-rigid face alignment/tracking. The "simultaneous" algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real time performance (2-3 fps). The "project-out" algorithm for fitting an AAM achieves faster than real time performance (> 200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve superior performance to the "simultaneous" AAM algorithm along with real time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criteria, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criteria for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the "exhaustive local search" (ELS) algorithm. Experiments were conducted on the CMU Multi-PIE database.

  1. Development and Analysis of Volume Multi-Sphere Method Model Generation using Electric Field Fitting

    Science.gov (United States)

    Ingram, G. J.

    Electrostatic modeling of spacecraft has wide-reaching applications such as detumbling space debris in the Geosynchronous Earth Orbit regime before docking, servicing and tugging space debris to graveyard orbits, and Lorentz-augmented orbits. The viability of electrostatic actuation control applications relies on faster-than-real-time characterization of the electrostatic interaction. The Volume Multi-Sphere Method (VMSM) seeks the optimal placement and radii of a small number of equipotential spheres to accurately model the electrostatic force and torque on a conducting space object. Current VMSM models, tuned using force and torque comparisons with commercially available finite element software, are sensitive to the modeled probe size and to numerical errors of the software. This work first investigates fitting of VMSM models to Surface-MSM (SMSM) generated electric field data, removing the modeling dependence on probe geometry while significantly increasing performance and speed. A proposed electric field matching cost function is compared to a force and torque cost function, the inclusion of a self-capacitance constraint is explored, and 4 degree-of-freedom VMSM models generated using electric field matching are investigated. The resulting E-field based VMSM development framework is illustrated on a box-shaped hub with a single solar panel, and convergence properties of select models are qualitatively analyzed. Despite the complex non-symmetric spacecraft geometry, elegantly simple 2-sphere VMSM solutions provide force and torque fits within a few percent.

  2. Using the Flipchem Photochemistry Model When Fitting Incoherent Scatter Radar Data

    Science.gov (United States)

    Reimer, A. S.; Varney, R. H.

    2017-12-01

    The North face Resolute Bay Incoherent Scatter Radar (RISR-N) routinely images the dynamics of the polar ionosphere, providing measurements of the plasma density, electron temperature, ion temperature, and line-of-sight velocity with seconds-to-minutes time resolution. RISR-N does not directly measure ionospheric parameters but backscattered signals, recording them as voltage samples. Using signal processing techniques, radar autocorrelation functions (ACFs) are estimated from the voltage samples. A model of the signal ACF is then fitted to the estimated ACF using non-linear least-squares techniques to obtain the best-fit ionospheric parameters. The signal model, and therefore the fitted parameters, depend on the ionospheric ion composition that is used [e.g. Zettergren et al. (2010), Zou et al. (2017)]. The software used to process RISR-N ACF data includes the "flipchem" model, an ion photochemistry model developed by Richards [2011] that was adapted from the Field Line Interhemispheric Plasma (FLIP) model. Flipchem requires neutral densities, neutral temperatures, electron density, ion temperature, electron temperature, solar zenith angle, and F10.7 as inputs to compute ion densities, which are input to the signal model. A description of how the flipchem model is used in the RISR-N fitting software will be presented. Additionally, a statistical comparison of the fitted electron density, ion temperature, electron temperature, and velocity obtained using a flipchem ionosphere, a pure O+ ionosphere, and a Chapman O+ ionosphere will be presented. The comparison covers nearly two years of RISR-N data (April 2015 - December 2016). Richards, P. G. (2011), Reexamination of ionospheric photochemistry, J. Geophys. Res., 116, A08307, doi:10.1029/2011JA016613. Zettergren, M., Semeter, J., Burnett, B., Oliver, W., Heinselman, C., Blelly, P.-L., and Diaz, M.: Dynamic variability in F-region ionospheric composition at auroral arc boundaries, Ann. Geophys., 28, 651-664, https

  3. Building Customer Churn Prediction Models in Fitness Industry with Machine Learning Methods

    OpenAIRE

    Shan, Min

    2017-01-01

    With the rapid growth of digital systems, churn management has become a major focus within customer relationship management in many industries. Ample research has been conducted on churn prediction in different industries with various machine learning methods. This thesis aims to combine feature selection and supervised machine learning methods to define churn prediction models and apply them to the fitness industry. Forward selection is chosen as the feature selection method. Support Vector ...
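Forward selection as described — greedily adding, at each step, the feature that most improves a score — can be sketched as follows. The thesis pairs it with an SVM; here a simple least-squares R² scorer and synthetic churn data stand in for that setup:

```python
import numpy as np

# Synthetic data: 5 candidate features, of which only columns 1 and 3
# actually carry the (churn-like) signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 1] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=200)

def score(cols):
    # Stand-in scorer: R² of a least-squares fit on the chosen columns.
    # In the thesis setting this would be a cross-validated SVM score.
    A = np.column_stack([X[:, c] for c in cols] + [np.ones(len(y))])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return 1.0 - resid.var() / y.var()

# Greedy forward selection: start empty, add the best feature each round.
selected = []
for _ in range(2):
    best = max((c for c in range(X.shape[1]) if c not in selected),
               key=lambda c: score(selected + [c]))
    selected.append(best)
```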

  4. Hair length, facial attractiveness, personality attribution: A multiple fitness model of hairdressing

    OpenAIRE

    Bereczkei, Tamas; Mesko, Norbert

    2007-01-01

    Multiple Fitness Model states that attractiveness varies across multiple dimensions, with each feature representing a different aspect of mate value. In the present study, male raters judged the attractiveness of young females with neotenous and mature facial features, with various hair lengths. Results revealed that the physical appearance of long-haired women was rated high, regardless of their facial attractiveness being valued high or low. Women rated as most attractive were those whose f...

  5. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    Science.gov (United States)

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  6. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    Directory of Open Access Journals (Sweden)

    Jinwei Wang

    2014-01-01

    Full Text Available The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia’s GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  7. Timing of blunt force injuries in long bones: the effects of the environment, PMI length and human surrogate model.

    Science.gov (United States)

    Coelho, Luís; Cardoso, Hugo F V

    2013-12-10

    Timing of blunt force trauma in human bone is a critical forensic issue, but there is limited knowledge on how different environmental conditions, the duration of postmortem interval (PMI), different bone types and different animal models influence fracture morphology. This study aims at evaluating the influence of the type of postmortem environment and the duration of the postmortem period on fracture morphology, for distinguishing perimortem from postmortem fractures on different types of long bones from different species. Fresh limb segments from pig and goat were sequentially left to decompose, under 3 different environmental circumstances (surface, buried and submerged), resulting in sets with different PMI lengths (0, 28, 56, 84, 112, 140, 168 and 196 days), which were then fractured. Fractured bones (total=325; pig tibia=110; pig fibula=110; goat metatarsals=105) were classified according to the Fracture Freshness Index (FFI). Climatic data for the experiment location was collected. Statistical analysis included descriptive statistics, correlation analysis between FFI and PMI, Mann-Whitney U tests comparing FFI medians for different PMI's and linear regression analysis using PMI, pluviosity and temperature as predictors for FFI. Surface samples presented increases in FFI with increasing PMI, with positive correlations for all bone types. The same results were observed in submerged samples, except for pig tibia. Median FFI values for surface samples could distinguish bones with PMI=0 days from PMI≥56 days. Buried samples presented no significant correlation between FFI and PMI, and nonsignificant regression models. Regression analysis of surface and submerged samples suggested differences in FFI variation with PMI between bone types, although without statistical significance. Adding climatic data to surface regression models resulted in PMI no longer predicting FFI. When comparing different animal models, linear regressions suggested greater increases in

  8. Measuring fit of sequence data to phylogenetic model: gain of power using marginal tests.

    Science.gov (United States)

    Waddell, Peter J; Ota, Rissa; Penny, David

    2009-10-01

    Testing fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (Unended quest: an intellectual autobiography. Fontana, London, 1976) once argued that evolutionary biology was unscientific as its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (Nature 297:197-200, 1982) to the present. We compare the general log-likelihood ratio statistic (the G or G² statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (P ≈ 0.5), but the marginalized tests do. Tests on pairwise frequency (F) matrices strongly (P < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (P < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa; not necessarily those expected from examining other regions of the genome. By marginalizing the 4^t patterns of the i.i.d. model to observed and expected parsimony counts, that is, from constant sites, to singletons, to parsimony-informative characters of a minimum possible length, the likelihood ratio test regains power, and it too rejects the evolutionary model with P < 0.001. Given such behavior over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published trees may really be far larger than the analytical methods (e.g., bootstrap) report.
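
    The log-likelihood-ratio (G) statistic compared in this abstract has a direct computational form, G = 2 Σ O ln(O/E). The sketch below uses invented four-cell counts, not the placental mammal data, purely to show the mechanics.

```python
import numpy as np
from scipy.stats import chi2

def g_statistic(observed, expected):
    """Log-likelihood-ratio (G, a.k.a. G^2) goodness-of-fit statistic:
    G = 2 * sum(O * ln(O / E)), summed over cells with O > 0."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    mask = observed > 0
    return 2.0 * np.sum(observed[mask] * np.log(observed[mask] / expected[mask]))

# Toy example: observed pattern counts vs. model-predicted counts
obs = np.array([48, 30, 12, 10])
exp = np.array([50, 25, 15, 10])
G = g_statistic(obs, exp)
p_value = chi2.sf(G, df=len(obs) - 1)   # asymptotic chi-square reference
```

    Asymptotically G follows a chi-square distribution under the model, which is what gives the P-values quoted in the abstract their meaning.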

  9. UROX 2.0: an interactive tool for fitting atomic models into electron-microscopy reconstructions

    International Nuclear Information System (INIS)

    Siebert, Xavier; Navaza, Jorge

    2009-01-01

    UROX is software designed for the interactive fitting of atomic models into electron-microscopy reconstructions. The main features of the software are presented, along with a few examples. Electron microscopy of a macromolecular structure can lead to three-dimensional reconstructions with resolutions that are typically in the 30–10 Å range and sometimes even beyond 10 Å. Fitting atomic models of the individual components of the macromolecular structure (e.g. those obtained by X-ray crystallography or nuclear magnetic resonance) into an electron-microscopy map allows the interpretation of the latter at near-atomic resolution, providing insight into the interactions between the components. Graphical software is presented that was designed for the interactive fitting and refinement of atomic models into electron-microscopy reconstructions. Several characteristics enable it to be applied over a wide range of cases and resolutions. Firstly, calculations are performed in reciprocal space, which results in fast algorithms. This allows the entire reconstruction (or at least a sizeable portion of it) to be used by taking into account the symmetry of the reconstruction both in the calculations and in the graphical display. Secondly, atomic models can be placed graphically in the map while the correlation between the model-based electron density and the electron-microscopy reconstruction is computed and displayed in real time. The positions and orientations of the models are refined by a least-squares minimization. Thirdly, normal-mode calculations can be used to simulate conformational changes between the atomic model of an individual component and its corresponding density within a macromolecular complex determined by electron microscopy. These features are illustrated using three practical cases with different symmetries and resolutions. The software, together with examples and user instructions, is available free of charge at http://mem.ibs.fr/UROX/

  10. A hands-on approach for fitting long-term survival models under the GAMLSS framework.

    Science.gov (United States)

    de Castro, Mário; Cancho, Vicente G; Rodrigues, Josemar

    2010-02-01

    In many data sets from clinical studies there are patients insusceptible to the occurrence of the event of interest. Survival models which ignore this fact are generally inadequate. The main goal of this paper is to describe an application of the generalized additive models for location, scale, and shape (GAMLSS) framework to the fitting of long-term survival models. In this work the number of competing causes of the event of interest follows the negative binomial distribution. In this way, some well known models found in the literature are characterized as particular cases of our proposal. The model is conveniently parameterized in terms of the cured fraction, which is then linked to covariates. We explore the use of the gamlss package in R as a powerful tool for inference in long-term survival models. The procedure is illustrated with a numerical example. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  11. Epigenetic Mechanisms Regulate Innate Immunity against Uropathogenic and Commensal-Like Escherichia coli in the Surrogate Insect Model Galleria mellonella.

    Science.gov (United States)

    Heitmueller, Miriam; Billion, André; Dobrindt, Ulrich; Vilcinskas, Andreas; Mukherjee, Krishnendu

    2017-10-01

    Innate-immunity-related genes in humans are activated during urinary tract infections (UTIs) caused by pathogenic strains of Escherichia coli but are suppressed by commensals. Epigenetic mechanisms play a pivotal role in the regulation of gene expression in response to environmental stimuli. To determine whether epigenetic mechanisms can explain the different behaviors of pathogenic and commensal bacteria, we infected larvae of the greater wax moth, Galleria mellonella, a widely used model insect host, with a uropathogenic E. coli (UPEC) strain that causes symptomatic UTIs in humans or a commensal-like strain that causes asymptomatic bacteriuria (ABU). Infection with the UPEC strain (CFT073) was more lethal to larvae than infection with the attenuated ABU strain (83972) due to the recognition of each strain by different Toll-like receptors, ultimately leading to differential DNA/RNA methylation and histone acetylation. We used next-generation sequencing and reverse transcription (RT)-PCR to correlate epigenetic changes with the induction of innate-immunity-related genes. Transcriptomic analysis of G. mellonella larvae infected with E. coli strains CFT073 and 83972 revealed strain-specific variations in the class and expression levels of genes encoding antimicrobial peptides, cytokines, and enzymes controlling DNA methylation and histone acetylation. Our results provide evidence for the differential epigenetic regulation of transcriptional reprogramming by UPEC and ABU strains of E. coli in G. mellonella larvae, which may be relevant to understanding the different behaviors of these bacterial strains in the human urinary tract. Copyright © 2017 American Society for Microbiology.

  12. Assessing a moderating effect and the global fit of a PLS model on online trading

    Directory of Open Access Journals (Sweden)

    Juan J. García-Machado

    2017-12-01

    Full Text Available This paper proposes a PLS model for the study of online trading. Traditional investing has experienced a revolution due to the rise of e-trading services that enable investors to use the Internet to conduct secure trading. On the one hand, model results show that there is a positive, direct and statistically significant relationship between personal outcome expectations, perceived relative advantage, shared vision and economy-based trust with the quality of knowledge. On the other hand, trading frequency and portfolio performance also show this relationship. After including the investor’s income and financial wealth (IFW) as a moderating effect, the PLS model was enhanced, and we found that the interaction term is negative and statistically significant, so higher IFW levels entail a weaker relationship between trading frequency and portfolio performance and vice-versa. Finally, with regard to the goodness of overall model fit measures, they showed that the model is fit for SRMR and dG measures, so it is likely that the model is true.

  13. Multiple organ definition in CT using a Bayesian approach for 3D model fitting

    Science.gov (United States)

    Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.

    1995-08-01

    Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape encoded in a set of biometric organ models--specifically for the liver and kidney-- that accurately represents patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition both before and after the addition of a kidney model to the fitting; we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.

  14. Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.

    Science.gov (United States)

    Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E

    2007-02-15

    Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MatLab functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
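
    For interconnected first-order schemes of the kind VisKin handles, the analytical solution of the rate equations can be sketched with a matrix exponential. The two-state A ⇌ B scheme and its rate constants below are hypothetical, and this is a sketch of the underlying mathematics, not VisKin's actual implementation.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-state scheme A <-> B with forward rate kf, reverse rate kr.
# The concentration vector x = [A, B] obeys dx/dt = K @ x, whose analytical
# solution is x(t) = expm(K * t) @ x0; larger interconnected schemes just
# enlarge the rate matrix K.
kf, kr = 2.0, 1.0
K = np.array([[-kf,  kr],
              [ kf, -kr]])

def concentrations(t, x0):
    """Concentrations at time t from the analytical linear-ODE solution."""
    return expm(K * t) @ x0

x0 = np.array([1.0, 0.0])           # start with pure A
x_eq = concentrations(50.0, x0)     # long-time limit approaches equilibrium
```

    At long times the solution relaxes to the equilibrium ratio A/B = kr/kf, and mass is conserved throughout, which is a quick sanity check on any rate matrix.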

  15. Fitting the CDO correlation skew: a tractable structural jump-diffusion model

    DEFF Research Database (Denmark)

    Willemann, Søren

    2007-01-01

    We extend a well-known structural jump-diffusion model for credit risk to handle both correlations through diffusion of asset values and common jumps in asset value. Through a simplifying assumption on the default timing and efficient numerical techniques, we develop a semi-analytic framework...... allowing for instantaneous calibration to heterogeneous CDS curves and fast computation of CDO tranche spreads. We calibrate the model to CDX and iTraxx data from February 2007 and achieve a satisfactory fit. To price the senior tranches for both indices, we require a risk-neutral probability of a market...

  16. Permutation tests for goodness-of-fit testing of mathematical models to experimental data.

    Science.gov (United States)

    Fişek, M Hamit; Barlas, Zeynep

    2013-03-01

    This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models to data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition which has been carried out in the "standardized experimental situation" associated with the program to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test which has been the primary statistical tool for assessing goodness of fit in the EST research program, but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test and is proposed as an alternative to the chi-square test, as being simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives, and Kendall's tau-b, a rank-order correlation coefficient. The fifth procedure is a new rank-order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, which we believe may be more useful than other rank-order tests for assessing goodness-of-fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study from an experimental paradigm different from the expectation states paradigm - the "network exchange" paradigm, and describe how our procedures may be applied to this data set. Copyright © 2012 Elsevier Inc. All rights reserved.
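
    An "Average Absolute Deviation" style test can be illustrated with a Monte-Carlo resampling sketch. The cell probabilities and counts below are invented, and the resampling scheme (multinomial draws under the model's predicted probabilities) is an assumption about how such a test might be set up, not the authors' exact procedure.

```python
import numpy as np

def aad(observed, expected):
    """Average absolute deviation between observed and expected counts."""
    return np.mean(np.abs(observed - expected))

def resampling_gof_p(observed, probs, n_rep=2000, seed=0):
    """Monte-Carlo goodness-of-fit p-value for the AAD statistic.

    Resamples multinomial data under the model's predicted cell
    probabilities and counts how often the resampled AAD is at least
    as large as the observed one (add-one correction included)."""
    rng = np.random.default_rng(seed)
    observed = np.asarray(observed)
    n = observed.sum()
    expected = n * np.asarray(probs)
    t_obs = aad(observed, expected)
    sims = rng.multinomial(n, probs, size=n_rep)
    t_sim = np.mean(np.abs(sims - expected), axis=1)
    return (1 + np.sum(t_sim >= t_obs)) / (n_rep + 1)

# Toy data close to the model's predictions: a large p-value is expected
obs = np.array([52, 27, 21])
probs = np.array([0.5, 0.3, 0.2])
p = resampling_gof_p(obs, probs)
```

    Because the reference distribution is built by simulation rather than taken from chi-square asymptotics, the test remains valid for the small cell counts typical of laboratory experiments.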

  17. FITTING A THREE DIMENSIONAL PEM FUEL CELL MODEL TO MEASUREMENTS BY TUNING THE POROSITY AND

    DEFF Research Database (Denmark)

    Bang, Mads; Odgaard, Madeleine; Condra, Thomas Joseph

    2004-01-01

    the distribution of current density and further how this affects the polarization curve. The porosity and conductivity of the catalyst layer are some of the most difficult parameters to measure, estimate and especially control. Yet the proposed model shows how these two parameters can have significant influence...... on the performance of the fuel cell. The two parameters are shown to be key elements in adjusting the three-dimensional model to fit measured polarization curves. Results from the proposed model are compared to single cell measurements on a test MEA from IRD Fuel Cells.......A three-dimensional, computational fluid dynamics (CFD) model of a PEM fuel cell is presented. The model consists of straight channels, porous gas diffusion layers, porous catalyst layers and a membrane. In this computational domain, most of the transport phenomena which govern the performance of the

  18. Fitting the Fractional Polynomial Model to Non-Gaussian Longitudinal Data

    Directory of Open Access Journals (Sweden)

    Ji Hoon Ryoo

    2017-08-01

    Full Text Available As in cross sectional studies, longitudinal studies involve non-Gaussian data such as binomial, Poisson, gamma, and inverse-Gaussian distributions, and multivariate exponential families. A number of statistical tools have thus been developed to deal with non-Gaussian longitudinal data, including analytic techniques to estimate parameters in both fixed and random effects models. However, as yet growth modeling with non-Gaussian data is somewhat limited when considering the transformed expectation of the response via a linear predictor as a functional form of explanatory variables. In this study, we introduce a fractional polynomial model (FPM that can be applied to model non-linear growth with non-Gaussian longitudinal data and demonstrate its use by fitting two empirical binary and count data models. The results clearly show the efficiency and flexibility of the FPM for such applications.
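
    A first-degree fractional polynomial fit can be sketched as a search over the conventional power set, where power 0 denotes log(x). This least-squares version is a deliberate simplification of the paper's approach (which links the transformed mean of non-Gaussian outcomes via a GLM-type predictor), and the data below are synthetic.

```python
import numpy as np

# Conventional fractional-polynomial candidate powers; 0 denotes log(x).
POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)

def fp_transform(x, p):
    return np.log(x) if p == 0 else x ** p

def best_fp1(x, y):
    """Pick the first-degree fractional-polynomial power with the lowest
    residual sum of squares (least-squares sketch)."""
    best = None
    for p in POWERS:
        X = np.column_stack([np.ones_like(x), fp_transform(x, p)])
        beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss_val = rss[0] if rss.size else np.sum((y - X @ beta) ** 2)
        if best is None or rss_val < best[1]:
            best = (p, rss_val, beta)
    return best

# Synthetic growth data whose true transformation is the square root
x = np.linspace(1, 9, 40)
y = 2.0 + 3.0 * np.sqrt(x)
p_hat, rss, beta = best_fp1(x, y)
```

    The search correctly selects the power 0.5 that generated the data; second-degree fractional polynomials extend the same idea with pairs of powers.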

  19. Urethral and periurethral dosimetry in prostate brachytherapy: is there a convenient surrogate?

    International Nuclear Information System (INIS)

    Bucci, Joseph; Spadinger, Ingrid; Hilts, Michelle; Sidhu, Sabeena; Smith, Clarke; Keyes, Mira; Morris, W. James

    2002-01-01

    Purpose: To assess and compare two models for a surrogate urethra to be used for postimplant dosimetry in prostate brachytherapy. Methods and Materials: Twenty men with a urinary catheter present at the time of postimplant computed tomographic imaging were studied. Urethral and periurethral volumes were defined as 5-mm and 10-mm diameter volumes, respectively. Three contours of each were used: one contour of the true urethra (and periurethra), and two surrogate models. The true volumes were centered on the catheter center. One surrogate model used volumes centered on the geometrical center of each prostate contour (centered surrogate). The other surrogate model was based on the average deviation of the true urethra from a reference line through the geometrical center of the axial midplane of the prostate (deviated surrogate). Maximum point doses and the D10, D25, D50, D90, V100, V120, and V150 of the true and surrogate volumes were measured and compared (Dn is the minimum dose [Gy] received by n% of the structure, and Vm is the volume [%] of the structure that received m% of the prescribed dose), as well as the distances between the surrogate urethras and the true urethra. Results: Doses determined from both surrogate urethral and periurethral volumes were in good agreement with the true urethral and periurethral doses except in the superior third of the gland. The deviated surrogate provided a physically superior likeness to the true urethra. Certain dose-volume histogram (DVH)-based parameters could also be predicted reasonably well on the basis of the surrogates. Correlation coefficients ≥0.85 were seen for D25, D50, V100, V120, and V150 for both models. All the other parameters had correlation coefficients in the range of 0.73-0.85. Conclusions: Both surrogate models predicted true urethral dosimetry reasonably well. It is recommended that the simpler deviated surrogate would be a more suitable surrogate for routine clinical practice
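
    The DVH parameters Dn and Vm have direct computational definitions (Dn: minimum dose received by the hottest n% of the structure; Vm: percentage of the structure receiving at least m% of the prescribed dose). The sketch below uses invented voxel doses and an assumed prescription value purely for illustration.

```python
import numpy as np

def d_n(doses, n):
    """D_n: minimum dose (Gy) received by the hottest n% of the structure,
    i.e. the (100 - n)th percentile of the voxel doses."""
    return float(np.percentile(doses, 100 - n))

def v_m(doses, m, prescribed):
    """V_m: percentage of the structure receiving at least m% of the
    prescribed dose."""
    return 100.0 * float(np.mean(doses >= prescribed * m / 100.0))

# Hypothetical voxel doses (Gy) in a surrogate urethral volume, and an
# assumed prescription dose chosen only for this example
doses = np.array([100.0, 120.0, 140.0, 150.0, 160.0, 180.0, 90.0, 110.0])
prescribed = 144.0
d10 = d_n(doses, 10)
v100 = v_m(doses, 100, prescribed)
```

    Comparing such parameters computed on the true and surrogate volumes, as the study does, reduces to comparing the two dose arrays voxelized over each contour.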

  20. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    Science.gov (United States)

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely is largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision
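
    The Poisson N-mixture likelihood underlying these simulations marginalizes the latent site abundances: counts y_ij ~ Binomial(N_i, p) with N_i ~ Poisson(λ). The sketch below simulates data that satisfy the model's assumptions and fits it by maximum likelihood; the parameter values, truncation bound, and optimizer choice are illustrative, not the authors' setup.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, binom

def nmix_negloglik(params, y, n_max=80):
    """Negative log-likelihood of a Poisson N-mixture model.

    y is a (sites x occasions) count matrix; the latent abundance N_i
    at each site is marginalized over 0..n_max."""
    lam = np.exp(params[0])                    # log-link for abundance
    p = 1.0 / (1.0 + np.exp(-params[1]))       # logit-link for detection
    N = np.arange(n_max + 1)
    log_prior = poisson.logpmf(N, lam)
    # log P(y_i | N) for every site and candidate N, then marginalize N
    log_lik = binom.logpmf(y[:, :, None], N[None, None, :], p).sum(axis=1)
    return -np.logaddexp.reduce(log_prior + log_lik, axis=1).sum()

# Simulate data that meet the model's assumptions (no extra heterogeneity)
rng = np.random.default_rng(1)
lam_true, p_true = 5.0, 0.5
N_i = rng.poisson(lam_true, size=200)
y = rng.binomial(N_i[:, None], p_true, size=(200, 4))

fit = minimize(nmix_negloglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
lam_hat = np.exp(fit.x[0])
```

    When the assumptions hold and detection is moderate, as here, the abundance estimate lands near truth; the study's point is how badly this breaks down once unmodeled heterogeneity enters.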

  1. Fitted Hanbury-Brown Twiss radii versus space-time variances in flow-dominated models

    Science.gov (United States)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-04-01

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.
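
    The relation between a Gaussian source and its fitted HBT radius can be sketched in one dimension: for an exactly Gaussian source the correlator is C(q) = 1 + exp(-q²R²), so a Gaussian fit recovers R exactly. Units here are chosen so that qR is dimensionless, and this toy correlator is not the paper's hydrodynamic emission-function calculation.

```python
import numpy as np

# Toy 1D version: Gaussian source of width R gives C(q) = 1 + exp(-q^2 R^2)
R_true = 5.0                          # assumed source width
q = np.linspace(0.01, 0.3, 60)        # relative-momentum grid
C = 1.0 + np.exp(-(q * R_true) ** 2)

# A log-linear least-squares "Gaussian fit": ln(C - 1) = -R^2 q^2,
# so the slope of ln(C - 1) against q^2 yields the fitted radius.
slope = np.polyfit(q ** 2, np.log(C - 1.0), 1)[0]
R_fit = np.sqrt(-slope)
```

    For this exactly Gaussian source the fitted radius and the space-time variance coincide; the paper's observation is that for realistic non-Gaussian emission functions the two definitions diverge, which is why fitting the correlator matters.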

  2. Fitted Hanbury-Brown-Twiss radii versus space-time variances in flow-dominated models

    International Nuclear Information System (INIS)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-01-01

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown-Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data

  3. Fitted HBT radii versus space-time variances in flow-dominated models

    International Nuclear Information System (INIS)

    Lisa, Mike; Frodermann, Evan; Heinz, Ulrich

    2007-01-01

    The inability of otherwise successful dynamical models to reproduce the 'HBT radii' extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the 'RHIC HBT Puzzle'. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source which can be directly computed from the emission function, without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models some of which exhibit significant deviations from simple Gaussian behaviour. By Fourier transforming the emission function we compute the 2-particle correlation function and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and measured HBT radii remain, we show that a more 'apples-to-apples' comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data. (author)

  4. Neutron-induced cross-sections via the surrogate method

    International Nuclear Information System (INIS)

    Boutoux, G.

    2011-11-01

    The surrogate reaction method is an indirect way of determining neutron-induced cross sections through transfer or inelastic scattering reactions. This method presents the advantage that in some cases the target material is stable or less radioactive than the material required for a neutron-induced measurement. The method is based on the hypothesis that the excited nucleus is a compound nucleus whose decay depends essentially on its excitation energy and on the spin and parity of the populated compound state. Nevertheless, the spin and parity populations of the compound nuclei produced in the neutron- and transfer-induced reactions may be different. This work reviews the surrogate method and its validity. Neutron-induced fission cross sections obtained with the surrogate method are in general in good agreement with the directly measured ones. However, it is not yet clear to what extent the surrogate method can be applied to infer radiative capture cross sections. We performed an experiment to determine the gamma-decay probabilities for 176Lu and 173Yb by using the surrogate reactions 174Yb(3He,pγ)176Lu* and 174Yb(3He,αγ)173Yb*, respectively, and compare them with the well-known corresponding probabilities obtained in the 175Lu(n,γ) and 172Yb(n,γ) reactions. This experiment provides answers to understand why, in the case of gamma decay, the surrogate method gives significant deviations compared to the corresponding neutron-induced reaction. In this work, we have also assessed whether the surrogate method can be applied to extract capture probabilities in the actinide region. Previous experiments on fission have also been reinterpreted. Thus, this work provides new insights into the surrogate method. This work is organised in the following way: in chapter 1, the theoretical aspects related to the surrogate method will be introduced. The validity of the surrogate method will be investigated by means of statistical model calculations. In chapter 2, a review on

  5. Fast fitting of non-Gaussian state-space models to animal movement data via Template Model Builder

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Whoriskey, Kim; Yurkowski, David

    2015-01-01

    recommend using the Laplace approximation combined with automatic differentiation (as implemented in the novel R package Template Model Builder; TMB) for the fast fitting of continuous-time multivariate non-Gaussian SSMs. Through Argos satellite tracking data, we demonstrate that the use of continuous...... are able to estimate additional parameters compared to previous methods, all without requiring a substantial increase in computational time. The model implementation is made available through the R package argosTrack....

  6. 64Cu-DOTA as a surrogate positron analog of Gd-DOTA for cardiac fibrosis detection with PET: pharmacokinetic study in a rat model of chronic MI.

    Science.gov (United States)

    Kim, Heejung; Lee, Sung-Jin; Davies-Venn, Cynthia; Kim, Jin Su; Yang, Bo Yeun; Yao, Zhengsheng; Kim, Insook; Paik, Chang H; Bluemke, David A

    2016-02-01

    The aim of this study was to investigate the pharmacokinetics of (64)Cu-DOTA (1,4,7,10-azacyclododecane-N,N',N'',N'''-tetraacetic acid), a positron surrogate analog of the late gadolinium (Gd)-enhancement cardiac magnetic resonance agent, Gd-DOTA, in a rat model of chronic myocardial infarction (MI) and its microdistribution in the cardiac fibrosis by autoradiography. DOTA was labeled with (64)Cu-acetate. CD rats (n=5) with MI by left anterior descending coronary artery ligation and normal rats (n=6) were injected intravenously with (64)Cu-DOTA (18.5 MBq, 0.02 mmol DOTA/kg). Dynamic PET imaging was performed for 60 min after injection. (18)F-Fluorodeoxyglucose ([(18)F]-FDG) PET imaging was performed to identify the viable myocardium. For the region of interest analysis, the (64)Cu-DOTA PET image was coregistered to the [(18)F]-FDG PET image. To validate the PET images, slices of heart samples from the base to the apex were analyzed using autoradiography and by histological staining with Masson's trichrome. (64)Cu-DOTA was rapidly taken up in the infarct area. The time-activity curves demonstrated that (64)Cu-DOTA concentrations in the blood, fibrotic tissue, and perfusion-rich organs peaked within a minute post injection; thereafter, it was rapidly washed out in parallel with blood clearance and excreted through the renal system. The blood clearance curve was biphasic, with a distribution half-life of less than 3 min and an elimination half-life of ∼21.8 min. The elimination half-life of (64)Cu-DOTA from the focal fibrotic tissue (∼22.4 min) and the remote myocardium (∼20.1 min) was similar to the blood elimination half-life. Consequently, the uptake ratios of focal fibrosis-to-blood and remote myocardium-to-blood remained stable for the time period between 10 and 60 min. The corresponding ratios obtained from images acquired from 30 to 60 min were 1.09 and 0.59, respectively, indicating that the concentration of (64)Cu-DOTA in the focal
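
    The biphasic clearance reported above is commonly modelled as a sum of two exponentials. Below is a minimal sketch of such a fit on synthetic data, with half-lives chosen to mimic those in the abstract; the amplitudes and noise level are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Biexponential clearance: C(t) = A*2^(-t/t_dist) + B*2^(-t/t_elim)
def biexp(t, A, t_dist, B, t_elim):
    return (A * np.exp(-np.log(2) * t / t_dist)
            + B * np.exp(-np.log(2) * t / t_elim))

# Synthetic curve with half-lives similar to those reported above
t = np.linspace(0.5, 60, 120)                  # minutes post injection
rng = np.random.default_rng(0)
data = biexp(t, 8.0, 3.0, 2.0, 21.8) + rng.normal(0, 0.02, t.size)

popt, _ = curve_fit(biexp, t, data, p0=[5, 2, 1, 15])
A, t_dist, B, t_elim = popt
print(round(t_dist, 1), round(t_elim, 1))      # ~3.0 and ~21.8 min
```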

  7. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    Science.gov (United States)

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs, (TG), so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J(2) ) statistics can be applied directly. In a simulation study, TG, HL, and J(2) were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J(2) were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J(2) . © 2015 John Wiley & Sons Ltd/London School of Economics.
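
    For readers unfamiliar with the Hosmer-Lemeshow statistic discussed above, a minimal decile-grouping implementation might look as follows. This is a simplified sketch on simulated, well-calibrated data, not the common grouping method used in the study.

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p_hat, g=10):
    """Hosmer-Lemeshow GOF statistic: group observations into g bins
    by sorted fitted probability, compare observed vs expected events."""
    order = np.argsort(p_hat)
    y = np.asarray(y)[order]
    p_hat = np.asarray(p_hat)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(y.size), g):
        n = idx.size
        obs = y[idx].sum()          # observed events in the bin
        exp = p_hat[idx].sum()      # expected events in the bin
        pbar = exp / n
        stat += (obs - exp) ** 2 / (n * pbar * (1.0 - pbar))
    return stat, chi2.sf(stat, g - 2)   # chi-square with g-2 df

# Well-calibrated simulated data: fitted probabilities equal the truth
rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.95, 2000)
y = rng.binomial(1, p)
stat, pval = hosmer_lemeshow(y, p)
print(round(stat, 2), round(pval, 3))
```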

  8. Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model

    International Nuclear Information System (INIS)

    Edwards, Darrin C.; Kupinski, Matthew A.; Metz, Charles E.; Nishikawa, Robert M.

    2002-01-01

    We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well
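
    The linear-scaling relation described above (the FROC curve as a scaled version of the candidate-analysis ROC curve) is simple to state in code. The operating-point values and ROC points below are hypothetical.

```python
# Under the model above, each candidate-analysis ROC point (FPF, TPF)
# maps to a FROC point (NLF, LLF) via the operating point for initial
# candidate detection: NLF = nlf_max * FPF, LLF = llf_max * TPF.
def froc_from_roc(roc_points, nlf_max, llf_max):
    return [(nlf_max * fpf, llf_max * tpf) for fpf, tpf in roc_points]

# Hypothetical candidate-level ROC points and scaling factors
roc = [(0.0, 0.0), (0.1, 0.6), (0.3, 0.85), (1.0, 1.0)]
froc = froc_from_roc(roc, nlf_max=2.5, llf_max=0.9)
print(froc)
```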

  9. VizieR Online Data Catalog: GRB prompt emission fitted with the DREAM model (Ahlgren+, 2015)

    Science.gov (United States)

    Ahlgren, B.; Larsson, J.; Nymark, T.; Ryde, F.; Pe'Er, A.

    2018-01-01

    We illustrate the application of the DREAM model by fitting it to two different, bright Fermi GRBs; GRB 090618 and GRB 100724B. While GRB 090618 is well fitted by a Band function, GRB 100724B was the first example of a burst with a significant additional BB component (Guiriec et al. 2011ApJ...727L..33G). GRB 090618 is analysed using Gamma-ray Burst Monitor (GBM) data (Meegan et al. 2009ApJ...702..791M) from the NaI and BGO detectors. For GRB 100724B, we used GBM data from the NaI and BGO detectors as well as Large Area Telescope Low Energy (LAT-LLE) data. For both bursts we selected NaI detectors seeing the GRB at an off-axis angle lower than 60° and the BGO detector as being the best aligned of the two BGO detectors. The spectra were fitted in the energy ranges 8-1000 keV (NaI), 200-40000 keV (BGO) and 30-1000 MeV (LAT-LLE). (2 data files).

  10. Adapted strategic planning model applied to small business: a case study in the fitness area

    Directory of Open Access Journals (Sweden)

    Eduarda Tirelli Hennig

    2012-06-01

    Full Text Available Strategic planning is an important management tool in the corporate scenario and should not be restricted to large companies. However, this kind of planning process in small business may need special adaptations due to their own characteristics. This paper aims to identify and adapt the existing models of strategic planning to the scenario of a small business in the fitness area. Initially, a comparative study among models of different authors is carried out to identify their phases and activities. Then, the phases and activities that should be present in a model to be used in a small business are defined. That model was applied to a Pilates studio; it involves the establishment of an organizational identity, an environmental analysis, as well as the definition of strategic goals, strategies and actions to reach them. Finally, benefits to the organization could be identified, as well as hurdles in the implementation of the tool.

  11. Using geometry to improve model fitting and experiment design for glacial isostasy

    Science.gov (United States)

    Kachuck, S. B.; Cathles, L. M.

    2017-12-01

    As scientists we routinely deal with models, which are geometric objects at their core - the manifestation of a set of parameters as predictions for comparison with observations. When the number of observations exceeds the number of parameters, the model is a hypersurface (the model manifold) in the space of all possible predictions. The object of parameter fitting is to find the parameters corresponding to the point on the model manifold as close to the vector of observations as possible. But the geometry of the model manifold can make this difficult. By curving, ending abruptly (where, for instance, parameters go to zero or infinity), and by stretching and compressing the parameters together in unexpected directions, it can be difficult to design algorithms that efficiently adjust the parameters. Even at the optimal point on the model manifold, parameters might not be individually resolved well enough to be applied to new contexts. In our context of glacial isostatic adjustment, models of sparse surface observations have a broad spread of sensitivity to mixtures of the earth's viscous structure and the surface distribution of ice over the last glacial cycle. This impedes precise statements about crucial geophysical processes, such as the planet's thermal history or the climates that controlled the ice age. We employ geometric methods developed in the field of systems biology to improve the efficiency of fitting (geodesic accelerated Levenberg-Marquardt) and to identify the maximally informative sources of additional data to make better predictions of sea levels and ice configurations (optimal experiment design). We demonstrate this in particular in reconstructions of the Barents Sea Ice Sheet, where we show that only certain kinds of data from the central Barents have the power to distinguish between proposed models.
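
    The fitting algorithm mentioned above builds on Levenberg-Marquardt; a plain (non-geodesic-accelerated) version is easy to sketch for a toy exponential model. This is a generic illustration, not the authors' implementation.

```python
import numpy as np

def levenberg_marquardt(residual, jac, theta, lam=1e-3, iters=50):
    """Plain Levenberg-Marquardt: damped Gauss-Newton steps on the
    model manifold (without the geodesic acceleration noted above)."""
    theta = np.asarray(theta, dtype=float)
    for _ in range(iters):
        r, J = residual(theta), jac(theta)
        A = J.T @ J + lam * np.eye(theta.size)
        step = np.linalg.solve(A, -J.T @ r)
        trial = theta + step
        if np.sum(residual(trial) ** 2) < np.sum(r ** 2):
            theta, lam = trial, lam * 0.5   # accept step, reduce damping
        else:
            lam *= 2.0                       # reject step, raise damping
    return theta

# Toy problem: fit y = exp(-k*t) + c to noiseless data
t = np.linspace(0, 5, 30)
y = np.exp(-1.3 * t) + 0.2
res = lambda th: np.exp(-th[0] * t) + th[1] - y
jac = lambda th: np.column_stack([-t * np.exp(-th[0] * t),
                                  np.ones_like(t)])
fit = levenberg_marquardt(res, jac, [0.5, 0.0])
print(fit)   # ~ [1.3, 0.2]
```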

  12. Describing the Process of Adopting Nutrition and Fitness Apps: Behavior Stage Model Approach.

    Science.gov (United States)

    König, Laura M; Sproesser, Gudrun; Schupp, Harald T; Renner, Britta

    2018-03-13

    Although mobile technologies such as smartphone apps are promising means for motivating people to adopt a healthier lifestyle (mHealth apps), previous studies have shown low adoption and continued use rates. Developing the means to address this issue requires further understanding of mHealth app nonusers and adoption processes. This study utilized a stage model approach based on the Precaution Adoption Process Model (PAPM), which proposes that people pass through qualitatively different motivational stages when adopting a behavior. To establish a better understanding of between-stage transitions during app adoption, this study aimed to investigate the adoption process of nutrition and fitness app usage, and the sociodemographic and behavioral characteristics and decision-making style preferences of people at different adoption stages. Participants (N=1236) were recruited onsite within the cohort study Konstanz Life Study. Use of mobile devices and nutrition and fitness apps, 5 behavior adoption stages of using nutrition and fitness apps, preference for intuition and deliberation in eating decision-making (E-PID), healthy eating style, sociodemographic variables, and body mass index (BMI) were assessed. Analysis of the 5 behavior adoption stages showed that stage 1 ("unengaged") was the most prevalent motivational stage for both nutrition and fitness app use, with half of the participants stating that they had never thought about using a nutrition app (52.41%, 533/1017), whereas less than one-third stated they had never thought about using a fitness app (29.25%, 301/1029). "Unengaged" nonusers (stage 1) showed a higher preference for an intuitive decision-making style when making eating decisions, whereas those who were already "acting" (stage 4) showed a greater preference for a deliberative decision-making style (F(4,1012)=21.83, P…) … digital interventions. This study highlights that new user groups might be better reached by apps designed to address a more intuitive

  13. Fitting Diffusion Item Response Theory Models for Responses and Response Times Using the R Package diffIRT

    Directory of Open Access Journals (Sweden)

    Dylan Molenaar

    2015-08-01

    Full Text Available In the psychometric literature, item response theory models have been proposed that explicitly take the decision process underlying the responses of subjects to psychometric test items into account. Application of these models is however hampered by the absence of general and flexible software to fit these models. In this paper, we present diffIRT, an R package that can be used to fit item response theory models that are based on a diffusion process. We discuss parameter estimation and model fit assessment, show the viability of the package in a simulation study, and illustrate the use of the package with two datasets pertaining to extraversion and mental rotation. In addition, we illustrate how the package can be used to fit the traditional diffusion model (as it has been originally developed in experimental psychology to data.

  14. A minimalist functional group (MFG) approach for surrogate fuel formulation

    KAUST Repository

    Abdul Jameel, Abdul Gani; Naser, Nimal; Issayev, Gani; Touitou, Jamal; Ghosh, Manik Kumer; Emwas, Abdul-Hamid M.; Farooq, Aamir; Dooley, Stephen; Sarathy, Mani

    2018-01-01

    Surrogate fuel formulation has drawn significant interest due to its relevance towards understanding combustion properties of complex fuel mixtures. In this work, we present a novel approach for surrogate fuel formulation by matching target fuel functional groups, while minimizing the number of surrogate species. Five key functional groups (paraffinic CH3, paraffinic CH2, paraffinic CH, naphthenic CH–CH2 and aromatic C–CH), in addition to structural information provided by the Branching Index (BI), were chosen as matching targets. Surrogates were developed for six FACE (Fuels for Advanced Combustion Engines) gasoline target fuels, namely FACE A, C, F, G, I and J. The five functional groups present in the fuels were qualitatively and quantitatively identified using high resolution 1H Nuclear Magnetic Resonance (NMR) spectroscopy. A further constraint was imposed in limiting the number of surrogate components to a maximum of two. This simplifies the process of surrogate formulation, facilitates surrogate testing, and significantly reduces the size and time involved in developing chemical kinetic models by reducing the number of thermochemical and kinetic parameters requiring estimation. Fewer species also reduces the computational expenses involved in simulating combustion in practical devices. The proposed surrogate formulation methodology is denoted as the Minimalist Functional Group (MFG) approach. The MFG surrogates were experimentally tested against their target fuels using Ignition Delay Times (IDT) measured in an Ignition Quality Tester (IQT), as specified by the standard ASTM D6890 methodology, and in a Rapid Compression Machine (RCM). Threshold Sooting Index (TSI) and Smoke Point (SP) measurements were also performed to determine the sooting propensities of the surrogates and target fuels. The results showed that MFG surrogates were able to reproduce the aforementioned combustion properties of the target FACE gasolines across a wide range of conditions
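
    The two-component matching idea behind the MFG approach can be illustrated with a toy calculation: pick the blend fraction of one surrogate component that best matches a target fuel's functional-group fractions in a least-squares sense. All group values below are made up for illustration and are not from the paper.

```python
import numpy as np

# Hypothetical functional-group fractions (5 groups) for two surrogate
# components and a target fuel; a real application would take these
# from NMR measurements.
groups_A = np.array([0.55, 0.30, 0.05, 0.00, 0.10])  # e.g. an iso-paraffin
groups_B = np.array([0.10, 0.20, 0.05, 0.00, 0.65])  # e.g. an aromatic
target   = np.array([0.40, 0.27, 0.05, 0.00, 0.28])  # target fuel

# blend(x) = x*A + (1-x)*B; minimize ||blend(x) - target||^2 over
# x in [0, 1].  The 1-D least-squares solution is available in
# closed form, then clipped to the physical range.
d = groups_A - groups_B
x = np.dot(d, target - groups_B) / np.dot(d, d)
x = min(max(x, 0.0), 1.0)
blend = x * groups_A + (1 - x) * groups_B
print(round(x, 3), np.round(blend, 3))
```

    With more components, or with the Branching Index as an extra matching target, this becomes a small constrained least-squares problem instead of a closed-form scalar solve.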

  16. Inverse problem theory methods for data fitting and model parameter estimation

    CERN Document Server

    Tarantola, A

    2002-01-01

    Inverse Problem Theory is written for physicists, geophysicists and all scientists facing the problem of quantitative interpretation of experimental data. Although it contains a lot of mathematics, it is not intended as a mathematical book, but rather tries to explain how a method of acquisition of information can be applied to the actual world.The book provides a comprehensive, up-to-date description of the methods to be used for fitting experimental data, or to estimate model parameters, and to unify these methods into the Inverse Problem Theory. The first part of the book deals wi

  17. On the fit of models to covariances and methodology to the Bulletin.

    Science.gov (United States)

    Bentler, P M

    1992-11-01

    It is noted that 7 of the 10 top-cited articles in the Psychological Bulletin deal with methodological topics. One of these is the Bentler-Bonett (1980) article on the assessment of fit in covariance structure models. Some context is provided on the popularity of this article. In addition, a citation study of methodology articles appearing in the Bulletin since 1978 was carried out. It verified that publications in design, evaluation, measurement, and statistics continue to be important to psychological research. Some thoughts are offered on the role of the journal in making developments in these areas more accessible to psychologists.

  18. Construction and validation of detailed kinetic models for the combustion of gasoline surrogates; Construction et validation de modeles cinetiques detailles pour la combustion de melanges modeles des essences

    Energy Technology Data Exchange (ETDEWEB)

    Touchard, S.

    2005-10-15

    The irreversible reduction of oil resources, the control of CO2 emissions and the application of increasingly strict standards on pollutant emissions lead researchers worldwide to work on reducing pollutant formation and improving engine yields, especially by using homogeneous-charge combustion of lean mixtures. The numerical simulation of fuel blend oxidation is an essential tool to study the influence of fuel formulation and engine conditions on auto-ignition and on pollutant emissions. Automatic generation helps to obtain detailed kinetic models, especially at low temperature, where the number of reactions quickly exceeds a thousand. The main purpose of this study is the generation and validation of detailed kinetic models for the oxidation of gasoline blends using the EXGAS software. This work has implied an improvement of the computation rules for thermodynamic and kinetic data, which were validated by numerical simulation using the CHEMKIN II software. A large part of this work has concerned the understanding of the low-temperature oxidation chemistry of C5 and larger alkenes. Low- and high-temperature mechanisms were proposed and validated for 1-pentene, 1-hexene, the binary mixtures 1-hexene/iso-octane, 1-hexene/toluene and iso-octane/toluene, and the ternary mixture 1-hexene/toluene/iso-octane. Simulations were also done for propene, 1-butene and iso-octane with former models including the modifications proposed in this PhD work. While the generated models allowed us to simulate the auto-ignition delays of the studied molecules and blends with good agreement, some uncertainties still remain for some reaction paths leading to the formation of cyclic products in the case of alkene oxidation at low temperature. It would also be interesting to carry on this work with combustion models of gasoline blends at low temperature. (author)

  19. Fitting the two-compartment model in DCE-MRI by linear inversion.

    Science.gov (United States)

    Flouri, Dimitra; Lesnic, Daniel; Sourbron, Steven P

    2016-09-01

    Model fitting of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data with nonlinear least squares (NLLS) methods is slow and may be biased by the choice of initial values. The aim of this study was to develop and evaluate a linear least squares (LLS) method to fit the two-compartment exchange and two-compartment filtration models. A second-order linear differential equation for the measured concentrations was derived where model parameters act as coefficients. Simulations of normal and pathological data were performed to determine calculation time, accuracy and precision under different noise levels and temporal resolutions. Performance of the LLS was evaluated by comparison against the NLLS. The LLS method is about 200 times faster, which reduces the calculation times for a 256 × 256 MR slice from 9 min to 3 s. For ideal data with low noise and high temporal resolution the LLS and NLLS were equally accurate and precise. The LLS was more accurate and precise than the NLLS at low temporal resolution, but less accurate at high noise levels. The data show that the LLS leads to a significant reduction in calculation times, and more reliable results at low noise levels. At higher noise levels the LLS becomes exceedingly inaccurate compared to the NLLS, but this may be improved using a suitable weighting strategy. Magn Reson Med 76:998-1006, 2016. © 2015 Wiley Periodicals, Inc.
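
    The linear-inversion idea can be sketched on the simpler one-compartment (Tofts) model rather than the two-compartment models treated in the paper: integrating the rate equation dC/dt = Ktrans·ca(t) − kep·C(t) gives C(t) = Ktrans·∫ca − kep·∫C, which is linear in the parameters, so a single least-squares solve replaces iterative NLLS. All signals below are synthetic.

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y(t), same length as y."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

t = np.linspace(0.0, 5.0, 500)          # minutes
ca = t * np.exp(-t)                     # toy arterial input function
Ktrans_true, kep_true = 0.25, 0.5

# Forward-simulate the tissue curve with a simple Euler scheme
C = np.zeros_like(t)
dt = t[1] - t[0]
for i in range(1, t.size):
    C[i] = C[i - 1] + dt * (Ktrans_true * ca[i - 1] - kep_true * C[i - 1])

# Linear inversion: C = Ktrans * int(ca) - kep * int(C),
# solved in one shot with ordinary least squares
X = np.column_stack([cumtrapz(ca, t), -cumtrapz(C, t)])
Ktrans_est, kep_est = np.linalg.lstsq(X, C, rcond=None)[0]
print(round(Ktrans_est, 3), round(kep_est, 3))   # ~0.25 and ~0.5
```

    The two-compartment case adds double integrals of both signals as extra columns, but the solve stays a single linear least-squares step, which is where the reported speedup comes from.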

  20. A classical regression framework for mediation analysis: fitting one model to estimate mediation effects.

    Science.gov (United States)

    Saunders, Christina T; Blume, Jeffrey D

    2017-10-26

    Mediation analysis explores the degree to which an exposure's effect on an outcome is diverted through a mediating variable. We describe a classical regression framework for conducting mediation analyses in which estimates of causal mediation effects and their variance are obtained from the fit of a single regression model. The vector of changes in exposure pathway coefficients, which we named the essential mediation components (EMCs), is used to estimate standard causal mediation effects. Because these effects are often simple functions of the EMCs, an analytical expression for their model-based variance follows directly. Given this formula, it is instructive to revisit the performance of routinely used variance approximations (e.g., delta method and resampling methods). Requiring the fit of only one model reduces the computation time required for complex mediation analyses and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations, as would be required in the Baron-Kenny framework. Using data from the BRAIN-ICU study, we provide examples to illustrate the advantages of this framework and compare it with the existing approaches. © The Author 2017. Published by Oxford University Press.
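
    For contrast with the single-model framework described above, the classical two-regression product-of-coefficients mediation estimate can be computed directly on simulated data. The coefficients below are invented for illustration.

```python
import numpy as np

# Simulated mediation structure: X -> M -> Y with a direct X -> Y path
rng = np.random.default_rng(42)
n = 5000
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(size=n)              # a = 0.6
Y = 0.5 * M + 0.3 * X + rng.normal(size=n)    # b = 0.5, direct = 0.3

def ols(y, *cols):
    """OLS slopes (intercept dropped) via least squares."""
    Z = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(Z, y, rcond=None)[0][1:]

a = ols(M, X)[0]            # effect of X on M
b, direct = ols(Y, M, X)    # effect of M on Y, controlling for X
indirect = a * b            # product-of-coefficients mediation effect
print(round(indirect, 2), round(direct, 2))   # ~0.30 and ~0.30
```

    The framework in the article recovers these same effects from the coefficient changes in a single fitted model, which also yields an analytical variance without resampling.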

  1. Innovation Rather than Improvement: A Solvable High-Dimensional Model Highlights the Limitations of Scalar Fitness

    Science.gov (United States)

    Tikhonov, Mikhail; Monasson, Remi

    2018-01-01

    Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.

  2. Multi-binding site model-based curve-fitting program for the computation of RIA data

    International Nuclear Information System (INIS)

    Malan, P.G.; Ekins, R.P.; Cox, M.G.; Long, E.M.R.

    1977-01-01

    In this paper, a comparison will be made of model-based and empirical curve-fitting procedures. The implementation of a multiple binding-site curve-fitting model that successfully fits a wide range of assay data and can be run on a mini-computer is described. The latter sophisticated model also provides estimates of the binding-site concentrations and of the respective equilibrium constants present; the latter have been used for refining assay conditions using computer optimisation techniques. (orig./AJ)
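
    A model-based fit of the simplest special case, a single binding site governed by mass action, can be sketched as follows on noiseless synthetic data; the paper's multi-site model generalizes this by summing over sites. `scipy` is assumed available; all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def bound(L, K, R0):
    # Mass action for one site: K*(R0 - B)*(L - B) = B, i.e. the
    # physical root of K*B^2 - (K*(R0+L)+1)*B + K*R0*L = 0,
    # with L total ligand, R0 site concentration, K the
    # equilibrium constant.
    b = K * (R0 + L) + 1.0
    return (b - np.sqrt(b * b - 4.0 * K * K * R0 * L)) / (2.0 * K)

L = np.logspace(-2, 2, 25)          # total ligand, arbitrary units
data = bound(L, 2.0, 1.5)           # noiseless synthetic binding curve
(K_est, R0_est), _ = curve_fit(bound, L, data, p0=[1.0, 1.0],
                               bounds=([1e-6, 1e-6], [np.inf, np.inf]))
print(round(K_est, 2), round(R0_est, 2))   # ~2.0 and ~1.5
```

    As in the abstract, the fitted equilibrium constant and site concentration are interpretable quantities, which is the advantage of the model-based approach over purely empirical curve fits.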

  3. GRace: a MATLAB-based application for fitting the discrimination-association model.

    Science.gov (United States)

    Stefanutti, Luca; Vianello, Michelangelo; Anselmi, Pasquale; Robusto, Egidio

    2014-10-28

    The Implicit Association Test (IAT) is a computerized two-choice discrimination task in which stimuli have to be categorized as belonging to target categories or attribute categories by pressing, as quickly and accurately as possible, one of two response keys. The discrimination association model has been recently proposed for the analysis of reaction time and accuracy of an individual respondent to the IAT. The model disentangles the influences of three qualitatively different components on the responses to the IAT: stimuli discrimination, automatic association, and termination criterion. The article presents General Race (GRace), a MATLAB-based application for fitting the discrimination association model to IAT data. GRace has been developed for Windows as a standalone application. It is user-friendly and does not require any programming experience. The use of GRace is illustrated on the data of a Coca Cola-Pepsi Cola IAT, and the results of the analysis are interpreted and discussed.

  4. Towards greater realism in inclusive fitness models: the case of worker reproduction in insect societies

    Science.gov (United States)

    Wenseleers, Tom; Helanterä, Heikki; Alves, Denise A.; Dueñez-Guzmán, Edgar; Pamilo, Pekka

    2013-01-01

    The conflicts over sex allocation and male production in insect societies have long served as an important test bed for Hamilton's theory of inclusive fitness, but have for the most part been considered separately. Here, we develop new coevolutionary models to examine the interaction between these two conflicts and demonstrate that sex ratio and colony productivity costs of worker reproduction can lead to vastly different outcomes even in species that show no variation in their relatedness structure. Empirical data on worker-produced males in eight species of Melipona bees support the predictions from a model that takes into account the demographic details of colony growth and reproduction. Overall, these models contribute significantly to explaining behavioural variation that previous theories could not account for. PMID:24132088

  5. Introducing the fit-criteria assessment plot - A visualisation tool to assist class enumeration in group-based trajectory modelling.

    Science.gov (United States)

    Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria

    2017-10-01

    Background and objective: Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods: An R code that accepts universal data input is presented. The programme condenses relevant group-based trajectory modelling output information on model fit indices in automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate the fit-criteria assessment plot's utility. Results: The fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. The fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions: The fit-criteria assessment plot is an exploratory visualisation tool that can be employed to assist decisions in the initial and decisive phase of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster group-based trajectory modelling's adequate use.

  6. Goodness-of-fit tests and model diagnostics for negative binomial regression of RNA sequencing data.

    Science.gov (United States)

    Mi, Gu; Di, Yanming; Schafer, Daniel W

    2015-01-01

    This work is about assessing model adequacy for negative binomial (NB) regression, particularly (1) assessing the adequacy of the NB assumption, and (2) assessing the appropriateness of models for NB dispersion parameters. Tools for the first are appropriate for NB regression generally; those for the second are primarily intended for RNA sequencing (RNA-Seq) data analysis. The typically small number of biological samples and large number of genes in RNA-Seq analysis motivate us to address the trade-offs between robustness and statistical power using NB regression models. One widely-used power-saving strategy, for example, is to assume some commonalities of NB dispersion parameters across genes via simple models relating them to mean expression rates, and many such models have been proposed. As RNA-Seq analysis is becoming ever more popular, it is appropriate to make more thorough investigations into power and robustness of the resulting methods, and into practical tools for model assessment. In this article, we propose simulation-based statistical tests and diagnostic graphics to address model adequacy. We provide simulated and real data examples to illustrate that our proposed methods are effective for detecting the misspecification of the NB mean-variance relationship as well as judging the adequacy of fit of several NB dispersion models.
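The simulation-based tests referred to above follow the general parametric-bootstrap pattern: simulate replicate data under the fitted NB model, recompute a discrepancy statistic on each replicate, and compare the observed statistic against that reference distribution. A minimal sketch, assuming the quadratic NB mean-variance relation Var = μ + φμ² and invented fitted values for a single gene (the actual test statistics and dispersion models in the paper differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def pearson_stat(y, mu, phi):
    # Pearson discrepancy under the NB mean-variance relation Var = mu + phi*mu^2
    return float(np.sum((y - mu) ** 2 / (mu + phi * mu ** 2)))

def sample_nb(mu, phi, size):
    # NumPy parameterization: n = 1/phi, p = 1/(1 + phi*mu) yields mean mu
    return rng.negative_binomial(1.0 / phi, 1.0 / (1.0 + phi * mu), size)

mu, phi, n = 50.0, 0.1, 8              # hypothetical fitted values for one gene
y_obs = sample_nb(mu, phi, n)          # stand-in for observed counts
t_obs = pearson_stat(y_obs, mu, phi)

# parametric bootstrap reference distribution of the statistic
t_sim = [pearson_stat(sample_nb(mu, phi, n), mu, phi) for _ in range(999)]
p_value = (1 + sum(t >= t_obs for t in t_sim)) / (999 + 1)
print(p_value)
```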

  7. Keep Using My Health Apps: Discover Users' Perception of Health and Fitness Apps with the UTAUT2 Model.

    Science.gov (United States)

    Yuan, Shupei; Ma, Wenjuan; Kanthawala, Shaheen; Peng, Wei

    2015-09-01

Health and fitness applications (apps) are one of the major app categories in the current mobile app market. Few studies have examined this area from the users' perspective. This study adopted the Extended Unified Theory of Acceptance and Use of Technology (UTAUT2) Model to examine the predictors of the users' intention to adopt health and fitness apps. A survey (n=317) was conducted with college-aged smartphone users at a Midwestern university in the United States. Performance expectancy, hedonic motivations, price value, and habit were significant predictors of users' intention of continued usage of health and fitness apps. However, effort expectancy, social influence, and facilitating conditions were not found to predict users' intention of continued usage of health and fitness apps. This study extends the UTAUT2 Model to the mobile apps domain and provides health professionals, app designers, and marketers with insights into the user experience of continued health and fitness app use.

  8. A History of Regression and Related Model-Fitting in the Earth Sciences (1636?-2000)

    International Nuclear Information System (INIS)

    Howarth, Richard J.

    2001-01-01

The (statistical) modeling of the behavior of a dependent variate as a function of one or more predictors provides examples of model-fitting which span the development of the earth sciences from the 17th Century to the present. The historical development of these methods and their subsequent application is reviewed. Bond's predictions (c. 1636 and 1668) of change in the magnetic declination at London may be the earliest attempt to fit such models to geophysical data. Following publication of Newton's theory of gravitation in 1726, analysis of data on the length of a 1° meridian arc, and the length of a pendulum beating seconds, as a function of sin²(latitude), was used to determine the ellipticity of the oblate spheroid defining the Figure of the Earth. The pioneering computational methods of Mayer in 1750, Boscovich in 1755, and Lambert in 1765, and the subsequent independent discoveries of the principle of least squares by Gauss in 1799, Legendre in 1805, and Adrain in 1808, and its later substantiation on the basis of probability theory by Gauss in 1809 were all applied to the analysis of such geodetic and geophysical data. Notable later applications include: the geomagnetic survey of Ireland by Lloyd, Sabine, and Ross in 1836, Gauss's model of the terrestrial magnetic field in 1838, and Airy's 1845 analysis of the residuals from a fit to pendulum lengths, from which he recognized the anomalous character of measurements of gravitational force which had been made on islands. In the early 20th Century applications to geological topics proliferated, but the computational burden effectively held back applications of multivariate analysis. Following World War II, the arrival of digital computers in universities in the 1950s facilitated computation, and fitting linear or polynomial models as a function of geographic coordinates, trend surface analysis, became popular during the 1950-60s. The inception of geostatistics in France at this time by Matheron had its

  9. Lévy flights and self-similar exploratory behaviour of termite workers: beyond model fitting.

    Directory of Open Access Journals (Sweden)

    Octavio Miramontes

    Full Text Available Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show by analyzing over half a million movement displacements that isolated termite workers actually exhibit a range of very interesting dynamical properties--including Lévy flights--in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale.

  10. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh; Genton, Marc G.

    2014-01-01

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte

  11. Psychosocial aspects of surrogate motherhood.

    Science.gov (United States)

    van den Akker, Olga B A

    2007-01-01

This review addresses the psychosocial research carried out on surrogacy triads (surrogate mothers, commissioning mothers and offspring) and shows that research has focused on a number of specific issues: attachment and disclosure to surrogate offspring; experiences, characteristics and motivations of surrogate mothers; and changes in profiles of the commissioning/intended mothers. Virtually all studies have used highly selected samples, making generalizations difficult. There has been a notable lack of theory, no interventions and only a handful of longitudinal studies or studies comparing different populations. Few studies have specifically questioned the meaning of and need for a family or the influence and impact that professionals, treatment availability and financial factors have on the choices made for surrogate and intended mothers. Societal attitudes have changed somewhat; however, according to public opinion, women giving up babies still fall outside the acceptable remit. Surrogate and intended mothers appear to reconcile their unusual choice through a process of cognitive restructuring, and the success or failure of this cognitive appraisal affects people's willingness to be open and honest about their choices. Normal population surveys, on the contrary, are less accepting of third party reproduction; they have no personal need to reconsider and hence maintain their original normative cognitively consonant state.

  12. Licensing Surrogate Decision-Makers.

    Science.gov (United States)

    Rosoff, Philip M

    2017-06-01

    As medical technology continues to improve, more people will live longer lives with multiple chronic illnesses with increasing cumulative debilitation, including cognitive dysfunction. Combined with the aging of society in most developed countries, an ever-growing number of patients will require surrogate decision-makers. While advance care planning by patients still capable of expressing their preferences about medical interventions and end-of-life care can improve the quality and accuracy of surrogate decisions, this is often not the case, not infrequently leading to demands for ineffective, inappropriate and prolonged interventions. In 1980 LaFollette called for the licensing of prospective parents, basing his argument on the harm they can do to vulnerable people (children). In this paper, I apply his arguments to surrogate decision-makers for cognitively incapacitated patients, rhetorically suggesting that we require potential surrogates to qualify for this position by demonstrating their ability to make reasonable and rational decisions for others. I employ this theoretical approach to argue that the loose criteria by which we authorize surrogates' generally unchallenged power should be reconsidered.

  13. Fits of the baryon magnetic moments to the quark model and spectrum-generating SU(3)

    International Nuclear Information System (INIS)

    Bohm, A.; Teese, R.B.

    1982-01-01

We show that for theoretical as well as phenomenological reasons the baryon magnetic moments that fulfill simple group transformation properties should be taken in intrinsic rather than nuclear magnetons. A fit of the recent experimental data to the reduced matrix elements of the usual octet electromagnetic current is still not good, and in order to obtain acceptable agreement, one has to add correction terms to the octet current. We have tested two kinds of corrections: U-spin-scalar terms, which are singled out by the model-independent algebraic properties of the hadron electromagnetic current, and octet U-spin vectors, which could come from quark-mass breaking in a nonrelativistic quark model. We find that the U-spin-scalar terms are more important than the U-spin vectors for various levels of demanded theoretical accuracy

  14. Fit model between participation statement of exhibitors and visitors to improve the exhibition performance

    Directory of Open Access Journals (Sweden)

    Cristina García Magro

    2015-06-01

Full Text Available Purpose: The paper offers a model of analysis for measuring the impact on fair performance of whether or not exhibitors understand visitors' motives for participation. Design/methodology: A review of the literature concerning two of the principal interested agents, exhibitors and visitors, is presented, focusing on the line of investigation that addresses the motives for participating or not in a trade show. Based on the information yielded by each perspective, a comparative analysis is carried out to determine the degree of mutual understanding between the two. Findings: Trade shows can be studied from an integrated strategic marketing approach. The fit model between the reasons for participation of exhibitors and visitors reveals a lack of understanding between them, leading to dissatisfaction with participation, a fact reflected in the fair's success. The model indicates that a strategic plan should be designed in which the visitors' reasons for participation are incorporated as a moderating variable of the exhibitors' reasons for participation. The article concludes with a series of proposals for the improvement of fairground results. Social implications: A fit model that improves the performance of trade shows implicitly leads to successful achievement of targets for multiple stakeholders beyond the consideration of visitors and exhibitors.
Originality/value: The integrated stakeholder perspective allows the study of the existing relationships between the principal groups of interest, so that knowledge of the state of the question on trade shows facilitates the task of researchers in future academic works and allows the interested groups to obtain a better return on their participation in fairs, whether as visitor or as

  15. Fitting Data to Model: Structural Equation Modeling Diagnosis Using Two Scatter Plots

    Science.gov (United States)

    Yuan, Ke-Hai; Hayashi, Kentaro

    2010-01-01

    This article introduces two simple scatter plots for model diagnosis in structural equation modeling. One plot contrasts a residual-based M-distance of the structural model with the M-distance for the factor score. It contains information on outliers, good leverage observations, bad leverage observations, and normal cases. The other plot contrasts…

  16. Real-time tumor motion estimation using respiratory surrogate via memory-based learning

    Science.gov (United States)

    Li, Ruijiang; Lewis, John H.; Berbeco, Ross I.; Xing, Lei

    2012-08-01

    Respiratory tumor motion is a major challenge in radiation therapy for thoracic and abdominal cancers. Effective motion management requires an accurate knowledge of the real-time tumor motion. External respiration monitoring devices (optical, etc) provide a noninvasive, non-ionizing, low-cost and practical approach to obtain the respiratory signal. Due to the highly complex and nonlinear relations between tumor and surrogate motion, its ultimate success hinges on the ability to accurately infer the tumor motion from respiratory surrogates. Given their widespread use in the clinic, such a method is critically needed. We propose to use a powerful memory-based learning method to find the complex relations between tumor motion and respiratory surrogates. The method first stores the training data in memory and then finds relevant data to answer a particular query. Nearby data points are assigned high relevance (or weights) and conversely distant data are assigned low relevance. By fitting relatively simple models to local patches instead of fitting one single global model, it is able to capture highly nonlinear and complex relations between the internal tumor motion and external surrogates accurately. Due to the local nature of weighting functions, the method is inherently robust to outliers in the training data. Moreover, both training and adapting to new data are performed almost instantaneously with memory-based learning, making it suitable for dynamically following variable internal/external relations. We evaluated the method using respiratory motion data from 11 patients. The data set consists of simultaneous measurement of 3D tumor motion and 1D abdominal surface (used as the surrogate signal in this study). There are a total of 171 respiratory traces, with an average peak-to-peak amplitude of ∼15 mm and average duration of ∼115 s per trace. Given only 5 s (roughly one breath) pretreatment training data, the method achieved an average 3D error of 1.5 mm and 95
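The memory-based scheme described above can be sketched as generic locally weighted regression: store all training pairs, and at query time fit a weighted local linear model in which nearby samples receive high relevance. The kernel, bandwidth, and sine-shaped surrogate-to-tumor relation below are all invented for illustration; this is not the authors' exact estimator or patient data:

```python
import numpy as np

def lwr_predict(query, X, y, tau):
    # Gaussian kernel: nearby stored samples get high relevance (weight)
    w = np.exp(-np.sum((X - query) ** 2, axis=1) / (2 * tau ** 2))
    A = np.hstack([np.ones((len(X), 1)), X])   # local linear model with intercept
    Aw = A.T * w                               # weight each stored sample
    beta = np.linalg.solve(Aw @ A, Aw @ y)     # weighted least squares
    return float(beta[0] + beta[1:] @ query)

# toy nonlinear surrogate -> tumor relation with small measurement noise
rng = np.random.default_rng(1)
surrogate = rng.uniform(-1.0, 1.0, (200, 1))       # external signal (memory)
tumor = np.sin(2.5 * surrogate[:, 0]) + 0.01 * rng.standard_normal(200)

pred = lwr_predict(np.array([0.3]), surrogate, tumor, tau=0.1)
print(abs(pred - np.sin(0.75)) < 0.1)
```

Because each query re-fits only a small local model, adding new training data is just an append to memory, which is what makes this family of methods fast to train and quick to adapt to changing internal/external relations.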

  17. Real-time tumor motion estimation using respiratory surrogate via memory-based learning

    International Nuclear Information System (INIS)

    Li Ruijiang; Xing Lei; Lewis, John H; Berbeco, Ross I

    2012-01-01

    Respiratory tumor motion is a major challenge in radiation therapy for thoracic and abdominal cancers. Effective motion management requires an accurate knowledge of the real-time tumor motion. External respiration monitoring devices (optical, etc) provide a noninvasive, non-ionizing, low-cost and practical approach to obtain the respiratory signal. Due to the highly complex and nonlinear relations between tumor and surrogate motion, its ultimate success hinges on the ability to accurately infer the tumor motion from respiratory surrogates. Given their widespread use in the clinic, such a method is critically needed. We propose to use a powerful memory-based learning method to find the complex relations between tumor motion and respiratory surrogates. The method first stores the training data in memory and then finds relevant data to answer a particular query. Nearby data points are assigned high relevance (or weights) and conversely distant data are assigned low relevance. By fitting relatively simple models to local patches instead of fitting one single global model, it is able to capture highly nonlinear and complex relations between the internal tumor motion and external surrogates accurately. Due to the local nature of weighting functions, the method is inherently robust to outliers in the training data. Moreover, both training and adapting to new data are performed almost instantaneously with memory-based learning, making it suitable for dynamically following variable internal/external relations. We evaluated the method using respiratory motion data from 11 patients. The data set consists of simultaneous measurement of 3D tumor motion and 1D abdominal surface (used as the surrogate signal in this study). There are a total of 171 respiratory traces, with an average peak-to-peak amplitude of ∼15 mm and average duration of ∼115 s per trace. Given only 5 s (roughly one breath) pretreatment training data, the method achieved an average 3D error of 1.5 mm and 95

  18. Model Atmosphere Spectrum Fit to the Soft X-Ray Outburst Spectrum of SS Cyg

    Directory of Open Access Journals (Sweden)

    V. F. Suleimanov

    2015-02-01

Full Text Available The X-ray spectrum of SS Cyg in outburst has a very soft component that can be interpreted as the fast-rotating optically thick boundary layer on the white dwarf surface. This component was carefully investigated by Mauche (2004) using the Chandra LETG spectrum of this object in outburst. The spectrum shows broad (≈5 Å) spectral features that have been interpreted as a large number of absorption lines on a blackbody continuum with a temperature of ≈250 kK. Because the spectrum resembles the photospheric spectra of super-soft X-ray sources, we tried to fit it with high-gravity hot LTE stellar model atmospheres with solar chemical composition, specially computed for this purpose. We obtained a reasonably good fit to the 60–125 Å spectrum with the following parameters: Teff = 190 kK, log g = 6.2, and NH = 8·10^19 cm^−2, although at shorter wavelengths the observed spectrum has a much higher flux. The reasons for this are discussed. The hypothesis of a fast-rotating boundary layer is supported by the derived low surface gravity.

  19. LVAD patients' and surrogates' perspectives on SPIRIT-HF: An advance care planning discussion.

    Science.gov (United States)

    Metzger, Maureen; Song, Mi-Kyung; Devane-Johnson, Stephanie

    2016-01-01

To describe LVAD patients' and surrogates' experiences with, and perspectives on SPIRIT-HF, an advance care planning (ACP) intervention. ACP is important for patients with LVAD, yet little is known about their experiences or those of their surrogates who have participated in ACP discussions. We used qualitative content analysis techniques to conduct a secondary analysis of 28 interviews with patients with LVAD (n = 14) and their surrogates (n = 14) who had participated in an RCT pilot study of SPIRIT-HF. Main themes from the data include: 1) sharing their HF stories was very beneficial; 2) participating in SPIRIT-HF led to greater peace of mind for patients and surrogates; 3) "one size does not fit all" when it comes to timing of ACP discussions. An understanding of patient and surrogate perspectives may inform clinicians' approach to ACP discussions. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
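The truncated-power construction behind fixed-knot regression splines can be illustrated without the mixed-model machinery. Below is a minimal sketch with one knot and piecewise-linear segments on synthetic data (the paper handles varying polynomial order, smoothness side conditions, and random effects, which this toy example omits):

```python
import numpy as np

def linear_spline_basis(t, knots):
    # truncated-power basis: intercept, t, and one hinge (t - k)_+ per knot;
    # the hinge columns let the slope change while the fit stays continuous
    cols = [np.ones_like(t), t] + [np.maximum(t - k, 0.0) for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 120)
truth = np.where(t < 4.0, 1.0 + 0.5 * t, 3.0 - 0.2 * (t - 4.0))
y = truth + 0.05 * rng.standard_normal(t.size)

X = linear_spline_basis(t, knots=[4.0])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# slope after the knot is the baseline slope plus the hinge coefficient
slope_right = beta[1] + beta[2]
print(round(slope_right, 1))
```

The reparameterization idea in the paper plays the same role as the hinge columns here: continuity at the knots is built into the basis, so a standard least-squares (or mixed-model) fit automatically respects the side conditions.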

  1. Optimized aerodynamic design process for subsonic transport wing fitted with winglets. [wind tunnel model

    Science.gov (United States)

    Kuhlman, J. M.

    1979-01-01

    The aerodynamic design of a wind-tunnel model of a wing representative of that of a subsonic jet transport aircraft, fitted with winglets, was performed using two recently developed optimal wing-design computer programs. Both potential flow codes use a vortex lattice representation of the near-field of the aerodynamic surfaces for determination of the required mean camber surfaces for minimum induced drag, and both codes use far-field induced drag minimization procedures to obtain the required spanloads. One code uses a discrete vortex wake model for this far-field drag computation, while the second uses a 2-D advanced panel wake model. Wing camber shapes for the two codes are very similar, but the resulting winglet camber shapes differ widely. Design techniques and considerations for these two wind-tunnel models are detailed, including a description of the necessary modifications of the design geometry to format it for use by a numerically controlled machine for the actual model construction.

  2. FIT ANALYSIS OF INDOSAT DOMPETKU BUSINESS MODEL USING A STRATEGIC DIAGNOSIS APPROACH

    Directory of Open Access Journals (Sweden)

    Fauzi Ridwansyah

    2015-09-01

Full Text Available Mobile payment is an industry's response to global and regional technological drivers, as well as national socio-economic drivers in the development of a less-cash society. The purposes of this study were (1) identifying the positioning of PT. Indosat in responding to the Indonesian mobile payment market, (2) analyzing Indosat's internal capabilities and business model fit with environment turbulence, and (3) formulating the optimum mobile payment business model development design for Indosat. The method used in this study was a combination of qualitative and quantitative analysis through in-depth interviews with purposive judgment sampling. The analysis tools used in this study were the Business Model Canvas (BMC) and Ansoff's Strategic Diagnosis. The interviewees were representatives of PT. Indosat internal management and mobile payment business value chain stakeholders. Based on the BMC mapping, which was then analyzed with the strategic diagnosis model, a considerable gap (>1) was found between the current market environment and Indosat's strategic aggressiveness on one hand and the expected future level of environment turbulence on the other. Therefore, the changes in competitive strategy that need to be conducted include (1) developing a new customer segment, (2) shifting the value proposition towards the extensification of mobile payment, (3) monetizing the value proposition effectively, and (4) integrating effective collaboration to harmonize the company's objective with the government's vision. Keywords: business model canvas, Indosat, mobile payment, less cash society, strategic diagnosis

  3. A new fit-for-purpose model testing framework: Decision Crash Tests

    Science.gov (United States)

    Tolson, Bryan; Craig, James

    2016-04-01

Decision-makers in water resources are often burdened with selecting appropriate multi-million dollar strategies to mitigate the impacts of climate or land use change. Unfortunately, the suitability of existing hydrologic simulation models to accurately inform decision-making is in doubt because the testing procedures used to evaluate model utility (i.e., model validation) are insufficient. For example, many authors have identified that a good standard framework for model testing called the Klemes Crash Tests (KCTs), which are the classic model validation procedures from Klemeš (1986) that Andréassian et al. (2009) rename as KCTs, have yet to become common practice in hydrology. Furthermore, Andréassian et al. (2009) claim that the progression of hydrological science requires widespread use of KCTs and the development of new crash tests. Existing simulation (not forecasting) model testing procedures such as KCTs look backwards (checking for consistency between simulations and past observations) rather than forwards (explicitly assessing if the model is likely to support future decisions). We propose a fundamentally different, forward-looking, decision-oriented hydrologic model testing framework based upon the concept of fit-for-purpose model testing that we call Decision Crash Tests or DCTs. Key DCT elements are i) the model purpose (i.e., the decision the model is meant to support) must be identified so that model outputs can be mapped to management decisions, and ii) the framework evaluates not just the selected hydrologic model but the entire suite of model-building decisions associated with model discretization, calibration, etc. The framework is constructed to directly and quantitatively evaluate model suitability. The DCT framework is applied to a model building case study on the Grand River in Ontario, Canada. A hypothetical binary decision scenario is analysed (upgrade or not upgrade the existing flood control structure) under two different sets of model building

  4. Ultra high energy interaction models for Monte Carlo calculations: what model is the best fit

    Energy Technology Data Exchange (ETDEWEB)

    Stanev, Todor [Bartol Research Institute, University of Delaware, Newark DE 19716 (United States)

    2006-01-15

    We briefly outline two methods for extension of hadronic interaction models to extremely high energy. Then we compare the main characteristics of representative computer codes that implement the different models and give examples of air shower parameters predicted by those codes.

  5. Model for fitting longitudinal traits subject to threshold response applied to genetic evaluation for heat tolerance

    Directory of Open Access Journals (Sweden)

    Misztal Ignacy

    2009-01-01

Full Text Available Abstract A semi-parametric non-linear longitudinal hierarchical model is presented. The model assumes that individual variation exists both in the degree of the linear change of performance (slope) beyond a particular threshold of the independent variable scale and in the magnitude of the threshold itself; these individual variations are attributed to genetic and environmental components. During implementation via a Bayesian MCMC approach, threshold levels were sampled using a Metropolis step because their fully conditional posterior distributions do not have a closed form. The model was tested by simulation following designs similar to previous studies on genetics of heat stress. Posterior means of parameters of interest, under all simulation scenarios, were close to their true values with the latter always being included in the uncertain regions, indicating an absence of bias. The proposed models provide flexible tools for studying genotype by environmental interaction as well as for fitting other longitudinal traits subject to abrupt changes in the performance at particular points on the independent variable scale.

  6. Ignoring imperfect detection in biological surveys is dangerous: a response to 'fitting and interpreting occupancy models'.

    Directory of Open Access Journals (Sweden)

    Gurutzeta Guillera-Arroita

Full Text Available In a recent paper, Welsh, Lindenmayer and Donnelly (WLD) question the usefulness of models that estimate species occupancy while accounting for detectability. WLD claim that these models are difficult to fit and argue that disregarding detectability can be better than trying to adjust for it. We think that this conclusion and subsequent recommendations are not well founded and may negatively impact the quality of statistical inference in ecology and related management decisions. Here we respond to WLD's claims, evaluating in detail their arguments, using simulations and/or theory to support our points. In particular, WLD argue that both disregarding and accounting for imperfect detection lead to the same estimator performance regardless of sample size when detectability is a function of abundance. We show that this, the key result of their paper, only holds for cases of extreme heterogeneity like the single scenario they considered. Our results illustrate the dangers of disregarding imperfect detection. When ignored, occupancy and detection are confounded: the same naïve occupancy estimates can be obtained for very different true levels of occupancy so the size of the bias is unknowable. Hierarchical occupancy models separate occupancy and detection, and imprecise estimates simply indicate that more data are required for robust inference about the system in question. As for any statistical method, when underlying assumptions of simple hierarchical models are violated, their reliability is reduced. Resorting in those instances where hierarchical occupancy models do not perform well to the naïve occupancy estimator does not provide a satisfactory solution. The aim should instead be to achieve better estimation, by minimizing the effect of these issues during design, data collection and analysis, ensuring that the right amount of data is collected and model assumptions are met, considering model extensions where appropriate.
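The confounding of occupancy and detection described above is easy to demonstrate by simulation. A minimal sketch with invented parameter values: the naïve estimator (fraction of sites with at least one detection) converges not to the true occupancy ψ but to ψ·(1 − (1 − p)^J), so it systematically underestimates occupancy whenever per-visit detection p < 1:

```python
import numpy as np

rng = np.random.default_rng(7)

# simulate an occupancy survey: psi = true occupancy, p = per-visit detection
psi, p, n_sites, n_visits = 0.6, 0.3, 2000, 3
occupied = rng.random(n_sites) < psi
detections = occupied[:, None] & (rng.random((n_sites, n_visits)) < p)

# naive estimator: fraction of sites with at least one detection
naive = detections.any(axis=1).mean()

# its expectation confounds occupancy with detectability
expected_naive = psi * (1 - (1 - p) ** n_visits)
print(round(naive, 2), round(expected_naive, 2))
```

Different (ψ, p) pairs with the same product-form expectation would produce indistinguishable naïve estimates, which is the unknowable-bias point made in the abstract.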

  7. Assessing performance of Bayesian state-space models fit to Argos satellite telemetry locations processed with Kalman filtering.

    Directory of Open Access Journals (Sweden)

    Mónica A Silva

Full Text Available Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with the KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with ARGOS satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to "true" GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, location and behavioural states estimated by switching state-space models (SSSMs) fitted to data derived from KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6 ± 5.6 km) was nearly half that of LS estimates (11.6 ± 8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales' behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates.
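The benefit of filtering noisy position fixes can be illustrated with a scalar Kalman filter on synthetic data. This is a generic 1-D sketch with invented noise variances, not Argos' algorithm or the paper's Bayesian state-space models; it simply shows the filtered track ending up closer to the truth than the raw observations:

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D random-walk state observed with large, Argos-like position errors
q, r, n = 0.01, 1.0, 300          # process variance, observation variance, steps
truth = np.cumsum(np.sqrt(q) * rng.standard_normal(n))
obs = truth + np.sqrt(r) * rng.standard_normal(n)

# scalar Kalman filter: predict, then correct with the Kalman gain
x, P = 0.0, 1.0
est = np.empty(n)
for k in range(n):
    P += q                         # predict: uncertainty grows by process noise
    K = P / (P + r)                # Kalman gain
    x += K * (obs[k] - x)          # update state toward the observation
    P *= (1 - K)                   # update: uncertainty shrinks
    est[k] = x

rmse_obs = np.sqrt(np.mean((obs - truth) ** 2))
rmse_kf = np.sqrt(np.mean((est - truth) ** 2))
print(rmse_kf < rmse_obs)
```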

  8. An approximation to the adaptive exponential integrate-and-fire neuron model allows fast and predictive fitting to physiological data

    Directory of Open Access Journals (Sweden)

    Loreen eHertäg

    2012-09-01

Full Text Available For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ('in-vivo-like') input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a 'high-throughput' model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
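The closed-form f-I fitting idea can be illustrated with the simpler leaky integrate-and-fire rate formula (the AdEx approximation in the paper is more involved; all parameter values and units below are invented for illustration):

```python
import math

def lif_rate(I, tau_m, R=0.1, v_th=15.0, t_ref=0.002):
    """Closed-form f-I curve of a leaky integrate-and-fire neuron (Hz).
    Returns 0 below rheobase (R*I <= v_th); no numerical integration needed."""
    drive = R * I
    if drive <= v_th:
        return 0.0
    return 1.0 / (t_ref + tau_m * math.log(drive / (drive - v_th)))

# synthetic "recorded" f-I points generated with tau_m = 0.020 s
currents = [200, 300, 400, 500, 600]
observed = [lif_rate(I, 0.020) for I in currents]

# one-parameter least-squares fit by grid search over tau_m;
# each candidate costs only a few closed-form evaluations
best_tau, best_sse = None, float("inf")
for k in range(5, 50):
    tau = k / 1000.0
    sse = sum((lif_rate(I, tau) - f) ** 2 for I, f in zip(currents, observed))
    if sse < best_sse:
        best_tau, best_sse = tau, sse
print(best_tau)  # 0.02
```

Because every candidate parameter set is scored with a closed-form expression rather than an ODE solver, the search is cheap; this is the source of the roughly hundred-fold speed-up the abstract reports.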

  9. Fitting models of continuous trait evolution to incompletely sampled comparative data using approximate Bayesian computation.

    Science.gov (United States)

    Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E

    2012-03-01

    In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov-Chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.
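The ABC ingredient of MECCA can be illustrated with a bare-bones rejection sampler for the rate of Brownian trait evolution on a star phylogeny — a drastic simplification of the actual hybrid likelihood/ABC-MCMC method, with all numbers synthetic:

```python
import random
import statistics

random.seed(42)

def simulate_tips(sigma, n_tips=40, t=1.0):
    """Tip trait values under Brownian motion on a star phylogeny:
    each tip is an independent N(0, sigma^2 * t) draw."""
    return [random.gauss(0.0, sigma * t ** 0.5) for _ in range(n_tips)]

# "observed" data generated with a true rate of sigma = 1.5
observed = simulate_tips(1.5)
s_obs = statistics.stdev(observed)  # summary statistic

# ABC rejection: draw candidate rates from a uniform prior and keep
# those whose simulated summary statistic lands near the observed one
accepted = []
for _ in range(3000):
    sigma = random.uniform(0.1, 5.0)
    if abs(statistics.stdev(simulate_tips(sigma)) - s_obs) < 0.15:
        accepted.append(sigma)

print(len(accepted) > 0)
```

The accepted draws approximate the posterior for the rate without ever evaluating a likelihood — which is what makes ABC usable when incomplete sampling renders the likelihood intractable.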

  10. Supersymmetric Fits after the Higgs Discovery and Implications for Model Building

    CERN Document Server

    Ellis, John

    2014-01-01

The data from the first run of the LHC at 7 and 8 TeV, together with the information provided by other experiments such as precision electroweak measurements, flavour measurements, the cosmological density of cold dark matter and the direct search for the scattering of dark matter particles in the LUX experiment, provide important constraints on supersymmetric models. Important information is provided by the ATLAS and CMS measurements of the mass of the Higgs boson, as well as the negative results of searches at the LHC for events with missing transverse energy accompanied by jets, and the LHCb and CMS measurements of BR($B_s \\to \\mu^+ \\mu^-$). Results are presented from frequentist analyses of the parameter spaces of the CMSSM and NUHM1. The global $\chi^2$ functions for the supersymmetric models vary slowly over most of the parameter spaces allowed by the Higgs mass and the missing transverse energy search, with best-fit values that are comparable to the $\chi^2$ for the Standard Model. The $95\%$ CL lower...

  11. Minimal see-saw model predicting best fit lepton mixing angles

    International Nuclear Information System (INIS)

    King, Stephen F.

    2013-01-01

We discuss a minimal predictive see-saw model in which the right-handed neutrino mainly responsible for the atmospheric neutrino mass has couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1) and the right-handed neutrino mainly responsible for the solar neutrino mass has couplings to (ν_e, ν_μ, ν_τ) proportional to (1,4,2), with a relative phase η = −2π/5. We show how these patterns of couplings could arise from an A_4 family symmetry model of leptons, together with Z_3 and Z_5 symmetries which fix η = −2π/5 up to a discrete phase choice. The PMNS matrix is then completely determined by one remaining parameter, which is used to fix the neutrino mass ratio m_2/m_3. The model predicts the lepton mixing angles θ_12 ≈ 34°, θ_23 ≈ 41°, θ_13 ≈ 9.5°, which exactly coincide with the current best-fit values for a normal neutrino mass hierarchy, together with the distinctive prediction for the CP-violating oscillation phase δ ≈ 106°

  12. Physician behavioral adaptability: A model to outstrip a "one size fits all" approach.

    Science.gov (United States)

    Carrard, Valérie; Schmid Mast, Marianne

    2015-10-01

    Based on a literature review, we propose a model of physician behavioral adaptability (PBA) with the goal of inspiring new research. PBA means that the physician adapts his or her behavior according to patients' different preferences. The PBA model shows how physicians infer patients' preferences and adapt their interaction behavior from one patient to the other. We claim that patients will benefit from better outcomes if their physicians show behavioral adaptability rather than a "one size fits all" approach. This literature review is based on a literature search of the PsycINFO(®) and MEDLINE(®) databases. The literature review and first results stemming from the authors' research support the validity and viability of parts of the PBA model. There is evidence suggesting that physicians are able to show behavioral flexibility when interacting with their different patients, that a match between patients' preferences and physician behavior is related to better consultation outcomes, and that physician behavioral adaptability is related to better consultation outcomes. Training of physicians' behavioral flexibility and their ability to infer patients' preferences can facilitate physician behavioral adaptability and positive patient outcomes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Modeling of physical fitness of young karatyst on the pre basic training

    Directory of Open Access Journals (Sweden)

    V. A. Galimskyi

    2014-09-01

Full Text Available Purpose: to develop a physical fitness correction program for the preliminary basic training stage on the basis of model characteristics. Material: 57 young karate athletes aged 9-11 years took part in the research. Results: the level of general and special physical preparedness of young karate athletes aged 9-11 years was determined. The control group trained under the existing youth sports school program for Muay Thai (Thai boxing). For the experimental group, a program for the selective development of general and special physical qualities was developed, with training sessions based on model characteristics. The special program contains 6 directions: 1. Development of static and dynamic balance; 2. Development of vestibular stability (precision of movements after rotation); 3. Development of movement rate; 4. Development of the capacity for rapid restructuring of movements; 5. Development of the ability to differentiate power and spatial parameters of movement; 6. Development of the ability to perform jumping movements with rotation. The development of special physical qualities was continued in work on improving the technique of complex striking motions in place and with movement. Conclusions: the selective development of special physical qualities based on model training sessions gives a significant performance advantage over the control group.

  14. GPCRM: a homology modeling web service with triple membrane-fitted quality assessment of GPCR models.

    Science.gov (United States)

    Miszta, Przemyslaw; Pasznik, Pawel; Jakowiecki, Jakub; Sztyler, Agnieszka; Latek, Dorota; Filipek, Slawomir

    2018-05-21

Due to the involvement of G protein-coupled receptors (GPCRs) in most of the physiological and pathological processes in humans, they have been attracting a lot of attention from the pharmaceutical industry as well as from the scientific community. Therefore, the need for new, high quality structures of GPCRs is enormous. The updated homology modeling service GPCRM (http://gpcrm.biomodellab.eu/) meets those expectations by greatly reducing the execution time of submissions (from days to hours/minutes) with nearly the same average quality of obtained models. Additionally, due to three different scoring functions (Rosetta, Rosetta-MP, BCL::Score) it is possible to select accurate models for the required purposes: the structure of the binding site, the transmembrane domain or the overall shape of the receptor. Currently, no other web service for GPCR modeling provides this possibility. GPCRM is continually upgraded in a semi-automatic way, and the number of template structures has increased from 20 in 2013 to over 90, including structures of the same receptor with different ligands, which can influence the structure in more than a simple on/off manner. Two types of protein viewers can be used for visual inspection of obtained models. The extended sortable tables with available templates provide links to external databases and display ligand-receptor interactions in visual form.

  15. Using Finite Model Analysis and Out of Hot Cell Surrogate Rod Testing to Analyze High Burnup Used Nuclear Fuel Mechanical Properties

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jy-An John [ORNL; Jiang, Hao [ORNL; Wang, Hong [ORNL

    2014-07-01

Based on a series of FEA simulations, the discussions and the conclusions concerning the impact of interface bonding efficiency on SNF vibration integrity are provided in this report; this includes the moment carrying capacity distribution between pellets and clad, and the impact of cohesion bonding on the flexural rigidity of the surrogate rod system. As progressive de-bonding occurs at the pellet-pellet interfaces and at the pellet-clad interface, the load ratio of the bending moment carrying capacity gradually shifts from the pellets to the clad; the clad starts to carry a significant portion of the bending moment resistance until reaching the full de-bonding state at the pellet-pellet interface regions. This results in localized plastic deformation of the clad at the pellet-pellet-clad interface region; the associated plastic deformation of the SS clad leads to a significant degradation in the stiffness of the surrogate rod. For instance, the flexural rigidity was reduced by 39% from the perfect bond state to the de-bonded state at the pellet-pellet interfaces.

  16. Covariances for neutron cross sections calculated using a regional model based on local-model fits to experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D.L.; Guenther, P.T.

    1983-11-01

    We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references.
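The error-propagation step the authors describe — a propagated variance of the form gᵀCg with a model-parameter covariance matrix C — and the roughly 50% error reduction from correlations can be sketched with made-up numbers (the gradient and covariance below are purely illustrative, not values from the paper):

```python
import math

def propagated_sigma(grad, cov):
    """Standard first-order error propagation: sigma_f = sqrt(g^T C g)."""
    n = len(grad)
    var = sum(grad[i] * cov[i][j] * grad[j]
              for i in range(n) for j in range(n))
    return math.sqrt(var)

# hypothetical sensitivities of a cross section to two model parameters
grad = [2.0, -1.5]

sig1, sig2, rho = 0.1, 0.2, 0.8        # parameter errors and correlation
cov = [[sig1 ** 2,        rho * sig1 * sig2],
       [rho * sig1 * sig2, sig2 ** 2       ]]

correlated = propagated_sigma(grad, cov)
uncorrelated = math.hypot(grad[0] * sig1, grad[1] * sig2)  # naive quadrature

print(round(correlated / uncorrelated, 2))  # 0.51: correlations halve the error
```

With opposite-sign sensitivities and a strong positive parameter correlation, the off-diagonal terms of gᵀCg are negative, which is exactly the mechanism by which correlated uncertainties "conspire" to shrink the propagated error relative to an uncorrelated quadrature sum.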

  17. Covariances for neutron cross sections calculated using a regional model based on local-model fits to experimental data

    International Nuclear Information System (INIS)

    Smith, D.L.; Guenther, P.T.

    1983-11-01

    We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references

  18. Fitting diameter distribution models to data from forest inventories with concentric plot design

    Energy Technology Data Exchange (ETDEWEB)

    Nanos, N.; Sjöstedt de Luna, S.

    2017-11-01

Aim: Several national forest inventories use a complex plot design based on multiple concentric subplots where smaller diameter trees are inventoried when lying in the smaller-radius subplots and ignored otherwise. Data from these plots are truncated with threshold (truncation) diameters varying according to the distance from the plot centre. In this paper we designed a maximum likelihood method to fit the Weibull diameter distribution to data from concentric plots. Material and methods: Our method (M1) was based on multiple truncated probability density functions to build the likelihood. In addition, we used an alternative method (M2) presented recently. We used methods M1 and M2 as well as two other reference methods to estimate the Weibull parameters in 40000 simulated plots. The spatial tree pattern of the simulated plots was generated using four models of spatial point patterns. Two error indices were used to assess the relative performance of M1 and M2 in estimating relevant stand-level variables. In addition, we estimated the Quadratic Mean plot Diameter (QMD) using Expansion Factors (EFs). Main results: Methods M1 and M2 produced comparable estimation errors in random and cluster tree spatial patterns. Method M2 produced biased parameter estimates in plots with inhomogeneous Poisson patterns. Estimation of QMD using EFs produced biased results in plots with inhomogeneous-intensity Poisson patterns. Research highlights: We designed a new method to fit the Weibull distribution to forest inventory data from concentric plots that achieves high accuracy and precision in parameter estimates regardless of the within-plot spatial tree pattern.
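A minimal sketch of the M1 idea — building the likelihood from left-truncated Weibull densities with tree-specific truncation thresholds — under simplified synthetic assumptions (the thresholds, plot radii, sample size, and grid search are invented for illustration and are not the paper's design):

```python
import math
import random

random.seed(7)
k_true, lam_true = 2.0, 20.0

# concentric-plot censoring: trees far from the centre are recorded
# only if their diameter exceeds a larger threshold
def threshold(distance):
    return 7.5 if distance < 10.0 else 12.5

data = []  # (diameter, truncation threshold) pairs
while len(data) < 400:
    d = lam_true * (-math.log(random.random())) ** (1.0 / k_true)
    r = 20.0 * math.sqrt(random.random())   # uniform tree location in plot
    c = threshold(r)
    if d >= c:                              # smaller trees go unrecorded
        data.append((d, c))

def neg_loglik(k, lam):
    """Sum of left-truncated Weibull log-densities, one threshold per tree:
    log f(d) - log S(c), where S is the Weibull survival function."""
    ll = 0.0
    for d, c in data:
        log_pdf = (math.log(k / lam) + (k - 1) * math.log(d / lam)
                   - (d / lam) ** k)
        log_surv = -(c / lam) ** k          # truncation correction
        ll += log_pdf - log_surv
    return -ll

best = min(((neg_loglik(k / 10.0, lam), k / 10.0, lam)
            for k in range(10, 40) for lam in range(10, 31)),
           key=lambda t: t[0])
print(best[1], best[2])  # estimates near the true (2.0, 20)
```

Dividing each density by the survival probability at that tree's own threshold is what lets a single likelihood combine observations truncated at different diameters.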

  19. Experimental model for non-Newtonian fluid viscosity estimation: Fit to mathematical expressions

    Directory of Open Access Journals (Sweden)

    Guillem Masoliver i Marcos

    2017-01-01

Full Text Available The construction process of a viscometer, developed in collaboration with a final-project student, is here presented. It is intended to be used by first-year students to study viscosity as a fluid property, for both Newtonian and non-Newtonian flows. Viscosity determination is crucial for understanding fluid behaviour related to rheological and physical properties. These have great implications in engineering aspects such as friction or lubrication. With the present experimental model device three different fluids are analyzed (water, ketchup, and a mixture of cornstarch and water). Tangential stress is measured versus velocity in order to characterize all the fluids under different thermal conditions. A mathematical fitting process is proposed in order to adjust the results to the expected analytical expressions, obtaining good results for these fittings, with R² greater than 0.88 in every case.
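For the non-Newtonian cases, a common fitting choice (not necessarily the one used in the paper) is the power-law model τ = K·γ̇ⁿ, fitted by linear least squares in log-log space; the data below are synthetic:

```python
import math

def fit_power_law(shear_rate, stress):
    """Fit tau = K * gamma_dot**n by least squares on log-log data.
    Returns (K, n, r_squared)."""
    xs = [math.log(g) for g in shear_rate]
    ys = [math.log(t) for t in stress]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx                       # flow-behaviour index n
    intercept = my - slope * mx             # ln K
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return math.exp(intercept), slope, 1.0 - ss_res / ss_tot

# synthetic shear-thinning fluid (ketchup-like): tau = 3 * gamma_dot ** 0.6
gammas = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0]
taus = [3.0 * g ** 0.6 for g in gammas]
K, n, r2 = fit_power_law(gammas, taus)
print(round(K, 2), round(n, 2), r2 >= 0.88)  # 3.0 0.6 True
```

An index n < 1 indicates shear-thinning behaviour (as for ketchup), n > 1 shear-thickening (as for a cornstarch-water mixture), and n = 1 recovers the Newtonian case with K as the viscosity.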

  20. Patient-centered medical home model: do school-based health centers fit the model?

    Science.gov (United States)

    Larson, Satu A; Chapman, Susan A

    2013-01-01

    School-based health centers (SBHCs) are an important component of health care reform. The SBHC model of care offers accessible, continuous, comprehensive, family-centered, coordinated, and compassionate care to infants, children, and adolescents. These same elements comprise the patient-centered medical home (PCMH) model of care being promoted by the Affordable Care Act with the hope of lowering health care costs by rewarding clinicians for primary care services. PCMH survey tools have been developed to help payers determine whether a clinician/site serves as a PCMH. Our concern is that current survey tools will be unable to capture how a SBHC may provide a medical home and therefore be denied needed funding. This article describes how SBHCs might meet the requirements of one PCMH tool. SBHC stakeholders need to advocate for the creation or modification of existing survey tools that allow the unique characteristics of SBHCs to qualify as PCMHs.

  1. Fitness for duty: A tried-and-true model for decision making

    International Nuclear Information System (INIS)

    Horn, G.L.

    1989-01-01

The US Nuclear Regulatory Commission (NRC) rules and regulations pertaining to fitness for duty specify development of programs designed to ensure that nuclear power plant personnel are not under the influence of legal or illegal substances that cause mental or physical impairment of work performance such that public safety is compromised. These regulations specify the type of decision loop to employ in determining the employee's movement through the process, from initial restriction of access to the point at which his access authorization is restored. Suggestions are also offered to determine the roles that various components of the organization should take in the decision loop. This paper discusses some implications and labor concerns arising from the suggested role of employee assistance programs (EAPs) in the decision loop for clinical assessment and return-to-work evaluation of chemical testing failures. A model for a decision loop addressing some of the issues raised is presented. The proposed model has been implemented in one nuclear facility and has withstood the scrutiny of an NRC audit

  2. Temperature dependence of bulk respiration of crop stands. Measurement and model fitting

    International Nuclear Information System (INIS)

    Tani, Takashi; Arai, Ryuji; Tako, Yasuhiro

    2007-01-01

The objective of the present study was to examine whether the temperature dependence of respiration at a crop-stand scale could be directly represented by an Arrhenius function that is widely used for representing the temperature dependence of leaf respiration. We determined temperature dependences of bulk respiration of monospecific stands of rice and soybean within a range of air temperature from 15 to 30 °C using large closed chambers. Measured responses of respiration rates of the two stands were well fitted by the Arrhenius function (R² = 0.99). In the existing model to assess the local radiological impact of anthropogenic carbon-14, effects of the physical environmental factors on photosynthesis and respiration of crop stands are not taken into account for the calculation of the net amount of carbon per cultivation area in crops at harvest, which is the crucial parameter for the estimation of the activity concentration of carbon-14 in crops. Our result indicates that the Arrhenius function is useful for incorporating the effect of temperature on respiration of crop stands into the model, which is expected to contribute to a more realistic estimate of the activity concentration of carbon-14 in crops. (author)
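The Arrhenius fit used in such studies can be sketched as a linear regression of ln R against 1/T; the rates and activation energy below are synthetic, not the paper's measurements:

```python
import math

R_GAS = 8.314  # gas constant, J mol^-1 K^-1

def fit_arrhenius(temps_c, rates):
    """Fit R(T) = A * exp(-Ea / (R_GAS * T)) by linear regression of
    ln(rate) on 1/T (T in kelvin). Returns (A, Ea in J/mol)."""
    xs = [1.0 / (t + 273.15) for t in temps_c]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope * R_GAS

# synthetic stand respiration over 15-30 degC with Ea = 55 kJ/mol
temps = [15.0, 20.0, 25.0, 30.0]
rates = [2.0e9 * math.exp(-55000.0 / (R_GAS * (t + 273.15))) for t in temps]
A, Ea = fit_arrhenius(temps, rates)
print(round(Ea / 1000.0))  # 55 (kJ/mol recovered)
```

The linearisation (ln R is linear in 1/T) is why a simple least-squares slope suffices: the slope is −Ea/R_GAS and the intercept is ln A.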

  3. Universal fit to p-p elastic diffraction scattering from the Lorentz contracted geometrical model

    International Nuclear Information System (INIS)

    Hansen, P.H.; Krisch, A.D.

    1976-01-01

The prediction of the Lorentz contracted geometrical model for proton-proton elastic scattering at small angles is examined. The model assumes that when two high energy particles collide, each behaves as a geometrical object which has a Gaussian density and is spherically symmetric except for the Lorentz contraction in the incident direction. It is predicted that dσ/dt should be independent of energy when plotted against the variable β²P⊥²σ_TOT(s)/38.3. Thus the energy dependence of the diffraction peak slope (b in an e^(−b|t|) plot) is given by b(s) = A²β²σ_TOT(s)/38.3, where β is the proton's c.m. velocity and A is its radius. Recently measured values of σ_TOT(s) were used and an excellent fit obtained to the elastic slope in both t regions (−t below and above 0.1 (GeV/c)²) at all energies from s = 6 to 4000 (GeV/c)². (Auth.)

  4. A differential equation for the asymptotic fitness distribution in the Bak-Sneppen model with five species.

    Science.gov (United States)

    Schlemm, Eckhard

    2015-09-01

    The Bak-Sneppen model is an abstract representation of a biological system that evolves according to the Darwinian principles of random mutation and selection. The species in the system are characterized by a numerical fitness value between zero and one. We show that in the case of five species the steady-state fitness distribution can be obtained as a solution to a linear differential equation of order five with hypergeometric coefficients. Similar representations for the asymptotic fitness distribution in larger systems may help pave the way towards a resolution of the question of whether or not, in the limit of infinitely many species, the fitness is asymptotically uniformly distributed on the interval [fc, 1] with fc ≳ 2/3. Copyright © 2015 Elsevier Inc. All rights reserved.
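The Bak-Sneppen dynamics for five species can be simulated directly; the sketch below illustrates the stochastic model itself, not the paper's analytical differential-equation solution for the steady-state distribution:

```python
import random

random.seed(0)
N = 5
fitness = [random.random() for _ in range(N)]

# Bak-Sneppen dynamics on a ring: at each step the least-fit species
# and its two neighbours receive new random fitness values (mutation
# of the weakest plus its interacting neighbours)
for _ in range(100000):
    i = min(range(N), key=lambda j: fitness[j])
    for j in (i - 1, i, (i + 1) % N):   # i-1 == -1 wraps around in Python
        fitness[j] = random.random()

print(all(0.0 <= f <= 1.0 for f in fitness), len(fitness))  # True 5
```

Sampling the fitness vector after many such steps and histogramming the values is the empirical counterpart of the steady-state distribution that the paper derives exactly for N = 5.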

  5. One model to fit all? The pursuit of integrated earth system models in GAIM and AIMES

    OpenAIRE

    Uhrqvist, Ola

    2015-01-01

    Images of Earth from space popularized the view of our planet as a single, fragile entity against the vastness and darkness of space. In the 1980s, the International Geosphere-Biosphere Program (IGBP) was set up to produce a predictive understanding of this fragile entity as the ‘Earth System.’ In order to do so, the program sought to create a common research framework for the different disciplines involved. It suggested that integrated numerical models could provide such a framework. The pap...

  6. Tanning Shade Gradations of Models in Mainstream Fitness and Muscle Enthusiast Magazines: Implications for Skin Cancer Prevention in Men.

    Science.gov (United States)

    Basch, Corey H; Hillyer, Grace Clarke; Ethan, Danna; Berdnik, Alyssa; Basch, Charles E

    2015-07-01

    Tanned skin has been associated with perceptions of fitness and social desirability. Portrayal of models in magazines may reflect and perpetuate these perceptions. Limited research has investigated tanning shade gradations of models in men's versus women's fitness and muscle enthusiast magazines. Such findings are relevant in light of increased incidence and prevalence of melanoma in the United States. This study evaluated and compared tanning shade gradations of adult Caucasian male and female model images in mainstream fitness and muscle enthusiast magazines. Sixty-nine U.S. magazine issues (spring and summer, 2013) were utilized. Two independent reviewers rated tanning shade gradations of adult Caucasian male and female model images on magazines' covers, advertisements, and feature articles. Shade gradations were assessed using stock photographs of Caucasian models with varying levels of tanned skin on an 8-shade scale. A total of 4,683 images were evaluated. Darkest tanning shades were found among males in muscle enthusiast magazines and lightest among females in women's mainstream fitness magazines. By gender, male model images were 54% more likely to portray a darker tanning shade. In this study, images in men's (vs. women's) fitness and muscle enthusiast magazines portrayed Caucasian models with darker skin shades. Despite these magazines' fitness-related messages, pro-tanning images may promote attitudes and behaviors associated with higher skin cancer risk. To date, this is the first study to explore tanning shades in men's magazines of these genres. Further research is necessary to identify effects of exposure to these images among male readers. © The Author(s) 2014.

  7. Fitness landscapes, heuristics and technological paradigms: a critique on random search models in evolutionary economics

    NARCIS (Netherlands)

    Frenken, K.

    2001-01-01

    The biological evolution of complex organisms, in which the functioning of genes is interdependent, has been analyzed as "hill-climbing" on NK fitness landscapes through random mutation and natural selection. In evolutionary economics, NK fitness landscapes have been used to simulate the evolution
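An NK-landscape "hill-climbing" walk of the kind discussed here can be sketched as follows (N, K, the walk length, and the memoised random landscape are arbitrary illustrative choices):

```python
import functools
import random

random.seed(3)
N, K = 10, 2

@functools.lru_cache(maxsize=None)
def contribution(i, sub):
    """Random fitness contribution of gene i given its own state and the
    states of its K neighbours (memoised, so the landscape stays fixed)."""
    return random.random()

def fitness(genome):
    total = 0.0
    for i in range(N):
        # epistatic interdependence: gene i's contribution depends on
        # K other genes (here, its K successors on a ring)
        sub = tuple(genome[(i + d) % N] for d in range(K + 1))
        total += contribution(i, sub)
    return total / N

# adaptive walk: accept single-gene mutations only if fitness improves
genome = tuple(random.randint(0, 1) for _ in range(N))
trajectory = [fitness(genome)]
for _ in range(500):
    i = random.randrange(N)
    mutant = genome[:i] + (1 - genome[i],) + genome[i + 1:]
    if fitness(mutant) > fitness(genome):
        genome = mutant
    trajectory.append(fitness(genome))

print(trajectory[-1] >= trajectory[0])  # the walk never moves downhill
```

Because only improving mutations are accepted, the walk terminates on a local optimum; raising K makes the landscape more rugged and such random search less effective, which is central to the critique of random-search models in this literature.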

  8. Universal Rate Model Selector: A Method to Quickly Find the Best-Fit Kinetic Rate Model for an Experimental Rate Profile

    Science.gov (United States)

    2017-08-01

    Kinetic rate models range from pure chemical reactions to mass transfer... The rate model that best fits the experimental data is a first-order or homogeneous catalytic reaction ... Avrami (7), and intraparticle diffusion (6) rate equations to name a few. A single fitting algorithm (kinetic rate model) for a reaction does not

  9. Pulmonary lobe segmentation based on ridge surface sampling and shape model fitting

    Energy Technology Data Exchange (ETDEWEB)

    Ross, James C., E-mail: jross@bwh.harvard.edu [Channing Laboratory, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Surgical Planning Lab, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Laboratory of Mathematics in Imaging, Brigham and Women's Hospital, Boston, Massachusetts 02126 (United States); Kindlmann, Gordon L. [Computer Science Department and Computation Institute, University of Chicago, Chicago, Illinois 60637 (United States); Okajima, Yuka; Hatabu, Hiroto [Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Díaz, Alejandro A. [Pulmonary and Critical Care Division, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts 02215 and Department of Pulmonary Diseases, Pontificia Universidad Católica de Chile, Santiago (Chile); Silverman, Edwin K. [Channing Laboratory, Brigham and Women's Hospital, Boston, Massachusetts 02215 and Pulmonary and Critical Care Division, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts 02215 (United States); Washko, George R. [Pulmonary and Critical Care Division, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts 02215 (United States); Dy, Jennifer [ECE Department, Northeastern University, Boston, Massachusetts 02115 (United States); Estépar, Raúl San José [Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Surgical Planning Lab, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Laboratory of Mathematics in Imaging, Brigham and Women's Hospital, Boston, Massachusetts 02126 (United States)

    2013-12-15

    Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory computed tomography (CT) datasets. The authors rely on ridge surface image features indicating fissure locations and a novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particles data to discriminate between fissure and nonfissure candidates. The resulting set of particle points are used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low dose acquisition protocols. Dice scores indicate that only 29 out of 500 (5.85%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT. Conclusions: The

  10. Pulmonary lobe segmentation based on ridge surface sampling and shape model fitting

    International Nuclear Information System (INIS)

    Ross, James C.; Kindlmann, Gordon L.; Okajima, Yuka; Hatabu, Hiroto; Díaz, Alejandro A.; Silverman, Edwin K.; Washko, George R.; Dy, Jennifer; Estépar, Raúl San José

    2013-01-01

    Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory computed tomography (CT) datasets. The authors rely on ridge surface image features indicating fissure locations and a novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particles data to discriminate between fissure and nonfissure candidates. The resulting set of particle points are used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low dose acquisition protocols. Dice scores indicate that only 29 out of 500 (5.85%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT. 
Conclusions: The proposed
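The final surface-fitting step this record describes — interpolating a lobe boundary through accepted fissure points with a thin plate spline — can be sketched with scipy's radial basis function interpolator. This is only an illustration on synthetic points, not the authors' implementation; the coordinates, heights, and smoothing value are arbitrary assumptions.

```python
# Hedged sketch: fit a thin plate spline (TPS) surface through candidate
# fissure points, as in the final step of the lobe-boundary algorithm.
# The point data here are synthetic stand-ins for detected fissure particles.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# (x, y) coordinates of accepted fissure particle points, with height z
xy = rng.uniform(0, 100, size=(200, 2))
z = 0.02 * xy[:, 0] ** 2 - 0.5 * xy[:, 1] + rng.normal(0, 0.5, 200)

# Thin plate spline interpolant; `smoothing` trades exactness for robustness
tps = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1.0)

# Evaluate the boundary surface on a regular 50x50 grid
grid = np.stack(np.meshgrid(np.linspace(0, 100, 50),
                            np.linspace(0, 100, 50)), axis=-1).reshape(-1, 2)
surface = tps(grid)
print(surface.shape)
```

In a real pipeline the grid evaluation would be restricted to the lung mask, and one TPS would be fit per lobar boundary.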

  11. The bystander effect model of Brenner and Sachs fitted to lung cancer data in 11 cohorts of underground miners, and equivalence of fit of a linear relative risk model with adjustment for attained age and age at exposure

    International Nuclear Information System (INIS)

    Little, M P

    2004-01-01

Bystander effects following exposure to α-particles have been observed in many experimental systems, and imply that linearly extrapolating low dose risks from high dose data might materially underestimate risk. Brenner and Sachs (2002 Int. J. Radiat. Biol. 78 593-604; 2003 Health Phys. 85 103-8) have recently proposed a model of the bystander effect which they use to explain the inverse dose rate effect observed for lung cancer in underground miners exposed to radon daughters. In this paper we fit the model of the bystander effect proposed by Brenner and Sachs to 11 cohorts of underground miners, taking account of the covariance structure of the data and the period of latency between the development of the first pre-malignant cell and clinically overt cancer. We also fitted a simple linear relative risk model, with adjustment for age at exposure and attained age. The methods that we use for fitting both models are different from those used by Brenner and Sachs, in particular taking account of the covariance structure, which they did not, and omitting certain unjustifiable adjustments to the miner data. The fit of the original model of Brenner and Sachs (with a 0 y period of latency) is generally poor, although it is much improved by assuming a 5 or 6 y period of latency from the first appearance of a pre-malignant cell to cancer. The fit of this latter model is equivalent to that of a linear relative risk model with adjustment for age at exposure and attained age. In particular, both models are capable of describing the observed inverse dose rate effect in this data set.

  12. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

    Science.gov (United States)

    W. Hasan, W. Z.

    2018-01-01

The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system’s modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model. PMID:29351554
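The linear sub-step at the heart of the vector fitting (VF) algorithm named in this record can be shown in a few lines: with a set of poles held fixed, the residues and constant term of a pole-residue model are obtained by linear least squares. This is a hedged sketch on synthetic frequency-response data with assumed poles, not the authors' modified method; a full VF implementation also iteratively relocates the poles.

```python
# Illustrative sketch of VF's residue-identification step: fit
# H(s) ≈ sum_k r_k / (s - p_k) + d for fixed poles p_k by least squares.
import numpy as np

s = 1j * np.linspace(1.0, 100.0, 400)          # samples on the jw axis
true_poles = np.array([-2.0, -30.0])
true_res = np.array([5.0, 40.0])
H = (true_res / (s[:, None] - true_poles)).sum(axis=1) + 0.7  # synthetic data

poles = np.array([-2.0, -30.0])                # assumed (fixed) poles
A = np.hstack([1.0 / (s[:, None] - poles), np.ones((s.size, 1))])

# Solve the real-valued equivalent of the complex least-squares problem
A_ri = np.vstack([A.real, A.imag])
b_ri = np.concatenate([H.real, H.imag])
coef, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
residues, d = coef[:-1], coef[-1]
print(np.round(residues, 3), round(float(d), 3))
```

Because the assumed poles match the data here, the recovered residues and constant term reproduce the synthetic model exactly; with mismatched starting poles, VF would re-estimate the poles from this fit and iterate.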

  13. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Directory of Open Access Journals (Sweden)

    A H Sabry

Full Text Available The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.

  14. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Science.gov (United States)

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.

  15. Surrogate mothering: exploitation or empowerment?

    Science.gov (United States)

    Purdy, Laura M

    1989-01-01

    The morality of surrogate mothering is analyzed from a "consequentialist" framework which attempts to separate those consequences that invariably accompany a given act from those that accompany it only in particular circumstances. Critics of surrogacy argue that it transfers the burden and risk of pregnancy onto another woman, separates sex and reproduction, and separates reproduction and childrearing; none of these acts is necessarily wrong, either morally or for women's or society's basic interests. While surrogate mothering can be rendered immoral if women are coerced into the practice or become victims of subordinating or penalizing contracts, it has the potential to empower women and increase their status in society by providing a job that is less risky and more enjoyable than other jobs women are forced to take and by achieving greater social recognition for reproductive labor.

  16. Are all models created equal? A content analysis of women in advertisements of fitness versus fashion magazines.

    Science.gov (United States)

    Wasylkiw, L; Emms, A A; Meuse, R; Poirier, K F

    2009-03-01

    The current study is a content analysis of women appearing in advertisements in two types of magazines: fitness/health versus fashion/beauty chosen because of their large and predominantly female readerships. Women appearing in advertisements of the June 2007 issue of five fitness/health magazines were compared to women appearing in advertisements of the June 2007 issue of five beauty/fashion magazines. Female models appearing in advertisements of both types of magazines were primarily young, thin Caucasians; however, images of models were more likely to emphasize appearance over performance when they appeared in fashion magazines. This difference in emphasis has implications for future research.

  17. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    Science.gov (United States)

    Hagell, Peter; Westergren, Albert

Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
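The Type I error mechanism at issue in this record can be demonstrated with a short simulation: with 25 item-level fit tests per scale, an unadjusted alpha of 0.05 inflates the family-wise error rate, while a Bonferroni correction controls it. The p-values below are drawn directly from the uniform null distribution rather than generated by RUMM, so this is only a schematic of the statistical point, not of the RUMM fit statistics themselves.

```python
# Family-wise Type I error for 25 item fit tests, with and without
# Bonferroni correction, estimated over 2000 simulated null scales.
import numpy as np

rng = np.random.default_rng(1)
n_items, n_scales, alpha = 25, 2000, 0.05
p = rng.uniform(size=(n_scales, n_items))      # null p-values, one per item

fwer_raw = np.mean((p < alpha).any(axis=1))            # any item "misfits"
fwer_bonf = np.mean((p < alpha / n_items).any(axis=1)) # Bonferroni-adjusted

# Raw rate approaches 1 - 0.95**25 ≈ 0.72; corrected rate stays near 0.05
print(round(float(fwer_raw), 2), round(float(fwer_bonf), 2))
```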

  18. Standard Model updates and new physics analysis with the Unitarity Triangle fit

    International Nuclear Information System (INIS)

    Bevan, A.; Bona, M.; Ciuchini, M.; Derkach, D.; Franco, E.; Silvestrini, L.; Lubicz, V.; Tarantino, C.; Martinelli, G.; Parodi, F.; Schiavi, C.; Pierini, M.; Sordini, V.; Stocchi, A.; Vagnoni, V.

    2013-01-01

We present the summer 2012 update of the Unitarity Triangle (UT) analysis performed by the UTfit Collaboration within the Standard Model (SM) and beyond. The increased accuracy on several of the fundamental constraints is now enhancing some of the tensions amongst and within the constraints themselves. In particular, the long-standing tension between exclusive and inclusive determinations of the V_ub and V_cb CKM matrix elements is now playing a major role. We then present the generalisation of the UT analysis to investigate new physics (NP) effects, updating the constraints on NP contributions to ΔF=2 processes. In the NP analysis, both CKM and NP parameters are fitted simultaneously to obtain the possible NP effects in any specific sector. Finally, based on the NP constraints, we derive upper bounds on the coefficients of the most general ΔF=2 effective Hamiltonian. These upper bounds can be translated into lower bounds on the scale of NP that contributes to these low-energy effective interactions.

  19. A Parametric Model of Shoulder Articulation for Virtual Assessment of Space Suit Fit

    Science.gov (United States)

    Kim, K. Han; Young, Karen S.; Bernal, Yaritza; Boppana, Abhishektha; Vu, Linh Q.; Benson, Elizabeth A.; Jarvis, Sarah; Rajulu, Sudhakar L.

    2016-01-01

    Suboptimal suit fit is a known risk factor for crewmember shoulder injury. Suit fit assessment is however prohibitively time consuming and cannot be generalized across wide variations of body shapes and poses. In this work, we have developed a new design tool based on the statistical analysis of body shape scans. This tool is aimed at predicting the skin deformation and shape variations for any body size and shoulder pose for a target population. This new process, when incorporated with CAD software, will enable virtual suit fit assessments, predictively quantifying the contact volume, and clearance between the suit and body surface at reduced time and cost.
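The statistical shape analysis this record describes — extracting principal modes of body-shape variation from scan data and synthesizing new shapes from mode coefficients — can be sketched with a PCA on flattened scan vectors. The scan data below are random stand-ins, and the dimensions are arbitrary assumptions; the real tool additionally conditions on anthropometry and shoulder pose.

```python
# Minimal PCA-based statistical shape model: scans are flattened to vectors,
# SVD of the centered data yields modes of variation, and a new shape is
# synthesized from a few mode coefficients.
import numpy as np

rng = np.random.default_rng(2)
n_scans, n_points = 40, 300                 # e.g. 100 surface points x 3 coords
scans = rng.normal(size=(n_scans, n_points))

mean_shape = scans.mean(axis=0)
U, S, Vt = np.linalg.svd(scans - mean_shape, full_matrices=False)
modes = Vt[:5]                              # first 5 modes of shape variation

# Synthesize a new shape from mode coefficients b
b = np.array([1.0, -0.5, 0.2, 0.0, 0.0])
new_shape = mean_shape + b @ modes
print(new_shape.shape)
```

In the suit-fit application, `new_shape` would be exported to CAD for virtual clearance and contact-volume checks against the suit geometry.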

  20. Two-Stage Method Based on Local Polynomial Fitting for a Linear Heteroscedastic Regression Model and Its Application in Economics

    Directory of Open Access Journals (Sweden)

    Liyun Su

    2012-01-01

Full Text Available We introduce the extension of local polynomial fitting to the linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function; then the coefficients of the regression model are obtained using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the nonparametric technique of local polynomial estimation, we do not need to know the form of the heteroscedastic function, and we can therefore improve estimation precision when it is unknown. Furthermore, we compare the parameter estimates to reach an optimal fit. We also verify the asymptotic normality of the parameter estimates through numerical simulations. Finally, the approach is applied to a case in economics, indicating that our method is effective in finite-sample situations.
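The two-stage idea in this record can be sketched as follows (an assumption-laden illustration, not the authors' code): stage one fits OLS and smooths the squared residuals with a local kernel estimator to estimate the variance function; stage two refits by weighted least squares with weights 1/σ²(x). For brevity the smoother below is local-constant (Nadaraya-Watson), the simplest member of the local polynomial family the paper uses.

```python
# Two-stage estimation for a linear model with heteroscedastic noise.
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = np.sort(rng.uniform(0, 10, n))
sigma = 0.2 + 0.3 * x                      # noise level grows with x
y = 1.0 + 2.0 * x + rng.normal(0, sigma)

X = np.column_stack([np.ones(n), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = (y - X @ beta_ols) ** 2

# Stage 1: kernel smoothing of squared residuals -> variance function
h = 1.0                                    # bandwidth (assumed, not tuned)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
var_hat = (K * r2[None, :]).sum(axis=1) / K.sum(axis=1)

# Stage 2: weighted (generalized) least squares with estimated weights
w = 1.0 / var_hat
WX = X * w[:, None]
beta_wls = np.linalg.solve(X.T @ WX, WX.T @ y)
print(np.round(beta_wls, 2))               # near the true (1.0, 2.0)
```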

  1. Bedload-surrogate monitoring technologies

    Science.gov (United States)

    Gray, John R.; Laronne, Jonathan B.; Marr, Jeffrey D.G.

    2010-01-01

    Advances in technologies for quantifying bedload fluxes and in some cases bedload size distributions in rivers show promise toward supplanting traditional physical samplers and sampling methods predicated on the collection and analysis of physical bedload samples. Four workshops held from 2002 to 2007 directly or peripherally addressed bedload-surrogate technologies, and results from these workshops have been compiled to evaluate the state-of-the-art in bedload monitoring. Papers from the 2007 workshop are published for the first time with this report. Selected research and publications since the 2007 workshop also are presented. Traditional samplers used for some or all of the last eight decades include box or basket samplers, pan or tray samplers, pressure-difference samplers, and trough or pit samplers. Although still useful, the future niche of these devices may be as a means for calibrating bedload-surrogate technologies operating with active- and passive-type sensors, in many cases continuously and automatically at a river site. Active sensors include acoustic Doppler current profilers (ADCPs), sonar, radar, and smart sensors. Passive sensors include geophones (pipes or plates) in direct contact with the streambed, hydrophones deployed in the water column, impact columns, and magnetic detection. The ADCP for sand and geophones for gravel are currently the most developed techniques, several of which have been calibrated under both laboratory and field conditions. Although none of the bedload-surrogate technologies described herein are broadly accepted for use in large-scale monitoring programs, several are under evaluation. The benefits of verifying and operationally deploying selected bedload-surrogate monitoring technologies could be considerable, providing for more frequent and consistent, less expensive, and arguably more accurate bedload data obtained with reduced personal risk for use in managing the world's sedimentary resources. 
Twenty-six papers are

  2. Fitting diameter distribution models to data from forest inventories with concentric plot design

    Directory of Open Access Journals (Sweden)

    Nikos Nanos

    2017-10-01

Research highlights: We designed a new method to fit the Weibull distribution to forest inventory data from concentric plots that achieves high accuracy and precision in parameter estimates regardless of the within-plot spatial tree pattern.
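The basic task in this record — fitting a Weibull distribution to tree diameter data — has a standard maximum-likelihood form, sketched below with scipy on simulated diameters. This is only the plain fit; the paper's correction for concentric plot designs is not reproduced, and the shape/scale values are arbitrary assumptions.

```python
# Maximum-likelihood Weibull fit to simulated tree diameters (cm).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
diameters = stats.weibull_min.rvs(c=2.2, scale=25.0, size=500, random_state=rng)

# floc=0 fixes the location parameter at zero, the usual choice for
# diameter-distribution work
shape, loc, scale = stats.weibull_min.fit(diameters, floc=0)
print(round(shape, 1), round(scale, 1))    # near the generating (2.2, 25.0)
```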

  3. Modelling job support, job fit, job role and job satisfaction for school of nursing sessional academic staff.

    Science.gov (United States)

    Cowin, Leanne S; Moroney, Robyn

    2018-01-01

Sessional academic staff are an important part of nursing education. Increases in casualisation of the academic workforce continue, and satisfaction with the job role is an important benchmark for quality curricula delivery and influences recruitment and retention. This study examined relations between four job constructs - organisation fit, organisation support, staff role and job satisfaction - for Sessional Academic Staff at a School of Nursing by creating two path analysis models. A cross-sectional correlational survey design was utilised. Participants who were currently working as sessional or casual teaching staff members were invited to complete an online anonymous survey. The data represent a convenience sample of Sessional Academic Staff in 2016 at a large school of Nursing and Midwifery in Australia. After psychometric evaluation of each of the job construct measures in this study we utilised Structural Equation Modelling to better understand the relations of the variables. The measures used in this study were found to be both valid and reliable for this sample. Job support and job fit are positively linked to job satisfaction. Although the hypothesised model did not meet model fit standards, a new 'nested' model made substantive sense. This small study explored a new scale for measuring academic job role, and demonstrated how it promotes the constructs of job fit and job supports. All four job constructs are important in providing job satisfaction - an outcome that in turn supports staffing stability, retention, and motivation.

  4. Uncertainty quantification for accident management using ACE surrogates

    International Nuclear Information System (INIS)

    Varuttamaseni, A.; Lee, J. C.; Youngblood, R. W.

    2012-01-01

    The alternating conditional expectation (ACE) regression method is used to generate RELAP5 surrogates which are then used to determine the distribution of the peak clad temperature (PCT) during the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed (F and B) operation in the Zion-1 nuclear power plant. The construction of the surrogates assumes conditional independence relations among key reactor parameters. The choice of parameters to model is based on the macroscopic balance statements governing the behavior of the reactor. The peak clad temperature is calculated based on the independent variables that are known to be important in determining the success of the F and B operation. The relationship between these independent variables and the plant parameters such as coolant pressure and temperature is represented by surrogates that are constructed based on 45 RELAP5 cases. The time-dependent PCT for different values of F and B parameters is calculated by sampling the independent variables from their probability distributions and propagating the information through two layers of surrogates. The results of our analysis show that the ACE surrogates are able to satisfactorily reproduce the behavior of the plant parameters even though a quasi-static assumption is primarily used in their construction. The PCT is found to be lower in cases where the F and B operation is initiated, compared to the case without F and B, regardless of the F and B parameters used. (authors)
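The final propagation step this record describes — sampling the independent variables from their distributions and pushing them through surrogates to obtain the output distribution — can be shown schematically. The quadratic surrogate below stands in for the ACE-built RELAP5 surrogates purely for illustration; the function form, coefficients, and input distributions are all assumptions, not values from the study.

```python
# Monte Carlo propagation of input uncertainty through a cheap surrogate
# to obtain the distribution of a stand-in for the peak clad temperature.
import numpy as np

rng = np.random.default_rng(4)

def surrogate_pct(pressure_mpa, delay_min):
    # hypothetical fitted surrogate: PCT rises with F&B initiation delay
    return 900.0 + 0.02 * pressure_mpa + 1.5 * delay_min + 0.01 * delay_min ** 2

pressure = rng.normal(15.5, 0.3, 10_000)       # coolant pressure [MPa]
delay = rng.uniform(0.0, 60.0, 10_000)         # F&B initiation delay [min]
pct = surrogate_pct(pressure, delay)           # sampled PCT values [K]

print(round(float(np.percentile(pct, 95.0)), 1))   # 95th-percentile PCT
```

In the study this propagation runs through two layers of ACE surrogates (inputs to plant parameters, plant parameters to PCT) rather than a single closed-form function.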

  5. Surrogate for oropharyngeal cancer HPV status in cancer database studies.

    Science.gov (United States)

    Megwalu, Uchechukwu C; Chen, Michelle M; Ma, Yifei; Divi, Vasu

    2017-12-01

    The utility of cancer databases for oropharyngeal cancer studies is limited by lack of information on human papillomavirus (HPV) status. The purpose of this study was to develop a surrogate that can be used to adjust for the effect of HPV status on survival. The study cohort included 6419 patients diagnosed with oropharyngeal squamous cell carcinoma between 2004 and 2012, identified in the National Cancer Database (NCDB). The HPV surrogate score was developed using a logistic regression model predicting HPV-positive status. The HPV surrogate score was predictive of HPV status (area under the curve [AUC] 0.73; accuracy of 70.4%). Similar to HPV-positive tumors, HPV surrogate positive tumors were associated with improved overall survival (OS; hazard ratio [HR] 0.73; 95% confidence interval [CI] 0.59-0.91; P = .005), after adjusting for important covariates. The HPV surrogate score is useful for adjusting for the effect of HPV status on survival in studies utilizing cancer databases. © 2017 Wiley Periodicals, Inc.
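The construction this record describes — a logistic regression predicting HPV-positive status from registry covariates, evaluated by AUC — can be sketched end to end. The data below are simulated, and the covariates (`age`, `grade`) and coefficients are illustrative assumptions, not NCDB fields or the published model.

```python
# Fit a logistic "surrogate score" by Newton-Raphson and evaluate it with
# the rank-statistic (Mann-Whitney) form of the AUC.
import numpy as np

rng = np.random.default_rng(5)
n = 2000
age = rng.normal(60, 10, n)
grade = rng.integers(1, 4, n).astype(float)
logit = 6.0 - 0.1 * age + 0.4 * grade
hpv = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))   # simulated HPV status

X = np.column_stack([np.ones(n), age, grade])
beta = np.zeros(3)
for _ in range(25):                        # Newton-Raphson for the logistic MLE
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (hpv - p))

score = X @ beta                           # the "HPV surrogate score"

# AUC = P(score of a positive case > score of a negative case)
order = np.argsort(score)
ranks = np.empty(n)
ranks[order] = np.arange(1, n + 1)
n_pos = hpv.sum()
n_neg = n - n_pos
auc = (ranks[hpv].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
print(round(float(auc), 2))
```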

  6. Make the Most of the Data You've Got: Bayesian Models and a Surrogate Species Approach to Assessing Benefits of Upstream Migration Flows for the Endangered Australian Grayling.

    Science.gov (United States)

    Webb, J Angus; Koster, Wayne M; Stuart, Ivor G; Reich, Paul; Stewardson, Michael J

    2018-03-01

Environmental water managers must make best use of allocations, and adaptive management is one means of improving effectiveness of environmental water delivery. Adaptive management relies on generation of new knowledge from monitoring and evaluation, but it is often difficult to make clear inferences from available monitoring data. Alternative approaches to assessment of flow benefits may offer an improved pathway to adaptive management. We developed Bayesian statistical models to inform adaptive management of the threatened Australian grayling (Prototroctes maraena) in the coastal Thomson River, South-East Victoria, Australia. The models assessed the importance of flows in spring and early summer (migration flows) for upstream dispersal and colonization of juveniles of this diadromous species. However, Australian grayling young-of-year were recorded in low numbers, and models provided no indication of the benefit of migration flows. To overcome this limitation, we applied the same models to young-of-year of a surrogate species (tupong, Pseudaphritis urvilli) - a more common diadromous species expected to respond to flow similarly to Australian grayling - and found strong positive responses to migration flows. Our results suggest two complementary approaches to supporting adaptive management of Australian grayling. First, refine monitoring approaches to allow direct measurement of effects of migration flows, a process currently under way. Second, while waiting for improved data, further investigate the use of tupong as a surrogate species. More generally, alternative approaches to assessment can improve knowledge to inform adaptive management, and this can occur while monitoring is being revised to directly target environmental responses of interest.

  7. Are transdiagnostic models of eating disorders fit for purpose? A consideration of the evidence for food addiction.

    Science.gov (United States)

    Treasure, Janet; Leslie, Monica; Chami, Rayane; Fernández-Aranda, Fernando

    2018-03-01

    Explanatory models for eating disorders have changed over time to account for changing clinical presentations. The transdiagnostic model evolved from the maintenance model, which provided the framework for cognitive behavioural therapy for bulimia nervosa. However, for many individuals (especially those at the extreme ends of the weight spectrum), this account does not fully fit. New evidence generated from research framed within the food addiction hypothesis is synthesized here into a model that can explain recurrent binge eating behaviour. New interventions that target core maintenance elements identified within the model may be useful additions to a complex model of treatment for eating disorders. Copyright © 2018 John Wiley & Sons, Ltd and Eating Disorders Association.

  8. Estimation of error components in a multi-error linear regression model, with an application to track fitting

    International Nuclear Information System (INIS)

    Fruehwirth, R.

    1993-01-01

    We present a procedure for estimating the error components in a linear regression model with multiple independent stochastic error contributions. After solving the general problem we apply the results to the estimation of the actual trajectory in track fitting with multiple scattering. (orig.)
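The setting in this record — a linear model whose observations carry a sum of independent error contributions, e.g. measurement noise plus multiple scattering in track fitting — can be illustrated with a toy generalized least squares (GLS) fit. Everything below is a synthetic stand-in under assumed covariances; the paper's actual contribution, estimating the components themselves, is not reproduced.

```python
# GLS straight-line track fit with two summed error components:
# uncorrelated measurement error plus a correlated scattering-like term.
import numpy as np

rng = np.random.default_rng(6)
n = 50
z = np.linspace(0, 1, n)                   # detector planes along the track
X = np.column_stack([np.ones(n), z])       # straight-line track model

V_meas = 0.01 * np.eye(n)                  # uncorrelated measurement error
V_ms = 0.005 * np.minimum.outer(z, z)      # random-walk-like scattering term
V = V_meas + V_ms                          # total covariance (sum of parts)

b_true = np.array([0.1, 1.5])              # intercept and slope
y = rng.multivariate_normal(X @ b_true, V)

# GLS estimate: b = (X' V^-1 X)^-1 X' V^-1 y
Vinv = np.linalg.inv(V)
b_gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
print(np.round(b_gls, 2))                  # near (0.1, 1.5)
```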

  9. Adjusting the Adjusted X²/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    Science.gov (United States)

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X²/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  10. Exploratory Analyses To Improve Model Fit: Errors Due to Misspecification and a Strategy To Reduce Their Occurrence.

    Science.gov (United States)

    Green, Samuel B.; Thompson, Marilyn S.; Poirier, Jennifer

    1999-01-01

    The use of Lagrange multiplier (LM) tests in specification searches, and the errors that involve the addition of extraneous parameters to models, are discussed. Presented are a rationale and strategy for conducting specification searches in two stages: adding parameters flagged by LM tests to maximize fit, and then deleting parameters not needed…

  11. Fit-for-purpose: species distribution model performance depends on evaluation criteria - Dutch Hoverflies as a case study.

    Science.gov (United States)

    Aguirre-Gutiérrez, Jesús; Carvalheiro, Luísa G; Polce, Chiara; van Loon, E Emiel; Raes, Niels; Reemer, Menno; Biesmeijer, Jacobus C

    2013-01-01

    Understanding species distributions and the factors limiting them is an important topic in ecology and conservation, including in nature reserve selection and predicting climate change impacts. While Species Distribution Models (SDM) are the main tool used for these purposes, choosing the best SDM algorithm is not straightforward as these are plentiful and can be applied in many different ways. SDM are used mainly to gain insight in 1) overall species distributions, 2) their past-present-future probability of occurrence and/or 3) to understand their ecological niche limits (also referred to as ecological niche modelling). The fact that these three aims may require different models and outputs is, however, rarely considered and has not been evaluated consistently. Here we use data from a systematically sampled set of species occurrences to specifically test the performance of Species Distribution Models across several commonly used algorithms. Species range in distribution patterns from rare to common and from local to widespread. We compare overall model fit (representing species distribution), the accuracy of the predictions at multiple spatial scales, and the consistency of environmental correlate selection across multiple modelling runs. As expected, the choice of modelling algorithm determines model outcome. However, model quality depends not only on the algorithm, but also on the measure of model fit used and the scale at which it is used. Although model fit was higher for the consensus approach and Maxent, Maxent and GAM models were more consistent in estimating local occurrence, while RF and GBM showed higher consistency in environmental variables selection. Model outcomes diverged more for narrowly distributed species than for widespread species. We suggest that matching study aims with modelling approach is essential in Species Distribution Models, and provide suggestions on how to do this for different modelling aims and species' data

  12. Recent Progress in the Development of Diesel Surrogate Fuels

    Energy Technology Data Exchange (ETDEWEB)

    Pitz, W J; Mueller, C J

    2009-12-09

    There has been much recent progress in the area of surrogate fuels for diesel. In the last few years, experiments and modeling have been performed on higher molecular weight components of relevance to diesel fuel such as n-hexadecane (n-cetane) and 2,2,4,4,6,8,8-heptamethylnonane (iso-cetane). Chemical kinetic models have been developed for all the n-alkanes up to 16 carbon atoms. Also, there has been much experimental and modeling work on lower molecular weight surrogate components such as n-decane and n-dodecane that are most relevant to jet fuel surrogates, but are also relevant to diesel surrogates where simulation of the full boiling point range is desired. For two-ring compounds, experimental work on decalin and tetralin recently has been published. For multi-component surrogate fuel mixtures, recent work on modeling of these mixtures and comparisons to real diesel fuel is reviewed. Detailed chemical kinetic models for surrogate fuels are very large in size. Significant progress also has been made in improving the mechanism reduction tools that are needed to make these large models practicable in multi-dimensional reacting flow simulations of diesel combustion. Nevertheless, major research gaps remain. In the case of iso-alkanes, experimental and modeling work exists for only one compound of relevance to diesel: iso-cetane. Also, the iso-alkanes in diesel are lightly branched, and no detailed chemical kinetic models or experimental investigations are available for such compounds. More components are needed to fill out the iso-alkane boiling point range. For the aromatic class of compounds, there has been no new work for compounds in the boiling point range of diesel. Most of the new work has been on alkyl aromatics in the C7 to C8 range, below the C10 to C20 range that is needed. For the chemical class of cycloalkanes, experiments and modeling on higher molecular weight components are warranted. Finally for multi-component surrogates needed to treat real

  13. Facultative control of matrix production optimizes competitive fitness in Pseudomonas aeruginosa PA14 biofilm models.

    Science.gov (United States)

    Madsen, Jonas S; Lin, Yu-Cheng; Squyres, Georgia R; Price-Whelan, Alexa; de Santiago Torio, Ana; Song, Angela; Cornell, William C; Sørensen, Søren J; Xavier, Joao B; Dietrich, Lars E P

    2015-12-01

    As biofilms grow, resident cells inevitably face the challenge of resource limitation. In the opportunistic pathogen Pseudomonas aeruginosa PA14, electron acceptor availability affects matrix production and, as a result, biofilm morphogenesis. The secreted matrix polysaccharide Pel is required for pellicle formation and for colony wrinkling, two activities that promote access to O2. We examined the exploitability and evolvability of Pel production at the air-liquid interface (during pellicle formation) and on solid surfaces (during colony formation). Although Pel contributes to the developmental response to electron acceptor limitation in both biofilm formation regimes, we found variation in the exploitability of its production and necessity for competitive fitness between the two systems. The wild type showed a competitive advantage against a non-Pel-producing mutant in pellicles but no advantage in colonies. Adaptation to the pellicle environment selected for mutants with a competitive advantage against the wild type in pellicles but also caused a severe disadvantage in colonies, even in wrinkled colony centers. Evolution in the colony center produced divergent phenotypes, while adaptation to the colony edge produced mutants with clear competitive advantages against the wild type in this O2-replete niche. In general, the structurally heterogeneous colony environment promoted more diversification than the more homogeneous pellicle. These results suggest that the role of Pel in community structure formation in response to electron acceptor limitation is unique to specific biofilm models and that the facultative control of Pel production is required for PA14 to maintain optimum benefit in different types of communities. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  14. Landscape and flow metrics affecting the distribution of a federally-threatened fish: Improving management, model fit, and model transferability

    Science.gov (United States)

    Worthington, Thomas A.; Zhang, T.; Logue, Daniel R.; Mittelstet, Aaron R.; Brewer, Shannon K.

    2016-01-01

    Truncated distributions of pelagophilic fishes have been observed across the Great Plains of North America, with water use and landscape fragmentation implicated as contributing factors. Developing conservation strategies for these species is hindered by the existence of multiple competing flow regime hypotheses related to species persistence. Our primary study objective was to compare the predicted distributions of one pelagophil, the Arkansas River Shiner Notropis girardi, constructed using different flow regime metrics. Further, we investigated different approaches for improving temporal transferability of the species distribution model (SDM). We compared four hypotheses: mean annual flow (a baseline), the 75th percentile of daily flow, the number of zero-flow days, and the number of days above 55th percentile flows, to examine the relative importance of flows during the spawning period. Building on an earlier SDM, we added covariates that quantified wells in each catchment, point source discharges, and non-native species presence to a structured variable framework. We assessed the effects on model transferability and fit by reducing multicollinearity using Spearman’s rank correlations, variance inflation factors, and principal component analysis, as well as altering the regularization coefficient (β) within MaxEnt. The 75th percentile of daily flow was the most important flow metric related to structuring the species distribution. The number of wells and point source discharges were also highly ranked. At the default level of β, model transferability was improved using all methods to reduce collinearity; however, at higher levels of β, the correlation method performed best. Using β = 5 provided the best model transferability, while retaining the majority of variables that contributed 95% to the model. This study provides a workflow for improving model transferability and also presents water-management options that may be considered to improve the
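    The collinearity-screening step described above can be sketched as follows. This is a minimal illustration with invented covariates (`mean_flow`, `flow_q75`, and `wells` are hypothetical names), not the study's data or its MaxEnt workflow; it only shows how a Spearman rank-correlation threshold can prune redundant predictors before model fitting.

```python
import numpy as np
from scipy.stats import spearmanr

def drop_collinear(X, names, threshold=0.7):
    """Greedily drop covariates whose absolute Spearman rank correlation
    with an already-kept covariate exceeds the threshold."""
    rho, _ = spearmanr(X)          # pairwise rank-correlation matrix
    keep = []
    for j in range(X.shape[1]):
        if all(abs(rho[j, k]) < threshold for k in keep):
            keep.append(j)
    return [names[j] for j in keep], X[:, keep]

# Toy example: 'flow_q75' is strongly rank-correlated with 'mean_flow',
# so one of the pair is dropped before fitting the SDM.
rng = np.random.default_rng(0)
mean_flow = rng.gamma(2.0, 10.0, 200)
X = np.column_stack([
    mean_flow,
    mean_flow * 1.4 + rng.normal(0, 1, 200),  # flow_q75 tracks mean_flow
    rng.poisson(3, 200),                      # wells per catchment
])
names = ["mean_flow", "flow_q75", "wells"]
kept, Xk = drop_collinear(X, names)
print(kept)  # ['mean_flow', 'wells']
```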

  15. Mapping surrogate gasoline compositions into RON/MON space

    NARCIS (Netherlands)

    Morgan, N.; Smallbone, A.; Bhave, A.; Kraft, M.; Cracknell, R.; Kalghatgi, G.T.

    2010-01-01

    In this paper, new experimentally determined octane numbers (RON and MON) of blends of a tri-component surrogate consisting of toluene, n-heptane, i-octane (called toluene reference fuel TRF) arranged in an augmented simplex design are used to derive a simple response surface model for the octane

  16. Curve fitting and modeling with splines using statistical variable selection techniques

    Science.gov (United States)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
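    A backward-elimination scheme for knot selection like the one described can be sketched with SciPy's least-squares splines. This is an illustrative reconstruction, not the paper's FORTRAN programs; the toy data, candidate knot set, and F-test threshold are assumptions.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline
from scipy.stats import f as f_dist

def backward_knot_elimination(x, y, knots, k=3, alpha=0.05):
    """Drop interior knots one at a time while an F-test says the
    reduced spline fits no worse than the current one."""
    def rss(t):
        s = LSQUnivariateSpline(x, y, t, k=k)
        return float(np.sum((y - s(x)) ** 2))
    knots = list(knots)
    while len(knots) > 1:
        current = rss(knots)
        # RSS after deleting each knot in turn; try the cheapest deletion
        trials = [rss(knots[:i] + knots[i + 1:]) for i in range(len(knots))]
        i_best = int(np.argmin(trials))
        df2 = len(x) - (len(knots) + k + 1)   # residual df of current model
        F = (trials[i_best] - current) / (current / df2)
        if f_dist.sf(F, 1, df2) < alpha:      # deletion significantly worsens fit
            break
        knots.pop(i_best)
    return knots

# Toy data: one full sine period needs some, but not all, candidate knots
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.05, 200)
kept = backward_knot_elimination(x, y, knots=[0.2, 0.4, 0.5, 0.6, 0.8])
print("retained knots:", kept)
```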

  17. Inactivation kinetics and efficiencies of UV-LEDs against Pseudomonas aeruginosa, Legionella pneumophila, and surrogate microorganisms.

    Science.gov (United States)

    Rattanakul, Surapong; Oguma, Kumiko

    2018-03-01

    To demonstrate the effectiveness of UV light-emitting diodes (UV-LEDs) for water disinfection, UV-LEDs with peak emission wavelengths of 265, 280, and 300 nm were used to inactivate the pathogenic species Pseudomonas aeruginosa and Legionella pneumophila and the surrogate species Escherichia coli, Bacillus subtilis spores, and bacteriophage Qβ in water, in comparison with a conventional low-pressure UV lamp emitting at 254 nm. The inactivation profiles of each species showed either a linear or a sigmoidal survival curve, both of which fit well with Geeraerd's model. Based on the inactivation rate constants, the 265-nm UV-LED was the most effective, except against E. coli, which showed similar inactivation rates at 265 and 254 nm. The electrical energy consumption required for 3-log10 inactivation (E_E,3) was lowest for the 280-nm UV-LED for all microbial species tested. Taken together, the findings of this study determined the inactivation profiles and kinetics of both pathogenic bacteria and surrogate species under UV-LED exposure at different wavelengths. We also demonstrated that not only inactivation rate constants but also energy efficiency should be considered when selecting an emission wavelength for UV-LEDs. Copyright © 2017 Elsevier Ltd. All rights reserved.
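    Geeraerd's model combines a log-linear inactivation phase with optional shoulder and tail terms. A minimal fit of its log10 survival form to hypothetical fluence-response data might look like the sketch below; the numbers are invented for illustration and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def geeraerd_log10(F, log_kmax, Sl, log10_Nres):
    """Geeraerd shoulder/tail model as log10 survival ratio versus
    UV fluence F (mJ/cm^2), with N0 fixed at 1."""
    kmax = np.exp(log_kmax)         # first-order inactivation rate
    Nres = 10.0 ** log10_Nres       # resistant (tail) subpopulation
    shoulder = np.exp(kmax * Sl) / (1 + (np.exp(kmax * Sl) - 1) * np.exp(-kmax * F))
    N = (1 - Nres) * np.exp(-kmax * F) * shoulder + Nres
    return np.log10(N)

# Hypothetical fluence-response data (log10 N/N0), illustrative only
fluence = np.array([0, 2, 4, 6, 8, 10, 14, 18.0])
logS    = np.array([0, -0.1, -0.6, -1.5, -2.4, -3.3, -4.6, -4.9])

popt, _ = curve_fit(geeraerd_log10, fluence, logS,
                    p0=[np.log(0.5), 2.0, -5.0], maxfev=10000)
kmax = np.exp(popt[0])
print(f"kmax = {kmax:.2f} per mJ/cm^2, shoulder = {popt[1]:.1f} mJ/cm^2")
```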

  18. Predicting the Best Fit: A Comparison of Response Surface Models for Midazolam and Alfentanil Sedation in Procedures With Varying Stimulation.

    Science.gov (United States)

    Liou, Jing-Yang; Ting, Chien-Kun; Mandell, M Susan; Chang, Kuang-Yi; Teng, Wei-Nung; Huang, Yu-Yin; Tsou, Mei-Yung

    2016-08-01

    Selecting an effective dose of sedative drugs in combined upper and lower gastrointestinal endoscopy is complicated by varying degrees of pain stimulation. We tested the ability of 5 response surface models to predict depth of sedation after administration of midazolam and alfentanil in this complex model. The procedure was divided into 3 phases: esophagogastroduodenoscopy (EGD), colonoscopy, and the time interval between the 2 (intersession). The depth of sedation in 33 adult patients was monitored by Observer Assessment of Alertness/Sedation scores. A total of 218 combinations of midazolam and alfentanil effect-site concentrations derived from pharmacokinetic models were used to test 5 response surface models in each of the 3 phases of endoscopy. Model fit was evaluated with the objective function value, the corrected Akaike Information Criterion (AICc), and Spearman rank correlation. A model was arbitrarily defined as accurate if the predicted probability fell within a preset margin of the observed response. Effect-site concentrations tested ranged from 1 to 76 ng/mL for midazolam and from 5 to 80 ng/mL for alfentanil. Midazolam and alfentanil had synergistic effects in colonoscopy and EGD, but additivity was observed in the intersession group. Adequate prediction rates were 84% to 85% in the intersession group, 84% to 88% during colonoscopy, and 82% to 87% during EGD. The reduced Greco model and the Hierarchy model with a fixed alfentanil C50 (the alfentanil concentration required for 50% of patients to achieve the targeted response) performed better, with comparable predictive strength. The reduced Greco model had the lowest AICc, with strong correlation in all 3 phases of endoscopy. Dynamic, rather than fixed, γ and γ_alf in the Hierarchy model improved model fit. The reduced Greco model had the lowest objective function value and AICc and thus the best fit. This model was reliable, with acceptable predictive ability based on adequate clinical correlation. We suggest that this model has practical clinical value for patients undergoing procedures
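    A Greco-type response surface expresses the probability of an adequate response through a normalized drug "drive" with a multiplicative interaction term. The sketch below uses the full Greco form; the C50, alpha, and gamma values are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

def greco_prob(c_mid, c_alf, c50_mid, c50_alf, alpha, gamma):
    """Greco interaction surface: probability of adequate sedation at
    the given effect-site concentrations (ng/mL)."""
    u = c_mid / c50_mid
    v = c_alf / c50_alf
    drive = u + v + alpha * u * v       # alpha > 0 implies synergy
    return drive**gamma / (1 + drive**gamma)

# Synergy raises the predicted probability at fixed concentrations
p_additive = greco_prob(40, 30, 60.0, 90.0, alpha=0.0, gamma=3.0)
p_synergy  = greco_prob(40, 30, 60.0, 90.0, alpha=3.0, gamma=3.0)
print(round(p_additive, 2), round(p_synergy, 2))  # 0.5 0.82
```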

  19. Invited commentary: Lost in estimation--searching for alternatives to Markov chains to fit complex Bayesian models.

    Science.gov (United States)

    Molitor, John

    2012-03-01

    Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.
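    One simple Markov-chain-free approach of the kind discussed here is direct grid approximation of the posterior, which is exact up to discretization for low-dimensional models. A sketch for a binomial proportion with a flat Beta(1, 1) prior follows; the data are hypothetical.

```python
import numpy as np

# Grid approximation: evaluate the posterior on a dense grid instead of
# drawing correlated MCMC samples. Posterior here is Beta(k+1, n-k+1).
theta = np.linspace(1e-6, 1 - 1e-6, 10_001)
k, n = 27, 100                       # hypothetical data: 27 events in 100 trials

log_lik = k * np.log(theta) + (n - k) * np.log(1 - theta)  # flat prior omitted
post = np.exp(log_lik - log_lik.max())
post /= post.sum()                   # normalize grid weights

mean = (theta * post).sum()
print(round(mean, 3))                # 0.275 (analytic Beta(28, 74) mean = 28/102)
```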

  20. Ethical Problems Related to Surrogate Motherhood

    OpenAIRE

    Erdem Aydin

    2006-01-01

    Being unable to have children is an important problem for married couples. At present, new reproduction techniques help these couples while those who can not find any solution try new approaches. One of these is the phenomenon of surrogate motherhood, which is based upon an agreement between the infertile couple and surrogate mother. Surrogate mother may conceive with the sperm of the male of the involved couple as well as by the transfer of the embryo formed by invitro fertilization. Couples...

  1. Interpretations, perspectives and intentions in surrogate motherhood

    OpenAIRE

    van Zyl, L.; van Niekerk, A.

    2000-01-01

    In this paper we examine the questions "What does it mean to be a surrogate mother?" and "What would be an appropriate perspective for a surrogate mother to have on her pregnancy?" In response to the objection that such contracts are alienating or dehumanising since they require women to suppress their evolving perspective on their pregnancies, liberal supporters of surrogate motherhood argue that the freedom to contract includes the freedom to enter a contract to bear a child for an infertil...

  2. An empirical assessment and comparison of species-based and habitat-based surrogates: a case study of forest vertebrates and large old trees.

    Science.gov (United States)

    Lindenmayer, David B; Barton, Philip S; Lane, Peter W; Westgate, Martin J; McBurney, Lachlan; Blair, David; Gibbons, Philip; Likens, Gene E

    2014-01-01

    A holy grail of conservation is to find simple but reliable measures of environmental change to guide management. For example, particular species or particular habitat attributes are often used as proxies for the abundance or diversity of a subset of other taxa. However, the efficacy of such kinds of species-based surrogates and habitat-based surrogates is rarely assessed, nor are different kinds of surrogates compared in terms of their relative effectiveness. We use 30-year datasets on arboreal marsupials and vegetation structure to quantify the effectiveness of: (1) the abundance of a particular species of arboreal marsupial as a species-based surrogate for other arboreal marsupial taxa, (2) hollow-bearing tree abundance as a habitat-based surrogate for arboreal marsupial abundance, and (3) a combination of species- and habitat-based surrogates. We also quantify the robustness of species-based and habitat-based surrogates over time. We then use the same approach to model overall species richness of arboreal marsupials. We show that a species-based surrogate can appear to be a valid surrogate until a habitat-based surrogate is co-examined, after which the effectiveness of the former is lost. The addition of a species-based surrogate to a habitat-based surrogate made little difference in explaining arboreal marsupial abundance, but altered the co-occurrence relationship between species. Hence, there was limited value in simultaneously using a combination of kinds of surrogates. The habitat-based surrogate also generally performed significantly better and was easier and less costly to gather than the species-based surrogate. We found that over 30 years of study, the relationships which underpinned the habitat-based surrogate generally remained positive but variable over time. Our work highlights why it is important to compare the effectiveness of different broad classes of surrogates and identify situations when either species- or habitat-based surrogates are likely

  3. Surrogate Motherhood and Abortion for Fetal Abnormality.

    Science.gov (United States)

    Walker, Ruth; van Zyl, Liezl

    2015-10-01

    A diagnosis of fetal abnormality presents parents with a difficult - even tragic - moral dilemma. Where this diagnosis is made in the context of surrogate motherhood there is an added difficulty, namely that it is not obvious who should be involved in making decisions about abortion, for the person who would normally have the right to decide - the pregnant woman - does not intend to raise the child. This raises the question: To what extent, if at all, should the intended parents be involved in decision-making? In commercial surrogacy it is thought that as part of the contractual agreement the intended parents acquire the right to make this decision. By contrast, in altruistic surrogacy the pregnant woman retains the right to make these decisions, but the intended parents are free to decide not to adopt the child. We argue that both these strategies are morally unsound, and that the problems encountered serve to highlight more fundamental defects within the commercial and altruistic models, as well as in the legal and institutional frameworks that support them. We argue in favour of the professional model, which acknowledges the rights and responsibilities of both parties and provides a legal and institutional framework that supports good decision-making. In particular, the professional model acknowledges the surrogate's right to decide whether to undergo an abortion, and the intended parents' obligation to accept legal custody of the child. While not solving all the problems that arise in surrogacy, the model provides a framework that supports good decision-making. © 2015 John Wiley & Sons Ltd.

  4. Rheological properties of emulsions stabilized by green banana (Musa cavendishii) pulp fitted by power law model

    Directory of Open Access Journals (Sweden)

    Dayane Rosalyn Izidoro

    2009-12-01

    In this work, the rheological behaviour of emulsions (mayonnaises) stabilized by green banana pulp was studied using response surface methodology, and the stability of the emulsions was investigated. Five formulations were developed according to a design for constrained surfaces and mixtures, with water/soy oil/green banana pulp proportions of F1 (0.10/0.20/0.70), F2 (0.20/0.20/0.60), F3 (0.10/0.25/0.65), F4 (0.20/0.25/0.55), and F5 (0.15/0.225/0.625). Rheological measurements were performed with a rotational Haake Rheostress 600 rheometer and a cone-and-plate sensor (60 mm diameter, 2° cone angle), using a gap distance of 1 mm. The emulsions showed pseudoplastic behaviour and were adequately described by the power law model. The rheological responses were influenced by the different green banana pulp proportions and by temperature (10 and 25 °C). The formulations with high pulp content (F1 and F3) presented higher shear stress and apparent viscosity. Response surface methodology, described by a quadratic model, showed that the consistency coefficient (K) increased with the interaction between green banana pulp and soy oil concentration, while the water fraction contributed to an increase in the flow behaviour index for all emulsion samples. Analysis of variance showed that the second-order model had no significant lack of fit and a significant F-value, indicating that the quadratic model fitted the experimental data well. The most stable emulsions were formulations F4 (0.20/0.25/0.55) and F5 (0.15/0.225/0.625).
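    The power law (Ostwald-de Waele) model, τ = K·γ̇ⁿ, can be fitted by linear regression in log-log space, since log τ = log K + n·log γ̇. A sketch with noise-free, invented shear data (the K and n values are arbitrary, not the paper's results):

```python
import numpy as np

# Hypothetical flow-curve data: shear rate (1/s) and shear stress (Pa)
gamma_dot = np.array([1, 2, 5, 10, 20, 50, 100.0])
tau = 25.0 * gamma_dot**0.45          # generated from K = 25, n = 0.45

# Straight-line fit in log-log space recovers n (slope) and K (intercept)
n, logK = np.polyfit(np.log(gamma_dot), np.log(tau), 1)
K = np.exp(logK)
print(f"K = {K:.1f} Pa.s^n, n = {n:.2f}")  # n < 1: shear-thinning (pseudoplastic)
```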

  5. ETHICAL ISSUES IN THE SURROGATE MATERNITY PRACTICE

    OpenAIRE

    TÜRK, Rukiye; TERZİOĞLU, Fusun

    2014-01-01

    Assisted reproductive technology was initially considered a treatment tool for infertile couples. However, as the uteri of other women came in time to be used to carry the embryos of others, the concept of surrogate maternity appeared. Surrogate maternity is practiced in three types. In the first type, the sperm of the spouse of the prospective mother is inseminated with the ovum of the surrogate mother. The second method is the in-vitro inseminati...

  6. Fitness club

    CERN Multimedia

    Fitness club

    2011-01-01

    General fitness Classes Enrolments are open for general fitness classes at CERN taking place on Monday, Wednesday, and Friday lunchtimes in the Pump Hall (building 216). There are shower facilities for both men and women. It is possible to pay for 1, 2 or 3 classes per week for a minimum of 1 month and up to 6 months. Check out our rates and enrol at: http://cern.ch/club-fitness Hope to see you among us! CERN Fitness Club fitness.club@cern.ch  

  7. Black Versus Gray T-Shirts: Comparison of Spectrophotometric and Other Biophysical Properties of Physical Fitness Uniforms and Modeled Heat Strain and Thermal Comfort

    Science.gov (United States)

    2016-09-01

    The report compares the spectrophotometric and other biophysical properties of black and gray physical fitness uniform T-shirts and models the impact of the environment on the wearer in terms of human thermal sensation (e.g., thermal comfort) and thermoregulatory (heat strain) responses.

  8. Using Fit Indexes to Select a Covariance Model for Longitudinal Data

    Science.gov (United States)

    Liu, Siwei; Rovine, Michael J.; Molenaar, Peter C. M.

    2012-01-01

    This study investigated the performance of fit indexes in selecting a covariance structure for longitudinal data. Data were simulated to follow a compound symmetry, first-order autoregressive, first-order moving average, or random-coefficients covariance structure. We examined the ability of the likelihood ratio test (LRT), root mean square error…
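    Covariance-structure selection of this kind can be sketched by fitting candidate structures by maximum likelihood and comparing information criteria. The example below simulates AR(1) longitudinal data and compares compound symmetry against AR(1) via AIC; it is a simplification of the study's full design, which also examined the LRT and further indexes and structures.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
T, N = 5, 300
lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
Y = rng.multivariate_normal(np.zeros(T), 0.6 ** lags, size=N)  # AR(1), rho=0.6

def neg_loglik(params, structure):
    s2 = np.exp(params[0])                   # variance > 0
    rho = 1.0 / (1.0 + np.exp(-params[1]))   # correlation kept in (0, 1)
    if structure == "cs":                    # compound symmetry
        C = s2 * ((1 - rho) * np.eye(T) + rho)
    else:                                    # first-order autoregressive
        C = s2 * rho ** lags
    return -multivariate_normal(np.zeros(T), C).logpdf(Y).sum()

aic = {}
for structure in ("cs", "ar1"):
    fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(structure,))
    aic[structure] = 2 * fit.fun + 2 * 2     # two parameters per model
print(aic)  # AR(1) should score lower, matching the generating process
```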

  9. An Item Fit Statistic Based on Pseudocounts from the Generalized Graded Unfolding Model: A Preliminary Report.

    Science.gov (United States)

    Roberts, James S.

    Stone and colleagues (C. Stone, R. Ankenman, S. Lane, and M. Liu, 1993; C. Stone, R. Mislevy and J. Mazzeo, 1994; C. Stone, 2000) have proposed a fit index that explicitly accounts for the measurement error inherent in an estimated theta value, here denoted χ²_i*. The elements of this statistic are natural…

  10. Implementation of a Personal Fitness Unit Using the Personalized System of Instruction Model

    Science.gov (United States)

    Prewitt, Steven; Hannon, James C.; Colquitt, Gavin; Brusseau, Timothy A.; Newton, Maria; Shaw, Janet

    2015-01-01

    Levels of physical activity and health-related fitness (HRF) are decreasing among adolescents in the United States. Several interventions have been implemented to reverse this downtrend. Traditionally, physical educators incorporate a direct instruction (DI) strategy, with teaching potentially leading students to disengage during class. An…

  11. Modeling relationships between physical fitness, executive functioning, and academic achievement in primary school children

    NARCIS (Netherlands)

    van der Niet, Anneke G.; Hartman, Esther; Smith, Joanne; Visscher, Chris

    Objectives: The relationship between physical fitness and academic achievement in children has received much attention, however, whether executive functioning plays a mediating role in this relationship is unclear. The aim of this study therefore was to investigate the relationships between physical

  12. Group Targets Tracking Using Multiple Models GGIW-CPHD Based on Best-Fitting Gaussian Approximation and Strong Tracking Filter

    Directory of Open Access Journals (Sweden)

    Yun Wang

    2016-01-01

    The gamma Gaussian inverse Wishart cardinalized probability hypothesis density (GGIW-CPHD) algorithm is widely used to track group targets in the presence of cluttered measurements and missed detections. A multiple-models GGIW-CPHD algorithm based on a best-fitting Gaussian approximation (BFG) and a strong tracking filter (STF) is proposed to address the increased tracking error of the GGIW-CPHD algorithm when group targets maneuver. The best-fitting Gaussian approximation is used to fuse the multiple models, with the strong tracking filter correcting the predicted covariance matrix of the GGIW component. The corresponding likelihood functions are derived to update the probabilities of the multiple tracking models. Simulation results show that the proposed MM-GGIW-CPHD algorithm can effectively deal with the combination and spawning of groups, and that the tracking error for group targets in the maneuvering stage is decreased.
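    A best-fitting Gaussian approximation of this kind is, in essence, moment matching: a mixture of per-model Gaussians is replaced by the single Gaussian with the same overall mean and covariance. A sketch follows; the weights and moments are invented for illustration and are not tied to the paper's filter.

```python
import numpy as np

def best_fitting_gaussian(weights, means, covs):
    """Moment-matched single Gaussian for a Gaussian mixture: preserves
    the mixture's overall mean and covariance (including the
    spread-of-means term), as used when fusing multiple motion models."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    means = np.asarray(means, float)
    mu = w @ means                                    # mixture mean
    dim = means.shape[1]
    cov = np.zeros((dim, dim))
    for wi, mi, Ci in zip(w, means, covs):
        d = (mi - mu)[:, None]
        cov += wi * (np.asarray(Ci, float) + d @ d.T)
    return mu, cov

# Two hypothetical motion-model hypotheses for a group centroid
mu, cov = best_fitting_gaussian(
    weights=[0.7, 0.3],
    means=[[0.0, 0.0], [4.0, 0.0]],
    covs=[np.eye(2), 2 * np.eye(2)],
)
print(mu)                 # [1.2 0. ]
print(np.round(cov, 2))   # [[4.66 0.  ] [0.   1.3 ]]
```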

  13. Comparing and combining biomarkers as principal surrogates for time-to-event clinical endpoints.

    Science.gov (United States)

    Gabriel, Erin E; Sachs, Michael C; Gilbert, Peter B

    2015-02-10

    Principal surrogate endpoints are useful as targets for phase I and II trials. In many recent trials, multiple post-randomization biomarkers are measured. However, few statistical methods exist for comparison of or combination of biomarkers as principal surrogates, and none of these methods to our knowledge utilize time-to-event clinical endpoint information. We propose a Weibull model extension of the semi-parametric estimated maximum likelihood method that allows for the inclusion of multiple biomarkers in the same risk model as multivariate candidate principal surrogates. We propose several methods for comparing candidate principal surrogates and evaluating multivariate principal surrogates. These include the time-dependent and surrogate-dependent true and false positive fraction, the time-dependent and the integrated standardized total gain, and the cumulative distribution function of the risk difference. We illustrate the operating characteristics of our proposed methods in simulations and outline how these statistics can be used to evaluate and compare candidate principal surrogates. We use these methods to investigate candidate surrogates in the Diabetes Control and Complications Trial. Copyright © 2014 John Wiley & Sons, Ltd.
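    A Weibull time-to-event model of the general kind mentioned above can link a candidate surrogate biomarker to event risk through a log-linear scale. The minimal maximum-likelihood sketch below, with right censoring and simulated data, is an illustrative assumption and not the paper's estimated maximum likelihood method.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data: a biomarker s shifts the Weibull scale log-linearly
rng = np.random.default_rng(3)
n = 500
s = rng.normal(size=n)                    # candidate surrogate biomarker
scale = np.exp(1.0 - 0.8 * s)             # higher biomarker => shorter times
t = scale * rng.weibull(1.5, size=n)      # event times, true shape k = 1.5
c = np.minimum(t, 4.0)                    # administrative censoring at t = 4
event = (t <= 4.0).astype(float)

def neg_loglik(par):
    logk, b0, b1 = par
    k = np.exp(logk)
    lam = np.exp(b0 + b1 * s)             # log-linear scale model
    z = c / lam
    # events contribute log f(t); censored observations contribute log S(t)
    return -np.sum(event * (np.log(k) - np.log(lam) + (k - 1) * np.log(z))
                   - z**k)

fit = minimize(neg_loglik, x0=[0.0, 0.0, 0.0])
print(np.round(fit.x, 2))   # [log shape, intercept, biomarker effect]
```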

  14. Fitting model-based psychometric functions to simultaneity and temporal-order judgment data: MATLAB and R routines.

    Science.gov (United States)

    Alcalá-Quintana, Rocío; García-Pérez, Miguel A

    2013-12-01

    Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.
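    Under an independent-channels model with exponential arrival latencies, the latency difference follows an asymmetric Laplace distribution, so the ternary SJ3 response probabilities have closed forms. The sketch below illustrates that computation only; the parameter values are invented, and the published routines additionally cover SJ2 and TOJ tasks and parameter estimation.

```python
import numpy as np

def cdf_diff_exp(d, l1, l2):
    """CDF of X1 - X2 with independent X1 ~ Exp(l1), X2 ~ Exp(l2)."""
    d = np.asarray(d, float)
    pos = 1 - (l2 / (l1 + l2)) * np.exp(-l1 * np.clip(d, 0, None))
    neg = (l1 / (l1 + l2)) * np.exp(l2 * np.clip(d, None, 0))
    return np.where(d >= 0, pos, neg)

def sj3_probs(soa, l1=1 / 40, l2=1 / 40, tau=10.0, delta=50.0):
    """SJ3 probabilities ('1 first', 'simultaneous', '2 first').
    soa > 0 means stimulus 2 is presented after stimulus 1; tau is a
    processing-delay difference and delta a resolution limit (ms)."""
    shift = tau - soa                          # net shift of the difference D
    p_first1 = cdf_diff_exp(-delta - shift, l1, l2)           # D < -delta
    p_simult = cdf_diff_exp(delta - shift, l1, l2) - p_first1  # |D| <= delta
    return p_first1, p_simult, 1 - p_first1 - p_simult

soa = np.array([-150, -50, 0, 50, 150.0])
p1, ps, p2 = sj3_probs(soa)
print(np.round(ps, 2))   # 'simultaneous' judgments peak near SOA ~ tau
```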

  15. Global Optimization Employing Gaussian Process-Based Bayesian Surrogates

    Directory of Open Access Journals (Sweden)

    Roland Preuss

    2018-03-01

    The simulation of complex physics models may lead to enormous computer running times. Since the simulations are expensive, it is necessary to exploit the computational budget in the best possible manner. If an output data set has been acquired for a few input parameter settings, one may wish to take these data as a basis for finding an extremum, and possibly an input parameter set for further computer simulations to determine it, a task which belongs to the realm of global optimization. Within the Bayesian framework we utilize Gaussian processes for the creation of a surrogate model function, adjusted self-consistently via hyperparameters to represent the data. Although the probability distribution of the hyperparameters may be widely spread over phase space, we make the assumption that using only their expectation values is sufficient. While this shortcut facilitates a quickly accessible surrogate, it is justified by the fact that we are not interested in a full representation of the model by the surrogate, but only in revealing its maximum. To accomplish this, the surrogate is fed to a utility function whose extremum determines the new parameter set for the next data point to obtain. Moreover, we propose to alternate between two utility functions, expected improvement and maximum variance, in order to avoid the drawbacks of each. Subsequent data points are drawn from the model function until the procedure either remains at the points found or the surrogate model does not change with further iterations. The procedure is applied to mock data in one and two dimensions in order to demonstrate proof of principle of the proposed approach.
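    The surrogate-plus-utility loop described here can be sketched with a small numpy Gaussian process and the expected-improvement utility (one of the two utilities alternated in the text). As a simplification, the RBF kernel hyperparameters are fixed rather than adjusted self-consistently, and the objective is a toy function.

```python
import numpy as np
from scipy.stats import norm

def rbf(a, b, ell=0.3, s2=1.0):
    """Squared-exponential kernel on 1-D inputs."""
    return s2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def gp_posterior(xs, ys, xq, noise=1e-6):
    """GP posterior mean and standard deviation at query points xq."""
    K = rbf(xs, xs) + noise * np.eye(len(xs))
    Ks = rbf(xs, xq)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ys))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(rbf(xq, xq)) - np.sum(v**2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sd, best):
    """EI acquisition for maximization."""
    z = (mu - best) / sd
    return (mu - best) * norm.cdf(z) + sd * norm.pdf(z)

f = lambda x: np.sin(3 * x) * (1 - x)     # toy objective to maximize
xs = np.array([0.1, 0.5, 0.9])            # already-simulated parameter settings
ys = f(xs)
xq = np.linspace(0, 1, 201)
mu, sd = gp_posterior(xs, ys, xq)
x_next = xq[np.argmax(expected_improvement(mu, sd, ys.max()))]
print(round(float(x_next), 3))            # next parameter set to simulate
```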

  16. Chempy: A flexible chemical evolution model for abundance fitting. Do the Sun's abundances alone constrain chemical evolution models?

    Science.gov (United States)

    Rybizki, Jan; Just, Andreas; Rix, Hans-Walter

    2017-09-01

    Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of supernovae of type Ia (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6 (+2.1/−1.6)% of the IMF explodes as core-collapse supernovae (CC-SN), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10³ M⊙ to 0.5-1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels. Chempy could be a powerful tool to confront predictions from stellar