WorldWideScience

Sample records for surface fitting model

  1. Analytical fitting model for rough-surface BRDF.

    Science.gov (United States)

    Renhorn, Ingmar G E; Boreman, Glenn D

    2008-08-18

    A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.
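
    The record fits its closed-form BRDF to measurements with Mathematica's FindFit; the sketch below shows the same workflow in Python with scipy.optimize.curve_fit, using a deliberately simplified two-lobe (diffuse plus specular) form and synthetic data rather than the authors' fourteen-parameter model.

```python
# Minimal sketch of the FindFit-style workflow in Python: fit a parametric BRDF
# to measured (here synthetic) data with scipy.optimize.curve_fit. The two-lobe
# form and parameter names are illustrative, not the authors' 14-parameter model.
import numpy as np
from scipy.optimize import curve_fit

def brdf_model(theta, rho_d, rho_s, sigma):
    """Diffuse lobe plus a Gaussian specular lobe; theta is the angle (rad)
    measured from the specular direction."""
    return rho_d / np.pi + rho_s * np.exp(-(theta / sigma) ** 2)

theta = np.radians(np.linspace(-60, 60, 25))                  # scattering angles
rng = np.random.default_rng(0)
measured = brdf_model(theta, 0.3, 5.0, 0.1) * rng.lognormal(0.0, 0.05, theta.size)

popt, pcov = curve_fit(brdf_model, theta, measured, p0=[0.1, 1.0, 0.2])
print("fitted [rho_d, rho_s, sigma]:", np.round(popt, 3))
```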

  2. Parametric fitting of corneal height data to a biconic surface.

    Science.gov (United States)

    Janunts, Edgar; Kannengießer, Marc; Langenbucher, Achim

    2015-03-01

    As the average corneal shape can effectively be approximated by a conic section, a determination of the corneal shape by biconic parameters is desired. The purpose of this paper is to introduce a straightforward mathematical approach for extracting clinically relevant parameters of the corneal surface, such as the radii of curvature and conic constants for the principal meridians and the astigmatism. A general description for modeling the ocular surfaces in biconic form is given, based on which an implicit parametric surface fitting algorithm is introduced. The solution of the biconic fitting is obtained by a two-stage sequential least squares optimization approach with constraints. The input can be raw data from any corneal topographer and need not be uniformly distributed. Various simulated and clinical data are studied, including surfaces with rotationally symmetric and non-symmetric geometries. The clinical data were obtained from the Pentacam (Oculus) for a patient who had undergone refractive surgery. Sub-micrometer fitting accuracy was obtained for all simulated surfaces: at most 0.08 μm RMS fitting error for rotationally symmetric and 0.125 μm for non-symmetric surfaces. The astigmatism was recovered with sub-minute resolution. The presented model matches the widely used quadric fitting model on rotationally symmetric surfaces and outperforms it on non-symmetric surfaces. The introduced biconic surface fitting algorithm is able to recover the apical radii of curvature and conic constants in the principal meridians. This methodology could be a platform for advanced IOL calculations and enhanced contact lens fitting. Copyright © 2014. Published by Elsevier GmbH.
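
    As a rough illustration of the parameterization being recovered (apical radii and conic constants of the two principal meridians), the sketch below fits an explicit biconic height function z(x, y) to synthetic corneal-height samples with scipy.optimize.least_squares; the paper's own algorithm is a constrained, two-stage fit to an implicit biconic, which is not reproduced here.

```python
# Sketch: direct least-squares fit of an explicit biconic height function
# z(x, y) to corneal-height samples. The paper's algorithm is a constrained,
# two-stage fit to an implicit biconic; this simpler fit only illustrates the
# parameters being recovered (Rx, Ry, kx, ky). Data below are synthetic.
import numpy as np
from scipy.optimize import least_squares

def biconic_z(params, x, y):
    Rx, Ry, kx, ky = params
    cx, cy = 1.0 / Rx, 1.0 / Ry                     # apical curvatures (1/mm)
    num = cx * x**2 + cy * y**2
    den = 1.0 + np.sqrt(1.0 - (1 + kx) * cx**2 * x**2 - (1 + ky) * cy**2 * y**2)
    return num / den

def residuals(params, x, y, z):
    return biconic_z(params, x, y) - z

rng = np.random.default_rng(1)
x, y = rng.uniform(-3, 3, 500), rng.uniform(-3, 3, 500)         # mm
z = biconic_z([7.8, 7.6, -0.20, -0.10], x, y) + rng.normal(0, 1e-4, 500)

fit = least_squares(residuals, x0=[8.0, 8.0, 0.0, 0.0], args=(x, y, z))
print("Rx, Ry, kx, ky =", np.round(fit.x, 3))
```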

  3. Fractal Image Coding Based on a Fitting Surface

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2014-01-01

    A no-search fractal image coding method based on a fitting surface is proposed. An improved gray-level transform with a fitting surface is introduced. One advantage of this method is that the fitting surface is used for both the range and domain blocks, so one set of parameters can be saved. Another advantage is that the fitting surface can approximate the range and domain blocks better than the fitting planes used previously; this results in smaller block-matching errors and better decoded image quality. Since the no-search and quadtree techniques are adopted, smaller matching errors also mean fewer block matches, which results in a faster encoding process. Moreover, by combining all the fitting surfaces, a fitting surface image (FSI) is also proposed to speed up the fractal decoding. Experiments show that the proposed method yields superior performance over the other three methods. Relative to the range-averaged image, the FSI provides a faster fractal decoding process. Finally, by combining the proposed fractal coding method with JPEG, a hybrid coding method is designed that provides higher PSNR than JPEG while maintaining the same bit rate (bpp).

  4. Pulmonary lobe segmentation based on ridge surface sampling and shape model fitting

    Energy Technology Data Exchange (ETDEWEB)

    Ross, James C., E-mail: jross@bwh.harvard.edu [Channing Laboratory, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Surgical Planning Lab, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Laboratory of Mathematics in Imaging, Brigham and Women's Hospital, Boston, Massachusetts 02126 (United States); Kindlmann, Gordon L. [Computer Science Department and Computation Institute, University of Chicago, Chicago, Illinois 60637 (United States); Okajima, Yuka; Hatabu, Hiroto [Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Díaz, Alejandro A. [Pulmonary and Critical Care Division, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts 02215 and Department of Pulmonary Diseases, Pontificia Universidad Católica de Chile, Santiago (Chile); Silverman, Edwin K. [Channing Laboratory, Brigham and Women's Hospital, Boston, Massachusetts 02215 and Pulmonary and Critical Care Division, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts 02215 (United States); Washko, George R. [Pulmonary and Critical Care Division, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts 02215 (United States); Dy, Jennifer [ECE Department, Northeastern University, Boston, Massachusetts 02115 (United States); Estépar, Raúl San José [Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Surgical Planning Lab, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Laboratory of Mathematics in Imaging, Brigham and Women's Hospital, Boston, Massachusetts 02126 (United States)

    2013-12-15

    Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory computed tomography (CT) datasets. The authors rely on ridge surface image features indicating fissure locations and a novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particle data to discriminate between fissure and nonfissure candidates. The resulting set of particle points is used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low-dose acquisition protocols. Only 29 out of 500 (5.8%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT. Conclusions: The
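
    The last step of the pipeline fits thin plate spline surfaces through the accepted fissure particles; a minimal sketch of that step is given below, with scipy's RBFInterpolator (thin-plate-spline kernel) standing in for the authors' implementation and synthetic particle coordinates in place of real fissure detections.

```python
# Sketch of the final TPS step: interpolate a lobe-boundary surface z(x, y)
# through accepted fissure particle points. scipy's RBFInterpolator with the
# thin-plate-spline kernel stands in for the authors' implementation; the
# particle coordinates below are synthetic, not real fissure detections.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
particles_xy = rng.uniform(0, 100, size=(200, 2))               # particle (x, y) in mm
particles_z = 40 + 0.1 * particles_xy[:, 0] + 2.0 * np.sin(particles_xy[:, 1] / 15.0)

tps = RBFInterpolator(particles_xy, particles_z,
                      kernel="thin_plate_spline", smoothing=1.0)

# evaluate the boundary surface on a regular grid (e.g. for labeling voxels by lobe)
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
boundary_z = tps(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print(boundary_z.shape)
```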

  5. Pulmonary lobe segmentation based on ridge surface sampling and shape model fitting

    International Nuclear Information System (INIS)

    Ross, James C.; Kindlmann, Gordon L.; Okajima, Yuka; Hatabu, Hiroto; Díaz, Alejandro A.; Silverman, Edwin K.; Washko, George R.; Dy, Jennifer; Estépar, Raúl San José

    2013-01-01

    Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory computed tomography (CT) datasets. The authors rely on ridge surface image features indicating fissure locations and a novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particle data to discriminate between fissure and nonfissure candidates. The resulting set of particle points is used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low-dose acquisition protocols. Only 29 out of 500 (5.8%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT. Conclusions: The proposed

  6. Three dimensional fuzzy influence analysis of fitting algorithms on integrated chip topographic modeling

    International Nuclear Information System (INIS)

    Liang, Zhong Wei; Wang, Yi Jun; Ye, Bang Yan; Brauwer, Richard Kars

    2012-01-01

    To inspect the performance of surface precision modeling under different external parameter conditions, integrated chip surfaces should be evaluated and assessed during topographic spatial modeling. The choice of surface fitting algorithm exerts a considerable influence on the topographic mathematical features, and the influence mechanisms of different algorithms on the integrated chip surface enable a quantitative analysis across external parameter conditions. By extracting coordinate information from selected physical control points with a precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used to construct micro-topographic models from the obtained point cloud. After computing the newly proposed mathematical features on the surface models, we construct a fuzzy evaluating data sequence and present a new three-dimensional fuzzy quantitative evaluating method. Through this method, the variation tendencies of the topographic feature values can be clearly quantified. The fuzzy influence relationships among the surface fitting algorithms, the topographic spatial features, and the external parameter conditions can then be analyzed quantitatively and in detail. In addition, the quantitative analysis yields conclusions about the inherent influence mechanism and the mathematical relations linking surface fitting algorithms, topographic spatial features, and parameter conditions in surface micro-modeling. The performance inspection of surface precision modeling is thereby facilitated and optimized, offering a new research approach for micro-surface reconstruction monitored during the modeling process.

  7. Three dimensional fuzzy influence analysis of fitting algorithms on integrated chip topographic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Zhong Wei; Wang, Yi Jun [Guangzhou Univ., Guangzhou (China); Ye, Bang Yan [South China Univ. of Technology, Guangzhou (China); Brauwer, Richard Kars [Indian Institute of Technology, Kanpur (India)

    2012-10-15

    To inspect the performance of surface precision modeling under different external parameter conditions, integrated chip surfaces should be evaluated and assessed during topographic spatial modeling. The choice of surface fitting algorithm exerts a considerable influence on the topographic mathematical features, and the influence mechanisms of different algorithms on the integrated chip surface enable a quantitative analysis across external parameter conditions. By extracting coordinate information from selected physical control points with a precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used to construct micro-topographic models from the obtained point cloud. After computing the newly proposed mathematical features on the surface models, we construct a fuzzy evaluating data sequence and present a new three-dimensional fuzzy quantitative evaluating method. Through this method, the variation tendencies of the topographic feature values can be clearly quantified. The fuzzy influence relationships among the surface fitting algorithms, the topographic spatial features, and the external parameter conditions can then be analyzed quantitatively and in detail. In addition, the quantitative analysis yields conclusions about the inherent influence mechanism and the mathematical relations linking surface fitting algorithms, topographic spatial features, and parameter conditions in surface micro-modeling. The performance inspection of surface precision modeling is thereby facilitated and optimized, offering a new research approach for micro-surface reconstruction monitored during the modeling process.

  8. Fitting PAC spectra with stochastic models: PolyPacFit

    Energy Technology Data Exchange (ETDEWEB)

    Zacate, M. O., E-mail: zacatem1@nku.edu [Northern Kentucky University, Department of Physics and Geology (United States); Evenson, W. E. [Utah Valley University, College of Science and Health (United States); Newhouse, R.; Collins, G. S. [Washington State University, Department of Physics and Astronomy (United States)

    2010-04-15

    PolyPacFit is an advanced fitting program for time-differential perturbed angular correlation (PAC) spectroscopy. It incorporates stochastic models and provides robust options for customization of fits. Notable features of the program include platform independence and support for (1) fits to stochastic models of hyperfine interactions, (2) user-defined constraints among model parameters, (3) fits to multiple spectra simultaneously, and (4) nuclear probes of any spin.

  9. Fitness function and nonunique solutions in x-ray reflectivity curve fitting: crosserror between surface roughness and mass density

    International Nuclear Information System (INIS)

    Tiilikainen, J; Bosund, V; Mattila, M; Hakkarainen, T; Sormunen, J; Lipsanen, H

    2007-01-01

    Nonunique solutions of the x-ray reflectivity (XRR) curve fitting problem were studied by modelling layer structures with neural networks and designing a fitness function to handle the nonidealities of measurements. Modelled atomic-layer-deposited aluminium oxide film structures were used in the simulations to calculate XRR curves based on Parratt's formalism. This approach reduced the dimensionality of the parameter space and allowed the use of fitness landscapes in the study of nonunique solutions. Fitness landscapes, where the height in a map represents the fitness value as a function of the process parameters, revealed tracks along which the local fitness optima lie. The tracks were projected onto the physical parameter space, allowing the construction of the crosserror equation between weakly determined parameters, i.e. between the mass density and the surface roughness of a layer. The equation gives the minimum error for the other parameters, which is a consequence of the nonuniqueness of the solution when noise is present. Furthermore, the existence of a possible unique solution in a certain parameter range was found to depend on the layer thickness and the signal-to-noise ratio.

  10. Variational mesh segmentation via quadric surface fitting

    KAUST Repository

    Yan, Dongming; Wang, Wen Ping; Liu, Yang; Yang, Zhouwang

    2012-01-01

    We present a new variational method for mesh segmentation by fitting quadric surfaces. Each component of the resulting segmentation is represented by a general quadric surface (including the plane as a special case). A novel energy function is defined to evaluate the quality of the segmentation, which combines both L2 and L2,1 metrics from a triangle to a quadric surface. The Lloyd iteration is used to minimize the energy function, which repeatedly interleaves between mesh partition and quadric surface fitting. We also integrate feature-based and simplification-based techniques in the segmentation framework, which greatly improve the performance. The advantages of our algorithm are demonstrated by comparing with the state-of-the-art methods. © 2012 Elsevier Ltd. All rights reserved.
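
    Within each segment the method fits a general quadric; the sketch below shows only the common algebraic least-squares fit of the ten quadric coefficients (smallest right singular vector of the design matrix), which is the per-segment primitive-fitting ingredient, not the paper's L2/L2,1 energy or Lloyd iteration.

```python
# Sketch of the per-segment primitive fit only: algebraic least-squares
# estimation of a general quadric a x^2 + b y^2 + c z^2 + d xy + e yz + f zx +
# g x + h y + i z + j = 0 through a set of points, via the smallest right
# singular vector of the design matrix (unit-norm coefficients). The paper's
# L2/L2,1 energy and Lloyd iteration are not reproduced here.
import numpy as np

def fit_quadric(points):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    M = np.column_stack([x*x, y*y, z*z, x*y, y*z, z*x, x, y, z, np.ones_like(x)])
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    return vt[-1]

rng = np.random.default_rng(3)
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)        # points on the unit sphere
pts += rng.normal(0, 0.01, pts.shape)                    # plus measurement noise

q = fit_quadric(pts)
print(np.round(q / q[0], 2))        # expect ~[1, 1, 1, 0, 0, 0, 0, 0, 0, -1]
```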

  11. Variational mesh segmentation via quadric surface fitting

    KAUST Repository

    Yan, Dongming

    2012-11-01

    We present a new variational method for mesh segmentation by fitting quadric surfaces. Each component of the resulting segmentation is represented by a general quadric surface (including the plane as a special case). A novel energy function is defined to evaluate the quality of the segmentation, which combines both L2 and L2,1 metrics from a triangle to a quadric surface. The Lloyd iteration is used to minimize the energy function, which repeatedly interleaves between mesh partition and quadric surface fitting. We also integrate feature-based and simplification-based techniques in the segmentation framework, which greatly improve the performance. The advantages of our algorithm are demonstrated by comparing with the state-of-the-art methods. © 2012 Elsevier Ltd. All rights reserved.

  12. Predicting the Best Fit: A Comparison of Response Surface Models for Midazolam and Alfentanil Sedation in Procedures With Varying Stimulation.

    Science.gov (United States)

    Liou, Jing-Yang; Ting, Chien-Kun; Mandell, M Susan; Chang, Kuang-Yi; Teng, Wei-Nung; Huang, Yu-Yin; Tsou, Mei-Yung

    2016-08-01

    Selecting an effective dose of sedative drugs for combined upper and lower gastrointestinal endoscopy is complicated by varying degrees of pain stimulation. We tested the ability of 5 response surface models to predict depth of sedation after administration of midazolam and alfentanil in this complex setting. The procedure was divided into 3 phases: esophagogastroduodenoscopy (EGD), colonoscopy, and the time interval between the 2 (intersession). The depth of sedation in 33 adult patients was monitored by the Observer's Assessment of Alertness/Sedation score. A total of 218 combinations of midazolam and alfentanil effect-site concentrations derived from pharmacokinetic models were used to test 5 response surface models in each of the 3 phases of endoscopy. Model fit was evaluated with the objective function value, corrected Akaike Information Criterion (AICc), and Spearman ranked correlation. A model was arbitrarily defined as accurate if the predicted probability [...]; the effect-site concentrations tested ranged from 1 to 76 ng/mL for midazolam and from 5 to 80 ng/mL for alfentanil. Midazolam and alfentanil had synergistic effects in colonoscopy and EGD, but additivity was observed in the intersession group. Adequate prediction rates were 84% to 85% in the intersession group, 84% to 88% during colonoscopy, and 82% to 87% during EGD. The Reduced Greco and Fixed C50 Hierarchy models (where C50 is the alfentanil concentration required for 50% of patients to achieve the targeted response) performed better, with comparable predictive strength. The Reduced Greco model had the lowest AICc, with strong correlation in all 3 phases of endoscopy. Dynamic, rather than fixed, γ and γalf in the Hierarchy model improved model fit. The Reduced Greco model had the lowest objective function value and AICc and thus the best fit. This model was reliable, with acceptable predictive ability based on adequate clinical correlation. We suggest that this model has practical clinical value for patients undergoing procedures

  13. Improved parametric fits for the HeH2 ab initio energy surface

    International Nuclear Information System (INIS)

    Muchnick, P.

    1992-01-01

    A brief history of the development of ab initio calculations for the HeH2 quasi-molecule energy surface, and of the parametric fits to these ab initio calculations, is presented. The concept of 'physical reasonableness' of a parametric fit is discussed. Several new, improved parametric fits for the energy surface that meet these requirements are then proposed. One fit extends the Russek-Garcia parametric fit for the deep repulsion region to include r-dependent parameters, resulting in a more physically reasonable fit with smaller average error. This improved surface fit is applied to quasi-elastic collisions of He on H2 in the impulse approximation. Previous classical calculations of the scaled inelastic vibrorotational excitation energy distributions are improved with this more accurate parametric fit of the energy surface and with the incorporation of quantum effects in vibrational excitation. It is shown that Sigmund's approach to developing his scaling law is incomplete insofar as the contribution of three-body interactions to vibrational excitation of the H2 molecule is concerned. The Sigmund theory is extended to take into account the r-dependence of the three-body interactions. A parametric fit for the entire energy surface, essentially over 0 ≤ R ≤ ∞ and 1.2 ≤ r ≤ 1.6 a.u., where R is the intermolecular spacing and r is the hydrogen bond length, is also presented. This fit is physically reasonable in all asymptotic limits. This first full-surface parametric fit is based primarily on a composite of ab initio studies by Russek and Garcia and by Meyer, Hariharan and Kutzelnigg. Parametric fits for the H2(1sσg)², H2+(1sσg), H2+(2pσu) and (LiH2)+ energy surfaces are also presented. The new parametric fits for H2 and H2+(1sσg) are shown to be improvements over the well-known Morse potentials for these surfaces.
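
    For reference, the Morse baseline against which the new H2 and H2+(1sσg) fits are compared has the standard textbook form (a reminder, not a formula quoted from this report):

$$ V(r) = D_e \left[ 1 - e^{-a\,(r - r_e)} \right]^2 $$

    where D_e is the well depth, r_e the equilibrium separation, and a controls the width of the well.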

  14. Rapid world modeling: Fitting range data to geometric primitives

    International Nuclear Information System (INIS)

    Feddema, J.; Little, C.

    1996-01-01

    For the past seven years, Sandia National Laboratories has been active in the development of robotic systems to help remediate DOE's waste sites and decommissioned facilities. Some of these facilities have high levels of radioactivity which prevent manual clean-up. Tele-operated and autonomous robotic systems have been envisioned as the only suitable means of removing the radioactive elements. World modeling is defined as the process of creating a numerical geometric model of a real world environment or workspace. This model is often used in robotics to plan robot motions which perform a task while avoiding obstacles. In many applications where the world model does not exist ahead of time, structured lighting, laser range finders, and even acoustical sensors have been used to create three-dimensional maps of the environment. These maps consist of thousands of range points which are difficult to handle and interpret. This paper presents a least squares technique for fitting range data to planar and quadric surfaces, including cylinders and ellipsoids. Once fit to these primitive surfaces, the amount of data associated with a surface is greatly reduced, by up to three orders of magnitude, thus allowing for more rapid handling and analysis of world data.
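
    The planar case of this reduction is easy to sketch: the least-squares plane through a patch of range points is the centroid plus the direction of least variance from an SVD, collapsing thousands of points to a point and a normal. The Python sketch below uses synthetic range data; the report's cylinder and ellipsoid fits extend the same least-squares idea.

```python
# Sketch of the planar case: the least-squares plane through a patch of range
# points is its centroid plus the direction of least variance (smallest right
# singular vector of the centered data). Thousands of points reduce to a point
# and a normal. The range patch below is synthetic.
import numpy as np

def fit_plane(points):
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]                       # point on plane, unit normal

rng = np.random.default_rng(6)
patch = np.column_stack([rng.uniform(0, 1, 2000),
                         rng.uniform(0, 1, 2000),
                         np.zeros(2000)]) + rng.normal(0, 0.005, (2000, 3))

c, n = fit_plane(patch)
print("centroid:", np.round(c, 3), " normal:", np.round(n, 3))   # normal ~ [0, 0, ±1]
```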

  15. Surface Fitting for Quasi Scattered Data from Coordinate Measuring Systems.

    Science.gov (United States)

    Mao, Qing; Liu, Shugui; Wang, Sen; Ma, Xinhui

    2018-01-13

    Non-uniform rational B-spline (NURBS) surface fitting from data points is widely used in the fields of computer aided design (CAD), medical imaging, cultural relic representation and object-shape detection. Usually, the measured data acquired from coordinate measuring systems are neither gridded nor completely scattered. The distribution of this kind of data is scattered in physical space, but the data points are stored in a way consistent with the order of measurement, so they are named quasi scattered data in this paper. They can therefore be organized into rows easily, but the number of points in each row is random. In order to overcome the difficulty of surface fitting from this kind of data, a new method based on resampling is proposed. It consists of three major steps: (1) NURBS curve fitting for each row, (2) resampling on the fitted curve and (3) surface fitting from the resampled data. An iterative projection optimization scheme is applied in the first and third steps to yield an advisable parameterization and to reduce the time cost of projection. A resampling approach based on parameters, local peaks and contour curvature is proposed to overcome the problems of node redundancy and high time consumption in the fitting of this kind of scattered data. Numerical experiments are conducted with both simulated and practical data, and the results show that the proposed method is fast, effective and robust. Moreover, by analyzing the fitting results acquired from data with different degrees of scatter, it is demonstrated that the error introduced by resampling is negligible and that the method is therefore feasible.
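
    A sketch of steps (1) and (2) for a single measurement row is given below, with scipy's parametric spline routines (splprep/splev) standing in for the paper's NURBS fitting and projection-based parameterization; the row data are synthetic.

```python
# Sketch of steps (1)-(2) for one measurement row: fit a parametric smoothing
# spline to the row's points, then resample it at evenly spaced parameter
# values. scipy's splprep/splev stand in for the paper's NURBS fitting and
# projection-based parameterization; the row below is synthetic.
import numpy as np
from scipy.interpolate import splprep, splev

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, np.pi, 37))            # irregular spacing along the row
xr, yr, zr = np.cos(t), np.sin(t), 0.1 * t        # one quasi-scattered row (x, y, z)

tck, u = splprep([xr, yr, zr], s=1e-4)            # fitted spline + row parameters
u_new = np.linspace(0, 1, 50)                     # uniform resampling parameters
resampled = np.array(splev(u_new, tck))           # 3 x 50 resampled points
print(resampled.shape)                            # rows like this feed the surface fit
```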

  16. Numerical generation of boundary-fitted curvilinear coordinate systems for arbitrarily curved surfaces

    International Nuclear Information System (INIS)

    Takagi, T.; Miki, K.; Chen, B.C.J.; Sha, W.T.

    1985-01-01

    A new method is presented for numerically generating boundary-fitted coordinate systems for arbitrarily curved surfaces. The three-dimensional surface is expressed by functions of two parameters using the geometrical modeling techniques of computer graphics. This leads to new quasi-one- and two-dimensional elliptic partial differential equations for coordinate transformation. Since the equations involve the derivatives of the surface expressions, the grids generated by the equations are distributed over the surface depending on its slope and curvature. A computer program, GRID-CS, based on the method was developed and applied to a surface of the second order, a torus, and the surface of a primary containment vessel for a nuclear reactor. These applications confirm that GRID-CS is a convenient and efficient tool for grid generation on arbitrarily curved surfaces.

  17. A Stepwise Fitting Procedure for automated fitting of Ecopath with Ecosim models

    Directory of Open Access Journals (Sweden)

    Erin Scott

    2016-01-01

    The Stepwise Fitting Procedure automates the testing of alternative hypotheses used for fitting Ecopath with Ecosim (EwE) models to observation reference data (Mackinson et al. 2009). Calibration of EwE model predictions to observed data is important for evaluating any model that will be used for ecosystem-based management. Thus far, the model fitting procedure in EwE has been carried out manually: a repetitive task involving setting >1000 specific individual searches to find the statistically ‘best fit’ model. The novel fitting procedure automates this manual procedure, producing accurate results and letting the modeller concentrate on investigating the ‘best fit’ model for ecological accuracy.

  18. Multiple organ definition in CT using a Bayesian approach for 3D model fitting

    Science.gov (United States)

    Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.

    1995-08-01

    Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape, encoded in a set of biometric organ models (specifically for the liver and kidney) that accurately represent patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition, both before and after the addition of a kidney model to the fitting, and we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.

  19. Surface complexation modeling of zinc sorption onto ferrihydrite.

    Science.gov (United States)

    Dyer, James A; Trivedi, Paras; Scrivner, Noel C; Sparks, Donald L

    2004-02-01

    A previous study involving lead(II) [Pb(II)] sorption onto ferrihydrite over a wide range of conditions highlighted the advantages of combining molecular- and macroscopic-scale investigations with surface complexation modeling to predict Pb(II) speciation and partitioning in aqueous systems. In this work, an extensive collection of new macroscopic and spectroscopic data was used to assess the ability of the modified triple-layer model (TLM) to predict single-solute zinc(II) [Zn(II)] sorption onto 2-line ferrihydrite in NaNO3 solutions as a function of pH, ionic strength, and concentration. Regression of constant-pH isotherm data, together with potentiometric titration and pH edge data, was a much more rigorous test of the modified TLM than fitting pH edge data alone. When coupled with valuable input from spectroscopic analyses, good fits of the isotherm data were obtained with a one-species, one-Zn-sorption-site model using the bidentate-mononuclear surface complex (≡FeO)2Zn; however, surprisingly, both the density of Zn(II) sorption sites and the value of the best-fit equilibrium "constant" for the bidentate-mononuclear complex had to be adjusted with pH to adequately fit the isotherm data. Although spectroscopy provided some evidence for multinuclear surface complex formation at surface loadings approaching site saturation at pH ≥ 6.5, the assumption of a bidentate-mononuclear surface complex provided acceptable fits of the sorption data over the entire range of conditions studied. Regressing edge data in the absence of isotherm and spectroscopic data resulted in a fair number of surface-species/site-type combinations that provided acceptable fits of the edge data, but unacceptable fits of the isotherm data. A linear relationship between log K((≡FeO)2Zn) and pH was found, given by log K((≡FeO)2Zn at 1 g/L) = 2.058(pH) − 6.131. In addition, a surface activity coefficient term was introduced to the model to reduce the ionic strength

  20. Surface Complexation Modeling in Variable Charge Soils: Charge Characterization by Potentiometric Titration

    Directory of Open Access Journals (Sweden)

    Giuliano Marchi

    2015-10-01

    Intrinsic equilibrium constants for 17 representative Brazilian Oxisols were estimated from potentiometric titration, measuring the adsorption of H+ and OH− on amphoteric surfaces in suspensions of varying ionic strength. The equilibrium constants were fitted to two surface complexation models: diffuse layer and constant capacitance. The former was fitted by calculating the total site concentration from curve-fitting estimates and pH extrapolation of the intrinsic equilibrium constants to the PZNPC (hand calculation), considering one and two reactive sites, and by the FITEQL software; the latter was fitted only by FITEQL, with one reactive site. Soil chemical and physical properties were correlated with the intrinsic equilibrium constants. Both surface complexation models fitted our experimental data satisfactorily, but for results at low ionic strength the optimization did not converge in FITEQL. The data were incorporated in Visual MINTEQ, providing a modeling system that can predict protonation-dissociation reactions at the soil surface under changing environmental conditions.

  1. Comparative Evaluation of Conventional and Accelerated Castings on Marginal Fit and Surface Roughness

    Science.gov (United States)

    Jadhav, Vivek Dattatray; Motwani, Bhagwan K.; Shinde, Jitendra; Adhapure, Prasad

    2017-01-01

    Aims: The aim of this study was to evaluate the marginal fit and surface roughness of complete cast crowns made by a conventional and an accelerated casting technique. Settings and Design: The study was divided into three parts. In Part I, the marginal fit of full metal crowns made by both casting techniques was checked in the vertical direction; in Part II, the fit of sectional metal crowns was checked in the horizontal direction; and in Part III, the surface roughness of disc-shaped metal plate specimens made by both casting techniques was checked. Materials and Methods: A conventional technique was compared with an accelerated technique. In Part I of the study, the marginal fit of the full metal crowns and, in Part II, the horizontal fit of sectional metal crowns made by both casting techniques were determined, and in Part III, the surface roughness of castings made with the same techniques was compared. Statistical Analysis Used: The results of the t-test and independent sample test do not indicate statistically significant differences in the marginal discrepancy detected between the two casting techniques. Results: For marginal discrepancy and surface roughness, crowns fabricated with the accelerated technique were significantly different from those fabricated with the conventional technique. Conclusions: The accelerated casting technique showed quite satisfactory results, but the conventional technique was superior in terms of marginal fit and surface roughness. PMID:29042726

  2. Fitting polynomial surfaces to triangular meshes with Voronoi squared distance minimization

    KAUST Repository

    Nivoliers, Vincent

    2012-11-06

    This paper introduces Voronoi squared distance minimization (VSDM), an algorithm that fits a surface to an input mesh. VSDM minimizes an objective function that corresponds to a Voronoi-based approximation of the overall squared distance function between the surface and the input mesh (SDM). This objective function is a generalization of the one minimized by centroidal Voronoi tessellation, and can be minimized by a quasi-Newton solver. VSDM naturally adapts the orientation of the mesh elements to best approximate the input, without estimating any differential quantities. Therefore, it can be applied to triangle soups or surfaces with degenerate triangles, topological noise and sharp features. Applications of fitting quad meshes and polynomial surfaces to input triangular meshes are demonstrated. © 2012 Springer-Verlag London.

  3. An Improved MUSIC Model for Gibbsite Surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Scott C.; Bickmore, Barry R.; Tadanier, Christopher J.; Rosso, Kevin M.

    2004-06-01

    Here we use gibbsite as a model system with which to test a recently published bond-valence method for predicting intrinsic pKa values of surface functional groups on oxides. At issue is whether the method is adequate when valence parameters for the functional groups are derived from ab initio structure optimization of surfaces terminated by vacuum. If not, ab initio molecular dynamics (AIMD) simulations of solvated surfaces (which are much more computationally expensive) will have to be used. To do this, we had to evaluate extant gibbsite potentiometric titration data for which some estimate of edge and basal surface area was available. Applying BET and recently developed atomic force microscopy methods, we found that most of these data sets were flawed, in that their surface area estimates were probably wrong. Similarly, there may have been problems with many of the titration procedures. However, one data set was adequate on both counts, and we applied our method of intrinsic surface pKa prediction to fitting a MUSIC model to these data with considerable success: several features of the titration data were predicted well. However, the model fit was certainly not perfect, and we experienced some difficulties optimizing highly charged, vacuum-terminated surfaces. Therefore, we conclude that we probably need to do AIMD simulations of solvated surfaces to adequately predict intrinsic pKa values for surface functional groups.

  4. Measured, modeled, and causal conceptions of fitness

    Science.gov (United States)

    Abrams, Marshall

    2012-01-01

    This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804

  5. Edge detection and mathematic fitting for corneal surface with Matlab software.

    Science.gov (United States)

    Di, Yue; Li, Mei-Yan; Qiao, Tong; Lu, Na

    2017-01-01

    To select the optimal edge detection methods to identify the corneal surface, and to compare three curve-fitting equations, using Matlab software. Fifteen subjects were recruited. The corneal images from optical coherence tomography (OCT) were imported into Matlab software. Five edge detection methods (Canny, Log, Prewitt, Roberts, Sobel) were used to identify the corneal surface. Then two manual identification methods (ginput and getpts) were applied to identify the edge coordinates, and the differences among these methods were compared. A binomial curve (y = Ax² + Bx + C), a polynomial curve [p(x) = p1x^n + p2x^(n-1) + ... + pnx + pn+1] and a conic section (Ax² + Bxy + Cy² + Dx + Ey + F = 0) were used to fit the corneal surface, and the relative merits of the three fitted curves were analyzed. Finally, the eccentricity (e) obtained by corneal topography and by the conic section fit were compared with a paired t-test. All five edge detection algorithms produced continuous coordinates indicating the edge of the corneal surface. The ordinates from manual identification were close to the inside of the actual edges. The binomial curve was greatly affected by tilt angle. The polynomial curve lacked geometrical properties and was unstable. The conic section could yield the tilted symmetry axis, eccentricity, circle center, etc. There were no significant differences between the e values from corneal topography and from the conic section fit (t = 0.9143, P = 0.3760 > 0.05). It is feasible to model the corneal surface with a mathematical curve in Matlab software. Edge detection has better repeatability and higher efficiency, while manual identification is an indispensable complement to detection. The polynomial and conic section are both viable methods for corneal curve fitting, and the conic curve was the optimal choice based on its specific geometrical properties.
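
    The record's fitting is done in Matlab; as a Python analogue of the conic-section option, the sketch below fits Ax² + Bxy + Cy² + Dx + Ey + F = 0 to edge coordinates by an algebraic least-squares fit (smallest right singular vector), using a synthetic elliptical edge in place of OCT data.

```python
# Python sketch of the conic-section option (the record's fitting is in Matlab):
# fit A x^2 + B xy + C y^2 + D x + E y + F = 0 to detected edge coordinates via
# an algebraic least-squares fit (smallest right singular vector, unit-norm
# coefficients). The "edge" below is a synthetic elliptical arc, not OCT data.
import numpy as np

def fit_conic(x, y):
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    return vt[-1]                                  # A, B, C, D, E, F

rng = np.random.default_rng(7)
t = np.linspace(0.2, np.pi - 0.2, 200)             # partial arc, as for a cornea
x = 6.0 * np.cos(t) + rng.normal(0, 0.02, t.size)
y = 2.5 * np.sin(t) + rng.normal(0, 0.02, t.size)

A, B, C, D, E, F = fit_conic(x, y)
print("conic coefficients:", np.round([A, B, C, D, E, F], 4))
print("eccentricity of the generating ellipse:", round(np.sqrt(1 - (2.5 / 6.0) ** 2), 3))
```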

  6. Fitting neuron models to spike trains

    Directory of Open Access Journals (Sweden)

    Cyrille eRossant

    2011-02-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.

  7. Induced subgraph searching for geometric model fitting

    Science.gov (United States)

    Xiao, Fan; Xiao, Guobao; Yan, Yan; Wang, Xing; Wang, Hanzi

    2017-11-01

    In this paper, we propose a novel model fitting method based on graphs to fit and segment multiple-structure data. In the graph constructed on data, each model instance is represented as an induced subgraph. Following the idea of pursuing the maximum consensus, the multiple geometric model fitting problem is formulated as searching for a set of induced subgraphs including the maximum union set of vertices. After the generation and refinement of the induced subgraphs that represent the model hypotheses, the searching process is conducted on the "qualified" subgraphs. Multiple model instances can be simultaneously estimated by solving a converted problem. Then, we introduce the energy evaluation function to determine the number of model instances in data. The proposed method is able to effectively estimate the number and the parameters of model instances in data severely corrupted by outliers and noises. Experimental results on synthetic data and real images validate the favorable performance of the proposed method compared with several state-of-the-art fitting methods.

  8. MOUNTABILITY PARTS OF MACHINE WITH ROTATING SURFACE, FITTED WITH POSITIVE CLEARANCE

    Directory of Open Access Journals (Sweden)

    Zbigniew BUDNIAK

    2014-06-01

    This paper presents the conditions for automatic assembly of machine parts with rotating surfaces that are fitted with positive clearance. Determination of the general assemblability condition allowed the acceptable relative displacement and skew of the axes of the combined parts at the mounting position to be specified. The derived relations allow the technological capability of the assembly equipment to be assessed. On the basis of this mathematical model, a computer program was developed that determines the effect of geometric, strength and dynamic parameters on the assembly process. Examples of the numerical results are shown in the graphs.

  9. Evaluation of fitting functions for the representation of an O(3P)+H2 potential energy surface. I

    International Nuclear Information System (INIS)

    Wagner, A.F.; Schatz, G.C.; Bowman, J.M.

    1981-01-01

    The DIM surface of Whitlock, Muckerman, and Fisher for the O(3P)+H2 system is used as a test case to evaluate the usefulness of a variety of fitting functions for the representation of potential energy surfaces. Fitting functions based on LEPS, BEBO, and rotated Morse oscillator (RMO) forms are examined. Fitting procedures are developed for combining information about a small portion of the surface and the fitting function to predict where on the surface more information must be obtained to improve the accuracy of the fit. Both unbiased procedures and procedures heavily biased toward the saddle point region of the surface are investigated. Collinear quasiclassical trajectory calculations of the reaction rate constant and one- and three-dimensional transition state theory rate constant calculations are performed and compared for selected fits and the exact DIM test surface. Fitting functions based on BEBO and RMO forms are found to give quite accurate results.

  10. A fitting program for potential energy surfaces of bent triatomic molecules

    International Nuclear Information System (INIS)

    Searles, D.J.; Nagy-Felsobuki, E.I. von

    1992-01-01

    A program has been developed to fit analytical power series expansions (Dunham, Simon-Parr-Finlan, Ogilvie and their exponential variants) and Padé approximants to discrete ab initio potential energy surfaces of non-linear triatomic molecules. The program employs standard least-squares fitting techniques using the singular value decomposition method in order to dampen the higher-order coefficients (if deemed necessary) without significantly degrading the fit. The program makes full use of the symmetry of a triatomic molecule and so addresses the D3h, C2v and Cs cases. (orig.)

  11. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error is measured with goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
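
    A sketch of the two-term Gaussian fit named in the abstract, scored with RMSE and R², is given below using scipy; the hourly irradiance values are synthetic placeholders, not UTP measurements.

```python
# Sketch of the two-term Gaussian fit mentioned in the abstract, scored with
# RMSE and R^2 via scipy. The hourly irradiance values are synthetic
# placeholders, not UTP measurements.
import numpy as np
from scipy.optimize import curve_fit

def gauss2(x, a1, b1, c1, a2, b2, c2):
    return a1 * np.exp(-((x - b1) / c1) ** 2) + a2 * np.exp(-((x - b2) / c2) ** 2)

hour = np.arange(7, 20, dtype=float)                        # 07:00 to 19:00
irradiance = np.array([60, 180, 380, 560, 720, 810, 840,    # synthetic W/m^2
                       800, 700, 540, 350, 160, 40], dtype=float)

popt, _ = curve_fit(gauss2, hour, irradiance, p0=[800, 13, 3, 100, 10, 2], maxfev=10000)
pred = gauss2(hour, *popt)
rmse = np.sqrt(np.mean((irradiance - pred) ** 2))
r2 = 1 - np.sum((irradiance - pred) ** 2) / np.sum((irradiance - irradiance.mean()) ** 2)
print(f"RMSE = {rmse:.1f} W/m^2, R^2 = {r2:.3f}")
```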

  12. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error is measured with goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.

  13. Curve fitting methods for solar radiation data modeling

    International Nuclear Information System (INIS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-01-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error is measured with goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.

  14. Modeling Evolution on Nearly Neutral Network Fitness Landscapes

    Science.gov (United States)

    Yakushkina, Tatiana; Saakian, David B.

    2017-08-01

    To describe virus evolution, it is necessary to define a fitness landscape. In this article, we consider microscopic models with an advanced version of neutral-network fitness landscapes. In this problem setting, we suppose the fitness difference between one-point mutation neighbors to be small. We construct a modification of the Wright-Fisher model which, in the large population limit, is related to ordinary infinite-population models with a nearly neutral network fitness landscape. From the microscopic models in the realistic sequence space, we derive two versions of nearly neutral network models: with sinks and without sinks. We claim that the suggested model describes the evolutionary dynamics of RNA viruses better than the traditional Wright-Fisher model with few sequences.
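
    As a toy illustration of the kind of dynamics being modelled, the sketch below runs discrete Wright-Fisher generations (fitness-weighted multinomial resampling plus rare one-point mutations) on a small nearly neutral landscape; the landscape, parameter values and neutral-network definition are illustrative assumptions, not the paper's model.

```python
# Toy Wright-Fisher dynamics on a small nearly neutral fitness landscape:
# fitness-weighted multinomial resampling (selection + drift) plus rare
# one-point mutations between neighboring genotypes. All parameter values and
# the network definition are illustrative, not the paper's model.
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
L, N, mu, eps, T = 6, 10_000, 1e-4, 5e-3, 2000   # sites, pop. size, per-site mutation, fitness gap, generations

seqs = np.array(list(product([0, 1], repeat=L)))  # all 2^L genotypes
# genotypes with the first site = 0 form a connected neutral network;
# the rest sit a small distance eps below it ("nearly neutral")
fitness = np.where(seqs[:, 0] == 0, 1.0, 1.0 - eps)
idx_weights = 1 << np.arange(L - 1, -1, -1)       # bit vector -> genotype index

counts = np.zeros(len(seqs), dtype=int)
counts[0] = N                                     # monomorphic start on the network
for _ in range(T):
    w = counts * fitness
    counts = rng.multinomial(N, w / w.sum())      # selection + drift
    for i in np.nonzero(counts)[0]:               # one-point mutations
        for _ in range(rng.binomial(counts[i], mu * L)):
            neighbor = seqs[i].copy()
            neighbor[rng.integers(L)] ^= 1        # flip one random site
            counts[i] -= 1
            counts[int(neighbor @ idx_weights)] += 1

print("fraction of the population on the neutral network:",
      round(counts[fitness == 1.0].sum() / N, 3))
```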

  15. OCT-based profiler for automating ocular surface prosthetic fitting (Conference Presentation)

    Science.gov (United States)

    Mujat, Mircea; Patel, Ankit H.; Maguluri, Gopi N.; Iftimia, Nicusor V.; Patel, Chirag; Agranat, Josh; Tomashevskaya, Olga; Bonte, Eugene; Ferguson, R. Daniel

    2016-03-01

    The use of a Prosthetic Replacement of the Ocular Surface Environment (PROSE) device is a revolutionary treatment for military patients who have lost their eyelids due to third-degree facial burns and for civilians who suffer from a host of corneal diseases. However, custom manual fitting is often a protracted, painful, inexact process that requires multiple fitting sessions, and training for new practitioners is a long process. Automated methods to measure the complete corneal and scleral topology would provide a valuable tool for both clinicians and PROSE device manufacturers and would help streamline the fitting process. PSI has developed an ocular anterior-segment profiler based on Optical Coherence Tomography (OCT), which provides a 3D measure of the surface of the sclera and cornea. This device will provide topography data that will be used to expedite and improve the fabrication process for PROSE devices. OCT has been used to image portions of the cornea and sclera and to measure surface topology for smaller contact lenses [1-3]. However, current state-of-the-art anterior-eye OCT systems can only scan about 16 mm of the eye's anterior surface, which is not sufficient for covering the sclera around the cornea. In addition, there is no systematic method for scanning and aligning/stitching the full scleral/corneal surface, and commercial segmentation software is not optimized for the PROSE application. Although preliminary, our results demonstrate the capability of PSI's approach to generate accurate surface plots over relatively large areas of the eye, which is not currently possible with any other existing platform. Testing the technology on human volunteers is currently underway at the Boston Foundation for Sight.

  16. Fitting Hidden Markov Models to Psychological Data

    Directory of Open Access Journals (Sweden)

    Ingmar Visser

    2002-01-01

    Markov models have been used extensively in the psychology of learning. Applications of hidden Markov models are rare, however. This is partially due to the fact that comprehensive statistics for model selection and model assessment are lacking in the psychological literature. We present model selection and model assessment statistics that are particularly useful in applying hidden Markov models in psychology. These statistics are presented and evaluated by simulation studies for a toy example. We compare AIC, BIC and related criteria and introduce a prediction error measure for assessing goodness-of-fit. In a simulation study, two methods of fitting equality constraints are compared. In two illustrative examples with experimental data we apply selection criteria, fit models with constraints and assess goodness-of-fit. First, data from a concept identification task are analyzed. Hidden Markov models provide a flexible approach to analyzing such data when compared to other modeling methods. Second, a novel application of hidden Markov models in implicit learning is presented. Hidden Markov models are used in this context to quantify the knowledge that subjects express in an implicit learning task. This method of analyzing implicit learning data provides a comprehensive approach for addressing important theoretical issues in the field.
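
    The model-selection idea (compare AIC/BIC across candidate numbers of hidden states) can be sketched as below with the hmmlearn package and a synthetic two-regime series; the choice of Gaussian emissions and the data are assumptions for illustration, whereas the paper's own examples use discrete-state models fitted to psychological task data.

```python
# Sketch of the model-selection idea: fit Gaussian-emission HMMs with k = 1..4
# hidden states to a 1-D series and compare AIC/BIC computed from the
# log-likelihood. hmmlearn, the Gaussian emissions and the synthetic two-regime
# data are assumptions for illustration only.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(9)
X = np.concatenate([rng.normal(1.2, 0.30, 150),     # e.g. slow, variable early trials
                    rng.normal(0.6, 0.10, 150)]).reshape(-1, 1)

for k in range(1, 5):
    model = hmm.GaussianHMM(n_components=k, covariance_type="diag",
                            n_iter=200, random_state=0)
    model.fit(X)
    logL = model.score(X)
    n_params = (k - 1) + k * (k - 1) + 2 * k        # start probs + transitions + means + variances (1-D)
    aic = 2 * n_params - 2 * logL
    bic = n_params * np.log(len(X)) - 2 * logL
    print(f"states={k}  logL={logL:8.1f}  AIC={aic:7.1f}  BIC={bic:7.1f}")
```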

  17. Random-growth urban model with geographical fitness

    Science.gov (United States)

    Kii, Masanobu; Akimoto, Keigo; Doi, Kenji

    2012-12-01

    This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
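
    A toy version of the mechanism described above is sketched below under assumed parameter values: a new city appears with probability p and draws a random geographical fitness, while existing cities grow deterministically in proportion to size times fitness, so randomness enters only at city creation.

```python
# Toy random-growth urban model with geographical fitness, under assumed
# parameter values: with probability p a new city is founded (its fitness drawn
# at random); otherwise the population increment is shared deterministically
# among existing cities in proportion to size x fitness, so randomness enters
# only at city creation.
import numpy as np

rng = np.random.default_rng(11)
p, steps, increment = 0.02, 20_000, 100.0          # creation prob., steps, people added per step

sizes = np.array([1000.0])                          # seed city
fitness = np.array([1.0])
for _ in range(steps):
    if rng.random() < p:                            # random event: a new city appears
        sizes = np.append(sizes, increment)
        fitness = np.append(fitness, rng.uniform(0.5, 1.5))
    else:                                           # deterministic fitness-weighted growth
        share = sizes * fitness / np.dot(sizes, fitness)
        sizes = sizes + increment * share

ranked = np.sort(sizes)[::-1]
print("number of cities:", len(ranked))
print("rank-1 / rank-10 size ratio (heavy-tail check):", round(ranked[0] / ranked[9], 1))
```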

  18. Dissolution model for a glass having an adherent insoluble surface layer

    International Nuclear Information System (INIS)

    Harvey, K.B.; Larocque, C.A.B.

    1990-01-01

    Waste form glasses that contain substantial quantities of iron, manganese, and aluminum oxides, such as the Savannah River SRL TDS-131 glass, form a thick, hydrated surface layer when placed in contact with water. The dissolution of such a glass has been modeled with the Savannah River Model. The authors showed previously that the equations of the Savannah River Model could be fitted to published experimental data if a time-dependent diffusion coefficient was assumed for species diffusing through the surface layer. The Savannah River Model assumes that all of the material dissolved from the glass enters solution, whereas it was observed that substantial quantities of material were retained in the surface layer. An alternative model, presented here, contains a mass balance equation that allows material either to enter solution or to be retained in the surface layer. It is shown that the equations derived using this model can be fitted to the published experimental data assuming a constant diffusion coefficient for species diffusing through the surface layer.

  19. Contrast Gain Control Model Fits Masking Data

    Science.gov (United States)

    Watson, Andrew B.; Solomon, Joshua A.; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    We studied the fit of a contrast gain control model to data of Foley (JOSA 1994), consisting of thresholds for a Gabor patch masked by gratings of various orientations, or by compounds of two orientations. Our general model includes models of Foley and Teo & Heeger (IEEE 1994). Our specific model used a bank of Gabor filters with octave bandwidths at 8 orientations. Excitatory and inhibitory nonlinearities were power functions with exponents of 2.4 and 2. Inhibitory pooling was broad in orientation, but narrow in spatial frequency and space. Minkowski pooling used an exponent of 4. All of the data for observer KMF were well fit by the model. We have developed a contrast gain control model that fits masking data. Unlike Foley's, our model accepts images as inputs. Unlike Teo & Heeger's, our model did not require multiple channels for different dynamic ranges.
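    A loose Python sketch of a divisive contrast gain control response using the exponents quoted above (2.4 excitatory, 2 inhibitory, Minkowski pooling exponent 4); the filter outputs, pooling weights, and the semisaturation constant sigma are made-up stand-ins rather than the authors' fitted model.

```python
import numpy as np

def gain_control_response(filter_outputs, p=2.4, q=2.0, sigma=0.1, pool_w=None):
    """Divisive gain control: excitatory drive |r|^p divided by sigma^q plus a
    weighted inhibitory pool of |r|^q over channels."""
    r = np.abs(filter_outputs)
    if pool_w is None:
        pool_w = np.ones_like(r) / r.size          # broad, uniform pool over orientation
    inhibition = np.sum(pool_w * r**q)
    return r**p / (sigma**q + inhibition)

def minkowski_pool(responses, beta=4.0):
    """Combine channel responses into a single decision variable."""
    return np.sum(np.abs(responses)**beta) ** (1.0 / beta)

# Made-up linear filter outputs for eight orientation channels (target plus mask).
outputs = np.array([0.90, 0.40, 0.10, 0.05, 0.02, 0.05, 0.10, 0.40])
print("decision variable:", minkowski_pool(gain_control_response(outputs)))
```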

  20. Feature extraction through least squares fit to a simple model

    International Nuclear Information System (INIS)

    Demuth, H.B.

    1976-01-01

    The Oak Ridge National Laboratory (ORNL) presented the Los Alamos Scientific Laboratory (LASL) with 18 radiographs of fuel rod test bundles. The problem is to estimate the thickness of the gap between some cylindrical rods and a flat wall surface. The edges of the gaps are poorly defined due to finite source size, x-ray scatter, parallax, film grain noise, and other degrading effects. The radiographs were scanned and the scan-line data were averaged to reduce noise and to convert the problem to one dimension. A model of the ideal gap, convolved with an appropriate point-spread function, was fit to the averaged data with a least squares program; and the gap width was determined from the final fitted-model parameters. The least squares routine did converge and the gaps obtained are of reasonable size. The method is remarkably insensitive to noise. This report describes the problem, the techniques used to solve it, and the results and conclusions. Suggestions for future work are also given
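    The fitting idea above, an ideal gap convolved with a point-spread function and fitted by least squares, can be sketched in Python with a Gaussian PSF, for which the convolution has a closed form in terms of error functions; the synthetic scan line and all parameter values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def blurred_gap(x, x0, w, s, a, b):
    """Rectangular gap of width w centred at x0, convolved with a Gaussian
    point-spread function of sigma s, on background b with amplitude a."""
    lo, hi = x0 - w / 2.0, x0 + w / 2.0
    return b + 0.5 * a * (erf((x - lo) / (np.sqrt(2) * s)) - erf((x - hi) / (np.sqrt(2) * s)))

# Synthetic averaged scan line standing in for the radiograph data.
x = np.linspace(-5, 5, 400)
rng = np.random.default_rng(1)
y = blurred_gap(x, 0.3, 1.2, 0.4, 1.0, 0.1) + rng.normal(0, 0.05, x.size)

popt, pcov = curve_fit(blurred_gap, x, y, p0=[0.0, 1.0, 0.5, 1.0, 0.0])
print(f"fitted gap width = {popt[1]:.3f} +/- {np.sqrt(pcov[1, 1]):.3f}")
```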

  1. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    DEFF Research Database (Denmark)

    Bolker, B.M.; Gardner, B.; Maunder, M.

    2013-01-01

    Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. R is convenient and (relatively) easy...... to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield...

  2. Local fit evaluation of structural equation models using graphical criteria.

    Science.gov (United States)

    Thoemmes, Felix; Rosseel, Yves; Textor, Johannes

    2018-03-01

    Evaluation of model fit is critically important for every structural equation model (SEM), and sophisticated methods have been developed for this task. Among them are the χ² goodness-of-fit test, decomposition of the χ², derived measures like the popular root mean square error of approximation (RMSEA) or comparative fit index (CFI), or inspection of residuals or modification indices. Many of these methods provide a global approach to model fit evaluation: A single index is computed that quantifies the fit of the entire SEM to the data. In contrast, graphical criteria like d-separation or trek-separation allow derivation of implications that can be used for local fit evaluation, an approach that is hardly ever applied. We provide an overview of local fit evaluation from the viewpoint of SEM practitioners. In the presence of model misfit, local fit evaluation can potentially help in pinpointing where the problem with the model lies. For models that do fit the data, local tests can identify the parts of the model that are corroborated by the data. Local tests can also be conducted before a model is fitted at all, and they can be used even for models that are globally underidentified. We discuss appropriate statistical local tests, and provide applied examples. We also present novel software in R that automates this type of local fit evaluation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. topicmodels: An R Package for Fitting Topic Models

    Directory of Open Access Journals (Sweden)

    Bettina Grün

    2011-05-01

    Full Text Available Topic models allow the probabilistic modeling of term frequency occurrences in documents. The fitted model can be used to estimate the similarity between documents as well as between a set of specified keywords using an additional layer of latent variables which are referred to as topics. The R package topicmodels provides basic infrastructure for fitting topic models based on data structures from the text mining package tm. The package includes interfaces to two algorithms for fitting topic models: the variational expectation-maximization algorithm provided by David M. Blei and co-authors and an algorithm using Gibbs sampling by Xuan-Hieu Phan and co-authors.
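    The package itself is written in R; as a rough Python analogue of the variational EM route, scikit-learn's LatentDirichletAllocation can be used on a small toy corpus (scikit-learn >= 1.0 assumed for get_feature_names_out; the Gibbs-sampling algorithm is not shown here).

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "model fitting and parameter estimation",
    "topic models for text mining",
    "fitting topic models with variational inference",
    "surface fitting of point clouds",
    "least squares surface estimation",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)                          # document-term matrix
lda = LatentDirichletAllocation(n_components=2, learning_method="batch", random_state=0)
doc_topics = lda.fit_transform(X)                    # per-document topic proportions

vocab = vec.get_feature_names_out()
for k, row in enumerate(lda.components_):
    top = row.argsort()[-3:][::-1]                   # three highest-weight terms per topic
    print(f"topic {k}:", [vocab[i] for i in top])
```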

  4. Improving the Fit of a Land-Surface Model to Data Using its Adjoint

    Science.gov (United States)

    Raoult, N.; Jupp, T. E.; Cox, P. M.; Luke, C.

    2015-12-01

    Land-surface models (LSMs) are of growing importance in the world of climate prediction. They are crucial components of larger Earth system models that are aimed at understanding the effects of land surface processes on the global carbon cycle. The Joint UK Land Environment Simulator (JULES) is the land-surface model used by the UK Met Office. It has been automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or 'adjoint', of the model. Using this adjoint, the adJULES parameter estimation system has been developed, to search for locally optimum parameter sets by calibrating against observations. adJULES presents an opportunity to confront JULES with many different observations, and make improvements to the model parameterisation. In the newest version of adJULES, multiple sites can be used in the calibration, giving a generic set of parameters that can be generalised over plant functional types. We present an introduction to the adJULES system and its applications to data from a variety of flux tower sites. We show that calculation of the 2nd derivative of JULES allows us to produce posterior probability density functions of the parameters and to show how knowledge of the parameter values is constrained by the observations.

  5. Fitting polynomial surfaces to triangular meshes with Voronoi Squared Distance Minimization

    KAUST Repository

    Nivoliers, Vincent; Yan, Dongming; Lévy, Bruno L.

    2011-01-01

    This paper introduces Voronoi Squared Distance Minimization (VSDM), an algorithm that fits a surface to an input mesh. VSDM minimizes an objective function that corresponds to a Voronoi-based approximation of the overall squared distance function

  6. Fitting polynomial surfaces to triangular meshes with Voronoi squared distance minimization

    KAUST Repository

    Nivoliers, Vincent; Yan, Dongming; Lévy, Bruno L.

    2012-01-01

    This paper introduces Voronoi squared distance minimization (VSDM), an algorithm that fits a surface to an input mesh. VSDM minimizes an objective function that corresponds to a Voronoi-based approximation of the overall squared distance function

  7. Goodness-of-Fit Assessment of Item Response Theory Models

    Science.gov (United States)

    Maydeu-Olivares, Alberto

    2013-01-01

    The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…

  8. A fitting LEGACY – modelling Kepler's best stars

    Directory of Open Access Journals (Sweden)

    Aarslev Magnus J.

    2017-01-01

    Full Text Available The LEGACY sample represents the best solar-like stars observed in the Kepler mission[5, 8]. The 66 stars in the sample are all on the main sequence or only slightly more evolved. They each have more than one year's observation data in short cadence, allowing for precise extraction of individual frequencies. Here we present model fits using a modified ASTFIT procedure employing two different near-surface-effect corrections, one by Christensen-Dalsgaard[4] and a newer correction proposed by Ball & Gizon[1]. We then compare the results obtained using the different corrections. We find that using the latter correction yields lower masses and significantly lower χ2 values for a large part of the sample.

  9. Automatic fitting of spiking neuron models to electrophysiological recordings

    Directory of Open Access Journals (Sweden)

    Cyrille Rossant

    2010-03-01

    Full Text Available Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains that can run in parallel on graphics processing units (GPUs. The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.

  10. The FITS model office ergonomics program: a model for best practice.

    Science.gov (United States)

    Chim, Justine M Y

    2014-01-01

    An effective office ergonomics program can deliver positive results in reducing musculoskeletal injury rates, enhancing productivity, and improving staff well-being and job satisfaction. Its objective is to provide a systematic solution to manage the potential risk of musculoskeletal disorders among computer users in an office setting. The FITS Model Office Ergonomics Program has been developed, drawing on the legislative requirements for promoting the health and safety of workers who use computers for extended periods, as well as on previous research findings. The Model also builds on practical industry knowledge of ergonomics, occupational health and safety management, and human resources management in Hong Kong and overseas. This paper proposes a comprehensive office ergonomics program, the FITS Model, which considers (1) Furniture Evaluation and Selection; (2) Individual Workstation Assessment; (3) Training and Education; and (4) Stretching Exercises and Rest Breaks as elements of an effective program. An experienced ergonomics practitioner should be included in the program design and implementation. Through the FITS Model Office Ergonomics Program, the risk of musculoskeletal disorders among computer users can be eliminated or minimized, and workplace health and safety and employees' wellness enhanced.

  11. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  12. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    Science.gov (United States)

    Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magunsson, Arni; Martell, Steve; Nash, John; Nielson, Anders; Regentz, Jim; Skaug, Hans; Zipkin, Elise

    2013-01-01

    1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.
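    One of the paper's general suggestions, changing the mathematical description of a model to make it easier to fit, can be illustrated in Python by estimating positive parameters on the log scale; the Type II functional response and the data below are a generic example, not one of the paper's case studies.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# Type II functional response: attack rate a and handling time h must be positive,
# so estimate log(a) and log(h) to keep the optimizer in a well-behaved region.
def holling2_log(prey, log_a, log_h):
    a, h = np.exp(log_a), np.exp(log_h)
    return a * prey / (1.0 + a * h * prey)

prey = np.linspace(1, 100, 30)
eaten = holling2_log(prey, np.log(0.5), np.log(0.08)) + rng.normal(0, 0.3, prey.size)

popt, _ = curve_fit(holling2_log, prey, eaten, p0=[0.0, np.log(0.1)])
print("attack rate ~", np.exp(popt[0]), " handling time ~", np.exp(popt[1]))
```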

  13. Experimental Rugged Fitness Landscape in Protein Sequence Space

    Science.gov (United States)

    Hayashi, Yuuki; Aita, Takuyo; Toyota, Hitoshi; Husimi, Yuzuru; Urabe, Itaru; Yomo, Tetsuya

    2006-01-01

    The fitness landscape in sequence space determines the process of biomolecular evolution. To plot the fitness landscape of protein function, we carried out in vitro molecular evolution beginning with a defective fd phage carrying a random polypeptide of 139 amino acids in place of the g3p minor coat protein D2 domain, which is essential for phage infection. After 20 cycles of random substitution at sites 12–130 of the initial random polypeptide and selection for infectivity, the selected phage showed a 1.7×104-fold increase in infectivity, defined as the number of infected cells per ml of phage suspension. Fitness was defined as the logarithm of infectivity, and we analyzed (1) the dependence of stationary fitness on library size, which increased gradually, and (2) the time course of changes in fitness in transitional phases, based on an original theory regarding the evolutionary dynamics in Kauffman's n-k fitness landscape model. In the landscape model, single mutations at single sites among n sites affect the contribution of k other sites to fitness. Based on the results of these analyses, k was estimated to be 18–24. According to the estimated parameters, the landscape was plotted as a smooth surface up to a relative fitness of 0.4 of the global peak, whereas the landscape had a highly rugged surface with many local peaks above this relative fitness value. Based on the landscapes of these two different surfaces, it appears possible for adaptive walks with only random substitutions to climb with relative ease up to the middle region of the fitness landscape from any primordial or random sequence, whereas an enormous range of sequence diversity is required to climb further up the rugged surface above the middle region. PMID:17183728
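    A minimal Python sketch of an NK-style landscape and an adaptive walk by random single-site substitutions, in the spirit of the Kauffman model invoked above; the hashing trick for fitness contributions and all parameter values (n, k, number of steps) are illustrative assumptions.

```python
import numpy as np

def nk_fitness(genome, k, seed=0):
    """NK fitness: each site's contribution depends on its own state and k
    neighbouring states; contributions are uniform random numbers generated
    reproducibly by hashing the local configuration."""
    n = len(genome)
    total = 0.0
    for i in range(n):
        local = tuple(genome[(i + j) % n] for j in range(k + 1))
        h = hash((seed, i, local)) & 0xFFFFFFFF
        total += np.random.default_rng(h).random()
    return total / n

def adaptive_walk(n=50, k=8, steps=1500, seed=3):
    rng = np.random.default_rng(seed)
    genome = rng.integers(0, 2, n)
    fit = nk_fitness(tuple(genome), k)
    for _ in range(steps):
        trial = genome.copy()
        trial[rng.integers(n)] ^= 1                  # random single-site substitution
        f_trial = nk_fitness(tuple(trial), k)
        if f_trial > fit:                            # keep only uphill moves
            genome, fit = trial, f_trial
    return fit

print("fitness reached by an adaptive walk:", round(adaptive_walk(), 3))
```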

  14. Experimental rugged fitness landscape in protein sequence space.

    Science.gov (United States)

    Hayashi, Yuuki; Aita, Takuyo; Toyota, Hitoshi; Husimi, Yuzuru; Urabe, Itaru; Yomo, Tetsuya

    2006-12-20

    The fitness landscape in sequence space determines the process of biomolecular evolution. To plot the fitness landscape of protein function, we carried out in vitro molecular evolution beginning with a defective fd phage carrying a random polypeptide of 139 amino acids in place of the g3p minor coat protein D2 domain, which is essential for phage infection. After 20 cycles of random substitution at sites 12-130 of the initial random polypeptide and selection for infectivity, the selected phage showed a 1.7x10(4)-fold increase in infectivity, defined as the number of infected cells per ml of phage suspension. Fitness was defined as the logarithm of infectivity, and we analyzed (1) the dependence of stationary fitness on library size, which increased gradually, and (2) the time course of changes in fitness in transitional phases, based on an original theory regarding the evolutionary dynamics in Kauffman's n-k fitness landscape model. In the landscape model, single mutations at single sites among n sites affect the contribution of k other sites to fitness. Based on the results of these analyses, k was estimated to be 18-24. According to the estimated parameters, the landscape was plotted as a smooth surface up to a relative fitness of 0.4 of the global peak, whereas the landscape had a highly rugged surface with many local peaks above this relative fitness value. Based on the landscapes of these two different surfaces, it appears possible for adaptive walks with only random substitutions to climb with relative ease up to the middle region of the fitness landscape from any primordial or random sequence, whereas an enormous range of sequence diversity is required to climb further up the rugged surface above the middle region.

  15. Experimental rugged fitness landscape in protein sequence space.

    Directory of Open Access Journals (Sweden)

    Yuuki Hayashi

    Full Text Available The fitness landscape in sequence space determines the process of biomolecular evolution. To plot the fitness landscape of protein function, we carried out in vitro molecular evolution beginning with a defective fd phage carrying a random polypeptide of 139 amino acids in place of the g3p minor coat protein D2 domain, which is essential for phage infection. After 20 cycles of random substitution at sites 12-130 of the initial random polypeptide and selection for infectivity, the selected phage showed a 1.7x10(4)-fold increase in infectivity, defined as the number of infected cells per ml of phage suspension. Fitness was defined as the logarithm of infectivity, and we analyzed (1) the dependence of stationary fitness on library size, which increased gradually, and (2) the time course of changes in fitness in transitional phases, based on an original theory regarding the evolutionary dynamics in Kauffman's n-k fitness landscape model. In the landscape model, single mutations at single sites among n sites affect the contribution of k other sites to fitness. Based on the results of these analyses, k was estimated to be 18-24. According to the estimated parameters, the landscape was plotted as a smooth surface up to a relative fitness of 0.4 of the global peak, whereas the landscape had a highly rugged surface with many local peaks above this relative fitness value. Based on the landscapes of these two different surfaces, it appears possible for adaptive walks with only random substitutions to climb with relative ease up to the middle region of the fitness landscape from any primordial or random sequence, whereas an enormous range of sequence diversity is required to climb further up the rugged surface above the middle region.

  16. Are Physical Education Majors Models for Fitness?

    Science.gov (United States)

    Kamla, James; Snyder, Ben; Tanner, Lori; Wash, Pamela

    2012-01-01

    The National Association of Sport and Physical Education (NASPE) (2002) has taken a firm stance on the importance of adequate fitness levels of physical education teachers stating that they have the responsibility to model an active lifestyle and to promote fitness behaviors. Since the NASPE declaration, national initiatives like Let's Move…

  17. Learning-based automated segmentation of the carotid artery vessel wall in dual-sequence MRI using subdivision surface fitting.

    Science.gov (United States)

    Gao, Shan; van 't Klooster, Ronald; Kitslaar, Pieter H; Coolen, Bram F; van den Berg, Alexandra M; Smits, Loek P; Shahzad, Rahil; Shamonin, Denis P; de Koning, Patrick J H; Nederveen, Aart J; van der Geest, Rob J

    2017-10-01

    The quantification of vessel wall morphology and plaque burden requires vessel segmentation, which is generally performed by manual delineations. The purpose of our work is to develop and evaluate a new 3D model-based approach for carotid artery wall segmentation from dual-sequence MRI. The proposed method segments the lumen and outer wall surfaces including the bifurcation region by fitting a subdivision surface constructed hierarchical-tree model to the image data. In particular, a hybrid segmentation which combines deformable model fitting with boundary classification was applied to extract the lumen surface. The 3D model ensures the correct shape and topology of the carotid artery, while the boundary classification uses combined image information of 3D TOF-MRA and 3D BB-MRI to promote accurate delineation of the lumen boundaries. The proposed algorithm was validated on 25 subjects (48 arteries) including both healthy volunteers and atherosclerotic patients with 30% to 70% carotid stenosis. For both lumen and outer wall border detection, our result shows good agreement between manually and automatically determined contours, with contour-to-contour distance less than 1 pixel as well as Dice overlap greater than 0.87 at all different carotid artery sections. The presented 3D segmentation technique has demonstrated the capability of providing vessel wall delineation for 3D carotid MRI data with high accuracy and limited user interaction. This brings benefits to large-scale patient studies for assessing the effect of pharmacological treatment of atherosclerosis by reducing image analysis time and bias between human observers. © 2017 American Association of Physicists in Medicine.

  18. Fermi surface changes in dilute magnesium alloys: a pseudopotential band structure model

    International Nuclear Information System (INIS)

    Fung, W.K.

    1976-01-01

    The de Haas-van Alphen effect has been used to study the Fermi surface of pure magnesium and its dilute alloys containing lithium and indium. The quantum oscillations in magnetization were detected by means of a torque magnetometer in magnetic fields up to 36 kilogauss and over a temperature range of 4.2 to 1.7 K. The results provide information on the effects of lithium and indium solutes on the Fermi surface of magnesium in changes of extremal cross sections and effective masses as well as the relaxation times associated with the orbits. The nonlocal pseudopotential model proposed by Kimball, Stark and Mueller has been fitted to the Fermi surface of magnesium and extended to include the dilute alloys, fitting all the observed de Haas-van Alphen frequencies with an accuracy of better than 1 percent. A modified rigid band interpretation, including both Fermi energy and local band edge changes computed from the model, gives an overall satisfactory description of the observed frequency shifts. With the pseudo-wavefunctions provided by the nonlocal model, the relaxation times in terms of Dingle temperatures for several orbits have been predicted using Sorbello's multiple-plane-wave phase shift model. The calculation with phase shifts obtained from a model potential yields a greater anisotropy than has been observed experimentally, while a two-parameter phase shift model provides a good fit to the experimental results.

  19. ITEM LEVEL DIAGNOSTICS AND MODEL - DATA FIT IN ITEM ...

    African Journals Online (AJOL)

    Global Journal

    Item response theory (IRT) is a framework for modeling and analyzing item response ... data. Though, there is an argument that the evaluation of fit in IRT modeling has been ... National Council on Measurement in Education ... model data fit should be based on three types of ... prediction should be assessed through the.

  20. SURFACE FITTING FILTERING OF LIDAR POINT CLOUD WITH WAVEFORM INFORMATION

    Directory of Open Access Journals (Sweden)

    S. Xing

    2017-09-01

    Full Text Available Full-waveform LiDAR is an active technology in photogrammetry and remote sensing. It provides more detailed information about objects along the path of a laser pulse than discrete-return topographic LiDAR. High-quality point cloud and waveform information can be obtained by waveform decomposition, which can contribute to accurate filtering. A surface fitting filtering method that uses waveform information is proposed to exploit this advantage. Firstly, the discrete point cloud and waveform parameters are resolved by globally convergent Levenberg-Marquardt decomposition. Secondly, ground seed points are selected, and abnormal ones are detected using waveform parameters and robust estimation. Thirdly, the terrain surface is fitted and the height difference threshold is determined in consideration of window size and mean square error. Finally, the points are classified gradually as the window size increases; the filtering process finishes once the window size exceeds the threshold. Waveform data over urban, farmland and mountain areas from WATER (Watershed Allied Telemetry Experimental Research) are selected for the experiments. The results show that, compared with the traditional method, the accuracy of point cloud filtering is further improved and the proposed method has high practical value.
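    A much-simplified Python sketch of the surface-fitting filtering idea (fit a trend surface to ground seeds, accept points within a height-difference threshold, refit); it ignores the waveform parameters, the robust screening of seeds, and the growing window sizes of the actual method.

```python
import numpy as np

def fit_quadric(xyz):
    """Least-squares quadratic trend surface z = f(x, y) through ground points."""
    x, y, z = xyz.T
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef

def predict_z(coef, xy):
    x, y = xy.T
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2]) @ coef

def filter_ground(points, seeds, dz=0.3, n_iter=3):
    """Iteratively refit the surface and accept points within dz of it as ground."""
    ground = seeds
    for _ in range(n_iter):
        coef = fit_quadric(ground)
        resid = points[:, 2] - predict_z(coef, points[:, :2])
        ground = points[np.abs(resid) < dz]
    return ground

# Synthetic tile: gently sloping terrain plus some elevated off-terrain points.
rng = np.random.default_rng(4)
xy = rng.uniform(0, 100, (500, 2))
z = 0.02 * xy[:, 0] + 0.01 * xy[:, 1] + rng.normal(0, 0.05, 500)
z[:50] += 5.0
pts = np.column_stack([xy, z])
ground = filter_ground(pts, seeds=pts[np.argsort(pts[:, 2])[:100]])
print(len(ground), "of", len(pts), "points classified as ground")
```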

  1. Using geometry to improve model fitting and experiment design for glacial isostasy

    Science.gov (United States)

    Kachuck, S. B.; Cathles, L. M.

    2017-12-01

    As scientists we routinely deal with models, which are geometric objects at their core - the manifestation of a set of parameters as predictions for comparison with observations. When the number of observations exceeds the number of parameters, the model is a hypersurface (the model manifold) in the space of all possible predictions. The object of parameter fitting is to find the parameters corresponding to the point on the model manifold as close to the vector of observations as possible. But the geometry of the model manifold can make this difficult. By curving, ending abruptly (where, for instance, parameters go to zero or infinity), and by stretching and compressing the parameters together in unexpected directions, it can be difficult to design algorithms that efficiently adjust the parameters. Even at the optimal point on the model manifold, parameters might not be individually resolved well enough to be applied to new contexts. In our context of glacial isostatic adjustment, models of sparse surface observations have a broad spread of sensitivity to mixtures of the earth's viscous structure and the surface distribution of ice over the last glacial cycle. This impedes precise statements about crucial geophysical processes, such as the planet's thermal history or the climates that controlled the ice age. We employ geometric methods developed in the field of systems biology to improve the efficiency of fitting (geodesic accelerated Levenberg-Marquardt) and to identify the maximally informative sources of additional data to make better predictions of sea levels and ice configurations (optimal experiment design). We demonstrate this in particular in reconstructions of the Barents Sea Ice Sheet, where we show that only certain kinds of data from the central Barents have the power to distinguish between proposed models.
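    A toy Python illustration of the two ingredients mentioned above, Levenberg-Marquardt fitting and information-based experiment design; the exponential relaxation model, candidate times, and the D-optimality criterion are generic stand-ins, not the authors' glacial isostatic adjustment setup.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy two-parameter relaxation model standing in for a glacial-adjustment response.
def model(theta, t):
    amp, tau = theta
    return amp * np.exp(-t / tau)

t_obs = np.array([0.5, 1.0, 2.0])
y_obs = np.array([4.1, 3.3, 2.2])
fit = least_squares(lambda th: model(th, t_obs) - y_obs, x0=[5.0, 2.0], method="lm")

def fisher_information(theta, t, eps=1e-6):
    """Finite-difference Jacobian J, then J^T J (unit-variance Fisher information)."""
    J = np.empty((t.size, theta.size))
    for j in range(theta.size):
        d = np.zeros_like(theta)
        d[j] = eps
        J[:, j] = (model(theta + d, t) - model(theta - d, t)) / (2 * eps)
    return J.T @ J

# D-optimal design: choose the candidate observation time that most increases
# det(Fisher information), i.e. the one that best constrains the parameters.
candidates = np.linspace(0.1, 10, 50)
gain = [np.linalg.det(fisher_information(fit.x, np.append(t_obs, tc))) for tc in candidates]
print("fitted parameters:", fit.x, " most informative new time:", candidates[int(np.argmax(gain))])
```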

  2. Assessing fit in Bayesian models for spatial processes

    KAUST Repository

    Jun, M.; Katzfuss, M.; Hu, J.; Johnson, V. E.

    2014-01-01

    © 2014 John Wiley & Sons, Ltd. Gaussian random fields are frequently used to model spatial and spatial-temporal data, particularly in geostatistical settings. As much of the attention of the statistics community has been focused on defining and estimating the mean and covariance functions of these processes, little effort has been devoted to developing goodness-of-fit tests to allow users to assess the models' adequacy. We describe a general goodness-of-fit test and related graphical diagnostics for assessing the fit of Bayesian Gaussian process models using pivotal discrepancy measures. Our method is applicable for both regularly and irregularly spaced observation locations on planar and spherical domains. The essential idea behind our method is to evaluate pivotal quantities defined for a realization of a Gaussian random field at parameter values drawn from the posterior distribution. Because the nominal distribution of the resulting pivotal discrepancy measures is known, it is possible to quantitatively assess model fit directly from the output of Markov chain Monte Carlo algorithms used to sample from the posterior distribution on the parameter space. We illustrate our method in a simulation study and in two applications.

  3. Assessing fit in Bayesian models for spatial processes

    KAUST Repository

    Jun, M.

    2014-09-16

    © 2014 John Wiley & Sons, Ltd. Gaussian random fields are frequently used to model spatial and spatial-temporal data, particularly in geostatistical settings. As much of the attention of the statistics community has been focused on defining and estimating the mean and covariance functions of these processes, little effort has been devoted to developing goodness-of-fit tests to allow users to assess the models' adequacy. We describe a general goodness-of-fit test and related graphical diagnostics for assessing the fit of Bayesian Gaussian process models using pivotal discrepancy measures. Our method is applicable for both regularly and irregularly spaced observation locations on planar and spherical domains. The essential idea behind our method is to evaluate pivotal quantities defined for a realization of a Gaussian random field at parameter values drawn from the posterior distribution. Because the nominal distribution of the resulting pivotal discrepancy measures is known, it is possible to quantitatively assess model fit directly from the output of Markov chain Monte Carlo algorithms used to sample from the posterior distribution on the parameter space. We illustrate our method in a simulation study and in two applications.
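    A small Python sketch of the pivotal-discrepancy idea for a zero-mean Gaussian random field, where y'Σ(θ)⁻¹y is nominally chi-squared with n degrees of freedom; the field, the exponential covariance family, and the "posterior draws" below are simulated placeholders rather than MCMC output from the paper's models.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def exp_cov(locs, sill, corr_range):
    d = np.abs(locs[:, None] - locs[None, :])
    return sill * np.exp(-d / corr_range)

# Synthetic realization of a zero-mean 1-D Gaussian random field.
locs = np.linspace(0, 10, 40)
y = rng.multivariate_normal(np.zeros(locs.size), exp_cov(locs, 1.0, 2.0))

# Placeholder "posterior draws" of (sill, range); in practice these come from MCMC.
posterior = np.column_stack([rng.normal(1.0, 0.1, 200), rng.normal(2.0, 0.2, 200)])

# Pivotal discrepancy: y' Sigma(theta)^{-1} y is chi^2_n if the model is adequate.
n = locs.size
pvals = [stats.chi2.sf(y @ np.linalg.solve(exp_cov(locs, s, max(r, 1e-3)), y), df=n)
         for s, r in posterior]
print("median posterior p-value:", round(float(np.median(pvals)), 3))
```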

  4. Modelling the growth of Listeria monocytogenes on the surface of smear- or mould-ripened cheese

    Directory of Open Access Journals (Sweden)

    Sol Schvartzman

    2014-07-01

    Full Text Available Surface-ripened cheeses are matured by means of manual or mechanical technologies posing a risk of cross-contamination, if any cheeses are contaminated with Listeria monocytogenes. In predictive microbiology, primary models are used to describe microbial responses, such as growth rate over time and secondary models explain how those responses change with environmental factors. In this way, primary models were used to assess the growth rate of L. monocytogenes during ripening of the cheeses and the secondary models to test how much the growth rate was affected by either the pH and/or the water activity (aw) of the cheeses. The two models combined can be used to predict outcomes. The purpose of these experiments was to test three primary (the modified Gompertz equation, the Baranyi and Roberts model and the Logistic model) and three secondary (the Cardinal model, the Ratowski model and the Presser model) mathematical models in order to define which combination of models would best predict the growth of L. monocytogenes on the surface of artificially contaminated surface-ripened cheeses. Growth on the surface of the cheese was assessed and modelled. The primary models were firstly fitted to the data and the effects of pH and aw on the growth rate (μmax) were incorporated and assessed one by one with the secondary models. The Logistic primary model by itself did not show a better fit of the data among the other primary models tested, but the inclusion of the Cardinal secondary model improved the final fit. The aw was not related to the growth of Listeria. This study suggests that surface-ripened cheese should be separately regulated within EU microbiological food legislation and results expressed as counts per surface area rather than per gram.

  5. Modeling the growth of Listeria monocytogenes on the surface of smear- or mold-ripened cheese.

    Science.gov (United States)

    Schvartzman, M Sol; Gonzalez-Barron, Ursula; Butler, Francis; Jordan, Kieran

    2014-01-01

    Surface-ripened cheeses are matured by means of manual or mechanical technologies posing a risk of cross-contamination, if any cheeses are contaminated with Listeria monocytogenes. In predictive microbiology, primary models are used to describe microbial responses, such as growth rate over time and secondary models explain how those responses change with environmental factors. In this way, primary models were used to assess the growth rate of L. monocytogenes during ripening of the cheeses and the secondary models to test how much the growth rate was affected by either the pH and/or the water activity (aw) of the cheeses. The two models combined can be used to predict outcomes. The purpose of these experiments was to test three primary (the modified Gompertz equation, the Baranyi and Roberts model, and the Logistic model) and three secondary (the Cardinal model, the Ratowski model, and the Presser model) mathematical models in order to define which combination of models would best predict the growth of L. monocytogenes on the surface of artificially contaminated surface-ripened cheeses. Growth on the surface of the cheese was assessed and modeled. The primary models were firstly fitted to the data and the effects of pH and aw on the growth rate (μmax) were incorporated and assessed one by one with the secondary models. The Logistic primary model by itself did not show a better fit of the data among the other primary models tested, but the inclusion of the Cardinal secondary model improved the final fit. The aw was not related to the growth of Listeria. This study suggests that surface-ripened cheese should be separately regulated within EU microbiological food legislation and results expressed as counts per surface area rather than per gram.
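    The modified Gompertz primary model mentioned above can be fitted with a few lines of Python; the Zwietering parameterization is assumed here, and the ripening-time data are synthetic stand-ins for the cheese-surface counts.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, A, mu_max, lam):
    """Zwietering form: A = asymptotic log increase, mu_max = maximum specific
    growth rate, lam = lag time."""
    return A * np.exp(-np.exp(mu_max * np.e / A * (lam - t) + 1.0))

# Synthetic log10(N/N0) values over ripening days, standing in for surface counts.
t = np.array([0, 2, 4, 6, 8, 10, 14, 18, 22, 28], dtype=float)
y = modified_gompertz(t, 3.0, 0.5, 3.0) + np.random.default_rng(6).normal(0, 0.1, t.size)

popt, _ = curve_fit(modified_gompertz, t, y, p0=[2.5, 0.3, 2.0])
print("A = %.2f log units, mu_max = %.2f /day, lag = %.1f days" % tuple(popt))
```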

  6. Automated Model Fit Method for Diesel Engine Control Development

    NARCIS (Netherlands)

    Seykens, X.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.

    2014-01-01

    This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and structured approach to fit the required combustion model parameters. Only a data set is required that is

  7. Automated model fit method for diesel engine control development

    NARCIS (Netherlands)

    Seykens, X.L.J.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.J.H.

    2014-01-01

    This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and structured approach to fit the required combustion model parameters. Only a data set is required that is

  8. Extracting surface diffusion coefficients from batch adsorption measurement data: application of the classic Langmuir kinetics model.

    Science.gov (United States)

    Chu, Khim Hoong

    2017-11-09

    Surface diffusion coefficients may be estimated by fitting solutions of a diffusion model to batch kinetic data. For non-linear systems, a numerical solution of the diffusion model's governing equations is generally required. We report here the application of the classic Langmuir kinetics model to extract surface diffusion coefficients from batch kinetic data. The use of the Langmuir kinetics model in lieu of the conventional surface diffusion model allows derivation of an analytical expression. The parameter estimation procedure requires determining the Langmuir rate coefficient from which the pertinent surface diffusion coefficient is calculated. Surface diffusion coefficients within the 10⁻⁹ to 10⁻⁶ cm²/s range obtained by fitting the Langmuir kinetics model to experimental kinetic data taken from the literature are found to be consistent with the corresponding values obtained from the traditional surface diffusion model. The virtue of this simplified parameter estimation method is that it reduces the computational complexity as the analytical expression involves only an algebraic equation in closed form which is easily evaluated by spreadsheet computation.
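    A hedged Python sketch of the rate-coefficient estimation step: a Langmuir kinetic equation with a batch mass balance is fitted to (synthetic) uptake data. The closed-form link from the fitted Langmuir rate coefficient to a surface diffusion coefficient is given in the paper and is not reproduced here, and all numerical values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Batch adsorber (illustrative values): liquid volume V, adsorbent mass m,
# initial concentration C0, monolayer capacity qm.
V, m, C0, qm = 1.0, 0.5, 100.0, 120.0            # L, g, mg/L, mg/g

def langmuir_uptake(t_eval, ka, kd):
    def rhs(t, q):
        C = C0 - q[0] * m / V                    # liquid-phase mass balance
        return [ka * C * (qm - q[0]) - kd * q[0]]
    sol = solve_ivp(rhs, (0.0, t_eval[-1]), [0.0], t_eval=t_eval, rtol=1e-8)
    return sol.y[0]

# Synthetic uptake curve standing in for measured batch kinetic data.
t = np.linspace(0, 120, 15)
q_obs = langmuir_uptake(t, 2e-4, 5e-3) + np.random.default_rng(7).normal(0, 0.5, t.size)

fit = least_squares(lambda p: langmuir_uptake(t, *np.exp(p)) - q_obs,
                    x0=np.log([1e-4, 1e-2]))
ka_hat, kd_hat = np.exp(fit.x)
print(f"ka = {ka_hat:.2e} L/(mg.min), kd = {kd_hat:.2e} 1/min")
```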

  9. [Modeling polarimetric BRDF of leaves surfaces].

    Science.gov (United States)

    Xie, Dong-Hui; Wang, Pei-Juan; Zhu, Qi-Jiang; Zhou, Hong-Min

    2010-12-01

    The purpose of the present paper is to model a physical polarimetric bidirectional reflectance distribution function (pBRDF) that can characterize not only the non-Lambertian but also the polarized features of leaves, so that the pBRDF can later be applied to analyze quantitatively the relationship between the degree of polarization and the physiological and biochemical parameters of leaves. Firstly, the bidirectional polarized reflectance distributions of several leaf surfaces were measured with the polarized goniometer developed by the Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences. The leaf samples include two Zea mays L. leaves (a young leaf and a mature leaf) and one E. palcherrima wild leaf. The non-Lambertian character of the directional reflectance from these three leaf surfaces is obvious. A Cook-Torrance model was modified by coupling in the polarized Fresnel equations to simulate the bidirectional polarized reflectance properties of leaf surfaces. The three parameters of the modified pBRDF model, namely the diffuse reflectivity, refractive index and roughness of the leaf surface, were inverted with a genetic algorithm (GA). The pBRDF model was found to fit the measured data well. In addition, because these model parameters are related to both the physiological and biochemical properties and the polarized characteristics of leaves, it is possible to build relationships between them in future work.
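    A scalar Python skeleton in the spirit of the modified model described above: a Cook-Torrance-style specular lobe with Beckmann facets (shadowing omitted) plus a Lambertian term, with s/p Fresnel reflectances giving a degree of polarization for the specular part. The refractive index, roughness, and diffuse reflectivity values are illustrative, and the paper's full polarized formulation and GA inversion are not reproduced.

```python
import numpy as np

def fresnel_reflectances(cos_inc, n):
    """s- and p-polarized Fresnel reflectances at an air/dielectric interface."""
    sin_t = np.sqrt(1.0 - cos_inc**2) / n
    cos_t = np.sqrt(1.0 - sin_t**2)
    rs = (cos_inc - n * cos_t) / (cos_inc + n * cos_t)
    rp = (n * cos_inc - cos_t) / (n * cos_inc + cos_t)
    return rs**2, rp**2

def ct_brdf(theta_i, theta_r, n=1.45, m=0.25, rho_d=0.05):
    """Cook-Torrance-style specular lobe (Beckmann facets, shadowing omitted)
    plus a Lambertian term, for an in-plane light/view geometry."""
    L = np.array([-np.sin(theta_i), 0.0, np.cos(theta_i)])   # toward the light
    V = np.array([np.sin(theta_r), 0.0, np.cos(theta_r)])    # toward the viewer
    H = (L + V) / np.linalg.norm(L + V)                      # microfacet normal
    cos_h, cos_li = H[2], float(L @ H)
    D = np.exp(-(1 - cos_h**2) / (cos_h**2 * m**2)) / (np.pi * m**2 * cos_h**4)
    Rs, Rp = fresnel_reflectances(cos_li, n)
    spec = D * 0.5 * (Rs + Rp) / (4.0 * L[2] * V[2])
    dop = (Rs - Rp) / (Rs + Rp)       # degree of polarization of the specular part
    return rho_d / np.pi + spec, dop

brdf, dop = ct_brdf(np.radians(40.0), np.radians(40.0))
print(f"BRDF ~ {brdf:.3f} 1/sr, specular degree of polarization ~ {dop:.2f}")
```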

  10. An R package for fitting age, period and cohort models

    Directory of Open Access Journals (Sweden)

    Adriano Decarli

    2014-11-01

    Full Text Available In this paper we present the R implementation of a GLIM macro which fits age-period-cohort models following Osmond and Gardner. In addition to the estimates of the corresponding model, owing to the programming capability of R as an object-oriented language, methods for printing, plotting and summarizing the results are provided. Furthermore, the researcher has full access to the output of the main function (apc), which returns all the models fitted within the function. It is thus possible to critically evaluate the goodness of fit of the resulting model.

  11. Are Fit Indices Biased in Favor of Bi-Factor Models in Cognitive Ability Research?: A Comparison of Fit in Correlated Factors, Higher-Order, and Bi-Factor Models via Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Grant B. Morgan

    2015-02-01

    Full Text Available Bi-factor confirmatory factor models have been influential in research on cognitive abilities because they often better fit the data than correlated factors and higher-order models. They also instantiate a perspective that differs from that offered by other models. Motivated by previous work that hypothesized an inherent statistical bias of fit indices favoring the bi-factor model, we compared the fit of correlated factors, higher-order, and bi-factor models via Monte Carlo methods. When data were sampled from a true bi-factor structure, each of the approximate fit indices was more likely than not to identify the bi-factor solution as the best fitting. When samples were selected from a true multiple correlated factors structure, approximate fit indices were more likely overall to identify the correlated factors solution as the best fitting. In contrast, when samples were generated from a true higher-order structure, approximate fit indices tended to identify the bi-factor solution as best fitting. There was extensive overlap of fit values across the models regardless of true structure. Although one model may fit a given dataset best relative to the other models, each of the models tended to fit the data well in absolute terms. Given this variability, models must also be judged on substantive and conceptual grounds.

  12. Analysis of Surface Plasmon Resonance Curves with a Novel Sigmoid-Asymmetric Fitting Algorithm

    Directory of Open Access Journals (Sweden)

    Daeho Jang

    2015-09-01

    Full Text Available The present study introduces a novel curve-fitting algorithm for surface plasmon resonance (SPR) curves using a self-constructed, wedge-shaped beam type angular interrogation SPR spectroscopy technique. Previous fitting approaches such as asymmetric and polynomial equations are still unsatisfactory for analyzing full SPR curves and their use is limited to determining the resonance angle. In the present study, we developed a sigmoid-asymmetric equation that provides excellent curve-fitting for the whole SPR curve over a range of incident angles, including regions of the critical angle and resonance angle. Regardless of the bulk fluid type (i.e., water and air), the present sigmoid-asymmetric fitting exhibited nearly perfect matching with a full SPR curve, whereas the asymmetric and polynomial curve fitting methods did not. Because the present curve-fitting sigmoid-asymmetric equation can determine the critical angle as well as the resonance angle, the undesired effect caused by the bulk fluid refractive index was excluded by subtracting the critical angle from the resonance angle in real time. In conclusion, the proposed sigmoid-asymmetric curve-fitting algorithm for SPR curves is widely applicable to various SPR measurements, while excluding the effect of bulk fluids on the sensing layer.

  13. Does model fit decrease the uncertainty of the data in comparison with a general non-model least squares fit?

    International Nuclear Information System (INIS)

    Pronyaev, V.G.

    2003-01-01

    The information entropy is taken as a measure of knowledge about the object, and the reduced univariate variance as a common measure of uncertainty. Covariances in the model versus non-model least squares fits are discussed.

  14. Efficient occupancy model-fitting for extensive citizen-science data

    Science.gov (United States)

    Morgan, Byron J. T.; Freeman, Stephen N.; Ridout, Martin S.; Brereton, Tom M.; Fox, Richard; Powney, Gary D.; Roy, David B.

    2017-01-01

    Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium data base, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species’ range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen
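    A simplified Python sketch of classical occupancy model-fitting, with a logistic regression for occupancy on a site covariate and a constant detection probability; the simulated detection histories and the single-season likelihood are generic stand-ins for the butterfly analysis described above.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(8)

# Simulate S sites x K visits: occupancy depends on a covariate, detection is constant.
S, K = 300, 4
cov = rng.normal(size=S)
z = rng.random(S) < expit(-0.2 + 1.0 * cov)           # latent occupancy states
Y = (rng.random((S, K)) < 0.4) & z[:, None]           # detection histories

def negloglik(theta):
    b0, b1, logit_p = theta
    psi, p = expit(b0 + b1 * cov), expit(logit_p)
    d = Y.sum(axis=1)
    ll_detected = np.log(psi) + d * np.log(p) + (K - d) * np.log(1 - p)
    ll_all_zero = np.log(psi * (1 - p) ** K + (1 - psi))   # occupied-but-missed or absent
    return -np.sum(np.where(d > 0, ll_detected, ll_all_zero))

fit = minimize(negloglik, x0=np.zeros(3), method="BFGS")
b0, b1, logit_p = fit.x
print(f"occupancy intercept {b0:.2f}, covariate slope {b1:.2f}, detection p {expit(logit_p):.2f}")
```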

  15. A Model Fit Statistic for Generalized Partial Credit Model

    Science.gov (United States)

    Liang, Tie; Wells, Craig S.

    2009-01-01

    Investigating the fit of a parametric model is an important part of the measurement process when implementing item response theory (IRT), but research examining it is limited. A general nonparametric approach for detecting model misfit, introduced by J. Douglas and A. S. Cohen (2001), has exhibited promising results for the two-parameter logistic…

  16. Chromate adsorption on selected soil minerals: Surface complexation modeling coupled with spectroscopic investigation

    Energy Technology Data Exchange (ETDEWEB)

    Veselská, Veronika, E-mail: veselskav@fzp.czu.cz [Department of Environmental Geosciences, Faculty of Environmental Sciences, Czech University of Life Sciences Prague, Kamýcka 129, CZ-16521, Prague (Czech Republic); Fajgar, Radek [Department of Analytical and Material Chemistry, Institute of Chemical Process Fundamentals of the CAS, v.v.i., Rozvojová 135/1, CZ-16502, Prague (Czech Republic); Číhalová, Sylva [Department of Environmental Geosciences, Faculty of Environmental Sciences, Czech University of Life Sciences Prague, Kamýcka 129, CZ-16521, Prague (Czech Republic); Bolanz, Ralph M. [Institute of Geosciences, Friedrich-Schiller-University Jena, Carl-Zeiss-Promenade 10, DE-07745, Jena (Germany); Göttlicher, Jörg; Steininger, Ralph [ANKA Synchrotron Radiation Facility, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, DE-76344, Eggenstein-Leopoldshafen (Germany); Siddique, Jamal A.; Komárek, Michael [Department of Environmental Geosciences, Faculty of Environmental Sciences, Czech University of Life Sciences Prague, Kamýcka 129, CZ-16521, Prague (Czech Republic)

    2016-11-15

    Highlights: • Study of Cr(VI) adsorption on soil minerals over a large range of conditions. • Combined surface complexation modeling and spectroscopic techniques. • Diffuse-layer and triple-layer models used to obtain fits to experimental data. • Speciation of Cr(VI) and Cr(III) was assessed. - Abstract: This study investigates the mechanisms of Cr(VI) adsorption on natural clay (illite and kaolinite) and synthetic (birnessite and ferrihydrite) minerals, including its speciation changes, by combining quantitative thermodynamically based mechanistic surface complexation models (SCMs) with spectroscopic measurements. A series of adsorption experiments was performed at different pH values (3–10), ionic strengths (0.001–0.1 M KNO₃), sorbate concentrations (10⁻⁴, 10⁻⁵, and 10⁻⁶ M Cr(VI)), and sorbate/sorbent ratios (50–500). Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy, and X-ray absorption spectroscopy were used to determine the surface complexes, including surface reactions. Adsorption of Cr(VI) is strongly ionic strength dependent. For ferrihydrite at pH <7, a simple diffuse-layer model provides a reasonable prediction of adsorption. For birnessite, bidentate inner-sphere complexes of chromate and dichromate resulted in a better diffuse-layer model fit. For kaolinite, outer-sphere complexation prevails mainly at lower Cr(VI) loadings. Dissolution of solid phases needs to be considered for better SCMs fits. The coupled SCM and spectroscopic approach is thus useful for investigating individual minerals responsible for Cr(VI) retention in soils, and improving the handling and remediation processes.

  17. Protonation of Different Goethite Surfaces - Unified Models for NaNO3 and NaCl Media

    International Nuclear Information System (INIS)

    Lutzenkirchen, Johannes; Boily, Jean-Francois F.; Gunneriusson, Lars; Lovgren, L.; Sjojberg, S.

    2008-01-01

    Acid-base titration data for two goethite samples in sodium nitrate and sodium chloride media are discussed. The data are modeled based on various surface complexation models in the framework of the MUlti SIte Complexation (MUSIC) model. Various assumptions with respect to the goethite morphology are considered in determining the site density of the surface functional groups. The results from the various model applications do not differ significantly in terms of goodness of fit. More importantly, various published assumptions with respect to the goethite morphology (i.e. the contributions of different crystal planes and their repercussions on the "overall" site densities of the various surface functional groups) do not significantly affect the final model parameters. The simultaneous fit of the chloride and nitrate data results in electrolyte binding constants, which are applicable over a wide range of electrolyte concentrations including mixtures of chloride and nitrate. Model parameters for the high surface area goethite sample are in excellent agreement with parameters that were independently obtained by another group on different goethite titration data sets.

  18. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    Science.gov (United States)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

    The goodness-of-fit indicator, i.e. efficiency criterion, is very important for model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator, termed the BC-GED model, is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and the generalized error distribution (GED) with zero mean to fit the distribution of model errors after BC. The BC-GED model can unify all recent distance-based goodness-of-fit indicators, and it reveals that the widely used mean square error (MSE) and mean absolute error (MAE) imply the statistical assumptions that the model errors follow a zero-mean Gaussian distribution and a zero-mean Laplace distribution, respectively. Empirical knowledge about goodness-of-fit indicators can also be easily interpreted with the BC-GED model; e.g., the sensitivity to high flows of indicators with a large power of model errors results from the low probability of large model errors under their assumed error distributions. In order to assess the effect of the BC-GED parameters (i.e. the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and model simulation results among the six indicators demonstrates that these indicators can be clearly separated into two classes by the GED kurtosis β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated with the class β > 1 of distance-based goodness-of-fit indicators captures high flows very well but mimics the baseflow poorly, whereas calibration with the class β ≤ 1 mimics the baseflow very well, because the larger the value of β, the greater emphasis is put on
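    A small Python sketch of the BC-GED idea as described above: Box-Cox transform the observed and simulated series, then score the residuals with a zero-mean generalized error distribution, so that β = 2 behaves like an MSE criterion and β = 1 like an MAE criterion. The discharge values and the choice of λ are illustrative, not the Baocun case study.

```python
import numpy as np
from scipy.special import gammaln

def bc_ged_negloglik(obs, sim, lam, beta):
    """Box-Cox transform both series, then score the residuals with a zero-mean
    generalized error distribution; beta = 2 ~ Gaussian/MSE, beta = 1 ~ Laplace/MAE."""
    def bc(y):
        return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam
    e = bc(obs) - bc(sim)
    alpha = (beta * np.mean(np.abs(e)**beta)) ** (1.0 / beta)   # ML scale for given beta
    logpdf = np.log(beta) - np.log(2 * alpha) - gammaln(1.0 / beta) - np.abs(e / alpha)**beta
    log_jacobian = (lam - 1.0) * np.sum(np.log(obs))            # Box-Cox change of variables
    return -(np.sum(logpdf) + log_jacobian)

# Toy positive discharge series (the Box-Cox transform requires positive values).
obs = np.array([1.2, 3.5, 8.0, 15.0, 6.0, 2.1, 1.0])
sim = np.array([1.0, 3.0, 9.1, 13.2, 6.5, 2.4, 1.2])
for beta in (0.5, 1.0, 2.0):
    print(f"beta = {beta}: -logL = {bc_ged_negloglik(obs, sim, lam=0.3, beta=beta):.2f}")
```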

  19. Optimized aerodynamic design process for subsonic transport wing fitted with winglets. [wind tunnel model

    Science.gov (United States)

    Kuhlman, J. M.

    1979-01-01

    The aerodynamic design of a wind-tunnel model of a wing representative of that of a subsonic jet transport aircraft, fitted with winglets, was performed using two recently developed optimal wing-design computer programs. Both potential flow codes use a vortex lattice representation of the near-field of the aerodynamic surfaces for determination of the required mean camber surfaces for minimum induced drag, and both codes use far-field induced drag minimization procedures to obtain the required spanloads. One code uses a discrete vortex wake model for this far-field drag computation, while the second uses a 2-D advanced panel wake model. Wing camber shapes for the two codes are very similar, but the resulting winglet camber shapes differ widely. Design techniques and considerations for these two wind-tunnel models are detailed, including a description of the necessary modifications of the design geometry to format it for use by a numerically controlled machine for the actual model construction.

  20. A Comparison of Item Fit Statistics for Mixed IRT Models

    Science.gov (United States)

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G², Orlando and Thissen's S-X² and S-G², and Stone's χ²* and G²*. To investigate the…

  1. Diversity of Bacterial Communities of Fitness Center Surfaces in a U.S. Metropolitan Area

    Directory of Open Access Journals (Sweden)

    Nabanita Mukherjee

    2014-12-01

    Full Text Available Public fitness centers and exercise facilities have been implicated as possible sources for transmitting community-acquired bacterial infections. However, the overall diversity of the bacterial community residing on the surfaces in these indoor environments is still unknown. In this study, we investigated the overall bacterial ecology of selected fitness centers in a metropolitan area (Memphis, TN, USA) utilizing culture-independent pyrosequencing of the 16S rRNA genes. Samples were collected from the skin-contact surfaces (e.g., exercise instruments, floor mats, handrails, etc.) within fitness centers. Taxonomical composition revealed the abundance of the phylum Firmicutes, followed by Proteobacteria and Actinobacteria, with a total of 17 bacterial families and 25 bacterial genera. Most of these bacterial genera are of human and environmental origin (including air, dust, soil, and water). Additionally, we found the presence of some pathogenic or potentially pathogenic bacterial genera, including Salmonella, Staphylococcus, Klebsiella, and Micrococcus. Staphylococcus was found to be the most prevalent genus. The presence of viable forms of these pathogens elevates the risk of exposure for susceptible individuals. Several factors (including personal hygiene and the surface cleaning and disinfection schedules of the facilities) may be the reasons for the rich bacterial diversity found in this study. The current finding underscores the need to increase public awareness of the importance of personal hygiene and sanitation for public gym users.

  2. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    Science.gov (United States)

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  3. Standard error propagation in R-matrix model fitting for light elements

    International Nuclear Information System (INIS)

    Chen Zhenpeng; Zhang Rui; Sun Yeying; Liu Tingjin

    2003-01-01

    The error propagation features of R-matrix model fitting for the ⁷Li, ¹¹B and ¹⁷O systems were researched systematically. Some laws of error propagation were revealed, an empirical formula P_j = U_j^c / U_j^d = K_j · S̄ · √m / √N describing standard error propagation was established, and the most likely error ranges for the standard cross sections of ⁶Li(n,t), ¹⁰B(n,α0) and ¹⁰B(n,α1) were estimated. The problem that the standard errors of the light-nuclei standard cross sections may be too small results mainly from the R-matrix model fitting, which is not perfect; yet R-matrix model fitting is the most reliable evaluation method for such data. The error propagation features of R-matrix model fitting for the compound nucleus systems of ⁷Li, ¹¹B and ¹⁷O have been studied systematically, some laws of error propagation are revealed, and these findings are important in solving the problem mentioned above. Furthermore, these conclusions are applicable to similar model fitting in other scientific fields. (author)

  4. Model-fitting approach to kinetic analysis of non-isothermal oxidation of molybdenite

    International Nuclear Information System (INIS)

    Ebrahimi Kahrizsangi, R.; Abbasi, M. H.; Saidi, A.

    2007-01-01

    The kinetics of molybdenite oxidation was studied by non-isothermal TGA-DTA at a heating rate of 5 °C·min⁻¹. The model-fitting kinetic approach was applied to the TGA data, using the Coats-Redfern method of model fitting. The best-fitting model gives an excellent fit to the non-isothermal data in the chemically controlled regime. The apparent activation energy was determined to be about 34.2 kcal·mol⁻¹, with a pre-exponential factor of about 10⁸ s⁻¹, for extents of reaction less than 0.5.
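
    The Coats-Redfern linearization mentioned above can be sketched in a few lines. This is a hedged illustration only: the temperature and conversion arrays and the contracting-sphere choice of g(α) are invented assumptions, not the study's molybdenite data, so the printed values will not reproduce the reported 34.2 kcal·mol⁻¹.

```python
# Illustrative Coats-Redfern fit of non-isothermal TGA data (synthetic numbers).
import numpy as np

R = 8.314          # gas constant, J mol^-1 K^-1
beta = 5.0 / 60.0  # heating rate, K s^-1 (5 deg C per minute)

# Assumed TGA readings: temperature (K) and extent of reaction alpha < 0.5
T = np.array([700.0, 720.0, 740.0, 760.0, 780.0, 800.0])
alpha = np.array([0.05, 0.09, 0.15, 0.24, 0.36, 0.49])

# Chemically controlled (contracting-sphere) model: g(alpha) = 1 - (1 - alpha)^(1/3)
g = 1.0 - (1.0 - alpha) ** (1.0 / 3.0)

# Coats-Redfern: ln(g/T^2) = ln(A*R/(beta*E)) - E/(R*T); fit a line in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(g / T ** 2), 1)
E = -slope * R                        # activation energy, J mol^-1
A = beta * E / R * np.exp(intercept)  # pre-exponential factor, s^-1
print(f"E = {E / 4184:.1f} kcal/mol, A = {A:.2e} s^-1")
```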

  5. Thoracic cavity segmentation algorithm using multiorgan extraction and surface fitting in volumetric CT

    Energy Technology Data Exchange (ETDEWEB)

    Bae, JangPyo [Interdisciplinary Program, Bioengineering Major, Graduate School, Seoul National University, Seoul 110-744, South Korea and Department of Radiology, University of Ulsan College of Medicine, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Kim, Namkug, E-mail: namkugkim@gmail.com; Lee, Sang Min; Seo, Joon Beom [Department of Radiology, University of Ulsan College of Medicine, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Kim, Hee Chan [Department of Biomedical Engineering, College of Medicine and Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul 110-744 (Korea, Republic of)

    2014-04-15

    Purpose: To develop and validate a semiautomatic segmentation method for thoracic cavity volumetry and mediastinum fat quantification of patients with chronic obstructive pulmonary disease. Methods: The thoracic cavity region was separated by segmenting multiorgans, namely, the rib, lung, heart, and diaphragm. To encompass various lung disease-induced variations, the inner thoracic wall and diaphragm were modeled by using a three-dimensional surface-fitting method. To improve the accuracy of the diaphragm surface model, the heart and its surrounding tissue were segmented by a two-stage level set method using a shape prior. To assess the accuracy of the proposed algorithm, the algorithm results of 50 patients were compared to the manual segmentation results of two experts with more than 5 years of experience (these manual results were confirmed by an expert thoracic radiologist). The proposed method was also compared to three state-of-the-art segmentation methods. The metrics used to evaluate segmentation accuracy were volumetric overlap ratio (VOR), false positive ratio on VOR (FPRV), false negative ratio on VOR (FNRV), average symmetric absolute surface distance (ASASD), average symmetric squared surface distance (ASSSD), and maximum symmetric surface distance (MSSD). Results: In terms of thoracic cavity volumetry, the mean ± SD VOR, FPRV, and FNRV of the proposed method were (98.17 ± 0.84)%, (0.49 ± 0.23)%, and (1.34 ± 0.83)%, respectively. The ASASD, ASSSD, and MSSD for the thoracic wall were 0.28 ± 0.12, 1.28 ± 0.53, and 23.91 ± 7.64 mm, respectively. The ASASD, ASSSD, and MSSD for the diaphragm surface were 1.73 ± 0.91, 3.92 ± 1.68, and 27.80 ± 10.63 mm, respectively. The proposed method performed significantly better than the other three methods in terms of VOR, ASASD, and ASSSD. Conclusions: The proposed semiautomatic thoracic cavity segmentation method, which extracts multiple organs (namely, the rib, thoracic wall, diaphragm, and heart

  6. When the model fits the frame: the impact of regulatory fit on efficacy appraisal and persuasion in health communication.

    Science.gov (United States)

    Bosone, Lucia; Martinez, Frédéric; Kalampalikis, Nikos

    2015-04-01

    In health-promotional campaigns, positive and negative role models can be deployed to illustrate the benefits or costs of certain behaviors. The main purpose of this article is to investigate why, how, and when exposure to role models strengthens the persuasiveness of a message, according to regulatory fit theory. We argue that exposure to a positive versus a negative model activates individuals' goals toward promotion rather than prevention. By means of two experiments, we demonstrate that high levels of persuasion occur when a message advertising healthy dietary habits offers a regulatory fit between its framing and the described role model. Our data also establish that the effects of such internal regulatory fit by vicarious experience depend on individuals' perceptions of response-efficacy and self-efficacy. Our findings constitute a significant theoretical complement to previous research on regulatory fit and contain valuable practical implications for health-promotional campaigns. © 2015 by the Society for Personality and Social Psychology, Inc.

  7. Modeling Surface Roughness to Estimate Surface Moisture Using Radarsat-2 Quad Polarimetric SAR Data

    Science.gov (United States)

    Nurtyawan, R.; Saepuloh, A.; Budiharto, A.; Wikantika, K.

    2016-08-01

    Microwave backscattering from the earth's surface depends on several parameters, such as the surface roughness and the dielectric constant of surface materials. These two parameters, related to water content and porosity, are crucial for estimating soil moisture. Soil moisture is an important parameter for ecological studies and also a factor in maintaining the energy balance of the land surface and atmosphere. Direct roughness measurements over a large area require extra time and cost. The heterogeneity of roughness scales for applications such as hydrology, climate, and ecology is a problem which can lead to modeling inaccuracies. In this study, we modeled surface roughness using Radarsat-2 quad Polarimetric Synthetic Aperture Radar (PolSAR) data. Statistical approaches to the field roughness measurements were used to generate an appropriate roughness model. This modeling uses a physical SAR approach to predict the radar backscattering coefficient in terms of the radar configuration parameters (wavelength, polarization, and incidence angle) and the soil parameters (surface roughness and dielectric constant). The surface roughness value is calculated using a modified version of the 1996 Campbell and Shepard model. The modification was applied by incorporating the backscattering coefficients (σ°) of the quad polarizations HH, HV and VV. To obtain an empirical surface roughness model from SAR backscattering intensity, we used forty-five sample points from field roughness measurements. We selected paddy fields in the Indramayu district, West Java, Indonesia as the study area. This area was selected due to the intensive decrease in rice productivity in the Northern Coast region of West Java. A third-degree polynomial is the most suitable fit, with a coefficient of determination (R²) of about 0.82 and an RMSE of about 1.18 cm. Therefore, this model is used as the basis to generate the map of surface roughness.
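
    A minimal sketch of the final fitting step described here (a third-degree polynomial relating backscatter to measured roughness, summarized by R² and RMSE) is given below. The backscatter and roughness arrays are synthetic stand-ins for the forty-five field samples, not the study's measurements.

```python
# Illustrative cubic fit of surface roughness vs. SAR backscatter (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
sigma0 = np.linspace(-18.0, -6.0, 45)                        # backscatter, dB
roughness = (1.5 + 0.004 * (sigma0 + 18.0) ** 3
             + rng.normal(0.0, 0.4, sigma0.size))            # RMS height, cm

coeffs = np.polyfit(sigma0, roughness, deg=3)                # third-degree polynomial
pred = np.polyval(coeffs, sigma0)

ss_res = np.sum((roughness - pred) ** 2)
ss_tot = np.sum((roughness - roughness.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
rmse = np.sqrt(ss_res / roughness.size)
print(f"R^2 = {r2:.2f}, RMSE = {rmse:.2f} cm")
```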

  8. Flexible competing risks regression modeling and goodness-of-fit

    DEFF Research Database (Denmark)

    Scheike, Thomas; Zhang, Mei-Jie

    2008-01-01

    In this paper we consider different approaches for estimation and assessment of covariate effects for the cumulative incidence curve in the competing risks model. The classic approach is to model all cause-specific hazards and then estimate the cumulative incidence curve based on these cause...... models that is easy to fit and contains the Fine-Gray model as a special case. One advantage of this approach is that our regression modeling allows for non-proportional hazards. This leads to a new simple goodness-of-fit procedure for the proportional subdistribution hazards assumption that is very easy...... of the flexible regression models to analyze competing risks data when non-proportionality is present in the data....

  9. An NCME Instructional Module on Item-Fit Statistics for Item Response Theory Models

    Science.gov (United States)

    Ames, Allison J.; Penfield, Randall D.

    2015-01-01

    Drawing valid inferences from item response theory (IRT) models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. This instructional module provides an overview of methods used for evaluating the fit of IRT models. Upon completing…

  10. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    Science.gov (United States)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we have shown that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.

  11. Item level diagnostics and model - data fit in item response theory ...

    African Journals Online (AJOL)

    Item response theory (IRT) is a framework for modeling and analyzing item response data. Item-level modeling gives IRT advantages over classical test theory. The fit of an item score pattern to an item response theory (IRT) model is a necessary condition that must be assessed for further use of the items and the models that best fit ...

  12. Simple model of surface roughness for binary collision sputtering simulations

    Energy Technology Data Exchange (ETDEWEB)

    Lindsey, Sloan J. [Institute of Solid-State Electronics, TU Wien, Floragasse 7, A-1040 Wien (Austria); Hobler, Gerhard, E-mail: gerhard.hobler@tuwien.ac.at [Institute of Solid-State Electronics, TU Wien, Floragasse 7, A-1040 Wien (Austria); Maciążek, Dawid; Postawa, Zbigniew [Institute of Physics, Jagiellonian University, ul. Lojasiewicza 11, 30348 Kraków (Poland)

    2017-02-15

    Highlights: • A simple model of surface roughness is proposed. • Its key feature is a linearly varying target density at the surface. • The model can be used in 1D/2D/3D Monte Carlo binary collision simulations. • The model fits well experimental glancing incidence sputtering yield data. - Abstract: It has been shown that surface roughness can strongly influence the sputtering yield – especially at glancing incidence angles where the inclusion of surface roughness leads to an increase in sputtering yields. In this work, we propose a simple one-parameter model (the “density gradient model”) which imitates surface roughness effects. In the model, the target’s atomic density is assumed to vary linearly between the actual material density and zero. The layer width is the sole model parameter. The model has been implemented in the binary collision simulator IMSIL and has been evaluated against various geometric surface models for 5 keV Ga ions impinging an amorphous Si target. To aid the construction of a realistic rough surface topography, we have performed MD simulations of sequential 5 keV Ga impacts on an initially crystalline Si target. We show that our new model effectively reproduces the sputtering yield, with only minor variations in the energy and angular distributions of sputtered particles. The success of the density gradient model is attributed to a reduction of the reflection coefficient – leading to increased sputtering yields, similar in effect to surface roughness.
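
    As a purely illustrative sketch of the density-gradient idea described above (not code from the IMSIL simulator), the one-parameter ramp can be written as a short function; the bulk density value below is an assumption roughly matching silicon.

```python
# Density gradient model sketch: atomic density ramps linearly from zero at the
# top of the rough layer to the bulk value over a layer of width w.
import numpy as np

def density(depth_nm, bulk_density=5.0e22, layer_width_nm=1.0):
    """Atomic density (atoms/cm^3) vs. depth measured from the top of the layer.

    depth < 0            : vacuum above the rough layer -> 0
    0 <= depth <= width  : linear ramp imitating surface roughness
    depth > width        : bulk density (~5e22 cm^-3, roughly silicon)
    """
    frac = np.clip(depth_nm / layer_width_nm, 0.0, 1.0)
    return bulk_density * frac

print(density(np.array([-0.5, 0.25, 0.5, 2.0])))
```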

  13. Simple model of surface roughness for binary collision sputtering simulations

    International Nuclear Information System (INIS)

    Lindsey, Sloan J.; Hobler, Gerhard; Maciążek, Dawid; Postawa, Zbigniew

    2017-01-01

    Highlights: • A simple model of surface roughness is proposed. • Its key feature is a linearly varying target density at the surface. • The model can be used in 1D/2D/3D Monte Carlo binary collision simulations. • The model fits well experimental glancing incidence sputtering yield data. - Abstract: It has been shown that surface roughness can strongly influence the sputtering yield – especially at glancing incidence angles where the inclusion of surface roughness leads to an increase in sputtering yields. In this work, we propose a simple one-parameter model (the “density gradient model”) which imitates surface roughness effects. In the model, the target’s atomic density is assumed to vary linearly between the actual material density and zero. The layer width is the sole model parameter. The model has been implemented in the binary collision simulator IMSIL and has been evaluated against various geometric surface models for 5 keV Ga ions impinging an amorphous Si target. To aid the construction of a realistic rough surface topography, we have performed MD simulations of sequential 5 keV Ga impacts on an initially crystalline Si target. We show that our new model effectively reproduces the sputtering yield, with only minor variations in the energy and angular distributions of sputtered particles. The success of the density gradient model is attributed to a reduction of the reflection coefficient – leading to increased sputtering yields, similar in effect to surface roughness.

  14. Fitting the curve in Excel® : Systematic curve fitting of laboratory and remotely sensed planetary spectra

    NARCIS (Netherlands)

    McCraig, M.A.; Osinski, G.R.; Cloutis, E.A.; Flemming, R.L.; Izawa, M.R.M.; Reddy, V.; Fieber-Beyer, S.K.; Pompilio, L.; van der Meer, F.D.; Berger, J.A.; Bramble, M.S.; Applin, D.M.

    2017-01-01

    Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical make up of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to

  15. Fitting ARMA Time Series by Structural Equation Models.

    Science.gov (United States)

    van Buuren, Stef

    1997-01-01

    This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)

  16. PolyFit: Polygonal Surface Reconstruction from Point Clouds

    KAUST Repository

    Nan, Liangliang; Wonka, Peter

    2017-01-01

    We propose a novel framework for reconstructing lightweight polygonal surfaces from point clouds. Unlike traditional methods that focus on either extracting good geometric primitives or obtaining proper arrangements of primitives, the emphasis of this work lies in intersecting the primitives (planes only) and seeking an appropriate combination of them to obtain a manifold polygonal surface model without boundary. We show that reconstruction from point clouds can be cast as a binary labeling problem. Our method is based on a hypothesizing and selection strategy. We first generate a reasonably large set of face candidates by intersecting the extracted planar primitives. Then an optimal subset of the candidate faces is selected through optimization. Our optimization is based on a binary linear programming formulation under hard constraints that enforce the final polygonal surface model to be manifold and watertight. Experiments on point clouds from various sources demonstrate that our method can generate lightweight polygonal surface models of arbitrary piecewise planar objects. Besides, our method is capable of recovering sharp features and is robust to noise, outliers, and missing data.

  17. PolyFit: Polygonal Surface Reconstruction from Point Clouds

    KAUST Repository

    Nan, Liangliang

    2017-12-25

    We propose a novel framework for reconstructing lightweight polygonal surfaces from point clouds. Unlike traditional methods that focus on either extracting good geometric primitives or obtaining proper arrangements of primitives, the emphasis of this work lies in intersecting the primitives (planes only) and seeking an appropriate combination of them to obtain a manifold polygonal surface model without boundary. We show that reconstruction from point clouds can be cast as a binary labeling problem. Our method is based on a hypothesizing and selection strategy. We first generate a reasonably large set of face candidates by intersecting the extracted planar primitives. Then an optimal subset of the candidate faces is selected through optimization. Our optimization is based on a binary linear programming formulation under hard constraints that enforce the final polygonal surface model to be manifold and watertight. Experiments on point clouds from various sources demonstrate that our method can generate lightweight polygonal surface models of arbitrary piecewise planar objects. Besides, our method is capable of recovering sharp features and is robust to noise, outliers, and missing data.

  18. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
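
    To make the Pareto-frontier notion concrete, here is a minimal, hedged sketch of selecting non-dominated input sets from a table of per-target goodness-of-fit errors; the numbers are invented and the brute-force loop is only meant for small calibration ensembles.

```python
# Identify calibration input sets not dominated on any target (lower error = better).
import numpy as np

gof = np.array([
    [0.10, 0.40],   # input set 0
    [0.20, 0.10],   # input set 1
    [0.15, 0.35],   # input set 2
    [0.30, 0.50],   # input set 3 (dominated by set 0)
])

def pareto_frontier(errors):
    """Return indices of rows not dominated by any other row."""
    n = errors.shape[0]
    frontier = []
    for i in range(n):
        dominated = any(
            np.all(errors[j] <= errors[i]) and np.any(errors[j] < errors[i])
            for j in range(n) if j != i
        )
        if not dominated:
            frontier.append(i)
    return frontier

print(pareto_frontier(gof))   # -> [0, 1, 2]
```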

  19. Gfitter - Revisiting the global electroweak fit of the Standard Model and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Flaecher, H.; Hoecker, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Goebel, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]|[Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Haller, J. [Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Moenig, K.; Stelzer, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2008-11-15

    The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project, and presents state-of-the-art results for the global electroweak fit in the Standard Model, and for a model with an extended Higgs sector (2HDM). Numerical and graphical results for fits with and without including the constraints from the direct Higgs searches at LEP and Tevatron are given. Perspectives for future colliders are analysed and discussed. Including the direct Higgs searches, we find M_H = 116.4 (+18.3, -1.3) GeV, and the 2σ and 3σ allowed regions [114,145] GeV and [[113,168] and [180,225

  20. SPSS macros to compare any two fitted values from a regression model.

    Science.gov (United States)

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests-particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
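
    The matrix-algebra idea behind the macros (the difference between two fitted values is a linear contrast of the coefficients, whose standard error comes from the coefficient covariance matrix) can be sketched as follows. This is a hedged Python illustration with invented variables and data, not the SPSS macros themselves.

```python
# Compare two fitted values from an OLS model with an interaction (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n = 200
age = rng.uniform(20, 80, n)
dose = rng.uniform(0, 10, n)
y = 2.0 + 0.05 * age + 1.2 * dose + 0.03 * age * dose + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), age, dose, age * dose])   # design matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov_beta = sigma2 * np.linalg.inv(X.T @ X)                 # Cov(beta_hat)

# Two covariate profiles whose fitted values are to be compared
x1 = np.array([1.0, 60.0, 8.0, 60.0 * 8.0])
x2 = np.array([1.0, 60.0, 2.0, 60.0 * 2.0])

diff = (x1 - x2) @ beta
se = np.sqrt((x1 - x2) @ cov_beta @ (x1 - x2))
print(f"difference = {diff:.2f}, "
      f"95% CI [{diff - 1.96 * se:.2f}, {diff + 1.96 * se:.2f}]")
```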

  1. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is also extended to the case where there is a neutron gas instead of a vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction in this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry to beyond the neutron-drip line, until the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  2. LEP asymmetries and fits of the standard model

    International Nuclear Information System (INIS)

    Pietrzyk, B.

    1994-01-01

    The lepton and quark asymmetries measured at LEP are presented. The results of the Standard Model fits to the electroweak data presented at this conference are given. The top mass obtained from the fit to the LEP data is 172 (+13, -14) (+18, -20) GeV; it is 177 (+11, -11) (+18, -19) GeV when the collider, ν and A_LR data are also included. (author). 10 refs., 3 figs., 2 tabs

  3. Hierarchical Threshold Adaptive for Point Cloud Filter Algorithm of Moving Surface Fitting

    Directory of Open Access Journals (Sweden)

    ZHU Xiaoxiao

    2018-02-01

    Full Text Available In order to improve the accuracy, efficiency and adaptability of point cloud filtering algorithms, a hierarchical threshold-adaptive point cloud filter algorithm based on moving surface fitting was proposed. Firstly, the noisy points are removed using a statistical histogram method. Secondly, a grid index is established by grid segmentation, and the surface equation is set up through the lowest points among the neighborhood grids. The real elevation and the fitted elevation are calculated, and the difference between them is compared with the threshold. Finally, in order to improve the filtering accuracy, hierarchical filtering is used to change the grid size and automatically set the neighborhood size and threshold until the filtering result reaches the accuracy requirement. The test data provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) are used to verify the algorithm. The type I error, type II error and total error are 7.33%, 10.64% and 6.34%, respectively. The algorithm is compared with the eight classical filtering algorithms published by ISPRS. The experiment results show that the method is well adapted and produces highly accurate filtering results.
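
    A hedged, single-level sketch of the core step (seed the surface from the lowest point in each grid cell, fit a simple moving surface, and threshold the elevation residuals) is shown below. The cell size, threshold, and surface form are illustrative assumptions; the published algorithm additionally refines them hierarchically.

```python
# Single-level grid/moving-surface ground filter sketch for an (N, 3) point cloud.
import numpy as np

def ground_filter(points, cell=5.0, threshold=0.3):
    """Return a boolean mask marking points close to the fitted ground surface."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    seeds = []
    for key in {tuple(k) for k in ij}:                 # one seed per occupied cell
        in_cell = np.all(ij == key, axis=1)
        seeds.append(points[in_cell][np.argmin(points[in_cell, 2])])
    seeds = np.asarray(seeds)

    def design(p):                                     # z = a + b*x + c*y + d*x*y
        return np.column_stack([np.ones(len(p)), p[:, 0], p[:, 1],
                                p[:, 0] * p[:, 1]])

    coef, *_ = np.linalg.lstsq(design(seeds), seeds[:, 2], rcond=None)
    resid = points[:, 2] - design(points) @ coef
    return resid < threshold
```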

  4. Response Surface Design Model to Predict Surface Roughness when Machining Hastelloy C-2000 using Uncoated Carbide Insert

    International Nuclear Information System (INIS)

    Razak, N H; Rahman, M M; Kadirgama, K

    2012-01-01

    This paper presents the development of a response surface design model to predict the surface roughness in end-milling of Hastelloy C-2000 using uncoated carbide inserts. A mathematical model is developed to study the effect of three input cutting parameters: the feed rate, axial depth of cut and cutting speed. Design of experiments (DOE) was implemented with the aid of a statistical software package. Analysis of variance (ANOVA) has been performed to verify the fit and adequacy of the developed mathematical model. The results show that the feed rate has a greater effect on the predicted values of Ra than the cutting speed and axial depth of cut. SEM and EDX analyses were performed for different cutting conditions. It can be concluded that the feed rate and cutting force have the greatest influence on the machining characteristics of surface roughness. Thus, optimizing the cutting conditions is essential in order to improve the surface roughness in machining of Hastelloy C-2000.

  5. Research on calculation of the IOL tilt and decentration based on surface fitting.

    Science.gov (United States)

    Li, Lin; Wang, Ke; Yan, Yan; Song, Xudong; Liu, Zhicheng

    2013-01-01

    The tilt and decentration of an intraocular lens (IOL) result in defocusing, astigmatism, and wavefront aberration after operation. The objective is to give a method to estimate the tilt and decentration of the IOL more accurately. Based on AS-OCT images of twelve eyes from eight cases with lens subluxation after operation, we fitted spherical equations to the data obtained from the images of the anterior and posterior surfaces of the IOL. From the established relationship between the IOL tilt (decentration) and the scanned angle, at which a piece of AS-OCT image was taken by the instrument, the IOL tilt and decentration were calculated. The IOL tilt angle and decentration of each subject were given. Moreover, the horizontal and vertical tilt was also obtained. Accordingly, the possible errors in IOL tilt and decentration present in the method employed by the AS-OCT instrument were identified. Based on 6-12 pieces of AS-OCT images at different directions, the tilt angle and decentration values were shown, respectively. The method of fitting surfaces to the IOL surfaces can accurately analyze the IOL's location, and six pieces of AS-OCT images at three pairs of symmetrical directions are enough to obtain the tilt angle and decentration value of the IOL precisely.
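
    The spherical fit at the heart of this method can be written as an algebraic least-squares problem: expanding (x-a)² + (y-b)² + (z-c)² = r² gives a system that is linear in the centre coordinates and an offset term. The sketch below uses synthetic points; the step from the fitted anterior/posterior centres to tilt and decentration is only indicated in a comment, not reproduced from the paper.

```python
# Algebraic least-squares sphere fit to (noisy) points sampled from a spherical cap.
import numpy as np

def fit_sphere(pts):
    """pts: (N, 3) array. Returns centre (a, b, c) and radius r."""
    A = np.column_stack([2.0 * pts, np.ones(len(pts))])
    rhs = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    centre = sol[:3]
    return centre, np.sqrt(sol[3] + centre @ centre)

rng = np.random.default_rng(2)
theta = rng.uniform(0.0, 0.8, 500)
phi = rng.uniform(0.0, 2.0 * np.pi, 500)
pts = np.column_stack([0.2 + 6.0 * np.sin(theta) * np.cos(phi),
                       -0.1 + 6.0 * np.sin(theta) * np.sin(phi),
                       5.0 - 6.0 * np.cos(theta)])
pts += rng.normal(0.0, 0.01, pts.shape)

centre, r = fit_sphere(pts)
print(centre, r)   # approximately (0.2, -0.1, 5.0) and 6.0
# The axis through the centres fitted to the anterior and posterior IOL surfaces
# would then define the lens axis from which tilt and decentration follow.
```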

  6. Research on Calculation of the IOL Tilt and Decentration Based on Surface Fitting

    Directory of Open Access Journals (Sweden)

    Lin Li

    2013-01-01

    Full Text Available The tilt and decentration of an intraocular lens (IOL) result in defocusing, astigmatism, and wavefront aberration after operation. The objective is to give a method to estimate the tilt and decentration of the IOL more accurately. Based on AS-OCT images of twelve eyes from eight cases with lens subluxation after operation, we fitted spherical equations to the data obtained from the images of the anterior and posterior surfaces of the IOL. From the established relationship between the IOL tilt (decentration) and the scanned angle, at which a piece of AS-OCT image was taken by the instrument, the IOL tilt and decentration were calculated. The IOL tilt angle and decentration of each subject were given. Moreover, the horizontal and vertical tilt was also obtained. Accordingly, the possible errors in IOL tilt and decentration present in the method employed by the AS-OCT instrument were identified. Based on 6–12 pieces of AS-OCT images at different directions, the tilt angle and decentration values were shown, respectively. The method of fitting surfaces to the IOL surfaces can accurately analyze the IOL’s location, and six pieces of AS-OCT images at three pairs of symmetrical directions are enough to obtain the tilt angle and decentration value of the IOL precisely.

  7. Development and Analysis of Volume Multi-Sphere Method Model Generation using Electric Field Fitting

    Science.gov (United States)

    Ingram, G. J.

    Electrostatic modeling of spacecraft has wide-reaching applications such as detumbling space debris in the Geosynchronous Earth Orbit regime before docking, servicing and tugging space debris to graveyard orbits, and Lorentz augmented orbits. The viability of electrostatic actuation control applications relies on faster-than-realtime characterization of the electrostatic interaction. The Volume Multi-Sphere Method (VMSM) seeks the optimal placement and radii of a small number of equipotential spheres to accurately model the electrostatic force and torque on a conducting space object. Current VMSM models tuned using force and torque comparisons with commercially available finite element software are subject to the modeled probe size and numerical errors of the software. This work first investigates fitting of VMSM models to Surface-MSM (SMSM) generated electrical field data, removing modeling dependence on probe geometry while significantly increasing performance and speed. A proposed electric field matching cost function is compared to a force and torque cost function, the inclusion of a self-capacitance constraint is explored and 4 degree-of-freedom VMSM models generated using electric field matching are investigated. The resulting E-field based VMSM development framework is illustrated on a box-shaped hub with a single solar panel, and convergence properties of select models are qualitatively analyzed. Despite the complex non-symmetric spacecraft geometry, elegantly simple 2-sphere VMSM solutions provide force and torque fits within a few percent.

  8. Checking the Adequacy of Fit of Models from Split-Plot Designs

    DEFF Research Database (Denmark)

    Almini, A. A.; Kulahci, Murat; Montgomery, D. C.

    2009-01-01

    One of the main features that distinguish split-plot experiments from other experiments is that they involve two types of experimental errors: the whole-plot (WP) error and the subplot (SP) error. Taking this into consideration is very important when computing measures of adequacy of fit for split-plot models. In this article, we propose the computation of two R², R²-adjusted, prediction error sums of squares (PRESS), and R²-prediction statistics to measure the adequacy of fit for the WP and the SP submodels in a split-plot design. This is complemented with the graphical analysis of the two types of errors to check for any violation of the underlying assumptions and the adequacy of fit of split-plot models. Using examples, we show how computing two measures of model adequacy of fit for each split-plot design model is appropriate and useful as they reveal whether the correct WP and SP effects have...
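
    For readers unfamiliar with the PRESS and R²-prediction statistics mentioned above, the hedged sketch below computes them for an ordinary least-squares fit using the leverage shortcut; the data are synthetic, and the split-plot (WP/SP) error decomposition itself is not implemented.

```python
# R^2, adjusted R^2, PRESS and R^2-prediction for an OLS fit (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
n, p = 40, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(0, 1, n)

H = X @ np.linalg.inv(X.T @ X) @ X.T       # hat matrix
e = y - H @ y                              # ordinary residuals
h = np.diag(H)                             # leverages

ss_res = e @ e
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p)
press = np.sum((e / (1 - h)) ** 2)         # leave-one-out prediction error sum
r2_pred = 1 - press / ss_tot
print(r2, r2_adj, press, r2_pred)
```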

  9. Repair models of cell survival and corresponding computer program for survival curve fitting

    International Nuclear Information System (INIS)

    Shen Xun; Hu Yiwei

    1992-01-01

    Some basic concepts and formulations of two repair models of survival, the incomplete repair (IR) model and the lethal-potentially lethal (LPL) model, are introduced. An IBM-PC computer program for survival curve fitting with these models was developed and applied to fit the survival of human melanoma cells HX118 irradiated at different dose rates. A comparison was made between the repair models and two non-repair models, the multitarget-single-hit model and the linear-quadratic model, in the fitting and analysis of the survival-dose curves. It was shown that either the IR model or the LPL model can fit a set of survival curves at different dose rates with the same parameters and provide information on the repair capacity of cells. These two mathematical models could be very useful in quantitative studies of the radiosensitivity and repair capacity of cells.
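
    As an illustration of the kind of curve fitting such a program performs, the hedged sketch below fits the linear-quadratic model S(D) = exp(-(αD + βD²)), one of the non-repair models compared in the paper, to an invented survival curve (not the HX118 data).

```python
# Linear-quadratic survival curve fit on the log scale (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])             # Gy
surv = np.array([1.0, 0.75, 0.52, 0.21, 0.06, 0.015])       # surviving fraction

def log_lq(D, alpha, beta):
    return -(alpha * D + beta * D ** 2)                      # ln S(D)

(alpha, beta), _ = curve_fit(log_lq, dose, np.log(surv), p0=(0.2, 0.02))
print(f"alpha = {alpha:.3f} Gy^-1, beta = {beta:.4f} Gy^-2")
```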

  10. HDFITS: Porting the FITS data model to HDF5

    Science.gov (United States)

    Price, D. C.; Barsdell, B. R.; Greenhill, L. J.

    2015-09-01

    The FITS (Flexible Image Transport System) data format has been the de facto data format for astronomy-related data products since its inception in the late 1970s. While the FITS file format is widely supported, it lacks many of the features of more modern data serialization formats, such as the Hierarchical Data Format (HDF5). The HDF5 file format offers considerable advantages over FITS, such as improved I/O speed and compression, but has yet to gain widespread adoption within astronomy. One of the major holdbacks is that HDF5 is not well supported by data reduction software packages and image viewers. Here, we present a comparison of FITS and HDF5 as a format for storage of astronomy datasets. We show that the underlying data model of FITS can be ported to HDF5 in a straightforward manner, and that by doing so the advantages of the HDF5 file format can be leveraged immediately. In addition, we present a software tool, fits2hdf, for converting between FITS and a new 'HDFITS' format, where data are stored in HDF5 in a FITS-like manner. We show that HDFITS allows faster reading of data (up to 100x faster than FITS in some use cases), and improved compression (higher compression ratios and higher throughput). Finally, we show that by only changing the import lines in Python-based FITS utilities, HDFITS formatted data can be presented transparently as an in-memory FITS equivalent.
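
    The FITS-to-HDF5 mapping the paper describes (each HDU becomes an HDF5 node, with header keywords carried over as attributes) can be illustrated generically. The hedged sketch below uses astropy and h5py rather than the fits2hdf tool itself; the file names are placeholders, and table HDUs are only stubbed out.

```python
# Generic sketch: copy image HDUs and header keywords from FITS into HDF5.
from astropy.io import fits
import h5py

with fits.open("example.fits") as hdul, h5py.File("example.h5", "w") as h5:
    for i, hdu in enumerate(hdul):
        data = hdu.data
        if data is not None and data.dtype.names is None:
            # Image-like HDU: store the array as a compressed dataset
            node = h5.create_dataset(f"HDU{i}", data=data, compression="gzip")
        else:
            # Empty or table HDU: keep a group (tables need per-column handling)
            node = h5.create_group(f"HDU{i}")
        for key, value in hdu.header.items():      # keywords -> HDF5 attributes
            if key and isinstance(value, (str, int, float, bool)):
                node.attrs[key] = value
```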

  11. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    Science.gov (United States)

    Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. Faye

    2013-11-01

    Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as heterogeneous reactivity, ice nucleation, and cloud droplet formation. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two semi-empirical surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling of aerosol systems because the Henning model (using data obtained from organic-inorganic systems) and Tuckermann approach provide similar modeling results and goodness-of-fit (χ²) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.
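
    A hedged single-solute sketch of the Szyszkowski-Langmuir form, σ(C) = σ_w − aT ln(1 + bC), is given below; the weighted multi-component model in the paper sums such terms with concentration weights, and the concentrations and tensions used here are invented.

```python
# Fit Szyszkowski-Langmuir parameters (a, b) to synthetic surface tension data.
import numpy as np
from scipy.optimize import curve_fit

T = 298.15          # K
sigma_w = 72.0      # surface tension of pure water, mN/m

conc = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 2.0])           # mol/L
sigma = np.array([71.2, 69.0, 67.1, 61.5, 58.0, 54.6])      # mN/m

def szyszkowski(C, a, b):
    return sigma_w - a * T * np.log(1.0 + b * C)

(a, b), _ = curve_fit(szyszkowski, conc, sigma, p0=(0.01, 1.0))
print(f"a = {a:.4f} mN m^-1 K^-1, b = {b:.2f} L mol^-1")
```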

  12. Neural networks vs Gaussian process regression for representing potential energy surfaces: A comparative study of fit quality and vibrational spectrum accuracy

    Science.gov (United States)

    Kamath, Aditya; Vargas-Hernández, Rodrigo A.; Krems, Roman V.; Carrington, Tucker; Manzhos, Sergei

    2018-06-01

    For molecules with more than three atoms, it is difficult to fit or interpolate a potential energy surface (PES) from a small number of (usually ab initio) energies at points. Many methods have been proposed in recent decades, each claiming a set of advantages. Unfortunately, there are few comparative studies. In this paper, we compare neural networks (NNs) with Gaussian process (GP) regression. We re-fit an accurate PES of formaldehyde and compare PES errors on the entire point set used to solve the vibrational Schrödinger equation, i.e., the only error that matters in quantum dynamics calculations. We also compare the vibrational spectra computed on the underlying reference PES and the NN and GP potential surfaces. The NN and GP surfaces are constructed with exactly the same points, and the corresponding spectra are computed with the same points and the same basis. The GP fitting error is lower, and the GP spectrum is more accurate. The best NN fits to 625/1250/2500 symmetry unique potential energy points have global PES root mean square errors (RMSEs) of 6.53/2.54/0.86 cm-1, whereas the best GP surfaces have RMSE values of 3.87/1.13/0.62 cm-1, respectively. When fitting 625 symmetry unique points, the error in the first 100 vibrational levels is only 0.06 cm-1 with the best GP fit, whereas the spectrum on the best NN PES has an error of 0.22 cm-1, with respect to the spectrum computed on the reference PES. This error is reduced to about 0.01 cm-1 when fitting 2500 points with either the NN or GP. We also find that the GP surface produces a relatively accurate spectrum when obtained based on as few as 313 points.
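
    The GP side of the comparison can be sketched compactly with scikit-learn; this is a hedged illustration on a two-dimensional Morse-like toy surface, not the formaldehyde PES or the kernels used in the paper.

```python
# Gaussian process regression of a toy 2-D potential energy surface.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def toy_pes(q):
    r1, r2 = q[:, 0], q[:, 1]
    return (1 - np.exp(-(r1 - 1.0))) ** 2 + (1 - np.exp(-(r2 - 1.0))) ** 2

rng = np.random.default_rng(4)
q_train = rng.uniform(0.7, 2.0, size=(200, 2))
gp = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=[0.3, 0.3]),
    normalize_y=True,
).fit(q_train, toy_pes(q_train))

q_test = rng.uniform(0.7, 2.0, size=(1000, 2))
rmse = np.sqrt(np.mean((gp.predict(q_test) - toy_pes(q_test)) ** 2))
print(f"test RMSE = {rmse:.2e} (toy energy units)")
```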

  13. Model Atmosphere Spectrum Fit to the Soft X-Ray Outburst Spectrum of SS Cyg

    Directory of Open Access Journals (Sweden)

    V. F. Suleimanov

    2015-02-01

    Full Text Available The X-ray spectrum of SS Cyg in outburst has a very soft component that can be interpreted as the fast-rotating, optically thick boundary layer on the white dwarf surface. This component was carefully investigated by Mauche (2004) using the Chandra LETG spectrum of this object in outburst. The spectrum shows broad (≈5 Å) spectral features that have been interpreted as a large number of absorption lines on a blackbody continuum with a temperature of ≈250 kK. Because the spectrum resembles the photospheric spectra of super-soft X-ray sources, we tried to fit it with high-gravity hot LTE stellar model atmospheres with solar chemical composition, specially computed for this purpose. We obtained a reasonably good fit to the 60–125 Å spectrum with the following parameters: T_eff = 190 kK, log g = 6.2, and N_H = 8 × 10¹⁹ cm⁻², although at shorter wavelengths the observed spectrum has a much higher flux. The reasons for this are discussed. The hypothesis of a fast-rotating boundary layer is supported by the derived low surface gravity.

  14. Model Fit and Item Factor Analysis: Overfactoring, Underfactoring, and a Program to Guide Interpretation.

    Science.gov (United States)

    Clark, D Angus; Bowles, Ryan P

    2018-04-23

    In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation methods to investigate the ability of popular model fit statistics (chi-square, root mean square error of approximation, the comparative fit index, and the Tucker-Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics, and was conditional on many features of the underlying model. Together, results suggest that conventional fit thresholds offer questionable utility in the context of IFA.
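
    For reference, two of the statistics studied (RMSEA and the CFI) reduce to simple functions of chi-square values and degrees of freedom. The hedged sketch below uses one common parameterization (N − 1 in the RMSEA denominator) and invented numbers purely for illustration; categorical estimators adjust these quantities further.

```python
# Textbook-style RMSEA and CFI from model and baseline chi-square values.
import math

def rmsea(chi2, df, n):
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_base, df_base):
    d_model = max(chi2 - df, 0.0)
    d_base = max(chi2_base - df_base, 0.0)
    return 1.0 - d_model / d_base if d_base > 0 else 1.0

print(rmsea(chi2=132.4, df=90, n=500))                        # ~0.031
print(cfi(chi2=132.4, df=90, chi2_base=2100.0, df_base=120))  # ~0.98
```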

  15. Modelling population dynamics model formulation, fitting and assessment using state-space methods

    CERN Document Server

    Newman, K B; Morgan, B J T; King, R; Borchers, D L; Cole, D J; Besbeas, P; Gimenez, O; Thomas, L

    2014-01-01

    This book gives a unifying framework for estimating the abundance of open populations: populations subject to births, deaths and movement, given imperfect measurements or samples of the populations.  The focus is primarily on populations of vertebrates for which dynamics are typically modelled within the framework of an annual cycle, and for which stochastic variability in the demographic processes is usually modest. Discrete-time models are developed in which animals can be assigned to discrete states such as age class, gender, maturity,  population (within a metapopulation), or species (for multi-species models). The book goes well beyond estimation of abundance, allowing inference on underlying population processes such as birth or recruitment, survival and movement. This requires the formulation and fitting of population dynamics models.  The resulting fitted models yield both estimates of abundance and estimates of parameters characterizing the underlying processes.  

  16. Revisiting the Global Electroweak Fit of the Standard Model and Beyond with Gfitter

    CERN Document Server

    Flächer, Henning; Haller, J; Höcker, A; Mönig, K; Stelzer, J

    2009-01-01

    The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter projec...

  17. Tests of fit of historically-informed models of African American Admixture.

    Science.gov (United States)

    Gross, Jessica M

    2018-02-01

    African American populations in the U.S. formed primarily by mating between Africans and Europeans over the last 500 years. To date, studies of admixture have focused on either a one-time admixture event or continuous input into the African American population from Europeans only. Our goal is to gain a better understanding of the admixture process by examining models that take into account (a) assortative mating by ancestry in the African American population, (b) continuous input from both Europeans and Africans, and (c) historically informed variation in the rate of African migration over time. We used a model-based clustering method to generate distributions of African ancestry in three samples comprised of 147 African Americans from two published sources. We used a log-likelihood method to examine the fit of four models to these distributions and used a log-likelihood ratio test to compare the relative fit of each model. The mean ancestry estimates for our datasets of 77% African/23% European to 83% African/17% European ancestry are consistent with previous studies. We find admixture models that incorporate continuous gene flow from Europeans fit significantly better than one-time event models, and that a model involving continuous gene flow from Africans and Europeans fits better than one with continuous gene flow from Europeans only for two samples. Importantly, models that involve continuous input from Africans necessitate a higher level of gene flow from Europeans than previously reported. We demonstrate that models that take into account information about the rate of African migration over the past 500 years fit observed patterns of African ancestry better than alternative models. Our approach will enrich our understanding of the admixture process in extant and past populations. © 2017 Wiley Periodicals, Inc.

  18. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    Science.gov (United States)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density model, such re-sampling does not introduce any aliasing and interpolation errors as are introduced by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering method. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly-varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem of this method is that the same pixel can take different values when the method of interpolation is changed among the different methods such as the "nearest," "linear," "cubic," and "spline" fitting in Matlab. The conventional, FFT-based spatial filtering method used to
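
    The Zernike part of the re-sampling idea can be sketched as an ordinary least-squares fit over the unit disk, after which the map can be re-evaluated on any grid without interpolation. The sketch below is hedged: it uses only a handful of low-order terms and a synthetic map, and it omits the power-spectral-density component of the full method.

```python
# Fit a few low-order Zernike-like terms to a masked surface map, then resample.
import numpy as np

def zernike_basis(x, y):
    """Piston, tilts, defocus and astigmatism terms on the unit disk."""
    r2 = x ** 2 + y ** 2
    return np.column_stack([np.ones_like(x), x, y,
                            2 * r2 - 1, x ** 2 - y ** 2, 2 * x * y])

n = 64
xv, yv = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
mask = xv ** 2 + yv ** 2 <= 1.0
x, y = xv[mask], yv[mask]
surface = (0.3 * x - 0.1 * y + 0.5 * (2 * (x ** 2 + y ** 2) - 1)
           + 0.01 * np.random.default_rng(5).normal(size=x.size))   # "measured" map

coeffs, *_ = np.linalg.lstsq(zernike_basis(x, y), surface, rcond=None)

# Up-sample on a 4x finer grid: analytic evaluation, no interpolation, noise removed
xf, yf = np.meshgrid(np.linspace(-1, 1, 4 * n), np.linspace(-1, 1, 4 * n))
mf = xf ** 2 + yf ** 2 <= 1.0
resampled = zernike_basis(xf[mf], yf[mf]) @ coeffs
```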

  19. Surface complexation modeling of uranium (Vi) retained onto zirconium diphosphate in presence of organic acids

    International Nuclear Information System (INIS)

    Almazan T, M. G.; Garcia G, N.; Ordonez R, E.

    2010-10-01

    In the field of nuclear waste disposal, predictions regarding radionuclide migration through the geosphere have to take into account the effects of natural organic matter. This work presents an investigation of the interaction mechanisms between U(VI) and zirconium diphosphate (ZrP₂O₇) in the presence of organic acids (citric acid and oxalic acid). The retention reactions were previously examined using a batch equilibrium method. Previous results showed that U(VI) retention was more efficient when citric acid or oxalic acid was present on the solid surface at lower pH values. In order to determine the retention equilibria for both systems studied, a phosphorescence spectroscopy study was carried out. The experimental data were then fitted using the Constant Capacitance Model included in the FITEQL4.0 code. Previous results concerning the surface characterization of ZrP₂O₇ (surface site density and surface acidity constants) were used to constrain the modeling. The best fit for the U(VI)/citric acid/ZrP₂O₇ and U(VI)/oxalic acid/ZrP₂O₇ systems considered the formation of a ternary surface complex. (Author)

  20. [How to fit and interpret multilevel models using SPSS].

    Science.gov (United States)

    Pardo, Antonio; Ruiz, Miguel A; San Martín, Rafael

    2007-05-01

    Hierarchical or multilevel models are used to analyse data when cases belong to known groups and sample units are selected both at the individual level and at the group level. In this work, the multilevel models most commonly discussed in the statistical literature are described, explaining how to fit these models using the SPSS program (any version from the 11th onwards) and how to interpret the outcomes of the analysis. Five particular models are described, fitted, and interpreted: (1) one-way analysis of variance with random effects, (2) regression analysis with means-as-outcomes, (3) one-way analysis of covariance with random effects, (4) regression analysis with random coefficients, and (5) regression analysis with means- and slopes-as-outcomes. All models are explained, trying to make them understandable to researchers in the health and behavioural sciences.

  1. A versatile curve-fit model for linear to deeply concave rank abundance curves

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

    A new, flexible curve-fit model for linear to concave rank abundance curves was conceptualized and validated using observational data. The model links the geometric-series model and the log-series model and can also fit deeply concave rank abundance curves. The model is based, in an unconventional way

  2. An initial research on solute migration model coupled with adsorption of surface complexation in groundwater

    International Nuclear Information System (INIS)

    Qian Tianwei; Chen Fanrong

    2003-01-01

    The influence of chemical processes in the groundwater solution on solute migration has attracted increasing public attention, especially adsorption occurring at the interface of the solid and liquid phases, which plays a great role in solute migration. There are various interpretations of the adsorption mechanism, among which surface complexation is one successful hypothesis. This paper first establishes a geochemical model based on surface complexation and then couples it with the traditional advection-dispersion model to constitute a solute migration model which can deal with surface complexation. The simulated results fit very well with those obtained by previous researchers for a well-known published example, which indicates that the model set up in this paper is successful. (authors)

  3. A New Method to Estimate Changes in Glacier Surface Elevation Based on Polynomial Fitting of Sparse ICESat—GLAS Footprints

    Directory of Open Access Journals (Sweden)

    Tianjin Huang

    2017-08-01

    Full Text Available We present in this paper a polynomial fitting method, applicable to segments of footprints measured by the Geoscience Laser Altimeter System (GLAS), to estimate glacier thickness change. Our modification makes the method applicable to complex topography, such as a large mountain glacier. After a full analysis of the planar fitting method to characterize the errors of estimates due to complex topography, we developed an improved fitting method by adjusting a binary polynomial surface to the local topography. The improved method and the planar fitting method were tested on the accumulation areas of the Naimona’nyi glacier and Yanong glacier on along-track facets with lengths of 1000 m, 1500 m, 2000 m, and 2500 m, respectively. The results show that the improved method gives more reliable estimates of changes in elevation than planar fitting. The improved method was also tested on the Guliya glacier, with a large and relatively flat area, and the Chasku Muba glacier, with very complex topography. The results at these test sites demonstrate that the improved method can give estimates of glacier thickness change on glaciers with a large area and a complex topography. Additionally, the improved method based on GLAS data and the Shuttle Radar Topography Mission Digital Elevation Model (SRTM-DEM) can give estimates of glacier thickness change from 2000 to 2008/2009, since it takes the 2000 SRTM-DEM as a reference, which is a longer period than the 2004 to 2008/2009 obtained when using the GLAS data only and the planar fitting method.
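
    The binary (bivariate) polynomial fit at the core of the improved method can be sketched as follows; the footprint coordinates and elevations are synthetic, the surface order is an assumption, and the residual of later-epoch footprints from the fitted surface stands in for the elevation-change estimate.

```python
# Fit z = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2 to reference footprints,
# then estimate elevation change from later footprints (synthetic data).
import numpy as np

def design(x, y):
    return np.column_stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2])

rng = np.random.default_rng(6)
x0, y0 = rng.uniform(0, 1000, 60), rng.uniform(-100, 100, 60)      # reference epoch
z0 = 5200 + 0.05 * x0 + 0.02 * y0 + 1e-5 * x0 ** 2 + rng.normal(0, 0.3, 60)
coeffs, *_ = np.linalg.lstsq(design(x0, y0), z0, rcond=None)

x1, y1 = rng.uniform(0, 1000, 40), rng.uniform(-100, 100, 40)      # later epoch
z1 = 5200 + 0.05 * x1 + 0.02 * y1 + 1e-5 * x1 ** 2 - 2.0 + rng.normal(0, 0.3, 40)

dz = z1 - design(x1, y1) @ coeffs
print(f"mean elevation change = {dz.mean():.2f} m")    # roughly -2 m
```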

  4. Modeling uranium(VI) adsorption onto montmorillonite under varying carbonate concentrations: A surface complexation model accounting for the spillover effect on surface potential

    Science.gov (United States)

    Tournassat, C.; Tinnacher, R. M.; Grangeon, S.; Davis, J. A.

    2018-01-01

    The prediction of U(VI) adsorption onto montmorillonite clay is confounded by the complexities of: (1) the montmorillonite structure in terms of adsorption sites on basal and edge surfaces, and the complex interactions between the electrical double layers at these surfaces, and (2) U(VI) solution speciation, which can include cationic, anionic and neutral species. Previous U(VI)-montmorillonite adsorption and modeling studies have typically expanded classical surface complexation modeling approaches, initially developed for simple oxides, to include both cation exchange and surface complexation reactions. However, previous models have not taken into account the unique characteristics of electrostatic surface potentials that occur at montmorillonite edge sites, where the electrostatic surface potential of basal plane cation exchange sites influences the surface potential of neighboring edge sites ('spillover' effect). A series of U(VI) - Na-montmorillonite batch adsorption experiments was conducted as a function of pH, with variable U(VI), Ca, and dissolved carbonate concentrations. Based on the experimental data, a new type of surface complexation model (SCM) was developed for montmorillonite, that specifically accounts for the spillover effect using the edge surface speciation model by Tournassat et al. (2016a). The SCM allows for a prediction of U(VI) adsorption under varying chemical conditions with a minimum number of fitting parameters, not only for our own experimental results, but also for a number of published data sets. The model agreed well with many of these datasets without introducing a second site type or including the formation of ternary U(VI)-carbonato surface complexes. The model predictions were greatly impacted by utilizing analytical measurements of dissolved inorganic carbon (DIC) concentrations in individual sample solutions rather than assuming solution equilibration with a specific partial pressure of CO2, even when the gas phase was

  5. Fitting Equilibrium Search Models to Labour Market Data

    DEFF Research Database (Denmark)

    Bowlus, Audra J.; Kiefer, Nicholas M.; Neumann, George R.

    1996-01-01

Specification and estimation of a Burdett-Mortensen type equilibrium search model is considered. The estimation is nonstandard. An estimation strategy asymptotically equivalent to maximum likelihood is proposed and applied. The results indicate that specifications with a small number of productivity types fit the data well compared to the homogeneous model.

  6. Evaluation of surface-wave waveform modeling for lithosphere velocity structure

    Science.gov (United States)

    Chang, Tao-Ming

Surface-waveform modeling methods will become standard tools for studying lithosphere structure because they can place greater constraints on earth structure and because of interest in the three-dimensional earth. The purpose of this study is to begin to learn the applicability and limitations of these methods. A surface-waveform inversion method is implemented using generalized seismological data functional theory. The method has been tested using synthetic and real seismic data, and the tests show that it is well suited for teleseismic and regional seismograms. Like other linear inversion problems, this method also requires a good starting model. To ease the reliance on good starting models, a global search technique, the genetic algorithm, has been applied to surface-waveform modeling. This method can rapidly find good models for explaining surface-wave waveforms at regional distances. However, this implementation also reveals that criteria widely used in seismological studies are not good enough to indicate the goodness of waveform fit. These two methods, together with the linear waveform inversion method and the traditional surface-wave dispersion inversion method, have been applied to a western Texas earthquake to test their capabilities. The focal mechanism of the Texas event has been re-estimated using a grid search over surface-wave spectral amplitudes. A comparison of these four algorithms shows some interesting seismic evidence for lithosphere structure.

  7. Fast Algorithms for Fitting Active Appearance Models to Unconstrained Images

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Pantic, Maja

    2016-01-01

Fitting algorithms for Active Appearance Models (AAMs) are usually considered to be robust but slow or fast but less able to generalize well to unseen variations. In this paper, we look into AAM fitting algorithms and make the following orthogonal contributions: We present a simple “project-out” …

  8. Fast and exact Newton and Bidirectional fitting of Active Appearance Models.

    Science.gov (United States)

    Kossaifi, Jean; Tzimiropoulos, Yorgos; Pantic, Maja

    2016-12-21

Active Appearance Models (AAMs) are generative models of shape and appearance that have proven very attractive for their ability to handle wide changes in illumination, pose and occlusion when trained in the wild, while not requiring large training datasets like regression-based or deep learning methods. The problem of fitting an AAM is usually formulated as a non-linear least squares one and the main way of solving it is a standard Gauss-Newton algorithm. In this paper we extend Active Appearance Models in two ways: we first extend the Gauss-Newton framework by formulating a bidirectional fitting method that deforms both the image and the template to fit a new instance. We then formulate a second order method by deriving an efficient Newton method for AAM fitting. We derive both methods in a unified framework for two types of Active Appearance Models, holistic and part-based, and additionally show how to exploit the structure in the problem to derive fast yet exact solutions. We perform a thorough evaluation of all algorithms on three challenging and recently annotated in-the-wild datasets, and investigate fitting accuracy, convergence properties and the influence of noise in the initialisation. We compare our proposed methods to other algorithms and show that they yield state-of-the-art results, out-performing other methods while having superior convergence properties.
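The core fitting step described here is a Gauss-Newton solution of a nonlinear least-squares problem. The snippet below is a generic, self-contained Gauss-Newton iteration on a toy exponential-decay residual, not the AAM-specific formulation from the paper; the residual model, Jacobian and data are assumptions chosen only to show the update Δp = −(JᵀJ)⁻¹Jᵀr.

```python
# Generic Gauss-Newton iteration for nonlinear least squares (illustrative only;
# an AAM residual would be the difference between the warped image and the model).
import numpy as np

def gauss_newton(residual, jacobian, p0, n_iter=20):
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)                      # residual vector at current parameters
        J = jacobian(p)                      # Jacobian of residuals w.r.t. parameters
        # Solve the normal equations J^T J dp = -J^T r for the update dp
        dp = np.linalg.solve(J.T @ J, -J.T @ r)
        p = p + dp
        if np.linalg.norm(dp) < 1e-10:
            break
    return p

# Toy example: fit y = a * exp(b * t) to noisy data
t = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-1.3 * t) + np.random.default_rng(1).normal(0, 0.01, t.size)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
print(gauss_newton(residual, jacobian, p0=[1.0, -1.0]))
```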

  9. Elevation data fitting and precision analysis of Google Earth in road survey

    Science.gov (United States)

    Wei, Haibin; Luan, Xiaohan; Li, Hanchao; Jia, Jiangkun; Chen, Zhao; Han, Leilei

    2018-05-01

Objective: In order to improve the efficiency of road surveys and save manpower and material resources, this paper applies Google Earth to the feasibility study stage of road survey and design. Because Google Earth elevation data lack precision, this paper focuses on finding several different fitting or interpolation methods to improve the data precision, so as to meet, as far as possible, the accuracy requirements of road survey and design specifications. Method: On the basis of the elevation differences at a limited number of public points, the elevation difference at any other point can be fitted or interpolated. Thus, a precise elevation can be obtained by subtracting the elevation difference from the Google Earth data. The quadratic polynomial surface fitting method, the cubic polynomial surface fitting method, the V4 interpolation method in MATLAB and a neural network method are used in this paper to process Google Earth elevation data. Internal conformity, external conformity and the cross-correlation coefficient are used as evaluation indexes to assess the data processing effect. Results: There is no fitting difference at the fitting points when the V4 interpolation method is used. Its external conformity is the largest and its accuracy improvement is the worst, so the V4 interpolation method is ruled out. The internal and external conformity of the cubic polynomial surface fitting method are both better than those of the quadratic polynomial surface fitting method. The neural network method has a fitting effect similar to that of the cubic polynomial surface fitting method, but its fitting effect is better in the case of larger elevation differences. Because the neural network method is a less manageable fitting model, the cubic polynomial surface fitting method should be used as the main method, with the neural network method as an auxiliary method in the case of larger elevation differences. Conclusions: The cubic polynomial surface fitting method can obviously
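Internal and external conformity, as used here, can be read as RMS residuals at the fitting points and at independent check points, respectively. Under that assumed reading, the sketch below (our illustration, not the study's code) fits a full bivariate cubic surface to elevation differences at control points and reports both indices; the point sets and elevation values are placeholders.

```python
# Illustrative computation of internal/external conformity (assumed here to be RMS
# residuals at fitting points vs. held-out check points) for a cubic surface fit.
import numpy as np

def cubic_design(x, y):
    # Full bivariate cubic: 1, x, y, x^2, xy, y^2, x^3, x^2 y, x y^2, y^3
    return np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2,
                            x**3, x**2*y, x*y**2, y**3])

rng = np.random.default_rng(2)
x, y = rng.uniform(0, 1, 120), rng.uniform(0, 1, 120)
dz = 3 + 2*x - y + 0.5*x*y + rng.normal(0, 0.2, 120)   # elevation differences (placeholder)

fit, chk = slice(0, 80), slice(80, None)               # 80 fitting points, 40 check points
coef, *_ = np.linalg.lstsq(cubic_design(x[fit], y[fit]), dz[fit], rcond=None)

internal = np.sqrt(np.mean((cubic_design(x[fit], y[fit]) @ coef - dz[fit])**2))
external = np.sqrt(np.mean((cubic_design(x[chk], y[chk]) @ coef - dz[chk])**2))
print(f"internal conformity ≈ {internal:.3f} m, external conformity ≈ {external:.3f} m")
```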

  10. The Meaning of Goodness-of-Fit Tests: Commentary on "Goodness-of-Fit Assessment of Item Response Theory Models"

    Science.gov (United States)

    Thissen, David

    2013-01-01

    In this commentary, David Thissen states that "Goodness-of-fit assessment for IRT models is maturing; it has come a long way from zero." Thissen then references prior works on "goodness of fit" in the index of Lord and Novick's (1968) classic text; Yen (1984); Drasgow, Levine, Tsien, Williams, and Mead (1995); Chen and…

  11. A Coupled 2 × 2D Babcock-Leighton Solar Dynamo Model. I. Surface Magnetic Flux Evolution

    Science.gov (United States)

    Lemerle, Alexandre; Charbonneau, Paul; Carignan-Dugas, Arnaud

    2015-09-01

    The need for reliable predictions of the solar activity cycle motivates the development of dynamo models incorporating a representation of surface processes sufficiently detailed to allow assimilation of magnetographic data. In this series of papers we present one such dynamo model, and document its behavior and properties. This first paper focuses on one of the model’s key components, namely surface magnetic flux evolution. Using a genetic algorithm, we obtain best-fit parameters of the transport model by least-squares minimization of the differences between the associated synthetic synoptic magnetogram and real magnetographic data for activity cycle 21. Our fitting procedure also returns Monte Carlo-like error estimates. We show that the range of acceptable surface meridional flow profiles is in good agreement with Doppler measurements, even though the latter are not used in the fitting process. Using a synthetic database of bipolar magnetic region (BMR) emergences reproducing the statistical properties of observed emergences, we also ascertain the sensitivity of global cycle properties, such as the strength of the dipole moment and timing of polarity reversal, to distinct realizations of BMR emergence, and on this basis argue that this stochasticity represents a primary source of uncertainty for predicting solar cycle characteristics.
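The calibration described here is a global least-squares minimization, by a genetic algorithm, of the mismatch between the synthetic synoptic magnetogram and observed magnetograph data. As a hedged stand-in, the sketch below uses scipy's differential evolution, a related population-based global optimizer, to recover two placeholder "transport" parameters by minimizing a sum of squared differences; the toy signal model is an assumption and is not the actual surface flux transport model.

```python
# Stand-in for genetic-algorithm calibration: population-based global minimization
# of the squared mismatch between a synthetic signal and "observations".
# The transport model here is a toy placeholder, not the surface flux model.
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0, 11, 300)                       # years within one activity cycle

def synthetic_signal(params, t):
    flow_speed, decay_time = params               # placeholder transport parameters
    return np.sin(2 * np.pi * t / 11.0) * np.exp(-t / decay_time) * flow_speed

rng = np.random.default_rng(7)
observed = synthetic_signal((15.0, 6.0), t) + rng.normal(0, 0.5, t.size)

chi2 = lambda p: np.sum((synthetic_signal(p, t) - observed) ** 2)
result = differential_evolution(chi2, bounds=[(1.0, 30.0), (1.0, 20.0)], seed=1)
print("best-fit parameters:", result.x, " chi^2 =", result.fun)
```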

  12. A COUPLED 2 × 2D BABCOCK–LEIGHTON SOLAR DYNAMO MODEL. I. SURFACE MAGNETIC FLUX EVOLUTION

    International Nuclear Information System (INIS)

    Lemerle, Alexandre; Charbonneau, Paul; Carignan-Dugas, Arnaud

    2015-01-01

    The need for reliable predictions of the solar activity cycle motivates the development of dynamo models incorporating a representation of surface processes sufficiently detailed to allow assimilation of magnetographic data. In this series of papers we present one such dynamo model, and document its behavior and properties. This first paper focuses on one of the model’s key components, namely surface magnetic flux evolution. Using a genetic algorithm, we obtain best-fit parameters of the transport model by least-squares minimization of the differences between the associated synthetic synoptic magnetogram and real magnetographic data for activity cycle 21. Our fitting procedure also returns Monte Carlo-like error estimates. We show that the range of acceptable surface meridional flow profiles is in good agreement with Doppler measurements, even though the latter are not used in the fitting process. Using a synthetic database of bipolar magnetic region (BMR) emergences reproducing the statistical properties of observed emergences, we also ascertain the sensitivity of global cycle properties, such as the strength of the dipole moment and timing of polarity reversal, to distinct realizations of BMR emergence, and on this basis argue that this stochasticity represents a primary source of uncertainty for predicting solar cycle characteristics

  13. A COUPLED 2 × 2D BABCOCK–LEIGHTON SOLAR DYNAMO MODEL. I. SURFACE MAGNETIC FLUX EVOLUTION

    Energy Technology Data Exchange (ETDEWEB)

    Lemerle, Alexandre; Charbonneau, Paul; Carignan-Dugas, Arnaud, E-mail: lemerle@astro.umontreal.ca, E-mail: paulchar@astro.umontreal.ca [Département de physique, Université de Montréal, 2900 boul. Édouard-Montpetit, Montréal, QC, H3T 1J4 (Canada)

    2015-09-01

    The need for reliable predictions of the solar activity cycle motivates the development of dynamo models incorporating a representation of surface processes sufficiently detailed to allow assimilation of magnetographic data. In this series of papers we present one such dynamo model, and document its behavior and properties. This first paper focuses on one of the model’s key components, namely surface magnetic flux evolution. Using a genetic algorithm, we obtain best-fit parameters of the transport model by least-squares minimization of the differences between the associated synthetic synoptic magnetogram and real magnetographic data for activity cycle 21. Our fitting procedure also returns Monte Carlo-like error estimates. We show that the range of acceptable surface meridional flow profiles is in good agreement with Doppler measurements, even though the latter are not used in the fitting process. Using a synthetic database of bipolar magnetic region (BMR) emergences reproducing the statistical properties of observed emergences, we also ascertain the sensitivity of global cycle properties, such as the strength of the dipole moment and timing of polarity reversal, to distinct realizations of BMR emergence, and on this basis argue that this stochasticity represents a primary source of uncertainty for predicting solar cycle characteristics.

  14. The lz(p)* Person-Fit Statistic in an Unfolding Model Context.

    Science.gov (United States)

    Tendeiro, Jorge N

    2017-01-01

    Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded unfolding model is used. Results from a simulation study indicate that the person-fit statistic performed relatively well in detecting midpoint response style patterns and not so well in detecting extreme response style patterns.

  15. Fit Gap Analysis – The Role of Business Process Reference Models

    Directory of Open Access Journals (Sweden)

    Dejan Pajk

    2013-12-01

    Full Text Available Enterprise resource planning (ERP systems support solutions for standard business processes such as financial, sales, procurement and warehouse. In order to improve the understandability and efficiency of their implementation, ERP vendors have introduced reference models that describe the processes and underlying structure of an ERP system. To select and successfully implement an ERP system, the capabilities of that system have to be compared with a company’s business needs. Based on a comparison, all of the fits and gaps must be identified and further analysed. This step usually forms part of ERP implementation methodologies and is called fit gap analysis. The paper theoretically overviews methods for applying reference models and describes fit gap analysis processes in detail. The paper’s first contribution is its presentation of a fit gap analysis using standard business process modelling notation. The second contribution is the demonstration of a process-based comparison approach between a supply chain process and an ERP system process reference model. In addition to its theoretical contributions, the results can also be practically applied to projects involving the selection and implementation of ERP systems.

  16. Soil physical properties influencing the fitting parameters in Philip and Kostiakov infiltration models

    International Nuclear Information System (INIS)

    Mbagwu, J.S.C.

    1994-05-01

Among the many models developed for monitoring the infiltration process, those of Philip and Kostiakov have been studied in detail because of their simplicity and the ease of estimating their fitting parameters. The important soil physical factors influencing the fitting parameters in these infiltration models are reported in this study. The results of the study show that the single most important soil property affecting the fitting parameters in these models is the effective porosity. 36 refs, 2 figs, 5 tabs
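The two models referred to here have simple closed forms, commonly written as I(t) = S·t^(1/2) + A·t for Philip and I(t) = k·t^a for Kostiakov. The snippet below is a hedged illustration, not from the paper, of how the fitting parameters (S, A, k, a) could be estimated from cumulative infiltration measurements; the time and infiltration values are synthetic placeholders.

```python
# Illustrative least-squares estimation of Philip (I = S*sqrt(t) + A*t) and
# Kostiakov (I = k*t**a) parameters from cumulative infiltration data.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)          # min (placeholder)
I = np.array([1.1, 1.7, 2.6, 4.0, 5.1, 6.6, 7.9])                # cm (placeholder)

philip = lambda t, S, A: S * np.sqrt(t) + A * t
kostiakov = lambda t, k, a: k * t**a

(S, A), _ = curve_fit(philip, t, I, p0=[0.5, 0.01])
(k, a), _ = curve_fit(kostiakov, t, I, p0=[0.5, 0.5])
print(f"Philip:    S = {S:.3f} cm/min^0.5, A = {A:.4f} cm/min")
print(f"Kostiakov: k = {k:.3f}, a = {a:.3f}")
```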

  17. Characterization of surface antigen protein 1 (SurA1) from Acinetobacter baumannii and its role in virulence and fitness.

    Science.gov (United States)

    Liu, Dong; Liu, Zeng-Shan; Hu, Pan; Cai, Ling; Fu, Bao-Quan; Li, Yan-Song; Lu, Shi-Ying; Liu, Nan-Nan; Ma, Xiao-Long; Chi, Dan; Chang, Jiang; Shui, Yi-Ming; Li, Zhao-Hui; Ahmad, Waqas; Zhou, Yu; Ren, Hong-Lin

    2016-04-15

    Acinetobacter baumannii is a Gram-negative bacillus that causes nosocomial infections, such as bacteremia, pneumonia, and meningitis and urinary tract and wound infections. In the present study, the surface antigen protein 1 (SurA1) gene of A. baumannii strain CCGGD201101 was identified, cloned and expressed, and then its roles in fitness and virulence were investigated. Virulence was observed in the human lung cancer cell lines A549 and HEp-2 at one week after treatment with recombinant SurA1. One isogenic SurA1 knock-out strain, GR0015, which was derived from the A. baumannii strain CCGGD201101 isolated from diseased chicks in a previous study, highlighted the effect of SurA1 on fitness and growth. Its growth rate in LB broth and killing activity in human sera were significantly decreased compared with strain CCGGD201101. In the Galleria mellonella insect model, the isogenic SurA1 knock-out strain exhibited a lower survival rate and decreased dissemination. These results suggest that SurA1 plays an important role in the fitness and virulence of A. baumannii. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. The fitness landscape of HIV-1 gag: advanced modeling approaches and validation of model predictions by in vitro testing.

    Directory of Open Access Journals (Sweden)

    Jaclyn K Mann

    2014-08-01

Full Text Available Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model) that is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple and 25 single mutations within these) predicted to be either harmful to HIV-1 viability or fitness-neutral were introduced into HIV-1 NL4-3 by site-directed mutagenesis, and the replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = -0.74, p = 3.6×10⁻⁶) are strongly correlated, and this correlation was further strengthened in the regularized Ising model (r = -0.83, p = 3.7×10⁻¹²). Performance of the Potts model (r = -0.73, p = 9.7×10⁻⁹) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion
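In the Ising-type landscape described here, the predicted cost of a mutant sequence can be written as an energy built from inferred fields and pairwise couplings, E(s) = Σᵢ hᵢsᵢ + Σ_{i<j} Jᵢⱼsᵢsⱼ with sᵢ ∈ {0, 1} marking mutated sites. The sketch below illustrates only that bookkeeping with made-up parameters; in the study the fields and couplings are inferred from sequence data and the resulting energies are correlated with measured replication capacities.

```python
# Illustrative Ising-model "energy" of mutant Gag sequences: higher energy is
# predicted to mean lower fitness. Fields h and couplings J are placeholders;
# in practice they would be inferred from HIV-1 sequence alignments.
import numpy as np

rng = np.random.default_rng(3)
L = 10                                  # number of variable sites (toy size)
h = rng.normal(0.5, 0.3, L)             # single-site fields
J = rng.normal(0.0, 0.2, (L, L))        # pairwise couplings
J = np.triu(J, k=1)                     # keep i < j terms only

def ising_energy(s):
    """s is a 0/1 vector marking which sites carry a mutation."""
    s = np.asarray(s, dtype=float)
    return h @ s + s @ J @ s

wild_type = np.zeros(L, dtype=int)
double_mutant = wild_type.copy()
double_mutant[[2, 7]] = 1
print("E(wild type)     =", ising_energy(wild_type))
print("E(double mutant) =", ising_energy(double_mutant))
```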

  19. Modeling the Acid-Base Properties of Montmorillonite Edge Surfaces.

    Science.gov (United States)

    Tournassat, Christophe; Davis, James A; Chiaberge, Christophe; Grangeon, Sylvain; Bourg, Ian C

    2016-12-20

The surface reactivity of clay minerals remains challenging to characterize because of a duality of adsorption surfaces and mechanisms that does not exist in the case of simple oxide surfaces: edge surfaces of clay minerals have a variable proton surface charge arising from hydroxyl functional groups, whereas basal surfaces have a permanent negative charge arising from isomorphic substitutions. Hence, the relationship between surface charge and surface potential on edge surfaces cannot be described using the Gouy-Chapman relation, because of a spillover of negative electrostatic potential from the basal surface onto the edge surface. While surface complexation models can be modified to account for these features, a predictive fit of experimental data was not possible until recently, because of uncertainty regarding the densities and intrinsic pKa values of edge functional groups. Here, we reexamine this problem in light of new knowledge on intrinsic pKa values obtained over the past decade using ab initio molecular dynamics simulations, and we propose a new formalism to describe edge functional groups. Our simulation results yield reasonable predictions of the best available experimental acid-base titration data.

  20. Correcting Model Fit Criteria for Small Sample Latent Growth Models with Incomplete Data

    Science.gov (United States)

    McNeish, Daniel; Harring, Jeffrey R.

    2017-01-01

To date, small sample problems with latent growth models (LGMs) have not received as much attention in the literature as related mixed-effects models (MEMs). Although many models can be interchangeably framed as an LGM or a MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…

  1. Supersymmetry with prejudice: Fitting the wrong model to LHC data

    Science.gov (United States)

    Allanach, B. C.; Dolan, Matthew J.

    2012-09-01

We critically examine interpretations of hypothetical supersymmetric LHC signals, fitting to alternative wrong models of supersymmetry breaking. The signals we consider are some of the most constraining on the sparticle spectrum: invariant mass distributions with edges and endpoints from the golden decay chain q̃ → q χ₂⁰ (→ l̃± l∓ q) → χ₁⁰ l⁺ l⁻ q. We assume a constrained minimal supersymmetric standard model (CMSSM) point to be the ‘correct’ one, but fit the signals instead with minimal gauge mediated supersymmetry breaking models (mGMSB) with a neutralino quasistable lightest supersymmetric particle, minimal anomaly mediation and large volume string compactification models. Minimal anomaly mediation and the large volume scenario can be unambiguously discriminated against the CMSSM for the assumed signal and 1 fb⁻¹ of LHC data at √s = 14 TeV. However, mGMSB would not be discriminated on the basis of the kinematic endpoints alone. The best-fit point spectra of mGMSB and CMSSM look remarkably similar, making experimental discrimination at the LHC based on the edges or Higgs properties difficult. However, using rate information for the golden chain should provide the additional separation required.

  2. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species.

    Science.gov (United States)

    Adams, Matthew P; Collier, Catherine J; Uthicke, Sven; Ow, Yan X; Langlois, Lucas; O'Brien, Katherine R

    2017-01-04

When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
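Several of the models compared here are parameterized directly by Pmax and Topt. As a hedged illustration (not one of the twelve published models evaluated in the paper), the snippet below fits a simple Gaussian-shaped response P(T) = Pmax·exp(−((T − Topt)/w)²) to synthetic photosynthesis measurements, so the thermal optimum is read off as a fitted parameter.

```python
# Illustrative fit of a temperature response curve whose parameters are directly
# the biologically meaningful quantities Pmax and Topt (Gaussian form assumed here).
import numpy as np
from scipy.optimize import curve_fit

def p_of_t(T, Pmax, Topt, width):
    return Pmax * np.exp(-((T - Topt) / width) ** 2)

T = np.arange(15, 44, 3, dtype=float)                       # deg C (placeholder)
rng = np.random.default_rng(4)
P = p_of_t(T, Pmax=180.0, Topt=31.0, width=8.0) + rng.normal(0, 5, T.size)

popt, pcov = curve_fit(p_of_t, T, P, p0=[150.0, 30.0, 10.0])
Pmax, Topt, width = popt
print(f"Pmax ≈ {Pmax:.1f}, Topt ≈ {Topt:.1f} °C, width ≈ {width:.1f} °C")
```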

  3. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    Science.gov (United States)

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O'Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.

  4. A person fit test for IRT models for polytomous items

    NARCIS (Netherlands)

    Glas, Cornelis A.W.; Dagohoy, A.V.

    2007-01-01

    A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability

  5. The lz(p)* Person-Fit Statistic in an Unfolding Model Context

    NARCIS (Netherlands)

    Tendeiro, Jorge N.

    2017-01-01

    Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded

  6. Surface complexation modelling applied to the sorption of nickel on silica

    International Nuclear Information System (INIS)

    Olin, M.

    1995-10-01

The modelling, based on a mechanistic approach, of a sorption experiment is presented in the report. The system chosen for the experiments (nickel + silica) is modelled by using literature values for some parameters, the remainder being fitted using existing experimental results. All calculations are performed with HYDRAQL, a model designed especially for surface complexation modelling. Almost all the calculations are made using the Triple-Layer Model (TLM) approach, which appeared to be sufficiently flexible for the silica system. The report includes a short description of mechanistic sorption models, input data, experimental results and modelling results (mostly graphical presentations). (13 refs., 40 figs., 4 tabs.)

  7. A computer program for fitting smooth surfaces to an aircraft configuration and other three dimensional geometries

    Science.gov (United States)

    Craidon, C. B.

    1975-01-01

A computer program that uses a three-dimensional geometric technique for fitting a smooth surface to the component parts of an aircraft configuration is presented. The resulting surface equations are useful in performing various kinds of calculations in which a three-dimensional mathematical description is necessary. Program options may be used to compute information for three-view and orthographic projections of the configuration as well as cross-section plots at any orientation through the configuration. The aircraft geometry input section of the program may be easily replaced with a surface point description in a different form so that the program could be of use for any three-dimensional surface equations.

  8. A Parametric Model of Shoulder Articulation for Virtual Assessment of Space Suit Fit

    Science.gov (United States)

    Kim, K. Han; Young, Karen S.; Bernal, Yaritza; Boppana, Abhishektha; Vu, Linh Q.; Benson, Elizabeth A.; Jarvis, Sarah; Rajulu, Sudhakar L.

    2016-01-01

    Suboptimal suit fit is a known risk factor for crewmember shoulder injury. Suit fit assessment is however prohibitively time consuming and cannot be generalized across wide variations of body shapes and poses. In this work, we have developed a new design tool based on the statistical analysis of body shape scans. This tool is aimed at predicting the skin deformation and shape variations for any body size and shoulder pose for a target population. This new process, when incorporated with CAD software, will enable virtual suit fit assessments, predictively quantifying the contact volume, and clearance between the suit and body surface at reduced time and cost.

  9. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan; Krebs-Smith, Susan M.; Midthune, Douglas; Perez, Adriana; Buckman, Dennis W.; Kipnis, Victor; Freedman, Laurence S.; Dodd, Kevin W.; Carroll, Raymond J

    2011-01-01

There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) computation of fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  10. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan

    2011-01-06

There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) computation of fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  11. Nonlinear models for fitting growth curves of Nellore cows reared in the Amazon Biome

    Directory of Open Access Journals (Sweden)

    Kedma Nayra da Silva Marinho

    2013-09-01

Full Text Available Growth curves of Nellore cows were estimated by comparing six nonlinear models: Brody, Logistic, two alternatives by Gompertz, Richards and Von Bertalanffy. The models were fitted to weight-age data, from birth to 750 days of age, of 29,221 cows born between 1976 and 2006 in the Brazilian states of Acre, Amapá, Amazonas, Pará, Rondônia, Roraima and Tocantins. The models were fitted by the Gauss-Newton method. The goodness of fit of the models was evaluated by using the mean square error, adjusted coefficient of determination, prediction error and mean absolute error. Biological interpretation of the parameters was accomplished by plotting estimated weights versus the observed weight means, instantaneous growth rate, absolute maturity rate, relative instantaneous growth rate, inflection point and the magnitude of the parameters A (asymptotic weight) and K (maturing rate). The Brody and Von Bertalanffy models fitted the weight-age data but the other models did not. The average weight (A) and growth rate (K) were 384.6±1.63 kg and 0.0022±0.00002 (Brody) and 313.40±0.70 kg and 0.0045±0.00002 (Von Bertalanffy). The Brody model provides a better goodness of fit than the Von Bertalanffy model.
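Both retained models have standard closed forms, e.g. the Brody curve W(t) = A(1 − B·e^(−Kt)) and the Von Bertalanffy curve W(t) = A(1 − B·e^(−Kt))³, with A the asymptotic weight and K the maturing rate. The snippet below is an illustrative nonlinear fit of those two forms to made-up weight-age data; it is not the authors' dataset or code.

```python
# Illustrative nonlinear fits of Brody and Von Bertalanffy growth curves to
# weight-age data (synthetic placeholder values, not the Nellore dataset).
import numpy as np
from scipy.optimize import curve_fit

brody = lambda t, A, B, K: A * (1 - B * np.exp(-K * t))
von_bert = lambda t, A, B, K: A * (1 - B * np.exp(-K * t)) ** 3

t = np.array([0, 60, 120, 205, 365, 550, 750], dtype=float)     # age (days)
w = np.array([30, 75, 112, 160, 230, 300, 345], dtype=float)    # weight (kg)

inits = {"Brody": [400.0, 0.9, 0.003], "Von Bertalanffy": [400.0, 0.6, 0.003]}
for name, f in [("Brody", brody), ("Von Bertalanffy", von_bert)]:
    (A, B, K), _ = curve_fit(f, t, w, p0=inits[name], maxfev=10000)
    rmse = np.sqrt(np.mean((f(t, A, B, K) - w) ** 2))
    print(f"{name:16s}: A = {A:.1f} kg, K = {K:.5f} /day, RMSE = {rmse:.1f} kg")
```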

  12. A statistical model for the wettability of surfaces with heterogeneous pore geometries

    Science.gov (United States)

    Brockway, Lance; Taylor, Hayden

    2016-10-01

    We describe a new approach to modeling the wetting behavior of micro- and nano-textured surfaces with varying degrees of geometrical heterogeneity. Surfaces are modeled as pore arrays with a Gaussian distribution of sidewall reentrant angles and a characteristic wall roughness. Unlike conventional wettability models, our model considers the fraction of a surface’s pores that are filled at any time, allowing us to capture more subtle dependences of a liquid’s apparent contact angle on its surface tension. The model has four fitting parameters and is calibrated for a particular surface by measuring the apparent contact angles between the surface and at least four probe liquids. We have calibrated the model for three heterogeneous nanoporous surfaces that we have fabricated: a hydrothermally grown zinc oxide, a film of polyvinylidene fluoride (PVDF) microspheres formed by spinodal decomposition, and a polytetrafluoroethylene (PTFE) film with pores defined by sacrificial polystyrene microspheres. These three surfaces show markedly different dependences of a liquid’s apparent contact angle on the liquid’s surface tension, and the results can be explained by considering geometric variability. The highly variable PTFE pores yield the most gradual variation of apparent contact angle with probe liquid surface tension. The PVDF microspheres are more regular in diameter and, although connected in an irregular manner, result in a much sharper transition from non-wetting to wetting behavior as surface tension reduces. We also demonstrate, by terminating porous zinc oxide with three alternative hydrophobic molecules, that a single geometrical model can capture a structure’s wetting behavior for multiple surface chemistries and liquids. Finally, we contrast our results with those from a highly regular, lithographically-produced structure which shows an extremely sharp dependence of wettability on surface tension. This new model could be valuable in designing and

  13. Research on Calculation of the IOL Tilt and Decentration Based on Surface Fitting

    OpenAIRE

    Li, Lin; Wang, Ke; Yan, Yan; Song, Xudong; Liu, Zhicheng

    2013-01-01

The tilt and decentration of an intraocular lens (IOL) result in defocus, astigmatism, and wavefront aberration after the operation. The objective is to provide a method to estimate the tilt and decentration of the IOL more accurately. Based on AS-OCT images of twelve eyes from eight cases with subluxated lenses after the operation, we fitted a spherical equation to the data obtained from the images of the anterior and posterior surfaces of the IOL. By the established relationship between IOL tilt (decentrati...
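Fitting a sphere to digitized surface points can be posed as a linear least-squares problem, since x² + y² + z² = 2ax + 2by + 2cz + d is linear in the centre (a, b, c) and the constant d = r² − a² − b² − c². The sketch below illustrates only that generic step; the "surface points" are placeholders, and in the paper the tilt and decentration would then be derived from the fitted anterior and posterior surfaces.

```python
# Generic algebraic sphere fit: solves for centre (cx, cy, cz) and radius r from
# surface points using the linearization x^2+y^2+z^2 = 2*cx*x + 2*cy*y + 2*cz*z + d.
import numpy as np

def fit_sphere(pts):
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones_like(x)])
    b = x**2 + y**2 + z**2
    (cx, cy, cz, d), *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = np.array([cx, cy, cz])
    radius = np.sqrt(d + centre @ centre)
    return centre, radius

# Placeholder "anterior surface" points sampled from a known sphere plus noise
rng = np.random.default_rng(5)
true_centre, true_r = np.array([0.1, -0.2, 3.0]), 8.5
u, v = rng.uniform(0, 0.6, 300), rng.uniform(0, 2 * np.pi, 300)
pts = true_centre + true_r * np.column_stack([np.sin(u) * np.cos(v),
                                              np.sin(u) * np.sin(v),
                                              -np.cos(u)])
pts += rng.normal(0, 0.01, pts.shape)
print(fit_sphere(pts))
```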

  14. Model-driven harmonic parameterization of the cortical surface: HIP-HOP.

    Science.gov (United States)

    Auzias, G; Lefèvre, J; Le Troter, A; Fischer, C; Perrot, M; Régis, J; Coulon, O

    2013-05-01

In the context of inter-subject brain surface matching, we present a parameterization of the cortical surface constrained by a model of cortical organization. The parameterization is defined via a harmonic mapping of each hemisphere surface to a rectangular planar domain that integrates a representation of the model. As opposed to previous landmark-based registration methods, we do not match folds between individuals but instead optimize the fit between cortical sulci and specific iso-coordinate axes in the model. This strategy overcomes some limitations of sulcus-based registration techniques, such as the topological variability of sulcal landmarks across subjects. Experiments on 62 subjects with manually traced sulci are presented and compared with the results of the FreeSurfer software. The evaluation involves a measure of dispersion of sulci together with angular and areal distortions. We show that the model-based strategy can lead to a natural, efficient and very fast (less than 5 min per hemisphere) method for defining inter-subject correspondences. We discuss how this approach also reduces the problems inherent in anatomically defined landmarks and opens the way to the investigation of cortical organization through the notion of orientation and alignment of structures across the cortex.

  15. Efficient Constrained Local Model Fitting for Non-Rigid Face Alignment.

    Science.gov (United States)

    Lucey, Simon; Wang, Yang; Cox, Mark; Sridharan, Sridha; Cohn, Jeffery F

    2009-11-01

Active appearance models (AAMs) have demonstrated great utility when employed for non-rigid face alignment/tracking. The "simultaneous" algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real-time performance (2-3 fps). The "project-out" algorithm for fitting an AAM achieves faster than real-time performance (> 200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve superior performance to the "simultaneous" AAM algorithm along with real-time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criterion, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criterion for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the "exhaustive local search" (ELS) algorithm. Experiments were conducted on the CMU Multi-PIE database.

  16. Multi-binding site model-based curve-fitting program for the computation of RIA data

    International Nuclear Information System (INIS)

    Malan, P.G.; Ekins, R.P.; Cox, M.G.; Long, E.M.R.

    1977-01-01

In this paper, a comparison will be made of model-based and empirical curve-fitting procedures. The implementation of a multiple binding-site curve-fitting model, which successfully fits a wide range of assay data and which can be run on a mini-computer, is described. The latter sophisticated model also provides estimates of binding site concentrations and the values of the respective equilibrium constants present: the latter have been used for refining assay conditions using computer optimisation techniques. (orig./AJ) [de]

  17. Simultaneous fitting of a potential-energy surface and its corresponding force fields using feedforward neural networks

    Science.gov (United States)

    Pukrittayakamee, A.; Malshe, M.; Hagan, M.; Raff, L. M.; Narulkar, R.; Bukkapatnum, S.; Komanduri, R.

    2009-04-01

An improved neural network (NN) approach is presented for the simultaneous development of accurate potential-energy hypersurfaces and corresponding force fields that can be utilized to conduct ab initio molecular dynamics and Monte Carlo studies on gas-phase chemical reactions. The method is termed combined function derivative approximation (CFDA). The novelty of the CFDA method lies in the fact that although the NN has only a single output neuron that represents potential energy, the network is trained in such a way that the derivatives of the NN output match the gradient of the potential-energy hypersurface. Accurate force fields can therefore be computed simply by differentiating the network. Both the computed energies and the gradients are then accurately interpolated using the NN. This approach is superior to having the gradients appear in the output layer of the NN because it greatly simplifies the required architecture of the network. The CFDA permits weighting of function fitting relative to gradient fitting. In every test that we have run on six different systems, CFDA training (without a validation set) has produced smaller out-of-sample testing error than early stopping (with a validation set) or Bayesian regularization (without a validation set). This indicates that CFDA training does a better job of preventing overfitting than the standard methods currently in use. The training data can be obtained using an empirical potential surface or any ab initio method. The accuracy and interpolation power of the method have been tested for the reaction dynamics of H+HBr using an analytical potential. The results show that the present NN training technique produces more accurate fits to both the potential-energy surface and the corresponding force fields than the previous methods. The fitting and interpolation accuracy is so high (rms error = 1.2 cm⁻¹) that trajectories computed on the NN potential exhibit point-by-point agreement with corresponding
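The central idea, training a single-output network so that its analytic derivatives also match reference gradients, can be illustrated with any automatic-differentiation framework. The sketch below is a strongly simplified stand-in using PyTorch (our choice of library, not the authors'): a small network maps a coordinate to an energy, and the loss weights an energy-matching term against a gradient-matching term obtained by differentiating the network output with respect to its input.

```python
# Simplified combined function/derivative fit: the network predicts energy E(x),
# and the loss penalizes both energy error and the mismatch between dE/dx and
# reference gradients (forces). Toy 1-D potential used as placeholder data.
import torch

torch.manual_seed(0)
x = torch.linspace(-2, 2, 200).unsqueeze(1)
E_ref = (x**2 - 1) ** 2                      # toy double-well potential
G_ref = 4 * x * (x**2 - 1)                   # its analytic gradient

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
w_E, w_G = 1.0, 1.0                          # relative weighting of energy vs. gradient fit

for step in range(2000):
    xg = x.clone().requires_grad_(True)
    E_pred = net(xg)
    # Gradient of the network output with respect to its input coordinate
    G_pred = torch.autograd.grad(E_pred.sum(), xg, create_graph=True)[0]
    loss = w_E * torch.mean((E_pred - E_ref) ** 2) + w_G * torch.mean((G_pred - G_ref) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final combined loss: {loss.item():.2e}")
```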

  18. Using the Flipchem Photochemistry Model When Fitting Incoherent Scatter Radar Data

    Science.gov (United States)

    Reimer, A. S.; Varney, R. H.

    2017-12-01

The North face Resolute Bay Incoherent Scatter Radar (RISR-N) routinely images the dynamics of the polar ionosphere, providing measurements of the plasma density, electron temperature, ion temperature, and line-of-sight velocity with seconds to minutes time resolution. RISR-N does not directly measure ionospheric parameters but backscattered signals, recording them as voltage samples. Using signal processing techniques, radar autocorrelation functions (ACF) are estimated from the voltage samples. A model of the signal ACF is then fitted to the ACF using non-linear least-squares techniques to obtain the best-fit ionospheric parameters. The signal model, and therefore the fitted parameters, depend on the ionospheric ion composition that is used [e.g. Zettergren et al. (2010), Zou et al. (2017)]. The software used to process RISR-N ACF data includes the "flipchem" model, which is an ion photochemistry model developed by Richards [2011] that was adapted from the Field Line Interhemispheric Plasma (FLIP) model. Flipchem requires neutral densities, neutral temperatures, electron density, ion temperature, electron temperature, solar zenith angle, and F10.7 as inputs to compute ion densities, which are input to the signal model. A description of how the flipchem model is used in the RISR-N fitting software will be presented. Additionally, a statistical comparison of the fitted electron density, ion temperature, electron temperature, and velocity obtained using a flipchem ionosphere, a pure O+ ionosphere, and a Chapman O+ ionosphere will be presented. The comparison covers nearly two years of RISR-N data (April 2015 - December 2016). Richards, P. G. (2011), Reexamination of ionospheric photochemistry, J. Geophys. Res., 116, A08307, doi:10.1029/2011JA016613. Zettergren, M., Semeter, J., Burnett, B., Oliver, W., Heinselman, C., Blelly, P.-L., and Diaz, M.: Dynamic variability in F-region ionospheric composition at auroral arc boundaries, Ann. Geophys., 28, 651-664, https

  19. Fitting Latent Cluster Models for Networks with latentnet

    Directory of Open Access Journals (Sweden)

    Pavel N. Krivitsky

    2007-12-01

Full Text Available latentnet is a package to fit and evaluate statistical latent position and cluster models for networks. Hoff, Raftery, and Handcock (2002) suggested an approach to modeling networks based on positing the existence of a latent space of characteristics of the actors. Relationships form as a function of distances between these characteristics as well as functions of observed dyadic-level covariates. In latentnet social distances are represented in a Euclidean space. It also includes a variant of the extension of the latent position model to allow for clustering of the positions, developed in Handcock, Raftery, and Tantrum (2007). The package implements Bayesian inference for the models based on a Markov chain Monte Carlo algorithm. It can also compute maximum likelihood estimates for the latent position model and a two-stage maximum likelihood method for the latent position cluster model. For latent position cluster models, the package provides a Bayesian way of assessing how many groups there are, and thus whether or not there is any clustering (since if the preferred number of groups is 1, there is little evidence for clustering). It also estimates which cluster each actor belongs to. These estimates are probabilistic, and provide the probability of each actor belonging to each cluster. It computes four types of point estimates for the coefficients and positions: maximum likelihood estimate, posterior mean, posterior mode and the estimator which minimizes Kullback-Leibler divergence from the posterior. You can assess the goodness-of-fit of the model via posterior predictive checks. It has a function to simulate networks from a latent position or latent position cluster model.

  20. Twitter classification model: the ABC of two million fitness tweets.

    Science.gov (United States)

    Vickey, Theodore A; Ginis, Kathleen Martin; Dabrowski, Maciej

    2013-09-01

The purpose of this project was to design and test data collection and management tools that can be used to study the use of mobile fitness applications and social networking within the context of physical activity. This project was conducted over a 6-month period and involved collecting publicly shared Twitter data from five mobile fitness apps (Nike+, RunKeeper, MyFitnessPal, Endomondo, and dailymile). During that time, over 2.8 million tweets were collected, processed, and categorized using an online tweet collection application and a customized JavaScript. Using grounded theory, a classification model was developed to categorize and understand the types of information being shared by application users. Our data show that by tracking mobile fitness app hashtags, a wealth of information can be gathered, including but not limited to daily use patterns, exercise frequency, location-based workouts, and overall workout sentiment.

  1. Interstation phase speed and amplitude measurements of surface waves with nonlinear waveform fitting: application to USArray

    Science.gov (United States)

    Hamada, K.; Yoshizawa, K.

    2015-09-01

A new method of fully nonlinear waveform fitting to measure interstation phase speeds and amplitude ratios is developed and applied to USArray. The Neighbourhood Algorithm is used as a global optimizer, which efficiently searches for model parameters that fit two observed waveforms on a common great-circle path by modulating the phase and amplitude terms of the fundamental-mode surface waves. We introduce a reliability parameter that represents how well the waveforms at two stations can be fitted in a time-frequency domain, which is used as a data selection criterion. The method is applied to observed waveforms of USArray for seismic events in the period from 2007 to 2010 with moment magnitude greater than 6.0. We collect a large number of phase speed data (about 75 000 for Rayleigh and 20 000 for Love) and amplitude ratio data (about 15 000 for Rayleigh waves) in a period range from 30 to 130 s. The majority of the interstation distances of the measured dispersion data are less than 1000 km, which is much shorter than the typical average path length of the conventional single-station measurements for source-receiver pairs. The phase speed models for Rayleigh and Love waves show good correlations on large scales with the recent tomographic maps derived from different approaches for phase speed mapping; for example, significant slow anomalies in volcanic regions in the western United States and fast anomalies in the cratonic region. Local-scale phase speed anomalies corresponding to the major tectonic features in the western United States, such as the Snake River Plain, Basin and Range, Colorado Plateau and Rio Grande Rift, have also been identified clearly in the phase speed models. The short-path information derived from our interstation measurements helps to increase the achievable horizontal resolution. We have also performed joint inversions for phase speed maps using the measured phase and amplitude ratio data of vertical-component Rayleigh waves. These maps exhibit

  2. General Fit-Basis Functions and Specialized Coordinates in an Adaptive Density-Guided Approach to Potential Energy Surfaces

    DEFF Research Database (Denmark)

    Klinting, Emil Lund; Thomsen, Bo; Godtliebsen, Ian Heide

The overall shape of a molecular energy surface can be very different for different molecules and different vibrational coordinates. This means that the fit-basis functions used to generate an analytic representation of a potential will be met with different requirements. It is therefore worthwhile … single point calculations when constructing the molecular potential. We therefore present a uniform framework that can handle general fit-basis functions of any type, which are specified on input. This framework is implemented to suit the black-box nature of the ADGA in order to avoid arbitrary choices … This results in a decreased number of single point calculations required during the potential construction. Especially the Morse-like fit-basis functions are of interest when combined with rectilinear hybrid optimized and localized coordinates (HOLCs), which can be generated as orthogonal transformations …

  3. Laser surface texturing for high control of interference fit joint load bearing

    Science.gov (United States)

    Obeidi, M. Ahmed; McCarthy, E.; Brabazon, D.

    2017-10-01

Laser beams attract the attention of researchers, engineers and manufacturers because they can deliver high energy with precisely controlled processing parameters and a limited heat affected zone (HAZ) on almost all kinds of materials [1-3]. Laser beams can be generated over a broad range of wavelengths, energies and beam modes, in addition to their unique property of propagating in straight lines with little or negligible divergence [3]. These features have made lasers preferable to conventional machining and heat treatment methods for metal treatment and surface modification. Laser material forming and processing is flexible and competitive, and it enables new solutions and techniques [3-5]. This study is focused on the laser surface texturing of 316L stainless steel pins for interference-fit applications, which are widely used in the automotive and aerospace industries. The main laser processing parameters applied are the power, the frequency and the overlap of the laser beam scans. The produced samples were characterized by measuring the increase in the insertion diameter, the insertion and removal forces, the surface morphology and cross-section alteration, and the chemical composition and residual stresses of the modified layer.

  4. Brief communication: human cranial variation fits iterative founder effect model with African origin.

    Science.gov (United States)

    von Cramon-Taubadel, Noreen; Lycett, Stephen J

    2008-05-01

    Recent studies comparing craniometric and neutral genetic affinity matrices have concluded that, on average, human cranial variation fits a model of neutral expectation. While human craniometric and genetic data fit a model of isolation by geographic distance, it is not yet clear whether this is due to geographically mediated gene flow or human dispersal events. Recently, human genetic data have been shown to fit an iterative founder effect model of dispersal with an African origin, in line with the out-of-Africa replacement model for modern human origins, and Manica et al. (Nature 448 (2007) 346-349) have demonstrated that human craniometric data also fit this model. However, in contrast with the neutral model of cranial evolution suggested by previous studies, Manica et al. (2007) made the a priori assumption that cranial form has been subject to climatically driven natural selection and therefore correct for climate prior to conducting their analyses. Here we employ a modified theoretical and methodological approach to test whether human cranial variability fits the iterative founder effect model. In contrast with Manica et al. (2007) we employ size-adjusted craniometric variables, since climatic factors such as temperature have been shown to correlate with aspects of cranial size. Despite these differences, we obtain similar results to those of Manica et al. (2007), with up to 26% of global within-population craniometric variation being explained by geographic distance from sub-Saharan Africa. Comparative analyses using non-African origins do not yield significant results. The implications of these results are discussed in the light of the modern human origins debate. (c) 2007 Wiley-Liss, Inc.

  5. Surface Complexation Modeling in Variable Charge Soils: Prediction of Cadmium Adsorption

    Directory of Open Access Journals (Sweden)

    Giuliano Marchi

    2015-10-01

    Full Text Available ABSTRACT Intrinsic equilibrium constants for 22 representative Brazilian Oxisols were estimated from a cadmium adsorption experiment. Equilibrium constants were fitted to two surface complexation models: diffuse layer and constant capacitance. Intrinsic equilibrium constants were optimized by FITEQL and by hand calculation using Visual MINTEQ in sweep mode, and Excel spreadsheets. Data from both models were incorporated into Visual MINTEQ. Constants estimated by FITEQL and incorporated in Visual MINTEQ software failed to predict observed data accurately. However, FITEQL raw output data rendered good results when predicted values were directly compared with observed values, instead of incorporating the estimated constants into Visual MINTEQ. Intrinsic equilibrium constants optimized by hand calculation and incorporated in Visual MINTEQ reliably predicted Cd adsorption reactions on soil surfaces under changing environmental conditions.

  6. Kernel-density estimation and approximate Bayesian computation for flexible epidemiological model fitting in Python.

    Science.gov (United States)

    Irvine, Michael A; Hollingsworth, T Déirdre

    2018-05-26

Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open-access library, with examples to aid researchers in rapidly fitting models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
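As a hedged illustration of the general idea, not the authors' library or their kernel-density distance, the sketch below runs plain rejection ABC with a shrinking tolerance schedule on a toy epidemic-style model: parameters are drawn from a prior (then perturbed from previous accepts), data are simulated, and draws are kept when the summary distance to the observed data falls below the current tolerance.

```python
# Minimal rejection ABC with a decreasing tolerance schedule (illustration only;
# a full ABC-SMC would also carry importance weights, omitted here for brevity).
# Toy model: number of cases ~ Poisson(R0 * 20); we infer R0 from one observation.
import numpy as np

rng = np.random.default_rng(6)
observed = 55                                     # observed case count (placeholder)

def simulate(r0):
    return rng.poisson(r0 * 20)

def distance(sim, obs):
    return abs(sim - obs)

tolerances = [30, 15, 8, 4]                       # tolerance schedule (fixed here)
posterior = None
for eps in tolerances:
    if posterior is None:
        draws = rng.uniform(0.5, 6.0, 20000)      # prior draws
    else:
        draws = rng.choice(posterior, 20000) + rng.normal(0, 0.1, 20000)  # perturb accepts
    accepted = [r0 for r0 in draws if distance(simulate(r0), observed) <= eps]
    posterior = np.array(accepted)
    print(f"eps = {eps:4.1f}: accepted {posterior.size}, posterior mean R0 ≈ {posterior.mean():.2f}")
```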

  7. Acid base properties of a goethite surface model: A theoretical view

    Science.gov (United States)

    Aquino, Adelia J. A.; Tunega, Daniel; Haberhauer, Georg; Gerzabek, Martin H.; Lischka, Hans

    2008-08-01

    Density functional theory is used to compute the effect of protonation, deprotonation, and dehydroxylation of different reactive sites of a goethite surface modeled as a cluster containing six iron atoms constructed from a slab model of the (1 1 0) goethite surface. Solvent effects were treated at two different levels: (i) by inclusion of up to six water molecules explicitly into the quantum chemical calculation and (ii) by using additionally a continuum solvation model for the long-range interactions. Systematic studies were made in order to test the limit of the fully hydrated cluster surfaces by a monomolecular water layer. The main finding is that from the three different types of surface hydroxyl groups (hydroxo, μ-hydroxo, and μ3-hydroxo), the hydroxo group is most active for protonation whereas μ- and μ3-hydroxo sites undergo deprotonation more easily. Proton affinity constants (pKa values) were computed from appropriate protonation/deprotonation reactions for all sites investigated and compared to results obtained from the multisite complexation model (MUSIC). The approach used was validated for the consecutive deprotonation reactions of the [Fe(H2O)6]3+ complex in solution and good agreement between calculated and experimental pKa values was found. The computed pKa for all sites of the modeled goethite surface were used in the prediction of the pristine point of zero charge, pHPPZN. The obtained value of 9.1 fits well with published experimental values of 7.0-9.5.

  8. Model-independent partial wave analysis using a massively-parallel fitting framework

    Science.gov (United States)

    Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.

    2017-10-01

    The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D+ → h+h+h-. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m²(h+h-) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.
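
    The interpolation step described above, anchoring complex S-wave amplitudes at a small number of m²(h+h-) control points and joining them with a cubic spline, can be illustrated with scipy. The control-point values below are invented; GooFit performs the equivalent operation inside the GPU likelihood.

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Hypothetical control points in m^2(h+ h-) [GeV^2] with complex S-wave amplitudes.
        m2_knots = np.array([0.3, 0.6, 1.0, 1.5, 2.1, 2.8])
        magnitude = np.array([1.2, 0.9, 1.5, 0.7, 0.4, 0.2])
        phase = np.array([0.1, 0.8, 2.0, 2.6, 3.0, 3.2])   # radians

        # Interpolate real and imaginary parts separately to avoid phase-wrapping issues.
        amp_knots = magnitude * np.exp(1j * phase)
        re_spline = CubicSpline(m2_knots, amp_knots.real)
        im_spline = CubicSpline(m2_knots, amp_knots.imag)

        def s_wave(m2):
            # Model-independent S-wave amplitude at invariant mass squared m2.
            return re_spline(m2) + 1j * im_spline(m2)

        print(s_wave(np.array([0.45, 1.25, 2.5])))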

  9. Load-dependent surface diffusion model for analyzing the kinetics of protein adsorption onto mesoporous materials.

    Science.gov (United States)

    Marbán, Gregorio; Ramírez-Montoya, Luis A; García, Héctor; Menéndez, J Ángel; Arenillas, Ana; Montes-Morán, Miguel A

    2018-02-01

    The adsorption of cytochrome c in water onto organic and carbon xerogels with narrow pore size distributions has been studied by carrying out transient and equilibrium batch adsorption experiments. It was found that equilibrium adsorption exhibits a quasi-Langmuirian behavior (a g coefficient in the Redlich-Peterson isotherms of over 0.95) involving the formation of a monolayer of cyt c with a depth of ∼4 nm on the surface of all xerogels for a packing density of the protein inside the pores of 0.29 g cm-3. A load-dependent surface diffusion model (LDSDM) has been developed and numerically solved to fit the experimental kinetic adsorption curves. The results of the LDSDM show better fits than the standard homogeneous surface diffusion model. The value of the external mass transfer coefficient obtained by numerical optimization confirms that the process is controlled by the intraparticle surface diffusion of cyt c. The surface diffusion coefficients decrease with increasing protein load down to zero for the maximum possible load. The decrease is steeper in the case of the xerogels with the smallest average pore diameter (∼15 nm), the limit at which the zero-load diffusion coefficient of cyt c also begins to be negatively affected by interactions with the opposite wall of the pore. Copyright © 2017 Elsevier Inc. All rights reserved.
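
    The quasi-Langmuirian diagnosis above rests on the g exponent of a Redlich-Peterson isotherm, q = K*c / (1 + a*c**g), fitted to the equilibrium data. A minimal curve-fitting sketch with invented data (not the authors' measurements) is:

        import numpy as np
        from scipy.optimize import curve_fit

        def redlich_peterson(c, K, a, g):
            # Redlich-Peterson isotherm: q = K*c / (1 + a*c**g).
            return K * c / (1.0 + a * np.power(c, g))

        # Illustrative equilibrium data: liquid-phase concentration vs protein uptake.
        c_eq = np.array([0.02, 0.05, 0.1, 0.2, 0.4, 0.8, 1.5])      # mg/mL
        q_eq = np.array([55., 120., 190., 260., 310., 340., 355.])  # mg/g

        popt, _ = curve_fit(redlich_peterson, c_eq, q_eq,
                            p0=[5000.0, 15.0, 0.9],
                            bounds=([0, 0, 0], [np.inf, np.inf, 1.0]))
        K, a, g = popt
        print(f"K={K:.1f}, a={a:.2f}, g={g:.3f}")  # g close to 1 indicates quasi-Langmuir behaviour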

  10. Surface characteristics modeling and performance evaluation of urban building materials using LiDAR data.

    Science.gov (United States)

    Li, Xiaolu; Liang, Yu

    2015-05-20

    Analysis of light detection and ranging (LiDAR) intensity data to extract surface features is of great interest in remote sensing research. One potential application of LiDAR intensity data is target classification. A new bidirectional reflectance distribution function (BRDF) model is derived for target characterization of rough and smooth surfaces. Based on the geometry of our coaxial full-waveform LiDAR system, the integration method is improved through coordinate transformation to establish the relationship between the BRDF model and intensity data of LiDAR. A series of experiments using typical urban building materials are implemented to validate the proposed BRDF model and integration method. The fitting results show that three parameters extracted from the proposed BRDF model can distinguish the urban building materials from perspectives of roughness, specular reflectance, and diffuse reflectance. A comprehensive analysis of these parameters will help characterize surface features in a physically rigorous manner.

  11. The issue of statistical power for overall model fit in evaluating structural equation models

    Directory of Open Access Journals (Sweden)

    Richard HERMIDA

    2015-06-01

    Full Text Available Statistical power is an important concept for psychological research. However, examining the power of a structural equation model (SEM) is rare in practice. This article provides an accessible review of the concept of statistical power for the Root Mean Square Error of Approximation (RMSEA) index of overall model fit in structural equation modeling. By way of example, we examine the current state of power in the literature by reviewing studies in top Industrial-Organizational (I/O) Psychology journals using SEMs. Results indicate that in many studies, power is very low, which implies acceptance of invalid models. Additionally, we examined methodological situations which may have an influence on statistical power of SEMs. Results showed that power varies significantly as a function of model type and whether or not the model is the main model for the study. Finally, results indicated that power is significantly related to model fit statistics used in evaluating SEMs. The results from this quantitative review imply that researchers should be more vigilant with respect to power in structural equation modeling. We therefore conclude by offering methodological best practices to increase confidence in the interpretation of structural equation modeling results with respect to statistical power issues.
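
    The power calculation discussed above is commonly carried out with non-central chi-square distributions, following MacCallum, Browne, and Sugawara (1996), using the non-centrality lambda = (N - 1) * df * RMSEA^2. A short sketch, with an illustrative sample size and degrees of freedom, is:

        from scipy.stats import ncx2

        def rmsea_power(df, N, eps0=0.05, eps_a=0.08, alpha=0.05):
            # Power of the RMSEA test of close fit (H0: rmsea = eps0 vs Ha: rmsea = eps_a).
            lam0 = (N - 1) * df * eps0 ** 2       # non-centrality under H0
            lam_a = (N - 1) * df * eps_a ** 2     # non-centrality under Ha
            crit = ncx2.ppf(1 - alpha, df, lam0)  # rejection threshold under H0
            return 1 - ncx2.cdf(crit, df, lam_a)

        # Example: a model with 40 degrees of freedom fitted to N = 200 cases.
        print(round(rmsea_power(df=40, N=200), 3))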

  12. Fitting and comparing competing models of the species abundance distribution: assessment and prospect

    Directory of Open Access Journals (Sweden)

    Thomas J Matthews

    2014-06-01

    Full Text Available A species abundance distribution (SAD) characterises patterns in the commonness and rarity of all species within an ecological community. As such, the SAD provides the theoretical foundation for a number of other biogeographical and macroecological patterns, such as the species–area relationship, as well as being an interesting pattern in its own right. While there has been a resurgence in the study of SADs in the last decade, less focus has been placed on methodology in SAD research, and few attempts have been made to synthesise the vast array of methods which have been employed in SAD model evaluation. As such, our review has two aims. First, we provide a general overview of SADs, including descriptions of the commonly used distributions, plotting methods and issues with evaluating SAD models. Second, we review a number of recent advances in SAD model fitting and comparison. We conclude by providing a list of recommendations for fitting and evaluating SAD models. We argue that it is time for SAD studies to move away from many of the traditional methods available for fitting and evaluating models, such as sole reliance on the visual examination of plots, and embrace statistically rigorous techniques. In particular, we recommend the use of both goodness-of-fit tests and model-comparison analyses because each provides unique information which one can use to draw inferences.
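
    As a concrete illustration of the recommended workflow, maximum-likelihood fitting of competing SAD models followed by an information-theoretic comparison, the sketch below fits a log-series and a geometric distribution to a vector of species abundances and compares them with AIC. The data and the pair of candidate models are illustrative only.

        import numpy as np
        from scipy import stats
        from scipy.optimize import minimize_scalar

        abundances = np.array([1, 1, 1, 2, 2, 3, 4, 5, 7, 9, 12, 18, 25, 40, 66, 105])

        def fit_discrete(logpmf, bounds):
            # Maximum-likelihood fit of a one-parameter discrete SAD model.
            nll = lambda p: -np.sum(logpmf(abundances, p))
            res = minimize_scalar(nll, bounds=bounds, method="bounded")
            return res.x, res.fun

        # Log-series and geometric models, both with support 1, 2, 3, ...
        ls_p, ls_nll = fit_discrete(stats.logser.logpmf, (1e-6, 1 - 1e-6))
        geo_p, geo_nll = fit_discrete(stats.geom.logpmf, (1e-6, 1 - 1e-6))

        aic = lambda nll, k=1: 2 * k + 2 * nll
        print("log-series: p=%.3f  AIC=%.1f" % (ls_p, aic(ls_nll)))
        print("geometric : p=%.3f  AIC=%.1f" % (geo_p, aic(geo_nll)))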

  13. Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.

    Science.gov (United States)

    Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei

    2015-02-01

    This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.

  14. A goodness-of-fit test for occupancy models with correlated within-season revisits

    Science.gov (United States)

    Wright, Wilson; Irvine, Kathryn M.; Rodhouse, Thomas J.

    2016-01-01

    Occupancy modeling is important for exploring species distribution patterns and for conservation monitoring. Within this framework, explicit attention is given to species detection probabilities estimated from replicate surveys to sample units. A central assumption is that replicate surveys are independent Bernoulli trials, but this assumption becomes untenable when ecologists serially deploy remote cameras and acoustic recording devices over days and weeks to survey rare and elusive animals. Proposed solutions involve modifying the detection-level component of the model (e.g., first-order Markov covariate). Evaluating whether a model sufficiently accounts for correlation is imperative, but clear guidance for practitioners is lacking. Currently, an omnibus goodness-of-fit test using a chi-square discrepancy measure on unique detection histories is available for occupancy models (MacKenzie and Bailey, Journal of Agricultural, Biological, and Environmental Statistics, 9, 2004, 300; hereafter, MacKenzie–Bailey test). We propose a join count summary measure adapted from spatial statistics to directly assess correlation after fitting a model. We motivate our work with a dataset of multinight bat call recordings from a pilot study for the North American Bat Monitoring Program. We found in simulations that our join count test was more reliable than the MacKenzie–Bailey test for detecting inadequacy of a model that assumed independence, particularly when serial correlation was low to moderate. A model that included a Markov-structured detection-level covariate produced unbiased occupancy estimates except in the presence of strong serial correlation and a revisit design consisting only of temporal replicates. When applied to two common bat species, our approach illustrates that sophisticated models do not guarantee adequate fit to real data, underscoring the importance of model assessment. Our join count test provides a widely applicable goodness-of-fit test and

  15. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    OpenAIRE

    Matthew P. Adams; Catherine J. Collier; Sven Uthicke; Yan X. Ow; Lucas Langlois; Katherine R. O’Brien

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluat...

  16. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development.

    Science.gov (United States)

    Tøndel, Kristin; Niederer, Steven A; Land, Sander; Smith, Nicolas P

    2014-05-20

    Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input-output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of on

  17. Modeling Bacteria Surface Acid-Base Properties: The Overprint Of Biology

    Science.gov (United States)

    Amores, D. R.; Smith, S.; Warren, L. A.

    2009-05-01

    Bacteria are ubiquitous in the environment and are important repositories for metals as well as nucleation templates for a myriad of secondary minerals due to an abundance of reactive surface binding sites. Model elucidation of whole cell surface reactivity simplifies bacteria as viable but static, i.e., no metabolic activity, to enable fits of microbial data sets from models derived from mineral surfaces. Here we investigate the surface proton charging behavior of live and dead whole cell cyanobacteria (Synechococcus sp.) harvested from a single parent culture by acid-base titration using a Fully Optimized ContinUouS (FOCUS) pKa spectrum method. Viability of live cells was verified by successful recultivation post experimentation, whereas dead cells were consistently non-recultivable. Surface site identities derived from binding constants determined for both the live and dead cells are consistent with molecular analogs for organic functional groups known to occur on microbial surfaces: carboxylic (pKa = 2.87-3.11), phosphoryl (pKa = 6.01-6.92) and amine/hydroxyl groups (pKa = 9.56-9.99). However, variability in total ligand concentration among the live cells is greater than that between the live and dead. The total ligand concentrations (LT, mol mg-1 dry solid) derived from the live cell titrations (n=12) clustered into two sub-populations: high (LT = 24.4) and low (LT = 5.8), compared to the single concentration for the dead cell titrations (LT = 18.8; n=5). We infer from these results that metabolic activity can substantively impact surface reactivity of morphologically identical cells. These results and their modeling implications for bacteria surface reactivities will be discussed.

  18. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    Science.gov (United States)

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  19. Effects of core strength training using stable versus unstable surfaces on physical fitness in adolescents: a randomized controlled trial.

    Science.gov (United States)

    Granacher, Urs; Schellbach, Jörg; Klein, Katja; Prieske, Olaf; Baeyens, Jean-Pierre; Muehlbauer, Thomas

    2014-01-01

    It has been demonstrated that core strength training is an effective means to enhance trunk muscle strength (TMS) and proxies of physical fitness in youth. Of note, cross-sectional studies revealed that the inclusion of unstable elements in core strengthening exercises produced increases in trunk muscle activity and thus provide potential extra training stimuli for performance enhancement. Thus, utilizing unstable surfaces during core strength training may even produce larger performance gains. However, the effects of core strength training using unstable surfaces are unresolved in youth. This randomized controlled study specifically investigated the effects of core strength training performed on stable surfaces (CSTS) compared to unstable surfaces (CSTU) on physical fitness in school-aged children. Twenty-seven (14 girls, 13 boys) healthy subjects (mean age: 14 ± 1 years, age range: 13-15 years) were randomly assigned to a CSTS (n = 13) or a CSTU (n = 14) group. Both training programs lasted 6 weeks (2 sessions/week) and included frontal, dorsal, and lateral core exercises. During CSTU, these exercises were conducted on unstable surfaces (e.g., TOGU© DYNAIR CUSSIONS, THERA-BAND© STABILITY TRAINER). Significant main effects of Time (pre vs. post) were observed for the TMS tests (8-22%, f = 0.47-0.76), the jumping sideways test (4-5%, f = 1.07), and the Y balance test (2-3%, f = 0.46-0.49). Trends towards significance were found for the standing long jump test (1-3%, f = 0.39) and the stand-and-reach test (0-2%, f = 0.39). We could not detect any significant main effects of Group. Significant Time x Group interactions were detected for the stand-and-reach test in favour of the CSTU group (2%, f = 0.54). Core strength training resulted in significant increases in proxies of physical fitness in adolescents. However, CSTU as compared to CSTS had only limited additional effects (i.e., stand-and-reach test). Consequently, if the

  20. inner-sphere complexation of cations at the rutile-water interface: A concise surface structural interpretation with the CD and MUSIC model

    Energy Technology Data Exchange (ETDEWEB)

    Ridley, Mora K. [Texas Tech University, Lubbock]; Hiemstra, T. [Oak Ridge National Laboratory (ORNL)]; Van Riemsdijk, Willem H. [Wageningen University and Research Centre, The Netherlands]; Machesky, Michael L. [Illinois State Water Survey, Champaign, IL]

    2009-01-01

    Acid-base reactivity and ion-interaction between mineral surfaces and aqueous solutions is most frequently investigated at the macroscopic scale as a function of pH. Experimental data are then rationalized by a variety of surface complexation models. These models are thermodynamically based which in principle does not require a molecular picture. The models are typically calibrated to relatively simple solid-electrolyte solution pairs and may provide poor descriptions of complex multicomponent mineral aqueous solutions, including those found in natural environments. Surface complexation models may be improved by incorporating molecular-scale surface structural information to constrain the modeling efforts. Here, we apply a concise, molecularly-constrained surface complexation model to a diverse suite of surface titration data for rutile and thereby begin to address the complexity of multi-component systems. Primary surface charging curves in NaCl, KCl, and RbCl electrolyte media were fit simultaneously using a charge distribution (CD) and multisite complexation (MUSIC) model [Hiemstra T. and Van Riemsdijk W. H. (1996) A surface structural approach to ion adsorption: the charge distribution (CD) model. J. Colloid Interf. Sci. 179, 488-508], coupled with a Basic Stern layer description of the electric double layer. In addition, data for the specific interaction of Ca2+ and Sr2+ with rutile, in NaCl and RbCl media, were modeled. In recent developments, spectroscopy, quantum calculations, and molecular simulations have shown that electrolyte and divalent cations are principally adsorbed in various inner-sphere configurations on the rutile 110 surface [Zhang Z., Fenter P., Cheng L., Sturchio N. C., Bedzyk M. J., Předota M., Bandura A., Kubicki J., Lvov S. N., Cummings P. T., Chialvo A. A., Ridley M. K., Bénézeth P., Anovitz L., Palmer D. A., Machesky M. L. and Wesolowski D. J. (2004) Ion adsorption at the rutile-water interface: linking molecular and macroscopic

  1. Inner-sphere complexation of cations at the rutile-water interface: A concise surface structural interpretation with the CD and MUSIC model

    Science.gov (United States)

    Ridley, Moira K.; Hiemstra, Tjisse; van Riemsdijk, Willem H.; Machesky, Michael L.

    2009-04-01

    Acid-base reactivity and ion-interaction between mineral surfaces and aqueous solutions is most frequently investigated at the macroscopic scale as a function of pH. Experimental data are then rationalized by a variety of surface complexation models. These models are thermodynamically based which in principle does not require a molecular picture. The models are typically calibrated to relatively simple solid-electrolyte solution pairs and may provide poor descriptions of complex multi-component mineral-aqueous solutions, including those found in natural environments. Surface complexation models may be improved by incorporating molecular-scale surface structural information to constrain the modeling efforts. Here, we apply a concise, molecularly-constrained surface complexation model to a diverse suite of surface titration data for rutile and thereby begin to address the complexity of multi-component systems. Primary surface charging curves in NaCl, KCl, and RbCl electrolyte media were fit simultaneously using a charge distribution (CD) and multisite complexation (MUSIC) model [Hiemstra T. and Van Riemsdijk W. H. (1996) A surface structural approach to ion adsorption: the charge distribution (CD) model. J. Colloid Interf. Sci. 179, 488-508], coupled with a Basic Stern layer description of the electric double layer. In addition, data for the specific interaction of Ca 2+ and Sr 2+ with rutile, in NaCl and RbCl media, were modeled. In recent developments, spectroscopy, quantum calculations, and molecular simulations have shown that electrolyte and divalent cations are principally adsorbed in various inner-sphere configurations on the rutile 1 1 0 surface [Zhang Z., Fenter P., Cheng L., Sturchio N. C., Bedzyk M. J., Předota M., Bandura A., Kubicki J., Lvov S. N., Cummings P. T., Chialvo A. A., Ridley M. K., Bénézeth P., Anovitz L., Palmer D. A., Machesky M. L. and Wesolowski D. J. (2004) Ion adsorption at the rutile-water interface: linking molecular and macroscopic

  2. A radiosity-based model to compute the radiation transfer of soil surface

    Science.gov (United States)

    Zhao, Feng; Li, Yuguang

    2011-11-01

    A good understanding of the interactions of electromagnetic radiation with the soil surface is important for a further improvement of remote sensing methods. In this paper, a radiosity-based analytical model for soil Directional Reflectance Factor (DRF) distributions was developed and evaluated. The model was specifically dedicated to the study of radiation transfer for the soil surface under tillage practices. The soil was abstracted as two-dimensional U-shaped or V-shaped geometric structures with periodic macroscopic variations. The roughness of the simulated surfaces was expressed as a ratio of the height to the width for the U- and V-shaped structures. The assumption was made that the shadowing of the soil surface, simulated by U- or V-shaped grooves, has a greater influence on the soil reflectance distribution than the scattering properties of basic soil particles of silt and clay. Another assumption was that the soil is a perfectly diffuse reflector at a microscopic level, which is a prerequisite for the application of the radiosity method. This radiosity-based analytical model was evaluated by a forward Monte Carlo ray-tracing model under the same structural scenes and identical spectral parameters. The statistics of the BRF fitting results of these two models for several soil structures under the same conditions showed good agreement. By using the model, the physical mechanism of the soil bidirectional reflectance pattern was revealed.

  3. ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction

    International Nuclear Information System (INIS)

    Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.

    2015-01-01

    An appropriate maintenance policy is essential to reduce expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is to be able to predict the reliability of the systems under study, based on a well-fitted model. In this paper, the Arithmetic Reduction of Age and Arithmetic Reduction of Intensity classes of models are explored. Likelihood functions for such models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed considering models with different memories. Parameters, namely the shape and scale of the Power Law Process and the repair efficiency, were estimated for the best-fitted model. Estimation of model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results are valuable information for the mining company and can be used to support decision making regarding preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks
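
    For readers who want to experiment with these model classes, the sketch below writes down a negative log-likelihood for an ARA1 model with a Power Law Process baseline (intensity lambda0(t) = (beta/eta)*(t/eta)**(beta-1) and virtual age t - rho*T_N(t-)) and maximizes it numerically. It follows the usual Doyen-Gaudoin formulation as a generic illustration, not the exact estimation scheme of the paper, and the failure times are invented.

        import numpy as np
        from scipy.optimize import minimize

        # Illustrative cumulative failure times (hours) of one truck, observed up to T_end.
        times = np.array([210., 480., 690., 1100., 1380., 1700., 1810., 2050.])
        T_end = 2200.0

        def neg_log_lik(params):
            # ARA1 imperfect repair, PLP baseline lambda0(t) = (beta/eta)*(t/eta)**(beta-1).
            beta, eta, rho = params
            if beta <= 0 or eta <= 0 or not (0.0 <= rho <= 1.0):
                return np.inf
            Lam0 = lambda t: (np.maximum(t, 0.0) / eta) ** beta   # cumulative baseline intensity
            prev = np.concatenate(([0.0], times[:-1]))            # previous failure time
            age = times - rho * prev                              # virtual age just before each failure
            log_intensity = np.sum(np.log(beta / eta) + (beta - 1.0) * np.log(age / eta))
            # Integrated intensity over each inter-failure gap plus the censored tail.
            integral = np.sum(Lam0(times - rho * prev) - Lam0(prev - rho * prev))
            integral += Lam0(T_end - rho * times[-1]) - Lam0(times[-1] - rho * times[-1])
            return -(log_intensity - integral)

        res = minimize(neg_log_lik, x0=[1.5, 500.0, 0.5], method="Nelder-Mead")
        print("beta, eta, rho =", res.x)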

  4. Person-fit to the Five Factor Model of personality

    Czech Academy of Sciences Publication Activity Database

    Allik, J.; Realo, A.; Mõttus, R.; Borkenau, P.; Kuppens, P.; Hřebíčková, Martina

    2012-01-01

    Vol. 71, No. 1 (2012), pp. 35-45 ISSN 1421-0185 R&D Projects: GA ČR GAP407/10/2394 Institutional research plan: CEZ:AV0Z70250504 Keywords: Five Factor Model * cross-cultural comparison * person-fit Subject RIV: AN - Psychology Impact factor: 0.638, year: 2012

  5. Study on fitness functions of genetic algorithm for dynamically correcting nuclide atmospheric diffusion model

    International Nuclear Information System (INIS)

    Ji Zhilong; Ma Yuanwei; Wang Dezhong

    2014-01-01

    Background: In atmospheric diffusion models for radioactive nuclides, the empirical dispersion coefficients were deduced under particular experimental conditions, and their difference from nuclear accident conditions is a source of deviation. A better estimate of a radioactive nuclide's actual dispersion process can be obtained by correcting the dispersion coefficients with observation data, and the Genetic Algorithm (GA) is an appropriate method for this correction procedure. Purpose: This study analyzes the influence of the fitness function on the correction procedure and on the forecast ability of the diffusion model. Methods: GA, coupled with a Lagrangian dispersion model, was used in a numerical simulation to compare the impact of four fitness functions on the correction result. Results: In the numerical simulation, the fitness function that takes the observation deviation into consideration stands out when significant deviation exists in the observed data. After performing the correction procedure on the Kincaid experiment data, a significant boost was observed in the diffusion model's forecast ability. Conclusion: As the results show, in order to improve the forecast ability of dispersion models using GA, observation data should be given different weights in the fitness function corresponding to their errors. (authors)
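
    The central point, that the fitness function should down-weight observations in proportion to their uncertainty, can be sketched as below. The chi-square-style weighting is one plausible form rather than the exact functions compared by the authors, and the GA machinery itself (selection, crossover, mutation) is omitted.

        import numpy as np

        def fitness(correction, obs_conc, obs_sigma, model_conc):
            # Fitness of a candidate dispersion-coefficient correction factor.
            # Residuals are normalised by the observation error so that noisy
            # detectors contribute less; larger fitness means a better candidate.
            predicted = correction * model_conc   # crude stand-in for re-running the dispersion model
            chi2 = np.sum(((obs_conc - predicted) / obs_sigma) ** 2)
            return 1.0 / (1.0 + chi2)

        # Illustrative ground-level concentrations at a few detectors.
        obs = np.array([12.0, 8.5, 3.1, 1.2])
        sigma = np.array([2.0, 1.5, 1.0, 0.8])    # measurement uncertainty per detector
        model = np.array([9.0, 7.0, 2.5, 1.5])    # uncorrected model prediction

        for c in (0.8, 1.0, 1.2, 1.4):
            print(c, round(fitness(c, obs, sigma, model), 4))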

  6. Model Fitting for Predicted Precipitation in Darwin: Some Issues with Model Choice

    Science.gov (United States)

    Farmer, Jim

    2010-01-01

    In Volume 23(2) of the "Australian Senior Mathematics Journal," Boncek and Harden present an exercise in fitting a Markov chain model to rainfall data for Darwin Airport (Boncek & Harden, 2009). Days are subdivided into those with precipitation and precipitation-free days. The author abbreviates these labels to wet days and dry days.…
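
    Fitting the two-state (wet/dry) Markov chain described in the exercise amounts to counting day-to-day transitions and converting the counts into conditional probabilities. A minimal sketch with a made-up sequence (not the Darwin Airport data):

        import numpy as np

        # 0 = dry day, 1 = wet day (illustrative sequence).
        days = np.array([0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0])

        counts = np.zeros((2, 2))
        for today, tomorrow in zip(days[:-1], days[1:]):
            counts[today, tomorrow] += 1

        transition = counts / counts.sum(axis=1, keepdims=True)
        print("P(wet | dry) =", round(transition[0, 1], 3))
        print("P(wet | wet) =", round(transition[1, 1], 3))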

  7. Predictive model for convective flows induced by surface reactivity contrast

    Science.gov (United States)

    Davidson, Scott M.; Lammertink, Rob G. H.; Mani, Ali

    2018-05-01

    Concentration gradients in a fluid adjacent to a reactive surface due to contrast in surface reactivity generate convective flows. These flows result from contributions by electro- and diffusio-osmotic phenomena. In this study, we have analyzed reactive patterns that release and consume protons, analogous to bimetallic catalytic conversion of peroxide. Similar systems have typically been studied using either scaling analysis to predict trends or costly numerical simulation. Here, we present a simple analytical model, bridging the gap in quantitative understanding between scaling relations and simulations, to predict the induced potentials and consequent velocities in such systems without the use of any fitting parameters. Our model is tested against direct numerical solutions to the coupled Poisson, Nernst-Planck, and Stokes equations. Predicted slip velocities from the model and simulations agree to within a factor of ≈2 over a multiple order-of-magnitude change in the input parameters. Our analysis can be used to predict enhancement of mass transport and the resulting impact on overall catalytic conversion, and is also applicable to predicting the speed of catalytic nanomotors.

  8. Parameter optimization for surface flux transport models

    Science.gov (United States)

    Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.

    2017-11-01

    Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.

  9. Hydrous ferric oxide: evaluation of Cd-HFO surface complexation models combining Cd(K) EXAFS data, potentiometric titration results, and surface site structures identified from mineralogical knowledge.

    Science.gov (United States)

    Spadini, Lorenzo; Schindler, Paul W; Charlet, Laurent; Manceau, Alain; Vala Ragnarsdottir, K

    2003-10-01

    The surface properties of ferrihydrite were studied by combining wet chemical data, Cd(K) EXAFS data, and a surface structure and protonation model of the ferrihydrite surface. Acid-base titration experiments and Cd(II)-ferrihydrite sorption experiments were performed at pH values above 3. The titration data could be adequately modeled by the deprotonation reaction ≡Fe-OH2(+1/2) = ≡Fe-OH(-1/2) + H(+), log k(int) = -8.29, assuming the existence of a unique intrinsic microscopic constant, log k(int), and consequently the existence of a single significant type of acid-base reactive functional group. The surface structure model indicates that these groups are terminal water groups. The Cd(II) data were modeled assuming the existence of a single reactive site. The model fits the data set at low Cd(II) concentration and up to 50% surface coverage. At high coverage more Cd(II) ions than predicted are adsorbed, which is indicative of the existence of a second type of site of lower affinity. This agrees with the surface structure and protonation model developed, which indicates comparable concentrations of high- and low-affinity sites. The model further shows that for each class of low- and high-affinity sites there exists a variety of corresponding Cd surface complex structures, depending on the model crystal faces on which the complexes develop. Generally, high-affinity surface structures have surface coordinations of 3 and 4, as compared to 1 and 2 for low-affinity surface structures.

  10. ANALYSIS OF COMBINED POLYSURFACES TO MESH SURFACES MATCHING

    Directory of Open Access Journals (Sweden)

    Marek WYLEŻOŁ

    2014-06-01

    Full Text Available This article presents an example of the process of quantitatively evaluating the fit of a combined polysurface (NURBS class) to a surface mesh. The fitting of the polysurface and the evaluation of the obtained results were carried out in the CATIA v5 environment. The obtained quantitative evaluations are shown graphically in the form of three-dimensional graphs and histograms. A pelvic bone STL model was used as the base surface mesh (the model was created by digitizing a didactic physical model).

  11. A History of Regression and Related Model-Fitting in the Earth Sciences (1636?-2000)

    International Nuclear Information System (INIS)

    Howarth, Richard J.

    2001-01-01

    The (statistical) modeling of the behavior of a dependent variate as a function of one or more predictors provides examples of model-fitting which span the development of the earth sciences from the 17th Century to the present. The historical development of these methods and their subsequent application is reviewed. Bond's predictions (c. 1636 and 1668) of change in the magnetic declination at London may be the earliest attempt to fit such models to geophysical data. Following publication of Newton's theory of gravitation in 1726, analysis of data on the length of a 1° meridian arc, and the length of a pendulum beating seconds, as a function of sin²(latitude), was used to determine the ellipticity of the oblate spheroid defining the Figure of the Earth. The pioneering computational methods of Mayer in 1750, Boscovich in 1755, and Lambert in 1765, and the subsequent independent discoveries of the principle of least squares by Gauss in 1799, Legendre in 1805, and Adrain in 1808, and its later substantiation on the basis of probability theory by Gauss in 1809 were all applied to the analysis of such geodetic and geophysical data. Notable later applications include: the geomagnetic survey of Ireland by Lloyd, Sabine, and Ross in 1836, Gauss's model of the terrestrial magnetic field in 1838, and Airy's 1845 analysis of the residuals from a fit to pendulum lengths, from which he recognized the anomalous character of measurements of gravitational force which had been made on islands. In the early 20th Century applications to geological topics proliferated, but the computational burden effectively held back applications of multivariate analysis. Following World War II, the arrival of digital computers in universities in the 1950s facilitated computation, and fitting linear or polynomial models as a function of geographic coordinates, trend surface analysis, became popular during the 1950-60s. The inception of geostatistics in France at this time by Matheron had its

  12. Introducing the fit-criteria assessment plot - A visualisation tool to assist class enumeration in group-based trajectory modelling.

    Science.gov (United States)

    Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria

    2017-10-01

    Background and objective: Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class-enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods: An R-code that accepts universal data input is presented. The programme condenses relevant group-based trajectory modelling output information of model fit indices in automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate fit-criteria assessment plot's utility. Results: Fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. Fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions: Fit-criteria assessment plot is an exploratory, visualisation tool that can be employed to assist decisions in the initial and decisive phase of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster group-based trajectory modelling's adequate use.

  13. A highly accurate dynamic contact angle algorithm for drops on inclined surface based on ellipse-fitting.

    Science.gov (United States)

    Xu, Z N; Wang, S Y

    2015-02-01

    To improve the accuracy of the calculation of the dynamic contact angle for drops on an inclined surface, a significant number of numerical drop profiles on the inclined surface, with different inclination angles, drop volumes, and contact angles, are generated based on the finite difference method, and a least-squares ellipse-fitting algorithm is used to calculate the dynamic contact angle. The influences of the above three factors are systematically investigated. The results reveal that the dynamic contact angle errors, including the errors of the left and right contact angles, evaluated by the ellipse-fitting algorithm tend to increase with inclination angle, drop volume, and contact angle. If the drop volume and the solid substrate are fixed, the errors of the left and right contact angles increase with inclination angle. After performing a tremendous amount of computation, the critical dimensionless drop volumes corresponding to the critical contact angle error are obtained. Based on the values of the critical volumes, a highly accurate dynamic contact angle algorithm is proposed and fully validated. Within nearly the whole hydrophobicity range, it can decrease the dynamic contact angle error in the inclined plane method to less than a certain value even for different types of liquids.
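
    A basic version of the fitting step, recovering the conic coefficients of a drop profile by algebraic least squares and reading off the tangent slope at a profile end point, can be sketched as follows. This is a plain SVD-based conic fit rather than the authors' specific algorithm, and the profile points are synthetic.

        import numpy as np

        def fit_conic(x, y):
            # Least-squares conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 via SVD.
            D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
            _, _, Vt = np.linalg.svd(D)
            return Vt[-1]                  # coefficients, up to scale

        def tangent_slope(coeffs, x0, y0):
            # dy/dx on the conic at (x0, y0), from implicit differentiation.
            a, b, c, d, e, f = coeffs
            return -(2*a*x0 + b*y0 + d) / (b*x0 + 2*c*y0 + e)

        # Synthetic drop profile: the upper arc of an ellipse, truncated near its ends.
        t = np.linspace(0.3, np.pi - 0.3, 80)
        x, y = 1.5 * np.cos(t), 1.0 * np.sin(t)

        coeffs = fit_conic(x, y)
        slope = tangent_slope(coeffs, x[-1], y[-1])
        angle = np.degrees(np.arctan(abs(slope)))
        print(round(angle, 2), "degrees between the tangent and the horizontal")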

  14. Deriving global parameter estimates for the Noah land surface model using FLUXNET and machine learning

    Science.gov (United States)

    Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.

    2016-11-01

    With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin Hypercube-based ensemble of the most sensitive parameters, determined by the Sobol method, to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that there is the potential to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.
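
    The mapping step, relating per-site calibrated parameter sets to local environmental covariates with Extra-Trees and checking the result with leave-one-out cross-validation, looks roughly like the scikit-learn sketch below. The covariates and parameter values are synthetic stand-ins for the FLUXNET sites.

        import numpy as np
        from sklearn.ensemble import ExtraTreesRegressor
        from sklearn.model_selection import LeaveOneOut

        rng = np.random.default_rng(42)

        # 85 "sites": environmental covariates (e.g. mean temperature, aridity, LAI, sand fraction).
        X = rng.normal(size=(85, 4))
        # Calibrated parameters per site (stand-ins for rs_min, Czil, fxexp).
        Y = np.column_stack([
            120 + 30 * X[:, 0] + rng.normal(scale=10, size=85),
            0.1 + 0.05 * X[:, 1] + rng.normal(scale=0.02, size=85),
            2.0 + 0.5 * X[:, 2] + rng.normal(scale=0.2, size=85),
        ])

        model = ExtraTreesRegressor(n_estimators=200, random_state=0)

        # Leave-one-out cross-validation: predict each site's parameters from the other 84.
        preds = np.empty_like(Y)
        for train, test in LeaveOneOut().split(X):
            model.fit(X[train], Y[train])
            preds[test] = model.predict(X[test])

        rmse = np.sqrt(np.mean((preds - Y) ** 2, axis=0))
        print("Leave-one-out RMSE per parameter:", np.round(rmse, 3))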

  15. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    Directory of Open Access Journals (Sweden)

    Jinwei Wang

    2014-01-01

    Full Text Available The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia’s GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  16. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated-i.e. all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  17. Universal Rate Model Selector: A Method to Quickly Find the Best-Fit Kinetic Rate Model for an Experimental Rate Profile

    Science.gov (United States)

    2017-08-01

    ... 3.3 Universal Kinetic Rate Platform Development. Kinetic rate models range from pure chemical reactions to mass transfer ... The rate model that best fits the experimental data is a first-order or homogeneous catalytic reaction ... Avrami (7), and intraparticle diffusion (6) rate equations, to name a few. A single fitting algorithm (kinetic rate model) for a reaction does not
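
    The underlying idea, fit several candidate rate laws to the same conversion-versus-time profile and let a single criterion pick the winner, can be sketched with scipy. The candidate models (first-order, Avrami, intraparticle diffusion) follow their textbook forms, and the data are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.array([0.5, 1, 2, 4, 6, 9, 12, 18, 24], dtype=float)          # time
        X = np.array([0.18, 0.31, 0.52, 0.74, 0.85, 0.93, 0.96, 0.99, 1.0])  # fractional conversion

        models = {
            "first-order":   (lambda t, k: 1 - np.exp(-k * t),                     [0.3]),
            "Avrami":        (lambda t, k, n: 1 - np.exp(-(k * t) ** n),           [0.3, 1.0]),
            "intraparticle": (lambda t, kd, c: np.clip(kd * np.sqrt(t) + c, 0, 1), [0.2, 0.0]),
        }

        def aic(rss, n_obs, n_par):
            # Akaike information criterion from the residual sum of squares.
            return n_obs * np.log(rss / n_obs) + 2 * n_par

        for name, (f, p0) in models.items():
            popt, _ = curve_fit(f, t, X, p0=p0, maxfev=10000)
            rss = np.sum((X - f(t, *popt)) ** 2)
            print(f"{name:13s} params={np.round(popt, 3)}  AIC={aic(rss, len(t), len(popt)):.1f}")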

  18. Power Prediction Model for Turning EN-31 Steel Using Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    M. Hameedullah

    2010-01-01

    Full Text Available Power consumption in turning EN-31 steel (a material that is most extensively used in the automotive industry) with a tungsten carbide tool under different cutting conditions was experimentally investigated. The experimental runs were planned according to a 2⁴ + 8 added centre point factorial design of experiments, replicated thrice. The data collected were statistically analyzed using the Analysis of Variance technique, and first-order and second-order power consumption prediction models were developed using response surface methodology (RSM). It is concluded that the second-order model is more accurate than the first-order model and fits the experimental data well. The model can be used in the automotive industry for deciding the cutting parameters for minimum power consumption and hence maximum productivity
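
    The comparison between first- and second-order response-surface models is, in essence, ordinary least squares on linear versus full quadratic terms of the coded cutting parameters. A scikit-learn sketch with invented data for four coded factors is shown below; the factor layout mirrors a two-level factorial design with added centre points.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures

        rng = np.random.default_rng(1)

        # Coded levels (-1/+1) of four cutting parameters plus centre points, replicated thrice.
        X = np.vstack([np.array(np.meshgrid(*[[-1, 1]] * 4)).T.reshape(-1, 4),
                       np.zeros((8, 4))])
        X = np.tile(X, (3, 1))

        # Synthetic power response with curvature and an interaction, plus noise.
        beta = np.array([60.0, 25.0, 12.0, 8.0, 3.0])
        power = (beta[0] + X @ beta[1:] + 6.0 * X[:, 0] ** 2
                 + 4.0 * X[:, 0] * X[:, 1] + rng.normal(scale=2.0, size=len(X)))

        for degree in (1, 2):
            rsm = make_pipeline(PolynomialFeatures(degree, include_bias=False), LinearRegression())
            rsm.fit(X, power)
            print(f"order {degree}: R^2 = {rsm.score(X, power):.3f}")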

  19. The global electroweak Standard Model fit after the Higgs discovery

    CERN Document Server

    Baak, Max

    2013-01-01

    We present an update of the global Standard Model (SM) fit to electroweak precision data under the assumption that the new particle discovered at the LHC is the SM Higgs boson. In this scenario all parameters entering the calculations of electroweak precision observables are known, making it possible, for the first time, to over-constrain the SM at the electroweak scale and to assert its validity. Within the SM the W boson mass and the effective weak mixing angle can be accurately predicted from the global fit. The results are compatible with, and exceed in precision, the direct measurements. An updated determination of the S, T and U parameters, which parametrize the oblique vacuum corrections, is given. The obtained values show good consistency with the SM expectation and no direct signs of new physics are seen. We conclude with an outlook to the global electroweak fit for a future e+e- collider.

  20. Cs sorption to potential host rock of low-level radioactive waste repository in Taiwan: experiments and numerical fitting study.

    Science.gov (United States)

    Wang, Tsing-Hai; Chen, Chin-Lung; Ou, Lu-Yen; Wei, Yuan-Yaw; Chang, Fu-Lin; Teng, Shi-Ping

    2011-09-15

    A reliable performance assessment of a radioactive waste repository depends on better knowledge of interactions between nuclides and geological substances. Numerical fitting of acquired experimental results by the surface complexation model enables us to interpret sorption behavior at the molecular scale and thus to build a solid basis for simulation study. A lack of consensus on a standard set of assessment criteria (such as determination of sorption site concentration, reaction formula) during numerical fitting, on the other hand, makes comparison between various studies difficult. In this study we explored the sorption of cesium to argillite by conducting experiments under different pH and solid/liquid ratio (s/l) with two specific initial Cs concentrations (100 mg/L, 7.5 × 10⁻⁴ mol/L and 0.01 mg/L, 7.5 × 10⁻⁸ mol/L). After this, numerical fitting was performed, focusing on assessment criteria and their consequences. It was found that both ion exchange and electrostatic interactions governed Cs sorption on argillite. At higher initial Cs concentration the Cs sorption showed an increasing dependence on pH as the solid/liquid ratio was lowered. In contrast, at trace Cs levels, the Cs sorption was neither s/l dependent nor pH sensitive. It is therefore proposed that the ion exchange mechanism dominates Cs sorption when the concentration of surface sorption sites exceeds that of Cs, whereas surface complexation is attributed to Cs uptake under alkaline environments. Numerical fitting was conducted using two different strategies to determine the concentration of surface sorption sites: the clay model (based on the cation exchange capacity plus surface titration results) and the iron oxide model (where the concentration of sorption sites is proportional to the surface area of argillite). It was found that the clay model led to better fitting than the iron oxide model, which is attributed to more amenable sorption sites (two specific sorption sites along with larger site

  1. Surface modeling method for aircraft engine blades by using speckle patterns based on the virtual stereo vision system

    Science.gov (United States)

    Yu, Zhijing; Ma, Kai; Wang, Zhijun; Wu, Jun; Wang, Tao; Zhuge, Jingchang

    2018-03-01

    A blade is one of the most important components of an aircraft engine. Due to its high manufacturing costs, it is indispensable to come up with methods for repairing damaged blades. In order to obtain a surface model of the blades, this paper proposes a modeling method by using speckle patterns based on the virtual stereo vision system. Firstly, blades are sprayed evenly creating random speckle patterns and point clouds from blade surfaces can be calculated by using speckle patterns based on the virtual stereo vision system. Secondly, boundary points are obtained in the way of varied step lengths according to curvature and are fitted to get a blade surface envelope with a cubic B-spline curve. Finally, the surface model of blades is established with the envelope curves and the point clouds. Experimental results show that the surface model of aircraft engine blades is fair and accurate.

  2. Petrologically-constrained thermo-chemical modelling of cratonic upper mantle consistent with elevation, geoid, surface heat flow, seismic surface waves and MT data

    Science.gov (United States)

    Jones, A. G.; Afonso, J. C.

    2015-12-01

    The Earth comprises a single physio-chemical system that we interrogate from its surface and/or from space making observations related to various physical and chemical parameters. A change in one of those parameters affects many of the others; for example a change in velocity is almost always indicative of a concomitant change in density, which results in changes to elevation, gravity and geoid observations. Similarly, a change in oxide chemistry affects almost all physical parameters to a greater or lesser extent. We have now developed sophisticated tools to model/invert data in our individual disciplines to such an extent that we are obtaining high resolution, robust models from our datasets. However, in the vast majority of cases the different datasets are modelled/inverted independently of each other, and often even without considering other data in a qualitative sense. The LitMod framework of Afonso and colleagues presents integrated inversion of geoscientific data to yield thermo-chemical models that are petrologically consistent and constrained. Input data can comprise any combination of elevation, geoid, surface heat flow, seismic surface wave (Rayleigh and Love) data and receiver function data, and MT data. The basis of LitMod is characterization of the upper mantle in terms of five oxides in the CFMAS system and a thermal structure that is conductive to the LAB and convective along the adiabat below the LAB to the 410 km discontinuity. Candidate solutions are chosen from prior distributions of the oxides. For the crust, candidate solutions are chosen from distributions of crustal layering, velocity and density parameters. Those candidate solutions that fit the data within prescribed error limits are kept, and are used to establish broad posterior distributions from which new candidate solutions are chosen. Examples will be shown of application of this approach fitting data from the Kaapvaal Craton in South Africa and the Rae Craton in northern Canada. I

  3. Comparison of parametric methods for modeling corneal surfaces

    Science.gov (United States)

    Bouazizi, Hala; Brunette, Isabelle; Meunier, Jean

    2017-02-01

    Corneal topography is a medical imaging technique to get the 3D shape of the cornea as a set of 3D points of its anterior and posterior surfaces. From these data, topographic maps can be derived to assist the ophthalmologist in the diagnosis of disorders. In this paper, we compare three different mathematical parametric representations of the corneal surfaces least-squares fitted to the data provided by corneal topography. The parameters obtained from these models reduce the dimensionality of the data from several thousand 3D points to only a few parameters and could eventually be useful for diagnosis, biometry, implant design, etc. The first representation is based on Zernike polynomials that are commonly used in optics. A variant of these polynomials, named Bhatia-Wolf, will also be investigated. These two sets of polynomials are defined over a circular domain which is convenient to model the elevation (height) of the corneal surface. The third representation uses Spherical Harmonics that are particularly well suited for nearly-spherical object modeling, which is the case for the cornea. We compared the three methods using the following three criteria: the root-mean-square error (RMSE), the number of parameters and the visual accuracy of the reconstructed topographic maps. A large dataset of more than 2000 corneal topographies was used. Our results showed that Spherical Harmonics were superior with a mean RMSE lower than 2.5 microns with 36 coefficients (order 5) for normal corneas and lower than 5 microns for two diseases affecting the corneal shapes: keratoconus and Fuchs' dystrophy.
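
    Least-squares fitting of corneal elevation to a handful of low-order Zernike terms reduces to building a design matrix of the polynomials over the normalized pupil and solving a linear system. The sketch below hand-codes a few terms (piston, tilts, defocus, astigmatism) under one common normalization convention and fits synthetic data; a production fit would use many more terms and measured topography.

        import numpy as np

        def zernike_design(rho, theta):
            # Low-order Zernike terms on the unit disk (one common normalization).
            return np.column_stack([
                np.ones_like(rho),                        # piston
                2 * rho * np.cos(theta),                  # tilt x
                2 * rho * np.sin(theta),                  # tilt y
                np.sqrt(3) * (2 * rho**2 - 1),            # defocus
                np.sqrt(6) * rho**2 * np.cos(2 * theta),  # astigmatism 0/90
                np.sqrt(6) * rho**2 * np.sin(2 * theta),  # astigmatism 45/135
            ])

        # Synthetic "elevation" data (micrometres) on the normalized cornea.
        rng = np.random.default_rng(3)
        rho = np.sqrt(rng.uniform(0, 1, 2000))     # uniform sampling over the disk
        theta = rng.uniform(0, 2 * np.pi, 2000)
        Z = zernike_design(rho, theta)
        true_coeffs = np.array([0.0, 0.5, -0.3, 4.0, 1.2, -0.8])
        elevation = Z @ true_coeffs + rng.normal(scale=0.5, size=2000)

        coeffs, *_ = np.linalg.lstsq(Z, elevation, rcond=None)
        rmse = np.sqrt(np.mean((Z @ coeffs - elevation) ** 2))
        print(np.round(coeffs, 3), " RMSE =", round(rmse, 3))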

  4. Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models

    Science.gov (United States)

    Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning

    2012-01-01

    The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…

  5. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    Science.gov (United States)

    DeCarlo, Lawrence T

    2003-02-01

    The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.

  6. Reconstruction of freeform surfaces for metrology

    International Nuclear Information System (INIS)

    El-Hayek, N; Nouira, H; Anwer, N; Damak, M; Gibaru, O

    2014-01-01

    The application of freeform surfaces has increased since their complex shapes closely express a product's functional specifications and their machining is obtained with higher accuracy. In particular, optical surfaces exhibit enhanced performance especially when they take aspheric forms or more complex forms with multi-undulations. This study is mainly focused on the reconstruction of complex shapes such as freeform optical surfaces, and on the characterization of their form. The computer graphics community has proposed various algorithms for constructing a mesh based on the cloud of sample points. The mesh is a piecewise linear approximation of the surface and an interpolation of the point set. The mesh can further be processed for fitting parametric surfaces (Polyworks® or Geomagic®). The metrology community investigates direct fitting approaches. If the surface mathematical model is given, fitting is a straightforward task. Nonetheless, if the surface model is unknown, fitting is only possible through the association of polynomial spline parametric surfaces. In this paper, a comparative study carried out on methods proposed by the computer graphics community will be presented to elucidate the advantages of these approaches. We stress the importance of the pre-processing phase as well as the significance of initial conditions. We further emphasize the importance of the meshing phase by stating that a proper mesh has two major advantages. First, it organizes the initially unstructured point set and provides insight into orientation, neighbourhood and curvature, and infers information on both its geometry and topology. Second, it conveys a better segmentation of the space, leading to a correct patching and association of parametric surfaces.

  7. CHF Enhancement by Surface Patterning based on Hydrodynamic Instability Model

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Han; Bang, In Cheol [UNIST, Ulsan (Korea, Republic of)

    2015-05-15

    If the power density of a device exceeds the CHF point, bubbles and vapor films will cover the whole heater surface. Because vapor films have much lower heat transfer capabilities compared to the liquid layer, the temperature of the heater surface will increase rapidly, and the device could be damaged due to heater burnout. Therefore, the prediction and the enhancement of the CHF are essential to maximizing the efficient heat removal region. Numerous studies have been conducted to describe the CHF phenomenon, such as hydrodynamic instability theory, macrolayer dryout theory, hot/dry spot theory, and bubble interaction theory. The hydrodynamic instability model proposed by Zuber is the predominant CHF model; it attributes the CHF to Helmholtz instability. Zuber assumed that the Rayleigh-Taylor (RT) instability wavelength is related to the Helmholtz wavelength. Lienhard and Dhir proposed a CHF model in which the Helmholtz instability wavelength is equal to the most dangerous RT wavelength. In addition, they showed the heater size effect using various heater surfaces. Lu et al. proposed a modified hydrodynamic theory in which the Helmholtz instability wavelength was assumed to be the heater size and the area of the vapor column was used as a fitting factor. The modified hydrodynamic theories were based on the change of the Helmholtz wavelength related to the RT instability wavelength. In the present study, the RT instability wavelength was changed through heater surface modification to show the CHF enhancement obtained by heater surface patterning in plate pool boiling. Sapphire glass was used as the base heater substrate, and a Pt film was used as the heating source. The patterning of the surface was based on the change of the RT instability wavelength. In the present work, the study of the CHF was conducted using bare Pt and patterned heating surfaces.

  8. Fitting Diffusion Item Response Theory Models for Responses and Response Times Using the R Package diffIRT

    Directory of Open Access Journals (Sweden)

    Dylan Molenaar

    2015-08-01

    Full Text Available In the psychometric literature, item response theory models have been proposed that explicitly take the decision process underlying the responses of subjects to psychometric test items into account. Application of these models is however hampered by the absence of general and flexible software to fit these models. In this paper, we present diffIRT, an R package that can be used to fit item response theory models that are based on a diffusion process. We discuss parameter estimation and model fit assessment, show the viability of the package in a simulation study, and illustrate the use of the package with two datasets pertaining to extraversion and mental rotation. In addition, we illustrate how the package can be used to fit the traditional diffusion model (as it has been originally developed in experimental psychology) to data.

  9. Modelling dust rings in early-type galaxies through a sequence of radiative transfer simulations and 2D image fitting

    Science.gov (United States)

    Bonfini, P.; González-Martín, O.; Fritz, J.; Bitsakis, T.; Bruzual, G.; Sodi, B. Cervantes

    2018-05-01

    A large fraction of early-type galaxies (ETGs) host prominent dust features, and central dust rings are arguably the most interesting among them. We present here `Lord Of The Rings' (LOTR), a new methodology which allows one to integrate the extinction by dust rings into a 2D fitting model of the surface brightness distribution. Our pipeline acts in two steps, first using the surface fitting software GALFIT to determine the unabsorbed stellar emission, and then adopting the radiative transfer code SKIRT to apply dust extinction. We apply our technique to NGC 4552 and NGC 4494, two nearby ETGs. We show that the extinction by a dust ring can mimic, in a surface brightness profile, a central point source (e.g. an unresolved nuclear stellar cluster or an active galactic nucleus; AGN) superimposed on a `core' (i.e. a central flattening of the stellar light commonly observed in massive ETGs). We discuss how properly accounting for dust features is of paramount importance to derive correct fluxes, especially for low luminosity AGNs (LLAGNs). We suggest that the geometries of dust features are strictly connected with how relaxed the gravitational potential is, i.e. with the evolutionary stage of the host galaxy. Additionally, we find hints that the dust mass contained in the ring relates to the AGN activity.

  10. Sustained fitness gains and variability in fitness trajectories in the long-term evolution experiment with Escherichia coli

    Science.gov (United States)

    Lenski, Richard E.; Wiser, Michael J.; Ribeck, Noah; Blount, Zachary D.; Nahum, Joshua R.; Morris, J. Jeffrey; Zaman, Luis; Turner, Caroline B.; Wade, Brian D.; Maddamsetti, Rohan; Burmeister, Alita R.; Baird, Elizabeth J.; Bundy, Jay; Grant, Nkrumah A.; Card, Kyle J.; Rowles, Maia; Weatherspoon, Kiyana; Papoulis, Spiridon E.; Sullivan, Rachel; Clark, Colleen; Mulka, Joseph S.; Hajela, Neerja

    2015-01-01

    Many populations live in environments subject to frequent biotic and abiotic changes. Nonetheless, it is interesting to ask whether an evolving population's mean fitness can increase indefinitely, and potentially without any limit, even in a constant environment. A recent study showed that fitness trajectories of Escherichia coli populations over 50 000 generations were better described by a power-law model than by a hyperbolic model. According to the power-law model, the rate of fitness gain declines over time but fitness has no upper limit, whereas the hyperbolic model implies a hard limit. Here, we examine whether the previously estimated power-law model predicts the fitness trajectory for an additional 10 000 generations. To that end, we conducted more than 1100 new competitive fitness assays. Consistent with the previous study, the power-law model fits the new data better than the hyperbolic model. We also analysed the variability in fitness among populations, finding subtle, but significant, heterogeneity in mean fitness. Some, but not all, of this variation reflects differences in mutation rate that evolved over time. Taken together, our results imply that both adaptation and divergence can continue indefinitely—or at least for a long time—even in a constant environment. PMID:26674951
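
    The comparison of the two trajectory shapes can be sketched with a simple nonlinear least-squares fit. The power-law and hyperbolic forms are written here in a commonly quoted parameterization (an assumption, since the abstract does not give the equations), and the fitness values are invented placeholders, not the published measurements.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Assumed functional forms: power law w = (b*t + 1)**a, hyperbolic w = 1 + a*t/(t + b).
    def power_law(t, a, b):
        return (b * t + 1.0) ** a

    def hyperbolic(t, a, b):
        return 1.0 + a * t / (t + b)

    # Hypothetical mean-fitness measurements versus generations.
    t = np.array([0, 2e3, 5e3, 1e4, 2e4, 3e4, 4e4, 5e4])
    w = np.array([1.00, 1.18, 1.30, 1.40, 1.52, 1.60, 1.65, 1.70])

    p_pow, _ = curve_fit(power_law, t, w, p0=[0.1, 1e-3])
    p_hyp, _ = curve_fit(hyperbolic, t, w, p0=[0.8, 1e4])

    sse_pow = np.sum((power_law(t, *p_pow) - w) ** 2)
    sse_hyp = np.sum((hyperbolic(t, *p_hyp) - w) ** 2)
    print(f"power law SSE = {sse_pow:.4f}, hyperbolic SSE = {sse_hyp:.4f}")
    ```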

  11. Protofit: A program for determining surface protonation constants from titration data

    Science.gov (United States)

    Turner, Benjamin F.; Fein, Jeremy B.

    2006-11-01

    Determining the surface protonation behavior of natural adsorbents is essential to understand how they interact with their environments. ProtoFit is a tool for analysis of acid-base titration data and optimization of surface protonation models. The program offers a number of useful features including: (1) enables visualization of adsorbent buffering behavior; (2) uses an optimization approach independent of starting titration conditions or initial surface charge; (3) does not require an initial surface charge to be defined or to be treated as an optimizable parameter; (4) includes an error analysis intrinsically as part of the computational methods; and (5) generates simulated titration curves for comparison with observation. ProtoFit will typically be run through ProtoFit-GUI, a graphical user interface providing user-friendly control of model optimization, simulation, and data visualization. ProtoFit calculates an adsorbent proton buffering value as a function of pH from raw titration data (including pH and volume of acid or base added). The data is reduced to a form where the protons required to change the pH of the solution are subtracted out, leaving protons exchanged between solution and surface per unit mass of adsorbent as a function of pH. The buffering intensity function Qads* is calculated as the instantaneous slope of this reduced titration curve. Parameters for a surface complexation model are obtained by minimizing the sum of squares between the modeled (i.e. simulated) buffering intensity curve and the experimental data. The variance in the slope estimate, intrinsically produced as part of the Qads* calculation, can be used to weight the sum of squares calculation between the measured buffering intensity and a simulated curve. Effects of analytical error on data visualization and model optimization are discussed. Examples are provided of using ProtoFit for data visualization, model optimization, and model evaluation.
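
    A stripped-down version of the optimization idea, assuming a single monoprotic surface site rather than the multi-site models ProtoFit actually supports, can be sketched as follows; the site concentration, pKa and noise level are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Buffering intensity of a single monoprotic site (a simplification):
    # Q(pH) = C * [H+] / (Ka + [H+]),  Q* = -dQ/dpH = ln(10) * C * Ka * [H+] / (Ka + [H+])**2.
    def q_star(pH, logC, pKa):
        H = 10.0 ** (-pH)
        Ka = 10.0 ** (-pKa)
        C = 10.0 ** logC
        return np.log(10.0) * C * Ka * H / (Ka + H) ** 2

    # Hypothetical "reduced titration" buffering intensity versus pH.
    pH = np.linspace(3.0, 9.0, 25)
    q_obs = q_star(pH, logC=-3.0, pKa=6.2) + np.random.default_rng(2).normal(0, 2e-5, pH.size)

    # Minimize the sum of squares between modeled and observed buffering intensity.
    res = least_squares(lambda p: q_star(pH, *p) - q_obs, x0=[-3.5, 5.5])
    print("fitted log C, pKa:", res.x.round(2))
    ```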

  12. A flexible, interactive software tool for fitting the parameters of neuronal models.

    Science.gov (United States)

    Friedrich, Péter; Vella, Michael; Gulyás, Attila I; Freund, Tamás F; Káli, Szabolcs

    2014-01-01

    The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.

  13. A flexible, interactive software tool for fitting the parameters of neuronal models

    Directory of Open Access Journals (Sweden)

    Péter eFriedrich

    2014-07-01

    Full Text Available The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problem of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting

  14. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

    Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process the models allow for profound conclusions about the test takers. However, before such a model can be used its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study. In this study we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Boorsboom, Psychol Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to four alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.

  15. INFLUENCE OF RESIDENCE-TIME DISTRIBUTION ON A SURFACE-RENEWAL MODEL OF CONSTANT-PRESSURE CROSS-FLOW MICROFILTRATION

    Directory of Open Access Journals (Sweden)

    W. Zhang

    2015-03-01

    Full Text Available Abstract This work examines the influence of the residence-time distribution (RTD) of surface elements on a model of cross-flow microfiltration that has been proposed recently (Hasan et al., 2013). Along with the RTD from the previous work (Case 1), two other RTD functions (Cases 2 and 3) are used to develop theoretical expressions for the permeate-flux decline and cake buildup in the filter as a function of process time. The three different RTDs correspond to three different startup conditions of the filtration process. The analytical expressions for the permeate flux, each of which contains three basic parameters (membrane resistance, specific cake resistance and rate of surface renewal), are fitted to experimental permeate flow rate data in the microfiltration of fermentation broths in laboratory- and pilot-scale units. All three expressions for the permeate flux fit the experimental data fairly well with average root-mean-square errors of 4.6% for Cases 1 and 2, and 4.2% for Case 3, respectively, which points towards the constructive nature of the model - a common feature of theoretical models used in science and engineering.

  16. Development and evaluation of an empirical diurnal sea surface temperature model

    Science.gov (United States)

    Weihs, R. R.; Bourassa, M. A.

    2013-12-01

    An innovative method is developed to determine the diurnal heating amplitude of sea surface temperatures (SSTs) using observations of high-quality satellite SST measurements and NWP atmospheric meteorological data. The diurnal cycle results from heating that develops at the surface of the ocean from low mechanical or shear-produced turbulence and large solar radiation absorption. During these typically calm weather conditions, the absorption of solar radiation causes heating of the upper few meters of the ocean, which become buoyantly stable; this heating causes a temperature differential between the surface and the mixed [or bulk] layer on the order of a few degrees. It has been shown that capturing the diurnal cycle is important for a variety of applications, including surface heat flux estimates, which have been shown to be underestimated when neglecting diurnal warming, and satellite and buoy calibrations, which can be complicated because of the heating differential. An empirical algorithm using a pre-dawn sea surface temperature, peak solar radiation, and accumulated wind stress is used to estimate the cycle. The empirical algorithm is derived from a multistep process in which SSTs from MSG's SEVIRI SST experimental hourly data set are combined with hourly wind stress fields derived from a bulk flux algorithm. Inputs for the flux model are taken from NASA's MERRA reanalysis product. NWP inputs are necessary because the inputs need to incorporate diurnal and air-sea interactive processes, which are vital to the ocean surface dynamics, with a high enough temporal resolution. The MERRA winds are adjusted with CCMP winds to obtain more realistic spatial and variance characteristics and the other atmospheric inputs (air temperature, specific humidity) are further corrected on the basis of in situ comparisons. The SSTs are fitted to a Gaussian curve (using one or two peaks), forming a set of coefficients used to fit the data. The coefficient data are combined with
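
    The final fitting step can be illustrated with a single-peak Gaussian, a simplified stand-in for the one- or two-peak fit described above; the hourly anomaly values, peak time and width below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Single-peak Gaussian for the diurnal warming anomaly relative to the
    # pre-dawn (foundation) SST; the study also allows a two-peak version.
    def diurnal_gauss(hour, amplitude, peak_hour, width):
        return amplitude * np.exp(-((hour - peak_hour) ** 2) / (2.0 * width ** 2))

    # Hypothetical hourly SST anomalies (K) for one calm, sunny day.
    hours = np.arange(0, 24)
    anomaly = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.02, 0.05, 0.15, 0.35, 0.6,
                        0.85, 1.0, 1.05, 0.95, 0.75, 0.5, 0.3, 0.15, 0.08,
                        0.04, 0.02, 0.01, 0.0, 0.0])

    params, _ = curve_fit(diurnal_gauss, hours, anomaly, p0=[1.0, 13.0, 3.0])
    print("amplitude, peak hour, width:", params.round(2))
    ```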

  17. The universal Higgs fit

    DEFF Research Database (Denmark)

    Giardino, P. P.; Kannike, K.; Masina, I.

    2014-01-01

    We perform a state-of-the-art global fit to all Higgs data. We synthesise them into a 'universal' form, which allows one to easily test any desired model. We apply the proposed methodology to extract from data the Higgs branching ratios, production cross sections, couplings and to analyse composite...... Higgs models, models with extra Higgs doublets, supersymmetry, extra particles in the loops, anomalous top couplings, and invisible Higgs decays into Dark Matter. Best fit regions lie around the Standard Model predictions and are well approximated by our 'universal' fit. Latest data exclude the dilaton...... as an alternative to the Higgs, and disfavour fits with negative Yukawa couplings. We derive for the first time the SM Higgs boson mass from the measured rates, rather than from the peak positions, obtaining M_h = 124.4 +/- 1.6 GeV....

  18. Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.

    Science.gov (United States)

    Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E

    2007-02-15

    Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MatLab functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
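
    For a scheme simple enough to have a closed-form solution, the rate-constant fitting step can be sketched with SciPy, whose curve_fit routine uses the Levenberg-Marquardt algorithm for unconstrained problems. This is a generic illustration, not VisKin's MatLab/Visual Basic implementation; the rate constants and noise level are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Analytical solution of the simplest interconnected scheme, A <-> B
    # (forward rate k1, reverse rate k2, A(0) = 1, B(0) = 0):
    def conc_A(t, k1, k2):
        k = k1 + k2
        return (k2 + k1 * np.exp(-k * t)) / k

    # Hypothetical observed concentrations of A.
    t = np.linspace(0, 10, 30)
    rng = np.random.default_rng(3)
    obs = conc_A(t, 0.8, 0.3) + rng.normal(0, 0.01, t.size)

    # curve_fit defaults to Levenberg-Marquardt for unconstrained least squares.
    (k1, k2), _ = curve_fit(conc_A, t, obs, p0=[0.5, 0.5])
    print(f"k1 = {k1:.3f}, k2 = {k2:.3f}")
    ```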

  19. Surface complexation modeling calculation of Pb(II) adsorption onto the calcined diatomite

    Science.gov (United States)

    Ma, Shu-Cui; Zhang, Ji-Lin; Sun, De-Hui; Liu, Gui-Xia

    2015-12-01

    Removal of noxious heavy metal ions (e.g. Pb(II)) by surface adsorption of minerals (e.g. diatomite) is an important means of environmental aqueous pollution control. Thus, it is essential to understand the surface adsorptive behavior and mechanism. In this work, the Pb(II) apparent surface complexation reaction equilibrium constants on the calcined diatomite and distributions of Pb(II) surface species were investigated through modeling calculations of Pb(II) based on a diffuse double layer model (DLM) with three amphoteric sites. Batch experiments were used to study the adsorption of Pb(II) onto the calcined diatomite as a function of pH (3.0-7.0) and different ionic strengths (0.05 and 0.1 mol L-1 NaCl) under ambient atmosphere. Adsorption of Pb(II) can be well described by Freundlich isotherm models. The apparent surface complexation equilibrium constants (log K) were obtained by fitting the batch experimental data using the PEST 13.0 together with PHREEQC 3.1.2 codes and there is good agreement between measured and predicted data. Distribution of Pb(II) surface species on the diatomite calculated by the PHREEQC 3.1.2 program indicates that the impurity cations (e.g. Al3+, Fe3+, etc.) in the diatomite play a leading role in the Pb(II) adsorption, and that complex formation together with additional electrostatic interaction is the main adsorption mechanism of Pb(II) on the diatomite under weakly acidic conditions.

  20. Modeling Np and Pu transport with a surface complexation model and spatially variant sorption capacities: Implications for reactive transport modeling and performance assessments of nuclear waste disposal sites

    Science.gov (United States)

    Glynn, P.D.

    2003-01-01

    One-dimensional (1D) geochemical transport modeling is used to demonstrate the effects of speciation and sorption reactions on the ground-water transport of Np and Pu, two redox-sensitive elements. Earlier 1D simulations (Reardon, 1981) considered the kinetically limited dissolution of calcite and its effect on ion-exchange reactions (involving 90Sr, Ca, Na, Mg and K), and documented the spatial variation of a 90Sr partition coefficient under both transient and steady-state chemical conditions. In contrast, the simulations presented here assume local equilibrium for all reactions, and consider sorption on constant potential, rather than constant charge, surfaces. Reardon's (1981) seminal findings on the spatial and temporal variability of partitioning (of 90Sr) are reexamined and found to be partially caused by his assumption of a kinetically limited reaction. In the present work, sorption is assumed to be the predominant retardation process controlling Pu and Np transport, and is simulated using a diffuse-double-layer-surface-complexation (DDLSC) model. Transport simulations consider the infiltration of Np- and Pu-contaminated waters into an initially uncontaminated environment, followed by the cleanup of the resultant contamination with uncontaminated water. Simulations are conducted using different spatial distributions of sorption capacities (with the same total potential sorption capacity, but with different variances and spatial correlation structures). Results obtained differ markedly from those that would be obtained in transport simulations using constant Kd, Langmuir or Freundlich sorption models. When possible, simulation results (breakthrough curves) are fitted to a constant Kd advection-dispersion transport model and compared. Functional differences often are great enough that they prevent a meaningful fit of the simulation results with a constant Kd (or even a Langmuir or Freundlich) model, even in the case of Np, a weakly sorbed radionuclide under the

  1. Trend analysis by a piecewise linear regression model applied to surface air temperatures in Southeastern Spain (1973–2014)

    OpenAIRE

    Campra, Pablo; Morales, Maria

    2016-01-01

    The magnitude of the trends of environmental and climatic changes is mostly derived from the slopes of the linear trends using ordinary least-squares fitting. An alternative flexible fitting model, piecewise regression, has been applied here to surface air temperature records in southeastern Spain for the recent warming period (1973–2014) to gain accuracy in the description of the inner structure of change, dividing the time series into linear segments with different slopes. Breakpoint y...
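
    A two-segment piecewise (segmented) linear fit with an unknown breakpoint can be sketched by scanning candidate breakpoints and solving an ordinary least-squares problem for each; the temperature series below is synthetic, not the Spanish station data.

    ```python
    import numpy as np

    def fit_piecewise(x, y):
        """Two-segment continuous piecewise linear fit; the breakpoint is chosen
        by scanning candidate positions and keeping the least-squares minimum."""
        best = None
        for bp in x[2:-2]:                       # candidate breakpoints
            # Basis: intercept, slope, and a hinge term that activates after bp.
            X = np.column_stack([np.ones_like(x), x, np.clip(x - bp, 0, None)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            sse = np.sum((X @ beta - y) ** 2)
            if best is None or sse < best[0]:
                best = (sse, bp, beta)
        return best  # (sse, breakpoint, [intercept, slope1, slope_change])

    # Hypothetical annual mean temperature anomalies, 1973-2014.
    years = np.arange(1973, 2015)
    rng = np.random.default_rng(4)
    temps = np.where(years < 1995, 0.01 * (years - 1973),
                     0.22 + 0.04 * (years - 1995)) + rng.normal(0, 0.05, years.size)

    sse, bp, beta = fit_piecewise(years.astype(float), temps)
    print(f"breakpoint ~ {bp:.0f}, slopes: {beta[1]:.3f}, {beta[1] + beta[2]:.3f} degC/yr")
    ```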

  2. Nuclear fuel element end fitting

    International Nuclear Information System (INIS)

    Jabsen, F.S.

    1980-01-01

    An invention is described whereby end fittings are formed from lattices of mutually perpendicular plates. At the plate intersections, sockets are secured to the end fittings in a manner that permits the longitudinal axes of each of the sockets to align with the respective lines of intersection of the plates. The sockets all protrude above one of the surfaces of the end fitting. Further, a detent is formed in the protruding sides of each of the sockets. Annular grooves are formed in each of the ends of the fuel rods that are to be mounted between the end fittings. The socket detents protrude into the respective annular grooves, thus engaging the grooves and retaining the fuel rods and end fittings in one integral structure. (auth)

  3. M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU

    Science.gov (United States)

    Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.

    2018-04-01

    Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.

  4. Tensor-guided fitting of subduction slab depths

    Science.gov (United States)

    Bazargani, Farhad; Hayes, Gavin P.

    2013-01-01

    Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data‐fitting approach to address the problem. Earthquakes and active‐source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth’s curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.

  5. Assessing item fit for unidimensional item response theory models using residuals from estimated item response functions.

    Science.gov (United States)

    Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee

    2013-07-01

    Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
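
    The underlying idea of residual-based item fit, comparing an item characteristic curve with observed proportions of correct responses, can be sketched as follows. This is a simplified illustration (true abilities and item parameters are treated as known, and a plain binomial standardization is used), not the authors' ratio-estimator construction.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def icc_2pl(theta, a, b):
        """Item characteristic curve of a 2PL item."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    # Simulate responses to one item from examinees with known abilities.
    theta = rng.normal(0, 1, 5000)
    a_true, b_true = 1.2, 0.3
    responses = rng.random(theta.size) < icc_2pl(theta, a_true, b_true)

    # Compare the model-implied probability with the observed proportion correct
    # within ability bins; standardize by the binomial standard error.
    bins = np.quantile(theta, np.linspace(0, 1, 11))
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (theta >= lo) & (theta < hi)
        n = mask.sum()
        observed = responses[mask].mean()
        expected = icc_2pl(theta[mask], a_true, b_true).mean()
        z = (observed - expected) / np.sqrt(expected * (1 - expected) / n)
        print(f"bin [{lo:+.2f}, {hi:+.2f}): n={n:4d}  residual z = {z:+.2f}")
    ```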

  6. Fitness cost

    DEFF Research Database (Denmark)

    Nielsen, Karen L.; Pedersen, Thomas M.; Udekwu, Klas I.

    2012-01-01

    phage types, predominantly only penicillin resistant. We investigated whether isolates of this epidemic were associated with a fitness cost, and we employed a mathematical model to ask whether these fitness costs could have led to the observed reduction in frequency. Bacteraemia isolates of S. aureus...... from Denmark have been stored since 1957. We chose 40 S. aureus isolates belonging to phage complex 83A, clonal complex 8 based on spa type, ranging in time of isolation from 1957 to 1980 and with various antibiograms, including both methicillin-resistant and -susceptible isolates. The relative fitness...... of each isolate was determined in a growth competition assay with a reference isolate. Significant fitness costs of 215 were determined for the MRSA isolates studied. There was a significant negative correlation between number of antibiotic resistances and relative fitness. Multiple regression analysis

  7. GOODNESS-OF-FIT TEST FOR THE ACCELERATED FAILURE TIME MODEL BASED ON MARTINGALE RESIDUALS

    Czech Academy of Sciences Publication Activity Database

    Novák, Petr

    2013-01-01

    Roč. 49, č. 1 (2013), s. 40-59 ISSN 0023-5954 R&D Projects: GA MŠk(CZ) 1M06047 Grant - others:GA MŠk(CZ) SVV 261315/2011 Keywords : accelerated failure time model * survival analysis * goodness-of-fit Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2013/SI/novak-goodness-of-fit test for the aft model based on martingale residuals.pdf

  8. Bidirectional reflectance distribution function modeling of one-dimensional rough surface in the microwave band

    International Nuclear Information System (INIS)

    Guo Li-Xin; Gou Xue-Yin; Zhang Lian-Bo

    2014-01-01

    In this study, the bidirectional reflectance distribution function (BRDF) of a one-dimensional conducting rough surface and a dielectric rough surface are calculated with different frequencies and roughness values in the microwave band by using the method of moments, and the relationship between the bistatic scattering coefficient and the BRDF of a rough surface is expressed. From the theory of the parameters of the rough surface BRDF, the parameters of the BRDF are obtained using a genetic algorithm. The BRDF of a rough surface is calculated using the obtained parameter values. Further, the fitting values and theoretical calculations of the BRDF are compared, and the optimization results are in agreement with the theoretical calculation results. Finally, a reference for BRDF modeling of a Gaussian rough surface in the microwave band is provided by the proposed method. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)
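
    The parameter-extraction step can be sketched with a small real-coded genetic algorithm fitting a toy scattering curve (a diffuse floor plus a specular lobe); the model form, bounds, population settings and data are assumptions for illustration and are much simpler than the BRDF parameterization used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # A simple stand-in scattering model: diffuse floor plus a specular lobe.
    def brdf_model(theta, k_d, k_s, m):
        return k_d + k_s * np.exp(-(theta ** 2) / (2.0 * m ** 2))

    theta = np.linspace(-0.6, 0.6, 41)                     # scattering angle (rad)
    target = brdf_model(theta, 0.05, 1.0, 0.12) + rng.normal(0, 0.01, theta.size)

    def fitness(p):
        return -np.sum((brdf_model(theta, *p) - target) ** 2)   # higher is better

    lo, hi = np.array([0.0, 0.0, 0.01]), np.array([0.5, 2.0, 0.5])
    pop = rng.uniform(lo, hi, size=(60, 3))

    for _ in range(200):                                   # generations
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)][-20:]            # keep the fittest
        children = []
        while len(children) < 40:
            a, b = parents[rng.integers(0, 20, 2)]
            child = np.where(rng.random(3) < 0.5, a, b)    # uniform crossover
            child += rng.normal(0, 0.02, 3)                # mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(p) for p in pop])]
    print("fitted k_d, k_s, m:", best.round(3))
    ```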

  9. Modification of transition's factor in the compact surface-potential-based MOSFET model

    Directory of Open Access Journals (Sweden)

    Kevkić Tijana

    2016-01-01

    Full Text Available The modification of an important transition's factor which enables continuous behavior of the surface potential in the entire useful range of MOSFET operation is presented. The various modifications have been made in order to obtain an accurate and computationally efficient compact MOSFET model. The best results have been achieved by introducing the generalized logistic function (GL) in the fitting of the considered factor. The smoothness and speed of the transition of the surface potential from the depletion to the strong inversion region can be controlled in this way. The results of the explicit model with this GL functional form for the transition's factor have been verified extensively with the numerical data. Good agreement was found for a wide range of substrate doping and oxide thickness. Moreover, the proposed approach can also be applied to the case where quantum mechanical effects play an important role in the inversion mode.
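
    A sketch of the generalized logistic (Richards) function in the form commonly used for such smooth transitions is given below; the parameter names and the normalized input variable are illustrative, not the paper's exact compact-model notation.

    ```python
    import numpy as np

    def generalized_logistic(x, A=0.0, K=1.0, B=1.0, nu=1.0, Q=1.0, M=0.0):
        """Richards' generalized logistic curve: lower asymptote A, upper asymptote K,
        growth rate B, asymmetry nu, and a shift controlled by Q and M."""
        return A + (K - A) / (1.0 + Q * np.exp(-B * (x - M))) ** (1.0 / nu)

    # Illustrative transition factor between depletion and strong inversion,
    # driven by a normalized gate-voltage-like variable (hypothetical scaling);
    # B and nu control the speed and smoothness of the transition.
    x = np.linspace(-5, 5, 11)
    print(generalized_logistic(x, B=2.0, nu=0.5).round(3))
    ```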

  10. Assessing model fit in latent class analysis when asymptotics do not hold

    NARCIS (Netherlands)

    van Kollenburg, Geert H.; Mulder, Joris; Vermunt, Jeroen K.

    2015-01-01

    The application of latent class (LC) analysis involves evaluating the LC model using goodness-of-fit statistics. To assess the misfit of a specified model, say with the Pearson chi-squared statistic, a p-value can be obtained using an asymptotic reference distribution. However, asymptotic p-values

  11. The effect of measurement quality on targeted structural model fit indices: A comment on Lance, Beck, Fan, and Carter (2016).

    Science.gov (United States)

    McNeish, Daniel; Hancock, Gregory R

    2018-03-01

    Lance, Beck, Fan, and Carter (2016) recently advanced 6 new fit indices and associated cutoff values for assessing data-model fit in the structural portion of traditional latent variable path models. The authors appropriately argued that, although most researchers' theoretical interest rests with the latent structure, they still rely on indices of global model fit that simultaneously assess both the measurement and structural portions of the model. As such, Lance et al. proposed indices intended to assess the structural portion of the model in isolation of the measurement model. Unfortunately, although these strategies separate the assessment of the structure from the fit of the measurement model, they do not isolate the structure's assessment from the quality of the measurement model. That is, even with a perfectly fitting measurement model, poorer quality (i.e., less reliable) measurements will yield a more favorable verdict regarding structural fit, whereas better quality (i.e., more reliable) measurements will yield a less favorable structural assessment. This phenomenon, referred to by Hancock and Mueller (2011) as the reliability paradox, affects not only traditional global fit indices but also those structural indices proposed by Lance et al. as well. Fortunately, as this comment will clarify, indices proposed by Hancock and Mueller help to mitigate this problem and allow the structural portion of the model to be assessed independently of both the fit of the measurement model as well as the quality of indicator variables contained therein. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  12. LOGISTIC FUNCTION PROFILE FIT: A least-squares program for fitting interface profiles to an extended logistic function

    International Nuclear Information System (INIS)

    Kirchhoff, William H.

    2012-01-01

    The extended logistic function provides a physically reasonable description of interfaces such as depth profiles or line scans of surface topological or compositional features. It describes these interfaces with the minimum number of parameters, namely, position, width, and asymmetry. Logistic Function Profile Fit (LFPF) is a robust, least-squares fitting program in which the nonlinear extended logistic function is linearized by a Taylor series expansion (equivalent to a Newton–Raphson approach) with no apparent introduction of bias in the analysis. The program provides reliable confidence limits for the parameters when systematic errors are minimal and provides a display of the residuals from the fit for the detection of systematic errors. The program will aid researchers in applying ASTM E1636-10, “Standard practice for analytically describing sputter-depth-profile and linescan-profile data by an extended logistic function,” and may also prove useful in applying ISO 18516: 2006, “Surface chemical analysis—Auger electron spectroscopy and x-ray photoelectron spectroscopy—determination of lateral resolution.” Examples are given of LFPF fits to a secondary ion mass spectrometry depth profile, an Auger surface line scan, and synthetic data generated to exhibit known systematic errors for examining the significance of such errors to the extrapolation of partial profiles.
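
    The fitting idea can be sketched with the basic (symmetric) logistic profile; the extended form of ASTM E1636 adds an asymmetry parameter on top of the position and width used here, and the depth-profile data below are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Basic logistic interface profile: the signal falls from y_pre to y_post
    # around position x0 with characteristic width w.
    def logistic_profile(x, y_pre, y_post, x0, w):
        return y_post + (y_pre - y_post) / (1.0 + np.exp((x - x0) / w))

    # Hypothetical sputter-depth-profile intensities.
    depth = np.linspace(0, 100, 60)
    rng = np.random.default_rng(6)
    signal = logistic_profile(depth, 1.0, 0.05, 55.0, 4.0) + rng.normal(0, 0.01, depth.size)

    params, cov = curve_fit(logistic_profile, depth, signal, p0=[1.0, 0.0, 50.0, 5.0])
    errors = np.sqrt(np.diag(cov))        # confidence estimates for the parameters
    print("position, width:", params[2].round(2), params[3].round(2),
          "+/-", errors[2:4].round(2))
    ```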

  13. Pavement Aging Model by Response Surface Modeling

    Directory of Open Access Journals (Sweden)

    Manzano-Ramírez A.

    2011-10-01

    Full Text Available In this work, surface course aging was modeled by Response Surface Methodology (RSM). The Marshall specimens were placed in a conventional oven for time and temperature conditions established on the basis of the environmental factors of the region where the surface course is constructed with AC-20 from the Ing. Antonio M. Amor refinery. Volatilized material (VM), load resistance increment (ΔL) and flow resistance increment (ΔF) models were developed by the RSM. Cylindrical specimens with real aging were extracted from the surface course pilot to evaluate the error of the models. The VM model was adequate; in contrast, the ΔL and ΔF models were almost adequate, with an error of 20% that was associated with other environmental factors which were not considered at the beginning of the research.
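
    The kind of second-order model that RSM typically produces can be sketched as a quadratic least-squares fit in two factors; the factor ranges, coefficients and response values below are hypothetical, not the AC-20 measurements.

    ```python
    import numpy as np

    # Second-order (quadratic) response surface in two factors, e.g. aging time t
    # and temperature T (design, ranges and response values here are hypothetical).
    def design_matrix(t, T):
        return np.column_stack([np.ones_like(t), t, T, t * T, t**2, T**2])

    rng = np.random.default_rng(8)
    t = rng.uniform(0, 48, 30)          # hours in the oven
    T = rng.uniform(60, 120, 30)        # degrees C
    y = 0.2 + 0.01 * t + 0.004 * T + 1e-4 * t * T + rng.normal(0, 0.02, 30)  # e.g. VM (%)

    X = design_matrix(t, T)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares RSM coefficients
    pred = X @ beta
    print("coefficients:", beta.round(5))
    print("R^2 =", 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2))
    ```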

  14. Improving Frozen Precipitation Density Estimation in Land Surface Modeling

    Science.gov (United States)

    Sparrow, K.; Fall, G. M.

    2017-12-01

    The Office of Water Prediction (OWP) produces high-value water supply and flood risk planning information through the use of operational land surface modeling. Improvements in diagnosing frozen precipitation density will benefit the NWS's meteorological and hydrological services by refining estimates of a significant and vital input into land surface models. A current common practice for handling the density of snow accumulation in a land surface model is to use a standard 10:1 snow-to-liquid-equivalent ratio (SLR). Our research findings suggest the possibility of a more skillful approach for assessing the spatial variability of precipitation density. We developed a 30-year SLR climatology for the coterminous US from version 3.22 of the Global Historical Climatology Network - Daily (GHCN-D) dataset. Our methods followed the approach described by Baxter (2005) to estimate mean climatological SLR values at GHCN-D sites in the US, Canada, and Mexico for the years 1986-2015. In addition to the Baxter criteria, the following refinements were made: tests were performed to eliminate SLR outliers and frequent reports of SLR = 10, a linear SLR vs. elevation trend was fitted to station SLR mean values to remove the elevation trend from the data, and detrended SLR residuals were interpolated using ordinary kriging with a spherical semivariogram model. The elevation values of each station were based on the GMTED 2010 digital elevation model and the elevation trend in the data was established via linear least squares approximation. The ordinary kriging procedure was used to interpolate the data into gridded climatological SLR estimates for each calendar month at a 0.125 degree resolution. To assess the skill of this climatology, we compared estimates from our SLR climatology with observations from the GHCN-D dataset to consider the potential use of this climatology as a first guess of frozen precipitation density in an operational land surface model. The difference in

  15. FitSKIRT: genetic algorithms to automatically fit dusty galaxies with a Monte Carlo radiative transfer code

    Science.gov (United States)

    De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.

    2013-02-01

    We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage that they have better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi-dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies in the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need for human intervention and biasing. The high level of automation makes it an ideal tool to use on larger sets of observed data.

  16. A scaled Lagrangian method for performing a least squares fit of a model to plant data

    International Nuclear Information System (INIS)

    Crisp, K.E.

    1988-01-01

    Due to measurement errors, even a perfect mathematical model will not be able to match all the corresponding plant measurements simultaneously. A further discrepancy may be introduced if an un-modelled change in conditions occurs within the plant which should have required a corresponding change in model parameters - e.g. a gradual deterioration in the performance of some component(s). Taking both these factors into account, what is required is that the overall discrepancy between the model predictions and the plant data is kept to a minimum. This process is known as 'model fitting'. A method is presented for minimising any function which consists of the sum of squared terms, subject to any constraints. Its most obvious application is in the process of model fitting, where a weighted sum of squares of the differences between model predictions and plant data is the function to be minimised. When implemented within existing Central Electricity Generating Board computer models, it will perform a least squares fit of a model to plant data within a single job submission. (author)

  17. The disconnected values model improves mental well-being and fitness in an employee wellness program.

    Science.gov (United States)

    Anshel, Mark H; Brinthaupt, Thomas M; Kang, Minsoo

    2010-01-01

    This study examined the effect of a 10-week wellness program on changes in physical fitness and mental well-being. The conceptual framework for this study was the Disconnected Values Model (DVM). According to the DVM, detecting the inconsistencies between negative habits and values (e.g., health, family, faith, character) and concluding that these "disconnects" are unacceptable promotes the need for health behavior change. Participants were 164 full-time employees at a university in the southeastern U.S. The program included fitness coaching and a 90-minute orientation based on the DVM. Multivariate Mixed Model analyses indicated significantly improved scores from pre- to post-intervention on selected measures of physical fitness and mental well-being. The results suggest that the Disconnected Values Model provides an effective cognitive-behavioral approach to generating health behavior change in a 10-week workplace wellness program.

  18. A Modelling Method of Bolt Joints Based on Basic Characteristic Parameters of Joint Surfaces

    Science.gov (United States)

    Yuansheng, Li; Guangpeng, Zhang; Zhen, Zhang; Ping, Wang

    2018-02-01

    Bolt joints are common in machine tools and have a direct impact on the overall performance of the tools. Therefore, the understanding of bolt joint characteristics is essential for improving machine design and assembly. Firstly, according to the data obtained from the experiment, the stiffness curve formula was fitted. Secondly, a finite element model of unit bolt joints such as bolt flange joints, bolt head joints, and thread joints was constructed, and lastly the stiffness parameters of joint surfaces were implemented in the model by the secondary development of ABAQUS. The finite element model of the bolt joint established by this method can simulate the contact state very well.

  19. GOSSIP: SED fitting code

    Science.gov (United States)

    Franzetti, Paolo; Scodeggio, Marco

    2012-10-01

    GOSSIP fits the electro-magnetic emission of an object (the SED, Spectral Energy Distribution) against synthetic models to find the simulated one that best reproduces the observed data. It builds up the observed SED of an object (or a large sample of objects) combining magnitudes in different bands and eventually a spectrum; then it performs a chi-square minimization fitting procedure versus a set of synthetic models. The fitting results are used to estimate a number of physical parameters like the Star Formation History, absolute magnitudes, stellar mass and their Probability Distribution Functions.
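
    The core chi-square template comparison can be sketched as follows (a generic illustration, not GOSSIP's implementation): each synthetic template is compared with the observed magnitudes, allowing a free magnitude offset, and the template with the lowest chi-square wins. The band set, template values and uncertainties below are invented.

    ```python
    import numpy as np

    # Observed magnitudes in a few bands with uncertainties (hypothetical values),
    # compared against a small grid of synthetic template magnitudes.
    obs_mag = np.array([22.1, 21.4, 20.9, 20.6])
    obs_err = np.array([0.10, 0.08, 0.08, 0.12])

    templates = {
        "young_burst":  np.array([22.5, 21.9, 21.5, 21.3]),
        "intermediate": np.array([22.0, 21.5, 21.0, 20.7]),
        "old_passive":  np.array([21.5, 21.0, 20.4, 19.9]),
    }

    # Chi-square for each template, allowing a free magnitude offset (flux scaling).
    results = {}
    for name, model in templates.items():
        offset = np.sum((obs_mag - model) / obs_err**2) / np.sum(1.0 / obs_err**2)
        chi2 = np.sum(((obs_mag - model - offset) / obs_err) ** 2)
        results[name] = chi2

    best = min(results, key=results.get)
    print(results, "best:", best)
    ```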

  20. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

    Science.gov (United States)

    W. Hasan, W. Z.

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system’s modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models using frequency-domain data to provide a state-space or transfer-function representation of the model. PMID:29351554

  1. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Science.gov (United States)

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models using frequency-domain data to provide a state-space or transfer-function representation of the model.

  2. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Directory of Open Access Journals (Sweden)

    A H Sabry

    Full Text Available The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models using frequency-domain data to provide a state-space or transfer-function representation of the model.

  3. Thickness and fit of mouthguards according to heating methods.

    Science.gov (United States)

    Mizuhashi, Fumi; Koide, Kaoru; Takahashi, Mutsumi

    2014-02-01

    The purpose of this study was to examine the difference in the thickness and fit of mouthguards made by four different heating methods of the mouthguard sheet material. A Sports Mouthguard® of 3.8-mm thickness was used in this study. Four heating methods were performed. In one method, the sheet was heated on only one side. In the other methods, one side of the sheet was heated first until the center of the sheet was displaced by 0.5 cm, 1.0 cm, and 1.5 cm from the baseline, and then turned upside down and heated. The sheets were adapted using a vacuum former when the heated sheets hung 1.5 cm from the baseline. We measured the thickness and fit of the mouthguard at the areas of the central incisor and first molar. The difference in thickness at the central incisor and first molar regions was analyzed by two-way ANOVA. The difference in fit with different heating methods was analyzed by one-way ANOVA. The results showed that the thickness of the mouthguard differed significantly between the central incisor and first molar areas and among the heating methods. The fit of the mouthguard at the central incisor and first molar areas was significantly different among the heating methods, with a better fit when the heated surface of the sheet contacted the surface of the working model. This finding may help to fabricate accurate mouthguards. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. Fluctuating fitness shapes the clone-size distribution of immune repertoires.

    Science.gov (United States)

    Desponds, Jonathan; Mora, Thierry; Walczak, Aleksandra M

    2016-01-12

    The adaptive immune system relies on the diversity of receptors expressed on the surface of B and T cells to protect the organism from a vast array of pathogenic threats. The proliferation and degradation dynamics of different cell types (B cells, T cells, naive, memory) are governed by a variety of antigenic and environmental signals, yet the observed clone sizes follow a universal power-law distribution. Guided by this reproducibility we propose effective models of somatic evolution where cell fate depends on an effective fitness. This fitness is determined by growth factors acting either on clones of cells with the same receptor responding to specific antigens, or directly on single cells with no regard for clones. We identify fluctuations in the fitness acting specifically on clones as the essential ingredient leading to the observed distributions. Combining our models with experiments, we characterize the scale of fluctuations in antigenic environments and we provide tools to identify the relevant growth signals in different tissues and organisms. Our results generalize to any evolving population in a fluctuating environment.

  5. Detection analysis of surface hydroxyl active sites and simulation calculation of the surface dissociation constants of aqueous diatomite suspensions

    International Nuclear Information System (INIS)

    Ma, Shu-Cui; Wang, Zhi-Gang; Zhang, Ji-Lin; Sun, De-Hui; Liu, Gui-Xia

    2015-01-01

    Highlights: • To examine surface hydroxyl functional groups of the calcined diatomite by TGA-DSC, FTIR, and XPS. • To calculate the optimized log K1, log K2 and log C values and the surface species distribution of each surface reactive site using ProtoFit and PHREEQC, respectively. - Abstract: The surface properties of the diatomite were investigated using nitrogen adsorption/desorption isotherms, TG-DSC, FTIR, and XPS, and the surface protonation–deprotonation behavior was determined by a continuous acid–base potentiometric titration technique. The diatomite sample, with a porous honeycomb structure, has a BET specific surface area of 10.21 m²/g and large numbers of surface hydroxyl functional groups (i.e. ≡Si-OH, ≡Fe-OH, and ≡Al-OH). These surface hydroxyls can be protonated or deprotonated depending on the pH of the suspension. The experimental potentiometric data in two different ionic strength solutions (0.1 and 0.05 mol/L NaCl) were fitted using the ProtoFit GUI V2.1 program by applying a diffuse double layer model (DLM) with three amphoteric sites and minimizing the sum of squares between a dataset derivative function and a model derivative function. The optimized surface parameters (i.e. surface dissociation constants (log K1, log K2) and surface site concentrations (log C)) of the sample were obtained. Based on the optimized surface parameters, the surface species distribution was calculated using the free program PHREEQC 3.1.2. Thus, this work reveals considerable new information about surface protonation–deprotonation processes and the surface adsorptive behavior of the diatomite, which helps us to effectively use the cheap and cheerful diatomite clay adsorbent.
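
    For readers unfamiliar with this kind of optimization, the sketch below fits the two dissociation constants and the site concentration of a single amphoteric surface site to synthetic charge-versus-pH data by least squares in Python. It deliberately omits the electrostatic (diffuse double layer) correction that ProtoFit applies, the sign convention for the constants is an arbitrary choice for the sketch, and all numbers are invented for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def surface_charge(pH, logK1, logK2, site_conc):
            """Net proton charge (mol/L) of one amphoteric >S-OH site, no electrostatics."""
            h = 10.0 ** (-pH)
            K1 = 10.0 ** (-logK1)   # >S-OH2+ <=> >S-OH + H+
            K2 = 10.0 ** (-logK2)   # >S-OH   <=> >S-O- + H+
            denom = 1.0 + h / K1 + K2 / h
            return site_conc * ((h / K1) - (K2 / h)) / denom

        # Synthetic titration data (illustrative only).
        pH_obs = np.linspace(3.0, 10.0, 15)
        rng = np.random.default_rng(1)
        q_obs = surface_charge(pH_obs, 4.5, 8.0, 1e-4) + 2e-6 * rng.standard_normal(pH_obs.size)

        popt, _ = curve_fit(surface_charge, pH_obs, q_obs, p0=[4.0, 9.0, 5e-5])
        print("fitted logK1, logK2, site concentration:", popt)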

  6. Detection analysis of surface hydroxyl active sites and simulation calculation of the surface dissociation constants of aqueous diatomite suspensions

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Shu-Cui [State Key Laboratory of Rare Earth Resource Utilization, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun 130022 (China); Key Laboratory of Applied Chemistry and Nanotechnology at Universities of Jilin Province, Changchun University of Science and Technology, Changchun 130022 (China); Wang, Zhi-Gang [State Key Laboratory of Rare Earth Resource Utilization, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun 130022 (China); Zhang, Ji-Lin, E-mail: zjl@ciac.ac.cn [State Key Laboratory of Rare Earth Resource Utilization, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun 130022 (China); Sun, De-Hui [Changchun Institute Technology, Changchun 130012 (China); Liu, Gui-Xia, E-mail: liuguixia22@163.com [Key Laboratory of Applied Chemistry and Nanotechnology at Universities of Jilin Province, Changchun University of Science and Technology, Changchun 130022 (China)

    2015-02-01

    Highlights: • To examine surface hydroxyl functional groups of the calcined diatomite by TGA-DSC, FTIR, and XPS. • To calculate the optimized log K1, log K2 and log C values and the surface species distribution of each surface reactive site using ProtoFit and PHREEQC, respectively. - Abstract: The surface properties of the diatomite were investigated using nitrogen adsorption/desorption isotherms, TG-DSC, FTIR, and XPS, and the surface protonation–deprotonation behavior was determined by a continuous acid–base potentiometric titration technique. The diatomite sample, with a porous honeycomb structure, has a BET specific surface area of 10.21 m²/g and large numbers of surface hydroxyl functional groups (i.e. ≡Si-OH, ≡Fe-OH, and ≡Al-OH). These surface hydroxyls can be protonated or deprotonated depending on the pH of the suspension. The experimental potentiometric data in two different ionic strength solutions (0.1 and 0.05 mol/L NaCl) were fitted using the ProtoFit GUI V2.1 program by applying a diffuse double layer model (DLM) with three amphoteric sites and minimizing the sum of squares between a dataset derivative function and a model derivative function. The optimized surface parameters (i.e. surface dissociation constants (log K1, log K2) and surface site concentrations (log C)) of the sample were obtained. Based on the optimized surface parameters, the surface species distribution was calculated using the free program PHREEQC 3.1.2. Thus, this work reveals considerable new information about surface protonation–deprotonation processes and the surface adsorptive behavior of the diatomite, which helps us to effectively use the cheap and cheerful diatomite clay adsorbent.

  7. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    Science.gov (United States)

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, for 25-item dichotomous scales and sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  8. The bystander effect model of Brenner and Sachs fitted to lung cancer data in 11 cohorts of underground miners, and equivalence of fit of a linear relative risk model with adjustment for attained age and age at exposure

    International Nuclear Information System (INIS)

    Little, M P

    2004-01-01

    Bystander effects following exposure to α-particles have been observed in many experimental systems, and imply that linearly extrapolating low dose risks from high dose data might materially underestimate risk. Brenner and Sachs (2002 Int. J. Radiat. Biol. 78 593-604; 2003 Health Phys. 85 103-8) have recently proposed a model of the bystander effect which they use to explain the inverse dose rate effect observed for lung cancer in underground miners exposed to radon daughters. In this paper we fit the model of the bystander effect proposed by Brenner and Sachs to 11 cohorts of underground miners, taking account of the covariance structure of the data and the period of latency between the development of the first pre-malignant cell and clinically overt cancer. We also fitted a simple linear relative risk model, with adjustment for age at exposure and attained age. The methods that we use for fitting both models are different from those used by Brenner and Sachs, in particular taking account of the covariance structure, which they did not, and omitting certain unjustifiable adjustments to the miner data. The fit of the original model of Brenner and Sachs (with 0 y period of latency) is generally poor, although it is much improved by assuming a 5 or 6 y period of latency from the first appearance of a pre-malignant cell to cancer. The fit of this latter model is equivalent to that of a linear relative risk model with adjustment for age at exposure and attained age. In particular, both models are capable of describing the observed inverse dose rate effect in this data set

  9. Fitting measurement models to vocational interest data: are dominance models ideal?

    Science.gov (United States)

    Tay, Louis; Drasgow, Fritz; Rounds, James; Williams, Bruce A

    2009-09-01

    In this study, the authors examined the item response process underlying 3 vocational interest inventories: the Occupational Preference Inventory (C.-P. Deng, P. I. Armstrong, & J. Rounds, 2007), the Interest Profiler (J. Rounds, T. Smith, L. Hubert, P. Lewis, & D. Rivkin, 1999; J. Rounds, C. M. Walker, et al., 1999), and the Interest Finder (J. E. Wall & H. E. Baker, 1997; J. E. Wall, L. L. Wise, & H. E. Baker, 1996). Item response theory (IRT) dominance models, such as the 2-parameter and 3-parameter logistic models, assume that item response functions (IRFs) are monotonically increasing as the latent trait increases. In contrast, IRT ideal point models, such as the generalized graded unfolding model, have IRFs that peak where the latent trait matches the item. Ideal point models are expected to fit better because vocational interest inventories ask about typical behavior, as opposed to requiring maximal performance. Results show that across all 3 interest inventories, the ideal point model provided better descriptions of the response process. The importance of specifying the correct item response model for precise measurement is discussed. In particular, scores computed by a dominance model were shown to be sometimes illogical: individuals endorsing mostly realistic or mostly social items were given similar scores, whereas scores based on an ideal point model were sensitive to which type of items respondents endorsed.
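
    The distinction between the two families of item response functions can be made concrete with a few lines of Python; the simple squared-distance unfolding kernel below stands in for the (more elaborate) generalized graded unfolding model, and the parameter values are arbitrary.

        import numpy as np

        def irf_dominance_2pl(theta, a, b):
            """Dominance (2PL) IRF: endorsement probability rises monotonically with theta."""
            return 1.0 / (1.0 + np.exp(-a * (theta - b)))

        def irf_ideal_point(theta, a, b):
            """Single-peaked (ideal point) IRF: endorsement peaks where theta matches item location b."""
            return np.exp(-a * (theta - b) ** 2)

        theta = np.linspace(-3.0, 3.0, 7)
        print("theta      :", theta)
        print("dominance  :", np.round(irf_dominance_2pl(theta, a=1.5, b=0.0), 2))
        print("ideal point:", np.round(irf_ideal_point(theta, a=0.8, b=0.0), 2))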

  10. A new kinetic model based on the remote control mechanism to fit experimental data in the selective oxidation of propene into acrolein on biphasic catalysts

    Energy Technology Data Exchange (ETDEWEB)

    Abdeldayem, H.M.; Ruiz, P.; Delmon, B. [Unite de Catalyse et Chimie des Materiaux Divises, Universite Catholique de Louvain, Louvain-La-Neuve (Belgium); Thyrion, F.C. [Unite des Procedes Faculte des Sciences Appliquees, Universite Catholique de Louvain, Louvain-La-Neuve (Belgium)

    1998-12-31

    A new kinetic model for a more accurate and detailed fitting of the experimental data is proposed. The model is based on the remote control mechanism (RCM). The RCM assumes that some oxides (called `donors`) are able to activate molecular oxygen transforming it to very active mobile species (spillover oxygen (O{sub OS})). O{sub OS} migrates onto the surface of the other oxide (called `acceptor`) where it creates and/or regenerates the active sites during the reaction. The model contains tow terms, one considering the creation of selective sites and the other the catalytic reaction at each site. The model has been tested in the selective oxidation of propene into acrolein (T=380, 400, 420 C; oxygen and propene partial pressures between 38 and 152 Torr). Catalysts were prepared as pure MoO{sub 3} (acceptor) and their mechanical mixtures with {alpha}-Sb{sub 2}O{sub 4} (donor) in different proportions. The presence of {alpha}-Sb{sub 2}O{sub 4} changes the reaction order, the activation energy of the reaction and the number of active sites of MoO{sub 3} produced by oxygen spillover. These changes are consistent with a modification in the degree of irrigation of the surface by oxygen spillover. The fitting of the model to experimental results shows that the number of sites created by O{sub SO} increases with the amount of {alpha}-Sb{sub 2}O{sub 4}. (orig.)

  11. Fitted HBT radii versus space-time variances in flow-dominated models

    International Nuclear Information System (INIS)

    Lisa, Mike; Frodermann, Evan; Heinz, Ulrich

    2007-01-01

    The inability of otherwise successful dynamical models to reproduce the 'HBT radii' extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the 'RHIC HBT Puzzle'. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source which can be directly computed from the emission function, without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models some of which exhibit significant deviations from simple Gaussian behaviour. By Fourier transforming the emission function we compute the 2-particle correlation function and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and measured HBT radii remain, we show that a more 'apples-to-apples' comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data. (author)
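
    The procedure of Fourier transforming an emission function and then Gaussian-fitting the resulting correlation function can be sketched in one dimension as below; the exponential source profile and the units are illustrative assumptions, not the hydrodynamic emission functions used in the paper, and the fit here is numerical rather than the analytical Gaussian fit described by the authors.

        import numpy as np
        from scipy.optimize import curve_fit

        # A deliberately non-Gaussian 1D emission profile (exponential tails), in fm.
        x = np.linspace(-40.0, 40.0, 2001)
        S = np.exp(-np.abs(x) / 4.0)

        # Two-particle correlation function C(q) = 1 + |FT[S](q)|^2 / |FT[S](0)|^2.
        q = np.linspace(0.0, 0.25, 60)          # in 1/fm for this toy example
        ft = np.array([np.trapz(S * np.exp(1j * qi * x), x) for qi in q])
        C = 1.0 + np.abs(ft) ** 2 / np.abs(ft[0]) ** 2

        # Gaussian fit C(q) = 1 + lambda * exp(-(R q)^2), mimicking the experimental procedure.
        def gauss(q, lam, R):
            return 1.0 + lam * np.exp(-(R * q) ** 2)

        (lam, R), _ = curve_fit(gauss, q, C, p0=[1.0, 4.0])

        # Radius from the space variance of the source itself, for comparison.
        R_var = np.sqrt(np.trapz(x ** 2 * S, x) / np.trapz(S, x))
        print(f"Gaussian-fit radius R = {R:.2f} fm, variance radius = {R_var:.2f} fm")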

  12. Comparison of a layered slab and an atlas head model for Monte Carlo fitting of time-domain near-infrared spectroscopy data of the adult head.

    Science.gov (United States)

    Selb, Juliette; Ogden, Tyler M; Dubb, Jay; Fang, Qianqian; Boas, David A

    2014-01-01

    Near-infrared spectroscopy (NIRS) estimations of the adult brain baseline optical properties based on a homogeneous model of the head are known to introduce significant contamination from extracerebral layers. More complex models have been proposed and occasionally applied to in vivo data, but their performances have never been characterized on realistic head structures. Here we implement a flexible fitting routine of time-domain NIRS data using graphics processing unit based Monte Carlo simulations. We compare the results for two different geometries: a two-layer slab with variable thickness of the first layer and a template atlas head registered to the subject's head surface. We characterize the performance of the Monte Carlo approaches for fitting the optical properties from simulated time-resolved data of the adult head. We show that both geometries provide better results than the commonly used homogeneous model, and we quantify the improvement in terms of accuracy, linearity, and cross-talk from extracerebral layers.

  13. A cautionary note on the use of information fit indexes in covariance structure modeling with means

    NARCIS (Netherlands)

    Wicherts, J.M.; Dolan, C.V.

    2004-01-01

    Information fit indexes such as Akaike Information Criterion, Consistent Akaike Information Criterion, Bayesian Information Criterion, and the expected cross validation index can be valuable in assessing the relative fit of structural equation models that differ regarding restrictiveness. In cases

  14. Assessing performance of Bayesian state-space models fit to Argos satellite telemetry locations processed with Kalman filtering.

    Directory of Open Access Journals (Sweden)

    Mónica A Silva

    Full Text Available Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with the KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with ARGOS satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to "true" GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, location and behavioural states estimated by switching state-space models (SSSM) fitted to data derived from KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6 ± 5.6 km) was nearly half that of LS estimates (11.6 ± 8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales' behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates.

  15. CONTROLLING INFLUENCE OF MAGNETIC FIELD ON SOLAR WIND OUTFLOW: AN INVESTIGATION USING CURRENT SHEET SOURCE SURFACE MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Poduval, B., E-mail: bpoduval@spacescience.org [Space Science Institute, Boulder, CO 80303 (United States)

    2016-08-10

    This Letter presents the results of an investigation into the controlling influence of the large-scale magnetic field of the Sun in determining the solar wind outflow, using two magnetostatic coronal models: the current sheet source surface (CSSS) and the potential field source surface. For this, we made use of the Wang and Sheeley inverse correlation between the magnetic flux expansion rate (FTE) and the observed solar wind speed (SWS) at 1 au. During the period of study, which extended over solar cycle 23 and the beginning of solar cycle 24, we found that the coefficients of the fitted quadratic equation representing the FTE–SWS inverse relation exhibited significant temporal variation, implying a changing pattern of the influence of FTE on SWS over time. A particularly noteworthy feature is an anomaly in the behavior of the fitted coefficients during the extended minimum, 2008–2010 (CRs 2073–2092), which is considered to be due to the particularly complex nature of the solar magnetic field during this period. However, this variation was significant only for the CSSS model, and did not show a systematic dependence on the phase of the solar cycle. Further, we noticed that the CSSS model demonstrated better solar wind prediction during the period of study, which we attribute to the treatment of volume and sheet currents throughout the corona and the more accurate tracing of footpoint locations resulting from the geometry of the model.
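
    A minimal sketch of the kind of quadratic FTE–SWS fit whose coefficients can be tracked from rotation to rotation is given below in Python; the expansion factors and wind speeds are invented numbers, and the exact functional form used in the Letter may differ.

        import numpy as np

        # Illustrative flux-tube expansion factors (FTE) and solar wind speeds (km/s)
        # for one Carrington rotation; the values are made up.
        fte = np.array([2.0, 3.5, 6.0, 10.0, 18.0, 35.0, 60.0, 120.0])
        sws = np.array([750.0, 700.0, 640.0, 560.0, 480.0, 420.0, 380.0, 340.0])

        # Quadratic fit of SWS against log10(FTE); repeating this per rotation gives a
        # time series of coefficients whose variation can then be examined.
        c2, c1, c0 = np.polyfit(np.log10(fte), sws, deg=2)
        print(f"SWS ~ {c0:.1f} + {c1:.1f}*log10(FTE) + {c2:.1f}*log10(FTE)^2")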

  16. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    Science.gov (United States)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby

    2013-12-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
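
    The core of the stochastic MCMC-Bayesian calibration described above can be illustrated with a random-walk Metropolis sampler for a single parameter of a toy runoff model; the surrogate model, the prior bounds and the noise level are assumptions of the sketch and have nothing to do with CLM4 itself.

        import numpy as np

        rng = np.random.default_rng(2)

        def model_runoff(param, forcing):
            """Toy surrogate 'land model': runoff as a simple function of one parameter."""
            return param * np.sqrt(forcing)

        forcing = rng.uniform(1.0, 10.0, 50)
        obs = model_runoff(0.7, forcing) + rng.normal(0.0, 0.1, 50)   # synthetic observations

        def log_posterior(param):
            if not 0.0 < param < 5.0:                 # flat prior on (0, 5)
                return -np.inf
            resid = obs - model_runoff(param, forcing)
            return -0.5 * np.sum((resid / 0.1) ** 2)  # Gaussian likelihood

        # Random-walk Metropolis sampling.
        chain, current, lp = [], 1.0, log_posterior(1.0)
        for _ in range(5000):
            prop = current + rng.normal(0.0, 0.05)
            lp_prop = log_posterior(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                current, lp = prop, lp_prop
            chain.append(current)

        posterior = np.array(chain[1000:])            # discard burn-in
        print(f"posterior mean = {posterior.mean():.3f}, "
              f"95% interval = ({np.percentile(posterior, 2.5):.3f}, "
              f"{np.percentile(posterior, 97.5):.3f})")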

  17. The FIT Model - Fuel-cycle Integration and Tradeoffs

    International Nuclear Information System (INIS)

    Piet, Steven J.; Soelberg, Nick R.; Bays, Samuel E.; Pereira, Candido; Pincock, Layne F.; Shaber, Eric L.; Teague, Melissa C.; Teske, Gregory M.; Vedros, Kurt G.

    2010-01-01

    All mass streams from fuel separation and fabrication are products that must meet some set of product criteria - fuel feedstock impurity limits, waste acceptance criteria (WAC), material storage (if any), or recycle material purity requirements such as zirconium for cladding or lanthanides for industrial use. These must be considered in a systematic and comprehensive way. The FIT model and the 'system losses study' team that developed it (Shropshire 2009, Piet 2010) are an initial step by the FCR&D program toward a global analysis that accounts for the requirements and capabilities of each component, as well as major material flows within an integrated fuel cycle. This will help the program identify near-term R&D needs and set longer-term goals. The question originally posed to the 'system losses study' was the cost of separation, fuel fabrication, waste management, etc. versus the separation efficiency. In other words, are the costs associated with marginal reductions in separations losses (or improvements in product recovery) justified by the gains in the performance of other systems? We have learned that that is the wrong question. The right question is: how does one adjust the compositions and quantities of all mass streams, given uncertain product criteria, to balance competing objectives including cost? FIT is a method to analyze different fuel cycles using common bases to determine how chemical performance changes in one part of a fuel cycle (say used fuel cooling times or separation efficiencies) affect other parts of the fuel cycle. FIT estimates impurities in fuel and waste via a rough estimate of physics and mass balance for a set of technologies. If feasibility is an issue for a set, as it is for 'minimum fuel treatment' approaches such as melt refining and AIROX, it can help to make an estimate of how performances would have to change to achieve feasibility.

  18. APPLICATION OF A SURFACE-RENEWAL MODEL TO PERMEATE-FLUX DATA FOR CONSTANT-PRESSURE CROSS-FLOW MICROFILTRATION WITH DEAN VORTICES

    Directory of Open Access Journals (Sweden)

    G. Idan

    2015-06-01

    Full Text Available The introduction of flow instabilities into a microfiltration process can dramatically change several elements such as the surface-renewal rate, permeate flux, specific cake resistance, and cake buildup on the membrane in a positive way. A recently developed surface-renewal model for constant-pressure, cross-flow microfiltration (Hasan et al., 2013) is applied to the permeate-flux data reported by Mallubhotla and Belfort (1997), one set of which included flow instabilities (Dean vortices) while the other set did not. The surface-renewal model has two forms - the complete model and an approximate model. For the complete model, the introduction of vortices leads to a 53% increase in the surface-renewal rate, which increases the limiting (i.e., steady-state) permeate flux by 30%, decreases the specific cake resistance by 14.5% and decreases the limiting cake mass by 15.5% compared to operation without vortices. For the approximate model, a 50% increase in the value of the surface-renewal rate is shown due to vortices, which increases the limiting permeate flux by 30%, decreases the specific cake resistance by 10.5% and decreases the limiting cake mass by 13.7%. The cake-filtration version of the critical-flux model of microfiltration (Field et al., 1995) is also compared against the experimental permeate-flux data of Mallubhotla and Belfort (1997). Although this model can represent the data, the quality of its fit is inferior compared to that of the surface-renewal model.

  19. An Improved Cognitive Model of the Iowa and Soochow Gambling Tasks With Regard to Model Fitting Performance and Tests of Parameter Consistency

    Directory of Open Access Journals (Sweden)

    Junyi eDai

    2015-03-01

    Full Text Available The Iowa Gambling Task (IGT) and the Soochow Gambling Task (SGT) are two experience-based risky decision-making tasks for examining decision-making deficits in clinical populations. Several cognitive models, including the expectancy-valence learning model (EVL) and the prospect valence learning model (PVL), have been developed to disentangle the motivational, cognitive, and response processes underlying the explicit choices in these tasks. The purpose of the current study was to develop an improved model that can fit empirical data better than the EVL and PVL models and, in addition, produce more consistent parameter estimates across the IGT and SGT. Twenty-six opiate users (mean age 34.23; SD 8.79) and 27 control participants (mean age 35; SD 10.44) completed both tasks. Eighteen cognitive models varying in evaluation, updating, and choice rules were fit to individual data and their performances were compared to that of a statistical baseline model to find a best fitting model. The results showed that the model combining the prospect utility function treating gains and losses separately, the decay-reinforcement updating rule, and the trial-independent choice rule performed the best in both tasks. Furthermore, the winning model produced more consistent individual parameter estimates across the two tasks than any of the other models.
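
    To make the winning model's ingredients concrete, the sketch below evaluates the log-likelihood of a choice sequence under a PVL-type model with a prospect utility function, decay-reinforcement updating and a trial-independent softmax choice rule; the trial data are random placeholders, not IGT or SGT recordings, and the parameterization follows common PVL conventions rather than the authors' exact implementation.

        import numpy as np

        def pvl_loglik(params, choices, outcomes, n_decks=4):
            """Log-likelihood of a choice sequence under a PVL-type model."""
            alpha, lam, decay, c = params       # utility shape, loss aversion, decay, consistency
            theta = 3.0 ** c - 1.0              # trial-independent choice sensitivity
            ev = np.zeros(n_decks)
            loglik = 0.0
            for deck, payoff in zip(choices, outcomes):
                p = np.exp(theta * ev) / np.sum(np.exp(theta * ev))
                loglik += np.log(p[deck] + 1e-12)
                # prospect utility: gains and losses treated separately
                u = payoff ** alpha if payoff >= 0 else -lam * (-payoff) ** alpha
                # decay-reinforcement update: all expectancies decay, chosen deck is reinforced
                ev *= decay
                ev[deck] += u
            return loglik

        # Placeholder data: 10 random deck choices and net payoffs.
        rng = np.random.default_rng(3)
        choices = rng.integers(0, 4, 10)
        outcomes = rng.choice([100.0, -50.0, -250.0], 10)
        print(pvl_loglik([0.5, 1.5, 0.8, 1.0], choices, outcomes))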

  20. Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes

    Science.gov (United States)

    Leite, Walter L.; Stapleton, Laura M.

    2011-01-01

    In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…

  1. Fitting the Fractional Polynomial Model to Non-Gaussian Longitudinal Data

    Directory of Open Access Journals (Sweden)

    Ji Hoon Ryoo

    2017-08-01

    Full Text Available As in cross-sectional studies, longitudinal studies involve non-Gaussian data such as binomial, Poisson, gamma, and inverse-Gaussian distributions, and multivariate exponential families. A number of statistical tools have thus been developed to deal with non-Gaussian longitudinal data, including analytic techniques to estimate parameters in both fixed and random effects models. However, growth modeling with non-Gaussian data is still somewhat limited when the transformed expectation of the response, via a linear predictor, is considered as a functional form of the explanatory variables. In this study, we introduce a fractional polynomial model (FPM) that can be applied to model non-linear growth with non-Gaussian longitudinal data and demonstrate its use by fitting two empirical binary and count data models. The results clearly show the efficiency and flexibility of the FPM for such applications.
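
    A minimal cross-sectional sketch of the fractional polynomial idea for count data is given below: each candidate power from the conventional FP1 set is tried as a transformation of time in a Poisson GLM and the power with the smallest deviance is kept. The synthetic data, and the omission of the random effects needed for a true longitudinal fit, are simplifications for illustration.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        t = np.linspace(0.5, 5.0, 200)                       # time, positive so all FP powers are defined
        y = rng.poisson(np.exp(0.3 + 0.8 * np.sqrt(t)))      # synthetic count outcome

        def fp_term(t, p):
            """First-degree fractional polynomial term; power 0 is defined as log(t)."""
            return np.log(t) if p == 0 else t ** p

        powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]             # conventional FP1 power set
        best = None
        for p in powers:
            X = sm.add_constant(fp_term(t, p))
            fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
            if best is None or fit.deviance < best[1]:
                best = (p, fit.deviance, fit.params)

        print(f"best FP1 power: {best[0]}, deviance: {best[1]:.1f}, coefficients: {best[2]}")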

  2. Response mechanism for surface acoustic wave gas sensors based on surface-adsorption.

    Science.gov (United States)

    Liu, Jiansheng; Lu, Yanyan

    2014-04-16

    A theoretical model is established to describe the response mechanism of surface acoustic wave (SAW) gas sensors based on physical adsorption on the detector surface. Wohltjen's method is utilized to describe the relationship between the sensor output (the frequency shift of the SAW oscillator) and the mass loaded on the detector surface. The Brunauer-Emmett-Teller (BET) formula and its improved form are introduced to depict the adsorption behavior of gas on the detector surface. By combining the two methods, we obtain a theoretical model for the response mechanism of SAW gas sensors. Using a commercial SAW gas chromatography (GC) analyzer, an experiment is performed to measure the frequency shifts caused by different concentrations of dimethyl methylphosphonate (DMMP). The parameters in the model are given by fitting the experimental results, and the theoretical curve agrees well with the experimental data.
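
    The combination of a mass-loading response and a BET adsorption isotherm can be sketched as a two-parameter curve fit; the relative-pressure and frequency-shift values below are invented, and the single amplitude parameter lumps together the mass sensitivity and the monolayer capacity rather than reproducing the paper's full model.

        import numpy as np
        from scipy.optimize import curve_fit

        def saw_shift(p_rel, amp, c_bet):
            """SAW frequency shift assumed proportional to adsorbed mass, with coverage
            given by the BET isotherm as a function of relative pressure p_rel."""
            return amp * (c_bet * p_rel) / ((1.0 - p_rel) * (1.0 - p_rel + c_bet * p_rel))

        # Illustrative data: relative vapour pressure vs. measured frequency shift (Hz).
        p_rel = np.array([0.02, 0.05, 0.10, 0.20, 0.30, 0.40, 0.50])
        df = np.array([40.0, 85.0, 150.0, 260.0, 360.0, 470.0, 600.0])

        (amp, c_bet), _ = curve_fit(saw_shift, p_rel, df, p0=[500.0, 10.0])
        print(f"fitted amplitude = {amp:.1f} Hz, BET constant c = {c_bet:.2f}")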

  3. Tanning Shade Gradations of Models in Mainstream Fitness and Muscle Enthusiast Magazines: Implications for Skin Cancer Prevention in Men.

    Science.gov (United States)

    Basch, Corey H; Hillyer, Grace Clarke; Ethan, Danna; Berdnik, Alyssa; Basch, Charles E

    2015-07-01

    Tanned skin has been associated with perceptions of fitness and social desirability. Portrayal of models in magazines may reflect and perpetuate these perceptions. Limited research has investigated tanning shade gradations of models in men's versus women's fitness and muscle enthusiast magazines. Such findings are relevant in light of increased incidence and prevalence of melanoma in the United States. This study evaluated and compared tanning shade gradations of adult Caucasian male and female model images in mainstream fitness and muscle enthusiast magazines. Sixty-nine U.S. magazine issues (spring and summer, 2013) were utilized. Two independent reviewers rated tanning shade gradations of adult Caucasian male and female model images on magazines' covers, advertisements, and feature articles. Shade gradations were assessed using stock photographs of Caucasian models with varying levels of tanned skin on an 8-shade scale. A total of 4,683 images were evaluated. Darkest tanning shades were found among males in muscle enthusiast magazines and lightest among females in women's mainstream fitness magazines. By gender, male model images were 54% more likely to portray a darker tanning shade. In this study, images in men's (vs. women's) fitness and muscle enthusiast magazines portrayed Caucasian models with darker skin shades. Despite these magazines' fitness-related messages, pro-tanning images may promote attitudes and behaviors associated with higher skin cancer risk. To date, this is the first study to explore tanning shades in men's magazines of these genres. Further research is necessary to identify effects of exposure to these images among male readers. © The Author(s) 2014.

  4. Source Localization with Acoustic Sensor Arrays Using Generative Model Based Fitting with Sparse Constraints

    Directory of Open Access Journals (Sweden)

    Javier Macias-Guarasa

    2012-10-01

    Full Text Available This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints in the parameters of the model are included, enforcing the number of simultaneous active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with the SRP-PHAT strategies.

  5. An approximation to the adaptive exponential integrate-and-fire neuron model allows fast and predictive fitting to physiological data

    Directory of Open Access Journals (Sweden)

    Loreen eHertäg

    2012-09-01

    Full Text Available For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ('in-vivo-like') input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a 'high-throughput' model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
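
    The flavor of fitting a closed-form firing-rate expression to f-I data can be shown with the simpler leaky integrate-and-fire neuron (the paper derives the corresponding expressions for the AdEx model); the data below are generated from the formula itself with added noise, and all parameter values are placeholders.

        import numpy as np
        from scipy.optimize import curve_fit

        def lif_rate(I, tau_m, R, v_th, t_ref):
            """Closed-form LIF firing rate (Hz) for rest/reset at 0 mV and threshold v_th (mV);
            tau_m and t_ref in ms, R in MOhm, I in nA."""
            drive = R * I
            rate = np.zeros_like(I, dtype=float)
            supra = drive > v_th
            rate[supra] = 1000.0 / (t_ref + tau_m * np.log(drive[supra] / (drive[supra] - v_th)))
            return rate

        # Synthetic f-I points generated from known parameters plus noise (illustrative).
        rng = np.random.default_rng(5)
        I_obs = np.linspace(0.05, 0.6, 12)
        f_obs = lif_rate(I_obs, 20.0, 80.0, 12.0, 2.0) + rng.normal(0.0, 1.0, I_obs.size)

        popt, _ = curve_fit(lif_rate, I_obs, f_obs, p0=[15.0, 60.0, 10.0, 1.0], maxfev=20000)
        print("fitted tau_m (ms), R (MOhm), v_th (mV), t_ref (ms):", popt)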

  6. [An experimental study on the effect of different optical impression methods on marginal and internal fit of all-ceramic crowns].

    Science.gov (United States)

    Tan, Fa-Bing; Wang, Lu; Fu, Gang; Wu, Shu-Hong; Jin, Ping

    2010-02-01

    To study the effect of different optical impression methods in the Cerec 3D/Inlab MC XL system on the marginal and internal fit of all-ceramic crowns. A right mandibular first molar in the standard model was prepared for a full crown and replicated into thirty-two plaster casts. Sixteen of them were selected randomly for bonding crowns and the others were used for taking optical impressions; the direct optical impression method was used for half of these and the indirect method for the other half, and eight Cerec Blocs all-ceramic crowns were then manufactured for each group. The fit of the all-ceramic crowns was evaluated by modified United States Public Health Service (USPHS) criteria and scanning electron microscope (SEM) imaging, and the data were statistically analyzed with SAS 9.1 software. The clinically acceptable rate for all marginal measurement sites was 87.5% according to the USPHS criteria. There was no statistically significant difference in marginal fit between the direct and indirect method groups (P > 0.05). With SEM imaging, all marginal measurement sites were less than 120 microm and no statistically significant difference was found between the direct and indirect method groups in terms of marginal or internal fit (P > 0.05), but the direct method group showed better fit than the indirect method group at the mesial, lingual, buccal and occlusal surfaces. The optical impression method thus had no significant effect on the marginal fit of Cerec Blocs crowns, but it had a certain effect on the internal fit. Overall, the all-ceramic crowns appeared to have clinically acceptable marginal fit.

  7. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DEFF Research Database (Denmark)

    Ding, Tao; Li, Cheng; Huang, Can

    2018-01-01

    In order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost ...... optimality. Numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.

  8. Fusion and display of 3D spect and MR images registered by a surface fitting method

    International Nuclear Information System (INIS)

    Oghabian, M.A.; Kaboli, P.

    2002-01-01

    Since 3D medical images such as SPECT and MRI are acquired with different positioning and imaging parameters, interpreting them as originally reconstructed does not provide an easy and accurate understanding of the similarities and differences between them. The problem becomes more crucial when a clinician would like to map a region of interest accurately from one study to the other, on which some surgical or therapeutic planning may be based. The research presented here is an investigation into the problems of the registration and display of brain images obtained by different imaging modalities. Following the introduction of an efficient method, some clinically useful applications of the registration and superimposition were also defined. The various widely used registration algorithms were first studied and the advantages and disadvantages of each method were evaluated. In this approach, an edge-based algorithm (called surface fitting), which is based on least-square-distance matching, was suggested for registering brain images. This algorithm minimizes the sum of square distances between the two surfaces obtained from the two modalities. The minimization is performed to find a set of six geometrical transformation parameters (3 shifts and 3 rotations) which indicate how one surface should be transformed in order to match with the other surface.
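
    A bare-bones version of such a six-parameter least-square-distance surface fit is sketched below using nearest-neighbour distances and a general-purpose optimizer; the ellipsoidal point clouds stand in for the real SPECT and MRI surfaces, and the optimizer choice is an assumption of the sketch, not the authors' implementation.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(6)

        # Reference "surface" (e.g., from MRI): points on an ellipsoid.
        pts = rng.standard_normal((500, 3))
        pts /= np.linalg.norm(pts, axis=1, keepdims=True)
        mri = pts * np.array([1.0, 0.7, 0.5])

        def transform(points, params):
            """Apply 3 rotations (radians, about x/y/z) and 3 shifts to a point cloud."""
            rx, ry, rz, tx, ty, tz = params
            Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
            Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
            Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
            return points @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])

        # Second "surface" (e.g., from SPECT): a misaligned, slightly noisy copy.
        spect = transform(mri, [0.2, -0.1, 0.15, 0.3, -0.2, 0.1]) + 0.01 * rng.standard_normal(mri.shape)

        tree = cKDTree(mri)

        def cost(params):
            """Sum of squared distances from the transformed SPECT surface to the MRI surface."""
            d, _ = tree.query(transform(spect, params))
            return np.sum(d ** 2)

        res = minimize(cost, x0=np.zeros(6), method="Nelder-Mead",
                       options={"maxiter": 5000, "fatol": 1e-9, "xatol": 1e-6})
        print("parameters mapping SPECT back onto MRI:", np.round(res.x, 3), "cost:", round(res.fun, 5))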

  9. FitEM2EM--tools for low resolution study of macromolecular assembly and dynamics.

    Directory of Open Access Journals (Sweden)

    Ziv Frankenstein

    Full Text Available Studies of the structure and dynamics of macromolecular assemblies often involve comparison of low resolution models obtained using different techniques such as electron microscopy or atomic force microscopy. We present new computational tools for comparing (matching) and docking of low resolution structures, based on shape complementarity. The matched or docked objects are represented by three dimensional grids where the value of each grid point depends on its position with regard to the interior, surface or exterior of the object. The grids are correlated using fast Fourier transformations, producing either matches of related objects or docking models depending on the details of the grid representations. The procedures incorporate thickening and smoothing of the surfaces of the objects, which effectively compensates for differences in the resolution of the matched/docked objects, circumventing the need for resolution modification. The presented matching tool FitEM2EMin successfully fitted electron microscopy structures obtained at different resolutions, different conformers of the same structure and partial structures, ranking correct matches at the top in every case. The differences between the grid representations of the matched objects can be used to study conformation differences or to characterize the size and shape of substructures. The presented low-to-low docking tool FitEM2EMout ranked the expected models at the top.
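
    The FFT-based correlation of grid representations can be demonstrated for the purely translational case in a few lines; the block-shaped "object" below is a stand-in for a real low-resolution map, and rotational search, surface thickening and smoothing are omitted.

        import numpy as np

        # Two 3D occupancy grids of the same object, the second one shifted.
        grid = np.zeros((32, 32, 32))
        grid[10:20, 12:22, 8:18] = 1.0                      # a solid block as the "object"
        shifted = np.roll(grid, shift=(3, -2, 5), axis=(0, 1, 2))

        # Circular cross-correlation via FFTs; its argmax is the best translational offset.
        corr = np.fft.ifftn(np.conj(np.fft.fftn(grid)) * np.fft.fftn(shifted)).real
        best = np.unravel_index(np.argmax(corr), corr.shape)

        # Convert argmax indices to signed offsets (account for wrap-around).
        offset = [o if o <= s // 2 else o - s for o, s in zip(best, corr.shape)]
        print("estimated shift of the second grid relative to the first:", offset)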

  10. A global model for SF6 plasmas coupling reaction kinetics in the gas phase and on the surface of the reactor walls

    International Nuclear Information System (INIS)

    Kokkoris, George; Panagiotopoulos, Apostolos; Gogolides, Evangelos; Goodyear, Andy; Cooke, Mike

    2009-01-01

    Gas phase and reactor wall-surface kinetics are coupled in a global model for SF6 plasmas. A complete set of gas phase and surface reactions is formulated. The rate coefficients of the electron impact reactions are based on pertinent cross section data from the literature, which are integrated over a Druyvesteyn electron energy distribution function. The rate coefficients of the surface reactions are adjustable parameters and are calculated by fitting the model to experimental data from an inductively coupled plasma reactor, i.e. the F atom density and the pressure change after the ignition of the discharge. The model predicts that SF6, F, F2 and SF4 are the dominant neutral species while SF5+ and F- are the dominant ions. The fit sheds light on the interaction between the gas phase and the reactor walls. A loss mechanism for SFx radicals by deposition of a fluoro-sulfur film on the reactor walls is needed to predict the experimental data. It is found that there is a net production of SF5, F2 and SF6, and a net consumption of F, SF3 and SF4 on the reactor walls. Surface reactions as well as reactions between neutral species in the gas phase are found to be important sources and sinks of the neutral species.
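
    The step of turning a cross section into a rate coefficient by integrating over a Druyvesteyn electron energy distribution can be sketched as below; the step-like cross section and the effective temperature are invented for illustration and do not correspond to any real SF6 process.

        import numpy as np

        M_E = 9.109e-31                     # electron mass (kg)
        Q_E = 1.602e-19                     # J per eV
        E = np.linspace(0.01, 50.0, 5000)   # electron energy grid (eV)

        def druyvesteyn(E, Te_eff):
            """Druyvesteyn EEDF (1/eV), normalised to unity, with mean energy 1.5*Te_eff."""
            mean_E = 1.5 * Te_eff
            return 1.04 / mean_E ** 1.5 * np.sqrt(E) * np.exp(-0.55 * (E / mean_E) ** 2)

        def rate_coefficient(sigma, Te_eff):
            """k = integral of sigma(E) * v(E) * f(E) dE, with v the electron speed."""
            v = np.sqrt(2.0 * Q_E * E / M_E)          # m/s
            return np.trapz(sigma * v * druyvesteyn(E, Te_eff), E)

        # Illustrative step cross section with a 15 eV threshold (not real SF6 data).
        sigma = np.where(E > 15.0, 2e-20, 0.0)        # m^2
        print(f"k = {rate_coefficient(sigma, Te_eff=4.0):.3e} m^3/s")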

  11. A Data-Driven Method for Selecting Optimal Models Based on Graphical Visualisation of Differences in Sequentially Fitted ROC Model Parameters

    Directory of Open Access Journals (Sweden)

    K S Mwitondi

    2013-05-01

    Full Text Available Differences in modelling techniques and model performance assessments typically impinge on the quality of knowledge extraction from data. We propose an algorithm for determining optimal patterns in data by separately training and testing three decision tree models in the Pima Indians Diabetes and the Bupa Liver Disorders datasets. Model performance is assessed using ROC curves and the Youden Index. Moving differences between sequential fitted parameters are then extracted, and their respective probability density estimations are used to track their variability using an iterative graphical data visualisation technique developed for this purpose. Our results show that the proposed strategy separates the groups more robustly than the plain ROC/Youden approach, eliminates obscurity, and minimizes over-fitting. Further, the algorithm can easily be understood by non-specialists and demonstrates multi-disciplinary compliance.

  12. Fast fitting of non-Gaussian state-space models to animal movement data via Template Model Builder

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Whoriskey, Kim; Yurkowski, David

    2015-01-01

    We recommend using the Laplace approximation combined with automatic differentiation (as implemented in the novel R package Template Model Builder; TMB) for the fast fitting of continuous-time multivariate non-Gaussian SSMs. Through Argos satellite tracking data, we demonstrate that with the use of continuous-time models we are able to estimate additional parameters compared to previous methods, all without requiring a substantial increase in computational time. The model implementation is made available through the R package argosTrack.

  13. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
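
    The "explode the data set" step can be sketched as below: each survival record is split into one row per time piece with the log of the exposure time as an offset, which is the layout under which a piecewise-constant-hazard model becomes a Poisson regression. The cut points and toy records are placeholders, and the subsequent GLMM fit (piece dummies, covariates, a normal random intercept per cluster) is left to a mixed-model package such as the %PCFrailty macro described above.

        import numpy as np
        import pandas as pd

        def explode_to_pieces(df, cut_points):
            """Expand one-row-per-subject survival data into one row per subject and time piece."""
            rows = []
            for r in df.itertuples():
                for j, (lo, hi) in enumerate(zip(cut_points[:-1], cut_points[1:])):
                    if r.time <= lo:
                        break
                    exposure = min(r.time, hi) - lo
                    event = int(bool(r.event) and lo < r.time <= hi)
                    rows.append({"id": r.id, "piece": j, "x": r.x,
                                 "event": event, "log_exposure": np.log(exposure)})
            return pd.DataFrame(rows)

        surv = pd.DataFrame({"id": [1, 2, 3], "time": [2.5, 4.0, 1.2],
                             "event": [1, 0, 1], "x": [0.3, -1.1, 0.8]})
        print(explode_to_pieces(surv, cut_points=[0.0, 1.0, 2.0, 3.0, 4.0]))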

  14. Segmentation of 3D ultrasound computer tomography reflection images using edge detection and surface fitting

    Science.gov (United States)

    Hopp, T.; Zapf, M.; Ruiter, N. V.

    2014-03-01

    An essential processing step for comparison of Ultrasound Computer Tomography images to other modalities, as well as for the use in further image processing, is to segment the breast from the background. In this work we present a (semi-) automated 3D segmentation method which is based on the detection of the breast boundary in coronal slice images and a subsequent surface fitting. The method was evaluated using a software phantom and in-vivo data. The fully automatically processed phantom results showed that a segmentation of approx. 10% of the slices of a dataset is sufficient to recover the overall breast shape. Application to 16 in-vivo datasets was performed successfully using semi-automated processing, i.e. using a graphical user interface for manual corrections of the automated breast boundary detection. The processing time for the segmentation of an in-vivo dataset could be significantly reduced by a factor of four compared to a fully manual segmentation. Comparison to manually segmented images identified a smoother surface for the semi-automated segmentation with an average of 11% of differing voxels and an average surface deviation of 2mm. Limitations of the edge detection may be overcome by future updates of the KIT USCT system, allowing a fully-automated usage of our segmentation approach.

  15. Data assimilation of surface altimetry on the North-Eastern Ice Stream using the Ice Sheet System Model (ISSM)

    Science.gov (United States)

    Larour, Eric; Utke, Jean; Morlighem, Mathieu; Seroussi, Helene; Csatho, Beata; Schenk, Anton; Rignot, Eric; Khazendar, Ala

    2014-05-01

    Extensive surface altimetry data has been collected on polar ice sheets over the past decades, following missions such as Envisat and ICESat. This data record will further increase in size with the new CryoSat mission, the ongoing Operation IceBridge mission and the soon-to-launch ICESat-2 mission. In order to make the best use of these datasets, ice flow models need to improve on the way they ingest surface altimetry to infer: 1) parameterizations of poorly known physical processes such as basal friction; 2) boundary conditions such as Surface Mass Balance (SMB). Ad-hoc sensitivity studies and adjoint-based inversions have so far been the way ice sheet models have attempted to resolve the impact of 1) on their results. As for boundary conditions or the lack thereof, most studies assume that they are a fixed quantity, which, though prone to large errors from the measurement itself, is not varied according to the simulated results. Here, we propose a method based on automatic differentiation to improve boundary conditions at the base and surface of the ice sheet during a short-term transient run for which surface altimetry observations are available. The method relies on minimizing a cost function, the best fit between the modeled surface evolution and surface altimetry observations, using gradients that are computed for each time step from automatic differentiation of the ISSM (Ice Sheet System Model) code. The approach relies on overloaded operators using the ADOL-C (Automatic Differentiation by OverLoading in C++) package. It is applied to the 79 North Glacier, Greenland, for a short-term transient spanning a couple of decades before the start of the retreat of the Zachariae Isstrom outlet glacier. Our results show the adjustments required on the basal friction and the SMB of the whole basin to best fit surface altimetry observations, along with the sensitivities each one of these parameters has on the overall cost function. Our approach presents a pathway towards assimilating

  16. Surface speciation of yttrium and neodymium sorbed on rutile: Interpretations using the charge distribution model

    Science.gov (United States)

    Ridley, Moira K.; Hiemstra, Tjisse; Machesky, Michael L.; Wesolowski, David J.; van Riemsdijk, Willem H.

    2012-10-01

    The adsorption of Y3+ and Nd3+ onto rutile has been evaluated over a wide range of pH (3-11) and surface loading conditions, as well as at two ionic strengths (0.03 and 0.3 m), and temperatures (25 and 50 °C). The experimental results reveal the same adsorption behavior for the two trivalent ions onto the rutile surface, with Nd3+ first adsorbing at slightly lower pH values. The adsorption of both Y3+ and Nd3+ commences at pH values below the pHznpc of rutile. The experimental results were evaluated using a charge distribution (CD) and multisite complexation (MUSIC) model, and Basic Stern layer description of the electric double layer (EDL). The coordination geometry of possible surface complexes were constrained by molecular-level information obtained from X-ray standing wave measurements and molecular dynamic (MD) simulation studies. X-ray standing wave measurements showed an inner-sphere tetradentate complex for Y3+ adsorption onto the (1 1 0) rutile surface (Zhang et al., 2004b). The MD simulation studies suggest additional bidentate complexes may form. The CD values for all surface species were calculated based on a bond valence interpretation of the surface complexes identified by X-ray and MD. The calculated CD values were corrected for the effect of dipole orientation of interfacial water. At low pH, the tetradentate complex provided excellent fits to the Y3+ and Nd3+ experimental data. The experimental and surface complexation modeling results show a strong pH dependence, and suggest that the tetradentate surface species hydrolyze with increasing pH. Furthermore, with increased surface loading of Y3+ on rutile the tetradentate binding mode was augmented by a hydrolyzed-bidentate Y3+ surface complex. Collectively, the experimental and surface complexation modeling results demonstrate that solution chemistry and surface loading impacts Y3+ surface speciation. The approach taken of incorporating molecular-scale information into surface complexation models

  17. Insights from Synthetic Star-forming Regions. II. Verifying Dust Surface Density, Dust Temperature, and Gas Mass Measurements With Modified Blackbody Fitting

    Energy Technology Data Exchange (ETDEWEB)

    Koepferl, Christine M.; Robitaille, Thomas P. [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Dale, James E., E-mail: koepferl@usm.lmu.de [University Observatory Munich, Scheinerstr. 1, D-81679 Munich (Germany)

    2017-11-01

    We use a large data set of realistic synthetic observations (produced in Paper I of this series) to assess how observational techniques affect the measurement of physical properties of star-forming regions. In this part of the series (Paper II), we explore the reliability of the measured total gas mass, dust surface density and dust temperature maps derived from modified blackbody fitting of synthetic Herschel observations. We find from our pixel-by-pixel analysis of the measured dust surface density and dust temperature a worrisome error spread especially close to star formation sites and low-density regions, where for those “contaminated” pixels the surface densities can be under/overestimated by up to three orders of magnitude. In light of this, we recommend treating the pixel-based results from this technique with caution in regions with active star formation. In regions of high background typical in the inner Galactic plane, we are not able to recover reliable surface density maps of individual synthetic regions, since low-mass regions are lost in the far-infrared background. When measuring the total gas mass of regions in moderate background, we find that modified blackbody fitting works well (absolute error: +9%; −13%) up to 10 kpc distance (errors increase with distance). Commonly, the initial images are convolved to the largest common beam-size, which smears contaminated pixels over large areas. The resulting information loss makes this commonly used technique less verifiable, as now χ² values cannot be used as a quality indicator of a fitted pixel. Our control measurements of the total gas mass (without the step of convolution to the largest common beam size) produce similar results (absolute error: +20%; −7%) while having much lower median errors, especially for the high-mass stellar feedback phase. In upcoming papers (Paper III; Paper IV) of this series we test the reliability of measured star formation rate with direct and indirect techniques.

  18. Insights from Synthetic Star-forming Regions. II. Verifying Dust Surface Density, Dust Temperature, and Gas Mass Measurements with Modified Blackbody Fitting

    Science.gov (United States)

    Koepferl, Christine M.; Robitaille, Thomas P.; Dale, James E.

    2017-11-01

    We use a large data set of realistic synthetic observations (produced in Paper I of this series) to assess how observational techniques affect the measurement of physical properties of star-forming regions. In this part of the series (Paper II), we explore the reliability of the measured total gas mass, dust surface density and dust temperature maps derived from modified blackbody fitting of synthetic Herschel observations. We find from our pixel-by-pixel analysis of the measured dust surface density and dust temperature a worrisome error spread especially close to star formation sites and low-density regions, where for those “contaminated” pixels the surface densities can be under/overestimated by up to three orders of magnitude. In light of this, we recommend treating the pixel-based results from this technique with caution in regions with active star formation. In regions of high background typical in the inner Galactic plane, we are not able to recover reliable surface density maps of individual synthetic regions, since low-mass regions are lost in the far-infrared background. When measuring the total gas mass of regions in moderate background, we find that modified blackbody fitting works well (absolute error: +9%; -13%) up to 10 kpc distance (errors increase with distance). Commonly, the initial images are convolved to the largest common beam-size, which smears contaminated pixels over large areas. The resulting information loss makes this commonly used technique less verifiable, as now χ² values cannot be used as a quality indicator of a fitted pixel. Our control measurements of the total gas mass (without the step of convolution to the largest common beam size) produce similar results (absolute error: +20%; -7%) while having much lower median errors, especially for the high-mass stellar feedback phase. In upcoming papers (Paper III; Paper IV) of this series we test the reliability of measured star formation rate with direct and indirect techniques.
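
    The modified blackbody (greybody) fit itself can be sketched with a two-parameter least-squares fit to a handful of far-infrared flux points; the band set mimics Herschel wavelengths, but the optical-depth normalisation, the fixed beta = 2 and the synthetic fluxes are assumptions of the sketch rather than the paper's pipeline.

        import numpy as np
        from scipy.optimize import curve_fit

        H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

        def modified_blackbody(nu, T, log10_tau0, beta=2.0, nu0=1.0e12):
            """Greybody intensity (1 - exp(-tau)) * B_nu(T) with tau = tau0 (nu/nu0)^beta,
            scaled by 1e26 to Jy-like numbers for numerical convenience."""
            tau = 10.0 ** log10_tau0 * (nu / nu0) ** beta
            b_nu = 2.0 * H * nu ** 3 / C ** 2 / np.expm1(H * nu / (KB * T))
            return (1.0 - np.exp(-tau)) * b_nu * 1e26

        # Herschel-like bands (160, 250, 350, 500 micron) converted to frequency (Hz).
        nu = C / (np.array([160.0, 250.0, 350.0, 500.0]) * 1e-6)

        # Synthetic "observed" fluxes from known parameters plus 10% noise.
        rng = np.random.default_rng(7)
        obs = modified_blackbody(nu, 18.0, -3.0) * (1.0 + 0.1 * rng.standard_normal(nu.size))

        (T_fit, logtau_fit), _ = curve_fit(modified_blackbody, nu, obs, p0=[20.0, -2.0])
        print(f"fitted dust temperature = {T_fit:.1f} K, log10(optical depth at 300 um) = {logtau_fit:.2f}")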

  19. The hydrogen abstraction reaction O(3P) + CH4: A new analytical potential energy surface based on fit to ab initio calculations

    International Nuclear Information System (INIS)

    González-Lavado, Eloisa; Corchado, Jose C.; Espinosa-Garcia, Joaquin

    2014-01-01

    Based exclusively on high-level ab initio calculations, a new full-dimensional analytical potential energy surface (PES-2014) for the gas-phase reaction of hydrogen abstraction from methane by an oxygen atom is developed. The ab initio information employed in the fit includes properties (equilibrium geometries, relative energies, and vibrational frequencies) of the reactants, products, saddle point, points on the reaction path, and points on the reaction swath, taking special care with the location and characterization of the intermediate complexes in the entrance and exit channels. By comparing with the reference results we show that the resulting PES-2014 reproduces reasonably well the whole set of ab initio data used in the fitting, obtained at the CCSD(T)=FULL/aug-cc-pVQZ//CCSD(T)=FC/cc-pVTZ single-point level, which represents a severe test of the new surface. As a first application, we perform an extensive dynamics study on this analytical surface using quasi-classical trajectory calculations, comparing the results with recent experimental and theoretical data. The excitation function increases with energy (concave-up), reproducing experimental and theoretical information, although our values are somewhat larger. The OH rovibrational distribution is cold, in agreement with experiment. Finally, our results reproduce the experimental backward scattering distribution, associated with a rebound mechanism. These results lend confidence to the accuracy of the new surface, which substantially improves on the results obtained with our previous surface (PES-2000) for the same system.

  20. Incremental Contributions of FbaA and Other Impetigo-Associated Surface Proteins to Fitness and Virulence of a Classical Group A Streptococcal Skin Strain.

    Science.gov (United States)

    Rouchon, Candace N; Ly, Anhphan T; Noto, John P; Luo, Feng; Lizano, Sergio; Bessen, Debra E

    2017-11-01

    Group A streptococci (GAS) are highly prevalent human pathogens whose primary ecological niche is the superficial epithelial layers of the throat and/or skin. Many GAS strains with a strong tendency to cause pharyngitis are distinct from strains that tend to cause impetigo; thus, genetic differences between them may confer host tissue-specific virulence. In this study, the FbaA surface protein gene was found to be present in most skin specialist strains but largely absent from a genetically related subset of pharyngitis isolates. In a ΔfbaA mutant constructed in the impetigo strain Alab49, loss of FbaA resulted in a slight but significant decrease in GAS fitness in a humanized mouse model of impetigo; the ΔfbaA mutant also exhibited decreased survival in whole human blood due to phagocytosis. In assays with highly sensitive outcome measures, Alab49ΔfbaA was compared to other isogenic mutants lacking virulence genes known to be disproportionately associated with classical skin strains. FbaA and PAM (i.e., the M53 protein) had additive effects in promoting GAS survival in whole blood. The pilus adhesin tip protein Cpa promoted Alab49 survival in whole blood and appears to fully account for the antiphagocytic effect attributable to pili. The finding that numerous skin strain-associated virulence factors make slight but significant contributions to virulence underscores the incremental contributions to fitness of individual surface protein genes and the multifactorial nature of GAS-host interactions. Copyright © 2017 American Society for Microbiology.

  1. The 'fitting problem' in cosmology

    International Nuclear Information System (INIS)

    Ellis, G.F.R.; Stoeger, W.

    1987-01-01

    The paper considers the best way to fit an idealised exactly homogeneous and isotropic universe model to a realistic ('lumpy') universe; whether made explicit or not, some such approach of necessity underlies the use of the standard Robertson-Walker models as models of the real universe. Approaches based on averaging, normal coordinates and null data are presented, the latter offering the best opportunity to relate the fitting procedure to data obtainable by astronomical observations. (author)

  2. Surface effects on the red giant branch

    Science.gov (United States)

    Ball, W. H.; Themeßl, N.; Hekker, S.

    2018-05-01

    Individual mode frequencies have been detected in thousands of individual solar-like oscillators on the red giant branch (RGB). Fitting stellar models to these mode frequencies, however, is more difficult than in main-sequence stars. This is partly because of the uncertain magnitude of the surface effect: the systematic difference between observed and modelled frequencies caused by poor modelling of the near-surface layers. We aim to study the magnitude of the surface effect in RGB stars. Surface effect corrections used for main-sequence targets are potentially large enough to put the non-radial mixed modes in RGB stars out of order, which is unphysical. Unless this can be circumvented, model-fitting of evolved RGB stars is restricted to the radial modes, which reduces the number of available modes. Here, we present a method to suppress gravity modes (g-modes) in the cores of our stellar models, so that they have only pure pressure modes (p-modes). We show that the method gives unbiased results and apply it to three RGB solar-like oscillators in double-lined eclipsing binaries: KIC 8410637, KIC 9540226 and KIC 5640750. In all three stars, the surface effect decreases the model frequencies consistently by about 0.1-0.3 μHz at the frequency of maximum oscillation power νmax, which agrees with existing predictions from three-dimensional radiation hydrodynamics simulations. Though our method in essence discards information about the stellar cores, it provides a useful step forward in understanding the surface effect in RGB stars.

  3. Detection analysis of surface hydroxyl active sites and simulation calculation of the surface dissociation constants of aqueous diatomite suspensions

    Science.gov (United States)

    Ma, Shu-Cui; Wang, Zhi-Gang; Zhang, Ji-Lin; Sun, De-Hui; Liu, Gui-Xia

    2015-02-01

    The surface properties of the diatomite were investigated using nitrogen adsorption/desorption isotherms, TG-DSC, FTIR, and XPS, and surface protonation-deprotonation behavior was determined by a continuous acid-base potentiometric titration technique. The diatomite sample with porous honeycomb structure has a BET specific surface area of 10.21 m²/g and large numbers of surface hydroxyl functional groups (i.e. ≡Si-OH, ≡Fe-OH, and ≡Al-OH). These surface hydroxyls can be protonated or deprotonated depending on the pH of the suspension. The experimental potentiometric data in two different ionic strength solutions (0.1 and 0.05 mol/L NaCl) were fitted using the ProtoFit GUI V2.1 program by applying a diffuse double layer model (DLM) with three amphoteric sites and minimizing the sum of squares between a dataset derivative function and a model derivative function. The optimized surface parameters (i.e. surface dissociation constants (log K1, log K2) and surface site concentrations (log C)) of the sample were obtained. Based on the optimized surface parameters, the surface species distribution was calculated using the free program PHREEQC 3.1.2. Thus, this work reveals considerable new information about surface protonation-deprotonation processes and surface adsorptive behaviors of the diatomite, which helps us to effectively use the cheap and cheerful diatomite clay adsorbent.
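
    To make the role of the fitted surface dissociation constants concrete, the sketch below computes the pH-dependent speciation of a single amphoteric hydroxyl site, neglecting the electrostatic correction of the diffuse double layer model; the logK values are placeholders, not the constants fitted for the diatomite.

    ```python
    # pH speciation of one amphoteric surface site, =SOH2+ <-> =SOH + H+ (K1) and
    # =SOH <-> =SO- + H+ (K2), ignoring the electrostatic (DLM) correction term.
    # The logK values are placeholders, not the fitted diatomite constants.
    import numpy as np

    def site_fractions(pH, logK1=-6.0, logK2=-9.0):
        """Fractions of =SOH2+, =SOH and =SO- at a given pH (scalar)."""
        H = 10.0 ** (-pH)
        K1, K2 = 10.0 ** logK1, 10.0 ** logK2
        f_SOH = 1.0 / (1.0 + H / K1 + K2 / H)   # mass balance over the three forms
        return H / K1 * f_SOH, f_SOH, K2 / H * f_SOH

    for pH in np.linspace(3.0, 11.0, 9):
        f_pos, f_neu, f_neg = site_fractions(pH)
        print(f"pH {pH:4.1f}:  =SOH2+ {f_pos:.3f}   =SOH {f_neu:.3f}   =SO- {f_neg:.3f}")
    ```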

  4. Fitting the CDO correlation skew: a tractable structural jump-diffusion model

    DEFF Research Database (Denmark)

    Willemann, Søren

    2007-01-01

    We extend a well-known structural jump-diffusion model for credit risk to handle both correlations through diffusion of asset values and common jumps in asset value. Through a simplifying assumption on the default timing and efficient numerical techniques, we develop a semi-analytic framework...... allowing for instantaneous calibration to heterogeneous CDS curves and fast computation of CDO tranche spreads. We calibrate the model to CDX and iTraxx data from February 2007 and achieve a satisfactory fit. To price the senior tranches for both indices, we require a risk-neutral probability of a market...

  5. Development and design of a late-model fitness test instrument based on LabView

    Science.gov (United States)

    Xie, Ying; Wu, Feiqing

    2010-12-01

    Undergraduates are pioneers of China's modernization program and undertake the historic mission of rejuvenating our nation in the 21st century, so their physical fitness is vital. A smart fitness test system can help them understand their fitness and health conditions, so that they can choose more suitable approaches and make practical plans for exercising according to their own situation. Following future trends, a late-model fitness test instrument based on LabView has been designed to remedy the defects of today's instruments. The system hardware consists of five types of sensors with their peripheral circuits, an NI USB-6251 acquisition card and a computer, while the system software, on the basis of LabView, includes modules for user registration, data acquisition, data processing and display, and data storage. The system, featured by modularization and an open structure, can be revised according to actual needs. Test results have verified the system's stability and reliability.

  6. Invited commentary: Lost in estimation--searching for alternatives to markov chains to fit complex Bayesian models.

    Science.gov (United States)

    Molitor, John

    2012-03-01

    Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.

  7. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    Science.gov (United States)

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely is largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision
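
    For readers unfamiliar with the estimator being evaluated, the sketch below simulates repeated counts and maximizes a Poisson N-mixture likelihood by summing over a truncated latent abundance; the simulated parameter values and the truncation bound are assumptions for illustration only.

    ```python
    # Poisson N-mixture likelihood (Royle-type): latent abundance N_i ~ Poisson(lam),
    # counts y[i, t] ~ Binomial(N_i, p); the likelihood sums over a truncated
    # support for N. Simulated data and truncation bound are assumptions.
    import numpy as np
    from scipy.stats import poisson, binom
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    R, T, true_lam, true_p = 150, 4, 5.0, 0.4
    N = rng.poisson(true_lam, size=R)
    y = rng.binomial(N[:, None], true_p, size=(R, T))

    def neg_log_likelihood(theta, y, K=60):
        lam, p = np.exp(theta[0]), 1.0 / (1.0 + np.exp(-theta[1]))
        Ns = np.arange(K + 1)
        prior = poisson.pmf(Ns, lam)                  # P(N = n) on the truncated support
        ll = 0.0
        for counts in y:
            # P(counts | N = n) for every candidate n (zero whenever n < max count)
            lik = np.prod(binom.pmf(counts[:, None], Ns[None, :], p), axis=0)
            ll += np.log(np.sum(prior * lik) + 1e-300)
        return -ll

    res = minimize(neg_log_likelihood, x0=[np.log(2.0), 0.0], args=(y,), method="Nelder-Mead")
    lam_hat, p_hat = np.exp(res.x[0]), 1.0 / (1.0 + np.exp(-res.x[1]))
    print(f"lambda_hat = {lam_hat:.2f} (true {true_lam}), p_hat = {p_hat:.2f} (true {true_p})")
    ```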

  8. Decision making on fitness landscapes

    Science.gov (United States)

    Arthur, R.; Sibani, P.

    2017-04-01

    We discuss fitness landscapes and how they can be modified to account for co-evolution. We are interested in using the landscape as a way to model rational decision making in a toy economic system. We develop a model very similar to the Tangled Nature Model of Christensen et al. that we call the Tangled Decision Model. This is a natural setting for our discussion of co-evolutionary fitness landscapes. We use a Monte Carlo step to simulate decision making and investigate two different decision making procedures.

  9. Decision Making on Fitness Landscapes

    DEFF Research Database (Denmark)

    Arthur, Rudy; Sibani, Paolo

    2017-01-01

    We discuss fitness landscapes and how they can be modified to account for co-evolution. We are interested in using the landscape as a way to model rational decision making in a toy economic system. We develop a model very similar to the Tangled Nature Model of Christensen et al. that we call...... the Tangled Decision Model. This is a natural setting for our discussion of co-evolutionary fitness landscapes. We use a Monte Carlo step to simulate decision making and investigate two different decision making procedures....

  10. Assessing a moderating effect and the global fit of a PLS model on online trading

    Directory of Open Access Journals (Sweden)

    Juan J. García-Machado

    2017-12-01

    Full Text Available This paper proposes a PLS model for the study of online trading. Traditional investing has experienced a revolution due to the rise of e-trading services that enable investors to use the Internet to conduct secure trading. On the one hand, the model results show that personal outcome expectations, perceived relative advantage, shared vision and economy-based trust have a positive, direct and statistically significant relationship with the quality of knowledge. On the other hand, trading frequency and portfolio performance also show this kind of relationship. After including the investor’s income and financial wealth (IFW) as a moderating effect, the PLS model was enhanced, and we found that the interaction term is negative and statistically significant, so higher IFW levels entail a weaker relationship between trading frequency and portfolio performance and vice-versa. Finally, with regard to the goodness of overall model fit measures, they showed that the model fits well according to the SRMR and dG measures, so it is likely that the model is true.

  11. Keep Using My Health Apps: Discover Users' Perception of Health and Fitness Apps with the UTAUT2 Model.

    Science.gov (United States)

    Yuan, Shupei; Ma, Wenjuan; Kanthawala, Shaheen; Peng, Wei

    2015-09-01

    Health and fitness applications (apps) are one of the major app categories in the current mobile app market. Few studies have examined this area from the users' perspective. This study adopted the Extended Unified Theory of Acceptance and Use of Technology (UTAUT2) Model to examine the predictors of the users' intention to adopt health and fitness apps. A survey (n=317) was conducted with college-aged smartphone users at a Midwestern university in the United States. Performance expectancy, hedonic motivations, price value, and habit were significant predictors of users' intention of continued usage of health and fitness apps. However, effort expectancy, social influence, and facilitating conditions were not found to predict users' intention of continued usage of health and fitness apps. This study extends the UTAUT2 Model to the mobile apps domain and provides health professionals, app designers, and marketers with insights into user experience in terms of continuously using health and fitness apps.

  12. Surface Flux Modeling for Air Quality Applications

    Directory of Open Access Journals (Sweden)

    Limei Ran

    2011-08-01

    Full Text Available For many gases and aerosols, dry deposition is an important sink of atmospheric mass. Dry deposition fluxes are also important sources of pollutants to terrestrial and aquatic ecosystems. The surface fluxes of some gases, such as ammonia, mercury, and certain volatile organic compounds, can be upward into the air as well as downward to the surface and therefore should be modeled as bi-directional fluxes. Model parameterizations of dry deposition in air quality models have been represented by simple electrical resistance analogs for almost 30 years. Uncertainties in surface flux modeling in global to mesoscale models are being slowly reduced as more field measurements provide constraints on parameterizations. However, at the same time, more chemical species are being added to surface flux models as air quality models are expanded to include more complex chemistry and are being applied to a wider array of environmental issues. Since surface flux measurements of many of these chemicals are still lacking, resistances are usually parameterized using simple scaling by water or lipid solubility and reactivity. Advances in recent years have included bi-directional flux algorithms that require a shift from pre-computation of deposition velocities to fully integrated surface flux calculations within air quality models. Improved modeling of the stomatal component of chemical surface fluxes has resulted from improved evapotranspiration modeling in land surface models and closer integration between meteorology and air quality models. Satellite-derived land use characterization and vegetation products and indices are improving model representation of spatial and temporal variations in surface flux processes. This review describes the current state of chemical dry deposition modeling, recent progress in bi-directional flux modeling, synergistic model development research with field measurements, and coupling with meteorological land surface models.
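
    The electrical resistance analogy mentioned above can be written in a few lines; the sketch below combines aerodynamic, quasi-laminar and surface resistances into a deposition velocity and flux, with purely illustrative resistance values.

    ```python
    # Big-leaf resistance analogy for dry deposition: the deposition velocity is
    # the inverse of the aerodynamic (Ra), quasi-laminar boundary layer (Rb) and
    # surface/canopy (Rc) resistances in series. Values are illustrative (s/m).
    def deposition_velocity(Ra, Rb, Rc):
        """Deposition velocity (m/s) from three resistances in series (s/m)."""
        return 1.0 / (Ra + Rb + Rc)

    def dry_deposition_flux(concentration, Ra, Rb, Rc):
        """Downward flux, F = -v_d * C (units of concentration times m/s)."""
        return -deposition_velocity(Ra, Rb, Rc) * concentration

    # Example: ozone-like gas over a forest canopy (made-up resistances)
    vd = deposition_velocity(Ra=30.0, Rb=20.0, Rc=100.0)
    flux = dry_deposition_flux(40.0, Ra=30.0, Rb=20.0, Rc=100.0)
    print(f"v_d = {vd * 100:.2f} cm/s, flux for C = 40 ppb: {flux:.3f} ppb m/s")
    ```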

  13. Inference-Based Surface Reconstruction of Cluttered Environments

    KAUST Repository

    Biggers, K.

    2012-08-01

    We present an inference-based surface reconstruction algorithm that is capable of identifying objects of interest among a cluttered scene, and reconstructing solid model representations even in the presence of occluded surfaces. Our proposed approach incorporates a predictive modeling framework that uses a set of user-provided models for prior knowledge, and applies this knowledge to the iterative identification and construction process. Our approach uses a local to global construction process guided by rules for fitting high-quality surface patches obtained from these prior models. We demonstrate the application of this algorithm on several example data sets containing heavy clutter and occlusion. © 2012 IEEE.

  14. Hydrological land surface modelling

    DEFF Research Database (Denmark)

    Ridler, Marc-Etienne Francois

    Recent advances in integrated hydrological and soil-vegetation-atmosphere transfer (SVAT) modelling have led to improved water resource management practices, greater crop production, and better flood forecasting systems. However, uncertainty is inherent in all numerical models ultimately leading...... temperature are explored in a multi-objective calibration experiment to optimize the parameters in a SVAT model in the Sahel. The two satellite derived variables were effective at constraining most land-surface and soil parameters. A data assimilation framework is developed and implemented with an integrated...... and disaster management. The objective of this study is to develop and investigate methods to reduce hydrological model uncertainty by using supplementary data sources. The data is used either for model calibration or for model updating using data assimilation. Satellite estimates of soil moisture and surface...

  15. Phylogenetic tree reconstruction accuracy and model fit when proportions of variable sites change across the tree.

    Science.gov (United States)

    Shavit Grievink, Liat; Penny, David; Hendy, Michael D; Holland, Barbara R

    2010-05-01

    Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction.

  16. Measuring fit of sequence data to phylogenetic model: gain of power using marginal tests.

    Science.gov (United States)

    Waddell, Peter J; Ota, Rissa; Penny, David

    2009-10-01

    Testing fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (Unended quest: an intellectual autobiography. Fontana, London, 1976) once argued that evolutionary biology was unscientific as its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (Nature 297:197-200, 1982) to the present. We compare the general log-likelihood ratio statistic (the G or G² statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (P approximately 0.5), but the marginalized tests do. Tests on pairwise frequency (F) matrices strongly (P < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (P < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa; not necessarily those expected from examining other regions of the genome. By marginalizing the 4^t patterns of the i.i.d. model to observed and expected parsimony counts, that is, from constant sites, to singletons, to parsimony informative characters of a minimum possible length, the likelihood ratio test regains power, and it too rejects the evolutionary model with P < 0.001. Given such behavior over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published trees may really be far larger than the analytical methods (e.g., bootstrap) report.
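
    The general log-likelihood ratio statistic referred to above has a compact form; the sketch below computes G for a set of made-up pattern counts against model expectations and converts it to an approximate p-value.

    ```python
    # Log-likelihood ratio (G) goodness-of-fit statistic between observed
    # site-pattern counts and the counts expected under a fitted model.
    # The counts below are invented for illustration.
    import numpy as np
    from scipy.stats import chi2

    def g_statistic(observed, expected):
        """G = 2 * sum O_i * ln(O_i / E_i); terms with O_i = 0 contribute zero."""
        observed = np.asarray(observed, dtype=float)
        expected = np.asarray(expected, dtype=float)
        mask = observed > 0
        return 2.0 * np.sum(observed[mask] * np.log(observed[mask] / expected[mask]))

    obs = np.array([812, 105, 93, 40, 22, 8])               # illustrative pattern counts
    exp = np.array([800.0, 110.0, 90.0, 45.0, 20.0, 15.0])  # model expectations
    G = g_statistic(obs, exp)
    df = len(obs) - 1   # simplistic df; a real analysis would subtract fitted parameters
    print(f"G = {G:.2f}, approx. p = {chi2.sf(G, df):.3f}")
    ```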

  17. UROX 2.0: an interactive tool for fitting atomic models into electron-microscopy reconstructions

    International Nuclear Information System (INIS)

    Siebert, Xavier; Navaza, Jorge

    2009-01-01

    UROX is software designed for the interactive fitting of atomic models into electron-microscopy reconstructions. The main features of the software are presented, along with a few examples. Electron microscopy of a macromolecular structure can lead to three-dimensional reconstructions with resolutions that are typically in the 30–10 Å range and sometimes even beyond 10 Å. Fitting atomic models of the individual components of the macromolecular structure (e.g. those obtained by X-ray crystallography or nuclear magnetic resonance) into an electron-microscopy map allows the interpretation of the latter at near-atomic resolution, providing insight into the interactions between the components. Graphical software is presented that was designed for the interactive fitting and refinement of atomic models into electron-microscopy reconstructions. Several characteristics enable it to be applied over a wide range of cases and resolutions. Firstly, calculations are performed in reciprocal space, which results in fast algorithms. This allows the entire reconstruction (or at least a sizeable portion of it) to be used by taking into account the symmetry of the reconstruction both in the calculations and in the graphical display. Secondly, atomic models can be placed graphically in the map while the correlation between the model-based electron density and the electron-microscopy reconstruction is computed and displayed in real time. The positions and orientations of the models are refined by a least-squares minimization. Thirdly, normal-mode calculations can be used to simulate conformational changes between the atomic model of an individual component and its corresponding density within a macromolecular complex determined by electron microscopy. These features are illustrated using three practical cases with different symmetries and resolutions. The software, together with examples and user instructions, is available free of charge at http://mem.ibs.fr/UROX/

  18. Fitness, Sleep-Disordered Breathing, Symptoms of Depression, and Cognition in Inactive Overweight Children: Mediation Models.

    Science.gov (United States)

    Stojek, Monika M K; Montoya, Amanda K; Drescher, Christopher F; Newberry, Andrew; Sultan, Zain; Williams, Celestine F; Pollock, Norman K; Davis, Catherine L

    We used mediation models to examine the mechanisms underlying the relationships among physical fitness, sleep-disordered breathing (SDB), symptoms of depression, and cognitive functioning. We conducted a cross-sectional secondary analysis of the cohorts involved in the 2003-2006 project PLAY (a trial of the effects of aerobic exercise on health and cognition) and the 2008-2011 SMART study (a trial of the effects of exercise on cognition). A total of 397 inactive overweight children aged 7-11 received a fitness test, standardized cognitive test (Cognitive Assessment System, yielding Planning, Attention, Simultaneous, Successive, and Full Scale scores), and depression questionnaire. Parents completed a Pediatric Sleep Questionnaire. We used bootstrapped mediation analyses to test whether SDB mediated the relationship between fitness and depression and whether SDB and depression mediated the relationship between fitness and cognition. Fitness was negatively associated with depression ( B = -0.041; 95% CI, -0.06 to -0.02) and SDB ( B = -0.005; 95% CI, -0.01 to -0.001). SDB was positively associated with depression ( B = 0.99; 95% CI, 0.32 to 1.67) after controlling for fitness. The relationship between fitness and depression was mediated by SDB (indirect effect = -0.005; 95% CI, -0.01 to -0.0004). The relationship between fitness and the attention component of cognition was independently mediated by SDB (indirect effect = 0.058; 95% CI, 0.004 to 0.13) and depression (indirect effect = -0.071; 95% CI, -0.01 to -0.17). SDB mediates the relationship between fitness and depression, and SDB and depression separately mediate the relationship between fitness and the attention component of cognition.
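
    The bootstrapped mediation analyses described above rest on resampling an indirect effect, i.e. the product of the X→M and M→Y paths; the sketch below shows a percentile bootstrap of such an a*b effect on simulated data, with variable names chosen only to echo the study.

    ```python
    # Percentile-bootstrap test of an indirect (mediation) effect: X -> M (path a)
    # and M -> Y controlling for X (path b); the indirect effect is a*b.
    # The data are simulated and the variable names are illustrative only.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 300
    fitness = rng.normal(size=n)                                    # X
    sdb = -0.3 * fitness + rng.normal(size=n)                       # M depends on X
    depression = 0.5 * sdb + 0.0 * fitness + rng.normal(size=n)     # Y depends on M

    def indirect_effect(x, m, y):
        a = np.polyfit(x, m, 1)[0]                                  # slope of M on X
        # slope of Y on M controlling for X (multiple regression via lstsq)
        X = np.column_stack([np.ones_like(x), x, m])
        b = np.linalg.lstsq(X, y, rcond=None)[0][2]
        return a * b

    boot = np.empty(2000)
    for i in range(boot.size):
        idx = rng.integers(0, n, n)                                 # resample cases
        boot[i] = indirect_effect(fitness[idx], sdb[idx], depression[idx])

    lo, hi = np.percentile(boot, [2.5, 97.5])
    point = indirect_effect(fitness, sdb, depression)
    print(f"indirect effect = {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
    ```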

  19. The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting

    Science.gov (United States)

    Tao, Zhang; Li, Zhang; Dingjun, Chen

    On the basis of the idea of second-order (quadratic) curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A restrained-increase model is introduced in this paper, and the model parameters are solved with the Matlab software. The validity of the restrained-increase model is confirmed through a numerical experiment. The experimental results show that the precision of the restrained-increase model is ideal.
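
    As a minimal illustration of the curve-fitting idea, the sketch below fits a quadratic trend to invented yearly site counts and extrapolates one year ahead; the paper itself solves its model in Matlab.

    ```python
    # Quadratic (second-order) trend fit to invented yearly counts of e-commerce
    # sites, with a one-year-ahead extrapolation.
    import numpy as np

    years = np.arange(2002, 2009)
    sites = np.array([1.1, 1.6, 2.4, 3.5, 5.1, 7.0, 9.4])   # e.g. in ten-thousands (made up)

    t = years - years[0]
    coeffs = np.polyfit(t, sites, deg=2)                    # least-squares quadratic fit
    trend = np.poly1d(coeffs)

    print("fitted coefficients (a, b, c):", np.round(coeffs, 3))
    print("prediction for 2009:", round(float(trend(2009 - years[0])), 2))
    ```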

  20. Mathematical model quantifies multiple daylight exposure and burial events for rock surfaces using luminescence dating

    International Nuclear Information System (INIS)

    Freiesleben, Trine; Sohbati, Reza; Murray, Andrew; Jain, Mayank; Al Khasawneh, Sahar; Hvidt, Søren; Jakobsen, Bo

    2015-01-01

    Interest in the optically stimulated luminescence (OSL) dating of rock surfaces has increased significantly over the last few years, as the potential of the method has been explored. It has been realized that luminescence-depth profiles show qualitative evidence for multiple daylight exposure and burial events. To quantify both burial and exposure events a new mathematical model is developed by expanding the existing models of evolution of luminescence–depth profiles, to include repeated sequential events of burial and exposure to daylight. This new model is applied to an infrared stimulated luminescence-depth profile from a feldspar-rich granite cobble from an archaeological site near Aarhus, Denmark. This profile shows qualitative evidence for multiple daylight exposure and burial events; these are quantified using the model developed here. By determining the burial ages from the surface layer of the cobble and by fitting the new model to the luminescence profile, it is concluded that the cobble was well bleached before burial. This indicates that the OSL burial age is likely to be reliable. In addition, a recent known exposure event provides an approximate calibration for older daylight exposure events. This study confirms the suggestion that rock surfaces contain a record of exposure and burial history, and that these events can be quantified. The burial age of rock surfaces can thus be dated with confidence, based on a knowledge of their pre-burial light exposure; it may also be possible to determine the length of a fossil exposure, using a known natural light exposure as calibration. - Highlights: • Evidence for multiple exposure and burial events in the history of a single cobble. • OSL rock surface dating model improved to include multiple burial/exposure cycles. • Application of the new model quantifies burial and exposure events.

  1. Robust Locally Weighted Regression For Ground Surface Extraction In Mobile Laser Scanning 3D Data

    Directory of Open Access Journals (Sweden)

    A. Nurunnabi

    2013-10-01

    Full Text Available A new robust method for ground surface extraction from mobile laser scanning 3D point cloud data is proposed in this paper. Fitting polynomials along 2D/3D points is one of the well-known methods for filtering ground points, but unorganized point clouds by nature consist of multiple complex structures, so they are not suitable for fitting a parametric global model. The aim of this research is to develop and implement an algorithm to classify ground and non-ground points based on statistically robust locally weighted regression, which fits a regression surface (a line in 2D) without any predefined global functional relation among the variables of interest. Afterwards, the z (elevation) values are robustly downweighted based on the residuals for the fitted points. The new set of downweighted z-values, along with the x (or y) values, is used to get a new fit of the (lower) surface (line). The process of fitting and downweighting continues until the difference between two consecutive fits is insignificant. The final fit then represents the ground level of the given point cloud, and the ground surface points can be extracted. The performance of the new method has been demonstrated on vehicle-based mobile laser scanning 3D point cloud data from urban areas which include different problematic objects such as short walls, large buildings, electric poles, sign posts and cars. The method has potential in areas like building/construction footprint determination, 3D city modelling, corridor mapping and asset management.
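
    The iterative fit-and-downweight idea can be sketched in a few dozen lines; the version below works on a simplified one-dimensional profile with a tricube-weighted local line fit and asymmetric robust weights, and is only an approximation of the published algorithm.

    ```python
    # Robust locally weighted regression for extracting the lower (ground) surface
    # of a profile of laser points: fit a locally weighted line, downweight points
    # sitting above the fit, and repeat. Simplified to one spatial coordinate.
    import numpy as np

    def local_weighted_fit(x, z, weights, frac=0.3):
        """Locally weighted linear fit of z on x, evaluated at every x."""
        n = len(x)
        k = max(int(frac * n), 3)
        z_hat = np.empty(n)
        for i in range(n):
            d = np.abs(x - x[i])
            idx = np.argsort(d)[:k]                      # k nearest neighbours
            w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3  # tricube kernel
            w *= weights[idx]                            # robustness weights
            A = np.column_stack([np.ones(k), x[idx]])
            beta = np.linalg.lstsq(A * w[:, None], z[idx] * w, rcond=None)[0]
            z_hat[i] = beta[0] + beta[1] * x[i]
        return z_hat

    def ground_surface(x, z, n_iter=5):
        weights = np.ones(len(x))
        for _ in range(n_iter):
            z_hat = local_weighted_fit(x, z, weights)
            resid = z - z_hat
            s = np.median(np.abs(resid)) + 1e-9
            # points far above the current fit (non-ground) get near-zero weight
            weights = np.where(resid > 0, np.clip(1 - resid / (3 * s), 0, 1), 1.0)
        return z_hat

    # Illustrative profile: gently sloping ground with a "building" sticking up
    rng = np.random.default_rng(3)
    x = np.sort(rng.uniform(0, 100, 400))
    z = 0.02 * x + 0.1 * rng.standard_normal(x.size)
    z[(x > 40) & (x < 60)] += 5.0                        # non-ground object
    ground = ground_surface(x, z)
    is_ground = np.abs(z - ground) < 0.3
    print(f"{is_ground.sum()} of {len(x)} points classified as ground")
    ```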

  2. Inference-Based Surface Reconstruction of Cluttered Environments

    KAUST Repository

    Biggers, K.; Keyser, J.

    2012-01-01

    guided by rules for fitting high-quality surface patches obtained from these prior models. We demonstrate the application of this algorithm on several example data sets containing heavy clutter and occlusion. © 2012 IEEE.

  3. A preliminary investigation of the applicability of surface complexation modeling to the understanding of transportation cask weeping

    International Nuclear Information System (INIS)

    Granstaff, V.E.; Chambers, W.B.; Doughty, D.H.

    1994-01-01

    A new application for surface complexation modeling is described. These models, which describe chemical equilibria among aqueous and adsorbed species, have typically been used for predicting groundwater transport of contaminants by modeling the natural adsorbents as various metal oxides. Our experiments suggest that this type of modeling can also explain stainless steel surface contamination and decontamination mechanisms. Stainless steel transportation casks, when submerged in a spent fuel storage pool at nuclear power stations, can become contaminated with radionuclides such as ¹³⁷Cs, ¹³⁴Cs, and ⁶⁰Co. Subsequent release or desorption of these contaminants under varying environmental conditions occasionally results in the phenomenon known as “cask weeping.” We have postulated that contaminants in the storage pool adsorb onto the hydrous metal oxide surface of the passivated stainless steel and are subsequently released (by conversion from a fixed to a removable form) during transportation, due to varying environmental factors, such as humidity, road salt, dirt, and acid rain. It is well known that 304 stainless steel has a chromium-enriched passive surface layer; thus its adsorption behavior should be similar to that of a mixed chromium/iron oxide. To help us interpret our studies of reversible binding of dissolved metals on stainless steel surfaces, we have studied the adsorption of Co²⁺ on Cr₂O₃. The data are interpreted using electrostatic surface complexation models. The FITEQL computer program was used to obtain the model binding constants and site densities from the experimental data. The MINTEQA2 computer speciation model was used, with the fitted constants, in an attempt to validate this approach.

  4. Testing the validity of stock-recruitment curve fits

    International Nuclear Information System (INIS)

    Christensen, S.W.; Goodyear, C.P.

    1988-01-01

    The utilities relied heavily on the Ricker stock-recruitment model as the basis for quantifying biological compensation in the Hudson River power case. They presented many fits of the Ricker model to data derived from striped bass catch and effort records compiled by the National Marine Fisheries Service. Based on this curve-fitting exercise, a value of 4 was chosen for the parameter alpha in the Ricker model, and this value was used to derive the utilities' estimates of the long-term impact of power plants on striped bass populations. A technique was developed and applied to address a single fundamental question: if the Ricker model were applicable to the Hudson River striped bass population, could the estimates of alpha from the curve-fitting exercise be considered reliable. The technique involved constructing a simulation model that incorporated the essential biological features of the population and simulated the characteristics of the available actual catch-per-unit-effort data through time. The ability or failure to retrieve the known parameter values underlying the simulation model via the curve-fitting exercise was a direct test of the reliability of the results of fitting stock-recruitment curves to the real data. The results demonstrated that estimates of alpha from the curve-fitting exercise were not reliable. The simulation-modeling technique provides an effective way to identify whether or not particular data are appropriate for use in fitting such models. 39 refs., 2 figs., 3 tabs
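
    The Ricker curve at the centre of this dispute is straightforward to fit; the sketch below fits R = alpha*S*exp(-beta*S) to simulated spawner-recruit pairs with lognormal noise, keeping in mind that the abstract's conclusion is precisely that such fits to noisy data can yield unreliable alpha estimates.

    ```python
    # Fit the Ricker stock-recruitment curve R = alpha * S * exp(-beta * S) to
    # simulated spawner-recruit pairs with lognormal noise (illustration only).
    import numpy as np
    from scipy.optimize import curve_fit

    def ricker(S, alpha, beta):
        return alpha * S * np.exp(-beta * S)

    rng = np.random.default_rng(4)
    S = rng.uniform(5.0, 100.0, 40)                     # spawning stock index
    R = ricker(S, 4.0, 0.02) * rng.lognormal(sigma=0.4, size=S.size)

    (alpha_hat, beta_hat), _ = curve_fit(ricker, S, R, p0=(1.0, 0.01))
    print(f"alpha_hat = {alpha_hat:.2f}, beta_hat = {beta_hat:.4f}")
    ```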

  5. Describing the Process of Adopting Nutrition and Fitness Apps: Behavior Stage Model Approach.

    Science.gov (United States)

    König, Laura M; Sproesser, Gudrun; Schupp, Harald T; Renner, Britta

    2018-03-13

    Although mobile technologies such as smartphone apps are promising means for motivating people to adopt a healthier lifestyle (mHealth apps), previous studies have shown low adoption and continued use rates. Developing the means to address this issue requires further understanding of mHealth app nonusers and adoption processes. This study utilized a stage model approach based on the Precaution Adoption Process Model (PAPM), which proposes that people pass through qualitatively different motivational stages when adopting a behavior. To establish a better understanding of between-stage transitions during app adoption, this study aimed to investigate the adoption process of nutrition and fitness app usage, and the sociodemographic and behavioral characteristics and decision-making style preferences of people at different adoption stages. Participants (N=1236) were recruited onsite within the cohort study Konstanz Life Study. Use of mobile devices and nutrition and fitness apps, 5 behavior adoption stages of using nutrition and fitness apps, preference for intuition and deliberation in eating decision-making (E-PID), healthy eating style, sociodemographic variables, and body mass index (BMI) were assessed. Analysis of the 5 behavior adoption stages showed that stage 1 ("unengaged") was the most prevalent motivational stage for both nutrition and fitness app use, with half of the participants stating that they had never thought about using a nutrition app (52.41%, 533/1017), whereas less than one-third stated they had never thought about using a fitness app (29.25%, 301/1029). "Unengaged" nonusers (stage 1) showed a higher preference for an intuitive decision-making style when making eating decisions, whereas those who were already "acting" (stage 4) showed a greater preference for a deliberative decision-making style (F4,1012 = 21.83, P < .001). […] digital interventions. This study highlights that new user groups might be better reached by apps designed to address a more intuitive

  6. Modelling job support, job fit, job role and job satisfaction for school of nursing sessional academic staff.

    Science.gov (United States)

    Cowin, Leanne S; Moroney, Robyn

    2018-01-01

    Sessional academic staff are an important part of nursing education. Increases in casualisation of the academic workforce continue and satisfaction with the job role is an important bench mark for quality curricula delivery and influences recruitment and retention. This study examined relations between four job constructs - organisation fit, organisation support, staff role and job satisfaction for Sessional Academic Staff at a School of Nursing by creating two path analysis models. A cross-sectional correlational survey design was utilised. Participants who were currently working as sessional or casual teaching staff members were invited to complete an online anonymous survey. The data represents a convenience sample of Sessional Academic Staff in 2016 at a large school of Nursing and Midwifery in Australia. After psychometric evaluation of each of the job construct measures in this study we utilised Structural Equation Modelling to better understand the relations of the variables. The measures used in this study were found to be both valid and reliable for this sample. Job support and job fit are positively linked to job satisfaction. Although the hypothesised model did not meet model fit standards, a new 'nested' model made substantive sense. This small study explored a new scale for measuring academic job role, and demonstrated how it promotes the constructs of job fit and job supports. All four job constructs are important in providing job satisfaction - an outcome that in turn supports staffing stability, retention, and motivation.

  7. Optical properties and surface characterization of pulsed laser-deposited Cu2ZnSnS4 by spectroscopic ellipsometry

    International Nuclear Information System (INIS)

    Crovetto, Andrea; Cazzaniga, Andrea; Ettlinger, Rebecca B.; Schou, Jørgen; Hansen, Ole

    2015-01-01

    Cu₂ZnSnS₄ films prepared by pulsed laser deposition at different temperatures are characterized by spectroscopic ellipsometry. The focus is on confirming results from direct measurement techniques, by finding appropriate models of the surface overlayer for data fitting, and extracting the dielectric function of the films. It is found that the surface overlayer changes with film thickness and deposition temperature. Adopting different ellipsometry measurements and modeling strategies for each film, dielectric functions are extracted and compared. As the deposition temperature is increased, the dielectric functions exhibit additional critical points related to optical transitions in the material other than absorption across the fundamental band gap. In the case of a thin film < 200 nm thick, surface features observed by scanning electron microscopy and atomic force microscopy are accurately reproduced by ellipsometry data fitting. - Highlights: • Inhomogeneous Cu₂ZnSnS₄ films are prepared by pulsed laser deposition. • The film surface includes secondary phases and topographic structures. • We model a film surface layer that fits ellipsometry data. • Ellipsometry data fits confirm results from direct measurement techniques. • We obtain the dielectric function of inhomogeneous Cu₂ZnSnS₄ films

  8. IMFIT: A FAST, FLEXIBLE NEW PROGRAM FOR ASTRONOMICAL IMAGE FITTING

    Energy Technology Data Exchange (ETDEWEB)

    Erwin, Peter [Max-Planck-Institut für Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München (Germany)

    2015-02-01

    I describe a new, open-source astronomical image-fitting program called IMFIT, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design that allows new types of image components (two-dimensional surface-brightness functions) to be easily written and added to the program. Image functions provided with IMFIT include the usual suspects for galaxy decompositions (Sérsic, exponential, Gaussian), along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through three-dimensional luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard χ² statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-signal-to-noise ratio galaxy images using χ² minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.
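
    The contrast between χ² and Poisson-based minimization drawn above can be demonstrated on a toy image; the sketch below estimates a constant low-count background with both statistics and shows the downward bias of χ² with data-based errors. It is not IMFIT code, just the underlying statistics.

    ```python
    # Toy comparison of the two fit statistics: chi-square with data-based Gaussian
    # errors versus a Poisson maximum-likelihood (Cash-like) statistic, for a flat
    # "image" in the low-count regime.
    import numpy as np

    def chi2_stat(data, model, sigma):
        return np.sum(((data - model) / sigma) ** 2)

    def poisson_mle_stat(data, model):
        """2 * sum(model - data*ln(model)): -2 ln L up to a data-only constant."""
        return 2.0 * np.sum(model - data * np.log(model))

    rng = np.random.default_rng(5)
    true_level = 3.0
    data = rng.poisson(true_level, size=10_000).astype(float)
    sigma = np.sqrt(np.maximum(data, 1.0))   # per-pixel "Gaussian" errors from the data

    levels = np.linspace(1.5, 4.5, 301)
    chi2_best = levels[np.argmin([chi2_stat(data, m, sigma) for m in levels])]
    cash_best = levels[np.argmin([poisson_mle_stat(data, m) for m in levels])]
    print(f"true level {true_level}: chi2 estimate {chi2_best:.2f}, "
          f"Poisson-MLE estimate {cash_best:.2f}")
    ```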

  9. A differential equation for the asymptotic fitness distribution in the Bak-Sneppen model with five species.

    Science.gov (United States)

    Schlemm, Eckhard

    2015-09-01

    The Bak-Sneppen model is an abstract representation of a biological system that evolves according to the Darwinian principles of random mutation and selection. The species in the system are characterized by a numerical fitness value between zero and one. We show that in the case of five species the steady-state fitness distribution can be obtained as a solution to a linear differential equation of order five with hypergeometric coefficients. Similar representations for the asymptotic fitness distribution in larger systems may help pave the way towards a resolution of the question of whether or not, in the limit of infinitely many species, the fitness is asymptotically uniformly distributed on the interval [fc, 1] with fc ≳ 2/3. Copyright © 2015 Elsevier Inc. All rights reserved.
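
    A direct simulation complements the analytical result described above; the sketch below runs the five-species Bak-Sneppen dynamics on a ring and histograms the steady-state fitness values.

    ```python
    # Monte Carlo simulation of the Bak-Sneppen model with five species on a ring;
    # the histogram approximates the steady-state fitness distribution that the
    # paper characterises exactly.
    import numpy as np

    rng = np.random.default_rng(6)
    n_species, n_steps, burn_in = 5, 200_000, 50_000
    fitness = rng.random(n_species)
    samples = []

    for step in range(n_steps):
        worst = int(np.argmin(fitness))                 # least-fit species
        for j in (worst - 1, worst, (worst + 1) % n_species):
            fitness[j] = rng.random()                   # replace it and its two neighbours
        if step >= burn_in:
            samples.append(fitness.copy())

    values = np.concatenate(samples)
    hist, edges = np.histogram(values, bins=10, range=(0.0, 1.0), density=True)
    for lo, hi, dens in zip(edges[:-1], edges[1:], hist):
        print(f"[{lo:.1f}, {hi:.1f}): {dens:.2f}")
    ```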

  10. Permutation tests for goodness-of-fit testing of mathematical models to experimental data.

    Science.gov (United States)

    Fişek, M Hamit; Barlas, Zeynep

    2013-03-01

    This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models to data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition which has been carried out in the "standardized experimental situation" associated with the program to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test which has been the primary statistical tool for assessing goodness of fit in the EST research program, but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test and is proposed as an alternative to the chi square test, as being simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives, and Kendall's tau(b), a rank order correlation coefficient. The fifth procedure is a new rank order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, which we believe may be more useful than other rank order tests for assessing goodness-of-fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study from an experimental paradigm different from the expectation states paradigm - the "network exchange" paradigm, and describe how our procedures may be applied to this data set. Copyright © 2012 Elsevier Inc. All rights reserved.
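
    The resampling logic behind an average-absolute-deviation test can be sketched as follows: compare the observed mean |proportion - prediction| with its distribution under data simulated from the model. The predicted probabilities and counts below are invented, and this is the general resampling idea rather than the authors' exact procedure.

    ```python
    # Resampling goodness-of-fit via an average absolute deviation (AAD) statistic:
    # compare the observed mean |proportion - predicted probability| with its
    # distribution under data simulated from the model. All numbers are invented.
    import numpy as np

    rng = np.random.default_rng(7)
    predicted_p = np.array([0.72, 0.65, 0.58, 0.51])   # model predictions per condition
    n_per_cond = np.array([40, 40, 40, 40])            # observations per condition
    observed = np.array([31, 24, 21, 18])              # observed counts of the focal response

    def aad(counts, n, p):
        return np.mean(np.abs(counts / n - p))

    obs_stat = aad(observed, n_per_cond, predicted_p)
    sim_stats = np.array([
        aad(rng.binomial(n_per_cond, predicted_p), n_per_cond, predicted_p)
        for _ in range(10_000)
    ])
    p_value = np.mean(sim_stats >= obs_stat)
    print(f"AAD = {obs_stat:.3f}, resampling p = {p_value:.3f}")
    ```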

  11. THE HERSCHEL ORION PROTOSTAR SURVEY: SPECTRAL ENERGY DISTRIBUTIONS AND FITS USING A GRID OF PROTOSTELLAR MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Furlan, E. [Infrared Processing and Analysis Center, California Institute of Technology, 770 S. Wilson Ave., Pasadena, CA 91125 (United States); Fischer, W. J. [Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771 (United States); Ali, B. [Space Science Institute, 4750 Walnut Street, Boulder, CO 80301 (United States); Stutz, A. M. [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Stanke, T. [ESO, Karl-Schwarzschild-Strasse 2, D-85748 Garching bei München (Germany); Tobin, J. J. [National Radio Astronomy Observatory, Charlottesville, VA 22903 (United States); Megeath, S. T.; Booker, J. [Ritter Astrophysical Research Center, Department of Physics and Astronomy, University of Toledo, 2801 W. Bancroft Street, Toledo, OH 43606 (United States); Osorio, M. [Instituto de Astrofísica de Andalucía, CSIC, Camino Bajo de Huétor 50, E-18008 Granada (Spain); Hartmann, L.; Calvet, N. [Department of Astronomy, University of Michigan, 500 Church Street, Ann Arbor, MI 48109 (United States); Poteet, C. A. [New York Center for Astrobiology, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY 12180 (United States); Manoj, P. [Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India); Watson, D. M. [Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627 (United States); Allen, L., E-mail: furlan@ipac.caltech.edu [National Optical Astronomy Observatory, 950 N. Cherry Avenue, Tucson, AZ 85719 (United States)

    2016-05-01

    We present key results from the Herschel Orion Protostar Survey: spectral energy distributions (SEDs) and model fits of 330 young stellar objects, predominantly protostars, in the Orion molecular clouds. This is the largest sample of protostars studied in a single, nearby star formation complex. With near-infrared photometry from 2MASS, mid- and far-infrared data from Spitzer and Herschel, and submillimeter photometry from APEX, our SEDs cover 1.2–870 μm and sample the peak of the protostellar envelope emission at ∼100 μm. Using mid-IR spectral indices and bolometric temperatures, we classify our sample into 92 Class 0 protostars, 125 Class I protostars, 102 flat-spectrum sources, and 11 Class II pre-main-sequence stars. We implement a simple protostellar model (including a disk in an infalling envelope with outflow cavities) to generate a grid of 30,400 model SEDs and use it to determine the best-fit model parameters for each protostar. We argue that far-IR data are essential for accurate constraints on protostellar envelope properties. We find that most protostars, and in particular the flat-spectrum sources, are well fit. The median envelope density and median inclination angle decrease from Class 0 to Class I to flat-spectrum protostars, despite the broad range in best-fit parameters in each of the three categories. We also discuss degeneracies in our model parameters. Our results confirm that the different protostellar classes generally correspond to an evolutionary sequence with a decreasing envelope infall rate, but the inclination angle also plays a role in the appearance, and thus interpretation, of the SEDs.
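
    Grid-based SED fitting of the kind described above reduces to evaluating chi-square for every pre-computed model (with an optimal scale factor) and keeping the minimum; the sketch below uses a random placeholder grid rather than the HOPS model grid.

    ```python
    # Grid-based SED fitting: evaluate chi-square for every model in a pre-computed
    # grid (with the best per-model scale factor) and keep the minimum. The grid and
    # photometry are random placeholders, not the HOPS model grid.
    import numpy as np

    rng = np.random.default_rng(8)
    n_models, n_bands = 30_400, 12
    model_grid = rng.lognormal(sigma=1.0, size=(n_models, n_bands))             # model band fluxes
    obs_flux = 2.5 * model_grid[123] * rng.lognormal(sigma=0.05, size=n_bands)  # "observed" SED
    obs_err = 0.1 * obs_flux

    # Analytic optimal scale factor per model, then chi-square over all bands
    w = 1.0 / obs_err**2
    scale = (model_grid * obs_flux * w).sum(axis=1) / (model_grid**2 * w).sum(axis=1)
    chi2 = (((scale[:, None] * model_grid - obs_flux) / obs_err) ** 2).sum(axis=1)

    best = int(np.argmin(chi2))
    print(f"best model: {best}, scale = {scale[best]:.2f}, chi2/dof = {chi2[best] / (n_bands - 1):.2f}")
    ```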

  12. Improving maps of ice-sheet surface elevation change using combined laser altimeter and stereoscopic elevation model data

    DEFF Research Database (Denmark)

    Fredenslund Levinsen, Joanna; Howat, I. M.; Tscherning, C. C.

    2013-01-01

    We combine the complementary characteristics of laser altimeter data and stereoscopic digital elevation models (DEMs) to construct high-resolution (∼100 m) maps of surface elevations and elevation changes over rapidly changing outlet glaciers in Greenland. Measurements from spaceborne and airborne... laser altimeters have relatively low errors but are spatially limited to the ground tracks, while DEMs have larger errors but provide spatially continuous surfaces. The principle of our method is to fit the DEM surface to the altimeter point clouds in time and space to minimize the DEM errors and use... that surface to extrapolate elevations away from altimeter flight lines. This reduces the DEM registration errors and fills the gap between the altimeter paths. We use data from ICESat and ATM as well as SPOT 5 DEMs from 2007 and 2008 and apply them to the outlet glaciers Jakobshavn Isbræ (JI...

  13. Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy

    Science.gov (United States)

    Nabizadeh, Nooshin; John, Nigel

    2014-03-01

    Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. The local image fitting energy captures local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has lower computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains the sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from being trapped in a local minimum, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.

  14. Anticipating mismatches of HIT investments: Developing a viability-fit model for e-health services.

    Science.gov (United States)

    Mettler, Tobias

    2016-01-01

    Albeit massive investments in the recent years, the impact of health information technology (HIT) has been controversial and strongly disputed by both research and practice. While many studies are concerned with the development of new or the refinement of existing measurement models for assessing the impact of HIT adoption (ex post), this study presents an initial attempt to better understand the factors affecting viability and fit of HIT and thereby underscores the importance of also having instruments for managing expectations (ex ante). We extend prior research by undertaking a more granular investigation into the theoretical assumptions of viability and fit constructs. In doing so, we use a mixed-methods approach, conducting qualitative focus group discussions and a quantitative field study to improve and validate a viability-fit measurement instrument. Our findings suggest two issues for research and practice. First, the results indicate that different stakeholders perceive HIT viability and fit of the same e-health services very unequally. Second, the analysis also demonstrates that there can be a great discrepancy between the organizational viability and individual fit of a particular e-health service. The findings of this study have a number of important implications such as for health policy making, HIT portfolios, and stakeholder communication. Copyright © 2015. Published by Elsevier Ireland Ltd.

  15. Modeling of Throughput in Production Lines Using Response Surface Methodology and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Federico Nuñez-Piña

    2018-01-01

    Full Text Available The problem of assigning buffers in a production line to obtain an optimum production rate is a combinatorial problem of type NP-hard, known as the Buffer Allocation Problem. It is of great importance to designers of production systems due to the costs involved in terms of space requirements. In this work, the relationship among the number of buffer slots, the number of work stations, and the production rate is studied. Response surface methodology and an artificial neural network were used to develop predictive models to find optimal throughput values. 360 production rate values for different numbers of buffer slots and workstations were used to obtain a fourth-order mathematical model and an artificial neural network with four hidden layers. Both models perform well in predicting the throughput, although the artificial neural network model shows a better fit (R = 1.0000) than the response surface methodology (R = 0.9996). Moreover, the artificial neural network produces better predictions for data not utilized in the construction of the models. Finally, this study can be used as a guide to forecast the maximum or near-maximum throughput of production lines taking into account the buffer size and the number of machines in the line.
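
    A fourth-order response surface of throughput in the two factors can be fitted by ordinary least squares on polynomial terms; the sketch below does this on toy data, since the 360 simulated production-rate values are not reproduced here.

    ```python
    # Fourth-order polynomial response surface for throughput(b, m) fitted by
    # ordinary least squares; the data come from a toy formula, not the paper's
    # 360 simulated production-line configurations.
    import itertools
    import numpy as np

    rng = np.random.default_rng(9)
    buffers = np.arange(1, 11, dtype=float)
    machines = np.arange(2, 8, dtype=float)
    B, M = np.meshgrid(buffers, machines)
    b, m = B.ravel(), M.ravel()
    throughput = 0.9 * b / (b + 2.0) - 0.02 * m + 0.01 * rng.standard_normal(b.size)

    # All monomials b**i * m**j with total degree <= 4 (15 terms incl. intercept)
    terms = [(i, j) for i, j in itertools.product(range(5), repeat=2) if i + j <= 4]
    X = np.column_stack([b**i * m**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(X, throughput, rcond=None)

    pred = X @ coef
    r2 = 1.0 - np.sum((throughput - pred) ** 2) / np.sum((throughput - throughput.mean()) ** 2)
    print(f"{len(terms)} polynomial terms, R^2 = {r2:.4f}")
    ```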

  16. Amino acids intake and physical fitness among adolescents.

    Science.gov (United States)

    Gracia-Marco, Luis; Bel-Serrat, Silvia; Cuenca-Garcia, Magdalena; Gonzalez-Gross, Marcela; Pedrero-Chamizo, Raquel; Manios, Yannis; Marcos, Ascensión; Molnar, Denes; Widhalm, Kurt; Polito, Angela; Vanhelst, Jeremy; Hagströmer, Maria; Sjöström, Michael; Kafatos, Anthony; de Henauw, Stefaan; Gutierrez, Ángel; Castillo, Manuel J; Moreno, Luis A

    2017-06-01

    The aim was to investigate whether there is an association between amino acid (AA) intake and physical fitness and, if so, to assess whether this association is independent of carbohydrate intake. European adolescents (n = 1481, 12.5-17.5 years) were measured. Intake was assessed via two non-consecutive 24-h dietary recalls. Lower- and upper-limb muscular fitness was assessed by the standing long jump and handgrip strength tests, respectively. Cardiorespiratory fitness was assessed by the 20-m shuttle run test. Physical activity was objectively measured. Socioeconomic status was obtained via questionnaires. Lower-limb muscular fitness appears to be positively associated with tryptophan, histidine and methionine intake in boys, regardless of centre, age, socioeconomic status, physical activity and total energy intake (model 1). However, these associations disappeared once carbohydrate intake was controlled for (model 2). In girls, only proline intake appears to be positively associated with lower-limb muscular fitness (model 2), while cardiorespiratory fitness appears to be positively associated with leucine (model 1) and proline intake (models 1 and 2). None of the observed associations remained significant once multiple testing was controlled for. In conclusion, we failed to detect any associations between the evaluated AAs and physical fitness after taking into account the effect of multiple testing.

  17. Fitting the two-compartment model in DCE-MRI by linear inversion.

    Science.gov (United States)

    Flouri, Dimitra; Lesnic, Daniel; Sourbron, Steven P

    2016-09-01

    Model fitting of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data with nonlinear least squares (NLLS) methods is slow and may be biased by the choice of initial values. The aim of this study was to develop and evaluate a linear least squares (LLS) method to fit the two-compartment exchange and filtration models. A second-order linear differential equation for the measured concentrations was derived in which the model parameters act as coefficients. Simulations of normal and pathological data were performed to determine calculation time, accuracy and precision under different noise levels and temporal resolutions. Performance of the LLS was evaluated by comparison against the NLLS. The LLS method is about 200 times faster, which reduces the calculation time for a 256 × 256 MR slice from 9 min to 3 s. For ideal data with low noise and high temporal resolution the LLS and NLLS were equally accurate and precise. The LLS was more accurate and precise than the NLLS at low temporal resolution, but less accurate at high noise levels. The data show that the LLS leads to a significant reduction in calculation times and to more reliable results at low noise levels. At higher noise levels the LLS becomes exceedingly inaccurate compared to the NLLS, but this may be improved using a suitable weighting strategy. Magn Reson Med 76:998-1006, 2016. © 2015 Wiley Periodicals, Inc.
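
    The linearization idea can be sketched as follows: writing the model as a second-order linear ODE whose coefficients are the parameters, and integrating twice (with zero initial conditions assumed), turns parameter estimation into an ordinary linear regression on running integrals of the tissue and arterial concentration curves. The equation form, variable names and the mapping back to physiological parameters below are assumptions for illustration; the paper gives the exact parameterization.

    ```python
    # Sketch of a linear least-squares (LLS) fit for a compartment model written as a
    # second-order linear ODE with the model parameters acting as coefficients:
    #     d2C/dt2 + a*dC/dt + b*C = c*dCa/dt + d*Ca
    # Integrating twice (zero initial conditions assumed) gives a linear regression of
    # C(t) on running integrals of C and of the arterial input Ca.
    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    def lls_coefficients(t, C, Ca):
        """Return [a, b, c, d] from measured tissue curve C(t) and arterial input Ca(t)."""
        I1C  = cumulative_trapezoid(C,   t, initial=0.0)   # integral of C
        I2C  = cumulative_trapezoid(I1C, t, initial=0.0)   # double integral of C
        I1Ca = cumulative_trapezoid(Ca,  t, initial=0.0)
        I2Ca = cumulative_trapezoid(I1Ca, t, initial=0.0)
        A = np.column_stack([-I1C, -I2C, I1Ca, I2Ca])      # design matrix
        coef, *_ = np.linalg.lstsq(A, C, rcond=None)
        return coef
    ```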

  18. The regression-calibration method for fitting generalized linear models with additive measurement error

    OpenAIRE

    James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll

    2003-01-01

    This paper discusses and illustrates the method of regression calibration, a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs), following the notation defined in Hardin and Carroll (2003). The discussion includes specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
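
    A minimal sketch of regression calibration with replicate proxies, under assumed simulated data and a logistic outcome model (not the notation of Hardin and Carroll): estimate the measurement-error variance from within-subject replicate variation, replace the error-prone covariate with the resulting best linear predictor of the true covariate, and fit the GLM as usual.

    ```python
    # Sketch of regression calibration with replicate error-prone measurements.
    # The simulated data and the logistic outcome model are illustrative assumptions.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n, k = 500, 2                                    # subjects, replicates per subject
    x = rng.normal(0.0, 1.0, n)                      # true (unobserved) covariate
    w = x[:, None] + rng.normal(0.0, 0.7, (n, k))    # replicate error-prone proxies
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x))))

    wbar = w.mean(axis=1)
    sigma2_u = ((w - wbar[:, None]) ** 2).sum() / (n * (k - 1))    # measurement-error variance
    mu_w, var_w = wbar.mean(), wbar.var(ddof=1)
    var_x = max(var_w - sigma2_u / k, 1e-8)
    x_hat = mu_w + (var_x / var_w) * (wbar - mu_w)                 # best linear predictor of X given Wbar

    fit = sm.GLM(y, sm.add_constant(x_hat), family=sm.families.Binomial()).fit()
    print(fit.params)                                # calibrated coefficient estimates
    ```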

  19. Modelling land surface - atmosphere interactions

    DEFF Research Database (Denmark)

    Rasmussen, Søren Højmark

    The study investigates the modelling of land surface-atmosphere interactions in the context of a fully coupled climate-hydrological model, with a special focus on the conditions under which a fully coupled model system is needed. Regional climate model inter-comparison projects such as ENSEMBLES have shown biases in precipitation. The representation of groundwater in the hydrological model is found to be important, and this implies resolving the small river valleys, because the important shallow groundwater is found in the river valleys. If the model does not represent the shallow groundwater, the area-mean surface flux calculated by the hydrological model is found to be insensitive to model resolution. Furthermore, this study highlights the precipitation bias of the regional climate model and its implications for hydrological modelling.

  20. Corrigendum to "A semi-empirical airfoil stall noise model based on surface pressure measurements" [J. Sound Vib. 387 (2017) 127-162]

    Science.gov (United States)

    Bertagnolio, Franck; Madsen, Helge Aa.; Fischer, Andreas; Bak, Christian

    2018-06-01

    In the above-mentioned paper, two model formulae were tuned to fit experimental data of surface pressure spectra measured in various wind tunnels. They correspond to high and low Reynolds number flow scalings, respectively. It turns out that there exist typographical errors in both formulae numbered (9) and (10) in the original paper. There, these formulae read:

  1. Econometric modelling of risk adverse behaviours of entrepreneurs in the provision of house fittings in China

    Directory of Open Access Journals (Sweden)

    Rita Yi Man Li

    2012-03-01

    Entrepreneurs have always borne the risk of running their business. They reap a profit in return for their risk taking and work. Housing developers are no different. In many countries, such as Australia, the United Kingdom and the United States, they interpret the tastes of the buyers and provide the dwellings they develop with basic fittings such as floor and wall coverings, bathroom fittings and kitchen cupboards. In mainland China, however, in most developments, units or houses are sold without floor or wall coverings, kitchen or bathroom fittings. What is the motive behind this choice? This paper analyses the factors affecting housing developers' decisions to provide fittings, based on 1701 housing developments in Hangzhou, Chongqing and Hangzhou, using a Probit model. The results show that developers build a higher proportion of bare units in mainland China when: (1) there is a shortage of housing; (2) land costs are high, so that the comparative costs of providing fittings become relatively low.
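
    A Probit specification of this kind can be fitted with standard tools; the sketch below uses synthetic data and placeholder covariate names (not the study's variables) purely to illustrate the model form.

    ```python
    # Sketch of a Probit model for the developer's decision to provide fittings
    # (1 = bare unit, 0 = fitted unit). Covariate names and data are placeholders.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 1701
    df = pd.DataFrame({
        "housing_shortage": rng.normal(size=n),   # proxy for local housing shortage
        "land_cost":        rng.normal(size=n),   # proxy for land acquisition cost
    })
    latent = 0.6 * df["housing_shortage"] + 0.8 * df["land_cost"] + rng.normal(size=n)
    df["bare_unit"] = (latent > 0).astype(int)

    probit = sm.Probit(df["bare_unit"], sm.add_constant(df[["housing_shortage", "land_cost"]])).fit()
    print(probit.summary())
    ```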

  2. FIREFLY (Fitting IteRativEly For Likelihood analYsis): a full spectral fitting code

    Science.gov (United States)

    Wilkinson, David M.; Maraston, Claudia; Goddard, Daniel; Thomas, Daniel; Parikh, Taniya

    2017-12-01

    We present a new spectral fitting code, FIREFLY, for deriving the stellar population properties of stellar systems. FIREFLY is a chi-squared minimization fitting code that fits combinations of single-burst stellar population models to spectroscopic data, following an iterative best-fitting process controlled by the Bayesian information criterion. No priors are applied, rather all solutions within a statistical cut are retained with their weight. Moreover, no additive or multiplicative polynomials are employed to adjust the spectral shape. This fitting freedom is envisaged in order to map out the effect of intrinsic spectral energy distribution degeneracies, such as age, metallicity, dust reddening on galaxy properties, and to quantify the effect of varying input model components on such properties. Dust attenuation is included using a new procedure, which was tested on Integral Field Spectroscopic data in a previous paper. The fitting method is extensively tested with a comprehensive suite of mock galaxies, real galaxies from the Sloan Digital Sky Survey and Milky Way globular clusters. We also assess the robustness of the derived properties as a function of signal-to-noise ratio (S/N) and adopted wavelength range. We show that FIREFLY is able to recover age, metallicity, stellar mass, and even the star formation history remarkably well down to an S/N ∼ 5, for moderately dusty systems. Code and results are publicly available.
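
    The core idea, chi-squared minimization over non-negative combinations of single-burst templates with an information-criterion check, can be illustrated with a toy sketch. This is not the FIREFLY code: the templates, noise model and use of BIC over nested template sets are illustrative assumptions.

    ```python
    # Toy illustration (not FIREFLY itself): fit a non-negative linear combination of
    # single-burst template spectra to an observed spectrum by chi-squared minimization,
    # and compare fits with different numbers of templates via the BIC.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(3)
    n_pix, n_templates = 400, 6
    templates = np.abs(rng.normal(1.0, 0.3, (n_pix, n_templates)))   # mock SSP spectra
    true_w = np.array([0.7, 0.0, 0.3, 0.0, 0.0, 0.0])
    sigma = 0.05
    observed = templates @ true_w + rng.normal(0.0, sigma, n_pix)

    def fit_subset(k):
        """Chi-squared and BIC for a fit using the first k templates."""
        w, rnorm = nnls(templates[:, :k] / sigma, observed / sigma)  # weighted NNLS
        chi2 = rnorm ** 2
        bic = chi2 + k * np.log(n_pix)
        return w, chi2, bic

    for k in range(1, n_templates + 1):
        _, chi2, bic = fit_subset(k)
        print(f"{k} templates: chi2={chi2:.1f}  BIC={bic:.1f}")
    ```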

  3. FITTING OF PARAMETRIC BUILDING MODELS TO OBLIQUE AERIAL IMAGES

    Directory of Open Access Journals (Sweden)

    U. S. Panday

    2012-09-01

    In the literature and in photogrammetric workstations, many approaches and systems to automatically reconstruct buildings from remote sensing data are described and available. Those building models are used, for instance, in city modelling or in a cadastre context. If a roof overhang is present, the building walls cannot be estimated correctly from nadir-view aerial images or airborne laser scanning (ALS) data. This leads to inconsistent building outlines, which has a negative influence on visual impression but, more seriously, also represents a wrong legal boundary in the cadastre. Oblique aerial images, as opposed to nadir-view images, reveal greater detail, enabling different views of an object taken from different directions to be seen. Building walls are directly visible in oblique images, and those images are used for automated roof overhang estimation in this research. A fitting algorithm is employed to find roof parameters of simple buildings. It uses a least squares algorithm to fit projected wire frames to their corresponding edge lines extracted from the images. Self-occlusion is detected based on the intersection of the viewing ray with the planes formed by the building, whereas occlusion from other objects is detected using an ALS point cloud. Overhang and ground height are obtained by sweeping vertical and horizontal planes, respectively. Experimental results are verified with high-resolution ortho-images, field survey, and ALS data. A planimetric accuracy of 1 cm mean and 5 cm standard deviation was obtained, while building orientations were accurate to a mean of 0.23° and a standard deviation of 0.96° with the ortho-image. Overhang parameters were aligned to approximately 10 cm with the field survey. The ground and roof heights were accurate to means of −9 cm and 8 cm with standard deviations of 16 cm and 8 cm with ALS, respectively. The developed approach reconstructs 3D building models well in cases of sufficient texture. More images should be acquired for

  4. Joint surface modeling with thin-plate splines.

    Science.gov (United States)

    Boyd, S K; Ronsky, J L; Lichti, D D; Salkauskas, K; Chapman, M A; Salkauskas, D

    1999-10-01

    Mathematical joint surface models based on experimentally determined data points can be used to investigate joint characteristics such as curvature, congruency, cartilage thickness, joint contact areas, as well as to provide geometric information well suited for finite element analysis. Commonly, surface modeling methods are based on B-splines, which involve tensor products. These methods have had success; however, they are limited due to the complex organizational aspect of working with surface patches, and modeling unordered, scattered experimental data points. An alternative method for mathematical joint surface modeling is presented based on the thin-plate spline (TPS). It has the advantage that it does not involve surface patches, and can model scattered data points without experimental data preparation. An analytical surface was developed and modeled with the TPS to quantify its interpolating and smoothing characteristics. Some limitations of the TPS include discontinuity of curvature at exactly the experimental surface data points, and numerical problems dealing with data sets in excess of 2000 points. However, suggestions for overcoming these limitations are presented. Testing the TPS with real experimental data, the patellofemoral joint of a cat was measured with multistation digital photogrammetry and modeled using the TPS to determine cartilage thicknesses and surface curvature. The cartilage thickness distribution ranged from 100 to 550 microns on the patella, and from 100 to 300 microns on the femur. It was found that the TPS was an effective tool for modeling joint surfaces because no preparation of the experimental data points was necessary, and the resulting unique function representing the entire surface does not involve surface patches. A detailed algorithm is presented for implementation of the TPS.
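
    A thin-plate spline fit to scattered, unordered surface points can be sketched with standard tools; the synthetic points and smoothing value below are assumptions, with real digitized joint-surface coordinates taking their place in practice.

    ```python
    # Sketch: fit a thin-plate spline to scattered (x, y, z) surface points and evaluate
    # it on a grid, e.g. to derive curvature or thickness maps. Points are synthetic.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(4)
    xy = rng.uniform(-1.0, 1.0, (500, 2))                    # scattered, unordered points
    z = 0.3 * xy[:, 0] ** 2 + 0.1 * np.sin(3 * xy[:, 1]) + rng.normal(0, 0.002, 500)

    tps = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1e-4)

    gx, gy = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    z_fit = tps(grid).reshape(gx.shape)                      # smooth surface on the grid
    ```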

  5. Fitting the Probability Distribution Functions to Model Particulate Matter Concentrations

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2017-01-01

    The main objective of this study is to identify the best probability distribution and plotting position formula for modeling the concentrations of Total Suspended Particles (TSP) as well as of Particulate Matter with an aerodynamic diameter <10 μm (PM10). The best distribution provides the estimated probabilities of exceeding the threshold limit given by the Egyptian Air Quality Limit Value (EAQLV), and the number of exceedance days is also estimated. The standard EAQLV limits for TSP and PM10 concentrations are 24-h averages of 230 μg/m³ and 70 μg/m³, respectively. Five frequency distribution functions with seven plotting position formulae (empirical cumulative distribution functions) are compared to fit the averages of daily TSP and PM10 concentrations in the year 2014 for Ain Sokhna city. The Quantile-Quantile plot (Q-Q plot) is used as a method for assessing how closely a data set fits a particular distribution. A proper probability distribution that represents the TSP and PM10 concentrations has been chosen based on the statistical performance indicator values. The results show that the Hosking and Wallis plotting position combined with the Frechet distribution gave the best fit for TSP and PM10 concentrations. The Burr distribution with the same plotting position follows the Frechet distribution. The exceedance probability and the number of days over the EAQLV are predicted using the Frechet distribution. In 2014, the exceedance probability and number of days for TSP concentrations are 0.052 and 19 days, respectively. Furthermore, the PM10 concentration is found to exceed the threshold limit on 174 days.
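
    The fitting-and-exceedance workflow can be sketched as follows, using synthetic concentrations and scipy's invweibull (the Frechet distribution); the candidate set, the Burr variant chosen and the goodness-of-fit check are illustrative assumptions rather than the study's plotting-position procedure.

    ```python
    # Sketch: fit candidate distributions to daily concentrations, check the fit with a
    # Kolmogorov-Smirnov statistic, and estimate the probability and expected number of
    # days exceeding the 230 ug/m3 TSP limit. Data are synthetic placeholders.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    tsp = stats.invweibull.rvs(c=4.0, loc=0.0, scale=120.0, size=365, random_state=rng)  # mock data

    candidates = {"frechet": stats.invweibull, "burr": stats.burr12, "lognorm": stats.lognorm}
    for name, dist in candidates.items():
        params = dist.fit(tsp)
        ks = stats.kstest(tsp, dist.name, args=params).statistic
        p_exceed = dist.sf(230.0, *params)                   # P(TSP > 230 ug/m3)
        print(f"{name:8s} KS={ks:.3f}  P(exceed)={p_exceed:.3f}  expected days={365 * p_exceed:.0f}")
    ```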

  6. Computational Fluid Dynamics Modeling of Steam Condensation on Nuclear Containment Wall Surfaces Based on Semiempirical Generalized Correlations

    Directory of Open Access Journals (Sweden)

    Pavan K. Sharma

    2012-01-01

    In water-cooled nuclear power reactors, significant quantities of steam and hydrogen could be produced within the primary containment following postulated design basis accidents (DBA) or beyond design basis accidents (BDBA). For accurate calculation of the temperature/pressure rise and of hydrogen transport in the nuclear reactor containment under such scenarios, a wall condensation heat transfer coefficient (HTC) is used. In the present work, the adaptation of a commercial CFD code with the implementation of models for steam condensation on wall surfaces in the presence of noncondensable gases is explained. Steam condensation has been modeled using the empirical average HTC, which was originally developed for "lumped-parameter" (volume-averaged) modeling of steam condensation in the presence of noncondensable gases. The present paper suggests a generalized HTC based on curve fitting of most of the reported semiempirical condensation models, which are valid for specific wall conditions. The present methodology has been validated against limited reported experimental data from the COPAIN experimental facility. This is the first step towards a CFD-based generalized analysis procedure for condensation modeling applicable to containment wall surfaces, which is being evolved further for specific wall surfaces within the multicompartment containment atmosphere.

  7. Single-layer model for surface roughness.

    Science.gov (United States)

    Carniglia, C K; Jensen, D G

    2002-06-01

    Random roughness of an optical surface reduces its specular reflectance and transmittance by the scattering of light. The reduction in reflectance can be modeled by a homogeneous layer on the surface if the refractive index of the layer is intermediate to the indices of the media on either side of the surface. Such a layer predicts an increase in the transmittance of the surface and therefore does not provide a valid model for the effects of scatter on the transmittance. Adding a small amount of absorption to the layer provides a model that predicts a reduction in both reflectance and transmittance. The absorbing layer model agrees with the predictions of a scalar scattering theory for a layer with a thickness that is twice the rms roughness of the surface. The extinction coefficient k for the layer is proportional to the thickness of the layer.

  8. vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments

    Directory of Open Access Journals (Sweden)

    Demeter Lisa

    2010-05-01

    Background: The replication rate (or fitness) between viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Results: Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Conclusions: Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced to make the computational tool more flexible in accommodating various experimental conditions. This Web-based tool is implemented in C# with Microsoft ASP.NET and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/.
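
    The underlying estimation idea (not the vFitness implementation itself) can be sketched in a few lines: in a growth competition assay, the log ratio of the two variants' abundances is regressed on time, and the slope estimates the relative fitness difference; serial-dilution factors cancel in the ratio. The counts and time points below are illustrative assumptions.

    ```python
    # Sketch of relative-fitness estimation from a growth competition assay by linear
    # regression of the log abundance ratio on time. Counts are mock values.
    import numpy as np

    days = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
    mutant   = np.array([5.0e4, 9.0e4, 1.6e5, 2.9e5, 5.1e5])   # mock variant counts
    wildtype = np.array([5.0e4, 7.0e4, 1.0e5, 1.4e5, 2.0e5])

    log_ratio = np.log(mutant / wildtype)        # serial-dilution factors cancel in the ratio
    slope, intercept = np.polyfit(days, log_ratio, 1)
    print("relative fitness difference per day:", slope)
    ```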

  9. Fitted Hanbury-Brown Twiss radii versus space-time variances in flow-dominated models

    Science.gov (United States)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-04-01

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.

  10. Fitted Hanbury-Brown-Twiss radii versus space-time variances in flow-dominated models

    International Nuclear Information System (INIS)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-01-01

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown-Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data

  11. Worm plot to diagnose fit in quantile regression

    NARCIS (Netherlands)

    Buuren, S. van

    2007-01-01

    The worm plot is a series of detrended Q-Q plots, split by covariate levels. The worm plot is a diagnostic tool for visualizing how well a statistical model fits the data, for finding locations at which the fit can be improved, and for comparing the fit of different models. This paper shows how the

  12. Worm plot to diagnose fit in quantile regression

    NARCIS (Netherlands)

    Buuren, S. van

    2007-01-01

    The worm plot is a series of detrended Q-Q plots, split by covariate levels. The worm plot is a diagnostic tool for visualizing how well a statistical model fits the data, for finding locations at which the fit can be improved, and for comparing the fit of different models. This paper shows how

  13. Prediction of Pressing Quality for Press-Fit Assembly Based on Press-Fit Curve and Maximum Press-Mounting Force

    Directory of Open Access Journals (Sweden)

    Bo You

    2015-01-01

    In order to predict the pressing quality of precision press-fit assembly, press-fit curves and the maximum press-mounting force of press-fit assemblies were investigated by finite element analysis (FEA). The analysis was based on a 3D SolidWorks model using the real dimensions of the microparts and the subsequent FEA model that was built using ANSYS Workbench. The press-fit process could thus be simulated on the basis of static structural analysis. To verify the FEA results, experiments were carried out using a press-mounting apparatus. The results show that the press-fit curves obtained by FEA agree closely with the curves obtained using the experimental method. In addition, the maximum press-mounting force calculated by FEA agrees with that obtained by the experimental method, with the maximum deviation being 4.6%, a value that can be tolerated. The comparison shows that the press-fit curve and maximum press-mounting force calculated by FEA can be used for predicting the pressing quality during precision press-fit assembly.

  14. Direct fit of a theoretical model of phase transition in oscillatory finger motions.

    NARCIS (Netherlands)

    Newell, K.M.; Molenaar, P.C.M.

    2003-01-01

    This paper presents a general method to fit the Schoner-Haken-Kelso (SHK) model of human movement phase transitions directly to time series data. A robust variant of the extended Kalman filter technique is applied to the data of a single subject. The options of covariance resetting and iteration

  15. Acid-base properties and surface complexation modeling of phosphate anion adsorption by wasted low grade iron ore with high phosphorus.

    Science.gov (United States)

    Yuan, Xiaoli; Bai, Chenguang; Xia, Wentang; An, Juan

    2014-08-15

    The adsorption phenomena and specific reaction processes of phosphate onto wasted low grade iron ore with high phosphorus (WLGIOWHP) were studied in this work. Zeta potential and Fourier transform infrared spectroscopy (FTIR) analyses were used to elucidate the interaction mechanism between WLGIOWHP and aqueous solution. The results implied that the main adsorption mechanism was the replacement of surface hydroxyl groups by phosphate via the formation of inner-sphere complexes. The adsorption process was characterized by chemical adsorption onto WLGIOWHP. The non-electrostatic model (NEM) was used to simulate the surface adsorption of phosphate onto WLGIOWHP. The total surface site density and protonation constants for the NEM (N_T = 1.6×10⁻⁴ mol/g, K_a1 = 2.2×10⁻⁴, K_a2 = 6.82×10⁻⁹) were obtained by non-linear fitting of acid-base titration data. In addition, the NEM was used to establish a surface complexation model of phosphate adsorption onto WLGIOWHP. The model successfully predicted the adsorption of phosphate onto WLGIOWHP from municipal wastewater. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Permutation invariant polynomial neural network approach to fitting potential energy surfaces. II. Four-atom systems

    Energy Technology Data Exchange (ETDEWEB)

    Li, Jun; Jiang, Bin; Guo, Hua, E-mail: hguo@unm.edu [Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131 (United States)

    2013-11-28

    A rigorous, general, and simple method to fit global and permutation invariant potential energy surfaces (PESs) using neural networks (NNs) is discussed. This so-called permutation invariant polynomial neural network (PIP-NN) method imposes permutation symmetry by using in its input a set of symmetry functions based on PIPs. For systems with more than three atoms, it is shown that the number of symmetry functions in the input vector needs to be larger than the number of internal coordinates in order to include both the primary and secondary invariant polynomials. This PIP-NN method is successfully demonstrated in three atom-triatomic reactive systems, resulting in full-dimensional global PESs with average errors on the order of meV. These PESs are used in full-dimensional quantum dynamical calculations.

  17. Complex growing networks with intrinsic vertex fitness

    International Nuclear Information System (INIS)

    Bedogne, C.; Rodgers, G. J.

    2006-01-01

    One of the major questions in complex network research is to identify the range of mechanisms by which a complex network can self-organize into a scale-free state. In this paper we investigate the interplay between a fitness linking mechanism and both random and preferential attachment. In our models, each vertex is assigned a fitness x, drawn from a probability distribution ρ(x). In Model A, at each time step a vertex is added and, with probability p, joined to an existing vertex selected at random; with probability 1-p an edge is instead introduced between vertices with fitnesses x and y at a rate f(x,y). Model B differs from Model A in that, with probability p, edges are added with preferential attachment rather than randomly. The analysis of Model A shows that, for every fixed fitness x, the network's degree distribution decays exponentially. In Model B we instead recover a power-law degree distribution whose exponent depends only on p, and we show how this result can be generalized. The properties of a number of particular networks are examined
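
    Model A can be simulated directly; the sketch below assumes a uniform fitness distribution ρ(x) and the linking rate f(x, y) = xy (both illustrative choices, since the description leaves them general), and implements the rate by choosing each endpoint with probability proportional to its fitness.

    ```python
    # Simulation sketch of Model A: each step adds a vertex with fitness drawn from rho(x);
    # with probability p the new vertex attaches to a uniformly random existing vertex,
    # otherwise an edge is added between an existing pair chosen with probability
    # proportional to f(x, y) = x*y. Uniform rho and this f are illustrative assumptions.
    import numpy as np

    def grow_network(n_steps=5000, p=0.5, seed=0):
        rng = np.random.default_rng(seed)
        fitness = [rng.uniform()]                 # initial vertex
        degree = [0]
        edges = []
        for _ in range(n_steps):
            fitness.append(rng.uniform())
            degree.append(0)
            new = len(fitness) - 1
            if rng.uniform() < p:                 # attach the new vertex at random
                old = rng.integers(0, new)
                edges.append((new, old))
                degree[new] += 1
                degree[old] += 1
            else:                                 # fitness-driven edge between existing vertices
                f = np.asarray(fitness[:new])
                i = rng.choice(new, p=f / f.sum())    # endpoints chosen proportional to fitness
                j = rng.choice(new, p=f / f.sum())    # (self-loops allowed for simplicity)
                edges.append((i, j))
                degree[i] += 1
                degree[j] += 1
        return np.asarray(degree), edges

    deg, _ = grow_network()
    print("mean degree:", deg.mean())             # the degree distribution can then be histogrammed
    ```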

  18. Surface complexation modelling: Experiments on the sorption of nickel on quartz

    International Nuclear Information System (INIS)

    Puukko, E.; Hakanen, M.

    1995-10-01

    Assessing the safety of a final repository for nuclear wastes requires knowledge concerning the way in which the radionuclides released are retarded in the geosphere. The aim of the work is to aquire knowledge of empirical methods repeating the experiments on the sorption of nickel on quartz described in the reports published by the British Geological Survey (BGS). The experimental results were modelled with computer models at the Technical Research Centre of Finland (VTT Chemical Technology). The results showed that the experimental knowledge of the sorption of Ni on quartz have been acheved by repeating the experiments of BGS. Experiments made with the two quartz types, Min-U-Sil 5 (MUS) and Nilsiae, showed the difference in sorption of Ni in the low ionic strength solution (0.001 M NaNO 3 ). The sorption of Ni on MUS was higher than predicted by the Surface Complexation Model (SCM). The phenomenon was also observed by the BGS, and may be due to the different amounts of inpurities in the MUS and in the NLS. In other respects, the results of the sorption experiments fitted quite well with those predicted by the SCM model. (8 refs., 8 figs., 11 tabs.)

  19. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    Science.gov (United States)

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG) so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J²) statistics can be applied directly. In a simulation study, TG, HL, and J² were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J² were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J². © 2015 John Wiley & Sons Ltd/London School of Economics.
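
    As a minimal illustration of a grouped goodness-of-fit check for a noncanonical link, the sketch below fits a probit GLM and computes a Hosmer-Lemeshow-type statistic over deciles of predicted risk; the simulated data and grouping are assumptions, and the paper's generalized Tsiatis statistic is not reproduced here.

    ```python
    # Sketch: Hosmer-Lemeshow-type grouped goodness-of-fit check for a binary GLM with a
    # noncanonical (probit) link, using deciles of predicted risk. Data are simulated.
    import numpy as np
    import statsmodels.api as sm
    from scipy import stats

    rng = np.random.default_rng(6)
    x = rng.normal(size=1000)
    y = rng.binomial(1, stats.norm.cdf(-0.3 + 0.8 * x))

    X = sm.add_constant(x)
    fit = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Probit())).fit()
    p = fit.predict(X)

    g = 10
    groups = np.digitize(p, np.quantile(p, np.linspace(0, 1, g + 1)[1:-1]))   # decile groups
    obs = np.array([y[groups == k].sum() for k in range(g)])                  # observed events
    exp = np.array([p[groups == k].sum() for k in range(g)])                  # expected events
    n_k = np.array([(groups == k).sum() for k in range(g)])
    hl = np.sum((obs - exp) ** 2 / (exp * (1 - exp / n_k)))
    p_value = stats.chi2.sf(hl, df=g - 2)
    print(f"HL = {hl:.2f}, p = {p_value:.3f}")
    ```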

  20. Understanding Surface Adhesion in Nature: A Peeling Model.

    Science.gov (United States)

    Gu, Zhen; Li, Siheng; Zhang, Feilong; Wang, Shutao

    2016-07-01

    Nature often exhibits various interesting and unique adhesive surfaces. The attempt to understand natural adhesion phenomena can continuously guide the design of artificial adhesive surfaces by proposing simplified models of surface adhesion. Among those models, a peeling model can often effectively reflect the adhesive property between two surfaces during their attachment and detachment processes. In this context, this review summarizes the recent advances in using the peeling model to understand unique adhesive properties on natural and artificial surfaces. It mainly includes four parts: a brief introduction to natural surface adhesion, the theoretical basis and progress of the peeling model, applications of the peeling model, and finally, conclusions. It is believed that this review will be helpful to various fields, such as surface engineering, biomedicine, and microelectronics.

  1. Measuring Quasar Spin via X-ray Continuum Fitting

    Science.gov (United States)

    Jenkins, Matthew; Pooley, David; Rappaport, Saul; Steiner, Jack

    2018-01-01

    We have identified several quasars whose X-ray spectra appear very soft. When fit with power-law models, the best-fit indices are greater than 3. This is very suggestive of thermal disk emission, indicating that the X-ray spectrum is dominated by the disk component. Galactic black hole binaries in such states have been successfully fit with disk-blackbody models to constrain the inner radius, which also constrains the spin of the black hole. We have fit those models to XMM-Newton spectra of several of our identified soft X-ray quasars to place constraints on the spins of the supermassive black holes.

  2. RATES OF FITNESS DECLINE AND REBOUND SUGGEST PERVASIVE EPISTASIS

    Science.gov (United States)

    Perfeito, L; Sousa, A; Bataillon, T; Gordo, I

    2014-01-01

    Unraveling the factors that determine the rate of adaptation is a major question in evolutionary biology. One key parameter is the effect of a new mutation on fitness, which invariably depends on the environment and genetic background. The fate of a mutation also depends on population size, which determines the amount of drift it will experience. Here, we manipulate both population size and genotype composition and follow the adaptation of 23 distinct Escherichia coli genotypes. These have previously accumulated mutations under intense genetic drift and encompass substantial fitness variation. A simple rule is uncovered: the net fitness change is negatively correlated with the fitness of the genotype in which new mutations appear, a signature of epistasis. We find that Fisher's geometrical model can account for the observed patterns of fitness change and infer the parameters of this model that best fit the data, using Approximate Bayesian Computation. We estimate a genomic mutation rate of 0.01 per generation for fitness-altering mutations, albeit with a large confidence interval, a mean fitness effect of mutations of −0.01, and an effective number of traits of nine in mutS− E. coli. This framework can be extended to confront a broader range of models with data and to test different classes of fitness landscape models. PMID:24372601

  3. The Use of a Modular Titanium Baseplate with a Press-Fit Keel Implanted with a Surface Cementing Technique for Primary Total Knee Arthroplasty

    Directory of Open Access Journals (Sweden)

    Christopher E. Pelt

    2014-01-01

    Little data exist regarding outcomes following TKA performed with surface cementation for the fixation of modular tibial baseplates with press-fit keels. Thus, we retrospectively reviewed the clinical and radiographic outcomes of 439 consecutive primary TKAs performed with surface-cemented tibial components. There were 290 female patients and 149 male patients with an average age of 62 years (range 30-84). Two tibial components were revised for aseptic loosening (0.5%) and four tibial components (0.9%) were removed to address instability (n=2) or malalignment (n=2). Complications included 13 deep infections treated with two-stage revision (12) and fusion (1). These results support the surface cementing technique with a modular grit-blasted titanium surface and cruciform stem during primary TKA.

  4. Fitting model-based psychometric functions to simultaneity and temporal-order judgment data: MATLAB and R routines.

    Science.gov (United States)

    Alcalá-Quintana, Rocío; García-Pérez, Miguel A

    2013-12-01

    Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.

  5. Fitness

    Science.gov (United States)

    From girlshealth.gov (http://www.girlshealth.gov/): Want to look and feel your best? What is physical fitness? Physical fitness means you can do everyday ...

  6. Variation in the post-mating fitness landscape in fruit flies.

    Science.gov (United States)

    Fricke, C; Chapman, T

    2017-07-01

    Sperm competition is pervasive and fundamental to determining a male's overall fitness. Sperm traits and seminal fluid proteins (Sfps) are key factors. However, studies of sperm competition may often exclude females that fail to remate during a defined period. Hence, the resulting data sets contain fewer data from the potentially fittest males that have the most success in preventing female remating. It is also important to consider a male's reproductive success before entering sperm competition, which is a major contributor to fitness. The exclusion of these data can both hinder our understanding of the complete fitness landscapes of competing males and lessen our ability to assess the contribution of different determinants of reproductive success to male fitness. We addressed this here, using the Drosophila melanogaster model system, by (i) capturing a comprehensive range of intermating intervals that define the fitness of interacting wild-type males and (ii) analysing outcomes of sperm competition using selection analyses. We conducted additional tests using males lacking the sex peptide (SP) ejaculate component vs. genetically matched (SP+) controls. This allowed us to assess the comprehensive fitness effects of this important Sfp on sperm competition. The results showed a signature of positive, linear selection in wild-type and SP+ control males on the length of the intermating interval and on male sperm competition defence. However, the fitness surface for males lacking SP was distinct, with local fitness peaks depending on contrasting combinations of remating intervals and offspring numbers. The results suggest that there are alternative routes to success in sperm competition and provide an explanation for the maintenance of variation in sperm competition traits. © 2017 The Authors. Journal of Evolutionary Biology published by John Wiley & Sons Ltd on behalf of European Society for Evolutionary Biology.

  7. Finsler Geometry Modeling of an Orientation-Asymmetric Surface Model for Membranes

    Science.gov (United States)

    Proutorov, Evgenii; Koibuchi, Hiroshi

    2017-12-01

    In this paper, a triangulated surface model is studied in the context of Finsler geometry (FG) modeling. This FG model is an extended version of a recently reported model for two-component membranes, and it is asymmetric under surface inversion. We show that the definition of the model is independent of how the Finsler length of a bond is defined. This leads us to understand that the canonical (or Euclidean) surface model is obtained from the FG model such that it is uniquely determined as a trivial model from the viewpoint of well-definedness.

  8. Surface Adsorption in Nonpolarizable Atomic Models.

    Science.gov (United States)

    Whitmer, Jonathan K; Joshi, Abhijeet A; Carlton, Rebecca J; Abbott, Nicholas L; de Pablo, Juan J

    2014-12-09

    Many ionic solutions exhibit species-dependent properties, including surface tension and the salting-out of proteins. These effects may be loosely quantified in terms of the Hofmeister series, first identified in the context of protein solubility. Here, our interest is to develop atomistic models capable of capturing Hofmeister effects rigorously. Importantly, we aim to capture this dependence in computationally cheap "hard" ionic models, which do not exhibit dynamic polarization. To do this, we have performed an investigation detailing the effects of the water model on these properties. Though incredibly important, the role of water models in simulation of ionic solutions and biological systems is essentially unexplored. We quantify this via the ion-dependent surface attraction of the halide series (Cl, Br, I) and, in so doing, determine the relative importance of various hypothesized contributions to ionic surface free energies. Importantly, we demonstrate surface adsorption can result in hard ionic models combined with a thermodynamically accurate representation of the water molecule (TIP4Q). The effect observed in simulations of iodide is commensurate with previous calculations of the surface potential of mean force in rigid molecular dynamics and polarizable density-functional models. Our calculations are direct simulation evidence of the subtle but sensitive role of water thermodynamics in atomistic simulations.

  9. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  10. A fit method for the determination of inherent filtration with diagnostic x-ray units

    International Nuclear Information System (INIS)

    Meghzifene, K; Nowotny, R; Aiginger, H

    2006-01-01

    A method for the determination of the total inherent filtration of clinical x-ray units using attenuation curves was devised. A model for the calculation of x-ray spectra is used to calculate kerma values, which are then adjusted to the experimental data by minimizing the sum of the squared relative differences in kerma using a modified simplex fit process. The model considers tube voltage, voltage ripple, anode angle and additional filters. Fit parameters are the thickness of an additional inherent Al filter and a general normalization factor. Nineteen sets of measurements, each including attenuation data for three tube voltages and five Al-filter settings, were obtained. Relative differences between experimental and calculated kerma using the data for the additional filter thickness are within a range of -7.6% to 6.4%. Quality curves, i.e. the relationship of additional filtration to HVL, are often used to determine filtration, but the results show that standard quality curves do not reflect the variety of conditions encountered in practice. To relate the thickness of the additional filter to the condition of the anode surface, the data fits were also made using tungsten as the filter material. These fits gave a fit quality identical to that of aluminium, with a tungsten filter thickness of 2.12-8.21 μm, which is within the range of the additional absorbing layers determined for rough anodes

  11. Applied stochastic modelling

    CERN Document Server

    Morgan, Byron JT; Tanner, Martin Abba; Carlin, Bradley P

    2008-01-01

    Introduction and Examples: Introduction; Examples of data sets. Basic Model Fitting: Introduction; Maximum-likelihood estimation for a geometric model; Maximum-likelihood for the beta-geometric model; Modelling polyspermy; Which model?; What is a model for?; Mechanistic models. Function Optimisation: Introduction; MATLAB: graphs and finite differences; Deterministic search methods; Stochastic search methods; Accuracy and a hybrid approach. Basic Likelihood Tools: Introduction; Estimating standard errors and correlations; Looking at surfaces: profile log-likelihoods; Confidence regions from profiles; Hypothesis testing in model selection; Score and Wald tests; Classical goodness of fit; Model selection bias. General Principles: Introduction; Parameterisation; Parameter redundancy; Boundary estimates; Regression and influence; The EM algorithm; Alternative methods of model fitting; Non-regular problems. Simulation Techniques: Introduction; Simulating random variables; Integral estimation; Verification; Monte Carlo inference; Estimating sampling distributi...

  12. Surface-complexation models for sorption onto heterogeneous surfaces

    International Nuclear Information System (INIS)

    Harvey, K.B.

    1997-10-01

    This report provides a description of the discrete-logK spectrum model, together with a description of its derivation, and of its place in the larger context of surface-complexation modelling. The tools necessary to apply the discrete-logK spectrum model are discussed, and background information appropriate to this discussion is supplied as appendices. (author)

  13. Hamiltonian inclusive fitness: a fitter fitness concept.

    Science.gov (United States)

    Costa, James T

    2013-01-01

    In 1963-1964 W. D. Hamilton introduced the concept of inclusive fitness, the only significant elaboration of Darwinian fitness since the nineteenth century. I discuss the origin of the modern fitness concept, providing context for Hamilton's discovery of inclusive fitness in relation to the puzzle of altruism. While fitness conceptually originates with Darwin, the term itself stems from Spencer and crystallized quantitatively in the early twentieth century. Hamiltonian inclusive fitness, with Price's reformulation, provided the solution to Darwin's 'special difficulty'-the evolution of caste polymorphism and sterility in social insects. Hamilton further explored the roles of inclusive fitness and reciprocation to tackle Darwin's other difficulty, the evolution of human altruism. The heuristically powerful inclusive fitness concept ramified over the past 50 years: the number and diversity of 'offspring ideas' that it has engendered render it a fitter fitness concept, one that Darwin would have appreciated.

  14. Fitting the Phenomenological MSSM

    CERN Document Server

    AbdusSalam, S S; Quevedo, F; Feroz, F; Hobson, M

    2010-01-01

    We perform a global Bayesian fit of the phenomenological minimal supersymmetric standard model (pMSSM) to current indirect collider and dark matter data. The pMSSM contains the most relevant 25 weak-scale MSSM parameters, which are simultaneously fit using 'nested sampling' Monte Carlo techniques in more than 15 years of CPU time. We calculate the Bayesian evidence for the pMSSM and constrain its parameters and observables in the context of two widely different, but reasonable, priors to determine which inferences are robust. We make inferences about sparticle masses, the sign of the $\mu$ parameter, the amount of fine tuning, dark matter properties and the prospects for direct dark matter detection without assuming a restrictive high-scale supersymmetry breaking model. We find the inferred lightest CP-even Higgs boson mass as an example of an approximately prior independent observable. This analysis constitutes the first statistically convergent pMSSM global fit to all current data.

  15. CRAPONE, Optical Model Potential Fit of Neutron Scattering Data

    International Nuclear Information System (INIS)

    Fabbri, F.; Fratamico, G.; Reffo, G.

    2004-01-01

    1 - Description of problem or function: Automatic search for local and non-local optical potential parameters for neutrons. Total, elastic, and differential elastic cross sections, l=0 and l=1 strength functions and the scattering length can be considered. 2 - Method of solution: A fitting procedure is applied to different sets of experimental data depending on the local or non-local approximation chosen. In the non-local approximation the fitting procedure can be performed simultaneously over the whole energy range. The best fit is obtained when a set of parameters is found for which χ² is at its minimum. The solution of the system of equations is obtained by diagonalization of the matrix according to the Jacobi method.

  16. Remote measurement of surface roughness, surface reflectance, and body reflectance with LiDAR.

    Science.gov (United States)

    Li, Xiaolu; Liang, Yu

    2015-10-20

    Light detection and ranging (LiDAR) intensity data are attracting increasing attention because of the great potential for use of such data in a variety of remote sensing applications. To fully investigate the data potential for target classification and identification, we carried out a series of experiments with typical urban building materials and employed our reconstructed built-in-lab LiDAR system. Received intensity data were analyzed on the basis of the derived bidirectional reflectance distribution function (BRDF) model and the established integration method. With an improved fitting algorithm, parameters involved in the BRDF model can be obtained to depict the surface characteristics. One of these parameters related to surface roughness was converted to a most used roughness parameter, the arithmetical mean deviation of the roughness profile (Ra), which can be used to validate the feasibility of the BRDF model in surface characterizations and performance evaluations.

  17. State Authenticity as Fit to Environment: The Implications of Social Identity for Fit, Authenticity, and Self-Segregation.

    Science.gov (United States)

    Schmader, Toni; Sedikides, Constantine

    2017-10-01

    People seek out situations that "fit," but the concept of fit is not well understood. We introduce State Authenticity as Fit to the Environment (SAFE), a conceptual framework for understanding how social identities motivate the situations that people approach or avoid. Drawing from but expanding the authenticity literature, we first outline three types of person-environment fit: self-concept fit, goal fit, and social fit. Each type of fit, we argue, facilitates cognitive fluency, motivational fluency, and social fluency that promote state authenticity and drive approach or avoidance behaviors. Using this model, we assert that contexts subtly signal social identities in ways that implicate each type of fit, eliciting state authenticity for advantaged groups but state inauthenticity for disadvantaged groups. Given that people strive to be authentic, these processes cascade down to self-segregation among social groups, reinforcing social inequalities. We conclude by mapping out directions for research on relevant mechanisms and boundary conditions.

  18. Modeling of ion beam surface treatment

    Energy Technology Data Exchange (ETDEWEB)

    Stinnett, R W [Quantum Manufacturing Technologies, Inc., Albuquerque, NM (United States); Maenchen, J E; Renk, T J [Sandia National Laboratories, Albuquerque, NM (United States); Struve, K W [Mission Research Corporation, Albuquerque, NM (United States); Campbell, M M [PASTDCO, Albuquerque, NM (United States)

    1997-12-31

    The use of intense pulsed ion beams is providing a new capability for surface engineering based on rapid thermal processing of the top few microns of metal, ceramic, and glass surfaces. The Ion Beam Surface Treatment (IBEST) process has been shown to produce enhancements in the hardness, corrosion, wear, and fatigue properties of surfaces by rapid melt and re-solidification. A new code called IBMOD was created, enabling the modeling of intense ion beam deposition and the resulting rapid thermal cycling of surfaces. This code was used to model the effect of treatment of aluminum, iron, and titanium using different ion species and pulse durations. (author). 3 figs., 4 refs.

  19. Fit-for-purpose: species distribution model performance depends on evaluation criteria - Dutch Hoverflies as a case study.

    Science.gov (United States)

    Aguirre-Gutiérrez, Jesús; Carvalheiro, Luísa G; Polce, Chiara; van Loon, E Emiel; Raes, Niels; Reemer, Menno; Biesmeijer, Jacobus C

    2013-01-01

    Understanding species distributions and the factors limiting them is an important topic in ecology and conservation, including in nature reserve selection and predicting climate change impacts. While Species Distribution Models (SDM) are the main tool used for these purposes, choosing the best SDM algorithm is not straightforward, as these are plentiful and can be applied in many different ways. SDM are used mainly to gain insight into (1) overall species distributions, (2) their past-present-future probability of occurrence, and/or (3) their ecological niche limits (also referred to as ecological niche modelling). The fact that these three aims may require different models and outputs is, however, rarely considered and has not been evaluated consistently. Here we use data from a systematically sampled set of species occurrences to specifically test the performance of Species Distribution Models across several commonly used algorithms. Species range in distribution patterns from rare to common and from local to widespread. We compare overall model fit (representing species distribution), the accuracy of the predictions at multiple spatial scales, and the consistency in the selection of environmental correlates, all across multiple modelling runs. As expected, the choice of modelling algorithm determines model outcome. However, model quality depends not only on the algorithm, but also on the measure of model fit used and the scale at which it is used. Although model fit was higher for the consensus approach and Maxent, Maxent and GAM models were more consistent in estimating local occurrence, while RF and GBM showed higher consistency in environmental variable selection. Model outcomes diverged more for narrowly distributed species than for widespread species. We suggest that matching study aims with the modelling approach is essential in Species Distribution Models, and we provide suggestions on how to do this for different modelling aims and species' data

  20. Estimation and prediction of maximum daily rainfall at Sagar Island using best fit probability models

    Science.gov (United States)

    Mandal, S.; Choudhury, B. U.

    2015-07-01

    Sagar Island, situated on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) on the island. To select the best-fit distribution models for the annual, seasonal and monthly time series, based on the maximum rank with the minimum value of the test statistics, three statistical goodness-of-fit tests, viz. the Kolmogorov-Smirnov test (K-S), the Anderson-Darling test (A²) and the Chi-square test (χ²), were employed. The fourth probability distribution was identified from the highest overall score obtained from the three goodness-of-fit tests. Results revealed that the normal probability distribution was best fitted for the annual, post-monsoon and summer season MDR, while the Lognormal, Weibull and Pearson 5 distributions were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of an annual MDR of >50, >100, >150, >200 and >250 mm were estimated at 99, 85, 40, 12 and 3% levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85%) for MDR of >100 mm and moderate probabilities (37 to 46%) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. On the island, rainfall anomalies can pose a climatic threat to the sustainability of agricultural production and thus need adequate adaptation and mitigation measures.

  1. Testing the goodness of fit of selected infiltration models on soils with different land use histories

    International Nuclear Information System (INIS)

    Mbagwu, J.S.C.

    1993-10-01

    Six infiltration models, some obtained by reformulating the fitting parameters of the classical Kostiakov (1932) and Philip (1957) equations, were investigated for their ability to describe water infiltration into highly permeable sandy soils from the Nsukka plains of SE Nigeria. The models were Kostiakov, Modified Kostiakov (A), Modified Kostiakov (B), Philip, Modified Philip (A) and Modified Philip (B). Infiltration data were obtained from double ring infiltrometers on field plots established on a Kandic Paleustult (Nkpologu series) to investigate the effects of land use on soil properties and maize yield. The treatments were: (i) tilled-mulched (TM), (ii) tilled-unmulched (TU), (iii) untilled-mulched (UM), (iv) untilled-unmulched (UU) and (v) continuous pasture (CP). Cumulative infiltration was highest on the TM and lowest on the CP plots. All estimated model parameters obtained by the best fit of measured data differed significantly among the treatments. Based on the magnitude of R² values, the Kostiakov, Modified Kostiakov (A), Philip and Modified Philip (A) models provided the best predictions of cumulative infiltration as a function of time. Comparing experimental with model-predicted cumulative infiltration showed, however, that on all treatments the values predicted by the classical Kostiakov, Philip and Modified Philip (A) models deviated most from the experimental data. The other models produced values that agreed very well with the measured data. Considering the ease of determining the fitting parameters, it is proposed that on soils with high infiltration rates either the Modified Kostiakov model (I = Kt^a + I_c t) or the Modified Philip model (I = St^(1/2) + I_c t) be used for routine characterization of the infiltration process, where I is cumulative infiltration, K the time coefficient, t the time elapsed, 'a' the time exponent, I_c the equilibrium infiltration rate and S the soil water sorptivity. (author). 33 refs, 3 figs, 6 tabs
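
    A minimal sketch of fitting the Modified Kostiakov and Modified Philip equations quoted above to cumulative infiltration data by non-linear least squares is given below; the time and infiltration values are synthetic and only stand in for double-ring infiltrometer measurements.

```python
# Minimal sketch (synthetic data): fitting the Modified Kostiakov and Modified
# Philip equations to cumulative infiltration with non-linear least squares.
import numpy as np
from scipy.optimize import curve_fit

def modified_kostiakov(t, K, a, Ic):      # I = K*t**a + Ic*t
    return K * t**a + Ic * t

def modified_philip(t, S, Ic):            # I = S*sqrt(t) + Ic*t
    return S * np.sqrt(t) + Ic * t

t = np.array([5, 10, 20, 30, 60, 90, 120], dtype=float)        # elapsed time, min
I_obs = np.array([3.1, 5.0, 7.9, 10.2, 16.8, 22.5, 27.9])      # cumulative infiltration, cm (illustrative)

for name, f, p0 in [("Mod. Kostiakov", modified_kostiakov, (1.0, 0.5, 0.1)),
                    ("Mod. Philip", modified_philip, (1.0, 0.1))]:
    popt, _ = curve_fit(f, t, I_obs, p0=p0)
    r2 = 1 - np.sum((I_obs - f(t, *popt))**2) / np.sum((I_obs - I_obs.mean())**2)
    print(name, np.round(popt, 3), f"R2 = {r2:.3f}")
```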

  2. Valves and fittings for nuclear power stations

    International Nuclear Information System (INIS)

    1976-01-01

    The standard specifies technical requirements for valves and pipe fittings in nuclear power stations with PWR type reactors. Details of appropriate materials, welding, surface treatment for corrosion protection, painting, and complementary supply are given

  3. A bivariate contaminated binormal model for robust fitting of proper ROC curves to a pair of correlated, possibly degenerate, ROC datasets.

    Science.gov (United States)

    Zhai, Xuetong; Chakraborty, Dev P

    2017-06-01

    The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets, possibly degenerate ones, with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in the CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition (contributing two peaks), and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates, the covariance matrix of the parameters, and other statistics
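
    For orientation, the sketch below evaluates the univariate contaminated binormal density for diseased-case ratings described above (a mixture of two unit-variance normal peaks with mixing fraction α); it only illustrates the mixture structure and is not the CORCBM implementation.

```python
# Hedged sketch: the (univariate) contaminated binormal density for diseased-case
# ratings - a mixture of two unit-variance normal peaks, one at 0 (disease not
# visible) and one at mu (disease visible) with mixing fraction alpha.
import numpy as np
from scipy.stats import norm

def cbm_diseased_pdf(x, mu, alpha):
    return alpha * norm.pdf(x, loc=mu, scale=1.0) + (1 - alpha) * norm.pdf(x, loc=0.0, scale=1.0)

x = np.linspace(-4, 8, 9)
print(np.round(cbm_diseased_pdf(x, mu=2.5, alpha=0.7), 4))
```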

  4. A new mathematical approximation of sunlight attenuation in rocks for surface luminescence dating

    Energy Technology Data Exchange (ETDEWEB)

    Laskaris, Nikolaos, E-mail: nick.laskaris@gmail.com [University of the Aegean, Department of Mediterranean Studies, Laboratory of Archaeometry, 1 Demokratias Avenue, Rhodes 85100 (Greece); Liritzis, Ioannis, E-mail: liritzis@rhodes.aegean.gr [University of the Aegean, Department of Mediterranean Studies, Laboratory of Archaeometry, 1 Demokratias Avenue, Rhodes 85100 (Greece)

    2011-09-15

    The attenuation of sunlight through different rock surfaces and the thermoluminescence (TL) or optically stimulated luminescence (OSL) residuals clock resetting, derived from sunlight-induced eviction of electrons from electron traps, is a prerequisite criterion for potential dating. The modeling of the change of residual luminescence as a function of two variables, the solar radiation path length (or depth) and exposure time, offers further insight into the dating concept. The double exponential function modeling based on the Lambert-Beer law, valid under certain assumptions and constructed by a quasi-manual equation, fails to offer a general and statistically sound expression of the best fit for most rock types. A cumulative log-normal distribution fitting provides a most satisfactory mathematical approximation for marbles, marble schists and granites, where the absorption coefficient and residual luminescence parameters are defined for each type of rock or marble quarry. The new model is applied on available data and age determination tests. - Highlights: > Study of attenuation of sunlight through different rock surfaces. > Study of the thermoluminescence (TL) or optically stimulated luminescence (OSL) residuals as a function of depth. > A cumulative log-normal distribution fitting provides the most satisfactory modeling for marbles, marble schists and granites. > The new model (cumulative log-normal fitting) is applied on available data and age determination tests.
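
    The sketch below fits a cumulative log-normal curve to normalized residual luminescence versus depth on synthetic data, in the spirit of the model described above; the exact parameterization used by the authors may differ, and the depth/residual values are illustrative only.

```python
# Illustrative sketch (synthetic data): fitting a cumulative log-normal curve to
# normalized residual luminescence versus depth into the rock surface.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import lognorm

def residual_fraction(depth_mm, sigma, scale):
    # cumulative log-normal: residual signal rises from ~0 at the surface
    # towards 1 (fully unbleached) deeper into the rock
    return lognorm.cdf(depth_mm, s=sigma, scale=scale)

depth = np.array([0.5, 1, 2, 3, 4, 6, 8, 10])                        # mm
resid = np.array([0.02, 0.06, 0.22, 0.45, 0.63, 0.85, 0.94, 0.98])   # L/L0, illustrative

popt, _ = curve_fit(residual_fraction, depth, resid, p0=(0.5, 3.0), bounds=(0, np.inf))
print("sigma, scale =", np.round(popt, 3))
```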

  5. A new mathematical approximation of sunlight attenuation in rocks for surface luminescence dating

    International Nuclear Information System (INIS)

    Laskaris, Nikolaos; Liritzis, Ioannis

    2011-01-01

    The attenuation of sunlight through different rock surfaces and the thermoluminescence (TL) or optically stimulated luminescence (OSL) residuals clock resetting, derived from sunlight-induced eviction of electrons from electron traps, is a prerequisite criterion for potential dating. The modeling of the change of residual luminescence as a function of two variables, the solar radiation path length (or depth) and exposure time, offers further insight into the dating concept. The double exponential function modeling based on the Lambert-Beer law, valid under certain assumptions and constructed by a quasi-manual equation, fails to offer a general and statistically sound expression of the best fit for most rock types. A cumulative log-normal distribution fitting provides a most satisfactory mathematical approximation for marbles, marble schists and granites, where the absorption coefficient and residual luminescence parameters are defined for each type of rock or marble quarry. The new model is applied on available data and age determination tests. - Highlights: → Study of attenuation of sunlight through different rock surfaces. → Study of the thermoluminescence (TL) or optically stimulated luminescence (OSL) residuals as a function of depth. → A cumulative log-normal distribution fitting provides the most satisfactory modeling for marbles, marble schists and granites. → The new model (cumulative log-normal fitting) is applied on available data and age determination tests.

  6. Building Customer Churn Prediction Models in Fitness Industry with Machine Learning Methods

    OpenAIRE

    Shan, Min

    2017-01-01

    With the rapid growth of digital systems, churn management has become a major focus within customer relationship management in many industries. Ample research has been conducted on churn prediction in different industries with various machine learning methods. This thesis aims to combine feature selection and supervised machine learning methods for defining churn prediction models and to apply them to the fitness industry. Forward selection is chosen as the feature selection method. Support Vector ...

  7. Different fits satisfy different needs: linking person-environment fit to employee commitment and performance using self-determination theory.

    Science.gov (United States)

    Greguras, Gary J; Diefendorff, James M

    2009-03-01

    Integrating and expanding upon the person-environment fit (PE fit) and the self-determination theory literatures, the authors hypothesized and tested a model in which the satisfaction of the psychological needs for autonomy, relatedness, and competence partially mediated the relations between different types of perceived PE fit (i.e., person-organization fit, person-group fit, and job demands-abilities fit) with employee affective organizational commitment and overall job performance. Data from 163 full-time working employees and their supervisors were collected across 3 time periods. Results indicate that different types of PE fit predicted different types of psychological need satisfaction and that psychological need satisfaction predicted affective commitment and performance. Further, person-organization fit and demands-abilities fit also evidenced direct effects on employee affective commitment. These results begin to explicate the processes through which different types of PE fit relate to employee attitudes and behaviors. (c) 2009 APA, all rights reserved.

  8. Tennis Elbow Diagnosis Using Equivalent Uniform Voltage to Fit the Logistic and the Probit Diseased Probability Models

    Directory of Open Access Journals (Sweden)

    Tsair-Fwu Lee

    2015-01-01

    Full Text Available To develop the logistic and the probit models to analyse electromyographic (EMG) equivalent uniform voltage- (EUV-) response for the tenderness of tennis elbow. In total, 78 hands from 39 subjects were enrolled. In this study, the surface EMG (sEMG) signal is obtained by an innovative device with electrodes over the forearm region. The analytical endpoint was defined as Visual Analog Score (VAS) 3+ tenderness of tennis elbow. The logistic and the probit diseased probability (DP) models were established for the VAS score and EMG absolute voltage-time histograms (AVTH). TV50 is the threshold equivalent uniform voltage predicting a 50% risk of disease. Twenty-one out of 78 samples (27%) developed VAS 3+ tenderness of tennis elbow reported by the subject and confirmed by the physician. The fitted DP parameters were TV50 = 153.0 mV (CI: 136.3–169.7 mV), γ50 = 0.84 (CI: 0.78–0.90) and TV50 = 155.6 mV (CI: 138.9–172.4 mV), m = 0.54 (CI: 0.49–0.59) for the logistic and probit models, respectively. When the EUV ≥ 153 mV, the DP of the patient is greater than 50% and vice versa. The logistic and the probit models are valuable tools to predict the DP of VAS 3+ tenderness of tennis elbow.

  9. Tennis Elbow Diagnosis Using Equivalent Uniform Voltage to Fit the Logistic and the Probit Diseased Probability Models

    Science.gov (United States)

    Lin, Wei-Chun; Lin, Shu-Yuan; Wu, Li-Fu; Guo, Shih-Sian; Huang, Hsiang-Jui; Chao, Pei-Ju

    2015-01-01

    To develop the logistic and the probit models to analyse electromyographic (EMG) equivalent uniform voltage- (EUV-) response for the tenderness of tennis elbow. In total, 78 hands from 39 subjects were enrolled. In this study, surface EMG (sEMG) signal is obtained by an innovative device with electrodes over forearm region. The analytical endpoint was defined as Visual Analog Score (VAS) 3+ tenderness of tennis elbow. The logistic and the probit diseased probability (DP) models were established for the VAS score and EMG absolute voltage-time histograms (AVTH). TV50 is the threshold equivalent uniform voltage predicting a 50% risk of disease. Twenty-one out of 78 samples (27%) developed VAS 3+ tenderness of tennis elbow reported by the subject and confirmed by the physician. The fitted DP parameters were TV50 = 153.0 mV (CI: 136.3–169.7 mV), γ 50 = 0.84 (CI: 0.78–0.90) and TV50 = 155.6 mV (CI: 138.9–172.4 mV), m = 0.54 (CI: 0.49–0.59) for logistic and probit models, respectively. When the EUV ≥ 153 mV, the DP of the patient is greater than 50% and vice versa. The logistic and the probit models are valuable tools to predict the DP of VAS 3+ tenderness of tennis elbow. PMID:26380281
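
    As a hedged illustration, the snippet below evaluates one common logistic ("logit") parameterization of a diseased-probability curve in terms of a 50% threshold (TV50) and normalized slope (γ50), using the fitted values reported above; the authors' exact functional form may differ.

```python
# Hedged sketch: a common logistic ("logit") parameterization of a diseased-
# probability curve in terms of TV50 and gamma50, evaluated with the fitted
# values reported above. Not necessarily the authors' exact functional form.
import numpy as np

def logistic_dp(euv_mV, tv50=153.0, gamma50=0.84):
    return 1.0 / (1.0 + (tv50 / euv_mV) ** (4.0 * gamma50))

for v in (100, 153, 200, 250):
    print(f"EUV = {v:3d} mV -> estimated disease probability {logistic_dp(v):.2f}")
```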

  10. Fit reduced GUTS models online: From theory to practice.

    Science.gov (United States)

    Baudrot, Virgile; Veber, Philippe; Gence, Guillaume; Charles, Sandrine

    2018-05-20

    Mechanistic modeling approaches, such as the toxicokinetic-toxicodynamic (TKTD) framework, are promoted by international institutions such as the European Food Safety Authority and the Organization for Economic Cooperation and Development to assess the environmental risk of chemical products generated by human activities. TKTD models can encompass a large set of mechanisms describing the kinetics of compounds inside organisms (e.g., uptake and elimination) and their effects at the level of individuals (e.g., damage accrual, recovery, and death mechanisms). Compared to classical dose-response models, TKTD approaches have many advantages, including accounting for the temporal aspects of exposure and toxicity, considering data points all along the experiment and not only at the end, and making predictions for untested situations such as realistic exposure scenarios. Among TKTD models, the general unified threshold model of survival (GUTS) is among the most recent and innovative frameworks but is still underused in practice, especially by risk assessors, because specialist programming and statistical skills are necessary to run it. Making GUTS models easier to use through a new module freely available from the web platform MOSAIC (standing for MOdeling and StAtistical tools for ecotoxICology) should promote GUTS operability in support of the daily work of environmental risk assessors. This paper presents the main features of MOSAIC_GUTS: uploading of the experimental data, GUTS fitting analysis, and LCx estimates with their uncertainty. These features are exemplified with literature data. Integr Environ Assess Manag 2018;00:000-000. © 2018 SETAC.
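
    For orientation only, a minimal sketch of the reduced GUTS stochastic-death (GUTS-RED-SD) equations is given below: scaled damage follows dD/dt = kd·(Cw(t) − D), the hazard is h(t) = kk·max(D − z, 0) + hb, and survival is S(t) = exp(−∫h dτ). The MOSAIC_GUTS module performs full statistical inference on real survival data; the parameter values here are made up.

```python
# Minimal sketch of the reduced GUTS stochastic-death (GUTS-RED-SD) equations,
# solved with a crude Euler scheme. Parameter values are illustrative only.
import numpy as np

def guts_red_sd_survival(times, conc, kd=0.5, kk=0.3, z=2.0, hb=0.01):
    D, H, surv = 0.0, 0.0, [1.0]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        D += kd * (conc[i - 1] - D) * dt            # Euler step for scaled damage
        H += (kk * max(D - z, 0.0) + hb) * dt       # cumulative hazard
        surv.append(np.exp(-H))
    return np.array(surv)

t = np.linspace(0, 10, 101)                          # days
Cw = np.where(t < 4, 5.0, 0.0)                       # pulsed exposure, then clean water
print(guts_red_sd_survival(t, Cw)[::20].round(3))    # survival at days 0, 2, 4, 6, 8, 10
```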

  11. Alternative model of random surfaces

    International Nuclear Information System (INIS)

    Ambartzumian, R.V.; Sukiasian, G.S.; Savvidy, G.K.; Savvidy, K.G.

    1992-01-01

    We analyse models of triangulated random surfaces and demand that geometrically nearby configurations of these surfaces must have close actions. The inclusion of this principle drives us to suggest a new action, which is a modified Steiner functional. General arguments, based on the Minkowski inequality, show that the maximal contribution to the partition function comes from surfaces close to the sphere. (orig.)

  12. Modelling nanostructures with vicinal surfaces

    International Nuclear Information System (INIS)

    Mugarza, A; Schiller, F; Kuntze, J; Cordon, J; Ruiz-Oses, M; Ortega, J E

    2006-01-01

    Vicinal surfaces of the (111) plane of noble metals are characterized by free-electron-like surface states that scatter at one-dimensional step edges, making them ideal model systems to test the electronic properties of periodic lateral nanostructures. Here we use high-resolution, angle-resolved photoemission to analyse the evolution of the surface state on a variety of vicinal surface structures where both the step potential barrier and the superlattice periodicity can vary. A transition in the electron dimensionality is found as we vary the terrace size in single-phase step arrays. In double-phase, periodic faceted surfaces, we observe surface states that characterize each of the phases

  13. AMS-02 fits dark matter

    Science.gov (United States)

    Balázs, Csaba; Li, Tong

    2016-05-01

    In this work we perform a comprehensive statistical analysis of the AMS-02 electron, positron fluxes and the antiproton-to-proton ratio in the context of a simplified dark matter model. We include known, standard astrophysical sources and a dark matter component in the cosmic ray injection spectra. To predict the AMS-02 observables we use propagation parameters extracted from observed fluxes of heavier nuclei and the low energy part of the AMS-02 data. We assume that the dark matter particle is a Majorana fermion coupling to third generation fermions via a spin-0 mediator, and annihilating to multiple channels at once. The simultaneous presence of various annihilation channels provides the dark matter model with additional flexibility, and this enables us to simultaneously fit all cosmic ray spectra using a simple particle physics model and coherent astrophysical assumptions. Our results indicate that AMS-02 observations are not only consistent with the dark matter hypothesis within the uncertainties, but adding a dark matter contribution improves the fit to the data. Assuming, however, that dark matter is solely responsible for this improvement of the fit, it is difficult to evade the latest CMB limits in this model.

  14. AMS-02 fits dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Balázs, Csaba; Li, Tong [ARC Centre of Excellence for Particle Physics at the Tera-scale,School of Physics and Astronomy, Monash University, Melbourne, Victoria 3800 (Australia)

    2016-05-05

    In this work we perform a comprehensive statistical analysis of the AMS-02 electron, positron fluxes and the antiproton-to-proton ratio in the context of a simplified dark matter model. We include known, standard astrophysical sources and a dark matter component in the cosmic ray injection spectra. To predict the AMS-02 observables we use propagation parameters extracted from observed fluxes of heavier nuclei and the low energy part of the AMS-02 data. We assume that the dark matter particle is a Majorana fermion coupling to third generation fermions via a spin-0 mediator, and annihilating to multiple channels at once. The simultaneous presence of various annihilation channels provides the dark matter model with additional flexibility, and this enables us to simultaneously fit all cosmic ray spectra using a simple particle physics model and coherent astrophysical assumptions. Our results indicate that AMS-02 observations are not only consistent with the dark matter hypothesis within the uncertainties, but adding a dark matter contribution improves the fit to the data. Assuming, however, that dark matter is solely responsible for this improvement of the fit, it is difficult to evade the latest CMB limits in this model.

  15. Fitness cost of reassortment in human influenza.

    Directory of Open Access Journals (Sweden)

    Mara Villa

    2017-11-01

    Full Text Available Reassortment, which is the exchange of genome sequence between viruses co-infecting a host cell, plays an important role in the evolution of segmented viruses. In the human influenza virus, reassortment happens most frequently between co-existing variants within the same lineage. This process breaks genetic linkage and fitness correlations between viral genome segments, but the resulting net effect on viral fitness has remained unclear. In this paper, we determine rate and average selective effect of reassortment processes in the human influenza lineage A/H3N2. For the surface proteins hemagglutinin and neuraminidase, reassortant variants with a mean distance of at least 3 nucleotides to their parent strains get established at a rate of about 10^-2 in units of the neutral point mutation rate. Our inference is based on a new method to map reassortment events from joint genealogies of multiple genome segments, which is tested by extensive simulations. We show that intra-lineage reassortment processes are, on average, under substantial negative selection that increases in strength with increasing sequence distance between the parent strains. The deleterious effects of reassortment manifest themselves in two ways: there are fewer reassortment events than expected from a null model of neutral reassortment, and reassortant strains have fewer descendants than their non-reassortant counterparts. Our results suggest that influenza evolves under ubiquitous epistasis across proteins, which produces fitness barriers against reassortment even between co-circulating strains within one lineage.

  16. Fitness cost of reassortment in human influenza.

    Science.gov (United States)

    Villa, Mara; Lässig, Michael

    2017-11-01

    Reassortment, which is the exchange of genome sequence between viruses co-infecting a host cell, plays an important role in the evolution of segmented viruses. In the human influenza virus, reassortment happens most frequently between co-existing variants within the same lineage. This process breaks genetic linkage and fitness correlations between viral genome segments, but the resulting net effect on viral fitness has remained unclear. In this paper, we determine rate and average selective effect of reassortment processes in the human influenza lineage A/H3N2. For the surface proteins hemagglutinin and neuraminidase, reassortant variants with a mean distance of at least 3 nucleotides to their parent strains get established at a rate of about 10^-2 in units of the neutral point mutation rate. Our inference is based on a new method to map reassortment events from joint genealogies of multiple genome segments, which is tested by extensive simulations. We show that intra-lineage reassortment processes are, on average, under substantial negative selection that increases in strength with increasing sequence distance between the parent strains. The deleterious effects of reassortment manifest themselves in two ways: there are fewer reassortment events than expected from a null model of neutral reassortment, and reassortant strains have fewer descendants than their non-reassortant counterparts. Our results suggest that influenza evolves under ubiquitous epistasis across proteins, which produces fitness barriers against reassortment even between co-circulating strains within one lineage.

  17. A Sport Education Fitness Season's Impact on Students' Fitness Levels, Knowledge, and In-Class Physical Activity.

    Science.gov (United States)

    Ward, Jeffery Kurt; Hastie, Peter A; Wadsworth, Danielle D; Foote, Shelby; Brock, Sheri J; Hollett, Nikki

    2017-09-01

    The purpose of this study was to determine the extent to which a sport education season of fitness could provide students with recommended levels of in-class moderate-to-vigorous physical activity (MVPA) while also increasing students' fitness knowledge and fitness achievement. One hundred and sixty-six 5th-grade students (76 boys, 90 girls) participated in a 20-lesson season called "CrossFit Challenge" during a 4-week period. The Progressive Aerobic Cardiovascular Endurance Run, push-ups, and curl-ups tests of the FITNESSGRAM® were used to assess fitness at pretest and posttest, while fitness knowledge was assessed through a validated, grade-appropriate test of health-related fitness knowledge (HRF). Physical activity was measured with Actigraph GT3X triaxial accelerometers. Results indicated a significant time effect for all fitness tests and the knowledge test. Across the entire season, the students spent an average of 54.5% of lesson time engaged in MVPA, irrespective of the type of lesson (instruction, free practice, or competition). The results suggest that configuring the key principles of sport education within a unit of fitness is an efficient model for providing students with the opportunity to improve fitness skill and HRF knowledge while attaining recommended levels of MVPA.

  18. Enhancing the representation of subgrid land surface characteristics in land surface models

    Directory of Open Access Journals (Sweden)

    Y. Ke

    2013-09-01

    Full Text Available Land surface heterogeneity has long been recognized as important to represent in land surface models. In most existing land surface models, the spatial variability of surface cover is represented as a subgrid composition of multiple surface cover types, although subgrid topography also has major controls on surface processes. In this study, we developed a new subgrid classification method (SGC) that accounts for the variability of both topography and vegetation cover. Each model grid cell was represented with a variable number of elevation classes, and each elevation class was further described by a variable number of vegetation types, optimized for each model grid given a predetermined total number of land response units (LRUs). The subgrid structure of the Community Land Model (CLM) was used to illustrate the newly developed method in this study. Although the new method increases the computational burden in the model simulation compared to the CLM subgrid vegetation representation, it greatly reduced the variation of elevation within each subgrid class and is able to explain at least 80% of the total subgrid plant functional types (PFTs). The new method was also evaluated against two other subgrid methods (SGC1 and SGC2) that assigned fixed numbers of elevation and vegetation classes for each model grid (SGC1: M elevation bands–N PFTs method; SGC2: N PFTs–M elevation bands method). Implemented at five model resolutions (0.1°, 0.25°, 0.5°, 1.0° and 2.0°) with three maximum-allowed total numbers of LRUs (i.e., NLRU of 24, 18 and 12) over North America (NA), the new method yielded a more computationally efficient subgrid representation compared to SGC1 and SGC2, particularly at coarser model resolutions and moderate computational intensity (NLRU = 18). It also explained the most PFT and elevation variability and was more homogeneously distributed spatially. The SGC method will be implemented in CLM over the NA continent to assess its impacts on

  19. Identifying Conformational-Selection and Induced-Fit Aspects in the Binding-Induced Folding of PMI from Markov State Modeling of Atomistic Simulations.

    Science.gov (United States)

    Paul, Fabian; Noé, Frank; Weikl, Thomas R

    2018-03-27

    Unstructured proteins and peptides typically fold during binding to ligand proteins. A challenging problem is to identify the mechanism and kinetics of these binding-induced folding processes in experiments and atomistic simulations. In this Article, we present a detailed picture for the folding of the inhibitor peptide PMI into a helix during binding to the oncoprotein fragment 25-109 Mdm2 obtained from atomistic, explicit-water simulations and Markov state modeling. We find that binding-induced folding of PMI is highly parallel and can occur along a multitude of pathways. Some pathways are induced-fit-like with binding occurring prior to PMI helix formation, while other pathways are conformational-selection-like with binding after helix formation. On the majority of pathways, however, binding is intricately coupled to folding, without clear temporal ordering. A central feature of these pathways is PMI motion on the Mdm2 surface, along the binding groove of Mdm2 or over the rim of this groove. The native binding groove of Mdm2 thus appears as an asymmetric funnel for PMI binding. Overall, binding-induced folding of PMI does not fit into the classical picture of induced fit or conformational selection that implies a clear temporal ordering of binding and folding events. We argue that this holds in general for binding-induced folding processes because binding and folding events in these processes likely occur on similar time scales and do not exhibit the time-scale separation required for temporal ordering.

  20. Adsorption of lysozyme unto silica and polystyrene surfaces in ...

    African Journals Online (AJOL)

    user

    2011-04-11

    Apr 11, 2011 ... surfaces were well fitted by the Langmuir adsorption isotherm model with maximum adsorption .... following reasons: (1) Lysozyme is a globular protein with ... vigorously for 1 h to attain equilibrium adsorption and allowed to.

  1. The More, the Better? Curvilinear Effects of Job Autonomy on Well-Being From Vitamin Model and PE-Fit Theory Perspectives.

    Science.gov (United States)

    Stiglbauer, Barbara; Kovacs, Carrie

    2017-12-28

    In organizational psychology research, autonomy is generally seen as a job resource with a monotone positive relationship with desired occupational outcomes such as well-being. However, both Warr's vitamin model and person-environment (PE) fit theory suggest that negative outcomes may result from excesses of some job resources, including autonomy. Thus, the current studies used survey methodology to explore cross-sectional relationships between environmental autonomy, person-environment autonomy (mis)fit, and well-being. We found that autonomy and autonomy (mis)fit explained between 6% and 22% of variance in well-being, depending on type of autonomy (scheduling, method, or decision-making) and type of (mis)fit operationalization (atomistic operationalization through the separate assessment of actual and ideal autonomy levels vs. molecular operationalization through the direct assessment of perceived autonomy (mis)fit). Autonomy (mis)fit (PE-fit perspective) explained more unique variance in well-being than environmental autonomy itself (vitamin model perspective). Detrimental effects of autonomy excess on well-being were most evident for method autonomy and least consistent for decision-making autonomy. We argue that too-much-of-a-good-thing effects of job autonomy on well-being exist, but suggest that these may be dependent upon sample characteristics (range of autonomy levels), type of operationalization (molecular vs. atomistic fit), autonomy facet (method, scheduling, or decision-making), as well as individual and organizational moderators. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Global fits of GUT-scale SUSY models with GAMBIT

    Science.gov (United States)

    Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin

    2017-12-01

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.

  3. Global fits of GUT-scale SUSY models with GAMBIT

    Energy Technology Data Exchange (ETDEWEB)

    Athron, Peter [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Balazs, Csaba [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Bringmann, Torsten; Dal, Lars A.; Krislock, Abram; Raklev, Are [University of Oslo, Department of Physics, Oslo (Norway); Buckley, Andy [University of Glasgow, SUPA, School of Physics and Astronomy, Glasgow (United Kingdom); Chrzaszcz, Marcin [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); H. Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Krakow (Poland); Conrad, Jan; Edsjoe, Joakim; Farmer, Ben [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Cornell, Jonathan M. [McGill University, Department of Physics, Montreal, QC (Canada); Jackson, Paul; White, Martin [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); University of Adelaide, Department of Physics, Adelaide, SA (Australia); Kvellestad, Anders; Savage, Christopher [NORDITA, Stockholm (Sweden); Mahmoudi, Farvah [Univ Lyon, Univ Lyon 1, CNRS, ENS de Lyon, Centre de Recherche Astrophysique de Lyon UMR5574, Saint-Genis-Laval (France); Theoretical Physics Department, CERN, Geneva (Switzerland); Martinez, Gregory D. [University of California, Physics and Astronomy Department, Los Angeles, CA (United States); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Rogan, Christopher [Harvard University, Department of Physics, Cambridge, MA (United States); Ruiz de Austri, Roberto [IFIC-UV/CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Saavedra, Aldo [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); The University of Sydney, Faculty of Engineering and Information Technologies, Centre for Translational Data Science, School of Physics, Camperdown, NSW (Australia); Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Serra, Nicola [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Weniger, Christoph [University of Amsterdam, GRAPPA, Institute of Physics, Amsterdam (Netherlands); Collaboration: The GAMBIT Collaboration

    2017-12-15

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos. (orig.)

  4. Hair length, facial attractiveness, personality attribution: A multiple fitness model of hairdressing

    OpenAIRE

    Bereczkei, Tamas; Mesko, Norbert

    2007-01-01

    Multiple Fitness Model states that attractiveness varies across multiple dimensions, with each feature representing a different aspect of mate value. In the present study, male raters judged the attractiveness of young females with neotenous and mature facial features, with various hair lengths. Results revealed that the physical appearance of long-haired women was rated high, regardless of their facial attractiveness being valued high or low. Women rated as most attractive were those whose f...

  5. A classical regression framework for mediation analysis: fitting one model to estimate mediation effects.

    Science.gov (United States)

    Saunders, Christina T; Blume, Jeffrey D

    2017-10-26

    Mediation analysis explores the degree to which an exposure's effect on an outcome is diverted through a mediating variable. We describe a classical regression framework for conducting mediation analyses in which estimates of causal mediation effects and their variance are obtained from the fit of a single regression model. The vector of changes in exposure pathway coefficients, which we named the essential mediation components (EMCs), is used to estimate standard causal mediation effects. Because these effects are often simple functions of the EMCs, an analytical expression for their model-based variance follows directly. Given this formula, it is instructive to revisit the performance of routinely used variance approximations (e.g., delta method and resampling methods). Requiring the fit of only one model reduces the computation time required for complex mediation analyses and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations, as would be required in the Baron-Kenny framework. Using data from the BRAIN-ICU study, we provide examples to illustrate the advantages of this framework and compare it with the existing approaches. © The Author 2017. Published by Oxford University Press.
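
    For context, the sketch below shows the familiar two-regression (product-of-coefficients) mediation estimate with a delta-method standard error on synthetic data; the paper above instead obtains the causal mediation effects and their variance from a single fitted model, which this snippet does not reproduce.

```python
# Hedged sketch: standard product-of-coefficients mediation estimate with a
# delta-method standard error, on synthetic data. Illustrative names only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)                       # exposure
m = 0.6 * x + rng.normal(size=n)             # mediator
y = 0.4 * m + 0.3 * x + rng.normal(size=n)   # outcome

fit_m = sm.OLS(m, sm.add_constant(x)).fit()
fit_y = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()

a, b = fit_m.params[1], fit_y.params[2]      # x -> m, and m -> y given x
indirect = a * b
se = np.sqrt(a**2 * fit_y.bse[2]**2 + b**2 * fit_m.bse[1]**2)   # delta method
print(f"indirect effect = {indirect:.3f} (SE {se:.3f})")
```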

  6. Are all models created equal? A content analysis of women in advertisements of fitness versus fashion magazines.

    Science.gov (United States)

    Wasylkiw, L; Emms, A A; Meuse, R; Poirier, K F

    2009-03-01

    The current study is a content analysis of women appearing in advertisements in two types of magazines: fitness/health versus fashion/beauty chosen because of their large and predominantly female readerships. Women appearing in advertisements of the June 2007 issue of five fitness/health magazines were compared to women appearing in advertisements of the June 2007 issue of five beauty/fashion magazines. Female models appearing in advertisements of both types of magazines were primarily young, thin Caucasians; however, images of models were more likely to emphasize appearance over performance when they appeared in fashion magazines. This difference in emphasis has implications for future research.

  7. Understanding colloidal charge renormalization from surface chemistry: Experiment and theory

    Science.gov (United States)

    Gisler, T.; Schulz, S. F.; Borkovec, M.; Sticher, H.; Schurtenberger, P.; D'Aguanno, B.; Klein, R.

    1994-12-01

    In this paper we report on the charging behavior of latex particles in aqueous suspensions. We use static light scattering and acid-base titrations as complementary techniques to observe both effective and bare particle charges. Acid-base titrations at various ionic strengths provide the pH dependent charging curves. The surface chemical parameters (dissociation constant of the acidic carboxylic groups, total density of ionizable sites and Stern capacitance) are determined from fits of a Stern layer model to the titration data. We find strong evidence that the dissociation of protons is the only specific adsorption process. Effective particle charges are determined by fits of integral equation calculations of the polydisperse static structure factor to the static light scattering data. A generalization of the Poisson-Boltzmann cell model including the dissociation of the acidic surface groups and the autodissociation of water is used to predict effective particle charges from the surface chemical parameters determined by the titration experiments. We find that the light scattering data are best described by a model where a small fraction of the ionizable surface sites are sulfate groups which are completely dissociated at moderate pH. These effective charges are comparable to the predictions by a basic cell model where charge regulation is absent.

  8. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh; Genton, Marc G.

    2014-01-01

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte

  9. Effect of the surface oxygen groups on methane adsorption on coals

    Energy Technology Data Exchange (ETDEWEB)

    Hao Shixiong [Department of Chemical Engineering, Sichuan University, Chengdu 610065 (China); Department of Chemical Engineering, Sichuan University of Science and Engineering, Zigong 643000 (China); Wen Jie [Department of Chemical Engineering, Sichuan University, Chengdu 610065 (China); Yu Xiaopeng [Department of Chemical Engineering, Sichuan University, Chengdu 610065 (China); Department of Chemical Engineering, Sichuan University of Science and Engineering, Zigong 643000 (China); Chu Wei, E-mail: chuwei1965_scu@yahoo.com [Department of Chemical Engineering, Sichuan University, Chengdu 610065 (China)

    2013-01-01

    Highlights: ► We modified one coal with H₂O₂, (NH₄)₂S₂O₈ and HNO₃, respectively, to prepare coal samples with different surface properties. ► The oxygen groups on the coal surface were characterized by XPS. ► The textures of the coal samples were investigated by N₂ adsorption at 77 K. ► The adsorption behaviors were measured by the volumetric method. ► There was a negative correlation between methane saturated adsorption capacity and the O_total/C_total. - Abstract: To investigate the influence of surface oxygen groups on methane adsorption on coals, one bituminous coal was modified with H₂O₂, (NH₄)₂S₂O₈ and HNO₃, respectively, to prepare coal samples with different surface properties. The oxygen groups on the coal surface were characterized by X-ray photoelectron spectroscopy (XPS). The textures of the coal samples were investigated by N₂ adsorption at 77 K. Their surface morphologies were analyzed by scanning electron microscopy (SEM). The methane adsorption behaviors of these coal samples were measured at 303 K in the pressure range of 0-5.3 MPa by the volumetric method. The adsorption data of methane were fitted to the Langmuir model and the Dubinin-Astakhov (D-A) model. The fitting results showed that the D-A model fitted the isotherm data better than the Langmuir model. It was observed that there was, in general, a positive correlation between the methane saturated adsorption capacity and the micropore volume of coals, while there was a negative correlation between methane saturated adsorption capacity and the O_total/C_total. The methane adsorption capacity was determined by the coal surface chemistry when the microporosity parameters of two samples were similar. Coal with a higher amount of oxygen surface groups, and consequently with a less
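
    The sketch below fits the Langmuir and Dubinin-Astakhov isotherms to synthetic methane adsorption data and compares their goodness of fit, mirroring the comparison made (with real measurements) in the study above; the pseudo-saturation pressure and all data values are assumptions for illustration.

```python
# Illustrative sketch (synthetic data): fitting the Langmuir and Dubinin-Astakhov
# isotherms to methane adsorption data and comparing residual sums of squares.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(P, V0, b):
    return V0 * b * P / (1 + b * P)

def dubinin_astakhov(P, V0, E, n, Ps=10.0):   # Ps: assumed pseudo-saturation pressure, MPa
    A = 8.314 * 303 * np.log(Ps / P)          # adsorption potential at 303 K, J/mol
    return V0 * np.exp(-(A / E) ** n)

P = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.3])          # MPa
V = np.array([4.1, 7.0, 9.2, 10.8, 13.0, 14.4, 15.6])      # cm3/g, illustrative

for name, f, p0 in [("Langmuir", langmuir, (20, 0.5)),
                    ("D-A", dubinin_astakhov, (20, 8000, 1.5))]:
    popt, _ = curve_fit(f, P, V, p0=p0, bounds=(0, np.inf))
    rss = np.sum((V - f(P, *popt)) ** 2)
    print(f"{name}: params = {np.round(popt, 3)}, RSS = {rss:.3f}")
```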

  10. Modeling and process optimization of electrospinning of chitosan-collagen nanofiber by response surface methodology

    Science.gov (United States)

    Amiri, Nafise; Moradi, Ali; Abolghasem Sajjadi Tabasi, Sayyed; Movaffagh, Jebrail

    2018-04-01

    Chitosan-collagen composite nanofiber is of great interest to researchers in biomedical fields. Since electrospinning is the most popular method for nanofiber production, having a comprehensive knowledge of the electrospinning process is beneficial. Modeling techniques are precious tools for managing variables in the electrospinning process, prior to the more time-consuming and expensive experimental techniques. In this study, a central composite design of response surface methodology (RSM) was employed to develop a statistical model as well as to define the optimum condition for fabrication of chitosan-collagen nanofiber with minimum diameter. The individual and interaction effects of applied voltage (10–25 kV), flow rate (0.5–1.5 mL h−1), and needle-to-collector distance (15–25 cm) on the fiber diameter were investigated. ATR-FTIR and a cell study were done to evaluate the optimized nanofibers. According to the RSM, a two-factor interaction (2FI) model was the most suitable model. The high regression coefficient value (R² ≥ 0.9666) of the fitted regression model and the insignificant lack of fit (P = 0.0715) indicated that the model was highly adequate in predicting chitosan-collagen nanofiber diameter. The optimization process showed that a chitosan-collagen nanofiber diameter of 156.05 nm could be obtained at 9 kV, 0.2 mL h−1, and 25 cm, which was confirmed by experiment (155.92 ± 18.95 nm). The ATR-FTIR and cell study confirmed the structure and biocompatibility of the optimized membrane. The presented model could assist researchers in fabricating chitosan-collagen electrospun scaffolds with a predictable fiber diameter, and the optimized chitosan-collagen nanofibrous mat could be a potential candidate for wound healing and tissue engineering.
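
    A hedged sketch of fitting a two-factor-interaction (2FI) response-surface model by ordinary least squares is shown below; the synthetic data merely stand in for the central composite design runs, and the coefficient values have no relation to the study's results.

```python
# Hedged sketch: a two-factor-interaction (2FI) response-surface model,
# diameter ~ main effects + pairwise interactions, fitted by OLS on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 20
df = pd.DataFrame({
    "voltage": rng.uniform(10, 25, n),       # kV
    "flow": rng.uniform(0.5, 1.5, n),        # mL/h
    "distance": rng.uniform(15, 25, n),      # cm
})
df["diameter"] = (300 - 4 * df.voltage + 60 * df.flow - 2 * df.distance
                  + 1.5 * df.voltage * df.flow + rng.normal(0, 10, n))  # nm, synthetic

model = smf.ols("diameter ~ (voltage + flow + distance) ** 2", data=df).fit()
print(model.params.round(2))
print(f"R-squared = {model.rsquared:.3f}")
```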

  11. Towards self-tuning residual generators for UAV control surface fault diagnosis

    DEFF Research Database (Denmark)

    Blanke, Mogens; Hansen, Søren

    2013-01-01

    Control surface fault diagnosis is essential for timely detection of manoeuvring and stability risks for an unmanned aircraft. Timely detection is crucial since control surface related faults impact stability of flight and safety. Reliable diagnosis require well fitting dynamical models but with ...... flights with different members of a population of UAVs that have inherent model uncertainty from one member to another and from one flight to another. Events with actual faults on control surfaces demonstrates the efficacy of the approach....

  12. Modeling and optimization of ammonia treatment by acidic biochar using response surface methodology

    Directory of Open Access Journals (Sweden)

    Narong Chaisongkroh

    2012-09-01

    Full Text Available Emission of ammonia (NH3)-contaminated waste air to the atmosphere without treatment has affected humans and the environment. Eliminating NH3 in waste air emitted from industries is considered an environmental requisite. In this study, optimization of the NH3 adsorption time using acidic rubber wood biochar (RWB) impregnated with sulfuric acid (H2SO4) was investigated. The central composite design (CCD) in response surface methodology (RSM) with the Design Expert software was used for designing the experiments as well as for the full response surface estimation. The RSM was used to evaluate the effect of adsorption parameters in a continuous-mode fixed-bed column, including waste air flow rate, inlet NH3 concentration in the waste air stream, and H2SO4 concentration for adsorbent surface modification. Based on statistical analysis, the NH3 symmetric adsorption time (at 50% NH3 removal efficiency) model proved to be very highly significant (p < 0.0001). The optimum conditions obtained were 300 ppmv inlet NH3 concentration, 72% H2SO4, and 2.1 l/min waste air flow rate. This resulted in 219 minutes of NH3 adsorption time as obtained from the predicted model, which fitted well with the laboratory verification result. This was supported by the high value of the coefficient of determination (R2 = 0.9137). (NH4)2SO4, a nitrogen fertilizer for planting, was the by-product of the chemical adsorption between NH3 and H2SO4.

  13. Fitting and Calibrating a Multilevel Mixed-Effects Stem Taper Model for Maritime Pine in NW Spain

    Science.gov (United States)

    Arias-Rodil, Manuel; Castedo-Dorado, Fernando; Cámara-Obregón, Asunción; Diéguez-Aranda, Ulises

    2015-01-01

    Stem taper data are usually hierarchical (several measurements per tree, and several trees per plot), making application of a multilevel mixed-effects modelling approach essential. However, correlation between trees in the same plot/stand has often been ignored in previous studies. Fitting and calibration of a variable-exponent stem taper function were conducted using data from 420 trees felled in even-aged maritime pine (Pinus pinaster Ait.) stands in NW Spain. In the fitting step, the tree level explained much more variability than the plot level, and therefore calibration at plot level was omitted. Several stem heights were evaluated for measurement of the additional diameter needed for calibration at tree level. Calibration with an additional diameter measured at between 40 and 60% of total tree height showed the greatest improvement in volume and diameter predictions. If additional diameter measurement is not available, the fixed-effects model fitted by the ordinary least squares technique should be used. Finally, we also evaluated how the expansion of parameters with random effects affects the stem taper prediction, as we consider this a key question when applying the mixed-effects modelling approach to taper equations. The results showed that correlation between random effects should be taken into account when assessing the influence of random effects in stem taper prediction. PMID:26630156

  14. Digital Modeling Phenomenon Of Surface Ground Movement

    Directory of Open Access Journals (Sweden)

    Ioan Voina

    2016-11-01

    Full Text Available With the development of specialized software applications it became possible to approach and resolve complex problems concerning automation and process optimization for which field data are used. Computerized representation of the shape and dimensions of the Earth requires detailed mathematical modeling, known as a "digital terrain model". The paper aims to present the digital terrain model of the Vulcan mine, Hunedoara County, Romania. The model consists of a set of mathematical equations that define the Earth's surface in detail and approximate it rigorously, so that the land area can be calculated. The digital terrain model is therefore a digital representation of the Earth's surface through a mathematical model that approximates the land surface, and it can be used in various civil and industrial applications. The digital terrain model was derived from recorded data using linear and nonlinear interpolation methods based on survey points that capture the natural surface under study. Given the complexity of this work, detailed knowledge of all topographic elements of the work area is absolutely necessary; without it, the design and processing steps could not be undertaken. To build the digital terrain model, the appropriate parameters required for this case study were set within specialized software. After performing all steps we obtained the digital terrain model of the Vulcan mine. The digital terrain model is a complex product; for specialists it provides information equivalent to satellite images and data stored in a digital model, while being easier to use.
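
    A minimal sketch of the interpolation step is given below: scattered survey points (x, y, elevation) are interpolated onto a regular grid with scipy.interpolate.griddata to form a simple digital terrain model; the coordinates and elevations are synthetic.

```python
# Minimal sketch: interpolating scattered survey points onto a regular grid to
# form a simple digital terrain model (linear interpolation). Synthetic data.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(4)
pts = rng.uniform(0, 1000, size=(200, 2))                               # survey point coordinates, m
z = 600 + 0.05 * pts[:, 0] - 0.02 * pts[:, 1] + rng.normal(0, 2, 200)   # elevations, m

xi, yi = np.meshgrid(np.linspace(0, 1000, 101), np.linspace(0, 1000, 101))
dtm = griddata(pts, z, (xi, yi), method="linear")                       # grid of elevations
print("grid shape:", dtm.shape, " mean elevation:", np.nanmean(dtm).round(1), "m")
```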

  15. Neural network hydrological modelling: on questions of over-fitting, over-training and over-parameterisation

    Science.gov (United States)

    Abrahart, R. J.; Dawson, C. W.; Heppenstall, A. J.; See, L. M.

    2009-04-01

    The most critical issue in developing a neural network model is generalisation: how well will the preferred solution perform when it is applied to unseen datasets? The reported experiments used far-reaching sequences of model architectures and training periods to investigate the potential damage that could result from the impact of several interrelated items: (i) over-fitting - a machine learning concept related to exceeding some optimal architectural size; (ii) over-training - a machine learning concept related to the amount of adjustment that is applied to a specific model - based on the understanding that too much fine-tuning might result in a model that had accommodated random aspects of its training dataset - items that had no causal relationship to the target function; and (iii) over-parameterisation - a statistical modelling concept that is used to restrict the number of parameters in a model so as to match the information content of its calibration dataset. The last item in this triplet stems from an understanding that excessive computational complexities might permit an absurd and false solution to be fitted to the available material. Numerous feedforward multilayered perceptrons were trialled and tested. Two different methods of model construction were also compared and contrasted: (i) traditional Backpropagation of Error; and (ii) state-of-the-art Symbiotic Adaptive Neuro-Evolution. Modelling solutions were developed using the reported experimental set ups of Gaume & Gosset (2003). The models were applied to a near-linear hydrological modelling scenario in which past upstream and past downstream discharge records were used to forecast current discharge at the downstream gauging station [CS1: River Marne]; and a non-linear hydrological modelling scenario in which past river discharge measurements and past local meteorological records (precipitation and evaporation) were used to forecast current discharge at the river gauging station [CS2: Le Sauzay].
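
    As a generic illustration of over-training (not the study's experimental setup), the sketch below fits a small multilayer perceptron to noisy data with and without validation-based early stopping and compares training and test scores.

```python
# Hedged illustration of over-training: a small MLP fitted to noisy data, with
# validation-based early stopping guarding against memorizing the noise.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, 400)      # noisy nonlinear target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for early in (False, True):
    mlp = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=5000,
                       early_stopping=early, random_state=0).fit(X_tr, y_tr)
    print(f"early_stopping={early}: train R2 = {mlp.score(X_tr, y_tr):.3f}, "
          f"test R2 = {mlp.score(X_te, y_te):.3f}")
```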

  16. Universality Classes of Interaction Structures for NK Fitness Landscapes

    Science.gov (United States)

    Hwang, Sungmin; Schmiegelt, Benjamin; Ferretti, Luca; Krug, Joachim

    2018-02-01

    Kauffman's NK-model is a paradigmatic example of a class of stochastic models of genotypic fitness landscapes that aim to capture generic features of epistatic interactions in multilocus systems. Genotypes are represented as sequences of L binary loci. The fitness assigned to a genotype is a sum of contributions, each of which is a random function defined on a subset of k ≤ L loci. These subsets or neighborhoods determine the genetic interactions of the model. Whereas earlier work on the NK model suggested that most of its properties are robust with regard to the choice of neighborhoods, recent work has revealed an important and sometimes counter-intuitive influence of the interaction structure on the properties of NK fitness landscapes. Here we review these developments and present new results concerning the number of local fitness maxima and the statistics of selectively accessible (that is, fitness-monotonic) mutational pathways. In particular, we develop a unified framework for computing the exponential growth rate of the expected number of local fitness maxima as a function of L, and identify two different universality classes of interaction structures that display different asymptotics of this quantity for large k. Moreover, we show that the probability that the fitness landscape can be traversed along an accessible path decreases exponentially in L for a large class of interaction structures that we characterize as locally bounded. Finally, we discuss the impact of the NK interaction structures on the dynamics of evolution using adaptive walk models.
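
    The sketch below generates a small NK landscape with adjacent neighborhoods and counts its local fitness maxima by exhaustive enumeration; it is a minimal illustration of the model described above, not of the specific interaction structures analysed in the paper.

```python
# Minimal sketch of Kauffman's NK model with adjacent neighborhoods: each of the
# L loci contributes a random fitness component depending on its own state and
# its k neighbors; local maxima are genotypes with no fitter one-mutant neighbor.
import itertools
import numpy as np

def nk_fitness(L=10, k=2, seed=0):
    rng = np.random.default_rng(seed)
    tables = [rng.random(2 ** (k + 1)) for _ in range(L)]           # one lookup table per locus
    def fitness(g):
        total = 0.0
        for i in range(L):
            neigh = [g[(i + j) % L] for j in range(k + 1)]          # locus i plus its k neighbors
            idx = int("".join(map(str, neigh)), 2)
            total += tables[i][idx]
        return total / L
    return fitness

f = nk_fitness()
genotypes = list(itertools.product((0, 1), repeat=10))
n_maxima = sum(
    all(f(g) >= f(g[:i] + (1 - g[i],) + g[i + 1:]) for i in range(10))
    for g in genotypes)
print("local fitness maxima:", n_maxima)
```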

  17. Statistical topography of fitness landscapes

    OpenAIRE

    Franke, Jasper

    2011-01-01

    Fitness landscapes are generalized energy landscapes that play an important conceptual role in evolutionary biology. These landscapes provide a relation between the genetic configuration of an organism and that organism’s adaptive properties. In this work, global topographical features of these fitness landscapes are investigated using theoretical models. The resulting predictions are compared to empirical landscapes. It is shown that these landscapes allow, at least with respe...

  18. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example

    Science.gov (United States)

    Helgesson, P.; Sjöstrand, H.

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
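
    A minimal sketch of one ingredient described here, generalized least squares fitting with a Gaussian prior implemented as pseudo-observations and solved with a Levenberg-Marquardt-style routine from SciPy, is given below. The three-Gaussian peak shape, the prior values, the synthetic Poisson data and the amplitude-share proxy for r1 are all illustrative assumptions; the Gaussian-process treatment of model defects is not reproduced.

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(1)

      def three_gaussians(x, p):
          a1, m1, a2, m2, a3, m3, s = p
          return (a1 * np.exp(-0.5 * ((x - m1) / s) ** 2)
                  + a2 * np.exp(-0.5 * ((x - m2) / s) ** 2)
                  + a3 * np.exp(-0.5 * ((x - m3) / s) ** 2))

      # Synthetic histogram with Poisson counting noise (illustrative truth values).
      x = np.linspace(0.0, 10.0, 200)
      p_true = np.array([120.0, 3.0, 80.0, 4.0, 60.0, 7.5, 0.6])
      y = rng.poisson(three_gaussians(x, p_true)).astype(float)
      sigma = np.sqrt(np.maximum(y, 1.0))          # data uncertainties

      # Loosely informative Gaussian prior on the parameters (assumed values).
      p_prior = np.array([100.0, 3.1, 100.0, 3.9, 50.0, 7.4, 0.5])
      prior_sigma = np.array([50.0, 0.3, 50.0, 0.3, 30.0, 0.3, 0.2])

      def residuals(p):
          data_part = (three_gaussians(x, p) - y) / sigma
          prior_part = (p - p_prior) / prior_sigma   # prior entered as pseudo-observations
          return np.concatenate([data_part, prior_part])

      fit = least_squares(residuals, p_prior, method="lm")
      a1, m1, a2, m2, a3, m3, _ = fit.x
      r1 = a1 / (a1 + a2)   # crude proxy: with equal widths the amplitude ratio equals the area ratio
      print("fitted parameters:", np.round(fit.x, 3), " r1 =", round(r1, 3))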

  20. Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model

    International Nuclear Information System (INIS)

    Edwards, Darrin C.; Kupinski, Matthew A.; Metz, Charles E.; Nishikawa, Robert M.

    2002-01-01

    We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well

  1. Characterizing polycyclic aromatic hydrocarbon build-up processes on urban road surfaces

    International Nuclear Information System (INIS)

    Liu, Liang; Liu, An; Li, Dunzhu; Zhang, Lixun; Guan, Yuntao

    2016-01-01

    Reliable prediction models are essential for modeling pollutant build-up processes on urban road surfaces. Based on successive samplings of road-deposited sediments (RDS), this study presents empirical models for mathematical replication of the polycyclic aromatic hydrocarbon (PAH) build-up processes on urban road surfaces. The contaminant build-up behavior was modeled using saturation functions, which are commonly applied in the US EPA's Stormwater Management Model (SWMM). Accurate fitting results were achieved in three typical urban land use types, and the applicability of the models was confirmed based on their acceptable relative prediction errors. The fitting results showed high variability in PAH saturation value and build-up rate among different land use types. Results of multivariate and temporal analyses suggested that the quantity and properties of RDS significantly influenced PAH build-up. Furthermore, pollution sources, traffic parameters, road surface conditions, and sweeping frequency could jointly impact the RDS build-up and RDS property change processes. Thus, changes in these parameters could be the main reason for variations in PAH build-up in different urban land use types. - Highlights: • Sufficiently robust prediction models were established for analysis of PAH build-up on urban road surfaces. • PAH build-up processes showed high variability among different land use types. • Pollution sources as well as the quantity and properties of RDS mainly influenced PAH build-up.
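
    For illustration, the sketch below fits a SWMM-style saturation build-up function, B(t) = Bmax·t/(KB + t), to a hypothetical series of antecedent-dry-day loads using SciPy; the data values, units and starting guesses are invented and are not taken from this study.

      import numpy as np
      from scipy.optimize import curve_fit

      def saturation_buildup(t, b_max, k_b):
          """SWMM-style saturation build-up: approaches b_max with half-saturation time k_b."""
          return b_max * t / (k_b + t)

      # Hypothetical antecedent dry days vs. PAH load on the road surface (illustrative units).
      t_days = np.array([1.0, 2.0, 4.0, 7.0, 10.0, 14.0, 21.0])
      load = np.array([1.8, 3.1, 4.6, 5.9, 6.4, 6.9, 7.3])

      popt, pcov = curve_fit(saturation_buildup, t_days, load, p0=[8.0, 3.0])
      perr = np.sqrt(np.diag(pcov))
      rel_err = np.abs(saturation_buildup(t_days, *popt) - load) / load

      print("B_max = %.2f ± %.2f, K_B = %.2f ± %.2f" % (popt[0], perr[0], popt[1], perr[1]))
      print("relative prediction errors:", np.round(rel_err, 3))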

  2. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
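
    The sketch below illustrates only the basic step of treating the straight-line fit near short-circuit current as a statistical linear regression, reading Isc and its standard uncertainty from the ordinary least-squares covariance; the I-V points are invented, and the evidence-based window selection and GUM-aligned analysis described in the record are not reproduced.

      import numpy as np

      # Hypothetical I-V points near short circuit (V in volts, I in amperes).
      v = np.array([-0.02, 0.00, 0.02, 0.04, 0.06, 0.08, 0.10])
      i = np.array([8.213, 8.205, 8.198, 8.190, 8.184, 8.175, 8.168])

      # Straight-line regression I = a*V + b; Isc is the intercept b at V = 0.
      coef, cov = np.polyfit(v, i, deg=1, cov=True)
      a, b = coef
      isc, isc_std = b, np.sqrt(cov[1, 1])

      print(f"Isc = {isc:.4f} A ± {isc_std:.4f} A (slope {a:.4f} A/V)")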

  3. Fitting monthly Peninsula Malaysian rainfall using Tweedie distribution

    Science.gov (United States)

    Yunus, R. M.; Hasan, M. M.; Zubairi, Y. Z.

    2017-09-01

    In this study, the Tweedie distribution was used to fit the monthly rainfall data from 24 monitoring stations of Peninsula Malaysia for the period from January 2008 to April 2015. The aim of the study is to determine whether the distributions within the Tweedie family fit the monthly Malaysian rainfall data well. Within the Tweedie family, the gamma distribution is generally used for fitting rainfall totals; however, the Poisson-gamma distribution is more useful for describing two important features of the rainfall pattern, namely the occurrences (dry months) and the amounts (wet months). First, the appropriate distribution of the monthly rainfall was identified within the Tweedie family for each station. Then, the Tweedie Generalised Linear Model (GLM) with no explanatory variable was used to model the monthly rainfall data. Graphical representation was used to assess model appropriateness. The QQ plots of quantile residuals show that the Tweedie models fit the monthly rainfall data better for the majority of stations on the west coast and in the midland than for those on the east coast of the Peninsula. This significant finding suggests that the best-fitting distribution depends on the geographical location of the monitoring station. In this paper, a simple model is developed for generating synthetic rainfall data for use in various areas, including agriculture and irrigation. We have shown that data simulated using the Tweedie distribution have a frequency histogram fairly similar to that of the actual data. Both the mean number of rainfall events and the mean amount of rain for a month were estimated simultaneously for the case in which the Poisson-gamma distribution fits the data reasonably well. Thus, this work complements previous studies that fit the rainfall amount and the occurrence of rainfall events separately, each to a different distribution.
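
    A minimal sketch of an intercept-only Tweedie GLM is shown below, assuming the Tweedie family available in the statsmodels package; the monthly rainfall values are invented, and the variance power of 1.5 is an arbitrary choice inside the Poisson-gamma range 1 < p < 2 rather than a value estimated from Malaysian data.

      import numpy as np
      import statsmodels.api as sm

      # Hypothetical monthly rainfall totals (mm); zeros represent dry months.
      rain = np.array([0.0, 12.4, 180.2, 95.0, 0.0, 33.1, 220.5, 140.7, 60.3, 0.0, 75.8, 110.2])

      # Intercept-only Tweedie GLM; var_power between 1 and 2 gives the Poisson-gamma case.
      X = np.ones((rain.size, 1))
      family = sm.families.Tweedie(var_power=1.5)   # default log link
      result = sm.GLM(rain, X, family=family).fit()

      print(result.summary())
      print("fitted mean monthly rainfall:", float(np.exp(result.params[0])))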

  4. Land-surface modelling in hydrological perspective

    DEFF Research Database (Denmark)

    Overgaard, Jesper; Rosbjerg, Dan; Butts, M.B.

    2006-01-01

    The purpose of this paper is to provide a review of the different types of energy-based land-surface models (LSMs) and discuss some of the new possibilities that will arise when energy-based LSMs are combined with distributed hydrological modelling. We choose to focus on energy-based approaches, because in comparison to the traditional potential evapotranspiration models, these approaches allow for a stronger link to remote sensing and atmospheric modelling. New opportunities for evaluation of distributed land-surface models through application of remote sensing are discussed in detail, and the difficulties inherent in various evaluation procedures are presented. Finally, the dynamic coupling of hydrological and atmospheric models is explored, and the perspectives of such efforts are discussed.

  5. Foundations of elastoplasticity subloading surface model

    CERN Document Server

    Hashiguchi, Koichi

    2017-01-01

    This book is the standard textbook of elastoplasticity, in which elastoplasticity theory is comprehensively described, from the conventional theory for monotonic loading to the unconventional theory for cyclic loading behavior. Explanations of vector-tensor analysis and continuum mechanics are provided first as a foundation for elastoplasticity theory, covering various strain and stress measures and their rates with their objectivities. Elastoplasticity has been highly developed by the creation and formulation of the subloading surface model, which is the unified fundamental law for irreversible mechanical phenomena in solids. The assumption that the interior of the yield surface is an elastic domain is excluded in this model, which aims at the prediction of cyclic loading behavior, in order to describe the plastic strain rate due to the rate of stress inside the yield surface, although the yield surface enclosing the elastic domain is assumed in all elastoplastic models other than the subloading surface model.

  6. A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.

    Science.gov (United States)

    Glas, Cees A. W.; Meijer, Rob R.

    A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…

  7. GRace: a MATLAB-based application for fitting the discrimination-association model.

    Science.gov (United States)

    Stefanutti, Luca; Vianello, Michelangelo; Anselmi, Pasquale; Robusto, Egidio

    2014-10-28

    The Implicit Association Test (IAT) is a computerized two-choice discrimination task in which stimuli have to be categorized as belonging to target categories or attribute categories by pressing, as quickly and accurately as possible, one of two response keys. The discrimination association model has been recently proposed for the analysis of reaction time and accuracy of an individual respondent to the IAT. The model disentangles the influences of three qualitatively different components on the responses to the IAT: stimuli discrimination, automatic association, and termination criterion. The article presents General Race (GRace), a MATLAB-based application for fitting the discrimination association model to IAT data. GRace has been developed for Windows as a standalone application. It is user-friendly and does not require any programming experience. The use of GRace is illustrated on the data of a Coca Cola-Pepsi Cola IAT, and the results of the analysis are interpreted and discussed.

  8. Singularity fitting in hydrodynamical calculations II

    International Nuclear Information System (INIS)

    Richtmyer, R.D.; Lazarus, R.B.

    1975-09-01

    This is the second report in a series on the development of techniques for the proper handling of singularities in fluid-dynamical calculations; the first was called Progress Report on the Shock-Fitting Project. This report contains six main results: derivation of a free-surface condition, which relates the acceleration of the surface with the gradient of the square of the sound speed just behind it; an accurate method for the early and middle stages of the development of a rarefaction wave, two orders of magnitude more accurate than a simple direct method used for comparison; the similarity theory of the collapsing free surface, where it is shown that there is a two-parameter family of self-similar solutions for γ = 3.9; the similarity theory for the outgoing shock, which takes into account the entropy increase; a 'zooming' method for the study of the asymptotic behavior of solutions of the full initial boundary-value problem; comparison of two methods for determining the similarity parameter delta by zooming, which shows that the second method is preferred. Future reports in the series will contain discussions of the self-similar solutions for this problem, and for that of the collapsing shock, in more detail and for the full range (1, infinity) of γ; the values of certain integrals related to neutronic and thermonuclear rates near collapse; and methods for fitting shocks, contact discontinuities, interfaces, and free surfaces in two-dimensional flows

  9. A CAD System for Evaluating Footwear Fit

    Science.gov (United States)

    Savadkoohi, Bita Ture; de Amicis, Raffaele

    With the great growth in footwear demand, the footwear manufacturing industry, to achieve commercial success, must be able to provide footwear that fulfills consumers' requirements better than its competitors. Accurate fitting is an important factor in shoe comfort and functionality. Footwear fitting has long relied on manual measurement, but the development of 3D acquisition devices and the advent of powerful 3D visualization and modeling techniques for automatically analyzing, searching and interpreting the models have now made automatic determination of different foot dimensions feasible. In this paper, we propose an approach for finding footwear fit within the shoe last database. We first properly align the 3D models using "Weighted" Principal Component Analysis (WPCA). After solving the alignment problem, we use an efficient algorithm for cutting the 3D model in order to find the footwear fit from the shoe last database.

  10. The Many Null Distributions of Person Fit Indices.

    Science.gov (United States)

    Molenaar, Ivo W.; Hoijtink, Herbert

    1990-01-01

    Statistical properties of person fit indices are reviewed as indicators of the extent to which a person's score pattern is in agreement with a measurement model. Distribution of a fit index and ability-free fit evaluation are discussed. The null distribution was simulated for a test of 20 items. (SLD)

  11. Comparative testing of dark matter models with 15 HSB and 15 LSB galaxies

    Science.gov (United States)

    Kun, E.; Keresztes, Z.; Simkó, A.; Szűcs, G.; Gergely, L. Á.

    2017-12-01

    Context. We assemble a database of 15 high surface brightness (HSB) and 15 low surface brightness (LSB) galaxies, for which surface brightness density and spectroscopic rotation curve data are both available and representative for various morphologies. We use this dataset to test the Navarro-Frenk-White, the Einasto, and the pseudo-isothermal sphere dark matter models. Aims: We investigate the compatibility of the pure baryonic model and baryonic plus one of the three dark matter models with observations on the assembled galaxy database. When a dark matter component improves the fit with the spectroscopic rotational curve, we rank the models according to the goodness of fit to the datasets. Methods: We constructed the spatial luminosity density of the baryonic component based on the surface brightness profile of the galaxies. We estimated the mass-to-light (M/L) ratio of the stellar component through a previously proposed color-mass-to-light ratio relation (CMLR), which yields stellar masses independent of the photometric band. We assumed an axisymmetric baryonic mass model with variable axis ratios together with one of the three dark matter models to provide the theoretical rotational velocity curves, and we compared them with the dataset. In a second attempt, we addressed the question of whether the dark component could be replaced by a pure baryonic model with fitted M/L ratios, varied over ranges consistent with CMLR relations derived from the available stellar population models. We employed the Akaike information criterion to establish the performance of the best-fit models. Results: For 7 galaxies (2 HSB and 5 LSB), neither model fits the dataset within the 1σ confidence level. For the other 23 cases, one of the models with dark matter explains the rotation curve data best. According to the Akaike information criterion, the pseudo-isothermal sphere emerges as most favored in 14 cases, followed by the Navarro-Frenk-White (6 cases) and the Einasto (3 cases) dark matter models.

  12. Mapping the global depth to bedrock for land surface modelling

    Science.gov (United States)

    Shangguan, W.; Hengl, T.; Yuan, H.; Dai, Y. J.; Zhang, S.

    2017-12-01

    Depth to bedrock serves as the lower boundary of land surface models, which controls hydrologic and biogeochemical processes. This paper presents a framework for global estimation of depth to bedrock (DTB). Observations were extracted from a global compilation of soil profile data (ca. 130,000 locations) and borehole data (ca. 1.6 million locations). Additional pseudo-observations generated by expert knowledge were added to fill in large sampling gaps. The model training points were then overlaid on a stack of 155 covariates including DEM-based hydrological and morphological derivatives, lithologic units, MODIS surface reflectance bands and vegetation indices derived from the MODIS land products. Global spatial prediction models were developed using random forests and Gradient Boosting Tree algorithms. The final predictions were generated at a spatial resolution of 250 m as an ensemble prediction of the two independently fitted models. The 10-fold cross-validation shows that the models explain 59% of the variation in absolute DTB and 34% in censored DTB (depths deeper than 200 cm are predicted as 200 cm). The model for the occurrence of an R horizon (bedrock) within 200 cm performs well. Visual comparisons of predictions in the study areas where more detailed maps of depth to bedrock exist show that there is a general match with spatial patterns from similar local studies. Limitations of the dataset and extrapolation in data-sparse areas should not be ignored in applications. To improve the accuracy of spatial prediction, more borehole drilling logs will need to be added to supplement the existing training points in under-represented areas.
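
    The two-algorithm ensemble idea can be sketched with scikit-learn as below; the covariates and depth values are synthetic stand-ins, the simple average of the two regressors is only one possible ensembling rule, and the censoring at 200 cm and the 10-fold cross-validation used in the study are not reproduced.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)

      # Synthetic stand-in for DEM/lithology/MODIS covariates and depth-to-bedrock targets (cm).
      X = rng.normal(size=(2000, 12))
      depth = 50 + 40 * X[:, 0] - 25 * X[:, 1] + 10 * X[:, 2] * X[:, 3] + rng.normal(0, 15, 2000)
      depth = np.clip(depth, 0, None)

      X_train, X_test, y_train, y_test = train_test_split(X, depth, random_state=0)

      rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
      gb = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

      # Ensemble prediction: simple average of the two independently fitted models.
      pred = 0.5 * (rf.predict(X_test) + gb.predict(X_test))
      ss_res = np.sum((y_test - pred) ** 2)
      ss_tot = np.sum((y_test - y_test.mean()) ** 2)
      print("ensemble R^2 on held-out data: %.2f" % (1 - ss_res / ss_tot))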

  13. Inverse problem theory methods for data fitting and model parameter estimation

    CERN Document Server

    Tarantola, A

    2002-01-01

    Inverse Problem Theory is written for physicists, geophysicists and all scientists facing the problem of quantitative interpretation of experimental data. Although it contains a lot of mathematics, it is not intended as a mathematical book, but rather tries to explain how a method of acquisition of information can be applied to the actual world.The book provides a comprehensive, up-to-date description of the methods to be used for fitting experimental data, or to estimate model parameters, and to unify these methods into the Inverse Problem Theory. The first part of the book deals wi

  14. Modelling of low energy ion sputtering from oxide surfaces

    International Nuclear Information System (INIS)

    Kubart, T; Nyberg, T; Berg, S

    2010-01-01

    The main aim of this work is to present a way to estimate the values of the surface binding energy for oxides. This is done by fitting results from the binary collision approximation code Tridyn with data from the reactive sputtering processing curves, as well as the elemental composition obtained from x-ray photoelectron spectroscopy (XPS). Oxide targets of Al, Ti, V, Nb and Ta are studied. The obtained surface binding energies are then used to predict the partial sputtering yields. An anomalously high sputtering yield is observed for the TiO2 target. This is attributed to the high sputtering yield of the lower Ti oxides. Such an effect is not observed for the other studied metals. XPS measurement of the oxide targets confirms the formation of suboxides during ion bombardment as well as an oxygen-deficient surface in the steady state. These effects are confirmed by the processing curves from the oxide targets, which show an elevated sputtering rate in pure argon.

  15. Modeling of inactivation of surface borne microorganisms occurring on seeds by cold atmospheric plasma (CAP)

    Science.gov (United States)

    Mitra, Anindita; Li, Y.-F.; Shimizu, T.; Klämpfl, Tobias; Zimmermann, J. L.; Morfill, G. E.

    2012-10-01

    Cold Atmospheric Plasma (CAP) is a fast, low-cost, simple, easy-to-handle technology for biological applications. Our group has developed a number of different CAP devices using microwave technology and surface micro discharge (SMD) technology. In this study, FlatPlaSter2.0 at different time intervals (0.5 to 5 min) is used for microbial inactivation. There is a continuous demand for inactivation of microorganisms associated with raw foods/seeds without losing their properties. This research focuses on the kinetics of CAP-induced inactivation of naturally growing surface microorganisms on seeds. The data were assessed with log-linear and non-log-linear models for survivor curves as a function of time. The Weibull model showed the best fitting performance for the data. No shoulder or tail was observed. The models are expressed in terms of the number of log-cycle reductions, rather than classical D-values, with statistical measures. The viability of seeds was not affected for CAP treatment times up to 3 min with our device. The optimum result was observed at 1 min, with the percentage of germination increased from 60.83% to 89.16% compared to the control. This result suggests the advantage and promising role of CAP in the food industry.
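
    The Weibull survivor model mentioned here, log10(N/N0) = -(t/δ)^p, can be fitted with a few lines of SciPy as sketched below; the treatment times echo the 0.5-5 min range of the study, but the survival ratios, starting guesses and units are invented.

      import numpy as np
      from scipy.optimize import curve_fit

      def weibull_log_survival(t, delta, p):
          """Weibull model: log10(N/N0) = -(t/delta)**p (no shoulder, no tail)."""
          return -(t / delta) ** p

      # Hypothetical CAP treatment times (min) and log10 survival ratios.
      t = np.array([0.5, 1.0, 2.0, 3.0, 5.0])
      log_s = np.array([-0.6, -1.2, -2.1, -2.8, -3.9])

      (delta, p), pcov = curve_fit(weibull_log_survival, t, log_s, p0=[1.0, 1.0])
      print(f"delta = {delta:.2f} min (time for the first log reduction), p = {p:.2f}")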

  16. Black Versus Gray T-Shirts: Comparison of Spectrophotometric and Other Biophysical Properties of Physical Fitness Uniforms and Modeled Heat Strain and Thermal Comfort

    Science.gov (United States)

    2016-09-01

    ... the impact of the environment on the wearer. To model these impacts on human thermal sensation (e.g., thermal comfort) and thermoregulatory ...

  17. Virtual Suit Fit Assessment Using Body Shape Model

    Data.gov (United States)

    National Aeronautics and Space Administration — Shoulder injury is one of the most serious risks for crewmembers in long-duration spaceflight. While suboptimal suit fit and contact pressures between the shoulder...

  18. Two-Stage Method Based on Local Polynomial Fitting for a Linear Heteroscedastic Regression Model and Its Application in Economics

    Directory of Open Access Journals (Sweden)

    Liyun Su

    2012-01-01

    We introduce the extension of local polynomial fitting to the linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the nonparametric technique of local polynomial estimation, we do not need to know the heteroscedastic function; we can therefore improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we compare the parameter estimates and reach an optimal fit. We also verify the asymptotic normality of the parameters through numerical simulations. Finally, the approach is applied to a case study in economics, which indicates that our method is effective in finite-sample situations.
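
    A rough sketch of the two-stage idea on synthetic data is given below, with a LOWESS smooth of squared residuals standing in for the local polynomial estimate of the heteroscedastic function before weighted (generalized) least squares; the function choices, bandwidth and data are illustrative assumptions rather than the authors' procedure.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)

      # Synthetic heteroscedastic data: the noise standard deviation grows with x.
      n = 300
      x = np.sort(rng.uniform(0, 10, n))
      y = 1.5 + 0.8 * x + rng.normal(0, 0.2 + 0.15 * x, n)
      X = sm.add_constant(x)

      # Stage 1: pilot OLS fit, then smooth the squared residuals to estimate the variance function.
      ols = sm.OLS(y, X).fit()
      var_hat = sm.nonparametric.lowess(ols.resid ** 2, x, frac=0.3, return_sorted=False)
      var_hat = np.clip(var_hat, 1e-6, None)   # keep the variance estimate positive

      # Stage 2: generalized (weighted) least squares with weights 1 / estimated variance.
      wls = sm.WLS(y, X, weights=1.0 / var_hat).fit()
      print("OLS coefficients:", np.round(ols.params, 3))
      print("two-stage WLS coefficients:", np.round(wls.params, 3))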

  19. Convolution based profile fitting

    International Nuclear Information System (INIS)

    Kern, A.; Coelho, A.A.; Cheary, R.W.

    2002-01-01

    In convolution based profile fitting, profiles are generated by convoluting functions together to form the observed profile shape. For a convolution of 'n' functions this process can be written as Y(2θ) = F1(2θ) ⊗ F2(2θ) ⊗ ... ⊗ Fi(2θ) ⊗ ... ⊗ Fn(2θ). In powder diffractometry the functions Fi(2θ) can be interpreted as the aberration functions of the diffractometer, but in general any combination of appropriate functions for Fi(2θ) may be used in this context. Most direct convolution fitting methods are restricted to combinations of Fi(2θ) that can be convoluted analytically (e.g. GSAS) such as Lorentzians, Gaussians, the hat (impulse) function and the exponential function. However, software such as TOPAS is now available that can accurately convolute and refine a wide variety of profile shapes numerically, including user defined profiles, without the need to convolute analytically. Some of the most important advantages of modern convolution based profile fitting are: 1) virtually any peak shape and angle dependence can normally be described using minimal profile parameters in laboratory and synchrotron X-ray data as well as in CW and TOF neutron data. This is possible because numerical convolution and numerical differentiation is used within the refinement procedure so that a wide range of functions can easily be incorporated into the convolution equation; 2) it can use physically based diffractometer models by convoluting the instrument aberration functions. This can be done for most laboratory based X-ray powder diffractometer configurations including conventional divergent beam instruments, parallel beam instruments, and diffractometers used for asymmetric diffraction. It can also accommodate various optical elements (e.g. multilayers and monochromators) and detector systems (e.g. point and position sensitive detectors) and has already been applied to neutron powder diffraction systems (e.g. ANSTO) as well as synchrotron based
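
    The numerical-convolution step can be sketched as below: an intrinsic Lorentzian is convolved numerically with a Gaussian standing in for an instrument aberration function, and the convolved profile is refined against synthetic data with SciPy; the kernel choices, grid and parameter values are illustrative and no particular diffractometer model is implied.

      import numpy as np
      from scipy.optimize import curve_fit

      two_theta = np.linspace(19.0, 21.0, 401)
      step = two_theta[1] - two_theta[0]

      def convolved_profile(x, centre, gamma, sigma, area):
          """Lorentzian emission profile convolved numerically with a Gaussian aberration."""
          lor = gamma / np.pi / ((x - centre) ** 2 + gamma ** 2)
          gauss = np.exp(-0.5 * ((x - x.mean()) / sigma) ** 2)
          gauss /= gauss.sum() * step            # normalise the kernel to unit area
          return area * np.convolve(lor, gauss, mode="same") * step

      # Synthetic observed peak with noise (illustrative parameter values).
      rng = np.random.default_rng(4)
      y_true = convolved_profile(two_theta, 20.0, 0.04, 0.03, 100.0)
      y_obs = y_true + rng.normal(0, 0.5, two_theta.size)

      popt, _ = curve_fit(convolved_profile, two_theta, y_obs, p0=[20.0, 0.05, 0.05, 90.0])
      print("centre=%.3f, lorentzian gamma=%.3f, gaussian sigma=%.3f, area=%.1f" % tuple(popt))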

  20. Numerical modelling of surface hydrology and near-surface hydrogeology at Forsmark. Site descriptive modelling SDM. Site Forsmark

    Energy Technology Data Exchange (ETDEWEB)

    Bosson, Emma (Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)); Gustafsson, Lars-Goeran; Sassner, Mona (DHI Sverige AB, Stockholm (Sweden))

    2008-09-15

    SKB is currently performing site investigations at two potential sites for a final repository for spent nuclear fuel. This report presents results of water flow and solute transport modelling of the Forsmark site. The modelling reported in this document focused on the near-surface groundwater, i.e. groundwater in Quaternary deposits and shallow rock, and surface water systems, and was performed using the MIKE SHE tool. The most recent site data used in the modelling were delivered in the Forsmark 2.3 dataset, which had its 'data freeze' on March 31, 2007. The present modelling is performed in support of the final version of the Forsmark site description that is produced during the site investigation phase. In this work, the hydrological modelling system MIKE SHE has been used to describe near-surface groundwater flow and the contact between groundwater and surface water at the Forsmark site. The surface water system at Forsmark is described with the one-dimensional 'channel flow' modelling tool MIKE 11, which is fully and dynamically integrated with MIKE SHE. The MIKE SHE model was updated with data from the F2.3 data freeze. The main updates concerned the geological description of the saturated zone and the time series data on water levels and surface water discharges. The time series data used as input data and for calibration and validation was extended until the Forsmark 2.3 data freeze (March 31, 2007). The present work can be subdivided into the following four parts: 1. Update of the numerical flow model. 2. Sensitivity analysis and calibration of the model parameters. 3. Validation of the calibrated model, followed by evaluation and identification of discrepancies between measurements and model results. 4. Additional sensitivity analysis and calibration in order to resolve the problems identified in point three above. The main actions taken during the calibration can be summarised as follows: 1. The potential evapotranspiration was

  1. Internal Physical Features of a Land Surface Model Employing a Tangent Linear Model

    Science.gov (United States)

    Yang, Runhua; Cohn, Stephen E.; daSilva, Arlindo; Joiner, Joanna; Houser, Paul R.

    1997-01-01

    The Earth's land surface, including its biomass, is an integral part of the Earth's weather and climate system. Land surface heterogeneity, such as the type and amount of vegetative covering, has a profound effect on local weather variability and therefore on regional variations of the global climate. Surface conditions affect local weather and climate through a number of mechanisms. First, they determine the re-distribution of the net radiative energy received at the surface, through the atmosphere, from the sun. A certain fraction of this energy increases the surface ground temperature, another warms the near-surface atmosphere, and the rest evaporates surface water, which in turn creates clouds and causes precipitation. Second, they determine how much rainfall and snowmelt can be stored in the soil and how much instead runs off into waterways. Finally, surface conditions influence the near-surface concentration and distribution of greenhouse gases such as carbon dioxide. The processes through which these mechanisms interact with the atmosphere can be modeled mathematically, to within some degree of uncertainty, on the basis of underlying physical principles. Such a land surface model provides predictive capability for surface variables including ground temperature, surface humidity, and soil moisture and temperature. This information is important for agriculture and industry, as well as for addressing fundamental scientific questions concerning global and local climate change. In this study we apply a methodology known as tangent linear modeling to help us understand more deeply the behavior of the Mosaic land surface model, a model that has been developed over the past several years at NASA/GSFC. This methodology allows us to examine, directly and quantitatively, the dependence of prediction errors in land surface variables upon different vegetation conditions. The work also highlights the importance of accurate soil moisture information. Although surface

  2. Fitting outbreak models to data from many small norovirus outbreaks

    Directory of Open Access Journals (Sweden)

    Eamon B. O’Dea

    2014-03-01

    Infectious disease often occurs in small, independent outbreaks in populations with varying characteristics. Each outbreak by itself may provide too little information for accurate estimation of epidemic model parameters. Here we show that using standard stochastic epidemic models for each outbreak and allowing parameters to vary between outbreaks according to a linear predictor leads to a generalized linear model that accurately estimates parameters from many small and diverse outbreaks. By estimating initial growth rates in addition to transmission rates, we are able to characterize variation in numbers of initially susceptible individuals or contact patterns between outbreaks. With simulation, we find that the estimates are fairly robust to the data being collected at discrete intervals and imputation of about half of all infectious periods. We apply the method by fitting data from 75 norovirus outbreaks in health-care settings. Our baseline regression estimates are 0.0037 transmissions per infective-susceptible day, an initial growth rate of 0.27 transmissions per infective day, and a symptomatic period of 3.35 days. Outbreaks in long-term-care facilities had significantly higher transmission and initial growth rates than outbreaks in hospitals.

  3. Minimal see-saw model predicting best fit lepton mixing angles

    International Nuclear Information System (INIS)

    King, Stephen F.

    2013-01-01

    We discuss a minimal predictive see-saw model in which the right-handed neutrino mainly responsible for the atmospheric neutrino mass has couplings to (νe, νμ, ντ) proportional to (0,1,1) and the right-handed neutrino mainly responsible for the solar neutrino mass has couplings to (νe, νμ, ντ) proportional to (1,4,2), with a relative phase η = −2π/5. We show how these patterns of couplings could arise from an A4 family symmetry model of leptons, together with Z3 and Z5 symmetries which fix η = −2π/5 up to a discrete phase choice. The PMNS matrix is then completely determined by one remaining parameter which is used to fix the neutrino mass ratio m2/m3. The model predicts the lepton mixing angles θ12 ≈ 34°, θ23 ≈ 41°, θ13 ≈ 9.5°, which exactly coincide with the current best fit values for a normal neutrino mass hierarchy, together with the distinctive prediction for the CP violating oscillation phase δ ≈ 106°

  4. Numerical Study of Wind Turbine Wake Modeling Based on an Actuator Surface Model

    DEFF Research Database (Denmark)

    Zhou, Huai-yang; Xu, Chang; Han, Xing Xing

    2017-01-01

    In the Actuator Surface Model (ASM), the turbine blades are represented by porous surfaces of velocity and pressure discontinuities to model the action of lifting surfaces on the flow. The numerical simulation is implemented on the FLUENT platform combined with the N-S equations. This model is improved o...

  5. GENFIT - a generic track-fitting toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Rauch, Johannes [Technische Universitaet Muenchen (Germany); Schlueter, Tobias [Ludwig-Maximilians-Universitaet Muenchen (Germany)

    2014-07-01

    GENFIT is an experiment-independent track-fitting toolkit, which combines fitting algorithms, track representations, and measurement geometries into a modular framework. We report on a significantly improved version of GENFIT, based on experience gained in the Belle II, PANDA, and FOPI experiments. Improvements concern the implementation of additional track-fitting algorithms, enhanced implementations of Kalman fitters, enhanced visualization capabilities, and additional implementations of measurement types suited for various kinds of tracking detectors. The data model has been revised, allowing for efficient track merging, smoothing, residual calculation and alignment.

  6. New ROOT Graphical User Interfaces for fitting

    International Nuclear Information System (INIS)

    Maline, D Gonzalez; Moneta, L; Antcheva, I

    2010-01-01

    ROOT, as a scientific data analysis framework, provides extensive capabilities via Graphical User Interfaces (GUI) for performing interactive analysis and visualizing data objects like histograms and graphs. A new interface for fitting has been developed for performing, exploring and comparing fits on data point sets such as histograms, multi-dimensional graphs or trees. With this new interface, users can build interactively the fit model function, set parameter values and constraints and select fit and minimization methods with their options. Functionality for visualizing the fit results is as well provided, with the possibility of drawing residuals or confidence intervals. Furthermore, the new fit panel reacts as a standalone application and it does not prevent users from interacting with other windows. We will describe in great detail the functionality of this user interface, covering as well new capabilities provided by the new fitting and minimization tools introduced recently in the ROOT framework.

  7. Levy flights and self-similar exploratory behaviour of termite workers: beyond model fitting.

    Directory of Open Access Journals (Sweden)

    Octavio Miramontes

    Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show by analyzing over half a million movement displacements that isolated termite workers actually exhibit a range of very interesting dynamical properties--including Lévy flights--in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale.

  8. Dynamic Factor Models for the Volatility Surface

    DEFF Research Database (Denmark)

    van der Wel, Michel; Ozturk, Sait R.; Dijk, Dick van

    The implied volatility surface is the collection of volatilities implied by option contracts for different strike prices and time-to-maturity. We study factor models to capture the dynamics of this three-dimensional implied volatility surface. Three model types are considered to examine desirable...

  9. Innovation Rather than Improvement: A Solvable High-Dimensional Model Highlights the Limitations of Scalar Fitness

    Science.gov (United States)

    Tikhonov, Mikhail; Monasson, Remi

    2018-01-01

    Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.

  10. Transport and retention of strontium in surface-modified quartz sand with different wettability

    International Nuclear Information System (INIS)

    Yifei Li; Shuaihui Tian; Tianwei Qian

    2011-01-01

    Instead of radioactive 90Sr, common strontium chloride was used to simulate the migration of radioactive strontium chloride in surface-hydroxylated, silanized, and common quartz sand. The sorption and retardation characteristics of strontium (Sr2+) in these surface-modified quartz sands were studied by batch tests and column experiments. The equilibrium sorption data for Sr2+ on sands of different wettability were described by the Langmuir and Freundlich isotherm models, and the Langmuir model was found to provide better correlation for hydrophilic sand. The breakthrough curves (BTCs) of Sr2+ in these media were analyzed with the equilibrium convection-dispersion equation (CDE) and a non-equilibrium two-region mobile-immobile model (TRM) using the nonlinear least-squares curve-fitting program CXTFIT. The TRM model showed a better fit to the measured BTCs of Sr2+, and the fitted fraction of mobile water indicated that significant preferential flow affected the non-equilibrium transport of Sr2+. Although the TRM model could not fit the Sr2+ BTCs very well, the parameters estimated by the TRM model may be more reliable than those obtained from batch experiments because the transport of Sr2+ in this kind of sand is a non-equilibrium process. (author)

  11. Thermodynamic and surface properties of Sb–Sn and In–Sn liquid ...

    Indian Academy of Sciences (India)

    properties through the activity coefficients of the alloy components in the bulk. .... In the model for studying surface properties, a statistical mechanical approach .... experimental values of Scc(0) determined by fitting the experimental activity ...

  12. A bipartite fitness model for online music streaming services

    Science.gov (United States)

    Pongnumkul, Suchit; Motohashi, Kazuyuki

    2018-01-01

    This paper proposes an evolution model and an analysis of the behavior of music consumers on online music streaming services. While previous studies have observed power-law degree distributions of usage in online music streaming services, the underlying behavior of users has not been well understood. Users and songs can be described using a bipartite network where an edge exists between a user node and a song node when the user has listened to that song. The growth mechanism of bipartite networks has been used to understand the evolution of online bipartite networks (Zhang et al., 2013). Existing bipartite models are based on a preferential attachment mechanism (Barabási and Albert, 1999) in which the probability that a user listens to a song is proportional to its current popularity. This mechanism does not allow for two types of real-world phenomena. First, a newly released song with high quality sometimes quickly gains popularity. Second, the popularity of songs normally decreases as time goes by. Therefore, this paper proposes a new model that is more suitable for online music services by adding fitness and aging functions to the song nodes of the bipartite network proposed by Zhang et al. (2013). Theoretical analyses are performed for the degree distribution of songs. Empirical data from an online streaming service, Last.fm, are used to confirm the degree distribution of the object nodes. Simulation results show improvements from a previous model. Finally, to illustrate the application of the proposed model, a simplified royalty cost model for online music services is used to demonstrate how the changes in the proposed parameters can affect the costs for online music streaming providers. Managerial implications are also discussed.
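
    A minimal simulation of the growth rule described here, in which each new listen attaches to a song with probability proportional to its current degree times a fitness factor and an aging factor, is sketched below; the log-normal fitness, power-law aging form, release-time scheme and all parameter values are assumptions for illustration, not the authors' specification.

      import numpy as np

      rng = np.random.default_rng(5)

      n_songs, n_steps = 200, 20000
      fitness = rng.lognormal(mean=0.0, sigma=0.5, size=n_songs)   # intrinsic song quality
      birth = rng.integers(0, n_steps // 2, size=n_songs)          # staggered release times
      birth[0] = 0                                                 # ensure at least one song exists at t = 0
      degree = np.ones(n_songs)                                    # each song starts with one listen
      aging_exponent = 1.0                                         # assumed power-law decay of interest

      for t in range(n_steps):
          active = birth <= t
          age = np.maximum(t - birth, 0) + 1.0
          weight = np.where(active, degree * fitness / age ** aging_exponent, 0.0)
          song = rng.choice(n_songs, p=weight / weight.sum())
          degree[song] += 1   # one more user-song edge in the bipartite network

      top = np.argsort(degree)[-5:][::-1]
      print("most-listened songs (index, listens, fitness):")
      for s in top:
          print(int(s), int(degree[s]), round(float(fitness[s]), 2))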

  13. ACCELERATED FITTING OF STELLAR SPECTRA

    Energy Technology Data Exchange (ETDEWEB)

    Ting, Yuan-Sen; Conroy, Charlie [Harvard–Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Rix, Hans-Walter [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany)

    2016-07-20

    Stellar spectra are often modeled and fitted by interpolating within a rectilinear grid of synthetic spectra to derive the stars’ labels: stellar parameters and elemental abundances. However, the number of synthetic spectra needed for a rectilinear grid grows exponentially with the label space dimensions, precluding the simultaneous and self-consistent fitting of more than a few elemental abundances. Shortcuts such as fitting subsets of labels separately can introduce unknown systematics and do not produce correct error covariances in the derived labels. In this paper we present a new approach—Convex Hull Adaptive Tessellation (chat)—which includes several new ideas for inexpensively generating a sufficient stellar synthetic library, using linear algebra and the concept of an adaptive, data-driven grid. A convex hull approximates the region where the data lie in the label space. A variety of tests with mock data sets demonstrate that chat can reduce the number of required synthetic model calculations by three orders of magnitude in an eight-dimensional label space. The reduction will be even larger for higher dimensional label spaces. In chat the computational effort increases only linearly with the number of labels that are fit simultaneously. Around each of these grid points in the label space an approximate synthetic spectrum can be generated through linear expansion using a set of “gradient spectra” that represent flux derivatives at every wavelength point with respect to all labels. These techniques provide new opportunities to fit the full stellar spectra from large surveys with 15–30 labels simultaneously.
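
    The gradient-spectra expansion can be illustrated as below: around a grid point with label vector l0, an approximate spectrum is generated as f(l) ≈ f(l0) + G (l - l0), where the columns of G are finite-difference flux derivatives with respect to each label; the toy spectrum generator, labels and step sizes are invented stand-ins for a real synthetic-spectrum code.

      import numpy as np

      wavelength = np.linspace(5000.0, 5100.0, 500)   # angstroms, illustrative grid

      def toy_spectrum(labels):
          """Stand-in for an expensive synthetic-spectrum calculation (Teff, logg, [Fe/H])."""
          teff, logg, feh = labels
          depth = 0.3 + 0.05 * (teff - 5000.0) / 1000.0 - 0.02 * logg + 0.1 * feh
          width = 0.5 + 0.1 * logg
          return 1.0 - depth * np.exp(-0.5 * ((wavelength - 5050.0) / width) ** 2)

      labels0 = np.array([5000.0, 4.4, 0.0])
      flux0 = toy_spectrum(labels0)

      # Gradient spectra: one finite-difference flux derivative per label.
      steps = np.array([10.0, 0.05, 0.05])
      gradients = np.column_stack([
          (toy_spectrum(labels0 + np.eye(3)[i] * steps[i]) - flux0) / steps[i] for i in range(3)
      ])

      # First-order approximation at a nearby label point vs. the full calculation.
      labels = np.array([5080.0, 4.5, 0.1])
      approx = flux0 + gradients @ (labels - labels0)
      exact = toy_spectrum(labels)
      print("max |approx - exact| flux error: %.4f" % np.max(np.abs(approx - exact)))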

  14. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    2006-01-01

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution...

  15. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal distribution of the simulated random effects coincides with the assumed random effects distribution. In practice the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution function...

  16. Introduction: Occam’s Razor (SOT - Fit for Purpose workshop introduction)

    Science.gov (United States)

    Mathematical models provide important, reproducible, and transparent information for risk-based decision making. However, these models must be constructed to fit the needs of the problem to be solved. A “fit for purpose” model is an abstraction of a complicated problem that allow...

  17. A new approach to the retrieval of surface properties from earthshine measurements

    Energy Technology Data Exchange (ETDEWEB)

    Spurr, R.J.D. E-mail: rspurr@cfa.harvard.edu

    2004-01-01

    Instruments such as the MODIS and MISR radiometers on EOS AM-1, and POLDER on ADEOS have been deployed for the remote sensing retrieval of surface properties. Typically, retrieval algorithms use linear combinations of semi-empirical bidirectional reflectance distribution function (BRDF) kernels to model surface reflectance. The retrieval proceeds in two steps; first, an atmospheric correction relates surface BRDF to top-of-atmosphere (TOA) reflectances, then regression is used to establish the linear coefficients used in the kernel combination. BRDF kernels may also depend on a number of physical or empirical non-linear parameters (e.g. ocean wind speed for a specular BRDF); such parameters are usually assumed known. A major source of error in this retrieval comes from lack of knowledge of planetary boundary layer (PBL) aerosol properties. In this paper, we present a different approach to surface property retrieval. For the radiative transfer simulations, we use the discrete ordinate LIDORT model, which has the capability to generate simultaneous fields of radiances and weighting functions in a multiply scattering multi-layer atmosphere. Surface-atmosphere coupling due to multiple scattering and reflection effects is treated in full; the use of an atmospheric correction is not required. Further, it is shown that sensitivities of TOA reflectances to both linear and non-linear surface BRDF parameters may be established directly by explicit analytic differentiation of the discrete ordinate radiative transfer equations. Surface properties may thus be retrieved directly and conveniently from satellite measurements using standard non-linear fitting methods. In the fitting for BRDF parameters, lower-boundary aerosol properties can either be retrieved as auxiliary parameters, or they can be regarded as forward model parameter errors. We present examples of simulated radiances and surface/aerosol weighting functions for combinations of multi-angle measurements at several
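
    The kernel-regression step common to such retrievals can be sketched as below: surface reflectances observed at several viewing geometries are fitted as a linear combination of an isotropic term and two fixed BRDF kernels by linear least squares; the cosine/tangent kernel shapes used here are simple stand-ins rather than the semi-empirical kernels named in the record, and all reflectance values are synthetic.

      import numpy as np

      rng = np.random.default_rng(6)

      # Multi-angle viewing geometry: solar and view zenith angles and relative azimuth (radians).
      sza = np.full(12, np.deg2rad(35.0))
      vza = np.deg2rad(np.linspace(0.0, 60.0, 12))
      raa = np.deg2rad(np.linspace(0.0, 180.0, 12))

      # Simple stand-in kernels (functions of geometry only); a real retrieval would use
      # semi-empirical volumetric and geometric-optical kernels instead.
      k_vol = np.cos(vza) * np.cos(sza) * (1.0 + 0.5 * np.cos(raa))
      k_geo = np.tan(vza) * np.tan(sza) * np.cos(raa)

      # Synthetic "observed" surface reflectances from assumed true weights plus noise.
      f_true = np.array([0.05, 0.02, 0.01])          # isotropic, volumetric, geometric weights
      A = np.column_stack([np.ones_like(vza), k_vol, k_geo])
      rho_obs = A @ f_true + rng.normal(0, 0.002, vza.size)

      # Linear least-squares retrieval of the kernel weights.
      f_hat, *_ = np.linalg.lstsq(A, rho_obs, rcond=None)
      print("retrieved kernel weights:", np.round(f_hat, 4))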

  18. A new approach to the retrieval of surface properties from earthshine measurements

    International Nuclear Information System (INIS)

    Spurr, R.J.D.

    2004-01-01

    Instruments such as the MODIS and MISR radiometers on EOS AM-1, and POLDER on ADEOS have been deployed for the remote sensing retrieval of surface properties. Typically, retrieval algorithms use linear combinations of semi-empirical bidirectional reflectance distribution function (BRDF) kernels to model surface reflectance. The retrieval proceeds in two steps; first, an atmospheric correction relates surface BRDF to top-of-atmosphere (TOA) reflectances, then regression is used to establish the linear coefficients used in the kernel combination. BRDF kernels may also depend on a number of physical or empirical non-linear parameters (e.g. ocean wind speed for a specular BRDF); such parameters are usually assumed known. A major source of error in this retrieval comes from lack of knowledge of planetary boundary layer (PBL) aerosol properties. In this paper, we present a different approach to surface property retrieval. For the radiative transfer simulations, we use the discrete ordinate LIDORT model, which has the capability to generate simultaneous fields of radiances and weighting functions in a multiply scattering multi-layer atmosphere. Surface-atmosphere coupling due to multiple scattering and reflection effects is treated in full; the use of an atmospheric correction is not required. Further, it is shown that sensitivities of TOA reflectances to both linear and non-linear surface BRDF parameters may be established directly by explicit analytic differentiation of the discrete ordinate radiative transfer equations. Surface properties may thus be retrieved directly and conveniently from satellite measurements using standard non-linear fitting methods. In the fitting for BRDF parameters, lower-boundary aerosol properties can either be retrieved as auxiliary parameters, or they can be regarded as forward model parameter errors. We present examples of simulated radiances and surface/aerosol weighting functions for combinations of multi-angle measurements at several

  19. VizieR Online Data Catalog: GRB prompt emission fitted with the DREAM model (Ahlgren+, 2015)

    Science.gov (United States)

    Ahlgren, B.; Larsson, J.; Nymark, T.; Ryde, F.; Pe'Er, A.

    2018-01-01

    We illustrate the application of the DREAM model by fitting it to two different, bright Fermi GRBs; GRB 090618 and GRB 100724B. While GRB 090618 is well fitted by a Band function, GRB 100724B was the first example of a burst with a significant additional BB component (Guiriec et al. 2011ApJ...727L..33G). GRB 090618 is analysed using Gamma-ray Burst Monitor (GBM) data (Meegan et al. 2009ApJ...702..791M) from the NaI and BGO detectors. For GRB 100724B, we used GBM data from the NaI and BGO detectors as well as Large Area Telescope Low Energy (LAT-LLE) data. For both bursts we selected NaI detectors seeing the GRB at an off-axis angle lower than 60° and the BGO detector as being the best aligned of the two BGO detectors. The spectra were fitted in the energy ranges 8-1000 keV (NaI), 200-40000 keV (BGO) and 30-1000 MeV (LAT-LLE). (2 data files).

  20. Extracting Optical Fiber Background from Surface-Enhanced Raman Spectroscopy Spectra Based on Bi-Objective Optimization Modeling.

    Science.gov (United States)

    Huang, Jie; Shi, Tielin; Tang, Zirong; Zhu, Wei; Liao, Guanglan; Li, Xiaoping; Gong, Bo; Zhou, Tengyuan

    2017-08-01

    We propose a bi-objective optimization model for extracting optical fiber background from the measured surface-enhanced Raman spectroscopy (SERS) spectrum of the target sample in the application of fiber optic SERS. The model is built using curve fitting to resolve the SERS spectrum into several individual bands, and simultaneously matching some resolved bands with the measured background spectrum. The Pearson correlation coefficient is selected as the similarity index and its maximum value is pursued during the spectral matching process. An algorithm is proposed, programmed, and demonstrated successfully in extracting optical fiber background or fluorescence background from the measured SERS spectra of rhodamine 6G (R6G) and crystal violet (CV). The proposed model not only can be applied to remove optical fiber background or fluorescence background for SERS spectra, but also can be transferred to conventional Raman spectra recorded using fiber optic instrumentation.

  1. A diffuse radar scattering model from Martian surface rocks

    Science.gov (United States)

    Calvin, W. M.; Jakosky, B. M.; Christensen, P. R.

    1987-01-01

    Remote sensing of Mars has been done with a variety of instrumentation at various wavelengths. Many of these data sets can be reconciled with a surface model of bonded fines (or duricrust) which varies widely across the surface and a surface rock distribution which varies less so. A surface rock distribution map from -60 to +60 deg latitude has been generated by Christensen. Our objective is to model the diffuse component of radar reflection based on this surface distribution of rocks. The diffuse, rather than specular, scattering is modeled because the diffuse component arises due to scattering from rocks with sizes on the order of the wavelength of the radar beam. Scattering for radio waves of 12.5 cm is then indicative of the meter scale and smaller structure of the surface. The specular term is indicative of large scale surface undulations and should not be causally related to other surface physical properties. A simplified model of diffuse scattering is described along with two rock distribution models. The results of applying the models to a planet of uniform fractional rock coverage with values ranging from 5 to 20% are discussed.

  2. Human X-chromosome inactivation pattern distributions fit a model of genetically influenced choice better than models of completely random choice

    Science.gov (United States)

    Renault, Nisa K E; Pritchett, Sonja M; Howell, Robin E; Greer, Wenda L; Sapienza, Carmen; Ørstavik, Karen Helene; Hamilton, David C

    2013-01-01

    In eutherian mammals, one X-chromosome in every XX somatic cell is transcriptionally silenced through the process of X-chromosome inactivation (XCI). Females are thus functional mosaics, where some cells express genes from the paternal X, and the others from the maternal X. The relative abundance of the two cell populations (X-inactivation pattern, XIP) can have significant medical implications for some females. In mice, the ‘choice' of which X to inactivate, maternal or paternal, in each cell of the early embryo is genetically influenced. In humans, the timing of XCI choice and whether choice occurs completely randomly or under a genetic influence is debated. Here, we explore these questions by analysing the distribution of XIPs in large populations of normal females. Models were generated to predict XIP distributions resulting from completely random or genetically influenced choice. Each model describes the discrete primary distribution at the onset of XCI, and the continuous secondary distribution accounting for changes to the XIP as a result of development and ageing. Statistical methods are used to compare models with empirical data from Danish and Utah populations. A rigorous data treatment strategy maximises information content and allows for unbiased use of unphased XIP data. The Anderson–Darling goodness-of-fit statistics and likelihood ratio tests indicate that a model of genetically influenced XCI choice better fits the empirical data than models of completely random choice. PMID:23652377

  3. Oxidation-reduction induced roughening of platinum (111) surface

    International Nuclear Information System (INIS)

    You, H.; Nagy, Z.

    1993-06-01

    A platinum (111) single-crystal surface was roughened by repeated cycles of oxidation and reduction to study the dynamic evolution of surface roughening. The interface roughens progressively upon repeated cycles. The measured width of the interface was fit to an assumed power law, W ∼ t^β, with β = 0.38(1). The results are compared with a simulation based on a random growth model. The fraction of the singly stepped surface apparently saturates at 0.25 monolayer, which explains the apparent saturation to a steady roughness observed in previous studies.
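    The power-law fit quoted above can be reproduced in miniature by linear regression in log-log coordinates; the cycle numbers and widths below are synthetic stand-ins, not the measured data.

```python
import numpy as np

# Interface width after successive oxidation-reduction cycles (illustrative numbers only)
t = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)          # cycle number
W = 0.9 * t ** 0.38 * (1 + 0.03 * np.random.randn(t.size))   # synthetic widths

# W ~ t^beta  =>  log W = log A + beta * log t, so beta is the slope of a
# straight-line fit in log-log coordinates.
beta, logA = np.polyfit(np.log(t), np.log(W), 1)
print(f"beta = {beta:.3f}, A = {np.exp(logA):.3f}")
```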

  4. Selection of a design for response surface

    Science.gov (United States)

    Ranade, Shruti Sunil; Thiagarajan, Padma

    2017-11-01

    Box-Behnken, Central-Composite, D and I-optimal designs were compared using statistical tools. Experimental trials for all designs were generated. Random uniform responses were simulated for all models. R-square, Akaike and Bayesian Information Criterion for the fitted models were noted. One-way ANOVA and Tukey’s multiple comparison test were performed on these parameters. These models were evaluated based on the number of experimental trials generated in addition to the results of the statistical analyses. The D-optimal design generated 12 trials in its model, which was fewer than for both the Central Composite and Box-Behnken designs. The R-square values of the fitted models were found to possess a statistically significant difference (P<0.0001). The D-optimal design not only had the highest mean R-square value (0.7231), but also possessed the lowest means for both the Akaike and Bayesian Information Criterion. The D-optimal design was recommended for generation of response surfaces, based on the assessment of the above parameters.

  5. Surface tensions of multi-component mixed inorganic/organic aqueous systems of atmospheric significance: measurements, model predictions and importance for cloud activation predictions

    Directory of Open Access Journals (Sweden)

    D. O. Topping

    2007-01-01

    Full Text Available In order to predict the physical properties of aerosol particles, it is necessary to adequately capture the behaviour of the ubiquitous complex organic components. One of the key properties which may affect this behaviour is the contribution of the organic components to the surface tension of aqueous particles in the moist atmosphere. Whilst the qualitative effect of organic compounds on solution surface tensions has been widely reported, our quantitative understanding on mixed organic and mixed inorganic/organic systems is limited. Furthermore, it is unclear whether models that exist in the literature can reproduce the surface tension variability for binary and higher order multi-component organic and mixed inorganic/organic systems of atmospheric significance. The current study aims to resolve both issues to some extent. Surface tensions of single and multiple solute aqueous solutions were measured and compared with predictions from a number of model treatments. On comparison with binary organic systems, two predictive models found in the literature provided a range of values resulting from sensitivity to calculations of pure component surface tensions. Results indicate that a fitted model can capture the variability of the measured data very well, producing the lowest average percentage deviation for all compounds studied. The performance of the other models varies with compound and choice of model parameters. The behaviour of ternary mixed inorganic/organic systems was unreliably captured by using a predictive scheme and this was dependent on the composition of the solutes present. For more atmospherically representative higher order systems, entirely predictive schemes performed poorly. It was found that use of the binary data in a relatively simple mixing rule, or modification of an existing thermodynamic model with parameters derived from binary data, was able to accurately capture the surface tension variation with concentration. Thus

  6. Minimization of gully erosion on reclaimed surface mines using the stable slope and sediment transport computer model

    International Nuclear Information System (INIS)

    McKenney, R.A.; Gardner, T.G.

    1992-01-01

    Disequilibrium between slope form and hydrologic and erosion processes on reclaimed surface coal mines in the humid temperate northeastern US can result in gully erosion and sediment loads which are elevated above natural, background values. Initial sheetwash erosion is surpassed by gully erosion on reclamation sites which are not in equilibrium with post-mining hydrology. Long-term stability can be attained by designing a channel profile which is in equilibrium with the increased peak discharges found on reclaimed surface mines. The Stable Slope and Sediment Transport model (SSAST) was developed to design stable longitudinal channel profiles for post-mining hydrologic and erosional processes. SSAST is an event-based computer model that calculates the stable slope for a channel segment based on the post-mine hydrology and median grain size of a reclaimed surface mine. Peak discharge, which drives post-mine erosion, is calculated from a 10-year, 24-hour storm using the Soil Conservation Service curve number method. Curve numbers calibrated for Pennsylvania surface mines are used. Reclamation sites are represented by the rectangle or triangle which most closely fits the shape of the site while having the same drainage area and length. Sediment transport and slope stability are calculated using a modified Bagnold equation with a correction factor for the irregular particle shapes formed during the mining process. Data from three reclaimed Pennsylvania surface mines were used to calibrate and verify SSAST. Analysis indicates that SSAST can predict longitudinal channel profiles for stable reclamation of surface mines in the humid, temperate northeastern US.
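    The curve number step of the peak-discharge calculation uses the standard SCS rainfall-runoff relation; a minimal sketch follows, with an illustrative rainfall depth and curve number rather than the calibrated Pennsylvania values, and it is not the SSAST code itself.

```python
def scs_runoff_depth(P_in, CN):
    """Direct runoff depth Q (inches) from storm rainfall P (inches) using the
    SCS curve number relation with the usual initial abstraction Ia = 0.2*S."""
    S = 1000.0 / CN - 10.0          # potential maximum retention (inches)
    Ia = 0.2 * S
    if P_in <= Ia:
        return 0.0
    return (P_in - Ia) ** 2 / (P_in + 0.8 * S)

# Hypothetical reclaimed-mine example: 10-yr, 24-hr rainfall of 4.0 in, CN = 85
print(f"runoff = {scs_runoff_depth(4.0, 85):.2f} in")
```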

  7. GEOSURF: a computer program for modeling adsorption on mineral surfaces from aqueous solution

    Science.gov (United States)

    Sahai, Nita; Sverjensky, Dimitri A.

    1998-11-01

    A new program, GEOSURF, has been developed for calculating aqueous and surface speciation consistent with the triple-layer model of surface complexation. GEOSURF is an extension of the original programs MINEQL, MICROQL and HYDRAQL. We present, here, the basic algorithm of GEOSURF along with a description of the new features implemented. GEOSURF is linked to internally consistent data bases for surface species (SURFK.DAT) and for aqueous species (AQSOL.DAT). SURFK.DAT contains properties of minerals such as site densities, and equilibrium constants for adsorption of aqueous protons and electrolyte ions on a variety of oxides and hydroxides. The Helgeson, Kirkham and Flowers version of the extended Debye-Huckel Equation for 1:1 electrolytes is implemented for calculating aqueous activity coefficients. This permits the calculation of speciation at ionic strengths greater than 0.5 M. The activity of water is computed explicitly from the osmotic coefficient of the solution, and the total amount of electrolyte cation (or anion) is adjusted to satisfy the electroneutrality condition. Finally, the use of standard symbols for chemical species rather than species identification numbers is included to facilitate use of the program. One of the main limitations of GEOSURF is that aqueous and surface speciation can only be calculated at fixed pH and at fixed concentration of total adsorbate. Thus, the program cannot perform reaction-path calculations: it cannot determine whether or not a solution is over- or under-saturated with respect to one or more solid phases. To check the proper running of GEOSURF, we have compared results generated by GEOSURF with those from two other programs, HYDRAQL and EQ3. The Davies equation and the "bdot" equation, respectively, are used in the latter two programs for calculating aqueous activity coefficients. An example of the model fit to experimental data for rutile in 0.001 M-2.0 M NaNO 3 is included.

  8. Comments on Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?"

    Science.gov (United States)

    McCluskey, Ken W.

    2010-01-01

    This article presents the author's comments on Hisham B. Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?" Ghassib's article focuses on the transformation of science from pre-modern times to the present. Ghassib (2010) notes that, unlike in an earlier era when the economy depended on static…

  9. Supersymmetric Fits after the Higgs Discovery and Implications for Model Building

    CERN Document Server

    Ellis, John

    2014-01-01

    The data from the first run of the LHC at 7 and 8 TeV, together with the information provided by other experiments such as precision electroweak measurements, flavour measurements, the cosmological density of cold dark matter and the direct search for the scattering of dark matter particles in the LUX experiment, provide important constraints on supersymmetric models. Important information is provided by the ATLAS and CMS measurements of the mass of the Higgs boson, as well as the negative results of searches at the LHC for events with missing transverse energy accompanied by jets, and the LHCb and CMS measurements of BR($B_s \to \mu^+ \mu^-$). Results are presented from frequentist analyses of the parameter spaces of the CMSSM and NUHM1. The global $\chi^2$ functions for the supersymmetric models vary slowly over most of the parameter spaces allowed by the Higgs mass and the missing transverse energy search, with best-fit values that are comparable to the $\chi^2$ for the Standard Model. The $95\%$ CL lower...

  10. Non-linear least squares curve fitting of a simple theoretical model to radioimmunoassay dose-response data using a mini-computer

    International Nuclear Information System (INIS)

    Wilkins, T.A.; Chadney, D.C.; Bryant, J.; Palmstroem, S.H.; Winder, R.L.

    1977-01-01

    Using the simple univalent antigen univalent-antibody equilibrium model the dose-response curve of a radioimmunoassay (RIA) may be expressed as a function of Y, X and the four physical parameters of the idealised system. A compact but powerful mini-computer program has been written in BASIC for rapid iterative non-linear least squares curve fitting and dose interpolation with this function. In its simplest form the program can be operated in an 8K byte mini-computer. The program has been extensively tested with data from 10 different assay systems (RIA and CPBA) for measurement of drugs and hormones ranging in molecular size from thyroxine to insulin. For each assay system the results have been analysed in terms of (a) curve fitting biases and (b) direct comparison with manual fitting. In all cases the quality of fitting was remarkably good in spite of the fact that the chemistry of each system departed significantly from one or more of the assumptions implicit in the model used. A mathematical analysis of departures from the model's principal assumption has provided an explanation for this somewhat unexpected observation. The essential features of this analysis are presented in this paper together with the statistical analyses of the performance of the program. From these and the results obtained to date in the routine quality control of these 10 assays, it is concluded that the method of curve fitting and dose interpolation presented in this paper is likely to be of general applicability. (orig.) [de
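    As a rough illustration of iterative non-linear least squares curve fitting and dose interpolation of the kind described, the sketch below fits a generic four-parameter logistic dose-response curve with scipy; this functional form is a stand-in, not the authors' univalent antigen-antibody mass-action model, and all numbers are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: response (e.g. counts) as a function of dose x."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Synthetic RIA-like calibration points (counts vs. dose)
dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
counts = four_pl(dose, 9000, 1.2, 5.0, 800) * (1 + 0.02 * np.random.randn(dose.size))

popt, pcov = curve_fit(four_pl, dose, counts, p0=[9000, 1.0, 5.0, 800])
a, b, c, d = popt

# Dose interpolation: invert the fitted curve for an unknown sample's counts
y_unknown = 4000.0
x_unknown = c * ((a - d) / (y_unknown - d) - 1.0) ** (1.0 / b)
print(popt, x_unknown)
```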

  11. Fits combining hyperon semileptonic decays and magnetic moments and CVC

    International Nuclear Information System (INIS)

    Bohm, A.; Kielanowski, P.

    1982-10-01

    We have performed a test of CVC by determining the baryon charges and magnetic moments from the hyperon semileptonic data. Then CVC was applied in order to make a joint fit of all baryon semileptonic decay data and baryon magnetic moments for the spectrum generating group (SG) model as well as for the conventional (Cabibbo, with magnetic moments in nuclear magnetons) model. The SG model gives a very good fit with χ²/n_D = 25/20 (approximately 21% C.L.), whereas the conventional model gives a fit with χ²/n_D = 244/20.

  12. Exploration of a Polarized Surface Bidirectional Reflectance Model Using the Ground-Based Multiangle SpectroPolarimetric Imager

    Directory of Open Access Journals (Sweden)

    David J. Diner

    2012-12-01

    Full Text Available Accurate characterization of surface reflection is essential for retrieval of aerosols using downward-looking remote sensors. In this paper, observations from the Ground-based Multiangle SpectroPolarimetric Imager (GroundMSPI) are used to evaluate a surface polarized bidirectional reflectance distribution function (PBRDF) model. GroundMSPI is an eight-band spectropolarimetric camera mounted on a rotating gimbal to acquire pushbroom imagery of outdoor landscapes. The camera uses a very accurate photoelastic-modulator-based polarimetric imaging technique to acquire Stokes vector measurements in three of the instrument’s bands (470, 660, and 865 nm). A description of the instrument is presented, and observations of selected targets within a scene acquired on 6 January 2010 are analyzed. Data collected during the course of the day as the Sun moved across the sky provided a range of illumination geometries that facilitated evaluation of the surface model, which is comprised of a volumetric reflection term represented by the modified Rahman-Pinty-Verstraete function plus a specular reflection term generated by a randomly oriented array of Fresnel-reflecting microfacets. While the model is fairly successful in predicting the polarized reflection from two grass targets in the scene, it does a poorer job for two manmade targets (a parking lot and a truck roof), possibly due to their greater degree of geometric organization. Several empirical adjustments to the model are explored and lead to improved fits to the data. For all targets, the data support the notion of spectral invariance in the angular shape of the unpolarized and polarized surface reflection. As noted by others, this behavior provides valuable constraints on the aerosol retrieval problem, and highlights the importance of multiangle observations.

  13. Generalized molybdenum oxide surface chemical state XPS determination via informed amorphous sample model

    Energy Technology Data Exchange (ETDEWEB)

    Baltrusaitis, Jonas, E-mail: job314@lehigh.edu [Department of Chemical Engineering, Lehigh University, B336 Iacocca Hall, 111 Research Drive, Bethlehem, PA 18015 (United States); PhotoCatalytic Synthesis group, MESA+ Institute for Nanotechnology, Faculty of Science and Technology, University of Twente, Meander 229, P.O. Box 217, 7500 AE Enschede (Netherlands); Mendoza-Sanchez, Beatriz [CRANN, Chemistry School, Trinity College Dublin, Dublin (Ireland); Fernandez, Vincent [Institut des Matériaux Jean Rouxel, 2 rue de la Houssinière, BP 32229, F-44322 Nantes Cedex 3 (France); Veenstra, Rick [PhotoCatalytic Synthesis group, MESA+ Institute for Nanotechnology, Faculty of Science and Technology, University of Twente, Meander 229, P.O. Box 217, 7500 AE Enschede (Netherlands); Dukstiene, Nijole [Department of Physical and Inorganic Chemistry, Kaunas University of Technology, Radvilenu pl. 19, LT-50254 Kaunas (Lithuania); Roberts, Adam [Kratos Analytical Ltd, Trafford Wharf Road, Wharfside, Manchester, M17 1GP (United Kingdom); Fairley, Neal [Casa Software Ltd, Bay House, 5 Grosvenor Terrace, Teignmouth, Devon TQ14 8NE (United Kingdom)

    2015-01-30

    Highlights: • We analyzed and modeled spectral envelopes of complex molybdenum oxides. • Molybdenum oxide films of varying valence and crystallinity were synthesized. • MoO₃ and MoO₂ line shapes from experimental data were created. • Informed amorphous sample model (IASM) developed. • Amorphous molybdenum oxide XPS envelopes were interpreted. - Abstract: Accurate elemental oxidation state determination for the outer surface of a complex material is of crucial importance in many science and engineering disciplines, including chemistry, fundamental and applied surface science, catalysis, semiconductors and many others. X-ray photoelectron spectroscopy (XPS) is the primary tool used for this purpose. The spectral data obtained, however, is often very complex and can be subject to incorrect interpretation. Unlike traditional XPS spectra fitting procedures using purely synthetic spectral components, here we develop and present an XPS data processing method based on vector analysis that allows creating XPS spectral components by incorporating key information obtained experimentally. XPS spectral data, obtained from a series of molybdenum oxide samples with varying oxidation states and degrees of crystallinity, were processed using this method, and the corresponding oxidation states present, as well as their relative distribution, were elucidated. It was shown that monitoring the evolution of the chemistry and crystal structure of a molybdenum oxide sample due to an invasive X-ray probe could be used to infer solutions to complex spectral envelopes.

  14. Two Aspects of the Simplex Model: Goodness of Fit to Linear Growth Curve Structures and the Analysis of Mean Trends.

    Science.gov (United States)

    Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.

    1994-01-01

    Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)

  15. Land Surface Verification Toolkit (LVT) - A Generalized Framework for Land Surface Model Evaluation

    Science.gov (United States)

    Kumar, Sujay V.; Peters-Lidard, Christa D.; Santanello, Joseph; Harrison, Ken; Liu, Yuqiong; Shaw, Michael

    2011-01-01

    Model evaluation and verification are key in improving the usage and applicability of simulation models for real-world applications. In this article, the development and capabilities of a formal system for land surface model evaluation called the Land surface Verification Toolkit (LVT) are described. LVT is designed to provide an integrated environment for systematic land model evaluation and facilitates a range of verification approaches and analysis capabilities. LVT operates across multiple temporal and spatial scales and employs a large suite of in-situ, remotely sensed and other model and reanalysis datasets in their native formats. In addition to the traditional accuracy-based measures, LVT also includes uncertainty and ensemble diagnostics, information theory measures, spatial similarity metrics and scale decomposition techniques that provide novel ways for performing diagnostic model evaluations. Though LVT was originally designed to support the land surface modeling and data assimilation framework known as the Land Information System (LIS), it also supports hydrological data products from other, non-LIS environments. In addition, the analysis of diagnostics from various computational subsystems of LIS including data assimilation, optimization and uncertainty estimation is supported within LVT. Together, LIS and LVT provide a robust end-to-end environment for enabling the concepts of model data fusion for hydrological applications. The evolving capabilities of the LVT framework are expected to facilitate rapid model evaluation efforts and aid the definition and refinement of formal evaluation procedures for the land surface modeling community.

  16. Land surface Verification Toolkit (LVT) - a generalized framework for land surface model evaluation

    Science.gov (United States)

    Kumar, S. V.; Peters-Lidard, C. D.; Santanello, J.; Harrison, K.; Liu, Y.; Shaw, M.

    2012-06-01

    Model evaluation and verification are key in improving the usage and applicability of simulation models for real-world applications. In this article, the development and capabilities of a formal system for land surface model evaluation called the Land surface Verification Toolkit (LVT) are described. LVT is designed to provide an integrated environment for systematic land model evaluation and facilitates a range of verification approaches and analysis capabilities. LVT operates across multiple temporal and spatial scales and employs a large suite of in-situ, remotely sensed and other model and reanalysis datasets in their native formats. In addition to the traditional accuracy-based measures, LVT also includes uncertainty and ensemble diagnostics, information theory measures, spatial similarity metrics and scale decomposition techniques that provide novel ways for performing diagnostic model evaluations. Though LVT was originally designed to support the land surface modeling and data assimilation framework known as the Land Information System (LIS), it supports hydrological data products from non-LIS environments as well. In addition, the analysis of diagnostics from various computational subsystems of LIS including data assimilation, optimization and uncertainty estimation is supported within LVT. Together, LIS and LVT provide a robust end-to-end environment for enabling the concepts of model data fusion for hydrological applications. The evolving capabilities of the LVT framework are expected to facilitate rapid model evaluation efforts and aid the definition and refinement of formal evaluation procedures for the land surface modeling community.

  17. INTEGRATION OF HETEROGENOUS DIGITAL SURFACE MODELS

    Directory of Open Access Journals (Sweden)

    R. Boesch

    2012-08-01

    Full Text Available The application of extended digital surface models often reveals that, despite an acceptable global accuracy for a given dataset, the local accuracy of the model can vary in a wide range. For high resolution applications which cover the spatial extent of a whole country, this can be a major drawback. Within the Swiss National Forest Inventory (NFI), two digital surface models are available, one derived from LiDAR point data and the other from aerial images. Automatic photogrammetric image matching with ADS80 aerial infrared images with 25 cm and 50 cm resolution is used to generate a surface model (ADS-DSM) with 1 m resolution covering the whole of Switzerland (approx. 41,000 km²). The spatially corresponding LiDAR dataset has a global point density of 0.5 points per m² and is mainly used in applications as an interpolated grid with 2 m resolution (LiDAR-DSM). Although both surface models seem to offer a comparable accuracy from a global view, local analysis shows significant differences. Both datasets have been acquired over several years. Concerning the LiDAR-DSM, different flight patterns and inconsistent quality control result in a significantly varying point density. The image acquisition of the ADS-DSM is also stretched over several years and the model generation is hampered by clouds, varying illumination and shadow effects. Nevertheless many classification and feature extraction applications requiring high resolution data depend on the local accuracy of the used surface model, therefore precise knowledge of the local data quality is essential. The commercial photogrammetric software NGATE (part of SOCET SET) generates the image-based surface model (ADS-DSM) and also delivers a map with figures of merit (FOM) of the matching process for each calculated height pixel. The FOM-map contains matching codes like high slope, excessive shift or low correlation. For the generation of the LiDAR-DSM only first- and last-pulse data was available. Therefore only the point

  18. Determining Mission Statement Effectiveness from a Fit Perspective

    Directory of Open Access Journals (Sweden)

    Toh Seong-Yuen

    2017-08-01

    Full Text Available The purpose of this paper is to study the relationship between the organization's mission statement and its outcomes from a fit perspective in the alignment of the organization's structural and cultural elements. Based on an extension of Campbell's (1991) mission model by a combination of ideas from two schools of thought in mission statement studies (structural and cultural), the authors introduce the concept of “fit” to show how it contributes towards a new mission statement model. The results show that both alignments are important to create a fit situation in order to positively impact organization outcomes. Based on Cohen (1988), the detected effect size of .322 is considered large. The managerial implication is that there should be more focus on managing organisational alignment to support a fit situation as this is instrumental to mission statement effectiveness. The originality of this study stems from the idea that while past studies developed models based on ideas from within the confines of a particular school of thought, this study is one of the first to combine ideas from both the structural and cultural schools of thought by extending Campbell's (1991) mission model using the fit perspective.

  19. Impact of improved Greenland ice sheet surface representation in the NASA GISS ModelE2 GCM on simulated surface mass balance and regional climate

    Science.gov (United States)

    Alexander, P. M.; LeGrande, A. N.; Fischer, E.; Tedesco, M.; Kelley, M.; Schmidt, G. A.; Fettweis, X.

    2017-12-01

    Towards achieving coupled simulations between the NASA Goddard Institute for Space Studies (GISS) ModelE2 general circulation model (GCM) and ice sheet models (ISMs), improvements have been made to the representation of the ice sheet surface in ModelE2. These include a sub-grid-scale elevation class scheme, a multi-layer snow model, a time-variable surface albedo scheme, and adjustments to parameterization of sublimation/evaporation. These changes improve the spatial resolution and physical representation of the ice sheet surface such that the surface is represented at a level of detail closer to that of Regional Climate Models (RCMs). We assess the impact of these changes on simulated Greenland Ice Sheet (GrIS) surface mass balance (SMB). We also compare ModelE2 simulations in which winds have been nudged to match the European Center for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis with simulations from the Modèle Atmosphérique Régionale (MAR) RCM forced by the same reanalysis. Adding surface elevation classes results in a much higher spatial resolution representation of the surface necessary for coupling with ISMs, but has a negligible impact on overall SMB. Implementing a variable surface albedo scheme increases melt by 100%, bringing it closer to melt simulated by MAR. Adjustments made to the representation of topography-influenced surface roughness length in ModelE2 reduce a positive bias in evaporation relative to MAR. We also examine the impact of changes to the GrIS surface on regional atmospheric and oceanic climate in coupled ocean-atmosphere simulations with ModelE2, finding a general warming of the Arctic due to a warmer GrIS, and a cooler North Atlantic in scenarios with doubled atmospheric CO2 relative to pre-industrial levels. The substantial influence of changes to the GrIS surface on the oceans and atmosphere highlight the importance of including these processes in the GCM, in view of potential feedbacks between the ice sheet

  20. The fitting parameters extraction of conversion model of the low dose rate effect in bipolar devices

    International Nuclear Information System (INIS)

    Bakerenkov, Alexander

    2011-01-01

    The Enhanced Low Dose Rate Sensitivity (ELDRS) effect in bipolar devices consists in an increase of the base current degradation of NPN and PNP transistors as the dose rate is decreased. As a result of almost 20 years of study, several physical models of the effect have been developed and described in detail. Accelerated test methods based on these models are used in standards. The conversion model of the effect, which describes the inverse S-shaped dependence of the excess base current on dose rate, was proposed. This paper presents the extraction of the fitting parameters of this conversion model.
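    A minimal sketch of extracting fitting parameters for an inverse S-shaped dose-rate dependence, using a generic logistic form as a stand-in for the published conversion model; the parameter names and numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def excess_base_current(log_dose_rate, delta_low, delta_high, log_dr0, width):
    """Generic S-shaped (logistic) dependence of excess base current on
    log10(dose rate): saturates at delta_low for low rates and delta_high
    for high rates, with the transition centred at log_dr0."""
    return delta_high + (delta_low - delta_high) / (
        1.0 + np.exp((log_dose_rate - log_dr0) / width))

# Synthetic measurements (nA) at dose rates from 0.001 to 100 rad(Si)/s
dr = np.logspace(-3, 2, 12)
y = excess_base_current(np.log10(dr), 50.0, 5.0, -1.0, 0.5)
y = y * (1 + 0.05 * np.random.randn(dr.size))

popt, _ = curve_fit(excess_base_current, np.log10(dr), y, p0=[40.0, 4.0, 0.0, 1.0])
print(popt)   # extracted fitting parameters
```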

  1. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-09-28

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
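    The localized straight-line fit and its propagated uncertainty can be sketched as an ordinary least-squares regression of the I-V points near V = 0, with u(Isc) taken from the parameter covariance; the data window and noise level below are hypothetical.

```python
import numpy as np

def isc_from_linear_fit(v, i):
    """Straight-line regression of I-V points near short circuit; returns the
    intercept at V = 0 (Isc) and its standard uncertainty from the fit."""
    X = np.column_stack([np.ones_like(v), v])
    coef, res, *_ = np.linalg.lstsq(X, i, rcond=None)
    n, p = len(v), 2
    sigma2 = res[0] / (n - p)                       # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)           # parameter covariance
    return coef[0], np.sqrt(cov[0, 0])              # Isc and u(Isc)

# Hypothetical I-V points in a window near V = 0
v = np.linspace(-0.02, 0.08, 11)
i = 5.000 - 0.8 * v + 0.001 * np.random.randn(v.size)
isc, u_isc = isc_from_linear_fit(v, i)
print(f"Isc = {isc:.4f} A, u = {u_isc:.4f} A")
```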

  2. Mapping Precipitation Patterns from the Stable Isotopic Composition of Surface Waters: Olympic Peninsula, Washington State

    Science.gov (United States)

    Anders, A. M.; Brandon, M. T.

    2008-12-01

    Available data indicate that large and persistent precipitation gradients are tied to topography at scales down to a few kilometers, but precipitation patterns in the majority of mountain ranges are poorly constrained at scales less than tens of kilometers. A lack of knowledge of precipitation patterns hampers efforts to understand the processes of orographic precipitation and identify the relationships between geomorphic evolution and climate. A new method for mapping precipitation using the stable isotopic composition of surface waters is tested in the Olympic Mountains of Washington State. Measured δD and δ18O of 97 samples of surface water are linearly related and nearly inseparable from the global meteoric water line. A linear orographic precipitation model, extended to include the effects of isotopic fractionation via Rayleigh distillation, predicts precipitation patterns and the isotopic composition of surface water. Seven parameters relating to the climate and isotopic composition of source water are used. A constrained random search identifies the best-fitting parameter set. Confidence intervals for parameter values are defined and precipitation patterns are determined. Average errors for the best-fitting model are 4.8 permil in δD. The difference between the best-fitting model and other models within the 95% confidence interval was less than 20%. An independent high-resolution precipitation climatology documents precipitation gradients similar in shape and magnitude to the model derived from surface water isotopic composition. This technique could be extended to other mountain ranges, providing an economical and fast assessment of precipitation patterns requiring minimal field work.
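    The Rayleigh-distillation ingredient of such a model can be illustrated with the standard closed-form expression for the remaining vapour; the fractionation factor and starting composition below are illustrative assumptions, not the fitted Olympic Peninsula parameters.

```python
def rayleigh_deltaD(delta0, f, alpha=1.08):
    """delta-D (permil) of the remaining vapour after a fraction (1 - f) has
    rained out, assuming Rayleigh distillation with a constant liquid-vapour
    fractionation factor alpha (illustrative; alpha really varies with temperature)."""
    return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

# Vapour starting at -80 permil, progressively dried along the moisture path
for f in (1.0, 0.8, 0.6, 0.4):
    print(f"f = {f:.1f}: deltaD = {rayleigh_deltaD(-80.0, f):.1f} permil")
```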

  3. Drop shape analysis for determination of dynamic contact angles by double sided elliptical fitting method

    DEFF Research Database (Denmark)

    Andersen, Nis Korsgaard; Taboryski, Rafael J.

    2017-01-01

    Contact angle measurements are a fast and simple way to measure surface properties and are therefore widely used to measure surface energy and quantify wetting of a solid surface by a liquid substance. In common practice contact angle measurements are done with sessile drops on a horizontal surface...... fitted to a drop profile derived from the Young-Laplace equation. When measuring the wetting behaviour by tilting experiments this is not possible since it involves moving drops that are not in equilibrium. Here we present a fitting technique capable of determining the contact angle of asymmetric drops...

  4. A Multi-Variate Fit to the Chemical Composition of the Cosmic-Ray Spectrum

    Science.gov (United States)

    Eisch, Jonathan

    Since the discovery of cosmic rays over a century ago, evidence of their origins has remained elusive. Deflected by galactic magnetic fields, the only direct evidence of their origin and propagation remains encoded in their energy distribution and chemical composition. Current models of galactic cosmic rays predict variations of the energy distribution of individual elements in an energy region around 3×10^15 eV known as the knee. This work presents a method to measure the energy distribution of individual elemental groups in the knee region and its application to a year of data from the IceCube detector. The method uses cosmic rays detected by both IceTop, the surface-array component, and the deep-ice component of IceCube during the 2009-2010 operation of the IC-59 detector. IceTop is used to measure the energy and the relative likelihood of the mass composition using the signal from the cosmic-ray induced extensive air shower reaching the surface. IceCube, 1.5 km below the surface, measures the energy of the high-energy bundle of muons created in the very first interactions after the cosmic ray enters the atmosphere. These event distributions are fit by a constrained model derived from detailed simulations of cosmic rays representing five chemical elements. The results of this analysis are evaluated in terms of the theoretical uncertainties in cosmic-ray interactions and seasonal variations in the atmosphere. The improvements in high-energy cosmic ray hadronic-interaction models informed by this analysis, combined with increased data from subsequent operation of the IceCube detector, could provide crucial limits on the origin of cosmic rays and their propagation through the galaxy. In the course of developing this method, a number of analysis and statistical techniques were developed to deal with the difficulties inherent in this type of measurement. These include a composition-sensitive air shower reconstruction technique, a method to model simulated event

  5. Global modelling of Cryptosporidium in surface water

    Science.gov (United States)

    Vermeulen, Lucie; Hofstra, Nynke

    2016-04-01

    Introduction Waterborne pathogens that cause diarrhoea, such as Cryptosporidium, pose a health risk all over the world. In many regions quantitative information on pathogens in surface water is unavailable. Our main objective is to model Cryptosporidium concentrations in surface waters worldwide. We present the GloWPa-Crypto model and use the model in a scenario analysis. A first exploration of global Cryptosporidium emissions to surface waters has been published by Hofstra et al. (2013). Further work has focused on modelling emissions of Cryptosporidium and Rotavirus to surface waters from human sources (Vermeulen et al 2015, Kiulia et al 2015). A global waterborne pathogen model can provide valuable insights by (1) providing quantitative information on pathogen levels in data-sparse regions, (2) identifying pathogen hotspots, (3) enabling future projections under global change scenarios and (4) supporting decision making. Material and Methods GloWPa-Crypto runs on a monthly time step and represents conditions for approximately the year 2010. The spatial resolution is a 0.5 x 0.5 degree latitude x longitude grid for the world. We use livestock maps (http://livestock.geo-wiki.org/) combined with literature estimates to calculate spatially explicit livestock Cryptosporidium emissions. For human Cryptosporidium emissions, we use UN population estimates, the WHO/UNICEF JMP sanitation country data and literature estimates of wastewater treatment. We combine our emissions model with a river routing model and data from the VIC hydrological model (http://vic.readthedocs.org/en/master/) to calculate concentrations in surface water. Cryptosporidium survival during transport depends on UV radiation and water temperature. We explore pathogen emissions and concentrations in 2050 with the new Shared Socio-economic Pathways (SSPs) 1 and 3. These scenarios describe plausible future trends in demographics, economic development and the degree of global integration. Results and

  6. Theoretically unprejudiced fits to proton scattering

    International Nuclear Information System (INIS)

    Kobos, A.M.; Mackintosh, R.S.

    1979-01-01

    By using a spline interpolation method applied to all components of the proton optical potential we have fitted elastic scattering from 40Ca and from 16O at a range of energies. The potentials are highly oscillatory and we have shown that similar oscillations are found when the spline fitting procedure is applied to pseudo-data generated from potentials of known l-dependence. Moreover, we show how to find an l-independent potential equivalent to one that is l-dependent and we find that it is oscillatory and that various characteristic features of empirical spline fit potentials can be explained. Thus, by fitting the data with model-independent, l-independent potentials we have found support for the contention that the nucleon optical potential should be viewed as being l-dependent. This work may be regarded as an example of the kind of physical information that can be gained by pursuing exact fits to proton elastic scattering data.

  7. A comparison of approaches in fitting continuum SEDs

    International Nuclear Information System (INIS)

    Liu Yao; Wang Hong-Chi; Madlener David; Wolf Sebastian

    2013-01-01

    We present a detailed comparison of two approaches, the use of a pre-calculated database and simulated annealing (SA), for fitting the continuum spectral energy distribution (SED) of astrophysical objects whose appearance is dominated by surrounding dust. While pre-calculated databases are commonly used to model SED data, only a few studies to date employed SA due to its unclear accuracy and convergence time for this specific problem. From a methodological point of view, different approaches lead to different fitting quality, demand on computational resources and calculation time. We compare the fitting quality and computational costs of these two approaches for the task of SED fitting to provide a guide to the practitioner to find a compromise between desired accuracy and available resources. To reduce uncertainties inherent to real datasets, we introduce a reference model resembling a typical circumstellar system with 10 free parameters. We derive the SED of the reference model with our code MC3D at 78 logarithmically distributed wavelengths in the range [0.3 μm, 1.3 mm] and use this setup to simulate SEDs for the database and SA. Our result directly demonstrates the applicability of SA in the field of SED modeling, since the algorithm regularly finds better solutions to the optimization problem than a pre-calculated database. As both methods have advantages and shortcomings, a hybrid approach is preferable. While the database provides an approximate fit and overall probability distributions for all parameters deduced using Bayesian analysis, SA can be used to improve upon the results returned by the model grid.
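    A minimal sketch of the simulated annealing approach compared here: a Metropolis acceptance rule with geometric cooling minimizes a chi-square misfit over continuous parameters. The two-parameter toy "SED" below stands in for the ten-parameter MC3D reference model and is purely illustrative.

```python
import numpy as np

def simulated_annealing(chi2, x0, bounds, n_steps=5000, T0=1.0, cooling=0.999):
    """Minimise a chi-square misfit over continuous model parameters by
    simulated annealing with a simple geometric cooling schedule."""
    rng = np.random.default_rng(0)
    x, fx, T = np.array(x0, float), chi2(x0), T0
    best_x, best_f = x.copy(), fx
    lo, hi = np.array(bounds).T
    for _ in range(n_steps):
        trial = np.clip(x + 0.05 * (hi - lo) * rng.standard_normal(x.size), lo, hi)
        ft = chi2(trial)
        # accept downhill moves always, uphill moves with Metropolis probability
        if ft < fx or rng.random() < np.exp(-(ft - fx) / T):
            x, fx = trial, ft
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        T *= cooling
    return best_x, best_f

# Toy "SED": two parameters of a power law with an exponential cutoff
lam = np.logspace(-0.5, 3.1, 78)                       # wavelength grid (arbitrary units)
model = lambda p: p[0] * lam ** -1.5 * np.exp(-lam / p[1])
data = model([2.0, 300.0])
chi2 = lambda p: np.sum((model(p) - data) ** 2)
print(simulated_annealing(chi2, [1.0, 100.0], [(0.1, 10.0), (10.0, 1000.0)]))
```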

  8. Fitness club

    CERN Multimedia

    Fitness club

    2011-01-01

    General fitness Classes Enrolments are open for general fitness classes at CERN taking place on Monday, Wednesday, and Friday lunchtimes in the Pump Hall (building 216). There are shower facilities for both men and women. It is possible to pay for 1, 2 or 3 classes per week for a minimum of 1 month and up to 6 months. Check out our rates and enrol at: http://cern.ch/club-fitness Hope to see you among us! CERN Fitness Club fitness.club@cern.ch  

  9. Adjoint Methods for Adjusting Three-Dimensional Atmosphere and Surface Properties to Fit Multi-Angle Multi-Pixel Polarimetric Measurements

    Science.gov (United States)

    Martin, William G.; Cairns, Brian; Bal, Guillaume

    2014-01-01

    This paper derives an efficient procedure for using the three-dimensional (3D) vector radiative transfer equation (VRTE) to adjust atmosphere and surface properties and improve their fit with multi-angle/multi-pixel radiometric and polarimetric measurements of scattered sunlight. The proposed adjoint method uses the 3D VRTE to compute the measurement misfit function and the adjoint 3D VRTE to compute its gradient with respect to all unknown parameters. In the remote sensing problems of interest, the scalar-valued misfit function quantifies agreement with data as a function of atmosphere and surface properties, and its gradient guides the search through this parameter space. Remote sensing of the atmosphere and surface in a three-dimensional region may require thousands of unknown parameters and millions of data points. Many approaches would require calls to the 3D VRTE solver in proportion to the number of unknown parameters or measurements. To avoid this issue of scale, we focus on computing the gradient of the misfit function as an alternative to the Jacobian of the measurement operator. The resulting adjoint method provides a way to adjust 3D atmosphere and surface properties with only two calls to the 3D VRTE solver for each spectral channel, regardless of the number of retrieval parameters, measurement view angles or pixels. This gives a procedure for adjusting atmosphere and surface parameters that will scale to the large problems of 3D remote sensing. For certain types of multi-angle/multi-pixel polarimetric measurements, this encourages the development of a new class of three-dimensional retrieval algorithms with more flexible parametrizations of spatial heterogeneity, less reliance on data screening procedures, and improved coverage in terms of the resolved physical processes in the Earth's atmosphere.
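    The key property exploited above, that one forward and one adjoint application yield the full gradient of the misfit regardless of the number of parameters, can be illustrated with a linear stand-in for the 3D VRTE; the sketch below checks the adjoint gradient against a finite difference and is not the paper's actual radiative transfer code.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 8))      # stand-in forward operator (measurements x parameters)
d = rng.standard_normal(50)           # "measurements"
m = rng.standard_normal(8)            # current parameter estimate

# Misfit J(m) = 0.5 * ||A m - d||^2.  Its gradient is obtained with one forward
# application (A m) and one adjoint application (A^T r), independent of the
# number of parameters -- the essence of the adjoint method.
r = A @ m - d
grad_adjoint = A.T @ r

# Finite-difference check of a single component
eps, k = 1e-6, 3
J = lambda x: 0.5 * np.sum((A @ x - d) ** 2)
m_pert = m.copy(); m_pert[k] += eps
print(grad_adjoint[k], (J(m_pert) - J(m)) / eps)   # should agree closely
```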

  10. LOCO with Constraints and Improved Fitting Technique

    International Nuclear Information System (INIS)

    Not Available

    2007-01-01

    LOCO has been a powerful beam-based diagnostics and optics control method for storage rings and synchrotrons worldwide ever since it was established at NSLS by J. Safranek. This method measures the orbit response matrix and optionally the dispersion function of the machine. The data are then fitted to a lattice model by adjusting parameters such as quadrupole and skew quadrupole strengths in the model, BPM gains and rolls, corrector gains and rolls of the measurement system. Any abnormality of the machine that affects the machine optics can then be identified. The resulting lattice model is equivalent to the real machine lattice as seen by the BPMs. Since there are usually two or more BPMs per betatron period in modern circular accelerators, the model is often a very accurate representation of the real machine. According to the fitting result, one can correct the machine lattice to the design lattice by changing the quadrupole and skew quadrupole strengths. LOCO is so important that it is routinely performed at many electron storage rings to guarantee machine performance, especially after the Matlab-based LOCO code became available. However, for some machines, LOCO is not easy to carry out. In some cases, LOCO fitting converges to an unrealistic solution with large changes to the quadrupole strengths ΔK. The quadrupole gradient changes can be so large that the resulting lattice model fails to find a closed orbit and subsequent iterations become impossible. In cases when LOCO converges, the solution can have ΔK that is larger than realistic and often along with a spurious zigzag pattern between adjacent quadrupoles. This degeneracy behavior of LOCO is due to the correlation between the fitting parameters - usually between neighboring quadrupoles. The fitting scheme is therefore less restrictive over certain patterns of changes to these quadrupoles with which the correlated quadrupoles fight each other and the net effect is very inefficient χ 2 reduction, i
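    A minimal sketch of one constrained fit iteration of the kind discussed: a Tikhonov-style penalty added to the normal equations suppresses the degenerate, zigzag parameter combinations. This illustrates the idea only and is not the actual LOCO implementation; the tiny Jacobian below is constructed to be nearly degenerate.

```python
import numpy as np

def constrained_loco_step(J, residual, weight=1e-2):
    """One fit iteration: solve for parameter changes (quad strengths, BPM and
    corrector gains, ...) from the response-matrix residual, with a Tikhonov
    penalty that keeps poorly determined (degenerate) combinations small."""
    n = J.shape[1]
    A = J.T @ J + weight * np.eye(n)
    return np.linalg.solve(A, J.T @ residual)

# Toy problem: two nearly degenerate parameters (columns of J almost parallel)
J = np.array([[1.0, 0.99],
              [2.0, 1.98],
              [3.0, 2.99]])
residual = np.array([0.10, 0.21, 0.31])
print(constrained_loco_step(J, residual, weight=0.0))    # unconstrained: opposite-sign changes that fight each other
print(constrained_loco_step(J, residual, weight=1e-2))   # penalized: balanced, comparable changes
```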

  11. INVESTIGATION THE FITTING ACCURACY OF CAST AND SLM CO-CR DENTAL BRIDGES USING CAD SOFTWARE

    Directory of Open Access Journals (Sweden)

    Tsanka Dikova

    2017-09-01

    Full Text Available The aim of the present paper is to investigate the fitting accuracy of Co-Cr dental bridges, manufactured by three technologies, with the newly developed method using CAD software. The four-part dental bridges of Co-Cr alloys were produced by conventional casting of wax models, casting with 3D printed patterns and selective laser melting. The marginal and internal fit of the dental bridges was studied by two methods – the silicone replica test and CAD software. As the silicone replica test is characterized by comparatively low accuracy, a new methodology for investigating the fitting accuracy of dental bridges was developed based on the SolidWorks CAD software. The newly developed method allows the study of the marginal and internal adaptation in unlimited directions and with high accuracy. Investigation of the marginal fit and internal adaptation of Co-Cr four-part dental bridges by the two methods shows that the technological process strongly influences the fitting accuracy of dental restorations. The fitting accuracy of the bridges cast with 3D printed patterns is the highest, followed by the SLM and conventionally cast bridges. The marginal fit of the three groups of bridges is in the clinically acceptable range. The internal gap values vary in different regions – they are highest on the occlusal surfaces, followed by the marginal and axial areas. The higher fitting accuracy of the bridges manufactured by casting with 3D printed patterns and SLM, compared to the conventionally cast bridges, is a good precondition for their successful implementation in dental offices and laboratories.

  12. Fitness voter model: Damped oscillations and anomalous consensus.

    Science.gov (United States)

    Woolcock, Anthony; Connaughton, Colm; Merali, Yasmin; Vazquez, Federico

    2017-09-01

    We study the dynamics of opinion formation in a heterogeneous voter model on a complete graph, in which each agent is endowed with an integer fitness parameter k≥0, in addition to its + or - opinion state. The evolution of the distribution of k-values and the opinion dynamics are coupled together, so as to allow the system to dynamically develop heterogeneity and memory in a simple way. When two agents with different opinions interact, their k-values are compared, and with probability p the agent with the lower value adopts the opinion of the one with the higher value, while with probability 1-p the opposite happens. The agent that keeps its opinion (winning agent) increments its k-value by one. We study the dynamics of the system in the entire 0≤p≤1 range and compare with the case p=1/2, in which opinions are decoupled from the k-values and the dynamics is equivalent to that of the standard voter model. When 0≤p<1/2, the system approaches exponentially fast the consensus state of the initial majority opinion. The mean consensus time τ appears to grow logarithmically with the number of agents N, and it is greatly decreased relative to the linear behavior τ∼N found in the standard voter model. When 1/2<p<1, the system initially relaxes to a state with an even coexistence of opinions, but eventually reaches consensus by finite-size fluctuations. The approach to the coexistence state is monotonic for p just above 1/2, while for larger p the system exhibits damped oscillations around the coexistence value. The final approach to coexistence is approximately a power law t^{-b(p)} in both regimes, where the exponent b increases with p. Also, τ increases with respect to the standard voter model, although it still scales linearly with N. The p=1 case is special, with a relaxation to coexistence that scales as t^{-2.73} and a consensus time that scales as τ∼N^{β}, with β≃1.45.
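    A direct Monte Carlo sketch of the update rule described above (complete graph, fitness comparison with win probability p, winner increments its k); ties in k are broken arbitrarily here, and the stopping criterion simply detects consensus. The time unit is a single pair interaction, not necessarily the paper's convention.

```python
import numpy as np

def fitness_voter(N=1000, p=0.8, seed=0, max_steps=2_000_000):
    """Monte Carlo simulation of the fitness voter model on a complete graph:
    each agent has an opinion (+1/-1) and an integer fitness k; when two agents
    with different opinions meet, the higher-k agent wins with probability p,
    the lower-k agent with probability 1-p, and the winner increments its k."""
    rng = np.random.default_rng(seed)
    opinion = rng.choice([-1, 1], size=N)
    k = np.zeros(N, dtype=int)
    for step in range(max_steps):
        i, j = rng.integers(N), rng.integers(N)
        if opinion[i] == opinion[j]:
            continue
        # decide winner by fitness comparison (ties broken arbitrarily)
        hi, lo = (i, j) if k[i] >= k[j] else (j, i)
        winner = hi if rng.random() < p else lo
        loser = j if winner == i else i
        opinion[loser] = opinion[winner]
        k[winner] += 1
        if abs(opinion.sum()) == N:          # consensus reached
            return step
    return max_steps

print(fitness_voter(N=200, p=0.25), fitness_voter(N=200, p=0.75))
```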

  13. Optimum extrusion-cooking conditions for improving physical properties of fish-cereal based snacks by response surface methodology.

    Science.gov (United States)

    Singh, R K Ratankumar; Majumdar, Ranendra K; Venkateshwarlu, G

    2014-09-01

    To establish the effect of barrel temperature, screw speed, total moisture and fish flour content on the expansion ratio and bulk density of the fish-based extrudates, response surface methodology was adopted in this study. The experiments were optimized using a five-level, four-factor central composite design. Analysis of variance was carried out to study the effects of the main factors and the interaction effects of the various factors, and regression analysis was carried out to explain the variability. The fitting was done to a second-order model in the coded variables for each response. The response surface plots were developed as a function of two independent variables while keeping the other two independent variables at their optimal values. Based on the ANOVA, the fitted models were confirmed to be adequate for both dependent variables. The highest organoleptic score was obtained with the combination of temperature 110 °C, screw speed 480 rpm, moisture 18 % and fish flour 20 %.
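    Fitting the second-order model amounts to least-squares regression on an expanded design matrix with linear, quadratic and interaction terms in the coded variables; the sketch below uses hypothetical coded settings and a synthetic response, not the experimental data.

```python
import numpy as np
from itertools import combinations

def fit_second_order_rsm(X, y):
    """Least-squares fit of a full quadratic response surface
    y = b0 + sum(bi xi) + sum(bii xi^2) + sum(bij xi xj) in coded variables."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                                 # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]                            # pure quadratic terms
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]   # interaction terms
    D = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    y_hat = D @ beta
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return beta, r2

# Hypothetical coded settings (temperature, screw speed, moisture, fish flour)
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(30, 4))
y = 1.5 + 0.4 * X[:, 0] - 0.3 * X[:, 2] ** 2 + 0.2 * X[:, 1] * X[:, 3] + 0.05 * rng.standard_normal(30)
beta, r2 = fit_second_order_rsm(X, y)
print(f"R^2 = {r2:.3f}")
```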

  14. Thermal analysis of dry eye subjects and the thermal impulse perturbation model of ocular surface.

    Science.gov (United States)

    Zhang, Aizhong; Maki, Kara L; Salahura, Gheorghe; Kottaiyan, Ranjini; Yoon, Geunyoung; Hindman, Holly B; Aquavella, James V; Zavislan, James M

    2015-03-01

    In this study, we explore the usage of ocular surface temperature (OST) decay patterns to distinguished between dry eye patients with aqueous deficient dry eye (ADDE) and meibomian gland dysfunction (MGD). The OST profiles of 20 dry eye subjects were measured by a long-wave infrared thermal camera in a standardized environment (24 °C, and relative humidity (RH) 40%). The subjects were instructed to blink every 5 s after 20 ∼ 25 min acclimation. Exponential decay curves were fit to the average temperature within a region of the central cornea. We find the MGD subjects have both a higher initial temperature (p model, referred to as the thermal impulse perturbation (TIP) model. We conclude that long-wave-infrared thermal imaging is a plausible tool in assisting with the classification of dry eye patient. Copyright © 2015 Elsevier Ltd. All rights reserved.
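    A minimal sketch of fitting an exponential decay to an interblink temperature trace; the functional form, sampling rate and parameter values are illustrative assumptions, not the clinical data or the TIP model itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def ost_decay(t, T_eq, dT, tau):
    """Exponential cooling of the central cornea during the interblink interval:
    starts at T_eq + dT right after the blink and relaxes towards T_eq."""
    return T_eq + dT * np.exp(-t / tau)

# Hypothetical 5-s interblink temperature trace sampled at 30 Hz
t = np.arange(0, 5, 1 / 30)
T = ost_decay(t, 34.2, 0.35, 2.0) + 0.02 * np.random.randn(t.size)

popt, _ = curve_fit(ost_decay, t, T, p0=[34.0, 0.3, 1.5])
print(f"T_eq = {popt[0]:.2f} C, dT = {popt[1]:.2f} C, tau = {popt[2]:.2f} s")
```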

  15. Nonlocal continuum-based modeling of breathing mode of nanowires including surface stress and surface inertia effects

    Science.gov (United States)

    Ghavanloo, Esmaeal; Fazelzadeh, S. Ahmad; Rafii-Tabar, Hashem

    2014-05-01

    Nonlocal and surface effects significantly influence the mechanical response of nanomaterials and nanostructures. In this work, the breathing mode of a circular nanowire is studied on the basis of the nonlocal continuum model. Both the surface elastic properties and surface inertia effect are included. Nanowires can be modeled as long cylindrical solid objects. The classical model is reformulated using the nonlocal differential constitutive relations of Eringen and Gurtin-Murdoch surface continuum elasticity formalism. A new frequency equation for the breathing mode of nanowires, including small scale effect, surface stress and surface inertia is presented by employing the Bessel functions. Numerical results are computed, and are compared to confirm the validity and accuracy of the proposed method. Furthermore, the model is used to elucidate the effect of nonlocal parameter, the surface stress, the surface inertia and the nanowire orientation on the breathing mode of several types of nanowires with size ranging from 0.5 to 4 nm. Our results reveal that the combined surface and small scale effects are significant for nanowires with diameter smaller than 4 nm.

  16. Nonlocal continuum-based modeling of breathing mode of nanowires including surface stress and surface inertia effects

    International Nuclear Information System (INIS)

    Ghavanloo, Esmaeal; Fazelzadeh, S. Ahmad; Rafii-Tabar, Hashem

    2014-01-01

    Nonlocal and surface effects significantly influence the mechanical response of nanomaterials and nanostructures. In this work, the breathing mode of a circular nanowire is studied on the basis of the nonlocal continuum model. Both the surface elastic properties and surface inertia effect are included. Nanowires can be modeled as long cylindrical solid objects. The classical model is reformulated using the nonlocal differential constitutive relations of Eringen and Gurtin–Murdoch surface continuum elasticity formalism. A new frequency equation for the breathing mode of nanowires, including small scale effect, surface stress and surface inertia is presented by employing the Bessel functions. Numerical results are computed, and are compared to confirm the validity and accuracy of the proposed method. Furthermore, the model is used to elucidate the effect of nonlocal parameter, the surface stress, the surface inertia and the nanowire orientation on the breathing mode of several types of nanowires with size ranging from 0.5 to 4 nm. Our results reveal that the combined surface and small scale effects are significant for nanowires with diameter smaller than 4 nm.

  17. Nonlocal continuum-based modeling of breathing mode of nanowires including surface stress and surface inertia effects

    Energy Technology Data Exchange (ETDEWEB)

    Ghavanloo, Esmaeal, E-mail: ghavanloo@shirazu.ac.ir [School of Mechanical Engineering, Shiraz University, Shiraz 71963-16548 (Iran, Islamic Republic of); Fazelzadeh, S. Ahmad [School of Mechanical Engineering, Shiraz University, Shiraz 71963-16548 (Iran, Islamic Republic of); Rafii-Tabar, Hashem [Department of Medical Physics and Biomedical Engineering, Research Center for Medical Nanotechnology and Tissue Engineering, Shahid Beheshti University of Medical Sciences, Evin, Tehran (Iran, Islamic Republic of); Computational Physical Sciences Research Laboratory, School of Nano-Science, Institute for Research in Fundamental Sciences (IPM), Tehran (Iran, Islamic Republic of)

    2014-05-01

    Nonlocal and surface effects significantly influence the mechanical response of nanomaterials and nanostructures. In this work, the breathing mode of a circular nanowire is studied on the basis of the nonlocal continuum model. Both the surface elastic properties and surface inertia effect are included. Nanowires can be modeled as long cylindrical solid objects. The classical model is reformulated using the nonlocal differential constitutive relations of Eringen and Gurtin–Murdoch surface continuum elasticity formalism. A new frequency equation for the breathing mode of nanowires, including small scale effect, surface stress and surface inertia is presented by employing the Bessel functions. Numerical results are computed, and are compared to confirm the validity and accuracy of the proposed method. Furthermore, the model is used to elucidate the effect of nonlocal parameter, the surface stress, the surface inertia and the nanowire orientation on the breathing mode of several types of nanowires with size ranging from 0.5 to 4 nm. Our results reveal that the combined surface and small scale effects are significant for nanowires with diameter smaller than 4 nm.

  18. Three-dimensional modeling of chloroprene rubber surface topography upon composition

    Energy Technology Data Exchange (ETDEWEB)

    Žukienė, Kristina, E-mail: kristina.zukiene@ktu.lt [Department of Clothing and Polymer Products Technology, Kaunas University of Technology, Studentu St. 56, LT-51424 Kaunas (Lithuania); Jankauskaitė, Virginija [Department of Clothing and Polymer Products Technology, Kaunas University of Technology, Studentu St. 56, LT-51424 Kaunas (Lithuania); Petraitienė, Stase [Department of Applied Mathematics, Kaunas University of Technology, Studentu 50, LT-51368 Kaunas (Lithuania)

    2014-02-15

    In this study, the effect of polymer blend composition on surface roughness was investigated and simulated. Three-dimensional modeling of the chloroprene rubber film surface as a function of piperylene-styrene copolymer content was conducted. The efficiencies of various surface roughness modeling methods, including Monte Carlo, surface growth and the proposed method, named parabolas, were compared. The parameters required for modeling were obtained from atomic force microscopy topographical images of the polymer film surfaces. It was shown that the experimental and modeled surfaces have the same correlation function, and a quantitative comparison of the function parameters was made. It was determined that the novel parabolas method is suitable for describing the three-dimensional surface roughness of polymer blends.
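
    The correlation-function comparison used here can be sketched with a height-height autocorrelation computed from a gridded height map; the random array below is only a stand-in for a real AFM topography image:

    ```python
    import numpy as np

    def autocorrelation(h):
        """Normalized 2-D autocorrelation of a mean-subtracted height map (FFT-based, periodic)."""
        h = h - h.mean()
        F = np.fft.fft2(h)
        acf = np.fft.ifft2(F * np.conj(F)).real / h.size
        return np.fft.fftshift(acf) / h.var()

    # Stand-in for an AFM topography image (random rough surface)
    rng = np.random.default_rng(0)
    height = rng.normal(size=(256, 256))

    acf = autocorrelation(height)
    center = np.array(acf.shape) // 2
    profile = acf[center[0], center[1]:]          # correlation along one lag axis
    corr_length = np.argmax(profile < 1 / np.e)   # lag (in pixels) where correlation drops below 1/e
    print(f"correlation length ~ {corr_length} px")
    ```

    Comparing such profiles (and their fitted parameters) for measured and simulated surfaces is the kind of quantitative check the abstract describes.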

  19. Improving the Yule-Nielsen modified Neugebauer model by dot surface coverages depending on the ink superposition conditions

    Science.gov (United States)

    Hersch, Roger David; Crete, Frederique

    2005-01-01

    Dot gain is different when dots are printed alone, printed in superposition with one ink or printed in superposition with two inks. In addition, the dot gain may also differ depending on which solid ink the considered halftone layer is superposed. In a previous research project, we developed a model for computing the effective surface coverage of a dot according to its superposition conditions. In the present contribution, we improve the Yule-Nielsen modified Neugebauer model by integrating into it our effective dot surface coverage computation model. Calibration of the reproduction curves mapping nominal to effective surface coverages in every superposition condition is carried out by fitting effective dot surfaces which minimize the sum of square differences between the measured reflection density spectra and reflection density spectra predicted according to the Yule-Nielsen modified Neugebauer model. In order to predict the reflection spectrum of a patch, its known nominal surface coverage values are converted into effective coverage values by weighting the contributions from different reproduction curves according to the weights of the contributing superposition conditions. We analyze the colorimetric prediction improvement brought by our extended dot surface coverage model for clustered-dot offset prints, thermal transfer prints and ink-jet prints. The color differences induced by the differences between measured reflection spectra and reflection spectra predicted according to the new dot surface estimation model are quantified on 729 different cyan, magenta, yellow patches covering the full color gamut. As a reference, these differences are also computed for the classical Yule-Nielsen modified spectral Neugebauer model incorporating a single halftone reproduction curve for each ink. Taking into account dot surface coverages according to different superposition conditions considerably improves the predictions of the Yule-Nielsen modified Neugebauer model. In
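
    The Yule-Nielsen modified spectral Neugebauer prediction at the core of this work is compact. A minimal sketch for a CMY halftone, assuming the spectra of the eight Neugebauer primaries are known and the nominal coverages have already been mapped to effective coverages through the superposition-dependent reproduction curves (the n-value, the Demichel weighting and all spectra below are placeholders):

    ```python
    import numpy as np

    def ynsn_reflectance(c, m, y, primaries, n=2.0):
        """Yule-Nielsen modified spectral Neugebauer prediction for one CMY patch.

        c, m, y   : effective dot surface coverages in [0, 1]
        primaries : dict of reflectance spectra for the 8 Neugebauer primaries
        n         : Yule-Nielsen value accounting for optical dot gain
        """
        # Demichel weights of the eight primaries (independent-screen assumption)
        w = {
            'w': (1-c)*(1-m)*(1-y), 'c': c*(1-m)*(1-y), 'm': (1-c)*m*(1-y), 'y': (1-c)*(1-m)*y,
            'cm': c*m*(1-y), 'cy': c*(1-m)*y, 'my': (1-c)*m*y, 'cmy': c*m*y,
        }
        acc = sum(w[k] * primaries[k] ** (1.0 / n) for k in w)
        return acc ** n

    # Placeholder primary spectra: 36 flat bands, just to exercise the formula
    bands = 36
    primaries = {k: np.full(bands, r) for k, r in
                 [('w', .85), ('c', .30), ('m', .35), ('y', .70),
                  ('cm', .15), ('cy', .25), ('my', .28), ('cmy', .08)]}
    R = ynsn_reflectance(0.4, 0.2, 0.1, primaries, n=4.0)
    ```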

  20. Empirical valence bond models for reactive potential energy surfaces: a parallel multilevel genetic program approach.

    Science.gov (United States)

    Bellucci, Michael A; Coker, David F

    2011-07-28

    We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower level genetic programs are used to optimize coevolving populations in parallel, while the higher level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation or combination of mutations that most effectively increase the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The algorithm's accuracy and efficiency are tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in the gas phase and in protic solvent. © 2011 American Institute of Physics.
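
    The lower-level task — adjusting parameters of a trial functional form until it reproduces a set of ab initio energies — can be illustrated with a single-level genetic search. Everything below (the Morse-like trial function, the synthetic reference energies, the population settings) is a toy stand-in, not the authors' PMLGP:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy reference data: "ab initio" energies of a 1-D Morse-like curve to be recovered
    r = np.linspace(0.8, 3.0, 40)
    E_ref = 0.18 * (1.0 - np.exp(-1.7 * (r - 1.1))) ** 2

    def model(params, r):
        D, a, r0 = params
        return D * (1.0 - np.exp(-a * (r - r0))) ** 2

    def fitness(params):
        return -np.mean((model(params, r) - E_ref) ** 2)       # higher is better

    # Single-level GA: binary tournament selection, blend crossover, Gaussian mutation
    pop = rng.uniform([0.01, 0.5, 0.8], [1.0, 3.0, 2.0], size=(60, 3))
    for gen in range(200):
        scores = np.array([fitness(p) for p in pop])
        new_pop = []
        for _ in range(len(pop)):
            i, j = rng.integers(len(pop), size=2)
            a_, b_ = (pop[i], pop[j]) if scores[i] > scores[j] else (pop[j], pop[i])
            child = 0.7 * a_ + 0.3 * b_                        # blend crossover
            child += rng.normal(scale=0.02, size=3)            # mutation
            new_pop.append(child)
        pop = np.array(new_pop)

    best = pop[np.argmax([fitness(p) for p in pop])]
    print("best-fit (D, a, r0):", np.round(best, 3))
    ```

    The PMLGP adds a second genetic level that tunes the crossover and mutation probabilities themselves; the sketch above keeps them fixed.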

  1. Modeling surface topography of state-of-the-art x-ray mirrors as a result of stochastic polishing process: recent developments

    Science.gov (United States)

    Yashchuk, Valeriy V.; Centers, Gary; Tyurin, Yuri N.; Tyurina, Anastasia

    2016-09-01

    Recently, an original method for the statistical modeling of the surface topography of state-of-the-art mirrors for use in x-ray optical systems at light source facilities and for astronomical telescopes [Opt. Eng. 51(4), 046501 (2012); ibid. 53(8), 084102 (2014); and ibid. 55(7), 074106 (2016)] has been developed. In this modeling, the mirror surface topography is considered to be the result of a stationary uniform stochastic polishing process, and the best-fit time-invariant linear filter (TILF) that optimally parameterizes the polishing process with a limited number of parameters is determined. The TILF model allows the surface slope profile of an optic with a newly desired specification to be reliably forecast before fabrication. With the forecast data, representative numerical evaluations of the expected performance of the prospective mirrors in optical systems under development become possible [Opt. Eng. 54(2), 025108 (2015)]. Here, we suggest and demonstrate an analytical approach for accounting, within the TILF modeling, for the imperfections of the metrology instruments used, which are described by the instrumental point spread function. The efficacy of the approach is demonstrated with numerical simulations for the correction of measurements performed with an autocollimator-based surface slope profiler. Besides solving this major metrological problem, the results of the present work open an avenue for developing analytical and computational tools for stitching, in the statistical domain, data obtained using multiple metrology instruments measuring significantly different bandwidths of spatial wavelengths.
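
    One ingredient of such a correction — undoing the blur described by the instrumental point spread function — can be sketched in the spectral domain, where the measured power spectral density is divided by the squared modulation transfer function of the instrument. The Gaussian PSF width and the synthetic slope profile below are assumptions for illustration; the paper's analytical treatment inside the TILF fit is more refined.

    ```python
    import numpy as np
    from scipy.signal import welch

    # Synthetic measured slope profile sampled every dx metres -- placeholder data
    dx = 1e-3
    rng = np.random.default_rng(2)
    slope_meas = np.cumsum(rng.normal(scale=0.05, size=4096))     # random-walk-like roughness

    # One-dimensional PSD of the measured profile
    f, psd_meas = welch(slope_meas, fs=1.0 / dx, nperseg=1024)

    # Assume a Gaussian instrumental PSF of rms width sigma; its MTF is also Gaussian
    sigma = 2e-3
    mtf = np.exp(-2.0 * (np.pi * f * sigma) ** 2)

    # Correct the PSD only where the MTF is not too small (avoid noise blow-up at high frequencies)
    valid = mtf > 0.1
    psd_corr = np.where(valid, psd_meas / mtf ** 2, np.nan)
    ```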

  2. Modified-surface-energy methods for deriving heavy-ion potentials

    International Nuclear Information System (INIS)

    Sierk, A.J.

    1977-01-01

    The use of a modified-surface-energy approach for the calculation of heavy-ion interaction potentials is discussed. It is not possible to simultaneously fit elastic scattering, ion interaction barriers, and fission barriers with the same set of constants in this model. Possible explanations of this deficiency are discussed

  3. Physical Work Demands and Fitness

    DEFF Research Database (Denmark)

    Larsen, Mette Korshøj

    . The effects were evaluated with objective physiological or diurnal data in an intention-to-treat analysis using multi-adjusted mixed models. The results indicated that the intervention led to several improvements in risk factors for cardiovascular disease, e.g. enhanced cardiorespiratory fitness, reduced...... exposed to high relative aerobic workloads obtained more pronounced increases of resting and 24-hour ambulatory blood pressure, an unaltered cardiorespiratory fitness and a reduced sleeping heart rate. The enhanced resting and 24-hour ambulatory blood pressure may be explained as a potential...

  4. Modeling wind adjustment factor and midflame wind speed for Rothermel's surface fire spread model

    Science.gov (United States)

    Patricia L. Andrews

    2012-01-01

    Rothermel's surface fire spread model was developed to use a value for the wind speed that affects surface fire, called midflame wind speed. Models have been developed to adjust 20-ft wind speed to midflame wind speed for sheltered and unsheltered surface fuel. In this report, Wind Adjustment Factor (WAF) model equations are given, and the BehavePlus fire modeling...

  5. Rheological properties of emulsions stabilized by green banana (Musa cavendishii) pulp fitted by power law model

    Directory of Open Access Journals (Sweden)

    Dayane Rosalyn Izidoro

    2009-12-01

    Full Text Available In this work, the rheological behaviour of emulsions (mayonnaises) stabilized by green banana pulp was studied using response surface methodology; the stability of the emulsions was also investigated. Five formulations were developed according to a design for constrained surfaces and mixtures, with the following water/soy oil/green banana pulp proportions: F1 (0.10/0.20/0.70), F2 (0.20/0.20/0.60), F3 (0.10/0.25/0.65), F4 (0.20/0.25/0.55) and F5 (0.15/0.225/0.625). The rheological measurements were performed with a rotational Haake Rheostress 600 rheometer and a cone-and-plate sensor (60 mm diameter, 2° cone angle), using a gap distance of 1 mm. The emulsions showed pseudoplastic behaviour and were adequately described by the Power Law model. The rheological responses were influenced by the differences in green banana pulp proportion and by the temperature (10 and 25 °C). The formulations with high pulp content (F1 and F3) presented higher shear stress and apparent viscosity. Response surface methodology, described by a quadratic model, showed that the consistency coefficient (K) increased with the interaction between green banana pulp and soy oil concentration, while the water fraction contributed to an increase of the flow behaviour index for all emulsion samples. Analysis of variance showed that the second-order model had no significant lack-of-fit and a significant F-value, indicating that the quadratic model fitted the experimental data well. The emulsions that presented the best stability were formulations F4 (0.20/0.25/0.55) and F5 (0.15/0.225/0.625).
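
    Fitting the Power Law (Ostwald-de Waele) model, tau = K * (shear rate)^n, to flow-curve data reduces to a linear regression in log-log space. A minimal sketch with placeholder shear-rate/shear-stress arrays (not the measured values):

    ```python
    import numpy as np

    # Placeholder flow-curve data: shear rate (1/s) and shear stress (Pa)
    gamma_dot = np.array([1, 2, 5, 10, 20, 50, 100, 200], dtype=float)
    tau = 15.0 * gamma_dot ** 0.35 * (1 + 0.02 * np.random.randn(gamma_dot.size))

    # Power law  tau = K * gamma_dot**n  =>  log(tau) = log(K) + n * log(gamma_dot)
    n, logK = np.polyfit(np.log(gamma_dot), np.log(tau), 1)
    K = np.exp(logK)
    print(f"consistency coefficient K = {K:.2f} Pa.s^n, flow behaviour index n = {n:.3f}")
    # n < 1 indicates the pseudoplastic (shear-thinning) behaviour reported for these emulsions
    ```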

  6. The role of social capital and community belongingness for exercise adherence: An exploratory study of the CrossFit gym model.

    Science.gov (United States)

    Whiteman-Sandland, Jessica; Hawkins, Jemma; Clayton, Debbie

    2016-08-01

    This is the first study to measure the 'sense of community' reportedly offered by the CrossFit gym model. A cross-sectional study adapted Social Capital and General Belongingness scales to compare perceptions of a CrossFit gym and a traditional gym. CrossFit gym members reported significantly higher levels of social capital (both bridging and bonding) and community belongingness compared with traditional gym members. However, regression analysis showed neither social capital, community belongingness, nor gym type was an independent predictor of gym attendance. Exercise and health professionals may benefit from evaluating further the 'sense of community' offered by gym-based exercise programmes.

  7. Covariances for neutron cross sections calculated using a regional model based on local-model fits to experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D.L.; Guenther, P.T.

    1983-11-01

    We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references.
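
    Once a parameter covariance matrix C has been estimated from the scatter of the local fits, propagating it to a calculated quantity follows the standard "sandwich" formula sigma^2 = J C J^T, with J the Jacobian of the calculated cross section with respect to the model parameters. A minimal numerical sketch with a hypothetical two-parameter model (not the optical-statistical model of the paper):

    ```python
    import numpy as np

    def model_xs(energy, params):
        """Hypothetical smooth cross-section model sigma(E; a, b), used only for illustration."""
        a, b = params
        return a * np.exp(-b * energy)

    params = np.array([2.5, 0.30])                   # best-fit regional parameters (illustrative)
    C = np.array([[0.04, -0.002],                    # estimated parameter covariance (illustrative)
                  [-0.002, 0.0009]])

    energies = np.linspace(0.5, 5.0, 10)

    # Numerical Jacobian d sigma / d p_k at each energy (forward differences)
    eps = 1e-6
    J = np.empty((energies.size, params.size))
    for k in range(params.size):
        dp = np.zeros_like(params); dp[k] = eps
        J[:, k] = (model_xs(energies, params + dp) - model_xs(energies, params)) / eps

    # Cross-section covariance and 1-sigma uncertainties; off-diagonal terms carry the correlations
    cov_xs = J @ C @ J.T
    sigma_xs = np.sqrt(np.diag(cov_xs))
    ```

    The correlations noted in the abstract enter through the off-diagonal elements of C, which is why the quadrature sum of uncorrelated errors overestimates the uncertainty.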

  8. Covariances for neutron cross sections calculated using a regional model based on local-model fits to experimental data

    International Nuclear Information System (INIS)

    Smith, D.L.; Guenther, P.T.

    1983-11-01

    We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references

  9. Modelling the association between weight status and social deprivation in English school children: Can physical activity and fitness affect the relationship?

    Science.gov (United States)

    Nevill, Alan M; Duncan, Michael J; Lahart, Ian; Sandercock, Gavin

    2016-11-01

    The association between being overweight/obese and deprivation is a serious concern in English schoolchildren. The aims were to model this association incorporating known confounders and to discover whether physical fitness and physical activity may reduce or eliminate it. Cross-sectional data were collected between 2007 and 2009 from 8053 10-16-year-old children from the East-of-England Healthy Heart Study. Weight status was assessed using waist circumference (cm) and body mass (kg). Deprivation was measured using the Index of Multiple Deprivation (IMD). Confounding variables used in the proportional, allometric models were hip circumference, stature, age and sex. Children's fitness levels were assessed using predicted VO2max (20-metre shuttle-run test), and physical activity was estimated using the Physical Activity Questionnaire for Adolescents or Children. A strong association was found between the IMD and both waist circumference and body mass. These associations persisted after controlling for all confounding variables. When the children's physical activity and fitness levels were added to the models, the association was either greatly reduced or, in the case of body mass, absent. To reduce deprivation inequalities in children's weight status, health practitioners should focus on increasing physical fitness via physical activity in areas of greater deprivation.

  10. Human eyeball model reconstruction and quantitative analysis.

    Science.gov (United States)

    Xing, Qi; Wei, Qi

    2014-01-01

    Determining the shape of the eyeball is important for diagnosing eyeball diseases such as myopia. In this paper, we present an automatic approach to precisely reconstruct the three-dimensional geometric shape of the eyeball from MR images. The model development pipeline involved image segmentation, registration, B-spline surface fitting and subdivision surface fitting, none of which required manual interaction. From the high-resolution resultant models, geometric characteristics of the eyeball can be accurately quantified and analyzed. In addition to the eight metrics commonly used by existing studies, we proposed two novel metrics, Gaussian Curvature Analysis and Sphere Distance Deviation, to quantify the cornea shape and the whole eyeball surface, respectively. The experimental results showed that the reconstructed eyeball models accurately represent the complex morphology of the eye. The ten metrics parameterize the eyeball among different subjects and can potentially be used for eye disease diagnosis.
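
    A metric such as Sphere Distance Deviation presumably starts from a best-fit sphere; a linear least-squares sphere fit and the resulting radial deviations can be sketched as below. The point cloud is a synthetic stand-in for reconstructed eyeball surface vertices, and the exact definition of the metric in the paper may differ.

    ```python
    import numpy as np

    def fit_sphere(points):
        """Algebraic least-squares sphere fit: |p|^2 = 2 p.c + (r^2 - |c|^2)."""
        A = np.hstack([2.0 * points, np.ones((len(points), 1))])
        b = np.sum(points ** 2, axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        center, k = sol[:3], sol[3]
        radius = np.sqrt(k + center @ center)
        return center, radius

    # Synthetic "eyeball surface" points: a noisy sphere of radius ~12 mm
    rng = np.random.default_rng(3)
    u, v = rng.uniform(0, np.pi, 2000), rng.uniform(0, 2 * np.pi, 2000)
    pts = 12.0 * np.c_[np.sin(u) * np.cos(v), np.sin(u) * np.sin(v), np.cos(u)]
    pts += rng.normal(scale=0.05, size=pts.shape)

    center, radius = fit_sphere(pts)
    deviation = np.linalg.norm(pts - center, axis=1) - radius    # signed radial deviation per point
    print(f"radius {radius:.2f} mm, RMS deviation {np.sqrt(np.mean(deviation**2)):.3f} mm")
    ```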

  11. On the fit of models to covariances and methodology to the Bulletin.

    Science.gov (United States)

    Bentler, P M

    1992-11-01

    It is noted that 7 of the 10 top-cited articles in the Psychological Bulletin deal with methodological topics. One of these is the Bentler-Bonett (1980) article on the assessment of fit in covariance structure models. Some context is provided on the popularity of this article. In addition, a citation study of methodology articles appearing in the Bulletin since 1978 was carried out. It verified that publications in design, evaluation, measurement, and statistics continue to be important to psychological research. Some thoughts are offered on the role of the journal in making developments in these areas more accessible to psychologists.

  12. Surface wettability effects on critical heat flux of boiling heat transfer using nanoparticle coatings

    KAUST Repository

    Hsu, Chin-Chi

    2012-06-01

    This study investigates the effects of surface wettability on pool boiling heat transfer. Nano-silica particle coatings were used to vary the wettability of the copper surface from superhydrophilic to superhydrophobic by modifying surface topography and chemistry. Experimental results show that critical heat flux (CHF) values are higher in the hydrophilic region. Conversely, CHF values are lower in the hydrophobic region. The experimental CHF data of the modified surface do not fit the classical models. Therefore, this study proposes a simple model to build the nexus between the surface wettability and the growth of bubbles on the heating surface. © 2012 Elsevier Ltd. All rights reserved.
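
    Classical hydrodynamic CHF models such as Zuber's prediction contain no wettability term at all, which is one reason surfaces with engineered wettability are not captured by them. Evaluating Zuber's correlation for saturated water at atmospheric pressure (approximate property values) makes the point that contact angle never enters:

    ```python
    # Zuber hydrodynamic CHF prediction (no wettability dependence):
    #   q_CHF = C * h_fg * rho_v**0.5 * (sigma * g * (rho_l - rho_v))**0.25,  C ~ 0.131
    h_fg  = 2.257e6    # latent heat of vaporization, J/kg (water at 1 atm, approx.)
    rho_l = 958.0      # liquid density, kg/m^3
    rho_v = 0.598      # vapour density, kg/m^3
    sigma = 0.0589     # surface tension, N/m
    g     = 9.81       # m/s^2

    q_chf = 0.131 * h_fg * rho_v ** 0.5 * (sigma * g * (rho_l - rho_v)) ** 0.25
    print(f"Zuber CHF for saturated water at 1 atm: {q_chf / 1e6:.2f} MW/m^2")
    ```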

  13. A hands-on approach for fitting long-term survival models under the GAMLSS framework.

    Science.gov (United States)

    de Castro, Mário; Cancho, Vicente G; Rodrigues, Josemar

    2010-02-01

    In many data sets from clinical studies there are patients insusceptible to the occurrence of the event of interest. Survival models which ignore this fact are generally inadequate. The main goal of this paper is to describe an application of the generalized additive models for location, scale, and shape (GAMLSS) framework to the fitting of long-term survival models. In this work the number of competing causes of the event of interest follows the negative binomial distribution. In this way, some well known models found in the literature are characterized as particular cases of our proposal. The model is conveniently parameterized in terms of the cured fraction, which is then linked to covariates. We explore the use of the gamlss package in R as a powerful tool for inference in long-term survival models. The procedure is illustrated with a numerical example. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  14. Model of the final borehole geometry for helical laser drilling

    Science.gov (United States)

    Kroschel, Alexander; Michalowski, Andreas; Graf, Thomas

    2018-05-01

    A model for predicting the borehole geometry for laser drilling is presented based on the calculation of a surface of constant absorbed fluence. It is applicable to helical drilling of through-holes with ultrashort laser pulses. The threshold fluence describing the borehole surface is fitted for best agreement with experimental data in the form of cross-sections of through-holes of different shapes and sizes in stainless steel samples. The fitted value is similar to ablation threshold fluence values reported for laser ablation models.

  15. Atomic structure of diamond {111} surfaces etched in oxygen water vapor

    International Nuclear Information System (INIS)

    Theije, F.K. de; Reedijk, M.F.; Arsic, J.; Enckevort, W.J.P. van; Vlieg, E.

    2001-01-01

    The atomic structure of the {111} diamond face after oxygen-water-vapor etching is determined using x-ray scattering. We find that a single-dangling-bond diamond {111} surface model, terminated by a full monolayer of -OH, fits our data best. To explain the measurements it is necessary to add an ordered water layer on top of the -OH-terminated surface. The vertical contraction of the surface cell and the distance between the oxygen atoms are generally in agreement with model calculations and results for similar systems. The OH termination is likely to be present during etching as well. This model experimentally confirms the atomic-scale mechanism we proposed previously for this etching system.

  16. Validity of Intraoral Scans Compared with Plaster Models: An In-Vivo Comparison of Dental Measurements and 3D Surface Analysis.

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    Full Text Available Dental measurements have commonly been taken from plaster dental models obtained from alginate impressions. Through the use of an intraoral scanner, digital impressions now acquire the information directly from the mouth. The purpose of this study was to determine the validity of intraoral scans compared to plaster models. Two types of dental models (intraoral scan and plaster model) of 20 subjects were included in this study. The subjects had impressions taken of their teeth, which were cast as plaster models. In addition, their mouths were scanned with the intraoral scanner and the scans were converted into digital models. Eight transverse and 16 anteroposterior measurements, and 24 tooth heights and widths, were recorded on the plaster models with a digital caliper and on the intraoral scans with 3D reverse engineering software. For 3D surface analysis, the two models were superimposed using a best-fit algorithm. The average differences between the two models at all points on the surfaces were computed. Paired t-tests and Bland-Altman plots were used to determine the validity of measurements from the intraoral scan compared to those from the plaster model. There were no significant differences between the plaster models and intraoral scans, except for one measurement of lower intermolar width. The Bland-Altman plots of all measurements showed that differences between the two models were within the limits of agreement. The average surface difference between the two models was within 0.10 mm. The results of the present study indicate that intraoral scans are clinically acceptable for diagnosis and treatment planning in dentistry and can be used in place of plaster models.
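
    The Bland-Altman analysis used here (differences plotted against means, with 95% limits of agreement at the mean difference ± 1.96 SD) is simple to reproduce. A minimal sketch with placeholder paired measurements of one dimension (not the study's data):

    ```python
    import numpy as np

    # Placeholder paired measurements (mm): plaster-model caliper vs. intraoral-scan software
    caliper = np.array([35.2, 28.9, 41.1, 33.5, 30.2, 36.8, 29.7, 40.3])
    scan    = np.array([35.0, 29.1, 41.3, 33.4, 30.5, 36.6, 29.9, 40.1])

    diff = scan - caliper
    mean = (scan + caliper) / 2.0
    bias = diff.mean()
    loa_low  = bias - 1.96 * diff.std(ddof=1)
    loa_high = bias + 1.96 * diff.std(ddof=1)
    print(f"bias {bias:.3f} mm, 95% limits of agreement [{loa_low:.3f}, {loa_high:.3f}] mm")
    # Agreement is judged by whether these limits are clinically acceptable, not by a p-value alone
    ```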

  17. Validity of Intraoral Scans Compared with Plaster Models: An In-Vivo Comparison of Dental Measurements and 3D Surface Analysis

    Science.gov (United States)

    2016-01-01

    Purpose Dental measurements have commonly been taken from plaster dental models obtained from alginate impressions. Through the use of an intraoral scanner, digital impressions now acquire the information directly from the mouth. The purpose of this study was to determine the validity of the intraoral scans compared to plaster models. Materials and Methods Two types of dental models (intraoral scan and plaster model) of 20 subjects were included in this study. The subjects had impressions taken of their teeth, which were cast as plaster models. In addition, their mouths were scanned with the intraoral scanner and the scans were converted into digital models. Eight transverse and 16 anteroposterior measurements, and 24 tooth heights and widths, were recorded on the plaster models with a digital caliper and on the intraoral scans with 3D reverse engineering software. For 3D surface analysis, the two models were superimposed using a best-fit algorithm. The average differences between the two models at all points on the surfaces were computed. Paired t-tests and Bland-Altman plots were used to determine the validity of measurements from the intraoral scan compared to those from the plaster model. Results There were no significant differences between the plaster models and intraoral scans, except for one measurement of lower intermolar width. The Bland-Altman plots of all measurements showed that differences between the two models were within the limits of agreement. The average surface difference between the two models was within 0.10 mm. Conclusions The results of the present study indicate that the intraoral scans are clinically acceptable for diagnosis and treatment planning in dentistry and can be used in place of plaster models. PMID:27304976

  18. Simplified models for surface hyperchannelling

    International Nuclear Information System (INIS)

    Evdokimov, I.N.; Webb, R.; Armour, D.G.; Karpuzov, D.S.

    1979-01-01

    Experimental and detailed three-dimensional computer simulation studies of the scattering of low-energy argon ions incident at grazing angles onto a nickel single crystal have shown that, under certain well-defined conditions, surface hyperchannelling dominates the reflection process. The applicability of simple computer simulation models to the study of this type of scattering has been investigated by comparing the results obtained using a 'summation of binary collisions' model and a continuous string model with both the experimental observations and the three-dimensional model calculations. It has been shown that all the major features of the phenomenon can be reproduced in a qualitative way using the simple models and that the continuous string represents a good approximation to the 'real' crystal over a wide range of angles. The saving in computer time compared with the more complex model makes it practicable to use the simple models to calculate cross-sections and overall scattering intensities for a wide range of geometries. The results of these calculations suggest that the critical angle for the onset of surface hyperchannelling, which is associated with a reduction in scattering intensity and is thus not too sensitive to the parameters of the experimental apparatus, is a useful quantity for the comparison of theoretical calculations with experimental measurements. (author)

  19. Analysis of Shift and Deformation of Planar Surfaces Using the Least Squares Plane

    Directory of Open Access Journals (Sweden)

    Hrvoje Matijević

    2006-12-01

    Full Text Available Modern methods of measurement based on advanced reflectorless distance measurement have paved the way for easier detection and analysis of shift and deformation. A large quantity of collected data points will often require a mathematical model of the surface that best fits them. Although this can be a complex task, in the case of planar surfaces it is easily done, enabling further processing and analysis of the measurement results. The paper describes the fitting of a plane to a set of collected points by least squares distance, with outliers previously excluded via the RANSAC algorithm. Based on that, a method for the analysis of the deformation and shift of planar surfaces is also described.
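
    The workflow described — RANSAC to reject outliers, then an orthogonal least-squares plane through the remaining points — can be sketched compactly; the synthetic point cloud below stands in for real reflectorless measurements:

    ```python
    import numpy as np

    def fit_plane_lsq(points):
        """Orthogonal least-squares plane: returns centroid and unit normal (smallest singular vector)."""
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
        return centroid, vt[-1]

    def ransac_plane(points, n_iter=500, tol=0.01, seed=4):
        rng = np.random.default_rng(seed)
        best_inliers = None
        for _ in range(n_iter):
            sample = points[rng.choice(len(points), 3, replace=False)]
            c, n = fit_plane_lsq(sample)
            dist = np.abs((points - c) @ n)                 # orthogonal distance to candidate plane
            inliers = dist < tol
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return fit_plane_lsq(points[best_inliers]), best_inliers

    # Synthetic scan of a planar wall: small Gaussian noise plus 10% gross outliers
    rng = np.random.default_rng(5)
    xy = rng.uniform(-1.0, 1.0, (1000, 2))
    pts = np.c_[xy, 0.002 * rng.normal(size=1000)]
    pts[:100, 2] += rng.uniform(0.1, 0.5, 100)              # simulated outliers

    (centroid, normal), inliers = ransac_plane(pts)
    print(f"{inliers.sum()} inliers; plane normal {np.round(normal, 3)}")
    ```

    Deformation or shift between epochs can then be assessed from changes in the fitted plane parameters or from the residuals of one epoch's points against the other epoch's plane.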

  20. Cryptosporidium and microcystins, two problems in making surface water fit for drinking; Criptosporidium y microcistinas, dos problemas en la potabilizacion de las aguas superficiales

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez-Cedron Rodriguez, A.

    2004-07-01

    Spanish Royal Decree 140/2003 of 7 February 2003 (published in the Spanish Official Gazette, BOE 45 of 21 February 2003) laid down the health criteria for considering water fit for human consumption. In it, for the first time in Spanish regulations, mention is made of the need to determine the presence of the Cryptosporidium genus and other microorganisms or parasites under certain conditions (turbidity). The decree also provides that under certain other conditions (eutrophication) the level of microcystins must also be determined. Both of these contaminants are found in surface water. This article describes the characteristics of these pollutants, their pathology, recorded epidemic outbreaks, the circumstances in which they can be detected, how the appropriate analyses leading to their detection can be carried out, and which treatments are employed to eliminate them during the process of making water fit for human consumption. (Author)