WorldWideScience

Sample records for best-fit model parameters

  1. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
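The Pareto criterion described in this abstract reduces to a non-domination test over per-target goodness-of-fit scores. A minimal sketch, assuming lower scores mean better fit (the data here are illustrative, not from the study):

```python
import numpy as np

def pareto_frontier(errors):
    """Return indices of input sets on the Pareto frontier.

    errors: (n_sets, n_targets) array of per-target calibration errors
    (lower is better). A set is on the frontier if no other set is at
    least as good on every target and strictly better on at least one.
    """
    n = errors.shape[0]
    on_front = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(errors[j] <= errors[i]) and np.any(errors[j] < errors[i]):
                on_front[i] = False   # input set i is dominated by j
                break
    return np.where(on_front)[0]

# Three candidate input sets scored against two calibration targets:
errs = np.array([[1.0, 3.0],   # frontier: best on target 1
                 [2.0, 2.0],   # frontier: best on target 2
                 [3.0, 3.0]])  # dominated by both of the others
print(pareto_frontier(errs))   # → [0 1]
```

No weights are needed: the frontier contains every trade-off between the two targets, which is the transparency the abstract argues for.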

  2. The best-fit universe

    International Nuclear Information System (INIS)

    Turner, M.S.; Chicago Univ., IL

    1990-10-01

Inflation provides very strong motivation for a flat Universe, Harrison-Zel'dovich (constant-curvature) perturbations, and cold dark matter. However, there are a number of cosmological observations that conflict with the predictions of the simplest such model -- one with zero cosmological constant. They include the age of the Universe, dynamical determinations of Ω, galaxy-number counts, and the apparent abundance of large-scale structure in the Universe. While the discrepancies are not yet serious enough to rule out the simplest and "most well motivated" model, the current data point to a "best-fit model" with the following parameters: Ω_B ≅ 0.03, Ω_CDM ≅ 0.17, Ω_Λ ≅ 0.8, and H_0 ≅ 70 km s⁻¹ Mpc⁻¹, which improves significantly the concordance with observations. While there is no good reason to expect such a value for the cosmological constant, there is no physical principle that would rule out such. 42 refs

  3. The best-fit universe

    Energy Technology Data Exchange (ETDEWEB)

Turner, M.S. (Fermi National Accelerator Lab., Batavia, IL (USA); Chicago Univ., IL (USA), Enrico Fermi Inst.)

    1990-10-01

Inflation provides very strong motivation for a flat Universe, Harrison-Zel'dovich (constant-curvature) perturbations, and cold dark matter. However, there are a number of cosmological observations that conflict with the predictions of the simplest such model -- one with zero cosmological constant. They include the age of the Universe, dynamical determinations of Ω, galaxy-number counts, and the apparent abundance of large-scale structure in the Universe. While the discrepancies are not yet serious enough to rule out the simplest and "most well motivated" model, the current data point to a "best-fit model" with the following parameters: Ω_B ≈ 0.03, Ω_CDM ≈ 0.17, Ω_Λ ≈ 0.8, and H_0 ≈ 70 km s⁻¹ Mpc⁻¹, which improves significantly the concordance with observations. While there is no good reason to expect such a value for the cosmological constant, there is no physical principle that would rule out such. 42 refs.

  4. Minimal see-saw model predicting best fit lepton mixing angles

    International Nuclear Information System (INIS)

    King, Stephen F.

    2013-01-01

We discuss a minimal predictive see-saw model in which the right-handed neutrino mainly responsible for the atmospheric neutrino mass has couplings to (ν_e, ν_μ, ν_τ) proportional to (0, 1, 1) and the right-handed neutrino mainly responsible for the solar neutrino mass has couplings to (ν_e, ν_μ, ν_τ) proportional to (1, 4, 2), with a relative phase η = −2π/5. We show how these patterns of couplings could arise from an A_4 family symmetry model of leptons, together with Z_3 and Z_5 symmetries which fix η = −2π/5 up to a discrete phase choice. The PMNS matrix is then completely determined by one remaining parameter, which is used to fix the neutrino mass ratio m_2/m_3. The model predicts the lepton mixing angles θ_12 ≈ 34°, θ_23 ≈ 41°, θ_13 ≈ 9.5°, which exactly coincide with the current best fit values for a normal neutrino mass hierarchy, together with the distinctive prediction for the CP violating oscillation phase δ ≈ 106°.

  5. An Improved Cognitive Model of the Iowa and Soochow Gambling Tasks With Regard to Model Fitting Performance and Tests of Parameter Consistency

    Directory of Open Access Journals (Sweden)

Junyi Dai

    2015-03-01

The Iowa Gambling Task (IGT) and the Soochow Gambling Task (SGT) are two experience-based risky decision-making tasks for examining decision-making deficits in clinical populations. Several cognitive models, including the expectancy-valence learning model (EVL) and the prospect valence learning model (PVL), have been developed to disentangle the motivational, cognitive, and response processes underlying the explicit choices in these tasks. The purpose of the current study was to develop an improved model that can fit empirical data better than the EVL and PVL models and, in addition, produce more consistent parameter estimates across the IGT and SGT. Twenty-six opiate users (mean age 34.23; SD 8.79) and 27 control participants (mean age 35; SD 10.44) completed both tasks. Eighteen cognitive models varying in evaluation, updating, and choice rules were fit to individual data, and their performance was compared to that of a statistical baseline model to find a best-fitting model. The results showed that the model combining the prospect utility function treating gains and losses separately, the decay-reinforcement updating rule, and the trial-independent choice rule performed the best in both tasks. Furthermore, the winning model produced more consistent individual parameter estimates across the two tasks than any of the other models.
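The winning combination of rules can be sketched in a few lines; the parameter names and values below (decay A, utility curvature alpha, loss aversion w) are illustrative placeholders, not the study's estimates:

```python
import numpy as np

def prospect_utility(x, alpha=0.5, w=1.5):
    """Prospect-style utility treating gains and losses separately
    (alpha = curvature, w = loss-aversion weight; illustrative values)."""
    return x ** alpha if x >= 0 else -w * (-x) ** alpha

def decay_reinforce(E, choice, utility, A=0.8):
    """Decay-reinforcement updating: every deck expectancy decays by A,
    and the chosen deck is reinforced by the utility just experienced."""
    E = A * E
    E[choice] += utility
    return E

E = np.zeros(4)                                   # expectancies for 4 decks
E = decay_reinforce(E, 0, prospect_utility(100.0))   # win 100 on deck 0
E = decay_reinforce(E, 0, prospect_utility(-50.0))   # then lose 50 on deck 0
print(np.round(E, 2))
```

A trial-independent choice rule would then map these expectancies to choice probabilities with a fixed (non-trial-dependent) sensitivity, e.g. a softmax with constant temperature.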

  6. A flexible, interactive software tool for fitting the parameters of neuronal models.

    Science.gov (United States)

    Friedrich, Péter; Vella, Michael; Gulyás, Attila I; Freund, Tamás F; Káli, Szabolcs

    2014-01-01

    The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.
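The core loop of such a tool is minimising a cost function that compares simulated and target traces. A toy sketch using SciPy in place of a NEURON simulation (the membrane model, parameter values, and cost function are illustrative, not Optimizer's internals):

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for a neuron simulation: a passive membrane's voltage
# response to a current step, parameterised by gain R and time constant tau.
def simulate(params, t):
    R, tau = params
    return R * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 50.0, 200)          # time base (ms)
target = simulate((150.0, 12.0), t)      # the "recorded" trace

# Cost: mean squared error between the model trace and the target trace.
def cost(params):
    return np.mean((simulate(params, t) - target) ** 2)

# Nelder-Mead stands in for the nonlinear optimizers the tool wraps.
res = minimize(cost, x0=[100.0, 5.0], method="Nelder-Mead")
print(np.round(res.x, 1))                # recovers approximately [150, 12]
```

Swapping `simulate` for a call into a real simulator and `cost` for a feature-based criterion is exactly the kind of modularity the abstract describes.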

  7. A flexible, interactive software tool for fitting the parameters of neuronal models

    Directory of Open Access Journals (Sweden)

Péter Friedrich

    2014-07-01

The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.

  8. A general theory for the construction of best-fit correlation equations for multi-dimensioned numerical data

    International Nuclear Information System (INIS)

    Moore, S.E.; Moffat, D.G.

    2007-01-01

    A general theory for the construction of best-fit correlation equations for multi-dimensioned sets of numerical data is presented. This new theory is based on the mathematics of n-dimensional surfaces and goodness-of-fit statistics. It is shown that orthogonal best-fit analytical trend lines for each of the independent parameters of the data can be used to construct an overall best-fit correlation equation that satisfies both physical boundary conditions and best-of-fit statistical measurements. Application of the theory is illustrated by fitting a three-parameter set of numerical finite-element maximum-stress data obtained earlier by Dr. Moffat for pressure vessel nozzles and/or piping system branch connections
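The construction can be illustrated on a separable two-parameter data set: fit a trend line in each independent parameter along an orthogonal slice through the data, then combine the trend lines into one overall correlation equation. This is a simplified, separable reading of the approach, not the paper's exact formulation:

```python
import numpy as np

x = np.linspace(1.0, 3.0, 5)
y = np.linspace(0.5, 2.0, 5)
X, Y = np.meshgrid(x, y, indexing="ij")
Z = (2.0 * X) * (Y ** 2)        # synthetic separable "stress" data

# Orthogonal best-fit trend lines, one per independent parameter:
gx = np.poly1d(np.polyfit(x, Z[:, 0], 1))   # trend in x at fixed y
hy = np.poly1d(np.polyfit(y, Z[0, :], 2))   # trend in y at fixed x
z0 = Z[0, 0]                                # normalising constant

# Overall correlation equation as the normalised product of trends:
Zfit = gx(X) * hy(Y) / z0
print(float(np.max(np.abs(Zfit - Z))) < 1e-6)   # exact for separable data
```

For real finite-element data the product form is only approximate, and the goodness-of-fit statistics mentioned in the abstract quantify the residual.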

  9. Soil physical properties influencing the fitting parameters in Philip and Kostiakov infiltration models

    International Nuclear Information System (INIS)

    Mbagwu, J.S.C.

    1994-05-01

    Among the many models developed for monitoring the infiltration process those of Philip and Kostiakov have been studied in detail because of their simplicity and the ease of estimating their fitting parameters. The important soil physical factors influencing the fitting parameters in these infiltration models are reported in this study. The results of the study show that the single most important soil property affecting the fitting parameters in these models is the effective porosity. 36 refs, 2 figs, 5 tabs
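Both models are simple enough that their fitting parameters can be estimated from cumulative infiltration data by ordinary least squares. A sketch with synthetic data (units and values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

# Cumulative infiltration models (t in h, I in cm):
def philip(t, S, A):           # I = S*sqrt(t) + A*t  (S: sorptivity)
    return S * np.sqrt(t) + A * t

def kostiakov(t, k, a):        # I = k * t**a
    return k * t ** a

# Synthetic observations generated from Philip with S = 2.0, A = 0.5:
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 6.0])
I = philip(t, 2.0, 0.5)

(S, A), _ = curve_fit(philip, t, I, p0=[1.0, 0.1])
(k, a), _ = curve_fit(kostiakov, t, I, p0=[1.0, 0.5])
print(round(S, 2), round(A, 2))   # → 2.0 0.5
```

Regressing fitted parameters such as S, A, k, and a against measured soil properties is how a study like this one isolates effective porosity as the dominant factor.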

  10. A fitting LEGACY – modelling Kepler's best stars

    Directory of Open Access Journals (Sweden)

    Aarslev Magnus J.

    2017-01-01

The LEGACY sample represents the best solar-like stars observed in the Kepler mission [5, 8]. The 66 stars in the sample are all on the main sequence or only slightly more evolved. They each have more than one year of short-cadence observation data, allowing for precise extraction of individual frequencies. Here we present model fits using a modified ASTFIT procedure employing two different near-surface-effect corrections, one by Christensen-Dalsgaard [4] and a newer correction proposed by Ball & Gizon [1]. We then compare the results obtained using the different corrections. We find that using the latter correction yields lower masses and significantly lower χ² values for a large part of the sample.

  11. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    OpenAIRE

    Matthew P. Adams; Catherine J. Collier; Sven Uthicke; Yan X. Ow; Lucas Langlois; Katherine R. O’Brien

    2017-01-01

When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated...

  12. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species.

    Science.gov (United States)

    Adams, Matthew P; Collier, Catherine J; Uthicke, Sven; Ow, Yan X; Langlois, Lucas; O'Brien, Katherine R

    2017-01-04

When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.

  13. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    Science.gov (United States)

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O'Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.

  14. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development.

    Science.gov (United States)

    Tøndel, Kristin; Niederer, Steven A; Land, Sander; Smith, Nicolas P

    2014-05-20

Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations, is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input-output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model, are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of...

  15. On The Computation Of The Best-fit Okada-type Tsunami Source

    Science.gov (United States)

    Miranda, J. M. A.; Luis, J. M. F.; Baptista, M. A.

    2017-12-01

The forward simulation of earthquake-induced tsunamis usually assumes that the initial sea surface elevation mimics the co-seismic deformation of the ocean bottom described by a simple "Okada-type" source (rectangular fault with constant slip in a homogeneous elastic half space). This approach is highly effective, in particular in far-field conditions. With this assumption, and a given set of tsunami waveforms recorded by deep-sea pressure sensors and/or coastal tide stations, it is possible to deduce the set of parameters of the Okada-type solution that best fits the sea level observations. To do this, we build a "solution space" of possible tsunami sources. Each solution consists of a combination of parameters: earthquake magnitude, length, width, slip, depth and angles (strike, rake, and dip). To constrain the number of possible solutions we use the earthquake parameters defined by seismology and establish a range of possible values for each parameter. We select the "best Okada source" by comparing the results of direct tsunami modeling over the solution space of tsunami sources. However, direct tsunami modeling is a time-consuming process for the whole solution space. To overcome this problem, we use a precomputed database of Empirical Green Functions to compute the tsunami waveforms resulting from unit water sources and search for the one that best matches the observations. In this study, we use as a test case the Solomon Islands tsunami of 6 February 2013, caused by a magnitude 8.0 earthquake. The "best Okada" source is the solution that best matches the tsunami recorded at six DART stations in the area. We discuss the differences between the initial seismic solution and the final one obtained from tsunami data. This publication received funding from FCT project UID/GEO/50019/2013 - Instituto Dom Luiz.
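The precomputed-waveform lookup amounts to scoring each candidate source against the observations and keeping the smallest misfit. A schematic sketch with toy waveforms, assuming an RMS misfit as the comparison metric (the real workflow combines unit-source Green functions and multiple stations):

```python
import numpy as np

def best_source(candidates, observed):
    """Pick the candidate source whose precomputed waveform best matches
    the observed record (RMS misfit), avoiding a full tsunami simulation
    for every trial source in the solution space.

    candidates: dict mapping source id -> precomputed waveform
    observed:   waveform sampled on the same time base
    """
    misfit = {sid: np.sqrt(np.mean((wf - observed) ** 2))
              for sid, wf in candidates.items()}
    return min(misfit, key=misfit.get)

t = np.linspace(0, 10, 100)
bank = {"shallow": np.sin(t),              # toy stand-ins for the
        "deep": 0.5 * np.sin(t - 1.0)}     # precomputed database
obs = 0.5 * np.sin(t - 1.0) + 0.01         # nearly matches "deep"
print(best_source(bank, obs))              # → deep
```

In the study the candidate ids would be full Okada parameter combinations (magnitude, length, width, slip, depth, strike, rake, dip) constrained by the seismic solution.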

  16. Universal Rate Model Selector: A Method to Quickly Find the Best-Fit Kinetic Rate Model for an Experimental Rate Profile

    Science.gov (United States)

    2017-08-01

Kinetic rate models range from pure chemical reactions to mass transfer... The rate model that best fits the experimental data is a first-order or homogeneous catalytic reaction... Avrami (7) and intraparticle diffusion (6) rate equations, to name a few. A single fitting algorithm (kinetic rate model) for a reaction does not...

  17. Spreadsheets, Graphing Calculators and the Line of Best Fit

    Directory of Open Access Journals (Sweden)

    Bernie O'Sullivan

    2003-07-01

    One technique that can now be done, almost mindlessly, is the line of best fit. Both the graphing calculator and the Excel spreadsheet produce models for collected data that appear to be very good fits, but upon closer scrutiny, are revealed to be quite poor. This article will examine one such case. I will couch the paper within the framework of a very good classroom investigation that will help generate students’ understanding of the basic principles of curve fitting and will enable them to produce a very accurate model of collected data by combining the technology of the graphing calculator and the spreadsheet.
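The article's point is easy to reproduce: a least-squares line through curved data can report a high R² while the residuals show obvious structure. For example:

```python
import numpy as np

# Quadratic data: a "line of best fit" can look excellent by R^2
# even though the linear model is systematically wrong.
x = np.arange(1, 11, dtype=float)
y = x ** 2

m, b = np.polyfit(x, y, 1)                    # least-squares line
resid = y - (m * x + b)
r2 = 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)
print(round(r2, 3))                           # high R^2 despite curvature

# The residuals are U-shaped -- positive at the ends, negative in the
# middle -- which is exactly the closer scrutiny the article recommends.
print(np.sign(resid).astype(int))
```

A residual plot, easy to produce in either a spreadsheet or a graphing calculator, exposes what the fit statistic hides.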

  18. Estimation and prediction of maximum daily rainfall at Sagar Island using best fit probability models

    Science.gov (United States)

    Mandal, S.; Choudhury, B. U.

    2015-07-01

Sagar Island, setting on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best-fit distribution models for the annual, seasonal and monthly time series based on maximum rank with minimum value of test statistics, three statistical goodness-of-fit tests, viz. the Kolmogorov-Smirnov test (K-S), the Anderson-Darling test (A²) and the Chi-square test (χ²), were employed. The best-fit probability distribution was identified from the highest overall score obtained from the three goodness-of-fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon and summer season MDR, while the lognormal, Weibull and Pearson 5 distributions were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of an annual MDR of >50, >100, >150, >200 and >250 mm were estimated at the 99, 85, 40, 12 and 3 % levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85 %) for MDR of >100 mm and moderate probabilities (37 to 46 %) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
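The selection step, fitting several candidate distributions and ranking them by a goodness-of-fit statistic, can be sketched with SciPy. The data are synthetic and only the K-S statistic of the study's three tests is shown:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
mdr = rng.lognormal(mean=4.0, sigma=0.4, size=29)  # 29 "years" of annual MDR (mm)

# Fit candidate distributions and rank them by the K-S statistic
# (lower = better fit); the study also scores A^2 and chi-square.
candidates = {
    "normal":    stats.norm,
    "lognormal": stats.lognorm,
    "weibull":   stats.weibull_min,
}
scores = {}
for name, dist in candidates.items():
    params = dist.fit(mdr)
    scores[name] = stats.kstest(mdr, dist.cdf, args=params).statistic

best = min(scores, key=scores.get)
print(best)

# Return level for a 25-year return period from the best-fit model:
p = candidates[best].fit(mdr)
print(round(float(candidates[best].ppf(1 - 1/25, *p)), 1))
```

The return-period line is the quantile inversion behind the abstract's 2- to 25-year MDR estimates: the T-year event is the value exceeded with probability 1/T per year.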

  19. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    Science.gov (United States)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

high flow and second the derivative of the GED probability density function at zero is zero for β > 1, but discontinuous for β ≤ 1, and even infinite for β < 1, with which the maximum likelihood estimation can drive the model errors as close to zero as possible. The BC-GED approach, which estimates the parameters of the BC-GED model (i.e. λ and β) as well as the hydrologic model parameters, is the best distance-based goodness-of-fit indicator, because not only is the model validation using groundwater levels very good, but the model errors also best fulfill the statistical assumptions. However, in some cases of model calibration with few observations, e.g. calibration of a single-event model, the MAE, i.e. the boundary indicator (β = 1) between the two classes, can replace the BC-GED to avoid estimating the parameters of the BC-GED model, because the model validation of the MAE is best.
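The family of distance-based indicators discussed here can be read as a β-norm of the model errors, with MAE (β = 1) and an RMSE-like measure (β = 2) as boundary and special cases. An illustrative sketch of that reading (not the authors' exact BC-GED likelihood):

```python
import numpy as np

def ged_objective(errors, beta):
    """Distance-based goodness-of-fit as a beta-norm of model errors:
    beta = 2 penalises large errors like SSE/Nash-Sutcliffe-type
    measures; beta = 1 reduces to the mean absolute error (MAE)."""
    e = np.abs(np.asarray(errors, dtype=float))
    return np.mean(e ** beta) ** (1.0 / beta)

errs = [0.5, -1.0, 2.0, -0.25]
print(round(ged_objective(errs, 1), 4))   # MAE → 0.9375
print(round(ged_objective(errs, 2), 4))   # RMSE-like measure
```

Sweeping β between 1 and 2 shifts the calibration's emphasis from median behaviour toward peak (high-flow) errors, which is the trade-off the abstract formalises through the GED shape parameter.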

  20. Recommendations concerning models and parameters best suited to breeder reactor environmental radiological assessments

    International Nuclear Information System (INIS)

    Miller, C.W.; Baes, C.F. III; Dunning, D.E. Jr.

    1980-05-01

    Recommendations are presented concerning the models and parameters best suited for assessing the impact of radionuclide releases to the environment by breeder reactor facilities. These recommendations are based on the model and parameter evaluations performed during this project to date. Seven different areas are covered in separate sections

  1. The FITS model office ergonomics program: a model for best practice.

    Science.gov (United States)

    Chim, Justine M Y

    2014-01-01

An effective office ergonomics program can predict positive results in reducing musculoskeletal injury rates, enhancing productivity, and improving staff well-being and job satisfaction. Its objective is to provide a systematic solution to manage the potential risk of musculoskeletal disorders among computer users in an office setting. The FITS Model Office Ergonomics Program has been developed, drawing on the legislative requirements for promoting the health and safety of workers using computers for extended periods as well as on previous research findings. The Model is developed according to practical industrial knowledge in ergonomics, occupational health and safety management, and human resources management in Hong Kong and overseas. This paper proposes a comprehensive office ergonomics program, the FITS Model, which considers (1) Furniture Evaluation and Selection; (2) Individual Workstation Assessment; (3) Training and Education; and (4) Stretching Exercises and Rest Breaks as elements of an effective program. An experienced ergonomics practitioner should be included in the program design and implementation. Through the FITS Model Office Ergonomics Program, the risk of musculoskeletal disorders among computer users can be eliminated or minimized, and workplace health and safety and employees' wellness enhanced.

  2. A Stepwise Fitting Procedure for automated fitting of Ecopath with Ecosim models

    Directory of Open Access Journals (Sweden)

    Erin Scott

    2016-01-01

The Stepwise Fitting Procedure automates testing of alternative hypotheses used for fitting Ecopath with Ecosim (EwE) models to observation reference data (Mackinson et al. 2009). The calibration of EwE model predictions against observed data is important for evaluating any model that will be used for ecosystem-based management. Thus far, the model fitting procedure in EwE has been carried out manually: a repetitive task involving setting >1000 specific individual searches to find the statistically 'best fit' model. The novel fitting procedure automates the manual procedure, therefore producing accurate results, and lets the modeller concentrate on investigating the 'best fit' model for ecological accuracy.

  3. Fitting PAC spectra with stochastic models: PolyPacFit

    Energy Technology Data Exchange (ETDEWEB)

    Zacate, M. O., E-mail: zacatem1@nku.edu [Northern Kentucky University, Department of Physics and Geology (United States); Evenson, W. E. [Utah Valley University, College of Science and Health (United States); Newhouse, R.; Collins, G. S. [Washington State University, Department of Physics and Astronomy (United States)

    2010-04-15

PolyPacFit is an advanced fitting program for time-differential perturbed angular correlation (PAC) spectroscopy. It incorporates stochastic models and provides robust options for customization of fits. Notable features of the program include platform independence and support for (1) fits to stochastic models of hyperfine interactions, (2) user-defined constraints among model parameters, (3) fits to multiple spectra simultaneously, and (4) nuclear probes of any spin.

  4. Group Targets Tracking Using Multiple Models GGIW-CPHD Based on Best-Fitting Gaussian Approximation and Strong Tracking Filter

    Directory of Open Access Journals (Sweden)

    Yun Wang

    2016-01-01

The gamma Gaussian inverse Wishart cardinalized probability hypothesis density (GGIW-CPHD) algorithm has been used to track group targets in the presence of cluttered measurements and missed detections. A multiple-models GGIW-CPHD algorithm based on a best-fitting Gaussian approximation method (BFG) and a strong tracking filter (STF) is proposed to address the increased tracking error of the GGIW-CPHD algorithm when the group targets are maneuvering. The best-fitting Gaussian approximation method is proposed to implement the fusion of multiple models, using the strong tracking filter to correct the predicted covariance matrix of the GGIW component. The corresponding likelihood functions are deduced to update the probabilities of the multiple tracking models. The simulation results show that the proposed tracking algorithm MM-GGIW-CPHD can effectively deal with the combination/spawning of groups, and the tracking error of group targets in the maneuvering stage is decreased.

  5. Application of an Evolutionary Algorithm for Parameter Optimization in a Gully Erosion Model

    Energy Technology Data Exchange (ETDEWEB)

    Rengers, Francis; Lunacek, Monte; Tucker, Gregory

    2016-06-01

    Herein we demonstrate how to use model optimization to determine a set of best-fit parameters for a landform model simulating gully incision and headcut retreat. To achieve this result we employed the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an iterative process in which samples are created based on a distribution of parameter values that evolve over time to better fit an objective function. CMA-ES efficiently finds optimal parameters, even with high-dimensional objective functions that are non-convex, multimodal, and non-separable. We ran model instances in parallel on a high-performance cluster, and from hundreds of model runs we obtained the best parameter choices. This method is far superior to brute-force search algorithms and has great potential for many applications in earth science modeling. We found that parameters representing boundary conditions tended to converge toward a single optimal value, whereas parameters controlling geomorphic processes were defined by a range of optimal values.
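The evolution-strategy loop described above can be sketched in a few lines. The study used full CMA-ES; the toy version below omits covariance matrix adaptation, and the two-parameter misfit surface is purely hypothetical:

```python
import random

def simple_es(objective, x0, sigma=0.5, pop=20, elite=5, iters=60, seed=1):
    """Minimal (mu, lambda) evolution strategy: sample around the current
    mean, keep the elite, recentre, and shrink the step size. A toy
    stand-in for CMA-ES (no covariance matrix adaptation)."""
    rng = random.Random(seed)
    mean = list(x0)
    for _ in range(iters):
        samples = [[m + sigma * rng.gauss(0, 1) for m in mean]
                   for _ in range(pop)]
        samples.sort(key=objective)
        best = samples[:elite]
        mean = [sum(s[i] for s in best) / elite for i in range(len(mean))]
        sigma *= 0.97  # slow annealing of the sampling spread
    return mean

# Hypothetical misfit surface with its optimum at (1.0, -2.0)
misfit = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best = simple_es(misfit, [0.0, 0.0])
```

In a real application, `misfit` would be replaced by a model-versus-observation objective, with each sample evaluated in parallel as the study did on a cluster.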

  6. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With the increasing capacity of computational power, several complex optimization algorithms have emerged, but none of them yields a unique, clearly best parameter vector. The parameters of fitted hydrological models depend on the input data, whose quality cannot be assured, as there may be measurement errors in both input and state variables. In this study a methodology was developed to find a set of robust parameter vectors for a hydrological model. To assess the effect of observational error on the parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With these modified data, the model was calibrated and the effect of the measurement errors on the parameters was analysed. It was found that measurement errors have a significant effect on the best performing parameter vector: the erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of each of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used in this study). Based on the depths of the parameter vectors, one can identify a set of robust parameter vectors. The results show that the parameters chosen according to this criterion have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany, using the conceptual HBV model.
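Tukey's half-space depth, used above to select robust parameter vectors, can be approximated in two dimensions by scanning projection directions; the parameter cloud below is hypothetical:

```python
import math

def halfspace_depth(point, cloud, n_dirs=180):
    """Approximate Tukey half-space depth in 2-D: the minimum, over scanned
    directions, of the fraction of cloud points on either closed side of
    the line through `point` perpendicular to that direction."""
    n = len(cloud)
    depth = 1.0
    for k in range(n_dirs):
        theta = math.pi * k / n_dirs
        ux, uy = math.cos(theta), math.sin(theta)
        p0 = ux * point[0] + uy * point[1]
        proj = [ux * x + uy * y for (x, y) in cloud]
        ge = sum(1 for v in proj if v >= p0)
        le = sum(1 for v in proj if v <= p0)
        depth = min(depth, min(ge, le) / n)
    return depth

# Hypothetical cloud of well-performing parameter vectors on a grid
cloud = [(i * 0.1, j * 0.1) for i in range(-5, 6) for j in range(-5, 6)]
central = halfspace_depth((0.0, 0.0), cloud)  # deep: a robust choice
outlier = halfspace_depth((5.0, 5.0), cloud)  # shallow: a fragile choice
```

A deep vector is surrounded by other well-performing vectors in every direction, which is exactly the robustness notion the study exploits.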

  7. On 4-degree-of-freedom biodynamic models of seated occupants: Lumped-parameter modeling

    Science.gov (United States)

    Bai, Xian-Xu; Xu, Shi-Xu; Cheng, Wei; Qian, Li-Jun

    2017-08-01

    It is useful to develop an effective biodynamic model of seated human occupants to help understand human exposure to transportation vehicle vibrations and to help design and improve anti-vibration devices and/or test dummies. This study proposed and demonstrated a methodology for systematically identifying the best configuration or structure of a 4-degree-of-freedom (4DOF) human vibration model and for identifying its parameters. First, an equivalent simplified expression for the models was derived. Second, all 23 possible structural configurations of the models were identified. Third, each of them was calibrated using the frequency response functions recommended in a biodynamic standard. An improved version of the non-dominated sorting genetic algorithm (NSGA-II), based on the Pareto optimization principle, was used to determine the model parameters. Finally, a model evaluation criterion proposed in this study, based on both the goodness of the individual curve fits and the comprehensive goodness of fit, was used to assess the models and to identify the best one. The identified top configurations were better than those reported in the literature. This methodology may also be extended and used to develop models with other DOFs.

  8. Identifying the Best-Fitting Factor Structure of the Experience of Close Relations

    DEFF Research Database (Denmark)

    Esbjørn, Barbara Hoff; Breinholst, Sonja; Niclasen, Janni

    2015-01-01

    . The present study used a Danish sample with the purpose of addressing limitations in previous studies, such as the lack of diversity in cultural background, restricted sample characteristics, and poorly fitting structure models. Participants consisted of 253 parents of children between the ages of 7 and 12...... years, 53% being mothers. The parents completed the paper version of the questionnaire. Confirmatory Factor Analyses were carried out to determine whether theoretically and empirically established models including one and two factors would also provide adequate fits in a Danish sample. A previous...... study using the original ECR suggested that Scandinavian samples may best be described using a five-factor solution. Our results indicated that the one- and two-factor models of the ECR-R did not fit the data well. Exploratory Factor Analysis revealed a five-factor model. Our study provides evidence...

  9. Inverse problem theory methods for data fitting and model parameter estimation

    CERN Document Server

    Tarantola, A

    2002-01-01

    Inverse Problem Theory is written for physicists, geophysicists and all scientists facing the problem of quantitative interpretation of experimental data. Although it contains a lot of mathematics, it is not intended as a mathematical book, but rather tries to explain how a method of acquisition of information can be applied to the actual world.The book provides a comprehensive, up-to-date description of the methods to be used for fitting experimental data, or to estimate model parameters, and to unify these methods into the Inverse Problem Theory. The first part of the book deals wi

  10. GOSSIP: SED fitting code

    Science.gov (United States)

    Franzetti, Paolo; Scodeggio, Marco

    2012-10-01

    GOSSIP fits the electromagnetic emission of an object (the SED, Spectral Energy Distribution) against synthetic models to find the simulated one that best reproduces the observed data. It builds up the observed SED of an object (or a large sample of objects) by combining magnitudes in different bands and, when available, a spectrum; it then performs a chi-square minimization fitting procedure against a set of synthetic models. The fitting results are used to estimate a number of physical parameters, such as the star formation history, absolute magnitudes, and stellar mass, together with their probability distribution functions.
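The chi-square grid search at the heart of this kind of SED fitting can be sketched as follows (the band fluxes and model names are invented for illustration; this is not GOSSIP's actual interface):

```python
def chi_square(obs, err, model):
    """Chi-square misfit between observed and synthetic fluxes."""
    return sum(((o - m) / e) ** 2 for o, m, e in zip(obs, model, err))

# Invented observed SED (band fluxes, arbitrary units) and model grid
obs = [1.0, 2.1, 3.9, 2.0]
err = [0.1, 0.2, 0.3, 0.2]
grid = {
    "young_burst": [1.0, 2.0, 4.0, 2.0],
    "old_passive": [2.0, 2.0, 2.0, 2.0],
}
# The best-fitting model minimizes chi-square over the grid
best_model = min(grid, key=lambda name: chi_square(obs, err, grid[name]))
```

The distribution of chi-square values over the whole grid, not just the minimum, is what yields the probability distribution functions of the derived parameters.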

  11. Are Fit Indices Biased in Favor of Bi-Factor Models in Cognitive Ability Research?: A Comparison of Fit in Correlated Factors, Higher-Order, and Bi-Factor Models via Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Grant B. Morgan

    2015-02-01

    Full Text Available Bi-factor confirmatory factor models have been influential in research on cognitive abilities because they often better fit the data than correlated factors and higher-order models. They also instantiate a perspective that differs from that offered by other models. Motivated by previous work that hypothesized an inherent statistical bias of fit indices favoring the bi-factor model, we compared the fit of correlated factors, higher-order, and bi-factor models via Monte Carlo methods. When data were sampled from a true bi-factor structure, each of the approximate fit indices was more likely than not to identify the bi-factor solution as the best fitting. When samples were selected from a true multiple correlated factors structure, approximate fit indices were more likely overall to identify the correlated factors solution as the best fitting. In contrast, when samples were generated from a true higher-order structure, approximate fit indices tended to identify the bi-factor solution as best fitting. There was extensive overlap of fit values across the models regardless of true structure. Although one model may fit a given dataset best relative to the other models, each of the models tended to fit the data well in absolute terms. Given this variability, models must also be judged on substantive and conceptual grounds.

  12. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability, and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to experimental data. We have developed an environment that distributes each run of the parameter estimation algorithm to a different computational resource. The key feature of the implementation is a relational database that allows the user to swap candidate solutions among the working nodes during the computation. A comparison of the distributed implementation with the parallel one showed that the presented approach enables faster and better parameter estimation of systems biology models.

  13. Slope and Line of Best Fit: A Transfer of Knowledge Case Study

    Science.gov (United States)

    Nagle, Courtney; Casey, Stephanie; Moore-Russo, Deborah

    2017-01-01

    This paper brings together research on slope from mathematics education and research on line of best fit from statistics education by considering what knowledge of slope students transfer to a novel task involving determining the placement of an informal line of best fit. This study focuses on two students who transitioned from placing inaccurate…

  14. Linear least squares compartmental-model-independent parameter identification in PET

    International Nuclear Information System (INIS)

    Thie, J.A.; Smith, G.T.; Hubner, K.F.

    1997-01-01

    A simplified approach involving linear-regression straight-line parameter fitting of dynamic scan data is developed for both specific and nonspecific models. Where compartmental-model topologies apply, the measured activity may be expressed in terms of its integrals, plasma activity, and plasma integrals, all in a linear expression with macroparameters as coefficients. Multiple linear regression, as in spreadsheet software, determines the parameters giving the best fit to the data. Positron emission tomography (PET)-acquired gray-matter images in a dynamic scan are analyzed both by this method and by traditional iterative nonlinear least squares, using both patient and simulated data. The regression and traditional methods are in expected agreement. Monte Carlo simulations evaluate parameter standard deviations due to data noise, along with much smaller noise-induced biases. Unique straight-line graphical displays permit visualizing the influence of the data on the various macroparameters as changes in slopes. The advantages of regression fitting are simplicity, speed, ease of implementation in spreadsheet software, avoidance of the risks of convergence failures or false solutions in iterative least squares, and the various visualizations of the uptake process provided by straight-line graphical displays. Multiparameter model-independent analyses of less well understood systems are also made possible
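The multiple-linear-regression step that spreadsheet software performs internally amounts to solving the normal equations; a self-contained sketch with a toy "macroparameter" model (the data are invented):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Ordinary least squares via the normal equations X^T X b = X^T y."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][c] for i in range(n)) for c in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    return solve(XtX, Xty)

# Toy linear model y = 2*x1 + 3*x2 + 1 (last column is the intercept)
X = [[1, 0, 1], [0, 1, 1], [1, 1, 1], [2, 1, 1], [1, 2, 1]]
y = [3, 4, 6, 8, 9]
beta = ols(X, y)
```

In the paper's setting the columns of X would hold the activity integrals and plasma terms, and `beta` the macroparameters.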

  15. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    Energy Technology Data Exchange (ETDEWEB)

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu [Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States)

    2016-02-15

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
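In the simplest viscosity-based view, calibrating the VHS temperature exponent reduces to a log-log straight-line fit of mu(T) = mu_ref * (T/T_ref)**omega. The sketch below uses synthetic viscosity data, not the paper's ab initio collision integrals:

```python
import math

def fit_omega(T, mu, T_ref):
    """Least-squares fit of the exponent omega in mu(T) = mu_ref*(T/T_ref)**omega,
    performed in log-log space where the model becomes a straight line."""
    x = [math.log(t / T_ref) for t in T]
    y = [math.log(m) for m in mu]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    omega = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    mu_ref = math.exp(ybar - omega * xbar)
    return omega, mu_ref

# Synthetic viscosity data generated with omega = 0.74, mu_ref = 1.8e-5 Pa*s
T = [500.0, 1000.0, 2000.0, 5000.0]
mu = [1.8e-5 * (t / 273.0) ** 0.74 for t in T]
omega, mu_ref = fit_omega(T, mu, 273.0)
```

The paper's calibration is considerably more involved (species-pair-specific VHS/VSS parameters matched against collision integrals), but the exponent-fitting idea is the same.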

  16. Fuzzy Analytic Hierarchy Process-based Chinese Resident Best Fitness Behavior Method Research.

    Science.gov (United States)

    Wang, Dapeng; Zhang, Lan

    2015-01-01

    With the rapid development of the Chinese economy, science, and technology, people's pursuit of health has become more and more intense, and Chinese resident sports fitness activities have developed rapidly. However, different fitness events differ in popularity and in their effects on body energy consumption. On this basis, the paper studies fitness behaviors and derives an exercise guide for Chinese residents' sports fitness behaviors, which provides guidance for advancing the implementation of the national fitness plan and making resident fitness more scientific. Starting from the perspective of energy consumption, the study mainly adopts an empirical method: it determines the energy consumption of residents' favorite sports fitness events by observing the energy consumption of various fitness behaviors, and applies the fuzzy analytic hierarchy process to evaluate seven fitness events: bicycle riding, shadowboxing practice, swimming, rope skipping, jogging, running, and aerobics. By calculating the memberships of the fuzzy rating model and comparing their magnitudes, it identifies the fitness behaviors that are most helpful to resident health, most effective, and most popular. It concludes that swimming is the best exercise mode, with the highest membership; the memberships of running, rope skipping, and shadowboxing practice are also relatively high. Residents should combine several of these fitness events according to their physical and living conditions in order to better achieve the purpose of fitness exercise.
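The final ranking step, aggregating per-criterion memberships into one score per fitness event, can be sketched as a weighted sum. This is a simplification of the paper's fuzzy AHP procedure, and the criteria, weights, and membership values below are invented:

```python
def rank_by_membership(scores, weights):
    """Weighted-sum aggregation of per-criterion memberships; returns the
    top-ranked alternative together with all aggregated memberships."""
    agg = {alt: sum(weights[c] * m[c] for c in weights)
           for alt, m in scores.items()}
    return max(agg, key=agg.get), agg

# Invented criteria, weights, and memberships for three of the seven events
weights = {"energy_use": 0.6, "popularity": 0.4}
scores = {
    "swimming":      {"energy_use": 0.9, "popularity": 0.8},
    "jogging":       {"energy_use": 0.6, "popularity": 0.9},
    "rope_skipping": {"energy_use": 0.8, "popularity": 0.5},
}
best_event, agg = rank_by_membership(scores, weights)
```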

  17. ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction

    International Nuclear Information System (INIS)

    Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.

    2015-01-01

    An appropriate maintenance policy is essential to reduce the expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is the ability to predict the reliability of the systems under study, based on a well-fitted model. In this paper, the Arithmetic Reduction of Age and Arithmetic Reduction of Intensity classes of models are explored. Likelihood functions for these models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed, considering models with different memories. The parameters, namely the shape and scale of the Power Law Process and the repair efficiency, were estimated for the best-fitted model. Estimation of the model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results are valuable information for the mining company and can be used to support decision making regarding preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks
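For the Power Law Process component mentioned above, the shape and scale have closed-form maximum-likelihood estimates under time truncation (the standard Crow-AMSAA formulas; the failure times below are illustrative, not the paper's truck data):

```python
import math

def plp_mle(times, T):
    """Crow-AMSAA MLEs for a Power Law Process observed on (0, T]:
    intensity lambda(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    n = len(times)
    beta = n / sum(math.log(T / t) for t in times)
    eta = T / n ** (1.0 / beta)
    return beta, eta

# Illustrative failure times (hours) over a 100-hour observation window
beta, eta = plp_mle([10.0, 30.0, 60.0, 90.0], 100.0)
```

A shape beta < 1 indicates reliability growth (failures arriving more slowly over time), while beta > 1 indicates deterioration; the fitted intensity can then be integrated to predict the expected number of future failures.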

  18. THE HERSCHEL ORION PROTOSTAR SURVEY: SPECTRAL ENERGY DISTRIBUTIONS AND FITS USING A GRID OF PROTOSTELLAR MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Furlan, E. [Infrared Processing and Analysis Center, California Institute of Technology, 770 S. Wilson Ave., Pasadena, CA 91125 (United States); Fischer, W. J. [Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771 (United States); Ali, B. [Space Science Institute, 4750 Walnut Street, Boulder, CO 80301 (United States); Stutz, A. M. [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Stanke, T. [ESO, Karl-Schwarzschild-Strasse 2, D-85748 Garching bei München (Germany); Tobin, J. J. [National Radio Astronomy Observatory, Charlottesville, VA 22903 (United States); Megeath, S. T.; Booker, J. [Ritter Astrophysical Research Center, Department of Physics and Astronomy, University of Toledo, 2801 W. Bancroft Street, Toledo, OH 43606 (United States); Osorio, M. [Instituto de Astrofísica de Andalucía, CSIC, Camino Bajo de Huétor 50, E-18008 Granada (Spain); Hartmann, L.; Calvet, N. [Department of Astronomy, University of Michigan, 500 Church Street, Ann Arbor, MI 48109 (United States); Poteet, C. A. [New York Center for Astrobiology, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY 12180 (United States); Manoj, P. [Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India); Watson, D. M. [Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627 (United States); Allen, L., E-mail: furlan@ipac.caltech.edu [National Optical Astronomy Observatory, 950 N. Cherry Avenue, Tucson, AZ 85719 (United States)

    2016-05-01

    We present key results from the Herschel Orion Protostar Survey: spectral energy distributions (SEDs) and model fits of 330 young stellar objects, predominantly protostars, in the Orion molecular clouds. This is the largest sample of protostars studied in a single, nearby star formation complex. With near-infrared photometry from 2MASS, mid- and far-infrared data from Spitzer and Herschel, and submillimeter photometry from APEX, our SEDs cover 1.2–870 μm and sample the peak of the protostellar envelope emission at ∼100 μm. Using mid-IR spectral indices and bolometric temperatures, we classify our sample into 92 Class 0 protostars, 125 Class I protostars, 102 flat-spectrum sources, and 11 Class II pre-main-sequence stars. We implement a simple protostellar model (including a disk in an infalling envelope with outflow cavities) to generate a grid of 30,400 model SEDs and use it to determine the best-fit model parameters for each protostar. We argue that far-IR data are essential for accurate constraints on protostellar envelope properties. We find that most protostars, and in particular the flat-spectrum sources, are well fit. The median envelope density and median inclination angle decrease from Class 0 to Class I to flat-spectrum protostars, despite the broad range in best-fit parameters in each of the three categories. We also discuss degeneracies in our model parameters. Our results confirm that the different protostellar classes generally correspond to an evolutionary sequence with a decreasing envelope infall rate, but the inclination angle also plays a role in the appearance, and thus interpretation, of the SEDs.

  19. CRAPONE, Optical Model Potential Fit of Neutron Scattering Data

    International Nuclear Information System (INIS)

    Fabbri, F.; Fratamico, G.; Reffo, G.

    2004-01-01

    1 - Description of problem or function: Automatic search for local and non-local optical potential parameters for neutrons. Total, elastic, and differential elastic cross sections, l=0 and l=1 strength functions, and the scattering length can be considered. 2 - Method of solution: A fitting procedure is applied to different sets of experimental data depending on the local or non-local approximation chosen. In the non-local approximation the fitting procedure can be performed simultaneously over the whole energy range. The best fit is obtained when a set of parameters is found for which chi-square is at its minimum. The solution of the system of equations is obtained by diagonalization of the matrix according to the Jacobi method

  20. Multi-objective genetic algorithm parameter estimation in a reduced nuclear reactor model

    Energy Technology Data Exchange (ETDEWEB)

    Marseguerra, M.; Zio, E.; Canetta, R. [Polytechnic of Milan, Dept. of Nuclear Engineering, Milano (Italy)

    2005-07-01

    The fast increase in computing power has rendered, and will continue to render, more and more feasible the incorporation of dynamics in the safety and reliability models of complex engineering systems. In particular, the Monte Carlo simulation framework offers a natural environment for estimating the reliability of systems with dynamic features. However, the time-integration of the dynamic processes may render the Monte Carlo simulation quite burdensome so that it becomes mandatory to resort to validated, simplified models of process evolution. Such models are typically based on lumped effective parameters whose values need to be suitably estimated so as to best fit to the available plant data. In this paper we propose a multi-objective genetic algorithm approach for the estimation of the effective parameters of a simplified model of nuclear reactor dynamics. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest to the actual evolution profiles. A case study is reported in which the real reactor is simulated by the QUAndry based Reactor Kinetics (Quark) code available from the Nuclear Energy Agency and the simplified model is based on the point kinetics approximation to describe the neutron balance in the core and on thermal equilibrium relations to describe the energy exchange between the different loops. (authors)

  1. Multi-objective genetic algorithm parameter estimation in a reduced nuclear reactor model

    International Nuclear Information System (INIS)

    Marseguerra, M.; Zio, E.; Canetta, R.

    2005-01-01

    The fast increase in computing power has rendered, and will continue to render, more and more feasible the incorporation of dynamics in the safety and reliability models of complex engineering systems. In particular, the Monte Carlo simulation framework offers a natural environment for estimating the reliability of systems with dynamic features. However, the time-integration of the dynamic processes may render the Monte Carlo simulation quite burdensome so that it becomes mandatory to resort to validated, simplified models of process evolution. Such models are typically based on lumped effective parameters whose values need to be suitably estimated so as to best fit to the available plant data. In this paper we propose a multi-objective genetic algorithm approach for the estimation of the effective parameters of a simplified model of nuclear reactor dynamics. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest to the actual evolution profiles. A case study is reported in which the real reactor is simulated by the QUAndry based Reactor Kinetics (Quark) code available from the Nuclear Energy Agency and the simplified model is based on the point kinetics approximation to describe the neutron balance in the core and on thermal equilibrium relations to describe the energy exchange between the different loops. (authors)

  2. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my, E-mail: balbir@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: samsul-ariffin@petronas.com.my, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R{sup 2}. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  3. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  4. Curve fitting methods for solar radiation data modeling

    International Nuclear Information System (INIS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-01-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods
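The two goodness-of-fit statistics used to compare the candidate curve fits are straightforward to compute (the observations and fitted values below are made up, not the paper's radiation series):

```python
import math

def rmse(y, yhat):
    """Root mean square error between observations and fitted values."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def r_squared(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

# Made-up observations and fitted values
y = [1.0, 2.0, 3.0, 4.0]
yhat = [1.1, 1.9, 3.2, 3.8]
```

Lower RMSE and R2 closer to 1 both indicate a better fit; using the two together guards against scale effects that RMSE alone would hide.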

  5. A Data-Driven Method for Selecting Optimal Models Based on Graphical Visualisation of Differences in Sequentially Fitted ROC Model Parameters

    Directory of Open Access Journals (Sweden)

    K S Mwitondi

    2013-05-01

    Full Text Available Differences in modelling techniques and model performance assessments typically impinge on the quality of knowledge extraction from data. We propose an algorithm for determining optimal patterns in data by separately training and testing three decision tree models in the Pima Indians Diabetes and the Bupa Liver Disorders datasets. Model performance is assessed using ROC curves and the Youden Index. Moving differences between sequential fitted parameters are then extracted, and their respective probability density estimations are used to track their variability using an iterative graphical data visualisation technique developed for this purpose. Our results show that the proposed strategy separates the groups more robustly than the plain ROC/Youden approach, eliminates obscurity, and minimizes over-fitting. Further, the algorithm can easily be understood by non-specialists and demonstrates multi-disciplinary compliance.
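The Youden index assessment used above can be sketched as a threshold sweep over predicted scores (the scores and labels below are invented, not the Pima or Bupa data):

```python
def youden_scan(scores, labels):
    """Sweep candidate thresholds over the unique scores and return the
    (threshold, J) pair maximizing Youden's J = sensitivity + specificity - 1."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, l in zip(scores, labels) if s >= t and l == 1)
        tn = sum(1 for s, l in zip(scores, labels) if s < t and l == 0)
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Invented classifier scores with binary labels (1 = condition present)
threshold, j = youden_scan([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1])
```

J ranges from 0 (no discrimination) to 1 (perfect separation); the proposed algorithm goes further by tracking differences in sequentially fitted parameters rather than relying on J alone.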

  6. Method for fitting crystal field parameters and the energy level fitting for Yb3+ in crystal SC2O3

    International Nuclear Information System (INIS)

    Qing-Li, Zhang; Kai-Jie, Ning; Jin, Xiao; Li-Hua, Ding; Wen-Long, Zhou; Wen-Peng, Liu; Shao-Tang, Yin; Hai-He, Jiang

    2010-01-01

    A method to compute the numerical derivatives of the eigenvalues of a parameterized crystal field Hamiltonian matrix is given. Based on these numerical derivatives, general iterative methods such as Levenberg–Marquardt and Newton's method can be used to determine crystal field parameters by fitting to experimental energy levels. With the numerical eigenvalue derivatives, a detailed iteration algorithm to compute crystal field parameters by fitting experimental energy levels is also described. This method is used to compute the crystal field parameters of Yb 3+ in Sc 2 O 3 crystal, which was prepared by a co-precipitation method and whose structure was refined by the Rietveld method. By fitting the parameters of a simple overlap model of the crystal field, the results show that the new method can fit the crystal field energy splitting with fast convergence and good stability. (condensed matter: electronic structure, electrical, magnetic, and optical properties)
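The core idea, differentiating eigenvalues numerically so that a gradient-based fitter can run, is easy to illustrate on a toy symmetric 2x2 "Hamiltonian" H(p) = [[p, 1], [1, -p]], whose eigenvalues ±sqrt(p²+1) are known in closed form (this matrix is purely illustrative, not the Yb3+ Hamiltonian):

```python
import math

def eig2(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]], ascending."""
    tr, det = a + c, a * c - b * b
    s = math.sqrt(tr * tr / 4.0 - det)
    return (tr / 2.0 - s, tr / 2.0 + s)

def d_eigs(p, h=1e-6):
    """Central-difference derivative of each eigenvalue of the toy
    Hamiltonian H(p) = [[p, 1], [1, -p]] with respect to p."""
    lo = eig2(p - h, 1.0, -(p - h))
    hi = eig2(p + h, 1.0, -(p + h))
    return [(u - l) / (2.0 * h) for l, u in zip(lo, hi)]

# Analytically the eigenvalues are -sqrt(p*p+1) and +sqrt(p*p+1), so the
# derivatives at p = 1 should be -1/sqrt(2) and +1/sqrt(2)
d = d_eigs(1.0)
```

With such derivatives in hand, the Jacobian of predicted energy levels with respect to the crystal field parameters can be assembled and passed to Levenberg-Marquardt or Newton iterations.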

  7. Overhead-Aware-Best-Fit (OABF) Resource Allocation Algorithm for Minimizing VM Launching Overhead

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Hao [IIT; Garzoglio, Gabriele [Fermilab; Ren, Shangping [IIT, Chicago; Timm, Steven [Fermilab; Noh, Seo Young [KISTI, Daejeon

    2014-11-11

    FermiCloud is a private cloud developed at Fermi National Accelerator Laboratory to provide elastic and on-demand resources for different scientific research experiments. The design goal of FermiCloud is to automatically allocate resources for different scientific applications so that the QoS required by these applications is met and the operational cost of FermiCloud is minimized. Our earlier research shows that VM launching overhead has large variations. If such variations are not taken into consideration when making resource allocation decisions, they may lead to poor performance and wasted resources. In this paper, we show how a VM launching overhead reference model may be used to minimize VM launching overhead. In particular, we first present a training algorithm that automatically tunes a given reference model to accurately reflect the FermiCloud environment. Based on the tuned reference model for virtual machine launching overhead, we develop an overhead-aware-best-fit resource allocation algorithm that decides where and when to allocate resources so that the average virtual machine launching overhead is minimized. The experimental results indicate that the developed overhead-aware-best-fit resource allocation algorithm can significantly improve VM launching times when a large number of VMs are launched simultaneously.
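A best-fit placement rule extended with a launch-overhead tie-breaker can be sketched as follows. The host capacities and overhead predictions are invented; in the paper, the overhead values come from a reference model tuned on FermiCloud measurements:

```python
def best_fit_host(capacity, demand, predicted_overhead):
    """Classic best-fit placement with an overhead-aware tie-breaker:
    among hosts that can take the VM, choose the one with the smallest
    residual capacity, breaking ties by the predicted launching overhead."""
    feasible = [h for h in capacity if capacity[h] >= demand]
    if not feasible:
        return None
    return min(feasible,
               key=lambda h: (capacity[h] - demand, predicted_overhead[h]))

# Invented host capacities (free cores) and predicted launch delays (seconds)
capacity = {"hostA": 8, "hostB": 4, "hostC": 2}
predicted_overhead = {"hostA": 5.0, "hostB": 1.5, "hostC": 9.0}
chosen = best_fit_host(capacity, demand=3,
                       predicted_overhead=predicted_overhead)
```

Here the VM needing 3 cores lands on hostB: it is the tightest feasible fit, and its predicted launch delay is also the lowest.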

  8. Lambert W-function based exact representation for double diode model of solar cells: Comparison on fitness and parameter extraction

    International Nuclear Information System (INIS)

    Gao, Xiankun; Cui, Yan; Hu, Jianjun; Xu, Guangyin; Yu, Yongchang

    2016-01-01

    Highlights: • Lambert W-function based exact representation (LBER) is presented for the double diode model (DDM). • Fitness difference between LBER and DDM is verified by reported parameter values. • The proposed LBER can better represent the I–V and P–V characteristics of solar cells. • Parameter extraction difference between LBER and DDM is validated by two algorithms. • The parameter values extracted from LBER are more accurate than those from DDM. - Abstract: Accurate modeling and parameter extraction of solar cells play an important role in the simulation and optimization of PV systems. This paper presents a Lambert W-function based exact representation (LBER) for the traditional double diode model (DDM) of solar cells, and then compares their fitness and parameter extraction performance. Unlike existing works, the proposed LBER is rigorously derived from the DDM, and in LBER the coefficients of the Lambert W-function are not extra parameters to be extracted or arbitrary scalars but the vectors of terminal voltage and current of solar cells. The fitness difference between LBER and DDM is objectively validated by the reported parameter values and experimental I–V data of a solar cell and four solar modules from different technologies. The comparison results indicate that under the same parameter values, the proposed LBER can better represent the I–V and P–V characteristics of solar cells and provide a closer representation of the actual maximum power points of all module types. Two different algorithms are used to compare the parameter extraction performance of LBER and DDM. One is our restart-based bound constrained Nelder-Mead (rbcNM) algorithm implemented in Matlab, and the other is the reported Rcr-IJADE algorithm executed in Visual Studio. The comparison results reveal that the parameter values extracted from LBER using the two algorithms are always more accurate and robust than those from DDM, despite being more time consuming. As an improved version of DDM, the
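For intuition, the simpler single-diode case admits the classical explicit Lambert W solution of the implicit I–V equation (the paper's LBER for the double diode model is more involved). A sketch with scipy, using made-up parameter values:

```python
import numpy as np
from scipy.special import lambertw

# Illustrative single-diode parameters (not from the paper): photocurrent Iph,
# saturation current I0, ideality n, series/shunt resistances Rs/Rsh, thermal voltage Vt.
Iph, I0, n, Rs, Rsh, Vt = 5.0, 1e-9, 1.3, 0.05, 100.0, 0.02585

def current_lambertw(V):
    """Explicit terminal current I(V) via the Lambert W function."""
    a = n * Vt * (Rs + Rsh)
    theta = (Rs * Rsh * I0 / a) * np.exp(Rsh * (Rs * (Iph + I0) + V) / a)
    return (Rsh * (Iph + I0) - V) / (Rs + Rsh) - (n * Vt / Rs) * lambertw(theta).real

V = 0.5
I = current_lambertw(V)
# Residual of the implicit single-diode equation the explicit form solves:
resid = Iph - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1) - (V + I * Rs) / Rsh - I
```

The explicit form removes the need for an inner root-finding loop at every I–V point, which is exactly what makes Lambert W representations attractive for parameter extraction.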

  9. Testing the goodness of fit of selected infiltration models on soils with different land use histories

    International Nuclear Information System (INIS)

    Mbagwu, J.S.C.

    1993-10-01

    Six infiltration models, some obtained by reformulating the fitting parameters of the classical Kostiakov (1932) and Philip (1957) equations, were investigated for their ability to describe water infiltration into highly permeable sandy soils from the Nsukka plains of SE Nigeria. The models were Kostiakov, Modified Kostiakov (A), Modified Kostiakov (B), Philip, Modified Philip (A) and Modified Philip (B). Infiltration data were obtained from double-ring infiltrometers on field plots established on a Kandic Paleustult (Nkpologu series) to investigate the effects of land use on soil properties and maize yield. The treatments were: (i) tilled-mulched (TM), (ii) tilled-unmulched (TU), (iii) untilled-mulched (UM), (iv) untilled-unmulched (UU) and (v) continuous pasture (CP). Cumulative infiltration was highest on the TM and lowest on the CP plots. All estimated model parameters obtained by the best fit of measured data differed significantly among the treatments. Based on the magnitude of R^2 values, the Kostiakov, Modified Kostiakov (A), Philip and Modified Philip (A) models provided the best predictions of cumulative infiltration as a function of time. Comparing experimental with model-predicted cumulative infiltration showed, however, that on all treatments the values predicted by the classical Kostiakov, Philip and Modified Philip (A) models deviated most from the experimental data. The other models produced values that agreed very well with the measured data. Considering the ease of determining the fitting parameters, it is proposed that on soils with high infiltration rates either the Modified Kostiakov model (I = Kt^a + Ic·t) or the Modified Philip model (I = St^(1/2) + Ic·t), where I is cumulative infiltration, K the time coefficient, t the time elapsed, 'a' the time exponent, Ic the equilibrium infiltration rate and S the soil water sorptivity, be used for routine characterization of the infiltration process. (author). 33 refs, 3 figs, 6 tabs
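The recommended Modified Kostiakov form I = Kt^a + Ic·t can be fitted with ordinary nonlinear least squares; the coefficients and "observations" below are synthetic illustrations, not values from the Nsukka plots:

```python
import numpy as np
from scipy.optimize import curve_fit

def mod_kostiakov(t, K, a, Ic):
    """Cumulative infiltration I(t) = K*t**a + Ic*t (Modified Kostiakov A)."""
    return K * t**a + Ic * t

t = np.linspace(0.1, 120.0, 40)                 # elapsed time, min (illustrative)
K_true, a_true, Ic_true = 1.8, 0.45, 0.06       # made-up "true" coefficients
I_obs = mod_kostiakov(t, K_true, a_true, Ic_true) + 0.01 * np.sin(t)  # small "noise"

popt, _ = curve_fit(mod_kostiakov, t, I_obs, p0=[1.0, 0.5, 0.1])
r2 = 1 - np.sum((I_obs - mod_kostiakov(t, *popt))**2) / np.sum((I_obs - I_obs.mean())**2)
```

The R^2 computed this way is the goodness-of-fit measure the study uses to rank the six models.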

  10. ParFit: A Python-Based Object-Oriented Program for Fitting Molecular Mechanics Parameters to ab Initio Data.

    Science.gov (United States)

    Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S; Windus, Theresa L; Dick-Perez, Marilu

    2017-03-27

    A newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit. ParFit is an open-source program available for free on GitHub ( https://github.com/fzahari/ParFit ).

  11. The fitting parameters extraction of conversion model of the low dose rate effect in bipolar devices

    International Nuclear Information System (INIS)

    Bakerenkov, Alexander

    2011-01-01

    The Enhanced Low Dose Rate Sensitivity (ELDRS) effect in bipolar devices consists in an increase of the base current degradation of NPN and PNP transistors as the dose rate is decreased. As a result of almost 20 years of study, several physical models of the effect have been developed and described in detail. Accelerated test methods based on these models are used in standards. A conversion model of the effect, which describes the inverse S-shaped dependence of excess base current on dose rate, was proposed earlier. This paper presents the problem of extracting the fitting parameters of the conversion model.
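An inverse S-shaped dose-rate dependence of the kind described above can be captured by a generic four-parameter logistic and fitted by least squares. The functional form and all numbers below are illustrative stand-ins, not the actual conversion model or its parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def excess_ib(rate, lo, hi, log_r0, p):
    """Generic logistic in dose rate: decays from `hi` (low rate) to `lo` (high rate).
    r0 = exp(log_r0) is the transition dose rate; p sets the steepness."""
    return lo + (hi - lo) / (1.0 + (rate / np.exp(log_r0)) ** p)

rates = np.logspace(-3, 2, 25)                 # dose rate grid, rad(Si)/s (illustrative)
truth = (0.2, 3.0, np.log(0.1), 1.2)           # made-up "true" parameters
data = excess_ib(rates, *truth)                # stand-in for measured excess base current

popt, _ = curve_fit(excess_ib, rates, data, p0=[0.1, 2.0, 0.0, 1.0])
```

Parameterizing the transition rate as exp(log_r0) keeps it positive during the fit, a common trick for scale-like parameters.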

  12. Using the Flipchem Photochemistry Model When Fitting Incoherent Scatter Radar Data

    Science.gov (United States)

    Reimer, A. S.; Varney, R. H.

    2017-12-01

    The north-face Resolute Bay Incoherent Scatter Radar (RISR-N) routinely images the dynamics of the polar ionosphere, providing measurements of the plasma density, electron temperature, ion temperature, and line-of-sight velocity with seconds-to-minutes time resolution. RISR-N does not directly measure ionospheric parameters, but backscattered signals, recording them as voltage samples. Using signal processing techniques, radar autocorrelation functions (ACF) are estimated from the voltage samples. A model of the signal ACF is then fitted to the estimated ACF using non-linear least-squares techniques to obtain the best-fit ionospheric parameters. The signal model, and therefore the fitted parameters, depend on the ionospheric ion composition that is used [e.g. Zettergren et al. (2010), Zou et al. (2017)]. The software used to process RISR-N ACF data includes the "flipchem" model, which is an ion photochemistry model developed by Richards [2011] that was adapted from the Field Line Interhemispheric Plasma (FLIP) model. Flipchem requires neutral densities, neutral temperatures, electron density, ion temperature, electron temperature, solar zenith angle, and F10.7 as inputs to compute ion densities, which are input to the signal model. A description of how the flipchem model is used in the RISR-N fitting software will be presented. Additionally, a statistical comparison of the fitted electron density, ion temperature, electron temperature, and velocity obtained using a flipchem ionosphere, a pure O+ ionosphere, and a Chapman O+ ionosphere will be presented. The comparison covers nearly two years of RISR-N data (April 2015 - December 2016). Richards, P. G. (2011), Reexamination of ionospheric photochemistry, J. Geophys. Res., 116, A08307, doi:10.1029/2011JA016613. Zettergren, M., Semeter, J., Burnett, B., Oliver, W., Heinselman, C., Blelly, P.-L., and Diaz, M.: Dynamic variability in F-region ionospheric composition at auroral arc boundaries, Ann. Geophys., 28, 651-664, https

  13. Models for Estimating Genetic Parameters of Milk Production Traits Using Random Regression Models in Korean Holstein Cattle

    Directory of Open Access Journals (Sweden)

    C. I. Cho

    2016-05-01

    Full Text Available The objectives of the study were to estimate genetic parameters for milk production traits of Holstein cattle using random regression models (RRMs), and to compare the goodness of fit of various RRMs with homogeneous and heterogeneous residual variances. A total of 126,980 test-day milk production records of first-parity Holstein cows between 2007 and 2014 from the Dairy Cattle Improvement Center of the National Agricultural Cooperative Federation in South Korea were used. These records included milk yield (MILK), fat yield (FAT), protein yield (PROT), and solids-not-fat yield (SNF). The statistical models included random effects of genetic and permanent environments using Legendre polynomials (LP) of the third to fifth order (L3–L5), fixed effects of herd-test day, year-season at calving, and a fixed regression for the test-day record (third to fifth order). The residual variances in the models were either homogeneous (HOM) or heterogeneous (15 classes, HET15; 60 classes, HET60). A total of nine models (3 orders of polynomials × 3 types of residual variance), including L3-HOM, L3-HET15, L3-HET60, L4-HOM, L4-HET15, L4-HET60, L5-HOM, L5-HET15, and L5-HET60, were compared using Akaike information criterion (AIC) and/or Schwarz Bayesian information criterion (BIC) statistics to identify the model(s) of best fit for their respective traits. The lowest BIC value was observed for the models L5-HET15 (MILK; PROT; SNF) and L4-HET15 (FAT), which fit the best. In general, the BIC value of the HET15 model for a particular polynomial order was lower than that of the HET60 model in most cases. This implies that the orders of LP and the types of residual variances affect the goodness of fit of the models. Also, the heterogeneity of residual variances should be considered for the test-day analysis. The heritability estimates from the best fitting models ranged from 0.08 to 0.15 for MILK, 0.06 to 0.14 for FAT, 0.08 to 0.12 for PROT, and 0.07 to 0.13 for SNF according to days in milk of first

  14. Universally sloppy parameter sensitivities in systems biology models.

    Directory of Open Access Journals (Sweden)

    Ryan N Gutenkunst

    2007-10-01

    Full Text Available Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.

  15. Universally sloppy parameter sensitivities in systems biology models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-10-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
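The "sloppy spectrum" diagnostic is easy to reproduce in miniature: for a least-squares fit of a sum of two nearly degenerate exponentials, the eigenvalues of J^T J (the Gauss–Newton approximation to the Hessian) already spread over several decades. This toy model is an illustration, not one of the systems biology models from the paper:

```python
import numpy as np

# Model y(t) = exp(-p1*t) + exp(-p2*t) with nearly redundant decay rates.
t = np.linspace(0.1, 5.0, 50)
p = np.array([1.0, 1.1])

# Jacobian of the residuals w.r.t. (p1, p2), evaluated at p.
J = np.column_stack([-t * np.exp(-p[0] * t), -t * np.exp(-p[1] * t)])

eig = np.linalg.eigvalsh(J.T @ J)           # sensitivity eigenvalues
decades = np.log10(eig.max() / eig.min())   # spread of the spectrum
```

The stiff eigendirection (p1 + p2, roughly) is well constrained by data; the sloppy one (p1 − p2) is orders of magnitude less so, which is the paper's point about collective fits constraining predictions but not individual parameters.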

  16. GEMSFITS: Code package for optimization of geochemical model parameters and inverse modeling

    International Nuclear Information System (INIS)

    Miron, George D.; Kulik, Dmitrii A.; Dmytrieva, Svitlana V.; Wagner, Thomas

    2015-01-01

    Highlights: • Tool for generating consistent parameters against various types of experiments. • Handles a large number of experimental data and parameters (is parallelized). • Has a graphical interface and can perform statistical analysis on the parameters. • Tested on fitting the standard state Gibbs free energies of aqueous Al species. • Example on fitting interaction parameters of mixing models and thermobarometry. - Abstract: GEMSFITS is a new code package for fitting internally consistent input parameters of GEM (Gibbs Energy Minimization) geochemical–thermodynamic models against various types of experimental or geochemical data, and for performing inverse modeling tasks. It consists of the gemsfit2 (parameter optimizer) and gfshell2 (graphical user interface) programs, both accessing a NoSQL database, all developed with flexibility, generality, efficiency, and user friendliness in mind. The parameter optimizer gemsfit2 includes the GEMS3K chemical speciation solver (http://gems.web.psi.ch/GEMS3K), which features a comprehensive suite of non-ideal activity and equation-of-state models of solution phases (aqueous electrolyte, gas and fluid mixtures, solid solutions, (ad)sorption). The gemsfit2 code uses the robust open-source NLopt library for parameter fitting, which provides a selection between several nonlinear optimization algorithms (global, local, gradient-based), and supports large-scale parallelization. The gemsfit2 code can also perform comprehensive statistical analysis of the fitted parameters (basic statistics, sensitivity, Monte Carlo confidence intervals), thus supporting the user with powerful tools for evaluating the quality of the fits and the physical significance of the model parameters. The gfshell2 code provides menu-driven setup of optimization options (data selection, properties to fit and their constraints, measured properties to compare with computed counterparts, and statistics). The practical utility, efficiency, and

  17. Predicting the Best Fit: A Comparison of Response Surface Models for Midazolam and Alfentanil Sedation in Procedures With Varying Stimulation.

    Science.gov (United States)

    Liou, Jing-Yang; Ting, Chien-Kun; Mandell, M Susan; Chang, Kuang-Yi; Teng, Wei-Nung; Huang, Yu-Yin; Tsou, Mei-Yung

    2016-08-01

    Selecting an effective dose of sedative drugs in combined upper and lower gastrointestinal endoscopy is complicated by varying degrees of pain stimulation. We tested the ability of 5 response surface models to predict depth of sedation after administration of midazolam and alfentanil in this complex setting. The procedure was divided into 3 phases: esophagogastroduodenoscopy (EGD), colonoscopy, and the time interval between the 2 (intersession). The depth of sedation in 33 adult patients was monitored by Observer Assessment of Alertness/Sedation scores. A total of 218 combinations of midazolam and alfentanil effect-site concentrations derived from pharmacokinetic models were used to test 5 response surface models in each of the 3 phases of endoscopy. Model fit was evaluated with objective function value, corrected Akaike Information Criterion (AICc), and Spearman ranked correlation. A model was arbitrarily defined as accurate if the predicted probability met a predefined criterion. The effect-site concentrations tested ranged from 1 to 76 ng/mL and from 5 to 80 ng/mL for midazolam and alfentanil, respectively. Midazolam and alfentanil had synergistic effects in colonoscopy and EGD, but additivity was observed in the intersession group. Adequate prediction rates were 84% to 85% in the intersession group, 84% to 88% during colonoscopy, and 82% to 87% during EGD. The reduced Greco and fixed-C50 (the alfentanil concentration required for 50% of the patients to achieve the targeted response) Hierarchy models performed better, with comparable predictive strength. The reduced Greco model had the lowest AICc with strong correlation in all 3 phases of endoscopy. Dynamic, rather than fixed, γ and γalf in the Hierarchy model improved model fit. The reduced Greco model had the lowest objective function value and AICc and thus the best fit. This model was reliable with acceptable predictive ability based on adequate clinical correlation. We suggest that this model has practical clinical value for patients undergoing procedures

  18. Cosmological parameter estimation using particle swarm optimization

    Science.gov (United States)

    Prasad, Jayanti; Souradeep, Tarun

    2012-06-01

    Constraining theoretical models, which are represented by a set of parameters, using observational data is an important exercise in cosmology. In the Bayesian framework this is done by finding the probability distribution of parameters which best fits the observational data using sampling-based methods like Markov chain Monte Carlo (MCMC). It has been argued that MCMC may not be the best option in certain problems in which the target function (likelihood) has local maxima or very high dimensionality. Apart from this, there may be cases in which we are mainly interested in finding the point in parameter space at which the probability distribution attains its largest value. In this situation the problem of parameter estimation becomes an optimization problem. In the present work we show that particle swarm optimization (PSO), which is an artificial-intelligence-inspired population-based search procedure, can also be used for cosmological parameter estimation. Using PSO we were able to recover the best-fit Λ cold dark matter (LCDM) model parameters from the WMAP seven-year data without using any prior guess value or any other property of the probability distribution of parameters, like the standard deviation, as is common in MCMC. We also report the results of an exercise in which we consider a binned primordial power spectrum (to increase the dimensionality of the problem) and find that a power spectrum with features gives a lower chi-square than the standard power law. Since PSO does not sample the likelihood surface in a fair way, we follow a fitting procedure to find the spread of the likelihood function around the best-fit point.
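A global-best PSO of the kind used above fits in a few lines; the inertia and acceleration constants below are textbook defaults, and the quadratic "log-likelihood" is a toy stand-in for a real cosmological likelihood:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, seed=1):
    """Maximize f over box `bounds` with global-best particle swarm optimization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))     # positions
    v = np.zeros_like(x)                                # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmax(pbest_val)].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.72 * v + 1.49 * r1 * (pbest - x) + 1.49 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmax(pbest_val)].copy()
    return g, pbest_val.max()

# Toy "likelihood" peaked at (0.3, -0.7); no prior guess is supplied to the swarm.
loglike = lambda p: -np.sum((p - np.array([0.3, -0.7]))**2)
best, best_val = pso(loglike, bounds=[(-2, 2), (-2, 2)])
```

As the abstract notes, the swarm concentrates near the maximum rather than sampling the surface fairly, so uncertainty estimates need a separate fitting step around the best-fit point.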

  19. Estimating the best laser parameters for skin cancer treatment using finite element models

    International Nuclear Information System (INIS)

    El-Berry, A.A.; El-Berry, A.A.; Solouma, N.H.; Hassan, F.; Ahmed, A.S.

    2010-01-01

    Skin cancer is an intimidating disease which necessitates a non-invasive treatment. Laser-induced thermotherapy is one of the recent non-invasive modalities for superficial lesion treatment. Despite its promising effect, this method still needs more effort to be quantified. Many studies are being conducted for this purpose. Modeling and simulating the process of skin lesion treatment by laser can lead to the best quantification of the treatment protocol. In this paper, we provide finite element models for the treatment of skin cancer using the laser thermal effect. A comparison between the effects of using different laser parameters of a diode laser (800 nm) and an Nd:YAG laser (1064 nm) revealed that the Nd:YAG laser can be used effectively for skin cancer treatment, especially at high intensities of about 10^6 W/m^2.

  20. Signal detection theory and vestibular perception: III. Estimating unbiased fit parameters for psychometric functions.

    Science.gov (United States)

    Chaudhuri, Shomesh E; Merfeld, Daniel M

    2013-03-01

    Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
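The plain maximum-likelihood fit that exhibits the bias discussed above can be set up directly. A minimal simulated example fitting a cumulative-Gaussian psychometric function, with made-up parameters, non-adaptive sampling, and no bias-reduction term:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulate yes/no responses: P("yes") = Phi((x - mu)/sigma).
rng = np.random.default_rng(3)
x = np.repeat(np.linspace(-3, 3, 13), 40)            # 13 stimulus levels, 40 trials each
mu_true, sigma_true = 0.2, 1.0
y = rng.random(x.size) < norm.cdf((x - mu_true) / sigma_true)

def nll(theta):
    """Negative log-likelihood; sigma parameterized as exp(log_sigma) to stay positive."""
    mu, log_sigma = theta
    p = norm.cdf((x - mu) / np.exp(log_sigma))
    p = np.clip(p, 1e-12, 1 - 1e-12)                 # guard log(0)
    return -np.sum(np.where(y, np.log(p), np.log(1 - p)))

fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
```

The Nelder-Mead simplex here is the same numeric ML machinery the paper extends with its bias-reduction term for adaptively sampled ("staircase") data.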

  1. Analytical fitting model for rough-surface BRDF.

    Science.gov (United States)

    Renhorn, Ingmar G E; Boreman, Glenn D

    2008-08-18

    A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.

  2. The influence of model parameters on catchment-response

    International Nuclear Information System (INIS)

    Shah, S.M.S.; Gabriel, H.F.; Khan, A.A.

    2002-01-01

    This paper deals with the study of the influence of conceptual rainfall-runoff model parameters on catchment response (runoff). A conceptual modified watershed yield model is employed to study the effects of model parameters on catchment response, i.e. runoff. The model is calibrated using a manual parameter-fitting approach, also known as trial-and-error parameter fitting. In all, there are twenty-one (21) parameters that control the functioning of the model. A lumped parametric approach is used. The detailed analysis was performed on the Ling River near Kahuta, which has a catchment area of 56 sq. miles. The model includes physical parameters like GWSM, PETS, PGWRO, etc., fitting coefficients like CINF, CGWS, etc., and initial estimates of the surface-water and groundwater storages, i.e. srosp and gwsp. Sensitivity analysis offers a good way to establish, without repetitious computations, the proper weight and consideration that must be given when each influencing factor is evaluated. Sensitivity analysis was performed to evaluate the influence of model parameters on runoff. The sensitivity and relative contributions of the model parameters influencing catchment response are studied. (author)
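A one-at-a-time sensitivity analysis of the kind described above can be sketched as follows: perturb each parameter by ±10% and record the normalized change in runoff. The "catchment" model below is a hypothetical toy stand-in (the parameter names echo the abstract, but the equations are not the modified watershed yield model):

```python
import numpy as np

def runoff(params):
    """Toy water balance: runoff = rain - infiltration - evapotranspiration."""
    gwsm, cinf, petc = params          # hypothetical storage / infiltration / PET coefficients
    rain = 100.0
    infiltration = cinf * min(rain, gwsm)
    et = petc * rain
    return max(rain - infiltration - et, 0.0)

base = np.array([80.0, 0.4, 0.3])
q0 = runoff(base)

sensitivity = {}
for i, name in enumerate(["GWSM", "CINF", "PETC"]):
    up, dn = base.copy(), base.copy()
    up[i] *= 1.1
    dn[i] *= 0.9
    # Relative output change per relative (20%) input change.
    sensitivity[name] = (runoff(up) - runoff(dn)) / (0.2 * q0)
```

Ranking the parameters by |sensitivity| identifies which of the many calibration knobs deserve the most care in manual trial-and-error fitting.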

  3. Evaluation of some infiltration models and hydraulic parameters

    International Nuclear Information System (INIS)

    Haghighi, F.; Gorji, M.; Shorafa, M.; Sarmadian, F.; Mohammadi, M. H.

    2010-01-01

    The evaluation of infiltration characteristics and some parameters of infiltration models such as sorptivity and final steady infiltration rate in soils are important in agriculture. The aim of this study was to evaluate some of the most common models used to estimate final soil infiltration rate. The equality of the final infiltration rate with the saturated hydraulic conductivity (Ks) was also tested. Moreover, values of the sorptivity estimated from the Philip model were compared to estimates by selected pedotransfer functions (PTFs). The infiltration experiments used the double-ring method on soils with two different land uses in the Taleghan watershed of Tehran province, Iran, from September to October 2007. The infiltration models of Kostiakov-Lewis, Philip two-term and Horton were fitted to the observed infiltration data. Some parameters of the models and the coefficient of determination goodness of fit were estimated using MATLAB software. The results showed that, based on comparing measured and model-estimated infiltration rates using root mean squared error (RMSE), Horton's model gave the best prediction of final infiltration rate in the experimental area. Laboratory-measured Ks values were significantly higher than the final infiltration rates estimated from the selected models. The estimated final infiltration rate was not equal to laboratory-measured Ks values in the study area. Moreover, the sorptivity factor estimated by the Philip model was significantly different from those estimated by the selected PTFs. It is suggested that the applicability of PTFs is limited to specific, similar conditions. (Author) 37 refs.
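Horton's model f(t) = fc + (f0 − fc)·exp(−kt) and the RMSE comparison used in the study can be sketched directly; the parameter values and "observations" below are synthetic illustrations, not the Taleghan data:

```python
import numpy as np
from scipy.optimize import curve_fit

def horton(t, f0, fc, k):
    """Infiltration rate decaying from initial f0 to final (steady) fc."""
    return fc + (f0 - fc) * np.exp(-k * t)

t = np.linspace(0, 180, 30)                                   # min (illustrative)
f_obs = horton(t, 60.0, 12.0, 0.05) + 0.2 * np.cos(t / 7.0)   # mm/h + small "noise"

popt, _ = curve_fit(horton, t, f_obs, p0=[50.0, 10.0, 0.1])
rmse = np.sqrt(np.mean((f_obs - horton(t, *popt))**2))
```

The fitted fc (popt[1]) is the model's estimate of the final infiltration rate, the quantity the study compares against laboratory-measured Ks.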

  4. Item level diagnostics and model - data fit in item response theory ...

    African Journals Online (AJOL)

    Item response theory (IRT) is a framework for modeling and analyzing item response data. Item-level modeling gives IRT advantages over classical test theory. The fit of an item score pattern to item response theory (IRT) models is a necessary condition that must be assessed for further use of the items and the models that best fit ...

  5. A software for parameter estimation in dynamic models

    Directory of Open Access Journals (Sweden)

    M. Yuceer

    2008-12-01

    Full Text Available A common problem in dynamic systems is to determine parameters in an equation used to represent experimental data. The goal is to determine the values of model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software packages lack generality, while others are not easy to use. A user-interactive parameter estimation software was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software (PARES has been developed in the MATLAB environment. When tested with extensive example problems from the literature, the suggested approach is proven to provide good agreement between predicted and observed data within relatively little computing time and few iterations.
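The integration-based approach described above pairs an ODE integrator with a least-squares optimizer: every residual evaluation integrates the model. A minimal sketch (in Python rather than MATLAB) for a hypothetical first-order kinetic model A → B with rate k:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def simulate(k, t_eval):
    """Integrate dC/dt = -k*C, C(0)=1, and return C at the measurement times."""
    sol = solve_ivp(lambda t, y: [-k * y[0]], (0, t_eval[-1]), [1.0],
                    t_eval=t_eval, rtol=1e-8)
    return sol.y[0]

t_obs = np.linspace(0.2, 5.0, 12)         # measurement times (illustrative)
k_true = 0.8
c_obs = simulate(k_true, t_obs)           # stand-in for experimental data

# Each residual call re-integrates the ODE -- the "integration based" part.
fit = least_squares(lambda p: simulate(p[0], t_obs) - c_obs, x0=[0.3])
k_hat = fit.x[0]
```

Wrapping the integrator inside the objective avoids discretizing the model equations by hand, at the cost of one ODE solve per residual evaluation.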

  6. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    DEFF Research Database (Denmark)

    Bolker, B.M.; Gardner, B.; Maunder, M.

    2013-01-01

    Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield...

  7. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    Science.gov (United States)

    Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magunsson, Arni; Martell, Steve; Nash, John; Nielson, Anders; Regentz, Jim; Skaug, Hans; Zipkin, Elise

    2013-01-01

    1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.

  8. Rate-equation modelling and ensemble approach to extraction of parameters for viral infection-induced cell apoptosis and necrosis

    Energy Technology Data Exchange (ETDEWEB)

    Domanskyi, Sergii; Schilling, Joshua E.; Privman, Vladimir, E-mail: privman@clarkson.edu [Department of Physics, Clarkson University, Potsdam, New York 13676 (United States); Gorshkov, Vyacheslav [National Technical University of Ukraine — KPI, Kiev 03056 (Ukraine); Libert, Sergiy, E-mail: libert@cornell.edu [Department of Biomedical Sciences, Cornell University, Ithaca, New York 14853 (United States)

    2016-09-07

    We develop a theoretical approach that uses physiochemical kinetics modelling to describe cell population dynamics upon progression of viral infection in cell culture, which results in cell apoptosis (programmed cell death) and necrosis (direct cell death). Several model parameters necessary for computer simulation were determined by reviewing and analyzing available published experimental data. By comparing experimental data to computer modelling results, we identify the parameters that are the most sensitive to the measured system properties and allow for the best data fitting. Our model allows extraction of parameters from experimental data and also has predictive power. Using the model we describe interesting time-dependent quantities that were not directly measured in the experiment and identify correlations among the fitted parameter values. Numerical simulation of viral infection progression is done by a rate-equation approach resulting in a system of “stiff” equations, which are solved by using a novel variant of the stochastic ensemble modelling approach. The latter was originally developed for coupled chemical reactions.

  9. A termination criterion for parameter estimation in stochastic models in systems biology.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven

    2015-11-01

    Parameter estimation procedures are a central aspect of modeling approaches in systems biology. They are often computationally expensive, especially when the models take stochasticity into account. Typically parameter estimation involves the iterative optimization of an objective function that describes how well the model fits some measured data with a certain set of parameter values. In order to limit the computational expenses it is therefore important to apply an adequate stopping criterion for the optimization process, so that the optimization continues at least until a reasonable fit is obtained, but not much longer. In the case of stochastic modeling, at least some parameter estimation schemes involve an objective function that is itself a random variable. This means that plain convergence tests are not a priori suitable as stopping criteria. This article suggests a termination criterion suited to optimization problems in parameter estimation arising from stochastic models in systems biology. The termination criterion is developed for optimization algorithms that involve populations of parameter sets, such as particle swarm or evolutionary algorithms. It is based on comparing the variance of the objective function over the whole population of parameter sets with the variance of repeated evaluations of the objective function at the best parameter set. The performance is demonstrated for several different algorithms. To test the termination criterion we choose polynomial test functions as well as systems biology models such as an Immigration-Death model and a bistable genetic toggle switch. The genetic toggle switch is an especially challenging test case as it shows a stochastic switching between two steady states which is qualitatively different from the model behavior in a deterministic model. Copyright © 2015. Published by Elsevier Ireland Ltd.
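    The criterion's core comparison can be sketched as follows. This is an illustrative stand-in, not the authors' exact formula: a population-based optimizer stops once the spread of noisy objective values across the population is comparable to the noise of repeated evaluations at the current best parameter set. The quadratic objective, noise level, and threshold factor are invented for the example:

    ```python
    import random

    def noisy_objective(theta):
        # stand-in for a stochastic-model objective: quadratic plus simulation noise
        return (theta - 3.0) ** 2 + random.gauss(0.0, 0.05)

    def variance(vals):
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / (len(vals) - 1)

    def should_stop(population, n_repeats=40, factor=5.0):
        scores = [noisy_objective(t) for t in population]
        best = population[scores.index(min(scores))]
        pop_var = variance(scores)                     # spread across the population
        noise_var = variance([noisy_objective(best)    # spread from noise alone
                              for _ in range(n_repeats)])
        return pop_var <= factor * noise_var

    random.seed(1)
    early = should_stop([0.0, 1.5, 3.0, 4.5, 6.0] * 8)   # population still spread out
    late = should_stop([3.0] * 40)                        # population has collapsed
    ```

    Early in the run the population variance dwarfs the evaluation noise, so the test keeps the optimizer going; once the population has collapsed onto one region, further iterations only chase noise.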

  10. SDSS-II: Determination of shape and color parameter coefficients for SALT-II fit model

    Energy Technology Data Exchange (ETDEWEB)

    Dojcsak, L.; Marriner, J.; /Fermilab

    2010-08-01

    In this study we look at the SALT-II model of Type Ia supernova analysis, which determines the distance moduli based on the known absolute standard-candle magnitude of Type Ia supernovae. We examine the determination of the shape and color parameter coefficients, α and β respectively, in the SALT-II model with the intrinsic error determined from the data. Using the SNANA software package provided for the analysis of Type Ia supernovae, we use a standard Monte Carlo simulation to generate data with known parameters as a tool for analyzing trends in the model under certain assumptions about the intrinsic error. In order to find the best standard-candle model, we try to minimize the residuals on the Hubble diagram by calculating the correct shape and color parameter coefficients. We estimate the magnitude of the intrinsic errors required to obtain results with χ²/degree of freedom = 1. We also use the simulation to estimate the amount of color smearing indicated by the data for our model. We find that the color smearing model works as a general estimate of the color smearing, and that the RMS distribution in the variables provides one method of estimating the intrinsic errors needed to obtain the correct results for α and β. We then apply the resultant intrinsic error matrix to the real data and show our results.
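    As a hedged illustration of the underlying standardization (not the SNANA implementation), the SALT-II distance modulus has the form mu = m_B - M + alpha*x1 - beta*c, so with known distance moduli the coefficients can be recovered by linear least squares. All numbers below are synthetic:

    ```python
    import numpy as np

    # Synthetic supernova sample: stretch x1, colour c, and peak magnitudes
    # m_B built from assumed true alpha, beta, and absolute magnitude M.
    rng = np.random.default_rng(42)
    n = 200
    alpha_true, beta_true, M_true = 0.14, 3.1, -19.3
    x1 = rng.normal(0.0, 1.0, n)             # stretch parameter
    c = rng.normal(0.0, 0.1, n)              # colour parameter
    mu = rng.uniform(34.0, 38.0, n)          # "known" distance moduli
    m_B = mu + M_true - alpha_true * x1 + beta_true * c

    # Solve m_B - mu = M - alpha*x1 + beta*c for (M, alpha, beta)
    A = np.column_stack([np.ones(n), -x1, c])
    coef, *_ = np.linalg.lstsq(A, m_B - mu, rcond=None)
    M_fit, alpha_fit, beta_fit = coef
    ```

    In the real analysis the distance moduli are not known in advance, so alpha and beta are fitted jointly with the cosmology by minimizing the Hubble-diagram residuals; the linear solve above only shows the role the two coefficients play.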

  11. Damage Identification of Bridge Based on Chebyshev Polynomial Fitting and Fuzzy Logic without Considering Baseline Model Parameters

    Directory of Open Access Journals (Sweden)

    Yu-Bo Jiao

    2015-01-01

    The paper presents an effective approach for damage identification of bridges based on Chebyshev polynomial fitting and fuzzy logic systems without considering baseline model data. The modal curvature of the damaged bridge can be obtained through central difference approximation based on the displacement mode shape. From the modal curvature of the damaged structure, Chebyshev polynomial fitting is applied to acquire the curvature of the undamaged one without considering baseline parameters. The modal curvature difference can therefore be derived and used for damage localization. Subsequently, the normalized modal curvature difference is treated as the input variable of fuzzy logic systems for damage condition assessment. Numerical simulation on a simply supported bridge was carried out to demonstrate the feasibility of the proposed method.
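    The localization step can be sketched as follows, with synthetic data and an assumed damage position (a minimal sketch, not the authors' code): central differences give the modal curvature, and a low-order Chebyshev fit supplies the smooth baseline whose residual flags the damaged node:

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev

    # Synthetic first mode shape of a simple span, with a small local kink
    # standing in for a stiffness loss at a hypothetical damaged node.
    x = np.linspace(0.0, 1.0, 101)
    mode = np.sin(np.pi * x)
    damage_at = 50                             # assumed damage location (x = 0.5)
    mode[damage_at] += 0.002

    # Modal curvature by central differences
    h = x[1] - x[0]
    curv = (mode[:-2] - 2 * mode[1:-1] + mode[2:]) / h ** 2
    xc = x[1:-1]

    # Low-order Chebyshev fit plays the role of the smooth "undamaged" baseline
    coef = chebyshev.chebfit(xc, curv, deg=6)
    baseline = chebyshev.chebval(xc, coef)

    index = np.abs(curv - baseline)            # modal curvature difference
    located = xc[np.argmax(index)]             # peak of the damage index
    ```

    The kink barely perturbs the mode shape itself, but the second difference amplifies it, so the residual against the smooth Chebyshev baseline peaks sharply at the damaged node.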

  12. Induced subgraph searching for geometric model fitting

    Science.gov (United States)

    Xiao, Fan; Xiao, Guobao; Yan, Yan; Wang, Xing; Wang, Hanzi

    2017-11-01

    In this paper, we propose a novel model fitting method based on graphs to fit and segment multiple-structure data. In the graph constructed on the data, each model instance is represented as an induced subgraph. Following the idea of pursuing the maximum consensus, the multiple geometric model fitting problem is formulated as searching for a set of induced subgraphs including the maximum union set of vertices. After the generation and refinement of the induced subgraphs that represent the model hypotheses, the searching process is conducted on the "qualified" subgraphs. Multiple model instances can be simultaneously estimated by solving a converted problem. Then, we introduce an energy evaluation function to determine the number of model instances in the data. The proposed method is able to effectively estimate the number and the parameters of model instances in data severely corrupted by outliers and noise. Experimental results on synthetic data and real images validate the favorable performance of the proposed method compared with several state-of-the-art fitting methods.

  13. Automated Model Fit Method for Diesel Engine Control Development

    NARCIS (Netherlands)

    Seykens, X.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.

    2014-01-01

    This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and structured approach to fit the required combustion model parameters. Only a data set is required that is

  15. Reactive decontamination of absorbing thin film polymer coatings: model development and parameter determination

    Science.gov (United States)

    Varady, Mark; Mantooth, Brent; Pearl, Thomas; Willis, Matthew

    2014-03-01

    A continuum model of reactive decontamination in absorbing polymeric thin film substrates exposed to the chemical warfare agent O-ethyl S-[2-(diisopropylamino)ethyl] methylphosphonothioate (known as VX) was developed to assess the performance of various decontaminants. Experiments were performed in conjunction with an inverse analysis method to obtain the necessary model parameters. The experiments involved contaminating a substrate with a fixed VX exposure, applying a decontaminant, followed by a time-resolved, liquid phase extraction of the absorbing substrate to measure the residual contaminant by chromatography. Decontamination model parameters were uniquely determined using the Levenberg-Marquardt nonlinear least squares fitting technique to best fit the experimental time evolution of extracted mass. The model was implemented numerically in both a 2D axisymmetric finite element program and a 1D finite difference code, and it was found that the more computationally efficient 1D implementation was sufficiently accurate. The resulting decontamination model provides an accurate quantification of contaminant concentration profile in the material, which is necessary to assess exposure hazards.
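    A minimal Levenberg-Marquardt loop of the kind used for such fits can be sketched as follows; the first-order loss model, synthetic data, and damping schedule are illustrative assumptions, not the paper's 2D/1D transport model:

    ```python
    import numpy as np

    # Fit a hypothetical first-order loss model m(t) = m0*exp(-k*t) to
    # extracted-mass data with a hand-rolled Levenberg-Marquardt iteration.
    t = np.linspace(0.0, 10.0, 20)
    m0_true, k_true = 5.0, 0.3
    data = m0_true * np.exp(-k_true * t)       # synthetic measurements

    def residuals(p):
        m0, k = p
        return data - m0 * np.exp(-k * t)

    def jacobian(p):
        m0, k = p
        e = np.exp(-k * t)
        return np.column_stack([-e, m0 * t * e])   # d(residual)/d(m0, k)

    p = np.array([1.0, 1.0])                   # starting guess
    lam = 1e-3                                 # damping parameter
    for _ in range(200):
        r = residuals(p)
        J = jacobian(p)
        A = J.T @ J + lam * np.eye(2)          # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        if np.sum(residuals(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5       # accept step, trust the model more
        else:
            lam *= 10.0                        # reject step, damp harder
    m0_fit, k_fit = p
    ```

    The adaptive damping is what distinguishes Levenberg-Marquardt from plain Gauss-Newton: far from the optimum it behaves like gradient descent, near the optimum like Newton's method.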

  16. Increasing parameter certainty and data utility through multi-objective calibration of a spatially distributed temperature and solute model

    Directory of Open Access Journals (Sweden)

    C. Bandaragoda

    2011-05-01

    To support the goal of distributed hydrologic and instream model predictions based on physical processes, we explore multi-dimensional parameterization determined by a broad set of observations. We present a systematic approach to using various data types at spatially distributed locations to decrease the parameter bounds sampled within calibration algorithms, which ultimately provides information regarding the extent of individual processes represented within the model structure. Through the use of a simulation matrix, parameter sets are first locally optimized by fitting the respective data at one or two locations; the best results are then selected to resolve which parameter sets perform best at all locations, i.e., globally. This approach is illustrated using the Two-Zone Temperature and Solute (TZTS) model for a case study in the Virgin River, Utah, USA, where temperature and solute tracer data were collected at multiple locations and zones within the river to represent the fate and transport of both heat and solute through the study reach. The result was a narrowed parameter space and increased parameter certainty that, based on our results, would not have been achieved if only single-objective algorithms had been used. We also found that the global optimum is best defined by multiple spatially distributed local optima, which supports the hypothesis that there is a discrete and narrowly bounded parameter range representing the processes that control the dominant hydrologic responses. Further, we illustrate that the optimization process itself can be used to determine which observed responses and locations are most useful for estimating the parameters that produce a global fit, guiding future data collection efforts.
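    The local-then-global selection idea behind the simulation matrix can be sketched with a toy example; the sites, parameter sets, and error measure below are all invented:

    ```python
    # Toy version of the simulation-matrix idea: rank parameter sets locally
    # (per observation site), then re-evaluate the local winners jointly to
    # pick the set that performs best across all sites.
    param_sets = {"A": 1.0, "B": 2.0, "C": 2.9, "D": 5.0}
    site_targets = {"site1": 3.0, "site2": 2.0, "site3": 2.8}

    def site_error(value, site):
        return abs(value - site_targets[site])

    # Step 1: local optimization -- the best parameter set at each site alone.
    local_winners = {
        site: min(param_sets, key=lambda p: site_error(param_sets[p], site))
        for site in site_targets
    }

    # Step 2: global selection among the local winners only.
    candidates = set(local_winners.values())
    global_best = min(
        candidates,
        key=lambda p: sum(site_error(param_sets[p], s) for s in site_targets),
    )
    ```

    Here set "B" wins at one site but set "C" has the lowest summed error overall, illustrating how a globally best set need not win every local comparison.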

  17. RATES OF FITNESS DECLINE AND REBOUND SUGGEST PERVASIVE EPISTASIS

    Science.gov (United States)

    Perfeito, L; Sousa, A; Bataillon, T; Gordo, I

    2014-01-01

    Unraveling the factors that determine the rate of adaptation is a major question in evolutionary biology. One key parameter is the effect of a new mutation on fitness, which invariably depends on the environment and genetic background. The fate of a mutation also depends on population size, which determines the amount of drift it will experience. Here, we manipulate both population size and genotype composition and follow the adaptation of 23 distinct Escherichia coli genotypes. These had previously accumulated mutations under intense genetic drift and encompass substantial fitness variation. A simple rule is uncovered: the net fitness change is negatively correlated with the fitness of the genotype in which new mutations appear, a signature of epistasis. We find that Fisher's geometrical model can account for the observed patterns of fitness change and infer the parameters of this model that best fit the data using Approximate Bayesian Computation. We estimate a genomic mutation rate of 0.01 per generation for fitness-altering mutations (albeit with a large confidence interval), a mean fitness effect of mutations of -0.01, and an effective number of traits of nine in mutS− E. coli. This framework can be extended to confront a broader range of models with data and to test different classes of fitness landscape models. PMID:24372601
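    Rejection-style Approximate Bayesian Computation of the kind used to fit Fisher's geometrical model can be sketched in simplified form. The Gaussian mutation-effect stand-in, prior bounds, and tolerance below are invented for the example, not the paper's model:

    ```python
    import random

    # ABC rejection sketch: draw a candidate parameter from the prior,
    # simulate data under it, and accept the candidate if a summary
    # statistic of the simulation lands close to the observed one.
    random.seed(7)

    def simulate(mean_effect, n=200):
        # toy generative model: mutation fitness effects ~ Normal(mean, 0.02)
        return [random.gauss(mean_effect, 0.02) for _ in range(n)]

    observed = simulate(-0.01)                      # pretend these are the data
    obs_mean = sum(observed) / len(observed)        # summary statistic

    accepted = []
    for _ in range(2000):
        candidate = random.uniform(-0.05, 0.03)     # draw from a flat prior
        sim_mean = sum(simulate(candidate)) / 200
        if abs(sim_mean - obs_mean) < 0.002:        # distance tolerance
            accepted.append(candidate)

    posterior_mean = sum(accepted) / len(accepted)  # approximate posterior mean
    ```

    The accepted candidates approximate the posterior distribution; tightening the tolerance sharpens the approximation at the cost of more rejected simulations.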

  18. Assessing fit in Bayesian models for spatial processes

    KAUST Repository

    Jun, M.; Katzfuss, M.; Hu, J.; Johnson, V. E.

    2014-01-01

    © 2014 John Wiley & Sons, Ltd. Gaussian random fields are frequently used to model spatial and spatial-temporal data, particularly in geostatistical settings. As much of the attention of the statistics community has been focused on defining and estimating the mean and covariance functions of these processes, little effort has been devoted to developing goodness-of-fit tests to allow users to assess the models' adequacy. We describe a general goodness-of-fit test and related graphical diagnostics for assessing the fit of Bayesian Gaussian process models using pivotal discrepancy measures. Our method is applicable for both regularly and irregularly spaced observation locations on planar and spherical domains. The essential idea behind our method is to evaluate pivotal quantities defined for a realization of a Gaussian random field at parameter values drawn from the posterior distribution. Because the nominal distribution of the resulting pivotal discrepancy measures is known, it is possible to quantitatively assess model fit directly from the output of Markov chain Monte Carlo algorithms used to sample from the posterior distribution on the parameter space. We illustrate our method in a simulation study and in two applications.

  20. NR sulphur vulcanization: Interaction study between TBBS and DPG by means of a combined experimental rheometer and meta-model best fitting strategy

    Energy Technology Data Exchange (ETDEWEB)

    Milani, G., E-mail: gabriele.milani@polimi.it [Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan (Italy); Hanel, T.; Donetti, R. [Pirelli Tyre, Via Alberto e Piero Pirelli 25, 20126 Milan (Italy); Milani, F. [Chem. Co, Via J.F. Kennedy 2, 45030 Occhiobello (Italy)

    2016-06-08

    The paper is aimed at studying the possible interaction between two different accelerators (DPG and TBBS) in the chemical kinetics of Natural Rubber (NR) vulcanized with sulphur. The same blend with several DPG and TBBS concentrations is analyzed in depth experimentally, varying the curing temperature in the range 150-180°C and obtaining rheometer curves in steps of 10°C. In order to study any possible interaction between the two accelerators (and to evaluate its engineering relevance), the rheometer data are normalized by means of the well-known Sun and Isayev normalization approach, and two output parameters are taken as indicators of possible interaction, namely the time at maximum torque and the reversion percentage. Two different numerical meta-models, belonging to the family of so-called response surfaces (RS), are compared. The first is linear in TBBS and DPG and therefore reproduces the case of no interaction between the accelerators, whereas the second is a non-linear RS with a bilinear term. Both RS are deduced by standard best fitting of the available experimental data. It is found that there is generally some interaction between TBBS and DPG, but that the error introduced by using a linear model (no interaction) is generally lower than 10%, i.e. fully acceptable from an engineering standpoint.

  1. A systematic fitting scheme for caustic-crossing microlensing events

    DEFF Research Database (Denmark)

    Kains, N.; Jørgensen, Uffe Gråe; et al.

    2009-01-01

    with a source crossing the whole caustic structure in less than three days. In order to identify all possible models we conduct an extensive search of the parameter space, followed by a refinement of the parameters with a Markov Chain Monte Carlo algorithm. We find a number of low-χ² regions...... in the parameter space, which lead to several distinct competitive best models. We examine the parameters for each of them, and estimate their physical properties. We find that our fitting strategy locates several minima that are difficult to find with other modelling strategies and is therefore a more appropriate......

  2. Recovering stellar population parameters via two full-spectrum fitting algorithms in the absence of model uncertainties

    Science.gov (United States)

    Ge, Junqiang; Yan, Renbin; Cappellari, Michele; Mao, Shude; Li, Hongyu; Lu, Youjun

    2018-05-01

    Using mock spectra based on the Vazdekis/MILES library, fitted within the wavelength region 3600-7350 Å, we analyze the bias and scatter in the resulting physical parameters induced by the choice of fitting algorithm and by observational uncertainties, while avoiding the effects of model uncertainties. We consider two full-spectrum fitting codes, pPXF and STARLIGHT, in fitting for stellar population age, metallicity, mass-to-light ratio, and dust extinction. With pPXF we find that both the bias μ in the population parameters and the scatter σ in the recovered logarithmic values follow the expected trend μ ∝ σ ∝ 1/(S/N). The bias increases for younger ages and systematically makes recovered ages older, M*/Lr larger, and metallicities lower than the true values. For reference, at S/N = 30, and for the worst case (t = 10⁸ yr), the bias is 0.06 dex in M*/Lr and 0.03 dex in both age and [M/H]. There is no significant dependence on either E(B-V) or the shape of the error spectrum. Moreover, the results are consistent between our 1-SSP and 2-SSP tests. With the STARLIGHT algorithm we find trends similar to pPXF for small input E(B-V); for larger input extinction, dust extinction and [M/H] are significantly underestimated, while ages and M*/Lr are overestimated. Results degrade when moving from our 1-SSP to the 2-SSP tests. The STARLIGHT convergence to the true values can be improved by increasing the number of Markov chains and annealing loops (the "slow mode"). For the same input spectrum, pPXF is about two orders of magnitude faster than STARLIGHT's "default mode" and about three orders of magnitude faster than STARLIGHT's "slow mode".

  3. A Comparison of the One-and Three-Parameter Logistic Models on Measures of Test Efficiency.

    Science.gov (United States)

    Benson, Jeri

    Two methods of item selection were used to select sets of 40 items from a 50-item verbal analogies test, and the resulting item sets were compared for relative efficiency. The BICAL program was used to select the 40 items having the best mean square fit to the one parameter logistic (Rasch) model. The LOGIST program was used to select the 40 items…

  4. Supersymmetric Fits after the Higgs Discovery and Implications for Model Building

    CERN Document Server

    Ellis, John

    2014-01-01

    The data from the first run of the LHC at 7 and 8 TeV, together with the information provided by other experiments such as precision electroweak measurements, flavour measurements, the cosmological density of cold dark matter and the direct search for the scattering of dark matter particles in the LUX experiment, provide important constraints on supersymmetric models. Important information is provided by the ATLAS and CMS measurements of the mass of the Higgs boson, as well as the negative results of searches at the LHC for events with missing transverse energy accompanied by jets, and the LHCb and CMS measurements of BR($B_s \to \mu^+ \mu^-$). Results are presented from frequentist analyses of the parameter spaces of the CMSSM and NUHM1. The global $\chi^2$ functions for the supersymmetric models vary slowly over most of the parameter spaces allowed by the Higgs mass and the missing transverse energy search, with best-fit values that are comparable to the $\chi^2$ for the Standard Model. The $95\%$ CL lower...

  5. A CONTRASTIVE ANALYSIS OF THE FACTORIAL STRUCTURE OF THE PCL-R: WHICH MODEL FITS BEST THE DATA?

    Directory of Open Access Journals (Sweden)

    Beatriz Pérez

    2015-01-01

    The aim of this study was to determine which of the factorial solutions proposed for the Hare Psychopathy Checklist-Revised (PCL-R), of two, three, or four factors, or unidimensional, best fitted the data. Two trained and experienced independent raters scored 197 prisoners from the Villabona Penitentiary (Asturias, Spain), age range 21 to 73 years (M = 36.0, SD = 9.7), of whom 60.12% were reoffenders and 73% had committed violent crimes. The results revealed that the two-factor correlational, three-factor hierarchical without testlets, four-factor correlational and hierarchical, and unidimensional models were a poor fit for the data (CFI ≤ .86), whereas the three-factor hierarchical model with testlets was a reasonable fit (CFI = .93). The scale resulting from the three-factor hierarchical model with testlets (13 items) classified psychopathy significantly higher than the original 20-item scale. The results are discussed in terms of their implications for theoretical models of psychopathy, decision-making, prison classification and intervention, and prevention.

  6. A Model Fit Statistic for Generalized Partial Credit Model

    Science.gov (United States)

    Liang, Tie; Wells, Craig S.

    2009-01-01

    Investigating the fit of a parametric model is an important part of the measurement process when implementing item response theory (IRT), but research examining it is limited. A general nonparametric approach for detecting model misfit, introduced by J. Douglas and A. S. Cohen (2001), has exhibited promising results for the two-parameter logistic…

  7. A METHODOLOGY FOR THE CHOICE OF THE BEST FITTING CONTINUOUS-TIME STOCHASTIC MODELS OF CRUDE OIL PRICE: THE CASE OF RUSSIA

    Directory of Open Access Journals (Sweden)

    Hamidreza Mostafaei

    2013-01-01

    In this study, we attempt to select the best continuous-time stochastic model to describe and forecast the Russian oil price, using the information and statistics available on past oil prices. For this purpose, maximum likelihood estimation is implemented to estimate the parameters of the continuous-time stochastic processes. The result of a unit root test with a structural break reveals that the crude oil price time series is stationary. Simulation of the continuous-time stochastic processes, and the mean square error between the simulated prices and the market ones, show that geometric Brownian motion is the best model for the Russian crude oil price.
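    A hedged sketch of the maximum-likelihood step for the winning model: for geometric Brownian motion the log-returns are i.i.d. normal, so the MLEs follow directly from their sample mean and variance. The prices below are simulated, not Russian oil data:

    ```python
    import math
    import random

    # Simulate a GBM price path dS = mu*S dt + sigma*S dW at daily steps,
    # then recover (mu, sigma) by maximum likelihood on the log-returns.
    random.seed(3)
    dt = 1.0 / 252.0
    mu_true, sigma_true = 0.08, 0.25
    prices = [60.0]
    for _ in range(20000):
        z = random.gauss(0.0, 1.0)
        growth = (mu_true - 0.5 * sigma_true ** 2) * dt + sigma_true * math.sqrt(dt) * z
        prices.append(prices[-1] * math.exp(growth))

    # Log-returns are Normal((mu - sigma^2/2)*dt, sigma^2*dt) under GBM.
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    n = len(rets)
    mean_r = sum(rets) / n
    var_r = sum((r - mean_r) ** 2 for r in rets) / n     # MLE variance (biased)

    sigma_hat = math.sqrt(var_r / dt)
    mu_hat = mean_r / dt + 0.5 * sigma_hat ** 2
    ```

    Note the asymmetry familiar from this model class: the volatility estimate tightens quickly with more observations, while the drift estimate stays noisy unless the observation window is long.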

  8. Model of the best-of-N nest-site selection process in honeybees

    Science.gov (United States)

    Reina, Andreagiovanni; Marshall, James A. R.; Trianni, Vito; Bose, Thomas

    2017-05-01

    The ability of a honeybee swarm to select the best nest site plays a fundamental role in determining the future colony's fitness. To date, the nest-site selection process has mostly been modeled and theoretically analyzed for the case of binary decisions. However, when the number of alternative nests is larger than two, the decision-process dynamics qualitatively change. In this work, we extend previous analyses of a value-sensitive decision-making mechanism to a decision process among N nests. First, we present the decision-making dynamics in the symmetric case of N equal-quality nests. Then, we generalize our findings to a best-of-N decision scenario with one superior nest and N -1 inferior nests, previously studied empirically in bees and ants. Whereas previous binary models highlighted the crucial role of inhibitory stop-signaling, the key parameter in our new analysis is the relative time invested by swarm members in individual discovery and in signaling behaviors. Our new analysis reveals conflicting pressures on this ratio in symmetric and best-of-N decisions, which could be solved through a time-dependent signaling strategy. Additionally, our analysis suggests how ecological factors determining the density of suitable nest sites may have led to selective pressures for an optimal stable signaling ratio.

  9. An analysis of sensitivity of CLIMEX parameters in mapping species potential distribution and the broad-scale changes observed with minor variations in parameters values: an investigation using open-field Solanum lycopersicum and Neoleucinodes elegantalis as an example

    Science.gov (United States)

    da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho

    2018-04-01

    A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying the parameters having the most influence facilitates establishing the best values for model parameters, with useful implications for species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records, and 17 fitting parameters, including growth and stress parameters, model performance was compared by altering one parameter value at a time against the best-fit parameter values. Parameters found to have a greater effect on the model results are termed "sensitive". Through the use of two species, we show that even when upward or downward parameter alterations produce a major change in the Ecoclimatic Index, the effect on the species model depends on the selection of suitability categories and modelling regions. Two parameters showed the greatest sensitivity, depending on the suitability categories of each species in the study. The results enhance user understanding of which climatic factors had the greatest impact on both species' distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed above or below the best-fit values. Thus, sensitivity analyses have the potential to provide additional information for end users, and to improve management, by identifying the climatic variables to which the models are most sensitive.
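    The one-at-a-time perturbation scheme can be sketched as follows; the parameter names and the toy response function are invented stand-ins for CLIMEX's fitting parameters and Ecoclimatic Index:

    ```python
    # One-at-a-time (OAT) sensitivity sketch: perturb each best-fit parameter
    # up and down by 10%, rerun the model, and rank parameters by the change
    # in the output. The "model" here is a deliberately simple stand-in.
    best_fit = {"temp_min": 10.0, "temp_max": 32.0, "moisture": 0.4}

    def suitable_area(params):
        # toy response: area grows with the temperature window and moisture
        window = params["temp_max"] - params["temp_min"]
        return window * (1.0 + params["moisture"])

    base = suitable_area(best_fit)
    sensitivity = {}
    for name, value in best_fit.items():
        changes = []
        for factor in (0.9, 1.1):              # +/- 10 percent perturbation
            trial = dict(best_fit)
            trial[name] = value * factor
            changes.append(abs(suitable_area(trial) - base))
        sensitivity[name] = max(changes)

    most_sensitive = max(sensitivity, key=sensitivity.get)
    ```

    Because "temp_max" is the largest parameter entering the response linearly, a 10% relative perturbation moves the output most, so the scheme flags it as the most sensitive parameter in this toy setup.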

  10. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    Science.gov (United States)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem on fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements on fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on simulation studies conducted, we have shown that to use the Univariate Poisson Model (UPM) estimates as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrated the sensitivity of the specific hyperparameter, which if it is not given extra attention, may affect the final estimates. The last issue is regarding the tuning parameters where they depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily given any data set.

  11. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis.

    Science.gov (United States)

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

    Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although on the one hand, digital pathology and new bioimaging technologies find their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often result in a local maximum.
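    The local-maximum trap mentioned above can be illustrated with a one-dimensional toy objective (the quality curve and restart points are invented): a single hill climber stalls on the lower peak, while restarts from several start points reach the global one:

    ```python
    # Toy "segmentation quality" curve with two maxima: a local one near
    # p = 2 and a global one near p = 8. A plain hill climber started on
    # the left slope cannot escape the local peak; restarts can.
    def quality(p):
        if p < 5:
            return -0.5 * (p - 2.0) ** 2 + 3.0   # local peak, quality 3
        return -0.5 * (p - 8.0) ** 2 + 5.0       # global peak, quality 5

    def hill_climb(start, step=0.1, iters=500):
        p = start
        for _ in range(iters):
            best = max((p, p + step, p - step), key=quality)
            if best == p:                         # no neighbor improves: stop
                break
            p = best
        return p

    stuck = hill_climb(1.0)                       # trapped at the local maximum
    restarts = [hill_climb(s) for s in (0.5, 2.5, 4.5, 6.5, 8.5)]
    best_restart = max(restarts, key=quality)     # the global peak is found
    ```

    This is the failure mode that motivates population-based or restart strategies (and the visual exploration of the parameter space) over plain hill climbing.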

  12. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis

    Directory of Open Access Journals (Sweden)

    Christian Held

    2013-01-01

    Full Text Available Introduction: Research and diagnosis in medicine and biology often require the assessment of large amounts of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis remain open. Methods: In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. Results: This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. Conclusion: The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can exhibit several local performance maxima. Hence, optimization strategies that cannot escape local performance maxima, such as the hill climbing algorithm, often terminate in a local maximum.

  13. Investigation of the leading and subleading high-energy behavior of hadron-hadron total cross sections using a best-fit analysis of hadronic scattering data

    Science.gov (United States)

    Giordano, M.; Meggiolaro, E.; Silva, P. V. R. G.

    2017-08-01

    In the present investigation we study the leading and subleading high-energy behavior of hadron-hadron total cross sections using a best-fit analysis of hadronic scattering data. The parametrization used for the hadron-hadron total cross sections at high energy is inspired by recent results obtained by Giordano and Meggiolaro [J. High Energy Phys. 03 (2014) 002, 10.1007/JHEP03(2014)002] using a nonperturbative approach in the framework of QCD, and it reads σ_tot ~ B ln²s + C ln s ln ln s. We critically investigate whether B and C can be obtained by means of best fits to data for proton-proton and antiproton-proton scattering, including recent data obtained at the LHC, and also to data for other meson-baryon and baryon-baryon scattering processes. In particular, following the above-mentioned nonperturbative QCD approach, we also consider fits where the parameters B and C are set to B = κB_th and C = κC_th, where B_th and C_th are universal quantities related to the QCD stable spectrum, while κ (treated as an extra free parameter) is related to the asymptotic value of the ratio σ_el/σ_tot. Different possible scenarios are then considered and compared.
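
    Because the parametrization σ_tot ~ B ln²s + C ln s ln ln s is linear in B and C, a best fit can be sketched as an ordinary least-squares solve; the data below are synthetic, and the "true" B and C are invented for illustration, not the paper's fitted values:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic "total cross section" data generated from the paper's
    # parametrization sigma_tot ~ B ln^2(s) + C ln(s) ln ln(s).
    B_true, C_true = 0.25, 1.5
    s = np.geomspace(1e2, 1e8, 40)          # squared CM energy (arbitrary units)
    L = np.log(s)
    sigma = B_true * L**2 + C_true * L * np.log(L) + rng.normal(0, 0.5, s.size)

    # The model is linear in the parameters B and C, so ordinary
    # least squares suffices (no nonlinear minimizer needed).
    A = np.column_stack([L**2, L * np.log(L)])
    (B_fit, C_fit), *_ = np.linalg.lstsq(A, sigma, rcond=None)
    print(round(B_fit, 2), round(C_fit, 2))
    ```

    A fit to real pp and p̄p data would add point-by-point uncertainties as weights, but the structure of the solve is the same.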

  14. Evaluation of the best fit distribution for partial duration series of daily rainfall in Madinah, western Saudi Arabia

    Science.gov (United States)

    Alahmadi, F.; Rahman, N. A.; Abdulrazzak, M.

    2014-09-01

    Rainfall frequency analysis is an essential tool for the design of water-related infrastructure. It can be used to estimate the magnitude of extreme rainfall events for a given frequency of occurrence, and hence future flood magnitudes. This study analyses the application of rainfall partial duration series (PDS) in the fast-growing city of Madinah, located in the western part of Saudi Arabia. Different statistical distributions were applied (i.e. Normal, Log Normal, Extreme Value type I, Generalized Extreme Value, Pearson Type III, Log Pearson Type III) and their distribution parameters were estimated using the method of L-moments. Different model selection criteria were also applied, e.g. the Akaike Information Criterion (AIC), corrected Akaike Information Criterion (AICc), Bayesian Information Criterion (BIC) and Anderson-Darling Criterion (ADC). The analysis indicated the Generalized Extreme Value distribution as the best fit for the Madinah partial duration daily rainfall series. The outcome of such an evaluation can contribute toward better design criteria for flood management, especially flood protection measures.
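
    A toy version of the selection step might look as follows; the synthetic "rainfall" sample, the reduced candidate set, and the use of maximum likelihood (rather than the paper's L-moments) are all simplifications for the sketch:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # Synthetic daily-rainfall exceedances drawn from a GEV; the candidate
    # set below is a subset of the paper's six distributions.
    data = stats.genextreme.rvs(c=-0.3, loc=20, scale=8, size=400,
                                random_state=rng)

    candidates = {
        "Normal": stats.norm,
        "Gumbel (EV-I)": stats.gumbel_r,
        "GEV": stats.genextreme,
    }

    def aic(dist, data):
        params = dist.fit(data)                 # MLE fit for brevity
        loglik = np.sum(dist.logpdf(data, *params))
        return 2 * len(params) - 2 * loglik     # AIC = 2k - 2 ln L

    best = min(candidates, key=lambda name: aic(candidates[name], data))
    print(best)
    ```

    Here AIC = 2k − 2 ln L penalizes the extra shape parameter of the GEV, so the more flexible distribution wins only when the data actually support it.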

  15. Evaluation of the best fit distribution for partial duration series of daily rainfall in Madinah, western Saudi Arabia

    Directory of Open Access Journals (Sweden)

    F. Alahmadi

    2014-09-01

    Full Text Available Rainfall frequency analysis is an essential tool for the design of water-related infrastructure. It can be used to estimate the magnitude of extreme rainfall events for a given frequency of occurrence, and hence future flood magnitudes. This study analyses the application of rainfall partial duration series (PDS) in the fast-growing city of Madinah, located in the western part of Saudi Arabia. Different statistical distributions were applied (i.e. Normal, Log Normal, Extreme Value type I, Generalized Extreme Value, Pearson Type III, Log Pearson Type III) and their distribution parameters were estimated using the method of L-moments. Different model selection criteria were also applied, e.g. the Akaike Information Criterion (AIC), corrected Akaike Information Criterion (AICc), Bayesian Information Criterion (BIC) and Anderson-Darling Criterion (ADC). The analysis indicated the Generalized Extreme Value distribution as the best fit for the Madinah partial duration daily rainfall series. The outcome of such an evaluation can contribute toward better design criteria for flood management, especially flood protection measures.

  16. Model Test Bed for Evaluating Wave Models and Best Practices for Resource Assessment and Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Neary, Vincent Sinclair [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Water Power Technologies; Yang, Zhaoqing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Coastal Sciences Division; Wang, Taiping [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Coastal Sciences Division; Gunawan, Budi [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Water Power Technologies; Dallman, Ann Renee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Water Power Technologies

    2016-03-01

    A wave model test bed is established to benchmark, test and evaluate spectral wave models and modeling methodologies (i.e., best practices) for predicting the wave energy resource parameters recommended by the International Electrotechnical Commission, IEC TS 62600-101 Ed. 1.0 (2015). Among other benefits, the model test bed can be used to investigate the suitability of different models, specifically what source terms should be included in spectral wave models under different wave climate conditions and for different classes of resource assessment. The overarching goal is to use these investigations to provide industry guidance for model selection and modeling best practices depending on the wave site conditions and desired class of resource assessment. Modeling best practices are reviewed, and limitations and knowledge gaps in predicting wave energy resource parameters are identified.

  17. Key transmission parameters of an institutional outbreak during the 1918 influenza pandemic estimated by mathematical modelling

    Directory of Open Access Journals (Sweden)

    Nelson Peter

    2006-11-01

    Full Text Available Abstract Aim To estimate the key transmission parameters associated with an outbreak of pandemic influenza in an institutional setting (New Zealand, 1918). Methods Historical morbidity and mortality data were obtained from the report of the medical officer for a large military camp. A susceptible-exposed-infectious-recovered (SEIR) epidemiological model was solved numerically to find a range of best-fit estimates for key epidemic parameters and an incidence curve. Mortality data were subsequently modelled by performing a convolution of the incidence distribution with a best-fit incidence-mortality lag distribution. Results Basic reproduction number (R0) values for three possible scenarios ranged between 1.3 and 3.1, and the corresponding average latent period and infectious period estimates ranged between 0.7 and 1.3 days, and 0.2 and 0.3 days, respectively. The mean and median best-estimate incidence-mortality lag periods were 6.9 and 6.6 days respectively. This delay is consistent with secondary bacterial pneumonia being a relatively important cause of death in this predominantly young male population. Conclusion These R0 estimates are broadly consistent with others made for the 1918 influenza pandemic and are not particularly large relative to some other infectious diseases. This finding suggests that if a novel influenza strain of similar virulence emerged then it could potentially be controlled through the prompt use of major public health measures.
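
    A minimal SEIR integration in the spirit of the study can be sketched as below; the camp size, transmission rate and period lengths are illustrative values chosen near the reported ranges, not the paper's best-fit estimates:

    ```python
    # Minimal SEIR sketch of an institutional outbreak; all parameter
    # values are illustrative, not the paper's fitted estimates.
    N = 3000.0                  # camp population (assumed)
    beta = 6.0                  # transmission rate per day (assumed)
    sigma = 1 / 1.0             # 1 / mean latent period (days)
    gamma = 1 / 0.3             # 1 / mean infectious period (days)
    R0 = beta / gamma           # basic reproduction number for SEIR

    dt, days = 0.01, 60
    S, E, I, R = N - 1.0, 0.0, 1.0, 0.0
    for _ in range(int(days / dt)):          # forward-Euler integration
        new_exposed = beta * S * I / N
        dS, dE = -new_exposed, new_exposed - sigma * E
        dI, dR = sigma * E - gamma * I, gamma * I
        S, E, I, R = S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt

    attack_rate = R / N         # final fraction ever infected
    print(round(R0, 2), round(attack_rate, 2))
    ```

    For an SEIR model R0 = β/γ; fitting would adjust β, σ and γ until the simulated incidence curve matches the historical morbidity data, before convolving with the incidence-mortality lag.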

  18. Fitting ARMA Time Series by Structural Equation Models.

    Science.gov (United States)

    van Buuren, Stef

    1997-01-01

    This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)
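
    As a minimal illustration of the model family involved (not of the structural-equation machinery itself), an AR(1) process, the simplest stationary ARMA(p,q) case, can be fitted by conditional least squares; phi_true and the sample size are invented for the sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Simulate an AR(1) process x_t = phi * x_{t-1} + e_t, a special
    # case of the ARMA(p,q) family discussed in the paper.
    phi_true, n = 0.6, 2000
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi_true * x[t - 1] + rng.normal()

    # Conditional least squares: regress x_t on x_{t-1}.
    phi_hat = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])
    print(round(phi_hat, 2))
    ```

    The SEM formulation of the paper recovers maximum likelihood estimates of the same kind of parameters, but through general structural-equation software rather than a direct regression.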

  19. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y = η(θ) + ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)
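
    The emulator idea can be sketched in a toy one-parameter setting: the "expensive" model η(θ) is run only at a small design, a cheap surrogate replaces it in the likelihood, and the posterior is then explored at negligible cost. The cubic-polynomial surrogate here stands in for the Gaussian-process emulator of the paper, and all functions and numbers are invented:

    ```python
    import numpy as np

    # Toy emulator-based calibration for y = eta(theta) + eps.
    def eta(theta):                 # pretend each call takes hours
        return np.sin(theta) + 0.5 * theta

    design = np.linspace(0, 2, 8)            # small ensemble of model runs
    coef = np.polyfit(design, eta(design), 3)

    def emulator(theta):                     # cheap stand-in for eta
        return np.polyval(coef, theta)

    y_obs = eta(1.2) + 0.01                  # synthetic measurement
    sigma_eps = 0.05                         # assumed measurement error s.d.
    grid = np.linspace(0, 2, 2001)
    log_post = -0.5 * ((y_obs - emulator(grid)) / sigma_eps) ** 2
    theta_map = grid[np.argmax(log_post)]    # posterior mode (flat prior)
    print(round(theta_map, 2))
    ```

    In the paper's setting MCMC would sample this emulated posterior instead of maximizing it on a grid, and the emulator's own uncertainty would be folded into the statistical formulation.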

  20. Three dimensional fuzzy influence analysis of fitting algorithms on integrated chip topographic modeling

    International Nuclear Information System (INIS)

    Liang, Zhong Wei; Wang, Yi Jun; Ye, Bang Yan; Brauwer, Richard Kars

    2012-01-01

    In inspecting the detailed performance results of surface precision modeling in different external parameter conditions, the integrated chip surfaces should be evaluated and assessed during topographic spatial modeling processes. The application of surface fitting algorithms exerts a considerable influence on topographic mathematical features. The influence mechanisms caused by different surface fitting algorithms on the integrated chip surface facilitate the quantitative analysis of different external parameter conditions. By extracting the coordinate information from the selected physical control points and using a set of precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used for constructing micro topographic models with the obtained point cloud. In computing for the newly proposed mathematical features on surface models, we construct the fuzzy evaluating data sequence and present a new three dimensional fuzzy quantitative evaluating method. Through this method, the value variation tendencies of topographic features can be clearly quantified. The fuzzy influence discipline among different surface fitting algorithms, topography spatial features, and the external science parameter conditions can be analyzed quantitatively and in detail. In addition, quantitative analysis can provide final conclusions on the inherent influence mechanism and internal mathematical relation in the performance results of different surface fitting algorithms, topographic spatial features, and their scientific parameter conditions in the case of surface micro modeling. The performance inspection of surface precision modeling will be facilitated and optimized as a new research idea for micro-surface reconstruction that will be monitored in a modeling process.

  1. Three dimensional fuzzy influence analysis of fitting algorithms on integrated chip topographic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Zhong Wei; Wang, Yi Jun [Guangzhou Univ., Guangzhou (China); Ye, Bang Yan [South China Univ. of Technology, Guangzhou (China); Brauwer, Richard Kars [Indian Institute of Technology, Kanpur (India)

    2012-10-15

    In inspecting the detailed performance results of surface precision modeling in different external parameter conditions, the integrated chip surfaces should be evaluated and assessed during topographic spatial modeling processes. The application of surface fitting algorithms exerts a considerable influence on topographic mathematical features. The influence mechanisms caused by different surface fitting algorithms on the integrated chip surface facilitate the quantitative analysis of different external parameter conditions. By extracting the coordinate information from the selected physical control points and using a set of precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used for constructing micro topographic models with the obtained point cloud. In computing for the newly proposed mathematical features on surface models, we construct the fuzzy evaluating data sequence and present a new three dimensional fuzzy quantitative evaluating method. Through this method, the value variation tendencies of topographic features can be clearly quantified. The fuzzy influence discipline among different surface fitting algorithms, topography spatial features, and the external science parameter conditions can be analyzed quantitatively and in detail. In addition, quantitative analysis can provide final conclusions on the inherent influence mechanism and internal mathematical relation in the performance results of different surface fitting algorithms, topographic spatial features, and their scientific parameter conditions in the case of surface micro modeling. The performance inspection of surface precision modeling will be facilitated and optimized as a new research idea for micro-surface reconstruction that will be monitored in a modeling process.

  2. Single-level resonance parameters fit nuclear cross-sections

    Science.gov (United States)

    Drawbaugh, D. W.; Gibson, G.; Miller, M.; Page, S. L.

    1970-01-01

    Least squares analyses of experimental differential cross-section data for the U-235 nucleus have yielded single-level Breit-Wigner resonance parameters that simultaneously fit three nuclear cross sections: capture, fission, and total.

  3. Ultra high energy interaction models for Monte Carlo calculations: what model is the best fit

    Energy Technology Data Exchange (ETDEWEB)

    Stanev, Todor [Bartol Research Institute, University of Delaware, Newark DE 19716 (United States)

    2006-01-15

    We briefly outline two methods for extension of hadronic interaction models to extremely high energy. Then we compare the main characteristics of representative computer codes that implement the different models and give examples of air shower parameters predicted by those codes.

  4. A study of V79 cell survival after proton and carbon ion irradiation, as represented by the parameters of Katz' track structure model

    DEFF Research Database (Denmark)

    Grzanka, Leszek; Waligórski, M. P. R.; Bassler, Niels

    different sets of data obtained for the same cell line and different ions, measured at different laboratories, we have fitted model parameters to a set of carbon-irradiated V79 cells, published by Furusawa et al. (2), and to a set of proton-irradiated V79 cells, published by Wouters et al. (3), separately....... We found that values of model parameters best fitted to the carbon data of Furusawa et al. yielded predictions of V79 survival after proton irradiation which did not match the V79 proton data of Wouters et al. Fitting parameters to both sets combined did not improve the accuracy of model predictions...... carbon irradiation. 1. Katz, R., Track structure in radiobiology and in radiation detection. Nuclear Track Detection 2: 1-28 (1978). 2. Furusawa Y. et al. Inactivation of aerobic and hypoxic cells from three different cell lines by accelerated 3He-, 12C- and 20Ne beams. Radiat Res. 2012 Jan; 177...

  5. Optimization of Saturn paraboloid magnetospheric field model parameters using Cassini equatorial magnetic field data

    Directory of Open Access Journals (Sweden)

    E. S. Belenkaya

    2016-07-01

    Full Text Available The paraboloid model of Saturn's magnetosphere describes the magnetic field as being due to the sum of contributions from the internal field of the planet, the ring current, and the tail current, all contained by surface currents inside a magnetopause boundary which is taken to be a paraboloid of revolution about the planet-Sun line. The parameters of the model have previously been determined by comparison with data from a few passes through Saturn's magnetosphere in compressed and expanded states, depending on the prevailing dynamic pressure of the solar wind. Here we significantly expand such comparisons through examination of Cassini magnetic field data from 18 near-equatorial passes that span wide ranges of local time, focusing on modelling the co-latitudinal field component that defines the magnetic flux passing through the equatorial plane. For 12 of these passes, spanning pre-dawn, via noon, to post-midnight, the spacecraft crossed the magnetopause during the pass, thus allowing an estimate of the concurrent subsolar radial distance of the magnetopause R1 to be made, considered to be the primary parameter defining the scale size of the system. The best-fit model parameters from these passes are then employed to determine how the parameters vary with R1, using least-squares linear fits, thus providing predictive model parameters for any value of R1 within the range. We show that the fits obtained using the linear approximation parameters are of the same order as those for the individually selected parameters. We also show that the magnetic flux mapping to the tail lobes in these models is generally in good accord with observations of the location of the open-closed field line boundary in Saturn's ionosphere, and the related position of the auroral oval. We then investigate the field data on six passes through the nightside magnetosphere, for which the spacecraft did not cross the magnetopause, such that in this case we compare the

  6. Planck intermediate results LI. Features in the cosmic microwave background temperature power spectrum and shifts in cosmological parameters

    DEFF Research Database (Denmark)

    Aghanim, N.; Akrami, Y.; Ashdown, M.

    2017-01-01

    The six parameters of the standard ΛCDM model have best-fit values derived from the Planck temperature power spectrum that are shifted somewhat from the best-fit values derived from WMAP data. These shifts are driven by features in the Planck temperature power spectrum at angular scales that had ...

  7. Assumptions of the primordial spectrum and cosmological parameter estimation

    International Nuclear Information System (INIS)

    Shafieloo, Arman; Souradeep, Tarun

    2011-01-01

    The observables of the perturbed universe, cosmic microwave background (CMB) anisotropy and large-scale structure, depend on a set of cosmological parameters, as well as the assumed nature of primordial perturbations. In particular, the shape of the primordial power spectrum (PPS) is, at best, a well-motivated assumption. It is known that the assumed functional form of the PPS in cosmological parameter estimation can affect the best-fit parameters and their relative confidence limits. In this paper, we demonstrate that a specific assumed form actually drives the best-fit parameters into distinct basins of likelihood in the space of cosmological parameters, where the likelihood resists improvement via modifications to the PPS. The regions where considerably better likelihoods are obtained allowing a free-form PPS lie outside these basins. In the absence of a preferred model of inflation, this raises a concern that current cosmological parameter estimates are strongly prejudiced by the assumed form of the PPS. Our results strongly motivate approaches toward simultaneous estimation of the cosmological parameters and the shape of the primordial spectrum from upcoming cosmological data. It is equally important for theorists to keep an open mind towards early universe scenarios that produce features in the PPS. (paper)

  8. A worked example of "best fit" framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents.

    Science.gov (United States)

    Carroll, Christopher; Booth, Andrew; Cooper, Katy

    2011-03-16

    A variety of different approaches to the synthesis of qualitative data are advocated in the literature. The aim of this paper is to describe the application of a pragmatic method of qualitative evidence synthesis and the lessons learned from adopting this "best fit" framework synthesis approach. An evaluation of framework synthesis as an approach to the qualitative systematic review of evidence exploring the views of adults to the taking of potential agents within the context of the primary prevention of colorectal cancer. Twenty papers from North America, Australia, the UK and Europe met the criteria for inclusion. Fourteen themes were identified a priori from a related, existing conceptual model identified in the literature, which were then used to code the extracted data. Further analysis resulted in the generation of a more sophisticated model with additional themes. The synthesis required a combination of secondary framework and thematic analysis approaches and was conducted within a health technology assessment timeframe. The novel and pragmatic "best fit" approach to framework synthesis developed and described here was found to be fit for purpose. Future research should seek to test further this approach to qualitative data synthesis.

  9. Comprehensive process model of clinical information interaction in primary care: results of a "best-fit" framework synthesis.

    Science.gov (United States)

    Veinot, Tiffany C; Senteio, Charles R; Hanauer, David; Lowery, Julie C

    2018-06-01

    To describe a new, comprehensive process model of clinical information interaction in primary care (Clinical Information Interaction Model, or CIIM) based on a systematic synthesis of published research. We used the "best fit" framework synthesis approach. Searches were performed in PubMed, Embase, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), PsycINFO, Library and Information Science Abstracts, Library, Information Science and Technology Abstracts, and Engineering Village. Two authors reviewed articles according to inclusion and exclusion criteria. Data abstraction and content analysis of 443 published papers were used to create a model in which every element was supported by empirical research. The CIIM documents how primary care clinicians interact with information as they make point-of-care clinical decisions. The model highlights 3 major process components: (1) context, (2) activity (usual and contingent), and (3) influence. Usual activities include information processing, source-user interaction, information evaluation, selection of information, information use, clinical reasoning, and clinical decisions. Clinician characteristics, patient behaviors, and other professionals influence the process. The CIIM depicts the complete process of information interaction, enabling a grasp of relationships previously difficult to discern. The CIIM suggests potentially helpful functionality for clinical decision support systems (CDSSs) to support primary care, including a greater focus on information processing and use. The CIIM also documents the role of influence in clinical information interaction; influencers may affect the success of CDSS implementations. The CIIM offers a new framework for achieving CDSS workflow integration and new directions for CDSS design that can support the work of diverse primary care clinicians.

  10. An Investigation of Invariance Properties of One, Two and Three Parameter Logistic Item Response Theory Models

    Directory of Open Access Journals (Sweden)

    O.A. Awopeju

    2017-12-01

    Full Text Available The study investigated the invariance properties of one-, two- and three-parameter logistic item response theory models. It examined the best fit among the one-parameter logistic (1PL), two-parameter logistic (2PL) and three-parameter logistic (3PL) IRT models for the SSCE, 2008 in Mathematics. It also investigated the degree of invariance of the IRT model-based item difficulty parameter estimates in the SSCE in Mathematics across different samples of examinees, and examined the degree of invariance of the IRT model-based item discrimination estimates in the SSCE in Mathematics across different samples of examinees. In order to achieve the set objectives, 6000 students (3000 males and 3000 females) were drawn from the population of 35262 who wrote the 2008 paper 1 Senior Secondary Certificate Examination (SSCE) in Mathematics organized by the National Examination Council (NECO). The item difficulty and item discrimination parameter estimates from CTT and IRT were tested for invariance using BILOG-MG 3, and correlation analysis was carried out using SPSS version 20. The research findings were that the two-parameter model IRT item difficulty and discrimination parameter estimates exhibited the invariance property consistently across different samples, and that the 2-parameter model was suitable for all samples of examinees, unlike the one-parameter and 3-parameter models.
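
    The three models differ only in which item parameters are free, as the item characteristic curves below show; the a (discrimination), b (difficulty) and c (guessing) values are illustrative, not estimates from the NECO data:

    ```python
    import math

    # Item characteristic curve in the 3PL form; the 2PL is the special
    # case c = 0, and the 1PL additionally fixes a across items.
    def p_correct(theta, a=1.0, b=0.0, c=0.0):
        return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

    theta = 0.5                              # examinee ability
    p1 = p_correct(theta, a=1.0)             # 1PL: common discrimination
    p2 = p_correct(theta, a=1.7)             # 2PL: item-specific discrimination
    p3 = p_correct(theta, a=1.7, c=0.2)      # 3PL: adds a guessing floor
    print(round(p1, 3), round(p2, 3), round(p3, 3))
    ```

    Invariance means that, within a model, the estimated a and b for an item should agree (up to scale) across different examinee samples; the study found this held most consistently for the 2PL.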

  11. Fitted curve parameters for the efficiency of a coaxial HPGe Detector

    International Nuclear Information System (INIS)

    Supian Samat

    1996-01-01

    Using Ngraph software, the parameters of various functions were determined by least squares analysis of fits to the experimental efficiencies ε_f of a coaxial HPGe detector for gamma rays in the energy range 59 keV to 1836 keV. When these parameters had been determined, their reliability was tested by the calculated goodness-of-fit parameter χ²_cal. It is shown that the function ln ε_f = Σ_{j=0}^{n} a_j (ln(E/E_0))^j, where n = 3, gives satisfactory results.
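
    A sketch of this fit, using the paper's functional form with n = 3 but invented efficiency points and reference energy, can be written as a polynomial fit in log-log space:

    ```python
    import numpy as np

    # Hypothetical efficiency points for a coaxial HPGe detector
    # (energies in keV); the fit follows the paper's form
    # ln(eff) = sum_{j=0..n} a_j * (ln(E/E_0))**j with n = 3.
    E0 = 1000.0
    E = np.array([59.0, 122.0, 344.0, 662.0, 1173.0, 1332.0, 1836.0])
    eff = np.array([0.030, 0.055, 0.040, 0.028, 0.019, 0.017, 0.013])

    x = np.log(E / E0)
    coeffs = np.polyfit(x, np.log(eff), 3)   # a_3 ... a_0 in polyval order
    fit = np.exp(np.polyval(coeffs, x))
    max_rel_err = np.max(np.abs(fit - eff) / eff)
    print(round(max_rel_err, 3))
    ```

    A χ² goodness-of-fit statistic over the residuals, as used in the paper, would then quantify whether n = 3 is sufficient.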

  12. Ground level enhancement (GLE) energy spectrum parameters model

    Science.gov (United States)

    Qin, G.; Wu, S.

    2017-12-01

    We study the ground level enhancement (GLE) events in solar cycle 23 using the four energy spectrum parameters, the normalization parameter C, low-energy power-law slope γ1, high-energy power-law slope γ2, and break energy E0, obtained by Mewaldt et al. (2012), who fit the observations to a double power-law equation. We divide the GLEs into two groups, one with strong acceleration by interplanetary (IP) shocks and one without strong acceleration, according to the conditions of the solar eruptions. We then fit the four parameters with solar event conditions to obtain models of the parameters for the two groups of GLEs separately, so as to establish a model of the energy spectrum of GLEs for future space weather prediction.
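
    One common double power-law (Band-type) form with these four parameters can be sketched as follows; the parameter values are illustrative rather than fitted GLE values, and the continuity factor ties the two power laws together at the break E_b = (γ2 − γ1)·E0:

    ```python
    import math

    # Double power-law spectrum: below the break it is C * E**-g1 *
    # exp(-E/E0); above E_b = (g2 - g1) * E0 it becomes a pure E**-g2
    # power law, matched so the function is continuous at the break.
    def fluence(E, C=1e6, g1=1.0, g2=3.0, E0=30.0):
        Eb = (g2 - g1) * E0
        if E <= Eb:
            return C * E ** -g1 * math.exp(-E / E0)
        # continuity factor evaluated at the break energy
        return C * E ** -g2 * Eb ** (g2 - g1) * math.exp(-(g2 - g1))

    print(round(fluence(60.0), 1))   # spectrum at the break, E_b = 60
    ```

    Modelling the parameters then means regressing C, γ1, γ2 and E0 on solar event conditions separately for the shock-accelerated and non-shock groups.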

  13. The 'fitting problem' in cosmology

    International Nuclear Information System (INIS)

    Ellis, G.F.R.; Stoeger, W.

    1987-01-01

    The paper considers the best way to fit an idealised exactly homogeneous and isotropic universe model to a realistic ('lumpy') universe; whether made explicit or not, some such approach of necessity underlies the use of the standard Robertson-Walker models as models of the real universe. Approaches based on averaging, normal coordinates and null data are presented, the latter offering the best opportunity to relate the fitting procedure to data obtainable by astronomical observations. (author)

  14. Parameter optimization for surface flux transport models

    Science.gov (United States)

    Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.

    2017-11-01

    Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.

  15. Fitting of alpha-efficiency versus quenching parameter by exponential functions in liquid scintillation counting

    International Nuclear Information System (INIS)

    Sosa, M.; Manjón, G.; Mantero, J.; García-Tenorio, R.

    2014-01-01

    The objective of this work is to propose an exponential fit for the low alpha-counting efficiency as a function of a sample quenching parameter using a Quantulus liquid scintillation counter. The sample quenching parameter in a Quantulus is the Spectral Quench Parameter of the External Standard, SQP(E), which is defined as the channel number below which 99% of the Compton spectrum generated by a gamma emitter (152Eu) lies. Although one usually finds a polynomial fit of the alpha counting efficiency in the literature, it is shown here that an exponential function is a better description. - Highlights: • We have studied quenching in alpha measurement by liquid scintillation counting. • We have reviewed the typical fitting of alpha counting efficiency versus the quenching parameter. • An exponential fit of the data is proposed as a better fit. • We consider that exponential fitting has a physical basis.
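
    One simple exponential form consistent with the paper's proposal is eff = exp(a + b·SQP(E)), which can be fitted linearly in log space; the (SQP(E), efficiency) points below are invented for the sketch, and the exact functional form used by the authors may differ:

    ```python
    import numpy as np

    # Illustrative quench curve: alpha counting efficiency rising with
    # the quench parameter SQP(E); all data points are hypothetical.
    sqp = np.array([650.0, 700.0, 750.0, 800.0, 850.0])
    eff = np.array([0.60, 0.75, 0.85, 0.92, 0.97])

    # Fitting ln(eff) = a + b * SQP turns the exponential model into a
    # straight-line least-squares problem.
    b, a = np.polyfit(sqp, np.log(eff), 1)
    pred = np.exp(a + b * sqp)
    max_rel_err = np.max(np.abs(pred - eff) / eff)
    print(round(b, 5), round(max_rel_err, 3))
    ```

    A polynomial in SQP(E) can always match the points with enough terms, but the exponential keeps the parameter count low, which is the paper's argument for it as the better description.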

  16. Fitting of alpha-efficiency versus quenching parameter by exponential functions in liquid scintillation counting

    Energy Technology Data Exchange (ETDEWEB)

    Sosa, M. [Departamento de Ingeniería Física, Campus León, Universidad de Guanajuato, 37150 León, Guanajuato (Mexico); Universidad de Sevilla, Departamento de Física Aplicada II, E.T.S. Arquitectura, Av. Reina Mercedes, 2, 41012 Sevilla (Spain); Manjón, G., E-mail: manjon@us.es [Universidad de Sevilla, Departamento de Física Aplicada II, E.T.S. Arquitectura, Av. Reina Mercedes, 2, 41012 Sevilla (Spain); Mantero, J.; García-Tenorio, R. [Universidad de Sevilla, Departamento de Física Aplicada II, E.T.S. Arquitectura, Av. Reina Mercedes, 2, 41012 Sevilla (Spain)

    2014-05-01

    The objective of this work is to propose an exponential fit for the low alpha-counting efficiency as a function of a sample quenching parameter using a Quantulus liquid scintillation counter. The sample quenching parameter in a Quantulus is the Spectral Quench Parameter of the External Standard (SQP(E)), defined as the channel number below which 99% of the Compton spectrum generated by a gamma emitter (¹⁵²Eu) lies. Although a polynomial fit of the alpha counting efficiency is usual in the literature, it is shown here that an exponential function is a better description. - Highlights: • We have studied quenching in alpha measurement by liquid scintillation counting. • We have reviewed the typical fitting of alpha counting efficiency versus the quenching parameter. • An exponential fit of the data is proposed as the better fit. • We consider that the exponential fit has a physical basis.

  17. Improving children's menus in community restaurants: best food for families, infants, and toddlers (Best Food FITS) intervention, South Central Texas, 2010-2014.

    Science.gov (United States)

    Crixell, Sylvia Hurd; Friedman, Bj; Fisher, Deborah Torrey; Biediger-Friedman, Lesli

    2014-12-24

    Approximately 32% of US children are overweight or obese. Restaurant and fast food meals contribute 18% of daily calories for children and adolescents aged 2 to 18 years. Changing children's menus may improve their diets. This case study describes Best Food for Families, Infants, and Toddlers (Best Food FITS), a community-based intervention designed to address childhood obesity. The objective of this study was to improve San Marcos children's access to healthy diets through partnerships with local restaurants, removing sugar-sweetened beverages, decreasing the number of energy-dense entrées, and increasing fruit and vegetable offerings on restaurant menus. San Marcos, Texas, the fastest growing US city, has more restaurants and fewer grocery stores than other Texas cities. San Marcos's population is diverse; 37.8% of residents and 70.3% of children are Hispanic. Overweight and obesity rates among school children exceed 50%; 40.3% of children live below the poverty level. This project received funding from the Texas Department of State Health Services Nutrition, Physical Activity, and Obesity Prevention Program to develop Best Food FITS. The case study consisted of developing a brand, engaging community stakeholders, reviewing existing children's menus in local restaurants, administering owner-manager surveys, collaborating with restaurants to improve menus, and assessing the process and outcomes of the intervention. Best Food FITS regularly participated in citywide health events and funded the construction of a teaching kitchen in a new community building where regular nutrition classes are held. Sixteen independent restaurants and 1 chain restaurant implemented new menus. Improving menus in restaurants can be a simple step toward changing children's food habits. The approach taken in this case study can be adapted to other communities. Minimal funding would be needed to facilitate development of promotional items to support brand recognition.

  18. An optimized knife-edge method for on-orbit MTF estimation of optical sensors using powell parameter fitting

    Science.gov (United States)

    Han, Lu; Gao, Kun; Gong, Chen; Zhu, Zhenyu; Guo, Yue

    2017-08-01

    On-orbit Modulation Transfer Function (MTF) is an important indicator for evaluating the performance of optical remote sensors on a satellite. There are many methods to estimate MTF, such as the pinhole method and the slit method. Among them, the knife-edge method is efficient, easy to use, and recommended in the ISO 12233 standard for acquiring the whole-frequency MTF curve. However, the accuracy of the algorithm is significantly affected by the accuracy of the Edge Spread Function (ESF) fitting, which limits its range of application. In this paper, an optimized knife-edge method using the Powell algorithm is proposed to improve the ESF fitting precision. The Fermi function is the most popular ESF fitting model, yet it is vulnerable to the initial values of its parameters. Owing to its simplicity and fast convergence, the Powell algorithm is applied to fit the parameters adaptively, with little sensitivity to the initial values. Numerical simulation results reveal the accuracy and robustness of the optimized algorithm under different SNR levels, edge directions, and leaning angles. Experimental results using images from the camera on the ZY-3 satellite show that this method is more accurate in MTF estimation than the standard knife-edge method of ISO 12233.
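
The core step can be sketched as minimizing the squared residuals of a Fermi-function ESF with scipy's Powell implementation. The edge samples here are synthetic, and the four-parameter Fermi form (amplitude, edge location, width, offset) is one common convention, not necessarily the paper's exact parameterization.

```python
import numpy as np
from scipy.optimize import minimize

def fermi_esf(x, a, b, c, d):
    # Fermi-function edge spread model: a / (1 + exp((b - x)/c)) + d
    return a / (1.0 + np.exp((b - x) / c)) + d

x = np.linspace(-10.0, 10.0, 81)          # pixel positions across the edge
y = fermi_esf(x, 1.0, 0.5, 1.2, 0.1)      # synthetic, noise-free ESF samples

sse = lambda p: np.sum((fermi_esf(x, *p) - y) ** 2)
res = minimize(sse, x0=(0.8, 0.0, 2.0, 0.0), method="Powell")
print(res.x)  # fitted (a, b, c, d)
```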

  19. Best Speed Fit EDF Scheduling for Performance Asymmetric Multiprocessors

    Directory of Open Access Journals (Sweden)

    Peng Wu

    2017-01-01

    Full Text Available In order to improve the performance of real-time systems, asymmetric multiprocessors have been proposed. The benefits of improved system performance and reduced power consumption from such architectures cannot be fully exploited unless suitable task scheduling and task allocation approaches are implemented at the operating system level. Unfortunately, most previous research on scheduling algorithms for performance asymmetric multiprocessors focuses on task priority assignment, simply assigning the highest-priority task to the fastest processor. In this paper, we propose BSF-EDF (best speed fit for earliest deadline first) for performance asymmetric multiprocessor scheduling. This approach chooses a suitable processor, rather than the fastest one, when allocating tasks. With the proposed BSF-EDF scheduling, we also derive an effective schedulability test.
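
The allocation idea can be illustrated with a toy sketch (the names and structure are hypothetical, not the paper's pseudocode): on a uniprocessor of speed s, an implicit-deadline task set is EDF-schedulable iff its total utilization does not exceed s, so "best speed fit" places a task on the slowest processor that still fits it.

```python
# Sketch of best-speed-fit allocation under an EDF utilization test.

def best_speed_fit(task_util, processors):
    """processors: list of (speed, allocated_utilization) pairs."""
    candidates = [
        (speed, used) for speed, used in processors
        if used + task_util <= speed  # EDF feasibility on a speed-s processor
    ]
    if not candidates:
        return None  # no processor can accept the task
    return min(candidates)  # slowest feasible processor: the best speed fit

procs = [(2.0, 1.5), (1.0, 0.2), (0.5, 0.1)]
print(best_speed_fit(0.3, procs))  # -> (0.5, 0.1): slowest that still fits
```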

  20. A method for model identification and parameter estimation

    International Nuclear Information System (INIS)

    Bambach, M; Heinkenschloss, M; Herty, M

    2013-01-01

    We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)

  1. Luminescence model with quantum impact parameter for low energy ions

    CERN Document Server

    Cruz-Galindo, H S; Martínez-Davalos, A; Belmont-Moreno, E; Galindo, S

    2002-01-01

    We have modified an analytical model of induced light production by energetic ions interacting in scintillating materials. The original model is based on the distribution of energy deposited by secondary electrons produced along the ion's track. The range of the scattered electrons, and thus the energy distribution, depends on a classical impact parameter between the electron and the ion's track. The only adjustable parameter of the model is the quenching density ρ_q. The modification presented here consists of proposing a quantum impact parameter that leads to a better fit of the model to the experimental data at low incident ion energies. The light output response of CsI(Tl) detectors to low energy ions (<3 MeV/A) is fitted with the modified model, and a comparison is made to the original model.

  2. Mathematical modelling of the sorption isotherms of quince

    Directory of Open Access Journals (Sweden)

    Mitrevski Vangelce

    2017-01-01

    Full Text Available The moisture adsorption isotherms of quince were determined at four temperatures (15, 30, 45, and 60°C) over a water activity range from 0.110 to 0.920 using the standard static gravimetric method. The experimental data were fitted with the generated three-parameter sorption isotherm models of Mitrevski et al. and the referent Anderson model, known in the scientific and engineering literature as the Guggenheim-Anderson-de Boer (GAB) model. In order to find which models give the best results, a large number of numerical experiments were performed. Several statistical criteria for estimation and selection of the best sorption isotherm model were then used. The performed statistical analysis shows that the generated three-parameter model M11 gave a better fit to the sorption data of quince than the referent three-parameter Anderson model.
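
As a sketch of this kind of three-parameter isotherm fitting, the GAB model can be fitted by nonlinear least squares. The moisture values below are synthetic, generated from assumed parameters purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, xm, c, k):
    # Guggenheim-Anderson-de Boer (GAB) three-parameter isotherm:
    # X = Xm*C*K*aw / ((1 - K*aw) * (1 - K*aw + C*K*aw))
    return xm * c * k * aw / ((1 - k * aw) * (1 - k * aw + c * k * aw))

aw = np.linspace(0.11, 0.92, 10)   # water activity range from the study
X = gab(aw, 0.08, 10.0, 0.85)      # synthetic equilibrium moisture content

# Bound K below 1 so K*aw < 1 over the whole activity range.
popt, _ = curve_fit(gab, aw, X, p0=(0.1, 5.0, 0.8), bounds=(0.0, [1.0, 100.0, 1.0]))
print(popt)  # fitted (Xm, C, K)
```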

  3. Modeling and Bayesian parameter estimation for shape memory alloy bending actuators

    Science.gov (United States)

    Crews, John H.; Smith, Ralph C.

    2012-04-01

    In this paper, we employ a homogenized energy model (HEM) for shape memory alloy (SMA) bending actuators. Additionally, we utilize a Bayesian method for quantifying parameter uncertainty. The system consists of an SMA wire attached to a flexible beam. As the actuator is heated, the beam bends, providing endoscopic motion. The model parameters are fit to experimental data using an ordinary least-squares approach. The uncertainty in the fitted model parameters is then quantified using Markov Chain Monte Carlo (MCMC) methods. The MCMC algorithm provides bounds on the parameters, which will ultimately be used in robust control algorithms. One purpose of the paper is to test the feasibility of the Random Walk Metropolis algorithm, the MCMC method used here.
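
The random-walk Metropolis idea can be shown on a toy one-parameter problem (this is a generic sketch, not the paper's SMA model): propose a perturbed parameter, accept with probability given by the posterior ratio, and summarize the chain.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 0.5, 200)  # synthetic observations, known sigma = 0.5

def log_post(theta):
    # Flat prior; Gaussian log-likelihood up to an additive constant.
    return -0.5 * np.sum((data - theta) ** 2) / 0.5**2

chain, theta = [], 0.0
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.1)              # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                                  # Metropolis accept
    chain.append(theta)
post = np.array(chain[1000:])                         # discard burn-in
print(post.mean(), post.std())                        # posterior mean and spread
```

The chain's spread is what provides the parameter bounds mentioned in the abstract.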

  4. An Empirical Study on Raman Peak Fitting and Its Application to Raman Quantitative Research.

    Science.gov (United States)

    Yuan, Xueyin; Mayanovic, Robert A

    2017-10-01

    Fitting experimentally measured Raman bands with theoretical model profiles is the basic operation for numerical determination of Raman peak parameters. In order to investigate the effects of peak modeling with various algorithms on peak fitting results, representative Raman bands of mineral crystals, glass, and fluids, as well as the emission lines from a fluorescent lamp (some measured under ambient conditions, others at elevated pressure and temperature), were fitted using Gaussian, Lorentzian, Gaussian-Lorentzian, Voigtian, Pearson type IV, and beta profiles. From the fitting results of the Raman bands investigated in this study, the fitted peak position, intensity, area, and full width at half-maximum (FWHM) values can vary significantly depending upon which peak profile function is used, and the most appropriate fitting profile should be selected depending upon the nature of the Raman bands. Specifically, the symmetric Raman bands of mineral crystals and non-aqueous fluids are best fit using Gaussian-Lorentzian or Voigtian profiles, whereas asymmetric Raman bands are best fit using Pearson type IV profiles. The asymmetric O-H stretching vibrations of H2O and the Raman bands of soda-lime glass are best fit using several Gaussian profiles, whereas the emission lines from a fluorescent lamp are best fit using beta profiles. Multiple peaks that are not clearly separated can be fit simultaneously, provided the residuals in the fitting of one peak do not affect the fitting of the remaining peaks to a significant degree. Once the resolution of the Raman spectrometer has been properly accounted for, our findings show that the precision in peak position and intensity can be improved significantly by fitting the measured Raman peaks with appropriate profiles. Nevertheless, significant errors in peak position and intensity were still observed in the results from fitting of weak and wide Raman
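
The profile-selection point can be illustrated by fitting the same synthetic band with a pure Gaussian and with a pseudo-Voigt (a common linear Gaussian-Lorentzian mix, used here as a simple stand-in for the Gaussian-Lorentzian profiles named above); the band and its parameters are invented for the sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, x0, w):
    # Gaussian with FWHM w.
    return a * np.exp(-4 * np.log(2) * (x - x0) ** 2 / w**2)

def lorentzian(x, a, x0, w):
    # Lorentzian with FWHM w.
    return a * w**2 / (4 * (x - x0) ** 2 + w**2)

def pseudo_voigt(x, a, x0, w, eta):
    # Linear Gaussian/Lorentzian mix with mixing fraction eta.
    return eta * lorentzian(x, a, x0, w) + (1 - eta) * gaussian(x, a, x0, w)

x = np.linspace(980.0, 1020.0, 400)                   # Raman shift axis (cm^-1)
band = pseudo_voigt(x, 100.0, 1000.0, 6.0, 0.4)       # synthetic Raman band

pg, _ = curve_fit(gaussian, x, band, p0=(90.0, 999.0, 5.0))
pv, _ = curve_fit(pseudo_voigt, x, band, p0=(90.0, 999.0, 5.0, 0.5))
res_g = np.sum((gaussian(x, *pg) - band) ** 2)
res_v = np.sum((pseudo_voigt(x, *pv) - band) ** 2)
print(res_g, res_v)  # the mismatched profile leaves a larger residual
```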

  5. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    International Nuclear Information System (INIS)

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-01-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. (paper)
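
The separable (variable projection) idea can be sketched on a toy sum-of-exponentials curve rather than a full compartment model: for fixed nonlinear rate constants the amplitudes are a linear least-squares solve, so the outer optimizer only searches the nonlinear dimensions. All data and names here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 10.0, 60)
y = 3.0 * np.exp(-0.4 * t) + 1.5 * np.exp(-1.5 * t)  # synthetic time-activity-like curve

def projected_sse(ks):
    # For fixed nonlinear rates, the amplitudes enter linearly: solve them
    # in closed form (linear least squares) and return the residual norm.
    X = np.column_stack([np.exp(-k * t) for k in ks])
    amps, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ amps
    return r @ r

# The outer fit is now only 2-dimensional instead of 4-dimensional.
res = minimize(projected_sse, x0=[0.2, 2.0], method="Nelder-Mead")
ks = sorted(res.x)
print(ks)  # recovered nonlinear rates; amplitudes follow from one more lstsq
```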

  6. Improving Children’s Menus in Community Restaurants: Best Food for Families, Infants, and Toddlers (Best Food FITS) Intervention, South Central Texas, 2010–2014

    Science.gov (United States)

    Friedman, BJ; Fisher, Deborah Torrey; Biediger-Friedman, Lesli

    2014-01-01

    Background Approximately 32% of US children are overweight or obese. Restaurant and fast food meals contribute 18% of daily calories for children and adolescents aged 2 to 18 years. Changing children’s menus may improve their diets. This case study describes Best Food for Families, Infants, and Toddlers (Best Food FITS), a community-based intervention designed to address childhood obesity. The objective of this study was to improve San Marcos children’s access to healthy diets through partnerships with local restaurants, removing sugar-sweetened beverages, decreasing the number of energy-dense entrées, and increasing fruit and vegetable offerings on restaurant menus. Community Context San Marcos, Texas, the fastest growing US city, has more restaurants and fewer grocery stores than other Texas cities. San Marcos’s population is diverse; 37.8% of residents and 70.3% of children are Hispanic. Overweight and obesity rates among school children exceed 50%; 40.3% of children live below the poverty level. Methods This project received funding from the Texas Department of State Health Services Nutrition, Physical Activity, and Obesity Prevention Program to develop Best Food FITS. The case study consisted of developing a brand, engaging community stakeholders, reviewing existing children’s menus in local restaurants, administering owner–manager surveys, collaborating with restaurants to improve menus, and assessing the process and outcomes of the intervention. Outcome Best Food FITS regularly participated in citywide health events and funded the construction of a teaching kitchen in a new community building where regular nutrition classes are held. Sixteen independent restaurants and 1 chain restaurant implemented new menus. Interpretation Improving menus in restaurants can be a simple step toward changing children’s food habits. The approach taken in this case study can be adapted to other communities. Minimal funding would be needed to facilitate development

  7. Nonlinear models for fitting growth curves of Nellore cows reared in the Amazon Biome

    Directory of Open Access Journals (Sweden)

    Kedma Nayra da Silva Marinho

    2013-09-01

    Full Text Available Growth curves of Nellore cows were estimated by comparing six nonlinear models: Brody, Logistic, two alternatives by Gompertz, Richards, and Von Bertalanffy. The models were fitted to weight-age data, from birth to 750 days of age, of 29,221 cows born between 1976 and 2006 in the Brazilian states of Acre, Amapá, Amazonas, Pará, Rondônia, Roraima and Tocantins. The models were fitted by the Gauss-Newton method. The goodness of fit of the models was evaluated using the mean square error, adjusted coefficient of determination, prediction error, and mean absolute error. Biological interpretation of the parameters was accomplished by plotting estimated weights versus the observed weight means, instantaneous growth rate, absolute maturity rate, relative instantaneous growth rate, inflection point, and the magnitude of the parameters A (asymptotic weight) and K (maturing rate). The Brody and Von Bertalanffy models fitted the weight-age data, but the other models did not. The average weight (A) and growth rate (K) were: 384.6±1.63 kg and 0.0022±0.00002 (Brody), and 313.40±0.70 kg and 0.0045±0.00002 (Von Bertalanffy). The Brody model provides a better goodness of fit than the Von Bertalanffy model.
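
This kind of growth-curve comparison can be sketched with standard parameterizations of the Brody and Von Bertalanffy models (the paper's exact parameterizations may differ); the weights below are synthetic, generated from a Brody curve so that the comparison has a known answer.

```python
import numpy as np
from scipy.optimize import curve_fit

def brody(t, A, b, K):
    # Brody: W(t) = A * (1 - b*exp(-K*t)); one common parameterization.
    return A * (1 - b * np.exp(-K * t))

def von_bertalanffy(t, A, b, K):
    # Von Bertalanffy: W(t) = A * (1 - b*exp(-K*t))**3
    return A * (1 - b * np.exp(-K * t)) ** 3

t = np.array([0.0, 90.0, 180.0, 270.0, 365.0, 550.0, 750.0])  # age (days)
w = brody(t, 384.6, 0.92, 0.0022)                              # synthetic weights

pb, _ = curve_fit(brody, t, w, p0=(400.0, 0.9, 0.003))
pv, _ = curve_fit(von_bertalanffy, t, w, p0=(320.0, 0.5, 0.005))
mse_b = np.mean((brody(t, *pb) - w) ** 2)
mse_v = np.mean((von_bertalanffy(t, *pv) - w) ** 2)
print(mse_b, mse_v)  # the generating model achieves the lower error
```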

  8. Simultaneous Parameters Identifiability and Estimation of an E. coli Metabolic Network Model

    Directory of Open Access Journals (Sweden)

    Kese Pontes Freitas Alberton

    2015-01-01

    Full Text Available This work proposes a procedure for simultaneous parameter identifiability and estimation in metabolic networks, in order to overcome difficulties associated with the lack of experimental data and the large number of parameters, a common scenario in the modeling of such systems. As a case study, the complex real problem of parameter identifiability of the Escherichia coli K-12 W3110 dynamic model was investigated, composed of 18 ordinary differential equations and 35 kinetic rates, containing 125 parameters. With the procedure, the model fit was improved for most of the measured metabolites, with 58 parameters estimated, including 5 unknown initial conditions. The results indicate that the simultaneous parameter identifiability and estimation approach in metabolic networks is appealing, since fitting the model to most of the measured metabolites was possible even when important measurements of intracellular metabolites and good initial estimates of parameters were not available.

  9. Musrfit-Real Time Parameter Fitting Using GPUs

    Science.gov (United States)

    Locans, Uldis; Suter, Andreas

    High transverse field μSR (HTF-μSR) experiments typically lead to rather large data sets, since it is necessary to follow the high frequencies present in the positron decay histograms. The analysis of these data sets can be very time consuming, usually due to the limited computational power of the hardware. To overcome the limited computing resources, the rotating reference frame transformation (RRF) is often used to reduce the data sets that need to be handled. This comes at a price the μSR community is typically not aware of: (i) due to the RRF transformation, the fitting parameter estimates are of poorer precision, i.e., more extended, expensive beamtime is needed; (ii) RRF introduces systematic errors which hamper the statistical interpretation of χ² or the maximum log-likelihood. We briefly discuss these issues in a non-exhaustive, practical way. The sole reason for the RRF transformation is sluggish computer power. Therefore, during this work, GPU (Graphics Processing Unit) based fitting was developed, which allows real-time full data analysis to be performed without RRF. GPUs have become increasingly popular in scientific computing in recent years. Due to their highly parallel architecture they provide the opportunity to accelerate many applications at considerably less cost than upgrading the CPU computational power. With the emergence of frameworks such as CUDA and OpenCL these devices have become more easily programmable. During this work GPU support was added to Musrfit, a data analysis framework for μSR experiments. The new fitting algorithm uses CUDA or OpenCL to offload the most time-consuming parts of the calculations to Nvidia or AMD GPUs. Using the current CPU implementation in Musrfit, parameter fitting can take hours for certain data sets, while the GPU version allows real-time data analysis on the same data sets. This work describes the challenges that arise in adding GPU support to Musrfit, as well as the results obtained.

  10. Repair models of cell survival and corresponding computer program for survival curve fitting

    International Nuclear Information System (INIS)

    Shen Xun; Hu Yiwei

    1992-01-01

    Some basic concepts and formulations of two repair models of survival, the incomplete repair (IR) model and the lethal-potentially lethal (LPL) model, are introduced. An IBM-PC computer program for survival curve fitting with these models was developed and applied to fit the survival of human melanoma cells HX118 irradiated at different dose rates. A comparison was made between the repair models and two non-repair models, the multitarget-single hit model and the linear-quadratic model, in the fitting and analysis of the survival-dose curves. It was shown that either the IR model or the LPL model can fit a set of survival curves at different dose rates with the same parameters and provide information on the repair capacity of cells. These two mathematical models could be very useful in quantitative studies of the radiosensitivity and repair capacity of cells.
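
As a minimal sketch of survival-curve fitting of the linear-quadratic kind mentioned above, the fit is usually done in log-survival space, where the LQ model is linear in its parameters; the doses and survival fractions below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_quadratic(D, alpha, beta):
    # LQ model: S(D) = exp(-(alpha*D + beta*D**2))
    return np.exp(-(alpha * D + beta * D**2))

D = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])  # dose (Gy)
S = linear_quadratic(D, 0.2, 0.03)                   # synthetic survival fractions

# Fit in log space, since survival spans several orders of magnitude.
popt, _ = curve_fit(lambda D, a, b: -(a * D + b * D**2), D, np.log(S), p0=(0.1, 0.01))
print(popt)  # fitted (alpha, beta)
```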

  11. Analysis of Statistical Distributions Used for Modeling Reliability and Failure Rate of Temperature Alarm Circuit

    International Nuclear Information System (INIS)

    EI-Shanshoury, G.I.

    2011-01-01

    Several statistical distributions are used to model various reliability and maintainability parameters; which distribution applies depends on the nature of the data being analyzed. This paper analyzes several statistical distributions used in reliability in order to identify the best-fitting distribution. The calculations rely on circuit quantity parameters obtained using the Relex 2009 computer program. The statistical analysis of ten different distributions indicated that the Weibull distribution gives the best-fitting distribution for modeling the reliability of the data set of the Temperature Alarm Circuit (TAC). However, the Exponential distribution is found to be the best-fitting distribution for modeling the failure rate.
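
The distribution-selection step can be sketched with maximum-likelihood fits compared by AIC (the paper used Relex; this is just the selection idea on synthetic failure times):

```python
import numpy as np
from scipy import stats

# Synthetic times-to-failure drawn from a Weibull; purely illustrative.
ttf = stats.weibull_min.rvs(1.8, scale=1000.0, size=300, random_state=42)

aic = {}
for name, dist in [("weibull", stats.weibull_min), ("exponential", stats.expon)]:
    params = dist.fit(ttf, floc=0)              # fix the location at zero
    ll = np.sum(dist.logpdf(ttf, *params))      # log-likelihood at the MLE
    k = len(params) - 1                         # free parameters (loc is fixed)
    aic[name] = 2 * k - 2 * ll                  # lower AIC = better fit
best = min(aic, key=aic.get)
print(best)
```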

  12. Seven-parameter statistical model for BRDF in the UV band.

    Science.gov (United States)

    Bai, Lu; Wu, Zhensen; Zou, Xiren; Cao, Yunhua

    2012-05-21

    A new semi-empirical seven-parameter BRDF model is developed in the UV band using experimentally measured data. The model is based on the five-parameter model of Wu and the fourteen-parameter model of Renhorn and Boreman. Surface scatter, bulk scatter and retro-reflection scatter are considered. An optimizing modeling method, the artificial immune network genetic algorithm, is used to fit the BRDF measurement data over a wide range of incident angles. The calculation time and accuracy of the five- and seven-parameter models are compared. After fixing the seven parameters, the model can well describe scattering data in the UV band.

  13. Extracting the noise spectral densities parameters of JFET transistor by modeling a nuclear electronics channel response

    International Nuclear Information System (INIS)

    Assaf, J.

    2009-07-01

    A mathematical model for the RMS noise of a JFET transistor has been realized. Fitting the model to the experimental results gives the noise spectral density values. The best fit was obtained for the model with three noise sources and the real preamplifier transfer function. After gamma irradiation, additional significant noise sources appeared, and two point defects were estimated through the fitting process. (author)

  14. Supersymmetry with prejudice: Fitting the wrong model to LHC data

    Science.gov (United States)

    Allanach, B. C.; Dolan, Matthew J.

    2012-09-01

    We critically examine interpretations of hypothetical supersymmetric LHC signals, fitting to alternative wrong models of supersymmetry breaking. The signals we consider are some of the most constraining on the sparticle spectrum: invariant mass distributions with edges and endpoints from the golden decay chain q̃ → qχ₂⁰ (→ l̃±l∓q) → χ₁⁰l⁺l⁻q. We assume a constrained minimal supersymmetric standard model (CMSSM) point to be the 'correct' one, but fit the signals instead with minimal gauge mediated supersymmetry breaking (mGMSB) models with a quasistable neutralino lightest supersymmetric particle, minimal anomaly mediation, and large volume string compactification models. Minimal anomaly mediation and the large volume scenario can be unambiguously discriminated against the CMSSM for the assumed signal and 1 fb⁻¹ of LHC data at √s = 14 TeV. However, mGMSB would not be discriminated on the basis of the kinematic endpoints alone. The best-fit point spectra of mGMSB and the CMSSM look remarkably similar, making experimental discrimination at the LHC based on the edges or Higgs properties difficult. However, using rate information for the golden chain should provide the additional separation required.

  15. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia

    2015-01-07

    We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is carried out by proposing parametrized coarse grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force matching methods to nonequilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.

  16. Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes

    Science.gov (United States)

    Leite, Walter L.; Stapleton, Laura M.

    2011-01-01

    In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…

  17. Testing backreaction effects with observational Hubble parameter data

    Science.gov (United States)

    Cao, Shu-Lei; Teng, Huan-Yu; Wan, Hao-Yi; Yu, Hao-Ran; Zhang, Tong-Jie

    2018-02-01

    The spatially averaged inhomogeneous Universe includes a kinematical backreaction term Q_D that is related to the averaged spatial Ricci scalar ⟨R⟩_D in the framework of general relativity. Under the assumption that Q_D and ⟨R⟩_D obey the scaling laws of the volume scale factor a_D, a direct coupling between them with a scaling index n is remarkable. In order to explore the generic properties of a backreaction model for explaining the accelerated expansion of the Universe, we exploit two metrics to describe the late-time Universe. Since the standard FLRW metric cannot precisely describe the late-time Universe on small scales, a template metric with an evolving curvature parameter κ_D(t) is employed. However, we doubt the validity of the prescription for κ_D, which motivates us to apply observational Hubble parameter data (OHD) to constrain the parameters in dust cosmology. First, for the FLRW metric, we obtain best-fit constraints of Ω_m^{D0} = 0.25 (+0.03/−0.03), n = 0.02 (+0.69/−0.66), and H_{D0} = 70.54 (+4.24/−3.97) km s⁻¹ Mpc⁻¹, and explore the evolution of the parameters. Second, in the template-metric context, by marginalizing over H_{D0} with a uniform prior, we obtain best-fit values of n = −1.22 (+0.68/−0.41) and Ω_m^{D0} = 0.12 (+0.04/−0.02). Moreover, we utilize three different Gaussian priors on H_{D0}, which result in different best fits of n but almost the same best-fit value of Ω_m^{D0} ≈ 0.12. The absolute constraints without marginalization over the parameter are also obtained: n = −1.1 (+0.58/−0.50) and Ω_m^{D0} = 0.13 ± 0.03. With these constraints, the evolution of the effective deceleration parameter q^D indicates that backreaction can account for the accelerated expansion of the Universe without invoking an extra dark energy component in the scaling-solution context. Nevertheless, the results also verify that the prescription for κ_D is insufficient and should be improved.
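
The OHD-fitting step can be sketched as a χ² minimization over H(z) data. The data points below are hypothetical stand-ins, and a flat-FLRW expansion law is used as a simple stand-in for the averaged/template-metric model of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical (z, H, sigma_H) points in km/s/Mpc, standing in for real OHD.
z     = np.array([0.1, 0.3, 0.5, 0.9, 1.3, 1.75])
H_obs = np.array([71.0, 78.0, 87.5, 109.0, 136.0, 171.0])
sig   = np.array([12.0, 9.0, 10.0, 15.0, 14.0, 13.0])

def H_model(z, H0, Om):
    # Flat-FLRW stand-in: H(z) = H0 * sqrt(Om*(1+z)^3 + 1 - Om)
    return H0 * np.sqrt(Om * (1 + z) ** 3 + 1 - Om)

chi2 = lambda p: np.sum(((H_obs - H_model(z, *p)) / sig) ** 2)
res = minimize(chi2, x0=(70.0, 0.3), method="Nelder-Mead")
print(res.x, res.fun)  # best-fit (H0, Om) and the chi-square at the minimum
```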

  18. Determination of kinetic and thermodynamic parameters that describe isothermal seed germination: A student research project

    Science.gov (United States)

    Hageseth, Gaylord T.

    1982-02-01

    Students under the supervision of a faculty member can collect data and fit the data to the theoretical mathematical model that describes the rate of isothermal seed germination. The best-fit parameters are interpreted as an initial substrate concentration, product concentration, and the autocatalytic reaction rate. The thermodynamic model enables one to calculate the activation energy for the substrate and product, the activation energy for the autocatalytic reaction, and the changes in enthalpy, entropy, and the Gibbs free energy. Turnip, lettuce, soybean, and radish seeds have been investigated. All data fit the proposed model.
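
The activation-energy step can be sketched with a standard Arrhenius fit: ln k is linear in 1/T, so a straight-line fit yields Ea. The rate constants below are synthetic, generated with an assumed Ea of 65 kJ/mol for illustration.

```python
import numpy as np

R = 8.314                                            # gas constant, J mol^-1 K^-1
T = np.array([285.0, 290.0, 295.0, 300.0, 305.0])    # temperatures (K)
k = 1e9 * np.exp(-65000.0 / (R * T))                 # synthetic rate constants

# Arrhenius: ln k = ln A - Ea/(R*T); the slope in 1/T gives -Ea/R.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R
print(Ea)  # ~65000 J/mol
```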

  19. Modelling population dynamics model formulation, fitting and assessment using state-space methods

    CERN Document Server

    Newman, K B; Morgan, B J T; King, R; Borchers, D L; Cole, D J; Besbeas, P; Gimenez, O; Thomas, L

    2014-01-01

    This book gives a unifying framework for estimating the abundance of open populations: populations subject to births, deaths and movement, given imperfect measurements or samples of the populations.  The focus is primarily on populations of vertebrates for which dynamics are typically modelled within the framework of an annual cycle, and for which stochastic variability in the demographic processes is usually modest. Discrete-time models are developed in which animals can be assigned to discrete states such as age class, gender, maturity,  population (within a metapopulation), or species (for multi-species models). The book goes well beyond estimation of abundance, allowing inference on underlying population processes such as birth or recruitment, survival and movement. This requires the formulation and fitting of population dynamics models.  The resulting fitted models yield both estimates of abundance and estimates of parameters characterizing the underlying processes.  

  20. FitSKIRT: genetic algorithms to automatically fit dusty galaxies with a Monte Carlo radiative transfer code

    Science.gov (United States)

    De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.

    2013-02-01

    We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage that they have better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi- dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies in the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need of human intervention and biasing. The high level of automation makes it an ideal tool to use on larger sets of observed data.
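    FitSKIRT itself couples GAlib to SKIRT; as a self-contained illustration of why a genetic algorithm copes well with a multimodal search space, here is a minimal GA on a toy two-parameter objective. Everything in this sketch (the objective, the "true" parameters, the GA settings) is illustrative and is not the FitSKIRT code.

    ```python
    import math
    import random

    random.seed(7)

    # Toy stand-in for the image chi^2 that a fitter like FitSKIRT minimises:
    # a quadratic bowl around the "true" parameters (3.0, -1.5) plus ripples
    # that create many local minima.
    def fitness(ind):
        x, y = ind
        return ((x - 3.0) ** 2 + (y + 1.5) ** 2
                + 0.3 * (math.sin(5 * x) ** 2 + math.sin(5 * y) ** 2))

    # Initial population drawn uniformly from broad parameter ranges
    pop = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(60)]

    for generation in range(120):
        pop.sort(key=fitness)
        elite = pop[:20]                                       # truncation selection
        children = []
        while len(children) < 40:
            p1, p2 = random.sample(elite, 2)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]      # blend crossover
            child = [g + random.gauss(0, 0.3) for g in child]  # Gaussian mutation
            children.append(child)
        pop = elite + children                                 # elitism keeps the best

    best = min(pop, key=fitness)
    ```

    Because the elite survive unchanged, the best fitness is monotonically non-increasing, while mutation and crossover keep exploring neighbouring basins, which is the property the abstract credits for handling Monte Carlo noise and degeneracies.
    
    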

  1. Optimization of Experimental Model Parameter Identification for Energy Storage Systems

    Directory of Open Access Journals (Sweden)

    Rosario Morello

    2013-09-01

    The smart grid approach is envisioned to take advantage of all available modern technologies in transforming the current power system to provide benefits to all stakeholders in the fields of efficient energy utilisation and of wide integration of renewable sources. Energy storage systems could help to solve some issues that stem from renewable energy usage, in terms of stabilizing the intermittent energy production, power quality and power peak mitigation. With the integration of energy storage systems into the smart grids, their accurate modeling becomes a necessity in order to gain robust real-time control of the network, in terms of stability and energy supply forecasting. In this framework, this paper proposes a procedure to identify the values of the battery model parameters that best fit experimental data, and to integrate the model, along with models of energy sources and electrical loads, into a complete framework representing a real-time smart grid management system. The proposed method is based on a hybrid optimisation technique, which makes combined use of a stochastic and a deterministic algorithm, has a low computational burden, and can therefore be repeated over time in order to account for parameter variations due to the battery's age and usage.
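    The paper's battery model and optimiser are not reproduced here; the sketch below only illustrates the hybrid idea, a stochastic global stage seeding a deterministic local stage. The relaxation-curve model V(t) = a·exp(-t/τ) + c and all numerical values are hypothetical.

    ```python
    import math
    import random

    random.seed(42)

    # Hypothetical battery relaxation curve: V(t) = a*exp(-t/tau) + c
    a_true, tau_true, c_true = 0.5, 12.0, 3.6
    t_data = list(range(0, 60, 2))
    v_data = [a_true * math.exp(-t / tau_true) + c_true for t in t_data]

    def sse(p):
        """Sum of squared errors of the candidate parameters against the data."""
        a, tau, c = p
        if tau <= 0:
            return float("inf")
        return sum((a * math.exp(-t / tau) + c - v) ** 2
                   for t, v in zip(t_data, v_data))

    # Stage 1 (stochastic): random search over broad bounds to seed the fit.
    best = min(([random.uniform(0, 2), random.uniform(1, 50), random.uniform(0, 5)]
                for _ in range(2000)), key=sse)

    # Stage 2 (deterministic): compass/coordinate search with shrinking steps.
    step = [0.1, 1.0, 0.1]
    for _ in range(300):
        improved = False
        for i in range(3):
            for delta in (step[i], -step[i]):
                cand = best[:]
                cand[i] += delta
                if sse(cand) < sse(best):
                    best, improved = cand, True
        if not improved:
            step = [s * 0.5 for s in step]   # refine once no move helps
    ```

    The cheap stage-2 refinement is what makes repeated re-identification over the battery's lifetime affordable, which is the design point the abstract emphasises.
    
    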

  2. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the familywise error rate.

  3. Multi-parameter-fitting procedure for photothermal infrared radiometry on multilayered and bulk-absorbing solids

    International Nuclear Information System (INIS)

    Dorr, Peter; Gruss, Christian

    2001-01-01

    Photothermal infrared radiometry has been used for the measurement of thermophysical, optical, and geometrical properties of multilayered samples of paint on a metallic substrate. A special data normalization is applied to reduce the number of sensitive parameters, which makes the identification task for the remaining parameters easier. The normalization stabilizes the evaluation of the photothermal signal and makes infrared radiometry more attractive for applications in the industrial environment. It is shown that modeling and multi-parameter fitting can be applied successfully to the normalized data for the determination of layer thicknesses. As a by-product, we can calculate some other physical properties of the sample. © 2001 American Institute of Physics

  4. The global electroweak Standard Model fit after the Higgs discovery

    CERN Document Server

    Baak, Max

    2013-01-01

    We present an update of the global Standard Model (SM) fit to electroweak precision data under the assumption that the new particle discovered at the LHC is the SM Higgs boson. In this scenario all parameters entering the calculations of electroweak precision observables are known, making it possible, for the first time, to over-constrain the SM at the electroweak scale and assert its validity. Within the SM the W boson mass and the effective weak mixing angle can be accurately predicted from the global fit. The results are compatible with, and exceed in precision, the direct measurements. An updated determination of the S, T and U parameters, which parametrize the oblique vacuum corrections, is given. The obtained values show good consistency with the SM expectation, and no direct signs of new physics are seen. We conclude with an outlook to the global electroweak fit for a future e+e- collider.

  5. Revisiting the Global Electroweak Fit of the Standard Model and Beyond with Gfitter

    CERN Document Server

    Flächer, Henning; Haller, J; Höcker, A; Mönig, K; Stelzer, J

    2009-01-01

    The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project...

  6. Fitting and benchmarking of Monte Carlo output parameters for iridium-192 high dose rate brachytherapy source

    International Nuclear Information System (INIS)

    Acquah, F.G.

    2011-01-01

    Brachytherapy, the use of radioactive sources for the treatment of tumours, is an important tool in radiation oncology. Accurate calculation of the dose delivered to malignant and normal tissues is a main responsibility of the medical physics staff. With the use of treatment planning system (TPS) computers now standard practice in radiation oncology departments, independent calculations to verify the results of these commercial TPSs are an important part of a good quality management system for brachytherapy implants. There are inherent errors in the dose distributions produced by these TPSs, owing to their failure to account for heterogeneity in the calculation algorithms, and the Monte Carlo (MC) method is a natural remedy for these shortcomings. In this study, functional forms were fitted to MC output parameters to reduce dose-calculation uncertainty, using the Matlab curve-fitting tools. This includes modification of the AAPM TG-43 parameters to accommodate new developments for rapid brachytherapy dose-rate calculation. Analytical computations were performed to hybridize the anisotropy function F(r,θ) and the radial dose function g(r) into a single new function f(r,θ) for the Nucletron microSelectron High Dose Rate ('new' or v2, mHDRv2) ^{192}Ir brachytherapy source. In order to minimize computation time and to improve the accuracy of manual calculations, the dosimetry function f(r,θ) uses fewer parameters and formulas in the fit. Using the MC outputs as the standard, the percentage errors of the fits were calculated and used to evaluate the average and maximum uncertainties. Dose-rate deviations between the MC data and the fit were also quantified as errors (E), which proved minimal. These results show that the dosimetry parameters from this study are in good agreement with the MC output parameters and better than results obtained from the literature. The work confirms a lot of promise in building robust

  7. Improving weather predictability by including land-surface model parameter uncertainty

    Science.gov (United States)

    Orth, Rene; Dutra, Emanuel; Pappenberger, Florian

    2016-04-01

    The land surface forms an important component of Earth system models and interacts nonlinearly with other parts such as the ocean and atmosphere. To capture the complex and heterogeneous hydrology of the land surface, land surface models include a large number of parameters affecting the coupling to other components of the Earth system model. Focusing on ECMWF's land-surface model HTESSEL, we present in this study a comprehensive parameter sensitivity evaluation using multiple observational datasets in Europe. We select six poorly constrained effective parameters (surface runoff effective depth, skin conductivity, minimum stomatal resistance, maximum interception, soil moisture stress function shape, total soil depth) and explore the sensitivity of model outputs such as soil moisture, evapotranspiration and runoff to these parameters, using uncoupled simulations and coupled seasonal forecasts. Additionally, we investigate the possibility of constructing ensembles from the multiple land surface parameters. In the uncoupled runs we find that minimum stomatal resistance and total soil depth have the most influence on model performance. Forecast skill scores are moreover sensitive to the same parameters as HTESSEL performance in the uncoupled analysis. We demonstrate the robustness of our findings by comparing multiple best-performing parameter sets and multiple randomly chosen parameter sets. We find better temperature and precipitation forecast skill with the best-performing parameter perturbations, demonstrating that model performance carries over from uncoupled (and hence less computationally demanding) to coupled settings. Finally, we construct ensemble forecasts from ensemble members derived with different best-performing parameterizations of HTESSEL. This incorporation of parameter uncertainty in the ensemble generation yields an increase in forecast skill, even beyond the skill of the default system. Orth, R., E. Dutra, and F. Pappenberger, 2016: Improving weather predictability by

  8. Type Ia Supernova Intrinsic Magnitude Dispersion and the Fitting of Cosmological Parameters

    Science.gov (United States)

    Kim, A. G.

    2011-02-01

    I present an analysis for fitting cosmological parameters from a Hubble diagram of a standard candle with unknown intrinsic magnitude dispersion. The dispersion is determined from the data, simultaneously with the cosmological parameters. This contrasts with the strategies used to date. The advantages of the presented analysis are that it is done in a single fit (it is not iterative), it provides a statistically founded and unbiased estimate of the intrinsic dispersion, and its cosmological-parameter uncertainties account for the intrinsic-dispersion uncertainty. Applied to Type Ia supernovae, my strategy provides a statistical measure to test for subtypes and assess the significance of any magnitude corrections applied to the calibrated candle. Parameter bias and differences between likelihood distributions produced by the presented and currently used fitters are negligibly small for existing and projected supernova data sets.
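    Kim's actual estimator is derived in the paper; the toy sketch below only shows the core idea of fitting the intrinsic dispersion simultaneously with everything else: the log-variance normalization term in -2 ln L is what lets the data determine σ_int rather than it being iterated by hand. The data and values here are synthetic and illustrative.

    ```python
    import math
    import random

    random.seed(1)

    # Synthetic Hubble-diagram residuals: per-object measurement error 0.10 mag
    # plus an unknown intrinsic dispersion (true value 0.15 mag, illustrative).
    sig_meas, sig_int_true, n_sn = 0.10, 0.15, 400
    resid = [random.gauss(0.0, math.hypot(sig_meas, sig_int_true))
             for _ in range(n_sn)]

    def neg2lnL(sig_int):
        """-2 ln L: the log(variance) term penalises inflating sig_int."""
        var = sig_meas ** 2 + sig_int ** 2
        return sum(r * r / var + math.log(var) for r in resid)

    # Profile the likelihood over candidate dispersions in a single pass
    grid = [i / 1000.0 for i in range(0, 400)]
    sig_int_fit = min(grid, key=neg2lnL)
    ```

    At the minimum, σ_meas² + σ_int² equals the mean squared residual, so σ_int is estimated without iteration; in a full analysis the same likelihood would be maximised jointly over σ_int and the cosmological parameters, propagating the dispersion uncertainty into the parameter errors.
    
    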

  9. Association between selected physical fitness parameters and esthetic competence in contemporary dancers.

    Science.gov (United States)

    Angioi, Manuela; Metsios, George S; Twitchett, Emily; Koutedakis, Yiannis; Wyon, Matthew

    2009-01-01

    The physical demands imposed on contemporary dancers by choreographers and performance schedules make their physical fitness just as important to them as skill development. Nevertheless, it remains to be confirmed which physical fitness components are associated with aesthetic competence. The aim of this study was to: 1. replicate and test a novel aesthetic competence tool for reliability, and 2. investigate the association between selected physical fitness components and aesthetic competence by using this new tool. Seventeen volunteers underwent a series of physical fitness tests (body composition, flexibility, muscular power and endurance, and aerobic capacity) and aesthetic competence assessments (seven individual criteria commonly used by selected dance companies). Inter-rater reliability of the aesthetic competence tool was very high (r = 0.96). There were significant correlations between the aesthetic competence score and jump ability and push-ups (r = 0.55 and r = 0.55, respectively). Stepwise backward multiple regression analysis revealed that the best predictor of aesthetic competence was push-ups (R(2) = 0.30, p = 0.03). Univariate analyses also revealed that the interaction of push-ups and jump ability improved the prediction power of aesthetic competence (R(2) = 0.44, p = 0.004). It is concluded that upper body muscular endurance and jump ability best predict aesthetic competence of the present sample of contemporary dancers. Further research is required to investigate the contribution of other components of aesthetic competence, including upper body strength, lower body muscular endurance, general coordination, and static and dynamic balance.

  10. Error estimation and global fitting in transverse-relaxation dispersion experiments to determine chemical-exchange parameters

    International Nuclear Information System (INIS)

    Ishima, Rieko; Torchia, Dennis A.

    2005-01-01

    Off-resonance effects can introduce significant systematic errors in R_2 measurements in constant-time Carr-Purcell-Meiboom-Gill (CPMG) transverse relaxation dispersion experiments. For an off-resonance chemical shift of 500 Hz, ^{15}N relaxation dispersion profiles obtained from experiment and computer simulation indicated a systematic error of ca. 3%. This error is three- to five-fold larger than the random error in R_2 caused by noise. Good estimates of the total R_2 uncertainty are critical in order to obtain accurate optimized chemical exchange parameters and their uncertainties derived from χ^2 minimization of a target function. Here, we present a simple empirical approach that provides a good estimate of the total error (systematic + random) in ^{15}N R_2 values measured for the HIV protease. The advantage of this empirical error estimate is that it is applicable even when some of the factors that contribute to the off-resonance error are not known. These errors are incorporated into a χ^2 minimization protocol, in which the Carver-Richards equation is used to fit the observed R_2 dispersion profiles, yielding optimized chemical exchange parameters and their confidence limits. Optimized parameters are also derived, using the same protein sample and data-fitting protocol, from ^1H R_2 measurements in which systematic errors are negligible. Although the ^1H and ^{15}N relaxation profiles of individual residues were well fit, the optimized exchange parameters had large uncertainties (confidence limits). In contrast, when a single pair of exchange parameters (the exchange lifetime, τ_ex, and the fractional population, p_a) was constrained to globally fit all R_2 profiles for residues in the dimer interface of the protein, confidence limits were less than 8% for all optimized exchange parameters. In addition, F-tests showed that the quality of the fits obtained using τ_ex and p_a as global parameters was not improved when these parameters were free to fit the R

  11. Fitness

    Science.gov (United States)

    http://www.girlshealth.gov/ Want to look and feel your best? What is physical fitness? Physical fitness means you can do everyday ...

  12. Difficulties in fitting the thermal response of atomic force microscope cantilevers for stiffness calibration

    International Nuclear Information System (INIS)

    Cole, D G

    2008-01-01

    This paper discusses the difficulties of calibrating atomic force microscope (AFM) cantilevers, in particular the effect that calibrating under light fluid loading (in air) versus heavy fluid loading (in water) has on the ability to use the thermal motion response to fit the model parameters that determine cantilever stiffness. For the light fluid-loading case, the resonant frequency and quality factor can easily be used to determine stiffness. Extending this approach to the heavy fluid-loading case is troublesome because of the low quality factor (high damping) caused by fluid loading. Simple calibration formulae are difficult to realize, and the best approach is often to curve-fit the thermal response, using the natural frequency and mass ratio as parameters, so that the curve fit's response is within some acceptable tolerance of the actual thermal response. The parameters can then be used to calculate the cantilever stiffness. However, the process of curve-fitting can lead to erroneous results unless suitable care is taken. A feedback model of the fluid-structure interaction between the unloaded cantilever and the hydrodynamic drag provides a framework for fitting a modeled thermal response to a measured response and for evaluating the parametric uncertainty of the fit. The cases of uncertainty in the natural frequency, in the mass ratio, and combined uncertainty are presented, and the implications for system identification and stiffness calibration using curve-fitting techniques are discussed. Finally, considerations and recommendations for the calibration of AFM cantilevers are given in light of the results of this paper.

  13. Using geometry to improve model fitting and experiment design for glacial isostasy

    Science.gov (United States)

    Kachuck, S. B.; Cathles, L. M.

    2017-12-01

    As scientists we routinely deal with models, which are geometric objects at their core - the manifestation of a set of parameters as predictions for comparison with observations. When the number of observations exceeds the number of parameters, the model is a hypersurface (the model manifold) in the space of all possible predictions. The object of parameter fitting is to find the parameters corresponding to the point on the model manifold as close to the vector of observations as possible. But the geometry of the model manifold can make this difficult. By curving, ending abruptly (where, for instance, parameters go to zero or infinity), and by stretching and compressing the parameters together in unexpected directions, it can be difficult to design algorithms that efficiently adjust the parameters. Even at the optimal point on the model manifold, parameters might not be individually resolved well enough to be applied to new contexts. In our context of glacial isostatic adjustment, models of sparse surface observations have a broad spread of sensitivity to mixtures of the earth's viscous structure and the surface distribution of ice over the last glacial cycle. This impedes precise statements about crucial geophysical processes, such as the planet's thermal history or the climates that controlled the ice age. We employ geometric methods developed in the field of systems biology to improve the efficiency of fitting (geodesic accelerated Levenberg-Marquardt) and to identify the maximally informative sources of additional data to make better predictions of sea levels and ice configurations (optimal experiment design). We demonstrate this in particular in reconstructions of the Barents Sea Ice Sheet, where we show that only certain kinds of data from the central Barents have the power to distinguish between proposed models.

  14. The level density parameters for fermi gas model

    International Nuclear Information System (INIS)

    Zuang Youxiang; Wang Cuilan; Zhou Chunmei; Su Zongdi

    1986-01-01

    Nuclear level densities are a crucial ingredient in statistical models, for instance in the calculations of widths, cross sections, and emitted-particle spectra for various reaction channels. In this work 667 sets of more reliable and new experimental data are adopted, which include the average level spacing D, the radiative capture width Γ_{γ0} at the neutron binding energy, and the cumulative level number N_0 at low excitation energy, published from 1973 to 1983. Based on the parameters given by Gilbert-Cameron and Cook, the physical quantities mentioned above are calculated. The calculated results deviate appreciably from the experimental values. In order to improve the fit, the parameters in the G-C formula are adjusted and a new set of level density parameters is obtained. The parameters in this work are more suitable for fitting the new measurements.

  15. Cosmological model-independent test of ΛCDM with two-point diagnostic by the observational Hubble parameter data

    Science.gov (United States)

    Cao, Shu-Lei; Duan, Xiao-Wei; Meng, Xiao-Lei; Zhang, Tong-Jie

    2018-04-01

    Aiming at exploring the nature of dark energy (DE), we use forty-three observational Hubble parameter data (OHD) measurements in the redshift range 0 < z ... . The binning methods turn out to be promising and are considered robust. By applying the two-point diagnostic to the binned data, we find that although the best-fit values of Omh^2 fluctuate as the continuous redshift intervals change, on average they are consistent with being constant within the 1σ confidence interval. Therefore, we conclude that the ΛCDM model cannot be ruled out.
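    The Omh^2 two-point diagnostic used in this record is simple to state: for any pair of Hubble measurements, Omh^2(z_i; z_j) = (h_i^2 - h_j^2) / ((1+z_i)^3 - (1+z_j)^3) with h = H/100 km s^{-1} Mpc^{-1}, and for flat ΛCDM it is constant and equal to Ω_m h^2. A minimal sketch checking this on a synthetic ΛCDM expansion history (the parameter values are illustrative):

    ```python
    import math

    def omh2(H_i, H_j, z_i, z_j):
        """Two-point Omh^2 diagnostic: constant (= Omega_m h^2) in flat LCDM."""
        h_i, h_j = H_i / 100.0, H_j / 100.0
        return (h_i ** 2 - h_j ** 2) / ((1 + z_i) ** 3 - (1 + z_j) ** 3)

    # Synthetic flat-LCDM H(z) with Omega_m = 0.3, H0 = 70 km/s/Mpc
    Om, H0 = 0.3, 70.0
    H = lambda z: H0 * math.sqrt(Om * (1 + z) ** 3 + 1 - Om)

    pairs = [(0.1, 0.5), (0.3, 1.0), (0.5, 2.0)]
    values = [omh2(H(zi), H(zj), zi, zj) for zi, zj in pairs]
    ```

    Any systematic drift of these pairwise values with redshift, beyond the quoted 1σ fluctuations, would signal a departure from ΛCDM, which is exactly what the binned analysis above tests for.
    
    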

  16. Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.

    Science.gov (United States)

    Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei

    2015-02-01

    This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.

  17. Ensemble Kinetic Modeling of Metabolic Networks from Dynamic Metabolic Profiles

    Directory of Open Access Journals (Sweden)

    Gengjie Jia

    2012-11-01

    Kinetic modeling of metabolic pathways has important applications in metabolic engineering, but significant challenges still remain. The difficulties faced vary from finding best-fit parameters in a highly multidimensional search space to incomplete parameter identifiability. To meet some of these challenges, an ensemble modeling method is developed for characterizing a subset of kinetic parameters that give statistically equivalent goodness-of-fit to time series concentration data. The method is based on the incremental identification approach, where the parameter estimation is done in a step-wise manner. Numerical efficacy is achieved by reducing the dimensionality of parameter space and using efficient random parameter exploration algorithms. The shift toward using model ensembles, instead of the traditional "best-fit" models, is necessary to directly account for model uncertainty during the application of such models. The performance of the ensemble modeling approach has been demonstrated in the modeling of a generic branched pathway and the trehalose pathway in Saccharomyces cerevisiae using generalized mass action (GMA) kinetics.

  18. Application of Time-series Model to Predict Groundwater Quality Parameters for Agriculture: (Plain Mehran Case Study)

    Science.gov (United States)

    Mehrdad Mirsanjari, Mir; Mohammadyari, Fatemeh

    2018-03-01

    Groundwater is an important water source, particularly in arid and semi-arid regions with deficient surface water. Forecasting of hydrological variables is a useful tool in water resources management, and time-series methods are an efficient means of producing such forecasts. In this study, data on qualitative parameters (electrical conductivity and sodium adsorption ratio) from 17 groundwater wells in the Mehran Plain were used to model the trend of these parameters over time. Using the selected model, the qualitative parameters of the groundwater are predicted for the next seven years. Data from 2003 to 2016 were collected and fitted by AR, MA, ARMA, ARIMA and SARIMA models, and the best model was selected using the Akaike information criterion (AIC) and the correlation coefficient. After modeling the parameters, maps of agricultural land use in 2016 and 2023 were generated and the changes between these years were studied. Based on the results, the average predicted SAR (sodium adsorption ratio) in all wells will increase by 2023 compared to 2016, while the average EC (electrical conductivity) will increase in the ninth and fifteenth wells and decrease in the others. The results indicate that the quality of groundwater for agriculture in the Mehran Plain will decline over the next seven years.
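    The study selects among AR/MA/ARIMA/SARIMA variants by AIC; as a minimal illustration of the forecasting step only, the sketch below fits a plain AR(1) to a synthetic quality series (standing in for EC) by ordinary least squares and iterates it seven steps ahead, mirroring the seven-year horizon. All numbers are illustrative and unrelated to the Mehran Plain data.

    ```python
    import random

    random.seed(0)

    # AR(1) model for a groundwater-quality series: x_t = c + phi*x_{t-1} + e_t
    phi_true, c_true = 0.7, 30.0          # stationary mean = c/(1-phi) = 100
    x = [100.0]
    for _ in range(500):
        x.append(c_true + phi_true * x[-1] + random.gauss(0, 2.0))

    # Ordinary least squares on lagged pairs (x_{t-1}, x_t)
    pairs = list(zip(x[:-1], x[1:]))
    n = len(pairs)
    mx = sum(p for p, _ in pairs) / n
    my = sum(q for _, q in pairs) / n
    phi = (sum((p - mx) * (q - my) for p, q in pairs)
           / sum((p - mx) ** 2 for p, _ in pairs))
    c = my - phi * mx

    # Iterate the fitted model forward seven steps (seven "years")
    forecast = [x[-1]]
    for _ in range(7):
        forecast.append(c + phi * forecast[-1])
    forecast = forecast[1:]
    ```

    In practice one would fit all candidate orders, compare their AIC values, and forecast with the winner; the recursion above is the same regardless of which model wins.
    
    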

  19. Ammonium removal from aqueous solutions by clinoptilolite: determination of isotherm and thermodynamic parameters and comparison of kinetics by the double exponential model and conventional kinetic models.

    Science.gov (United States)

    Tosun, Ismail

    2012-03-01

    The adsorption isotherm, adsorption kinetics, and thermodynamic parameters of ammonium removal from aqueous solution by clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests were analyzed with four two-parameter (Freundlich, Langmuir, Temkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. The D-R and R-P isotherms were the models that best fitted the experimental data among the two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model; each model resulted in a coefficient of determination (R^2) above 0.989 with an average relative error lower than 5%. The double exponential model (DEM) showed that the adsorption process develops in two stages, a rapid and a slow phase. Changes in the standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of the ammonium-clinoptilolite system were estimated using the thermodynamic equilibrium coefficients.
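    Of the two-parameter isotherms compared above, the Langmuir form has a convenient linearisation that makes the fitting transparent: C/q = C/q_m + 1/(K q_m), so a straight-line fit of C/q against C recovers q_m from the slope and K from the intercept. A self-contained sketch on exact synthetic data (the values are illustrative, not the paper's measurements):

    ```python
    # Langmuir isotherm: q = qm*K*C / (1 + K*C)
    qm_true, K_true = 2.5, 0.8          # illustrative monolayer capacity and constant
    C = [0.5, 1.0, 2.0, 4.0, 8.0]       # equilibrium concentrations
    q = [qm_true * K_true * c / (1 + K_true * c) for c in C]

    # Linearised form: y = C/q = (1/qm)*C + 1/(K*qm); fit slope and intercept.
    y = [c / qi for c, qi in zip(C, q)]
    n = len(C)
    mx = sum(C) / n
    my = sum(y) / n
    slope = (sum((c - mx) * (yi - my) for c, yi in zip(C, y))
             / sum((c - mx) ** 2 for c in C))
    intercept = my - slope * mx

    qm_fit = 1.0 / slope                # monolayer capacity
    K_fit = slope / intercept           # Langmuir constant
    ```

    The D-R isotherm preferred in the abstract can be handled the same way via its own linearisation (ln q against the squared Polanyi potential), which is how the adsorption energy E of about 7 kJ/mol is typically extracted.
    
    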

  1. Can the Stephani model be an alternative to FRW accelerating models?

    International Nuclear Information System (INIS)

    Godlowski, Wlodzimierz; Stelmach, Jerzy; Szydlowski, Marek

    2004-01-01

    A class of Stephani cosmological models as a prototype of a non-homogeneous universe is considered. The non-homogeneity can lead to accelerated evolution, which is now observed from the SNe Ia data. Three samples of type Ia supernovae obtained by Perlmutter et al, Tonry et al and Knop et al are taken into account. Different statistical methods (best fits as well as the maximum likelihood method) are used to estimate the model parameters. The Stephani model is considered as an alternative to the ΛCDM model in the explanation of the present acceleration of the universe. The model explains the acceleration of the universe at the same level of accuracy as the ΛCDM model (the χ² statistics are comparable). From the best-fit analysis it follows that the Stephani model is characterized by a higher value of the density parameter Ωm0 than the ΛCDM model. It is also shown that the model is consistent with the location of the CMB peaks.
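A toy version of such a best-fit analysis is a χ² grid scan of the matter density of a flat FRW model against supernova distance moduli; the sample, redshifts and errors below are synthetic illustrations, not the Perlmutter/Tonry/Knop data.

```python
import numpy as np

C = 299792.458  # speed of light, km/s

def mu_model(z, om, h0=70.0):
    # Distance modulus for a flat FRW model with matter density om,
    # via trapezoidal integration of 1/E(z).
    grid = np.linspace(0.0, z.max(), 2000)
    inv_e = 1.0 / np.sqrt(om * (1.0 + grid)**3 + (1.0 - om))
    dc = np.concatenate(
        ([0.0], np.cumsum(0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(grid))))
    dl = (1.0 + z) * (C / h0) * np.interp(z, grid, dc)
    return 5.0 * np.log10(dl) + 25.0

# Synthetic supernova sample generated from om = 0.3 with 0.1 mag scatter
rng = np.random.default_rng(1)
z_sn = np.sort(rng.uniform(0.05, 1.0, 40))
mu_obs = mu_model(z_sn, 0.3) + 0.1 * rng.standard_normal(40)

oms = np.linspace(0.05, 0.95, 91)
chi2 = [np.sum(((mu_obs - mu_model(z_sn, om)) / 0.1)**2) for om in oms]
best = oms[int(np.argmin(chi2))]
print(best, min(chi2))
```

Comparing the minimum χ² of two competing models on the same data is the "χ² statistics are comparable" criterion mentioned in the abstract.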

  2. The use of Stress Tensor Discriminator Faults in separating heterogeneous fault-slip data with best-fit stress inversion methods. II. Compressional stress regimes

    Science.gov (United States)

    Tranos, Markos D.

    2018-02-01

    Synthetic heterogeneous fault-slip data as driven by Andersonian compressional stress tensors were used to examine the efficiency of best-fit stress inversion methods in separating them. Heterogeneous fault-slip data are separated only if (a) they have been driven by stress tensors defining 'hybrid' compression (R constitute a necessary discriminatory tool for the establishment and comparison of two compressional stress tensors determined by a best-fit stress inversion method. The best-fit stress inversion methods are not able to determine more than one 'real' compressional stress tensor, as far as the thrust stacking in an orogeny is concerned. They can only possibly discern stress differences in the late-orogenic faulting processes, but not between the main- and late-orogenic stages.

  3. Gfitter - Revisiting the global electroweak fit of the Standard Model and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Flaecher, H.; Hoecker, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland)]; Goebel, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Hamburg Univ. (Germany). Inst. fuer Experimentalphysik]; Haller, J. [Hamburg Univ. (Germany). Inst. fuer Experimentalphysik]; Moenig, K.; Stelzer, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]

    2008-11-15

    The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project, and presents state-of-the-art results for the global electroweak fit in the Standard Model, and for a model with an extended Higgs sector (2HDM). Numerical and graphical results for fits with and without including the constraints from the direct Higgs searches at LEP and the Tevatron are given. Perspectives for future colliders are analysed and discussed. Including the direct Higgs searches, we find M_H = 116.4 (+18.3/-1.3) GeV, and the 2σ and 3σ allowed regions [114,145] GeV and [[113,168] and [180,225]] GeV, respectively.

  4. The universal Higgs fit

    DEFF Research Database (Denmark)

    Giardino, P. P.; Kannike, K.; Masina, I.

    2014-01-01

    We perform a state-of-the-art global fit to all Higgs data. We synthesise them into a 'universal' form, which allows one to easily test any desired model. We apply the proposed methodology to extract from data the Higgs branching ratios, production cross sections, couplings and to analyse composite...... Higgs models, models with extra Higgs doublets, supersymmetry, extra particles in the loops, anomalous top couplings, and invisible Higgs decays into Dark Matter. Best-fit regions lie around the Standard Model predictions and are well approximated by our 'universal' fit. Latest data exclude the dilaton...... as an alternative to the Higgs, and disfavour fits with negative Yukawa couplings. We derive for the first time the SM Higgs boson mass from the measured rates, rather than from the peak positions, obtaining M_h = 124.4 +/- 1.6 GeV.

  5. Testing backreaction effects with observational Hubble parameter data

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Shu-Lei; Teng, Huan-Yu [Beijing Normal University, Department of Astronomy, Beijing (China); Wan, Hao-Yi [Beijing Normal University, Department of Astronomy, Beijing (China); National Astronomical Observatories, Chinese Academy of Sciences, Beijing (China); Yu, Hao-Ran [Shanghai Jiao Tong University, Tsung-Dao Lee Institute, Shanghai (China); Zhang, Tong-Jie [Dezhou University, Dezhou (China); Beijing Normal University, Department of Astronomy, Beijing (China)

    2018-02-15

    The spatially averaged inhomogeneous Universe includes a kinematical backreaction term Q_D that is related to the averaged spatial Ricci scalar ⟨R⟩_D in the framework of general relativity. Under the assumption that Q_D and ⟨R⟩_D obey the scaling laws of the volume scale factor a_D, a direct coupling between them with a scaling index n is remarkable. In order to explore the generic properties of a backreaction model for explaining the accelerated expansion of the Universe, we exploit two metrics to describe the late-time Universe. Since the standard FLRW metric cannot precisely describe the late-time Universe on small scales, the template metric with an evolving curvature parameter κ_D(t) is employed. However, we doubt the validity of the prescription for κ_D, which motivates us to apply observational Hubble parameter data (OHD) to constrain parameters in dust cosmology. First, for the FLRW metric, we obtain the best-fit constraints Ω_m^D0 = 0.25 +0.03/-0.03, n = 0.02 +0.69/-0.66, and H_D0 = 70.544 +4.24/-3.97 km s^-1 Mpc^-1, and the evolutions of the parameters are explored. Second, in the template metric context, by marginalizing over H_D0 with a uniform prior, we obtain the best-fit values n = -1.22 +0.68/-0.41 and Ω_m^D0 = 0.12 +0.04/-0.02. Moreover, we utilize three different Gaussian priors of H_D0, which result in different best fits of n, but almost the same best-fit value Ω_m^D0 ≈ 0.12. The absolute constraints without marginalization of parameters are also obtained: n = -1.1 +0.58/-0.50 and Ω_m^D0 = 0.13 ± 0.03. With these constraints, the evolutions of the effective deceleration parameter q^D indicate that the backreaction can account for the accelerated expansion of the Universe without involving extra

  6. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    Directory of Open Access Journals (Sweden)

    Jonathan R Karr

    2015-05-01

    Full Text Available Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.

  7. Global parameter estimation for thermodynamic models of transcriptional regulation.

    Science.gov (United States)

    Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N

    2013-07-15

    Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
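The local-versus-global contrast can be illustrated on a multimodal test function. Here SciPy's differential evolution stands in for the global CMA-ES method (an assumption for the sake of a self-contained sketch, not the paper's implementation) against a local Nelder-Mead run started far from the global optimum.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def rastrigin(x):
    # Standard multimodal benchmark; global minimum 0 at the origin.
    x = np.asarray(x)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

x0 = np.array([3.3, -2.7])  # deliberately far from the global optimum
local = minimize(rastrigin, x0, method="Nelder-Mead")
glob = differential_evolution(rastrigin, [(-5.12, 5.12)] * 2, seed=0)
print(local.fun, glob.fun)
```

The local method converges to a nearby local minimum, while the global search reaches (essentially) the global one, mirroring the paper's finding that global methods pay off when the fitness landscape is rugged.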

  8. An automatic scaling method for obtaining the trace and parameters from oblique ionogram based on hybrid genetic algorithm

    Science.gov (United States)

    Song, Huan; Hu, Yaogai; Jiang, Chunhua; Zhou, Chen; Zhao, Zhengyu; Zou, Xianjian

    2016-12-01

    Scaling the oblique ionogram plays an important role in obtaining the ionospheric structure at the midpoint of the oblique sounding path. This paper proposes an automatic scaling method to extract the trace and parameters of the oblique ionogram based on a hybrid genetic algorithm (HGA). The ten extracted parameters come from the F2 layer and the Es layer, such as the maximum observation frequency, critical frequency, and virtual height. The method adopts the quasi-parabolic (QP) model to describe the F2 layer's electron density profile, which is used to synthesize the trace. It utilizes the secant theorem, Martyn's equivalent path theorem, image processing technology, and the echoes' characteristics to determine the best-fit values of seven parameters and the initial values of the three remaining QP-model parameters, whose search spaces are the input data needed by the HGA. The HGA then searches for the best-fit values of these three parameters based on the fitness between the synthesized trace and the real trace. To verify the performance of the method, 240 oblique ionograms were scaled and their results compared with manual scaling results and with the inversion results of the corresponding vertical ionograms. The comparison shows that the scaling results are accurate, or at least adequate, 60-90% of the time.

  9. Fitness function and nonunique solutions in x-ray reflectivity curve fitting: cross-error between surface roughness and mass density

    International Nuclear Information System (INIS)

    Tiilikainen, J; Bosund, V; Mattila, M; Hakkarainen, T; Sormunen, J; Lipsanen, H

    2007-01-01

    Nonunique solutions of the x-ray reflectivity (XRR) curve-fitting problem were studied by modelling layer structures with neural networks and designing a fitness function to handle the nonidealities of measurements. Modelled atomic-layer-deposited aluminium oxide film structures were used in the simulations to calculate XRR curves based on Parratt's formalism. This approach reduced the dimensionality of the parameter space and allowed the use of fitness landscapes in the study of nonunique solutions. Fitness landscapes, where the height in a map represents the fitness value as a function of the process parameters, revealed tracks where the local fitness optima lie. The tracks were projected onto the physical parameter space, thus allowing the construction of the cross-error equation between weakly determined parameters, i.e. between the mass density and the surface roughness of a layer. The equation gives the minimum error for the other parameters, which is a consequence of the nonuniqueness of the solution if noise is present. Furthermore, the existence of a possible unique solution in a certain parameter range was found to be dependent on the layer thickness and the signal-to-noise ratio.
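The "track" of near-optimal solutions between weakly determined parameters can be reproduced with a deliberately simplified forward model in which two parameters enter only through their product, a stand-in for the density-roughness cross-error (this is not Parratt's formalism, just an illustration of the landscape geometry).

```python
import numpy as np

# Toy forward model: the two parameters a and b enter only through a * b,
# so the fit cannot determine them individually.
x = np.linspace(0.0, 1.0, 50)
y_obs = 2.0 * x                              # generated with a * b = 2

a_grid = np.linspace(0.5, 4.0, 200)
b_grid = np.linspace(0.5, 4.0, 200)
A, B = np.meshgrid(a_grid, b_grid)
sse = ((A[..., None] * B[..., None] * x - y_obs) ** 2).sum(axis=-1)

# the near-optimal "track" in the fitness landscape traces a * b ≈ 2
ii, jj = np.where(sse < 0.02)
products = (A * B)[ii, jj]
print(products.size, products.min(), products.max())
```

Every point on the track fits the data almost equally well, which is exactly the kind of nonuniqueness the abstract's cross-error equation quantifies.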

  10. Simultaneous fitting of statistical-model parameters to symmetric and asymmetric fission cross sections

    International Nuclear Information System (INIS)

    Mancusi, D; Charity, R J; Cugnon, J

    2013-01-01

    The de-excitation of compound nuclei has been successfully described for several decades by means of statistical models. However, accurate predictions require some fine-tuning of the model parameters. This task can be simplified by studying several entrance channels, which populate different regions of the parameter space of the compound nucleus. Fusion reactions play an important role in this strategy because they minimise the uncertainty on the entrance channel by fixing the mass, charge and excitation energy of the compound nucleus. If incomplete fusion is negligible, the only uncertainty on the compound nucleus comes from the spin distribution. However, some de-excitation channels, such as fission, are quite sensitive to spin. Other entrance channels can then be used to discriminate between equivalent parameter sets. The focus of this work is on fission and intermediate-mass-fragment emission cross sections of compound nuclei with 70 ≲ A ≲ 240. The statistical de-excitation model is GEMINI++. The choice of the observables is natural in the framework of GEMINI++, which describes fragment emission using a fission-like formalism. Equivalent parameter sets for fusion reactions can be resolved using the spallation entrance channel. This promising strategy can lead to the identification of a minimal set of physical ingredients necessary for a unified quantitative description of nuclear de-excitation.

  11. An experimental study of symmetric and asymmetric peak-fitting parameters for alpha-particle spectrometry

    International Nuclear Information System (INIS)

    Martin Sanchez, A.; Vera Tome, F.; Caceres Marzal, D.; Bland, C.J.

    1994-01-01

    A pulse-height spectrum of alpha-particle emissions at discrete energies can be fitted by the peak-shape functions generated by combining asymmetric truncated exponential functions with a symmetric Gaussian distribution. These functions have been applied successfully by several workers. A correlation was previously found between the variance of the symmetric Gaussian portion of the fitting function, and the parameter characterising the principal exponential tailing function. The results of a more detailed experimental study are reported, which involve varying the angle and the distance between the source and the detector. This analysis shows that the parameters of the symmetric and asymmetric parts of the fitted functions seem to depend on either the detector or the source. These parameters are influenced by the energy loss suffered by the alpha-particles as well as by the efficiency of charge collection in the solid-state detector. (orig.)

  12. The issue of statistical power for overall model fit in evaluating structural equation models

    Directory of Open Access Journals (Sweden)

    Richard HERMIDA

    2015-06-01

    Full Text Available Statistical power is an important concept for psychological research. However, examining the power of a structural equation model (SEM) is rare in practice. This article provides an accessible review of the concept of statistical power for the Root Mean Square Error of Approximation (RMSEA) index of overall model fit in structural equation modeling. By way of example, we examine the current state of power in the literature by reviewing studies in top Industrial-Organizational (I/O) Psychology journals using SEMs. Results indicate that in many studies power is very low, which implies acceptance of invalid models. Additionally, we examined methodological situations which may have an influence on the statistical power of SEMs. Results showed that power varies significantly as a function of model type and whether or not the model is the main model for the study. Finally, results indicated that power is significantly related to the model fit statistics used in evaluating SEMs. The results from this quantitative review imply that researchers should be more vigilant with respect to power in structural equation modeling. We therefore conclude by offering methodological best practices to increase confidence in the interpretation of structural equation modeling results with respect to statistical power issues.

  13. Cognitive Models of Risky Choice: Parameter Stability and Predictive Accuracy of Prospect Theory

    Science.gov (United States)

    Glockner, Andreas; Pachur, Thorsten

    2012-01-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are…

  14. Constant-parameter capture-recapture models

    Science.gov (United States)

    Brownie, C.; Hines, J.E.; Nichols, J.D.

    1986-01-01

    Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.

  15. Hybrid artificial bee colony algorithm for parameter optimization of five-parameter bidirectional reflectance distribution function model.

    Science.gov (United States)

    Wang, Qianqian; Zhao, Jing; Gong, Yong; Hao, Qun; Peng, Zhong

    2017-11-20

    A hybrid artificial bee colony (ABC) algorithm inspired by the best-so-far solution and bacterial chemotaxis was introduced to optimize the parameters of the five-parameter bidirectional reflectance distribution function (BRDF) model. To verify the performance of the hybrid ABC algorithm, we measured BRDF of three kinds of samples and simulated the undetermined parameters of the five-parameter BRDF model using the hybrid ABC algorithm and the genetic algorithm, respectively. The experimental results demonstrate that the hybrid ABC algorithm outperforms the genetic algorithm in convergence speed, accuracy, and time efficiency under the same conditions.

  16. Goodness-of-Fit versus Significance: A CAPM Selection with Dynamic Betas Applied to the Brazilian Stock Market

    Directory of Open Access Journals (Sweden)

    André Ricardo de Pinho Ronzani

    2017-12-01

    Full Text Available In this work, a Capital Asset Pricing Model (CAPM) with time-varying betas is considered. These betas evolve over time, conditional on financial and non-financial variables. Indeed, the model proposed by Adrian and Franzoni (2009) is adapted to assess the behavior of some selected Brazilian equities. For each equity, several models are fitted, and the best model is chosen based on goodness-of-fit tests and parameter significance. Finally, using the selected dynamic models, VaR (Value-at-Risk) measures are calculated. We conclude that the CAPM with time-varying betas provides less conservative VaR measures than those based on the CAPM with static betas or historical VaR.
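A rough sketch of the time-varying-beta idea: rolling-window OLS betas are a crude stand-in for the Adrian-Franzoni state-space betas, and the historical VaR here is the simplest of the measures the paper compares. All return series below are simulated, not Brazilian equity data.

```python
import numpy as np

# Synthetic daily excess returns with a slowly drifting true beta
rng = np.random.default_rng(42)
T = 500
mkt = 0.01 * rng.standard_normal(T)              # market excess returns
beta_true = np.linspace(0.8, 1.6, T)             # drifting sensitivity
ret = beta_true * mkt + 0.005 * rng.standard_normal(T)

def ols_beta(x, y):
    # OLS slope of y on x: cov(x, y) / var(x)
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

window = 60
rolling = np.array([ols_beta(mkt[t - window:t], ret[t - window:t])
                    for t in range(window, T)])

var95 = -np.quantile(ret, 0.05)                  # one-day 95% historical VaR
print(rolling[0], rolling[-1], var95)
```

A static beta fitted to the whole sample would average over the drift that the rolling (or, in the paper, state-space) estimate tracks, which is why the resulting VaR measures differ.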

  17. Aerobic capacity and its relationship with parameters of health-related fitness in schoolchildren

    Directory of Open Access Journals (Sweden)

    Andrés Rosa Guillamón

    2015-12-01

    Full Text Available Background and objective: The aim of this study was to analyze the relationship between aerobic capacity and other parameters determining fitness in primary school. Methods: A cross-sectional descriptive study of 298 schoolchildren (139 males and 159 females) aged 8-12. Body composition (weight and height) and physical fitness (aerobic, motor and musculoskeletal capacity) were assessed with the ALPHA-Fitness battery. Aerobic capacity and body mass index (under/normal-weight and overweight/obesity) were categorized using standard criteria. The overall motor/muscle capacity variable was calculated, and the maximum oxygen consumption (VO2max) was estimated indirectly. Results: The analysis of covariance (ANCOVA) found that males have better values in the 4x10 m test (p < 0.001), longitudinal jump (p < 0.001), Course-Navette (p < 0.001) and VO2max (p < 0.001). The ANOVA test showed that schoolchildren with better aerobic capacity have lower weight and body mass index (p < 0.001 for both), better performance in the longitudinal jump test (p < 0.001), better overall motor/muscle capacity, and increased VO2max (p < 0.001 for both). Conclusion: The results of this study suggest that schoolchildren with healthy cardiorespiratory fitness have better physical fitness and are more likely to have healthy anthropometric parameters.

  18. Lumped-Parameter Models for Windturbine Footings on Layered Ground

    DEFF Research Database (Denmark)

    Andersen, Lars

The design of modern wind turbines is typically based on lifetime analyses using aeroelastic codes. In this regard, the impedance of the foundations must be described accurately without increasing the overall size of the computational model significantly. This may be obtained by the fitting...... of a lumped-parameter model to the results of a rigorous model or experimental results. In this paper, guidelines are given for the formulation of such lumped-parameter models and examples are given in which the models are utilised for the analysis of a wind turbine supported by a surface footing on a layered...

  19. Fitness of allozyme variants in Drosophila pseudoobscura. II. Selection at the Est-5, Odh and Mdh-2 loci

    Energy Technology Data Exchange (ETDEWEB)

    Marinkovic, D; Ayala, F J

    1975-01-01

    We have studied the effects on fitness of allelic variation at three gene loci (Est-5, Odh, and Mdh-2) coding for enzymes in Drosophila pseudoobscura. Genotype has a significant effect on fitness for all six parameters measured (female fecundity, male mating capacity, egg-to-adult survival under near-optimal and under competitive conditions, and rate of development under near-optimal and under competitive conditions). No single genotype is best for all six fitness parameters; rather, genotypes with superior performance during a certain stage of the life-cycle may have low fitness at some other stage, or in different environmental conditions. Heterozygotes are sometimes best when all fitness parameters are considered. There are significant interactions between loci. The various forms of balancing selection uncovered in our experiments may account for the polymorphisms occurring in natural populations of D. pseudoobscura at the three loci studied. (auth)

  20. Edge Modeling by Two Blur Parameters in Varying Contrasts.

    Science.gov (United States)

    Seo, Suyoung

    2018-06-01

    This paper presents a method of modeling edge profiles with two blur parameters, and estimating and predicting those edge parameters with varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force method to find parameters that produce global minimum errors. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side edge brightness and light-side edge brightness following a certain global trend. This is similar across varying CODs. The proposed edge model is compared with a one-blur parameter edge model using experiments of the root mean squared error for fitting the edge models to each observed edge profile. The comparison results suggest that the proposed edge model has superiority over the one-blur parameter edge model in most cases where edges have varying brightness combinations.
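A brute-force fit of a hypothetical two-blur edge model can be sketched as follows. The functional form (Gaussian-CDF halves with separate dark-side and light-side blur parameters) and all numeric values are assumptions for illustration, not necessarily the paper's exact model.

```python
import numpy as np
from scipy.special import erf

def edge(x, s_dark, s_light, lo=20.0, hi=200.0):
    # Hypothetical two-blur edge profile: each side of the edge is
    # described by its own Gaussian blur parameter.
    s = np.where(x < 0, s_dark, s_light)
    return lo + (hi - lo) * 0.5 * (1.0 + erf(x / (s * np.sqrt(2.0))))

x = np.arange(-10.0, 11.0)
rng = np.random.default_rng(3)
obs = edge(x, 1.2, 2.4) + 0.5 * rng.standard_normal(x.size)

# brute-force grid search for the global minimum of the fitting error,
# as the abstract describes for its parameter estimation step
grid = np.linspace(0.2, 4.0, 96)
rmse, s1, s2 = min((np.sqrt(np.mean((edge(x, a, b) - obs) ** 2)), a, b)
                   for a in grid for b in grid)
print(s1, s2, rmse)
```

Repeating this estimation over edges with different brightness combinations would produce the (s_dark, s_light) samples that the paper then interpolates with kriging.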

  1. Parameter Estimation of a Reliability Model of Demand-Caused and Standby-Related Failures of Safety Components Exposed to Degradation by Demand Stress and Ageing That Undergo Imperfect Maintenance

    Directory of Open Access Journals (Sweden)

    S. Martorell

    2017-01-01

    Full Text Available One can find many reliability, availability, and maintainability (RAM) models proposed in the literature. However, such models become more complex day after day, as there is an attempt to capture equipment performance in a more realistic way, such as explicitly addressing the effect of component ageing and degradation, surveillance activities, and corrective and preventive maintenance policies. There is then a need to fit the best model to real data by estimating the model parameters using an appropriate tool. This problem is not easy to solve in some cases, since the number of parameters is large and the available data are scarce. This paper considers the two main failure models commonly adopted to represent the probability of failure on demand (PFD) of safety equipment: (1) demand-caused and (2) standby-related failures. It proposes a maximum likelihood estimation (MLE) approach for parameter estimation of a reliability model of demand-caused and standby-related failures of safety components exposed to degradation by demand stress and ageing that undergo imperfect maintenance. The case study considers real failure, test, and maintenance data for a typical motor-operated valve in a nuclear power plant. The results of the parameter estimation and the adoption of the best model are discussed.
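A hedged sketch of MLE for a combined demand-caused/standby failure model: the failure-probability form p(t) = ρ + (1 − ρ)(1 − e^(−λt)) is an illustrative assumption (a per-demand probability ρ plus a standby-time-related term with rate λ), and the data are simulated, not the motor-operated-valve data of the case study.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated demand/test outcomes under the assumed failure model
rng = np.random.default_rng(7)
rho_true, lam_true = 0.02, 1e-3         # per-demand prob, per-hour standby rate
t = rng.uniform(0.0, 720.0, 10_000)     # standby hours before each demand
p = rho_true + (1 - rho_true) * (1 - np.exp(-lam_true * t))
fails = rng.random(10_000) < p

def nll(theta):
    # Negative Bernoulli log-likelihood of the observed failures
    rho, lam = theta
    if not (0.0 < rho < 1.0 and lam > 0.0):
        return np.inf
    pi = rho + (1 - rho) * (1 - np.exp(-lam * t))
    return -np.sum(np.where(fails, np.log(pi), np.log1p(-pi)))

res = minimize(nll, x0=[0.05, 5e-4], method="Nelder-Mead")
rho_hat, lam_hat = res.x
print(rho_hat, lam_hat)
```

The separation works because demands after short standby times mostly inform ρ, while demands after long standby times inform λ.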

  2. Evaluation of bacterial run and tumble motility parameters through trajectory analysis

    Science.gov (United States)

    Liang, Xiaomeng; Lu, Nanxi; Chang, Lin-Ching; Nguyen, Thanh H.; Massoudieh, Arash

    2018-04-01

    In this paper, a method for extracting the behavior parameters of bacterial migration based on the run-and-tumble conceptual model is described. The methodology is applied to microscopic images of the motile movement of flagellated Azotobacter vinelandii. The bacterial cells are considered to change direction during both runs and tumbles, as is evident from the movement trajectories. An unsupervised cluster analysis was performed to fractionate each bacterial trajectory into run and tumble segments, and the distribution of parameters for each mode was then extracted by fitting the mathematical distributions best representing the data. A Gaussian copula was used to model the autocorrelation in swimming velocity. For both run and tumble modes, the Gamma distribution was found to fit the marginal velocity best, and the Logistic distribution was found to represent the deviation angle better than the other distributions considered. For the transition rate, the log-logistic and log-normal distributions, respectively, were found to perform better than the traditionally accepted exponential distribution. A model was then developed to mimic the motility behavior of bacteria in the presence of flow. The model was applied to evaluate its ability to describe observed patterns of bacterial deposition on surfaces in a micro-model experiment with an approach velocity of 200 μm/s. It was found that the model can qualitatively reproduce the attachment results of the micro-model setting.
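Fitting candidate marginal distributions to run speeds and comparing them by log-likelihood is in the spirit of the paper's distribution selection; the speeds below are simulated from an assumed Gamma ground truth, not extracted from tracked trajectories.

```python
import numpy as np
from scipy import stats

# Simulated run speeds (μm/s); shape/scale values are illustrative
rng = np.random.default_rng(11)
speeds = stats.gamma.rvs(a=4.0, scale=5.0, size=5000, random_state=rng)

# Fit two candidate marginals with the location pinned at zero
a_hat, loc_hat, scale_hat = stats.gamma.fit(speeds, floc=0)
ll_gamma = np.sum(stats.gamma.logpdf(speeds, a_hat, loc_hat, scale_hat))
ln_shape, ln_loc, ln_scale = stats.lognorm.fit(speeds, floc=0)
ll_logn = np.sum(stats.lognorm.logpdf(speeds, ln_shape, ln_loc, ln_scale))
print(a_hat, scale_hat, ll_gamma - ll_logn)
```

The same pattern (fit each candidate by MLE, rank by likelihood or an information criterion) applies to the deviation-angle and transition-rate distributions discussed in the abstract.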

  3. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    International Nuclear Information System (INIS)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fitted along with the normal calibration curve parameters. The fitting procedure weights the data with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''Chi-Squared Matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s. 5 figures
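The idea of treating the standards' masses as fit parameters can be sketched with a combined χ² that penalises deviations from the nominal masses by their 0.2% uncertainty. The linear response model, noise levels and mass values are hypothetical, and the optimizer is SciPy's BFGS, not VA02A.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical standards and detector responses
m_nom = np.array([0.1, 0.2, 0.4, 0.7, 1.0])       # nominal masses (mg)
sig_m = 0.002 * m_nom                              # 0.2% mass uncertainty
rng = np.random.default_rng(2)
m_true = m_nom * (1.0 + 0.002 * rng.standard_normal(5))
sig_r = 0.05                                       # response noise
resp = 50.0 * m_true + 1.0 + sig_r * rng.standard_normal(5)

def chi2(theta):
    # Combined chi-square: response residuals plus mass-prior penalties
    a, b = theta[:2]
    m = theta[2:]
    return (np.sum(((resp - (a * m + b)) / sig_r) ** 2)
            + np.sum(((m - m_nom) / sig_m) ** 2))

res = minimize(chi2, x0=np.concatenate(([40.0, 0.0], m_nom)), method="BFGS")
a_fit, b_fit = res.x[:2]
print(a_fit, b_fit, res.fun)
```

Because the masses carry their own penalty terms, the fitted calibration slope and intercept automatically absorb the known 0.2% mass uncertainty, as the abstract describes.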

  4. Potential fitting biases resulting from grouping data into variable width bins

    International Nuclear Information System (INIS)

    Towers, S.

    2014-01-01

    When reading peer-reviewed scientific literature describing any analysis of empirical data, it is natural and correct to proceed with the underlying assumption that experiments have made good faith efforts to ensure that their analyses yield unbiased results. However, particle physics experiments are expensive and time consuming to carry out, thus if an analysis has inherent bias (even if unintentional), much money and effort can be wasted trying to replicate or understand the results, particularly if the analysis is fundamental to our understanding of the universe. In this note we discuss the significant biases that can result from data binning schemes. As we will show, if data are binned such that they provide the best comparison to a particular (but incorrect) model, the resulting model parameter estimates when fitting to the binned data can be significantly biased, leading us to too often accept the model hypothesis when it is not in fact true. When using binned likelihood or least squares methods there is of course no a priori requirement that data bin sizes need to be constant, but we show that fitting to data grouped into variable width bins is particularly prone to produce biased results if the bin boundaries are chosen to optimize the comparison of the binned data to a wrong model. The degree of bias that can be achieved simply with variable binning can be surprisingly large. Fitting the data with an unbinned likelihood method, when possible to do so, is the best way for researchers to show that their analyses are not biased by binning effects. Failing that, equal bin widths should be employed as a cross-check of the fitting analysis whenever possible
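A simplified, deterministic illustration of the point: with variable-width bins, a naive straight-line fit to the log of the density at the bin centres recovers a biased decay rate, while the unbinned MLE does not. This is an assumed toy setup, not the note's full analysis.

```python
import numpy as np

# Truth: exponential decay with rate 1; variable-width bins
edges = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
widths = np.diff(edges)
centers = 0.5 * (edges[1:] + edges[:-1])
probs = np.exp(-edges[:-1]) - np.exp(-edges[1:])   # exact bin contents
density = probs / widths

# naive fit: log(density) vs bin centre -- biased low for wide bins
lam_binned = -np.polyfit(centers, np.log(density), 1)[0]

# unbinned MLE from a large sample is centred on the true rate
rng = np.random.default_rng(5)
lam_unbinned = 1.0 / rng.exponential(1.0, 100_000).mean()
print(lam_binned, lam_unbinned)
```

With equal-width bins the log-density points would lie exactly on a line of slope −1, so the bias here comes entirely from the variable binning, matching the note's warning.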

  5. Potential fitting biases resulting from grouping data into variable width bins

    Energy Technology Data Exchange (ETDEWEB)

    Towers, S., E-mail: smtowers@asu.edu

    2014-07-30

    When reading peer-reviewed scientific literature describing any analysis of empirical data, it is natural and correct to proceed with the underlying assumption that experimenters have made good-faith efforts to ensure that their analyses yield unbiased results. However, particle physics experiments are expensive and time-consuming to carry out, thus if an analysis has inherent bias (even if unintentional), much money and effort can be wasted trying to replicate or understand the results, particularly if the analysis is fundamental to our understanding of the universe. In this note we discuss the significant biases that can result from data binning schemes. As we will show, if data are binned such that they provide the best comparison to a particular (but incorrect) model, the resulting model parameter estimates when fitting to the binned data can be significantly biased, leading us to accept the model hypothesis too often when it is not in fact true. When using binned likelihood or least squares methods there is of course no a priori requirement that data bin sizes be constant, but we show that fitting to data grouped into variable width bins is particularly prone to producing biased results if the bin boundaries are chosen to optimize the comparison of the binned data to a wrong model. The degree of bias that can be achieved simply with variable binning can be surprisingly large. Fitting the data with an unbinned likelihood method, when possible to do so, is the best way for researchers to show that their analyses are not biased by binning effects. Failing that, equal bin widths should be employed as a cross-check of the fitting analysis whenever possible.
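
    The two approaches the note contrasts can be sketched in a few lines. This is a minimal illustration under assumed conditions (an exponential sample with equal-width bins and a simple grid search), not the author's analysis: it shows an unbinned maximum-likelihood fit next to a binned least-squares fit of the same data.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=5000)  # true scale = 2.0

# Unbinned maximum-likelihood estimate: for an exponential distribution,
# the MLE of the scale parameter is simply the sample mean.
scale_unbinned = data.mean()

# Binned least-squares estimate with EQUAL-width bins (the recommended
# cross-check): fit expected exponential counts to the histogram.
counts, edges = np.histogram(data, bins=30, range=(0.0, 12.0))
centers = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

def chi2(scale):
    expected = len(data) * width * np.exp(-centers / scale) / scale
    return np.sum((counts - expected) ** 2 / expected)

scales = np.linspace(1.5, 2.5, 1001)
scale_binned = scales[np.argmin([chi2(s) for s in scales])]

print(scale_unbinned, scale_binned)
```

    With equal-width bins both estimates land near the true scale; the bias discussed in the note arises when bin boundaries are instead tuned to flatter an incorrect model.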

  6. Optimization of parameters for fitting linear accelerator photon beams using a modified CBEAM model

    International Nuclear Information System (INIS)

    Ayyangar, K.; Daftari, I.; Palta, J.; Suntharalingam, N.

    1989-01-01

    Measured beam profiles and central-axis depth-dose data for 6- and 25-MV photon beams are used to generate a dose matrix which represents the full beam. A corresponding dose matrix is also calculated using the modified CBEAM model. The calculational model uses the usual set of three parameters to define the intensity at beam edges and the parameter that accounts for collimator transmission. An additional set of three parameters is used for the primary profile factor, expressed as a function of distance from the central axis. An optimization program has been adapted to automatically adjust these parameters to minimize the χ² between the measured and calculated data. The average values of the parameters for small (6×6 cm²), medium (10×10 cm²), and large (20×20 cm²) field sizes are found to represent the beam adequately for all field sizes. The calculated and the measured doses at any point agree to within 2% for any field size in the range 4×4 to 40×40 cm².

  7. ROLE OF WATERSHED SUBDIVISION ON MODELING THE EFFECTIVENESS OF BEST MANAGEMENT PRACTICES WITH SWAT

    Science.gov (United States)

    Distributed parameter watershed models are often used for evaluating the effectiveness of various best management practices (BMPs). Streamflow, sediment, and nutrient yield predictions of a watershed model can be affected by spatial resolution as dictated by watershed subdivisio...

  8. Fitting the Phenomenological MSSM

    CERN Document Server

    AbdusSalam, S S; Quevedo, F; Feroz, F; Hobson, M

    2010-01-01

    We perform a global Bayesian fit of the phenomenological minimal supersymmetric standard model (pMSSM) to current indirect collider and dark matter data. The pMSSM contains the most relevant 25 weak-scale MSSM parameters, which are simultaneously fit using 'nested sampling' Monte Carlo techniques in more than 15 years of CPU time. We calculate the Bayesian evidence for the pMSSM and constrain its parameters and observables in the context of two widely different, but reasonable, priors to determine which inferences are robust. We make inferences about sparticle masses, the sign of the $\mu$ parameter, the amount of fine tuning, dark matter properties and the prospects for direct dark matter detection without assuming a restrictive high-scale supersymmetry breaking model. We find the inferred lightest CP-even Higgs boson mass as an example of an approximately prior independent observable. This analysis constitutes the first statistically convergent pMSSM global fit to all current data.

  9. Multi-Parameter Estimation for Orthorhombic Media

    KAUST Repository

    Masmoudi, Nabil

    2015-08-19

    Building reliable anisotropy models is crucial in seismic modeling, imaging and full waveform inversion. However, estimating anisotropy parameters is often hampered by the trade-off between inhomogeneity and anisotropy. For instance, one way to estimate the anisotropy parameters is to relate them analytically to traveltimes, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2 and a parameter Δγ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. This approach has two main advantages: on the one hand, it provides a computationally efficient tool to solve the orthorhombic eikonal equation; on the other hand, it provides a mechanism to scan for the best-fitting anisotropy parameters without the need for repetitive modeling of traveltimes, because the coefficients of the traveltime expansion are independent of the perturbed parameters. Furthermore, these coefficients provide insight into the sensitivity of the traveltime with respect to the perturbed parameters. We show the accuracy of the traveltime approximations as well as an approach for multi-parameter scanning in orthorhombic media.

  10. Multi-Parameter Estimation for Orthorhombic Media

    KAUST Repository

    Masmoudi, Nabil; Alkhalifah, Tariq Ali

    2015-01-01

    Building reliable anisotropy models is crucial in seismic modeling, imaging and full waveform inversion. However, estimating anisotropy parameters is often hampered by the trade-off between inhomogeneity and anisotropy. For instance, one way to estimate the anisotropy parameters is to relate them analytically to traveltimes, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2 and a parameter Δγ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. This approach has two main advantages: on the one hand, it provides a computationally efficient tool to solve the orthorhombic eikonal equation; on the other hand, it provides a mechanism to scan for the best-fitting anisotropy parameters without the need for repetitive modeling of traveltimes, because the coefficients of the traveltime expansion are independent of the perturbed parameters. Furthermore, these coefficients provide insight into the sensitivity of the traveltime with respect to the perturbed parameters. We show the accuracy of the traveltime approximations as well as an approach for multi-parameter scanning in orthorhombic media.

  11. Lumped-Parameter Models for Wind-Turbine Footings on Layered Ground

    DEFF Research Database (Denmark)

    Andersen, Lars; Liingaard, Morten

    2007-01-01

    The design of modern wind turbines is typically based on lifetime analyses using aeroelastic codes. In this regard, the impedance of the foundations must be described accurately without increasing the overall size of the computational model significantly. This may be obtained by the fitting...... of a lumped-parameter model to the results of a rigorous model or experimental results. In this paper, guidelines are given for the formulation of such lumped-parameter models and examples are given in which the models are utilised for the analysis of a wind turbine supported by a surface footing on a layered...

  12. Observational constraints on Hubble parameter in viscous generalized Chaplygin gas

    Science.gov (United States)

    Thakur, P.

    2018-04-01

    Cosmological model with viscous generalized Chaplygin gas (in short, VGCG) is considered here to determine observational constraints on its equation-of-state (EoS) parameters from background data. These data consist of H(z)-z (OHD) data, the Baryonic Acoustic Oscillations peak parameter, the CMB shift parameter and SN Ia data (Union 2.1). Best-fit values of the EoS parameters, including the present Hubble parameter (H0), and their acceptable ranges at different confidence limits are determined. In this model the permitted ranges for the present Hubble parameter and the transition redshift (zt) at the 1σ confidence limit are H0 = 70.24^{+0.34}_{-0.36} and zt = 0.76^{+0.07}_{-0.07}, respectively. These EoS parameters are then compared with those of other models. The present age of the Universe (t0) has also been determined. The Akaike information criterion and Bayesian information criterion have been adopted for model selection and comparison with other models. It is noted that the VGCG model satisfactorily accommodates the present accelerating phase of the Universe.
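
    The kind of background-data constraint described above reduces to minimizing a χ² over the model's parameters. The sketch below is purely illustrative: it uses a flat ΛCDM expansion H(z) = H0·√(Ωm(1+z)³ + 1−Ωm) as a stand-in for the VGCG equation of state, and the "observed" H(z) points and errors are synthetic, not the OHD compilation used in the paper.

```python
import numpy as np

# Synthetic H(z)-z "observations" (made up for illustration), km/s/Mpc
z_obs = np.array([0.1, 0.3, 0.5, 0.9, 1.3, 1.75])
H_obs = np.array([73.0, 82.0, 92.0, 116.0, 146.0, 184.0])
sigma = np.full_like(H_obs, 5.0)

def H_model(z, H0, Om=0.3):
    # Flat LambdaCDM stand-in for the VGCG background expansion
    return H0 * np.sqrt(Om * (1 + z) ** 3 + 1 - Om)

# Grid search over H0, keeping Om fixed for simplicity
H0_grid = np.linspace(60, 80, 401)
chi2 = np.array([np.sum(((H_obs - H_model(z_obs, H0)) / sigma) ** 2)
                 for H0 in H0_grid])
H0_best = H0_grid[np.argmin(chi2)]

# 1-sigma interval for one free parameter: delta chi2 <= 1
within = H0_grid[chi2 <= chi2.min() + 1.0]
print(H0_best, within.min(), within.max())
```

    The paper's full analysis varies all EoS parameters jointly and combines several likelihoods; the grid-and-Δχ² machinery, however, is the same.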

  13. Fitting the Probability Distribution Functions to Model Particulate Matter Concentrations

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2017-01-01

    The main objective of this study is to identify the best probability distribution and plotting-position formula for modeling the concentrations of Total Suspended Particles (TSP) as well as Particulate Matter with an aerodynamic diameter < 10 μm (PM10). The best distribution provides the estimated probabilities of exceeding the threshold limit given by the Egyptian Air Quality Limit Value (EAQLV), and the number of exceedance days is estimated. The standard limits of the EAQLV for TSP and PM10 concentrations are 24-h averages of 230 μg/m³ and 70 μg/m³, respectively. Five frequency distribution functions with seven plotting-position formulas (empirical cumulative distribution functions) are compared to fit the averages of daily TSP and PM10 concentrations in the year 2014 for Ain Sokhna city. The Quantile-Quantile (Q-Q) plot is used as a method for assessing how closely a data set fits a particular distribution. A proper probability distribution that represents the TSP and PM10 data has been chosen based on the statistical performance indicator values. The results show that the Hosking and Wallis plotting position combined with the Fréchet distribution gave the best fit for TSP and PM10 concentrations; the Burr distribution with the same plotting position ranks second. The exceedance probability and days over the EAQLV are predicted using the Fréchet distribution. In 2014, the exceedance probability and days for TSP concentrations are 0.052 and 19 days, respectively. Furthermore, the PM10 concentration is found to exceed the threshold limit on 174 days.
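
    The mechanics of the comparison can be sketched generically: assign each ordered observation an empirical probability via a plotting-position formula, then judge the fit by correlating the ordered data with the corresponding model quantiles (the numerical analogue of a Q-Q plot). The sketch below uses the Weibull and Gringorten formulas as two stand-ins for the seven formulas the study compares, and synthetic data drawn from a Fréchet distribution with assumed shape and scale.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frechet quantile function with shape a and scale s: Q(p) = s*(-ln p)^(-1/a)
def frechet_q(p, a=3.0, s=50.0):
    return s * (-np.log(p)) ** (-1.0 / a)

# Synthetic "daily concentrations": one year of draws from the Frechet itself
x = np.sort(frechet_q(rng.uniform(size=365)))
n = len(x)
i = np.arange(1, n + 1)

# Two plotting-position formulas (stand-ins for the study's seven)
p_weibull = i / (n + 1)
p_gringorten = (i - 0.44) / (n + 0.12)

# Q-Q assessment: correlation of ordered data with model quantiles
def qq_corr(p):
    return np.corrcoef(x, frechet_q(p))[0, 1]

print(qq_corr(p_weibull), qq_corr(p_gringorten))
```

    In the study this correlation-style comparison (plus other performance indicators) is what ranks the 5 × 7 distribution/plotting-position combinations.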

  14. Reproducing tailing in breakthrough curves: Are statistical models equally representative and predictive?

    Science.gov (United States)

    Pedretti, Daniele; Bianchi, Marco

    2018-03-01

    Breakthrough curves (BTCs) observed during tracer tests in highly heterogeneous aquifers display strong tailing. Power laws are popular models both for the empirical fitting of these curves and for the prediction of transport using upscaling models based on best-fitted parameters (e.g. the power law slope or exponent). The predictive capacity of power-law-based upscaling models can, however, be questioned due to the difficulty of linking model parameters with the aquifers' physical properties. This work analyzes two aspects that can limit the use of power laws as effective predictive tools: (a) the implications of statistical subsampling, which often renders power laws indistinguishable from other heavily tailed distributions, such as the logarithmic (LOG); (b) the difficulty of reconciling fitting parameters obtained from models with different formulations, such as the presence of a late-time cutoff in the power law model. Two rigorous and systematic stochastic analyses, one based on benchmark distributions and the other on BTCs obtained from transport simulations, are considered. It is found that a power law model without cutoff (PL) results in best-fitted exponents (αPL) falling in the range of typical experimental values reported in the literature (1.5 < αPL < 4), with αPL decreasing as the tailing becomes heavier. Strong fluctuations occur when the number of samples is limited, due to the effects of subsampling. On the other hand, when the power law model embeds a cutoff (PLCO), the best-fitted exponent (αCO) is insensitive to the degree of tailing and to the effects of subsampling and tends to a constant αCO ≈ 1. In the PLCO model, the cutoff rate (λ) is the parameter that fully reproduces the persistence of the tailing and is shown to be inversely correlated to the LOG scale parameter (i.e. with the skewness of the distribution). The theoretical results are consistent with the fitting analysis of a tracer test performed during the MADE-5 experiment. It is shown that a simple
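
    The two fitted forms can be contrasted on synthetic data. In the sketch below the tail is generated exactly from a power law with exponential cutoff, c(t) = A·t^(−α)·exp(−λt); a plain power-law fit (straight line in log-log space) then absorbs the cutoff into an inflated exponent, while fitting the cutoff model recovers both α and λ. All parameter values are invented for illustration.

```python
import numpy as np

# Synthetic late-time BTC tail: c(t) = A * t^(-alpha) * exp(-lam * t)
t = np.logspace(0, 3, 200)
A_true, alpha_true, lam_true = 1.0, 1.0, 2e-3
c = A_true * t ** -alpha_true * np.exp(-lam_true * t)

# PL fit (no cutoff): straight-line fit in log-log space
slope, _ = np.polyfit(np.log(t), np.log(c), 1)
alpha_pl = -slope

# PLCO fit: ln c = ln A - alpha*ln t - lam*t is LINEAR in (ln A, alpha, lam),
# so ordinary least squares recovers all three parameters directly.
X = np.column_stack([np.ones_like(t), -np.log(t), -t])
(lnA, alpha_co, lam), *_ = np.linalg.lstsq(X, np.log(c), rcond=None)

print(alpha_pl, alpha_co, lam)
```

    The cutoff-free exponent comes out noticeably steeper than the true α = 1, mirroring the abstract's point that αPL is sensitive to tailing while αCO is not.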

  15. Computing ordinary least-squares parameter estimates for the National Descriptive Model of Mercury in Fish

    Science.gov (United States)

    Donato, David I.

    2013-01-01

    A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time using less computer memory than general methods. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires O(8p² + 16p) bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
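
    The single-pass idea can be sketched as follows. This is a minimal unweighted illustration with synthetic data, not the NDMMF reference implementation: each observation's row contributes a rank-1 update to X'X and a vector update to X'y, so X itself is never stored and memory stays O(p²).

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 4, 10_000
beta_true = np.array([1.0, -2.0, 0.5, 3.0])

XtX = np.zeros((p, p))
Xty = np.zeros(p)
for _ in range(n):                    # one pass through the N observations
    x = rng.normal(size=p)            # one observation's regressor row
    y = x @ beta_true + rng.normal() * 0.1
    XtX += np.outer(x, x)             # rank-1 update; only O(p^2) storage
    Xty += x * y

# Solve the normal equations for the best-fit parameters in the OLS sense
beta_hat = np.linalg.solve(XtX, Xty)
print(beta_hat)
```

    The publication's version additionally applies observation weights and exploits the NDMMF's sparse regressor structure to fill X'X and X'y even faster.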

  16. Global fits of GUT-scale SUSY models with GAMBIT

    Science.gov (United States)

    Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin

    2017-12-01

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.

  17. Global fits of GUT-scale SUSY models with GAMBIT

    Energy Technology Data Exchange (ETDEWEB)

    Athron, Peter [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Balazs, Csaba [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Bringmann, Torsten; Dal, Lars A.; Krislock, Abram; Raklev, Are [University of Oslo, Department of Physics, Oslo (Norway); Buckley, Andy [University of Glasgow, SUPA, School of Physics and Astronomy, Glasgow (United Kingdom); Chrzaszcz, Marcin [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); H. Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Krakow (Poland); Conrad, Jan; Edsjoe, Joakim; Farmer, Ben [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Cornell, Jonathan M. [McGill University, Department of Physics, Montreal, QC (Canada); Jackson, Paul; White, Martin [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); University of Adelaide, Department of Physics, Adelaide, SA (Australia); Kvellestad, Anders; Savage, Christopher [NORDITA, Stockholm (Sweden); Mahmoudi, Farvah [Univ Lyon, Univ Lyon 1, CNRS, ENS de Lyon, Centre de Recherche Astrophysique de Lyon UMR5574, Saint-Genis-Laval (France); Theoretical Physics Department, CERN, Geneva (Switzerland); Martinez, Gregory D. 
[University of California, Physics and Astronomy Department, Los Angeles, CA (United States); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Rogan, Christopher [Harvard University, Department of Physics, Cambridge, MA (United States); Ruiz de Austri, Roberto [IFIC-UV/CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Saavedra, Aldo [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); The University of Sydney, Faculty of Engineering and Information Technologies, Centre for Translational Data Science, School of Physics, Camperdown, NSW (Australia); Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Serra, Nicola [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Weniger, Christoph [University of Amsterdam, GRAPPA, Institute of Physics, Amsterdam (Netherlands); Collaboration: The GAMBIT Collaboration

    2017-12-15

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos. (orig.)

  18. Fitting measurement models to vocational interest data: are dominance models ideal?

    Science.gov (United States)

    Tay, Louis; Drasgow, Fritz; Rounds, James; Williams, Bruce A

    2009-09-01

    In this study, the authors examined the item response process underlying 3 vocational interest inventories: the Occupational Preference Inventory (C.-P. Deng, P. I. Armstrong, & J. Rounds, 2007), the Interest Profiler (J. Rounds, T. Smith, L. Hubert, P. Lewis, & D. Rivkin, 1999; J. Rounds, C. M. Walker, et al., 1999), and the Interest Finder (J. E. Wall & H. E. Baker, 1997; J. E. Wall, L. L. Wise, & H. E. Baker, 1996). Item response theory (IRT) dominance models, such as the 2-parameter and 3-parameter logistic models, assume that item response functions (IRFs) are monotonically increasing as the latent trait increases. In contrast, IRT ideal point models, such as the generalized graded unfolding model, have IRFs that peak where the latent trait matches the item. Ideal point models are expected to fit better because vocational interest inventories ask about typical behavior, as opposed to requiring maximal performance. Results show that across all 3 interest inventories, the ideal point model provided better descriptions of the response process. The importance of specifying the correct item response model for precise measurement is discussed. In particular, scores computed by a dominance model were shown to be sometimes illogical: individuals endorsing mostly realistic or mostly social items were given similar scores, whereas scores based on an ideal point model were sensitive to which type of items respondents endorsed.

  19. Parameters identification of photovoltaic models using an improved JAYA optimization algorithm

    International Nuclear Information System (INIS)

    Yu, Kunjie; Liang, J.J.; Qu, B.Y.; Chen, Xu; Wang, Heshan

    2017-01-01

    Highlights: • IJAYA algorithm is proposed to identify the PV model parameters efficiently. • A self-adaptive weight is introduced to purposefully adjust the search process. • Experience-based learning strategy is developed to enhance the population diversity. • Chaotic learning method is proposed to refine the quality of the best solution. • IJAYA features superior performance in identifying parameters of PV models. - Abstract: Parameters identification of photovoltaic (PV) models based on measured current-voltage characteristic curves is significant for the simulation, evaluation, and control of PV systems. To accurately and reliably identify the parameters of different PV models, an improved JAYA (IJAYA) optimization algorithm is proposed in the paper. In IJAYA, a self-adaptive weight is introduced to adjust the tendency of approaching the best solution and avoiding the worst solution at different search stages, which enables the algorithm to approach the promising area at the early stage and implement the local search at the later stage. Furthermore, an experience-based learning strategy is developed and employed randomly to maintain the population diversity and enhance the exploration ability. A chaotic elite learning method is proposed to refine the quality of the best solution in each generation. The proposed IJAYA is used to solve the parameters identification problems of different PV models, i.e., single diode, double diode, and PV module. Comprehensive experiment results and analyses indicate that IJAYA can obtain a highly competitive performance compared with other state-of-the-art algorithms, especially in terms of accuracy and reliability.
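
    For orientation, the basic JAYA update that IJAYA builds on (Rao, 2016) moves each candidate toward the best solution and away from the worst: x' = x + r₁(x_best − |x|) − r₂(x_worst − |x|), with greedy acceptance. The sketch below applies plain JAYA to a simple sphere function as a stand-in for the PV-model RMSE objective; it omits IJAYA's self-adaptive weight, experience-based learning, and chaotic elite learning.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):                       # stand-in objective: minimum 0 at x = 1
    return np.sum((x - 1.0) ** 2, axis=-1)

pop = rng.uniform(-5, 5, size=(20, 3))   # 20 candidates, 3 "parameters"
for _ in range(300):
    fit = f(pop)
    best = pop[np.argmin(fit)]
    worst = pop[np.argmax(fit)]
    r1, r2 = rng.uniform(size=(2, *pop.shape))
    # Basic JAYA move: toward the best, away from the worst
    cand = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    improved = f(cand) < fit     # greedy selection
    pop[improved] = cand[improved]

print(f(pop).min())
```

    In the paper's setting, f would be the RMSE between measured and simulated I-V curves of the single-diode, double-diode, or module model.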

  20. The impact of structural error on parameter constraint in a climate model

    Science.gov (United States)

    McNeall, Doug; Williams, Jonny; Booth, Ben; Betts, Richard; Challenor, Peter; Wiltshire, Andy; Sexton, David

    2016-11-01

    Uncertainty in the simulation of the carbon cycle contributes significantly to uncertainty in the projections of future climate change. We use observations of forest fraction to constrain carbon cycle and land surface input parameters of the global climate model FAMOUS, in the presence of an uncertain structural error. Using an ensemble of climate model runs to build a computationally cheap statistical proxy (emulator) of the climate model, we use history matching to rule out input parameter settings where the corresponding climate model output is judged sufficiently different from observations, even allowing for uncertainty. Regions of parameter space where FAMOUS best simulates the Amazon forest fraction are incompatible with the regions where FAMOUS best simulates other forests, indicating a structural error in the model. We use the emulator to simulate the forest fraction at the best set of parameters implied by matching the model to the Amazon, Central African, South East Asian, and North American forests in turn. We can find parameters that lead to a realistic forest fraction in the Amazon, but that using the Amazon alone to tune the simulator would result in a significant overestimate of forest fraction in the other forests. Conversely, using the other forests to tune the simulator leads to a larger underestimate of the Amazon forest fraction. We use sensitivity analysis to find the parameters which have the most impact on simulator output and perform a history-matching exercise using credible estimates for simulator discrepancy and observational uncertainty terms. We are unable to constrain the parameters individually, but we rule out just under half of joint parameter space as being incompatible with forest observations. We discuss the possible sources of the discrepancy in the simulated Amazon, including missing processes in the land surface component and a bias in the climatology of the Amazon.
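    The history-matching step described above can be reduced to an implausibility calculation: a parameter setting is ruled out when the emulated output is too far from the observation, allowing for observational, emulator, and structural-discrepancy variances. The sketch below uses a made-up one-parameter quadratic "emulator" standing in for the FAMOUS forest-fraction response; all numbers are hypothetical.

```python
import numpy as np

def emulator_mean(theta):
    # Hypothetical emulator of forest fraction vs. one input parameter
    return 0.6 - 0.4 * (theta - 0.3) ** 2

obs = 0.55                       # "observed" forest fraction (made up)
obs_var = 0.01 ** 2              # observational uncertainty
model_disc_var = 0.03 ** 2       # structural-discrepancy term
emu_var = 0.01 ** 2              # emulator uncertainty

theta = np.linspace(0.0, 1.0, 1001)
implaus = np.abs(obs - emulator_mean(theta)) / np.sqrt(
    obs_var + model_disc_var + emu_var)

# Rule out settings exceeding the usual 3-sigma implausibility cutoff
not_ruled_out = theta[implaus < 3.0]
print(not_ruled_out.min(), not_ruled_out.max())
```

    The paper's structural-error finding corresponds to the case where the not-ruled-out regions implied by different observations (Amazon vs. other forests) fail to overlap.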

  1. TWO-PARAMETER ISOTHERMS OF METHYL ORANGE SORPTION BY PINECONE DERIVED ACTIVATED CARBON

    Directory of Open Access Journals (Sweden)

    M. R. Samarghandi ، M. Hadi ، S. Moayedi ، F. Barjasteh Askari

    2009-10-01

    The adsorption of the mono azo dye methyl orange (MeO) onto granular pinecone-derived activated carbon (GPAC) from aqueous solutions was studied in a batch system. Seven two-parameter isotherm models (Langmuir, Freundlich, Dubinin-Radushkevich, Temkin, Halsey, Jovanovic and Harkins-Jura) were used to fit the experimental data. The results revealed that the adsorption isotherm models fitted the data in the order Jovanovic (χ² = 1.374) > Langmuir > Dubinin-Radushkevich > Temkin > Freundlich > Halsey > Harkins-Jura. Adsorption isotherm modeling showed that the interaction of the dye with the activated carbon surface is localized monolayer adsorption. A comparison of kinetic models was evaluated for the pseudo-second-order, Elovich and Lagergren kinetic models. The Lagergren first-order model was found to agree well with the experimental data (χ² = 9.231). In order to determine the best-fit isotherm and kinetic models, two error analysis methods, the residual mean square error and the chi-square statistic (χ²), were used to evaluate the data.
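
    As a sketch of how one such two-parameter isotherm is fitted and scored, the block below fits the Langmuir model q = qm·KL·C/(1 + KL·C) via its classical linearization C/q = C/qm + 1/(KL·qm), then computes the two error measures the study uses. The equilibrium data are synthetic (noise-free, generated from assumed qm and KL), not the GPAC measurements.

```python
import numpy as np

# Synthetic equilibrium data from an assumed Langmuir isotherm
C = np.array([5.0, 10, 20, 40, 80, 160])          # mg/L at equilibrium
qm_true, KL_true = 120.0, 0.05
q = qm_true * KL_true * C / (1 + KL_true * C)     # mg/g adsorbed

# Linearized Langmuir: C/q = C/qm + 1/(KL*qm) is a straight line in C
slope, intercept = np.polyfit(C, C / q, 1)
qm = 1 / slope
KL = slope / intercept

# Error analysis in the spirit of the study: RMSE and chi-square
q_fit = qm * KL * C / (1 + KL * C)
rmse = np.sqrt(np.mean((q - q_fit) ** 2))
chi2 = np.sum((q - q_fit) ** 2 / q_fit)

print(qm, KL, rmse, chi2)
```

    With real (noisy) data the same RMSE and χ² scores are what rank the seven isotherms against each other, as in the ordering reported above.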

  2. Selecting Sensitive Parameter Subsets in Dynamical Models With Application to Biomechanical System Identification.

    Science.gov (United States)

    Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J

    2018-07-01

    Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
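    The core of the FIM method can be sketched generically: compute the sensitivity of the measured output to each parameter, form the Fisher information matrix from those sensitivities, and keep only the most informative parameters for estimation. The model below is a made-up two-exponential response with four parameters, not the head position-tracking model of the paper.

```python
import numpy as np

t = np.linspace(0, 5, 100)

def model(theta):
    # Hypothetical stand-in model: sum of two exponential decays
    a1, k1, a2, k2 = theta
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

theta0 = np.array([1.0, 2.0, 0.1, 2.1])   # preliminary estimates
eps, sigma = 1e-6, 0.05                   # FD step, output noise std

# Finite-difference output sensitivities, one column per parameter
S = np.column_stack([
    (model(theta0 + eps * np.eye(4)[j]) - model(theta0)) / eps
    for j in range(4)
])
FIM = S.T @ S / sigma ** 2

# Rank parameters by their diagonal Fisher information; the lowest-ranked
# ones would be fixed at their preliminary values rather than estimated.
order = np.argsort(np.diag(FIM))[::-1]
print(order)
```

    The paper's procedure additionally accounts for parameter interactions (off-diagonal FIM structure) and compares the resulting subset against a LASSO-based selection; this diagonal ranking is only the simplest version of the idea.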

  3. Measured, modeled, and causal conceptions of fitness

    Science.gov (United States)

    Abrams, Marshall

    2012-01-01

    This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804

  4. Model for gas hydrates applied to CCS systems part II. Fitting of parameters for models of hydrates of pure gases

    Czech Academy of Sciences Publication Activity Database

    Vinš, Václav; Jäger, A.; Hrubý, Jan; Span, R.

    2017-01-01

    Roč. 435, March (2017), s. 104-117 ISSN 0378-3812 R&D Projects: GA MŠk(CZ) 7F14466; GA ČR(CZ) GJ15-07129Y Institutional support: RVO:61388998 Keywords: carbon capture and storage * clathrate * parameter fitting Subject RIV: BJ - Thermodynamics Impact factor: 2.473, year: 2016 http://ac.els-cdn.com/S0378381216306069/1-s2.0-S0378381216306069-main.pdf

  5. Parameter-free methods distinguish Wnt pathway models and guide design of experiments

    KAUST Repository

    MacLean, Adam L.; Rosen, Zvi; Byrne, Helen M.; Harrington, Heather A.

    2015-01-01

    models can fit this time course. We appeal to algebraic methods (concepts from chemical reaction network theory and matroid theory) to analyze the models without recourse to specific parameter values. These approaches provide insight into aspects of Wnt

  6. Fitting model-based psychometric functions to simultaneity and temporal-order judgment data: MATLAB and R routines.

    Science.gov (United States)

    Alcalá-Quintana, Rocío; García-Pérez, Miguel A

    2013-12-01

    Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.
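    The independent-channels idea described here — exponentially distributed arrival latencies feeding a trichotomous decision rule — can be illustrated with a small Monte Carlo simulation. This is only a sketch; the latency rate and decision threshold below are illustrative values, not the routines' actual parameterization.

```python
import random

def simulate_sj3(soa_ms, rate=1 / 40.0, threshold_ms=50.0, n=20000, seed=1):
    """Monte Carlo sketch of an independent-channels model: each stimulus's
    arrival latency is exponentially distributed, and the latency difference
    falls into one of three response regions (trichotomous decision space)."""
    rng = random.Random(seed)
    a_first = simultaneous = 0
    for _ in range(n):
        t_a = rng.expovariate(rate)            # arrival time of stimulus A
        t_b = soa_ms + rng.expovariate(rate)   # B is presented soa_ms later
        diff = t_b - t_a
        if abs(diff) < threshold_ms:
            simultaneous += 1                  # judged "simultaneous"
        elif diff > 0:
            a_first += 1                       # judged "A first"
    return a_first / n, simultaneous / n

p_first_0, p_sync_0 = simulate_sj3(soa_ms=0.0)   # synchronous presentation
p_first_300, _ = simulate_sj3(soa_ms=300.0)      # A clearly leads
```

Sweeping `soa_ms` and recording the three response proportions traces out exactly the model-based psychometric functions that the MATLAB and R routines fit analytically.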

  7. Inverse modeling and animation of growing single-stemmed trees at interactive rates

    Science.gov (United States)

    S. Rudnick; L. Linsen; E.G. McPherson

    2007-01-01

    For city planning purposes, animations of growing trees of several species can be used to deduce which species may best fit a particular environment. The models used for the animation must conform to real measured data. We present an approach for inverse modeling to fit global growth parameters. The model comprises local production rules, which are iteratively and...

  8. Revised models and genetic parameter estimates for production and ...

    African Journals Online (AJOL)

    Genetic parameters for production and reproduction traits in the Elsenburg Dormer sheep stud were estimated using records of 11743 lambs born between 1943 and 2002. An animal model with direct and maternal additive, maternal permanent and temporary environmental effects was fitted for traits considered traits of the ...

  9. Fitting Diffusion Item Response Theory Models for Responses and Response Times Using the R Package diffIRT

    Directory of Open Access Journals (Sweden)

    Dylan Molenaar

    2015-08-01

    Full Text Available In the psychometric literature, item response theory models have been proposed that explicitly take the decision process underlying the responses of subjects to psychometric test items into account. Application of these models is however hampered by the absence of general and flexible software to fit these models. In this paper, we present diffIRT, an R package that can be used to fit item response theory models that are based on a diffusion process. We discuss parameter estimation and model fit assessment, show the viability of the package in a simulation study, and illustrate the use of the package with two datasets pertaining to extraversion and mental rotation. In addition, we illustrate how the package can be used to fit the traditional diffusion model (as originally developed in experimental psychology) to data.

  10. Kernel-density estimation and approximate Bayesian computation for flexible epidemiological model fitting in Python.

    Science.gov (United States)

    Irvine, Michael A; Hollingsworth, T Déirdre

    2018-05-26

    Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open access library, with examples to aid researchers to rapidly fit models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
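    A generic adaptive-tolerance rejection-ABC scheme of the kind described can be sketched in a few lines. This is not the authors' released library: the summary statistic (a sample mean), the prior, and the tolerance schedule below are all illustrative assumptions.

```python
import random
import statistics

def abc_adaptive(observed, prior, simulate, n_keep=100, n_rounds=4, seed=0):
    """Rejection-ABC sketch with an adaptive tolerance: each round accepts
    parameter draws whose simulated summary lies within the current tolerance,
    then shrinks the tolerance to the median accepted distance."""
    rng = random.Random(seed)
    tol = float("inf")
    accepted = []
    for _ in range(n_rounds):
        accepted = []
        while len(accepted) < n_keep:
            theta = prior(rng)
            dist = abs(simulate(theta, rng) - observed)
            if dist < tol:
                accepted.append((dist, theta))
        accepted.sort()
        tol = accepted[n_keep // 2][0]   # adaptive step: tighten the tolerance
    return [t for _, t in accepted], tol

# toy inference: recover the mean of an exponential waiting-time process
prior = lambda rng: rng.uniform(0.0, 10.0)
simulate = lambda lam, rng: statistics.fmean(
    rng.expovariate(1.0 / max(lam, 1e-9)) for _ in range(50))
thetas, tol = abc_adaptive(observed=3.0, prior=prior, simulate=simulate)
```

The accepted `thetas` approximate the posterior; replacing the scalar distance with a kernel-density comparison of full summaries is the step the paper's scheme adds for dispersed, multi-dimensional data.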

  11. Scaling up watershed model parameters: flow and load simulations of the Edisto River Basin, South Carolina, 2007-09

    Science.gov (United States)

    Feaster, Toby D.; Benedict, Stephen T.; Clark, Jimmy M.; Bradley, Paul M.; Conrads, Paul

    2014-01-01

    As part of an ongoing effort by the U.S. Geological Survey to expand the understanding of relations among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations within the Edisto River Basin, analyses and simulations of the hydrology of the Edisto River Basin were made using the topography-based hydrological model (TOPMODEL). A primary focus of the investigation was to assess the potential for scaling up a previous application of TOPMODEL for the McTier Creek watershed, which is a small headwater catchment to the Edisto River Basin. Scaling up was done in a step-wise manner, beginning with applying the calibration parameters, meteorological data, and topographic-wetness-index data from the McTier Creek TOPMODEL to the Edisto River TOPMODEL. Additional changes were made for subsequent simulations, culminating in the best simulation, which included meteorological and topographic wetness index data from the Edisto River Basin and updated calibration parameters for some of the TOPMODEL calibration parameters. The scaling-up process resulted in nine simulations being made. Simulation 7 best matched the streamflows at station 02175000, Edisto River near Givhans, SC, which was the downstream limit for the TOPMODEL setup, and was obtained by adjusting the scaling factor, including streamflow routing, and using NEXRAD precipitation data for the Edisto River Basin. The Nash-Sutcliffe coefficient of model-fit efficiency and Pearson’s correlation coefficient for simulation 7 were 0.78 and 0.89, respectively. Comparison of goodness-of-fit statistics between measured and simulated daily mean streamflow for the McTier Creek and Edisto River models showed that with calibration, the Edisto River TOPMODEL produced slightly better results than the McTier Creek model, despite the substantial difference in the drainage-area size at the outlet locations for the two models (30.7 and 2,725 square miles, respectively). Along with the TOPMODEL
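    The Nash-Sutcliffe coefficient of model-fit efficiency quoted for simulation 7 (0.78) is a standard statistic: one minus the ratio of squared simulation error to the spread of the observations about their mean. A minimal sketch, with illustrative streamflow values rather than the Edisto data:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / SS about the observed mean.
    1.0 is a perfect fit; 0.0 is no better than predicting the mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_mean = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_mean

obs = [10.0, 12.0, 9.0, 14.0, 11.0]   # illustrative daily mean streamflows
nse_perfect = nash_sutcliffe(obs, obs)
nse_mean = nash_sutcliffe(obs, [sum(obs) / len(obs)] * len(obs))
```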

  12. Dissecting galaxy formation models with sensitivity analysis—a new approach to constrain the Milky Way formation history

    International Nuclear Information System (INIS)

    Gómez, Facundo A.; O'Shea, Brian W.; Coleman-Smith, Christopher E.; Tumlinson, Jason; Wolpert, Robert L.

    2014-01-01

    We present an application of a statistical tool known as sensitivity analysis to characterize the relationship between input parameters and observational predictions of semi-analytic models of galaxy formation coupled to cosmological N-body simulations. We show how a sensitivity analysis can be performed on our chemo-dynamical model, ChemTreeN, to characterize and quantify its relationship between model input parameters and predicted observable properties. The result of this analysis provides the user with information about which parameters are most important and most likely to affect the prediction of a given observable. It can also be used to simplify models by identifying input parameters that have no effect on the outputs (i.e., observational predictions) of interest. Conversely, sensitivity analysis allows us to identify what model parameters can be most efficiently constrained by the given observational data set. We have applied this technique to real observational data sets associated with the Milky Way, such as the luminosity function of the dwarf satellites. The results from the sensitivity analysis are used to train specific model emulators of ChemTreeN, only involving the most relevant input parameters. This allowed us to efficiently explore the input parameter space. A statistical comparison of model outputs and real observables is used to obtain a 'best-fitting' parameter set. We consider different Milky-Way-like dark matter halos to account for the dependence of the best-fitting parameter selection process on the underlying merger history of the models. For all formation histories considered, running ChemTreeN with best-fitting parameters produced luminosity functions that tightly fit their observed counterpart. However, only one of the resulting stellar halo models was able to reproduce the observed stellar halo mass within 40 kpc of the Galactic center. On the basis of this analysis, it is possible to disregard certain models, and their
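    The screening role that sensitivity analysis plays here — ranking which inputs actually move an observable — can be illustrated with the simplest possible variant, a one-at-a-time perturbation. This is far cruder than the emulator-based analysis described above, and all names and values below are illustrative.

```python
def oat_sensitivity(model, base, deltas):
    """One-at-a-time sensitivity sketch: perturb each input parameter in turn
    and record the change in the model output relative to the base run."""
    y0 = model(base)
    effects = {}
    for name, delta in deltas.items():
        perturbed = dict(base)
        perturbed[name] += delta
        effects[name] = model(perturbed) - y0
    return effects

# toy observable: strongly sensitive to a, weakly to b, insensitive to c
model = lambda p: 10.0 * p["a"] + 0.1 * p["b"] + 0.0 * p["c"]
effects = oat_sensitivity(model, {"a": 1.0, "b": 1.0, "c": 1.0},
                          {"a": 1.0, "b": 1.0, "c": 1.0})
```

An input with a near-zero effect (here `c`) is exactly the kind of parameter the abstract suggests dropping to simplify the model.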

  13. Fitting neuron models to spike trains

    Directory of Open Access Journals (Sweden)

    Cyrille Rossant

    2011-02-01

    Full Text Available Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.

  14. Multi-parameters scanning in HTI media

    KAUST Repository

    Masmoudi, Nabil

    2014-08-05

    Building credible anisotropy models is crucial in imaging. One way to estimate anisotropy parameters is to relate them analytically to traveltime, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for transversely isotropic media with horizontal symmetry axis (HTI) as explicit functions of the anellipticity parameter η and the symmetry axis azimuth ϕ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous elliptically anisotropic background medium, which may be obtained from well information and stacking velocity analysis in HTI media. This formulation has advantages on two fronts: on one hand, it alleviates the computational complexity associated with solving the HTI eikonal equation, and on the other hand, it provides a mechanism to scan for the best fitting parameters η and ϕ without the need for repetitive modeling of traveltimes, because the traveltime coefficients of the expansion are independent of the perturbed parameters η and ϕ. The accuracy of our expansion is further enhanced by the use of shanks transform. We show the effectiveness of our scheme with tests on a 3D model and we propose an approach for multi-parameters scanning in TI media.

  15. Multi-parameters scanning in HTI media

    KAUST Repository

    Masmoudi, Nabil; Alkhalifah, Tariq Ali

    2014-01-01

    Building credible anisotropy models is crucial in imaging. One way to estimate anisotropy parameters is to relate them analytically to traveltime, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for transversely isotropic media with horizontal symmetry axis (HTI) as explicit functions of the anellipticity parameter η and the symmetry axis azimuth ϕ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous elliptically anisotropic background medium, which may be obtained from well information and stacking velocity analysis in HTI media. This formulation has advantages on two fronts: on one hand, it alleviates the computational complexity associated with solving the HTI eikonal equation, and on the other hand, it provides a mechanism to scan for the best fitting parameters η and ϕ without the need for repetitive modeling of traveltimes, because the traveltime coefficients of the expansion are independent of the perturbed parameters η and ϕ. The accuracy of our expansion is further enhanced by the use of shanks transform. We show the effectiveness of our scheme with tests on a 3D model and we propose an approach for multi-parameters scanning in TI media.
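    The key point of this record — the expansion's traveltime coefficients are independent of the perturbed parameters η and ϕ, so candidate pairs can be scanned without repeated modeling — can be sketched as a plain grid scan. The model form and values below are illustrative stand-ins, not the paper's actual expansion.

```python
import math

def scan_parameters(traveltime_model, observed, etas, phis):
    """Grid-scan sketch: evaluate a precomputed traveltime expansion for each
    candidate (eta, phi) and keep the pair with the smallest misfit."""
    best = None
    for eta in etas:
        for phi in phis:
            predicted = traveltime_model(eta, phi)
            misfit = sum((p - o) ** 2 for p, o in zip(predicted, observed))
            if best is None or misfit < best[0]:
                best = (misfit, eta, phi)
    return best[1], best[2]

# toy 'expansion' with a known true (eta, phi); the functional form is
# illustrative only
true_eta, true_phi = 0.2, math.radians(30)
model = lambda eta, phi: [1.0 + eta * math.cos(2 * (a - phi))
                          for a in (0.0, 0.5, 1.0, 1.5)]
observed = model(true_eta, true_phi)
etas = [i / 100 for i in range(0, 41)]               # 0.00 .. 0.40
phis = [math.radians(d) for d in range(0, 91, 5)]    # 0 .. 90 degrees
eta_hat, phi_hat = scan_parameters(model, observed, etas, phis)
```

Because `model` here plays the role of the cheap expansion, the double loop costs only function evaluations, never a new eikonal solve.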

  16. Psychophysical parameters of a multidimensional pain scale in newborns

    International Nuclear Information System (INIS)

    De Oliveira, M V M; De Jesus, J A L; Tristao, R M

    2012-01-01

    The Premature Infant Pain Profile (PIPP) is a promising multidimensional tool for comparison and testing of new technologies in newborn pain assessment studies since it may adhere to basic psychophysical parameters of intensity, direction, reactivity, regulation and slope described in analyses of physiological pain indicators. The aim of this study was to evaluate whether these psychophysical parameters can be achieved using the PIPP in acute pain assessment. Thirty-six healthy term newborn infants were conveniently sampled whilst being videotaped before, during and after heel prick blood sampling. The images were blind-scored by three trained independent raters and scored against the PIPP. The PIPP and its facial action indicators met the parameters of intensity, reactivity and regulation (all p < 0.001). The heart rate variability did not meet any parameter (all p > 0.05). The oxygen saturation variability met only the intensity parameter (p < 0.05). The behavioural state indicator met all parameters and had the best correlation to the psychophysical parameters of all indicators of PIPP (all p < 0.001). We concluded that the overall PIPP meets the assumptions of these psychophysical parameters, with the behavioural state indicator providing the best fit to the model. (paper)

  17. Measuring Quasar Spin via X-ray Continuum Fitting

    Science.gov (United States)

    Jenkins, Matthew; Pooley, David; Rappaport, Saul; Steiner, Jack

    2018-01-01

    We have identified several quasars whose X-ray spectra appear very soft. When fit with power-law models, the best-fit indices are greater than 3. This is very suggestive of thermal disk emission, indicating that the X-ray spectrum is dominated by the disk component. Galactic black hole binaries in such states have been successfully fit with disk-blackbody models to constrain the inner radius, which also constrains the spin of the black hole. We have fit those models to XMM-Newton spectra of several of our identified soft X-ray quasars to place constraints on the spins of the supermassive black holes.

  18. Fitting the e+e- → e+e- lineshape

    International Nuclear Information System (INIS)

    Martinez, M.; Miquel, R.

    1992-01-01

    The implications of different treatments of the e+e- → e+e- cross sections in the context of Z parameter fitting are discussed. We show that fitting with a complete description of the process might become important for an accurate determination of the Z parameters. A fitting formula describing the integrated cross section in terms of the Z parameters is presented. This formula agrees with the most accurate calculations in the Standard Model to within 1 per mil. (orig.)
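    As a hedged illustration of lineshape fitting in general — not the paper's fitting formula, which includes the full process description and radiative effects — a toy relativistic Breit-Wigner can be fitted to pseudo-data by a least-squares scan over the resonance mass and width:

```python
import math

def breit_wigner(s, mass, width, peak):
    """Toy relativistic Breit-Wigner lineshape (no radiative corrections;
    illustrative only)."""
    return peak * s * width**2 / ((s - mass**2) ** 2 + s**2 * width**2 / mass**2)

def chi2_scan(data, masses, widths, peak):
    """Least-squares grid scan for the resonance mass and width."""
    best = None
    for m in masses:
        for g in widths:
            c2 = sum((sig - breit_wigner(e * e, m, g, peak)) ** 2
                     for e, sig in data)
            if best is None or c2 < best[0]:
                best = (c2, m, g)
    return best[1], best[2]

# pseudo-data generated from known values, then recovered by the scan
true_m, true_g, peak = 91.2, 2.5, 41.5
data = [(e, breit_wigner(e * e, true_m, true_g, peak))
        for e in (88.0, 89.5, 91.0, 91.2, 92.5, 94.0)]
masses = [91.0 + 0.05 * i for i in range(9)]   # 91.00 .. 91.40 GeV
widths = [2.30 + 0.05 * i for i in range(9)]   # 2.30 .. 2.70 GeV
m_hat, g_hat = chi2_scan(data, masses, widths, peak)
```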

  19. SU-D-204-05: Fitting Four NTCP Models to Treatment Outcome Data of Salivary Glands Recorded Six Months After Radiation Therapy for Head and Neck Tumors

    Energy Technology Data Exchange (ETDEWEB)

    Mavroidis, P; Price, A; Kostich, M; Green, R; Das, S; Marks, L; Chera, B [University North Carolina, Chapel Hill, NC (United States); Amdur, R; Mendenhall, W [University of Florida, Gainesville, FL (United States); Sheets, N [University of North Carolina, Raleigh, NC (United States)

    2016-06-15

    Purpose: To estimate the radiobiological parameters of four popular NTCP models that describe the dose-response relations of salivary glands to the severity of patient-reported dry mouth 6 months post chemo-radiotherapy. To identify the glands which best correlate with the manifestation of those clinical endpoints. Finally, to evaluate the goodness-of-fit of the NTCP models. Methods: Forty-three patients were treated on a prospective multi-institutional phase II study for oropharyngeal squamous cell carcinoma. All the patients received 60 Gy IMRT and they reported symptoms using the novel patient-reported outcome version of the CTCAE. We derived the individual patient dosimetric data of the parotid and submandibular glands (SMG) as separate structures as well as combinations. The Lyman-Kutcher-Burman (LKB), Relative Seriality (RS), Logit and Relative Logit (RL) NTCP models were used to fit the patient data. The fitting of the different models was assessed through the area under the receiver operating characteristic curve (AUC) and the Odds Ratio methods. Results: The AUC values were highest for the contralateral parotid for Grade ≥ 2 (0.762 for the LKB, RS, Logit and 0.753 for the RL). For the salivary glands the AUC values were: 0.725 for the LKB, RS, Logit and 0.721 for the RL. For the contralateral SMG the AUC values were: 0.721 for LKB, 0.714 for Logit and 0.712 for RS and RL. The Odds Ratio for the contralateral parotid was 5.8 (1.3–25.5) for all four NTCP models for the radiobiological dose threshold of 21 Gy. Conclusion: It was shown that all the examined NTCP models could fit the clinical data well, with very similar accuracy. The contralateral parotid gland appears to correlate best with the clinical endpoints of severe/very severe dry mouth. An EQD2Gy dose of 21 Gy appears to be a safe threshold to be used as a constraint in treatment planning.
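    Of the four models compared, the LKB formulation is the most widely used: the dose-volume histogram is reduced to a generalized equivalent uniform dose (EUD), which is then mapped through a probit curve with parameters TD50, m, and n. A minimal sketch; the DVH and parameter values below are illustrative, not the values fitted in this study.

```python
import math

def lkb_ntcp(dose_bins, volume_fracs, td50, m, n):
    """LKB sketch: reduce a dose-volume histogram to a generalized EUD,
    then map it through a probit (cumulative normal) dose-response curve."""
    eud = sum(v * d ** (1.0 / n) for d, v in zip(dose_bins, volume_fracs)) ** n
    t = (eud - td50) / (m * td50)
    ntcp = 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    return ntcp, eud

# illustrative DVH (dose in Gy, fractional volumes) and parameters
ntcp, eud = lkb_ntcp([10.0, 20.0, 30.0], [0.3, 0.4, 0.3],
                     td50=40.0, m=0.5, n=1.0)
```

With n = 1 the EUD reduces to the mean dose, which is the usual choice for a parallel organ such as the parotid.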

  20. Fast fitting of non-Gaussian state-space models to animal movement data via Template Model Builder

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Whoriskey, Kim; Yurkowski, David

    2015-01-01

    recommend using the Laplace approximation combined with automatic differentiation (as implemented in the novel R package Template Model Builder; TMB) for the fast fitting of continuous-time multivariate non-Gaussian SSMs. Through Argos satellite tracking data, we demonstrate that the use of continuous...... are able to estimate additional parameters compared to previous methods, all without requiring a substantial increase in computational time. The model implementation is made available through the R package argosTrack....

  1. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    Science.gov (United States)

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely are largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated whether the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored whether assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision
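    The Poisson N-mixture likelihood these simulations exercise marginalizes the latent abundance N at each site: a Poisson(λ) prior on N times binomial(N, p) detection for each repeated count, summed over N up to a truncation bound. A minimal sketch of that likelihood (counts and parameter values illustrative):

```python
import math

def nmixture_nll(counts, lam, p, n_max=80):
    """Negative log-likelihood sketch of a Poisson N-mixture model: at each
    site, marginalize the latent abundance over a truncated Poisson(lam)
    prior, with binomial(N, p) detection for each repeat count."""
    nll = 0.0
    for site in counts:
        site_lik = 0.0
        for n_abund in range(max(site), n_max + 1):
            term = math.exp(-lam) * lam**n_abund / math.factorial(n_abund)
            for y in site:
                term *= math.comb(n_abund, y) * p**y * (1.0 - p) ** (n_abund - y)
            site_lik += term
        nll -= math.log(site_lik)
    return nll

counts = [[2, 3], [1, 2], [4, 3]]   # 3 sites x 2 survey occasions
good = nmixture_nll(counts, lam=5.0, p=0.6)   # plausible parameters
bad = nmixture_nll(counts, lam=1.0, p=0.1)    # implausible parameters
```

Minimizing this NLL over (λ, p) is the fitting step; the study's point is that unmodeled heterogeneity in λ or p biases the resulting abundance estimate even when this likelihood appears to fit.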

  2. Exploratory Analyses To Improve Model Fit: Errors Due to Misspecification and a Strategy To Reduce Their Occurrence.

    Science.gov (United States)

    Green, Samuel B.; Thompson, Marilyn S.; Poirier, Jennifer

    1999-01-01

    The use of Lagrange multiplier (LM) tests in specification searches and the efforts that involve the addition of extraneous parameters to models are discussed. Presented are a rationale and strategy for conducting specification searches in two stages that involve adding parameters to LM tests to maximize fit and then deleting parameters not needed…

  3. Classical algorithms for automated parameter-search methods in compartmental neural models - A critical survey based on simulations using neuron

    International Nuclear Information System (INIS)

    Mutihac, R.; Mutihac, R.C.; Cicuttin, A.

    2001-09-01

    gradient-descent techniques are adequate if the parameter space is low-dimensional, relatively smooth, and has a few local minima (e.g., parameterizing single-neuron compartmental models). Only the fast algorithms and/or a decent (low) number of model parameters are candidates for automated parameter search for practical reasons. If necessary, the size of the parameter space may be reduced and/or parallel supercomputers may be used. Data overfitting may negatively affect the generalization ability of the model. Bayesian methods include Occam's factor, which sets the preference for simpler models. Proliferation of (neural) models raises the question of rigorous criteria for comparing the overall performance of various models designed to match the same type of data. Bayesian methods provide the best framework to assess the neural models quantitatively. Paradoxically, parameter-search methods may sometimes be more useful when they fail by discarding unrealistic mechanisms used in the model design, rather than fitting experimental data to an alleged model

  4. Open and closed CDM isocurvature models contrasted with the CMB data

    International Nuclear Information System (INIS)

    Enqvist, Kari; Kurki-Suonio, Hannu; Vaeliviita, Jussi

    2002-01-01

    We consider pure isocurvature cold dark matter models in the case of open and closed universes. We allow for a large spectral tilt and scan the six-dimensional parameter space for the best fit to the COBE, Boomerang, and Maxima-1 data. Taking into account constraints from large-scale structure and big bang nucleosynthesis, we find a best fit with χ² = 121, which is to be compared to χ² = 44 of a flat adiabatic reference model. Hence the current data strongly disfavor pure isocurvature perturbations

  5. Modeling Evolution on Nearly Neutral Network Fitness Landscapes

    Science.gov (United States)

    Yakushkina, Tatiana; Saakian, David B.

    2017-08-01

    To describe virus evolution, it is necessary to define a fitness landscape. In this article, we consider the microscopic models with the advanced version of neutral network fitness landscapes. In this problem setting, we suppose a fitness difference between one-point mutation neighbors to be small. We construct a modification of the Wright-Fisher model, which is related to ordinary infinite population models with nearly neutral network fitness landscape at the large population limit. From the microscopic models in the realistic sequence space, we derive two versions of nearly neutral network models: with sinks and without sinks. We claim that the suggested model describes the evolutionary dynamics of RNA viruses better than the traditional Wright-Fisher model with few sequences.

  6. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

    Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes; the statistical and hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
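    The MCMC step at the heart of such an analysis is often a random-walk Metropolis sampler, which can be sketched in a few lines. The toy posterior below stands in for the Ecomag likelihood model, which is not reproduced here.

```python
import math
import random

def metropolis(log_post, theta0, step, n_iter, seed=0):
    """Random-walk Metropolis sketch: Gaussian proposals, accepted with
    probability min(1, posterior ratio)."""
    rng = random.Random(seed)
    chain = [theta0]
    lp = log_post(theta0)
    for _ in range(n_iter):
        prop = chain[-1] + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            chain.append(prop)   # accept the proposal
            lp = lp_prop
        else:
            chain.append(chain[-1])   # reject: repeat the current state
    return chain

# toy posterior standing in for a hydrological parameter: N(2, 1)
log_post = lambda th: -0.5 * (th - 2.0) ** 2
chain = metropolis(log_post, theta0=0.0, step=1.0, n_iter=5000)
post_mean = sum(chain[1000:]) / len(chain[1000:])   # discard burn-in
```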

  7. Building X-ray pulsar timing model without the use of radio parameters

    Science.gov (United States)

    Sun, Hai-feng; Sun, Xiong; Fang, Hai-yan; Shen, Li-rong; Cong, Shao-peng; Liu, Yan-ming; Li, Xiao-ping; Bao, Wei-min

    2018-02-01

    This paper develops a timing solution for the X-ray pulsar timing model without the use of initial radio model parameters. First, we address the problem of phase ambiguities in the pre-fit residuals in the construction of the pulsar timing model. To improve the estimation accuracy of the pulse time of arrival (TOA), we derive the general form of the test statistics in the Fourier transform and discuss their estimation performance. Meanwhile, a fast maximum likelihood (FML) technique is presented to estimate the pulse TOA, which outperforms the cross-correlation (CC) estimator and exhibits performance comparable to the maximum likelihood (ML) estimator despite a much reduced computational complexity. Based on a strategy that minimizes the differences of the pre-fit residuals, we present an effective forced phase-connection technique to obtain initial model parameters. Then, we use observations from the Rossi X-Ray Timing Explorer (RXTE) and X-ray pulsar navigation-I (XPNAV-1) satellites for experimental studies, and discuss the main differences between the root mean square (RMS) residuals calculated with the X-ray and radio ephemerides. Finally, a chi-square value (CSV) of pulse profiles is presented as a complementary indicator to the RMS residuals for evaluating the model parameters. The results show that the proposed timing solution is valid and effective, and that the obtained model parameters are a reasonable alternative to the radio ephemeris.

  8. Invited commentary: Lost in estimation--searching for alternatives to markov chains to fit complex Bayesian models.

    Science.gov (United States)

    Molitor, John

    2012-03-01

    Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.
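    One simple non-Markov-chain approach of the kind the commentary discusses is direct grid approximation of the posterior: evaluate prior times likelihood on a fixed grid and normalize, with no chain and no convergence diagnostics. A generic sketch (not the specific methods of Cole et al.):

```python
import math

def grid_posterior(log_lik, log_prior, grid):
    """Grid-approximation sketch: evaluate prior x likelihood on a fixed grid
    and normalize; deterministic, no Markov chain required."""
    logs = [log_lik(t) + log_prior(t) for t in grid]
    m = max(logs)
    w = [math.exp(l - m) for l in logs]   # subtract max for stability
    z = sum(w)
    return [x / z for x in w]

# toy example: binomial likelihood, 7 successes in 10 trials, flat prior on p
k, n = 7, 10
grid = [i / 100 for i in range(1, 100)]
log_lik = lambda p: k * math.log(p) + (n - k) * math.log(1 - p)
post = grid_posterior(log_lik, lambda p: 0.0, grid)
post_mean = sum(p * w for p, w in zip(grid, post))
```

Grid methods scale poorly beyond a few parameters, which is exactly the regime where MCMC (or the variational alternatives in active development) takes over.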

  9. Source term modelling parameters for Project-90

    International Nuclear Information System (INIS)

    Shaw, W.; Smith, G.; Worgan, K.; Hodgkinson, D.; Andersson, K.

    1992-04-01

    This document summarises the input parameters for the source term modelling within Project-90. In the first place, the parameters relate to the CALIBRE near-field code which was developed for the Swedish Nuclear Power Inspectorate's (SKI) Project-90 reference repository safety assessment exercise. An attempt has been made to give best estimate values and, where appropriate, a range which is related to variations around base cases. It should be noted that the data sets contain amendments to those considered by KBS-3. In particular, a completely new set of inventory data has been incorporated. The information given here does not constitute a complete set of parameter values for all parts of the CALIBRE code. Rather, it gives the key parameter values which are used in the constituent models within CALIBRE and the associated studies. For example, the inventory data acts as an input to the calculation of the oxidant production rates, which influence the generation of a redox front. The same data is also an initial value data set for the radionuclide migration component of CALIBRE. Similarly, the geometrical parameters of the near-field are common to both sub-models. The principal common parameters are gathered here for ease of reference and avoidance of unnecessary duplication and transcription errors. (au)

  10. Fitting Hidden Markov Models to Psychological Data

    Directory of Open Access Journals (Sweden)

    Ingmar Visser

    2002-01-01

    Full Text Available Markov models have been used extensively in the psychology of learning. Applications of hidden Markov models, however, are rare, partially because comprehensive statistics for model selection and model assessment are lacking in the psychological literature. We present model selection and model assessment statistics that are particularly useful in applying hidden Markov models in psychology. These statistics are presented and evaluated through simulation studies on a toy example. We compare AIC, BIC and related criteria, and introduce a prediction error measure for assessing goodness of fit. In a simulation study, two methods of fitting equality constraints are compared. In two illustrative examples with experimental data, we apply the selection criteria, fit models with constraints, and assess goodness of fit. First, data from a concept identification task are analyzed. Hidden Markov models provide a flexible approach to analyzing such data compared with other modeling methods. Second, a novel application of hidden Markov models to implicit learning is presented. Hidden Markov models are used in this context to quantify the knowledge that subjects express in an implicit learning task. This method of analyzing implicit learning data provides a comprehensive approach for addressing important theoretical issues in the field.
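
    The model-comparison idea can be sketched concretely: compute the log-likelihood of an observation sequence under a hidden Markov model with the forward algorithm, then penalize by the number of free parameters to get AIC and BIC. The 2-state model and data below are invented for illustration, not taken from the paper.

```python
# Toy AIC/BIC comparison for a hidden Markov model: forward-algorithm
# log-likelihood plus a parameter-count penalty. All numbers are made up.
import math

A = [[0.9, 0.1], [0.2, 0.8]]          # state transition matrix
B = [[0.8, 0.2], [0.3, 0.7]]          # emission probabilities for symbols 0/1
pi = [0.5, 0.5]                       # initial state distribution
obs = [0, 0, 1, 0, 1, 1, 1, 0]        # observed symbol sequence

# Forward algorithm: alpha[t][s] = P(obs[0..t], state t = s)
alpha = [pi[s] * B[s][obs[0]] for s in range(2)]
for o in obs[1:]:
    alpha = [sum(alpha[s] * A[s][t] for s in range(2)) * B[t][o]
             for t in range(2)]
loglik = math.log(sum(alpha))

# Free parameters of a 2-state, 2-symbol HMM: 2 transition, 2 emission, 1 initial
k = 2 + 2 + 1
aic = -2 * loglik + 2 * k
bic = -2 * loglik + k * math.log(len(obs))
print("loglik %.3f  AIC %.2f  BIC %.2f" % (loglik, aic, bic))
```

In practice one would fit several candidate models, compute this score for each, and select the model with the lowest AIC or BIC.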

  11. Incorporating doubly resonant $W^\\pm$ data in a global fit of SMEFT parameters to lift flat directions

    CERN Document Server

    Berthier, Laure; Trott, Michael

    2016-09-27

    We calculate the double pole contribution to two to four fermion scattering through $W^{\\pm}$ currents at tree level in the Standard Model Effective Field Theory (SMEFT). We assume all fermions to be massless, $\\rm U(3)^5$ flavour and $\\rm CP$ symmetry. Using this result, we update the global constraint picture on SMEFT parameters including LEPII data on these charged current processes, and also include modifications to our fit procedure motivated by a companion paper focused on $W^{\\pm}$ mass extractions. The fit reported is now to 177 observables and emphasises the need for a consistent inclusion of theoretical errors, and a consistent treatment of observables. Including charged current data lifts the two-fold degeneracy previously encountered in LEP (and lower energy) data, and allows us to set simultaneous constraints on 20 of 53 Wilson coefficients in the SMEFT, consistent with our assumptions. This allows the model independent inclusion of LEP data in SMEFT studies at LHC, which are projected into the S...

  12. Feature extraction through least squares fit to a simple model

    International Nuclear Information System (INIS)

    Demuth, H.B.

    1976-01-01

    The Oak Ridge National Laboratory (ORNL) presented the Los Alamos Scientific Laboratory (LASL) with 18 radiographs of fuel rod test bundles. The problem is to estimate the thickness of the gap between some cylindrical rods and a flat wall surface. The edges of the gaps are poorly defined due to finite source size, x-ray scatter, parallax, film grain noise, and other degrading effects. The radiographs were scanned and the scan-line data were averaged to reduce noise and to convert the problem to one dimension. A model of the ideal gap, convolved with an appropriate point-spread function, was fit to the averaged data with a least squares program; and the gap width was determined from the final fitted-model parameters. The least squares routine did converge and the gaps obtained are of reasonable size. The method is remarkably insensitive to noise. This report describes the problem, the techniques used to solve it, and the results and conclusions. Suggestions for future work are also given
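
    The technique described above can be sketched in a few lines: fit a blurred top-hat gap model (an ideal gap convolved with a Gaussian point-spread function) to a noisy averaged scan line by least squares, then read the gap width off the fitted parameters. The model form, PSF, and all numerical values here are illustrative assumptions, not the report's actual data.

```python
# Sketch: estimate a gap width by least-squares fitting a blurred top-hat
# model (ideal gap convolved with a Gaussian PSF). Values are illustrative.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def blurred_gap(x, center, width, sigma, lo, hi):
    """Top-hat of given width convolved with a Gaussian PSF of scale sigma."""
    a, b = center - width / 2, center + width / 2
    step = 0.5 * (erf((x - a) / (sigma * np.sqrt(2)))
                  - erf((x - b) / (sigma * np.sqrt(2))))
    return lo + (hi - lo) * step

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 400)
truth = dict(center=0.3, width=1.8, sigma=0.5, lo=0.1, hi=1.0)
y = blurred_gap(x, **truth) + rng.normal(0, 0.02, x.size)  # noisy "scan line"

p0 = [0.0, 1.0, 0.3, 0.0, 1.0]                 # rough initial guess
popt, pcov = curve_fit(blurred_gap, x, y, p0=p0)
print("estimated gap width: %.3f (true %.1f)" % (popt[1], truth["width"]))
```

Because the blur is fitted jointly with the width, the estimate is insensitive to moderate noise, mirroring the report's observation.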

  13. Importance of Nonperturbative QCD Parameters for Bottom Mesons

    Directory of Open Access Journals (Sweden)

    A. Upadhyay

    2014-01-01

    Full Text Available The importance of nonperturbative quantum chromodynamics (QCD) parameters is discussed in the context of their predictive power for bottom meson masses and isospin splitting. In the framework of heavy quark effective theory, the work presented here focuses on the allowed values of the two nonperturbative QCD parameters used in the heavy quark effective theory formula; using the best-fitted parameters, the masses of the excited bottom meson states in the j^P = 1/2^+ doublet in the strange and nonstrange sectors are calculated. The calculated masses are found to match well with experiments and other phenomenological models. The mass splitting and hyperfine splitting have also been analyzed for both strange and nonstrange heavy mesons with respect to spin and flavor symmetries.

  14. Random-growth urban model with geographical fitness

    Science.gov (United States)

    Kii, Masanobu; Akimoto, Keigo; Doi, Kenji

    2012-12-01

    This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
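
    The core mechanism can be illustrated with a minimal preferential-attachment process in which each city carries a fitness factor; attachment probability is proportional to fitness times size. This is a generic sketch of the model class, not the authors' exact dynamical equations, and all rates are made up.

```python
# Minimal random-growth sketch: cities attract new population with
# probability proportional to (fitness x current size); occasionally a
# new city is founded. Parameters are illustrative only.
import random

random.seed(1)
p_new = 0.01                      # probability a step founds a new city
cities = [{"pop": 1, "fit": random.random()} for _ in range(5)]

for _ in range(20000):
    if random.random() < p_new:
        cities.append({"pop": 1, "fit": random.random()})
    else:
        # attach one unit of population, weighted by fitness * size
        weights = [c["fit"] * c["pop"] for c in cities]
        random.choices(cities, weights=weights)[0]["pop"] += 1

sizes = sorted((c["pop"] for c in cities), reverse=True)
print(len(cities), "cities; three largest:", sizes[:3])
```

Runs of this kind produce heavy-tailed size distributions; varying the fitness distribution (including allowing unfavorable locations) shifts the tail away from a pure Pareto form, which is the effect the paper analyzes.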

  15. The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting

    Science.gov (United States)

    Tao, Zhang; Li, Zhang; Dingjun, Chen

    Based on the idea of second-order curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A preventing-increase model is introduced in this paper, and the model parameters are solved with the Matlab software. The validity of the preventing-increase model is confirmed through a numerical experiment. The experimental results show that the precision of the preventing-increase model is satisfactory.

  16. Investigate the effect of anisotropic order parameter on the specific heat of anisotropic two-band superconductors

    International Nuclear Information System (INIS)

    Udomsamuthirun, P.; Peamsuwan, R.; Kumvongsa, C.

    2009-01-01

    The effect of anisotropic order parameters on the specific heat of anisotropic two-band superconductors in the BCS weak-coupling limit is investigated. An analytical expression for the specific heat jump and numerical specific heat curves are obtained using anisotropic order parameters together with electron-phonon and non-electron-phonon interactions. Two models of anisotropic order parameters are used in the numerical calculation and are found to have little effect on the numerical results. The specific heat jump of the MgB2, Lu2Fe3Si5 and Nb3Sn superconductors can be fitted well with both models. Comparing with the experimental data over the full temperature range, the best fits are for the Nb3Sn, MgB2, and Lu2Fe3Si5 superconductors.

  17. A self-adaptive genetic algorithm to estimate JA model parameters considering minor loops

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Hai-liang; Wen, Xi-shan; Lan, Lei; An, Yun-zhu; Li, Xiao-ping

    2015-01-15

    A self-adaptive genetic algorithm for estimating Jiles–Atherton (JA) magnetic hysteresis model parameters is presented. The fitness function is based on the distances between equidistant key points of normalized hysteresis loops. Both a linear function and a logarithm function are adopted to code the five parameters of the JA model. Roulette-wheel selection is used, and the selection pressure is adjusted adaptively by deducting a proportional term that depends on a common value of the current generation. The crossover operator combines arithmetic crossover and multipoint crossover. Nonuniform mutation is improved by adjusting the mutation ratio adaptively. The algorithm is used to estimate the parameters of one kind of silicon-steel sheet's hysteresis loops, and the results are in good agreement with published data. - Highlights: • We present a method to find JA parameters for both major and minor loops. • The fitness function is based on distances between key points of normalized loops. • The selection pressure is adjusted adaptively based on generations.
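
    The mechanics of such a genetic algorithm (elitist selection, arithmetic crossover, mutation whose scale shrinks adaptively over generations) can be sketched on a toy problem. To keep the example self-contained, the JA hysteresis model is replaced by a simple two-parameter curve fitted to key points; the population sizes, schedules, and operators are illustrative, not the paper's exact settings.

```python
# Toy genetic algorithm with adaptively shrinking mutation, fitting two
# parameters (a, b) of y = a*x + b*x^2 to noiseless "key point" data.
import random

random.seed(2)
xs = [i / 10 for i in range(20)]
true_a, true_b = 1.5, -0.7
target = [true_a * x + true_b * x * x for x in xs]   # "measured" key points

def fitness(ind):
    a, b = ind
    return -sum((a * x + b * x * x - t) ** 2 for x, t in zip(xs, target))

pop = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(40)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    sigma = 0.5 * (1 - gen / 200)        # mutation scale shrinks each generation
    parents = pop[:10]                   # elitism: keep the fittest
    children = []
    while len(children) < 30:
        p1, p2 = random.sample(parents, 2)
        w = random.random()              # arithmetic crossover weight
        children.append(tuple(w * u + (1 - w) * v + random.gauss(0, sigma)
                              for u, v in zip(p1, p2)))
    pop = parents + children

best = max(pop, key=fitness)
print("best (a, b):", best)
```

The shrinking mutation scale plays the role of the paper's adaptive mutation ratio: broad exploration early, fine refinement late.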

  18. A self-adaptive genetic algorithm to estimate JA model parameters considering minor loops

    International Nuclear Information System (INIS)

    Lu, Hai-liang; Wen, Xi-shan; Lan, Lei; An, Yun-zhu; Li, Xiao-ping

    2015-01-01

    A self-adaptive genetic algorithm for estimating Jiles–Atherton (JA) magnetic hysteresis model parameters is presented. The fitness function is based on the distances between equidistant key points of normalized hysteresis loops. Both a linear function and a logarithm function are adopted to code the five parameters of the JA model. Roulette-wheel selection is used, and the selection pressure is adjusted adaptively by deducting a proportional term that depends on a common value of the current generation. The crossover operator combines arithmetic crossover and multipoint crossover. Nonuniform mutation is improved by adjusting the mutation ratio adaptively. The algorithm is used to estimate the parameters of one kind of silicon-steel sheet's hysteresis loops, and the results are in good agreement with published data. - Highlights: • We present a method to find JA parameters for both major and minor loops. • The fitness function is based on distances between key points of normalized loops. • The selection pressure is adjusted adaptively based on generations.

  19. Assessing models for parameters of the Ångström-Prescott formula in China

    DEFF Research Database (Denmark)

    Liu, Xiaoying; Xu, Yinlong; Zhong, Xiuli

    2012-01-01

    […] against the calibrated ones. Models 1, 6 and 7 showed an advantage in keeping the physical meaning of their modeled parameters due to the small magnitude of […] and the use of the relation of (a + b) versus other variables as a constraint, respectively. All models tended to perform best in zone II and poorest […] (models 1–2), altitude (model 7), altitude and […] (model 3), altitude, […] and latitude (model 4), altitude and latitude (model 5), and annual average air temperature (model 6). It was found that model 7 performed best, followed by models 6, 1, 3, 2 and 4. The better performance of models 7 and 6 and the fact […]. This also suggests that the applicability of a Rs model is not proportional to its complexity. The common feature of the better-performing models suggests that accurate modeling of parameter a is more important than that of b. Therefore, priority should be given to parameter models having higher accuracy for a […]

  20. Determining cosmological parameters with the latest observational data

    International Nuclear Information System (INIS)

    Xia Junqing; Li Hong; Zhao Gongbo; Zhang Xinmin

    2008-01-01

    In this paper, we combine the latest observational data, including the WMAP five-year data (WMAP5), BOOMERanG, CBI, VSA, ACBAR, as well as the baryon acoustic oscillations (BAO) and type Ia supernovae (SN) ''union'' compilation (307 samples), and use the Markov chain Monte Carlo method to determine the cosmological parameters, such as the equation of state (EoS) of dark energy, the curvature of the universe, the total neutrino mass, and the parameters associated with the power spectrum of primordial fluctuations. In a flat universe, we obtain a tight limit on the constant EoS of dark energy, w = -0.977 ± 0.056 (stat) ± 0.057 (sys). For dynamical dark energy models with a time-evolving EoS parametrized as w_de(a) = w_0 + w_1(1 - a), we find best-fit values w_0 = -1.08 and w_1 = 0.368, while the ΛCDM model remains a good fit to the current data. For the curvature of the universe Ω_k, our results give -0.012 < Ω_k when w_de = -1. When considering the dynamics of dark energy, a flat universe is still a good fit to the current data, with -0.015 < Ω_k, and models with n_s ≥ 1 are excluded at more than the 2σ confidence level. However, in the framework of dynamical dark energy models, the allowed region in the (n_s, r) parameter space is enlarged significantly. Finally, we find no strong evidence for a large running of the spectral index.

  1. X-33 Telemetry Best Source Selection, Processing, Display, and Simulation Model Comparison

    Science.gov (United States)

    Burkes, Darryl A.

    1998-01-01

    The X-33 program requires the use of multiple telemetry ground stations to cover the launch, ascent, transition, descent, and approach phases for the flights from Edwards AFB to landings at Dugway Proving Grounds, UT, and Malmstrom AFB, MT. This paper discusses the X-33 telemetry requirements and design, including information on fixed and mobile telemetry systems, best source selection, and support for Range Safety Officers. A best source selection system will be utilized to automatically determine the best source based on the frame synchronization status of the incoming telemetry streams. These systems will be used to select the best source at the landing sites and at NASA Dryden Flight Research Center to determine the overall best source among the launch site, intermediate site, and landing site sources. The best source at the landing sites will be decommutated to display critical flight safety parameters for the Range Safety Officers. The overall best source will be sent to Lockheed Martin's Operational Control Center at Edwards AFB for performance monitoring by X-33 program personnel and for monitoring of critical flight safety parameters by the primary Range Safety Officer. The real-time telemetry data (received signal strength, etc.) from each of the primary ground stations will also be compared during each mission with simulation data generated using the Dynamic Ground Station Analysis software program. An overall assessment of the accuracy of the model will occur after each mission. Acknowledgment: The work described in this paper was supported by NASA through cooperative agreement NCC8-115 with Lockheed Martin Skunk Works.

  2. Contrast Gain Control Model Fits Masking Data

    Science.gov (United States)

    Watson, Andrew B.; Solomon, Joshua A.; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    We studied the fit of a contrast gain control model to data of Foley (JOSA 1994), consisting of thresholds for a Gabor patch masked by gratings of various orientations, or by compounds of two orientations. Our general model includes models of Foley and Teo & Heeger (IEEE 1994). Our specific model used a bank of Gabor filters with octave bandwidths at 8 orientations. Excitatory and inhibitory nonlinearities were power functions with exponents of 2.4 and 2. Inhibitory pooling was broad in orientation, but narrow in spatial frequency and space. Minkowski pooling used an exponent of 4. All of the data for observer KMF were well fit by the model. We have developed a contrast gain control model that fits masking data. Unlike Foley's, our model accepts images as inputs. Unlike Teo & Heeger's, our model did not require multiple channels for different dynamic ranges.
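
    The divisive gain-control computation described above can be sketched as follows: excitatory filter responses raised to one power, divided by a pooled inhibitory term raised to another, followed by Minkowski pooling. The filter outputs and the semisaturation constant below are made-up placeholders; only the exponents (2.4, 2, and 4) come from the abstract.

```python
# Hedged sketch of a divisive contrast-gain-control stage with Minkowski
# pooling. Filter responses are random placeholders for |Gabor outputs|.
import numpy as np

rng = np.random.default_rng(6)
resp = rng.uniform(0, 1, (8, 10))   # 8 orientation channels x 10 positions

p, q, sigma = 2.4, 2.0, 0.1         # excitatory/inhibitory exponents, semisaturation
excitation = resp ** p
# inhibitory pool is broad across orientation (sum over axis 0)
inhibition = sigma + (resp ** q).sum(axis=0, keepdims=True)
r = excitation / inhibition

# Minkowski pooling (exponent 4) collapses channels to one decision variable
R = (r ** 4).sum() ** (1 / 4)
print("pooled response:", R)
```

Threshold predictions then come from comparing R for target-plus-mask versus mask-alone inputs, which is how such models are fit to masking data.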

  3. SURVEY DESIGN FOR SPECTRAL ENERGY DISTRIBUTION FITTING: A FISHER MATRIX APPROACH

    International Nuclear Information System (INIS)

    Acquaviva, Viviana; Gawiser, Eric; Bickerton, Steven J.; Grogin, Norman A.; Guo Yicheng; Lee, Seong-Kook

    2012-01-01

    The spectral energy distribution (SED) of a galaxy contains information on the galaxy's physical properties, and multi-wavelength observations are needed in order to measure these properties via SED fitting. In planning these surveys, optimization of the resources is essential. The Fisher Matrix (FM) formalism can be used to quickly determine the best possible experimental setup to achieve the desired constraints on the SED-fitting parameters. However, because it relies on the assumption of a Gaussian likelihood function, it is in general less accurate than other slower techniques that reconstruct the probability distribution function (PDF) from the direct comparison between models and data. We compare the uncertainties on SED-fitting parameters predicted by the FM to the ones obtained using the more thorough PDF-fitting techniques. We use both simulated spectra and real data, and consider a large variety of target galaxies differing in redshift, mass, age, star formation history, dust content, and wavelength coverage. We find that the uncertainties reported by the two methods agree within a factor of two in the vast majority (∼90%) of cases. If the age determination is uncertain, the top-hat prior in age used in PDF fitting to prevent each galaxy from being older than the universe needs to be incorporated in the FM, at least approximately, before the two methods can be properly compared. We conclude that the FM is a useful tool for astronomical survey design.
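
    The FM forecast itself is a short computation: build the Jacobian of the model fluxes with respect to the parameters at a fiducial point, form the Fisher matrix, and invert it for the parameter covariance. The toy two-parameter "SED" model, bands, and errors below are invented for illustration.

```python
# Minimal Fisher-matrix forecast for a toy model m(lam; A, tau) = A*exp(-lam/tau)
# observed in a few bands with Gaussian errors. All numbers are illustrative.
import numpy as np

lam = np.array([0.5, 1.0, 2.0, 4.0])       # "filter" wavelengths (arbitrary units)
sigma = np.array([0.05, 0.05, 0.08, 0.1])  # per-band flux uncertainties
A, tau = 1.0, 2.0                          # fiducial parameter values

# Analytic derivatives of the model with respect to each parameter
dm_dA = np.exp(-lam / tau)
dm_dtau = A * lam / tau**2 * np.exp(-lam / tau)
J = np.column_stack([dm_dA, dm_dtau])

# Fisher matrix F_ij = sum_k J_ki J_kj / sigma_k^2; its inverse forecasts
# the parameter covariance under the Gaussian-likelihood assumption
F = J.T @ (J / sigma[:, None] ** 2)
cov = np.linalg.inv(F)
err = np.sqrt(np.diag(cov))
print("forecast 1-sigma errors on (A, tau):", err)
```

Comparing these forecast errors against full PDF-fitting uncertainties, as the paper does, reveals where the Gaussian-likelihood assumption breaks down (e.g., age priors).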

  4. Modeling hepatitis C virus kinetics under therapy using pharmacokinetic and pharmacodynamic information

    Energy Technology Data Exchange (ETDEWEB)

    Perelson, Alan S [Los Alamos National Laboratory; Shudo, Emi [Los Alamos National Laboratory; Ribeiro, Ruy M [Los Alamos National Laboratory

    2008-01-01

    Mathematical models have proven helpful in analyzing the virological response to antiviral therapy in hepatitis C virus (HCV) infected subjects. Objective: To summarize the uses and limitations of different models for analyzing HCV kinetic data under pegylated interferon therapy. Methods: We formulate mathematical models and fit them by nonlinear least squares regression to patient data in order to estimate model parameters. We statistically compare the goodness of fit and the parameter values estimated by different models. Results/Conclusion: The best model for parameter estimation depends on the availability and quality of the data as well as the therapy used. We also discuss the mathematical models that will be needed to analyze HCV kinetic data from clinical trials with new antiviral drugs.

  5. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example

    Science.gov (United States)

    Helgesson, P.; Sjöstrand, H.

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
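
    One standard way to realize the "fit with a prior" part of this approach is to append prior residuals (θ − θ_prior)/σ_prior to the data residuals, so that a Levenberg-Marquardt-type solver minimizes both terms jointly. The single-Gaussian-peak setup, prior values, and noise level below are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch: non-linear least squares with a Gaussian prior, implemented by
# augmenting the residual vector with prior residuals. Values are made up.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
x = np.linspace(-3, 3, 100)

def peak(theta, x):
    amp, mu, sig = theta
    return amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)

truth = np.array([5.0, 0.2, 0.8])
sigma_y = 0.2
y = peak(truth, x) + rng.normal(0, sigma_y, x.size)   # synthetic data

theta_prior = np.array([4.0, 0.0, 1.0])   # hypothetical prior mean
sigma_prior = np.array([2.0, 0.5, 0.5])   # hypothetical prior std devs

def residuals(theta):
    data_res = (peak(theta, x) - y) / sigma_y
    prior_res = (theta - theta_prior) / sigma_prior   # prior as pseudo-data
    return np.concatenate([data_res, prior_res])

fit = least_squares(residuals, x0=theta_prior)
print("posterior-mode estimate (amp, mu, sig):", fit.x)
```

Treating model defects would add a Gaussian-process term on top of this; the augmented-residual trick above handles only the prior.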

  6. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example.

    Science.gov (United States)

    Helgesson, P; Sjöstrand, H

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.

  7. Development of an Agent-Based Model (ABM) to Simulate the Immune System and Integration of a Regression Method to Estimate the Key ABM Parameters by Fitting the Experimental Data

    Science.gov (United States)

    Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le

    2015-01-01

    Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for an ABM to estimate key model parameters by incorporating experimental data, whereas a differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It combines the advantages of ABM and DE by employing an ABM to mimic the multi-scale immune system with various phenotypes and types of cells, and by using the input and output of the ABM to build up a Loess regression for key parameter estimation. Next, we employed a greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set, and used the ABM to describe a 3D immune system similar to previous studies that employed the DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also accurately infer the key parameters, as a DE model does. Therefore, this study develops a complex-system modeling mechanism that can simulate the complicated immune system in detail, like an ABM, while validating the reliability and efficiency of the model by fitting experimental data, like a DE approach. PMID:26535589

  8. Linear time algorithms to construct populations fitting multiple constraint distributions at genomic scales.

    Science.gov (United States)

    Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi

    2017-10-09

    Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, the outcome of breeding operations can be studied using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear-time Simulation using Best-fit Algorithms (SimBA) for two classes of problems where each co-fits two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is demonstrated here to accurately fit the target distributions, allowing efficient large-scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear-time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available on http://researcher.watson.ibm.com/project/5669.

  9. Fitting outbreak models to data from many small norovirus outbreaks

    Directory of Open Access Journals (Sweden)

    Eamon B. O’Dea

    2014-03-01

    Full Text Available Infectious disease often occurs in small, independent outbreaks in populations with varying characteristics. Each outbreak by itself may provide too little information for accurate estimation of epidemic model parameters. Here we show that using standard stochastic epidemic models for each outbreak and allowing parameters to vary between outbreaks according to a linear predictor leads to a generalized linear model that accurately estimates parameters from many small and diverse outbreaks. By estimating initial growth rates in addition to transmission rates, we are able to characterize variation in numbers of initially susceptible individuals or contact patterns between outbreaks. With simulation, we find that the estimates are fairly robust to the data being collected at discrete intervals and imputation of about half of all infectious periods. We apply the method by fitting data from 75 norovirus outbreaks in health-care settings. Our baseline regression estimates are 0.0037 transmissions per infective-susceptible day, an initial growth rate of 0.27 transmissions per infective day, and a symptomatic period of 3.35 days. Outbreaks in long-term-care facilities had significantly higher transmission and initial growth rates than outbreaks in hospitals.
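
    The regression idea, per-outbreak transmission modeled through a linear predictor with exposure, can be sketched with a toy Poisson maximum-likelihood fit: counts ~ Poisson(exp(β0 + β1·LTCF) × infective-susceptible days). The synthetic data, rates, and covariate below are invented for illustration, not the 75 real outbreaks.

```python
# Toy sketch of a Poisson regression for per-outbreak transmission rates,
# with a long-term-care-facility (LTCF) indicator in the linear predictor.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 75
ltcf = rng.integers(0, 2, n)                 # 1 = long-term-care facility
exposure = rng.uniform(200, 2000, n)         # infective-susceptible days
beta_true = np.array([np.log(0.004), 0.6])   # assumed: LTCF raises transmission
rate = np.exp(beta_true[0] + beta_true[1] * ltcf)
events = rng.poisson(rate * exposure)        # synthetic transmission counts

X = np.column_stack([np.ones(n), ltcf])

def negloglik(beta):
    mu = np.exp(X @ beta) * exposure         # expected count per outbreak
    return np.sum(mu - events * np.log(mu))  # Poisson NLL up to a constant

fit = minimize(negloglik, x0=np.zeros(2))
print("per-day transmission rate (hospital):", np.exp(fit.x[0]))
print("LTCF rate ratio:", np.exp(fit.x[1]))
```

The paper's actual model is a stochastic epidemic model with imputed infectious periods; this sketch shows only why pooling many small outbreaks through a shared linear predictor sharpens the estimates.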

  10. Four-parameter analytical local model potential for atoms

    International Nuclear Information System (INIS)

    Fei, Yu; Jiu-Xun, Sun; Rong-Gang, Tian; Wei, Yang

    2009-01-01

    An analytical local model potential for modeling the interaction in an atom reduces the computational effort in electronic structure calculations significantly. A new four-parameter analytical local model potential is proposed for atoms Li through Lr; the values of the four parameters are shell-independent and are obtained by fitting the results of the Xα method. The energy eigenvalues, the radial wave functions and the total energies of electrons are then obtained by solving the radial Schrödinger equation with the new potential function using Numerov's numerical method. The results show that the new form of the potential function is suitable for high-, medium- and low-Z atoms. A comparison between the new potential function and other analytical potential functions shows the greater flexibility and greater accuracy of the present potential function. (atomic and molecular physics)

  11. FITTING OF PARAMETRIC BUILDING MODELS TO OBLIQUE AERIAL IMAGES

    Directory of Open Access Journals (Sweden)

    U. S. Panday

    2012-09-01

    Full Text Available In the literature and in photogrammetric workstations, many approaches and systems to automatically reconstruct buildings from remote sensing data are described and available. Those building models are used, for instance, in city modeling or in a cadastre context. If a roof overhang is present, the building walls cannot be estimated correctly from nadir-view aerial images or airborne laser scanning (ALS) data. This leads to inconsistent building outlines, which has a negative influence on visual impression but, more seriously, also represents a wrong legal boundary in the cadastre. Oblique aerial images, as opposed to nadir-view images, reveal greater detail, making it possible to see different views of an object taken from different directions. Building walls are directly visible in oblique images, and those images are used for automated roof overhang estimation in this research. A fitting algorithm is employed to find the roof parameters of simple buildings. It uses a least squares algorithm to fit projected wire frames to their corresponding edge lines extracted from the images. Self-occlusion is detected based on the intersection of the viewing ray with the planes formed by the building, whereas occlusion from other objects is detected using an ALS point cloud. Overhang and ground height are obtained by sweeping vertical and horizontal planes, respectively. Experimental results are verified with high-resolution ortho-images, field survey, and ALS data. A planimetric accuracy of 1 cm mean and 5 cm standard deviation was obtained, while the buildings' orientations were accurate to a mean of 0.23° and a standard deviation of 0.96° with respect to the ortho-image. Overhang parameters were aligned to approximately 10 cm with the field survey. The ground and roof heights were accurate to means of -9 cm and 8 cm, with standard deviations of 16 cm and 8 cm, with respect to ALS. The developed approach reconstructs 3D building models well in cases of sufficient texture. More images should be acquired for

  12. CKM parameter fits, the Bs0- anti Bs0 mixing ratio xs and CP-violating phases in B decays

    International Nuclear Information System (INIS)

    Ali, A.; London, D.

    1993-02-01

    We review and update constraints on the parameters of the flavour mixing matrix (V_CKM) in the Standard Model. In performing these fits, we use inputs from the measurements of |ε|, the CP-violating parameter in K decays, of x_d = (ΔM)/Γ, the mixing parameter in B0_d - anti B0_d mixing, and the present measurements of the matrix elements |V_cb| and |V_ub|. We take into account the next-to-leading order QCD results in our analysis, wherever available, and incorporate results stemming from the ongoing lattice calculations of the B-meson coupling constants, which predict a value f_Bd = 200 ± 30 MeV, though for the sake of comparison we also show the CKM fits for smaller values of f_Bd. We use the updated CKM matrix to predict the mixing ratio x_s, relevant for B0_s - anti B0_s mixing, and the phases in the CKM unitarity triangle, sin 2α, sin 2β and sin 2γ, which determine the CP-violating asymmetries in B decays. The importance of measuring the ratio x_s in restricting the allowed values of the CKM parameters is emphasized. (orig.)

  13. A new Bayesian recursive technique for parameter estimation

    Science.gov (United States)

    Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis

    2006-08-01

    The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
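
    The bound-narrowing idea can be illustrated with a toy sketch. This is not the LOBARE algorithm itself, only an illustration of iteratively shrinking a sampling space around the best-scoring "parent" parameter sets; the exponential model, data, and sample sizes below are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(params, x):
    # Toy model standing in for SVM or SAC-SMA: y = a * exp(-b * x).
    a, b = params
    return a * np.exp(-b * x)

x = np.linspace(0.0, 5.0, 20)
y_obs = model((2.0, 0.5), x)          # "observations" from known parameters

def narrow_bounds(lower, upper, n_samples=200, n_keep=20, n_iter=10):
    """Iteratively shrink the sampling space around the best-fitting samples."""
    lower = np.asarray(lower, float)
    upper = np.asarray(upper, float)
    for _ in range(n_iter):
        samples = rng.uniform(lower, upper, size=(n_samples, lower.size))
        sse = np.array([np.sum((model(p, x) - y_obs) ** 2) for p in samples])
        elite = samples[np.argsort(sse)[:n_keep]]   # best "parent" sets
        lower, upper = elite.min(axis=0), elite.max(axis=0)
    return lower, upper

lo, hi = narrow_bounds([0.1, 0.01], [10.0, 5.0])
```

    After a few iterations the bounds collapse around the parameter set that best reproduces the observations, which is the essence of updating the "parent" bounds based on their fitness.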

  14. What Population Reveals about Individual Cell Identity: Single-Cell Parameter Estimation of Models of Gene Expression in Yeast.

    Directory of Open Access Journals (Sweden)

    Artémis Llamosi

    2016-02-01

    Full Text Available Significant cell-to-cell heterogeneity is ubiquitously observed in isogenic cell populations. Consequently, parameters of models of intracellular processes, usually fitted to population-averaged data, should rather be fitted to individual cells to obtain a population of models of similar but non-identical individuals. Here, we propose a quantitative modeling framework that attributes specific parameter values to single cells for a standard model of gene expression. We combine high quality single-cell measurements of the response of yeast cells to repeated hyperosmotic shocks and state-of-the-art statistical inference approaches for mixed-effects models to infer multidimensional parameter distributions describing the population, and then derive specific parameters for individual cells. The analysis of single-cell parameters shows that single-cell identity (e.g. gene expression dynamics, cell size, growth rate, mother-daughter relationships) is, at least partially, captured by the parameter values of gene expression models (e.g. rates of transcription, translation and degradation). Our approach shows how to use the rich information contained in longitudinal single-cell data to infer parameters that can faithfully represent single-cell identity.

  15. Evaluation of reaction mechanisms and the kinetic parameters for the transesterification of castor oil by liquid enzymes

    DEFF Research Database (Denmark)

    Andrade, Thalles Allan; Errico, Massimiliano; Christensen, Knud Villy

    2017-01-01

    of the transesterification of castor oil with methanol, using the enzyme Eversa® Transform as catalyst, were investigated. Reactions were carried out for 8 hours at 35 °C with an alcohol-to-oil molar ratio of 6:1, 5 wt% of liquid enzyme solution, and 5 wt% of added water by weight of castor oil. The methanolysis rates obtained for the glycerides indicated that transesterification dominates over hydrolysis. Among the four models proposed, the mechanism that gave the best fit could be simplified by eliminating the kinetic parameters with negligible effects on the reaction rates. This model was able to fit

  16. Evolving Non-Dominated Parameter Sets for Computational Models from Multiple Experiments

    Science.gov (United States)

    Lane, Peter C. R.; Gobet, Fernand

    2013-03-01

    Creating robust, reproducible and optimal computational models is a key challenge for theorists in many sciences. Psychology and cognitive science face particular challenges as large amounts of data are collected and many models are not amenable to analytical techniques for calculating parameter sets. Particular problems are to locate the full range of acceptable model parameters for a given dataset, and to confirm the consistency of model parameters across different datasets. Resolving these problems will provide a better understanding of the behaviour of computational models, and so support the development of general and robust models. In this article, we address these problems using evolutionary algorithms to develop parameters for computational models against multiple sets of experimental data; in particular, we propose the `speciated non-dominated sorting genetic algorithm' for evolving models in several theories. We discuss the problem of developing a model of categorisation using twenty-nine sets of data and models drawn from four different theories. We find that the evolutionary algorithms generate high quality models, adapted to provide a good fit to all available data.
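
    The notion of non-domination underlying such multi-objective evolution can be shown with a minimal Pareto-front extraction. This brute-force sketch is not the speciated non-dominated sorting genetic algorithm proposed in the article, and the error values below are made up for illustration.

```python
def pareto_front(points):
    """Indices of non-dominated points, all objectives to be minimized.

    A point is dominated if some other point is at least as good in
    every objective and strictly better in at least one.
    """
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p)))
            and any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Fit errors of five candidate parameter sets on two experimental datasets:
errors = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0), (6.0, 6.0)]
print(pareto_front(errors))  # → [0, 1, 2]
```

    Candidates (3, 3) and (6, 6) are dominated by (2, 2); the remaining three represent different trade-offs between the two datasets and all survive.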

  17. A Profile of Fitness Parameters and Performance of Volleyball Players

    Directory of Open Access Journals (Sweden)

    Govind B. Taware

    2013-07-01

    Full Text Available Background: Ball games require comprehensive ability, including physical, technical, mental and tactical abilities. Among them, the physical abilities of players exert marked effects on the skill of the players themselves and the tactics of the team. Therefore players must have the physical abilities to meet the demands of the sport. Volleyball is one of the most popularly played games in the world. Unfortunately, the level of performance of Indian volleyball players lags far behind international standards. Aim of the Study: The present study was aimed to assess flexibility, muscular endurance, power and cardio-respiratory endurance of volleyball players and to compare the results with age-matched controls; also, to compare the findings of the volleyball players with the international norms from the available literature and to make some suggestions for improvement in their performance level. Material and Methods: The study was carried out in 40 male volleyball players aged between 17 and 26 years and 40 age-matched male controls. Physical fitness parameters, namely flexibility, muscular endurance, power and cardio-respiratory endurance, were measured, and the data were analyzed using the unpaired t-test. Results: All physical fitness parameters were significantly higher in players as compared to their age-matched controls, but when the values of the subjects were compared to international standards, our subjects were behind the recommended norms for elite volleyball players. Conclusion: The volleyball players have the advantage of greater flexibility, muscular endurance, power and cardio-respiratory endurance.

  18. A new method to estimate parameters of linear compartmental models using artificial neural networks

    International Nuclear Information System (INIS)

    Gambhir, Sanjiv S.; Keppenne, Christian L.; Phelps, Michael E.; Banerjee, Pranab K.

    1998-01-01

    At present, the preferred tool for parameter estimation in compartmental analysis is an iterative procedure: weighted nonlinear regression. For a large number of applications, observed data can be fitted to sums of exponentials whose parameters are directly related to the rate constants/coefficients of the compartmental models. Since weighted nonlinear regression often has to be repeated for many different data sets, the process of fitting data from compartmental systems can be very time consuming. Furthermore, the minimization routine often converges to a local (as opposed to global) minimum. In this paper, we examine the possibility of using artificial neural networks instead of weighted nonlinear regression in order to estimate model parameters. We train simple feed-forward neural networks to produce as outputs the parameter values of a given model when kinetic data are fed to the networks' input layer. The artificial neural networks produce unbiased estimates and are orders of magnitude faster than regression algorithms. At noise levels typical of many real applications, the neural networks are found to produce lower variance estimates than weighted nonlinear regression in the estimation of parameters from mono- and biexponential models. These results are primarily due to the inability of weighted nonlinear regression to converge. These results establish that artificial neural networks are powerful tools for estimating parameters for simple compartmental models. (author)
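
    A toy version of this idea, assuming nothing about the authors' actual architecture: a small numpy feed-forward network with one hidden layer is trained by gradient descent to map sampled monoexponential decay curves directly to their rate constant. All sizes, rates, and noise levels are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.1, 4.0, 16)

# Training data: noisy monoexponential curves y(t) = exp(-k t); target is k.
k_train = rng.uniform(0.2, 2.0, size=500)
X = np.exp(-np.outer(k_train, t)) + rng.normal(0.0, 0.01, (500, t.size))
y = k_train.reshape(-1, 1)

# One hidden layer, trained by full-batch gradient descent on squared error.
W1 = rng.normal(0.0, 0.5, (t.size, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1));      b2 = np.zeros(1)
lr, losses = 0.05, []
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # forward pass
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    g2 = 2.0 * err / len(X)             # backward pass (chain rule)
    gW2, gb2 = h.T @ g2, g2.sum(0)
    gh = (g2 @ W2.T) * (1.0 - h ** 2)
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

    Once trained, predicting a rate constant is a single matrix product per curve, which is where the claimed orders-of-magnitude speed advantage over repeated nonlinear regression comes from.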

  19. A Bayesian framework for parameter estimation in dynamical models.

    Directory of Open Access Journals (Sweden)

    Flávio Codeço Coelho

    Full Text Available Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
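
    An SIR-like transmission model of the kind fitted here can be sketched with forward-Euler integration. The parameter values are illustrative, not those estimated from the Belgian, Dutch, or Portuguese incidence data.

```python
import numpy as np

def simulate_sir(beta, gamma, s0, i0, r0=0.0, days=150, dt=0.1):
    """Forward-Euler integration of the classic SIR equations."""
    n = s0 + i0 + r0
    S, I, R = [s0], [i0], [r0]
    for _ in range(int(days / dt)):
        s, i, r = S[-1], I[-1], R[-1]
        new_inf = beta * s * i / n * dt     # S -> I transitions
        new_rec = gamma * i * dt            # I -> R transitions
        S.append(s - new_inf)
        I.append(i + new_inf - new_rec)
        R.append(r + new_rec)
    return np.array(S), np.array(I), np.array(R)

# Illustrative parameters (basic reproduction number beta/gamma = 3):
S, I, R = simulate_sir(beta=0.3, gamma=0.1, s0=9999.0, i0=1.0)
```

    Parameter estimation then amounts to adjusting beta and gamma until the simulated incidence matches the observed counts, with the uncertainty handled by the Bayesian machinery the article describes.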

  20. Local fit evaluation of structural equation models using graphical criteria.

    Science.gov (United States)

    Thoemmes, Felix; Rosseel, Yves; Textor, Johannes

    2018-03-01

    Evaluation of model fit is critically important for every structural equation model (SEM), and sophisticated methods have been developed for this task. Among them are the χ² goodness-of-fit test, decomposition of the χ², derived measures like the popular root mean square error of approximation (RMSEA) or comparative fit index (CFI), or inspection of residuals or modification indices. Many of these methods provide a global approach to model fit evaluation: A single index is computed that quantifies the fit of the entire SEM to the data. In contrast, graphical criteria like d-separation or trek-separation allow derivation of implications that can be used for local fit evaluation, an approach that is hardly ever applied. We provide an overview of local fit evaluation from the viewpoint of SEM practitioners. In the presence of model misfit, local fit evaluation can potentially help in pinpointing where the problem with the model lies. For models that do fit the data, local tests can identify the parts of the model that are corroborated by the data. Local tests can also be conducted before a model is fitted at all, and they can be used even for models that are globally underidentified. We discuss appropriate statistical local tests, and provide applied examples. We also present novel software in R that automates this type of local fit evaluation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
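
    One such local test can be sketched numerically (this is a simulation illustration, not the R software presented in the article): in a chain model X → M → Y, d-separation implies X is independent of Y given M, which for linear-Gaussian data means the partial correlation of X and Y given M should vanish even though their marginal correlation does not.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Simulated chain model X -> M -> Y; d-separation implies X ⟂ Y | M.
X = rng.normal(size=n)
M = 0.8 * X + rng.normal(size=n)
Y = 0.5 * M + rng.normal(size=n)

def partial_corr(x, y, z):
    """Correlate the residuals of x and y after regressing each on z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return float(np.corrcoef(rx, ry)[0, 1])

marginal = float(np.corrcoef(X, Y)[0, 1])   # substantial: X and Y are d-connected
local = partial_corr(X, Y, M)               # near zero: X ⟂ Y | M holds
```

    A clearly nonzero partial correlation here would pinpoint a local misfit (e.g. a missing direct X → Y path) without refitting the whole model.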

  1. Reduction of the number of parameters needed for a polynomial random regression test-day model

    NARCIS (Netherlands)

    Pool, M.H.; Meuwissen, T.H.E.

    2000-01-01

    Legendre polynomials were used to describe the (co)variance matrix within a random regression test day model. The goodness of fit depends on the polynomial order of fit, i.e., the number of parameters to be estimated per animal, but is limited by computing capacity. Two aspects: incomplete lactation
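
    A sketch of the Legendre basis used in such random regression test-day models (the days-in-milk grid is illustrative): each increase in polynomial order adds one regression coefficient per animal, which is exactly the parameter-count trade-off the abstract describes.

```python
import numpy as np
from numpy.polynomial import legendre

# Days in milk mapped to [-1, 1], the natural domain of Legendre polynomials.
dim = np.linspace(5.0, 305.0, 13)
x = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0

def legendre_basis(x, order):
    """Design matrix whose columns are Legendre polynomials P_0 .. P_order."""
    return np.column_stack(
        [legendre.legval(x, [0.0] * k + [1.0]) for k in range(order + 1)]
    )

Z = legendre_basis(x, order=2)   # 3 random-regression coefficients per animal
print(Z.shape)                   # → (13, 3)
```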

  2. Parameters Calculation of ZnO Surge Arrester Models by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    A. Bayadi

    2006-09-01

    Full Text Available This paper presents a new technique, based on a genetic algorithm, for obtaining the best possible set of values for the parameters of ZnO surge arrester models. The validity of the predicted parameters is then checked by comparing the predicted results with the experimental results available in the literature. Using the ATP-EMTP package, an application of the arrester model to network system studies is presented and discussed.
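
    A minimal real-coded genetic algorithm of the general kind described can be sketched as follows. The ZnO arrester model itself is not reproduced here; a toy saturation model with known parameters stands in for it, and population sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for the device model: y = a*x / (b + x), true (a, b) = (3, 2).
x = np.linspace(0.5, 10.0, 25)
y_obs = 3.0 * x / (2.0 + x)

def sse(p):
    return float(np.sum((p[0] * x / (p[1] + x) - y_obs) ** 2))

def genetic_fit(lo, hi, pop_size=60, n_gen=80, mut=0.1):
    """Real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    pop = rng.uniform(lo, hi, (pop_size, lo.size))
    best, best_f = None, np.inf
    for _ in range(n_gen):
        fit = np.array([sse(p) for p in pop])
        if fit.min() < best_f:                       # keep the best ever seen
            best, best_f = pop[fit.argmin()].copy(), float(fit.min())
        children = []
        for _ in range(pop_size):
            a = min(rng.integers(pop_size, size=2), key=lambda i: fit[i])
            b = min(rng.integers(pop_size, size=2), key=lambda i: fit[i])
            w = rng.uniform(size=lo.size)
            child = w * pop[a] + (1.0 - w) * pop[b]  # blend crossover
            child += rng.normal(0.0, mut, lo.size)   # mutation
            children.append(np.clip(child, lo, hi))
        pop = np.array(children)
    return best, best_f

best, best_f = genetic_fit([0.1, 0.1], [10.0, 10.0])
```

    In a real application the fitness function would compare the simulated arrester response with measured residual-voltage data instead of this toy curve.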

  3. Testing the validity of stock-recruitment curve fits

    International Nuclear Information System (INIS)

    Christensen, S.W.; Goodyear, C.P.

    1988-01-01

    The utilities relied heavily on the Ricker stock-recruitment model as the basis for quantifying biological compensation in the Hudson River power case. They presented many fits of the Ricker model to data derived from striped bass catch and effort records compiled by the National Marine Fisheries Service. Based on this curve-fitting exercise, a value of 4 was chosen for the parameter alpha in the Ricker model, and this value was used to derive the utilities' estimates of the long-term impact of power plants on striped bass populations. A technique was developed and applied to address a single fundamental question: if the Ricker model were applicable to the Hudson River striped bass population, could the estimates of alpha from the curve-fitting exercise be considered reliable? The technique involved constructing a simulation model that incorporated the essential biological features of the population and simulated the characteristics of the available actual catch-per-unit-effort data through time. The success or failure in retrieving the known parameter values underlying the simulation model via the curve-fitting exercise was a direct test of the reliability of the results of fitting stock-recruitment curves to the real data. The results demonstrated that estimates of alpha from the curve-fitting exercise were not reliable. The simulation-modeling technique provides an effective way to identify whether or not particular data are appropriate for use in fitting such models. 39 refs., 2 figs., 3 tabs
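
    The simulate-and-refit logic can be sketched as follows: generate recruitment data from a Ricker model with known parameters, then check whether curve fitting recovers them. The linearized least-squares fit below is a common textbook estimator, not necessarily the utilities' procedure, and all numbers are illustrative; the article's point is that with realistic catch-per-unit-effort data this recovery fails.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate recruitment from a Ricker model with known parameters plus
# lognormal process noise: R = alpha * S * exp(-beta * S) * error.
alpha_true, beta_true = 4.0, 1e-4
S = rng.uniform(2000.0, 20000.0, 60)                 # spawning stock sizes
R = alpha_true * S * np.exp(-beta_true * S) * rng.lognormal(0.0, 0.3, 60)

# The Ricker curve linearizes: log(R/S) = log(alpha) - beta * S,
# so the parameters can be re-estimated by ordinary least squares.
slope, intercept = np.polyfit(S, np.log(R / S), 1)
alpha_hat, beta_hat = float(np.exp(intercept)), float(-slope)
```

    With clean, well-spread stock sizes as simulated here the known alpha is recovered; the test in the article shows that this breaks down for data with the error structure of the real records.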

  4. Five adjustable parameter fit of quark and lepton masses and mixings

    International Nuclear Information System (INIS)

    Nielsen, H.B.; Takanishi, Y.

    2002-05-01

    We refine a model of ours fitting the quark and lepton masses and mixing angles by removing a Higgs field previously introduced to organise a large atmospheric mixing angle for neutrino oscillations. Because the off-diagonal elements dominate in the see-saw neutrino mass matrix, the large atmospheric mixing angle arises essentially by itself. It turns out that we now need only five adjustable Higgs field vacuum expectation values to fit all the masses and mixings order-of-magnitude-wise, taking into account the renormalisation group running in all sectors. The CHOOZ angle comes out close to the experimental bound. (orig.)

  5. topicmodels: An R Package for Fitting Topic Models

    Directory of Open Access Journals (Sweden)

    Bettina Grun

    2011-05-01

    Full Text Available Topic models allow the probabilistic modeling of term frequency occurrences in documents. The fitted model can be used to estimate the similarity between documents as well as between a set of specified keywords using an additional layer of latent variables which are referred to as topics. The R package topicmodels provides basic infrastructure for fitting topic models based on data structures from the text mining package tm. The package includes interfaces to two algorithms for fitting topic models: the variational expectation-maximization algorithm provided by David M. Blei and co-authors and an algorithm using Gibbs sampling by Xuan-Hieu Phan and co-authors.

  6. A scaled Lagrangian method for performing a least squares fit of a model to plant data

    International Nuclear Information System (INIS)

    Crisp, K.E.

    1988-01-01

    Due to measurement errors, even a perfect mathematical model will not be able to match all the corresponding plant measurements simultaneously. A further discrepancy may be introduced if an un-modelled change in conditions occurs within the plant which should have required a corresponding change in model parameters - e.g. a gradual deterioration in the performance of some component(s). Taking both these factors into account, what is required is that the overall discrepancy between the model predictions and the plant data is kept to a minimum. This process is known as 'model fitting'. A method is presented for minimising any function which consists of a sum of squared terms, subject to any constraints. Its most obvious application is in the process of model fitting, where a weighted sum of squares of the differences between model predictions and plant data is the function to be minimised. When implemented within existing Central Electricity Generating Board computer models, it will perform a least squares fit of a model to plant data within a single job submission. (author)

  7. Two-Stage Method Based on Local Polynomial Fitting for a Linear Heteroscedastic Regression Model and Its Application in Economics

    Directory of Open Access Journals (Sweden)

    Liyun Su

    2012-01-01

    Full Text Available We introduce an extension of local polynomial fitting to the linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Thanks to the nonparametric technique of local polynomial estimation, we do not need to know the heteroscedastic function, and we can therefore improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we compare parameter estimates and obtain an optimal fit. We also verify the asymptotic normality of the parameters through numerical simulations. Finally, the approach is applied to a case in economics, indicating that our method is effective in finite-sample situations.
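
    A sketch of the two-stage idea under linear-Gaussian assumptions: estimate the variance function from the squared OLS residuals with a kernel-weighted local-linear smoother, then refit by weighted (generalized) least squares. The bandwidth, data, and variance function are illustrative, not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
x = rng.uniform(0.0, 1.0, n)
y = 1.0 + 2.0 * x + (0.2 + 0.8 * x) * rng.normal(size=n)  # noise grows with x

# Stage 1: OLS fit, then smooth the squared residuals with a kernel-weighted
# local-linear estimator of the variance function.
X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
r2 = (y - X @ beta_ols) ** 2

def local_linear(x0, x, z, h=0.1):
    """Local-linear smoother at x0 with a Gaussian kernel of bandwidth h."""
    w = np.sqrt(np.exp(-0.5 * ((x - x0) / h) ** 2))
    A = np.column_stack([np.ones_like(x), x - x0])
    coef = np.linalg.lstsq(A * w[:, None], z * w, rcond=None)[0]
    return coef[0]      # fitted value at x0

var_hat = np.array([local_linear(xi, x, r2) for xi in x])

# Stage 2: generalized (weighted) least squares with weights 1 / variance.
sw = 1.0 / np.sqrt(np.clip(var_hat, 1e-6, None))
beta_gls = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
```

    No heteroscedasticity test is needed: if the variance is in fact constant, the smoothed weights are roughly flat and the second stage reduces to ordinary least squares.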

  8. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982, can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007 can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  9. Top ten accelerating cosmological models

    International Nuclear Information System (INIS)

    Szydlowski, Marek; Kurek, Aleksandra; Krawiec, Adam

    2006-01-01

    Recent astronomical observations indicate that the Universe is presently almost flat and undergoing a period of accelerated expansion. Based on Einstein's general relativity, all these observations can be explained by the hypothesis of a dark energy component in addition to cold dark matter (CDM). Because the nature of this dark energy is unknown, alternative scenarios have been proposed to explain the currently accelerating Universe. The key point of these scenarios is to modify the standard FRW equation instead of invoking a mysterious dark energy component. The standard approach to constraining model parameters, based on the likelihood method, gives a best-fit model and confidence ranges for those parameters. We always choose, somewhat arbitrarily, the set of parameters defining the model that we compare with observational data. Because in the generic case introducing new parameters improves the fit to the data set, the problem arises of eliminating model parameters that play an insignificant role. The Bayesian information criterion (BIC) for model selection is designed to identify the set of parameters which should be incorporated into the model. We divide the class of all accelerating cosmological models into two groups according to the two types of explanation of the acceleration of the Universe. The Bayesian framework of model selection is then used to determine the set of parameters which gives the preferred fit to the SNIa data. We find a few flat cosmological models which can be recommended by the Bayes factor. We show that models with dark energy as a new fluid are favoured over models featuring a modified FRW equation
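
    For a least-squares fit with Gaussian errors, the BIC can be computed directly, illustrating how extra parameters that merely chase noise are penalized. The polynomial example below is a generic illustration, not a cosmological fit.

```python
import numpy as np

def gaussian_bic(y, y_hat, k):
    """BIC for a least-squares fit, up to an additive constant:
    n * ln(SSE / n) + k * ln(n), with k the number of free parameters."""
    n = len(y)
    sse = float(np.sum((y - y_hat) ** 2))
    return n * np.log(sse / n) + k * np.log(n)

rng = np.random.default_rng(6)
x = np.linspace(0.0, 1.0, 100)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, 100)   # data are truly linear

bic = {}
for degree in (1, 2, 5):
    coef = np.polyfit(x, y, degree)
    bic[degree] = gaussian_bic(y, np.polyval(coef, x), k=degree + 1)

best = min(bic, key=bic.get)   # the penalty disfavours superfluous parameters
```

    Higher-degree polynomials always lower the SSE, but the k·ln(n) term grows faster than the improvement when the added parameters are not supported by the data.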

  10. Determination of probability density functions for parameters in the Munson-Dawson model for creep behavior of salt

    International Nuclear Information System (INIS)

    Pfeifle, T.W.; Mellegard, K.D.; Munson, D.E.

    1992-10-01

    The modified Munson-Dawson (M-D) constitutive model that describes the creep behavior of salt will be used in performance assessment calculations to assess compliance of the Waste Isolation Pilot Plant (WIPP) facility with requirements governing the disposal of nuclear waste. One of these standards requires that the uncertainty of future states of the system, material model parameters, and data be addressed in the performance assessment models. This paper presents a method in which measurement uncertainty and the inherent variability of the material are characterized by treating the M-D model parameters as random variables. The random variables can be described by appropriate probability distribution functions which then can be used in Monte Carlo or structural reliability analyses. Estimates of three random variables in the M-D model were obtained by fitting a scalar form of the model to triaxial compression creep data generated from tests of WIPP salt. Candidate probability distribution functions for each of the variables were then fitted to the estimates and their relative goodness-of-fit tested using the Kolmogorov-Smirnov statistic. A sophisticated statistical software package obtained from BMDP Statistical Software, Inc. was used in the M-D model fitting. A separate software package, STATGRAPHICS, was used in fitting the candidate probability distribution functions to estimates of the variables. Skewed distributions, i.e., lognormal and Weibull, were found to be appropriate for the random variables analyzed

  11. Determination of modeling parameters for power IGBTs under pulsed power conditions

    Energy Technology Data Exchange (ETDEWEB)

    Dale, Gregory E [Los Alamos National Laboratory; Van Gordon, Jim A [U. OF MISSOURI; Kovaleski, Scott D [U. OF MISSOURI

    2010-01-01

    While the power insulated gate bipolar transistor (IGBT) is used in many applications, it is not well characterized under pulsed power conditions. This makes the IGBT difficult to model for solid state pulsed power applications. The Oziemkiewicz implementation of the Hefner model is utilized to simulate IGBTs in some circuit simulation software packages. However, the seventeen parameters necessary for the Oziemkiewicz implementation must be known for the conditions under which the device will be operating. Using both experimental and simulated data with a least squares curve fitting technique, the parameters necessary to model a given IGBT can be determined. This paper presents two sets of these seventeen parameters, corresponding to two different models of power IGBTs. Specifically, these parameters correspond to voltages up to 3.5 kV, currents up to 750 A, and pulse widths up to 10 μs. Additionally, comparisons of the experimental and simulated data are presented.

  12. Temporal variation and scaling of parameters for a monthly hydrologic model

    Science.gov (United States)

    Deng, Chao; Liu, Pan; Wang, Dingbao; Wang, Weiguang

    2018-03-01

    The temporal variation of model parameters is affected by catchment conditions and has a significant impact on hydrological simulation. This study aims to evaluate the seasonality and downscaling of model parameters across time scales, based on monthly and mean annual water balance models within a common model framework. Two parameters of the monthly model, i.e., k and m, are assumed to be time-variant across months. Based on the hydrological data set from 121 MOPEX catchments in the United States, we first analyzed the correlation between the parameters (k and m) and catchment properties (NDVI and the frequency of rainfall events, α). The results show that parameter k is positively correlated with NDVI or α, while the correlation is opposite for parameter m, indicating that precipitation and vegetation affect the monthly water balance by controlling the temporal variation of parameters k and m. Multiple linear regression is then used to fit the relationship between ε and the means and coefficients of variation of parameters k and m. Based on the empirical equation and the correlations between the time-variant parameters and NDVI, the mean annual parameter ε is downscaled to monthly k and m. The resulting model has lower NSEs than the model with time-variant k and m calibrated through SCE-UA, while for several study catchments it has higher NSEs than the model with constant parameters. The proposed method is feasible and provides a useful tool for the temporal scaling of model parameters.
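
    The NSE (Nash-Sutcliffe efficiency) scores used to compare these models can be computed directly; the flow values below are made up to show the two reference points of the scale.

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is no
    better than predicting the mean of the observations."""
    obs = np.asarray(observed, float)
    sim = np.asarray(simulated, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([3.0, 5.0, 9.0, 6.0, 4.0])
print(nse(obs, obs))                            # → 1.0
print(nse(obs, np.full(obs.size, obs.mean())))  # → 0.0
```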

  13. Calculating the parameters of experimental data Gauss distribution using the least square fit method and evaluation of their accuracy

    International Nuclear Information System (INIS)

    Guseva, E.V.; Peregudov, V.N.

    1982-01-01

    The FITGAV program for calculating the parameters of a Gauss curve describing experimental data is considered. The calculations are based on the least squares fit method. Estimates of the errors in parameter determination, as functions of the sample size of the experimental data, and of their statistical significance are obtained. A curve fit using 100 points takes less than 1 s on an SM-4 type computer
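
    The FITGAV source is not available here, but one standard way to fit a Gauss curve with linear least squares is to fit a parabola to the logarithm of the data. The synthetic data below use small multiplicative noise so that the logarithm stays well behaved; this is a sketch, not the FITGAV algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic Gaussian peak with small multiplicative noise.
x = np.linspace(-3.0, 7.0, 100)
amp, mu, sigma = 50.0, 2.0, 1.2
y = amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
y *= np.exp(rng.normal(0.0, 0.02, x.size))

# ln(y) is a parabola in x, so the Gauss parameters follow from a linear
# least squares fit: ln y = c2*x^2 + c1*x + c0, with
# sigma^2 = -1/(2*c2), mu = -c1/(2*c2), ln(amp) = c0 - c1^2/(4*c2).
c2, c1, c0 = np.polyfit(x, np.log(y), 2)
sigma_hat = float(np.sqrt(-1.0 / (2.0 * c2)))
mu_hat = float(-c1 / (2.0 * c2))
amp_hat = float(np.exp(c0 - c1 ** 2 / (4.0 * c2)))
```

    The parameter standard errors discussed in the abstract would follow from the covariance matrix of the polynomial coefficients, propagated through these same transformations.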

  14. Methods of comparing associative models and an application to retrospective revaluation.

    Science.gov (United States)

    Witnauer, James E; Hutchings, Ryan; Miller, Ralph R

    2017-11-01

    Contemporary theories of associative learning are increasingly complex, which necessitates the use of computational methods to reveal predictions of these models. We argue that comparisons across multiple models in terms of goodness of fit to empirical data from experiments often reveal more about the actual mechanisms of learning and behavior than do simulations of only a single model. Such comparisons are best made when the values of free parameters are discovered through some optimization procedure based on the specific data being fit (e.g., hill climbing), so that the comparisons hinge on the psychological mechanisms assumed by each model rather than being biased by using parameters that differ in quality across models with respect to the data being fit. Statistics like the Bayesian information criterion facilitate comparisons among models that have different numbers of free parameters. These issues are examined using retrospective revaluation data. Copyright © 2017 Elsevier B.V. All rights reserved.
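
    A minimal hill-climbing optimizer of the kind mentioned can be sketched as follows (a generic illustration, not the procedure used in the article): perturb the current parameter set, keep the candidate only if the fit improves, and gradually shrink the step size. The toy objective and all settings are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(8)

def hill_climb(objective, start, step=0.5, n_iter=500, shrink=0.99):
    """Accept a random perturbation only when it lowers the objective;
    shrink the step size gradually to refine the optimum."""
    current = np.asarray(start, float)
    best_val = objective(current)
    for _ in range(n_iter):
        candidate = current + rng.normal(0.0, step, current.size)
        val = objective(candidate)
        if val < best_val:
            current, best_val = candidate, val
        step *= shrink
    return current, best_val

# Toy "badness of fit" surface with its optimum at (1, -2).
f = lambda p: float(np.sum((p - np.array([1.0, -2.0])) ** 2))
opt, val = hill_climb(f, start=[5.0, 5.0])
```

    Running such an optimizer separately for each model, against the same data, yields the per-model best-fitting parameters on which criteria like the BIC can then be compared.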

  15. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    Science.gov (United States)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single code/multiple physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases, and compared to the results of a number of industry standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity, using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions as: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound

  16. Changes in relative fit of human heat stress indices to cardiovascular, respiratory, and renal hospitalizations across five Australian urban populations

    Science.gov (United States)

    Goldie, James; Alexander, Lisa; Lewis, Sophie C.; Sherwood, Steven C.; Bambrick, Hilary

    2018-03-01

    Various human heat stress indices have been developed to relate atmospheric measures of extreme heat to human health impacts, but the usefulness of different indices across various health impacts and in different populations is poorly understood. This paper determines which heat stress indices best fit hospital admissions for sets of cardiovascular, respiratory, and renal diseases across five Australian cities. We hypothesized that the best indices would be largely dependent on location. We fit parent models to these counts in the summers (November-March) between 2001 and 2013 using negative binomial regression. We then added 15 heat stress indices to these models, ranking their goodness of fit using the Akaike information criterion. Admissions for each health outcome were nearly always higher in hot or humid conditions. Contrary to our hypothesis, we found that the best indices were grouped largely by the health outcome of interest rather than by location. In particular, heatwave and temperature indices had the best fit to cardiovascular admissions, humidity indices had the best fit to respiratory admissions, and combined heat-humidity indices had the best fit to renal admissions. With a few exceptions, the results were similar across all five cities. The best-fitting heat stress indices appear to be useful across several Australian cities with differing climates, but they may have varying usefulness depending on the outcome of interest. These findings suggest that future research on heat and health impacts, and in particular hospital demand modeling, could better reflect reality if it avoided "all-cause" health outcomes and used heat stress indices appropriate to specific diseases and disease groups.
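
The ranking step described above can be sketched numerically. The data below are synthetic and a Poisson regression fitted by Newton-Raphson stands in for the authors' negative binomial models; each candidate "index" gets an AIC, and the lowest AIC wins.

```python
# Sketch: rank candidate heat-stress predictors by AIC (hypothetical data;
# Poisson regression stands in for negative binomial regression).
import numpy as np

rng = np.random.default_rng(0)
n = 500
temp = rng.normal(30, 4, n)        # daily max temperature (deg C), synthetic
humid = rng.normal(60, 10, n)      # relative humidity (%), synthetic
# Simulate admission counts driven by temperature only.
lam = np.exp(0.5 + 0.08 * (temp - 30))
y = rng.poisson(lam)

def poisson_aic(x, y, iters=50):
    """AIC of a Poisson GLM log(mu) = b0 + b1*x, fitted by Newton-Raphson."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(iters):
        mu = np.exp(X @ b)
        grad = X.T @ (y - mu)
        hess = X.T @ (X * mu[:, None])
        b += np.linalg.solve(hess, grad)
    mu = np.exp(X @ b)
    # log-likelihood up to an additive constant in y (cancels across models)
    loglik = np.sum(y * np.log(mu) - mu)
    return 2 * 2 - 2 * loglik          # AIC = 2k - 2 ln L, k = 2 here

aics = {name: poisson_aic(x, y)
        for name, x in {"temperature": temp, "humidity": humid}.items()}
best = min(aics, key=aics.get)         # lower AIC = better fit
print(best)
```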

  18. Fitting the Fractional Polynomial Model to Non-Gaussian Longitudinal Data

    Directory of Open Access Journals (Sweden)

    Ji Hoon Ryoo

    2017-08-01

    Full Text Available As in cross-sectional studies, longitudinal studies involve non-Gaussian data such as binomial, Poisson, gamma, and inverse-Gaussian distributions, and multivariate exponential families. A number of statistical tools have thus been developed to deal with non-Gaussian longitudinal data, including analytic techniques to estimate parameters in both fixed and random effects models. However, growth modeling with non-Gaussian data is as yet somewhat limited when considering the transformed expectation of the response via a linear predictor as a functional form of explanatory variables. In this study, we introduce a fractional polynomial model (FPM) that can be applied to model non-linear growth with non-Gaussian longitudinal data, and demonstrate its use by fitting it to two empirical data sets, one binary and one count. The results clearly show the efficiency and flexibility of the FPM for such applications.
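
As a minimal illustration of the fractional polynomial idea (a Gaussian least-squares toy, not the authors' GLM formulation for counts), a degree-1 FP fit tries each power from the conventional set and keeps the one with the smallest residual sum of squares:

```python
# Degree-1 fractional polynomial fit on synthetic data: the true curve uses
# power 0.5, and the power search should recover it.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(1, 10, 60)
y = 2.0 + 3.0 * np.sqrt(t) + rng.normal(0, 0.1, t.size)

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]   # conventional FP set; 0 means log

def fp_basis(t, p):
    return np.log(t) if p == 0 else t ** p

best_p, best_sse = None, np.inf
for p in POWERS:
    X = np.column_stack([np.ones_like(t), fp_basis(t, p)])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse = float(res[0])
    if sse < best_sse:
        best_p, best_sse = p, sse
print(best_p)
```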

  19. Application of Artificial Bee Colony in Model Parameter Identification of Solar Cells

    Directory of Open Access Journals (Sweden)

    Rongjie Wang

    2015-07-01

    Full Text Available The identification of the values of solar cell parameters is of great interest for evaluating solar cell performance. An artificial bee colony algorithm was used to extract the model parameters of solar cells from current-voltage characteristics. First, the best-so-far mechanism was introduced into the original artificial bee colony. Then, a method was proposed to identify the parameters of the single-diode and double-diode models using this improved artificial bee colony. Experimental results clearly demonstrate the effectiveness of the proposed method and its superior performance compared to other competing methods.
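
A compact sketch of the basic ABC loop (employed bees, onlookers, scouts) applied to a simplified single-diode model with series and shunt resistance omitted; parameter values, bounds, and colony settings are illustrative, and the paper's best-so-far refinement is not reproduced.

```python
# Artificial bee colony minimising the RMSE between a simplified single-diode
# model and synthetic I-V data. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
Vt = 0.02585                                   # thermal voltage at ~300 K (V)
V = np.linspace(0, 0.6, 30)
def model(V, Iph, Is, n):
    return Iph - Is * (np.exp(V / (n * Vt)) - 1.0)
I_meas = model(V, 3.0, 1e-9, 1.5)              # "measured" curve, noise-free

lo = np.array([0.0, 1e-12, 1.0])               # bounds on (Iph, Is, n)
hi = np.array([5.0, 1e-6, 2.0])
def cost(p):
    return np.sqrt(np.mean((model(V, *p) - I_meas) ** 2))

SN, LIMIT, CYCLES = 20, 30, 300                # colony size, scout limit, cycles
foods = rng.uniform(lo, hi, (SN, 3))
fits = np.array([cost(p) for p in foods])
trials = np.zeros(SN, dtype=int)

def neighbour(i):
    k = rng.integers(SN - 1); k += k >= i      # random partner != i
    j = rng.integers(3)                        # perturb one dimension
    cand = foods[i].copy()
    cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
    return np.clip(cand, lo, hi)

for _ in range(CYCLES):
    for i in range(SN):                        # employed bees
        cand = neighbour(i); c = cost(cand)
        if c < fits[i]:
            foods[i], fits[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1
    probs = 1.0 / (1.0 + fits); probs /= probs.sum()
    for _ in range(SN):                        # onlookers favour good sources
        i = rng.choice(SN, p=probs)
        cand = neighbour(i); c = cost(cand)
        if c < fits[i]:
            foods[i], fits[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1
    worn = trials > LIMIT                      # scouts replace stale sources
    foods[worn] = rng.uniform(lo, hi, (worn.sum(), 3))
    fits[worn] = [cost(p) for p in foods[worn]]
    trials[worn] = 0

best = foods[np.argmin(fits)]                  # best (Iph, Is, n) found
```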

  20. AXIFLUX, Cosine Function Fit of Experimental Axial Flux in Cylindrical Reactor

    International Nuclear Information System (INIS)

    Holte, O.

    1980-01-01

    1 - Nature of physical problem solved: Calculates the parameters of the cosine function that will best fit data from axial flux distribution measurements in a cylindrical reactor. 2 - Method of solution: Steepest descent for the minimization. 3 - Restrictions on the complexity of the problem: Number of measured points less than 200
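
The computation AXIFLUX performs is easy to sketch: fit phi(z) = A*cos(pi*z/He) to measured axial flux by minimising the sum of squared errors. SciPy's gradient-based conjugate-gradient minimiser stands in here for the program's steepest-descent search; all numbers are hypothetical.

```python
# Cosine fit to synthetic axial flux data (A = amplitude, He = extrapolated
# height). Gradient-based minimisation stands in for steepest descent.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
H = 100.0                                    # core height (cm), hypothetical
z = np.linspace(-H / 2, H / 2, 25)           # < 200 points, per the restriction
A_true, He_true = 5.0, 120.0
phi = A_true * np.cos(np.pi * z / He_true) + rng.normal(0, 0.02, z.size)

def sse(p):
    A, He = p
    return np.sum((A * np.cos(np.pi * z / He) - phi) ** 2)

res = minimize(sse, x0=[4.0, 100.0], method="CG")
A_fit, He_fit = res.x
```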

  1. Convolution based profile fitting

    International Nuclear Information System (INIS)

    Kern, A.; Coelho, A.A.; Cheary, R.W.

    2002-01-01

    Full text: In convolution based profile fitting, profiles are generated by convoluting functions together to form the observed profile shape. For a convolution of n functions this process can be written as Y(2θ) = F_1(2θ) ⊗ F_2(2θ) ⊗ ... ⊗ F_i(2θ) ⊗ ... ⊗ F_n(2θ). In powder diffractometry the functions F_i(2θ) can be interpreted as the aberration functions of the diffractometer, but in general any combination of appropriate functions for F_i(2θ) may be used in this context. Most direct convolution fitting methods are restricted to combinations of F_i(2θ) that can be convoluted analytically (e.g. GSAS), such as Lorentzians, Gaussians, the hat (impulse) function and the exponential function. However, software such as TOPAS is now available that can accurately convolute and refine a wide variety of profile shapes numerically, including user-defined profiles, without the need to convolute analytically. Some of the most important advantages of modern convolution based profile fitting are: 1) virtually any peak shape and angle dependence can normally be described using minimal profile parameters in laboratory and synchrotron X-ray data as well as in CW and TOF neutron data. This is possible because numerical convolution and numerical differentiation are used within the refinement procedure, so that a wide range of functions can easily be incorporated into the convolution equation; 2) it can use physically based diffractometer models by convoluting the instrument aberration functions. This can be done for most laboratory-based X-ray powder diffractometer configurations, including conventional divergent beam instruments, parallel beam instruments, and diffractometers used for asymmetric diffraction. It can also accommodate various optical elements (e.g. multilayers and monochromators) and detector systems (e.g. point and position sensitive detectors) and has already been applied to neutron powder diffraction systems (e.g. ANSTO) as well as synchrotron based
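
The basic operation behind the method, numerical convolution of component profiles on a uniform 2θ grid, can be sketched as follows (widths and grid values are illustrative, not instrument-specific):

```python
# Numerically convolve a Gaussian and a Lorentzian component to obtain a
# Voigt-like peak; the result is broader than either component.
import numpy as np

step = 0.002                                           # grid spacing (deg 2theta)
x = np.arange(-2, 2 + step, step)

gauss = np.exp(-4 * np.log(2) * (x / 0.05) ** 2)       # FWHM 0.05 deg
lorentz = 1.0 / (1.0 + (2 * x / 0.08) ** 2)            # FWHM 0.08 deg

profile = np.convolve(gauss, lorentz, mode="same") * step
profile /= profile.max()                               # normalise peak to 1

def fwhm(y):
    above = np.where(y >= 0.5)[0]
    return (above[-1] - above[0]) * step

print(fwhm(profile))   # between the Lorentzian FWHM and the sum of FWHMs
```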

  2. A Consistent Methodology Based Parameter Estimation for a Lactic Acid Bacteria Fermentation Model

    DEFF Research Database (Denmark)

    Spann, Robert; Roca, Christophe; Kold, David

    2017-01-01

    Lactic acid bacteria are used in many industrial applications, e.g. as starter cultures in the dairy industry or as probiotics, and research on their cell production is highly required. A first principles kinetic model was developed to describe and understand the biological, physical, and chemical...... mechanisms in a lactic acid bacteria fermentation. We present here a consistent approach for a methodology based parameter estimation for a lactic acid fermentation. In the beginning, just an initial knowledge based guess of parameters was available and an initial parameter estimation of the complete set...... of parameters was performed in order to get a good model fit to the data. However, not all parameters are identifiable with the given data set and model structure. Sensitivity, identifiability, and uncertainty analysis were completed and a relevant identifiable subset of parameters was determined for a new...

  3. Derivation of potential model for LiAlO2 by simple and effective optimization of model parameters

    International Nuclear Information System (INIS)

    Tsuchihira, H.; Oda, T.; Tanaka, S.

    2009-01-01

    Interatomic potentials of LiAlO2 were constructed by a simple and effective method. In this method, the model function consists of multiple inverse polynomial functions with an exponential truncation function, and parameters in the potential model can be optimized as a solution of simultaneous linear equations. Potential energies obtained by ab initio calculation are used as fitting targets for model parameter optimization. Lattice constants, elastic properties, defect-formation energy, thermal expansion and the melting point were calculated under the constructed potential models. The results showed good agreement with experimental values and ab initio calculation results, which underscores the validity of the presented method.
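
The key point, that a model linear in its coefficients reduces fitting to a linear system, can be sketched with a toy pair potential of the form V(r) = f(r) * sum_k c_k / r^k; the powers, truncation function, and target energies below are all illustrative, not those of the paper.

```python
# Fit the coefficients of an inverse-polynomial pair potential (times a fixed
# truncation function) to synthetic "ab initio" energies by linear least
# squares, i.e. by solving simultaneous linear equations.
import numpy as np

rng = np.random.default_rng(4)
r = np.linspace(1.5, 5.0, 40)                     # pair distances (angstrom)
powers = np.array([4, 6, 8])                      # illustrative inverse powers
c_true = np.array([40.0, -30.0, 5.0])             # target coefficients

trunc = np.exp(-(r / 6.0) ** 2)                   # smooth truncation function
basis = trunc[:, None] / r[:, None] ** powers     # design matrix, one column/term
E = basis @ c_true + rng.normal(0, 1e-8, r.size)  # fitting targets

c_fit, *_ = np.linalg.lstsq(basis, E, rcond=None)
```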

  4. Fitting theories of nuclear binding energies

    International Nuclear Information System (INIS)

    Bertsch, G.F.; Sabbey, B.; Uusnaekki, M.

    2005-01-01

    In developing theories of nuclear binding energy such as density-functional theory, the effort required to make a fit can be daunting because of the large number of parameters that may be in the theory and the large number of nuclei in the mass table. For theories based on the Skyrme interaction, the effort can be reduced considerably by using the singular value decomposition to reduce the size of the parameter space. We find that the sensitive parameters define a space of dimension four or so, and within this space a linear refit is adequate for a number of Skyrme parameter sets from the literature. We find no marked differences in the quality of the fit among the SLy4, the BSk4, and SkP parameter sets. The root-mean-square residual error in even-even nuclei is about 1.5 MeV, half the value of the liquid drop model. We also discuss an alternative norm for evaluating mass fits, the Chebyshev norm. It focuses attention on the cases with the largest discrepancies between theory and experiment. We show how it works with the liquid drop model and make some applications to models based on Skyrme energy functionals. The Chebyshev norm seems to be more sensitive to new experimental data than the root-mean-square norm. The method also has the advantage that candidate improvements to the theories can be assessed with computations on smaller sets of nuclei
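
The SVD-based reduction idea can be sketched on a toy model: form the Jacobian of the model predictions with respect to the parameters and count the singular values that are not negligible. The model below is hypothetical, with two parameters that enter identically, so the effective dimension is smaller than the parameter count.

```python
# Estimate the effective parameter-space dimension from the singular values
# of a finite-difference Jacobian (toy model, not a Skyrme functional).
import numpy as np

x = np.linspace(0, 1, 50)
def model(p):
    a, b, c = p
    return a * x + b * x + c * x ** 2      # a and b enter identically

p0 = np.array([1.0, 2.0, 0.5])
eps = 1e-6
J = np.column_stack([
    (model(p0 + eps * e) - model(p0 - eps * e)) / (2 * eps)
    for e in np.eye(3)
])
s = np.linalg.svd(J, compute_uv=False)
effective_dim = int(np.sum(s > 1e-6 * s[0]))
print(effective_dim)   # 2: the direction (a - b) does not affect the fit
```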

  5. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
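
The baseline procedure discussed above, a straight-line regression of I-V points near short circuit whose intercept gives Isc with a standard error, can be sketched as follows. The curve is synthetic and the data window is fixed, not chosen by the paper's evidence-based method.

```python
# Straight-line fit near short circuit, treated as a linear regression so the
# intercept (Isc) comes with a quantified uncertainty.
import numpy as np

rng = np.random.default_rng(5)
V = np.linspace(0, 0.1, 20)                        # voltages near short circuit
I = 8.0 - 0.5 * V + rng.normal(0, 0.002, V.size)   # true Isc = 8.0 A

X = np.column_stack([np.ones_like(V), V])
beta, res, *_ = np.linalg.lstsq(X, I, rcond=None)
n, k = X.shape
sigma2 = float(res[0]) / (n - k)                   # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)              # parameter covariance
isc, isc_se = beta[0], np.sqrt(cov[0, 0])
print(f"Isc = {isc:.4f} +/- {isc_se:.4f} A")
```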

  6. Lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    A lumped-parameter model represents the frequency dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. Following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)

  7. Genetic algorithm using independent component analysis in x-ray reflectivity curve fitting of periodic layer structures

    International Nuclear Information System (INIS)

    Tiilikainen, J; Bosund, V; Tilli, J-M; Sormunen, J; Mattila, M; Hakkarainen, T; Lipsanen, H

    2007-01-01

    A novel genetic algorithm (GA) utilizing independent component analysis (ICA) was developed for x-ray reflectivity (XRR) curve fitting. EFICA was used to reduce mutual information, or interparameter dependences, during the combinatorial phase. The performance of the new algorithm was studied by fitting trial XRR curves to target curves which were computed using realistic multilayer models. The median convergence properties of conventional GA, GA using principal component analysis and the novel GA were compared. GA using ICA was found to outperform the other methods with problems having 41 parameters or more to be fitted without additional XRR curve calculations. The computational complexity of the conventional methods was linear but the novel method had a quadratic computational complexity due to the applied ICA method which sets a practical limit for the dimensionality of the problem to be solved. However, the novel algorithm had the best capability to extend the fitting analysis based on Parratt's formalism to multiperiodic layer structures

  8. Comparison of Physical Fitness Parameters with EUROFIT Test Battery of Male Adolescent Soccer Players and Sedentary Counterparts

    Directory of Open Access Journals (Sweden)

    Özgür ERİKOĞLU

    2015-09-01

    Full Text Available The aim of this study was to compare the physical fitness parameters of male adolescent soccer players and sedentary counterparts. A total of 26 male adolescents participated in this study voluntarily: active soccer players (n: 13, mean age 13.00 ± 0.00 years) and sedentary counterparts (n: 13, mean age 12.92 ± 0.75 years). The EUROFIT test battery was used to determine physical fitness. The test battery includes body height and weight measurements, touching the discs, flamingo balance, health ball throw, vertical jump, sit and reach, 30 s sit-ups, 20-meter sprint run, and 20-meter shuttle run tests. Data were analyzed by the Mann-Whitney U test. Significance was defined as p < .05. In conclusion, children who do sports are more successful on most of the fitness parameters than sedentary children.

  9. Clinical validation of the LKB model and parameter sets for predicting radiation-induced pneumonitis from breast cancer radiotherapy

    International Nuclear Information System (INIS)

    Tsougos, Ioannis; Mavroidis, Panayiotis; Theodorou, Kyriaki; Rajala, J; Pitkaenen, M A; Holli, K; Ojala, A T; Hyoedynmaa, S; Jaervenpaeae, Ritva; Lind, Bengt K; Kappas, Constantin

    2006-01-01

    The choice of the appropriate model and parameter set in determining the relation between the incidence of radiation pneumonitis and dose distribution in the lung is of great importance, especially in the case of breast radiotherapy where the observed incidence is fairly low. From our previous study based on 150 breast cancer patients, where the fits of dose-volume models to clinical data were estimated (Tsougos et al 2005 Evaluation of dose-response models and parameters predicting radiation induced pneumonitis using clinical data from breast cancer radiotherapy Phys. Med. Biol. 50 3535-54), one could get the impression that the relative seriality is significantly better than the LKB NTCP model. However, the estimation of the different NTCP models was based on their goodness-of-fit on clinical data, using various sets of published parameters from other groups, and this fact may provisionally justify the results. Hence, we sought to investigate further the LKB model, by applying different published parameter sets for the very same group of patients, in order to be able to compare the results. It was shown that, depending on the parameter set applied, the LKB model is able to predict the incidence of radiation pneumonitis with acceptable accuracy, especially when implemented on a sub-group of patients (120) receiving D̄/EUD higher than 8 Gy. In conclusion, the goodness-of-fit of a certain radiobiological model on a given clinical case is closely related to the selection of the proper scoring criteria and parameter set as well as to the compatibility of the clinical case from which the data were derived. (letter to the editor)
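
The standard LKB formulation referred to above reduces the dose-volume histogram to a generalized EUD and maps it through a probit. A minimal sketch, in which the DVH and the parameter values (n, m, TD50) are placeholders, not the published sets evaluated in the letter:

```python
# LKB NTCP in its standard form: gEUD reduction of the DVH, then a probit.
import numpy as np
from math import erf, sqrt

def lkb_ntcp(dose, vol, n, m, td50):
    """dose bins (Gy) and their fractional volumes from a DVH."""
    vol = np.asarray(vol, float) / np.sum(vol)        # normalise volumes
    eud = np.sum(vol * np.asarray(dose, float) ** (1.0 / n)) ** n
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))           # standard normal CDF

# Toy DVH: most of the organ at low dose, a small high-dose region.
ntcp = lkb_ntcp(dose=[2, 10, 20, 40], vol=[0.55, 0.25, 0.15, 0.05],
                n=0.9, m=0.4, td50=30.0)
```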

  10. The performance of simulated annealing in parameter estimation for vapor-liquid equilibrium modeling

    Directory of Open Access Journals (Sweden)

    A. Bonilla-Petriciolet

    2007-03-01

    Full Text Available In this paper we report the application and evaluation of the simulated annealing (SA) optimization method in parameter estimation for vapor-liquid equilibrium (VLE) modeling. We tested this optimization method using the classical least squares and error-in-variable approaches. The reliability and efficiency of the data-fitting procedure are also considered using different values of the SA algorithm parameters. Our results indicate that this method, when properly implemented, is a robust procedure for nonlinear parameter estimation in thermodynamic models. However, in difficult problems it can still converge to local optima of the objective function.
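
A sketch of the least-squares approach with a simulated-annealing optimizer: SciPy's dual_annealing stands in for the authors' SA implementation, and the "measured" activity coefficients are generated from a two-parameter Margules model rather than taken from real VLE data.

```python
# Least-squares estimation of Margules two-parameter activity-coefficient
# constants by simulated annealing (scipy.optimize.dual_annealing).
import numpy as np
from scipy.optimize import dual_annealing

x1 = np.linspace(0.05, 0.95, 15)      # liquid mole fractions of component 1
x2 = 1.0 - x1
A12, A21 = 1.8, 1.2                   # "true" parameters to recover

def ln_gammas(p):
    a12, a21 = p
    ln_g1 = x2 ** 2 * (a12 + 2.0 * (a21 - a12) * x1)
    ln_g2 = x1 ** 2 * (a21 + 2.0 * (a12 - a21) * x2)
    return ln_g1, ln_g2

g1_obs, g2_obs = ln_gammas((A12, A21))

def objective(p):                     # classical least-squares objective
    g1, g2 = ln_gammas(p)
    return np.sum((g1 - g1_obs) ** 2 + (g2 - g2_obs) ** 2)

res = dual_annealing(objective, bounds=[(0, 5), (0, 5)], seed=7)
```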

  11. Regionalization of SWAT Model Parameters for Use in Ungauged Watersheds

    Directory of Open Access Journals (Sweden)

    Indrajeet Chaubey

    2010-11-01

    Full Text Available There has been a steady shift towards modeling and model-based approaches as primary methods of assessing watershed response to hydrologic inputs and land management, and of quantifying watershed-wide best management practice (BMP) effectiveness. Watershed models often require some degree of calibration and validation to achieve adequate watershed, and therefore BMP, representation. This is, however, only possible for gauged watersheds. There are many watersheds for which very little or no monitoring data are available, raising the question of whether model parameters obtained through calibration of gauged watersheds can be extended and/or generalized to ungauged watersheds within the same region. This study explored the possibility of developing regionalized model parameter sets for use in ungauged watersheds. The study evaluated two regionalization methods, global averaging and regression-based parameters, on the SWAT model using data from priority watersheds in Arkansas. The resulting parameters were tested and model performance determined on three gauged watersheds. Nash-Sutcliffe efficiencies (NS) for stream flow obtained using regression-based parameters (0.53-0.83) compared well with corresponding values obtained through model calibration (0.45-0.90). Model performance obtained using globally averaged parameter values was also generally acceptable (0.4 ≤ NS ≤ 0.75). Results from this study indicate that regionalized parameter sets for the SWAT model can be obtained and used for making satisfactory hydrologic response predictions in ungauged watersheds.
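
The skill score quoted above is easy to state precisely: the Nash-Sutcliffe efficiency is 1 minus the ratio of model error variance to the variance of the observations, so NS = 1 is a perfect fit and NS ≤ 0 means the model is no better than the observed mean. A minimal sketch with made-up flow values:

```python
# Nash-Sutcliffe efficiency of a simulated series against observations.
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([3.0, 5.0, 8.0, 6.0, 4.0])          # hypothetical flows
assert nash_sutcliffe(obs, obs) == 1.0             # perfect simulation
print(round(nash_sutcliffe(obs, [4.0, 5.0, 7.0, 6.0, 5.0]), 3))
```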

  12. Goodness-of-Fit Assessment of Item Response Theory Models

    Science.gov (United States)

    Maydeu-Olivares, Alberto

    2013-01-01

    The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…

  13. Use of stochastic methods for robust parameter extraction from impedance spectra

    International Nuclear Information System (INIS)

    Bueschel, Paul; Troeltzsch, Uwe; Kanoun, Olfa

    2011-01-01

    The fitting of impedance models to measured data is an essential step in impedance spectroscopy (IS). Due to often complicated, nonlinear models, a large number of parameters, large search spaces, and the presence of noise, automated determination of the unknown parameters is a challenging task. The stronger the nonlinear behavior of a model, the weaker the convergence of the corresponding regression, and the greater the probability of becoming trapped in local minima during parameter extraction. For fast measurements or automatic measurement systems, these problems become the limiting factors of use. We compared the usability of stochastic algorithms (evolution, simulated annealing, and particle filter) with the widely used tool LEVM for parameter extraction in IS. The comparison is based on a reference model by J.R. Macdonald and on a battery model used with noisy measurement data. The results show different performances of the algorithms for these two problems, depending on the search space and the model used for optimization. The results obtained by the particle filter were the best for both models: this method delivers the most reliable result in both cases, even for the ill-posed battery model.

  14. Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model

    DEFF Research Database (Denmark)

    Åberg, Andreas; Widd, Anders; Abildskov, Jens

    2016-01-01

    be used directly for accurate full-scale transient simulations. The model was validated against full-scale data with an engine following the European Transient Cycle. The validation showed that the predictive capability for nitrogen oxides (NOx) was satisfactory. After re-estimation of the adsorption...... and desorption parameters with full-scale transient data, the fit for both NOx and NH3-slip was satisfactory....

  15. Phylogenetic tree reconstruction accuracy and model fit when proportions of variable sites change across the tree.

    Science.gov (United States)

    Shavit Grievink, Liat; Penny, David; Hendy, Michael D; Holland, Barbara R

    2010-05-01

    Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction.

  16. A new analytical method for estimating lumped parameter constants of linear viscoelastic models from strain rate tests

    Science.gov (United States)

    Mattei, G.; Ahluwalia, A.

    2018-04-01

    We introduce a new function, the apparent elastic modulus strain-rate spectrum E_app(ε̇), for the derivation of lumped parameter constants for Generalized Maxwell (GM) linear viscoelastic models from stress-strain data obtained at various compressive strain rates (ε̇). The E_app(ε̇) function was derived using the tangent modulus function obtained from the GM model stress-strain response to a constant ε̇ input. Material viscoelastic parameters can be rapidly derived by fitting experimental E_app data obtained at different strain rates to the E_app(ε̇) function. This single-curve fitting returns similar viscoelastic constants to the original epsilon-dot method based on a multi-curve global fitting procedure with shared parameters. Its low computational cost permits quick and robust identification of viscoelastic constants even when a large number of strain rates or replicates per strain rate are considered. This method is particularly suited to the analysis of bulk compression and nano-indentation data of soft (bio)materials.

  17. Source Localization with Acoustic Sensor Arrays Using Generative Model Based Fitting with Sparse Constraints

    Directory of Open Access Journals (Sweden)

    Javier Macias-Guarasa

    2012-10-01

    Full Text Available This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints on the parameters of the model are included, enforcing the number of simultaneously active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with the SRP-PHAT strategies.

  18. Models for estimating photosynthesis parameters from in situ production profiles

    Science.gov (United States)

    Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana

    2017-12-01

    The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of
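
The parameter-recovery step described above can be sketched with one common photosynthesis-irradiance function, P(I) = P_m (1 - exp(-αI/P_m)): generate production data at known α (initial slope) and P_m (assimilation number), then refit them. The data below are synthetic, not Station Aloha profiles.

```python
# Recover photosynthesis parameters by nonlinear least squares on a
# saturating photosynthesis-irradiance curve.
import numpy as np
from scipy.optimize import curve_fit

def pi_curve(I, alpha, Pm):
    return Pm * (1.0 - np.exp(-alpha * I / Pm))

rng = np.random.default_rng(8)
I = np.linspace(5, 1500, 25)                 # irradiance grid (arbitrary units)
P = pi_curve(I, 0.04, 12.0) + rng.normal(0, 0.1, I.size)

(alpha_fit, Pm_fit), _ = curve_fit(pi_curve, I, P, p0=[0.01, 5.0])
```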

  19. Obtention of the parameters of the Voigt function using the least square fit method

    International Nuclear Information System (INIS)

    Flores Ll, H.; Cabral P, A.; Jimenez D, H.

    1990-01-01

    The fundamental parameters of the Voigt function are determined: the Lorentzian width (Γ L ) and the Gaussian width (Γ G ), with an error inferior to 1% in almost all cases within the intervals 0.01 ≤ Γ L / Γ G ≤ 1 and 0.3 ≤ Γ G / Γ L ≤ 1. This is achieved using the least-squares fit method with an algebraic function, yielding a simple method to obtain the fundamental parameters of the Voigt function used in many spectroscopies. (Author)
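
A modern equivalent of this fit is easy to sketch with scipy.special.voigt_profile, which is parameterized by the Gaussian standard deviation σ and the Lorentzian half-width γ rather than the FWHMs (Γ G = 2σ√(2 ln 2), Γ L = 2γ). Synthetic, noise-free data stand in for a measured spectrum:

```python
# Recover the Gaussian and Lorentzian widths of a Voigt profile by
# least-squares fitting.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

x = np.linspace(-5, 5, 200)
sigma_true, gamma_true = 0.8, 0.5
y = voigt_profile(x, sigma_true, gamma_true)

popt, _ = curve_fit(lambda x, s, g: voigt_profile(x, s, g), x, y, p0=[1.0, 1.0])
sigma_fit, gamma_fit = popt
```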

  20. Covariances for neutron cross sections calculated using a regional model based on local-model fits to experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D.L.; Guenther, P.T.

    1983-11-01

    We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references.
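
The a posteriori procedure can be sketched on a toy two-parameter "model": form a covariance matrix from the scatter of per-element parameter fits about the regional value, then propagate it through the model by first-order error propagation (the sandwich rule J C Jᵀ). The parameter values and correlation below are chosen, as in the paper's finding, so that correlations reduce the propagated error relative to an uncorrelated quadrature sum.

```python
# Estimate a parameter covariance from scatter and propagate it to a
# calculated quantity (toy model, illustrative numbers).
import numpy as np

rng = np.random.default_rng(9)
regional = np.array([10.0, 2.0])                 # regional parameter values
# Hypothetical per-element fits scattering about the regional values,
# with strong positive parameter correlation.
samples = regional + rng.multivariate_normal(
    [0, 0], [[0.04, 0.018], [0.018, 0.01]], size=12)

C = np.cov(samples, rowvar=False)                # a posteriori covariance

def model(p, E):
    return p[0] * np.exp(-p[1] * E)              # toy cross-section model

E = 1.0
eps = 1e-6
J = np.array([(model(regional + eps * e, E) - model(regional - eps * e, E))
              / (2 * eps) for e in np.eye(2)])   # sensitivity vector
var = J @ C @ J                                  # propagated variance
print(np.sqrt(var))                              # standard uncertainty
```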

  2. [Unfolding item response model using best-worst scaling].

    Science.gov (United States)

    Ikehara, Kazuya

    2015-02-01

    In attitude measurement and sensory tests, the unfolding model is typically used. In this model, response probability is formulated by the distance between the person and the stimulus. In this study, we proposed an unfolding item response model using best-worst scaling (BWU model), in which a person chooses the best and worst stimulus among repeatedly presented subsets of stimuli. We also formulated an unfolding model using best scaling (BU model), and compared the accuracy of estimates between the BU and BWU models. A simulation experiment showed that the BWU model performed much better than the BU model in terms of bias and root mean square errors of estimates. With reference to Usami (2011), the proposed models were applied to actual data to measure attitudes toward tardiness. Results indicated high similarity between stimuli estimates generated with the proposed models and those of Usami (2011).

  3. Simple inhomogeneous cosmological (toy) models

    International Nuclear Information System (INIS)

    Isidro, Eddy G. Chirinos; Zimdahl, Winfried; Vargas, Cristofher Zuñiga

    2016-01-01

    Based on the Lemaître-Tolman-Bondi (LTB) metric we consider two flat inhomogeneous big-bang models. We aim at clarifying, as far as possible analytically, basic features of the dynamics of the simplest inhomogeneous models and at pointing out the potential usefulness of exact inhomogeneous solutions as generalizations of the homogeneous configurations of the cosmological standard model. We discuss explicitly partial successes but also potential pitfalls of these simplest models. Although primarily seen as toy models, the relevant free parameters are fixed by best-fit values using the Joint Light-curve Analysis (JLA) sample data. On the basis of a likelihood analysis we find that a local hump with an extension of almost 2 Gpc provides a better description of the observations than a local void, for which we obtain a best-fit scale of about 30 Mpc. Future redshift-drift measurements are discussed as a promising tool to discriminate between inhomogeneous configurations and the ΛCDM model.

  4. A Best-Estimate Reactor Core Monitor Using State Feedback Strategies to Reduce Uncertainties

    International Nuclear Information System (INIS)

    Martin, Robert P.; Edwards, Robert M.

    2000-01-01

    The development and demonstration of a new algorithm to reduce modeling and state-estimation uncertainty in best-estimate simulation codes has been investigated. Demonstration is given by way of a prototype reactor core monitor. The architecture of this monitor integrates a control-theory-based, distributed-parameter estimation technique into a production-grade best-estimate simulation code. The Kalman Filter-Sequential Least-Squares (KFSLS) parameter estimation algorithm has been extended for application into the computational environment of the best-estimate simulation code RELAP5-3D. In control system terminology, this configuration can be thought of as a 'best-estimate' observer. The application to a distributed-parameter reactor system involves a unique modal model that approximates physical components, such as the reactor, by describing both states and parameters by an orthogonal expansion. The basic KFSLS parameter estimation is used to dynamically refine a spatially varying (distributed) parameter. The application of the distributed-parameter estimator is expected to complement a traditional nonlinear best-estimate simulation code by providing a mechanism for reducing both code input (modeling) and output (state-estimation) uncertainty in complex, distributed-parameter systems

  5. Automatic fitting of spiking neuron models to electrophysiological recordings

    Directory of Open Access Journals (Sweden)

    Cyrille Rossant

    2010-03-01

    Full Text Available Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.
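
    The Brian model-fitting toolbox itself is not reproduced here. The toy sketch below conveys the underlying idea only: simulate a hypothetical leaky integrate-and-fire neuron, treat one parameter (the firing threshold) as unknown, and recover it by a brute-force search that matches the simulated spike train to the "recorded" one. All parameter values and the loss function are invented, and the search is serial rather than GPU-parallel.

```python
import numpy as np

def lif_spikes(I, dt, tau, Vt):
    # leaky integrate-and-fire, forward-Euler; reset to 0 after each spike
    V, times = 0.0, []
    for k, Ik in enumerate(I):
        V += dt * (-V + Ik) / tau
        if V >= Vt:
            times.append(k * dt)
            V = 0.0
    return np.array(times)

rng = np.random.default_rng(1)
dt, steps = 0.1, 10000                          # ms; 1 s of injected current
I = 1.2 + 0.3 * rng.standard_normal(steps)      # fluctuating input current
target = lif_spikes(I, dt, tau=20.0, Vt=1.0)    # the "recorded" spike train

def loss(Vt):
    # spike-count mismatch dominates; mean timing error breaks ties
    s = lif_spikes(I, dt, 20.0, Vt)
    if len(s) != len(target):
        return 1e6 + abs(len(s) - len(target))
    return float(np.abs(s - target).mean())

grid = np.linspace(0.6, 1.4, 81)                # brute-force threshold search
best_Vt = min(grid, key=loss)
```

In practice many parameters are fitted at once and a coincidence-based criterion (such as the gamma factor) replaces this naive loss, which is where the GPU parallelism of the paper's library pays off.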

  6. Cosmological-model-parameter determination from satellite-acquired type Ia and IIP Supernova Data

    International Nuclear Information System (INIS)

    Podariu, Silviu; Nugent, Peter; Ratra, Bharat

    2000-01-01

    We examine the constraints that satellite-acquired Type Ia and IIP supernova apparent magnitude versus redshift data will place on cosmological model parameters in models with and without a constant or time-variable cosmological constant lambda. High-quality data which could be acquired in the near future will result in tight constraints on these parameters. For example, if all other parameters of a spatially-flat model with a constant lambda are known, the supernova data should constrain the non-relativistic matter density parameter omega to better than 1 (2, 0.5) at 1 sigma with neutral (worst case, best case) assumptions about data quality

  7. Nonlinear models applied to seed germination of Rhipsalis cereuscula Haw (Cactaceae

    Directory of Open Access Journals (Sweden)

    Terezinha Aparecida Guedes

    2014-09-01

    Full Text Available The objective of this analysis was to fit germination data of Rhipsalis cereuscula Haw seeds to the Weibull model with three parameters using Frequentist and Bayesian methods. Five parameterizations were compared using the Bayesian analysis to fit a prior distribution. The parameter estimates from the Frequentist method were similar to the Bayesian responses considering the following non-informative a priori distribution for the parameter vectors: gamma (10³, 10³) in the model M1, normal (0, 10⁶) in the model M2, uniform (0, Lsup) in the model M3, exp (μ) in the model M4 and Lnormal (μ, 10⁶) in the model M5. However, to achieve the convergence in the models M4 and M5, we applied the μ from the estimates of the Frequentist approach. The best models fitted by the Bayesian method were the M1 and M3. The adequacy of these models was based on the advantages over the Frequentist method such as the reduced computational efforts and the possibility of comparison.
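
    The Frequentist side of such an analysis can be sketched as a nonlinear least-squares fit. The three-parameter Weibull form below, G(t) = A(1 - exp(-(t/b)^c)), is an assumed parameterization (the paper's exact ones are not given), and the germination data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull3(t, A, b, c):
    # A: final germination (%), b: scale (days), c: shape
    return A * (1.0 - np.exp(-(t / b) ** c))

rng = np.random.default_rng(2)
t = np.arange(1.0, 31.0)                        # days after sowing
g = weibull3(t, 90.0, 8.0, 2.5) + rng.normal(0.0, 1.0, t.size)

(A, b, c), _ = curve_fit(weibull3, t, g, p0=[80.0, 5.0, 1.0],
                         bounds=([0.0, 0.1, 0.1], [100.0, 30.0, 10.0]))
```

Bounds keep the scale and shape parameters positive during the search, which stabilizes the fit.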

  8. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO3 standards

    International Nuclear Information System (INIS)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to freeze-dried, 0.2% accurate, gravimetric uranium nitrate standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights both the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''Chi-Squared Matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s.
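
    The idea of treating the standard masses themselves as fit parameters, weighted by their 0.2% gravimetric error alongside the system errors, can be sketched with a generic weighted residual vector; the linear calibration form and all numbers below are assumptions for illustration, not the original VA02A program.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
m_true = np.linspace(0.1, 1.0, 10)                    # true standard masses, mg
m_meas = m_true * (1.0 + rng.normal(0.0, 0.002, 10))  # 0.2% gravimetric error
y = 150.0 * m_true + 5.0 + rng.normal(0.0, 0.5, 10)   # analyzer response
sig_y, sig_m = 0.5, 0.002 * m_meas                    # system and mass errors

def residuals(p):
    # p = [slope, intercept, m_1, ..., m_10]; the masses are fitted too,
    # each pulled toward its measured value with weight 1/sig_m
    a, b, m = p[0], p[1], p[2:]
    return np.concatenate([(y - (a * m + b)) / sig_y,
                           (m - m_meas) / sig_m])

fit = least_squares(residuals, x0=np.concatenate([[100.0, 0.0], m_meas]))
a_fit, b_fit = fit.x[:2]
```

Because both error sources enter the same chi-square, the fitted calibration parameters automatically reflect the known mass uncertainty.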

  9. A genetic algorithm for optimizing multi-pole Debye models of tissue dielectric properties

    International Nuclear Information System (INIS)

    Clegg, J; Robinson, M P

    2012-01-01

    Models of tissue dielectric properties (permittivity and conductivity) enable the interactions of tissues and electromagnetic fields to be simulated, which has many useful applications in microwave imaging, radio propagation, and non-ionizing radiation dosimetry. Parametric formulae are available, based on a multi-pole model of tissue dispersions, but although they give the dielectric properties over a wide frequency range, they do not convert easily to the time domain. An alternative is the multi-pole Debye model which works well in both time and frequency domains. Genetic algorithms are an evolutionary approach to optimization, and we found that this technique was effective at finding the best values of the multi-Debye parameters. Our genetic algorithm optimized these parameters to fit to either a Cole–Cole model or to measured data, and worked well over wide or narrow frequency ranges. Over 10 Hz–10 GHz the best fits for muscle, fat or bone were each found for ten dispersions or poles in the multi-Debye model. The genetic algorithm is a fast and effective method of developing tissue models that compares favourably with alternatives such as the rational polynomial fit. (paper)
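
    An evolutionary fit of a multi-pole Debye model can be sketched with SciPy's differential evolution (an evolutionary optimizer standing in for the authors' genetic algorithm), here fitting a two-pole Debye model to synthetic target data. Relaxation times are searched in log10 space, and all values are invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

f = np.logspace(2, 9, 60)                       # 100 Hz to 1 GHz
w = 2.0 * np.pi * f

def multi_debye(p):
    # p = [eps_inf, deps1, log10(tau1), deps2, log10(tau2)]
    eps_inf, d1, lt1, d2, lt2 = p
    return (eps_inf + d1 / (1.0 + 1j * w * 10.0**lt1)
                    + d2 / (1.0 + 1j * w * 10.0**lt2))

target = multi_debye([5.0, 30.0, -4.0, 15.0, -8.0])   # synthetic "measured" data

def cost(p):
    return float(np.sum(np.abs(multi_debye(p) - target) ** 2))

# non-overlapping tau bounds remove the pole-ordering degeneracy
bounds = [(1.0, 10.0), (1.0, 50.0), (-6.0, -2.0), (1.0, 50.0), (-10.0, -6.0)]
res = differential_evolution(cost, bounds, seed=4, maxiter=300, tol=1e-8)
```

For real tissue data the target would be measured permittivity or a Cole-Cole reference, and ten poles rather than two would be used, as in the paper.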

  10. Is the mental wellbeing of young Australians best represented by a single, multidimensional or bifactor model?

    Science.gov (United States)

    Hides, Leanne; Quinn, Catherine; Stoyanov, Stoyan; Cockshaw, Wendell; Mitchell, Tegan; Kavanagh, David J

    2016-07-30

    Internationally there is a growing interest in the mental wellbeing of young people. However, it is unclear whether mental wellbeing is best conceptualized as a general wellbeing factor or a multidimensional construct. This paper investigated whether mental wellbeing, measured by the Mental Health Continuum-Short Form (MHC-SF), is best represented by: (1) a single-factor general model; (2) a three-factor multidimensional model or (3) a combination of both (bifactor model). 2220 young Australians aged between 16 and 25 years completed an online survey including the MHC-SF and a range of other wellbeing and mental ill-health measures. Exploratory factor analysis supported a bifactor solution, comprised of a general wellbeing factor, and specific group factors of psychological, social and emotional wellbeing. Confirmatory factor analysis indicated that the bifactor model had a better fit than competing single and three-factor models. The MHC-SF total score was more strongly associated with other wellbeing and mental ill-health measures than the social, emotional or psychological subscale scores. Findings indicate that the mental wellbeing of young people is best conceptualized as an overarching latent construct (general wellbeing) to which emotional, social and psychological domains contribute. The MHC-SF total score is a valid and reliable measure of this general wellbeing factor. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Constraining statistical-model parameters using fusion and spallation reactions

    Directory of Open Access Journals (Sweden)

    Charity Robert J.

    2011-10-01

    Full Text Available The de-excitation of compound nuclei has been successfully described for several decades by means of statistical models. However, such models involve a large number of free parameters and ingredients that are often underconstrained by experimental data. We show how the degeneracy of the model ingredients can be partially lifted by studying different entrance channels for de-excitation, which populate different regions of the parameter space of the compound nucleus. Fusion reactions, in particular, play an important role in this strategy because they fix three out of four of the compound-nucleus parameters (mass, charge and total excitation energy). The present work focuses on fission and intermediate-mass-fragment emission cross sections. We prove how equivalent parameter sets for fusion-fission reactions can be resolved using another entrance channel, namely spallation reactions. Intermediate-mass-fragment emission can be constrained in a similar way. An interpretation of the best-fit IMF barriers in terms of the Wigner energies of the nascent fragments is discussed.

  12. Fitness Club

    CERN Multimedia

    Fitness Club

    2012-01-01

    The CERN Fitness Club is pleased to announce its new early morning class, which will be taking place on: Tuesdays from 24th April, 07:30 to 08:15, in 216 (Pump Hall, close to entrance C) – Facilities include changing rooms and showers. The Classes: The early morning classes will focus on workouts which will not only help you build strength and stamina, but also improve your balance and coordination. Our qualified instructor Germana will accompany you throughout the workout to ensure you stay motivated so you achieve the best results. Sign up and discover the best way to start your working day full of energy! How to subscribe? We invite you along to a FREE trial session; if you enjoy the activity, please sign up via our website: https://espace.cern.ch/club-fitness/Activities/SUBSCRIBE.aspx. * * * * * * * * Saturday 28th April Get in shape for the summer at our fitness workshop and zumba dance party: Fitness workshop with Germana 13:00 to 14:30 - 216 (Pump Hall) Price...

  13. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  14. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

    Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process the models allow for profound conclusions about the test takers. However, before such a model can be used its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study. In this study we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to four alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
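
    The Hausman comparison of two estimates of the same item parameters follows directly from its definition: H = dᵀ(V_c − V_e)⁻¹d, where d is the difference between the consistent and the efficient estimate, referred to a chi-square distribution with as many degrees of freedom as parameters. The numbers below are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2

def hausman(theta_c, V_c, theta_e, V_e):
    # theta_c: consistent-but-inefficient estimate; theta_e: efficient estimate
    d = theta_c - theta_e
    stat = float(d @ np.linalg.inv(V_c - V_e) @ d)
    return stat, float(chi2.sf(stat, df=d.size))

# invented item-parameter estimates that agree well (model holds)
theta_c = np.array([1.02, -0.48])
V_c = np.array([[0.010, 0.001], [0.001, 0.012]])
theta_e = np.array([1.00, -0.50])
V_e = np.array([[0.004, 0.000], [0.000, 0.005]])

stat, p = hausman(theta_c, V_c, theta_e, V_e)   # large p: no sign of misfit
```

A small statistic (large p) is consistent with the model; a large one signals that the two estimators disagree more than their sampling error allows.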

  15. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    Science.gov (United States)

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Rasch-model-fitting data were simulated for 25-item dichotomous scales with sample sizes ranging from N = 50 to N = 2500, and analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
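
    A minimal version of the simulation set-up described here (dichotomous responses generated under the Rasch model for 25 well-targeted items, then a class-interval fit statistic for one item) might look as follows. The grouping into five ability class intervals and the crude Pearson-type statistic are simplifications for illustration, not the RUMM implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
N, k = 500, 25
theta = rng.normal(0.0, 1.0, N)                 # person abilities
b = np.linspace(-2.0, 2.0, k)                   # well-targeted item difficulties

# dichotomous Rasch model: P(X=1) = logistic(theta - b)
P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
X = (rng.random((N, k)) < P).astype(int)        # Rasch-fitting responses

# crude class-interval fit statistic for one item (observed vs expected score)
item = 12
groups = np.array_split(np.argsort(theta), 5)   # five ability class intervals
fit_stat = sum(
    (X[g, item].sum() - P[g, item].sum()) ** 2
    / (P[g, item] * (1.0 - P[g, item])).sum()
    for g in groups)
```

Repeating this over many replications and sample sizes is how the Type I error rates discussed in the paper are estimated empirically.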

  16. Effect of Ramadan intermittent fasting on selective fitness profile parameters in young untrained Muslim men.

    Science.gov (United States)

    Roy, Anindita Singha; Bandyopadhyay, Amit

    2015-01-01

    The present study was aimed at investigating the effects of sleep deprivation and dietary irregularities during Ramadan intermittent fasting (RIF) on selective fitness profile parameters in young untrained male Muslim individuals. 77 untrained Muslim men were recruited in the study. They were divided into the experimental group (EG; n=37, age: 22.62±1.77 years) and the control group (CG; n=40, age: 23.00±1.48 years). EG was undergoing RIF while CG abstained. Aerobic fitness, anaerobic capacity or high-intensity efforts (HIEs), agility, flexibility, vertical jump height and handgrip strength were measured on 8 separate occasions: 15 days before RIF, 7 days before RIF, the 1st day of RIF, the 7th day of RIF, the 15th day of RIF, the 21st day of RIF, the last day of RIF and 15 days after RIF. Aerobic fitness and HIE showed a significant difference (p<0.05) during RIF in EG. Agility and flexibility scores showed a significant decrease in EG during RIF, whereas changes in the vertical jump score (VJT) and handgrip strength were statistically insignificant. The studied parameters showed insignificant variation in CG during RIF. Aerobic fitness, HIEs, agility and flexibility showed significant intergroup variation during the different experimental trials. The present investigation revealed that RIF had adverse effects on the aerobic fitness, HIEs, agility and flexibility of young untrained Muslims of Kolkata, India. VJT, waist-hip ratio and handgrip strength were not affected by RIF in the studied population. A mild but statistically insignificant reduction in body mass was also observed after the mid-Ramadan week.

  17. Data-driven techniques to estimate parameters in a rate-dependent ferromagnetic hysteresis model

    International Nuclear Information System (INIS)

    Hu Zhengzheng; Smith, Ralph C.; Ernstberger, Jon M.

    2012-01-01

    The quantification of rate-dependent ferromagnetic hysteresis is important in a range of applications including high speed milling using Terfenol-D actuators. There exist a variety of frameworks for characterizing rate-dependent hysteresis including the magnetic model in Ref. , the homogenized energy framework, Preisach formulations that accommodate after-effects, and Prandtl-Ishlinskii models. A critical issue when using any of these models to characterize physical devices concerns the efficient estimation of model parameters through least squares data fits. A crux of this issue is the determination of initial parameter estimates based on easily measured attributes of the data. In this paper, we present data-driven techniques to efficiently and robustly estimate parameters in the homogenized energy model. This framework was chosen due to its physical basis and its applicability to ferroelectric, ferromagnetic and ferroelastic materials.

  18. Calibration of a biome-biogeochemical cycles model for modeling the net primary production of teak forests through inverse modeling of remotely sensed data

    Science.gov (United States)

    Imvitthaya, Chomchid; Honda, Kiyoshi; Lertlum, Surat; Tangtham, Nipon

    2011-01-01

    In this paper, we present the results of net primary production (NPP) modeling of teak (Tectona grandis Lin F.), an important species in tropical deciduous forests. The biome-biogeochemical cycles (Biome-BGC) model was calibrated to estimate NPP through the inverse modeling approach. A genetic algorithm (GA) was linked with Biome-BGC to determine the optimal ecophysiological model parameters. Biome-BGC was calibrated by adjusting the ecophysiological model parameters to fit the simulated LAI to the satellite LAI (SPOT-Vegetation), and the best fitness confirmed the high accuracy of the ecophysiological parameters generated by the GA. The modeled NPP, using the GA-optimized parameters as input data, was evaluated against daily NPP derived from the MODIS satellite and against annual field data in northern Thailand. The results showed that NPP estimates obtained using the optimized ecophysiological parameters were more accurate than those obtained using default literature parameterization. This improvement occurred mainly because the optimized parameters reduced the bias by reducing systematic underestimation in the model. These Biome-BGC results can be effectively applied in teak forests in tropical areas. The study proposes a more effective method of using a GA to determine ecophysiological parameters at the site level and represents a first step toward the analysis of the carbon budget of teak plantations at the regional scale.

  19. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

    Science.gov (United States)

    W. Hasan, W. Z.

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system’s modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models using frequency-domain data to provide a state-space or transfer function for the model. PMID:29351554

  20. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Science.gov (United States)

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models using frequency-domain data to provide a state-space or transfer function for the model.

  1. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Directory of Open Access Journals (Sweden)

    A H Sabry

    Full Text Available The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models using frequency-domain data to provide a state-space or transfer function for the model.

  2. Assessment and modeling of the groundwater hydrogeochemical quality parameters via geostatistical approaches

    Science.gov (United States)

    Karami, Shawgar; Madani, Hassan; Katibeh, Homayoon; Fatehi Marj, Ahmad

    2018-03-01

    Geostatistical methods are among the advanced techniques used for interpolation of groundwater quality data. The results obtained from geostatistics help decision makers to adopt suitable remedial measures to protect the quality of groundwater sources. Data used in this study were collected from 78 wells in the Varamin plain aquifer, located southeast of Tehran, Iran, in 2013. The ordinary kriging method was used to evaluate groundwater quality parameters. Seven main quality parameters (i.e. total dissolved solids (TDS), sodium adsorption ratio (SAR), electrical conductivity (EC), sodium (Na⁺), total hardness (TH), chloride (Cl⁻) and sulfate (SO₄²⁻)) were analyzed and interpreted by statistical and geostatistical methods. After data normalization by the Nscore method in WinGslib software, variography was compiled as a geostatistical tool to define spatial regression, and experimental variograms were plotted with GS+ software. The best theoretical model was then fitted to each variogram based on the minimum residual sum of squares (RSS). Cross-validation was used to determine the accuracy of the estimated data. Eventually, estimation maps of groundwater quality were prepared in WinGslib software, and estimation variance and estimation error maps were presented to evaluate the quality of the estimation at each estimated point. Results showed that the kriging method is more accurate than traditional interpolation methods.
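
    Ordinary kriging itself reduces to solving one small linear system per estimation point. The sketch below implements it from scratch with an exponential variogram model; the WinGslib/GS+ specifics are not reproduced, and the well locations and (normalized) quality values are synthetic.

```python
import numpy as np

def exp_variogram(h, nugget, sill, a):
    # exponential model with practical range a
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / a))

def ordinary_krige(xy, z, xy0, vario):
    n = len(z)
    h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = vario(h)
    np.fill_diagonal(A[:n, :n], 0.0)            # gamma(0) = 0
    A[n, n] = 0.0                               # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = vario(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)                   # weights + Lagrange multiplier
    est = float(w[:n] @ z)                      # kriged estimate
    kvar = float(w @ b)                         # kriging (estimation) variance
    return est, kvar

rng = np.random.default_rng(6)
xy = rng.uniform(0.0, 10.0, (30, 2))            # well locations
z = np.sin(xy[:, 0]) + 0.1 * rng.standard_normal(30)   # e.g. normalized TDS
vario = lambda h: exp_variogram(h, nugget=0.01, sill=1.0, a=5.0)
est, kvar = ordinary_krige(xy, z, np.array([5.0, 5.0]), vario)
```

Evaluating `ordinary_krige` on a grid yields both the estimation map and the estimation variance map mentioned in the abstract.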

  3. Best Statistical Distribution of flood variables for Johor River in Malaysia

    Science.gov (United States)

    Salarpour Goodarzi, M.; Yusop, Z.; Yusof, F.

    2012-12-01

    A complex flood event is characterized by a few characteristics such as flood peak, flood volume, and flood duration, which might be mutually correlated. This study explored the statistical distributions of peakflow, flood duration and flood volume at the Rantau Panjang gauging station on the Johor River in Malaysia. Hourly data were recorded for 45 years. The data were analysed based on water year (July-June). Five distributions, namely Log-Normal, Generalized Pareto, Log-Pearson, Normal and Generalized Extreme Value (GEV), were used to model the distribution of all three variables. Anderson-Darling and Kolmogorov-Smirnov goodness-of-fit tests were used to evaluate the best fit. Goodness-of-fit tests at the 5% level of significance indicate that all the models can be used to model the distribution of peakflow, flood duration and flood volume. However, the Generalized Pareto distribution is found to be the most suitable model when tested with the Anderson-Darling test, while the Kolmogorov-Smirnov test suggested that GEV is the best for peakflow. The results of this research can be used to improve flood frequency analysis. Comparison between Generalized Extreme Value, Generalized Pareto and Log-Pearson distributions in the cumulative distribution function of peakflow.
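
    The candidate-distribution comparison can be sketched with SciPy: fit each distribution by maximum likelihood and rank by the Kolmogorov-Smirnov statistic (the Anderson-Darling variant would be analogous). The 45 "annual peakflow" values below are synthetic, drawn from a GEV for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# 45 water-years of synthetic annual peakflow, GEV-distributed (m3/s)
peakflow = stats.genextreme.rvs(c=-0.1, loc=300.0, scale=80.0,
                                size=45, random_state=rng)

candidates = {
    "Log-Normal": stats.lognorm,
    "Generalized Pareto": stats.genpareto,
    "GEV": stats.genextreme,
}
ks_stat = {}
for name, dist in candidates.items():
    params = dist.fit(peakflow)                 # maximum-likelihood fit
    ks_stat[name] = stats.kstest(peakflow, dist.cdf, args=params).statistic

best = min(ks_stat, key=ks_stat.get)            # smallest KS statistic wins
```

Flood duration and volume would be ranked the same way, which is why different variables (and different tests) can favour different distributions, as the abstract reports.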

  4. Quality assessment and artificial neural networks modeling for characterization of chemical and physical parameters of potable water.

    Science.gov (United States)

    Salari, Marjan; Salami Shahid, Esmaeel; Afzali, Seied Hosein; Ehteshami, Majid; Conti, Gea Oliveri; Derakhshan, Zahra; Sheibani, Solmaz Nikbakht

    2018-04-22

    Today, due to population growth, industrial development and the variety of chemical compounds, the quality of drinking water has decreased. Five important river water quality properties, namely dissolved oxygen (DO), total dissolved solids (TDS), total hardness (TH), alkalinity (ALK) and turbidity (TU), were estimated from parameters that can be measured easily and at almost no cost: electrical conductivity (EC), temperature (T) and pH. Water quality parameters were simulated with two modeling methods: mathematical models and Artificial Neural Networks (ANN). The mathematical methods are based on polynomial fitting with the least squares method, and the ANN models are feed-forward networks. All conditions covered by neural network modeling were tested for all parameters in this study. All optimum ANN models developed to simulate water quality parameters had R-values close to 0.99, except for alkalinity, where the ANN model had an R-value of 0.82. Moreover, surface fitting techniques were used to refine the data sets. The presented models and equations are reliable tools for studying water quality parameters of similar rivers and a proper replacement for traditional water quality measuring equipment. Copyright © 2018 Elsevier Ltd. All rights reserved.
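
The mathematical branch of the study, polynomial fitting by least squares, can be sketched as follows. The EC/TDS pairs are hypothetical illustrative values, not the measured river data, and a single predictor is used for brevity:

```python
import numpy as np

# Hypothetical paired measurements: EC (uS/cm) and TDS (mg/L).
ec = np.array([250., 400., 560., 700., 850., 1000., 1200., 1400.])
tds = np.array([160., 255., 360., 450., 545., 640., 770., 900.])

# Second-order polynomial fit by ordinary least squares.
coeffs = np.polyfit(ec, tds, deg=2)
tds_hat = np.polyval(coeffs, ec)

# Report the correlation (R) between observed and predicted values,
# the same precision measure quoted in the abstract.
r = np.corrcoef(tds, tds_hat)[0, 1]
print(r)
```

The ANN branch would replace `polyfit` with a trained feed-forward network using EC, T and pH jointly as inputs.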

  5. Applicability of Different Hydraulic Parameters to Describe Soil Detachment in Eroding Rills

    Science.gov (United States)

    Wirtz, Stefan; Seeger, Manuel; Zell, Andreas; Wagner, Christian; Wagner, Jean-Frank; Ries, Johannes B.

    2013-01-01

    This study presents a comparison of experimental results with assumptions used in numerical models. The aim of the field experiments was to test the assumed linear relationship between different hydraulic parameters and soil detachment. Correlations between shear stress, unit length shear force, stream power, unit stream power, effective stream power and the detachment rate do not reveal a single parameter that consistently displays the best correlation. More importantly, the best fit varies not only from one experiment to another, but even between distinct measurement points. Different processes in rill erosion are responsible for the changing correlations, yet not all of these processes are considered in soil erosion models. Hence, hydraulic parameters alone are not sufficient to predict detachment rates: they predict fluvial incision of the rill bottom, but the main sediment sources are not sufficiently represented in the model equations. The results of this study show that there is still a lack of understanding of the physical processes underlying soil erosion. The exerted forces, soil stability and its expression, and the abstraction of the detachment and transport processes in shallow flowing water still lack a clear description and established interdependence. PMID:23717669

  6. FIREFLY (Fitting IteRativEly For Likelihood analYsis): a full spectral fitting code

    Science.gov (United States)

    Wilkinson, David M.; Maraston, Claudia; Goddard, Daniel; Thomas, Daniel; Parikh, Taniya

    2017-12-01

    We present a new spectral fitting code, FIREFLY, for deriving the stellar population properties of stellar systems. FIREFLY is a chi-squared minimization fitting code that fits combinations of single-burst stellar population models to spectroscopic data, following an iterative best-fitting process controlled by the Bayesian information criterion. No priors are applied; rather, all solutions within a statistical cut are retained with their weight. Moreover, no additive or multiplicative polynomials are employed to adjust the spectral shape. This fitting freedom is designed to map out the effect of intrinsic spectral energy distribution degeneracies, such as those among age, metallicity and dust reddening, on galaxy properties, and to quantify the effect of varying input model components on such properties. Dust attenuation is included using a new procedure, which was tested on integral field spectroscopic data in a previous paper. The fitting method is extensively tested with a comprehensive suite of mock galaxies, real galaxies from the Sloan Digital Sky Survey and Milky Way globular clusters. We also assess the robustness of the derived properties as a function of signal-to-noise ratio (S/N) and adopted wavelength range. We show that FIREFLY is able to recover age, metallicity, stellar mass, and even the star formation history remarkably well down to an S/N ∼ 5 for moderately dusty systems. Code and results are publicly available.
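
The idea of an iterative fit controlled by the Bayesian information criterion can be illustrated generically: chi-squared measures fit quality, and the BIC penalty stops the iteration from adding components indefinitely. This toy version selects a polynomial order rather than stellar population components, so it is a sketch of the principle, not of FIREFLY itself:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
sigma = 0.05
# Synthetic "spectrum": a quadratic trend plus Gaussian noise.
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0.0, sigma, x.size)

best = None
for k in range(1, 7):                       # k = number of free parameters
    coeffs = np.polyfit(x, y, deg=k - 1)
    resid = y - np.polyval(coeffs, x)
    chi2 = np.sum((resid / sigma) ** 2)
    bic = chi2 + k * np.log(x.size)         # chi-squared plus BIC penalty
    if best is None or bic < best[0]:
        best = (bic, k)                     # keep the model the BIC prefers
print(best)
```

Adding parameters always lowers chi-squared, but past the true model complexity the `k log n` penalty dominates, so the iteration settles on a parsimonious fit.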

  7. Influential input parameters for reflood model of MARS code

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Deog Yeon; Bang, Young Seok [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2012-10-15

    Best Estimate (BE) calculation has been used more broadly in the nuclear industry and regulation to reduce the significant conservatism in evaluating Loss of Coolant Accidents (LOCA). The reflood model has been identified as one of the problem areas in BE calculation. The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) program of the OECD/NEA is to advance the quantification of the uncertainty of the physical models in system thermal-hydraulic codes, by considering experimental results, especially for reflood. It is important to establish a methodology to identify and select the parameters influential on the response of reflood phenomena following a Large Break LOCA. To this end, a reference calculation and a sensitivity analysis to select the dominant influential parameters for the FEBA experiment are performed.

  8. Microscopic calculation of the Majorana parameters of the interacting boson model for the Hg isotopes

    Energy Technology Data Exchange (ETDEWEB)

    Druce, C.H.; Barrett, B.R. (Arizona Univ., Tucson (USA). Dept. of Physics); Pittel, S. (Delaware Univ., Newark (USA). Bartol Research Foundation); Duval, P.D. (BEERS Associates, Reston, VA (USA))

    1985-07-11

    The parameters of the Majorana interaction of the neutron-proton interacting boson model are calculated for the Hg isotopes. The calculations utilize the Otsuka-Arima-Iachello mapping procedure and also lead to predictions for the other boson parameters. The resulting spectra are compared with experimental spectra and those obtained from phenomenological fits.

  9. Microscopic calculation of the Majorana parameters of the interacting boson model for the Hg isotopes

    Science.gov (United States)

    Druce, C. H.; Pittel, S.; Barrett, B. R.; Duval, P. D.

    1985-07-01

    The parameters of the Majorana interaction of the neutron-proton interacting boson model are calculated for the Hg isotopes. The calculations utilize the Otsuka-Arima-Iachello mapping procedure and also lead to predictions for the other boson parameters. The resulting spectra are compared with experimental spectra and those obtained from phenomenological fits.

  10. Model Parameter Variability for Enhanced Anaerobic Bioremediation of DNAPL Source Zones

    Science.gov (United States)

    Mao, X.; Gerhard, J. I.; Barry, D. A.

    2005-12-01

    The objective of the Source Area Bioremediation (SABRE) project, an international collaboration of twelve companies, two government agencies and three research institutions, is to evaluate the performance of enhanced anaerobic bioremediation for the treatment of chlorinated ethene source areas containing dense non-aqueous phase liquids (DNAPL). This four-year, 5.7-million-dollar research effort focuses on a pilot-scale demonstration of enhanced bioremediation at a trichloroethene (TCE) DNAPL field site in the United Kingdom, and includes a significant program of laboratory and modelling studies. Prior to field implementation, a large-scale, multi-laboratory microcosm study was performed to determine the optimal system properties to support dehalogenation of TCE in site soil and groundwater. This statistically based suite of experiments measured the influence of key variables (electron donor, nutrient addition, bioaugmentation, TCE concentration and sulphate concentration) in promoting the reductive dechlorination of TCE to ethene. In addition, a comprehensive biogeochemical numerical model was developed for simulating the anaerobic dehalogenation of chlorinated ethenes. An appropriate (reduced) version of this model was combined with a parameter estimation method based on fitting of the experimental results. Each of over 150 individual microcosm calibrations involved matching predicted and observed time-varying concentrations of all chlorinated compounds. This study focuses on an analysis of this suite of fitted model parameter values, including the statistical correlation between parameters typically employed in standard Michaelis-Menten-type rate descriptions (e.g., maximum dechlorination rates, half-saturation constants) and the key experimental variables. The analysis provides insight into the degree to which aqueous-phase TCE and cis-DCE inhibit dechlorination of less-chlorinated compounds. Overall, this work provides a database of the numerical
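
The Michaelis-Menten-type rate description mentioned above can be fitted to concentration/rate observations with a standard least-squares routine. The data points and starting values below are hypothetical, not microcosm results from the SABRE study:

```python
import numpy as np
from scipy.optimize import curve_fit

def mm_rate(c, vmax, ks):
    """Michaelis-Menten rate: vmax = maximum rate, ks = half-saturation constant."""
    return vmax * c / (ks + c)

# Hypothetical TCE concentration (mg/L) vs observed dechlorination rate (mg/L/day).
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 40.0])
rate = np.array([0.9, 1.6, 2.5, 3.6, 4.3, 4.7, 4.9])

# Nonlinear least-squares calibration of (vmax, ks) against the observations.
(vmax, ks), pcov = curve_fit(mm_rate, conc, rate, p0=[5.0, 2.0])
print(vmax, ks)
```

Repeating such a calibration across many microcosms, as the study did at larger scale, yields the parameter database whose correlations with the experimental variables can then be analyzed.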

  11. FITTING A THREE DIMENSIONAL PEM FUEL CELL MODEL TO MEASUREMENTS BY TUNING THE POROSITY AND

    DEFF Research Database (Denmark)

    Bang, Mads; Odgaard, Madeleine; Condra, Thomas Joseph

    2004-01-01

    A three-dimensional, computational fluid dynamics (CFD) model of a PEM fuel cell is presented. The model consists of straight channels, porous gas diffusion layers, porous catalyst layers and a membrane. In this computational domain, most of the transport phenomena which govern the performance of the fuel cell are modelled, including the distribution of current density and how it affects the polarization curve. The porosity and conductivity of the catalyst layer are some of the most difficult parameters to measure, estimate and especially control, yet the proposed model shows how these two parameters can have significant influence on the performance of the fuel cell. The two parameters are shown to be key elements in adjusting the three-dimensional model to fit measured polarization curves. Results from the proposed model are compared to single-cell measurements on a test MEA from IRD Fuel Cells.

  12. Are Physical Education Majors Models for Fitness?

    Science.gov (United States)

    Kamla, James; Snyder, Ben; Tanner, Lori; Wash, Pamela

    2012-01-01

    The National Association of Sport and Physical Education (NASPE) (2002) has taken a firm stance on the importance of adequate fitness levels of physical education teachers stating that they have the responsibility to model an active lifestyle and to promote fitness behaviors. Since the NASPE declaration, national initiatives like Let's Move…

  13. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-09-28

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
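
Treating the near-short-circuit points as a statistical linear regression, as the abstract describes, yields Isc and its standard uncertainty directly from the fit covariance. The I-V points below are hypothetical values for illustration:

```python
import numpy as np

# Hypothetical I-V points near short circuit (V in volts, I in amps).
v = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])
i = np.array([5.001, 4.999, 4.996, 4.995, 4.992, 4.990])

# Straight-line regression I = a + b*V; Isc is the intercept a at V = 0.
coeffs, cov = np.polyfit(v, i, deg=1, cov=True)
b, a = coeffs                      # polyfit returns highest order first
isc = a
isc_std = np.sqrt(cov[1, 1])       # standard uncertainty of the intercept
print(isc, isc_std)
```

Widening the window of points shrinks this regression uncertainty, which is exactly why the abstract cautions that the fit uncertainty alone can miss the model-discrepancy term; the evidence-based window selection it describes addresses that gap.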

  14. FATAL, General Experiment Fitting Program by Nonlinear Regression Method

    International Nuclear Information System (INIS)

    Salmon, L.; Budd, T.; Marshall, M.

    1982-01-01

    1 - Description of problem or function: A generalized fitting program with a free-format keyword interface to the user. It permits experimental data to be fitted by non-linear regression methods to any function describable by the user. The user requires a minimum of computer experience but needs to provide a subroutine to define the function. Some statistical output is included, as well as 'best' estimates of the function's parameters. 2 - Method of solution: The regression method used is based on a minimization technique devised by Powell (Harwell Subroutine Library VA05A, 1972) which does not require the use of analytical derivatives. The method employs a quasi-Newton procedure balanced with a steepest-descent correction. Experience shows this to be efficient for a very wide range of applications. 3 - Restrictions on the complexity of the problem: The current version of the program permits functions to be defined with up to 20 parameters. The function may be fitted to a maximum of 400 points, preferably with estimated weights supplied for each point.
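
The derivative-free fitting loop that FATAL performs can be sketched with SciPy's Powell minimizer (a descendant of the same family of Powell techniques, though not the Harwell VA05A routine itself). The user-defined function and data here are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical decay-curve data to be fitted by y = p0 * exp(-p1 * t) + p2.
t = np.linspace(0.0, 5.0, 25)
y_obs = 3.0 * np.exp(-1.2 * t) + 0.5

def sum_sq(p):
    """Sum of squared residuals (unit weights); the user-supplied model lives here."""
    return np.sum((y_obs - (p[0] * np.exp(-p[1] * t) + p[2])) ** 2)

# Derivative-free minimization: no analytical gradients are required.
res = minimize(sum_sq, x0=[1.0, 1.0, 0.0], method="Powell")
print(res.x)
```

As in FATAL, the user only defines the model function; the minimizer supplies the 'best' parameter estimates without analytical derivatives.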

  15. Scaling up watershed model parameters--Flow and load simulations of the Edisto River Basin

    Science.gov (United States)

    Feaster, Toby D.; Benedict, Stephen T.; Clark, Jimmy M.; Bradley, Paul M.; Conrads, Paul

    2014-01-01

    The Edisto River is the longest and largest river system completely contained in South Carolina and is one of the longest free-flowing blackwater rivers in the United States. The Edisto River basin also has fish-tissue mercury concentrations that are some of the highest recorded in the United States. As part of an effort by the U.S. Geological Survey to expand the understanding of relations among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations within the Edisto River basin, analyses and simulations of the hydrology of the Edisto River basin were made with the topography-based hydrological model (TOPMODEL). The potential for scaling up a previous application of TOPMODEL for the McTier Creek watershed, a small headwater catchment of the Edisto River basin, was assessed. Scaling up was done in a stepwise process beginning with applying the calibration parameters, meteorological data, and topographic wetness index data from the McTier Creek TOPMODEL to the Edisto River TOPMODEL. Additional changes were made in subsequent simulations, culminating in the best simulation, which included meteorological and topographic wetness index data from the Edisto River basin and updated values for some of the TOPMODEL calibration parameters. Comparison of goodness-of-fit statistics between measured and simulated daily mean streamflow for the two models showed that, with calibration, the Edisto River TOPMODEL produced slightly better results than the McTier Creek model, despite the significant difference in drainage-area size at the outlet locations for the two models (30.7 and 2,725 square miles, respectively). Along with the TOPMODEL hydrologic simulations, a visualization tool (the Edisto River Data Viewer) was developed to help assess trends and influencing variables in the stream ecosystem. Incorporated into the visualization tool were the water-quality load models TOPLOAD, TOPLOAD-H, and LOADEST
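
Goodness-of-fit between measured and simulated daily mean streamflow, of the kind used to compare the two TOPMODEL applications, is commonly summarized with the Nash-Sutcliffe efficiency. A minimal sketch with hypothetical flow values (the abstract does not state which statistics were used, so this is an illustrative choice):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

# Hypothetical daily mean streamflow (cfs): observed vs simulated.
obs = np.array([120., 95., 80., 150., 300., 220., 170., 140.])
sim = np.array([110., 100., 85., 140., 280., 230., 180., 150.])
print(round(nash_sutcliffe(obs, sim), 3))
```

Computing the same statistic for both model configurations gives a direct, scale-free comparison despite the large difference in drainage areas.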

  16. ITEM LEVEL DIAGNOSTICS AND MODEL - DATA FIT IN ITEM ...

    African Journals Online (AJOL)

    Global Journal

    Item response theory (IRT) is a framework for modeling and analyzing item response ... data. Though, there is an argument that the evaluation of fit in IRT modeling has been ... National Council on Measurement in Education ... model data fit should be based on three types of ... prediction should be assessed through the.

  17. Statistical MOSFET Parameter Extraction with Parameter Selection for Minimal Point Measurement

    Directory of Open Access Journals (Sweden)

    Marga Alisjahbana

    2013-11-01

    Full Text Available A method is presented to statistically extract MOSFET model parameters from a minimal number of transistor I(V) characteristic curve measurements taken during fabrication process monitoring. It includes a sensitivity analysis of the model, test/measurement point selection, and a parameter extraction experiment on the process data. The actual extraction is based on a linear error model, the sensitivity of the MOSFET model with respect to the parameters, and Newton-Raphson iterations. Simulated results showed good accuracy of parameter extraction and I(V) curve fit for parameter deviations of up to 20% from nominal values, including for a process shift of 10% from nominal.
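
The extraction scheme, a linear error model built from model sensitivities and refined by Newton-Raphson (Gauss-Newton) iterations, can be sketched with a deliberately simple square-law transistor model. The model, bias points and parameter values are illustrative, not the compact model used in the paper:

```python
import numpy as np

def ids_sat(vgs, k, vth):
    """Simple square-law saturation current model (illustrative only)."""
    return k * np.maximum(vgs - vth, 0.0) ** 2

# Hypothetical monitor measurements at a few gate voltages.
vgs = np.array([1.0, 1.5, 2.0, 2.5])
i_meas = ids_sat(vgs, k=2.0e-4, vth=0.45)

p = np.array([1.0e-4, 0.6])  # initial guess [k, vth]
for _ in range(30):
    resid = i_meas - ids_sat(vgs, *p)
    # Numerical sensitivities (Jacobian) of the model w.r.t. each parameter.
    eps = np.abs(p) * 1e-6 + 1e-12
    jac = np.column_stack([
        (ids_sat(vgs, p[0] + eps[0], p[1]) - ids_sat(vgs, *p)) / eps[0],
        (ids_sat(vgs, p[0], p[1] + eps[1]) - ids_sat(vgs, *p)) / eps[1],
    ])
    # Linear error model: solve jac @ step ≈ resid, then update (Gauss-Newton).
    step, *_ = np.linalg.lstsq(jac, resid, rcond=None)
    p = p + step
print(p)
```

With only a handful of well-chosen measurement points the iteration recovers the parameters, which is the economy the point-selection step in the paper is after.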

  18. Parameter identification of ZnO surge arrester models based on genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bayadi, Abdelhafid [Laboratoire d' Automatique de Setif, Departement d' Electrotechnique, Faculte des Sciences de l' Ingenieur, Universite Ferhat ABBAS de Setif, Route de Bejaia Setif 19000 (Algeria)

    2008-07-15

    The correct and adequate modelling of ZnO surge arrester characteristics is very important for insulation coordination studies and system reliability. In this context, many researchers have devoted considerable effort to the development of surge arrester models that reproduce the dynamic characteristics observed when arresters are subjected to fast-front impulse currents. The difficulty with these models resides essentially in the calculation and adjustment of their parameters. This paper proposes a new technique based on a genetic algorithm to obtain the best possible set of parameter values for ZnO surge arrester models. The validity of the predicted parameters is then checked by comparing the predicted results with experimental results available in the literature. Using the ATP-EMTP package, an application of the arrester model to network system studies is presented and discussed. (author)
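
The genetic-algorithm parameter search can be sketched generically: a population of candidate parameter sets is scored against measurements, and selection, crossover and mutation iterate toward the best-fitting set. The power-law characteristic and data here are hypothetical stand-ins, not a validated arrester model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical residual-voltage characteristic: V = a * I**b (arrester-like).
i_data = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
v_data = 8.5 * i_data ** 0.03

def fitness(pop):
    """Sum of squared errors for each candidate (a, b) pair; lower is better."""
    pred = pop[:, 0:1] * i_data ** pop[:, 1:2]
    return np.sum((pred - v_data) ** 2, axis=1)

# Random initial population of 60 candidates within plausible bounds.
pop = np.column_stack([rng.uniform(1.0, 20.0, 60), rng.uniform(0.0, 0.2, 60)])
for _ in range(200):
    order = np.argsort(fitness(pop))
    parents = pop[order[:20]]                   # selection: keep the 20 fittest
    idx = rng.integers(0, 20, size=(55, 2))
    alpha = rng.random((55, 1))
    children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
    children += rng.normal(0.0, 1.0, children.shape) * [0.05, 0.005]  # mutation
    pop = np.vstack([parents[:5], children])    # elitism: carry over the 5 best
best = pop[np.argmin(fitness(pop))]
print(best)
```

In the paper's setting, the fitness function would instead compare simulated and measured residual-voltage waveforms for the chosen arrester model.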

  19. Mössbauer parameters of ordinary chondrites influenced by the fit accuracy of the troilite component: an example of Chelyabinsk LL5 meteorite

    Energy Technology Data Exchange (ETDEWEB)

    Maksimova, A. A. [Ural Federal University, Department of Physical Techniques and Devices for Quality Control, Institute of Physics and Technology (Russian Federation); Klencsár, Z. [Hungarian Academy of Sciences, Institute of Materials and Environmental Chemistry, Research Centre for Natural Sciences (Hungary); Oshtrakh, M. I., E-mail: oshtrakh@gmail.com; Petrova, E. V.; Grokhovsky, V. I. [Ural Federal University, Department of Physical Techniques and Devices for Quality Control, Institute of Physics and Technology (Russian Federation); Kuzmann, E.; Homonnay, Z. [Eötvös Loránd University, Institute of Chemistry (Hungary); Semionkin, V. A. [Ural Federal University, Department of Physical Techniques and Devices for Quality Control, Institute of Physics and Technology (Russian Federation)

    2016-12-15

    The influence of the fit accuracy of the troilite component in the Mössbauer spectra of ordinary chondrites on the parameters obtained for other spectral components was evaluated using the Mössbauer spectrum of a Chelyabinsk LL5 meteorite fragment with light lithology as a typical example. It was shown that, with respect to the usual sextet component in which quadrupole interaction is taken into account in the first-order perturbation limit, substantial improvement of the spectrum fit can be achieved either by using the full Hamiltonian description of the troilite component or by formally approximating it with a superposition of three symmetric doublet components. Parameter values obtained for the main spectral components, related to olivine and pyroxene, were not sensitive to the fit of the troilite component, while parameters of the minor spectral components depended on the way the troilite component was fitted.

  20. Genetic parameters for direct and maternal calving ease in Walloon dairy cattle based on linear and threshold models.

    Science.gov (United States)

    Vanderick, S; Troch, T; Gillon, A; Glorieux, G; Gengler, N

    2014-12-01

    Calving ease scores from Holstein dairy cattle in the Walloon Region of Belgium were analysed using univariate linear and threshold animal models. Variance components and derived genetic parameters were estimated from a data set including 33,155 calving records. Included in the models were season, herd and sex of calf × age of dam classes × group of calvings interaction as fixed effects, and herd × year of calving, maternal permanent environment, and animal direct and maternal additive genetic effects as random effects. Models were fitted with the genetic correlation between direct and maternal additive genetic effects either estimated or constrained to zero. Direct heritability for calving ease was approximately 8% with linear models and approximately 12% with threshold models; maternal heritabilities were approximately 2 and 4%, respectively. The genetic correlation between direct and maternal additive effects was found to be not significantly different from zero. Models were compared in terms of goodness of fit and predictive ability. Criteria of comparison, such as mean squared error and the correlation between observed and predicted calving ease scores as well as between estimated breeding values, were estimated from 85,118 calving records. The results showed few differences between linear and threshold models, even though correlations between estimated breeding values from subsets of data for sires with progeny were 17 and 23% greater for direct and maternal genetic effects, respectively, with the linear model than with the threshold model. For the purpose of genetic evaluation for calving ease in Walloon Holstein dairy cattle, the linear animal model without covariance between direct and maternal additive effects was found to be the best choice. © 2014 Blackwell Verlag GmbH.

  1. Parameter-free methods distinguish Wnt pathway models and guide design of experiments

    KAUST Repository

    MacLean, Adam L.

    2015-02-17

    The canonical Wnt signaling pathway, mediated by β-catenin, is crucially involved in development, adult stem cell tissue maintenance, and a host of diseases including cancer. We analyze existing mathematical models of Wnt and compare them to a new Wnt signaling model that targets spatial localization; our aim is to distinguish between the models and distill biological insight from them. Using Bayesian methods we infer parameters for each model from mammalian Wnt signaling data and find that all models can fit this time course. We appeal to algebraic methods (concepts from chemical reaction network theory and matroid theory) to analyze the models without recourse to specific parameter values. These approaches provide insight into aspects of Wnt regulation: the new model, via control of shuttling and degradation parameters, permits multiple stable steady states corresponding to stem-like vs. committed cell states in the differentiation hierarchy. Our analysis also identifies groups of variables that should be measured to fully characterize and discriminate between competing models, and thus serves as a guide for performing minimal experiments for model comparison.

  2. Quarkonium level fitting with two-power potentials

    International Nuclear Information System (INIS)

    Joshi, G.C.; Wignall, J.W.G.

    1981-01-01

    An attempt has been made to fit ψ and Υ energy levels and leptonic decay width ratios with a non-relativistic potential model using a potential of the form V(r) = Ar^p + Br^q + C. It is found that reasonable fits to states below the hadronic decay threshold can be obtained for values of the powers p and q anywhere along a family of curves in the (p,q) plane that smoothly joins the Martin potential (p = 0, q = 0.1) to the potential forms with p approximately -1 suggested by QCD; for the latter case the best fit is obtained with q approximately 0.4 - 0.5.

  3. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.

  4. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
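
Two of the fitting techniques surveyed above (maximum likelihood estimation and the method of moments), plus a goodness-of-fit check, can be sketched for a lognormal model on synthetic data; the distribution choice and sample are illustrative, not from the Yucca Mountain study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.lognormal(mean=1.0, sigma=0.5, size=500)   # synthetic input data

# (a) Maximum likelihood fit of a lognormal (location fixed at 0).
shape, loc, scale = stats.lognorm.fit(data, floc=0)
mu_mle, sigma_mle = np.log(scale), shape

# (b) Method of moments on the log-transformed data.
logs = np.log(data)
mu_mom, sigma_mom = logs.mean(), logs.std(ddof=1)

# (c) Goodness of fit: Kolmogorov-Smirnov test against the fitted model.
ks = stats.kstest(data, stats.lognorm.cdf, args=(shape, loc, scale))
print(mu_mle, sigma_mle, ks.pvalue)
```

For the lognormal with fixed location the two estimators nearly coincide; for skewed or censored data sets of the kind discussed in the report, the methods can diverge and the goodness-of-fit step becomes the deciding check.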

  5. Correcting the bias of empirical frequency parameter estimators in codon models.

    Directory of Open Access Journals (Sweden)

    Sergei Kosakovsky Pond

    2010-07-01

    Full Text Available Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators is biased, and that this bias has an adverse effect on goodness of fit and estimates of substitution rates. We propose a "corrected" empirical estimator that begins with observed nucleotide counts but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the standard approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at a greater computational cost. Our results demonstrate that there is little justification, either statistical or computational, for continued use of the standard empirical estimators.
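
The issue can be illustrated schematically: empirical codon frequencies are often built as products of position-specific nucleotide frequencies, and the correction amounts to acknowledging that stop codons never occur, so the product must be renormalized over the 61 sense codons. This is a hedged sketch of the idea, not the paper's exact estimator, and the frequency table is hypothetical:

```python
from itertools import product

STOPS = {"TAA", "TAG", "TGA"}
NUCS = "ACGT"

def codon_freqs(pos_freqs, exclude_stops=True):
    """Codon frequencies as products of position-specific nucleotide
    frequencies, optionally renormalized over sense codons only."""
    freqs = {}
    for codon in ("".join(c) for c in product(NUCS, repeat=3)):
        f = 1.0
        for pos, nuc in enumerate(codon):
            f *= pos_freqs[pos][NUCS.index(nuc)]
        freqs[codon] = f
    if exclude_stops:
        for stop in STOPS:
            del freqs[stop]                       # stop codons cannot occur
        total = sum(freqs.values())
        freqs = {c: f / total for c, f in freqs.items()}
    return freqs

# Hypothetical position-specific nucleotide frequencies (rows: codon positions 1-3).
pos_freqs = [
    [0.30, 0.20, 0.30, 0.20],
    [0.25, 0.25, 0.25, 0.25],
    [0.20, 0.30, 0.30, 0.20],
]
sense = codon_freqs(pos_freqs)
print(len(sense), sum(sense.values()))
```

Without the renormalization step, probability mass assigned to stop codons is silently lost, which is one source of the bias the note describes.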

  6. Probabilistic model fitting for spatio-temporal variability studies of precipitation: the Sara-Brut system - a case study

    International Nuclear Information System (INIS)

    Dorado Delgado, Jennifer; Burbano Criollo, Juan Carlos; Molina Tabares, Jose Manuel; Carvajal Escobar, Yesid; Aristizabal, Hector Fabio

    2006-01-01

    In this study, the spatial and temporal variability of monthly and annual rainfall was analyzed for the downstream influence zone of a Colombian supply-regulation reservoir, Sara-Brut, located in the Valle del Cauca department. Monthly precipitation data from 18 gauge stations and a 29-year record (1975-2003) were used. These data were processed by means of time series completion, consistency analyses and sample statistics computations. Theoretical probabilistic distribution models such as Gumbel, normal, lognormal and Wakeby, and empirical distributions such as Weibull and Landwehr, were applied to fit the historical precipitation data set. The fit standard error (FSE) was used to test the goodness of fit of the theoretical distribution models and to choose the best probabilistic function. The Wakeby approach showed the best goodness of fit at 89% of the gauges considered. Temporal variability was analyzed by means of Wakeby-estimated values of monthly and annual precipitation associated with return periods of 1.052, 1.25, 2, 10, 20 and 50 years. Precipitation spatial variability is presented by means of ArcGIS v8.3, using kriging as the interpolation method. In general terms, the results obtained from this study show significant variability in the distribution of precipitation over the whole area; in particular, the formation of dry and humid nuclei over the northeastern strip and microclimates in the southwestern and central zones of the study area were observed, depending on the season of the year. The mentioned distribution pattern is likely caused by the influence of Pacific wind streams coming over the western range of the Andes. It is expected that the results of this work will be helpful for future planning and hydrologic project design.

  7. Seasonal and spatial variation in broadleaf forest model parameters

    Science.gov (United States)

    Groenendijk, M.; van der Molen, M. K.; Dolman, A. J.

    2009-04-01

    Process-based, coupled ecosystem carbon, energy and water cycle models are used with the ultimate goal of projecting the effect of future climate change on the terrestrial carbon cycle. A typical dilemma in such exercises is how much detail the model must be given to describe the observations reasonably realistically while remaining general. We use a simple vegetation model (5PM) with five model parameters to study the variability of the parameters. These parameters are derived from the observed carbon and water fluxes in the FLUXNET database. For 15 broadleaf forests the model parameters were derived at different time resolutions. It appears that, in general for all forests, the correlation coefficient between observed and simulated carbon and water fluxes improves with higher parameter time resolution. The quality of the simulations is thus always better when a higher time resolution is used. These results show that annual parameters are not capable of properly describing weather effects on ecosystem fluxes, and that a two-day time resolution yields the best results. A first indication of the climate constraints can be found in the seasonal variation of the covariance between Jm, which describes the maximum electron transport for photosynthesis, and climate variables. A general seasonal pattern is that during winter the covariance with all climate variables is zero. Jm increases rapidly after initial spring warming, resulting in a large covariance with air temperature and global radiation. During summer Jm is less variable, but co-varies negatively with air temperature and vapour pressure deficit and positively with soil water content. A temperature response appears during spring and autumn for broadleaf forests. This shows that an annual model parameter cannot be representative for the entire year, and that relations with mean annual temperature are not possible. During summer the photosynthesis parameters are constrained by water availability, soil water content and

  8. Relationships of radiation track structure to biological effect: a re-interpretation of the parameters of the Katz model

    International Nuclear Information System (INIS)

    Goodhead, D.T.

    1989-01-01

    The Katz track-model of cell inactivation has been more successful than any other biophysical model in fitting and predicting inactivation of mammalian cells exposed to a wide variety of ionising radiations. Although the model was developed as a parameterised phenomenological description, without necessarily implying any particular mechanistic processes, the present analysis attempts to interpret it and thereby benefit further from its success to date. A literal interpretation of the parameters leads to contradictions with other experimental and theoretical information, especially since the fitted parameters imply very large (> ∼ 4 μm) subcellular sensitive sites which each require very large amounts (> ∼ 100 keV) of energy deposition in order to be inactivated. Comparisons of these fits with those for cell mutation suggest a re-interpretation in terms of (1) very much smaller sites and (2) a clearer distinction between the ion-kill and γ-kill modes of inactivation. It is suggested that this re-interpretation may be able to guide future development of the phenomenological Katz model and also parameterisation of mechanistic biophysical models. (author)

  9. A bivariate contaminated binormal model for robust fitting of proper ROC curves to a pair of correlated, possibly degenerate, ROC datasets.

    Science.gov (United States)

    Zhai, Xuetong; Chakraborty, Dev P

    2017-06-01

    The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets (possibly degenerate) with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in the CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit-variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit-variance peaks: one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition (contributing two peaks), and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates and the covariance matrix of the parameters, and other statistics
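    The mixture structure described in this abstract can be written down directly. The sketch below is not the authors' CORCBM code: it uses a single mixing fraction α for both reading conditions and the same correlation ρ for every peak, which are simplifications of mine, but it shows the univariate diseased-case pdf and its four-peak bivariate extension:

    ```python
    import numpy as np
    from scipy import stats

    def cbm_diseased_pdf(z, mu, alpha):
        """Univariate CBM diseased-case pdf: a unit-variance peak at mu
        (disease visible, weight alpha) plus a unit-variance peak at 0
        (disease not visible, weight 1 - alpha)."""
        return alpha * stats.norm.pdf(z, loc=mu) + (1.0 - alpha) * stats.norm.pdf(z)

    def cbm_bivariate_diseased_pdf(z1, z2, mu, alpha, rho):
        """Bivariate extension: four unit-variance normal peaks, one per
        visibility combination across the two conditions (equal alpha and
        rho for both conditions is a simplifying assumption here)."""
        peaks = [((0.0, 0.0), (1 - alpha) ** 2),   # visible in neither
                 ((mu, 0.0), alpha * (1 - alpha)), # visible in condition 1 only
                 ((0.0, mu), (1 - alpha) * alpha), # visible in condition 2 only
                 ((mu, mu), alpha ** 2)]           # visible in both
        cov = [[1.0, rho], [rho, 1.0]]
        return sum(w * stats.multivariate_normal.pdf([z1, z2], mean=m, cov=cov)
                   for m, w in peaks)
    ```

    The peak weights sum to one by construction, so each component pdf integrating to one guarantees the mixture does as well.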

  10. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    Science.gov (United States)

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
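    As an illustration of the model-comparison step, one of the candidate curves (Wood: y = a·t^b·exp(-c·t)) can be fitted to test-day data and scored with AIC. The data, starting values, and the least-squares form of AIC below are my own stand-ins; the study itself used PROC NLMIXED with random effects, which this sketch omits:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def wood(t, a, b, c):
        """Wood lactation curve: y = a * t**b * exp(-c * t)."""
        return a * t ** b * np.exp(-c * t)

    # Synthetic monthly test-day records (days in milk vs an FPR-like response)
    t = np.array([15, 45, 75, 105, 135, 165, 195, 225, 255, 285], dtype=float)
    y = wood(t, 1.4, -0.10, -0.0005) + 0.01 * np.sin(t / 40.0)

    params, _ = curve_fit(wood, t, y, p0=(1.0, 0.0, 0.0), maxfev=10000)
    rss = np.sum((y - wood(t, *params)) ** 2)
    n, k = len(t), 3
    aic = n * np.log(rss / n) + 2 * k   # least-squares form of AIC; lower is better
    print(params, aic)
    ```

    Repeating this for each candidate (Brody, Dhanoa, Sikka, ...) and ranking by AIC/BIC reproduces the comparison logic, if not the mixed-model machinery.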

  11. Whole Protein Native Fitness Potentials

    Science.gov (United States)

    Faraggi, Eshel; Kloczkowski, Andrzej

    2013-03-01

    Protein structure prediction can be separated into two tasks: sampling the configuration space of the protein chain, and assigning a fitness between these hypothetical models and the native structure of the protein. One of the more promising developments in this area is that of knowledge-based energy functions. However, standard approaches using pair-wise interactions have shown shortcomings, demonstrated by the superiority of multi-body potentials. These shortcomings arise because residue pair-wise interactions depend on other residues along the chain. We developed a method that uses whole-protein information, filtered through machine learners, to score protein models based on their likeness to native structures. For all models we calculated parameters associated with the distance to the solvent and with distances between residues. These parameters, in addition to energy estimates obtained using a four-body potential, DFIRE, and RWPlus, were used to train machine learners to predict the fitness of the models. Testing on CASP 9 targets showed that our method is superior to DFIRE, RWPlus, and the four-body potential, which are considered standards in the field.

  12. Oyster Creek cycle 10 nodal model parameter optimization study using PSMS

    International Nuclear Information System (INIS)

    Dougher, J.D.

    1987-01-01

    The power shape monitoring system (PSMS) is an on-line core monitoring system that uses a three-dimensional nodal code (NODE-B) to perform nodal power calculations and compute thermal margins. The PSMS contains a parameter optimization function that improves the ability of NODE-B to accurately monitor core power distributions. This function iterates on the model normalization parameters (albedos and mixing factors) to obtain the best agreement between predicted and measured traversing in-core probe (TIP) readings on a statepoint-by-statepoint basis. Following several statepoint optimization runs, an average set of optimized normalization parameters can be determined and implemented into the current or a subsequent cycle core model for on-line core monitoring. A statistical analysis of 19 high-power steady-state statepoints throughout Oyster Creek cycle 10 operation has shown consistently poor virgin model performance. The normalization parameters used in the cycle 10 NODE-B model were based on a cycle 8 study, which evaluated only Exxon fuel types. The introduction of General Electric (GE) fuel into cycle 10 (172 assemblies) was a significant fuel/core design change that could have altered the optimum set of normalization parameters. Based on the need to evaluate a potential change in the model normalization parameters for cycle 11, and in an attempt to account for the poor cycle 10 model performance, a parameter optimization study was performed

  13. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    Science.gov (United States)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby

    2013-12-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
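    The MCMC-Bayesian inversion idea can be sketched with a toy model standing in for CLM4. The linear "runoff" model, the uniform prior bounds, and the noise level below are all invented stand-ins; only the random-walk Metropolis mechanics carry over:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def runoff_model(k, precip):
        """Toy stand-in for a land-surface model: linear runoff coefficient."""
        return k * precip

    # Synthetic "observations" generated with a known parameter plus noise
    precip = rng.gamma(2.0, 5.0, size=100)
    k_true, sigma = 0.35, 0.5
    obs = runoff_model(k_true, precip) + rng.normal(0.0, sigma, size=100)

    def log_post(k):
        if not (0.0 < k < 1.0):            # uniform prior on (0, 1)
            return -np.inf
        resid = obs - runoff_model(k, precip)
        return -0.5 * np.sum(resid ** 2) / sigma ** 2

    # Random-walk Metropolis sampler
    k, lp, chain = 0.5, log_post(0.5), []
    for _ in range(5000):
        k_new = k + rng.normal(0.0, 0.02)
        lp_new = log_post(k_new)
        if np.log(rng.uniform()) < lp_new - lp:   # accept/reject step
            k, lp = k_new, lp_new
        chain.append(k)

    posterior = np.array(chain[1000:])    # discard burn-in
    print(posterior.mean(), posterior.std())
    ```

    The posterior mean recovers the true parameter, and the posterior spread narrows as more observations are added — the same behaviour the study reports for the calibrated CLM4 parameters.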

  14. Study of experimentally undetermined neutrino parameters in the light of baryogenesis considering type I and type II Seesaw models

    International Nuclear Information System (INIS)

    Kalita, Rupam

    2017-01-01

    We attempt to connect all the experimentally undetermined neutrino parameters, namely the lightest neutrino mass, the neutrino CP phases and the baryon asymmetry of the Universe, within the framework of a model where both type I and type II seesaw mechanisms can contribute to the tiny neutrino masses. In this work we study the effects of the Dirac and Majorana neutrino phases on the origin of the matter-antimatter asymmetry through the mechanism of leptogenesis. The type I seesaw mass matrix is taken to give tri-bimaximal (TBM) neutrino mixing, which by itself yields a vanishing reactor mixing angle. The type II seesaw mass matrix is then chosen such that the necessary deviation from TBM mixing and the best-fit values of the neutrino parameters are obtained when both type I and type II seesaw contributions are taken into account. We consider different relative contributions from the type I and type II seesaw mechanisms to study the effects of the neutrino CP phases on the baryon asymmetry of the universe, and use these various contributions to connect all of the experimentally undetermined neutrino parameters. (author)

  15. Derivation of cell population kinetic parameters from clinical statistical data (program RAD3)

    International Nuclear Information System (INIS)

    Cohen, L.

    1978-01-01

    Cellular lethality models generally require up to 6 parameters to simulate a clinical course of fractionated radiation therapy and to derive an estimate of the cellular surviving fraction for a given treatment scheme. These parameters are the mean cellular lethal dose, the extrapolation number, the ratio of sublethal to irreparable events, the regeneration rate, the repopulation limit (cell cycles), and a field-size or tumor-volume factor. A computer program (RAD3) was designed to derive best-fitting values for these parameters in relation to available clinical data based on the assumption that if a number of different fractionation schemes yield similar reactions, the cellular surviving fractions will be about equal in each instance. Parameters were derived for a variety of human tissues from which realistic iso-effect functions could be generated

  16. The GP tests of competence assessment: which part best predicts fitness to practise decisions?

    Science.gov (United States)

    Jayaweera, Hirosha Keshani; Potts, Henry W W; Keshwani, Karim; Valerio, Chris; Baker, Magdalen; Mehdizadeh, Leila; Sturrock, Alison

    2018-01-02

    The General Medical Council (GMC) conducts Tests of Competence (ToC) for doctors referred for Fitness to Practise (FtP) issues. GPs take a single-best-answer knowledge test, an Objective Structured Clinical Examination (OSCE), and a Simulated Surgery (SimSurg) assessment, which is a simulated GP consultation. The aim of this study was to examine the similarities between OSCEs and SimSurg to determine whether each assessment contributed something unique to GP ToCs. A mixed methods approach was used. Data were collated on 153 GPs who were required to undertake a ToC as part of being investigated for FtP issues between February 2010 and October 2016. Using correlation analysis, we examined to what degree performance on the knowledge test, OSCE, and SimSurg related to case examiner recommendations and FtP outcomes, including the unique predictive power of these three assessments. The outcome measures were case examiner recommendations ((i) not fit to practise; (ii) fit to practise on a limited basis; or (iii) fit to practise) as well as FtP outcomes ((i) erased/removed from the register; (ii) having restrictions/conditions; or (iii) in good standing). For the qualitative component, 45 GP assessors were asked to rate whether they assess the same competencies and which assessment provides better feedback about candidates. There was significant overlap between OSCEs and SimSurg, p < 0.001. SimSurg had additional predictive power in the presence of OSCEs and the knowledge test (p = 0.030) in distinguishing doctors from different FtP categories, while OSCEs did not (p = 0.080). Both the OSCEs (p = 0.004) and SimSurg (p < 0.001) had significant negative correlations with case examiner recommendations when accounting for the effects of the other two assessments. Inductive thematic analysis of the responses to the questionnaire showed that assessors perceived OSCEs to be better suited to target specific knowledge and skills. SimSurg was thought to produce a

  17. SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.

    Science.gov (United States)

    Zi, Zhike

    2011-04-01

    Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.

  18. An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.

    Directory of Open Access Journals (Sweden)

    Afnizanfaizal Abdullah

    Full Text Available The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.

  19. An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.

    Science.gov (United States)

    Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V

    2013-01-01

    The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.
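    A minimal sketch of the hybridization idea described in this abstract, using a simple sphere function in place of a biochemical model misfit. The parameter values and the way the DE/rand/1 step is interleaved with the firefly moves are my guesses, not the authors' exact scheme:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sphere(x):
        """Stand-in objective for the model-vs-data misfit."""
        return np.sum(x ** 2)

    def hybrid_firefly_de(f, dim=3, n=20, iters=150,
                          beta0=1.0, gamma=0.01, alpha=0.05, F=0.5):
        """Firefly search with a greedy DE/rand/1 mutation mixed in."""
        pop = rng.uniform(-5.0, 5.0, size=(n, dim))
        cost = np.array([f(x) for x in pop])
        for _ in range(iters):
            for i in range(n):
                for j in range(n):
                    if cost[j] < cost[i]:          # move i toward brighter firefly j
                        r2 = np.sum((pop[i] - pop[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        pop[i] += beta * (pop[j] - pop[i]) + alpha * rng.normal(size=dim)
                        cost[i] = f(pop[i])
                # DE-style trial vector, accepted only if it improves firefly i
                a, b, c = pop[rng.choice(n, 3, replace=False)]
                trial = a + F * (b - c)
                if f(trial) < cost[i]:
                    pop[i], cost[i] = trial, f(trial)
        best = np.argmin(cost)
        return pop[best], cost[best]

    best_x, best_f = hybrid_firefly_de(sphere)
    print(best_f)
    ```

    Note that the incumbent best firefly never moves in the attraction phase (no brighter neighbour exists) and only accepts improving DE trials, so the best cost is monotone non-increasing.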

  20. SU-E-T-399: Determination of the Radiobiological Parameters That Describe the Dose-Response Relations of Xerostomia and Disgeusia From Head and Neck Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Mavroidis, P; Stathakis, S; Papanikolaou, N [University of Texas Health Science Center, UTHSCSA, San Antonio, TX (United States); Peixoto Xavier, C [University of Coimbra, Coimbra, Coimbra (Portugal); Costa Ferreira, B [University of Aveiro, Coimbra, Coimbra (Portugal); Khouri, L; Carmo Lopes, M do [IPOCFG, EPE, Coimbra, Coimbra (Portugal)

    2014-06-01

    Purpose: To estimate the radiobiological parameters that describe the dose-response relations of xerostomia and disgeusia from head and neck cancer radiotherapy; to identify the organs that are best correlated with the manifestation of those clinical endpoints; and to evaluate the goodness-of-fit by comparing the model predictions against the actual clinical results. Methods: In this study, 349 head and neck cancer patients were included. For each patient the dose volume histograms (DVH) of the parotids (separate and combined), mandible, submandibular glands (separate and combined) and salivary glands were calculated. The follow-up of those patients was recorded at different times after the completion of the treatment (7 weeks, 3, 7, 12, 18 and 24 months). Acute and late xerostomia and acute disgeusia were the clinical endpoints examined. A maximum likelihood fitting was performed to calculate the best estimates of the parameters used by the relative seriality model. The statistical methods of the error distribution, the receiver operating characteristic (ROC) curve, Pearson's test and Akaike's information criterion were utilized to assess the goodness-of-fit and the agreement between the pattern of the radiobiological predictions and that of the clinical records. Results: The estimated values of the radiobiological parameters of the salivary glands are D50 = 25.2 Gy, γ = 0.52, s = 0.001. The statistical analysis confirmed the clinical validity of those parameters (area under the ROC curve = 0.65 and AIC = 38.3). Conclusion: The analysis proved that the treatment outcome pattern of the patient material can be reproduced by the relative seriality model and the estimated radiobiological parameters. Salivary glands were found to have strong volume dependence (low relative seriality). Diminishing the biologically effective uniform dose to the salivary glands below 30 Gy may significantly reduce the risk of complications to the patients irradiated for

  1. SU-E-T-399: Determination of the Radiobiological Parameters That Describe the Dose-Response Relations of Xerostomia and Disgeusia From Head and Neck Radiotherapy

    International Nuclear Information System (INIS)

    Mavroidis, P; Stathakis, S; Papanikolaou, N; Peixoto Xavier, C; Costa Ferreira, B; Khouri, L; Carmo Lopes, M do

    2014-01-01

    Purpose: To estimate the radiobiological parameters that describe the dose-response relations of xerostomia and disgeusia from head and neck cancer radiotherapy; to identify the organs that are best correlated with the manifestation of those clinical endpoints; and to evaluate the goodness-of-fit by comparing the model predictions against the actual clinical results. Methods: In this study, 349 head and neck cancer patients were included. For each patient the dose volume histograms (DVH) of the parotids (separate and combined), mandible, submandibular glands (separate and combined) and salivary glands were calculated. The follow-up of those patients was recorded at different times after the completion of the treatment (7 weeks, 3, 7, 12, 18 and 24 months). Acute and late xerostomia and acute disgeusia were the clinical endpoints examined. A maximum likelihood fitting was performed to calculate the best estimates of the parameters used by the relative seriality model. The statistical methods of the error distribution, the receiver operating characteristic (ROC) curve, Pearson's test and Akaike's information criterion were utilized to assess the goodness-of-fit and the agreement between the pattern of the radiobiological predictions and that of the clinical records. Results: The estimated values of the radiobiological parameters of the salivary glands are D50 = 25.2 Gy, γ = 0.52, s = 0.001. The statistical analysis confirmed the clinical validity of those parameters (area under the ROC curve = 0.65 and AIC = 38.3). Conclusion: The analysis proved that the treatment outcome pattern of the patient material can be reproduced by the relative seriality model and the estimated radiobiological parameters. Salivary glands were found to have strong volume dependence (low relative seriality). Diminishing the biologically effective uniform dose to the salivary glands below 30 Gy may significantly reduce the risk of complications to the patients irradiated for

  2. The Application of Best Estimate and Uncertainty Analysis Methodology to Large LOCA Power Pulse in a CANDU 6 Reactor

    International Nuclear Information System (INIS)

    Abdul-Razzak, A.; Zhang, J.; Sills, H.E.; Flatt, L.; Jenkins, D.; Wallace, D.J.; Popov, N.

    2002-01-01

    The paper briefly describes a best estimate plus uncertainty analysis (BE+UA) methodology and presents its prototyping application to the power pulse phase of a limiting large Loss-of-Coolant Accident (LOCA) for a CANDU 6 reactor fuelled with CANFLEX fuel. The methodology is consistent with and builds on world practice. The analysis is divided into two phases to focus on the dominant parameters of each phase and to allow for the consideration of all identified highly ranked parameters in the statistical analysis and response surface fits for margin parameters. The objective of this analysis is to quantify improvements in predicted safety margins under best estimate conditions. (authors)

  3. An R package for fitting age, period and cohort models

    Directory of Open Access Journals (Sweden)

    Adriano Decarli

    2014-11-01

    Full Text Available In this paper we present the R implementation of a GLIM macro which fits age-period-cohort models following Osmond and Gardner. In addition to the estimates of the corresponding model, owing to the programming capability of R as an object-oriented language, methods for printing, plotting and summarizing the results are provided. Furthermore, the researcher has full access to the output of the main function (apc), which returns all the models fitted within the function. It is thus possible to critically evaluate the goodness of fit of the resulting model.

  4. An improved hybrid of particle swarm optimization and the gravitational search algorithm to produce a kinetic parameter estimation of aspartate biochemical pathways.

    Science.gov (United States)

    Ismail, Ahmad Muhaimin; Mohamad, Mohd Saberi; Abdul Majid, Hairudin; Abas, Khairul Hamimah; Deris, Safaai; Zaki, Nazar; Mohd Hashim, Siti Zaiton; Ibrahim, Zuwairie; Remli, Muhammad Akmal

    2017-12-01

    Mathematical modelling is fundamental to understand the dynamic behavior and regulation of the biochemical metabolisms and pathways that are found in biological systems. Pathways are used to describe complex processes that involve many parameters. It is important to have an accurate and complete set of parameters that describe the characteristics of a given model. However, measuring these parameters is typically difficult and even impossible in some cases. Furthermore, the experimental data are often incomplete and also suffer from experimental noise. These shortcomings make it challenging to identify the best-fit parameters that can represent the actual biological processes involved in biological systems. Computational approaches are required to estimate these parameters. The estimation is converted into multimodal optimization problems that require a global optimization algorithm that can avoid local solutions. These local solutions can lead to a bad fit when calibrating with a model. Although the model itself can potentially match a set of experimental data, a high-performance estimation algorithm is required to improve the quality of the solutions. This paper describes an improved hybrid of particle swarm optimization and the gravitational search algorithm (IPSOGA) to improve the efficiency of a global optimum (the best set of kinetic parameter values) search. The findings suggest that the proposed algorithm is capable of narrowing down the search space by exploiting the feasible solution areas. Hence, the proposed algorithm is able to achieve a near-optimal set of parameters at a fast convergence speed. The proposed algorithm was tested and evaluated based on two aspartate pathways that were obtained from the BioModels Database. The results show that the proposed algorithm outperformed other standard optimization algorithms in terms of accuracy and near-optimal kinetic parameter estimation. Nevertheless, the proposed algorithm is only expected to work well in

  5. Degeneracy of time series models: The best model is not always the correct model

    International Nuclear Information System (INIS)

    Judd, Kevin; Nakamura, Tomomichi

    2006-01-01

    There are a number of good techniques for finding, in some sense, the best model of a deterministic system given a time series of observations. We examine a problem called model degeneracy, which has the consequence that even when a perfect model of a system exists, one does not find it using the best techniques currently available. The problem is illustrated using global polynomial models and the theory of Groebner bases

  6. Searching for the best modeling specification for assessing the effects of temperature and humidity on health: a time series analysis in three European cities.

    Science.gov (United States)

    Rodopoulou, Sophia; Samoli, Evangelia; Analitis, Antonis; Atkinson, Richard W; de'Donato, Francesca K; Katsouyanni, Klea

    2015-11-01

    Epidemiological time series studies suggest daily temperature and humidity are associated with adverse health effects including increased mortality and hospital admissions. However, there is no consensus over which metric or lag best describes the relationships. We investigated which temperature and humidity model specification most adequately predicted mortality in three large European cities. Daily counts of all-cause mortality, minimum, maximum and mean temperature, relative humidity, and apparent temperature (a composite measure of ambient and dew point temperature) were assembled for Athens, London, and Rome for 6 years between 1999 and 2005. City-specific Poisson regression models were fitted separately for warm (April-September) and cold (October-March) periods, adjusting for seasonality, air pollution, and public holidays. We investigated goodness of model fit for each metric for delayed effects up to 13 days using three model fit criteria: sum of the partial autocorrelation function, AIC, and GCV. No uniformly best index across all cities and seasonal periods was observed. The effects of temperature were uniformly shown to be more prolonged during cold periods, and the majority of models suggested that separate temperature and humidity variables performed better than apparent temperature in predicting mortality. Our study suggests that the nature of the effects of temperature and humidity on mortality varies between cities for unknown reasons which require further investigation but may relate to city-specific population, socioeconomic, and environmental characteristics. This may have consequences for epidemiological studies and local temperature-related warning systems.

  7. Blood parameters in draught oxen during work: relationship to physical fitness.

    Science.gov (United States)

    Zanzinger, J; Becker, K

    1992-08-01

    1. Four Zebu and four Simmental oxen were submitted to moderate and exhaustive work. Venous blood samples were taken before, immediately after and 30 min after work and assayed for several blood parameters. 2. Draught work led to a decrease in carbon dioxide (pvCO2) and increases in pH, oxygen (pvO2), triglycerides, free fatty acids (FFA) and lactate. 3. Zebu oxen had higher pvCO2 and FFA and lower pH, pvO2 and lactate in response to exercise. 4. Ratios of individual draught power output and values of pvO2 and lactate after work enable the identification of fit and/or weak individuals.

  8. Demonstrations in Solute Transport Using Dyes: Part II. Modeling.

    Science.gov (United States)

    Butters, Greg; Bandaranayake, Wije

    1993-01-01

    A solution of the convection-dispersion equation is used to describe the solute breakthrough curves generated in the demonstrations in the companion paper. Estimation of the best fit model parameters (solute velocity, dispersion, and retardation) is illustrated using the method of moments for an example data set. (Author/MDH)
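    The method of moments referred to here estimates solute velocity and dispersion from the temporal moments of a breakthrough curve. A minimal sketch under assumed conditions (a Dirac pulse input and the one-dimensional infinite-domain solution of the convection-dispersion equation; all numbers are illustrative, not the demonstration data):

```python
import numpy as np

def integ(f, x):
    """Trapezoidal integration (kept explicit for portability across NumPy versions)."""
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

L, v_true, D_true = 100.0, 2.0, 1.0        # column length, velocity, dispersion
t = np.linspace(0.5, 150.0, 3000)

# Analytic breakthrough curve at x = L for a Dirac pulse (infinite domain)
c = np.exp(-(L - v_true * t) ** 2 / (4 * D_true * t)) / np.sqrt(4 * np.pi * D_true * t)

# Temporal moments of the observed curve
m0 = integ(c, t)
t_mean = integ(t * c, t) / m0
t_var = integ((t - t_mean) ** 2 * c, t) / m0

v_est = L / t_mean                          # velocity from the first moment
D_est = t_var * v_est ** 3 / (2 * L)        # dispersion from the central second moment
```

    These moment formulas neglect terms of order 1/Pe, so they work best at high Peclet number; retardation would enter as a factor scaling the mean arrival time.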

  9. Microscopic calculation of parameters of the sdg interacting boson model for 104-110Pd isotopes

    International Nuclear Information System (INIS)

    Liu Yong

    1995-01-01

    The parameters of the sdg interacting boson model Hamiltonian are calculated for the 104-110 Pd isotopes. The calculations utilize the microscopic procedure based on the Dyson boson mapping proposed by Yang-Liu-Qi and extended to include the g boson effects. The calculated parameters reproduce those values from the phenomenological fits. The resulting spectra are compared with the experimental spectra

  10. Global parameter optimization of a Mather-type plasma focus in the framework of the Gratton–Vargas two-dimensional snowplow model

    International Nuclear Information System (INIS)

    Auluck, S K H

    2014-01-01

    Dense plasma focus (DPF) is known to produce highly energetic ions, electrons and plasma environment which can be used for breeding short-lived isotopes, plasma nanotechnology and other material processing applications. Commercial utilization of DPF in such areas would need a design tool that can be deployed in an automatic search for the best possible device configuration for a given application. The recently revisited (Auluck 2013 Phys. Plasmas 20 112501) Gratton–Vargas (GV) two-dimensional analytical snowplow model of plasma focus provides a numerical formula for dynamic inductance of a Mather-type plasma focus fitted to thousands of automated computations, which enables the construction of such a design tool. This inductance formula is utilized in the present work to explore global optimization, based on first-principles optimality criteria, in a four-dimensional parameter-subspace of the zero-resistance GV model. The optimization process is shown to reproduce the empirically observed constancy of the drive parameter over eight decades in capacitor bank energy. The optimized geometry of plasma focus normalized to the anode radius is shown to be independent of voltage, while the optimized anode radius is shown to be related to capacitor bank inductance. (paper)

  11. Influence of delayed neutron parameter calculation accuracy on results of modeled WWER scram experiments

    International Nuclear Information System (INIS)

    Artemov, V.G.; Gusev, V.I.; Zinatullin, R.E.; Karpov, A.S.

    2007-01-01

    Using modeled WWER scram rod drop experiments performed at the Rostov NPP as an example, the influence of delayed neutron parameters on the modeling results was investigated. The delayed neutron parameter values were taken from both domestic and foreign nuclear databases. Numerical modeling was carried out on the basis of the SAPFIR_95&WWER program package. Parameters of delayed neutrons were acquired from the ENDF/B-VI and BNAB-78 validated data files. It was demonstrated that using delayed-fraction data from different databases in reactivity meters led to significantly different reactivity results. Based on the results of the numerically modeled experiments, the delayed neutron parameters providing the best agreement between calculated and measured data were selected and recommended for use in reactor calculations (Authors)

  12. A dynamic marketing model with best reply and inertia

    International Nuclear Information System (INIS)

    Bischi, Gian Italo; Cerboni Baiardi, Lorenzo

    2015-01-01

    In this paper we consider a nonlinear discrete-time dynamic model proposed by Farris et al. (2005) as a market share attraction model with two firms that decide marketing efforts over time according to best reply strategies with naïve expectations. The model also considers an adaptive adjustment toward best reply, a form of inertia or anchoring attitude, and we investigate the effects of heterogeneities among firms. A rich scenario of local and global bifurcations is obtained even with just two competing firms, and a comparison is proposed with apparently similar duopoly models based on repeated best reply dynamics with naïve expectations and adaptive adjustment.
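    The adjustment scheme described — each firm moving only part of the way toward its current best reply — can be illustrated with a toy attraction game. The linear attractions, unit cost and market budget below are hypothetical stand-ins, not the specification of Farris et al. (2005) or the paper's map:

```python
import numpy as np

B, c = 1.0, 1.0                    # hypothetical market budget and unit effort cost
beta = (1.0, 1.2)                  # hypothetical attraction coefficients

def best_reply(e_rival, b_own, b_riv):
    """Maximize B*b_own*e/(b_own*e + A) - c*e over e >= 0, where A = b_riv*e_rival."""
    A = b_riv * e_rival
    return max((np.sqrt(B * b_own * A / c) - A) / b_own, 0.0)

def step(e, lam=(0.5, 0.5)):
    """Adaptive adjustment: inertia-weighted move toward the naive best reply."""
    e1 = (1 - lam[0]) * e[0] + lam[0] * best_reply(e[1], beta[0], beta[1])
    e2 = (1 - lam[1]) * e[1] + lam[1] * best_reply(e[0], beta[1], beta[0])
    return (e1, e2)

e = (0.1, 0.1)
for _ in range(200):
    e = step(e)
# For this parameterization the map settles on the interior Nash equilibrium,
# e1 = e2 = B*beta1*beta2 / (c*(beta1 + beta2)**2)
```

    With these mild parameters the dynamics simply converge; the bifurcation scenarios studied in the paper arise for other parameter ranges, in particular when the firms' inertia coefficients differ.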

  13. VizieR Online Data Catalog: GRB prompt emission fitted with the DREAM model (Ahlgren+, 2015)

    Science.gov (United States)

    Ahlgren, B.; Larsson, J.; Nymark, T.; Ryde, F.; Pe'Er, A.

    2018-01-01

    We illustrate the application of the DREAM model by fitting it to two different, bright Fermi GRBs; GRB 090618 and GRB 100724B. While GRB 090618 is well fitted by a Band function, GRB 100724B was the first example of a burst with a significant additional BB component (Guiriec et al. 2011ApJ...727L..33G). GRB 090618 is analysed using Gamma-ray Burst Monitor (GBM) data (Meegan et al. 2009ApJ...702..791M) from the NaI and BGO detectors. For GRB 100724B, we used GBM data from the NaI and BGO detectors as well as Large Area Telescope Low Energy (LAT-LLE) data. For both bursts we selected NaI detectors seeing the GRB at an off-axis angle lower than 60° and the BGO detector as being the best aligned of the two BGO detectors. The spectra were fitted in the energy ranges 8-1000 keV (NaI), 200-40000 keV (BGO) and 30-1000 MeV (LAT-LLE). (2 data files).

  14. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    2006-01-01

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal...... distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution...

  15. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal...... distribution of the simulated random effects coincides with the assumed random effects distribution. In practice the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution function...

  16. Estimation of genetic parameters for growth traits in a breeding program for rainbow trout (Oncorhynchus mykiss) in China.

    Science.gov (United States)

    Hu, G; Gu, W; Bai, Q L; Wang, B Q

    2013-04-26

    Genetic parameters and breeding values for growth traits were estimated in the first and, currently, the only family selective breeding program for rainbow trout (Oncorhynchus mykiss) in China. Genetic and phenotypic data were collected for growth traits from 75 full-sibling families with a 2-generation pedigree. Genetic parameters and breeding values for growth traits of rainbow trout were estimated using the derivative-free restricted maximum likelihood method. The goodness-of-fit of the models was tested using Akaike and Bayesian information criteria. Genetic parameters and breeding values were estimated using the best-fit model for each trait. The values for heritability estimating body weight and length ranged from 0.20 to 0.45 and from 0.27 to 0.60, respectively, and the heritability of condition factor was 0.34. Our results showed a moderate degree of heritability for growth traits in this breeding program and suggested that the genetic and phenotypic tendency of body length, body weight, and condition factor were similar. Therefore, the selection of phenotypic values based on pedigree information was also suitable in this research population.

  17. Fitter. The package for fitting a chosen theoretical multi-parameter function through a set of data points. Application to experimental data of the YuMO spectrometer. Version 2.1.0. Long write-up and user's guide

    International Nuclear Information System (INIS)

    Solov'ev, A.G.; Stadnik, A.V.; Islamov, A.N.; Kuklin, A.I.

    2008-01-01

    Fitter is a C++ program aimed at fitting a chosen theoretical multi-parameter function to a set of data points. The fitting method is chi-square minimization; a robust fitting method can also be applied. Fitter was designed for small-angle neutron scattering data analysis, and the respective theoretical models are implemented in it. Some commonly used models (Gaussians and polynomials) are also implemented for wider applicability
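    Chi-square minimization of the kind Fitter performs has a closed form when the model is linear in its parameters. A small sketch (not Fitter's code; data and errors are simulated) for a weighted polynomial fit:

```python
import numpy as np

def chi2_polyfit(x, y, sigma, deg):
    """Minimize chi^2 = sum(((y - p(x)) / sigma)**2) over polynomial coefficients."""
    X = np.vander(x, deg + 1)                 # columns: x**deg ... x**0
    w = 1.0 / sigma ** 2
    coef = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    chi2 = float(np.sum(w * (y - X @ coef) ** 2))
    return coef, chi2

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)
sigma = np.full_like(x, 0.5)                  # per-point measurement errors
y = 2.0 * x + 1.0 + rng.normal(0.0, sigma)    # true model: y = 2x + 1
coef, chi2 = chi2_polyfit(x, y, sigma, deg=1)
ndf = x.size - 2                              # degrees of freedom
```

    A reduced chi-square near 1 signals a statistically acceptable fit; models that are nonlinear in their parameters — the usual case for small-angle scattering form factors — require an iterative minimizer instead of this closed form.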

  18. A simplified model of choice behavior under uncertainty

    Directory of Open Access Journals (Sweden)

    Ching-Hung Lin

    2016-08-01

    Full Text Available The Iowa Gambling Task (IGT) has been standardized as a clinical assessment tool (Bechara, 2007). Nonetheless, numerous research groups have attempted to modify IGT models to optimize parameters for predicting the choice behavior of normal controls and patients. A decade ago, most researchers considered the expected utility (EU) model (Busemeyer and Stout, 2002) to be the optimal model for predicting choice behavior under uncertainty. However, in recent years, studies have demonstrated the prospect utility (PU) models (Ahn et al., 2008) to be more effective than the EU models in the IGT. Nevertheless, after some preliminary tests, we propose that the Ahn et al. (2008) PU model is not optimal, owing to some incompatible results between our behavioral and modeling data. This study aims to modify the Ahn et al. (2008) PU model into a simplified model, and collected the IGT performance of 145 subjects as benchmark data for comparison. In our simplified PU model, the best goodness-of-fit was found mostly as α approached zero. More specifically, we retested the key parameters α, λ, and A in the PU model. Notably, the parameters α, λ, and A have a hierarchical order of influence on the goodness-of-fit of the PU model. Additionally, we found that the parameters λ and A may be ineffective when the parameter α is close to zero in the PU model. The present simplified model demonstrated that decision makers mostly adopted a gain-stay/loss-shift strategy rather than foreseeing long-term outcomes. However, other behavioral variables remain that are not well revealed under these dynamic-uncertainty situations. Therefore, the optimal behavioral model may not yet have been found. In short, the best model for predicting choice behavior under dynamic-uncertainty situations remains to be established.
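    For concreteness, here is a generic prospect-utility learner of the kind discussed: payoffs are transformed by u(x) = x^α for gains and -λ|x|^α for losses, deck expectancies are updated with learning rate A, and choices are made by softmax. The deck payoffs, the softmax temperature θ, and all parameter values are illustrative assumptions, not the task schedule or the model of Ahn et al. (2008). Note how a small α compresses outcome magnitudes toward 1, leaving mostly sign (gain/loss) information — the gain-stay/loss-shift intuition from the abstract.

```python
import numpy as np

def utility(x, alpha, lam):
    """Prospect utility: x**alpha for gains, -lam*|x|**alpha for losses."""
    u = np.abs(x) ** alpha
    return u if x >= 0 else -lam * u

def run_agent(alpha=0.1, lam=1.5, A=0.3, theta=2.0, n_trials=200, seed=1):
    rng = np.random.default_rng(seed)
    # Hypothetical IGT-like decks: (gain, loss probability, loss amount)
    decks = [(100, 0.5, 250), (100, 0.1, 1250), (50, 0.5, 50), (50, 0.1, 250)]
    Ev = np.zeros(4)                           # deck expectancies
    choices = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        logits = theta * Ev
        p = np.exp(logits - logits.max())      # softmax choice rule
        p /= p.sum()
        d = rng.choice(4, p=p)
        gain, p_loss, loss = decks[d]
        x = gain - (loss if rng.random() < p_loss else 0)
        Ev[d] += A * (utility(x, alpha, lam) - Ev[d])   # delta-rule update
        choices[i] = d
    return choices

choices = run_agent()
```

    Fitting such a model to observed choices means maximizing the likelihood of each subject's trial-by-trial picks over (α, λ, A), which is how goodness-of-fit comparisons like those in the abstract are made.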

  19. Performance Analysis of Different NeQuick Ionospheric Model Parameters

    Directory of Open Access Journals (Sweden)

    WANG Ningbo

    2017-04-01

    Full Text Available Galileo adopts the NeQuick model for single-frequency ionospheric delay corrections. For the standard operation of Galileo, the NeQuick model is driven by the effective ionization level parameter Az instead of the solar activity level index, and the three broadcast ionospheric coefficients are determined by a second-order polynomial fitted to the Az values estimated from globally distributed Galileo Sensor Stations (GSS). In this study, the processing strategies for the estimation of the NeQuick ionospheric coefficients are discussed and the characteristics of the NeQuick coefficients are analyzed. The accuracy of the Global Positioning System (GPS) broadcast Klobuchar, original NeQuick2 and fitted NeQuickC as well as Galileo broadcast NeQuickG models is evaluated over continental and oceanic regions, respectively, in comparison with the ionospheric total electron content (TEC) provided by global ionospheric maps (GIM), GPS test stations and the JASON-2 altimeter. The results show that NeQuickG can mitigate the ionospheric delay by 54.2%~65.8% on a global scale, and NeQuickC can correct for 71.1%~74.2% of the ionospheric delay. NeQuick2 performs at the same level as NeQuickG, which is slightly better than the GPS broadcast Klobuchar model.

  20. CHARACTERIZING THE FORMATION HISTORY OF MILKY WAY LIKE STELLAR HALOS WITH MODEL EMULATORS

    International Nuclear Information System (INIS)

    Gómez, Facundo A.; O'Shea, Brian W.; Coleman-Smith, Christopher E.; Tumlinson, Jason; Wolpert, Robert L.

    2012-01-01

    We use the semi-analytic model ChemTreeN, coupled to cosmological N-body simulations, to explore how different galaxy formation histories can affect observational properties of Milky Way-like galaxies' stellar halos and their satellite populations. Gaussian processes are used to generate model emulators that allow one to statistically estimate a desired set of model outputs at any location of a p-dimensional input parameter space. This enables one to explore the full input parameter space orders of magnitude faster than could be done otherwise. Using mock observational data sets generated by ChemTreeN itself, we show that it is possible to successfully recover the input parameter vectors used to generate the mock observables if the merger history of the host halo is known. However, our results indicate that for a given observational data set, the determination of 'best-fit' parameters is highly susceptible to the particular merger history of the host. Very different halo merger histories can reproduce the same observational data set, if the 'best-fit' parameters are allowed to vary from history to history. Thus, attempts to characterize the formation history of the Milky Way using these kinds of techniques must be performed statistically, analyzing large samples of high-resolution N-body simulations.

  1. Application of the continuously-yielding joint model for studying disposal of high-level nuclear waste in crystalline rock

    International Nuclear Information System (INIS)

    Hakala, M.; Johansson, E.; Simonen, A.

    1993-04-01

    The non-linear Continuously-Yielding (CY) joint model and its use in numerical analyses of a nuclear waste repository are studied in the report. One major advantage of using the CY model is that laboratory test results, if available, can be used directly in the analyses, thus reducing uncertainties in the joint input parameters. The new MTS-815 testing machine of Helsinki University of Technology was used to determine the behaviour of some granitic joints from depths of 400-600 m below the ground surface. The procedure for triaxial joint tests was refined during this work. Two programs, NormFit and SherFit, were developed and tested to determine the best-fit parameter values for the CY model from laboratory test results

  2. Does model fit decrease the uncertainty of the data in comparison with a general non-model least squares fit?

    International Nuclear Information System (INIS)

    Pronyaev, V.G.

    2003-01-01

    The information entropy is taken as a measure of knowledge about the object, and the reduced univariate variance as a common measure of uncertainty. Covariances in model versus non-model least-squares fits are discussed

  3. Fitting a three-parameter lognormal distribution with applications to hydrogeochemical data from the National Uranium Resource Evaluation Program

    International Nuclear Information System (INIS)

    Kane, V.E.

    1979-10-01

    The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters in a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered include the Shapiro-Wilk and Shapiro-Francia tests which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined and example data sets analyzed including geochemical data from the National Uranium Resource Evaluation Program

  4. Analysis of variation in calibration curves for Kodak XV radiographic film using model-based parameters.

    Science.gov (United States)

    Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L

    2010-08-05

    Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variation of model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1 σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes. It increases with increasing depths above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.

  5. Efficient occupancy model-fitting for extensive citizen-science data

    Science.gov (United States)

    Morgan, Byron J. T.; Freeman, Stephen N.; Ridout, Martin S.; Brereton, Tom M.; Fox, Richard; Powney, Gary D.; Roy, David B.

    2017-01-01

    Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium data base, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species’ range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen

  6. Evolving and energy dependent optical model description of heavy-ion elastic scattering

    International Nuclear Information System (INIS)

    Michaelian, K.

    1996-01-01

    We present the application of a genetic algorithm to the problem of determining an energy-dependent optical model description of heavy-ion elastic scattering. The problem requires a search for the globally best optical model potential and its energy dependence in a very rugged 12-dimensional parameter space with complex topographical features and many local minima. Random solutions are created in the first generation. The fitness of a solution is related to the χ² fit of the calculated differential cross sections to the experimental data. Best-fit solutions are evolved through crossover and mutation, following the biological example. This genetic algorithm approach, combined with local gradient minimization, is shown to provide a global, complete and extremely efficient search method, well adapted to complex fitness landscapes. These characteristics, combined with its ease of application, should make it the search method of choice for a wide variety of problems in nuclear physics. (Author)
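    The evolutionary loop described — rank a population by fitness, breed the best by crossover, perturb by mutation — can be sketched generically. The Rastrigin test function below stands in for the rugged χ² landscape (the actual objective in the paper is the χ² of optical-model cross sections); population size and rates are illustrative:

```python
import numpy as np

def rastrigin(x):
    """A standard rugged test function with many local minima; global minimum 0 at x = 0."""
    return 10 * x.shape[-1] + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def ga_minimize(f, dim=2, pop=60, gens=150, bounds=(-5.12, 5.12), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    P = rng.uniform(lo, hi, (pop, dim))            # random first generation
    for _ in range(gens):
        P = P[np.argsort(f(P))]                    # rank by fitness (lower is better)
        elite = P[: pop // 2]                      # survivors
        pa = elite[rng.integers(0, len(elite), pop // 2)]
        pb = elite[rng.integers(0, len(elite), pop // 2)]
        mask = rng.random((pop // 2, dim)) < 0.5   # uniform crossover
        kids = np.where(mask, pa, pb)
        kids += rng.normal(0.0, 0.25, kids.shape)  # Gaussian mutation
        P = np.vstack([elite, np.clip(kids, lo, hi)])
    fit = f(P)
    return P[np.argmin(fit)], float(fit.min())

best_x, best_f = ga_minimize(rastrigin)
```

    In the hybrid scheme the abstract describes, each promising individual would additionally be polished by a local gradient minimizer, combining global exploration with fast local convergence.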

  7. A proposed best practice model validation framework for banks

    Directory of Open Access Journals (Sweden)

    Pieter J. (Riaan) de Jongh

    2017-06-01

    Full Text Available Background: With the increasing use of complex quantitative models in applications throughout the financial world, model risk has become a major concern. The credit crisis of 2008–2009 provoked added concern about the use of models in finance. Measuring and managing model risk has subsequently come under scrutiny from regulators, supervisors, banks and other financial institutions. Regulatory guidance indicates that meticulous monitoring of all phases of model development and implementation is required to mitigate this risk. Considerable resources must be mobilised for this purpose. The exercise must embrace model development, assembly, implementation, validation and effective governance. Setting: Model validation practices are generally patchy, disparate and sometimes contradictory, and although the Basel Accord and some regulatory authorities have attempted to establish guiding principles, no definite set of global standards exists. Aim: Assessing the available literature for the best validation practices. Methods: This comprehensive literature study provided a background to the complexities of effective model management and focussed on model validation as a component of model risk management. Results: We propose a coherent ‘best practice’ framework for model validation. Scorecard tools are also presented to evaluate if the proposed best practice model validation framework has been adequately assembled and implemented. Conclusion: The proposed best practice model validation framework is designed to assist firms in the construction of an effective, robust and fully compliant model validation programme and comprises three principal elements: model validation governance, policy and process.

  8. Bayesian parameter estimation for stochastic models of biological cell migration

    Science.gov (United States)

    Dieterich, Peter; Preuss, Roland

    2013-08-01

    Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors or the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically stochastic models are applied where parameters are extracted by fitting models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure directly relying on the positions of the cell's trajectory and the covariance matrix of the positions. It is shown that the covariance is identical with the spatial aging correlation function for the supposed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data showing a reliable parameter estimation from single cell paths.
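    For Brownian motion with drift — one of the linear Gaussian models mentioned — drift and diffusion can be read directly off the increments of a single path, avoiding the pitfalls of fitting the mean square displacement. This is a simplified increment-based sketch with simulated data, not the authors' covariance-matrix procedure:

```python
import numpy as np

def simulate_track(n=2000, dt=1.0, drift=(0.05, 0.0), D=0.5, seed=3):
    """2-D Brownian motion with drift: x_{k+1} = x_k + v*dt + sqrt(2*D*dt)*xi."""
    rng = np.random.default_rng(seed)
    steps = np.asarray(drift) * dt + np.sqrt(2 * D * dt) * rng.standard_normal((n, 2))
    return np.vstack([np.zeros((1, 2)), np.cumsum(steps, axis=0)])

dt = 1.0
track = simulate_track(dt=dt)
incr = np.diff(track, axis=0)                        # independent Gaussian increments
v_hat = incr.mean(axis=0) / dt                       # drift from the mean increment
D_hat = incr.var(axis=0, ddof=1).mean() / (2 * dt)   # var per dimension = 2*D*dt
```

    Fitting the averaged MSD curve instead would regress on strongly correlated points, which is one of the disadvantages of the MSD approach that the abstract criticizes.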

  9. A Numerical Fit of Analytical to Simulated Density Profiles in Dark Matter Haloes

    Science.gov (United States)

    Caimmi, R.; Marmo, C.; Valentinuzzi, T.

    2005-06-01

    Analytical and geometrical properties of generalized power-law (GPL) density profiles are investigated in detail. In particular, a one-to-one correspondence is found between the mathematical parameters (a scaling radius, r_0, a scaling density, rho_0, and three exponents, alpha, beta, gamma) and the geometrical parameters (the coordinates of the intersection of the asymptotes, x_C, y_C, and three vertical intercepts, b, b_beta, b_gamma, related to the curve and the asymptotes, respectively): (r_0, rho_0, alpha, beta, gamma) <-> (x_C, y_C, b, b_beta, b_gamma). Then GPL density profiles are compared with simulated dark halo (SDH) density profiles, and nonlinear least-absolute-values and least-squares fits involving the above-mentioned five parameters (RFSM5 method) are prescribed. More specifically, the sum of absolute values or squares of the logarithmic residuals, R_i = log rhoSDH(r_i) - log rhoGPL(r_i), is evaluated on 10^5 points making a 5-dimensional hypergrid, through a few iterations. The size is progressively reduced around a fiducial minimum, and superpositions on nodes of earlier hypergrids are avoided. An application is made to a sample of 17 SDHs on the scale of clusters of galaxies, within a flat LambdaCDM cosmological model (Rasia et al. 2004). In dealing with the mean SDH density profile, a virial radius, rvir, averaged over the whole sample, is assigned, which allows the calculation of the remaining parameters. Using a RFSM5 method provides a better fit with respect to other methods. The geometrical parameters, averaged over the whole sample of best-fitting GPL density profiles, yield (alpha, beta, gamma) approx (0.6, 3.1, 1.0), to be compared with (alpha, beta, gamma) = (1, 3, 1), i.e. the NFW density profile (Navarro et al. 1995, 1996, 1997); (alpha, beta, gamma) = (1.5, 3, 1.5) (Moore et al. 1998, 1999); (alpha, beta, gamma) = (1, 2.5, 1) (Rasia et al. 2004); and, in addition, gamma approx 1.5 (Hiotelis 2003), deduced from the application of a RFSM5 method, but using a different

  10. The fit of cobalt-chromium three-unit fixed dental prostheses fabricated with four different techniques: a comparative in vitro study.

    Science.gov (United States)

    Örtorp, Anders; Jönsson, David; Mouhsen, Alaa; Vult von Steyern, Per

    2011-04-01

    This study sought to evaluate and compare the marginal and internal fit in vitro of three-unit FDPs in Co-Cr made using four fabrication techniques, and to determine in which area the largest misfit is present. An epoxy resin master model was produced. The impression was first made with silicone, and master and working models were then produced. A total of 32 three-unit Co-Cr FDPs were fabricated with four different production techniques: conventional lost-wax method (LW), milled wax with lost-wax method (MW), milled Co-Cr (MC), and direct laser metal sintering (DLMS). Each of the four groups consisted of eight FDPs (test groups). The FDPs were cemented on their casts and sectioned in a standardised manner. The cement film thickness of the marginal and internal gaps was measured in a stereomicroscope; digital photos were taken at 12× magnification and then analyzed using measurement software. Statistical analyses were performed with one-way ANOVA and Tukey's test. Based on the means (SDs) in μm over all measurement points, the best fit was found in the DLMS group, 84 (60), followed by MW, 117 (89), LW, 133 (89), and MC, 166 (135). Significant differences were present between MC and DLMS (p < 0.05). The regression analyses showed differences within the parameters production technique, tooth size, position and measurement point (p < 0.05). The best fit was found in the DLMS group, followed by MW, LW and MC. In all four groups, the best fit in both abutments was along the axial walls and in the deepest part of the chamfer preparation. The greatest misfit was present occlusally in all specimens. Copyright © 2010 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  11. A morphing technique for signal modelling in a multidimensional space of coupling parameters

    CERN Document Server

    The ATLAS collaboration

    2015-01-01

    This note describes a morphing method that produces signal models for fits to data in which both the affected event yields and kinematic distributions are simultaneously taken into account. The signal model is morphed in a continuous manner through the available multi-dimensional parameter space. Searches for deviations from Standard Model predictions for Higgs boson properties have so far used information either from event yields or kinematic distributions. The combined approach described here is expected to substantially enhance the sensitivity to beyond the Standard Model contributions.

  12. Track fitting and resolution with digital detectors

    International Nuclear Information System (INIS)

    Duerdoth, I.

    1982-01-01

    The analysis of data from detectors which give digitised measurements, such as MWPCs, is considered. These measurements are necessarily correlated and it is shown that the uncertainty in the combination of N measurements may fall faster than the canonical 1/√N. A new method of track fitting is described which exploits the digital aspects and which takes the correlations into account. It divides the parameter space into cells and the centroid of a cell is taken as the best estimate. The method is shown to have some advantages over the standard least-squares analysis. If the least-squares method is used for digital detectors the goodness-of-fit may not be a reliable estimate of the accuracy. The cell method is particularly suitable for implementation on microcomputers which lack floating point and divide facilities. (orig.)

  13. Comparison of adsorption equilibrium models for the study of Cl-, NO3- and SO4(2-) removal from aqueous solutions by an anion exchange resin.

    Science.gov (United States)

    Dron, Julien; Dodi, Alain

    2011-06-15

    The removal of chloride, nitrate and sulfate ions from aqueous solutions by a macroporous resin is studied through the ion exchange systems OH(-)/Cl(-), OH(-)/NO(3)(-), OH(-)/SO(4)(2-), and HCO(3)(-)/Cl(-), Cl(-)/NO(3)(-), Cl(-)/SO(4)(2-). They are investigated by means of Langmuir, Freundlich, Dubinin-Radushkevitch (D-R) and Dubinin-Astakhov (D-A) single-component adsorption isotherms. The sorption parameters and the fitting of the models are determined by nonlinear regression and discussed. The Langmuir model provides a fair estimation of the sorption capacity whatever the system under study, in contrast to the Freundlich and D-R models. The adsorption energies deduced from the Dubinin and Langmuir isotherms are in good agreement, and the surface parameter of the D-A isotherm appears consistent. All models agree on the order of affinity, with OH(-) the least retained. The Dubinin models provide the best fit to the experimental points, indicating that the micropore volume filling theory is the best representation of the ion exchange processes under study among the adsorption isotherms considered. The nonlinear regression results are also compared with linear regressions. While the parameter values are not affected, the evaluation of the best-fitting model is biased by linearization. Copyright © 2011 Elsevier B.V. All rights reserved.
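
A minimal sketch of the abstract's closing point, under invented data: fitting the Langmuir isotherm q = qm*C/(K + C) both by direct nonlinear minimization (here a brute-force grid search) and by a linearized (Hanes-Woolf) regression. On noiseless data both recover the same parameters; the variable names and values are illustrative, not from the paper.

```python
def langmuir(C, qm, K):
    return qm * C / (K + C)

# synthetic equilibrium data from known parameters qm = 2.0, K = 0.5
Cs = [0.1, 0.2, 0.5, 1.0, 2.0, 5.0]
qs = [langmuir(C, 2.0, 0.5) for C in Cs]

# (a) nonlinear fit: brute-force search over a parameter grid, minimizing SSE
best = min(
    ((qm / 100, K / 100) for qm in range(100, 400) for K in range(10, 200)),
    key=lambda p: sum((q - langmuir(C, *p)) ** 2 for C, q in zip(Cs, qs)),
)

# (b) linearized fit: regress C/q on C (Hanes-Woolf); slope = 1/qm, intercept = K/qm
xs, ys = Cs, [C / q for C, q in zip(Cs, qs)]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
qm_lin, K_lin = 1 / slope, intercept / slope

print(best, (qm_lin, K_lin))  # both should recover (2.0, 0.5) on noiseless data
```

With measurement noise, the linearized regression implicitly reweights the errors, which is why model comparison based on linearized fits can be biased even when the recovered parameters look similar.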

  14. vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments

    Directory of Open Access Journals (Sweden)

    Demeter Lisa

    2010-05-01

    Full Text Available Abstract Background The replication rate (or fitness) between viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Results Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Conclusions Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced for making the computational tool more flexible to accommodate various experimental conditions. This Web-based tool is implemented in C# language with Microsoft ASP.NET, and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/.
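
The regression idea behind such fitness estimators can be sketched in a few lines: for two exponentially growing variants, the slope of ln(n1/n2) against time estimates the fitness difference, and using all time points (rather than just two) reduces the variance of the estimate. The growth rates and sampling times below are invented for illustration; this is not the vFitness implementation.

```python
import math

r1, r2 = 0.9, 0.6           # per-day net growth rates of variants 1 and 2
times = [0, 1, 2, 3, 4, 5]
n1 = [100 * math.exp(r1 * t) for t in times]
n2 = [100 * math.exp(r2 * t) for t in times]

# slope of ln(n1/n2) vs t estimates the fitness difference d = r1 - r2
ys = [math.log(a / b) for a, b in zip(n1, n2)]
n = len(times)
mt, my = sum(times) / n, sum(ys) / n
d_hat = sum((t - mt) * (y - my) for t, y in zip(times, ys)) / sum(
    (t - mt) ** 2 for t in times
)
print(round(d_hat, 3))  # ≈ 0.3, the true r1 - r2
```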

  15. CURVE LSFIT, Gamma Spectrometer Calibration by Interactive Fitting Method

    International Nuclear Information System (INIS)

    Olson, D.G.

    1992-01-01

    1 - Description of program or function: CURVE and LSFIT are interactive programs designed to obtain the best data fit to an arbitrary curve. CURVE finds the type of fitting routine which produces the best curve. The types of fitting routines available are linear regression, exponential, logarithmic, power, least squares polynomial, and spline. LSFIT produces a reliable calibration curve for gamma ray spectrometry by using the uncertainty value associated with each data point. LSFIT is intended for use where an entire efficiency curve is to be made, starting at 30 keV and continuing to 1836 keV. It creates calibration curves using up to three least squares polynomial fits to produce the best curve for photon energies above 120 keV, and a spline function to combine these fitted points with a best fit for points below 120 keV. 2 - Method of solution: The quality of fit is tested by comparing the measured y-value to the y-value calculated from the fitted curve. The fractional difference between these two values is printed for the evaluation of the quality of the fit. 3 - Restrictions on the complexity of the problem - maxima of: 2000 data points in the calibration curve output (LSFIT); 30 input data points; 3 least squares polynomial fits (LSFIT). The least squares polynomial fit requires that the number of data points used exceed the degree of fit by at least two.
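
A hedged sketch of the quality-of-fit check described in the method of solution: fit a low-order polynomial by least squares, then report the fractional difference between each measured y-value and the y-value computed from the fitted curve. The data points are made up; this is not the CURVE/LSFIT code.

```python
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.9, 10.2, 16.8, 26.1]   # roughly quadratic measurements

# build the 3x3 normal equations for y ~ c0 + c1*x + c2*x^2
A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]

def solve3(A, b):
    # Gaussian elimination with partial pivoting on a copy of the system
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

c = solve3(A, b)
fit = [c[0] + c[1] * x + c[2] * x * x for x in xs]
frac_diff = [(y - f) / y for y, f in zip(ys, fit)]  # the printed quality measure
print([round(d, 3) for d in frac_diff])
```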

  16. Modeling metabolic networks in C. glutamicum: a comparison of rate laws in combination with various parameter optimization strategies

    Directory of Open Access Journals (Sweden)

    Oldiges Marco

    2009-01-01

    Full Text Available Abstract Background To understand the dynamic behavior of cellular systems, mathematical modeling is often necessary and comprises three steps: (1) experimental measurement of participating molecules, (2) assignment of rate laws to each reaction, and (3) parameter calibration with respect to the measurements. In each of these steps the modeler is confronted with a plethora of alternative approaches, e.g., the selection of approximative rate laws in step two, as specific equations are often unknown, or the choice of an estimation procedure with its specific settings in step three. This overall process, with its numerous choices and the mutual influence between them, makes it hard to single out the best modeling approach for a given problem. Results We investigate the modeling process using multiple kinetic equations together with various parameter optimization methods for a well-characterized example network, the biosynthesis of valine and leucine in C. glutamicum. For this purpose, we derive seven dynamic models based on generalized mass action, Michaelis-Menten and convenience kinetics, as well as the stochastic Langevin equation. In addition, we introduce two modeling approaches for feedback inhibition to the mass action kinetics. The parameters of each model are estimated using eight optimization strategies. To determine the most promising modeling approaches together with the best optimization algorithms, we carry out a two-step benchmark: (1) coarse-grained comparison of the algorithms on all models and (2) fine-grained tuning of the best optimization algorithms and models. To analyze the space of the best parameters found for each model, we apply clustering, variance, and correlation analysis.
Conclusion A mixed model based on the convenience rate law and the Michaelis-Menten equation, in which all reactions are assumed to be reversible, is the most suitable deterministic modeling approach, followed by a reversible generalized mass action kinetics
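
Step (3), parameter calibration, can be illustrated with a minimal stochastic optimizer: a (1+1) evolution strategy tuning Michaelis-Menten parameters (vmax, Km) against synthetic rate measurements. The network, values and strategy settings are invented stand-ins, not the benchmark's models or algorithms.

```python
import random

random.seed(1)
S = [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]          # substrate concentrations
v_obs = [3.0 * s / (0.8 + s) for s in S]      # rates from "true" vmax=3, Km=0.8

def sse(p):
    vmax, km = p
    return sum((v - vmax * s / (km + s)) ** 2 for s, v in zip(S, v_obs))

# (1+1) evolution strategy: mutate, keep the candidate if it is no worse
best = [1.0, 1.0]
for _ in range(20000):
    cand = [max(1e-6, x + random.gauss(0, 0.05)) for x in best]
    if sse(cand) <= sse(best):
        best = cand
print([round(x, 2) for x in best])  # should approach [3.0, 0.8]
```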

  17. A comparison of approaches in fitting continuum SEDs

    International Nuclear Information System (INIS)

    Liu Yao; Wang Hong-Chi; Madlener David; Wolf Sebastian

    2013-01-01

    We present a detailed comparison of two approaches, the use of a pre-calculated database and simulated annealing (SA), for fitting the continuum spectral energy distribution (SED) of astrophysical objects whose appearance is dominated by surrounding dust. While pre-calculated databases are commonly used to model SED data, only a few studies to date have employed SA due to its unclear accuracy and convergence time for this specific problem. From a methodological point of view, different approaches lead to different fitting quality, demand on computational resources and calculation time. We compare the fitting quality and computational costs of these two approaches for the task of SED fitting to provide a guide to the practitioner to find a compromise between desired accuracy and available resources. To reduce uncertainties inherent to real datasets, we introduce a reference model resembling a typical circumstellar system with 10 free parameters. We derive the SED of the reference model with our code MC3D at 78 logarithmically distributed wavelengths in the range [0.3 μm, 1.3 mm] and use this setup to simulate SEDs for the database and SA. Our result directly demonstrates the applicability of SA in the field of SED modeling, since the algorithm regularly finds better solutions to the optimization problem than a pre-calculated database. As both methods have advantages and shortcomings, a hybrid approach is preferable. While the database provides an approximate fit and overall probability distributions for all parameters deduced using Bayesian analysis, SA can be used to improve upon the results returned by the model grid.
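
The simulated-annealing loop itself is short enough to sketch, here on a toy multimodal 1-D objective standing in for the radiative-transfer model. The objective, start point, step size and cooling schedule are all illustrative choices, not the paper's settings.

```python
import math
import random

random.seed(3)

def objective(x):
    # many local minima; global minimum at x = 0
    return x * x + 2 * (1 - math.cos(2 * math.pi * x))

x = 4.0
fx = objective(x)
T = 5.0
while T > 1e-3:
    cand = x + random.gauss(0, 0.5)
    fc = objective(cand)
    # accept downhill moves always, uphill moves with Boltzmann probability
    if fc < fx or random.random() < math.exp((fx - fc) / T):
        x, fx = cand, fc
    T *= 0.999           # geometric cooling schedule
print(round(x, 2))       # typically ends close to the global minimum at 0
```

The high-temperature phase lets the walker hop between local minima (the grid-like landscape a pre-calculated database would sample coarsely), while the cooling phase refines the best basin found.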

  18. 2D Bayesian automated tilted-ring fitting of disc galaxies in large H I galaxy surveys: 2DBAT

    Science.gov (United States)

    Oh, Se-Heon; Staveley-Smith, Lister; Spekkens, Kristine; Kamphuis, Peter; Koribalski, Bärbel S.

    2018-01-01

    We present a novel algorithm based on a Bayesian method for 2D tilted-ring analysis of disc galaxy velocity fields. Compared to the conventional algorithms based on a chi-squared minimization procedure, this new Bayesian-based algorithm suffers less from local minima of the model parameters even with highly multimodal posterior distributions. Moreover, the Bayesian analysis, implemented via Markov Chain Monte Carlo sampling, only requires broad ranges of posterior distributions of the parameters, which makes the fitting procedure fully automated. This feature will be essential when performing kinematic analysis on the large number of resolved galaxies expected to be detected in neutral hydrogen (H I) surveys with the Square Kilometre Array and its pathfinders. The so-called 2D Bayesian Automated Tilted-ring fitter (2DBAT) implements Bayesian fits of 2D tilted-ring models in order to derive rotation curves of galaxies. We explore 2DBAT performance on (a) artificial H I data cubes built based on representative rotation curves of intermediate-mass and massive spiral galaxies, and (b) Australia Telescope Compact Array H I data from the Local Volume H I Survey. We find that 2DBAT works best for well-resolved galaxies with intermediate inclinations (20° < i < 70°), complementing 3D techniques better suited to modelling inclined galaxies.
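
The Bayesian machinery at the core of such a fitter can be illustrated with a minimal Metropolis sampler. 2DBAT fits full 2D tilted-ring models; here a single flat rotation velocity `vflat` with a broad flat prior stands in, and the mock data and step size are invented.

```python
import math
import random

random.seed(7)
true_v, sigma = 180.0, 10.0
data = [true_v + random.gauss(0, sigma) for _ in range(50)]

def log_post(v):
    if not 0 < v < 400:                      # broad flat prior
        return -math.inf
    return -sum((d - v) ** 2 for d in data) / (2 * sigma ** 2)

v, samples = 100.0, []
for i in range(5000):
    cand = v + random.gauss(0, 5.0)
    # Metropolis acceptance rule on the log posterior
    if math.log(random.random()) < log_post(cand) - log_post(v):
        v = cand
    if i >= 1000:                            # discard burn-in
        samples.append(v)
post_mean = sum(samples) / len(samples)
print(round(post_mean, 1))   # close to the sample mean of the mock velocities
```

Because the sampler only needs broad prior ranges rather than good starting guesses, the same loop can be run unattended over many galaxies, which is the automation argument made in the abstract.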

  19. Comparative analysis of tree classification models for detecting fusarium oxysporum f. sp cubense (TR4) based on multi soil sensor parameters

    Science.gov (United States)

    Estuar, Maria Regina Justina; Victorino, John Noel; Coronel, Andrei; Co, Jerelyn; Tiausas, Francis; Señires, Chiara Veronica

    2017-09-01

    Use of wireless sensor networks and smartphone integration design to monitor environmental parameters surrounding plantations is made possible because of readily available and affordable sensors. Providing low-cost monitoring devices would be beneficial, especially to small farm owners, in a developing country like the Philippines, where agriculture covers a significant amount of the labor market. This study discusses the integration of wireless soil sensor devices and smartphones to create an application that will use multidimensional analysis to detect the presence or absence of plant disease. Specifically, soil sensors are designed to collect soil quality parameters in a sink node, from which the smartphone collects data via Bluetooth. Given these, there is a need to develop a classification model on the mobile phone that will report the infection status of a soil. Though tree classification is the most appropriate approach for continuous parameter-based datasets, there is a need to determine whether tree models will produce coherent results or not. Soil sensor data residing on the phone are modeled using several variations of decision tree, namely: decision tree (DT), best-fit (BF) decision tree, functional tree (FT), Naive Bayes (NB) decision tree, J48, J48graft and LAD tree, where the decision tree approaches the problem by considering all sensor nodes as one. Results show that there are significant differences among soil sensor parameters, indicating that there are variances in scores between the infected and uninfected sites. Furthermore, analysis of variance in accuracy, recall, precision and F1 measure scores shows homogeneity among the NBTree, J48graft and J48 tree classification models.

  20. On diffusion processes with variable drift rates as models for decision making during learning

    International Nuclear Information System (INIS)

    Eckhoff, P; Holmes, P; Law, C; Connolly, P M; Gold, J I

    2008-01-01

    We investigate Ornstein-Uhlenbeck and diffusion processes with variable drift rates as models of evidence accumulation in a visual discrimination task. We derive power-law and exponential drift-rate models and characterize how parameters of these models affect the psychometric function describing performance accuracy as a function of stimulus strength and viewing time. We fit the models to psychophysical data from monkeys learning the task to identify parameters that best capture performance as it improves with training. The most informative parameter was the overall drift rate describing the signal-to-noise ratio of the sensory evidence used to form the decision, which increased steadily with training. In contrast, secondary parameters describing the time course of the drift during motion viewing did not exhibit steady trends. The results indicate that relatively simple versions of the diffusion model can fit behavior over the course of training, thereby giving a quantitative account of learning effects on the underlying decision process
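
The central claim, that accuracy is governed by the overall drift rate (the signal-to-noise ratio of the evidence), can be checked with a few lines of Euler-Maruyama simulation of a bounded diffusion. The bounds, noise level, step size and drift values below are illustrative, not fitted to the monkey data.

```python
import random

random.seed(11)

def accuracy(drift, trials=2000, bound=1.0, dt=0.01, noise=1.0):
    # fraction of trials on which the accumulator hits the correct (+) bound
    correct = 0
    for _ in range(trials):
        x = 0.0
        while abs(x) < bound:
            x += drift * dt + noise * random.gauss(0, 1) * dt ** 0.5
        correct += x >= bound
    return correct / trials

accs = [accuracy(d) for d in (0.2, 0.8, 2.0)]
print([round(a, 2) for a in accs])  # accuracy rises steadily with drift rate
```

For this symmetric two-bound diffusion the simulated accuracies should track the closed-form value 1/(1 + exp(-2*drift*bound/noise**2)), which is why a steadily increasing fitted drift rate captures improvement with training.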

  1. A Comparison of Item Fit Statistics for Mixed IRT Models

    Science.gov (United States)

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G[superscript 2], Orlando and Thissen's S-X[superscript 2] and S-G[superscript 2], and Stone's chi[superscript 2*] and G[superscript 2*]. To investigate the…

  2. Parametric fitting of data obtained from detectors with finite resolution and limited acceptance

    International Nuclear Information System (INIS)

    Gagunashvili, N.D.

    2011-01-01

    A goodness-of-fit test for fitting of a parametric model to data obtained from a detector with finite resolution and limited acceptance is proposed. The parameters of the model are found by minimization of a statistic that is used for comparing experimental data and simulated reconstructed data. Numerical examples are presented to illustrate and validate the fitting procedure.

  3. A three-parameter langmuir-type model for fitting standard curves of sandwich enzyme immunoassays with special attention to the α-fetoprotein assay

    NARCIS (Netherlands)

    Kortlandt, W.; Endeman, H.J.; Hoeke, J.O.O.

    In a simplified approach to the reaction kinetics of enzyme-linked immunoassays, a Langmuir-type equation y = [ax/(b + x)] + c was derived. This model proved to be superior to logit-log and semilog models in the curve-fitting of standard curves. An assay for α-fetoprotein developed in our laboratory

  4. Scanning anisotropy parameters in horizontal transversely isotropic media

    KAUST Repository

    Masmoudi, Nabil

    2016-10-12

    The horizontal transversely isotropic model, with arbitrary symmetry axis orientation, is the simplest effective representative that explains the azimuthal behaviour of seismic data. Estimating the anisotropy parameters of this model is important in reservoir characterisation, specifically in terms of fracture delineation. We propose a travel-time-based approach to estimate the anellipticity parameter η and the symmetry axis azimuth ϕ of a horizontal transversely isotropic medium, given an inhomogeneous elliptic background model (which might be obtained from velocity analysis and well velocities). This is accomplished through a Taylor series expansion of the travel-time solution (of the eikonal equation) as a function of the parameter η and the azimuth angle ϕ. The accuracy of the travel-time expansion is enhanced by the use of the Shanks transform. This results in an accurate approximation of the solution of the non-linear eikonal equation and provides a mechanism to scan simultaneously for the best-fitting effective parameters η and ϕ, without the need for repetitive modelling of travel times. The analysis of the travel-time sensitivity to parameters η and ϕ reveals that travel times are more sensitive to η than to the symmetry axis azimuth ϕ. Thus, η is better constrained by travel times than the azimuth. Moreover, the two-parameter scan in the homogeneous case shows that errors in the background model affect the estimation of η and ϕ differently. While a gradual increase in errors in the background model leads to increasing errors in η, inaccuracies in ϕ, on the other hand, depend on the background model errors. We also propose a layer-stripping method, valid for a stack of horizontal transversely isotropic layers with arbitrarily oriented symmetry axes, to convert the effective parameters to interval layer values.
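
The Shanks transform used above to sharpen the truncated expansion is a generic sequence-acceleration device, S(A_n) = (A_{n+1}A_{n-1} - A_n^2)/(A_{n+1} + A_{n-1} - 2A_n). A small demonstration on the slowly converging alternating series for ln 2 (not a travel-time expansion) shows the effect:

```python
import math

def shanks(seq):
    # S(A_n) = (A_{n+1} A_{n-1} - A_n^2) / (A_{n+1} + A_{n-1} - 2 A_n)
    return [
        (seq[i + 1] * seq[i - 1] - seq[i] ** 2)
        / (seq[i + 1] + seq[i - 1] - 2 * seq[i])
        for i in range(1, len(seq) - 1)
    ]

# partial sums of 1 - 1/2 + 1/3 - ...  -> ln 2
partial, s = [], 0.0
for n in range(1, 12):
    s += (-1) ** (n + 1) / n
    partial.append(s)

once = shanks(partial)
twice = shanks(once)
print(abs(partial[-1] - math.log(2)), abs(twice[-1] - math.log(2)))
# each application of the transform sharply reduces the truncation error
```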

  5. FITS: a function-fitting program

    Energy Technology Data Exchange (ETDEWEB)

    Balestrini, S.J.; Chezem, C.G.

    1982-01-01

    FITS is an iterating computer program that adjusts the parameters of a function to fit a set of data points according to the least squares criterion and then lists and plots the results. The function can be programmed or chosen from a library that is provided. The library can be expanded to include up to 99 functions. A general plotting routine, contained in the program but useful in its own right, is described separately in an Appendix.

  6. Progress on reference input parameter library for nuclear model calculations of nuclear data (III)

    International Nuclear Information System (INIS)

    Su Zongdi; Liu Jianfeng; Huang Zhongfu

    1997-01-01

    A new set of the average neutron resonance spacings D0 and neutron strength functions S0 for 309 nuclei was reestimated on the basis of the resolved resonance parameters reevaluated from BNL-325, ENDF/B-6, JEF-2, and JENDL-3, and the cumulative numbers N0 of low-lying levels for 344 nuclei were also reevaluated by means of histograms. Three sets of level density parameters, for the Gilbert-Cameron (GC) formula, the back-shifted Fermi gas model (BS) and the generalized superfluid model (GSM), have been reestimated by fitting the D0 and N0 values of CENPL.LRD-2.

  7. Deriving global parameter estimates for the Noah land surface model using FLUXNET and machine learning

    Science.gov (United States)

    Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.

    2016-11-01

    With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin Hypercube-based ensemble of the most sensitive parameters, determined by the Sobol method, to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that there is the potential to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.

  8. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. Assessing performance of Bayesian state-space models fit to Argos satellite telemetry locations processed with Kalman filtering.

    Directory of Open Access Journals (Sweden)

    Mónica A Silva

    Full Text Available Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with the KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with ARGOS satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to "true" GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, locations and behavioural states estimated by switching state-space models (SSSMs) fitted to data derived from KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6 ± 5.6 km) was nearly half that of LS estimates (11.6 ± 8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales' behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates.
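
The Kalman-filter idea behind the Argos KF locations can be sketched with a minimal 1-D constant-position filter that fuses noisy fixes into a state estimate with shrinking uncertainty. This is a toy stand-in, not the Argos algorithm; all numbers are illustrative.

```python
import random

random.seed(5)
true_pos, meas_sd = 10.0, 3.0
fixes = [true_pos + random.gauss(0, meas_sd) for _ in range(40)]

x, P = 0.0, 100.0            # initial state estimate and its variance
Q, R = 0.01, meas_sd ** 2    # process and measurement noise variances
for z in fixes:
    P += Q                   # predict: uncertainty grows by process noise
    K = P / (P + R)          # Kalman gain
    x += K * (z - x)         # update: pull the state toward the measurement
    P *= 1 - K               # update: shrink the uncertainty
print(round(x, 1), round(P, 2))
```

Each estimate also carries its variance P, which is exactly the per-location uncertainty a state-space model downstream can exploit.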

  10. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    Science.gov (United States)

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  11. An approximation to the adaptive exponential integrate-and-fire neuron model allows fast and predictive fitting to physiological data

    Directory of Open Access Journals (Sweden)

    Loreen eHertäg

    2012-09-01

    Full Text Available For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained with fluctuating ('in-vivo-like') input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a 'high-throughput' model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
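
The speed advantage of fitting a closed-form f-I curve instead of integrating differential equations can be sketched with the standard leaky integrate-and-fire rate formula standing in for the paper's AdEx approximation. The parameter values, units and grid-search fit are illustrative assumptions.

```python
import math

def lif_rate(I, I_th, tau, t_ref=0.002):
    # closed-form LIF firing rate: zero below rheobase I_th, else
    # 1 / (t_ref + tau * ln(I / (I - I_th)))
    if I <= I_th:
        return 0.0
    return 1.0 / (t_ref + tau * math.log(I / (I - I_th)))

# synthetic f-I measurements from "true" I_th = 1.0, tau = 0.02 s
Is = [1.1, 1.3, 1.6, 2.0, 3.0]
fs = [lif_rate(I, 1.0, 0.02) for I in Is]

# fit by brute-force grid search on (I_th, tau), minimizing squared rate error
best = min(
    ((ith / 100, tau / 10000) for ith in range(50, 110) for tau in range(100, 400)),
    key=lambda p: sum((f - lif_rate(I, *p)) ** 2 for I, f in zip(Is, fs)),
)
print(best)  # recovers (1.0, 0.02) on noiseless data
```

Every candidate evaluation is a handful of arithmetic operations rather than a simulated voltage trace, which is where the two-orders-of-magnitude speedup comes from.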

  12. Applying Least Absolute Shrinkage Selection Operator and Akaike Information Criterion Analysis to Find the Best Multiple Linear Regression Models between Climate Indices and Components of Cow's Milk.

    Science.gov (United States)

    Marami Milani, Mohammad Reza; Hense, Andreas; Rahmani, Elham; Ploeger, Angelika

    2016-07-23

    This study focuses on multiple linear regression models relating six climate indices (temperature humidity index THI, environmental stress index ESI, equivalent temperature index ETI, heat load index HLI, modified HLI (HLInew), and respiratory rate predictor RRP) with three main components of cow's milk (yield, fat, and protein) for cows in Iran. The least absolute shrinkage selection operator (LASSO) and the Akaike information criterion (AIC) techniques are applied to select the best model for milk predictands with the smallest number of climate predictors. Uncertainty estimation is employed by applying bootstrapping through resampling. Cross validation is used to avoid over-fitting. Climatic parameters are calculated from the NASA-MERRA global atmospheric reanalysis. Milk data for the months from April to September, 2002 to 2010, are used. The best linear regression models are found in spring between milk yield as the predictand and THI, ESI, ETI, HLI, and RRP as predictors, with p-value < 0.001 and R² of 0.50 and 0.49, respectively. In summer, milk yield with independent variables of THI, ETI, and ESI shows the highest relation (p-value < 0.001) with R² of 0.69. For fat and protein the results are only marginal. This method is suggested for impact studies of climate variability/change in the agriculture and food science fields when short time series or data with large uncertainty are available.
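
The AIC part of the selection logic can be shown in miniature: compare two candidate single-predictor regressions and keep the one with the smaller AIC. The data are synthetic (y truly depends on x1, not x2); this is a sketch of the criterion, not the study's LASSO/AIC pipeline.

```python
import math
import random

random.seed(2)
n = 60
x1 = [random.uniform(0, 10) for _ in range(n)]
x2 = [random.uniform(0, 10) for _ in range(n)]
y = [2.0 + 1.5 * a + random.gauss(0, 1.0) for a in x1]

def aic_simple(x, y, k=3):
    # least-squares slope/intercept, then AIC = n*ln(RSS/n) + 2k
    # (k counts intercept, slope and error variance)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum(
        (a - mx) ** 2 for a in x
    )
    rss = sum((b - (my + slope * (a - mx))) ** 2 for a, b in zip(x, y))
    return n * math.log(rss / n) + 2 * k

print(aic_simple(x1, y) < aic_simple(x2, y))  # True: AIC prefers the real predictor
```

The 2k penalty is what keeps AIC from always favouring the model with more predictors, which is the point of pairing it with LASSO shrinkage.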

  13. Modeling annual extreme temperature using generalized extreme value distribution: A case study in Malaysia

    Science.gov (United States)

    Hasan, Husna; Salam, Norfatin; Kassim, Suraiya

    2013-04-01

    Extreme temperature at several stations in Malaysia is modeled by fitting the annual maxima to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey Fuller (ADF) and Phillips Perron (PP) tests are used to detect stochastic trends among the stations. The Mann-Kendall (MK) test suggests a non-stationary model. Three models are considered for stations with a trend, and the Likelihood Ratio test is used to determine the best-fitting model. The results show that the Subang and Bayan Lepas stations favour a model which is linear in the location parameter, while the Kota Kinabalu and Sibu stations suit a model with a trend in the logarithm of the scale parameter. The return level, i.e. the level of events (maximum temperature) which is expected to be exceeded once, on average, in a given number of years, is also obtained.
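
The return-level computation mentioned at the end has a standard closed form: for a GEV with location mu, scale sigma and shape xi, the m-year return level is z_m = mu - (sigma/xi) * (1 - (-ln(1 - 1/m))**(-xi)) for xi != 0, with a Gumbel limit for xi = 0. The parameter values below are illustrative, not the paper's estimates.

```python
import math

def gev_return_level(mu, sigma, xi, m):
    # level exceeded once, on average, every m years
    y = -math.log(1.0 - 1.0 / m)
    if abs(xi) < 1e-12:                 # Gumbel (xi -> 0) limit
        return mu - sigma * math.log(y)
    return mu - (sigma / xi) * (1.0 - y ** (-xi))

# e.g. annual-maximum temperature with mu = 36, sigma = 1.2, xi = -0.1
for m in (10, 50, 100):
    print(m, round(gev_return_level(36.0, 1.2, -0.1, m), 2))
```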

  14. Dual-process models of associative recognition in young and older adults: evidence from receiver operating characteristics.

    Science.gov (United States)

    Healy, Michael R; Light, Leah L; Chung, Christie

    2005-07-01

    In 3 experiments, young and older adults studied lists of unrelated word pairs and were given confidence-rated item and associative recognition tests. Several different models of recognition were fit to the confidence-rating data using techniques described by S. Macho (2002, 2004). Concordant with previous findings, item recognition data were best fit by an unequal-variance signal detection theory model for both young and older adults. For both age groups, associative recognition performance was best explained by models incorporating both recollection and familiarity components. Examination of parameter estimates supported the conclusion that recollection is reduced in old age, but inferences about age differences in familiarity were highly model dependent. Implications for dual-process models of memory in old age are discussed. (© 2005 APA, all rights reserved.)

  15. SHORT COMMUNICATION: Status of Physical Fitness Index (PFI %) and Anthropometric Parameters in Residential School Children Compared to Nonresidential School Children

    Directory of Open Access Journals (Sweden)

    Jyoti P Khodnapur

    2012-07-01

    Full Text Available Background: Physical fitness is the prime criterion for survival, to achieve any goal and to lead a healthy life. The effect of exercise on physical fitness has been well known since the ancient Vedas. Physical fitness can be recorded by a cardiopulmonary efficiency test like the Physical Fitness Index (PFI %), which is a powerful indicator of cardiopulmonary efficiency. Regular exercise increases PFI by increasing oxygen consumption. Residential school children are exposed to regular exercise and nutritious food under guidance. Aims and Objectives: Our study aimed to compare the physical fitness index status and anthropometric parameters of Residential Sainik (n=100) school children with Non-Residential (n=100) school children (aged between 12-16 years) of Bijapur. Material and Methods: PFI was measured by the Harvard Step Test [1]. The anthropometric parameters Height (cms), Weight (Kg), Body Surface Area (BSA in sq.mts), Body Mass Index (BMI in Kg/m2), Mid Arm Circumference (cms), Chest Circumference (cms) and Abdominal Circumference (cms) were recorded. Results: Mean scores of PFI (%), Height (cms), Weight (Kg), BSA (sq.mts), BMI (Kg/m2), Mid Arm Circumference (cms), Chest Circumference (cms) and Abdominal Circumference (cms) were significantly higher (p=0.000) in Residential school children compared to Non-Residential school children. In conclusion, regular exercise and a nutritious diet under guidance increase physical fitness and growth in growing children.
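
The PFI score from the Harvard Step Test is a simple ratio; the long-form version is commonly computed as 100 times the exercise duration in seconds divided by twice the sum of the three 30-second recovery pulse counts. The subject data below are invented for illustration.

```python
def harvard_pfi(duration_s, pulse1, pulse2, pulse3):
    # long-form Harvard Step Test fitness index from three recovery pulse counts
    return 100.0 * duration_s / (2.0 * (pulse1 + pulse2 + pulse3))

# a subject who completed the full 300 s with recovery counts 60, 55, 50
print(round(harvard_pfi(300, 60, 55, 50), 1))  # → 90.9
```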

  16. Fit-for-purpose: species distribution model performance depends on evaluation criteria - Dutch Hoverflies as a case study.

    Science.gov (United States)

    Aguirre-Gutiérrez, Jesús; Carvalheiro, Luísa G; Polce, Chiara; van Loon, E Emiel; Raes, Niels; Reemer, Menno; Biesmeijer, Jacobus C

    2013-01-01

    Understanding species distributions and the factors limiting them is an important topic in ecology and conservation, including in nature reserve selection and predicting climate change impacts. While Species Distribution Models (SDM) are the main tool used for these purposes, choosing the best SDM algorithm is not straightforward as these are plentiful and can be applied in many different ways. SDM are used mainly to gain insight into 1) overall species distributions, 2) their past-present-future probability of occurrence and/or 3) to understand their ecological niche limits (also referred to as ecological niche modelling). The fact that these three aims may require different models and outputs is, however, rarely considered and has not been evaluated consistently. Here we use data from a systematically sampled set of species occurrences to specifically test the performance of Species Distribution Models across several commonly used algorithms. Species range in distribution patterns from rare to common and from local to widespread. We compare overall model fit (representing species distribution), the accuracy of the predictions at multiple spatial scales, and the consistency in the selection of environmental correlates across multiple modelling runs. As expected, the choice of modelling algorithm determines model outcome. However, model quality depends not only on the algorithm, but also on the measure of model fit used and the scale at which it is used. Although model fit was higher for the consensus approach and Maxent, Maxent and GAM models were more consistent in estimating local occurrence, while RF and GBM showed higher consistency in environmental variable selection. Model outcomes diverged more for narrowly distributed species than for widespread species. We suggest that matching study aims with modelling approach is essential in Species Distribution Models, and provide suggestions on how to do this for different modelling aims and species' data.

  17. Estimation of Staphylococcus aureus growth parameters from turbidity data: characterization of strain variation and comparison of methods.

    Science.gov (United States)

    Lindqvist, R

    2006-07-01

    Turbidity methods offer possibilities for generating the data required for addressing microorganism variability in risk modeling, given that the results of these methods correspond to those of viable count methods. The objectives of this study were to identify the best approach for determining growth parameters from turbidity data collected with a Bioscreen instrument, and to characterize variability in the growth parameters of 34 Staphylococcus aureus strains of different biotypes isolated from broiler carcasses. Growth parameters were estimated by fitting primary growth models to turbidity growth curves or to detection times of serially diluted cultures, either directly or by using an analysis of variance (ANOVA) approach. The maximum specific growth rates in chicken broth at 17 degrees C estimated by time-to-detection methods were in good agreement with viable count estimates, whereas growth models (exponential and Richards) underestimated growth rates. Time-to-detection methods were therefore selected for strain characterization. The variation of growth parameters among strains was best described by either the logistic or the lognormal distribution, but definitive conclusions require a larger data set. The distribution of the physiological state parameter ranged from 0.01 to 0.92 and was not significantly different from a normal distribution. Strain variability was important, and the coefficient of variation of growth parameters was up to six times larger among strains than within strains. We suggest applying a time-to-detection (ANOVA) approach using turbidity measurements for convenient and accurate estimation of growth parameters. The results emphasize the need to consider the implications of strain variability for predictive modeling and risk assessment.
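The time-to-detection principle behind this kind of estimation can be sketched numerically: under exponential growth, the detection time of a serially diluted culture increases linearly with the log of the dilution factor, and the slope of that line is the reciprocal of the maximum specific growth rate. A minimal illustration with synthetic detection times (all numbers hypothetical, not the paper's data):

```python
import numpy as np

def growth_rate_from_detection_times(dilution_factors, detection_times):
    """Estimate the maximum specific growth rate (1/h) from detection
    times of serially diluted cultures: t_d is linear in ln(dilution)
    with slope 1/mu_max (assumes exponential growth up to detection)."""
    x = np.log(np.asarray(dilution_factors, float))
    y = np.asarray(detection_times, float)
    slope, intercept = np.polyfit(x, y, 1)
    return 1.0 / slope

# Synthetic check: mu = 0.5/h; higher dilutions reach the turbidity
# detection threshold (here 1e7 cells) later
mu_true = 0.5
dils = np.array([1, 10, 100, 1000])
t_d = (np.log(1e7) - np.log(1e4 / dils)) / mu_true
mu_est = growth_rate_from_detection_times(dils, t_d)
print(round(mu_est, 3))  # 0.5
```

Because the synthetic detection times are exactly linear in ln(dilution), the fit recovers the true rate; with real Bioscreen data the same regression averages over measurement noise.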

  18. Standard error propagation in R-matrix model fitting for light elements

    International Nuclear Information System (INIS)

    Chen Zhenpeng; Zhang Rui; Sun Yeying; Liu Tingjin

    2003-01-01

    The error propagation features of R-matrix model fitting for the 7Li, 11B and 17O systems were researched systematically. Some laws of error propagation were revealed, an empirical formula P_j = U_j^c / U_j^d = K_j · S̄ · √m / √N for describing standard error propagation was established, and the most likely error ranges for the standard cross sections of 6Li(n,t), 10B(n,α0) and 10B(n,α1) were estimated. The problem that the standard errors of the light-nuclei standard cross sections may be too small results mainly from the R-matrix model fitting, which is not perfect. Yet R-matrix model fitting is the most reliable evaluation method for such data. The error propagation features of R-matrix model fitting for the compound nucleus systems of 7Li, 11B and 17O have been studied systematically, some laws of error propagation are revealed, and these findings are important in solving the problem mentioned above. Furthermore, these conclusions are suitable for similar model fitting in other scientific fields. (author)

  19. Hydrological model performance and parameter estimation in the wavelet-domain

    Directory of Open Access Journals (Sweden)

    B. Schaefli

    2009-10-01

    Full Text Available This paper proposes a method for rainfall-runoff model calibration and performance analysis in the wavelet-domain by fitting the estimated wavelet-power spectrum (a representation of the time-varying frequency content of a time series) of a simulated discharge series to that of the corresponding observed time series. As discussed in this paper, calibrating hydrological models so as to reproduce the time-varying frequency content of the observed signal can lead to different results than parameter estimation in the time-domain. Therefore, wavelet-domain parameter estimation has the potential to give new insights into model performance and to reveal model structural deficiencies. We apply the proposed method to synthetic case studies and a real-world discharge modeling case study and discuss how model diagnosis can benefit from an analysis in the wavelet-domain. The results show that for the real-world case study of precipitation-runoff modeling for a high alpine catchment, the calibrated discharge simulation captures the dynamics of the observed time series better than the results obtained through calibration in the time-domain. In addition, the wavelet-domain performance assessment of this case study highlights the frequencies that are not well reproduced by the model, which gives specific indications about how to improve the model structure.
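Comparing a simulated and an observed series through their wavelet-power spectra, rather than point by point in time, can be sketched as follows. This is a generic illustration using a Ricker-wavelet continuous wavelet transform computed by direct convolution; it is not the authors' implementation, and the objective function is only one plausible choice:

```python
import numpy as np

def ricker(points, a):
    # Ricker (Mexican-hat) wavelet of width parameter a
    t = np.arange(points) - (points - 1) / 2.0
    A = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
    return A * (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

def wavelet_power(x, widths):
    # CWT by direct convolution at each width; power = squared coefficients
    return np.array([np.convolve(x, ricker(min(10 * w, len(x)), w),
                                 mode='same') for w in widths]) ** 2

def wavelet_objective(obs, sim, widths=(2, 4, 8, 16)):
    """Sum of squared differences between observed and simulated
    wavelet-power spectra: a sketch of a wavelet-domain criterion
    that a calibration routine would minimise."""
    return np.sum((wavelet_power(obs, widths) - wavelet_power(sim, widths)) ** 2)

tt = np.linspace(0, 10, 256)
obs = np.sin(2 * np.pi * tt) + 0.5 * np.sin(6 * np.pi * tt)
sim = np.sin(2 * np.pi * tt)          # misses the fast component
print(wavelet_objective(obs, obs) == 0.0, wavelet_objective(obs, sim) > 0.0)
```

A simulation that reproduces the observed series exactly scores zero; one that misses a frequency component is penalised at the corresponding wavelet widths, which is what lets this criterion localise which frequencies a model fails to reproduce.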

  20. Model-fitting approach to kinetic analysis of non-isothermal oxidation of molybdenite

    International Nuclear Information System (INIS)

    Ebrahimi Kahrizsangi, R.; Abbasi, M. H.; Saidi, A.

    2007-01-01

    The kinetics of molybdenite oxidation was studied by non-isothermal TGA-DTA with a heating rate of 5 °C·min−1. The model-fitting kinetic approach was applied to the TGA data, using the Coats-Redfern method of model fitting. This popular model-fitting method gives an excellent fit to non-isothermal data in the chemically controlled regime. The apparent activation energy was determined to be about 34.2 kcal·mol−1, with a pre-exponential factor of about 10^8 s−1, for extents of reaction less than 0.5.
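The Coats-Redfern linearisation behind this kind of analysis can be illustrated with synthetic data: for a chosen mechanism g(α), the quantity ln[g(α)/T²] is linear in 1/T with slope −E/R. The first-order mechanism g(α) = −ln(1−α) and all numbers below are assumptions for the sketch, chosen so that a known activation energy (143 kJ/mol, about 34.2 kcal/mol) should be recovered:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def coats_redfern_energy(T, alpha):
    """Apparent activation energy (J/mol) from non-isothermal TGA data,
    assuming first-order kinetics g(alpha) = -ln(1 - alpha):
    ln[g(alpha)/T^2] is fitted as a straight line in 1/T, slope = -E/R."""
    y = np.log(-np.log(1.0 - alpha) / T ** 2)
    slope, _ = np.polyfit(1.0 / T, y, 1)
    return -slope * R

# Synthetic conversion data generated from the Coats-Redfern expression
E_true, A, beta = 143e3, 1e8, 5 / 60.0   # J/mol, 1/s, K/s (5 °C/min)
T = np.linspace(600, 800, 50)            # K
g = (A * R / (beta * E_true)) * T ** 2 * np.exp(-E_true / (R * T))
alpha = 1.0 - np.exp(-g)
E_est = coats_redfern_energy(T, alpha)
print(round(E_est / 1e3, 1))             # 143.0 kJ/mol
```

With real TGA data the line is only approximately straight, and trying several candidate g(α) forms and keeping the one with the best linear correlation is the usual model-fitting step.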

  1. LOCO with Constraints and Improved Fitting Technique

    International Nuclear Information System (INIS)

    Not Available

    2007-01-01

    LOCO has been a powerful beam-based diagnostics and optics control method for storage rings and synchrotrons worldwide ever since it was established at NSLS by J. Safranek. This method measures the orbit response matrix and optionally the dispersion function of the machine. The data are then fitted to a lattice model by adjusting parameters such as quadrupole and skew quadrupole strengths in the model, and BPM gains and rolls and corrector gains and rolls of the measurement system. Any abnormality of the machine that affects the machine optics can then be identified. The resulting lattice model is equivalent to the real machine lattice as seen by the BPMs. Since there are usually two or more BPMs per betatron period in modern circular accelerators, the model is often a very accurate representation of the real machine. According to the fitting result, one can correct the machine lattice to the design lattice by changing the quadrupole and skew quadrupole strengths. LOCO is so important that it is routinely performed at many electron storage rings to guarantee machine performance, especially after the Matlab-based LOCO code became available. However, for some machines, LOCO is not easy to carry out. In some cases, LOCO fitting converges to an unrealistic solution with large changes to the quadrupole strengths ΔK. The quadrupole gradient changes can be so large that the resulting lattice model fails to find a closed orbit and subsequent iterations become impossible. In cases when LOCO converges, the solution can have ΔK larger than realistic, often with a spurious zigzag pattern between adjacent quadrupoles. This degeneracy behavior of LOCO is due to the correlation between the fitting parameters - usually between neighboring quadrupoles. The fitting scheme is therefore less restrictive over certain patterns of changes to these quadrupoles, with which the correlated quadrupoles fight each other and the net effect is very inefficient χ² reduction.

  2. Comparison of the Wang and Wachsmuth models for π production with measurements at 12 GeV/c

    International Nuclear Information System (INIS)

    Fernow, R.C.

    1996-01-01

    We converted the invariant cross section measurements of Blobel et al. at 12 GeV/c into the form d²σ/dΩdp as a function of the LAB total momentum p and p_T. We adjusted the parameters of the pion production models of Wang and of Wachsmuth-Hagedorn-Ranft to obtain the best fit to the data. Neither model gave a statistically accurate fit to the data. copyright 1995 American Institute of Physics

  3. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithms (GAs) optimization procedure for the estimation of such parameters. The Genetic Algorithms' search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points, which possibly carry relevant information on the underlying model characteristics. A possible utilization of this information amounts to creating and updating an archive of the best solutions found at each generation and then analyzing the evolution of the archive's statistics over successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution, with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as in most optimization procedures, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which have little influence on the model outputs. In this sense, besides estimating the parameter values efficiently, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output.
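The archive idea can be sketched with a toy GA: keep the elite solutions of each generation as the archive, track the per-parameter spread of that archive, and observe that the influential parameter stabilises to a tighter spread than the nearly neutral one. The objective function and all settings below are hypothetical, not the reactor model of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(pop):
    # Toy fitting objective: parameter 0 is far more influential
    # than parameter 1 (a stand-in for the real model's sensitivity)
    return 100 * (pop[:, 0] - 0.3) ** 2 + 0.01 * (pop[:, 1] - 0.7) ** 2

pop = rng.uniform(0, 1, size=(60, 2))
spread = []
for gen in range(40):
    f = objective(pop)
    elite = pop[np.argsort(f)[:10]]       # archive of best solutions
    spread.append(elite.std(axis=0))      # per-parameter archive spread
    # parent selection, blend crossover, replacement, mutation
    parents = elite[rng.integers(0, 10, size=(60, 2))]
    w = rng.uniform(0, 1, size=(60, 1))
    pop = w * parents[:, 0] + (1 - w) * parents[:, 1]
    pop += rng.normal(0, 0.02, size=pop.shape)   # mutation
    pop = np.clip(pop, 0, 1)

# Average the archive spread over the last generations
s0, s1 = np.asarray(spread)[-5:].mean(axis=0)
print(s0 < s1)   # the influential parameter has the tighter spread
```

The per-generation spread statistics are exactly the kind of archive information the abstract describes: parameters whose spread collapses early are the ones the objective function is sensitive to.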

  4. When the model fits the frame: the impact of regulatory fit on efficacy appraisal and persuasion in health communication.

    Science.gov (United States)

    Bosone, Lucia; Martinez, Frédéric; Kalampalikis, Nikos

    2015-04-01

    In health-promotional campaigns, positive and negative role models can be deployed to illustrate the benefits or costs of certain behaviors. The main purpose of this article is to investigate why, how, and when exposure to role models strengthens the persuasiveness of a message, according to regulatory fit theory. We argue that exposure to a positive versus a negative model activates individuals' goals toward promotion rather than prevention. By means of two experiments, we demonstrate that high levels of persuasion occur when a message advertising healthy dietary habits offers a regulatory fit between its framing and the described role model. Our data also establish that the effects of such internal regulatory fit by vicarious experience depend on individuals' perceptions of response-efficacy and self-efficacy. Our findings constitute a significant theoretical complement to previous research on regulatory fit and contain valuable practical implications for health-promotional campaigns. © 2015 by the Society for Personality and Social Psychology, Inc.

  5. Chempy: A flexible chemical evolution model for abundance fitting. Do the Sun's abundances alone constrain chemical evolution models?

    Science.gov (United States)

    Rybizki, Jan; Just, Andreas; Rix, Hans-Walter

    2017-09-01

    Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of supernovae of type Ia (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6 (+2.1/−1.6)% of the IMF explodes as core-collapse supernovae (CC-SN), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10³ M⊙ to 0.5-1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels. Chempy could be a powerful tool to confront predictions from stellar

  6. Fitting Statistical Distributions Functions on Ozone Concentration Data at Coastal Areas

    International Nuclear Information System (INIS)

    Muhammad Yazid Nasir; Nurul Adyani Ghazali; Muhammad Izwan Zariq Mokhtar; Norhazlina Suhaimi

    2016-01-01

    Ozone is known as one of the pollutants that contribute to the air pollution problem. Therefore, it is important to carry out studies on ozone. The objective of this study is to find the best statistical distribution for ozone concentration. Three distributions, namely Inverse Gaussian, Weibull and Lognormal, were chosen to fit one year of hourly average ozone concentration data recorded in 2010 at Port Dickson and Port Klang. The maximum likelihood estimation (MLE) method was used to estimate the parameters and to develop the probability density function (PDF) and cumulative density function (CDF) graphs. Three performance indicators (PI), namely normalized absolute error (NAE), prediction accuracy (PA), and coefficient of determination (R²), were used to judge the goodness-of-fit of each distribution. Results show that the Weibull distribution is the best distribution, with the smallest error measure (NAE): 0.08 at Port Klang and 0.31 at Port Dickson. It also achieved the highest adequacy measure (PA: 0.99), with R² values of 0.98 (Port Klang) and 0.99 (Port Dickson). These results provide useful information to local authorities for prediction purposes. (author)
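The fit-and-score workflow described above can be sketched with SciPy: fit each candidate distribution by MLE, then score how closely the fitted CDF tracks the empirical one. The synthetic "ozone" sample and the CDF-based R² score below are stand-ins for illustration, not the study's data or its exact performance indicators:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-in for hourly-average ozone concentrations (ppb)
obs = stats.weibull_min.rvs(c=2.0, scale=30.0, size=500, random_state=rng)

# MLE fit of the three candidate distributions (location fixed at 0)
candidates = {
    'weibull': stats.weibull_min,
    'lognorm': stats.lognorm,
    'invgauss': stats.invgauss,
}
scores = {}
x = np.sort(obs)
ecdf = np.arange(1, len(x) + 1) / (len(x) + 1)   # empirical CDF
for name, dist in candidates.items():
    params = dist.fit(obs, floc=0)
    fcdf = dist.cdf(x, *params)
    # R^2 between empirical and fitted CDF as a goodness-of-fit score
    ss_res = np.sum((ecdf - fcdf) ** 2)
    ss_tot = np.sum((ecdf - ecdf.mean()) ** 2)
    scores[name] = 1 - ss_res / ss_tot

best = max(scores, key=scores.get)
print(best)  # weibull
```

Since the synthetic data were drawn from a Weibull distribution, the Weibull candidate scores highest, mirroring how NAE, PA and R² were used to rank distributions in the study.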

  7. A model for hormonal control of the menstrual cycle: structural consistency but sensitivity with regard to data.

    Science.gov (United States)

    Selgrade, J F; Harris, L A; Pasteur, R D

    2009-10-21

    This study presents a 13-dimensional system of delayed differential equations which predicts serum concentrations of five hormones important for regulation of the menstrual cycle. Parameters for the system are fit to two different data sets for normally cycling women. For these best fit parameter sets, model simulations agree well with the two different data sets but one model also has an abnormal stable periodic solution, which may represent polycystic ovarian syndrome. This abnormal cycle occurs for the model in which the normal cycle has estradiol levels at the high end of the normal range. Differences in model behavior are explained by studying hysteresis curves in bifurcation diagrams with respect to sensitive model parameters. For instance, one sensitive parameter is indicative of the estradiol concentration that promotes pituitary synthesis of a large amount of luteinizing hormone, which is required for ovulation. Also, it is observed that models with greater early follicular growth rates may have a greater risk of cycling abnormally.

  8. Flexible competing risks regression modeling and goodness-of-fit

    DEFF Research Database (Denmark)

    Scheike, Thomas; Zhang, Mei-Jie

    2008-01-01

    In this paper we consider different approaches for estimation and assessment of covariate effects for the cumulative incidence curve in the competing risks model. The classic approach is to model all cause-specific hazards and then estimate the cumulative incidence curve based on these cause...... models that is easy to fit and contains the Fine-Gray model as a special case. One advantage of this approach is that our regression modeling allows for non-proportional hazards. This leads to a new simple goodness-of-fit procedure for the proportional subdistribution hazards assumption that is very easy...... of the flexible regression models to analyze competing risks data when non-proportionality is present in the data....

  9. High-order dynamic modeling and parameter identification of structural discontinuities in Timoshenko beams by using reflection coefficients

    Science.gov (United States)

    Fan, Qiang; Huang, Zhenyu; Zhang, Bing; Chen, Dayue

    2013-02-01

    Properties of discontinuities, such as bolt joints and cracks in the waveguide structures, are difficult to evaluate by either analytical or numerical methods due to the complexity and uncertainty of the discontinuities. In this paper, the discontinuity in a Timoshenko beam is modeled with high-order parameters and then these parameters are identified by using reflection coefficients at the discontinuity. The high-order model is composed of several one-order sub-models in series and each sub-model consists of inertia, stiffness and damping components in parallel. The order of the discontinuity model is determined based on the characteristics of the reflection coefficient curve and the accuracy requirement of the dynamic modeling. The model parameters are identified through the least-square fitting iteration method, in which the undetermined model parameters are updated in iteration to fit the dynamic reflection coefficient curve with the wave-based one. By using the spectral super-element method (SSEM), simulation cases, including one-order discontinuities on infinite- and finite-beams and a two-order discontinuity on an infinite beam, were employed to evaluate both the accuracy of the discontinuity model and the effectiveness of the identification method. For practical considerations, effects of measurement noise on the discontinuity parameter identification are investigated by adding different levels of noise to the simulated data. The simulation results were then validated by the corresponding experiments. Both the simulation and experimental results show that (1) the one-order discontinuities can be identified accurately with maximum errors of 6.8% and 8.7%, respectively; (2) the high-order discontinuities can be identified with maximum errors of 15.8% and 16.2%, respectively; and (3) the high-order model can predict the complex discontinuity much more accurately than the one-order discontinuity model.

  10. Models and methods for derivation of in vivo neuroreceptor parameters with PET and SPECT reversible radiotracers

    International Nuclear Information System (INIS)

    Slifstein, Mark; Laruelle, Marc

    2001-01-01

    The science of quantitative analysis of PET and SPECT neuroreceptor imaging studies has grown considerably over the past decade. A number of methods have been proposed in which receptor parameter estimation results from fitting data to a model of the underlying kinetics of ligand uptake in the brain. These approaches have come to be collectively known as model-based methods and several have received widespread use. Here, we briefly review the most frequently used methods and examine their strengths and weaknesses. Kinetic modeling is the most direct implementation of the compartment models, but with some tracers accurate input function measurement and good compartment configuration identification can be difficult to obtain. Other methods were designed to overcome some particular vulnerability to error of classical kinetic modeling, but introduced new vulnerabilities in the process. Reference region methods obviate the need for arterial plasma measurement, but are not as robust to violations of the underlying modeling assumptions as methods using the arterial input function. Graphical methods give estimates of VT without the requirement of compartment model specification, but provide a biased estimator in the presence of statistical noise. True equilibrium methods are quite robust, but their use is limited to experiments with tracers that are suitable for constant infusion. In conclusion, there is no universally 'best' method that is applicable to all neuroreceptor imaging studies, and careful evaluation of model-based methods is required for each radiotracer

  11. An NCME Instructional Module on Item-Fit Statistics for Item Response Theory Models

    Science.gov (United States)

    Ames, Allison J.; Penfield, Randall D.

    2015-01-01

    Drawing valid inferences from item response theory (IRT) models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. This instructional module provides an overview of methods used for evaluating the fit of IRT models. Upon completing…

  12. The Early Eocene equable climate problem: can perturbations of climate model parameters identify possible solutions?

    Science.gov (United States)

    Sagoo, Navjit; Valdes, Paul; Flecker, Rachel; Gregoire, Lauren J

    2013-10-28

    Geological data for the Early Eocene (56-47.8 Ma) indicate extensive global warming, with very warm temperatures at both poles. However, despite numerous attempts to simulate this warmth, there are remarkable data-model differences in the prediction of these polar surface temperatures, resulting in the so-called 'equable climate problem'. In this paper, for the first time an ensemble with a perturbed climate-sensitive model parameters approach has been applied to modelling the Early Eocene climate. We performed more than 100 simulations with perturbed physics parameters, and identified two simulations that have an optimal fit with the proxy data. We have simulated the warmth of the Early Eocene at 560 ppmv CO2, which is a much lower CO2 level than many other models. We investigate the changes in atmospheric circulation, cloud properties and ocean circulation that are common to these simulations and how they differ from the remaining simulations in order to understand what mechanisms contribute to the polar warming. The parameter set from one of the optimal Early Eocene simulations also produces a favourable fit for the last glacial maximum boundary climate and outperforms the control parameter set for the present day. Although this does not 'prove' that this model is correct, it is very encouraging that there is a parameter set that creates a climate model able to simulate well very different palaeoclimates and the present-day climate. Interestingly, to achieve the great warmth of the Early Eocene this version of the model does not have a strong future climate change Charney climate sensitivity. It produces a Charney climate sensitivity of 2.7 °C, whereas the mean value of the 18 models in the IPCC Fourth Assessment Report (AR4) is 3.26 °C ± 0.69 °C. Thus, this value is within the range and below the mean of the models included in the AR4.

  13. A method for tuning parameters of Monte Carlo generators and a determination of the unintegrated gluon density

    International Nuclear Information System (INIS)

    Bacchetta, Alessandro; Jung, Hannes; Kutak, Krzysztof

    2010-02-01

    A method for tuning parameters in Monte Carlo generators is described and applied to a specific case. The method works in the following way: each observable is generated several times using different values of the parameters to be tuned. The output is then approximated by some analytic form to describe the dependence of the observables on the parameters. This approximation is used to find the values of the parameter that give the best description of the experimental data. This results in significantly faster fitting compared to an approach in which the generator is called iteratively. As an application, we employ this method to fit the parameters of the unintegrated gluon density used in the Cascade Monte Carlo generator, using inclusive deep inelastic data measured by the H1 Collaboration. We discuss the results of the fit, its limitations, and its strong points. (orig.)
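The tuning strategy, running the generator at a handful of parameter values, approximating the observable's parameter dependence by an analytic form, and then fitting that cheap surrogate to the measured value, can be sketched with a toy one-parameter "generator". Everything below (the generator, the quadratic surrogate, the target value) is hypothetical, not the Cascade setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def generator(theta, n=2000):
    """Stand-in for an expensive Monte Carlo generator: returns a noisy
    observable whose mean depends on the tuning parameter theta."""
    return np.mean(rng.normal(loc=theta ** 2 + theta, scale=1.0, size=n))

# 1) Run the generator only at a small grid of parameter values
grid = np.linspace(0.0, 2.0, 7)
obs = np.array([generator(t) for t in grid])

# 2) Approximate observable(theta) by a quadratic (the analytic form)
coeffs = np.polyfit(grid, obs, 2)

# 3) Fit the surrogate to the 'experimental' value instead of
#    calling the generator iteratively during the fit
target = 2.0   # measured observable; theta^2 + theta = 2 gives theta = 1
theta_fine = np.linspace(0, 2, 2001)
best_theta = theta_fine[np.argmin((np.polyval(coeffs, theta_fine) - target) ** 2)]
print(round(best_theta, 1))  # 1.0
```

The expensive step (running the generator) happens only 7 times up front; the fit itself evaluates the polynomial surrogate, which is why the paper reports significantly faster fitting than calling the generator inside the minimiser.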

  14. Analysis of chromosome aberration data by hybrid-scale models

    International Nuclear Information System (INIS)

    Indrawati, Iwiq; Kumazawa, Shigeru

    2000-02-01

    This paper presents a new methodology for analyzing data on chromosome aberrations, which is useful for understanding the characteristics of dose-response relationships and for constructing calibration curves for biological dosimetry. The hybrid scale of linear and logarithmic scales yields a particular plotting paper, in which the normal section paper, two types of semi-log papers and the log-log paper are continuously connected. The hybrid-hybrid plotting paper may contain nine kinds of linear relationships, conveniently called hybrid-scale models. One can systematically select the best-fit model among the nine by examining the conditions for a straight line of data points. A biological interpretation is possible with some hybrid-scale models. In this report, the hybrid-scale models were applied to separately reported data on chromosome aberrations in human lymphocytes as well as on chromosome breaks in Tradescantia. The results proved that the proposed models fit the data better than the linear-quadratic model, despite the demerit of the increased number of model parameters. We showed that the hybrid-hybrid model (both dose and response variables on the hybrid scale) provides the best-fit straight lines to be used as reliable and readable calibration curves of chromosome aberrations. (author)
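Selecting a best-fit scale model by checking which axis transforms straighten the data can be sketched for the pure linear/logarithmic combinations, a simplified subset of the nine hybrid-scale relationships (the hybrid scale itself, which blends linear and logarithmic behaviour, is not reproduced here):

```python
import numpy as np

def best_scale_model(dose, response):
    """Pick the axis transforms (linear or logarithmic on each axis)
    that make the data most nearly a straight line, judged by the R^2
    of a linear fit: a simplified stand-in for the nine-model scheme."""
    transforms = {'lin': lambda v: v, 'log': np.log}
    best = None
    for xname, fx in transforms.items():
        for yname, fy in transforms.items():
            x, y = fx(dose), fy(response)
            slope, intercept = np.polyfit(x, y, 1)
            resid = y - (slope * x + intercept)
            r2 = 1 - resid.var() / y.var()
            if best is None or r2 > best[0]:
                best = (r2, f'{xname}-{yname}')
    return best[1]

# Power-law data y = dose^1.7 is a straight line on log-log axes
dose = np.linspace(0.5, 5, 20)
choice = best_scale_model(dose, dose ** 1.7)
print(choice)  # log-log
```

The same "which scale makes it straight" test is what the plotting-paper approach performs graphically: each of the nine linear relationships corresponds to one region of the hybrid-hybrid paper.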

  15. Full-profile fitting of emission spectrum to determine transition intensity parameters of Yb3+:GdTaO4

    International Nuclear Information System (INIS)

    Zhang Qingli; Sun Guihua; Ning Kaijie; Liu Wenpeng; Sun Dunlu; Yin Shaotang; Shi Chaoshu

    2016-01-01

    The Judd–Ofelt transition intensity parameters of the luminescence of rare-earth ions in solids are important for the quantitative analysis of luminescence, yet they have long been very difficult to determine from emission or absorption spectra. A “full profile fitting” method to obtain them from an emission spectrum is proposed, in which the contribution of a radiative transition to the emission spectrum is expressed as the product of the transition probability, a line profile function, an instrument measurement constant and the transition center frequency or wavelength, and the whole experimental emission spectrum is the sum of all transitions. In this way, the emission spectrum is expressed as a function with the independent variables intensity parameters, full width at half maximum (FWHM) of the profile functions, instrument measurement constant, wavelength, and the Huang–Rhys factor S if the lattice vibronic peaks in the emission spectrum need to be considered. The ratios of the experimental to the calculated energy lifetimes are incorporated into the fitting function to remove arbitrariness during fitting. Employing this method obviates measurement of the absolute emission spectrum intensity, and it does not depend on the number of emission transition peaks. Every experimental point in an emission spectrum, which usually has at least hundreds of data points, is a function of the intensity parameters and other parameters, so it is usually viable to determine these parameters from a large number of experimental values. We applied this method to determine twenty-five parameters of Yb3+ in GdTaO4. The calculated and experimental energy lifetimes, and the experimental and calculated emission spectra, are very consistent, indicating that it is viable to obtain the transition intensity parameters of rare-earth ions in solids by a full profile fitting to the ions’ emission spectrum. The calculated emission cross sections of Yb3+:GdTaO4 also indicate

  16. Music in CrossFit®—Influence on Performance, Physiological, and Psychological Parameters

    Directory of Open Access Journals (Sweden)

    Gavin Brupbacher

    2014-01-01

    Full Text Available Gaining increasing popularity within the fitness sector, CrossFit® serves as an appealing and efficient high intensity training approach to develop strength and endurance on a functional level; and music is often utilized to produce ergogenic effects. The present randomized, controlled, crossover study aimed at investigating the effects of music vs. non-music on performance, physiological and psychological outcomes. Thirteen (age: 27.5, standard deviation (SD) 6.2 years) healthy, moderately trained subjects performed four identical workouts over two weeks. The order of the four workouts (two with, and two without music, 20 min each) was randomly assigned for each individual. Acute responses in work output, heart rate, blood lactate, rate of perceived exertion, perceived pain, and affective reaction were measured at the 5th, 10th, 15th, and 20th min during the training sessions. Training with music resulted in a significantly lower work output (460.3 repetitions, SD 98.1, vs. 497.8 repetitions, SD 103.7; p = 0.03). All other parameters did not differ between both music conditions. This is partly in line with previous findings that instead of providing ergogenic effects, applying music during CrossFit® may serve as a more distractive stimulus. Future studies should separate the influence of music on a more individual basis with larger sample sizes.

  17. Fitting the two-compartment model in DCE-MRI by linear inversion.

    Science.gov (United States)

    Flouri, Dimitra; Lesnic, Daniel; Sourbron, Steven P

    2016-09-01

    Model fitting of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data with nonlinear least squares (NLLS) methods is slow and may be biased by the choice of initial values. The aim of this study was to develop and evaluate a linear least squares (LLS) method to fit the two-compartment exchange and two-compartment filtration models. A second-order linear differential equation for the measured concentrations was derived in which the model parameters act as coefficients. Simulations of normal and pathological data were performed to determine calculation time, accuracy and precision under different noise levels and temporal resolutions. Performance of the LLS was evaluated by comparison against the NLLS. The LLS method is about 200 times faster, which reduces the calculation time for a 256 × 256 MR slice from 9 min to 3 s. For ideal data with low noise and high temporal resolution, the LLS and NLLS were equally accurate and precise. The LLS was more accurate and precise than the NLLS at low temporal resolution, but less accurate at high noise levels. The data show that the LLS leads to a significant reduction in calculation times and more reliable results at low noise levels. At higher noise levels the LLS becomes exceedingly inaccurate compared to the NLLS, but this may be improved using a suitable weighting strategy. Magn Reson Med 76:998-1006, 2016. © 2015 Wiley Periodicals, Inc.
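
    The linear-inversion idea can be illustrated on the simpler one-compartment (Tofts-type) model, whose rate equation dC/dt = Ktrans*ca(t) - kep*C(t) integrates to C(t) = Ktrans*Int(ca) - kep*Int(C), so the parameters appear as coefficients of a linear system. A minimal stdlib-Python sketch under that simplification (the function names and the mono-exponential input function are illustrative, not the paper's two-compartment code):

```python
import math

def cumtrapz(y, dt):
    """Cumulative trapezoidal integral of samples y at spacing dt."""
    out = [0.0]
    for i in range(1, len(y)):
        out.append(out[-1] + 0.5 * (y[i - 1] + y[i]) * dt)
    return out

def fit_one_compartment(ca, ct, dt):
    """Linear least squares for C(t) = Ktrans*Int(ca) - kep*Int(C):
    the parameters are coefficients, so no initial guess is needed."""
    A = cumtrapz(ca, dt)                      # regressor for Ktrans
    B = cumtrapz(ct, dt)                      # regressor for kep
    s11 = sum(a * a for a in A)
    s12 = -sum(a * b for a, b in zip(A, B))
    s22 = sum(b * b for b in B)
    r1 = sum(a * c for a, c in zip(A, ct))
    r2 = -sum(b * c for b, c in zip(B, ct))
    det = s11 * s22 - s12 * s12
    return (s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det

# Noise-free check: for ca(t) = exp(-t), Ktrans = 0.25, kep = 0.5,
# the exact solution is C(t) = 0.5*(exp(-0.5*t) - exp(-t)).
dt = 0.05
t = [i * dt for i in range(401)]
ca = [math.exp(-x) for x in t]
ct = [0.5 * (math.exp(-0.5 * x) - math.exp(-x)) for x in t]
ktrans, kep = fit_one_compartment(ca, ct, dt)
```

Because no iteration or starting values are involved, the fit is a single linear solve per voxel, which is where the reported speed-up comes from.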

  18. Effects of supplemental training on fitness and aesthetic competence parameters in contemporary dance: a randomised controlled trial.

    Science.gov (United States)

    Angioi, Manuela; Metsios, George; Twitchett, Emily A; Koutedakis, Yiannis; Wyon, Matthew

    2012-03-01

    Within aesthetic sports such as figure skating and rhythmic gymnastics, physical fitness has been shown to have positive benefits on performance outcomes. The link between physical fitness and aesthetic contemporary dance performance has not previously been demonstrated in an intervention study. In this study, 24 females engaged in contemporary dance (age 27 ± 5.9 yrs; height 165.3 ± 4.8 cm; weight 59.2 ± 7.6 kg) were recruited and randomly assigned to either an exercise (n = 12) or a control group (n = 12). Three dancers withdrew during the study. The intervention group completed a 6-week conditioning programme comprising two 1-hr sessions of circuit and vibration training per week; the circuit training focused on local muscular endurance and aerobic conditioning, and the vibration training protocol concentrated on power. Repeated measures ANOVA revealed significant increases for the conditioning group in lower body muscular power (11%), upper body muscular endurance (22%), aerobic fitness (11%), and aesthetic competence (12%) (p < 0.05), whereas the control group showed no changes in the fitness parameters with the exception of aerobic fitness, as well as a decrease in aesthetic competence (7%). A 6-week circuit and vibration training programme, which supplemented normal dance commitments, produced significant increases in selected fitness components and a concomitant increase in aesthetic competence in contemporary professional and student dancers.

  19. Mathematical Models for the Apparent Mass of the Seated Human Body Exposed to Vertical Vibration

    Science.gov (United States)

    Wei, L.; Griffin, M. J.

    1998-05-01

    Alternative mathematical models of the vertical apparent mass of the seated human body are developed. The optimum parameters of four models (two single-degree-of-freedom models and two two-degree-of-freedom models) are derived from the mean measured apparent masses of 60 subjects (24 men, 24 women, 12 children) previously reported. The best fits were obtained by fitting the phase data with single-degree-of-freedom and two-degree-of-freedom models having rigid support structures. For these two models, curve fitting was performed on each of the 60 subjects (so as to obtain optimum model parameters for each subject), for the averages of each of the three groups of subjects, and for the entire group of subjects. The values obtained are tabulated. Use of a two-degree-of-freedom model provided a better fit to the phase of the apparent mass at frequencies greater than about 8 Hz and an improved fit to the modulus of the apparent mass at frequencies around 5 Hz. It is concluded that the two-degree-of-freedom model provides an apparent mass similar to that of the human body, but this does not imply that the body moves in the same manner as the masses in this optimized two-degree-of-freedom model.
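
    The single-degree-of-freedom apparent mass on a rigid support can be written in closed form, which is what the curve fitting above optimizes. A short Python sketch with illustrative seated-body-like values (the 50 kg mass, 5 Hz natural frequency, and 0.3 damping ratio are assumptions for the example, not the fitted values tabulated in the paper):

```python
import math

def apparent_mass(f, m, k, c, m0=0.0):
    """Complex apparent mass of a single-DOF mass-spring-damper
    (moving mass m, stiffness k, damping c) on a rigid frame of
    mass m0, under vertical base excitation at frequency f in Hz."""
    w = 2.0 * math.pi * f
    return m0 + m * (k + 1j * c * w) / (k - m * w * w + 1j * c * w)

# Illustrative (assumed) seated-body-like values: 50 kg moving mass,
# 5 Hz natural frequency, damping ratio 0.3.
m, fn, zeta = 50.0, 5.0, 0.3
k = m * (2.0 * math.pi * fn) ** 2
c = 2.0 * zeta * math.sqrt(k * m)
static = abs(apparent_mass(0.01, m, k, c))   # tends to m at low frequency
peak = abs(apparent_mass(fn, m, k, c))       # resonance peak in the modulus
```

Fitting then means adjusting m, k, c (and m0, or a second mass-spring-damper branch for the two-DOF variant) so that the modulus and phase of this expression match the measured apparent mass.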

  20. Statistical Diagnosis of the Best Weibull Methods for Wind Power Assessment for Agricultural Applications

    Directory of Open Access Journals (Sweden)

    Abul Kalam Azad

    2014-05-01

    Full Text Available The best Weibull distribution methods for the assessment of wind energy potential at different altitudes in desired locations are statistically diagnosed in this study. Seven different methods, namely the graphical method (GM), method of moments (MOM), standard deviation method (STDM), maximum likelihood method (MLM), power density method (PDM), modified maximum likelihood method (MMLM) and equivalent energy method (EEM), were used to estimate the Weibull parameters, and six statistical tools, namely relative percentage of error, root mean square error (RMSE), mean percentage of error, mean absolute percentage of error, chi-square error and analysis of variance, were used to rank the methods precisely. The statistical fits of the measured and calculated wind speed data are assessed to judge the performance of the methods. The capacity factor and total energy generated by a small model wind turbine are calculated by numerical integration using trapezoidal sums and Simpson’s rules. The results show that MOM and MLM are the most efficient methods for determining the values of k and c to fit Weibull distribution curves.
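
    The two methods the study ranks highest are compact enough to sketch: MOM via the common power-law approximation for the shape factor, and MLM via fixed-point iteration on the likelihood equation. A stdlib-Python sketch on synthetic wind speeds (the function names and the k = 2, c = 6 m/s test values are illustrative):

```python
import math, random, statistics

def weibull_mom(data):
    """Method of moments: shape k from the power-law approximation
    k = (sigma/mean)**-1.086, then scale c = mean / Gamma(1 + 1/k)."""
    mean = statistics.fmean(data)
    sigma = statistics.stdev(data)
    k = (sigma / mean) ** -1.086
    return k, mean / math.gamma(1.0 + 1.0 / k)

def weibull_mle(data, iters=50):
    """Maximum likelihood: fixed-point iteration on the shape equation
    1/k = sum(x^k ln x)/sum(x^k) - mean(ln x), started from the MOM value."""
    logs = [math.log(x) for x in data]
    mlog = statistics.fmean(logs)
    k, _ = weibull_mom(data)
    for _ in range(iters):
        den = sum(x ** k for x in data)
        num = sum(x ** k * lx for x, lx in zip(data, logs))
        k = 1.0 / (num / den - mlog)
    c = (sum(x ** k for x in data) / len(data)) ** (1.0 / k)
    return k, c

# Synthetic wind speeds from a known Weibull(k=2, c=6 m/s) distribution.
random.seed(42)
speeds = [random.weibullvariate(6.0, 2.0) for _ in range(20000)]
k_mom, c_mom = weibull_mom(speeds)
k_mle, c_mle = weibull_mle(speeds)
```

Both estimators recover the generating parameters closely on clean data; the paper's ranking concerns how they behave on real measured wind speeds.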

  1. Updated Status of the Global Electroweak Fit and Constraints on New Physics

    CERN Document Server

    Baak, M; Haller, J; Hoecker, A; Kennedy, D; Moenig, K; Schott, M; Stelzer, J

    2012-01-01

    We present an update of the Standard Model fit to electroweak precision data. We include the newest experimental results on the top quark mass, the W mass and width, and the Higgs boson mass bounds from LEP, Tevatron and the LHC. We also include a new determination of the electromagnetic coupling strength at the Z pole. We find for the Higgs boson mass (96 +31 -24) GeV and (120 +12 -5) GeV when not including and including the direct Higgs searches, respectively. From the latter fit we indirectly determine the W mass to be (80.362 +- 0.013) GeV. We exploit the data to determine experimental constraints on the oblique vacuum polarisation parameters, and confront these with predictions from the Standard Model (SM) and selected SM extensions. By fitting the oblique parameters to the electroweak data we derive allowed regions in the BSM parameter spaces. We revisit and consistently update these constraints for a fourth fermion generation, two Higgs doublet, inert Higgs and littlest Higgs models, models with lar...

  2. Updated Status of the Global Electroweak Fit and Constraints on New Physics

    CERN Document Server

    Baak, Max; Haller, Johannes; Hoecker, Andreas; Ludwig, Doerthe; Moenig, Klaus; Schott, Matthias; Stelzer, Joerg

    2011-01-01

    We present an update of the Standard Model fit to electroweak precision data. We include the newest experimental results on the top quark mass, the W mass and width, and the Higgs boson mass bounds from LEP, Tevatron and the LHC. We also include a new determination of the electromagnetic coupling strength at the Z pole. We find for the Higgs boson mass (96 +31 -24) GeV and (120 +12 -5) GeV when not including and including the direct Higgs searches, respectively. From the latter fit we indirectly determine the W mass to be (80.359 +0.017 -0.010) GeV. We exploit the data to determine experimental constraints on the oblique vacuum polarisation parameters, and confront these with predictions from the Standard Model (SM) and selected SM extensions. By fitting the oblique parameters to the electroweak data we derive allowed regions in the BSM parameter spaces. We revisit and consistently update these constraints for a fourth family, two Higgs doublet, inert Higgs and littlest Higgs models, models with large,...

  3. Screening for colorectal cancer: what fits best?

    LENUS (Irish Health Repository)

    Lee, Chun Seng

    2012-06-01

    Colorectal cancer (CRC) screening has been shown to be effective in reducing CRC incidence and mortality. There are currently a number of screening modalities available for implementation into a population-based CRC screening program. Each screening method offers different strengths but also possesses its own limitations as a population-based screening strategy. We review the current evidence base for accepted CRC screening tools and evaluate their merits alongside their challenges in fulfilling their role in the detection of CRC. We also aim to provide an outlook on the demands of a low-risk population-based CRC screening program with a view to providing insight as to which modality would best suit current and future needs.

  4. Estimating mass of σ-meson and study on application of the linear σ-model

    International Nuclear Information System (INIS)

    Ding Yibing; Li Xin; Li Xueqian; Liu Xiang; Shen Hong; Shen Pengnian; Wang Guoli; Zeng Xiaoqiang

    2004-01-01

    Whether the σ-meson (f 0 (600)) exists as a real particle is a long-standing problem in both particle physics and nuclear physics. In this work, we analyse the deuteron binding energy in the linear σ-model and, by fitting the data, we are able to determine the range of m σ and also investigate the applicability of the linear σ-model for the interaction between hadrons in the energy region of MeVs. Our result shows that the best fit to the data of the deuteron binding energy and others advocates a narrow range for the σ-meson mass of 520 ≤ m σ ≤ 580 MeV; the concrete values depend on input parameters such as the couplings. Conversely, by fitting the experimental data, one can set constraints on the couplings and the other relevant phenomenological parameters in the model

  5. Assessment of structural model and parameter uncertainty with a multi-model system for soil water balance models

    Science.gov (United States)

    Michalik, Thomas; Multsch, Sebastian; Frede, Hans-Georg; Breuer, Lutz

    2016-04-01

    Water for agriculture is strongly limited in arid and semi-arid regions and is often of low quality in terms of salinity. The application of saline waters for irrigation increases the salt load in the rooting zone and has to be managed by leaching to maintain a healthy soil, i.e. washing out salts by additional irrigation. Dynamic simulation models are helpful tools for calculating the root zone water fluxes and soil salinity content in order to investigate best management practices. However, there is little information on structural and parameter uncertainty for simulations regarding the water and salt balance under saline irrigation. Hence, we established a multi-model system with four different models (AquaCrop, RZWQM, SWAP, Hydrus1D/UNSATCHEM) to analyze structural and parameter uncertainty using the Generalized Likelihood Uncertainty Estimation (GLUE) method. Hydrus1D/UNSATCHEM and SWAP were set up with multiple sets of different implemented functions (e.g. matric and osmotic stress for root water uptake), which results in a broad range of different model structures. The simulations were evaluated against soil water and salinity content observations. The posterior distribution of the GLUE analysis gives behavioral parameter sets and reveals intervals for parameter uncertainty. Across all model sets, most parameters accounting for the soil water balance show low uncertainty; only one or two out of the five to six parameters in each model set display high uncertainty (e.g. the pore-size distribution index in SWAP and Hydrus1D/UNSATCHEM). The differences between the models and model setups reveal the structural uncertainty. The highest structural uncertainty is observed for deep percolation fluxes between the model sets of Hydrus1D/UNSATCHEM (~200 mm) and RZWQM (~500 mm), the latter being more than twice as high. The model sets also show a high variation in uncertainty intervals for deep percolation, with an interquartile range (IQR) of
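
    The GLUE step itself is simple to sketch: sample parameter sets, score each against the observations with a likelihood measure (Nash-Sutcliffe efficiency here), and keep only the "behavioral" sets above a threshold; the spread of the kept sets gives the uncertainty interval. A toy stdlib-Python sketch with a one-parameter decay model standing in for the soil water models (all names, thresholds and values are illustrative):

```python
import math, random

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, used here as the GLUE likelihood measure."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def glue(obs, times, model, sampler, n=4000, threshold=0.95):
    """Monte Carlo GLUE: sample parameter sets and keep the
    'behavioral' ones whose likelihood exceeds the threshold."""
    kept = []
    for _ in range(n):
        theta = sampler()
        if nse(obs, [model(t, theta) for t in times]) > threshold:
            kept.append(theta)
    return kept

# Toy stand-in model: first-order decay y = exp(-theta*t), true theta = 0.5.
random.seed(3)
times = [0.5 * i for i in range(21)]
obs = [math.exp(-0.5 * t) + random.gauss(0.0, 0.02) for t in times]
behavioral = sorted(glue(obs, times,
                         lambda t, th: math.exp(-th * t),
                         lambda: random.uniform(0.1, 1.0)))
median = behavioral[len(behavioral) // 2]
```

The quantiles of the behavioral sample play the role of the uncertainty intervals reported in the study.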

  6. Optimization of parameters for enhanced oil recovery from enzyme treated wild apricot kernels.

    Science.gov (United States)

    Rajaram, Mahatre R; Kumbhar, Baburao K; Singh, Anupama; Lohani, Umesh Chandra; Shahi, Navin C

    2012-08-01

    The present investigation was undertaken with the overall objective of optimizing the enzymatic hydrolysis parameters, i.e. moisture content during hydrolysis, enzyme concentration, enzyme ratio and incubation period, in wild apricot kernel processing for better oil extractability and increased oil recovery. Response surface methodology was adopted in the experimental design. A central composite rotatable design of four variables at five levels was chosen. The parameters and their ranges for the experiments were moisture content during hydrolysis (20-32%, w.b.), enzyme concentration (12-16% v/w of sample), combination of pectolytic and cellulolytic enzyme, i.e. enzyme ratio (30:70-70:30), and incubation period (12-16 h). Aspergillus foetidus and Trichoderma viride were used for production of the crude pectolytic and cellulolytic enzymes, respectively. A complete second-order model for increased oil recovery as a function of the enzymatic parameters fitted the data well, and the best-fit model for oil recovery was developed. The effect of the various parameters on increased oil recovery was determined at the linear, quadratic and interaction levels. The increased oil recovery ranged from 0.14 to 2.53%. The corresponding conditions for maximum oil recovery were 23% (w.b.) moisture content, 15% (v/w of the sample) enzyme concentration, a 60:40 (pectolytic:cellulolytic) enzyme ratio, and a 13 h incubation period. Results of the study indicated that incubation period during enzymatic hydrolysis is the most important factor affecting oil yield, followed by enzyme ratio, moisture content and enzyme concentration in decreasing order. Enzyme ratio, incubation period and moisture content had an insignificant effect on oil recovery. The second-order model for increased oil recovery as a function of the enzymatic hydrolysis parameters predicted the data adequately.
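
    Fitting a complete second-order model to a designed experiment is ordinary least squares on polynomial regressors. A stdlib-Python sketch for two coded factors (the coefficients and the 3-level factorial design are illustrative; the study itself used four factors at five levels):

```python
def basis(x1, x2):
    """Regressors of the full second-order model in two coded factors."""
    return [1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2]

def solve(A, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_quadratic(points, ys):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b11*x1^2
    + b22*x2^2 + b12*x1*x2 via the normal equations."""
    rows = [basis(x1, x2) for x1, x2 in points]
    p = 6
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    Aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(p)]
    return solve(AtA, Aty)

# Recover known coefficients from an exact 3-level factorial design.
true = [2.0, 0.5, -1.0, 0.3, 0.2, 0.4]
pts = [(x1, x2) for x1 in (-1.0, 0.0, 1.0) for x2 in (-1.0, 0.0, 1.0)]
ys = [sum(b * v for b, v in zip(true, basis(x1, x2))) for x1, x2 in pts]
coef = fit_quadratic(pts, ys)
```

The linear, quadratic and interaction effects discussed in the abstract correspond to the b1/b2, b11/b22 and b12 groups of coefficients, respectively.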

  7. AMS-02 fits dark matter

    Science.gov (United States)

    Balázs, Csaba; Li, Tong

    2016-05-01

    In this work we perform a comprehensive statistical analysis of the AMS-02 electron, positron fluxes and the antiproton-to-proton ratio in the context of a simplified dark matter model. We include known, standard astrophysical sources and a dark matter component in the cosmic ray injection spectra. To predict the AMS-02 observables we use propagation parameters extracted from observed fluxes of heavier nuclei and the low energy part of the AMS-02 data. We assume that the dark matter particle is a Majorana fermion coupling to third generation fermions via a spin-0 mediator, and annihilating to multiple channels at once. The simultaneous presence of various annihilation channels provides the dark matter model with additional flexibility, and this enables us to simultaneously fit all cosmic ray spectra using a simple particle physics model and coherent astrophysical assumptions. Our results indicate that AMS-02 observations are not only consistent with the dark matter hypothesis within the uncertainties, but adding a dark matter contribution improves the fit to the data. Assuming, however, that dark matter is solely responsible for this improvement of the fit, it is difficult to evade the latest CMB limits in this model.

  8. AMS-02 fits dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Balázs, Csaba; Li, Tong [ARC Centre of Excellence for Particle Physics at the Tera-scale,School of Physics and Astronomy, Monash University, Melbourne, Victoria 3800 (Australia)

    2016-05-05

    In this work we perform a comprehensive statistical analysis of the AMS-02 electron, positron fluxes and the antiproton-to-proton ratio in the context of a simplified dark matter model. We include known, standard astrophysical sources and a dark matter component in the cosmic ray injection spectra. To predict the AMS-02 observables we use propagation parameters extracted from observed fluxes of heavier nuclei and the low energy part of the AMS-02 data. We assume that the dark matter particle is a Majorana fermion coupling to third generation fermions via a spin-0 mediator, and annihilating to multiple channels at once. The simultaneous presence of various annihilation channels provides the dark matter model with additional flexibility, and this enables us to simultaneously fit all cosmic ray spectra using a simple particle physics model and coherent astrophysical assumptions. Our results indicate that AMS-02 observations are not only consistent with the dark matter hypothesis within the uncertainties, but adding a dark matter contribution improves the fit to the data. Assuming, however, that dark matter is solely responsible for this improvement of the fit, it is difficult to evade the latest CMB limits in this model.

  9. New ROOT Graphical User Interfaces for fitting

    International Nuclear Information System (INIS)

    Maline, D Gonzalez; Moneta, L; Antcheva, I

    2010-01-01

    ROOT, as a scientific data analysis framework, provides extensive capabilities via Graphical User Interfaces (GUIs) for performing interactive analysis and visualizing data objects like histograms and graphs. A new interface for fitting has been developed for performing, exploring and comparing fits on data point sets such as histograms, multi-dimensional graphs or trees. With this new interface, users can interactively build the fit model function, set parameter values and constraints, and select fit and minimization methods with their options. Functionality for visualizing the fit results is provided as well, with the possibility of drawing residuals or confidence intervals. Furthermore, the new fit panel acts as a standalone application and does not prevent users from interacting with other windows. We describe in detail the functionality of this user interface, covering as well the new capabilities provided by the fitting and minimization tools introduced recently in the ROOT framework.

  10. Sensitivity analysis of respiratory parameter uncertainties: impact of criterion function form and constraints.

    Science.gov (United States)

    Lutchen, K R

    1990-08-01

    A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications involve four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2 to 64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz, which reduces the data acquisition requirement from a 16-s to a 5.33- to 8-s breath-holding period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
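
    The linearized approximation evaluates the parameter covariance sigma^2 (J^T W J)^(-1) at the fit, where J is the Jacobian of the model with respect to its parameters. A stdlib-Python sketch for a generic two-parameter exponential model with unit weights (an illustration of the linearization only, not the respiratory impedance models themselves):

```python
import math

def linearized_se(ts, a, b, sigma):
    """Approximate standard errors of (a, b) in y = a*exp(-b*t)
    from the linearized covariance sigma^2 * (J^T J)^{-1}
    (unit weights; a weighting matrix W would enter each sum)."""
    j11 = sum(math.exp(-2.0 * b * t) for t in ts)                 # (dy/da)^2
    j12 = sum(-a * t * math.exp(-2.0 * b * t) for t in ts)        # (dy/da)(dy/db)
    j22 = sum((a * t) ** 2 * math.exp(-2.0 * b * t) for t in ts)  # (dy/db)^2
    det = j11 * j22 - j12 * j12
    return sigma * math.sqrt(j22 / det), sigma * math.sqrt(j11 / det)

# Replicating the measurement grid four times halves the predicted errors,
# the kind of trade-off the sensitivity analysis quantifies.
ts = [0.1 * i for i in range(1, 51)]
se_a, se_b = linearized_se(ts, 2.0, 0.5, 0.1)
se_a4, se_b4 = linearized_se(ts * 4, 2.0, 0.5, 0.1)
```

Comparing such predicted uncertainties across candidate frequency ranges or criterion variables is exactly how the design questions above are answered.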

  11. Simultaneous fits in ISIS on the example of GRO J1008-57

    Science.gov (United States)

    Kühnel, Matthias; Müller, Sebastian; Kreykenbohm, Ingo; Schwarm, Fritz-Walter; Grossberger, Christoph; Dauser, Thomas; Pottschmidt, Katja; Ferrigno, Carlo; Rothschild, Richard E.; Klochkov, Dmitry; Staubert, Rüdiger; Wilms, Joern

    2015-04-01

    Parallel computing and steadily increasing computation speed have led to a new technique for analyzing multiple datasets and datatypes: fitting several datasets simultaneously. With this technique, physically connected parameters of individual datasets can be treated as a single parameter by implementing this connection directly into the fit. We discuss the terminology, implementation, and possible issues of simultaneous fits based on the X-ray data analysis tool Interactive Spectral Interpretation System (ISIS). While all data modeling tools in X-ray astronomy in principle allow fitting data from multiple datasets individually, the syntax used in these tools is often not well suited for this task. Applying simultaneous fits to the transient X-ray binary GRO J1008-57, we find that the spectral shape depends only on X-ray flux. We determine time-independent parameters, such as the folding energy E_fold, with unprecedented precision.
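
    The core idea, tying one physical parameter across datasets while leaving the others free per dataset, can be shown with a shared-slope linear model, for which the joint least-squares solution is closed-form. A stdlib-Python sketch (the linear model is a stand-in for the spectral models fitted in ISIS):

```python
def joint_fit(datasets):
    """Simultaneous least-squares fit of y = m*x + c_d to several
    datasets that share the slope m (the tied parameter) but keep
    individual intercepts c_d."""
    num = den = 0.0
    means = []
    for xs, ys in datasets:
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        num += sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den += sum((x - mx) ** 2 for x in xs)
        means.append((mx, my))
    m = num / den
    return m, [my - m * mx for mx, my in means]

# Two exact datasets with common slope 1.5 and intercepts 0 and 2.
d1 = ([0.0, 1.0, 2.0, 3.0], [0.0, 1.5, 3.0, 4.5])
d2 = ([0.0, 1.0, 2.0, 3.0], [2.0, 3.5, 5.0, 6.5])
slope, (c1, c2) = joint_fit([d1, d2])
```

Because every dataset contributes to the shared parameter, its uncertainty shrinks relative to fitting each dataset alone, which is the precision gain reported for E_fold.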

  12. α -induced reactions on 115In: Cross section measurements and statistical model analysis

    Science.gov (United States)

    Kiss, G. G.; Szücs, T.; Mohr, P.; Török, Zs.; Huszánk, R.; Gyürky, Gy.; Fülöp, Zs.

    2018-05-01

    Background: α-nucleus optical potentials are basic ingredients of statistical model calculations used in nucleosynthesis simulations. While the nucleon+nucleus optical potential is fairly well known, several different parameter sets exist for the α+nucleus optical potential, and large deviations, sometimes reaching even an order of magnitude, are found between the cross section predictions calculated using different parameter sets. Purpose: A measurement of the radiative α-capture and α-induced reaction cross sections on the nucleus 115In at low energies allows a stringent test of statistical model predictions. Since experimental data are scarce in this mass region, this measurement can be an important input for testing the global applicability of α+nucleus optical model potentials and further ingredients of the statistical model. Methods: The reaction cross sections were measured by means of the activation method. The produced activities were determined by off-line detection of the γ rays and characteristic x rays emitted during the electron capture decay of the produced Sb isotopes. The 115In(α,γ)119Sb and 115In(α,n)118mSb reaction cross sections were measured between E_c.m. = 8.83 and 15.58 MeV, and the 115In(α,n)118gSb reaction was studied between E_c.m. = 11.10 and 15.58 MeV. The theoretical analysis was performed within the statistical model. Results: The simultaneous measurement of the (α,γ) and (α,n) cross sections allowed us to determine a best-fit combination of all parameters for the statistical model. The α+nucleus optical potential is identified as the most important input for the statistical model. The best fit is obtained for the new Atomki-V1 potential, and good reproduction of the experimental data is also achieved for the first version of the Demetriou potentials and the simple McFadden-Satchler potential. The nucleon optical potential, the γ-ray strength function, and the level density parametrization are also

  13. Observational constraint on the interacting dark energy models including the Sandage-Loeb test

    Science.gov (United States)

    Zhang, Ming-Jian; Liu, Wen-Biao

    2014-05-01

    Two types of interacting dark energy models are investigated using type Ia supernovae (SNIa), observational Hubble data (OHD), the cosmic microwave background shift parameter, and the secular Sandage-Loeb (SL) test. In the investigation, we have used two sets of parameter priors, WMAP-9 and Planck 2013, which show some interesting differences. We find that the inclusion of the SL test provides an obviously more stringent constraint on the parameters in both models. For the constant coupling model, the error on the interaction term has been reduced to only half of the original scale. Compared with using only SNIa and OHD, the inclusion of the SL test reduces the best-fit interaction almost to zero, which indicates that higher-redshift observations such as the SL test are necessary to track the evolution of the interaction. For the varying coupling model, data including the SL test show that the parameter at C.L. in Planck priors is , where the constant is characteristic of the severity of the coincidence problem; this indicates that the coincidence problem will be less severe. We then reconstruct the interaction , and we find that the best-fit interaction is also negative, similar to the constant coupling model. However, at high redshift the interaction generally vanishes at infinity. We also find that phantom-like dark energy with is favored over the ΛCDM model.

  14. Probability Model of Allele Frequency of Alzheimer’s Disease Genetic Risk Factor

    Directory of Open Access Journals (Sweden)

    Afshin Fayyaz-Movaghar

    2016-06-01

    Full Text Available Background and Purpose: The identification of genetic risk factors of human diseases is very important. This study was conducted to model the allele frequencies (AFs) of Alzheimer’s disease. Materials and Methods: In this study, several candidate probability distributions were fitted to a data set of an Alzheimer’s disease genetic risk factor. The unknown parameters of the considered distributions were estimated, and several goodness-of-fit criteria were calculated for comparison. Results: Based on these statistical criteria, the beta distribution gives the best fit to the AFs. However, the estimated values of the parameters of the beta distribution lead us to the standard uniform distribution. Conclusion: The AFs of Alzheimer’s disease follow the standard uniform distribution.
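
    A method-of-moments fit shows how estimated beta parameters near alpha = beta = 1 point to the standard uniform distribution, since Beta(1, 1) is exactly Uniform(0, 1). A stdlib-Python sketch on synthetic data (illustrative, not the study's data set):

```python
import random, statistics

def beta_mom(data):
    """Method-of-moments fit of Beta(alpha, beta); note that
    Beta(1, 1) is exactly the standard uniform distribution."""
    mu = statistics.fmean(data)
    v = statistics.variance(data)
    common = mu * (1.0 - mu) / v - 1.0
    return mu * common, (1.0 - mu) * common

# Uniform allele-frequency-like data should give alpha and beta near 1.
random.seed(1)
afs = [random.random() for _ in range(20000)]
alpha, beta = beta_mom(afs)
```
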

  15. The Structure of Fitness Landscapes in Antibiotic-Resistant Bacteria

    Science.gov (United States)

    Deris, Barrett; Kim, Minsu; Zhang, Zhongge; Okano, Hiroyuki; Hermsen, Rutger; Gore, Jeff; Hwa, Terence

    2014-03-01

    To predict the emergence of antibiotic resistance, quantitative relations must be established between the fitness of drug-resistant organisms and the molecular mechanisms conferring resistance. We have investigated E. coli strains expressing resistance to translation-inhibiting antibiotics. We show that resistance expression and drug inhibition are linked in a positive feedback loop arising from an innate, global effect of drug-inhibited growth on gene expression. This feedback leads generically to plateau-shaped fitness landscapes and concomitantly, for strains expressing at least moderate degrees of drug resistance, gives rise to an abrupt drop in growth rates of cultures at threshold drug concentrations. A simple quantitative model of bacterial growth based on this innate feedback accurately predicts experimental observations without ad hoc parameter fitting. We describe how drug-inhibited growth rate and the threshold drug concentration (the minimum inhibitory concentration, or MIC) depend on the few biochemical parameters that characterize the molecular details of growth inhibition and drug resistance (e.g., the drug-target dissociation constant). And finally, we discuss how these parameters can shape fitness landscapes to determine evolutionary dynamics and evolvability.

  16. SPSS macros to compare any two fitted values from a regression model.

    Science.gov (United States)

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests-particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
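
    For a single first-order predictor, the matrix expression (v1 - v2)' Cov(b) (v1 - v2) for the variance of a fitted-value difference collapses to Var(b1) * (x1 - x2)^2, which makes the idea behind the macros easy to sketch in a few lines of Python (an illustration of the method, not a port of the SPSS macros):

```python
import math

def compare_fitted(xs, ys, x1, x2):
    """Difference between two fitted values of a simple OLS line
    y = b0 + b1*x, with its standard error. In general the variance is
    (v1 - v2)' Cov(b) (v1 - v2) for design rows v1 and v2; with one
    first-order predictor this collapses to Var(b1) * (x1 - x2)**2."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    b0 = my - b1 * mx
    s2 = sum((y - b0 - b1 * x) ** 2 for x, y in zip(xs, ys)) / (n - 2)
    return b1 * (x1 - x2), math.sqrt(s2 / sxx) * abs(x1 - x2)

# Exact line y = 3 + 2x: the fitted-value difference from x=5 to x=2
# is 6, with zero standard error because the residuals vanish.
xs = [float(i) for i in range(10)]
ys = [3.0 + 2.0 * x for x in xs]
diff, se = compare_fitted(xs, ys, 5.0, 2.0)
```

With interactions, polynomial terms, or splines, the design rows v1 and v2 simply gain more columns and the full matrix form is needed, which is what the macros automate.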

  17. FITS: a function-fitting program

    Energy Technology Data Exchange (ETDEWEB)

    Balestrini, S.J.; Chezem, C.G.

    1982-08-01

    FITS is an iterating computer program that adjusts the parameters of a function to fit a set of data points according to the least squares criterion and then lists and plots the results. The function can be programmed or chosen from a library that is provided. The library can be expanded to include up to 99 functions. A general plotting routine, contained in the program but useful in its own right, is described separately in Appendix A. An example problem file and its solution are given in Appendix B.

  18. Normal tissue complication probability modeling of radiation-induced hypothyroidism after head-and-neck radiation therapy.

    Science.gov (United States)

    Bakhshandeh, Mohsen; Hashemi, Bijan; Mahdavi, Seied Rabi Mehdi; Nikoofar, Alireza; Vasheghani, Maryam; Kazemnejad, Anoshirvan

    2013-02-01

    To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D(50) estimated from the models was approximately 44 Gy. The implemented normal tissue complication probability models showed a parallel architecture for the
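
    The LEUD variant, for example, reduces the DVH to a generalized equivalent uniform dose and passes it through a probit dose-response curve. A stdlib-Python sketch (the DVH, m and n values are illustrative; the paper's best-fit D50 was about 44 Gy):

```python
import math

def eud(dvh, n):
    """Generalized EUD from a differential DVH given as
    [(dose_Gy, fractional_volume), ...] with volume parameter n."""
    return sum(v * d ** (1.0 / n) for d, v in dvh) ** n

def lyman_ntcp(dvh, d50, m, n):
    """Lyman model with the DVH reduced to EUD ('LEUD'):
    NTCP = Phi((EUD - D50) / (m * D50)), Phi the standard normal CDF."""
    t = (eud(dvh, n) - d50) / (m * d50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# A uniform 44 Gy dose (about the reported D50) gives NTCP = 0.5
# by construction; a uniform 60 Gy dose gives a higher probability.
p_at_d50 = lyman_ntcp([(44.0, 1.0)], 44.0, 0.3, 1.0)
p_at_60 = lyman_ntcp([(60.0, 1.0)], 44.0, 0.3, 1.0)
```

With n = 1 the EUD reduces to the mean dose, which connects the LEUD model to the mean dose model that ranked best in this study.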

  19. Normal Tissue Complication Probability Modeling of Radiation-Induced Hypothyroidism After Head-and-Neck Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Bakhshandeh, Mohsen [Department of Medical Physics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Hashemi, Bijan, E-mail: bhashemi@modares.ac.ir [Department of Medical Physics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Mahdavi, Seied Rabi Mehdi [Department of Medical Physics, Faculty of Medical Sciences, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Nikoofar, Alireza; Vasheghani, Maryam [Department of Radiation Oncology, Hafte-Tir Hospital, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Kazemnejad, Anoshirvan [Department of Biostatistics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran (Iran, Islamic Republic of)

    2013-02-01

    Purpose: To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Methods and Materials: Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Results: Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D(50) estimated from the models was approximately 44 Gy. Conclusions: The implemented
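
The mean dose model singled out above can be sketched as a logistic dose-response fit by maximum likelihood. The sketch below is not the paper's implementation: the logistic parameterization NTCP(D) = 1 / (1 + exp(4·γ50·(1 − D/D50))), the parameter grids, and the synthetic patient cohort are all assumptions for illustration.

```python
import math, random

def ntcp(d, d50, g50):
    # Logistic mean-dose NTCP: complication probability at mean thyroid dose d (Gy).
    return 1.0 / (1.0 + math.exp(4.0 * g50 * (1.0 - d / d50)))

def neg_log_likelihood(data, d50, g50):
    # Bernoulli log-likelihood over (mean_dose, complication) pairs.
    nll = 0.0
    for d, y in data:
        p = min(max(ntcp(d, d50, g50), 1e-12), 1.0 - 1e-12)
        nll -= math.log(p) if y else math.log(1.0 - p)
    return nll

def fit_mean_dose_model(data):
    # Coarse grid search for the maximum-likelihood (D50, gamma50).
    best = None
    for i in range(31):                      # D50 in 30..60 Gy, step 1 Gy
        d50 = 30.0 + i
        for j in range(31):                  # gamma50 in 0.5..3.5, step 0.1
            g50 = 0.5 + 0.1 * j
            nll = neg_log_likelihood(data, d50, g50)
            if best is None or nll < best[0]:
                best = (nll, d50, g50)
    return best[1], best[2]

# Synthetic cohort drawn from known parameters (D50 = 44 Gy, as in the abstract).
random.seed(1)
true_d50, true_g50 = 44.0, 1.5
data = [(d, random.random() < ntcp(d, true_d50, true_g50))
        for d in (random.uniform(20.0, 70.0) for _ in range(500))]
d50_hat, g50_hat = fit_mean_dose_model(data)
```

With a few hundred simulated patients the grid search recovers D50 to within a few gray; a real analysis would use a continuous optimizer and profile-likelihood confidence intervals.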

  20. LEP asymmetries and fits of the standard model

    International Nuclear Information System (INIS)

    Pietrzyk, B.

    1994-01-01

    The lepton and quark asymmetries measured at LEP are presented. The results of the Standard Model fits to the electroweak data presented at this conference are given. The top mass obtained from the fit to the LEP data is 172 (+13/-14) (+18/-20) GeV; it is 177 (+11/-11) (+18/-19) GeV when also the collider, ν and A_LR data are included. (author). 10 refs., 3 figs., 2 tabs

  1. Characterization of PV panel and global optimization of its model parameters using genetic algorithm

    International Nuclear Information System (INIS)

    Ismail, M.S.; Moghavvemi, M.; Mahlia, T.M.I.

    2013-01-01

    Highlights: • Genetic Algorithm optimization ability had been utilized to extract parameters of PV panel model. • Effect of solar radiation and temperature variations was taken into account in fitness function evaluation. • We used Matlab-Simulink to simulate operation of the PV-panel to validate results. • Different cases were analyzed to ascertain which of them gives more accurate results. • Accuracy and applicability of this approach to be used as a valuable tool for PV modeling were clearly validated. - Abstract: This paper details an improved modeling technique for a photovoltaic (PV) module, utilizing the optimization ability of a genetic algorithm to compute the different parameters of the PV module. Accurate modeling of any PV module hinges on the values of these parameters, which is essential for any further studies concerning different PV applications; simulation, optimization and the design of hybrid systems that include PV are examples of these applications. This approach achieves global optimization of the parameters and is applicable over the entire range of solar radiation and a wide range of temperatures. The manufacturer's data sheet information is used as the basis for parameter optimization: an average absolute error fitness function is formulated, and a numerical iterative method is used to solve the voltage-current relation of the PV module. The results of single-diode and two-diode models are evaluated in order to ascertain which of them is more accurate. Other cases are also analyzed in this paper for the purpose of comparison. The Matlab–Simulink environment is used to simulate the operation of the PV module, depending on the extracted parameters. The results of the simulation are compared with the data sheet information, which is obtained via experimentation, in order to validate the reliability of the approach. Three types of PV modules
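
A minimal sketch of the genetic-algorithm idea, assuming a simplified ideal single-diode model (series and shunt resistances neglected, so the current is explicit and no iterative solver is needed). The thermal voltage, population settings, and synthetic "data sheet" I-V points are all hypothetical, not the paper's setup.

```python
import math, random

VT = 0.0258 * 36  # thermal voltage times an assumed 36 cells in series

def model_current(v, iph, i0, n):
    # Ideal single-diode model: I = Iph - I0*(exp(V/(n*VT)) - 1).
    return iph - i0 * (math.exp(v / (n * VT)) - 1.0)

def avg_abs_error(params, data):
    # Average absolute current error against the reference I-V points.
    iph, i0, n = params
    return sum(abs(model_current(v, iph, i0, n) - i) for v, i in data) / len(data)

def genetic_fit(data, pop_size=60, gens=150, seed=7):
    rng = random.Random(seed)
    def rand_ind():
        return (rng.uniform(1.0, 10.0),          # photocurrent Iph (A)
                10.0 ** rng.uniform(-10, -5),    # saturation current I0 (A)
                rng.uniform(0.8, 2.0))           # ideality factor n
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: avg_abs_error(p, data))
        elite = pop[:10]
        children = list(elite)                   # elitism
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()                     # blend crossover
            child = tuple(w * x + (1.0 - w) * y for x, y in zip(a, b))
            if rng.random() < 0.3:               # mutation
                child = (child[0] * rng.uniform(0.95, 1.05),
                         child[1] * rng.uniform(0.5, 2.0),
                         child[2] * rng.uniform(0.98, 1.02))
            children.append(child)
        pop = children
    return min(pop, key=lambda p: avg_abs_error(p, data))

# Hypothetical "data sheet" I-V points generated from known parameters.
true_params = (5.0, 1e-8, 1.3)
data = [(v, model_current(v, *true_params))
        for v in (0.0, 5.0, 10.0, 15.0, 18.0, 20.0, 21.0)]
best = genetic_fit(data)
```

The full two-diode model with series and shunt resistances makes the I-V relation implicit, which is where the paper's numerical iterative solver comes in; the GA machinery itself is unchanged.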

  2. Diffraction Traveltime Approximation to Estimate Anisotropy Parameters in Complex TI Media

    KAUST Repository

    Waheed, Umair bin

    2013-05-01

    Diffracted waves carry valuable information that can help improve our velocity modeling capability for better imaging of the subsurface. They are especially useful for anisotropic media, as they inherently possess a wide range of dips necessary to resolve the angular dependence of velocity. We develop a scheme for diffraction traveltime computations based on perturbation theory for transversely isotropic media with a tilted axis of symmetry (TTI). The formulation has advantages on two fronts: first, it alleviates the computational complexity associated with solving the TTI eikonal equation; second, it provides a mechanism to scan for the best-fit anellipticity parameter without the need for repetitive modeling of traveltimes. The accuracy of such an expansion is further enhanced by the use of the Shanks transform. We establish the effectiveness of the proposed formulation with tests on a homogeneous TTI model and the BP TTI model.
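
The Shanks transform mentioned above accelerates the convergence of a sequence of partial sums. A minimal stdlib illustration on the alternating series for ln 2 (deliberately unrelated to the traveltime expansion itself):

```python
import math

def shanks(seq):
    # One application of the Shanks transformation to a list of partial sums:
    # S_n = (A_{n+1}*A_{n-1} - A_n^2) / (A_{n+1} + A_{n-1} - 2*A_n)
    return [(seq[i + 1] * seq[i - 1] - seq[i] ** 2) /
            (seq[i + 1] + seq[i - 1] - 2.0 * seq[i])
            for i in range(1, len(seq) - 1)]

# Partial sums of the slowly converging alternating series for ln 2.
partial = []
s = 0.0
for n in range(1, 12):
    s += (-1) ** (n + 1) / n
    partial.append(s)

once = shanks(partial)    # accelerated once
twice = shanks(once)      # iterated Shanks converges faster still
```

Eleven raw terms are still off in the second decimal place, while two Shanks passes over the same terms land within about 1e-4 of ln 2, which is why the transform is attractive for low-order perturbation expansions.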

  3. Comparison between different models for rheological characterization of sludge from settling tank

    Directory of Open Access Journals (Sweden)

    Malczewska Beata

    2017-09-01

    Municipal sludge exhibits non-Newtonian behaviour; the viscosity of sewage sludge is therefore not a constant value. The laboratory investigation was made using a coaxial-cylinder system with rotating torque measurement, and the gravimetric concentration of the investigated sludge ranged from 2.09% to 4.40%. This paper presents an investigation of the effect of concentration on the rheological behaviour of sludge. Three different rheological models, Bingham (plastic), Ostwald-de Waele (power-law), and Herschel-Bulkley, were fitted to the experimental data of shear stress as a function of shear rate. In this study, the 3-parameter Herschel-Bulkley model fit the experimental data best.
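
Since, for a fixed flow index n, the Herschel-Bulkley law τ = τ0 + K·γ̇^n is linear in (τ0, K), the three-parameter fit can be sketched as a grid search over n with ordinary least squares inside. The synthetic rheogram values below are hypothetical, not the paper's measurements.

```python
def linear_ls(xs, ys):
    # Ordinary least squares for y = a + b*x via the 2x2 normal equations.
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def fit_herschel_bulkley(rates, stresses):
    # For a fixed flow index n, tau = tau0 + K * rate**n is linear in (tau0, K);
    # scan n on a grid and keep the triple with the smallest residual sum of squares.
    best = None
    for i in range(1, 200):
        n = i / 100.0
        xs = [r ** n for r in rates]
        tau0, k = linear_ls(xs, stresses)
        rss = sum((tau0 + k * x - y) ** 2 for x, y in zip(xs, stresses))
        if best is None or rss < best[0]:
            best = (rss, tau0, k, n)
    return best[1], best[2], best[3]

# Synthetic rheogram from known (hypothetical) parameters.
true_tau0, true_k, true_n = 2.0, 0.8, 0.6
rates = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0, 200.0]
stresses = [true_tau0 + true_k * r ** true_n for r in rates]
tau0_hat, k_hat, n_hat = fit_herschel_bulkley(rates, stresses)
```

Setting τ0 = 0 recovers the Ostwald-de Waele (power-law) model, and fixing n = 1 recovers the Bingham plastic model, so the same scaffolding covers all three models compared in the paper.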

  4. An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics

    Science.gov (United States)

    Turkington, Bruce

    2013-08-01

    A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.

  5. Parameter Extraction Method for the Electrical Model of a Silicon Photomultiplier

    Science.gov (United States)

    Licciulli, Francesco; Marzocca, Cristoforo

    2016-10-01

    The availability of an effective electrical model, able to accurately reproduce the signals generated by a Silicon Photomultiplier coupled to its front-end electronics, is mandatory when the performance of a detection system based on this kind of detector has to be evaluated by means of reliable simulations. We propose a complete extraction procedure able to provide the whole set of parameters involved in a well-known model of the detector, which includes the substrate ohmic resistance. The technique achieves a very good fit between simulation results provided by the model and experimental data, thanks to accurate discrimination between the quenching and substrate resistances, which results in a realistic set of extracted parameters. The extraction procedure has been applied to a commercial device over a wide range of conditions in terms of input resistance of the front-end electronics and interconnection parasitics. In all the considered situations, very good correspondence has been found between simulations and measurements, especially for the leading edge of the current pulses generated by the detector, which strongly affects the timing performance of the detection system, thus confirming the effectiveness of the model and the associated parameter extraction technique.

  6. Analysis Test of Understanding of Vectors with the Three-Parameter Logistic Model of Item Response Theory and Item Response Curves Technique

    Science.gov (United States)

    Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan

    2016-01-01

    This study investigated the multiple-choice test of understanding of vectors (TUV), by applying item response theory (IRT). The difficulty, discrimination, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the PARSCALE program. The TUV ability is an ability parameter, here estimated assuming…
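
The three-parameter logistic (3PL) item response function itself is compact. A short sketch with a hypothetical item; the parameter values are illustrative, not the fitted TUV values:

```python
import math

def p_3pl(theta, a, b, c):
    # Three-parameter logistic item response function:
    # P(correct | theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
    # a: discrimination, b: difficulty, c: guessing (lower asymptote).
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# A hypothetical TUV-like item: discrimination a, difficulty b, guessing c.
a, b, c = 1.2, 0.5, 0.2
low = p_3pl(-4.0, a, b, c)    # low ability: probability near the guessing floor c
mid = p_3pl(b, a, b, c)       # at theta = b the curve sits midway between c and 1
high = p_3pl(4.0, a, b, c)    # high ability: probability near 1
```

The guessing parameter c raises the lower asymptote above zero, which is what distinguishes the 3PL from the 2PL model for multiple-choice items; the item response curves the paper examines are plots of this function against ability θ.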

  7. Automatic parameter estimation of multicompartmental neuron models via minimization of trace error with control adjustment.

    Science.gov (United States)

    Brookings, Ted; Goeritz, Marie L; Marder, Eve

    2014-11-01

    We describe a new technique to fit conductance-based neuron models to intracellular voltage traces from isolated biological neurons. The biological neurons are recorded in current-clamp with pink (1/f) noise injected to perturb the activity of the neuron. The new algorithm finds a set of parameters that allows a multicompartmental model neuron to match the recorded voltage trace. Attempting to match a recorded voltage trace directly has a well-known problem: mismatch in the timing of action potentials between biological and model neuron is inevitable and results in poor phenomenological match between the model and data. Our approach avoids this by applying a weak control adjustment to the model to promote alignment during the fitting procedure. This approach is closely related to the control theoretic concept of a Luenberger observer. We tested this approach on synthetic data and on data recorded from an anterior gastric receptor neuron from the stomatogastric ganglion of the crab Cancer borealis. To test the flexibility of this approach, the synthetic data were constructed with conductance models that were different from the ones used in the fitting model. For both synthetic and biological data, the resultant models had good spike-timing accuracy. Copyright © 2014 the American Physiological Society.
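
The control adjustment can be illustrated on a toy linear system rather than a multicompartmental neuron: a weak feedback term g·(data − model) keeps a mismatched model aligned with the recorded trace, in the spirit of a Luenberger observer. Everything below (the dynamics, gain, and step size) is a hypothetical sketch, not the paper's algorithm.

```python
import math

def simulate(decay, gain, reference, dt=0.01):
    # Euler-integrate dx/dt = -decay*x + sin(t), optionally nudged toward a
    # recorded reference trace by a weak control term gain*(reference - x).
    x = 0.0
    out = []
    for i, target in enumerate(reference):
        t = i * dt
        x += dt * (-decay * x + math.sin(t) + gain * (target - x))
        out.append(x)
    return out

steps = 2000
# "Recorded" trace from the true system (decay = 1.0, no control input).
data = simulate(1.0, 0.0, [0.0] * steps)
# Mismatched model (decay = 2.0), free-running vs. weakly controlled.
free = simulate(2.0, 0.0, data)
controlled = simulate(2.0, 5.0, data)

err_free = max(abs(a - b) for a, b in zip(free, data))
err_ctrl = max(abs(a - b) for a, b in zip(controlled, data))
```

Even with the wrong decay parameter, the controlled trajectory stays close to the data; in the fitting procedure this alignment is what prevents small spike-timing mismatches from dominating the error surface.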

  8. Refitting density dependent relativistic model parameters including Center-of-Mass corrections

    International Nuclear Information System (INIS)

    Avancini, Sidney S.; Marinelli, Jose R.; Carlson, Brett Vern

    2011-01-01

    Full text: Relativistic mean field models have become a standard approach for precise nuclear structure calculations. After the seminal work of Serot and Walecka, which introduced a model Lagrangian density where the nucleons interact through the exchange of scalar and vector mesons, several models were obtained through its generalization, including other meson degrees of freedom, non-linear meson interactions, meson-meson interactions, etc. More recently, density dependent coupling constants were incorporated into the Walecka-like models, which are now extensively used. In particular, for these models a connection with density functional theory can be established. Due to the inherent difficulties presented by field theoretical models, only the mean field approximation is used for the solution of these models. In order to calculate finite nuclei properties in the mean field approximation, a reference frame has to be fixed and therefore the translational symmetry is violated. It is well known that in such cases spurious effects due to the center-of-mass (COM) motion are present, which are more pronounced for light nuclei. In a previous work we proposed a technique based on the Peierls-Yoccoz projection operator applied to the mean-field relativistic solution, in order to project out spurious COM contributions. In this work we obtain a new fit for the density dependent parameters of a density dependent hadronic model, taking into account the COM corrections. Our fit is obtained taking into account the charge radii and binding energies for He-4, O-16, Ca-40, Ca-48, Ni-56, Ni-68, Sn-100, Sn-132 and Pb-208. We show that the nuclear observables calculated using our fit are of a quality comparable to others that can be found in the literature, with the advantage that now a translational invariant many-body wave function is at our disposal. (author)

  9. Combined Effects of Lignosus rhinocerotis Supplementation and Resistance Training on Isokinetic Muscular Strength and Power, Anaerobic and Aerobic Fitness Level, and Immune Parameters in Young Males.

    Science.gov (United States)

    Chen, Chee Keong; Hamdan, Nor Faeiza; Ooi, Foong Kiew; Wan Abd Hamid, Wan Zuraida

    2016-01-01

    This study investigated the effects of Lignosus rhinocerotis (LRS) supplementation and resistance training (RT) on isokinetic muscular strength and power, anaerobic and aerobic fitness, and immune parameters in young males. Participants were randomly assigned to four groups: Control (C), LRS, RT, and combined RT-LRS (RT-LRS). Participants in the LRS and RT-LRS groups consumed 500 mg of LRS daily for 8 weeks. RT was conducted 3 times/week for 8 weeks for participants in the RT and RT-LRS groups. The following parameters were measured before and after the intervention period: Anthropometric data, isokinetic muscular strength and power, and anaerobic and aerobic fitness. Blood samples were also collected to determine immune parameters. Isokinetic muscular strength and power, as well as anaerobic power and capacity and aerobic fitness, increased significantly in this group. Similarly, significant increases in anaerobic power and capacity, aerobic fitness, and T lymphocyte (CD3 and CD4) and B lymphocyte (CD19) counts were observed in the RT group. RT elicited increased isokinetic muscular strength and power, anaerobic and aerobic fitness, and immune parameters among young males. However, supplementation with LRS during RT did not provide additive benefits.

  10. Updated fit to three neutrino mixing: exploring the accelerator-reactor complementarity

    International Nuclear Information System (INIS)

    Esteban, Ivan; Gonzalez-Garcia, M.C.; Maltoni, Michele; Martinez-Soler, Ivan; Schwetz, Thomas

    2017-01-01

    We perform a combined fit to global neutrino oscillation data available as of fall 2016 in the scenario of three-neutrino oscillations and present updated allowed ranges of the six oscillation parameters. We discuss the differences arising between the consistent combination of the data samples from accelerator and reactor experiments compared to partial combinations. We quantify the confidence in the determination of the less precisely known parameters θ_23, δ_CP, and the neutrino mass ordering by performing a Monte Carlo study of the long baseline accelerator and reactor data. We find that the sensitivity to the mass ordering and the θ_23 octant is below 1σ. Maximal θ_23 mixing is allowed at slightly more than 90% CL. The best fit for the CP violating phase is around 270°, CP conservation is allowed at slightly above 1σ, and values of δ_CP ≃ 90° are disfavored at around 99% CL for normal ordering and at higher CL for inverted ordering.

  11. Some dynamical aspects of interacting quintessence model

    Science.gov (United States)

    Choudhury, Binayak S.; Mondal, Himadri Shekhar; Chatterjee, Devosmita

    2018-04-01

    In this paper, we consider a particular form of coupling, namely B = σ(ρ̇_m - ρ̇_φ), in spatially flat (k=0) Friedmann-Lemaitre-Robertson-Walker (FLRW) space-time. We perform phase-space analysis for this interacting quintessence (dark energy) and dark matter model for different numerical values of the parameters. We also show the phase-space analysis for the `best-fit Universe' or concordance model. In our analysis, we observe the existence of late-time scaling attractors.

  12. Association between obesity and various parameters of physical fitness in Korean students.

    Science.gov (United States)

    Kim, Jae-Woo; Seo, Dong-Il; Swearingin, B; So, Wi-Young

    2013-01-01

    The purpose of this study was to evaluate the association between the types of obesity classified according to the body mass index (BMI) and/or waist circumference (WC) and the various parameters of physical fitness in Korean college students. BMI, WC, and fitness assessments were performed on 726 male college student volunteers who visited a public health center in Seoul, Korea. Classification based on BMI and/or WC was established according to the data in the WHO's Asia-Pacific standard report, and the subjects were divided into the following 4 groups: (1) obese as determined by BMI, but not WC (BMI Obesity Group, BOG); (2) obese as determined by WC, but not BMI (WC Obesity Group, WOG); (3) obese as determined by both BMI and WC (BWOG); and (4) non-obese normal group (NG). Fitness assessment parameters such as cardiorespiratory endurance, cardiovascular function, muscular endurance, muscular strength, flexibility, power, agility, and balance were evaluated through the following measurements: time required to run 1.5 km, physical efficiency index (PEI), vital capacity (ℓ), push-ups (reps/2 min), sit-ups (reps/2 min), back strength (kg), grip strength (kg), sit and reach distance (cm), vertical jumps (cm), whole body reaction time (ms), side steps (reps/30 s), and maximum time of standing on 1 foot with closed eyes (s). The odds ratios (OR) (95% confidence interval [CI]) of the BOG and WOG for the 1.5-km run were 0.367 (0.192-0.701) and 0.168 (0.037-0.773), respectively; of the BWOG and WOG for vital capacity were 5.900 (1.298-26.827) and 5.364 (1.166-24.670), respectively; of the BOG for push-ups was 0.517 (0.279-0.959); of the WOG for back strength was 0.206 (0.045-0.945); of the BWOG and BOG for grip strength were 5.973 (1.314-27.157) and 2.036 (1.089-3.807), respectively; and of the BOG for the whole body reaction time was 0.405 (0.212-0.774), as compared to the NG. We conclude that all 3 types of obesity (classified into the BWOG, BOG, and WOG) result in

  13. Fitting monthly Peninsula Malaysian rainfall using Tweedie distribution

    Science.gov (United States)

    Yunus, R. M.; Hasan, M. M.; Zubairi, Y. Z.

    2017-09-01

    In this study, the Tweedie distribution was used to fit the monthly rainfall data from 24 monitoring stations of Peninsula Malaysia for the period from January 2008 to April 2015. The aim of the study is to determine whether the distributions within the Tweedie family fit the monthly Malaysian rainfall data well. Within the Tweedie family, the gamma distribution is generally used for fitting rainfall totals; however, the Poisson-gamma distribution is more useful for describing two important features of the rainfall pattern, namely the occurrences (dry months) and the amounts (wet months). First, the appropriate distribution of the monthly rainfall was identified within the Tweedie family for each station. Then, the Tweedie Generalised Linear Model (GLM) with no explanatory variable was used to model the monthly rainfall data. Graphical representation was used to assess model appropriateness. The QQ plots of quantile residuals show that the Tweedie models fit the monthly rainfall data better for the majority of the stations in the west coast and midland than those in the east coast of the Peninsula. This significant finding suggests that the best-fitted distribution depends on the geographical location of the monitoring station. In this paper, a simple model is developed for generating synthetic rainfall data for use in various areas, including agriculture and irrigation. We have shown that data simulated using the Tweedie distribution have a fairly similar frequency histogram to that of the actual data. Both the mean number of rainfall events and the mean amount of rain for a month were estimated simultaneously for the case where the Poisson-gamma distribution fits the data reasonably well. Thus, this work complements previous studies that fit the rainfall amount and the occurrence of rainfall events separately, each to a different distribution.
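
The Poisson-gamma (compound Poisson) member of the Tweedie family can be simulated directly: a month has N ~ Poisson(λ) rain events, each adding a gamma-distributed amount, so N = 0 yields an exact zero (a dry month). A stdlib-only sketch with hypothetical parameters, not the fitted Malaysian values:

```python
import math, random

def simulate_monthly_rainfall(lam, shape, scale, months, rng):
    # Poisson-gamma (compound Poisson) model: each month has N ~ Poisson(lam)
    # rain events, each adding a Gamma(shape, scale) amount; N = 0 gives an
    # exact zero, i.e. a dry month, with probability exp(-lam).
    totals = []
    for _ in range(months):
        # Poisson sampling by multiplying uniforms (Knuth's method).
        n, p, limit = 0, 1.0, math.exp(-lam)
        while True:
            p *= rng.random()
            if p <= limit:
                break
            n += 1
        totals.append(sum(rng.gammavariate(shape, scale) for _ in range(n)))
    return totals

rng = random.Random(42)
lam, shape, scale = 2.0, 2.0, 5.0            # hypothetical monthly parameters
totals = simulate_monthly_rainfall(lam, shape, scale, 20000, rng)
sim_mean = sum(totals) / len(totals)
theo_mean = lam * shape * scale              # E[total] = lam * shape * scale
dry_frac = sum(1 for t in totals if t == 0.0) / len(totals)
```

This is exactly the property the abstract highlights: a single distribution produces both the frequency of dry months (the point mass at zero) and the continuous distribution of wet-month amounts.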

  14. PF2fit: Polar Fast Fourier Matched Alignment of Atomistic Structures with 3D Electron Microscopy Maps.

    Directory of Open Access Journals (Sweden)

    Radhakrishna Bettadapura

    2015-10-01

    There continue to be increasing occurrences of both atomistic structure models in the PDB (possibly reconstructed from X-ray diffraction or NMR data), and 3D reconstructed cryo-electron microscopy (3D EM) maps (albeit at coarser resolution) of the same or homologous molecule or molecular assembly, deposited in the EMDB. To obtain the best possible structural model of the molecule at the best achievable resolution, and without any missing gaps, one typically aligns (matches) and fits the atomistic structure model with the 3D EM map. We discuss a new algorithm and generalized framework, named PF2fit (Polar Fast Fourier Fitting), for the best possible structural alignment of atomistic structures with 3D EM. While PF2fit enables only a rigid, six-dimensional (6D) alignment method, it augments prior work on 6D X-ray structure and 3D EM alignment in multiple ways. Scoring: PF2fit includes a new scoring scheme that, in addition to rewarding overlaps between the volumes occupied by the atomistic structure and 3D EM map, rewards overlaps between the volumes complementary to them. We quantitatively demonstrate how this new complementary scoring scheme improves upon existing approaches. PF2fit also includes two scoring functions, the non-uniform exterior penalty and the skeleton-secondary structure score, and implements the scattering potential score as an alternative to traditional Gaussian blurring. Search: PF2fit utilizes a fast polar Fourier search scheme, whose main advantage is the ability to search over uniformly and adaptively sampled subsets of the space of rigid-body motions. PF2fit also implements a new reranking search and scoring methodology that considerably improves alignment metrics in results obtained from the initial search.

  15. The Soldier Fitness Tracker: global delivery of Comprehensive Soldier Fitness.

    Science.gov (United States)

    Fravell, Mike; Nasser, Katherine; Cornum, Rhonda

    2011-01-01

    Carefully implemented technology strategies are vital to the success of large-scale initiatives such as the U.S. Army's Comprehensive Soldier Fitness (CSF) program. Achieving the U.S. Army's vision for CSF required a robust information technology platform that was scaled to millions of users and that leveraged the Internet to enable global reach. The platform needed to be agile, provide powerful real-time reporting, and have the capacity to quickly transform to meet emerging requirements. Existing organizational applications, such as "Single Sign-On," and authoritative data sources were exploited to the maximum extent possible. Development of the "Soldier Fitness Tracker" is the most recent, and possibly the best, demonstration of the potential benefits possible when existing organizational capabilities are married to new, innovative applications. Combining the capabilities of the extant applications with the newly developed applications expedited development, eliminated redundant data collection, resulted in the exceeding of program objectives, and produced a comfortable experience for the end user, all in less than six months. This is a model for future technology integration. (c) 2010 APA, all rights reserved.

  16. Describing Growth Pattern of Bali Cows Using Non-linear Regression Models

    Directory of Open Access Journals (Sweden)

    Mohd. Hafiz A.W

    2016-12-01

    The objective of this study was to evaluate the best-fit non-linear regression model to describe the growth pattern of Bali cows. Estimates of asymptotic mature weight, rate of maturing and constant of integration were derived from Brody, von Bertalanffy, Gompertz and Logistic models, which were fitted to cross-sectional data of body weight taken from 74 Bali cows raised in MARDI Research Station Muadzam Shah, Pahang. The coefficient of determination (R²) and residual mean squares (MSE) were used to determine the best-fit model for describing the growth pattern of Bali cows. The von Bertalanffy model was the best of the four growth functions evaluated for determining the mature weight of Bali cattle, as shown by the highest R² and lowest MSE values (0.973 and 601.9, respectively), followed by the Gompertz (0.972 and 621.2, respectively), Logistic (0.971 and 648.4, respectively) and Brody (0.932 and 660.5, respectively) models. The correlation between rate of maturing and mature weight was negative, in the range of -0.170 to -0.929 for all models, indicating that animals of heavier mature weight had a lower rate of maturing. The use of a non-linear model summarizes the weight-age relationship into several biologically interpretable parameters, compared to the entire lifespan of weight-age data points, which are difficult and time-consuming to interpret.
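
As a sketch of how such a model condenses weight-age data into interpretable parameters, the von Bertalanffy curve W(t) = A(1 − b·e^(−kt))³ can be fitted by a coarse grid search on synthetic data. The ages, weights, and parameter ranges below are hypothetical, not the Bali cattle data:

```python
import math

def von_bertalanffy(t, a, b, k):
    # W(t) = A * (1 - b*exp(-k*t))**3: asymptotic mature weight A,
    # constant of integration b, rate of maturing k.
    return a * (1.0 - b * math.exp(-k * t)) ** 3

def fit_vb(ages, weights):
    # Coarse grid search minimizing MSE; a sketch, not a production fitter.
    best = None
    for a in range(300, 501, 10):            # mature weight 300..500 kg
        for bi in range(20, 81):             # b in 0.20..0.80
            b = bi / 100.0
            for ki in range(5, 51):          # k in 0.005..0.050 per month
                k = ki / 1000.0
                mse = sum((von_bertalanffy(t, a, b, k) - w) ** 2
                          for t, w in zip(ages, weights)) / len(ages)
                if best is None or mse < best[0]:
                    best = (mse, a, b, k)
    return best

# Synthetic cross-sectional data from known (hypothetical) parameters.
ages = [3, 6, 12, 24, 36, 48, 60, 72]        # months
true_a, true_b, true_k = 400, 0.55, 0.03
weights = [von_bertalanffy(t, true_a, true_b, true_k) for t in ages]
mse, a_hat, b_hat, k_hat = fit_vb(ages, weights)
mean_w = sum(weights) / len(weights)
r2 = 1.0 - mse * len(ages) / sum((w - mean_w) ** 2 for w in weights)
```

Swapping in the Brody, Gompertz, or Logistic equations changes only the model function, so the same R²/MSE comparison the paper performs drops out directly.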

  17. Experiences With an Optimal Estimation Algorithm for Surface and Atmospheric Parameter Retrieval From Passive Microwave Data in the Arctic

    DEFF Research Database (Denmark)

    Scarlat, Raul Cristian; Heygster, Georg; Pedersen, Leif Toudal

    2017-01-01

    is constrained using numerical weather prediction data in order to retrieve a set of geophysical parameters that best fit the measurements. A sensitivity study demonstrates the method is robust and that the solution it provides is not dependent on initialization conditions. The retrieval parameters have been...

  18. Checking the Adequacy of Fit of Models from Split-Plot Designs

    DEFF Research Database (Denmark)

    Almini, A. A.; Kulahci, Murat; Montgomery, D. C.

    2009-01-01

    One of the main features that distinguish split-plot experiments from other experiments is that they involve two types of experimental errors: the whole-plot (WP) error and the subplot (SP) error. Taking this into consideration is very important when computing measures of adequacy of fit for split-plot models. In this article, we propose the computation of two R², R²-adjusted, prediction error sums of squares (PRESS), and R²-prediction statistics to measure the adequacy of fit for the WP and the SP submodels in a split-plot design. This is complemented with the graphical analysis of the two types of errors to check for any violation of the underlying assumptions and the adequacy of fit of split-plot models. Using examples, we show how computing two measures of model adequacy of fit for each split-plot design model is appropriate and useful, as they reveal whether the correct WP and SP effects have
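
The PRESS statistic mentioned above can be computed without refitting, using the leverage shortcut e_i/(1 − h_ii). A sketch for simple linear regression, cross-checked against explicit leave-one-out refits; the data values are illustrative:

```python
def press_simple_linear(xs, ys):
    # PRESS via the leverage shortcut: the leave-one-out residual equals
    # e_i / (1 - h_ii), with leverage h_ii = 1/n + (x_i - xbar)^2 / Sxx.
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    a = ybar - b * xbar
    press = 0.0
    for x, y in zip(xs, ys):
        e = y - (a + b * x)
        h = 1.0 / n + (x - xbar) ** 2 / sxx
        press += (e / (1.0 - h)) ** 2
    return press

def press_brute_force(xs, ys):
    # Same quantity by explicitly refitting with each point deleted.
    press = 0.0
    for i in range(len(xs)):
        xr = xs[:i] + xs[i + 1:]
        yr = ys[:i] + ys[i + 1:]
        n = len(xr)
        xbar = sum(xr) / n
        ybar = sum(yr) / n
        sxx = sum((x - xbar) ** 2 for x in xr)
        b = sum((x - xbar) * (y - ybar) for x, y in zip(xr, yr)) / sxx
        a = ybar - b * xbar
        press += (ys[i] - (a + b * xs[i])) ** 2
    return press

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.1, 1.9, 3.2, 3.9, 5.2, 5.8]
```

R²-prediction then follows as 1 − PRESS/SS_total; in the split-plot setting these statistics are computed once for the WP submodel and once for the SP submodel.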

  19. HDFITS: Porting the FITS data model to HDF5

    Science.gov (United States)

    Price, D. C.; Barsdell, B. R.; Greenhill, L. J.

    2015-09-01

    The FITS (Flexible Image Transport System) data format has been the de facto data format for astronomy-related data products since its inception in the late 1970s. While the FITS file format is widely supported, it lacks many of the features of more modern data serialization formats, such as the Hierarchical Data Format (HDF5). The HDF5 file format offers considerable advantages over FITS, such as improved I/O speed and compression, but has yet to gain widespread adoption within astronomy. One of the major holdbacks is that HDF5 is not well supported by data reduction software packages and image viewers. Here, we present a comparison of FITS and HDF5 as formats for the storage of astronomy datasets. We show that the underlying data model of FITS can be ported to HDF5 in a straightforward manner, and that by doing so the advantages of the HDF5 file format can be leveraged immediately. In addition, we present a software tool, fits2hdf, for converting between FITS and a new 'HDFITS' format, where data are stored in HDF5 in a FITS-like manner. We show that HDFITS allows faster reading of data (up to 100× faster than FITS in some use cases) and improved compression (higher compression ratios and higher throughput). Finally, we show that by only changing the import lines in Python-based FITS utilities, HDFITS-formatted data can be presented transparently as an in-memory FITS equivalent.

  20. Parametric fitting of corneal height data to a biconic surface.

    Science.gov (United States)

    Janunts, Edgar; Kannengießer, Marc; Langenbucher, Achim

    2015-03-01

    As the average corneal shape can effectively be approximated by a conic section, a determination of the corneal shape by biconic parameters is desired. The purpose of this paper is to introduce a straightforward mathematical approach for extracting clinically relevant parameters of the corneal surface, such as the radii of curvature and conic constants for the principal meridians, and astigmatism. A general description for modeling the ocular surfaces in biconic form is given, based on which an implicit parametric surface fitting algorithm is introduced. The solution of the biconic fitting is obtained by a two-step sequential least-squares optimization approach with constraints. The input can be raw data from any corneal topographer, not necessarily with a uniform data distribution. Various simulated and clinical data are studied, including surfaces with rotationally symmetric and non-symmetric geometries. The clinical data were obtained from the Pentacam (Oculus) for a patient who had undergone refractive surgery. Sub-micrometer fitting accuracy was obtained for all simulated surfaces: at most 0.08 μm RMS fitting error for rotationally symmetric and 0.125 μm for non-symmetric surfaces. The astigmatism was recovered at sub-minute resolution. The presented model is shown to equal the widely used quadric fitting model for rotationally symmetric surfaces and to outperform it for non-symmetric surfaces. The introduced biconic surface fitting algorithm is able to recover the apical radii of curvature and conic constants in the principal meridians. This methodology could be a platform for advanced IOL calculations and enhanced contact lens fitting. Copyright © 2014. Published by Elsevier GmbH.
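
For a single rotationally symmetric meridian, the conic relation r² = 2Rz − (1+k)z² is linear in R and (1+k), so the apical radius and conic constant can be recovered by plain linear least squares. This is only a sketch of that idea, not the paper's constrained biconic algorithm, and the cornea-like values are hypothetical:

```python
import math

def conic_sag(r, radius, k):
    # Sag of a rotationally symmetric conic with apical radius and conic constant k.
    c = 1.0 / radius
    return c * r * r / (1.0 + math.sqrt(1.0 - (1.0 + k) * c * c * r * r))

def fit_conic(rs, zs):
    # The conic satisfies r^2 = 2*R*z - (1+k)*z^2 exactly, so (R, p = 1+k) comes
    # from linear least squares: minimize sum(2*R*z - p*z^2 - r^2)^2.
    s_zz = sum(z * z for z in zs)
    s_z3 = sum(z ** 3 for z in zs)
    s_z4 = sum(z ** 4 for z in zs)
    t1 = sum(z * r * r for z, r in zip(zs, rs))
    t2 = sum(z * z * r * r for z, r in zip(zs, rs))
    # Normal equations: [[4*s_zz, -2*s_z3], [-2*s_z3, s_z4]] @ [R, p] = [2*t1, -t2]
    det = 4.0 * s_zz * s_z4 - 4.0 * s_z3 * s_z3
    big_r = (2.0 * t1 * s_z4 - 2.0 * s_z3 * t2) / det
    p = (-4.0 * s_zz * t2 + 4.0 * s_z3 * t1) / det
    return big_r, p - 1.0

# Cornea-like synthetic meridian: apical radius 7.8 mm, conic constant -0.2.
rs = [0.5 * i for i in range(1, 9)]          # radial positions 0.5..4.0 mm
zs = [conic_sag(r, 7.8, -0.2) for r in rs]
r_hat, k_hat = fit_conic(rs, zs)
```

A biconic generalizes this by giving each principal meridian its own radius and conic constant plus an axis orientation, which is why the paper needs the sequential constrained optimization rather than a single linear solve.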

  1. Application of Best Estimate Approach for Modelling of QUENCH-03 and QUENCH-06 Experiments

    Directory of Open Access Journals (Sweden)

    Tadas Kaliatka

    2016-04-01

    In this article, the QUENCH-03 and QUENCH-06 experiments are modelled using the ASTEC and RELAP/SCDAPSIM codes. For the uncertainty and sensitivity analysis, the SUSA3.5 and SUNSET tools were used. The article demonstrates that, by applying the best estimate approach, it is possible to develop a basic QUENCH input deck and two sets of input parameters covering the maximal and minimal ranges of uncertainties. These allow different (but similar in nature) tests to be simulated, yielding calculation results with an evaluated range of uncertainties.

  2. Fitting diameter distribution models to data from forest inventories with concentric plot design

    Energy Technology Data Exchange (ETDEWEB)

    Nanos, N.; Sjöstedt de Luna, S.

    2017-11-01

    Aim: Several national forest inventories use a complex plot design based on multiple concentric subplots, where smaller-diameter trees are inventoried only when lying in the smaller-radius subplots and are ignored otherwise. Data from these plots are truncated, with threshold (truncation) diameters varying according to the distance from the plot centre. In this paper we designed a maximum likelihood method to fit the Weibull diameter distribution to data from concentric plots. Material and methods: Our method (M1) was based on multiple truncated probability density functions to build the likelihood. In addition, we used an alternative method (M2) presented recently. We used methods M1 and M2, as well as two other reference methods, to estimate the Weibull parameters in 40,000 simulated plots. The spatial tree pattern of the simulated plots was generated using four models of spatial point patterns. Two error indices were used to assess the relative performance of M1 and M2 in estimating relevant stand-level variables. In addition, we estimated the quadratic mean plot diameter (QMD) using expansion factors (EFs). Main results: Methods M1 and M2 produced comparable estimation errors for random and clustered tree spatial patterns. Method M2 produced biased parameter estimates in plots with inhomogeneous Poisson patterns. Estimation of QMD using EFs also produced biased results in plots with inhomogeneous-intensity Poisson patterns. Research highlights: We designed a new method to fit the Weibull distribution to forest inventory data from concentric plots that achieves high accuracy and precision in parameter estimates regardless of the within-plot spatial tree pattern.
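
    The likelihood construction behind method M1 can be illustrated with a left-truncated Weibull fit in which each tree carries its own truncation threshold. This is a hedged sketch, not the authors' implementation: the thresholds, sample size, and coarse grid search are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand: Weibull diameters with shape k=2, scale lam=20 cm
k_true, lam_true = 2.0, 20.0
d = lam_true * rng.weibull(k_true, 5000)

# concentric-plot truncation: each tree's threshold depends on its (hypothetical)
# distance from the plot centre; a tree is recorded only if d >= its threshold
thresh = rng.choice([7.5, 12.5, 22.5], size=d.size)
keep = d >= thresh
d, thresh = d[keep], thresh[keep]

def nll(k, lam):
    """Negative log-likelihood from left-truncated Weibull densities:
    each tree contributes -log[ f(d; k, lam) / S(threshold; k, lam) ]."""
    s = d / lam
    return -np.sum(np.log(k / lam) + (k - 1.0) * np.log(s) - s**k
                   + (thresh / lam)**k)

# coarse grid search over (k, lam); a real analysis would use an optimizer
ks = np.arange(1.5, 2.6, 0.02)
lams = np.arange(16.0, 24.0, 0.1)
vals = np.array([[nll(k, lam) for lam in lams] for k in ks])
i, j = np.unravel_index(vals.argmin(), vals.shape)
k_hat, lam_hat = ks[i], lams[j]
```

    Dividing each density by the survival function at that tree's own threshold is what makes the estimates unbiased despite the size-biased sample.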

  3. The Impact of Modeling Assumptions in Galactic Chemical Evolution Models

    Science.gov (United States)

    Côté, Benoit; O'Shea, Brian W.; Ritter, Christian; Herwig, Falk; Venn, Kim A.

    2017-02-01

    We use the OMEGA galactic chemical evolution code to investigate how the assumptions used for the treatment of galactic inflows and outflows impact numerical predictions. The goal is to determine how our capacity to reproduce the chemical evolution trends of a galaxy is affected by the choice of implementation used to include those physical processes. In pursuit of this goal, we experiment with three different prescriptions for galactic inflows and outflows and use OMEGA within a Markov Chain Monte Carlo code to recover the set of input parameters that best reproduces the chemical evolution of nine elements in the dwarf spheroidal galaxy Sculptor. This provides a consistent framework for comparing the best-fit solutions generated by our different models. Despite their different degrees of intended physical realism, we found that all three prescriptions can reproduce in an almost identical way the stellar abundance trends observed in Sculptor. This result supports the similar conclusions originally claimed by Romano & Starkenburg for Sculptor. While the three models have the same capacity to fit the data, the best values recovered for the parameters controlling the number of SNe Ia and the strength of galactic outflows are substantially different and in fact mutually exclusive from one model to another. For the purpose of understanding how a galaxy evolves, we conclude that only reproducing the evolution of a limited number of elements is insufficient and can lead to misleading conclusions. More elements or additional constraints such as the galaxy's star-formation efficiency and the gas fraction are needed in order to break the degeneracy between the different modeling assumptions. Our results show that the successes and failures of chemical evolution models are predominantly driven by the input stellar yields, rather than by the complexity of the galaxy model itself. Simple models such as OMEGA are therefore sufficient to test and validate stellar yields.

  4. Parameter Estimation as a Problem in Statistical Thermodynamics.

    Science.gov (United States)

    Earle, Keith A; Schneider, David J

    2011-03-14

    In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one-particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and that the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation, and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.

  5. Intelligent design of mechanical parameters of the joint in vehicle body concept design model

    Science.gov (United States)

    Hou, Wen-bin; Zhang, Hong-zhe; Hou, Da-jun; Hu, Ping

    2013-05-01

    In order to estimate the mechanical properties of the overall structure of the body accurately and quickly in the conceptual design phase, mixed beam and shell elements were used to build a simplified finite element model of the vehicle body. Using the BP neural network algorithm, the mechanical-property parameters of the joint elements that most affected calculation accuracy were calculated, and a joint finite element model based on these parameters was constructed. A case study showed that the method can improve the accuracy of vehicle simulation results while not requiring many design details, which fits the demands of the vehicle body conceptual design phase.

  6. The drift diffusion model as the choice rule in reinforcement learning.

    Science.gov (United States)

    Pedersen, Mads Lund; Frank, Michael J; Biele, Guido

    2017-08-01

    Current reinforcement-learning models often assume simplified decision processes that do not fully reflect the dynamic complexities of choice processes. Conversely, sequential-sampling models of decision making account for both choice accuracy and response time, but assume that decisions are based on static decision values. To combine these two computational models of decision making and learning, we implemented reinforcement-learning models in which the drift diffusion model describes the choice process, thereby capturing both within- and across-trial dynamics. To exemplify the utility of this approach, we quantitatively fit data from a common reinforcement-learning paradigm using hierarchical Bayesian parameter estimation, and compared model variants to determine whether they could capture the effects of stimulant medication in adult patients with attention-deficit hyperactivity disorder (ADHD). The model with the best relative fit provided a good description of the learning process, choices, and response times. A parameter recovery experiment showed that the hierarchical Bayesian modeling approach enabled accurate estimation of the model parameters. The model approach described here, using simultaneous estimation of reinforcement-learning and drift diffusion model parameters, shows promise for revealing new insights into the cognitive and neural mechanisms of learning and decision making, as well as the alteration of such processes in clinical groups.
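
    A minimal forward simulation can illustrate the combined model: learned Q-values set the drift rate of a diffusion process that generates both choices and response times. The sketch below uses simple Euler integration and hypothetical parameter values (learning rate, drift scaling, boundary, non-decision time); it is not the paper's hierarchical Bayesian estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def ddm_trial(v, a=1.0, z=0.5, noise=1.0, dt=1e-3, t0=0.3):
    """Euler simulation of one drift diffusion trial.
    Returns (choice, rt): choice 0 if the upper bound is hit, 1 otherwise."""
    x, t = z * a, 0.0
    while 0.0 < x < a:
        x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (0 if x >= a else 1), t + t0

alpha, scale = 0.1, 3.0        # hypothetical learning rate and drift scaling
p_reward = [0.8, 0.2]          # option 0 is objectively better
Q = np.zeros(2)
for _ in range(300):
    # drift rate is proportional to the learned value difference
    choice, rt = ddm_trial(scale * (Q[0] - Q[1]))
    reward = float(rng.random() < p_reward[choice])
    Q[choice] += alpha * (reward - Q[choice])   # delta-rule update
```

    As learning proceeds, the growing value difference increases the drift rate, so the simulated agent becomes both more accurate and faster, which is the within- and across-trial coupling the abstract describes.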

  7. Isochrone Fitting of Hubble Photometry in UV–VIS–IR Bands

    Science.gov (United States)

    Barker, Hallie; Paust, Nathaniel E. Q.

    2018-03-01

    We present new isochrone fits to color–magnitude diagrams from Hubble Space Telescope Wide Field Camera 3 and Advanced Camera for Surveys photometry of the globular clusters M13 and M80 in five bands from the ultraviolet to the near-infrared. Isochrone fits to the photometry using the Dartmouth Stellar Evolution Program (DSEP), the PAdova and TRieste Stellar Evolution Code (PARSEC), and MESA Isochrones and Stellar Tracks (MIST) are examined to study the isochrone morphology. Additionally, cluster ages, extinctions, and distances are found from the visible-infrared color–magnitude diagrams. We conduct careful qualitative analysis of the inconsistencies of the fits across twelve color combinations of the five observed bands, and find that the (F606W−F814W) color generally produces very good fits, but that there are large discrepancies for all three models when the data are fit using colors that include UV bands. We also find that the best fits in the UV are achieved using MIST isochrones, but that they require metallicities lower than those of the other two models, as well as published spectroscopic values. Finally, we directly compare DSEP and PARSEC by performing isochrone-to-isochrone fitting, and find that, for populations of globular cluster age, similar-appearing PARSEC isochrones are on average 1.5 Gyr younger than DSEP isochrones. We find that the two models become less discrepant at lower metallicities.

  8. Stationary and non-stationary extreme value modeling of extreme temperature in Malaysia

    Science.gov (United States)

    Hasan, Husna; Salleh, Nur Hanim Mohd; Kassim, Suraiya

    2014-09-01

    Extreme annual temperature of eighteen stations in Malaysia is fitted to the Generalized Extreme Value distribution. Stationary and non-stationary models with trend are considered for each station and the Likelihood Ratio test is used to determine the best-fitting model. Results show that three out of eighteen stations i.e. Bayan Lepas, Labuan and Subang favor a model which is linear in the location parameter. A hierarchical cluster analysis is employed to investigate the existence of similar behavior among the stations. Three distinct clusters are found in which one of them consists of the stations that favor the non-stationary model. T-year estimated return levels of the extreme temperature are provided based on the chosen models.
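
    The stationary-versus-trend comparison can be sketched with a likelihood-ratio test. For brevity the sketch uses the Gumbel distribution (the GEV with shape parameter zero) and synthetic data with a known linear trend in the location parameter; the 3.841 threshold is the 95th percentile of the chi-square distribution with one degree of freedom. All numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.arange(50, dtype=float)                        # year index
x = 30.0 + 0.1 * t + rng.gumbel(0.0, 1.0, t.size)     # trend in the location

def nll_stationary(theta):
    mu, log_sig = theta
    s = (x - mu) / np.exp(log_sig)
    return np.sum(log_sig + s + np.exp(-s))           # Gumbel neg. log-likelihood

def nll_trend(theta):
    b0, b1, log_sig = theta                           # location mu_t = b0 + b1*t
    s = (x - (b0 + b1 * t)) / np.exp(log_sig)
    return np.sum(log_sig + s + np.exp(-s))

fit0 = minimize(nll_stationary, [x.mean(), 0.0], method="Nelder-Mead")
fit1 = minimize(nll_trend, [x.mean(), 0.0, 0.0], method="Nelder-Mead")

lrt = 2.0 * (fit0.fun - fit1.fun)   # asymptotically chi-square(1) under H0
trend_preferred = lrt > 3.841       # 95th percentile of chi-square(1)
```

    The stationary model is nested in the trend model (b1 = 0), which is what licenses the chi-square reference distribution used for model choice in the abstract.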

  9. Automatic Curve Fitting Based on Radial Basis Functions and a Hierarchical Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    G. Trejo-Caballero

    2015-01-01

    Curve fitting is a very challenging problem that arises in a wide variety of scientific and engineering applications. Given a set of data points, possibly noisy, the goal is to build a compact representation of the curve that corresponds to the best estimate of the unknown underlying relationship between two variables. Despite the large number of methods available to tackle this problem, it remains challenging and elusive. In this paper, a new method to tackle the problem using strictly a linear combination of radial basis functions (RBFs) is proposed. To be more specific, we divide the parameter search space into linear and nonlinear parameter subspaces. We use a hierarchical genetic algorithm (HGA) to minimize a model selection criterion, which allows us to automatically and simultaneously determine the nonlinear parameters, and then to compute the linear parameters by the least-squares method through Singular Value Decomposition. The method is fully automatic and does not require subjective parameters, for example, a smoothing factor or centre locations, to perform the solution. In order to validate the efficacy of our approach, we perform an experimental study with several tests on benchmark smooth functions. A comparative analysis with two successful methods based on RBF networks is included.
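
    The linear half of this decomposition can be sketched directly: for fixed centres and widths (the nonlinear parameters the HGA would search over), the RBF weights follow from an SVD-based least-squares solve. The centre count, width, and test function below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 200)
y = np.sinc(x) + 0.02 * rng.standard_normal(x.size)   # noisy samples of sinc

# nonlinear parameters (centres, width): fixed here; the HGA would search these
centers = np.linspace(-5.0, 5.0, 15)
width = 0.8

# linear parameters: Gaussian-RBF weights from an SVD-based least-squares solve
Phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w
rmse = np.sqrt(np.mean((y_hat - np.sinc(x)) ** 2))
```

    Splitting the search this way keeps the expensive evolutionary search in the low-dimensional nonlinear subspace while the weights are obtained exactly at each evaluation.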

  10. SIMULTANEOUS FITS IN ISIS ON THE EXAMPLE OF GRO J1008–57

    Directory of Open Access Journals (Sweden)

    M. Kühnel

    2015-04-01

    Parallel computing and steadily increasing computation speed have led to a new tool for analyzing multiple datasets and datatypes: fitting several datasets simultaneously. With this technique, physically connected parameters of individual datasets can be treated as a single parameter by implementing this connection directly into the fit. We discuss the terminology, implementation, and possible issues of simultaneous fits based on the Interactive Spectral Interpretation System (ISIS) X-ray data analysis tool. While all data modeling tools in X-ray astronomy in principle allow data from multiple datasets to be fitted individually, the syntax used in these tools is often not well suited for this task. Applying simultaneous fits to the transient X-ray binary GRO J1008–57, we find that the spectral shape depends only on X-ray flux. We determine time-independent parameters, e.g., the folding energy Efold, with unprecedented precision.
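
    The essence of a simultaneous fit, a physically shared parameter constrained by all datasets at once, can be sketched with two linear datasets that share a slope but keep independent intercepts. This is a generic numpy illustration, not ISIS syntax.

```python
import numpy as np

rng = np.random.default_rng(5)
x1, x2 = np.linspace(0, 10, 40), np.linspace(0, 10, 35)
y1 = 2.0 * x1 + 1.0 + rng.normal(0, 0.1, x1.size)   # dataset 1
y2 = 2.0 * x2 - 3.0 + rng.normal(0, 0.1, x2.size)   # dataset 2, same slope

# design matrix: one shared slope column, one intercept column per dataset
A = np.zeros((x1.size + x2.size, 3))
A[:x1.size, 0], A[:x1.size, 1] = x1, 1.0
A[x1.size:, 0], A[x1.size:, 2] = x2, 1.0
theta, *_ = np.linalg.lstsq(A, np.concatenate([y1, y2]), rcond=None)
slope, b1, b2 = theta        # slope is constrained by both datasets at once
```

    Fitting the shared parameter against the combined data tightens its uncertainty relative to fitting each dataset separately and averaging afterwards.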

  11. Otolith reading and multi-model inference for improved estimation of age and growth in the gilthead seabream Sparus aurata (L.)

    Science.gov (United States)

    Mercier, Lény; Panfili, Jacques; Paillon, Christelle; N'diaye, Awa; Mouillot, David; Darnaude, Audrey M.

    2011-05-01

    Accurate knowledge of fish age and growth is crucial for species conservation and management of exploited marine stocks. In exploited species, age estimation based on otolith reading is routinely used for building growth curves that are used to implement fishery management models. However, the universal fit of the von Bertalanffy growth function (VBGF) on data from commercial landings can lead to uncertainty in growth parameter inference, preventing accurate comparison of growth-based life-history traits between fish populations. In the present paper, we used a comprehensive annual sample of wild gilthead seabream (Sparus aurata L.) in the Gulf of Lions (France, NW Mediterranean) to test a methodology for improving growth modelling in exploited fish populations. After validating the timing of otolith annual increment formation for all life stages, a comprehensive set of growth models (including the VBGF) was fitted to the obtained age-length data, used as a whole or subdivided between group-0 individuals and those coming from commercial landings (ages 1-6). Comparisons of growth model accuracy based on the Akaike Information Criterion allowed assessment of the best model for each dataset and, when no model correctly fitted the data, a multi-model inference (MMI) based on model averaging was carried out. The results provided evidence that growth parameters inferred with the VBGF must be used with great caution. The VBGF turned out to be among the least accurate models for growth prediction irrespective of the dataset, and its fits to the whole population, the juvenile, and the adult datasets provided different growth parameters. The best models for growth prediction were the Tanaka model, for group-0 juveniles, and the MMI, for the older fish, confirming that growth differs substantially between juveniles and adults. All asymptotic models failed to correctly describe the growth of adult S. aurata, probably because of the poor representation of old individuals in the dataset.
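
    The AIC-based comparison and model-averaging step can be sketched as follows: fit competing growth curves, convert AIC differences into Akaike weights, and form a weighted prediction. The VBGF and Gompertz forms, sample sizes, and parameter values below are illustrative, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def vbgf(t, Linf, K, t0):
    """von Bertalanffy growth function."""
    return Linf * (1.0 - np.exp(-K * (t - t0)))

def gompertz(t, Linf, K, t0):
    """Gompertz growth curve (one alternative candidate model)."""
    return Linf * np.exp(-np.exp(-K * (t - t0)))

rng = np.random.default_rng(2)
age = np.repeat(np.arange(1, 7), 20).astype(float)          # ages 1-6, 20 fish each
length = vbgf(age, 60.0, 0.35, -0.5) + rng.normal(0.0, 2.0, age.size)

aic, preds = {}, {}
for name, f, p0 in [("VBGF", vbgf, (55.0, 0.3, 0.0)),
                    ("Gompertz", gompertz, (55.0, 0.5, 1.0))]:
    popt, _ = curve_fit(f, age, length, p0=p0, maxfev=10000)
    rss = np.sum((length - f(age, *popt)) ** 2)
    aic[name] = age.size * np.log(rss / age.size) + 2 * len(popt)
    preds[name] = f(age, *popt)

# Akaike weights and a model-averaged (multi-model inference) prediction
delta = {m: a - min(aic.values()) for m, a in aic.items()}
raw = {m: np.exp(-0.5 * dv) for m, dv in delta.items()}
total = sum(raw.values())
weights = {m: v / total for m, v in raw.items()}
avg_pred = sum(weights[m] * preds[m] for m in weights)
```

    When no single candidate clearly dominates, the weighted prediction spreads inference across models instead of committing to one possibly misspecified curve.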

  12. Natural priors, CMSSM fits and LHC weather forecasts

    International Nuclear Information System (INIS)

    Allanach, Benjamin C.; Cranmer, Kyle; Lester, Christopher G.; Weber, Arne M.

    2007-01-01

    Previous LHC forecasts for the constrained minimal supersymmetric standard model (CMSSM), based on current astrophysical and laboratory measurements, have used priors that are flat in the parameter tan β, while being constrained to postdict the central experimental value of M_Z. We construct a different, new and more natural prior with a measure in μ and B (the more fundamental MSSM parameters from which tan β and M_Z are actually derived). We find that, as a consequence, this choice leads to a well-defined fine-tuning measure in the parameter space. We investigate its effect on global CMSSM fits to indirect constraints, providing posterior probability distributions for Large Hadron Collider (LHC) sparticle production cross sections. The change in priors has a significant effect, strongly suppressing the pseudoscalar Higgs boson dark matter annihilation region and diminishing the probable values of sparticle masses. We also show how to interpret fit information from a Markov Chain Monte Carlo in a frequentist fashion, namely by using the profile likelihood. Bayesian and frequentist interpretations of CMSSM fits are compared and contrasted.

  13. Model Fit and Item Factor Analysis: Overfactoring, Underfactoring, and a Program to Guide Interpretation.

    Science.gov (United States)

    Clark, D Angus; Bowles, Ryan P

    2018-04-23

    In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation methods to investigate the ability of popular model fit statistics (chi-square, root mean square error of approximation, the comparative fit index, and the Tucker-Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics, and was conditional on many features of the underlying model. Together, results suggest that conventional fit thresholds offer questionable utility in the context of IFA.

  14. SED fitting with MCMC: methodology and application to large galaxy surveys

    OpenAIRE

    Acquaviva, Viviana; Gawiser, Eric; Guaita, Lucia

    2011-01-01

    We present GalMC (Acquaviva et al. 2011), our publicly available Markov Chain Monte Carlo algorithm for SED fitting, show the results obtained for a stacked sample of Lyman Alpha Emitting galaxies at z ~ 3, and discuss the dependence of the inferred SED parameters on the assumptions made in modeling the stellar populations. We also introduce SpeedyMC, a version of GalMC based on interpolation of pre-computed template libraries. While the flexibility and number of SED fitting parameters is reduced…
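
    The core of MCMC-based parameter fitting can be sketched with a simple Metropolis sampler on a toy two-parameter model. GalMC itself fits full stellar population templates; the exponential model, noise level, and proposal scale here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 5.0, 40)
a_true, tau_true = 2.0, 1.5
y = a_true * np.exp(-x / tau_true) + rng.normal(0.0, 0.05, x.size)

def log_post(theta):
    """Gaussian log-likelihood with flat priors on positive parameters."""
    a, tau = theta
    if a <= 0.0 or tau <= 0.0:
        return -np.inf
    resid = y - a * np.exp(-x / tau)
    return -0.5 * np.sum((resid / 0.05) ** 2)

chain = np.empty((6000, 2))
theta = np.array([1.0, 1.0])
lp = log_post(theta)
for i in range(chain.shape[0]):
    prop = theta + rng.normal(0.0, 0.03, 2)       # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis acceptance
        theta, lp = prop, lp_prop
    chain[i] = theta
a_hat, tau_hat = chain[1000:].mean(axis=0)        # discard burn-in
```

    The retained chain samples approximate the posterior, so parameter estimates and their uncertainties come directly from its moments rather than from a single best-fit point.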

  15. Real-time computation of parameter fitting and image reconstruction using graphical processing units

    Science.gov (United States)

    Locans, Uldis; Adelmann, Andreas; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Günther; Wang, Qiulin

    2017-06-01

    In recent years graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high-performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at the Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of μSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (Positron Emission Tomography) image reconstruction and analysis. Applications currently in use were examined to identify the parts of the algorithms in need of optimization, and efficient GPU kernels were created to speed up those parts. Benchmarking tests were performed in order to measure the achieved speedup. Throughout this work, we focused on single-GPU systems to show that real-time data analysis of these problems can be achieved without the need for large computing clusters. The results show that the parameter fitting application currently in use, which relies on OpenMP to parallelize calculations over multiple CPU cores, can be accelerated around 40 times through the use of a GPU. The speedup may vary depending on the size and complexity of the problem. For PET image analysis, the GPU version achieved speedups of more than 40× compared to a single-core CPU implementation. These results show that it is possible to improve the execution time by orders of magnitude.

  16. HERSCHEL PACS OBSERVATIONS AND MODELING OF DEBRIS DISKS IN THE TUCANA-HOROLOGIUM ASSOCIATION

    Energy Technology Data Exchange (ETDEWEB)

    Donaldson, J. K. [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States); Roberge, A. [Exoplanets and Stellar Astrophysics Laboratory, NASA Goddard Space Flight Center, Code 667, Greenbelt, MD 20771 (United States); Chen, C. H. [Space Telescope Science Institute, 3700 San Martin Dr., Baltimore, MD 21218 (United States); Augereau, J.-C.; Menard, F. [UJF - Grenoble 1/CNRS-INSU, Institut de Planetologie et d' Astrophysique de Grenoble (IPAG) UMR 5274, Grenoble, F-38041 (France); Dent, W. R. F. [ALMA, Avda Apoquindo 3846, Piso 19, Edificio Alsacia, Las Condes, Santiago (Chile); Eiroa, C.; Meeus, G. [Dpt. Fisica Teorica, Facultad de Ciencias, Universidad Autonoma de Madrid, Cantoblanco, 28049 Madrid (Spain); Krivov, A. V. [Astrophysikalishes Institut, Friedrich-Schiller-Universitaet Jena, Schillergaesschen 2-3, 07745 Jena (Germany); Mathews, G. S. [Institute for Astronomy (IfA), University of Hawaii, 2680 Woodlawn Dr., Honolulu, HI 96822 (United States); Riviere-Marichalar, P. [Centro de Astrobiologia Depto. Astrofisica (CSIC-INTA), POB 78, 28691 Villanueva de la Canada (Spain); Sandell, G., E-mail: jessd@astro.umd.edu [SOFIA-USRA, NASA Ames Research Center, Building N232, Rm. 146, Moffett Field, CA 94035 (United States)

    2012-07-10

    We present Herschel PACS photometry of 17 B- to M-type stars in the 30 Myr old Tucana-Horologium Association. This work is part of the Herschel Open Time Key Programme 'Gas in Protoplanetary Systems'. Six of the 17 targets were found to have infrared excesses significantly greater than the expected stellar IR fluxes, including a previously unknown disk around HD30051. These six debris disks were fitted with single-temperature blackbody models to estimate the temperatures and abundances of the dust in the systems. For the five stars that show excess emission in the Herschel PACS photometry and also have Spitzer IRS spectra, we fit the data with models of optically thin debris disks with realistic grain properties in order to better estimate the disk parameters. The model is determined by a set of six parameters: surface density index, grain size distribution index, minimum and maximum grain sizes, and the inner and outer radii of the disk. The best-fitting parameters give us constraints on the geometry of the dust in these systems, as well as lower limits to the total dust masses. The HD105 disk was further constrained by fitting marginally resolved PACS 70 μm imaging.
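
    The single-temperature blackbody fitting step can be sketched as follows: because the flux scaling enters linearly, one can grid over temperature and solve for the scale analytically at each step. The band wavelengths, true temperature, and scale factor are illustrative, and the sketch ignores measurement noise and colour corrections.

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23           # SI constants

def planck_nu(nu, T):
    """Blackbody spectral radiance B_nu(T)."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

wav = np.array([24.0, 70.0, 100.0, 160.0]) * 1e-6  # illustrative band centres (m)
nu = c / wav
T_true, scale_true = 60.0, 1e10                    # illustrative dust temperature
flux = scale_true * planck_nu(nu, T_true)

best = (np.inf, None, None)
for T in np.arange(20.0, 150.0, 0.5):
    B = planck_nu(nu, T)
    s = (flux @ B) / (B @ B)        # optimal linear scale for this temperature
    chi2 = np.sum((flux - s * B) ** 2)
    if chi2 < best[0]:
        best = (chi2, T, s)
_, T_fit, s_fit = best
```

    The fitted temperature sets the characteristic dust radius, while the scale gives a lower limit on the emitting area and hence the dust mass, which is why blackbody fits are the natural first pass before the six-parameter grain model.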

  17. Han's model parameters for microalgae grown under intermittent illumination: Determined using particle swarm optimization.

    Science.gov (United States)

    Pozzobon, Victor; Perre, Patrick

    2018-01-21

    This work provides a model and the associated set of parameters allowing microalgae population growth to be computed under intermittent illumination. Han's model is coupled with a simple microalgae growth model to yield a relationship between illumination and population growth. The model parameters were obtained by fitting a dataset available in the literature using the Particle Swarm Optimization method. In that work, the authors grew microalgae in excess of nutrients under flashing conditions. The light/dark cycles used for these experiments are quite close to those found in photobioreactors, i.e. ranging from several seconds to one minute. In this work, in addition to producing the set of parameters, the robustness of Particle Swarm Optimization was assessed. To do so, two different swarm initialization techniques were used: uniform and random distribution throughout the search space. Both yielded the same results. In addition, analysis of the swarm distribution reveals that the swarm converges to a unique minimum. Thus, the produced set of parameters can be used with confidence to link light intensity to population growth rate. Furthermore, the set is capable of describing the effects of photodamage on population growth, hence accounting for the effect of light overexposure on algal growth. Copyright © 2017 Elsevier Ltd. All rights reserved.
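
    A minimal particle swarm optimizer can illustrate the fitting procedure. As a hedged stand-in for Han's model, the sketch fits a simple saturating light-response curve μ(I) = μ_max·I/(I+K); the swarm hyperparameters, bounds, and data are conventional illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)
I = np.linspace(5.0, 500.0, 30)            # light intensities (arbitrary units)
data = 1.2 * I / (I + 80.0)                # synthetic growth rates: mu_max=1.2, K=80

def cost(p):
    """Sum of squared residuals between data and the candidate curve."""
    mu_max, K = p
    return float(np.sum((data - mu_max * I / (I + K)) ** 2))

n, iters = 30, 200
lo, hi = np.array([0.0, 1.0]), np.array([3.0, 300.0])      # search bounds
pos = lo + (hi - lo) * rng.random((n, 2))                  # uniform initialization
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_f = np.array([cost(p) for p in pos])
g = pbest[np.argmin(pbest_f)].copy()                       # global best
w, c1, c2 = 0.7, 1.5, 1.5                                  # inertia, cognitive, social
for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([cost(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    g = pbest[np.argmin(pbest_f)].copy()
```

    Running the swarm from different initial distributions and checking that it collapses onto the same minimum is the robustness test described in the abstract.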

  18. What we talk about when we talk about recovery: a systematic review and best-fit framework synthesis of qualitative literature.

    Science.gov (United States)

    Stuart, Simon Robertson; Tansey, Louise; Quayle, Ethel

    2017-06-01

    The recovery approach is increasingly popular among mental-health services, but there is a lack of consensus about its applicability, and it has been criticised for imposing professionalised ideas onto what was originally a service-user concept. To carry out a review and synthesis of qualitative research to answer the question: "What do we know about how service users with severe and enduring mental illness experience the process of recovery?" It was hoped that this would improve clarity and increase understanding. A systematic review identified 15 peer-reviewed articles examining experiences of recovery. Twelve of these were analysed using best-fit framework synthesis, with the CHIME model of recovery providing the exploratory framework. The optimistic themes of CHIME accounted for the majority of people's experiences, but more than 30% of the data was felt not to be encapsulated by them. An expanded conceptualisation of recovery is proposed, in which difficulties are more prominently considered. An overly optimistic, professionally imposed view of recovery might homogenise or even blame individuals rather than empower them. Further understanding is needed of different experiences of recovery, and of people's struggles to recover.

  19. Confronting Theoretical Predictions With Experimental Data; Fitting Strategy For Multi-Dimensional Distributions

    Directory of Open Access Journals (Sweden)

    Tomasz Przedziński

    2015-01-01

    After developing a Resonance Chiral Lagrangian (RχL) model to describe hadronic τ lepton decays [18], the model was confronted with experimental data. This was accomplished using a fitting framework developed to take into account the complexity of the model and to ensure numerical stability for the algorithms used in the fitting. Since the model used in the fit contained 15 parameters and only three 1-dimensional distributions were available, we could expect multiple local minima or even whole regions of equal potential to appear. Our methods had to explore the whole parameter space thoroughly and ensure, as far as possible, that the result is a global minimum. This paper focuses on the technical aspects of the fitting strategy used. The first approach was based on the re-weighting algorithm published in [17] and produced results in around two weeks. A later approach, with an improved theoretical model and a simple parallelization algorithm based on the Inter-Process Communication (IPC) methods of the UNIX system, reduced computation time to 2-3 days. Additional approximations introduced to the model decreased the time needed to obtain preliminary results to 8 hours. This allowed better validation of the results, leading to a more robust analysis published in [12].

  20. Using multistage models to describe radiation-induced leukaemia

    International Nuclear Information System (INIS)

    Little, M.P.; Muirhead, C.R.; Boice, J.D. Jr.; Kleinerman, R.A.

    1995-01-01

    The Armitage-Doll model of carcinogenesis is fitted to data on leukaemia mortality among the Japanese atomic bomb survivors with the DS86 dosimetry and on leukaemia incidence in the International Radiation Study of Cervical Cancer patients. Two different forms of model are fitted: the first postulates up to two radiation-affected stages and the second additionally allows for the presence at birth of a non-trivial population of cells which have already accumulated the first of the mutations leading to malignancy. Among models of the first form, a model with two adjacent radiation-affected stages appears to fit the data better than other models of the first form, including both models with two affected stages in any order and models with only one affected stage. The best fitting model predicts a linear-quadratic dose-response and reductions of relative risk with increasing time after exposure and age at exposure, in agreement with what has previously been observed in the Japanese and cervical cancer data. However, on the whole it does not provide an adequate fit to either dataset. The second form of model appears to provide a rather better fit, but the optimal models have biologically implausible parameters (the number of initiated cells at birth is negative) so that this model must also be regarded as providing an unsatisfactory description of the data. (author)
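
    The linear-quadratic dose response of the best-fitting model follows directly from having two adjacent radiation-affected stages: if radiation scales the two mutation rates by (1 + a·d) and (1 + b·d), the excess relative risk is (a + b)·d + a·b·d². A tiny sketch with illustrative (not fitted) coefficients:

```python
def excess_relative_risk(d, a=0.8, b=0.3):
    """Two adjacent radiation-affected stages with mutation rates scaled
    by (1 + a*d) and (1 + b*d): ERR(d) = (a + b)*d + a*b*d**2,
    i.e. a linear-quadratic dose response. Coefficients are illustrative."""
    return (1.0 + a * d) * (1.0 + b * d) - 1.0
```

    The linear term dominates at low dose and the quadratic term at high dose, which is the shape the fitted model predicts for the leukaemia data.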