The global electroweak Standard Model fit after the Higgs discovery
Baak, Max
2013-01-01
We present an update of the global Standard Model (SM) fit to electroweak precision data under the assumption that the new particle discovered at the LHC is the SM Higgs boson. In this scenario all parameters entering the calculations of electroweak precision observables are known, allowing the SM to be over-constrained at the electroweak scale for the first time and its validity to be asserted. Within the SM the W boson mass and the effective weak mixing angle can be accurately predicted from the global fit. The results are compatible with, and exceed in precision, the direct measurements. An updated determination of the S, T and U parameters, which parametrize the oblique vacuum corrections, is given. The obtained values show good consistency with the SM expectation and no direct signs of new physics are seen. We conclude with an outlook to the global electroweak fit for a future e+e- collider.
Global fits of GUT-scale SUSY models with GAMBIT
Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin
2017-12-01
We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.
Revisiting the Global Electroweak Fit of the Standard Model and Beyond with Gfitter
Flächer, Henning; Haller, J; Höcker, A; Mönig, K; Stelzer, J
2009-01-01
The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project, and presents state-of-the-art results for the global electroweak fit in the Standard Model, and for a model with an extended Higgs sector (2HDM). Numerical and graphical results for fits with and without including the constraints from the direct Higgs searches at LEP and Tevatron are given. Perspectives for future colliders are analysed and discussed.
Gfitter - Revisiting the global electroweak fit of the Standard Model and beyond
Energy Technology Data Exchange (ETDEWEB)
Flaecher, H.; Hoecker, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Goebel, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]|[Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Haller, J. [Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Moenig, K.; Stelzer, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2008-11-15
The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project, and presents state-of-the-art results for the global electroweak fit in the Standard Model, and for a model with an extended Higgs sector (2HDM). Numerical and graphical results for fits with and without including the constraints from the direct Higgs searches at LEP and Tevatron are given. Perspectives for future colliders are analysed and discussed. Including the direct Higgs searches, we find M_H = 116.4^{+18.3}_{-1.3} GeV, with the 2σ and 3σ allowed regions being [114,145] GeV, and [113,168] GeV and [180,225] GeV, respectively.
Assessing a moderating effect and the global fit of a PLS model on online trading
Directory of Open Access Journals (Sweden)
Juan J. García-Machado
2017-12-01
This paper proposes a PLS model for the study of online trading. Traditional investing has experienced a revolution due to the rise of e-trading services that enable investors to use the Internet to conduct secure trading. On the one hand, model results show that there is a positive, direct and statistically significant relationship between personal outcome expectations, perceived relative advantage, shared vision and economy-based trust with the quality of knowledge. On the other hand, the same holds between trading frequency and portfolio performance. After including the investor's income and financial wealth (IFW) as a moderating effect, the PLS model was enhanced, and we found that the interaction term is negative and statistically significant: higher IFW levels entail a weaker relationship between trading frequency and portfolio performance, and vice versa. Finally, with regard to the goodness of overall model fit, the SRMR and dG measures indicate an acceptable fit, supporting the model.
The inert doublet model in the light of Fermi-LAT gamma-ray data: a global fit analysis
Eiteneuer, Benedikt; Goudelis, Andreas; Heisig, Jan
2017-09-01
We perform a global fit within the inert doublet model taking into account experimental observables from colliders, direct and indirect dark matter searches and theoretical constraints. In particular, we consider recent results from searches for dark matter annihilation-induced gamma-rays in dwarf spheroidal galaxies and relax the assumption that the inert doublet model should account for the entire dark matter in the Universe. We, moreover, study in how far the model is compatible with a possible dark matter explanation of the so-called Galactic center excess. We find two distinct parameter space regions that are consistent with existing constraints and can simultaneously explain the excess: One with dark matter masses near the Higgs resonance and one around 72 GeV where dark matter annihilates predominantly into pairs of virtual electroweak gauge bosons via the four-vertex arising from the inert doublet's kinetic term. We briefly discuss future prospects to probe these scenarios.
The Soldier Fitness Tracker: global delivery of Comprehensive Soldier Fitness.
Fravell, Mike; Nasser, Katherine; Cornum, Rhonda
2011-01-01
Carefully implemented technology strategies are vital to the success of large-scale initiatives such as the U.S. Army's Comprehensive Soldier Fitness (CSF) program. Achieving the U.S. Army's vision for CSF required a robust information technology platform that was scaled to millions of users and that leveraged the Internet to enable global reach. The platform needed to be agile, provide powerful real-time reporting, and have the capacity to quickly transform to meet emerging requirements. Existing organizational applications, such as "Single Sign-On," and authoritative data sources were exploited to the maximum extent possible. Development of the "Soldier Fitness Tracker" is the most recent, and possibly the best, demonstration of the benefits possible when existing organizational capabilities are married to new, innovative applications. Combining the capabilities of the extant applications with the newly developed applications expedited development, eliminated redundant data collection, exceeded program objectives, and produced a comfortable experience for the end user, all in less than six months. This is a model for future technology integration. (c) 2010 APA, all rights reserved.
Global fits of the two-loop renormalized Two-Higgs-Doublet model with soft Z₂ breaking
Chowdhury, Debtosh; Eberhardt, Otto
2015-11-01
We determine the next-to-leading order renormalization group equations for the Two-Higgs-Doublet model with a softly broken Z₂ symmetry and CP conservation in the scalar potential. We use them to identify the parameter regions which are stable up to the Planck scale and find that in this case the quartic couplings of the Higgs potential cannot be larger than 1 in magnitude and that the absolute values of the S-matrix eigenvalues cannot exceed 2.5 at the electroweak symmetry breaking scale. Interpreting the 125 GeV resonance as the light CP-even Higgs eigenstate, we combine stability constraints, electroweak precision and flavour observables with the latest ATLAS and CMS data on Higgs signal strengths and heavy Higgs searches in global parameter fits to all four types of Z₂ symmetry. We quantify the maximal deviations from the alignment limit and find that in type II and Y the mass of the heavy CP-even (CP-odd) scalar cannot be smaller than 340 GeV (360 GeV). Also, we pinpoint the physical parameter regions compatible with a stable scalar potential up to the Planck scale. Motivated by the question how natural a Higgs mass of 125 GeV can be in the context of a Two-Higgs-Doublet model, we also address the hierarchy problem and find that the Two-Higgs-Doublet model does not offer a perturbative solution to it beyond 5 TeV.
GLOBAL AND STRICT CURVE FITTING METHOD
Nakajima, Y.; Mori, S.
2004-01-01
To find a global and smooth curve fit, the cubic B-spline method and gathering-line methods are investigated. When segmenting and recognizing the contour curve of a character shape, some global method is required if we want to connect contour curves around singular points such as crossing points.
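For reference, a uniform cubic B-spline of the kind mentioned above evaluates each curve segment as a fixed cubic blend of four consecutive control points. A minimal sketch (the basis weights are the standard uniform cubic B-spline basis, not code from the paper):

```python
def cubic_bspline_point(p0, p1, p2, p3, t):
    """Point on one segment of a uniform cubic B-spline, t in [0, 1].

    Each segment blends four consecutive control points with the standard
    uniform cubic B-spline basis; adjacent segments join with C2 continuity,
    which is what makes the fitted contour globally smooth."""
    b0 = (1 - t) ** 3 / 6
    b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6
    b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6
    b3 = t ** 3 / 6
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# The four basis weights sum to 1, so coincident control points reproduce
# the point exactly (partition of unity):
pt = cubic_bspline_point((1.0, 2.0), (1.0, 2.0), (1.0, 2.0), (1.0, 2.0), 0.37)
```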
Stanley, Leanne M.; Edwards, Michael C.
2016-01-01
The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…
Globfit: Consistently fitting primitives by discovering global relations
Li, Yangyan; Wu, Xiaokun; Chrysathou, Yiorgos; Sharf, Andrei Sharf; Cohen-Or, Daniel; Mitra, Niloy J.
2011-01-01
Given a noisy and incomplete point set, we introduce a method that simultaneously recovers a set of locally fitted primitives along with their global mutual relations. We operate under the assumption that the data corresponds to a man-made engineering object consisting of basic primitives, possibly repeated and globally aligned under common relations. We introduce an algorithm to directly couple the local and global aspects of the problem. The local fit of the model is determined by how well the inferred model agrees to the observed data, while the global relations are iteratively learned and enforced through a constrained optimization. Starting with a set of initial RANSAC based locally fitted primitives, relations across the primitives such as orientation, placement, and equality are progressively learned and conformed to. In each stage, a set of feasible relations are extracted among the candidate relations, and then aligned to, while best fitting to the input data. The global coupling corrects the primitives obtained in the local RANSAC stage, and brings them to precise global alignment. We test the robustness of our algorithm on a range of synthesized and scanned data, with varying amounts of noise, outliers, and non-uniform sampling, and validate the results against ground truth, where available. © 2011 ACM.
A global fit of the γ-ray galactic center excess within the scalar singlet Higgs portal model
International Nuclear Information System (INIS)
Cuoco, Alessandro; Eiteneuer, Benedikt; Heisig, Jan; Krämer, Michael
2016-01-01
We analyse the excess in the γ-ray emission from the center of our galaxy observed by Fermi-LAT in terms of dark matter annihilation within the scalar Higgs portal model. In particular, we include the astrophysical uncertainties from the dark matter distribution and allow for unspecified additional dark matter components. We demonstrate through a detailed numerical fit that the strength and shape of the γ-ray spectrum can indeed be described by the model in various regions of dark matter masses and couplings. Constraints from invisible Higgs decays, direct dark matter searches, indirect searches in dwarf galaxies and for γ-ray lines, and constraints from the dark matter relic density reduce the parameter space to dark matter masses near the Higgs resonance. We find two viable regions: one where the Higgs-dark matter coupling is of O(10⁻²), and an additional dark matter component beyond the scalar WIMP of our model is preferred, and one region where the Higgs-dark matter coupling may be significantly smaller, but where the scalar WIMP constitutes a significant fraction or even all of dark matter. Both viable regions are hard to probe in future direct detection and collider experiments.
A Global Moving Hotspot Reference Frame: How well it fits?
Doubrovine, P. V.; Steinberger, B.; Torsvik, T. H.
2010-12-01
Since the early 1970s, when Jason Morgan proposed that hotspot tracks record motion of lithosphere over deep-seated mantle plumes, the concept of fixed hotspots has dominated the way we think about absolute plate reconstructions. In the last decade, with compelling evidence for southward drift of the Hawaiian hotspot from paleomagnetic studies, and for the relative motion between the Pacific and Indo-Atlantic hotspots from refined plate circuit reconstructions, the perception changed and a global moving hotspot reference frame (GMHRF) was introduced, in which numerical models of mantle convection and advection of plume conduits in the mantle flow were used to estimate hotspot motion. This reference frame showed qualitatively better performance in fitting hotspot tracks globally, but the error analysis and formal estimates of the goodness of fitted rotations were lacking in this model. Here we present a new generation of the GMHRF, in which updated plate circuit reconstructions and radiometric age data from the hotspot tracks were combined with numerical models of plume motion, and uncertainties of absolute plate rotations were estimated through spherical regression analysis. The overall quality of fit was evaluated using a formal statistical test, by comparing misfits produced by the model with uncertainties assigned to the data. Alternative plate circuit models linking the Pacific plate to the plates of Indo-Atlantic hemisphere were tested and compared to the fixed hotspot models with identical error budgets. Our results show that, with an appropriate choice of the Pacific plate circuit, it is possible to reconcile relative plate motions and modeled motions of mantle plumes globally back to Late Cretaceous time (80 Ma). In contrast, all fixed hotspot models failed to produce acceptable fits for Paleogene to Late Cretaceous time (30-80 Ma), highlighting the significance of the relative motion between the Pacific and Indo-Atlantic hotspots during this interval.
Fitting PAC spectra with stochastic models: PolyPacFit
Energy Technology Data Exchange (ETDEWEB)
Zacate, M. O., E-mail: zacatem1@nku.edu [Northern Kentucky University, Department of Physics and Geology (United States); Evenson, W. E. [Utah Valley University, College of Science and Health (United States); Newhouse, R.; Collins, G. S. [Washington State University, Department of Physics and Astronomy (United States)
2010-04-15
PolyPacFit is an advanced fitting program for time-differential perturbed angular correlation (PAC) spectroscopy. It incorporates stochastic models and provides robust options for customization of fits. Notable features of the program include platform independence and support for (1) fits to stochastic models of hyperfine interactions, (2) user-defined constraints among model parameters, (3) fits to multiple spectra simultaneously, and (4) nuclear probes of any spin.
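Feature (3), fitting multiple spectra simultaneously, amounts to minimizing one objective in which some parameters are shared across data sets. A toy sketch with noise-free exponential "spectra" sharing a single decay rate (the model and numbers are illustrative, not PAC perturbation functions):

```python
import math

# Two noise-free "spectra" with different amplitudes but a common decay rate.
RATE_TRUE = 0.8
times = [0.1 * i for i in range(100)]
spec1 = [(t, 2.0 * math.exp(-RATE_TRUE * t)) for t in times]
spec2 = [(t, 5.0 * math.exp(-RATE_TRUE * t)) for t in times]
spectra = [(2.0, spec1), (5.0, spec2)]

def chi2(rate, spectra):
    """Joint objective: one shared rate, per-spectrum (known) amplitudes.
    Both data sets constrain the shared parameter at once."""
    return sum((y - amp * math.exp(-rate * t)) ** 2
               for amp, data in spectra for t, y in data)

# Coarse grid search over the shared parameter:
best_rate = min((r / 100 for r in range(1, 200)), key=lambda r: chi2(r, spectra))
```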
International Nuclear Information System (INIS)
Martin Llorente, F.
1990-01-01
Models of atmospheric pollutant dispersion are based on mathematical algorithms that describe the transport, diffusion, removal and chemical reactions of atmospheric contaminants. These models operate on pollutant emission data and estimate the air quality in the area. Such models can be applied to several aspects of atmospheric contamination.
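A classic example of such a transport-and-diffusion algorithm is the Gaussian plume formula, shown here as a textbook illustration (the abstract does not specify which algorithms the discussed models actually use):

```python
import math

def gaussian_plume(q, u, y, z, sigma_y, sigma_z, h):
    """Ground-reflecting Gaussian plume concentration.

    q: emission rate (g/s), u: wind speed (m/s), y: crosswind distance (m),
    z: receptor height (m), sigma_y/sigma_z: dispersion widths (m),
    h: effective stack height (m). Returns concentration in g/m^3."""
    return (q / (2 * math.pi * u * sigma_y * sigma_z)
            * math.exp(-y ** 2 / (2 * sigma_y ** 2))
            * (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
               + math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2))))

# Concentration is highest on the plume centreline and falls off crosswind:
c_axis = gaussian_plume(q=100, u=5, y=0, z=0, sigma_y=50, sigma_z=30, h=40)
c_off = gaussian_plume(q=100, u=5, y=100, z=0, sigma_y=50, sigma_z=30, h=40)
```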
International Nuclear Information System (INIS)
Vickers, Stefan J.
2014-01-01
We infer the size of weak-annihilation contributions in the framework of QCD factorisation from the latest data on decay rates, strong phases and CP asymmetries of charmless hadronic B→M_1M_2 decays that are mediated by b→(d,s) QCD and QED penguin operators, such as B→(Kπ, Kη^('), KK), B→(Kρ, Kφ, Kω, K*π, K*η^(')), B→(K*ρ, K*φ, K*ω, K*K*) and B_s→(ππ, Kπ, KK, K*φ, K*K*, φφ), admitting one phenomenological parameter per final-state system M_1M_2. Beyond the Standard Model, we study the possibility to determine simultaneously the phenomenological weak-annihilation and new-physics parameters from data, employing model-independent scenarios with an enhanced electroweak Standard Model sector, an enhanced Z-penguin coupling and an extended operator basis with O^b = (anti-s b)(anti-b b), as well as including complementary constraints from b→sγ and b→sl^+l^-. The impact of these scenarios on so far unmeasured CP-violating observables in, for instance, anti-B_s→φφ and anti-B_s→anti-K*0 K*0, which will become available in the foreseeable future from the LHCb and Belle II collaborations, is discussed.
Curve fitting methods for solar radiation data modeling
Energy Technology Data Exchange (ETDEWEB)
Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my, E-mail: balbir@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: samsul-ariffin@petronas.com.my, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)
2014-10-24
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error is measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the R² value. The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
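The goodness-of-fit statistics mentioned have simple closed forms; a minimal sketch (the sample data are made up for illustration, and the two-term Gaussian/sine models themselves are not reimplemented here):

```python
import math

def rmse(obs, pred):
    """Root mean square error between observed and fitted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

obs = [3.1, 4.0, 5.2, 5.9, 7.1]   # illustrative "measured" radiation values
pred = [3.0, 4.1, 5.0, 6.0, 7.0]  # illustrative fitted-model values
err = rmse(obs, pred)
r2 = r_squared(obs, pred)
```

A fit is preferred when its RMSE is lower and its R² is closer to 1, which is the criterion the abstract applies when ranking the fitting methods.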
Local fit evaluation of structural equation models using graphical criteria.
Thoemmes, Felix; Rosseel, Yves; Textor, Johannes
2018-03-01
Evaluation of model fit is critically important for every structural equation model (SEM), and sophisticated methods have been developed for this task. Among them are the χ² goodness-of-fit test, decomposition of the χ², derived measures like the popular root mean square error of approximation (RMSEA) or comparative fit index (CFI), or inspection of residuals or modification indices. Many of these methods provide a global approach to model fit evaluation: A single index is computed that quantifies the fit of the entire SEM to the data. In contrast, graphical criteria like d-separation or trek-separation allow derivation of implications that can be used for local fit evaluation, an approach that is hardly ever applied. We provide an overview of local fit evaluation from the viewpoint of SEM practitioners. In the presence of model misfit, local fit evaluation can potentially help in pinpointing where the problem with the model lies. For models that do fit the data, local tests can identify the parts of the model that are corroborated by the data. Local tests can also be conducted before a model is fitted at all, and they can be used even for models that are globally underidentified. We discuss appropriate statistical local tests, and provide applied examples. We also present novel software in R that automates this type of local fit evaluation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
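As a concrete example of a global fit index derived from the χ² statistic, one common formulation of the RMSEA is sqrt(max(χ² − df, 0) / (df · (N − 1))). A minimal sketch (the numbers are illustrative, not from the article):

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation from a model's chi-square,
    degrees of freedom, and sample size (one common formulation)."""
    return math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

# A model whose chi-square equals its degrees of freedom fits "perfectly":
perfect = rmsea(chi2=40.0, df=40, n=500)
# Modest misfit yields a small positive RMSEA:
modest = rmsea(chi2=85.0, df=40, n=500)
```

Such a single number summarizes fit for the entire model, which is exactly the "global" character the article contrasts with local, implication-by-implication tests like d-separation.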
Measured, modeled, and causal conceptions of fitness
Abrams, Marshall
2012-01-01
This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers.
Is Scrum fit for global software engineering?
DEFF Research Database (Denmark)
Lous, Pernille; Kuhrmann, Marco; Tell, Paolo
2017-01-01
Distributed software engineering and agility are strongly pushing today's software industry. Due to inherent incompatibilities, studying Scrum and its application in distributed setups has for years been the subject of theoretical and applied research, and an increasing body of knowledge reports...... insights into this combination. Through a systematic literature review, this paper contributes a collection of experiences on the application of Scrum to global software engineering (GSE). In total, we identified 40 challenges in 19 categories that practitioners face when using Scrum in GSE. Among...... the challenges, scaling Scrum to GSE and adopting practices accordingly are the most frequently named. Our findings also show that most solution proposals aim at modifying elements of the Scrum core processes. We thus conclude that, even though Scrum allows for extensive modification, Scrum itself represents...
Directory of Open Access Journals (Sweden)
Mehmet YEŞİLBUDAK
2018-03-01
The information about solar parameters is important in the installation of photovoltaic energy systems that are reliable, environmentally friendly and sustainable. In this study, long-term global solar radiation, sunshine duration and air temperature data for Ankara are first analyzed in detail on an annual, monthly and daily basis. Afterwards, three different empirical methods, polynomial, Gaussian and Fourier, are used to model the long-term monthly total global solar radiation, monthly total sunshine duration and monthly mean air temperature data. The coefficient of determination and the root mean square error are computed as statistical test metrics in order to compare the data modeling performance of the mentioned empirical methods. The empirical methods that provide the best results make it possible to model the solar characteristics of Ankara more accurately, and the achieved outcomes constitute a significant resource for other locations with similar climatic conditions.
Are Physical Education Majors Models for Fitness?
Kamla, James; Snyder, Ben; Tanner, Lori; Wash, Pamela
2012-01-01
The National Association of Sport and Physical Education (NASPE) (2002) has taken a firm stance on the importance of adequate fitness levels of physical education teachers stating that they have the responsibility to model an active lifestyle and to promote fitness behaviors. Since the NASPE declaration, national initiatives like Let's Move…
Contrast Gain Control Model Fits Masking Data
Watson, Andrew B.; Solomon, Joshua A.; Null, Cynthia H. (Technical Monitor)
1994-01-01
We studied the fit of a contrast gain control model to data of Foley (JOSA 1994), consisting of thresholds for a Gabor patch masked by gratings of various orientations, or by compounds of two orientations. Our general model includes models of Foley and Teo & Heeger (IEEE 1994). Our specific model used a bank of Gabor filters with octave bandwidths at 8 orientations. Excitatory and inhibitory nonlinearities were power functions with exponents of 2.4 and 2. Inhibitory pooling was broad in orientation, but narrow in spatial frequency and space. Minkowski pooling used an exponent of 4. All of the data for observer KMF were well fit by the model. We have developed a contrast gain control model that fits masking data. Unlike Foley's, our model accepts images as inputs. Unlike Teo & Heeger's, our model did not require multiple channels for different dynamic ranges.
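A minimal divisive gain-control sketch can make the model's structure concrete. The excitatory and inhibitory exponents 2.4 and 2 come from the abstract; everything else (the gain constant z, contrasts expressed in percent, the unit-increment decision rule for threshold) is an illustrative assumption, not the authors' parameterization:

```python
import numpy as np

# Divisive gain-control sketch: excitation raised to exponent p, divided
# by a pooled inhibitory term raised to exponent q plus a constant z.
p, q, z = 2.4, 2.0, 10.0

def response(c_target, c_mask, w_mask=1.0):
    """Response of the target-tuned channel under divisive inhibition."""
    excite = c_target ** p
    inhibit = z + c_target ** q + w_mask * c_mask ** q
    return excite / inhibit

def threshold(c_mask):
    """Smallest target contrast (%) raising the response by one unit."""
    base = response(0.0, c_mask)
    for c in np.linspace(0.01, 100.0, 20000):
        if response(c, c_mask) - base >= 1.0:
            return c
    return np.nan

print(threshold(0.0), threshold(20.0))   # masking elevates threshold
```

Because the mask contributes only to the inhibitory pool here, raising mask contrast elevates the target threshold, which is the qualitative masking behavior the model is fit to.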
Fitting neuron models to spike trains
Directory of Open Access Journals (Sweden)
Cyrille eRossant
2011-02-01
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
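The fitting problem can be illustrated without the Brian toolbox itself: simulate a "recorded" spike train from a leaky integrate-and-fire neuron with a known threshold, then recover that threshold by minimizing a spike-train mismatch. This is a deliberately crude toy (grid search on spike count; all parameter values assumed), not the paper's method:

```python
import numpy as np

# Toy model fitting to a spike train: simulate a leaky integrate-and-fire
# (LIF) neuron driven by a frozen injected current, then recover its
# threshold by grid search on a spike-count mismatch.
dt, T, tau, v_reset = 1e-4, 1.0, 0.02, 0.0
rng = np.random.default_rng(2)
I = 1.2 + 0.3 * rng.standard_normal(int(T / dt))   # frozen noisy current

def lif_spikes(v_th):
    v, spikes = 0.0, []
    for i, inp in enumerate(I):
        v += dt * (inp - v) / tau        # Euler step of dv/dt = (I - v)/tau
        if v >= v_th:
            spikes.append(i * dt)
            v = v_reset
    return np.array(spikes)

target = lif_spikes(1.0)                 # the "recorded" spike train

def loss(v_th):                          # crude spike-count mismatch
    return abs(len(lif_spikes(v_th)) - len(target))

grid = np.linspace(0.8, 1.2, 41)
best = grid[np.argmin([loss(v) for v in grid])]
print(f"recovered threshold ~ {best:.2f} (true 1.00)")
```

Real fitting toolchains replace the spike-count loss with a coincidence-based criterion and the grid search with vectorized parallel optimization, but the structure (simulate candidate model, score against the recording, optimize) is the same.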
Fitting Hidden Markov Models to Psychological Data
Directory of Open Access Journals (Sweden)
Ingmar Visser
2002-01-01
Markov models have been used extensively in the psychology of learning. Applications of hidden Markov models are rare, however. This is partially due to the fact that comprehensive statistics for model selection and model assessment are lacking in the psychological literature. We present model selection and model assessment statistics that are particularly useful in applying hidden Markov models in psychology. These statistics are presented and evaluated by simulation studies for a toy example. We compare AIC, BIC and related criteria, and introduce a prediction error measure for assessing goodness-of-fit. In a simulation study, two methods of fitting equality constraints are compared. In two illustrative examples with experimental data we apply selection criteria, fit models with constraints and assess goodness-of-fit. First, data from a concept identification task are analyzed. Hidden Markov models provide a flexible approach to analyzing such data when compared to other modeling methods. Second, a novel application of hidden Markov models in implicit learning is presented. Hidden Markov models are used in this context to quantify the knowledge that subjects express in an implicit learning task. This method of analyzing implicit learning data provides a comprehensive approach for addressing important theoretical issues in the field.
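The AIC/BIC comparison step can be sketched directly. The log-likelihoods below are hypothetical stand-ins for values an HMM fitting routine would return for 2-, 3- and 4-state models (binary emissions assumed for the parameter counts); the point is only how the two criteria trade fit against complexity:

```python
import numpy as np

# Model selection for HMMs of increasing size via AIC and BIC.
# logL values are hypothetical; k counts free transition, initial-state
# and emission parameters for an m-state HMM with binary emissions.
n_obs = 500
candidates = {                 # states -> (logL, free parameters k)
    2: (-612.4, 2 * 1 + 1 + 2),
    3: (-598.7, 3 * 2 + 2 + 3),
    4: (-597.9, 4 * 3 + 3 + 4),
}

def aic(logL, k):
    return -2 * logL + 2 * k

def bic(logL, k, n):
    return -2 * logL + k * np.log(n)

for states, (logL, k) in candidates.items():
    print(states, round(aic(logL, k), 1), round(bic(logL, k, n_obs), 1))
```

With these numbers AIC prefers the 3-state model while BIC, whose penalty grows with the sample size, falls back to the 2-state model, which is exactly the kind of disagreement the paper's simulation studies examine.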
The global electroweak fit at NNLO and prospects for the LHC and ILC
International Nuclear Information System (INIS)
Baak, M.; Hoecker, A.; Cuth, J.; Schott, M.; Haller, J.; Kogler, R.; Moenig, K.; Stelzer, J.
2014-01-01
For a long time, global fits of the electroweak sector of the standard model (SM) have been used to exploit measurements of electroweak precision observables at lepton colliders (LEP, SLC), together with measurements at hadron colliders (Tevatron, LHC) and accurate theoretical predictions at multi-loop level, to constrain free parameters of the SM, such as the Higgs and top masses. Today, all fundamental SM parameters entering these fits are experimentally determined, including information on the Higgs couplings, and the global fits are used as powerful tools to assess the validity of the theory and to constrain scenarios for new physics. Future measurements at the Large Hadron Collider (LHC) and the International Linear Collider (ILC) promise to improve the experimental precision of key observables used in the fits. This paper presents updated electroweak fit results using the latest NNLO theoretical predictions and prospects for the LHC and ILC. The impact of experimental and theoretical uncertainties is analysed in detail. We compare constraints from the electroweak fit on the Higgs couplings with direct LHC measurements, and we examine present and future prospects of these constraints using a model with modified couplings of the Higgs boson to fermions and bosons. (orig.)
A global fitting code for multichordal neutral beam spectroscopic data
International Nuclear Information System (INIS)
Seraydarian, R.P.; Burrell, K.H.; Groebner, R.J.
1992-05-01
Knowledge of the heat deposition profile is crucial to all transport analysis of beam-heated discharges. The heat deposition profile can be inferred from the fast ion birth profile which, in turn, is directly related to the loss of neutral atoms from the beam. This loss can be measured spectroscopically by the decrease in amplitude of spectral emissions from the beam as it penetrates the plasma. The spectra are complicated by the motional Stark effect, which produces a manifold of nine bright peaks for each of the three beam energy components. A code has been written to analyze this kind of data. In the first phase of this work, spectra from tokamak shots are fit with a Stark splitting and Doppler shift model that ties together the geometry of several spatial positions when they are fit simultaneously. In the second phase, a relative position-to-position intensity calibration will be applied to these results to obtain the spectral amplitudes from which beam atom loss can be estimated. This paper reports on the computer code for the first phase. Sample fits to real tokamak spectral data are shown.
Induced subgraph searching for geometric model fitting
Xiao, Fan; Xiao, Guobao; Yan, Yan; Wang, Xing; Wang, Hanzi
2017-11-01
In this paper, we propose a novel model fitting method based on graphs to fit and segment multiple-structure data. In the graph constructed on data, each model instance is represented as an induced subgraph. Following the idea of pursuing the maximum consensus, the multiple geometric model fitting problem is formulated as searching for a set of induced subgraphs including the maximum union set of vertices. After the generation and refinement of the induced subgraphs that represent the model hypotheses, the searching process is conducted on the "qualified" subgraphs. Multiple model instances can be simultaneously estimated by solving a converted problem. Then, we introduce the energy evaluation function to determine the number of model instances in data. The proposed method is able to effectively estimate the number and the parameters of model instances in data severely corrupted by outliers and noise. Experimental results on synthetic data and real images validate the favorable performance of the proposed method compared with several state-of-the-art fitting methods.
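The "maximum consensus" idea behind such multi-structure fitting can be shown in miniature with sequential RANSAC: recover two line models from outlier-contaminated 2-D points by repeatedly keeping the hypothesis with the largest inlier set. This is a simple stand-in for the paper's induced-subgraph search, with all data and tolerances invented for illustration:

```python
import numpy as np

# Two noisy lines plus gross outliers; recover both by sequential RANSAC.
rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 200)
pts = np.concatenate([
    np.c_[x[:80], 2.0 * x[:80] + 1.0 + rng.normal(0, 0.05, 80)],
    np.c_[x[80:160], -1.0 * x[80:160] + 8.0 + rng.normal(0, 0.05, 80)],
    rng.uniform(0, 10, (40, 2)),                 # gross outliers
])

def ransac_line(P, iters=500, tol=0.2):
    """Return the line (a, b) with maximum consensus and its inlier mask."""
    best_inl, best_ab = np.zeros(len(P), bool), (0.0, 0.0)
    for _ in range(iters):
        i, j = rng.choice(len(P), 2, replace=False)
        if abs(P[j, 0] - P[i, 0]) < 1e-9:
            continue
        a = (P[j, 1] - P[i, 1]) / (P[j, 0] - P[i, 0])
        b = P[i, 1] - a * P[i, 0]
        inl = np.abs(P[:, 1] - (a * P[:, 0] + b)) < tol
        if inl.sum() > best_inl.sum():
            best_inl, best_ab = inl, (a, b)
    return best_ab, best_inl

models, remaining = [], pts
for _ in range(2):                   # number of structures assumed known here
    (a, b), inl = ransac_line(remaining)
    models.append((a, b))
    remaining = remaining[~inl]      # remove this structure's inliers

print(models)
```

The paper's contribution replaces this greedy remove-and-repeat loop with a joint search over induced subgraphs, and uses an energy function to determine the number of structures rather than assuming it in advance.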
IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.
Huang, Lihan
2017-12-04
The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for the determination of kinetic parameters in predictive microbiology. The algorithm is incorporated into a data analysis tool with user-friendly graphical user interfaces (GUIs), the USDA IPMP-Global Fit. The GUIs are designed to guide users through the data analysis process and the proper selection of initial parameters for different combinations of mathematical models. The software performs one-step kinetic analysis to directly construct tertiary models by minimizing the global error between the experimental observations and the mathematical models. The current version is specifically designed for constructing tertiary models with time and temperature as the independent model parameters. The software was tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results of data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve the experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving the inverse problems and developing tertiary models in predictive microbiology.
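The one-step ("global") regression idea can be sketched as follows: instead of fitting each isothermal curve separately and then regressing the resulting rate parameters on temperature, all curves are fitted jointly under one primary + secondary model. The log-linear survival primary model and Bigelow-type secondary model below are one common combination, chosen here for illustration; the data and parameter values are synthetic, not from the paper:

```python
import numpy as np
from scipy.optimize import least_squares

# One-step global regression: log-linear survival (primary) with a
# Bigelow-type secondary model log10 D = a - T/z, fitted jointly to
# synthetic curves at several temperatures.
temps = [55.0, 60.0, 65.0]                     # deg C
times = np.array([0.0, 5.0, 10.0, 15.0, 20.0]) # min
true = dict(logN0=8.0, a=8.2, z=8.0)

def log_survivors(t, T, logN0, a, z):
    D = 10 ** (a - T / z)                      # decimal reduction time (min)
    return logN0 - t / D

rng = np.random.default_rng(4)
data = [(t, T, log_survivors(t, T, **true) + rng.normal(0, 0.1))
        for T in temps for t in times]

def residuals(p):
    logN0, a, z = p
    return [obs - log_survivors(t, T, logN0, a, z) for t, T, obs in data]

fit = least_squares(residuals, x0=[7.0, 7.0, 10.0])
print(fit.x)   # jointly estimated [logN0, a, z]
```

Minimizing one global residual vector across all temperatures is what lets the kinetic parameters be estimated in a single step, with all curves contributing to every parameter.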
DEFF Research Database (Denmark)
Manning, Stephan; Larsen, Marcus M.; Bharati, Pratyush
2013-01-01
This article examines antecedents and performance implications of global delivery models (GDMs) in global business services. GDMs require geographically distributed operations to exploit both proximity to clients and time-zone spread for efficient service delivery. We propose and empirically show...
Correcting Model Fit Criteria for Small Sample Latent Growth Models with Incomplete Data
McNeish, Daniel; Harring, Jeffrey R.
2017-01-01
To date, small sample problems with latent growth models (LGMs) have not received as much attention in the literature as those of related mixed-effect models (MEMs). Although many models can be interchangeably framed as an LGM or a MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…
A global fit of the MSSM with GAMBIT
Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin
2017-12-01
We study the seven-dimensional Minimal Supersymmetric Standard Model (MSSM7) with the new GAMBIT software framework, with all parameters defined at the weak scale. Our analysis significantly extends previous weak-scale, phenomenological MSSM fits, by adding more and newer experimental analyses, improving the accuracy and detail of theoretical predictions, including dominant uncertainties from the Standard Model, the Galactic dark matter halo and the quark content of the nucleon, and employing novel and highly-efficient statistical sampling methods to scan the parameter space. We find regions of the MSSM7 that exhibit co-annihilation of neutralinos with charginos, stops and sbottoms, as well as models that undergo resonant annihilation via both light and heavy Higgs funnels. We find high-likelihood models with light charginos, stops and sbottoms that have the potential to be within the future reach of the LHC. Large parts of our preferred parameter regions will also be accessible to the next generation of direct and indirect dark matter searches, making prospects for discovery in the near future rather good.
Pelet, S.; Previte, M.J.R.; Laiho, L.H.; So, P.T. C.
2004-01-01
Global fitting algorithms have been shown to effectively improve the accuracy and precision of the analysis of fluorescence lifetime imaging microscopy data. Global analysis performs better than unconstrained data fitting when prior information exists, such as the spatial invariance of the lifetimes of individual fluorescent species. The highly coupled nature of global analysis often results in a significantly slower convergence of the data fitting algorithm as compared with unconstrained ana...
International Nuclear Information System (INIS)
Hughes, T.J.; Fastook, J.L.
1994-05-01
The University of Maine conducted this study for Pacific Northwest Laboratory (PNL) as part of a global climate modeling task for site characterization of the potential nuclear waste repository site at Yucca Mountain, NV. The purpose of the study was to develop a global ice sheet dynamics model that will forecast the three-dimensional configuration of global ice sheets for specific climate change scenarios. The objective of the third (final) year of the work was to produce ice sheet data for glaciation scenarios covering the next 100,000 years. This was accomplished using both the map-plane and flowband solutions of our time-dependent, finite-element gridpoint model. The theory and equations used to develop the ice sheet models are presented. Three future scenarios were simulated by the model and the results are discussed.
Muon g-2 Estimates. Can One Trust Effective Lagrangians and Global Fits?
International Nuclear Information System (INIS)
Benayoun, M.; DelBuono, L.
2015-07-01
Previous studies have shown that the Hidden Local Symmetry (HLS) Model, supplied with appropriate symmetry breaking mechanisms, provides an Effective Lagrangian (BHLS) which encompasses a large number of processes within a unified framework; a global fit procedure allows for a simultaneous description of the e+e- annihilation into the six final states π+π-, π0γ, ηγ, π+π-π0, K+K-, KLKS, and includes the dipion spectrum in the τ decay and some more light meson decay partial widths. The contribution to the muon anomalous magnetic moment a_μ^th of these annihilation channels over the range of validity of the HLS model (up to 1.05 GeV) is found much improved compared to its partner derived from integrating the measured spectra directly. However, most spectra for the process e+e- → π+π- undergo overall scale uncertainties which dominate the other sources, and one may suspect some bias in the dipion contribution to a_μ^th. An iterated fit algorithm, shown to lead to unbiased results by a Monte Carlo study, is defined and applied successfully to the e+e- → π+π- data samples from CMD2, SND, KLOE (including the latest sample) and BaBar. The iterated fit solution is shown to be further improved and leads to a value for a_μ that differs from its experimental value a_μ^exp above the 4σ level. The contribution of the π+π- intermediate state up to 1.05 GeV to a_μ derived from the iterated fit benefits from an uncertainty about 3 times smaller than the corresponding usual estimate. Therefore, global fit techniques are shown to work and lead to improved unbiased results. The main issue raised in this study, and the kind of solution proposed, may be of concern for other data driven methods when the data samples are dominated by global normalization uncertainties.
Regionalizing global climate models
Pitman, A.J.; Arneth, A.; Ganzeveld, L.N.
2012-01-01
Global climate models simulate the Earth's climate impressively at scales of continents and greater. At these scales, large-scale dynamics and physics largely define the climate. At spatial scales relevant to policy makers, and to impacts and adaptation, many other processes may affect regional and
A new approach to a global fit of the CKM matrix
Energy Technology Data Exchange (ETDEWEB)
Hoecker, A.; Lacker, H.; Laplace, S. [Laboratoire de l' Accelerateur Lineaire, 91 - Orsay (France); Le Diberder, F. [Laboratoire de Physique Nucleaire et des Hautes Energies, 75 - Paris (France)
2001-05-01
We report on a new approach to a global CKM matrix analysis taking into account the most recent experimental and theoretical results. The statistical framework (Rfit) developed in this paper advocates frequentist statistics. Other approaches, such as Bayesian statistics or the 95% CL scan method, are also discussed. We emphasize the distinction between a model-testing phase and a model-dependent, metrological phase in which the various parameters of the theory are estimated. Measurements and theoretical parameters entering the global fit are thoroughly discussed, in particular with respect to their theoretical uncertainties. Graphical results for confidence levels are drawn in various one- and two-dimensional parameter spaces. Numerical results are provided for all relevant CKM parameterizations, the CKM elements and theoretical input parameters. Predictions for branching ratios of rare K and B meson decays are obtained. A simple, predictive SUSY extension of the Standard Model is discussed. (authors)
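The metrological phase of a frequentist fit of this kind amounts to scanning a parameter and converting the χ² offset from the global minimum into a confidence level. A one-dimensional toy (with invented "measurements" of a single CKM-like parameter; this is not the Rfit code) shows the mechanics:

```python
import numpy as np
from scipy.stats import chi2

# Scan a single parameter rho; CL(rho) = Prob(chi2_1 >= Delta chi2).
def total_chi2(rho, measurements):
    """Toy chi^2 from independent (central value, sigma) pairs."""
    return sum(((rho - m) / s) ** 2 for m, s in measurements)

meas = [(0.15, 0.05), (0.22, 0.04)]            # hypothetical inputs
scan = np.linspace(-0.2, 0.6, 801)
chi2_scan = np.array([total_chi2(r, meas) for r in scan])
chi2_min = chi2_scan.min()

cl = 1 - chi2.cdf(chi2_scan - chi2_min, df=1)  # confidence level per point
inside_68 = scan[cl > 0.32]                    # ~1 sigma interval
print(f"rho in [{inside_68.min():.3f}, {inside_68.max():.3f}] at 68% CL")
```

In the real analysis the χ² also depends on theoretical parameters with non-statistical uncertainties, which Rfit treats by allowing them to vary freely within allowed ranges, but the scan-and-convert logic is the same.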
The globalization of training in adolescent health and medicine: one size does not fit all.
Leslie, Karen
2016-08-01
Adolescent medicine across the globe is practiced within a variety of healthcare models, with the shared vision of promoting optimal health outcomes for adolescents. In the past decade, there has been a call for transformation in how health professionals are trained, with recommendations that they adopt a global outlook, a multiprofessional perspective and a systems approach that considers the connections between education and health systems. Many individuals and groups are now examining how best to accomplish this educational reform. There are tensions between the call for globally accepted standards of education models and practice (a one-size-fits-all approach) and the need to allow education practices to be interpreted and transformed to best suit local contexts. This paper discusses some of the key considerations for 'importing' training program models for adolescent health and medicine, including the importance of cultural alignment and the utilization of best evidence and practice in health professions education.
Reconceptualising transnational governance: Making global institutions fit for purpose
Cleary, Seán
2017-01-01
Tensions between national democratic accountability and transnational challenges undermine trust and collective action. Asymmetry between an integrated global economy, a fragmented global community, and a defective global polity causes social turbulence. Facing technological disruption, we need a new order to address inequality, transform education, and build social capital. Diplomatic exchanges will not suffice, but the Paris Agreement and Agenda 2030 were enabled by bottom-up deliberations. Th...
Werner, A.; Sanderson, M.; Hand, W.; Blyth, A.; Groenemeijer, P.; Kunz, M.; Puskeiler, M.; Saville, G.; Michel, G.
2012-04-01
Hail risk models are rare in the insurance industry, despite the fact that average annual hail losses can be large and that hail dominates losses for many motor portfolios worldwide. Insufficient observational data, high spatio-temporal variability and data inhomogeneity have so far hindered the creation of credible models. In January 2012, a selected group of hail experts met at Willis in London to discuss ways to model hail risk at various scales. Discussions aimed at improving our understanding of hail occurrence and severity, and covered recent progress in the understanding of microphysical processes, climatological behaviour and hail vulnerability. The final outcome of the meeting was the formation of a global hail risk model initiative and the launch of a realistic global hail model to assess hail loss occurrence and severity across the globe. The following projects will be tackled. Microphysics of hail and hail severity measures: understand the physical drivers of hail and hailstone size development in different regions of the globe; proposed factors include updraft and supercooled liquid water content in the troposphere; what are the threshold drivers of hail formation around the globe? Hail climatology: consider ways to build a realistic global climatological set of hail events based on physical parameters, including spatial variations in the total availability of moisture and aerosols, among others, and using neural networks. Vulnerability, exposure, and financial model: use historical losses and event footprints available in the insurance market to approximate fragility distributions and damage potential for various hail sizes for property, motor, and agricultural business; propagate uncertainty distributions and consider effects of policy conditions along with aggregating and disaggregating exposure and losses. This presentation provides an overview of ideas and tasks that lead towards a comprehensive global understanding of hail risk for
Muon g-2 estimates: can one trust effective Lagrangians and global fits?
Energy Technology Data Exchange (ETDEWEB)
Benayoun, M., E-mail: benayoun@in2p3.fr [LPNHE des Universités Paris VI et Paris VII IN2P3/CNRS, 75252, Paris (France); David, P. [LPNHE des Universités Paris VI et Paris VII IN2P3/CNRS, 75252, Paris (France); LIED, Université Paris-Diderot/CNRS UMR 8236, 75013, Paris (France); DelBuono, L. [LPNHE des Universités Paris VI et Paris VII IN2P3/CNRS, 75252, Paris (France); Jegerlehner, F. [Institut für Physik, Humboldt-Universität zu Berlin, Newtonstrasse 15, 12489, Berlin (Germany); Deutsches Elektronen-Synchrotron (DESY), Platanenallee 6, 15738, Zeuthen (Germany)
2015-12-26
Previous studies have shown that the Hidden Local Symmetry (HLS) model, supplied with appropriate symmetry breaking mechanisms, provides an effective Lagrangian (Broken Hidden Local Symmetry, BHLS) which encompasses a large number of processes within a unified framework. Based on it, a global fit procedure allows for a simultaneous description of the e+e- annihilation into six final states, π+π-, π0γ, ηγ, π+π-π0, K+K-, KLKS, and includes the dipion spectrum in the τ decay and some more light meson decay partial widths. The contribution to the muon anomalous magnetic moment a_μ^th of these annihilation channels over the range of validity of the HLS model (up to 1.05 GeV) is found much improved in comparison to the standard approach of integrating the measured spectra directly. However, because most spectra for the annihilation process e+e- → π+π- undergo overall scale uncertainties which dominate the other sources, one may suspect some bias in the dipion contribution to a_μ^th, which could question the reliability of the global fit method. An iterated global fit algorithm, shown to lead to unbiased results by a Monte Carlo study, is defined and applied successfully to the e+e- → π+π- data samples from CMD2, SND, KLOE, BaBar, and BESIII. The iterated fit solution is shown to further improve the prediction for a_μ, which we find to deviate from its experimental value above the 4σ level. The contribution to a_μ of the π+π- intermediate state up to 1.05 GeV has an uncertainty about 3 times smaller than the corresponding usual estimate. Therefore, global fit techniques are shown to work and lead to improved unbiased results.
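The bias from overall scale uncertainties, and the iterated-fit cure, can be demonstrated with a toy fit of a constant to data carrying both statistical and multiplicative normalization errors. Building the normalization covariance from the data biases the estimate low (the effect sometimes called Peelle's pertinent puzzle); rebuilding it from the current model at each iteration removes the bias. All numbers below are invented for illustration:

```python
import numpy as np

# n measurements of a constant, with statistical scatter sig_stat and a
# shared multiplicative normalization uncertainty sig_norm.
true_val, sig_stat, sig_norm, n = 10.0, 0.5, 0.10, 50
rng = np.random.default_rng(5)
y = true_val * (1 + sig_norm * rng.normal()) + sig_stat * rng.normal(size=n)

def fit_mean(y, ref):
    """Generalized-least-squares constant, with the normalization part of
    the covariance built from `ref` (the data, or the current model)."""
    C = np.diag(np.full(n, sig_stat ** 2)) + sig_norm ** 2 * np.outer(ref, ref)
    w = np.linalg.solve(C, np.ones(n))
    return (w @ y) / w.sum()

naive = fit_mean(y, ref=y)              # covariance from data: biased low
est = naive
for _ in range(10):                     # iterate: covariance from model
    est = fit_mean(y, ref=np.full(n, est))
print(naive, est)
```

The naive estimate is pulled well below the sample mean because points that fluctuate upward are assigned larger covariance and downweighted; the iterated estimate converges to the unbiased answer, mirroring the Monte Carlo argument made in the abstract.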
Muon g - 2 estimates. Can one trust effective Lagrangians and global fits?
Energy Technology Data Exchange (ETDEWEB)
Benayoun, M.; DelBuono, L. [LPNHE des Universites Paris VI et Paris VII IN2P3/CNRS, Paris (France); David, P. [LPNHE des Universites Paris VI et Paris VII IN2P3/CNRS, Paris (France); LIED, Universite Paris-Diderot/CNRS UMR 8236, Paris (France); Jegerlehner, F. [Humboldt-Universitaet zu Berlin, Institut fuer Physik, Berlin (Germany); Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2015-12-15
Previous studies have shown that the Hidden Local Symmetry (HLS) model, supplied with appropriate symmetry breaking mechanisms, provides an effective Lagrangian (Broken Hidden Local Symmetry, BHLS) which encompasses a large number of processes within a unified framework. Based on it, a global fit procedure allows for a simultaneous description of the e{sup +}e{sup -} annihilation into six final states - π{sup +}π{sup -}, π{sup 0}γ, ηγ, π{sup +}π{sup -}π{sup 0}, K{sup +}K{sup -}, K{sub L}K{sub S} - and also covers the dipion spectrum in τ decay and several light meson decay partial widths. The contribution of these annihilation channels to the muon anomalous magnetic moment a{sub μ}{sup th}, over the range of validity of the HLS model (up to 1.05 GeV), is found to be much improved compared with the standard approach of integrating the measured spectra directly. However, because most spectra for the annihilation process e{sup +}e{sup -} → π{sup +}π{sup -} carry overall scale uncertainties which dominate the other error sources, one may suspect some bias in the dipion contribution to a{sub μ}{sup th}, which could call into question the reliability of the global fit method. To address this, an iterated global fit algorithm, shown by a Monte Carlo study to yield unbiased results, is defined and applied successfully to the e{sup +}e{sup -} → π{sup +}π{sup -} data samples from CMD2, SND, KLOE, BaBar, and BESIII. The iterated fit solution further improves the prediction for a{sub μ}, which is found to deviate from its experimental value at above the 4σ level. The contribution to a{sub μ} of the π{sup +}π{sup -} intermediate state up to 1.05 GeV has an uncertainty about 3 times smaller than the corresponding usual estimate. Global fit techniques are therefore shown to work and to lead to improved, unbiased results. (orig.)
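The bias mechanism discussed above can be illustrated with a toy sketch (not the BHLS fit): data with an uncertain overall normalization are fitted iteratively, and the scale nuisance is re-estimated at each step from the *model* prediction rather than the data, which is what avoids the normalization-induced bias. The straight-line model, the scale prior, and all numerical values here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: model y = a*x, one data set with an uncertain overall scale s
# (prior s = 1 +/- sigma_s) on top of point-to-point errors sigma_y.
a_true, s_true, sigma_s, sigma_y = 2.0, 1.05, 0.05, 0.02
x = np.linspace(0.1, 1.0, 20)
y = s_true * a_true * x + rng.normal(0.0, sigma_y, x.size)

def iterated_fit(x, y, n_iter=20):
    """Alternate between the shape parameter a and the scale nuisance s.

    The chi2-optimal scale is built from the current model prediction,
    not from the data, removing the normalization-induced bias."""
    a, s = 1.0, 1.0
    for _ in range(n_iter):
        m = a * x
        # scale estimate given the model, with a unit prior of width sigma_s
        s = (np.sum(y * m) / sigma_y**2 + 1.0 / sigma_s**2) / (
            np.sum(m * m) / sigma_y**2 + 1.0 / sigma_s**2)
        # weighted least squares for a with the scale divided out
        a = np.sum((y / s) * x) / np.sum(x * x)
    return a, s

a_fit, s_fit = iterated_fit(x, y)
```

The product a_fit * s_fit recovers the true normalized slope to within the statistical error after a couple of iterations, even though a and s individually are only separated by the weak prior.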
arXiv Updated Global SMEFT Fit to Higgs, Diboson and Electroweak Data
Ellis, John; Sanz, Verónica; You, Tevong
The ATLAS and CMS collaborations have recently released significant new data on Higgs and diboson production in LHC Run 2. Measurements of Higgs properties have improved in many channels, while kinematic information for $h \\to \\gamma\\gamma$ and $h \\to ZZ$ can now be more accurately incorporated in fits using the STXS method, and $W^+ W^-$ diboson production at high $p_T$ gives new sensitivity to deviations from the Standard Model. We have performed an updated global fit to precision electroweak data, $W^+W^-$ measurements at LEP, and Higgs and diboson data from Runs 1 and 2 of the LHC in the framework of the Standard Model Effective Field Theory (SMEFT), allowing all coefficients to vary across the combined dataset, and present the results in both the Warsaw and SILH operator bases. We exhibit the improvement in the constraints on operator coefficients provided by the LHC Run 2 data, and discuss the correlations between them. We also explore the constraints our fit results impose on several models of physics ...
Muon g-2 Estimates. Can One Trust Effective Lagrangians and Global Fits?
Energy Technology Data Exchange (ETDEWEB)
Benayoun, M.; DelBuono, L. [Paris VI et Paris VII Univ. (France). LPNHE; David, P. [Paris VI et Paris VII Univ. (France). LPNHE; Paris-Diderot Univ./CNRS UMR 8236 (France). LIED; Jegerlehner, F. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2015-07-15
Previous studies have shown that the Hidden Local Symmetry (HLS) model, supplied with appropriate symmetry breaking mechanisms, provides an effective Lagrangian (BHLS) which encompasses a large number of processes within a unified framework; a global fit procedure allows for a simultaneous description of the e{sup +}e{sup -} annihilation into the six final states - π{sup +}π{sup -}, π{sup 0}γ, ηγ, π{sup +}π{sup -}π{sup 0}, K{sup +}K{sup -}, K{sub L}K{sub S} - and includes the dipion spectrum in τ decay and several light meson decay partial widths. The contribution of these annihilation channels to the muon anomalous magnetic moment a{sup th}{sub μ}, over the range of validity of the HLS model (up to 1.05 GeV), is found to be much improved compared with its partner derived from integrating the measured spectra directly. However, most spectra for the process e{sup +}e{sup -} → π{sup +}π{sup -} carry overall scale uncertainties which dominate the other error sources, and one may suspect some bias in the dipion contribution to a{sup th}{sub μ}. To address this, an iterated fit algorithm, shown by a Monte Carlo study to yield unbiased results, is defined and applied successfully to the e{sup +}e{sup -} → π{sup +}π{sup -} data samples from CMD2, SND, KLOE (including the latest sample) and BaBar. The iterated fit solution is further improved and leads to a value for a{sub μ} that differs from a{sup exp}{sub μ} at above the 4σ level. The contribution of the π{sup +}π{sup -} intermediate state up to 1.05 GeV to a{sub μ} derived from the iterated fit benefits from an uncertainty about 3 times smaller than the corresponding usual estimate. Global fit techniques are therefore shown to work and to lead to improved, unbiased results. The main issue raised in this study, and the kind of solution proposed, may be of concern for other data-driven methods whenever the data samples are dominated by global normalization uncertainties.
A Model Fit Statistic for Generalized Partial Credit Model
Liang, Tie; Wells, Craig S.
2009-01-01
Investigating the fit of a parametric model is an important part of the measurement process when implementing item response theory (IRT), but research examining it is limited. A general nonparametric approach for detecting model misfit, introduced by J. Douglas and A. S. Cohen (2001), has exhibited promising results for the two-parameter logistic…
Goodness-of-Fit Assessment of Item Response Theory Models
Maydeu-Olivares, Alberto
2013-01-01
The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…
PDFs, α_s, and quark masses from global fits
International Nuclear Information System (INIS)
Alekhin, Sergey; Bluemlein, Johannes; Moch, Sven-Olaf
2016-09-01
The strong coupling constant α_s and the heavy-quark masses m_c, m_b, m_t are extracted simultaneously with the parton distribution functions (PDFs) in the updated ABM12 fit including recent data from CERN-SPS, HERA, Tevatron, and the LHC. The values of α_s(M_Z)=0.1147±0.0008(exp.), m_c(m_c)=1.252±0.018(exp.) GeV, m_b(m_b)=3.83±0.12(exp.) GeV, m_t(m_t)=160.9±1.1(exp.) GeV are obtained with the MS-bar heavy-quark mass definition employed throughout the analysis.
Evaluation of global solar radiation models for Shanghai, China
International Nuclear Information System (INIS)
Yao, Wanxiang; Li, Zhengrong; Wang, Yuyan; Jiang, Fujian; Hu, Lingzhou
2014-01-01
Highlights: • 108 existing models are compared and analyzed using 42 years of meteorological data. • Fitting models based on measured data are established from the same 42 years of data. • All models are compared against the most recent 10 years of meteorological data. • The results show that polynomial models are the most accurate. - Abstract: In this paper, 89 existing monthly average daily global solar radiation models and 19 existing daily global solar radiation models are compared and analyzed using 42 years of meteorological data. The results show that, among the existing monthly average daily global solar radiation models, linear and polynomial models already estimate global solar radiation accurately, and more complex equation forms do not appreciably improve the precision. Including direct parameters such as latitude, altitude, solar altitude and sunshine duration helps improve the accuracy of the models, whereas indirect parameters do not. For the existing daily global solar radiation models, multi-parameter models are more accurate than single-parameter models, and polynomial models are more accurate than linear ones. Monthly average daily global solar radiation models (MADGSR models) and daily global solar radiation models (DGSR models) fitted to the measured data are then established from the 42 years of meteorological data. Finally, the existing models and the fitted models are comparatively analyzed against the most recent 10 years of meteorological data, and the results show that the polynomial models (MADGSR model 2, DGSR model 2 and Maduekwe model 2) are the most accurate.
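The polynomial sunshine-based models the abstract refers to can be sketched with ordinary least squares. The Angström-Prescott style form, the "true" coefficients, and the synthetic data below are illustrative assumptions, not the paper's Shanghai data.

```python
import numpy as np

# Illustrative sketch: fit a quadratic clearness-index model
#   H/H0 = a + b*(S/S0) + c*(S/S0)^2
# relating daily global radiation H (normalized by extraterrestrial H0)
# to relative sunshine duration S/S0.
rng = np.random.default_rng(0)
s = rng.uniform(0.2, 0.9, 200)                  # relative sunshine duration S/S0
a, b, c = 0.18, 0.62, -0.10                     # assumed "true" coefficients
k = a + b * s + c * s**2 + rng.normal(0.0, 0.01, s.size)  # clearness index H/H0

# np.polyfit returns coefficients highest degree first: [c, b, a]
c_fit, b_fit, a_fit = np.polyfit(s, k, deg=2)
```

With a few hundred station-days the three coefficients are recovered well; richer polynomial or multi-parameter forms would simply add columns to the same least-squares problem.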
A Stepwise Fitting Procedure for automated fitting of Ecopath with Ecosim models
Directory of Open Access Journals (Sweden)
Erin Scott
2016-01-01
The Stepwise Fitting Procedure automates the testing of alternative hypotheses used for fitting Ecopath with Ecosim (EwE) models to observed reference data (Mackinson et al. 2009). The calibration of EwE model predictions to observed data is important for evaluating any model that will be used for ecosystem-based management. Thus far, the model fitting procedure in EwE has been carried out manually: a repetitive task involving the setup of >1000 specific individual searches to find the statistically 'best fit' model. The novel fitting procedure automates this manual procedure, producing accurate results and letting the modeller concentrate on investigating the 'best fit' model for ecological accuracy.
Williams, P.; Huddelston, M.; Michel, G.; Thompson, S.; Heynert, K.; Pickering, C.; Abbott Donnelly, I.; Fewtrell, T.; Galy, H.; Sperna Weiland, F.; Winsemius, H.; Weerts, A.; Nixon, S.; Davies, P.; Schiferli, D.
2012-04-01
Recently, a Global Flood Model (GFM) initiative has been proposed by Willis, UK Met Office, Esri, Deltares and IBM. The idea is to create a global community platform that enables better understanding of the complexities of flood risk assessment to better support the decisions, education and communication needed to mitigate flood risk. The GFM will provide tools for assessing the risk of floods, for devising mitigation strategies such as land-use changes and infrastructure improvements, and for enabling effective pre- and post-flood event response. The GFM combines humanitarian and commercial motives. It will benefit: - The public, seeking to preserve personal safety and property; - State and local governments, seeking to safeguard economic activity, and improve resilience; - NGOs, similarly seeking to respond proactively to flood events; - The insurance sector, seeking to understand and price flood risk; - Large corporations, seeking to protect global operations and supply chains. The GFM is an integrated and transparent set of modules, each composed of models and data. For each module, there are two core elements: a live "reference version" (a worked example) and a framework of specifications, which will allow development of alternative versions. In the future, users will be able to work with the reference version or substitute their own models and data. If these meet the specification for the relevant module, they will interoperate with the rest of the GFM. Some "crowd-sourced" modules could even be accredited and published to the wider GFM community. Our intent is to build on existing public, private and academic work, improve local adoption, and stimulate the development of multiple - but compatible - alternatives, so strengthening mankind's ability to manage flood impacts. The GFM is being developed and managed by a non-profit organization created for the purpose. The business model will be inspired by open source software (e.g. Linux): - for non-profit usage
Sparks, R. S. J.; Loughlin, S. C.; Cottrell, E.; Valentine, G.; Newhall, C.; Jolly, G.; Papale, P.; Takarada, S.; Crosweller, S.; Nayembil, M.; Arora, B.; Lowndes, J.; Connor, C.; Eichelberger, J.; Nadim, F.; Smolka, A.; Michel, G.; Muir-Wood, R.; Horwell, C.
2012-04-01
Over 600 million people live close enough to active volcanoes to be affected when they erupt. Volcanic eruptions cause loss of life, significant economic losses and severe disruption to people's lives, as highlighted by the recent eruption of Mount Merapi in Indonesia. The eruption of Eyjafjallajökull, Iceland in 2010 illustrated the potential of even small eruptions to have major impact on the modern world through disruption of complex critical infrastructure and business. The effects in the developing world on economic growth and development can be severe. There is evidence that large eruptions can cause a change in the earth's climate for several years afterwards. Aside from meteor impact and possibly an extreme solar event, very large magnitude explosive volcanic eruptions may be the only natural hazard that could cause a global catastrophe. GVM is a growing international collaboration that aims to create a sustainable, accessible information platform on volcanic hazard and risk. We are designing and developing an integrated database system of volcanic hazards, vulnerability and exposure with internationally agreed metadata standards. GVM will establish methodologies for analysis of the data (e.g. vulnerability indices) to inform risk assessment, develop complementary hazards models and create relevant hazards and risk assessment tools. GVM will develop the capability to anticipate future volcanism and its consequences. NERC is funding the start-up of this initiative for three years from November 2011. GVM builds directly on the VOGRIPA project started as part of the GRIP (Global Risk Identification Programme) in 2004 under the auspices of the World Bank and UN. Major international initiatives and partners such as the Smithsonian Institution - Global Volcanism Program, State University of New York at Buffalo - VHub, Earth Observatory of Singapore - WOVOdat and many others underpin GVM.
Directory of Open Access Journals (Sweden)
Xiaosheng Yu
2013-01-01
We propose a novel active contour model in a variational level set formulation for image segmentation and target localization. We combine a local image fitting term and a global image fitting term to drive the contour evolution. Our model can efficiently segment images with intensity inhomogeneity, with the contour starting anywhere in the image. In the numerical implementation, an efficient scheme is used to ensure sufficient numerical accuracy. We validated the model's effectiveness on numerous synthetic and real images, and the promising experimental results show its advantages in terms of accuracy, efficiency, and robustness.
Fitting Markovian binary trees using global and individual demographic data
Hautphenne, Sophie; Massaro, Melanie; Turner, Katharine
2017-01-01
We consider a class of branching processes called Markovian binary trees, in which an individual's lifetime and reproduction epochs are modeled using a transient Markovian arrival process (TMAP). We estimate the parameters of the TMAP based on population data containing information on age-specific fertility and mortality rates. Depending on the degree of detail of the available data, a weighted non-linear regression method or a maximum likelihood method is applied. We discuss the optimal choi...
ITEM LEVEL DIAGNOSTICS AND MODEL - DATA FIT IN ITEM ...
African Journals Online (AJOL)
Global Journal
Item response theory (IRT) is a framework for modeling and analyzing item response ... data. However, there is an argument that the evaluation of fit in IRT modeling has been ... National Council on Measurement in Education ... model-data fit should be based on three types of ... prediction should be assessed through the.
A Comparison of Item Fit Statistics for Mixed IRT Models
Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.
2010-01-01
In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G[superscript 2], Orlando and Thissen's S-X[superscript 2] and S-G[superscript 2], and Stone's chi[superscript 2*] and G[superscript 2*]. To investigate the…
An Analysis of Yip's Global Strategy Model, Using Coca-Cola ...
African Journals Online (AJOL)
Analysis of the selected business cases suggests a weak fit between the Yip model of a truly Global strategy ... like Coca-Cola in the beverage industry for effective implementation of a global strategy. ... Keywords: Global Strategy, Leadership.
Pelet, S; Previte, M J R; Laiho, L H; So, P T C
2004-10-01
Global fitting algorithms have been shown to effectively improve the accuracy and precision of the analysis of fluorescence lifetime imaging microscopy data. Global analysis performs better than unconstrained data fitting when prior information exists, such as the spatial invariance of the lifetimes of individual fluorescent species. The highly coupled nature of global analysis often results in significantly slower convergence of the data fitting algorithm compared with unconstrained analysis. Convergence speed can be greatly accelerated by providing appropriate initial guesses. Realizing that image morphology often correlates with fluorophore distribution, a global fitting algorithm has been developed that assigns initial guesses throughout an image based on a segmentation analysis. This algorithm was tested on both simulated data sets and time-domain lifetime measurements. We have successfully measured fluorophore distribution in fibroblasts stained with Hoechst and calcein. The method further allows second harmonic generation from collagen and elastin autofluorescence to be differentiated in fluorescence lifetime imaging microscopy images of ex vivo human skin. On our experimental measurements, this algorithm increased convergence speed by over two orders of magnitude and achieved significantly better fits. Copyright 2004 Biophysical Society
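The core idea of global lifetime analysis, shared lifetimes across pixels with independent per-pixel amplitudes, can be sketched as a simultaneous least-squares fit. The two-pixel bi-exponential setup, noise level, and initial guesses below are invented for illustration; in the paper the initial guesses come from image segmentation.

```python
import numpy as np
from scipy.optimize import least_squares

# Two decay traces that share the same pair of lifetimes (spatially
# invariant species) but have independent amplitudes.
t = np.linspace(0.0, 10.0, 200)
tau_true = (0.5, 3.0)
amps = [(1.0, 0.2), (0.3, 0.9)]                 # per-pixel amplitudes
rng = np.random.default_rng(2)
traces = [a1 * np.exp(-t / tau_true[0]) + a2 * np.exp(-t / tau_true[1])
          + rng.normal(0.0, 0.005, t.size)
          for a1, a2 in amps]

def residuals(p):
    tau1, tau2 = p[0], p[1]                     # lifetimes shared globally
    out = []
    for k, trace in enumerate(traces):
        a1, a2 = p[2 + 2 * k], p[3 + 2 * k]     # amplitudes per pixel
        out.append(trace - (a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)))
    return np.concatenate(out)                  # all pixels fitted at once

# Initial guesses; segmentation would supply per-region values in practice.
p0 = [0.3, 2.0, 0.5, 0.5, 0.5, 0.5]
fit = least_squares(residuals, p0, bounds=(1e-6, np.inf))
tau_short, tau_long = sorted(fit.x[:2])
```

Stacking all pixel residuals into one vector is what couples the problem; good initial guesses matter because the shared lifetimes make the Jacobian dense across pixels.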
Automated Model Fit Method for Diesel Engine Control Development
Seykens, X.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.
2014-01-01
This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and structured approach to fit the required combustion model parameters. Only a data set is required that is
Sensitivity of Fit Indices to Misspecification in Growth Curve Models
Wu, Wei; West, Stephen G.
2010-01-01
This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…
Marzeion, B.; Maussion, F.
2017-12-01
Mountain glaciers are one of the few remaining sub-systems of the global climate system for which no globally applicable, open source, community-driven model exists. Notable examples from the ice sheet community include the Parallel Ice Sheet Model or Elmer/Ice. While the atmospheric modeling community has a long tradition of sharing models (e.g. the Weather Research and Forecasting model) or comparing them (e.g. the Coupled Model Intercomparison Project or CMIP), recent initiatives originating from the glaciological community show a new willingness to better coordinate global research efforts following the CMIP example (e.g. the Glacier Model Intercomparison Project or the Glacier Ice Thickness Estimation Working Group). In the recent past, great advances have been made in the global availability of data and methods relevant for glacier modeling, spanning glacier outlines, automatized glacier centerline identification, bed rock inversion methods, and global topographic data sets. Taken together, these advances now allow the ice dynamics of glaciers to be modeled on a global scale, provided that adequate modeling platforms are available. Here, we present the Open Global Glacier Model (OGGM), developed to provide a global scale, modular, and open source numerical model framework for consistently simulating past and future global scale glacier change. Global not only in the sense of leading to meaningful results for all glaciers combined, but also for any small ensemble of glaciers, e.g. at the headwater catchment scale. Modular to allow combinations of different approaches to the representation of ice flow and surface mass balance, enabling a new kind of model intercomparison. Open source so that the code can be read and used by anyone and so that new modules can be added and discussed by the community, following the principles of open governance. Consistent in order to provide uncertainty measures at all realizable scales.
The FIT Model - Fuel-cycle Integration and Tradeoffs
International Nuclear Information System (INIS)
Piet, Steven J.; Soelberg, Nick R.; Bays, Samuel E.; Pereira, Candido; Pincock, Layne F.; Shaber, Eric L.; Teague, Melissa C.; Teske, Gregory M.; Vedros, Kurt G.
2010-01-01
All mass streams from fuel separation and fabrication are products that must meet some set of product criteria - fuel feedstock impurity limits, waste acceptance criteria (WAC), material storage (if any), or recycle material purity requirements such as zirconium for cladding or lanthanides for industrial use. These must be considered in a systematic and comprehensive way. The FIT model and the 'system losses study' team that developed it (Shropshire 2009, Piet 2010) are an initial step by the FCR&D program toward a global analysis that accounts for the requirements and capabilities of each component, as well as major material flows within an integrated fuel cycle. This will help the program identify near-term R&D needs and set longer-term goals. The question originally posed to the 'system losses study' was the cost of separation, fuel fabrication, waste management, etc. versus the separation efficiency. In other words, are the costs associated with marginal reductions in separations losses (or improvements in product recovery) justified by the gains in the performance of other systems? We have learned that that is the wrong question. The right question is: how does one adjust the compositions and quantities of all mass streams, given uncertain product criteria, to balance competing objectives including cost? FIT is a method to analyze different fuel cycles using common bases to determine how chemical performance changes in one part of a fuel cycle (say, used fuel cooling times or separation efficiencies) affect other parts of the fuel cycle. FIT estimates impurities in fuel and waste via a rough estimate of physics and mass balance for a set of technologies. If feasibility is an issue for a set, as it is for 'minimum fuel treatment' approaches such as melt refining and AIROX, it can help to estimate how performance would have to change to achieve feasibility.
topicmodels: An R Package for Fitting Topic Models
Directory of Open Access Journals (Sweden)
Bettina Grun
2011-05-01
Topic models allow the probabilistic modeling of term frequency occurrences in documents. The fitted model can be used to estimate the similarity between documents as well as between a set of specified keywords using an additional layer of latent variables which are referred to as topics. The R package topicmodels provides basic infrastructure for fitting topic models based on data structures from the text mining package tm. The package includes interfaces to two algorithms for fitting topic models: the variational expectation-maximization algorithm provided by David M. Blei and co-authors and an algorithm using Gibbs sampling by Xuan-Hieu Phan and co-authors.
HDFITS: Porting the FITS data model to HDF5
Price, D. C.; Barsdell, B. R.; Greenhill, L. J.
2015-09-01
The FITS (Flexible Image Transport System) data format has been the de facto data format for astronomy-related data products since its inception in the late 1970s. While the FITS file format is widely supported, it lacks many of the features of more modern data serialization formats, such as the Hierarchical Data Format (HDF5). The HDF5 file format offers considerable advantages over FITS, such as improved I/O speed and compression, but has yet to gain widespread adoption within astronomy. One of the major obstacles is that HDF5 is not well supported by data reduction software packages and image viewers. Here, we present a comparison of FITS and HDF5 as formats for the storage of astronomy datasets. We show that the underlying data model of FITS can be ported to HDF5 in a straightforward manner, and that by doing so the advantages of the HDF5 file format can be leveraged immediately. In addition, we present a software tool, fits2hdf, for converting between FITS and a new 'HDFITS' format, where data are stored in HDF5 in a FITS-like manner. We show that HDFITS allows faster reading of data (up to 100x faster than FITS in some use cases) and improved compression (higher compression ratios and higher throughput). Finally, we show that by only changing the import lines in Python-based FITS utilities, HDFITS-formatted data can be presented transparently as an in-memory FITS equivalent.
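The FITS-to-HDF5 mapping described above can be sketched with h5py (this is not the actual fits2hdf tool): each HDU becomes an HDF5 group, the data array becomes a dataset, and the header keyword/value cards become attributes on the group. The file name, group layout, and header values are invented for the example.

```python
import h5py
import numpy as np

# A tiny stand-in for a FITS primary HDU: an image plus header cards.
image = np.arange(12.0).reshape(3, 4)
header = {"BITPIX": -64, "NAXIS": 2, "OBJECT": "M31"}

with h5py.File("hdfits_demo.h5", "w") as f:
    hdu = f.create_group("PRIMARY")                     # one group per HDU
    hdu.create_dataset("DATA", data=image, compression="gzip")
    for key, value in header.items():                   # header cards -> attrs
        hdu.attrs[key] = value

with h5py.File("hdfits_demo.h5", "r") as f:
    restored = f["PRIMARY/DATA"][()]
    obj_name = f["PRIMARY"].attrs["OBJECT"]
```

Because the mapping is one-to-one, round-tripping back to a FITS-like in-memory object only requires walking the groups and re-emitting the attributes as header cards.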
DEFF Research Database (Denmark)
Manning, Stephan; Møller Larsen, Marcus; Bharati, Pratyush
-zone spread allowing for 24/7 service delivery and access to resources. Based on comprehensive data we show that providers are likely to establish GDM configurations when clients value access to globally distributed talent pools and speed of service delivery, and in particular when services are highly...
DEFF Research Database (Denmark)
Manning, Stephan; Møller Larsen, Marcus; Bharati, Pratyush M.
2015-01-01
antecedents and contingencies of setting up GDM structures. Based on comprehensive data we show that providers are likely to establish GDM location configurations when clients value access to globally distributed talent and speed of service delivery, in particular when services are highly commoditized...
Analytical fitting model for rough-surface BRDF.
Renhorn, Ingmar G E; Boreman, Glenn D
2008-08-18
A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.
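The fitting step described above can be illustrated with a much-reduced toy: a two-term analytical BRDF (constant diffuse level plus a Gaussian specular lobe around the mirror direction) fitted to synthetic in-plane data with nonlinear least squares. The paper's model has fourteen parameters and uses Mathematica's FindFit; the three-parameter form, angles, and noise below are assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

theta_i = np.deg2rad(30.0)                         # fixed incidence angle
theta_s = np.deg2rad(np.linspace(-80.0, 80.0, 161))  # in-plane scatter angles

def brdf(theta, kd, ks, width):
    # diffuse floor + Gaussian specular lobe centred on the mirror angle
    return kd + ks * np.exp(-((theta - theta_i) / width) ** 2)

true_params = (0.05, 1.2, 0.15)
rng = np.random.default_rng(3)
data = brdf(theta_s, *true_params) * (1.0 + rng.normal(0.0, 0.02, theta_s.size))

popt, pcov = curve_fit(brdf, theta_s, data, p0=(0.1, 1.0, 0.2))
```

As in the paper's workflow, once the parameters are fixed the analytical form reproduces the measured curve over its full dynamic range without further adjustment.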
An R package for fitting age, period and cohort models
Directory of Open Access Journals (Sweden)
Adriano Decarli
2014-11-01
In this paper we present the R implementation of a GLIM macro which fits age-period-cohort models following Osmond and Gardner. In addition to the estimates of the corresponding model, owing to the programming capability of R as an object-oriented language, methods for printing, plotting and summarizing the results are provided. Furthermore, the researcher has full access to the output of the main function (apc), which returns all the models fitted within the function. It is thus possible to critically evaluate the goodness of fit of the resulting model.
Modeling Evolution on Nearly Neutral Network Fitness Landscapes
Yakushkina, Tatiana; Saakian, David B.
2017-08-01
To describe virus evolution, it is necessary to define a fitness landscape. In this article, we consider microscopic models with an advanced version of neutral network fitness landscapes. In this problem setting, we suppose the fitness difference between one-point mutation neighbors to be small. We construct a modification of the Wright-Fisher model, which is related to ordinary infinite-population models with a nearly neutral network fitness landscape in the large population limit. From the microscopic models in the realistic sequence space, we derive two versions of nearly neutral network models: with sinks and without sinks. We claim that the suggested model describes the evolutionary dynamics of RNA viruses better than the traditional Wright-Fisher model with few sequences.
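The Wright-Fisher dynamics underlying the models above can be sketched in its most reduced form: two types whose fitness difference s is small (the nearly neutral regime), symmetric mutation, and multinomial resampling each generation. The population size, selection coefficient, and mutation rate are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
N, s, mu, gens = 1000, 0.002, 1e-3, 2000   # nearly neutral: N*s of order one
freq = 0.5                                  # frequency of the fitter type
traj = []
for _ in range(gens):
    # selection step: expected frequency after fitness weighting
    w = freq * (1 + s) / (freq * (1 + s) + (1 - freq))
    # symmetric one-point mutation between the two neighbours
    w = w * (1 - mu) + (1 - w) * mu
    # genetic drift: binomial (two-type multinomial) resampling of N genomes
    freq = rng.binomial(N, w) / N
    traj.append(freq)
```

In this regime drift and selection are comparable, so trajectories wander rather than fix deterministically; the infinite-population limit of the same update rule recovers the deterministic quasispecies-type dynamics the abstract refers to.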
International Nuclear Information System (INIS)
Pronyaev, V.G.
2003-01-01
The information entropy is taken as a measure of knowledge about the object, and the reduced univariate variance as a common measure of uncertainty. Covariances in model versus non-model least-squares fits are discussed.
Fast Algorithms for Fitting Active Appearance Models to Unconstrained Images
Tzimiropoulos, Georgios; Pantic, Maja
2016-01-01
Fitting algorithms for Active Appearance Models (AAMs) are usually considered to be robust but slow or fast but less able to generalize well to unseen variations. In this paper, we look into AAM fitting algorithms and make the following orthogonal contributions: We present a simple “project-out”
Global nuclear material control model
International Nuclear Information System (INIS)
Dreicer, J.S.; Rutherford, D.A.
1996-01-01
The nuclear danger can be reduced by a system for global management, protection, control, and accounting as part of a disposition program for special nuclear materials. The development of an international fissile material management and control regime requires conceptual research supported by an analytical and modeling tool that treats the nuclear fuel cycle as a complete system. Such a tool must represent the fundamental data, information, and capabilities of the fuel cycle, including an assessment of the global distribution of military and civilian fissile material inventories, a representation of the proliferation-pertinent physical processes, and a framework supportive of a national or international perspective. The authors have developed a prototype global nuclear material management and control systems analysis capability, the Global Nuclear Material Control (GNMC) model. The GNMC model establishes the framework for evaluating the global production, disposition, and safeguards and security requirements for fissile nuclear material.
Modelling MIZ dynamics in a global model
Rynders, Stefanie; Aksenov, Yevgeny; Feltham, Daniel; Nurser, George; Naveira Garabato, Alberto
2016-04-01
Exposure of large, previously ice-covered areas of the Arctic Ocean to the wind and surface ocean waves results in the Arctic pack ice cover becoming more fragmented and mobile, with large regions of ice cover evolving into the Marginal Ice Zone (MIZ). The need for better climate predictions, along with growing economic activity in the Polar Oceans, necessitates climate and forecasting models that can simulate fragmented sea ice with a greater fidelity. Current models are not fully fit for the purpose, since they neither model surface ocean waves in the MIZ, nor account for the effect of floe fragmentation on drag, nor include sea ice rheology that represents both the now thinner pack ice and MIZ ice dynamics. All these processes affect the momentum transfer to the ocean. We present initial results from a global ocean model NEMO (Nucleus for European Modelling of the Ocean) coupled to the Los Alamos sea ice model CICE. The model setup implements a novel rheological formulation for sea ice dynamics, accounting for ice floe collisions, thus offering a seamless framework for pack ice and MIZ simulations. The effect of surface waves on ice motion is included through wave pressure and the turbulent kinetic energy of ice floes. In the multidecadal model integrations we examine MIZ and basin scale sea ice and oceanic responses to the changes in ice dynamics. We analyse model sensitivities and attribute them to key sea ice and ocean dynamical mechanisms. The results suggest that the effect of the new ice rheology is confined to the MIZ. However with the current increase in summer MIZ area, which is projected to continue and may become the dominant type of sea ice in the Arctic, we argue that the effects of the combined sea ice rheology will be noticeable in large areas of the Arctic Ocean, affecting sea ice and ocean. With this study we assert that to make more accurate sea ice predictions in the changing Arctic, models need to include MIZ dynamics and physics.
Fitting Simpson's neutrino into the standard model
International Nuclear Information System (INIS)
Valle, J.W.F.
1985-01-01
I show how to accommodate the 17 keV state recently reported by Simpson as one of the neutrinos of the standard model. Experimental constraints can only be satisfied if the μ and τ neutrinos combine, to a very good approximation, to form a Dirac neutrino of 17 keV, leaving a light ν_e. Neutrino oscillations will provide the most stringent test of the model. The cosmological bounds are also satisfied in a natural way in models with Goldstone bosons. Explicit examples are given in the framework of majoron-type models. Constraints on the lepton symmetry breaking scale which follow from astrophysics, cosmology and laboratory experiments are discussed. (orig.)
Fitting ARMA Time Series by Structural Equation Models.
van Buuren, Stef
1997-01-01
This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)
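As a point of comparison for the SEM route described above: for a zero-mean Gaussian AR(1), the conditional maximum likelihood estimate of the autoregressive parameter reduces to an ordinary least-squares slope of x_t on x_{t-1}. A minimal stdlib-only Python sketch, with made-up simulation values rather than data from the paper:

```python
import random

random.seed(42)

# Simulate a stationary Gaussian AR(1) process x_t = phi * x_{t-1} + e_t
# (phi and the sample size are illustrative).
phi_true, n = 0.6, 5000
x = [0.0]
for _ in range(n):
    x.append(phi_true * x[-1] + random.gauss(0.0, 1.0))

# Conditional ML estimate of phi: the least-squares slope of x_t on x_{t-1}.
num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
phi_hat = num / den
print(round(phi_hat, 3))
```

The SEM formulation in the paper goes further (handling the MA part and exact likelihoods), but the estimate above illustrates the target quantity.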
A person fit test for IRT models for polytomous items
Glas, Cornelis A.W.; Dagohoy, A.V.
2007-01-01
A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability
Fitting polytomous Rasch models in SAS
DEFF Research Database (Denmark)
Christensen, Karl Bang
2006-01-01
The item parameters of a polytomous Rasch model can be estimated using marginal and conditional approaches. This paper describes how this can be done in SAS (V8.2) for three item parameter estimation procedures: marginal maximum likelihood estimation, conditional maximum likelihood estimation, an...
Andreasson, Jesper; Johansson, Thomas
2016-01-01
This article analyses fitness professionals' perceptions and understanding of their occupational education and pedagogical pursuance, framed within the emergence of a global fitness industry. The empirical material consists of interviews with personal trainers and group fitness instructors, as well as observations in their working environment. In…
Critical elements on fitting the Bayesian multivariate Poisson Lognormal model
Zamzuri, Zamira Hasanah binti
2015-10-01
Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we show that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily given any data set.
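The tuning-parameter issue raised above can be illustrated with a toy univariate analogue: a random-walk Metropolis sampler for a Poisson rate with a lognormal prior, whose proposal step size is adapted toward a moderate acceptance rate. Everything here (data, prior scale, adaptation rule) is hypothetical and far simpler than the multivariate model in the paper:

```python
import math
import random

random.seed(1)

# Toy data: counts assumed Poisson with rate exp(theta), theta ~ N(0, 10^2),
# i.e. a lognormal prior on the rate (hypothetical values).
counts = [3, 5, 4, 6, 2, 5, 7, 4]

def log_post(theta):
    rate = math.exp(theta)
    loglik = sum(c * theta - rate for c in counts)  # Poisson kernel, constants dropped
    return loglik - theta ** 2 / (2 * 10.0 ** 2)    # normal prior on the log scale

# Random-walk Metropolis with the step size nudged toward a target acceptance
# rate, illustrating why the tuning parameter matters.
theta, step, accepted = 0.0, 1.0, 0
n_iter = 20000
for i in range(1, n_iter + 1):
    prop = theta + random.gauss(0.0, step)
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta, accepted = prop, accepted + 1
    if i % 500 == 0:                          # adapt every 500 iterations
        rate = accepted / i
        step *= 1.1 if rate > 0.35 else 0.9   # aim for roughly 30-35% acceptance

acc_rate = accepted / n_iter
print(round(acc_rate, 2), round(step, 3))
```

A step that is too large rejects almost everything; one that is too small accepts everything but explores slowly. The same trade-off drives the tuning in the MPL fit.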
Global Fits of the Electroweak Standard Theory: Past, Present and Future
Baak, M; Mönig, K
2016-01-01
The last decades have seen tremendous progress in the experimental techniques for measuring key observables of the Standard Theory (ST), as well as in theoretical calculations, which has led to highly precise predictions of these observables. Global electroweak fits of the ST compare the precision measurements of electroweak observables from lepton and hadron colliders at CERN and elsewhere with accurate theoretical predictions of the ST calculated at multi-loop level. For a long time, global fits have been used to assess the validity of the ST and to constrain indirectly (by exploiting contributions from quantum loops) the remaining free ST parameters, such as the masses of the top quark and Higgs boson before their direct discovery. With the discovery of the Higgs boson at the Large Hadron Collider (LHC), the electroweak sector of the ST is now complete and all fundamental ST parameters are known. Hence the global fits are a powerful tool to probe the internal consistency of the ST, to predict ST parameters with...
Random-growth urban model with geographical fitness
Kii, Masanobu; Akimoto, Keigo; Doi, Kenji
2012-12-01
This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
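A minimal sketch of the growth mechanism described above, restricted to the positive-fitness case and with hypothetical parameter values (founding probability, fitness range): each step, one unit of population either founds a new city or joins city i with probability proportional to fitness_i times size_i.

```python
import random

random.seed(0)

# Preferential attachment weighted by geographical fitness (illustrative values).
p_new = 0.05                  # probability that a step founds a new city
sizes, fitness = [1.0], [1.0]
for _ in range(20000):
    if random.random() < p_new:
        sizes.append(1.0)
        fitness.append(random.uniform(0.5, 1.5))   # positive fitness only
    else:
        weights = [f * s for f, s in zip(fitness, sizes)]
        i = random.choices(range(len(sizes)), weights=weights)[0]
        sizes[i] += 1.0

# The resulting size distribution is strongly right-skewed: a few cities
# absorb most of the growth, as in a Pareto-like tail.
sizes_sorted = sorted(sizes, reverse=True)
print(len(sizes), sizes_sorted[0])
```

Note that, as the abstract emphasises, randomness enters only when new cities are created and in the attachment choice; each city's subsequent growth trajectory is otherwise smooth in expectation.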
LEP asymmetries and fits of the standard model
International Nuclear Information System (INIS)
Pietrzyk, B.
1994-01-01
The lepton and quark asymmetries measured at LEP are presented. The results of the Standard Model fits to the electroweak data presented at this conference are given. The top mass obtained from the fit to the LEP data is 172 +13/−14 +18/−20 GeV; it is 177 +11/−11 +18/−19 GeV when the collider, ν and A_LR data are also included. (author). 10 refs., 3 figs., 2 tabs
McNeish, Daniel; Hancock, Gregory R
2018-03-01
Lance, Beck, Fan, and Carter (2016) recently advanced 6 new fit indices and associated cutoff values for assessing data-model fit in the structural portion of traditional latent variable path models. The authors appropriately argued that, although most researchers' theoretical interest rests with the latent structure, they still rely on indices of global model fit that simultaneously assess both the measurement and structural portions of the model. As such, Lance et al. proposed indices intended to assess the structural portion of the model in isolation of the measurement model. Unfortunately, although these strategies separate the assessment of the structure from the fit of the measurement model, they do not isolate the structure's assessment from the quality of the measurement model. That is, even with a perfectly fitting measurement model, poorer quality (i.e., less reliable) measurements will yield a more favorable verdict regarding structural fit, whereas better quality (i.e., more reliable) measurements will yield a less favorable structural assessment. This phenomenon, referred to by Hancock and Mueller (2011) as the reliability paradox, affects not only traditional global fit indices but also those structural indices proposed by Lance et al. as well. Fortunately, as this comment will clarify, indices proposed by Hancock and Mueller help to mitigate this problem and allow the structural portion of the model to be assessed independently of both the fit of the measurement model as well as the quality of indicator variables contained therein. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Automatic fitting of spiking neuron models to electrophysiological recordings
Directory of Open Access Journals (Sweden)
Cyrille Rossant
2010-03-01
Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming, both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present, it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of spiking neuron models.
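The paper's GPU/Brian library aside, the basic shape of such a fitting procedure can be sketched with a plain leaky integrate-and-fire neuron and a grid search over a single parameter. All values below are illustrative, not taken from the competition data:

```python
# Fit the firing threshold of a leaky integrate-and-fire (LIF) neuron so that
# its spike count matches a target "recording", via a simple grid search.

def lif_spike_count(threshold, current=1.5, tau=0.02, dt=0.001, t_max=1.0):
    """Forward-Euler LIF simulation; returns the number of spikes."""
    v, count = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt * (-v + current) / tau   # dv/dt = (-v + I) / tau
        if v >= threshold:               # spike, then reset
            count += 1
            v = 0.0
    return count

target_count = 25                        # pretend this came from a recording
best = min((abs(lif_spike_count(th) - target_count), th)
           for th in [0.6, 0.8, 1.0, 1.2, 1.4])
print(best)                              # (spike-count error, best threshold)
```

Real fitting libraries replace the grid search with parallel optimisation over many parameters and use spike-timing-aware error measures (e.g. coincidence factors) instead of raw spike counts.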
The Global Tsunami Model (GTM)
Lorito, S.; Basili, R.; Harbitz, C. B.; Løvholt, F.; Polet, J.; Thio, H. K.
2017-12-01
The tsunamis that occurred worldwide in the last two decades have highlighted the need for a thorough understanding of the risk posed by relatively infrequent but often disastrous tsunamis, and the importance of a comprehensive and consistent methodology for quantifying the hazard. In the last few years, several methods for probabilistic tsunami hazard analysis have been developed and applied to different parts of the world. In an effort to coordinate and streamline these activities and make progress towards implementing the Sendai Framework for Disaster Risk Reduction (SFDRR), we have initiated a Global Tsunami Model (GTM) working group with the aim of i) enhancing our understanding of tsunami hazard and risk on a global scale and developing standards and guidelines for it, ii) providing a portfolio of validated tools for probabilistic tsunami hazard and risk assessment at a range of scales, and iii) developing a global tsunami hazard reference model. This GTM initiative has grown out of the tsunami component of the Global Assessment of Risk (GAR15), which has resulted in an initial global model of probabilistic tsunami hazard and risk. Started as an informal gathering of scientists interested in advancing tsunami hazard analysis, the GTM is currently in the process of being formalized through letters of interest from participating institutions. The initiative has now been endorsed by the United Nations International Strategy for Disaster Reduction (UNISDR) and the World Bank's Global Facility for Disaster Reduction and Recovery (GFDRR). We will provide an update on the state of the project and the overall technical framework, and discuss the technical issues that are currently being addressed, including earthquake source recurrence models, the use of aleatory variability and epistemic uncertainty, and preliminary results for a probabilistic global hazard assessment, which is an update of the model included in UNISDR GAR15.
Global thermal models of the lithosphere
Cammarano, F.; Guerri, M.
2017-12-01
Unraveling the thermal structure of the outermost shell of our planet is key for understanding its evolution. We obtain temperatures from interpretation of global shear-velocity (VS) models. Long-wavelength thermal structure is well determined by seismic models and only slightly affected by compositional effects and uncertainties in mineral-physics properties. Absolute temperatures and gradients with depth, however, are not well constrained. Adding constraints from petrology, heat-flow observations and the thermal evolution of the oceanic lithosphere helps to better estimate absolute temperatures in the top part of the lithosphere. We produce global thermal models of the lithosphere at different spatial resolutions, up to spherical-harmonic degree 24, and provide estimated standard deviations. We provide a purely seismic thermal (TS) model and hybrid models in which temperatures are corrected with steady-state conductive geotherms on continents and cooling-model temperatures in oceanic regions. All relevant physical properties, with the exception of thermal conductivity, are based on a self-consistent thermodynamical modelling approach. Our global thermal models also include density and compressional-wave velocities (VP) as obtained either assuming no lateral variations in composition or a simple reference 3-D compositional structure, which takes into account a chemically depleted continental lithosphere. We found that seismically derived temperatures in the continental lithosphere fit well, overall, with continental geotherms, but a large variation in radiogenic heat is required to reconcile them with (long-wavelength) heat flow observations. The oceanic shallow lithosphere below mid-oceanic ridges and young oceans is colder than expected, confirming the possible presence of a dehydration boundary around 80 km depth, as already suggested in previous studies. The global thermal models should serve as the basis for moving to a smaller spatial scale, where additional thermo-chemical variations
Fitting Equilibrium Search Models to Labour Market Data
DEFF Research Database (Denmark)
Bowlus, Audra J.; Kiefer, Nicholas M.; Neumann, George R.
1996-01-01
Specification and estimation of a Burdett-Mortensen type equilibrium search model is considered. The estimation is nonstandard. An estimation strategy asymptotically equivalent to maximum likelihood is proposed and applied. The results indicate that specifications with a small number of productivity types fit the data well compared to the homogeneous model.
Twitter classification model: the ABC of two million fitness tweets.
Vickey, Theodore A; Ginis, Kathleen Martin; Dabrowski, Maciej
2013-09-01
The purpose of this project was to design and test data collection and management tools that can be used to study the use of mobile fitness applications and social networking within the context of physical activity. The project was conducted over a 6-month period and involved collecting publicly shared Twitter data from five mobile fitness apps (Nike+, RunKeeper, MyFitnessPal, Endomondo, and dailymile). During that time, over 2.8 million tweets were collected, processed, and categorized using an online tweet collection application and a customized JavaScript. Using grounded theory, a classification model was developed to categorize and understand the types of information being shared by application users. Our data show that by tracking mobile fitness app hashtags, a wealth of information can be gathered, including but not limited to daily use patterns, exercise frequency, location-based workouts, and overall workout sentiment.
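The categorisation step can be sketched as a simple keyword-rule classifier. The categories and keywords below are hypothetical stand-ins, not the grounded-theory scheme developed in the paper:

```python
# Bucket fitness-app tweets by keyword rules; a tweet may match several
# categories, or fall through to "other". Rules here are invented examples.
RULES = {
    "workout_log": ("ran", "miles", "km", "workout"),
    "app_share":   ("nike+", "runkeeper", "endomondo", "dailymile", "myfitnesspal"),
    "sentiment":   ("great", "tired", "love", "hate"),
}

def classify(tweet):
    text = tweet.lower()
    matched = [cat for cat, words in RULES.items()
               if any(w in text for w in words)]
    return matched or ["other"]

tweets = [
    "Just ran 5 miles with RunKeeper!",
    "I love my morning workout",
    "Meeting friends later",
]
for t in tweets:
    print(t, "->", classify(t))
```

A production pipeline would of course use the app hashtags themselves plus proper tokenisation and sentiment scoring; the sketch only shows the rule-matching shape of the task.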
Flexible competing risks regression modeling and goodness-of-fit
DEFF Research Database (Denmark)
Scheike, Thomas; Zhang, Mei-Jie
2008-01-01
In this paper we consider different approaches for estimation and assessment of covariate effects for the cumulative incidence curve in the competing risks model. The classic approach is to model all cause-specific hazards and then estimate the cumulative incidence curve based on these cause-specific hazards. We also consider a flexible class of regression models that is easy to fit and contains the Fine-Gray model as a special case. One advantage of this approach is that our regression modeling allows for non-proportional hazards. This leads to a new, simple goodness-of-fit procedure for the proportional subdistribution hazards assumption that is very easy to apply. We illustrate the use of the flexible regression models to analyze competing risks data when non-proportionality is present in the data.
Global scale groundwater flow model
Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc
2013-04-01
As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet the current generation of global-scale hydrological models does not include a groundwater flow component, which is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minute resolution. The aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g. Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, in press). We force the groundwater model with output from the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated the calculated groundwater heads and depths against available head observations from different regions, including North and South America and Western Europe. Our results show that it is feasible to build a relatively simple global-scale groundwater model using existing information, and to estimate water table depths within acceptable accuracy in many parts of the world.
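The steady-state idea can be illustrated on a toy one-dimensional aquifer (not MODFLOW itself): fixed heads at both ends, uniform recharge, and Gauss-Seidel iteration on the finite-difference form of T d²h/dx² + R = 0. All parameter values are invented:

```python
# Toy 1-D steady-state groundwater model solved by Gauss-Seidel iteration.
n, dx = 21, 100.0        # 21 nodes, 100 m spacing (illustrative)
T, R = 500.0, 1e-3       # transmissivity (m^2/day), recharge (m/day)
h = [10.0] * n
h[0], h[-1] = 10.0, 8.0  # fixed-head boundary conditions

for _ in range(5000):    # Gauss-Seidel sweeps until effectively converged
    for i in range(1, n - 1):
        # T*(h[i-1] - 2*h[i] + h[i+1])/dx**2 + R = 0, solved for h[i]:
        h[i] = 0.5 * (h[i - 1] + h[i + 1] + R * dx ** 2 / T)

# The converged head is a recharge mound superposed on the linear gradient;
# for these values the analytic maximum is 10.25 m at x = 500 m.
print(round(max(h), 3))
```

A global model solves the same balance in 2-D/3-D over millions of cells with spatially varying properties, which is why schematization from lithological data and forcing from a hydrological model matter so much.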
Another method for a global fit of the Cabibbo-Kobayashi-Maskawa matrix
International Nuclear Information System (INIS)
Dita, Petre
2005-01-01
Recently we proposed a novel method for performing global fits of the entries of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. The new ingredients used were a clear relationship between the entries of the CKM matrix and the experimental data, as well as the use of the necessary and sufficient condition that the data have to satisfy in order for a unitary matrix compatible with them to exist. This condition is written as −1 ≤ cos δ ≤ 1, where δ is the phase that accounts for CP violation. Numerical results are provided for the CKM matrix entries, the mixing angles between generations and all the angles of the standard unitarity triangle. (author)
[How to fit and interpret multilevel models using SPSS].
Pardo, Antonio; Ruiz, Miguel A; San Martín, Rafael
2007-05-01
Hierarchical or multilevel models are used to analyse data when cases belong to known groups and sample units are selected both at the individual level and at the group level. In this work, the multilevel models most commonly discussed in the statistical literature are described, explaining how to fit these models using the SPSS program (any version from 11 onwards) and how to interpret the outcomes of the analysis. Five particular models are described, fitted, and interpreted: (1) one-way analysis of variance with random effects, (2) regression analysis with means-as-outcomes, (3) one-way analysis of covariance with random effects, (4) regression analysis with random coefficients, and (5) regression analysis with means- and slopes-as-outcomes. All models are explained with the aim of making them understandable to researchers in the health and behavioural sciences.
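Model (1), the one-way analysis of variance with random group effects, can also be estimated outside SPSS. A stdlib-only Python sketch using the classical ANOVA method of moments on made-up balanced data, yielding the intraclass correlation (the share of variance at the group level):

```python
# One-way random-effects ANOVA by method of moments (invented balanced data).
groups = {
    "g1": [4.0, 5.0, 6.0],
    "g2": [7.0, 8.0, 9.0],
    "g3": [1.0, 2.0, 3.0],
}
k = len(groups)          # number of groups
n = 3                    # cases per group (balanced design)

grand = sum(sum(v) for v in groups.values()) / (k * n)
means = {g: sum(v) / n for g, v in groups.items()}

ss_between = n * sum((m - grand) ** 2 for m in means.values())
ss_within = sum((x - means[g]) ** 2 for g, v in groups.items() for x in v)
ms_between = ss_between / (k - 1)
ms_within = ss_within / (k * (n - 1))

# Variance components and intraclass correlation (ICC).
var_between = max(0.0, (ms_between - ms_within) / n)
icc = var_between / (var_between + ms_within)
print(round(icc, 3))
```

SPSS (and mixed-model software generally) uses maximum likelihood or REML instead, but on balanced data like this the moment estimator gives the same flavour of answer: here most of the variance sits between groups.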
Assessing fit in Bayesian models for spatial processes
Jun, M.
2014-09-16
© 2014 John Wiley & Sons, Ltd. Gaussian random fields are frequently used to model spatial and spatial-temporal data, particularly in geostatistical settings. As much of the attention of the statistics community has been focused on defining and estimating the mean and covariance functions of these processes, little effort has been devoted to developing goodness-of-fit tests to allow users to assess the models' adequacy. We describe a general goodness-of-fit test and related graphical diagnostics for assessing the fit of Bayesian Gaussian process models using pivotal discrepancy measures. Our method is applicable for both regularly and irregularly spaced observation locations on planar and spherical domains. The essential idea behind our method is to evaluate pivotal quantities defined for a realization of a Gaussian random field at parameter values drawn from the posterior distribution. Because the nominal distribution of the resulting pivotal discrepancy measures is known, it is possible to quantitatively assess model fit directly from the output of Markov chain Monte Carlo algorithms used to sample from the posterior distribution on the parameter space. We illustrate our method in a simulation study and in two applications.
Assessing fit in Bayesian models for spatial processes
Jun, M.; Katzfuss, M.; Hu, J.; Johnson, V. E.
2014-01-01
© 2014 John Wiley & Sons, Ltd. Gaussian random fields are frequently used to model spatial and spatial-temporal data, particularly in geostatistical settings. As much of the attention of the statistics community has been focused on defining and estimating the mean and covariance functions of these processes, little effort has been devoted to developing goodness-of-fit tests to allow users to assess the models' adequacy. We describe a general goodness-of-fit test and related graphical diagnostics for assessing the fit of Bayesian Gaussian process models using pivotal discrepancy measures. Our method is applicable for both regularly and irregularly spaced observation locations on planar and spherical domains. The essential idea behind our method is to evaluate pivotal quantities defined for a realization of a Gaussian random field at parameter values drawn from the posterior distribution. Because the nominal distribution of the resulting pivotal discrepancy measures is known, it is possible to quantitatively assess model fit directly from the output of Markov chain Monte Carlo algorithms used to sample from the posterior distribution on the parameter space. We illustrate our method in a simulation study and in two applications.
Person-fit to the Five Factor Model of personality
Czech Academy of Sciences Publication Activity Database
Allik, J.; Realo, A.; Mõttus, R.; Borkenau, P.; Kuppens, P.; Hřebíčková, Martina
2012-01-01
Vol. 71, No. 1 (2012), pp. 35-45. ISSN 1421-0185. R&D Projects: GA ČR GAP407/10/2394. Institutional research plan: CEZ:AV0Z70250504. Keywords: Five Factor Model; cross-cultural comparison; person-fit. Subject RIV: AN - Psychology. Impact factor: 0.638, year: 2012
Updated status of the global electroweak fit and constraints on new physics
Energy Technology Data Exchange (ETDEWEB)
Baak, M.; Hoecker, A.; Schott, M. [CERN, Geneva (Switzerland); Goebel, M.; Ludwig, D. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Haller, J. [Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Goettingen Univ. (Germany). II. Physikalisches Inst.; Moenig, K. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Stelzer, J. [Michigan State Univ., East Lansing, MI (United States). Dept. of Physics and Astronomy
2011-07-15
We present an update of the Standard Model fit to electroweak precision data. We include the newest experimental results on the top quark mass, the W mass and width, and the Higgs boson mass bounds from LEP, Tevatron and the LHC. We also include a new determination of the electromagnetic coupling strength at the Z pole. We find for the Higgs boson mass 96 +31/−24 GeV and 120 +12/−5 GeV when not including and including the direct Higgs searches, respectively. From the latter fit we indirectly determine the W mass to be (80.362 ± 0.013) GeV. We exploit the data to determine experimental constraints on the oblique vacuum polarisation parameters, and confront these with predictions from the Standard Model (SM) and selected SM extensions. By fitting the oblique parameters to the electroweak data we derive allowed regions in the BSM parameter spaces. We revisit and consistently update these constraints for a fourth fermion generation, two Higgs doublet, inert Higgs and littlest Higgs models, models with large, universal or warped extra dimensions, and technicolour. In most of the models studied a heavy Higgs boson can be made compatible with the electroweak precision data. (orig.)
Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS
DEFF Research Database (Denmark)
Bolker, B.M.; Gardner, B.; Maunder, M.
2013-01-01
Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield…
Updated Status of the Global Electroweak Fit and Constraints on New Physics
Baak, M; Haller, J; Hoecker, A; Kennedy, D; Moenig, K; Schott, M; Stelzer, J
2012-01-01
We present an update of the Standard Model fit to electroweak precision data. We include the newest experimental results on the top quark mass, the W mass and width, and the Higgs boson mass bounds from LEP, Tevatron and the LHC. We also include a new determination of the electromagnetic coupling strength at the Z pole. We find for the Higgs boson mass (96 +31 −24) GeV and (120 +12 −5) GeV when not including and including the direct Higgs searches, respectively. From the latter fit we indirectly determine the W mass to be (80.362 ± 0.013) GeV. We exploit the data to determine experimental constraints on the oblique vacuum polarisation parameters, and confront these with predictions from the Standard Model (SM) and selected SM extensions. By fitting the oblique parameters to the electroweak data we derive allowed regions in the BSM parameter spaces. We revisit and consistently update these constraints for a fourth fermion generation, two Higgs doublet, inert Higgs and littlest Higgs models, models with lar...
Updated Status of the Global Electroweak Fit and Constraints on New Physics
Baak, Max; Haller, Johannes; Hoecker, Andreas; Ludwig, Doerthe; Moenig, Klaus; Schott, Matthias; Stelzer, Joerg
2011-01-01
We present an update of the Standard Model fit to electroweak precision data. We include the newest experimental results on the top quark mass, the W mass and width, and the Higgs boson mass bounds from LEP, Tevatron and the LHC. We also include a new determination of the electromagnetic coupling strength at the Z pole. We find for the Higgs boson mass (96 +31 −24) GeV and (120 +12 −5) GeV when not including and including the direct Higgs searches, respectively. From the latter fit we indirectly determine the W mass to be (80.359 +0.017 −0.010) GeV. We exploit the data to determine experimental constraints on the oblique vacuum polarisation parameters, and confront these with predictions from the Standard Model (SM) and selected SM extensions. By fitting the oblique parameters to the electroweak data we derive allowed regions in the BSM parameter spaces. We revisit and consistently update these constraints for a fourth family, two Higgs doublet, inert Higgs and littlest Higgs models, models with large,...
GEM - The Global Earthquake Model
Smolka, A.
2009-04-01
Over 500,000 people died in the last decade due to earthquakes and tsunamis, mostly in the developing world, where the risk is increasing due to rapid population growth. In many seismic regions, no hazard and risk models exist, and even where models do exist, they are intelligible only by experts, or available only for commercial purposes. The Global Earthquake Model (GEM) answers the need for an openly accessible risk management tool. GEM is an internationally sanctioned public-private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) which will establish an authoritative standard for calculating and communicating earthquake hazard and risk, and will be designed to serve as the critical instrument to support decisions and actions that reduce earthquake losses worldwide. GEM will integrate developments at the forefront of scientific and engineering knowledge of earthquakes, at global, regional and local scale. The work is organized in three modules: hazard, risk, and socio-economic impact. The hazard module calculates probabilities of earthquake occurrence and resulting shaking at any given location. The risk module calculates fatalities, injuries, and damage based on expected shaking, building vulnerability, and the distribution of population and of exposed values and facilities. The socio-economic impact module delivers tools for making educated decisions to mitigate and manage risk. GEM will be a versatile online tool, with open source code and a map-based graphical interface. The underlying data will be open wherever possible, and its modular input and output will be adapted to multiple user groups: scientists and engineers, risk managers and decision makers in the public and private sectors, and the public at large. GEM will be the first global model for seismic risk assessment at a national and regional scale, and aims to achieve broad scientific participation and independence. Its development will occur in a
Supersymmetry with prejudice: Fitting the wrong model to LHC data
Allanach, B. C.; Dolan, Matthew J.
2012-09-01
We critically examine interpretations of hypothetical supersymmetric LHC signals, fitting to alternative wrong models of supersymmetry breaking. The signals we consider are some of the most constraining on the sparticle spectrum: invariant mass distributions with edges and endpoints from the golden decay chain $\tilde q \to q \tilde\chi_2^0 (\to \tilde l^\pm l^\mp q) \to \tilde\chi_1^0 l^+ l^- q$. We assume a constrained minimal supersymmetric standard model (CMSSM) point to be the 'correct' one, but fit the signals instead with minimal gauge mediated supersymmetry breaking (mGMSB) models with a quasistable neutralino as lightest supersymmetric particle, minimal anomaly mediation, and large volume string compactification models. Minimal anomaly mediation and the large volume scenario can be unambiguously discriminated against the CMSSM for the assumed signal and 1 fb⁻¹ of LHC data at √s = 14 TeV. However, mGMSB would not be discriminated on the basis of the kinematic endpoints alone. The best-fit point spectra of mGMSB and the CMSSM look remarkably similar, making experimental discrimination at the LHC based on the edges or Higgs properties difficult. However, using rate information for the golden chain should provide the additional separation required.
International Nuclear Information System (INIS)
Tashkun, S.A.; Perevalov, V.I.; Karlovets, E.V.; Kassi, S.; Campargue, A.
2016-01-01
In a recent work (Karlovets et al., 2016 [1]), we reported the measurement and rovibrational assignments of more than 3300 transitions belonging to 64 bands of five nitrous oxide isotopologues (¹⁴N₂¹⁶O, ¹⁴N¹⁵N¹⁶O, ¹⁵N¹⁴N¹⁶O, ¹⁴N₂¹⁸O and ¹⁴N₂¹⁷O) in the high-sensitivity CRDS spectrum recorded in the 7915–8334 cm⁻¹ spectral range. The assignments were performed by comparison with predictions of the effective Hamiltonian models developed for each isotopologue. In the present paper, the large set of measurements from our previous work mentioned above and from the literature is gathered to refine the modeling of the nitrous oxide spectrum in two ways: (i) improvement of the intensity modeling for the principal isotopologue, ¹⁴N₂¹⁶O, near 8000 cm⁻¹ from a new fit of the relevant effective dipole moment parameters, and (ii) global modeling of ¹⁴N₂¹⁸O line positions from a new fit of the parameters of the global effective Hamiltonian using an exhaustive input dataset collected from the literature in the 12–8231 cm⁻¹ region. The fitted set of 81 parameters allowed reproducing nearly 5800 measured line positions with an RMS deviation of 0.0016 cm⁻¹. The dimensionless weighted standard deviation of the fit is 1.22. As an illustration of the improved predictive capabilities of the obtained effective Hamiltonian, two new ¹⁴N₂¹⁸O bands could be assigned in the CRDS spectrum in the 7915–8334 cm⁻¹ spectral range. A line list at 296 K has been generated in the 0–10,700 cm⁻¹ range for ¹⁴N₂¹⁸O in natural abundance with a 10⁻³⁰ cm/molecule intensity cutoff. - Highlights: • Line parameters of two new ¹⁴N₂¹⁸O bands centered at 7966 cm⁻¹ and at 8214 cm⁻¹. • Refined sets of the ¹⁴N₂¹⁶O effective dipole moment parameters for the ΔP=13,14 series. • Global modeling of ¹⁴N₂¹⁸O line positions and intensities in the 12–8231 cm⁻¹ range. • 5800 observed ¹⁴N₂¹⁸O line positions
Thissen, David
2013-01-01
In this commentary, David Thissen states that "Goodness-of-fit assessment for IRT models is maturing; it has come a long way from zero." Thissen then references prior works on "goodness of fit" in the index of Lord and Novick's (1968) classic text; Yen (1984); Drasgow, Levine, Tsien, Williams, and Mead (1995); Chen and…
Takaful Models and Global Practices
Akhter, Waheed
2010-01-01
There is a global interest in Islamic finance in general and Takāful in particular. The main feature that differentiates Takāful services from conventional ones is the Sharī'ah-compliant nature of these services. Investors are taking a keen interest in this potential market, as Muslims constitute about one fourth of the world population (Muslim population, 2006). To streamline operations of a Takāful company, management and Sharī'ah experts have developed different operational models for Takāful bu...
Berthier, Laure; Trott, Michael
2016-09-27
We calculate the double pole contribution to two to four fermion scattering through $W^{\pm}$ currents at tree level in the Standard Model Effective Field Theory (SMEFT). We assume all fermions to be massless, $\rm U(3)^5$ flavour and $\rm CP$ symmetry. Using this result, we update the global constraint picture on SMEFT parameters including LEPII data on these charged current processes, and also include modifications to our fit procedure motivated by a companion paper focused on $W^{\pm}$ mass extractions. The fit reported is now to 177 observables and emphasises the need for a consistent inclusion of theoretical errors, and a consistent treatment of observables. Including charged current data lifts the two-fold degeneracy previously encountered in LEP (and lower energy) data, and allows us to set simultaneous constraints on 20 of 53 Wilson coefficients in the SMEFT, consistent with our assumptions. This allows the model independent inclusion of LEP data in SMEFT studies at LHC, which are projected into the S...
Reducing uncertainty based on model fitness: Application to a ...
African Journals Online (AJOL)
A weakness of global sensitivity and uncertainty analysis methodologies is the often subjective definition of prior parameter probability distributions, especially ... The reservoir representing the central part of the wetland, where flood waters separate into several independent distributaries, is a keystone area within the model.
Bosone, Lucia; Martinez, Frédéric; Kalampalikis, Nikos
2015-04-01
In health-promotional campaigns, positive and negative role models can be deployed to illustrate the benefits or costs of certain behaviors. The main purpose of this article is to investigate why, how, and when exposure to role models strengthens the persuasiveness of a message, according to regulatory fit theory. We argue that exposure to a positive versus a negative model activates individuals' goals toward promotion rather than prevention. By means of two experiments, we demonstrate that high levels of persuasion occur when a message advertising healthy dietary habits offers a regulatory fit between its framing and the described role model. Our data also establish that the effects of such internal regulatory fit by vicarious experience depend on individuals' perceptions of response-efficacy and self-efficacy. Our findings constitute a significant theoretical complement to previous research on regulatory fit and contain valuable practical implications for health-promotional campaigns. © 2015 by the Society for Personality and Social Psychology, Inc.
arXiv A global fit of the MSSM with GAMBIT
Athron, Peter; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin
2017-12-18
We study the seven-dimensional Minimal Supersymmetric Standard Model (MSSM7) with the new GAMBIT software framework, with all parameters defined at the weak scale. Our analysis significantly extends previous weak-scale, phenomenological MSSM fits, by adding more and newer experimental analyses, improving the accuracy and detail of theoretical predictions, including dominant uncertainties from the Standard Model, the Galactic dark matter halo and the quark content of the nucleon, and employing novel and highly-efficient statistical sampling methods to scan the parameter space. We find regions of the MSSM7 that exhibit co-annihilation of neutralinos with charginos, stops and sbottoms, as well as models that undergo resonant annihilation via both light and heavy Higgs funnels. We find high-likelihood models with light charginos, stops and sbottoms that have the potential to be within the future reach of the LHC. Large parts of our preferred parameter regions will also be accessible to the next generation of dire...
von Cramon-Taubadel, Noreen; Lycett, Stephen J
2008-05-01
Recent studies comparing craniometric and neutral genetic affinity matrices have concluded that, on average, human cranial variation fits a model of neutral expectation. While human craniometric and genetic data fit a model of isolation by geographic distance, it is not yet clear whether this is due to geographically mediated gene flow or human dispersal events. Recently, human genetic data have been shown to fit an iterative founder effect model of dispersal with an African origin, in line with the out-of-Africa replacement model for modern human origins, and Manica et al. (Nature 448 (2007) 346-349) have demonstrated that human craniometric data also fit this model. However, in contrast with the neutral model of cranial evolution suggested by previous studies, Manica et al. (2007) made the a priori assumption that cranial form has been subject to climatically driven natural selection and therefore correct for climate prior to conducting their analyses. Here we employ a modified theoretical and methodological approach to test whether human cranial variability fits the iterative founder effect model. In contrast with Manica et al. (2007) we employ size-adjusted craniometric variables, since climatic factors such as temperature have been shown to correlate with aspects of cranial size. Despite these differences, we obtain similar results to those of Manica et al. (2007), with up to 26% of global within-population craniometric variation being explained by geographic distance from sub-Saharan Africa. Comparative analyses using non-African origins do not yield significant results. The implications of these results are discussed in the light of the modern human origins debate. (c) 2007 Wiley-Liss, Inc.
Fitting Latent Cluster Models for Networks with latentnet
Directory of Open Access Journals (Sweden)
Pavel N. Krivitsky
2007-12-01
latentnet is a package to fit and evaluate statistical latent position and cluster models for networks. Hoff, Raftery, and Handcock (2002) suggested an approach to modeling networks based on positing the existence of a latent space of characteristics of the actors. Relationships form as a function of distances between these characteristics as well as functions of observed dyadic-level covariates. In latentnet social distances are represented in a Euclidean space. It also includes a variant of the extension of the latent position model that allows for clustering of the positions, developed in Handcock, Raftery, and Tantrum (2007). The package implements Bayesian inference for the models based on a Markov chain Monte Carlo algorithm. It can also compute maximum likelihood estimates for the latent position model and a two-stage maximum likelihood method for the latent position cluster model. For latent position cluster models, the package provides a Bayesian way of assessing how many groups there are, and thus whether or not there is any clustering (since if the preferred number of groups is 1, there is little evidence for clustering). It also estimates which cluster each actor belongs to. These estimates are probabilistic, providing the probability of each actor belonging to each cluster. It computes four types of point estimates for the coefficients and positions: the maximum likelihood estimate, the posterior mean, the posterior mode, and the estimator which minimizes the Kullback-Leibler divergence from the posterior. Goodness-of-fit of the model can be assessed via posterior predictive checks. It also has a function to simulate networks from a latent position or latent position cluster model.
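The core of the latent position model can be sketched in a few lines: the log-odds of a tie between two actors is a baseline density parameter minus their distance in the latent space. The following is a minimal illustrative Python sketch (not latentnet's actual R implementation; the function names and the single `beta` parameter are simplifying assumptions):

```python
import math

def edge_prob(beta, zi, zj):
    """Latent position model: logit P(edge) = beta - ||zi - zj||."""
    d = math.dist(zi, zj)
    return 1.0 / (1.0 + math.exp(-(beta - d)))

def log_likelihood(beta, positions, adjacency):
    """Sum log P(y_ij) over all ordered pairs i != j."""
    n = len(positions)
    ll = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            p = edge_prob(beta, positions[i], positions[j])
            ll += math.log(p) if adjacency[i][j] else math.log(1.0 - p)
    return ll
```

MCMC or maximum likelihood estimation would then search over `beta` and the latent positions to maximize this likelihood; clustering adds a mixture prior over the positions.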
Rapid world modeling: Fitting range data to geometric primitives
International Nuclear Information System (INIS)
Feddema, J.; Little, C.
1996-01-01
For the past seven years, Sandia National Laboratories has been active in the development of robotic systems to help remediate DOE's waste sites and decommissioned facilities. Some of these facilities have levels of radioactivity high enough to prevent manual clean-up. Tele-operated and autonomous robotic systems have been envisioned as the only suitable means of removing the radioactive elements. World modeling is defined as the process of creating a numerical geometric model of a real-world environment or workspace. This model is often used in robotics to plan robot motions which perform a task while avoiding obstacles. In many applications where the world model does not exist ahead of time, structured lighting, laser range finders, and even acoustical sensors have been used to create three-dimensional maps of the environment. These maps consist of thousands of range points which are difficult to handle and interpret. This paper presents a least squares technique for fitting range data to planar and quadric surfaces, including cylinders and ellipsoids. Once fit to these primitive surfaces, the amount of data associated with a surface is reduced by up to three orders of magnitude, thus allowing for more rapid handling and analysis of world data.
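The simplest case of this technique, fitting a plane z = ax + by + c to range points by least squares, can be sketched as follows (an illustrative stdlib-only Python example, not the Sandia implementation; quadric surfaces would add quadratic terms to the design matrix):

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to (x, y, z) range points.

    Builds the 3x3 normal equations (A^T A) u = A^T z for rows
    [x, y, 1] and solves them by Gaussian elimination; u = [a, b, c].
    """
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            v[i] += row[i] * z
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 3):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    u = [0.0] * 3
    for r in (2, 1, 0):
        u[r] = (v[r] - sum(M[r][c] * u[c] for c in range(r + 1, 3))) / M[r][r]
    return u
```

Replacing thousands of range points by the three fitted parameters is what yields the data reduction of up to three orders of magnitude mentioned above.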
Modeling of global biomass policies
International Nuclear Information System (INIS)
Gielen, Dolf; Fujino, Junichi; Hashimoto, Seiji; Moriguchi, Yuichi
2003-01-01
This paper discusses the BEAP model and its use for the analysis of biomass policies for CO₂ emission reduction. The model considers competing land use, trade and leakage effects, and competing emission reduction strategies. Two policy scenarios are presented. For a 2040 time horizon, the results suggest that a combination of afforestation and limited use of biomass for energy and materials constitutes the most attractive set of strategies. For a 'continued Kyoto' scenario including afforestation permit trade, the results suggest 5.1 Gt of emission reduction based on land use change in 2020, two thirds of the total emission reduction by then. In the case of global emission reduction, land use, land use change and forestry (LULUCF) accounts for one quarter of the emission reduction. However, these results depend on the modeling time horizon: for a broader time horizon, maximized biomass production is more attractive than LULUCF. This result can be interpreted as a warning against a market-based trading scheme for LULUCF credits. The model results suggest that the bioenergy market is dominated by transportation fuels and heating, and to a lesser extent feedstocks. Bioelectricity does not gain a significant market share when competing CO₂-free electricity options such as CO₂ capture and sequestration and nuclear are considered. To some extent, trade in agricultural food products such as beef and cereals will be affected by CO₂ policies.
An NCME Instructional Module on Item-Fit Statistics for Item Response Theory Models
Ames, Allison J.; Penfield, Randall D.
2015-01-01
Drawing valid inferences from item response theory (IRT) models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. This instructional module provides an overview of methods used for evaluating the fit of IRT models. Upon completing…
Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS
Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magunsson, Arni; Martell, Steve; Nash, John; Nielson, Anders; Regentz, Jim; Skaug, Hans; Zipkin, Elise
2013-01-01
1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.
Zhan, Fei; Tao, Ye; Zhao, Haifeng
2017-07-01
Time-resolved X-ray absorption spectroscopy (TR-XAS), based on the laser-pump/X-ray-probe method, is powerful in capturing the change of the geometrical and electronic structure of the absorbing atom upon excitation. TR-XAS data analysis is generally performed on the laser-on minus laser-off difference spectrum. Here, a new analysis scheme is presented for TR-XAS difference fitting in both the extended X-ray absorption fine-structure (EXAFS) and the X-ray absorption near-edge structure (XANES) regions. R-space EXAFS difference fitting can quickly provide the main quantitative structure change of the first shell. The XANES fitting part introduces a global non-derivative optimization algorithm and optimizes the local structure change in a flexible way, where both the core XAS calculation package and the search method in the fitting shell are changeable. The scheme was applied to the TR-XAS difference analysis of the Fe(phen)₃ spin-crossover complex and yielded reliable distance change and excitation population.
Feature extraction through least squares fit to a simple model
International Nuclear Information System (INIS)
Demuth, H.B.
1976-01-01
The Oak Ridge National Laboratory (ORNL) presented the Los Alamos Scientific Laboratory (LASL) with 18 radiographs of fuel rod test bundles. The problem is to estimate the thickness of the gap between some cylindrical rods and a flat wall surface. The edges of the gaps are poorly defined due to finite source size, x-ray scatter, parallax, film grain noise, and other degrading effects. The radiographs were scanned and the scan-line data were averaged to reduce noise and to convert the problem to one dimension. A model of the ideal gap, convolved with an appropriate point-spread function, was fit to the averaged data with a least squares program; and the gap width was determined from the final fitted-model parameters. The least squares routine did converge and the gaps obtained are of reasonable size. The method is remarkably insensitive to noise. This report describes the problem, the techniques used to solve it, and the results and conclusions. Suggestions for future work are also given
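A one-dimensional version of this idea, an ideal gap profile convolved with a Gaussian point-spread function, has a closed form in terms of error functions, and the gap width can be recovered by least squares. The following Python sketch illustrates the approach under simplifying assumptions (Gaussian PSF of known width, grid search instead of a full nonlinear least squares routine); it is not the LASL code:

```python
import math

def gap_model(x, width, sigma):
    """Ideal gap of the given width, convolved with a Gaussian PSF of
    standard deviation sigma (difference of two error functions)."""
    s = sigma * math.sqrt(2.0)
    return 0.5 * (math.erf((x + width / 2) / s)
                  - math.erf((x - width / 2) / s))

def fit_width(xs, ys, sigma, candidates):
    """Pick the candidate gap width minimizing the sum of squared errors."""
    def sse(w):
        return sum((gap_model(x, w, sigma) - y) ** 2 for x, y in zip(xs, ys))
    return min(candidates, key=sse)
```

Because the model is fit to the whole blurred profile rather than to an estimated edge location, the width estimate degrades gracefully under noise, consistent with the insensitivity reported above.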
Fit reduced GUTS models online: From theory to practice.
Baudrot, Virgile; Veber, Philippe; Gence, Guillaume; Charles, Sandrine
2018-05-20
Mechanistic modeling approaches, such as the toxicokinetic-toxicodynamic (TKTD) framework, are promoted by international institutions such as the European Food Safety Authority and the Organisation for Economic Co-operation and Development to assess the environmental risk of chemical products generated by human activities. TKTD models can encompass a large set of mechanisms describing the kinetics of compounds inside organisms (e.g., uptake and elimination) and their effect at the level of individuals (e.g., damage accrual, recovery, and death mechanism). Compared to classical dose-response models, TKTD approaches have many advantages, including accounting for temporal aspects of exposure and toxicity, considering data points all along the experiment and not only at the end, and making predictions for untested situations, such as realistic exposure scenarios. Among TKTD models, the general unified threshold model of survival (GUTS) is one of the most recent and innovative frameworks, but it is still underused in practice, especially by risk assessors, because specialist programming and statistical skills are necessary to run it. Making GUTS models easier to use through a new module freely available from the web platform MOSAIC (standing for MOdeling and StAtistical tools for ecotoxICology) should promote GUTS operability in support of the daily work of environmental risk assessors. This paper presents the main features of MOSAIC_GUTS: uploading of the experimental data, GUTS fitting analysis, and LCx estimates with their uncertainty. These features are exemplified with literature data. Integr Environ Assess Manag 2018;00:000-000. © 2018 SETAC.
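The structure of a reduced GUTS model can be illustrated with its stochastic-death variant (GUTS-SD): scaled damage follows one-compartment kinetics, and the death hazard is proportional to the damage exceeding a threshold. The sketch below is a simplified Euler-integration illustration with hypothetical parameter names, not the MOSAIC_GUTS implementation:

```python
import math

def guts_sd_survival(cw, kd, kk, z, t_end, dt=0.01):
    """Survival probability over time under GUTS-SD.

    Damage:  dD/dt = kd * (Cw - D)   (one-compartment kinetics,
                                      constant exposure Cw)
    Hazard:  h(t)  = kk * max(D(t) - z, 0)
    Survival: S(t) = exp(-integral of h from 0 to t)
    """
    damage, cum_hazard, t = 0.0, 0.0, 0.0
    survival = [1.0]
    while t < t_end:
        damage += dt * kd * (cw - damage)
        cum_hazard += dt * kk * max(damage - z, 0.0)
        survival.append(math.exp(-cum_hazard))
        t += dt
    return survival
```

Fitting such a model to observed survival counts over time is exactly the kind of task the web module automates, including the propagation of parameter uncertainty into LCx estimates.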
Fitting the Probability Distribution Functions to Model Particulate Matter Concentrations
International Nuclear Information System (INIS)
El-Shanshoury, Gh.I.
2017-01-01
The main objective of this study is to identify the best probability distribution and plotting-position formula for modeling the concentrations of Total Suspended Particles (TSP) as well as Particulate Matter with an aerodynamic diameter < 10 μm (PM₁₀). The best distribution provides the estimated probabilities of exceeding the threshold limit given by the Egyptian Air Quality Limit Value (EAQLV), and the number of exceedance days is estimated as well. The EAQLV standard limits for TSP and PM₁₀ concentrations are 24-h averages of 230 μg/m³ and 70 μg/m³, respectively. Five frequency distribution functions combined with seven plotting-position formulas (empirical cumulative distribution functions) are compared to fit the averages of daily TSP and PM₁₀ concentrations in the year 2014 for Ain Sokhna city. The Quantile-Quantile plot (Q-Q plot) is used as a method for assessing how closely a data set fits a particular distribution. A proper probability distribution representing TSP and PM₁₀ has been chosen based on the statistical performance indicator values. The results show that the Hosking and Wallis plotting position combined with the Frechet distribution gave the best fit for TSP and PM₁₀ concentrations; the Burr distribution with the same plotting position follows the Frechet distribution. The exceedance probability and days over the EAQLV are predicted using the Frechet distribution. In 2014, the exceedance probability and days for TSP concentrations are 0.052 and 19 days, respectively. Furthermore, the PM₁₀ concentration is found to exceed the threshold limit on 174 days.
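Once a Frechet distribution has been fitted, the exceedance probability and expected number of exceedance days follow directly from its CDF. The sketch below illustrates the calculation with made-up parameter values (the study's fitted parameters are not reported in the abstract):

```python
import math

def frechet_cdf(x, shape, loc, scale):
    """CDF of the Frechet (inverse Weibull) distribution:
    F(x) = exp(-((x - loc) / scale) ** (-shape)) for x > loc."""
    if x <= loc:
        return 0.0
    return math.exp(-((x - loc) / scale) ** (-shape))

def exceedance(threshold, shape, loc, scale, days=365):
    """Probability of a daily mean exceeding the limit value, and the
    expected number of exceedance days per year."""
    p = 1.0 - frechet_cdf(threshold, shape, loc, scale)
    return p, p * days
```

With the TSP limit of 230 μg/m³, for example, the fitted distribution's survival function evaluated at 230 gives the exceedance probability, and multiplying by 365 gives the expected exceedance days.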
Directory of Open Access Journals (Sweden)
T. Diana L. Van Aduard de Macedo-Soares
2011-02-01
In order to sustain their competitive advantage in the current, increasingly globalized and turbulent context, more and more firms are competing globally in alliances and networks that oblige them to adopt new managerial paradigms and tools. However, their strategic analyses rarely take into account the strategic implications of these alliances and networks, considering their global relational characteristics, admittedly because of a lack of adequate tools to do so. This paper contributes to research that seeks to fill this gap by proposing the Global Strategic Network Analysis (SNA) framework. Its purpose is to help firms that compete globally in alliances and networks to carry out their strategic assessments and decision-making with a view to ensuring dynamic strategic fit from both a global and a relational perspective.
A fitting LEGACY – modelling Kepler's best stars
Directory of Open Access Journals (Sweden)
Aarslev Magnus J.
2017-01-01
The LEGACY sample represents the best solar-like stars observed in the Kepler mission [5, 8]. The 66 stars in the sample are all on the main sequence or only slightly more evolved. They each have more than one year of short-cadence observation data, allowing for precise extraction of individual frequencies. Here we present model fits using a modified ASTFIT procedure employing two different near-surface-effect corrections: one by Christensen-Dalsgaard [4] and a newer correction proposed by Ball & Gizon [1]. We then compare the results obtained using the different corrections. We find that using the latter correction yields lower masses and significantly lower χ² values for a large part of the sample.
Five challenges for stochastic epidemic models involving global transmission
Directory of Open Access Journals (Sweden)
Tom Britton
2015-03-01
The most basic stochastic epidemic models are those involving global transmission, meaning that infection rates depend only on the type and state of the individuals involved, and not on their location in the population. Simple as they are, there are still several open problems for such models. For example, when will such an epidemic go extinct and with what probability (questions whose answers depend on whether the population is fixed, changing or growing)? How can a model be defined that explains the sometimes-observed scenario of frequent mid-sized epidemic outbreaks? How can evolution of the infectious agent's transmission rates be modelled and fitted to data in a robust way?
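The extinction question has a classical answer in the branching-process approximation of the early epidemic: the extinction probability is the smallest fixed point of the offspring probability generating function. For Poisson-distributed offspring with mean R₀ (one common simplifying assumption, not the only choice), this means solving q = exp(R₀(q − 1)), as sketched below:

```python
import math

def extinction_probability(r0, tol=1e-12, max_iter=10000):
    """Extinction probability of a branching process with Poisson(r0)
    offspring: the smallest root of q = exp(r0 * (q - 1)).

    Fixed-point iteration started from q = 0 converges monotonically
    to the smallest root (1.0 when r0 <= 1)."""
    q = 0.0
    for _ in range(max_iter):
        q_next = math.exp(r0 * (q - 1.0))
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    return q
```

For a subcritical epidemic (R₀ ≤ 1) the iteration returns 1, i.e., extinction is certain; for R₀ = 2 the major-outbreak probability is about 80%.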
On global and regional spectral evaluation of global geopotential models
International Nuclear Information System (INIS)
Ustun, A; Abbak, R A
2010-01-01
Spectral evaluation of global geopotential models (GGMs) is necessary to recognize the behaviour of the gravity signal and its error recorded in spherical harmonic coefficients and associated standard deviations. Results put forward in this way explain the whole contribution of gravity data of different kinds that represent various sections of the gravity spectrum. This method is more informative than accuracy assessment methods, which use external data such as GPS-levelling. Comparative spectral evaluation for more than one model can be performed both in a global and a local sense using many spectral tools. The number of GGMs has grown with the increasing amount of data collected by the dedicated satellite gravity missions, CHAMP, GRACE and GOCE. This fact makes it necessary to measure the differences between models and to monitor the improvements in the gravity field recovery. In this paper, some of the satellite-only and combined models are examined in different scales, globally and regionally, in order to observe the advances in the modelling of GGMs and their strengths at various expansion degrees for geodetic and geophysical applications. The validation of the published errors of model coefficients is a part of this evaluation. All spectral tools explicitly reveal the superiority of the GRACE-based models when compared against the models that comprise the conventional satellite tracking data. The disagreement between models is large in local/regional areas if data sets are different, as seen from the example of the Turkish territory.
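The basic spectral tool here is the degree variance: for each spherical harmonic degree n, summing the squared (fully normalized) coefficients over order m gives the signal power per degree, and the same sum over the coefficients' standard deviations gives the error spectrum. A minimal sketch (the dictionary-based coefficient layout is an assumption for illustration):

```python
def degree_variance(coeffs):
    """Signal degree variance c_n = sum over m of (C_nm^2 + S_nm^2).

    `coeffs` maps (n, m) -> (C_nm, S_nm), the fully normalized
    spherical harmonic coefficients of a geopotential model."""
    out = {}
    for (n, m), (c, s) in coeffs.items():
        out[n] = out.get(n, 0.0) + c * c + s * s
    return out
```

Comparing such per-degree spectra for two GGMs, or comparing a model's error spectrum to its signal spectrum, shows at which expansion degrees each model remains informative.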
A bipartite fitness model for online music streaming services
Pongnumkul, Suchit; Motohashi, Kazuyuki
2018-01-01
This paper proposes an evolution model and an analysis of the behavior of music consumers on online music streaming services. While previous studies have observed power-law degree distributions of usage in online music streaming services, the underlying behavior of users has not been well understood. Users and songs can be described using a bipartite network where an edge exists between a user node and a song node when the user has listened to that song. The growth mechanism of bipartite networks has been used to understand the evolution of online bipartite networks (Zhang et al., 2013). Existing bipartite models are based on the preferential attachment mechanism of Barabási and Albert (1999), in which the probability that a user listens to a song is proportional to its current popularity. This mechanism does not allow for two types of real-world phenomena. First, a newly released song of high quality sometimes quickly gains popularity. Second, the popularity of songs normally decreases as time goes by. Therefore, this paper proposes a new model that is more suitable for online music services by adding fitness and aging functions to the song nodes of the bipartite network proposed by Zhang et al. (2013). Theoretical analyses are performed for the degree distribution of songs. Empirical data from an online streaming service, Last.fm, are used to confirm the degree distribution of the object nodes. Simulation results show improvements over a previous model. Finally, to illustrate the application of the proposed model, a simplified royalty cost model for online music services is used to demonstrate how changes in the proposed parameters can affect the costs for online music streaming providers. Managerial implications are also discussed.
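The mechanism can be simulated in a few lines: each play chooses among released songs with probability proportional to current degree times a fitness factor times an aging factor. The sketch below is an illustrative toy simulation under assumed functional forms (multiplicative fitness, exponential aging), not the paper's exact model:

```python
import math
import random

def simulate_plays(n_songs, n_plays, fitness, release, decay=0.05, seed=1):
    """Each play at time t picks a released song with probability
    proportional to (degree + 1) * fitness * exp(-decay * age):
    preferential attachment modulated by fitness and aging."""
    rng = random.Random(seed)
    degree = [0] * n_songs
    for t in range(n_plays):
        weights = [
            (degree[s] + 1) * fitness[s] * math.exp(-decay * (t - release[s]))
            if t >= release[s] else 0.0
            for s in range(n_songs)
        ]
        song = rng.choices(range(n_songs), weights=weights)[0]
        degree[song] += 1
    return degree
```

The fitness term lets a high-quality new release overtake established songs despite its low initial degree, while the aging term erodes the advantage of old hits, the two phenomena plain preferential attachment cannot capture.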
Fitting outbreak models to data from many small norovirus outbreaks
Directory of Open Access Journals (Sweden)
Eamon B. O’Dea
2014-03-01
Infectious disease often occurs in small, independent outbreaks in populations with varying characteristics. Each outbreak by itself may provide too little information for accurate estimation of epidemic model parameters. Here we show that using standard stochastic epidemic models for each outbreak and allowing parameters to vary between outbreaks according to a linear predictor leads to a generalized linear model that accurately estimates parameters from many small and diverse outbreaks. By estimating initial growth rates in addition to transmission rates, we are able to characterize variation in numbers of initially susceptible individuals or contact patterns between outbreaks. With simulation, we find that the estimates are fairly robust to the data being collected at discrete intervals and imputation of about half of all infectious periods. We apply the method by fitting data from 75 norovirus outbreaks in health-care settings. Our baseline regression estimates are 0.0037 transmissions per infective-susceptible day, an initial growth rate of 0.27 transmissions per infective day, and a symptomatic period of 3.35 days. Outbreaks in long-term-care facilities had significantly higher transmission and initial growth rates than outbreaks in hospitals.
Local and Global Function Model of the Liver
Energy Technology Data Exchange (ETDEWEB)
Wang, Hesheng, E-mail: hesheng@umich.edu [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States); Feng, Mary [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States); Jackson, Andrew [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Ten Haken, Randall K.; Lawrence, Theodore S. [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States); Cao, Yue [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States); Department of Radiology, University of Michigan, Ann Arbor, Michigan (United States); Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan (United States)
2016-01-01
Purpose: To develop a local and global function model in the liver based on regional and organ function measurements to support individualized adaptive radiation therapy (RT). Methods and Materials: A local and global model for liver function was developed to include both functional volume and the effect of functional variation of subunits. Adopting the assumption of parallel architecture in the liver, the global function was composed of a sum of local function probabilities of subunits, varying between 0 and 1. The model was fit to 59 datasets of liver regional and organ function measures from 23 patients obtained before, during, and 1 month after RT. The local function probabilities of subunits were modeled by a sigmoid function in relating to MRI-derived portal venous perfusion values. The global function was fitted to a logarithm of an indocyanine green retention rate at 15 minutes (an overall liver function measure). Cross-validation was performed by leave-m-out tests. The model was further evaluated by fitting to the data divided according to whether the patients had hepatocellular carcinoma (HCC) or not. Results: The liver function model showed that (1) a perfusion value of 68.6 mL/(100 g · min) yielded a local function probability of 0.5; (2) the probability reached 0.9 at a perfusion value of 98 mL/(100 g · min); and (3) at a probability of 0.03 [corresponding perfusion of 38 mL/(100 g · min)] or lower, the contribution to global function was lost. Cross-validations showed that the model parameters were stable. The model fitted to the data from the patients with HCC indicated that the same amount of portal venous perfusion was translated into less local function probability than in the patients with non-HCC tumors. Conclusions: The developed liver function model could provide a means to better assess individual and regional dose-responses of hepatic functions, and provide guidance for individualized treatment planning of RT.
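The reported numbers pin down an illustrative version of the model: a sigmoid maps portal venous perfusion to a local function probability with midpoint 68.6 mL/(100 g · min), and the global function is the sum of subunit probabilities. In the Python sketch below, the logistic form and the slope (chosen so that the probability at 98 mL/(100 g · min) is 0.9, matching the abstract) are assumptions; the paper's actual fitted sigmoid may differ:

```python
import math

# Midpoint from the abstract: perfusion 68.6 mL/(100 g * min)
# maps to a local function probability of 0.5. The logistic scale
# is chosen so that p(98) = 0.9, consistent with the reported value.
MIDPOINT = 68.6
SLOPE = (98.0 - MIDPOINT) / math.log(9.0)

def local_function_probability(perfusion):
    """Sigmoid mapping of portal venous perfusion to local function."""
    return 1.0 / (1.0 + math.exp(-(perfusion - MIDPOINT) / SLOPE))

def global_function(perfusion_map):
    """Parallel-architecture liver: global function is the sum of the
    local function probabilities over all subunits."""
    return sum(local_function_probability(p) for p in perfusion_map)
```

Summing probabilities in this way reflects the parallel-architecture assumption: each subunit contributes between 0 and 1, and low-perfusion subunits contribute essentially nothing.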
FITTING OF PARAMETRIC BUILDING MODELS TO OBLIQUE AERIAL IMAGES
Directory of Open Access Journals (Sweden)
U. S. Panday
2012-09-01
In the literature and in photogrammetric workstations, many approaches and systems to automatically reconstruct buildings from remote sensing data are described and available. Those building models are used, for instance, in city modeling or in a cadastre context. If a roof overhang is present, the building walls cannot be estimated correctly from nadir-view aerial images or airborne laser scanning (ALS) data. This leads to inconsistent building outlines, which has a negative influence on visual impression, but more seriously also represents a wrong legal boundary in the cadastre. Oblique aerial images, as opposed to nadir-view images, reveal greater detail, providing different views of an object taken from different directions. Building walls are directly visible in oblique images, and those images are used for automated roof overhang estimation in this research. A fitting algorithm is employed to find roof parameters of simple buildings. It uses a least squares algorithm to fit projected wire frames to their corresponding edge lines extracted from the images. Self-occlusion is detected based on the intersection of the viewing ray with the planes formed by the building, whereas occlusion by other objects is detected using an ALS point cloud. Overhang and ground height are obtained by sweeping vertical and horizontal planes, respectively. Experimental results are verified with high-resolution ortho-images, field survey, and ALS data. A planimetric accuracy of 1 cm mean and 5 cm standard deviation was obtained, while the buildings' orientations were accurate to a mean of 0.23° and standard deviation of 0.96° with the ortho-image. Overhang parameters were aligned to approximately 10 cm with the field survey. The ground and roof heights were accurate to means of −9 cm and 8 cm with standard deviations of 16 cm and 8 cm with ALS, respectively. The developed approach reconstructs 3D building models well in cases of sufficient texture. More images should be acquired for
Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.
Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E
2007-02-15
Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MATLAB functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first-order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher-order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
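The workflow described above, an analytical linear-algebra solution of a kinetic scheme combined with Levenberg-Marquardt fitting, can be sketched in a few lines. This is a minimal illustration, not VisKin itself: a hypothetical two-state scheme A ⇌ B is solved via the matrix exponential and both rate constants are fitted to synthetic data.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import curve_fit

def concentrations(t, k12, k21):
    # Rate matrix for a two-state scheme A <-> B (first order both ways).
    K = np.array([[-k12, k21],
                  [k12, -k21]])
    c0 = np.array([1.0, 0.0])          # start with all material in state A
    return np.array([expm(K * ti) @ c0 for ti in np.atleast_1d(t)])

def observed_B(t, k12, k21):
    # Suppose only the concentration of B is experimentally observable.
    return concentrations(t, k12, k21)[:, 1]

# Synthetic "experiment": true rates 2.0 and 0.5 per second, plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 50)
data = observed_B(t, 2.0, 0.5) + rng.normal(0.0, 0.01, t.size)

# Levenberg-Marquardt fit of both rate constants.
popt, pcov = curve_fit(observed_B, t, data, p0=[1.0, 1.0], method="lm")
```

For larger schemes the same pattern applies: the rate matrix grows, but the matrix-exponential solution and the fit loop are unchanged.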
Spherical Process Models for Global Spatial Statistics
Jeong, Jaehong; Jun, Mikyoung; Genton, Marc G.
2017-01-01
Statistical models used in geophysical, environmental, and climate science applications must reflect the curvature of the spatial domain in global data. Over the past few decades, statisticians have developed covariance models that capture
A cautionary note on the use of information fit indexes in covariance structure modeling with means
Wicherts, J.M.; Dolan, C.V.
2004-01-01
Information fit indexes such as Akaike Information Criterion, Consistent Akaike Information Criterion, Bayesian Information Criterion, and the expected cross validation index can be valuable in assessing the relative fit of structural equation models that differ regarding restrictiveness. In cases
Using the Flipchem Photochemistry Model When Fitting Incoherent Scatter Radar Data
Reimer, A. S.; Varney, R. H.
2017-12-01
The North face Resolute Bay Incoherent Scatter Radar (RISR-N) routinely images the dynamics of the polar ionosphere, providing measurements of the plasma density, electron temperature, ion temperature, and line-of-sight velocity with seconds-to-minutes time resolution. RISR-N does not directly measure ionospheric parameters, but backscattered signals, recording them as voltage samples. Using signal processing techniques, radar autocorrelation functions (ACF) are estimated from the voltage samples. A model of the signal ACF is then fitted to the estimated ACF using non-linear least-squares techniques to obtain the best-fit ionospheric parameters. The signal model, and therefore the fitted parameters, depend on the ionospheric ion composition that is used [e.g. Zettergren et al. (2010), Zou et al. (2017)]. The software used to process RISR-N ACF data includes the "flipchem" model, an ion photochemistry model developed by Richards [2011] that was adapted from the Field Line Interhemispheric Plasma (FLIP) model. Flipchem requires neutral densities, neutral temperatures, electron density, ion temperature, electron temperature, solar zenith angle, and F10.7 as inputs to compute ion densities, which are input to the signal model. A description of how the flipchem model is used in the RISR-N fitting software will be presented. Additionally, a statistical comparison of the fitted electron density, ion temperature, electron temperature, and velocity obtained using a flipchem ionosphere, a pure O+ ionosphere, and a Chapman O+ ionosphere will be presented. The comparison covers nearly two years of RISR-N data (April 2015 - December 2016). Richards, P. G. (2011), Reexamination of ionospheric photochemistry, J. Geophys. Res., 116, A08307, doi:10.1029/2011JA016613. Zettergren, M., Semeter, J., Burnett, B., Oliver, W., Heinselman, C., Blelly, P.-L., and Diaz, M.: Dynamic variability in F-region ionospheric composition at auroral arc boundaries, Ann. Geophys., 28, 651-664, https
Global model structures for ∗-modules
DEFF Research Database (Denmark)
Böhme, Benjamin
2018-01-01
We extend Schwede's work on the unstable global homotopy theory of orthogonal spaces and L-spaces to the category of ∗-modules (i.e., unstable S-modules). We prove a theorem which transports model structures and their properties from L-spaces to ∗-modules and show that the resulting global model structure for ∗-modules is monoidally Quillen equivalent to that of orthogonal spaces. As a consequence, there are induced Quillen equivalences between the associated model categories of monoids, which identify equivalent models for the global homotopy theory of A∞-spaces.
Fourier series models through transformation | Omekara | Global ...
African Journals Online (AJOL)
As a result, the square transformation which outperforms the others is adopted. Consequently, each of the multiplicative and additive FSA models fitted to the transformed data are then subjected to a test for white noise based on spectral analysis. The result of this test shows that only the multiplicative model is adequate.
International Nuclear Information System (INIS)
Ishima, Rieko; Torchia, Dennis A.
2005-01-01
Off-resonance effects can introduce significant systematic errors in R2 measurements in constant-time Carr-Purcell-Meiboom-Gill (CPMG) transverse relaxation dispersion experiments. For an off-resonance chemical shift of 500 Hz, 15N relaxation dispersion profiles obtained from experiment and computer simulation indicated a systematic error of ca. 3%. This error is three- to five-fold larger than the random error in R2 caused by noise. Good estimates of total R2 uncertainty are critical in order to obtain accurate estimates of optimized chemical exchange parameters and their uncertainties derived from χ2 minimization of a target function. Here, we present a simple empirical approach that provides a good estimate of the total error (systematic + random) in 15N R2 values measured for the HIV protease. The advantage of this empirical error estimate is that it is applicable even when some of the factors that contribute to the off-resonance error are not known. These errors are incorporated into a χ2 minimization protocol, in which the Carver-Richards equation is used to fit the observed R2 dispersion profiles, that yields optimized chemical exchange parameters and their confidence limits. Optimized parameters are also derived, using the same protein sample and data-fitting protocol, from 1H R2 measurements in which systematic errors are negligible. Although 1H and 15N relaxation profiles of individual residues were well fit, the optimized exchange parameters had large uncertainties (confidence limits). In contrast, when a single pair of exchange parameters (the exchange lifetime, τex, and the fractional population, pa) was constrained to globally fit all R2 profiles for residues in the dimer interface of the protein, confidence limits were less than 8% for all optimized exchange parameters. In addition, F-tests showed that the quality of the fits obtained using τex, pa as global parameters was not improved when these parameters were free to fit the R
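A global fit with shared exchange parameters can be illustrated as follows. For brevity this sketch uses the simpler Luz-Meiboom fast-exchange dispersion expression rather than the full Carver-Richards equation, and all rates, field strengths, and noise levels are invented; the point is the structure of the fit, in which one exchange rate is shared across residues while R2,0 and the amplitude are local.

```python
import numpy as np
from scipy.optimize import least_squares

def r2eff(nu_cpmg, r20, phi, kex):
    # Luz-Meiboom fast-exchange dispersion: a simpler stand-in for the
    # Carver-Richards equation used in the study.
    x = kex / (4.0 * nu_cpmg)
    return r20 + (phi / kex) * (1.0 - np.tanh(x) / x)

nu = np.linspace(50.0, 1000.0, 12)           # CPMG field strengths (Hz)
rng = np.random.default_rng(1)
true_kex = 800.0                             # shared exchange rate, kex = 1/tau_ex
residues = [(8.0, 4000.0), (10.0, 9000.0)]   # per-residue (R2_0, phi), invented
data = [r2eff(nu, r20, phi, true_kex) + rng.normal(0.0, 0.05, nu.size)
        for r20, phi in residues]

def residuals(p):
    # p = [kex, r20_1, phi_1, r20_2, phi_2]: kex is global, the rest are local.
    res = [r2eff(nu, p[1 + 2*i], p[2 + 2*i], p[0]) - d
           for i, d in enumerate(data)]
    return np.concatenate(res)

fit = least_squares(residuals, x0=[500.0, 5.0, 1000.0, 5.0, 1000.0])
kex_fit = fit.x[0]
```

Because all residues constrain the single kex, its confidence limits tighten relative to per-residue fits, which is the effect the abstract reports.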
A versatile curve-fit model for linear to deeply concave rank abundance curves
Neuteboom, J.H.; Struik, P.C.
2005-01-01
A new, flexible curve-fit model for linear to concave rank abundance curves was conceptualized and validated using observational data. The model links the geometric-series model and log-series model and can also fit deeply concave rank abundance curves. The model is based - in an unconventional way
DEFF Research Database (Denmark)
Enemark, Stig
2015-01-01
This paper argues that the fit-for-purpose approach to building land administration systems in less developed countries will enable provision of the basic administrative frameworks for managing the people-to-land relationship that is fundamental for meeting the upcoming post-2015 global agenda. The term “Fit-For-Purpose Land Administration” indicates that the approach used for building land administration systems in less developed countries should be flexible and focused on serving the purpose of the systems (such as providing security of tenure and control of land use) rather than focusing on top-end technical solutions and high-accuracy surveys. Of course, such flexibility allows for land administration systems to be incrementally improved over time. This paper unfolds the Fit-For-Purpose concept by analysing the three core components: • The spatial framework (large scale land parcel...
Matthew P. Adams; Catherine J. Collier; Sven Uthicke; Yan X. Ow; Lucas Langlois; Katherine R. O’Brien
2017-01-01
When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluat...
Virtual Suit Fit Assessment Using Body Shape Model
National Aeronautics and Space Administration — Shoulder injury is one of the most serious risks for crewmembers in long-duration spaceflight. While suboptimal suit fit and contact pressures between the shoulder...
Fitness voter model: Damped oscillations and anomalous consensus.
Woolcock, Anthony; Connaughton, Colm; Merali, Yasmin; Vazquez, Federico
2017-09-01
We study the dynamics of opinion formation in a heterogeneous voter model on a complete graph, in which each agent is endowed with an integer fitness parameter k≥0, in addition to its + or - opinion state. The evolution of the distribution of k-values and the opinion dynamics are coupled together, so as to allow the system to dynamically develop heterogeneity and memory in a simple way. When two agents with different opinions interact, their k-values are compared, and with probability p the agent with the lower value adopts the opinion of the one with the higher value, while with probability 1-p the opposite happens. The agent that keeps its opinion (the winning agent) increments its k-value by one. We study the dynamics of the system in the entire 0≤p≤1 range and compare with the case p=1/2, in which opinions are decoupled from the k-values and the dynamics is equivalent to that of the standard voter model. When 0≤p<1/2, the system approaches exponentially fast the consensus state of the initial majority opinion. The mean consensus time τ appears to grow logarithmically with the number of agents N, and it is greatly decreased relative to the linear behavior τ∼N found in the standard voter model. When 1/2<p<1, the system initially relaxes to a state with an even coexistence of opinions, but eventually reaches consensus through finite-size fluctuations. The approach to the coexistence state is monotonic for values of p just above 1/2, while for larger p it displays damped oscillations around the coexistence value. The final approach to coexistence is approximately a power law t^{-b(p)} in both regimes, where the exponent b increases with p. Also, τ increases with respect to the standard voter model, although it still scales linearly with N. The p=1 case is special, with a relaxation to coexistence that scales as t^{-2.73} and a consensus time that scales as τ∼N^{β}, with β≃1.45.
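The interaction rule above translates directly into a Monte Carlo simulation. The following is a minimal sketch with illustrative parameter values, not the authors' code; ties in fitness are broken arbitrarily, which the abstract does not specify.

```python
import random

def fitness_voter(n=100, p=0.25, initial_plus=0.6, seed=0, max_steps=10**6):
    # Complete-graph fitness voter model: each agent carries an opinion
    # (+1 or -1) and an integer fitness k >= 0 (ties in k broken arbitrarily).
    rng = random.Random(seed)
    opinion = [1 if i < int(initial_plus * n) else -1 for i in range(n)]
    fitness = [0] * n
    for step in range(max_steps):
        i, j = rng.sample(range(n), 2)
        if opinion[i] == opinion[j]:
            continue
        # With probability p the lower-fitness agent adopts the opinion of
        # the higher-fitness agent; with probability 1-p the opposite happens.
        lo, hi = (i, j) if fitness[i] <= fitness[j] else (j, i)
        winner, loser = (hi, lo) if rng.random() < p else (lo, hi)
        opinion[loser] = opinion[winner]
        fitness[winner] += 1               # the winning agent gains fitness
        if abs(sum(opinion)) == n:         # consensus reached
            return step, opinion[0]
    return max_steps, 0

# In the p < 1/2 regime, fast consensus on the initial majority is expected.
steps, final_opinion = fitness_voter()
```

Sweeping p across [0, 1] with this loop reproduces the qualitative regimes described in the abstract (fast consensus below 1/2, long-lived coexistence above it).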
Retrieving global aerosol sources from satellites using inverse modeling
Directory of Open Access Journals (Sweden)
O. Dubovik
2008-01-01
Understanding aerosol effects on global climate requires knowing the global distribution of tropospheric aerosols. By accounting for aerosol sources, transports, and removal processes, chemical transport models simulate the global aerosol distribution using archived meteorological fields. We develop an algorithm for retrieving global aerosol sources from satellite observations of aerosol distribution by inverting the GOCART aerosol transport model.
The inversion is based on a generalized, multi-term least-squares-type fitting, allowing flexible selection and refinement of a priori algorithm constraints. For example, limitations can be placed on retrieved quantity partial derivatives, to constrain global aerosol emission space and time variability in the results. Similarities and differences between commonly used inverse modeling and remote sensing techniques are analyzed. To retain the high space and time resolution of long-period, global observational records, the algorithm is expressed using adjoint operators.
Successful global aerosol emission retrievals at 2°×2.5° resolution were obtained by inverting GOCART aerosol transport model output, assuming constant emissions over the diurnal cycle, and neglecting aerosol compositional differences. In addition, fine and coarse mode aerosol emission sources were inverted separately from MODIS fine and coarse mode aerosol optical thickness data, respectively. These assumptions are justified, based on observational coverage and accuracy limitations, producing valuable aerosol source locations and emission strengths. From two weeks of daily MODIS observations during August 2000, the global placement of fine mode aerosol sources agreed with available independent knowledge, even though the inverse method did not use any a priori information about aerosol sources, and was initialized with a "zero aerosol emission" assumption. Retrieving coarse mode aerosol emissions was less successful
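The flavour of such a multi-term least-squares inversion with an a priori constraint can be shown on a toy linear problem. Here a random matrix stands in for the transport-model operator and a first-difference penalty plays the role of the a priori smoothness constraint; everything is illustrative and none of it is the actual GOCART setup.

```python
import numpy as np

# Toy linear forward model y = A x: x are emissions on a coarse grid,
# y are "observed" optical thicknesses. A stands in for the transport model.
rng = np.random.default_rng(2)
n_src, n_obs = 30, 60
A = np.abs(rng.normal(size=(n_obs, n_src)))
x_true = np.zeros(n_src)
x_true[10:14] = 1.0                      # one localized source region
y = A @ x_true + rng.normal(0.0, 0.05, n_obs)

# Multi-term least squares: a data term plus an a priori term penalizing
# large differences between neighbouring emission cells (smoothness prior).
D = np.diff(np.eye(n_src), axis=0)       # first-difference operator
gamma = 0.1                              # weight of the a priori constraint
x_hat = np.linalg.solve(A.T @ A + gamma * D.T @ D, A.T @ y)
```

Raising `gamma` trades fidelity to the observations for agreement with the prior, which is the "flexible selection and refinement of a priori constraints" the abstract refers to.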
Global Health Innovation Technology Models
Directory of Open Access Journals (Sweden)
Kimberly Harding
2016-04-01
Chronic technology and business process disparities between High Income, Low Middle Income and Low Income (HIC, LMIC, LIC) research collaborators directly prevent the growth of sustainable Global Health innovation for infectious and rare diseases. There is a need for an Open Source-Open Science Architecture Framework to bridge this divide. We are proposing such a framework for consideration by the Global Health community, by utilizing a hybrid approach of integrating agnostic Open Source technology and healthcare interoperability standards and Total Quality Management principles. We will validate this architecture framework through our programme called Project Orchid. Project Orchid is a conceptual Clinical Intelligence Exchange and Virtual Innovation platform utilizing this approach to support clinical innovation efforts for multi-national collaboration that can be locally sustainable for LIC and LMIC research cohorts. The goal is to enable LIC and LMIC research organizations to accelerate their clinical trial process maturity in the field of drug discovery, population health innovation initiatives and public domain knowledge networks. When sponsored, this concept will be tested by 12 confirmed clinical research and public health organizations in six countries. The potential impact of this platform is reduced drug discovery and public health innovation lag time and improved clinical trial interventions, due to reliable clinical intelligence and bio-surveillance across all phases of the clinical innovation process.
Item level diagnostics and model - data fit in item response theory ...
African Journals Online (AJOL)
Item response theory (IRT) is a framework for modeling and analyzing item response data. Item-level modeling gives IRT advantages over classical test theory. The fit of an item score pattern to an item response theory (IRT) models is a necessary condition that must be assessed for further use of item and models that best fit ...
HYbrid Coordinate Ocean Model (HYCOM): Global
National Oceanic and Atmospheric Administration, Department of Commerce — Global HYbrid Coordinate Ocean Model (HYCOM) and U.S. Navy Coupled Ocean Data Assimilation (NCODA) 3-day, daily forecast at approximately 9-km (1/12-degree)...
ASTER Global Digital Elevation Model V002
National Aeronautics and Space Administration — The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Model (GDEM) was developed jointly by the U.S. National...
Gutierrez, Antonio P; Candela, Lori L; Carver, Lara
2012-07-01
The aim of this correlational study was to examine the relations between organizational commitment, perceived organizational support, work values, person-organization fit, developmental experiences, and global job satisfaction among nursing faculty. The global nursing shortage is well documented. At least 57 countries have reported critical shortages. The lack of faculty is finally being recognized as a major issue directly influencing the ability to admit and graduate adequate numbers of nurses. As efforts increase to both recruit and retain faculty, the concept of organizational commitment and what it means to them is important to consider. A cross-sectional correlational design was used. The present study investigated the underlying structure of various organizational factors using structural equation modelling. Data were collected from a stratified random sample of nurse faculty during the academic year 2006-2007. The final model demonstrated that perceived organizational support, developmental experiences, person-organization fit, and global job satisfaction positively predicted nurse faculty's organizational commitment to the academic organization. Cross-validation results indicated that the final full SEM is valid and reliable. Nursing faculty administrators able to use mentoring skills are well equipped to build positive relationships with nursing faculty, which, in turn, can lead to increased organizational commitment, productivity, job satisfaction, and perceived organizational support, among others. © 2012 Blackwell Publishing Ltd.
Connor, Gregory
1996-01-01
Factor models are now widely used to support asset selection decisions. Global asset allocation, the allocation between stocks versus bonds and among nations, usually relies instead on correlation analysis of international equity and bond indexes. It would be preferable to have a single integrated framework for both asset selection and asset allocation. This framework would require a factor model applicable at an asset or country level, as well as at a global level,...
CRAPONE, Optical Model Potential Fit of Neutron Scattering Data
International Nuclear Information System (INIS)
Fabbri, F.; Fratamico, G.; Reffo, G.
2004-01-01
1 - Description of problem or function: Automatic search for local and non-local optical potential parameters for neutrons. Total, elastic, and differential elastic cross sections, l=0 and l=1 strength functions, and the scattering length can be considered. 2 - Method of solution: A fitting procedure is applied to different sets of experimental data depending on the local or non-local approximation chosen. In the non-local approximation the fitting procedure can be performed simultaneously over the whole energy range. The best fit is obtained when a set of parameters is found for which χ2 is at its minimum. The solution of the system of equations is obtained by diagonalization of the matrix according to the Jacobi method
International Nuclear Information System (INIS)
Mbagwu, J.S.C.
1994-05-01
Among the many models developed for monitoring the infiltration process, those of Philip and Kostiakov have been studied in detail because of their simplicity and the ease of estimating their fitting parameters. The important soil physical factors influencing the fitting parameters of these infiltration models are reported in this study. The results show that the single most important soil property affecting the fitting parameters in these models is the effective porosity. 36 refs, 2 figs, 5 tabs
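Both models take simple closed forms (Philip's two-term equation I(t) = S·sqrt(t) + A·t and Kostiakov's power law I(t) = k·t^a), so their fitting parameters can be estimated by ordinary nonlinear least squares. The sketch below uses synthetic cumulative-infiltration data with invented values.

```python
import numpy as np
from scipy.optimize import curve_fit

def philip(t, S, A):
    # Philip two-term model: cumulative infiltration I = S*sqrt(t) + A*t,
    # with sorptivity S and a transmissivity-like constant A.
    return S * np.sqrt(t) + A * t

def kostiakov(t, k, a):
    # Kostiakov model: I = k * t**a (empirical power law).
    return k * t**a

t = np.linspace(0.1, 4.0, 25)                    # elapsed time (h)
rng = np.random.default_rng(3)
I_obs = philip(t, 2.0, 0.8) + rng.normal(0.0, 0.05, t.size)  # synthetic data

(pS, pA), _ = curve_fit(philip, t, I_obs, p0=[1.0, 1.0])
(kk, ka), _ = curve_fit(kostiakov, t, I_obs, p0=[1.0, 0.5])
```

Comparing residuals of the two fitted curves against measured data is the usual way the two models are ranked on a given soil.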
A global central banker competency model
Directory of Open Access Journals (Sweden)
David W. Brits
2014-07-01
Orientation: No comprehensive, integrated competency model exists for central bankers. Due to the importance of central banks in the context of the ongoing global financial crisis, it was deemed necessary to design and validate such a model. Research purpose: To craft and validate a comprehensive, integrated global central banker competency model (GCBCM) and to assess whether central banks using the GCBCM for training have a higher global influence. Motivation for the study: Limited consensus exists globally about what constitutes a ‘competent’ central banker. A quantitatively validated GCBCM would make a significant contribution to enhancing central banker effectiveness, and also provide a solid foundation for effective people management. Research approach, design and method: A blended quantitative and qualitative research approach was taken. Two sets of hypotheses were tested regarding the relationships between the GCBCM and the training offered, using the model on the one hand, and a central bank’s global influence on the other. Main findings: The GCBCM was generally accepted across all participating central banks globally, although some differences were found between central banks with higher and lower global influence. The actual training offered by central banks in terms of the model, however, is generally limited to technical-functional skills. The GCBCM is therefore at present predominantly aspirational. Significant differences were found regarding the training offered. Practical/managerial implications: By adopting the GCBCM, central banks would be able to develop organisation-specific competency models in order to enhance their organisational capabilities and play their increasingly important global role more effectively. Contribution: A generic conceptual framework for the crafting of a competency model with evaluation criteria was developed. A GCBCM was quantitatively validated.
Modelling and analysis of global coal markets
International Nuclear Information System (INIS)
Trueby, Johannes
2013-01-01
International Steam Coal Trade. In this paper, we analyse steam coal market equilibria in the years 2006 and 2008 by testing for two possible market structure scenarios: perfect competition and an oligopoly setup with major exporters competing in quantities. The assumed oligopoly scenario cannot explain market equilibria for any year. While we find that the competitive model simulates market equilibria well in 2006, the competitive model is not able to reproduce real market outcomes in 2008. The analysis shows that not all available supply capacity was utilised in 2008. We conclude that either unknown capacity bottlenecks or more sophisticated non-competitive strategies were the cause of the high prices in 2008. Chapter 4 builds upon the findings of the analysis in chapter 3 and adds a more detailed representation of domestic markets. The corresponding essay is titled Nations as Strategic Players in Global Commodity Markets: Evidence from World Coal Trade. In this chapter we explore the hypothesis that export policies and trade patterns of national players in the steam coal market are consistent with non-competitive market behaviour. We test this hypothesis by developing a static equilibrium model which is able to model coal producing nations as strategic players. We explicitly account for integrated seaborne trade and domestic markets. The global steam coal market is simulated under several imperfect market structure setups. We find that trade and prices of a China-Indonesia duopoly fit the real market outcome best and that real Chinese export quotas in 2008 were consistent with simulated exports under a Cournot-Nash strategy. Chapter 5 looks at the long-term effect of Chinese energy system planning decisions. The time horizon is 2006 to 2030. The analysis in this chapter combines a dynamic equilibrium model with the scenario analysis technique. The corresponding essay is titled Coal Lumps vs. 
Electrons: How Do Chinese Bulk Energy Transport Decisions Affect the Global
Modelling and analysis of global coal markets
Energy Technology Data Exchange (ETDEWEB)
Trueby, Johannes
2013-01-17
International Steam Coal Trade. In this paper, we analyse steam coal market equilibria in the years 2006 and 2008 by testing for two possible market structure scenarios: perfect competition and an oligopoly setup with major exporters competing in quantities. The assumed oligopoly scenario cannot explain market equilibria for any year. While we find that the competitive model simulates market equilibria well in 2006, the competitive model is not able to reproduce real market outcomes in 2008. The analysis shows that not all available supply capacity was utilised in 2008. We conclude that either unknown capacity bottlenecks or more sophisticated non-competitive strategies were the cause of the high prices in 2008. Chapter 4 builds upon the findings of the analysis in chapter 3 and adds a more detailed representation of domestic markets. The corresponding essay is titled Nations as Strategic Players in Global Commodity Markets: Evidence from World Coal Trade. In this chapter we explore the hypothesis that export policies and trade patterns of national players in the steam coal market are consistent with non-competitive market behaviour. We test this hypothesis by developing a static equilibrium model which is able to model coal producing nations as strategic players. We explicitly account for integrated seaborne trade and domestic markets. The global steam coal market is simulated under several imperfect market structure setups. We find that trade and prices of a China-Indonesia duopoly fit the real market outcome best and that real Chinese export quotas in 2008 were consistent with simulated exports under a Cournot-Nash strategy. Chapter 5 looks at the long-term effect of Chinese energy system planning decisions. The time horizon is 2006 to 2030. The analysis in this chapter combines a dynamic equilibrium model with the scenario analysis technique. The corresponding essay is titled Coal Lumps vs. 
Electrons: How Do Chinese Bulk Energy Transport Decisions Affect the Global
Regional forecasting with global atmospheric models
International Nuclear Information System (INIS)
Crowley, T.J.; North, G.R.; Smith, N.R.
1994-05-01
The scope of the report is to present the results of the fourth year's work on the atmospheric modeling part of the global climate studies task. The development and testing of computer models and initial results are discussed. The appendices contain studies that provide supporting information and guidance to the modeling work and further details on computer model development. Complete documentation of the models, including user information, will be prepared under separate reports and manuals
The FITS model office ergonomics program: a model for best practice.
Chim, Justine M Y
2014-01-01
An effective office ergonomics program can predict positive results in reducing musculoskeletal injury rates, enhancing productivity, and improving staff well-being and job satisfaction. Its objective is to provide a systematic solution to manage the potential risk of musculoskeletal disorders among computer users in an office setting. The FITS Model Office Ergonomics Program has been developed, drawing on the legislative requirements for promoting the health and safety of workers using computers for extended periods as well as on previous research findings. The Model is developed according to practical industrial knowledge in ergonomics, occupational health and safety management, and human resources management in Hong Kong and overseas. This paper proposes a comprehensive office ergonomics program, the FITS Model, which considers (1) Furniture Evaluation and Selection; (2) Individual Workstation Assessment; (3) Training and Education; (4) Stretching Exercises and Rest Breaks as elements of an effective program. An experienced ergonomics practitioner should be included in the program design and implementation. Through the FITS Model Office Ergonomics Program, the risk of musculoskeletal disorders among computer users can be eliminated or minimized, and workplace health and safety and employees' wellness enhanced.
Global-warming forecasting models
International Nuclear Information System (INIS)
Moeller, K.P.
1992-01-01
In spite of an annual man-made output of about 20 billion tons, carbon dioxide has remained a trace gas in the atmosphere (350 ppm at present). The reliability of model calculations which forecast temperatures is discussed in view of the world-wide increase in carbon dioxide. Computer simulations reveal a general, serious threat to the future of mankind. (DG)
Modelling population dynamics model formulation, fitting and assessment using state-space methods
Newman, K B; Morgan, B J T; King, R; Borchers, D L; Cole, D J; Besbeas, P; Gimenez, O; Thomas, L
2014-01-01
This book gives a unifying framework for estimating the abundance of open populations: populations subject to births, deaths and movement, given imperfect measurements or samples of the populations. The focus is primarily on populations of vertebrates for which dynamics are typically modelled within the framework of an annual cycle, and for which stochastic variability in the demographic processes is usually modest. Discrete-time models are developed in which animals can be assigned to discrete states such as age class, gender, maturity, population (within a metapopulation), or species (for multi-species models). The book goes well beyond estimation of abundance, allowing inference on underlying population processes such as birth or recruitment, survival and movement. This requires the formulation and fitting of population dynamics models. The resulting fitted models yield both estimates of abundance and estimates of parameters characterizing the underlying processes.
Model Fitting for Predicted Precipitation in Darwin: Some Issues with Model Choice
Farmer, Jim
2010-01-01
In Volume 23(2) of the "Australian Senior Mathematics Journal," Boncek and Harden present an exercise in fitting a Markov chain model to rainfall data for Darwin Airport (Boncek & Harden, 2009). Days are subdivided into those with precipitation and precipitation-free days. The author abbreviates these labels to wet days and dry days.…
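Fitting such a two-state (wet day/dry day) Markov chain amounts to counting transitions in the observed sequence. A minimal sketch with a short hypothetical record, not the Darwin Airport data:

```python
import numpy as np

def fit_markov_chain(days):
    # days: sequence of 0 (dry) and 1 (wet). Estimate the 2x2 transition
    # matrix P[i][j] = Pr(tomorrow = j | today = i) by counting transitions
    # (assumes both states occur at least once before the final day).
    counts = np.zeros((2, 2))
    for today, tomorrow in zip(days, days[1:]):
        counts[today, tomorrow] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Example: a short hypothetical wet/dry record.
record = [0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0]
P = fit_markov_chain(record)
```

The diagonal entries P[0,0] and P[1,1] measure persistence of dry and wet spells, which is exactly what distinguishes a Markov chain from an independent-days model of rainfall.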
Model-fitting approach to kinetic analysis of non-isothermal oxidation of molybdenite
International Nuclear Information System (INIS)
Ebrahimi Kahrizsangi, R.; Abbasi, M. H.; Saidi, A.
2007-01-01
The kinetics of molybdenite oxidation was studied by non-isothermal TGA-DTA with a heating rate of 5 °C min^-1. The model-fitting kinetic approach was applied to the TGA data, with the Coats-Redfern method used for model fitting. This popular model-fitting method gives an excellent fit to the non-isothermal data in the chemically controlled regime. The apparent activation energy was determined to be about 34.2 kcal mol^-1, with a pre-exponential factor of about 10^8 s^-1, for extents of reaction less than 0.5
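The Coats-Redfern method linearizes the integral rate law so that the activation energy follows from the slope of ln(g(α)/T²) against 1/T. The sketch below generates synthetic first-order data consistent with that approximation and recovers Ea; the numerical values echo the abstract but are otherwise invented.

```python
import numpy as np

R = 8.314            # gas constant, J/(mol*K)
Ea_true = 143_000.0  # J/mol (~34.2 kcal/mol, echoing the abstract)
A = 1.0e8            # pre-exponential factor (1/s)
beta = 5.0 / 60.0    # heating rate: 5 deg C/min expressed in K/s

# Synthetic conversion data consistent with the Coats-Redfern approximation
# for a first-order model, g(alpha) = -ln(1 - alpha):
T = np.linspace(600.0, 720.0, 40)
g = (A * R * T**2) / (beta * Ea_true) * np.exp(-Ea_true / (R * T))

# Coats-Redfern linearization: ln(g/T^2) vs 1/T has slope -Ea/R.
slope, intercept = np.polyfit(1.0 / T, np.log(g / T**2), 1)
Ea_fit = -slope * R
```

With real TGA data the same regression is repeated for each candidate g(α), and the model giving the straightest line (highest correlation coefficient) is selected.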
Repair models of cell survival and corresponding computer program for survival curve fitting
International Nuclear Information System (INIS)
Shen Xun; Hu Yiwei
1992-01-01
Some basic concepts and formulations of two repair models of survival, the incomplete repair (IR) model and the lethal-potentially lethal (LPL) model, are introduced. An IBM-PC computer program for survival curve fitting with these models was developed and applied to fit the survival of human melanoma cells HX118 irradiated at different dose rates. A comparison was made between the repair models and two non-repair models, the multitarget-single hit model and the linear-quadratic model, in the fitting and analysis of the survival-dose curves. It was shown that either the IR model or the LPL model can fit a set of survival curves at different dose rates with the same parameters and provide information on the repair capacity of cells. These two mathematical models could be very useful in quantitative studies of the radiosensitivity and repair capacity of cells
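Of the models compared, the linear-quadratic model has the simplest form, S(D) = exp(-(αD + βD²)), and fitting it illustrates the general survival-curve-fitting task. This is a sketch on synthetic data with invented parameters; the IR and LPL repair models add dose-rate-dependent terms not shown here.

```python
import numpy as np
from scipy.optimize import curve_fit

def lq_survival(dose, alpha, beta):
    # Linear-quadratic model: S(D) = exp(-(alpha*D + beta*D^2))
    return np.exp(-(alpha * dose + beta * dose**2))

dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])   # Gy
rng = np.random.default_rng(4)
surv = lq_survival(dose, 0.3, 0.03) * rng.lognormal(0.0, 0.03, dose.size)

# Fit in log space so that small survival fractions carry appropriate weight.
popt, _ = curve_fit(lambda d, a, b: -(a * d + b * d**2),
                    dose, np.log(surv), p0=[0.1, 0.01])
alpha_fit, beta_fit = popt
```

Fitting in log space is the standard trick for survival data, since the raw survival fractions span several orders of magnitude.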
The l_z(p)* Person-Fit Statistic in an Unfolding Model Context.
Tendeiro, Jorge N
2017-01-01
Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded unfolding model is used. Results from a simulation study indicate that the person-fit statistic performed relatively well in detecting midpoint response style patterns and not so well in detecting extreme response style patterns.
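The article's statistic is a polytomous variant defined under the generalized graded unfolding model; purely as an illustration, the dichotomous analogue of the standardized log-likelihood person-fit statistic l_z can be computed as follows (the item probabilities and response patterns below are made up):

```python
import math

def lz_statistic(responses, probs):
    """Standardized log-likelihood person-fit statistic l_z for
    dichotomous items, given model probabilities of endorsement."""
    l0 = sum(u * math.log(p) + (1 - u) * math.log(1 - p)
             for u, p in zip(responses, probs))
    mean = sum(p * math.log(p) + (1 - p) * math.log(1 - p) for p in probs)
    var = sum(p * (1 - p) * math.log(p / (1 - p)) ** 2 for p in probs)
    return (l0 - mean) / math.sqrt(var)

# A model-consistent pattern scores near zero; an aberrant pattern
# (failing easy items, passing hard ones) is strongly negative
probs = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
consistent = [1, 1, 1, 1, 0, 0, 0, 0]
aberrant = [0, 0, 0, 0, 1, 1, 1, 1]
z_good = lz_statistic(consistent, probs)
z_bad = lz_statistic(aberrant, probs)
```

Large negative values flag misfitting respondents; response-style detection as studied in the article amounts to asking whether particular aberrant patterns (midpoint or extreme styles) push l_z far enough below zero.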
Qualitative models of global warming amplifiers
Milošević, U.; Bredeweg, B.; de Kleer, J.; Forbus, K.D.
2010-01-01
There is growing interest from ecological experts to create qualitative models of phenomena for which numerical information is sparse or missing. We present a number of successful models in the field of environmental science, namely, the domain of global warming. The motivation behind the effort is…
Technology Learning Ratios in Global Energy Models
International Nuclear Information System (INIS)
Varela, M.
2001-01-01
The introduction of a new technology implies that, as its production and utilisation increase, its operation improves and its investment and production costs decrease. The accumulated experience with, and learning of, a new technology grow in parallel with its market share. This process is represented by technological learning curves, and the energy sector is not detached from this process of substitution of old technologies by new ones. The present paper carries out a brief review of the main energy models that include technology dynamics (learning). The energy scenarios developed by global energy models assume that the characteristics of the technologies vary with time. But this trend is incorporated in an exogenous way in these energy models, that is to say, it is only a function of time. This practice is applied to the cost indicators of the technology, such as the specific investment costs, or to the efficiency of the energy technologies. In recent years, the new concept of endogenous technological learning has been integrated within these global energy models. This paper examines the concept of technological learning in global energy models. It also analyses the technological dynamics of the energy system, including the endogenous modelling of the process of technological progress. Finally, it compares several of the most widely used global energy models (MARKAL, MESSAGE and ERIS) and, more specifically, how these models use the concept of technological learning. (Author) 17 refs
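The learning described above is usually written as a one-factor learning curve, C(x) = C0·x^(−b), where doubling cumulative capacity multiplies specific cost by the progress ratio 2^(−b). A minimal sketch (the numbers are illustrative, not from any of the cited models):

```python
import math

def learning_curve_cost(c0, cumulative, b):
    """One-factor learning curve: specific cost falls as cumulative
    installed capacity grows, C(x) = C0 * x**(-b)."""
    return c0 * cumulative ** (-b)

def progress_ratio(b):
    """Cost multiplier per doubling of cumulative capacity: 2**(-b)."""
    return 2.0 ** (-b)

# A 20% learning rate (progress ratio 0.8) corresponds to b = -log2(0.8)
b = -math.log2(0.8)
c1 = learning_curve_cost(1000.0, 1.0, b)  # initial specific cost
c2 = learning_curve_cost(1000.0, 2.0, b)  # cost after one doubling
```

Endogenous-learning models such as ERIS embed this relation in the optimisation itself, so that investment decisions and cost reductions are determined jointly rather than cost being an exogenous function of time.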
Energy Technology Data Exchange (ETDEWEB)
Gabel, J.
2005-03-17
We present an analysis of the intrinsic UV absorption in the Seyfert 1 galaxy Mrk 279 based on simultaneous long observations with the ''Hubble Space Telescope'' (41 ks) and the ''Far Ultraviolet Spectroscopic Explorer'' (91 ks). To extract the line-of-sight covering factors and ionic column densities, we separately fit two groups of absorption lines: the Lyman series and the CNO lithium-like doublets. For the CNO doublets we assume that all three ions share the same covering factors. The fitting method applied here overcomes some limitations of the traditional method using individual doublet pairs; it allows for the treatment of more complex, physically realistic scenarios for the absorption-emission geometry and eliminates systematic errors that we show are introduced by spectral noise. We derive velocity-dependent solutions based on two models of geometrical covering: a single covering factor for all background emission sources, and separate covering factors for the continuum and emission lines. Although both models give good statistical fits to the observed absorption, we favor the model with two covering factors because: (a) the best-fit covering factors for both emission sources are similar for the independent Lyman series and CNO doublet fits; (b) the fits are consistent with full coverage of the continuum source and partial coverage of the emission lines by the absorbers, as expected from the relative sizes of the nuclear emission components; and (c) it provides a natural explanation for variability in the Lyα absorption detected in an earlier epoch. We also explore physical and geometrical constraints on the outflow from these results.
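The traditional doublet-pair method that this fitting approach generalizes solves, in each velocity bin, two residual-intensity equations for a doublet whose components have a 2:1 optical-depth ratio. A sketch under that textbook assumption (the intensities below are made up, not Mrk 279 data):

```python
import math

def doublet_solution(I_red, I_blue):
    """Per-velocity-bin solution for a doublet with 2:1 optical depths:
    I_red  = 1 - C*(1 - exp(-tau)),
    I_blue = 1 - C*(1 - exp(-2*tau)).
    Returns (covering factor C, optical depth tau of the weaker line)."""
    x = 1.0 - I_red   # C * (1 - e^-tau)
    y = 1.0 - I_blue  # C * (1 - e^-2tau)
    # (1 - e^-2t) = (1 - e^-t)(1 + e^-t), so y/x = 1 + e^-tau
    e_tau = y / x - 1.0
    C = x * x / (2.0 * x - y)
    tau = -math.log(e_tau)
    return C, tau

# Forward model with C = 0.75, tau = 1.2, then invert to recover them
C0, tau0 = 0.75, 1.2
Ir = 1 - C0 * (1 - math.exp(-tau0))
Ib = 1 - C0 * (1 - math.exp(-2 * tau0))
C, tau = doublet_solution(Ir, Ib)
```

The paper's point is that this bin-by-bin inversion amplifies spectral noise and cannot express separate continuum and line covering factors, which is why a simultaneous fit of all lines is preferred.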
Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models
Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning
2012-01-01
The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…
Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.
DeCarlo, Lawrence T
2003-02-01
The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
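As an illustration of the unequal-variance normal model that such ordinal-regression procedures fit, the z-transformed ROC is a straight line whose slope and intercept recover the variance ratio and mean separation. The sketch below is a least-squares stand-in for the full ordinal-regression fit, using synthetic rating-ROC points generated from known parameters:

```python
from statistics import NormalDist

def zroc_fit(fa_rates, hit_rates):
    """Unequal-variance signal detection sketch: on z-coordinates the ROC
    is linear, z(H) = (1/s)*z(F) + mu/s, so the zROC slope estimates 1/s
    (s = signal sd / noise sd) and the intercept recovers mu."""
    z = NormalDist().inv_cdf
    zf = [z(f) for f in fa_rates]
    zh = [z(h) for h in hit_rates]
    n = len(zf)
    mx, my = sum(zf) / n, sum(zh) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(zf, zh)) / \
            sum((a - mx) ** 2 for a in zf)
    intercept = my - slope * mx
    s = 1.0 / slope
    mu = intercept * s
    return mu, s

# Synthetic ROC points from mu = 1.5, s = 1.25 (signal more variable)
mu0, s0 = 1.5, 1.25
criteria = [-0.5, 0.0, 0.5, 1.0, 1.5]
fa = [1 - NormalDist().cdf(c) for c in criteria]
hit = [1 - NormalDist(mu0, s0).cdf(c) for c in criteria]
mu, s = zroc_fit(fa, hit)
```

A fitted s above 1 reproduces the familiar zROC slope below 1 seen in recognition memory; the SPSS route additionally yields standard errors and likelihood-based fit statistics.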
New global ICT-based business models
DEFF Research Database (Denmark)
The New Global Business Model (NEWGIBM) book describes the background, theory references, case studies, results and learning imparted by the NEWGIBM project, which is supported by ICT, to a research group during the period from 2005-2011. The book is a result of the efforts and the collaborative work of the project group. It serves as part of the final evaluation and documentation of the NEWGIBM project and is supported by results from the following projects: M-commerce, Global Innovation, Global Ebusiness & M-commerce, The Blue Ocean project, International Center for Innovation and Women in Business, NEFFICS… Topics covered include what the NEWGIBM cases show, the strategy concept in light of the increased importance of innovative business models, successful implementation of global BM innovation, and the globalisation of ICT-based business models today and in 2020.
A high resolution global scale groundwater model
de Graaf, Inge; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc
2014-05-01
As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater storage provides a large natural buffer against water shortage and sustains flows to rivers and wetlands, supporting ecosystem habitats and biodiversity. Yet, the current generation of global scale hydrological models (GHMs) does not include a groundwater flow component, although it is a crucial part of the hydrological cycle. Thus, a realistic physical representation of the groundwater system that allows for the simulation of groundwater head dynamics and lateral flows is essential for GHMs that increasingly run at finer resolution. In this study we present a global groundwater model with a resolution of 5 arc-minutes (approximately 10 km at the equator) using MODFLOW (McDonald and Harbaugh, 1988). With this global groundwater model we eventually intend to simulate the changes in the groundwater system over time that result from variations in recharge and abstraction. Aquifer schematization and properties of this groundwater model were developed from available global lithological maps and datasets (Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, 2013), combined with our estimate of aquifer thickness for sedimentary basins. We forced the groundwater model with the output from the global hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the net groundwater recharge and average surface water levels derived from routed channel discharge. For the parameterization, we relied entirely on available global datasets and did not calibrate the model, so that it can equally be expanded to data-poor environments. Based on our sensitivity analysis, in which we ran the model with various hydrogeological parameter settings, we observed that most variance in groundwater…
Modeling Global Biogenic Emission of Isoprene: Exploration of Model Drivers
Alexander, Susan E.; Potter, Christopher S.; Coughlan, Joseph C.; Klooster, Steven A.; Lerdau, Manuel T.; Chatfield, Robert B.; Peterson, David L. (Technical Monitor)
1996-01-01
Vegetation provides the major source of isoprene emission to the atmosphere. We present a modeling approach to estimate global biogenic isoprene emission. The isoprene flux model is linked to a process-based computer simulation model of biogenic trace-gas fluxes that operates on scales that link regional and global data sets and ecosystem nutrient transformations. Isoprene emission estimates are determined from estimates of ecosystem specific biomass, emission factors, and algorithms based on light and temperature. Our approach differs from an existing modeling framework by including the process-based global model for terrestrial ecosystem production, satellite derived ecosystem classification, and isoprene emission measurements from a tropical deciduous forest. We explore the sensitivity of model estimates to input parameters. The resulting emission products from the global 1 degree x 1 degree coverage provided by the satellite datasets and the process model allow flux estimations across large spatial scales and enable direct linkage to atmospheric models of trace-gas transport and transformation.
FIT ANALYSIS OF INDOSAT DOMPETKU BUSINESS MODEL USING A STRATEGIC DIAGNOSIS APPROACH
Directory of Open Access Journals (Sweden)
Fauzi Ridwansyah
2015-09-01
Mobile payment is an industry response to global and regional technological drivers, as well as national social-economic drivers, in the development of a less-cash society. The purposes of this study were (1) identifying the positioning of PT. Indosat in responding to the Indonesian mobile payment market, (2) analyzing the fit of Indosat's internal capabilities and business model with environmental turbulence, and (3) formulating the optimum mobile payment business model development design for Indosat. The method used in this study was a combination of qualitative and quantitative analysis through in-depth interviews with purposive judgment sampling. The analysis tools used were the Business Model Canvas (BMC) and Ansoff's Strategic Diagnosis. The interviewees were representatives of PT. Indosat internal management and mobile payment business value chain stakeholders. Based on the BMC mapping, which was then analyzed with the strategic diagnosis model, a considerable gap (>1) was found between the current market environment and Indosat's strategic aggressiveness relative to the expected future level of environmental turbulence. Therefore, the changes in competitive strategy that need to be conducted include (1) developing a new customer segment, (2) shifting the value proposition towards the extensification of mobile payment, (3) monetizing an effective value proposition, and (4) integrating effective collaboration to harmonize the company's objectives with the government's vision. Keywords: business model canvas, Indosat, mobile payment, less cash society, strategic diagnosis
Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis
2014-01-01
The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way.
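A stripped-down version of the cuckoo-search fit described above can be sketched as follows. For brevity the Lévy flights are replaced here by Gaussian steps with a decaying step size, and the basis functions, data and tuning constants are illustrative assumptions rather than the paper's setup:

```python
import random

def cuckoo_fit(basis, xs, ys, n_nests=20, iters=600, pa=0.25, seed=1):
    """Simplified cuckoo-search fit of coefficients c minimising the sum
    of squared residuals of y ~ sum_j c_j * basis[j](x). Levy flights are
    approximated by Gaussian steps whose size decays over iterations."""
    rng = random.Random(seed)
    dim = len(basis)

    def sse(c):
        return sum((y - sum(cj * b(x) for cj, b in zip(c, basis))) ** 2
                   for x, y in zip(xs, ys))

    nests = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n_nests)]
    for it in range(iters):
        step = 0.3 * (0.995 ** it)
        for i in range(n_nests):
            # propose a new solution by a random step from nest i ...
            new = [c + step * rng.gauss(0, 1) for c in nests[i]]
            # ... and let it replace a randomly chosen nest if it fits better
            j = rng.randrange(n_nests)
            if sse(new) < sse(nests[j]):
                nests[j] = new
        # abandon a fraction pa of the worst nests, keeping the best ones
        nests.sort(key=sse)
        for i in range(int((1 - pa) * n_nests), n_nests):
            nests[i] = [rng.uniform(-2, 2) for _ in range(dim)]
    return min(nests, key=sse)

# Recover the coefficients of y = 1.0 - 0.5*x + 0.25*x^2 from noiseless samples
basis = [lambda x: 1.0, lambda x: x, lambda x: x * x]
xs = [i / 10 for i in range(-10, 11)]
ys = [1.0 - 0.5 * x + 0.25 * x * x for x in xs]
best = cuckoo_fit(basis, xs, ys)
```

The abandonment fraction pa and the flight-step distribution are essentially the only tuning knobs, which is the simplicity the abstract emphasises; the paper's global-support basis (and Lévy flights proper) slot into the same loop.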
The issue of statistical power for overall model fit in evaluating structural equation models
Directory of Open Access Journals (Sweden)
Richard HERMIDA
2015-06-01
Statistical power is an important concept for psychological research. However, examining the power of a structural equation model (SEM) is rare in practice. This article provides an accessible review of the concept of statistical power for the Root Mean Square Error of Approximation (RMSEA) index of overall model fit in structural equation modeling. By way of example, we examine the current state of power in the literature by reviewing studies in top Industrial-Organizational (I/O) Psychology journals using SEMs. Results indicate that in many studies, power is very low, which implies acceptance of invalid models. Additionally, we examined methodological situations which may have an influence on statistical power of SEMs. Results showed that power varies significantly as a function of model type and whether or not the model is the main model for the study. Finally, results indicated that power is significantly related to model fit statistics used in evaluating SEMs. The results from this quantitative review imply that researchers should be more vigilant with respect to power in structural equation modeling. We therefore conclude by offering methodological best practices to increase confidence in the interpretation of structural equation modeling results with respect to statistical power issues.
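Power for an RMSEA-based test is computed from noncentral chi-square distributions with noncentrality λ = (n−1)·df·ε² under the null and alternative RMSEA values (a MacCallum-style analysis). The sketch below approximates those distributions by Monte Carlo, since the noncentral chi-square CDF is not in the Python standard library; the sample size, df and RMSEA values are illustrative:

```python
import math
import random

def ncx2_samples(df, ncp, n, rng):
    """Monte Carlo draws from a noncentral chi-square: sum of df squared
    normals, with the noncentrality placed in the first coordinate."""
    shift = math.sqrt(ncp)
    out = []
    for _ in range(n):
        s = rng.gauss(shift, 1) ** 2
        for _ in range(df - 1):
            s += rng.gauss(0, 1) ** 2
        out.append(s)
    return out

def rmsea_power(n_obs, df, eps0, eps_a, alpha=0.05, sims=20000, seed=7):
    """Power of the RMSEA test of close fit: noncentrality is
    lambda = (n_obs - 1) * df * eps^2 under each hypothesis."""
    rng = random.Random(seed)
    ncp0 = (n_obs - 1) * df * eps0 ** 2
    ncpa = (n_obs - 1) * df * eps_a ** 2
    null = sorted(ncx2_samples(df, ncp0, sims, rng))
    crit = null[int((1 - alpha) * sims)]  # empirical critical value
    alt = ncx2_samples(df, ncpa, sims, rng)
    return sum(1 for v in alt if v > crit) / sims

# Close fit (eps = .05) vs mediocre fit (eps = .08), n = 200, df = 50
power = rmsea_power(n_obs=200, df=50, eps0=0.05, eps_a=0.08)
```

Running the same calculation across the ns and dfs of published SEM studies is exactly the kind of audit the article reports, and it makes plain how quickly power collapses for small samples and low-df models.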
Adams, Matthew P; Collier, Catherine J; Uthicke, Sven; Ow, Yan X; Langlois, Lucas; O'Brien, Katherine R
2017-01-04
When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (T opt ) for maximum photosynthetic rate (P max ). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O'Brien, Katherine R.
2017-01-01
When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.
Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin
2015-02-01
To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
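The Pareto frontier of calibration fits can be extracted by a direct nondominance check over the per-target goodness-of-fit vectors; the error vectors below are illustrative, not from either of the paper's models:

```python
def pareto_frontier(points):
    """Return the nondominated input sets: a point is kept if no other
    point is at least as good on every calibration target and strictly
    better on at least one (lower error = better fit here)."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b)) and
                any(x < y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Each tuple is (error on target 1, error on target 2) for one input set
fits = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
front = pareto_frontier(fits)
```

No weighting scheme is required: every weighted-sum optimum lies on this frontier, but the frontier also retains trade-off solutions (such as (1.0, 5.0) above) that any single set of weights would discard, which is the paper's core argument.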
Regional forecasting with global atmospheric models
International Nuclear Information System (INIS)
Crowley, T.J.; North, G.R.; Smith, N.R.
1994-05-01
This report was prepared by the Applied Research Corporation (ARC), College Station, Texas, under subcontract to Pacific Northwest Laboratory (PNL) as part of a global climate studies task. The task supports site characterization work required for the selection of a potential high-level nuclear waste repository and is part of the Performance Assessment Scientific Support (PASS) Program at PNL. The work is under the overall direction of the Office of Civilian Radioactive Waste Management (OCRWM), US Department of Energy Headquarters, Washington, DC. The scope of the report is to present the results of the third year's work on the atmospheric modeling part of the global climate studies task. The development testing of computer models and initial results are discussed. The appendices contain several studies that provide supporting information and guidance to the modeling work and further details on computer model development. Complete documentation of the models, including user information, will be prepared under separate reports and manuals
Tøndel, Kristin; Niederer, Steven A; Land, Sander; Smith, Nicolas P
2014-05-20
Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input-output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of on…
The Electroweak Fit of the Standard Model after the Discovery of a New Boson at the LHC
Baak, M.
2012-11-03
In view of the discovery of a new boson by the ATLAS and CMS Collaborations at the LHC, we present an update of the global Standard Model (SM) fit to electroweak precision data. Assuming the new particle to be the SM Higgs boson, all fundamental parameters of the SM are known allowing, for the first time, to overconstrain the SM at the electroweak scale and assert its validity. Including the effects of radiative corrections and the experimental and theoretical uncertainties, the global fit exhibits a p-value of 0.07. The mass measurements by ATLAS and CMS agree within 1.3sigma with the indirect determination M_H=(94 +25 -22) GeV. Within the SM the W boson mass and the effective weak mixing angle can be accurately predicted to be M_W=(80.359 +- 0.011) GeV and sin^2(theta_eff^ell)=(0.23150 +- 0.00010) from the global fit. These results are compatible with, and exceed in precision, the direct measurements. For the indirect determination of the top quark mass we find m_t=(175.8 +2.7 -2.4) GeV, in agreement with t...
The electroweak fit of the standard model after the discovery of a new boson at the LHC
International Nuclear Information System (INIS)
Baak, M.; Hoecker, A.; Schott, M.; Goebel, M.; Kennedy, D.; Moenig, K.; Haller, J.; Kogler, R.; Stelzer, J.
2012-09-01
In view of the discovery of a new boson by the ATLAS and CMS Collaborations at the LHC, we present an update of the global Standard Model (SM) fit to electroweak precision data. Assuming the new particle to be the SM Higgs boson, all fundamental parameters of the SM are known allowing, for the first time, to overconstrain the SM at the electroweak scale and assert its validity. Including the effects of radiative corrections and the experimental and theoretical uncertainties, the global fit exhibits a p-value of 0.07. The mass measurements by ATLAS and CMS agree within 1.3σ with the indirect determination M_H = 94 +25 -22 GeV. Within the SM the W boson mass and the effective weak mixing angle can be accurately predicted to be M_W = 80.359±0.011 GeV and sin²θ^l_eff = 0.23150±0.00010 from the global fit. These results are compatible with, and exceed in precision, the direct measurements. For the indirect determination of the top quark mass we find m_t = 175.8 +2.7 -2.4 GeV, in agreement with the kinematic and cross-section based measurements.
Global parameter estimation for thermodynamic models of transcriptional regulation.
Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N
2013-07-15
Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
Climate change adaptation: Where does global health fit in the agenda?
Directory of Open Access Journals (Sweden)
Bowen Kathryn J
2012-05-01
Human-induced climate change will affect the lives of most populations in the next decade and beyond. It will have greatest, and generally earliest, impact on the poorest and most disadvantaged populations on the planet. Changes in climatic conditions and increases in weather variability affect human wellbeing, safety, health and survival in many ways. Some impacts are direct-acting and immediate, such as impaired food yields and storm surges. Other health effects are less immediate and typically occur via more complex causal pathways that involve a range of underlying social conditions and sectors such as water and sanitation, agriculture and urban planning. Climate change adaptation is receiving much attention given the inevitability of climate change and its effects, particularly in developing contexts, where the effects of climate change will be experienced most strongly and the response mechanisms are weakest. Financial support towards adaptation activities from various actors including the World Bank, the European Union and the United Nations is increasing substantially. With this new global impetus and funding for adaptation action come challenges such as the importance of developing adaptation activities on a sound understanding of baseline community needs and vulnerabilities, and how these may alter with changes in climate. The global health community is paying heed to the strengthening focus on adaptation, albeit in a slow and unstructured manner. The aim of this paper is to provide an overview of adaptation and its relevance to global health, and highlight the opportunities to improve health and reduce health inequities via the new and additional funding that is available for climate change adaptation activities.
Climate change adaptation: where does global health fit in the agenda?
Bowen, Kathryn J; Friel, Sharon
2012-05-27
Human-induced climate change will affect the lives of most populations in the next decade and beyond. It will have greatest, and generally earliest, impact on the poorest and most disadvantaged populations on the planet. Changes in climatic conditions and increases in weather variability affect human wellbeing, safety, health and survival in many ways. Some impacts are direct-acting and immediate, such as impaired food yields and storm surges. Other health effects are less immediate and typically occur via more complex causal pathways that involve a range of underlying social conditions and sectors such as water and sanitation, agriculture and urban planning. Climate change adaptation is receiving much attention given the inevitability of climate change and its effects, particularly in developing contexts, where the effects of climate change will be experienced most strongly and the response mechanisms are weakest. Financial support towards adaptation activities from various actors including the World Bank, the European Union and the United Nations is increasing substantially. With this new global impetus and funding for adaptation action come challenges such as the importance of developing adaptation activities on a sound understanding of baseline community needs and vulnerabilities, and how these may alter with changes in climate. The global health community is paying heed to the strengthening focus on adaptation, albeit in a slow and unstructured manner. The aim of this paper is to provide an overview of adaptation and its relevance to global health, and highlight the opportunities to improve health and reduce health inequities via the new and additional funding that is available for climate change adaptation activities.
Spherical Process Models for Global Spatial Statistics
Jeong, Jaehong
2017-11-28
Statistical models used in geophysical, environmental, and climate science applications must reflect the curvature of the spatial domain in global data. Over the past few decades, statisticians have developed covariance models that capture the spatial and temporal behavior of these global data sets. Though the geodesic distance is the most natural metric for measuring distance on the surface of a sphere, mathematical limitations have compelled statisticians to use the chordal distance to compute the covariance matrix in many applications instead, which may cause physically unrealistic distortions. Therefore, covariance functions directly defined on a sphere using the geodesic distance are needed. We discuss the issues that arise when dealing with spherical data sets on a global scale and provide references to recent literature. We review the current approaches to building process models on spheres, including the differential operator, the stochastic partial differential equation, the kernel convolution, and the deformation approaches. We illustrate realizations obtained from Gaussian processes with different covariance structures and the use of isotropic and nonstationary covariance models through deformations and geographical indicators for global surface temperature data. To assess the suitability of each method, we compare their log-likelihood values and prediction scores, and we end with a discussion of related research problems.
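The geodesic-versus-chordal distinction discussed above can be made concrete: for antipodal points the chordal distance is a diameter (2R) while the geodesic is half the circumference (πR), the kind of distortion the abstract warns about. A sketch assuming a spherical Earth of radius 6371 km:

```python
import math

def geodesic(lat1, lon1, lat2, lon2, radius=6371.0):
    """Great-circle distance on a sphere (km), the natural metric for
    covariance models of global data."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    central = math.acos(math.sin(p1) * math.sin(p2) +
                        math.cos(p1) * math.cos(p2) * math.cos(dlon))
    return radius * central

def chordal(lat1, lon1, lat2, lon2, radius=6371.0):
    """Straight-line distance through the sphere, often substituted for
    the geodesic to keep covariance matrices positive definite."""
    theta = geodesic(lat1, lon1, lat2, lon2, radius) / radius
    return 2.0 * radius * math.sin(theta / 2.0)

# Antipodal points on the equator: chordal understates geodesic
g = geodesic(0, 0, 0, 180)  # half the circumference
c = chordal(0, 0, 0, 180)   # a diameter
```

Plugging the chordal distance into a covariance function valid on R³ always yields a positive definite matrix, but, as the comparison shows, it compresses large separations, which is why covariance functions defined directly with the geodesic metric are sought.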
On coupling global biome models with climate models
Claussen, M.
1994-01-01
The BIOME model of Prentice et al. (1992; J. Biogeogr. 19: 117-134), which predicts global vegetation patterns in equilibrium with climate, was coupled with the ECHAM climate model of the Max-Planck-Institut für Meteorologie, Hamburg, Germany. It was found that incorporating the BIOME model into ECHAM, regardless of the coupling frequency, does not enhance the simulated climate variability, expressed in terms of differences between global vegetation patterns. The strongest changes are seen only betw...
COLUMBUS. A global gas market model
Energy Technology Data Exchange (ETDEWEB)
Hecking, Harald; Panke, Timo
2012-03-15
A model of the global gas market is presented which, in its basic version, optimises the future development of production, transport and storage capacities as well as the actual gas flows around the world, assuming perfect competition. Besides the transport of natural gas via pipelines, the global market for liquefied natural gas (LNG) is also modelled using a hub-and-spoke approach. While the basic version of the model uses an inelastic demand and a piecewise-linear supply function, both can be changed easily, e.g. to a Golombek-style production function or a constant elasticity of substitution (CES) demand function. Because it uses mixed complementarity programming (MCP), the model additionally allows for the simulation of strategic behaviour of different players in the gas market, e.g. the gas producers.
Information Theoretic Tools for Parameter Fitting in Coarse Grained Models
Kalligiannaki, Evangelia; Harmandaris, Vagelis; Katsoulakis, Markos A.; Plechac, Petr
2015-01-01
We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics
GLOMO - Global Mobility Model: Beschreibung und Ergebnisse
Kühn, André; Novinsky, Patrick; Schade, Wolfgang
2014-01-01
The development of both emerging markets and the already established markets (USA, Japan, Europe) is highly relevant for the future success of the export-oriented German automotive industry. This paper describes the so-called Global Mobility Model (GLOMO), based on the system dynamics approach, which simulates the future development of car sales by segment and drive technology. The modularized model contains population, income and GDP development in order to describe the framework in the mo...
Validation of A Global Hydrological Model
Doell, P.; Lehner, B.; Kaspar, F.; Vassolo, S.
Freshwater availability has been recognized as a global issue, and its consistent quantification not only in individual river basins but also at the global scale is required to support the sustainable use of water. The Global Hydrology Model WGHM, which is a submodel of the global water use and availability model WaterGAP 2, computes surface runoff, groundwater recharge and river discharge at a spatial resolution of 0.5°. WGHM is based on the best global data sets currently available, including a newly developed drainage direction map and a data set of wetlands, lakes and reservoirs. It calculates both natural and actual discharge by simulating the reduction of river discharge by human water consumption (as computed by the water use submodel of WaterGAP 2). WGHM is calibrated against observed discharge at 724 gauging stations (representing about 50% of the global land area) by adjusting a parameter of the soil water balance. It not only computes the long-term average water resources but also water availability indicators that take into account the interannual and seasonal variability of runoff and discharge. The reliability of the model results is assessed by comparing observed and simulated discharges at the calibration stations and at selected other stations. We conclude that reliable results can be obtained for basins of more than 20,000 km2. In particular, the 90% reliable monthly discharge is simulated well. However, there is a tendency for semi-arid and arid basins to be modeled less satisfactorily than humid ones, which is partially due to neglecting river channel losses and evaporation of runoff from small ephemeral ponds in the model. Also, the hydrology of highly developed basins with large artificial storages, basin transfers and irrigation schemes cannot be simulated well. The seasonality of discharge in snow-dominated basins is overestimated by WGHM, and if a snow-dominated basin is uncalibrated, discharge is likely to be underestimated.
DEFF Research Database (Denmark)
Giardino, P. P.; Kannike, K.; Masina, I.
2014-01-01
We perform a state-of-the-art global fit to all Higgs data. We synthesise them into a 'universal' form, which allows any desired model to be easily tested. We apply the proposed methodology to extract from data the Higgs branching ratios, production cross sections and couplings, and to analyse composite Higgs models, models with extra Higgs doublets, supersymmetry, extra particles in the loops, anomalous top couplings, and invisible Higgs decays into Dark Matter. Best fit regions lie around the Standard Model predictions and are well approximated by our 'universal' fit. Latest data exclude the dilaton as an alternative to the Higgs, and disfavour fits with negative Yukawa couplings. We derive for the first time the SM Higgs boson mass from the measured rates, rather than from the peak positions, obtaining $M_h = 124.4 \pm 1.6$ GeV.
The lz(p)* Person-Fit Statistic in an Unfolding Model Context
Tendeiro, Jorge N.
2017-01-01
Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded
A model for global cycling of tritium
International Nuclear Information System (INIS)
Killough, G.G.; Kocher, D.C.
1988-01-01
Dynamic compartment models are widely used to describe the global cycling of radionuclides for purposes of dose estimation. In this paper the authors present a new global tritium model that reproduces environmental time-series data on concentrations in precipitation, ocean surface waters, and surface fresh waters in the northern hemisphere, concentrations of atmospheric tritium in the southern hemisphere, and the latitude dependence of tritium in both hemispheres. Named TRICYCLE (for TRItium CYCLE), the model is based on the global hydrologic cycle and includes hemispheric stratospheric compartments, disaggregation of the troposphere and ocean surface waters into eight latitude zones, consideration of the different concentrations of atmospheric tritium over land and over the ocean, and a diffusive model for transport in the ocean. TRICYCLE reproduces the environmental data if it is assumed that about 50% of the tritium from atmospheric weapons testing was injected directly into the northern stratosphere as HTO. The model's latitudinal disaggregation permits taking into account the distribution of population. For a uniformly distributed release of HTO into the worldwide troposphere, TRICYCLE predicts a collective dose commitment to the world population that exceeds the NCRP model's corresponding prediction by a factor of three.
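A minimal numerical sketch of such a dynamic compartment model: a hypothetical three-box chain (stratosphere to troposphere to ocean surface) with first-order transfers and radioactive decay. The structure and the rate constants are illustrative assumptions, not the actual TRICYCLE compartments or parameters.

```python
import numpy as np

HALF_LIFE_YR = 12.32                 # tritium half-life in years
LAMBDA = np.log(2) / HALF_LIFE_YR    # decay constant (1/yr)

# Illustrative first-order transfer rates (1/yr); values are assumptions.
K_ST = 1.0    # stratosphere -> troposphere
K_TO = 2.0    # troposphere -> ocean surface

def step(y, dt):
    """One forward-Euler step of the linear compartment ODEs."""
    s, t, o = y
    ds = -(K_ST + LAMBDA) * s
    dt_ = K_ST * s - (K_TO + LAMBDA) * t
    do = K_TO * t - LAMBDA * o
    return (s + ds * dt, t + dt_ * dt, o + do * dt)

def simulate(y0, years, dt=0.01):
    """Integrate the three-box model forward in time."""
    y = y0
    for _ in range(int(round(years / dt))):
        y = step(y, dt)
    return y
```

Because the inter-box transfers conserve inventory, the total activity in the three boxes decays at exactly the radioactive decay rate, which provides a simple consistency check on the integration.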
Supersymmetric Fits after the Higgs Discovery and Implications for Model Building
Ellis, John
2014-01-01
The data from the first run of the LHC at 7 and 8 TeV, together with the information provided by other experiments such as precision electroweak measurements, flavour measurements, the cosmological density of cold dark matter and the direct search for the scattering of dark matter particles in the LUX experiment, provide important constraints on supersymmetric models. Important information is provided by the ATLAS and CMS measurements of the mass of the Higgs boson, as well as the negative results of searches at the LHC for events with missing transverse energy accompanied by jets, and the LHCb and CMS measurements of BR($B_s \to \mu^+ \mu^-$). Results are presented from frequentist analyses of the parameter spaces of the CMSSM and NUHM1. The global $\chi^2$ functions for the supersymmetric models vary slowly over most of the parameter spaces allowed by the Higgs mass and the missing transverse energy search, with best-fit values that are comparable to the $\chi^2$ for the Standard Model. The $95\%$ CL lower...
Global nuclear material flow/control model
International Nuclear Information System (INIS)
Dreicer, J.S.; Rutherford, D.S.; Fasel, P.K.; Riese, J.M.
1997-01-01
This is the final report of a two-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The nuclear danger can be reduced by a system for global management, protection, control, and accounting as part of an international regime for nuclear materials. The development of an international fissile material management and control regime requires conceptual research supported by an analytical and modeling tool which treats the nuclear fuel cycle as a complete system. The prototype model developed visually represents the fundamental data, information, and capabilities related to the nuclear fuel cycle in a framework supportive of national or an international perspective. This includes an assessment of the global distribution of military and civilian fissile material inventories, a representation of the proliferation pertinent physical processes, facility specific geographic identification, and the capability to estimate resource requirements for the management and control of nuclear material. The model establishes the foundation for evaluating the global production, disposition, and safeguards and security requirements for fissile nuclear material and supports the development of other pertinent algorithmic capabilities necessary to undertake further global nuclear material related studies
Directory of Open Access Journals (Sweden)
G. E. Bodeker
2013-02-01
High vertical resolution ozone measurements from eight different satellite-based instruments have been merged with data from the global ozonesonde network to calculate monthly mean ozone values in 5° latitude zones. These "Tier 0" ozone number densities and ozone mixing ratios are provided on 70 altitude levels (1 to 70 km) and on 70 pressure levels spaced ~1 km apart (878.4 hPa to 0.046 hPa). The Tier 0 data are sparse and do not cover the entire globe or altitude range. To provide a gap-free database, a least squares regression model is fitted to the Tier 0 data and then evaluated globally. The regression model fit coefficients are expanded in Legendre polynomials to account for latitudinal structure, and in Fourier series to account for seasonality. Regression model fit coefficient patterns, which are two-dimensional fields indexed by latitude and month of the year, from the N-th vertical level serve as an initial guess for the fit at the N+1-th vertical level. The initial guess field for the first fit level (20 km/58.2 hPa) was derived by applying the regression model to total column ozone fields. Perturbations away from the initial guess are captured through the Legendre and Fourier expansions. By applying a single fit at each level, and using the approach of allowing the regression fits to change only slightly from one level to the next, the regression is less sensitive to measurement anomalies at individual stations or to individual satellite-based instruments. Particular attention is paid to ensuring that the low ozone abundances in the polar regions are captured. By summing different combinations of contributions from different regression model basis functions, four different "Tier 1" databases have been compiled for different intended uses. This database is suitable for assessing ozone fields from chemistry-climate model simulations or for providing the ozone boundary conditions for global climate model simulations that do not
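A Legendre-in-latitude, Fourier-in-month regression of the kind described above can be illustrated with a least squares fit on synthetic data. The basis sizes and the synthetic "ozone" field below are assumptions for illustration, not the database's actual configuration.

```python
import numpy as np
from numpy.polynomial import legendre

def design_matrix(lat_deg, month, n_leg=3, n_fourier=2):
    """Columns are products of Legendre polynomials in sin(latitude)
    and Fourier harmonics in month-of-year (illustrative basis)."""
    x = np.sin(np.radians(lat_deg))           # map latitude into [-1, 1]
    cols = []
    for k in range(n_leg):
        Pk = legendre.legval(x, [0] * k + [1])  # P_k(x)
        cols.append(Pk)                          # annual-mean term
        for m in range(1, n_fourier + 1):
            ang = 2 * np.pi * m * month / 12.0
            cols.append(Pk * np.cos(ang))        # seasonal harmonics
            cols.append(Pk * np.sin(ang))
    return np.column_stack(cols)

# Synthetic "ozone" field with a latitudinal and a seasonal component.
rng = np.random.default_rng(0)
lat = rng.uniform(-90, 90, 500)
month = rng.integers(1, 13, 500).astype(float)
y = 300 + 40 * np.sin(np.radians(lat))**2 + 10 * np.cos(2 * np.pi * month / 12)
y += rng.normal(0, 1, 500)                       # measurement noise

X = design_matrix(lat, month)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
```

Since sin²(latitude) is a combination of P₀ and P₂ in x = sin(latitude), the basis can represent the synthetic signal exactly and the residuals reduce to the injected noise.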
DEFF Research Database (Denmark)
Øjelund, Henrik; Sadegh, Payman
2000-01-01
Local function approximations concern fitting low order models to weighted data in neighbourhoods of the points where the approximations are desired. Despite their generality and convenience of use, local models typically suffer, among others, from difficulties arising in physical interpretation ... be obtained. This paper presents a new approach for system modelling under partial (global) information (or the so-called Gray-box modelling) that seeks to preserve the benefits of the global as well as local methodologies within a unified framework. While the proposed technique relies on local approximations ... simultaneously with the (local estimates of) function values. The approach is applied to modelling of a linear time variant dynamic system under prior linear time invariant structure, where local regression fails as a result of high dimensionality.
Spatial modeling of agricultural land use change at global scale
Meiyappan, P.; Dalton, M.; O'Neill, B. C.; Jain, A. K.
2014-11-01
Long-term modeling of agricultural land use is central in global scale assessments of climate change, food security, biodiversity, and climate adaptation and mitigation policies. We present a global-scale dynamic land use allocation model and show that it can reproduce the broad spatial features of the past 100 years of evolution of cropland and pastureland patterns. The modeling approach integrates economic theory, observed land use history, and data on both socioeconomic and biophysical determinants of land use change, and estimates relationships using long-term historical data, thereby making it suitable for long-term projections. The underlying economic motivation is maximization of expected profits by hypothesized landowners within each grid cell. The model predicts fractional land use for cropland and pastureland within each grid cell based on socioeconomic and biophysical driving factors that change with time. The model explicitly incorporates the following key features: (1) land use competition, (2) spatial heterogeneity in the nature of driving factors across geographic regions, (3) spatial heterogeneity in the relative importance of driving factors and previous land use patterns in determining land use allocation, and (4) spatial and temporal autocorrelation in land use patterns. We show that land use allocation approaches based solely on previous land use history (but disregarding the impact of driving factors), or those accounting for both land use history and driving factors by mechanistically fitting models for the spatial processes of land use change do not reproduce well long-term historical land use patterns. With an example application to the terrestrial carbon cycle, we show that such inaccuracies in land use allocation can translate into significant implications for global environmental assessments. The modeling approach and its evaluation provide an example that can be useful to the land use, Integrated Assessment, and the Earth system modeling
Global Optimization Ensemble Model for Classification Methods
Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab
2014-01-01
Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of classifier while solving a supervised learning problem, like bias-variance tradeoff, dimensionality of input space, and noise in the input data space. All these problems affect the accuracy of classifier and are the reason that there is no global optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity. PMID:24883382
Fitting and Testing Conditional Multinormal Partial Credit Models
Hessen, David J.
2012-01-01
A multinormal partial credit model for factor analysis of polytomously scored items with ordered response categories is derived using an extension of the Dutch Identity (Holland in "Psychometrika" 55:5-18, 1990). In the model, latent variables are assumed to have a multivariate normal distribution conditional on unweighted sums of item…
Global Analysis, Interpretation and Modelling: An Earth Systems Modelling Program
Moore, Berrien, III; Sahagian, Dork
1997-01-01
The goal of GAIM is to advance the study of the coupled dynamics of the Earth system using as tools both data and models; to develop a strategy for the rapid development, evaluation, and application of comprehensive prognostic models of the Global Biogeochemical Subsystem which could eventually be linked with models of the Physical-Climate Subsystem; to propose, promote, and facilitate experiments with existing models or by linking subcomponent models, especially those associated with IGBP Core Projects and with WCRP efforts. Such experiments would be focused upon resolving interface issues and questions associated with developing an understanding of the prognostic behavior of key processes; to clarify key scientific issues facing the development of Global Biogeochemical Models and the coupling of these models to General Circulation Models; to assist the Intergovernmental Panel on Climate Change (IPCC) process by conducting timely studies that focus upon elucidating important unresolved scientific issues associated with the changing biogeochemical cycles of the planet and upon the role of the biosphere in the physical-climate subsystem, particularly its role in the global hydrological cycle; and to advise the SC-IGBP on progress in developing comprehensive Global Biogeochemical Models and to maintain scientific liaison with the WCRP Steering Group on Global Climate Modelling.
Clark, D Angus; Bowles, Ryan P
2018-04-23
In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation methods to investigate the ability of popular model fit statistics (chi-square, root mean square error of approximation, the comparative fit index, and the Tucker-Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics, and was conditional on many features of the underlying model. Together, results suggest that conventional fit thresholds offer questionable utility in the context of IFA.
Assessment of health surveys: fitting a multidimensional graded response model.
Depaoli, Sarah; Tiemensma, Jitske; Felt, John M
The multidimensional graded response model, an item response theory (IRT) model, can be used to improve the assessment of surveys, even when sample sizes are restricted. Typically, health-based survey development utilizes classical statistical techniques (e.g. reliability and factor analysis). In a review of four prominent journals within the field of Health Psychology, we found that IRT-based models were used in less than 10% of the studies examining scale development or assessment. However, implementing IRT-based methods can provide more details about individual survey items, which is useful when determining the final item content of surveys. An example using a quality of life survey for Cushing's syndrome (CushingQoL) highlights the main components for implementing the multidimensional graded response model. Patients with Cushing's syndrome (n = 397) completed the CushingQoL. Results from the multidimensional graded response model supported a 2-subscale scoring process for the survey. All items were deemed as worthy contributors to the survey. The graded response model can accommodate unidimensional or multidimensional scales, be used with relatively lower sample sizes, and is implemented in free software (example code provided in online Appendix). Use of this model can help to improve the quality of health-based scales being developed within the Health Sciences.
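The graded response model's category probabilities can be sketched directly: each threshold defines a 2PL boundary curve, and category probabilities are differences of adjacent boundary curves. The item parameters below are hypothetical; a real analysis would use dedicated IRT software.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category response probabilities for one graded-response item.
    a: discrimination; b: ordered thresholds b_1 < ... < b_{K-1}.
    Returns the probabilities of the K ordered categories at ability theta."""
    b = np.asarray(b, dtype=float)
    # P(X >= k): a 2PL boundary curve for each threshold.
    p_ge = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    # Prepend P(X >= lowest) = 1 and append P(X >= beyond highest) = 0,
    # then take differences of adjacent boundaries.
    cum = np.concatenate(([1.0], p_ge, [0.0]))
    return cum[:-1] - cum[1:]
```

Because the thresholds are ordered, the boundary probabilities are monotonically decreasing, so every category probability is non-negative and the set sums to one.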
A No-Scale Inflationary Model to Fit Them All
Ellis, John; Nanopoulos, Dimitri; Olive, Keith
2014-01-01
The magnitude of B-mode polarization in the cosmic microwave background as measured by BICEP2 favours models of chaotic inflation with a quadratic $m^2 \\phi^2/2$ potential, whereas data from the Planck satellite favour a small value of the tensor-to-scalar perturbation ratio $r$ that is highly consistent with the Starobinsky $R + R^2$ model. Reality may lie somewhere between these two scenarios. In this paper we propose a minimal two-field no-scale supergravity model that interpolates between quadratic and Starobinsky-like inflation as limiting cases, while retaining the successful prediction $n_s \\simeq 0.96$.
Global modelling of Cryptosporidium in surface water
Vermeulen, Lucie; Hofstra, Nynke
2016-04-01
Introduction Waterborne pathogens that cause diarrhoea, such as Cryptosporidium, pose a health risk all over the world. In many regions quantitative information on pathogens in surface water is unavailable. Our main objective is to model Cryptosporidium concentrations in surface waters worldwide. We present the GloWPa-Crypto model and use the model in a scenario analysis. A first exploration of global Cryptosporidium emissions to surface waters has been published by Hofstra et al. (2013). Further work has focused on modelling emissions of Cryptosporidium and Rotavirus to surface waters from human sources (Vermeulen et al 2015, Kiulia et al 2015). A global waterborne pathogen model can provide valuable insights by (1) providing quantitative information on pathogen levels in data-sparse regions, (2) identifying pathogen hotspots, (3) enabling future projections under global change scenarios and (4) supporting decision making. Material and Methods GloWPa-Crypto runs on a monthly time step and represents conditions for approximately the year 2010. The spatial resolution is a 0.5 x 0.5 degree latitude x longitude grid for the world. We use livestock maps (http://livestock.geo-wiki.org/) combined with literature estimates to calculate spatially explicit livestock Cryptosporidium emissions. For human Cryptosporidium emissions, we use UN population estimates, the WHO/UNICEF JMP sanitation country data and literature estimates of wastewater treatment. We combine our emissions model with a river routing model and data from the VIC hydrological model (http://vic.readthedocs.org/en/master/) to calculate concentrations in surface water. Cryptosporidium survival during transport depends on UV radiation and water temperature. We explore pathogen emissions and concentrations in 2050 with the new Shared Socio-economic Pathways (SSPs) 1 and 3. These scenarios describe plausible future trends in demographics, economic development and the degree of global integration. Results and
Global energy modeling - A biophysical approach
Energy Technology Data Exchange (ETDEWEB)
Dale, Michael
2010-09-15
This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.
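One way to sketch a dynamic EROI function with technological learning is a Wright's-law cost curve, where the energy cost per unit of capacity falls by a fixed fraction with each doubling of cumulative capacity. The functional form and the parameter values below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def eroi_learning(cum_capacity, eroi0, learning_rate):
    """EROI as cumulative capacity grows, under a Wright's-law sketch:
    energy cost per unit falls by `learning_rate` per doubling of
    cumulative capacity (normalized so cum_capacity = 1 gives eroi0)."""
    b = -np.log2(1.0 - learning_rate)        # learning exponent
    energy_cost = (1.0 / eroi0) * cum_capacity ** (-b)
    return 1.0 / energy_cost
```

With a 20% learning rate, each doubling of deployed capacity raises EROI by a factor 1/0.8 = 1.25 in this sketch; a declining-resource-quality term for fossil sources would act in the opposite direction.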
A Fit-For-Purpose approach to Land Administration in Africa in support of the new 2030 Global Agenda
DEFF Research Database (Denmark)
Enemark, Stig
2017-01-01
... on legacy approaches, have been fragmented and have not delivered the required pervasive changes and improvements at scale. The solutions have not helped the most needy - the poor and disadvantaged that have no security of tenure. In fact, the beneficiaries have often been the elite and organizations ... involved in land grabbing. It is time to rethink the approaches. New solutions are required that can deliver security of tenure for all, are affordable, and can be quickly developed and incrementally improved over time. The Fit-For-Purpose (FFP) approach to land administration has emerged to meet ... administration systems is the only viable solution to solving the global security of tenure divide. The FFP approach is flexible and includes the adaptability to meet the actual and basic needs of society today, and it has the capability to be incrementally improved over time. This will be triggered in response ...
Statistical models of global Langmuir mixing
Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean
2017-05-01
The effects of Langmuir mixing on the surface ocean mixing may be parameterized by applying an enhancement factor which depends on wave, wind, and ocean state to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but with significant computational and code development expenses. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on the empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.
Divon, Hege Hvattum; Ziv, Carmit; Davydov, Olga; Yarden, Oded; Fluhr, Robert
2006-11-01
Fusarium oxysporum is a soil-borne pathogen that infects plants through the roots and uses the vascular system for host ingress. Specialized for this route of infection, F. oxysporum is able to adapt to the scarce nutrient environment in the xylem vessels. Here we report the cloning of the F. oxysporum global nitrogen regulator, Fnr1, and show that it is one of the determinants of fungal fitness during in planta growth. The Fnr1 gene has a single conserved GATA-type zinc finger domain and is 96% and 48% identical to AREA-GF from Gibberella fujikuroi and NIT2 from Neurospora crassa, respectively. Fnr1 cDNA, expressed under a constitutive promoter, was able to functionally complement an N. crassa nit-2(RIP) mutant, restoring the ability of the mutant to utilize nitrate. Fnr1 disruption mutants showed high tolerance to chlorate and a reduced ability to utilize several secondary nitrogen sources such as amino acids, hypoxanthine and uric acid, whereas growth on favourable nitrogen sources was not affected. Fnr1 disruption also abolished the in vitro expression of nutrition genes normally induced during the early phase of infection. In an infection assay on tomato seedlings, the infection rate of disruption mutants was significantly delayed in comparison with the parental strain. Our results indicate that FNR1 mediates adaptation to nitrogen-poor conditions in planta through the regulation of secondary nitrogen acquisition, and as such acts as a determinant of fungal fitness during infection.
Modeling of the Global Water Cycle - Analytical Models
Yongqiang Liu; Roni Avissar
2005-01-01
Both numerical and analytical models of coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions often difficult to interpret, analytical...
SPSS macros to compare any two fitted values from a regression model.
Weaver, Bruce; Dubois, Sacha
2012-12-01
In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted-value comparisons that might be of interest to researchers. For many fitted-value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests, particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
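The matrix-algebra idea behind these macros can be sketched in a few lines: the difference between two fitted values is d'b with d = x1 − x2, and its variance is d' Cov(b) d. This is a hedged illustration with synthetic data, not the macros' actual implementation; all variable names are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x, x**2])    # a model with a polynomial term
y = 1.0 + 0.5 * x - 0.02 * x**2 + rng.normal(0, 0.3, n)

b, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS coefficients
resid = y - X @ b
s2 = resid @ resid / (n - X.shape[1])         # residual variance estimate
cov_b = s2 * np.linalg.inv(X.T @ X)           # Cov(b) under OLS assumptions

x1 = np.array([1.0, 2.0, 4.0])                # design row for the fit at x = 2
x2 = np.array([1.0, 5.0, 25.0])               # design row for the fit at x = 5
d = x1 - x2
diff = d @ b                                  # difference of the two fitted values
se = np.sqrt(d @ cov_b @ d)                   # its standard error
ci = (diff - 1.96 * se, diff + 1.96 * se)     # approximate 95% confidence interval
```

Note that neither fitted-value comparison here corresponds to a single regression coefficient, which is exactly the situation the macros address.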
Dirac-global fits to calcium elastic scattering data in the range 21-200 MeV
International Nuclear Information System (INIS)
Cooper, E.D.
1988-01-01
We present a global relativistic optical model for p + ⁴⁰Ca consisting of Lorentz scalar and vector potentials parametrized as a function of energy. The shapes chosen are Woods-Saxons for the real potentials, and a linear combination of Woods-Saxons and derivative Woods-Saxons for the imaginary potentials. (orig.)
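For readers unfamiliar with the shapes named above, a quick sketch of the Woods-Saxon and derivative Woods-Saxon forms follows. The depth, radius and diffuseness values are illustrative placeholders, not the fitted parameters of the paper.

```python
import numpy as np

def woods_saxon(r, V0=-50.0, R=4.0, a=0.65):
    """Volume Woods-Saxon form V0 / (1 + exp((r - R)/a)): flat inside, falls off over ~a fm."""
    return V0 / (1.0 + np.exp((r - R) / a))

def surface_ws(r, W0=-10.0, R=4.0, a=0.65):
    """Derivative (surface) Woods-Saxon, normalized so its peak value at r = R equals W0."""
    e = np.exp((r - R) / a)
    return 4.0 * W0 * e / (1.0 + e) ** 2

# The volume form dominates in the interior; the surface form peaks at the nuclear edge.
interior = woods_saxon(0.0)   # close to V0
edge = surface_ws(4.0)        # equals W0 by construction
```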
Global Environmental Change: An integrated modelling approach
International Nuclear Information System (INIS)
Den Elzen, M.
1993-01-01
Two major global environmental problems are dealt with: climate change and stratospheric ozone depletion (and their mutual interactions), briefly surveyed in part 1. In Part 2 a brief description of the integrated modelling framework IMAGE 1.6 is given. Some specific parts of the model are described in more detail in other Chapters, e.g. the carbon cycle model, the atmospheric chemistry model, the halocarbon model, and the UV-B impact model. In Part 3 an uncertainty analysis of climate change and stratospheric ozone depletion is presented (Chapter 4). Chapter 5 briefly reviews the social and economic uncertainties implied by future greenhouse gas emissions. Chapters 6 and 7 describe a model and sensitivity analysis pertaining to the scientific uncertainties and/or lacunae in the sources and sinks of methane and carbon dioxide, and their biogeochemical feedback processes. Chapter 8 presents an uncertainty and sensitivity analysis of the carbon cycle model, the halocarbon model, and the IMAGE model 1.6 as a whole. Part 4 presents the risk assessment methodology as applied to the problems of climate change and stratospheric ozone depletion more specifically. In Chapter 10, this methodology is used as a means with which to assess current ozone policy and a wide range of halocarbon policies. Chapter 11 presents and evaluates the simulated globally-averaged temperature and sea level rise (indicators) for the IPCC-1990 and 1992 scenarios, concluding with a Low Risk scenario, which would meet the climate targets. Chapter 12 discusses the impact of sea level rise on the frequency of the Dutch coastal defence system (indicator) for the IPCC-1990 scenarios. Chapter 13 presents projections of mortality rates due to stratospheric ozone depletion based on model simulations employing the UV-B chain model for a number of halocarbon policies. Chapter 14 presents an approach for allocating future emissions of CO₂ among regions. (Abstract Truncated)
Information Theoretic Tools for Parameter Fitting in Coarse Grained Models
Kalligiannaki, Evangelia
2015-01-07
We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force matching methods to non equilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.
Design of spatial experiments: Model fitting and prediction
Energy Technology Data Exchange (ETDEWEB)
Fedorov, V.V.
1996-03-01
The main objective of the paper is to describe and develop model oriented methods and algorithms for the design of spatial experiments. Unlike many other publications in this area, the approach proposed here is essentially based on the ideas of convex design theory.
Mapping the global depth to bedrock for land surface modelling
Shangguan, W.; Hengl, T.; Yuan, H.; Dai, Y. J.; Zhang, S.
2017-12-01
Depth to bedrock serves as the lower boundary of land surface models, which controls hydrologic and biogeochemical processes. This paper presents a framework for global estimation of depth to bedrock (DTB). Observations were extracted from a global compilation of soil profile data (ca. 130,000 locations) and borehole data (ca. 1.6 million locations). Additional pseudo-observations generated by expert knowledge were added to fill in large sampling gaps. The model training points were then overlaid on a stack of 155 covariates including DEM-based hydrological and morphological derivatives, lithologic units, MODIS surface reflectance bands and vegetation indices derived from the MODIS land products. Global spatial prediction models were developed using random forest and gradient boosting tree algorithms. The final predictions were generated at a spatial resolution of 250 m as an ensemble prediction of the two independently fitted models. The 10-fold cross-validation shows that the models explain 59% of the variation for absolute DTB and 34% for censored DTB (depths deeper than 200 cm are predicted as 200 cm). The model for the occurrence of the R horizon (bedrock) within 200 cm performs well. Visual comparisons of predictions in the study areas where more detailed maps of depth to bedrock exist show that there is a general match with spatial patterns from similar local studies. Limitations of the data set and extrapolation into data-sparse areas should not be ignored in applications. To improve the accuracy of spatial prediction, more borehole drilling logs will need to be added to supplement the existing training points in under-represented areas.
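The two-model ensemble described above can be sketched with scikit-learn: fit a random forest and a gradient-boosted tree model independently, average their predictions, and cap the censored variant at 200 cm. This is a minimal illustration on synthetic stand-in covariates, not a reproduction of the paper's 155-covariate models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 5))                  # stand-in covariate stack
depth = 200 * X[:, 0] + 30 * X[:, 1] + rng.normal(0, 5, 300)  # pseudo-DTB in cm

# Fit the two tree-based learners independently, as in the paper's workflow.
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, depth)
gbt = GradientBoostingRegressor(random_state=0).fit(X, depth)

X_new = rng.uniform(size=(10, 5))
pred = 0.5 * (rf.predict(X_new) + gbt.predict(X_new))  # simple ensemble average
censored = np.minimum(pred, 200.0)              # censored DTB: depths beyond 200 cm -> 200 cm
```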
Goodness-of-fit tests in mixed models
Claeskens, Gerda
2009-05-12
Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. © 2009 Sociedad de Estadística e Investigación Operativa.
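A toy version of the hypothesis being tested: estimate group-level random effects (here crudely, as group mean deviations) and test them for normality. The paper's tests are nonparametric and far more general; this only illustrates the null hypothesis, with synthetic data and a standard Shapiro-Wilk test standing in.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_groups, n_per = 40, 10
u = rng.normal(0, 1.0, n_groups)                 # true random effects (normal here)
y = u[:, None] + rng.normal(0, 0.5, (n_groups, n_per))  # responses per group

u_hat = y.mean(axis=1) - y.mean()                # crude random-effect estimates
stat, p = stats.shapiro(u_hat)                   # normality test on the estimates
```

A large p-value is consistent with normally distributed random effects; a rejection would motivate the nonparametric alternative estimation the paper describes.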
Directory of Open Access Journals (Sweden)
Akemi Gálvez
2014-01-01
for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs well, solving our minimization problem in a remarkably straightforward way.
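A bare-bones cuckoo search can be sketched as follows, fitting a small polynomial by least squares. Only the two parameters the abstract highlights are exposed: the number of nests n and the discovery probability pa. The Lévy-step scale (0.01), iteration count, and polynomial basis are illustrative choices, not the paper's global-support basis.

```python
import math
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 50)
data = 2.0 - 1.0 * t + 0.5 * t**2            # synthetic curve to fit

def sse(c):
    """Sum of squared residuals for candidate coefficients c."""
    return np.sum((c[0] + c[1] * t + c[2] * t**2 - data) ** 2)

def levy(size, beta=1.5):
    """Levy-distributed step lengths via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

n, pa = 15, 0.25                             # the two cuckoo-search parameters
nests = rng.uniform(-3, 3, (n, 3))
fit = np.array([sse(x) for x in nests])
best, best_fit = nests[fit.argmin()].copy(), fit.min()
for _ in range(300):
    # Generate new candidates by Levy flights biased toward the current best.
    new = nests + 0.01 * levy((n, 3)) * (nests - best)
    new_fit = np.array([sse(x) for x in new])
    better = new_fit < fit
    nests[better], fit[better] = new[better], new_fit[better]
    # Abandon a fraction pa of nests and replace them with random ones.
    abandon = rng.uniform(size=n) < pa
    nests[abandon] = rng.uniform(-3, 3, (int(abandon.sum()), 3))
    fit[abandon] = np.array([sse(x) for x in nests[abandon]])
    if fit.min() < best_fit:
        best, best_fit = nests[fit.argmin()].copy(), fit.min()
```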
Fitting the Phenomenological MSSM
AbdusSalam, S S; Quevedo, F; Feroz, F; Hobson, M
2010-01-01
We perform a global Bayesian fit of the phenomenological minimal supersymmetric standard model (pMSSM) to current indirect collider and dark matter data. The pMSSM contains the most relevant 25 weak-scale MSSM parameters, which are simultaneously fit using 'nested sampling' Monte Carlo techniques in more than 15 years of CPU time. We calculate the Bayesian evidence for the pMSSM and constrain its parameters and observables in the context of two widely different, but reasonable, priors to determine which inferences are robust. We make inferences about sparticle masses, the sign of the $\mu$ parameter, the amount of fine tuning, dark matter properties and the prospects for direct dark matter detection without assuming a restrictive high-scale supersymmetry breaking model. We find the inferred lightest CP-even Higgs boson mass as an example of an approximately prior-independent observable. This analysis constitutes the first statistically convergent pMSSM global fit to all current data.
Development of an Integrated Global Energy Model
International Nuclear Information System (INIS)
Krakowski, R.A.
1999-01-01
The primary objective of this research was to develop a forefront analysis tool for application to enhance understanding of long-term, global, nuclear-energy and nuclear-material futures. To this end, an existing economics-energy-environmental (E³) model was adopted, modified, and elaborated to examine this problem in a multi-regional (13), long-term (to approximately 2100) context. The E³ model so developed was applied to create a Los Alamos presence in this E³ area through "niche analyses" that provide input to the formulation of policies dealing with and shaping of nuclear-energy and nuclear-materials futures. Results from analyses using the E³ model have been presented at a variety of national and international conferences and workshops. Through use of the E³ model, Los Alamos was afforded the opportunity to participate in a multi-national E³ study team that is examining a range of global, long-term nuclear issues under the auspices of the IAEA during the 1998-99 period. Finally, the E³ model developed under this LDRD project is being used as an important component in the more recent Nuclear Material Management Systems (NMMS) project.
Drought Persistence Errors in Global Climate Models
Moon, H.; Gudmundsson, L.; Seneviratne, S. I.
2018-04-01
The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM simulations to observation-based data sets. To do so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates of drought persistence, where a dry status is defined as a negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at the global scale. Subsequently, we analyzed the degree to which (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors, using an analysis of variance approach. The results show that at the monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At the annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
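The persistence estimator described above is simple to state in code: a time step is "dry" when its precipitation anomaly is negative, and persistence is the probability that a dry step follows a dry step. A minimal sketch on synthetic data:

```python
import numpy as np

def dry_to_dry_probability(precip):
    """Estimate drought persistence as P(dry at t+1 | dry at t)."""
    anom = precip - precip.mean()        # anomaly w.r.t. the record mean
    dry = anom < 0                       # dry status per time step
    from_dry = dry[:-1]
    if from_dry.sum() == 0:
        return np.nan                    # no dry steps to transition from
    return (dry[1:] & from_dry).sum() / from_dry.sum()

rng = np.random.default_rng(4)
p = dry_to_dry_probability(rng.gamma(2.0, 50.0, 600))  # white-noise monthly series
```

For a memoryless series this estimate sits near the unconditional dry fraction; values above it indicate persistence, which is what the study compares between GCMs and observations.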
Fitting measurement models to vocational interest data: are dominance models ideal?
Tay, Louis; Drasgow, Fritz; Rounds, James; Williams, Bruce A
2009-09-01
In this study, the authors examined the item response process underlying 3 vocational interest inventories: the Occupational Preference Inventory (C.-P. Deng, P. I. Armstrong, & J. Rounds, 2007), the Interest Profiler (J. Rounds, T. Smith, L. Hubert, P. Lewis, & D. Rivkin, 1999; J. Rounds, C. M. Walker, et al., 1999), and the Interest Finder (J. E. Wall & H. E. Baker, 1997; J. E. Wall, L. L. Wise, & H. E. Baker, 1996). Item response theory (IRT) dominance models, such as the 2-parameter and 3-parameter logistic models, assume that item response functions (IRFs) are monotonically increasing as the latent trait increases. In contrast, IRT ideal point models, such as the generalized graded unfolding model, have IRFs that peak where the latent trait matches the item. Ideal point models are expected to fit better because vocational interest inventories ask about typical behavior, as opposed to requiring maximal performance. Results show that across all 3 interest inventories, the ideal point model provided better descriptions of the response process. The importance of specifying the correct item response model for precise measurement is discussed. In particular, scores computed by a dominance model were shown to be sometimes illogical: individuals endorsing mostly realistic or mostly social items were given similar scores, whereas scores based on an ideal point model were sensitive to which type of items respondents endorsed.
Directory of Open Access Journals (Sweden)
François Waldner
2015-06-01
Timely and accurate information on the global cropland extent is critical for applications in the fields of food security, agricultural monitoring, water management, land-use change modeling and Earth system modeling. On the one hand, it gives detailed location information on where to analyze satellite image time series to assess crop condition. On the other hand, it isolates the agriculture component to focus food security monitoring on agriculture and to assess the potential impacts of climate change on agricultural lands. The cropland class is often poorly captured in global land cover products due to its dynamic nature and the large variety of agro-systems. The overall objective was to evaluate the current availability of cropland datasets in order to propose a strategic planning and effort distribution for future cropland mapping activities and, therefore, to maximize their impact. Following a very comprehensive identification and collection of national to global land cover maps, a multi-criteria analysis was designed at the country level to identify the priority areas for cropland mapping. As a result, the analysis highlighted priority regions, such as Western Africa, Ethiopia, Madagascar and Southeast Asia, for the remote sensing community to focus its efforts. A Unified Cropland Layer at 250 m for the year 2014 was produced combining the fittest products. It was assessed using global validation datasets and yields an overall accuracy ranging from 82%–94%. Masking cropland areas with a global forest map reduced the commission errors from 46% down to 26%. Compared to the GLC-Share and the International Institute for Applied Systems Analysis-International Food Policy Research Institute (IIASA-IFPRI) cropland maps, significant spatial disagreements were found, which might be attributed to discrepancies in the cropland definition. This advocates for a shared definition of cropland, as well as global validation datasets relevant for the
Nonlinear models for fitting growth curves of Nellore cows reared in the Amazon Biome
Directory of Open Access Journals (Sweden)
Kedma Nayra da Silva Marinho
2013-09-01
Growth curves of Nellore cows were estimated by comparing six nonlinear models: Brody, Logistic, two alternatives by Gompertz, Richards and Von Bertalanffy. The models were fitted to weight-age data, from birth to 750 days of age, of 29,221 cows born between 1976 and 2006 in the Brazilian states of Acre, Amapá, Amazonas, Pará, Rondônia, Roraima and Tocantins. The models were fitted by the Gauss-Newton method. The goodness of fit of the models was evaluated by using mean square error, adjusted coefficient of determination, prediction error and mean absolute error. Biological interpretation of parameters was accomplished by plotting estimated weights versus the observed weight means, instantaneous growth rate, absolute maturity rate, relative instantaneous growth rate, inflection point and magnitude of the parameters A (asymptotic weight) and K (maturing rate). The Brody and Von Bertalanffy models fitted the weight-age data but the other models did not. The average weight (A) and growth rate (K) were: 384.6±1.63 kg and 0.0022±0.00002 (Brody) and 313.40±0.70 kg and 0.0045±0.00002 (Von Bertalanffy). The Brody model provides better goodness of fit than the Von Bertalanffy model.
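A fit of this kind can be sketched with SciPy. The Brody curve W(t) = A(1 − B·exp(−K·t)) is fitted to synthetic weight-age data; the paper uses Gauss-Newton, while `curve_fit`'s default Levenberg-Marquardt is a damped variant of the same idea. The B value and the noise level below are made-up illustrations, with A and K seeded from the abstract's estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

def brody(t, A, B, K):
    """Brody growth curve: asymptotic weight A, integration constant B, maturing rate K."""
    return A * (1.0 - B * np.exp(-K * t))

rng = np.random.default_rng(5)
age = np.linspace(0, 750, 80)                    # days, matching the data range
w = brody(age, 384.6, 0.91, 0.0022) + rng.normal(0, 5, 80)  # synthetic weights (kg)

# Nonlinear least-squares fit from a rough initial guess.
popt, pcov = curve_fit(brody, age, w, p0=[350, 0.9, 0.003])
A_hat, B_hat, K_hat = popt
```

Because 750 days is well short of maturity, the asymptote A is only loosely constrained by the data, one reason goodness-of-fit comparisons between curve families matter here.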
Modelling 1-minute directional observations of the global irradiance.
Thejll, Peter; Pagh Nielsen, Kristian; Andersen, Elsa; Furbo, Simon
2016-04-01
Direct and diffuse irradiances from the sky have been collected at 1-minute intervals for about a year at the experimental station at the Technical University of Denmark for the IEA project "Solar Resource Assessment and Forecasting". These data were gathered by pyrheliometers tracking the Sun, as well as with apertured pyranometers gathering 1/8th and 1/16th of the light from the sky in 45-degree azimuthal ranges pointed around the compass. The data are gathered in order to develop detailed models of the potentially available solar energy and its variations at high temporal resolution, in order to gain a more detailed understanding of the solar resource. This is important for a better understanding of the sub-grid scale cloud variation that cannot be resolved with climate and weather models. It is also important for optimizing the operation of active solar energy systems such as photovoltaic plants and thermal solar collector arrays, and for passive solar energy and lighting of buildings. We present regression-based modelling of the observed data and focus, here, on the statistical properties of the model fits. Using models based on the one hand on what is found in the literature and on physical expectations, and on the other hand on purely statistical models, we find solutions that can explain up to 90% of the variance in global radiation. The models leaning on physical insights include terms for the direct solar radiation, a term for the circum-solar radiation, a diffuse term and a term for horizon brightening/darkening. The purely statistical model is found using data- and formula-validation approaches, picking model expressions from a general catalogue of possible formulae. The method allows nesting of expressions, and the results found are dependent on and heavily constrained by the cross-validation carried out on statistically independent testing and training data-sets. Slightly better fits, in terms of variance explained, are found using the purely
Progress in Global Multicompartmental Modelling of DDT
Stemmler, I.; Lammel, G.
2009-04-01
Dichlorodiphenyltrichloroethane, DDT, and its major metabolite dichlorodiphenyldichloroethylene, DDE, are long-lived in the environment (persistent) and have been circulating since the 1950s. They accumulate along food chains, cause detrimental effects in marine and terrestrial wildlife, and pose a hazard for human health. DDT was widely used as an insecticide in the past and is still in use in a number of tropical countries to combat vector-borne diseases like malaria and typhus. It is a multicompartmental substance with only a small mass fraction residing in air. A global multicompartment chemistry transport model (MPI-MCTM; Semeena et al., 2006) is used to study the environmental distribution and fate of DDT. For the first time, a horizontally and vertically resolved global model was used to perform a long-term simulation of DDT and DDE. The model is based on general circulation models for the ocean (MPIOM; Marsland et al., 2003) and atmosphere (ECHAM5). In addition, an oceanic biogeochemistry model (HAMOCC5.1; Maier-Reimer et al., 2005) and a microphysical aerosol model (HAM; Stier et al., 2005) are included. Multicompartmental substances cycle in the atmosphere (3 phases), ocean (3 phases), top soil (3 phases), and on vegetation surfaces. The model was run for 40 years, forced with historical agricultural application data for 1950-1990. The model results show that the global environmental contamination started to decrease in air, soil and vegetation after applications peaked in 1965-70. In some regions, however, the DDT mass had not yet reached a maximum in 1990 and was still accumulating until the end of the simulation. Modelled DDT and DDE concentrations in the atmosphere, ocean and soil are evaluated by comparison with observational data. The evaluation of the model results indicates that degradation of DDE in air was underestimated. Also for DDT, the discrepancies between model results and observations are related to uncertainties of
A satellite-based global landslide model
Directory of Open Access Journals (Sweden)
A. Farahmand
2013-05-01
Landslides are devastating phenomena that cause huge damage around the world. This paper presents a quasi-global landslide model derived using satellite precipitation data, land-use land cover maps, and 250 m topography information. This suggested landslide model is based on Support Vector Machines (SVM), a machine learning algorithm. The National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) landslide inventory data is used as observations and reference data. In all, 70% of the data are used for model development and training, whereas 30% are used for validation and verification. The results of 100 random subsamples of available landslide observations revealed that the suggested landslide model can predict historical landslides reliably. The average error of 100 iterations of landslide prediction is estimated to be approximately 7%, while approximately 2% false landslide events are observed.
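A schematic version of this workflow with scikit-learn: an SVM classifier trained on stand-in precipitation/slope/land-cover features, using the same 70/30 training-validation split. The synthetic labels and features are purely illustrative and do not reflect the NASA GSFC inventory.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.uniform(size=(500, 3))                # stand-in [precip, slope, land cover] features
y = (X[:, 0] + X[:, 1] > 1.1).astype(int)     # toy "landslide occurred" label

# 70% training, 30% validation, as in the paper's setup.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
error_rate = 1.0 - clf.score(X_te, y_te)      # analogue of the reported ~7% average error
```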
A hydroclimatic model of global fire patterns
Boer, Matthias
2015-04-01
Satellite-based earth observation is providing an increasingly accurate picture of global fire patterns. The highest fire activity is observed in seasonally dry (sub-)tropical environments of South America, Africa and Australia, but fires occur with varying frequency, intensity and seasonality in almost all biomes on Earth. The particular combination of these fire characteristics, or fire regime, is known to emerge from the combined influences of climate, vegetation, terrain and land use, but has so far proven difficult to reproduce by global models. Uncertainty about the biophysical drivers and constraints that underlie current global fire patterns is propagated in model predictions of how ecosystems, fire regimes and biogeochemical cycles may respond to projected future climates. Here, I present a hydroclimatic model of global fire patterns that predicts the mean annual burned area fraction (F) of 0.25° x 0.25° grid cells as a function of the climatic water balance. Following Bradstock's four-switch model, long-term fire activity levels were assumed to be controlled by fuel productivity rates and the likelihood that the extant fuel is dry enough to burn. The frequency of ignitions and favourable fire weather were assumed to be non-limiting at long time scales. Fundamentally, fuel productivity and fuel dryness are a function of the local water and energy budgets available for the production and desiccation of plant biomass. The climatic water balance summarizes the simultaneous availability of biologically usable energy and water at a site, and may therefore be expected to explain a significant proportion of global variation in F. To capture the effect of the climatic water balance on fire activity I focused on the upper quantiles of F, i.e. the maximum level of fire activity for a given climatic water balance. Analysing GFED4 data for annual burned area together with gridded climate data, I found that nearly 80% of the global variation in the 0.99 quantile of F
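The upper-quantile idea above can be sketched with NumPy alone: bin grid cells by a climatic water-balance index and take an upper quantile of burned fraction F within each bin, tracing the "maximum fire activity" envelope. The 0.99 quantile of the paper is replaced by 0.95 here, and the humped envelope is a synthetic stand-in for the productivity/dryness trade-off.

```python
import numpy as np

rng = np.random.default_rng(7)
wb = rng.uniform(-1, 1, 5000)             # water-balance index per grid cell
envelope = 0.5 * (1 - wb**2)              # synthetic hump: fuel-limited at both extremes
F = envelope * rng.uniform(size=5000)     # burned fraction scattered below the envelope

bins = np.linspace(-1, 1, 11)             # ten water-balance bins
idx = np.digitize(wb, bins) - 1
q95 = np.array([np.quantile(F[idx == k], 0.95) for k in range(10)])
```

The recovered q95 curve peaks at intermediate water balance and falls off toward very dry and very wet conditions, the pattern the quantile analysis of GFED4 burned area is designed to expose.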
On coupling global biome models with climate models
International Nuclear Information System (INIS)
Claussen, M.
1994-01-01
The BIOME model of Prentice et al. (1992), which predicts global vegetation patterns in equilibrium with climate, is coupled with the ECHAM climate model of the Max-Planck-Institut fuer Meteorologie, Hamburg. It is found that incorporation of the BIOME model into ECHAM, regardless of the coupling frequency, does not enhance the simulated climate variability, expressed in terms of differences between global vegetation patterns. The strongest changes are seen only between the initial biome distribution and the biome distribution computed after the first simulation period, provided that the climate-biome model is started from a biome distribution that resembles the present-day distribution. After the first simulation period, there is no significant shrinking, expanding, or shifting of biomes. Likewise, no trend is seen in global averages of land-surface parameters and climate variables. (orig.)
Directory of Open Access Journals (Sweden)
Grant B. Morgan
2015-02-01
Bi-factor confirmatory factor models have been influential in research on cognitive abilities because they often better fit the data than correlated factors and higher-order models. They also instantiate a perspective that differs from that offered by other models. Motivated by previous work that hypothesized an inherent statistical bias of fit indices favoring the bi-factor model, we compared the fit of correlated factors, higher-order, and bi-factor models via Monte Carlo methods. When data were sampled from a true bi-factor structure, each of the approximate fit indices was more likely than not to identify the bi-factor solution as the best fitting. When samples were selected from a true multiple correlated factors structure, approximate fit indices were more likely overall to identify the correlated factors solution as the best fitting. In contrast, when samples were generated from a true higher-order structure, approximate fit indices tended to identify the bi-factor solution as best fitting. There was extensive overlap of fit values across the models regardless of true structure. Although one model may fit a given dataset best relative to the other models, each of the models tended to fit the data well in absolute terms. Given this variability, models must also be judged on substantive and conceptual grounds.
Tree-Based Global Model Tests for Polytomous Rasch Models
Komboz, Basil; Strobl, Carolin; Zeileis, Achim
2018-01-01
Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…
Global and local level density models
International Nuclear Information System (INIS)
Koning, A.J.; Hilaire, S.; Goriely, S.
2008-01-01
Four different level density models, three phenomenological and one microscopic, are consistently parameterized using the same set of experimental observables. For each of the phenomenological models, the Constant Temperature Model, the Back-shifted Fermi gas Model and the Generalized Superfluid Model, a version without and with explicit collective enhancement is considered. Moreover, a recently published microscopic combinatorial model is compared with the phenomenological approaches and with the same set of experimental data. For each nuclide for which sufficient experimental data exists, a local level density parameterization is constructed for each model. Next, these local models have helped to construct global level density prescriptions, to be used for cases for which no experimental data exists. Altogether, this yields a collection of level density formulae and parameters that can be used with confidence in nuclear model calculations. To demonstrate this, a large-scale validation with experimental discrete level schemes and experimental cross sections and neutron emission spectra for various different reaction channels has been performed.
International Nuclear Information System (INIS)
Liang, Zhong Wei; Wang, Yi Jun; Ye, Bang Yan; Brauwer, Richard Kars
2012-01-01
In inspecting the detailed performance results of surface precision modeling in different external parameter conditions, the integrated chip surfaces should be evaluated and assessed during topographic spatial modeling processes. The application of surface fitting algorithms exerts a considerable influence on topographic mathematical features. The influence mechanisms caused by different surface fitting algorithms on the integrated chip surface facilitate the quantitative analysis of different external parameter conditions. By extracting the coordinate information from the selected physical control points and using a set of precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used for constructing micro topographic models with the obtained point cloud. In computing for the newly proposed mathematical features on surface models, we construct the fuzzy evaluating data sequence and present a new three dimensional fuzzy quantitative evaluating method. Through this method, the value variation tendencies of topographic features can be clearly quantified. The fuzzy influence discipline among different surface fitting algorithms, topography spatial features, and the external science parameter conditions can be analyzed quantitatively and in detail. In addition, quantitative analysis can provide final conclusions on the inherent influence mechanism and internal mathematical relation in the performance results of different surface fitting algorithms, topographic spatial features, and their scientific parameter conditions in the case of surface micro modeling. The performance inspection of surface precision modeling will be facilitated and optimized as a new research idea for micro-surface reconstruction that will be monitored in a modeling process
DEFF Research Database (Denmark)
Ding, Tao; Li, Cheng; Huang, Can
2018-01-01
In order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost ... optimality. Numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.
A Global Model of Meteoric Sodium
Marsh, Daniel R.; Janches, Diego; Feng, Wuhu; Plane, John M. C.
2013-01-01
A global model of sodium in the mesosphere and lower thermosphere has been developed within the framework of the National Center for Atmospheric Research's Whole Atmosphere Community Climate Model (WACCM). The standard fully interactive WACCM chemistry module has been augmented with a chemistry scheme that includes nine neutral and ionized sodium species. Meteoric ablation provides the source of sodium in the model and is represented as a combination of a meteoroid input function (MIF) and a parameterized ablation model. The MIF provides the seasonally and latitudinally varying meteoric flux which is modeled taking into consideration the astronomical origins of sporadic meteors and considers variations in particle entry angle, velocity, mass, and the differential ablation of the chemical constituents. WACCM simulations show large variations in the sodium constituents over time scales from days to months. Seasonality of sodium constituents is strongly affected by variations in the MIF and transport via the mean meridional wind. In particular, the summer to winter hemisphere flow leads to the highest sodium species concentrations and loss rates occurring over the winter pole. In the Northern Hemisphere, this winter maximum can be dramatically affected by stratospheric sudden warmings. Simulations of the January 2009 major warming event show that it caused a short-term decrease in the sodium column over the polar cap that was followed by a factor of 3 increase in the following weeks. Overall, the modeled distribution of atomic sodium in WACCM agrees well with both ground-based and satellite observations. Given the strong sensitivity of the sodium layer to dynamical motions, reproducing its variability provides a stringent test of global models and should help to constrain key atmospheric variables in this poorly sampled region of the atmosphere.
Stojek, Monika M K; Montoya, Amanda K; Drescher, Christopher F; Newberry, Andrew; Sultan, Zain; Williams, Celestine F; Pollock, Norman K; Davis, Catherine L
We used mediation models to examine the mechanisms underlying the relationships among physical fitness, sleep-disordered breathing (SDB), symptoms of depression, and cognitive functioning. We conducted a cross-sectional secondary analysis of the cohorts involved in the 2003-2006 project PLAY (a trial of the effects of aerobic exercise on health and cognition) and the 2008-2011 SMART study (a trial of the effects of exercise on cognition). A total of 397 inactive overweight children aged 7-11 received a fitness test, standardized cognitive test (Cognitive Assessment System, yielding Planning, Attention, Simultaneous, Successive, and Full Scale scores), and depression questionnaire. Parents completed a Pediatric Sleep Questionnaire. We used bootstrapped mediation analyses to test whether SDB mediated the relationship between fitness and depression and whether SDB and depression mediated the relationship between fitness and cognition. Fitness was negatively associated with depression (B = -0.041; 95% CI, -0.06 to -0.02) and SDB (B = -0.005; 95% CI, -0.01 to -0.001). SDB was positively associated with depression (B = 0.99; 95% CI, 0.32 to 1.67) after controlling for fitness. The relationship between fitness and depression was mediated by SDB (indirect effect = -0.005; 95% CI, -0.01 to -0.0004). The relationship between fitness and the attention component of cognition was independently mediated by SDB (indirect effect = 0.058; 95% CI, 0.004 to 0.13) and depression (indirect effect = -0.071; 95% CI, -0.17 to -0.01). SDB mediates the relationship between fitness and depression, and SDB and depression separately mediate the relationship between fitness and the attention component of cognition.
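The bootstrapped mediation analysis described above can be sketched in a few lines. This is a hypothetical illustration on synthetic data (all variable names and effect sizes are invented, not taken from the study): the indirect effect a·b is estimated from two regressions, and its confidence interval from percentile bootstrap resampling.

```python
import random

def slope(x, y):
    """OLS slope of y on x (simple linear regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def residualize(x, y):
    """Residuals of y after removing its linear dependence on x."""
    b = slope(x, y)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return [yi - my - b * (xi - mx) for xi, yi in zip(x, y)]

def indirect_effect(fitness, sdb, depression):
    """a*b: path a (fitness -> SDB) times path b (SDB -> depression,
    controlling for fitness via Frisch-Waugh residualization)."""
    a = slope(fitness, sdb)
    b = slope(residualize(fitness, sdb), residualize(fitness, depression))
    return a * b

def bootstrap_ci(fitness, sdb, depression, reps=500, alpha=0.05):
    """Percentile bootstrap CI for the indirect effect."""
    n = len(fitness)
    effs = []
    for _ in range(reps):
        idx = [random.randrange(n) for _ in range(n)]
        effs.append(indirect_effect([fitness[i] for i in idx],
                                    [sdb[i] for i in idx],
                                    [depression[i] for i in idx]))
    effs.sort()
    return effs[int(reps * alpha / 2)], effs[int(reps * (1 - alpha / 2)) - 1]

# Synthetic data: higher fitness lowers SDB; SDB raises depression.
random.seed(1)
n = 400
fitness = [random.gauss(0, 1) for _ in range(n)]
sdb = [-0.5 * f + random.gauss(0, 0.5) for f in fitness]
depression = [s + random.gauss(0, 0.5) for s in sdb]

ab = indirect_effect(fitness, sdb, depression)   # generating value is -0.5
ci_lo, ci_hi = bootstrap_ci(fitness, sdb, depression)
```

A negative indirect effect whose bootstrap interval excludes zero is the pattern the study reports for fitness → SDB → depression.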
Global adjoint tomography: first-generation model
Bozdağ, Ebru
2016-09-23
We present the first-generation global tomographic model constructed based on adjoint tomography, an iterative full-waveform inversion technique. Synthetic seismograms were calculated using GPU-accelerated spectral-element simulations of global seismic wave propagation, accommodating effects due to 3-D anelastic crust & mantle structure, topography & bathymetry, the ocean load, ellipticity, rotation, and self-gravitation. Fréchet derivatives were calculated in 3-D anelastic models based on an adjoint-state method. The simulations were performed on the Cray XK7 named 'Titan', a computer with 18 688 GPU accelerators housed at Oak Ridge National Laboratory. The transversely isotropic global model is the result of 15 tomographic iterations, which systematically reduced differences between observed and simulated three-component seismograms. Our starting model combined 3-D mantle model S362ANI with 3-D crustal model Crust2.0. We simultaneously inverted for structure in the crust and mantle, thereby eliminating the need for widely used 'crustal corrections'. We used data from 253 earthquakes in the magnitude range 5.8 ≤ M ≤ 7.0. We started inversions by combining ~30 s body-wave data with ~60 s surface-wave data. The shortest period of the surface waves was gradually decreased, and in the last three iterations we combined ~17 s body waves with ~45 s surface waves. We started using 180 min long seismograms after the 12th iteration and assimilated minor- and major-arc body and surface waves. The 15th iteration model features enhancements of well-known slabs, an enhanced image of the Samoa/Tahiti plume, as well as various other plumes and hotspots, such as Caroline, Galapagos, Yellowstone and Erebus. Furthermore, we see clear improvements in slab resolution along the Hellenic and Japan Arcs, as well as subduction along the East of Scotia Plate, which does not exist in the starting model. Point-spread function tests demonstrate that we are approaching the
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
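The "explosion" step mentioned above, rewriting each survival record as one Poisson observation per baseline-hazard piece, can be sketched as follows. This is a minimal illustration (the function and tuple layout are invented, not the %PCFrailty macro's format): the log of each exposure becomes the offset of the Poisson generalized linear mixed model.

```python
def explode(time, event, cuts):
    """Split one survival record into piecewise-Poisson rows.

    cuts: right endpoints of the baseline-hazard pieces, e.g. [1, 2, 3].
    Returns (piece_index, exposure, events) tuples; log(exposure) is
    the offset in the Poisson GLMM representation of the frailty model.
    """
    rows = []
    start = 0.0
    for j, end in enumerate(cuts):
        if time <= start:            # follow-up ended before this piece
            break
        exposure = min(time, end) - start
        d = int(event == 1 and time <= end)   # event falls in this piece
        rows.append((j, exposure, d))
        start = end
    return rows

# A subject observed for 2.5 time units who then has the event:
rows = explode(2.5, 1, cuts=[1.0, 2.0, 3.0])
```

The subject contributes full exposure (and no event) to the first two pieces, and half a unit of exposure plus the event to the third.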
Challenges in Modeling of the Global Atmosphere
Janjic, Zavisa; Djurdjevic, Vladimir; Vasic, Ratko; Black, Tom
2015-04-01
… with significant amplitudes can develop. Due to their large scales, which are comparable to the scales of the dominant Rossby waves, such fictitious solutions are hard to identify and remove. Another new challenge on the global scale is that the limit of validity of the hydrostatic approximation is rapidly being approached. Having in mind the sensitivity of extended deterministic forecasts to small disturbances, we may need global non-hydrostatic models sooner than we think. The unified Non-hydrostatic Multi-scale Model (NMMB) that is being developed at the National Centers for Environmental Prediction (NCEP) as a part of the new NOAA Environmental Modeling System (NEMS) will be discussed as an example. The non-hydrostatic dynamics were designed in such a way as to avoid over-specification. The global version is run on the latitude-longitude grid, and the polar filter selectively slows down the waves that would otherwise be unstable. The model formulation has been successfully tested on various scales. A global forecasting system based on the NMMB has been run in order to test and tune the model. The skill of the medium range forecasts produced by the NMMB is comparable to that of other major medium range models. The computational efficiency of the global NMMB on parallel computers is good.
Unifying distance-based goodness-of-fit indicators for hydrologic model assessment
Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim
2014-05-01
The goodness-of-fit indicator, i.e. the efficiency criterion, is very important for model calibration. However, current knowledge about goodness-of-fit indicators is empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and the generalized error distribution (GED) with zero mean to fit the distribution of model errors after the BC transformation. The BC-GED model unifies all recent distance-based goodness-of-fit indicators, and reveals that the widely used mean square error (MSE) and mean absolute error (MAE) imply the statistical assumptions that model errors follow the Gaussian distribution and the Laplace distribution with zero mean, respectively. Empirical knowledge about goodness-of-fit indicators can also be easily interpreted by the BC-GED model; e.g. the sensitivity to high flows of goodness-of-fit indicators with a large power of model errors results from the low probability of large model errors in the assumed distribution of these indicators. In order to assess the effect of the parameters of the BC-GED model (i.e. the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and model simulation results among the six indicators demonstrates that these indicators can be clearly separated into two classes by the GED kurtosis β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated by the class β > 1 of distance-based goodness-of-fit indicators captures high flows very well but reproduces baseflow poorly, whereas calibration by the class β ≤ 1 reproduces baseflow very well, because, first, the larger the value of β, the greater the emphasis put on
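The core of the BC-GED idea can be illustrated numerically: after a Box-Cox transformation of the flows, the calibration objective is (up to constants) the sum of |error|^β, so β = 2 recovers least squares (MSE, Gaussian errors) and β = 1 least absolute deviations (MAE, Laplace errors). A minimal sketch, with illustrative data values that are not from the Baocun case study:

```python
import math

def box_cox(y, lam):
    """Box-Cox transform; lam = 0 is the log-transform limit."""
    return math.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def ged_objective(simulated, observed, lam, beta):
    """Sum of |Box-Cox error|^beta: the distance minimized when model
    errors are assumed GED-distributed with kurtosis coefficient beta."""
    return sum(abs(box_cox(s, lam) - box_cox(o, lam)) ** beta
               for s, o in zip(simulated, observed))

obs = [1.0, 2.0, 10.0]
sim = [1.2, 1.8, 12.0]
mse_like = ged_objective(sim, obs, lam=1.0, beta=2.0)  # proportional to MSE
mae_like = ged_objective(sim, obs, lam=1.0, beta=1.0)  # proportional to MAE
```

With λ = 1 the transform reduces to y − 1, so the β = 2 objective is exactly the sum of squared errors and the β = 1 objective the sum of absolute errors; raising β above 2 would weight the large high-flow error even more strongly, matching the class separation described in the abstract.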
Irvine, Michael A; Hollingsworth, T Déirdre
2018-05-26
Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open access library, with examples to aid researchers to rapidly fit models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
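An adaptive-tolerance ABC loop of the kind described can be sketched in a few lines. This is a toy example, not the authors' released library: it infers the mean of a Gaussian, uses the sample mean as the summary statistic, and each generation sets the next tolerance to a quantile of the accepted distances.

```python
import random
import statistics

random.seed(42)

# "Observed" data from the true process (theta = 2.0 is to be recovered).
observed = [random.gauss(2.0, 1.0) for _ in range(100)]
obs_mean = statistics.mean(observed)

def simulate(theta, n=100):
    return [random.gauss(theta, 1.0) for _ in range(n)]

def distance(data):
    # summary statistic: sample mean; distance: absolute difference
    return abs(statistics.mean(data) - obs_mean)

particles = [random.uniform(-5.0, 5.0) for _ in range(1000)]  # wide prior
tol = float("inf")
for generation in range(4):
    scored = sorted((distance(simulate(th)), th) for th in particles)
    accepted = scored[: len(scored) // 2]   # keep the best half...
    tol = accepted[-1][0]                   # ...which sets the next tolerance
    # resample accepted particles with a small Gaussian perturbation kernel
    particles = [random.choice(accepted)[1] + random.gauss(0.0, 0.2)
                 for _ in range(1000)]

posterior_mean = statistics.mean(th for _, th in accepted)
```

The tolerance shrinks automatically as the particle cloud concentrates, which is the "minimal hyper-parameter tuning" property the abstract emphasizes; a production scheme would add the kernel density estimation step for dispersed, multi-dimensional data.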
Standard error propagation in R-matrix model fitting for light elements
International Nuclear Information System (INIS)
Chen Zhenpeng; Zhang Rui; Sun Yeying; Liu Tingjin
2003-01-01
The error propagation features with R-matrix model fitting of the ⁷Li, ¹¹B and ¹⁷O systems were researched systematically. Some laws of error propagation were revealed, and an empirical formula, P_j = U_j^c / U_j^d = K_j · S̄ · √m / √N, describing standard error propagation was established; the most likely error ranges for the standard cross sections of ⁶Li(n,t), ¹⁰B(n,α₀) and ¹⁰B(n,α₁) were estimated. The problem that the standard errors of light-nuclei standard cross sections may be too small results mainly from the R-matrix model fitting, which is not perfect. Yet R-matrix model fitting is the most reliable evaluation method for such data. The error propagation features of R-matrix model fitting for the compound nucleus systems of ⁷Li, ¹¹B and ¹⁷O have been studied systematically, some laws of error propagation are revealed, and these findings are important in solving the problem mentioned above. Furthermore, these conclusions are suitable for similar model fitting in other scientific fields. (author)
Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes
Leite, Walter L.; Stapleton, Laura M.
2011-01-01
In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…
Assessing model fit in latent class analysis when asymptotics do not hold
van Kollenburg, Geert H.; Mulder, Joris; Vermunt, Jeroen K.
2015-01-01
The application of latent class (LC) analysis involves evaluating the LC model using goodness-of-fit statistics. To assess the misfit of a specified model, say with the Pearson chi-squared statistic, a p-value can be obtained using an asymptotic reference distribution. However, asymptotic p-values
Development and design of a late-model fitness test instrument based on LabView
Xie, Ying; Wu, Feiqing
2010-12-01
Undergraduates are pioneers of China's modernization program and undertake the historic mission of rejuvenating our nation in the 21st century, so their physical fitness is vital. A smart fitness test system can help them understand their fitness and health conditions, so that they can choose more suitable approaches and make practical exercise plans according to their own situation. Following future trends, a late-model fitness test instrument based on LabView has been designed to remedy the defects of today's instruments. The system hardware consists of five types of sensors with their peripheral circuits, an NI USB-6251 acquisition card and a computer, while the system software, built on LabView, includes modules for user registration, data acquisition, data processing and display, and data storage. The system, featuring modularization and an open structure, can be revised according to actual needs. Test results have verified the system's stability and reliability.
Global plastic models for computerized structural analysis
International Nuclear Information System (INIS)
Roche, R.L.; Hoffmann, A.
1977-01-01
In many types of structures, it is possible to use generalized stresses (like membrane forces, bending moments, torsion moments...) to define a yield surface for a part of the structure. Analysis can be achieved by using Hill's principle and a hardening rule. The whole formulation is called a 'global plastic model'. Two different global models are used in the CEASEMT system for structural analysis, one for shell analysis and the other for piping analysis (in the plastic or creep regime). In shell analysis the generalized stresses chosen are the membrane forces and bending (including torsion) moments. There is only one yield condition per normal to the middle surface, and no integration along the thickness is required. In piping analysis, the generalized stresses chosen are the bending moments, torsional moment, hoop stress and tension stress. There is only one set of stresses per cross section, and no integration over the cross-section area is needed. The associated strains are the axis curvature, torsion and uniform strains. The definition of the yield surface is the most important item. A practical way is to use a diagonal quadratic function of the stress components, but the coefficients depend on the shape of the pipe element, especially for curved segments. Indications are given on the yield functions used, and some examples of applications in structural analysis complete the text.
Modeling global scene factors in attention
Torralba, Antonio
2003-07-01
Models of visual attention have focused predominantly on bottom-up approaches that ignored structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information can become available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition. © 2003 Optical Society of America
Global embedding of fibre inflation models
Energy Technology Data Exchange (ETDEWEB)
Cicoli, Michele [Dipartimento di Fisica e Astronomia, Università di Bologna,via Irnerio 46, 40126 Bologna (Italy); INFN - Sezione di Bologna,viale Berti Pichat 6/2, 40127 Bologna (Italy); Abdus Salam ICTP,Strada Costiera 11, Trieste 34151 (Italy); Muia, Francesco [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Rd., Oxford OX1 3NP (United Kingdom); Shukla, Pramod [Abdus Salam ICTP,Strada Costiera 11, Trieste 34151 (Italy)
2016-11-30
We present concrete embeddings of fibre inflation models in globally consistent type IIB Calabi-Yau orientifolds with closed string moduli stabilisation. After performing a systematic search through the existing list of toric Calabi-Yau manifolds, we find several examples that reproduce the minimal setup to embed fibre inflation models. This involves Calabi-Yau manifolds with h^{1,1} = 3 which are K3 fibrations over a ℙ^1 base with an additional shrinkable rigid divisor. We then provide different consistent choices of the underlying brane set-up which generate a non-perturbative superpotential suitable for moduli stabilisation and string loop corrections with the correct form to drive inflation. For each Calabi-Yau orientifold setting, we also compute the effect of higher derivative contributions and study their influence on the inflationary dynamics.
A Global Atmospheric Model of Meteoric Iron
Feng, Wuhu; Marsh, Daniel R.; Chipperfield, Martyn P.; Janches, Diego; Hoffner, Josef; Yi, Fan; Plane, John M. C.
2013-01-01
The first global model of meteoric iron in the atmosphere (WACCM-Fe) has been developed by combining three components: the Whole Atmosphere Community Climate Model (WACCM), a description of the neutral and ion-molecule chemistry of iron in the mesosphere and lower thermosphere (MLT), and a treatment of the injection of meteoric constituents into the atmosphere. The iron chemistry treats seven neutral and four ionized iron containing species with 30 neutral and ion-molecule reactions. The meteoric input function (MIF), which describes the injection of Fe as a function of height, latitude, and day, is precalculated from an astronomical model coupled to a chemical meteoric ablation model (CABMOD). This newly developed WACCM-Fe model has been evaluated against a number of available ground-based lidar observations and performs well in simulating the mesospheric atomic Fe layer. The model reproduces the strong positive correlation of temperature and Fe density around the Fe layer peak and the large anticorrelation around 100 km. The diurnal tide has a significant effect in the middle of the layer, and the model also captures well the observed seasonal variations. However, the model overestimates the peak Fe+ concentration compared with the limited rocket-borne mass spectrometer data available, although good agreement on the ion layer underside can be obtained by adjusting the rate coefficients for dissociative recombination of Fe-molecular ions with electrons. Sensitivity experiments with the same chemistry in a 1-D model are used to highlight significant remaining uncertainties in reaction rate coefficients, and to explore the dependence of the total Fe abundance on the MIF and rate of vertical transport.
The Software Architecture of Global Climate Models
Alexander, K. A.; Easterbrook, S. M.
2011-12-01
It has become common to compare and contrast the output of multiple global climate models (GCMs), such as in the Climate Model Intercomparison Project Phase 5 (CMIP5). However, intercomparisons of the software architecture of GCMs are almost nonexistent. In this qualitative study of seven GCMs from Canada, the United States, and Europe, we attempt to fill this gap in research. We describe the various representations of the climate system as computer programs, and account for architectural differences between models. Most GCMs now practice component-based software engineering, where Earth system components (such as the atmosphere or land surface) are present as highly encapsulated sub-models. This architecture facilitates a mix-and-match approach to climate modelling that allows for convenient sharing of model components between institutions, but it also leads to difficulty when choosing where to draw the lines between systems that are not encapsulated in the real world, such as sea ice. We also examine different styles of couplers in GCMs, which manage interaction and data flow between components. Finally, we pay particular attention to the varying levels of complexity in GCMs, both between and within models. Many GCMs have some components that are significantly more complex than others, a phenomenon which can be explained by the respective institution's research goals as well as the origin of the model components. In conclusion, although some features of software architecture have been adopted by every GCM we examined, other features show a wide range of different design choices and strategies. These architectural differences may provide new insights into variability and spread between models.
Fast and exact Newton and Bidirectional fitting of Active Appearance Models.
Kossaifi, Jean; Tzimiropoulos, Yorgos; Pantic, Maja
2016-12-21
Active Appearance Models (AAMs) are generative models of shape and appearance that have proven very attractive for their ability to handle wide changes in illumination, pose and occlusion when trained in the wild, while not requiring large training datasets like regression-based or deep learning methods. The problem of fitting an AAM is usually formulated as a non-linear least squares one and the main way of solving it is a standard Gauss-Newton algorithm. In this paper we extend Active Appearance Models in two ways: we first extend the Gauss-Newton framework by formulating a bidirectional fitting method that deforms both the image and the template to fit a new instance. We then formulate a second order method by deriving an efficient Newton method for AAMs fitting. We derive both methods in a unified framework for two types of Active Appearance Models, holistic and part-based, and additionally show how to exploit the structure in the problem to derive fast yet exact solutions. We perform a thorough evaluation of all algorithms on three challenging and recently annotated in-the-wild datasets, and investigate fitting accuracy, convergence properties and the influence of noise in the initialisation. We compare our proposed methods to other algorithms and show that they yield state-of-the-art results, out-performing other methods while having superior convergence properties.
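The Gauss-Newton machinery at the heart of AAM fitting is the standard nonlinear least-squares update δ = −(JᵀJ)⁻¹Jᵀr. As a self-contained illustration (a 1-D curve-fitting toy, not an AAM: the model and data are invented), the same update recovers the parameters of an exponential model y = a·e^(−b·x):

```python
import math

def gauss_newton_exp(xs, ys, a, b, iters=50):
    """Fit y = a*exp(-b*x) by undamped Gauss-Newton; returns (a, b)."""
    for _ in range(iters):
        # residuals r_i = a*exp(-b*x_i) - y_i and Jacobian columns
        e = [math.exp(-b * x) for x in xs]
        r = [a * ei - yi for ei, yi in zip(e, ys)]
        Ja = e                                       # dr/da
        Jb = [-a * x * ei for x, ei in zip(xs, e)]   # dr/db
        # normal equations (J^T J) delta = -J^T r, solved for the 2x2 case
        aa = sum(j * j for j in Ja)
        ab = sum(ja * jb for ja, jb in zip(Ja, Jb))
        bb = sum(j * j for j in Jb)
        ga = -sum(ja * ri for ja, ri in zip(Ja, r))
        gb = -sum(jb * ri for jb, ri in zip(Jb, r))
        det = aa * bb - ab * ab
        a += (bb * ga - ab * gb) / det
        b += (aa * gb - ab * ga) / det
    return a, b

xs = [0.1 * i for i in range(30)]
ys = [2.0 * math.exp(-0.5 * x) for x in xs]   # noiseless target: a=2, b=0.5
# initialize a from the x=0 observation and b from a rough guess
a_hat, b_hat = gauss_newton_exp(xs, ys, a=ys[0], b=0.8)
```

The paper's bidirectional and Newton variants change what goes into r and J (image and template deformations, second-order terms), but the solve-and-update loop has this same shape.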
Sensitivities in global scale modeling of isoprene
Directory of Open Access Journals (Sweden)
R. von Kuhlmann
2004-01-01
A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios, which can be grouped into four thematic categories, were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased compared to the background methane chemistry by 26±9 Tg(O3), from 273 to an average over the sensitivity runs of 299 Tg(O3). Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty, and the much larger local deviations found in the test runs, suggest that the treatment of isoprene in global models can only be seen as a first-order estimate at present, and points towards specific processes in need of focused future work.
Visbeck, M.; Fischer, A. S.; Le Traon, P. Y.; Mowlem, M. C.; Speich, S.; Larkin, K.
2015-12-01
There are an increasing number of global, regional and local processes that are in need of integrated ocean information. In the sciences ocean information is needed to support physical ocean and climate studies for example within the World Climate Research Programme and its CLIVAR project, biogeochemical issues as articulated by the GCP, IMBER and SOLAS projects of ICSU-SCOR and Future Earth. This knowledge gets assessed in the area of climate by the IPCC and biodiversity by the IPBES processes. The recently released first World Ocean Assessment focuses more on ecosystem services and there is an expectation that the Sustainable Development Goals and in particular Goal 14 on the Ocean and Seas will generate new demands for integrated ocean observing from Climate to Fish and from Ocean Resources to Safe Navigation and on a healthy, productive and enjoyable ocean in more general terms. In recognition of those increasing needs for integrated ocean information we have recently launched the Horizon 2020 AtlantOS project to promote the transition from a loosely-coordinated set of existing ocean observing activities to a more integrated, more efficient, more sustainable and fit-for-purpose Atlantic Ocean Observing System. AtlantOS takes advantage of the Framework for Ocean observing that provided strategic guidance for the design of the project and its outcome. AtlantOS will advance the requirements and systems design, improving the readiness of observing networks and data systems, and engaging stakeholders around the Atlantic. AtlantOS will bring Atlantic nations together to strengthen their complementary contributions to and benefits from the internationally coordinated Global Ocean Observing System (GOOS) and the Blue Planet Initiative of the Global Earth Observation System of Systems (GEOSS). AtlantOS will fill gaps of the in-situ observing system networks and will ensure that their data are readily accessible and useable. AtlantOS will demonstrate the utility of
The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting
Tao, Zhang; Li, Zhang; Dingjun, Chen
On the basis of the idea of second-order curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A preventing-increase model is introduced in this paper, and the model parameters are solved with the Matlab software. The validity of the preventing-increase model is confirmed through a numerical experiment. The experimental results show that the precision of the preventing-increase model is ideal.
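The second-order curve-fitting step can be reproduced outside Matlab. A minimal sketch (on synthetic data; the real site counts are not reproduced here) fits a quadratic trend by solving the least-squares normal equations directly:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x^2 via normal equations."""
    # build the 3x3 system A c = b with A[i][j] = sum of x^(i+j)
    s = [sum(x ** k for x in xs) for k in range(5)]
    A = [[s[i + j] for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):   # back substitution
        coef[r] = (b[r] - sum(A[r][k] * coef[k]
                              for k in range(r + 1, 3))) / A[r][r]
    return coef  # [c0, c1, c2]

# An exact quadratic trend y = 1 + 2x + 3x^2 should be recovered exactly.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.0 + 2.0 * x + 3.0 * x ** 2 for x in xs]
c0, c1, c2 = fit_quadratic(xs, ys)
```

This is equivalent to Matlab's `polyfit(x, y, 2)` up to coefficient ordering.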
Generic global regression models for growth prediction of Salmonella in ground pork and pork cuts
DEFF Research Database (Denmark)
Buschhardt, Tasja; Hansen, Tina Beck; Bahl, Martin Iain
2017-01-01
Introduction and Objectives Models for the prediction of bacterial growth in fresh pork are primarily developed using two-step regression (i.e. primary models followed by secondary models). These models are also generally based on experiments in liquids or ground meat and neglect surface growth....... It has been shown that one-step global regressions can result in more accurate models and that bacterial growth on intact surfaces can substantially differ from growth in liquid culture. Material and Methods We used a global-regression approach to develop predictive models for the growth of Salmonella....... One part of obtained logtransformed cell counts was used for model development and another for model validation. The Ratkowsky square root model and the relative lag time (RLT) model were integrated into the logistic model with delay. Fitted parameter estimates were compared to investigate the effect...
Anshel, Mark H; Brinthaupt, Thomas M; Kang, Minsoo
2010-01-01
This study examined the effect of a 10-week wellness program on changes in physical fitness and mental well-being. The conceptual framework for this study was the Disconnected Values Model (DVM). According to the DVM, detecting the inconsistencies between negative habits and values (e.g., health, family, faith, character) and concluding that these "disconnects" are unacceptable promotes the need for health behavior change. Participants were 164 full-time employees at a university in the southeastern U.S. The program included fitness coaching and a 90-minute orientation based on the DVM. Multivariate Mixed Model analyses indicated significantly improved scores from pre- to post-intervention on selected measures of physical fitness and mental well-being. The results suggest that the Disconnected Values Model provides an effective cognitive-behavioral approach to generating health behavior change in a 10-week workplace wellness program.
A goodness-of-fit test for occupancy models with correlated within-season revisits
Wright, Wilson; Irvine, Kathryn M.; Rodhouse, Thomas J.
2016-01-01
Occupancy modeling is important for exploring species distribution patterns and for conservation monitoring. Within this framework, explicit attention is given to species detection probabilities estimated from replicate surveys to sample units. A central assumption is that replicate surveys are independent Bernoulli trials, but this assumption becomes untenable when ecologists serially deploy remote cameras and acoustic recording devices over days and weeks to survey rare and elusive animals. Proposed solutions involve modifying the detection-level component of the model (e.g., first-order Markov covariate). Evaluating whether a model sufficiently accounts for correlation is imperative, but clear guidance for practitioners is lacking. Currently, an omnibus goodness-of-fit test using a chi-square discrepancy measure on unique detection histories is available for occupancy models (MacKenzie and Bailey, Journal of Agricultural, Biological, and Environmental Statistics, 9, 2004, 300; hereafter, MacKenzie–Bailey test). We propose a join count summary measure adapted from spatial statistics to directly assess correlation after fitting a model. We motivate our work with a dataset of multinight bat call recordings from a pilot study for the North American Bat Monitoring Program. We found in simulations that our join count test was more reliable than the MacKenzie–Bailey test for detecting inadequacy of a model that assumed independence, particularly when serial correlation was low to moderate. A model that included a Markov-structured detection-level covariate produced unbiased occupancy estimates except in the presence of strong serial correlation and a revisit design consisting only of temporal replicates. When applied to two common bat species, our approach illustrates that sophisticated models do not guarantee adequate fit to real data, underscoring the importance of model assessment. Our join count test provides a widely applicable goodness-of-fit test and
Tests of fit of historically-informed models of African American Admixture.
Gross, Jessica M
2018-02-01
African American populations in the U.S. formed primarily by mating between Africans and Europeans over the last 500 years. To date, studies of admixture have focused on either a one-time admixture event or continuous input into the African American population from Europeans only. Our goal is to gain a better understanding of the admixture process by examining models that take into account (a) assortative mating by ancestry in the African American population, (b) continuous input from both Europeans and Africans, and (c) historically informed variation in the rate of African migration over time. We used a model-based clustering method to generate distributions of African ancestry in three samples comprised of 147 African Americans from two published sources. We used a log-likelihood method to examine the fit of four models to these distributions and used a log-likelihood ratio test to compare the relative fit of each model. The mean ancestry estimates for our datasets of 77% African/23% European to 83% African/17% European ancestry are consistent with previous studies. We find admixture models that incorporate continuous gene flow from Europeans fit significantly better than one-time event models, and that a model involving continuous gene flow from Africans and Europeans fits better than one with continuous gene flow from Europeans only for two samples. Importantly, models that involve continuous input from Africans necessitate a higher level of gene flow from Europeans than previously reported. We demonstrate that models that take into account information about the rate of African migration over the past 500 years fit observed patterns of African ancestry better than alternative models. Our approach will enrich our understanding of the admixture process in extant and past populations. © 2017 Wiley Periodicals, Inc.
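The log-likelihood ratio comparison of nested models described above can be sketched generically. Everything below is an illustrative assumption, not the paper's models: synthetic ancestry proportions, Beta-distribution ancestry models, and a mean-0.8 restriction standing in for the simpler model.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize, minimize_scalar

rng = np.random.default_rng(42)
# Hypothetical individual African-ancestry proportions (illustrative only);
# mean ~0.8, echoing the roughly 80/20 African/European split reported.
ancestry = rng.beta(8.0, 2.0, size=147)

# Restricted model: Beta(a, a/4), i.e. mean fixed at 0.8 -> 1 free parameter.
neg_ll_r = lambda a: -stats.beta.logpdf(ancestry, a, a / 4.0).sum()
res_r = minimize_scalar(neg_ll_r, bounds=(0.1, 100.0), method="bounded")

# Full model: Beta(a, b) -> 2 free parameters; the restricted model is nested.
neg_ll_f = lambda p: -stats.beta.logpdf(ancestry, p[0], p[1]).sum()
res_f = minimize(neg_ll_f, x0=[5.0, 2.0], bounds=[(0.1, 100.0), (0.1, 100.0)])

# Log-likelihood ratio statistic; 1 degree-of-freedom difference.
lr = 2.0 * (res_r.fun - res_f.fun)
p_value = stats.chi2.sf(lr, df=1)
```

A small p-value would favour the richer model, which is the logic the study applies to its continuous-gene-flow versus one-time-event admixture models.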
GOODNESS-OF-FIT TEST FOR THE ACCELERATED FAILURE TIME MODEL BASED ON MARTINGALE RESIDUALS
Czech Academy of Sciences Publication Activity Database
Novák, Petr
2013-01-01
Roč. 49, č. 1 (2013), s. 40-59 ISSN 0023-5954 R&D Projects: GA MŠk(CZ) 1M06047 Grant - others:GA MŠk(CZ) SVV 261315/2011 Keywords : accelerated failure time model * survival analysis * goodness-of-fit Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2013/SI/novak-goodness-of-fit test for the aft model based on martingale residuals.pdf
Efficient occupancy model-fitting for extensive citizen-science data
Morgan, Byron J. T.; Freeman, Stephen N.; Ridout, Martin S.; Brereton, Tom M.; Fox, Richard; Powney, Gary D.; Roy, David B.
2017-01-01
Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium data base, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species’ range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen
Directory of Open Access Journals (Sweden)
Thomas J Matthews
2014-06-01
Full Text Available A species abundance distribution (SAD) characterises patterns in the commonness and rarity of all species within an ecological community. As such, the SAD provides the theoretical foundation for a number of other biogeographical and macroecological patterns, such as the species–area relationship, as well as being an interesting pattern in its own right. While there has been a resurgence in the study of SADs in the last decade, less focus has been placed on methodology in SAD research, and few attempts have been made to synthesise the vast array of methods which have been employed in SAD model evaluation. As such, our review has two aims. First, we provide a general overview of SADs, including descriptions of the commonly used distributions, plotting methods and issues with evaluating SAD models. Second, we review a number of recent advances in SAD model fitting and comparison. We conclude by providing a list of recommendations for fitting and evaluating SAD models. We argue that it is time for SAD studies to move away from many of the traditional methods available for fitting and evaluating models, such as sole reliance on the visual examination of plots, and embrace statistically rigorous techniques. In particular, we recommend the use of both goodness-of-fit tests and model-comparison analyses because each provides unique information which one can use to draw inferences.
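A minimal sketch of the recommended pairing of a goodness-of-fit test with model comparison, using hypothetical abundance data and two candidate distributions (the specific distributions and data are assumptions, not taken from the review):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic species abundances drawn from a lognormal SAD (illustrative only).
abund = rng.lognormal(mean=2.0, sigma=1.0, size=300)

def aic(loglik, k):
    # Akaike information criterion: penalised log-likelihood.
    return 2 * k - 2 * loglik

# Candidate 1: lognormal (location fixed at 0 -> 2 free parameters).
s, loc, scale = stats.lognorm.fit(abund, floc=0)
aic_ln = aic(stats.lognorm.logpdf(abund, s, loc, scale).sum(), k=2)

# Candidate 2: exponential (location fixed at 0 -> 1 free parameter).
eloc, escale = stats.expon.fit(abund, floc=0)
aic_ex = aic(stats.expon.logpdf(abund, eloc, escale).sum(), k=1)

best = "lognormal" if aic_ln < aic_ex else "exponential"

# Goodness of fit of the selected model (Kolmogorov-Smirnov test).
ks = stats.kstest(abund, "lognorm", args=(s, loc, scale))
```

Model comparison picks the relatively better candidate, while the goodness-of-fit test asks whether even the winner is an adequate description — the two distinct pieces of information the review argues for.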
Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.
Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei
2015-02-01
This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.
Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten
2017-05-01
Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process, the models allow for profound conclusions about the test takers. However, before such a model can be used its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study. In this study we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
A global digital elevation model - GTOPO30
1999-01-01
GTOPO30, the U.S. Geological Survey's (USGS) digital elevation model (DEM) of the Earth, provides the first global coverage of moderate-resolution elevation data. The original GTOPO30 data set, which was developed over a 3-year period through a collaborative effort led by the USGS, was completed in 1996 at the USGS EROS Data Center in Sioux Falls, South Dakota. The collaboration involved contributions of staffing, funding, or source data from cooperators including the National Aeronautics and Space Administration (NASA), the United Nations Environment Programme Global Resource Information Database (UNEP/GRID), the U.S. Agency for International Development (USAID), the Instituto Nacional de Estadistica Geografia e Informatica (INEGI) of Mexico, the Geographical Survey Institute (GSI) of Japan, Manaaki Whenua Landcare Research of New Zealand, and the Scientific Committee on Antarctic Research (SCAR). In 1999, work was begun on an update to the GTOPO30 data set. Additional data sources are being incorporated into GTOPO30, with an enhanced and improved data set planned for release in 2000.
Use of wind data in global modelling
Pailleux, J.
1985-01-01
The European Centre for Medium Range Weather Forecasts (ECMWF) is producing operational global analyses every 6 hours and operational global forecasts every day from the 12Z analysis. How the wind data are used in the ECMWF global analysis is described. For each current wind observing system, its ability to provide initial conditions for the forecast model is discussed, as well as its weaknesses. An assessment of the impact of each individual system on the quality of the analysis and the forecast is given wherever possible. Sometimes the deficiencies which are pointed out are related not only to the observing system itself but also to the optimum interpolation (OI) analysis scheme; then some improvements are generally possible through ad hoc modifications of the analysis scheme and especially tunings of the structure functions. Examples are given. The future observing network over the North Atlantic is examined. Several countries, coordinated by WMO, are working to set up an 'Operational WWW System Evaluation' (OWSE), in order to evaluate the operational aspects of the deployment of new systems (ASDAR, ASAP). Most of the new systems are expected to be deployed before January 1987, and in order to make the best use of the available resources during the deployment phase, some network studies are being carried out at present, using simulated data for the ASDAR and ASAP systems. They are summarized.
eWaterCycle: A global operational hydrological forecasting model
van de Giesen, Nick; Bierkens, Marc; Donchyts, Gennadii; Drost, Niels; Hut, Rolf; Sutanudjaja, Edwin
2015-04-01
Development of an operational hyper-resolution hydrological global model is a central goal of the eWaterCycle project (www.ewatercycle.org). This operational model includes 14-day ensemble forecasts to predict water-related stress around the globe. Assimilation of near-real-time satellite data is part of the intended product that will be launched at EGU 2015. The challenges come from several directions. First, there are challenges that are mainly computer-science oriented but have direct practical hydrological implications: we aim to make as much use as possible of existing standards and open-source software. For example, different parts of our system are coupled through the Basic Model Interface (BMI) developed in the framework of the Community Surface Dynamics Modeling System (CSDMS). The PCR-GLOBWB model, built by Utrecht University, is the basic hydrological model that is the engine of the eWaterCycle project. Re-engineering of parts of the software was needed for it to run efficiently in a High Performance Computing (HPC) environment, to interface using BMI, and to run on multiple compute nodes in parallel. The final aim is a spatial resolution of 1 km x 1 km; the current resolution is 10 km x 10 km. This high resolution is computationally not too demanding but very memory intensive. The memory bottleneck becomes especially apparent in data assimilation, for which we use OpenDA. OpenDA allows for different data assimilation techniques without the need to build these from scratch. We have developed a BMI adaptor for OpenDA, allowing OpenDA to use any BMI-compatible model. To circumvent the memory shortages that would result from standard applications of the Ensemble Kalman Filter, we have developed a variant that does not need to keep all ensemble members in working memory. At EGU, we will present this variant and how it fits well in HPC environments. An important step in the eWaterCycle project was the coupling between the hydrological and
Local and omnibus goodness-of-fit tests in classical measurement error models
Ma, Yanyuan
2010-09-14
We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated, i.e. all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.
ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction
International Nuclear Information System (INIS)
Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.
2015-01-01
An appropriate maintenance policy is essential to reduce expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is the ability to predict the reliability of the systems under study, based on a well-fitted model. In this paper, the model classes Arithmetic Reduction of Age and Arithmetic Reduction of Intensity are explored. Likelihood functions for such models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed considering models with different memories. Parameters, namely the shape and scale of the Power Law Process and the efficiency of repair, were estimated for the best-fitted model. Estimation of the model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results are valuable information for the mining company and can be used to support decision making regarding preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks
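For the time-truncated Power Law Process underlying these repair models, the shape and scale maximum-likelihood estimates have a well-known closed form (the Crow/AMSAA estimators). A small sketch with made-up failure times, not the paper's truck data:

```python
import math

def plp_mle(times, T):
    """MLEs for a Power Law Process with intensity
    lambda(t) = (beta/theta) * (t/theta)**(beta - 1),
    observed on (0, T] with failure times `times` (time-truncated case)."""
    n = len(times)
    beta = n / sum(math.log(T / t) for t in times)   # shape estimate
    theta = T / n ** (1.0 / beta)                    # scale estimate
    return beta, theta

# Illustrative failure times in operating days (hypothetical data).
times = [10.0, 25.0, 40.0, 60.0, 75.0]
beta_hat, theta_hat = plp_mle(times, T=100.0)
```

A shape estimate below 1 suggests failure intensity decreasing over time, above 1 a deteriorating system; this is the kind of quantity the paper feeds into its reliability predictors.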
Modelling stratospheric chemistry in a global three-dimensional chemical transport model
Energy Technology Data Exchange (ETDEWEB)
Rummukainen, M [Finnish Meteorological Inst., Sodankylae (Finland). Sodankylae Observatory
1996-12-31
Numerical modelling of atmospheric chemistry aims to increase the understanding of the characteristics, the behavior and the evolution of atmospheric composition. These topics are of utmost importance in the study of climate change. The multitude of gases and particulates making up the atmosphere and the complicated interactions between them affect radiation transfer, atmospheric dynamics, and the impacts of anthropogenic and natural emissions. Chemical processes are fundamental factors in global warming, ozone depletion and atmospheric pollution problems in general. Much of the prevailing work on modelling stratospheric chemistry has so far been done with 1- and 2-dimensional models. Carrying an extensive chemistry parameterisation in a model with high spatial and temporal resolution is computationally heavy. Today, computers are becoming powerful enough to allow going over to 3-dimensional models. In order to concentrate on the chemistry, many Chemical Transport Models (CTM) are still run off-line, i.e. with precalculated and archived meteorology and radiation. In chemistry simulations, the archived values drive the model forward in time, without interacting with the chemical evolution. This is an approach that has been adopted in stratospheric chemistry modelling studies at the Finnish Meteorological Institute. In collaboration with the University of Oslo, a development project was initiated in 1993 to prepare a stratospheric chemistry parameterisation, fit for global 3-dimensional modelling. This article presents the parameterisation approach. Selected results are shown from basic photochemical simulations
Model-independent partial wave analysis using a massively-parallel fitting framework
Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.
2017-10-01
The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D⁺ → h⁺h⁺h⁻. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m²(h⁺h⁻) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.
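The spline-anchored S-wave idea can be sketched with SciPy rather than GooFit's GPU code. Interpolating magnitude and phase separately here is an assumption for illustration, and the control-point values are invented:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Control points in m^2(h+h-) at which the S-wave amplitude is anchored
# (illustrative values, not from a real amplitude analysis).
m2 = np.array([0.4, 0.8, 1.2, 1.6, 2.0])          # GeV^2
magnitude = np.array([1.0, 1.4, 0.9, 0.5, 0.3])
phase = np.array([0.2, 0.8, 1.5, 2.1, 2.4])       # radians

# Cubic splines interpolate between the anchored control points.
mag_spline = CubicSpline(m2, magnitude)
phase_spline = CubicSpline(m2, phase)

def s_wave(m2_val):
    # Complex S-wave amplitude at an arbitrary point in the Dalitz variable.
    return mag_spline(m2_val) * np.exp(1j * phase_spline(m2_val))

amp = s_wave(1.0)   # amplitude between the 0.8 and 1.2 control points
```

In the fit itself the control-point values are free parameters optimised against data, which is what makes the S-wave description model-independent.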
International Nuclear Information System (INIS)
Ji Zhilong; Ma Yuanwei; Wang Dezhong
2014-01-01
Background: In radioactive-nuclide atmospheric diffusion models, the empirical dispersion coefficients were deduced under certain experimental conditions, whose difference from nuclear accident conditions is a source of deviation. A better estimation of a radioactive nuclide's actual dispersion process can be obtained by correcting the dispersion coefficients with observation data, and a Genetic Algorithm (GA) is an appropriate method for this correction procedure. Purpose: This study analyzes the influence of the fitness function on the correction procedure and on the forecast ability of the diffusion model. Methods: A GA, coupled with a Lagrangian dispersion model, was used in a numerical simulation to compare the impact of four fitness functions on the correction result. Results: In the numerical simulation, the fitness function that takes observation deviation into consideration stands out when significant deviation exists in the observed data. After performing the correction procedure on the Kincaid experiment data, a significant boost was observed in the diffusion model's forecast ability. Conclusion: As the results show, in order to improve a dispersion model's forecast ability using a GA, observation data should be given different weights in the fitness function corresponding to their errors. (authors)
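A deviation-weighted fitness function of the kind the Results favour can be sketched with a toy one-parameter model and a minimal real-coded GA. Everything below (the forward model, the GA settings) is an illustrative assumption, not the study's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: predicted concentration c = q * g(x), with one unknown
# source strength q standing in for a dispersion-coefficient correction.
g = np.array([1.0, 0.5, 0.25, 0.125])     # fixed dilution factors
q_true = 2.0
sigma = np.array([0.1, 0.1, 0.5, 0.5])    # observation errors
obs = q_true * g                          # noiseless observations for clarity

def fitness(q):
    # Deviation-weighted fitness: observations with larger error get less
    # weight, mirroring the weighting the study found robust to noisy data.
    return -np.sum(((obs - q * g) / sigma) ** 2)

# Minimal real-coded GA: tournament selection, blend crossover,
# Gaussian mutation, and elitism.
pop = rng.uniform(0.0, 10.0, size=60)
for _ in range(80):
    fit = np.array([fitness(q) for q in pop])
    i, j = rng.integers(0, 60, size=(2, 60))
    parents = np.where(fit[i] > fit[j], pop[i], pop[j])   # tournaments
    alpha = rng.uniform(size=60)
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1)
    children += rng.normal(0.0, 0.05, size=60)            # mutation
    children[0] = pop[np.argmax(fit)]                     # keep the elite
    pop = np.clip(children, 0.0, 10.0)

q_hat = pop[np.argmax([fitness(q) for q in pop])]
```

Changing `sigma` changes which observations dominate the fitness, which is exactly the design question the study's four candidate fitness functions explore.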
2-D model of global aerosol transport
Energy Technology Data Exchange (ETDEWEB)
Rehkopf, J; Newiger, M; Grassl, H
1984-01-01
The distribution of aerosol particles in the troposphere is described. Starting with long-term mean seasonal flow and diffusivities as well as temperature, cloud distribution (six cloud classes), relative humidity and OH radical concentration, the steady-state concentrations of aerosol particles and SO₂ are calculated in a two-dimensional global (height and latitude) model. The following sources and sinks for particles are handled: direct emission, gas-to-particle conversion from SO₂, coagulation, rainout, washout, gravitational settling, and dry deposition. The sinks considered for sulphur emissions are dry deposition, washout, rainout, gas-phase oxidation, and aqueous-phase oxidation. Model tests with the water vapour cycle show good agreement between measured and calculated zonal mean precipitation distributions. The steady-state concentration distribution for natural emissions, reached after 10 weeks of model time, may be described by a mean exponent α = 3.2 near the surface assuming a modified Junge distribution, and an increased value, α = 3.7, for the combined natural and man-made emissions. The maximum ground-level concentrations are 2,000 and 10,000 particles cm⁻³ for natural and natural plus man-made emissions, respectively. The resulting distribution of sulphur dioxide agrees satisfactorily with measurements given by several authors. 37 references, 4 figures.
Integrated assessment models of global climate change
International Nuclear Information System (INIS)
Parson, E.A.; Fisher-Vanden, K.
1997-01-01
The authors review recent work in the integrated assessment modeling of global climate change. This field has grown rapidly since 1990. Integrated assessment models seek to combine knowledge from multiple disciplines in formal integrated representations; inform policy-making, structure knowledge, and prioritize key uncertainties; and advance knowledge of broad system linkages and feedbacks, particularly between socio-economic and bio-physical processes. They may combine simplified representations of the socio-economic determinants of greenhouse gas emissions, the atmosphere and oceans, impacts on human activities and ecosystems, and potential policies and responses. The authors summarize current projects, grouping them according to whether they emphasize the dynamics of emissions control and optimal policy-making, uncertainty, or spatial detail. They review the few significant insights that have been claimed from work to date and identify important challenges for integrated assessment modeling in its relationships to disciplinary knowledge and to broader assessment seeking to inform policy- and decision-making. 192 refs., 2 figs
A scaled Lagrangian method for performing a least squares fit of a model to plant data
International Nuclear Information System (INIS)
Crisp, K.E.
1988-01-01
Due to measurement errors, even a perfect mathematical model will not be able to match all the corresponding plant measurements simultaneously. A further discrepancy may be introduced if an un-modelled change in conditions occurs within the plant which should have required a corresponding change in model parameters - e.g. a gradual deterioration in the performance of some component(s). Taking both these factors into account, what is required is that the overall discrepancy between the model predictions and the plant data is kept to a minimum. This process is known as 'model fitting'. A method is presented for minimising any function which consists of a sum of squared terms, subject to any constraints. Its most obvious application is in the process of model fitting, where a weighted sum of squares of the differences between model predictions and plant data is the function to be minimised. When implemented within existing Central Electricity Generating Board computer models, it will perform a least squares fit of a model to plant data within a single job submission. (author)
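The weighted sum-of-squares objective described here can be sketched with an off-the-shelf least-squares routine; the exponential plant model, parameters, and data below are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical plant model: measured output y depends on input x through
# two parameters (say, an amplitude a and a decay constant k).
x = np.linspace(0.0, 10.0, 20)
y_true = 3.0 * np.exp(-0.4 * x)
meas_sigma = np.full_like(x, 0.05)        # measurement uncertainties
rng = np.random.default_rng(7)
y_meas = y_true + rng.normal(0.0, meas_sigma)

def residuals(params):
    a, k = params
    # Weighted residuals: each model-vs-plant discrepancy is scaled by its
    # measurement error, so the minimised objective is the weighted sum of
    # squares described in the abstract.
    return (y_meas - a * np.exp(-k * x)) / meas_sigma

fit = least_squares(residuals, x0=[1.0, 0.1])
a_hat, k_hat = fit.x
```

Measurements with smaller uncertainty pull the fit harder, which is the point of weighting: the fitted model tolerates larger discrepancies where the data are less trustworthy.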
Calibration of a simple and a complex model of global marine biogeochemistry
Kriest, Iris
2017-11-01
The assessment of the ocean biota's role in climate change is often carried out with global biogeochemical ocean models that contain many components and involve a high level of parametric uncertainty. Because many data that relate to tracers included in a model are only sparsely observed, assessment of model skill is often restricted to tracers that can be easily measured and assembled. Examination of the models' fit to climatologies of inorganic tracers, after the models have been spun up to steady state, is a common but computationally expensive procedure to assess model performance and reliability. Using new tools that have become available for global model assessment and calibration in steady state, this paper examines two different model types - a complex seven-component model (MOPS) and a very simple four-component model (RetroMOPS) - for their fit to dissolved quantities. Before comparing the models, a subset of their biogeochemical parameters has been optimised against annual-mean nutrients and oxygen. Both model types fit the observations almost equally well. The simple model contains only two nutrients: oxygen and dissolved organic phosphorus (DOP). Its misfit and large-scale tracer distributions are sensitive to the parameterisation of DOP production and decay. The spatio-temporal decoupling of nitrogen and oxygen, and processes involved in their uptake and release, renders oxygen and nitrate valuable tracers for model calibration. In addition, the non-conservative nature of these tracers (with respect to their upper boundary condition) introduces the global bias (fixed nitrogen and oxygen inventory) as a useful additional constraint on model parameters. Dissolved organic phosphorus at the surface behaves antagonistically to phosphate, and suggests that observations of this tracer - although difficult to measure - may be an important asset for model calibration.
Directory of Open Access Journals (Sweden)
Javier Macias-Guarasa
2012-10-01
Full Text Available This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints in the parameters of the model are included, enforcing the number of simultaneous active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with SRP-PHAT strategies.
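The generative-model fit can be sketched as follows, under a strong simplifying assumption: the SRP map is modelled as a single isotropic Gaussian "power blob" centred on the source, and the position is estimated by least-squares search over candidate locations. The paper's actual model and optimizer are richer (sparsity, subspace filtering); this only illustrates the model-fitting idea.

```python
import numpy as np

# Hypothetical generative model: an isotropic Gaussian power blob
# centred on the source (a simplification for illustration only).
def srp_model(gx, gy, x0, y0, amp=1.0, width=0.5):
    return amp * np.exp(-((gx - x0) ** 2 + (gy - y0) ** 2) / (2 * width ** 2))

rng = np.random.default_rng(0)
gx, gy = np.meshgrid(np.linspace(0, 4, 41), np.linspace(0, 4, 41))
true_pos = (2.5, 1.0)
# Synthetic "measured" SRP map: model plus noise / unmodelled effects.
srp_map = srp_model(gx, gy, *true_pos) + 0.05 * rng.standard_normal(gx.shape)

# Fit by exhaustive least-squares search over candidate source positions.
best, best_err = None, np.inf
for x0 in np.linspace(0, 4, 81):
    for y0 in np.linspace(0, 4, 81):
        err = np.sum((srp_map - srp_model(gx, gy, x0, y0)) ** 2)
        if err < best_err:
            best, best_err = (x0, y0), err
print(best)
```

In practice a gradient-based or sparse optimizer would replace the grid search, but the estimated position is the argument minimizing the model-data discrepancy either way.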
McCluskey, Ken W.
2010-01-01
This article presents the author's comments on Hisham B. Ghassib's "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?" Ghassib's article focuses on the transformation of science from pre-modern times to the present. Ghassib (2010) notes that, unlike in an earlier era when the economy depended on static…
Checking the Adequacy of Fit of Models from Split-Plot Designs
DEFF Research Database (Denmark)
Almini, A. A.; Kulahci, Murat; Montgomery, D. C.
2009-01-01
One of the main features that distinguish split-plot experiments from other experiments is that they involve two types of experimental errors: the whole-plot (WP) error and the subplot (SP) error. Taking this into consideration is very important when computing measures of adequacy of fit for split-plot models. In this article, we propose the computation of two R², R²-adjusted, prediction error sums of squares (PRESS), and R²-prediction statistics to measure the adequacy of fit for the WP and the SP submodels in a split-plot design. This is complemented with the graphical analysis of the two types of errors to check for any violation of the underlying assumptions and the adequacy of fit of split-plot models. Using examples, we show how computing two measures of model adequacy of fit for each split-plot design model is appropriate and useful as they reveal whether the correct WP and SP effects have…
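The PRESS and R²-prediction statistics mentioned above can be computed for an ordinary least-squares fit using the leave-one-out identity e_loo = e / (1 - h_ii). This generic OLS sketch does not perform the paper's WP/SP decomposition; it only shows how the two statistics are formed.

```python
import numpy as np

def press_and_r2_pred(X, y):
    """PRESS and R^2-prediction for an OLS fit y ~ X (X includes the
    intercept column), via the leave-one-out residual identity."""
    H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix
    e = y - H @ y                              # ordinary residuals
    press = np.sum((e / (1 - np.diag(H))) ** 2)
    sst = np.sum((y - y.mean()) ** 2)
    return press, 1 - press / sst              # (PRESS, R^2-prediction)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 20)
y = 2 + 3 * x + 0.1 * rng.standard_normal(20)
X = np.column_stack([np.ones_like(x), x])
press, r2_pred = press_and_r2_pred(X, y)
print(press, r2_pred)
```

For a split-plot design these statistics would be computed twice, once from the WP submodel residuals and once from the SP submodel residuals.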
Direct fit of a theoretical model of phase transition in oscillatory finger motions.
Newell, K.M.; Molenaar, P.C.M.
2003-01-01
This paper presents a general method to fit the Schoner-Haken-Kelso (SHK) model of human movement phase transitions directly to time series data. A robust variant of the extended Kalman filter technique is applied to the data of a single subject. The options of covariance resetting and iteration
A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.
Glas, Cees A. W.; Meijer, Rob R.
A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…
Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee
2013-07-01
Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
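A simplified version of such an item-fit residual can be sketched by grouping examinees on ability, then standardizing the gap between the observed proportion correct and the model-implied item characteristic curve by its binomial standard error. This illustrative form differs in detail from the paper's ratio-estimate construction, and the 2PL parameters below are invented.

```python
import numpy as np

def icc_2pl(theta, a, b):
    """Item characteristic curve of a two-parameter logistic (2PL) model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_fit_residuals(theta, responses, a, b, edges):
    """Standardized residuals comparing observed proportions correct in
    ability intervals with the model-implied ICC (illustrative form)."""
    res = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = (theta >= lo) & (theta < hi)
        n = idx.sum()
        if n == 0:
            continue
        p_obs = responses[idx].mean()
        p_mod = icc_2pl(theta[idx], a, b).mean()
        res.append((p_obs - p_mod) / np.sqrt(p_mod * (1 - p_mod) / n))
    return np.array(res)

# Simulate data from the model itself: residuals should be ~N(0, 1).
rng = np.random.default_rng(2)
theta = rng.standard_normal(5000)
a_true, b_true = 1.2, 0.3
responses = (rng.random(5000) < icc_2pl(theta, a_true, b_true)).astype(float)
res = item_fit_residuals(theta, responses, a_true, b_true,
                         np.linspace(-3, 3, 13))
print(np.abs(res).max())
```

When the fitted model is correct, as simulated here, no standardized residual should be extreme; large residuals in operational data flag item misfit.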
Fit Gap Analysis – The Role of Business Process Reference Models
Directory of Open Access Journals (Sweden)
Dejan Pajk
2013-12-01
Full Text Available Enterprise resource planning (ERP systems support solutions for standard business processes such as financial, sales, procurement and warehouse. In order to improve the understandability and efficiency of their implementation, ERP vendors have introduced reference models that describe the processes and underlying structure of an ERP system. To select and successfully implement an ERP system, the capabilities of that system have to be compared with a company’s business needs. Based on a comparison, all of the fits and gaps must be identified and further analysed. This step usually forms part of ERP implementation methodologies and is called fit gap analysis. The paper theoretically overviews methods for applying reference models and describes fit gap analysis processes in detail. The paper’s first contribution is its presentation of a fit gap analysis using standard business process modelling notation. The second contribution is the demonstration of a process-based comparison approach between a supply chain process and an ERP system process reference model. In addition to its theoretical contributions, the results can also be practically applied to projects involving the selection and implementation of ERP systems.
BETR global - A geographically-explicit global-scale multimedia contaminant fate model
International Nuclear Information System (INIS)
MacLeod, Matthew; Waldow, Harald von; Tay, Pascal; Armitage, James M.; Woehrnschimmel, Henry; Riley, William J.; McKone, Thomas E.; Hungerbuhler, Konrad
2011-01-01
We present two new software implementations of the BETR Global multimedia contaminant fate model. The model uses steady-state or non-steady-state mass-balance calculations to describe the fate and transport of persistent organic pollutants using a desktop computer. The global environment is described using a database of long-term average monthly conditions on a 15° × 15° grid. We demonstrate BETR Global by modeling the global sources, transport, and removal of decamethylcyclopentasiloxane (D5). - Two new software implementations of the Berkeley-Trent Global Contaminant Fate Model are available. The new model software is illustrated using a case study of the global fate of decamethylcyclopentasiloxane (D5).
Shavit Grievink, Liat; Penny, David; Hendy, Michael D; Holland, Barbara R
2010-05-01
Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction.
Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy
Nabizadeh, Nooshin; John, Nigel
2014-03-01
Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. Local image fitting energy obtains the local image information, which enables the algorithm to segment images with intensity inhomogeneities. The advantage of this method is that the LIF energy functional has less computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains the sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from being trapped in a local minimum, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.
Directory of Open Access Journals (Sweden)
Jaclyn K Mann
2014-08-01
Full Text Available Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model), which is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple and 25 single mutations within these) predicted to be either harmful to HIV-1 viability or fitness-neutral were introduced into HIV-1 NL4-3 by site-directed mutagenesis and the replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = -0.74, p = 3.6×10⁻⁶) are strongly correlated, and this was further strengthened in the regularized Ising model (r = -0.83, p = 3.7×10⁻¹²). Performance of the Potts model (r = -0.73, p = 9.7×10⁻⁹) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion.
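The Ising-style prediction and its correlation with measured replication capacity can be sketched as follows. The fields h and couplings J below are random placeholders, not values inferred from HIV sequence data; the point is only that the model assigns each binary mutation vector an energy, and higher energy corresponds to lower predicted fitness.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 10                                   # number of Gag sites (toy)
h = rng.normal(0.5, 0.2, L)              # fields: cost of mutating each site
J = np.triu(rng.normal(0.1, 0.05, (L, L)), 1)  # couplings, i < j only

def energy(s):
    """Ising energy of a binary mutation vector s (1 = mutant site).
    Higher energy ~ lower predicted fitness."""
    return h @ s + s @ J @ s

# Generate mutant strains and synthetic "measured" replication capacities
# that decrease with energy, plus experimental noise.
strains = (rng.random((40, L)) < 0.3).astype(float)
E = np.array([energy(s) for s in strains])
measured = -E + 0.1 * rng.standard_normal(40)

r = np.corrcoef(E, measured)[0, 1]       # expect a strong negative correlation
print(round(r, 2))
```

A Potts generalization would replace the binary s with the identity of the mutant amino acid at each site; the energy-versus-fitness correlation test is the same.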
Directory of Open Access Journals (Sweden)
Rita Yi Man Li
2012-03-01
Full Text Available Entrepreneurs have always borne the risk of running their business. They reap a profit in return for their risk taking and work. Housing developers are no different. In many countries, such as Australia, the United Kingdom and the United States, they interpret the tastes of the buyers and provide the dwellings they develop with basic fittings such as floor and wall coverings, bathroom fittings and kitchen cupboards. In mainland China, however, in most developments, units or houses are sold without floor or wall coverings, kitchen or bathroom fittings. What is the motive behind this choice? This paper analyses the factors affecting housing developers' decisions to provide fittings based on 1701 housing developments in Hangzhou, Chongqing and Hangzhou using a Probit model. The results show that developers build a higher proportion of bare units in mainland China when: (1) there is a shortage of housing; (2) land costs are high, so that the comparative costs of providing fittings become relatively low.
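A probit model of this kind can be fitted by maximizing the likelihood of a binary outcome whose probability is the standard normal CDF of a linear predictor. The sketch below uses one synthetic covariate (a hypothetical land-cost index) rather than the paper's actual regressors and data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic data: probability that a development is sold "bare"
# (without fittings) as a probit function of a land-cost index.
rng = np.random.default_rng(4)
n = 2000
land_cost = rng.standard_normal(n)
beta_true = np.array([-0.5, 1.0])            # intercept, slope
p = norm.cdf(beta_true[0] + beta_true[1] * land_cost)
bare = (rng.random(n) < p).astype(float)

X = np.column_stack([np.ones(n), land_cost])

def neg_loglik(beta):
    """Negative probit log-likelihood."""
    eta = X @ beta
    return -np.sum(bare * norm.logcdf(eta) + (1 - bare) * norm.logcdf(-eta))

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print(fit.x)
```

A positive fitted slope on the land-cost index would match the paper's finding that higher land costs raise the proportion of bare units.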
Anticipating mismatches of HIT investments: Developing a viability-fit model for e-health services.
Mettler, Tobias
2016-01-01
Despite massive investments in recent years, the impact of health information technology (HIT) has been controversial and strongly disputed by both research and practice. While many studies are concerned with the development of new or the refinement of existing measurement models for assessing the impact of HIT adoption (ex post), this study presents an initial attempt to better understand the factors affecting viability and fit of HIT and thereby underscores the importance of also having instruments for managing expectations (ex ante). We extend prior research by undertaking a more granular investigation into the theoretical assumptions of viability and fit constructs. In doing so, we use a mixed-methods approach, conducting qualitative focus group discussions and a quantitative field study to improve and validate a viability-fit measurement instrument. Our findings suggest two issues for research and practice. First, the results indicate that different stakeholders perceive HIT viability and fit of the same e-health services very unequally. Second, the analysis also demonstrates that there can be a great discrepancy between the organizational viability and individual fit of a particular e-health service. The findings of this study have a number of important implications such as for health policy making, HIT portfolios, and stakeholder communication. Copyright © 2015. Published by Elsevier Ireland Ltd.
James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll
2003-01-01
This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
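The core of regression calibration is to replace the unobserved true covariate with its best linear predictor given error-prone replicates, which undoes the attenuation of a naive fit. The sketch below uses simple linear regression with two replicates and invented variances; the paper's GLM setting generalizes the same idea.

```python
import numpy as np

# The true covariate X is unobserved; we see two error-prone replicates.
rng = np.random.default_rng(5)
n = 5000
x = rng.standard_normal(n)                    # true covariate
w1 = x + 0.8 * rng.standard_normal(n)         # replicate 1
w2 = x + 0.8 * rng.standard_normal(n)         # replicate 2
y = 1.0 + 2.0 * x + 0.5 * rng.standard_normal(n)

wbar = (w1 + w2) / 2
# Estimate the measurement-error variance from replicate differences,
# then shrink the replicate mean toward its grand mean (calibration).
sigma_u2 = np.var(w1 - w2, ddof=1) / 2        # per-replicate error variance
sigma_x2 = np.var(wbar, ddof=1) - sigma_u2 / 2
lam = sigma_x2 / (sigma_x2 + sigma_u2 / 2)    # reliability of the mean
x_calib = wbar.mean() + lam * (wbar - wbar.mean())

# Naive fit (attenuated toward zero) vs regression-calibrated fit.
b_naive = np.polyfit(wbar, y, 1)[0]
b_calib = np.polyfit(x_calib, y, 1)[0]
print(round(b_naive, 2), round(b_calib, 2))
```

The naive slope is biased toward zero by the factor lam, while the calibrated slope recovers the true value of 2 (up to sampling error).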
Statistical topography of fitness landscapes
Franke, Jasper
2011-01-01
Fitness landscapes are generalized energy landscapes that play an important conceptual role in evolutionary biology. These landscapes provide a relation between the genetic configuration of an organism and that organism’s adaptive properties. In this work, global topographical features of these fitness landscapes are investigated using theoretical models. The resulting predictions are compared to empirical landscapes. It is shown that these landscapes allow, at least with respe...
DEFF Research Database (Denmark)
Nielsen, Karen L.; Pedersen, Thomas M.; Udekwu, Klas I.
2012-01-01
phage types, predominantly only penicillin resistant. We investigated whether isolates of this epidemic were associated with a fitness cost, and we employed a mathematical model to ask whether these fitness costs could have led to the observed reduction in frequency. Bacteraemia isolates of S. aureus...... from Denmark have been stored since 1957. We chose 40 S. aureus isolates belonging to phage complex 83A, clonal complex 8 based on spa type, ranging in time of isolation from 1957 to 1980 and with various antibiograms, including both methicillin-resistant and -susceptible isolates. The relative fitness...... of each isolate was determined in a growth competition assay with a reference isolate. Significant fitness costs of 215 were determined for the MRSA isolates studied. There was a significant negative correlation between number of antibiotic resistances and relative fitness. Multiple regression analysis...
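A common way to express relative fitness from a head-to-head growth competition assay is the ratio of realized (Malthusian) growth rates of the test and reference isolates; the exact protocol and estimator used in the paper may differ, and the counts below are invented.

```python
import numpy as np

def relative_fitness(test_0, test_f, ref_0, ref_f):
    """Relative (Malthusian) fitness of a test isolate versus a reference
    isolate competing in the same culture: ratio of realized growth rates.
    Arguments are CFU counts at the start (_0) and end (_f) of the assay."""
    return np.log(test_f / test_0) / np.log(ref_f / ref_0)

# Toy example: the test isolate grows 100-fold while the reference grows
# 1000-fold, so the test strain carries a measurable fitness cost (w < 1).
w = relative_fitness(1e4, 1e6, 1e4, 1e7)
print(round(w, 3))
```

Values of w below 1 indicate a fitness cost relative to the reference, which is the quantity correlated against the number of antibiotic resistances above.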
BETR Global - A geographically explicit global-scale multimedia contaminant fate model
Energy Technology Data Exchange (ETDEWEB)
Macleod, M.; Waldow, H. von; Tay, P.; Armitage, J. M.; Wohrnschimmel, H.; Riley, W.; McKone, T. E.; Hungerbuhler, K.
2011-04-01
We present two new software implementations of the BETR Global multimedia contaminant fate model. The model uses steady-state or non-steady-state mass-balance calculations to describe the fate and transport of persistent organic pollutants using a desktop computer. The global environment is described using a database of long-term average monthly conditions on a 15° × 15° grid. We demonstrate BETR Global by modeling the global sources, transport, and removal of decamethylcyclopentasiloxane (D5).
Moduli stabilisation for chiral global models
International Nuclear Information System (INIS)
Cicoli, Michele; Mayrhofer, Christoph; Valandro, Roberto
2011-10-01
We combine moduli stabilisation and (chiral) model building in a fully consistent global set-up in Type IIB/F-theory. We consider compactifications on Calabi-Yau orientifolds which admit an explicit description in terms of toric geometry. We build globally consistent compactifications with tadpole and Freed-Witten anomaly cancellation by choosing appropriate brane set-ups and world-volume fluxes which also give rise to SU(5)- or MSSM-like chiral models. We fix all the Kaehler moduli within the Kaehler cone and the regime of validity of the 4D effective field theory. This is achieved in a way compatible with the local presence of chirality. The hidden sector generating the non-perturbative effects is placed on a del Pezzo divisor that does not have any chiral intersections with any other brane. In general, the vanishing D-term condition implies the shrinking of the rigid divisor supporting the visible sector. However, we avoid this problem by generating r < n D-term conditions on a set of n intersecting divisors. The remaining (n-r) flat directions are fixed by perturbative corrections to the Kaehler potential. We illustrate our general claims in an explicit example. We consider a K3-fibred Calabi-Yau with four Kaehler moduli, that is a hypersurface in a toric ambient space and admits a simple F-theory up-lift. We present explicit choices of brane set-ups and fluxes which lead to three different phenomenological scenarios: the first with GUT-scale strings and TeV-scale SUSY by fine-tuning the background fluxes; the second with an exponentially large value of the volume and TeV-scale SUSY without fine-tuning the background fluxes; and the third with a very anisotropic configuration that leads to TeV-scale strings and two micron-sized extra dimensions. The K3 fibration structure of the Calabi-Yau three-fold is also particularly suitable for cosmological purposes. (orig.)
A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections
Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.
2014-01-01
A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separate from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations of the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.
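The two-step "wind-off" fit described above can be sketched numerically: step 1 estimates the model weight W by least squares from the gravity force components measured at several pitch angles, and step 2 reuses that estimate to fit the center-of-gravity offsets from the measured pitching moments. The sign conventions and equations below are simplified placeholders, not the paper's full balance-axis formulation.

```python
import numpy as np

rng = np.random.default_rng(6)
theta = np.deg2rad(np.array([-10, -5, 0, 5, 10, 15]))   # wind-off pitch angles
W_true, x_cg, z_cg = 50.0, 0.20, 0.05                   # invented values

# Simulated wind-off balance readings (gravity only, plus noise).
normal = W_true * np.cos(theta) + 0.05 * rng.standard_normal(theta.size)
axial = -W_true * np.sin(theta) + 0.05 * rng.standard_normal(theta.size)
moment = (W_true * (x_cg * np.cos(theta) + z_cg * np.sin(theta))
          + 0.02 * rng.standard_normal(theta.size))

# Step 1: least squares estimate of W from the stacked force equations.
g = np.concatenate([np.cos(theta), -np.sin(theta)])
f = np.concatenate([normal, axial])
W_hat = (g @ f) / (g @ g)

# Step 2: least squares estimate of (x_cg, z_cg) using W_hat as input.
A = W_hat * np.column_stack([np.cos(theta), np.sin(theta)])
cg_hat, *_ = np.linalg.lstsq(A, moment, rcond=None)
print(round(W_hat, 2), np.round(cg_hat, 3))
```

Treating the weight fit separately, as here, mirrors the paper's design: the center-of-gravity fit is conditioned on the already-estimated weight.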
A flexible, interactive software tool for fitting the parameters of neuronal models.
Friedrich, Péter; Vella, Michael; Gulyás, Attila I; Freund, Tamás F; Káli, Szabolcs
2014-01-01
The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.
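A minimal instance of the kind of fitting Optimizer automates is recovering the passive membrane parameters of a single-compartment neuron from its voltage response to a current step. The sketch below uses a plain nonlinear least-squares fit rather than Optimizer itself, and all parameter values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, R, tau, I=0.1, v_rest=-65.0):
    """Membrane voltage (mV) of a passive compartment responding to a
    current step I (nA), with input resistance R (MOhm) and time
    constant tau (ms)."""
    return v_rest + I * R * (1.0 - np.exp(-t / tau))

# Synthetic "experimental" trace: known parameters plus recording noise.
rng = np.random.default_rng(7)
t = np.linspace(0, 100, 200)                           # ms
target = step_response(t, R=150.0, tau=20.0) + 0.2 * rng.standard_normal(t.size)

(R_fit, tau_fit), _ = curve_fit(step_response, t, target, p0=[100.0, 10.0])
print(round(R_fit, 1), round(tau_fit, 1))
```

Real use cases replace the analytic model with a NEURON simulation and the squared-error cost with a user-chosen cost function, but the optimization loop has the same structure.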
Global plastic models for computerized structural analysis
International Nuclear Information System (INIS)
Roche, R.; Hoffmann, A.
1977-01-01
Two different global models are used in the CEASEMT system for structural analysis, one for shell analysis and the other for piping analysis (in the plastic or creep field). In shell analysis the generalized stresses chosen are the membrane forces Nsub(ij) and the bending (including torsion) moments Msub(ij). There is only one yield condition for a normal (to the middle surface) and no integration along the thickness is required. In piping analysis, the choice of generalized stresses is: bending moments, torsional moments, hoop stress and tension stress. There is only one set of stresses for a cross-section and no integration over the cross-section area is needed. The corresponding strains are the axis curvature, torsion and uniform strains. The definition of the yield surface is the most important item. A practical way is to use a diagonal quadratic function of the stress components, but the coefficients depend on the shape of the pipe element, especially for curved segments. Indications are given on the yield functions used. Some examples of applications in structural analysis are added to the text [fr]
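The diagonal quadratic yield function mentioned above can be sketched as a sum of squared stress ratios: below 1 the cross-section stays elastic, and 1 marks yield. The limit values and the selection of coefficients are pure illustration; in CEASEMT they depend on the pipe element shape.

```python
import numpy as np

def yield_function(stresses, limits):
    """Diagonal quadratic yield function: sum_i (S_i / S_i,limit)^2,
    where S_i are the generalized stresses of the pipe cross-section
    (bending moment, torsional moment, hoop stress, tension stress)."""
    s = np.asarray(stresses, dtype=float)
    lim = np.asarray(limits, dtype=float)
    return np.sum((s / lim) ** 2)

# Invented yield limits for (M, T, hoop, tension).
limits = [100.0, 80.0, 200.0, 250.0]
f = yield_function([50.0, 40.0, 0.0, 0.0], limits)
print(f)   # below 1 -> the cross-section is still elastic
```

Because the function depends only on the cross-section's generalized stresses, no integration over the cross-section area is needed, which is exactly the computational advantage claimed above.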
The fitting parameters extraction of conversion model of the low dose rate effect in bipolar devices
International Nuclear Information System (INIS)
Bakerenkov, Alexander
2011-01-01
The Enhanced Low Dose Rate Sensitivity (ELDRS) effect in bipolar devices consists in an increase of the base current degradation of NPN and PNP transistors as the dose rate is decreased. As a result of almost 20 years of study, several physical models of the effect have been developed and described in detail. Accelerated test methods based on these models are used in standards. A conversion model of the effect, which describes the inverse S-shaped dependence of the excess base current on dose rate, was proposed earlier. This paper presents the extraction of the fitting parameters of this conversion model.
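One generic way to extract parameters for an inverse S-shaped dose-rate dependence is a nonlinear least-squares fit of a logistic in log10(dose rate). The functional form and parameter names below (k_max, k_min, transition point x0, slope p) are illustrative assumptions, not the paper's conversion model.

```python
import numpy as np
from scipy.optimize import curve_fit

def excess_base_current(dose_rate, k_max, k_min, x0, p):
    """Inverse S-shaped curve: high degradation k_max at low dose rates,
    low degradation k_min at high dose rates, logistic transition at
    log10(dose rate) = x0 with slope p."""
    x = np.log10(dose_rate)
    return k_min + (k_max - k_min) / (1.0 + np.exp(p * (x - x0)))

# Synthetic measurements over five decades of dose rate, with 3% scatter.
rng = np.random.default_rng(8)
dose_rate = np.logspace(-3, 2, 25)               # e.g. rad(Si)/s
true = excess_base_current(dose_rate, 5.0, 1.0, -1.0, 2.0)
data = true * (1 + 0.03 * rng.standard_normal(dose_rate.size))

popt, _ = curve_fit(excess_base_current, dose_rate, data,
                    p0=[4.0, 0.5, 0.0, 1.0])
print(np.round(popt, 2))
```

The fitted asymptotes then give the low- and high-dose-rate degradation levels used to anchor accelerated test conditions.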
Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.
1994-01-01
Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)
Global Information Enterprise (GIE) Modeling and Simulation (GIESIM)
National Research Council Canada - National Science Library
Bell, Paul
2005-01-01
... AND S) toolkits into the Global Information Enterprise (GIE) Modeling and Simulation (GIESim) framework to create effective user analysis of candidate communications architectures and technologies...
Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components
Zhang, Saijuan
2011-01-06
There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov Chain Monte Carlo (MCMC) computation of fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole
Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components
Zhang, Saijuan; Krebs-Smith, Susan M.; Midthune, Douglas; Perez, Adriana; Buckman, Dennis W.; Kipnis, Victor; Freedman, Laurence S.; Dodd, Kevin W.; Carroll, Raymond J
2011-01-01
There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov Chain Monte Carlo (MCMC) computation of fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole
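The two-part idea behind these models can be illustrated with a simulation: part 1 gives the probability of consuming the component on a given day, part 2 the skewed amount on consumption days, and usual intake is the long-run daily mean. All distributions and numbers below are invented placeholders, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(9)
n_people, n_days = 1000, 365

p_consume = 0.25                       # daily probability of eating fish
mu, sigma = 1.0, 0.6                   # log-amount on consumption days

# Part 1: does the person consume today? Part 2: how much, if so?
eats = rng.random((n_people, n_days)) < p_consume
amounts = np.where(eats, rng.lognormal(mu, sigma, (n_people, n_days)), 0.0)

usual_intake = amounts.mean(axis=1)    # per-person long-run daily average
theoretical = p_consume * np.exp(mu + sigma ** 2 / 2)
print(round(usual_intake.mean(), 3), round(theoretical, 3))
```

The zero-inflated, skewed shape of the daily data is exactly what the mixed-model machinery above must handle once within-person measurement error and energy adjustment are added.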
Energy Technology Data Exchange (ETDEWEB)
Furlan, E. [Infrared Processing and Analysis Center, California Institute of Technology, 770 S. Wilson Ave., Pasadena, CA 91125 (United States); Fischer, W. J. [Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771 (United States); Ali, B. [Space Science Institute, 4750 Walnut Street, Boulder, CO 80301 (United States); Stutz, A. M. [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Stanke, T. [ESO, Karl-Schwarzschild-Strasse 2, D-85748 Garching bei München (Germany); Tobin, J. J. [National Radio Astronomy Observatory, Charlottesville, VA 22903 (United States); Megeath, S. T.; Booker, J. [Ritter Astrophysical Research Center, Department of Physics and Astronomy, University of Toledo, 2801 W. Bancroft Street, Toledo, OH 43606 (United States); Osorio, M. [Instituto de Astrofísica de Andalucía, CSIC, Camino Bajo de Huétor 50, E-18008 Granada (Spain); Hartmann, L.; Calvet, N. [Department of Astronomy, University of Michigan, 500 Church Street, Ann Arbor, MI 48109 (United States); Poteet, C. A. [New York Center for Astrobiology, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY 12180 (United States); Manoj, P. [Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India); Watson, D. M. [Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627 (United States); Allen, L., E-mail: furlan@ipac.caltech.edu [National Optical Astronomy Observatory, 950 N. Cherry Avenue, Tucson, AZ 85719 (United States)
2016-05-01
We present key results from the Herschel Orion Protostar Survey: spectral energy distributions (SEDs) and model fits of 330 young stellar objects, predominantly protostars, in the Orion molecular clouds. This is the largest sample of protostars studied in a single, nearby star formation complex. With near-infrared photometry from 2MASS, mid- and far-infrared data from Spitzer and Herschel, and submillimeter photometry from APEX, our SEDs cover 1.2–870 μm and sample the peak of the protostellar envelope emission at ∼100 μm. Using mid-IR spectral indices and bolometric temperatures, we classify our sample into 92 Class 0 protostars, 125 Class I protostars, 102 flat-spectrum sources, and 11 Class II pre-main-sequence stars. We implement a simple protostellar model (including a disk in an infalling envelope with outflow cavities) to generate a grid of 30,400 model SEDs and use it to determine the best-fit model parameters for each protostar. We argue that far-IR data are essential for accurate constraints on protostellar envelope properties. We find that most protostars, and in particular the flat-spectrum sources, are well fit. The median envelope density and median inclination angle decrease from Class 0 to Class I to flat-spectrum protostars, despite the broad range in best-fit parameters in each of the three categories. We also discuss degeneracies in our model parameters. Our results confirm that the different protostellar classes generally correspond to an evolutionary sequence with a decreasing envelope infall rate, but the inclination angle also plays a role in the appearance, and thus interpretation, of the SEDs.
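The classification by mid-IR spectral index and bolometric temperature mentioned above can be sketched as follows. The boundary values used here (α = ±0.3 for flat-spectrum sources, T_bol = 70 K separating Class 0 from Class I) are the conventional literature choices, assumed for illustration; the survey's exact criteria may differ.

```python
import numpy as np

def spectral_index(wavelengths_um, flux_jy):
    """Least-squares slope of log(lambda * F_lambda) versus log(lambda).

    For fluxes F_nu in Jy, lambda * F_lambda = nu * F_nu is proportional
    to F_nu / lambda, so we fit log10(F_nu / lambda) against log10(lambda).
    """
    lam = np.asarray(wavelengths_um, float)
    lam_flam = np.asarray(flux_jy, float) / lam
    slope, _ = np.polyfit(np.log10(lam), np.log10(lam_flam), 1)
    return slope

def classify(alpha, t_bol_k):
    # Conventional boundaries (assumed): alpha >= 0.3 rising SED,
    # -0.3 <= alpha < 0.3 flat spectrum, alpha < -0.3 Class II;
    # T_bol < 70 K distinguishes Class 0 from Class I.
    if alpha >= 0.3:
        return "Class 0" if t_bol_k < 70 else "Class I"
    if alpha >= -0.3:
        return "flat-spectrum"
    return "Class II"
```

A source with fluxes rising steeply through the mid-IR and a cold bolometric temperature would land in Class 0 under these rules.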
SEP modeling based on global heliospheric models at the CCMC
Mays, M. L.; Luhmann, J. G.; Odstrcil, D.; Bain, H. M.; Schwadron, N.; Gorby, M.; Li, Y.; Lee, K.; Zeitlin, C.; Jian, L. K.; Lee, C. O.; Mewaldt, R. A.; Galvin, A. B.
2017-12-01
Heliospheric models provide contextual information of conditions in the heliosphere, including the background solar wind conditions and shock structures, and are used as input to SEP models, providing an essential tool for understanding SEP properties. The global 3D MHD WSA-ENLIL+Cone model provides a time-dependent background heliospheric description, into which a spherical shaped hydrodynamic CME can be inserted. ENLIL simulates solar wind parameters and additionally one can extract the magnetic topologies of observer-connected magnetic field lines and all plasma and shock properties along those field lines. An accurate representation of the background solar wind is necessary for simulating transients. ENLIL simulations also drive SEP models such as the Solar Energetic Particle Model (SEPMOD) (Luhmann et al. 2007, 2010) and the Energetic Particle Radiation Environment Module (EPREM) (Schwadron et al. 2010). The Community Coordinated Modeling Center (CCMC) is in the process of making these SEP models available to the community and offering a system to run SEP models driven by a variety of heliospheric models available at CCMC. SEPMOD injects protons onto a sequence of observer field lines at intensities dependent on the connected shock source strength which are then integrated at the observer to approximate the proton flux. EPREM couples with MHD models such as ENLIL and computes energetic particle distributions based on the focused transport equation along a Lagrangian grid of nodes that propagate out with the solar wind. The coupled ENLIL and SEP models allow us to derive the longitudinal distribution of SEP profiles of different types of events throughout the heliosphere. In this presentation we demonstrate several case studies of SEP event modeling at different observers based on WSA-ENLIL+Cone simulations.
Globalization and Shanghai Model: A Retrospective and Prospective Analysis
Directory of Open Access Journals (Sweden)
Linsun Cheng
2012-11-01
Intended to shed light on the debate on the results of globalization and provide better understanding of the influences of globalization upon China as well as the world, this article traces the history of Shanghai's economic globalization over the past 170 years since 1843 and demonstrates the benefits and problems Shanghai received from (or connected to) its economic globalization. Divided into three sections (Globalization, De-globalization and Re-globalization of Shanghai's Economy; Manufacturing-Oriented vs. Tertiary-Oriented: Shanghai's Double Priority Strategy of Economic Growth; Free Market, State Enterprises, and Shanghai's Mixed Economy), the article summarizes and analyzes several characteristics that made Shanghai a unique model in the history of globalization. In adapting and adopting inevitable economic globalization, Shanghai created its unique model of economic development: widely embracing economic globalization; placing Shanghai's economy on a solid foundation of both strong modern manufacturing and strong tertiary industry (consisting of finance and insurance, real estate, transportation, post and telecommunication, wholesale and retailing); and creating a mixed economic structure with a hybrid of private and state-owned enterprises. The Shanghai model proves that globalization has been an unavoidable trend as science and technology have made the world "smaller" and "smaller." Actively engaging in economic globalization is the only way for Shanghai, as well as many developing countries, to accelerate its economic growth.
International Nuclear Information System (INIS)
Mbagwu, J.S.C.
1993-10-01
Six infiltration models, some obtained by reformulating the fitting parameters of the classical Kostiakov (1932) and Philip (1957) equations, were investigated for their ability to describe water infiltration into highly permeable sandy soils from the Nsukka plains of SE Nigeria. The models were Kostiakov, Modified Kostiakov (A), Modified Kostiakov (B), Philip, Modified Philip (A) and Modified Philip (B). Infiltration data were obtained from double ring infiltrometers on field plots established on a Kandic Paleustult (Nkpologu series) to investigate the effects of land use on soil properties and maize yield. The treatments were: (i) tilled-mulched (TM), (ii) tilled-unmulched (TU), (iii) untilled-mulched (UM), (iv) untilled-unmulched (UU) and (v) continuous pasture (CP). Cumulative infiltration was highest on the TM and lowest on the CP plots. All estimated model parameters obtained by the best fit of measured data differed significantly among the treatments. Based on the magnitude of R² values, the Kostiakov, Modified Kostiakov (A), Philip and Modified Philip (A) models provided the best predictions of cumulative infiltration as a function of time. Comparing experimental with model-predicted cumulative infiltration showed, however, that on all treatments the values predicted by the classical Kostiakov, Philip and Modified Philip (A) models deviated most from experimental data. The other models produced values that agreed very well with measured data. Considering the ease of determining the fitting parameters, it is proposed that on soils with high infiltration rates, either the Modified Kostiakov model (I = Kt^a + I_c t) or the Modified Philip model (I = St^(1/2) + I_c t), where I is cumulative infiltration, K the time coefficient, t the time elapsed, 'a' the time exponent, I_c the equilibrium infiltration rate and S the soil water sorptivity, be used for routine characterization of the infiltration process. (author). 33 refs, 3 figs, 6 tabs
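The Modified Kostiakov form I = Kt^a + I_c·t quoted above can be fit by nonlinear least squares in a few lines. This is a sketch on synthetic data; the parameter values (K = 4.0, a = 0.4, I_c = 1.5) are hypothetical and not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_kostiakov(t, K, a, Ic):
    """Cumulative infiltration I(t) = K*t^a + Ic*t (transient + equilibrium term)."""
    return K * t**a + Ic * t

# Hypothetical cumulative infiltration (cm) at elapsed times t (h),
# generated from assumed parameters K=4.0, a=0.4, Ic=1.5
t = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0])
I_obs = modified_kostiakov(t, 4.0, 0.4, 1.5)

popt, _ = curve_fit(modified_kostiakov, t, I_obs, p0=[1.0, 0.5, 1.0])
K_hat, a_hat, Ic_hat = popt
```

The same call with `modified_philip = lambda t, S, Ic: S * np.sqrt(t) + Ic * t` fits the Modified Philip form; comparing the two residual sums of squares reproduces the kind of model comparison the abstract describes.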
Mandal, S.; Choudhury, B. U.
2015-07-01
Sagar Island, situated on the continental shelf of the Bay of Bengal, is one of the most vulnerable deltas to the occurrence of extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best-fit distribution models for the annual, seasonal and monthly time series based on maximum rank with minimum value of test statistics, three statistical goodness-of-fit tests, viz. the Kolmogorov-Smirnov test (K-S), the Anderson-Darling test (A²) and the Chi-square test (χ²), were employed. The best-fit probability distribution was identified from the highest overall score obtained from the three goodness-of-fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon and summer season MDR, while Lognormal, Weibull and Pearson 5 were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of getting an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3 % level of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85 %) for MDR of >100 mm and moderate probabilities (37 to 46 %) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
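The workflow described above (fit several candidate distributions, rank them by a goodness-of-fit statistic, then read off return-period quantiles) can be sketched with SciPy. The data here are synthetic and the candidate set is abbreviated; only the mechanics mirror the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical 29-year record of annual maximum daily rainfall (mm)
mdr = rng.lognormal(mean=4.0, sigma=0.4, size=29)

candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
}

# Fit each candidate by maximum likelihood and score it with the K-S statistic
results = {}
for name, dist in candidates.items():
    params = dist.fit(mdr)
    ks_stat, _ = stats.kstest(mdr, dist.cdf, args=params)
    results[name] = (ks_stat, params)

best = min(results, key=lambda name: results[name][0])  # smallest K-S wins

# T-year return level = quantile at non-exceedance probability 1 - 1/T
ks, params = results[best]
mdr_25yr = candidates[best].ppf(1 - 1 / 25, *params)
```

Adding `stats.anderson` (where supported) and a chi-square binning test would complete the three-test scoring scheme of the paper.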
Efficient Constrained Local Model Fitting for Non-Rigid Face Alignment.
Lucey, Simon; Wang, Yang; Cox, Mark; Sridharan, Sridha; Cohn, Jeffery F
2009-11-01
Active appearance models (AAMs) have demonstrated great utility when being employed for non-rigid face alignment/tracking. The "simultaneous" algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real time performance (2-3 fps). The "project-out" algorithm for fitting an AAM achieves faster than real time performance (> 200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve superior performance to the "simultaneous" AAM algorithm along with real time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criteria, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criteria for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the "exhaustive local search" (ELS) algorithm. Experiments were conducted on the CMU Multi-PIE database.
National Aeronautics and Space Administration — The ability of the Global Hawk air vehicle to autonomously fly long distances and remain aloft for extended periods of time means that measuring, monitoring, and...
Development and Analysis of Volume Multi-Sphere Method Model Generation using Electric Field Fitting
Ingram, G. J.
Electrostatic modeling of spacecraft has wide-reaching applications such as detumbling space debris in the Geosynchronous Earth Orbit regime before docking, servicing and tugging space debris to graveyard orbits, and Lorentz augmented orbits. The viability of electrostatic actuation control applications relies on faster-than-realtime characterization of the electrostatic interaction. The Volume Multi-Sphere Method (VMSM) seeks the optimal placement and radii of a small number of equipotential spheres to accurately model the electrostatic force and torque on a conducting space object. Current VMSM models tuned using force and torque comparisons with commercially available finite element software are subject to the modeled probe size and numerical errors of the software. This work first investigates fitting of VMSM models to Surface-MSM (SMSM) generated electrical field data, removing modeling dependence on probe geometry while significantly increasing performance and speed. A proposed electric field matching cost function is compared to a force and torque cost function, the inclusion of a self-capacitance constraint is explored and 4 degree-of-freedom VMSM models generated using electric field matching are investigated. The resulting E-field based VMSM development framework is illustrated on a box-shaped hub with a single solar panel, and convergence properties of select models are qualitatively analyzed. Despite the complex non-symmetric spacecraft geometry, elegantly simple 2-sphere VMSM solutions provide force and torque fits within a few percent.
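The core of any multi-sphere electrostatic model is solving an elastance system for the sphere charges at a common spacecraft potential and summing Coulomb contributions. The sketch below shows only that core for fixed, hand-picked sphere positions and radii; the actual VMSM contribution is optimizing those placements and radii against SMSM field data, which this omits.

```python
import numpy as np

K = 8.99e9  # Coulomb constant, N m^2 / C^2

def msm_charges(centers, radii, potential):
    """Solve the elastance system S q = V for the sphere charges.

    S_ii = K / r_i and S_ij = K / |x_i - x_j|: every sphere is held
    at the same conductor potential V.
    """
    centers = np.asarray(centers, float)
    n = len(radii)
    S = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                S[i, j] = K / radii[i]
            else:
                S[i, j] = K / np.linalg.norm(centers[i] - centers[j])
    return np.linalg.solve(S, np.full(n, float(potential)))

def coulomb_force(centers, charges, probe_pos, probe_q):
    """Net Coulomb force (N) on a point probe charge from the model spheres."""
    probe = np.asarray(probe_pos, float)
    f = np.zeros(3)
    for x, q in zip(np.asarray(centers, float), charges):
        d = probe - x
        f += K * probe_q * q * d / np.linalg.norm(d) ** 3
    return f

# Hypothetical 2-sphere model of a hub-plus-panel geometry at 30 kV
charges = msm_charges([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]], [0.5, 0.3], 30e3)
force = coulomb_force([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]], charges,
                      [5.0, 0.0, 0.0], 1e-8)
```

Fitting sphere positions/radii so that these forces (or, as the abstract argues, the E-field itself) match SMSM truth data is then a small nonlinear optimization over this forward model.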
Building Customer Churn Prediction Models in Fitness Industry with Machine Learning Methods
Shan, Min
2017-01-01
With the rapid growth of digital systems, churn management has become a major focus within customer relationship management in many industries. Ample research has been conducted for churn prediction in different industries with various machine learning methods. This thesis aims to combine feature selection and supervised machine learning methods for defining models of churn prediction and apply them to the fitness industry. Forward selection is chosen as the feature selection method. Support Vector ...
Bereczkei, Tamas; Mesko, Norbert
2007-01-01
Multiple Fitness Model states that attractiveness varies across multiple dimensions, with each feature representing a different aspect of mate value. In the present study, male raters judged the attractiveness of young females with neotenous and mature facial features, with various hair lengths. Results revealed that the physical appearance of long-haired women was rated high, regardless of their facial attractiveness being valued high or low. Women rated as most attractive were those whose f...
Efficient parallel implementation of active appearance model fitting algorithm on GPU.
Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou
2014-01-01
The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.
Measuring fit of sequence data to phylogenetic model: gain of power using marginal tests.
Waddell, Peter J; Ota, Rissa; Penny, David
2009-10-01
Testing fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (Unended quest: an intellectual autobiography. Fontana, London, 1976) once argued that evolutionary biology was unscientific as its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (Nature 297:197-200, 1982) to the present. We compare the general log-likelihood ratio statistic (the G or G² statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (P ≈ 0.5), but the marginalized tests do. Tests on pairwise frequency (F) matrices strongly (P < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (P < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa; not necessarily those expected from examining other regions of the genome. By marginalizing the 4^t patterns of the i.i.d. model to observed and expected parsimony counts, that is, from constant sites, to singletons, to parsimony informative characters of a minimum possible length, the likelihood ratio test regains power, and it too rejects the evolutionary model with P < 0.001. Given such behavior over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published trees may really be far larger than the analytical methods (e.g., bootstrap) report.
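The G statistic comparing observed pattern counts with model expectations is G = 2 Σ O ln(O/E), asymptotically chi-square distributed. A minimal sketch (the counts below are hypothetical, not the paper's mammal data):

```python
import numpy as np
from scipy import stats

def g_statistic(observed, expected):
    """Log-likelihood ratio statistic G = 2 * sum O * ln(O / E).

    Cells with zero observed count contribute nothing to the sum.
    """
    o = np.asarray(observed, float)
    e = np.asarray(expected, float)
    mask = o > 0
    return 2.0 * np.sum(o[mask] * np.log(o[mask] / e[mask]))

# Hypothetical marginalized pattern counts: observed vs. model-expected
obs = np.array([420, 130, 60, 30])
exp = np.array([400, 140, 70, 30])

g = g_statistic(obs, exp)
p = stats.chi2.sf(g, df=len(obs) - 1)  # df depends on the model; assumed here
```

Marginalizing the full 4^t site-pattern table down to a few parsimony-count classes, as the abstract describes, keeps the expected counts large enough for this chi-square approximation to be trustworthy.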
UROX 2.0: an interactive tool for fitting atomic models into electron-microscopy reconstructions
International Nuclear Information System (INIS)
Siebert, Xavier; Navaza, Jorge
2009-01-01
UROX is software designed for the interactive fitting of atomic models into electron-microscopy reconstructions. The main features of the software are presented, along with a few examples. Electron microscopy of a macromolecular structure can lead to three-dimensional reconstructions with resolutions that are typically in the 30–10 Å range and sometimes even beyond 10 Å. Fitting atomic models of the individual components of the macromolecular structure (e.g. those obtained by X-ray crystallography or nuclear magnetic resonance) into an electron-microscopy map allows the interpretation of the latter at near-atomic resolution, providing insight into the interactions between the components. Graphical software is presented that was designed for the interactive fitting and refinement of atomic models into electron-microscopy reconstructions. Several characteristics enable it to be applied over a wide range of cases and resolutions. Firstly, calculations are performed in reciprocal space, which results in fast algorithms. This allows the entire reconstruction (or at least a sizeable portion of it) to be used by taking into account the symmetry of the reconstruction both in the calculations and in the graphical display. Secondly, atomic models can be placed graphically in the map while the correlation between the model-based electron density and the electron-microscopy reconstruction is computed and displayed in real time. The positions and orientations of the models are refined by a least-squares minimization. Thirdly, normal-mode calculations can be used to simulate conformational changes between the atomic model of an individual component and its corresponding density within a macromolecular complex determined by electron microscopy. These features are illustrated using three practical cases with different symmetries and resolutions. The software, together with examples and user instructions, is available free of charge at http://mem.ibs.fr/UROX/
Modeling of reservoir operation in UNH global hydrological model
Shiklomanov, Alexander; Prusevich, Alexander; Frolking, Steve; Glidden, Stanley; Lammers, Richard; Wisser, Dominik
2015-04-01
Climate is changing and river flow is an integrated characteristic reflecting numerous environmental processes and their changes aggregated over large areas. Anthropogenic impacts on river flow, however, can significantly exceed the changes associated with climate variability. Besides irrigation, reservoirs and dams are one of the major anthropogenic factors affecting streamflow. They distort the hydrological regime of many rivers by trapping freshwater runoff, modifying the timing of river discharge and increasing the evaporation rate. Thus, reservoirs are an integral part of the global hydrological system and their impacts on rivers have to be taken into account for better quantification and understanding of hydrological changes. We developed a new technique, which was incorporated into the WBM-TrANS model (Water Balance Model-Transport from Anthropogenic and Natural Systems), to simulate river routing through large reservoirs and natural lakes based on information available from freely accessible databases such as GRanD (the Global Reservoir and Dam database) or NID (the National Inventory of Dams for the US). Different formulations were applied for unregulated spillway dams and lakes, and for 4 types of regulated reservoirs, which were subdivided based on main purpose, including generic (multipurpose), hydropower generation, irrigation and water supply, and flood control. We also incorporated rules for reservoir fill-up and draining at the times of construction and decommission based on available data. The model was tested for many reservoirs of different sizes and types located in various climatic conditions, using several gridded meteorological data sets as model input and observed daily and monthly discharge data from the GRDC (Global Runoff Data Center), USGS Water Data (US Geological Survey), and UNH archives. The best results, with Nash-Sutcliffe model efficiency coefficients in the range of 0.5-0.9, were obtained for the temperate zone of the Northern Hemisphere, where most of the large reservoirs are located.
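The Nash-Sutcliffe efficiency used above to score the simulated discharge is a one-line statistic; a quick sketch with hypothetical discharge values:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.

    1.0 is a perfect fit; 0.0 means the model is no better than always
    predicting the observed mean; negative values are worse than that.
    """
    obs = np.asarray(observed, float)
    sim = np.asarray(simulated, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical monthly discharge (m^3/s): observed vs. simulated
observed = np.array([120.0, 340.0, 510.0, 280.0, 150.0, 90.0])
simulated = np.array([135.0, 310.0, 470.0, 300.0, 140.0, 110.0])
nse = nash_sutcliffe(observed, simulated)
```

Values in the paper's reported 0.5-0.9 range thus mean the reservoir routing explains 50-90% of the observed discharge variance.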
A hands-on approach for fitting long-term survival models under the GAMLSS framework.
de Castro, Mário; Cancho, Vicente G; Rodrigues, Josemar
2010-02-01
In many data sets from clinical studies there are patients insusceptible to the occurrence of the event of interest. Survival models which ignore this fact are generally inadequate. The main goal of this paper is to describe an application of the generalized additive models for location, scale, and shape (GAMLSS) framework to the fitting of long-term survival models. In this work the number of competing causes of the event of interest follows the negative binomial distribution. In this way, some well known models found in the literature are characterized as particular cases of our proposal. The model is conveniently parameterized in terms of the cured fraction, which is then linked to covariates. We explore the use of the gamlss package in R as a powerful tool for inference in long-term survival models. The procedure is illustrated with a numerical example. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
Förster, J.
2009-01-01
Nine studies showed a bidirectional link (a) between a global processing style and generation of similarities and (b) between a local processing style and generation of dissimilarities. In Experiments 1-4, participants were primed with global versus local perception styles and then asked to work on
Architecture design in global and model-centric software development
Heijstek, Werner
2012-01-01
This doctoral dissertation describes a series of empirical investigations into representation, dissemination and coordination of software architecture design in the context of global software development. A particular focus is placed on model-centric and model-driven software development.
Multiple organ definition in CT using a Bayesian approach for 3D model fitting
Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.
1995-08-01
Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape encoded in a set of biometric organ models (specifically for the liver and kidney) that accurately represent patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition both before and after the addition of a kidney model to the fitting; we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.
Fitting the CDO correlation skew: a tractable structural jump-diffusion model
DEFF Research Database (Denmark)
Willemann, Søren
2007-01-01
We extend a well-known structural jump-diffusion model for credit risk to handle both correlations through diffusion of asset values and common jumps in asset value. Through a simplifying assumption on the default timing and efficient numerical techniques, we develop a semi-analytic framework...... allowing for instantaneous calibration to heterogeneous CDS curves and fast computation of CDO tranche spreads. We calibrate the model to CDX and iTraxx data from February 2007 and achieve a satisfactory fit. To price the senior tranches for both indices, we require a risk-neutral probability of a market...
Directory of Open Access Journals (Sweden)
Cheol-Eung Lee
2017-02-01
Several natural disasters occur because of torrential rainfalls. The change in global climate most likely increases the occurrences of such downpours. Hence, it is necessary to investigate the characteristics of torrential rainfall events in order to introduce effective measures for mitigating disasters such as urban floods and landslides. However, one of the major problems is evaluating the number of torrential rainfall events from a statistical viewpoint. If the number of torrential rainfall occurrences during a month is considered as count data, their frequency distribution could be identified using a probability distribution. Generally, the number of torrential rainfall occurrences has been analyzed using the Poisson distribution (POI) or the Generalized Poisson Distribution (GPD). However, it was reported that POI and GPD often overestimated or underestimated the observed count data when additional or fewer zeros were included. Hence, in this study, a zero-inflated model concept was applied to solve this problem existing in the conventional models. The Zero-Inflated Poisson (ZIP) model, the Zero-Inflated Generalized Poisson (ZIGP) model, and the Bayesian ZIGP model have often been applied to fit count data having additional or fewer zeros. However, the applications of these models in water resource management have been very limited despite their efficiency and accuracy. The five models, namely, POI, GPD, ZIP, ZIGP, and Bayesian ZIGP, were applied to torrential rainfall data having additional zeros obtained from two rain gauges in South Korea, and their applicability was examined in this study. In particular, the informative prior distributions evaluated via the empirical Bayes method using ten rain gauges were developed in the Bayesian ZIGP model. Finally, it was suggested to avoid using the POI and GPD models to fit the frequency of torrential rainfall data. In addition, it was concluded that the Bayesian ZIGP model used in this study
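The ZIP model mixes a point mass at zero (probability π) with a Poisson(λ) component, so P(0) = π + (1-π)e^(-λ) and P(k) = (1-π)e^(-λ)λ^k/k! for k > 0. A hand-rolled maximum-likelihood sketch on synthetic counts (this illustrates only the plain ZIP, not the paper's ZIGP or empirical-Bayes prior construction):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def zip_negloglik(params, counts):
    """Negative log-likelihood of the Zero-Inflated Poisson model.

    params = (logit_pi, log_lambda), so the optimization is unconstrained.
    """
    pi = expit(params[0])     # probability of a structural zero
    lam = np.exp(params[1])   # Poisson mean of the count component
    counts = np.asarray(counts)
    zero = counts == 0
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))
    k = counts[~zero]
    ll_pos = np.log(1.0 - pi) - lam + k * np.log(lam) - gammaln(k + 1)
    return -(zero.sum() * ll_zero + ll_pos.sum())

# Hypothetical monthly counts of torrential-rainfall events with excess zeros:
# 30% structural zeros on top of Poisson(2.5) counts
rng = np.random.default_rng(0)
n = 300
structural = rng.random(n) < 0.3
counts = np.where(structural, 0, rng.poisson(2.5, size=n))

res = minimize(zip_negloglik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
pi_hat, lam_hat = expit(res.x[0]), np.exp(res.x[1])
```

A plain Poisson fit to the same data would be forced to lower λ to explain the extra zeros, which is exactly the over/underestimation problem the abstract attributes to POI and GPD.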
Permutation tests for goodness-of-fit testing of mathematical models to experimental data.
Fişek, M Hamit; Barlas, Zeynep
2013-03-01
This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models to data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition which has been carried out in the "standardized experimental situation" associated with the program to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test which has been the primary statistical tool for assessing goodness of fit in the EST research program, but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test and is proposed as an alternative to the chi-square test, as being simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives, and Kendall's tau-b, a rank-order correlation coefficient. The fifth procedure is a new rank-order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, which we believe may be more useful than other rank-order tests for assessing goodness-of-fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study from an experimental paradigm different from the expectation states paradigm - the "network exchange" paradigm, and describe how our procedures may be applied to this data set. Copyright © 2012 Elsevier Inc. All rights reserved.
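An average-absolute-deviation statistic with a resampling reference distribution can be sketched as a Monte Carlo test: compute the mean absolute gap between observed and model-predicted proportions, then resample counts under the model and see how often the resampled statistic is at least as large. This is an illustrative variant under assumed multinomial sampling, not the paper's exact procedure.

```python
import numpy as np

def aad(observed_counts, expected_probs):
    """Average absolute deviation between observed and model proportions."""
    obs = np.asarray(observed_counts, float)
    p = np.asarray(expected_probs, float)
    return np.mean(np.abs(obs / obs.sum() - p))

def aad_pvalue(observed_counts, expected_probs, n_rep=5000, seed=1):
    """Monte Carlo p-value: resample counts under the model, compare AADs."""
    rng = np.random.default_rng(seed)
    n = int(np.sum(observed_counts))
    stat = aad(observed_counts, expected_probs)
    sims = rng.multinomial(n, expected_probs, size=n_rep)
    sim_stats = np.array([aad(s, expected_probs) for s in sims])
    return float(np.mean(sim_stats >= stat))

obs = [48, 32, 20]           # hypothetical outcome counts from an experiment
model_p = [0.5, 0.3, 0.2]    # theoretical model predictions
p_val = aad_pvalue(obs, model_p)
```

A large p-value here means the observed deviations are no bigger than sampling noise under the model, i.e. the fit is acceptable.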
FITTING A THREE DIMENSIONAL PEM FUEL CELL MODEL TO MEASUREMENTS BY TUNING THE POROSITY AND
DEFF Research Database (Denmark)
Bang, Mads; Odgaard, Madeleine; Condra, Thomas Joseph
2004-01-01
A three-dimensional, computational fluid dynamics (CFD) model of a PEM fuel cell is presented. The model consists of straight channels, porous gas diffusion layers, porous catalyst layers and a membrane. In this computational domain, most of the transport phenomena which govern the performance of the fuel cell are modelled, including the distribution of current density and how this affects the polarization curve. The porosity and conductivity of the catalyst layer are some of the most difficult parameters to measure, estimate and especially control. Yet the proposed model shows how these two parameters can have significant influence on the performance of the fuel cell, and they are shown to be key elements in adjusting the three-dimensional model to fit measured polarization curves. Results from the proposed model are compared to single-cell measurements on a test MEA from IRD Fuel Cells.
Fitting the Fractional Polynomial Model to Non-Gaussian Longitudinal Data
Directory of Open Access Journals (Sweden)
Ji Hoon Ryoo
2017-08-01
As in cross-sectional studies, longitudinal studies involve non-Gaussian data such as binomial, Poisson, gamma, and inverse-Gaussian distributions, and multivariate exponential families. A number of statistical tools have thus been developed to deal with non-Gaussian longitudinal data, including analytic techniques to estimate parameters in both fixed and random effects models. As yet, however, growth modeling with non-Gaussian data remains somewhat limited when the transformed expectation of the response, via a linear predictor, is considered as a functional form of the explanatory variables. In this study, we introduce the fractional polynomial model (FPM), which can be applied to model non-linear growth with non-Gaussian longitudinal data, and demonstrate its use by fitting two empirical binary and count data models. The results clearly show the efficiency and flexibility of the FPM for such applications.
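A first-degree fractional polynomial fit for count data can be sketched as a search over the conventional FP power set. The longitudinal counts below are simulated, and the Poisson regression is hand-coded for illustration; this is a sketch of the idea, not the authors' estimation procedure.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical longitudinal count data: 20 subjects, 10 waves, with counts
# growing roughly like sqrt(time).
t = np.tile(np.arange(1.0, 11.0), 20)
y = rng.poisson(np.exp(0.2 + 0.5 * np.sqrt(t)))

POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)   # FP1 powers; 0 denotes log(t)

def fp_term(t, p):
    return np.log(t) if p == 0 else t ** p

def poisson_deviance(beta, x, y):
    """Deviance of a Poisson regression with log link and one FP term."""
    mu = np.exp(np.clip(beta[0] + beta[1] * x, -20, 20))
    return 2 * np.sum(mu - y + y * np.log(np.maximum(y, 1) / mu))

# Fit one Poisson regression per candidate power; keep the best deviance.
fits = {p: minimize(poisson_deviance, [0.0, 0.0], args=(fp_term(t, p), y))
        for p in POWERS}
best_power = min(fits, key=lambda p: fits[p].fun)
```

Higher-degree fractional polynomials extend this by combining two or more powers in the same linear predictor.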
Modeling the Global Workplace Using Emerging Technologies
Dorazio, Patricia; Hickok, Corey
2008-01-01
The Fall 2006 term of COM495, Senior Practicum in Communication, offered communication and information design students the privilege of taking part in a transatlantic intercultural virtual project. To emulate real world experience in today's global workplace, these students researched and completed a business communication project with German…
Modelling global container freight transport demand
Tavasszy, L.A.; Ivanova, O.; Halim, R.A.
2015-01-01
The objective of this chapter is to discuss methods and techniques for a quantitative and descriptive analysis of future container transport demand at a global level. Information on future container transport flows is useful for various purposes. It is instrumental for the assessment of returns of
An algorithm to provide UK global radiation for use with models
International Nuclear Information System (INIS)
Hamer, P.J.C.
1999-01-01
Decision support systems which include crop growth models require long-term average values of global radiation to simulate future expected growth. Global radiation is rarely available, as there are relatively few meteorological stations with long-term records, and so interpolation between sites is difficult. Global radiation data with a good geographical spread throughout the UK were obtained and sub-divided into 'coastal' and 'inland' sites. Monthly means of global radiation (S) were extracted and analysed in relation to the irradiance in the absence of atmosphere (S_o) calculated from site latitude and the time of year. The ratio S/S_o was fitted to the month of the year (t) and site latitude using a nonlinear fit function which accounted for 90% of the variance. An algorithm is presented which provides long-term daily values of global radiation from information on latitude, time of year and whether the site is inland or close to the coast. (author)
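The quantity S_o (irradiance in the absence of atmosphere) can be computed from latitude and day of year using the standard FAO-56 formulation; the sketch below shows that calculation only, not the paper's fitted S/S_o function, and the example latitude is a placeholder.

```python
import numpy as np

def extraterrestrial_radiation(lat_deg, day_of_year):
    """Daily irradiance in the absence of atmosphere, S_o (MJ m^-2 day^-1),
    from latitude and day of year (standard FAO-56 formulation)."""
    gsc = 0.0820                                  # solar constant, MJ m^-2 min^-1
    phi = np.radians(lat_deg)
    j = day_of_year
    dr = 1 + 0.033 * np.cos(2 * np.pi * j / 365)  # inverse relative Earth-Sun distance
    delta = 0.409 * np.sin(2 * np.pi * j / 365 - 1.39)            # solar declination
    ws = np.arccos(np.clip(-np.tan(phi) * np.tan(delta), -1, 1))  # sunset hour angle
    return (24 * 60 / np.pi) * gsc * dr * (
        ws * np.sin(phi) * np.sin(delta)
        + np.cos(phi) * np.cos(delta) * np.sin(ws))

# S_o for a UK-like latitude (52 N) at mid-June and mid-December.
s_o_june = extraterrestrial_radiation(52.0, 166)
s_o_dec = extraterrestrial_radiation(52.0, 350)
```

The strong seasonal swing in S_o at UK latitudes is exactly why the paper models the ratio S/S_o rather than S directly.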
Duarte, Adam; Adams, Michael J.; Peterson, James T.
2018-01-01
Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated, and diagnostics to identify when bias in parameter estimates from N-mixture models is likely remain largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated whether the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored whether assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate that the QCV is useful for diagnosing potential bias, and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision
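The basic Poisson N-mixture setup the study simulates - latent site abundances with repeated binomial counts - can be sketched as follows. The sample sizes, truncation bound K and parameter values are illustrative, not those of the 837 simulation combinations, and the likelihood is hand-coded rather than taken from a specialised package.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, binom

rng = np.random.default_rng(2)

# Simulate data that satisfy the closure assumption: latent abundance N_i
# at each of 100 sites, 4 repeat counts with detection probability 0.4.
LAM_TRUE, P_TRUE = 5.0, 0.4
N = rng.poisson(LAM_TRUE, size=100)
y = rng.binomial(N[:, None], P_TRUE, size=(100, 4))

def nmix_negloglik(theta, y, K=50):
    """Poisson N-mixture likelihood, marginalising N_i up to truncation K."""
    lam = np.exp(theta[0])                    # log link for abundance
    p = 1 / (1 + np.exp(-theta[1]))           # logit link for detection
    Ns = np.arange(K + 1)
    prior = poisson.pmf(Ns, lam)              # P(N = n)
    # site-by-N matrix of prod_j Binom(y_ij | N, p)
    lik = np.prod(binom.pmf(y[:, None, :], Ns[None, :, None], p), axis=2)
    return -np.sum(np.log(lik @ prior + 1e-300))

res = minimize(nmix_negloglik, x0=[np.log(3.0), 0.0], args=(y,))
lam_hat = np.exp(res.x[0])
p_hat = 1 / (1 + np.exp(-res.x[1]))
```

With heterogeneity in lam or p added to the simulation (violating the model's assumptions), `lam_hat` becomes biased, which is the phenomenon the study quantifies.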
Fitted Hanbury-Brown Twiss radii versus space-time variances in flow-dominated models
Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan
2006-04-01
The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.
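The core point - that a Gaussian fitted to the correlation function need not reproduce the source's space-time variance - can be demonstrated in one dimension with a deliberately non-Gaussian (exponential) toy source. This is an illustration only, not the paper's analytic fit algorithm, and all scales are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# 1-D toy emission source: exponential profile S(x) ~ exp(-|x|/a),
# deliberately non-Gaussian.
a = 1.0                                   # source scale in fm (hypothetical)
var = 2 * a**2                            # space-time variance of this source

# Correlator C(q) = 1 + |FT of S|^2; the FT of the exponential is a Lorentzian.
q = np.linspace(0.01, 1.0, 200)           # fm^-1, a typical fit range
C = 1 + 1.0 / (1 + (q * a) ** 2) ** 2

def gauss_corr(q, lam, R):
    """Gaussian parametrisation used when fitting measured correlators."""
    return 1 + lam * np.exp(-(R * q) ** 2)

(lam_fit, R_fit), _ = curve_fit(gauss_corr, q, C, p0=[1.0, 1.0])

R_var = np.sqrt(var)   # "radius" computed from space-time variances
```

For this non-Gaussian source, `R_fit` and `R_var` disagree, mirroring the paper's argument that fitted HBT radii, not space-time variances, are the apples-to-apples quantity to compare with data.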
Fitted Hanbury-Brown-Twiss radii versus space-time variances in flow-dominated models
International Nuclear Information System (INIS)
Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan
2006-01-01
The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown-Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data
Fitted HBT radii versus space-time variances in flow-dominated models
International Nuclear Information System (INIS)
Lisa, Mike; Frodermann, Evan; Heinz, Ulrich
2007-01-01
The inability of otherwise successful dynamical models to reproduce the 'HBT radii' extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the 'RHIC HBT Puzzle'. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source which can be directly computed from the emission function, without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models some of which exhibit significant deviations from simple Gaussian behaviour. By Fourier transforming the emission function we compute the 2-particle correlation function and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and measured HBT radii remain, we show that a more 'apples-to-apples' comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data. (author)
Fast fitting of non-Gaussian state-space models to animal movement data via Template Model Builder
DEFF Research Database (Denmark)
Albertsen, Christoffer Moesgaard; Whoriskey, Kim; Yurkowski, David
2015-01-01
We recommend using the Laplace approximation combined with automatic differentiation (as implemented in the novel R package Template Model Builder; TMB) for the fast fitting of continuous-time multivariate non-Gaussian SSMs. Using Argos satellite tracking data, we demonstrate that models fitted this way are able to estimate additional parameters compared to previous methods, all without requiring a substantial increase in computational time. The model implementation is made available through the R package argosTrack.
Booth, B. B. B.; Bernie, D.; McNeall, D.; Hawkins, E.; Caesar, J.; Boulton, C.; Friedlingstein, P.; Sexton, D.
2012-09-01
We compare future changes in global mean temperature in response to different future scenarios which, for the first time, arise from an emission-driven rather than concentration-driven perturbed-parameter ensemble of a Global Climate Model (GCM). These new GCM simulations sample uncertainties in atmospheric feedbacks, the land carbon cycle, ocean physics and aerosol sulphur cycle processes. We find broader ranges of projected temperature responses when considering emission-driven rather than concentration-driven simulations (with 10-90 percentile ranges of 1.7 K for the aggressive mitigation scenario up to 3.9 K for the high-end business-as-usual scenario). A small minority of simulations, resulting from combinations of strong atmospheric feedbacks and carbon cycle responses, show temperature increases in excess of 9 K under RCP8.5, and in excess of 4 K even under aggressive mitigation (RCP2.6). While the simulations point to much larger temperature ranges for emission-driven experiments, they do not change existing expectations (based on previous concentration-driven experiments) of the timescales on which different sources of uncertainty are important. The new simulations sample a range of future atmospheric concentrations for each emission scenario. In the case of both SRES A1B and the Representative Concentration Pathways (RCPs), the concentration pathways used to drive GCM ensembles lie towards the lower end of our simulated distribution. This design decision (a legacy of previous assessments) is likely to lead concentration-driven experiments to under-sample strong feedback responses. Our ensemble of emission-driven simulations spans the global temperature response of other multi-model frameworks except at the low end, where combinations of low climate sensitivity and low carbon cycle feedbacks lead to responses outside our ensemble range. The ensemble simulates a number of high-end responses which lie above the CMIP5 carbon
USING GEM - GLOBAL ECONOMIC MODEL IN ACHIEVING A GLOBAL ECONOMIC FORECAST
Directory of Open Access Journals (Sweden)
Camelia Madalina Orac
2013-12-01
The global economic development model has proved insufficiently reliable under the new economic crisis. As a result, the entire theoretical construction of the global economy needs rethinking and reorientation. In this context, it is quite clear that only through the effective use of specific techniques and tools of economic-mathematical modeling, statistics, regional analysis and economic forecasting is it possible to obtain an overview of the future economy.
GAMBIT: the global and modular beyond-the-standard-model inference tool
Athron, Peter; Balazs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Dickinson, Hugh; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Lundberg, Johan; McKay, James; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Ripken, Joachim; Rogan, Christopher; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Seo, Seon-Hee; Serra, Nicola; Weniger, Christoph; White, Martin; Wild, Sebastian
2017-11-01
We describe the open-source global fitting package GAMBIT: the Global And Modular Beyond-the-Standard-Model Inference Tool. GAMBIT combines extensive calculations of observables and likelihoods in particle and astroparticle physics with a hierarchical model database, advanced tools for automatically building analyses of essentially any model, a flexible and powerful system for interfacing to external codes, a suite of different statistical methods and parameter scanning algorithms, and a host of other utilities designed to make scans faster, safer and more easily-extendible than in the past. Here we give a detailed description of the framework, its design and motivation, and the current models and other specific components presently implemented in GAMBIT. Accompanying papers deal with individual modules and present first GAMBIT results. GAMBIT can be downloaded from gambit.hepforge.org.
GAMBIT. The global and modular beyond-the-standard-model inference tool
Energy Technology Data Exchange (ETDEWEB)
Athron, Peter; Balazs, Csaba [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Bringmann, Torsten; Dal, Lars A.; Gonzalo, Tomas E.; Krislock, Abram; Raklev, Are [University of Oslo, Department of Physics, Oslo (Norway); Buckley, Andy [University of Glasgow, SUPA, School of Physics and Astronomy, Glasgow (United Kingdom); Chrzaszcz, Marcin [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Polish Academy of Sciences, H. Niewodniczanski Institute of Nuclear Physics, Krakow (Poland); Conrad, Jan; Edsjoe, Joakim; Farmer, Ben; Lundberg, Johan [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Cornell, Jonathan M. [McGill University, Department of Physics, Montreal, QC (Canada); Dickinson, Hugh [University of Minnesota, Minnesota Institute for Astrophysics, Minneapolis, MN (United States); Jackson, Paul; White, Martin [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); University of Adelaide, Department of Physics, Adelaide, SA (Australia); Kvellestad, Anders; Savage, Christopher [NORDITA, Stockholm (Sweden); McKay, James [Imperial College London, Blackett Laboratory, Department of Physics, London (United Kingdom); Mahmoudi, Farvah [Univ Lyon, Univ Lyon 1, ENS de Lyon, CNRS, Centre de Recherche Astrophysique de Lyon UMR5574, Saint-Genis-Laval (France); CERN, Theoretical Physics Department, Geneva (Switzerland); Martinez, Gregory D. 
[University of California, Physics and Astronomy Department, Los Angeles, CA (United States); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Ripken, Joachim [Max Planck Institute for Solar System Research, Goettingen (Germany); Rogan, Christopher [Harvard University, Department of Physics, Cambridge, MA (United States); Saavedra, Aldo [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); The University of Sydney, Faculty of Engineering and Information Technologies, Centre for Translational Data Science, School of Physics, Sydney, NSW (Australia); Scott, Pat [Imperial College London, Blackett Laboratory, Department of Physics, London (United Kingdom); Seo, Seon-Hee [Seoul National University, Department of Physics and Astronomy, Seoul (Korea, Republic of); Serra, Nicola [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Weniger, Christoph [University of Amsterdam, GRAPPA, Institute of Physics, Amsterdam (Netherlands); Wild, Sebastian [DESY, Hamburg (Germany); Collaboration: The GAMBIT Collaboration
2017-11-15
We describe the open-source global fitting package GAMBIT: the Global And Modular Beyond-the-Standard-Model Inference Tool. GAMBIT combines extensive calculations of observables and likelihoods in particle and astroparticle physics with a hierarchical model database, advanced tools for automatically building analyses of essentially any model, a flexible and powerful system for interfacing to external codes, a suite of different statistical methods and parameter scanning algorithms, and a host of other utilities designed to make scans faster, safer and more easily-extendible than in the past. Here we give a detailed description of the framework, its design and motivation, and the current models and other specific components presently implemented in GAMBIT. Accompanying papers deal with individual modules and present first GAMBIT results. GAMBIT can be downloaded from gambit.hepforge.org. (orig.)
GAMBIT. The global and modular beyond-the-standard-model inference tool
International Nuclear Information System (INIS)
Athron, Peter; Balazs, Csaba; Bringmann, Torsten; Dal, Lars A.; Gonzalo, Tomas E.; Krislock, Abram; Raklev, Are; Buckley, Andy; Chrzaszcz, Marcin; Conrad, Jan; Edsjoe, Joakim; Farmer, Ben; Lundberg, Johan; Cornell, Jonathan M.; Dickinson, Hugh; Jackson, Paul; White, Martin; Kvellestad, Anders; Savage, Christopher; McKay, James; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Ripken, Joachim; Rogan, Christopher; Saavedra, Aldo; Scott, Pat; Seo, Seon-Hee; Serra, Nicola; Weniger, Christoph; Wild, Sebastian
2017-01-01
We describe the open-source global fitting package GAMBIT: the Global And Modular Beyond-the-Standard-Model Inference Tool. GAMBIT combines extensive calculations of observables and likelihoods in particle and astroparticle physics with a hierarchical model database, advanced tools for automatically building analyses of essentially any model, a flexible and powerful system for interfacing to external codes, a suite of different statistical methods and parameter scanning algorithms, and a host of other utilities designed to make scans faster, safer and more easily-extendible than in the past. Here we give a detailed description of the framework, its design and motivation, and the current models and other specific components presently implemented in GAMBIT. Accompanying papers deal with individual modules and present first GAMBIT results. GAMBIT can be downloaded from gambit.hepforge.org. (orig.)
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLMs) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for the logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLMs with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic, originally developed for logistic GLMCCs (TG), so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J²) statistics can be applied directly. In a simulation study, TG, HL, and J² were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J² were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC; in this case, TG had more power than HL or J². © 2015 John Wiley & Sons Ltd/London School of Economics.
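For reference, the grouped Hosmer-Lemeshow statistic mentioned above can be sketched as follows. The decile grouping and simulated logistic data are illustrative, and for simplicity the true probabilities stand in for fitted GLM values, so the statistic is evaluated under a correctly specified model.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)

# Hypothetical: binary outcomes generated from a logistic model.
x = rng.normal(size=500)
p_true = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))
y = rng.binomial(1, p_true)
p_hat = p_true   # stand-in for fitted probabilities from an estimated GLM

def hosmer_lemeshow(y, p_hat, g=10):
    """HL statistic: group by deciles of fitted probability and compare
    observed versus expected events per group; ~ chi2 with g - 2 df."""
    order = np.argsort(p_hat)
    stat = 0.0
    for idx in np.array_split(order, g):
        obs = y[idx].sum()
        exp = p_hat[idx].sum()
        n_k = len(idx)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n_k))
    return stat, chi2.sf(stat, g - 2)

stat, pval = hosmer_lemeshow(y, p_hat)
```

The paper's point is that this grouped comparison of observed and expected counts carries over directly to noncanonical links, since it only needs fitted means.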
UK energy policy ambition and UK energy modelling-fit for purpose?
International Nuclear Information System (INIS)
Strachan, Neil
2011-01-01
Aiming to lead amongst other G20 countries, the UK government has classified the twin energy policy priorities of decarbonisation and security of supply as a 'centennial challenge'. This viewpoint discusses the UK's capacity for energy modelling and scenario building as a critical underpinning of iterative decision making to meet these policy ambitions. From a nadir, over the last decade UK modelling expertise has been steadily built up. However extreme challenges remain in the level and consistency of funding of core model teams - critical to ensure a full scope of energy model types and hence insights, and in developing new state-of-the-art models to address evolving uncertainties. Meeting this challenge will facilitate a broad scope of types and geographical scale of UK's analytical tools to responsively deliver the evidence base for a range of public and private sector decision makers, and ensure that the UK contributes to global efforts to advance the field of energy-economic modelling. - Research highlights: → Energy modelling capacity is a critical underpinning for iterative energy policy making. → Full scope of energy models and analytical approaches is required. → Extreme challenges remain in consistent and sustainable funding of energy modelling teams. → National governments that lead in global energy policy also need to invest in modelling capacity.
Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model
International Nuclear Information System (INIS)
Edwards, Darrin C.; Kupinski, Matthew A.; Metz, Charles E.; Nishikawa, Robert M.
2002-01-01
We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well
VizieR Online Data Catalog: GRB prompt emission fitted with the DREAM model (Ahlgren+, 2015)
Ahlgren, B.; Larsson, J.; Nymark, T.; Ryde, F.; Pe'Er, A.
2018-01-01
We illustrate the application of the DREAM model by fitting it to two different, bright Fermi GRBs; GRB 090618 and GRB 100724B. While GRB 090618 is well fitted by a Band function, GRB 100724B was the first example of a burst with a significant additional BB component (Guiriec et al. 2011ApJ...727L..33G). GRB 090618 is analysed using Gamma-ray Burst Monitor (GBM) data (Meegan et al. 2009ApJ...702..791M) from the NaI and BGO detectors. For GRB 100724B, we used GBM data from the NaI and BGO detectors as well as Large Area Telescope Low Energy (LAT-LLE) data. For both bursts we selected NaI detectors seeing the GRB at an off-axis angle lower than 60° and the BGO detector as being the best aligned of the two BGO detectors. The spectra were fitted in the energy ranges 8-1000 keV (NaI), 200-40000 keV (BGO) and 30-1000 MeV (LAT-LLE). (2 data files).
Adapted strategic planning model applied to small business: a case study in the fitness area
Directory of Open Access Journals (Sweden)
Eduarda Tirelli Hennig
2012-06-01
Strategic planning is an important management tool in the corporate scenario and should not be restricted to large companies. However, this kind of planning process in small businesses may need special adaptations due to their particular characteristics. This paper aims to identify and adapt existing models of strategic planning to the scenario of a small business in the fitness area. Initially, a comparative study among models of different authors is carried out to identify their phases and activities. Then, it is defined which of these phases and activities should be present in a model to be used in a small business. That model was applied to a Pilates studio; it involves the establishment of an organizational identity, an environmental analysis, and the definition of strategic goals, strategies and actions to reach them. Finally, benefits to the organization could be identified, as well as hurdles in the implementation of the tool.
A global fit to determine the pseudoscalar mixing angle and the gluonium content of the η' meson
International Nuclear Information System (INIS)
Ambrosino, F.; Antonelli, A.; Antonelli, M.; Bencivenni, G.; Bertolucci, S.; Bloise, C.; Bossi, F.; Capon, G.; Capussela, T.; Ciambrone, P.; De Lucia, E.; De Simone, P.; Archilli, F.; Beltrame, P.; Bini, C.; De Santis, A.; De Zorzi, G.; Bocchetta, S.; Ceradini, F.; Branchini, P.
2009-01-01
We update the values of the η-η′ mixing angle and of the η′ gluonium content by fitting our measurement R_φ = BR(φ → η′γ)/BR(φ → ηγ) together with several vector meson radiative decays to pseudoscalars (V → Pγ), pseudoscalar meson radiative decays to vectors (P → Vγ) and the η′ → γγ and π⁰ → γγ widths. From the fit we extract a gluonium fraction of Z²_G = 0.12 ± 0.04, the pseudoscalar mixing angle ψ_P = (40.4 ± 0.6)° and the φ-ω mixing angle ψ_V = (3.32 ± 0.09)°. Z²_G and ψ_P are fairly consistent with those previously published. We also evaluate the impact on the η′ gluonium content determination of future experimental improvements of the η′ branching ratios and decay width.
Global spatiotemporal distribution of soil respiration modeled using a global database
Hashimoto, S.; Carvalhais, N.; Ito, A.; Migliavacca, M.; Nishina, K.; Reichstein, M.
2015-07-01
The flux of carbon dioxide from the soil to the atmosphere (soil respiration) is one of the major fluxes in the global carbon cycle. At present, the accumulated field observation data cover a wide range of geographical locations and climate conditions. However, there are still large uncertainties in the magnitude and spatiotemporal variation of global soil respiration. Using a global soil respiration data set, we developed a climate-driven model of soil respiration by modifying and updating Raich's model, and the global spatiotemporal distribution of soil respiration was examined using this model. The model was applied at a spatial resolution of 0.5° and a monthly time step. Soil respiration was divided into its heterotrophic and autotrophic components using an empirical model. The estimated mean annual global soil respiration was 91 Pg C yr⁻¹ (between 1965 and 2012; Monte Carlo 95% confidence interval: 87-95 Pg C yr⁻¹) and increased at a rate of 0.09 Pg C yr⁻². The contribution of soil respiration from boreal regions to the total increase in global soil respiration was of the same order of magnitude as that of tropical and temperate regions, despite the lower absolute magnitude of soil respiration in boreal regions. The estimated annual global heterotrophic and autotrophic respiration were 51 and 40 Pg C yr⁻¹, respectively. Global soil respiration responded to the increase in air temperature at a rate of 3.3 Pg C yr⁻¹ °C⁻¹, with Q10 = 1.4. Our study scaled up observed soil respiration values from field measurements to provide a data-oriented estimate of global soil respiration. The estimates are based on a semi-empirical model parameterized with over one thousand data points. Our analysis indicates that the climate controls on soil respiration may translate into an increasing trend in global soil respiration, and it emphasizes the relevance of the carbon flux from soil to
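The reported temperature sensitivity (Q10 = 1.4) corresponds to a simple exponential-type response that can be written directly; the reference respiration and reference temperature in this sketch are placeholders, not values from the paper.

```python
import numpy as np

def soil_respiration(t_air, r_ref=1.0, q10=1.4, t_ref=10.0):
    """Q10-type temperature response: respiration multiplies by q10 for
    every 10 C of warming. q10 = 1.4 matches the abstract; r_ref and
    t_ref are placeholder values."""
    return r_ref * q10 ** ((t_air - t_ref) / 10.0)

# A 1 C warming raises respiration by a fixed fraction when Q10 = 1.4.
increase = soil_respiration(11.0) / soil_respiration(10.0) - 1
```

With Q10 = 1.4, each additional degree of warming raises respiration by about 3.4%, which is the kind of relationship behind the modeled upward trend in global soil respiration.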
Using geometry to improve model fitting and experiment design for glacial isostasy
Kachuck, S. B.; Cathles, L. M.
2017-12-01
As scientists we routinely deal with models, which are geometric objects at their core - the manifestation of a set of parameters as predictions for comparison with observations. When the number of observations exceeds the number of parameters, the model is a hypersurface (the model manifold) in the space of all possible predictions. The object of parameter fitting is to find the parameters corresponding to the point on the model manifold as close to the vector of observations as possible. But the geometry of the model manifold can make this difficult. By curving, ending abruptly (where, for instance, parameters go to zero or infinity), and by stretching and compressing the parameters together in unexpected directions, it can be difficult to design algorithms that efficiently adjust the parameters. Even at the optimal point on the model manifold, parameters might not be individually resolved well enough to be applied to new contexts. In our context of glacial isostatic adjustment, models of sparse surface observations have a broad spread of sensitivity to mixtures of the earth's viscous structure and the surface distribution of ice over the last glacial cycle. This impedes precise statements about crucial geophysical processes, such as the planet's thermal history or the climates that controlled the ice age. We employ geometric methods developed in the field of systems biology to improve the efficiency of fitting (geodesic accelerated Levenberg-Marquardt) and to identify the maximally informative sources of additional data to make better predictions of sea levels and ice configurations (optimal experiment design). We demonstrate this in particular in reconstructions of the Barents Sea Ice Sheet, where we show that only certain kinds of data from the central Barents have the power to distinguish between proposed models.
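Standard Levenberg-Marquardt (without the geodesic acceleration discussed above, which SciPy does not implement) can be sketched on a classic "sloppy" two-exponential model, a common minimal example of a model manifold with strongly compressed parameter directions. All data and parameter values here are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)

# Toy "sloppy" model: a sum of two exponential decays.
t = np.linspace(0, 5, 40)
theta_true = np.array([0.7, 0.3, 1.6])       # mixing weight and two rates

def model(theta, t):
    a, k1, k2 = theta
    return a * np.exp(-k1 * t) + (1 - a) * np.exp(-k2 * t)

y_obs = model(theta_true, t) + 0.01 * rng.normal(size=t.size)

# Levenberg-Marquardt moves a point on the model manifold toward the
# vector of observations in prediction space.
fit = least_squares(lambda th: model(th, t) - y_obs,
                    x0=[0.5, 0.5, 1.0], method='lm')
```

Near-degenerate directions (e.g. k1 ≈ k2) are where the manifold's boundaries and curvature slow plain LM down, motivating the geodesic-accelerated variant the abstract employs.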
New Temperature-based Models for Predicting Global Solar Radiation
International Nuclear Information System (INIS)
Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.
2016-01-01
Highlights: • New temperature-based models for estimating solar radiation are investigated. • The models are validated against 20 years of measured global solar radiation data. • The new temperature-based model shows the best performance for coastal sites. • The new temperature-based model is more accurate than the sunshine-based models. • The new model is highly applicable with weather temperature forecast techniques. - Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with three other models proposed in the literature (the Annandale, Allen and Goodin models) to estimate the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case study location (Lat. 30°51′N and Long. 29°34′E), and then, the general formulae of the newly suggested models are examined for ten different locations around Egypt. Moreover, the local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. The most common statistical errors are utilized to evaluate the performance of these models and identify the most accurate model. The obtained results show that the local formula for the most accurate new model provides good predictions for global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulae of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. The quick and accurate estimations of the global solar radiation using this approach can be employed in the design and evaluation of performance for
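Temperature-based radiation models of this family typically follow the Hargreaves-Samani form, estimating radiation from the diurnal temperature range. A sketch with an illustrative coefficient; this is not one of the paper's seventeen fitted models:

```python
import math

def hargreaves_radiation(h0, tmax, tmin, a=0.16):
    """Hargreaves-Samani-type estimate of daily global solar radiation
    on a horizontal surface: H = a * sqrt(Tmax - Tmin) * H0, where H0 is
    the extraterrestrial radiation (same units as the result).
    a = 0.16 is a commonly quoted interior-site coefficient, used here
    purely for illustration."""
    return a * math.sqrt(tmax - tmin) * h0

# A larger diurnal range (clearer skies) implies more surface radiation.
clear = hargreaves_radiation(h0=35.0, tmax=32.0, tmin=18.0)
cloudy = hargreaves_radiation(h0=35.0, tmax=26.0, tmin=21.0)
```

Local calibration of models like this amounts to refitting the coefficient(s) against measured radiation at each site, which is why coastal and interior locations end up with different formulae.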
Evaluation of global climate models for Indian monsoon climatology
International Nuclear Information System (INIS)
Kodra, Evan; Ganguly, Auroop R; Ghosh, Subimal
2012-01-01
The viability of global climate models for forecasting the Indian monsoon is explored. Evaluation and intercomparison of model skills are employed to assess the reliability of individual models and to guide model selection strategies. Two dominant and unique patterns of Indian monsoon climatology are trends in maximum temperature and periodicity in total rainfall observed after 30 yr averaging over India. An examination of seven models and their ensembles reveals that no single model or model selection strategy outperforms the rest. The single-best model for the periodicity of Indian monsoon rainfall is the only model that captures a low-frequency natural climate oscillator thought to dictate the periodicity. The trend in maximum temperature, which most models are thought to handle relatively better, is best captured through a multimodel average compared to individual models. The results suggest a need to carefully evaluate individual models and model combinations, in addition to physical drivers where possible, for regional projections from global climate models. (letter)
Describing the Process of Adopting Nutrition and Fitness Apps: Behavior Stage Model Approach.
König, Laura M; Sproesser, Gudrun; Schupp, Harald T; Renner, Britta
2018-03-13
Although mobile technologies such as smartphone apps are promising means for motivating people to adopt a healthier lifestyle (mHealth apps), previous studies have shown low adoption and continued use rates. Developing the means to address this issue requires further understanding of mHealth app nonusers and adoption processes. This study utilized a stage model approach based on the Precaution Adoption Process Model (PAPM), which proposes that people pass through qualitatively different motivational stages when adopting a behavior. To establish a better understanding of between-stage transitions during app adoption, this study aimed to investigate the adoption process of nutrition and fitness app usage, and the sociodemographic and behavioral characteristics and decision-making style preferences of people at different adoption stages. Participants (N=1236) were recruited onsite within the cohort study Konstanz Life Study. Use of mobile devices and nutrition and fitness apps, 5 behavior adoption stages of using nutrition and fitness apps, preference for intuition and deliberation in eating decision-making (E-PID), healthy eating style, sociodemographic variables, and body mass index (BMI) were assessed. Analysis of the 5 behavior adoption stages showed that stage 1 ("unengaged") was the most prevalent motivational stage for both nutrition and fitness app use, with half of the participants stating that they had never thought about using a nutrition app (52.41%, 533/1017), whereas less than one-third stated they had never thought about using a fitness app (29.25%, 301/1029). "Unengaged" nonusers (stage 1) showed a higher preference for an intuitive decision-making style when making eating decisions, whereas those who were already "acting" (stage 4) showed a greater preference for a deliberative decision-making style (F4,1012 = 21.83, P<.001) … digital interventions. This study highlights that new user groups might be better reached by apps designed to address a more intuitive
Directory of Open Access Journals (Sweden)
Dylan Molenaar
2015-08-01
In the psychometric literature, item response theory models have been proposed that explicitly take into account the decision process underlying subjects' responses to psychometric test items. Application of these models is, however, hampered by the absence of general and flexible software to fit them. In this paper, we present diffIRT, an R package that can be used to fit item response theory models that are based on a diffusion process. We discuss parameter estimation and model fit assessment, show the viability of the package in a simulation study, and illustrate the use of the package with two datasets pertaining to extraversion and mental rotation. In addition, we illustrate how the package can be used to fit the traditional diffusion model (as it was originally developed in experimental psychology) to data.
A Simple Model of Global Aerosol Indirect Effects
Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter
2013-01-01
Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to preindustrial cloud condensation nuclei concentration, preindustrial accumulation mode radius, width of the accumulation mode, size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of present-day AIEs as low as −5 W/sq m and as high as −0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence of AIE uncertainty on uncertainty in parameter values.
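To illustrate how an analytic AIE expression of this kind behaves, here is a sketch of the first (Twomey) indirect effect using the classic cloud-albedo susceptibility dA/d ln N = A(1−A)/3. The cloud fraction, albedo and insolation values are placeholders, not parameters of the paper's simple model:

```python
import math

def twomey_forcing(n_pd, n_pi, f_cloud=0.3, albedo=0.5, solar=340.0):
    """First (Twomey) aerosol indirect effect: change in reflected
    shortwave flux when droplet number rises from n_pi (preindustrial)
    to n_pd (present day), using the susceptibility dA/dlnN = A(1-A)/3.
    f_cloud, albedo and the mean insolation are illustrative values."""
    d_albedo = albedo * (1.0 - albedo) / 3.0 * math.log(n_pd / n_pi)
    return -solar * f_cloud * d_albedo  # W per square metre; negative = cooling
```

The logarithmic dependence on droplet number is one reason the preindustrial CCN concentration is such a sensitive parameter: the same anthropogenic emission produces a larger forcing when the preindustrial baseline is low.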
Inverse problem theory methods for data fitting and model parameter estimation
Tarantola, A
2002-01-01
Inverse Problem Theory is written for physicists, geophysicists and all scientists facing the problem of quantitative interpretation of experimental data. Although it contains a lot of mathematics, it is not intended as a mathematical book, but rather tries to explain how a method of acquisition of information can be applied to the actual world. The book provides a comprehensive, up-to-date description of the methods to be used for fitting experimental data, or to estimate model parameters, and to unify these methods into the Inverse Problem Theory. The first part of the book deals wi
On the fit of models to covariances and methodology to the Bulletin.
Bentler, P M
1992-11-01
It is noted that 7 of the 10 top-cited articles in the Psychological Bulletin deal with methodological topics. One of these is the Bentler-Bonett (1980) article on the assessment of fit in covariance structure models. Some context is provided on the popularity of this article. In addition, a citation study of methodology articles appearing in the Bulletin since 1978 was carried out. It verified that publications in design, evaluation, measurement, and statistics continue to be important to psychological research. Some thoughts are offered on the role of the journal in making developments in these areas more accessible to psychologists.
Fitting the two-compartment model in DCE-MRI by linear inversion.
Flouri, Dimitra; Lesnic, Daniel; Sourbron, Steven P
2016-09-01
Model fitting of dynamic contrast-enhanced MRI (DCE-MRI) data with nonlinear least squares (NLLS) methods is slow and may be biased by the choice of initial values. The aim of this study was to develop and evaluate a linear least squares (LLS) method to fit the two-compartment exchange and two-compartment filtration models. A second-order linear differential equation for the measured concentrations was derived, where the model parameters act as coefficients. Simulations of normal and pathological data were performed to determine calculation time, accuracy and precision under different noise levels and temporal resolutions. Performance of the LLS was evaluated by comparison against the NLLS. The LLS method is about 200 times faster, which reduces the calculation time for a 256 × 256 MR slice from 9 min to 3 s. For ideal data with low noise and high temporal resolution the LLS and NLLS were equally accurate and precise. The LLS was more accurate and precise than the NLLS at low temporal resolution, but less accurate at high noise levels. The data show that the LLS leads to a significant reduction in calculation times, and more reliable results at low noise levels. At higher noise levels the LLS becomes exceedingly inaccurate compared to the NLLS, but this may be improved using a suitable weighting strategy. Magn Reson Med 76:998-1006, 2016. © 2015 Wiley Periodicals, Inc.
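The principle behind the LLS approach — recasting the model equation so that the rate constants appear as linear regression coefficients — can be sketched on the simpler one-compartment analogue (the paper itself derives a second-order equation for the two-compartment models; this reduction and all values are illustrative):

```python
import numpy as np

t = np.linspace(0.0, 5.0, 200)
dt = t[1] - t[0]
ca = np.exp(-0.5 * t)                        # synthetic arterial input function
k1_true, k2_true = 0.8, 0.4

c = np.zeros_like(t)                         # forward-Euler tissue curve for
for i in range(1, len(t)):                   # dc/dt = k1*ca(t) - k2*c(t)
    c[i] = c[i - 1] + dt * (k1_true * ca[i - 1] - k2_true * c[i - 1])

# Integrating the ODE gives c(t) = k1*I[ca](t) - k2*I[c](t): linear in
# (k1, k2), so one least-squares solve replaces iterative NLLS fitting
# and needs no initial guesses.
cum = lambda y: np.concatenate([[0.0], np.cumsum((y[1:] + y[:-1]) / 2.0) * dt])
X = np.column_stack([cum(ca), -cum(c)])      # design matrix of running integrals
k1, k2 = np.linalg.lstsq(X, c, rcond=None)[0]
```

Because the solve is a single matrix operation per voxel, the speed advantage over iterative NLLS grows directly with image size, consistent with the 200-fold speedup reported above.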
Saunders, Christina T; Blume, Jeffrey D
2017-10-26
Mediation analysis explores the degree to which an exposure's effect on an outcome is diverted through a mediating variable. We describe a classical regression framework for conducting mediation analyses in which estimates of causal mediation effects and their variance are obtained from the fit of a single regression model. The vector of changes in exposure pathway coefficients, which we named the essential mediation components (EMCs), is used to estimate standard causal mediation effects. Because these effects are often simple functions of the EMCs, an analytical expression for their model-based variance follows directly. Given this formula, it is instructive to revisit the performance of routinely used variance approximations (e.g., delta method and resampling methods). Requiring the fit of only one model reduces the computation time required for complex mediation analyses and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations, as would be required in the Baron-Kenny framework. Using data from the BRAIN-ICU study, we provide examples to illustrate the advantages of this framework and compare it with the existing approaches. © The Author 2017. Published by Oxford University Press.
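A minimal product-of-coefficients sketch of the quantity being estimated. This is the classical decomposition fit as separate regressions; the paper's contribution is obtaining the same effects and their variance from a single fitted model. All coefficients are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                        # exposure
m = 0.5 * x + rng.normal(size=n)              # mediator: a-path = 0.5
y = 0.7 * m + 0.2 * x + rng.normal(size=n)    # outcome: b-path = 0.7, direct = 0.2

ols = lambda X, target: np.linalg.lstsq(X, target, rcond=None)[0]
a = ols(x[:, None], m)[0]                     # exposure -> mediator
b, direct = ols(np.column_stack([m, x]), y)   # mediator & exposure -> outcome

indirect = a * b                              # mediated (indirect) effect, ~0.35
total = indirect + direct                     # total effect, ~0.55
```

Estimating a and b in separate models is what makes variance formulas awkward (hence the delta method and resampling); collapsing the system into one regression sidesteps that.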
Tikhonov, Mikhail; Monasson, Remi
2018-01-01
Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.
Multi-binding site model-based curve-fitting program for the computation of RIA data
International Nuclear Information System (INIS)
Malan, P.G.; Ekins, R.P.; Cox, M.G.; Long, E.M.R.
1977-01-01
In this paper, a comparison will be made of model-based and empirical curve-fitting procedures. The implementation of a multiple binding-site curve-fitting model which will successfully fit a wide range of assay data, and which can be run on a mini-computer is described. The latter sophisticated model also provides estimates of binding site concentrations and the values of the respective equilibrium constants present: the latter have been used for refining assay conditions using computer optimisation techniques. (orig./AJ)
Global qualitative analysis of a quartic ecological model
Broer, Hendrik; Gaiko, Valery A.
2010-01-01
In this paper we complete the global qualitative analysis of a quartic ecological model. In particular, studying global bifurcations of singular points and limit cycles, we prove that the corresponding dynamical system has at most two limit cycles. (C) 2009 Elsevier Ltd. All rights reserved.
Existence of global attractor for the Trojan Y Chromosome model
Directory of Open Access Journals (Sweden)
Xiaopeng Zhao
2012-04-01
This paper is concerned with the long time behavior of the solution of the equation derived from the Trojan Y Chromosome (TYC) model with spatial spread. Based on the regularity estimates for the semigroups and the classical existence theorem of global attractors, we prove that this equation possesses a global attractor in the $H^k(\Omega)^4$ $(k\geq 0)$ space.
Models for prediction of global solar radiation on horizontal surface ...
African Journals Online (AJOL)
The estimation of global solar radiation continues to play a fundamental role in solar engineering systems and applications. This paper compares various models for estimating the average monthly global solar radiation on horizontal surface for Akure, Nigeria, using solar radiation and sunshine duration data covering years ...
Modeling of the Earth's gravity field using the New Global Earth Model (NEWGEM)
Kim, Yeong E.; Braswell, W. Danny
1989-01-01
Traditionally, the global gravity field was described by representations based on the spherical harmonics (SH) expansion of the geopotential. The SH expansion coefficients were determined by fitting the Earth's gravity data as measured by many different methods including the use of artificial satellites. As gravity data have accumulated with increasingly better accuracies, more of the higher order SH expansion coefficients were determined. The SH representation is useful for describing the gravity field exterior to the Earth but is theoretically invalid on the Earth's surface and in the Earth's interior. A new global Earth model (NEWGEM) (KIM, 1987 and 1988a) was recently proposed to provide a unified description of the Earth's gravity field inside, on, and outside the Earth's surface using the Earth's mass density profile as deduced from seismic studies, elevation and bathymetric information, and local and global gravity data. Using NEWGEM, it is possible to determine the constraints on the mass distribution of the Earth imposed by gravity, topography, and seismic data. NEWGEM is useful in investigating a variety of geophysical phenomena. It is currently being utilized to develop a geophysical interpretation of Kaula's rule. The zeroth order NEWGEM is being used to numerically integrate spherical harmonic expansion coefficients and simultaneously determine the contribution of each layer in the model to a given coefficient. The numerically determined SH expansion coefficients are also being used to test the validity of SH expansions at the surface of the Earth by comparing the resulting SH expansion gravity model with exact calculations of the gravity at the Earth's surface.
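For reference, the exterior spherical-harmonic description that NEWGEM complements looks like this when truncated to zonal terms only (a deliberate simplification for illustration; GM and a are standard WGS-84-style constants, and the function name is mine):

```python
import math

def geopotential_zonal(r, colat_rad, j_coeffs, gm=398600.4418e9, a=6378137.0):
    """Exterior geopotential with zonal harmonics only:
    V = (GM/r) * [1 - sum_n J_n * (a/r)^n * P_n(cos(colatitude))].
    Valid only outside the Earth - the SH limitation NEWGEM addresses.
    j_coeffs maps degree n -> J_n, e.g. {2: 1.0826e-3} for the Earth's
    dominant oblateness term."""
    def legendre(n, x):                      # P_n(x) via Bonnet recursion
        p0, p1 = 1.0, x
        for k in range(2, n + 1):
            p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
        return p0 if n == 0 else p1
    x = math.cos(colat_rad)
    s = sum(j * (a / r) ** n * legendre(n, x) for n, j in j_coeffs.items())
    return gm / r * (1.0 - s)
```

Fitting the J_n coefficients to satellite tracking and surface gravity data is the classical procedure described above; NEWGEM instead builds the field from a mass-density model valid inside the Earth as well.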
OILMAP: A global approach to spill modeling
International Nuclear Information System (INIS)
Spaulding, M.L.; Howlett, E.; Anderson, E.; Jayko, K.
1992-01-01
OILMAP is an oil spill model system suitable for use in both rapid response mode and long-range contingency planning. It was developed for a personal computer and employs full-color graphics to enter data, set up spill scenarios, and view model predictions. The major components of OILMAP include environmental data entry and viewing capabilities, the oil spill models, and model prediction display capabilities. Graphic routines are provided for entering wind data, currents, and any type of geographically referenced data. Several modes of the spill model are available. The surface trajectory mode is intended for quick spill response. The weathering model includes the spreading, evaporation, entrainment, emulsification, and shoreline interaction of oil. The stochastic and receptor models simulate a large number of trajectories from a single site for generating probability statistics. Each model and the algorithms they use are described. Several additional capabilities are planned for OILMAP, including simulation of tactical spill response and subsurface oil transport. 8 refs
New global fit to the total photon-proton cross-section σL+T and to the structure function F2
International Nuclear Information System (INIS)
Gabbert, D.; Nardo, L. de
2007-08-01
A fit to world data on the photon-proton cross section σL+T and the unpolarised structure function F2 is presented. The 23-parameter ALLM model based on Reggeon and Pomeron exchange is used. Cross section data were reconstructed to avoid inconsistencies with respect to R of the published F2 data base. Parameter uncertainties and correlations are obtained. (orig.)
GRace: a MATLAB-based application for fitting the discrimination-association model.
Stefanutti, Luca; Vianello, Michelangelo; Anselmi, Pasquale; Robusto, Egidio
2014-10-28
The Implicit Association Test (IAT) is a computerized two-choice discrimination task in which stimuli have to be categorized as belonging to target categories or attribute categories by pressing, as quickly and accurately as possible, one of two response keys. The discrimination association model has been recently proposed for the analysis of reaction time and accuracy of an individual respondent to the IAT. The model disentangles the influences of three qualitatively different components on the responses to the IAT: stimuli discrimination, automatic association, and termination criterion. The article presents General Race (GRace), a MATLAB-based application for fitting the discrimination association model to IAT data. GRace has been developed for Windows as a standalone application. It is user-friendly and does not require any programming experience. The use of GRace is illustrated on the data of a Coca Cola-Pepsi Cola IAT, and the results of the analysis are interpreted and discussed.
Wenseleers, Tom; Helanterä, Heikki; Alves, Denise A.; Dueñez-Guzmán, Edgar; Pamilo, Pekka
2013-01-01
The conflicts over sex allocation and male production in insect societies have long served as an important test bed for Hamilton's theory of inclusive fitness, but have for the most part been considered separately. Here, we develop new coevolutionary models to examine the interaction between these two conflicts and demonstrate that sex ratio and colony productivity costs of worker reproduction can lead to vastly different outcomes even in species that show no variation in their relatedness structure. Empirical data on worker-produced males in eight species of Melipona bees support the predictions from a model that takes into account the demographic details of colony growth and reproduction. Overall, these models contribute significantly to explaining behavioural variation that previous theories could not account for. PMID:24132088
Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria
2017-10-01
Background and objective: Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class-enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods: An R-code that accepts universal data input is presented. The programme condenses relevant group-based trajectory modelling output information of model fit indices in automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate fit-criteria assessment plot's utility. Results: Fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. Fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions: Fit-criteria assessment plot is an exploratory, visualisation tool that can be employed to assist decisions in the initial and decisive phase of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster group-based trajectory modelling's adequate use.
Goodness-of-fit tests and model diagnostics for negative binomial regression of RNA sequencing data.
Mi, Gu; Di, Yanming; Schafer, Daniel W
2015-01-01
This work is about assessing model adequacy for negative binomial (NB) regression, particularly (1) assessing the adequacy of the NB assumption, and (2) assessing the appropriateness of models for NB dispersion parameters. Tools for the first are appropriate for NB regression generally; those for the second are primarily intended for RNA sequencing (RNA-Seq) data analysis. The typically small number of biological samples and large number of genes in RNA-Seq analysis motivate us to address the trade-offs between robustness and statistical power using NB regression models. One widely-used power-saving strategy, for example, is to assume some commonalities of NB dispersion parameters across genes via simple models relating them to mean expression rates, and many such models have been proposed. As RNA-Seq analysis is becoming ever more popular, it is appropriate to make more thorough investigations into power and robustness of the resulting methods, and into practical tools for model assessment. In this article, we propose simulation-based statistical tests and diagnostic graphics to address model adequacy. We provide simulated and real data examples to illustrate that our proposed methods are effective for detecting the misspecification of the NB mean-variance relationship as well as judging the adequacy of fit of several NB dispersion models.
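The simulation-based testing idea can be sketched as a parametric bootstrap: simulate datasets under the fitted null model and ask how extreme the observed statistic is. This is an illustration of the machinery, not the article's specific tests; here a Poisson null is checked against the observed variance:

```python
import numpy as np

def overdispersion_pvalue(counts, n_sim=2000, seed=1):
    """Parametric bootstrap of a Poisson null: simulate datasets from
    Poisson(mean(counts)) and ask how often their sample variance
    reaches the observed one. Small p-values flag extra-Poisson
    dispersion - the mean-variance misspecification NB models address."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts)
    mu, var = counts.mean(), counts.var(ddof=1)
    sims = rng.poisson(mu, size=(n_sim, len(counts)))
    return float(np.mean(sims.var(axis=1, ddof=1) >= var))

rng = np.random.default_rng(7)
poisson_data = rng.poisson(5.0, 200)          # consistent with the null
nb_data = rng.negative_binomial(2, 0.3, 200)  # variance far above the mean
```

The same template applies to the NB case: simulate from the fitted NB dispersion model and compare a discrepancy statistic, which is the spirit of the article's goodness-of-fit tests.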
Validation of individual and aggregate global flood hazard models for two major floods in Africa.
Trigg, M.; Bernhofen, M.; Whyman, C.
2017-12-01
A recent intercomparison of global flood hazard models undertaken by the Global Flood Partnership shows that there is an urgent requirement to undertake more validation of the models against flood observations. As part of the intercomparison, the aggregated model dataset resulting from the project was provided as open access data. We compare the individual and aggregated flood extent output from the six global models and test these against two major floods on the African continent within the last decade, namely severe flooding on the Niger River in Nigeria in 2012, and on the Zambezi River in Mozambique in 2007. We test whether aggregating different numbers and combinations of models increases model fit to the observations compared with the individual model outputs. We present results that illustrate some of the challenges of comparing imperfect models with imperfect observations, and also that of defining the probability of a real event in order to test standard model output probabilities. Finally, we propose a collective set of open access validation flood events, with associated observational data and descriptions, that provide a standard set of tests across different climates and hydraulic conditions.
Global Atmosphere Watch Workshop on Measurement-Model ...
The World Meteorological Organization’s (WMO) Global Atmosphere Watch (GAW) Programme coordinates high-quality observations of atmospheric composition from global to local scales with the aim to drive high-quality and high-impact science while co-producing a new generation of products and services. In line with this vision, GAW’s Scientific Advisory Group for Total Atmospheric Deposition (SAG-TAD) has a mandate to produce global maps of wet, dry and total atmospheric deposition for important atmospheric chemicals to enable research into biogeochemical cycles and assessments of ecosystem and human health effects. The most suitable scientific approach for this activity is the emerging technique of measurement-model fusion for total atmospheric deposition. This technique requires global-scale measurements of atmospheric trace gases, particles, precipitation composition and precipitation depth, as well as predictions of the same from global/regional chemical transport models. The fusion of measurement and model results requires data assimilation and mapping techniques. The objective of the GAW Workshop on Measurement-Model Fusion for Global Total Atmospheric Deposition (MMF-GTAD), an initiative of the SAG-TAD, was to review the state-of-the-science and explore the feasibility and methodology of producing, on a routine retrospective basis, global maps of atmospheric gas and aerosol concentrations as well as wet, dry and total deposition via measurement-model
Yuan, Shupei; Ma, Wenjuan; Kanthawala, Shaheen; Peng, Wei
2015-09-01
Health and fitness applications (apps) are one of the major app categories in the current mobile app market. Few studies have examined this area from the users' perspective. This study adopted the Extended Unified Theory of Acceptance and Use of Technology (UTAUT2) Model to examine the predictors of the users' intention to adopt health and fitness apps. A survey (n=317) was conducted with college-aged smartphone users at a Midwestern university in the United States. Performance expectancy, hedonic motivations, price value, and habit were significant predictors of users' intention of continued usage of health and fitness apps. However, effort expectancy, social influence, and facilitating conditions were not found to predict users' intention of continued usage of health and fitness apps. This study extends the UTAUT2 Model to the mobile apps domain and provides health professionals, app designers, and marketers with insights into user experience in terms of continuously using health and fitness apps.
A History of Regression and Related Model-Fitting in the Earth Sciences (1636?-2000)
International Nuclear Information System (INIS)
Howarth, Richard J.
2001-01-01
The (statistical) modeling of the behavior of a dependent variate as a function of one or more predictors provides examples of model-fitting which span the development of the earth sciences from the 17th Century to the present. The historical development of these methods and their subsequent application is reviewed. Bond's predictions (c. 1636 and 1668) of change in the magnetic declination at London may be the earliest attempt to fit such models to geophysical data. Following publication of Newton's theory of gravitation in 1726, analysis of data on the length of a 1° meridian arc, and the length of a pendulum beating seconds, as a function of sin²(latitude), was used to determine the ellipticity of the oblate spheroid defining the Figure of the Earth. The pioneering computational methods of Mayer in 1750, Boscovich in 1755, and Lambert in 1765, and the subsequent independent discoveries of the principle of least squares by Gauss in 1799, Legendre in 1805, and Adrain in 1808, and its later substantiation on the basis of probability theory by Gauss in 1809 were all applied to the analysis of such geodetic and geophysical data. Notable later applications include: the geomagnetic survey of Ireland by Lloyd, Sabine, and Ross in 1836, Gauss's model of the terrestrial magnetic field in 1838, and Airy's 1845 analysis of the residuals from a fit to pendulum lengths, from which he recognized the anomalous character of measurements of gravitational force which had been made on islands. In the early 20th Century applications to geological topics proliferated, but the computational burden effectively held back applications of multivariate analysis. Following World War II, the arrival of digital computers in universities in the 1950s facilitated computation, and fitting linear or polynomial models as a function of geographic coordinates, trend surface analysis, became popular during the 1950-60s. The inception of geostatistics in France at this time by Matheron had its
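The historical gravity-versus-sin²(latitude) fits described above are ordinary linear least squares. A sketch with synthetic values standing in for the historical pendulum data; the coefficients are illustrative, chosen close to modern normal-gravity values:

```python
import numpy as np

# Clairaut-style model: g = g_e * (1 + beta * sin^2(latitude)), where
# beta encodes the ellipticity signal the 18th-century fits sought.
lat = np.radians([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
g = 9.7803 * (1.0 + 0.00530 * np.sin(lat) ** 2)  # synthetic "pendulum" data

# Linear in (g_e, g_e*beta), so ordinary least squares recovers both.
X = np.column_stack([np.ones_like(lat), np.sin(lat) ** 2])
coef, *_ = np.linalg.lstsq(X, g, rcond=None)
g_e, g_e_beta = coef
beta = g_e_beta / g_e                            # recovers ~0.00530
```

This is exactly the computation that Mayer, Boscovich and their successors carried out by hand, minus the least-squares machinery that Gauss, Legendre and Adrain later formalized.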
Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J
2014-04-01
The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was conducted. The resulting estimated model factors accurately predicted membrane performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
Modeling the Acceleration of Global Surface Temperature
Jones, B.
2017-12-01
A mathematical projection focusing on the changing rate of acceleration of global surface temperatures. Using the historical trajectory and informed expert near-term prediction, it is possible to extend this further forward, drawing a reference arc of acceleration. Presented here is an example of this technique based on data found in the Summary of Findings of A New Estimate of the Average Earth Surface Land Temperature Spanning 1753 to 2011 and that same team's stated prediction to 2050. With this, we can project a curve showing future acceleration:

Decade (midpoint) | Change in Global Land Temp (°C) | Slope
1755 | 0.000 |
1955 | 0.600 | 0.0030
2005 | 1.500 | 0.0051
2045 | 3.000 | 0.0375 (projected trend)
2095 | 5.485 | 0.0497 (projected trend)
2145 | 8.895 | 0.0682 (projected trend)
2195 | 13.488 | 0.0919 (projected trend)

Observations: Slopes are getting steeper and doing so faster, in an "acceleration of the acceleration" or an "arc of acceleration". This is consistent with the non-linear accelerating feedback loops of global warming. Such projected temperatures threaten human civilization and human life. This 'thumbnail' projection is consistent with other long-term predictions based on anthropogenic greenhouse gases. This projection is low compared with those whose forecasts include greenhouse gases released from thawing permafrost and clathrate hydrates. A reference line: this curve should be considered a point of reference. In the near term, and absent significant drawdown of greenhouse gases, my "bet" for this AGU session is that future temperatures will generally be above this reference curve: for example, the decade ending 2020, more than 1.9 °C, and the decade ending 2030, more than 2.3 °C, again measured from the 1750 start point. *Caveat: The long-term curve and prediction assume that mankind does not move quickly away from high-cost fossil fuels and does not invent, mobilize and take actions drawing down greenhouse gases. Those seeking a comprehensive action plan are directed to drawdown.org
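The arithmetic behind the projection is just successive two-point slopes on the decade series. A quick check, with values transcribed from the abstract's table (note the 2005 slope printed in the abstract, 0.0051, does not match a simple two-point computation, so the slopes here are recomputed from the temperatures):

```python
# (decade midpoint, change in global land temperature in °C from the 1750s)
series = [(1755, 0.000), (1955, 0.600), (2005, 1.500), (2045, 3.000),
          (2095, 5.485), (2145, 8.895), (2195, 13.488)]

def segment_slopes(points):
    """Slope in °C per year between each pair of successive points."""
    return [(t2 - t1) / (y2 - y1)
            for (y1, t1), (y2, t2) in zip(points, points[1:])]

slopes = segment_slopes(series)
# "acceleration of the acceleration": each segment steeper than the last
accelerating = all(b > a for a, b in zip(slopes, slopes[1:]))
```

Under these numbers the segment slopes increase monotonically, which is the "arc of acceleration" claim in quantitative form.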
Levy flights and self-similar exploratory behaviour of termite workers: beyond model fitting.
Directory of Open Access Journals (Sweden)
Octavio Miramontes
Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show, by analyzing over half a million movement displacements, that isolated termite workers actually exhibit a range of very interesting dynamical properties, including Lévy flights, in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale.
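A Lévy flight in the sense tested here is a random walk whose step lengths follow a power law P(l) ∝ l^(-mu) with 1 < mu ≤ 3. A minimal sketch of sampling such a walk by inverse-transform sampling (illustrative parameters, not fitted to the termite data):

```python
import numpy as np

def levy_steps(n, mu=2.0, l_min=1.0, seed=0):
    """Sample n step lengths from P(l) ∝ l^-mu, l >= l_min (1 < mu <= 3).

    Inverse transform of the survival function (l/l_min)^-(mu-1).
    """
    u = np.random.default_rng(seed).random(n)
    return l_min * (1.0 - u) ** (-1.0 / (mu - 1.0))

def walk(n, mu=2.0, seed=0):
    """2-D random walk with power-law step lengths and uniform headings."""
    rng = np.random.default_rng(seed)
    lengths = levy_steps(n, mu=mu, seed=seed)
    angles = rng.uniform(0.0, 2.0 * np.pi, n)
    return np.cumsum(lengths * np.cos(angles)), np.cumsum(lengths * np.sin(angles))
```

The heavy tail (occasional very long relocations between clusters of short moves) is what distinguishes such trajectories from Brownian motion and motivates the anomalous-diffusion and structure-function analyses the study uses instead of model fitting alone.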
Numerical Simulation of Hydrogen Combustion: Global Reaction Model and Validation
Energy Technology Data Exchange (ETDEWEB)
Zhang, Yun [School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an (China); Department of Mechanical, Aerospace and Nuclear Engineering, Rensselaer Polytechnic Institute, Troy, NY (United States); Liu, Yinhe, E-mail: yinheliu@mail.xjtu.edu.cn [School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an (China)
2017-11-20
Due to the complexity of modeling the combustion process in nuclear power plants, global mechanisms are preferred for numerical simulation. To quickly perform highly resolved simulations of large-scale hydrogen combustion with limited processing resources, a method based on thermal theory was developed to obtain kinetic parameters of a global reaction mechanism for hydrogen–air combustion over a wide range of mixtures. The calculated kinetic parameters at lower hydrogen concentration (C_hydrogen < 20%) were validated against results obtained from experimental measurements in a container and combustion test facility. In addition, the numerical data from the global mechanism (C_hydrogen > 20%) were compared with results from a detailed mechanism. Good agreement between the model prediction and the experimental data was achieved, and the comparison between simulation results by the detailed mechanism and the global reaction mechanism shows that the present calculated global mechanism has excellent predictive capabilities for a wide range of hydrogen–air mixtures.
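A global (one-step) mechanism replaces the detailed chain chemistry with a single Arrhenius rate law whose parameters are fitted. A sketch of the functional form only; the pre-exponential factor, activation energy and reaction orders below are placeholders, not the values fitted in the paper:

```python
import math

R = 8.314  # universal gas constant, J/(mol·K)

def global_rate(T, c_h2, c_o2, A=1.0e9, Ea=1.3e5, a=1.0, b=0.5):
    """One-step global rate w = A * exp(-Ea / (R*T)) * [H2]^a * [O2]^b.

    T in K, concentrations in mol/m^3. A, Ea, a, b are placeholder
    values; the paper fits them (via thermal theory) so that the single
    step reproduces detailed-mechanism behaviour across mixtures.
    """
    return A * math.exp(-Ea / (R * T)) * c_h2**a * c_o2**b
```

The fitting task is then to choose (A, Ea, a, b), possibly as functions of hydrogen concentration, so that flame speeds and heat release from this single expression track the detailed mechanism.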
Energy Technology Data Exchange (ETDEWEB)
Varela, M.
2001-07-01
The introduction of a new technology implies that, as its production and use increase, its operation improves and its investment and production costs decrease. The accumulation of experience and learning with a new technology grows in parallel with its market share. This process is represented by technological learning curves, and the energy sector is no exception to this substitution of old technologies by new ones. This paper briefly reviews the main energy models that include technology dynamics (learning). The energy scenarios developed by global energy models assume that technology characteristics vary with time, but this trend has traditionally been incorporated exogenously, that is, as a function of time only. This practice is applied to cost indicators of the technology such as specific investment costs, or to the efficiency of energy technologies. In recent years, the concept of endogenous technological learning has been integrated into these global energy models. This paper examines the concept of technological learning in global energy models. It also analyses the technological dynamics of energy systems, including endogenous modelling of the process of technological progress. Finally, it compares several of the most widely used global energy models (MARKAL, MESSAGE and ERIS), focusing on how these models use the concept of technological learning. (Author) 17 refs.
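Whether treated exogenously or endogenously, the learning curve itself is a one-factor power law: each doubling of cumulative capacity reduces specific cost by a fixed fraction, the learning rate. A minimal sketch with illustrative numbers (not taken from MARKAL, MESSAGE or ERIS):

```python
import math

def learning_curve_cost(capacity, c0, cap0, learning_rate):
    """Specific cost after cumulative capacity grows from cap0 to capacity.

    learning_rate is the fractional cost reduction per doubling of
    cumulative capacity (e.g. 0.2 means 20% cheaper per doubling).
    The learning exponent b satisfies 1 - learning_rate = 2^-b.
    """
    b = -math.log2(1.0 - learning_rate)
    return c0 * (capacity / cap0) ** (-b)
```

For example, with a starting cost of 1000 per kW at 1 GW installed and a 20% learning rate, the cost falls to 800 at 2 GW and 640 at 4 GW. Endogenous learning means the model itself chooses deployment, and hence moves along this curve, rather than having costs prescribed as a function of time.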
Global Nonlinear Model Identification with Multivariate Splines
De Visser, C.C.
2011-01-01
At present, model based control systems play an essential role in many aspects of modern society. Application areas of model based control systems range from food processing to medical imaging, and from process control in oil refineries to the flight control systems of modern aircraft. Central to a
A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns
Dao, Ngocanh; Genton, Marc G.
2014-01-01
Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte
Modeling global distribution of agricultural insecticides in surface waters
International Nuclear Information System (INIS)
Ippolito, Alessio; Kattwinkel, Mira; Rasmussen, Jes J.; Schäfer, Ralf B.; Fornaroli, Riccardo; Liess, Matthias
2015-01-01
Agricultural insecticides constitute a major driver of animal biodiversity loss in freshwater ecosystems. However, the global extent of their effects and the spatial extent of exposure remain largely unknown. We applied a spatially explicit model to estimate the potential for agricultural insecticide runoff into streams. Water bodies within 40% of the global land surface were at risk of insecticide runoff. We separated the influence of natural factors and variables under human control determining insecticide runoff. In the northern hemisphere, insecticide runoff presented a latitudinal gradient mainly driven by insecticide application rate; in the southern hemisphere, a combination of daily rainfall intensity, terrain slope, agricultural intensity and insecticide application rate determined the process. The model predicted the upper limit of observed insecticide exposure measured in water bodies (n = 82) in five different countries reasonably well. The study provides a global map of hotspots for insecticide contamination guiding future freshwater management and conservation efforts.

Highlights:
• First global map of insecticide runoff through modelling.
• Model predicts the upper limit of insecticide exposure when compared to field data.
• Water bodies in 40% of the global land surface may be at risk of adverse effects.
• Insecticide application rate, terrain slope and rainfall are the main drivers of exposure.

We provide the first global map of insecticide runoff to surface water, predicting that water bodies in 40% of the global land surface may be at risk of adverse effects.
Usefulness and limitations of global flood risk models
Ward, Philip; Jongman, Brenden; Salamon, Peter; Simpson, Alanna; Bates, Paul; De Groeve, Tom; Muis, Sanne; Coughlan de Perez, Erin; Rudari, Roberto; Mark, Trigg; Winsemius, Hessel
2016-04-01
Global flood risk models are now a reality. Initially, their development was driven by a demand from users for first-order global assessments to identify risk hotspots. Relentless upward trends in flood damage over the last decade have enhanced interest in such assessments. The adoption of the Sendai Framework for Disaster Risk Reduction and the Warsaw International Mechanism for Loss and Damage Associated with Climate Change Impacts have made these efforts even more essential. As a result, global flood risk models are being used more and more in practice, by an increasingly large number of practitioners and decision-makers. However, they clearly have their limits compared to local models. To address these issues, a team of scientists and practitioners recently came together at the Global Flood Partnership meeting to critically assess the question 'What can('t) we do with global flood risk models?'. The results of this dialogue (Ward et al., 2015) will be presented, opening a discussion on similar broader initiatives at the science-policy interface in other natural hazards. In this contribution, examples are provided of successful applications of global flood risk models in practice (for example together with the World Bank, Red Cross, and UNISDR), and limitations and gaps between user 'wish-lists' and model capabilities are discussed. Finally, a research agenda is presented for addressing these limitations and reducing the gaps. Ward et al., 2015. Nature Climate Change, doi:10.1038/nclimate2742
Fits of the baryon magnetic moments to the quark model and spectrum-generating SU(3)
International Nuclear Information System (INIS)
Bohm, A.; Teese, R.B.
1982-01-01
We show that for theoretical as well as phenomenological reasons the baryon magnetic moments that fulfill simple group transformation properties should be taken in intrinsic rather than nuclear magnetons. A fit of the recent experimental data to the reduced matrix elements of the usual octet electromagnetic current is still not good, and in order to obtain acceptable agreement, one has to add correction terms to the octet current. We have tested two kinds of corrections: U-spin-scalar terms, which are singled out by the model-independent algebraic properties of the hadron electromagnetic current, and octet U-spin vectors, which could come from quark-mass breaking in a nonrelativistic quark model. We find that the U-spin-scalar terms are more important than the U-spin vectors for various levels of demanded theoretical accuracy.
Globalizing High-Tech Business Models
DEFF Research Database (Denmark)
Turcan, Romeo V.
2012-01-01
resources and behavioral patterns. Two sources could be identified that effect these tensions, namely strategic experimentation and business model experimentation. For example, entrepreneurs are trying to ease the tensions in the organizational gestalt as a result of a change in the business model...... and growth path. To internationalize, international new ventures have to develop a product-led business model as services do not travel. Opting to attract venture capital, entrepreneurs are to deal with dyadic tensions that are the result of differences in entrepreneurs’ and VCs’ goals and measures...
Directory of Open Access Journals (Sweden)
Cristina García Magro
2015-06-01
Purpose: The paper offers an analysis model that makes it possible to measure the impact of trade shows on performance, and whether exhibitors know the visitors' motives for participating.
Design/methodology: A review of the literature is carried out concerning two of the principal interested agents, exhibitors and visitors, focusing on the line of investigation concerning the motives for participating (or not) in a trade show. Based on the findings from each perspective, a comparative analysis is carried out in order to determine the degree of understanding between the two.
Findings: Trade shows can be studied from an integrated strategic marketing approach. The fit model between the reasons for participation of exhibitors and visitors reveals a lack of understanding between exhibitors and visitors, leading to dissatisfaction with participation, a fact that is reflected in the fair's success. The model shows that a strategic plan must be designed in which the visitor's reason for participation is incorporated as a moderating variable of the exhibitor's reason for participation. The article concludes with a series of proposals for improving trade-show results.
Social implications: A fit model that improves the performance of trade shows implicitly leads to successful achievement of targets for multiple stakeholders, beyond visitors and exhibitors alone.
Originality/value: The integrated stakeholder perspective allows the study of the relationships between the principal interest groups, so that knowledge of the state of the question on trade shows facilitates the task of the investigator in future academic work and allows interested groups to obtain better performance from participation in fairs, as visitor or as
Förster, Jens
2009-02-01
Nine studies showed a bidirectional link (a) between a global processing style and generation of similarities and (b) between a local processing style and generation of dissimilarities. In Experiments 1-4, participants were primed with global versus local perception styles and then asked to work on an allegedly unrelated generation task. Across materials, participants generated more similarities than dissimilarities after global priming, whereas for participants with local priming, the opposite was true. Experiments 5-6 demonstrated a bidirectional link whereby participants who were first instructed to search for similarities attended more to the gestalt of a stimulus than to its details, whereas the reverse was true for those who were initially instructed to search for dissimilarities. Because important psychological variables are correlated with processing styles, in Experiments 7-9, temporal distance, a promotion focus, and high power were predicted and shown to enhance the search for similarities, whereas temporal proximity, a prevention focus, and low power enhanced the search for dissimilarities. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
International Nuclear Information System (INIS)
Xu, L.; Lees, R.M.; Hougen, J.T.
1999-01-01
Equilibrium structural constants and certain torsion–rotation interaction parameters have been determined for methanol and acetaldehyde from ab initio calculations using GAUSSIAN 94. The substantial molecular flexing which occurs in going from the bottom to the top of the torsional potential barrier can be quantitatively related to coefficients of torsion–rotation terms having a (1 − cos 3γ) dependence on torsional angle γ. The barrier height, six equilibrium structural constants characterizing the bottom of the potential well, and six torsion–rotation constants are all compared to experimental parameters obtained from global fits to large microwave and far-infrared data sets for methanol and acetaldehyde. The rather encouraging agreement between the Gaussian and global fit results for methanol seems both to validate the accuracy of ab initio calculations of these parameters, and to demonstrate that the physical origin of these torsion–rotation interaction terms in methanol lies primarily in structural relaxation with torsion. The less satisfactory agreement between theory and experiment for acetaldehyde requires further study. Copyright © 1999 American Institute of Physics
Bottom and charm mass determinations from global fits to QQ̄ bound states at N3LO
Mateu, Vicent; Ortega, Pablo G.
2018-01-01
The bottomonium spectrum up to n = 3 is studied within Non-Relativistic Quantum Chromodynamics up to N3LO. We consider finite charm quark mass effects both in the QCD potential and the MS-bar–pole mass relation up to third order in the Y-scheme counting. The u = 1/2 renormalon of the static potential is canceled by expressing the bottom quark pole mass in terms of the MSR mass. A careful investigation of scale variation reveals that, while n = 1, 2 states are well behaved within perturbation theory, n = 3 bound states are no longer reliable. We carry out our analysis in the n_l = 3 and n_l = 4 schemes and conclude that, as long as finite m_c effects are smoothly incorporated in the MSR mass definition, the difference between the two schemes is rather small. Performing a fit to bb̄ bound states we find m̄_b(m̄_b) = 4.216 ± 0.039 GeV. We extend our analysis to the lowest-lying charmonium states, finding m̄_c(m̄_c) = 1.273 ± 0.054 GeV. Finally, we perform simultaneous fits for m̄_b and α_s, finding α_s^(n_f=5)(m_Z) = 0.1178 ± 0.0051. Additionally, using a modified version of the MSR mass with lighter massive quarks we are able to predict the uncalculated O(α_s^4) virtual massive quark corrections to the relation between the MS-bar and pole masses.
The status and challenge of global fire modelling
Hantson, Stijn; Arneth, Almut; Harrison, Sandy P.; Kelley, Douglas I.; Prentice, I. Colin; Rabin, Sam S.; Archibald, Sally; Mouillot, Florent; Arnold, Steve R.; Artaxo, Paulo; Bachelet, Dominique; Ciais, Philippe; Forrest, Matthew; Friedlingstein, Pierre; Hickler, Thomas; Kaplan, Jed O.; Kloster, Silvia; Knorr, Wolfgang; Lasslop, Gitta; Li, Fang; Mangeon, Stephane; Melton, Joe R.; Meyn, Andrea; Sitch, Stephen; Spessa, Allan; van der Werf, Guido R.; Voulgarakis, Apostolos; Yue, Chao
2016-06-01
Biomass burning impacts vegetation dynamics, biogeochemical cycling, atmospheric chemistry, and climate, with sometimes deleterious socio-economic impacts. Under future climate projections it is often expected that the risk of wildfires will increase. Our ability to predict the magnitude and geographic pattern of future fire impacts rests on our ability to model fire regimes, using either well-founded empirical relationships or process-based models with good predictive skill. While a large variety of models exist today, it is still unclear which type of model or degree of complexity is required to model fire adequately at regional to global scales. This is the central question underpinning the creation of the Fire Model Intercomparison Project (FireMIP), an international initiative to compare and evaluate existing global fire models against benchmark data sets for present-day and historical conditions. In this paper we review how fires have been represented in fire-enabled dynamic global vegetation models (DGVMs) and give an overview of the current state of the art in fire-regime modelling. We indicate which challenges still remain in global fire modelling and stress the need for a comprehensive model evaluation and outline what lessons may be learned from FireMIP.
Validation of a Global Hydrodynamic Flood Inundation Model
Bates, P. D.; Smith, A.; Sampson, C. C.; Alfieri, L.; Neal, J. C.
2014-12-01
In this work we present first validation results for a hyper-resolution global flood inundation model. We use a true hydrodynamic model (LISFLOOD-FP) to simulate flood inundation at 1km resolution globally and then use downscaling algorithms to determine flood extent and depth at 90m spatial resolution. Terrain data are taken from a custom version of the SRTM data set that has been processed specifically for hydrodynamic modelling. Return periods of flood flows along the entire global river network are determined using: (1) empirical relationships between catchment characteristics and index flood magnitude in different hydroclimatic zones derived from global runoff data; and (2) an index flood growth curve, also empirically derived. Bankful return period flow is then used to set channel width and depth, and flood defence impacts are modelled using empirical relationships between GDP, urbanization and defence standard of protection. The results of these simulations are global flood hazard maps for a number of different return period events from 1 in 5 to 1 in 1000 years. We compare these predictions to flood hazard maps developed by national government agencies in the UK and Germany using similar methods but employing detailed local data, and to observed flood extent at a number of sites including St. Louis, USA and Bangkok in Thailand. Results show that global flood hazard models can have considerable skill given careful treatment to overcome errors in the publicly available data that are used as their input.
Global tropospheric ozone modeling: Quantifying errors due to grid resolution
Wild, Oliver; Prather, Michael J
2006-01-01
Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quant...
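The resolution sensitivity described above follows directly from the nonlinearity: applying a nonlinear production rate to grid-averaged precursors is not the same as averaging the rate computed on the fine grid. A toy illustration of this averaging effect (the rate function is purely illustrative, not the model's chemistry):

```python
import numpy as np

def production(no_x):
    """Toy nonlinear production rate: rises then declines with NOx,
    qualitatively like ozone production. Illustrative only."""
    return no_x / (1.0 + no_x) ** 2

# Fine grid: an emission plume concentrated in one of four cells.
fine = np.array([8.0, 0.1, 0.1, 0.1])      # NOx in four fine cells (arbitrary units)
# Coarse grid: the same total NOx smeared uniformly over the cell.
coarse = np.full(4, fine.mean())

p_fine = production(fine).mean()            # production resolved on the fine grid
p_coarse = production(coarse).mean()        # production from averaged precursors
```

With these numbers the coarse grid substantially overestimates production, because smearing the plume moves precursors out of the saturated regime; this is the kind of systematic grid-resolution error the study quantifies between T21 and T106.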
Fitting Data to Model: Structural Equation Modeling Diagnosis Using Two Scatter Plots
Yuan, Ke-Hai; Hayashi, Kentaro
2010-01-01
This article introduces two simple scatter plots for model diagnosis in structural equation modeling. One plot contrasts a residual-based M-distance of the structural model with the M-distance for the factor score. It contains information on outliers, good leverage observations, bad leverage observations, and normal cases. The other plot contrasts…
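The M-distances plotted against each other in the article are Mahalanobis distances. A generic sketch of computing them from multivariate data (the article computes them from structural-model residuals and factor scores, which requires a fitted SEM; here plain data stands in):

```python
import numpy as np

def mahalanobis_distances(X):
    """Mahalanobis distance of each row of X from the sample mean,
    using the sample covariance. X has shape (n_cases, n_variables)."""
    mu = X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    # quadratic form sqrt(d' * inv_cov * d) evaluated row by row
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, inv_cov, diff))
```

Plotting one such distance (from structural-model residuals) against another (from factor scores) separates outliers, good and bad leverage observations, and normal cases into different regions of the scatter plot, which is the diagnostic idea of the article.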
Model Atmosphere Spectrum Fit to the Soft X-Ray Outburst Spectrum of SS Cyg
Directory of Open Access Journals (Sweden)
V. F. Suleimanov
2015-02-01
The X-ray spectrum of SS Cyg in outburst has a very soft component that can be interpreted as the fast-rotating, optically thick boundary layer on the white dwarf surface. This component was carefully investigated by Mauche (2004) using the Chandra LETG spectrum of this object in outburst. The spectrum shows broad (≈5 Å) spectral features that have been interpreted as a large number of absorption lines on a blackbody continuum with a temperature of ≈250 kK. Because the spectrum resembles the photospheric spectra of super-soft X-ray sources, we tried to fit it with high-gravity hot LTE stellar model atmospheres with solar chemical composition, specially computed for this purpose. We obtained a reasonably good fit to the 60–125 Å spectrum with the following parameters: Teff = 190 kK, log g = 6.2, and N_H = 8 · 10^19 cm^-2, although at shorter wavelengths the observed spectrum has a much higher flux. The reasons for this are discussed. The hypothesis of a fast-rotating boundary layer is supported by the derived low surface gravity.
Directory of Open Access Journals (Sweden)
Omoruyi Credit Irabor
2017-12-01
A major contributor to the disparity in cancer outcomes across the globe is the limited health care access in low- and middle-income countries that results from the shortfall in human resources for health (HRH), fomented by the limited training and leadership capacity of low-resource countries. In 2012, Seed Global Health teamed up with the Peace Corps to create the Global Health Service Partnership, an initiative that has introduced a novel model for tackling the HRH crisis in developing regions of the world. The Global Health Service Partnership has made global health impacts in leveraging partnerships for HRH development, faculty activities and output, scholarship engagement, adding value to the learning environment, health workforce empowerment, and infrastructure development.
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
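A truncated-power basis makes the continuity constraints at the knots implicit, which is the essence of the reparameterization the authors describe. A fixed-effects-only sketch using ordinary least squares (the mixed-model version would add random effects, e.g. via SAS PROC MIXED; that part is omitted here):

```python
import numpy as np

def truncated_power_basis(x, knots, degree=1):
    """Design matrix: polynomial terms 1, x, ..., x^d plus one truncated
    power term (x - k)_+^d per knot. Each truncated term is zero and
    continuous at its knot, so continuity of the spline is built in."""
    cols = [x**p for p in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

# Fit a fixed-knot linear spline to data whose slope changes at x = 5.
x = np.linspace(0.0, 10.0, 200)
y = np.where(x < 5.0, x, 5.0 + 3.0 * (x - 5.0))
X = truncated_power_basis(x, knots=[5.0], degree=1)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta = [intercept, slope before knot, change in slope at knot]
```

Varying the polynomial order per segment, as in the paper, amounts to choosing which polynomial and truncated terms enter the fixed and random parts of the design.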
Kuhlman, J. M.
1979-01-01
The aerodynamic design of a wind-tunnel model of a wing representative of that of a subsonic jet transport aircraft, fitted with winglets, was performed using two recently developed optimal wing-design computer programs. Both potential flow codes use a vortex lattice representation of the near-field of the aerodynamic surfaces for determination of the required mean camber surfaces for minimum induced drag, and both codes use far-field induced drag minimization procedures to obtain the required spanloads. One code uses a discrete vortex wake model for this far-field drag computation, while the second uses a 2-D advanced panel wake model. Wing camber shapes for the two codes are very similar, but the resulting winglet camber shapes differ widely. Design techniques and considerations for these two wind-tunnel models are detailed, including a description of the necessary modifications of the design geometry to format it for use by a numerically controlled machine for the actual model construction.
Quantification of effective plant rooting depth: advancing global hydrological modelling
Yang, Y.; Donohue, R. J.; McVicar, T.
2017-12-01
Plant rooting depth (Zr) is a key parameter in hydrological and biogeochemical models, yet the global spatial distribution of Zr is largely unknown due to the difficulties in its direct measurement. Moreover, Zr observations are usually only representative of a single plant or several plants, which can differ greatly from the effective Zr over a modelling unit (e.g., catchment or grid-box). Here, we provide a global parameterization of an analytical Zr model that balances the marginal carbon cost and benefit of deeper roots, and produce a climatological (i.e., 1982-2010 average) global Zr map. To test the Zr estimates, we apply the estimated Zr in a highly transparent hydrological model (i.e., the Budyko-Choudhury-Porporato (BCP) model) to estimate mean annual actual evapotranspiration (E) across the globe. We then compare the estimated E with both water balance-based E observations at 32 major catchments and satellite grid-box retrievals across the globe. Our results show that the BCP model, when implemented with Zr estimated herein, optimally reproduced the spatial pattern of E at both scales and provides improved model outputs when compared to BCP model results from two already existing global Zr datasets. These results suggest that our Zr estimates can be effectively used in state-of-the-art hydrological models, and potentially biogeochemical models, where the determination of Zr currently largely relies on biome type-based look-up tables.
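The Budyko-Choudhury-Porporato model referred to above is built on the Choudhury form of the Budyko curve. A sketch of that curve alone (the BCP model ties the catchment parameter n to rooting depth Zr; here n is just an illustrative constant):

```python
def choudhury_evaporation(P, E0, n=1.8):
    """Mean annual actual evapotranspiration from the Choudhury form of
    the Budyko curve: E = P * E0 / (P^n + E0^n)^(1/n).

    P: mean annual precipitation, E0: potential evapotranspiration
    (same units). n is a catchment parameter; larger n pushes E toward
    the limit min(P, E0). The value 1.8 is illustrative only.
    """
    return P * E0 / (P**n + E0**n) ** (1.0 / n)
```

Because E is bounded by both water supply (P) and energy demand (E0), the estimated Zr enters only through n, which is what lets a global Zr map be tested by comparing modelled E against catchment water balances and satellite retrievals.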
Radiative heating in global climate models
Energy Technology Data Exchange (ETDEWEB)
Baer, F.; Arsky, N.; Rocque, K. [Univ. of Maryland, College Park, MD (United States)
1996-04-01
Longwave radiation (LWR) algorithms from various GCMs vary significantly from one another for the same clear-sky input data. This variability becomes pronounced when clouds are included. We demonstrate this effect by intercomparing the various models' output using observed data, including clouds, from ARM/CART data taken in Oklahoma.
Global comparison of three greenhouse climate models
Bavel, van C.H.M.; Takakura, T.; Bot, G.P.A.
1985-01-01
Three dynamic simulation models for calculating the greenhouse climate and its energy requirements for both heating and cooling were compared by making detailed computations for each of seven sets of data. The data sets ranged from a cold winter day, requiring heating, to a hot summer day, requiring
Global ocean modeling on the Connection Machine
International Nuclear Information System (INIS)
Smith, R.D.; Dukowicz, J.K.; Malone, R.C.
1993-01-01
The authors have developed a version of the Bryan-Cox-Semtner ocean model (Bryan, 1969; Semtner, 1976; Cox, 1984) for massively parallel computers. Such models are three-dimensional, Eulerian models that use latitude and longitude as the horizontal spherical coordinates and fixed depth levels as the vertical coordinate. The incompressible Navier-Stokes equations, with a turbulent eddy viscosity, and the mass continuity equation are solved, subject to the hydrostatic and Boussinesq approximations. The traditional model formulation uses a rigid-lid approximation (vertical velocity = 0 at the ocean surface) to eliminate fast surface waves. These waves would otherwise require that a very short time step be used in numerical simulations, which would greatly increase the computational cost. To solve the equations with the rigid-lid assumption, the equations of motion are split into two parts: a set of two-dimensional "barotropic" equations describing the vertically-averaged flow, and a set of three-dimensional "baroclinic" equations describing temperature, salinity and deviations of the horizontal velocities from the vertically-averaged flow.
A new fit-for-purpose model testing framework: Decision Crash Tests
Tolson, Bryan; Craig, James
2016-04-01
Decision-makers in water resources are often burdened with selecting appropriate multi-million dollar strategies to mitigate the impacts of climate or land use change. Unfortunately, the suitability of existing hydrologic simulation models to accurately inform decision-making is in doubt, because the testing procedures used to evaluate model utility (i.e., model validation) are insufficient. For example, many authors have noted that the Klemeš Crash Tests (KCTs), the classic model validation procedures from Klemeš (1986) renamed as KCTs by Andréassian et al. (2009), have yet to become common practice in hydrology. Furthermore, Andréassian et al. (2009) claim that the progression of hydrological science requires widespread use of KCTs and the development of new crash tests. Existing simulation (not forecasting) model testing procedures such as KCTs look backwards (checking for consistency between simulations and past observations) rather than forwards (explicitly assessing whether the model is likely to support future decisions). We propose a fundamentally different, forward-looking, decision-oriented hydrologic model testing framework based upon the concept of fit-for-purpose model testing that we call Decision Crash Tests or DCTs. Key DCT elements are: i) the model purpose (i.e., the decision the model is meant to support) must be identified so that model outputs can be mapped to management decisions; ii) the framework evaluates not just the selected hydrologic model but the entire suite of model-building decisions associated with model discretization, calibration, etc. The framework is constructed to directly and quantitatively evaluate model suitability. The DCT framework is applied to a model building case study on the Grand River in Ontario, Canada. A hypothetical binary decision scenario is analysed (upgrade or not upgrade the existing flood control structure) under two different sets of model building
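The core DCT idea, mapping model outputs to the decision they imply and checking that the decision is robust across model-building choices, can be sketched in a few lines. All names and numbers below are hypothetical, chosen only to illustrate the concept, not taken from the Grand River case study:

```python
# Toy DCT sketch: instead of scoring simulated flows against past observations,
# map each model variant's design-flood estimate to the binary decision it
# implies and check whether the decision agrees across model-building choices.

DESIGN_CAPACITY = 850.0  # m^3/s the existing structure can pass (made up)

def decision(peak_flow):
    """Map a simulated design-flood peak to 'upgrade' or 'keep'."""
    return "upgrade" if peak_flow > DESIGN_CAPACITY else "keep"

# Design-flood peaks (m^3/s) from two hypothetical model-building variants,
# e.g. different discretisation and calibration choices.
variants = {"coarse-grid, NSE-calibrated": 910.0,
            "fine-grid, KGE-calibrated": 980.0}

decisions = {name: decision(peak) for name, peak in variants.items()}
consistent = len(set(decisions.values())) == 1
print(decisions, "decision robust across variants:", consistent)
```

If the variants disagree, the model-building suite fails the crash test for this decision even when each variant fits the historical record well.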
Ultra high energy interaction models for Monte Carlo calculations: what model is the best fit
Energy Technology Data Exchange (ETDEWEB)
Stanev, Todor [Bartol Research Institute, University of Delaware, Newark DE 19716 (United States)
2006-01-15
We briefly outline two methods for extension of hadronic interaction models to extremely high energy. Then we compare the main characteristics of representative computer codes that implement the different models and give examples of air shower parameters predicted by those codes.
Global warming description using Daisyworld model with greenhouse gases.
Paiva, Susana L D; Savi, Marcelo A; Viola, Flavio M; Leiroz, Albino J K
2014-11-01
Daisyworld is an archetypal model of the earth that is able to describe the global regulation that can emerge from the interaction between life and environment. This article proposes a model based on the original Daisyworld considering greenhouse gases emission and absorption, allowing the description of the global warming phenomenon. Global and local analyses are discussed evaluating the influence of greenhouse gases in the planet dynamics. Numerical simulations are carried out showing the general qualitative behavior of the Daisyworld for different scenarios that includes solar luminosity variations and greenhouse gases effect. Nonlinear dynamics perspective is of concern discussing a way that helps the comprehension of the global warming phenomenon. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Misztal Ignacy
2009-01-01
A semi-parametric non-linear longitudinal hierarchical model is presented. The model assumes that individual variation exists both in the degree of the linear change of performance (slope) beyond a particular threshold of the independent variable scale and in the magnitude of the threshold itself; these individual variations are attributed to genetic and environmental components. During implementation via a Bayesian MCMC approach, threshold levels were sampled using a Metropolis step because their fully conditional posterior distributions do not have a closed form. The model was tested by simulation following designs similar to previous studies on the genetics of heat stress. Posterior means of parameters of interest, under all simulation scenarios, were close to their true values, with the latter always being included in the uncertainty regions, indicating an absence of bias. The proposed models provide flexible tools for studying genotype by environment interaction as well as for fitting other longitudinal traits subject to abrupt changes in performance at particular points on the independent variable scale.
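The reason a Metropolis step is needed for the threshold can be shown with a much simpler broken-line model than the article's hierarchical one. In the sketch below (all values invented), y = a + b*max(x - tau, 0) + noise; the conditional posterior of the threshold tau has no closed form, so it is sampled by random-walk Metropolis with the other parameters held fixed at their true values:

```python
import numpy as np

# Simulate data from a hinge (broken-line) model with a known threshold.
rng = np.random.default_rng(1)
a, b, sigma, tau_true = 1.0, 0.8, 0.3, 5.0
x = np.linspace(0.0, 10.0, 200)
y = a + b * np.maximum(x - tau_true, 0.0) + rng.normal(0.0, sigma, x.size)

def log_post(tau):
    # Gaussian likelihood, flat prior on [0, 10]; no closed form in tau.
    if not 0.0 <= tau <= 10.0:
        return -np.inf
    resid = y - (a + b * np.maximum(x - tau, 0.0))
    return -0.5 * np.sum(resid**2) / sigma**2

tau, samples = 2.0, []
lp = log_post(tau)
for it in range(4000):
    prop = tau + rng.normal(0.0, 0.3)     # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        tau, lp = prop, lp_prop           # accept
    if it >= 1000:                        # discard burn-in
        samples.append(tau)

print("posterior mean of tau:", np.mean(samples))
```

The posterior mean lands close to the true threshold, which is the behaviour the simulation study above verifies at much larger scale.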
A coupled chemotaxis-fluid model: Global existence
Liu, Jian-Guo; Lorz, Alexander
2011-01-01
We consider a model arising from biology, consisting of chemotaxis equations coupled to viscous incompressible fluid equations through transport and external forcing. Global existence of solutions to the Cauchy problem is investigated under certain conditions. Precisely, for the chemotaxis-Navier-Stokes system in two space dimensions, we obtain global existence for large data. In three space dimensions, we prove global existence of weak solutions for the chemotaxis-Stokes system with nonlinear diffusion for the cell density. © 2011 Elsevier Masson SAS. All rights reserved.
Global asymptotic stability of density dependent integral population projection models.
Rebarber, Richard; Tenhumberg, Brigitte; Townley, Stuart
2012-02-01
Many stage-structured density-dependent populations with a continuum of stages can be naturally modeled using nonlinear integral projection models. In this paper, we establish a trichotomy of global stability results for a class of density-dependent systems, which includes a Platte thistle model. Specifically, we identify the system parameters for which zero is globally asymptotically stable, the parameters for which there is a positive asymptotically stable equilibrium, and the parameters for which there is no asymptotically stable equilibrium. Copyright © 2011 Elsevier Inc. All rights reserved.
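A density-dependent integral projection model of the general kind described here can be iterated numerically with a midpoint rule. The kernel shapes and parameters below are invented for illustration (they are not the Platte thistle values): survival-growth is a linear operator, while recruitment is damped as total density N grows, which is what produces a stable positive equilibrium in this toy setting.

```python
import numpy as np
from scipy.stats import norm

# Toy density-dependent IPM:
# n_{t+1}(y) = ∫ [ 0.7 * G(y,x) + recruit(N_t) * C(y) * fec(x) ] n_t(x) dx
m, L, U = 100, 0.0, 10.0
edges = np.linspace(L, U, m + 1)
x = 0.5 * (edges[:-1] + edges[1:])          # midpoint mesh
h = x[1] - x[0]

G = norm.pdf(x[:, None], loc=x[None, :] + 0.3, scale=0.4)  # growth kernel
C = norm.pdf(x, loc=1.0, scale=0.5)                        # offspring size pdf
fec = x                                                    # fecundity grows with size

n = np.full(m, 1.0)                          # initial size distribution
for _ in range(300):
    N = h * n.sum()                          # total density
    recruit = 0.2 / (1.0 + 0.01 * N)         # density-dependent establishment
    n_next = h * (0.7 * G @ n) + recruit * C * (h * fec @ n)
    n_prev, n = n, n_next

N_prev, N_eq = h * n_prev.sum(), h * n.sum()
print("equilibrium total density ~", N_eq)
```

With these parameters the low-density growth rate exceeds one, so zero is unstable and the iteration settles on a positive equilibrium; shrinking fecundity enough would instead make zero globally stable, the first branch of the trichotomy.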
Global model for the lithospheric strength and effective elastic thickness
Magdala Tesauro; Mikhail Kaban; S. A. P. L. Cloetingh
2013-01-01
Global distributions of the strength and effective elastic thickness (Te) of the lithosphere are estimated using physical parameters from recent crustal and lithospheric models. For the Te estimation we apply a new approach, which makes it possible to take into account variations of Young's modulus (E) within the lithosphere. In view of the large uncertainties affecting strength estimates, we evaluate global strength and Te distributions for possible end-member ‘hard’ (HRM) and a ‘soft’ (SR...
Stabilising the global greenhouse. A simulation model
International Nuclear Information System (INIS)
Michaelis, P.
1993-01-01
This paper investigates the economic implications of a comprehensive approach to greenhouse policies that strives to stabilise the atmospheric concentration of greenhouse gases at an ecologically determined threshold level. In a theoretical optimisation model, conditions for an efficient allocation of abatement effort among pollutants and over time are derived. The model is empirically specified and adapted to a dynamic GAMS algorithm. By various simulation runs for the period 1990 to 2110, the economics of greenhouse gas accumulation are explored. In particular, the long-run costs associated with the above stabilisation target are evaluated for three different policy scenarios: i) a comprehensive approach that covers all major greenhouse gases simultaneously, ii) a piecemeal approach that is limited to reducing CO2 emissions, and iii) a ten-year moratorium that postpones abatement effort until new scientific evidence on the greenhouse effect becomes available. Comparing the simulation results suggests that a piecemeal approach would considerably increase total cost, whereas a ten-year moratorium might be reasonable even if the probability of 'good news' is comparatively small. (orig.)
Directory of Open Access Journals (Sweden)
Gurutzeta Guillera-Arroita
In a recent paper, Welsh, Lindenmayer and Donnelly (WLD) question the usefulness of models that estimate species occupancy while accounting for detectability. WLD claim that these models are difficult to fit and argue that disregarding detectability can be better than trying to adjust for it. We think that this conclusion and the subsequent recommendations are not well founded and may negatively impact the quality of statistical inference in ecology and related management decisions. Here we respond to WLD's claims, evaluating their arguments in detail and using simulations and/or theory to support our points. In particular, WLD argue that both disregarding and accounting for imperfect detection lead to the same estimator performance regardless of sample size when detectability is a function of abundance. We show that this, the key result of their paper, only holds for cases of extreme heterogeneity like the single scenario they considered. Our results illustrate the dangers of disregarding imperfect detection. When ignored, occupancy and detection are confounded: the same naïve occupancy estimates can be obtained for very different true levels of occupancy, so the size of the bias is unknowable. Hierarchical occupancy models separate occupancy and detection, and imprecise estimates simply indicate that more data are required for robust inference about the system in question. As for any statistical method, when the underlying assumptions of simple hierarchical models are violated, their reliability is reduced. Resorting to the naïve occupancy estimator in those instances where hierarchical occupancy models do not perform well does not provide a satisfactory solution. The aim should instead be to achieve better estimation by minimizing the effect of these issues during design, data collection and analysis, ensuring that the right amount of data is collected and model assumptions are met, and considering model extensions where appropriate.
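The confounding of occupancy and detection that this reply describes is easy to reproduce in a small simulation (toy parameters, not the authors' scenarios): the naive estimator, the fraction of sites with at least one detection, is biased low, while the hierarchical occupancy likelihood separates occupancy psi from per-visit detection p.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
S, K = 500, 5                      # sites, repeat visits
psi, p = 0.6, 0.3                  # true occupancy and per-visit detection

z = rng.uniform(size=S) < psi                  # latent occupancy states
y = rng.binomial(K, p, size=S) * z             # detections only at occupied sites

naive = np.mean(y > 0)                         # confounds psi and p

def negloglik(theta):
    # logit parametrisation keeps psi_h, p_h in (0, 1);
    # binomial coefficients are constant and omitted.
    psi_h, p_h = 1/(1+np.exp(-theta[0])), 1/(1+np.exp(-theta[1]))
    ll_det = np.log(psi_h) + y[y > 0]*np.log(p_h) + (K - y[y > 0])*np.log(1-p_h)
    ll_zero = np.log(psi_h*(1-p_h)**K + (1-psi_h))   # occupied-but-missed or empty
    return -(ll_det.sum() + np.sum(y == 0)*ll_zero)

fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
psi_hat = 1/(1+np.exp(-fit.x[0]))
print("naive:", naive, "hierarchical psi_hat:", psi_hat, "true:", psi)
```

Here the naive estimate sits near psi*(1-(1-p)^K) ≈ 0.50, well below the true 0.6, while the hierarchical MLE recovers psi.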
Modeling the Effect of Oil Price on Global Fertilizer Prices
P-Y. Chen (Ping-Yu); C-L. Chang (Chia-Lin); C-C. Chen (Chi-Chung); M.J. McAleer (Michael)
2010-01-01
The main purpose of this paper is to evaluate the effect of the crude oil price on global fertilizer prices in both the mean and volatility. The endogenous structural breakpoint unit root test, the autoregressive distributed lag (ARDL) model, and alternative volatility models, including the
Toward an Integrative Model of Global Business Strategy
DEFF Research Database (Denmark)
Li, Xin
fragmentation-integration-fragmentation-integration upward spiral. In response to the call for an integrative approach to strategic management research, we propose an integrative model of global business strategy that aims at integrating not only strategy and IB but also the different paradigms within the strategy ... field. We also discuss the merits and limitations of our model ...
Combined discriminative global and generative local models for visual tracking
Zhao, Liujun; Zhao, Qingjie; Chen, Yanming; Lv, Peng
2016-03-01
It is a challenging task to develop an effective visual tracking algorithm due to factors such as pose variation, rotation, and so on. Combined discriminative global and generative local appearance models are proposed to address this problem. Specifically, we develop a compact global object representation by extracting the low-frequency coefficients of the color and texture of the object based on the two-dimensional discrete cosine transform. Then, with the global appearance representation, we learn a discriminative metric classifier in an online fashion to differentiate the target object from its background, which is very important for robustly indicating changes in appearance. Next, we develop a new generative local model that exploits the scale-invariant feature transform and its spatial geometric information. To exploit the advantages of the global discriminative model and the generative local model, we incorporate them into a Bayesian inference framework. In this framework, the complementary models help the tracker locate the target more accurately. Furthermore, we use different mechanisms to update the global and local templates to capture appearance changes. The experimental results demonstrate that the proposed approach performs favorably against state-of-the-art methods in terms of accuracy.
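The compact global representation step, keeping only low-frequency 2-D DCT coefficients of an image patch, can be sketched as follows. The 64x64 patch, the 8x8 cut-off and the random content are illustrative choices, not the paper's exact settings:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
patch = rng.random((64, 64))                 # stand-in for a colour/texture channel
coeffs = dctn(patch, norm="ortho")           # 2-D discrete cosine transform

k = 8
low = np.zeros_like(coeffs)
low[:k, :k] = coeffs[:k, :k]                 # keep 64 low-frequency coefficients
descriptor = coeffs[:k, :k].ravel()          # compact global descriptor

recon = idctn(low, norm="ortho")             # coarse reconstruction from them
err = np.linalg.norm(patch - recon) / np.linalg.norm(patch)
print(descriptor.shape, "relative reconstruction error:", err)
```

Because the DCT is orthogonal, keeping the top-left block is an orthogonal projection: the descriptor is short, and the reconstruction error measures how much appearance detail the compact representation discards.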
Directory of Open Access Journals (Sweden)
Mónica A Silva
Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages for the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with the KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with Argos satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to "true" GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, locations and behavioural states estimated by switching state-space models (SSSMs) fitted to data derived from the KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6 ± 5.6 km) was nearly half that of LS estimates (11.6 ± 8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of the raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. The whales' behavioural mode inferred by KF models matched the classification from LS models in 94% of cases. State-space models fit to KF data can improve the spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates.
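The basic mechanism behind the accuracy gain, a Kalman filter pooling information across fixes instead of treating each fix independently, can be shown on a 1-D toy track (synthetic data, not Argos tracks): filtering noisy position fixes of a random-walk trajectory reduces location error relative to the raw fixes.

```python
import numpy as np

rng = np.random.default_rng(42)
T, q, r = 500, 0.1, 1.0                      # steps, process var, observation var
truth = np.cumsum(rng.normal(0, np.sqrt(q), T))   # random-walk "animal" position
obs = truth + rng.normal(0, np.sqrt(r), T)        # noisy position fixes

x, P = obs[0], r                             # filter state and its variance
est = [x]
for zk in obs[1:]:
    P = P + q                                # predict (random-walk motion model)
    K = P / (P + r)                          # Kalman gain
    x = x + K * (zk - x)                     # update with the new fix
    P = (1 - K) * P
    est.append(x)
est = np.array(est)

rmse_raw = np.sqrt(np.mean((obs - truth) ** 2))
rmse_kf = np.sqrt(np.mean((est - truth) ** 2))
print("raw RMSE:", rmse_raw, "KF RMSE:", rmse_kf)
```

With process noise much smaller than observation noise, the filtered track is substantially closer to the truth than the raw fixes, the 1-D analogue of the KF-versus-LS comparison above.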
Directory of Open Access Journals (Sweden)
Loreen eHertäg
2012-09-01
For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under the different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model based mainly on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained with fluctuating ('in-vivo-like') input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a 'high-throughput' model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data that are widely and easily available.
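The strategy of fitting a closed-form firing-rate expression directly to f-I data can be illustrated with the plain LIF rate formula rather than the paper's more involved AdEx approximation. All values below are synthetic; the point is only that no differential equation is integrated during the fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def lif_rate(I, tau, R, t_ref, v_th=20.0, v_reset=0.0):
    """Closed-form steady firing rate (Hz) of a LIF neuron for constant input I."""
    v_inf = R * I                           # steady-state voltage (mV, rel. to rest)
    rate = np.zeros_like(I, dtype=float)
    sup = v_inf > v_th                      # only suprathreshold inputs fire
    isi = t_ref + tau * np.log((v_inf[sup] - v_reset) / (v_inf[sup] - v_th))
    rate[sup] = 1000.0 / isi                # tau, t_ref in ms -> rate in Hz
    return rate

I = np.linspace(0.0, 3.0, 40)               # synthetic f-I protocol (nA)
true = dict(tau=15.0, R=25.0, t_ref=2.0)    # assumed "ground truth" cell
rng = np.random.default_rng(3)
f_obs = lif_rate(I, **true) + rng.normal(0, 1.0, I.size)  # noisy measured rates

popt, _ = curve_fit(lif_rate, I, f_obs, p0=[10.0, 20.0, 1.5])
print("fitted (tau, R, t_ref):", popt)
```

Each evaluation of `lif_rate` is a vectorised formula, which is why this style of fit is orders of magnitude faster than repeatedly integrating the membrane equation.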
Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E
2012-03-01
In recent years, a suite of methods has been developed to fit multiple-rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC) Markov chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.
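The ABC ingredient of MECCA can be sketched with plain rejection sampling on a deliberately simple problem (a star phylogeny and a single summary statistic, far simpler than the real hybrid ABC-MCMC machinery): infer a Brownian-motion trait-evolution rate by simulating tip data and keeping rate values whose summary matches the observed one.

```python
import numpy as np

rng = np.random.default_rng(7)
n_tips, depth, sigma2_true = 50, 1.0, 2.0

def simulate(sigma2):
    # Independent Brownian motion along each branch of a star tree.
    return rng.normal(0.0, np.sqrt(sigma2 * depth), n_tips)

obs = simulate(sigma2_true)
s_obs = np.var(obs)                       # summary statistic: tip variance

accepted = []
for _ in range(20000):
    sigma2 = rng.uniform(0.0, 10.0)       # draw rate from a flat prior
    s_sim = np.var(simulate(sigma2))
    if abs(s_sim - s_obs) < 0.2:          # tolerance epsilon
        accepted.append(sigma2)           # keep rates whose summary matches

print(len(accepted), "accepted; posterior mean sigma2:", np.mean(accepted))
```

The accepted draws approximate the posterior of the rate without ever evaluating a likelihood, the property that makes ABC attractive when incomplete sampling makes the true likelihood intractable.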
Minimal see-saw model predicting best fit lepton mixing angles
International Nuclear Information System (INIS)
King, Stephen F.
2013-01-01
We discuss a minimal predictive see-saw model in which the right-handed neutrino mainly responsible for the atmospheric neutrino mass has couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1) and the right-handed neutrino mainly responsible for the solar neutrino mass has couplings to (ν_e, ν_μ, ν_τ) proportional to (1,4,2), with a relative phase η = −2π/5. We show how these patterns of couplings could arise from an A_4 family symmetry model of leptons, together with Z_3 and Z_5 symmetries which fix η = −2π/5 up to a discrete phase choice. The PMNS matrix is then completely determined by one remaining parameter, which is used to fix the neutrino mass ratio m_2/m_3. The model predicts the lepton mixing angles θ_12 ≈ 34°, θ_23 ≈ 41°, θ_13 ≈ 9.5°, which exactly coincide with the current best-fit values for a normal neutrino mass hierarchy, together with the distinctive prediction for the CP-violating oscillation phase δ ≈ 106°.
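The structure of such a two-right-handed-neutrino model is easy to explore numerically: the light neutrino mass matrix is a sum of two rank-one terms built from the couplings (0,1,1) and (1,4,2) with relative phase η = −2π/5. In the sketch below the overall and relative mass scales are illustrative inputs chosen by hand; reproducing the quoted best-fit angles would require tuning the ratio to the paper's m_2/m_3.

```python
import numpy as np

va = np.array([0.0, 1.0, 1.0])            # atmospheric coupling column
vs = np.array([1.0, 4.0, 2.0])            # solar coupling column
eta = -2.0 * np.pi / 5.0                  # relative phase fixed by Z_5
ma, mb = 1.0, 0.06                        # mass scales (illustrative assumption)

# Light neutrino mass matrix: sum of two rank-one see-saw contributions.
M = ma * np.outer(va, va) + mb * np.exp(1j * eta) * np.outer(vs, vs)

# Diagonalise H = M^dagger M; eigenvector columns give the mixing matrix.
masses_sq, U = np.linalg.eigh(M.conj().T @ M)

th13 = np.degrees(np.arcsin(abs(U[0, 2])))
th12 = np.degrees(np.arctan2(abs(U[0, 1]), abs(U[0, 0])))
th23 = np.degrees(np.arctan2(abs(U[1, 2]), abs(U[2, 2])))
print("m1^2, m2^2, m3^2 ~", masses_sq)
print("theta12, theta23, theta13 ~", th12, th23, th13)
```

Because M is a sum of two rank-one terms, the lightest mass comes out exactly zero, the hallmark of minimal two-right-handed-neutrino see-saw models with a normal hierarchy.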
Physician behavioral adaptability: A model to outstrip a "one size fits all" approach.
Carrard, Valérie; Schmid Mast, Marianne
2015-10-01
Based on a literature review, we propose a model of physician behavioral adaptability (PBA) with the goal of inspiring new research. PBA means that the physician adapts his or her behavior according to patients' different preferences. The PBA model shows how physicians infer patients' preferences and adapt their interaction behavior from one patient to the other. We claim that patients will benefit from better outcomes if their physicians show behavioral adaptability rather than a "one size fits all" approach. This literature review is based on a literature search of the PsycINFO(®) and MEDLINE(®) databases. The literature review and first results stemming from the authors' research support the validity and viability of parts of the PBA model. There is evidence suggesting that physicians are able to show behavioral flexibility when interacting with their different patients, that a match between patients' preferences and physician behavior is related to better consultation outcomes, and that physician behavioral adaptability is related to better consultation outcomes. Training of physicians' behavioral flexibility and their ability to infer patients' preferences can facilitate physician behavioral adaptability and positive patient outcomes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Modeling the physical fitness of young karateka at the preliminary basic training stage
Directory of Open Access Journals (Sweden)
V. A. Galimskyi
2014-09-01
Purpose: to develop a program for the correction of physical fitness at the preliminary basic training stage on the basis of model characteristics. Material: 57 young karateka aged 9-11 years took part in the research. Results: the level of general and special physical preparedness of the young karateka aged 9-11 years was determined. Classes in the control group followed the existing youth sports school program for Muay Thai (Thai boxing). For the experimental group, a program of selective development of general and special physical qualities was developed, with training sessions based on model characteristics. The special program contains 6 directions: 1. development of static and dynamic balance; 2. development of vestibular stability (precision of movements after rotation); 3. development of movement rate; 4. development of the capacity for rapid restructuring of movements; 5. development of the ability to differentiate force and spatial parameters of movement; 6. development of the ability to perform jumping movements with rotation. Work on special physical qualities was continued by improving the technique of complex striking motions in place and in movement. Conclusions: the use of selective development of special physical qualities based on model training sessions gave a significant performance advantage over the control group.
Miszta, Przemyslaw; Pasznik, Pawel; Jakowiecki, Jakub; Sztyler, Agnieszka; Latek, Dorota; Filipek, Slawomir
2018-05-21
Due to the involvement of G protein-coupled receptors (GPCRs) in most of the physiological and pathological processes in humans, they have been attracting a lot of attention from the pharmaceutical industry as well as from the scientific community. Therefore, the need for new, high-quality structures of GPCRs is enormous. The updated homology modeling service GPCRM (http://gpcrm.biomodellab.eu/) meets those expectations by greatly reducing the execution time of submissions (from days to hours or minutes) with nearly the same average quality of the obtained models. Additionally, due to three different scoring functions (Rosetta, Rosetta-MP, BCL::Score), it is possible to select accurate models for the required purposes: the structure of the binding site, the transmembrane domain, or the overall shape of the receptor. Currently, no other web service for GPCR modeling provides this possibility. GPCRM is continually upgraded in a semi-automatic way, and the number of template structures has increased from 20 in 2013 to over 90, including structures of the same receptor with different ligands, which can influence the structure in more than an on/off manner. Two types of protein viewers can be used for visual inspection of the obtained models. The extended sortable tables with available templates provide links to external databases and display ligand-receptor interactions in visual form.
Interdecadal variability in a global coupled model
International Nuclear Information System (INIS)
Storch, J.S. von.
1994-01-01
Interdecadal variations are studied in a 325-year simulation performed with a coupled atmosphere-ocean general circulation model. The patterns obtained in this study may be considered characteristic patterns of interdecadal variations. 1. The atmosphere: interdecadal variations have no preferred time scales but reveal well-organized spatial structures. They appear as two modes, one related to variations of the tropical easterlies and the other to the Southern Hemisphere westerlies. Both have red spectra. The amplitude of the associated wind anomalies is largest in the upper troposphere. The associated temperature anomalies are in thermal-wind balance with the zonal winds and are out of phase between the troposphere and the lower stratosphere. 2. The Pacific Ocean: the dominant mode in the Pacific appears to be wind-driven in the midlatitudes and is related to air-sea interaction processes during one stage of the oscillation in the tropics. Anomalies of this mode propagate westward in the tropics and then northward (southwestward) in the North (South) Pacific on a time scale of about 10 to 20 years. (orig.)
Toward GEOS-6, A Global Cloud System Resolving Atmospheric Model
Putman, William M.
2010-01-01
NASA is committed to observing and understanding the weather and climate of our home planet through the use of multi-scale modeling systems and space-based observations. Global climate models have evolved to take advantage of the influx of multi- and many-core computing technologies and the availability of large clusters of multi-core microprocessors. GEOS-6 is a next-generation cloud-system-resolving atmospheric model that will place NASA at the forefront of scientific exploration of our atmosphere and climate. Model simulations with GEOS-6 will produce a realistic representation of our atmosphere on the scale of typical satellite observations, bringing visual comprehension of model results to a new level among climate enthusiasts. In preparation for GEOS-6, the agency's flagship Earth System Modeling Framework has been enhanced to support cutting-edge high-resolution global climate and weather simulations. Improvements include a cubed-sphere grid that exposes parallelism, a non-hydrostatic finite-volume dynamical core, and algorithms designed for co-processor technologies, among others. GEOS-6 represents a fundamental advancement in the capability of global Earth system models. The ability to directly compare global simulations at the resolution of spaceborne satellite images will lead to algorithm improvements and better utilization of space-based observations within the GEOS data assimilation system.
Schöb, Christian; Michalet, Richard; Cavieres, Lohengrin A; Pugnaire, Francisco I; Brooker, Rob W; Butterfield, Bradley J; Cook, Bradley J; Kikvidze, Zaal; Lortie, Christopher J; Xiao, Sa; Al Hayek, Patrick; Anthelme, Fabien; Cranston, Brittany H; García, Mary-Carolina; Le Bagousse-Pinguet, Yoann; Reid, Anya M; le Roux, Peter C; Lingua, Emanuele; Nyakatya, Mawethu J; Touzard, Blaise; Zhao, Liang; Callaway, Ragan M
2014-04-01
Facilitative interactions are defined as positive effects of one species on another, but bidirectional feedbacks may be positive, neutral, or negative. Understanding the bidirectional nature of these interactions is a fundamental prerequisite for the assessment of the potential evolutionary consequences of facilitation. In a global study combining observational and experimental approaches, we quantified the impact of the cover and richness of species associated with alpine cushion plants on reproductive traits of the benefactor cushions. We found a decline in cushion seed production with increasing cover of cushion-associated species, indicating that being a benefactor came at an overall cost. The effect of cushion-associated species was negative for flower density and seed set of cushions, but not for fruit set and seed quality. Richness of cushion-associated species had positive effects on seed density and modulated the effects of their abundance on flower density and fruit set, indicating that the costs and benefits of harboring associated species depend on the composition of the plant assemblage. Our study demonstrates 'parasitic' interactions among plants over a wide range of species and environments in alpine systems, and we consider their implications for the possible selective effects of interactions between benefactor and beneficiary species. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.
Energy Technology Data Exchange (ETDEWEB)
Smith, D.L.; Guenther, P.T.
1983-11-01
We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references.
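The propagation step described here, variance of a model prediction f given a parameter covariance matrix C and a sensitivity (Jacobian) row J, is var(f) = J C Jᵀ. The numbers below are invented purely to show the mechanism the abstract reports: strong parameter correlations can shrink the propagated error well below the naive uncorrelated sum in quadrature.

```python
import numpy as np

sig = np.array([0.05, 0.08])                  # parameter standard deviations
rho = 0.9                                     # strong positive correlation
C = np.array([[sig[0]**2, rho*sig[0]*sig[1]],
              [rho*sig[0]*sig[1], sig[1]**2]])  # parameter covariance matrix

J = np.array([1.0, -0.7])                     # sensitivities of opposite sign

var_corr = J @ C @ J                          # full propagation J C J^T
var_uncorr = np.sum((J * sig) ** 2)           # naive quadrature sum (rho = 0)

print("propagated sigma:", np.sqrt(var_corr),
      "naive sigma:", np.sqrt(var_uncorr))
```

With opposite-sign sensitivities, the positive correlation makes the two parameter errors partially cancel in the prediction, reproducing the kind of error reduction relative to uncorrelated summation that the analysis reports.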
Fitting diameter distribution models to data from forest inventories with concentric plot design
Energy Technology Data Exchange (ETDEWEB)
Nanos, N.; Sjöstedt de Luna, S.
2017-11-01
Aim: Several national forest inventories use a complex plot design based on multiple concentric subplots, where smaller-diameter trees are inventoried when lying in the smaller-radius subplots and ignored otherwise. Data from these plots are truncated, with threshold (truncation) diameters varying according to the distance from the plot centre. In this paper we designed a maximum likelihood method to fit the Weibull diameter distribution to data from concentric plots. Material and methods: Our method (M1) was based on multiple truncated probability density functions to build the likelihood. In addition, we used an alternative method (M2) presented recently. We used methods M1 and M2, as well as two other reference methods, to estimate the Weibull parameters in 40,000 simulated plots. The spatial tree pattern of the simulated plots was generated using four models of spatial point patterns. Two error indices were used to assess the relative performance of M1 and M2 in estimating relevant stand-level variables. In addition, we estimated the Quadratic Mean plot Diameter (QMD) using Expansion Factors (EFs). Main results: Methods M1 and M2 produced comparable estimation errors for random and clustered tree spatial patterns. Method M2 produced biased parameter estimates in plots with inhomogeneous Poisson patterns. Estimation of QMD using EFs also produced biased results in plots with inhomogeneous-intensity Poisson patterns. Research highlights: We designed a new method to fit the Weibull distribution to forest inventory data from concentric plots that achieves high accuracy and precision in parameter estimates regardless of the within-plot spatial tree pattern.
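The M1 idea, a likelihood built from left-truncated Weibull densities with each tree's truncation diameter set by the subplot it was measured in, can be sketched as follows. Thresholds and parameters are invented for illustration; each retained tree contributes f(d)/S(t), the density divided by the survival function at its own threshold t.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(5)
k_true, lam_true = 2.2, 18.0                 # Weibull shape and scale (cm)

# Simulate a stand, assign each tree a truncation threshold, and keep it only
# if its diameter exceeds that threshold (mimics concentric subplots).
d_all = lam_true * rng.weibull(k_true, 6000)
t_all = rng.choice([7.5, 12.5, 22.5], size=d_all.size)   # per-tree threshold (cm)
keep = d_all >= t_all
d, t = d_all[keep], t_all[keep]

def negloglik(theta):
    k, lam = np.exp(theta)                   # log-parametrisation keeps both > 0
    logpdf = weibull_min.logpdf(d, k, scale=lam)
    logsurv = weibull_min.logsf(t, k, scale=lam)   # left-truncation correction
    return -np.sum(logpdf - logsurv)

fit = minimize(negloglik, x0=np.log([1.5, 15.0]), method="Nelder-Mead")
k_hat, lam_hat = np.exp(fit.x)
print("estimated shape, scale:", k_hat, lam_hat)
```

Dropping the `logsurv` term would treat the truncated sample as complete and bias both parameters upward, which is exactly the error the multiple-truncation likelihood avoids.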
Experimental model for non-Newtonian fluid viscosity estimation: Fit to mathematical expressions
Directory of Open Access Journals (Sweden)
Guillem Masoliver i Marcos
2017-01-01
The construction process of a viscometer, developed in collaboration with a final-year project student, is presented here. It is intended to be used by first-year students to explore viscosity as a fluid property, for both Newtonian and non-Newtonian flows. Viscosity determination is crucial for understanding fluid behaviour in relation to rheological and physical properties, which have important engineering implications for aspects such as friction and lubrication. With the present experimental device, three different fluids are analyzed (water, ketchup, and a mixture of cornstarch and water). Tangential stress is measured versus velocity in order to characterize all the fluids under different thermal conditions. A mathematical fitting process is proposed to adjust the results to the expected analytical expressions; good fits are obtained, with R² greater than 0.88 in all cases.
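A fitting step of the kind proposed might look like the following sketch for a power-law (Ostwald-de Waele) fluid, a standard non-Newtonian model of the form τ = K·γ̇ⁿ; the fluid parameters are invented for illustration and the paper's exercise may use different expressions:

```python
import math

def fit_power_law(shear_rates, stresses):
    """Least-squares fit of tau = K * gamma_dot**n in log-log space."""
    xs = [math.log(g) for g in shear_rates]
    ys = [math.log(t) for t in stresses]
    n_pts = len(xs)
    mx, my = sum(xs) / n_pts, sum(ys) / n_pts
    n = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    K = math.exp(my - n * mx)
    return K, n

# Invented noise-free data for a shear-thinning fluid (n < 1, ketchup-like)
gammas = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0]   # shear rates, 1/s
K_TRUE, N_TRUE = 3.0, 0.4
taus = [K_TRUE * g ** N_TRUE for g in gammas]  # tangential stresses, Pa
K_hat, n_hat = fit_power_law(gammas, taus)
```

With real measurements one would also report R² of the log-log fit, as the abstract does.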
Global dynamics of a dengue epidemic mathematical model
Energy Technology Data Exchange (ETDEWEB)
Cai Liming [Department of Mathematics, Xinyang Normal University, Xinyang 464000 (China); Academy of Mathematics and Systems Science, Academia Sinica, Beijing 100080 (China)], E-mail: lmcai06@yahoo.com.cn; Guo Shumin [Beijing Institute of Information Control, Beijing 100037 (China); Li, XueZhi [Department of Mathematics, Xinyang Normal University, Xinyang 464000 (China); Ghosh, Mini [School of Mathematics and Computer Application, Thapar University, Patiala 147004 (India)
2009-11-30
The paper investigates the global stability of a dengue epidemic model with saturation and bilinear incidence. A constant human recruitment rate and exponential natural death, as well as a vector population of asymptotically constant size, are incorporated into the model. The model exhibits two equilibria, namely, the disease-free equilibrium and the endemic equilibrium. The stability of these two equilibria is controlled by the threshold number R₀. It is shown that if R₀ is less than one, the disease-free equilibrium is globally asymptotically stable and in such a case the endemic equilibrium does not exist; if R₀ is greater than one, then the disease persists and the unique endemic equilibrium is globally asymptotically stable.
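The threshold behaviour governed by R₀ can be illustrated with a much simpler stand-in model; the sketch below uses a basic SIR system with birth/death rate μ (not the paper's host-vector dengue model), for which R₀ = β/(γ+μ) and the endemic equilibrium prevalence is I* = μ(R₀−1)/β:

```python
def simulate_prevalence(beta, gamma=0.1, mu=0.02, days=2000.0, dt=0.1):
    """Euler-integrate a toy SIR model with vital dynamics; a stand-in
    for the host-vector dengue system, with R0 = beta / (gamma + mu)."""
    S, I, R = 0.99, 0.01, 0.0
    for _ in range(int(days / dt)):
        dS = mu - beta * S * I - mu * S
        dI = beta * S * I - (gamma + mu) * I
        dR = gamma * I - mu * R
        S += dS * dt
        I += dI * dt
        R += dR * dt
    return I

I_sub = simulate_prevalence(beta=0.06)    # R0 = 0.5: infection dies out
I_super = simulate_prevalence(beta=0.36)  # R0 = 3: settles at endemic level
```

Below threshold the infected fraction decays to zero; above threshold it converges to the endemic equilibrium μ(R₀−1)/β ≈ 0.111 for these illustrative rates.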
Archaeomagnetic Dating in Europe Using a Global Geomagnetic Field Model
Lodge, A.; Suttie, N.; Holme, R.; Shaw, J.; Hill, M. J.; Linford, P.
2009-12-01
Using up-to-date archaeomagnetic data from Europe and CALS7K.2 as an a priori model, we produce a global geomagnetic field model to be used for archaeomagnetic dating in Europe. More details on the modelling process will be presented elsewhere (in session GP12, abstract: Geophysical insights from archaeomagnetic dating). Here we apply the global geomagnetic field model to a series of test cases drawn from both recently published and unpublished data to demonstrate its application to archaeomagnetic dating. We compare the results produced using our model with those from the spherical cap harmonic model SCHA.DIF.3K (Pavón-Carrasco et al., 2009), the global geomagnetic field model ARCH3K.1 (Korte et al., 2009), and those produced using the palaeosecular variation curves generated using Bayesian statistics (Lanos, 2004). We include examples which emphasise the importance of using three-component data (declination, inclination and intensity) to produce an improved archaeomagnetic date. In addition to the careful selection of an appropriate model for archaeomagnetic dating, the choice of errors on the model curves is vital for providing archaeologists with an age range of possible dates. We discuss how best to constrain the errors on the model curves, and alternatives to the mathematical method of Lanos (2004) for producing an archaeomagnetic date for archaeologists.
A high-resolution global flood hazard model
Sampson, Christopher C.; Smith, Andrew M.; Bates, Paul B.; Neal, Jeffrey C.; Alfieri, Lorenzo; Freer, Jim E.
2015-09-01
Floods are a natural hazard that affect communities worldwide, but to date the vast majority of flood hazard research and mapping has been undertaken by wealthy developed nations. As populations and economies have grown across the developing world, so too has demand from governments, businesses, and NGOs for modeled flood hazard data in these data-scarce regions. We identify six key challenges faced when developing a flood hazard model that can be applied globally and present a framework methodology that leverages recent cross-disciplinary advances to tackle each challenge. The model produces return period flood hazard maps at ˜90 m resolution for the whole terrestrial land surface between 56°S and 60°N, and results are validated against high-resolution government flood hazard data sets from the UK and Canada. The global model is shown to capture between two thirds and three quarters of the area determined to be at risk in the benchmark data without generating excessive false positive predictions. When aggregated to ˜1 km, mean absolute error in flooded fraction falls to ˜5%. The full complexity global model contains an automatically parameterized subgrid channel network, and comparison to both a simplified 2-D only variant and an independently developed pan-European model shows the explicit inclusion of channels to be a critical contributor to improved model performance. While careful processing of existing global terrain data sets enables reasonable model performance in urban areas, adoption of forthcoming next-generation global terrain data sets will offer the best prospect for a step-change improvement in model performance.
Bermejo, Javier; Yotti, Raquel; Pérez del Villar, Candelas; del Álamo, Juan C; Rodríguez-Pérez, Daniel; Martínez-Legazpi, Pablo; Benito, Yolanda; Antoranz, J Carlos; Desco, M Mar; González-Mansilla, Ana; Barrio, Alicia; Elízaga, Jaime; Fernández-Avilés, Francisco
2013-08-15
In cardiovascular research, relaxation and stiffness are calculated from pressure-volume (PV) curves by separately fitting the data during the isovolumic and end-diastolic phases (the end-diastolic PV relationship), respectively. This method is limited because it assumes uncoupled active and passive properties during these phases, it penalizes statistical power, and it cannot account for elastic restoring forces. We aimed to improve this analysis by implementing a method based on global optimization of all PV diastolic data. In 1,000 Monte Carlo experiments, the optimization algorithm recovered entered parameters of diastolic properties below and above the equilibrium volume (intraclass correlation coefficients = 0.99). Inotropic modulation experiments in 26 pigs modified passive pressure generated by restoring forces due to changes in the operative and/or equilibrium volumes. Volume overload and coronary microembolization caused incomplete relaxation at end diastole (active pressure > 0.5 mmHg), rendering the end-diastolic PV relationship method ill-posed. In 28 patients undergoing PV cardiac catheterization, the new algorithm reduced the confidence intervals of stiffness parameters by one-fifth. The Jacobian matrix allowed visualizing the contribution of each property to instantaneous diastolic pressure on a per-patient basis. The algorithm allowed estimating stiffness from single-beat PV data (intraclass correlation coefficient for the derivative of left ventricular pressure with respect to volume at end-diastolic volume = 0.65, error = 0.07 ± 0.24 mmHg/ml). Thus, in clinical and preclinical research, global optimization algorithms provide the most complete, accurate, and reproducible assessment of global left ventricular diastolic chamber properties from PV data. Using global optimization, we were able to fully uncouple relaxation and passive PV curves for the first time in the intact heart.
Patient-centered medical home model: do school-based health centers fit the model?
Larson, Satu A; Chapman, Susan A
2013-01-01
School-based health centers (SBHCs) are an important component of health care reform. The SBHC model of care offers accessible, continuous, comprehensive, family-centered, coordinated, and compassionate care to infants, children, and adolescents. These same elements comprise the patient-centered medical home (PCMH) model of care being promoted by the Affordable Care Act with the hope of lowering health care costs by rewarding clinicians for primary care services. PCMH survey tools have been developed to help payers determine whether a clinician/site serves as a PCMH. Our concern is that current survey tools will be unable to capture how a SBHC may provide a medical home and therefore be denied needed funding. This article describes how SBHCs might meet the requirements of one PCMH tool. SBHC stakeholders need to advocate for the creation or modification of existing survey tools that allow the unique characteristics of SBHCs to qualify as PCMHs.
Li, Henan; Wang, Qi; Wang, Ruobing; Zhang, Yawei; Wang, Xiaojuan; Wang, Hui
2017-06-01
SoxR is a global regulator contributing to multidrug resistance in Enterobacteriaceae. However, the contribution of SoxR to antibiotic resistance and fitness in Acinetobacter baumannii has not yet been studied. Comparisons of molecular characteristics were performed between 32 multidrug-resistant A. baumannii isolates and 11 susceptible isolates. A soxR overexpression mutant was constructed, and its resistance phenotype was analyzed. The impact of SoxR on efflux pump gene expression was measured at the transcription level. The effect of SoxR on the growth and fitness of A. baumannii was analyzed using a growth rate assay and an in vitro competition assay. The frequency of the Gly39Ser mutation in soxR was higher in multidrug-resistant A. baumannii, whereas the soxS gene was absent in all strains analyzed. SoxR overexpression led to increased susceptibility to chloramphenicol (4-fold), tetracycline (2-fold), tigecycline (2-fold), ciprofloxacin (2-fold), amikacin (2-fold), and trimethoprim (2-fold), but it did not influence imipenem susceptibility. Decreased expression of abeS (3.8-fold), abeM (1.3-fold), adeJ (2.4-fold), and adeG (2.5-fold) were correlated with soxR overexpression (P baumannii.
Paladin Enterprises: Monolithic particle physics models global climate.
2002-01-01
Paladin Enterprises presents a monolithic particle model of the universe which will be used by them to build an economical fusion energy system. The model is an extension of the work done by James Clerk Maxwell. Essentially, gravity is unified with electromagnetic forces and shown to be a product of a closed-loop current system, i.e., a particle, whether monolithic or subatomic. This discovery explains rapid global climate changes which are evident in the geological record and also provides an explanation for recent changes in the global climate.
Kawano, N.; Varquez, A. C. G.; Dong, Y.; Kanda, M.
2016-12-01
Numerical models such as the Weather Research and Forecasting model coupled with a single-layer Urban Canopy Model (WRF-UCM) are powerful tools for investigating the urban heat island. Urban parameters such as average building height (Have), plan area index (λp) and frontal area index (λf) are necessary inputs for the model. In general, these parameters are assumed uniform in WRF-UCM, but this leads to an unrealistic urban representation. Distributed urban parameters can also be incorporated into WRF-UCM to capture detailed urban effects. The problem is that distributed building information is not readily available for most megacities, especially in developing countries. Furthermore, acquiring real building parameters often requires a huge amount of time and money. In this study, we investigated the potential of using globally available satellite-captured datasets for the estimation of the parameters Have, λp, and λf. The global datasets comprised a high-spatial-resolution population dataset (LandScan by Oak Ridge National Laboratory), nighttime lights (NOAA), and vegetation fraction (NASA). True samples of Have, λp, and λf were acquired from actual building footprints from satellite images and the 3D building databases of Tokyo, New York, Paris, Melbourne, Istanbul, Jakarta and so on. Regression equations were then derived from the block-averaging of spatial pairs of real parameters and global datasets. Results show that two regression curves to estimate Have and λf from the combination of population and nightlight are necessary, depending on the city's level of development. An index which can be used to decide which equation to use for a city is the Gross Domestic Product (GDP). On the other hand, λp has less dependence on GDP but indicated a negative relationship to vegetation fraction. Finally, a simplified but precise approximation of urban parameters through readily-available, high-resolution global datasets and our derived regressions can be utilized to estimate a
A GLOBAL MODEL OF THE LIGHT CURVES AND EXPANSION VELOCITIES OF TYPE II-PLATEAU SUPERNOVAE
Energy Technology Data Exchange (ETDEWEB)
Pejcha, Ondřej [Department of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08540 (United States); Prieto, Jose L., E-mail: pejcha@astro.princeton.edu [Núcleo de Astronomía de la Facultad de Ingeniería, Universidad Diego Portales, Av. Ejército 441 Santiago (Chile)
2015-02-01
We present a new self-consistent and versatile method that derives photospheric radius and temperature variations of Type II-Plateau supernovae based on their expansion velocities and photometric measurements. We apply the method to a sample of 26 well-observed, nearby supernovae with published light curves and velocities. We simultaneously fit ∼230 velocity and ∼6800 mag measurements distributed over 21 photometric passbands spanning wavelengths from 0.19 to 2.2 μm. The light-curve differences among the Type II-Plateau supernovae are well modeled by assuming different rates of photospheric radius expansion, which we explain as different density profiles of the ejecta, and we argue that steeper density profiles result in flatter plateaus, if everything else remains unchanged. The steep luminosity decline of Type II-Linear supernovae is due to fast evolution of the photospheric temperature, which we verify with a successful fit of SN 1980K. Eliminating the need for theoretical supernova atmosphere models, we obtain self-consistent relative distances, reddenings, and nickel masses fully accounting for all internal model uncertainties and covariances. We use our global fit to estimate the time evolution of any missing band tailored specifically for each supernova, and we construct spectral energy distributions and bolometric light curves. We produce bolometric corrections for all filter combinations in our sample. We compare our model to the theoretical dilution factors and find good agreement for the B and V filters. Our results differ from the theory when the I, J, H, or K bands are included. We investigate the reddening law toward our supernovae and find reasonable agreement with the standard R_V ∼ 3.1 reddening law in UBVRI bands. Results for other bands are inconclusive. We make our fitting code publicly available.
Improving the Fit of a Land-Surface Model to Data Using its Adjoint
Raoult, N.; Jupp, T. E.; Cox, P. M.; Luke, C.
2015-12-01
Land-surface models (LSMs) are of growing importance in the world of climate prediction. They are crucial components of larger Earth system models that are aimed at understanding the effects of land surface processes on the global carbon cycle. The Joint UK Land Environment Simulator (JULES) is the land-surface model used by the UK Met Office. It has been automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or 'adjoint', of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. adJULES presents an opportunity to confront JULES with many different observations, and make improvements to the model parameterisation. In the newest version of adJULES, multiple sites can be used in the calibration, giving a generic set of parameters that can be generalised over plant functional types. We present an introduction to the adJULES system and its applications to data from a variety of flux tower sites. We show that calculation of the second derivative of JULES allows us to produce posterior probability density functions of the parameters and to assess how knowledge of parameter values is constrained by observations.
Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.
2016-11-01
With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin Hypercube-based ensemble of the most sensitive parameters, determined by the Sobol method, to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that there is the potential to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.
Regression Model to Predict Global Solar Irradiance in Malaysia
Directory of Open Access Journals (Sweden)
Hairuniza Ahmed Kutty
2015-01-01
A novel regression model is developed to estimate the monthly global solar irradiance in Malaysia. The model is developed from different available meteorological parameters, including temperature, cloud cover, rain precipitation, relative humidity, wind speed, pressure, and gust speed, by implementing regression analysis. This paper reports the details of the analysis of the effect of each prediction parameter, to identify the parameters that are relevant to estimating global solar irradiance. In addition, the proposed model is compared in terms of the root mean square error (RMSE), mean bias error (MBE), and coefficient of determination (R²) with other models available from literature studies. Seven single-parameter models (PM1 to PM7) and five multiple-parameter models (PM8 to PM12) are proposed. The new models perform well, with RMSE ranging from 0.429% to 1.774%, R² ranging from 0.942 to 0.992, and MBE ranging from −0.1571% to 0.6025%. In general, cloud cover significantly affects the estimation of global solar irradiance. However, cloud cover in Malaysia lacks sufficient influence when included in multiple-parameter models, although it performs fairly well in single-parameter prediction models.
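The evaluation metrics quoted above (RMSE, MBE, R²) can be computed as in this short sketch; the irradiance values are invented for illustration:

```python
import math

def error_metrics(measured, predicted):
    """Return RMSE, mean bias error (MBE) and coefficient of
    determination R^2 for paired observations and predictions."""
    n = len(measured)
    residuals = [p - m for m, p in zip(measured, predicted)]
    rmse = math.sqrt(sum(r * r for r in residuals) / n)
    mbe = sum(residuals) / n
    mean_obs = sum(measured) / n
    ss_res = sum(r * r for r in residuals)
    ss_tot = sum((m - mean_obs) ** 2 for m in measured)
    return rmse, mbe, 1.0 - ss_res / ss_tot

# Invented monthly irradiance values (kWh/m^2/day) for illustration
obs = [4.0, 5.0, 6.0]
pred = [4.1, 5.0, 5.9]
rmse, mbe, r2 = error_metrics(obs, pred)
```

A positive MBE indicates systematic over-prediction; RMSE additionally penalizes scatter, which is why both are reported alongside R².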
Global Modeling Study of the Bioavailable Atmospheric Iron Supply to the Global Ocean
Myriokefalitakis, S.; Krol, M. C.; van Noije, T.; Le Sager, P.
2017-12-01
Atmospheric deposition of trace constituents acts as a nutrient source to the open ocean and affects marine ecosystems. Dust is a major source of nutrients to the global ocean, but only a fraction of these nutrients is released in a bioavailable form that can be assimilated by marine biota. Iron (Fe) is a key micronutrient that significantly modulates gross primary production in the High-Nutrient-Low-Chlorophyll (HNLC) oceans, where macronutrients like nitrate are abundant but primary production is limited by Fe scarcity. The global atmospheric Fe cycle is here parameterized in the state-of-the-art global Earth System Model EC-Earth. The model takes into account the primary emissions of both insoluble and soluble Fe forms associated with mineral dust and combustion aerosols. The impact of atmospheric acidity and organic ligands on mineral dissolution processes is parameterized based on updated experimental and theoretical findings. Model results are also evaluated against available observations. Overall, the link between labile Fe atmospheric deposition and atmospheric composition changes is demonstrated and quantified. This work has been financed by the Marie-Curie H2020-MSCA-IF-2015 grant (ID 705652) ODEON (Online DEposition over OceaNs; modeling the effect of air pollution on ocean bio-geochemistry in an Earth System Model).
Fitness for duty: A tried-and-true model for decision making
International Nuclear Information System (INIS)
Horn, G.L.
1989-01-01
The US Nuclear Regulatory Commission (NRC) rules and regulations pertaining to fitness for duty specify the development of programs designed to ensure that nuclear power plant personnel are not under the influence of legal or illegal substances that cause mental or physical impairment of work performance such that public safety is compromised. These regulations specify the type of decision loop to employ in determining the employee's movement through the process, from initial restriction of access to the point at which access authorization is restored. Suggestions are also offered to determine the roles that various components of the organization should take in the decision loop. This paper discusses some implications and labor concerns arising from the suggested role of employee assistance programs (EAPs) in the decision loop for clinical assessment and return-to-work evaluation of chemical-testing failures. A model for a decision loop addressing some of the issues raised is presented. The proposed model has been implemented in one nuclear facility and has withstood the scrutiny of an NRC audit.
Temperature dependence of bulk respiration of crop stands. Measurement and model fitting
International Nuclear Information System (INIS)
Tani, Takashi; Arai, Ryuji; Tako, Yasuhiro
2007-01-01
The objective of the present study was to examine whether the temperature dependence of respiration at the crop-stand scale could be directly represented by an Arrhenius function, which is widely used to represent the temperature dependence of leaf respiration. We determined the temperature dependences of bulk respiration of monospecific stands of rice and soybean within an air-temperature range of 15 to 30 °C using large closed chambers. The measured responses of the respiration rates of the two stands were well fitted by the Arrhenius function (R² = 0.99). In the existing model used to assess the local radiological impact of anthropogenic carbon-14, the effects of physical environmental factors on photosynthesis and respiration of crop stands are not taken into account in the calculation of the net amount of carbon per cultivation area in crops at harvest, which is the crucial parameter for estimating the activity concentration of carbon-14 in crops. Our result indicates that the Arrhenius function is useful for incorporating the effect of temperature on respiration of crop stands into the model, which is expected to contribute to a more realistic estimate of the activity concentration of carbon-14 in crops. (author)
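Fitting an Arrhenius temperature dependence, R(T) = A·exp(−Ea/(Rg·T)), is typically done via a linear fit of ln R against 1/T; a minimal sketch with invented parameter values (not the paper's data):

```python
import math

R_GAS = 8.314  # gas constant, J mol^-1 K^-1

def fit_arrhenius(temps_c, rates):
    """Recover activation energy Ea and prefactor A from a linear
    least-squares fit of ln(rate) against 1/T (T in kelvin)."""
    xs = [1.0 / (t + 273.15) for t in temps_c]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * R_GAS, math.exp(my - slope * mx)

# Synthetic noise-free respiration rates with an assumed Ea of 60 kJ/mol
EA_TRUE, A_TRUE = 60000.0, 5.0e9
temps = [15.0, 20.0, 25.0, 30.0]  # chamber air temperatures, deg C
rates = [A_TRUE * math.exp(-EA_TRUE / (R_GAS * (t + 273.15))) for t in temps]
ea_hat, a_hat = fit_arrhenius(temps, rates)
```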
Universal fit to p-p elastic diffraction scattering from the Lorentz contracted geometrical model
International Nuclear Information System (INIS)
Hansen, P.H.; Krisch, A.D.
1976-01-01
The prediction of the Lorentz-contracted geometrical model for proton-proton elastic scattering at small angles is examined. The model assumes that when two high-energy particles collide, each behaves as a geometrical object which has a Gaussian density and is spherically symmetric except for the Lorentz contraction along the incident direction. It is predicted that dσ/dt should be independent of energy when plotted against the variable β²P⊥²σ_TOT(s)/38.3. Thus the energy dependence of the diffraction-peak slope (b in an e^(−b|t|) plot) is given by b(s) = A²β²σ_TOT(s)/38.3, where β is the proton's c.m. velocity and A is its radius. Recently measured values of σ_TOT(s) were used and an excellent fit obtained to the elastic slope in both t regions (−t below and above 0.1 (GeV/c)²) at all energies from s = 6 to 4000 (GeV/c)². (Auth.)
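The quoted slope formula can be evaluated directly; a small sketch, with the radius and cross-section values chosen purely for illustration (the 38.3 factor is the unit-conversion constant used in the formula, and the c.m. velocity follows from β² = 1 − 4m_p²/s):

```python
import math

M_P = 0.938  # proton mass, GeV

def cm_velocity(s):
    """Proton c.m. velocity beta for Mandelstam s (in (GeV/c)^2 units)."""
    return math.sqrt(1.0 - 4.0 * M_P ** 2 / s)

def slope_b(radius, beta, sigma_tot_mb):
    """Diffraction-peak slope b(s) = A^2 * beta^2 * sigma_TOT(s) / 38.3,
    with sigma_TOT in mb and the radius A in the formula's units."""
    return radius ** 2 * beta ** 2 * sigma_tot_mb / 38.3
```

At high s, β approaches 1 and the energy dependence of b(s) is carried entirely by σ_TOT(s), which is the content of the model's prediction.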
A physically based model of global freshwater surface temperature
van Beek, Ludovicus P. H.; Eikelboom, Tessa; van Vliet, Michelle T. H.; Bierkens, Marc F. P.
2012-09-01
Temperature determines a range of physical properties of water and exerts a strong control on surface water biogeochemistry. Thus, in freshwater ecosystems the thermal regime directly affects the geographical distribution of aquatic species through their growth and metabolism and indirectly through their tolerance to parasites and diseases. Models used to predict surface water temperature range between physically based deterministic models and statistical approaches. Here we present the initial results of a physically based deterministic model of global freshwater surface temperature. The model adds a surface water energy balance to river discharge modeled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff, and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by shortwave and longwave radiation and sensible and latent heat fluxes. Also included are ice formation and its effect on heat storage and river hydraulics. We use the coupled surface water and energy balance model to simulate global freshwater surface temperature at daily time steps with a spatial resolution of 0.5° on a regular grid for the period 1976-2000. We opt to parameterize the model with globally available data and apply it without calibration in order to preserve its physical basis with the outlook of evaluating the effects of atmospheric warming on freshwater surface temperature. We validate our simulation results with daily temperature data from rivers and lakes (U.S. Geological Survey (USGS), limited to the USA) and compare mean monthly temperatures with those recorded in the Global Environment Monitoring System (GEMS) data set. Results show that the model is able to capture the mean monthly surface temperature for the majority of the GEMS stations, while the interannual variability as derived from the USGS and NOAA data was captured reasonably well. Results are poorest for
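The surface-water energy balance at the heart of such a model can be reduced to a lumped sketch like the one below; the flux values, water depth, and daily time step are illustrative assumptions, not PCR-GLOBWB's actual scheme:

```python
RHO_W = 1000.0  # water density, kg m^-3
CP_W = 4180.0   # specific heat of water, J kg^-1 K^-1

def step_water_temp(temp_c, net_radiation, sensible, latent, depth_m,
                    dt=86400.0):
    """One explicit Euler step of a lumped surface-water energy balance:
    dT/dt = (R_net - H - LE) / (rho * c_p * depth). Fluxes in W m^-2."""
    heat_flux = net_radiation - sensible - latent
    return temp_c + heat_flux * dt / (RHO_W * CP_W * depth_m)

# Illustrative day: 100 W/m^2 net radiation, 20 sensible and 30 latent
# heat loss, for a 2 m deep water column starting at 10 deg C
t_next = step_water_temp(10.0, 100.0, 20.0, 30.0, 2.0)
```

The full model adds advection along the drainage network and ice formation, but the warming rate per step follows this same heat-capacity scaling.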
Yazid, N. M.; Din, A. H. M.; Omar, K. M.; Som, Z. A. M.; Omar, A. H.; Yahaya, N. A. Z.; Tugi, A.
2016-09-01
Global geopotential models (GGMs) are vital for computing global geoid undulation heights. Given the ellipsoidal height from Global Navigation Satellite System (GNSS) observations, an accurate orthometric height can be calculated by adding precise and accurate geoid undulation model information. GGMs also incorporate data from satellite gravity missions such as GRACE, GOCE and CHAMP, which helps enhance the global geoid undulation data. A statistical assessment has been made between geoid undulations derived from 4 GGMs and the airborne gravity data provided by the Department of Survey and Mapping Malaysia (DSMM). The goal of this study is the selection of the GGM that best matches statistically with the geoid undulations of airborne gravity data under the Marine Geodetic Infrastructures in Malaysian Waters (MAGIC) Project over marine areas in Sabah. The correlation coefficients and the RMS values for the geoid undulations of each GGM and the airborne gravity data were computed. The correlation coefficient between EGM 2008 and the airborne gravity data is 1, while the RMS value is 0.1499. In this study, the RMS value of EGM 2008 is the lowest among the models considered. The statistical analysis clearly shows that EGM 2008 is the best fit for marine geoid undulations throughout the South China Sea.
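The comparison statistics used here, a correlation coefficient and an RMS difference between GGM-derived and airborne geoid undulations, can be sketched as follows (the undulation values are invented for illustration):

```python
import math

def compare_undulations(ggm, airborne):
    """Pearson correlation coefficient and RMS difference between geoid
    undulations from a GGM and from airborne gravity data (metres)."""
    n = len(ggm)
    mg, ma = sum(ggm) / n, sum(airborne) / n
    cov = sum((g - mg) * (a - ma) for g, a in zip(ggm, airborne))
    corr = cov / math.sqrt(sum((g - mg) ** 2 for g in ggm)
                           * sum((a - ma) ** 2 for a in airborne))
    rms = math.sqrt(sum((g - a) ** 2 for g, a in zip(ggm, airborne)) / n)
    return corr, rms

# Toy case: airborne undulations offset from the GGM by a constant 0.15 m
corr, rms = compare_undulations([1.0, 2.0, 3.0, 4.0],
                                [1.15, 2.15, 3.15, 4.15])
```

Note that a constant bias leaves the correlation at 1 while still showing up in the RMS, which is why both statistics are reported.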
Schlemm, Eckhard
2015-09-01
The Bak-Sneppen model is an abstract representation of a biological system that evolves according to the Darwinian principles of random mutation and selection. The species in the system are characterized by a numerical fitness value between zero and one. We show that in the case of five species the steady-state fitness distribution can be obtained as a solution to a linear differential equation of order five with hypergeometric coefficients. Similar representations for the asymptotic fitness distribution in larger systems may help pave the way towards a resolution of the question of whether or not, in the limit of infinitely many species, the fitness is asymptotically uniformly distributed on the interval [fc, 1] with fc ≳ 2/3.
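A quick Monte Carlo stand-in for the five-species dynamics (the paper treats the steady state analytically) shows the qualitative bias of fitness values above the uniform mean of 0.5; the step counts and seed are arbitrary choices:

```python
import random

def mean_stationary_fitness(n_species=5, steps=100000, burn_in=10000, seed=1):
    """Ring Bak-Sneppen model: at each step the least-fit species and its
    two neighbours receive fresh uniform random fitnesses. Returns the
    time-averaged mean fitness after a burn-in period."""
    rng = random.Random(seed)
    f = [rng.random() for _ in range(n_species)]
    acc = 0.0
    for step in range(steps):
        i = min(range(n_species), key=f.__getitem__)  # least-fit species
        for j in ((i - 1) % n_species, i, (i + 1) % n_species):
            f[j] = rng.random()
        if step >= burn_in:
            acc += sum(f) / n_species
    return acc / (steps - burn_in)

avg_fitness = mean_stationary_fitness()
```

Because the minimum is repeatedly culled, surviving fitness values are biased upward, the small-system analogue of the [fc, 1] concentration conjectured for infinitely many species.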
One model to fit all? The pursuit of integrated earth system models in GAIM and AIMES
Uhrqvist, Ola
2015-01-01
Images of Earth from space popularized the view of our planet as a single, fragile entity against the vastness and darkness of space. In the 1980s, the International Geosphere-Biosphere Program (IGBP) was set up to produce a predictive understanding of this fragile entity as the ‘Earth System.’ In order to do so, the program sought to create a common research framework for the different disciplines involved. It suggested that integrated numerical models could provide such a framework. The pap...
Regional forecasting with global atmospheric models; Third year report
Energy Technology Data Exchange (ETDEWEB)
Crowley, T.J.; North, G.R.; Smith, N.R. [Applied Research Corp., College Station, TX (United States)
1994-05-01
This report was prepared by the Applied Research Corporation (ARC), College Station, Texas, under subcontract to Pacific Northwest Laboratory (PNL) as part of a global climate studies task. The task supports site characterization work required for the selection of a potential high-level nuclear waste repository and is part of the Performance Assessment Scientific Support (PASS) Program at PNL. The work is under the overall direction of the Office of Civilian Radioactive Waste Management (OCRWM), US Department of Energy Headquarters, Washington, DC. The scope of the report is to present the results of the third year's work on the atmospheric modeling part of the global climate studies task. The development and testing of computer models and initial results are discussed. The appendices contain several studies that provide supporting information and guidance to the modeling work and further details on computer model development. Complete documentation of the models, including user information, will be prepared under separate reports and manuals.
Basch, Corey H; Hillyer, Grace Clarke; Ethan, Danna; Berdnik, Alyssa; Basch, Charles E
2015-07-01
Tanned skin has been associated with perceptions of fitness and social desirability. Portrayal of models in magazines may reflect and perpetuate these perceptions. Limited research has investigated tanning shade gradations of models in men's versus women's fitness and muscle enthusiast magazines. Such findings are relevant in light of increased incidence and prevalence of melanoma in the United States. This study evaluated and compared tanning shade gradations of adult Caucasian male and female model images in mainstream fitness and muscle enthusiast magazines. Sixty-nine U.S. magazine issues (spring and summer, 2013) were utilized. Two independent reviewers rated tanning shade gradations of adult Caucasian male and female model images on magazines' covers, advertisements, and feature articles. Shade gradations were assessed using stock photographs of Caucasian models with varying levels of tanned skin on an 8-shade scale. A total of 4,683 images were evaluated. Darkest tanning shades were found among males in muscle enthusiast magazines and lightest among females in women's mainstream fitness magazines. By gender, male model images were 54% more likely to portray a darker tanning shade. In this study, images in men's (vs. women's) fitness and muscle enthusiast magazines portrayed Caucasian models with darker skin shades. Despite these magazines' fitness-related messages, pro-tanning images may promote attitudes and behaviors associated with higher skin cancer risk. To date, this is the first study to explore tanning shades in men's magazines of these genres. Further research is necessary to identify effects of exposure to these images among male readers. © The Author(s) 2014.
Frenken, K.
2001-01-01
The biological evolution of complex organisms, in which the functioning of genes is interdependent, has been analyzed as "hill-climbing" on NK fitness landscapes through random mutation and natural selection. In evolutionary economics, NK fitness landscapes have been used to simulate the evolution
Seismic waves and earthquakes in a global monolithic model
Roubíček, Tomáš
2018-03-01
The philosophy that a single "monolithic" model can "asymptotically" replace and couple in a simple elegant way several specialized models relevant on various Earth layers is presented and, in special situations, also rigorously justified. In particular, global seismicity and tectonics is coupled to capture, e.g., (here by a simplified model) ruptures of lithospheric faults generating seismic waves which then propagate through the solid-like mantle and inner core both as shear (S) or pressure (P) waves, while S-waves are suppressed in the fluidic outer core and also in the oceans. The "monolithic-type" models have the capacity to describe all the mentioned features globally in a unified way together with corresponding interfacial conditions implicitly involved, only when scaling its parameters appropriately in different Earth's layers. Coupling of seismic waves with seismic sources due to tectonic events is thus an automatic side effect. The global ansatz is here based, rather for an illustration, only on a relatively simple Jeffreys' viscoelastic damageable material at small strains whose various scaling (limits) can lead to Boger's viscoelastic fluid or even to purely elastic (inviscid) fluid. Self-induced gravity field, Coriolis, centrifugal, and tidal forces are counted in our global model, as well. The rigorous mathematical analysis as far as the existence of solutions, convergence of the mentioned scalings, and energy conservation is briefly presented.
Global Land Use Regression Model for Nitrogen Dioxide Air Pollution.
Larkin, Andrew; Geddes, Jeffrey A; Martin, Randall V; Xiao, Qingyang; Liu, Yang; Marshall, Julian D; Brauer, Michael; Hystad, Perry
2017-06-20
Nitrogen dioxide is a common air pollutant with growing evidence of health impacts independent of other common pollutants such as ozone and particulate matter. However, the worldwide distribution of NO 2 exposure and associated impacts on health is still largely uncertain. To advance global exposure estimates we created a global nitrogen dioxide (NO 2 ) land use regression model for 2011 using annual measurements from 5,220 air monitors in 58 countries. The model captured 54% of global NO 2 variation, with a mean absolute error of 3.7 ppb. Regional performance varied from R 2 = 0.42 (Africa) to 0.67 (South America). Repeated 10% cross-validation using bootstrap sampling (n = 10,000) demonstrated a robust performance with respect to air monitor sampling in North America, Europe, and Asia (adjusted R 2 within 2%) but not for Africa and Oceania (adjusted R 2 within 11%) where NO 2 monitoring data are sparse. The final model included 10 variables that captured both between and within-city spatial gradients in NO 2 concentrations. Variable contributions differed between continental regions, but major roads within 100 m and satellite-derived NO 2 were consistently the strongest predictors. The resulting model can be used for global risk assessments and health studies, particularly in countries without existing NO 2 monitoring data or models.
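The land use regression idea above (predictors such as road length within 100 m and satellite-derived NO2, regressed against annual monitor means) can be sketched on synthetic data; all values, predictor names, and coefficients below are invented for illustration and are not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration (not the paper's data): NO2 (ppb) driven by a
# road-density predictor and a satellite-derived NO2 column predictor.
n = 500
roads = rng.uniform(0, 10, n)      # hypothetical road-length variable
sat_no2 = rng.uniform(0, 5, n)     # hypothetical satellite variable
no2 = 2.0 + 1.5 * roads + 2.5 * sat_no2 + rng.normal(0, 2, n)

# Ordinary least squares fit of the land-use regression surface
X = np.column_stack([np.ones(n), roads, sat_no2])
beta, *_ = np.linalg.lstsq(X, no2, rcond=None)
pred = X @ beta

# R^2 and mean absolute error, the skill metrics quoted in the abstract
ss_res = np.sum((no2 - pred) ** 2)
ss_tot = np.sum((no2 - no2.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
mae = np.mean(np.abs(no2 - pred))
```

The paper's model adds region-specific variable selection and bootstrap cross-validation on top of this basic regression step.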
2017-08-01
3.3 Universal Kinetic Rate Platform Development. Kinetic rate models range from pure chemical reactions to mass transfer... The rate model that best fits the experimental data is a first-order or homogeneous catalytic reaction... Avrami (7), and intraparticle diffusion (6) rate equations, to name a few. A single fitting algorithm (kinetic rate model) for a reaction does not...
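The simplest case mentioned in this fragment, a first-order rate model, can be fitted to concentration data by log-linear least squares; the data and rate constant below are synthetic, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic first-order decay C(t) = C0 * exp(-k t) with small noise
k_true, c0 = 0.3, 1.0
t = np.linspace(0, 10, 50)
conc = c0 * np.exp(-k_true * t) * np.exp(rng.normal(0, 0.01, t.size))

# Log-linear least squares: ln C = ln C0 - k t
slope, intercept = np.polyfit(t, np.log(conc), 1)
k_fit = -slope
```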
Pulmonary lobe segmentation based on ridge surface sampling and shape model fitting
Energy Technology Data Exchange (ETDEWEB)
Ross, James C., E-mail: jross@bwh.harvard.edu [Channing Laboratory, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Surgical Planning Lab, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Laboratory of Mathematics in Imaging, Brigham and Women's Hospital, Boston, Massachusetts 02126 (United States); Kindlmann, Gordon L. [Computer Science Department and Computation Institute, University of Chicago, Chicago, Illinois 60637 (United States); Okajima, Yuka; Hatabu, Hiroto [Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Díaz, Alejandro A. [Pulmonary and Critical Care Division, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts 02215 and Department of Pulmonary Diseases, Pontificia Universidad Católica de Chile, Santiago (Chile); Silverman, Edwin K. [Channing Laboratory, Brigham and Women's Hospital, Boston, Massachusetts 02215 and Pulmonary and Critical Care Division, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts 02215 (United States); Washko, George R. [Pulmonary and Critical Care Division, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts 02215 (United States); Dy, Jennifer [ECE Department, Northeastern University, Boston, Massachusetts 02115 (United States); Estépar, Raúl San José [Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Surgical Planning Lab, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Laboratory of Mathematics in Imaging, Brigham and Women's Hospital, Boston, Massachusetts 02126 (United States)
2013-12-15
Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory CT datasets. The authors rely on ridge surface image features indicating fissure locations and a novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particle data to discriminate between fissure and nonfissure candidates. The resulting set of particle points is used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low-dose acquisition protocols. Dice scores indicate that only 29 out of 500 (5.8%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT. Conclusions: The
Pulmonary lobe segmentation based on ridge surface sampling and shape model fitting
International Nuclear Information System (INIS)
Ross, James C.; Kindlmann, Gordon L.; Okajima, Yuka; Hatabu, Hiroto; Díaz, Alejandro A.; Silverman, Edwin K.; Washko, George R.; Dy, Jennifer; Estépar, Raúl San José
2013-01-01
Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory CT datasets. The authors rely on ridge surface image features indicating fissure locations and a novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particle data to discriminate between fissure and nonfissure candidates. The resulting set of particle points is used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low-dose acquisition protocols. Dice scores indicate that only 29 out of 500 (5.8%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT. Conclusions: The proposed
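The Dice score used in the evaluations above measures overlap between two segmentations as 2|A∩B|/(|A|+|B|); a toy 2-D sketch (real use would be on 3-D lobe label volumes):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks; 1.0 for identical masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy masks: the "automatic" square is shifted one row from the "reference"
auto = np.zeros((10, 10), dtype=int); auto[2:8, 2:8] = 1   # 36 pixels
ref = np.zeros((10, 10), dtype=int); ref[3:9, 2:8] = 1     # 36 pixels
score = dice(auto, ref)   # overlap is 5x6 = 30 pixels -> 60/72
```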
Synchronization Experiments With A Global Coupled Model of Intermediate Complexity
Selten, Frank; Hiemstra, Paul; Shen, Mao-Lin
2013-04-01
In the super modeling approach, an ensemble of imperfect models is connected through nudging terms that nudge the solution of each model toward the solutions of all other models in the ensemble. The goal is to obtain a synchronized state, through a proper choice of connection strengths, that closely tracks the trajectory of the true system. For the super modeling approach to be successful, the connections should be dense and strong enough for synchronization to occur. In this study we analyze the behavior of an ensemble of connected global atmosphere-ocean models of intermediate complexity. All atmosphere models are connected to the same ocean model through the surface fluxes of heat, water, and momentum; the ocean is integrated using weighted-average surface fluxes. In particular, we analyze the degree of synchronization between the atmosphere models and the characteristics of the ensemble mean solution. The results are interpreted using a low-order atmosphere-ocean toy model.
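A toy version of the nudging idea, using two Lorenz-63 "models" with different parameters in place of the atmosphere-ocean models; the coupling strength, parameter values, and Euler integration are illustrative choices of ours, not those of the study:

```python
import numpy as np

def lorenz_rhs(state, sigma, rho, beta):
    """Right-hand side of the Lorenz-63 system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def mean_gap(k, steps=6000, dt=0.005):
    """Mean distance between two 'imperfect models' (rho = 28 vs 30),
    each nudged toward the other with strength k (toy super modeling)."""
    a = np.array([1.0, 1.0, 1.0])
    b = np.array([-1.0, 2.0, 3.0])
    gaps = []
    for n in range(steps):
        da = lorenz_rhs(a, 10.0, 28.0, 8.0 / 3.0) + k * (b - a)
        db = lorenz_rhs(b, 10.0, 30.0, 8.0 / 3.0) + k * (a - b)
        a, b = a + dt * da, b + dt * db
        if n >= steps // 2:                  # skip the initial transient
            gaps.append(np.linalg.norm(a - b))
    return float(np.mean(gaps))

gap_free = mean_gap(0.0)     # unconnected models wander independently
gap_nudged = mean_gap(50.0)  # strongly connected models nearly synchronize
```

With strong mutual nudging the two trajectories collapse onto a common solution despite their parameter mismatch, which is the mechanism the study exploits.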
Global GPS Ionospheric Modelling Using Spherical Harmonic Expansion Approach
Directory of Open Access Journals (Sweden)
Byung-Kyu Choi
2010-12-01
Full Text Available In this study, we developed a global ionosphere model based on measurements from a worldwide network of global positioning system (GPS) receivers. The total number of international GPS reference stations used for development of the ionospheric model is about 100, and the spherical harmonic expansion approach was used as the mathematical method. In order to produce the ionospheric total electron content (TEC) in grid form, we defined a spatial resolution of 2.0 degrees in latitude and 5.0 degrees in longitude. Two-dimensional TEC maps were constructed at one-hour intervals, a high temporal resolution compared to the global ionosphere maps produced by several analysis centers. As a result, we could detect the sudden increase of TEC by processing GPS observables on 29 October 2003, when a massive solar flare took place.
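The core of the approach, a least-squares fit of spherical-harmonic coefficients to TEC samples, can be sketched with a real-valued degree-1 basis on synthetic data; the grid resolution follows the abstract, but the TEC values and the truncation degree are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic TEC field on a 2 deg (lat) x 5 deg (lon) grid, in TEC units;
# the field is built from degree-1 harmonics so the fit can recover it.
lat = np.deg2rad(np.arange(-88, 90, 2.0))
lon = np.deg2rad(np.arange(-180, 180, 5.0))
LAT, LON = np.meshgrid(lat, lon, indexing="ij")

true_tec = 20 + 5 * np.sin(LAT) + 10 * np.cos(LAT) * np.cos(LON)
obs = true_tec + rng.normal(0, 1.0, true_tec.shape)   # noisy "observations"

# Real spherical-harmonic basis up to degree 1, evaluated at the grid
basis = np.stack([np.ones_like(LAT),           # Y_00
                  np.sin(LAT),                 # Y_10
                  np.cos(LAT) * np.cos(LON),   # Y_11 (cosine term)
                  np.cos(LAT) * np.sin(LON)],  # Y_11 (sine term)
                 axis=-1).reshape(-1, 4)

coef, *_ = np.linalg.lstsq(basis, obs.ravel(), rcond=None)
fit = (basis @ coef).reshape(obs.shape)
rmse = np.sqrt(np.mean((fit - true_tec) ** 2))
```

Operational global ionosphere maps use expansions to much higher degree and order; the least-squares step is the same.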
Modeling global distribution of agricultural insecticides in surface waters.
Ippolito, Alessio; Kattwinkel, Mira; Rasmussen, Jes J; Schäfer, Ralf B; Fornaroli, Riccardo; Liess, Matthias
2015-03-01
Agricultural insecticides constitute a major driver of animal biodiversity loss in freshwater ecosystems. However, the global extent of their effects and the spatial extent of exposure remain largely unknown. We applied a spatially explicit model to estimate the potential for agricultural insecticide runoff into streams. Water bodies within 40% of the global land surface were at risk of insecticide runoff. We separated the influence of natural factors and variables under human control determining insecticide runoff. In the northern hemisphere, insecticide runoff presented a latitudinal gradient mainly driven by insecticide application rate; in the southern hemisphere, a combination of daily rainfall intensity, terrain slope, agricultural intensity and insecticide application rate determined the process. The model predicted the upper limit of observed insecticide exposure measured in water bodies (n = 82) in five different countries reasonably well. The study provides a global map of hotspots for insecticide contamination guiding future freshwater management and conservation efforts. Copyright © 2014 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Little, M P
2004-01-01
Bystander effects following exposure to α-particles have been observed in many experimental systems, and imply that linearly extrapolating low dose risks from high dose data might materially underestimate risk. Brenner and Sachs (2002 Int. J. Radiat. Biol. 78 593-604; 2003 Health Phys. 85 103-8) have recently proposed a model of the bystander effect which they use to explain the inverse dose rate effect observed for lung cancer in underground miners exposed to radon daughters. In this paper we fit the model of the bystander effect proposed by Brenner and Sachs to 11 cohorts of underground miners, taking account of the covariance structure of the data and the period of latency between the development of the first pre-malignant cell and clinically overt cancer. We also fitted a simple linear relative risk model, with adjustment for age at exposure and attained age. The methods that we use for fitting both models are different from those used by Brenner and Sachs, in particular taking account of the covariance structure, which they did not, and omitting certain unjustifiable adjustments to the miner data. The fit of the original model of Brenner and Sachs (with 0 y period of latency) is generally poor, although it is much improved by assuming a 5 or 6 y period of latency from the first appearance of a pre-malignant cell to cancer. The fit of this latter model is equivalent to that of a linear relative risk model with adjustment for age at exposure and attained age. In particular, both models are capable of describing the observed inverse dose rate effect in this data set
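A minimal sketch of the simple linear relative risk model mentioned above, RR(d) = 1 + βd, fitted by least squares; the doses and relative risks below are invented for illustration and are not the miner cohort data:

```python
import numpy as np

# Hypothetical cohort summary: exposure (e.g. WLM) and relative risk
dose = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
rr_obs = np.array([1.0, 1.6, 2.1, 3.1, 5.0])

# Least squares for beta in RR - 1 = beta * dose (closed form, since
# the model is linear through the origin in dose)
beta = np.sum(dose * (rr_obs - 1)) / np.sum(dose ** 2)
rr_fit = 1 + beta * dose
```

The paper's fits additionally adjust for age at exposure and attained age and account for the covariance structure of the data, which this sketch ignores.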
W. Hasan, W. Z.
2018-01-01
The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system’s modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied with any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model. PMID:29351554
Directory of Open Access Journals (Sweden)
A H Sabry
Full Text Available The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied with any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.
Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S
2018-01-01
The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied with any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.
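Full vector fitting iteratively relocates poles; its core linear subproblem, solving for residues and a constant offset with the poles held fixed, can be sketched as follows (the poles, frequency samples, and target function are invented for illustration):

```python
import numpy as np

# One linear subproblem of vector fitting: with poles fixed, solve for
# residues r_k and offset d in f(s) ~ d + sum_k r_k / (s - p_k).
poles = np.array([-1.0, -5.0, -20.0])          # assumed (fixed) real poles

s = 1j * np.linspace(0.1, 50, 400)             # frequency samples
target = 0.5 + 2.0 / (s + 1.0) + 3.0 / (s + 5.0) + 1.0 / (s + 20.0)

# Linear least squares over the partial-fraction basis plus a constant
A = np.column_stack([1.0 / (s - p) for p in poles] + [np.ones_like(s)])
x, *_ = np.linalg.lstsq(A, target, rcond=None)
residues, d = x[:3].real, x[3].real

model = d + sum(r / (s - p) for r, p in zip(residues, poles))
rmse = np.sqrt(np.mean(np.abs(model - target) ** 2))
```

Because the assumed poles here coincide with the target's true poles, the fit is essentially exact; in full VF, the poles themselves are updated between such solves.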
Scott, Robert B.
2010-01-01
We compare the total kinetic energy (TKE) in four global eddying ocean circulation simulations with a global dataset of over 5000 quality-controlled moored current meter records. At individual mooring sites, there was considerable scatter between models and observations that was greater than estimated statistical uncertainty. Averaging over all current meter records in various depth ranges, all four models had mean TKE within a factor of two of observations above 3500 m, and within a factor of three below 3500 m. With the exception of observations between 20 and 100 m, the models tended to straddle the observations. However, individual models had clear biases. The free running (no data assimilation) model biases were largest below 2000 m. Idealized simulations revealed that the parameterized bottom boundary layer tidal currents were not likely the source of the problem, but that reducing the quadratic bottom drag coefficient may improve the fit with deep observations. Data assimilation clearly improved the model-observation comparison, especially below 2000 m, despite assimilated data existing mostly above this depth and only south of 47°N. Different diagnostics revealed different aspects of the comparison, though in general the models appeared to be in an eddying regime with TKE that compared reasonably well with observations. © 2010 Elsevier Ltd.
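The two ingredients of the comparison, computing kinetic energy from a current-meter record and checking factor-of-two agreement, can be sketched on synthetic data; all values below are illustrative, not from the mooring dataset:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical hourly currents (m/s) at one mooring: a weak mean flow
# plus eddy fluctuations.
u = 0.05 + 0.10 * rng.standard_normal(5000)
v = 0.10 * rng.standard_normal(5000)

tke = 0.5 * np.mean(u ** 2 + v ** 2)   # total kinetic energy per unit mass
eke = 0.5 * np.mean((u - u.mean()) ** 2 + (v - v.mean()) ** 2)  # eddy part

def within_factor(a, b, f):
    """Agreement criterion of the kind quoted above: a and b agree to
    within a factor f."""
    return max(a / b, b / a) <= f

ok = within_factor(1.5 * tke, tke, 2.0)  # a model 50% too energetic passes
```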
Modelling Global Pattern Formations for Collaborative Learning Environments
DEFF Research Database (Denmark)
Grappiolo, Corrado; Cheong, Yun-Gyung; Khaled, Rilla
2012-01-01
We present our research towards the design of a computational framework capable of modelling the formation and evolution of global patterns (i.e. group structures) in a population of social individuals. The framework is intended to be used in collaborative environments, e.g. social serious games...
Empirical Models for the Estimation of Global Solar Radiation in ...
African Journals Online (AJOL)
Empirical Models for the Estimation of Global Solar Radiation in Yola, Nigeria. ... and average daily wind speed (WS) for the interval of three years (2010 – 2012) measured using various instruments for Yola of recorded data collected from the Center for Atmospheric Research (CAR), Anyigba are presented and analyzed.
Modeling Global Urbanization Supported by Nighttime Light Remote Sensing
Zhou, Y.
2015-12-01
Urbanization, a major driver of global change, profoundly impacts our physical and social world, for example, altering carbon cycling and climate. Understanding these consequences for better scientific insights and effective decision-making unarguably requires accurate information on urban extent and its spatial distributions. In this study, we developed a cluster-based method to estimate the optimal thresholds and map urban extents from the nighttime light remote sensing data, extended this method to the global domain by developing a computational method (parameterization) to estimate the key parameters in the cluster-based method, and built a consistent 20-year global urban map series to evaluate the time-reactive nature of global urbanization (e.g. 2000 in Fig. 1). Supported by urban maps derived from nightlights remote sensing data and socio-economic drivers, we developed an integrated modeling framework to project future urban expansion by integrating a top-down macro-scale statistical model with a bottom-up urban growth model. With the models calibrated and validated using historical data, we explored urban growth at the grid level (1-km) over the next two decades under a number of socio-economic scenarios. The derived spatiotemporal information of historical and potential future urbanization will be of great value with practical implications for developing adaptation and risk management measures for urban infrastructure, transportation, energy, and water systems when considered together with other factors such as climate variability and change, and high impact weather events.
Global model for the lithospheric strength and effective elastic thickness
Tesauro, M.; Kaban, M.K.; Cloetingh, S.A.P.L.
2013-01-01
Global distribution of the strength and effective elastic thickness (Te) of the lithosphere are estimated using physical parameters from recent crustal and lithospheric models. For the Te estimation we apply a new approach, which provides a possibility to take into account variations of Young
Global vegetation change predicted by the modified Budyko model
Energy Technology Data Exchange (ETDEWEB)
Monserud, R.A.; Tchebakova, N.M.; Leemans, R. (US Department of Agriculture, Moscow, ID (United States). Intermountain Research Station, Forest Service)
1993-09-01
A modified Budyko global vegetation model is used to predict changes in global vegetation patterns resulting from climate change (CO2 doubling). Vegetation patterns are predicted using a model based on a dryness index and potential evaporation determined by solving radiation balance equations. Climate change scenarios are derived from predictions from four General Circulation Models (GCMs) of the atmosphere (GFDL, GISS, OSU, and UKMO). All four GCM scenarios show similar trends in vegetation shifts and in areas that remain stable, although the UKMO scenario predicts greater warming than the others. Climate change maps produced by all four GCM scenarios show good agreement with the current climate vegetation map for the globe as a whole, although over half of the vegetation classes show only poor to fair agreement. The most stable areas are Desert and Ice/Polar Desert. Because most of the predicted warming is concentrated in the Boreal and Temperate zones, vegetation there is predicted to undergo the greatest change. Most vegetation classes in the Subtropics and Tropics are predicted to expand. Any shift in the Tropics favouring either Forest over Savanna, or vice versa, will be determined by the magnitude of the increased precipitation accompanying global warming. Although the model predicts equilibrium conditions to which many plant species cannot adjust (through migration or microevolution) in the 50-100 y needed for CO2 doubling, it is not clear if projected global warming will result in drastic or benign vegetation change. 72 refs., 3 figs., 3 tabs.
New Models of Hybrid Leadership in Global Higher Education
Tonini, Donna C.; Burbules, Nicholas C.; Gunsalus, C. K.
2016-01-01
This manuscript highlights the development of a leadership preparation program known as the Nanyang Technological University Leadership Academy (NTULA), exploring the leadership challenges unique to a university undergoing rapid growth in a highly multicultural context, and the hybrid model of leadership it developed in response to globalization.…
GLOBAL STABILITY AND PERIODIC SOLUTION OF A VIRAL DYNAMIC MODEL
Directory of Open Access Journals (Sweden)
Erhan COŞKUN
2009-02-01
Full Text Available Abstract: In this paper, we consider the classical viral dynamic mathematical model. The global dynamics of the model are rigorously established. We prove that if the basic reproduction number R0 ≤ 1, the HIV infection is cleared from the T-cell population; if R0 > 1, the HIV infection persists. For an open set of parameter values, the chronic-infection equilibrium can be unstable and periodic solutions may exist. We establish parameter regions for which the chronic-infection equilibrium is globally stable. Keywords: Global stability; HIV infection; CD4+ T cells; Periodic solution. Mathematics Subject Classifications (2000): 65L10, 34B05
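The classical viral dynamic model referred to above is commonly written as dT/dt = λ − dT − βTV, dI/dt = βTV − δI, dV/dt = pI − cV, with basic reproduction number R0 = βλp/(dδc). A sketch with purely illustrative parameters, showing clearance of the infection when R0 < 1:

```python
# Classical viral dynamics: target cells T, infected cells I, virus V.
# Parameter values are illustrative only, chosen so that R0 < 1.
lam, d, beta, delta, p, c = 10.0, 0.1, 0.001, 0.5, 20.0, 5.0

R0 = beta * lam * p / (d * delta * c)   # basic reproduction number (0.8)

def step(T, I, V, dt=0.01):
    """One forward-Euler step of the three-compartment ODE system."""
    dT = lam - d * T - beta * T * V
    dI = beta * T * V - delta * I
    dV = p * I - c * V
    return T + dt * dT, I + dt * dI, V + dt * dV

T, I, V = 100.0, 0.0, 1.0               # start at the uninfected state + virus
for _ in range(200000):                 # integrate to t = 2000
    T, I, V = step(T, I, V)
```

With these values the virus dies out and T returns to its uninfected equilibrium λ/d; raising β (so that R0 > 1) instead drives the system toward the chronic-infection equilibrium analyzed in the paper.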
Wasylkiw, L; Emms, A A; Meuse, R; Poirier, K F
2009-03-01
The current study is a content analysis of women appearing in advertisements in two types of magazines: fitness/health versus fashion/beauty chosen because of their large and predominantly female readerships. Women appearing in advertisements of the June 2007 issue of five fitness/health magazines were compared to women appearing in advertisements of the June 2007 issue of five beauty/fashion magazines. Female models appearing in advertisements of both types of magazines were primarily young, thin Caucasians; however, images of models were more likely to emphasize appearance over performance when they appeared in fashion magazines. This difference in emphasis has implications for future research.
The Global Classroom Video Conferencing Model and First Evaluations
DEFF Research Database (Denmark)
Weitze, Charlotte Lærke; Ørngreen, Rikke; Levinsen, Karin
2013-01-01
This paper presents and discusses findings about how students, teachers, and the organization experience a start-up project applying video conferences between campus and home. This is new territory for adult learning centers. The research is based on the Global Classroom Model as it is implemented and used at an adult learning center in Denmark (VUC Storstrøm). VUC Storstrøm's (VUC) Global Classroom Model is an approach to video conferencing and eLearning using campus-based teaching combined with laptop solutions for students at home. After a couple of years of campus-to-campus video streaming, VUC started a fulltime day program in 2011 with the support of a hybrid campus and videoconference model. In this model the teachers and some of the students... The paper discusses the transition to this eLearning form and pedagogical innovativeness, including collaborative and technological issues.
Significant uncertainty in global scale hydrological modeling from precipitation data errors
Sperna Weiland, Frederiek C.; Vrugt, Jasper A.; van Beek, Rens (L.) P. H.; Weerts, Albrecht H.; Bierkens, Marc F. P.
2015-10-01
In the past decades significant progress has been made in the fitting of hydrologic models to data. Most of this work has focused on simple, CPU-efficient, lumped hydrologic models using discharge, water table depth, soil moisture, or tracer data from relatively small river basins. In this paper, we focus on large-scale hydrologic modeling and analyze the effect of parameter and rainfall data uncertainty on simulated discharge dynamics with the global hydrologic model PCR-GLOBWB. We use three rainfall data products; the CFSR reanalysis, the ERA-Interim reanalysis, and a combined ERA-40 reanalysis and CRU dataset. Parameter uncertainty is derived from Latin Hypercube Sampling (LHS) using monthly discharge data from five of the largest river systems in the world. Our results demonstrate that the default parameterization of PCR-GLOBWB, derived from global datasets, can be improved by calibrating the model against monthly discharge observations. Yet, it is difficult to find a single parameterization of PCR-GLOBWB that works well for all of the five river basins considered herein and shows consistent performance during both the calibration and evaluation period. Still there may be possibilities for regionalization based on catchment similarities. Our simulations illustrate that parameter uncertainty constitutes only a minor part of predictive uncertainty. Thus, the apparent dichotomy between simulations of global-scale hydrologic behavior and actual data cannot be resolved by simply increasing the model complexity of PCR-GLOBWB and resolving sub-grid processes. Instead, it would be more productive to improve the characterization of global rainfall amounts at spatial resolutions of 0.5° and smaller.
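Latin Hypercube Sampling, used above to derive parameter uncertainty, stratifies each parameter range into n equal-probability intervals and draws exactly one value per interval, pairing intervals randomly across dimensions. A self-contained sketch; the parameter names and bounds are hypothetical, not PCR-GLOBWB's:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample: each dimension is split into n_samples
    equal-probability strata, one point drawn per stratum, with strata
    randomly paired across dimensions."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    strata = np.column_stack([rng.permutation(n_samples) for _ in range(dim)])
    u = (strata + rng.uniform(0, 1, (n_samples, dim))) / n_samples
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# Two hypothetical hydrologic parameters (names and ranges are ours):
# a routing coefficient in (0.1, 10) and a soil fraction in (0, 1).
samples = latin_hypercube(100, [(0.1, 10.0), (0.0, 1.0)])
```

Unlike plain Monte Carlo, every marginal stratum is covered exactly once, which is why LHS explores a high-dimensional parameter space efficiently with few model runs.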
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, with 25-item dichotomous scales and sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest that Type I errors occur with N greater than 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
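The role of Bonferroni correction across 25 per-item fit tests can be illustrated with a toy Monte Carlo experiment. This is not the RUMM software or its chi-square statistics: independent two-sided z-tests under the null stand in for the per-item fit tests, purely to show how the family-wise Type I error rate behaves with and without the correction.

```python
# Family-wise error rate (FWER) for 25 simultaneous tests under H0:
# without correction, at least one test flags spurious "misfit" in
# roughly 1 - 0.95**25 = 72% of trials; Bonferroni restores ~5%.
import numpy as np
from statistics import NormalDist

def familywise_error_rate(n_items=25, alpha=0.05, n_trials=2000,
                          bonferroni=False, seed=0):
    """Monte Carlo FWER for n_items independent two-sided z-tests under H0."""
    rng = np.random.default_rng(seed)
    a = alpha / n_items if bonferroni else alpha
    zcrit = NormalDist().inv_cdf(1 - a / 2)  # two-sided critical value
    z = rng.standard_normal((n_trials, n_items))
    # A trial is a family-wise error if ANY of its n_items tests rejects
    return float((np.abs(z) > zcrit).any(axis=1).mean())

fwer_raw = familywise_error_rate()                  # near 1 - 0.95**25
fwer_bonf = familywise_error_rate(bonferroni=True)  # near the nominal 0.05
```

The same logic carries over to per-item chi-square fit statistics: without adjustment, scanning many items almost guarantees some apparent misfit by chance alone.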
Voigt, A.
2017-12-01
Climate models project that global warming will lead to substantial changes in extratropical jet streams. Yet, many quantitative aspects of warming-induced jet stream changes remain uncertain, and recent work has indicated an important role of clouds and their radiative interactions. Here, I will investigate how cloud-radiative changes impact the zonal-mean extratropical circulation response under global warming using a hierarchy of global atmosphere models. I will first focus on aquaplanet setups with prescribed sea-surface temperatures (SSTs), which reproduce the model spread found in realistic simulations with interactive SSTs. Simulations with the two CMIP5 models MPI-ESM and IPSL-CM5A and prescribed clouds show that half of the circulation response can be attributed to cloud changes. The rise of tropical high-level clouds and the upward and poleward movement of midlatitude high-level clouds lead to poleward jet shifts. High-latitude low-level cloud changes shift the jet poleward in one model but not in the other. The impact of clouds on the jet operates via the atmospheric radiative forcing that is created by the cloud changes and is qualitatively reproduced in a dry Held-Suarez model, although the latter is too sensitive because of its simplified treatment of diabatic processes. I will then show that the aquaplanet results also hold when the models are used in a realistic setup that includes continents and seasonality. I will further juxtapose these prescribed-SST simulations with interactive-SST simulations and show that atmospheric and surface cloud-radiative interactions impact the poleward jet shifts in roughly equal measure. Finally, I will discuss the cloud impact on regional and seasonal circulation changes.
Energy Technology Data Exchange (ETDEWEB)
Fujii, Ricardo Junqueira; Udaeta, Miguel Edgar Morales; Galvao, Luiz Claudio Ribeiro [Universidade de Sao Paulo (USP), SP (Brazil). Dept. de Energia e Automacao Eletricas. Grupo de Energia]. E-mail: ricardo_fujii@pea.usp.br; daeta@pea.usp.br; lcgalvao@pea.usp.br
2006-07-01
Traditional energy planning usually takes into account technical and economic costs, considered alongside environmental and a few political restraints; however, there is a lack of methods to evenly assess environmental, economic, social and political costs. This work seeks to change this scenario by elaborating a model to characterize an energy resource across all four dimensions - environmental, political, social and economic - in an integrated view. The model aims at two objectives: to provide a method for assessing the global cost of an energy resource, and to estimate its potential considering the limitations imposed by these dimensions. To minimize the complexity of the integration process, the model strongly recommends the use of the Full Cost Accounting (FCA) method to assess the costs and benefits of any given resource. FCA allows both quantitative and qualitative costs to be considered, reducing the need for quantitative data, which are limited in some cases. The model has been applied to the characterization of the region of Aracatuba, located in the western part of the state of Sao Paulo, Brazil. The results showed that the potential of renewable sources is promising, especially when global costs are considered. Some resources, in spite of being economically attractive, do not provide an acceptable global cost. It became clear that the model is a valuable tool where conventional tools fail to address many issues, especially the need for an integrated view of the planning process; the results from this model can be applied in a portfolio selection method to evaluate the best options for power system expansion. Note that the usefulness of this model increases when it is adopted together with a method for analyzing demand-side management measures, thus offering a complete set of possible energy options for the decision maker. (author)
Standard Model updates and new physics analysis with the Unitarity Triangle fit
International Nuclear Information System (INIS)
Bevan, A.; Bona, M.; Ciuchini, M.; Derkach, D.; Franco, E.; Silvestrini, L.; Lubicz, V.; Tarantino, C.; Martinelli, G.; Parodi, F.; Schiavi, C.; Pierini, M.; Sordini, V.; Stocchi, A.; Vagnoni, V.
2013-01-01
We present the summer 2012 update of the Unitarity Triangle (UT) analysis performed by the UTfit Collaboration within the Standard Model (SM) and beyond. The increased accuracy on several of the fundamental constraints is now enhancing some of the tensions amongst and within the constraints themselves. In particular, the long-standing tension between the exclusive and inclusive determinations of the V_ub and V_cb CKM matrix elements now plays a major role. We then present the generalisation of the UT analysis to investigate new physics (NP) effects, updating the constraints on NP contributions to ΔF=2 processes. In the NP analysis, both CKM and NP parameters are fitted simultaneously to obtain the possible NP effects in any specific sector. Finally, based on the NP constraints, we derive upper bounds on the coefficients of the most general ΔF=2 effective Hamiltonian. These upper bounds can be translated into lower bounds on the scale of NP that contributes to these low-energy effective interactions.
Global stability of an SEIR epidemic model with constant immigration
Energy Technology Data Exchange (ETDEWEB)
Li Guihua [Key Laboratory of Eco-environments in Three Gorges Reservoir Region (Ministry of Education), Faculty of Life Science, Southwest China Normal University, Chongqing 400715 (China) and Department of Mathematics, Southwest China Normal University, Chongqing 400715 (China) and Department of Mathematics, North University of China, Taiyuan Shanxi 030051 (China)]. E-mail: liguihua@nuc.edu.cn; Wang Wendi [Department of Mathematics, Southwest China Normal University, Chongqing 400715 (China); Jin Zhen [Department of Mathematics, North University of China, Taiyuan Shanxi 030051 (China)
2006-11-15
An SEIR epidemic model with infectious force in the latent (exposed), infected and recovered periods is studied. It is assumed that susceptible and exposed individuals have constant immigration rates. The model exhibits a unique endemic state if the fraction p of infectious immigrants is positive. If the basic reproduction number R is greater than 1, sufficient conditions for the global stability of the endemic equilibrium are obtained by the compound matrix theory.
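A numerical sketch can make the role of immigration concrete. The code below integrates a simplified SEIR variant in which only the infected class transmits (the abstract's model also includes infectious force from the latent and recovered classes), with a fraction p of a constant immigration stream entering the exposed class; all parameter values and the forward-Euler scheme are illustrative assumptions.

```python
# Simplified SEIR model with constant immigration: a fraction p of the
# immigration rate A enters the exposed class, the rest the susceptibles.
# With p > 0 the disease-free state disappears and the system settles
# into an endemic equilibrium with I > 0.
def seir_with_immigration(S0, E0, I0, R0, A=1.0, p=0.1,
                          beta=0.3, sigma=0.2, gamma=0.1, mu=0.02,
                          dt=0.1, t_max=500.0):
    """Forward-Euler integration. A: total immigration rate, p: exposed
    fraction of immigrants, mu: natural death rate."""
    S, E, I, R = S0, E0, I0, R0
    for _ in range(int(t_max / dt)):
        N = S + E + I + R
        lam = beta * S * I / N                      # force of infection
        dS = (1 - p) * A - lam - mu * S
        dE = p * A + lam - (sigma + mu) * E
        dI = sigma * E - (gamma + mu) * I
        dR = gamma * I - mu * R
        S, E, I, R = S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt
    return S, E, I, R

S, E, I, R = seir_with_immigration(50.0, 0.0, 1.0, 0.0)
```

At these parameter values the total population relaxes toward A/mu = 50 and, since p > 0 and the basic reproduction number exceeds 1, the infected class stays bounded away from zero, consistent with the unique endemic state described above.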
A Parametric Model of Shoulder Articulation for Virtual Assessment of Space Suit Fit
Kim, K. Han; Young, Karen S.; Bernal, Yaritza; Boppana, Abhishektha; Vu, Linh Q.; Benson, Elizabeth A.; Jarvis, Sarah; Rajulu, Sudhakar L.
2016-01-01
Suboptimal suit fit is a known risk factor for crewmember shoulder injury. Suit fit assessment is however prohibitively time consuming and cannot be generalized across wide variations of body shapes and poses. In this work, we have developed a new design tool based on the statistical analysis of body shape