WorldWideScience

Sample records for global calibration model

  1. A global model for residential energy use: Uncertainty in calibration to regional data

    van Ruijven, Bas; van Vuuren, Detlef P.; de Vries, Bert; van der Sluijs, Jeroen P.

    2010-01-01

    Uncertainties in energy demand modelling allow for the development of different models, but also leave room for different calibrations of a single model. We apply an automated model calibration procedure to analyse the calibration uncertainty of residential-sector energy use modelling in the TIMER 2.0 global energy model. This model simulates energy use on the basis of changes in useful energy intensity, technology development (AEEI) and price responses (PIEEI). We find that different implementations of these factors yield different model behaviour. Model calibration uncertainty is identified as an influential source of variation in future projections, amounting to 30% to 100% around the best estimate. Energy modellers should systematically account for this and communicate calibration uncertainty ranges. (author)

  2. Comparison of global optimization approaches for robust calibration of hydrologic model parameters

    Jung, I. W.

    2015-12-01

    Robustness of the calibrated parameters of hydrologic models is necessary to provide a reliable prediction of the future performance of watershed behavior under varying climate conditions. This study investigated calibration performance according to the length of the calibration period, objective functions, hydrologic model structures and optimization methods. To do this, the combination of three global optimization methods (i.e. SCE-UA, Micro-GA, and DREAM) and four hydrologic models (i.e. SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance under different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function showed better performance than using the correlation coefficient or percent bias. Calibration performance according to different calibration periods, from one year to seven years, was hard to generalize because the four hydrologic models have different levels of complexity and different years have different information content in the hydrological observations. Acknowledgements: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
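
    As an editorial illustration of this calibration setup (not code from the study), the sketch below calibrates a one-parameter linear-reservoir model against synthetic observations by maximizing the Nash-Sutcliffe efficiency; SciPy's differential evolution stands in for SCE-UA, Micro-GA or DREAM, and the model, data and bounds are toy assumptions:

      import numpy as np
      from scipy.optimize import differential_evolution

      def linear_reservoir(k, rain):
          """Toy daily rainfall-runoff model: storage drains at rate k (1/day)."""
          s, q = 0.0, np.empty_like(rain)
          for t, r in enumerate(rain):
              s += r
              q[t] = k * s
              s -= q[t]
          return q

      def nse(sim, obs):
          """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean."""
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      rng = np.random.default_rng(42)
      rain = rng.exponential(3.0, 365)
      obs = linear_reservoir(0.3, rain) + rng.normal(0.0, 0.1, 365)  # synthetic "observations"

      # Differential evolution as a stand-in global optimizer: minimize 1 - NSE.
      result = differential_evolution(
          lambda p: 1.0 - nse(linear_reservoir(p[0], rain), obs),
          bounds=[(0.01, 1.0)], seed=1)
      print(f"calibrated k = {result.x[0]:.3f}, NSE = {1.0 - result.fun:.3f}")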

  3. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2017-01-01

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and for making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be afforded in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid-based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate of the actual RZWQM2, and then we calibrated the surrogate model using the global optimization algorithm Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial that is fast to evaluate, it can be evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method achieves a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
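
    An editorial sketch of the two-step surrogate strategy, with simplifying stand-ins: random sampling plus a least-squares quadratic fit in place of sparse-grid interpolation, differential evolution in place of QPSO, and a cheap analytic function in place of RZWQM2:

      import numpy as np
      from scipy.optimize import differential_evolution

      def expensive_model(p):
          """Stand-in for one costly simulator run (e.g., an RZWQM2 execution)."""
          return (p[0] - 1.2) ** 2 + 2.0 * (p[1] + 0.5) ** 2 + 0.1 * p[0] * p[1]

      def quad_features(P):
          """Quadratic polynomial basis [1, x, y, x^2, xy, y^2] for 2 parameters."""
          x, y = P[:, 0], P[:, 1]
          return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

      rng = np.random.default_rng(0)
      bounds = [(-2.0, 3.0), (-3.0, 2.0)]
      lo, hi = np.array(bounds).T
      # Step 1: a small design of "expensive" runs (sparse-grid points in the paper).
      design = rng.uniform(lo, hi, size=(25, 2))
      f_vals = np.array([expensive_model(p) for p in design])
      coef, *_ = np.linalg.lstsq(quad_features(design), f_vals, rcond=None)

      def surrogate(p):
          """Cheap polynomial approximation of the expensive model."""
          return (quad_features(np.atleast_2d(p)) @ coef)[0]

      # Step 2: run the global optimizer on the cheap surrogate only.
      opt = differential_evolution(surrogate, bounds=bounds, seed=1)
      print("surrogate optimum:", opt.x, "| true model there:", expensive_model(opt.x))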

  4. Calibration of a simple and a complex model of global marine biogeochemistry

    Kriest, Iris

    2017-11-01

    The assessment of the ocean biota's role in climate change is often carried out with global biogeochemical ocean models that contain many components and involve a high level of parametric uncertainty. Because many data that relate to tracers included in a model are only sparsely observed, assessment of model skill is often restricted to tracers that can be easily measured and assembled. Examination of the models' fit to climatologies of inorganic tracers, after the models have been spun up to steady state, is a common but computationally expensive procedure to assess model performance and reliability. Using new tools that have become available for global model assessment and calibration in steady state, this paper examines two different model types - a complex seven-component model (MOPS) and a very simple four-component model (RetroMOPS) - for their fit to dissolved quantities. Before comparing the models, a subset of their biogeochemical parameters has been optimised against annual-mean nutrients and oxygen. Both model types fit the observations almost equally well. The simple model contains only two nutrients: oxygen and dissolved organic phosphorus (DOP). Its misfit and large-scale tracer distributions are sensitive to the parameterisation of DOP production and decay. The spatio-temporal decoupling of nitrogen and oxygen, and processes involved in their uptake and release, renders oxygen and nitrate valuable tracers for model calibration. In addition, the non-conservative nature of these tracers (with respect to their upper boundary condition) introduces the global bias (fixed nitrogen and oxygen inventory) as a useful additional constraint on model parameters. Dissolved organic phosphorus at the surface behaves antagonistically to phosphate, and suggests that observations of this tracer - although difficult to measure - may be an important asset for model calibration.

  5. Vegetation root zone storage and rooting depth, derived from local calibration of a global hydrological model

    van der Ent, R.; Van Beek, R.; Sutanudjaja, E.; Wang-Erlandsson, L.; Hessels, T.; Bastiaanssen, W.; Bierkens, M. F.

    2017-12-01

    The storage and dynamics of water in the root zone control many important hydrological processes such as saturation excess overland flow, interflow, recharge, capillary rise, soil evaporation and transpiration. These processes are parameterized in hydrological models or land-surface schemes and the effect on runoff prediction can be large. Root zone parameters in global hydrological models are very uncertain as they cannot be measured directly at the scale on which these models operate. In this paper we calibrate the global hydrological model PCR-GLOBWB using a state-of-the-art ensemble of evaporation fields derived by solving the energy balance for satellite observations. We focus our calibration on the root zone parameters of PCR-GLOBWB and derive spatial patterns of maximum root zone storage. We find these patterns to correspond well with previous research. The parameterization of our model allows for the conversion of maximum root zone storage to root zone depth and we find that these correspond quite well to the point observations where available. We conclude that climate and soil type should be taken into account when regionalizing measured root depth for a certain vegetation type. We equally find that using evaporation rather than discharge better allows for local adjustment of root zone parameters within a basin and thus provides orthogonal data to diagnose and optimize hydrological models and land surface schemes.

  6. A multimethod Global Sensitivity Analysis to aid the calibration of geomechanical models via time-lapse seismic data

    Price, D. C.; Angus, D. A.; Garcia, A.; Fisher, Q. J.; Parsons, S.; Kato, J.

    2018-03-01

    Time-lapse seismic attributes are used extensively in the history matching of production simulator models. However, although time-lapse seismic data have been shown to contain information regarding production-induced stress change, they are typically used only loosely (i.e. qualitatively) to calibrate geomechanical models. In this study we conduct a multimethod Global Sensitivity Analysis (GSA) to assess the feasibility of, and to aid, the quantitative calibration of geomechanical models via near-offset time-lapse seismic data, specifically the calibration of the mechanical properties of the overburden. Via the GSA, we analyse the near-offset overburden seismic traveltimes from over 4000 perturbations of a Finite Element (FE) geomechanical model of a typical High Pressure High Temperature (HPHT) reservoir in the North Sea. We find that, out of an initially large set of material properties, the near-offset overburden traveltimes are primarily affected by Young's modulus and the effective stress (i.e. Biot) coefficient. The unexpected significance of the Biot coefficient highlights the importance of modelling fluid flow and pore pressure outside of the reservoir. The FE model is complex and highly nonlinear. Multiple combinations of model parameters can yield equally possible model realizations. Consequently, numerical calibration via a large number of random model perturbations is unfeasible. However, the significant differences in traveltime results suggest that more sophisticated calibration methods could potentially be feasible for finding numerous suitable solutions. The results of the time-varying GSA demonstrate how acquiring multiple vintages of time-lapse seismic data can be advantageous. However, they also suggest that significant overburden near-offset seismic time-shifts, useful for model calibration, may take up to 3 yrs after the start of production to manifest. Due to the nonlinearity of the model behaviour, similar uncertainty in the reservoir mechanical properties appears to influence overburden

  7. Calibration of a surface mass balance model for global-scale applications

    Giesen, R. H.; Oerlemans, J.

    2012-01-01

    Global applications of surface mass balance models have large uncertainties, as a result of poor climate input data and limited availability of mass balance measurements. This study addresses several possible consequences of these limitations for the modelled mass balance. This is done by applying a

  8. Understanding Global Systems Today—A Calibration of the World3-03 Model between 1995 and 2012

    Roberto Pasqualino

    2015-07-01

    In 1972 the Limits to Growth report was published. It used the World3 model to better understand the dynamics of global systems and their relationship to finite resource availability, land use, and persistent pollution accumulation. The trends of resource depletion and degradation of physical systems identified by Limits to Growth have continued. Although World3 forecast scenarios are based on key measures and assumptions that cannot be easily assessed using available data (i.e., non-renewable resources, persistent pollution), the dynamics of the growth components of the model can be compared with publicly available global data trends. Based on Scenario 2 of the Limits to Growth study, we present a calibration of the updated World3-03 model using historical data from 1995 to 2012 to better understand the dynamics of today’s economic and resource system. Given that accurate data on physical limits do not currently exist, the dynamics of overshoot to global limits are not assessed. In this paper we offer a new interpretation of the parametrisation of World3-03 using these data to explore how its assumptions on global dynamics, environmental footprints and responses have changed over the past 40 years. The results show that human society has invested more to abate persistent pollution, to increase food productivity and to develop a more productive service sector.

  9. Calibrated Properties Model

    Ahlers, C.F.; Liu, H.H.

    2001-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M and O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, and serve Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and to predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

  10. Calibrated Properties Model

    Ahlers, C.; Liu, H.

    2000-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, and serve Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and to predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

  11. Calibration and simulation of Heston model

    Mrázek Milan

    2017-05-01

    We calibrate the Heston stochastic volatility model to real market data using several optimization techniques. We compare both global and local optimizers for different weights, showing remarkable differences even for data (DAX options) from two consecutive days. We provide a novel calibration procedure that incorporates the usage of an approximation formula and significantly outperforms other existing calibration methods.

  12. Calibrated Properties Model

    Ghezzehej, T.

    2004-01-01

    The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of the Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency.

  13. Calibration of the maximum carboxylation velocity (Vcmax) using data mining techniques and ecophysiological data from the Brazilian semiarid region, for use in Dynamic Global Vegetation Models

    L. F. C. Rezende

    The semiarid region of northeastern Brazil, the Caatinga, is extremely important due to its biodiversity and endemism. Measurements of plant physiology are crucial to the calibration of Dynamic Global Vegetation Models (DGVMs) that are currently used to simulate the responses of vegetation in the face of global change. In field work carried out in an area of preserved Caatinga forest located in Petrolina, Pernambuco, measurements of carbon assimilation (in response to light and CO2) were performed on 11 individuals of Poincianella microphylla, a native species that is abundant in this region. These data were used to calibrate the maximum carboxylation velocity (Vcmax) used in the INLAND model. The calibration techniques used were Multiple Linear Regression (MLR) and the data mining techniques Classification And Regression Tree (CART) and K-MEANS. The results were compared to the UNCALIBRATED model. It was found that simulated Gross Primary Productivity (GPP) reached 72% of observed GPP when using the calibrated Vcmax values, whereas the UNCALIBRATED approach accounted for 42% of observed GPP. Thus, this work shows the benefits of calibrating DGVMs using field ecophysiological measurements, especially in areas where field data are scarce or non-existent, such as the Caatinga.

  14. Observation models in radiocarbon calibration

    Jones, M.D.; Nicholls, G.K.

    2001-01-01

    The observation model underlying any calibration process dictates the precise mathematical details of the calibration calculations. Accordingly it is important that an appropriate observation model is used. Here this is illustrated with reference to the use of reservoir offsets where the standard calibration approach is based on a different model to that which the practitioners clearly believe is being applied. This sort of error can give rise to significantly erroneous calibration results. (author). 12 refs., 1 fig

  15. Building and calibrating a large-extent and high resolution coupled groundwater-land surface model using globally available data-sets

    Sutanudjaja, E. H.; Van Beek, L. P.; de Jong, S. M.; van Geer, F.; Bierkens, M. F.

    2012-12-01

    The current generation of large-scale hydrological models generally lacks a groundwater model component simulating lateral groundwater flow. Large-scale groundwater models are rare due to a lack of the hydro-geological data required for their parameterization and a lack of the groundwater head data required for their calibration. In this study, we propose an approach to develop a large-extent, fully coupled land surface-groundwater model by using globally available datasets and to calibrate it using a combination of discharge observations and remotely sensed soil moisture data. The underlying objective is to devise a collection of methods that enables one to build and parameterize large-scale groundwater models in data-poor regions. The model used, PCR-GLOBWB-MOD, has a spatial resolution of 1 km x 1 km and operates on a daily basis. It consists of a single-layer MODFLOW groundwater model that is dynamically coupled to the PCR-GLOBWB land surface model. This fully coupled model accommodates two-way interactions between surface water levels and groundwater head dynamics, as well as between upper soil moisture states and groundwater levels, including a capillary rise mechanism to sustain upper soil storage and thus to fulfill high evaporation demands (during dry conditions). As a test bed, we used the Rhine-Meuse basin, where more than 4000 groundwater head time series have been collected for validation purposes. The model was parameterized using globally available datasets on surface elevation, drainage direction, land cover, soil and lithology. Next, the model was calibrated using a brute-force approach and massive parallel computing, i.e. by running the coupled groundwater-land surface model for more than 3000 different parameter sets. Here, we varied minimal soil moisture storage and saturated conductivities of the soil layers as well as aquifer transmissivities. Using different regularization strategies and calibration criteria, we compared three calibration scenarios.
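
    As a miniature, editorial rendition of this brute-force strategy (toy objective and parameter ranges, not PCR-GLOBWB-MOD), the sketch below scores a few thousand parameter sets in parallel with Python's multiprocessing and keeps the best:

      import numpy as np
      from multiprocessing import Pool

      def run_model(params):
          """Stand-in for one coupled-model run; returns a calibration criterion.
          Here: squared error of a toy 3-parameter response against fake observations."""
          obs = np.array([1.0, 0.5, 0.2])
          sim = np.array([params[0] * 0.9, params[1] ** 0.5, params[2] * 0.1])
          return float(np.sum((sim - obs) ** 2))

      if __name__ == "__main__":
          rng = np.random.default_rng(7)
          # e.g. 3000 sets: soil storage, soil conductivity, aquifer transmissivity scalers
          param_sets = rng.uniform([0.1, 0.1, 0.5], [2.0, 2.0, 5.0], size=(3000, 3))
          with Pool() as pool:                       # massive parallelism in miniature
              scores = pool.map(run_model, param_sets)
          best = param_sets[int(np.argmin(scores))]
          print("best parameter set:", best)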

  16. CALIBRATED HYDRODYNAMIC MODEL

    Sezar Gülbaz

    2015-01-01

    The land development and increase in urbanization in a watershed affect water quantity and water quality. On one hand, urbanization provokes the adjustment of the geomorphic structure of streams and ultimately raises the peak flow rate, which causes flooding; on the other hand, it diminishes water quality, which results in an increase in Total Suspended Solids (TSS). Consequently, sediment accumulation downstream of urban areas is observed, which is not preferred for a longer life of dams. In order to overcome the sediment accumulation problem in dams, the amount of TSS in streams and in watersheds should be taken under control. Low Impact Development (LID) is a Best Management Practice (BMP) which may be used for this purpose. It is a land planning and engineering design method applied in managing storm water runoff in order to reduce flooding as well as simultaneously improve water quality. LID includes techniques to predict suspended solid loads in surface runoff generated over impervious urban surfaces. In this study, the impact of LID-BMPs on surface runoff and TSS is investigated by employing a calibrated hydrodynamic model for the Sazlidere Watershed, which is located in Istanbul, Turkey. For this purpose, a calibrated hydrodynamic model was developed by using the Environmental Protection Agency Storm Water Management Model (EPA SWMM). For model calibration and validation, we set up a rain gauge and a flow meter in the field and obtained rainfall and flow rate data. We then selected several LID types, such as retention basins, vegetative swales and permeable pavement, and obtained their influence on peak flow rate and on pollutant buildup and washoff for TSS. Consequently, we observe the possible effects of LID on surface runoff and TSS in the Sazlidere Watershed.

  17. SURF Model Calibration Strategy

    Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-10

    SURF and SURFplus are high explosive reactive burn models for shock initiation and propagation of detonation waves. They are engineering models motivated by the ignition & growth concept of hot spots and, for SURFplus, a second slow reaction for the energy release from carbon clustering. A key feature of the SURF model is that there is a partial decoupling between model parameters and detonation properties. This enables reduced sets of independent parameters to be calibrated sequentially for the initiation and propagation regimes. Here we focus on a methodology for fitting the initiation parameters to Pop plot data based on 1-D simulations to compute a numerical Pop plot. In addition, the strategy for fitting the remaining parameters for the propagation regime and failure diameter is discussed.
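
    As a quick illustration of the Pop-plot fitting step (an editorial sketch with hypothetical numbers, not LANL data or the SURF calibration itself): run distance to detonation is nearly linear in shock pressure on log-log axes, so a first-degree polynomial fit in log space recovers the Pop-plot coefficients:

      import numpy as np

      # Hypothetical Pop-plot data: run distance to detonation x (mm) vs shock pressure P (GPa).
      P = np.array([3.0, 5.0, 8.0, 12.0, 16.0])
      x = np.array([12.0, 6.5, 3.1, 1.8, 1.2])

      # Pop plots are close to linear in log-log space: log10(x) = a + b*log10(P).
      b, a = np.polyfit(np.log10(P), np.log10(x), 1)   # slope first, then intercept
      print(f"log10(x) = {a:.3f} + {b:.3f} * log10(P)")
      print("predicted run distance at 10 GPa:", 10 ** (a + b * np.log10(10.0)), "mm")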

  18. Global Calibration of Multiple Cameras Based on Sphere Targets

    Junhua Sun

    2016-01-01

    Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and offers good flexibility, especially for on-site multiple cameras without a common field of view.

  19. Forward Global Photometric Calibration of the Dark Energy Survey

    Burke, D. L.; Rykoff, E. S.; Allam, S.; Annis, J.; Bechtol, K.; Bernstein, G. M.; Drlica-Wagner, A.; Finley, D. A.; Gruendl, R. A.; James, D. J.; Kent, S.; Kessler, R.; Kuhlmann, S.; Lasker, J.; Li, T. S.; Scolnic, D.; Smith, J.; Tucker, D. L.; Wester, W.; Yanny, B.; Abbott, T. M. C.; Abdalla, F. B.; Benoit-Lévy, A.; Bertin, E.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Cunha, C. E.; D’Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Doel, P.; Estrada, J.; García-Bellido, J.; Gruen, D.; Gutierrez, G.; Honscheid, K.; Kuehn, K.; Kuropatkin, N.; Maia, M. A. G.; March, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Miquel, R.; Plazas, A. A.; Sako, M.; Sanchez, E.; Scarpine, V.; Schindler, R.; Sevilla-Noarbe, I.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Walker, A. R.; DES Collaboration

    2018-01-01

    Many scientific goals for the Dark Energy Survey (DES) require the calibration of optical/NIR broadband b = grizY photometry that is stable in time and uniform over the celestial sky to one percent or better. It is also necessary to limit to similar accuracy systematic uncertainty in the calibrated broadband magnitudes due to uncertainty in the spectrum of the source. Here we present a “Forward Global Calibration Method (FGCM)” for photometric calibration of the DES, and we present results of its application to the first three years of the survey (Y3A1). The FGCM combines data taken with auxiliary instrumentation at the observatory with data from the broadband survey imaging itself and models of the instrument and atmosphere to estimate the spatial and time dependences of the passbands of individual DES survey exposures. “Standard” passbands that are typical of the passbands encountered during the survey are chosen. The passband of any individual observation is combined with an estimate of the source spectral shape to yield a magnitude m_b^std in the standard system. This “chromatic correction” to the standard system is necessary to achieve subpercent calibrations and, in particular, to resolve ambiguity between the broadband brightness of a source and the shape of its SED. The FGCM achieves a reproducible and stable photometric calibration of standard magnitudes m_b^std of stellar sources over the multiyear Y3A1 data sample with residual random calibration errors of σ = 6-7 mmag per exposure. The accuracy of the calibration is uniform across the 5000 deg² DES footprint to within σ = 7 mmag. The systematic uncertainties of magnitudes in the standard system due to the spectra of sources are less than 5 mmag for main-sequence stars with 0.5 < g-i < 3.0.
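
    A minimal numerical illustration of the chromatic effect that FGCM corrects (an editorial sketch; the passband shape, SEDs and numbers are toy assumptions, not DES data): a small passband drift shifts the broadband magnitudes of a blue source and a red source by different amounts, which is why sub-percent calibration must combine the per-exposure passband with the source spectral shape:

      import numpy as np

      lam = np.linspace(400.0, 550.0, 500)                  # wavelength grid (nm)

      def passband(lam, center, width):
          """Toy passband with soft edges (stand-in for a survey band S_b)."""
          return 1.0 / ((1 + np.exp(-(lam - (center - width / 2)))) *
                        (1 + np.exp(lam - (center + width / 2))))

      def synth_mag(flux, S):
          """Photon-counting broadband magnitude (arbitrary zero point):
             m = -2.5 log10( int F(l) S(l) l dl / int S(l) l dl )."""
          return -2.5 * np.log10(np.trapz(flux * S * lam, lam) / np.trapz(S * lam, lam))

      blue_sed = lam ** -2.0        # toy power-law SEDs of different spectral shape
      red_sed = lam ** 1.0
      std = passband(lam, 475.0, 100.0)       # "standard" passband
      shifted = passband(lam, 480.0, 100.0)   # slightly drifted observed passband

      # The same passband drift changes the magnitude by an SED-dependent amount.
      for name, sed in [("blue", blue_sed), ("red", red_sed)]:
          dm = synth_mag(sed, shifted) - synth_mag(sed, std)
          print(f"{name} source: passband drift shifts m by {1000 * dm:+.1f} mmag")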

  20. Forward Global Photometric Calibration of the Dark Energy Survey

    Burke, D. L.; Rykoff, E. S.; Allam, S.; Annis, J.; Bechtol, K.; Bernstein, G. M.; Drlica-Wagner, A.; Finley, D. A.; Gruendl, R. A.; James, D. J.; Kent, S.; Kessler, R.; Kuhlmann, S.; Lasker, J.; Li, T. S.; Scolnic, D.; Smith, J.; Tucker, D. L.; Wester, W.; Yanny, B.; Abbott, T. M. C.; Abdalla, F. B.; Benoit-Lévy, A.; Bertin, E.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Cunha, C. E.; D’Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Doel, P.; Estrada, J.; García-Bellido, J.; Gruen, D.; Gutierrez, G.; Honscheid, K.; Kuehn, K.; Kuropatkin, N.; Maia, M. A. G.; March, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Miquel, R.; Plazas, A. A.; Sako, M.; Sanchez, E.; Scarpine, V.; Schindler, R.; Sevilla-Noarbe, I.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Walker, A. R.

    2017-12-28

    Many scientific goals for the Dark Energy Survey (DES) require calibration of optical/NIR broadband $b = grizY$ photometry that is stable in time and uniform over the celestial sky to one percent or better. It is also necessary to limit to similar accuracy systematic uncertainty in the calibrated broadband magnitudes due to uncertainty in the spectrum of the source. Here we present a "Forward Global Calibration Method (FGCM)" for photometric calibration of the DES, and we present results of its application to the first three years of the survey (Y3A1). The FGCM combines data taken with auxiliary instrumentation at the observatory with data from the broad-band survey imaging itself and models of the instrument and atmosphere to estimate the spatial- and time-dependence of the passbands of individual DES survey exposures. "Standard" passbands are chosen that are typical of the passbands encountered during the survey. The passband of any individual observation is combined with an estimate of the source spectral shape to yield a magnitude $m_b^{\mathrm{std}}$ in the standard system. This "chromatic correction" to the standard system is necessary to achieve sub-percent calibrations. The FGCM achieves reproducible and stable photometric calibration of standard magnitudes $m_b^{\mathrm{std}}$ of stellar sources over the multi-year Y3A1 data sample with residual random calibration errors of $\sigma=5-6\,\mathrm{mmag}$ per exposure. The accuracy of the calibration is uniform across the $5000\,\mathrm{deg}^2$ DES footprint to within $\sigma=7\,\mathrm{mmag}$. The systematic uncertainties of magnitudes in the standard system due to the spectra of sources are less than $5\,\mathrm{mmag}$ for main sequence stars with $0.5 < g-i < 3.0$.

  1. Model Calibration in Option Pricing

    Andre Loerx

    2012-04-01

    We consider calibration problems for models of pricing derivatives which occur in mathematical finance. We discuss various approaches, such as using stochastic differential equations or partial differential equations, for the modeling process. We discuss developments in the past literature and give an outlook on modern approaches to modeling. Furthermore, we address important numerical issues in the valuation of options and likewise in the calibration of these models. This leads to interesting problems in optimization, where, e.g., the use of adjoint equations or the choice of the parametrization for the model parameters plays an important role.

  2. Another look at volume self-calibration: calibration and self-calibration within a pinhole model of Scheimpflug cameras

    Cornic, Philippe; Le Besnerais, Guy; Champagnat, Frédéric; Illoul, Cédric; Cheminet, Adam; Le Sant, Yves; Leclaire, Benjamin

    2016-01-01

    We address calibration and self-calibration of tomographic PIV experiments within a pinhole model of cameras. A complete and explicit pinhole model of a camera equipped with a two-tilt-angle Scheimpflug adapter is presented. It is then used in a calibration procedure based on a freely moving calibration plate. While the resulting calibrations are accurate enough for Tomo-PIV, we confirm, through a simple experiment, that they are not stable in time, and illustrate how the pinhole framework can be used to provide a quantitative evaluation of geometrical drifts in the setup. We propose an original self-calibration method based on global optimization of the extrinsic parameters of the pinhole model. These methods are successfully applied to the tomographic PIV of an air jet experiment. An unexpected by-product of our work is to show that volume self-calibration induces a change in the world frame coordinates. Provided the calibration drift is small, as generally observed in PIV, the bias on the estimated velocity field is negligible, but the absolute location cannot be accurately recovered using standard calibration data. (paper)
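
    An editorial sketch of the extrinsic self-calibration step under a plain pinhole model (toy fixed intrinsics, synthetic world points, and SciPy's least_squares in place of the authors' global optimizer): the camera pose is refined by minimizing the reprojection error of known points:

      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation

      def project(points, rvec, tvec, f=1000.0, c=(640.0, 512.0)):
          """Pinhole projection of 3-D points with fixed toy intrinsics (f, c)."""
          R = Rotation.from_rotvec(rvec).as_matrix()
          cam = points @ R.T + tvec            # world frame -> camera frame
          uv = cam[:, :2] / cam[:, 2:3]        # perspective division
          return f * uv + np.asarray(c)        # pixel coordinates

      rng = np.random.default_rng(0)
      pts = rng.uniform([-0.1, -0.1, 0.9], [0.1, 0.1, 1.1], size=(50, 3))
      rvec_true = np.array([0.02, -0.40, 0.01])   # drifted pose to be recovered
      tvec_true = np.array([0.05, 0.00, 0.00])
      obs = project(pts, rvec_true, tvec_true) + rng.normal(0.0, 0.1, (50, 2))

      def residuals(x):
          """Reprojection error for extrinsics x = (rotation vector, translation)."""
          return (project(pts, x[:3], x[3:]) - obs).ravel()

      sol = least_squares(residuals, x0=np.zeros(6))
      print("recovered rotation vector:", sol.x[:3])
      print("recovered translation:   ", sol.x[3:])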

  3. Model Calibration in Watershed Hydrology

    Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh

    2009-01-01

    Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.

  4. Influence of selecting secondary settling tank sub-models on the calibration of WWTP models – A global sensitivity analysis using BSM2

    Ramin, Elham; Flores Alsina, Xavier; Sin, Gürkan

    2014-01-01

    This study investigates the sensitivity of wastewater treatment plant (WWTP) model performance to the selection of one-dimensional secondary settling tank (1-D SST) models with first-order and second-order mathematical structures. We performed a global sensitivity analysis (GSA) on the benchmark simulation model No. 2 with the input uncertainty associated with the biokinetic parameters in the activated sludge model No. 1 (ASM1), a fractionation parameter in the primary clarifier, and the settling parameters in the SST model. Based on the parameter sensitivity rankings obtained in this study, the settling parameters were found to be as influential as the biokinetic parameters on the uncertainty of WWTP model predictions, particularly for biogas production and treated water quality. However, the sensitivity measures were found to be dependent on the 1-D SST model selected. Accordingly, we suggest...

  5. Description, calibration and sensitivity analysis of the local ecosystem submodel of a global model of carbon and nitrogen cycling and the water balance in the terrestrial biosphere

    Kercher, J.R. [Lawrence Livermore National Lab., CA (United States)]; Chambers, J.Q. [Lawrence Livermore National Lab., CA (United States); California Univ., Santa Barbara, CA (United States), Dept. of Biological Sciences]

    1995-10-01

    We have developed a geographically distributed ecosystem model, TERRA, for the carbon, nitrogen, and water dynamics of the terrestrial biosphere. The local ecosystem model of TERRA consists of coupled, modified versions of TEM and DAYTRANS. The ecosystem model in each grid cell calculates water fluxes of evaporation, transpiration, and runoff; carbon fluxes of gross primary productivity, litterfall, and plant and soil respiration; and nitrogen fluxes of vegetation uptake, litterfall, mineralization, immobilization, and system loss. The state variables are soil water content; carbon in live vegetation; carbon in soil; nitrogen in live vegetation; organic nitrogen in soil and litter; available inorganic nitrogen aggregating nitrites, nitrates, and ammonia; and a variable for allocation. Carbon and nitrogen dynamics are calibrated to specific sites in 17 vegetation types. Eight parameters are determined during calibration for each of the 17 vegetation types. At calibration, the annual average values of carbon in live vegetation, C_v, show site differences that derive from the vegetation-type-specific parameters and intersite variation in climate and soils. From calibration, we recover the average C_v of forests, woodlands, savannas, grasslands, shrublands, and tundra that were used to develop the model initially. The timing of the phases of the annual variation is driven by temperature and light in the high-latitude and moist temperate zones. The dry temperate zones are driven by temperature, precipitation, and light. In the tropics, precipitation is the key variable in annual variation. The seasonal responses are even more clearly demonstrated in net primary production and show the same controlling factors.

  6. Calibration of PMIS pavement performance prediction models.

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...

  7. Towards a global network of gamma-ray detector calibration facilities

    Tijs, Marco; Koomans, Ronald; Limburg, Han

    2016-09-01

    Gamma-ray logging tools are applied worldwide. At various locations, calibration facilities are used to calibrate these gamma-ray logging systems. Several attempts have been made to cross-correlate well-known calibration pits, but this cross-correlation does not include calibration facilities in Europe or private company calibration facilities. Our aim is to set up a framework that makes it possible to interlink all calibration facilities worldwide by using 'tools of opportunity' - tools that have been calibrated in different calibration facilities, whether this usage was on a coordinated basis or by coincidence. To compare the measurements of different tools, it is important to understand the behaviour of the tools in the different calibration pits. Borehole properties, such as diameter, fluid, casing and probe diameter, strongly influence the outcome of gamma-ray borehole logging. Logs need to be properly calibrated and compensated for these borehole properties in order to obtain in-situ grades or to do cross-hole correlation. Some tool providers provide tool-specific correction curves for this purpose. Others rely on reference measurements against sources of known radionuclide concentration and geometry. In this article, we present an attempt to set up a framework for transferring 'local' calibrations to be applied 'globally'. This framework includes corrections for any geometry and detector size to give absolute concentrations of radionuclides from borehole measurements. This model is used to compare measurements in the calibration pits of Grand Junction, located in the USA; Adelaide (previously known as AMDEL), located in Adelaide, Australia; and Stonehenge, located at Medusa Explorations BV in the Netherlands.

  8. Error-in-variables models in calibration

    Lira, I.; Grientschnig, D.

    2017-12-01

    In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
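
    A compact frequentist illustration (an editorial sketch, not from the paper): when the stimuli x are themselves measured with error, ordinary least squares attenuates the slope, while Deming regression, a classical EIV estimator that assumes a known ratio d of the two error variances, recovers it:

      import numpy as np

      rng = np.random.default_rng(3)
      x_true = np.linspace(0.0, 10.0, 40)
      y_true = 2.0 + 0.8 * x_true
      x = x_true + rng.normal(0, 0.5, 40)   # stimuli are measured with error too
      y = y_true + rng.normal(0, 0.5, 40)   # responses measured with error

      # Ordinary least squares ignores the error in x and attenuates the slope.
      b_ols = np.polyfit(x, y, 1)[0]

      # Deming regression: EIV fit with known error-variance ratio d = var_y / var_x.
      d = 1.0
      sxx = np.var(x, ddof=1)
      syy = np.var(y, ddof=1)
      sxy = np.cov(x, y, ddof=1)[0, 1]
      b_eiv = (syy - d * sxx + np.sqrt((syy - d * sxx) ** 2 + 4 * d * sxy ** 2)) / (2 * sxy)
      print(f"true slope 0.8 | OLS {b_ols:.3f} (attenuated) | Deming {b_eiv:.3f}")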

  9. Financial model calibration using consistency hints.

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese Yen swaps market and the US dollar yield market.
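
    A toy rendition of the idea (an editorial sketch; the two-regime "hint" and the softmax link are invented stand-ins, not the paper's Vasicek setup): the curve-fitting error is augmented with a Kullback-Leibler consistency-hint term and the combined loss is minimized:

      import numpy as np
      from scipy.optimize import minimize

      def kl(p, q, eps=1e-12):
          """Kullback-Leibler distance between two discrete distributions."""
          p, q = np.clip(p, eps, None), np.clip(q, eps, None)
          return float(np.sum(p * np.log(p / q)))

      x = np.linspace(0.0, 1.0, 50)
      obs = 1.0 + 0.5 * x + np.random.default_rng(5).normal(0, 0.05, 50)
      hint = np.array([0.3, 0.7])   # consistency hint: assumed regime probabilities

      def model(theta):
          return theta[0] + theta[1] * x

      def regime_probs(theta):
          """Model-implied probabilities of two regimes (toy softmax of the slope)."""
          z = np.array([theta[1], 1.0 - theta[1]])
          e = np.exp(z - z.max())
          return e / e.sum()

      def loss(theta, lam=0.5):
          fit = np.mean((model(theta) - obs) ** 2)          # plain curve-fitting error
          return fit + lam * kl(regime_probs(theta), hint)  # plus the hint error term

      best = minimize(loss, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
      print("calibrated parameters:", best.x)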

  10. Iowa calibration of MEPDG performance prediction models.

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  11. Calibration Plans for the Global Precipitation Measurement (GPM)

    Bidwell, S. W.; Flaming, G. M.; Adams, W. J.; Everett, D. F.; Mendelsohn, C. R.; Smith, E. A.; Turk, J.

    2002-01-01

    The Global Precipitation Measurement (GPM) is an international effort led by the National Aeronautics and Space Administration (NASA) of the U.S.A. and the National Space Development Agency of Japan (NASDA) for the purpose of improving research into the global water and energy cycle. GPM will improve climate, weather, and hydrological forecasts through more frequent and more accurate measurement of precipitation world-wide. Comprising U.S. domestic and international partners, GPM will incorporate and assimilate data streams from many spacecraft with varied orbital characteristics and instrument capabilities. Two of the satellites will be provided directly by GPM: the core satellite and a constellation member. The core satellite, at the heart of GPM, is scheduled for launch in November 2007. The core will carry a conical scanning microwave radiometer, the GPM Microwave Imager (GMI), and a two-frequency cross-track-scanning radar, the Dual-frequency Precipitation Radar (DPR). The passive microwave channels and the two radar frequencies of the core are carefully chosen for investigating the varying character of precipitation over ocean and land, and from the tropics to the high latitudes. The DPR will enable microphysical characterization and three-dimensional profiling of precipitation. The GPM-provided constellation spacecraft will carry a GMI radiometer identical to that on the core spacecraft. This paper presents calibration plans for the GPM, including on-board instrument calibration, external calibration methods, and the role of ground validation. Particular emphasis is on plans for inter-satellite calibration of the GPM constellation. With its unique instrument capabilities, the core spacecraft will serve as a calibration transfer standard to the GPM constellation. In particular, the Dual-frequency Precipitation Radar aboard the core will check the accuracy of retrievals from the GMI radiometer and will enable improvement of the radiometer retrievals.

  12. Using genetic algorithms to calibrate a water quality model.

    Liu, Shuming; Butler, David; Brazier, Richard; Heathwaite, Louise; Khu, Soon-Thiam

    2007-03-15

    With the increasing concern over the impact of diffuse pollution on water bodies, many diffuse pollution models have been developed in the last two decades. A common obstacle in using such models is how to determine the values of the model parameters. This is especially true when a model has a large number of parameters, which makes a full-range calibration expensive in terms of computing time. Compared with conventional optimisation approaches, soft computing techniques often have a faster convergence speed and are more efficient for global optimum searches. This paper presents an attempt to calibrate a diffuse pollution model using a genetic algorithm (GA). Designed to simulate the export of phosphorus from diffuse sources (agricultural land) and point sources (human), the Phosphorus Indicators Tool (PIT) version 1.1, on which this paper is based, consisted of 78 parameters. Previous studies have indicated the difficulty of full-range model calibration due to the number of parameters involved. In this paper, a GA was employed to carry out the model calibration in which all parameters were involved. A sensitivity analysis was also performed to investigate the impact of operators in the GA on its effectiveness in optimum searching. The calibration yielded satisfactory results and required reasonable computing time. The application of the PIT model to the Windrush catchment with optimum parameter values was demonstrated. The annual P loss was predicted as 4.4 kg P/ha/yr, which showed a good fit to the observed value.
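
    A minimal real-valued GA in the spirit of this approach (an editorial sketch; the 3-parameter export model, operators and settings are toy stand-ins for the 78-parameter PIT): tournament selection, blend crossover and Gaussian mutation evolve a population toward the observed P loss:

      import numpy as np

      rng = np.random.default_rng(11)
      obs = 4.4   # target annual P loss (kg P/ha/yr), as reported in the study

      def model(p):
          """Toy 3-parameter phosphorus-export model (stand-in for PIT)."""
          return p[0] * 1.5 + p[1] ** 2 + 0.2 * p[2]

      def fitness(p):
          return -(model(p) - obs) ** 2            # higher is better

      pop = rng.uniform(0.0, 2.0, size=(40, 3))
      for gen in range(100):
          scores = np.array([fitness(p) for p in pop])
          # tournament selection: keep the fitter of two random individuals
          i, j = rng.integers(0, 40, (2, 40))
          parents = pop[np.where(scores[i] > scores[j], i, j)]
          # blend crossover between consecutive parents, then Gaussian mutation
          alpha = rng.uniform(0, 1, (40, 1))
          children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
          children += rng.normal(0, 0.05, children.shape)
          pop = np.clip(children, 0.0, 2.0)

      best = pop[np.argmax([fitness(p) for p in pop])]
      print("calibrated parameters:", best, "| modelled P loss:", model(best))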

  13. Logarithmic transformed statistical models in calibration

    Zeis, C.D.

    1975-01-01

    A general type of statistical model used for the calibration of instruments having the property that the standard deviations of the observed values increase as a function of the mean value is described. The application to the Helix Counter at the Rocky Flats Plant is treated primarily from a theoretical point of view. The Helix Counter measures the amount of plutonium in certain types of chemicals. The method described can also be used for other calibrations. (U.S.)
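
    A minimal sketch of the idea (editorial, with invented numbers; not the Helix Counter model itself): when the noise is multiplicative, so that the standard deviation grows with the mean, fitting in log space restores constant variance, and the fitted line can be inverted for calibrated estimates:

      import numpy as np

      rng = np.random.default_rng(2)
      amount = np.linspace(1.0, 20.0, 60)                 # "true" quantities
      # Multiplicative noise: standard deviation grows with the mean response.
      counts = 50.0 * amount * np.exp(rng.normal(0.0, 0.1, 60))

      # Taking logs turns multiplicative noise into additive, constant-variance
      # noise, so ordinary least squares on log(counts) is appropriate.
      slope, intercept = np.polyfit(np.log(amount), np.log(counts), 1)
      print(f"log(counts) = {intercept:.3f} + {slope:.3f} * log(amount)")

      # Inverse use for calibration: estimate the amount from an observed count.
      c_obs = 400.0
      print("estimated amount:", np.exp((np.log(c_obs) - intercept) / slope))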

  14. Calibration of CORSIM models under saturated traffic flow conditions.

    2013-09-01

    This study proposes a methodology to calibrate microscopic traffic flow simulation models. The proposed methodology has the capability to simultaneously calibrate all the calibration parameters as well as demand patterns for any network topology...

  15. Effect of Kepler calibration on global seismic and background parameters

    Salabert David

    2017-01-01

    Calibration issues associated with scrambled collateral smear affecting the Kepler short-cadence data were discovered in Data Release 24 and were found to be present in all the previous data releases since launch. In consequence, a new Data Release 25 was reprocessed to correct for these problems. We perform here a preliminary study to evaluate the impact of the reprocessing on the extracted global seismic and background parameters between data releases. We analyze the sample of seismic solar analogs observed by Kepler in short cadence between Q5 and Q17. We start with this set of stars as it constitutes the best sample to put the Sun into context along its evolution, and any significant differences in the seismic and background parameters need to be investigated before any further studies of this sample can take place. We use the A2Z pipeline to derive both global seismic parameters and background parameters from Data Release 25 and previous data releases, and report on the measured differences.

  16. Calibration models for high enthalpy calorimetric probes.

    Kannel, A

    1978-07-01

    The accuracy of gas-aspirated, liquid-cooled calorimetric probes used for measuring the enthalpy of high-temperature gas streams is studied. The error in the differential temperature measurements caused by internal and external heat transfer interactions is considered and quantified by mathematical models. The analysis suggests calibration methods for the evaluation of dimensionless heat transfer parameters in the models, which can then give a more accurate value for the enthalpy of the sample. Calibration models for four types of calorimeters are applied to results from the literature and from our own experiments: a circular slit calorimeter developed by the author, a single-cooling-jacket probe, a double-cooling-jacket probe, and a split-flow cooling-jacket probe. The results show that the models are useful for describing and correcting the temperature measurements.

  17. SURFplus Model Calibration for PBX 9502

    Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-12-06

    The SURFplus reactive burn model is calibrated for the TATB-based explosive PBX 9502 at three initial temperatures: hot (75 C), ambient (23 C) and cold (-55 C). The CJ state depends on the initial temperature due to the variation in the initial density and initial specific energy of the PBX reactants. For the reactants, a porosity model for full-density TATB is used. This allows the initial PBX density to be set to its measured value even though the coefficients of thermal expansion for the TATB and the PBX differ. The PBX products EOS is taken as independent of the initial PBX state. The initial temperature also affects the sensitivity to shock initiation. The model rate parameters are calibrated to Pop plot data, the failure diameter, the limiting detonation speed just above the failure diameter, and curvature effect data for small curvature.

  18. Grid based calibration of SWAT hydrological models

    D. Gorgan

    2012-07-01

    The calibration and execution of large hydrological models, such as SWAT (Soil and Water Assessment Tool), developed for large areas, high resolution and huge input data, require not only quite a long execution time but also high computational resources. The SWAT hydrological model supports studies and predictions of the impact of land management practices on water, sediment, and agricultural chemical yields in complex watersheds. The paper presents the gSWAT application as a practical web-based solution for environmental specialists to calibrate extensive hydrological models and to run scenarios, by hiding the complex control of processes and heterogeneous resources across the grid-based high-performance computation infrastructure. The paper highlights the basic functionalities of the gSWAT platform and the features of the graphical user interface. The presentation is concerned with the development of working sessions, interactive control of calibration, direct and basic editing of parameters, process monitoring, and graphical and interactive visualization of the results. The experiments performed on different SWAT models and the obtained results demonstrate the benefits brought by the grid parallel and distributed environment as a solution for the processing platform. All the instances of SWAT models used in the reported experiments have been developed through the enviroGRIDS project, targeting the Black Sea catchment area.

  19. High Accuracy Transistor Compact Model Calibrations

    Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performance considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the calculation and quantification of margins with respect to a functional threshold, and of the uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  20. Gradient-based model calibration with proxy-model assistance

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivative calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in the calibration of a complex model and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
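
    An editorial sketch of this division of labour (a toy two-parameter simulator; PEST itself is not used here): a deliberately approximate, cheap Jacobian drives the Gauss-Newton upgrades, while each upgrade is evaluated with one run of the "expensive" model:

      import numpy as np

      def expensive(theta):
          """Stand-in for a slow simulator mapping 2 parameters to 3 outputs."""
          a, b = theta
          return np.array([a + 0.5 * b, a * b, np.exp(0.3 * a) + b])

      def proxy_jacobian(theta):
          """Cheap, deliberately approximate surrogate of the Jacobian (3x2);
          the nonlinear exp term is replaced by a fixed linearized slope."""
          a, b = theta
          return np.array([[1.0, 0.5], [b, a], [0.3, 1.0]])

      obs = expensive(np.array([1.0, 2.0]))       # synthetic calibration targets
      theta = np.array([0.2, 0.5])                # initial parameter guess
      for it in range(30):
          r = expensive(theta) - obs              # one expensive run per iteration
          J = proxy_jacobian(theta)               # gradients come from the proxy
          step = np.linalg.lstsq(J, -r, rcond=None)[0]   # Gauss-Newton upgrade
          if np.linalg.norm(step) < 1e-9:
              break
          theta = theta + step
      print("calibrated parameters:", theta)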

  1. Electroweak Calibration of the Higgs Characterization Model

    CERN. Geneva

    2015-01-01

    I will present the preliminary results of histogram fits using the Higgs Combine histogram fitting package. These fits can be used to estimate the effects of electroweak contributions to the p p -> H mu+ mu- Higgs production channel and calibrate Beyond Standard Model (BSM) simulations which ignore these effects. I will emphasize my findings' significance in the context of other research here at CERN and in the broader world of high energy physics.

  2. Ideas for fast accelerator model calibration

    Corbett, J.

    1997-05-01

    With the advent of a simple matrix inversion technique, measurement-based storage ring modeling has made rapid progress in recent years. Using fast computers with large memory, the matrix inversion procedure typically adjusts up to 10^3 model variables to fit on the order of 10^5 measurements. The results have been surprisingly accurate. Physics aside, one of the next frontiers is to simplify the process and to reduce computation time. In this paper, the authors discuss two approaches to speed up the model calibration process: recursive least-squares fitting and a piecewise fitting approach
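
    An editorial sketch of the recursive least-squares idea (toy regressors, not an accelerator model): each new measurement updates the parameter estimate incrementally, instead of refitting the whole 10^5-row system by matrix inversion:

      import numpy as np

      rng = np.random.default_rng(9)
      n_par = 4
      theta_true = rng.normal(0, 1, n_par)          # "true" model variables

      theta = np.zeros(n_par)                       # running estimate
      P = np.eye(n_par) * 1e3                       # inverse information matrix
      for k in range(500):                          # measurements arrive one at a time
          h = rng.normal(0, 1, n_par)               # regressor row for this measurement
          y = h @ theta_true + rng.normal(0, 0.01)  # noisy scalar measurement
          # Standard RLS update: gain, estimate, covariance.
          K = P @ h / (1.0 + h @ P @ h)
          theta = theta + K * (y - h @ theta)
          P = P - np.outer(K, h @ P)
      print("max parameter error:", np.abs(theta - theta_true).max())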

  3. Model calibration for building energy efficiency simulation

    Mustafaraj, Giorgio; Marini, Dashamir; Costa, Andrea; Keane, Marcus

    2014-01-01

    Highlights: • Developing a 3D model relating to building architecture, occupancy and HVAC operation. • Two calibration stages developed, the final model providing accurate results. • Using an onsite weather station for generating the weather data file in EnergyPlus. • Predicting the thermal behaviour of underfloor heating, heat pump and natural ventilation. • Monthly energy saving opportunities of 20-27% related to the heat pump were identified. - Abstract: This research work deals with an Environmental Research Institute (ERI) building where an underfloor heating system and natural ventilation are the main systems used to maintain comfort conditions throughout 80% of the building area. Firstly, this work involved developing a 3D model relating to building architecture, occupancy and HVAC operation. Secondly, the calibration methodology, which consists of two levels, was applied in order to ensure accuracy and reduce the likelihood of errors. To further improve the accuracy of calibration, a historical weather data file for the year 2011 was created from the on-site local weather station of the ERI building. After applying the second level of the calibration process, the values of the Mean Bias Error (MBE) and the Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)) on an hourly basis for heat pump electricity consumption varied within the following ranges: hourly MBE from -5.6% to 7.5% and hourly CV(RMSE) from 7.3% to 25.1%. Finally, the building was simulated with EnergyPlus to identify further possibilities of energy savings supplied by a water-to-water heat pump to the underfloor heating system. It was found that electricity consumption savings from the heat pump can vary between 20% and 27% on a monthly basis.
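
    The two calibration statistics are straightforward to compute; a short sketch following one common ASHRAE-style convention (sign conventions for MBE vary, and the consumption numbers here are invented):

      import numpy as np

      def mbe_percent(measured, simulated):
          """Mean bias error as a percentage of the total measured value."""
          m, s = np.asarray(measured), np.asarray(simulated)
          return 100.0 * np.sum(m - s) / np.sum(m)

      def cv_rmse_percent(measured, simulated):
          """Coefficient of variation of the RMSE, as a percentage."""
          m, s = np.asarray(measured), np.asarray(simulated)
          rmse = np.sqrt(np.mean((m - s) ** 2))
          return 100.0 * rmse / np.mean(m)

      # Hypothetical hourly heat-pump electricity consumption (kWh).
      measured = np.array([12.1, 14.0, 13.2, 15.5, 11.8, 12.9])
      simulated = np.array([11.5, 14.8, 12.6, 16.0, 12.5, 12.2])
      print(f"MBE = {mbe_percent(measured, simulated):+.1f} %")
      print(f"CV(RMSE) = {cv_rmse_percent(measured, simulated):.1f} %")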

  4. Global Delivery Models

    Manning, Stephan; Larsen, Marcus M.; Bharati, Pratyush

    2013-01-01

    This article examines antecedents and performance implications of global delivery models (GDMs) in global business services. GDMs require geographically distributed operations to exploit both proximity to clients and time-zone spread for efficient service delivery. We propose and empirically show...

  5. Tropospheric and ionospheric media calibrations based on global navigation satellite system observation data

    Feltens, Joachim; Bellei, Gabriele; Springer, Tim; Kints, Mark V.; Zandbergen, René; Budnik, Frank; Schönemann, Erik

    2018-06-01

    Context: Calibration of radiometric tracking data for effects in the Earth's atmosphere is a crucial element in the field of deep-space orbit determination (OD). The troposphere can induce propagation delays on the order of several meters; the ionosphere up to the meter level for X-band signals and, in extreme cases, up to tens of meters for L-band ones. The use of media calibrations based on Global Navigation Satellite System (GNSS) measurement data can improve the accuracy of radiometric observation modelling and, as a consequence, the quality of orbit determination solutions. Aims: ESOC Flight Dynamics employs ranging, Doppler and delta-DOR (Delta-Differential One-Way Ranging) data for the orbit determination of interplanetary spacecraft. Currently, the media calibrations for troposphere and ionosphere are either computed from empirical models or, under mission-specific agreements, provided by external parties such as the Jet Propulsion Laboratory (JPL) in Pasadena, California. In order to become independent of external models and sources, it was decided to establish a new in-house service creating these media calibrations from GNSS measurements recorded at the ESA tracking sites and processed in-house by the ESOC Navigation Support Office, with comparable accuracy and quality. Methods: The new service was designed to depend as much as possible on ESA's own data and resources and as little as possible on external models and data. Dedicated, robust and simple algorithms, well suited for operational use, were worked out for that task. This paper describes the approach built up to realize this new in-house media calibration service. Results: Test results collected during three months of running the new media calibrations in quasi-operational mode indicate that GNSS-based tropospheric corrections can remove systematic signatures from the Doppler observations and biases from the range ones. For the ionosphere, a
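
    As a point of scale for the delays quoted above, the crudest zenith-to-slant mapping treats the atmosphere as a flat slab, so the slant delay grows roughly as 1/sin(elevation); operational services use far more refined mapping functions, so this is illustrative only.

        import numpy as np

        def slant_tropo_delay(zenith_delay_m, elevation_rad):
            """First-order flat-slab mapping of a zenith delay to a slant path."""
            return zenith_delay_m / np.sin(elevation_rad)

        # A typical ~2.3 m zenith tropospheric delay becomes ~8.9 m at 15 degrees:
        print(slant_tropo_delay(2.3, np.radians(15.0)))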

  6. Seepage Calibration Model and Seepage Testing Data

    Dixon, P.

    2004-01-01

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M and O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty

  7. Technical Report Series on Global Modeling and Data Assimilation. Volume 42; Soil Moisture Active Passive (SMAP) Project Calibration and Validation for the L4_C Beta-Release Data Product

    Koster, Randal D. (Editor); Kimball, John S.; Jones, Lucas A.; Glassy, Joseph; Stavros, E. Natasha; Madani, Nima (Editor); Reichle, Rolf H.; Jackson, Thomas; Colliander, Andreas

    2015-01-01

    During the post-launch Cal/Val Phase of SMAP there are two objectives for each science product team: 1) calibrate, verify, and improve the performance of the science algorithms, and 2) validate accuracies of the science data products as specified in the L1 science requirements according to the Cal/Val timeline. This report provides analysis and assessment of the SMAP Level 4 Carbon (L4_C) product specifically for the beta release. The beta-release version of the SMAP L4_C algorithms utilizes a terrestrial carbon flux model informed by SMAP soil moisture inputs along with optical remote sensing (e.g. MODIS) vegetation indices and other ancillary biophysical data to estimate global daily NEE and component carbon fluxes, particularly vegetation gross primary production (GPP) and ecosystem respiration (Reco). Other L4_C product elements include surface (<10 cm depth) soil organic carbon (SOC) stocks and associated environmental constraints to these processes, including soil moisture and landscape FT controls on GPP and Reco (Kimball et al. 2012). The L4_C product encapsulates SMAP carbon cycle science objectives by: 1) providing a direct link between terrestrial carbon fluxes and underlying freeze/thaw and soil moisture constraints to these processes, 2) documenting primary connections between terrestrial water, energy and carbon cycles, and 3) improving understanding of terrestrial carbon sink activity in northern ecosystems.

  8. Global ice sheet modeling

    Hughes, T.J.; Fastook, J.L.

    1994-05-01

    The University of Maine conducted this study for Pacific Northwest Laboratory (PNL) as part of a global climate modeling task for site characterization of the potential nuclear waste repository site at Yucca Mountain, NV. The purpose of the study was to develop a global ice sheet dynamics model that will forecast the three-dimensional configuration of global ice sheets for specific climate change scenarios. The objective of the third (final) year of the work was to produce ice sheet data for glaciation scenarios covering the next 100,000 years. This was accomplished using both the map-plane and flowband solutions of our time-dependent, finite-element gridpoint model. The theory and equations used to develop the ice sheet models are presented. Three future scenarios were simulated by the model and the results are discussed.

  9. Thermodynamically consistent model calibration in chemical kinetics

    Goutsias John

    2011-05-01

    Abstract Background: The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results: We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions: TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new
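
    The constraint pattern can be illustrated on a toy three-reaction cycle, where thermodynamics (the Wegscheider condition) requires the products of forward and reverse rate constants around the cycle to balance. The sketch below is a minimal analogue of the constrained-optimization formulation, with an invented misfit function and invented rate values; it is not the TCMC code.

        import numpy as np
        from scipy.optimize import minimize

        def wegscheider_residual(logk):
            """log(k1f k2f k3f) - log(k1r k2r k3r) must vanish around a cycle."""
            return np.sum(logk[:3]) - np.sum(logk[3:])

        def misfit(logk, raw):
            """Placeholder sum-of-squares distance to the raw (infeasible) estimates."""
            return np.sum((logk - raw) ** 2)

        raw = np.log([2.0, 0.5, 3.0, 1.0, 1.5, 2.5])   # infeasible raw estimates
        res = minimize(misfit, x0=raw, args=(raw,), method="SLSQP",
                       constraints=[{"type": "eq", "fun": wegscheider_residual}])
        print("feasible log-rates:", res.x)
        print("cycle residual:", wegscheider_residual(res.x))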

  10. A GPS-Based Pitot-Static Calibration Method Using Global Output-Error Optimization

    Foster, John V.; Cunningham, Kevin

    2010-01-01

    Pressure-based airspeed and altitude measurements for aircraft typically require calibration of the installed system to account for pressure sensing errors such as those due to local flow field effects. In some cases, calibration is used to meet requirements such as those specified in Federal Aviation Regulation Part 25. Several methods are used for in-flight pitot-static calibration, including tower fly-by, pacer aircraft, and trailing cone methods. In the 1990s, the introduction of satellite-based positioning systems to the civilian market enabled new in-flight calibration methods based on accurate ground speed measurements provided by the Global Positioning System (GPS). Use of GPS for airspeed calibration has many advantages such as accuracy, ease of portability (e.g. hand-held) and the flexibility of operating in airspace without the limitations of test range boundaries or ground telemetry support. The current research was motivated by the need for a rapid and statistically accurate method for in-flight calibration of pitot-static systems for remotely piloted, dynamically scaled research aircraft. Current calibration methods were deemed not practical for this application because of confined test range size and limited flight time available for each sortie. A method was developed that uses high-data-rate measurements of static and total pressure, and GPS-based ground speed measurements, to compute the pressure errors over a range of airspeed. The novel aspect of this approach is the use of system identification methods that rapidly compute optimal pressure error models with defined confidence intervals in near-real time. This method has been demonstrated in flight tests and has shown 2-σ bounds of approximately 0.2 kts with an order of magnitude reduction in test time over other methods. As part of this experiment, a unique database of wind measurements was acquired concurrently with the flight experiments, for the purpose of experimental validation of the
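
    A very reduced sketch of the output-error idea follows: treating the wind as two unknown constants, fit a simple airspeed-error model so that the corrected airspeed plus wind best reproduces the GPS ground-velocity record. All names and the linear error model are illustrative assumptions; the actual method estimates richer models with confidence intervals in near-real time.

        import numpy as np
        from scipy.optimize import least_squares

        def residuals(p, v_ind, heading, vn_gps, ve_gps):
            wn, we, a0, a1 = p                    # wind components, error coefficients
            v_true = v_ind + a0 + a1 * v_ind      # corrected airspeed (linear model)
            vn = v_true * np.cos(heading) + wn    # predicted ground-velocity north
            ve = v_true * np.sin(heading) + we    # predicted ground-velocity east
            return np.concatenate([vn - vn_gps, ve - ve_gps])

        # sol = least_squares(residuals, x0=np.zeros(4),
        #                     args=(v_ind, heading, vn_gps, ve_gps))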

  11. Calibration of hydrological model with programme PEST

    Brilly, Mitja; Vidmar, Andrej; Kryžanowski, Andrej; Bezak, Nejc; Šraj, Mojca

    2016-04-01

    PEST is a tool based on minimization of an objective function related to the root mean square error between model output and measurements. We use the "singular value decomposition" (SVD) section of the PEST control file and the Tikhonov regularization method for successful estimation of model parameters. PEST can fail when inverse problems are ill-posed, but SVD ensures that it maintains numerical stability. The choice of initial parameter values is an important issue in PEST and needs expert knowledge. The flexible nature of the PEST software and its ability to be applied to whole catchments at once produced calibrations that performed extremely well across a high number of sub-catchments. The parallel computing version of PEST, called BeoPEST, was successfully used to speed up the calibration process. BeoPEST employs smart slaves and point-to-point communications to transfer data between the master and slave computers. The HBV-light model is a simple multi-tank-type model for simulating precipitation-runoff. It is a conceptual balance model of catchment hydrology which simulates discharge using rainfall, temperature and estimates of potential evaporation. The HBV-light-CLI version allows the user to run HBV-light from the command line; input and results files are in XML form, which makes it easy to connect with other applications such as pre- and post-processing utilities and PEST itself. The procedure was applied to a hydrological model of the Savinja catchment (1852 km2), which consists of twenty-one sub-catchments. Data are processed on an hourly basis.
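
    The numerical role of SVD with Tikhonov regularization can be sketched in generic linear algebra (this mirrors the stabilizing idea, not PEST's internal code): small singular values, which make an ill-posed problem explode, are damped by filter factors.

        import numpy as np

        def tikhonov_svd_solve(J, r, alpha=1e-2, truncate=None):
            """Solve min ||J dp - r||^2 + alpha^2 ||dp||^2 via the SVD of J."""
            U, s, Vt = np.linalg.svd(J, full_matrices=False)
            if truncate is not None:               # optionally drop tiny singular values
                U, s, Vt = U[:, :truncate], s[:truncate], Vt[:truncate]
            f = s / (s ** 2 + alpha ** 2)          # regularized filter factors
            return Vt.T @ (f * (U.T @ r))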

  12. Calibration

    Greacen, E.L.; Correll, R.L.; Cunningham, R.B.; Johns, G.G.; Nicolls, K.D.

    1981-01-01

    Procedures common to different methods of calibration of neutron moisture meters are outlined, and laboratory and field calibration methods are compared. Gross errors which arise from faulty calibration techniques are described. The count rate can be affected by the dry bulk density of the soil, the volumetric content of constitutional hydrogen, and other chemical components of the soil and soil solution. Calibration is further complicated by the fact that the neutron meter responds more strongly to the soil properties close to the detector and source. The differences in slope of calibration curves for different soils can be as much as 40%.
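
    In its simplest form, each soil gets its own straight-line calibration of volumetric water content against relative count rate; the numbers below are invented purely to show the fit, and, per the slope differences noted above, the coefficients must be re-estimated for each soil.

        import numpy as np

        count_ratio = np.array([0.35, 0.52, 0.68, 0.85])  # count / standard count
        theta_v     = np.array([0.10, 0.20, 0.30, 0.40])  # volumetric water content

        slope, intercept = np.polyfit(count_ratio, theta_v, 1)
        print(f"theta_v = {slope:.3f} * ratio + {intercept:.3f}")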

  13. Calibration of discrete element model parameters: soybeans

    Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal

    2018-05-01

    Discrete element method (DEM) simulations are broadly used to gain insight into the flow characteristics of granular materials in complex particulate systems. DEM input parameters are the critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine DEM input parameters for the Hertz-Mindlin model using soybeans as the granular material. To achieve this aim, a widely accepted calibration approach was used with a standard box-type apparatus. Further, qualitative and quantitative findings such as particle profile, height of kernels retained along the acrylic wall, and angle of repose from experiments and numerical simulations were compared to obtain the parameters. The calibrated set of DEM input parameters includes (a) material properties: particle geometric mean diameter (6.24 mm); spherical shape; particle density (1220 kg m^{-3}); and (b) interaction parameters, particle-particle: coefficient of restitution (0.17), coefficient of static friction (0.26), coefficient of rolling friction (0.08); and particle-wall: coefficient of restitution (0.35), coefficient of static friction (0.30), coefficient of rolling friction (0.08). The results may adequately be used to simulate particle-scale mechanics (grain commingling, flow/motion, forces, etc.) of soybeans in post-harvest machinery and devices.
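
    For convenience, the calibrated values quoted above are collected here as a hypothetical input dictionary of the kind a DEM driver script might consume (the layout is illustrative, not from the paper).

        soybean_dem_params = {
            "particle": {
                "geometric_mean_diameter_mm": 6.24,
                "shape": "spherical",
                "density_kg_m3": 1220,
            },
            "particle_particle": {        # soybean-soybean contacts
                "restitution": 0.17,
                "static_friction": 0.26,
                "rolling_friction": 0.08,
            },
            "particle_wall": {            # soybean-wall contacts
                "restitution": 0.35,
                "static_friction": 0.30,
                "rolling_friction": 0.08,
            },
        }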

  14. Seepage Calibration Model and Seepage Testing Data

    S. Finsterle

    2004-09-02

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross

  15. Seepage Calibration Model and Seepage Testing Data

    Finsterle, S.

    2004-01-01

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross-Drift to obtain the permeability structure for the seepage model

  16. Mechanistic site-based emulation of a global ocean biogeochemical model (MEDUSA 1.0) for parametric analysis and calibration: an application of the Marine Model Optimization Testbed (MarMOT 1.1)

    J. C. P. Hemmings

    2015-03-01

    Biogeochemical ocean circulation models used to investigate the role of plankton ecosystems in global change rely on adjustable parameters to capture the dominant biogeochemical dynamics of a complex biological system. In principle, optimal parameter values can be estimated by fitting models to observational data, including satellite ocean colour products such as chlorophyll that achieve good spatial and temporal coverage of the surface ocean. However, comprehensive parametric analyses require large ensemble experiments that are computationally infeasible with global 3-D simulations. Site-based simulations provide an efficient alternative but can only be used to make reliable inferences about global model performance if robust quantitative descriptions of their relationships with the corresponding 3-D simulations can be established. The feasibility of establishing such a relationship is investigated for an intermediate complexity biogeochemistry model (MEDUSA) coupled with a widely used global ocean model (NEMO). A site-based mechanistic emulator is constructed for surface chlorophyll output from this target model as a function of model parameters. The emulator comprises an array of 1-D simulators and a statistical quantification of the uncertainty in their predictions. The unknown parameter-dependent biogeochemical environment, in terms of initial tracer concentrations and lateral flux information required by the simulators, is a significant source of uncertainty. It is approximated by a mean environment derived from a small ensemble of 3-D simulations representing variability of the target model behaviour over the parameter space of interest. The performance of two alternative uncertainty quantification schemes is examined: a direct method based on comparisons between simulator output and a sample of known target model "truths" and an indirect method that is only partially reliant on knowledge of the target model output. In general, chlorophyll

  17. Regionalizing global climate models

    Pitman, A.J.; Arneth, A.; Ganzeveld, L.N.

    2012-01-01

    Global climate models simulate the Earth's climate impressively at scales of continents and greater. At these scales, large-scale dynamics and physics largely define the climate. At spatial scales relevant to policy makers, and to impacts and adaptation, many other processes may affect regional and

  18. Global Hail Model

    Werner, A.; Sanderson, M.; Hand, W.; Blyth, A.; Groenemeijer, P.; Kunz, M.; Puskeiler, M.; Saville, G.; Michel, G.

    2012-04-01

    Hail risk models are rare for the insurance industry. This is despite the fact that average annual hail losses can be large and hail dominates losses for many motor portfolios worldwide. Insufficient observational data, high spatio-temporal variability and data inhomogeneity have so far hindered the creation of credible models. In January 2012, a selected group of hail experts met at Willis in London in order to discuss ways to model hail risk at various scales. Discussions aimed at improving our understanding of hail occurrence and severity, and covered recent progress in the understanding of microphysical processes, climatological behaviour and hail vulnerability. The final outcome of the meeting was the formation of a global hail risk model initiative and the launch of a realistic global hail model in order to assess hail loss occurrence and severities for the globe. The following projects will be tackled: Microphysics of Hail and hail severity measures: Understand the physical drivers of hail and hailstone size development in different regions of the globe. Proposed factors include updraft and supercooled liquid water content in the troposphere. What are the threshold drivers of hail formation around the globe? Hail Climatology: Consider ways to build a realistic global climatological set of hail events based on physical parameters, including spatial variations in total availability of moisture and aerosols, among others, and using neural networks. Vulnerability, Exposure, and financial model: Use historical losses and event footprints available in the insurance market to approximate fragility distributions and damage potential for various hail sizes for property, motor, and agricultural business. Propagate uncertainty distributions and consider effects of policy conditions along with aggregating and disaggregating exposure and losses. This presentation provides an overview of ideas and tasks that lead towards a comprehensive global understanding of hail risk for

  19. Validation of A Global Hydrological Model

    Doell, P.; Lehner, B.; Kaspar, F.; Vassolo, S.

    Freshwater availability has been recognized as a global issue, and its consistent quantification not only in individual river basins but also at the global scale is required to support the sustainable use of water. The Global Hydrology Model WGHM, which is a submodel of the global water use and availability model WaterGAP 2, computes surface runoff, groundwater recharge and river discharge at a spatial resolution of 0.5°. WGHM is based on the best global data sets currently available, including a newly developed drainage direction map and a data set of wetlands, lakes and reservoirs. It calculates both natural and actual discharge by simulating the reduction of river discharge by human water consumption (as computed by the water use submodel of WaterGAP 2). WGHM is calibrated against observed discharge at 724 gauging stations (representing about 50% of the global land area) by adjusting a parameter of the soil water balance. It not only computes the long-term average water resources but also water availability indicators that take into account the interannual and seasonal variability of runoff and discharge. The reliability of the model results is assessed by comparing observed and simulated discharges at the calibration stations and at selected other stations. We conclude that reliable results can be obtained for basins of more than 20,000 km2. In particular, the 90% reliable monthly discharge is simulated well. However, there is a tendency for semi-arid and arid basins to be modeled less satisfactorily than humid ones, which is partially due to neglecting river channel losses and evaporation of runoff from small ephemeral ponds in the model. Also, the hydrology of highly developed basins with large artificial storages, basin transfers and irrigation schemes cannot be simulated well. The seasonality of discharge in snow-dominated basins is overestimated by WGHM, and if a snow-dominated basin is uncalibrated, discharge is likely to be underestimated.

  20. Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin

    Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.

    2006-01-01

    The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a step-wise, multiple-objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph, are simulated consistently with measured values.
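
    The step-wise logic can be sketched as a loop that calibrates one process at a time, freezing each step's parameters before moving on; differential evolution stands in below for the Shuffled Complex Evolution search, and model, objectives and data are hypothetical stand-ins.

        import numpy as np
        from scipy.optimize import differential_evolution  # SCE stand-in

        def calibrate_stepwise(model, steps, bounds, data):
            """steps: ordered (name, param_indices, objective) tuples."""
            params = np.array([0.5 * (lo + hi) for lo, hi in bounds])
            for name, idx, objective in steps:
                def step_cost(x, idx=idx, objective=objective):
                    trial = params.copy()
                    trial[idx] = x                     # vary only this step's parameters
                    return objective(model(trial), data[name])
                sub_bounds = [bounds[i] for i in idx]
                res = differential_evolution(step_cost, sub_bounds, maxiter=50, seed=0)
                params[idx] = res.x                    # freeze before the next step
            return params

    A typical call would order the steps as solar radiation, then potential evapotranspiration, then water balance, then daily runoff, so that each later objective inherits consistent intermediate states.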

  1. Influence of rainfall observation network on model calibration and application

    A. Bárdossy

    2008-01-01

    The objective of this study is to investigate the influence of the spatial resolution of the rainfall input on model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany was selected for this study. First, the semi-distributed HBV model is calibrated with precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of raingauge density. Secondly, the calibrated model is validated using interpolated precipitation from the same raingauge density used for calibration, as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach to fill in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the precipitation fields described above. The simulated hydrographs obtained in the three sets of experiments are analyzed through comparison of the computed Nash-Sutcliffe coefficients and several goodness-of-fit indexes. The results show that a model driven by different raingauge networks might need re-calibration of the model parameters: a model calibrated on relatively sparse precipitation information might perform well on dense precipitation information, while a model calibrated on dense precipitation information can fail on sparse precipitation information. Also, the model calibrated with the complete set of observed precipitation and run with incomplete observed data supplemented by estimates from multiple linear regressions, at the locations treated as
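
    The infilling step can be sketched as an ordinary least-squares regression of the gap-prone gauge on its complete neighbours over the overlap period; the rainfall values below are invented for illustration.

        import numpy as np

        # rows = time steps, columns = neighbouring gauges with complete records
        neighbours = np.array([[1.2, 0.8, 1.0],
                               [0.0, 0.1, 0.0],
                               [3.4, 2.9, 3.1],
                               [0.5, 0.7, 0.6]])
        target = np.array([1.1, 0.05, 3.2, 0.55])   # gauge with gaps, overlap part

        X = np.column_stack([np.ones(len(neighbours)), neighbours])
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)  # fit on the overlap

        new_obs = np.array([1.0, 2.0, 1.5])         # neighbours at a missing step
        estimate = coef[0] + coef[1:] @ new_obs
        print(f"infilled rainfall: {estimate:.2f} mm")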

  2. A single model procedure for estimating tank calibration equations

    Liebetrau, A.M.

    1997-10-01

    A fundamental component of any accountability system for nuclear materials is a tank calibration equation that relates the height of liquid in a tank to its volume. Tank volume calibration equations are typically determined from pairs of height and volume measurements taken in a series of calibration runs. After raw calibration data are standardized to a fixed set of reference conditions, the calibration equation is typically fit by dividing the data into several segments, corresponding to regions in the tank, and independently fitting the data for each segment. The estimates obtained for individual segments must then be combined to obtain an estimate of the entire calibration function. This process is tedious and time-consuming. Moreover, uncertainty estimates may be misleading because it is difficult to properly model run-to-run variability and between-segment correlation. In this paper, the authors describe a model whose parameters can be estimated simultaneously for all segments of the calibration data, thereby eliminating the need for segment-by-segment estimation. The essence of the proposed model is to define a suitable polynomial to fit to each segment and then extend its definition to the domain of the entire calibration function, so that the entire calibration function can be expressed as the sum of these extended polynomials. The model provides defensible estimates of between-run variability and yields a proper treatment of between-segment correlations. A portable software package, called TANCS, has been developed to facilitate the acquisition, standardization, and analysis of tank calibration data. The TANCS package was used for the calculations in an example presented to illustrate the unified modeling approach described in this paper. With TANCS, a trial calibration function can be estimated and evaluated in a matter of minutes.
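
    The single-model idea can be loosely illustrated with a truncated linear spline basis: every segment contributes a basis column defined over the full height range, so one least-squares solve fits all segments jointly. This mirrors the spirit of the approach, not the exact TANCS formulation, and the tank data below are synthetic.

        import numpy as np

        def extended_basis(h, knots):
            """Columns: 1, h, and (h - k)_+ for each segment boundary k."""
            cols = [np.ones_like(h), h]
            cols += [np.clip(h - k, 0.0, None) for k in knots]
            return np.column_stack(cols)

        height = np.linspace(0.0, 3.0, 31)                            # m
        volume = 2.0 * height + 0.5 * np.clip(height - 1.5, 0, None)  # kL, synthetic
        X = extended_basis(height, knots=[1.0, 2.0])
        beta, *_ = np.linalg.lstsq(X, volume, rcond=None)  # all segments at once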

  3. SWAT Model Configuration, Calibration and Validation for Lake Champlain Basin

    The Soil and Water Assessment Tool (SWAT) model was used to develop phosphorus loading estimates for sources in the Lake Champlain Basin. This document describes the model setup and parameterization, and presents calibration results.

  4. The Global Flood Model

    Williams, P.; Huddelston, M.; Michel, G.; Thompson, S.; Heynert, K.; Pickering, C.; Abbott Donnelly, I.; Fewtrell, T.; Galy, H.; Sperna Weiland, F.; Winsemius, H.; Weerts, A.; Nixon, S.; Davies, P.; Schiferli, D.

    2012-04-01

    Recently, a Global Flood Model (GFM) initiative has been proposed by Willis, UK Met Office, Esri, Deltares and IBM. The idea is to create a global community platform that enables better understanding of the complexities of flood risk assessment to better support the decisions, education and communication needed to mitigate flood risk. The GFM will provide tools for assessing the risk of floods, for devising mitigation strategies such as land-use changes and infrastructure improvements, and for enabling effective pre- and post-flood event response. The GFM combines humanitarian and commercial motives. It will benefit: - The public, seeking to preserve personal safety and property; - State and local governments, seeking to safeguard economic activity, and improve resilience; - NGOs, similarly seeking to respond proactively to flood events; - The insurance sector, seeking to understand and price flood risk; - Large corporations, seeking to protect global operations and supply chains. The GFM is an integrated and transparent set of modules, each composed of models and data. For each module, there are two core elements: a live "reference version" (a worked example) and a framework of specifications, which will allow development of alternative versions. In the future, users will be able to work with the reference version or substitute their own models and data. If these meet the specification for the relevant module, they will interoperate with the rest of the GFM. Some "crowd-sourced" modules could even be accredited and published to the wider GFM community. Our intent is to build on existing public, private and academic work, improve local adoption, and stimulate the development of multiple - but compatible - alternatives, thus strengthening mankind's ability to manage flood impacts. The GFM is being developed and managed by a non-profit organization created for the purpose. The business model will be inspired by open source software (e.g. Linux): - for non-profit usage

  5. Global Volcano Model

    Sparks, R. S. J.; Loughlin, S. C.; Cottrell, E.; Valentine, G.; Newhall, C.; Jolly, G.; Papale, P.; Takarada, S.; Crosweller, S.; Nayembil, M.; Arora, B.; Lowndes, J.; Connor, C.; Eichelberger, J.; Nadim, F.; Smolka, A.; Michel, G.; Muir-Wood, R.; Horwell, C.

    2012-04-01

    Over 600 million people live close enough to active volcanoes to be affected when they erupt. Volcanic eruptions cause loss of life, significant economic losses and severe disruption to people's lives, as highlighted by the recent eruption of Mount Merapi in Indonesia. The eruption of Eyjafjallajökull, Iceland in 2010 illustrated the potential of even small eruptions to have major impact on the modern world through disruption of complex critical infrastructure and business. The effects in the developing world on economic growth and development can be severe. There is evidence that large eruptions can cause a change in the earth's climate for several years afterwards. Aside from meteor impact and possibly an extreme solar event, very large magnitude explosive volcanic eruptions may be the only natural hazard that could cause a global catastrophe. GVM is a growing international collaboration that aims to create a sustainable, accessible information platform on volcanic hazard and risk. We are designing and developing an integrated database system of volcanic hazards, vulnerability and exposure with internationally agreed metadata standards. GVM will establish methodologies for analysis of the data (e.g. vulnerability indices) to inform risk assessment, develop complementary hazards models and create relevant hazards and risk assessment tools. GVM will develop the capability to anticipate future volcanism and its consequences. NERC is funding the start-up of this initiative for three years from November 2011. GVM builds directly on the VOGRIPA project started as part of the GRIP (Global Risk Identification Programme) in 2004 under the auspices of the World Bank and UN. Major international initiatives and partners such as the Smithsonian Institution - Global Volcanism Program, State University of New York at Buffalo - VHub, Earth Observatory of Singapore - WOVOdat and many others underpin GVM.

  6. Effects of temporal and spatial resolution of calibration data on integrated hydrologic water quality model identification

    Jiang, Sanyuan; Jomaa, Seifeddine; Büttner, Olaf; Rode, Michael

    2014-05-01

    Hydrological water quality modeling is increasingly used for investigating runoff and nutrient transport processes as well as watershed management, but it is mostly unclear how data availability determines model identification. In this study, the HYPE (HYdrological Predictions for the Environment) model, which is a process-based, semi-distributed hydrological water quality model, was applied in two different mesoscale catchments (Selke (463 km2) and Weida (99 km2)) located in central Germany to simulate discharge and inorganic nitrogen (IN) transport. PEST and DREAM(ZS) were combined with the HYPE model to conduct parameter calibration and uncertainty analysis. A split-sample test was used for model calibration (1994-1999) and validation (1999-2004). IN concentration and daily IN load were found to be highly correlated with discharge, indicating that IN leaching is mainly controlled by runoff. Both the dynamics and balances of water and IN load were well captured, with a Nash-Sutcliffe efficiency (NSE) greater than 0.83 during the validation period. Multi-objective calibration (calibrating hydrological and water quality parameters simultaneously) was found to outperform step-wise calibration in terms of model robustness. Multi-site calibration was able to improve model performance at internal sites and decrease parameter posterior uncertainty and prediction uncertainty. Nitrogen-process parameters calibrated using continuous daily averages of nitrate-N concentration observations produced better and more robust simulations of IN concentration and load, lower posterior parameter uncertainty and lower IN concentration prediction uncertainty compared to calibration against discontinuous biweekly nitrate-N concentration measurements. Both PEST and DREAM(ZS) are efficient in parameter calibration. However, DREAM(ZS) is more sound in terms of parameter identification and uncertainty analysis than PEST because of its capability to evolve parameter posterior distributions and estimate prediction uncertainty based on global
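
    For reference, the NSE statistic used above has the standard form below (the definition is well established, though the study's exact implementation details are not given here).

        import numpy as np

        def nse(observed, simulated):
            """Nash-Sutcliffe efficiency: 1 is perfect; 0 is no better than the mean."""
            o, s = np.asarray(observed), np.asarray(simulated)
            return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)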

  7. Calibration of the Site-Scale Saturated Zone Flow Model

    Zyvoloski, G. A.

    2001-01-01

    The purpose of the flow calibration analysis work is to provide Performance Assessment (PA) with the calibrated site-scale saturated zone (SZ) flow model that will be used to make radionuclide transport calculations. As such, it is one of the most important models developed in the Yucca Mountain project. This model will be a culmination of much of our knowledge of the SZ flow system. The objective of this study is to provide a defensible site-scale SZ flow and transport model that can be used for assessing total system performance. A defensible model would include geologic and hydrologic data that are used to form the hydrogeologic framework model; also, it would include hydrochemical information to infer transport pathways, in-situ permeability measurements, and water level and head measurements. In addition, the model should include information on major model sensitivities. Especially important are those that affect calibration, the direction of transport pathways, and travel times. Finally, if warranted, alternative calibrations representing different conceptual models should be included. To obtain a defensible model, all available data should be used (or at least considered) to obtain a calibrated model. The site-scale SZ model was calibrated using measured and model-generated water levels and hydraulic head data, specific discharge calculations, and flux comparisons along several of the boundaries. Model validity was established by comparing model-generated permeabilities with the permeability data from field and laboratory tests; by comparing fluid pathlines obtained from the SZ flow model with those inferred from hydrochemical data; and by comparing the upward gradient generated with the model with that observed in the field. This analysis is governed by the Office of Civilian Radioactive Waste Management (OCRWM) Analysis and Modeling Report (AMR) Development Plan ''Calibration of the Site-Scale Saturated Zone Flow Model'' (CRWMS M and O 1999a)

  8. Model Calibration of Exciter and PSS Using Extended Kalman Filter

    Kalsi, Karanjit; Du, Pengwei; Huang, Zhenyu

    2012-07-26

    Power system modeling and controls continue to become more complex with the advent of smart grid technologies and large-scale deployment of renewable energy resources. As demonstrated in recent studies, inaccurate system models could lead to large-scale blackouts, thereby motivating the need for model calibration. Current methods of model calibration rely on manual tuning based on engineering experience; they are time-consuming and can yield inaccurate parameter estimates. In this paper, the Extended Kalman Filter (EKF) is used as a tool to calibrate exciter and Power System Stabilizer (PSS) models of a particular type of machine in the Western Electricity Coordinating Council (WECC). The EKF-based parameter estimation is a recursive prediction-correction process which uses the mismatch between simulation and measurement to adjust the model parameters at every time step. Numerical simulations using actual field test data demonstrate the effectiveness of the proposed approach in calibrating the parameters.
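
    The prediction-correction recursion can be sketched for the parameter-estimation case: the parameters are modelled as constant states, so the prediction step only inflates the covariance, and the simulation/measurement mismatch drives the correction. The functions h and H_jac are hypothetical stand-ins for the simulated device response and its Jacobian.

        import numpy as np

        def ekf_param_step(theta, P, h, H_jac, z, Q, R):
            """theta: parameter estimate, P: covariance, z: new measurement."""
            P = P + Q                            # predict: parameters are constant,
                                                 # only their uncertainty grows
            H = H_jac(theta)                     # sensitivity of response to theta
            S = H @ P @ H.T + R                  # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
            theta = theta + K @ (z - h(theta))   # correct with the mismatch
            P = (np.eye(len(theta)) - K @ H) @ P
            return theta, P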

  9. Hand-eye calibration using a target registration error model.

    Chen, Elvis C S; Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M

    2017-10-01

    Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand-eye calibration between the camera and the tracking system. The authors introduce the concept of 'guided hand-eye calibration', where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand-eye calibration as a registration problem between homologous point-line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) is recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera.

  10. Fermentation process tracking through enhanced spectral calibration modeling.

    Triadaphillou, Sophia; Martin, Elaine; Montague, Gary; Norden, Alison; Jeffkins, Paul; Stimpson, Sarah

    2007-06-15

    The FDA process analytical technology (PAT) initiative will materialize in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure that is evident between the wavelengths and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling that combines a wavelength selection procedure, spectral window selection (SWS), where windows of wavelengths are automatically selected and subsequently used as the basis of the calibration model. However, due to the non-uniqueness of the windows selected when the algorithm is executed repeatedly, multiple models are constructed and these are then combined using stacking, thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.
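
    A simplified rendering of the window-selection-plus-stacking idea: score candidate wavelength windows by cross-validated PLS error, keep the best few, and average their predictions. This follows the spirit of the procedure under stated assumptions (fixed window width and a plain mean as the stacking rule), not the authors' exact algorithm.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        def stacked_window_pls(X, y, width=50, step=25, n_keep=3, n_comp=5):
            spans, scores = [], []
            for start in range(0, X.shape[1] - width + 1, step):
                span = slice(start, start + width)
                cv = cross_val_score(PLSRegression(n_components=n_comp),
                                     X[:, span], y, cv=5,
                                     scoring="neg_mean_squared_error").mean()
                spans.append(span)
                scores.append(cv)
            keep = np.argsort(scores)[-n_keep:]        # best windows by CV error
            models = [PLSRegression(n_components=n_comp).fit(X[:, spans[i]], y)
                      for i in keep]
            def predict(X_new):
                preds = [m.predict(X_new[:, spans[i]]).ravel()
                         for m, i in zip(models, keep)]
                return np.mean(preds, axis=0)          # simple stacked average
            return predict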

  11. The DarkSide-50 Experiment: Electron Recoil Calibrations and A Global Energy Variable

    Hackett, Brianne Rae [Hawaii U.

    2017-01-01

    Over the course of decades, there has been mounting astronomical evidence for non-baryonic dark matter, yet its precise nature remains elusive. A favored candidate for dark matter is the Weakly Interacting Massive Particle (WIMP), which arises naturally out of extensions to the Standard Model. WIMPs are expected to occasionally interact with particles of normal matter through nuclear recoils. DarkSide-50 aims to detect this type of particle through the use of a two-phase liquid argon time projection chamber. To make a claim of discovery, an accurate understanding of the background and WIMP search region is imperative. Knowledge of the backgrounds is gained through extensive studies of DarkSide-50's response to electron and nuclear recoils. The CALibration Insertion System (CALIS) was designed and built for the purpose of introducing radioactive sources into or near the detector, in a joint effort between Fermi National Accelerator Laboratory (FNAL) and the University of Hawai'i at Manoa. This work describes the testing, installation, and commissioning of CALIS at the Laboratori Nazionali del Gran Sasso. CALIS has been used in multiple calibration campaigns with both neutron and γ sources. In this work, DarkSide-50's response to electron recoils, which are important for background estimations, was studied through the use of calibration sources by constructing a global energy variable which takes into account the anti-correlation between scintillation and ionization signals produced by interactions in the liquid argon. Accurately reconstructing the event energy correlates directly with quantitatively understanding the WIMP sensitivity in DarkSide-50. This work also validates the theoretically predicted decay spectrum of 39Ar against 39Ar decay data collected in the early days of DarkSide-50 while it was filled with atmospheric argon; a validation of this type is not readily found in the literature. Finally, we show how well the constructed energy variable can predict
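
    A combined energy variable of this kind exploits the anti-correlation by converting each signal back to quanta and summing, in the usual Doke-style form; the work function and gains below are placeholders, not DarkSide-50's calibrated values.

        W_EV = 19.5           # eV per quantum (placeholder value)
        g1, g2 = 0.16, 20.0   # detected signal per photon / per electron (placeholders)

        def combined_energy_kev(s1, s2):
            """Energy from anti-correlated scintillation (S1) and ionization (S2)."""
            n_quanta = s1 / g1 + s2 / g2
            return n_quanta * W_EV / 1000.0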

  12. Cosmic CARNage I: on the calibration of galaxy formation models

    Knebe, Alexander; Pearce, Frazer R.; Gonzalez-Perez, Violeta; Thomas, Peter A.; Benson, Andrew; Asquith, Rachel; Blaizot, Jeremy; Bower, Richard; Carretero, Jorge; Castander, Francisco J.; Cattaneo, Andrea; Cora, Sofía A.; Croton, Darren J.; Cui, Weiguang; Cunnama, Daniel; Devriendt, Julien E.; Elahi, Pascal J.; Font, Andreea; Fontanot, Fabio; Gargiulo, Ignacio D.; Helly, John; Henriques, Bruno; Lee, Jaehyun; Mamon, Gary A.; Onions, Julian; Padilla, Nelson D.; Power, Chris; Pujol, Arnau; Ruiz, Andrés N.; Srisawat, Chaichalit; Stevens, Adam R. H.; Tollet, Edouard; Vega-Martínez, Cristian A.; Yi, Sukyoung K.

    2018-04-01

    We present a comparison of nine galaxy formation models, eight semi-analytical, and one halo occupation distribution model, run on the same underlying cold dark matter simulation (cosmological box of comoving width 125 h^{-1} Mpc, with a dark-matter particle mass of 1.24 × 10^{9} h^{-1} M⊙) and the same merger trees. While their free parameters have been calibrated to the same observational data sets using two approaches, they nevertheless retain some `memory' of any previous calibration that served as the starting point (especially for the manually tuned models). For the first calibration, models reproduce the observed z = 0 galaxy stellar mass function (SMF) within 3σ. The second calibration extended the observational data to include the z = 2 SMF alongside the z ≈ 0 star formation rate function, cold gas mass, and the black hole-bulge mass relation. Encapsulating the observed evolution of the SMF from z = 2 to 0 is found to be very hard within the context of the physics currently included in the models. We finally use our calibrated models to study the evolution of the stellar-to-halo mass (SHM) ratio. For all models, we find that the peak value of the SHM relation decreases with redshift. However, the trends seen for the evolution of the peak position as well as the mean scatter in the SHM relation are rather weak and strongly model dependent. Both the calibration data sets and model results are publicly available.

  13. Cumulative error models for the tank calibration problem

    Goldman, A.; Anderson, L.G.; Weber, J.

    1983-01-01

    The purpose of a tank calibration equation is to obtain an estimate of the liquid volume that corresponds to a liquid level measurement. Calibration experimental errors occur in both liquid level and liquid volume measurements. If one of the errors is relatively small, the calibration equation can be determined from well-known regression and calibration methods. If both variables are assumed to be in error, then for linear cases a prototype model should be considered. Many investigators are not familiar with this model or do not have computing facilities capable of obtaining numerical solutions. This paper discusses and compares three linear models that approximate the prototype model and have the advantage of much simpler computations. Comparisons among the four models and recommendations of suitability are made from simulations and from analyses of six sets of experimental data.
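
    One standard errors-in-both-variables treatment for the linear case is Deming regression, shown below as a general illustration of the setting (it is not claimed to be one of the paper's three approximate models); delta is the assumed ratio of the two error variances.

        import numpy as np

        def deming_fit(x, y, delta=1.0):
            """Linear fit when both x and y carry measurement error."""
            mx, my = x.mean(), y.mean()
            sxx = np.sum((x - mx) ** 2)
            syy = np.sum((y - my) ** 2)
            sxy = np.sum((x - mx) * (y - my))
            slope = ((syy - delta * sxx
                      + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2))
                     / (2 * sxy))
            return slope, my - slope * mx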

  14. Testing of a one dimensional model for Field II calibration

    Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten

    2008-01-01

    Field II is a program for simulating ultrasound transducer fields. It is capable of calculating the emitted and pulse-echoed fields for both pulsed and continuous wave transducers. To make it fully calibrated, a model of the transducer's electro-mechanical impulse response must be included. We ... examine an adapted one dimensional transducer model originally proposed by Willatzen [9] to calibrate Field II. This model is modified to calculate the required impulse responses needed by Field II for a calibrated field pressure and external circuit current calculation. The testing has been performed ... to the calibrated Field II program for 1, 4, and 10 cycle excitations. Two parameter sets were applied for modeling: one real-valued Pz27 parameter set, manufacturer supplied, and one complex-valued parameter set found in the literature, Algueró et al. [11]. The latter implicitly accounts for attenuation. Results show ...

  15. Balance between calibration objectives in a conceptual hydrological model

    Booij, Martijn J.; Krol, Martinus S.

    2010-01-01

    Three different measures to determine the optimum balance between calibration objectives are compared: the combined rank method, parameter identifiability and model validation. Four objectives (water balance, hydrograph shape, high flows, low flows) are included in each measure. The contributions of

  16. A Method to Test Model Calibration Techniques: Preprint

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  17. Presentation, calibration and validation of the low-order, DCESS Earth System Model

    Shaffer, G.; Olsen, S. Malskaer; Pedersen, Jens Olaf Pepke

    2008-01-01

    A new, low-order Earth system model is described, calibrated and tested against Earth system data. The model features modules for the atmosphere, ocean, ocean sediment, land biosphere and lithosphere and has been designed to simulate global change on time scales of years to millions of years ... remineralization. The lithosphere module considers outgassing, weathering of carbonate and silicate rocks and weathering of rocks containing old organic carbon and phosphorus. Weathering rates are related to mean atmospheric temperatures. A pre-industrial, steady state calibration to Earth system data is carried ...

  18. Using Active Learning for Speeding up Calibration in Simulation Models.

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as an artificial neural network to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration.
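
    The loop described above can be sketched as follows: a cheap learner is trained on evaluated parameter combinations and used to rank the remaining ones, so only the most promising are passed to the expensive simulator. An MLPRegressor on a continuous match score stands in for the paper's artificial neural network, and run_simulation and match_score are hypothetical stand-ins.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def active_calibration(candidates, run_simulation, match_score,
                               n_seed=200, n_batch=100, n_rounds=10):
            """candidates: (N, d) array of parameter combinations to screen."""
            rng = np.random.default_rng(0)
            done = np.zeros(len(candidates), dtype=bool)
            idx = rng.choice(len(candidates), n_seed, replace=False)
            done[idx] = True
            X = candidates[idx]
            y = np.array([match_score(run_simulation(p)) for p in X])
            net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
            for _ in range(n_rounds):
                net.fit(X, y)
                promise = net.predict(candidates)
                promise[done] = -np.inf                # never re-run old combinations
                pick = np.argsort(promise)[-n_batch:]  # most promising batch
                done[pick] = True
                y_new = np.array([match_score(run_simulation(p))
                                  for p in candidates[pick]])
                X = np.vstack([X, candidates[pick]])
                y = np.concatenate([y, y_new])
            return X, y                                # evaluated combos and scores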

  19. A global calibration method for multiple vision sensors based on multiple targets

    Liu, Zhen; Zhang, Guangjun; Wei, Zhenzhong; Sun, Junhua

    2011-01-01

    The global calibration of multiple vision sensors (MVS) has been widely studied in the last two decades. In this paper, we present a global calibration method for MVS with non-overlapping fields of view (FOVs) using multiple targets (MT). MT is constructed by fixing several targets, called sub-targets, together. The mutual coordinate transformations between sub-targets need not be known. The main procedures of the proposed method are as follows: one vision sensor is selected from MVS to establish the global coordinate frame (GCF). MT is placed in front of the vision sensors several (at least four) times. Using the constraint that the relative positions of all sub-targets are invariant, the transformation matrix from the coordinate frame of each vision sensor to GCF can be solved. Both synthetic and real experiments are carried out and good results are obtained. The proposed method has been applied to several real measurement systems and has proven both flexible and accurate. It can serve as an attractive alternative to existing global calibration methods.
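    The core of such a method is chaining homogeneous transformations through the rigid multi-target. The sketch below uses made-up poses (in the real method the fixed sub-target-to-sub-target transform would be solved from at least four placements) to show how a camera with no overlapping FOV is brought into the GCF.

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed inputs: target-to-camera poses recovered by each sensor's own
# extrinsic calibration against the sub-target it can see.
T_sub0_to_cam0 = hom(np.eye(3), [0.0, 0.0, 2.0])  # seen by reference camera
T_sub1_to_cam1 = hom(np.eye(3), [0.1, 0.0, 1.8])  # seen by camera 1
T_sub1_to_sub0 = hom(np.eye(3), [0.5, 0.0, 0.0])  # fixed, assumed solved

# Camera 1 -> GCF (camera 0's frame): camera1 -> sub1 -> sub0 -> camera0.
T_cam1_to_gcf = T_sub0_to_cam0 @ T_sub1_to_sub0 @ np.linalg.inv(T_sub1_to_cam1)
print(T_cam1_to_gcf)
```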

  20. Some tests of wet tropospheric calibration for the CASA Uno Global Positioning System experiment

    Dixon, T. H.; Wolf, S. Kornreich

    1990-01-01

    Wet tropospheric path delay can be a major error source for Global Positioning System (GPS) geodetic experiments. Strategies for minimizing this error are investigated using data from CASA Uno, the first major GPS experiment in Central and South America, where wet path delays may be both high and variable. Wet path delay calibration using water vapor radiometers (WVRs) and residual delay estimation is compared with strategies where the entire wet path delay is estimated stochastically without prior calibration, using data from a 270-km test baseline in Costa Rica. Both approaches yield centimeter-level baseline repeatability and similar tropospheric estimates, suggesting that WVR calibration is not critical for obtaining high precision results with GPS in the CASA region.

  1. The Open Global Glacier Model

    Marzeion, B.; Maussion, F.

    2017-12-01

    Mountain glaciers are one of the few remaining sub-systems of the global climate system for which no globally applicable, open source, community-driven model exists. Notable examples from the ice sheet community include the Parallel Ice Sheet Model or Elmer/Ice. While the atmospheric modeling community has a long tradition of sharing models (e.g. the Weather Research and Forecasting model) or comparing them (e.g. the Coupled Model Intercomparison Project or CMIP), recent initiatives originating from the glaciological community show a new willingness to better coordinate global research efforts following the CMIP example (e.g. the Glacier Model Intercomparison Project or the Glacier Ice Thickness Estimation Working Group). In the recent past, great advances have been made in the global availability of data and methods relevant for glacier modeling, spanning glacier outlines, automatized glacier centerline identification, bed rock inversion methods, and global topographic data sets. Taken together, these advances now allow the ice dynamics of glaciers to be modeled on a global scale, provided that adequate modeling platforms are available. Here, we present the Open Global Glacier Model (OGGM), developed to provide a global scale, modular, and open source numerical model framework for consistently simulating past and future global scale glacier change. Global not only in the sense of leading to meaningful results for all glaciers combined, but also for any small ensemble of glaciers, e.g. at the headwater catchment scale. Modular to allow combinations of different approaches to the representation of ice flow and surface mass balance, enabling a new kind of model intercomparison. Open source so that the code can be read and used by anyone and so that new modules can be added and discussed by the community, following the principles of open governance. Consistent in order to provide uncertainty measures at all realizable scales.

  2. Using genetic algorithm and TOPSIS for Xinanjiang model calibration with a single procedure

    Cheng, Chun-Tian; Zhao, Ming-Yan; Chau, K. W.; Wu, Xin-Yu

    2006-01-01

    The Genetic Algorithm (GA) searches globally and is thus useful for optimizing multiobjective problems, especially where the objective functions are ill-defined. Conceptual rainfall-runoff models that aim at predicting streamflow from knowledge of precipitation over a catchment have become a basic tool for flood forecasting. The parameter calibration of a conceptual model usually involves multiple criteria for judging performance against observed data. However, it is often difficult to derive all objective functions for the parameter calibration problem of a conceptual model. Thus, a new method for the multiple-criteria parameter calibration problem, which combines GA with TOPSIS (technique for order performance by similarity to ideal solution) for the Xinanjiang model, is presented. This study is an immediate further development of the authors' previous research (Cheng, C.T., Ou, C.P., Chau, K.W., 2002. Combining a fuzzy optimal model with a genetic algorithm to solve multi-objective rainfall-runoff model calibration. Journal of Hydrology, 268, 72-86), whose main disadvantage is that it splits the whole procedure into two parts, making it difficult to grasp the overall best behavior of the model during the calibration procedure. The current method integrates the two parts of Xinanjiang rainfall-runoff model calibration, simplifying the procedures of model calibration and validation and revealing the behavior of the observed data as a whole. Comparison with the two-step procedure shows that the current methodology gives similar results, is equally feasible and robust, and is simpler and easier to apply in practice.
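    For readers unfamiliar with TOPSIS, the ranking step can be sketched as follows; the criteria, weights, and candidate scores are illustrative only.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """scores: (n_candidates, n_criteria); benefit[j]=True if larger is better."""
    norm = scores / np.linalg.norm(scores, axis=0)    # vector-normalize columns
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                    # higher = closer to ideal

# Toy criteria for three candidate parameter sets: an efficiency score
# (benefit) and an absolute volume bias (cost).
scores = np.array([[0.82, 0.05],
                   [0.78, 0.02],
                   [0.85, 0.12]])
closeness = topsis(scores, weights=np.array([0.6, 0.4]),
                   benefit=np.array([True, False]))
print(np.argsort(closeness)[::-1])                    # best candidate first
```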

  3. Regional Calibration of SCS-CN L-THIA Model: Application for Ungauged Basins

    Ji-Hong Jeon

    2014-05-01

    Estimating surface runoff for ungauged watersheds is an important issue. The Soil Conservation Service Curve Number (SCS-CN) method, developed from long-term experimental data, is widely used to estimate surface runoff from gauged or ungauged watersheds. Many modelers have used the documented SCS-CN parameters without calibration, sometimes resulting in significant errors in estimating surface runoff. Several methods for regionalization of SCS-CN parameters were evaluated: (1) average; (2) land use area weighted average; (3) hydrologic soil group area weighted average; (4) combined land use and hydrologic soil group area weighted average; (5) spatial nearest neighbor; (6) inverse distance weighted average; and (7) global calibration. Model performance for each method was evaluated with application to 14 watersheds located in Indiana: eight watersheds were used for calibration and six for validation. In the validation results, the spatial nearest neighbor method provided the highest average Nash-Sutcliffe (NS) value, at 0.58 for six watersheds, but it also included the lowest NS value, and the variance of NS values for this method was the highest. The global calibration method provided the second highest average NS value, at 0.56, with low variation in NS values. Although the spatial nearest neighbor method provided the highest average NS value, it was not statistically different from the other methods, whereas the global calibration method was significantly different from all other methods except spatial nearest neighbor. Therefore, we conclude that the global calibration method is appropriate for regionalizing SCS-CN parameters for ungauged watersheds.
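    As an example of one of the simpler regionalization options above, an inverse distance weighted estimate for an ungauged basin can be sketched in a few lines; the calibrated values and distances are hypothetical.

```python
import numpy as np

def idw(values, distances, power=2.0):
    """Inverse-distance-weighted average of calibrated parameter values."""
    w = 1.0 / np.asarray(distances, dtype=float) ** power
    return float(np.sum(w * np.asarray(values, dtype=float)) / np.sum(w))

# Hypothetical calibrated CN adjustment factors at three gauged watersheds
# and their centroid distances (km) to the ungauged basin.
cn_factor = idw(values=[0.95, 1.10, 1.02], distances=[12.0, 35.0, 20.0])
print(cn_factor)
```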

  4. A Generic Software Framework for Data Assimilation and Model Calibration

    Van Velzen, N.

    2010-01-01

    The accuracy of dynamic simulation models can be increased by using observations in conjunction with a data assimilation or model calibration algorithm. However, implementing such algorithms usually increases the complexity of the model software significantly. By using concepts from object oriented

  5. A mathematical model for camera calibration based on straight lines

    Antonio M. G. Tommaselli

    2005-12-01

    In order to facilitate the automation of the camera calibration process, a mathematical model using straight lines was developed, based on the equivalent-planes mathematical model. Parameter estimation for the developed model is achieved by the Least Squares Method with Conditions and Observations. The same method of adjustment was used to implement camera calibration with bundles, which is based on points. Experiments using simulated and real data have shown that the developed model based on straight lines gives results comparable to the conventional method with points. Details concerning the mathematical development of the model and experiments with simulated and real data are presented, and the results of both methods of camera calibration, with straight lines and with points, are compared.

  6. Calibration of communication skills items in OSCE checklists according to the MAAS-Global.

    Setyonugroho, Winny; Kropmans, Thomas; Kennedy, Kieran M; Stewart, Brian; van Dalen, Jan

    2016-01-01

    Communication skills (CS) are commonly assessed using 'communication items' in Objective Structured Clinical Examination (OSCE) station checklists. Our aim is to calibrate the communication component of OSCE station checklists against the MAAS-Global, which is a valid and reliable standard for assessing CS in undergraduate medical education. Three raters independently compared 280 checklists from 4 disciplines contributing to the undergraduate year 4 OSCE against the 17 items of the MAAS-Global standard. G-theory was used to analyze the reliability of this calibration procedure. G-Kappa was 0.8; for two raters it was 0.72, and for one rater it fell to 0.57. 46% of the checklist items corresponded to section three of the MAAS-Global (i.e. medical content of the consultation), whilst 12% corresponded to section two (i.e. general CS), and 8.2% to section one (i.e. CS for each separate phase of the consultation). 34% of the items were not considered to be CS. A G-Kappa of 0.8 confirms a reliable and valid procedure for calibrating OSCE CS checklist items using the MAAS-Global. We strongly suggest that such a procedure is more widely employed to arrive at a stable (valid and reliable) judgment of the communication component in existing checklists for medical students' communication behaviours. It is possible to measure the 'true' caliber of CS in OSCE stations. Students' results are thereby comparable between and across stations, students and institutions. A reliable calibration procedure requires only two raters.

  7. Global Calibration of Multi-Cameras Based on Refractive Projection and Ray Tracing

    Mingchi Feng

    2017-10-01

    Multi-camera systems are widely applied in three-dimensional (3D) computer vision, especially when multiple cameras are distributed on both sides of the measured object. The calibration methods of multi-camera systems are critical to the accuracy of vision measurement, and the key is to find an appropriate calibration target. In this paper, a high-precision camera calibration method for multi-camera systems based on transparent glass checkerboards and ray tracing is described, and is used to calibrate multiple cameras distributed on both sides of the glass checkerboard. Firstly, the intrinsic parameters of each camera are obtained by Zhang's calibration method. Then, multiple cameras capture several images from the front and back of the glass checkerboard with different orientations, and all images contain distinct grid corners. As the cameras on one side are not affected by the refraction of the glass checkerboard, their extrinsic parameters can be directly calculated. However, the cameras on the other side are influenced by the refraction of the glass checkerboard, and direct use of the projection model would produce a calibration error. A multi-camera calibration method using a refractive projection model and ray tracing is developed to eliminate this error. Furthermore, both synthetic and real data are employed to validate the proposed approach. The experimental results of refractive calibration show that the error of the 3D reconstruction is smaller than 0.2 mm, the relative errors of both rotation and translation are less than 0.014%, and the mean and standard deviation of the reprojection error of the four-camera system are 0.00007 and 0.4543 pixels, respectively. The proposed method is flexible, highly accurate, and simple to carry out.

  8. Global Delivery Models

    Manning, Stephan; Møller Larsen, Marcus; Bharati, Pratyush

    -zone spread allowing for 24/7 service delivery and access to resources. Based on comprehensive data we show that providers are likely to establish GDM configurations when clients value access to globally distributed talent pools and speed of service delivery, and in particular when services are highly...

  9. Global Delivery Models

    Manning, Stephan; Møller Larsen, Marcus; Bharati, Pratyush M.

    2015-01-01

    antecedents and contingencies of setting up GDM structures. Based on comprehensive data we show that providers are likely to establish GDM location configurations when clients value access to globally distributed talent and speed of service delivery, in particular when services are highly commoditized...

  10. Stochastic calibration and learning in nonstationary hydroeconomic models

    Maneta, M. P.; Howitt, R.

    2014-05-01

    Concern about water scarcity and adverse climate events over agricultural regions has motivated a number of efforts to develop operational integrated hydroeconomic models to guide adaptation and optimal use of water. Once calibrated, these models are used for water management and analysis assuming they remain valid under future conditions. In this paper, we present and demonstrate a methodology that permits the recursive calibration of economic models of agricultural production from noisy but frequently available data. We use a standard economic calibration approach, namely positive mathematical programming (PMP), integrated in a data assimilation algorithm based on the ensemble Kalman filter (EnKF) equations to identify the economic model parameters. A moving average kernel ensures that new and past information on agricultural activity are blended during the calibration process, avoiding loss of information and overcalibration for the conditions of a single year. A regularization constraint akin to standard Tikhonov regularization is included in the filter to ensure its stability even in the presence of parameters with low sensitivity to observations. The results show that the implementation of the PMP methodology within a data assimilation framework based on the EnKF equations is an effective method to calibrate models of agricultural production even with noisy information. The recursive nature of the method incorporates new information as an added value to the known previous observations of agricultural activity without the need to store historical information. The robustness of the method opens the door to the use of new remote sensing algorithms for operational water management.
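    The analysis step of such an EnKF-based calibration can be sketched generically as below; the linear observation operator and parameter ensemble are toy stand-ins for the PMP model and observations of agricultural activity, and the paper's moving-average kernel and Tikhonov constraint are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(ensemble, obs, obs_var, h):
    """ensemble: (n_members, n_params); h maps a parameter vector to a
    scalar predicted observation (e.g. observed crop acreage)."""
    hx = np.array([h(m) for m in ensemble])            # predicted observations
    A = ensemble - ensemble.mean(axis=0)
    d = hx - hx.mean()
    Pxh = A.T @ d / (len(ensemble) - 1)                # cross-covariance
    Phh = d @ d / (len(ensemble) - 1) + obs_var        # innovation variance
    K = Pxh / Phh                                      # Kalman gain
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), len(ensemble))
    return ensemble + np.outer(perturbed_obs - hx, K)

# Toy example: two cost-like parameters observed through a linear response.
ens = rng.normal([1.0, 0.5], 0.2, size=(50, 2))
posterior = enkf_update(ens, obs=1.9, obs_var=0.05,
                        h=lambda m: m[0] + 2.0 * m[1])
print(posterior.mean(axis=0))
```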

  11. Model calibration and beam control systems for storage rings

    Corbett, W.J.; Lee, M.J.; Ziemann, V.

    1993-04-01

    Electron beam storage rings and linear accelerators are rapidly gaining worldwide popularity as scientific devices for the production of high-brightness synchrotron radiation. Today, everybody agrees that there is a premium on calibrating the storage ring model and determining errors in the machine as soon as possible after the beam is injected. In addition, the accurate optics model enables machine operators to predictably adjust key performance parameters, and allows reliable identification of new errors that occur during operation of the machine. Since the need for model calibration and beam control systems is common to all storage rings, software packages should be made that are portable between different machines. In this paper, we report on work directed toward achieving in-situ calibration of the optics model, detection of alignment errors, and orbit control techniques, with an emphasis on developing a portable system incorporating these tools.

  12. The cost of uniqueness in groundwater model calibration

    Moore, Catherine; Doherty, John

    2006-04-01

    Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The "cost of uniqueness" is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly "well calibrated". Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained. A comparison of pre- and post-calibration
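    The 'weighted average' statement can be made concrete with a small numerical experiment: under Tikhonov-style regularized inversion, the estimate equals the true parameter field filtered through the resolution matrix R = (JᵀJ + λI)⁻¹JᵀJ. The sketch below uses a random toy Jacobian and makes no claim about any particular groundwater model.

```python
import numpy as np

rng = np.random.default_rng(10)
n_obs, n_par, lam = 8, 20, 1.0                 # underdetermined: fewer obs than params
J = rng.normal(size=(n_obs, n_par))            # sensitivity (Jacobian) matrix
k_true = rng.normal(size=n_par)                # true hydraulic properties
d = J @ k_true                                 # noise-free observations

G = np.linalg.solve(J.T @ J + lam * np.eye(n_par), J.T)   # regularized inverse
R = G @ J                                      # resolution matrix
k_est = G @ d                                  # equals R @ k_true here

assert np.allclose(k_est, R @ k_true)
print(np.round(R[0], 2))   # row 0: the averaging weights behind estimate #1
```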

  13. Including sugar cane in the agro-ecosystem model ORCHIDEE-STICS: calibration and validation

    Valade, A.; Vuichard, N.; Ciais, P.; Viovy, N.

    2011-12-01

    Sugarcane is currently the most efficient bioenergy crop with regards to the energy produced per hectare. With approximately half the global bioethanol production in 2005, and a devoted land area expected to expand globally in the years to come, sugar cane is at the heart of the biofuel debate. Dynamic global vegetation models coupled with agronomical models are powerful and novel tools to tackle many of the environmental issues related to biofuels if they are carefully calibrated and validated against field observations. Here we adapt the agro-terrestrial model ORCHIDEE-STICS for sugar cane simulations. Observation data of LAI are used to evaluate the sensitivity of the model to parameters of nitrogen absorption and phenology, which are calibrated in a systematic way for six sites in Australia and La Reunion. We find that the optimal set of parameters is highly dependent on the sites' characteristics and that the model can reproduce satisfactorily the evolution of LAI. This careful calibration of ORCHIDEE-STICS for sugar cane biomass production for different locations and technical itineraries provides a strong basis for further analysis of the impacts of bioenergy-related land use change on carbon cycle budgets. As a next step, a sensitivity analysis is carried out to estimate the uncertainty of the model in biomass and carbon flux simulation due to its parameterization.

  14. Bayesian calibration of power plant models for accurate performance prediction

    Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der

    2014-01-01

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard of uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but to also assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O'Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.

  15. Calibration and Confirmation in Geophysical Models

    Werndl, Charlotte

    2016-04-01

    For policy decisions the best geophysical models are needed. To evaluate geophysical models, it is essential that the best available methods for confirmation are used. A hotly debated issue on confirmation in climate science (as well as in philosophy) is the requirement of use-novelty (i.e. that data can only confirm models if they have not already been used before). This talk investigates the issue of use-novelty and double-counting for geophysical models. We will see that the conclusions depend on the framework of confirmation, and that it is not clear that use-novelty is a valid requirement or that double-counting is illegitimate.

  16. Applying Hierarchical Model Calibration to Automatically Generated Items.

    Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.

    This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…

  17. Cloud-Based Model Calibration Using OpenStudio: Preprint

    Hale, E.; Lisell, L.; Goldwasser, D.; Macumber, D.; Dean, J.; Metzger, I.; Parker, A.; Long, N.; Ball, B.; Schott, M.; Weaver, E.; Brackney, L.

    2014-03-01

    OpenStudio is a free, open source Software Development Kit (SDK) and application suite for performing building energy modeling and analysis. The OpenStudio Parametric Analysis Tool has been extended to allow cloud-based simulation of multiple OpenStudio models parametrically related to a baseline model. This paper describes the new cloud-based simulation functionality and presents a model calibration case study. Calibration is initiated by entering actual monthly utility bill data into the baseline model. Multiple parameters are then varied over multiple iterations to reduce the difference between actual energy consumption and model simulation results, as calculated and visualized by billing period and by fuel type. Simulations are performed in parallel using the Amazon Elastic Cloud service. This paper highlights model parameterizations (measures) used for calibration, but the same multi-nodal computing architecture is available for other purposes, for example, recommending combinations of retrofit energy saving measures using the calibrated model as the new baseline.

  18. Global nuclear material control model

    Dreicer, J.S.; Rutherford, D.A.

    1996-01-01

    The nuclear danger can be reduced by a system for global management, protection, control, and accounting as part of a disposition program for special nuclear materials. The development of an international fissile material management and control regime requires conceptual research supported by an analytical and modeling tool that treats the nuclear fuel cycle as a complete system. Such a tool must represent the fundamental data, information, and capabilities of the fuel cycle, including an assessment of the global distribution of military and civilian fissile material inventories, a representation of the proliferation-pertinent physical processes, and a framework supportive of a national or international perspective. The authors have developed a prototype global nuclear material management and control systems analysis capability, the Global Nuclear Material Control (GNMC) model. The GNMC model establishes the framework for evaluating the global production, disposition, and safeguards and security requirements for fissile nuclear material.

  19. Calibrating cellular automaton models for pedestrians walking through corners

    Dias, Charitha; Lovreglio, Ruggiero

    2018-05-01

    Cellular Automata (CA) based pedestrian simulation models have gained remarkable popularity as they are simpler and easier to implement compared to other microscopic modeling approaches. However, incorporating traditional floor field representations in CA models to simulate pedestrian corner navigation behavior could result in unrealistic behaviors. Even though several previous studies have attempted to enhance CA models to realistically simulate pedestrian maneuvers around bends, such modifications have not been calibrated or validated against empirical data. In this study, two static floor field (SFF) representations, namely 'discrete representation' and 'continuous representation', are calibrated for CA-models to represent pedestrians' walking behavior around 90° bends. Trajectory data collected through a controlled experiment are used to calibrate these model representations. Calibration results indicate that although both floor field representations can represent pedestrians' corner navigation behavior, the 'continuous' representation fits the data better. Output of this study could be beneficial for enhancing the reliability of existing CA-based models by representing pedestrians' corner navigation behaviors more realistically.

  20. A single model procedure for tank calibration function estimation

    York, J.C.; Liebetrau, A.M.

    1995-01-01

    Reliable tank calibrations are a vital component of any measurement control and accountability program for bulk materials in a nuclear reprocessing facility. Tank volume calibration functions used in nuclear materials safeguards and accountability programs are typically constructed from several segments, each of which is estimated independently. Ideally, the segments correspond to structural features in the tank. In this paper the authors use an extension of the Thomas-Liebetrau model to estimate the entire calibration function in a single step. This procedure automatically takes significant run-to-run differences into account and yields an estimate of the entire calibration function in one operation. As with other procedures, the first step is to define suitable calibration segments. Next, a polynomial of low degree is specified for each segment. In contrast with the conventional practice of constructing a separate model for each segment, this information is used to set up the design matrix for a single model that encompasses all of the calibration data. Estimation of the model parameters is then done using conventional statistical methods. The method described here has several advantages over traditional methods. First, modeled run-to-run differences can be taken into account automatically at the estimation step. Second, no interpolation is required between successive segments. Third, variance estimates are based on all the data, rather than that from a single segment, with the result that discontinuities in confidence intervals at segment boundaries are eliminated. Fourth, the restrictive assumption of the Thomas-Liebetrau method, that the measured volumes be the same for all runs, is not required. Finally, the proposed methods are readily implemented using standard statistical procedures and widely-used software packages.
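    The single-design-matrix idea can be sketched as follows: per-segment polynomial terms are stacked into one matrix and all segments are estimated in a single least-squares fit. The tank geometry and segment breaks below are invented, and for brevity the sketch omits the run-to-run terms and any continuity conditions a real calibration would carry.

```python
import numpy as np

rng = np.random.default_rng(11)
h = np.sort(rng.uniform(0.0, 200.0, 120))          # liquid heights (cm)
breaks = [0.0, 80.0, 200.0]                        # two calibration segments
vol = np.where(h < 80, 3.0 * h, 240.0 + 2.2 * (h - 80)) + rng.normal(0, 1, h.size)

cols = []
for lo, hi in zip(breaks[:-1], breaks[1:]):
    seg = ((h >= lo) & (h < hi)).astype(float)     # segment indicator
    cols += [seg, seg * (h - lo)]                  # per-segment linear terms
X = np.column_stack(cols)                          # one design matrix, all segments

beta, *_ = np.linalg.lstsq(X, vol, rcond=None)     # single estimation step
print(np.round(beta, 2))                           # roughly [0, 3, 240, 2.2]
```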

  1. MT3DMS: Model use, calibration, and validation

    Zheng, C.; Hill, Mary C.; Cao, G.; Ma, R.

    2012-01-01

    MT3DMS is a three-dimensional multi-species solute transport model for solving advection, dispersion, and chemical reactions of contaminants in saturated groundwater flow systems. MT3DMS interfaces directly with the U.S. Geological Survey finite-difference groundwater flow model MODFLOW for the flow solution and supports the hydrologic and discretization features of MODFLOW. MT3DMS contains multiple transport solution techniques in one code, which can often be important, including in model calibration. Since its first release in 1990 as MT3D for single-species mass transport modeling, MT3DMS has been widely used in research projects and practical field applications. This article provides a brief introduction to MT3DMS and presents recommendations about calibration and validation procedures for field applications of MT3DMS. The examples presented suggest the need to consider alternative processes as models are calibrated and suggest opportunities and difficulties associated with using groundwater age in transport model calibration.

  2. Effect of Using Extreme Years in Hydrologic Model Calibration Performance

    Goktas, R. K.; Tezel, U.; Kargi, P. G.; Ayvaz, T.; Tezyapar, I.; Mesta, B.; Kentel, E.

    2017-12-01

    Hydrological models are useful in predicting and developing management strategies for controlling the system behaviour. Specifically they can be used for evaluating streamflow at ungaged catchments, effect of climate change, best management practices on water resources, or identification of pollution sources in a watershed. This study is a part of a TUBITAK project named "Development of a geographical information system based decision-making tool for water quality management of Ergene Watershed using pollutant fingerprints". Within the scope of this project, first water resources in Ergene Watershed is studied. Streamgages found in the basin are identified and daily streamflow measurements are obtained from State Hydraulic Works of Turkey. Streamflow data is analysed using box-whisker plots, hydrographs and flow-duration curves focusing on identification of extreme periods, dry or wet. Then a hydrological model is developed for Ergene Watershed using HEC-HMS in the Watershed Modeling System (WMS) environment. The model is calibrated for various time periods including dry and wet ones and the performance of calibration is evaluated using Nash-Sutcliffe Efficiency (NSE), correlation coefficient, percent bias (PBIAS) and root mean square error. It is observed that calibration period affects the model performance, and the main purpose of the development of the hydrological model should guide calibration period selection. Acknowledgement: This study is funded by The Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 115Y064.
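    For reference, the performance measures named above are commonly defined as in the sketch below; the exact formulations used in the project may differ.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias: positive values indicate model underestimation here."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rmse(obs, sim):
    """Root mean square error in the units of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

obs = [3.2, 5.1, 9.8, 4.0]
sim = [2.9, 5.6, 8.7, 4.4]
print(nse(obs, sim), pbias(obs, sim), rmse(obs, sim))
```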

  3. Calibration of a stochastic health evolution model using NHIS data

    Gupta, Aparna; Li, Zhisheng

    2011-10-01

    This paper presents and calibrates an individual's stochastic health evolution model. In this health evolution model, the uncertainty of health incidents is described by a stochastic process with a finite number of possible outcomes. We construct a comprehensive health status index (HSI) to describe an individual's health status, as well as a health risk factor system (RFS) to classify individuals into different risk groups. Based on the maximum likelihood estimation (MLE) method and the method of nonlinear least squares fitting, model calibration is formulated in terms of two mixed-integer nonlinear optimization problems. Using the National Health Interview Survey (NHIS) data, the model is calibrated for specific risk groups. Longitudinal data from the Health and Retirement Study (HRS) is used to validate the calibrated model, which displays good validation properties. The end goal of this paper is to provide a model and methodology, whose output can serve as a crucial component of decision support for strategic planning of health related financing and risk management.

  4. Optical model and calibration of a sun tracker

    Volkov, Sergei N.; Samokhvalov, Ignatii V.; Cheong, Hai Du; Kim, Dukhyeon

    2016-01-01

    Sun trackers are widely used to investigate scattering and absorption of solar radiation in the Earth's atmosphere. We present a method for optimization of the optical altazimuth sun tracker model with output radiation direction aligned with the axis of a stationary spectrometer. The method solves the problem of stability loss in tracker pointing at the Sun near the zenith. An optimal method for tracker calibration at the measurement site is proposed in the present work. A method of moving calibration is suggested for mobile applications in the presence of large temperature differences and errors in the alignment of the optical system of the tracker. - Highlights: • We present an optimal optical sun tracker model for atmospheric spectroscopy. • The problem of loss of stability of tracker pointing at the Sun has been solved. • We propose an optimal method for tracker calibration at a measurement site. • Test results demonstrate the efficiency of the proposed optimization methods.

  5. A high resolution global scale groundwater model

    de Graaf, Inge; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc

    2014-05-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater storage provides a large natural buffer against water shortage and sustains flows to rivers and wetlands, supporting ecosystem habitats and biodiversity. Yet, the current generation of global scale hydrological models (GHMs) do not include a groundwater flow component, although it is a crucial part of the hydrological cycle. Thus, a realistic physical representation of the groundwater system that allows for the simulation of groundwater head dynamics and lateral flows is essential for GHMs that increasingly run at finer resolution. In this study we present a global groundwater model with a resolution of 5 arc-minutes (approximately 10 km at the equator) using MODFLOW (McDonald and Harbaugh, 1988). With this global groundwater model we eventually intend to simulate the changes in the groundwater system over time that result from variations in recharge and abstraction. Aquifer schematization and properties of this groundwater model were developed from available global lithological maps and datasets (Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, 2013), combined with our estimate of aquifer thickness for sedimentary basins. We forced the groundwater model with the output from the global hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the net groundwater recharge and average surface water levels derived from routed channel discharge. For the parameterization, we relied entirely on available global datasets and did not calibrate the model so that it can equally be expanded to data poor environments. Based on our sensitivity analysis, in which we run the model with various hydrogeological parameter settings, we observed that most variance in groundwater

  6. Bayesian calibration of the Community Land Model using surrogates

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Swiler, Laura Painton

    2014-02-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
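    The surrogate-plus-MCMC pattern described here can be illustrated in miniature as below; the one-parameter 'model', polynomial degree, prior bounds, and noise level are all invented and merely mimic the structure of the CLM calibration.

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_model(theta):
    """Stand-in for an expensive model run returning, e.g., a flux statistic."""
    return 1.5 * theta + 0.3 * theta**2

# Polynomial surrogate fitted to a small design of expensive runs.
design = np.linspace(0.0, 2.0, 8)
surrogate = np.polynomial.Polynomial.fit(design, expensive_model(design), deg=2)

obs, sigma = 2.1, 0.1                              # observation and its noise

def log_post(theta):
    if not 0.0 <= theta <= 2.0:                    # uniform prior bounds
        return -np.inf
    return -0.5 * ((surrogate(theta) - obs) / sigma) ** 2

# Random-walk Metropolis on the cheap surrogate, not the expensive model.
chain, theta, lp = [], 1.0, log_post(1.0)
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
print(np.mean(chain[1000:]), np.std(chain[1000:]))  # posterior mean and spread
```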

  7. Calibration of hydrological models using flow-duration curves

    I. K. Westerberg

    2011-07-01

    The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) the influence of unknown input/output errors and (4) the inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested – based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments, with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage of the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of
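    A minimal sketch of the limits-of-acceptability test on an FDC follows; the ±20% bounds stand in for the paper's estimated discharge uncertainty, and the evaluation points are chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(4)

def fdc(flows, exceedance):
    """Flow values exceeded with the given probabilities (the FDC)."""
    return np.quantile(np.asarray(flows, float), 1.0 - np.asarray(exceedance))

eps = np.array([0.05, 0.25, 0.50, 0.75, 0.95])     # evaluation points
obs = rng.gamma(2.0, 5.0, 3650)                    # ten years of daily flows
lower, upper = 0.8 * fdc(obs, eps), 1.2 * fdc(obs, eps)

def behavioural(sim_flows):
    """Accept a simulation only if its FDC lies inside the limits at all EPs."""
    s = fdc(sim_flows, eps)
    return bool(np.all((s >= lower) & (s <= upper)))

print(behavioural(obs * 1.05), behavioural(obs * 2.0))   # True False
```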

  8. A system-theory-based model for monthly river runoff forecasting: model calibration and optimization

    Wu Jianhua

    2014-03-01

    River runoff is not only a crucial part of the global water cycle, but it is also an important source for hydropower and an essential element of water balance. This study presents a system-theory-based model for river runoff forecasting taking the Hailiutu River as a case study. The forecasting model, designed for the Hailiutu watershed, was calibrated and verified by long-term precipitation observation data and groundwater exploitation data from the study area. Additionally, frequency analysis, taken as an optimization technique, was applied to improve prediction accuracy. Following model optimization, the overall relative prediction errors are below 10%. The system-theory-based prediction model is applicable to river runoff forecasting, and following optimization by frequency analysis, the prediction error is acceptable.

  9. Calibration of Automatically Generated Items Using Bayesian Hierarchical Modeling.

    Johnson, Matthew S.; Sinharay, Sandip

    For complex educational assessments, there is an increasing use of "item families," which are groups of related items. However, calibration or scoring for such an assessment requires fitting models that take into account the dependence structure inherent among the items that belong to the same item family. C. Glas and W. van der Linden…

  10. LED-based Photometric Stereo: Modeling, Calibration and Numerical Solutions

    Quéau, Yvain; Durix, Bastien; Wu, Tao

    2018-01-01

    We conduct a thorough study of photometric stereo under nearby point light source illumination, from modeling to numerical solution, through calibration. In the classical formulation of photometric stereo, the luminous fluxes are assumed to be directional, which is very difficult to achieve in practice...

  11. Calibration of a Plastic Classification System with the Ccw Model

    Barcala Riveira, J. M.; Fernandez Marron, J. L.; Alberdi Primicia, J.; Navarrete Marin, J. J.; Oller Gonzalez, J. C.

    2003-01-01

    This document describes the calibration of a plastic classification system with the Ccw model (Classification by Quantum's built with Wavelet Coefficients). The method is applied to spectra of plastics usually present in domestic waste, and the results obtained are shown.

  12. Technical Note: Calibration and validation of geophysical observation models

    Salama, M.S.; van der Velde, R.; van der Woerd, H.J.; Kromkamp, J.C.; Philippart, C.J.M.; Joseph, A.T.; O'Neill, P.E.; Lang, R.H.; Gish, T.; Werdell, P.J.; Su, Z.

    2012-01-01

    We present a method to calibrate and validate observational models that interrelate remotely sensed energy fluxes to geophysical variables of land and water surfaces. Coincident sets of remote sensing observation of visible and microwave radiations and geophysical data are assembled and subdivided

  13. Evaluation of multivariate calibration models transferred between spectroscopic instruments

    Eskildsen, Carl Emil Aae; Hansen, Per W.; Skov, Thomas

    2016-01-01

    In a setting where multiple spectroscopic instruments are used for the same measurements it may be convenient to develop the calibration model on a single instrument and then transfer this model to the other instruments. In the ideal scenario, all instruments provide the same predictions for the same samples using the transferred model. However, sometimes the success of a model transfer is evaluated by comparing the transferred model predictions with the reference values. This is not optimal, as uncertainties in the reference method will impact the evaluation. This paper proposes a new method for calibration model transfer evaluation. The new method is based on comparing predictions from different instruments, rather than comparing predictions and reference values. A total of 75 flour samples were available for the study. All samples were measured on ten near infrared (NIR) instruments from two...
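    The paper's point, that instrument-to-instrument agreement isolates the transfer error while comparison against reference values is inflated by reference uncertainty, can be illustrated with simulated numbers (all values invented):

```python
import numpy as np

rng = np.random.default_rng(5)
truth = rng.normal(12.0, 1.0, 75)                  # 75 samples, unknown truth

pred_master = truth + rng.normal(0.0, 0.05, 75)    # master instrument
pred_transfer = truth + rng.normal(0.0, 0.05, 75) + 0.1  # transferred model, small bias
reference = truth + rng.normal(0.0, 0.30, 75)      # noisy reference method

rmsd_between_instruments = np.sqrt(np.mean((pred_master - pred_transfer) ** 2))
rmsep_vs_reference = np.sqrt(np.mean((pred_transfer - reference) ** 2))
# The instrument-to-instrument figure reflects the 0.1 transfer bias plus a
# little noise; the reference-based figure is dominated by reference error.
print(round(rmsd_between_instruments, 3), round(rmsep_vs_reference, 3))
```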

  14. Calibration and verification of numerical runoff and erosion model

    Gabrić Ognjen

    2015-01-01

    Based on field and laboratory measurements, and in step with the development of computational techniques, runoff and erosion models based on equations that describe the physics of the process have been developed. Building on the KINEROS2 model, this paper presents the basic modelling principles of runoff and erosion processes based on the St. Venant equations. Alternative equations for friction calculation, for the calculation of source and deposition elements, and for transport capacity are also shown. Numerical models based on the original and alternative equations are calibrated and verified on a laboratory-scale model. According to the results, friction calculation based on the analytic solution of laminar flow must be included in all runoff and erosion models.

  15. An Expectation-Maximization Method for Calibrating Synchronous Machine Models

    Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang

    2013-07-21

    The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method's performance is evaluated using a single-machine infinite bus system and compared with a method where both states and parameters are estimated using an EKF. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
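    A highly simplified, linear-scalar version of this alternation is sketched below: a plain Kalman filter plays the role of the EKF in the E-step, and a closed-form regression of the estimated states plays the role of the MLE M-step. The system, noise levels, and iteration counts are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
a_true, q, r, T = 0.9, 0.05, 0.2, 400
x = np.zeros(T)
for t in range(1, T):                              # hidden AR(1) dynamics
    x[t] = a_true * x[t - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), T)               # PMU-like noisy measurements

a = 0.5                                            # initial parameter guess
for _ in range(20):
    # E-step: Kalman filter state estimates under the current parameter a.
    m, P = 0.0, 1.0
    means = np.zeros(T)
    for t in range(T):
        if t > 0:
            m, P = a * m, a * a * P + q            # predict
        K = P / (P + r)                            # Kalman gain
        m, P = m + K * (y[t] - m), (1 - K) * P     # update
        means[t] = m
    # M-step: closed-form MLE of a from the estimated state sequence.
    a = np.sum(means[1:] * means[:-1]) / np.sum(means[:-1] ** 2)
print(a)                                           # should approach a_true
```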

  16. Calibration of a Chemistry Test Using the Rasch Model

    Nancy Coromoto Martín Guaregua

    2011-11-01

    The Rasch model was used to calibrate a general chemistry test for the purpose of analyzing the advantages and information the model provides. The sample was composed of 219 college freshmen. Of the 12 questions used, good fit was achieved in 10. The evaluation shows that although there are items of variable difficulty, there are gaps on the scale; in order to make the test complete, it will be necessary to design new items to fill in these gaps.
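    For context, the Rasch model gives the probability of a correct response as P(X=1) = exp(θ − b)/(1 + exp(θ − b)) for ability θ and item difficulty b. The sketch below simulates data at the study's sizes (219 examinees, 12 items) and recovers item difficulties with a crude joint estimation scheme; it is illustrative only, not the study's calibration code.

```python
import numpy as np

rng = np.random.default_rng(7)

def p_correct(theta, b):
    """Rasch probability that examinee i answers item j correctly."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

theta_true = rng.normal(0.0, 1.0, 219)
b_true = rng.normal(0.0, 1.0, 12)
X = (rng.random((219, 12)) < p_correct(theta_true, b_true)).astype(float)

# Crude ability estimates: logit of each examinee's proportion correct.
p = X.mean(axis=1).clip(0.05, 0.95)
theta = np.log(p / (1.0 - p))

# Newton steps on each item difficulty with abilities held fixed.
b = np.zeros(12)
for _ in range(50):
    P = p_correct(theta, b)
    b += (P - X).sum(axis=0) / (P * (1.0 - P)).sum(axis=0)

print(np.round(b - b.mean(), 2))        # recovered difficulties (centered)
print(np.round(b_true - b_true.mean(), 2))
```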

  17. Stochastic isotropic hyperelastic materials: constitutive calibration and model selection

    Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain

    2018-03-01

    Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.

  18. The Global Tsunami Model (GTM)

    Lorito, S.; Basili, R.; Harbitz, C. B.; Løvholt, F.; Polet, J.; Thio, H. K.

    2017-12-01

    The tsunamis that occurred worldwide in the last two decades have highlighted the need for a thorough understanding of the risk posed by relatively infrequent but often disastrous tsunamis and the importance of a comprehensive and consistent methodology for quantifying the hazard. In the last few years, several methods for probabilistic tsunami hazard analysis have been developed and applied to different parts of the world. In an effort to coordinate and streamline these activities and make progress towards implementing the Sendai Framework of Disaster Risk Reduction (SFDRR) we have initiated a Global Tsunami Model (GTM) working group with the aim of i) enhancing our understanding of tsunami hazard and risk on a global scale and developing standards and guidelines for it, ii) providing a portfolio of validated tools for probabilistic tsunami hazard and risk assessment at a range of scales, and iii) developing a global tsunami hazard reference model. This GTM initiative has grown out of the tsunami component of the Global Assessment of Risk (GAR15), which has resulted in an initial global model of probabilistic tsunami hazard and risk. Started as an informal gathering of scientists interested in advancing tsunami hazard analysis, the GTM is currently in the process of being formalized through letters of interest from participating institutions. The initiative has now been endorsed by the United Nations International Strategy for Disaster Reduction (UNISDR) and the World Bank's Global Facility for Disaster Reduction and Recovery (GFDRR). We will provide an update on the state of the project and the overall technical framework, and discuss the technical issues that are currently being addressed, including earthquake source recurrence models, the use of aleatory variability and epistemic uncertainty, and preliminary results for a probabilistic global hazard assessment, which is an update of the model included in UNISDR GAR15.

  19. A globally calibrated scheme for generating daily meteorology from monthly statistics: Global-WGEN (GWGEN) v1.0

    Sommer, Philipp S.; Kaplan, Jed O.

    2017-10-01

    While a wide range of Earth system processes occur at daily and even subdaily timescales, many global vegetation and other terrestrial dynamics models historically used monthly meteorological forcing both to reduce computational demand and because global datasets were lacking. Recently, dynamic land surface modeling has moved towards resolving daily and subdaily processes, and global datasets containing daily and subdaily meteorology have become available. These meteorological datasets, however, cover only the instrumental era of the last approximately 120 years at best, are subject to considerable uncertainty, and represent extremely large data files with associated computational costs of data input/output and file transfer. For periods before the recent past or in the future, global meteorological forcing can be provided by climate model output, but the quality of these data at high temporal resolution is low, particularly for daily precipitation frequency and amount. Here, we present GWGEN, a globally applicable statistical weather generator for the temporal downscaling of monthly climatology to daily meteorology. Our weather generator is parameterized using a global meteorological database and simulates daily values of five common variables: minimum and maximum temperature, precipitation, cloud cover, and wind speed. GWGEN is lightweight, modular, and requires a minimal set of monthly mean variables as input. The weather generator may be used in a range of applications, for example, in global vegetation, crop, soil erosion, or hydrological models. While GWGEN does not currently perform spatially autocorrelated multi-point downscaling of daily weather, this additional functionality could be implemented in future versions.
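    Although GWGEN's internals are not reproduced here, the classic WGEN-style core that such generators build on can be sketched as a first-order Markov chain for precipitation occurrence with gamma-distributed amounts (all parameter values below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(8)

def daily_precip(n_days, p_wd, p_ww, shape, scale):
    """One month of daily precipitation: first-order Markov occurrence
    (p_wd = P(wet|dry), p_ww = P(wet|wet)), gamma-distributed amounts."""
    out = np.zeros(n_days)
    wet = False
    for d in range(n_days):
        wet = rng.random() < (p_ww if wet else p_wd)
        if wet:
            out[d] = rng.gamma(shape, scale)
    return out

# Hypothetical monthly statistics translated into generator parameters.
month = daily_precip(30, p_wd=0.25, p_ww=0.55, shape=0.8, scale=11.0)
print(round(month.sum(), 1), "mm on", int((month > 0).sum()), "wet days")
```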

  20. More efficient evolutionary strategies for model calibration with watershed model for demonstration

    Baggett, J. S.; Skahill, B. E.

    2008-12-01

    Evolutionary strategies allow automatic calibration of more complex models than traditional gradient based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but when combined have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but when selected near a smooth local minimum can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMA-ES; Hansen, 2006), that their synergetic effect is greater than the sum of their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades to an ordinary evolutionary strategy, at worst, if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMA-ES requires approximately 10%-25% of the model runs of the ordinary CMA-ES. Preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems. Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.). Towards a new evolutionary computation. Advances in estimation of
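    The surrogate pre-screening idea can be shown with a deliberately simple (mu, lambda) evolution strategy rather than CMA-ES itself: a cheap nearest-neighbor surrogate ranks each generation and only the most promising half receives a full model run. Everything below (test function, strategy constants, surrogate) is a toy stand-in.

```python
import numpy as np

rng = np.random.default_rng(9)

def expensive(x):
    """Stand-in for a full watershed-model run."""
    return float(np.sum((x - 1.0) ** 2))

dim, lam, mu, sigma = 4, 16, 4, 0.5
mean = np.zeros(dim)
archive_X, archive_y, runs = [], [], 0

for gen in range(40):
    X = mean + sigma * rng.normal(size=(lam, dim))        # offspring
    if len(archive_y) >= 3 * dim:
        # Cheap surrogate: predicted fitness = fitness of nearest archived point.
        A, yA = np.array(archive_X), np.array(archive_y)
        d = np.linalg.norm(X[:, None, :] - A[None, :, :], axis=2)
        guess = yA[np.argmin(d, axis=1)]
        evaluate = np.argsort(guess)[: lam // 2]          # best half only
    else:
        evaluate = np.arange(lam)                         # warm-up: evaluate all
    y = np.full(lam, np.inf)
    for i in evaluate:
        y[i] = expensive(X[i])
        archive_X.append(X[i]); archive_y.append(y[i])
        runs += 1
    mean = X[np.argsort(y)[:mu]].mean(axis=0)             # recombination
    sigma *= 0.95                                         # crude step-size decay
print(runs, "model runs; final objective:", round(expensive(mean), 4))
```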

  1. Calibration of two complex ecosystem models with different likelihood functions

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments or no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems (this research uses a further developed version of Biome-BGC, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via a likelihood function (the degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the different model

  2. Calibrating corneal material model parameters using only inflation data: an ill-posed problem

    Kok, S

    2014-08-01

    One approach is to perform numerical modelling using the finite element method, for which a calibrated material model is required. These material models are typically calibrated using experimental inflation data by solving an inverse problem. In the inverse problem...

  3. Calibration process of highly parameterized semi-distributed hydrological model

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since every river basin has its own natural characteristics and every hydrological event within it is unique, modelling these processes is complex and still insufficiently researched. Calibration is the procedure of determining those model parameters that are not known well enough: the input and output variables and the mathematical model expressions are known, and the remaining unknown parameters are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that leave the modeller little possibility to manage the process, and the results are often not the best. We therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command line interface, and couple it with PEST, a parameter estimation tool that is widely used in groundwater modelling and can also be applied to surface waters. A calibration process managed directly by the expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than if the procedure had been left to the selected optimization algorithm alone. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial and forest areas; this step requires the geological, meteorological, hydraulic and hydrological knowledge of the modeller. The second step is to set initial parameter values to their preferred values based on expert knowledge; in this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observation group

  4. Caliver: An R package for CALIbration and VERification of forest fire gridded model outputs.

    Vitolo, Claudia; Di Giuseppe, Francesca; D'Andrea, Mirko

    2018-01-01

    The name caliver stands for CALIbration and VERification of forest fire gridded model outputs. It is a package developed for the R programming language and available under an Apache-2.0 license from a public repository. In this paper we describe the functionalities of the package and give examples using publicly available datasets. Fire danger model outputs are taken from the modeling components of the European Forest Fire Information System (EFFIS) and observed burned areas from the Global Fire Emissions Database (GFED). Complete documentation, including a vignette, is also available within the package.

  5. Bayesian model calibration of ramp compression experiments on Z

    Brown, Justin; Hund, Lauren

    2017-06-01

    Bayesian model calibration (BMC) is a statistical framework to estimate inputs for a computational model in the presence of multiple uncertainties, making it well suited to dynamic experiments which must be coupled with numerical simulations to interpret the results. Often, dynamic experiments are diagnosed using velocimetry, and this output can be modeled using a hydrocode. Several calibration issues unique to this type of scenario, including the functional nature of the output, the uncertainty of nuisance parameters within the simulation, and model discrepancy identifiability, are addressed, and a novel BMC process is proposed. As a proof of concept, we examine experiments conducted on Sandia National Laboratories' Z-machine which ramp compressed tantalum to peak stresses of 250 GPa. The proposed BMC framework is used to calibrate the cold curve of Ta (with uncertainty), and we conclude that the procedure results in simple, fast, and valid inferences. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
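
    A minimal sketch of the core BMC machinery, assuming a cheap stand-in simulator in place of the hydrocode: a Metropolis sampler draws from the posterior of a single calibration input given noisy velocimetry-like data. All names, priors and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(theta, t):
    return theta * np.sqrt(t)          # stand-in for the forward model

t = np.linspace(0.1, 1.0, 25)
theta_true, sigma = 2.0, 0.05
data = simulator(theta_true, t) + sigma * rng.normal(size=t.size)

def log_post(theta):
    if not (0.0 < theta < 10.0):       # uniform prior on (0, 10)
        return -np.inf
    r = data - simulator(theta, t)
    return -0.5 * np.sum((r / sigma) ** 2)

theta, lp = 1.0, log_post(1.0)
chain = []
for _ in range(5000):                  # random-walk Metropolis
    prop = theta + 0.1 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)

post = np.array(chain[1000:])          # drop burn-in
print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f}")
```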

  6. On Inertial Body Tracking in the Presence of Model Calibration Errors.

    Miezal, Markus; Taetz, Bertram; Bleser, Gabriele

    2016-07-22

    In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments (the IMU-to-segment calibrations, subsequently called I2S calibrations) to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors, with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and

  7. Calibrating Vadose Zone Models with Time-Lapse Gravity Data

    Christiansen, Lars; Hansen, A. B.; Looms, M. C.

    2009-01-01

    A change in soil water content is a change in mass stored in the subsurface. Given that the mass change is big enough, it can be measured with a gravity meter. Attempts have been made with varying success over the last decades to use ground-based time-lapse gravity measurements to infer hydrogeological parameters. These studies focused on the saturated zone, with specific yield as the most prominent target parameter; any change in storage in the vadose zone has been considered as noise. Our modeling results show a measurable change in gravity from the vadose zone during a forced infiltration experiment on 10 m by 10 m grassland. Simulation studies show a potential for vadose zone model calibration using gravity data in conjunction with other geophysical data, e.g. cross-borehole georadar. We present early field data and calibration results from a forced infiltration experiment conducted over 30...
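
    A back-of-the-envelope check of why such signals are measurable, assuming an infinite (Bouguer) slab of stored water; the moisture change and layer thickness below are illustrative values:

```python
import math

# dg = 2*pi*G*rho_w * d_theta * thickness for a horizontal slab.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
RHO_W = 1000.0         # density of water, kg m^-3

def slab_gravity_uGal(d_theta, thickness_m):
    """Gravity change (microGal) from a water content change d_theta
    (m^3/m^3) over a horizontal layer of given thickness."""
    dg = 2 * math.pi * G * RHO_W * d_theta * thickness_m   # m s^-2
    return dg / 1e-8                                       # 1 uGal = 1e-8 m/s^2

# A 0.10 increase in water content over the top 1 m of soil:
print(f"{slab_gravity_uGal(0.10, 1.0):.2f} uGal")  # ~4.2 uGal, within reach
```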

  8. A new sewage exfiltration model--parameters and calibration.

    Karpf, Christian; Krebs, Peter

    2011-01-01

    Exfiltration of waste water from sewer systems represents a potential danger for the soil and the aquifer. Common models used to describe the exfiltration process are based on the law of Darcy, extended by a more or less detailed consideration of the expansion of leaks, the characteristics of the soil and the colmation layer. However, due to the complexity of the exfiltration process, the calibration of these models involves significant uncertainty. In this paper, a new exfiltration approach is introduced, which implements the dynamics of the clogging process and the structural conditions near sewer leaks. The calibration is realised according to experimental studies and analysis of groundwater infiltration to sewers. Furthermore, exfiltration rates and the sensitivity of the approach are estimated and evaluated, respectively, by Monte-Carlo simulations.
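
    A minimal Monte-Carlo sketch of a Darcy-type leak estimate with an uncertain colmation-layer conductivity, in the spirit of the uncertainty analysis described; geometry, heads and parameter distributions are illustrative assumptions, and the paper's approach additionally models clogging dynamics:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

leak_area = 0.001                     # m^2, leak cross-section
layer_thickness = 0.05                # m, colmation (clogging) layer
head = rng.uniform(0.1, 0.5, n)       # m, water level above the leak
# Hydraulic conductivity of the clogged zone: lognormal around 1e-6 m/s.
K = rng.lognormal(mean=np.log(1e-6), sigma=1.0, size=n)

# Darcy: Q = K * A * i, with gradient i = (head + thickness) / thickness.
i = (head + layer_thickness) / layer_thickness
Q = K * leak_area * i                 # m^3/s per leak

print(f"median {np.median(Q)*1e6:.3f} mL/s, "
      f"95th percentile {np.percentile(Q, 95)*1e6:.3f} mL/s")
```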

  9. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

    Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or only narrow overlapping FOVs in many applications, which poses a huge challenge to global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras’ coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
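
    The pose-chaining step at the heart of such a method can be sketched as follows, assuming 4x4 homogeneous transforms estimated per camera (e.g. from PnP solutions) and a photogrammetrically measured target-to-target transform; the numeric poses are made up for the demo, and the paper further refines the result with Levenberg-Marquardt:

```python
import numpy as np

# T_a_b denotes a 4x4 homogeneous transform taking coordinates
# expressed in frame b into frame a.

def make_T(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Extrinsics from per-camera target observations:
T_cam1_t1 = make_T(rot_z(0.2), [0.1, 0.0, 1.5])   # target 1 in camera 1
T_cam2_t2 = make_T(rot_z(-0.1), [0.0, 0.2, 1.2])  # target 2 in camera 2
# Invariant constraint measured once by photogrammetry:
T_t1_t2 = make_T(rot_z(0.05), [2.0, 0.0, 0.0])    # target 2 in target 1

# Chain the transforms to express camera 2 in camera 1 coordinates:
T_cam1_cam2 = T_cam1_t1 @ T_t1_t2 @ np.linalg.inv(T_cam2_t2)
print(np.round(T_cam1_cam2, 3))
```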

  10. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have about 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error, which can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to assess this error. We find we can decrease the global temperature trend by about 0.07 K/decade. In addition, there are systematic time-dependent errors present in the data that are introduced by the drift in the satellite orbital geometry; this error arises from the diurnal cycle in temperature and from the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. In one path the

  11. Spatial and Temporal Self-Calibration of a Hydroeconomic Model

    Howitt, R. E.; Hansen, K. M.

    2008-12-01

    Hydroeconomic modeling of water systems where risk and reliability of water supply are of critical importance must address explicitly how to model water supply uncertainty. When large fluctuations in annual precipitation and significant variation in flows by location are present, a model which solves with perfect foresight of future water conditions may be inappropriate for some policy and research questions. We construct a simulation-optimization model with limited foresight of future water conditions using positive mathematical programming and self-calibration techniques. This limited foresight netflow (LFN) model signals the value of storing water for future use and reflects a more accurate economic value of water at key locations, given that future water conditions are unknown. Failure to explicitly model this uncertainty could lead to undervaluation of storage infrastructure and contractual mechanisms for managing water supply risk. A model based on sequentially updated information is more realistic, since water managers make annual storage decisions without knowledge of yet to be realized future water conditions. The LFN model runs historical hydrological conditions through the current configuration of the California water system to determine the economically efficient allocation of water under current economic conditions and infrastructure. The model utilizes current urban and agricultural demands, storage and conveyance infrastructure, and the state's hydrological history to indicate the scarcity value of water at key locations within the state. Further, the temporal calibration penalty functions vary by year type, reflecting agricultural water users' ability to alter cropping patterns in response to water conditions. The model employs techniques from positive mathematical programming (Howitt, 1995; Howitt, 1998; Cai and Wang, 2006) to generate penalty functions that are applied to deviations from observed data. The functions are applied to monthly flows

  12. Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

    Soltanzadeh, I. [Tehran Univ. (Iran, Islamic Republic of). Inst. of Geophysics; Azadi, M.; Vakili, G.A. [Atmospheric Science and Meteorological Research Center (ASMERC), Teheran (Iran, Islamic Republic of)

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used with five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast. (orig.)
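
    A minimal EM sketch of Gaussian BMA for an ensemble of point forecasts, in the spirit of Raftery-style BMA; the synthetic training data, the assumption of already bias-corrected members and a common variance across members are simplifications for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
T, K = 200, 3                                    # training days, members
truth = 15 + 5 * rng.normal(size=T)
F = truth[:, None] + rng.normal(0, [1.0, 2.0, 3.0], size=(T, K))
y = truth + 0.5 * rng.normal(size=T)             # verifying observations

w = np.full(K, 1.0 / K)                          # BMA weights
var = 1.0                                        # common forecast variance

def normpdf(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

for _ in range(200):                             # EM iterations
    z = w * normpdf(y[:, None], F, var)          # E-step responsibilities
    z /= z.sum(axis=1, keepdims=True)
    w = z.mean(axis=0)                           # M-step: weights
    var = np.sum(z * (y[:, None] - F) ** 2) / T  # M-step: variance

print("weights:", np.round(w, 3), "sigma:", round(np.sqrt(var), 2))
# The BMA predictive mean for a new day is then np.dot(w, f_new).
```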

  13. Using Bayesian Model Averaging (BMA to calibrate probabilistic surface temperature forecasts over Iran

    I. Soltanzadeh

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used with five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.

  14. Global scale groundwater flow model

    Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc

    2013-04-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet, the current generation of global scale hydrological models does not include a groundwater flow component, which is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minutes resolution. Aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g. Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, in press). We force the groundwater model with the output from the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated calculated groundwater heads and depths with available head observations from different regions, including North and South America and Western Europe. Our results show that it is feasible to build a relatively simple global scale groundwater model using existing information, and to estimate water table depths within acceptable accuracy in many parts of the world.
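
    The kind of balance such a model solves can be illustrated with a tiny steady-state finite-difference solver for T*laplacian(h) + R = 0 with fixed-head boundaries; the grid size, transmissivity and recharge below are illustrative values, not the 5 arc-minute global setup:

```python
import numpy as np

n = 50                      # cells per side
dx = 1000.0                 # m
T = 500.0                   # transmissivity, m^2/day
R = 0.001                   # recharge, m/day

h = np.zeros((n, n))        # initial guess; boundaries stay at 0 m (rivers)

# Jacobi iteration: interior head is the neighbour average plus a
# recharge source term R*dx^2/(4T).
for it in range(20_000):
    h_new = h.copy()
    h_new[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] +
                                h[1:-1, :-2] + h[1:-1, 2:]) + R * dx**2 / (4 * T)
    if np.max(np.abs(h_new - h)) < 1e-6:
        break
    h = h_new

print(f"converged after {it} iterations, max head {h.max():.1f} m")
```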

  15. Investigation of the transferability of hydrological models and a method to improve model calibration

    G. Hartmann

    2005-01-01

    In order to find a model parameterization such that the hydrological model performs well even under different conditions, appropriate model performance measures have to be determined. A common performance measure is the Nash-Sutcliffe efficiency, usually calculated by comparing observed and modelled daily values. In this paper a modified version is suggested in order to calibrate a model on different time scales simultaneously (days up to years). A spatially distributed hydrological model based on the HBV concept was used. The modelling was applied on the Upper Neckar catchment, a mesoscale river in southwestern Germany with a basin size of about 4000 km2. The observation period 1961-1990 was divided into four different climatic periods, referred to as "warm", "cold", "wet" and "dry". These sub-periods were used to assess the transferability of the model calibration and of the measure of performance. In a first step, the hydrological model was calibrated on a certain period and afterwards applied on the same period. Then, a validation was performed on the climatologically opposite period to the calibration one, e.g. the model calibrated on the cold period was applied on the warm period. Optimal parameter sets were identified by an automatic calibration procedure based on Simulated Annealing. The results show that calibrating a hydrological model that is supposed to handle short- as well as long-term signals becomes an important task. Especially the objective function has to be chosen very carefully.
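
    A sketch of a multi-time-scale Nash-Sutcliffe measure in this spirit: NSE computed on the daily series and on aggregated block means, then averaged. The block lengths and equal weighting are our simplifying assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def nse(obs, sim):
    # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def aggregate(x, block):
    m = (len(x) // block) * block               # drop the ragged tail
    return x[:m].reshape(-1, block).mean(axis=1)

def multiscale_nse(obs, sim, blocks=(1, 30, 365)):
    return np.mean([nse(aggregate(obs, b), aggregate(sim, b)) for b in blocks])

rng = np.random.default_rng(7)
obs = 10 + 3 * np.sin(np.arange(3650) * 2 * np.pi / 365) + rng.normal(0, 1, 3650)
sim = obs + rng.normal(0, 0.8, 3650) + 0.3      # noisy, slightly biased model
print(f"daily NSE {nse(obs, sim):.3f}, "
      f"multi-scale NSE {multiscale_nse(obs, sim):.3f}")
```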

  16. Model- and calibration-independent test of cosmic acceleration

    Seikel, Marina; Schwarz, Dominik J.

    2009-01-01

    We present a calibration-independent test of the accelerated expansion of the universe using supernova type Ia data. The test is also model-independent in the sense that no assumptions about the content of the universe or about the parameterization of the deceleration parameter are made, and that it does not assume any dynamical equations of motion. Yet, the test assumes the universe and the distribution of supernovae to be statistically homogeneous and isotropic. A significant reduction of systematic effects, as compared to our previous, calibration-dependent test, is achieved. Accelerated expansion is detected at a significant level (4.3σ in the 2007 Gold sample, 7.2σ in the 2008 Union sample) if the universe is spatially flat. This result depends, however, crucially on supernovae with a redshift smaller than 0.1, for which the assumption of statistical isotropy and homogeneity is less well established

  17. Technical note: Bayesian calibration of dynamic ruminant nutrition models.

    Reed, K F; Arhonditsis, G B; France, J; Kebreab, E

    2016-08-01

    Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  18. CALIBRATION OF DISTRIBUTED SHALLOW LANDSLIDE MODELS IN FORESTED LANDSCAPES

    Gian Battista Bischetti

    2010-09-01

    In mountainous, forested, soil-mantled landscapes all around the world, rainfall-induced shallow landslides are one of the most common hydro-geomorphic hazards, which frequently impact the environment and human lives and properties. In order to produce shallow landslide susceptibility maps, several models have been proposed in the last decade, combining simplified steady-state topography-based hydrological models with the infinite slope scheme in a GIS framework. In the present paper, two of the still open issues are investigated: the assessment of the validity of slope stability models and the inclusion of root cohesion values. In such a perspective, the "Stability INdex MAPping" (SINMAP) model has been applied to a small forested pre-Alpine catchment, adopting different calibrating approaches and target indexes. The Single and the Multiple Calibration Regions modalities and three quantitative target indexes – the common Success Rate (SR), the Modified Success Rate (MSR), and a Weighted Modified Success Rate (WMSR) herein introduced – are considered. The results obtained show that the target index can significantly affect the values of a model's parameters and lead to different proportions of stable/unstable areas, both for the Single and the Multiple Calibration Regions approach. The use of SR as the target index leads to an over-prediction of the unstable areas, whereas the use of MSR and WMSR seems to allow a better discrimination between stable and unstable areas. The Multiple Calibration Regions approach should be preferred, using information on the spatial distribution of vegetation to define the Regions. The use of field-based estimation of root cohesion and sliding depth allows the implementation of slope stability models (SINMAP in our case) also without the data needed for calibration. To maximize the inclusion of such parameters into SINMAP, however, the assumption of a uniform distribution of
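
    The stability kernel of SINMAP-type models, the infinite-slope factor of safety with an added root cohesion term, can be sketched as follows; the parameter values are illustrative, whereas in the paper root cohesion and sliding depth come from field-based estimates:

```python
import numpy as np

def factor_of_safety(slope_deg, z, m, c_root, c_soil=2e3, phi_deg=33.0,
                     gamma_s=18e3, gamma_w=9.81e3):
    """slope_deg: slope angle; z: soil depth (m); m: wetness
    (saturated fraction of the soil column, 0-1); cohesions in Pa,
    unit weights in N/m^3. Assumes slope-parallel seepage."""
    beta = np.radians(slope_deg)
    phi = np.radians(phi_deg)
    cohesion = c_root + c_soil
    normal = (gamma_s - m * gamma_w) * z * np.cos(beta) ** 2
    driving = gamma_s * z * np.sin(beta) * np.cos(beta)
    return (cohesion + normal * np.tan(phi)) / driving

# Effect of forest roots on a steep, wet hillslope:
for c_root in (0.0, 5e3):   # Pa: bare soil vs. rooted soil
    fs = factor_of_safety(slope_deg=38, z=1.0, m=0.9, c_root=c_root)
    print(f"root cohesion {c_root/1e3:.0f} kPa -> FS = {fs:.2f}")
```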

  19. Michelson Interferometer for Global High-Resolution Thermospheric Imaging (MIGHTI): Instrument Design and Calibration

    Englert, Christoph R.; Harlander, John M.; Brown, Charles M.; Marr, Kenneth D.; Miller, Ian J.; Stump, J. Eloise; Hancock, Jed; Peterson, James Q.; Kumler, Jay; Morrow, William H.; Mooney, Thomas A.; Ellis, Scott; Mende, Stephen B.; Harris, Stewart E.; Stevens, Michael H.; Makela, Jonathan J.; Harding, Brian J.; Immel, Thomas J.

    2017-10-01

    The Michelson Interferometer for Global High-resolution Thermospheric Imaging (MIGHTI) instrument was built for launch and operation on the NASA Ionospheric Connection Explorer (ICON) mission. The instrument was designed to measure thermospheric horizontal wind velocity profiles and thermospheric temperature in altitude regions between 90 km and 300 km, during day and night. For the wind measurements it uses two perpendicular fields of view pointed at the Earth's limb, observing the Doppler shift of the atomic oxygen red and green lines at 630.0 nm and 557.7 nm wavelength. The wavelength shift is measured using field-widened, temperature compensated Doppler Asymmetric Spatial Heterodyne (DASH) spectrometers, employing low order échelle gratings operating at two different orders for the different atmospheric lines. The temperature measurement is accomplished by a multichannel photometric measurement of the spectral shape of the molecular oxygen A-band around 762 nm wavelength. For each field of view, the signals of the two oxygen lines and the A-band are detected on different regions of a single, cooled, frame transfer charge coupled device (CCD) detector. On-board calibration sources are used to periodically quantify thermal drifts, simultaneously with observing the atmosphere. The MIGHTI requirements, the resulting instrument design and the calibration are described.

  20. Optical modeling and polarization calibration for CMB measurements with ACTPol and Advanced ACTPol

    Koopman, Brian; Austermann, Jason; Cho, Hsiao-Mei; Coughlin, Kevin P.; Duff, Shannon M.; Gallardo, Patricio A.; Hasselfield, Matthew; Henderson, Shawn W.; Ho, Shuay-Pwu Patty; Hubmayr, Johannes; Irwin, Kent D.; Li, Dale; McMahon, Jeff; Nati, Federico; Niemack, Michael D.; Newburgh, Laura; Page, Lyman A.; Salatino, Maria; Schillaci, Alessandro; Schmitt, Benjamin L.; Simon, Sara M.; Vavagiakis, Eve M.; Ward, Jonathan T.; Wollack, Edward J.

    2016-07-01

    The Atacama Cosmology Telescope Polarimeter (ACTPol) is a polarization-sensitive upgrade to the Atacama Cosmology Telescope, located at an elevation of 5190 m on Cerro Toco in Chile. ACTPol uses transition edge sensor bolometers coupled to orthomode transducers to measure both the temperature and polarization of the Cosmic Microwave Background (CMB). Calibration of the detector angles is a critical step in producing polarization maps of the CMB. Polarization angle offsets in the detector calibration can cause leakage in polarization from E to B modes and induce a spurious signal in the EB and TB cross-correlations, which eliminates our ability to measure potential cosmological sources of EB and TB signals, such as cosmic birefringence. We calibrate the ACTPol detector angles by ray tracing the designed detector angle through the entire optical chain to determine the projection of each detector angle on the sky. The distribution of calibrated detector polarization angles is consistent with a global offset angle of zero when compared to the EB-nulling offset angle, the angle required to null the EB cross-correlation power spectrum. We present the optical modeling process. The detector angles can be cross-checked through observations of known polarized sources, whether this be a galactic source or a laboratory reference standard. To cross-check the ACTPol detector angles, we use a thin film polarization grid placed in front of the receiver of the telescope, between the receiver and the secondary reflector. Making use of a rapidly rotating half-wave plate (HWP) mount, we spin the polarizing grid at a constant speed, polarizing and rotating the incoming atmospheric signal. The resulting sinusoidal signal is used to determine the detector angles. The optical modeling calibration was shown to be consistent with a global offset angle of zero when compared to EB nulling in the first ACTPol results and will continue to be a part of our calibration implementation. The first

  1. A Linear Viscoelastic Model Calibration of Sylgard 184.

    Long, Kevin Nicholas; Brown, Judith Alice

    2017-04-01

    We calibrate a linear thermoviscoelastic model for solid Sylgard 184 (90-10 formulation), a lightly cross-linked, highly flexible isotropic elastomer, for use both in Sierra/Solid Mechanics via the Universal Polymer Model and in Sierra/Structural Dynamics (Salinas) as an isotropic viscoelastic material. Material inputs for the calibration in both codes are provided. The frequency-domain master curve of oscillatory shear was obtained from a report from Los Alamos National Laboratory (LANL). However, because the form of those data is different from the constitutive models in Sierra, we also present the mapping of the LANL data onto Sandia's constitutive models. Finally, blind predictions of cyclic tension and compression out to moderate strains of 40% and 20%, respectively, are compared with Sandia's legacy cure schedule material. Although the strain rate of the data is unknown, the linear thermoviscoelastic model accurately predicts the experiments out to moderate strains for the slower strain rates, which is consistent with the expectation that quasistatic test procedures were likely followed. This good agreement comes despite the different cure schedules between the Sandia and LANL data.
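
    For readers unfamiliar with the model class, the quantities involved in such a calibration can be sketched with a Prony-series shear relaxation modulus and the corresponding storage and loss moduli fitted to a frequency-domain master curve; the coefficients below are made-up placeholders, not the Sylgard 184 fit:

```python
import numpy as np

G_inf = 0.3e6                              # long-term modulus, Pa
G_i = np.array([0.5e6, 0.3e6, 0.2e6])      # Prony coefficients, Pa
tau_i = np.array([1e-3, 1e-1, 1e1])        # relaxation times, s

def relaxation_modulus(t):
    # G(t) = G_inf + sum_i G_i * exp(-t / tau_i)
    return G_inf + np.sum(G_i * np.exp(-t[:, None] / tau_i), axis=1)

def storage_loss(omega):
    # G'(w) = G_inf + sum_i G_i (w tau)^2 / (1 + (w tau)^2)
    # G''(w) = sum_i G_i (w tau) / (1 + (w tau)^2)
    wt2 = (omega[:, None] * tau_i) ** 2
    G_storage = G_inf + np.sum(G_i * wt2 / (1 + wt2), axis=1)
    G_loss = np.sum(G_i * np.sqrt(wt2) / (1 + wt2), axis=1)
    return G_storage, G_loss

t = np.logspace(-4, 2, 5)
omega = np.logspace(-2, 4, 5)
print("G(t)  :", np.round(relaxation_modulus(t) / 1e6, 3), "MPa")
Gp, Gpp = storage_loss(omega)
print("G'(w) :", np.round(Gp / 1e6, 3), "MPa")
print("G''(w):", np.round(Gpp / 1e6, 3), "MPa")
```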

  2. Evaluation of an ASM1 Model Calibration Procedure on a Municipal-Industrial Wastewater Treatment Plant

    Petersen, Britta; Gernaey, Krist; Henze, Mogens

    2002-01-01

    The purpose of the calibrated model determines how to approach a model calibration, e.g. which information is needed and to which level of detail the model should be calibrated. A systematic model calibration procedure was therefore defined and evaluated for a municipal–industrial wastewater treatment plant. In the case that was studied it was important to have a detailed description of the process dynamics, since the model was to be used as the basis for optimisation scenarios in a later phase. Therefore, a complete model calibration procedure was applied including: (1) a description...

  3. Dynamic calibration of agent-based models using data assimilation.

    Ward, Jonathan A; Evans, Andrew J; Malleson, Nicolas S

    2016-04-01

    A widespread approach to investigating the dynamical behaviour of complex social systems is via agent-based models (ABMs). In this paper, we describe how such models can be dynamically calibrated using the ensemble Kalman filter (EnKF), a standard method of data assimilation. Our goal is twofold. First, we want to present the EnKF in a simple setting for the benefit of ABM practitioners who are unfamiliar with it. Second, we want to illustrate to data assimilation experts the value of using such methods in the context of ABMs of complex social systems and the new challenges these types of model present. We work towards these goals within the context of a simple question of practical value: how many people are there in Leeds (or any other major city) right now? We build a hierarchy of exemplar models that we use to demonstrate how to apply the EnKF and calibrate these using open data of footfall counts in Leeds.
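
    A minimal stochastic EnKF update of the kind described, assuming a small state vector and a linear observation operator as toy stand-ins for an ABM state (e.g. counts of people per zone); all dimensions and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)
n_state, n_obs, n_ens = 4, 2, 50

H = np.array([[1., 0., 0., 0.],       # we only observe the first two
              [0., 1., 0., 0.]])      # state components (footfall counts)
R = 0.5 * np.eye(n_obs)               # observation error covariance

truth = np.array([10., 20., 5., 8.])
y = H @ truth + rng.multivariate_normal(np.zeros(n_obs), R)

# Forecast ensemble: scattered around a biased prior guess.
X = truth[:, None] + 3.0 + 2.0 * rng.normal(size=(n_state, n_ens))

# Ensemble covariance and Kalman gain.
A = X - X.mean(axis=1, keepdims=True)
P = A @ A.T / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Update each member against a perturbed observation.
Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
X_a = X + K @ (Y_pert - H @ X)

print("prior mean   :", np.round(X.mean(axis=1), 2))
print("analysis mean:", np.round(X_a.mean(axis=1), 2))
print("truth        :", truth)
```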

  4. Calibration and validation of a general infiltration model

    Mishra, Surendra Kumar; Ranjan Kumar, Shashi; Singh, Vijay P.

    1999-08-01

    A general infiltration model proposed by Singh and Yu (1990) was calibrated and validated using a split sampling approach for 191 sets of infiltration data observed in the states of Minnesota and Georgia in the USA. Of the five model parameters, fc (the final infiltration rate), So (the available storage space) and exponent n were found to be more predictable than the other two parameters: m (exponent) and a (proportionality factor). A critical examination of the general model revealed that it is related to the Soil Conservation Service (1956) curve number (SCS-CN) method, and that its parameter So is equivalent to the potential maximum retention of the SCS-CN method and is, in turn, found to be a function of soil sorptivity and hydraulic conductivity. The general model was found to describe the infiltration rate with a time-varying curve number.
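
    The noted link to the SCS-CN method can be made concrete: the potential maximum retention S (mm) follows from the curve number CN, and the direct runoff Q from the rainfall P. A quick illustrative computation:

```python
def scs_runoff_mm(P, CN, ia_ratio=0.2):
    """Direct runoff Q (mm) for storm rainfall P (mm) and curve number CN."""
    S = 25400.0 / CN - 254.0        # potential maximum retention, mm
    Ia = ia_ratio * S               # initial abstraction
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

for CN in (60, 75, 90):
    print(f"CN={CN}: Q = {scs_runoff_mm(50.0, CN):.1f} mm from a 50 mm storm")
```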

  5. Calibration of the simulation model of the VINCY cyclotron magnet

    Ćirković Saša

    2002-01-01

    The MERMAID program will be used to isochronise the nominal magnetic field of the VINCY Cyclotron. This program simulates the response, i.e. calculates the magnetic field, of a previously defined model of a magnet. The accuracy of the 3D field calculation depends on the density of the grid points in the simulation model grid. The size of the VINCY Cyclotron and the maximum number of grid points in the XY plane permitted by MERMAID define the maximum obtainable accuracy of the field calculations. Comparisons of the field simulated with maximum obtainable accuracy with the magnetic field measured in the first phase of the VINCY Cyclotron magnetic field measurements campaign have shown that the difference between these two fields is not as small as required. A further decrease of the difference between these fields is obtained by the simulation model calibration, i.e. by adjusting the current through the main coils in the simulation model.

  6. Calibration of a Distributed Hydrological Model using Remote Sensing Evapotranspiration data in the Semi-Arid Punjab Region of Pakista

    Becker, R.; Usman, M.

    2017-12-01

    A SWAT (Soil and Water Assessment Tool) model is applied in the semi-arid Punjab region in Pakistan. The physically based hydrological model is set up to simulate hydrological processes and water resources demands under future land use, climate change and irrigation management scenarios. In order to successfully run the model, detailed focus is laid on the calibration procedure. The study deals with the following calibration issues: (i) the lack of reliable calibration/validation data, (ii) the difficulty of accurately modelling a highly managed system with a physically based hydrological model, and (iii) the use of alternative and spatially distributed data sets for model calibration. In our study area field observations are rare, and the entirely human-controlled irrigation system renders central calibration parameters (e.g. the runoff curve number) unsuitable, as it cannot be assumed that they represent the natural behavior of the hydrological system. From evapotranspiration (ET), however, principal hydrological processes can still be inferred. Usman et al. (2015) derived satellite-based monthly ET data for our study area based on SEBAL (Surface Energy Balance Algorithm) and created a reliable ET data set, which we use in this study to calibrate our SWAT model. The initial SWAT model performance is evaluated with respect to the SEBAL results using correlation coefficients, RMSE, Nash-Sutcliffe efficiencies and mean differences. Particular focus is laid on the spatial patterns, investigating the potential of a spatially differentiated parameterization instead of just using spatially uniform calibration data. A sensitivity analysis reveals the most sensitive parameters with respect to changes in ET, which are then selected for the calibration process. Using the SEBAL ET product, we calibrate the SWAT model for the time period 2005-2006 using a dynamically dimensioned global search algorithm to minimize the RMSE. The model improvement after the calibration procedure is finally evaluated based

  7. Recent Improvements to the Calibration Models for RXTE/PCA

    Jahoda, K.

    2008-01-01

    We are updating the calibration of the PCA to correct for slow variations, primarily in the energy-to-channel relationship. We have also improved the physical model in the vicinity of the Xe K-edge, which should increase the reliability of continuum fits above 20 keV. The improvements to the response matrix are especially important for simultaneous observations, where the PCA is often used to constrain the continuum while other, higher-resolution spectrometers are used to study the shape of lines and edges associated with iron.

  8. An experimental test of CSR theory using a globally calibrated ordination method.

    Li, Yuanzhi; Shipley, Bill

    2017-01-01

    Can CSR theory, in conjunction with a recently proposed globally calibrated CSR ordination ("StrateFy"), using only three easily measured leaf traits (leaf area, specific leaf area and leaf dry matter content), predict the functional signature of herbaceous vegetation along experimentally manipulated gradients of soil fertility and disturbance? To determine this, we grew 37 herbaceous species in mixture for five years in 24 experimental mesocosms differing in factorial levels of soil resources (stress) and density-independent mortality (disturbance). We measured 16 different functional traits and then ordinated the resulting vegetation within the CSR triangle using StrateFy. We then calculated community-weighted mean (CWM) values of the competitor (CCWM), stress-tolerator (SCWM) and ruderal (RCWM) scores for each mesocosm. We found a significant increase in SCWM from low- to high-stress mesocosms, and an increase in RCWM from weakly to highly disturbed mesocosms. However, CCWM did not decline significantly as the intensity of stress or disturbance increased, as predicted by CSR theory. This last result likely arose because our herbaceous species were relatively poor competitors in global comparisons, and thus no strong competitors in our species pool were selectively favoured in the low-stress and low-disturbance mesocosms. Variation in the 13 other traits, not used by StrateFy, largely agreed with the predictions of CSR theory. StrateFy worked surprisingly well in our experimental study except for the C-dimension. Despite the loss of some precision, it has great potential applicability in future studies due to its simplicity and generality.

  9. Comparison of different multi-objective calibration criteria using a conceptual rainfall-runoff model of flood events

    R. Moussa

    2009-04-01

    A conceptual lumped rainfall-runoff flood event model was developed and applied to the Gardon catchment located in Southern France, and various single-objective and multi-objective functions were used for its calibration. The model was calibrated on 15 events and validated on 14 others. The results of both the calibration and validation phases are compared on the basis of their performance with regard to six criteria: three global criteria and three relative criteria representing volume, peakflow, and the root mean square error. The first type of criteria gives more weight to large events whereas the second considers all events to be of equal weight. The results show that the calibrated parameter values are dependent on the type of criteria used. Significant trade-offs are observed between the different objectives: no unique set of parameters is able to satisfy all objectives simultaneously. Instead, the solution to the calibration problem is given by a set of Pareto optimal solutions. From this set of optimal solutions, a balanced aggregated objective function is proposed as a compromise between up to three objective functions. The single-objective and multi-objective calibration strategies are compared both in terms of parameter variation bounds and simulation quality. The results of this study indicate that two well-chosen and non-redundant objective functions are sufficient to calibrate the model and that the use of three objective functions does not necessarily yield different results. The problems of non-uniqueness in model calibration, and the choice of adequate objective functions for flood event models, emphasise the importance of the modeller's intervention. The recent advances in automatic optimisation techniques do not minimise the user's responsibility, who has to choose multiple criteria based on the aims of the study, their appreciation of the errors induced by data and model structure and their knowledge of the
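
    A sketch of the two ingredients mentioned, extracting the Pareto-optimal parameter sets from multi-objective calibration trials and picking a balanced compromise; the random scores stand in for the volume, peakflow and RMSE criteria of actual model runs:

```python
import numpy as np

def pareto_front(F):
    """Boolean mask of non-dominated rows of objective matrix F (minimise)."""
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # Row i is dominated if some row is <= everywhere and < somewhere.
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        keep[i] = not dominated.any()
    return keep

rng = np.random.default_rng(5)
F = rng.uniform(size=(200, 3))            # 200 trials x 3 objectives
mask = pareto_front(F)
print(f"{mask.sum()} Pareto-optimal sets out of {len(F)}")

# One balanced compromise: the Pareto member minimising an equal-weight
# aggregated objective.
agg = F.mean(axis=1)
best = np.argmin(np.where(mask, agg, np.inf))
print("compromise objectives:", np.round(F[best], 3))
```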

  10. GEM - The Global Earthquake Model

    Smolka, A.

    2009-04-01

    Over 500,000 people died in the last decade due to earthquakes and tsunamis, mostly in the developing world, where the risk is increasing due to rapid population growth. In many seismic regions, no hazard and risk models exist, and even where models do exist, they are intelligible only to experts, or available only for commercial purposes. The Global Earthquake Model (GEM) answers the need for an openly accessible risk management tool. GEM is an internationally sanctioned public-private partnership initiated by the Organisation for Economic Co-operation and Development (OECD) which will establish an authoritative standard for calculating and communicating earthquake hazard and risk, and will be designed to serve as the critical instrument to support decisions and actions that reduce earthquake losses worldwide. GEM will integrate developments on the forefront of scientific and engineering knowledge of earthquakes, at global, regional and local scale. The work is organized in three modules: hazard, risk, and socio-economic impact. The hazard module calculates probabilities of earthquake occurrence and resulting shaking at any given location. The risk module calculates fatalities, injuries, and damage based on expected shaking, building vulnerability, and the distribution of population and of exposed values and facilities. The socio-economic impact module delivers tools for making educated decisions to mitigate and manage risk. GEM will be a versatile online tool, with open source code and a map-based graphical interface. The underlying data will be open wherever possible, and its modular input and output will be adapted to multiple user groups: scientists and engineers, risk managers and decision makers in the public and private sectors, and the public at large. GEM will be the first global model for seismic risk assessment at a national and regional scale, and aims to achieve broad scientific participation and independence. Its development will occur in a

  11. Model validation and calibration based on component functions of model output

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target of this work is to validate the component functions of model output between physical observation and the computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of model output, and the conditional expectations reflect partial information of the model output. Therefore, the model validation of conditional expectations quantifies the discrepancy between the partial information of the computational model output and that of the observations. Then a calibration of the conditional expectations is carried out to reduce the value of the model validation metric. After that, a recalculation of the model validation metric of the model output is performed with the calibrated model parameters, and the result shows that a reduction of the discrepancy in the conditional expectations can help decrease the difference in model output. Finally, several examples are employed to demonstrate the rationality and necessity of the methodology in the case of both a single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration are applied at a single site and at multiple sites. • Validation and calibration show superiority over existing methods
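
    The area metric itself is simple to sketch: it is the area between the empirical CDFs of model outputs and observations, which for equal sample sizes reduces to the mean absolute difference of the sorted samples (the 1-Wasserstein distance). The data below are illustrative:

```python
import numpy as np

def area_metric(model_samples, obs_samples):
    m = np.sort(np.asarray(model_samples))
    o = np.sort(np.asarray(obs_samples))
    assert m.size == o.size, "this shortcut assumes equal sample sizes"
    return np.mean(np.abs(m - o))

rng = np.random.default_rng(2)
obs = rng.normal(10.0, 1.0, 500)           # "physical observations"
model_before = rng.normal(11.0, 1.5, 500)  # biased, over-dispersed model
model_after = rng.normal(10.1, 1.1, 500)   # model after calibration

print(f"area metric before calibration: {area_metric(model_before, obs):.3f}")
print(f"area metric after calibration : {area_metric(model_after, obs):.3f}")
```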

  12. Geomechanical Simulation of Bayou Choctaw Strategic Petroleum Reserve - Model Calibration.

    Park, Byoung [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    A finite element numerical analysis model has been constructed that consists of a realistic mesh capturing the geometries of the Bayou Choctaw (BC) Strategic Petroleum Reserve (SPR) site and the multi-mechanism deformation (M-D) salt constitutive model, using the daily data of actual wellhead pressure and oil-brine interface. The salt creep rate is not uniform in the salt dome, and the creep test data for BC salt is limited. Therefore, model calibration is necessary to simulate the geomechanical behavior of the salt dome. The cavern volumetric closures of SPR caverns calculated from CAVEMAN are used for the field baseline measurement. The structure factor, A2, and transient strain limit factor, K0, in the M-D constitutive model are used for the calibration. The A2 value obtained experimentally from the BC salt and the K0 value of Waste Isolation Pilot Plant (WIPP) salt are used for the baseline values. To adjust the magnitude of A2 and K0, multiplication factors A2F and K0F are defined, respectively. The A2F and K0F values of the salt dome and the salt drawdown skins surrounding each SPR cavern have been determined through a number of back-fitting analyses. The cavern volumetric closures calculated from this model correspond to the predictions from CAVEMAN for six SPR caverns. Therefore, this model is able to predict past and future geomechanical behaviors of the salt dome, caverns, caprock, and interbed layers. The geological concerns raised at the BC site will be explained from this model in a follow-up report.

  13. Selection, calibration, and validation of models of tumor growth.

    Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C

    2016-11-01

    This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Then representative classes are identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine if the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory

  14. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Wentworth, Mami Tonoe

    techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform a similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ energy statistics tests [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model, but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input-to-output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide

  15. Differential Evolution algorithm applied to FSW model calibration

    Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.

    2014-03-01

    Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
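
    A compact DE/rand/1/bin sketch of the kind of search used; the quadratic test objective stands in for a CFD run, and the bounds, population size, F and CR are typical textbook settings, not the values studied in the paper:

```python
import numpy as np

def differential_evolution(f, bounds, n_pop=20, F=0.8, CR=0.9, n_gen=100,
                           seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    pop = lo + (hi - lo) * rng.uniform(size=(n_pop, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(n_pop):
            a, b, c = rng.choice([j for j in range(n_pop) if j != i],
                                 size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            cross = rng.uniform(size=dim) < CR
            cross[rng.integers(dim)] = True      # at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

# Toy calibration target: recover hypothetical (heat_eff, htc) = (0.7, 30.0).
obj = lambda x: (x[0] - 0.7) ** 2 + ((x[1] - 30.0) / 10.0) ** 2
x, fx = differential_evolution(obj, np.array([[0.0, 1.0], [1.0, 100.0]]))
print("calibrated parameters:", np.round(x, 3), "objective:", f"{fx:.2e}")
```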

  16. A critical comparison of systematic calibration protocols for activated sludge models: a SWOT analysis.

    Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A

    2005-07-01

    Modelling activated sludge systems has gained an increasing momentum after the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models for full-scale systems requires essentially a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far which were mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach in performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modeling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be further developed in view of further improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.

  17. A calibration hierarchy for risk models was defined: from utopia to empirical data.

    Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W

    2016-06-01

    Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, as it stimulates the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
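
    Weak calibration is commonly assessed by fitting the logistic recalibration model logit P(y=1) = a + b*logit(p_pred): an intercept a near 0 and a slope b near 1 indicate mean and weak calibration, while b < 1 flags an overfitted, too-extreme model. A self-contained sketch with simulated data and a plain Newton-Raphson (IRLS) fit:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 5000
true_lp = rng.normal(-1.0, 1.5, n)            # true linear predictor
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-true_lp))).astype(float)
pred = 1 / (1 + np.exp(-1.6 * true_lp))       # too-extreme predictions

X = np.column_stack([np.ones(n), np.log(pred / (1 - pred))])
beta = np.zeros(2)
for _ in range(25):                           # Newton-Raphson iterations
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    grad = X.T @ (y - p)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

print(f"calibration intercept a = {beta[0]:.3f} (ideal 0)")
print(f"calibration slope     b = {beta[1]:.3f} (ideal 1; here < 1)")
```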

  18. Calibrating emergent phenomena in stock markets with agent based models.

    Fievet, Lucas; Sornette, Didier

    2018-01-01

    Since the 2008 financial crisis, agent-based models (ABMs), which account for out-of-equilibrium dynamics, heterogeneous preferences, time horizons and strategies, have often been envisioned as the new frontier that could revolutionise and displace the more standard models and tools in economics. However, their adoption and generalisation is drastically hindered by the absence of general reliable operational calibration methods. Here, we start with a different calibration angle that qualifies an ABM for its ability to achieve abnormal trading performance with respect to the buy-and-hold strategy when fed with real financial data. Starting from the common definition of standard minority and majority agents with binary strategies, we prove their equivalence to optimal decision trees. This efficient representation allows us to exhaustively test all meaningful single-agent models for their potential anomalous investment performance, which we apply to the NASDAQ Composite index over the last 20 years. We uncover large significant predictive power, with anomalous Sharpe ratio and directional accuracy, in particular during the dotcom bubble and crash and the 2008 financial crisis. A principal component analysis reveals transient convergence between the anomalous minority and majority models. A novel combination of the optimal single-agent models of both classes into a two-agent model leads to remarkably superior investment performance, especially during the periods of bubbles and crashes. Our design opens the field of ABMs to the construction of novel types of advance warning systems for market crises, based on the emergent collective intelligence of ABMs built on carefully designed optimal decision trees that can be reverse engineered from real financial data.

  20. Geomechanical Model Calibration Using Field Measurements for a Petroleum Reserve

    Park, Byoung Yoon; Sobolik, Steven R.; Herrick, Courtney G.

    2018-03-01

    A finite element numerical analysis model has been constructed that consists of a mesh that effectively captures the geometry of the Bayou Choctaw (BC) Strategic Petroleum Reserve (SPR) site and a multimechanism deformation (M-D) salt constitutive model, driven by daily data of actual wellhead pressure and oil-brine interface location. The salt creep rate is not uniform in the salt dome, and the creep test data for BC salt are limited; model calibration is therefore necessary to simulate the geomechanical behavior of the salt dome. The cavern volumetric closures of SPR caverns calculated from CAVEMAN are used as the field baseline measurement. The structure factor, A2, and the transient strain limit factor, K0, in the M-D constitutive model are used for the calibration. The value of A2, obtained experimentally from BC salt, and the value of K0, obtained from Waste Isolation Pilot Plant salt, are used as the baseline values. To adjust the magnitudes of A2 and K0, multiplication factors A2F and K0F, respectively, are defined. The A2F and K0F values of the salt dome and of the salt drawdown skins surrounding each SPR cavern have been determined through a number of back analyses. The cavern volumetric closures calculated from this model correspond to the predictions from CAVEMAN for six SPR caverns. This model is therefore able to predict the behavior of the salt dome, caverns, caprock, and interbed layers. The geotechnical concerns associated with the BC site arising from this analysis will be explained in a follow-up paper.

  1. Secondary clarifier hybrid model calibration in full scale pulp and paper activated sludge wastewater treatment

    Sreckovic, G.; Hall, E.R. [British Columbia Univ., Dept. of Civil Engineering, Vancouver, BC (Canada); Thibault, J. [Laval Univ., Dept. of Chemical Engineering, Ste-Foy, PQ (Canada); Savic, D. [Exeter Univ., School of Engineering, Exeter (United Kingdom)

    1999-05-01

    The issue of proper model calibration techniques applied to mechanistic mathematical models of activated sludge systems was discussed. Such calibrations are complex because of the non-linearity and multi-modal objective functions of the process. This paper presents a hybrid model which was developed using two techniques to model and calibrate the secondary clarifier part of an activated sludge system. Genetic algorithms were used to successfully calibrate the settler mechanistic model, and neural networks were used to reduce the error between the mechanistic model output and real-world data. Results of the modelling study show that the long-term response of a one-dimensional settler mechanistic model calibrated by genetic algorithms and compared to full-scale plant data can be improved by coupling the calibrated mechanistic model to a black-box model, such as a neural network. 11 refs., 2 figs.
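
The evolutionary calibration step can be sketched as follows, with SciPy's differential evolution standing in for the genetic algorithm actually used in the study, and a Vesilind settling-velocity law with invented data as the settler sub-model.

```python
# Sketch of evolutionary calibration of a settler sub-model. The Vesilind
# settling law and all data are illustrative assumptions.
import numpy as np
from scipy.optimize import differential_evolution

X = np.array([1.0, 2.0, 4.0, 6.0, 8.0])        # sludge concentration (g/L)
v_obs = np.array([6.1, 3.9, 1.7, 0.8, 0.35])   # settling velocity (m/h)

def sse(params):
    v0, k = params                              # Vesilind: v = v0*exp(-k*X)
    return np.sum((v0 * np.exp(-k * X) - v_obs) ** 2)

result = differential_evolution(sse, bounds=[(0.1, 20.0), (0.01, 2.0)],
                                seed=0, tol=1e-8)
v0_hat, k_hat = result.x
print(f"v0 ~ {v0_hat:.2f} m/h, k ~ {k_hat:.3f} L/g")
```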

  2. A joint calibration model for combining predictive distributions

    Patrizia Agati

    2013-05-01

    In many research fields, as for example in probabilistic weather forecasting, valuable predictive information about a future random phenomenon may come from several, possibly heterogeneous, sources. Forecast combining methods have been developed over the years in order to deal with ensembles of sources: the aim is to combine several predictions in such a way as to improve forecast accuracy and reduce the risk of bad forecasts. In this context, we propose the use of a Bayesian approach to information combining, which consists in treating the predictive probability density functions (pdfs) from the individual ensemble members as data in a Bayesian updating problem. The likelihood function is shown to be proportional to the product of the pdfs, adjusted by a joint “calibration function” describing the predicting skill of the sources (Morris, 1977). In this paper, after rephrasing Morris’ algorithm in a predictive context, we propose to model the calibration function in terms of bias, scale and correlation and to estimate its parameters according to the least squares criterion. The performance of our method is investigated and compared with that of Bayesian Model Averaging (Raftery, 2005) on simulated data.
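
A minimal sketch of the product-of-pdfs combination, assuming two Gaussian predictive densities with known bias and scale adjustments, and omitting the correlation term the authors also model:

```python
# Combine two recalibrated Gaussian predictive pdfs by multiplication.
# Parameter values are illustrative only.
import numpy as np

# Source pdfs N(mu_i, sigma_i^2), with estimated bias b_i and scale s_i
# (the "calibration function" parameters) from past performance.
sources = [dict(mu=20.0, sigma=3.0, b=1.5, s=1.2),
           dict(mu=24.0, sigma=2.0, b=-0.5, s=0.9)]

# Under a flat prior the posterior is proportional to the product of the
# adjusted Gaussian pdfs, which is again Gaussian with a
# precision-weighted mean.
precisions, weighted = [], []
for src in sources:
    mu_adj = src["mu"] - src["b"]          # remove estimated bias
    sigma_adj = src["sigma"] * src["s"]    # rescale spread
    tau = 1.0 / sigma_adj ** 2
    precisions.append(tau)
    weighted.append(tau * mu_adj)

mu_comb = sum(weighted) / sum(precisions)
sigma_comb = np.sqrt(1.0 / sum(precisions))
print(f"combined forecast: N({mu_comb:.2f}, {sigma_comb:.2f}^2)")
```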

  3. A Solvatochromic Model Calibrates Nitriles’ Vibrational Frequencies to Electrostatic Fields

    Bagchi, Sayan; Fried, Stephen D.; Boxer, Steven G.

    2012-01-01

    Electrostatic interactions provide a primary connection between a protein’s three-dimensional structure and its function. Infrared (IR) probes are useful because vibrational frequencies of certain chemical groups, such as nitriles, are linearly sensitive to local electrostatic field, and can serve as a molecular electric field meter. IR spectroscopy has been used to study electrostatic changes or fluctuations in proteins, but measured peak frequencies have not been previously mapped to total electric fields, because of the absence of a field-frequency calibration and the complication of local chemical effects such as H-bonds. We report a solvatochromic model that provides a means to assess the H-bonding status of aromatic nitrile vibrational probes, and calibrates their vibrational frequencies to electrostatic field. The analysis involves correlations between the nitrile’s IR frequency and its ¹³C chemical shift, whose observation is facilitated by a robust method for introducing isotopes into aromatic nitriles. The method is tested on the model protein Ribonuclease S (RNase S) containing a labeled p-CN-Phe near the active site. Comparison of the measurements in RNase S against solvatochromic data gives an estimate of the average total electrostatic field at this location. The value determined agrees quantitatively with MD simulations, suggesting broader potential for the use of IR probes in the study of protein electrostatics. PMID:22694663

  4. Calibration Modeling Methodology to Optimize Performance for Low Range Applications

    McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.

    2010-01-01

    Calibration is a vital process in characterizing the performance of an instrument in an application environment and seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty, expressed as a percent of full-scale. However, in some applications enhanced performance is sought at the low end of the range, and expressing the accuracy as a percent of reading should then be considered as a modeling strategy. For example, it is common to want to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent-of-reading accuracy requirement, which has broad applicability to transducer applications where low-range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System, which employs seven strain-gage based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.
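
One pragmatic way to encode a percent-of-reading requirement in an ordinary polynomial calibration fit is to weight residuals by the reciprocal of the reading, as sketched below on synthetic data; this illustrates the general idea, not the specific methodology of the paper.

```python
# Weighting the calibration fit by 1/reading so that relative
# (percent-of-reading) error is controlled at the low end of the range.
import numpy as np

x = np.linspace(1.0, 100.0, 25)                     # applied load
rng = np.random.default_rng(3)
y = 0.02 + 0.98 * x + rng.normal(0, 0.02 * x)       # indicated output

# np.polyfit minimizes sum((w_i * (y_i - p(x_i)))**2), so w = 1/y turns
# the objective into a relative-error criterion.
coef_abs = np.polyfit(x, y, 1)                      # percent-of-full-scale view
coef_rel = np.polyfit(x, y, 1, w=1.0 / y)           # percent-of-reading view

for name, c in [("unweighted", coef_abs), ("1/y weighted", coef_rel)]:
    resid = y - np.polyval(c, x)
    print(name, "max |error|/reading:", np.max(np.abs(resid) / y))
```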

  5. Preliminary report on NTS spectral gamma logging and calibration models

    Mathews, M.A.; Warren, R.G.; Garcia, S.R.; Lavelle, M.J.

    1985-01-01

    Facilities are now available at the Nevada Test Site (NTS) in Building 2201 to calibrate spectral gamma logging equipment in environments of low radioactivity. Such environments are routinely encountered during logging of holes at the NTS. Four calibration models were delivered to Building 2201 in January 1985. Each model, or test pit, consists of a stone block with a 12-inch diameter cored borehole. Preliminary radioelement values from the core for the test pits range from 0.58 to 3.83% potassium (K), 0.48 to 29.11 ppm thorium (Th), and 0.62 to 40.42 ppm uranium (U). Two satellite holes, U19ab #2 and U19ab #3, were logged during the winter of 1984-1985. The response of these logs correlates with the contents of the naturally radioactive elements K, Th, and U determined in samples from petrologic zones that occur within these holes. Based on these comparisons, the spectral gamma log aids in the recognition and mapping of subsurface stratigraphic units and alteration features associated with unusual concentrations of these radioactive elements, such as clay-rich zones.

  6. Takaful Models and Global Practices

    Akhter, Waheed

    2010-01-01

    There is a global interest in Islamic finance in general and Takāful in particular. The main feature that differentiates Takāful services from conventional ones is the Sharī'ah-compliant nature of these services. Investors are taking a keen interest in this potential market, as Muslims constitute about one fourth of the world population (Muslim population, 2006). To streamline the operations of a Takāful company, management and Sharī'ah experts have developed different operational models for Takāful bu...

  7. Modeling Prairie Pothole Lakes: Linking Satellite Observation and Calibration (Invited)

    Schwartz, F. W.; Liu, G.; Zhang, B.; Yu, Z.

    2009-12-01

    This paper examines the response of a complex lake wetland system to variations in climate. The focus is on the lakes and wetlands of the Missouri Coteau, which is part of the larger Prairie Pothole Region of the Central Plains of North America. Information on lake size was enumerated from satellite images, and yielded power law relationships for different hydrological conditions. More traditional lake-stage data were made available to us from the USGS Cottonwood Lake Study Site in North Dakota. A Probabilistic Hydrologic Model (PHM) was developed to simulate lake complexes comprising tens of thousands or more individual closed-basin lakes and wetlands. What is new about this model is a calibration scheme that utilizes remotely sensed data on lake area as well as stage data for individual lakes. Some ¼ million individual data points are used within a Genetic Algorithm to calibrate the model by comparing the simulated results with observed lake area-frequency power law relationships derived from Landsat images and water depths from seven individual lakes and wetlands. The simulated lake behaviors show good agreement with the observations under average, dry, and wet climatic conditions. The calibrated model is used to examine the impact of climate variability on a large lake complex in ND, in particular during the “Dust Bowl Drought” of the 1930s. This most famous drought of the 20th Century devastated the agricultural economy of the Great Plains, with health and social impacts lingering for years afterwards. Interestingly, the drought of the 1930s is unremarkable in relation to others of greater intensity and frequency before AD 1200 in the Great Plains. Major droughts and deluges have the ability to create marked variability of the power law function (e.g. up to one and a half orders of magnitude variability from the extreme Dust Bowl Drought to the extreme 1993-2001 deluge). This new probabilistic modeling approach provides a novel tool to examine the response of the…
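
The lake area-frequency power law mentioned above can be illustrated in a few lines of code: the sketch fits an exponent to synthetic, Pareto-distributed lake areas via the empirical exceedance curve on log-log axes.

```python
# Fit a power-law exponent to synthetic lake areas. Values are made up
# for demonstration; they are not the Missouri Coteau data.
import numpy as np

rng = np.random.default_rng(7)
alpha_true = 1.6
# Pareto-distributed lake areas (ha) with minimum area 0.5 ha
areas = (rng.pareto(alpha_true, 20000) + 1.0) * 0.5

areas_sorted = np.sort(areas)[::-1]
n_exceed = np.arange(1, areas_sorted.size + 1)      # N(area >= A)

# Straight-line fit in log-log space: log N = log c - alpha * log A
slope, intercept = np.polyfit(np.log(areas_sorted), np.log(n_exceed), 1)
print("estimated power-law exponent:", round(-slope, 2))
```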

  8. Non-linear calibration models for near infrared spectroscopy

    Ni, Wangdong; Nørgaard, Lars; Mørup, Morten

    2014-01-01

    This study compares several non-linear calibration methods: least squares support vector machines (LS-SVM), relevance vector machines (RVM), Gaussian process regression (GPR), artificial neural network (ANN), and Bayesian ANN (BANN). In this comparison, partial least squares (PLS) regression is used as a linear benchmark, while the relationship of the methods is considered in terms of traditional calibration by ridge regression (RR). The performance of the different methods is demonstrated by their practical applications using three real-life near infrared (NIR) data sets. Different aspects of the various approaches, including computational time, model interpretability, potential over-fitting using the non-linear models on linear problems, robustness to small or medium sample sets, and robustness to pre-processing, are discussed. The results suggest that GPR and BANN are powerful and promising methods for handling linear as well as nonlinear systems, even when the data sets are moderately small.
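
A compact illustration of such a comparison, with PLS as the linear benchmark against Gaussian process regression on a synthetic, mildly nonlinear, spectra-like data set (not the real NIR data used in the study):

```python
# Compare PLS and GPR on synthetic "spectral" data; all data and model
# settings are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                       # 50 "wavelengths"
y = X[:, :5].sum(axis=1) + 0.5 * X[:, 0] ** 2 + rng.normal(0, 0.1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(),
                               normalize_y=True).fit(X_tr, y_tr)

for name, model in [("PLS", pls), ("GPR", gpr)]:
    pred = np.ravel(model.predict(X_te))
    rmse = np.sqrt(np.mean((pred - y_te) ** 2))
    print(f"{name} RMSEP: {rmse:.3f}")
```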

  9. A New Perspective for the Calibration of Computational Predictor Models.

    Crespo, Luis Guillermo

    2014-11-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).

  10. Modified calibration protocol evaluated in a model-based testing of SBR flexibility

    Corominas, Lluís; Sin, Gürkan; Puig, Sebastià

    2011-01-01

    The purpose of this paper is to refine the BIOMATH calibration protocol for SBR systems, in particular to develop a pragmatic calibration protocol that takes advantage of SBR information-rich data, defines a simulation strategy to obtain proper initial conditions for model calibration and provide...

  11. Enhanced Single Seed Trait Predictions in Soybean (Glycine max) and Robust Calibration Model Transfer with Near-Infrared Reflectance Spectroscopy.

    Hacisalihoglu, Gokhan; Gustin, Jeffery L; Louisma, Jean; Armstrong, Paul; Peter, Gary F; Walker, Alejandro R; Settles, A Mark

    2016-02-10

    Single seed near-infrared reflectance (NIR) spectroscopy predicts soybean (Glycine max) seed quality traits of moisture, oil, and protein. We tested the accuracy of transferring calibrations between different single seed NIR analyzers of the same design by collecting NIR spectra and analytical trait data for globally diverse soybean germplasm. X-ray microcomputed tomography (μCT) was used to collect seed density and shape traits to enhance the number of soybean traits that can be predicted from single seed NIR. Partial least-squares (PLS) regression gave accurate predictive models for oil, weight, volume, protein, and maximal cross-sectional area of the seed. PLS models for width, length, and density were not predictive. Although principal component analysis (PCA) of the NIR spectra showed that black seed coat color had significant signal, excluding black seeds from the calibrations did not impact model accuracies. Calibrations for oil and protein developed in this study as well as earlier calibrations for a separate NIR analyzer of the same design were used to test the ability to transfer PLS regressions between platforms. PLS models built from data collected on one NIR analyzer had minimal differences in accuracy when applied to spectra collected from a sister device. Model transfer was more robust when spectra were trimmed from 910 to 1679 nm to 955-1635 nm due to divergence of edge wavelengths between the two devices. The ability to transfer calibrations between similar single seed NIR spectrometers facilitates broader adoption of this high-throughput, nondestructive, seed phenotyping technology.

  12. Root zone water quality model (RZWQM2): Model use, calibration and validation

    Ma, Liwang; Ahuja, Lajpat; Nolan, B.T.; Malone, Robert; Trout, Thomas; Qi, Z.

    2012-01-01

    The Root Zone Water Quality Model (RZWQM2) has been used widely for simulating agricultural management effects on crop production and soil and water quality. Although it is a one-dimensional model, it has many desirable features for the modeling community. This article outlines the principles of calibrating the model component by component with one or more datasets and validating the model with independent datasets. Users should consult the RZWQM2 user manual distributed along with the model and a more detailed protocol on how to calibrate RZWQM2 provided in a book chapter. Two case studies (or examples) are included in this article. One is from an irrigated maize study in Colorado to illustrate the use of field and laboratory measured soil hydraulic properties on simulated soil water and crop production. It also demonstrates the interaction between soil and plant parameters in simulated plant responses to water stresses. The other is from a maize-soybean rotation study in Iowa to show a manual calibration of the model for crop yield, soil water, and N leaching in tile-drained soils. Although the commonly used trial-and-error calibration method works well for experienced users, as shown in the second example, an automated calibration procedure is more objective, as shown in the first example. Furthermore, the incorporation of the Parameter Estimation Software (PEST) into RZWQM2 made the calibration of the model more efficient than a grid (ordered) search of model parameters. In addition, PEST provides sensitivity and uncertainty analyses that should help users in selecting the right parameters to calibrate.

  13. Simultaneous calibration of surface flow and baseflow simulations: A revisit of the SWAT model calibration framework

    Accurate analysis of water flow pathways from rainfall to streams is critical for simulating water use, climate change impact, and contaminant transport. In this study, we developed a new scheme to simultaneously calibrate surface flow (SF) and baseflow (BF) simulations of Soil and Water Assessment ...

  14. Modeling of global biomass policies

    Gielen, Dolf; Fujino, Junichi; Hashimoto, Seiji; Moriguchi, Yuichi

    2003-01-01

    This paper discusses the BEAP model and its use for the analysis of biomass policies for CO₂ emission reduction. The model considers competing land use, trade and leakage effects, and competing emission reduction strategies. Two policy scenarios are presented. In case of a 2040 time horizon the results suggest that a combination of afforestation and limited use of biomass for energy and materials constitutes the most attractive set of strategies. In case of a 'continued Kyoto' scenario including afforestation permit trade, the results suggest 5.1 Gt emission reduction based on land use change in 2020, two thirds of the total emission reduction by then. In case of global emission reduction, land use, land use change and forestry (LULUCF) accounts for one quarter of the emission reduction. However these results depend on the modeling time horizon. In case of a broader time horizon, maximized biomass production is more attractive than LULUCF. This result can be interpreted as a warning against a market based trading scheme for LULUCF credits. The model results suggest that the bioenergy market is dominated by transportation fuels and heating, and to a lesser extent feedstocks. Bioelectricity does not gain a significant market share in case competing CO₂-free electricity options such as CO₂ capture and sequestration and nuclear are considered. To some extent trade in agricultural food products such as beef and cereals will be affected by CO₂ policies

  16. Calibration of a DG–model for fluorescence microscopy

    Hansen, Christian Valdemar

    It is well known that diseases like Alzheimer's, Parkinson's, Huntington's chorea and arteriosclerosis are caused by a jam in intracellular membrane traffic [2]. Hence, to improve treatment, a quantitative analysis of intracellular transport is essential. Fluorescence loss in photobleaching (FLIP) is an important and widely used microscopy method for visualization of molecular transport processes in living cells. Thus, the motivation for making an automated, reliable analysis of the image data is high. In this contribution, we present and comment on the calibration of a Discontinuous Galerkin simulator [3, 4] on segmented cell images. The cell geometry is extracted from FLIP images using the Chan–Vese active contours algorithm [1], while the DG simulator is implemented in FEniCS [5]. Simulated FLIP sequences based on optimal parameters from the PDE model are presented, with an overall goal…

  17. Calibration by Hydrological Response Unit of a National Hydrologic Model to Improve Spatial Representation and Distribution of Parameters

    Norton, P. A., II

    2015-12-01

    The U. S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds, and is used for the NHM application. For PRMS each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets, such as streamflow, snow water equivalent (SWE) and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g. the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (i.e. MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.

  18. The design and realization of calibration apparatus for measuring the concentration of radon in three models

    Huiping, Guo [The Second Artillery Engineering College, Xi' an (China)

    2007-06-15

    To satisfy calibration requirements for radon measurement in the laboratory, a calibration apparatus for radon activity measurement was designed and realized. The apparatus can auto-control and auto-measure in three modes: sequential mode, pulse mode and constant mode. The stability and reliability of the calibration apparatus were tested under the three modes. The experimental results show that the apparatus can provide an adjustable and steady radon activity concentration environment for research on radon and its progeny and for the calibration of radon measurements. (authors)

  19. Hydrological processes and model representation: impact of soft data on calibration

    J.G. Arnold; M.A. Youssef; H. Yen; M.J. White; A.Y. Sheshukov; A.M. Sadeghi; D.N. Moriasi; J.L. Steiner; Devendra Amatya; R.W. Skaggs; E.B. Haney; J. Jeong; M. Arabi; P.H. Gowda

    2015-01-01

    Hydrologic and water quality models are increasingly used to determine the environmental impacts of climate variability and land management. Due to differing model objectives and differences in monitored data, there are currently no universally accepted procedures for model calibration and validation in the literature. In an effort to develop accepted model calibration...

  20. Calibrated and Interactive Modelling of Form-Active Hybrid Structures

    Quinn, Gregory; Holden Deleuran, Anders; Piker, Daniel

    2016-01-01

    Form-active hybrid structures (FAHS) couple two or more different structural elements of low self-weight and low or negligible bending stiffness (such as slender beams, cables and membranes) into one structural assembly of high global stiffness. They offer high load-bearing capacity … software packages which introduce interruptions and data exchange issues in the modelling pipeline. The mechanical precision, stability and open software architecture of Kangaroo has facilitated the development of proof-of-concept modelling pipelines which tackle this challenge and enable powerful, materially-informed sketching. Making use of a projection-based dynamic relaxation solver for structural analysis, explorative design has proven to be highly effective.

  1. NSLS-II: Nonlinear Model Calibration for Synchrotrons

    Bengtsson, J.

    2010-10-08

    This tech note is essentially a summary of a lecture we delivered to the Acc. Phys. Journal Club in April 2010. However, since the estimated accuracy of these methods has been naive and misleading in the field of particle accelerators, i.e., it ignores the impact of noise, we will elaborate on this in some detail. A prerequisite for a calibration of the nonlinear Hamiltonian is that the quadratic part has been understood, i.e., that the linear optics for the real accelerator has been calibrated. For synchrotron light source operations, this problem has been solved by the interactive LOCO technique/tool (Linear Optics from Closed Orbits). Before that, in the context of hadron accelerators, it was done by signal processing of turn-by-turn BPM data. We have outlined how to make a basic calibration of the nonlinear model for synchrotrons. In particular, we have shown how this was done for LEAR, CERN (antiprotons) in the mid-80s. Specifically, our accuracy for frequency estimation was ∼1 × 10⁻⁵ for 1024 turns (to calibrate the linear optics) and ∼1 × 10⁻⁴ for 256 turns for the tune footprint and betatron spectrum. For comparison, the estimated tune footprint for stable beam for NSLS-II is ∼0.1, and the transverse damping time is ∼20 msec, i.e., ∼4,000 turns. There is no fundamental difference between antiprotons, protons, and electrons in this case. Because the estimated accuracy for these methods in the field of particle accelerators has been naive, i.e., ignoring the impact of noise, we have also derived an explicit formula, from first principles, for a quantitative statement. For example, for N = 256 and 5% noise we obtain δν ∼ 1 × 10⁻⁵. A comparison with the state of the art in e.g. telecom and electrical engineering since the 60s is quite revealing: for example, the Kalman filter (1960), crucial for the Ranger, Mariner, and Apollo (including the Lunar Module) missions during the 60s, or Claude Shannon et al. since the 40s, for that matter.

  3. Calibration of a distributed hydrology and land surface model using energy flux measurements

    Larsen, Morten Andreas Dahl; Refsgaard, Jens Christian; Jensen, Karsten H.

    2016-01-01

    In this study we develop and test a calibration approach on a spatially distributed groundwater-surface water catchment model (MIKE SHE) coupled to a land surface model component, with particular focus on the water and energy fluxes. The model is calibrated against time series of eddy flux measurements…

  4. Modelling and calibration of a ring-shaped electrostatic meter

    Zhang Jianyong [University of Teesside, Middlesbrough TS1 3BA (United Kingdom); Zhou Bin; Xu Chuanlong; Wang Shimin, E-mail: zhoubinde1980@gmail.co [Southeast University, Sipailou 2, Nanjing 210096 (China)

    2009-02-01

    Ring-shaped electrostatic flow meters can provide very useful information on pneumatically transported air-solids mixtures. Meters of this type are popular for measuring and controlling the pulverized coal flow distribution among conveyors leading to burners in coal-fired power stations, and they have also been used for research purposes, e.g. for the investigation of the electrification mechanism of air-solids two-phase flow. In this paper, the finite element method (FEM) is employed to analyze the characteristics of ring-shaped electrostatic meters, and a mathematical model has been developed to express the relationship between the meter's voltage output and the motion of charged particles in the sensing volume. The theoretical analysis and the test results using a belt rig demonstrate that the output of the meter depends upon many parameters, including the characteristics of the conditioning circuitry, the particle velocity vector, the amount and rate of change of the charge carried by particles, the locations of the particles, etc. This paper also introduces a method to optimize the theoretical model via calibration.

  5. Hanford statewide groundwater flow and transport model calibration report

    Law, A.; Panday, S.; Denslow, C.; Fecht, K.; Knepp, A.

    1996-04-01

    This report presents the results of the development and calibration of a three-dimensional, finite element model (VAM3DCG) for the unconfined groundwater flow system at the Hanford Site. This flow system is the largest radioactively contaminated groundwater system in the United States. Eleven groundwater plumes have been identified, containing organics, inorganics, and radionuclides. Because groundwater from the unconfined groundwater system flows into the Columbia River, the development of a groundwater flow model is essential to the long-term management of these plumes. Cost-effective decision making requires the capability to predict the effectiveness of various remediation approaches. Some of the alternatives available to remediate groundwater include: pumping contaminated water from the ground for treatment, with reinjection or transfer to other disposal facilities; containment of plumes by means of impermeable walls, physical barriers, and hydraulic control measures; and, in some cases, management of groundwater via planned recharge and withdrawals. Implementation of these methods requires a knowledge of the groundwater flow system and how it responds to remedial actions.

  6. Streamflow characteristics from modelled runoff time series: Importance of calibration criteria selection

    Poole, Sandra; Vis, Marc; Knight, Rodney; Seibert, Jan

    2017-01-01

    Ecologically relevant streamflow characteristics (SFCs) of ungauged catchments are often estimated from simulated runoff of hydrologic models that were originally calibrated on gauged catchments. However, SFC estimates of the gauged donor catchments and subsequently the ungauged catchments can be substantially uncertain when models are calibrated using traditional approaches based on optimization of statistical performance metrics (e.g., Nash–Sutcliffe model efficiency). An improved calibration strategy for gauged catchments is therefore crucial to help reduce the uncertainties of estimated SFCs for ungauged catchments. The aim of this study was to improve SFC estimates from modeled runoff time series in gauged catchments by explicitly including one or several SFCs in the calibration process. Different types of objective functions were defined consisting of the Nash–Sutcliffe model efficiency, single SFCs, or combinations thereof. We calibrated a bucket-type runoff model (HBV – Hydrologiska Byråns Vattenavdelning – model) for 25 catchments in the Tennessee River basin and evaluated the proposed calibration approach on 13 ecologically relevant SFCs representing major flow regime components and different flow conditions. While the model generally tended to underestimate the tested SFCs related to mean and high-flow conditions, SFCs related to low flow were generally overestimated. The highest estimation accuracies were achieved by a SFC-specific model calibration. Estimates of SFCs not included in the calibration process were of similar quality when comparing a multi-SFC calibration approach to a traditional model efficiency calibration. For practical applications, this implies that SFCs should preferably be estimated from targeted runoff model calibration, and modeled estimates need to be carefully interpreted.
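
The idea of building SFCs into the objective function can be sketched as a weighted blend of a statistical metric and an SFC error; the Q95 low-flow characteristic and the weighting below are illustrative assumptions, not the study's exact formulation.

```python
# Objective mixing Nash-Sutcliffe efficiency with the relative error of one
# ecologically relevant SFC (here a Q95 low-flow quantile).
import numpy as np

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def sfc_q95(flow):
    # flow exceeded 95% of the time, a common low-flow characteristic
    return np.percentile(flow, 5)

def objective(sim, obs, w=0.5):
    # to be minimized: blend of (1 - NSE) and relative SFC error
    sfc_err = abs(sfc_q95(sim) - sfc_q95(obs)) / sfc_q95(obs)
    return w * (1.0 - nse(sim, obs)) + (1.0 - w) * sfc_err

obs = np.random.default_rng(1).gamma(2.0, 5.0, 365)   # synthetic daily flow
sim = obs * 1.1                                        # a biased simulation
print("blended objective:", round(objective(sim, obs), 3))
```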

  7. Application of Iterative Robust Model-based Optimal Experimental Design for the Calibration of Biocatalytic Models

    Van Daele, Timothy; Gernaey, Krist V.; Ringborg, Rolf Hoffmeyer

    2017-01-01

    The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimise the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase catalysed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is a more accurate, but also computationally more expensive, method. As a result, an important deviation between both approaches…
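
A minimal sketch of the Fisher-Information-Matrix confidence estimate mentioned above, using a Michaelis-Menten-style initial-rate model as a stand-in for the ω-transaminase kinetics; model form, parameter values, and noise level are assumptions.

```python
# FIM-based parameter confidence estimate from a finite-difference Jacobian
# of the model predictions with respect to the parameters.
import numpy as np

S = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])    # substrate conc. (mM)
theta = np.array([8.0, 5.0])                        # fitted [Vmax, Km]
sigma = 0.2                                         # measurement std. dev.

def rate(theta, S):
    Vmax, Km = theta
    return Vmax * S / (Km + S)

# Jacobian of predictions w.r.t. parameters by central differences
J = np.zeros((S.size, theta.size))
for j in range(theta.size):
    h = 1e-6 * max(1.0, abs(theta[j]))
    tp, tm = theta.copy(), theta.copy()
    tp[j] += h
    tm[j] -= h
    J[:, j] = (rate(tp, S) - rate(tm, S)) / (2 * h)

FIM = J.T @ J / sigma ** 2
cov = np.linalg.inv(FIM)                    # Cramer-Rao style approximation
ci = 1.96 * np.sqrt(np.diag(cov))
print("approx. 95% CIs: Vmax +/-", round(ci[0], 3), ", Km +/-", round(ci[1], 3))
```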

  8. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
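
The MRR idea of augmenting a parametric fit with a portion of a nonparametric fit to its residuals can be sketched as follows; the kernel smoother, bandwidth, and mixing proportion are illustrative choices, not the authors' exact formulation.

```python
# Parametric fit plus lambda times a kernel fit to the residuals.
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0.0, 10.0, 60)
y = 2.0 + 0.8 * x + 0.4 * np.sin(2.0 * x) + rng.normal(0, 0.1, x.size)

coef = np.polyfit(x, y, 1)                  # predetermined parametric model
resid = y - np.polyval(coef, x)

def kernel_smooth(x0, x, r, bandwidth=0.5):
    # Nadaraya-Watson smoother of the residuals
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * r).sum(axis=1) / w.sum(axis=1)

lam = 0.7                                   # mixing proportion, 0..1
y_mrr = np.polyval(coef, x) + lam * kernel_smooth(x, x, resid)
print("RMSE parametric:", np.sqrt(np.mean(resid ** 2)))
print("RMSE MRR:       ", np.sqrt(np.mean((y - y_mrr) ** 2)))
```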

  9. Calibration of a complex activated sludge model for the full-scale wastewater treatment plant

    Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw

    2011-01-01

    In this study, the results of the calibration of the complex activated sludge model implemented in BioWin software for the full-scale wastewater treatment plant are presented. Within the calibration of the model, sensitivity analysis of its parameters and the fractions of carbonaceous substrate were performed. In the steady-state and dynamic calibrations, a successful agreement between the measured and simulated values of the output variables was achieved. Sensitivity analysis revealed that u...

  10. Generator Dynamic Model Validation and Parameter Calibration Using Phasor Measurements at the Point of Connection

    Huang, Zhenyu; Du, Pengwei; Kosterev, Dmitry; Yang, Steve

    2013-05-01

    Disturbance data recorded by phasor measurement units (PMU) offer opportunities to improve the integrity of dynamic models. However, manually tuning parameters through play-back events demands significant effort and engineering experience. In this paper, a calibration method using the extended Kalman filter (EKF) technique is proposed. The formulation of EKF with parameter calibration is discussed. Case studies are presented to demonstrate its validity. The proposed calibration method is cost-effective and complementary to traditional equipment testing for improving dynamic model quality.
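
A toy version of the augmented-state EKF idea, estimating one unknown model parameter of a scalar system from noisy measurements; the system, noise levels, and tuning matrices are invented for illustration and are far simpler than a generator model.

```python
# Joint state-parameter estimation: the unknown parameter a is appended to
# the state vector z = [x, a] and estimated by an EKF.
import numpy as np

rng = np.random.default_rng(2)
a_true, n_steps = 0.9, 200
x = 1.0
z = np.array([0.5, 0.5])                # augmented state [x, a]
P = np.diag([1.0, 1.0])                 # state covariance
Q = np.diag([1e-4, 1e-5])               # process noise
R = 0.05 ** 2                           # measurement noise variance
H = np.array([[1.0, 0.0]])              # we only measure x

for _ in range(n_steps):
    x = a_true * x + rng.normal(0, 1e-2) + 0.1       # true system + input
    y = x + rng.normal(0, 0.05)                      # noisy measurement

    # Predict: f(z) = [a*x + 0.1, a]; F is its Jacobian
    F = np.array([[z[1], z[0]], [0.0, 1.0]])
    z = np.array([z[1] * z[0] + 0.1, z[1]])
    P = F @ P @ F.T + Q

    # Update with the measurement
    S = H @ P @ H.T + R
    K = (P @ H.T) / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimated parameter a:", round(z[1], 3), "(true:", a_true, ")")
```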

  11. A new method to calibrate Lagrangian model with ASAR images for oil slick trajectory.

    Tian, Siyu; Huang, Xiaoxia; Li, Hongga

    2017-03-15

    Since Lagrangian model coefficients vary with conditions, it is necessary to calibrate the model to obtain the optimal coefficient combination for a specific oil spill accident. This paper focuses on proposing a new method to calibrate a Lagrangian model with time series of Envisat ASAR images. Oil slicks extracted from the time series of images form a detected trajectory of a specific oil slick. The Lagrangian model is calibrated by minimizing the difference between the simulated trajectory and the detected trajectory. The mean center position distance difference (MCPD) and the rotation difference (RD) of the oil slicks' or particles' standard deviational ellipses (SDEs) are calculated as two evaluations. These two parameters are used to evaluate the performance of the Lagrangian transport model with different coefficient combinations. The method is applied to the Penglai 19-3 oil spill accident. The simulation result with the calibrated model agrees well with related satellite observations. It is suggested that the new method is effective for calibrating the Lagrangian model. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Calibration models for density borehole logging - construction report

    Engelmann, R.E.; Lewis, R.E.; Stromswold, D.C.

    1995-10-01

    Two machined blocks of magnesium and aluminum alloys form the basis for Hanford's density models. The blocks provide known densities of 1.780 ± 0.002 g/cm³ and 2.804 ± 0.002 g/cm³ for calibrating borehole logging tools that measure density based on gamma-ray scattering from a source in the tool. Each block is approximately 33 x 58 x 91 cm (13 x 23 x 36 in.) with cylindrical grooves cut into the sides of the blocks to hold steel casings of inner diameter 15 cm (6 in.) and 20 cm (8 in.). Spacers that can be inserted between the blocks and casings can create air gaps of thickness 0.64, 1.3, 1.9, and 2.5 cm (0.25, 0.5, 0.75 and 1.0 in.), simulating air gaps that can occur in actual wells from hole enlargements behind the casing.
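
Assuming the common roughly log-linear relationship between gamma-scattering count rate and bulk density, the two block densities above are enough to fix a two-point calibration line, as sketched below with hypothetical count rates.

```python
# Two-point density calibration: gamma-scattering count rate falls roughly
# exponentially with bulk density, so two known blocks fix a log-linear
# calibration line. Count rates here are invented for illustration.
import numpy as np

rho1, rho2 = 1.780, 2.804        # block densities (g/cm^3), from the report
R1, R2 = 5200.0, 1450.0          # measured count rates (cps), hypothetical

# ln(R) = b0 + b1 * rho  =>  rho = (ln(R) - b0) / b1
b1 = (np.log(R2) - np.log(R1)) / (rho2 - rho1)
b0 = np.log(R1) - b1 * rho1

def density_from_rate(R):
    return (np.log(R) - b0) / b1

print("density at R=3000 cps:", round(density_from_rate(3000.0), 3), "g/cm^3")
```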

  13. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    Brown, Justin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hund, Lauren [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
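
The effective-sample-size scaling proposed above can be sketched for an autocorrelated residual trace: the lag-sum of the autocorrelation function shrinks the nominal sample size, and the Gaussian log-likelihood is scaled accordingly. The AR(1) residuals and the truncation rule are illustrative assumptions, not the authors' exact procedure.

```python
# Scale a Gaussian log-likelihood by n_eff/n for an autocorrelated
# functional residual (e.g., a velocity trace). Residuals are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n, phi = 2000, 0.9
resid = np.zeros(n)                       # AR(1) residual trace
for t in range(1, n):
    resid[t] = phi * resid[t - 1] + rng.normal(0, 0.1)

def effective_sample_size(r, max_lag=200):
    r0 = r - r.mean()
    acf = np.array([np.dot(r0[:-k], r0[k:]) / np.dot(r0, r0)
                    for k in range(1, max_lag)])
    # truncate the lag sum at the first negative autocorrelation estimate
    acf = acf[:np.argmax(acf < 0.0)] if np.any(acf < 0.0) else acf
    return len(r) / (1.0 + 2.0 * acf.sum())

n_eff = effective_sample_size(resid)
sigma2 = resid.var()
loglik = -0.5 * np.sum(resid ** 2) / sigma2
print("n:", n, " n_eff:", round(n_eff), " scaled loglik:",
      round(loglik * n_eff / n, 1))
```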

  14. Calibrating the sqHIMMELI v1.0 wetland methane emission model with hierarchical modeling and adaptive MCMC

    Susiluoto, Jouni; Raivonen, Maarit; Backman, Leif; Laine, Marko; Makela, Jarmo; Peltola, Olli; Vesala, Timo; Aalto, Tuula

    2018-03-01

    … the early spring net primary production could be used to predict parameters affecting the annual methane production. Even though the calibration is specific to the Siikaneva site, the hierarchical modeling approach is well suited for larger-scale studies and the results of the estimation pave the way for a regional or global-scale Bayesian calibration of wetland emission models.

  15. Gridded Calibration of Ensemble Wind Vector Forecasts Using Ensemble Model Output Statistics

    Lazarus, S. M.; Holman, B. P.; Splitt, M. E.

    2017-12-01

    A computationally efficient method is developed that performs gridded post processing of ensemble wind vector forecasts. An expansive set of idealized WRF model simulations are generated to provide physically consistent high resolution winds over a coastal domain characterized by an intricate land / water mask. Ensemble model output statistics (EMOS) is used to calibrate the ensemble wind vector forecasts at observation locations. The local EMOS predictive parameters (mean and variance) are then spread throughout the grid utilizing flow-dependent statistical relationships extracted from the downscaled WRF winds. Using data withdrawal and 28 east central Florida stations, the method is applied to one year of 24 h wind forecasts from the Global Ensemble Forecast System (GEFS). Compared to the raw GEFS, the approach improves both the deterministic and probabilistic forecast skill. Analysis of multivariate rank histograms indicate the post processed forecasts are calibrated. Two downscaling case studies are presented, a quiescent easterly flow event and a frontal passage. Strengths and weaknesses of the approach are presented and discussed.
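
A minimal Gaussian EMOS sketch in the spirit of the study: the predictive mean and variance are affine in the ensemble mean and ensemble variance, fitted by minimizing the negative log-likelihood on synthetic stand-in data (not GEFS output).

```python
# Gaussian EMOS: mu = a + b*ens_mean, var = c + d*ens_var, fitted by
# minimizing the negative log-likelihood. All data are synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n, m = 500, 20
truth = rng.normal(8.0, 3.0, n)                       # "observed" wind speed
ens = truth[:, None] * 0.8 + 1.0 + rng.normal(0, 1.5, (n, m))
ens_mean, ens_var = ens.mean(axis=1), ens.var(axis=1)

def neg_loglik(params, obs=truth):
    a, b, c, d = params
    mu = a + b * ens_mean
    var = np.maximum(c + d * ens_var, 1e-6)           # keep variance positive
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (obs - mu) ** 2 / var)

fit = minimize(neg_loglik, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead")
print("EMOS coefficients a, b, c, d:", np.round(fit.x, 3))
```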

  16. Considering Decision Variable Diversity in Multi-Objective Optimization: Application in Hydrologic Model Calibration

    Sahraei, S.; Asadzadeh, M.

    2017-12-01

    Any modern multi-objective global optimization algorithm should be able to archive a well-distributed set of solutions. While solution diversity in the objective space has been explored extensively in the literature, little attention has been given to solution diversity in the decision space. Selection metrics such as the hypervolume contribution and crowding distance calculated in the objective space would guide the search toward solutions that are well-distributed across the objective space. In this study, the diversity of solutions in the decision space is used as the main selection criterion beside the dominance check in multi-objective optimization. To this end, currently archived solutions are clustered in the decision space and the ones in less crowded clusters are given a greater chance to be selected for generating new solutions. The proposed approach is first tested on benchmark mathematical test problems. Second, it is applied to a hydrologic model calibration problem with more than three objective functions. Results show that the chance of finding a more sparse set of high-quality solutions increases, and therefore the analyst would receive a well-diverse set of options with the maximum amount of information. Pareto Archived-Dynamically Dimensioned Search, which is an efficient and parsimonious multi-objective optimization algorithm for model calibration, is utilized in this study.
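
The decision-space selection criterion can be sketched as clustering the archive in parameter space and drawing parents from the least crowded cluster; the clustering method and cluster count below are assumptions for illustration, not the algorithm's exact mechanics.

```python
# Select a parent from the least crowded decision-space cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
archive = rng.uniform(size=(60, 5))          # 60 solutions, 5 decision vars

k = 6
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(archive)
counts = np.bincount(labels, minlength=k)

sparse_cluster = np.argmin(counts)           # least crowded region
candidates = np.where(labels == sparse_cluster)[0]
parent = archive[rng.choice(candidates)]     # selected for new solutions
print("cluster sizes:", counts, "-> parent from cluster", sparse_cluster)
```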

  17. Effect of calibration data series length on performance and optimal parameters of hydrological model

    Chuan-zhe Li

    2010-12-01

    In order to assess the effects of calibration data series length on the performance and optimal parameter values of a hydrological model in ungauged or data-limited catchments (where data are non-continuous and fragmental), we used non-continuous calibration periods to obtain more independent streamflow data for SIMHYD (simple hydrology model) calibration. Nash-Sutcliffe efficiency and percentage water balance error were used as performance measures. The particle swarm optimization (PSO) method was used to calibrate the rainfall-runoff models. Different lengths of data series ranging from one year to ten years, randomly sampled, were used to study the impact of calibration data series length. Fifty-five relatively unimpaired catchments located all over Australia, with daily precipitation, potential evapotranspiration, and streamflow data, were tested to obtain more general conclusions. The results show that longer calibration data series do not necessarily result in better model performance. In general, eight years of data are sufficient to obtain steady estimates of model performance and parameters for the SIMHYD model. It is also shown that most humid catchments require fewer calibration data to obtain a good performance and stable parameter values. The model performs better in humid and semi-humid catchments than in arid catchments. Our results may have useful and interesting implications for the efficiency of using limited observation data for hydrological model calibration in different climates.

  18. Improving SWAT model prediction using an upgraded denitrification scheme and constrained auto calibration

    The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...

  19. Modelling MIZ dynamics in a global model

    Rynders, Stefanie; Aksenov, Yevgeny; Feltham, Daniel; Nurser, George; Naveira Garabato, Alberto

    2016-04-01

    Exposure of large, previously ice-covered areas of the Arctic Ocean to the wind and surface ocean waves results in the Arctic pack ice cover becoming more fragmented and mobile, with large regions of ice cover evolving into the Marginal Ice Zone (MIZ). The need for better climate predictions, along with growing economic activity in the Polar Oceans, necessitates climate and forecasting models that can simulate fragmented sea ice with greater fidelity. Current models are not fully fit for purpose, since they neither model surface ocean waves in the MIZ, nor account for the effect of floe fragmentation on drag, nor include a sea ice rheology that represents both the now thinner pack ice and MIZ ice dynamics. All these processes affect the momentum transfer to the ocean. We present initial results from a global ocean model NEMO (Nucleus for European Modelling of the Ocean) coupled to the Los Alamos sea ice model CICE. The model setup implements a novel rheological formulation for sea ice dynamics, accounting for ice floe collisions, thus offering a seamless framework for pack ice and MIZ simulations. The effect of surface waves on ice motion is included through wave pressure and the turbulent kinetic energy of ice floes. In the multidecadal model integrations we examine MIZ and basin scale sea ice and oceanic responses to the changes in ice dynamics. We analyse model sensitivities and attribute them to key sea ice and ocean dynamical mechanisms. The results suggest that the effect of the new ice rheology is confined to the MIZ. However, with the current increase in summer MIZ area, which is projected to continue and may become the dominant type of sea ice in the Arctic, we argue that the effects of the combined sea ice rheology will be noticeable in large areas of the Arctic Ocean, affecting sea ice and ocean. With this study we assert that to make more accurate sea ice predictions in the changing Arctic, models need to include MIZ dynamics and physics.

  20. On global and regional spectral evaluation of global geopotential models

    Ustun, A; Abbak, R A

    2010-01-01

    Spectral evaluation of global geopotential models (GGMs) is necessary to recognize the behaviour of the gravity signal and its error recorded in spherical harmonic coefficients and associated standard deviations. Results put forward in this way explain the whole contribution of gravity data of different kinds that represent various sections of the gravity spectrum. This method is more informative than accuracy assessment methods which use external data such as GPS-levelling. Comparative spectral evaluation for more than one model can be performed both in a global and a local sense using many spectral tools. The number of GGMs has grown with the increasing number of data collected by the dedicated satellite gravity missions, CHAMP, GRACE and GOCE. This fact makes it necessary to measure the differences between models and to monitor the improvements in the gravity field recovery. In this paper, some of the satellite-only and combined models are examined at different scales, globally and regionally, in order to observe the advances in the modelling of GGMs and their strengths at various expansion degrees for geodetic and geophysical applications. The validation of the published errors of the model coefficients is a part of this evaluation. All spectral tools explicitly reveal the superiority of the GRACE-based models when compared against the models that comprise the conventional satellite tracking data. The disagreement between models is large in local/regional areas if data sets are different, as seen from the example of the Turkish territory.

  1. Beyond discrimination: A comparison of calibration methods and clinical usefulness of predictive models of readmission risk.

    Walsh, Colin G; Sharman, Kavya; Hripcsak, George

    2017-12-01

    Prior to implementing predictive models in novel settings, analyses of calibration and clinical usefulness remain as important as discrimination, but they are not frequently discussed. Calibration is a model's reflection of actual outcome prevalence in its predictions. Clinical usefulness refers to the utilities, costs, and harms of using a predictive model in practice. A decision analytic approach to calibrating and selecting an optimal intervention threshold may help maximize the impact of readmission risk and other preventive interventions. To select a pragmatic means of calibrating predictive models that requires a minimum amount of validation data and that performs well in practice. To evaluate the impact of miscalibration on utility and cost via clinical usefulness analyses. Observational, retrospective cohort study with electronic health record data from 120,000 inpatient admissions at an urban, academic center in Manhattan. The primary outcome was thirty-day readmission for three causes: all-cause, congestive heart failure, and chronic coronary atherosclerotic disease. Predictive modeling was performed via L1-regularized logistic regression. Calibration methods were compared including Platt Scaling, Logistic Calibration, and Prevalence Adjustment. Performance of predictive modeling and calibration was assessed via discrimination (c-statistic), calibration (Spiegelhalter Z-statistic, Root Mean Square Error [RMSE] of binned predictions, Sanders and Murphy Resolutions of the Brier Score, Calibration Slope and Intercept), and clinical usefulness (utility terms represented as costs). The amount of validation data necessary to apply each calibration algorithm was also assessed. C-statistics by diagnosis ranged from 0.7 for all-cause readmission to 0.86 (0.78-0.93) for congestive heart failure. Logistic Calibration and Platt Scaling performed best and this difference required analyzing multiple metrics of calibration simultaneously, in particular Calibration
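
Platt Scaling, one of the calibration methods compared above, amounts to a one-dimensional logistic regression from raw scores to probabilities; a minimal sketch on simulated validation data:

```python
# Platt Scaling: fit p = sigmoid(A*s + B) on held-out validation data.
# Scores and outcomes are simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
scores = rng.normal(0, 1, 3000)                        # raw model scores
p_true = 1 / (1 + np.exp(-(0.6 * scores - 1.2)))       # true risk
y = rng.binomial(1, p_true)

# A large C makes the logistic fit effectively unpenalized
platt = LogisticRegression(C=1e6).fit(scores.reshape(-1, 1), y)
p_cal = platt.predict_proba(scores.reshape(-1, 1))[:, 1]

print("fitted A, B:", round(platt.coef_[0][0], 2), round(platt.intercept_[0], 2))
print("mean calibrated risk:", round(p_cal.mean(), 3),
      "event rate:", round(y.mean(), 3))
```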

  2. Calibration of the APEX Model to Simulate Management Practice Effects on Runoff, Sediment, and Phosphorus Loss.

    Bhandari, Ammar B; Nelson, Nathan O; Sweeney, Daniel W; Baffaut, Claire; Lory, John A; Senaviratne, Anomaa; Pierzynski, Gary M; Janssen, Keith A; Barnes, Philip L

    2017-11-01

    Process-based computer models have been proposed as a tool to generate data for Phosphorus (P) Index assessment and development. Although models are commonly used to simulate P loss from agriculture under managements that differ from the calibration data, this use of models has not been fully tested. The objective of this study is to determine whether the Agricultural Policy Environmental eXtender (APEX) model can accurately simulate runoff, sediment, total P, and dissolved P loss from 0.4 to 1.5 ha agricultural fields under managements that differ from the calibration data. The APEX model was calibrated with field-scale data from eight different managements at two locations (management-specific models). The calibrated models were then validated, either with the same management used for calibration or with different managements. Location models were also developed by calibrating APEX with data from all managements. The management-specific models resulted in satisfactory performance when used to simulate runoff, total P, and dissolved P within their respective systems, with R² > 0.50, Nash-Sutcliffe efficiency > 0.30, and percent bias within ±35% for runoff and ±70% for total and dissolved P. When applied outside the calibration management, the management-specific models met the minimum performance criteria in only one-third of the tests. The location models had better performance when applied across all managements than the management-specific models. Our results suggest that models should only be applied within the managements used for calibration, and that data from multiple management systems should be included in calibration when using models to assess management effects on P loss or to evaluate P Indices.
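
    The performance criteria quoted above (Nash-Sutcliffe efficiency and percent bias) follow standard definitions; a minimal sketch, with made-up runoff values, is given below.

```python
# Hedged sketch of two standard calibration criteria: NSE and PBIAS.
import numpy as np

def nse(obs, sim):
    """NSE = 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2); 1 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; positive values mean the model under-predicts on average."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

runoff_obs = [12.0, 5.5, 0.0, 3.2, 8.9]   # hypothetical observed runoff
runoff_sim = [10.1, 6.0, 0.4, 2.8, 9.5]   # hypothetical simulated runoff
print(nse(runoff_obs, runoff_sim), pbias(runoff_obs, runoff_sim))
```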

  3. Spherical Process Models for Global Spatial Statistics

    Jeong, Jaehong; Jun, Mikyoung; Genton, Marc G.

    2017-01-01

    Statistical models used in geophysical, environmental, and climate science applications must reflect the curvature of the spatial domain in global data. Over the past few decades, statisticians have developed covariance models that capture

  4. Multiple-Objective Stepwise Calibration Using Luca

    Hay, Lauren E.; Umemoto, Makiko

    2007-01-01

    This report documents Luca (Let us calibrate), a multiple-objective, stepwise, automated procedure for hydrologic model calibration and the associated graphical user interface (GUI). Luca is a wizard-style user-friendly GUI that provides an easy systematic way of building and executing a calibration procedure. The calibration procedure uses the Shuffled Complex Evolution global search algorithm to calibrate any model compiled with the U.S. Geological Survey's Modular Modeling System. This process assures that intermediate and final states of the model are simulated consistently with measured values.
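
    The record above describes wrapping a model inside a global search that minimizes the mismatch with measured values. The sketch below shows the shape of such a calibration loop; SciPy's differential_evolution stands in for the Shuffled Complex Evolution algorithm, and the toy model, observations, and bounds are hypothetical.

```python
# Generic global-search calibration loop (stand-in for SCE; illustrative only).
import numpy as np
from scipy.optimize import differential_evolution

observed = np.array([3.1, 2.4, 1.9, 1.6, 1.4])  # hypothetical measured series

def toy_model(params, t=np.arange(5)):
    a, k = params
    return a * np.exp(-k * t)          # placeholder for a hydrologic model run

def objective(params):
    return np.sum((observed - toy_model(params)) ** 2)  # sum of squared errors

result = differential_evolution(objective,
                                bounds=[(0.1, 10.0), (0.01, 2.0)],
                                seed=42, tol=1e-8)
print(result.x, result.fun)            # calibrated parameters and final misfit
```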

  5. Automatic component calibration and error diagnostics for model-based accelerator control. Phase I final report

    Carl Stern; Martin Lee

    1999-01-01

    Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models.

  6. Modeling Global Urbanization Supported by Nighttime Light Remote Sensing

    Zhou, Y.

    2015-12-01

    Urbanization, a major driver of global change, profoundly impacts our physical and social world, for example by altering carbon cycling and climate. Understanding these consequences, for better scientific insight and effective decision-making, unarguably requires accurate information on urban extent and its spatial distribution. In this study, we developed a cluster-based method to estimate optimal thresholds and map urban extents from nighttime light remote sensing data, extended this method to the global domain by developing a computational method (parameterization) to estimate the key parameters in the cluster-based method, and built a consistent 20-year global urban map series to evaluate the temporal dynamics of global urbanization. Supported by urban maps derived from nightlights remote sensing data and socio-economic drivers, we developed an integrated modeling framework to project future urban expansion by integrating a top-down macro-scale statistical model with a bottom-up urban growth model. With the models calibrated and validated using historical data, we explored urban growth at the grid level (1 km) over the next two decades under a number of socio-economic scenarios. The derived spatiotemporal information on historical and potential future urbanization will be of great value, with practical implications for developing adaptation and risk-management measures for urban infrastructure, transportation, energy, and water systems when considered together with other factors such as climate variability and change and high-impact weather events.

  7. Global model structures for ∗-modules

    Böhme, Benjamin

    2018-01-01

    We extend Schwede's work on the unstable global homotopy theory of orthogonal spaces and L-spaces to the category of ∗-modules (i.e., unstable S-modules). We prove a theorem which transports model structures and their properties from L-spaces to ∗-modules and show that the resulting global model structure for ∗-modules is monoidally Quillen equivalent to that of orthogonal spaces. As a consequence, there are induced Quillen equivalences between the associated model categories of monoids, which identify equivalent models for the global homotopy theory of A∞-spaces.

  8. Infusion of SMAP Data into Offline and Coupled Models: Evaluation, Calibration, and Assimilation

    Lawston, P.; Santanello, J. A., Jr.; Dennis, E. J.; Kumar, S.

    2017-12-01

    The impact of the land surface on the water and energy cycle is modulated by its coupling to the planetary boundary layer (PBL), and begins at the local scale. A core component of the local land-atmosphere coupling (LoCo) effort requires understanding the 'links in the chain' between soil moisture and precipitation, most notably through surface heat fluxes and PBL evolution. To date, broader (i.e. global) application of LoCo diagnostics has been limited by the observational data requirements of the coupled system (in particular, soil moisture), which are typically only met during localized, short-term field campaigns. SMAP offers, for the first time, the ability to map high-quality, near-surface soil moisture globally every few days at a spatial resolution comparable to current modeling efforts. As a result, there are numerous potential avenues for SMAP model-data fusion that can be explored in the context of improving understanding of L-A interaction and NWP. In this study, we assess multiple points of intersection of SMAP products with offline and coupled models and evaluate impacts using process-level diagnostics. Results will inform upon the importance of high-resolution soil moisture mapping for improved coupled prediction and model development, as well as reconciling differences in modeled, retrieved, and measured soil moisture. Specifically, NASA model (LIS, NU-WRF) and observation (SMAP, NLDAS-2) products are combined with in-situ standard and IOP measurements (soil moisture, flux, and radiosonde) over the ARM-SGP. An array of land surface model spinups (via LIS-Noah) is performed with varying atmospheric forcing, greenness fraction, and soil layering permutations. Calibration of LIS-Noah soil hydraulic parameters is then performed using in-situ soil moisture and flux measurements and SMAP products. In addition, SMAP assimilation is performed in LIS-Noah both at the scale of the observation (36 and 9 km) and the model grid (1 km). The focus is on the

  9. Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity

    Louis, S.J.; Raines, G.L.

    2003-01-01

    We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition-rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks; the genetic algorithm takes a day and produces rules leading to about the same (or a better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral-resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.
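
    A minimal sketch of the genetic-algorithm calibration idea described above: candidate transition-rule parameter sets evolve by selection, crossover, and mutation toward a better fit. The "observed-fit" target and all settings are illustrative stand-ins for the cellular automaton itself.

```python
# Tiny genetic algorithm fitting three rule parameters; illustrative only.
import numpy as np

rng = np.random.default_rng(1)
TARGET = np.array([0.3, 0.7, 0.1])         # hypothetical best-fit rule set

def fitness(rules):
    return -np.sum((rules - TARGET) ** 2)  # higher is better

def evolve(pop_size=50, n_gen=100, mut_sigma=0.05):
    pop = rng.random((pop_size, 3))        # each row: one candidate rule set
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]  # keep the top half
        # uniform crossover between randomly chosen parent pairs
        pairs = rng.integers(0, len(parents), (pop_size, 2))
        mask = rng.random((pop_size, 3)) < 0.5
        children = np.where(mask, parents[pairs[:, 0]], parents[pairs[:, 1]])
        # Gaussian mutation, clipped back into the unit parameter box
        pop = np.clip(children + rng.normal(0, mut_sigma, children.shape), 0, 1)
    return pop[np.argmax([fitness(ind) for ind in pop])]

print(evolve())   # should approach TARGET
```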

  10. The Value of Hydrograph Partitioning Curves for Calibrating Hydrological Models in Glacierized Basins

    He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno

    2018-03-01

    This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and the melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas, are used to identify the starting and end dates of the snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparable to that of the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where calibration data sets other than discharge are often not available or very costly to obtain.

  11. Application of heuristic and machine-learning approach to engine model calibration

    Cheng, Jie; Ryu, Kwang R.; Newman, C. E.; Davis, George C.

    1993-03-01

    Automation of engine model calibration procedures is a very challenging task because (1) the calibration process searches for a goal state in a huge, continuous state space, (2) calibration is often a lengthy and frustrating task because of complicated mutual interference among the target parameters, and (3) the calibration problem is heuristic by nature, and often heuristic knowledge for constraining a search cannot be easily acquired from domain experts. A combined heuristic and machine learning approach has, therefore, been adopted to improve the efficiency of model calibration. We developed an intelligent calibration program called ICALIB. It has been used on a daily basis for engine model applications, and has reduced the time required for model calibrations from many hours to a few minutes on average. In this paper, we describe the heuristic control strategies employed in ICALIB such as a hill-climbing search based on a state distance estimation function, incremental problem solution refinement by using a dynamic tolerance window, and calibration target parameter ordering for guiding the search. In addition, we present the application of a machine learning program called GID3* for automatic acquisition of heuristic rules for ordering target parameters.
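
    The hill-climbing search with a shrinking tolerance and step schedule described above can be sketched as follows; the state-distance function and the parameter-ordering strategy are hypothetical simplifications of ICALIB's heuristics, not its actual rules.

```python
# Illustrative hill-climbing calibration with a shrinking step/tolerance.
import numpy as np

def hill_climb(start, distance, step=0.5, tol=1.0, shrink=0.5, min_tol=1e-3):
    x = np.asarray(start, float)
    while tol > min_tol:
        improved = False
        for i in range(len(x)):           # visit target parameters in order
            for delta in (+step, -step):
                trial = x.copy()
                trial[i] += delta
                if distance(trial) < distance(x):
                    x, improved = trial, True
        if not improved:                  # stuck: tighten step and tolerance
            step *= shrink
            tol *= shrink
    return x

goal = np.array([2.0, -1.0])              # hypothetical calibrated optimum
print(hill_climb([0.0, 0.0], lambda p: np.sum((p - goal) ** 2)))
```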

  12. Multi-Site Calibration of Linear Reservoir Based Geomorphologic Rainfall-Runoff Models

    Bahram Saeidifarzad

    2014-09-01

    Multi-site optimization of two adapted event-based geomorphologic rainfall-runoff models is presented, using the Non-dominated Sorting Genetic Algorithm (NSGA-II) for the South Fork Eel River watershed, California. The first model was developed based on an Unequal Cascade of Reservoirs (UECR) and the second model is a modified version of the Geomorphological Unit Hydrograph based on Nash's model (GUHN). Two calibration strategies, semi-lumped and semi-distributed, were considered for imposing (or not imposing) the geomorphology relations in the models. The results of the models were compared with Nash's model. The results obtained using the observed data of two stations in the multi-site optimization framework showed reasonable efficiency values in both the calibration and the verification steps. The outcomes also showed that semi-distributed calibration of the modified GUHN model slightly outperformed the other models in both the upstream and downstream stations during calibration. Both calibration strategies for the developed UECR model showed slightly better performance in the downstream station during the verification phase, but in the upstream station the modified GUHN model in the semi-lumped strategy slightly outperformed the other models. The semi-lumped calibration strategy could lead to lag time parameters consistent with the basin geomorphology and may be more suitable for data-based statistical analyses of the rainfall-runoff process.

  13. Nanotechnology in global medicine and human biosecurity: private interests, policy dilemmas, and the calibration of public health law.

    Faunce, Thomas A

    2007-01-01

    This paper considers how best to approach the dilemmas posed to global health and biosecurity policy by increasing advances in practical applications of nanotechnology. The types of nanotechnology policy dilemmas discussed include: (1) expenditure of public funds, (2) publicly funded research priorities, (3) public confidence in government and science and, finally, (4) public safety. The article examines the value, in this context, of a legal obligation that the development of relevant public health law be calibrated against less corporate-influenced norms issuing from bioethics and international human rights.

  14. Calibration of the γ-Equation Transition Model for High Reynolds Flows at Low Mach

    Colonia, S.; Leble, V.; Steijl, R.; Barakos, G.

    2016-09-01

    The numerical simulation of flows over large-scale wind turbine blades without considering the transition from laminar to fully turbulent flow may result in incorrect estimates of the blade loads and performance. Thanks to its relative simplicity and promising results, the Local-Correlation-based Transition Modelling concept represents a valid way to include transitional effects in practical CFD simulations. However, the model involves coefficients that need tuning. In this paper, the γ-equation transition model is assessed and calibrated for a wide range of Reynolds numbers at low Mach, as needed for wind turbine applications. An aerofoil is used to evaluate the original model and calibrate it, while a large-scale wind turbine blade is employed to show that the calibrated model can lead to reliable solutions for complex three-dimensional flows. The calibrated model shows promising results for both two-dimensional and three-dimensional flows, even if cross-flow instabilities are neglected.

  15. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  16. A fundamental parameter-based calibration model for an intrinsic germanium X-ray fluorescence spectrometer

    Christensen, L.H.; Pind, N.

    1982-01-01

    A matrix-independent fundamental parameter-based calibration model for an energy-dispersive X-ray fluorescence spectrometer has been developed. This model, which is part of a fundamental parameter approach quantification method, accounts for both the excitation and detection probability. For each secondary target a number of relative calibration constants are calculated on the basis of knowledge of the irradiation geometry, the detector specifications, and tabulated fundamental physical parameters. The absolute calibration of the spectrometer is performed by measuring one pure element standard per secondary target. For sample systems where all elements can be analyzed by means of the same secondary target the absolute calibration constant can be determined during the iterative solution of the basic equation. Calculated and experimentally determined relative calibration constants agree to within 5-10% of each other and so do the results obtained from the analysis of an NBS certified alloy using the two sets of constants. (orig.)

  17. PEST modules with regularization for the acceleration of the automatic calibration in hydrodynamic models

    Polomčić, Dušan M.; Bajić, Dragoljub I.; Močević, Jelena M.

    2015-01-01

    The calibration process of a hydrodynamic model is usually done manually by 'testing' different values of the hydrogeological parameters and the hydraulic characteristics of the boundary conditions. The PEST program introduced automatic model calibration, which has proved to significantly reduce the subjective influence of the model creator on the results. With the relatively new approach of PEST, i.e. with the introduction of so-called 'pilot points', the concept of homogeneou...

  18. Global Health Innovation Technology Models

    Kimberly Harding

    2016-04-01

    Chronic technology and business process disparities between High Income, Low Middle Income and Low Income (HIC, LMIC, LIC) research collaborators directly prevent the growth of sustainable Global Health innovation for infectious and rare diseases. There is a need for an Open Source-Open Science Architecture Framework to bridge this divide. We are proposing such a framework for consideration by the Global Health community, by utilizing a hybrid approach of integrating agnostic Open Source technology and healthcare interoperability standards and Total Quality Management principles. We will validate this architecture framework through our programme called Project Orchid. Project Orchid is a conceptual Clinical Intelligence Exchange and Virtual Innovation platform utilizing this approach to support clinical innovation efforts for multi-national collaboration that can be locally sustainable for LIC and LMIC research cohorts. The goal is to enable LIC and LMIC research organizations to accelerate their clinical trial process maturity in the field of drug discovery, population health innovation initiatives and public domain knowledge networks. When sponsored, this concept will be tested by 12 confirmed clinical research and public health organizations in six countries. The potential impact of this platform is reduced drug discovery and public health innovation lag time and improved clinical trial interventions, due to reliable clinical intelligence and bio-surveillance across all phases of the clinical innovation process.

  19. New Methods for Kinematic Modelling and Calibration of Robots

    Søe-Knudsen, Rune

    2014-01-01

    Improving a robot's accuracy increases its ability to solve certain tasks, and is therefore valuable. Practical ways of achieving this improved accuracy, even after robot repair, are also valuable. In this work, we introduce methods that improve the robot's accuracy and make it possible to maintain the accuracy in an easy and accessible way. The required equipment is accessible, since the cost is held to a minimum and can be made with conventional processing equipment. Our first method calibrates the kinematics of a robot using known relative positions measured with the robot itself and a plate with holes matching the robot tool flange. The second method calibrates the kinematics using two robots. This method allows the robots to carry out the collection of measurements and the adjustment by themselves, after the robots have been connected. Furthermore, we also propose a method for restoring ...

  1. Global sensitivity analysis of a filtration model for submerged anaerobic membrane bioreactors (AnMBR).

    Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J

    2014-04-01

    The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was conducted. The resulting estimated model factors accurately predicted membrane performance.
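
    The Morris screening method referenced above ranks factors by the mean absolute "elementary effect" of one-at-a-time perturbations along random trajectories. A minimal sketch of the basic form of the method is shown below (the paper uses a revised version); a toy function stands in for the filtration model.

```python
# Morris elementary-effects screening, basic form; illustrative toy model.
import numpy as np

rng = np.random.default_rng(7)

def model(x):                        # hypothetical 4-factor model
    return x[0] + 2 * x[1] ** 2 + 0.1 * x[2]   # x[3] is non-influential

def morris_mu_star(model, k=4, r=20, delta=0.1):
    ee = np.zeros((r, k))
    for j in range(r):               # r random one-at-a-time trajectories
        x = rng.random(k)
        base = model(x)
        for i in rng.permutation(k):
            x_new = x.copy()         # move factor i by +/- delta, staying in [0, 1]
            x_new[i] = x[i] + delta if x[i] + delta <= 1 else x[i] - delta
            ee[j, i] = (model(x_new) - base) / (x_new[i] - x[i])
            x, base = x_new, model(x_new)
    return np.abs(ee).mean(axis=0)   # mu*: mean absolute elementary effect

print(morris_mu_star(model))   # the last factor should score near zero
```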

  2. Validation and calibration of structural models that combine information from multiple sources.

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  3. Uncertainty modelling and code calibration for composite materials

    Toft, Henrik Stensgaard; Branner, Kim; Mishnaevsky, Leon, Jr

    2013-01-01

    ... and measurement uncertainties which are introduced on the different scales. Typically, these uncertainties are taken into account in the design process using characteristic values and partial safety factors specified in a design standard. The value of the partial safety factors should reflect a reasonable balance ... to wind turbine blades are calibrated for two typical lay-ups using a large number of load cases and ratios between the aerodynamic forces and the inertia forces.

  4. A Low Cost Calibration Method for Urban Drainage Models

    Rasmussen, Michael R.; Thorndahl, Søren; Schaarup-Jensen, Kjeld

    2008-01-01

    The calibration of the hydrological reduction coefficient is examined for a small catchment. The objective is to determine the hydrological reduction coefficient, which describes how much of the precipitation that falls on impervious areas actually ends up in the sewer ... to what can be found with intensive in-sewer measurement of rain and runoff. The results also clearly indicate that there is a large variation in the hydrological reduction coefficient between different rain events.

  5. HYbrid Coordinate Ocean Model (HYCOM): Global

    National Oceanic and Atmospheric Administration, Department of Commerce — Global HYbrid Coordinate Ocean Model (HYCOM) and U.S. Navy Coupled Ocean Data Assimilation (NCODA) 3-day, daily forecast at approximately 9-km (1/12-degree)...

  6. ASTER Global Digital Elevation Model V002

    National Aeronautics and Space Administration — The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Model (GDEM) was developed jointly by the U.S. National...

  7. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Xiao-meng Song

    2013-01-01

    Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration of time and high computation cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
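
    For the variance-based step, the sketch below uses the SALib package (assumed available; the paper's RSMSobol additionally replaces the model with a response surface, which is not reproduced here). The parameter names, bounds, and toy model are hypothetical.

```python
# Sobol sensitivity indices via SALib; illustrative stand-in for RSMSobol.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["K", "SM", "C"],           # hypothetical model parameters
    "bounds": [[0.1, 1.0], [10.0, 300.0], [0.1, 0.3]],
}

X = saltelli.sample(problem, 1024)        # Saltelli's extension of Sobol sampling
Y = X[:, 0] * X[:, 1] + 5.0 * X[:, 2]     # toy model in place of the real one

Si = sobol.analyze(problem, Y)
print(Si["S1"])   # first-order sensitivity indices
print(Si["ST"])   # total-order sensitivity indices
```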

  8. A Global Stock and Bond Model

    Connor, Gregory

    1996-01-01

    Factor models are now widely used to support asset selection decisions. Global asset allocation, the allocation between stocks versus bonds and among nations, usually relies instead on correlation analysis of international equity and bond indexes. It would be preferable to have a single integrated framework for both asset selection and asset allocation. This framework would require a factor model applicable at an asset or country level, as well as at a global level,...

  9. A Fundamental Parameter-Based Calibration Model for an Intrinsic Germanium X-Ray Fluorescence Spectrometer

    Christensen, Leif Højslet; Pind, Niels

    1982-01-01

    A matrix-independent fundamental parameter-based calibration model for an energy-dispersive X-ray fluorescence spectrometer has been developed. This model, which is part of a fundamental parameter approach quantification method, accounts for both the excitation and detection probability. For each secondary target a number of relative calibration constants are calculated on the basis of knowledge of the irradiation geometry, the detector specifications, and tabulated fundamental physical parameters. The absolute calibration of the spectrometer is performed by measuring one pure element standard per...

  10. Influence of smoothing of X-ray spectra on parameters of calibration model

    Antoniak, W.; Urbanski, P.; Kowalska, E.

    1998-01-01

    The parameters of the calibration model before and after smoothing of X-ray spectra have been investigated. The calibration model was calculated using a multivariate procedure, namely partial least squares (PLS) regression. The investigations were performed on six sets of various standards used for the calibration of instruments based on the X-ray fluorescence principle. Three smoothing methods were compared: regression splines, Savitzky-Golay, and the Discrete Fourier Transform. The calculations were performed using the MATLAB software package and some home-made programs. (author)
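
    A sketch of the workflow implied above, with synthetic spectra: Savitzky-Golay smoothing (one of the three methods compared) followed by a PLS calibration fit, using SciPy and scikit-learn. All data and settings are illustrative.

```python
# Smooth synthetic "spectra", then fit a PLS calibration model to them.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n_samples, n_channels = 60, 512
conc = rng.random(n_samples)                           # "true" concentrations
channels = np.arange(n_channels)
peak = np.exp(-0.5 * ((channels - 200) / 8.0) ** 2)    # one emission line
spectra = conc[:, None] * peak + rng.normal(0, 0.05, (n_samples, n_channels))

smoothed = savgol_filter(spectra, window_length=11, polyorder=3, axis=1)

pls = PLSRegression(n_components=2)
pls.fit(smoothed, conc)
pred = pls.predict(smoothed).ravel()
print(np.corrcoef(conc, pred)[0, 1])   # calibration correlation coefficient
```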

  11. A physically based model of global freshwater surface temperature

    van Beek, Ludovicus P. H.; Eikelboom, Tessa; van Vliet, Michelle T. H.; Bierkens, Marc F. P.

    2012-09-01

    Temperature determines a range of physical properties of water and exerts a strong control on surface water biogeochemistry. Thus, in freshwater ecosystems the thermal regime directly affects the geographical distribution of aquatic species through their growth and metabolism and indirectly through their tolerance to parasites and diseases. Models used to predict surface water temperature range between physically based deterministic models and statistical approaches. Here we present the initial results of a physically based deterministic model of global freshwater surface temperature. The model adds a surface water energy balance to river discharge modeled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff, and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by shortwave and longwave radiation and sensible and latent heat fluxes. Also included are ice formation and its effect on heat storage and river hydraulics. We use the coupled surface water and energy balance model to simulate global freshwater surface temperature at daily time steps with a spatial resolution of 0.5° on a regular grid for the period 1976-2000. We opt to parameterize the model with globally available data and apply it without calibration in order to preserve its physical basis, with a view to evaluating the effects of atmospheric warming on freshwater surface temperature. We validate our simulation results with daily temperature data from rivers and lakes (U.S. Geological Survey (USGS), limited to the USA) and compare mean monthly temperatures with those recorded in the Global Environment Monitoring System (GEMS) data set. Results show that the model is able to capture the mean monthly surface temperature for the majority of the GEMS stations, while the interannual variability as derived from the USGS and NOAA data was captured reasonably well. Results are poorest for

  12. A global central banker competency model

    David W. Brits

    2014-07-01

    Orientation: No comprehensive, integrated competency model exists for central bankers. Due to the importance of central banks in the context of the ongoing global financial crisis, it was deemed necessary to design and validate such a model. Research purpose: To craft and validate a comprehensive, integrated global central banker competency model (GCBCM) and to assess whether central banks using the GCBCM for training have a higher global influence. Motivation for the study: Limited consensus exists globally about what constitutes a ‘competent’ central banker. A quantitatively validated GCBCM would make a significant contribution to enhancing central banker effectiveness, and also provide a solid foundation for effective people management. Research approach, design and method: A blended quantitative and qualitative research approach was taken. Two sets of hypotheses were tested regarding the relationships between the GCBCM and the training offered, using the model on the one hand, and a central bank’s global influence on the other. Main findings: The GCBCM was generally accepted across all participating central banks globally, although some differences were found between central banks with higher and lower global influence. The actual training offered by central banks in terms of the model, however, is generally limited to technical-functional skills. The GCBCM is therefore at present predominantly aspirational. Significant differences were found regarding the training offered. Practical/managerial implications: By adopting the GCBCM, central banks would be able to develop organisation-specific competency models in order to enhance their organisational capabilities and play their increasingly important global role more effectively. Contribution: A generic conceptual framework for the crafting of a competency model with evaluation criteria was developed. A GCBCM was quantitatively validated.

  13. An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model

    Tiernan, E. D.; Hodges, B. R.

    2017-12-01

    The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine makes use of a preliminary sensitivity analysis to reduce the dimensions of the parameter space, at which point a multi-objective function, genetic algorithm (modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent the optimized parameter value sets for the catchment behavior that could not have been reasonably obtained through manual calibration.
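
    At the core of the NSGA-II-style search mentioned above is Pareto dominance: a parameter set is kept only if no other set is at least as good in every objective and strictly better in at least one. A minimal sketch with hypothetical two-objective calibration errors:

```python
# Pareto-front extraction (all objectives minimised); illustrative data.
import numpy as np

def pareto_front(costs):
    """Return a boolean mask marking the non-dominated rows of `costs`."""
    n = len(costs)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # row j dominates row i: no worse everywhere, strictly better somewhere
            if i != j and np.all(costs[j] <= costs[i]) and np.any(costs[j] < costs[i]):
                keep[i] = False
                break
    return keep

rng = np.random.default_rng(5)
errs = rng.random((200, 2))            # e.g. (peak-flow error, volume error)
front = errs[pareto_front(errs)]
print(len(front), "non-dominated parameter sets")
```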

  14. Calibration and validation of a model describing complete autotrophic nitrogen removal in a granular SBR system

    Vangsgaard, Anna Katrine; Mutlu, Ayten Gizem; Gernaey, Krist

    2013-01-01

    BACKGROUND: A validated model describing the nitritation-anammox process in a granular sequencing batch reactor (SBR) system is an important tool for: a) design of future experiments and b) prediction of process performance during optimization, while applying process control, or during system scale-up. RESULTS: A model was calibrated using a step-wise procedure customized for the specific needs of the system. The important steps in the procedure were initialization, steady-state and dynamic calibration, and validation. A fast and effective initialization approach was developed to approximate pseudo ... screening of the parameter space proposed by Sin et al. (2008) to find the best fit of the model to dynamic data. Finally, the calibrated model was validated with an independent data set. CONCLUSION: The presented calibration procedure is the first customized procedure for this type of system.

  15. Modeling, Calibration and Control for Extreme-Precision MEMS Deformable Mirrors, Phase I

    National Aeronautics and Space Administration — Iris AO will develop electromechanical models and actuator calibration methods to enable open-loop control of MEMS deformable mirrors (DMs) with unprecedented...

  16. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    Elsheikh, A. H.; Wheeler, M. F.; Hoteit, Ibrahim

    2013-01-01

    Calibration of subsurface flow models is an essential step for managing ground water aquifers, designing of contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known

  17. Radiolytic modelling of spent fuel oxidative dissolution mechanism. Calibration against UO2 dynamic leaching experiments

    Merino, J.; Cera, E.; Bruno, J.; Quinones, J.; Casas, I.; Clarens, F.; Gimenez, J.; Pablo, J. de; Rovira, M.; Martinez-Esparza, A.

    2005-01-01

    Calibration and testing are inherent aspects of any modelling exercise and consequently they are key issues in developing a model for the oxidative dissolution of spent fuel. In the present work we present the outcome of the calibration process for the kinetic constants of a UO2 oxidative dissolution mechanism developed for use in a radiolytic model. Experimental data obtained in dynamic leaching experiments of unirradiated UO2 have been used for this purpose. The iterative calibration process has provided some insight into the detailed mechanism taking place in the alteration of UO2, particularly the role of ·OH radicals and their interaction with the carbonate system. The results show that, although more simulations are needed for testing in different experimental systems, the calibrated oxidative dissolution mechanism could be included in radiolytic models to gain confidence in the prediction of the long-term alteration rate of the spent fuel under repository conditions.

  18. Evaluation of Uncertainties in hydrogeological modeling and groundwater flow analyses. Model calibration

    Ijiri, Yuji; Ono, Makoto; Sugihara, Yutaka; Shimo, Michito; Yamamoto, Hajime; Fumimura, Kenichi

    2003-03-01

    This study evaluates uncertainty in hydrogeological modeling and groundwater flow analysis. Three-dimensional groundwater flow at the Shobasama site in Tono was analyzed using two continuum models and one discontinuum model. The model domain covered an area of four kilometers in the east-west direction and six kilometers in the north-south direction. Moreover, to evaluate how the uncertainties in the hydrogeological structure model and the groundwater flow simulation results decreased as the investigation progressed, the models were updated and calibrated for several hydrogeological modeling and groundwater flow analysis techniques, based on newly acquired information and knowledge. The findings are as follows. When parameters and structures were set during the model updates following the previous year's conditions, no major differences arose between the modeling methods. Model calibration was performed by matching numerical simulations to observations of the pressure response caused by the opening and closing of a packer in the MIU-2 borehole. Each analysis technique reduced the residual sum of squares between observations and simulation results by adjusting hydrogeological parameters; however, the models adjusted different parameters, such as hydraulic conductivity, effective porosity, specific storage, and anisotropy. When calibrating models, it is sometimes impossible to explain the phenomena by adjusting parameters alone; in such cases, further investigation may be required to clarify the details of the hydrogeological structure. Comparison of the research from its beginning to this year leads to the following conclusions about the investigation: (1) Transient hydraulic data are an effective means of reducing the uncertainty of the hydrogeological structure. (2) Effective porosity for calculating pore water velocity of

  19. Improvement, calibration and validation of a distributed hydrological model over France

    P. Quintana Seguí

    2009-02-01

    The hydrometeorological model SAFRAN-ISBA-MODCOU (SIM) computes water and energy budgets on the land surface, riverflows, and the levels of several aquifers at the scale of France. SIM is composed of a meteorological analysis system (SAFRAN), a land surface model (ISBA), and a hydrogeological model (MODCOU). In this study, an exponential profile of hydraulic conductivity at saturation is introduced into the model and its impact analysed. It is also studied how calibration modifies the performance of the model. A very simple calibration method is implemented and applied to the parameters of hydraulic conductivity and subgrid runoff. The study shows that a better description of the hydraulic conductivity of the soil is important for simulating more realistic discharges. It also shows that the calibrated model is more robust than the original SIM. In fact, the calibration mainly affects the processes related to the dynamics of the flow (drainage and runoff), while the other relevant processes (such as evaporation) remain stable. It is also shown that it is only worth introducing the new empirical parameterization of hydraulic conductivity if it is accompanied by a calibration of its parameters; otherwise the simulations can be degraded. In conclusion, the new parameterization is necessary to obtain good simulations, and calibration is a tool that must be used to improve the performance of distributed models like SIM that have empirical parameters.
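
    The exponential profile of saturated hydraulic conductivity mentioned above can be sketched as follows; the decay factor and surface value are illustrative, not the values calibrated for SIM.

```python
# K_sat(z) = K_sat(0) * exp(-f * z): conductivity decays with depth.
import numpy as np

def ksat_profile(z, ksat_surface=1e-5, f=2.0):
    """Saturated hydraulic conductivity at depth z (metres, positive down)."""
    return ksat_surface * np.exp(-f * np.asarray(z))

depths = np.array([0.0, 0.1, 0.5, 1.0, 2.0])
print(ksat_profile(depths))   # conductivity drops sharply with depth
```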

  20. Predictive sensor based x-ray calibration using a physical model

    Fuente, Matias de la; Lutz, Peter; Wirtz, Dieter C.; Radermacher, Klaus

    2007-01-01

    Many computer-assisted surgery systems are based on intraoperative X-ray images. To achieve reliable and accurate results, these images have to be calibrated with respect to geometric distortions, which can be divided into constant distortions and distortions caused by magnetic fields. Instead of using an intraoperative calibration phantom that has to be visible within each image, resulting in overlaying markers, the presented approach directly takes advantage of the physical background of the distortions. Based on a computed physical model of an image intensifier and a magnetic field sensor, an online compensation of distortions can be achieved without the need for an intraoperative calibration phantom. The model has to be adapted once to each specific image intensifier through calibration, which is based on an optimization algorithm that systematically alters the physical model parameters until a minimal error is reached. Once calibrated, the model is able to predict the distortions caused by the measured magnetic field vector and build an appropriate dewarping function. The time needed for model calibration is not yet optimized and takes up to 4 h on a 3 GHz CPU. In contrast, the time needed for distortion correction is less than 1 s and is therefore absolutely acceptable for intraoperative use. First evaluations showed that the model-based dewarping algorithm reduced the distortions of an XRII with a 21 cm FOV by approximately 80%, to a remaining error of 0.45 mm (max) and 0.19 mm (rms).

  1. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Forsmark

    Aneljung, Maria; Gustafsson, Lars-Goeran

    2007-04-01

    Differences in the aquifer refilling process subsequent to dry periods are observed, for example a refill that is too slow when the groundwater table rises after dry summers. This may be due to local deviations in the applied pF-curves in the unsaturated zone description. Differences in near-surface groundwater elevations are also observed. For example, the calculated groundwater level reaches the ground surface during the fall and spring at locations where the measured groundwater depth is just below the ground surface. This may be due to the presence of near-surface high-conductive layers. A sensitivity analysis has been made on calibration parameters. For parameters that have 'global' effects, such as the hydraulic conductivity in the saturated zone, the analysis was performed using the 'full' model. For parameters with more local effects, such as parameters influencing the evapotranspiration and the net recharge, the model was scaled down to a column model representing two different type areas. The most important conclusions that can be drawn from the sensitivity analysis are the following: The results indicate that the horizontal hydraulic conductivity generally should be increased at topographic highs, and reduced at local depressions in the topography. The results indicate that no changes should be made to the vertical hydraulic conductivity at locations where the horizontal conductivity has been increased, and that the vertical conductivity generally should be decreased where the horizontal conductivity has been decreased. The vegetation parameters that have the largest influence on the total groundwater recharge are the root mass distribution and the crop coefficient. The unsaturated zone parameter that has the largest influence on the total groundwater recharge is the effective porosity given in the pF-curve. In addition, the shape of the pF-curve above the water content at field capacity is also of great importance. The general conclusion is that the surrounding conditions have large effects on water

  2. The effects of model complexity and calibration period on groundwater recharge simulations

    Moeck, Christian; Van Freyberg, Jana; Schirmer, Mario

    2017-04-01

    A significant number of groundwater recharge models exist that vary in terms of complexity (i.e., structure and parametrization). Typically, model selection and conceptualization are very subjective and can be a key source of uncertainty in recharge simulations. Another source of uncertainty is the implicit assumption that model parameters, calibrated over historical periods, are also valid for the simulation period. To the best of our knowledge there is no systematic evaluation of the effect of model complexity and calibration strategy on the performance of recharge models. To address this gap, we utilized a long-term recharge data set (20 years) from a large weighing lysimeter. We performed a differential split-sample test with four groundwater recharge models that vary in terms of complexity. They were calibrated using six calibration periods with climatically contrasting conditions in a constrained Monte Carlo approach. Despite the climatically contrasting conditions, all models performed similarly well during calibration. However, during validation a clear effect of the model structure on model performance was evident. The more complex, physically based models predicted recharge best, even when the calibration and prediction periods had very different climatic conditions. In contrast, the simpler soil-water balance and lumped models performed poorly under such conditions. For these models we found a strong dependency on the chosen calibration period. In particular, our analysis showed that this can have relevant implications when using recharge models as decision-making tools in a broad range of applications (e.g. water availability, climate change impact studies, water resource management, etc.).

  3. Regional forecasting with global atmospheric models

    Crowley, T.J.; North, G.R.; Smith, N.R.

    1994-05-01

    The scope of the report is to present the results of the fourth year's work on the atmospheric modeling part of the global climate studies task. The development testing of computer models and initial results are discussed. The appendices contain studies that provide supporting information and guidance to the modeling work and further details on computer model development. Complete documentation of the models, including user information, will be prepared under separate reports and manuals

  4. Multivariate Calibration Models for Sorghum Composition using Near-Infrared Spectroscopy

    Wolfrum, E.; Payne, C.; Stefaniak, T.; Rooney, W.; Dighe, N.; Bean, B.; Dahlberg, J.

    2013-03-01

    NREL developed calibration models based on near-infrared (NIR) spectroscopy coupled with multivariate statistics to predict compositional properties relevant to cellulosic biofuels production for a variety of sorghum cultivars. A robust calibration population was developed in an iterative fashion. The quality of models developed using the same sample geometry on two different types of NIR spectrometers and two different sample geometries on the same spectrometer did not vary greatly.

  5. [Outlier sample discriminating methods for building calibration model in melons quality detecting using NIR spectra].

    Tian, Hai-Qing; Wang, Chun-Guang; Zhang, Hai-Jun; Yu, Zhi-Hong; Li, Jian-Kang

    2012-11-01

    Outlier samples strongly influence the precision of the calibration model in the measurement of the soluble solids content of melons using NIR spectra. According to the possible sources of outlier samples, three methods (predicted concentration residual test; Chauvenet test; leverage and studentized residual test) were used to discriminate these outliers. Nine suspicious outliers were detected in the calibration set, which included 85 fruit samples. Because the nine suspicious outlier samples might contain some non-outlier samples, they were returned to the model one by one to see whether they influenced the model and its prediction precision. In this way, five samples that were helpful to the model rejoined the calibration set, and a new model was developed with a correlation coefficient (r) of 0.889 and a root mean square error of calibration (RMSEC) of 0.601 degrees Brix. For 35 unknown samples, the root mean square error of prediction (RMSEP) was 0.854 degrees Brix. The performance of this model was better than that of the model developed with no outliers eliminated from the calibration set (r = 0.797, RMSEC = 0.849 degrees Brix, RMSEP = 1.19 degrees Brix), and more representative and stable than the model with all nine samples eliminated from the calibration set (r = 0.892, RMSEC = 0.605 degrees Brix, RMSEP = 0.862 degrees Brix).
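
    The leverage and studentized residual test named above can be sketched for a linear calibration as follows; the 2×mean-leverage and |t| > 2 cut-offs are common heuristics, not necessarily those used in the paper.

```python
# Flag suspicious calibration samples via leverage and studentized residuals.
import numpy as np

rng = np.random.default_rng(9)
X = np.column_stack([np.ones(30), rng.random(30)])   # design matrix with intercept
y = X @ np.array([1.0, 4.0]) + rng.normal(0, 0.1, 30)
y[7] += 2.0                                          # plant one outlier

H = X @ np.linalg.inv(X.T @ X) @ X.T                 # hat matrix
h = np.diag(H)                                       # leverages
resid = y - H @ y
s2 = resid @ resid / (len(y) - X.shape[1])           # residual variance
t = resid / np.sqrt(s2 * (1 - h))                    # internally studentized residuals

suspects = np.where((h > 2 * h.mean()) | (np.abs(t) > 2))[0]
print(suspects)   # sample 7 should be flagged
```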

  6. Global-warming forecasting models

    Moeller, K.P.

    1992-01-01

    In spite of an annual man-made quantity of about 20 billion tons, carbon dioxide has remained a trace gas in the atmosphere (350 ppm at present). The reliability of model calculations which forecast temperatures is discussed in view of the world-wide increase in carbon dioxide. Computer simulations reveal a general, serious threat to the future of mankind.

  7. Our calibrated model has poor predictive value: An example from the petroleum industry

    Carter, J.N.; Ballester, P.J.; Tavassoli, Z.; King, P.R.

    2006-01-01

    It is often assumed that once a model has been calibrated to measurements it will have some level of predictive capability, although this may be limited. If the model does not have predictive capability, then the assumption is that the model needs to be improved in some way. Using an example from the petroleum industry, we show that cases can exist where calibrated models have limited predictive capability. This occurs even when there is no modelling error present. It is also shown that the introduction of a small modelling error can make it impossible to obtain any models with useful predictive capability. We have been unable to find ways of identifying which calibrated models will have some predictive capacity and which will not.

  8. Efficient Calibration of Distributed Catchment Models Using Perceptual Understanding and Hydrologic Signatures

    Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.

    2015-12-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore to simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels) may also be used to identify behavioural models by constraining spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information to identify a behavioural region of state space, and to efficiently search a large, complex parameter space for behavioural parameter sets whose predictions fall within this region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties; these intervals are then combined to produce a hyper-volume in state space (see the sketch below). The search is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate, the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK, by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
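    A minimal sketch of the interval-based acceptance step described in this abstract: a parameter set is behavioural only if every metric lies inside its uncertainty-aware interval. The metric names and limits are hypothetical.

```python
# Interval ("limits of acceptability") check: accept a parameter set only if
# every metric falls inside its interval. Names and bounds are hypothetical.
limits = {"runoff_ratio": (0.30, 0.45),          # from a measured signature
          "baseflow_index": (0.55, 0.75),        # from a regionalised signature
          "gw_ordering_score": (0.80, 1.00)}     # from perceptual understanding

def is_behavioural(metrics, limits):
    return all(lo <= metrics[k] <= hi for k, (lo, hi) in limits.items())

candidate = {"runoff_ratio": 0.38, "baseflow_index": 0.61, "gw_ordering_score": 0.90}
print(is_behavioural(candidate, limits))         # True -> keep this parameter set
```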

  11. Calibration of Mine Ventilation Network Models Using the Non-Linear Optimization Algorithm

    Guang Xu

    2017-12-01

    Effective ventilation planning is vital to underground mining. To ensure stable operation of the ventilation system and to avoid airflow disorder, mine ventilation network (MVN) models have been widely used in simulating and optimizing the mine ventilation system. However, one of the challenges in MVN model simulation is that the simulated airflow distribution does not match the measured data. To solve this problem, a simple and effective calibration method is proposed based on a non-linear optimization algorithm. The calibrated model not only brings the simulated airflow distribution into agreement with the on-site measured data, but also keeps the errors in the other parameters within a minimal range. The proposed method was then applied to calibrate an MVN model for a real case, built from ventilation survey results using the Ventsim software. Finally, airflow simulations were carried out with the data before and after calibration, and the results were compared and analyzed: the simulated airflows in the calibrated model agreed much better with the ventilation survey data, which verifies the effectiveness of the calibration method.
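    A toy version of this fitting step, assuming a hypothetical two-branch parallel network under Atkinson's square law (dp = RQ²) rather than the paper's full network model; branch resistances are adjusted so simulated airflows match survey data:

```python
# Non-linear least-squares calibration of a toy two-branch parallel network;
# all numbers are illustrative, not from the paper's case study.
import numpy as np
from scipy.optimize import least_squares

Q_total = 100.0                        # m^3/s entering the parallel split
q_measured = np.array([62.0, 38.0])    # surveyed branch airflows

def simulate(R):
    w = 1.0 / np.sqrt(R)               # parallel split: Q_i proportional to 1/sqrt(R_i)
    return Q_total * w / w.sum()

def residuals(logR):
    return simulate(np.exp(logR)) - q_measured   # log-parameters keep R > 0

sol = least_squares(residuals, x0=np.log([0.5, 0.5]))
print("calibrated resistances:", np.exp(sol.x))  # only their ratio is identified here
print("simulated airflows:", simulate(np.exp(sol.x)))
```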

  12. Development and Calibration of a Model for the Determination of Hurricane Wind Speed Field at the Peninsula of Yucatan

    L.E. Fernández–Baqueiro

    2009-01-01

    In this work a model to calculate the wind speed field produced by hurricanes that hit the Yucatan Peninsula is developed. The model variables are calculated using recently developed equations that include new advances in meteorology. The steps in the model are described and implemented in a computer program to systematize and facilitate its use. The model and the program are calibrated using two databases: the first includes trajectories and maximum wind velocities of hurricanes; the second includes records of wind velocities obtained from the Automatic Meteorology Stations of the National Meteorology Service. The hurricane wind velocity field is calculated using the model and information from the first database, and the model results are compared with field data from the second database. The model is calibrated by adjusting Holland's radial pressure profile parameter B; this is carried out for three hurricane records: Isidore, Emily and Wilma. It is concluded that a value of B of 1.3 fits all three hurricane records globally and that the developed model is capable of reproducing the wind velocity records satisfactorily.
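    For reference, a hedged sketch of the Holland (1980) radial wind profile whose shape parameter B is calibrated above, here in its cyclostrophic form (Coriolis neglected); the storm parameters are illustrative, not those of Isidore, Emily or Wilma:

```python
# Holland-type radial wind profile; B is the calibrated shape parameter.
import numpy as np

def holland_wind(r_km, B=1.3, dp_hpa=60.0, r_max_km=30.0, rho=1.15):
    """Gradient-level wind speed (m/s) at radius r_km from the storm centre."""
    dp = dp_hpa * 100.0                          # hPa -> Pa
    x = (r_max_km / np.asarray(r_km, float)) ** B
    return np.sqrt(B * dp / rho * x * np.exp(-x))

r = np.array([10.0, 30.0, 60.0, 120.0])          # km
print(np.round(holland_wind(r), 1))              # peaks near r_max, decays outward
```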

  13. A hierarchical analysis of terrestrial ecosystem model Biome-BGC: Equilibrium analysis and model calibration

    Thornton, Peter E [ORNL]; Wang, Weile [ORNL]; Law, Beverly E. [Oregon State University]; Nemani, Ramakrishna R [NASA Ames Research Center]

    2009-01-01

    The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers by a set of equilibrium equations that are derived from Biome-BGC algorithms and are based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations are able to estimate carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis, and that the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrating/analyzing Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.
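    The equilibrium idea can be illustrated for first-order pools with constant inputs, where the steady state solves a linear system directly instead of integrating for thousands of model years; a minimal sketch with hypothetical pools, rates and transfer fractions (not Biome-BGC's actual pool structure):

```python
# Steady-state pools from mass balance: dC/dt = A C + b = 0  ->  C* = solve(A, -b).
import numpy as np

k = np.array([2.0, 0.2, 0.02])   # 1/yr decay rates: litter, fast soil, slow soil
f = np.array([0.4, 0.3])         # fraction of decayed C passed to the next pool
I = 5.0                          # tC/ha/yr input from the plant tier (litterfall)

A = np.diag(-k)
A[1, 0] = f[0] * k[0]            # litter -> fast soil transfer
A[2, 1] = f[1] * k[1]            # fast soil -> slow soil transfer
b = np.array([I, 0.0, 0.0])
C_star = np.linalg.solve(A, -b)
print("steady-state pools (tC/ha):", np.round(C_star, 2))   # [2.5, 10.0, 30.0]
```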

  14. Qualitative models of global warming amplifiers

    Milošević, U.; Bredeweg, B.; de Kleer, J.; Forbus, K.D.

    2010-01-01

    There is growing interest from ecological experts to create qualitative models of phenomena for which numerical information is sparse or missing. We present a number of successful models in the field of environmental science, namely, the domain of global warming. The motivation behind the effort is

  15. Technology Learning Ratios in Global Energy Models

    Varela, M.

    2001-01-01

    The introduction of a new technology implies that, as its production and use increase, its operation improves and its investment and production costs decrease. The accumulation of experience and learning with a new technology grows in parallel with the increase of its market share. This process is represented by technological learning curves, and the energy sector is no exception to this pattern of substitution of old technologies by new ones. The present paper carries out a brief review of the main energy models that include technology dynamics (learning). The energy scenarios developed by global energy models assume that the characteristics of technologies vary with time, but this trend is traditionally incorporated in an exogenous way in these energy models, that is to say, as a function of time only. This practice is applied to the cost indicators of a technology, such as specific investment costs, or to the efficiency of energy technologies. In recent years, the new concept of endogenous technological learning has been integrated within these global energy models. This paper examines the concept of technological learning in global energy models. It also analyses the technological dynamics of the energy system, including endogenous modelling of the process of technological progress. Finally, it compares several of the most widely used global energy models (MARKAL, MESSAGE and ERIS) and, more concretely, the use these models make of the concept of technological learning. (Author) 17 refs
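    The one-factor learning curve underlying such endogenous-learning formulations relates unit cost to cumulative capacity; a minimal sketch with illustrative numbers:

```python
# One-factor learning curve: unit cost falls by a fixed fraction with each
# doubling of cumulative capacity. Numbers are illustrative.
from math import log2

def unit_cost(x, c0=1000.0, progress_ratio=0.8):
    """c(x) = c0 * x**(-b) with 2**(-b) = progress_ratio, i.e. an 80%
    curve gives a 20% cost reduction per doubling of capacity x."""
    b = -log2(progress_ratio)
    return c0 * x ** (-b)

for x in (1, 2, 4, 8):
    print(x, round(unit_cost(x), 1))   # 1000.0, 800.0, 640.0, 512.0
```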

  16. Nonlinear propagation model for ultrasound hydrophones calibration in the frequency range up to 100 MHz.

    Radulescu, E G; Wójcik, J; Lewin, P A; Nowicki, A

    2003-06-01

    To facilitate the implementation and verification of the new ultrasound hydrophone calibration techniques described in the companion paper (elsewhere in this issue), a nonlinear propagation model was developed. A brief outline of the theoretical considerations is presented and the model's advantages and disadvantages are discussed. The results of simulations yielding spatial and temporal acoustic pressure amplitude are also presented and compared with those obtained using the KZK and Field II models. Excellent agreement between all models is demonstrated. The applicability of the model in discrete wideband calibration of hydrophones is documented in the companion paper elsewhere in this volume.

  17. Modelling Machine Tools using Structure Integrated Sensors for Fast Calibration

    Benjamin Montavon

    2018-02-01

    Monitoring of the relative deviation between commanded and actual tool tip position, which limits the volumetric performance of the machine tool, enables the use of contemporary methods of compensation to reduce tolerance mismatch and the uncertainties of on-machine measurements. The development of a primarily optical sensor setup capable of being integrated into the machine structure without limiting its operating range is presented. The use of a frequency-modulating interferometer and photosensitive arrays in combination with a Gaussian laser beam allows for fast and automated online measurements of the axes' motion errors and thermal conditions with comparable accuracy, lower cost, and smaller dimensions compared to state-of-the-art optical measuring instruments for offline machine tool calibration. The development is tested through simulation of the sensor setup based on raytracing and Monte-Carlo techniques.

  18. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry, in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those which could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as a part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of country-specific vulnerability modifiers, by use of past damage observations in the country. The Benouar (1994) ground motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for client market portfolio align with the
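    The rebuilding-cost factors quoted above translate a distribution over EMS-98 damage grades into an expected loss ratio; a minimal worked example, with hypothetical damage-grade probabilities standing in for a vulnerability model's output:

```python
# Expected loss ratio = probability-weighted sum of damage-grade cost factors.
cost_factor = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}  # from the abstract
p_grade = {1: 0.30, 2: 0.25, 3: 0.20, 4: 0.10, 5: 0.05}      # hypothetical

mean_damage_ratio = sum(p_grade[g] * cost_factor[g] for g in cost_factor)
print(f"expected loss as a fraction of rebuilding cost: {mean_damage_ratio:.3f}")
```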

  19. Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner

    Chengyi Yu

    2017-01-01

    A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed using a one-mirror galvanometer element as the mechanical device that drives the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometric laser scanner without any position assumptions, and a model-driven calibration procedure is then proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields measurement performance similar to a look-up table calibration method.

  20. Calibrating the Johnson-Holmquist Ceramic Model for SiC Using CTH

    Cazamias, J. U.; Bilyk, S. R.

    2009-01-01

    The Johnson-Holmquist ceramic material model has been calibrated and successfully applied to numerically simulate ballistic events using the Lagrangian code EPIC. While the majority of the constants are "physics" based, two of the constants for the failed material response are calibrated using ballistic experiments conducted on a confined cylindrical ceramic target. The maximum strength of the failed ceramic is calibrated by matching the penetration velocity. The second constant is the equivalent plastic strain at failure under constant pressure and is calibrated using the dwell time. Use of these two constants in the CTH Eulerian hydrocode does not reproduce the ballistic response. This difference may be due to the phenomenological nature of the model and the different numerical schemes used by the codes. This paper determines the aforementioned material constants for SiC suitable for simulating ballistic events using CTH.

  1. A case study on robust optimal experimental design for model calibration of ω-Transaminase

    Van Daele, Timothy; Van Hauwermeiren, Daan; Ringborg, Rolf Hoffmeyer

    Proper calibration of models describing enzyme kinetics can be quite challenging. This is especially the case for more complex models like transaminase models (Shin and Kim, 1998). The latter fitted model parameters, but the confidence on the parameter estimation was not derived. Hence... the experimental space. However, it is expected that more informative experiments can be designed to increase the confidence of the parameter estimates. Therefore, we apply Optimal Experimental Design (OED) to the calibrated model of Shin and Kim (1998). The total number of samples was retained to allow fair... "true" parameter values are not known before finishing the model calibration. However, it is important that the chosen parameter values are close to the real parameter values, otherwise the OED can possibly yield non-informative experiments. To counter this problem, one can use robust OED. The idea of robust OED...

  2. Radiometric modeling and calibration of the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) ground based measurement experiment

    Tian, Jialin; Smith, William L.; Gazarik, Michael J.

    2008-12-01

    The ultimate remote sensing benefits of high resolution infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and pollutant and greenhouse gas constituents to be observed, for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The GIFTS calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts, therefore enhancing the absolute calibration accuracy. This method is applied to data collected during the GIFTS Ground Based Measurement (GBM) experiment, together with simultaneous observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both simultaneously zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan, Utah on September 13, 2006. The accurately calibrated GIFTS radiances are produced using the first four PC scores in the GIFTS-AERI regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period. Using the GIFTS GBM calibration model, we compute the calibrated radiances from data

  3. Deriving global parameter estimates for the Noah land surface model using FLUXNET and machine learning

    Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.

    2016-11-01

    With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin Hypercube-based ensemble of the most sensitive parameters, determined by the Sobol method, to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that there is the potential to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.
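    A minimal sketch of the regionalization step described here: fit an Extra-Trees regressor from site attributes to calibrated parameter sets, then predict parameters at unmonitored grid cells. The features and data are synthetic placeholders:

```python
# Map calibrated parameter sets to environmental attributes with Extra-Trees.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(1)
X_sites = rng.uniform(size=(85, 4))     # climate/soil/vegetation attributes, 85 sites
theta = rng.uniform(size=(85, 3))       # calibrated (rs_min, Czil, fxexp) per site

model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X_sites, theta)
X_grid = rng.uniform(size=(10, 4))      # attributes of unmonitored grid cells
print(model.predict(X_grid).shape)      # (10, 3): mapped parameter sets
```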

  4. Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates

    Todorovic, Andrijana; Plavsic, Jasna

    2015-04-01

    A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), the optimisation method, and the calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which conditions the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of the parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by a year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts with the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows and for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or in flows; however, there is a statistically significant increasing trend in temperatures in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters

  5. New global ICT-based business models

    The New Global Business model (NEWGIBM) book describes the background, theory references, case studies, results and learning imparted by the NEWGIBM project, which is supported by ICT, to a research group during the period from 2005-2011. The book is a result of the efforts and the collaborative... The NEWGIBM book serves as a part of the final evaluation and documentation of the NEWGIBM project and is supported by results from the following projects: M-commerce, Global Innovation, Global Ebusiness & M-commerce, The Blue Ocean project, International Center for Innovation and Women in Business, NEFFICS... The NEWGIBM Cases Show? The Strategy Concept in Light of the Increased Importance of Innovative Business Models; Successful Implementation of Global BM Innovation; Globalisation Of ICT Based Business Models: Today And In 2020...

  6. Modeling Global Biogenic Emission of Isoprene: Exploration of Model Drivers

    Alexander, Susan E.; Potter, Christopher S.; Coughlan, Joseph C.; Klooster, Steven A.; Lerdau, Manuel T.; Chatfield, Robert B.; Peterson, David L. (Technical Monitor)

    1996-01-01

    Vegetation provides the major source of isoprene emission to the atmosphere. We present a modeling approach to estimate global biogenic isoprene emission. The isoprene flux model is linked to a process-based computer simulation model of biogenic trace-gas fluxes that operates on scales linking regional and global data sets and ecosystem nutrient transformations. Isoprene emission estimates are determined from estimates of ecosystem-specific biomass, emission factors, and algorithms based on light and temperature. Our approach differs from an existing modeling framework by including the process-based global model for terrestrial ecosystem production, satellite-derived ecosystem classification, and isoprene emission measurements from a tropical deciduous forest. We explore the sensitivity of model estimates to input parameters. The resulting emission products from the global 1 degree x 1 degree coverage provided by the satellite datasets and the process model allow flux estimation across large spatial scales and enable direct linkage to atmospheric models of trace-gas transport and transformation.
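    A hedged sketch of the kind of light and temperature algorithms referred to above, using the widely cited Guenther et al. (1993) activity factors; the constants are the commonly published ones, while the emission factor and driving values are invented for illustration:

```python
# Guenther-type isoprene activity factors; flux = emission_factor * gL * gT.
import numpy as np

ALPHA, C_L1 = 0.0027, 1.066            # light-response constants
C_T1, C_T2 = 95_000.0, 230_000.0       # J/mol
T_S, T_M, R = 303.0, 314.0, 8.314      # K, K, J/mol/K

def gamma_light(L):                    # L: PAR, umol m-2 s-1
    return ALPHA * C_L1 * L / np.sqrt(1.0 + ALPHA**2 * L**2)

def gamma_temp(T):                     # T: leaf temperature, K
    num = np.exp(C_T1 * (T - T_S) / (R * T_S * T))
    den = 1.0 + np.exp(C_T2 * (T - T_M) / (R * T_S * T))
    return num / den

emission_factor = 24.0                 # ug C g-1 h-1, hypothetical ecosystem value
flux = emission_factor * gamma_light(1000.0) * gamma_temp(303.0)
print(round(flux, 1))                  # close to emission_factor at standard conditions
```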

  7. The Wally plot approach to assess the calibration of clinical prediction models.

    Blanche, Paul; Gerds, Thomas A; Ekstrøm, Claus T

    2017-12-06

    A prediction model is calibrated if, roughly, for any percentage x we can expect that x subjects out of 100 experience the event among all subjects that have a predicted risk of x%. Typically, the calibration assumption is assessed graphically but in practice it is often challenging to judge whether a "disappointing" calibration plot is the consequence of a departure from the calibration assumption, or alternatively just "bad luck" due to sampling variability. We propose a graphical approach which enables the visualization of how much a calibration plot agrees with the calibration assumption to address this issue. The approach is mainly based on the idea of generating new plots which mimic the available data under the calibration assumption. The method handles the common non-trivial situations in which the data contain censored observations and occurrences of competing events. This is done by building on ideas from constrained non-parametric maximum likelihood estimation methods. Two examples from large cohort data illustrate our proposal. The 'wally' R package is provided to make the methodology easily usable.

  8. Calibration of a rainfall-runoff hydrological model and flood simulation using data assimilation

    Piacentini, A.; Ricci, S. M.; Thual, O.; Coustau, M.; Marchandise, A.

    2010-12-01

    Rainfall-runoff models are crucial tools for long-term assessment of flash floods and real-time forecasting. This work focuses on the calibration of a distributed, parsimonious, event-based rainfall-runoff model using data assimilation. The model combines an SCS-derived runoff model and a Lag and Route routing model for each cell of a regular grid mesh. The SCS-derived runoff model is parametrized by the initial water deficit, the discharge coefficient for the soil reservoir and a lagged discharge coefficient. The Lag and Route routing model is parametrized by the travel velocity and the lag parameter. These parameters are assumed to be constant for a given catchment except for the initial water deficit and the travel velocity, which are event-dependent (land use, soil type and initial moisture conditions). In the present work, a BLUE filtering technique was used to calibrate the initial water deficit and the travel velocity for each flood event, assimilating the first available discharge measurements at the catchment outlet. The advantages of the BLUE algorithm are its low computational cost and its convenient implementation, especially in the context of the calibration of a reduced number of parameters. The assimilation algorithm was applied to two Mediterranean catchments of different size and dynamics: Gardon d'Anduze and Lez. The Lez catchment, of 114 km2 drainage area, is located upstream of Montpellier. It is a karstic catchment mainly affected by floods in autumn during intense rainstorms, with short lag times and high discharge peaks (up to 480 m3.s-1 in September 2005). The Gardon d'Anduze catchment, mostly granitic and schistose, of 545 km2 drainage area, lies over the départements of Lozère and Gard. It is often affected by flash and devastating floods (up to 3000 m3.s-1 in September 2002). The discharge observations at the beginning of the flood event are assimilated so that the BLUE algorithm provides optimal values for the initial water deficit and the
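    A minimal sketch of a BLUE analysis step of the kind described, updating the two event-dependent parameters from early discharge observations; the linearised observation operator and all covariances are invented for illustration:

```python
# BLUE update: x_a = x_b + K (y - H x_b),  K = B H^T (H B H^T + R)^-1.
import numpy as np

x_b = np.array([50.0, 1.5])            # background: initial deficit (mm), velocity (m/s)
B = np.diag([15.0**2, 0.3**2])         # background-error covariance
y = np.array([12.0, 20.0])             # first discharge observations (m^3/s)
H = np.array([[-0.15, 4.0],            # linearised observation operator
              [-0.20, 8.0]])           # (tangent-linear of the rainfall-runoff model)
R = np.diag([1.0, 1.5])                # observation-error covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ (y - H @ x_b)
print("analysed parameters:", np.round(x_a, 2))
```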

  10. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    Robertson, J.; Polly, B.; Collis, J.

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.

  11. Astronomically calibrated 40Ar/39Ar age for the Toba supereruption and global synchronization of late Quaternary records

    Storey, Michael; Roberts, Richard G.; Saidin, Mokhtar

    2012-11-01

    The Toba supereruption in Sumatra, ∼74 thousand years (ka) ago, was the largest terrestrial volcanic event of the Quaternary. Ash and sulfate aerosols were deposited in both hemispheres, forming a time-marker horizon that can be used to synchronize late Quaternary records globally. A precise numerical age for this event has proved elusive, with dating uncertainties larger than the millennial-scale climate cycles that characterized this period. We report an astronomically calibrated 40Ar/39Ar age of 73.88 ± 0.32 ka (1σ, full external errors) for sanidine crystals extracted from Toba deposits in the Lenggong Valley, Malaysia, 350 km from the eruption source and 6 km from an archaeological site with stone artifacts buried by ash. If these artifacts were made by Homo sapiens, as has been suggested, then our age indicates that modern humans had reached Southeast Asia by ∼74 ka ago. Our 40Ar/39Ar age is an order of magnitude more precise than previous estimates, resolving the timing of the eruption to the middle of the cold interval between Dansgaard-Oeschger events 20 and 19, when a peak in sulfate concentration occurred as registered by Greenland ice cores. This peak is followed by a ∼10 °C drop in the Greenland surface temperature over ∼150 y, revealing the possible climatic impact of the eruption. Our 40Ar/39Ar age also provides a high-precision calibration point for other ice, marine, and terrestrial archives containing Toba sulfates and ash, facilitating their global synchronization at unprecedented resolution for a critical period in Earth and human history beyond the range of 14C dating.

  12. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models

    Karagiannis, Georgios; Lin, Guang

    2017-08-01

    For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixture method, which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input-dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means to combine the available computer models, in a flexible and principled manner, and perform reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique, suitable to the mixture model framework, that is able to mitigate the computational overhead due to the consideration of multiple computer models. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.

  13. Intercomparison of hydrological model structures and calibration approaches in climate scenario impact projections

    Vansteenkiste, Thomas; Tavakoli, Mohsen; Ntegeka, Victor; De Smedt, Florimond; Batelaan, Okke; Pereira, Fernando; Willems, Patrick

    2014-11-01

    The objective of this paper is to investigate the effects of hydrological model structure and calibration on climate change impact results in hydrology. The uncertainty in the hydrological impact results is assessed by the relative change in runoff volumes and peak and low flow extremes from historical and future climate conditions. The effect of the hydrological model structure is examined through the use of five hydrological models with different spatial resolutions and process descriptions, applied to a medium-sized catchment in Belgium. The models range from the lumped conceptual NAM, PDM and VHM models, through the intermediately detailed and distributed WetSpa model, to the fully distributed MIKE SHE model. The latter accounts for 3D groundwater processes and interacts bi-directionally with a full hydrodynamic MIKE 11 river model. After careful and manual calibration of these models, accounting for the accuracy of the peak and low flow extremes and runoff subflows, and the changes in these extremes under changing rainfall conditions, the five models respond in a similar way to the climate scenarios over Belgium. Future projections of peak flows are highly uncertain, with expected increases as well as decreases depending on the climate scenario. The projections of future low flows are more uniform: low flows decrease (up to 60%) for all models and for all climate scenarios. However, the uncertainties in the impact projections are high, mainly in the dry season. With respect to model structural uncertainty, the PDM model simulates significantly higher runoff peak flows under future wet scenarios, which is explained by its specific model structure. For the low flow extremes, the MIKE SHE model projects significantly lower low flows under dry scenario conditions in comparison to the other models, probably due to its large difference in process descriptions for the groundwater component and the groundwater-river interactions. The effect of the model

  14. Imaging 2015 Mw 7.8 Gorkha Earthquake and Its Aftershock Sequence Combining Multiple Calibrated Global Seismic Arrays

    LI, B.; Ghosh, A.

    2016-12-01

    The 2015 Mw 7.8 Gorkha earthquake provides a good opportunity to study the tectonics and earthquake hazards in the Himalayas, one of the most seismically active plate boundaries. Details of the seismicity patterns and associated structures in the Himalayas are poorly understood, mainly due to limited instrumentation. Here, we apply a back-projection method to study the mainshock rupture and the following aftershock sequence using four large-aperture global seismic arrays. All the arrays show eastward rupture propagation of about 130 km and reveal a similar evolution of seismic energy radiation, with a strong high-frequency energy burst about 50 km north of Kathmandu. Each single array, however, is typically limited by a large azimuthal gap, low resolution, and artifacts due to unmodeled velocity structures. Therefore, we use a self-consistent empirical calibration method to combine the four arrays to image the Gorkha event. This greatly improves the resolution, can better track the rupture, and reveals details that cannot be resolved by any individual array. In addition, we also use the same arrays at teleseismic distances and apply a back-projection technique to detect and locate the aftershocks immediately following the Gorkha earthquake. We detect about 2.5 times as many aftershocks as recorded by the Advanced National Seismic System comprehensive earthquake catalog during the 19 days following the mainshock. The aftershocks detected by the arrays show an east-west trend in general, with the majority located in the eastern part of the rupture patch and surrounding the rupture zone of the largest, Mw 7.3, aftershock. The overall spatiotemporal aftershock pattern agrees well with the global catalog, with our catalog showing more detail relative to the standard global catalog. The improved aftershock catalog enables us to better study the aftershock dynamics and stress evolution in this region. Moreover, rapid and better imaging of aftershock distribution may aid rapid response
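    A conceptual sketch of the core back-projection operation: shift each trace by the predicted travel time from a trial source and stack, so that a coherent source produces large stack power. The waveforms and delays below are synthetic:

```python
# Delay-and-stack back-projection on synthetic Gaussian arrivals.
import numpy as np

fs = 20.0                                   # Hz
t = np.arange(0, 60, 1 / fs)
true_delays = np.array([3.2, 5.7, 8.1])     # s, travel times to 3 stations
traces = np.array([np.exp(-((t - 20 - d) ** 2) / 0.5) for d in true_delays])

def stack_power(trial_delays):
    # remove each predicted delay, then stack and take peak power
    shifted = [np.interp(t, t - d, tr) for d, tr in zip(trial_delays, traces)]
    return np.max(np.sum(shifted, axis=0) ** 2)

print(stack_power(true_delays))              # coherent alignment: large
print(stack_power(true_delays + [0, 2, -2])) # misaligned trial source: small
```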

  15. Nitrous oxide emissions from cropland: a procedure for calibrating the DayCent biogeochemical model using inverse modelling

    Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.

    2013-01-01

    DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional "trial and error" approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residual differences between measured and modelled outputs by up to 67%. For the calibration period, simulation with the default model parameter values underestimated the mean daily N2O flux by 98%. After parameter estimation, the model underestimated the mean daily fluxes by 35%. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20% relative to the default simulation. The sensitivity analysis performed provides important insights into the model structure, providing guidance for model improvement.

  16. Identifying influential data points in hydrological model calibration and their impact on streamflow predictions

    Wright, David; Thyer, Mark; Westra, Seth

    2015-04-01

    Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostic tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically oriented diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression with inherent assumptions about the data and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this
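    A sketch of the analytical diagnostic discussed above: Cook's distance from a single least-squares fit, D_i = (r_i^2/p) * h_ii/(1 - h_ii), with r_i the internally studentized residual and h_ii the leverage. The rating-curve-like data are synthetic:

```python
# Cook's distance for every point of one OLS fit; no case deletion needed.
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 2
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])   # intercept + stage
y = X @ np.array([1.0, 2.5]) + rng.normal(scale=1.0, size=n)
y[7] += 8.0                                                # plant an influential point

H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)                                             # leverages
e = y - H @ y                                              # residuals
s2 = e @ e / (n - p)
r2 = e**2 / (s2 * (1 - h))                                 # squared studentized residuals
D = r2 / p * h / (1 - h)                                   # Cook's distance
print("most influential point:", int(np.argmax(D)))        # expect index 7
```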

  17. Calibration of uncertain inputs to computer models using experimentally measured quantities and the BMARS emulator

    Stripling, H.F.; McClarren, R.G.; Kuranz, C.C.; Grosskopf, M.J.; Rutter, E.; Torralva, B.R.

    2011-01-01

    We present a method for calibrating the uncertain inputs to a computer model using available experimental data. The goal of the procedure is to produce posterior distributions of the uncertain inputs such that when samples from the posteriors are used as inputs to future model runs, the model is more likely to replicate (or predict) the experimental response. The calibration is performed by sampling the space of the uncertain inputs, using the computer model (or, more likely, an emulator for the computer model) to assign weights to the samples, and applying the weights to produce the posterior distributions and generate predictions of new experiments within confidence bounds. The method is similar to the Markov chain Monte Carlo (MCMC) calibration methods with independent sampling with the exception that we generate samples beforehand and replace the candidate acceptance routine with a weighting scheme. We apply our method to the calibration of a Hyades 2D model of laser energy deposition in beryllium. We employ a Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator as a surrogate for Hyades 2D. We treat a range of uncertainties in our system, including uncertainties in the experimental inputs, experimental measurement error, and systematic experimental timing errors. The results of the calibration are posterior distributions that both agree with intuition and improve the accuracy and decrease the uncertainty in experimental predictions. (author)

  18. Regional forecasting with global atmospheric models

    Crowley, T.J.; North, G.R.; Smith, N.R.

    1994-05-01

    This report was prepared by the Applied Research Corporation (ARC), College Station, Texas, under subcontract to Pacific Northwest Laboratory (PNL) as part of a global climate studies task. The task supports site characterization work required for the selection of a potential high-level nuclear waste repository and is part of the Performance Assessment Scientific Support (PASS) Program at PNL. The work is under the overall direction of the Office of Civilian Radioactive Waste Management (OCRWM), US Department of Energy Headquarters, Washington, DC. The scope of the report is to present the results of the third year's work on the atmospheric modeling part of the global climate studies task. The development testing of computer models and initial results are discussed. The appendices contain several studies that provide supporting information and guidance to the modeling work and further details on computer model development. Complete documentation of the models, including user information, will be prepared under separate reports and manuals

  19. Assessing global vegetation activity using spatio-temporal Bayesian modelling

    Mulder, Vera L.; van Eck, Christel M.; Friedlingstein, Pierre; Regnier, Pierre A. G.

    2016-04-01

    This work demonstrates the potential of modelling vegetation activity using a hierarchical Bayesian spatio-temporal model. This approach allows modelling changes in vegetation and climate simultaneously in space and time. Changes in vegetation activity such as phenology are modelled as a dynamic process depending on climate variability in both space and time. Additionally, differences in observed vegetation status can be attributed to other abiotic ecosystem properties, e.g. soil and terrain properties. Although these properties do not change in time, they do change in space and may provide valuable information in addition to the climate dynamics. The spatio-temporal Bayesian models were calibrated at a regional scale because local trends in space and time can be better captured by the model. The regional subsets were defined according to the SREX segmentation, as defined by the IPCC. Each region is considered to be relatively homogeneous in terms of large-scale climate and biomes, while still capturing small-scale (grid-cell level) variability. Modelling within these regions is hence expected to be less uncertain, due to the absence of these large-scale patterns, compared to a global approach. This overall modelling approach allows the comparison of model behavior for the different regions and may provide insights on the main dynamic processes driving the interaction between vegetation and climate within different regions. The data employed in this study encompass the global datasets for soil properties (SoilGrids), terrain properties (Global Relief Model based on SRTM DEM and ETOPO), monthly time series of satellite-derived vegetation indices (GIMMS NDVI3g) and climate variables (Princeton Meteorological Forcing Dataset). The findings proved the potential of a spatio-temporal Bayesian modelling approach for assessing vegetation dynamics at a regional scale. The observed interrelationships of the employed data and the different spatial and temporal trends support

  20. Procedure for the Selection and Validation of a Calibration Model I-Description and Application.

    Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D

    2017-05-01

    Calibration model selection is required for all quantitative methods in toxicology and, more broadly, in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. A mis-selection of the calibration model will degrade quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the replicate measurements at the upper and lower limits of quantification. When weighting was required, the choice between 1/x and 1/x2 was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, the model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone.
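    A hedged sketch of the first decision in the scheme: an F-test comparing replicate variances at the two ends of the calibration range to decide whether a weighted fit is needed. The replicate values are synthetic:

```python
# F-test for heteroscedasticity between LLOQ and ULOQ replicates.
import numpy as np
from scipy import stats

lloq = np.array([0.98, 1.05, 0.91, 1.10, 0.96])      # replicates at the low end
uloq = np.array([96.0, 104.5, 91.2, 108.9, 99.1])    # replicates at the high end

F = np.var(uloq, ddof=1) / np.var(lloq, ddof=1)
dof = len(uloq) - 1, len(lloq) - 1
p_value = stats.f.sf(F, *dof)                        # one-sided: var(ULOQ) > var(LLOQ)
print(f"F = {F:.1f}, p = {p_value:.4f}")             # small p -> use a weighted fit
```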

  1. Calibration of a distributed hydrologic model for six European catchments using remote sensing data

    Stisen, S.; Demirel, M. C.; Mendiguren González, G.; Kumar, R.; Rakovec, O.; Samaniego, L. E.

    2017-12-01

    While observed streamflow has been the single reference for most conventional hydrologic model calibration exercises, the availability of spatially distributed remote sensing observations provide new possibilities for multi-variable calibration assessing both spatial and temporal variability of different hydrologic processes. In this study, we first identify the key transfer parameters of the mesoscale Hydrologic Model (mHM) controlling both the discharge and the spatial distribution of actual evapotranspiration (AET) across six central European catchments (Elbe, Main, Meuse, Moselle, Neckar and Vienne). These catchments are selected based on their limited topographical and climatic variability which enables to evaluate the effect of spatial parameterization on the simulated evapotranspiration patterns. We develop a European scale remote sensing based actual evapotranspiration dataset at a 1 km grid scale driven primarily by land surface temperature observations from MODIS using the TSEB approach. Using the observed AET maps we analyze the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mHM model. This model allows calibrating one-basin-at-a-time or all-basins-together using its unique structure and multi-parameter regionalization approach. Results will indicate any tradeoffs between spatial pattern and discharge simulation during model calibration and through validation against independent internal discharge locations. Moreover, added value on internal water balances will be analyzed.

  2. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    Elsheikh, A. H.

    2013-12-01

    Calibration of subsurface flow models is an essential step for managing groundwater aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign different weights to different models of different levels of complexity. In this work, we report the first successful application of nested sampling for calibration of several nonlinear subsurface flow problems. The Bayesian evidence estimated by the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor, where simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.
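    A toy sketch of the nested sampling estimate of the Bayesian evidence Z = ∫ L(θ)π(θ)dθ ≈ Σ L_i (X_{i-1} − X_i), where X_i ≈ exp(−i/N) is the shrinking prior volume; a 1-D Gaussian likelihood with a uniform prior stands in for the subsurface flow model, and the live-point replacement uses plain rejection sampling, which only works in this toy setting:

```python
# Minimal nested sampling loop for a 1-D Gaussian likelihood, uniform prior.
import numpy as np

rng = np.random.default_rng(3)
N, steps = 200, 1500

def loglike(theta):                       # unit-variance Gaussian data misfit
    return -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)

live = rng.uniform(-5, 5, N)              # uniform prior on [-5, 5]
logZ_terms = []
for i in range(1, steps + 1):
    worst = np.argmin(loglike(live))
    L_star = loglike(live[worst])
    logw = np.log(np.exp(-(i - 1) / N) - np.exp(-i / N))   # prior-volume shell
    logZ_terms.append(L_star + logw)
    while True:                            # replace worst point: draw with L > L_star
        cand = rng.uniform(-5, 5)
        if loglike(cand) > L_star:
            live[worst] = cand
            break
logZ = np.logaddexp.reduce(logZ_terms)
print(f"log evidence ~ {logZ:.3f} (analytic: {np.log(1 / 10):.3f})")
```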

  3. Calibration and analysis of genome-based models for microbial ecology.

    Louca, Stilianos; Doebeli, Michael

    2015-10-16

    Microbial ecosystem modeling is complicated by the large number of unknown parameters and the lack of appropriate calibration tools. Here we present a novel computational framework for modeling microbial ecosystems, which combines genome-based model construction with statistical analysis and calibration to experimental data. Using this framework, we examined the dynamics of a community of Escherichia coli strains that emerged in laboratory evolution experiments, during which an ancestral strain diversified into two coexisting ecotypes. We constructed a microbial community model comprising the ancestral and the evolved strains, which we calibrated using separate monoculture experiments. Simulations reproduced the successional dynamics in the evolution experiments, and pathway activation patterns observed in microarray transcript profiles. Our approach yielded detailed insights into the metabolic processes that drove bacterial diversification, involving acetate cross-feeding and competition for organic carbon and oxygen. Our framework provides a missing link towards a data-driven mechanistic microbial ecology.

  4. Multi-site calibration, validation, and sensitivity analysis of the MIKE SHE Model for a large watershed in northern China

    S. Wang; Z. Zhang; G. Sun; P. Strauss; J. Guo; Y. Tang; A. Yao

    2012-01-01

    Model calibration is essential for hydrologic modeling of large watersheds in a heterogeneous mountain environment. Little guidance is available for model calibration protocols for distributed models that aim at capturing the spatial variability of hydrologic processes. This study used the physically-based distributed hydrologic model, MIKE SHE, to contrast a lumped...

  5. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3- compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3- and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.

  6. Global thermal models of the lithosphere

    Cammarano, F.; Guerri, M.

    2017-12-01

    Unraveling the thermal structure of the outermost shell of our planet is key for understanding its evolution. We obtain temperatures from interpretation of global shear-velocity (VS) models. Long-wavelength thermal structure is well determined by seismic models and only slightly affected by compositional effects and uncertainties in mineral-physics properties. Absolute temperatures and gradients with depth, however, are not well constrained. Adding constraints from petrology, heat-flow observations and the thermal evolution of oceanic lithosphere helps to better estimate absolute temperatures in the top part of the lithosphere. We produce global thermal models of the lithosphere at different spatial resolution, up to spherical-harmonics degree 24, and provide estimated standard deviations. We provide a purely seismic thermal (TS) model and hybrid models where temperatures are corrected with steady-state conductive geotherms on continents and cooling model temperatures on oceanic regions. All relevant physical properties, with the exception of thermal conductivity, are based on a self-consistent thermodynamical modelling approach. Our global thermal models also include density and compressional-wave velocities (VP) as obtained either assuming no lateral variations in composition or a simple reference 3-D compositional structure, which takes into account a chemically depleted continental lithosphere. We found that seismically-derived temperatures in continental lithosphere fit well, overall, with continental geotherms, but a large variation in radiogenic heat is required to reconcile them with heat flow (long wavelength) observations. Oceanic shallow lithosphere below mid-oceanic ridges and young oceans is colder than expected, confirming the possible presence of a dehydration boundary around 80 km depth already suggested in previous studies. The global thermal models should serve as the basis to move to a smaller spatial scale, where additional thermo-chemical variations...

  7. Calibrated Blade-Element/Momentum Theory Aerodynamic Model of the MARIN Stock Wind Turbine: Preprint

    Goupee, A.; Kimball, R.; de Ridder, E. J.; Helder, J.; Robertson, A.; Jonkman, J.

    2015-04-02

    In this paper, a calibrated blade-element/momentum theory aerodynamic model of the MARIN stock wind turbine is developed and documented. The model is created using open-source software and calibrated to closely emulate experimental data obtained by the DeepCwind Consortium using a genetic algorithm optimization routine. The provided model will be useful for those interested in validating floating wind turbine numerical simulators that rely on experiments utilizing the MARIN stock wind turbine, for example, the International Energy Agency Wind Task 30's Offshore Code Comparison Collaboration Continued, with Correlation project.

  8. Spherical Process Models for Global Spatial Statistics

    Jeong, Jaehong

    2017-11-28

    Statistical models used in geophysical, environmental, and climate science applications must reflect the curvature of the spatial domain in global data. Over the past few decades, statisticians have developed covariance models that capture the spatial and temporal behavior of these global data sets. Though the geodesic distance is the most natural metric for measuring distance on the surface of a sphere, mathematical limitations have compelled statisticians to use the chordal distance to compute the covariance matrix in many applications instead, which may cause physically unrealistic distortions. Therefore, covariance functions directly defined on a sphere using the geodesic distance are needed. We discuss the issues that arise when dealing with spherical data sets on a global scale and provide references to recent literature. We review the current approaches to building process models on spheres, including the differential operator, the stochastic partial differential equation, the kernel convolution, and the deformation approaches. We illustrate realizations obtained from Gaussian processes with different covariance structures and the use of isotropic and nonstationary covariance models through deformations and geographical indicators for global surface temperature data. To assess the suitability of each method, we compare their log-likelihood values and prediction scores, and we end with a discussion of related research problems.
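
    The distinction between the two metrics is easy to make concrete. A minimal sketch comparing geodesic (great-circle, via the haversine formula) and chordal distance on a sphere of Earth's mean radius (an assumed value for illustration):

        import numpy as np

        R = 6371.0  # mean Earth radius, km

        def geodesic(lat1, lon1, lat2, lon2):
            """Great-circle (geodesic) distance via the haversine formula."""
            p1, l1, p2, l2 = map(np.radians, (lat1, lon1, lat2, lon2))
            a = (np.sin((p2 - p1) / 2) ** 2
                 + np.cos(p1) * np.cos(p2) * np.sin((l2 - l1) / 2) ** 2)
            return 2 * R * np.arcsin(np.sqrt(a))

        def chordal(lat1, lon1, lat2, lon2):
            """Straight-line distance through the sphere's interior."""
            return 2 * R * np.sin(geodesic(lat1, lon1, lat2, lon2) / (2 * R))

        # The metrics agree locally but diverge at continental scales:
        print(geodesic(0, 0, 0, 90))  # ~10008 km along the surface
        print(chordal(0, 0, 0, 90))   # ~9010 km through the interior

    The roughly 10% gap at quarter-circumference separations is exactly the kind of discrepancy that can distort a covariance model built on chordal distance.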

  9. On coupling global biome models with climate models

    Claussen, M.

    1994-01-01

    The BIOME model of Prentice et al. (1992; J. Biogeogr. 19: 117-134), which predicts global vegetation patterns in equilibrium with climate, was coupled with the ECHAM climate model of the Max-Planck-Institut für Meteorologie, Hamburg, Germany. It was found that incorporation of the BIOME model into ECHAM, regardless of the coupling frequency, does not enhance the simulated climate variability, expressed in terms of differences between global vegetation patterns. The strongest changes are seen only betw...

  10. Stochastic Modeling of Overtime Occupancy and Its Application in Building Energy Simulation and Calibration

    Sun, Kaiyu; Yan, Da; Hong, Tianzhen; Guo, Siyue

    2014-02-28

    Overtime is a common phenomenon around the world. Overtime drives both internal heat gains from occupants, lighting and plug-loads, and HVAC operation during overtime periods. Overtime leads to longer occupancy hours and extended operation of building services systems beyond normal working hours, thus overtime impacts total building energy use. Current literature lacks methods to model overtime occupancy because overtime is stochastic in nature and varies by individual occupants and by time. To address this gap in the literature, this study aims to develop a new stochastic model based on the statistical analysis of measured overtime occupancy data from an office building. A binomial distribution is used to represent the total number of occupants working overtime, while an exponential distribution is used to represent the duration of overtime periods. The overtime model is used to generate overtime occupancy schedules as an input to the energy model of a second office building. The measured and simulated cooling energy use during the overtime period is compared in order to validate the overtime model. A hybrid approach to energy model calibration is proposed and tested, which combines ASHRAE Guideline 14 for the calibration of the energy model during normal working hours, and a proposed KS test for the calibration of the energy model during overtime. The developed stochastic overtime model and the hybrid calibration approach can be used in building energy simulations to improve the accuracy of results, and better understand the characteristics of overtime in office buildings.
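
    The two distributional assumptions are straightforward to turn into a schedule generator. A minimal sketch (the parameter values are invented placeholders; the paper estimates them from measured occupancy data):

        import numpy as np

        rng = np.random.default_rng(42)

        # Assumed: n occupants, each working overtime on a given day with
        # probability p; overtime durations have mean `mean_hours`.
        n, p, mean_hours = 120, 0.25, 1.8

        def sample_overtime_day():
            """One day's overtime: (count of overtime workers, durations in hours)."""
            k = rng.binomial(n, p)                      # binomial head count
            durations = rng.exponential(mean_hours, k)  # exponential durations
            return k, durations

        k, durations = sample_overtime_day()
        print(k, np.round(durations, 2))

    Repeating this day by day yields stochastic occupancy schedules that can be fed to an energy model in place of a fixed after-hours profile.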

  11. COLUMBUS. A global gas market model

    Hecking, Harald; Panke, Timo

    2012-03-15

    A model of the global gas market is presented which in its basic version optimises the future development of production, transport and storage capacities as well as the actual gas flows around the world, assuming perfect competition. Besides the transport of natural gas via pipelines, the global market for liquefied natural gas (LNG) is also modelled using a hub-and-spoke approach. While in the basic version of the model an inelastic demand and a piecewise-linear supply function are used, both can be changed easily, e.g. to a Golombek-style production function or a constant elasticity of substitution (CES) demand function. Due to the use of mixed complementarity programming (MCP), the model additionally allows for the simulation of strategic behaviour of different players in the gas market, e.g. the gas producers.

  12. Visible spectroscopy calibration transfer model in determining pH of Sala mangoes

    Yahaya, O.K.M.; MatJafri, M.Z.; Aziz, A.A.; Omar, A.F.

    2015-01-01

    The purpose of this study is to compare the efficiency of calibration transfer procedures between three spectrometers, two from Ocean Optics Inc. (QE65000 and Jaz) and an ASD FieldSpec 3, in measuring the pH of Sala mango by visible reflectance spectroscopy. This study evaluates the ability of these spectrometers to measure the pH of Sala mango by applying similar calibration algorithms through direct calibration transfer. This visible reflectance spectroscopy technique defines one spectrometer as the master instrument and another as the slave. The multiple linear regression (MLR) calibration model generated using the QE65000 spectrometer is transferred to the Jaz spectrometer and vice versa for Set 1. The same technique is applied for Set 2, where the model from the QE65000 spectrometer is transferred to the FieldSpec 3 spectrometer and vice versa. For Set 1, the results showed that the QE65000 spectrometer established a calibration model with higher accuracy than that of the Jaz spectrometer. In addition, the calibration model developed on the Jaz spectrometer successfully predicted the pH of Sala mango measured using the QE65000 spectrometer, with a root mean square error of prediction RMSEP = 0.092 pH and coefficient of determination R² = 0.892. Moreover, the best prediction result for Set 2 is obtained when the calibration model developed on the QE65000 spectrometer is successfully transferred to the FieldSpec 3, with R² = 0.839 and RMSEP = 0.16 pH.
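
    Direct calibration transfer in this sense amounts to fitting an MLR model on the master instrument and applying it unchanged to spectra from the slave. A minimal sketch with hypothetical arrays (real use would require resampling both instruments to shared wavelength channels):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        X_master = rng.random((40, 5))          # master reflectance spectra
        y_pH = 3.5 + 1.5 * rng.random(40)       # reference pH values
        X_slave = rng.random((10, 5))           # slave spectra of new samples

        # Build the MLR calibration on the master instrument...
        mlr = LinearRegression().fit(X_master, y_pH)

        # ...and apply it directly to the slave instrument's spectra.
        print(mlr.predict(X_slave).round(2))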

  13. Innovative Calibration Method for System Level Simulation Models of Internal Combustion Engines

    Ivo Prah

    2016-09-01

    The paper outlines a procedure for the computer-controlled calibration of the combined zero-dimensional (0D) and one-dimensional (1D) thermodynamic simulation model of a turbocharged internal combustion engine (ICE). The main purpose of the calibration is to determine input parameters of the simulation model in such a way as to achieve the smallest difference between the results of the measurements and the results of the numerical simulations with minimum consumption of computing time. The innovative calibration methodology is based on a novel interaction between optimization methods and physically based methods for the selected ICE sub-systems. Therein, physically based methods were used for steering the division of the integral ICE into several sub-models and for determining parameters of selected components from their governing equations. The innovative multistage interaction between optimization methods and physically based methods allows, unlike well-established methods that rely only on optimization techniques, for successful calibration of a large number of input parameters with low time consumption. The proposed method is therefore suitable for efficient calibration of simulation models of advanced ICEs.

  14. PEST modules with regularization for the acceleration of the automatic calibration in hydrodynamic models

    Polomčić Dušan M.

    2015-01-01

    The calibration of a hydrodynamic model is usually done manually by 'testing' different values of hydrogeological parameters and hydraulic characteristics of the boundary conditions. The PEST program introduced automatic model calibration, which has proved to significantly reduce the subjective influence of the model creator on the results. With the relatively new approach of PEST, i.e. the introduction of so-called 'pilot points', the concept of homogeneous zones with parameter values of the porous medium, or zones with given boundary conditions, has become outdated. However, the consequence of this kind of automatic calibration is that a significant amount of time is required to perform the calculation. The duration of calibration is measured in hours, sometimes even days. PEST contains two modules for shortening that process: Parallel PEST and BeoPEST. The paper presents experiments and analyses of different cases of PEST module usage, on the basis of which the time required to calibrate the model is reduced.

  15. GLOMO - Global Mobility Model: Description and Results

    Kühn, André; Novinsky, Patrick; Schade, Wolfgang

    2014-01-01

    The development of both emerging markets and already established markets (USA, Japan, Europe) is highly relevant for the future success of the export-oriented German automotive industry. This paper describes the so-called Global Mobility Model (GLOMO), based on the system dynamics approach, which simulates the future development of car sales by segment and drive technology. The modularized model contains population, income and GDP development in order to describe the framework in the mo...

  16. A multi-objective approach to improve SWAT model calibration in alpine catchments

    Tuo, Ye; Marcolini, Giorgia; Disse, Markus; Chiogna, Gabriele

    2018-04-01

    Multi-objective hydrological model calibration can represent a valuable solution to reduce model equifinality and parameter uncertainty. The Soil and Water Assessment Tool (SWAT) model is widely applied to investigate water quality and water management issues in alpine catchments. However, the model calibration is generally based on discharge records only, and most of the previous studies have defined a unique set of snow parameters for an entire basin. Only a few studies have considered snow observations to validate model results or have taken into account the possible variability of snow parameters for different subbasins. This work presents and compares three possible calibration approaches. The first two procedures are single-objective calibration procedures, for which all parameters of the SWAT model were calibrated according to river discharge alone. Procedures I and II differ from each other by the assumption used to define snow parameters: The first approach assigned a unique set of snow parameters to the entire basin, whereas the second approach assigned different subbasin-specific sets of snow parameters to each subbasin. The third procedure is a multi-objective calibration, in which we considered snow water equivalent (SWE) information at two different spatial scales (i.e. subbasin and elevation band), in addition to discharge measurements. We tested these approaches in the Upper Adige river basin where a dense network of snow depth measurement stations is available. Only the set of parameters obtained with this multi-objective procedure provided an acceptable prediction of both river discharge and SWE. These findings offer the large community of SWAT users a strategy to improve SWAT modeling in alpine catchments.

  17. A model for global cycling of tritium

    Killough, G.G.; Kocher, D.C.

    1988-01-01

    Dynamic compartment models are widely used to describe global cycling of radionuclides for purposes of dose estimation. In this paper the authors present a new global tritium model that reproduces environmental time-series data on concentrations in precipitation, ocean surface waters, and surface fresh waters in the northern hemisphere, concentrations of atmospheric tritium in the southern hemisphere, and the latitude dependence of tritium in both hemispheres. Named TRICYCLE (for TRItium CYCLE), the model is based on the global hydrologic cycle and includes hemispheric stratospheric compartments, disaggregation of the troposphere and ocean surface waters into eight latitude zones, consideration of the different concentrations of atmospheric tritium over land and over the ocean, and a diffusive model for transport in the ocean. TRICYCLE reproduces the environmental data if it is assumed that about 50% of the tritium from atmospheric weapons testing was injected directly into the northern stratosphere as HTO. The model's latitudinal disaggregation permits taking into account the distribution of population. For a uniformly distributed release of HTO into the worldwide troposphere, TRICYCLE predicts a collective dose commitment to the world population that exceeds the NCRP model's corresponding prediction by a factor of three.

  18. A model for global cycling of tritium

    Killough, G.G.; Kocher, D.C.

    1988-01-01

    Dynamic compartment models are widely used to describe global cycling of radionuclides for purposes of dose estimation. In this paper, we present a new global tritium model that reproduces environmental time-series data on concentrations in precipitation, ocean surface waters, and surface fresh waters in the northern hemisphere, concentrations of atmospheric tritium in the southern hemisphere, and the latitude dependence of tritium in both hemispheres. Named TRICYCLE (for TRItium CYCLE), the model is based on the global hydrologic cycle and includes hemispheric stratospheric compartments, disaggregation of the troposphere and ocean surface waters into eight latitude zones, consideration of the different concentrations of atmospheric tritium over land and over the ocean, and a diffusive model for transport in the ocean. TRICYCLE reproduces the environmental data if we assume that about 50% of the tritium from atmospheric weapons testing was injected directly into the northern stratosphere as HTO. The model's latitudinal disaggregation permits taking into account the distribution of population. For a uniformly distributed release of HTO into the worldwide troposphere, TRICYCLE predicts a collective dose commitment to the world population that exceeds the corresponding prediction by the NCRP model by about a factor of 3. 11 refs., 5 figs., 1 tab.
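
    The compartmental structure behind a model like TRICYCLE reduces to a system of linear first-order ODEs. A deliberately stripped-down sketch with only two boxes (troposphere and ocean surface) and invented exchange rates, rather than the full latitudinally disaggregated structure:

        import numpy as np
        from scipy.integrate import solve_ivp

        LAM = np.log(2) / 12.32   # tritium decay constant, 1/yr
        k_ao, k_oa = 0.8, 0.05    # assumed air->ocean and ocean->air rates, 1/yr

        def dydt(t, y):
            atm, ocean = y
            return [-(k_ao + LAM) * atm + k_oa * ocean,
                    k_ao * atm - (k_oa + LAM) * ocean]

        # Unit pulse of HTO injected into the atmosphere at t = 0
        sol = solve_ivp(dydt, (0.0, 50.0), [1.0, 0.0])
        print(sol.y[:, -1])  # compartment inventories after 50 years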

  19. How does observation uncertainty influence which stream water samples are most informative for model calibration?

    Wang, Ling; van Meerveld, Ilja; Seibert, Jan

    2016-04-01

    Streamflow isotope samples taken during rainfall-runoff events are very useful for multi-criteria model calibration because they can help decrease parameter uncertainty and improve internal model consistency. However, the number of samples that can be collected and analysed is often restricted by practical and financial constraints. It is, therefore, important to choose an appropriate sampling strategy and to obtain samples that have the highest information content for model calibration. We used the Birkenes hydrochemical model and synthetic rainfall, streamflow and isotope data to explore which samples are most informative for model calibration. Starting with error-free observations, we investigated how many samples are needed to obtain a certain model fit. Based on different parameter sets, representing different catchments, and different rainfall events, we also determined which sampling times provide the most informative data for model calibration. Our results show that simulation performance for models calibrated with the isotopic data from two intelligently selected samples was comparable to simulations based on isotopic data for all 100 time steps. The models calibrated with the intelligently selected samples also performed better than the model calibrations with two benchmark sampling strategies (random selection and selection based on hydrologic information). Surprisingly, samples on the rising limb and at the peak were less informative than expected and, generally, samples taken at the end of the event were most informative. The timing of the most informative samples depends on the proportion of different flow components (baseflow, slow response flow, fast response flow and overflow). For events dominated by baseflow and slow response flow, samples taken at the end of the event after the fast response flow has ended were most informative; when the fast response flow was dominant, samples taken near the peak were most informative. However, when overflow...

  20. Diagnosing the impact of alternative calibration strategies on coupled hydrologic models

    Smith, T. J.; Perera, C.; Corrigan, C.

    2017-12-01

    Hydrologic models represent a significant tool for understanding, predicting, and responding to the impacts of water on society and society on water resources and, as such, are used extensively in water resources planning and management. Given this important role, the validity and fidelity of hydrologic models is imperative. While extensive focus has been paid to improving hydrologic models through better process representation, better parameter estimation, and better uncertainty quantification, significant challenges remain. In this study, we explore a number of competing model calibration scenarios for simple, coupled snowmelt-runoff models to better understand the sensitivity / variability of parameterizations and its impact on model performance, robustness, fidelity, and transferability. Our analysis highlights the sensitivity of coupled snowmelt-runoff model parameterizations to alterations in calibration approach, underscores the concept of information content in hydrologic modeling, and provides insight into potential strategies for improving model robustness / fidelity.

  1. Bayesian calibration of terrestrial ecosystem models: a study of advanced Markov chain Monte Carlo methods

    Lu, Dan; Ricciuto, Daniel; Walker, Anthony; Safta, Cosmin; Munger, William

    2017-09-01

    Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. Calibration with DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is very suitable to calibrate complex terrestrial ecosystem models, where the number of uncertain parameters is usually large and the existence of local optima is always a concern. In addition, residual analysis in this effort justifies the assumptions of the error model used in the Bayesian calibration. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the consequently constructed likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
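
    The differential-evolution proposal at the heart of DREAM-type samplers is compact enough to sketch. The following minimal DE-MC sampler (ter Braak's scheme, which DREAM extends with subspace sampling and outlier handling) is an illustration, not the study's implementation:

        import numpy as np

        def demc(log_post, x0, n_iter=5000, eps=1e-4, seed=0):
            """Minimal differential-evolution MCMC over a population of chains."""
            rng = np.random.default_rng(seed)
            chains = np.array(x0, dtype=float)      # (n_chains, n_dim)
            n_chains, n_dim = chains.shape
            logp = np.array([log_post(x) for x in chains])
            gamma = 2.38 / np.sqrt(2 * n_dim)       # standard jump scale
            out = []
            for _ in range(n_iter):
                for i in range(n_chains):
                    a, b = rng.choice(
                        [j for j in range(n_chains) if j != i], 2, replace=False)
                    prop = (chains[i] + gamma * (chains[a] - chains[b])
                            + eps * rng.standard_normal(n_dim))
                    lp = log_post(prop)
                    if np.log(rng.random()) < lp - logp[i]:  # Metropolis step
                        chains[i], logp[i] = prop, lp
                out.append(chains.copy())
            return np.array(out)

        # Smoke test on a 2-D standard normal "posterior" with 6 chains
        draws = demc(lambda x: -0.5 * np.sum(x ** 2),
                     np.random.default_rng(1).standard_normal((6, 2)))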

  2. Calibration and validation of the SWAT model for a forested watershed in coastal South Carolina

    Devendra M. Amatya; Elizabeth B. Haley; Norman S. Levine; Timothy J. Callahan; Artur Radecki-Pawlik; Manoj K. Jha

    2008-01-01

    Modeling the hydrology of low-gradient coastal watersheds on shallow, poorly drained soils is a challenging task due to the complexities in watershed delineation, runoff generation processes and pathways, flooding, and submergence caused by tropical storms. The objective of the study is to calibrate and validate a GIS-based spatially-distributed hydrologic model, SWAT...

  3. Calibration of a user-defined mine blast model in LS-DYNA and comparison with ALE simulations

    Verreault, J.; Leerdam, P.J.C.; Weerheijm, J.

    2016-01-01

    The calibration of a user-defined blast model implemented in LS-DYNA is presented using full-scale test rig experiments, partly according to the NATO STANAG 4569 AEP-55 Volume 2 specifications where the charge weight varies between 6 kg and 10 kg and the burial depth is 100 mm and deeper. The model...

  4. Global nuclear material flow/control model

    Dreicer, J.S.; Rutherford, D.S.; Fasel, P.K.; Riese, J.M.

    1997-01-01

    This is the final report of a two-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The nuclear danger can be reduced by a system for global management, protection, control, and accounting as part of an international regime for nuclear materials. The development of an international fissile material management and control regime requires conceptual research supported by an analytical and modeling tool which treats the nuclear fuel cycle as a complete system. The prototype model developed visually represents the fundamental data, information, and capabilities related to the nuclear fuel cycle in a framework supportive of national or an international perspective. This includes an assessment of the global distribution of military and civilian fissile material inventories, a representation of the proliferation pertinent physical processes, facility specific geographic identification, and the capability to estimate resource requirements for the management and control of nuclear material. The model establishes the foundation for evaluating the global production, disposition, and safeguards and security requirements for fissile nuclear material and supports the development of other pertinent algorithmic capabilities necessary to undertake further global nuclear material related studies

  5. Presentation, calibration and validation of the low-order, DCESS Earth System Model (Version 1)

    J. O. Pepke Pedersen

    2008-11-01

    A new, low-order Earth System Model is described, calibrated and tested against Earth system data. The model features modules for the atmosphere, ocean, ocean sediment, land biosphere and lithosphere and has been designed to simulate global change on time scales of years to millions of years. The atmosphere module considers radiation balance, meridional transport of heat and water vapor between low-mid latitude and high latitude zones, heat and gas exchange with the ocean and sea ice and snow cover. Gases considered are carbon dioxide and methane for all three carbon isotopes, nitrous oxide and oxygen. The ocean module has 100 m vertical resolution, carbonate chemistry and prescribed circulation and mixing. Ocean biogeochemical tracers are phosphate, dissolved oxygen, dissolved inorganic carbon for all three carbon isotopes and alkalinity. Biogenic production of particulate organic matter in the ocean surface layer depends on phosphate availability but with lower efficiency in the high latitude zone, as determined by model fit to ocean data. The calcite to organic carbon rain ratio depends on surface layer temperature. The semi-analytical, ocean sediment module considers calcium carbonate dissolution and oxic and anoxic organic matter remineralisation. The sediment is composed of calcite, non-calcite mineral and reactive organic matter. Sediment porosity profiles are related to sediment composition and a bioturbated layer of 0.1 m thickness is assumed. A sediment segment is ascribed to each ocean layer and segment area stems from observed ocean depth distributions. Sediment burial is calculated from sedimentation velocities at the base of the bioturbated layer. Bioturbation rates and oxic and anoxic remineralisation rates depend on organic carbon rain rates and dissolved oxygen concentrations. The land biosphere module considers leaves, wood, litter and soil. Net primary production depends on atmospheric carbon dioxide concentration and...

  6. AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)

    Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...

  7. Modelling and calibration with mechatronic blockset for Simulink

    Ravn, Ole; Szymkat, Maciej

    1997-01-01

    The paper describes the design considerations for a software tool for modelling and simulation of mechatronic systems. The tool is based on a concept enabling the designer to pick component models that match the physical components of the system to be modelled from a block library. Another... on the component level and for the whole model. The library, which can be extended by the user, contains all the standard components: DC motors, potentiometers, encoders, etc. The library is presently being tested in different projects and the response of these users is being incorporated in the code. The Mechatronic... Simulink Library blockset is implemented based on MATLAB and Simulink and has been used to model several mechatronic systems...

  8. A simple topography-driven, calibration-free runoff generation model

    Gao, H.; Birkel, C.; Hrachowitz, M.; Tetzlaff, D.; Soulsby, C.; Savenije, H. H. G.

    2017-12-01

    Determining the amount of runoff generated from rainfall occupies a central place in rainfall-runoff modelling. Moreover, reading landscapes and developing calibration-free runoff generation models that adequately reflect land surface heterogeneities remains the focus of much hydrological research. In this study, we created a new method to estimate runoff generation, the HAND-based Storage Capacity curve (HSC), which uses a topographic index (HAND, Height Above the Nearest Drainage) to identify hydrological similarity and, in part, the saturated areas of catchments. We then coupled the HSC model with the Mass Curve Technique (MCT) method to estimate root zone storage capacity (SuMax), and obtained the calibration-free runoff generation model HSC-MCT. Both models (HSC and HSC-MCT) allow us to estimate runoff generation and simultaneously visualize the spatial dynamics of the saturated area. We tested the two models in the data-rich Bruntland Burn (BB) experimental catchment in Scotland, which has an unusual time series of field-mapped saturation area extent. The models were subsequently tested in 323 MOPEX (Model Parameter Estimation Experiment) catchments in the United States. HBV and TOPMODEL were used as benchmarks. We found that the HSC performed better than TOPMODEL, which is based on the topographic wetness index (TWI), in reproducing the spatio-temporal pattern of the observed saturated areas in the BB catchment. The HSC also outperformed HBV and TOPMODEL in the MOPEX catchments for both calibration and validation. Despite having no calibrated parameters, the HSC-MCT model also performed comparably well with the calibrated HBV and TOPMODEL, highlighting both the robustness of the HSC model in describing the spatial distribution of the root zone storage capacity and the efficiency of the MCT method in estimating SuMax. Moreover, the HSC-MCT model facilitated effective visualization of the saturated area, which has the potential to be used for broader...

  9. Sensitivity analysis and calibration of a dynamic physically based slope stability model

    Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens

    2017-06-01

    Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting that...
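
    A local one-at-a-time analysis of the kind used to screen the four calibrated parameters can be written in a few lines. A sketch with a toy stand-in for the slope-stability model (parameter names follow the study; the response function and values are purely illustrative):

        import numpy as np

        def oat_sensitivity(model, base, rel_change=0.1):
            """Relative output change per relative parameter change,
            perturbing one parameter at a time about the base point."""
            y0 = model(base)
            sens = {}
            for name, value in base.items():
                perturbed = dict(base, **{name: value * (1 + rel_change)})
                sens[name] = ((model(perturbed) - y0) / y0) / rel_change
            return sens

        toy = lambda p: (0.02 * p["cohesion"]
                         + np.tan(np.radians(p["friction_angle"]))
                         - 5.0 * p["conductivity"])
        print(oat_sensitivity(toy, {"cohesion": 5.0,
                                    "friction_angle": 30.0,
                                    "conductivity": 1e-2}))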

  10. An alternative method for calibration of narrow band radiometer using a radiative transfer model

    Salvador, J; Wolfram, E; D' Elia, R [Centro de Investigaciones en Laseres y Aplicaciones, CEILAP (CITEFA-CONICET), Juan B. de La Salle 4397 (B1603ALO), Villa Martelli, Buenos Aires (Argentina); Zamorano, F; Casiccia, C [Laboratorio de Ozono y Radiacion UV, Universidad de Magallanes, Punta Arenas (Chile) (Chile); Rosales, A [Universidad Nacional de la Patagonia San Juan Bosco, UNPSJB, Facultad de Ingenieria, Trelew (Argentina) (Argentina); Quel, E, E-mail: jsalvador@citefa.gov.ar [Universidad Nacional de la Patagonia Austral, Unidad Academica Rio Gallegos Avda. Lisandro de la Torre 1070 ciudad de Rio Gallegos-Sta Cruz (Argentina) (Argentina)

    2011-01-01

    The continual monitoring of solar UV radiation is one of the major objectives proposed by many atmosphere research groups. The purpose of this task is to determine the status and degree of progress over time of the anthropogenic composition perturbation of the atmosphere. Such changes affect the intensity of the UV solar radiation transmitted through the atmosphere that then interacts with living organisms and all materials, causing serious consequences in terms of human health and durability of materials that interact with this radiation. One of the many challenges that need to be faced to perform these measurements correctly is the maintenance of periodic calibrations of these instruments. Otherwise, damage caused by the UV radiation received will render any one calibration useless after the passage of some time. This requirement makes the usage of these instruments unattractive, and the lack of frequent calibration may lead to the loss of large amounts of acquired data. Motivated by this need to maintain calibration or, at least, know the degree of stability of instrumental behavior, we have developed a calibration methodology that uses the potential of radiative transfer models to model solar radiation with 5% accuracy or better relative to actual conditions. Voltage values in each radiometer channel involved in the calibration process are carefully selected from clear sky data. Thus, tables are constructed with voltage values corresponding to various atmospheric conditions for a given solar zenith angle. Then we model with a radiative transfer model using the same conditions as for the measurements to assemble sets of values for each zenith angle. The ratio of each group (measured and modeled) allows us to calculate the calibration coefficient value as a function of zenith angle as well as the cosine response presented by the radiometer. The calibration results obtained by this method were compared with those obtained with a Brewer MKIII SN 80 located in the...
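
    The ratio step described above is simple to express. A sketch with invented clear-sky values (real use pairs each measured voltage with an irradiance modelled by a radiative transfer code for the same atmospheric state):

        import numpy as np

        sza      = np.array([20, 30, 40, 50, 60, 70])               # zenith angle, deg
        voltage  = np.array([1.95, 1.78, 1.52, 1.20, 0.85, 0.50])   # measured, V
        modelled = np.array([780., 712., 605., 478., 336., 196.])   # W m-2

        # Calibration coefficient as a function of zenith angle: the ratio of
        # modelled irradiance to measured signal in each angle bin.
        coeff = modelled / voltage
        for z, c in zip(sza, coeff):
            print(f"SZA {z:2d} deg: {c:6.1f} W m-2 per V")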

  11. Global Optimization Ensemble Model for Classification Methods

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of classifier while solving a supervised learning problem, like bias-variance tradeoff, dimensionality of input space, and noise in the input data space. All these problems affect the accuracy of classifier and are the reason that there is no global optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity. PMID:24883382

  12. Global Optimization Ensemble Model for Classification Methods

    Hina Anwar

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of classifier while solving a supervised learning problem, like bias-variance tradeoff, dimensionality of input space, and noise in the input data space. All these problems affect the accuracy of classifier and are the reason that there is no global optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity.

  13. Bayesian Calibration, Validation and Uncertainty Quantification for Predictive Modelling of Tumour Growth: A Tutorial.

    Collis, Joe; Connor, Anthony J; Paczkowski, Marcin; Kannan, Pavitra; Pitt-Francis, Joe; Byrne, Helen M; Hubbard, Matthew E

    2017-04-01

    In this work, we present a pedagogical tumour growth example, in which we apply calibration and validation techniques to an uncertain, Gompertzian model of tumour spheroid growth. The key contribution of this article is the discussion and application of these methods (that are not commonly employed in the field of cancer modelling) in the context of a simple model, whose deterministic analogue is widely known within the community. In the course of the example, we calibrate the model against experimental data that are subject to measurement errors, and then validate the resulting uncertain model predictions. We then analyse the sensitivity of the model predictions to the underlying measurement model. Finally, we propose an elementary learning approach for tuning a threshold parameter in the validation procedure in order to maximize predictive accuracy of our validated model.
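
    For readers wanting to reproduce the deterministic core of such an exercise, a Gompertzian growth curve can be calibrated to noisy volume data in a few lines (the synthetic data and parameter values below are illustrative; the tutorial's Bayesian treatment would place priors on the parameters and sample their posterior instead):

        import numpy as np
        from scipy.optimize import curve_fit

        def gompertz(t, v0, a, b):
            """V(t) = V0 * exp((a/b) * (1 - exp(-b t)))."""
            return v0 * np.exp((a / b) * (1.0 - np.exp(-b * t)))

        # Synthetic "observations" with multiplicative measurement noise
        rng = np.random.default_rng(3)
        t = np.linspace(0, 20, 15)
        v_obs = gompertz(t, 50.0, 0.6, 0.15) * rng.lognormal(sigma=0.05, size=t.size)

        # Deterministic least-squares calibration of (V0, a, b)
        popt, pcov = curve_fit(gompertz, t, v_obs, p0=(40.0, 0.5, 0.1))
        print("estimates:", popt.round(3))
        print("1-sigma:  ", np.sqrt(np.diag(pcov)).round(3))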

  14. Multi-objective calibration of a reservoir model: aggregation and non-dominated sorting approaches

    Huang, Y.

    2012-12-01

    Numerical reservoir models can be helpful tools for water resource management. These models are generally calibrated against historical measurement data collected in reservoirs. In this study, two methods are proposed for the multi-objective calibration of such models: aggregation and non-dominated sorting methods. Both methods use a hybrid genetic algorithm as an optimization engine and differ in fitness assignment. In the aggregation method, a weighted sum of scaled simulation errors is designed as an overall objective function to measure the fitness of solutions (i.e. parameter values). The contribution of this study to the aggregation method is the correlation analysis and its implications for the choice of weight factors. In the non-dominated sorting method, a novel method based on non-dominated sorting and the method of minimal distance is used to calculate the dummy fitness of solutions. The proposed methods are illustrated using a water quality model that was set up to simulate the water quality of Pepacton Reservoir, which is located to the north of New York City and is used for the city's water supply. The study also compares the aggregation and the non-dominated sorting methods. The purpose of this comparison is not to evaluate the pros and cons of the two methods but to determine whether the parameter values, objective function values (simulation errors) and simulated results obtained are significantly different from each other. The final results (objective function values) from the two methods are a good compromise between all objective functions, and none of these results are the worst for any objective function. The calibrated model provides an overall good performance, and the simulated results with the calibrated parameter values match the observed data better than those with the uncalibrated parameters, which supports and justifies the use of multi-objective calibration. The results achieved in this study can be very useful for the calibration of water...
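
    The correlation analysis that informs the weight choice can be sketched directly: strongly correlated objectives carry redundant information, which argues for down-weighting one of them in the aggregate. A minimal illustration with synthetic error series (not the Pepacton data):

        import numpy as np

        rng = np.random.default_rng(7)
        errors = rng.random((200, 3))       # rows: candidate parameter sets
        errors[:, 2] = 0.9 * errors[:, 0] + 0.1 * errors[:, 2]   # cols: objectives

        # Scale each objective to [0, 1] before aggregation, then inspect
        # pairwise correlations between the scaled simulation errors.
        scaled = (errors - errors.min(0)) / (errors.max(0) - errors.min(0))
        print(np.corrcoef(scaled, rowvar=False).round(2))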

  15. Using genetic algorithms for calibrating simplified models of nuclear reactor dynamics

    Marseguerra, Marzio; Zio, Enrico; Canetta, Raffaele

    2004-01-01

    In this paper the use of genetic algorithms for the estimation of the effective parameters of a model of nuclear reactor dynamics is investigated. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest (e.g., reactor power, average fuel and coolant temperatures) to the actual evolution profiles, here simulated by the Quandry based reactor kinetics (Quark) code available from the Nuclear Energy Agency. Alternative schemes of single- and multi-objective optimization are investigated. The efficiency of convergence of the algorithm with respect to the different effective parameters to be calibrated is studied with reference to the physical relationships involved

  16. Using genetic algorithms for calibrating simplified models of nuclear reactor dynamics

    Marseguerra, Marzio E-mail: marzio.marseguerra@polimi.it; Zio, Enrico E-mail: enrico.zio@polimi.it; Canetta, Raffaele

    2004-07-01

    In this paper the use of genetic algorithms for the estimation of the effective parameters of a model of nuclear reactor dynamics is investigated. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest (e.g., reactor power, average fuel and coolant temperatures) to the actual evolution profiles, here simulated by the Quandry based reactor kinetics (Quark) code available from the Nuclear Energy Agency. Alternative schemes of single- and multi-objective optimization are investigated. The efficiency of convergence of the algorithm with respect to the different effective parameters to be calibrated is studied with reference to the physical relationships involved.
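
    A genetic-algorithm calibration loop of this kind is easy to sketch. The toy below fits a two-parameter exponential "response profile", standing in for the reactor quantities of interest; everything about it (operators, rates, the surrogate model) is illustrative rather than the authors' implementation:

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0, 10, 50)
        model = lambda p: p[0] * np.exp(-p[1] * t)     # toy response profile
        target = model([2.0, 0.3])                     # "actual" evolution profile
        misfit = lambda p: np.sum((model(p) - target) ** 2)

        pop = rng.uniform([0.1, 0.01], [5.0, 1.0], size=(40, 2))
        for gen in range(100):
            fit = np.array([misfit(p) for p in pop])
            i, j = rng.integers(40, size=(2, 40))      # tournament selection
            parents = pop[np.where(fit[i] < fit[j], i, j)]
            alpha = rng.random((40, 1))                # arithmetic crossover
            pop = alpha * parents + (1 - alpha) * parents[::-1]
            pop += 0.02 * rng.standard_normal(pop.shape)   # Gaussian mutation
        best = pop[np.argmin([misfit(p) for p in pop])]
        print("calibrated parameters:", best.round(3))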

  17. Calibration plots for risk prediction models in the presence of competing risks

    Gerds, Thomas A; Andersen, Per K; Kattan, Michael W

    2014-01-01

    A predicted risk of 17% can be called reliable if it can be expected that the event will occur in about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks... prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves...

  18. Calibration of controlling input models for pavement management system.

    2013-07-01

    The Oklahoma Department of Transportation (ODOT) is currently using the Deighton Total Infrastructure Management System (dTIMS) software for pavement management. This system is based on several input models which are computational backbones to dev...

  19. Value of using remotely sensed evapotranspiration for SWAT model calibration

    Hydrologic models are useful management tools for assessing water resources solutions and estimating the potential impact of climate variation scenarios. A comprehensive understanding of the water budget components and especially the evapotranspiration (ET) is critical and often overlooked for adeq...

  20. Calibration of Chaboche Model with a Memory Surface

    Radim HALAMA

    2013-06-01

    This paper points out that the Chaboche nonlinear kinematic hardening model provides a sufficient description of the stress-strain behaviour only for materials with Masing behaviour, regardless of the number of backstress parts. Subsequently, two of the most widely used memory surface concepts are presented: the Jiang-Sehitoglu concept (deviatoric plane) and the Chaboche concept (strain space). On the basis of experimental data for steel ST52, the possibility of capturing hysteresis loops and the cyclic strain curve simultaneously in the usual range for low cycle fatigue calculations is then shown. A new model for describing cyclic hardening/softening behaviour has also been developed based on the Jiang-Sehitoglu memory surface concept. Finally, the conclusions formulate some recommendations for the use of the individual models and directions for further research.

  1. Global Analysis, Interpretation and Modelling: An Earth Systems Modelling Program

    Moore, Berrien, III; Sahagian, Dork

    1997-01-01

    The Goal of the GAIM is: To advance the study of the coupled dynamics of the Earth system using as tools both data and models; to develop a strategy for the rapid development, evaluation, and application of comprehensive prognostic models of the Global Biogeochemical Subsystem which could eventually be linked with models of the Physical-Climate Subsystem; to propose, promote, and facilitate experiments with existing models or by linking subcomponent models, especially those associated with IGBP Core Projects and with WCRP efforts. Such experiments would be focused upon resolving interface issues and questions associated with developing an understanding of the prognostic behavior of key processes; to clarify key scientific issues facing the development of Global Biogeochemical Models and the coupling of these models to General Circulation Models; to assist the Intergovernmental Panel on Climate Change (IPCC) process by conducting timely studies that focus upon elucidating important unresolved scientific issues associated with the changing biogeochemical cycles of the planet and upon the role of the biosphere in the physical-climate subsystem, particularly its role in the global hydrological cycle; and to advise the SC-IGBP on progress in developing comprehensive Global Biogeochemical Models and to maintain scientific liaison with the WCRP Steering Group on Global Climate Modelling.

  2. A multi-source satellite data approach for modelling Lake Turkana water level: calibration and validation using satellite altimetry data

    N. M. Velpuri

    2012-01-01

    ...satellite-driven water balance model for (i) quantitative assessment of the impact of basin developmental activities on lake levels and for (ii) forecasting lake level changes and their impact on fisheries. From this study, we suggest that globally available satellite altimetry data provide a unique opportunity for calibration and validation of hydrologic models in ungauged basins.

  3. A multi-source satellite data approach for modelling Lake Turkana water level: Calibration and validation using satellite altimetry data

    Velpuri, N.M.; Senay, G.B.; Asante, K.O.

    2012-01-01

    ...model for (i) quantitative assessment of the impact of basin developmental activities on lake levels and for (ii) forecasting lake level changes and their impact on fisheries. From this study, we suggest that globally available satellite altimetry data provide a unique opportunity for calibration and validation of hydrologic models in ungauged basins. © Author(s) 2012.

  4. Constitutive Model Calibration via Autonomous Multiaxial Experimentation (Postprint)

    2016-09-17

    Modern plasticity models contain numerous parameters that can be difficult and time consuming to fit using current methods. Additional... complexity, is a difficult and time consuming process that has historically been a separate process from the experimental testing. As such, additional...

  5. Embodying, calibrating and caring for a local model of obesity

    Winther, Jonas; Hillersdal, Line

    Interdisciplinary research collaborations are increasingly made a mandatory 'standard' within strategic research grants. Collaborations between the natural, social and humanistic sciences are conceptualized as uniquely suited to study pressing societal problems. The obesity epidemic has been... highlighted as such a problem. Within research communities, disparate explanatory models of obesity exist (Ulijaszek 2008), and some of these models are brought together in the Copenhagen-based interdisciplinary research initiative Governing Obesity (GO), with the aim of addressing the causes...

  6. Global modelling of Cryptosporidium in surface water

    Vermeulen, Lucie; Hofstra, Nynke

    2016-04-01

    Introduction Waterborne pathogens that cause diarrhoea, such as Cryptosporidium, pose a health risk all over the world. In many regions quantitative information on pathogens in surface water is unavailable. Our main objective is to model Cryptosporidium concentrations in surface waters worldwide. We present the GloWPa-Crypto model and use the model in a scenario analysis. A first exploration of global Cryptosporidium emissions to surface waters has been published by Hofstra et al. (2013). Further work has focused on modelling emissions of Cryptosporidium and Rotavirus to surface waters from human sources (Vermeulen et al 2015, Kiulia et al 2015). A global waterborne pathogen model can provide valuable insights by (1) providing quantitative information on pathogen levels in data-sparse regions, (2) identifying pathogen hotspots, (3) enabling future projections under global change scenarios and (4) supporting decision making. Material and Methods GloWPa-Crypto runs on a monthly time step and represents conditions for approximately the year 2010. The spatial resolution is a 0.5 x 0.5 degree latitude x longitude grid for the world. We use livestock maps (http://livestock.geo-wiki.org/) combined with literature estimates to calculate spatially explicit livestock Cryptosporidium emissions. For human Cryptosporidium emissions, we use UN population estimates, the WHO/UNICEF JMP sanitation country data and literature estimates of wastewater treatment. We combine our emissions model with a river routing model and data from the VIC hydrological model (http://vic.readthedocs.org/en/master/) to calculate concentrations in surface water. Cryptosporidium survival during transport depends on UV radiation and water temperature. We explore pathogen emissions and concentrations in 2050 with the new Shared Socio-economic Pathways (SSPs) 1 and 3. These scenarios describe plausible future trends in demographics, economic development and the degree of global integration. Results and

  7. Global energy modeling - A biophysical approach

    Dale, Michael

    2010-09-15

    This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.

  8. Calibration under uncertainty for finite element models of masonry monuments

    Atamturktur, Sezer; Hemez, Francois; Unal, Cetin

    2010-02-01

    Historical unreinforced masonry buildings often include features such as load bearing unreinforced masonry vaults and their supporting framework of piers, fill, buttresses, and walls. The masonry vaults of such buildings are among the most vulnerable structural components and certainly among the most challenging to analyze. The versatility of finite element (FE) analyses in incorporating various constitutive laws, as well as practically all geometric configurations, has resulted in the widespread use of the FE method for the analysis of complex unreinforced masonry structures over the last three decades. However, an FE model is only as accurate as its input parameters, and there are two fundamental challenges while defining FE model input parameters: (1) material properties and (2) support conditions. The difficulties in defining these two aspects of the FE model arise from the lack of knowledge in the common engineering understanding of masonry behavior. As a result, engineers are unable to define these FE model input parameters with certainty, and, inevitably, uncertainties are introduced to the FE model.

  9. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
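
    The frontier itself is cheap to compute once every candidate input set has been scored against every target. A minimal sketch (lower scores taken to mean better fit):

        import numpy as np

        def pareto_mask(gof):
            """Boolean mask of non-dominated rows; gof[i, j] is the misfit of
            input set i to calibration target j (lower is better)."""
            n = gof.shape[0]
            keep = np.ones(n, dtype=bool)
            for i in range(n):
                dominated = (np.all(gof <= gof[i], axis=1)
                             & np.any(gof < gof[i], axis=1)).any()
                keep[i] = not dominated
            return keep

        scores = np.random.default_rng(5).random((500, 2))   # 500 sets, 2 targets
        print(pareto_mask(scores).sum(), "of 500 input sets are Pareto-optimal")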

  10. Calibration of HEC-Ras hydrodynamic model using gauged discharge data and flood inundation maps

    Tong, Rui; Komma, Jürgen

    2017-04-01

    The estimation of floods is essential for disaster alleviation. Hydrodynamic models are implemented to predict the occurrence and variation of floods at different scales. In practice, the calibration of hydrodynamic models aims to find the best possible parameters for representing the natural flow resistance. In recent years, calibration of hydrodynamic models has become more accurate and faster with advances in earth observation products and computer-based optimization techniques. In this study, the HEC-Ras (Hydrologic Engineering Center's River Analysis System) model was set up with a high-resolution digital elevation model from laser scanning for the river Inn in Tyrol, Austria. The 10 largest flood events from 19 hourly discharge gauges, together with flood inundation maps, were selected to calibrate the HEC-Ras model. Manning roughness values and lateral inflow factors were automatically optimized as parameters with the Shuffled Complex with Principal Component Analysis (SP-UCI) algorithm, developed from the Shuffled Complex Evolution (SCE-UA). Different objective functions (Nash-Sutcliffe model efficiency coefficient, timing of peak, peak value and root-mean-square deviation) were used singly or in combination. The lateral inflow factor was found to be the most sensitive parameter. The SP-UCI algorithm could avoid local optima and achieve efficient and effective parameter estimates when calibrating the HEC-Ras model using flood extent images. The results showed that calibration by means of gauged discharge data and flood inundation maps, together with the Nash-Sutcliffe model efficiency coefficient as the objective function, was very robust, yielding more reliable flood simulations and a better match of the peak value and the timing of the peak.
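
    Of the objective functions named above, the Nash-Sutcliffe efficiency is the one most often reported; it compares the model's squared errors against those of simply predicting the mean observed flow:

        import numpy as np

        def nse(observed, simulated):
            """Nash-Sutcliffe efficiency: 1 for a perfect fit; <= 0 means the
            model is no better than the mean of the observations."""
            o, s = np.asarray(observed, float), np.asarray(simulated, float)
            return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

        print(nse([100, 180, 260, 150], [110, 170, 240, 160]))  # ~0.95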

  11. Calibration of Yucca Mountain unsaturated zone flow and transport model using porewater chloride data

    Liu, Jianchun; Sonnenthal, Eric L.; Bodvarsson, Gudmundur S.

    2002-01-01

    In this study, porewater chloride data from Yucca Mountain, Nevada, are analyzed and modeled by 3-D chemical transport simulations and analytical methods. The simulation modeling approach is based on a continuum formulation of coupled multiphase fluid flow and tracer transport processes through fractured porous rock, using a dual-continuum concept. Infiltration rates were calibrated using the porewater chloride data, and the match between modeled chloride distributions and the observed data improved with the calibrated rates. Statistical analyses of the frequency distributions of overall percolation fluxes and chloride concentrations in the unsaturated zone system demonstrate that using the calibrated infiltration rates had an insignificant effect on the distribution of simulated percolation fluxes but significantly changed the predicted distribution of simulated chloride concentrations. An analytical method was also applied to model transient chloride transport; verification against the 3-D simulation results showed that it captures the major transient chemical behavior and trends. Effects of lateral flow in the Paintbrush nonwelded unit on percolation fluxes and chloride distribution were studied by 3-D simulations with increased horizontal permeability. The combined results from these model calibrations furnish important information for the UZ model studies, contributing to performance assessment of the potential repository.

  12. Effect of heteroscedasticity treatment in residual error models on model calibration and prediction uncertainty estimation

    Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli

    2017-11-01

    The heteroscedasticity treatment in residual error models directly impacts model calibration and prediction uncertainty estimation. This study compares three methods of dealing with heteroscedasticity: the explicit linear modeling (LM) method, the nonlinear modeling (NL) method using a hyperbolic tangent function, and the implicit Box-Cox transformation (BC). A combined approach (CA) uniting the advantages of the LM and BC methods is then proposed. In conjunction with a first-order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that LM-SEP yields the poorest streamflow predictions, with the widest uncertainty band and unrealistic negative flows. The NL and BC methods deal better with the heteroscedasticity and hence improve the corresponding predictive performance, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
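    To make the two baseline treatments concrete, here is a minimal sketch (parameter names are assumed, chosen purely for illustration): the explicit LM method lets the residual standard deviation grow linearly with simulated flow, while the implicit BC method transforms the flows so that residuals in the transformed space are approximately homoscedastic:

```python
import numpy as np

def lm_residual_sd(sim, sigma0, sigma1):
    """Explicit linear modeling (LM): residual standard deviation
    is a linear function of the simulated flow."""
    return sigma0 + sigma1 * np.asarray(sim)

def box_cox(flow, lam):
    """Implicit Box-Cox transformation (BC): residuals are treated
    as homoscedastic in the transformed space."""
    flow = np.asarray(flow, dtype=float)
    return np.log(flow) if lam == 0 else (flow ** lam - 1.0) / lam
```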

  13. Modeling and Experimental Analysis of Piezoelectric Shakers for High-Frequency Calibration of Accelerometers

    Vogl, Gregory W.; Harper, Kari K.; Payne, Bev

    2010-01-01

    Piezoelectric shakers have been developed and used at the National Institute of Standards and Technology (NIST) for decades for high-frequency calibration of accelerometers. Recently, NIST researchers built new piezoelectric shakers in the hopes of reducing the uncertainties in the calibrations of accelerometers while extending the calibration frequency range beyond 20 kHz. The ability to build and measure piezoelectric shakers invites modeling of these systems in order to improve their design for increased performance, which includes a sinusoidal motion with lower distortion, lower cross-axial motion, and an increased frequency range. In this paper, we present a model of piezoelectric shakers and match it to experimental data. The equations of motion for all masses are solved along with the coupled state equations for the piezoelectric actuator. Finally, additional electrical elements like inductors, capacitors, and resistors are added to the piezoelectric actuator for matching of experimental and theoretical frequency responses.

  14. Statistical models of global Langmuir mixing

    Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean

    2017-05-01

    The effects of Langmuir mixing on surface ocean mixing may be parameterized by applying an enhancement factor, which depends on wave, wind, and ocean state, to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but at significant computational and code development expense. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but at significantly lower computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.
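    For orientation, one widely cited form of such an enhancement factor is that of McWilliams and Sullivan (1997), shown here only as an illustration of the quantity being approximated (the paper evaluates its own climatology and spectral approximation): the KPP turbulent velocity scale $w$ is multiplied by a function of the turbulent Langmuir number $\mathrm{La}_t$,

```latex
\varepsilon(\mathrm{La}_t) = \sqrt{1 + \frac{0.08}{\mathrm{La}_t^{4}}},
\qquad \mathrm{La}_t = \sqrt{u_*/u_s},
\qquad w' = \varepsilon(\mathrm{La}_t)\, w,
```

    where $u_*$ is the water-side friction velocity and $u_s$ the surface Stokes drift.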

  15. Experimental validation and calibration of pedestrian loading models for footbridges

    Ricciardelli, Fransesco; Briatico, C; Ingólfsson, Einar Thór

    2006-01-01

    Different patterns of pedestrian loading of footbridges exist, whose occurrence depends on a number of parameters, such as the bridge span, frequency, damping and mass, and the pedestrian density and activity. In this paper analytical models for the transient action of one walker and for the stat...

  16. An auto-calibration procedure for empirical solar radiation models

    Bojanowski, J.S.; Donatelli, Marcello; Skidmore, A.K.; Vrieling, A.

    2013-01-01

    Solar radiation data are an important input for estimating evapotranspiration and modelling crop growth. Direct measurement of solar radiation is now carried out in most European countries, but the network of measuring stations is too sparse for reliable interpolation of measured values. Instead of

  17. The Active Model: a calibration of material intent

    Ramsgaard Thomsen, Mette; Tamke, Martin

    2012-01-01

    created it. This definition suggests structural characteristics that are perhaps not immediately obvious when implemented within architectural models. It opens the idea that materiality might persist into the digital environment, as well as the digital lingering within the material. It implies questions...

  18. Remote sensing estimation of evapotranspiration for SWAT Model Calibration

    Hydrological models are used to assess many water resource problems from water quantity to water quality issues. The accurate assessment of the water budget, primarily the influence of precipitation and evapotranspiration (ET), is a critical first-step evaluation, which is often overlooked in hydro...

  19. Modeling of the Global Water Cycle - Analytical Models

    Yongqiang Liu; Roni Avissar

    2005-01-01

    Both numerical and analytical models of the coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions often difficult to interpret, analytical...

  20. Global Environmental Change: An integrated modelling approach

    Den Elzen, M.

    1993-01-01

    Two major global environmental problems are dealt with: climate change and stratospheric ozone depletion (and their mutual interactions), briefly surveyed in Part 1. In Part 2 a brief description of the integrated modelling framework IMAGE 1.6 is given. Some specific parts of the model are described in more detail in other chapters, e.g. the carbon cycle model, the atmospheric chemistry model, the halocarbon model, and the UV-B impact model. In Part 3 an uncertainty analysis of climate change and stratospheric ozone depletion is presented (Chapter 4). Chapter 5 briefly reviews the social and economic uncertainties implied by future greenhouse gas emissions. Chapters 6 and 7 describe a model and sensitivity analysis pertaining to the scientific uncertainties and/or lacunae in the sources and sinks of methane and carbon dioxide, and their biogeochemical feedback processes. Chapter 8 presents an uncertainty and sensitivity analysis of the carbon cycle model, the halocarbon model, and the IMAGE model 1.6 as a whole. Part 4 presents the risk assessment methodology as applied to the problems of climate change and stratospheric ozone depletion more specifically. In Chapter 10, this methodology is used as a means with which to assess current ozone policy and a wide range of halocarbon policies. Chapter 11 presents and evaluates the simulated globally-averaged temperature and sea level rise (indicators) for the IPCC-1990 and 1992 scenarios, concluding with a Low Risk scenario which would meet the climate targets. Chapter 12 discusses the impact of sea level rise on the frequency of the Dutch coastal defence system (indicator) for the IPCC-1990 scenarios. Chapter 13 presents projections of mortality rates due to stratospheric ozone depletion based on model simulations employing the UV-B chain model for a number of halocarbon policies. Chapter 14 presents an approach for allocating future emissions of CO2 among regions. (Abstract Truncated)

  1. SWAT application in intensive irrigation systems: Model modification, calibration and validation

    Dechmi, Farida; Burguete, Javier; Skhiri, Ahmed

    2012-11-01

    The Soil and Water Assessment Tool (SWAT) is a well-established, distributed, eco-hydrologic model. However, using the case study of an intensively irrigated agricultural watershed, it was shown that none of the model versions is able to appropriately reproduce the total streamflow in such a system when the irrigation source is outside the watershed. The objective of this study was to modify the SWAT2005 version to correctly simulate the main hydrological processes. Crop yield, total streamflow, total suspended sediment (TSS) losses and phosphorus load calibration and validation were performed using field survey information and water quantity and quality data recorded during 2008 and 2009 in the Del Reguero irrigated watershed in Spain. The goodness of the calibration and validation results was assessed using five statistical measures, including the Nash-Sutcliffe efficiency (NSE). Results indicated that the average annual crop yield and actual evapotranspiration estimates were quite satisfactory. On a monthly basis, the NSE values were 0.90 (calibration) and 0.80 (validation), indicating that the modified model could accurately reproduce the observed streamflow. The TSS losses were also satisfactorily estimated (NSE = 0.72 and 0.52 for the calibration and validation steps). The monthly temporal patterns and all the statistical parameters indicated that the modified SWAT-IRRIG model adequately predicted the total phosphorus (TP) loading. Therefore, the model could be used to assess the impacts of different best management practices on nonpoint phosphorus losses in irrigated systems.

  2. Global evaluation and calibration of a passive air sampler for gaseous mercury

    McLagan, David S.; Mitchell, Carl P. J.; Steffen, Alexandra; Hung, Hayley; Shin, Cecilia; Stupple, Geoff W.; Olson, Mark L.; Luke, Winston T.; Kelley, Paul; Howard, Dean; Edwards, Grant C.; Nelson, Peter F.; Xiao, Hang; Sheu, Guey-Rong; Dreyer, Annekatrin; Huang, Haiyong; Hussain, Batual Abdul; Lei, Ying D.; Tavshunsky, Ilana; Wania, Frank

    2018-04-01

    Passive air samplers (PASs) for gaseous mercury (Hg) were deployed for time periods between 1 month and 1 year at 20 sites across the globe with continuous atmospheric Hg monitoring using active Tekran instruments. The purpose was to evaluate the accuracy of the PAS vis-à-vis the industry standard active instruments and to determine a sampling rate (SR; the volume of air stripped of gaseous Hg per unit of time) that is applicable across a wide range of conditions. The sites spanned a wide range of latitudes, altitudes, meteorological conditions, and gaseous Hg concentrations. Precision, based on 378 replicated deployments performed by numerous personnel at multiple sites, is 3.6 ± 3.0 %, confirming the PAS's excellent reproducibility and ease of use. Using a SR previously determined at a single site, gaseous Hg concentrations derived from the globally distributed PASs deviate from Tekran-based concentrations by 14.2 ± 10 %. A recalibration using the entire new data set yields a slightly higher SR of 0.1354 ± 0.016 m3 day-1. When concentrations are derived from the PAS using this revised SR, the difference between concentrations from active and passive sampling is reduced to 8.8 ± 7.5 %. At the mean gaseous Hg concentration across the study sites of 1.54 ng m-3, this represents an ability to resolve concentrations to within 0.13 ng m-3. Adjusting the sampling rate to deployment-specific temperatures and wind speeds does not decrease the difference in active-passive concentration further (8.7 ± 5.7 %), but reduces its variability by leading to better agreement in Hg concentrations measured at sites with very high and very low temperatures and very high wind speeds. This value (8.7 ± 5.7 %) represents a conservative assessment of the overall uncertainty of the PAS due to inherent uncertainties of the Tekran instruments. Going forward, the recalibrated SR adjusted for temperature and wind speed should be used, especially if conditions are highly variable or
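    The conversion from collected mass to an air concentration is a one-line calculation. A sketch using the recalibrated sampling rate quoted above (function and variable names are illustrative):

```python
def pas_concentration(mass_ng, days, sampling_rate=0.1354):
    """Gaseous Hg concentration (ng/m^3) from a passive air sampler:
    collected mass divided by the volume of air sampled, using the
    recalibrated sampling rate of 0.1354 m^3/day."""
    return mass_ng / (sampling_rate * days)

# e.g., 6.2 ng of Hg collected over a 30-day deployment
print(round(pas_concentration(6.2, 30), 2), "ng/m^3")  # ~1.53
```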

  3. Global evaluation and calibration of a passive air sampler for gaseous mercury

    D. S. McLagan

    2018-04-01

    Full Text Available Passive air samplers (PASs) for gaseous mercury (Hg) were deployed for time periods between 1 month and 1 year at 20 sites across the globe with continuous atmospheric Hg monitoring using active Tekran instruments. The purpose was to evaluate the accuracy of the PAS vis-à-vis the industry standard active instruments and to determine a sampling rate (SR; the volume of air stripped of gaseous Hg per unit of time) that is applicable across a wide range of conditions. The sites spanned a wide range of latitudes, altitudes, meteorological conditions, and gaseous Hg concentrations. Precision, based on 378 replicated deployments performed by numerous personnel at multiple sites, is 3.6 ± 3.0 %, confirming the PAS's excellent reproducibility and ease of use. Using a SR previously determined at a single site, gaseous Hg concentrations derived from the globally distributed PASs deviate from Tekran-based concentrations by 14.2 ± 10 %. A recalibration using the entire new data set yields a slightly higher SR of 0.1354 ± 0.016 m3 day−1. When concentrations are derived from the PAS using this revised SR, the difference between concentrations from active and passive sampling is reduced to 8.8 ± 7.5 %. At the mean gaseous Hg concentration across the study sites of 1.54 ng m−3, this represents an ability to resolve concentrations to within 0.13 ng m−3. Adjusting the sampling rate to deployment-specific temperatures and wind speeds does not decrease the difference in active–passive concentration further (8.7 ± 5.7 %), but reduces its variability by leading to better agreement in Hg concentrations measured at sites with very high and very low temperatures and very high wind speeds. This value (8.7 ± 5.7 %) represents a conservative assessment of the overall uncertainty of the PAS due to inherent uncertainties of the Tekran instruments. Going forward, the recalibrated SR adjusted for temperature and wind speed

  4. Calibration of Airframe and Occupant Models for Two Full-Scale Rotorcraft Crash Tests

    Annett, Martin S.; Horta, Lucas G.; Polanco, Michael A.

    2012-01-01

    Two full-scale crash tests of an MD-500 helicopter were conducted in 2009 and 2010 at NASA Langley's Landing and Impact Research Facility in support of NASA's Subsonic Rotary Wing Crashworthiness Project. The first crash test was conducted to evaluate the performance of an externally mounted composite deployable energy absorber under combined impact conditions. In the second crash test, the energy absorber was removed to establish baseline loads that are regarded as severe but survivable. Accelerations and kinematic data collected from the crash tests were compared to a system integrated finite element model of the test article. Results from 19 accelerometers placed throughout the airframe were compared to finite element model responses. The model developed for the purposes of predicting acceleration responses from the first crash test was inadequate when evaluating more severe conditions seen in the second crash test. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used to calibrate model results for the second full-scale crash test. This combination of heuristic and quantitative methods was used to identify modeling deficiencies, evaluate parameter importance, and propose required model changes. It is shown that the multi-dimensional calibration techniques presented here are particularly effective in identifying model adequacy. Acceleration results for the calibrated model were compared to test results and the original model results. There was a noticeable improvement in the pilot and co-pilot region, a slight improvement in the occupant model response, and an over-stiffening effect in the passenger region. This approach should be adopted early on, in combination with the building-block approaches that are customarily used, for model development and test planning guidance. Complete crash simulations with validated finite element models can be used

  5. Statistical validation of engineering and scientific models: bounds, calibration, and extrapolation.

    Dowding, Kevin J.; Hills, Richard Guy (New Mexico State University, Las Cruces, NM)

    2005-04-01

    Numerical models of complex phenomena often contain approximations due to our inability to fully model the underlying physics, the excessive computational resources required to fully resolve the physics, the need to calibrate constitutive models, or in some cases, our ability to only bound behavior. Here we illustrate the relationship between approximation, calibration, extrapolation, and model validation through a series of examples that use the linear transient convective/dispersion equation to represent the nonlinear behavior of Burgers equation. While the use of these models represents a simplification relative to the types of systems we normally address in engineering and science, the present examples do support the tutorial nature of this document without obscuring the basic issues presented with unnecessarily complex models.

  6. Calibration of a finite element composite delamination model by experiments

    Gaiotti, M.; Rizzo, C.M.; Branner, Kim

    2013-01-01

    This paper deals with the mechanical behavior under in-plane compressive loading of thick and mostly unidirectional glass fiber composite plates made with an initial embedded delamination. The delamination is rectangular in shape, causing the separation of the central part of the plate into two distinct sub-laminates. The work focuses on experimental validation of a finite element model built using the 9-noded MITC9 shell elements, which prevent locking effects, and aims to capture the highly nonlinear buckling features involved in the problem. The geometry has been numerically defined...

  7. Calibration of a distributed hydrologic model using observed spatial patterns from MODIS data

    Demirel, Mehmet C.; González, Gorka M.; Mai, Juliane; Stisen, Simon

    2016-04-01

    Distributed hydrologic models are typically calibrated against streamflow observations at the outlet of the basin. Along with these observations from gauging stations, satellite-based estimates offer independent evaluation data, such as remotely sensed actual evapotranspiration (aET) and land surface temperature. The primary objective of the study is to compare model calibrations against traditional downstream discharge measurements with calibrations against observed spatial patterns and combinations of both types of observations. While discharge-based model calibration typically improves the temporal dynamics of the model, it seems to yield minimal improvement of the simulated spatial patterns. In contrast, objective functions specifically targeting the spatial pattern performance could potentially increase the spatial model performance. However, most modeling studies, including the model formulations and parameterizations, are not designed to actually change the simulated spatial pattern during calibration. This study investigates the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale hydrologic model (mHM). This model is selected as it allows for a change in the spatial distribution of key soil parameters through the optimization of pedo-transfer function parameters and includes options for using fully distributed daily Leaf Area Index (LAI) values directly as input. In addition, the simulated aET can be estimated at a spatial resolution suitable for comparison to the spatial patterns observed with MODIS data. To increase our control on spatial calibration, we introduced three additional parameters to the model. These new parameters are part of an empirical equation that calculates the crop coefficient (Kc) from daily LAI maps, which is then used to update the potential evapotranspiration (PET) model input, instead of correcting/updating PET with just a uniform (or aspect-driven) factor, as used in the mHM model.
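    The abstract does not reproduce the equation itself, but a three-parameter Kc(LAI) curve of the general kind described might look like the following sketch (the functional form, parameter names and default values are assumptions for illustration, not the authors' exact formulation):

```python
import numpy as np

def crop_coefficient(lai, kc_min, kc_max, decay):
    """Hypothetical three-parameter curve: Kc rises from kc_min
    toward kc_max as leaf area index increases."""
    return kc_min + (kc_max - kc_min) * (1.0 - np.exp(-decay * np.asarray(lai)))

def corrected_pet(pet, lai, kc_min=0.4, kc_max=1.2, decay=0.6):
    """Scale reference PET by the LAI-derived crop coefficient,
    instead of a single uniform correction factor."""
    return crop_coefficient(lai, kc_min, kc_max, decay) * np.asarray(pet)
```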

  8. Three-dimensional DFN Model Development and Calibration: A Case Study for Pahute Mesa, Nevada National Security Site

    Pham, H. V.; Parashar, R.; Sund, N. L.; Pohlmann, K.

    2017-12-01

    Pahute Mesa, located in the north-western region of the Nevada National Security Site, is an area where numerous underground nuclear tests were conducted. The mesa contains several fractured aquifers that can potentially provide high-permeability pathways for migration of radionuclides away from testing locations. The BULLION Forced-Gradient Experiment (FGE) conducted on Pahute Mesa injected and pumped solute and colloid tracers from a system of three wells to obtain site-specific information about the transport of radionuclides in fractured rock aquifers. This study aims to develop reliable three-dimensional discrete fracture network (DFN) models to simulate the BULLION FGE as a means of computing realistic ranges of important parameters describing fractured rock. Multiple conceptual DFN models were developed using dfnWorks, a parallelized computational suite developed by Los Alamos National Laboratory, to simulate flow and conservative particle movement in subsurface fractured rocks downgradient from the BULLION test. The model domain is 100 x 200 x 100 m and includes the three tracer-test wells of the BULLION FGE and the Pahute Mesa lava-flow aquifer. The model scenarios considered differ from each other in terms of boundary conditions and fracture density. For each conceptual model, a number of statistically equivalent fracture network realizations were generated using data from fracture characterization studies. We adopt the covariance matrix adaptation evolution strategy (CMA-ES), a global-local stochastic derivative-free optimization method, to calibrate the DFN models using groundwater levels and tracer breakthrough data obtained from the three wells. Models of fracture apertures based on fracture type and size are proposed, and the values of the apertures in each model are estimated during model calibration. The ranges of fracture aperture values resulting from this study are expected to enhance understanding of radionuclide transport in fractured rocks and
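    As an indication of how a CMA-ES calibration loop can be driven, here is a compact sketch built on the widely used `cma` Python package (the misfit function is a stand-in; a real run would execute the DFN model and score it against heads and breakthrough curves):

```python
import numpy as np
import cma  # pip install cma

def misfit(params):
    """Stand-in objective: squared error between 'simulated' and
    'observed' values; a real study would run the flow model here."""
    simulated = np.tanh(params)              # placeholder for a model call
    observed = np.array([0.2, 0.5, 0.1])
    return float(np.sum((simulated - observed) ** 2))

# Initial guess (e.g., log-apertures) and initial step size
xbest, es = cma.fmin2(misfit, np.zeros(3), 0.3, options={"maxfevals": 2000})
print("calibrated parameters:", xbest)
```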

  9. Optimal Operational Monetary Policy Rules in an Endogenous Growth Model: a calibrated analysis

    Arato, Hiroki

    2009-01-01

    This paper constructs an endogenous growth New Keynesian model and considers the growth and welfare effects of Taylor-type (operational) monetary policy rules. The Ramsey equilibrium and the optimal operational monetary policy rule are also computed. In the calibrated model, the Ramsey-optimal volatility of the inflation rate is smaller than in a standard exogenous growth New Keynesian model with physical capital accumulation. The optimal operational monetary policy rule makes the nominal interest rate respond s...

  10. Uncertainty analyses of the calibrated parameter values of a water quality model

    Rode, M.; Suhr, U.; Lindenschmidt, K.-E.

    2003-04-01

    For river basin management, water quality models are increasingly used for the analysis and evaluation of different management measures. However, substantial uncertainties exist in parameter values, depending on the available calibration data. In this paper an uncertainty analysis for a water quality model is presented which considers the impact of the available model calibration data and the variance of the input variables. The investigation was conducted on the basis of four extensive flow-time-related longitudinal surveys of the River Elbe in the years 1996 to 1999, with varying discharges and seasonal conditions. For the model calculations the deterministic model QSIM of the BfG (Germany) was used. QSIM is a one-dimensional water quality model that uses standard algorithms for hydrodynamics and phytoplankton dynamics in running waters, e.g. Michaelis-Menten/Monod kinetics, which are used in a wide range of models. The multi-objective calibration of the model was carried out with the nonlinear parameter estimator PEST. The results show that for individual flow-time-related measuring surveys very good agreement between model calculations and measured values can be obtained. If these parameters are applied under deviating boundary conditions, substantial errors in the model calculations can occur. These uncertainties can be decreased with an enlarged calibration database: more reliable model parameters can be identified which supply reasonable results over broader boundary conditions. Extending the application of the parameter set to a wider range of water quality conditions leads to a slight reduction of the model precision for any specific water quality situation. Moreover, the investigations show that highly variable water quality variables like algal biomass always allow a lower forecast accuracy than variables with lower coefficients of variation, e.g. nitrate.

  11. Comparison between two calibration models of a measurement system for thyroid monitoring

    Venturini, Luzia

    2005-01-01

    This paper presents a comparison between two theoretical calibrations that use two mathematical models to represent the neck region. In the first model the thyroid is considered to be simply a region limited by two concentric cylinders whose dimensions are those of the trachea and the neck. The second model uses functional forms to obtain a better representation of the thyroid geometry. Efficiency values are obtained using Monte Carlo simulation. (author)

  12. Hierarchical Bayesian modelling of mobility metrics for hazard model input calibration

    Calder, Eliza; Ogburn, Sarah; Spiller, Elaine; Rutarindwa, Regis; Berger, Jim

    2015-04-01

    In this work we present a method to constrain flow mobility input parameters for pyroclastic flow models using hierarchical Bayes modeling of standard mobility metrics such as the H/L ratio and flow volume. The advantage of hierarchical modeling is that it can leverage the information in a global dataset for a particular mobility metric in order to reduce the uncertainty in modeling an individual volcano, which is especially important where individual volcanoes have only sparse datasets. We use compiled pyroclastic flow runout data from Colima, Merapi, Soufriere Hills, Unzen and Semeru volcanoes, presented in an open-source database, FlowDat (https://vhub.org/groups/massflowdatabase). While the exact relationship between flow volume and friction varies somewhat between volcanoes, dome collapse flows originating from the same volcano exhibit similar mobility relationships. Instead of fitting separate regression models to each volcano's dataset, we use a variation of the hierarchical linear model (Kass and Steffey, 1989). The model has a hierarchical structure with two levels: all dome collapse flows, and dome collapse flows at specific volcanoes. The hierarchical model allows us to assume that the flows at specific volcanoes share a common distribution of regression slopes, then solves for that distribution. We present comparisons of the 95% confidence intervals on the individual regression lines for the dataset from each volcano, as well as those obtained from the hierarchical model. The results clearly demonstrate the advantage of considering global datasets using this technique. The technique developed is demonstrated here for mobility metrics, but can be applied to many other global datasets of volcanic parameters. In particular, such methods can provide a means to better constrain parameters for volcanoes for which we have only sparse data, a ubiquitous problem in volcanology.

  13. Analysis and classification of data sets for calibration and validation of agro-ecosystem models

    Kersebaum, K C; Boote, K J; Jorgenson, J S

    2015-01-01

    Experimental field data are used at different levels of complexity to calibrate, validate and improve agro-ecosystem models to enhance their reliability for regional impact assessment. A methodological framework and software are presented to evaluate and classify data sets into four classes regar...

  14. Optimization of electronic enclosure design for thermal and moisture management using calibrated models of progressive complexity

    Mohanty, Sankhya; Staliulionis, Zygimantas; Shojaee Nasirabadi, Parizad

    2016-01-01

    the development of rigorous calibrated CFD models as well as simple predictive numerical tools, the current paper tackles the optimization of critical features of a typical two-chamber electronic enclosure. The progressive optimization strategy begins the design parameter selection by initially using simpler...

  15. Calibration of the L-MEB model over a coniferous and a deciduous forest

    Grant, Jennifer P.; Saleh-Contell, Kauzar; Wigneron, Jean-Pierre

    2008-01-01

    In this paper, the L-band Microwave Emission of the Biosphere (L-MEB) model used in the Soil Moisture and Ocean Salinity (SMOS) Level 2 Soil Moisture algorithm is calibrated using L-band (1.4 GHz) microwave measurements over a coniferous (Pine) and a deciduous (mixed/Beech) forest. This resulted...

  16. Displaced calibration of PM10 measurements using spatio-temporal models

    Daniela Cocchi

    2007-12-01

    Full Text Available PM10 monitoring networks are equipped with heterogeneous samplers. Some of these samplers are known to underestimate true concentration levels (non-reference samplers). In this paper we propose a hierarchical spatio-temporal Bayesian model for the calibration of measurements recorded using non-reference samplers, borrowing strength from non-co-located reference sampler measurements.

  17. A parameter for the selection of an optimum balance calibration model by Monte Carlo simulation

    Bidgood, Peter M

    2013-09-01

    Full Text Available The current trend in balance calibration-matrix generation is to use non-linear regression and statistical methods. Methods typically include Modified-Design-of-Experiment (MDOE), Response-Surface-Models (RSMs) and Analysis of Variance (ANOVA...

  18. Predictive error dependencies when using pilot points and singular value decomposition in groundwater model calibration

    Christensen, Steen; Doherty, John

    2008-01-01

    super parameters), and that the structural errors caused by using pilot points and super parameters to parameterize the highly heterogeneous log-transmissivity field can be significant. For the test case much effort is put into studying how the calibrated model's ability to make accurate predictions...

  19. Calibration of a semi-distributed hydrological model using discharge and remote sensing data

    Muthuwatta, Lal P.; Booij, Martijn J.; Rientjes, Tom H.M.; Bos, M.G.; Gieske, A.S.M.; Ahmad, Mobin-Ud-Din; Yilmaz, Koray; Yucel, Ismail; Gupta, Hoshin V.; Wagener, Thorsten; Yang, Dawen; Savenije, Hubert; Neale, Christopher; Kunstmann, Harald; Pomeroy, John

    2009-01-01

    The objective of this study is to present an approach to calibrate a semi-distributed hydrological model using observed streamflow data and actual evapotranspiration time series estimates based on remote sensing data. First, daily actual evapotranspiration is estimated using available MODIS

  20. Performance and Model Calibration of R-D-N Processes in Pilot Plant

    de la Sota, A.; Larrea, L.; Novak, L.

    1994-01-01

    This paper deals with the first part of an experimental programme in a pilot plant configured for advanced biological nutrient removal processes treating domestic wastewater of Bilbao. The IAWPRC Model No.1 was calibrated in order to optimize the design of the full-scale plant. In this first phas...

  1. Using expert knowledge of the hydrological system to constrain multi-objective calibration of SWAT models

    The SWAT model is a helpful tool to predict hydrological processes in a study catchment and their impact on the river discharge at the catchment outlet. For reliable discharge predictions, a precise simulation of hydrological processes is required. Therefore, SWAT has to be calibrated accurately to ...

  2. Regional calibration models for predicting loblolly pine tracheid properties using near-infrared spectroscopy

    Mohamad Nabavi; Joseph Dahlen; Laurence Schimleck; Thomas L. Eberhardt; Cristian Montes

    2018-01-01

    This study developed regional calibration models for the prediction of loblolly pine (Pinus taeda) tracheid properties using near-infrared (NIR) spectroscopy. A total of 1842 pith-to-bark radial strips, aged 19–31 years, were acquired from 268 trees from 109 stands across the southeastern USA. Diffuse reflectance NIR spectra were collected at 10-mm...

  3. Calibration of the model SMART2 in the Netherlands, using data available at the European scale

    Mol-Dijkstra, J.P.; Kros, J.

    1999-01-01

    The soil acidification model SMART2 has been developed for application on a national to a continental scale. In this study SMART2 is applied at the European scale, which means that SMART2 was applied to the Netherlands with data that are available at the European scale. In order to calibrate SMART2,

  4. Calibration and Monte Carlo modelling of neutron long counters

    Tagziria, H

    2000-01-01

    The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...

  5. Model independent approach to the single photoelectron calibration of photomultiplier tubes

    Saldanha, R.; Grandi, L.; Guardincerri, Y.; Wester, T.

    2017-08-01

    The accurate calibration of photomultiplier tubes is critical in a wide variety of applications in which it is necessary to know the absolute number of detected photons or precisely determine the resolution of the signal. Conventional calibration methods rely on fitting the photomultiplier response to a low intensity light source with analytical approximations to the single photoelectron distribution, often leading to biased estimates due to the inability to accurately model the full distribution, especially at low charge values. In this paper we present a simple statistical method to extract the relevant single photoelectron calibration parameters without making any assumptions about the underlying single photoelectron distribution. We illustrate the use of this method through the calibration of a Hamamatsu R11410 photomultiplier tube and study the accuracy and precision of the method using Monte Carlo simulations. The method is found to have significantly reduced bias compared to conventional methods and works under a wide range of light intensities, making it suitable for simultaneously calibrating large arrays of photomultiplier tubes.
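    In the spirit of such moment-based approaches (this is a generic sketch, not necessarily the paper's exact estimator), the occupancy can be taken from the Poisson zero-photoelectron fraction and the mean single-photoelectron charge from the background-subtracted mean:

```python
import numpy as np

def spe_mean_charge(led_charges, dark_charges, zero_fraction):
    """Estimate the mean SPE charge without assuming an SPE shape.

    zero_fraction: fraction of LED triggers consistent with pedestal
    only, so that P(0) = exp(-lambda) gives the occupancy lambda;
    then E[q_signal] = lambda * mu_spe.
    """
    lam = -np.log(zero_fraction)
    mean_signal = np.mean(led_charges) - np.mean(dark_charges)
    return mean_signal / lam
```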

  6. Development of a generic auto-calibration package for regional ecological modeling and application in the Central Plains of the United States

    Wu, Yiping; Liu, Shuguang; Li, Zhengpeng; Dahal, Devendra; Young, Claudia J.; Schmidt, Gail L.; Liu, Jinxun; Davis, Brian; Sohl, Terry L.; Werner, Jeremy M.; Oeding, Jennifer

    2014-01-01

    Process-oriented ecological models are frequently used for predicting potential impacts of global changes such as climate and land-cover changes, which can be useful for policy making. It is critical but challenging to automatically derive optimal parameter values at different scales, especially at the regional scale, and to validate the model performance. In this study, we developed an automatic calibration (auto-calibration) function for a well-established biogeochemical model—the General Ensemble Biogeochemical Modeling System (GEMS)-Erosion Deposition Carbon Model (EDCM)—using data assimilation techniques: the Shuffled Complex Evolution algorithm and a model-inversion R package, the Flexible Modeling Environment (FME). The new functionality can support multi-parameter and multi-objective auto-calibration of EDCM at both the pixel and regional levels. We also developed a post-processing procedure for GEMS that provides options to save the pixel-based or aggregated county-land cover specific parameter values for subsequent simulations. In our case study, we successfully applied the updated model (EDCM-Auto) to a single crop pixel with a corn–wheat rotation and to a large ecological region (Level II)—the Central USA Plains. The evaluation results indicate that EDCM-Auto is applicable at multiple scales and is capable of handling land cover changes (e.g., crop rotations). The model also performs well in capturing the spatial pattern of grain yield production for crops and net primary production (NPP) for other ecosystems across the region, providing a good example of implementing calibration and validation of ecological models with readily available survey data (grain yield) and remote sensing data (NPP) at regional and national levels. The developed platform for auto-calibration can be readily expanded to incorporate other model-inversion algorithms and potential R packages, and can also be applied to other ecological models.

  7. Development of an Integrated Global Energy Model

    Krakowski, R.A.

    1999-01-01

    The primary objective of this research was to develop a forefront analysis tool for application to enhance understanding of long-term, global, nuclear-energy and nuclear-material futures. To this end, an existing economics-energy-environmental (E3) model was adopted, modified, and elaborated to examine this problem in a multi-regional (13 regions), long-term (to approximately 2100) context. The E3 model so developed was applied to create a Los Alamos presence in this E3 area through "niche analyses" that provide input to the formulation of policies dealing with and shaping of nuclear-energy and nuclear-materials futures. Results from analyses using the E3 model have been presented at a variety of national and international conferences and workshops. Through use of the E3 model, Los Alamos was afforded the opportunity to participate in a multi-national E3 study team that examined a range of global, long-term nuclear issues under the auspices of the IAEA during the 1998-99 period. Finally, the E3 model developed under this LDRD project is being used as an important component in the more recent Nuclear Material Management Systems (NMMS) project.

  8. Drought Persistence Errors in Global Climate Models

    Moon, H.; Gudmundsson, L.; Seneviratne, S. I.

    2018-04-01

    The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM model simulations to observation-based data sets. For doing so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates for drought persistence, where a dry status is defined as negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at global scale. Subsequently, we analyzed to which degree (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors using an analysis of variance approach. The results show that at monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
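    The persistence metric itself is straightforward to compute from an anomaly series. A minimal sketch (illustrative names):

```python
import numpy as np

def dry_to_dry_probability(precip_anomaly):
    """P(dry at t+1 | dry at t), where 'dry' means a negative
    precipitation anomaly, as in the persistence estimate above."""
    dry = np.asarray(precip_anomaly) < 0
    return np.sum(dry[:-1] & dry[1:]) / np.sum(dry[:-1])
```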

  9. How does higher frequency monitoring data affect the calibration of a process-based water quality model?

    Jackson-Blake, Leah; Helliwell, Rachel

    2015-04-01

    Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, spanning all hydrochemical conditions. However, regulatory agencies and research organisations generally only sample at a fortnightly or monthly frequency, even in well-studied catchments, often missing peak flow events. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by a process-based, semi-distributed catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the Markov Chain Monte Carlo - DiffeRential Evolution Adaptive Metropolis (MCMC-DREAM) algorithm. Calibration to daily data resulted in improved simulation of peak TDP concentrations and improved model performance statistics. Parameter-related uncertainty in simulated TDP was large when fortnightly data was used for calibration, with a 95% credible interval of 26 μg/l. This uncertainty is comparable in size to the difference between Water Framework Directive (WFD) chemical status classes, and would therefore make it difficult to use this calibration to predict shifts in WFD status. The 95% credible interval reduced markedly with the higher frequency monitoring data, to 6 μg/l. The number of parameters that could be reliably auto-calibrated

  10. Sensitivity analysis and calibration of a soil carbon model (SoilGen2) in two contrasting loess forest soils

    Y. Y. Yu

    2013-01-01

    Full Text Available To accurately estimate past terrestrial carbon pools is key to understanding the global carbon cycle and its relationship with the climate system. SoilGen2 is a useful tool to obtain aspects of soil properties (including carbon content) by simulating soil formation processes; thus it offers an opportunity for both past soil carbon pool reconstruction and future carbon pool prediction. In order to apply it to various environmental conditions, parameters related to the carbon cycle process in SoilGen2 are calibrated based on six soil pedons from two typical loess deposition regions (Belgium and China). Sensitivity analysis using the Morris method shows that the decomposition rate of humus (kHUM), the fraction of incoming plant material as leaf litter (frecto) and the decomposition rate of resistant plant material (kRPM) are the three most sensitive parameters that would cause the greatest uncertainty in the simulated change of soil organic carbon in both regions. According to the principle of minimizing the difference between simulated and measured organic carbon by comparing quality indices, suitable values of kHUM, frecto and kRPM in the model are deduced step by step and validated for independent soil pedons. The difference in calibrated parameters between Belgium and China may be attributed to their different vegetation types and climate conditions. This calibrated model allows more accurate simulation of carbon change in the whole pedon and has potential for future modeling of the carbon cycle over long timescales.

  11. Progress in Global Multicompartmental Modelling of DDT

    Stemmler, I.; Lammel, G.

    2009-04-01

    Dichlorodiphenyltrichloroethane, DDT, and its major metabolite dichlorodiphenyldichloroethylene, DDE, are long-lived in the environment (persistent) and have been circulating since the 1950s. They accumulate along food chains, cause detrimental effects in marine and terrestrial wildlife, and pose a hazard to human health. DDT was widely used as an insecticide in the past and is still in use in a number of tropical countries to combat vector-borne diseases like malaria and typhus. It is a multicompartmental substance with only a small mass fraction residing in air. A global multicompartment chemistry transport model (MPI-MCTM; Semeena et al., 2006) is used to study the environmental distribution and fate of DDT. For the first time, a horizontally and vertically resolved global model was used to perform a long-term simulation of DDT and DDE. The model is based on general circulation models for the ocean (MPIOM; Marsland et al., 2003) and the atmosphere (ECHAM5). In addition, an oceanic biogeochemistry model (HAMOCC5.1; Maier-Reimer et al., 2005) and a microphysical aerosol model (HAM; Stier et al., 2005) are included. Multicompartmental substances cycle in the atmosphere (3 phases), ocean (3 phases), top soil (3 phases), and on vegetation surfaces. The model was run for 40 years, forced with historical agricultural application data for 1950-1990. The model results show that the global environmental contamination started to decrease in air, soil and vegetation after applications peaked in 1965-70. In some regions, however, the DDT mass had not yet reached a maximum by 1990 and was still accumulating until the end of the simulation. Modelled DDT and DDE concentrations in the atmosphere, ocean and soil are evaluated by comparison with observational data. The evaluation of the model results indicates that degradation of DDE in air was underestimated. Also for DDT, the discrepancies between model results and observations are related to uncertainties of

  12. A satellite-based global landslide model

    A. Farahmand

    2013-05-01

    Full Text Available Landslides are devastating phenomena that cause huge damage around the world. This paper presents a quasi-global landslide model derived using satellite precipitation data, land-use land cover maps, and 250 m topography information. This suggested landslide model is based on Support Vector Machines (SVM), a machine learning algorithm. The National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) landslide inventory data is used as observations and reference data. In all, 70% of the data are used for model development and training, whereas 30% are used for validation and verification. The results of 100 random subsamples of available landslide observations revealed that the suggested landslide model can predict historical landslides reliably. The average error of 100 iterations of landslide prediction is estimated to be approximately 7%, while approximately 2% false landslide events are observed.
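    The classification setup described above maps naturally onto standard tooling. A minimal scikit-learn sketch with a 70/30 split (the features and labels are random placeholders, not the study's inputs):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder predictors, e.g. precipitation, land cover, slope, elevation
rng = np.random.default_rng(42)
X = rng.random((5000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.2, 5000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_tr, y_tr)
print(f"hold-out accuracy: {model.score(X_te, y_te):.2f}")
```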

  13. A hydroclimatic model of global fire patterns

    Boer, Matthias

    2015-04-01

    Satellite-based earth observation is providing an increasingly accurate picture of global fire patterns. The highest fire activity is observed in seasonally dry (sub-)tropical environments of South America, Africa and Australia, but fires occur with varying frequency, intensity and seasonality in almost all biomes on Earth. The particular combination of these fire characteristics, or fire regime, is known to emerge from the combined influences of climate, vegetation, terrain and land use, but has so far proven difficult to reproduce by global models. Uncertainty about the biophysical drivers and constraints that underlie current global fire patterns is propagated in model predictions of how ecosystems, fire regimes and biogeochemical cycles may respond to projected future climates. Here, I present a hydroclimatic model of global fire patterns that predicts the mean annual burned area fraction (F) of 0.25° x 0.25° grid cells as a function of the climatic water balance. Following Bradstock's four-switch model, long-term fire activity levels were assumed to be controlled by fuel productivity rates and the likelihood that the extant fuel is dry enough to burn. The frequency of ignitions and favourable fire weather were assumed to be non-limiting at long time scales. Fundamentally, fuel productivity and fuel dryness are a function of the local water and energy budgets available for the production and desiccation of plant biomass. The climatic water balance summarizes the simultaneous availability of biologically usable energy and water at a site, and may therefore be expected to explain a significant proportion of global variation in F. To capture the effect of the climatic water balance on fire activity I focused on the upper quantiles of F, i.e. the maximum level of fire activity for a given climatic water balance. Analysing GFED4 data for annual burned area together with gridded climate data, I found that nearly 80% of the global variation in the 0.99 quantile of F
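    Fitting an upper quantile of F against a water-balance predictor is a standard quantile-regression problem. A sketch using statsmodels with placeholder data (the study's own estimator and covariates may differ):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder: burned area fraction F vs. a climatic water balance index
rng = np.random.default_rng(1)
wb = rng.uniform(-1.0, 1.0, 2000)
F = np.clip(0.2 * (1.0 - wb**2) * rng.random(2000), 0.0, 1.0)
df = pd.DataFrame({"F": F, "wb": wb, "wb2": wb**2})

# The 0.99 quantile of F as a quadratic function of the water balance
fit = smf.quantreg("F ~ wb + wb2", df).fit(q=0.99)
print(fit.params)
```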

  14. On coupling global biome models with climate models

    Claussen, M.

    1994-01-01

    The BIOME model of Prentice et al. (1992), which predicts global vegetation patterns in equilibrium with climate, is coupled with the ECHAM climate model of the Max-Planck-Institut fuer Meteorologie, Hamburg. It is found that incorporation of the BIOME model into ECHAM, regardless of the coupling frequency, does not enhance the simulated climate variability, expressed in terms of differences between global vegetation patterns. The strongest changes are seen only between the initial biome distribution and the biome distribution computed after the first simulation period, provided that the climate-biome model is started from a biome distribution that resembles the present-day distribution. After the first simulation period, there is no significant shrinking, expanding, or shifting of biomes. Likewise, no trend is seen in global averages of land-surface parameters and climate variables. (orig.)

  15. Tree-Based Global Model Tests for Polytomous Rasch Models

    Komboz, Basil; Strobl, Carolin; Zeileis, Achim

    2018-01-01

    Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…

  16. Global and local level density models

    Koning, A.J.; Hilaire, S.; Goriely, S.

    2008-01-01

    Four different level density models, three phenomenological and one microscopic, are consistently parameterized using the same set of experimental observables. For each of the phenomenological models, the Constant Temperature Model, the Back-shifted Fermi gas Model and the Generalized Superfluid Model, a version without and with explicit collective enhancement is considered. Moreover, a recently published microscopic combinatorial model is compared with the phenomenological approaches and with the same set of experimental data. For each nuclide for which sufficient experimental data exists, a local level density parameterization is constructed for each model. Next, these local models have helped to construct global level density prescriptions, to be used for cases for which no experimental data exists. Altogether, this yields a collection of level density formulae and parameters that can be used with confidence in nuclear model calculations. To demonstrate this, a large-scale validation with experimental discrete level schemes and experimental cross sections and neutron emission spectra for various different reaction channels has been performed

  17. Experimental calibration of the mathematical model of Air Torque Position dampers with non-cascading blades

    Bikić Siniša M.

    2016-01-01

    Full Text Available This paper is focused on the mathematical model of Air Torque Position dampers. The mathematical model establishes a link between the velocity of air in front of the damper, the position of the damper blade and the moment acting on the blade caused by the air flow. This research aims to experimentally verify the mathematical model for the damper type with non-cascading blades. Four different types of dampers with non-cascading blades were considered: single-blade dampers, dampers with two cross-blades, dampers with two parallel blades and dampers with two blades of which one is fixed in the horizontal position. The case of a damper with a straight pipeline positioned in front of and behind the damper was taken into consideration. Calibration and verification of the mathematical model were conducted experimentally. The experiment was conducted on a laboratory facility for testing dampers used for regulation of the air flow rate in heating, ventilation and air conditioning systems. The design and setup of the laboratory facility, as well as the construction, adjustment and calibration of the laboratory damper, are presented in this paper. The mathematical model was calibrated using one set of data, while verification of the mathematical model was conducted using the second set of data. The mathematical model was successfully validated and can be used for accurate measurement of the air velocity on dampers with non-cascading blades under different operating conditions. [Project of the Ministry of Science of the Republic of Serbia, no. TR31058]

  18. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

    Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. Meanwhile, the proposed system is realized with a digital projector, and the general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. A corresponding calibration method is also designed for the proposed system to obtain the parameters of the projector. To validate the proposed back-projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projecting indication accuracy of the system is verified with a subpixel pattern projecting technique.

  19. Parallel Genetic Algorithms for calibrating Cellular Automata models: Application to lava flows

    D'Ambrosio, D.; Spataro, W.; Di Gregorio, S.; Calabria Univ., Cosenza; Crisci, G.M.; Rongo, R.; Calabria Univ., Cosenza

    2005-01-01

    Cellular Automata are highly nonlinear dynamical systems which are suitable for simulating natural phenomena whose behaviour may be specified in terms of local interactions. The Cellular Automata model SCIARA, developed for the simulation of lava flows, has demonstrated its ability to reproduce the behaviour of Etnean events. However, in order to apply the model for the prediction of future scenarios, a thorough calibration phase is required. This work presents the application of Genetic Algorithms, general-purpose search algorithms inspired by natural selection and genetics, for the parameter optimisation of the model SCIARA. Difficulties due to the elevated computational time suggested the adoption of a Master-Slave Parallel Genetic Algorithm for the calibration of the model with respect to the 2001 Mt. Etna eruption. Results demonstrated the usefulness of the approach, both in terms of computing time and quality of the performed simulations.
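
    A master-slave parallel GA of the kind adopted here keeps selection, crossover and mutation on the master and farms out only the expensive fitness evaluations to the slaves. A minimal Python sketch, with a toy objective standing in for a full SCIARA lava-flow simulation (all GA settings are illustrative):

      import random
      from multiprocessing import Pool

      N_PARAMS, POP_SIZE, N_GEN = 6, 32, 50

      def fitness(params):
          # Stand-in for running the cellular automaton with these parameters and
          # comparing the simulated lava field with the observed 2001 event.
          return -sum((p - 0.5) ** 2 for p in params)   # toy objective

      def mutate(ind, rate=0.1):
          return [min(1.0, max(0.0, p + random.gauss(0, 0.05))) if random.random() < rate else p
                  for p in ind]

      def crossover(a, b):
          cut = random.randrange(1, N_PARAMS)
          return a[:cut] + b[cut:]

      if __name__ == "__main__":
          pop = [[random.random() for _ in range(N_PARAMS)] for _ in range(POP_SIZE)]
          with Pool() as pool:                            # the "slaves"
              for gen in range(N_GEN):
                  scores = pool.map(fitness, pop)         # parallel fitness evaluation
                  ranked = [ind for _, ind in sorted(zip(scores, pop), reverse=True)]
                  elite = ranked[: POP_SIZE // 4]         # keep the best quarter
                  pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                                 for _ in range(POP_SIZE - len(elite))]
          print("best parameter set:", max(pop, key=fitness))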

  20. Calibration of the heat balance model for prediction of car climate

    Pokorný, Jan; Fišer, Jan; Jícha, Miroslav

    2012-04-01

    In the paper, the authors describe the development of a heat balance model to predict car climate and power heat load. The model is developed in the Modelica language, using Dymola as the interpreter. It is a dynamical system which describes the heat exchange between the car cabin and the ambient environment. Inside the cabin, heat exchange between the air zone, the interior and the air-conditioning system is considered. 1D heat transfer with heat accumulation is considered, as well as the relative movement of the Sun with respect to the cabin while the car is moving. Measurements under real operating conditions provided data for model calibration. The model was calibrated for Škoda Felicia parking-summer scenarios.

  2. A Global Model of Meteoric Sodium

    Marsh, Daniel R.; Janches, Diego; Feng, Wuhu; Plane, John M. C.

    2013-01-01

    A global model of sodium in the mesosphere and lower thermosphere has been developed within the framework of the National Center for Atmospheric Research's Whole Atmosphere Community Climate Model (WACCM). The standard fully interactive WACCM chemistry module has been augmented with a chemistry scheme that includes nine neutral and ionized sodium species. Meteoric ablation provides the source of sodium in the model and is represented as a combination of a meteoroid input function (MIF) and a parameterized ablation model. The MIF provides the seasonally and latitudinally varying meteoric flux which is modeled taking into consideration the astronomical origins of sporadic meteors and considers variations in particle entry angle, velocity, mass, and the differential ablation of the chemical constituents. WACCM simulations show large variations in the sodium constituents over time scales from days to months. Seasonality of sodium constituents is strongly affected by variations in the MIF and transport via the mean meridional wind. In particular, the summer to winter hemisphere flow leads to the highest sodium species concentrations and loss rates occurring over the winter pole. In the Northern Hemisphere, this winter maximum can be dramatically affected by stratospheric sudden warmings. Simulations of the January 2009 major warming event show that it caused a short-term decrease in the sodium column over the polar cap that was followed by a factor of 3 increase in the following weeks. Overall, the modeled distribution of atomic sodium in WACCM agrees well with both ground-based and satellite observations. Given the strong sensitivity of the sodium layer to dynamical motions, reproducing its variability provides a stringent test of global models and should help to constrain key atmospheric variables in this poorly sampled region of the atmosphere.

  3. Global adjoint tomography: first-generation model

    Bozdağ, Ebru

    2016-09-23

    We present the first-generation global tomographic model constructed based on adjoint tomography, an iterative full-waveform inversion technique. Synthetic seismograms were calculated using GPU-accelerated spectral-element simulations of global seismic wave propagation, accommodating effects due to 3-D anelastic crust & mantle structure, topography & bathymetry, the ocean load, ellipticity, rotation, and self-gravitation. Fréchet derivatives were calculated in 3-D anelastic models based on an adjoint-state method. The simulations were performed on the Cray XK7 named 'Titan', a computer with 18 688 GPU accelerators housed at Oak Ridge National Laboratory. The transversely isotropic global model is the result of 15 tomographic iterations, which systematically reduced differences between observed and simulated three-component seismograms. Our starting model combined 3-D mantle model S362ANI with 3-D crustal model Crust2.0. We simultaneously inverted for structure in the crust and mantle, thereby eliminating the need for widely used 'crustal corrections'. We used data from 253 earthquakes in the magnitude range 5.8 ≤ M ≤ 7.0. We started inversions by combining ~30 s body-wave data with ~60 s surface-wave data. The shortest period of the surface waves was gradually decreased, and in the last three iterations we combined ~17 s body waves with ~45 s surface waves. We started using 180 min long seismograms after the 12th iteration and assimilated minor- and major-arc body and surface waves. The 15th iteration model features enhancements of well-known slabs, an enhanced image of the Samoa/Tahiti plume, as well as various other plumes and hotspots, such as Caroline, Galapagos, Yellowstone and Erebus. Furthermore, we see clear improvements in slab resolution along the Hellenic and Japan Arcs, as well as subduction along the East of Scotia Plate, which does not exist in the starting model. Point-spread function tests demonstrate that we are approaching the …

  4. Inverse modeling as a step in the calibration of the LBL-USGS site-scale model of Yucca Mountain

    Finsterle, S.; Bodvarsson, G.S.; Chen, G.

    1995-05-01

    Calibration of the LBL-USGS site-scale model of Yucca Mountain is initiated. Inverse modeling techniques are used to match the results of simplified submodels to the observed pressure, saturation, and temperature data. Hydrologic and thermal parameters are determined and compared to the values obtained from laboratory measurements and conventional field test analysis

  5. Calibration of Linked Hydrodynamic and Water Quality Model for Santa Margarita Lagoon

    2016-07-01

    … was used to drive the transport and water quality kinetics for the simulation of 2007–2009. The sand berm, which controlled the opening/closure of …

  6. The regression-calibration method for fitting generalized linear models with additive measurement error

    James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll

    2003-01-01

    This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
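
    As a concrete illustration of the replicate-proxies case, the following Python sketch estimates E[X|W] from two error-prone replicates and plugs it into a logistic GLM. This is a simplified stand-in for the authors' implementation, not their code; variable names and the simulated data are assumptions:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 2000
      x = rng.normal(0, 1, n)                     # true (unobserved) covariate
      w1 = x + rng.normal(0, 0.8, n)              # replicate proxies with
      w2 = x + rng.normal(0, 0.8, n)              # additive measurement error
      y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.0 * x))))

      wbar = (w1 + w2) / 2
      s2_u = np.mean((w1 - w2) ** 2) / 2          # per-replicate error variance from replicates
      s2_w = wbar.var(ddof=1)                     # variance of the averaged proxy
      mu_w = wbar.mean()
      lam = (s2_w - s2_u / 2) / s2_w              # reliability ratio for the mean of 2 replicates
      x_hat = mu_w + lam * (wbar - mu_w)          # regression-calibration estimate of E[X|W]

      naive = sm.GLM(y, sm.add_constant(wbar), family=sm.families.Binomial()).fit()
      calib = sm.GLM(y, sm.add_constant(x_hat), family=sm.families.Binomial()).fit()
      print("naive slope:", naive.params[1], " calibrated slope:", calib.params[1])

    The calibrated slope should be noticeably closer to the true value of 1.0 than the attenuated naive estimate.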

  7. Calibration and Validation of the Dynamic Wake Meandering Model for Implementation in an Aeroelastic Code

    Aagaard Madsen, Helge; Larsen, Gunner Chr.; Larsen, Torben J.

    2010-01-01

    … in an aeroelastic model. Calibration and validation of the different parts of the model are carried out by comparisons with actuator disk and actuator line (ACL) computations, as well as with inflow measurements on a full-scale 2 MW turbine. It is shown that the load generating part of the increased turbulence … Finally, added turbulence characteristics are compared with correlation results from the literature.

  8. Improved method for calibration of exchange flows for a physical transport box model of Tampa Bay, FL USA

    Results for both sequential and simultaneous calibration of exchange flows between segments of a 10-box, one-dimensional, well-mixed, bifurcated tidal mixing model for Tampa Bay are reported. Calibrations were conducted for three model options with different mathematical expressi...

  9. Feasibility of the use of optimisation techniques to calibrate the models used in a post-closure radiological assessment

    Laundy, R.S.

    1991-01-01

    This report addresses the feasibility of the use of optimisation techniques to calibrate the models developed for the impact assessment of a radioactive waste repository. The maximum likelihood method for improving parameter estimates is considered in detail, and non-linear optimisation techniques for finding solutions are reviewed. Applications are described for the calibration of groundwater flow, radionuclide transport and biosphere models. (author)

  10. The dielectric calibration of capacitance probes for soil hydrology using an oscillation frequency response model

    D. A. Robinson

    1998-01-01

    Capacitance probes are a fast, safe and relatively inexpensive means of measuring the relative permittivity of soils, which can then be used to estimate soil water content. Initial experiments with capacitance probes used empirical calibrations between the frequency response of the instrument and soil water content. This has the disadvantage that the calibrations are instrument-dependent. A twofold calibration strategy is described in this paper: the instrument frequency is converted into relative permittivity (dielectric constant), which can then be calibrated against soil water content. This approach offers the advantage of making the second calibration, from soil permittivity to soil water content, instrument-independent, and allows comparison with other dielectric methods, such as time domain reflectometry. A physically based model, used to calibrate capacitance probes in terms of relative permittivity (εr), is presented. The model, which was developed from circuit analysis, successfully predicts the frequency response of the instrument in liquids with different relative permittivities, using only measurements in air and water. It was used successfully to calibrate 10 prototype surface capacitance insertion probes (SCIPs) and a depth capacitance probe. The findings demonstrate that the geometric properties of the instrument electrodes were an important parameter in the model, the value of which could be fixed through measurement. The relationship between apparent soil permittivity and volumetric water content has been the subject of much research in the last 30 years. Two lines of investigation have developed: time domain reflectometry (TDR) and capacitance. Both methods claim to measure relative permittivity and should therefore be comparable. This paper demonstrates that the IH capacitance probe overestimates relative permittivity as the ionic conductivity of the medium increases. Electrically conducting ionic solutions were used to test the …

  11. ANN-based calibration model of FTIR used in transformer online monitoring

    Li, Honglei; Liu, Xian-yong; Zhou, Fangjie; Tan, Kexiong

    2005-02-01

    Recently, chromatography columns and gas sensors have been used in online monitoring devices for dissolved gases in transformer oil. But some disadvantages still exist in these devices: consumption of carrier gas, the requirement of calibration, etc. Since FTIR has high accuracy, consumes no carrier gas and requires no recalibration, the researchers studied the application of FTIR in such monitoring devices. Experiments using the "flow gas method" were designed, and spectra of mixtures composed of different gases were collected with a BOMEM MB104 FTIR spectrometer. A key issue in the application of FTIR is that the absorbance spectra of three key fault gases, C2H4, CH4 and C2H6, overlap seriously at 2700~3400 cm-1. Because the absorbance law is no longer applicable, a nonlinear calibration model based on a BP ANN was set up for the quantitative analysis. The peak absorbances of C2H4, CH4 and C2H6 were adopted as quantitative features, and all the data were normalized before training the ANN. Computational results show that the calibration model can effectively eliminate the cross-disturbance in the measurement.
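
    The sketch below illustrates this kind of nonlinear calibration with a small back-propagation network (here scikit-learn's MLPRegressor) mapping three overlapping peak absorbances to three gas concentrations. The synthetic absorption coefficients, nonlinearity and network size are assumptions for illustration only:

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import MinMaxScaler

      rng = np.random.default_rng(1)
      C = rng.uniform(0, 100, (300, 3))            # concentrations of C2H4, CH4, C2H6 [ppm]
      K = np.array([[1.0, 0.6, 0.3],               # overlapping absorption coefficients
                    [0.5, 1.0, 0.4],               # (illustrative values only)
                    [0.2, 0.5, 1.0]])
      A = C @ K * (1 + 0.002 * C.sum(1, keepdims=True))  # mild nonlinearity / cross disturbance

      sx, sy = MinMaxScaler(), MinMaxScaler()      # normalize data before training, as above
      A_n, C_n = sx.fit_transform(A), sy.fit_transform(C)

      net = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                         solver="lbfgs", max_iter=5000, random_state=0)
      net.fit(A_n, C_n)                            # multi-output regression

      pred = sy.inverse_transform(net.predict(sx.transform(A)))
      print("max abs error [ppm]:", np.abs(pred - C).max(axis=0))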

  12. Calibration of a complex activated sludge model for the full-scale wastewater treatment plant.

    Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw

    2011-08-01

    In this study, the results of the calibration of a complex activated sludge model, implemented in BioWin software for a full-scale wastewater treatment plant, are presented. Within the calibration of the model, sensitivity analysis of its parameters and of the fractions of carbonaceous substrate was performed. In the steady-state and dynamic calibrations, a successful agreement between the measured and simulated values of the output variables was achieved. Sensitivity analysis based on the normalized sensitivity coefficient (S(i,j)) revealed that 17 (steady-state) or 19 (dynamic conditions) kinetic and stoichiometric parameters are sensitive. Most of them are associated with the growth and decay of ordinary heterotrophic organisms and phosphorus accumulating organisms. The rankings of the ten most sensitive parameters, established from the mean square sensitivity measure (δ(msqr)j), indicate that the parameter sensitivities agree irrespective of whether the steady-state or dynamic calibration was performed.

  13. Challenges in Modeling of the Global Atmosphere

    Janjic, Zavisa; Djurdjevic, Vladimir; Vasic, Ratko; Black, Tom

    2015-04-01

    ") with significant amplitudes can develop. Due to their large scales, that are comparable to the scales of the dominant Rossby waves, such fictitious solutions are hard to identify and remove. Another new challenge on the global scale is that the limit of validity of the hydrostatic approximation is rapidly being approached. Having in mind the sensitivity of extended deterministic forecasts to small disturbances, we may need global non-hydrostatic models sooner than we think. The unified Non-hydrostatic Multi-scale Model (NMMB) that is being developed at the National Centers for Environmental Prediction (NCEP) as a part of the new NOAA Environmental Modeling System (NEMS) will be discussed as an example. The non-hydrostatic dynamics were designed in such a way as to avoid over-specification. The global version is run on the latitude-longitude grid, and the polar filter selectively slows down the waves that would otherwise be unstable. The model formulation has been successfully tested on various scales. A global forecasting system based on the NMMB has been run in order to test and tune the model. The skill of the medium range forecasts produced by the NMMB is comparable to that of other major medium range models. The computational efficiency of the global NMMB on parallel computers is good.

  14. Modeling microelectrode biosensors: free-flow calibration can substantially underestimate tissue concentrations.

    Newton, Adam J H; Wall, Mark J; Richardson, Magnus J E

    2017-03-01

    Microelectrode amperometric biosensors are widely used to measure concentrations of analytes in solution and tissue including acetylcholine, adenosine, glucose, and glutamate. A great deal of experimental and modeling effort has been directed at quantifying the response of the biosensors themselves; however, the influence that the macroscopic tissue environment has on biosensor response has not been subjected to the same level of scrutiny. Here we identify an important issue in the way microelectrode biosensors are calibrated that is likely to have led to underestimations of analyte tissue concentrations. Concentration in tissue is typically determined by comparing the biosensor signal to that measured in free-flow calibration conditions. In a free-flow environment the concentration of the analyte at the outer surface of the biosensor can be considered constant. However, in tissue the analyte reaches the biosensor surface by diffusion through the extracellular space. Because the enzymes in the biosensor break down the analyte, a density gradient is set up resulting in a significantly lower concentration of analyte near the biosensor surface. This effect is compounded by the diminished volume fraction (porosity) and reduction in the diffusion coefficient due to obstructions (tortuosity) in tissue. We demonstrate this effect through modeling and experimentally verify our predictions in diffusive environments. NEW & NOTEWORTHY Microelectrode biosensors are typically calibrated in a free-flow environment where the concentrations at the biosensor surface are constant. However, when in tissue, the analyte reaches the biosensor via diffusion and so analyte breakdown by the biosensor results in a concentration gradient and consequently a lower concentration around the biosensor. This effect means that naive free-flow calibration will underestimate tissue concentration. We develop mathematical models to better quantify the discrepancy between the calibration and tissue

  15. Field Measurement and Calibration of HDM-4 Fuel Consumption Model on Interstate Highway in Florida

    Xin Jiao

    2015-03-01

    Fuel consumption was measured by operating a passenger car and a tractor-trailer on two interstate roadway sites in Florida. Each site contains flexible pavement and rigid pavement with similar pavement, traffic and environmental conditions. The field test reveals that the average fuel consumption differences between vehicles operating on flexible pavement and rigid pavement at the given test conditions are 4.04% for the tractor-trailer and 2.50% for the passenger car, with a fuel saving on rigid pavement. The fuel consumption differences are found to be statistically significant at the 95% confidence level for both vehicle types. The test data are then used to calibrate the Highway Development and Management IV (HDM-4) fuel consumption model, and model coefficients are obtained for three sets of observations. Field measurements and predictions by the calibrated model show generally good agreement. Nevertheless, verification and adjustment with more experiments or data sources would be expected in future studies.

  16. Analysis of variation in calibration curves for Kodak XV radiographic film using model-based parameters.

    Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L

    2010-08-05

    Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film), taken at 95 cm SSD and 5 cm depth, was used as a reference for each experiment. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes, and increases with increasing depth above 0.5 cm. A calibration curve with one to three dose points fitted with the model achieves 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
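
    A common way to write the three-parameter (background, saturation, slope) single-target single-hit response used here is, up to the paper's exact parameterization:

      \[
        \mathrm{OD}(D) = \mathrm{OD}_{\mathrm{bkg}} + \mathrm{OD}_{\mathrm{sat}}\left(1 - e^{-\beta D}\right),
      \]

    so that the initial slope of the curve is OD_sat·β and the response saturates at OD_bkg + OD_sat for large dose D.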

  17. Improving plasma shaping accuracy through consolidation of control model maintenance, diagnostic calibration, and hardware change control

    Baggest, D.S.; Rothweil, D.A.; Pang, S.

    1995-12-01

    With the advent of more sophisticated techniques for control of tokamak plasmas comes the requirement for increasingly more accurate models of plasma processes and tokamak systems. Development of accurate models for DIII-D power systems, vessel, and poloidal coils is already complete, while work continues in development of general plasma response modeling techniques. Increased accuracy in estimates of parameters to be controlled is also required. It is important to ensure that errors in supporting systems such as diagnostic and command circuits do not limit the accuracy of plasma parameter estimates or inhibit the ability to derive accurate plasma/tokamak system models. To address this issue, we have developed more formal power systems change control and power system/magnetic diagnostics calibration procedures. This paper discusses our approach to consolidating the tasks in these closely related areas. This includes, for example, defining criteria for when diagnostics should be re-calibrated along with required calibration tolerances, and implementing methods for tracking power systems hardware modifications and the resultant changes to control models

  18. Global plastic models for computerized structural analysis

    Roche, R.L.; Hoffmann, A.

    1977-01-01

    In many types of structures, it is possible to use generalized stresses (like membrane forces, bending moments, torsion moments...) to define a yield surface for a part of the structure. Analysis can be achieved by using Hill's principle and a hardening rule. The whole formulation is called a 'Global Plastic Model'. Two different global models are used in the CEASEMT system for structural analysis, one for shell analysis and the other for piping analysis (in the plastic or creep field). In shell analysis, the generalized stresses chosen are the membrane forces and bending (including torsion) moments. There is only one yield condition for a normal to the middle surface, and no integration along the thickness is required. In piping analysis, the chosen generalized stresses are bending moments, torsional moment, hoop stress and tension stress. There is only one set of stresses for a cross section, and no integration over the cross-section area is needed. The connected strains are axis curvature, torsion, and uniform strains. The definition of the yield surface is the most important item. A practical way is to use a diagonal quadratic function of the stress components, but the coefficients depend on the shape of the pipe element, especially for curved segments. Indications are given on the yield functions used. Some examples of applications in structural analysis are added to the text.

  19. Modeling global scene factors in attention

    Torralba, Antonio

    2003-07-01

    Models of visual attention have focused predominantly on bottom-up approaches that ignored structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information can become available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition. 2003 Optical Society of America

  20. Global embedding of fibre inflation models

    Cicoli, Michele [Dipartimento di Fisica e Astronomia, Università di Bologna,via Irnerio 46, 40126 Bologna (Italy); INFN - Sezione di Bologna,viale Berti Pichat 6/2, 40127 Bologna (Italy); Abdus Salam ICTP,Strada Costiera 11, Trieste 34151 (Italy); Muia, Francesco [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Rd., Oxford OX1 3NP (United Kingdom); Shukla, Pramod [Abdus Salam ICTP,Strada Costiera 11, Trieste 34151 (Italy)

    2016-11-30

    We present concrete embeddings of fibre inflation models in globally consistent type IIB Calabi-Yau orientifolds with closed string moduli stabilisation. After performing a systematic search through the existing list of toric Calabi-Yau manifolds, we find several examples that reproduce the minimal setup to embed fibre inflation models. This involves Calabi-Yau manifolds with h^{1,1} = 3 which are K3 fibrations over a ℙ^1 base with an additional shrinkable rigid divisor. We then provide different consistent choices of the underlying brane set-up which generate a non-perturbative superpotential suitable for moduli stabilisation and string loop corrections with the correct form to drive inflation. For each Calabi-Yau orientifold setting, we also compute the effect of higher derivative contributions and study their influence on the inflationary dynamics.

  1. A Global Atmospheric Model of Meteoric Iron

    Feng, Wuhu; Marsh, Daniel R.; Chipperfield, Martyn P.; Janches, Diego; Hoffner, Josef; Yi, Fan; Plane, John M. C.

    2013-01-01

    The first global model of meteoric iron in the atmosphere (WACCM-Fe) has been developed by combining three components: the Whole Atmosphere Community Climate Model (WACCM), a description of the neutral and ion-molecule chemistry of iron in the mesosphere and lower thermosphere (MLT), and a treatment of the injection of meteoric constituents into the atmosphere. The iron chemistry treats seven neutral and four ionized iron containing species with 30 neutral and ion-molecule reactions. The meteoric input function (MIF), which describes the injection of Fe as a function of height, latitude, and day, is precalculated from an astronomical model coupled to a chemical meteoric ablation model (CABMOD). This newly developed WACCM-Fe model has been evaluated against a number of available ground-based lidar observations and performs well in simulating the mesospheric atomic Fe layer. The model reproduces the strong positive correlation of temperature and Fe density around the Fe layer peak and the large anticorrelation around 100 km. The diurnal tide has a significant effect in the middle of the layer, and the model also captures well the observed seasonal variations. However, the model overestimates the peak Fe+ concentration compared with the limited rocket-borne mass spectrometer data available, although good agreement on the ion layer underside can be obtained by adjusting the rate coefficients for dissociative recombination of Fe-molecular ions with electrons. Sensitivity experiments with the same chemistry in a 1-D model are used to highlight significant remaining uncertainties in reaction rate coefficients, and to explore the dependence of the total Fe abundance on the MIF and rate of vertical transport.

  2. The Software Architecture of Global Climate Models

    Alexander, K. A.; Easterbrook, S. M.

    2011-12-01

    It has become common to compare and contrast the output of multiple global climate models (GCMs), such as in the Climate Model Intercomparison Project Phase 5 (CMIP5). However, intercomparisons of the software architecture of GCMs are almost nonexistent. In this qualitative study of seven GCMs from Canada, the United States, and Europe, we attempt to fill this gap in research. We describe the various representations of the climate system as computer programs, and account for architectural differences between models. Most GCMs now practice component-based software engineering, where Earth system components (such as the atmosphere or land surface) are present as highly encapsulated sub-models. This architecture facilitates a mix-and-match approach to climate modelling that allows for convenient sharing of model components between institutions, but it also leads to difficulty when choosing where to draw the lines between systems that are not encapsulated in the real world, such as sea ice. We also examine different styles of couplers in GCMs, which manage interaction and data flow between components. Finally, we pay particular attention to the varying levels of complexity in GCMs, both between and within models. Many GCMs have some components that are significantly more complex than others, a phenomenon which can be explained by the respective institution's research goals as well as the origin of the model components. In conclusion, although some features of software architecture have been adopted by every GCM we examined, other features show a wide range of different design choices and strategies. These architectural differences may provide new insights into variability and spread between models.

  3. Sensitivities in global scale modeling of isoprene

    R. von Kuhlmann

    2004-01-01

    A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios, which can be grouped into four thematic categories, were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased compared to the background methane chemistry by 26±9 Tg(O3), from 273 to an average of 299 Tg(O3) across the sensitivity runs. Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty and the much larger local deviations found in the test runs suggest that the treatment of isoprene in global models can only be seen as a first-order estimate at present, and point towards specific processes in need of focused future work.

  5. Ultrasound data for laboratory calibration of an analytical model to calculate crack depth on asphalt pavements.

    Franesqui, Miguel A; Yepes, Jorge; García-González, Cándida

    2017-08-01

    This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwaves exposure in the research article entitled "Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves" (Franesqui et al., 2017) [1].

  6. Usability of Calibrating Monitor for Soft Proof According to CIE CAM02 Colour Appearance Model

    Dragoljub Novakovic

    2010-06-01

    Full Text Available Colour appearance models describe viewing conditions and enable simulating appearance of colours under different illuminants and illumination levels according to human perception. Since it is possible to predict how colour would look like when different illuminants are used, colour appearance models are incorporated in some monitor profiling software. Owing to these software, tone reproduction curve can be defined by taking into consideration viewing condition in which display is observed. In this work assessment of CIE CAM02 colour appearance model usage at calibrating LCD monitor for soft proof was tested in order to determine which tone reproduction curve enables better reproduction of colour. Luminance level was kept constant, whereas tone reproduction curves determined by gamma values and by parameters of CIE CAM02 model were varied. Testing was conducted in case where physical print reference is observed under illuminant which has colour temperature according to iso standard for soft-proofing (D50 and also for illuminants D65.  Based on the results of calibrations assessment, subjective and objective assessment of created profiles, as well as on the perceptual test carried out on human observers, differences in image display were defined and conclusions of the adequacy of CAM02 usage at monitor calibration for each of the viewing conditions reached.

  7. Calibration plots for risk prediction models in the presence of competing risks.

    Gerds, Thomas A; Andersen, Per K; Kattan, Michael W

    2014-08-15

    A predicted risk of 17% can be called reliable if it can be expected that the event will occur to about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. There are three often encountered practical problems when the aim is to display or test if a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves for competing risks models based on jackknife pseudo-values that are combined with a nearest neighborhood smoother and a cross-validation approach to deal with all three problems. Copyright © 2014 John Wiley & Sons, Ltd.
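
    The jackknife pseudo-values underlying this approach are defined, for subject i and time horizon t, as

      \[
        \hat{\theta}_i(t) = n\,\hat{F}(t) - (n-1)\,\hat{F}^{(-i)}(t),
      \]

    where F̂(t) is, for example, the Aalen-Johansen estimate of the cumulative incidence of the event of interest at time t, and F̂^{(-i)}(t) is the same estimate computed with subject i left out. The pseudo-values can then be smoothed against the predicted risks to obtain a calibration curve despite right censoring.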

  9. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2010-07-01

    Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of the eddy flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and more objective procedure of model calibration should be included in the further analysis.

  10. Global cross-station assessment of neuro-fuzzy models for estimating daily reference evapotranspiration

    Shiri, Jalal; Nazemi, Amir Hossein; Sadraddini, Ali Ashraf; Landeras, Gorka; Kisi, Ozgur; Fard, Ahmad Fakheri; Marti, Pau

    2013-02-01

    Accurate estimation of reference evapotranspiration is important for irrigation scheduling, water resources management and planning, and other agricultural water management issues. In the present paper, the capabilities of generalized neuro-fuzzy models were evaluated for estimating reference evapotranspiration, using two separate sets of weather data from humid and non-humid regions of Spain and Iran. In this way, the data from some weather stations in the Basque Country and Valencia region (Spain) were used for training the neuro-fuzzy models [in humid and non-humid regions, respectively] and, subsequently, the data from these regions were pooled to evaluate the generalization capability of a general neuro-fuzzy model in humid and non-humid regions. The developed models were tested at stations in Iran, located in humid and non-humid regions. The obtained results showed the capabilities of the generalized neuro-fuzzy model in estimating reference evapotranspiration in different climatic zones. Global GNF models calibrated using both non-humid and humid data were found to successfully estimate ET0 in both non-humid and humid regions of Iran (the lowest MAE values are about 0.23 mm for non-humid Iranian regions and 0.12 mm for humid regions). Non-humid GNF models calibrated using non-humid data performed much better than the humid GNF models calibrated using humid data in the non-humid region, while the humid GNF model gave better estimates in the humid region.

  11. Evaluation of global solar radiation models for Shanghai, China

    Yao, Wanxiang; Li, Zhengrong; Wang, Yuyan; Jiang, Fujian; Hu, Lingzhou

    2014-01-01

    Highlights: • 108 existing models are compared and analyzed using 42 years of meteorological data. • Fitting models based on measured data are established from the 42 years of data. • All models are compared using the most recent 10 years of meteorological data. • The results show that polynomial models are the most accurate. - Abstract: In this paper, 89 existing monthly average daily global solar radiation models and 19 existing daily global solar radiation models are compared and analyzed using 42 years of meteorological data. The results show that, for existing monthly average daily global solar radiation models, linear and polynomial models are able to estimate global solar radiation accurately, and more complex equation types do not obviously improve the precision. Considering direct parameters such as latitude, altitude, solar altitude and sunshine duration can help improve the accuracy of the models, but indirect parameters cannot. For existing daily global solar radiation models, multi-parameter models are more accurate than single-parameter models, and polynomial models are more accurate than linear models. Then, measured-data-fitted monthly average daily global solar radiation models (MADGSR models) and daily global solar radiation models (DGSR models) are established from the 42 years of meteorological data. Finally, the existing models and the fitted models are compared using the most recent 10 years of meteorological data, and the results show that the polynomial models (MADGSR model 2, DGSR model 2 and Maduekwe model 2) are the most accurate.
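
    For reference, the classic linear sunshine-based model in this family is the Angström-Prescott relation,

      \[
        \frac{H}{H_0} = a + b\,\frac{S}{S_0},
      \]

    where H is the monthly average daily global solar radiation, H0 the extraterrestrial radiation, S the sunshine duration, S0 the maximum possible sunshine duration (day length), and a, b are empirical constants. The polynomial models found most accurate above extend the right-hand side with higher powers of S/S0.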

  12. MOESHA: A genetic algorithm for automatic calibration and estimation of parameter uncertainty and sensitivity of hydrologic models

    Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...

  13. Multi-site calibration, validation, and sensitivity analysis of the MIKE SHE Model for a large watershed in northern China

    S. Wang

    2012-12-01

    Model calibration is essential for hydrologic modeling of large watersheds in a heterogeneous mountain environment. Little guidance is available on model calibration protocols for distributed models that aim at capturing the spatial variability of hydrologic processes. This study used the physically-based distributed hydrologic model MIKE SHE to contrast a lumped calibration protocol, which used streamflow measured at a single watershed outlet, with a multi-site calibration method that employed streamflow measurements at three stations within the large Chaohe River basin in northern China. Simulation results showed that the single-site calibrated model was able to sufficiently simulate the hydrographs for two of the three stations (Nash-Sutcliffe coefficient of 0.65–0.75, and correlation coefficient 0.81–0.87 during the testing period), but the model performed poorly for the third station (Nash-Sutcliffe coefficient only 0.44). Sensitivity analysis suggested that streamflow in the upstream area of the watershed was dominated by slow groundwater, whilst streamflow in the middle- and downstream areas was dominated by relatively quick interflow. Therefore, a multi-site calibration protocol was deemed necessary. Due to the potential errors and uncertainties with respect to the representation of spatial variability, performance measures from the multi-site calibration protocol decreased slightly for two of the three stations, whereas they improved greatly for the third station. We concluded that the multi-site calibration protocol reached a compromise in terms of model performance for the three stations, reasonably representing the hydrographs of all three stations with Nash-Sutcliffe coefficients ranging from 0.59–0.72. The multi-site calibration protocol generally has advantages over the single-site calibration protocol.
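
    The Nash-Sutcliffe coefficient used as the performance measure here is

      \[
        \mathrm{NSE} = 1 - \frac{\sum_t \left(Q_{\mathrm{obs},t} - Q_{\mathrm{sim},t}\right)^2}{\sum_t \left(Q_{\mathrm{obs},t} - \overline{Q}_{\mathrm{obs}}\right)^2},
      \]

    which equals 1 for a perfect fit and drops to 0 when the model is no better than the mean of the observations.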

  14. Calibration model maintenance in melamine resin production: Integrating drift detection, smart sample selection and model adaptation.

    Nikzad-Langerodi, Ramin; Lughofer, Edwin; Cernuda, Carlos; Reischer, Thomas; Kantner, Wolfgang; Pawliczek, Marcin; Brandstetter, Markus

    2018-07-12

    The physico-chemical properties of Melamine Formaldehyde (MF) based thermosets are largely influenced by the degree of polymerization (DP) in the underlying resin. On-line supervision of the turbidity point by means of vibrational spectroscopy has recently emerged as a promising technique to monitor the DP of MF resins. However, spectroscopic determination of the DP relies on chemometric models, which are usually sensitive to drifts caused by instrumental and/or sample-associated changes occurring over time. In order to detect the time point when drifts start causing prediction bias, we here explore a universal drift detector based on a faded version of the Page-Hinkley (PH) statistic, which we test in three data streams from an industrial MF resin production process. We employ committee disagreement (CD), computed as the variance of model predictions from an ensemble of partial least squares (PLS) models, as a measure for sample-wise prediction uncertainty and use the PH statistic to detect changes in this quantity. We further explore supervised and unsupervised strategies for (semi-)automatic model adaptation upon detection of a drift. For the former, manual reference measurements are requested whenever statistical thresholds on Hotelling's T 2 and/or Q-Residuals are violated. Models are subsequently re-calibrated using weighted partial least squares in order to increase the influence of newer samples, which increases the flexibility when adapting to new (drifted) states. Unsupervised model adaptation is carried out exploiting the dual antecedent-consequent structure of a recently developed fuzzy systems variant of PLS termed FLEXFIS-PLS. In particular, antecedent parts are updated while maintaining the internal structure of the local linear predictors (i.e. the consequents). We found improved drift detection capability of the CD compared to Hotelling's T 2 and Q-Residuals when used in combination with the proposed PH test. Furthermore, we found that active
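
    A minimal Python sketch of a Page-Hinkley drift detector with forgetting ("fading"), applied to a stream of prediction-uncertainty values such as the committee disagreement described above. The fading factor, thresholds and synthetic stream are illustrative assumptions, not the values from the paper:

      import random

      class FadedPageHinkley:
          def __init__(self, delta=0.005, lam=1.0, alpha=0.999):
              self.delta, self.lam, self.alpha = delta, lam, alpha
              self.mean, self.n = 0.0, 0
              self.cum, self.min_cum = 0.0, 0.0

          def update(self, x):
              """Feed one value; return True if an (upward) drift is signalled."""
              self.n += 1
              self.mean += (x - self.mean) / self.n            # running mean
              # faded cumulative deviation from the mean
              self.cum = self.alpha * self.cum + (x - self.mean - self.delta)
              self.min_cum = min(self.min_cum, self.cum)
              return self.cum - self.min_cum > self.lam        # PH test statistic

      # hypothetical committee-disagreement stream with a drift after t = 500
      stream = [random.gauss(0.1, 0.02) for _ in range(500)] + \
               [random.gauss(0.4, 0.05) for _ in range(200)]

      detector = FadedPageHinkley()
      for t, cd in enumerate(stream):
          if detector.update(cd):
              print(f"drift detected at sample {t}: request reference measurements, re-calibrate")
              detector = FadedPageHinkley()                    # reset after model adaptation
              break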

  15. Calibration of the k-ε model constants for use in CFD applications

    Glover, Nina; Guillias, Serge; Malki-Epshtein, Liora

    2011-11-01

    The k-ε turbulence model is a popular choice in CFD modelling due to its robust nature and the fact that it has been well validated. However, it has been noted in previous research that the k-ε model has problems predicting flow separation as well as unconfined and transient flows. The model contains five empirical constants whose values were found through data fitting for a wide range of flows (Launder 1972), but ad-hoc adjustments are often made to these values depending on the situation being modelled. Here we use the example of flow within a regular street canyon to perform a Bayesian calibration of the model constants against wind tunnel data. This allows us to assess the sensitivity of the CFD model to changes in these constants, find the most suitable values for the constants, and quantify the uncertainty related to the constants and the CFD model as a whole.
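
    For reference, the five empirical constants in question and their commonly quoted standard values (the usual starting point for any recalibration) are

      \[
        C_\mu = 0.09, \quad C_{1\varepsilon} = 1.44, \quad C_{2\varepsilon} = 1.92, \quad \sigma_k = 1.0, \quad \sigma_\varepsilon = 1.3.
      \]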

  16. Regional Calibration of SCS-CN L-THIA Model: Application for Ungauged Basins

    Jeon, Ji-Hong; Lim, Kyoung; Engel, Bernard

    2014-01-01

    Estimating surface runoff for ungauged watersheds is an important issue. The Soil Conservation Service Curve Number (SCS-CN) method, developed from long-term experimental data, is widely used to estimate surface runoff from gaged or ungauged watersheds. Many modelers have used the documented SCS-CN parameters without calibration, sometimes resulting in significant errors in estimated surface runoff. Several methods for regionalization of SCS-CN parameters were evaluated. The regionalization met…
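
    For reference, the SCS-CN runoff equation being regionalized is

      \[
        Q = \frac{(P - I_a)^2}{P - I_a + S} \quad (P > I_a), \qquad S = \frac{1000}{CN} - 10, \qquad I_a = 0.2\,S,
      \]

    where Q is direct runoff and P rainfall (inches), S the potential maximum retention, I_a the initial abstraction, and CN the curve number. Calibration or regionalization amounts to adjusting CN (and sometimes the 0.2 initial-abstraction ratio) to local conditions.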

  17. (Pre-) calibration of a Reduced Complexity Model of the Antarctic Contribution to Sea-level Changes

    Ruckert, K. L.; Guan, Y.; Shaffer, G.; Forest, C. E.; Keller, K.

    2015-12-01

    Understanding and projecting future sea-level changes poses nontrivial challenges. Sea-level changes are driven primarily by changes in the density of seawater as well as changes in the size of glaciers and ice sheets. Previous studies have demonstrated that a key source of uncertainties surrounding sea-level projections is the response of the Antarctic ice sheet to warming temperatures. Here we calibrate a previously published and relatively simple model of the Antarctic ice sheet over a hindcast period from the last interglacial period to the present. We apply and compare a range of (pre-) calibration methods, including a Bayesian approach that accounts for heteroskedasticity. We compare the model hindcasts and projections for different levels of model complexity and calibration methods. We compare the projections with the upper bounds from previous studies and find our projections have a narrower range in 2100. Furthermore, we discuss the implications for the design of climate risk management strategies.

  18. Reactive Burn Model Calibration for PETN Using Ultra-High-Speed Phase Contrast Imaging

    Johnson, Carl; Ramos, Kyle; Bolme, Cindy; Sanchez, Nathaniel; Barber, John; Montgomery, David

    2017-06-01

    A 1D reactive burn model (RBM) calibration for a plastic bonded high explosive (HE) requires run-to-detonation data. In PETN (pentaerythritol tetranitrate, 1.65 g/cc) the shock to detonation transition (SDT) occurs over a few millimeters. This rapid SDT imposes experimental length scales that preclude application of traditional calibration methods such as embedded electromagnetic gauge methods (EEGM), which are very effective when used to study 10-20 mm thick HE specimens. In recent work at Argonne National Laboratory's Advanced Photon Source, we have obtained run-to-detonation data in PETN using ultra-high-speed dynamic phase contrast imaging (PCI). A reactive burn model calibration valid for 1D shock waves is obtained using density profiles spanning the transition to detonation, as opposed to particle velocity profiles from EEGM. Particle swarm optimization (PSO) methods were used to run the LANL hydrocode FLAG iteratively, refining SURF RBM parameters until a suitable parameter set is attained. These methods will be presented along with model validation simulations. The novel method described is generally applicable to 'sensitive' energetic materials, particularly those with areal densities amenable to radiography.

  19. A global digital elevation model - GTOPO30

    1999-01-01

    GTOPO30, the U.S. Geological Survey's (USGS) digital elevation model (DEM) of the Earth, provides the first global coverage of moderate resolution elevation data. The original GTOPO30 data set, which was developed over a 3-year period through a collaborative effort led by the USGS, was completed in 1996 at the USGS EROS Data Center in Sioux Falls, South Dakota. The collaboration involved contributions of staffing, funding, or source data from cooperators including the National Aeronautics and Space Administration (NASA), the United Nations Environment Programme Global Resource Information Database (UNEP/GRID), the U.S. Agency for International Development (USAID), the Instituto Nacional de Estadistica Geografia e Informatica (INEGI) of Mexico, the Geographical Survey Institute (GSI) of Japan, Manaaki Whenua Landcare Research of New Zealand, and the Scientific Committee on Antarctic Research (SCAR). In 1999, work was begun on an update to the GTOPO30 data set. Additional data sources are being incorporated into GTOPO30, with an enhanced and improved data set planned for release in 2000.

  20. Use of wind data in global modelling

    Pailleux, J.

    1985-01-01

    The European Centre for Medium Range Weather Forecasts (ECMWF) produces operational global analyses every 6 hours and operational global forecasts every day from the 12Z analysis. How wind data are used in the ECMWF global analysis is described. For each current wind observing system, its ability to provide initial conditions for the forecast model is discussed, as well as its weaknesses. An assessment of the impact of each individual system on the quality of the analysis and the forecast is given wherever possible. Sometimes the deficiencies which are pointed out are related not only to the observing system itself but also to the optimum interpolation (OI) analysis scheme; some improvements are then generally possible through ad hoc modifications of the analysis scheme, especially tunings of the structure functions. Examples are given. The future observing network over the North Atlantic is examined. Several countries, coordinated by WMO, are working to set up an 'Operational WWW System Evaluation' (OWSE) in order to evaluate the operational aspects of the deployment of new systems (ASDAR, ASAP). Most of the new systems are expected to be deployed before January 1987, and in order to make the best use of the available resources during the deployment phase, network studies using simulated ASDAR and ASAP data are currently being carried out. These are summarized.

  1. A new calibration model for pointing a radio telescope that considers nonlinear errors in the azimuth axis

    Kong De-Qing; Wang Song-Gen; Zhang Hong-Bo; Wang Jin-Qing; Wang Min

    2014-01-01

    A new calibration model of a radio telescope that includes pointing error is presented, which considers nonlinear errors in the azimuth axis. For a large radio telescope, in particular one with a turntable, it is difficult to correct pointing errors using a traditional linear calibration model, because errors produced by the wheel-on-rail or center bearing structures are generally nonlinear. Starting from the linear calibration model, a Fourier expansion is made for the oblique error and for the parameters describing the inclination direction along the azimuth axis, and a new pointing calibration model is derived. The new pointing model is applied to the 40 m radio telescope administered by Yunnan Observatories, which uses a turntable. The results show that this model can significantly reduce the residual systematic errors due to nonlinearity in the azimuth axis compared with the linear model.
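
    The core idea, augmenting a pointing model with Fourier harmonics in azimuth and solving by least squares, can be sketched as follows. The harmonic order, synthetic data and coefficients are illustrative, not the published model for the 40 m telescope.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    az = rng.uniform(0.0, 2 * np.pi, 300)                 # azimuth samples [rad]
    err = (0.01 + 0.02 * np.cos(az) + 0.015 * np.sin(2 * az)
           + rng.normal(0.0, 0.002, az.size))             # measured offsets [deg]

    # Design matrix: constant term plus Fourier harmonics up to order 3.
    order = 3
    cols = [np.ones_like(az)]
    for k in range(1, order + 1):
        cols += [np.cos(k * az), np.sin(k * az)]
    A = np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(A, err, rcond=None)        # least-squares fit
    residual = err - A @ coef
    print("rms before: %.4f  after: %.4f" % (err.std(), residual.std()))
    ```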

  2. Fiction and reality in the modelling world - Balance between simplicity and complexity, calibration and identifiability, verification and falsification

    Harremoës, P.; Madsen, H.

    1999-01-01

    Where is the balance between simplicity and complexity in model prediction of urban drainage structures? The calibration/verification approach to testing of model performance gives an exaggerated sense of certainty. Frequently, the model structure and the parameters are not identifiable by calibration/verification on the basis of the data series available, which generates elements of sheer guessing - unless the universality of the model is based on induction, i.e. experience from the sum of all previous investigations. There is a need to deal more explicitly with uncertainty...

  3. Calibration of the Nonlinear Accelerator Model at the Diamond Storage Ring

    Bartolini, Riccardo; Rowland, James; Martin, Ian; Schmidt, Frank

    2010-01-01

    The correct implementation of the nonlinear ring model is crucial to achieve the top performance of a synchrotron light source. Several dynamics quantities can be used to compare the real machine with the model and eventually to correct the accelerator. Most of these methods are based on the analysis of turn-by-turn data of excited betatron oscillations. We present the experimental results of the campaign of measurements carried out at Diamond. A combination of Frequency Map Analysis (FMA) and detuning-with-momentum measurements has allowed a precise calibration of the nonlinear model, capable of reproducing the nonlinear beam dynamics in the storage ring.

  4. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Laxemar

    Aneljung, Maria; Sassner, Mona; Gustafsson, Lars-Goeran

    2007-11-01

    This report describes modelling where the hydrological modelling system MIKE SHE has been used to describe surface hydrology, near-surface hydrogeology, advective transport mechanisms, and the contact between groundwater and surface water within the SKB site investigation area at Laxemar. In the MIKE SHE system, surface water flow is described with the one-dimensional modelling tool MIKE 11, which is fully and dynamically integrated with the groundwater flow module in MIKE SHE. In early 2008, a supplementary data set will be available and a process of updating, rebuilding and calibrating the MIKE SHE model based on this data set will start. Before the calibration on the new data begins, it is important to gather as much knowledge as possible on calibration methods, and to identify critical calibration parameters and areas within the model that require special attention. In this project, the MIKE SHE model has been further developed. The model area has been extended, and the present model also includes an updated bedrock model and a more detailed description of the surface stream network. The numerical model has been updated and optimized, especially regarding the modelling of evapotranspiration and the unsaturated zone, and the coupling between the surface stream network in MIKE 11 and the overland flow in MIKE SHE. An initial calibration has been made and a base case has been defined and evaluated. In connection with the calibration, the most important changes made in the model were the following: The evapotranspiration was reduced. The infiltration capacity was reduced. The hydraulic conductivities of the Quaternary deposits in the water-saturated part of the subsurface were reduced. Data from one surface water level monitoring station, four surface water discharge monitoring stations and 43 groundwater level monitoring stations (SSM series boreholes) have been used to evaluate and calibrate the model. The base case simulations showed a reasonable agreement

  5. Calibration of the rutting model in HDM 4 on the highway network in Macedonia

    Ognjenovic Slobodan

    2018-01-01

    The World Bank HDM 4 model has been adopted in many countries worldwide. It consists of developed models for almost all types of deformation of pavement structures, but it cannot be used anywhere in the world as delivered, without proper adjustment to local conditions such as traffic load, climate, construction specificities, maintenance level etc. This paper presents the results of the research carried out in Macedonia for determining the calibration coefficient of the rutting model in HDM 4.

  6. Calibration and testing of IKU's oil spill contingency and response (OSCAR) model system

    Reed, M.; Aamo, O.M.; Downing, K.

    1996-01-01

    A computer modeling system entitled Oil Spill Contingency and Response (OSCAR) was calibrated and tested using a variety of field observations. The objective of the exercise was to establish model credibility and increase confidence in efforts to compare alternate oil spill response strategies, while maintaining a balance between response costs and environmental protection. The key components of the system are IKU's data-based oil weathering model, a three-dimensional oil trajectory and chemical fates model, an oil spill combat model, and exposure models for fish, ichthyoplankton, birds, and marine mammals. Most modelled calculations were in good agreement with field observations. One discrepancy was found, which could be attributed to an underestimation of wind drift in the current model. 21 refs., 4 tabs., 32 figs

  7. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    Xu, Jin; Yu, Yaming; Van Dyk, David A.; Kashyap, Vinay L.; Siemiginowska, Aneta; Drake, Jeremy; Ratzlaff, Pete; Connors, Alanna; Meng, Xiao-Li

    2014-01-01

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty of the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product, here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
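
    A rough sketch of the PCA representation of effective-area uncertainty mentioned above: an ensemble of plausible calibration curves is decomposed with an SVD, and a handful of component coefficients then parameterize the calibration uncertainty inside a sampler. The ensemble below is synthetic; in practice it would come from the instrument team's calibration products.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    energy = np.linspace(0.3, 8.0, 200)                 # keV grid
    a0 = 500 * np.exp(-0.1 * energy)                    # nominal effective area
    ensemble = a0 * (1 + 0.03 * rng.normal(size=(100, 1))
                     + 0.02 * rng.normal(size=(100, 1)) * np.log(energy))

    dev = ensemble - ensemble.mean(axis=0)              # deviations from the mean
    U, s, Vt = np.linalg.svd(dev, full_matrices=False)  # principal components

    m = 2                                               # leading components kept
    def sample_area(rng):
        # Draw one plausible curve: mean + sum_j e_j * (s_j / sqrt(n)) * v_j.
        e = rng.normal(size=m)
        return ensemble.mean(axis=0) + (e * s[:m]) @ Vt[:m] / np.sqrt(len(ensemble))

    print(sample_area(rng)[:5])                         # first few grid values
    ```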

  8. Multiobjective Sampling Design for Calibration of Water Distribution Network Model Using Genetic Algorithm and Neural Network

    Kourosh Behzadian

    2008-03-01

    In this paper, a novel multiobjective optimization model is presented for selecting optimal locations in a water distribution network (WDN) at which to install pressure loggers. The pressure data collected at the optimal locations will later be used in the calibration of the proposed WDN model. The objective functions consist of maximizing calibrated model prediction accuracy and minimizing the total cost of the sampling design. In order to decrease the model run time, an optimization model has been developed using a multiobjective genetic algorithm and an adaptive neural network (MOGA-ANN). Neural networks (NNs) are initially trained after a number of initial GA generations and are periodically retrained and updated after generation of a specified number of full model-analyzed solutions. Trained NNs replace the fitness evaluation of some chromosomes as the GA progresses. A cache prevents objective function evaluation of repetitive chromosomes within the GA. Optimal solutions are obtained from the Pareto-optimal front with respect to the two objective functions. Results show that incorporating NNs in the MOGA to approximate portions of the chromosomes' fitness in each generation leads to considerable savings in model run time and is promising for reducing run time in optimization models with significant computational effort.

  9. Robustness of near-infrared calibration models for the prediction of milk constituents during the milking process.

    Melfsen, Andreas; Hartung, Eberhard; Haeussermann, Angelika

    2013-02-01

    The robustness of in-line raw milk analysis with near-infrared spectroscopy (NIRS) was tested with respect to the prediction of the raw milk contents fat, protein and lactose. Near-infrared (NIR) spectra of raw milk (n = 3119) were acquired on three different farms during the milking process of 354 milkings over a period of six months. Calibration models were calculated for: a random data set of each farm (fully random internal calibration); the first two thirds of the visits per farm (internal calibration); the whole datasets of two of the three farms (external calibration); and combinations of external and internal datasets. Validation was done either on the remaining data set per farm (internal validation) or on data of the remaining farms (external validation). Excellent calibration results were obtained when fully randomised internal calibration sets were used for milk analysis. In this case, RPD values of around ten, five and three for the prediction of fat, protein and lactose content, respectively, were achieved. Farm-internal calibrations achieved much poorer prediction results, especially for the prediction of protein and lactose, with RPD values of around two and one, respectively. The prediction accuracy improved when validation was done on spectra of an external farm, mainly due to the higher sample variation in external calibration sets in terms of feeding diets and individual cow effects. The results showed that further improvements were achieved when additional farm information was added to the calibration set. One of the main requirements of a robust calibration model is the ability to predict milk constituents in unknown future milk samples. The robustness and quality of prediction increases with increasing variation of, e.g., feeding and cow-individual milk composition in the calibration model.

  10. 2-D model of global aerosol transport

    Rehkopf, J; Newiger, M; Grassl, H

    1984-01-01

    The distribution of aerosol particles in the troposphere is described. Starting with long term mean seasonal flow and diffusivities as well as temperature, cloud distribution (six cloud classes), relative humidity and OH radical concentration, the steady state concentrations of aerosol particles and SO₂ are calculated in a two-dimensional global (height and latitude) model. The following sources and sinks for particles are handled: direct emission, gas-to-particle conversion from SO₂, coagulation, rainout, washout, gravitational settling, and dry deposition. The sinks considered for sulphur emissions are dry deposition, washout, rainout, gas-phase oxidation, and aqueous-phase oxidation. Model tests with the water vapour cycle show a good agreement between measured and calculated zonal mean precipitation distribution. The steady state concentration distribution for natural emissions, reached after 10 weeks model time, may be described by a mean exponent α = 3.2 near the surface assuming a modified Junge distribution, and an increased value, α = 3.7, for the combined natural and man-made emission. The maximum ground level concentrations are 2000 and 10,000 particles cm⁻³ for natural and natural plus man-made emissions, respectively. The resulting distribution of sulphur dioxide agrees satisfactorily with measurements given by several authors. 37 references, 4 figures.

  11. Integrated assessment models of global climate change

    Parson, E.A.; Fisher-Vanden, K.

    1997-01-01

    The authors review recent work in the integrated assessment modeling of global climate change. This field has grown rapidly since 1990. Integrated assessment models seek to combine knowledge from multiple disciplines in formal integrated representations; inform policy-making, structure knowledge, and prioritize key uncertainties; and advance knowledge of broad system linkages and feedbacks, particularly between socio-economic and bio-physical processes. They may combine simplified representations of the socio-economic determinants of greenhouse gas emissions, the atmosphere and oceans, impacts on human activities and ecosystems, and potential policies and responses. The authors summarize current projects, grouping them according to whether they emphasize the dynamics of emissions control and optimal policy-making, uncertainty, or spatial detail. They review the few significant insights that have been claimed from work to date and identify important challenges for integrated assessment modeling in its relationships to disciplinary knowledge and to broader assessment seeking to inform policy- and decision-making. 192 refs., 2 figs

  12. New error calibration tests for gravity models using subset solutions and independent data - Applied to GEM-T3

    Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.

    1993-01-01

    A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto satellite observations, and the results of these projections are then compared with data residuals computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3 to show that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.

  13. Estimation of the limit of detection in semiconductor gas sensors through linearized calibration models.

    Burgués, Javier; Jiménez-Soto, Juan Manuel; Marco, Santiago

    2018-07-12

    The limit of detection (LOD) is a key figure of merit in chemical sensing. However, the estimation of this figure of merit is hindered by the non-linear calibration curve characteristic of semiconductor gas sensor technologies such as metal oxide (MOX), gasFET or thermoelectric sensors. Additionally, chemical sensors suffer from cross-sensitivities and temporal stability problems. The application of the International Union of Pure and Applied Chemistry (IUPAC) recommendations for univariate LOD estimation in non-linear semiconductor gas sensors is not straightforward due to the strong statistical requirements of the IUPAC methodology (linearity, homoscedasticity, normality). Here, we propose a methodological approach to LOD estimation through linearized calibration models. As an example, the methodology is applied to the detection of low concentrations of carbon monoxide using MOX gas sensors in a scenario where the main source of error is the presence of uncontrolled levels of humidity.
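
    As a rough illustration of LOD estimation through a linearized calibration model, the sketch below log-transforms a synthetic MOX-like power-law response, fits an ordinary least-squares line, and applies the usual 3.3·s/b criterion on the linearized scale. The response law and all numbers are invented for illustration, not the authors' procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    conc = np.array([0.5, 1, 2, 5, 10, 20, 50])                      # ppm CO
    resp = 100 * conc ** -0.6 * rng.lognormal(0.0, 0.03, conc.size)  # sensor R/R0

    x, y = np.log(conc), np.log(resp)       # linearized calibration data
    b, a = np.polyfit(x, y, 1)              # slope and intercept
    s = np.std(y - (a + b * x), ddof=2)     # residual standard deviation

    # Illustrative LOD analogue on the linearized scale: the concentration
    # ratio whose response differs from baseline by 3.3 residual sd.
    lod_factor = np.exp((3.3 * s) / abs(b))
    print("slope %.3f, residual sd %.3f, LOD factor %.2f" % (b, s, lod_factor))
    ```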

  14. A Monte Carlo modeling alternative for the API Gamma Ray Calibration Facility

    Galford, J.E.

    2017-01-01

    The gamma ray pit at the API Calibration Facility, located on the University of Houston campus, defines the API unit for natural gamma ray logs used throughout the petroleum logging industry. Future use of the facility is uncertain. An alternative method is proposed to preserve the gamma ray API unit definition as an industry standard by using Monte Carlo modeling to obtain accurate counting rate-to-API unit conversion factors for gross-counting and spectral gamma ray tool designs. - Highlights: • A Monte Carlo alternative is proposed to replace empirical calibration procedures. • The proposed Monte Carlo alternative preserves the original API unit definition. • MCNP source and materials descriptions are provided for the API gamma ray pit. • Simulated results are presented for several wireline logging tool designs. • The proposed method can be adapted for use with logging-while-drilling tools.

  15. α Centauri A as a potential stellar model calibrator: establishing the nature of its core

    Nsamba, B.; Monteiro, M. J. P. F. G.; Campante, T. L.; Cunha, M. S.; Sousa, S. G.

    2018-05-01

    Understanding the physical process responsible for the transport of energy in the core of α Centauri A is of the utmost importance if this star is to be used in the calibration of stellar model physics. Adoption of different parallax measurements available in the literature results in differences in the interferometric radius constraints used in stellar modelling. Further, this is at the origin of the different dynamical mass measurements reported for this star. With the goal of reproducing the revised dynamical mass derived by Pourbaix & Boffin, we modelled the star using two stellar grids varying in the adopted nuclear reaction rates. Asteroseismic and spectroscopic observables were complemented with different interferometric radius constraints during the optimisation procedure. Our findings show that best-fit models reproducing the revised dynamical mass favour the existence of a convective core (≳ 70% of best-fit models), a result that is robust against changes to the model physics. If this mass is accurate, then α Centauri A may be used to calibrate stellar model parameters in the presence of a convective core.

  16. Increasing parameter certainty and data utility through multi-objective calibration of a spatially distributed temperature and solute model

    C. Bandaragoda

    2011-05-01

    To support the goal of distributed hydrologic and instream model predictions based on physical processes, we explore multi-dimensional parameterization determined by a broad set of observations. We present a systematic approach to using various data types at spatially distributed locations to decrease the parameter bounds sampled within calibration algorithms, which ultimately provides information regarding the extent of individual processes represented within the model structure. Through the use of a simulation matrix, parameter sets are first locally optimized by fitting the respective data at one or two locations, and the best results are then selected to resolve which parameter sets perform best at all locations, or globally. This approach is illustrated using the Two-Zone Temperature and Solute (TZTS) model for a case study in the Virgin River, Utah, USA, where temperature and solute tracer data were collected at multiple locations and zones within the river that represent the fate and transport of both heat and solute through the study reach. The result was a narrowed parameter space and increased parameter certainty that, based on our results, would not have been achieved if only single-objective algorithms were used. We also found that the global optimum is best defined by multiple spatially distributed local optima, which supports the hypothesis that there is a discrete and narrowly bounded parameter range that represents the processes controlling the dominant hydrologic responses. Further, we illustrate that the optimization process itself can be used to determine which observed responses and locations are most useful for estimating the parameters that result in a global fit, to guide future data collection efforts.
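
    The simulation-matrix idea, local optimization per location followed by a global re-scoring, reduces to a few lines in a toy setting. Everything below (model, sites, error metric) is a stand-in for the TZTS workflow, not the study's actual code.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    sites = 4
    obs = 2.5 + 0.1 * rng.standard_normal(sites)   # per-site "observations"

    def error(p, s):
        # Stand-in for simulating the model with parameter p and scoring
        # the fit against the data collected at site s.
        return abs(p - obs[s])

    # Step 1: local optimization, fitting the data at each site separately.
    grid = np.linspace(0.0, 5.0, 201)
    local_best = [grid[np.argmin([error(p, s) for p in grid])]
                  for s in range(sites)]

    # Step 2: simulation matrix: re-score every local optimum at all sites
    # and keep the parameter value that performs best globally.
    matrix = np.array([[error(p, s) for s in range(sites)] for p in local_best])
    global_best = local_best[int(np.argmin(matrix.sum(axis=1)))]
    print(local_best, "->", global_best)
    ```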

  17. On the possibility of calibrating urban storm-water drainage models using gauge-based adjusted radar rainfall estimates

    Ochoa-Rodriguez, S; Wang, L; Simoes, N; Onof, C; Maksimović, Č

    2013-01-01

    Traditionally, urban storm water drainage models have been calibrated using only raingauge data, which may result in overly conservative models due to the lack of spatial description of rainfall. With the advent of weather radars, radar rainfall estimates with higher temporal and spatial resolution have become increasingly available and have started to be used operationally for urban storm water model calibration and real time operation. Nonetheless,...

  18. Model calibration and validation for OFMSW and sewage sludge co-digestion reactors

    Esposito, G.; Frunzo, L.; Panico, A.; Pirozzi, F.

    2011-01-01

    Highlights: → Disintegration is the limiting step of the anaerobic co-digestion process. → Disintegration kinetic constant does not depend on the waste particle size. → Disintegration kinetic constant depends only on the waste nature and composition. → The model calibration can be performed on organic waste of any particle size. - Abstract: A mathematical model has recently been proposed by the authors to simulate the biochemical processes that prevail in a co-digestion reactor fed with sewage sludge and the organic fraction of municipal solid waste. This model is based on the Anaerobic Digestion Model no. 1 of the International Water Association, which has been extended to include the co-digestion processes, using surface-based kinetics to model the organic waste disintegration and conversion to carbohydrates, proteins and lipids. When organic waste solids are present in the reactor influent, the disintegration process is the rate-limiting step of the overall co-digestion process. The main advantage of the proposed modeling approach is that the kinetic constant of such a process does not depend on the waste particle size distribution (PSD) and rather depends only on the nature and composition of the waste particles. The model calibration aimed at assessing the kinetic constant of the disintegration process can therefore be conducted using organic waste samples of any PSD, and the resulting value will be suitable for all the organic wastes of the same nature as the investigated samples, independently of their PSD. This assumption was proven in this study by biomethane potential experiments that were conducted on organic waste samples with different particle sizes. The results of these experiments were used to calibrate and validate the mathematical model, resulting in a good agreement between the simulated and observed data for any investigated particle size of the solid waste. This study confirms the strength of the proposed model and calibration procedure.

  19. Principal components based support vector regression model for on-line instrument calibration monitoring in NPPs

    Seo, In Yong; Ha, Bok Nam; Lee, Sung Woo; Shin, Chang Hoon; Kim, Seong Jun

    2010-01-01

    In nuclear power plants (NPPs), periodic sensor calibrations are required to assure that sensors are operating correctly. Because the sensors' operating status is checked only at each fuel outage, faulty sensors may remain undetected for periods of up to 24 months. Moreover, typically only a few of the calibrated sensors are actually found to be faulty. For the safe operation of NPPs and the reduction of unnecessary calibrations, on-line instrument calibration monitoring is needed. In this study, principal component based auto-associative support vector regression (PCSVR) using response surface methodology (RSM) is proposed for the sensor signal validation of NPPs. This paper describes the design of a PCSVR-based sensor validation system for a power generation system. RSM is employed to determine the optimal values of the SVR hyperparameters and is compared to the genetic algorithm (GA). The proposed PCSVR model is confirmed with actual plant data from Kori Nuclear Power Plant Unit 3 and is compared with the auto-associative support vector regression (AASVR) and the auto-associative neural network (AANN) models. The auto-sensitivity of AASVR is improved by around six times by using PCA, resulting in good detection of sensor drift. Compared to AANN, accuracy and cross-sensitivity are better, while the auto-sensitivity is almost the same. Meanwhile, the proposed RSM for the optimization of the PCSVR algorithm performs even better in terms of accuracy, auto-sensitivity, and averaged maximum error, except in averaged RMS error, and this method is much more time efficient compared to the conventional GA method.
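
    A minimal sketch of the auto-associative PCA-plus-SVR idea (not the authors' PCSVR with RSM tuning): correlated sensor channels are compressed with PCA, each channel is regressed on the principal components, and drift shows up as a growing reconstruction residual. Data, dimensions and hyperparameters are illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVR

    rng = np.random.default_rng(6)
    t = np.linspace(0, 10, 500)
    base = np.sin(t)                         # shared process signal
    X = np.column_stack([base + 0.01 * rng.standard_normal(t.size)
                         for _ in range(5)]) # five redundant sensor channels

    pca = PCA(n_components=2).fit(X)
    Z = pca.transform(X)
    # One regressor per sensor, reconstructing it from the components.
    models = [SVR(C=10.0, epsilon=0.001).fit(Z, X[:, j])
              for j in range(X.shape[1])]

    # Inject a slow drift into sensor 0 and compare reconstruction residuals.
    X_drift = X.copy()
    X_drift[:, 0] += 0.05 * t / t.max()
    resid_ok = np.abs(X[:, 0] - models[0].predict(pca.transform(X)))
    resid_bad = np.abs(X_drift[:, 0] - models[0].predict(pca.transform(X_drift)))
    print("mean residual, healthy vs drifting: %.4f vs %.4f"
          % (resid_ok.mean(), resid_bad.mean()))
    ```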

  20. BETR global - A geographically-explicit global-scale multimedia contaminant fate model

    MacLeod, Matthew; Waldow, Harald von; Tay, Pascal; Armitage, James M.; Woehrnschimmel, Henry; Riley, William J.; McKone, Thomas E.; Hungerbuhler, Konrad

    2011-01-01

    We present two new software implementations of the BETR Global multimedia contaminant fate model. The model uses steady-state or non-steady-state mass-balance calculations to describe the fate and transport of persistent organic pollutants using a desktop computer. The global environment is described using a database of long-term average monthly conditions on a 15° x 15° grid. We demonstrate BETR Global by modeling the global sources, transport, and removal of decamethylcyclopentasiloxane (D5). - Two new software implementations of the Berkeley-Trent Global Contaminant Fate Model are available. The new model software is illustrated using a case study of the global fate of decamethylcyclopentasiloxane (D5).
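
    The steady-state mass-balance calculation at the heart of a multimedia fate model of this kind reduces to a linear solve of K m = -e. The three-compartment system below is a toy with invented rate constants, not the BETR Global parameterization.

    ```python
    import numpy as np

    # Rows/columns: air, water, soil (toy). Off-diagonal K[i, j] is the
    # first-order transfer rate from compartment j into compartment i;
    # diagonals are the (negative) total loss rates from each compartment.
    K = np.array([[-1.0e-2,  1.0e-4,  2.0e-4],
                  [ 2.0e-3, -5.0e-3,  0.0   ],
                  [ 3.0e-3,  0.0,    -1.0e-3]])
    e = np.array([10.0, 0.0, 0.0])        # emission to air, kg/h

    # Steady state of dm/dt = K m + e = 0  =>  m = -K^{-1} e.
    m = np.linalg.solve(K, -e)
    print(dict(zip(["air", "water", "soil"], np.round(m, 1))))
    ```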

  2. European extra-tropical storm damage risk from a multi-model ensemble of dynamically-downscaled global climate models

    Haylock, M. R.

    2011-10-01

    Uncertainty in the return levels of insured loss from European wind storms was quantified using storms derived from twenty-two 25 km regional climate model runs driven by either the ERA40 reanalyses or one of four coupled atmosphere-ocean global climate models. Storms were identified using a model-dependent storm severity index based on daily maximum 10 m wind speed. The wind speed from each model was calibrated to a set of 7 km historical storm wind fields using the 70 storms with the highest severity index in the period 1961-2000, employing a two stage calibration methodology. First, the 25 km daily maximum wind speed was downscaled to the 7 km historical model grid using the 7 km surface roughness length and orography, also adopting an empirical gust parameterisation. Secondly, downscaled wind gusts were statistically scaled to the historical storms to match the geographically-dependent cumulative distribution function of wind gust speed. The calibrated wind fields were run through an operational catastrophe reinsurance risk model to determine the return level of loss to a European population density-derived property portfolio. The risk model produced a 50-yr return level of loss of between 0.025% and 0.056% of the total insured value of the portfolio.
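
    The second calibration stage described above, statistically scaling gusts to match a geographically dependent cumulative distribution function, is essentially empirical quantile mapping. A minimal sketch with synthetic data follows; it is not the operational implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    model_gust = rng.weibull(2.0, 1000) * 20.0   # biased model gust climatology
    hist_gust = rng.weibull(2.2, 800) * 24.0     # reference (historical) gusts

    def quantile_map(x, model_clim, ref_clim):
        # Map each value through the empirical model CDF and back through
        # the inverse empirical CDF of the reference distribution.
        q = np.searchsorted(np.sort(model_clim), x) / len(model_clim)
        q = np.clip(q, 0.0, 1.0)
        return np.quantile(ref_clim, q)

    new_storm = np.array([15.0, 25.0, 35.0])     # downscaled gusts to correct
    print(quantile_map(new_storm, model_gust, hist_gust))
    ```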

  4. Hydrologic Model Development and Calibration: Contrasting a Single- and Multi-Objective Approach for Comparing Model Performance

    Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.

    2009-05-01

    Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias, evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting, such that the set of parameters that maximizes the high-flow NS differs from the set of parameters that maximizes the low-flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading, since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of the selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model is currently under development by Environment Canada.
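
    The bi-objective comparison described above can be emulated by sampling parameter sets, evaluating two conflicting error metrics, and retaining the non-dominated (Pareto) set. The sketch below uses synthetic metrics as stand-ins for, e.g., high-flow and low-flow NS errors; it is illustrative, not the study's calibration code.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    params = rng.uniform(0.0, 1.0, (500, 2))       # sampled parameter sets

    # Two conflicting synthetic error metrics (lower is better).
    f1 = (params[:, 0] - 0.2) ** 2 + 0.1 * params[:, 1]
    f2 = (params[:, 0] - 0.8) ** 2 + 0.1 * (1 - params[:, 1])
    F = np.column_stack([f1, f2])

    def pareto_mask(F):
        # Keep points not dominated by any other point (minimization).
        keep = np.ones(len(F), dtype=bool)
        for i, fi in enumerate(F):
            if keep[i]:
                dominated = np.all(F >= fi, axis=1) & np.any(F > fi, axis=1)
                keep &= ~dominated
        return keep

    front = F[pareto_mask(F)]
    print(len(front), "non-dominated parameter sets on the tradeoff curve")
    ```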

  5. Modelling and analysis of global coal markets

    Trueby, Johannes

    2013-01-01

    International Steam Coal Trade. In this paper, we analyse steam coal market equilibria in the years 2006 and 2008 by testing for two possible market structure scenarios: perfect competition and an oligopoly setup with major exporters competing in quantities. The assumed oligopoly scenario cannot explain market equilibria for any year. While we find that the competitive model simulates market equilibria well in 2006, the competitive model is not able to reproduce real market outcomes in 2008. The analysis shows that not all available supply capacity was utilised in 2008. We conclude that either unknown capacity bottlenecks or more sophisticated non-competitive strategies were the cause for the high prices in 2008. Chapter 4 builds upon the findings of the analysis in chapter 3 and adds a more detailed representation of domestic markets. The corresponding essay is titled Nations as Strategic Players in Global Commodity Markets: Evidence from World Coal Trade. In this chapter we explore the hypothesis that export policies and trade patterns of national players in the steam coal market are consistent with non-competitive market behaviour. We test this hypothesis by developing a static equilibrium model which is able to model coal producing nations as strategic players. We explicitly account for integrated seaborne trade and domestic markets. The global steam coal market is simulated under several imperfect market structure setups. We find that trade and prices of a China - Indonesia duopoly fits the real market outcome best and that real Chinese export quotas in 2008 were consistent with simulated exports under a Cournot-Nash strategy. Chapter 5 looks at the long-term effect of Chinese energy system planning decisions. The time horizon is 2006 to 2030. The analysis in this chapter combines a dynamic equilibrium model with the scenario analysis technique. The corresponding essay is titled Coal Lumps vs. Electrons: How Do Chinese Bulk Energy Transport Decisions Affect the Global

  7. Modeling transducer impulse responses for predicting calibrated pressure pulses with the ultrasound simulation program Field II

    Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten

    2010-01-01

    FIELD II is a simulation software capable of predicting the field pressure in front of transducers having any complicated geometry. A calibrated prediction with this program is, however, dependent on an exact voltage-to-surface acceleration impulse response of the transducer. Such an impulse response is not calculated by FIELD II. This work investigates the usability of combining a one-dimensional multilayer transducer modeling principle with the FIELD II software. Multilayer here refers to a transducer composed of several material layers. Measurements of pressure and current from Pz27 piezoceramic disks ... transducer model and the FIELD II software in combination give good agreement with measurements.

  8. The analytical calibration model of temperature effects on a silicon piezoresistive pressure sensor

    Meng Nie

    2017-03-01

    Piezoresistive pressure sensors are in high demand for use in various microelectronic devices. The electrical behavior of these pressure sensors is mainly dependent on the temperature gradient. In this paper, the various factors responsible for the temperature drift of the pressure sensor are analyzed, including the effects of temperature and doping concentration on the pressure-sensitive resistance, package stress, and the effect of temperature on the Young's modulus. Based on this analysis, an analytical calibration model of the output voltage of the sensor is proposed and validated against experimental data.

  9. Spatial pattern evaluation of a calibrated national hydrological model - a remote-sensing-based diagnostic approach

    Mendiguren, Gorka; Koch, Julian; Stisen, Simon

    2017-11-01

    Distributed hydrological models are traditionally evaluated against discharge stations, emphasizing the temporal and neglecting the spatial component of a model. The present study widens the traditional paradigm by highlighting spatial patterns of evapotranspiration (ET), a key variable at the land-atmosphere interface, obtained from two different approaches at the national scale of Denmark. The first approach is based on a national water resources model (DK-model), using the MIKE-SHE model code, and the second approach utilizes a two-source energy balance model (TSEB) driven mainly by satellite remote sensing data. Ideally, the hydrological model simulation and remote-sensing-based approach should present similar spatial patterns and driving mechanisms of ET. However, the spatial comparison showed that the differences are significant and indicate insufficient spatial pattern performance of the hydrological model.The differences in spatial patterns can partly be explained by the fact that the hydrological model is configured to run in six domains that are calibrated independently from each other, as it is often the case for large-scale multi-basin calibrations. Furthermore, the model incorporates predefined temporal dynamics of leaf area index (LAI), root depth (RD) and crop coefficient (Kc) for each land cover type. This zonal approach of model parameterization ignores the spatiotemporal complexity of the natural system. To overcome this limitation, this study features a modified version of the DK-model in which LAI, RD and Kc are empirically derived using remote sensing data and detailed soil property maps in order to generate a higher degree of spatiotemporal variability and spatial consistency between the six domains. The effects of these changes are analyzed by using empirical orthogonal function (EOF) analysis to evaluate spatial patterns. The EOF analysis shows that including remote-sensing-derived LAI, RD and Kc in the distributed hydrological model adds

  10. Calibrating a forest landscape model to simulate frequent fire in Mediterranean-type shrublands

    Syphard, A.D.; Yang, J.; Franklin, J.; He, H.S.; Keeley, J.E.

    2007-01-01

    In Mediterranean-type ecosystems (MTEs), fire disturbance influences the distribution of most plant communities, and altered fire regimes may be more important than climate factors in shaping future MTE vegetation dynamics. Models that simulate the high-frequency fire and post-fire response strategies characteristic of these regions will be important tools for evaluating potential landscape change scenarios. However, few existing models have been designed to simulate these properties over long time frames and broad spatial scales. We refined a landscape disturbance and succession (LANDIS) model to operate on an annual time step and to simulate altered fire regimes in a southern California Mediterranean landscape. After developing a comprehensive set of spatial and non-spatial variables and parameters, we calibrated the model to simulate very high fire frequencies and evaluated the simulations under several parameter scenarios representing hypotheses about system dynamics. The goal was to ensure that observed model behavior would simulate the specified fire regime parameters, and that the predictions were reasonable based on current understanding of community dynamics in the region. After calibration, the two dominant plant functional types responded realistically to different fire regime scenarios. Therefore, this model offers a new alternative for simulating altered fire regimes in MTE landscapes.

  11. Calibrating and validating a FE model for long-term behavior of RC beams

    Tošić Nikola D.

    2014-01-01

    This study presents the research carried out to find an optimal finite element (FE) model for calculating the long-term behavior of reinforced concrete (RC) beams. The multi-purpose finite element software DIANA was used. A benchmark test in the form of a simply supported beam loaded in four-point bending was selected for model calibration. The result was the choice of 3-node beam elements, a multi-directional fixed crack model with constant stress cut-off, nonlinear tension softening and constant shear retention, and a creep and shrinkage model according to CEB-FIP Model Code 1990. The model was then validated on 14 simply supported beams and 6 continuous beams. Good agreement was found with experimental results (within ±15%).

  12. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy, because the measurements of the spectra may be performed on different instruments and the difference between the instruments must be corrected. Most calibration transfer methods require standard samples to construct the transfer model, using the spectra of the samples measured on two instruments, named the master and slave instrument, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications.
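
    The flavor of LMC can be sketched as a penalized least-squares transfer in which the slave coefficients are kept close in profile to the master coefficients while fitting a few slave-instrument spectra. This closed-form, ridge-style version is an assumption for illustration, not the authors' exact constrained formulation, and all data are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    p, n_transfer = 100, 10
    b_master = np.sin(np.linspace(0, 3, p))        # master PLS-like coefficients

    X_slave = rng.normal(size=(n_transfer, p))     # few slave-instrument spectra
    y = X_slave @ (b_master * 1.05 + 0.02)         # reference property values

    # Minimize ||X b - y||^2 + lam * ||b - b_master||^2 in closed form:
    # (X'X + lam I) b = X'y + lam * b_master.
    lam = 1.0                                      # strength of profile constraint
    A = X_slave.T @ X_slave + lam * np.eye(p)
    b_slave = np.linalg.solve(A, X_slave.T @ y + lam * b_master)

    print(np.corrcoef(b_master, b_slave)[0, 1])    # profiles remain similar
    ```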

  13. Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion, the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation, is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite, a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.

  14. The worth of data to reduce predictive uncertainty of an integrated catchment model by multi-constraint calibration

    Koch, J.; Jensen, K. H.; Stisen, S.

    2017-12-01

    Hydrological models that integrate numerical process descriptions across compartments of the water cycle are typically required to undergo thorough model calibration in order to estimate suitable effective model parameters. In this study, we apply a spatially distributed hydrological model code which couples the saturated zone with the unsaturated zone and the energy partitioning at the land surface. We conduct a comprehensive multi-constraint model calibration against nine independent observational datasets which reflect both the temporal and the spatial behavior of the hydrological response of a 1000 km² catchment in Denmark. The datasets are obtained from satellite remote sensing and in-situ measurements and cover five keystone hydrological variables: discharge, evapotranspiration, groundwater head, soil moisture and land surface temperature. Results indicate that a balanced optimization can be achieved where errors on objective functions for all nine observational datasets can be reduced simultaneously. The applied calibration framework was tailored with a focus on improving the spatial pattern performance; however, results suggest that the optimization is still more prone to improve the temporal dimension of model performance. This study features a post-calibration linear uncertainty analysis. This allows quantifying parameter identifiability, which is the worth of a specific observational dataset for inferring values of model parameters through calibration. Furthermore, the ability of an observation to reduce predictive uncertainty is assessed as well. Such findings have concrete implications for the design of model calibration frameworks and, in more general terms, the acquisition of data in hydrological observatories.

  15. Calibration of a γ-Reθ transition model and its application in low-speed flows

    Wang, YunTao; Zhang, YuLun; Meng, DeHong; Wang, GunXue; Li, Song

    2014-12-01

    The prediction of laminar-turbulent transition in the boundary layer is very important for obtaining accurate aerodynamic characteristics with computational fluid dynamics (CFD) tools, because laminar-turbulent transition is directly related to complex flow phenomena in the boundary layer and separated flow in space. Unfortunately, the transition effect is not included in today's major CFD tools because of the non-local calculations involved in transition modeling. In this paper, Menter's γ-Reθ transition model is calibrated and incorporated into a Reynolds-Averaged Navier-Stokes (RANS) code, the Trisonic Platform (TRIP) developed at the China Aerodynamics Research and Development Center (CARDC). Based on experimental flat plate data from the literature, the empirical correlations involved in the transition model are modified and calibrated numerically. Numerical simulation of the low-speed flow over the Trapezoidal Wing (Trap Wing) is performed and compared with the corresponding experimental data. It is indicated that the γ-Reθ transition model can accurately predict the location of separation-induced transition and natural transition in flow regions with moderate pressure gradients. The transition model effectively improves the simulation accuracy of the boundary layer and aerodynamic characteristics.

  16. Mathematical Model and Calibration Procedure of a PSD Sensor Used in Local Positioning Systems.

    Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Domingo-Perez, Francisco; Tsirigotis, Georgios

    2016-09-15

    Here, we propose a mathematical model and a calibration procedure for a PSD (position sensitive device) sensor equipped with an optical system, to enable accurate measurement of the angle of arrival of one or more beams of light emitted by infrared (IR) transmitters located at distances of between 4 and 6 m. To achieve this objective, it was necessary to characterize the intrinsic parameters that model the system and obtain their values. This first approach was based on a pin-hole model, to which system nonlinearities were added, and this was used to model the points obtained with the nA currents provided by the PSD. In addition, we analyzed the main sources of error, including PSD sensor signal noise, gain factor imbalances and PSD sensor distortion. The results indicated that the proposed model and method provided satisfactory calibration and yielded precise parameter values, enabling accurate measurement of the angle of arrival with a low degree of error, as evidenced by the experimental results.
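
    A minimal sketch of the measurement principle: the spot position on a 1-D PSD follows from the ratio of the two electrode currents, the angle of arrival from a pin-hole projection through the optics' focal length, and a polynomial term stands in for the calibrated nonlinearities mentioned above. All constants are hypothetical.

    ```python
    import numpy as np

    L = 10e-3        # PSD active length [m], illustrative
    f = 25e-3        # effective focal length [m], illustrative
    k3 = 5.0         # cubic distortion coefficient, hypothetical calibration result

    def angle_of_arrival(i1, i2):
        # Spot position on the PSD from the two electrode currents [m].
        x = 0.5 * L * (i2 - i1) / (i1 + i2)
        # Calibrated nonlinearity correction (stand-in for the model's terms).
        x = x + k3 * x**3
        # Pin-hole model: tan(angle) = x / f.
        return np.degrees(np.arctan2(x, f))

    # Example: electrode currents in nA.
    print(angle_of_arrival(120.0, 150.0))
    ```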

  17. Development and Calibration of Two-Dimensional Hydrodynamic Model of the Tanana River near Tok, Alaska

    Conaway, Jeffrey S.; Moran, Edward H.

    2004-01-01

    Bathymetric and hydraulic data were collected by the U.S. Geological Survey on the Tanana River in proximity to Alaska Department of Transportation and Public Facilities' bridge number 505 at mile 80.5 of the Alaska Highway. Data were collected from August 7-9, 2002, over an approximate 5,000- foot reach of the river. These data were combined with topographic data provided by Alaska Department of Transportation and Public Facilities to generate a two-dimensional hydrodynamic model. The hydrodynamic model was calibrated with water-surface elevations, flow velocities, and flow directions collected at a discharge of 25,600 cubic feet per second. The calibrated model was then used for a simulation of the 100-year recurrence interval discharge of 51,900 cubic feet per second. The existing bridge piers were removed from the model geometry in a second simulation to model the hydraulic conditions in the channel without the piers' influence. The water-surface elevations, flow velocities, and flow directions from these simulations can be used to evaluate the influence of the piers on flow hydraulics and will assist the Alaska Department of Transportation and Public Facilities in the design of a replacement bridge.

  18. BETR Global - A geographically explicit global-scale multimedia contaminant fate model

    Macleod, M.; Waldow, H. von; Tay, P.; Armitage, J. M.; Wohrnschimmel, H.; Riley, W.; McKone, T. E.; Hungerbuhler, K.

    2011-04-01

    We present two new software implementations of the BETR Global multimedia contaminant fate model. The model uses steady-state or non-steady-state mass-balance calculations to describe the fate and transport of persistent organic pollutants on a desktop computer. The global environment is described using a database of long-term average monthly conditions on a 15° × 15° grid. We demonstrate BETR Global by modeling the global sources, transport, and removal of decamethylcyclopentasiloxane (D5).
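
    The core of such a model is a linear mass balance; the sketch below shows the steady-state case for a toy two-compartment system (the rate constants and emissions are invented), solving dm/dt = K·m + s = 0 for the inventories m.

        import numpy as np

        # Toy 2-compartment system (air, water); K holds first-order
        # transfer/loss rates (1/h), s the constant emissions (kg/h).
        K = np.array([[-0.5,  0.1],
                      [ 0.2, -0.3]])
        s = np.array([10.0, 0.0])

        m_ss = np.linalg.solve(K, -s)  # steady state: K m = -s
        print(m_ss)                    # compartment inventories (kg)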

  19. Moduli stabilisation for chiral global models

    Cicoli, Michele; Mayrhofer, Christoph; Valandro, Roberto

    2011-10-01

    We combine moduli stabilisation and (chiral) model building in a fully consistent global set-up in Type IIB/F-theory. We consider compactifications on Calabi-Yau orientifolds which admit an explicit description in terms of toric geometry. We build globally consistent compactifications with tadpole and Freed-Witten anomaly cancellation by choosing appropriate brane set-ups and world-volume fluxes which also give rise to SU(5)- or MSSM-like chiral models. We fix all the Kaehler moduli within the Kaehler cone and the regime of validity of the 4D effective field theory. This is achieved in a way compatible with the local presence of chirality. The hidden sector generating the non-perturbative effects is placed on a del Pezzo divisor that does not have any chiral intersections with any other brane. In general, the vanishing D-term condition implies the shrinking of the rigid divisor supporting the visible sector. However, we avoid this problem by generating r < n D-term conditions on a set of n intersecting divisors. The remaining (n-r) flat directions are fixed by perturbative corrections to the Kaehler potential. We illustrate our general claims in an explicit example: a K3-fibred Calabi-Yau with four Kaehler moduli, which is a hypersurface in a toric ambient space and admits a simple F-theory uplift. We present explicit choices of brane set-ups and fluxes which lead to three different phenomenological scenarios: the first with GUT-scale strings and TeV-scale SUSY obtained by fine-tuning the background fluxes; the second with an exponentially large value of the volume and TeV-scale SUSY without fine-tuning the background fluxes; and the third with a very anisotropic configuration that leads to TeV-scale strings and two micron-sized extra dimensions. The K3-fibration structure of the Calabi-Yau three-fold is also particularly suitable for cosmological purposes.

  1. Hydrological model calibration for flood prediction in current and future climates using probability distributions of observed peak flows and model based rainfall

    Haberlandt, Uwe; Wallner, Markus; Radtke, Imke

    2013-04-01

    Derived flood frequency analysis based on continuous hydrological modelling is very demanding regarding the required length and temporal resolution of the precipitation input data. Often such flood predictions are obtained using long precipitation time series from stochastic approaches or from regional climate models as input. However, the calibration of the hydrological model is usually done using short time series of observed data. This inconsistent use of different data types for the calibration and the application of a hydrological model increases its uncertainty. Here, it is proposed to calibrate a hydrological model directly on probability distributions of observed peak flows, using model-based rainfall in line with the model's later application. Two examples are given to illustrate the idea. The first deals with classical derived flood frequency analysis using input data from an hourly stochastic rainfall model. The second concerns a climate impact analysis using hourly precipitation from a regional climate model. The results show that: (I) the same type of precipitation input data should be used for calibration and application of the hydrological model; (II) a model calibrated on extreme conditions works quite well for average conditions, but not vice versa; (III) calibrating the hydrological model using regional climate model data acts as an implicit bias correction; and (IV) the best performance for flood estimation is usually obtained when model-based precipitation and the observed probability distribution of peak flows are used for model calibration.
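
    The idea of calibrating directly on a peak-flow distribution can be sketched as an objective function comparing fitted quantiles; the Gumbel distribution and the return periods below are assumptions for illustration, not the paper's exact choices.

        import numpy as np
        from scipy import stats

        def peak_distribution_error(sim_peaks, obs_peaks,
                                    return_periods=(2, 5, 10, 25, 50)):
            # Non-exceedance probabilities for the chosen return periods.
            probs = 1.0 - 1.0 / np.asarray(return_periods, float)
            q_sim = stats.gumbel_r(*stats.gumbel_r.fit(sim_peaks)).ppf(probs)
            q_obs = stats.gumbel_r(*stats.gumbel_r.fit(obs_peaks)).ppf(probs)
            # RMSE between simulated and observed flood quantiles.
            return float(np.sqrt(np.mean((q_sim - q_obs) ** 2)))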

  2. Combining engineering and data-driven approaches: Development of a generic fire risk model facilitating calibration

    De Sanctis, G.; Fischer, K.; Kohler, J.

    2014-01-01

    Fire risk models support decision making for engineering problems under the consistent consideration of the associated uncertainties. Empirical approaches can be used for cost-benefit studies when enough data about the decision problem are available. But often the empirical approaches...... a generic risk model that is calibrated to observed fire loss data. Generic risk models assess the risk of buildings based on specific risk indicators and support risk assessment at a portfolio level. After an introduction to the principles of generic risk assessment, the focus of the present paper...... are not detailed enough. Engineering risk models, on the other hand, may be detailed but typically involve assumptions that may result in a biased risk assessment and make a cost-benefit study problematic. In two related papers it is shown how engineering and data-driven modeling can be combined by developing...

  3. Calibrating a Salt Water Intrusion Model with Time-Domain Electromagnetic Data

    Herckenrath, Daan; Odlum, Nick; Nenna, Vanessa

    2013-01-01

    Salt water intrusion models are commonly used to support groundwater resource management in coastal aquifers. Concentration data used for model calibration are often sparse and limited in spatial extent. With airborne and ground-based electromagnetic surveys, electrical resistivity models can......, we perform a coupled hydrogeophysical inversion (CHI) in which we use a salt water intrusion model to interpret the geophysical data and guide the geophysical inversion. We refer to this methodology as a Coupled Hydrogeophysical Inversion-State (CHI-S), in which simulated salt concentrations...... are transformed to an electrical resistivity model, after which a geophysical forward response is calculated and compared with the measured geophysical data. This approach was applied for a field site in Santa Cruz County, California, where a time-domain electromagnetic (TDEM) dataset was collected...
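
    The coupling step of CHI-S can be sketched as a petrophysical transform followed by a misfit evaluation; Archie's law and all parameter values below are assumptions standing in for the paper's actual transform.

        import numpy as np

        def bulk_resistivity(conc, porosity=0.3, m=1.8, rho_factor=5.0):
            # Assumed linear map from salt concentration (g/L) to fluid
            # resistivity, then Archie's law for the bulk value (ohm·m).
            rho_fluid = rho_factor / np.maximum(conc, 1e-3)
            return rho_fluid * porosity ** (-m)

        def misfit(sim_conc, observed_rho):
            # Log-resistivity misfit against the TDEM-derived model.
            r = np.log(bulk_resistivity(sim_conc)) - np.log(observed_rho)
            return float(np.sum(r ** 2))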

  4. Cosmological model-independent Gamma-ray bursts calibration and its cosmological constraint to dark energy

    Xu, Lixin

    2012-01-01

    So far, the redshift of Gamma-ray bursts (GRBs) can extend to z ∼ 8, which makes them a complementary probe of dark energy to Type Ia supernovae (SN Ia). However, the calibration of GRBs is still a big challenge when they are used to constrain cosmological models. Although the absolute magnitude of GRBs is still unknown, the slopes of GRB correlations can be used as a useful constraint on dark energy in a completely cosmological-model-independent way. In this paper, we follow Wang's model-independent distance measurement method and calculate the distances of 109 GRB events via the so-called Amati relation. Then, we use the obtained model-independent distances to constrain the ΛCDM model as an example.
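
    The model-independent element is the slope of the correlation; a minimal sketch with invented data points shows the kind of fit involved (the Amati relation links the spectral peak energy E_peak to the isotropic-equivalent energy E_iso).

        import numpy as np

        # Invented (log10) values for illustration only.
        log_epeak = np.array([1.8, 2.1, 2.4, 2.7, 3.0])      # E_peak (keV)
        log_eiso = np.array([51.9, 52.4, 53.0, 53.5, 54.1])  # E_iso (erg)

        b, a = np.polyfit(log_epeak, log_eiso, 1)  # slope, intercept
        print(f"Amati slope b = {b:.2f}, intercept a = {a:.2f}")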

  5. When to Make Mountains out of Molehills: The Pros and Cons of Simple and Complex Model Calibration Procedures

    Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.

    2017-12-01

    Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model's results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit 'equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against more low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin-Hypercube-sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin Hypercube approach in single-metric performance. However, it is also shown that there are many merits to the more comprehensive assessment, which allows for probabilistic model results, multi
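
    The sampling experiment described above can be sketched as stratified Latin Hypercube draws over assumed GR4J parameter ranges, each scored with an error metric such as Nash-Sutcliffe efficiency; the ranges and the metric choice here are illustrative.

        import numpy as np

        rng = np.random.default_rng(42)

        def latin_hypercube(n, bounds):
            d = len(bounds)
            # One stratified, independently shuffled column per parameter.
            u = (rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
                 + rng.random((n, d))) / n
            lo, hi = np.array(bounds, float).T
            return lo + u * (hi - lo)

        def nse(sim, obs):
            # Nash-Sutcliffe efficiency: 1 is perfect, <0 worse than mean.
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        # Illustrative ranges for the four GR4J parameters.
        params = latin_hypercube(500_000,
                                 [(1, 3000), (-10, 5), (1, 1000), (0.5, 4)])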

  6. Flood Inundation Modelling Under Uncertainty Using Globally and Freely Available Remote Sensing Data

    Yan, K.; Di Baldassarre, G.; Giustarini, L.; Solomatine, D. P.

    2012-04-01

    The extreme consequences of recent catastrophic events have highlighted that flood risk prevention still needs to be improved to reduce human losses and economic damages, which have increased considerably worldwide in recent years. Flood risk management and long-term floodplain planning are vital for living with floods, the currently advocated approach to coping with floods. To support decision-making processes, a significant issue is the availability of data with which to build appropriate and reliable models from which the needed information can be obtained. The data desirable for model building, calibration and validation are often insufficient or unavailable. A unique opportunity is offered nowadays by globally available data which can be freely downloaded from the internet. This might open new opportunities for filling the gap between available and needed data in order to build reliable models, and could potentially lead to the development of global inundation models producing floodplain maps for the entire globe. However, there remains the question of the real potential of these global remote sensing data, characterized by different accuracies, for global inundation monitoring, and of how to integrate them with inundation models. This research aims to contribute to understanding whether the currently global and freely available remote sensing data (e.g. SRTM, SAR) can actually be used to appropriately support inundation modelling. In this study, the SRTM DEM is used for hydraulic model building, while ENVISAT-ASAR satellite imagery is used for model validation. To test the usefulness of these globally and freely available data, a model based on a high-resolution LiDAR DEM and ground data (high water marks) is used as a benchmark. The work is carried out on a data-rich test site: the River Alzette in the north of Luxembourg City. Uncertainties are estimated for both the SRTM- and LiDAR-based models. Probabilistic flood inundation maps are produced under the framework of

  7. Global plastic models for computerized structural analysis

    Roche, R.; Hoffmann, A.

    1977-01-01

    Two different global models are used in the CEASEMT system for structural analysis: one for shell analysis and the other for piping analysis (in the plastic or creep regime). In shell analysis, the generalized stresses chosen are the membrane forces N_ij and the bending (including torsion) moments M_ij. There is a single yield condition for each normal to the middle surface, and no integration through the thickness is required. In piping analysis, the generalized stresses chosen are the bending moments, torsional moments, hoop stress and tension stress. There is a single set of stresses per cross-section, and no integration over the cross-section area is needed. The conjugate strains are the axis curvature, torsion and uniform strains. The definition of the yield surface is the most important item. A practical approach is to use a diagonal quadratic function of the stress components, but the coefficients depend on the shape of the pipe element, especially for curved segments. Indications are given on the yield functions used, and some examples of applications in structural analysis are included. (in French)
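
    A hedged sketch of such a diagonal quadratic yield function for a pipe cross-section follows; the generalized stresses, capacities and coefficients are illustrative, not the CEASEMT values.

        import numpy as np

        def yield_function(stress, capacity):
            # stress = (bending M, torsion T, hoop N_h, tension N_t);
            # capacity = corresponding fully plastic limits.
            s = np.asarray(stress) / np.asarray(capacity)
            return float(np.sum(s ** 2))  # diagonal quadratic form

        # f < 1: elastic; f >= 1: at or beyond yield.
        f = yield_function([120.0, 40.0, 2.0, 15.0],
                           [200.0, 80.0, 10.0, 50.0])
        print("plastic" if f >= 1.0 else "elastic", round(f, 3))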

  8. Multiobjective Optimal Algorithm for Automatic Calibration of Daily Streamflow Forecasting Model

    Yi Liu

    2016-01-01

    A single objective function cannot describe the characteristics of a complicated hydrologic system; consequently, multiple objective functions are needed for the calibration of a hydrologic model. Multiobjective algorithms based on the theory of nondomination are employed to solve this multiobjective optimization problem. In this paper, a novel multiobjective optimization method based on differential evolution with adaptive Cauchy mutation and chaos searching (MODE-CMCS) is proposed to optimize a daily streamflow forecasting model. Besides, to enhance the diversity of the Pareto solutions, a more precise crowding-distance assigner is presented. Furthermore, because the traditional generalized spread metric (SP) is sensitive to the size of the Pareto set, a novel diversity performance metric that is independent of the Pareto set size is put forward. The efficacy of the new MODE-CMCS algorithm is compared with that of the nondominated sorting genetic algorithm II (NSGA-II) on a daily streamflow forecasting model based on a support vector machine (SVM). The results verify that the performance of MODE-CMCS is superior to that of NSGA-II for the automatic calibration of hydrologic models.
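
    For context, the plain NSGA-II crowding-distance assignment that such diversity mechanisms refine can be sketched as follows (the more precise MODE-CMCS assigner is not reproduced here).

        import numpy as np

        def crowding_distance(objs):
            # objs: (n_points, n_objectives) array for one Pareto front.
            n, m = objs.shape
            dist = np.zeros(n)
            for j in range(m):
                order = np.argsort(objs[:, j])
                span = objs[order[-1], j] - objs[order[0], j]
                dist[order[0]] = dist[order[-1]] = np.inf  # keep boundaries
                dist[order[1:-1]] += ((objs[order[2:], j]
                                       - objs[order[:-2], j])
                                      / (span if span > 0 else 1.0))
            return dist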

  9. Fertilizer Induced Nitrate Pollution in RCW: Calibration of the DNDC Model

    El Hailouch, E.; Hornberger, G.; Crane, J. W.

    2012-12-01

    Fertilizer is widely used in urban and suburban households owing to the socially driven attention of homeowners to lawn appearance. With its high nitrogen content, fertilizer considerably impacts the environment through the emission of the highly potent greenhouse gas nitrous oxide and through the leaching of nitrate; the latter is particularly important because fertilizer-sourced nitrate leached through the soil causes groundwater pollution. To model the effect of fertilizer application on the environment, the geochemical DeNitrification-DeComposition (DNDC) model was previously developed to quantify the effects of fertilizer use. The purpose of this study is to enable more effective large-scale use of this model through a measurement-based calibration. To this end, leaching was measured and studied at 12 sites in the Richland Creek Watershed (RCW). Information about the fertilization and irrigation regimes of these sites was collected, along with lysimeter readings that gave nitrate fluxes in the soil. A study of the amount of, and variation in, nitrate leaching with respect to geographical location, time of year, and fertilization and irrigation regime has led to a better understanding of the driving forces behind nitrate leaching. Quantifying the influence of each of these parameters allows for a more accurate calibration of the model, thus permitting use that extends beyond the RCW. Measurements of nitrate leaching at the statewide or nationwide level will in turn help guide efforts to reduce groundwater pollution caused by fertilizer.

  10. Electronic transport in VO2—Experimentally calibrated Boltzmann transport modeling

    Kinaci, Alper; Rosenmann, Daniel; Chan, Maria K. Y.; Kado, Motohisa; Ling, Chen; Zhu, Gaohua; Banerjee, Debasish

    2015-01-01

    Materials that undergo metal-insulator transitions (MITs) are under intense study, because the transition is scientifically fascinating and technologically promising for various applications. Among these materials, VO2 has served as a prototype due to its favorable transition temperature. While the physical underpinnings of the transition have been heavily investigated experimentally and computationally, quantitative modeling of electronic transport in the two phases has yet to be undertaken. In this work, we establish a density-functional-theory (DFT)-based approach with Hubbard U correction (DFT + U) to model electronic transport properties in VO2 in the semiconducting and metallic regimes, focusing on band transport using the Boltzmann transport equations. We synthesized high-quality VO2 films and measured the transport quantities across the transition, in order to calibrate the free parameters in the model. We find that the experimental calibration of the Hubbard correction term can efficiently and adequately model the metallic and semiconducting phases, allowing for further computational design of MIT materials for desirable transport properties.

  12. The scintillating optical fiber isotope experiment: Bevalac calibrations of test models

    Connell, J.J.; Binns, W.R.; Dowkontt, P.F.; Epstein, J.W.; Israel, M.H.; Klarmann, J.; Webber, W.R.; Kish, J.C.

    1990-01-01

    The Scintillating Optical Fiber Isotope Experiment (SOFIE) is a Cherenkov dE/dx-range experiment being developed to study the isotopic composition of cosmic rays in the iron region with sufficient resolution to resolve isotopes separated by one mass unit at iron. This instrument images stopping particles with a block of scintillating optical fibers coupled to an image-intensified video camera. From the digitized video data, the trajectory and range of particles stopping in the fiber bundle can be determined; this information, together with a Cherenkov measurement, is used to determine mass. To facilitate this determination, a new Cherenkov response equation was derived for heavy ions at energies near threshold in thick Cherenkov radiators. Test models of SOFIE were calibrated at the Lawrence Berkeley Laboratory's Bevalac heavy ion accelerator in 1985 and 1986 using beams of iron nuclei with energies of 465 to 515 MeV/nucleon. This paper presents the results of these calibrations and discusses the design of the SOFIE Bevalac test models in the context of the scientific objectives of the eventual balloon experiment. The test models show a mass resolution of σ_A ≈ 0.30 amu and a range resolution of σ_R ≈ 250 μm. These results are sufficient for a successful cosmic ray isotope experiment, thus demonstrating the feasibility of the detector system. The SOFIE test models represent the first successful application of the emerging technology of scintillating optical fibers in the field of cosmic ray astrophysics.

  13. Three Different Ways of Calibrating Burger's Contact Model for Viscoelastic Model of Asphalt Mixtures by Discrete Element Method

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2016-01-01

    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional discrete element method. Combined with Burger's model, three contact models were used for the construction of a constitutive asphalt mixture model with viscoelastic properties...... modulus. Three different approaches have been used and compared for calibrating the Burger's contact model. Values of the dynamic modulus and phase angle of asphalt mixtures were predicted by conducting DE simulation under dynamic strain control loading. The excellent agreement between the predicted......
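
    The target quantities can be computed analytically for a Burger's (Maxwell plus Kelvin in series) element; the sketch below evaluates the complex modulus and phase angle at one loading frequency, with illustrative parameter values rather than any calibrated set from the paper.

        import numpy as np

        def burgers_modulus(omega, em, eta_m, ek, eta_k):
            # Complex compliance of a Burger's element:
            # J* = 1/Em + 1/(i*w*eta_m) + 1/(Ek + i*w*eta_k)
            jw = 1j * omega
            compliance = 1/em + 1/(jw * eta_m) + 1/(ek + jw * eta_k)
            modulus = 1.0 / compliance
            # |E*| and the phase lag of strain behind stress (degrees).
            return np.abs(modulus), np.degrees(np.angle(modulus))

        mag, phase = burgers_modulus(2 * np.pi * 10, em=3e9, eta_m=1e9,
                                     ek=1e9, eta_k=1e8)
        print(f"|E*| = {mag / 1e9:.2f} GPa, phase angle = {phase:.1f} deg")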

  14. hydroPSO: A Versatile Particle Swarm Optimisation R Package for Calibration of Environmental Models

    Zambrano-Bigiarini, M.; Rojas, R.

    2012-04-01

    Particle Swarm Optimisation (PSO) is a recent and powerful population-based stochastic optimisation technique inspired by the social behaviour of bird flocking, which shares similarities with other evolutionary techniques such as Genetic Algorithms (GA). In PSO, however, each individual of the population, known as a particle in PSO terminology, adjusts its flying trajectory on the multi-dimensional search space according to its own experience (best-known personal position) and that of its neighbours in the swarm (best-known local position). PSO has recently received a surge of attention given its flexibility, ease of programming, low memory and CPU requirements, and efficiency. Despite these advantages, PSO may still get trapped in sub-optimal solutions, or suffer from swarm explosion or premature convergence. Thus, the development of enhancements to the "canonical" PSO is an active area of research. To date, several modifications to the canonical PSO have been proposed in the literature, resulting in a large and dispersed collection of codes and algorithms which might well be used for similar if not identical purposes. In this work we present hydroPSO, a platform-independent R package implementing several enhancements to the canonical PSO that we consider of utmost importance to bring this technique to the attention of a broader community of scientists and practitioners. hydroPSO is model-independent, allowing the user to interface any model code with the calibration engine without having to invest considerable effort in customizing PSO to a new calibration problem. Some of the controlling options to fine-tune hydroPSO are: four alternative topologies, several types of inertia weight, time-variant acceleration coefficients, time-variant maximum velocity, regrouping of particles when premature convergence is detected, different types of boundary conditions and many others. Additionally, hydroPSO implements recent PSO variants such as: Improved Particle Swarm
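
    The canonical update that all these variants extend is compact; a minimal sketch (in Python rather than R, with standard constriction-style constants assumed) is:

        import numpy as np

        rng = np.random.default_rng(0)

        def pso_step(x, v, pbest, gbest, w=0.72, c1=1.49, c2=1.49):
            # x, v: (n_particles, n_dims) positions and velocities;
            # pbest: per-particle best positions; gbest: swarm best.
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            return x + v_new, v_new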

  15. Econometrically calibrated computable general equilibrium models: Applications to the analysis of energy and climate politics

    Schu, Kathryn L.

    Economy-energy-environment models are the mainstay of economic assessments of policies to reduce carbon dioxide (CO2) emissions, yet their empirical basis is often criticized as being weak. This thesis addresses these limitations by constructing econometrically calibrated models in two policy areas. The first is a 35-sector computable general equilibrium (CGE) model of the U.S. economy which analyzes the uncertain impacts of CO2 emission abatement. Econometric modeling of sectors' nested constant elasticity of substitution (CES) cost functions, based on a 45-year price-quantity dataset, yields estimates of capital-labor-energy-material input substitution elasticities and biases of technical change that are incorporated into the CGE model. I use the estimated standard errors and variance-covariance matrices to construct the joint distribution of the parameters of the economy's supply side, which I sample to perform Monte Carlo baseline and counterfactual runs of the model. The resulting probabilistic abatement cost estimates highlight the importance of the uncertainty in baseline emissions growth. The second model is an equilibrium simulation of the market for new vehicles which I use to assess the response of vehicle prices, sales and mileage to CO2 taxes and increased corporate average fuel economy (CAFE) standards. I specify an econometric model of a representative consumer's vehicle preferences using a nested CES expenditure function which incorporates mileage and other characteristics in addition to prices, and develop a novel calibration algorithm to link this structure to vehicle model supplies by manufacturers engaged in Bertrand competition. CO2 taxes' effects on gasoline prices reduce vehicle sales and manufacturers' profits if vehicles' mileage is fixed, but these losses shrink once mileage can be adjusted. Accelerated CAFE standards induce manufacturers to pay fines for noncompliance rather than incur the higher costs of radical mileage improvements.

  16. Development of Camera Model and Geometric Calibration/validation of Xsat IRIS Imagery

    Kwoh, L. K.; Huang, X.; Tan, W. J.

    2012-07-01

    XSAT, launched on 20 April 2011, is the first micro-satellite designed and built in Singapore. It orbits the Earth at an altitude of 822 km in a sun-synchronous orbit. The satellite carries a multispectral camera, IRIS, with three spectral bands at 12 m resolution: 0.52-0.60 μm (Green), 0.63-0.69 μm (Red) and 0.76-0.89 μm (NIR). In the design of the IRIS camera, the three bands are acquired by three lines of CCDs (NIR, Red and Green). These CCDs are physically separated in the focal plane, and their first pixels are not absolutely aligned. The micro-satellite platform is also not stable enough to allow co-registration of the three bands with a simple linear transformation. In the camera model developed, this platform instability is compensated for with 3rd- to 4th-order polynomials in the satellite's roll, pitch and yaw attitude angles. With the camera model, camera parameters such as the band-to-band separations, the alignment of the CCDs relative to each other, and the focal length of the camera can be validated or calibrated. The results of calibration with more than 20 images showed that the band-to-band along-track separations agreed well with the pre-flight values provided by the vendor (0.093° and 0.046° for the NIR-vs-Red and Green-vs-Red CCDs, respectively). The cross-track alignments were 0.05 pixel and 5.9 pixel for the NIR-vs-Red and Green-vs-Red CCDs, respectively. The focal length was found to be shorter by about 0.8%, which was attributed to the lower temperature at which XSAT is currently operating. With the calibrated parameters and the camera model, a geometric level-1 multispectral image with RPCs can be generated and, if required, orthorectified imagery can also be produced.

  17. Incorporation of sedimentological data into a calibrated groundwater flow and transport model

    Williams, N.J.; Young, S.C.; Barton, D.H.; Hurst, B.T.

    1997-01-01

    Analysis suggests that a high hydraulic conductivity (K) zone is associated with a former river channel at the Portsmouth Gaseous Diffusion Plant (PORTS). Two-dimensional (2-D) and three-dimensional (3-D) groundwater flow models were developed, based on a sedimentological model, to demonstrate the performance of a horizontal well for plume capture. The model produced a flow field with magnitudes and directions consistent with flow paths inferred from historical trichloroethylene (TCE) plume data. The most dominant feature affecting the well's performance was the presence of preferential high- and low-K zones. Based on results from the calibrated flow and transport model, a passive groundwater collection system was designed and built. Initial flow rates and concentrations measured from the gravity-drained horizontal well agree closely with the predicted values.

  18. Bayesian calibration of thermodynamic parameters for geochemical speciation modeling of cementitious materials

    Sarkar, S.; Kosson, D.S.; Mahadevan, S.; Meeussen, J.C.L.; Sloot, H. van der; Arnold, J.R.; Brown, K.G.

    2012-01-01

    Chemical equilibrium modeling of cementitious materials requires aqueous-solid equilibrium constants of the controlling mineral phases (Ksp) and the available concentrations of primary components. Inherent randomness of the input and model parameters, experimental measurement error, the assumptions and approximations required for numerical simulation, and inadequate knowledge of the chemical process contribute to uncertainty in model prediction. A numerical simulation framework is developed in this paper to assess uncertainty in the Ksp values used in geochemical speciation models. A Bayesian statistical method is used in combination with an efficient, adaptive Metropolis sampling technique to develop probability density functions for the Ksp values. One set of leaching experimental observations is used for calibration and another set is used for comparison, to evaluate the applicability of the approach. The estimated probability distributions of the Ksp values can be used in Monte Carlo simulation to assess uncertainty in the behavior of aqueous-solid partitioning of constituents in cement-based materials.
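
    The sampling idea can be sketched as a random-walk Metropolis over log10(Ksp) values (the paper's adaptive proposal scaling is omitted); `log_like` is a hypothetical stand-in for a likelihood built on the geochemical forward model and the leaching observations.

        import numpy as np

        rng = np.random.default_rng(1)

        def metropolis(theta0, log_like, n_iter=10_000, step=0.1):
            # theta0: initial vector of log10(Ksp) values.
            chain, ll = [np.asarray(theta0, float)], log_like(theta0)
            for _ in range(n_iter):
                prop = chain[-1] + rng.normal(0.0, step, size=len(chain[-1]))
                ll_prop = log_like(prop)
                if np.log(rng.random()) < ll_prop - ll:  # accept
                    chain.append(prop)
                    ll = ll_prop
                else:                                    # reject: stay put
                    chain.append(chain[-1])
            return np.array(chain)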

  19. Calibration of a gamma spectrometer for natural radioactivity measurement. Experimental measurements and Monte Carlo modelling

    Courtine, Fabien

    2007-03-01

    This thesis was carried out in the context of thermoluminescence dating. The method requires laboratory measurements of natural radioactivity, for which we have been using a germanium spectrometer. To refine the calibration of this spectrometer, we modelled it using the Monte Carlo code Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a 137Cs source. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. The model was then extended to a more complex source exhibiting cascade effects and angular correlations between photons: 60Co. Finally, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects.

  20. Global Information Enterprise (GIE) Modeling and Simulation (GIESIM)

    Bell, Paul

    2005-01-01

    ... (M&S) toolkits into the Global Information Enterprise (GIE) Modeling and Simulation (GIESim) framework to create effective user analysis of candidate communications architectures and technologies...

  1. Modeling and control of temperature of heat-calibration wind tunnel

    Li Yunhua

    2012-01-01

    This paper investigates the temperature control of a heated-air-flow wind tunnel used for sensor temperature calibration and thermal strength experiments. Firstly, a mathematical model was established to describe the dynamic characteristics of the fuel supply system based on a variable-frequency-driven pump. Then, building on classical cascade control, an improved control law combining a Smith predictor with a fuzzy proportional-integral-derivative (PID) controller was proposed. The simulation results show that the proposed control strategy performs better than an ordinary PID cascade control strategy.
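
    A minimal sketch of the Smith-predictor-corrected PID idea follows, with a first-order-plus-dead-time internal model; all gains and model constants are illustrative, and the fuzzy gain-scheduling layer of the paper is omitted.

        class SmithPID:
            def __init__(self, kp, ki, kd, k_m, tau_m, delay_steps, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.k_m, self.tau_m = k_m, tau_m      # internal model
                self.buf = [0.0] * delay_steps         # dead-time delay line
                self.y_m = 0.0                         # undelayed model output
                self.i_term = 0.0
                self.e_prev = 0.0

            def update(self, setpoint, y_meas, u_prev):
                # First-order internal model driven by the last control input.
                self.y_m += self.dt / self.tau_m * (self.k_m * u_prev - self.y_m)
                self.buf.append(self.y_m)
                # Smith correction: add undelayed, subtract delayed model output.
                y_pred = y_meas + self.y_m - self.buf.pop(0)
                e = setpoint - y_pred
                self.i_term += e * self.dt
                d = (e - self.e_prev) / self.dt
                self.e_prev = e
                return self.kp * e + self.ki * self.i_term + self.kd * d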

  2. A Monte Carlo modeling alternative for the API Gamma Ray Calibration Facility.

    Galford, J E

    2017-04-01

    The gamma ray pit at the API Calibration Facility, located on the University of Houston campus, defines the API unit for natural gamma ray logs used throughout the petroleum logging industry. Future use of the facility is uncertain. An alternative method is proposed to preserve the gamma ray API unit definition as an industry standard by