WorldWideScience

Sample records for global calibration model

  1. Comparison of global optimization approaches for robust calibration of hydrologic model parameters

    Science.gov (United States)

    Jung, I. W.

    2015-12-01

    Robustness of the calibrated parameters of hydrologic models is necessary for reliable prediction of watershed behavior under varying climate conditions. This study investigated calibration performance as a function of the length of the calibration period, the objective function, the hydrologic model structure and the optimization method. To do this, the combination of three global optimization methods (i.e. SCE-UA, Micro-GA, and DREAM) and four hydrologic models (i.e. SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance under different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function showed better performance than using the correlation coefficient or percent bias. Calibration performance for calibration periods ranging from one to seven years was hard to generalize, because the four hydrologic models have different levels of complexity and different years have different information content in the hydrological observations. Acknowledgements: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
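
    The objective functions compared above are simple functionals of the observed and simulated series. A minimal sketch (with made-up discharge values) of two of them, Nash-Sutcliffe efficiency and percent bias:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means no better than
    predicting the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias: positive values indicate net underestimation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = [3.0, 5.0, 9.0, 4.0, 2.0]   # observed discharge (illustrative values)
sim = [2.8, 5.5, 8.1, 4.6, 2.3]   # simulated discharge
score = nse(obs, sim)
```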

  2. A global model for residential energy use: Uncertainty in calibration to regional data

    International Nuclear Information System (INIS)

    van Ruijven, Bas; van Vuuren, Detlef P.; de Vries, Bert; van der Sluijs, Jeroen P.

    2010-01-01

    Uncertainties in energy demand modelling allow for the development of different models, but also leave room for different calibrations of a single model. We apply an automated model calibration procedure to analyse the calibration uncertainty of residential-sector energy use modelling in the TIMER 2.0 global energy model. This model simulates energy use on the basis of changes in useful energy intensity, technology development (AEEI) and price responses (PIEEI). We find that different implementations of these factors each yield behavioural model results (i.e. plausible fits to the historical data). Model calibration uncertainty is identified as an influential source of variation in future projections, amounting to 30%-100% around the best estimate. Energy modellers should systematically account for this and communicate calibration uncertainty ranges. (author)

  3. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    Science.gov (United States)

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2017-01-01

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and for making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, strong parameter correlations, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid-based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate of the actual RZWQM2, and then we calibrated the surrogate model using a global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial that is fast to evaluate, it can be evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method achieves a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
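
    The surrogate idea can be sketched in a few lines. Here a toy one-parameter function stands in for RZWQM2, an ordinary least-squares polynomial stands in for the sparse-grid interpolant, and a dense grid search stands in for QPSO; all names and numbers are illustrative:

```python
import numpy as np

# Toy stand-in for the expensive simulator (the real study runs RZWQM2).
def expensive_model(x):
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

# Step 1: a small number of "expensive" runs (sparse-grid points in the paper;
# plain equispaced nodes here for simplicity).
xs = np.linspace(0.0, 1.0, 9)
ys = expensive_model(xs)

# Step 2: fit a cheap polynomial surrogate to those runs.
surrogate = np.poly1d(np.polyfit(xs, ys, deg=6))

# Step 3: optimize the surrogate with many cheap evaluations
# (the paper uses QPSO; a dense grid search plays that role here).
grid = np.linspace(0.0, 1.0, 10001)
x_best = grid[np.argmin(surrogate(grid))]
```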

  4. Calibration of a simple and a complex model of global marine biogeochemistry

    Science.gov (United States)

    Kriest, Iris

    2017-11-01

    The assessment of the ocean biota's role in climate change is often carried out with global biogeochemical ocean models that contain many components and involve a high level of parametric uncertainty. Because many data that relate to tracers included in a model are only sparsely observed, assessment of model skill is often restricted to tracers that can be easily measured and assembled. Examination of the models' fit to climatologies of inorganic tracers, after the models have been spun up to steady state, is a common but computationally expensive procedure to assess model performance and reliability. Using new tools that have become available for global model assessment and calibration in steady state, this paper examines two different model types - a complex seven-component model (MOPS) and a very simple four-component model (RetroMOPS) - for their fit to dissolved quantities. Before comparing the models, a subset of their biogeochemical parameters was optimised against annual-mean nutrients and oxygen. Both model types fit the observations almost equally well. The simple model contains only two nutrients, oxygen and dissolved organic phosphorus (DOP). Its misfit and large-scale tracer distributions are sensitive to the parameterisation of DOP production and decay. The spatio-temporal decoupling of nitrogen and oxygen, and the processes involved in their uptake and release, render oxygen and nitrate valuable tracers for model calibration. In addition, the non-conservative nature of these tracers (with respect to their upper boundary condition) introduces the global bias (fixed nitrogen and oxygen inventory) as a useful additional constraint on model parameters. Dissolved organic phosphorus at the surface behaves antagonistically to phosphate, suggesting that observations of this tracer - although difficult to measure - may be an important asset for model calibration.

  5. Building and calibrating a large-extent and high resolution coupled groundwater-land surface model using globally available data-sets

    Science.gov (United States)

    Sutanudjaja, E. H.; Van Beek, L. P.; de Jong, S. M.; van Geer, F.; Bierkens, M. F.

    2012-12-01

    The current generation of large-scale hydrological models generally lacks a groundwater model component simulating lateral groundwater flow. Large-scale groundwater models are rare due to a lack of hydro-geological data required for their parameterization and a lack of groundwater head data required for their calibration. In this study, we propose an approach to develop a large-extent fully-coupled land surface-groundwater model by using globally available datasets and calibrate it using a combination of discharge observations and remotely-sensed soil moisture data. The underlying objective is to devise a collection of methods that enables one to build and parameterize large-scale groundwater models in data-poor regions. The model used, PCR-GLOBWB-MOD, has a spatial resolution of 1 km x 1 km and operates on a daily basis. It consists of a single-layer MODFLOW groundwater model that is dynamically coupled to the PCR-GLOBWB land surface model. This fully-coupled model accommodates two-way interactions between surface water levels and groundwater head dynamics, as well as between upper soil moisture states and groundwater levels, including a capillary rise mechanism to sustain upper soil storage and thus to fulfill high evaporation demands (during dry conditions). As a test bed, we used the Rhine-Meuse basin, where more than 4000 groundwater head time series have been collected for validation purposes. The model was parameterized using globally available data-sets on surface elevation, drainage direction, land-cover, soil and lithology. Next, the model was calibrated using a brute force approach and massive parallel computing, i.e. by running the coupled groundwater-land surface model for more than 3000 different parameter sets. Here, we varied minimal soil moisture storage and saturated conductivities of the soil layers as well as aquifer transmissivities. Using different regularization strategies and calibration criteria we compared three calibration scenarios
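
    The brute-force strategy is conceptually simple: run the model for every parameter set and keep the best-scoring one. A toy sketch with a linear-reservoir stand-in for the coupled model (parameter values and forcing are synthetic):

```python
import itertools
import numpy as np

# Toy linear-reservoir "model" standing in for the coupled PCR-GLOBWB-MOD run;
# the real study evaluates >3000 parameter sets on a parallel cluster.
def simulate(k, s_max, forcing):
    storage, out = 0.0, []
    for p in forcing:
        storage = min(storage + p, s_max)   # soil store capped at s_max
        q = storage / k                     # linear-reservoir outflow
        storage -= q
        out.append(q)
    return np.array(out)

rng = np.random.default_rng(0)
forcing = rng.gamma(2.0, 2.0, size=60)
observed = simulate(k=5.0, s_max=20.0, forcing=forcing)  # synthetic "truth"

# Brute-force sweep over the parameter grid, scored by RMSE.
best = min(
    itertools.product(np.linspace(2, 10, 17), np.linspace(10, 30, 11)),
    key=lambda ks: np.sqrt(np.mean((simulate(*ks, forcing) - observed) ** 2)),
)
```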

  6. Calibration and simulation of Heston model

    Directory of Open Access Journals (Sweden)

    Mrázek Milan

    2017-05-01

    We calibrate the Heston stochastic volatility model to real market data using several optimization techniques. We compare both global and local optimizers for different weights, finding remarkable differences even for data (DAX options) from two consecutive days. We provide a novel calibration procedure that incorporates an approximation formula and significantly outperforms other existing calibration methods.
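
    The weighting question can be made concrete with a stripped-down sketch. A Black-Scholes pricer stands in for the (much longer) Heston semi-closed-form pricer, and a grid search stands in for the optimizers compared in the paper; the quotes are synthetic:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price; a stand-in for the Heston pricer
    actually calibrated in the paper."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def objective(sigma, quotes, weights, S=100.0, r=0.01):
    """Weighted squared pricing error; the choice of weights is exactly what
    the study shows can change the calibrated parameters."""
    return sum(w * (bs_call(S, K, T, r, sigma) - price) ** 2
               for (K, T, price), w in zip(quotes, weights))

# Synthetic "market" quotes generated at sigma = 0.2.
quotes = [(K, T, bs_call(100.0, K, T, 0.01, 0.2))
          for K in (90.0, 100.0, 110.0) for T in (0.5, 1.0)]
weights = [1.0] * len(quotes)

# Grid search over sigma (a real calibration would use a local or global
# optimizer such as those compared in the paper).
sigmas = [0.05 + 0.005 * i for i in range(80)]
sigma_hat = min(sigmas, key=lambda s: objective(s, quotes, weights))
```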

  7. Global Calibration of Multiple Cameras Based on Sphere Targets

    Directory of Open Access Journals (Sweden)

    Junhua Sun

    2016-01-01

    Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and flexible, especially for onsite multiple cameras without a common field of view.

  8. Another look at volume self-calibration: calibration and self-calibration within a pinhole model of Scheimpflug cameras

    International Nuclear Information System (INIS)

    Cornic, Philippe; Le Besnerais, Guy; Champagnat, Frédéric; Illoul, Cédric; Cheminet, Adam; Le Sant, Yves; Leclaire, Benjamin

    2016-01-01

    We address calibration and self-calibration of tomographic PIV experiments within a pinhole model of cameras. A complete and explicit pinhole model of a camera equipped with a two-tilt-angle Scheimpflug adapter is presented. It is then used in a calibration procedure based on a freely moving calibration plate. While the resulting calibrations are accurate enough for Tomo-PIV, we confirm, through a simple experiment, that they are not stable in time, and we illustrate how the pinhole framework can be used to provide a quantitative evaluation of geometrical drifts in the setup. We propose an original self-calibration method based on global optimization of the extrinsic parameters of the pinhole model. These methods are successfully applied to the tomographic PIV of an air jet experiment. An unexpected by-product of our work is the demonstration that volume self-calibration induces a change in the world frame coordinates. Provided the calibration drift is small, as generally observed in PIV, the bias on the estimated velocity field is negligible, but the absolute location cannot be accurately recovered using standard calibration data. (paper)
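
    The pinhole model underlying the method reduces, in the ideal distortion-free case, to projecting a world point X through x ~ K[R|t]X; the paper's contribution adds the Scheimpflug tilt angles on top of this. A minimal sketch with illustrative intrinsics:

```python
import numpy as np

# Ideal pinhole camera: intrinsics K, rotation R, translation t.
K = np.array([[800.0, 0.0, 320.0],      # fx, skew, cx
              [0.0, 800.0, 240.0],      # fy, cy
              [0.0, 0.0, 1.0]])
R = np.eye(3)                           # camera axes aligned with world axes
t = np.array([0.0, 0.0, 2.0])           # world origin 2 m in front of camera

def project(X):
    xc = R @ X + t                      # world -> camera coordinates
    u = K @ xc                          # camera -> homogeneous pixel coords
    return u[:2] / u[2]                 # perspective divide

center = project(np.array([0.0, 0.0, 0.0]))
```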

  9. A multimethod Global Sensitivity Analysis to aid the calibration of geomechanical models via time-lapse seismic data

    Science.gov (United States)

    Price, D. C.; Angus, D. A.; Garcia, A.; Fisher, Q. J.; Parsons, S.; Kato, J.

    2018-03-01

    Time-lapse seismic attributes are used extensively in the history matching of production simulator models. However, although proven to contain information regarding production-induced stress change, they are typically only loosely (i.e. qualitatively) used to calibrate geomechanical models. In this study we conduct a multimethod Global Sensitivity Analysis (GSA) to assess the feasibility of, and to aid, the quantitative calibration of geomechanical models via near-offset time-lapse seismic data, specifically the calibration of the mechanical properties of the overburden. Via the GSA, we analyse the near-offset overburden seismic traveltimes from over 4000 perturbations of a Finite Element (FE) geomechanical model of a typical High Pressure High Temperature (HPHT) reservoir in the North Sea. We find that, out of an initially large set of material properties, the near-offset overburden traveltimes are primarily affected by Young's modulus and the effective stress (i.e. Biot) coefficient. The unexpected significance of the Biot coefficient highlights the importance of modelling fluid flow and pore pressure outside of the reservoir. The FE model is complex and highly nonlinear, and multiple combinations of model parameters can yield equally plausible model realizations. Consequently, numerical calibration via a large number of random model perturbations is unfeasible. However, the significant differences in traveltime results suggest that more sophisticated calibration methods could potentially be feasible for finding numerous suitable solutions. The results of the time-varying GSA demonstrate how acquiring multiple vintages of time-lapse seismic data can be advantageous. However, they also suggest that significant overburden near-offset seismic time-shifts, useful for model calibration, may take up to three years after the start of production to manifest. Due to the nonlinearity of the model behaviour, similar uncertainty in the reservoir mechanical properties appears to influence overburden
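
    A full variance-based GSA is beyond a short sketch, but the core idea of ranking parameters by their effect on an output can be shown with one-at-a-time perturbations of a toy traveltime function (the function and nominal values below are invented for illustration):

```python
import numpy as np

def traveltime(params):
    """Toy stand-in for the FE model's near-offset overburden traveltime."""
    young, biot, density = params
    return 1.0 / np.sqrt(young / density) * (1.0 + 0.3 * biot)

base = np.array([10.0, 0.7, 2.0])       # nominal Young's modulus, Biot, density
names = ["young", "biot", "density"]

effects = {}
for i, name in enumerate(names):
    lo, hi = base.copy(), base.copy()
    lo[i] *= 0.9                        # perturb each parameter +/-10%
    hi[i] *= 1.1
    effects[name] = abs(traveltime(hi) - traveltime(lo))

ranking = sorted(effects, key=effects.get, reverse=True)
```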

  10. Vegetation root zone storage and rooting depth, derived from local calibration of a global hydrological model

    Science.gov (United States)

    van der Ent, R.; Van Beek, R.; Sutanudjaja, E.; Wang-Erlandsson, L.; Hessels, T.; Bastiaanssen, W.; Bierkens, M. F.

    2017-12-01

    The storage and dynamics of water in the root zone control many important hydrological processes such as saturation excess overland flow, interflow, recharge, capillary rise, soil evaporation and transpiration. These processes are parameterized in hydrological models or land-surface schemes, and their effect on runoff prediction can be large. Root zone parameters in global hydrological models are very uncertain, as they cannot be measured directly at the scale on which these models operate. In this paper we calibrate the global hydrological model PCR-GLOBWB using a state-of-the-art ensemble of evaporation fields derived by solving the energy balance for satellite observations. We focus our calibration on the root zone parameters of PCR-GLOBWB and derive spatial patterns of maximum root zone storage. We find these patterns to correspond well with previous research. The parameterization of our model allows for the conversion of maximum root zone storage to root zone depth, and we find that the resulting depths correspond quite well to point observations where available. We conclude that climate and soil type should be taken into account when regionalizing measured root depth for a certain vegetation type. We also find that using evaporation rather than discharge better allows for local adjustment of root zone parameters within a basin, and thus provides orthogonal data with which to diagnose and optimize hydrological models and land surface schemes.
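
    One common way to convert storage to depth (assumed here for illustration; the paper's own parameterization may differ) is to divide the maximum root zone storage by the plant-available water capacity, i.e. field capacity minus wilting point:

```python
# Assumed conversion (not taken from the paper): maximum root zone storage
# S_max [m] = rooting depth d [m] * (theta_fc - theta_wp).
def root_depth(s_max, theta_fc, theta_wp):
    return s_max / (theta_fc - theta_wp)

# e.g. 150 mm of storage in a loam with theta_fc = 0.30, theta_wp = 0.15
depth_m = root_depth(0.150, 0.30, 0.15)   # about 1.0 m
```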

  11. Towards a global network of gamma-ray detector calibration facilities

    Science.gov (United States)

    Tijs, Marco; Koomans, Ronald; Limburg, Han

    2016-09-01

    Gamma-ray logging tools are applied worldwide. At various locations, calibration facilities are used to calibrate these gamma-ray logging systems. Several attempts have been made to cross-correlate well-known calibration pits, but this cross-correlation does not include calibration facilities in Europe or private company calibration facilities. Our aim is to set up a framework that makes it possible to interlink all calibration facilities worldwide by using `tools of opportunity' - tools that have been calibrated in different calibration facilities, whether this usage was on a coordinated basis or by coincidence. To compare the measurements of different tools, it is important to understand the behaviour of the tools in the different calibration pits. Borehole properties, such as diameter, fluid, casing and probe diameter, strongly influence the outcome of gamma-ray borehole logging. Logs need to be properly calibrated and compensated for these borehole properties in order to obtain in-situ grades or to do cross-hole correlation. Some tool providers supply tool-specific correction curves for this purpose. Others rely on reference measurements against sources of known radionuclide concentration and geometry. In this article, we present an attempt to set up a framework for transferring `local' calibrations so that they can be applied `globally'. This framework includes corrections for any geometry and detector size to give absolute concentrations of radionuclides from borehole measurements. This model is used to compare measurements in the calibration pits of Grand Junction, located in the USA; Adelaide (previously known as AMDEL), located in Adelaide, Australia; and Stonehenge, located at Medusa Explorations BV in the Netherlands.
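
    The transfer of a local calibration can be sketched as a linear sensitivity derived in a pit of known grade, applied to borehole-corrected field count rates. All numbers and correction factors below are illustrative, not taken from any real tool sheet:

```python
# Derive a linear sensitivity (counts per ppm) in a calibration pit of known
# grade, then apply it to field logs after correcting for borehole conditions.
def pit_sensitivity(count_rate_cps, known_grade_ppm, background_cps=0.0):
    return (count_rate_cps - background_cps) / known_grade_ppm

def field_grade(count_rate_cps, sensitivity, borehole_correction=1.0,
                background_cps=0.0):
    # borehole_correction > 1 compensates for attenuation by fluid/casing
    # (illustrative; real corrections depend on tool and hole geometry)
    return borehole_correction * (count_rate_cps - background_cps) / sensitivity

sens = pit_sensitivity(520.0, known_grade_ppm=100.0, background_cps=20.0)
grade = field_grade(270.0, sens, borehole_correction=1.2, background_cps=20.0)
```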

  12. Understanding Global Systems Today—A Calibration of the World3-03 Model between 1995 and 2012

    Directory of Open Access Journals (Sweden)

    Roberto Pasqualino

    2015-07-01

    In 1972 the Limits to Growth report was published. It used the World3 model to better understand the dynamics of global systems and their relationship to finite resource availability, land use, and persistent pollution accumulation. The trends of resource depletion and degradation of physical systems which were identified by Limits to Growth have continued. Although World3 forecast scenarios are based on key measures and assumptions that cannot be easily assessed using available data (i.e., non-renewable resources, persistent pollution), the dynamics of the growth components of the model can be compared with publicly available global data trends. Based on Scenario 2 of the Limits to Growth study, we present a calibration of the updated World3-03 model using historical data from 1995 to 2012 to better understand the dynamics of today’s economic and resource system. Given that accurate data on physical limits do not currently exist, the dynamics of overshoot to global limits are not assessed. In this paper we offer a new interpretation of the parametrisation of World3-03, using these data to explore how its assumptions on global dynamics, environmental footprints and responses have changed over the past 40 years. The results show that human society has invested more to abate persistent pollution, to increase food productivity, and to build a more productive service sector.

  13. Calibration of the maximum carboxylation velocity (Vcmax) using data mining techniques and ecophysiological data from the Brazilian semiarid region, for use in Dynamic Global Vegetation Models

    Directory of Open Access Journals (Sweden)

    L. F. C. Rezende

    The semiarid region of northeastern Brazil, the Caatinga, is extremely important due to its biodiversity and endemism. Measurements of plant physiology are crucial to the calibration of Dynamic Global Vegetation Models (DGVMs) that are currently used to simulate the responses of vegetation in the face of global change. In field work carried out in an area of preserved Caatinga forest located in Petrolina, Pernambuco, measurements of carbon assimilation (in response to light and CO2) were performed on 11 individuals of Poincianella microphylla, a native species that is abundant in this region. These data were used to calibrate the maximum carboxylation velocity (Vcmax) used in the INLAND model. The calibration techniques used were Multiple Linear Regression (MLR) and data mining techniques such as the Classification And Regression Tree (CART) and K-MEANS. The results were compared to the UNCALIBRATED model. It was found that simulated Gross Primary Productivity (GPP) reached 72% of observed GPP when using the calibrated Vcmax values, whereas the UNCALIBRATED approach accounted for 42% of observed GPP. Thus, this work shows the benefits of calibrating DGVMs using field ecophysiological measurements, especially in areas where field data are scarce or non-existent, such as the Caatinga.
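
    Of the calibration techniques named above, multiple linear regression is the simplest to sketch. The predictor and Vcmax values below are synthetic, chosen only to show the least-squares fit:

```python
import numpy as np

# Synthetic predictors (e.g. leaf temperature, leaf nitrogen) and synthetic
# Vcmax responses; the real study fits field measurements.
X = np.array([[25.0, 1.8],
              [28.0, 2.1],
              [30.0, 2.4],
              [32.0, 2.0],
              [35.0, 2.6]])
vcmax = np.array([45.0, 52.0, 60.0, 55.0, 66.0])   # umol m-2 s-1 (synthetic)

A = np.column_stack([np.ones(len(X)), X])          # add intercept column
coef, *_ = np.linalg.lstsq(A, vcmax, rcond=None)   # least-squares MLR fit
predicted = A @ coef
```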

  14. Validation of A Global Hydrological Model

    Science.gov (United States)

    Doell, P.; Lehner, B.; Kaspar, F.; Vassolo, S.

    Freshwater availability has been recognized as a global issue, and its consistent quantification not only in individual river basins but also at the global scale is required to support the sustainable use of water. The Global Hydrology Model WGHM, which is a submodel of the global water use and availability model WaterGAP 2, computes surface runoff, groundwater recharge and river discharge at a spatial resolution of 0.5°. WGHM is based on the best global data sets currently available, including a newly developed drainage direction map and a data set of wetlands, lakes and reservoirs. It calculates both natural and actual discharge by simulating the reduction of river discharge by human water consumption (as computed by the water use submodel of WaterGAP 2). WGHM is calibrated against observed discharge at 724 gauging stations (representing about 50% of the global land area) by adjusting a parameter of the soil water balance. It not only computes the long-term average water resources but also water availability indicators that take into account the interannual and seasonal variability of runoff and discharge. The reliability of the model results is assessed by comparing observed and simulated discharges at the calibration stations and at selected other stations. We conclude that reliable results can be obtained for basins of more than 20,000 km². In particular, the 90% reliable monthly discharge is simulated well. However, there is the tendency that semi-arid and arid basins are modeled less satisfactorily than humid ones, which is partially due to neglecting river channel losses and evaporation of runoff from small ephemeral ponds in the model. Also, the hydrology of highly developed basins with large artificial storages, basin transfers and irrigation schemes cannot be simulated well. The seasonality of discharge in snow-dominated basins is overestimated by WGHM, and if the snow-dominated basin is uncalibrated, discharge is likely to be underestimated.
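
    The "90% reliable monthly discharge" is the flow equalled or exceeded in 90% of months, i.e. the 10th percentile of the monthly series:

```python
import numpy as np

# Synthetic monthly discharge series (m3/s), for illustration only.
monthly_q = np.array([120.0, 95.0, 80.0, 60.0, 45.0, 30.0,
                      25.0, 20.0, 35.0, 55.0, 85.0, 110.0])

# Flow equalled or exceeded 90% of the time = 10th percentile of the series.
q90 = np.percentile(monthly_q, 10)
```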

  15. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ahlers, C.; Liu, H.

    2000-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model'' REV00. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as models used by Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and to predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

  16. Forward Global Photometric Calibration of the Dark Energy Survey

    Science.gov (United States)

    Burke, D. L.; Rykoff, E. S.; Allam, S.; Annis, J.; Bechtol, K.; Bernstein, G. M.; Drlica-Wagner, A.; Finley, D. A.; Gruendl, R. A.; James, D. J.; Kent, S.; Kessler, R.; Kuhlmann, S.; Lasker, J.; Li, T. S.; Scolnic, D.; Smith, J.; Tucker, D. L.; Wester, W.; Yanny, B.; Abbott, T. M. C.; Abdalla, F. B.; Benoit-Lévy, A.; Bertin, E.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Cunha, C. E.; D’Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Doel, P.; Estrada, J.; García-Bellido, J.; Gruen, D.; Gutierrez, G.; Honscheid, K.; Kuehn, K.; Kuropatkin, N.; Maia, M. A. G.; March, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Miquel, R.; Plazas, A. A.; Sako, M.; Sanchez, E.; Scarpine, V.; Schindler, R.; Sevilla-Noarbe, I.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Walker, A. R.; DES Collaboration

    2018-01-01

    Many scientific goals for the Dark Energy Survey (DES) require the calibration of optical/NIR broadband b = grizY photometry that is stable in time and uniform over the celestial sky to one percent or better. It is also necessary to limit to similar accuracy systematic uncertainty in the calibrated broadband magnitudes due to uncertainty in the spectrum of the source. Here we present a “Forward Global Calibration Method (FGCM)” for photometric calibration of the DES, and we present results of its application to the first three years of the survey (Y3A1). The FGCM combines data taken with auxiliary instrumentation at the observatory with data from the broadband survey imaging itself and models of the instrument and atmosphere to estimate the spatial and time dependences of the passbands of individual DES survey exposures. “Standard” passbands that are typical of the passbands encountered during the survey are chosen. The passband of any individual observation is combined with an estimate of the source spectral shape to yield a magnitude m_b^std in the standard system. This “chromatic correction” to the standard system is necessary to achieve subpercent calibrations and, in particular, to resolve the ambiguity between the broadband brightness of a source and the shape of its SED. The FGCM achieves a reproducible and stable photometric calibration of standard magnitudes m_b^std of stellar sources over the multiyear Y3A1 data sample with residual random calibration errors of σ = 6-7 mmag per exposure. The accuracy of the calibration is uniform across the 5000 deg² DES footprint to within σ = 7 mmag. The systematic uncertainties of magnitudes in the standard system due to the spectra of sources are less than 5 mmag for main-sequence stars with 0.5 < g-i < 3.0.
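
    The chromatic machinery of the FGCM builds on the elementary zero-point relation m_std = -2.5 log10(flux) + ZP; a minimal sketch (with invented counts and magnitudes) of that baseline step:

```python
import math

def instrumental_mag(flux_counts):
    """Instrumental magnitude from measured counts."""
    return -2.5 * math.log10(flux_counts)

def zero_point(flux_counts, m_std_reference):
    """Solve the zero point ZP from one standard star of known magnitude."""
    return m_std_reference - instrumental_mag(flux_counts)

# A standard star of magnitude 15.0 yields 10000 counts -> ZP = 25.0.
zp = zero_point(10000.0, m_std_reference=15.0)      # -> 25.0
# Calibrate another star's counts onto the standard system.
m = instrumental_mag(2511.886) + zp
```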

  17. Forward Global Photometric Calibration of the Dark Energy Survey

    Energy Technology Data Exchange (ETDEWEB)

    Burke, D. L.; Rykoff, E. S.; Allam, S.; Annis, J.; Bechtol, K.; Bernstein, G. M.; Drlica-Wagner, A.; Finley, D. A.; Gruendl, R. A.; James, D. J.; Kent, S.; Kessler, R.; Kuhlmann, S.; Lasker, J.; Li, T. S.; Scolnic, D.; Smith, J.; Tucker, D. L.; Wester, W.; Yanny, B.; Abbott, T. M. C.; Abdalla, F. B.; Benoit-Lévy, A.; Bertin, E.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Cunha, C. E.; D’Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Doel, P.; Estrada, J.; García-Bellido, J.; Gruen, D.; Gutierrez, G.; Honscheid, K.; Kuehn, K.; Kuropatkin, N.; Maia, M. A. G.; March, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Miquel, R.; Plazas, A. A.; Sako, M.; Sanchez, E.; Scarpine, V.; Schindler, R.; Sevilla-Noarbe, I.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Walker, A. R.

    2017-12-28

    Many scientific goals for the Dark Energy Survey (DES) require calibration of optical/NIR broadband $b = grizY$ photometry that is stable in time and uniform over the celestial sky to one percent or better. It is also necessary to limit to similar accuracy systematic uncertainty in the calibrated broadband magnitudes due to uncertainty in the spectrum of the source. Here we present a "Forward Global Calibration Method (FGCM)" for photometric calibration of the DES, and we present results of its application to the first three years of the survey (Y3A1). The FGCM combines data taken with auxiliary instrumentation at the observatory with data from the broad-band survey imaging itself and models of the instrument and atmosphere to estimate the spatial- and time-dependence of the passbands of individual DES survey exposures. "Standard" passbands are chosen that are typical of the passbands encountered during the survey. The passband of any individual observation is combined with an estimate of the source spectral shape to yield a magnitude $m_b^{\mathrm{std}}$ in the standard system. This "chromatic correction" to the standard system is necessary to achieve sub-percent calibrations. The FGCM achieves reproducible and stable photometric calibration of standard magnitudes $m_b^{\mathrm{std}}$ of stellar sources over the multi-year Y3A1 data sample with residual random calibration errors of $\sigma=5-6\,\mathrm{mmag}$ per exposure. The accuracy of the calibration is uniform across the $5000\,\mathrm{deg}^2$ DES footprint to within $\sigma=7\,\mathrm{mmag}$. The systematic uncertainties of magnitudes in the standard system due to the spectra of sources are less than $5\,\mathrm{mmag}$ for main sequence stars with $0.5 < g-i < 3.0$.

  18. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ahlers, C.F.; Liu, H.H.

    2001-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M and O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as models used by Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and to predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

  19. Regional Calibration of SCS-CN L-THIA Model: Application for Ungauged Basins

    Directory of Open Access Journals (Sweden)

    Ji-Hong Jeon

    2014-05-01

    Estimating surface runoff for ungauged watersheds is an important issue. The Soil Conservation Service Curve Number (SCS-CN) method, developed from long-term experimental data, is widely used to estimate surface runoff from gauged or ungauged watersheds. Many modelers have used the documented SCS-CN parameters without calibration, sometimes resulting in significant errors in estimated surface runoff. Several methods for regionalization of SCS-CN parameters were evaluated: (1) average; (2) land use area weighted average; (3) hydrologic soil group area weighted average; (4) combined land use and hydrologic soil group area weighted average; (5) spatial nearest neighbor; (6) inverse distance weighted average; and (7) global calibration. Model performance for each method was evaluated by application to 14 watersheds located in Indiana. Eight watersheds were used for calibration and six watersheds for validation. In the validation results, the spatial nearest neighbor method provided the highest average Nash-Sutcliffe (NS) value, 0.58, for the six watersheds, but it also included the lowest individual NS value, and the variance of its NS values was the highest. The global calibration method provided the second-highest average NS value, 0.56, with low variation of NS values. Although the spatial nearest neighbor method provided the highest average NS value, it was not statistically different from the other methods. The global calibration method, however, was significantly different from all other methods except the spatial nearest neighbor method. Therefore, we conclude that the global calibration method is appropriate for regionalizing SCS-CN parameters for ungauged watersheds.
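
    The SCS-CN runoff relation that these regionalization methods parameterize is short enough to state directly (US customary units, with the standard initial abstraction Ia = 0.2S):

```python
# SCS-CN direct runoff (inches): S = 1000/CN - 10, Ia = 0.2 S,
# Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0.
def scs_cn_runoff(p_in, cn):
    s = 1000.0 / cn - 10.0              # potential maximum retention
    ia = 0.2 * s                        # initial abstraction
    if p_in <= ia:
        return 0.0
    return (p_in - ia) ** 2 / (p_in - ia + s)

q = scs_cn_runoff(4.0, 75.0)            # ~1.67 in of runoff from a 4 in storm
```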

  20. Using genetic algorithms to calibrate a water quality model.

    Science.gov (United States)

    Liu, Shuming; Butler, David; Brazier, Richard; Heathwaite, Louise; Khu, Soon-Thiam

    2007-03-15

    With the increasing concern over the impact of diffuse pollution on water bodies, many diffuse pollution models have been developed in the last two decades. A common obstacle in using such models is how to determine the values of the model parameters. This is especially true when a model has a large number of parameters, which makes a full-range calibration expensive in terms of computing time. Compared with conventional optimisation approaches, soft computing techniques often have a faster convergence speed and are more efficient at global optimum searches. This paper presents an attempt to calibrate a diffuse pollution model using a genetic algorithm (GA). Designed to simulate the export of phosphorus from diffuse sources (agricultural land) and point sources (human), the Phosphorus Indicators Tool (PIT) version 1.1, on which this paper is based, consists of 78 parameters. Previous studies have indicated the difficulty of full-range model calibration due to the number of parameters involved. In this paper, a GA was employed to carry out a model calibration in which all parameters were involved. A sensitivity analysis was also performed to investigate the impact of operators in the GA on its effectiveness in optimum searching. The calibration yielded satisfactory results and required reasonable computing time. The application of the PIT model to the Windrush catchment with optimum parameter values was demonstrated. The annual P loss was predicted as 4.4 kg P/ha/yr, which agreed well with the observed value.
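The PIT model itself is not reproduced here, but the calibration loop the authors describe (selection, crossover and mutation over a parameter vector, scored against observations) can be sketched generically. Everything below (`calibrate_ga`, the SSE objective, the operator rates) is an illustrative stand-in, not the paper's implementation:

```python
import random

def calibrate_ga(model, observed, bounds, pop_size=40, generations=60,
                 crossover_rate=0.8, mutation_rate=0.1):
    """Minimal real-coded GA minimising sum-of-squared errors."""
    def sse(params):
        return sum((s - o) ** 2 for s, o in zip(model(params), observed))

    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=sse)
        parents = ranked[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            if random.random() < crossover_rate:   # arithmetic crossover
                w = random.random()
                child = [w * x + (1.0 - w) * y for x, y in zip(a, b)]
            else:
                child = a[:]
            for i, (lo, hi) in enumerate(bounds):  # uniform-reset mutation
                if random.random() < mutation_rate:
                    child[i] = random.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return min(pop, key=sse)
```

For a 78-parameter model one would raise the population size and generation count considerably; the paper's sensitivity analysis of GA operators addresses exactly these choices.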

  1. A global calibration method for multiple vision sensors based on multiple targets

    International Nuclear Information System (INIS)

    Liu, Zhen; Zhang, Guangjun; Wei, Zhenzhong; Sun, Junhua

    2011-01-01

    The global calibration of multiple vision sensors (MVS) has been widely studied in the last two decades. In this paper, we present a global calibration method for MVS with non-overlapping fields of view (FOVs) using multiple targets (MT). The MT is constructed by fixing several targets, called sub-targets, together. The mutual coordinate transformations between sub-targets need not be known. The main procedure of the proposed method is as follows: one vision sensor is selected from the MVS to establish the global coordinate frame (GCF). The MT is placed in front of the vision sensors several times (at least four). Using the constraint that the relative positions of all sub-targets are invariant, the transformation matrix from the coordinate frame of each vision sensor to the GCF can be solved. Both synthetic and real experiments were carried out with good results. The proposed method has been applied to several real measurement systems and shown to be both flexible and accurate. It can serve as an attractive alternative to existing global calibration methods.
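Once the sub-target poses and the fixed mutual transform between sub-targets have been recovered, the chaining that brings a non-overlapping camera into the global coordinate frame is plain matrix composition. A simplified numeric sketch (planar rotations only for brevity; all poses are made-up values, and the B-to-A transform is taken as already solved):

```python
import numpy as np

def pose(angle_deg, t):
    """4x4 homogeneous transform: rotation about z (degrees) plus translation."""
    a = np.deg2rad(angle_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:3, 3] = t
    return T

# Hypothetical observed poses: T_A1 maps target-A points into camera 1 (the
# sensor chosen to carry the global coordinate frame); T_B2 maps target-B
# points into camera 2.
T_A1 = pose(30.0, [0.1, 0.0, 2.0])
T_B2 = pose(-10.0, [0.0, 0.2, 1.5])
# T_AB: the fixed target-B-to-target-A transform that the multi-placement
# procedure recovers; here simply assumed known.
T_AB = pose(5.0, [1.0, 0.0, 0.0])

# Chain: camera-2 coordinates -> target B -> target A -> camera 1 (GCF)
T_c2_to_gcf = T_A1 @ T_AB @ np.linalg.inv(T_B2)

p_cam2 = np.array([0.3, -0.1, 1.0, 1.0])  # homogeneous point seen by camera 2
p_gcf = T_c2_to_gcf @ p_cam2
```

The same composition, applied per camera, converts every sensor frame to the GCF.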

  2. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ghezzehej, T.

    2004-01-01

    The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency.

  3. Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin

    Science.gov (United States)

    Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.

    2006-01-01

    The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a stepwise, multiple-objective, automated procedure for hydrologic model calibration. The procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. It uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph, are simulated consistently with measured values.
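The stepwise idea, calibrating one group of parameters against one intermediate output, freezing it, then moving to the next, can be sketched as follows. Here SciPy's `differential_evolution` stands in for the Shuffled Complex Evolution algorithm, and `model`, `obs_by_step` and `param_groups` are hypothetical names:

```python
import numpy as np
from scipy.optimize import differential_evolution

def stepwise_calibrate(model, obs_by_step, param_groups, bounds):
    """Calibrate parameter groups one step at a time, freezing earlier groups.

    model(params) -> dict of simulated series, keyed like obs_by_step.
    param_groups: ordered list of (step_name, [parameter indices]).
    """
    params = np.array([(lo + hi) / 2.0 for lo, hi in bounds])  # initial guess
    for step, idx in param_groups:
        obs = np.asarray(obs_by_step[step], float)

        def rmse(sub, step=step, idx=idx, obs=obs):
            trial = params.copy()
            trial[idx] = sub
            sim = np.asarray(model(trial)[step], float)
            return float(np.sqrt(np.mean((sim - obs) ** 2)))

        result = differential_evolution(rmse, [bounds[i] for i in idx],
                                        seed=1, tol=1e-8, maxiter=200)
        params[idx] = result.x            # freeze this group and move on
    return params
```

Each step only searches the parameters relevant to its objective, which keeps every search low-dimensional.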

  4. Including sugar cane in the agro-ecosystem model ORCHIDEE-STICS: calibration and validation

    Science.gov (United States)

    Valade, A.; Vuichard, N.; Ciais, P.; Viovy, N.

    2011-12-01

    Sugarcane is currently the most efficient bioenergy crop with regards to the energy produced per hectare. With approximately half the global bioethanol production in 2005, and a devoted land area expected to expand globally in the years to come, sugar cane is at the heart of the biofuel debate. Dynamic global vegetation models coupled with agronomical models are powerful and novel tools to tackle many of the environmental issues related to biofuels if they are carefully calibrated and validated against field observations. Here we adapt the agro-terrestrial model ORCHIDEE-STICS for sugar cane simulations. Observation data of LAI are used to evaluate the sensitivity of the model to parameters of nitrogen absorption and phenology, which are calibrated in a systematic way for six sites in Australia and La Reunion. We find that the optimal set of parameters is highly dependent on the sites' characteristics and that the model can reproduce satisfactorily the evolution of LAI. This careful calibration of ORCHIDEE-STICS for sugar cane biomass production for different locations and technical itineraries provides a strong basis for further analysis of the impacts of bioenergy-related land use change on carbon cycle budgets. As a next step, a sensitivity analysis is carried out to estimate the uncertainty of the model in biomass and carbon flux simulation due to its parameterization.

  5. On Inertial Body Tracking in the Presence of Model Calibration Errors.

    Science.gov (United States)

    Miezal, Markus; Taetz, Bertram; Bleser, Gabriele

    2016-07-22

    In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments-the IMU-to-segment calibrations, subsequently called I2S calibrations-to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and

  6. Observation models in radiocarbon calibration

    International Nuclear Information System (INIS)

    Jones, M.D.; Nicholls, G.K.

    2001-01-01

    The observation model underlying any calibration process dictates the precise mathematical details of the calibration calculations. Accordingly, it is important that an appropriate observation model is used. Here this is illustrated with reference to the use of reservoir offsets, where the standard calibration approach is based on a different model from the one that practitioners clearly believe is being applied. This sort of error can give rise to significantly erroneous calibration results. (author)

  7. Multiple-Objective Stepwise Calibration Using Luca

    Science.gov (United States)

    Hay, Lauren E.; Umemoto, Makiko

    2007-01-01

    This report documents Luca (Let us calibrate), a multiple-objective, stepwise, automated procedure for hydrologic model calibration and the associated graphical user interface (GUI). Luca is a wizard-style user-friendly GUI that provides an easy systematic way of building and executing a calibration procedure. The calibration procedure uses the Shuffled Complex Evolution global search algorithm to calibrate any model compiled with the U.S. Geological Survey's Modular Modeling System. This process assures that intermediate and final states of the model are simulated consistently with measured values.

  8. Calibration Plans for the Global Precipitation Measurement (GPM)

    Science.gov (United States)

    Bidwell, S. W.; Flaming, G. M.; Adams, W. J.; Everett, D. F.; Mendelsohn, C. R.; Smith, E. A.; Turk, J.

    2002-01-01

    The Global Precipitation Measurement (GPM) is an international effort led by the National Aeronautics and Space Administration (NASA) of the U.S.A. and the National Space Development Agency of Japan (NASDA) for the purpose of improving research into the global water and energy cycle. GPM will improve climate, weather, and hydrological forecasts through more frequent and more accurate measurement of precipitation world-wide. Comprised of U.S. domestic and international partners, GPM will incorporate and assimilate data streams from many spacecraft with varied orbital characteristics and instrument capabilities. Two of the satellites will be provided directly by GPM, the core satellite and a constellation member. The core satellite, at the heart of GPM, is scheduled for launch in November 2007. The core will carry a conical scanning microwave radiometer, the GPM Microwave Imager (GMI), and a two-frequency cross-track-scanning radar, the Dual-frequency Precipitation Radar (DPR). The passive microwave channels and the two radar frequencies of the core are carefully chosen for investigating the varying character of precipitation over ocean and land, and from the tropics to the high latitudes. The DPR will enable microphysical characterization and three-dimensional profiling of precipitation. The GPM-provided constellation spacecraft will carry a GMI radiometer identical to that on the core spacecraft. This paper presents calibration plans for the GPM, including on-board instrument calibration, external calibration methods, and the role of ground validation. Particular emphasis is on plans for inter-satellite calibration of the GPM constellation. With its unique instrument capabilities, the core spacecraft will serve as a calibration transfer standard for the GPM constellation. In particular, the Dual-frequency Precipitation Radar aboard the core will check the accuracy of retrievals from the GMI radiometer and will enable improvement of the radiometer retrievals.

  9. Gradient-based model calibration with proxy-model assistance

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
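The division of labour described above, cheap proxy runs to fill the Jacobian and expensive model runs only to test parameter upgrades, can be illustrated with a damped Gauss-Newton sketch. This is not PEST's implementation; every name below is ours:

```python
import numpy as np

def proxy_assisted_gauss_newton(full_model, proxy_model, obs, p0,
                                n_iter=10, eps=1e-6, lam=1e-3):
    """Damped Gauss-Newton where the Jacobian comes from the cheap proxy and
    only candidate upgrades are scored with the expensive full model."""
    p = np.asarray(p0, float)
    best_phi = float(np.sum((full_model(p) - obs) ** 2))
    for _ in range(n_iter):
        r = proxy_model(p) - obs
        J = np.empty((r.size, p.size))
        for j in range(p.size):           # finite differences on the proxy only
            dp = p.copy()
            dp[j] += eps
            J[:, j] = (proxy_model(dp) - proxy_model(p)) / eps
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        candidate = p + step
        phi = float(np.sum((full_model(candidate) - obs) ** 2))  # full-model test
        if phi < best_phi:                # accept the upgrade
            p, best_phi = candidate, phi
        else:                             # reject it and increase damping
            lam *= 10.0
    return p, best_phi
```

In the degenerate case where the proxy equals the full model this reduces to ordinary Levenberg-damped Gauss-Newton; the savings appear when `proxy_model` is orders of magnitude cheaper per run.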

  10. Presentation, calibration and validation of the low-order, DCESS Earth System Model

    DEFF Research Database (Denmark)

    Shaffer, G.; Olsen, S. Malskaer; Pedersen, Jens Olaf Pepke

    2008-01-01

    A new, low-order Earth system model is described, calibrated and tested against Earth system data. The model features modules for the atmosphere, ocean, ocean sediment, land biosphere and lithosphere and has been designed to simulate global change on time scales of years to millions of years...... remineralization. The lithosphere module considers outgassing, weathering of carbonate and silicate rocks and weathering of rocks containing old organic carbon and phosphorus. Weathering rates are related to mean atmospheric temperatures. A pre-industrial, steady state calibration to Earth system data is carried...

  11. Using genetic algorithm and TOPSIS for Xinanjiang model calibration with a single procedure

    Science.gov (United States)

    Cheng, Chun-Tian; Zhao, Ming-Yan; Chau, K. W.; Wu, Xin-Yu

    2006-01-01

    Genetic Algorithm (GA) is globally oriented in searching and thus useful in optimizing multiobjective problems, especially where the objective functions are ill-defined. Conceptual rainfall-runoff models that aim at predicting streamflow from the knowledge of precipitation over a catchment have become a basic tool for flood forecasting. The parameter calibration of a conceptual model usually involves multiple criteria for judging performance against observed data. However, it is often difficult to derive all objective functions for the parameter calibration problem of a conceptual model. Thus, a new method for the multiple-criteria parameter calibration problem, which combines GA with TOPSIS (technique for order performance by similarity to ideal solution) for the Xinanjiang model, is presented. This study is an immediate further development of the authors' previous research (Cheng, C.T., Ou, C.P., Chau, K.W., 2002. Combining a fuzzy optimal model with a genetic algorithm to solve multi-objective rainfall-runoff model calibration. Journal of Hydrology, 268, 72-86), whose main drawbacks were that it split the whole procedure into two parts and made it difficult to grasp the overall behavior of the model during the calibration procedure. The current method integrates the two parts of Xinanjiang rainfall-runoff model calibration, simplifying the procedures of model calibration and validation and revealing the intrinsic behavior of the observed data more clearly. Comparison with the two-step procedure shows that the current methodology gives similar results to the previous method and is equally feasible and robust, but simpler and easier to apply in practice.
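TOPSIS itself is a compact, standard procedure: normalize the criteria matrix, weight it, and score each alternative by its relative closeness to the ideal solution. A sketch (criteria such as NS and RMSE would form the columns; variable names are ours):

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) by closeness to the
    ideal solution. benefit[j] is True if criterion j is to be maximised."""
    X = np.asarray(decision_matrix, float)
    w = np.asarray(weights, float) / np.sum(weights)
    V = w * X / np.sqrt((X ** 2).sum(axis=0))        # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))  # distance to ideal
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))   # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                   # closeness in [0, 1]
```

With `benefit=[True, False]` for, say, NS (maximize) and absolute bias (minimize), the returned closeness values rank candidate parameter sets directly.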

  12. Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

    Energy Technology Data Exchange (ETDEWEB)

    Soltanzadeh, I. [Tehran Univ. (Iran, Islamic Republic of). Inst. of Geophysics; Azadi, M.; Vakili, G.A. [Atmospheric Science and Meteorological Research Center (ASMERC), Teheran (Iran, Islamic Republic of)

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited-area models (WRF, MM5 and HRM), with WRF used in five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS); for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast. (orig.)
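BMA's calibrated forecast is a weighted mixture of member densities centred on bias-corrected member forecasts. The sketch below shows only the predictive side; in practice the weights and spread are estimated by EM over the training window, which is not reproduced here:

```python
import math

def bma_predictive(forecasts, weights, sigma, a=None, b=None):
    """BMA predictive mean and density for, e.g., a 2-m temperature forecast.

    forecasts: raw member forecasts f_k; weights w_k (summing to 1); sigma is
    a common member spread. a, b are optional per-member bias-correction
    coefficients (in practice fitted over the training window)."""
    if a is None:
        a = [0.0] * len(forecasts)
    if b is None:
        b = [1.0] * len(forecasts)
    corrected = [ai + bi * f for ai, bi, f in zip(a, b, forecasts)]
    mean = sum(w * c for w, c in zip(weights, corrected))

    def pdf(x):
        return sum(w * math.exp(-0.5 * ((x - c) / sigma) ** 2)
                   / (sigma * math.sqrt(2.0 * math.pi))
                   for w, c in zip(weights, corrected))

    return mean, pdf
```

The mixture pdf, not just its mean, is what rank histograms and attribute diagrams assess.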

  13. Using Bayesian Model Averaging (BMA to calibrate probabilistic surface temperature forecasts over Iran

    Directory of Open Access Journals (Sweden)

    I. Soltanzadeh

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited-area models (WRF, MM5 and HRM), with WRF used in five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS); for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.

  14. Calibration of communication skills items in OSCE checklists according to the MAAS-Global.

    Science.gov (United States)

    Setyonugroho, Winny; Kropmans, Thomas; Kennedy, Kieran M; Stewart, Brian; van Dalen, Jan

    2016-01-01

    Communication skills (CS) are commonly assessed using 'communication items' in Objective Structured Clinical Examination (OSCE) station checklists. Our aim is to calibrate the communication component of OSCE station checklists according to the MAAS-Global, which is a valid and reliable standard for assessing CS in undergraduate medical education. Three raters independently compared 280 checklists from 4 disciplines contributing to the undergraduate year 4 OSCE against the 17 items of the MAAS-Global standard. G-theory was used to analyze the reliability of this calibration procedure. G-Kappa was 0.8; for two raters G-Kappa was 0.72, and it fell to 0.57 for one rater. 46% of the checklist items corresponded to section three of the MAAS-Global (i.e. medical content of the consultation), whilst 12% corresponded to section two (i.e. general CS), and 8.2% to section one (i.e. CS for each separate phase of the consultation). 34% of the items were not considered to be CS. A G-Kappa of 0.8 confirms a reliable and valid procedure for calibrating OSCE CS checklist items using the MAAS-Global. We strongly suggest that such a procedure be more widely employed to arrive at a stable (valid and reliable) judgment of the communication component in existing checklists for medical students' communication behaviours. It is possible to measure the 'true' calibre of CS in OSCE stations. Students' results are thereby comparable between and across stations, students and institutions. A reliable calibration procedure requires only two raters.

  15. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    Science.gov (United States)

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

    Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or narrow overlapping FOVs in many applications, which pose a huge challenge to global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras’ coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
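The optimization step, minimizing feature-point residuals over the camera-to-camera transform with Levenberg-Marquardt, can be sketched with SciPy. For brevity the residuals here are 3-D point-alignment errors rather than full image-plane reprojection errors, and the Euler-angle parametrization is our simplification:

```python
import numpy as np
from scipy.optimize import least_squares

def rot(rx, ry, rz):
    """Rotation matrix from XYZ Euler angles (radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def estimate_transform(pts_cam2, pts_cam1):
    """Levenberg-Marquardt fit of the rigid transform taking camera-2
    coordinates of matched feature points into the camera-1 frame."""
    def residuals(x):
        R, t = rot(*x[:3]), x[3:]
        return ((pts_cam2 @ R.T + t) - pts_cam1).ravel()
    sol = least_squares(residuals, x0=np.zeros(6), method='lm')
    return rot(*sol.x[:3]), sol.x[3:]
```

Given matched target feature points expressed in both camera frames, the fit recovers the rotation and translation that chain camera 2 into the reference frame.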

  16. Error-in-variables models in calibration

    Science.gov (United States)

    Lira, I.; Grientschnig, D.

    2017-12-01

    In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
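For the frequentist side of EIV regression, SciPy ships an orthogonal distance regression implementation (`scipy.odr`) that treats both the stimulus and the response as noisy. A minimal calibration-line example (the data are simulated; the stated uncertainties `sx`, `sy` play the role of known standard uncertainties):

```python
import numpy as np
from scipy import odr

# True calibration line: response = b0 + b1 * stimulus
rng = np.random.default_rng(42)
stimulus = np.linspace(0.0, 10.0, 25)
b0_true, b1_true = 1.0, 2.0

# In the EIV setting BOTH the stimulus and the response are observed with error
x_obs = stimulus + rng.normal(0.0, 0.05, stimulus.size)
y_obs = b0_true + b1_true * stimulus + rng.normal(0.0, 0.05, stimulus.size)

linear = odr.Model(lambda beta, x: beta[0] + beta[1] * x)
data = odr.RealData(x_obs, y_obs, sx=0.05, sy=0.05)  # stated std. uncertainties
fit = odr.ODR(data, linear, beta0=[0.0, 1.0]).run()
b0_hat, b1_hat = fit.beta
```

Ordinary least squares applied to the same data would tend to attenuate the slope, since it wrongly attributes all scatter to the response.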

  17. Tropospheric and ionospheric media calibrations based on global navigation satellite system observation data

    Science.gov (United States)

    Feltens, Joachim; Bellei, Gabriele; Springer, Tim; Kints, Mark V.; Zandbergen, René; Budnik, Frank; Schönemann, Erik

    2018-06-01

    Context: Calibration of radiometric tracking data for effects in the Earth atmosphere is a crucial element in the field of deep-space orbit determination (OD). The troposphere can induce propagation delays in the order of several meters, the ionosphere up to the meter level for X-band signals and up to tens of meters, in extreme cases, for L-band ones. The use of media calibrations based on Global Navigation Satellite Systems (GNSS) measurement data can improve the accuracy of the radiometric observations modelling and, as a consequence, the quality of orbit determination solutions. Aims: ESOC Flight Dynamics employs ranging, Doppler and delta-DOR (Delta-Differential One-Way Ranging) data for the orbit determination of interplanetary spacecraft. Currently, the media calibrations for troposphere and ionosphere are either computed based on empirical models or, under mission specific agreements, provided by external parties such as the Jet Propulsion Laboratory (JPL) in Pasadena, California. In order to become independent from external models and sources, decision fell to establish a new in-house internal service to create these media calibrations based on GNSS measurements recorded at the ESA tracking sites and processed in-house by the ESOC Navigation Support Office with comparable accuracy and quality. Methods: For its concept, the new service was designed to be as much as possible depending on own data and resources and as less as possible depending on external models and data. Dedicated robust and simple algorithms, well suited for operational use, were worked out for that task. This paper describes the approach built up to realize this new in-house internal media calibration service. Results: Test results collected during three months of running the new media calibrations in quasi-operational mode indicate that GNSS-based tropospheric corrections can remove systematic signatures from the Doppler observations and biases from the range ones. 
For the ionosphere, a
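The paper's GNSS processing chain is not reproduced here, but the size of the tropospheric term it calibrates ("several meters") can be illustrated with the widely used Saastamoinen zenith hydrostatic delay model (the function name is ours; pressure in hPa):

```python
import math

def zenith_hydrostatic_delay(pressure_hpa, lat_deg, height_m):
    """Saastamoinen zenith hydrostatic (dry) tropospheric delay, in metres."""
    phi = math.radians(lat_deg)
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * math.cos(2.0 * phi) - 0.00028 * height_m / 1000.0)
```

At sea level and standard pressure the zenith hydrostatic delay is about 2.3 m; slant delays at low elevation angles are several times larger, which is why an uncalibrated troposphere dominates the Doppler and range error budget.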

  18. Calibration of a surface mass balance model for global-scale applications

    NARCIS (Netherlands)

    Giesen, R. H.; Oerlemans, J.

    2012-01-01

    Global applications of surface mass balance models have large uncertainties, as a result of poor climate input data and limited availability of mass balance measurements. This study addresses several possible consequences of these limitations for the modelled mass balance. This is done by applying a

  19. Model Calibration in Option Pricing

    Directory of Open Access Journals (Sweden)

    Andre Loerx

    2012-04-01

    We consider calibration problems for models of pricing derivatives which occur in mathematical finance. We discuss various approaches such as using stochastic differential equations or partial differential equations for the modeling process. We discuss developments in the past literature and give an outlook into modern approaches of modelling. Furthermore, we address important numerical issues in the valuation of options, as well as the calibration of these models. This leads to interesting problems in optimization, where, e.g., the use of adjoint equations or the choice of the parametrization for the model parameters play an important role.
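A minimal instance of such a calibration problem: recover a Black-Scholes volatility from quoted call prices by minimizing a least-squares misfit. Real calibrations involve richer models and regularization, as the survey discusses; the bisection-on-the-gradient scheme below works here because the one-parameter misfit has a single interior minimum:

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def calibrate_sigma(S, T, r, strikes, prices, lo=1e-4, hi=3.0, tol=1e-10):
    """Fit one volatility to quoted call prices by bisecting the derivative
    of the least-squares misfit (single interior minimum in this setting)."""
    def misfit(sig):
        return sum((bs_call(S, K, T, r, sig) - p) ** 2
                   for K, p in zip(strikes, prices))

    def slope(sig, h=1e-6):          # central finite-difference derivative
        return (misfit(sig + h) - misfit(sig - h)) / (2.0 * h)

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if slope(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With prices generated from a single volatility, the fitted value reproduces it; with market quotes, the residual misfit measures how far the one-parameter model is from the data.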

  20. Model Calibration in Watershed Hydrology

    Science.gov (United States)

    Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh

    2009-01-01

    Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.

  1. More efficient evolutionary strategies for model calibration with watershed model for demonstration

    Science.gov (United States)

    Baggett, J. S.; Skahill, B. E.

    2008-12-01

    Evolutionary strategies allow automatic calibration of more complex models than traditional gradient-based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but which when combined have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but that, when selected near a smooth local minimum, can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMAES; Hansen, 2006), that their combined effect is greater than the sum of their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades, at worst, to an ordinary evolutionary strategy if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMAES requires approximately 10%-25% of the model runs of ordinary CMAES. A preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems. Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.), Towards a new evolutionary computation. Advances in estimation of

  2. A GPS-Based Pitot-Static Calibration Method Using Global Output-Error Optimization

    Science.gov (United States)

    Foster, John V.; Cunningham, Kevin

    2010-01-01

    Pressure-based airspeed and altitude measurements for aircraft typically require calibration of the installed system to account for pressure sensing errors such as those due to local flow field effects. In some cases, calibration is used to meet requirements such as those specified in Federal Aviation Regulation Part 25. Several methods are used for in-flight pitot-static calibration including tower fly-by, pacer aircraft, and trailing cone methods. In the 1990s, the introduction of satellite-based positioning systems to the civilian market enabled new in-flight calibration methods based on accurate ground speed measurements provided by Global Positioning Systems (GPS). Use of GPS for airspeed calibration has many advantages such as accuracy, ease of portability (e.g. hand-held) and the flexibility of operating in airspace without the limitations of test range boundaries or ground telemetry support. The current research was motivated by the need for a rapid and statistically accurate method for in-flight calibration of pitot-static systems for remotely piloted, dynamically-scaled research aircraft. Current calibration methods were deemed not practical for this application because of confined test range size and limited flight time available for each sortie. A method was developed that uses high data rate measurements of static and total pressure, and GPS-based ground speed measurements to compute the pressure errors over a range of airspeed. The novel application of this approach is the use of system identification methods that rapidly compute optimal pressure error models with defined confidence intervals in near-real time. This method has been demonstrated in flight tests and has shown 2-sigma bounds of approximately 0.2 kts with an order of magnitude reduction in test time over other methods. As part of this experiment, a unique database of wind measurements was acquired concurrently with the flight experiments, for the purpose of experimental validation of the
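At its core, such a method regresses the pressure-derived airspeed error against airspeed, with GPS-derived speed (wind effects solved for separately) serving as truth. A toy sketch of the least-squares step, with invented error coefficients and noise levels, not the paper's actual output-error formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
v_true = rng.uniform(20.0, 60.0, 200)   # GPS-derived true airspeed, kts (wind removed)
a, b = 1.5, 0.04                        # hypothetical error model: err = a + b*V
v_ind = v_true + a + b * v_true + rng.normal(0.0, 0.2, v_true.size)  # indicated airspeed

# Least-squares fit of the airspeed error model; a real output-error method
# would also report confidence intervals on the estimates in near-real time.
err = v_ind - v_true
b_hat, a_hat = np.polyfit(v_true, err, 1)
```

With enough high-rate samples, the fitted `(a_hat, b_hat)` recover the injected error model closely, which is what drives the small 2-sigma bounds cited above.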

  3. Effects of temporal and spatial resolution of calibration data on integrated hydrologic water quality model identification

    Science.gov (United States)

    Jiang, Sanyuan; Jomaa, Seifeddine; Büttner, Olaf; Rode, Michael

    2014-05-01

    Hydrological water quality modeling is increasingly used for investigating runoff and nutrient transport processes as well as watershed management, but it is mostly unclear how data availability determines model identification. In this study, the HYPE (HYdrological Predictions for the Environment) model, which is a process-based, semi-distributed hydrological water quality model, was applied in two different mesoscale catchments (Selke (463 km2) and Weida (99 km2)) located in central Germany to simulate discharge and inorganic nitrogen (IN) transport. PEST and DREAM(ZS) were combined with the HYPE model to conduct parameter calibration and uncertainty analysis. A split-sample test was used for model calibration (1994-1999) and validation (1999-2004). IN concentration and daily IN load were found to be highly correlated with discharge, indicating that IN leaching is mainly controlled by runoff. Both dynamics and balances of water and IN load were well captured, with NSE greater than 0.83 during the validation period. Multi-objective calibration (calibrating hydrological and water quality parameters simultaneously) was found to outperform step-wise calibration in terms of model robustness. Multi-site calibration was able to improve model performance at internal sites and decrease parameter posterior uncertainty and prediction uncertainty. Nitrogen-process parameters calibrated using continuous daily averages of nitrate-N concentration observations produced better and more robust simulations of IN concentration and load, and lower posterior parameter uncertainty and IN concentration prediction uncertainty, compared to calibration against discontinuous biweekly nitrate-N concentration measurements. Both PEST and DREAM(ZS) are efficient in parameter calibration. However, DREAM(ZS) is more sound in terms of parameter identification and uncertainty analysis than PEST because of its capability to evolve parameter posterior distributions and estimate prediction uncertainty based on global
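The NSE figure quoted above is the standard Nash-Sutcliffe efficiency; a minimal implementation with toy discharge values (invented here for illustration):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the observed mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

q_obs = np.array([1.2, 3.4, 2.8, 5.1, 4.0])   # toy observed discharge
q_sim = np.array([1.0, 3.6, 2.5, 5.4, 3.8])   # toy simulated discharge
score = nse(q_obs, q_sim)
```

Because the denominator is the variance of the observations, NSE weights high-flow periods heavily, which is one reason it pairs well with discharge-dominated nitrogen export.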

  4. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
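The surrogate-data idea can be sketched with a toy audit model (invented here, not the paper's): a "truth" model generates synthetic bills, a calibration technique is run against them, and the figures of merit are then computable because the truth is known.

```python
import numpy as np

def energy_model(p, hdd):
    """Toy monthly audit model: use = base load + heating slope * degree-days."""
    base, slope = p
    return base + slope * hdd

rng = np.random.default_rng(2)
hdd = rng.uniform(100.0, 700.0, 12)                  # 12 months of heating degree-days
p_true = (300.0, 1.8)                                # "truth" hidden from the calibrator
bills = energy_model(p_true, hdd) + rng.normal(0.0, 10.0, 12)  # surrogate utility bills

# Calibration technique under test: ordinary least squares against the bills
A = np.column_stack([np.ones_like(hdd), hdd])
p_cal, *_ = np.linalg.lstsq(A, bills, rcond=None)

# Figures of merit: closure on true parameters, and savings-prediction accuracy
# for a hypothetical retrofit that cuts the heating slope by 20%
savings_true = 0.2 * p_true[1] * hdd.sum()
savings_pred = 0.2 * p_cal[1] * hdd.sum()
```

With real calibration techniques the same scoring applies: compare predicted savings, recovered parameters, and bill fit against the known synthetic truth.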

  5. Optical modeling and polarization calibration for CMB measurements with ACTPol and Advanced ACTPol

    Science.gov (United States)

    Koopman, Brian; Austermann, Jason; Cho, Hsiao-Mei; Coughlin, Kevin P.; Duff, Shannon M.; Gallardo, Patricio A.; Hasselfield, Matthew; Henderson, Shawn W.; Ho, Shuay-Pwu Patty; Hubmayr, Johannes; Irwin, Kent D.; Li, Dale; McMahon, Jeff; Nati, Federico; Niemack, Michael D.; Newburgh, Laura; Page, Lyman A.; Salatino, Maria; Schillaci, Alessandro; Schmitt, Benjamin L.; Simon, Sara M.; Vavagiakis, Eve M.; Ward, Jonathan T.; Wollack, Edward J.

    2016-07-01

    The Atacama Cosmology Telescope Polarimeter (ACTPol) is a polarization sensitive upgrade to the Atacama Cosmology Telescope, located at an elevation of 5190 m on Cerro Toco in Chile. ACTPol uses transition edge sensor bolometers coupled to orthomode transducers to measure both the temperature and polarization of the Cosmic Microwave Background (CMB). Calibration of the detector angles is a critical step in producing polarization maps of the CMB. Polarization angle offsets in the detector calibration can cause leakage in polarization from E to B modes and induce a spurious signal in the EB and TB cross correlations, which eliminates our ability to measure potential cosmological sources of EB and TB signals, such as cosmic birefringence. We calibrate the ACTPol detector angles by ray tracing the designed detector angle through the entire optical chain to determine the projection of each detector angle on the sky. The distribution of calibrated detector polarization angles is consistent with a global offset angle from zero when compared to the EB-nulling offset angle, the angle required to null the EB cross-correlation power spectrum. We present the optical modeling process. The detector angles can be cross-checked through observations of known polarized sources, whether this be a galactic source or a laboratory reference standard. To cross-check the ACTPol detector angles, we use a thin film polarization grid placed in front of the receiver of the telescope, between the receiver and the secondary reflector. Making use of a rapidly rotating half-wave plate (HWP) mount, we spin the polarizing grid at a constant speed, polarizing and rotating the incoming atmospheric signal. The resulting sinusoidal signal is used to determine the detector angles. The optical modeling calibration was shown to be consistent with a global offset angle of zero when compared to EB nulling in the first ACTPol results and will continue to be a part of our calibration implementation. 
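The rotating-grid measurement reduces to fitting the phase of a sinusoid: detected power goes as cos^2 of the angle between the grid and the detector, so the phase of the 2-theta harmonic encodes the detector angle. A toy recovery with invented numbers (not ACTPol data):

```python
import numpy as np

rng = np.random.default_rng(6)
phi_true = np.deg2rad(23.0)                    # hypothetical detector angle
theta = np.linspace(0.0, 4.0 * np.pi, 2000)    # grid rotation angle over two turns
power = 0.5 * (1.0 + np.cos(2.0 * (theta - phi_true))) \
        + rng.normal(0.0, 0.02, theta.size)    # noisy detected power

# Linear least squares on the quadrature components of the 2*theta harmonic;
# the recovered phase gives the detector polarization angle
X = np.column_stack([np.ones_like(theta), np.cos(2.0 * theta), np.sin(2.0 * theta)])
c0, cc, cs = np.linalg.lstsq(X, power, rcond=None)[0]
phi_hat = 0.5 * np.arctan2(cs, cc)
```

Expressing the fit as two quadrature amplitudes keeps the problem linear, so no nonlinear phase search is needed.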

  6. Financial model calibration using consistency hints.

    Science.gov (United States)

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese Yen swaps market and the US dollar yield market.
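The augmented-error idea can be sketched with a toy linear model and an invented hint (the paper's hints concern the Vasicek model, not this example): the ordinary curve-fit error is combined with a Kullback-Leibler penalty that pulls a parameter toward a consistency prior.

```python
import numpy as np

def kl_gauss(mu0, s0, mu1, s1):
    """Closed-form KL divergence between two 1-D Gaussians."""
    return np.log(s1 / s0) + (s0 ** 2 + (mu0 - mu1) ** 2) / (2 * s1 ** 2) - 0.5

def augmented_error(params, x, y, lam=0.1):
    """Curve-fit error plus a consistency-hint penalty on the slope (invented hint)."""
    a, b = params
    fit = np.mean((y - (a * x + b)) ** 2)
    hint = kl_gauss(a, 1.0, 1.2, 1.0)  # hint: slope believed to be near 1.2
    return fit + lam * hint

x = np.linspace(0.0, 1.0, 20)
y = 1.2 * x + 0.3
e_consistent = augmented_error((1.2, 0.3), x, y)   # zero fit error, zero hint error
```

Minimizing the augmented error trades goodness of fit against consistency, so the calibrated parameters stay interpretable rather than merely fitting the curve.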

  7. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target in this work is to validate the component functions of model output between physical observation and computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of model output, and the conditional expectations reflect partial information of model output. Therefore, the model validation of conditional expectations tells the discrepancy between the partial information of the computational model output and that of the observations. Then a calibration of the conditional expectations is carried out to reduce the value of the model validation metric. After that, the model validation metric of the model output is recalculated with the calibrated model parameters, and the result shows that a reduction of the discrepancy in the conditional expectations can help decrease the difference in model output. At last, several examples are employed to demonstrate the rationality and necessity of the methodology in the case of both a single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration process are applied at single site and multiple sites. • Validation and calibration process show an advantage over existing methods
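The area metric referenced above is commonly computed as the area enclosed between the empirical CDFs of observations and model output; a minimal sketch (toy uniform samples, not the paper's examples):

```python
import numpy as np

def area_metric(obs, sim):
    """Area between the empirical CDFs of observations and model output."""
    obs = np.sort(np.asarray(obs, dtype=float))
    sim = np.sort(np.asarray(sim, dtype=float))
    grid = np.sort(np.concatenate([obs, sim]))
    F_obs = np.searchsorted(obs, grid, side="right") / obs.size
    F_sim = np.searchsorted(sim, grid, side="right") / sim.size
    diff = np.abs(F_obs - F_sim)
    # trapezoidal integration of |F_obs - F_sim| over the merged support
    return float(np.sum(0.5 * (diff[1:] + diff[:-1]) * np.diff(grid)))

obs = np.linspace(0.0, 1.0, 500)
shifted = obs + 0.5
d = area_metric(obs, shifted)   # approximately the 0.5 shift
```

The metric is zero only when the two distributions coincide, and for a pure location shift it approximately equals the shift, which makes it easy to interpret in the model's output units.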

  8. CALIBRATED HYDRODYNAMIC MODEL

    Directory of Open Access Journals (Sweden)

    Sezar Gülbaz

    2015-01-01

    The land development and increase in urbanization in a watershed affect water quantity and water quality. On one hand, urbanization provokes the adjustment of the geomorphic structure of the streams and ultimately raises the peak flow rate, which causes floods; on the other hand, it diminishes water quality, which results in an increase in Total Suspended Solids (TSS). Consequently, sediment accumulation downstream of urban areas is observed, which is not preferred for a longer life of dams. In order to overcome the sediment accumulation problem in dams, the amount of TSS in streams and in watersheds should be taken under control. Low Impact Development (LID) is a Best Management Practice (BMP) which may be used for this purpose. It is a land planning and engineering design method which is applied in managing storm water runoff in order to reduce flooding as well as simultaneously improve water quality. LID includes techniques to predict suspended solid loads in surface runoff generated over impervious urban surfaces. In this study, the impact of LID-BMPs on surface runoff and TSS is investigated by employing a calibrated hydrodynamic model for Sazlidere Watershed, which is located in Istanbul, Turkey. For this purpose, a calibrated hydrodynamic model was developed by using the Environmental Protection Agency Storm Water Management Model (EPA SWMM). For model calibration and validation, we set up a rain gauge and a flow meter in the field and obtained rainfall and flow rate data. We then selected several LID types, such as retention basins, vegetative swales and permeable pavement, and obtained their influence on peak flow rate and pollutant buildup and washoff for TSS. Consequently, we observe the possible effects of LID on surface runoff and TSS in Sazlidere Watershed.

  9. A system-theory-based model for monthly river runoff forecasting: model calibration and optimization

    Directory of Open Access Journals (Sweden)

    Wu Jianhua

    2014-03-01

    River runoff is not only a crucial part of the global water cycle, but it is also an important source for hydropower and an essential element of water balance. This study presents a system-theory-based model for river runoff forecasting taking the Hailiutu River as a case study. The forecasting model, designed for the Hailiutu watershed, was calibrated and verified by long-term precipitation observation data and groundwater exploitation data from the study area. Additionally, frequency analysis, taken as an optimization technique, was applied to improve prediction accuracy. Following model optimization, the overall relative prediction errors are below 10%. The system-theory-based prediction model is applicable to river runoff forecasting, and following optimization by frequency analysis, the prediction error is acceptable.

  10. Deriving global parameter estimates for the Noah land surface model using FLUXNET and machine learning

    Science.gov (United States)

    Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.

    2016-11-01

    With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin Hypercube-based ensemble of the most sensitive parameters, determined by the Sobol method, to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that there is the potential to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.
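The mapping step above (calibrated parameter sets regressed on site characteristics) can be sketched with scikit-learn's Extra-Trees implementation; the covariates, parameter relationships, and numbers below are invented stand-ins, not the study's actual data.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(3)
# Hypothetical site covariates: mean temperature, aridity index, vegetation fraction
X = rng.uniform(0.0, 1.0, (85, 3))                  # 85 "FLUXNET-like" sites
# Pretend the calibrated parameters vary smoothly with the covariates
y = np.column_stack([
    40.0 + 200.0 * X[:, 2],                         # rs_min-like parameter
    0.05 + 0.10 * X[:, 0],                          # Czil-like parameter
])

# Fit the covariate-to-parameter mapping, then predict parameters at unmonitored sites
regressor = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X, y)
params_at_new_sites = regressor.predict(rng.uniform(0.0, 1.0, (5, 3)))
```

In the study's workflow the predictions are made on a global 5 km covariate grid, and leave-one-out cross validation over the 85 sites checks whether the mapping generalizes.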

  11. Some tests of wet tropospheric calibration for the CASA Uno Global Positioning System experiment

    Science.gov (United States)

    Dixon, T. H.; Wolf, S. Kornreich

    1990-01-01

    Wet tropospheric path delay can be a major error source for Global Positioning System (GPS) geodetic experiments. Strategies for minimizing this error are investigted using data from CASA Uno, the first major GPS experiment in Central and South America, where wet path delays may be both high and variable. Wet path delay calibration using water vapor radiometers (WVRs) and residual delay estimation is compared with strategies where the entire wet path delay is estimated stochastically without prior calibration, using data from a 270-km test baseline in Costa Rica. Both approaches yield centimeter-level baseline repeatability and similar tropospheric estimates, suggesting that WVR calibration is not critical for obtaining high precision results with GPS in the CASA region.

  12. Cloud-Based Model Calibration Using OpenStudio: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Hale, E.; Lisell, L.; Goldwasser, D.; Macumber, D.; Dean, J.; Metzger, I.; Parker, A.; Long, N.; Ball, B.; Schott, M.; Weaver, E.; Brackney, L.

    2014-03-01

    OpenStudio is a free, open source Software Development Kit (SDK) and application suite for performing building energy modeling and analysis. The OpenStudio Parametric Analysis Tool has been extended to allow cloud-based simulation of multiple OpenStudio models parametrically related to a baseline model. This paper describes the new cloud-based simulation functionality and presents a model cali-bration case study. Calibration is initiated by entering actual monthly utility bill data into the baseline model. Multiple parameters are then varied over multiple iterations to reduce the difference between actual energy consumption and model simulation results, as calculated and visualized by billing period and by fuel type. Simulations are per-formed in parallel using the Amazon Elastic Cloud service. This paper highlights model parameterizations (measures) used for calibration, but the same multi-nodal computing architecture is available for other purposes, for example, recommending combinations of retrofit energy saving measures using the calibrated model as the new baseline.

  13. Significant uncertainty in global scale hydrological modeling from precipitation data errors

    Science.gov (United States)

    Sperna Weiland, Frederiek C.; Vrugt, Jasper A.; van Beek, Rens (L.) P. H.; Weerts, Albrecht H.; Bierkens, Marc F. P.

    2015-10-01

    In the past decades significant progress has been made in the fitting of hydrologic models to data. Most of this work has focused on simple, CPU-efficient, lumped hydrologic models using discharge, water table depth, soil moisture, or tracer data from relatively small river basins. In this paper, we focus on large-scale hydrologic modeling and analyze the effect of parameter and rainfall data uncertainty on simulated discharge dynamics with the global hydrologic model PCR-GLOBWB. We use three rainfall data products: the CFSR reanalysis, the ERA-Interim reanalysis, and a combined ERA-40 reanalysis and CRU dataset. Parameter uncertainty is derived from Latin Hypercube Sampling (LHS) using monthly discharge data from five of the largest river systems in the world. Our results demonstrate that the default parameterization of PCR-GLOBWB, derived from global datasets, can be improved by calibrating the model against monthly discharge observations. Yet, it is difficult to find a single parameterization of PCR-GLOBWB that works well for all of the five river basins considered herein and shows consistent performance during both the calibration and evaluation period. Still, there may be possibilities for regionalization based on catchment similarities. Our simulations illustrate that parameter uncertainty constitutes only a minor part of predictive uncertainty. Thus, the apparent dichotomy between simulations of global-scale hydrologic behavior and actual data cannot be resolved by simply increasing the model complexity of PCR-GLOBWB and resolving sub-grid processes. Instead, it would be more productive to improve the characterization of global rainfall amounts at spatial resolutions of 0.5° and smaller.
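Latin Hypercube Sampling, used above to sample parameter uncertainty, stratifies each parameter range into equal-probability bins and draws exactly one sample per bin; a minimal implementation (bounds invented for illustration):

```python
import numpy as np

def latin_hypercube(n, bounds, seed=0):
    """n stratified samples per parameter: one draw from each of n equal bins."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    # per-parameter random permutation of strata, plus a jitter within each stratum
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    u = (strata + rng.uniform(size=(n, d))) / n
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# e.g. two hypothetical model parameters with different ranges
samples = latin_hypercube(100, [(0.1, 2.0), (10.0, 500.0)])
```

Compared with plain Monte Carlo, each marginal is covered evenly, so far fewer model runs are needed to span the parameter space.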

  14. Fermentation process tracking through enhanced spectral calibration modeling.

    Science.gov (United States)

    Triadaphillou, Sophia; Martin, Elaine; Montague, Gary; Norden, Alison; Jeffkins, Paul; Stimpson, Sarah

    2007-06-15

    The FDA process analytical technology (PAT) initiative will materialize in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure that is evident between the wavelengths and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling that combines a wavelength selection procedure, spectral window selection (SWS), where windows of wavelengths are automatically selected which are subsequently used as the basis of the calibration model. However, due to the non-uniqueness of the windows selected when the algorithm is executed repeatedly, multiple models are constructed and these are then combined using stacking thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.

  15. Cumulative error models for the tank calibration problem

    International Nuclear Information System (INIS)

    Goldman, A.; Anderson, L.G.; Weber, J.

    1983-01-01

    The purpose of a tank calibration equation is to obtain an estimate of the liquid volume that corresponds to a liquid level measurement. Calibration experimental errors occur in both liquid level and liquid volume measurements. If one of the errors is relatively small, the calibration equation can be determined from well-known regression and calibration methods. If both variables are assumed to be in error, then for linear cases a prototype model should be considered. Many investigators are not familiar with this model or do not have computing facilities capable of obtaining numerical solutions. This paper discusses and compares three linear models that approximate the prototype model and have the advantage of much simpler computations. Comparisons among the four models and recommendations of suitability are made from simulations and from analyses of six sets of experimental data.
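When both level and volume carry error, ordinary least squares attenuates the slope; a classic errors-in-variables approach for the linear case is Deming regression (shown here as an illustrative stand-in, not necessarily one of the paper's three approximating models). The tank data below are invented:

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Deming regression: both variables in error; delta = var(err_y)/var(err_x)."""
    mx, my = x.mean(), y.mean()
    sxx = np.mean((x - mx) ** 2)
    syy = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

rng = np.random.default_rng(5)
h_true = np.linspace(0.0, 10.0, 300)                            # true liquid level
level = h_true + rng.normal(0.0, 0.3, h_true.size)              # measured level
volume = 2.0 * h_true + 1.0 + rng.normal(0.0, 0.3, h_true.size) # measured volume

slope, intercept = deming(level, volume)
```

Unlike OLS on the measured level, the Deming estimate remains consistent when the error-variance ratio `delta` is specified correctly.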

  16. Root zone water quality model (RZWQM2): Model use, calibration and validation

    Science.gov (United States)

    Ma, Liwang; Ahuja, Lajpat; Nolan, B.T.; Malone, Robert; Trout, Thomas; Qi, Z.

    2012-01-01

    The Root Zone Water Quality Model (RZWQM2) has been used widely for simulating agricultural management effects on crop production and soil and water quality. Although it is a one-dimensional model, it has many desirable features for the modeling community. This article outlines the principles of calibrating the model component by component with one or more datasets and validating the model with independent datasets. Users should consult the RZWQM2 user manual distributed along with the model and a more detailed protocol on how to calibrate RZWQM2 provided in a book chapter. Two case studies (or examples) are included in this article. One is from an irrigated maize study in Colorado to illustrate the use of field and laboratory measured soil hydraulic properties on simulated soil water and crop production. It also demonstrates the interaction between soil and plant parameters in simulated plant responses to water stresses. The other is from a maize-soybean rotation study in Iowa to show a manual calibration of the model for crop yield, soil water, and N leaching in tile-drained soils. Although the commonly used trial-and-error calibration method works well for experienced users, as shown in the second example, an automated calibration procedure is more objective, as shown in the first example. Furthermore, the incorporation of the Parameter Estimation Software (PEST) into RZWQM2 made the calibration of the model more efficient than a grid (ordered) search of model parameters. In addition, PEST provides sensitivity and uncertainty analyses that should help users in selecting the right parameters to calibrate.

  17. Calibration of the Site-Scale Saturated Zone Flow Model

    International Nuclear Information System (INIS)

    Zyvoloski, G. A.

    2001-01-01

    The purpose of the flow calibration analysis work is to provide Performance Assessment (PA) with the calibrated site-scale saturated zone (SZ) flow model that will be used to make radionuclide transport calculations. As such, it is one of the most important models developed in the Yucca Mountain project. This model will be a culmination of much of our knowledge of the SZ flow system. The objective of this study is to provide a defensible site-scale SZ flow and transport model that can be used for assessing total system performance. A defensible model would include geologic and hydrologic data that are used to form the hydrogeologic framework model; also, it would include hydrochemical information to infer transport pathways, in-situ permeability measurements, and water level and head measurements. In addition, the model should include information on major model sensitivities. Especially important are those that affect calibration, the direction of transport pathways, and travel times. Finally, if warranted, alternative calibrations representing different conceptual models should be included. To obtain a defensible model, all available data should be used (or at least considered) to obtain a calibrated model. The site-scale SZ model was calibrated using measured and model-generated water levels and hydraulic head data, specific discharge calculations, and flux comparisons along several of the boundaries. Model validity was established by comparing model-generated permeabilities with the permeability data from field and laboratory tests; by comparing fluid pathlines obtained from the SZ flow model with those inferred from hydrochemical data; and by comparing the upward gradient generated with the model with that observed in the field. This analysis is governed by the Office of Civilian Radioactive Waste Management (OCRWM) Analysis and Modeling Report (AMR) Development Plan ''Calibration of the Site-Scale Saturated Zone Flow Model'' (CRWMS M and O 1999a)

  18. MT3DMS: Model use, calibration, and validation

    Science.gov (United States)

    Zheng, C.; Hill, Mary C.; Cao, G.; Ma, R.

    2012-01-01

    MT3DMS is a three-dimensional multi-species solute transport model for solving advection, dispersion, and chemical reactions of contaminants in saturated groundwater flow systems. MT3DMS interfaces directly with the U.S. Geological Survey finite-difference groundwater flow model MODFLOW for the flow solution and supports the hydrologic and discretization features of MODFLOW. MT3DMS contains multiple transport solution techniques in one code, which can often be important, including in model calibration. Since its first release in 1990 as MT3D for single-species mass transport modeling, MT3DMS has been widely used in research projects and practical field applications. This article provides a brief introduction to MT3DMS and presents recommendations about calibration and validation procedures for field applications of MT3DMS. The examples presented suggest the need to consider alternative processes as models are calibrated and suggest opportunities and difficulties associated with using groundwater age in transport model calibration.

  19. A single model procedure for estimating tank calibration equations

    International Nuclear Information System (INIS)

    Liebetrau, A.M.

    1997-10-01

    A fundamental component of any accountability system for nuclear materials is a tank calibration equation that relates the height of liquid in a tank to its volume. Tank volume calibration equations are typically determined from pairs of height and volume measurements taken in a series of calibration runs. After raw calibration data are standardized to a fixed set of reference conditions, the calibration equation is typically fit by dividing the data into several segments--corresponding to regions in the tank--and independently fitting the data for each segment. The estimates obtained for individual segments must then be combined to obtain an estimate of the entire calibration function. This process is tedious and time-consuming. Moreover, uncertainty estimates may be misleading because it is difficult to properly model run-to-run variability and between-segment correlation. In this paper, the authors describe a model whose parameters can be estimated simultaneously for all segments of the calibration data, thereby eliminating the need for segment-by-segment estimation. The essence of the proposed model is to define a suitable polynomial to fit to each segment and then extend its definition to the domain of the entire calibration function, so that it (the entire calibration function) can be expressed as the sum of these extended polynomials. The model provides defensible estimates of between-run variability and yields a proper treatment of between-segment correlations. A portable software package, called TANCS, has been developed to facilitate the acquisition, standardization, and analysis of tank calibration data. The TANCS package was used for the calculations in an example presented to illustrate the unified modeling approach described in this paper. With TANCS, a trial calibration function can be estimated and evaluated in a matter of minutes
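The "extended polynomial" idea, fitting all segments simultaneously in one least-squares model, can be sketched with truncated-power basis terms that switch on at each segment boundary; the tank geometry, knots, and noise below are invented, and this is not the TANCS formulation itself:

```python
import numpy as np

rng = np.random.default_rng(7)
h = np.linspace(0.0, 10.0, 200)                           # liquid height
v_true = 50.0 * h + 2.0 * np.maximum(h - 4.0, 0.0) ** 2   # toy tank geometry
v_meas = v_true + rng.normal(0.0, 1.0, h.size)            # volume measurements

# One design matrix spanning all segments: a quadratic base plus truncated-power
# terms at the segment boundaries (knots), estimated in a single pass
knots = [4.0, 7.0]
X = np.column_stack([np.ones_like(h), h, h ** 2] +
                    [np.maximum(h - k, 0.0) ** 2 for k in knots])
coef, *_ = np.linalg.lstsq(X, v_meas, rcond=None)
v_hat = X @ coef
rmse = float(np.sqrt(np.mean((v_hat - v_meas) ** 2)))
```

Because every segment's coefficients come out of one fit, between-segment correlations are handled by the single covariance matrix of `coef` instead of being stitched together afterward.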

  20. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

    Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration of time and high computation cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
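The Morris screening step ranks parameters by the mean absolute "elementary effect" over random trajectories through parameter space; a minimal sketch on a toy function (not the Xin'anjiang model):

```python
import numpy as np

def morris_ee(f, bounds, r=20, seed=0):
    """Mean absolute elementary effects (mu*) from r Morris trajectories."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    ee = np.zeros((r, d))
    delta = 0.5                                  # step size in unit-cube coordinates
    for i in range(r):
        x = rng.uniform(0.0, 0.5, d)             # start low enough to add delta
        fx = f(lo + x * (hi - lo))
        for j in rng.permutation(d):             # perturb one parameter at a time
            x2 = x.copy()
            x2[j] += delta
            fx2 = f(lo + x2 * (hi - lo))
            ee[i, j] = (fx2 - fx) / delta
            x, fx = x2, fx2
    return np.abs(ee).mean(axis=0)               # mu* ranks parameter importance

# Toy model: parameter 0 dominates, 1 is negligible, 2 acts via an interaction
mu_star = morris_ee(lambda p: 5 * p[0] + 0.1 * p[1] + p[0] * p[2], [(0, 1)] * 3)
```

Only the parameters with large `mu_star` would then be passed to the more expensive RSMSobol variance decomposition.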

  1. Comparison of different multi-objective calibration criteria using a conceptual rainfall-runoff model of flood events

    Directory of Open Access Journals (Sweden)

    R. Moussa

    2009-04-01

    Full Text Available A conceptual lumped rainfall-runoff flood event model was developed and applied on the Gardon catchment located in Southern France, and various single-objective and multi-objective functions were used for its calibration. The model was calibrated on 15 events and validated on 14 others. The results of both the calibration and validation phases are compared on the basis of their performance with regard to six criteria: three global criteria and three relative criteria representing volume, peakflow, and the root mean square error. The first type of criteria gives more weight to large events whereas the second considers all events to be of equal weight. The results show that the calibrated parameter values depend on the type of criteria used. Significant trade-offs are observed between the different objectives: no unique set of parameters is able to satisfy all objectives simultaneously. Instead, the solution to the calibration problem is given by a set of Pareto optimal solutions. From this set of optimal solutions, a balanced aggregated objective function is proposed as a compromise between up to three objective functions. The single-objective and multi-objective calibration strategies are compared both in terms of parameter variation bounds and simulation quality. The results of this study indicate that two well-chosen and non-redundant objective functions are sufficient to calibrate the model and that the use of three objective functions does not necessarily yield different results. The problems of non-uniqueness in model calibration, and the choice of adequate objective functions for flood event models, emphasise the importance of the modeller's intervention. The recent advances in automatic optimisation techniques do not lessen the responsibility of the user, who has to choose multiple criteria based on the aims of the study, his appreciation of the errors induced by data and model structure and his knowledge of the
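The Pareto-set notion used above is simple to make concrete: a candidate parameter set belongs to the front if no other candidate is at least as good on every objective. A minimal filter over hypothetical (volume error, peak-flow error) scores, both minimised:

```python
# Minimal Pareto filter: keep candidates not dominated by any other
# (all objectives minimised). The candidate scores are illustrative.

def pareto_front(points):
    front = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# (volume error, peak-flow error) for five candidate parameter sets
scores = [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9), (0.6, 0.6), (0.8, 0.8)]
front = pareto_front(scores)
```

Here the trade-off is visible directly: improving the volume error beyond the front is only possible by worsening the peak-flow error, which is why a balanced aggregated objective has to be chosen deliberately.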

  2. Gridded Calibration of Ensemble Wind Vector Forecasts Using Ensemble Model Output Statistics

    Science.gov (United States)

    Lazarus, S. M.; Holman, B. P.; Splitt, M. E.

    2017-12-01

    A computationally efficient method is developed that performs gridded post-processing of ensemble wind vector forecasts. An expansive set of idealized WRF model simulations is generated to provide physically consistent high-resolution winds over a coastal domain characterized by an intricate land/water mask. Ensemble model output statistics (EMOS) is used to calibrate the ensemble wind vector forecasts at observation locations. The local EMOS predictive parameters (mean and variance) are then spread throughout the grid utilizing flow-dependent statistical relationships extracted from the downscaled WRF winds. Using data withdrawal and 28 east central Florida stations, the method is applied to one year of 24 h wind forecasts from the Global Ensemble Forecast System (GEFS). Compared to the raw GEFS, the approach improves both the deterministic and probabilistic forecast skill. Analysis of multivariate rank histograms indicates the post-processed forecasts are calibrated. Two downscaling case studies are presented, a quiescent easterly flow event and a frontal passage. Strengths and weaknesses of the approach are presented and discussed.
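The core of the EMOS mean correction above is an affine map of the ensemble mean, mu = a + b * xbar. Full EMOS also models the predictive variance and fits all coefficients by minimum CRPS; the sketch below fits only the mean coefficients, by ordinary least squares, on synthetic data with an invented bias, so it is a simplification of the method, not the paper's implementation.

```python
import random

# Simplified EMOS sketch: correct a biased, damped ensemble mean with an
# affine map fit by least squares. Data and bias model are synthetic.

def fit_affine(x, y):
    """Ordinary least squares for y = a + b * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

rng = random.Random(0)
truth, ens_mean = [], []
for _ in range(200):
    t = rng.uniform(0.0, 10.0)                            # "observed" wind speed
    truth.append(t)
    ens_mean.append(0.8 * t + 2.0 + rng.gauss(0.0, 0.2))  # biased, damped ensemble

a, b = fit_affine(ens_mean, truth)
corrected = [a + b * m for m in ens_mean]
bias_raw = sum(m - t for m, t in zip(ens_mean, truth)) / 200
bias_cor = sum(c - t for c, t in zip(corrected, truth)) / 200
```

The gridded step in the paper then spreads the locally fitted (a, b) and variance coefficients across the domain using the downscaled-wind relationships, which this local sketch does not attempt.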

  3. Influence of rainfall observation network on model calibration and application

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-01-01

    Full Text Available The objective of this study is to investigate the influence of the spatial resolution of the rainfall input on model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany has been selected for this study. First, the semi-distributed HBV model is calibrated with the precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of the raingauge density. Secondly, the calibrated model is validated using interpolated precipitation from the same raingauge density used for the calibration as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach for filling in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the above-described precipitation fields. The simulated hydrographs obtained in these three sets of experiments are analyzed through comparison of the computed Nash-Sutcliffe coefficient and several goodness-of-fit indexes. The results show that a model using different raingauge networks might need re-calibration of the model parameters: specifically, a model calibrated on relatively sparse precipitation information might perform well on dense precipitation information, while a model calibrated on dense precipitation information fails on sparse precipitation information.
Also, the model calibrated with the complete set of observed precipitation and run with incomplete observed data associated with the data estimated using multiple linear regressions, at the locations treated as

  4. Caliver: An R package for CALIbration and VERification of forest fire gridded model outputs.

    Science.gov (United States)

    Vitolo, Claudia; Di Giuseppe, Francesca; D'Andrea, Mirko

    2018-01-01

    The name caliver stands for CALIbration and VERification of forest fire gridded model outputs. This is a package developed for the R programming language and available under an APACHE-2 license from a public repository. In this paper we describe the functionalities of the package and give examples using publicly available datasets. Fire danger model outputs are taken from the modeling components of the European Forest Fire Information System (EFFIS) and observed burned areas from the Global Fire Emission Database (GFED). Complete documentation, including a vignette, is also available within the package.

  5. The cost of uniqueness in groundwater model calibration

    Science.gov (United States)

    Moore, Catherine; Doherty, John

    2006-04-01

    Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The "cost of uniqueness" is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly "well calibrated". Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights to be gained into the loss of system detail incurred through the calibration process.
A comparison of pre- and post-calibration

  6. Calibration of a Distributed Hydrological Model using Remote Sensing Evapotranspiration data in the Semi-Arid Punjab Region of Pakistan

    Science.gov (United States)

    Becker, R.; Usman, M.

    2017-12-01

    A SWAT (Soil Water Assessment Tool) model is applied in the semi-arid Punjab region in Pakistan. The physically based hydrological model is set up to simulate hydrological processes and water resources demands under future land use, climate change and irrigation management scenarios. In order to successfully run the model, detailed focus is laid on the calibration procedure. The study deals with the following calibration issues: (i) the lack of reliable calibration/validation data, (ii) the difficulty of accurately modelling a highly managed system with a physically based hydrological model, and (iii) the use of alternative and spatially distributed data sets for model calibration. In our study area field observations are rare, and the entirely human-controlled irrigation system renders central calibration parameters (e.g. runoff/curve number) unsuitable, as it cannot be assumed that they represent the natural behavior of the hydrological system. From evapotranspiration (ET), however, principal hydrological processes can still be inferred. Usman et al. (2015) derived satellite-based monthly ET data for our study area based on SEBAL (Surface Energy Balance Algorithm) and created a reliable ET data set which we use in this study to calibrate our SWAT model. The initial SWAT model performance is evaluated with respect to the SEBAL results using correlation coefficients, RMSE, Nash-Sutcliffe efficiencies and mean differences. Particular focus is laid on the spatial patterns, investigating the potential of a spatially differentiated parameterization instead of just using spatially uniform calibration data. A sensitivity analysis reveals the most sensitive parameters with respect to changes in ET, which are then selected for the calibration process. Using the SEBAL-ET product we calibrate the SWAT model for the time period 2005-2006 using a dynamically dimensioned global search algorithm to minimize RMSE.
The model improvement after the calibration procedure is finally evaluated based
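The calibration loop described above minimises the RMSE between simulated and satellite-derived ET. As a stand-in for the dynamically dimensioned search, the sketch below uses plain random search over a single hypothetical crop coefficient `kc` with a toy ET model (ET = kc * PET); the monthly series and the model form are invented for illustration.

```python
import math
import random

# Stand-in for an RMSE-minimising calibration: random search over one
# hypothetical parameter `kc` against synthetic monthly reference ET.

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

ref_et = [60, 75, 110, 150, 180, 170, 140, 100, 70, 55, 50, 52]   # mm/month, made up
pet =    [80, 100, 145, 200, 240, 225, 185, 135, 95, 75, 65, 70]  # potential ET

def simulate(kc):                       # toy model: ET = kc * PET
    return [kc * p for p in pet]

rng = random.Random(42)
best_kc, best_err = None, float("inf")
for _ in range(500):
    kc = rng.uniform(0.2, 1.2)
    err = rmse(simulate(kc), ref_et)
    if err < best_err:
        best_kc, best_err = kc, err
```

A dynamically dimensioned search differs in that it perturbs progressively fewer dimensions of the current best solution as the budget is spent, which matters once several parameters are calibrated at once.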

  7. Global Calibration of Multi-Cameras Based on Refractive Projection and Ray Tracing

    Directory of Open Access Journals (Sweden)

    Mingchi Feng

    2017-10-01

    Full Text Available Multi-camera systems are widely applied in three-dimensional (3D) computer vision, especially when multiple cameras are distributed on both sides of the measured object. The calibration methods of multi-camera systems are critical to the accuracy of vision measurement, and the key is to find an appropriate calibration target. In this paper, a high-precision camera calibration method for multi-camera systems based on transparent glass checkerboards and ray tracing is described, and is used to calibrate multiple cameras distributed on both sides of the glass checkerboard. Firstly, the intrinsic parameters of each camera are obtained by Zhang’s calibration method. Then, multiple cameras capture several images from the front and back of the glass checkerboard with different orientations, and all images contain distinct grid corners. As the cameras on one side are not affected by the refraction of the glass checkerboard, extrinsic parameters can be directly calculated. However, the cameras on the other side are influenced by the refraction of the glass checkerboard, and direct use of the projection model will produce a calibration error. A multi-camera calibration method using a refractive projection model and ray tracing is developed to eliminate this error. Furthermore, both synthetic and real data are employed to validate the proposed approach. The experimental results of refractive calibration show that the error of the 3D reconstruction is smaller than 0.2 mm, the relative errors of both rotation and translation are less than 0.014%, and the mean and standard deviation of the reprojection error of the four-camera system are 0.00007 and 0.4543 pixels, respectively. The proposed method is flexible, highly accurate, and simple to carry out.
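The reason a refractive projection model is needed can be seen from Snell's law: a ray crossing a parallel glass plate is laterally displaced, so grid corners viewed through the checkerboard appear shifted. The plate thickness (5 mm) and refractive index (n = 1.52, typical for glass) below are illustrative, not the paper's values.

```python
import math

# Lateral displacement of a ray through a parallel glass plate (Snell's law):
# a millimetre-scale shift at moderate incidence angles, which a naive
# (non-refractive) projection model would absorb as calibration error.

def lateral_shift(theta_deg, t_mm, n_glass=1.52):
    ti = math.radians(theta_deg)
    tr = math.asin(math.sin(ti) / n_glass)       # refracted angle inside the glass
    return t_mm * math.sin(ti - tr) / math.cos(tr)

shift = lateral_shift(30.0, 5.0)                 # about 1 mm at 30 degrees
```

The shift vanishes at normal incidence and grows with the angle, which is why the error is view-dependent and cannot be removed by a constant offset.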

  8. Hand-eye calibration using a target registration error model.

    Science.gov (United States)

    Chen, Elvis C S; Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M

    2017-10-01

    Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand-eye calibration between the camera and the tracking system. The authors introduce the concept of 'guided hand-eye calibration', where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand-eye calibration as a registration problem between homologous point-line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) are recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera.

  9. Calibration of CORSIM models under saturated traffic flow conditions.

    Science.gov (United States)

    2013-09-01

    This study proposes a methodology to calibrate microscopic traffic flow simulation models. The proposed methodology has the capability to simultaneously calibrate all the calibration parameters as well as demand patterns for any network topology.

  10. Calibrating cellular automaton models for pedestrians walking through corners

    Science.gov (United States)

    Dias, Charitha; Lovreglio, Ruggiero

    2018-05-01

    Cellular Automata (CA) based pedestrian simulation models have gained remarkable popularity as they are simpler and easier to implement compared to other microscopic modeling approaches. However, incorporating traditional floor field representations in CA models to simulate pedestrian corner navigation behavior could result in unrealistic behaviors. Even though several previous studies have attempted to enhance CA models to realistically simulate pedestrian maneuvers around bends, such modifications have not been calibrated or validated against empirical data. In this study, two static floor field (SFF) representations, namely 'discrete representation' and 'continuous representation', are calibrated for CA-models to represent pedestrians' walking behavior around 90° bends. Trajectory data collected through a controlled experiment are used to calibrate these model representations. Calibration results indicate that although both floor field representations can represent pedestrians' corner navigation behavior, the 'continuous' representation fits the data better. Output of this study could be beneficial for enhancing the reliability of existing CA-based models by representing pedestrians' corner navigation behaviors more realistically.

  11. Investigation of the transferability of hydrological models and a method to improve model calibration

    Directory of Open Access Journals (Sweden)

    G. Hartmann

    2005-01-01

    Full Text Available In order to find a model parameterization such that the hydrological model performs well even under different conditions, appropriate model performance measures have to be determined. A common performance measure is the Nash-Sutcliffe efficiency. Usually it is calculated by comparing observed and modelled daily values. In this paper a modified version is suggested in order to calibrate a model on different time scales simultaneously (days up to years). A spatially distributed hydrological model based on the HBV concept was used. The modelling was applied to the Upper Neckar catchment, a mesoscale river basin in southwestern Germany with a size of about 4000 km². The observation period 1961-1990 was divided into four different climatic periods, referred to as "warm", "cold", "wet" and "dry". These sub-periods were used to assess the transferability of the model calibration and of the measure of performance. In a first step, the hydrological model was calibrated on a certain period and afterwards applied to the same period. Then, a validation was performed on the climatologically opposite period to the calibration, e.g. the model calibrated on the cold period was applied to the warm period. Optimal parameter sets were identified by an automatic calibration procedure based on Simulated Annealing. The results show that calibrating a hydrological model that is supposed to handle short-term as well as long-term signals is an important task. In particular, the objective function has to be chosen very carefully.
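The multi-timescale idea above can be sketched by evaluating the Nash-Sutcliffe efficiency both on the daily series and on block-aggregated (here annual) means, then combining the two scores. The equal weighting and the synthetic two-year series below are assumptions for illustration, not the paper's formulation.

```python
# Nash-Sutcliffe efficiency at two time scales at once: daily values plus
# annual means, combined with equal weights (an assumed weighting).

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - squared error / observed variance."""
    mean_obs = sum(obs) / len(obs)
    err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - err / var

def aggregate(series, width):
    """Non-overlapping block means, e.g. daily -> annual."""
    return [sum(series[i:i + width]) / width for i in range(0, len(series), width)]

# Two synthetic "years" of daily flow with a slow trend; the simulation has
# day-to-day error but almost no bias at the annual scale.
obs = [10.0 + 0.01 * d for d in range(730)]
sim = [o + (0.5 if d % 2 else -0.5) for d, o in enumerate(obs)]

nse_daily = nse(obs, sim)
nse_annual = nse(aggregate(obs, 365), aggregate(sim, 365))
combined = 0.5 * (nse_daily + nse_annual)
```

A model tuned only on the daily score can hide a long-term bias; including the aggregated score in the objective penalises exactly the errors that matter for climatologically different validation periods.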

  12. Global sensitivity analysis of a filtration model for submerged anaerobic membrane bioreactors (AnMBR).

    Science.gov (United States)

    Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J

    2014-04-01

    The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was conducted. The resulting estimated model factors accurately predicted membrane performance. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Streamflow characteristics from modelled runoff time series: Importance of calibration criteria selection

    Science.gov (United States)

    Poole, Sandra; Vis, Marc; Knight, Rodney; Seibert, Jan

    2017-01-01

    Ecologically relevant streamflow characteristics (SFCs) of ungauged catchments are often estimated from simulated runoff of hydrologic models that were originally calibrated on gauged catchments. However, SFC estimates of the gauged donor catchments and subsequently the ungauged catchments can be substantially uncertain when models are calibrated using traditional approaches based on optimization of statistical performance metrics (e.g., Nash–Sutcliffe model efficiency). An improved calibration strategy for gauged catchments is therefore crucial to help reduce the uncertainties of estimated SFCs for ungauged catchments. The aim of this study was to improve SFC estimates from modeled runoff time series in gauged catchments by explicitly including one or several SFCs in the calibration process. Different types of objective functions were defined consisting of the Nash–Sutcliffe model efficiency, single SFCs, or combinations thereof. We calibrated a bucket-type runoff model (HBV – Hydrologiska Byråns Vattenavdelning – model) for 25 catchments in the Tennessee River basin and evaluated the proposed calibration approach on 13 ecologically relevant SFCs representing major flow regime components and different flow conditions. While the model generally tended to underestimate the tested SFCs related to mean and high-flow conditions, SFCs related to low flow were generally overestimated. The highest estimation accuracies were achieved by a SFC-specific model calibration. Estimates of SFCs not included in the calibration process were of similar quality when comparing a multi-SFC calibration approach to a traditional model efficiency calibration. For practical applications, this implies that SFCs should preferably be estimated from targeted runoff model calibration, and modeled estimates need to be carefully interpreted.

  14. The DarkSide-50 Experiment: Electron Recoil Calibrations and A Global Energy Variable

    Energy Technology Data Exchange (ETDEWEB)

    Hackett, Brianne Rae [Hawaii U.

    2017-01-01

    Over the course of decades, there has been mounting astronomical evidence for non-baryonic dark matter, yet its precise nature remains elusive. A favored candidate for dark matter is the Weakly Interacting Massive Particle (WIMP), which arises naturally out of extensions to the Standard Model. WIMPs are expected to occasionally interact with particles of normal matter through nuclear recoils. DarkSide-50 aims to detect this type of particle through the use of a two-phase liquid argon time projection chamber. To make a claim of discovery, an accurate understanding of the background and WIMP search region is imperative. Knowledge of the backgrounds comes through extensive studies of DarkSide-50's response to electron and nuclear recoils. The CALibration Insertion System (CALIS) was designed and built for the purpose of introducing radioactive sources into or near the detector, in a joint effort between Fermi National Accelerator Laboratory (FNAL) and the University of Hawai'i at Manoa. This work describes the testing, installation, and commissioning of CALIS at the Laboratori Nazionali del Gran Sasso. CALIS has been used in multiple calibration campaigns with both neutron and gamma sources. In this work, DarkSide-50's response to electron recoils, which are important for background estimations, was studied through the use of calibration sources by constructing a global energy variable which takes into account the anticorrelation between the scintillation and ionization signals produced by interactions in the liquid argon. Accurately reconstructing the event energy correlates directly with quantitatively understanding the WIMP sensitivity in DarkSide-50. This work also validates the theoretically predicted decay spectrum of 39Ar against 39Ar decay data collected in the early days of DarkSide-50 while it was filled with atmospheric argon; a validation of this type is not readily found in the literature. Finally, we show how well the constructed energy variable can predict
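The anticorrelation idea behind a global energy variable in a two-phase TPC can be sketched numerically: an event's quanta split between scintillation (S1) and ionization (S2), so the weighted sum W * (S1/g1 + S2/g2) recovers the deposited energy even though each channel alone fluctuates event by event. The values of W, g1, g2 and the event-splitting model below are made-up illustrative numbers, not DarkSide-50 quantities.

```python
import random

# Toy two-phase TPC: quanta split randomly between scintillation and
# ionization, yet the gain-weighted sum reconstructs the energy exactly.

W_KEV = 19.5e-3        # energy per quantum in keV (illustrative)
G1, G2 = 0.15, 20.0    # detected signal per scintillation / ionization quantum

rng = random.Random(7)

def simulate_event(energy_kev):
    quanta = int(energy_kev / W_KEV)
    n_ion = int(quanta * rng.uniform(0.3, 0.7))   # event-by-event split
    n_exc = quanta - n_ion                        # the rest scintillates
    return G1 * n_exc, G2 * n_ion                 # observed (S1, S2)

def reconstruct(s1, s2):
    return W_KEV * (s1 / G1 + s2 / G2)

energies = [reconstruct(*simulate_event(100.0)) for _ in range(100)]
```

Either signal taken alone would show a broad spread over these events; the anticorrelated combination collapses them onto the true energy, which is why the global variable sharpens the reconstructed spectrum.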

  15. Calibrating the sqHIMMELI v1.0 wetland methane emission model with hierarchical modeling and adaptive MCMC

    Science.gov (United States)

    Susiluoto, Jouni; Raivonen, Maarit; Backman, Leif; Laine, Marko; Makela, Jarmo; Peltola, Olli; Vesala, Timo; Aalto, Tuula

    2018-03-01

    the early spring net primary production could be used to predict parameters affecting the annual methane production. Even though the calibration is specific to the Siikaneva site, the hierarchical modeling approach is well suited for larger-scale studies, and the results of the estimation pave the way for a regional- or global-scale Bayesian calibration of wetland emission models.
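The Bayesian calibration loop underlying the record above can be shown with a minimal, non-adaptive Metropolis sampler: one hypothetical emission parameter r, a flat prior, and a Gaussian likelihood around synthetic flux observations. The adaptive MCMC used in the paper additionally tunes the proposal during sampling; everything numeric here is invented.

```python
import math
import random

# Minimal Metropolis sketch of Bayesian parameter calibration against
# synthetic observations; the true parameter value is 2.0.

rng = random.Random(3)
obs = [rng.gauss(2.0, 0.3) for _ in range(50)]    # synthetic CH4 fluxes

def log_post(r):
    if not 0.0 < r < 10.0:                        # flat prior on (0, 10)
        return -math.inf
    return -sum((o - r) ** 2 for o in obs) / (2.0 * 0.3 ** 2)

chain, r = [], 5.0                                # deliberately bad start
lp = log_post(r)
for _ in range(5000):
    prop = r + rng.gauss(0.0, 0.2)                # random-walk proposal
    lp_prop = log_post(prop)
    if math.log(rng.random()) < lp_prop - lp:     # Metropolis accept/reject
        r, lp = prop, lp_prop
    chain.append(r)

posterior_mean = sum(chain[1000:]) / 4000         # discard burn-in
```

The hierarchical extension in the paper replaces the single parameter with site-level parameters drawn from shared hyperparameters, but the accept/reject core stays the same.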

  16. Calibration of hydrological models using flow-duration curves

    Directory of Open Access Journals (Sweden)

    I. K. Westerberg

    2011-07-01

    Full Text Available The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested – based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments, with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage of the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of
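The FDC construction and the volume-based EP selection described above (the variant the study found best) can be sketched directly: sort discharges in decreasing order, then pick the flow at each equal increment of cumulative volume. The discharge series below is synthetic.

```python
# Build a flow-duration curve and pick evaluation points at equal
# increments of cumulative volume; synthetic discharge data.

def flow_duration_curve(q):
    return sorted(q, reverse=True)               # flow vs. exceedance rank

def volume_spaced_eps(fdc, n_eps):
    total = sum(fdc)
    targets = [total * (i + 1) / (n_eps + 1) for i in range(n_eps)]
    eps, cum, t = [], 0.0, 0
    for flow in fdc:
        cum += flow
        while t < n_eps and cum >= targets[t]:   # crossed the next volume target
            eps.append(flow)
            t += 1
    return eps

q = [1, 2, 50, 3, 2, 40, 5, 1, 2, 30, 4, 2, 20, 3, 10]   # synthetic discharges
eps = volume_spaced_eps(flow_duration_curve(q), n_eps=3)
```

Because high flows carry most of the volume, equal-volume spacing concentrates the evaluation points in the high-flow part of the curve, unlike equal-discharge spacing.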

  17. Calibration by Hydrological Response Unit of a National Hydrologic Model to Improve Spatial Representation and Distribution of Parameters

    Science.gov (United States)

    Norton, P. A., II

    2015-12-01

    The U.S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds, and is used for the NHM application. For PRMS each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets, such as streamflow, snow water equivalent (SWE) and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g. the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.

  18. Assessment of the terrestrial water balance using the global water availability and use model WaterGAP - status and challenges

    Science.gov (United States)

    Müller Schmied, Hannes; Döll, Petra

    2017-04-01

    The estimation of the world's water resources has a long tradition, and numerous methods for quantification exist. The resulting numbers vary significantly, leaving room for improvement. For some decades, global hydrological models (GHMs) have been used for large-scale water budget assessments. GHMs are designed to represent macro-scale hydrological processes, and many of those models include human water management, e.g. irrigation or reservoir operation, making them currently the first choice for global-scale assessments of the terrestrial water balance within the Anthropocene. The Water - Global Assessment and Prognosis (WaterGAP) is a model framework that comprises both the natural and human water dimension and has been in development and application since the 1990s. In recent years, efforts were made to assess the sensitivity of water balance components to alternative climate forcing input data and, e.g., how this sensitivity is affected by WaterGAP's calibration scheme. This presentation shows the current best estimate of terrestrial water balance components as simulated with WaterGAP by 1) assessing global and continental water balance components for the climate period 1971-2000 and the IPCC reference period 1986-2005 for the most current WaterGAP version using homogenized climate forcing data, 2) investigating variations of water balance components for a number of state-of-the-art climate forcing data sets and 3) discussing the benefit of the calibration approach for a better observation-constrained global water budget. For the most current WaterGAP version 2.2b and a homogenized combination of the two WATCH Forcing Datasets, global-scale (excluding Antarctica and Greenland) river discharge into oceans and inland sinks (Q) is assessed to be 40 000 km³ yr⁻¹ for 1971-2000 and 39 200 km³ yr⁻¹ for 1986-2005. Actual evapotranspiration (AET) is similar for the two periods, at around 70 600 (70 700) km³ yr⁻¹, as is water consumption, at 1000 (1100) km³ yr⁻¹. The

  19. Logarithmic transformed statistical models in calibration

    International Nuclear Information System (INIS)

    Zeis, C.D.

    1975-01-01

    A general type of statistical model used for calibration of instruments having the property that the standard deviations of the observed values increase as a function of the mean value is described. The application to the Helix Counter at the Rocky Flats Plant is primarily from a theoretical point of view. The Helix Counter measures the amount of plutonium in certain types of chemicals. The method described can be used also for other calibrations. (U.S.)
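The log-transform idea in the record above is easy to demonstrate: when an instrument's standard deviation grows in proportion to the mean, taking logs makes the noise roughly constant, so an ordinary straight-line fit on the log scale is appropriate and can be inverted to calibrate new readings. The standards and the 5% relative-noise model below are invented for illustration, not Helix Counter data.

```python
import math
import random

# Calibration on the log scale for an instrument with multiplicative
# (mean-proportional) noise, then inversion: reading -> estimated amount.

rng = random.Random(11)
standards = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0]   # known amounts
readings = [s * math.exp(rng.gauss(0.0, 0.05)) for s in standards]

lx = [math.log(s) for s in standards]
ly = [math.log(r) for r in readings]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
slope = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) \
        / sum((a - mx) ** 2 for a in lx)                # least squares on logs
intercept = my - slope * mx

def calibrated(reading):
    """Invert the fitted log-log line: instrument reading -> amount."""
    return math.exp((math.log(reading) - intercept) / slope)
```

Fitting the same data on the raw scale would let the large standards dominate the fit; the log transform equalises the influence of small and large standards, which is the point of the model class described above.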

  20. Cosmic CARNage I: on the calibration of galaxy formation models

    Science.gov (United States)

    Knebe, Alexander; Pearce, Frazer R.; Gonzalez-Perez, Violeta; Thomas, Peter A.; Benson, Andrew; Asquith, Rachel; Blaizot, Jeremy; Bower, Richard; Carretero, Jorge; Castander, Francisco J.; Cattaneo, Andrea; Cora, Sofía A.; Croton, Darren J.; Cui, Weiguang; Cunnama, Daniel; Devriendt, Julien E.; Elahi, Pascal J.; Font, Andreea; Fontanot, Fabio; Gargiulo, Ignacio D.; Helly, John; Henriques, Bruno; Lee, Jaehyun; Mamon, Gary A.; Onions, Julian; Padilla, Nelson D.; Power, Chris; Pujol, Arnau; Ruiz, Andrés N.; Srisawat, Chaichalit; Stevens, Adam R. H.; Tollet, Edouard; Vega-Martínez, Cristian A.; Yi, Sukyoung K.

    2018-04-01

    We present a comparison of nine galaxy formation models, eight semi-analytical, and one halo occupation distribution model, run on the same underlying cold dark matter simulation (cosmological box of comoving width 125 h⁻¹ Mpc, with a dark-matter particle mass of 1.24 × 10⁹ h⁻¹ M⊙) and the same merger trees. While their free parameters have been calibrated to the same observational data sets using two approaches, they nevertheless retain some `memory' of any previous calibration that served as the starting point (especially for the manually tuned models). For the first calibration, models reproduce the observed z = 0 galaxy stellar mass function (SMF) within 3σ. The second calibration extended the observational data to include the z = 2 SMF alongside the z ~ 0 star formation rate function, cold gas mass, and the black hole-bulge mass relation. Encapsulating the observed evolution of the SMF from z = 2 to 0 is found to be very hard within the context of the physics currently included in the models. We finally use our calibrated models to study the evolution of the stellar-to-halo mass (SHM) ratio. For all models, we find that the peak value of the SHM relation decreases with redshift. However, the trends seen for the evolution of the peak position as well as the mean scatter in the SHM relation are rather weak and strongly model dependent. Both the calibration data sets and model results are publicly available.

  1. A high resolution global scale groundwater model

    Science.gov (United States)

    de Graaf, Inge; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc

    2014-05-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater storage provides a large natural buffer against water shortage and sustains flows to rivers and wetlands, supporting ecosystem habitats and biodiversity. Yet, the current generation of global scale hydrological models (GHMs) does not include a groundwater flow component, although it is a crucial part of the hydrological cycle. Thus, a realistic physical representation of the groundwater system that allows for the simulation of groundwater head dynamics and lateral flows is essential for GHMs that increasingly run at finer resolution. In this study we present a global groundwater model with a resolution of 5 arc-minutes (approximately 10 km at the equator) using MODFLOW (McDonald and Harbaugh, 1988). With this global groundwater model we eventually intend to simulate the changes in the groundwater system over time that result from variations in recharge and abstraction. Aquifer schematization and properties of this groundwater model were developed from available global lithological maps and datasets (Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, 2013), combined with our estimate of aquifer thickness for sedimentary basins. We forced the groundwater model with the output from the global hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the net groundwater recharge and average surface water levels derived from routed channel discharge. For the parameterization, we relied entirely on available global datasets and did not calibrate the model, so that it can equally be applied to data-poor environments. Based on our sensitivity analysis, in which we ran the model with various hydrogeological parameter settings, we observed that most variance in groundwater
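
    MODFLOW-style models reduce to solving the groundwater flow equation on a grid with finite differences. As a minimal illustration of that numerical idea (not the global model described above), the sketch below solves a steady one-dimensional confined aquifer with fixed river-stage boundary heads and uniform recharge; the transmissivity, recharge, and head values are invented for the example.

    ```python
    import numpy as np

    # 1-D steady confined aquifer: T * d2h/dx2 + R = 0, with fixed heads at
    # both ends (e.g. river stages supplied by a surface-water model) and
    # uniform recharge R, discretised with central finite differences.
    n, L = 101, 10_000.0             # number of nodes, domain length [m]
    dx = L / (n - 1)
    T = 500.0                        # transmissivity [m^2/day] (assumed)
    R = 0.001                        # recharge [m/day] (assumed)
    h_left, h_right = 20.0, 15.0     # boundary heads [m] (assumed)

    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0        # Dirichlet boundary conditions
    b[0], b[-1] = h_left, h_right
    for i in range(1, n - 1):        # interior water balance per cell
        A[i, i - 1] = A[i, i + 1] = T / dx**2
        A[i, i] = -2.0 * T / dx**2
        b[i] = -R
    h = np.linalg.solve(A, b)
    print(h.max())                   # the head mounds between the two rivers
    ```

    Because the exact solution is quadratic in x, the central-difference scheme reproduces it at the nodes; real MODFLOW grids simply extend this same cell-by-cell balance to three dimensions with heterogeneous properties.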

  2. Stochastic calibration and learning in nonstationary hydroeconomic models

    Science.gov (United States)

    Maneta, M. P.; Howitt, R.

    2014-05-01

    Concern about water scarcity and adverse climate events over agricultural regions has motivated a number of efforts to develop operational integrated hydroeconomic models to guide adaptation and optimal use of water. Once calibrated, these models are used for water management and analysis assuming they remain valid under future conditions. In this paper, we present and demonstrate a methodology that permits the recursive calibration of economic models of agricultural production from noisy but frequently available data. We use a standard economic calibration approach, namely positive mathematical programming (PMP), integrated into a data assimilation algorithm based on the ensemble Kalman filter (EnKF) equations to identify the economic model parameters. A moving average kernel ensures that new and past information on agricultural activity are blended during the calibration process, avoiding loss of information and overcalibration for the conditions of a single year. A regularization constraint akin to standard Tikhonov regularization is included in the filter to ensure its stability even in the presence of parameters with low sensitivity to observations. The results show that the implementation of the PMP methodology within a data assimilation framework based on the EnKF equations is an effective method to calibrate models of agricultural production even with noisy information. The recursive nature of the method incorporates new information as an added value to the known previous observations of agricultural activity without the need to store historical information. The robustness of the method opens the door to the use of new remote sensing algorithms for operational water management.
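
    The core of such an approach is the EnKF analysis step, which nudges an ensemble of parameter vectors toward noisy observations of model output. The sketch below is a generic, illustrative implementation (not the authors' PMP-specific filter): the toy "production model" h, the parameter values, and the noise levels are all invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def enkf_parameter_update(theta_ens, obs, obs_noise_std, h):
        """One EnKF analysis step for parameter estimation.

        theta_ens : (n_ens, n_par) ensemble of parameter vectors
        obs       : (n_obs,) noisy observation vector
        h         : forward model mapping a parameter vector to predicted obs
        """
        n_ens, n_obs = theta_ens.shape[0], len(obs)
        y_ens = np.array([h(t) for t in theta_ens])       # predicted obs per member
        A = theta_ens - theta_ens.mean(axis=0)            # parameter anomalies
        Y = y_ens - y_ens.mean(axis=0)                    # output anomalies
        C_ty = A.T @ Y / (n_ens - 1)                      # cross-covariance
        C_yy = Y.T @ Y / (n_ens - 1) + obs_noise_std**2 * np.eye(n_obs)
        K = C_ty @ np.linalg.inv(C_yy)                    # Kalman gain
        perturbed = obs + rng.normal(0, obs_noise_std, size=(n_ens, n_obs))
        return theta_ens + (perturbed - y_ens) @ K.T      # perturbed-obs update

    # Toy "production model": output is linear in two unknown parameters.
    true_theta = np.array([2.0, -1.0])
    h = lambda t: np.array([t[0] + t[1], t[0] - t[1], 2.0 * t[0]])
    obs = h(true_theta) + rng.normal(0, 0.05, size=3)

    ens = rng.normal(0, 2.0, size=(200, 2))               # vague prior ensemble
    for _ in range(5):                                    # recursive assimilation
        ens = enkf_parameter_update(ens, obs, 0.05, h)
    print(ens.mean(axis=0))                               # close to true_theta
    ```

    In an operational setting each assimilation cycle would use the latest season's observations rather than re-using one vector, and a kernel over past ensembles would blend old and new information as the abstract describes.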

  3. High Accuracy Transistor Compact Model Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of the uncertainties in these margins. Given this need, new high-accuracy calibration techniques for bipolar junction transistor models have been developed and are described in this report.

  4. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    Science.gov (United States)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, with roughly 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, with roughly 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of successive satellites and to obtain a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error, which can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to infer this error. Accounting for it decreases the global temperature trend by about 0.07 K/decade. In addition, there are systematic time-dependent errors in the data that are introduced by the drift in the satellite orbital geometry: the drift samples the diurnal cycle in temperature and is also associated with drift-related changes in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. 
In one path the

  5. Predicting the fate of micropollutants during wastewater treatment: Calibration and sensitivity analysis.

    Science.gov (United States)

    Baalbaki, Zeina; Torfs, Elena; Yargeau, Viviane; Vanrolleghem, Peter A

    2017-12-01

    The presence of micropollutants in the environment and their toxic impacts on the aquatic environment have raised concern about their inefficient removal in wastewater treatment plants. In this study, the fate of micropollutants of four different classes was simulated in a conventional activated sludge plant using a bioreactor micropollutant fate model coupled to a settler model. The latter was based on the Bürger-Diehl model extended for the first time to include micropollutant fate processes. Calibration of model parameters was completed by matching modelling results with full-scale measurements (i.e. including aqueous and particulate phase concentrations of micropollutants) obtained from a 4-day sampling campaign. Modelling results showed that further biodegradation takes place in the sludge blanket of the settler for the highly biodegradable caffeine, underlining the need for a reactive settler model. The adopted Monte Carlo based calibration approach also provided an overview of the model's global sensitivity to the parameters. This analysis showed that for each micropollutant and according to the dominant fate process, a different set of one or more parameters had a significant impact on the model fit, justifying the selection of parameter subsets for model calibration. A dynamic local sensitivity analysis was also performed with the calibrated parameters. This analysis supported the conclusions from the global sensitivity and provided guidance for future sampling campaigns. This study expands the understanding of micropollutant fate models when applied to different micropollutants, in terms of global and local sensitivity to model parameters, as well as the identifiability of the parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
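
    The Monte Carlo based calibration idea can be sketched generically: sample many parameter sets, score each against the measurements, and keep the best-fitting ("behavioural") sets, whose spread also reveals which parameters the fit is sensitive to. The toy two-parameter fate model below (linear sorption plus first-order biodegradation) and all parameter names and values are invented for illustration, not taken from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical fate model: linear sorption to solids, then first-order
    # biodegradation over the hydraulic retention time.  Values are invented.
    def fate_model(k_bio, K_d, c_in=1.0, hrt=0.5, tss=3.0):
        dissolved = c_in / (1.0 + K_d * tss)          # sorption partitioning
        return dissolved * np.exp(-k_bio * hrt)       # biodegradation over HRT

    obs = fate_model(2.0, 0.1) + rng.normal(0.0, 0.01, size=5)   # 5 "samples"

    # Monte Carlo calibration: sample the parameter space, score every set.
    k_bio = rng.uniform(0.0, 5.0, 20000)
    K_d = rng.uniform(0.0, 0.5, 20000)
    pred = fate_model(k_bio, K_d)
    sse = ((obs[:, None] - pred) ** 2).sum(axis=0)    # fit to all measurements

    # Keep the best-fitting 1% as "behavioural" parameter sets.
    best = np.argsort(sse)[:200]
    print(k_bio[best].mean(), K_d[best].mean())

    # Crude global sensitivity: how strongly each parameter drives the error.
    print(np.corrcoef(k_bio, sse)[0, 1], np.corrcoef(K_d, sse)[0, 1])
    ```

    On this toy problem the retained sets trace a trade-off between the two parameters (equifinality): many (k_bio, K_d) pairs reproduce the observations equally well, which is exactly why a per-compound sensitivity analysis, as in the abstract, is needed before selecting a parameter subset for calibration.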

  6. A calibration hierarchy for risk models was defined: from utopia to empirical data.

    Science.gov (United States)

    Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W

    2016-06-01

    Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, stimulating the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
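
    Mean and weak calibration are commonly summarised by the calibration intercept and slope, obtained by regressing observed outcomes on the logit of the predicted risks: intercept near 0 and slope near 1 indicate weak calibration. The sketch below illustrates this on simulated data; it is an illustration of the concepts, not the authors' code, and the Newton-Raphson fit and data-generating settings are our own choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def logit(p): return np.log(p / (1.0 - p))
    def expit(z): return 1.0 / (1.0 + np.exp(-z))

    def calibration_intercept_slope(p_hat, y, iters=50):
        """Fit y ~ expit(a + b*logit(p_hat)) by Newton-Raphson.
        (a, b) near (0, 1) indicates weak calibration."""
        x = logit(p_hat)
        a, b = 0.0, 1.0
        for _ in range(iters):
            mu = expit(a + b * x)
            w = mu * (1.0 - mu)
            g = np.array([np.sum(y - mu), np.sum((y - mu) * x)])   # score
            H = np.array([[np.sum(w), np.sum(w * x)],
                          [np.sum(w * x), np.sum(w * x * x)]])     # information
            da, db = np.linalg.solve(H, g)
            a, b = a + da, b + db
        return a, b

    # A validation set where the predictions are perfectly calibrated ...
    n = 20000
    p_true = rng.uniform(0.05, 0.95, n)
    y = (rng.uniform(size=n) < p_true).astype(float)
    a, b = calibration_intercept_slope(p_true, y)
    print(round(a, 2), round(b, 2))          # near (0, 1)

    # ... and one where predictions are overconfident (too extreme):
    p_over = expit(2.0 * logit(p_true))
    a2, b2 = calibration_intercept_slope(p_over, y)
    print(round(b2, 2))                      # slope well below 1
    ```

    Note that intercept and slope only test weak calibration; moderate calibration additionally requires the flexible (e.g. loess-based) calibration curve to follow the diagonal, which is exactly the assessment the abstract warns is unstable in small validation sets.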

  7. Bayesian calibration of the Community Land Model using surrogates

    Energy Technology Data Exchange (ETDEWEB)

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Swiler, Laura Painton

    2014-02-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
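
    The surrogate-plus-MCMC recipe can be sketched generically: replace the expensive model with a cheap fit over a small parameter design, then run a Metropolis sampler against the surrogate likelihood. Everything below (the one-parameter stand-in "model", the polynomial surrogate, the noise level) is invented for illustration and is far simpler than the CLM setup described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Stand-in for the expensive model (e.g. latent heat flux vs. one
    # hydrological parameter); monotonic so the toy problem is identifiable.
    def expensive_model(theta):
        return 0.5 * theta + np.sin(theta / 2.0)

    # 1. Build a cheap polynomial surrogate from a small design of model runs.
    design = np.linspace(0.0, 3.0, 15)
    runs = expensive_model(design)                 # the "expensive" evaluations
    coeffs = np.polyfit(design, runs, deg=4)
    surrogate = lambda theta: np.polyval(coeffs, theta)

    # 2. One synthetic observation with known measurement noise.
    theta_true, sigma = 1.7, 0.05
    obs = expensive_model(theta_true) + rng.normal(0.0, sigma)

    # 3. Metropolis sampling of the surrogate posterior (uniform prior on [0, 3]).
    def log_post(theta):
        if not 0.0 <= theta <= 3.0:
            return -np.inf
        return -0.5 * ((obs - surrogate(theta)) / sigma) ** 2

    chain, theta = [], 0.5
    lp = log_post(theta)
    for _ in range(20000):
        prop = theta + rng.normal(0.0, 0.2)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    post = np.array(chain[5000:])                  # discard burn-in
    print(post.mean())                             # should land near theta_true
    ```

    The point of the surrogate is that the 20,000 posterior evaluations cost polynomial evaluations, not model runs; only the 15 design-point runs touch the expensive code.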

  8. Application of heuristic and machine-learning approach to engine model calibration

    Science.gov (United States)

    Cheng, Jie; Ryu, Kwang R.; Newman, C. E.; Davis, George C.

    1993-03-01

    Automation of engine model calibration procedures is a very challenging task because (1) the calibration process searches for a goal state in a huge, continuous state space, (2) calibration is often a lengthy and frustrating task because of complicated mutual interference among the target parameters, and (3) the calibration problem is heuristic by nature, and often heuristic knowledge for constraining a search cannot be easily acquired from domain experts. A combined heuristic and machine learning approach has, therefore, been adopted to improve the efficiency of model calibration. We developed an intelligent calibration program called ICALIB. It has been used on a daily basis for engine model applications, and has reduced the time required for model calibrations from many hours to a few minutes on average. In this paper, we describe the heuristic control strategies employed in ICALIB such as a hill-climbing search based on a state distance estimation function, incremental problem solution refinement by using a dynamic tolerance window, and calibration target parameter ordering for guiding the search. In addition, we present the application of a machine learning program called GID3* for automatic acquisition of heuristic rules for ordering target parameters.
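
    A minimal sketch of a hill-climbing search with a dynamic tolerance window, loosely in the spirit of the strategies described above (the actual ICALIB heuristics, such as the learned parameter ordering, are richer); the toy distance-to-goal function and all tuning constants are assumptions.

    ```python
    import numpy as np

    def tolerance_window_search(f, x0, step=0.5, tol0=1.0, tol_min=1e-6, shrink=0.5):
        """Hill-climbing calibration: drive the distance-to-goal f(x) below a
        tolerance that is tightened each round, refining the step size with it."""
        x = np.array(x0, float)
        tol = tol0
        while tol >= tol_min:
            improved = True
            while f(x) > tol and improved:
                improved = False
                for i in range(len(x)):            # try each parameter in order
                    for delta in (step, -step):
                        trial = x.copy()
                        trial[i] += delta
                        if f(trial) < f(x):        # greedy: keep any improvement
                            x, improved = trial, True
            tol *= shrink                          # tighten the window ...
            step *= shrink                         # ... and refine the step size
        return x

    # Distance-to-goal for a toy two-parameter "engine model" calibration.
    target = np.array([1.2, -0.7])
    f = lambda x: float(np.sum((x - target) ** 2))
    x = tolerance_window_search(f, [0.0, 0.0])
    print(x, f(x))                                 # close to the target state
    ```

    The shrinking tolerance plays the role of incremental solution refinement: early rounds make coarse progress cheaply, and only the final rounds pay for fine-grained moves.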

  9. Model Calibration of Exciter and PSS Using Extended Kalman Filter

    Energy Technology Data Exchange (ETDEWEB)

    Kalsi, Karanjit; Du, Pengwei; Huang, Zhenyu

    2012-07-26

    Power system modeling and controls continue to become more complex with the advent of smart grid technologies and large-scale deployment of renewable energy resources. As demonstrated in recent studies, inaccurate system models could lead to large-scale blackouts, thereby motivating the need for model calibration. Current methods of model calibration rely on manual tuning based on engineering experience, are time consuming and could yield inaccurate parameter estimates. In this paper, the Extended Kalman Filter (EKF) is used as a tool to calibrate exciter and Power System Stabilizer (PSS) models of a particular type of machine in the Western Electricity Coordinating Council (WECC). The EKF-based parameter estimation is a recursive prediction-correction process which uses the mismatch between simulation and measurement to adjust the model parameters at every time step. Numerical simulations using actual field test data demonstrate the effectiveness of the proposed approach in calibrating the parameters.
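
    The recursive prediction-correction idea can be illustrated by augmenting the state vector with the unknown parameter and running a standard EKF over it. The sketch below calibrates the single coefficient of a toy first-order system rather than exciter/PSS models; all system and noise values are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # True system: x[k+1] = a*x[k] + u[k]; we observe x with noise and want
    # to calibrate the unknown model parameter a recursively from the data.
    a_true, n_steps = 0.8, 200
    u = rng.normal(0.0, 1.0, n_steps)               # known input sequence
    x = np.zeros(n_steps + 1)
    for k in range(n_steps):
        x[k + 1] = a_true * x[k] + u[k]
    z = x[1:] + rng.normal(0.0, 0.1, n_steps)       # noisy measurements

    # EKF over the augmented state s = [x, a]; dynamics f(s) = [a*x + u, a].
    s = np.array([0.0, 0.0])                        # initial guess: a = 0
    P = np.diag([1.0, 1.0])                         # state covariance
    Q = np.diag([1e-4, 1e-6])                       # small process noise
    R = 0.1 ** 2                                    # measurement noise variance
    H = np.array([[1.0, 0.0]])                      # we measure x only

    for k in range(n_steps):
        # Predict: propagate the augmented state, linearise f around it.
        F = np.array([[s[1], s[0]],                 # Jacobian d f / d [x, a]
                      [0.0, 1.0]])
        s = np.array([s[1] * s[0] + u[k], s[1]])
        P = F @ P @ F.T + Q
        # Correct: standard Kalman update with the mismatch to the measurement.
        innov = z[k] - H @ s
        S = H @ P @ H.T + R
        K = P @ H.T / S                             # Kalman gain (2x1)
        s = s + (K * innov).ravel()
        P = (np.eye(2) - K @ H) @ P

    print(s[1])                                     # calibrated estimate of a
    ```

    The loop is exactly the prediction-correction cycle the abstract describes: at every time step the simulation-measurement mismatch (`innov`) adjusts the parameter estimate through the Kalman gain.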

  10. European extra-tropical storm damage risk from a multi-model ensemble of dynamically-downscaled global climate models

    Science.gov (United States)

    Haylock, M. R.

    2011-10-01

    Uncertainty in the return levels of insured loss from European wind storms was quantified using storms derived from twenty-two 25 km regional climate model runs driven by either the ERA40 reanalyses or one of four coupled atmosphere-ocean global climate models. Storms were identified using a model-dependent storm severity index based on daily maximum 10 m wind speed. The wind speed from each model was calibrated to a set of 7 km historical storm wind fields using the 70 storms with the highest severity index in the period 1961-2000, employing a two stage calibration methodology. First, the 25 km daily maximum wind speed was downscaled to the 7 km historical model grid using the 7 km surface roughness length and orography, also adopting an empirical gust parameterisation. Secondly, downscaled wind gusts were statistically scaled to the historical storms to match the geographically-dependent cumulative distribution function of wind gust speed. The calibrated wind fields were run through an operational catastrophe reinsurance risk model to determine the return level of loss to a European population density-derived property portfolio. The risk model produced a 50-yr return level of loss of between 0.025% and 0.056% of the total insured value of the portfolio.
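
    The second calibration stage, scaling winds to match a geographically dependent cumulative distribution function, is essentially empirical quantile mapping. A generic sketch follows (not the authors' implementation; the synthetic "observed" and "model" wind distributions are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def quantile_map(model_vals, obs_vals, new_vals):
        """Empirical quantile mapping: transform `new_vals` so that the model
        distribution matches the observed one, by pushing each value through
        the model CDF and then the inverse observed CDF."""
        model_sorted = np.sort(model_vals)
        obs_sorted = np.sort(obs_vals)
        # Non-exceedance probability of each new value under the model CDF
        p = np.searchsorted(model_sorted, new_vals, side="right") / len(model_sorted)
        p = np.clip(p, 0.0, 1.0)
        # Invert the observed CDF at those probabilities
        return np.quantile(obs_sorted, p)

    # Model winds biased low and under-dispersed relative to "observed" gusts.
    obs = rng.gamma(shape=9.0, scale=3.0, size=5000)            # historical gusts
    model = 0.6 * rng.gamma(shape=9.0, scale=3.0, size=5000) + 5.0

    corrected = quantile_map(model, obs, model)
    print(model.mean(), obs.mean(), corrected.mean())
    ```

    In the study this mapping would be fitted separately per grid cell, so that the correction preserves the geographic structure of the gust climatology before losses are computed.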

  12. Multi-Site Calibration of Linear Reservoir Based Geomorphologic Rainfall-Runoff Models

    Directory of Open Access Journals (Sweden)

    Bahram Saeidifarzad

    2014-09-01

    Multi-site optimization of two adapted event-based geomorphologic rainfall-runoff models was presented using the Non-dominated Sorting Genetic Algorithm (NSGA-II) method for the South Fork Eel River watershed, California. The first model was developed based on an Unequal Cascade of Reservoirs (UECR) and the second model was presented as a modified version of the Geomorphological Unit Hydrograph based on Nash's model (GUHN). Two calibration strategies, semi-lumped and semi-distributed, were considered for imposing (or not imposing) the geomorphology relations in the models. The results of the models were compared with Nash's model. Obtained results using the observed data of two stations in the multi-site optimization framework showed reasonable efficiency values in both the calibration and the verification steps. The outcomes also showed that semi-distributed calibration of the modified GUHN model slightly outperformed the other models in both upstream and downstream stations during calibration. Both calibration strategies for the developed UECR model during the verification phase showed slightly better performance in the downstream station, but in the upstream station, the modified GUHN model in the semi-lumped strategy slightly outperformed the other models. The semi-lumped calibration strategy could lead to logical lag time parameters related to the basin geomorphology and may be more suitable for data-based statistical analyses of the rainfall-runoff process.

  13. SURFplus Model Calibration for PBX 9502

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-12-06

    The SURFplus reactive burn model is calibrated for the TATB-based explosive PBX 9502 at three initial temperatures: hot (75 C), ambient (23 C) and cold (-55 C). The CJ state depends on the initial temperature due to the variation in the initial density and initial specific energy of the PBX reactants. For the reactants, a porosity model for full-density TATB is used. This allows the initial PBX density to be set to its measured value even though the coefficients of thermal expansion of the TATB and the PBX differ. The PBX products EOS is taken as independent of the initial PBX state. The initial temperature also affects the sensitivity to shock initiation. The model rate parameters are calibrated to Pop plot data, the failure diameter, the limiting detonation speed just above the failure diameter, and curvature effect data for small curvature.

  14. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Justin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hund, Lauren [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.

  15. Using Active Learning for Speeding up Calibration in Simulation Models.

    Science.gov (United States)

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
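
    The essence of the approach is a loop in which a cheap learner picks the next batch of parameter combinations to simulate. The sketch below substitutes a k-nearest-neighbour score for the neural-network learner and uses a toy two-parameter "simulation"; the grid size, batch size, and tolerance are invented and are not those of the UWBCS study.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Stand-in "simulation": a parameter pair is acceptable when its output
    # lands close to the observed calibration target.
    simulate = lambda P: P[:, 0] ** 2 + 0.5 * P[:, 1]
    target, tol = 1.0, 0.05
    acceptable = lambda P: np.abs(simulate(P) - target) < tol

    # The full grid that an exhaustive calibration would have to evaluate.
    g = np.linspace(0.0, 2.0, 60)
    grid = np.stack(np.meshgrid(g, g)).reshape(2, -1).T      # 3600 combinations

    def knn_score(candidates, X, y, k=10):
        """Predicted acceptability = mean label of the k nearest evaluated
        combinations (a simple stand-in for the neural-network learner)."""
        d = np.linalg.norm(candidates[:, None, :] - X[None, :, :], axis=2)
        nn = np.argsort(d, axis=1)[:, :k]
        return y[nn].mean(axis=1)

    evaluated = list(rng.choice(len(grid), 150, replace=False))  # seed batch
    labels = acceptable(grid[evaluated]).astype(float)
    for _ in range(10):                                      # learning rounds
        scores = knn_score(grid, grid[evaluated], labels)
        scores += rng.uniform(0.0, 1e-6, len(scores))        # random tie-break
        scores[evaluated] = -np.inf                          # no re-evaluation
        batch = np.argsort(scores)[-80:]                     # most promising
        evaluated += batch.tolist()
        labels = np.r_[labels, acceptable(grid[batch]).astype(float)]

    found, total = int(labels.sum()), int(acceptable(grid).sum())
    print(f"evaluated {len(evaluated)}/{len(grid)}, "
          f"found {found}/{total} acceptable combinations")
    ```

    As in the abstract, the payoff is that most acceptable combinations are found while simulating only a small fraction of the grid, because each round concentrates effort near combinations already known to match.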

  16. Simulating vegetation response to climate change in the Blue Mountains with MC2 dynamic global vegetation model

    Directory of Open Access Journals (Sweden)

    John B. Kim

    2018-04-01

    Warming temperatures are projected to greatly alter many forests in the Pacific Northwest. MC2 is a dynamic global vegetation model: a climate-aware, process-based, and gridded vegetation model. We calibrated and ran MC2 simulations for the Blue Mountains Ecoregion, Oregon, USA, at 30 arc-second spatial resolution. We calibrated MC2 using the best available spatial datasets from land managers. We ran future simulations using climate projections from four global circulation models (GCMs) under representative concentration pathway 8.5. Under this scenario, forest productivity is projected to increase as the growing season lengthens, and fire occurrence is projected to increase steeply throughout the century, with burned area peaking early- to mid-century. Subalpine forests are projected to disappear, and the coniferous forests to contract by 32.8%. Large portions of the dry and mesic forests are projected to convert to woodlands, unless precipitation were to increase. Low levels of change are projected for the Umatilla National Forest, consistently across the four GCMs. For the Wallowa-Whitman and the Malheur National Forests, forest conversions are projected to vary more across the four GCM-based simulations, reflecting high levels of uncertainty arising from climate. For simulations based on three of the four GCMs, sharply increased fire activity results in decreases in forest carbon stocks by mid-century, and the fire activity catalyzes widespread biome shift across the study area. We document the full cycle of a structured approach to calibrating and running MC2 for transparency and to serve as a template for applications of MC2. Keywords: Climate change, Regional change, Simulation, Calibration, Forests, Fire, Dynamic global vegetation model

  17. Grid based calibration of SWAT hydrological models

    Directory of Open Access Journals (Sweden)

    D. Gorgan

    2012-07-01

    The calibration and execution of large hydrological models such as SWAT (Soil and Water Assessment Tool), developed for large areas, high resolutions, and large input data sets, require not only long execution times but also substantial computational resources. The SWAT hydrological model supports studies and predictions of the impact of land management practices on water, sediment, and agricultural chemical yields in complex watersheds. The paper presents the gSWAT application as a practical web-based solution for environmental specialists to calibrate extensive hydrological models and to run scenarios, hiding the complex control of processes and heterogeneous resources across the grid-based high-performance computing infrastructure. The paper highlights the basic functionalities of the gSWAT platform and the features of the graphical user interface. The presentation is concerned with the development of working sessions, interactive control of calibration, direct and basic editing of parameters, process monitoring, and graphical and interactive visualization of the results. The experiments performed on different SWAT models and the results obtained demonstrate the benefits brought by the parallel and distributed grid environment as a solution for the processing platform. All the instances of SWAT models used in the reported experiments have been developed through the enviroGRIDS project, targeting the Black Sea catchment area.

  18. Effect of Kepler calibration on global seismic and background parameters

    Directory of Open Access Journals (Sweden)

    Salabert David

    2017-01-01

    Calibration issues associated with scrambled collateral smear affecting the Kepler short-cadence data were discovered in Data Release 24 and were found to be present in all previous data releases since launch. In consequence, a new Data Release 25 was reprocessed to correct for these problems. We perform here a preliminary study to evaluate the impact of the data release on the extracted global seismic and background parameters. We analyze the sample of seismic solar analogs observed by Kepler in short cadence between Q5 and Q17. We start with this set of stars as it constitutes the best sample to put the Sun into context along its evolution, and any significant differences in the seismic and background parameters need to be investigated before any further studies of this sample can take place. We use the A2Z pipeline to derive both global seismic parameters and background parameters from Data Release 25 and previous data releases, and report on the measured differences.

  19. Calibration of a stochastic health evolution model using NHIS data

    Science.gov (United States)

    Gupta, Aparna; Li, Zhisheng

    2011-10-01

    This paper presents and calibrates an individual's stochastic health evolution model. In this health evolution model, the uncertainty of health incidents is described by a stochastic process with a finite number of possible outcomes. We construct a comprehensive health status index (HSI) to describe an individual's health status, as well as a health risk factor system (RFS) to classify individuals into different risk groups. Based on the maximum likelihood estimation (MLE) method and the method of nonlinear least squares fitting, model calibration is formulated in terms of two mixed-integer nonlinear optimization problems. Using the National Health Interview Survey (NHIS) data, the model is calibrated for specific risk groups. Longitudinal data from the Health and Retirement Study (HRS) is used to validate the calibrated model, which displays good validation properties. The end goal of this paper is to provide a model and methodology, whose output can serve as a crucial component of decision support for strategic planning of health related financing and risk management.
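
    For a health process with a finite number of outcomes, the maximum likelihood calibration of transition probabilities from longitudinal records has a closed form: transition counts normalised by row sums. The sketch below uses a plain three-state Markov chain, far simpler than the HSI/RFS model described above, with an invented transition matrix and panel sizes standing in for survey data.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Three health states; yearly transitions follow a Markov chain whose
    # rows we want to recover by maximum likelihood from panel data.
    P_true = np.array([[0.85, 0.10, 0.05],
                       [0.20, 0.70, 0.10],
                       [0.05, 0.15, 0.80]])

    # Simulate longitudinal records for many individuals (survey stand-in).
    n_people, n_years = 2000, 10
    states = np.zeros((n_people, n_years), dtype=int)
    for t in range(1, n_years):
        for i in range(n_people):
            states[i, t] = rng.choice(3, p=P_true[states[i, t - 1]])

    # MLE for a Markov chain: transition counts normalised by row sums.
    counts = np.zeros((3, 3))
    for a, b in zip(states[:, :-1].ravel(), states[:, 1:].ravel()):
        counts[a, b] += 1
    P_hat = counts / counts.sum(axis=1, keepdims=True)
    print(np.round(P_hat, 2))
    ```

    The mixed-integer formulation in the paper arises because the risk-group classification itself is being chosen jointly with the probabilities; once the groups are fixed, the per-group transition estimates reduce to counts as above.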

  20. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models by calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...

  1. A simple topography-driven, calibration-free runoff generation model

    Science.gov (United States)

    Gao, H.; Birkel, C.; Hrachowitz, M.; Tetzlaff, D.; Soulsby, C.; Savenije, H. H. G.

    2017-12-01

    Determining the amount of runoff generated from rainfall occupies a central place in rainfall-runoff modelling. Moreover, reading landscapes and developing calibration-free runoff generation models that adequately reflect land surface heterogeneities remain the focus of much hydrological research. In this study, we created a new method to estimate runoff generation, the HAND-based Storage Capacity curve (HSC), which uses a topographic index (HAND, Height Above the Nearest Drainage) to identify hydrological similarity and the partially saturated areas of catchments. We then coupled the HSC model with the Mass Curve Technique (MCT) to estimate root zone storage capacity (SuMax), and obtained the calibration-free runoff generation model HSC-MCT. Both models (HSC and HSC-MCT) allow us to estimate runoff generation and simultaneously visualize the spatial dynamics of the saturated area. We tested the two models in the data-rich Bruntland Burn (BB) experimental catchment in Scotland, which has an unusual time series of field-mapped saturation area extent. The models were subsequently tested in 323 MOPEX (Model Parameter Estimation Experiment) catchments in the United States, with HBV and TOPMODEL used as benchmarks. We found that the HSC performed better than TOPMODEL, which is based on the topographic wetness index (TWI), in reproducing the spatio-temporal pattern of the observed saturated areas in the BB catchment. The HSC also outperformed HBV and TOPMODEL in the MOPEX catchments for both calibration and validation. Despite having no calibrated parameters, the HSC-MCT model performed comparably well with the calibrated HBV and TOPMODEL, highlighting both the robustness of the HSC model in describing the spatial distribution of the root zone storage capacity and the efficiency of the MCT method in estimating SuMax. Moreover, the HSC-MCT model facilitated effective visualization of the saturated area, which has the potential to be used for broader

  2. A single model procedure for tank calibration function estimation

    International Nuclear Information System (INIS)

    York, J.C.; Liebetrau, A.M.

    1995-01-01

    Reliable tank calibrations are a vital component of any measurement control and accountability program for bulk materials in a nuclear reprocessing facility. Tank volume calibration functions used in nuclear materials safeguards and accountability programs are typically constructed from several segments, each of which is estimated independently. Ideally, the segments correspond to structural features in the tank. In this paper the authors use an extension of the Thomas-Liebetrau model to estimate the entire calibration function in a single step. This procedure automatically takes significant run-to-run differences into account and yields an estimate of the entire calibration function in one operation. As with other procedures, the first step is to define suitable calibration segments. Next, a polynomial of low degree is specified for each segment. In contrast with the conventional practice of constructing a separate model for each segment, this information is used to set up the design matrix for a single model that encompasses all of the calibration data. Estimation of the model parameters is then done using conventional statistical methods. The method described here has several advantages over traditional methods. First, modeled run-to-run differences can be taken into account automatically at the estimation step. Second, no interpolation is required between successive segments. Third, variance estimates are based on all the data, rather than on data from a single segment, so that discontinuities in confidence intervals at segment boundaries are eliminated. Fourth, the restrictive assumption of the Thomas-Liebetrau method, that the measured volumes be the same for all runs, is not required. Finally, the proposed methods are readily implemented using standard statistical procedures and widely used software packages.
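    The single-model idea can be sketched in a few lines: instead of fitting each segment separately, every observation contributes one row to a common design matrix, with columns for each segment's polynomial terms and dummy columns for run-to-run offsets, and all parameters are estimated in one least-squares step. The example below uses a hypothetical two-segment linear tank with synthetic data; it is an illustration of the single-design-matrix idea, not the Thomas-Liebetrau model itself:

```python
def solve_lstsq(A, b):
    """Least squares via the normal equations and Gauss-Jordan elimination."""
    n, m = len(A[0]), len(A)
    M = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         + [sum(A[k][i] * b[k] for k in range(m))] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def design_row(h, run, breakpoint=10.0, n_runs=2):
    """One row of the single design matrix: intercept/slope columns for the
    segment the height falls in, plus a dummy column per run after the first,
    so run-to-run offsets are estimated jointly with the segments."""
    seg = 0 if h < breakpoint else 1
    row = [0.0, 0.0, 0.0, 0.0]
    row[2 * seg] = 1.0      # segment intercept
    row[2 * seg + 1] = h    # segment slope
    row += [1.0 if run == r else 0.0 for r in range(1, n_runs)]
    return row

# Synthetic data: volume = 2 + 3h below the break, 5 + 2.7h above it,
# and run 1 reads 0.5 units higher than run 0 everywhere.
def true_volume(h, run):
    base = 2.0 + 3.0 * h if h < 10.0 else 5.0 + 2.7 * h
    return base + (0.5 if run == 1 else 0.0)

A, y = [], []
for run in (0, 1):
    for h in (2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0):
        A.append(design_row(h, run))
        y.append(true_volume(h, run))
params = solve_lstsq(A, y)  # [a0, b0, a1, b1, run-1 offset]
```

    Because all rows are fit together, residual variance is estimated from the whole data set rather than segment by segment, which is the source of the smoother confidence intervals noted in the abstract.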

  3. Predictive sensor based x-ray calibration using a physical model

    International Nuclear Information System (INIS)

    Fuente, Matias de la; Lutz, Peter; Wirtz, Dieter C.; Radermacher, Klaus

    2007-01-01

    Many computer-assisted surgery systems are based on intraoperative x-ray images. To achieve reliable and accurate results, these images have to be calibrated for geometric distortions, which can be divided into constant distortions and distortions caused by magnetic fields. Instead of using an intraoperative calibration phantom that has to be visible within each image, resulting in overlaying markers, the presented approach directly takes advantage of the physical origin of the distortions. Based on a computed physical model of an image intensifier and a magnetic field sensor, an online compensation of distortions can be achieved without the need for an intraoperative calibration phantom. The model has to be adapted once to each specific image intensifier through calibration, which is based on an optimization algorithm that systematically alters the physical model parameters until a minimal error is reached. Once calibrated, the model is able to predict the distortions caused by the measured magnetic field vector and to build an appropriate dewarping function. The time needed for model calibration is not yet optimized and takes up to 4 h on a 3 GHz CPU. In contrast, the time needed for distortion correction is less than 1 s and is therefore absolutely acceptable for intraoperative use. First evaluations showed that the model-based dewarping algorithm significantly reduced the distortions of an XRII with a 21 cm FOV. The model was able to predict and compensate approximately 80% of the distortions, to a remaining error of 0.45 mm (max) and 0.19 mm (rms).

  4. Calibration models for high enthalpy calorimetric probes.

    Science.gov (United States)

    Kannel, A

    1978-07-01

    The accuracy of gas-aspirated liquid-cooled calorimetric probes used for measuring the enthalpy of high-temperature gas streams is studied. The error in the differential temperature measurements caused by internal and external heat transfer interactions is considered and quantified by mathematical models. The analysis suggests calibration methods for the evaluation of dimensionless heat transfer parameters in the models, which can then give a more accurate value for the enthalpy of the sample. Calibration models for four types of calorimeters are applied to results from the literature and from our own experiments: a circular slit calorimeter developed by the author, a single-cooling-jacket probe, a double-cooling-jacket probe, and a split-flow cooling-jacket probe. The results show that the models are useful for describing and correcting the temperature measurements.

  5. In-Flight Pitot-Static Calibration

    Science.gov (United States)

    Foster, John V. (Inventor); Cunningham, Kevin (Inventor)

    2016-01-01

    A GPS-based pitot-static calibration system uses global output-error optimization. High data rate measurements of static and total pressure, ambient air conditions, and GPS-based ground speed measurements are used to compute pitot-static pressure errors over a range of airspeed. System identification methods rapidly compute optimal pressure error models with defined confidence intervals.
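    The principle of recovering airspeed error from GPS ground speed can be shown with a deliberately simplified two-leg example: with constant wind and legs flown on two perpendicular headings at the same indicated airspeed, true airspeed and wind fall out in closed form. This is an illustrative sketch with hypothetical numbers, not the patented system, which fits full pressure-error models over a range of airspeed by output-error optimization:

```python
def two_leg_airspeed_cal(v_indicated, leg_north, leg_east):
    """Closed-form two-leg GPS airspeed calibration sketch.

    leg_north: (vg_east, vg_north) GPS ground velocity on heading 000 deg
    leg_east:  (vg_east, vg_north) GPS ground velocity on heading 090 deg
    With constant wind (wx, wy) and true airspeed V:
        heading 000: ground velocity = (wx, V + wy)
        heading 090: ground velocity = (V + wx, wy)
    so V is recoverable from either axis; we average the two estimates.
    Returns (true airspeed, wind vector, airspeed error).
    """
    (e1, n1), (e2, n2) = leg_north, leg_east
    v_from_east = e2 - e1    # V from the east components
    v_from_north = n1 - n2   # V from the north components
    v_true = 0.5 * (v_from_east + v_from_north)
    wind = (e1, n2)
    return v_true, wind, v_true - v_indicated

# Hypothetical legs: wind (3, -2), true airspeed 100, indicated 97.
v_true, wind, err = two_leg_airspeed_cal(97.0, (3.0, 98.0), (103.0, -2.0))
```

    The airspeed error recovered this way maps back to a pitot-static pressure error through the airspeed calibration law, which is the quantity the system models as a function of airspeed.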

  6. SURF Model Calibration Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-10

    SURF and SURFplus are high explosive reactive burn models for shock initiation and propagation of detonation waves. They are engineering models motivated by the ignition & growth concept of hot spots and, for SURFplus, a second slow reaction for the energy release from carbon clustering. A key feature of the SURF model is a partial decoupling between model parameters and detonation properties. This enables reduced sets of independent parameters to be calibrated sequentially for the initiation and propagation regimes. Here we focus on a methodology for fitting the initiation parameters to Pop plot data based on 1-D simulations to compute a numerical Pop plot. In addition, the strategy for fitting the remaining parameters for the propagation regime and failure diameter is discussed.
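    For context, the Pop plot that the initiation parameters are fit against is the empirical log-log relation between initiating shock pressure and run distance to detonation, which is approximately linear. A minimal least-squares fit of that relation (synthetic power-law data, not SURF output) looks like:

```python
import math

def fit_pop_plot(pressures, run_distances):
    """Least-squares line through log10(run distance) vs log10(pressure),
    the straight-line form a Pop plot is conventionally reduced to.
    Returns (intercept, slope) in log10 space."""
    xs = [math.log10(p) for p in pressures]
    ys = [math.log10(d) for d in run_distances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope

# Synthetic data following run distance = 10 * P**-1.5 exactly.
pressures = [2.0, 4.0, 8.0]
run_distances = [10.0 * p ** -1.5 for p in pressures]
intercept, slope = fit_pop_plot(pressures, run_distances)
```

    In the calibration described above, the simulated numerical Pop plot is compared against such a fit to experimental data, and the initiation parameters are adjusted until the two agree.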

  7. Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner

    Directory of Open Access Journals (Sweden)

    Chengyi Yu

    2017-01-01

    Full Text Available A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed by using a one-mirror galvanometer element as its mechanical device to drive the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions and then a model-driven calibration procedure is proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields similar measurement performance compared with a look-up table calibration method.

  8. Computing diffuse fraction of global horizontal solar radiation: A model comparison.

    Science.gov (United States)

    Dervishi, Sokol; Mahdavi, Ardeshir

    2012-06-01

    For simulation-based prediction of buildings' energy use or expected gains from building-integrated solar energy systems, information on both direct and diffuse component of solar radiation is necessary. Available measured data are, however, typically restricted to global horizontal irradiance. There have been thus many efforts in the past to develop algorithms for the derivation of the diffuse fraction of solar irradiance. In this context, the present paper compares eight models for estimating diffuse fraction of irradiance based on a database of measured irradiance from Vienna, Austria. These models generally involve mathematical formulations with multiple coefficients whose values are typically valid for a specific location. Subsequent to a first comparison of these eight models, three better performing models were selected for a more detailed analysis. Thereby, the coefficients of the models were modified to account for Vienna data. The results suggest that some models can provide relatively reliable estimations of the diffuse fractions of the global irradiance. The calibration procedure could only slightly improve the models' performance.
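    As an example of the kind of piecewise correlation being compared, the sketch below implements a diffuse-fraction model of the Erbs et al. form, mapping the clearness index kt to the diffuse fraction kd. The coefficients shown are the commonly published ones; a site-specific recalibration such as the Vienna fit described above would modify them:

```python
def diffuse_fraction_erbs(kt):
    """Diffuse fraction kd of global horizontal irradiance as a piecewise
    polynomial in the clearness index kt (Erbs-type correlation).

    Overcast skies (low kt) are almost entirely diffuse; clear skies
    (high kt) are mostly direct, so kd falls as kt rises.
    """
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt ** 2
                - 16.638 * kt ** 3 + 12.336 * kt ** 4)
    return 0.165
```

    Multiplying the measured global horizontal irradiance by kd and 1 - kd then yields the diffuse and direct components needed by building simulation tools.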

  9. Testing of a one dimensional model for Field II calibration

    DEFF Research Database (Denmark)

    Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten

    2008-01-01

    Field II is a program for simulating ultrasound transducer fields. It is capable of calculating the emitted and pulse-echoed fields for both pulsed and continuous wave transducers. To make it fully calibrated, a model of the transducer's electro-mechanical impulse response must be included. We … examine an adapted one-dimensional transducer model originally proposed by Willatzen [9] to calibrate Field II. This model is modified to calculate the required impulse responses needed by Field II for a calibrated field pressure and external circuit current calculation. The testing has been performed … to the calibrated Field II program for 1, 4, and 10 cycle excitations. Two parameter sets were applied for modeling: one real-valued Pz27 parameter set, manufacturer supplied, and one complex-valued parameter set found in the literature, Algueró et al. [11]. The latter implicitly accounts for attenuation. Results show …

  10. The effects of model complexity and calibration period on groundwater recharge simulations

    Science.gov (United States)

    Moeck, Christian; Van Freyberg, Jana; Schirmer, Mario

    2017-04-01

    A significant number of groundwater recharge models exist that vary in terms of complexity (i.e., structure and parametrization). Typically, model selection and conceptualization is very subjective and can be a key source of uncertainty in recharge simulations. Another source of uncertainty is the implicit assumption that model parameters, calibrated over historical periods, are also valid for the simulation period. To the best of our knowledge, there is no systematic evaluation of the effect of model complexity and calibration strategy on the performance of recharge models. To address this gap, we utilized a long-term recharge data set (20 years) from a large weighing lysimeter. We performed a differential split-sample test with four groundwater recharge models that vary in terms of complexity. They were calibrated using six calibration periods with climatically contrasting conditions in a constrained Monte Carlo approach. Despite the climatically contrasting conditions, all models performed similarly well during calibration. However, during validation a clear effect of model structure on model performance was evident. The more complex, physically based models predicted recharge best, even when calibration and prediction periods had very different climatic conditions. In contrast, the simpler soil-water balance and lumped models performed poorly under such conditions, and for these models we found a strong dependency on the chosen calibration period. In particular, our analysis showed that this can have relevant implications when using recharge models as decision-making tools in a broad range of applications (e.g., water availability, climate change impact studies, water resource management, etc.).

  11. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    Science.gov (United States)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
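    The MRR idea, a parametric fit augmented by a portion of a nonparametric fit to its residuals, can be sketched as follows. For illustration the parametric part is a simple linear regression and the nonparametric part a Gaussian-kernel smoother; the mixing parameter lam is fixed here, whereas MRR selects it from the data:

```python
import math

def mrr_fit(x, y, lam=0.5, bandwidth=0.5):
    """Model-robust regression sketch: parametric fit plus lam times a
    nonparametric (kernel) fit to the parametric residuals.

    lam = 0 is purely parametric; lam = 1 adds the full residual smooth.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]

    def smooth(t):  # Nadaraya-Watson estimate of the residual at t
        w = [math.exp(-0.5 * ((t - xi) / bandwidth) ** 2) for xi in x]
        return sum(wi * ri for wi, ri in zip(w, resid)) / sum(w)

    return lambda t: a + b * t + lam * smooth(t)

# A curved response that the linear parametric model alone underfits.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [xi ** 2 for xi in x]
f_param = mrr_fit(x, y, lam=0.0)  # purely parametric
f_mrr = mrr_fit(x, y, lam=1.0)    # parametric + residual smooth
```

    The residual smooth soaks up the curvature the linear model misses, which is exactly the mechanism that reduces reliance on a predetermined parametric form.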

  12. Calibration process of highly parameterized semi-distributed hydrological model

    Science.gov (United States)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, calibration is a complex process that is not researched enough. Calibration is a procedure for determining those parameters of a model that are not known well enough. Input and output variables and the mathematical model expressions are known, while only some parameters are unknown; these are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that give the modeller little control over the process, and the results are often not the best. We therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command line interface, and couple it with PEST. PEST is a parameter estimation tool that is used widely in groundwater modelling and can also be used for surface waters. A calibration process managed directly by the expert affects, in proportion to the expert's knowledge, the outcome of the inversion procedure and achieves better results than leaving the procedure to the selected optimization algorithm alone. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial and forest areas. This step requires the geological, meteorological, hydraulic and hydrological knowledge of the modeller. The second step is to set initial parameter values to their preferred values based on expert knowledge. In this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observations group

  13. A mathematical model for camera calibration based on straight lines

    Directory of Open Access Journals (Sweden)

    Antonio M. G. Tommaselli

    2005-12-01

    Full Text Available In order to facilitate the automation of the camera calibration process, a mathematical model using straight lines was developed, which is based on the equivalent-planes mathematical model. Parameter estimation for the developed model is achieved by the Least Squares Method with Conditions and Observations. The same method of adjustment was used to implement camera calibration with bundles, which is based on points. Experiments using simulated and real data have shown that the developed model based on straight lines gives results comparable to the conventional method with points. Details concerning the mathematical development of the model and experiments with simulated and real data are presented, and the results of both methods of camera calibration, with straight lines and with points, are compared.

  14. Enhanced Single Seed Trait Predictions in Soybean (Glycine max) and Robust Calibration Model Transfer with Near-Infrared Reflectance Spectroscopy.

    Science.gov (United States)

    Hacisalihoglu, Gokhan; Gustin, Jeffery L; Louisma, Jean; Armstrong, Paul; Peter, Gary F; Walker, Alejandro R; Settles, A Mark

    2016-02-10

    Single seed near-infrared reflectance (NIR) spectroscopy predicts soybean (Glycine max) seed quality traits of moisture, oil, and protein. We tested the accuracy of transferring calibrations between different single seed NIR analyzers of the same design by collecting NIR spectra and analytical trait data for globally diverse soybean germplasm. X-ray microcomputed tomography (μCT) was used to collect seed density and shape traits to enhance the number of soybean traits that can be predicted from single seed NIR. Partial least-squares (PLS) regression gave accurate predictive models for oil, weight, volume, protein, and maximal cross-sectional area of the seed. PLS models for width, length, and density were not predictive. Although principal component analysis (PCA) of the NIR spectra showed that black seed coat color had significant signal, excluding black seeds from the calibrations did not impact model accuracies. Calibrations for oil and protein developed in this study, as well as earlier calibrations for a separate NIR analyzer of the same design, were used to test the ability to transfer PLS regressions between platforms. PLS models built from data collected on one NIR analyzer had minimal differences in accuracy when applied to spectra collected from a sister device. Model transfer was more robust when spectra were trimmed from 910-1679 nm to 955-1635 nm due to divergence of edge wavelengths between the two devices. The ability to transfer calibrations between similar single seed NIR spectrometers facilitates broader adoption of this high-throughput, nondestructive seed phenotyping technology.

  15. Modeling Global Urbanization Supported by Nighttime Light Remote Sensing

    Science.gov (United States)

    Zhou, Y.

    2015-12-01

    Urbanization, a major driver of global change, profoundly impacts our physical and social world, for example, altering carbon cycling and climate. Understanding these consequences for better scientific insights and effective decision-making unarguably requires accurate information on urban extent and its spatial distributions. In this study, we developed a cluster-based method to estimate the optimal thresholds and map urban extents from the nighttime light remote sensing data, extended this method to the global domain by developing a computational method (parameterization) to estimate the key parameters in the cluster-based method, and built a consistent 20-year global urban map series to evaluate the time-reactive nature of global urbanization (e.g. 2000 in Fig. 1). Supported by urban maps derived from nightlights remote sensing data and socio-economic drivers, we developed an integrated modeling framework to project future urban expansion by integrating a top-down macro-scale statistical model with a bottom-up urban growth model. With the models calibrated and validated using historical data, we explored urban growth at the grid level (1-km) over the next two decades under a number of socio-economic scenarios. The derived spatiotemporal information of historical and potential future urbanization will be of great value with practical implications for developing adaptation and risk management measures for urban infrastructure, transportation, energy, and water systems when considered together with other factors such as climate variability and change, and high impact weather events.

  16. Model calibration and beam control systems for storage rings

    International Nuclear Information System (INIS)

    Corbett, W.J.; Lee, M.J.; Ziemann, V.

    1993-04-01

    Electron beam storage rings and linear accelerators are rapidly gaining worldwide popularity as scientific devices for the production of high-brightness synchrotron radiation. Today, everybody agrees that there is a premium on calibrating the storage ring model and determining errors in the machine as soon as possible after the beam is injected. In addition, an accurate optics model enables machine operators to predictably adjust key performance parameters and allows reliable identification of new errors that occur during operation of the machine. Since the need for model calibration and beam control systems is common to all storage rings, software packages should be made portable between different machines. In this paper, we report on work directed toward achieving in-situ calibration of the optics model, detection of alignment errors, and orbit control techniques, with an emphasis on developing a portable system incorporating these tools.

  17. Improvement, calibration and validation of a distributed hydrological model over France

    Directory of Open Access Journals (Sweden)

    P. Quintana Seguí

    2009-02-01

    Full Text Available The hydrometeorological model SAFRAN-ISBA-MODCOU (SIM computes water and energy budgets on the land surface and riverflows and the level of several aquifers at the scale of France. SIM is composed of a meteorological analysis system (SAFRAN, a land surface model (ISBA, and a hydrogeological model (MODCOU. In this study, an exponential profile of hydraulic conductivity at saturation is introduced to the model and its impact analysed. It is also studied how calibration modifies the performance of the model. A very simple method of calibration is implemented and applied to the parameters of hydraulic conductivity and subgrid runoff. The study shows that a better description of the hydraulic conductivity of the soil is important to simulate more realistic discharges. It also shows that the calibrated model is more robust than the original SIM. In fact, the calibration mainly affects the processes related to the dynamics of the flow (drainage and runoff, and the rest of relevant processes (like evaporation remain stable. It is also proven that it is only worth introducing the new empirical parameterization of hydraulic conductivity if it is accompanied by a calibration of its parameters, otherwise the simulations can be degraded. In conclusion, it is shown that the new parameterization is necessary to obtain good simulations. Calibration is a tool that must be used to improve the performance of distributed models like SIM that have some empirical parameters.

  18. Towards Global QSAR Model Building for Acute Toxicity: Munro Database Case Study

    Directory of Open Access Journals (Sweden)

    Swapnil Chavan

    2014-10-01

    Full Text Available A series of 436 Munro database chemicals were studied with respect to their corresponding experimental LD50 values to investigate the possibility of establishing a global QSAR model for acute toxicity. Dragon molecular descriptors were used for QSAR model development, and genetic algorithms were used to select the descriptors best correlated with the toxicity data. Toxicity values were discretized into qualitative classes on the basis of the Globally Harmonized System: the 436 chemicals were divided into 3 classes based on their experimental LD50 values: highly toxic, intermediately toxic, and low to non-toxic. The k-nearest neighbor (k-NN) classification method was calibrated on 25 molecular descriptors and gave a non-error rate (NER) equal to 0.66 and 0.57 for the internal and external prediction sets, respectively. Even if the classification performances are not optimal, the subsequent analysis of the selected descriptors and their relationship with toxicity levels constitutes a step towards the development of a global QSAR model for acute toxicity.

  19. Calibration of Mine Ventilation Network Models Using the Non-Linear Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Guang Xu

    2017-12-01

    Full Text Available Effective ventilation planning is vital to underground mining. To ensure stable operation of the ventilation system and to avoid airflow disorder, mine ventilation network (MVN) models have been widely used in simulating and optimizing mine ventilation systems. However, one of the challenges in MVN model simulation is that the simulated airflow distribution does not match the measured data. To solve this problem, a simple and effective calibration method is proposed based on a non-linear optimization algorithm. The calibrated model not only brings the simulated airflow distribution into accordance with the on-site measured data, but also keeps the errors of the other parameters within a minimum range. The proposed method was then applied to calibrate an MVN model in a real case, built from ventilation survey results using the Ventsim software. Finally, airflow simulation experiments were carried out using data from before and after calibration, and their results were compared and analyzed. This showed that the simulated airflows in the calibrated model agreed much better with the ventilation survey data, which verifies the effectiveness of the calibration method.

  20. An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model

    Science.gov (United States)

    Tiernan, E. D.; Hodges, B. R.

    2017-12-01

    The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine makes use of a preliminary sensitivity analysis to reduce the dimensions of the parameter space, at which point a multi-objective function, genetic algorithm (modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent the optimized parameter value sets for the catchment behavior that could not have been reasonably obtained through manual calibration.
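    The core of the genetic algorithm's output is the Pareto front: the set of candidate parameter vectors whose objective values are not dominated by any other candidate. A minimal dominance filter, using hypothetical two-objective error values rather than SWMM output, looks like:

```python
def pareto_front(points):
    """Return the non-dominated set for a minimization problem.

    A point dominates another if it is no worse in every objective and
    strictly better in at least one -- the relation non-dominated sorting
    (as in NSGA-II) is built on.
    """
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (flow-error, depth-error) pairs from candidate parameter sets.
candidates = [(0.9, 0.2), (0.5, 0.5), (0.3, 0.8), (0.6, 0.6), (1.0, 1.0)]
front = pareto_front(candidates)
```

    Each point surviving the filter corresponds to a calibrated parameter set trading one objective off against the other; NSGA-II additionally ranks dominated points into successive fronts and preserves diversity along each front.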

  1. Hydrological processes and model representation: impact of soft data on calibration

    Science.gov (United States)

    J.G. Arnold; M.A. Youssef; H. Yen; M.J. White; A.Y. Sheshukov; A.M. Sadeghi; D.N. Moriasi; J.L. Steiner; Devendra Amatya; R.W. Skaggs; E.B. Haney; J. Jeong; M. Arabi; P.H. Gowda

    2015-01-01

    Hydrologic and water quality models are increasingly used to determine the environmental impacts of climate variability and land management. Due to differing model objectives and differences in monitored data, there are currently no universally accepted procedures for model calibration and validation in the literature. In an effort to develop accepted model calibration...

  2. Stochastic isotropic hyperelastic materials: constitutive calibration and model selection

    Science.gov (United States)

    Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain

    2018-03-01

    Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.

  3. A critical comparison of systematic calibration protocols for activated sludge models: a SWOT analysis.

    Science.gov (United States)

    Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A

    2005-07-01

    Modelling activated sludge systems has gained an increasing momentum after the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models for full-scale systems requires essentially a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far which were mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach in performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modeling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be further developed in view of further improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.

  4. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, Joseph [National Renewable Energy Lab. (NREL), Golden, CO (United States); Polly, Ben [National Renewable Energy Lab. (NREL), Golden, CO (United States); Collis, Jon [Colorado School of Mines, Golden, CO (United States)

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al. 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960s-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.

  5. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, J.; Polly, B.; Collis, J.

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al. 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960s-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
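    Of the four methods, the output ratio approach is the simplest to sketch. The monthly figures below are invented stand-ins for utility bills and uncalibrated simulation output, not data from the study:

    ```python
    import numpy as np

    # Hypothetical monthly electricity use (kWh): synthetic "utility bills"
    # and the uncalibrated simulation output for the same twelve months.
    billed = np.array([900, 850, 700, 600, 650, 800, 950, 980, 820, 640, 700, 880], float)
    simulated = np.array([1000, 930, 760, 640, 700, 880, 1050, 1080, 900, 690, 760, 960], float)

    # Simple output ratio calibration: scale the simulated output by the
    # ratio of annual billed energy to annual simulated energy.
    ratio = billed.sum() / simulated.sum()
    calibrated = ratio * simulated

    print(round(ratio, 3))          # scale factor applied to all months
    print(calibrated.round(1))      # annual total now matches the bills
    ```

    The scaling matches annual totals exactly but leaves the monthly shape of the simulation unchanged, which is why the study evaluates it against the optimization-based methods.
    
    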

  6. Bayesian calibration of power plant models for accurate performance prediction

    International Nuclear Information System (INIS)

    Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der

    2014-01-01

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard for uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but also to assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.
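    The Kennedy and O'Hagan framework is considerably richer than this, but the core mechanics of Bayesian calibration can be illustrated with a toy plant-efficiency model and a random-walk Metropolis sampler. The model form, parameter values, and noise level below are all invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy "plant model": efficiency falls off linearly with ambient temperature.
    # theta is the unknown sensitivity coefficient to be calibrated.
    def model(theta, t_amb):
        return 0.58 - theta * (t_amb - 15.0)

    t_amb = np.linspace(0.0, 30.0, 40)
    theta_true, sigma = 0.002, 0.005
    obs = model(theta_true, t_amb) + rng.normal(0.0, sigma, t_amb.size)

    def log_post(theta):
        # Flat prior on [0, 0.01]; Gaussian measurement-error likelihood
        if not 0.0 <= theta <= 0.01:
            return -np.inf
        r = obs - model(theta, t_amb)
        return -0.5 * np.sum(r ** 2) / sigma ** 2

    # Random-walk Metropolis: propose, accept with probability min(1, ratio)
    theta, lp = 0.005, log_post(0.005)
    samples = []
    for _ in range(5000):
        prop = theta + rng.normal(0.0, 0.0005)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    post = np.array(samples[1000:])  # discard burn-in

    print(post.mean())  # posterior mean, close to theta_true
    ```

    A full Kennedy-O'Hagan treatment would add a Gaussian-process discrepancy term for systematic model error; this sketch shows only the parameter-uncertainty part.
    
    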

  7. Evaluation of multivariate calibration models transferred between spectroscopic instruments

    DEFF Research Database (Denmark)

    Eskildsen, Carl Emil Aae; Hansen, Per W.; Skov, Thomas

    2016-01-01

    In a setting where multiple spectroscopic instruments are used for the same measurements it may be convenient to develop the calibration model on a single instrument and then transfer this model to the other instruments. In the ideal scenario, all instruments provide the same predictions for the same samples using the transferred model. However, sometimes the success of a model transfer is evaluated by comparing the transferred model predictions with the reference values. This is not optimal, as uncertainties in the reference method will impact the evaluation. This paper proposes a new method for calibration model transfer evaluation. The new method is based on comparing predictions from different instruments, rather than comparing predictions and reference values. A total of 75 flour samples were available for the study. All samples were measured on ten near infrared (NIR) instruments from two...
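    The proposed evaluation, comparing predictions across instruments rather than against reference values, can be sketched as follows. The data are synthetic stand-ins for the flour measurements, with invented noise levels:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical predictions for the same 75 samples: a master instrument,
    # a second instrument using the transferred model, and a noisy reference
    # method (reference noise assumed larger than the transfer error).
    master = rng.normal(12.0, 1.0, 75)             # master-instrument predictions
    slave = master + rng.normal(0.0, 0.1, 75)      # transferred-model predictions
    reference = master + rng.normal(0.0, 0.4, 75)  # wet-chemistry reference values

    def rmsd(a, b):
        return float(np.sqrt(np.mean((a - b) ** 2)))

    # Instrument-to-instrument agreement is not inflated by reference noise
    print(rmsd(master, slave))      # small: the transfer itself is good
    print(rmsd(slave, reference))   # larger: dominated by reference uncertainty
    ```

    The point of the comparison: judging the transfer by the second number would wrongly penalise a good transfer whenever the reference method is noisy.
    
    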

  8. Model calibration for building energy efficiency simulation

    International Nuclear Information System (INIS)

    Mustafaraj, Giorgio; Marini, Dashamir; Costa, Andrea; Keane, Marcus

    2014-01-01

    Highlights: • Developing a 3D model relating to building architecture, occupancy and HVAC operation. • Two calibration stages developed, with the final model providing accurate results. • Using an onsite weather station for generating the weather data file in EnergyPlus. • Predicting thermal behaviour of underfloor heating, heat pump and natural ventilation. • Monthly energy saving opportunities related to the heat pump of 20–27% were identified. - Abstract: This research work deals with an Environmental Research Institute (ERI) building where an underfloor heating system and natural ventilation are the main systems used to maintain comfort conditions throughout 80% of the building areas. Firstly, this work involved developing a 3D model relating to building architecture, occupancy and HVAC operation. Secondly, a two-level calibration methodology was applied in order to ensure accuracy and reduce the likelihood of errors. To further improve the accuracy of calibration, a historical weather data file for the year 2011 was created from the on-site local weather station of the ERI building. After applying the second level of the calibration process, the values of Mean Bias Error (MBE) and Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)) on an hourly basis for heat pump electricity consumption varied within the following ranges: MBE from −5.6% to 7.5% and CV(RMSE) from 7.3% to 25.1%. Finally, the building was simulated with EnergyPlus to identify further energy saving possibilities for the water-to-water heat pump supplying the underfloor heating system. It was found that electricity consumption savings from the heat pump can vary between 20% and 27% on a monthly basis.
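    The two acceptance statistics, MBE and CV(RMSE), are straightforward to compute. A minimal sketch with invented hourly consumption values; one common sign convention is assumed for MBE (measured minus simulated, relative to the measured total):

    ```python
    import numpy as np

    def mbe_percent(measured, simulated):
        """Mean Bias Error as a percentage of the measured total
        (sign convention: positive when the model under-predicts)."""
        measured, simulated = np.asarray(measured, float), np.asarray(simulated, float)
        return 100.0 * np.sum(measured - simulated) / np.sum(measured)

    def cv_rmse_percent(measured, simulated):
        """Coefficient of Variation of the RMSE, relative to the measured mean."""
        measured, simulated = np.asarray(measured, float), np.asarray(simulated, float)
        rmse = np.sqrt(np.mean((measured - simulated) ** 2))
        return 100.0 * rmse / np.mean(measured)

    # Synthetic hourly heat-pump consumption (kWh), for illustration only
    measured  = np.array([10.0, 12.0, 9.0, 11.0, 13.0])
    simulated = np.array([10.5, 11.0, 9.5, 11.5, 12.0])

    print(mbe_percent(measured, simulated))
    print(cv_rmse_percent(measured, simulated))
    ```

    MBE flags systematic over- or under-prediction (errors of opposite sign cancel), while CV(RMSE) penalises scatter regardless of sign, which is why calibration guidelines report both.
    
    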

  9. Global estimates of evapotranspiration and gross primary production based on MODIS and global meteorology data

    Science.gov (United States)

    Yuan, W.; Liu, S.; Yu, G.; Bonnefond, J.-M.; Chen, J.; Davis, K.; Desai, A.R.; Goldstein, Allen H.; Gianelle, D.; Rossi, F.; Suyker, A.E.; Verma, S.B.

    2010-01-01

    The simulation of gross primary production (GPP) at various spatial and temporal scales remains a major challenge for quantifying the global carbon cycle. We developed a light use efficiency model, called EC-LUE, driven by only four variables: normalized difference vegetation index (NDVI), photosynthetically active radiation (PAR), air temperature, and the Bowen ratio of sensible to latent heat flux. The EC-LUE model may have the most potential to adequately address the spatial and temporal dynamics of GPP because its parameters (i.e., the potential light use efficiency and optimal plant growth temperature) are invariant across the various land cover types. However, the application of the previous EC-LUE model was hampered by poor prediction of the Bowen ratio at the large spatial scale. In this study, we substituted the Bowen ratio with the ratio of evapotranspiration (ET) to net radiation, and revised the RS-PM (Remote Sensing-Penman Monteith) model for quantifying ET. Fifty-four eddy covariance towers, including various ecosystem types, were selected to calibrate and validate the revised RS-PM and EC-LUE models. The revised RS-PM model explained 82% and 68% of the observed variations of ET for all the calibration and validation sites, respectively. Using estimated ET as input, the EC-LUE model performed well in calibration and validation sites, explaining 75% and 61% of the observed GPP variation for calibration and validation sites, respectively. Global patterns of ET and GPP at a spatial resolution of 0.5° latitude by 0.6° longitude during the years 2000–2003 were determined using the global MERRA dataset (Modern Era Retrospective-Analysis for Research and Applications) and MODIS (Moderate Resolution Imaging Spectroradiometer). The global estimates of ET and GPP agreed well with the other global models from the literature, with the highest ET and GPP over tropical forests and the lowest values in dry and high latitude areas. However, comparisons with observed

  10. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide
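    The energy statistics test of equal distributions mentioned above is based on the sample energy distance. A minimal sketch comparing synthetic "posterior" samples (not the HIV-model chains from the thesis):

    ```python
    import numpy as np

    def energy_distance(x, y):
        """Szekely's sample energy distance between two 1-D samples:
        2*E|X-Y| - E|X-X'| - E|Y-Y'|, estimated from pairwise distances."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        dxy = np.abs(x[:, None] - y[None, :]).mean()
        dxx = np.abs(x[:, None] - x[None, :]).mean()
        dyy = np.abs(y[:, None] - y[None, :]).mean()
        return 2.0 * dxy - dxx - dyy

    rng = np.random.default_rng(2)
    a = rng.normal(0.0, 1.0, 500)   # e.g. draws from one sampler
    b = rng.normal(0.0, 1.0, 500)   # e.g. draws from another sampler, same target
    c = rng.normal(1.0, 1.0, 500)   # a shifted, non-matching density

    print(energy_distance(a, b))  # near zero: same distribution
    print(energy_distance(a, c))  # clearly positive: distributions differ
    ```

    In practice the statistic is turned into a hypothesis test by permuting sample labels to obtain its null distribution; the sketch shows only the distance itself.
    
    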

  11. Calibration of two complex ecosystem models with different likelihood functions

    Science.gov (United States)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate change induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments or no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research the developed version of Biome-BGC is used, which is referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and types of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research different likelihood function formulations were used in order to examine the effect of the different model

  12. Efficient Calibration of Distributed Catchment Models Using Perceptual Understanding and Hydrologic Signatures

    Science.gov (United States)

    Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.

    2015-12-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels) may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information together to identify a behavioural region of state-space, and efficiently search a large, complex parameter space to identify behavioural parameter sets that produce predictions that fall within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The calibration problem is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate, the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
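    A minimal sketch of the interval-based (limits-of-acceptability) filtering step, using a made-up two-parameter model and invented signature bounds rather than the PIHM/Plynlimon setup (a Monte Carlo filter stands in for the Borg MOEA search):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical 2-parameter model producing two hydrologic signatures:
    # mean flow and a flashiness index, each with a behavioural interval
    # derived from data (intervals account for observational uncertainty).
    def simulate_signatures(k, s):
        mean_flow = 2.0 * k + 0.5 * s
        flashiness = 1.0 / (k + 0.1)
        return mean_flow, flashiness

    limits = {"mean_flow": (1.8, 2.4), "flashiness": (0.8, 1.4)}

    # Monte Carlo search over the parameter space; keep only sets whose
    # signatures fall inside every behavioural interval simultaneously.
    candidates = rng.uniform([0.0, 0.0], [2.0, 2.0], size=(10000, 2))
    behavioural = []
    for k, s in candidates:
        mf, fl = simulate_signatures(k, s)
        if (limits["mean_flow"][0] <= mf <= limits["mean_flow"][1]
                and limits["flashiness"][0] <= fl <= limits["flashiness"][1]):
            behavioural.append((k, s))

    print(len(behavioural), "of", len(candidates), "parameter sets are behavioural")
    ```

    Random sampling becomes hopeless as the number of parameters grows, which is the paper's motivation for using a multi-objective evolutionary search to find and then populate the behavioural region efficiently.
    
    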

  13. Nitrous oxide emissions from cropland: a procedure for calibrating the DayCent biogeochemical model using inverse modelling

    Science.gov (United States)

    Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.

    2013-01-01

    DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional “trial and error” approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as for model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residuals between measured and modelled outputs by up to 67%. For the calibration period, simulation with the default model parameter values underestimated mean daily N2O flux by 98%. After parameter estimation, the model underestimated the mean daily fluxes by 35%. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20% relative to the default simulation. The sensitivity analysis performed provides important insights into the model structure, offering guidance for model improvement.
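    The objective that PEST minimises is the sum of weighted squared residuals. A small sketch with synthetic flux values (not the Iowa data) showing how the reduction percentage is computed; the weighting scheme is an assumption for illustration:

    ```python
    import numpy as np

    def phi(measured, simulated, weights):
        """Sum of weighted squared residuals, the objective PEST minimises."""
        r = np.asarray(weights, float) * (np.asarray(measured, float)
                                          - np.asarray(simulated, float))
        return float(np.sum(r ** 2))

    # Synthetic daily N2O fluxes (arbitrary units), illustration only
    measured = np.array([5.0, 40.0, 12.0, 3.0])
    default_sim = np.array([0.2, 1.5, 0.6, 0.1])      # default parameters underestimate
    calibrated_sim = np.array([3.5, 28.0, 10.0, 2.0])  # after parameter estimation
    w = 1.0 / np.maximum(measured, 1.0)  # assumed: weights inverse to magnitude

    phi0 = phi(measured, default_sim, w)
    phi1 = phi(measured, calibrated_sim, w)
    print(f"objective reduced by {100.0 * (phi0 - phi1) / phi0:.0f}%")
    ```

    The weights let observations of very different magnitudes contribute comparably, so a single large flux event does not dominate the calibration.
    
    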

  14. Validation and calibration of structural models that combine information from multiple sources.

    Science.gov (United States)

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  15. Optical model and calibration of a sun tracker

    International Nuclear Information System (INIS)

    Volkov, Sergei N.; Samokhvalov, Ignatii V.; Cheong, Hai Du; Kim, Dukhyeon

    2016-01-01

    Sun trackers are widely used to investigate scattering and absorption of solar radiation in the Earth's atmosphere. We present a method for optimization of the optical altazimuth sun tracker model with output radiation direction aligned with the axis of a stationary spectrometer. The method solves the problem of stability loss in tracker pointing at the Sun near the zenith. An optimal method for tracker calibration at the measurement site is proposed in the present work. A method of moving calibration is suggested for mobile applications in the presence of large temperature differences and errors in the alignment of the optical system of the tracker. - Highlights: • We present an optimal optical sun tracker model for atmospheric spectroscopy. • The problem of loss of stability of tracker pointing at the Sun has been solved. • We propose an optimal method for tracker calibration at a measurement site. • Test results demonstrate the efficiency of the proposed optimization methods.

  16. Infusion of SMAP Data into Offline and Coupled Models: Evaluation, Calibration, and Assimilation

    Science.gov (United States)

    Lawston, P.; Santanello, J. A., Jr.; Dennis, E. J.; Kumar, S.

    2017-12-01

    The impact of the land surface on the water and energy cycle is modulated by its coupling to the planetary boundary layer (PBL), and begins at the local scale. A core component of the local land-atmosphere coupling (LoCo) effort requires understanding the "links in the chain" between soil moisture and precipitation, most notably through surface heat fluxes and PBL evolution. To date, broader (i.e. global) application of LoCo diagnostics has been limited by observational data requirements of the coupled system (and in particular, soil moisture) that are typically only met during localized, short-term field campaigns. SMAP offers, for the first time, the ability to map high quality, near-surface soil moisture globally every few days at a spatial resolution comparable to current modeling efforts. As a result, there are numerous potential avenues for SMAP model-data fusion that can be explored in the context of improving understanding of L-A interaction and NWP. In this study, we assess multiple points of intersection of SMAP products with offline and coupled models and evaluate impacts using process-level diagnostics. Results will inform upon the importance of high-resolution soil moisture mapping for improved coupled prediction and model development, as well as reconciling differences in modeled, retrieved, and measured soil moisture. Specifically, NASA model (LIS, NU-WRF) and observation (SMAP, NLDAS-2) products are combined with in-situ standard and IOP measurements (soil moisture, flux, and radiosonde) over the ARM-SGP. An array of land surface model spinups (via LIS-Noah) is performed with varying atmospheric forcing, greenness fraction, and soil layering permutations. Calibration of LIS-Noah soil hydraulic parameters is then performed using an array of in-situ soil moisture and flux measurements and SMAP products. In addition, SMAP assimilation is performed in LIS-Noah both at the scale of the observation (36 and 9km) and the model grid (1km). The focus is on the

  17. Evaluation of an ASM1 Model Calibration Procedure on a Municipal-Industrial Wastewater Treatment Plant

    DEFF Research Database (Denmark)

    Petersen, Britta; Gernaey, Krist; Henze, Mogens

    2002-01-01

    The purpose of the calibrated model determines how to approach a model calibration, e.g. which information is needed and to which level of detail the model should be calibrated. A systematic model calibration procedure was therefore defined and evaluated for a municipal-industrial wastewater treatment plant. In the case that was studied it was important to have a detailed description of the process dynamics, since the model was to be used as the basis for optimisation scenarios in a later phase. Therefore, a complete model calibration procedure was applied including: (1) a description...

  18. Secondary clarifier hybrid model calibration in full scale pulp and paper activated sludge wastewater treatment

    Energy Technology Data Exchange (ETDEWEB)

    Sreckovic, G.; Hall, E.R. [British Columbia Univ., Dept. of Civil Engineering, Vancouver, BC (Canada); Thibault, J. [Laval Univ., Dept. of Chemical Engineering, Ste-Foy, PQ (Canada); Savic, D. [Exeter Univ., School of Engineering, Exeter (United Kingdom)

    1999-05-01

    The issue of proper model calibration techniques applied to mechanistic mathematical models relating to activated sludge systems was discussed. Such calibrations are complex because of the non-linearity and multi-modal objective functions of the process. This paper presents a hybrid model which was developed using two techniques to model and calibrate the secondary clarifier part of an activated sludge system. Genetic algorithms were used to successfully calibrate the settler mechanistic model, and neural networks were used to reduce the error between the mechanistic model output and real world data. Results of the modelling study show that the long term response of a one-dimensional settler mechanistic model calibrated by genetic algorithms and compared to full scale plant data can be improved by coupling the calibrated mechanistic model to a black-box model, such as a neural network. 11 refs., 2 figs.
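    The hybrid idea, a mechanistic model plus a black-box corrector trained on its residuals, can be sketched in a few lines. Here a small least-squares fit stands in for the neural network, and all data are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical clarifier data: the mechanistic model captures the trend
    # but leaves a systematic (here, oscillatory) residual.
    time = np.linspace(0.0, 10.0, 100)
    observed = 3.0 + 0.5 * time + 0.3 * np.sin(time) + rng.normal(0.0, 0.05, time.size)
    mechanistic = 3.0 + 0.5 * time  # calibrated mechanistic model output

    # Black-box corrector fitted to the residual; a tiny least-squares model
    # stands in for the neural network used in the paper.
    residual = observed - mechanistic
    basis = np.column_stack([np.sin(time), np.cos(time), np.ones_like(time)])
    coef, *_ = np.linalg.lstsq(basis, residual, rcond=None)
    hybrid = mechanistic + basis @ coef

    def rmse(a, b):
        return float(np.sqrt(np.mean((a - b) ** 2)))

    print(rmse(observed, mechanistic))  # mechanistic alone
    print(rmse(observed, hybrid))       # hybrid: smaller error
    ```

    The design point is the division of labour: the mechanistic model carries the physics and extrapolates, while the black-box term only mops up structured error it cannot explain.
    
    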

  19. CALIBRATING THE JOHNSON-HOLMQUIST CERAMIC MODEL FOR SIC USING CTH

    International Nuclear Information System (INIS)

    Cazamias, J. U.; Bilyk, S. R.

    2009-01-01

    The Johnson-Holmquist ceramic material model has been calibrated and successfully applied to numerically simulate ballistic events using the Lagrangian code EPIC. While the majority of the constants are "physics"-based, two of the constants for the failed material response are calibrated using ballistic experiments conducted on a confined cylindrical ceramic target. The maximum strength of the failed ceramic is calibrated by matching the penetration velocity. The second constant, the equivalent plastic strain at failure under constant pressure, is calibrated using the dwell time. Use of these two constants in the CTH Eulerian hydrocode does not reproduce the ballistic response. This difference may be due to the phenomenological nature of the model and the different numerical schemes used by the codes. This paper determines the aforementioned material constants for SiC suitable for simulating ballistic events using CTH.

  20. Development of a generic auto-calibration package for regional ecological modeling and application in the Central Plains of the United States

    Science.gov (United States)

    Wu, Yiping; Liu, Shuguang; Li, Zhengpeng; Dahal, Devendra; Young, Claudia J.; Schmidt, Gail L.; Liu, Jinxun; Davis, Brian; Sohl, Terry L.; Werner, Jeremy M.; Oeding, Jennifer

    2014-01-01

    Process-oriented ecological models are frequently used for predicting potential impacts of global changes such as climate and land-cover changes, which can be useful for policy making. It is critical but challenging to automatically derive optimal parameter values at different scales, especially at the regional scale, and to validate the model performance. In this study, we developed an automatic calibration (auto-calibration) function for a well-established biogeochemical model—the General Ensemble Biogeochemical Modeling System (GEMS)-Erosion Deposition Carbon Model (EDCM)—using a data assimilation technique: the Shuffled Complex Evolution algorithm and a model-inversion R package—Flexible Modeling Environment (FME). The new functionality can support multi-parameter and multi-objective auto-calibration of EDCM at both the pixel and regional levels. We also developed a post-processing procedure for GEMS to provide options to save the pixel-based or aggregated county-land cover specific parameter values for subsequent simulations. In our case study, we successfully applied the updated model (EDCM-Auto) to a single crop pixel with a corn-wheat rotation and to a large ecological region (Level II)—the Central USA Plains. The evaluation results indicate that EDCM-Auto is applicable at multiple scales and capable of handling land cover changes (e.g., crop rotations). The model also performs well in capturing the spatial pattern of grain yield production for crops and net primary production (NPP) for other ecosystems across the region, which is a good example of implementing calibration and validation of ecological models with readily available survey data (grain yield) and remote sensing data (NPP) at regional and national levels. The developed platform for auto-calibration can be readily expanded to incorporate other model inversion algorithms and potential R packages, and can also be applied to other ecological models.

  1. Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates

    Science.gov (United States)

    Todorovic, Andrijana; Plavsic, Jasna

    2015-04-01

    A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), optimisation method, and calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which conditions the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of the parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by one year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts at the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows and for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or flows; however, there is a statistically significant increasing trend in temperature in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters
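    The variability metric can be sketched as follows. The parameter values here are randomly generated stand-ins, since running the HBV-light calibrations themselves is outside the scope of a snippet:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    years = np.arange(1954, 2013)  # water years with observations
    window = 5                     # five-year calibration periods, shifted by one year
    n_windows = len(years) - window + 1

    # Hypothetical optimised parameters (3 per window) standing in for the
    # results of one calibration per sliding window; ranges are invented.
    lo = np.array([1.0, 0.1, 50.0])   # lower bounds of the prior ranges
    hi = np.array([3.0, 0.5, 300.0])  # upper bounds of the prior ranges
    params = rng.uniform(lo, hi, size=(n_windows, 3))

    # Normalise each parameter by its prior range so the standard deviations
    # are comparable across parameters of different magnitude.
    normalised = (params - lo) / (hi - lo)
    variability = normalised.std(axis=0)

    print("std of normalised parameters:", variability)
    print("most variable parameter index:", int(np.argmax(variability)))
    ```

    Normalising by the prior range is what makes the standard deviations directly comparable: a spread of 50 in a storage parameter and 0.05 in a recession coefficient can then be ranked on the same scale.
    
    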

  2. Can climate models be tuned to simulate the global mean absolute temperature correctly?

    Science.gov (United States)

    Duan, Q.; Shi, Y.; Gong, W.

    2016-12-01

    The Intergovernmental Panel on Climate Change (IPCC) has already issued five assessment reports (ARs), which include simulations of the past climate and projections of the future climate under various scenarios. The participating models simulate reasonably well the trend in global mean temperature change, especially over the last 150 years. However, there is a large, constant discrepancy in simulated global mean absolute temperature over this period. This discrepancy remained in the same range between IPCC-AR4 and IPCC-AR5, amounting to about 3°C between the coldest and the warmest model. It has great implications for land processes, particularly those related to the cryosphere, and casts doubt on whether land-atmosphere-ocean interactions are correctly represented in those models. This presentation aims to explore whether the discrepancy can be reduced through model tuning. We present an automatic model calibration strategy that tunes the parameters of a climate model so that the simulated global mean absolute temperature matches the observed data over the last 150 years. An intermediate-complexity model known as LOVECLIM is used in the study. This presentation will show the preliminary results.
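
    The idea of tuning a parameter so that the simulated global mean temperature matches observations can be illustrated on a toy zero-dimensional energy-balance model (not LOVECLIM); the effective emissivity below is a hypothetical stand-in for the real model's tunable parameters:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0              # solar constant, W m^-2
ALBEDO = 0.3             # planetary albedo (held fixed in this sketch)

def mean_temperature(emissivity):
    """Equilibrium T of a grey-body Earth: eps * sigma * T^4 = S0 (1 - a) / 4."""
    return (S0 * (1.0 - ALBEDO) / (4.0 * emissivity * SIGMA)) ** 0.25

def calibrate_emissivity(target_T, lo=0.4, hi=1.0, tol=1e-6):
    """Bisection on emissivity; mean_temperature is monotonically decreasing."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_temperature(mid) > target_T:
            lo = mid   # simulated climate too warm -> need larger emissivity
        else:
            hi = mid
    return 0.5 * (lo + hi)

eps = calibrate_emissivity(288.0)   # tune to the observed ~288 K mean
```

    A real climate-model calibration replaces the one-line forward model with a full simulation and the bisection with a global optimizer, but the structure of the loop is the same.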

  3. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  4. Effect of calibration data series length on performance and optimal parameters of hydrological model

    Directory of Open Access Journals (Sweden)

    Chuan-zhe Li

    2010-12-01

    Full Text Available In order to assess the effects of calibration data series length on the performance and optimal parameter values of a hydrological model in ungauged or data-limited catchments (where data are non-continuous and fragmental), we used non-continuous calibration periods to obtain more independent streamflow data for calibration of SIMHYD (simple hydrology model). The Nash-Sutcliffe efficiency and percentage water balance error were used as performance measures. The particle swarm optimization (PSO) method was used to calibrate the rainfall-runoff models. Randomly sampled data series of different lengths, ranging from one year to ten years, were used to study the impact of calibration data series length. Fifty-five relatively unimpaired catchments located all over Australia, with daily precipitation, potential evapotranspiration, and streamflow data, were tested to obtain more general conclusions. The results show that longer calibration data series do not necessarily result in better model performance. In general, eight years of data are sufficient to obtain steady estimates of model performance and parameters for the SIMHYD model. It is also shown that most humid catchments require fewer calibration data to obtain good performance and stable parameter values. The model performs better in humid and semi-humid catchments than in arid catchments. Our results may have useful and interesting implications for the efficient use of limited observation data for hydrological model calibration in different climates.
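
    A minimal particle swarm optimizer of the kind used for this calibration might look as follows; the swarm size, inertia and acceleration coefficients are illustrative defaults, not the settings used in the study:

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO (minimisation). Each particle tracks its personal best;
    the swarm shares one global best that attracts all velocities."""
    dim = len(bounds)
    rnd = random.Random(42)
    pos = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                # keep parameters inside their prior ranges
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy stand-in for a model-misfit objective over two normalised parameters.
best, best_f = pso(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2,
                   bounds=[(0.0, 1.0), (0.0, 1.0)])
```

    In a real calibration the lambda would run the rainfall-runoff model and return, for example, 1 − NSE against observed flows.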

  5. Effect of Using Extreme Years in Hydrologic Model Calibration Performance

    Science.gov (United States)

    Goktas, R. K.; Tezel, U.; Kargi, P. G.; Ayvaz, T.; Tezyapar, I.; Mesta, B.; Kentel, E.

    2017-12-01

    Hydrological models are useful in predicting system behaviour and developing management strategies to control it. Specifically, they can be used for evaluating streamflow at ungauged catchments, the effects of climate change or best management practices on water resources, or the identification of pollution sources in a watershed. This study is part of a TUBITAK project named "Development of a geographical information system based decision-making tool for water quality management of Ergene Watershed using pollutant fingerprints". Within the scope of this project, the water resources in the Ergene Watershed are studied first. Streamgages in the basin are identified, and daily streamflow measurements are obtained from the State Hydraulic Works of Turkey. The streamflow data are analysed using box-whisker plots, hydrographs and flow-duration curves, focusing on the identification of extreme periods, dry or wet. Then a hydrological model is developed for the Ergene Watershed using HEC-HMS in the Watershed Modeling System (WMS) environment. The model is calibrated for various time periods, including dry and wet ones, and the calibration performance is evaluated using the Nash-Sutcliffe Efficiency (NSE), correlation coefficient, percent bias (PBIAS) and root mean square error. It is observed that the calibration period affects model performance, and the main purpose of the development of the hydrological model should guide calibration period selection. Acknowledgement: This study is funded by The Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 115Y064.
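
    The four performance measures named in this abstract can be computed as follows; note that the sign convention for PBIAS differs between authors, so this sketch should be checked against the convention in use:

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency (1 is perfect, < 0 is worse than the mean)."""
    m = sum(obs) / len(obs)
    return 1.0 - (sum((o - s) ** 2 for o, s in zip(obs, sim))
                  / sum((o - m) ** 2 for o in obs))

def pbias(obs, sim):
    """Percent bias. Convention assumed here: positive = underestimation."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def rmse(obs, sim):
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def pearson_r(obs, sim):
    mo, ms = sum(obs) / len(obs), sum(sim) / len(sim)
    num = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    den = (math.sqrt(sum((o - mo) ** 2 for o in obs))
           * math.sqrt(sum((s - ms) ** 2 for s in sim)))
    return num / den
```

    Comparing all four over each candidate calibration period makes the period-dependence described above directly visible.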

  6. Global cross-station assessment of neuro-fuzzy models for estimating daily reference evapotranspiration

    Science.gov (United States)

    Shiri, Jalal; Nazemi, Amir Hossein; Sadraddini, Ali Ashraf; Landeras, Gorka; Kisi, Ozgur; Fard, Ahmad Fakheri; Marti, Pau

    2013-02-01

    Accurate estimation of reference evapotranspiration is important for irrigation scheduling, water resources management and planning, and other agricultural water management issues. In the present paper, the capabilities of generalized neuro-fuzzy (GNF) models were evaluated for estimating reference evapotranspiration using two separate sets of weather data from humid and non-humid regions of Spain and Iran. Data from weather stations in the Basque Country and the Valencia region (Spain) were used for training the neuro-fuzzy models [in humid and non-humid regions, respectively], and subsequently the data from these regions were pooled to evaluate the generalization capability of a general neuro-fuzzy model across humid and non-humid regions. The developed models were tested at stations in Iran located in humid and non-humid regions. The results showed the capability of the generalized neuro-fuzzy model to estimate reference evapotranspiration in different climatic zones. Global GNF models calibrated using both non-humid and humid data successfully estimated ET0 in both non-humid and humid regions of Iran (the lowest MAE values are about 0.23 mm for the non-humid Iranian regions and 0.12 mm for the humid regions). The non-humid GNF models calibrated using non-humid data performed much better than the humid GNF models calibrated using humid data in the non-humid region, while the humid GNF model gave better estimates in the humid region.

  7. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    Science.gov (United States)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  8. Calibration and analysis of genome-based models for microbial ecology.

    Science.gov (United States)

    Louca, Stilianos; Doebeli, Michael

    2015-10-16

    Microbial ecosystem modeling is complicated by the large number of unknown parameters and the lack of appropriate calibration tools. Here we present a novel computational framework for modeling microbial ecosystems, which combines genome-based model construction with statistical analysis and calibration to experimental data. Using this framework, we examined the dynamics of a community of Escherichia coli strains that emerged in laboratory evolution experiments, during which an ancestral strain diversified into two coexisting ecotypes. We constructed a microbial community model comprising the ancestral and the evolved strains, which we calibrated using separate monoculture experiments. Simulations reproduced the successional dynamics in the evolution experiments, and pathway activation patterns observed in microarray transcript profiles. Our approach yielded detailed insights into the metabolic processes that drove bacterial diversification, involving acetate cross-feeding and competition for organic carbon and oxygen. Our framework provides a missing link towards a data-driven mechanistic microbial ecology.

  9. Considering Decision Variable Diversity in Multi-Objective Optimization: Application in Hydrologic Model Calibration

    Science.gov (United States)

    Sahraei, S.; Asadzadeh, M.

    2017-12-01

    Any modern multi-objective global optimization algorithm should be able to archive a well-distributed set of solutions. While solution diversity in the objective space has been explored extensively in the literature, little attention has been given to solution diversity in the decision space. Selection metrics such as the hypervolume contribution and the crowding distance calculated in the objective space guide the search toward solutions that are well-distributed across the objective space. In this study, the diversity of solutions in the decision space is used as the main selection criterion, besides the dominance check, in multi-objective optimization. To this end, currently archived solutions are clustered in the decision space, and those in less crowded clusters are given more chance to be selected for generating new solutions. The proposed approach is first tested on benchmark mathematical test problems. Second, it is applied to a hydrologic model calibration problem with more than three objective functions. Results show that the chance of finding a sparser set of high-quality solutions increases, and therefore the analyst receives a well-diverse set of options with the maximum amount of information. Pareto Archived-Dynamically Dimensioned Search, an efficient and parsimonious multi-objective optimization algorithm for model calibration, is utilized in this study.
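
    The selection idea, favouring solutions from less crowded regions of the decision space, can be sketched as follows; the paper clusters the archive, whereas this illustration uses a simple uniform grid as a stand-in for the clustering step:

```python
import random
from collections import defaultdict

def select_from_sparse_region(archive, n_bins=4, seed=0):
    """Pick a parent from the least crowded region of the decision space.

    Each solution (a tuple of parameter values) is binned on a uniform
    grid per dimension; the bin with the fewest members is the sparsest
    "cluster", and a random member of it is returned.
    """
    rnd = random.Random(seed)
    dim = len(archive[0])
    lows = [min(s[d] for s in archive) for d in range(dim)]
    highs = [max(s[d] for s in archive) for d in range(dim)]
    bins = defaultdict(list)
    for s in archive:
        key = tuple(
            min(int((s[d] - lows[d]) / (highs[d] - lows[d] + 1e-12) * n_bins),
                n_bins - 1)
            for d in range(dim))
        bins[key].append(s)
    sparsest = min(bins.values(), key=len)
    return rnd.choice(sparsest)
```

    In a full algorithm this selection would run beside the dominance check, so sparsely populated parameter regions keep contributing offspring.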

  10. Ionosphere Delay Calibration and Calibration Errors for Satellite Navigation of Aircraft

    Science.gov (United States)

    Harris, Ian; Manucci, Anthony; Iijima, Byron; Lindqwister, Ulf; Muna, Demitri; Pi, Xiaoqing; Wilson, Brian

    2000-01-01

    The Federal Aviation Administration (FAA) is implementing a satellite-based navigation system for aircraft using the Global Positioning System (GPS). Positioning accuracy of a few meters will be achieved by broadcasting corrections to the direct GPS signal. These corrections are derived using the wide-area augmentation system (WAAS), which includes a ground network of at least 24 GPS receivers across the Continental US (CONUS). WAAS will provide real-time total electron content (TEC) measurements that can be mapped to fixed grid points using a real-time mapping algorithm. These TECs will be converted into vertical delay corrections for the GPS L1 frequency and broadcast to users every five minutes via geosynchronous satellite. Users will convert these delays to slant calibrations along their own lines-of-sight (LOS) to GPS satellites. Uncertainties in the delay calibrations will also be broadcast, allowing users to estimate the uncertainty of their position. To maintain user safety without reverting to excessive safety margins an empirical model of user calibration errors has been developed. WAAS performance depends on factors that include geographic location (errors increase near WAAS borders), and ionospheric conditions, such as the enhanced spatial electron density gradients found during ionospheric storms.
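
    Converting a broadcast vertical delay to a user's slant delay is commonly done with a thin-shell obliquity factor; the shell height and mapping function below are standard textbook choices, not necessarily the exact WAAS algorithm:

```python
import math

RE = 6371e3          # mean Earth radius, m
H_ION = 350e3        # assumed thin-shell ionosphere height, m
F_L1 = 1575.42e6     # GPS L1 frequency, Hz

def vertical_delay_m(vtec_tecu):
    """L1 group delay from vertical TEC (1 TECU = 1e16 electrons/m^2)."""
    return 40.3 * vtec_tecu * 1e16 / F_L1 ** 2

def obliquity(elevation_rad):
    """Thin-shell mapping factor from vertical to slant delay."""
    x = RE * math.cos(elevation_rad) / (RE + H_ION)
    return 1.0 / math.sqrt(1.0 - x * x)

def slant_delay_m(vtec_tecu, elevation_rad):
    """Slant delay along the user's line of sight to a GPS satellite."""
    return vertical_delay_m(vtec_tecu) * obliquity(elevation_rad)
```

    One TECU contributes roughly 0.16 m of delay at L1 at zenith, and the obliquity factor roughly triples that for satellites near the horizon, which is why low-elevation calibrations carry the largest uncertainties.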

  11. Multivariate Calibration Models for Sorghum Composition using Near-Infrared Spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Wolfrum, E.; Payne, C.; Stefaniak, T.; Rooney, W.; Dighe, N.; Bean, B.; Dahlberg, J.

    2013-03-01

    NREL developed calibration models based on near-infrared (NIR) spectroscopy coupled with multivariate statistics to predict compositional properties relevant to cellulosic biofuels production for a variety of sorghum cultivars. A robust calibration population was developed in an iterative fashion. The quality of models developed using the same sample geometry on two different types of NIR spectrometers and two different sample geometries on the same spectrometer did not vary greatly.

  12. Evaluation of Uncertainties in hydrogeological modeling and groundwater flow analyses. Model calibration

    International Nuclear Information System (INIS)

    Ijiri, Yuji; Ono, Makoto; Sugihara, Yutaka; Shimo, Michito; Yamamoto, Hajime; Fumimura, Kenichi

    2003-03-01

    This study involves the evaluation of uncertainty in hydrogeological modeling and groundwater flow analysis. Three-dimensional groundwater flow at the Shobasama site in Tono was analyzed using two continuum models and one discontinuum model. The study domain covered an area of four kilometers in the east-west direction and six kilometers in the north-south direction. Moreover, to evaluate how the uncertainties in the modeled hydrogeological structure and in the groundwater simulation results decreased as the investigation progressed, the models were updated and calibrated for several hydrogeological structure modeling techniques and groundwater flow analysis techniques, based on newly acquired information and knowledge. The findings are as follows. When parameters and structures were set during the model updates according to the previous year's conditions, there was no large difference in handling between the modeling methods. Model calibration is performed by matching numerical simulations to observations of the pressure response caused by opening and closing a packer in the MIU-2 borehole. Each analysis technique reduces the residual sum of squares between observations and numerical simulation results by adjusting hydrogeological parameters. However, each model adjusts different parameters, such as hydraulic conductivity, effective porosity, specific storage, and anisotropy. When calibrating models, it is sometimes impossible to explain the phenomena by adjusting parameters alone. In such cases, further investigation may be required to clarify the hydrogeological structure in more detail. Comparing the research from its beginning to this year, the following conclusions are obtained about the investigation. (1) Transient hydraulic data are an effective means of reducing the uncertainty of the hydrogeological structure. (2) Effective porosity for calculating pore water velocity of

  13. Ideas for fast accelerator model calibration

    International Nuclear Information System (INIS)

    Corbett, J.

    1997-05-01

    With the advent of a simple matrix inversion technique, measurement-based storage ring modeling has made rapid progress in recent years. Using fast computers with large memory, the matrix inversion procedure typically adjusts up to 10³ model variables to fit on the order of 10⁵ measurements. The results have been surprisingly accurate. Physics aside, one of the next frontiers is to simplify the process and to reduce computation time. In this paper, the authors discuss two approaches to speed up the model calibration process: recursive least-squares fitting and a piecewise fitting approach.
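
    Recursive least-squares fitting updates the parameter estimate one measurement at a time instead of re-inverting the full matrix; a minimal sketch of one RLS step (a standard formulation, not the authors' implementation):

```python
def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least-squares step for the model y ~ x . theta.

    theta: current parameter estimate; P: current covariance-like matrix;
    x: regressor row for this measurement; y: measured value;
    lam < 1 acts as a forgetting factor that discounts old data.
    """
    n = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [v / denom for v in Px]                      # Kalman-style gain
    err = y - sum(x[i] * theta[i] for i in range(n)) # innovation
    theta = [theta[i] + k[i] * err for i in range(n)]
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return theta, P

# Recover y = 2*x0 + 3*x1 from a stream of exact measurements.
theta = [0.0, 0.0]
P = [[1e6, 0.0], [0.0, 1e6]]                         # large prior covariance
for x in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]:
    y = 2.0 * x[0] + 3.0 * x[1]
    theta, P = rls_update(theta, P, list(x), y)
```

    Each update costs O(n²) rather than the O(n³) of a fresh matrix inversion, which is the speed-up the paper is after.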

  14. Spatial and Temporal Self-Calibration of a Hydroeconomic Model

    Science.gov (United States)

    Howitt, R. E.; Hansen, K. M.

    2008-12-01

    Hydroeconomic modeling of water systems where risk and reliability of water supply are of critical importance must address explicitly how to model water supply uncertainty. When large fluctuations in annual precipitation and significant variation in flows by location are present, a model which solves with perfect foresight of future water conditions may be inappropriate for some policy and research questions. We construct a simulation-optimization model with limited foresight of future water conditions using positive mathematical programming and self-calibration techniques. This limited foresight netflow (LFN) model signals the value of storing water for future use and reflects a more accurate economic value of water at key locations, given that future water conditions are unknown. Failure to explicitly model this uncertainty could lead to undervaluation of storage infrastructure and contractual mechanisms for managing water supply risk. A model based on sequentially updated information is more realistic, since water managers make annual storage decisions without knowledge of yet to be realized future water conditions. The LFN model runs historical hydrological conditions through the current configuration of the California water system to determine the economically efficient allocation of water under current economic conditions and infrastructure. The model utilizes current urban and agricultural demands, storage and conveyance infrastructure, and the state's hydrological history to indicate the scarcity value of water at key locations within the state. Further, the temporal calibration penalty functions vary by year type, reflecting agricultural water users' ability to alter cropping patterns in response to water conditions. The model employs techniques from positive mathematical programming (Howitt, 1995; Howitt, 1998; Cai and Wang, 2006) to generate penalty functions that are applied to deviations from observed data. The functions are applied to monthly flows

  15. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models

    Science.gov (United States)

    Karagiannis, Georgios; Lin, Guang

    2017-08-01

    For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixture method, which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input-dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means of combining the available computer models, in a flexible and principled manner, and performing reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique able to mitigate the computational overhead due to the consideration of multiple computer models that is suitable to the mixture model framework. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.

  16. Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity

    Science.gov (United States)

    Louis, S.J.; Raines, G.L.

    2003-01-01

    We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks; the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to the observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral resource information is provided to the cellular automaton will probably improve our model.

  17. Bayesian model calibration of ramp compression experiments on Z

    Science.gov (United States)

    Brown, Justin; Hund, Lauren

    2017-06-01

    Bayesian model calibration (BMC) is a statistical framework to estimate inputs for a computational model in the presence of multiple uncertainties, making it well suited to dynamic experiments which must be coupled with numerical simulations to interpret the results. Often, dynamic experiments are diagnosed using velocimetry and this output can be modeled using a hydrocode. Several calibration issues unique to this type of scenario including the functional nature of the output, uncertainty of nuisance parameters within the simulation, and model discrepancy identifiability are addressed, and a novel BMC process is proposed. As a proof of concept, we examine experiments conducted on Sandia National Laboratories' Z-machine which ramp compressed tantalum to peak stresses of 250 GPa. The proposed BMC framework is used to calibrate the cold curve of Ta (with uncertainty), and we conclude that the procedure results in simple, fast, and valid inferences. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  18. The Value of Hydrograph Partitioning Curves for Calibrating Hydrological Models in Glacierized Basins

    Science.gov (United States)

    He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno

    2018-03-01

    This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas are used to identify the starting and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparably to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where other calibration data sets than discharge are often not available or very costly to obtain.
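
    One way to read ablation start and end dates off the annual cumulative curve of (T − T_melt) is to take its turning points: the curve falls while temperatures sit below the melt threshold and rises during ablation. This turning-point rule is an illustration of the idea, not necessarily the paper's exact criterion:

```python
def ablation_window(daily_temp, t_melt=0.0):
    """Approximate start/end day indices of the ablation period from the
    cumulative curve of (T - t_melt): its minimum marks the onset of
    sustained melt conditions and its maximum their end."""
    cum, curve = 0.0, []
    for t in daily_temp:
        cum += t - t_melt
        curve.append(cum)
    start = curve.index(min(curve))
    end = curve.index(max(curve))
    return start, end
```

    With the hydrograph partitioned this way, melt-related parameters can be calibrated only on the days when the corresponding runoff process actually dominates.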

  19. Modified calibration protocol evaluated in a model-based testing of SBR flexibility

    DEFF Research Database (Denmark)

    Corominas, Lluís; Sin, Gürkan; Puig, Sebastià

    2011-01-01

    The purpose of this paper is to refine the BIOMATH calibration protocol for SBR systems, in particular to develop a pragmatic calibration protocol that takes advantage of SBR information-rich data, defines a simulation strategy to obtain proper initial conditions for model calibration and provide...

  20. Development and Calibration of a Model for the Determination of Hurricane Wind Speed Field at the Peninsula of Yucatan

    Directory of Open Access Journals (Sweden)

    L.E. Fernández–Baqueiro

    2009-01-01

    Full Text Available In this work a model to calculate the wind speed field produced by hurricanes that hit the Yucatan Peninsula is developed. The model variables are calculated using recently developed equations that include new advances in meteorology. The steps of the model are described and implemented in a computer program to systematize and facilitate its use. The model and the program are calibrated using two databases: the first one includes trajectories and maximum wind velocities of hurricanes, and the second one includes records of wind velocities obtained from the Automatic Meteorology Stations of the National Meteorology Service. The hurricane wind velocity field is calculated using the model and information from the first database. The model results are compared with field data from the second database. The model is calibrated by adjusting Holland's radial pressure profile parameter B; this is carried out for three hurricane records: Isidore, Emily and Wilma. It is concluded that a value of B of 1.3 fits the three hurricane records globally and that the developed model is capable of reproducing the wind velocity records satisfactorily.
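
    The calibrated quantity is the B parameter of Holland's (1980) radial wind profile; a minimal sketch of that profile (with the Coriolis term omitted for brevity) using B = 1.3 as reported above:

```python
import math

RHO_AIR = 1.15  # air density, kg m^-3 (typical near-surface value)

def holland_wind(r_m, r_max_m, delta_p_pa, B=1.3):
    """Holland (1980) gradient-level wind speed at radius r from the eye.

    r_max_m: radius of maximum winds; delta_p_pa: central pressure deficit.
    The Coriolis term of the full formula is dropped in this sketch.
    """
    a = (r_max_m / r_m) ** B
    return math.sqrt(B * delta_p_pa / RHO_AIR * a * math.exp(-a))

# Peak wind occurs at r = r_max; e.g. a 90 hPa deficit, r_max = 40 km:
v_peak = holland_wind(40e3, 40e3, 9000.0)
```

    Raising B sharpens the peak around r_max and steepens the decay outside it, which is what adjusting B against station records effectively tunes.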

  1. A new method to calibrate Lagrangian model with ASAR images for oil slick trajectory.

    Science.gov (United States)

    Tian, Siyu; Huang, Xiaoxia; Li, Hongga

    2017-03-15

    Since Lagrangian model coefficients vary with conditions, it is necessary to calibrate the model to obtain the optimal coefficient combination for a specific oil spill accident. This paper proposes a new method to calibrate a Lagrangian model with a time series of Envisat ASAR images. Oil slicks extracted from the time series of images form a detected trajectory of the oil slick. The Lagrangian model is calibrated by minimizing the difference between the simulated trajectory and the detected trajectory. The mean center position distance difference (MCPD) and the rotation difference (RD) of the oil slicks' or particles' standard deviational ellipses (SDEs) are calculated as two evaluation measures. These two parameters are used to evaluate the performance of the Lagrangian transport model with different coefficient combinations. The method is applied to the Penglai 19-3 oil spill accident. The simulation result with the calibrated model agrees well with related satellite observations. It is suggested that the new method is effective for calibrating Lagrangian models. Copyright © 2016 Elsevier Ltd. All rights reserved.
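
    The two evaluation measures, MCPD and RD, can be computed from slick or particle coordinates as follows; the rotation here is the principal-axis angle measured counter-clockwise from the x-axis, whereas GIS tools often report SDE rotation clockwise from north:

```python
import math

def mean_center(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def sde_rotation(points):
    """Orientation of the standard deviational ellipse's major axis, i.e.
    the principal axis of the centred coordinates (radians)."""
    cx, cy = mean_center(points)
    dx = [x - cx for x, _ in points]
    dy = [y - cy for _, y in points]
    sxx = sum(d * d for d in dx)
    syy = sum(d * d for d in dy)
    sxy = sum(a * b for a, b in zip(dx, dy))
    return 0.5 * math.atan2(2.0 * sxy, sxx - syy)

def mcpd_and_rd(detected, simulated):
    """Mean-centre position distance (MCPD) and SDE rotation difference (RD)."""
    (x1, y1), (x2, y2) = mean_center(detected), mean_center(simulated)
    mcpd = math.hypot(x2 - x1, y2 - y1)
    rd = abs(sde_rotation(detected) - sde_rotation(simulated))
    return mcpd, rd
```

    Minimising these two numbers over candidate coefficient combinations is the calibration loop the abstract describes.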

  2. Calibrating corneal material model parameters using only inflation data: an ill-posed problem

    CSIR Research Space (South Africa)

    Kok, S

    2014-08-01

    Full Text Available is to perform numerical modelling using the finite element method, for which a calibrated material model is required. These material models are typically calibrated using experimental inflation data by solving an inverse problem. In the inverse problem...

  3. A multi-objective approach to improve SWAT model calibration in alpine catchments

    Science.gov (United States)

    Tuo, Ye; Marcolini, Giorgia; Disse, Markus; Chiogna, Gabriele

    2018-04-01

    Multi-objective hydrological model calibration can represent a valuable solution to reduce model equifinality and parameter uncertainty. The Soil and Water Assessment Tool (SWAT) model is widely applied to investigate water quality and water management issues in alpine catchments. However, the model calibration is generally based on discharge records only, and most of the previous studies have defined a unique set of snow parameters for an entire basin. Only a few studies have considered snow observations to validate model results or have taken into account the possible variability of snow parameters for different subbasins. This work presents and compares three possible calibration approaches. The first two procedures are single-objective calibration procedures, for which all parameters of the SWAT model were calibrated according to river discharge alone. Procedures I and II differ from each other by the assumption used to define snow parameters: The first approach assigned a unique set of snow parameters to the entire basin, whereas the second approach assigned different subbasin-specific sets of snow parameters to each subbasin. The third procedure is a multi-objective calibration, in which we considered snow water equivalent (SWE) information at two different spatial scales (i.e. subbasin and elevation band), in addition to discharge measurements. We tested these approaches in the Upper Adige river basin where a dense network of snow depth measurement stations is available. Only the set of parameters obtained with this multi-objective procedure provided an acceptable prediction of both river discharge and SWE. These findings offer the large community of SWAT users a strategy to improve SWAT modeling in alpine catchments.

  4. Innovative Calibration Method for System Level Simulation Models of Internal Combustion Engines

    Directory of Open Access Journals (Sweden)

    Ivo Prah

    2016-09-01

    Full Text Available The paper outlines a procedure for the computer-controlled calibration of a combined zero-dimensional (0D) and one-dimensional (1D) thermodynamic simulation model of a turbocharged internal combustion engine (ICE). The main purpose of the calibration is to determine the input parameters of the simulation model in such a way as to achieve the smallest difference between the results of the measurements and the results of the numerical simulations with minimum consumption of computing time. The innovative calibration methodology is based on a novel interaction between optimization methods and physically based methods for the selected ICE sub-systems. Therein, physically based methods were used for steering the division of the integral ICE model into several sub-models and for determining the parameters of selected components from their governing equations. The innovative multistage interaction between optimization methods and physically based methods allows, unlike well-established methods that rely only on optimization techniques, successful calibration of a large number of input parameters with low time consumption. The proposed method is therefore suitable for efficient calibration of simulation models of advanced ICEs.

  5. Visible spectroscopy calibration transfer model in determining pH of Sala mangoes

    International Nuclear Information System (INIS)

    Yahaya, O.K.M.; MatJafri, M.Z.; Aziz, A.A.; Omar, A.F.

    2015-01-01

    The purpose of this study is to compare the efficiency of calibration transfer procedures between three spectrometers, two from Ocean Optics Inc. (QE65000 and Jaz) and an ASD FieldSpec 3, in measuring the pH of Sala mango by visible reflectance spectroscopy. This study evaluates the ability of these spectrometers to measure the pH of Sala mango by applying similar calibration algorithms through direct calibration transfer. This visible reflectance spectroscopy technique defines one spectrometer as the master instrument and another as the slave. The multiple linear regression (MLR) calibration model generated using the QE65000 spectrometer is transferred to the Jaz spectrometer, and vice versa, for Set 1. The same technique is applied for Set 2, where the model from the QE65000 spectrometer is transferred to the FieldSpec 3 spectrometer and vice versa. For Set 1, the results showed that the QE65000 spectrometer established a calibration model with higher accuracy than that of the Jaz spectrometer. In addition, the calibration model developed on the Jaz spectrometer successfully predicted the pH of Sala mango measured using the QE65000 spectrometer, with a root mean square error of prediction RMSEP = 0.092 pH and a coefficient of determination R² = 0.892. Moreover, the best prediction result is obtained for Set 2, when the calibration model developed on the QE65000 spectrometer is successfully transferred to the FieldSpec 3 with R² = 0.839 and RMSEP = 0.16 pH.

  6. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.

    2013-12-01

    Calibration of subsurface flow models is an essential step for managing groundwater aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign different weights to models of different levels of complexity. In this work, we report the first successful application of nested sampling to the calibration of several nonlinear subsurface flow problems. The Bayesian evidence estimated by the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor, where simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.
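The core of nested sampling (shrink the prior volume while accumulating evidence from the lowest-likelihood live point) can be illustrated on a one-dimensional toy problem: a standard-normal likelihood with a uniform prior on [-5, 5], whose analytic evidence is approximately 0.1. The live-point count, iteration budget, and rejection-sampling proposal below are arbitrary illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def loglike(theta):
    """Standard-normal log-likelihood; the prior is uniform on [-5, 5]."""
    return -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)

n_live, n_iter = 100, 500
live = rng.uniform(-5, 5, n_live)
live_logl = loglike(live)

logz_terms = []
x_prev = 1.0
for i in range(1, n_iter + 1):
    worst = np.argmin(live_logl)
    l_min = live_logl[worst]
    x_i = np.exp(-i / n_live)          # expected prior-volume shrinkage
    logz_terms.append(l_min + np.log(x_prev - x_i))
    x_prev = x_i
    # Replace the worst point: rejection-sample the prior above the threshold.
    while True:
        cand = rng.uniform(-5, 5)
        if loglike(cand) > l_min:
            live[worst], live_logl[worst] = cand, loglike(cand)
            break

# Add the contribution of the remaining live points.
logz_terms.append(np.log(x_prev) + np.log(np.mean(np.exp(live_logl))))
evidence = np.sum(np.exp(logz_terms))
print(f"estimated Z = {evidence:.4f} (analytic ~ 0.1)")
```

In a model-selection setting, evidences computed this way for competing parameterizations provide the weights; the rejection sampler used here is only practical for toy problems.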

  7. Multi-objective calibration of a reservoir model: aggregation and non-dominated sorting approaches

    Science.gov (United States)

    Huang, Y.

    2012-12-01

    Numerical reservoir models can be helpful tools for water resource management. These models are generally calibrated against historical measurement data made in reservoirs. In this study, two methods are proposed for the multi-objective calibration of such models: an aggregation method and a non-dominated sorting method. Both methods use a hybrid genetic algorithm as the optimization engine and differ in fitness assignment. In the aggregation method, a weighted sum of scaled simulation errors is designed as an overall objective function to measure the fitness of solutions (i.e. parameter values). The contribution of this study to the aggregation method is the correlation analysis and its implication for the choice of weight factors. In the non-dominated sorting method, a novel scheme based on non-dominated sorting and the method of minimal distance is used to calculate the dummy fitness of solutions. The proposed methods are illustrated using a water quality model that was set up to simulate the water quality of Pepacton Reservoir, which is located to the north of New York City and is used for the city's water supply. The study also compares the aggregation and non-dominated sorting methods. The purpose of this comparison is not to evaluate the pros and cons of the two methods but to determine whether the parameter values, objective function values (simulation errors) and simulated results obtained differ significantly from each other. The final results (objective function values) from the two methods are a good compromise between all objective functions, and none of these results is the worst for any objective function. The calibrated model provides overall good performance, and the simulated results with the calibrated parameter values match the observed data better than with the un-calibrated parameters, which supports and justifies the use of multi-objective calibration. The results achieved in this study can be very useful for the calibration of water
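The aggregation method's fitness assignment (scale each objective across the population, then take a weighted sum) can be sketched in a few lines. The error table and weights below are made up for illustration; in the study the weights follow from a correlation analysis.

```python
import numpy as np

# Hypothetical simulation errors for three objectives (e.g. flow RMSE,
# peak error, water-quality error) for four candidate parameter sets.
errors = np.array([
    [0.8, 1.2, 0.3],
    [0.5, 2.0, 0.4],
    [1.1, 0.9, 0.2],
    [0.6, 1.0, 0.5],
])
weights = np.array([0.5, 0.3, 0.2])   # modeller's choice, e.g. from correlation analysis

# Scale each objective to [0, 1] across the population, then aggregate.
lo, hi = errors.min(axis=0), errors.max(axis=0)
scaled = (errors - lo) / (hi - lo)
fitness = scaled @ weights            # lower = better overall
best = int(np.argmin(fitness))
print("aggregate fitness:", fitness.round(3), "-> best candidate:", best)
```

Scaling before weighting matters: without it, the objective with the largest raw magnitude would silently dominate the sum regardless of the chosen weights.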

  8. A physically based model of global freshwater surface temperature

    Science.gov (United States)

    van Beek, Ludovicus P. H.; Eikelboom, Tessa; van Vliet, Michelle T. H.; Bierkens, Marc F. P.

    2012-09-01

    Temperature determines a range of physical properties of water and exerts a strong control on surface water biogeochemistry. Thus, in freshwater ecosystems the thermal regime directly affects the geographical distribution of aquatic species through their growth and metabolism and indirectly through their tolerance to parasites and diseases. Models used to predict surface water temperature range between physically based deterministic models and statistical approaches. Here we present the initial results of a physically based deterministic model of global freshwater surface temperature. The model adds a surface water energy balance to river discharge modeled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff, and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by shortwave and longwave radiation and sensible and latent heat fluxes. Also included are ice formation and its effect on heat storage and river hydraulics. We use the coupled surface water and energy balance model to simulate global freshwater surface temperature at daily time steps with a spatial resolution of 0.5° on a regular grid for the period 1976-2000. We opt to parameterize the model with globally available data and apply it without calibration in order to preserve its physical basis with the outlook of evaluating the effects of atmospheric warming on freshwater surface temperature. We validate our simulation results with daily temperature data from rivers and lakes (U.S. Geological Survey (USGS), limited to the USA) and compare mean monthly temperatures with those recorded in the Global Environment Monitoring System (GEMS) data set. Results show that the model is able to capture the mean monthly surface temperature for the majority of the GEMS stations, while the interannual variability as derived from the USGS and NOAA data was captured reasonably well. Results are poorest for

  9. Applying Hierarchical Model Calibration to Automatically Generated Items.

    Science.gov (United States)

    Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.

    This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…

  10. Seepage Calibration Model and Seepage Testing Data

    International Nuclear Information System (INIS)

    Dixon, P.

    2004-01-01

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M and O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). 
The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty

  11. Influence of smoothing of X-ray spectra on parameters of calibration model

    International Nuclear Information System (INIS)

    Antoniak, W.; Urbanski, P.; Kowalska, E.

    1998-01-01

    The parameters of the calibration model before and after smoothing of X-ray spectra have been investigated. The calibration model was calculated using a multivariate procedure, namely partial least squares regression (PLS). The investigations were performed on six sets of various standards used for the calibration of instruments based on the X-ray fluorescence principle. Three smoothing methods were compared: regression splines, Savitzky-Golay filtering, and the discrete Fourier transform. The calculations were performed using the MATLAB software package and some home-made programs. (author)
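One of the compared smoothing methods, Savitzky-Golay filtering, amounts to fitting a low-order polynomial in a sliding window and keeping its value at the window centre. The numpy-only sketch below implements that idea directly (scipy's `savgol_filter` provides the same filter off the shelf); the window length, polynomial order, and synthetic "spectrum" are illustrative.

```python
import numpy as np

def savgol_smooth(y, window=7, order=2):
    """Minimal Savitzky-Golay smoother: fit a polynomial of the given order
    to each centred window and keep its value at the window centre."""
    half = window // 2
    x = np.arange(-half, half + 1)
    # Convolution coefficients come from the pseudo-inverse of the Vandermonde
    # matrix; row 0 gives the fitted value at the window centre (x = 0).
    A = np.vander(x, order + 1, increasing=True)
    coeffs = np.linalg.pinv(A)[0]
    ypad = np.pad(y, half, mode="edge")
    return np.convolve(ypad, coeffs[::-1], mode="valid")

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
clean = 5 * t**2 - 3 * t + 2                # smooth underlying "spectrum"
noisy = clean + rng.normal(0, 0.05, t.size)
smoothed = savgol_smooth(noisy)
print("noise std before:", round(float(np.std(noisy - clean)), 4),
      "after:", round(float(np.std(smoothed - clean)), 4))
```

Because the filter is a local polynomial fit, it preserves peak shapes better than a plain moving average of the same width, which is why it is a common pre-treatment before PLS.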

  12. Application of Iterative Robust Model-based Optimal Experimental Design for the Calibration of Biocatalytic Models

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Gernaey, Krist V.; Ringborg, Rolf Hoffmeyer

    2017-01-01

    The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during...... experimentation is not actively used to optimise the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω......-transaminase catalysed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is a more accurate, but also a computationally more expensive method. As a result, an important deviation between both approaches...

  13. How does higher frequency monitoring data affect the calibration of a process-based water quality model?

    Science.gov (United States)

    Jackson-Blake, Leah; Helliwell, Rachel

    2015-04-01

    Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, spanning all hydrochemical conditions. However, regulatory agencies and research organisations generally only sample at a fortnightly or monthly frequency, even in well-studied catchments, often missing peak flow events. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by a process-based, semi-distributed catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the Markov Chain Monte Carlo - DiffeRential Evolution Adaptive Metropolis (MCMC-DREAM) algorithm. Calibration to daily data resulted in improved simulation of peak TDP concentrations and improved model performance statistics. Parameter-related uncertainty in simulated TDP was large when fortnightly data was used for calibration, with a 95% credible interval of 26 μg/l. This uncertainty is comparable in size to the difference between Water Framework Directive (WFD) chemical status classes, and would therefore make it difficult to use this calibration to predict shifts in WFD status. The 95% credible interval reduced markedly with the higher frequency monitoring data, to 6 μg/l. 
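The uncertainty comparison reported above comes down to the width of the 95% credible interval of simulated TDP under each calibration. The sketch below computes such widths from posterior samples; the Gaussian samples are hypothetical stand-ins for MCMC-DREAM output, with spreads chosen only to echo the 26 and 6 μg/l figures.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical posterior samples of simulated TDP concentration (ug/l) at one
# time step, under two calibrations: fortnightly vs daily monitoring data.
tdp_fortnightly = rng.normal(20, 6.6, 5000)   # wide parameter-related uncertainty
tdp_daily = rng.normal(20, 1.5, 5000)         # sharper posterior

def ci95_width(samples):
    """Width of the central 95% credible interval."""
    lo, hi = np.percentile(samples, [2.5, 97.5])
    return hi - lo

print(f"95% CI width, fortnightly calibration: {ci95_width(tdp_fortnightly):.1f} ug/l")
print(f"95% CI width, daily calibration:       {ci95_width(tdp_daily):.1f} ug/l")
```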
The number of parameters that could be reliably auto-calibrated

  14. A globally calibrated scheme for generating daily meteorology from monthly statistics: Global-WGEN (GWGEN) v1.0

    Science.gov (United States)

    Sommer, Philipp S.; Kaplan, Jed O.

    2017-10-01

    While a wide range of Earth system processes occur at daily and even subdaily timescales, many global vegetation and other terrestrial dynamics models historically used monthly meteorological forcing both to reduce computational demand and because global datasets were lacking. Recently, dynamic land surface modeling has moved towards resolving daily and subdaily processes, and global datasets containing daily and subdaily meteorology have become available. These meteorological datasets, however, cover only the instrumental era of the last approximately 120 years at best, are subject to considerable uncertainty, and represent extremely large data files with associated computational costs of data input/output and file transfer. For periods before the recent past or in the future, global meteorological forcing can be provided by climate model output, but the quality of these data at high temporal resolution is low, particularly for daily precipitation frequency and amount. Here, we present GWGEN, a globally applicable statistical weather generator for the temporal downscaling of monthly climatology to daily meteorology. Our weather generator is parameterized using a global meteorological database and simulates daily values of five common variables: minimum and maximum temperature, precipitation, cloud cover, and wind speed. GWGEN is lightweight, modular, and requires a minimal set of monthly mean variables as input. The weather generator may be used in a range of applications, for example, in global vegetation, crop, soil erosion, or hydrological models. While GWGEN does not currently perform spatially autocorrelated multi-point downscaling of daily weather, this additional functionality could be implemented in future versions.
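The heart of such a weather generator for precipitation is a two-state first-order Markov chain for wet/dry occurrence combined with a distribution (commonly gamma) for wet-day amounts. The sketch below is a generic illustration of that scheme: the transition probabilities, gamma shape, and monthly statistics are invented, whereas GWGEN's actual parameters are fitted to a global meteorological database.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical monthly statistics for one grid cell.
p_wet = 0.3                     # long-run fraction of wet days
p_wet_wet = 0.6                 # P(wet today | wet yesterday)
# Choose P(wet | dry) so the chain's stationary wet fraction equals p_wet:
# pi = p_wd / (p_wd + 1 - p_ww)  =>  p_wd = pi * (1 - p_ww) / (1 - pi)
p_wet_dry = p_wet * (1 - p_wet_wet) / (1 - p_wet)
mean_amount = 8.0               # mean precipitation on wet days (mm)

def generate_daily_precip(n_days):
    wet = rng.random() < p_wet
    out = np.zeros(n_days)
    for d in range(n_days):
        p = p_wet_wet if wet else p_wet_dry
        wet = rng.random() < p
        if wet:
            # Gamma amounts with the prescribed wet-day mean (shape * scale).
            out[d] = rng.gamma(shape=0.8, scale=mean_amount / 0.8)
    return out

precip = generate_daily_precip(10_000)
print(f"simulated wet-day fraction: {np.mean(precip > 0):.3f} (target {p_wet})")
print(f"mean wet-day amount: {precip[precip > 0].mean():.2f} mm (target {mean_amount})")
```

The Markov structure reproduces the clustering of wet days that a simple Bernoulli draw would miss, which is exactly the daily precipitation-frequency behaviour that monthly climate-model output represents poorly.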

  15. A method for improving global pyranometer measurements by modeling responsivity functions

    Energy Technology Data Exchange (ETDEWEB)

    Lester, A. [Smith College, Northampton, MA 01063 (United States); Myers, D.R. [National Renewable Energy Laboratory, 1617 Cole Blvd., Golden, CO 80401 (United States)

    2006-03-15

    Accurate global solar radiation measurements are crucial to climate change research and the development of solar energy technologies. Pyranometers produce an electrical signal proportional to global irradiance. The signal-to-irradiance ratio is the responsivity (RS) of the instrument (RS = signal/irradiance = microvolts/(W/m²)). Most engineering measurements are made using a constant RS, yet RS is known to vary with day of year, zenith angle, and net infrared radiation. This study proposes a method to find an RS function to model a pyranometer's changing RS. Using a reference irradiance calculated from direct and diffuse instruments, we found the instantaneous RS for two global pyranometers over 31 sunny days in a two-year period. We performed successive independent regressions of the error between the constant and instantaneous RS with respect to zenith angle, day of year, and net infrared to obtain an RS function. An alternative method replaced the infrared regression with an independently developed technique to account for thermal offset. Results show lower uncertainties with the function method than with the single calibration value. Lower uncertainties also occur when using a black-and-white (8-48), rather than an all-black (PSP), shaded pyranometer as the diffuse reference instrument. We conclude that the function method is extremely effective in reducing uncertainty in the irradiance measurements of global PSP pyranometers if they are calibrated at the deployment site. Furthermore, the function method was found to account for the pyranometer's thermal offset, rendering further corrections unnecessary. The improvements in irradiance data achieved in this study will serve to increase the accuracy of solar energy assessments and atmospheric research. (author)
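A simplified sketch of the function method: regress instantaneous responsivity on zenith angle and seasonal harmonics, and compare the residual error against a single constant RS. The RS model and noise level are synthetic, and the paper's successive independent regressions are collapsed here into one multivariate least-squares fit (the net-infrared term is omitted).

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical calibration data: instantaneous responsivity (uV per W/m^2)
# derived from a reference irradiance, with zenith-angle and seasonal terms.
n = 500
zenith = rng.uniform(20, 80, n)            # solar zenith angle (degrees)
doy = rng.uniform(1, 365, n)               # day of year
rs_true = 8.0 - 0.01 * (zenith - 45) + 0.15 * np.cos(2 * np.pi * doy / 365)
rs_obs = rs_true + rng.normal(0, 0.02, n)

# Constant-RS approach: a single calibration value.
rs_const = rs_obs.mean()

# Function approach: regress RS on zenith angle and day-of-year harmonics.
A = np.column_stack([
    np.ones(n), zenith,
    np.cos(2 * np.pi * doy / 365), np.sin(2 * np.pi * doy / 365),
])
beta, *_ = np.linalg.lstsq(A, rs_obs, rcond=None)
rs_fit = A @ beta

err_const = np.sqrt(np.mean((rs_const - rs_true) ** 2))
err_func = np.sqrt(np.mean((rs_fit - rs_true) ** 2))
print(f"RMS responsivity error: constant {err_const:.3f}, function {err_func:.3f}")
```

Converting signals to irradiance with `rs_fit` instead of `rs_const` then propagates this error reduction directly into the irradiance record.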

  16. Our calibrated model has poor predictive value: An example from the petroleum industry

    Energy Technology Data Exchange (ETDEWEB)

    Carter, J.N. [Department of Earth Science and Engineering, Imperial College, London (United Kingdom)]. E-mail: j.n.carter@ic.ac.uk; Ballester, P.J. [Department of Earth Science and Engineering, Imperial College, London (United Kingdom); Tavassoli, Z. [Department of Earth Science and Engineering, Imperial College, London (United Kingdom); King, P.R. [Department of Earth Science and Engineering, Imperial College, London (United Kingdom)

    2006-10-15

    It is often assumed that once a model has been calibrated to measurements it will have some level of predictive capability, although this may be limited. If the model does not have predictive capability, the assumption is that the model needs to be improved in some way. Using an example from the petroleum industry, we show that cases can exist where calibrated models have limited predictive capability. This occurs even when there is no modelling error present. It is also shown that the introduction of a small modelling error can make it impossible to obtain any models with useful predictive capability. We have been unable to find ways of identifying which calibrated models will have some predictive capacity and which will not.

  17. Our calibrated model has poor predictive value: An example from the petroleum industry

    International Nuclear Information System (INIS)

    Carter, J.N.; Ballester, P.J.; Tavassoli, Z.; King, P.R.

    2006-01-01

    It is often assumed that once a model has been calibrated to measurements it will have some level of predictive capability, although this may be limited. If the model does not have predictive capability, the assumption is that the model needs to be improved in some way. Using an example from the petroleum industry, we show that cases can exist where calibrated models have limited predictive capability. This occurs even when there is no modelling error present. It is also shown that the introduction of a small modelling error can make it impossible to obtain any models with useful predictive capability. We have been unable to find ways of identifying which calibrated models will have some predictive capacity and which will not.

  18. Calibration of a distributed hydrology and land surface model using energy flux measurements

    DEFF Research Database (Denmark)

    Larsen, Morten Andreas Dahl; Refsgaard, Jens Christian; Jensen, Karsten H.

    2016-01-01

    In this study we develop and test a calibration approach on a spatially distributed groundwater-surface water catchment model (MIKE SHE) coupled to a land surface model component with particular focus on the water and energy fluxes. The model is calibrated against time series of eddy flux measure...

  19. Assessing global vegetation activity using spatio-temporal Bayesian modelling

    Science.gov (United States)

    Mulder, Vera L.; van Eck, Christel M.; Friedlingstein, Pierre; Regnier, Pierre A. G.

    2016-04-01

    This work demonstrates the potential of modelling vegetation activity using a hierarchical Bayesian spatio-temporal model. This approach allows changes in vegetation and climate to be modelled simultaneously in space and time. Changes in vegetation activity, such as phenology, are modelled as a dynamic process depending on climate variability in both space and time. Additionally, differences in observed vegetation status can be attributed to other abiotic ecosystem properties, e.g. soil and terrain properties. Although these properties do not change in time, they do change in space and may provide valuable information in addition to the climate dynamics. The spatio-temporal Bayesian models were calibrated at a regional scale because local trends in space and time are better captured by the model. The regional subsets were defined according to the SREX segmentation defined by the IPCC. Each region is considered relatively homogeneous in terms of large-scale climate and biomes while still capturing small-scale (grid-cell level) variability. Modelling within these regions is hence expected to be less uncertain, due to the absence of these large-scale patterns, than a global approach. This overall modelling approach allows the comparison of model behaviour across the different regions and may provide insights into the main dynamic processes driving the interaction between vegetation and climate within different regions. The data employed in this study encompass global datasets for soil properties (SoilGrids), terrain properties (Global Relief Model based on the SRTM DEM and ETOPO), monthly time series of satellite-derived vegetation indices (GIMMS NDVI3g) and climate variables (Princeton Meteorological Forcing Dataset). The findings demonstrated the potential of a spatio-temporal Bayesian modelling approach for assessing vegetation dynamics at a regional scale.
The observed interrelationships of the employed data and the different spatial and temporal trends support

  20. Intercomparison of hydrological model structures and calibration approaches in climate scenario impact projections

    Science.gov (United States)

    Vansteenkiste, Thomas; Tavakoli, Mohsen; Ntegeka, Victor; De Smedt, Florimond; Batelaan, Okke; Pereira, Fernando; Willems, Patrick

    2014-11-01

    The objective of this paper is to investigate the effects of hydrological model structure and calibration on climate change impact results in hydrology. The uncertainty in the hydrological impact results is assessed by the relative change in runoff volumes and peak and low flow extremes from historical and future climate conditions. The effect of the hydrological model structure is examined through the use of five hydrological models with different spatial resolutions and process descriptions. These were applied to a medium sized catchment in Belgium. The models vary from the lumped conceptual NAM, PDM and VHM models over the intermediate detailed and distributed WetSpa model to the fully distributed MIKE SHE model. The latter model accounts for the 3D groundwater processes and interacts bi-directionally with a full hydrodynamic MIKE 11 river model. After careful and manual calibration of these models, accounting for the accuracy of the peak and low flow extremes and runoff subflows, and the changes in these extremes for changing rainfall conditions, the five models respond in a similar way to the climate scenarios over Belgium. Future projections on peak flows are highly uncertain with expected increases as well as decreases depending on the climate scenario. The projections on future low flows are more uniform; low flows decrease (up to 60%) for all models and for all climate scenarios. However, the uncertainties in the impact projections are high, mainly in the dry season. With respect to the model structural uncertainty, the PDM model simulates significantly higher runoff peak flows under future wet scenarios, which is explained by its specific model structure. For the low flow extremes, the MIKE SHE model projects significantly lower low flows in dry scenario conditions in comparison to the other models, probably due to its large difference in process descriptions for the groundwater component, the groundwater-river interactions. The effect of the model
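The impact metrics discussed above (relative changes in runoff volume and in peak and low flow extremes between control and scenario runs) can be sketched as follows. The flow series and the scenario perturbation are purely illustrative, and high/low flows are approximated here by the 95th and 5th daily-flow percentiles rather than the extreme-value statistics a full study would use.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical daily flow series (m^3/s) for a control and a scenario run.
control = rng.gamma(2.0, 5.0, 3650)
scenario = control * 1.1 - 0.5                 # wetter peaks, drier lows (illustrative)
scenario = np.clip(scenario, 0.05, None)

def impact_summary(ctrl, scen):
    """Relative change (%) in runoff volume, high flows and low flows."""
    return {
        "volume": 100 * (scen.sum() / ctrl.sum() - 1),
        "peak (Q95)": 100 * (np.percentile(scen, 95) / np.percentile(ctrl, 95) - 1),
        "low (Q5)": 100 * (np.percentile(scen, 5) / np.percentile(ctrl, 5) - 1),
    }

for name, change in impact_summary(control, scenario).items():
    print(f"{name}: {change:+.1f}%")
```

Running the same summary for each hydrological model and each climate scenario is what allows the structural and scenario uncertainties described above to be compared on a common footing.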

  1. A new sewage exfiltration model--parameters and calibration.

    Science.gov (United States)

    Karpf, Christian; Krebs, Peter

    2011-01-01

    Exfiltration of waste water from sewer systems represents a potential danger for the soil and the aquifer. Common models used to describe the exfiltration process are based on Darcy's law, extended by a more or less detailed consideration of the expansion of leaks, the characteristics of the soil and the colmation layer. However, due to the complexity of the exfiltration process, the calibration of these models involves significant uncertainty. In this paper, a new exfiltration approach is introduced, which implements the dynamics of the clogging process and the structural conditions near sewer leaks. The calibration is realised using experimental studies and analyses of groundwater infiltration into sewers. Furthermore, exfiltration rates and the sensitivity of the approach are estimated and evaluated, respectively, by Monte Carlo simulations.
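The flavour of such a Monte Carlo evaluation can be sketched with a Darcy-type leak model whose inputs are sampled from assumed distributions. All values below (colmation-layer conductivity and thickness, leak area, water level) and the simple rate expression are illustrative assumptions, not the calibrated quantities from this study.

```python
import numpy as np

rng = np.random.default_rng(11)

# Darcy-type exfiltration through a leak covered by a clogging (colmation)
# layer, with the head drop taken across the layer:
#   Q = (k / d) * A * (h_w + d)
n = 20_000
k = rng.lognormal(mean=np.log(1e-6), sigma=0.5, size=n)   # layer conductivity (m/s)
d = rng.uniform(0.005, 0.02, n)                           # layer thickness (m)
area = rng.uniform(0.001, 0.01, n)                        # leak area (m^2)
h_w = rng.uniform(0.05, 0.3, n)                           # water level above leak (m)

q = k / d * area * (h_w + d)                              # exfiltration rate (m^3/s)

print(f"median Q = {np.median(q):.2e} m^3/s")
print(f"5-95% range: {np.percentile(q, 5):.2e} .. {np.percentile(q, 95):.2e}")
```

Reporting the percentile spread rather than a single value is the point of the Monte Carlo step: it turns parameter uncertainty into an uncertainty band on the exfiltration rate.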

  2. An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition

    Science.gov (United States)

    Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.

    2018-04-01

    Interferometric SAR is sensitive to earth surface undulation. The accuracy of the interferometric parameters plays a significant role in producing a precise digital elevation model (DEM). Interferometric calibration obtains a high-precision global DEM by calculating the interferometric parameters using ground control points (GCPs). However, the interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan of Shanxi province as an example and choose 4 TerraDEM-X image pairs to carry out an interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can obtain DEM products with an accuracy better than 2.43 m in flat areas and 6.97 m in mountainous areas, which proves the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50,000 and even larger scales in flat and mountainous areas.

  3. A hydrological prediction system based on the SVS land-surface scheme: efficient calibration of GEM-Hydro for streamflow simulation over the Lake Ontario basin

    Directory of Open Access Journals (Sweden)

    É. Gaborit

    2017-09-01

    Full Text Available This work explores the potential of the distributed GEM-Hydro runoff modeling platform, developed at Environment and Climate Change Canada (ECCC) over the last decade. More precisely, the aim is to develop a robust implementation methodology to perform reliable streamflow simulations with a distributed model over large and partly ungauged basins, in an efficient manner. The latest version of GEM-Hydro combines the SVS (Soil, Vegetation and Snow) land-surface scheme and the WATROUTE routing scheme. SVS has never been evaluated from a hydrological point of view, which is done here for all major rivers flowing into Lake Ontario. Two established hydrological models are compared with GEM-Hydro, namely MESH and WATFLOOD, which share the same routing scheme (WATROUTE) but rely on different land-surface schemes. All models are calibrated using the same meteorological forcings, objective function, calibration algorithm, and basin delineation. GEM-Hydro is shown to be competitive with MESH and WATFLOOD: the NSE√ (Nash–Sutcliffe criterion computed on the square root of the flows) is, for example, equal to 0.83 for MESH and GEM-Hydro in validation on the Moira River basin, and to 0.68 for WATFLOOD. A computationally efficient strategy is proposed to calibrate SVS: a simple unit hydrograph is used for routing instead of WATROUTE. Global and local calibration strategies are compared in order to estimate runoff for ungauged portions of the Lake Ontario basin. Overall, streamflow predictions obtained using a global calibration strategy, in which a single parameter set is identified for the whole basin of Lake Ontario, show accuracy comparable to the predictions based on local calibration: the average NSE√ in validation over seven subbasins is 0.73 and 0.61, respectively, for local and global calibrations. Hence, global calibration provides spatially consistent parameter values, robust performance at gauged locations, and reduces the

  4. A hydrological prediction system based on the SVS land-surface scheme: efficient calibration of GEM-Hydro for streamflow simulation over the Lake Ontario basin

    Science.gov (United States)

    Gaborit, Étienne; Fortin, Vincent; Xu, Xiaoyong; Seglenieks, Frank; Tolson, Bryan; Fry, Lauren M.; Hunter, Tim; Anctil, François; Gronewold, Andrew D.

    2017-09-01

    This work explores the potential of the distributed GEM-Hydro runoff modeling platform, developed at Environment and Climate Change Canada (ECCC) over the last decade. More precisely, the aim is to develop a robust implementation methodology to perform reliable streamflow simulations with a distributed model over large and partly ungauged basins, in an efficient manner. The latest version of GEM-Hydro combines the SVS (Soil, Vegetation and Snow) land-surface scheme and the WATROUTE routing scheme. SVS has never been evaluated from a hydrological point of view, which is done here for all major rivers flowing into Lake Ontario. Two established hydrological models are compared with GEM-Hydro, namely MESH and WATFLOOD, which share the same routing scheme (WATROUTE) but rely on different land-surface schemes. All models are calibrated using the same meteorological forcings, objective function, calibration algorithm, and basin delineation. GEM-Hydro is shown to be competitive with MESH and WATFLOOD: the NSE√ (Nash-Sutcliffe criterion computed on the square root of the flows) is, for example, equal to 0.83 for MESH and GEM-Hydro in validation on the Moira River basin, and to 0.68 for WATFLOOD. A computationally efficient strategy is proposed to calibrate SVS: a simple unit hydrograph is used for routing instead of WATROUTE. Global and local calibration strategies are compared in order to estimate runoff for ungauged portions of the Lake Ontario basin. Overall, streamflow predictions obtained using a global calibration strategy, in which a single parameter set is identified for the whole basin of Lake Ontario, show accuracy comparable to the predictions based on local calibration: the average NSE√ in validation over seven subbasins is 0.73 and 0.61, respectively, for local and global calibrations. Hence, global calibration provides spatially consistent parameter values, robust performance at gauged locations, and reduces the complexity and computation burden of the
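The NSE√ criterion used above for calibration and validation can be written directly: it is the Nash-Sutcliffe efficiency applied to square-root-transformed flows, which balances the weight given to high and low flows. The flow values in the example are hypothetical.

```python
import numpy as np

def nse_sqrt(obs, sim):
    """Nash-Sutcliffe efficiency computed on square-root-transformed flows."""
    o, s = np.sqrt(np.asarray(obs, dtype=float)), np.sqrt(np.asarray(sim, dtype=float))
    return 1 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

# Hypothetical observed and simulated daily flows (m^3/s).
obs = np.array([12.0, 30.0, 55.0, 20.0, 8.0, 95.0, 40.0, 15.0])
sim = np.array([10.0, 33.0, 50.0, 24.0, 9.0, 90.0, 37.0, 18.0])
print(f"NSE_sqrt = {nse_sqrt(obs, sim):.3f}")
```

A value of 1 indicates a perfect match; values above roughly 0.5 are usually read as acceptable, which puts the reported 0.83 and 0.68 in context.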

  5. Identifying influential data points in hydrological model calibration and their impact on streamflow predictions

    Science.gov (United States)

    Wright, David; Thyer, Mark; Westra, Seth

    2015-04-01

    Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostic tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics based on Cook's distance. These diagnostics are compared against hydrologically oriented diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage-discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression, with inherent assumptions about the data, and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this
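The trade-off described above (cheap analytical Cook's distance vs expensive case deletion) rests on the fact that, for a linear model, the two computations agree exactly. This can be verified numerically on a toy linear calibration; the data and the injected outlier are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical linear calibration y = a + b*x with one influential point.
x = np.linspace(0, 10, 20)
y = 2.0 + 0.5 * x + rng.normal(0, 0.2, x.size)
y[-1] += 3.0                                   # make the last point influential

X = np.column_stack([np.ones_like(x), x])
p = X.shape[1]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (len(y) - p)
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)  # leverages (hat-matrix diagonal)

# Analytical Cook's distance (cheap, from one model fit) ...
cooks = resid**2 * h / (p * s2 * (1 - h) ** 2)

# ... and the equivalent case-deletion computation (one refit per point).
cooks_del = np.empty_like(cooks)
for i in range(len(y)):
    keep = np.arange(len(y)) != i
    beta_i, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    d = X @ beta - X @ beta_i
    cooks_del[i] = d @ d / (p * s2)

print("most influential point:", int(np.argmax(cooks)))
print("max |analytic - deletion|:", float(np.abs(cooks - cooks_del).max()))
```

For nonlinear hydrological models this identity no longer holds, which is exactly why the abstract treats Cook's distance as a fast approximation and case deletion as the reference.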

  6. Calibration of the APEX Model to Simulate Management Practice Effects on Runoff, Sediment, and Phosphorus Loss.

    Science.gov (United States)

    Bhandari, Ammar B; Nelson, Nathan O; Sweeney, Daniel W; Baffaut, Claire; Lory, John A; Senaviratne, Anomaa; Pierzynski, Gary M; Janssen, Keith A; Barnes, Philip L

    2017-11-01

    Process-based computer models have been proposed as a tool to generate data for Phosphorus (P) Index assessment and development. Although models are commonly used to simulate P loss from agriculture using managements that are different from the calibration data, this use of models has not been fully tested. The objective of this study is to determine if the Agricultural Policy Environmental eXtender (APEX) model can accurately simulate runoff, sediment, total P, and dissolved P loss from 0.4 to 1.5 ha of agricultural fields with managements that are different from the calibration data. The APEX model was calibrated with field-scale data from eight different managements at two locations (management-specific models). The calibrated models were then validated, either with the same management used for calibration or with different managements. Location models were also developed by calibrating APEX with data from all managements. The management-specific models resulted in satisfactory performance when used to simulate runoff, total P, and dissolved P within their respective systems, with > 0.50, Nash-Sutcliffe efficiency > 0.30, and percent bias within ±35% for runoff and ±70% for total and dissolved P. When applied outside the calibration management, the management-specific models only met the minimum performance criteria in one-third of the tests. The location models had better model performance when applied across all managements compared with management-specific models. Our results suggest that models only be applied within the managements used for calibration and that data be included from multiple management systems for calibration when using models to assess management effects on P loss or evaluate P Indices. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
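The performance criteria quoted above (Nash-Sutcliffe efficiency and percent bias) can be written compactly. A sketch with invented observed/simulated series follows; note that the sign convention for percent bias varies between references:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, < 0 is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; one common sign convention (positive = underestimation)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = np.array([1.2, 3.4, 2.8, 5.1, 4.0])   # invented runoff observations
sim = np.array([1.0, 3.0, 3.1, 4.8, 4.4])   # invented model output

# Minimum runoff criteria from the abstract: NSE > 0.30, |PBIAS| within 35%
runoff_ok = nse(obs, sim) > 0.30 and abs(pbias(obs, sim)) <= 35.0
```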

  7. Global SWOT Data Assimilation of River Hydrodynamic Model; the Twin Simulation Test of CaMa-Flood

    Science.gov (United States)

    Ikeshima, D.; Yamazaki, D.; Kanae, S.

    2016-12-01

    CaMa-Flood is a global-scale model for simulating hydrodynamics in large rivers. It can simulate river hydrodynamics such as river discharge, flooded area and water depth by inputting water runoff derived from a land surface model. Recently, many improvements to its parameters and terrestrial data have been under way to enhance the reproducibility of natural phenomena. However, there are still errors between nature and the simulated results due to uncertainties in each model. SWOT (Surface Water and Ocean Topography) is a satellite, scheduled for launch in 2021, that can measure open-water surface elevation. SWOT observations can be used to calibrate hydrodynamic models for river flow forecasting and are expected to improve model accuracy. Combining observational data with a model for calibration is called data assimilation. In this research, we developed a data-assimilating river flow simulation system at the global scale, using CaMa-Flood as the river hydrodynamics model and simulated SWOT data as observations. In data assimilation, calibrating the "model value" with the "observation value" generally produces the "assimilated value". However, the observed data of the SWOT satellite will not be available until its launch in 2021. Instead, we simulated the SWOT observations using CaMa-Flood. Putting "pure input" into CaMa-Flood produces the "true water storage". Extracting the actual daily swath of SWOT from the "true water storage" yields the simulated observations. For the "model value", we made a "disturbed water storage" by feeding "noise-disturbed input" into CaMa-Flood. Since both the "model value" and the "observation value" are made by the same model, we call this a twin simulation. In the twin simulation, the simulated observations of the "true water storage" are combined with the "disturbed water storage" to make the "assimilated value". As the data assimilation method, we used the ensemble Kalman filter. If the "assimilated value" is closer to the "true water storage" than the "disturbed water storage", the data assimilation can be judged effective. 
Also
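The analysis step of the ensemble Kalman filter used in such a twin simulation can be sketched for a single observed storage value; the numbers (ensemble size, bias, error levels) are illustrative, not CaMa-Flood output:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 100.0                                  # "true water storage"
obs_err = 2.0
obs = truth + rng.normal(0.0, obs_err)         # simulated SWOT observation

# Biased, noise-disturbed ensemble standing in for "disturbed water storage"
ens = truth + 10.0 + rng.normal(0.0, 5.0, size=50)

# Stochastic EnKF analysis step (observation operator = identity here)
pert_obs = obs + rng.normal(0.0, obs_err, size=ens.size)  # perturbed obs
P = np.var(ens, ddof=1)                        # ensemble forecast variance
K = P / (P + obs_err**2)                       # Kalman gain
analysis = ens + K * (pert_obs - ens)

# Assimilation is "effective" if the analysis mean is closer to truth
before = abs(ens.mean() - truth)
after = abs(analysis.mean() - truth)
```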

  8. Calibration Under Uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
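The deterministic formulation described above, calibration as minimization of the squared model-data misfit, can be sketched as follows; the model, parameter range and data are invented, and a brute-force search stands in for a real optimizer:

```python
import numpy as np

def model(a, x):
    return a * x**2            # hypothetical computer model with parameter a

x = np.linspace(0.0, 1.0, 20)
rng = np.random.default_rng(2)
data = model(1.7, x) + rng.normal(0.0, 0.05, x.size)  # "experimental" data

# Classic calibration: pick the parameter minimizing the sum of squared errors
grid = np.linspace(0.5, 3.0, 2501)
sse = [np.sum((model(a, x) - data) ** 2) for a in grid]
a_hat = grid[int(np.argmin(sse))]
```

Calibration under Uncertainty would replace the point estimate `a_hat` with a distribution over the parameter that also accounts for error in the model itself, not only noise in the data.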

  9. Calibration of a distributed hydrologic model using observed spatial patterns from MODIS data

    Science.gov (United States)

    Demirel, Mehmet C.; González, Gorka M.; Mai, Juliane; Stisen, Simon

    2016-04-01

    Distributed hydrologic models are typically calibrated against streamflow observations at the outlet of the basin. Along with these observations from gauging stations, satellite based estimates offer independent evaluation data such as remotely sensed actual evapotranspiration (aET) and land surface temperature. The primary objective of the study is to compare model calibrations against traditional downstream discharge measurements with calibrations against simulated spatial patterns and combinations of both types of observations. While the discharge based model calibration typically improves the temporal dynamics of the model, it seems to give rise to minimal improvement of the simulated spatial patterns. In contrast, objective functions specifically targeting the spatial pattern performance could potentially increase the spatial model performance. However, most modeling studies, including the model formulations and parameterization, are not designed to actually change the simulated spatial pattern during calibration. This study investigates the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale hydrologic model (mHM). This model is selected as it allows for a change in the spatial distribution of key soil parameters through the optimization of pedo-transfer function parameters and includes options for using fully distributed daily Leaf Area Index (LAI) values directly as input. In addition, the simulated aET can be estimated at a spatial resolution suitable for comparison to the spatial patterns observed with MODIS data. To increase our control on spatial calibration we introduced three additional parameters to the model. These new parameters are part of an empirical equation to calculate the crop coefficient (Kc) from daily LAI maps, used to update potential evapotranspiration (PET) as model input. This is done instead of correcting/updating PET with just a uniform (or aspect driven) factor as used in the mHM model
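The abstract describes, without giving its form, an empirical equation with three new parameters that maps daily LAI to a crop coefficient used to rescale PET. A hypothetical saturating form in that spirit (the functional form and the values of a, b, c are assumptions for illustration, not the mHM parameterization):

```python
import numpy as np

def crop_coefficient(lai, a=0.4, b=1.0, c=0.7):
    # Hypothetical form: Kc grows with leaf area, saturates for dense canopies
    return a + b * (1.0 - np.exp(-c * np.asarray(lai, float)))

lai = np.array([0.5, 1.0, 2.5, 4.0])            # illustrative daily LAI values
pet_uniform = np.array([3.0, 3.0, 3.0, 3.0])    # mm/day, before correction

# Spatially distributed PET update replacing a single uniform factor
pet_distributed = crop_coefficient(lai) * pet_uniform
```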

  10. Improving SWAT model prediction using an upgraded denitrification scheme and constrained auto calibration

    Science.gov (United States)

    The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...

  11. A Linear Viscoelastic Model Calibration of Sylgard 184.

    Energy Technology Data Exchange (ETDEWEB)

    Long, Kevin Nicholas; Brown, Judith Alice

    2017-04-01

    We calibrate a linear thermoviscoelastic model for solid Sylgard 184 (90-10 formulation), a lightly cross-linked, highly flexible isotropic elastomer for use both in Sierra / Solid Mechanics via the Universal Polymer Model as well as in Sierra / Structural Dynamics (Salinas) for use as an isotropic viscoelastic material. Material inputs for the calibration in both codes are provided. The frequency domain master curve of oscillatory shear was obtained from a report from Los Alamos National Laboratory (LANL). However, because the form of that data is different from the constitutive models in Sierra, we also present the mapping of the LANL data onto Sandia’s constitutive models. Finally, blind predictions of cyclic tension and compression out to moderate strains of 40 and 20% respectively are compared with Sandia’s legacy cure schedule material. Although the strain rate of the data is unknown, the linear thermoviscoelastic model accurately predicts the experiments out to moderate strains for the slower strain rates, which is consistent with the expectation that quasistatic test procedures were likely followed. This good agreement comes despite the different cure schedules between the Sandia and LANL data.

  12. Technical note: Bayesian calibration of dynamic ruminant nutrition models.

    Science.gov (United States)

    Reed, K F; Arhonditsis, G B; France, J; Kebreab, E

    2016-08-01

    Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
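A minimal sketch of the Bayesian calibration workflow described above: a Metropolis sampler for one model parameter, followed by posterior predictive draws that combine parameter uncertainty with residual variability. The model, data and flat prior are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 25)
theta_true, sigma = 2.0, 0.1
y = theta_true * x + rng.normal(0.0, sigma, x.size)   # synthetic observations

def log_post(theta):
    # Flat prior; Gaussian likelihood with known residual standard deviation
    return -0.5 * np.sum((y - theta * x) ** 2) / sigma**2

theta, samples = 0.0, []
lp = log_post(theta)
for _ in range(5000):                  # random-walk Metropolis
    prop = theta + rng.normal(0.0, 0.2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[1000:])        # discard burn-in

# Posterior predictive at x = 0.5: parameter draws plus residual noise,
# so the spread exceeds what a point estimate of theta would imply
pred = post * 0.5 + rng.normal(0.0, sigma, post.size)
```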

  13. Global sensitivity analysis applied to drying models for one or a population of granules

    DEFF Research Database (Denmark)

    Mortier, Severine Therese F. C.; Gernaey, Krist; Thomas, De Beer

    2014-01-01

    The development of mechanistic models for pharmaceutical processes is of increasing importance due to a noticeable shift toward continuous production in the industry. Sensitivity analysis is a powerful tool during the model building process. A global sensitivity analysis (GSA), exploring sensitivity in a broad parameter space, is performed to detect the most sensitive factors in two models, that is, one for drying of a single granule and one for the drying of a population of granules [using a population balance model (PBM)], which was extended by including the gas velocity as extra input compared to our earlier work. beta(2) was found to be the most important factor for the single particle model, which is useful information when performing model calibration. For the PBM model, the granule radius and gas temperature were found to be most sensitive. The former indicates that granulator...

  14. CALIBRATION OF DISTRIBUTED SHALLOW LANDSLIDE MODELS IN FORESTED LANDSCAPES

    Directory of Open Access Journals (Sweden)

    Gian Battista Bischetti

    2010-09-01

    Full Text Available In mountainous, forested, soil-mantled landscapes all around the world, rainfall-induced shallow landslides are one of the most common hydro-geomorphic hazards, which frequently impact the environment and human lives and properties. In order to produce shallow landslide susceptibility maps, several models have been proposed in the last decade, combining simplified steady-state topography-based hydrological models with the infinite slope scheme, in a GIS framework. In the present paper, two of the still open issues are investigated: the assessment of the validity of slope stability models and the inclusion of root cohesion values. In such a perspective the "Stability INdex MAPping" (SINMAP) model has been applied to a small forested pre-Alpine catchment, adopting different calibrating approaches and target indexes. The Single and the Multiple Calibration Regions modalities and three quantitative target indexes, the common Success Rate (SR), the Modified Success Rate (MSR), and a Weighted Modified Success Rate (WMSR) herein introduced, are considered. The results obtained show that the target index can significantly affect the values of a model's parameters and lead to different proportions of stable/unstable areas, both for the Single and the Multiple Calibration Regions approach. The use of SR as the target index leads to an over-prediction of the unstable areas, whereas the use of MSR and WMSR seems to allow a better discrimination between stable and unstable areas. The Multiple Calibration Regions approach should be preferred, using information on the spatial distribution of vegetation to define the Regions. The use of field-based estimation of root cohesion and sliding depth allows the implementation of slope stability models (SINMAP in our case) also without the data needed for calibration. To maximize the inclusion of such parameters into SINMAP, however, the assumption of a uniform distribution of
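The infinite slope scheme that SINMAP-type models embed, with an explicit root cohesion term as discussed above, can be sketched as follows; all parameter values are illustrative, not the catchment's field data:

```python
import numpy as np

def factor_of_safety(slope_deg, depth, wetness, c_soil, c_root,
                     phi_deg=35.0, gamma_s=19.0, gamma_w=9.81):
    """Infinite-slope factor of safety with root cohesion added.

    depth in m, cohesions in kPa, unit weights in kN/m^3,
    wetness = saturated fraction of the soil column (0..1).
    """
    theta = np.radians(slope_deg)
    phi = np.radians(phi_deg)
    cohesion = c_soil + c_root                         # total cohesion, kPa
    normal = gamma_s * depth * np.cos(theta) ** 2      # normal stress
    pore = wetness * gamma_w * depth * np.cos(theta) ** 2  # pore pressure
    resisting = cohesion + (normal - pore) * np.tan(phi)
    driving = gamma_s * depth * np.sin(theta) * np.cos(theta)
    return resisting / driving

fs_bare = factor_of_safety(40.0, 1.0, 0.9, c_soil=2.0, c_root=0.0)
fs_rooted = factor_of_safety(40.0, 1.0, 0.9, c_soil=2.0, c_root=5.0)
```

In this example the added root cohesion moves the cell from predicted failure (FS < 1) to stability (FS > 1), which is why field-based estimation of root cohesion matters for calibration.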

  15. A Generic Software Framework for Data Assimilation and Model Calibration

    NARCIS (Netherlands)

    Van Velzen, N.

    2010-01-01

    The accuracy of dynamic simulation models can be increased by using observations in conjunction with a data assimilation or model calibration algorithm. However, implementing such algorithms usually increases the complexity of the model software significantly. By using concepts from object oriented

  16. Setting up a hydrological model based on global data for the Ayeyarwady basin in Myanmar

    Science.gov (United States)

    ten Velden, Corine; Sloff, Kees; Nauta, Tjitte

    2017-04-01

    The use of global datasets in local hydrological modelling can be of great value. It opens up the possibility to include data for areas where local data is not or only sparsely available. In hydrological modelling the existence of both static physical data such as elevation and land use, and dynamic meteorological data such as precipitation and temperature, is essential for setting up a hydrological model, but often such data is difficult to obtain at the local level. For the Ayeyarwady catchment in Myanmar a distributed hydrological model (Wflow: https://github.com/openstreams/wflow) was set up with only global datasets, as part of a water resources study. Myanmar is an emerging economy, which has only recently become more receptive to foreign influences. It has a very limited hydrometeorological measurement network, with large spatial and temporal gaps, and data that are of uncertain quality and difficult to obtain. The hydrological model was thus set up based on resampled versions of the SRTM digital elevation model, the GlobCover land cover dataset and the HWSD soil dataset. Three global meteorological datasets were assessed and compared for use in the hydrological model: TRMM, WFDEI and MSWEP. The meteorological datasets were assessed based on their conformity with several precipitation station measurements, and the overall model performance was assessed by calculating the NSE and RVE based on discharge measurements of several gauging stations. The model was run for the period 1979-2012 on a daily time step, and the results show an acceptable applicability of the used global datasets in the hydrological model. The WFDEI forcing dataset gave the best results, with a NSE of 0.55 at the outlet of the model and a RVE of 8.5%, calculated over the calibration period 2006-2012. As a general trend the modelled discharge at the upstream stations tends to be underestimated, and at the downstream stations slightly overestimated. 
The quality of the discharge measurements

  17. Michelson Interferometer for Global High-Resolution Thermospheric Imaging (MIGHTI): Instrument Design and Calibration

    Science.gov (United States)

    Englert, Christoph R.; Harlander, John M.; Brown, Charles M.; Marr, Kenneth D.; Miller, Ian J.; Stump, J. Eloise; Hancock, Jed; Peterson, James Q.; Kumler, Jay; Morrow, William H.; Mooney, Thomas A.; Ellis, Scott; Mende, Stephen B.; Harris, Stewart E.; Stevens, Michael H.; Makela, Jonathan J.; Harding, Brian J.; Immel, Thomas J.

    2017-10-01

    The Michelson Interferometer for Global High-resolution Thermospheric Imaging (MIGHTI) instrument was built for launch and operation on the NASA Ionospheric Connection Explorer (ICON) mission. The instrument was designed to measure thermospheric horizontal wind velocity profiles and thermospheric temperature in altitude regions between 90 km and 300 km, during day and night. For the wind measurements it uses two perpendicular fields of view pointed at the Earth's limb, observing the Doppler shift of the atomic oxygen red and green lines at 630.0 nm and 557.7 nm wavelength. The wavelength shift is measured using field-widened, temperature compensated Doppler Asymmetric Spatial Heterodyne (DASH) spectrometers, employing low order échelle gratings operating at two different orders for the different atmospheric lines. The temperature measurement is accomplished by a multichannel photometric measurement of the spectral shape of the molecular oxygen A-band around 762 nm wavelength. For each field of view, the signals of the two oxygen lines and the A-band are detected on different regions of a single, cooled, frame transfer charge coupled device (CCD) detector. On-board calibration sources are used to periodically quantify thermal drifts, simultaneously with observing the atmosphere. The MIGHTI requirements, the resulting instrument design and the calibration are described.

  18. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    S. Finsterle

    2004-09-02

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross

  19. Seepage Calibration Model and Seepage Testing Data

    International Nuclear Information System (INIS)

    Finsterle, S.

    2004-01-01

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. 
The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross-Drift to obtain the permeability structure for the seepage model

  20. SWAT application in intensive irrigation systems: Model modification, calibration and validation

    Science.gov (United States)

    Dechmi, Farida; Burguete, Javier; Skhiri, Ahmed

    2012-11-01

    The Soil and Water Assessment Tool (SWAT) is a well established, distributed, eco-hydrologic model. However, using the study case of an intensively irrigated agricultural watershed, it was shown that none of the model versions is able to appropriately reproduce the total streamflow in such a system when the irrigation source is outside the watershed. The objective of this study was to modify the SWAT2005 version to correctly simulate the main hydrological processes. Crop yield, total streamflow, total suspended sediment (TSS) losses and phosphorus load calibration and validation were performed using field survey information and water quantity and quality data recorded during 2008 and 2009 in the Del Reguero irrigated watershed in Spain. The goodness of the calibration and validation results was assessed using five statistical measures, including the Nash-Sutcliffe efficiency (NSE). Results indicated that the average annual crop yield and actual evapotranspiration estimations were quite satisfactory. On a monthly basis, the values of NSE were 0.90 (calibration) and 0.80 (validation), indicating that the modified model could reproduce the observed streamflow accurately. The TSS losses were also satisfactorily estimated (NSE = 0.72 and 0.52 for the calibration and validation steps). The monthly temporal patterns and all the statistical parameters indicated that the modified SWAT-IRRIG model adequately predicted the total phosphorus (TP) loading. Therefore, the model could be used to assess the impacts of different best management practices on nonpoint phosphorus losses in irrigated systems.

  1. Beyond discrimination: A comparison of calibration methods and clinical usefulness of predictive models of readmission risk.

    Science.gov (United States)

    Walsh, Colin G; Sharman, Kavya; Hripcsak, George

    2017-12-01

    Prior to implementing predictive models in novel settings, analyses of calibration and clinical usefulness remain as important as discrimination, but they are not frequently discussed. Calibration is a model's reflection of actual outcome prevalence in its predictions. Clinical usefulness refers to the utilities, costs, and harms of using a predictive model in practice. A decision analytic approach to calibrating and selecting an optimal intervention threshold may help maximize the impact of readmission risk and other preventive interventions. To select a pragmatic means of calibrating predictive models that requires a minimum amount of validation data and that performs well in practice. To evaluate the impact of miscalibration on utility and cost via clinical usefulness analyses. Observational, retrospective cohort study with electronic health record data from 120,000 inpatient admissions at an urban, academic center in Manhattan. The primary outcome was thirty-day readmission for three causes: all-cause, congestive heart failure, and chronic coronary atherosclerotic disease. Predictive modeling was performed via L1-regularized logistic regression. Calibration methods were compared including Platt Scaling, Logistic Calibration, and Prevalence Adjustment. Performance of predictive modeling and calibration was assessed via discrimination (c-statistic), calibration (Spiegelhalter Z-statistic, Root Mean Square Error [RMSE] of binned predictions, Sanders and Murphy Resolutions of the Brier Score, Calibration Slope and Intercept), and clinical usefulness (utility terms represented as costs). The amount of validation data necessary to apply each calibration algorithm was also assessed. C-statistics by diagnosis ranged from 0.7 for all-cause readmission to 0.86 (0.78-0.93) for congestive heart failure. Logistic Calibration and Platt Scaling performed best and this difference required analyzing multiple metrics of calibration simultaneously, in particular Calibration
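Platt scaling, one of the calibration methods compared above, fits a two-parameter logistic transform of the raw score on validation data; a sketch with synthetic scores and labels, fitted by plain gradient descent on the log loss:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
labels = (rng.uniform(size=n) < 0.2).astype(float)  # ~20% event rate
# Raw scores: informative about the outcome but not calibrated probabilities
scores = 2.0 * labels + rng.normal(0.0, 1.5, n)

# Platt scaling: p = sigmoid(A * score + B), fit A and B by gradient descent
A, B = 1.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(A * scores + B)))
    grad_A = np.mean((p - labels) * scores)   # d(log loss)/dA
    grad_B = np.mean(p - labels)              # d(log loss)/dB
    A -= 0.1 * grad_A
    B -= 0.1 * grad_B

calibrated = 1.0 / (1.0 + np.exp(-(A * scores + B)))
# At the optimum the mean predicted probability matches outcome prevalence
```

In practice the transform would be fit on a held-out validation set and applied to new scores; the appeal noted in the abstract is that only a modest amount of validation data is needed.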

  2. Calibration of the heat balance model for prediction of car climate

    Science.gov (United States)

    Pokorný, Jan; Fišer, Jan; Jícha, Miroslav

    2012-04-01

    In this paper, the authors describe the development of a heat balance model to predict car climate and heating power load. The model is developed in the Modelica language, using Dymola as the interpreter. It is a dynamical system that describes the heat exchange between the car cabin and the ambient environment. Inside the car cabin, heat exchange among the air zone, the interior and the air-conditioning system is considered. One-dimensional heat transfer with heat accumulation is considered, along with the relative movement of the Sun with respect to the car cabin while the car is moving. Measurements under real operating conditions provided data for model calibration. The model was calibrated for Škoda Felicia parking-summer scenarios.
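A lumped single-node version of such a cabin heat balance (the coefficients are made-up placeholders, not the calibrated Škoda Felicia values) can be integrated with forward Euler:

```python
# One air node exchanging heat with ambient through the cabin shell and
# receiving a constant absorbed solar load; parking scenario, no A/C.
UA = 25.0        # shell conductance, W/K (placeholder)
C = 9.0e4        # thermal capacity of cabin air + interior, J/K (placeholder)
Q_sun = 400.0    # absorbed solar load, W (parking-summer scenario, placeholder)
T_amb = 30.0     # ambient temperature, degC

dt, t_end = 10.0, 3600.0 * 3          # 3 h with 10 s steps
T = 30.0                               # initial cabin temperature, degC
for _ in range(int(t_end / dt)):
    dTdt = (UA * (T_amb - T) + Q_sun) / C   # energy balance on the air node
    T += dt * dTdt

T_steady = T_amb + Q_sun / UA         # analytical equilibrium temperature
```

Calibration in the paper's sense would adjust coefficients like `UA` and `C` until the simulated cabin temperature matches the measured parking-scenario data.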

  3. Dynamic calibration of agent-based models using data assimilation.

    Science.gov (United States)

    Ward, Jonathan A; Evans, Andrew J; Malleson, Nicolas S

    2016-04-01

    A widespread approach to investigating the dynamical behaviour of complex social systems is via agent-based models (ABMs). In this paper, we describe how such models can be dynamically calibrated using the ensemble Kalman filter (EnKF), a standard method of data assimilation. Our goal is twofold. First, we want to present the EnKF in a simple setting for the benefit of ABM practitioners who are unfamiliar with it. Second, we want to illustrate to data assimilation experts the value of using such methods in the context of ABMs of complex social systems and the new challenges these types of model present. We work towards these goals within the context of a simple question of practical value: how many people are there in Leeds (or any other major city) right now? We build a hierarchy of exemplar models that we use to demonstrate how to apply the EnKF and calibrate these using open data of footfall counts in Leeds.

  4. An Expectation-Maximization Method for Calibrating Synchronous Machine Models

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang

    2013-07-21

    The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method’s performance is evaluated using a single-machine infinite bus system and compared with a method where both state and parameters are estimated using an EKF method. Sensitivity studies of the parameter calibration using EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.

  5. SWAT Model Configuration, Calibration and Validation for Lake Champlain Basin

    Science.gov (United States)

    The Soil and Water Assessment Tool (SWAT) model was used to develop phosphorus loading estimates for sources in the Lake Champlain Basin. This document describes the model setup and parameterization, and presents calibration results.

  6. Sensitivity analysis and calibration of a dynamic physically based slope stability model

    Science.gov (United States)

    Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens

    2017-06-01

    Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting that
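The local one-at-a-time sensitivity analysis mentioned above can be sketched with a toy stand-in for the slope stability model; the linear response function is an assumption made purely to keep the example self-contained:

```python
import numpy as np

def model(params):
    # Toy stand-in for the coupled hydrological-geomechanical model output
    k, s, phi, c = params
    return 2.0 * k + 0.1 * s + 5.0 * phi + 0.5 * c

ref = np.array([1.0, 1.0, 1.0, 1.0])       # reference parameter set
names = ["hydraulic_conductivity", "specific_storage",
         "friction_angle", "cohesion"]

base = model(ref)
sensitivity = {}
for i, name in enumerate(names):
    perturbed = ref.copy()
    perturbed[i] *= 1.10                   # perturb one parameter by +10%
    sensitivity[name] = abs(model(perturbed) - base)

# Rank parameters by induced output change; the top ones get calibrated
ranking = sorted(sensitivity, key=sensitivity.get, reverse=True)
```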

  7. Hydrologic Model Development and Calibration: Contrasting a Single- and Multi-Objective Approach for Comparing Model Performance

    Science.gov (United States)

    Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.

    2009-05-01

    Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model, currently under development by Environment
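
    The Nash-Sutcliffe coefficient and the weighted-sum aggregation described above can be made concrete with a short sketch. The threshold used to split high and low flows and all flow values are hypothetical; the NS formula itself is standard.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NS efficiency: 1 is a perfect fit, 0 means no better than
    predicting the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def weighted_objective(obs, sim, threshold, w_high=0.5):
    """Aggregate high-flow and low-flow NS into one score: the weighted-sum
    approach the abstract contrasts with bi-objective (Pareto) calibration."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    high = obs >= threshold
    ns_high = nash_sutcliffe(obs[high], sim[high])
    ns_low = nash_sutcliffe(obs[~high], sim[~high])
    return w_high * ns_high + (1.0 - w_high) * ns_low

obs = np.array([1.0, 2.0, 8.0, 9.0, 1.5, 7.5])
sim = np.array([1.2, 1.8, 7.0, 9.5, 1.4, 8.0])
print(round(nash_sutcliffe(obs, sim), 3))
```

A bi-objective calibration would instead keep `(ns_high, ns_low)` as a pair and retain all non-dominated parameter sets, tracing out the tradeoff curve rather than collapsing it to one point.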

  8. Statistical validation of engineering and scientific models : bounds, calibration, and extrapolation.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Hills, Richard Guy (New Mexico State University, Las Cruces, NM)

    2005-04-01

    Numerical models of complex phenomena often contain approximations due to our inability to fully model the underlying physics, the excessive computational resources required to fully resolve the physics, the need to calibrate constitutive models, or in some cases, our ability to only bound behavior. Here we illustrate the relationship between approximation, calibration, extrapolation, and model validation through a series of examples that use the linear transient convective/dispersion equation to represent the nonlinear behavior of Burgers equation. While the use of these models represents a simplification relative to the types of systems we normally address in engineering and science, the present examples do support the tutorial nature of this document without obscuring the basic issues presented with unnecessarily complex models.

  9. Hydrological model calibration for flood prediction in current and future climates using probability distributions of observed peak flows and model based rainfall

    Science.gov (United States)

    Haberlandt, Uwe; Wallner, Markus; Radtke, Imke

    2013-04-01

    Derived flood frequency analysis based on continuous hydrological modelling is very demanding regarding the required length and temporal resolution of precipitation input data. Often such flood predictions are obtained using long precipitation time series from stochastic approaches or from regional climate models as input. However, the calibration of the hydrological model is usually done using short time series of observed data. This inconsistent employment of different data types for calibration and application of a hydrological model increases its uncertainty. Here, it is proposed to calibrate a hydrological model directly on probability distributions of observed peak flows using model based rainfall in line with its later application. Two examples are given to illustrate the idea. The first one deals with classical derived flood frequency analysis using input data from an hourly stochastic rainfall model. The second one concerns a climate impact analysis using hourly precipitation from a regional climate model. The results show that: (I) the same type of precipitation input data should be used for calibration and application of the hydrological model, (II) a model calibrated on extreme conditions works quite well for average conditions but not vice versa, (III) the calibration of the hydrological model using regional climate model data works as an implicit bias correction method and (IV) the best performance for flood estimation is usually obtained when model based precipitation and observed probability distribution of peak flows are used for model calibration.
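
    Calibrating on the probability distribution of peak flows, rather than on the time series directly, can be sketched by comparing empirical quantiles of observed and simulated annual maxima. The quantile levels and peak-flow values below are hypothetical; the idea is only to show the shape of such an objective function.

```python
import numpy as np

def quantile_objective(obs_peaks, sim_peaks, probs=(0.5, 0.8, 0.9, 0.95)):
    """Distance between simulated and observed annual peak flows measured
    on their empirical quantiles, so the model can be driven by stochastic
    or climate-model rainfall of a different period (illustrative sketch)."""
    q_obs = np.quantile(obs_peaks, probs)
    q_sim = np.quantile(sim_peaks, probs)
    # root-mean-square error over the selected quantile levels
    return float(np.sqrt(np.mean((q_obs - q_sim) ** 2)))

obs = np.array([120.0, 150.0, 90.0, 200.0, 170.0, 130.0, 160.0])
sim = np.array([115.0, 155.0, 95.0, 190.0, 175.0, 125.0, 158.0])
print(quantile_objective(obs, sim))
```

Minimising this objective fits the flood frequency curve itself, which is the quantity the derived analysis ultimately needs.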

  10. AN IMPROVED INTERFEROMETRIC CALIBRATION METHOD BASED ON INDEPENDENT PARAMETER DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    J. Fan

    2018-04-01

    Interferometric SAR is sensitive to Earth surface undulation. The accuracy of the interferometric parameters plays a significant role in producing a precise digital elevation model (DEM). Interferometric calibration aims to obtain a high-precision global DEM by calculating the interferometric parameters using ground control points (GCPs). However, interferometric parameters are usually calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan in Shanxi province as an example and choose 4 TanDEM-X image pairs to carry out an interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can achieve a DEM product accuracy better than 2.43 m in the flat area and 6.97 m in the mountainous area, demonstrating the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1 : 50000 and even larger scales in flat and mountainous areas.

  11. Flood Inundation Modelling Under Uncertainty Using Globally and Freely Available Remote Sensing Data

    Science.gov (United States)

    Yan, K.; Di Baldassarre, G.; Giustarini, L.; Solomatine, D. P.

    2012-04-01

    The extreme consequences of recent catastrophic events have highlighted that flood risk prevention still needs to be improved to reduce human losses and economic damages, which have considerably increased worldwide in recent years. Flood risk management and long term floodplain planning are vital for living with floods, which is the currently proposed approach to cope with floods. To support the decision making processes, a significant issue is the availability of data to build appropriate and reliable models, from which the needed information could be obtained. The desirable data for model building, calibration and validation are often not sufficient or available. A unique opportunity is offered nowadays by globally available data which can be freely downloaded from internet. This might open new opportunities for filling the gap between available and needed data, in order to build reliable models and potentially lead to the development of global inundation models to produce floodplain maps for the entire globe. However, there remains the question of what is the real potential of those global remote sensing data, characterized by different accuracy, for global inundation monitoring and how to integrate them with inundation models. This research aims at contributing to understand whether the current globally and freely available remote sensing data (e.g. SRTM, SAR) can be actually used to appropriately support inundation modelling. In this study, the SRTM DEM is used for hydraulic model building, while ENVISAT-ASAR satellite imagery is used for model validation. To test the usefulness of these globally and freely available data, a model based on the high resolution LiDAR DEM and ground data (high water marks) is used as benchmark. The work is carried out on a data-rich test site: the River Alzette in the north of Luxembourg City. Uncertainties are estimated for both SRTM and LiDAR based models. Probabilistic flood inundation maps are produced under the framework of

  12. Calibration of hydrological model with programme PEST

    Science.gov (United States)

    Brilly, Mitja; Vidmar, Andrej; Kryžanowski, Andrej; Bezak, Nejc; Šraj, Mojca

    2016-04-01

    PEST is a tool based on minimization of an objective function related to the root mean square error between the model output and the measurements. We use the "singular value decomposition" (SVD) section of the PEST control file and the Tikhonov regularization method to successfully estimate the model parameters. PEST can fail if the inverse problem is ill-posed, but SVD ensures that PEST maintains numerical stability. The choice of the initial guess for the parameter values is an important issue in PEST and requires expert knowledge. The flexible nature of the PEST software and its ability to be applied to whole catchments at once meant that the calibration performed extremely well across a high number of sub-catchments. The parallel computing version of PEST, called BeoPEST, was successfully used to speed up the calibration process. BeoPEST employs smart slaves and point-to-point communications to transfer data between the master and slave computers. The HBV-light model is a simple multi-tank-type model for simulating precipitation-runoff. It is a conceptual water balance model of catchment hydrology which simulates discharge using rainfall, temperature and estimates of potential evaporation. The HBV-light-CLI version allows the user to run HBV-light from the command line. Input and results files are in XML form, which makes it easy to connect the model with other applications such as pre- and post-processing utilities and PEST itself. The procedure was applied to a hydrological model of the Savinja catchment (1852 km2), which consists of twenty-one sub-catchments. Data are processed on an hourly basis.
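
    The Tikhonov-regularized least-squares objective that PEST minimizes can be sketched in a few lines. This is an illustrative toy, not PEST itself: one Gauss-Newton step on the stacked system, where the SVD-based `lstsq` solver plays the stabilizing role the abstract attributes to SVD for ill-posed problems.

```python
import numpy as np

def regularized_gauss_newton_step(jacobian, residuals, params, prior, mu=1.0):
    """One Gauss-Newton step on ||r(p)||^2 + mu * ||p - p_prior||^2.
    Stacking sqrt(mu)*I under the Jacobian implements the Tikhonov term;
    np.linalg.lstsq (SVD-based) keeps the step numerically stable even
    when the Jacobian alone is rank-deficient."""
    J = np.asarray(jacobian, float)
    params = np.asarray(params, float)
    n = J.shape[1]
    # regularisation rows stacked under the Jacobian rows
    A = np.vstack([J, np.sqrt(mu) * np.eye(n)])
    b = np.concatenate([-np.asarray(residuals, float),
                        np.sqrt(mu) * (np.asarray(prior, float) - params)])
    step, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params + step

# toy example: residual r(p) = p - y is linear, so one step solves it
p_new = regularized_gauss_newton_step(np.eye(2), [-1.0, -2.0],
                                      [0.0, 0.0], [0.0, 0.0], mu=1e-8)
print(p_new)
```

With a larger `mu`, the solution is pulled toward the prior parameter values, which is how regularization tames an ill-posed calibration.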

  13. PEST modules with regularization for the acceleration of the automatic calibration in hydrodynamic models

    Directory of Open Access Journals (Sweden)

    Polomčić Dušan M.

    2015-01-01

    The calibration of a hydrodynamic model is usually done manually, by 'testing' different values of the hydrogeological parameters and the hydraulic characteristics of the boundary conditions. The PEST program introduced automatic model calibration, which has proved to significantly reduce the subjective influence of the model creator on the results. With the relatively new PEST approach of so-called 'pilot points', the concept of homogeneous zones with fixed parameter values of the porous medium, or zones with given boundary conditions, has become outdated. However, the consequence of this kind of automatic calibration is that a significant amount of time is required to perform the calculation; the duration of calibration is measured in hours, sometimes even days. PEST contains two modules for shortening that process: Parallel PEST and BeoPEST. The paper presents experiments and analysis of different cases of PEST module usage, on the basis of which the time required to calibrate the model is reduced.

  14. The Open Global Glacier Model

    Science.gov (United States)

    Marzeion, B.; Maussion, F.

    2017-12-01

    Mountain glaciers are one of the few remaining sub-systems of the global climate system for which no globally applicable, open source, community-driven model exists. Notable examples from the ice sheet community include the Parallel Ice Sheet Model or Elmer/Ice. While the atmospheric modeling community has a long tradition of sharing models (e.g. the Weather Research and Forecasting model) or comparing them (e.g. the Coupled Model Intercomparison Project or CMIP), recent initiatives originating from the glaciological community show a new willingness to better coordinate global research efforts following the CMIP example (e.g. the Glacier Model Intercomparison Project or the Glacier Ice Thickness Estimation Working Group). In the recent past, great advances have been made in the global availability of data and methods relevant for glacier modeling, spanning glacier outlines, automatized glacier centerline identification, bed rock inversion methods, and global topographic data sets. Taken together, these advances now allow the ice dynamics of glaciers to be modeled on a global scale, provided that adequate modeling platforms are available. Here, we present the Open Global Glacier Model (OGGM), developed to provide a global scale, modular, and open source numerical model framework for consistently simulating past and future global scale glacier change. Global not only in the sense of leading to meaningful results for all glaciers combined, but also for any small ensemble of glaciers, e.g. at the headwater catchment scale. Modular to allow combinations of different approaches to the representation of ice flow and surface mass balance, enabling a new kind of model intercomparison. Open source so that the code can be read and used by anyone and so that new modules can be added and discussed by the community, following the principles of open governance. Consistent in order to provide uncertainty measures at all realizable scales.

  15. [Outlier sample discriminating methods for building calibration model in melons quality detecting using NIR spectra].

    Science.gov (United States)

    Tian, Hai-Qing; Wang, Chun-Guang; Zhang, Hai-Jun; Yu, Zhi-Hong; Li, Jian-Kang

    2012-11-01

    Outlier samples strongly influence the precision of the calibration model in soluble solids content measurement of melons using NIR spectra. According to the possible sources of outlier samples, three methods (predicted concentration residual test; Chauvenet test; leverage and studentized residual test) were used to discriminate these outliers. Nine suspicious outliers were detected in a calibration set of 85 fruit samples. Because the 9 suspicious outlier samples might contain some non-outlier samples, they were returned to the model one by one to see whether they influenced the model and its prediction precision. In this way, 5 samples that were helpful to the model rejoined the calibration set, and a new model was developed with a correlation coefficient (r) of 0.889 and a root mean square error of calibration (RMSEC) of 0.601 °Brix. For 35 unknown samples, the root mean square error of prediction (RMSEP) was 0.854 °Brix. This model performed better than the one developed without eliminating any outliers from the calibration set (r = 0.797, RMSEC = 0.849 °Brix, RMSEP = 1.19 °Brix), and was more representative and stable than the one developed with all 9 samples eliminated from the calibration set (r = 0.892, RMSEC = 0.605 °Brix, RMSEP = 0.862 °Brix).
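
    The third of the three tests, leverage combined with studentized residuals, can be sketched for a simple one-predictor calibration fit. The data below are synthetic (a hypothetical °Brix calibration with one planted outlier), not the melon spectra of the study.

```python
import numpy as np

def leverage_studentized(x, y):
    """Leverage and (internally) studentized residuals for an OLS fit.
    Samples with high leverage or large |t| are flagged as candidate
    outliers (sketch of one of the three discrimination tests)."""
    X = np.column_stack([np.ones(len(x)), x])   # design matrix with intercept
    H = X @ np.linalg.pinv(X.T @ X) @ X.T       # hat matrix
    h = np.diag(H)                              # leverage of each sample
    resid = y - H @ y
    dof = len(y) - X.shape[1]
    s2 = resid @ resid / dof                    # residual variance
    t = resid / np.sqrt(s2 * (1.0 - h))         # studentized residuals
    return h, t

rng = np.random.default_rng(0)
x = rng.uniform(8, 16, 30)                      # hypothetical reference values
y = 0.9 * x + rng.normal(0, 0.3, 30)            # hypothetical NIR predictions
y[5] += 3.0                                     # planted outlier sample
h, t = leverage_studentized(x, y)
print(int(np.argmax(np.abs(t))))
```

A common rule flags samples with |t| above roughly 2.5-3 for the kind of case-by-case re-examination the abstract describes.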

  16. High Gain Antenna Calibration on Three Spacecraft

    Science.gov (United States)

    Hashmall, Joseph A.

    2011-01-01

    This paper describes the alignment calibration of spacecraft High Gain Antennas (HGAs) for three missions. For two of the missions (the Lunar Reconnaissance Orbiter and the Solar Dynamics Observatory) the calibration was performed on orbit. For the third mission (the Global Precipitation Measurement core satellite) ground simulation of the calibration was performed in a calibration feasibility study. These three satellites provide a range of calibration situations: lunar orbit transmitting to a ground antenna for LRO, geosynchronous orbit transmitting to a ground antenna for SDO, and low Earth orbit transmitting to TDRS satellites for GPM. The calibration results depend strongly on the quality and quantity of calibration data. With insufficient data the calibration function may give erroneous solutions. Manual intervention in the calibration allowed reliable parameters to be generated for all three missions.

  17. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-seconds (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 1 to 10^6 km2 in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
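
    An empirical AF regression of this kind is typically fitted in log space. The sketch below uses a reduced, hypothetical predictor set (area, precipitation, slope) and synthetic data generated from an assumed power law, only to show the mechanics of the fit and the explained-variance check.

```python
import numpy as np

def fit_af_regression(area, precip, slope, af):
    """Fit log(AF) ~ log(area) + log(P) + log(slope) by ordinary least
    squares and return the coefficients and the R^2 in log space
    (simplified stand-in for the multi-predictor models of the abstract)."""
    X = np.column_stack([np.ones_like(area), np.log(area),
                         np.log(precip), np.log(slope)])
    y = np.log(af)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    predicted = X @ coef
    ss_res = np.sum((y - predicted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return coef, 1.0 - ss_res / ss_tot

# synthetic catchments drawn from an assumed power-law relationship
rng = np.random.default_rng(1)
area = rng.uniform(1, 1e6, 200)         # km^2
precip = rng.uniform(300, 3000, 200)    # mm/yr
slope = rng.uniform(0.01, 0.5, 200)     # m/m
af = 1e-3 * area**0.95 * precip**1.2 * slope**0.3
coef, r2 = fit_af_regression(area, precip, slope, af)
print(round(r2, 3))
```

Because the exponents enter linearly in log space, the fitted coefficients are directly interpretable as elasticities of annual flow with respect to each catchment characteristic.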

  18. Calibration of a complex activated sludge model for the full-scale wastewater treatment plant

    OpenAIRE

    Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw

    2011-01-01

    In this study, the results of the calibration of the complex activated sludge model implemented in BioWin software for the full-scale wastewater treatment plant are presented. Within the calibration of the model, sensitivity analysis of its parameters and the fractions of carbonaceous substrate were performed. In the steady-state and dynamic calibrations, a successful agreement between the measured and simulated values of the output variables was achieved. Sensitivity analysis revealed that u...

  19. Calibration of Ocean Forcing with satellite Flux Estimates (COFFEE)

    Science.gov (United States)

    Barron, Charlie; Jan, Dastugue; Jackie, May; Rowley, Clark; Smith, Scott; Spence, Peter; Gremes-Cordero, Silvia

    2016-04-01

    Predicting the evolution of ocean temperature in regional ocean models depends on estimates of surface heat fluxes and upper-ocean processes over the forecast period. Within the COFFEE project (Calibration of Ocean Forcing with satellite Flux Estimates), real-time satellite observations are used to estimate shortwave, longwave, sensible, and latent air-sea heat flux corrections to a background estimate from the prior day's regional or global model forecast. These satellite-corrected fluxes are used to prepare a corrected ocean hindcast and to estimate flux error covariances to project the heat flux corrections for a 3-5 day forecast. In this way, satellite remote sensing is applied to not only inform the initial ocean state but also to mitigate errors in surface heat flux and model representations affecting the distribution of heat in the upper ocean. While traditional assimilation of sea surface temperature (SST) observations re-centers ocean models at the start of each forecast cycle, COFFEE endeavors to appropriately partition and reduce forecast error among various surface heat flux and ocean dynamics sources. A suite of experiments in the southern California Current demonstrates a range of COFFEE capabilities, showing the impact on forecast error relative to a baseline three-dimensional variational (3DVAR) assimilation using operational global or regional atmospheric forcing. Experiment cases combine different levels of flux calibration with assimilation alternatives. The cases use the original fluxes, apply full satellite corrections during the forecast period, or extend hindcast corrections into the forecast period. Assimilation is either baseline 3DVAR or standard strong-constraint 4DVAR, with work proceeding to add a 4DVAR expanded to include a weak constraint treatment of the surface flux errors. Covariance of flux errors is estimated from the recent time series of forecast and calibrated flux terms. While the California Current examples are shown, the approach is

  20. Calibration of the heat balance model for prediction of car climate

    Directory of Open Access Journals (Sweden)

    Jícha Miroslav

    2012-04-01

    In the paper, the authors describe the development of a heat balance model to predict car climate and heat load. The model is developed in the Modelica language using Dymola as the interpreter. It is a dynamical system which describes the heat exchange between the car cabin and the ambient environment. Inside the car cabin, heat exchange between the air zone, the interior and the air-conditioning system is considered. The model assumes 1D heat transfer with heat accumulation and the relative movement of the Sun with respect to the car cabin while the car is moving. Measurements under real operating conditions provided the data for model calibration. The model was calibrated for Škoda Felicia parking-summer scenarios.

  1. Global evaluation of runoff from ten state-of-the-art hydrological models

    Science.gov (United States)

    Beck, Hylke; de Roo, Ad; van Dijk, Albert; Schellekens, Jaap; Dutra, Emanuel; Fink, Gabriel; Orth, Rene

    2016-04-01

    Observed streamflow data from 966 medium sized catchments (1000 to 5000 km2) around the globe were used to comprehensively evaluate the daily runoff estimates (1979-2012) of six global hydrological models (GHMs) and four land surface models (LSMs) produced as part of Tier-1 of the eartH2Observe project. The models were all driven by the WATCH Forcing Data ERA-Interim (WFDEI) meteorological dataset, but used different datasets for non-meteorologic inputs and were run at various spatial and temporal resolutions, although all data were re-sampled to a common 0.5° spatial and daily temporal resolution. For the evaluation, we used a broad range of performance metrics related to important aspects of the hydrograph. We found pronounced inter-model performance differences, underscoring the importance of hydrological model uncertainty in addition to climate input uncertainty, for example in studies assessing the hydrological impacts of climate change. The (uncalibrated) GHMs were found to perform better than the LSMs in snow-dominated regions, and the ensemble mean was found to perform only slightly worse than the best (calibrated) model. The models generally showed an early bias in the spring snowmelt peak. We further found that, despite adjustments using gauge observations, the WFDEI precipitation data still contain substantial biases which propagate in the simulated runoff. Overall, more effort should be devoted to calibrating and regionalizing the parameters of macro-scale models.

  2. Can We Use Regression Modeling to Quantify Mean Annual Streamflow at a Global-Scale?

    Science.gov (United States)

    Barbarossa, V.; Huijbregts, M. A. J.; Hendriks, J. A.; Beusen, A.; Clavreul, J.; King, H.; Schipper, A.

    2016-12-01

    Quantifying mean annual flow of rivers (MAF) at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF using observations of discharge and catchment characteristics from 1,885 catchments worldwide, ranging from 2 to 10^6 km2 in size. In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB [van Beek et al., 2011] by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area, mean annual precipitation and air temperature, average slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error values were lower (0.29 - 0.38 compared to 0.49 - 0.57) and the modified index of agreement was higher (0.80 - 0.83 compared to 0.72 - 0.75). Our regression model can be applied globally at any point of the river network, provided that the input parameters are within the range of values employed in the calibration of the model. The performance is reduced for water scarce regions and further research should focus on improving such an aspect for regression-based global hydrological models.

  3. The design and realization of calibration apparatus for measuring the concentration of radon in three models

    Energy Technology Data Exchange (ETDEWEB)

    Huiping, Guo [The Second Artillery Engineering College, Xi'an (China)]

    2007-06-15

    To satisfy the calibration requirements of radon measurement in the laboratory, a calibration apparatus for radon activity measurement was designed and realized. The apparatus can auto-control and auto-measure in three modes: sequential mode, pulse mode and constant mode. The stability and reliability of the calibration apparatus were tested under the three modes. The experimental results show that the apparatus can provide an adjustable and steady radon activity concentration environment for research on radon and its progeny and for the calibration of radon measurements. (authors)

  4. Absolute radiometric calibration of Landsat using a pseudo invariant calibration site

    Science.gov (United States)

    Helder, D.; Thome, K.J.; Mishra, N.; Chander, G.; Xiong, Xiaoxiong; Angal, A.; Choi, Tae-young

    2013-01-01

    Pseudo invariant calibration sites (PICS) have been used for on-orbit radiometric trending of optical satellite systems for more than 15 years. This approach to vicarious calibration has demonstrated a high degree of reliability and repeatability at the level of 1-3% depending on the site, spectral channel, and imaging geometries. A variety of sensors have used this approach for trending because it is broadly applicable and easy to implement. Models to describe the surface reflectance properties, as well as the intervening atmosphere have also been developed to improve the precision of the method. However, one limiting factor of using PICS is that an absolute calibration capability has not yet been fully developed. Because of this, PICS are primarily limited to providing only long term trending information for individual sensors or cross-calibration opportunities between two sensors. This paper builds an argument that PICS can be used more extensively for absolute calibration. To illustrate this, a simple empirical model is developed for the well-known Libya 4 PICS based on observations by Terra MODIS and EO-1 Hyperion. The model is validated by comparing model predicted top-of-atmosphere reflectance values to actual measurements made by the Landsat ETM+ sensor reflective bands. Following this, an outline is presented to develop a more comprehensive and accurate PICS absolute calibration model that can be Système international d'unités (SI) traceable. These initial concepts suggest that absolute calibration using PICS is possible on a broad scale and can lead to improved on-orbit calibration capabilities for optical satellite sensors.

  5. Procedure for the Selection and Validation of a Calibration Model I-Description and Application.

    Science.gov (United States)

    Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D

    2017-05-01

    Calibration model selection is required for all quantitative methods in toxicology and more broadly in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. Mis-selecting the calibration model will generate lower quality control (QC) accuracy, with an error up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QCs accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x^2 was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone.
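
    The first step of the scheme, the F-test on replicate variances at the two limits of quantification, can be sketched as below. The replicate signal values are hypothetical, and the critical F value is hard-coded for this specific case (~16.0 for 5 replicates per level at alpha = 0.01, taken from standard F tables) rather than computed from a distribution.

```python
import numpy as np

def needs_weighting(lloq_reps, uloq_reps, f_critical=16.0):
    """F-test sketch for weighting need: compare the replicate variances at
    the upper and lower limits of quantification. If the variance grows
    significantly with concentration, an unweighted fit is inappropriate.
    f_critical corresponds to df = (4, 4) at alpha = 0.01 (assumed here)."""
    v_low = np.var(lloq_reps, ddof=1)
    v_high = np.var(uloq_reps, ddof=1)
    return (v_high / v_low) > f_critical

lloq = [0.98, 1.02, 1.01, 0.99, 1.00]    # hypothetical LLOQ replicate signals
uloq = [95.0, 105.0, 99.0, 110.0, 91.0]  # hypothetical ULOQ replicate signals
print(needs_weighting(lloq, uloq))
```

In the published scheme the subsequent choices (1/x versus 1/x^2, and linear versus quadratic order) follow analogous variance-spread and partial F-test criteria.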

  6. Comparison of Performance between Genetic Algorithm and SCE-UA for Calibration of SCS-CN Surface Runoff Simulation

    OpenAIRE

    Jeon, Ji-Hong; Park, Chan-Gi; Engel, Bernard

    2014-01-01

    Global optimization methods linked with simulation models are widely used for automated calibration and serve as useful tools for searching for cost-effective alternatives for environmental management. A genetic algorithm (GA) and shuffled complex evolution (SCE-UA) algorithm were linked with the Long-Term Hydrologic Impact Assessment (L-THIA) model, which employs the curve number (SCS-CN) method. The performance of the two optimization methods was compared by automatically calibrating L-THI...
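
    The SCS-CN method underlying L-THIA reduces to a single well-known runoff equation, which the optimizers above calibrate by adjusting the curve number. The sketch below implements the standard SI form; the precipitation value in the example is arbitrary.

```python
def scs_cn_runoff(precip_mm, cn, lambda_ia=0.2):
    """SCS curve number direct runoff (mm):
    Q = (P - Ia)^2 / (P - Ia + S), with potential maximum retention
    S = 25400/CN - 254 (SI units) and initial abstraction Ia = lambda * S."""
    s = 25400.0 / cn - 254.0      # potential maximum retention (mm)
    ia = lambda_ia * s            # initial abstraction (mm)
    if precip_mm <= ia:
        return 0.0                # all rainfall abstracted, no direct runoff
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

print(round(scs_cn_runoff(100.0, 80), 1))
```

A GA or SCE-UA calibration would repeatedly evaluate this function over an event series, comparing simulated and observed runoff while searching over CN (and possibly lambda).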

  7. Thermodynamically consistent model calibration in chemical kinetics

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2011-05-01

    Full Text Available Abstract Background The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results We introduce a thermodynamically consistent model calibration (TCMC method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new
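
    The idea behind TCMC, fitting kinetic parameters subject to thermodynamic constraints, can be illustrated on a toy three-species cycle A⇌B⇌C⇌A, where detailed balance (Wegscheider's condition) forces the product of forward rate constants around the cycle to equal the product of reverse ones. The rate values and the simple least-squares objective below are illustrative assumptions, not taken from the paper's EGF/ERK model:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical noisy estimates of the six rate constants of the cycle
# A <-> B <-> C <-> A, ordered [k1f, k1r, k2f, k2r, k3f, k3r].
k_measured = np.array([1.2, 0.5, 0.8, 0.4, 2.0, 1.1])

def objective(logk):
    # Stay as close as possible to the measured values (in log space).
    return np.sum((logk - np.log(k_measured)) ** 2)

def wegscheider(logk):
    # Detailed balance around the cycle: k1f*k2f*k3f == k1r*k2r*k3r,
    # i.e. this log-difference must vanish.
    return (logk[0] + logk[2] + logk[4]) - (logk[1] + logk[3] + logk[5])

res = minimize(objective, np.log(k_measured),
               constraints=[{"type": "eq", "fun": wegscheider}])
k_feasible = np.exp(res.x)  # thermodynamically feasible rate constants
```

    Working in log space keeps the rate constants positive and makes the cycle condition a linear equality constraint, which equality-constrained solvers such as SLSQP handle directly.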

  8. Diagnosing the impact of alternative calibration strategies on coupled hydrologic models

    Science.gov (United States)

    Smith, T. J.; Perera, C.; Corrigan, C.

    2017-12-01

    Hydrologic models represent a significant tool for understanding, predicting, and responding to the impacts of water on society and of society on water resources and, as such, are used extensively in water resources planning and management. Given this important role, the validity and fidelity of hydrologic models are imperative. While extensive attention has been paid to improving hydrologic models through better process representation, better parameter estimation, and better uncertainty quantification, significant challenges remain. In this study, we explore a number of competing model calibration scenarios for simple, coupled snowmelt-runoff models to better understand the sensitivity and variability of parameterizations and their impact on model performance, robustness, fidelity, and transferability. Our analysis highlights the sensitivity of coupled snowmelt-runoff model parameterizations to alterations in calibration approach, underscores the concept of information content in hydrologic modeling, and provides insight into potential strategies for improving model robustness and fidelity.

  9. A multi-source satellite data approach for modelling Lake Turkana water level: calibration and validation using satellite altimetry data

    Directory of Open Access Journals (Sweden)

    N. M. Velpuri

    2012-01-01

    satellite-driven water balance model for (i) quantitative assessment of the impact of basin developmental activities on lake levels and for (ii) forecasting lake level changes and their impact on fisheries. From this study, we suggest that globally available satellite altimetry data provide a unique opportunity for calibration and validation of hydrologic models in ungauged basins.

  10. A multi-source satellite data approach for modelling Lake Turkana water level: Calibration and validation using satellite altimetry data

    Science.gov (United States)

    Velpuri, N.M.; Senay, G.B.; Asante, K.O.

    2012-01-01

    model for (i) quantitative assessment of the impact of basin developmental activities on lake levels and for (ii) forecasting lake level changes and their impact on fisheries. From this study, we suggest that globally available satellite altimetry data provide a unique opportunity for calibration and validation of hydrologic models in ungauged basins. © Author(s) 2012.
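
    A lumped lake water-balance step of the kind such satellite-driven models integrate can be sketched as follows; the constant-surface-area assumption and the variable names are simplifications for illustration, not the Lake Turkana model itself:

```python
def lake_level_step(level_m, area_m2, precip_m, evap_m, inflow_m3, outflow_m3):
    """Advance lake level by one time step of a lumped water balance.
    precip/evap are depths per step over the lake; in/outflow are volumes.
    Assumes the surface area stays roughly constant over the step."""
    dV = (precip_m - evap_m) * area_m2 + inflow_m3 - outflow_m3
    return level_m + dV / area_m2
```

    In a calibration-against-altimetry setup, the simulated level series from repeated calls of such a step function is compared directly with satellite-derived lake heights.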

  11. Description, calibration and sensitivity analysis of the local ecosystem submodel of a global model of carbon and nitrogen cycling and the water balance in the terrestrial biosphere

    Energy Technology Data Exchange (ETDEWEB)

    Kercher, J.R. [Lawrence Livermore National Lab., CA (United States); Chambers, J.Q. [Lawrence Livermore National Lab., CA (United States)]|[California Univ., Santa Barbara, CA (United States). Dept. of Biological Sciences

    1995-10-01

    We have developed a geographically distributed ecosystem model, TERRA, for the carbon, nitrogen, and water dynamics of the terrestrial biosphere. The local ecosystem model of TERRA consists of coupled, modified versions of TEM and DAYTRANS. The ecosystem model in each grid cell calculates water fluxes of evaporation, transpiration, and runoff; carbon fluxes of gross primary productivity, litterfall, and plant and soil respiration; and nitrogen fluxes of vegetation uptake, litterfall, mineralization, immobilization, and system loss. The state variables are soil water content; carbon in live vegetation; carbon in soil; nitrogen in live vegetation; organic nitrogen in soil and litter; available inorganic nitrogen aggregating nitrites, nitrates, and ammonia; and a variable for allocation. Carbon and nitrogen dynamics are calibrated to specific sites in 17 vegetation types. Eight parameters are determined during calibration for each of the 17 vegetation types. At calibration, the annual average values of carbon in vegetation, C{sub v}, show site differences that derive from the vegetation-type-specific parameters and intersite variation in climate and soils. From calibration, we recover the average C{sub v} of forests, woodlands, savannas, grasslands, shrublands, and tundra that were used to develop the model initially. The timing of the phases of the annual variation is driven by temperature and light in the high-latitude and moist temperate zones. The dry temperate zones are driven by temperature, precipitation, and light. In the tropics, precipitation is the key variable in annual variation. The seasonal responses are even more clearly demonstrated in net primary production and show the same controlling factors.

  12. Calibration of a distributed hydrologic model for six European catchments using remote sensing data

    Science.gov (United States)

    Stisen, S.; Demirel, M. C.; Mendiguren González, G.; Kumar, R.; Rakovec, O.; Samaniego, L. E.

    2017-12-01

    While observed streamflow has been the single reference for most conventional hydrologic model calibration exercises, the availability of spatially distributed remote sensing observations provides new possibilities for multi-variable calibration assessing both the spatial and temporal variability of different hydrologic processes. In this study, we first identify the key transfer parameters of the mesoscale Hydrologic Model (mHM) controlling both the discharge and the spatial distribution of actual evapotranspiration (AET) across six central European catchments (Elbe, Main, Meuse, Moselle, Neckar and Vienne). These catchments are selected for their limited topographical and climatic variability, which makes it possible to evaluate the effect of spatial parameterization on the simulated evapotranspiration patterns. We develop a European-scale remote-sensing-based actual evapotranspiration dataset at a 1 km grid scale, driven primarily by land surface temperature observations from MODIS using the TSEB approach. Using the observed AET maps, we analyze the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mHM model. The model can be calibrated one basin at a time or all basins together, using its unique structure and multi-parameter regionalization approach. Results will indicate any tradeoffs between spatial pattern and discharge simulation during model calibration and through validation against independent internal discharge locations. Moreover, the added value for internal water balances will be analyzed.
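
    In the spirit of the spatial-pattern metrics used in such studies, a score can combine pattern correlation, a coefficient-of-variation ratio, and the overlap of z-scored histograms (the SPAEF metric later published by this group has this flavor). The exact formulation below is a sketch under those assumptions, not the study's metric:

```python
import numpy as np

def spatial_pattern_score(obs, sim, bins=20):
    """Multi-component spatial-pattern agreement between two 2-D fields:
    A = Pearson correlation of the cell values,
    B = ratio of coefficients of variation (spatial variability match),
    C = overlap of the two z-scored histograms (distribution match).
    Perfect agreement gives 1; worse agreement gives smaller values."""
    obs, sim = np.asarray(obs, float).ravel(), np.asarray(sim, float).ravel()
    A = np.corrcoef(obs, sim)[0, 1]
    B = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
    z = lambda a: (a - a.mean()) / a.std()
    ho, edges = np.histogram(z(obs), bins=bins)
    hs, _ = np.histogram(z(sim), bins=edges)
    C = np.minimum(ho, hs).sum() / ho.sum()
    return 1.0 - np.sqrt((A - 1.0) ** 2 + (B - 1.0) ** 2 + (C - 1.0) ** 2)
```

    A calibration that balances such a pattern score against a discharge metric makes the tradeoff mentioned above explicit.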

  13. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
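
    The Pareto-optimality notion used here is easy to make concrete: with one error value per calibration target (lower is better), an input set is on the frontier exactly when no other set fits every target at least as well and at least one target strictly better. A minimal sketch with hypothetical error values:

```python
import numpy as np

def pareto_frontier(errors):
    """Given an (n_sets, n_targets) array of calibration-target errors
    (lower is better), return a boolean mask of non-dominated input sets."""
    errors = np.asarray(errors, float)
    n = len(errors)
    on_front = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # j dominates i: no worse on all targets, strictly better on one.
            if i != j and np.all(errors[j] <= errors[i]) \
                      and np.any(errors[j] < errors[i]):
                on_front[i] = False
                break
    return on_front
```

    Unlike a weighted-sum goodness-of-fit score, this requires no choice of weights: every set that is best under *some* weighting survives on the frontier.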

  14. Uncertainty analyses of the calibrated parameter values of a water quality model

    Science.gov (United States)

    Rode, M.; Suhr, U.; Lindenschmidt, K.-E.

    2003-04-01

    For river basin management, water quality models are increasingly used for the analysis and evaluation of different management measures. However, substantial uncertainties exist in parameter values depending on the available calibration data. In this paper an uncertainty analysis for a water quality model is presented, which considers the impact of the available model calibration data and the variance of input variables. The investigation was based on four extensive flow-time-related longitudinal surveys of the River Elbe in the years 1996 to 1999, with varying discharges and seasonal conditions. For the model calculations the deterministic model QSIM of the BfG (Germany) was used. QSIM is a one-dimensional water quality model that uses standard algorithms for hydrodynamics and phytoplankton dynamics in running waters, e.g. Michaelis-Menten/Monod kinetics, which appear in a wide range of models. The multi-objective calibration of the model was carried out with the nonlinear parameter estimator PEST. The results show that for individual flow-time-related measuring surveys, very good agreement between model calculations and measured values can be obtained. If these parameters are applied to deviating boundary conditions, substantial errors in the model calculations can occur. These uncertainties can be decreased with an enlarged calibration database: more reliable model parameters can be identified, which give reasonable results for broader boundary conditions. Extending the application of the parameter set to a wider range of water quality conditions leads to a slight reduction of model precision for any specific water quality situation. Moreover, the investigations show that highly variable water quality variables like algal biomass always permit a lower forecast accuracy than variables with smaller coefficients of variation, such as nitrate.
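
    The Michaelis-Menten/Monod kinetics mentioned above, whose half-saturation and maximum-rate parameters are typical targets of such calibrations, take this form (an illustrative sketch of the rate law, not QSIM code):

```python
def monod_rate(mu_max, S, Ks):
    """Specific rate as a saturating function of substrate concentration S:
    roughly linear for S << Ks, approaching mu_max for S >> Ks,
    and exactly mu_max / 2 at the half-saturation constant S == Ks."""
    return mu_max * S / (Ks + S)
```

    Because the rate is nearly flat for S >> Ks, observations collected only at high substrate levels constrain Ks poorly, which is one concrete way a limited calibration database inflates parameter uncertainty.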

  15. Calibration, Projection, and Final Image Products of MESSENGER's Mercury Dual Imaging System

    Science.gov (United States)

    Denevi, Brett W.; Chabot, Nancy L.; Murchie, Scott L.; Becker, Kris J.; Blewett, David T.; Domingue, Deborah L.; Ernst, Carolyn M.; Hash, Christopher D.; Hawkins, S. Edward; Keller, Mary R.; Laslo, Nori R.; Nair, Hari; Robinson, Mark S.; Seelos, Frank P.; Stephens, Grant K.; Turner, F. Scott; Solomon, Sean C.

    2018-02-01

    We present an overview of the operations, calibration, geodetic control, photometric standardization, and processing of images from the Mercury Dual Imaging System (MDIS) acquired during the orbital phase of the MESSENGER spacecraft's mission at Mercury (18 March 2011-30 April 2015). We also provide a summary of all of the MDIS products that are available in NASA's Planetary Data System (PDS). Updates to the radiometric calibration included slight modification of the frame-transfer smear correction, updates to the flat fields of some wide-angle camera (WAC) filters, a new model for the temperature dependence of narrow-angle camera (NAC) and WAC sensitivity, and an empirical correction for temporal changes in WAC responsivity. Further, efforts to characterize scattered light in the WAC system are described, along with a mosaic-dependent correction for scattered light that was derived for two regional mosaics. Updates to the geometric calibration focused on the focal lengths and distortions of the NAC and all WAC filters, NAC-WAC alignment, and calibration of the MDIS pivot angle and base. Additionally, two control networks were derived so that the majority of MDIS images can be co-registered with sub-pixel accuracy; the larger of the two control networks was also used to create a global digital elevation model. Finally, we describe the image processing and photometric standardization parameters used in the creation of the MDIS advanced products in the PDS, which include seven large-scale mosaics, numerous targeted local mosaics, and a set of digital elevation models ranging in scale from local to global.

  16. A Fundamental Parameter-Based Calibration Model for an Intrinsic Germanium X-Ray Fluorescence Spectrometer

    DEFF Research Database (Denmark)

    Christensen, Leif Højslet; Pind, Niels

    1982-01-01

    A matrix-independent fundamental parameter-based calibration model for an energy-dispersive X-ray fluorescence spectrometer has been developed. This model, which is part of a fundamental parameter approach quantification method, accounts for both the excitation and detection probability. For each...... secondary target a number of relative calibration constants are calculated on the basis of knowledge of the irradiation geometry, the detector specifications, and tabulated fundamental physical parameters. The absolute calibration of the spectrometer is performed by measuring one pure element standard per...

  17. Sensitivity analysis and calibration of a soil carbon model (SoilGen2) in two contrasting loess forest soils

    Directory of Open Access Journals (Sweden)

    Y. Y. Yu

    2013-01-01

    Full Text Available Accurately estimating past terrestrial carbon pools is key to understanding the global carbon cycle and its relationship with the climate system. SoilGen2 is a useful tool to obtain aspects of soil properties (including carbon content) by simulating soil formation processes; it thus offers an opportunity for both past soil carbon pool reconstruction and future carbon pool prediction. In order to apply it under various environmental conditions, parameters related to the carbon cycle in SoilGen2 are calibrated based on six soil pedons from two typical loess deposition regions (Belgium and China). Sensitivity analysis using the Morris method shows that the decomposition rate of humus (kHUM), the fraction of incoming plant material as leaf litter (frecto), and the decomposition rate of resistant plant material (kRPM) are the three most sensitive parameters, causing the greatest uncertainty in the simulated change of soil organic carbon in both regions. Following the principle of minimizing the difference between simulated and measured organic carbon by comparing quality indices, suitable values of kHUM, frecto and kRPM are deduced step by step and validated against independent soil pedons. The differences in calibrated parameters between Belgium and China may be attributed to their different vegetation types and climate conditions. The calibrated model allows more accurate simulation of carbon change in the whole pedon and has potential for future modeling of the carbon cycle over long timescales.
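
    The Morris screening used above is a one-at-a-time design: perturb each parameter in turn from random base points and summarize the resulting "elementary effects". The sketch below uses a simplified radial scheme rather than Morris's full trajectory design, and a generic test function in place of SoilGen2:

```python
import numpy as np

def morris_elementary_effects(f, lo, hi, r=20, delta=0.1, seed=0):
    """Morris-style one-at-a-time screening (simplified radial design):
    sample r base points, perturb each parameter by a fixed fraction delta
    of its range, and summarize the elementary effects per parameter by
    their mean absolute value (mu*, overall importance) and their standard
    deviation (sigma, nonlinearity/interaction)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    k = len(lo)
    ee = np.empty((r, k))
    for t in range(r):
        x0 = lo + rng.uniform(0.0, 1.0 - delta, size=k) * (hi - lo)
        y0 = f(x0)
        for i in range(k):
            x1 = x0.copy()
            x1[i] += delta * (hi[i] - lo[i])   # step one parameter only
            ee[t, i] = (f(x1) - y0) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)
```

    Ranking parameters by mu* is what singles out the likes of kHUM, frecto and kRPM as the ones worth calibrating carefully.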

  18. The Wally plot approach to assess the calibration of clinical prediction models.

    Science.gov (United States)

    Blanche, Paul; Gerds, Thomas A; Ekstrøm, Claus T

    2017-12-06

    A prediction model is calibrated if, roughly, for any percentage x we can expect that x subjects out of 100 experience the event among all subjects that have a predicted risk of x%. Typically, the calibration assumption is assessed graphically, but in practice it is often challenging to judge whether a "disappointing" calibration plot is the consequence of a departure from the calibration assumption or just "bad luck" due to sampling variability. To address this issue, we propose a graphical approach which enables the visualization of how much a calibration plot agrees with the calibration assumption. The approach is mainly based on the idea of generating new plots which mimic the available data under the calibration assumption. The method handles the common non-trivial situations in which the data contain censored observations and occurrences of competing events, building on ideas from constrained non-parametric maximum likelihood estimation. Two examples from large cohort data illustrate our proposal. The 'wally' R package is provided to make the methodology easy to use.
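
    The core idea, simulating new calibration plots that mimic the data under the calibration assumption, can be sketched for the simplest uncensored binary case. This toy version omits the censoring and competing-risk machinery that the 'wally' package handles:

```python
import numpy as np

def calibration_curve(pred, outcome, bins):
    """Observed event proportion per predicted-risk bin."""
    idx = np.digitize(pred, bins) - 1
    return np.array([outcome[idx == b].mean() for b in range(len(bins) - 1)])

def wally_style_sims(pred, n_sims=8, bins=None, seed=1):
    """Under the calibration assumption, outcomes are Bernoulli(pred).
    Simulate fresh outcome vectors and compute their calibration curves;
    comparing the observed curve with these 'null' curves shows how much
    wiggle is plain sampling variability."""
    rng = np.random.default_rng(seed)
    pred = np.asarray(pred, float)
    if bins is None:
        bins = np.quantile(pred, np.linspace(0.0, 1.0, 5))
        bins[-1] += 1e-9   # make the top bin half-open on the right
    return [calibration_curve(pred, rng.binomial(1, pred), bins)
            for _ in range(n_sims)]
```

    In the published approach, the observed plot is hidden among such simulated plots and the reader is asked to spot it, a visual test of the calibration assumption.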

  19. Applying downscaled Global Climate Model data to a groundwater model of the Suwannee River Basin, Florida, USA

    Science.gov (United States)

    Swain, Eric D.; Davis, J. Hal

    2016-01-01

    The application of Global Climate Model (GCM) output to a hydrologic model allows for comparisons between simulated recent and future conditions and provides insight into the dynamics of hydrology as it may be affected by climate change. A previously developed numerical model of the Suwannee River Basin, Florida, USA, was modified and calibrated to represent transient conditions. A simulation of recent conditions was developed for the 372-month period 1970-2000 and was compared with a simulation of future conditions for a similar-length period 2039-2069, which uses downscaled GCM data. The MODFLOW groundwater-simulation code was used in both of these simulations, and two different MODFLOW boundary condition “packages” (River and Streamflow-Routing Packages) were used to represent interactions between surface-water and groundwater features.

  20. Absolute calibration of SARAL/AltiKa in Kavaratti during its initial calibration-validation phase

    Digital Repository Service at National Institute of Oceanography (India)

    Babu, K.N.; Shukla, A.K.; Suchandra, A.B.; ArunKumar, S.V.V.; Bonnefond, P.; Testut, L.; Mehra, P.; Laurain, O.

    globally distributed region will offer assessment of the altimetry system, and allow us to check in specific conditions leading to different estimation of absolute bias of the instrument (Shum et al. 2003). In collaboration with the National Institute of Oceanography (NIO), Goa, the Space Applications Centre–Indian Space Research Organisation (SAC-ISRO) established a calibration-verification site in Kavaratti. This site offers a number of advantages as a calibration site for altimeters. Having very small land...

  1. ANN-based calibration model of FTIR used in transformer online monitoring

    Science.gov (United States)

    Li, Honglei; Liu, Xian-yong; Zhou, Fangjie; Tan, Kexiong

    2005-02-01

    Recently, chromatography columns and gas sensors have been used in online monitoring devices for dissolved gases in transformer oil. But some disadvantages still exist in these devices: consumption of carrier gas, requirement of calibration, etc. Since FTIR has high accuracy, consumes no carrier gas, and requires no calibration, the researchers studied the application of FTIR in such a monitoring device. Experiments using the "flow gas method" were designed, and spectra of mixtures composed of different gases were collected with a BOMEM MB104 FTIR spectrometer. A key question in the application of FTIR is that the absorbance spectra of the 3 key fault gases, C2H4, CH4 and C2H6, overlap seriously at 2700~3400 cm-1. Because the absorbance law is no longer appropriate, a nonlinear calibration model based on a BP ANN was set up for the quantitative analysis. The peak-height absorbances of C2H4, CH4 and C2H6 were adopted as quantitative features, and all the data were normalized before training the ANN. Computed results show that the calibration model can effectively eliminate the cross-disturbance to the measurement.
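
    A backpropagation ANN calibration of this kind maps overlapped absorbance features back to concentrations. The sketch below trains a one-hidden-layer network on synthetic data; the mixing matrix, nonlinearity, and network sizes are stand-ins chosen for illustration, not the paper's spectra or architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for overlapped C2H4/CH4/C2H6 peak-height features:
# 3 concentrations -> 3 absorbance features through a mixing matrix,
# plus a mild quadratic term mimicking deviation from the absorbance law.
C = rng.uniform(0.0, 1.0, size=(200, 3))          # "true" concentrations
M = np.array([[1.0, 0.4, 0.2],
              [0.3, 1.0, 0.5],
              [0.2, 0.6, 1.0]])
A = C @ M
A = A + 0.15 * A ** 2                             # nonlinear overlap effect

# One-hidden-layer network trained by plain backpropagation to invert
# absorbance -> concentration (the calibration direction).
W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 3)); b2 = np.zeros(3)
lr = 0.1

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, Y0 = forward(A)
loss0 = np.mean((Y0 - C) ** 2)                    # loss before training
for epoch in range(500):
    H, Y = forward(A)
    G = 2.0 * (Y - C) / len(A)                    # dLoss/dY
    gW2, gb2 = H.T @ G, G.sum(0)
    GH = (G @ W2.T) * (1.0 - H ** 2)              # back through tanh
    gW1, gb1 = A.T @ GH, GH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
_, Y1 = forward(A)
loss1 = np.mean((Y1 - C) ** 2)                    # loss after training
```

    The hidden tanh layer is what lets the calibration absorb the curvature that a straight absorbance-law fit cannot.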

  2. How does observation uncertainty influence which stream water samples are most informative for model calibration?

    Science.gov (United States)

    Wang, Ling; van Meerveld, Ilja; Seibert, Jan

    2016-04-01

    Streamflow isotope samples taken during rainfall-runoff events are very useful for multi-criteria model calibration because they can help decrease parameter uncertainty and improve internal model consistency. However, the number of samples that can be collected and analysed is often restricted by practical and financial constraints. It is, therefore, important to choose an appropriate sampling strategy and to obtain samples that have the highest information content for model calibration. We used the Birkenes hydrochemical model and synthetic rainfall, streamflow and isotope data to explore which samples are most informative for model calibration. Starting with error-free observations, we investigated how many samples are needed to obtain a certain model fit. Based on different parameter sets, representing different catchments, and different rainfall events, we also determined which sampling times provide the most informative data for model calibration. Our results show that simulation performance for models calibrated with the isotopic data from two intelligently selected samples was comparable to simulations based on isotopic data for all 100 time steps. The models calibrated with the intelligently selected samples also performed better than the model calibrations with two benchmark sampling strategies (random selection and selection based on hydrologic information). Surprisingly, samples on the rising limb and at the peak were less informative than expected and, generally, samples taken at the end of the event were most informative. The timing of the most informative samples depends on the proportion of different flow components (baseflow, slow response flow, fast response flow and overflow). For events dominated by baseflow and slow response flow, samples taken at the end of the event after the fast response flow has ended were most informative; when the fast response flow was dominant, samples taken near the peak were most informative. However when overflow

  3. Calibration of the γ-Equation Transition Model for High Reynolds Flows at Low Mach

    Science.gov (United States)

    Colonia, S.; Leble, V.; Steijl, R.; Barakos, G.

    2016-09-01

    The numerical simulation of flows over large-scale wind turbine blades without considering the transition from laminar to fully turbulent flow may result in incorrect estimates of the blade loads and performance. Thanks to its relative simplicity and promising results, the local-correlation-based transition modelling concept represents a valid way to include transitional effects in practical CFD simulations. However, the model involves coefficients that need tuning. In this paper, the γ-equation transition model is assessed and calibrated for a wide range of Reynolds numbers at low Mach, as needed for wind turbine applications. An aerofoil is used to evaluate the original model and calibrate it, while a large-scale wind turbine blade is employed to show that the calibrated model can lead to reliable solutions for complex three-dimensional flows. The calibrated model shows promising results for both two-dimensional and three-dimensional flows, even if cross-flow instabilities are neglected.

  4. BETR global - A geographically-explicit global-scale multimedia contaminant fate model

    International Nuclear Information System (INIS)

    MacLeod, Matthew; Waldow, Harald von; Tay, Pascal; Armitage, James M.; Woehrnschimmel, Henry; Riley, William J.; McKone, Thomas E.; Hungerbuhler, Konrad

    2011-01-01

    We present two new software implementations of the BETR Global multimedia contaminant fate model. The model uses steady-state or non-steady-state mass-balance calculations to describe the fate and transport of persistent organic pollutants using a desktop computer. The global environment is described using a database of long-term average monthly conditions on a 15° x 15° grid. We demonstrate BETR Global by modeling the global sources, transport, and removal of decamethylcyclopentasiloxane (D5). - Two new software implementations of the Berkeley-Trent Global Contaminant Fate Model are available. The new model software is illustrated using a case study of the global fate of decamethylcyclopentasiloxane (D5).
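
    The steady-state mass-balance calculation such models perform reduces to solving a linear system: losses plus outgoing transfers balance emissions plus incoming transfers in every compartment. The three compartments and rate coefficients below are made-up illustrations, not BETR's 15° x 15° segmentation or parameters:

```python
import numpy as np

# Compartments: 0 = air, 1 = water, 2 = soil (illustrative rates, 1/day).
k_loss = np.array([0.5, 0.1, 0.05])          # degradation + advection out
transfer = np.array([[0.0,  0.02, 0.01],     # transfer[i, j]: coeff j -> i
                     [0.03, 0.0,  0.005],
                     [0.01, 0.0,  0.0]])
# Steady state of dm/dt = s + transfer @ m - (losses + outgoing) * m = 0,
# i.e. K @ m = s with:
K = np.diag(k_loss + transfer.sum(axis=0)) - transfer
s = np.array([10.0, 0.0, 0.0])               # emission to air only (kg/day)
m = np.linalg.solve(K, s)                    # steady-state masses (kg)
```

    Because inter-compartment transfers only move mass around, total degradation/advection loss at steady state must equal total emission, a useful consistency check on any such model.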

  5. Generation and performance assessment of the global TanDEM-X digital elevation model

    Science.gov (United States)

    Rizzoli, Paola; Martone, Michele; Gonzalez, Carolina; Wecklich, Christopher; Borla Tridon, Daniela; Bräutigam, Benjamin; Bachmann, Markus; Schulze, Daniel; Fritz, Thomas; Huber, Martin; Wessel, Birgit; Krieger, Gerhard; Zink, Manfred; Moreira, Alberto

    2017-10-01

    The primary objective of the TanDEM-X mission is the generation of a global, consistent, and high-resolution digital elevation model (DEM) with unprecedented global accuracy. This goal is achieved by exploiting the interferometric capabilities of the twin SAR satellites TerraSAR-X and TanDEM-X, which fly in a close orbit formation, acting as an X-band single-pass interferometer. Between December 2010 and early 2015 all land surfaces were acquired at least twice, and difficult terrain up to seven or eight times. The acquisition strategy, data processing, and DEM calibration and mosaicking were systematically monitored and optimized throughout the entire mission, in order to fulfill the specification. The processing of all data was completed in September 2016, and this paper reports on the final performance of the TanDEM-X global DEM and presents the acquisition and processing strategy that yielded the final DEM quality. The results confirm the outstanding global accuracy of the delivered product, which can now be utilized for both scientific and commercial applications.

  6. An alternative method for calibration of narrow band radiometer using a radiative transfer model

    Energy Technology Data Exchange (ETDEWEB)

    Salvador, J; Wolfram, E; D' Elia, R [Centro de Investigaciones en Laseres y Aplicaciones, CEILAP (CITEFA-CONICET), Juan B. de La Salle 4397 (B1603ALO), Villa Martelli, Buenos Aires (Argentina); Zamorano, F; Casiccia, C [Laboratorio de Ozono y Radiacion UV, Universidad de Magallanes, Punta Arenas (Chile) (Chile); Rosales, A [Universidad Nacional de la Patagonia San Juan Bosco, UNPSJB, Facultad de Ingenieria, Trelew (Argentina) (Argentina); Quel, E, E-mail: jsalvador@citefa.gov.ar [Universidad Nacional de la Patagonia Austral, Unidad Academica Rio Gallegos Avda. Lisandro de la Torre 1070 ciudad de Rio Gallegos-Sta Cruz (Argentina) (Argentina)

    2011-01-01

    The continual monitoring of solar UV radiation is one of the major objectives of many atmospheric research groups. The purpose of this task is to determine the status and degree of progress over time of the anthropogenic perturbation of the composition of the atmosphere. Such changes affect the intensity of the UV solar radiation transmitted through the atmosphere, which then interacts with living organisms and all materials, with serious consequences for human health and the durability of materials exposed to this radiation. One of the many challenges that must be faced to perform these measurements correctly is the maintenance of periodic calibrations of these instruments; otherwise, damage caused by the UV radiation received will render any one calibration useless after the passage of some time. This requirement makes the use of these instruments unattractive, and the lack of frequent calibration may lead to the loss of large amounts of acquired data. Motivated by this need to maintain calibration or, at least, to know the degree of stability of instrumental behavior, we have developed a calibration methodology that exploits the ability of radiative transfer models to model solar radiation with 5% accuracy or better relative to actual conditions. Voltage values in each radiometer channel involved in the calibration process are carefully selected from clear-sky data. Tables are thus constructed with voltage values corresponding to various atmospheric conditions for a given solar zenith angle. We then run a radiative transfer model under the same conditions as the measurements to assemble sets of modeled values for each zenith angle. The ratio of the two sets (measured and modeled) allows us to calculate the calibration coefficient as a function of zenith angle, as well as the cosine response of the radiometer.
The calibration results obtained by this method were compared with those obtained with a Brewer MKIII SN 80 located in the
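
    The ratio step of such a procedure, a per-zenith-bin calibration coefficient from paired measured voltages and modeled irradiances, can be sketched as follows. The function and the toy numbers in the test are illustrative assumptions, not the authors' processing chain:

```python
import numpy as np

def calibration_coefficients(zenith_deg, voltage, modeled_irradiance, bin_edges):
    """Per-zenith-bin calibration coefficient: mean ratio of modeled
    clear-sky irradiance to measured channel voltage within each
    solar-zenith-angle bin."""
    zenith_deg = np.asarray(zenith_deg, float)
    idx = np.digitize(zenith_deg, bin_edges) - 1
    return np.array([np.mean(modeled_irradiance[idx == b] / voltage[idx == b])
                     for b in range(len(bin_edges) - 1)])
```

    Tracking how these coefficients drift between campaigns is what reveals the degradation of the detector without shipping the instrument to a calibration facility.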

  7. Imaging 2015 Mw 7.8 Gorkha Earthquake and Its Aftershock Sequence Combining Multiple Calibrated Global Seismic Arrays

    Science.gov (United States)

    LI, B.; Ghosh, A.

    2016-12-01

    The 2015 Mw 7.8 Gorkha earthquake provides a good opportunity to study the tectonics and earthquake hazards in the Himalayas, one of the most seismically active plate boundaries. Details of the seismicity patterns and associated structures in the Himalayas are poorly understood, mainly due to limited instrumentation. Here, we apply a back-projection method to study the mainshock rupture and the following aftershock sequence using four large-aperture global seismic arrays. All the arrays show eastward rupture propagation of about 130 km and reveal a similar evolution of seismic energy radiation, with a strong high-frequency energy burst about 50 km north of Kathmandu. Each single array, however, is typically limited by a large azimuthal gap, low resolution, and artifacts due to unmodeled velocity structures. Therefore, we use a self-consistent empirical calibration method to combine the four arrays to image the Gorkha event. This greatly improves the resolution, better tracks the rupture, and reveals details that cannot be resolved by any individual array. In addition, we use the same arrays at teleseismic distances and apply a back-projection technique to detect and locate the aftershocks immediately following the Gorkha earthquake. We detect about 2.5 times the number of aftershocks recorded by the Advanced National Seismic System comprehensive earthquake catalog during the 19 days following the mainshock. The aftershocks detected by the arrays show an east-west trend in general, with the majority located in the eastern part of the rupture patch and surrounding the rupture zone of the largest, Mw 7.3, aftershock. The overall spatiotemporal aftershock pattern agrees well with the global catalog, while our catalog shows more detail. The improved aftershock catalog enables us to better study aftershock dynamics and stress evolution in this region.
Moreover, rapid and better imaging of aftershock distribution may aid rapid response

  8. Global evaluation of runoff from 10 state-of-the-art hydrological models

    Science.gov (United States)

    Beck, Hylke E.; van Dijk, Albert I. J. M.; de Roo, Ad; Dutra, Emanuel; Fink, Gabriel; Orth, Rene; Schellekens, Jaap

    2017-06-01

    Observed streamflow data from 966 medium-sized catchments (1000-5000 km2) around the globe were used to comprehensively evaluate the daily runoff estimates (1979-2012) of six global hydrological models (GHMs) and four land surface models (LSMs) produced as part of tier-1 of the eartH2Observe project. The models were all driven by the WATCH Forcing Data ERA-Interim (WFDEI) meteorological dataset, but used different datasets for non-meteorological inputs and were run at various spatial and temporal resolutions, although all data were re-sampled to a common 0.5° spatial and daily temporal resolution. For the evaluation, we used a broad range of performance metrics related to important aspects of the hydrograph. We found pronounced inter-model performance differences, underscoring the importance of hydrological model uncertainty in addition to climate input uncertainty, for example in studies assessing the hydrological impacts of climate change. The uncalibrated GHMs were found to perform, on average, better than the uncalibrated LSMs in snow-dominated regions, while the ensemble mean was found to perform only slightly worse than the best (calibrated) model. The inclusion of less-accurate models did not appreciably degrade the ensemble performance. Overall, we argue that more effort should be devoted to calibrating and regionalizing the parameters of macro-scale models. We further found that, despite adjustments using gauge observations, the WFDEI precipitation data still contain substantial biases that propagate into the simulated runoff. The early timing of the spring snowmelt peak exhibited by most models is probably primarily due to widespread precipitation underestimation at high northern latitudes.

  9. A case study on robust optimal experimental design for model calibration of ω-Transaminase

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Van Hauwermeiren, Daan; Ringborg, Rolf Hoffmeyer

    Proper calibration of models describing enzyme kinetics can be quite challenging. This is especially the case for more complex models like transaminase models (Shin and Kim, 1998). The latter fitted model parameters, but the confidence on the parameter estimation was not derived. Hence...... the experimental space. However, it is expected that more informative experiments can be designed to increase the confidence of the parameter estimates. Therefore, we apply Optimal Experimental Design (OED) to the calibrated model of Shin and Kim (1998). The total number of samples was retained to allow fair......” parameter values are not known before finishing the model calibration. However, it is important that the chosen parameter values are close to the real parameter values, otherwise the OED can possibly yield non-informative experiments. To counter this problem, one can use robust OED. The idea of robust OED...

  10. Balance between calibration objectives in a conceptual hydrological model

    NARCIS (Netherlands)

    Booij, Martijn J.; Krol, Martinus S.

    2010-01-01

    Three different measures to determine the optimum balance between calibration objectives are compared: the combined rank method, parameter identifiability and model validation. Four objectives (water balance, hydrograph shape, high flows, low flows) are included in each measure. The contributions of

  11. The global carbon cycle

    International Nuclear Information System (INIS)

    Maier-Reimer, E.

    1991-01-01

    Basic concepts of the global carbon cycle on earth are described; by careful analyses of isotopic ratios, emission history and oceanic ventilation rates are derived, which provide crucial tests for constraining and calibrating models. Effects of deforestation, fertilizing, fossil fuel burning, soil erosion, etc. are quantified and compared, and the oceanic carbon process is evaluated. Oceanic and terrestrial biosphere modifications are discussed and a carbon cycle model is proposed

  12. Constraining the global carbon budget from global to regional scales - The measurement challenge

    International Nuclear Information System (INIS)

    Francey, R.J.; Rayner, P.J.; Allison, C.E.

    2002-01-01

    The global carbon cycle can be modelled by a Bayesian synthesis inversion technique, in which measured atmospheric CO2 concentrations and isotopic compositions are analysed using an atmospheric transport model and estimates of regional sources and sinks of atmospheric carbon. The uncertainty associated with carbon flux estimates, even on a regional scale, can be reduced considerably using the inversion technique. In this approach, besides the necessary control of the precision of atmospheric transport models and of the constraints on surface fluxes, an important component is the calibration of atmospheric CO2 concentration and isotope measurements. The recently improved situation with respect to data comparability is discussed using the results of interlaboratory comparison exercises, and larger-scale calibration programs are proposed for the future to further improve the comparability of analytical data. (author)

  13. The PCR-GLOBWB global hydrological reanalysis product

    Science.gov (United States)

    Wanders, Niko; Bierkens, Marc; Sutanudjaja, Edwin; van Beek, Rens

    2014-05-01

    Accurate and long time series of hydrological data are important for understanding land surface water and energy budgets in many parts of the world, as well as for improving real-time hydrological monitoring and climate change anticipation. The ultimate goal of the present work is to produce a multi-decadal "land surface hydrological reanalysis" dataset with retrospective and updated hydrological states and fluxes that are constrained to available in-situ river discharge measurements. Here we use PCR-GLOBWB (van Beek et al., 2011), which is a large-scale hydrological model intended for global to regional studies. PCR-GLOBWB provides a grid-based representation of terrestrial hydrology with a typical spatial resolution of approximately 50×50 km (currently 0.5° globally) on a daily basis. For each grid cell, PCR-GLOBWB simulates moisture storage in two vertically stacked soil layers as well as the water exchange between the soil and the atmosphere and the underlying groundwater reservoir. Exchange to the atmosphere comprises precipitation, evaporation and transpiration, as well as snow accumulation and melt, which are all simulated by considering vegetation phenology and sub-grid variations of elevation, land cover and soil saturation distribution. The model includes improved schemes for runoff-infiltration partitioning, interflow, groundwater recharge and baseflow, as well as river routing of discharge. It also dynamically simulates water storage in reservoirs, water demand and the withdrawal, allocation and consumptive use of surface water and groundwater resources. By embedding the PCR-GLOBWB model in an Ensemble Kalman Filter framework, we calibrate the model parameters based on the discharge observations from the Global Runoff Data Centre. The parameters calibrated are related to snow accumulation and melt, runoff-infiltration partitioning, groundwater recharge, channel discharge and baseflow processes, as well as pre-factors to correct forcing precipitation
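The EnKF-based parameter calibration mentioned in this record can be illustrated by a single analysis step, in which an ensemble of parameter values is nudged toward a gauged discharge observation. This is a generic textbook sketch with invented numbers, not the PCR-GLOBWB implementation:

```python
import numpy as np

rng = np.random.default_rng(7)

def enkf_parameter_update(theta, sim, obs, obs_var):
    """
    One Ensemble Kalman Filter analysis step with a scalar parameter
    treated as state. theta: ensemble of parameter values; sim: each
    member's predicted discharge; obs: the gauged discharge value;
    obs_var: observation error variance.
    """
    cov_ts = np.cov(theta, sim)[0, 1]                # parameter-prediction covariance
    gain = cov_ts / (np.var(sim, ddof=1) + obs_var)  # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), theta.size)
    return theta + gain * (perturbed - sim)
```

With a toy linear "discharge model" sim = 3 * theta and an observation of 6, the ensemble mean moves toward theta = 2 and the ensemble spread contracts, which is the mechanism the record uses to constrain model parameters with Global Runoff Data Centre discharge.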

  14. When to Make Mountains out of Molehills: The Pros and Cons of Simple and Complex Model Calibration Procedures

    Science.gov (United States)

    Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.

    2017-12-01

    Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model's results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit `equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin-hypercube-sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin hypercube experiment approach in single-metric performance. However, it is also shown that there are many merits of the more comprehensive assessment, which allows for probabilistic model results, multi
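The Latin hypercube screening described in this record can be sketched as follows; the GR4J parameter bounds and the NSE check below are illustrative assumptions, not the presenters' setup:

```python
import numpy as np

rng = np.random.default_rng(42)

def latin_hypercube(n_samples, bounds):
    """One sample per stratum in every dimension (Latin hypercube)."""
    n_dims = len(bounds)
    strata = np.tile(np.arange(n_samples), (n_dims, 1))
    # shuffle stratum order independently per dimension, then jitter within strata
    u = (rng.permuted(strata, axis=1).T + rng.random((n_samples, n_dims))) / n_samples
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    return lo + u * (hi - lo)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical prior ranges for the four GR4J parameters x1..x4
bounds = [(1.0, 1500.0), (-10.0, 5.0), (1.0, 500.0), (0.5, 4.0)]
params = latin_hypercube(500000, bounds)   # one row per candidate parameter set
```

Each row would then be run through the model and scored with `nse` (or a low-flow-specific metric), keeping the behavioural sets to express parameter uncertainty probabilistically.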

  15. Model independent approach to the single photoelectron calibration of photomultiplier tubes

    Energy Technology Data Exchange (ETDEWEB)

    Saldanha, R.; Grandi, L.; Guardincerri, Y.; Wester, T.

    2017-08-01

    The accurate calibration of photomultiplier tubes is critical in a wide variety of applications in which it is necessary to know the absolute number of detected photons or precisely determine the resolution of the signal. Conventional calibration methods rely on fitting the photomultiplier response to a low intensity light source with analytical approximations to the single photoelectron distribution, often leading to biased estimates due to the inability to accurately model the full distribution, especially at low charge values. In this paper we present a simple statistical method to extract the relevant single photoelectron calibration parameters without making any assumptions about the underlying single photoelectron distribution. We illustrate the use of this method through the calibration of a Hamamatsu R11410 photomultiplier tube and study the accuracy and precision of the method using Monte Carlo simulations. The method is found to have significantly reduced bias compared to conventional methods and works under a wide range of light intensities, making it suitable for simultaneously calibrating large arrays of photomultiplier tubes.
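As a rough illustration of the moment bookkeeping behind such model-independent calibration, the sketch below assumes Poisson photon statistics and an independently estimated mean occupancy; it is not the paper's exact estimator:

```python
import numpy as np

def spe_moments(charge_on, charge_off, occupancy):
    """
    Moment-based single-photoelectron (SPE) estimate. Assumes the total
    charge is pedestal + a Poisson number of i.i.d. SPE charges, with the
    mean occupancy (lambda) known from an independent estimate, so that
        E[Q_on] - E[Q_off]     = lambda * mu_spe
        Var[Q_on] - Var[Q_off] = lambda * (sigma_spe^2 + mu_spe^2).
    Returns the estimated SPE mean and standard deviation.
    """
    mu = (np.mean(charge_on) - np.mean(charge_off)) / occupancy
    var = (np.var(charge_on) - np.var(charge_off)) / occupancy - mu ** 2
    return mu, np.sqrt(max(var, 0.0))
```

Because only means and variances of the illuminated and background spectra enter, no functional form for the SPE charge distribution is assumed, which is the spirit of the method in this record.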

  16. Calibration of HEC-Ras hydrodynamic model using gauged discharge data and flood inundation maps

    Science.gov (United States)

    Tong, Rui; Komma, Jürgen

    2017-04-01

    Flood estimation is essential for disaster alleviation. Hydrodynamic models are used to predict the occurrence and extent of floods at different scales. In practice, the calibration of hydrodynamic models aims to find the parameters that best represent the natural flow resistance. Recent years have seen such calibration become faster and more accurate, following advances in earth observation products and computer-based optimization techniques. In this study, the Hydrologic Engineering River Analysis System (HEC-Ras) model was set up with a high-resolution digital elevation model from laser scanning for the river Inn in Tyrol, Austria. The 10 largest flood events from 19 hourly discharge gauges, together with flood inundation maps, were selected to calibrate the HEC-Ras model. Manning roughness values and lateral inflow factors were automatically optimized with the Shuffled complex with Principal component analysis (SP-UCI) algorithm developed from the Shuffled Complex Evolution (SCE-UA). Different objective functions (Nash-Sutcliffe model efficiency coefficient, the timing of the peak, the peak value, and root-mean-square deviation) were used singly or in combination. It was found that the lateral inflow factor was the most sensitive parameter. The SP-UCI algorithm could avoid local optima and achieve efficient and effective parameter estimates when calibrating the HEC-Ras model with flood extent images. As the results showed, calibration against gauged discharge data and flood inundation maps, with the Nash-Sutcliffe model efficiency coefficient as objective function, was very robust, yielding more reliable flood simulations that captured both the peak value and its timing.

  17. Using satellite data to improve the leaf phenology of a global terrestrial biosphere model

    Science.gov (United States)

    MacBean, N.; Maignan, F.; Peylin, P.; Bacour, C.; Bréon, F.-M.; Ciais, P.

    2015-12-01

    Correct representation of seasonal leaf dynamics is crucial for terrestrial biosphere models (TBMs), but many such models cannot accurately reproduce observations of leaf onset and senescence. Here we optimised the phenology-related parameters of the ORCHIDEE TBM using satellite-derived Normalized Difference Vegetation Index data (MODIS NDVI v5) that are linearly related to the model fAPAR. We found the misfit between the observations and the model decreased after optimisation for all boreal and temperate deciduous plant functional types, primarily due to an earlier onset of leaf senescence. The model bias was only partially reduced for tropical deciduous trees, and no improvement was seen for natural C4 grasses. Spatial validation demonstrated the generality of the posterior parameters for use in global simulations, with an increase in global median correlation from 0.56 to 0.67. The simulated global mean annual gross primary productivity (GPP) decreased by ~10 PgC yr-1 over the 1990-2010 period due to the substantially shortened growing season length (GSL), by up to 30 days in the Northern Hemisphere, thus reducing the positive bias and improving the seasonal dynamics of ORCHIDEE compared to independent data-based estimates. Finally, the optimisations led to changes in the strength and location of the trends in the simulated vegetation productivity as represented by the GSL and mean annual fraction of absorbed photosynthetically active radiation (fAPAR), suggesting care should be taken when using uncalibrated models in attribution studies. We suggest that the framework presented here can be applied to improve the phenology of all global TBMs.
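A toy version of this record's optimisation idea: fit the parameters of a double-logistic phenology curve to NDVI data that is linearly related to fAPAR. The curve shape, parameter names, and values are hypothetical stand-ins, not the ORCHIDEE formulation:

```python
import numpy as np
from scipy.optimize import least_squares

doy = np.arange(1, 366, dtype=float)   # day of year

def fapar(p, t=doy):
    """Toy double-logistic phenology: green-up at `onset`, senescence at `offset`."""
    onset, offset, rate = p
    return (1.0 / (1.0 + np.exp(-rate * (t - onset)))
            * 1.0 / (1.0 + np.exp(rate * (t - offset))))

# Synthetic "observed" NDVI, linearly related to a true fAPAR trajectory
true_p = (120.0, 280.0, 0.15)
ndvi_obs = 0.2 + 0.6 * fapar(true_p)

def residuals(p):
    # Estimate the linear NDVI-fAPAR relation by regression, then compare shapes
    sim = fapar(p)
    slope, intercept = np.polyfit(sim, ndvi_obs, 1)
    return (slope * sim + intercept) - ndvi_obs

fit = least_squares(residuals, x0=(100.0, 300.0, 0.1))
```

Because the linear scaling is re-estimated inside the residual, only the seasonal shape constrains the phenology parameters, mirroring the record's use of NDVI as a linear proxy for model fAPAR.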

  18. Global Delivery Models

    DEFF Research Database (Denmark)

    Manning, Stephan; Larsen, Marcus M.; Bharati, Pratyush

    2013-01-01

    This article examines antecedents and performance implications of global delivery models (GDMs) in global business services. GDMs require geographically distributed operations to exploit both proximity to clients and time-zone spread for efficient service delivery. We propose and empirically show...

  19. Modeling microelectrode biosensors: free-flow calibration can substantially underestimate tissue concentrations.

    Science.gov (United States)

    Newton, Adam J H; Wall, Mark J; Richardson, Magnus J E

    2017-03-01

    Microelectrode amperometric biosensors are widely used to measure concentrations of analytes in solution and tissue including acetylcholine, adenosine, glucose, and glutamate. A great deal of experimental and modeling effort has been directed at quantifying the response of the biosensors themselves; however, the influence that the macroscopic tissue environment has on biosensor response has not been subjected to the same level of scrutiny. Here we identify an important issue in the way microelectrode biosensors are calibrated that is likely to have led to underestimations of analyte tissue concentrations. Concentration in tissue is typically determined by comparing the biosensor signal to that measured in free-flow calibration conditions. In a free-flow environment the concentration of the analyte at the outer surface of the biosensor can be considered constant. However, in tissue the analyte reaches the biosensor surface by diffusion through the extracellular space. Because the enzymes in the biosensor break down the analyte, a density gradient is set up resulting in a significantly lower concentration of analyte near the biosensor surface. This effect is compounded by the diminished volume fraction (porosity) and reduction in the diffusion coefficient due to obstructions (tortuosity) in tissue. We demonstrate this effect through modeling and experimentally verify our predictions in diffusive environments. NEW & NOTEWORTHY Microelectrode biosensors are typically calibrated in a free-flow environment where the concentrations at the biosensor surface are constant. However, when in tissue, the analyte reaches the biosensor via diffusion and so analyte breakdown by the biosensor results in a concentration gradient and consequently a lower concentration around the biosensor. This effect means that naive free-flow calibration will underestimate tissue concentration. We develop mathematical models to better quantify the discrepancy between the calibration and tissue
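The depletion effect described in this record can be approximated analytically for a spherical sensor consuming analyte at its surface; all numerical values below are assumed for illustration, not taken from the paper:

```python
def surface_depletion(D, a, k):
    """
    Steady-state analyte concentration at the surface of a spherical sensor
    of radius a (m) that consumes analyte with an effective surface rate
    constant k (m/s), relative to the far-field concentration:
        C(a) / C_inf = 1 / (1 + k * a / D)
    (diffusive supply 4*pi*D*a*(C_inf - C(a)) balancing surface consumption).
    """
    return 1.0 / (1.0 + k * a / D)

# Assumed illustrative values
D_free = 7.6e-10                       # diffusion coefficient in free solution (m^2/s)
tortuosity = 1.6                       # typical brain-tissue tortuosity (lambda)
D_tissue = D_free / tortuosity ** 2    # hindered diffusion in tissue

a = 25e-6                              # sensor radius (m)
k = 1e-4                               # enzymatic consumption rate constant (m/s)

free_ratio = surface_depletion(D_free, a, k)      # depletion in a still solution
tissue_ratio = surface_depletion(D_tissue, a, k)  # stronger depletion in tissue
```

In a stirred free-flow calibration the surface concentration equals the bulk value, so under these assumptions a sensor calibrated in free flow and used in tissue reads low by roughly the `tissue_ratio` factor, which is the underestimation the authors quantify.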

  20. Calibration of discrete element model parameters: soybeans

    Science.gov (United States)

    Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal

    2018-05-01

    Discrete element method (DEM) simulations are broadly used to gain insight into the flow characteristics of granular materials in complex particulate systems. DEM input parameters are a critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine DEM input parameters for the Hertz-Mindlin model using soybeans as the granular material. To achieve this aim, a widely accepted calibration approach was used with a standard box-type apparatus. Further, qualitative and quantitative findings such as particle profile, height of kernels retained against the acrylic wall, and angle of repose from experiments and numerical simulations were compared to obtain the parameters. The calibrated set of DEM input parameters includes the following: (a) material properties: particle geometric mean diameter (6.24 mm); spherical shape; particle density (1220 kg m^{-3}); and (b) interaction parameters, particle-particle: coefficient of restitution (0.17); coefficient of static friction (0.26); coefficient of rolling friction (0.08); and particle-wall: coefficient of restitution (0.35); coefficient of static friction (0.30); coefficient of rolling friction (0.08). The results may adequately be used to simulate particle-scale mechanics (grain commingling, flow/motion, forces, etc.) of soybeans in post-harvest machinery and devices.

  1. Technical Note: Calibration and validation of geophysical observation models

    NARCIS (Netherlands)

    Salama, M.S.; van der Velde, R.; van der Woerd, H.J.; Kromkamp, J.C.; Philippart, C.J.M.; Joseph, A.T.; O'Neill, P.E.; Lang, R.H.; Gish, T.; Werdell, P.J.; Su, Z.

    2012-01-01

    We present a method to calibrate and validate observational models that interrelate remotely sensed energy fluxes to geophysical variables of land and water surfaces. Coincident sets of remote sensing observation of visible and microwave radiations and geophysical data are assembled and subdivided

  2. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Forsmark

    International Nuclear Information System (INIS)

    Aneljung, Maria; Gustafsson, Lars-Goeran

    2007-04-01

    . Differences in the aquifer refilling process subsequent to dry periods, for example too slow a refill when the groundwater table rises after dry summers. This may be due to local deviations in the applied pF-curves in the unsaturated zone description. Differences in near-surface groundwater elevations. For example, the calculated groundwater level reaches the ground surface during the fall and spring at locations where the measured groundwater depth is just below the ground surface. This may be due to the presence of near-surface high-conductive layers. A sensitivity analysis has been made of the calibration parameters. For parameters that have 'global' effects, such as the hydraulic conductivity in the saturated zone, the analysis was performed using the 'full' model. For parameters with more local effects, such as parameters influencing the evapotranspiration and the net recharge, the model was scaled down to a column model representing two different type areas. The most important conclusions that can be drawn from the sensitivity analysis are the following: The results indicate that the horizontal hydraulic conductivity generally should be increased at topographic highs, and reduced at local depressions in the topography. The results indicate that no changes should be made to the vertical hydraulic conductivity at locations where the horizontal conductivity has been increased, and that the vertical conductivity generally should be decreased where the horizontal conductivity has been decreased. The vegetation parameters that have the largest influence on the total groundwater recharge are the root mass distribution and the crop coefficient. The unsaturated zone parameter that has the largest influence on the total groundwater recharge is the effective porosity given in the pF-curve. In addition, the shape of the pF-curve above the water content at field capacity is of great importance. The general conclusion is that the surrounding conditions have large effects on water

  3. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Forsmark

    Energy Technology Data Exchange (ETDEWEB)

    Aneljung, Maria; Gustafsson, Lars-Goeran [DHI Water and Environment AB, Goeteborg (Sweden)

    2007-04-15

    . Differences in the aquifer refilling process subsequent to dry periods, for example too slow a refill when the groundwater table rises after dry summers. This may be due to local deviations in the applied pF-curves in the unsaturated zone description. Differences in near-surface groundwater elevations. For example, the calculated groundwater level reaches the ground surface during the fall and spring at locations where the measured groundwater depth is just below the ground surface. This may be due to the presence of near-surface high-conductive layers. A sensitivity analysis has been made of the calibration parameters. For parameters that have 'global' effects, such as the hydraulic conductivity in the saturated zone, the analysis was performed using the 'full' model. For parameters with more local effects, such as parameters influencing the evapotranspiration and the net recharge, the model was scaled down to a column model representing two different type areas. The most important conclusions that can be drawn from the sensitivity analysis are the following: The results indicate that the horizontal hydraulic conductivity generally should be increased at topographic highs, and reduced at local depressions in the topography. The results indicate that no changes should be made to the vertical hydraulic conductivity at locations where the horizontal conductivity has been increased, and that the vertical conductivity generally should be decreased where the horizontal conductivity has been decreased. The vegetation parameters that have the largest influence on the total groundwater recharge are the root mass distribution and the crop coefficient. The unsaturated zone parameter that has the largest influence on the total groundwater recharge is the effective porosity given in the pF-curve. In addition, the shape of the pF-curve above the water content at field capacity is of great importance. The general conclusion is that the surrounding conditions have

  4. Calibrating Vadose Zone Models with Time-Lapse Gravity Data

    DEFF Research Database (Denmark)

    Christiansen, Lars; Hansen, A. B.; Looms, M. C.

    2009-01-01

    A change in soil water content is a change in mass stored in the subsurface. Given that the mass change is big enough, the change can be measured with a gravity meter. Attempts have been made with varying success over the last decades to use ground-based time-lapse gravity measurements to infer...... hydrogeological parameters. These studies focused on the saturated zone with specific yield as the most prominent target parameter. Any change in storage in the vadose zone has been considered as noise. Our modeling results show a measureable change in gravity from the vadose zone during a forced infiltration...... experiment on 10m by 10m grass land. Simulation studies show a potential for vadose zone model calibration using gravity data in conjunction with other geophysical data, e.g. cross-borehole georadar. We present early field data and calibration results from a forced infiltration experiment conducted over 30...

  5. Model calibration of a variable refrigerant flow system with a dedicated outdoor air system: A case study

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dongsu [Mississippi State Univ., Starkville, MS (United States); Cox, Sam J. [Mississippi State Univ., Starkville, MS (United States); Cho, Heejin [Mississippi State Univ., Starkville, MS (United States); Im, Piljae [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-10-16

    With the increased use of variable refrigerant flow (VRF) systems in the U.S. building sector, interest in the capability of various building energy modeling tools to simulate VRF systems is rising. This paper presents detailed procedures for model calibration of a VRF system with a dedicated outdoor air system (DOAS), by comparison with detailed measured data from an occupancy-emulated small office building. The building energy model is first developed based on as-built drawings and the available building and system characteristics. The whole-building energy modeling tool used for the study is U.S. DOE’s EnergyPlus version 8.1. The initial model is then calibrated with the hourly measured data from the target building and VRF-DOAS system. In the detailed calibration procedure of the VRF-DOAS, the original EnergyPlus source code is modified to enable the modeling of the specific VRF-DOAS installed in the building. After proper calibration during the cooling and heating seasons, the VRF-DOAS model can reasonably predict the performance of the actual VRF-DOAS system based on the criteria from ASHRAE Guideline 14-2014. The calibration results show hourly CV-RMSE and NMBE of 15.7% and 3.8%, respectively, which meets the criteria for a calibrated model. As a result, the whole-building energy usage after calibration of the VRF-DOAS model is 1.9% (78.8 kWh) lower than that of the measurements during the comparison period.

  6. Electroweak Calibration of the Higgs Characterization Model

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will present the preliminary results of histogram fits using the Higgs Combine histogram fitting package. These fits can be used to estimate the effects of electroweak contributions to the p p -> H mu+ mu- Higgs production channel and calibrate Beyond Standard Model (BSM) simulations which ignore these effects. I will emphasize my findings' significance in the context of other research here at CERN and in the broader world of high energy physics.

  7. A fundamental parameter-based calibration model for an intrinsic germanium X-ray fluorescence spectrometer

    International Nuclear Information System (INIS)

    Christensen, L.H.; Pind, N.

    1982-01-01

    A matrix-independent fundamental parameter-based calibration model for an energy-dispersive X-ray fluorescence spectrometer has been developed. This model, which is part of a fundamental parameter approach quantification method, accounts for both the excitation and detection probabilities. For each secondary target, a number of relative calibration constants are calculated on the basis of knowledge of the irradiation geometry, the detector specifications, and tabulated fundamental physical parameters. The absolute calibration of the spectrometer is performed by measuring one pure-element standard per secondary target. For sample systems where all elements can be analyzed by means of the same secondary target, the absolute calibration constant can be determined during the iterative solution of the basic equation. Calculated and experimentally determined relative calibration constants agree to within 5-10% of each other, and so do the results obtained from the analysis of an NBS certified alloy using the two sets of constants. (orig.)
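A much-simplified sketch of the iterative solution of such a calibration equation, with an invented two-element absorption matrix standing in for tabulated fundamental parameters:

```python
import numpy as np

def fp_quantify(intensities, K, mu, tol=1e-12, max_iter=500):
    """
    Iterative solution of a simplified fundamental-parameter equation
    I_i = K_i * c_i / A_i(c), where A_i(c) = sum_j mu[i, j] * c_j is a toy
    matrix-absorption term (mu is illustrative, not tabulated physics).
    Concentrations are normalised to sum to 1 at every step.
    """
    c = intensities / K
    c = c / c.sum()
    for _ in range(max_iter):
        absorb = mu @ c                    # per-line matrix absorption
        c_new = intensities * absorb / K
        c_new = c_new / c_new.sum()
        if np.max(np.abs(c_new - c)) < tol:
            break
        c = c_new
    return c_new

# Toy two-element check: synthesise intensities from a known composition
c_true = np.array([0.7, 0.3])
mu_matrix = np.array([[1.0, 1.4],
                      [0.8, 1.0]])         # hypothetical cross-absorption factors
K = np.array([2.0, 1.0])                   # calibration constants per element line
I_meas = K * c_true / (mu_matrix @ c_true)

c_est = fp_quantify(I_meas, K, mu_matrix)
```

The fixed-point iteration recovers the known composition because the true concentrations satisfy the measurement equation exactly; in a real fundamental-parameter scheme the absorption term comes from tabulated physics rather than an assumed matrix.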

  8. Targeting the right input data to improve crop modeling at global level

    Science.gov (United States)

    Adam, M.; Robertson, R.; Gbegbelegbe, S.; Jones, J. W.; Boote, K. J.; Asseng, S.

    2012-12-01

    Designed for location-specific simulations, the use of crop models at a global level raises important questions. Crop models are originally premised on small unit areas where environmental conditions and management practices are considered homogeneous. Specific information describing soils, climate, management, and crop characteristics is used in the calibration process. However, when scaling up for global application, we rely on information derived from geographical information systems and weather generators. To run crop models at broad scale, we use a modeling platform that assumes a uniformly generated grid cell as a unit area. Specific weather, soil, and management practices for each crop are represented for each grid cell. Studies on the impacts of the uncertainties of weather information and climate change on crop yield at a global level have been carried out (Osborne et al., 2007; Nelson et al., 2010; van Bussel et al., 2011). Detailed information on soils and management practices at the global level is very scarce but recognized to be of critical importance (Reidsma et al., 2009). Few attempts to assess the impact of their uncertainties on cropping system performance can be found. The objectives of this study are (i) to determine the sensitivities of a crop model to soil and management practices, the inputs most relevant to low-input rainfed cropping systems, and (ii) to define hotspots of sensitivity according to the input data. We ran DSSAT v4.5 globally (CERES-CROPSIM) to simulate wheat yields at 45 arc-minute resolution. Cultivar parameters were calibrated and validated for different mega-environments (results not shown). The model was run for nitrogen-limited production systems. This setting was chosen as the most representative to simulate actual yield (especially for low-input rainfed agricultural systems) and assumes crop growth to be free of pest and disease damage. We conducted a sensitivity analysis on contrasting management

  9. LED-based Photometric Stereo: Modeling, Calibration and Numerical Solutions

    DEFF Research Database (Denmark)

    Quéau, Yvain; Durix, Bastien; Wu, Tao

    2018-01-01

    We conduct a thorough study of photometric stereo under nearby point light source illumination, from modeling to numerical solution, through calibration. In the classical formulation of photometric stereo, the luminous fluxes are assumed to be directional, which is very difficult to achieve in pr...

  10. Global ice sheet modeling

    International Nuclear Information System (INIS)

    Hughes, T.J.; Fastook, J.L.

    1994-05-01

    The University of Maine conducted this study for Pacific Northwest Laboratory (PNL) as part of a global climate modeling task for site characterization of the potential nuclear waste repository site at Yucca Mountain, NV. The purpose of the study was to develop a global ice sheet dynamics model that will forecast the three-dimensional configuration of global ice sheets for specific climate change scenarios. The objective of the third (final) year of the work was to produce ice sheet data for glaciation scenarios covering the next 100,000 years. This was accomplished using both the map-plane and flowband solutions of our time-dependent, finite-element gridpoint model. The theory and equations used to develop the ice sheet models are presented. Three future scenarios were simulated by the model and results are discussed

  11. Using genetic algorithms for calibrating simplified models of nuclear reactor dynamics

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Canetta, Raffaele

    2004-01-01

    In this paper the use of genetic algorithms for the estimation of the effective parameters of a model of nuclear reactor dynamics is investigated. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest (e.g., reactor power, average fuel and coolant temperatures) to the actual evolution profiles, here simulated by the Quandry-based reactor kinetics (Quark) code available from the Nuclear Energy Agency. Alternative schemes of single- and multi-objective optimization are investigated. The efficiency of convergence of the algorithm with respect to the different effective parameters to be calibrated is studied with reference to the physical relationships involved

  12. Using genetic algorithms for calibrating simplified models of nuclear reactor dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Marseguerra, Marzio E-mail: marzio.marseguerra@polimi.it; Zio, Enrico E-mail: enrico.zio@polimi.it; Canetta, Raffaele

    2004-07-01

    In this paper the use of genetic algorithms for the estimation of the effective parameters of a model of nuclear reactor dynamics is investigated. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest (e.g., reactor power, average fuel and coolant temperatures) to the actual evolution profiles, here simulated by the Quandry-based reactor kinetics (Quark) code available from the Nuclear Energy Agency. Alternative schemes of single- and multi-objective optimization are investigated. The efficiency of convergence of the algorithm with respect to the different effective parameters to be calibrated is studied with reference to the physical relationships involved.
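
As a concrete illustration of the approach in the two records above, the sketch below replaces the reactor model with a hypothetical exponential power-decay response and uses a small elitist genetic algorithm (blend crossover, Gaussian mutation; the response function and all GA settings are invented for illustration, not the paper's) to recover two effective parameters by minimizing the squared misfit to an "observed" profile:

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for a reactor response; the actual study fits
# effective parameters against profiles simulated by the Quark code.
def response(p0, lam, times):
    return [p0 * math.exp(-lam * t) for t in times]

TIMES = [0.1 * i for i in range(50)]
TRUE = (100.0, 0.8)                      # "effective parameters" to recover
OBSERVED = response(TRUE[0], TRUE[1], TIMES)

def sse(params):                         # single-objective fitness
    sim = response(params[0], params[1], TIMES)
    return sum((s - o) ** 2 for s, o in zip(sim, OBSERVED))

def calibrate(pop_size=40, generations=60):
    # random initial population within broad bounds
    pop = [(random.uniform(50, 150), random.uniform(0.1, 2.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sse)
        elite = pop[:pop_size // 2]      # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            w = random.random()          # blend crossover of two elites
            child = tuple(w * u + (1 - w) * v for u, v in zip(a, b))
            child = (child[0] + random.gauss(0.0, 1.0),      # mutation
                     max(1e-3, child[1] + random.gauss(0.0, 0.02)))
            children.append(child)
        pop = elite + children
    return min(pop, key=sse)

best = calibrate()
```

With elitism the best candidate never worsens between generations, which mirrors the convergence behavior the paper studies per parameter.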

  13. Calibration of Airframe and Occupant Models for Two Full-Scale Rotorcraft Crash Tests

    Science.gov (United States)

    Annett, Martin S.; Horta, Lucas G.; Polanco, Michael A.

    2012-01-01

    Two full-scale crash tests of an MD-500 helicopter were conducted in 2009 and 2010 at NASA Langley's Landing and Impact Research Facility in support of NASA's Subsonic Rotary Wing Crashworthiness Project. The first crash test was conducted to evaluate the performance of an externally mounted composite deployable energy absorber under combined impact conditions. In the second crash test, the energy absorber was removed to establish baseline loads that are regarded as severe but survivable. Acceleration and kinematic data collected from the crash tests were compared to a system-integrated finite element model of the test article. Results from 19 accelerometers placed throughout the airframe were compared to finite element model responses. The model developed for the purposes of predicting acceleration responses from the first crash test was inadequate for evaluating the more severe conditions seen in the second crash test. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used to calibrate model results for the second full-scale crash test. This combination of heuristic and quantitative methods was used to identify modeling deficiencies, evaluate parameter importance, and propose required model changes. It is shown that the multi-dimensional calibration techniques presented here are particularly effective in identifying model adequacy. Acceleration results for the calibrated model were compared to test results and the original model results. There was a noticeable improvement in the pilot and co-pilot region, a slight improvement in the occupant model response, and an over-stiffening effect in the passenger region. This approach should be adopted early on, in combination with the building-block approaches that are customarily used, for model development and test planning guidance. Complete crash simulations with validated finite element models can be used

  14. Calibration of Yucca Mountain unsaturated zone flow and transport model using porewater chloride data

    International Nuclear Information System (INIS)

    Liu, Jianchun; Sonnenthal, Eric L.; Bodvarsson, Gudmundur S.

    2002-01-01

    In this study, porewater chloride data from Yucca Mountain, Nevada, are analyzed and modeled by 3-D chemical transport simulations and analytical methods. The simulation modeling approach is based on a continuum formulation of coupled multiphase fluid flow and tracer transport processes through fractured porous rock, using a dual-continuum concept. Infiltration-rate calibrations were performed using the porewater chloride data. With the calibrated infiltration rates, modeled chloride distributions better matched the observed data. Statistical analyses of the frequency distribution for overall percolation fluxes and chloride concentration in the unsaturated zone system demonstrate that the use of the calibrated infiltration rates had an insignificant effect on the distribution of simulated percolation fluxes but significantly changed the predicted distribution of simulated chloride concentrations. An analytical method was also applied to model transient chloride transport. The method was verified against 3-D simulation results as being able to capture major chemical transient behavior and trends. Effects of lateral flow in the Paintbrush nonwelded unit on percolation fluxes and chloride distribution were studied by 3-D simulations with increased horizontal permeability. The combined results from these model calibrations furnish important information for the UZ model studies, contributing to performance assessment of the potential repository.

  15. Calibration of uncertain inputs to computer models using experimentally measured quantities and the BMARS emulator

    International Nuclear Information System (INIS)

    Stripling, H.F.; McClarren, R.G.; Kuranz, C.C.; Grosskopf, M.J.; Rutter, E.; Torralva, B.R.

    2011-01-01

    We present a method for calibrating the uncertain inputs to a computer model using available experimental data. The goal of the procedure is to produce posterior distributions of the uncertain inputs such that when samples from the posteriors are used as inputs to future model runs, the model is more likely to replicate (or predict) the experimental response. The calibration is performed by sampling the space of the uncertain inputs, using the computer model (or, more likely, an emulator for the computer model) to assign weights to the samples, and applying the weights to produce the posterior distributions and generate predictions of new experiments within confidence bounds. The method is similar to Markov chain Monte Carlo (MCMC) calibration methods with independent sampling, with the exception that we generate samples beforehand and replace the candidate acceptance routine with a weighting scheme. We apply our method to the calibration of a Hyades 2D model of laser energy deposition in beryllium. We employ a Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator as a surrogate for Hyades 2D. We treat a range of uncertainties in our system, including uncertainties in the experimental inputs, experimental measurement error, and systematic experimental timing errors. The results of the calibration are posterior distributions that agree with intuition, improve the accuracy, and decrease the uncertainty of experimental predictions. (author)
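
The sample-then-weight scheme described above can be sketched as importance weighting, with a toy one-parameter analytic "emulator" standing in for BMARS/Hyades 2D (the linear response, observation, noise level, and prior bounds below are all invented for illustration):

```python
import math
import random

random.seed(1)

# Hypothetical cheap emulator of the experimental response.
def emulator(theta):
    return 2.0 * theta + 1.0

Y_OBS, SIGMA = 5.0, 0.5               # "measurement" and its error

# 1) sample the uncertain input from its prior (here uniform)
prior = [random.uniform(0.0, 4.0) for _ in range(20000)]

# 2) weight each sample by the likelihood of matching the observation;
#    this replaces the MCMC candidate-acceptance routine
weights = [math.exp(-0.5 * ((emulator(t) - Y_OBS) / SIGMA) ** 2)
           for t in prior]

# 3) weighted posterior summaries
wsum = sum(weights)
post_mean = sum(w * t for w, t in zip(weights, prior)) / wsum
post_var = sum(w * (t - post_mean) ** 2
               for w, t in zip(weights, prior)) / wsum
```

For this toy setup the posterior concentrates around theta = 2 (where the emulator reproduces the observation), with standard deviation roughly SIGMA divided by the response slope.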

  16. Cryogenic thermometer calibration system using a helium cooling loop and a temperature controller [for LHC magnets]

    CERN Document Server

    Chanzy, E; Thermeau, J P; Bühler, S; Joly, C; Casas-Cubillos, J; Balle, C

    1998-01-01

    The IPN-Orsay and CERN are designing in close collaboration a fully automated cryogenic thermometer calibration facility, which will calibrate in 3 years the 10,000 cryogenic thermometers required for Large Hadron Collider (LHC) operation. A reduced-scale model of the calibration facility has been developed, which enables the calibration of ten thermometers by comparison with two rhodium-iron standard thermometers in the 1.8 K to 300 K temperature range under vacuum conditions. The particular design, based on a helium cooling loop and an electrical temperature controller, gives good dynamic performance. This paper describes the experimental set-up and the data acquisition system. Results of experimental runs are also presented along with the estimated global accuracy for the calibration. (3 refs).

  17. Generator Dynamic Model Validation and Parameter Calibration Using Phasor Measurements at the Point of Connection

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Zhenyu; Du, Pengwei; Kosterev, Dmitry; Yang, Steve

    2013-05-01

    Disturbance data recorded by phasor measurement units (PMU) offers opportunities to improve the integrity of dynamic models. However, manually tuning parameters through play-back events demands significant effort and engineering experience. In this paper, a calibration method using the extended Kalman filter (EKF) technique is proposed. The formulation of EKF with parameter calibration is discussed. Case studies are presented to demonstrate its validity. The proposed calibration method is cost-effective, complementary to traditional equipment testing for improving dynamic model quality.
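
A minimal sketch of EKF-based parameter calibration, assuming a toy first-order decay system in place of real generator dynamics: the uncertain parameter is appended to the state vector so the filter estimates it alongside the state from noisy measurements (all values below are illustrative, not from the paper):

```python
import random

random.seed(2)

DT, A_TRUE, R = 0.1, 0.5, 0.02 ** 2   # step, true parameter, meas. variance

# Synthetic stand-in for a PMU record: noisy measurements of x' = -a*x.
x, meas = 1.0, []
for _ in range(200):
    x += -A_TRUE * x * DT
    meas.append(x + random.gauss(0.0, 0.02))

# EKF with the uncertain parameter appended to the state: s = [x, a]
s = [1.0, 0.1]                        # deliberately poor initial guess of a
P = [[0.1, 0.0], [0.0, 1.0]]
for z in meas:
    # --- predict ---
    xp = s[0] - s[1] * s[0] * DT
    F = [[1.0 - s[1] * DT, -s[0] * DT],   # Jacobian of the augmented model
         [0.0, 1.0]]
    FP = [[sum(F[i][k] * P[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    P = [[sum(FP[i][k] * F[j][k] for k in range(2)) for j in range(2)]
         for i in range(2)]
    P[1][1] += 1e-6                   # small process noise keeps a adaptable
    # --- update with scalar measurement z of x (H = [1, 0]) ---
    innov = z - xp
    K0 = P[0][0] / (P[0][0] + R)
    K1 = P[1][0] / (P[0][0] + R)
    s = [xp + K0 * innov, s[1] + K1 * innov]
    P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
         [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]

a_est = s[1]
```

The cross-covariance term built up by the Jacobian is what lets innovations in the measured state correct the parameter estimate.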

  18. PEST modules with regularization for the acceleration of the automatic calibration in hydrodynamic models

    OpenAIRE

    Polomčić, Dušan M.; Bajić, Dragoljub I.; Močević, Jelena M.

    2015-01-01

    The calibration process of a hydrodynamic model is usually done manually, by 'testing' different values of hydrogeological parameters and hydraulic characteristics of the boundary conditions. By using the PEST program, automatic calibration of models has been introduced, which has proved to significantly reduce the subjective influence of the model creator on the results. With the relatively new approach of PEST, i.e. with the introduction of so-called 'pilot points', the concept of homogeneou...

  19. Radiolytic modelling of spent fuel oxidative dissolution mechanism. Calibration against UO2 dynamic leaching experiments

    International Nuclear Information System (INIS)

    Merino, J.; Cera, E.; Bruno, J.; Quinones, J.; Casas, I.; Clarens, F.; Gimenez, J.; Pablo, J. de; Rovira, M.; Martinez-Esparza, A.

    2005-01-01

    Calibration and testing are inherent aspects of any modelling exercise and consequently they are key issues in developing a model for the oxidative dissolution of spent fuel. In the present work we present the outcome of the calibration process for the kinetic constants of a UO2 oxidative dissolution mechanism developed for use in a radiolytic model. Experimental data obtained in dynamic leaching experiments of unirradiated UO2 have been used for this purpose. The iterative calibration process has provided some insight into the detailed mechanism taking place in the alteration of UO2, particularly the role of ·OH radicals and their interaction with the carbonate system. The results show that, although more simulations are needed for testing in different experimental systems, the calibrated oxidative dissolution mechanism could be included in radiolytic models to gain confidence in the prediction of the long-term alteration rate of the spent fuel under repository conditions.

  20. HYDROGRAV - Hydrological model calibration and terrestrial water storage monitoring from GRACE gravimetry and satellite altimetry, First results

    DEFF Research Database (Denmark)

    Andersen, O.B.; Krogh, P.E.; Michailovsky, C.

    2008-01-01

    Space-borne and ground-based time-lapse gravity observations provide new data for water balance monitoring and hydrological model calibration in the future. The HYDROGRAV project (www.hydrograv.dk) will explore the utility of time-lapse gravity surveys for hydrological model calibration and terrestrial water storage monitoring. Merging remote sensing data from GRACE with other remote sensing data like satellite altimetry and also ground-based observations is important to hydrological model calibration and water balance monitoring of large regions and can serve as either supplement or as vital … change from 2002 to 2008, along with in-situ gravity time-lapse observations and radar altimetry monitoring of surface water for the southern Africa river basins, will be presented.

  1. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    Science.gov (United States)

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the measurements of the spectra may be performed on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, named the master and slave instrument, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments were used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications.
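
One way to realize the idea, sketched under strong simplifying assumptions (synthetic 5-point "spectra", a known linear instrument shift, and a ridge penalty standing in for the paper's constrained optimization): keep the slave coefficients close to the master profile while fitting a few slave-instrument measurements.

```python
import random

random.seed(3)

N_WL = 5                                   # toy number of wavelengths
B_MASTER = [0.5, 1.0, -0.3, 0.2, 0.8]      # coefficients of the master model

# Hypothetical slave spectra, linearly related to the master ones, so the
# slave coefficients should stay close to B_MASTER in profile.
def make_sample():
    x_master = [random.uniform(0.0, 1.0) for _ in range(N_WL)]
    y = sum(b * xi for b, xi in zip(B_MASTER, x_master))   # reference value
    x_slave = [0.9 * xi + 0.05 for xi in x_master]         # instrument shift
    return x_slave, y

SAMPLES = [make_sample() for _ in range(8)]                # only a few spectra

def rmse_of(b):
    errs = [sum(bi * xi for bi, xi in zip(b, x)) - y for x, y in SAMPLES]
    return (sum(e * e for e in errs) / len(errs)) ** 0.5

def transfer(lam=0.1, lr=0.05, iters=2000):
    # minimize ||y - X_slave b||^2 + lam * ||b - B_MASTER||^2 by gradient
    # descent: the penalty keeps the slave coefficients near the master profile
    b = list(B_MASTER)
    for _ in range(iters):
        grad = [lam * (bi - bm) for bi, bm in zip(b, B_MASTER)]
        for x, y in SAMPLES:
            err = sum(bi * xi for bi, xi in zip(b, x)) - y
            for j in range(N_WL):
                grad[j] += err * x[j]
        b = [bi - lr * g for bi, g in zip(b, grad)]
    return b

b_slave = transfer()
rmse_master = rmse_of(B_MASTER)   # master coefficients applied to slave spectra
rmse_slave = rmse_of(b_slave)     # transferred coefficients
```

Starting the descent from the master coefficients guarantees the penalized objective, and hence the fit to the slave spectra, can only improve.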

  2. Global evaluation of runoff from 10 state-of-the-art hydrological models

    Directory of Open Access Journals (Sweden)

    H. E. Beck

    2017-06-01

    Full Text Available Observed streamflow data from 966 medium-sized catchments (1000–5000 km2) around the globe were used to comprehensively evaluate the daily runoff estimates (1979–2012) of six global hydrological models (GHMs) and four land surface models (LSMs) produced as part of tier-1 of the eartH2Observe project. The models were all driven by the WATCH Forcing Data ERA-Interim (WFDEI) meteorological dataset, but used different datasets for non-meteorological inputs and were run at various spatial and temporal resolutions, although all data were re-sampled to a common 0.5° spatial and daily temporal resolution. For the evaluation, we used a broad range of performance metrics related to important aspects of the hydrograph. We found pronounced inter-model performance differences, underscoring the importance of hydrological model uncertainty in addition to climate input uncertainty, for example in studies assessing the hydrological impacts of climate change. The uncalibrated GHMs were found to perform, on average, better than the uncalibrated LSMs in snow-dominated regions, while the ensemble mean was found to perform only slightly worse than the best (calibrated) model. The inclusion of less-accurate models did not appreciably degrade the ensemble performance. Overall, we argue that more effort should be devoted to calibrating and regionalizing the parameters of macro-scale models. We further found that, despite adjustments using gauge observations, the WFDEI precipitation data still contain substantial biases that propagate into the simulated runoff. The early bias in the spring snowmelt peak exhibited by most models is probably primarily due to the widespread precipitation underestimation at high northern latitudes.

  3. Stochastic Modeling of Overtime Occupancy and Its Application in Building Energy Simulation and Calibration

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Kaiyu; Yan, Da; Hong, Tianzhen; Guo, Siyue

    2014-02-28

    Overtime is a common phenomenon around the world. Overtime drives both internal heat gains from occupants, lighting and plug-loads, and HVAC operation during overtime periods. Overtime leads to longer occupancy hours and extended operation of building services systems beyond normal working hours, thus overtime impacts total building energy use. Current literature lacks methods to model overtime occupancy because overtime is stochastic in nature and varies by individual occupants and by time. To address this gap in the literature, this study aims to develop a new stochastic model based on the statistical analysis of measured overtime occupancy data from an office building. A binomial distribution is used to represent the total number of occupants working overtime, while an exponential distribution is used to represent the duration of overtime periods. The overtime model is used to generate overtime occupancy schedules as an input to the energy model of a second office building. The measured and simulated cooling energy use during the overtime period are compared in order to validate the overtime model. A hybrid approach to energy model calibration is proposed and tested, which combines ASHRAE Guideline 14 for the calibration of the energy model during normal working hours, and a proposed KS test for the calibration of the energy model during overtime. The developed stochastic overtime model and the hybrid calibration approach can be used in building energy simulations to improve the accuracy of results, and better understand the characteristics of overtime in office buildings.
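
The two distributions above are straightforward to sample; the sketch below generates synthetic overtime days with invented parameter values (the paper fits these parameters to measured occupancy data from a real building):

```python
import random

random.seed(4)

N_OCCUPANTS = 50          # occupants in the office (illustrative)
P_OVERTIME = 0.2          # probability each occupant works overtime on a day
MEAN_DURATION_H = 1.5     # mean overtime duration in hours

def sample_overtime_day():
    # Binomial: how many occupants stay past normal working hours
    n_stay = sum(1 for _ in range(N_OCCUPANTS)
                 if random.random() < P_OVERTIME)
    # Exponential: how long each of them stays
    durations = [random.expovariate(1.0 / MEAN_DURATION_H)
                 for _ in range(n_stay)]
    return n_stay, durations

days = [sample_overtime_day() for _ in range(2000)]
avg_stay = sum(n for n, _ in days) / len(days)
all_durations = [h for _, ds in days for h in ds]
avg_duration = sum(all_durations) / len(all_durations)
```

Averaged over many sampled days, the occupant count recovers N_OCCUPANTS * P_OVERTIME and the duration recovers its mean, which is how such a generator can be checked before feeding schedules into an energy model.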

  4. Calibration plots for risk prediction models in the presence of competing risks.

    Science.gov (United States)

    Gerds, Thomas A; Andersen, Per K; Kattan, Michael W

    2014-08-15

    A predicted risk of 17% can be called reliable if it can be expected that the event will occur to about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. There are three often-encountered practical problems when the aim is to display or test whether a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves for competing risks models based on jackknife pseudo-values that are combined with a nearest neighborhood smoother and a cross-validation approach to deal with all three problems. Copyright © 2014 John Wiley & Sons, Ltd.
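
A heavily simplified calibration-curve sketch, ignoring censoring and competing risks (the complications the pseudo-value approach above is designed to handle): bin subjects by predicted risk and compare the mean prediction with the observed event frequency in each bin.

```python
import random

random.seed(5)

# Simulate a perfectly calibrated model: each subject's event indicator
# is drawn with exactly the predicted probability.
subjects = []
for _ in range(20000):
    risk = random.random()
    event = 1 if random.random() < risk else 0
    subjects.append((risk, event))

def calibration_curve(data, n_bins=10):
    # group subjects by predicted risk, then compare mean predicted risk
    # with observed event frequency per group
    bins = [[] for _ in range(n_bins)]
    for risk, event in data:
        bins[min(int(risk * n_bins), n_bins - 1)].append((risk, event))
    curve = []
    for b in bins:
        pred = sum(r for r, _ in b) / len(b)
        obs = sum(e for _, e in b) / len(b)
        curve.append((pred, obs))
    return curve

curve = calibration_curve(subjects)
max_gap = max(abs(p - o) for p, o in curve)
```

For a calibrated model the curve hugs the diagonal; systematic gaps between predicted and observed frequencies flag miscalibration.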

  5. Overview of IMAGE 2.0. An integrated model of climate change and the global environment

    International Nuclear Information System (INIS)

    Alcamo, J.; Battjes, C.; Van den Born, G.J.; Bouwman, A.F.; De Haan, B.J.; Klein Goldewijk, K.; Klepper, O.; Kreileman, G.J.J.; Krol, M.; Leemans, R.; Van Minnen, J.G.; Olivier, J.G.J.; De Vries, H.J.M.; Toet, A.M.C.; Van den Wijngaart, R.A.; Van der Woerd, H.J.; Zuidema, G.

    1995-01-01

    The IMAGE 2.0 model is a multi-disciplinary, integrated model, designed to simulate the dynamics of the global society-biosphere-climate system. In this paper the focus is on the scientific aspects of the model, while another paper in this volume emphasizes its political aspects. The objectives of IMAGE 2.0 are to investigate linkages and feedbacks in the global system, and to evaluate consequences of climate policies. Dynamic calculations are performed to the year 2100, with a spatial scale ranging from grid (0.5°x0.5° latitude-longitude) to world political regions, depending on the sub-model. A total of 13 sub-models make up IMAGE 2.0, and they are organized into three fully linked sub-systems: Energy-Industry, Terrestrial Environment, and Atmosphere-Ocean. The fully linked model has been tested against data from 1970 to 1990, and after calibration it can reproduce the following observed trends: regional energy consumption and energy-related emissions, terrestrial flux of carbon dioxide and emissions of greenhouse gases, concentrations of greenhouse gases in the atmosphere, and transformation of land cover. The model can also simulate current zonal average surface and vertical temperatures. 1 fig., 10 refs

  6. A Novel Error Model of Optical Systems and an On-Orbit Calibration Method for Star Sensors

    Directory of Open Access Journals (Sweden)

    Shuang Wang

    2015-12-01

    Full Text Available In order to improve the on-orbit measurement accuracy of star sensors, the effects of image-plane rotary error, image-plane tilt error and distortions of optical systems resulting from the on-orbit thermal environment were studied in this paper. Since these issues will affect the precision of star image point positions, in this paper, a novel measurement error model based on the traditional error model is explored. Due to the orthonormal characteristics of image-plane rotary-tilt errors and the strong nonlinearity among these error parameters, it is difficult to calibrate all the parameters simultaneously. To solve this difficulty, for the new error model, a modified two-step calibration method based on the Extended Kalman Filter (EKF and Least Square Methods (LSM is presented. The former one is used to calibrate the main point drift, focal length error and distortions of optical systems while the latter estimates the image-plane rotary-tilt errors. With this calibration method, the precision of star image point position influenced by the above errors is greatly improved from 15.42% to 1.389%. Finally, the simulation results demonstrate that the presented measurement error model for star sensors has higher precision. Moreover, the proposed two-step method can effectively calibrate model error parameters, and the calibration precision of on-orbit star sensors is also improved obviously.

  7. Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models

    Energy Technology Data Exchange (ETDEWEB)

    Vilches-Freixas, Gloria; Létang, Jean Michel; Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1206, INSA-Lyon, Université Lyon 1, Centre Léon Bérard, Lyon 69373 Cedex 08 (France); Brousmiche, Sébastien [Ion Beam Application, Louvain-la-Neuve 1348 (Belgium); Romero, Edward; Vila Oliva, Marc [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1206, INSA-Lyon, Université Lyon 1, Centre Léon Bérard, Lyon 69373 Cedex 08, France and Ion Beam Application, Louvain-la-Neuve 1348 (Belgium); Kellner, Daniel; Deutschmann, Heinz; Keuschnigg, Peter; Steininger, Philipp [Institute for Research and Development on Advanced Radiation Technologies, Paracelsus Medical University, Salzburg 5020 (Austria)

    2016-09-15

    Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: the source model then the detector model. The source is described by the direction dependent photon energy spectrum at each voltage while the detector is described by the pixel intensity value as a function of the direction and the energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been exclusively used to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver combined with a dosimeter which is sensitive to the range of voltages of interest were used. A sensitivity analysis of the model has also been conducted for each parameter of the source and the detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. The minimum requirements in terms of material and equipment would make its implementation suitable in

  8. Model calibration and validation for OFMSW and sewage sludge co-digestion reactors

    International Nuclear Information System (INIS)

    Esposito, G.; Frunzo, L.; Panico, A.; Pirozzi, F.

    2011-01-01

    Highlights: → Disintegration is the limiting step of the anaerobic co-digestion process. → Disintegration kinetic constant does not depend on the waste particle size. → Disintegration kinetic constant depends only on the waste nature and composition. → The model calibration can be performed on organic waste of any particle size. - Abstract: A mathematical model has recently been proposed by the authors to simulate the biochemical processes that prevail in a co-digestion reactor fed with sewage sludge and the organic fraction of municipal solid waste. This model is based on the Anaerobic Digestion Model no. 1 of the International Water Association, which has been extended to include the co-digestion processes, using surface-based kinetics to model the organic waste disintegration and conversion to carbohydrates, proteins and lipids. When organic waste solids are present in the reactor influent, the disintegration process is the rate-limiting step of the overall co-digestion process. The main advantage of the proposed modeling approach is that the kinetic constant of such a process does not depend on the waste particle size distribution (PSD) and rather depends only on the nature and composition of the waste particles. The model calibration aimed at assessing the kinetic constant of the disintegration process can therefore be conducted using organic waste samples of any PSD, and the resulting value will be suitable for all the organic wastes of the same nature as the investigated samples, independently of their PSD. This assumption was proven in this study by biomethane potential experiments that were conducted on organic waste samples with different particle sizes. The results of these experiments were used to calibrate and validate the mathematical model, resulting in a good agreement between the simulated and observed data for any investigated particle size of the solid waste. This study confirms the strength of the proposed model and calibration procedure

  9. Calibration and validation of a model describing complete autotrophic nitrogen removal in a granular SBR system

    DEFF Research Database (Denmark)

    Vangsgaard, Anna Katrine; Mutlu, Ayten Gizem; Gernaey, Krist

    2013-01-01

    BACKGROUND: A validated model describing the nitritation-anammox process in a granular sequencing batch reactor (SBR) system is an important tool for: a) design of future experiments and b) prediction of process performance during optimization, while applying process control, or during system scale-up. RESULTS: A model was calibrated using a step-wise procedure customized for the specific needs of the system. The important steps in the procedure were initialization, steady-state and dynamic calibration, and validation. A fast and effective initialization approach was developed to approximate pseudo… screening of the parameter space proposed by Sin et al. (2008) - to find the best fit of the model to dynamic data. Finally, the calibrated model was validated with an independent data set. CONCLUSION: The presented calibration procedure is the first customized procedure for this type of system…

  10. Multi-site calibration, validation, and sensitivity analysis of the MIKE SHE Model for a large watershed in northern China

    Directory of Open Access Journals (Sweden)

    S. Wang

    2012-12-01

    Full Text Available Model calibration is essential for hydrologic modeling of large watersheds in a heterogeneous mountain environment. Little guidance is available for model calibration protocols for distributed models that aim at capturing the spatial variability of hydrologic processes. This study used the physically-based distributed hydrologic model, MIKE SHE, to contrast a lumped calibration protocol that used streamflow measured at one single watershed outlet to a multi-site calibration method which employed streamflow measurements at three stations within the large Chaohe River basin in northern China. Simulation results showed that the single-site calibrated model was able to sufficiently simulate the hydrographs for two of the three stations (Nash-Sutcliffe coefficient of 0.65–0.75, and correlation coefficient 0.81–0.87) during the testing period, but the model performed poorly for the third station (Nash-Sutcliffe coefficient only 0.44). Sensitivity analysis suggested that streamflow of the upstream area of the watershed was dominated by slow groundwater, whilst streamflow of the middle- and downstream areas was dominated by relatively quick interflow. Therefore, a multi-site calibration protocol was deemed necessary. Due to the potential errors and uncertainties with respect to the representation of spatial variability, performance measures from the multi-site calibration protocol slightly decreased for two of the three stations, whereas they improved greatly for the third station. We concluded that the multi-site calibration protocol reached a compromise in terms of model performance for the three stations, reasonably representing the hydrographs of all three stations with Nash-Sutcliffe coefficients ranging from 0.59–0.72. The multi-site calibration protocol generally has advantages over the single-site calibration protocol.
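
The Nash-Sutcliffe coefficient quoted throughout the record above has a compact definition; the sketch below computes it for toy streamflow values (invented for illustration):

```python
# Nash-Sutcliffe efficiency: 1 is a perfect match, 0 means the simulation
# is no better than predicting the observed mean at every time step.
def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - err / var

obs = [2.0, 3.5, 5.0, 4.0, 2.5]          # toy daily streamflow values
perfect = nash_sutcliffe(obs, obs)        # exact match -> 1.0
mean_model = nash_sutcliffe(obs, [sum(obs) / len(obs)] * len(obs))  # -> 0.0
```

Values well below zero are possible and indicate the simulation is worse than the observed mean, which is why the 0.44 score above counts as poor.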

  11. A joint calibration model for combining predictive distributions

    Directory of Open Access Journals (Sweden)

    Patrizia Agati

    2013-05-01

Full Text Available In many research fields, as for example in probabilistic weather forecasting, valuable predictive information about a future random phenomenon may come from several, possibly heterogeneous, sources. Forecast combining methods have been developed over the years in order to deal with ensembles of sources: the aim is to combine several predictions in such a way as to improve forecast accuracy and reduce the risk of bad forecasts. In this context, we propose the use of a Bayesian approach to information combining, which consists in treating the predictive probability density functions (pdfs) from the individual ensemble members as data in a Bayesian updating problem. The likelihood function is shown to be proportional to the product of the pdfs, adjusted by a joint “calibration function” describing the predictive skill of the sources (Morris, 1977). In this paper, after rephrasing Morris’ algorithm in a predictive context, we propose to model the calibration function in terms of bias, scale and correlation and to estimate its parameters according to the least squares criterion. The performance of our method is investigated and compared with that of Bayesian Model Averaging (Raftery, 2005) on simulated data.
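For the special case of independent Gaussian forecasts with the calibration terms omitted, the product-of-pdfs combination described above has a closed form (the normalized product of Gaussians is Gaussian, precision-weighted). A simplified sketch of that case, not the paper's full bias/scale/correlation model:

```python
def combine_gaussians(forecasts):
    """Combine Gaussian predictive pdfs given as (mean, variance) pairs
    by taking their normalized product: precisions add, and the combined
    mean is the precision-weighted average of the individual means."""
    precision = sum(1.0 / v for _, v in forecasts)
    mean = sum(m / v for m, v in forecasts) / precision
    return mean, 1.0 / precision

# Two forecasters: the sharper (lower-variance) source gets more weight.
mean, var = combine_gaussians([(10.0, 4.0), (14.0, 1.0)])
print(mean, var)  # 13.2 0.8
```

The combined pdf is sharper than either input, which is the sense in which combining reduces the risk of bad forecasts.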

  12. Calibration and Validation Parameter of Hydrologic Model HEC-HMS using Particle Swarm Optimization Algorithms – Single Objective

    Directory of Open Access Journals (Sweden)

    R. Garmeh

    2016-02-01

Full Text Available Introduction: Planning and management of water resources and river basins needs the use of conceptual hydrologic models, which play a significant role in predicting a basin's response to different climatic and meteorological processes. Evaluating watershed response through mathematical hydrologic models requires finding a set of model parameter values that provides the best fit between observed and estimated hydrographs, in a procedure called calibration. As manual calibration is tedious, time consuming and requires personal experience, automatic calibration methods make the application of more significant CRR models possible, based on using a systematic search procedure to find good parameter sets in terms of at least one objective function. Materials and Methods: Conceptual hydrologic models play a significant role in predicting a basin's response to different climatic and meteorological processes within natural systems. However, these models require a number of estimated parameters. Model calibration is the procedure of adjusting the parameter values until the model predictions match the observed data. Manual calibration of high-fidelity hydrologic simulation models is tedious, time consuming and sometimes impractical, especially when the number of parameters is large. Moreover, the high degrees of nonlinearity involved in different hydrologic processes and the non-uniqueness of inverse-type calibration problems make it difficult to find a single set of parameter values. In this research, the conceptual HEC-HMS model is integrated with the Particle Swarm Optimization (PSO) algorithm. The HEC-HMS model was developed as a replacement for HEC-1, which has long been considered a standard model for hydrologic simulation. Most of the hydrologic models employed in HEC-HMS are event-based models simulating a single storm, requiring the specification of all conditions at the beginning of the simulation. The soil moisture accounting model in HEC-HMS is the only continuous
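The single-objective PSO calibration described above can be sketched in a few lines: each particle tracks its personal best parameter set, the swarm shares a global best, and the objective is a fit measure between observed and simulated hydrographs. This is a generic toy implementation (inertia and acceleration constants are common defaults, not values from the study), shown recovering the parameters of a trivial stand-in model:

```python
import random

def pso_minimize(objective, bounds, n_particles=20, iters=60, seed=1):
    """Minimal single-objective particle swarm optimizer."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "calibration": recover (a, b) of y = a*x + b from synthetic data.
data = [(x, 2.0 * x + 1.0) for x in range(10)]
sse = lambda p: sum((y - (p[0] * x + p[1])) ** 2 for x, y in data)
params, err = pso_minimize(sse, [(0.0, 5.0), (0.0, 5.0)])
print(params, err)  # params close to [2.0, 1.0], err near 0
```

In the real setting, the objective would run HEC-HMS with the candidate parameters and score the simulated hydrograph against observations.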

  13. Three-dimensional DFN Model Development and Calibration: A Case Study for Pahute Mesa, Nevada National Security Site

    Science.gov (United States)

    Pham, H. V.; Parashar, R.; Sund, N. L.; Pohlmann, K.

    2017-12-01

Pahute Mesa, located in the north-western region of the Nevada National Security Site, is an area where numerous underground nuclear tests were conducted. The mesa contains several fractured aquifers that can potentially provide high permeability pathways for migration of radionuclides away from testing locations. The BULLION Forced-Gradient Experiment (FGE) conducted on Pahute Mesa injected and pumped solute and colloid tracers from a system of three wells for obtaining site-specific information about the transport of radionuclides in fractured rock aquifers. This study aims to develop reliable three-dimensional discrete fracture network (DFN) models to simulate the BULLION FGE as a means for computing realistic ranges of important parameters describing fractured rock. Multiple conceptual DFN models were developed using dfnWorks, a parallelized computational suite developed by Los Alamos National Laboratory, to simulate flow and conservative particle movement in subsurface fractured rocks downgradient from the BULLION test. The model domain is 100 × 200 × 100 m and includes the three tracer-test wells of the BULLION FGE and the Pahute Mesa lava-flow aquifer. The model scenarios considered differ from each other in terms of boundary conditions and fracture density. For each conceptual model, a number of statistically equivalent fracture network realizations were generated using data from fracture characterization studies. We adopt the covariance matrix adaptation-evolution strategy (CMA-ES), a stochastic, derivative-free global optimization method, to calibrate the DFN models using groundwater levels and tracer breakthrough data obtained from the three wells. Models of fracture apertures based on fracture type and size are proposed, and the values of apertures in each model are estimated during model calibration. The ranges of fracture aperture values resulting from this study are expected to enhance understanding of radionuclide transport in fractured rocks and

  14. Robustness of near-infrared calibration models for the prediction of milk constituents during the milking process.

    Science.gov (United States)

    Melfsen, Andreas; Hartung, Eberhard; Haeussermann, Angelika

    2013-02-01

The robustness of in-line raw milk analysis with near-infrared spectroscopy (NIRS) was tested with respect to the prediction of the raw milk contents fat, protein and lactose. Near-infrared (NIR) spectra of raw milk (n = 3119) were acquired on three different farms during the milking process of 354 milkings over a period of six months. Calibration models were calculated for: a random data set of each farm (fully random internal calibration); the first two-thirds of the visits per farm (internal calibration); the whole datasets of two of the three farms (external calibration); and combinations of external and internal datasets. Validation was done either on the remaining data set per farm (internal validation) or on data of the remaining farms (external validation). Excellent calibration results were obtained when fully randomised internal calibration sets were used for milk analysis. In this case, RPD values of around ten, five and three for the prediction of fat, protein and lactose content, respectively, were achieved. Farm-internal calibrations achieved much poorer prediction results, especially for the prediction of protein and lactose, with RPD values of around two and one, respectively. The prediction accuracy improved when validation was done on spectra of an external farm, mainly due to the higher sample variation in external calibration sets in terms of feeding diets and individual cow effects. The results showed that further improvements were achieved when additional farm information was added to the calibration set. One of the main requirements for a robust calibration model is the ability to predict milk constituents in unknown future milk samples. The robustness and quality of prediction increase with increasing variation of, e.g., feeding and cow-individual milk composition in the calibration model.
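The RPD values quoted above are conventionally the standard deviation of the reference values divided by the standard error of prediction; a sketch of that calculation (data are illustrative, not from the study):

```python
import statistics

def rpd(reference, predicted):
    """Ratio of performance to deviation: SD of the reference values
    over the root-mean-square error of prediction. Values near 1 mean
    the model predicts little better than the reference mean; values
    around 3+ indicate a usable quantitative calibration."""
    sd = statistics.stdev(reference)
    rmsep = (sum((r - p) ** 2 for r, p in zip(reference, predicted))
             / len(reference)) ** 0.5
    return sd / rmsep

ref = [1.0, 2.0, 3.0, 4.0, 5.0]
pred = [1.1, 2.1, 3.1, 4.1, 5.1]     # small constant bias
print(round(rpd(ref, pred), 2))       # ≈ 15.81
```

An RPD of about one, as found for farm-internal lactose predictions, thus means the prediction error was as large as the natural spread of the reference values.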

  15. A new calibration model for pointing a radio telescope that considers nonlinear errors in the azimuth axis

    International Nuclear Information System (INIS)

    Kong De-Qing; Wang Song-Gen; Zhang Hong-Bo; Wang Jin-Qing; Wang Min

    2014-01-01

A new calibration model of a radio telescope that includes pointing error is presented, which considers nonlinear errors in the azimuth axis. For a large radio telescope, in particular for a telescope with a turntable, it is difficult to correct pointing errors using a traditional linear calibration model, because errors produced by the wheel-on-rail or center bearing structures are generally nonlinear. A Fourier expansion is made for the oblique error and the parameters describing the inclination direction along the azimuth axis, based on the linear calibration model, and a new calibration model for pointing is derived. The new pointing model is applied to the 40 m radio telescope administered by Yunnan Observatories, which uses a turntable. The results show that this model can significantly reduce the residual systematic errors due to nonlinearity in the azimuth axis compared with the linear model.
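The core idea, expanding azimuth-dependent pointing residuals in harmonics of the azimuth angle, can be illustrated with a simplified fit. This sketch assumes uniformly spaced azimuth samples so the Fourier coefficients follow from discrete orthogonality; it is a stand-in for, not a reproduction of, the paper's full model:

```python
import math

def fit_harmonics(az, residuals, order=3):
    """Fit r(az) ≈ a0 + sum_k (a_k cos(k*az) + b_k sin(k*az)) for
    azimuth samples uniform over [0, 2*pi), via discrete orthogonality."""
    n = len(az)
    a0 = sum(residuals) / n
    coeffs = []
    for k in range(1, order + 1):
        ak = 2.0 / n * sum(r * math.cos(k * t) for t, r in zip(az, residuals))
        bk = 2.0 / n * sum(r * math.sin(k * t) for t, r in zip(az, residuals))
        coeffs.append((ak, bk))
    return a0, coeffs

def predict(az, a0, coeffs):
    """Evaluate the fitted pointing correction at an azimuth angle."""
    return a0 + sum(a * math.cos((k + 1) * az) + b * math.sin((k + 1) * az)
                    for k, (a, b) in enumerate(coeffs))

# Synthetic wheel-on-rail ripple: constant offset plus a 3rd-harmonic wobble.
az = [2 * math.pi * i / 360 for i in range(360)]
res = [0.5 + 0.2 * math.cos(3 * t) + 0.1 * math.sin(3 * t) for t in az]
a0, coeffs = fit_harmonics(az, res)
print(round(a0, 3))                       # ≈ 0.5
print(round(predict(0.0, a0, coeffs), 3)) # ≈ 0.7 (offset + 3rd harmonic at az=0)
```

A linear pointing model can only absorb the constant and first-harmonic terms; the higher harmonics recovered here are exactly the nonlinear azimuth-axis errors the new model targets.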

  16. Intersatellite Calibration of Microwave Radiometers for GPM

    Science.gov (United States)

    Wilheit, T. T.

    2010-12-01

The aim of the GPM mission is to measure precipitation globally with high temporal resolution by using a constellation of satellites logically united by the GPM Core Satellite, which will be in a non-sun-synchronous, medium-inclination orbit. The usefulness of the combined product depends on the consistency of precipitation retrievals from the various microwave radiometers. The calibration requirements for this consistency are quite daunting, requiring a multi-layered approach. The radiometers can vary considerably in their frequencies, view angles, polarizations and spatial resolutions depending on their primary application and other constraints. The planned parametric algorithms will correct for the varying viewing parameters, but they are still vulnerable to calibration errors, both relative and absolute. The GPM Intersatellite Calibration Working Group (aka X-CAL) will adjust the calibration of all the radiometers to a common consensus standard for the GPM Level 1C product to be used in precipitation retrievals. Finally, each Precipitation Algorithm Working Group must have its own strategy for removing the residual errors. If the final adjustments are small, the credibility of the precipitation retrievals will be enhanced. Before intercomparing, the radiometers must be self-consistent on a scan-wise and orbit-wise basis. Pre-screening for this consistency constitutes the first step in the intercomparison. The radiometers are then compared pair-wise with the microwave radiometer (GMI) on the GPM Core Satellite. Two distinct approaches are used for the sake of cross-checking the results. On the one hand, nearly simultaneous observations are collected at the cross-over points of the orbits and the observations of one are converted to virtual observations of the other using a radiative transfer model to permit comparisons. The complementary approach collects histograms of brightness temperature from each instrument. In each case a model is needed to translate the

  17. Calibration plots for risk prediction models in the presence of competing risks

    DEFF Research Database (Denmark)

    Gerds, Thomas A; Andersen, Per K; Kattan, Michael W

    2014-01-01

A predicted risk of 17% can be called reliable if it can be expected that the event will occur to about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks ... prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves ...
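The basic calibration-curve idea behind the abstract, grouping patients by predicted risk and comparing each group's mean prediction with its observed event frequency, can be sketched as follows. Note this naive version ignores right censoring and competing risks, which are exactly the complications the paper addresses:

```python
def calibration_curve(pred_risk, outcomes, n_bins=10):
    """Bin subjects by predicted risk; for each non-empty bin return
    (mean predicted risk, observed event frequency). A model is well
    calibrated when these pairs lie near the diagonal."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(pred_risk, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    curve = []
    for b in bins:
        if b:
            mean_pred = sum(p for p, _ in b) / len(b)
            obs_freq = sum(y for _, y in b) / len(b)
            curve.append((mean_pred, obs_freq))
    return curve

# A risk of 0.17 is well calibrated if ~17 of 100 such patients have the event.
preds = [0.17] * 100
events = [1] * 17 + [0] * 83
print(calibration_curve(preds, events))  # one bin near (0.17, 0.17)
```

With censored or competing-risk data, the observed frequency per bin must instead be estimated with methods such as Aalen-Johansen, which is where the paper's contribution lies.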

  18. Assessing water resources in Azerbaijan using a local distributed model forced and constrained with global data

    Science.gov (United States)

    Bouaziz, Laurène; Hegnauer, Mark; Schellekens, Jaap; Sperna Weiland, Frederiek; ten Velden, Corine

    2017-04-01

with NOAA stations and that MSWEP slightly overestimated precipitation amounts. On a daily basis, there were discrepancies in peak timing and magnitude between measured precipitation and the global products. A bias between EU-WATCH and WFDEI temperature and potential evaporation was observed, and to model the water balance correctly it was necessary to correct EU-WATCH to WFDEI mean monthly values. Overall, the available sources enabled rapid set-up of a hydrological model, including its forcing, with relatively good performance for assessing water resources in Azerbaijan with limited calibration effort, and allow for a similar set-up anywhere in the world. Timing and quantification of peak volume remain a weakness in global data, making it difficult to use for some applications (flooding) and for detailed calibration. Selecting and comparing different sources of global meteorological data is important to obtain a reliable set which improves model performance. - Beck et al., 2016. MSWEP: 3-hourly 0.25° global gridded precipitation (1979-2014) by merging gauge, satellite, and reanalysis data. Hydrol. Earth Syst. Sci. Discuss. - Dai, Y. et al., 2013. Development of a China Dataset of Soil Hydraulic Parameters Using Pedotransfer Functions for Land Surface Modeling. Journal of Hydrometeorology. - Harding, R. et al., 2011. WATCH: Current knowledge of the terrestrial global water cycle. J. Hydrometeorol. - Schellekens, J. et al., 2014. Rapid setup of hydrological and hydraulic models using OpenStreetMap and the SRTM derived digital elevation model. Environmental Modelling & Software. - Wang-Erlandsson, L. et al., 2016. Global Root Zone Storage Capacity from Satellite-Based Evaporation. Hydrology and Earth System Sciences. - Weedon, G. et al., 2014. The WFDEI meteorological forcing data set: WATCH Forcing Data methodology applied to ERA-Interim reanalysis data. Water Resources Research.

  19. Global long-term ozone trends derived from different observed and modelled data sets

    Science.gov (United States)

    Coldewey-Egbers, M.; Loyola, D.; Zimmer, W.; van Roozendael, M.; Lerot, C.; Dameris, M.; Garny, H.; Braesicke, P.; Koukouli, M.; Balis, D.

    2012-04-01

The long-term behaviour of stratospheric ozone amounts during the past three decades is investigated on a global scale using different observed and modelled data sets. Three European satellite sensors, GOME/ERS-2, SCIAMACHY/ENVISAT, and GOME-2/METOP, are combined and a merged global monthly mean total ozone product has been prepared using an inter-satellite calibration approach. The data set covers the 16-year period from June 1995 to June 2011 and exhibits excellent long-term stability, which is required for such trend studies. A multiple linear least-squares regression algorithm using different explanatory variables is applied to the time series, and statistically significant positive trends are detected in the northern mid-latitudes and subtropics. Global trends are also estimated using a second satellite-based Merged Ozone Data set (MOD) provided by NASA. For a few selected geographical regions, ozone trends are additionally calculated using well-maintained measurements of individual Dobson/Brewer ground-based instruments. A reasonable agreement in the spatial patterns of the trends is found amongst the European satellite, the NASA satellite, and the ground-based observations. Furthermore, two long-term simulations obtained with the Chemistry-Climate Models E39C-A, provided by the German Aerospace Center, and UMUKCA-UCAM, provided by the University of Cambridge, are analysed.
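A multiple linear least-squares regression of the kind described above can be sketched with the normal equations. This toy version uses only an intercept, a linear trend, and an annual cycle as explanatory variables (real ozone trend regressions add terms such as the solar cycle and QBO, which are omitted here), applied to synthetic monthly data:

```python
import math

def ols(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved with Gaussian elimination and partial pivoting."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                       # forward elimination
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

# Synthetic 16-year monthly ozone record: base level, trend, annual cycle.
months = range(192)
y = [300 + 0.05 * m + 10 * math.sin(2 * math.pi * m / 12) for m in months]
X = [[1.0, m, math.sin(2 * math.pi * m / 12), math.cos(2 * math.pi * m / 12)]
     for m in months]
beta = ols(X, y)
print([round(v, 3) for v in beta])  # ~[300.0, 0.05, 10.0, 0.0]
```

The fitted coefficient on the time regressor (here 0.05 per month) is the trend estimate whose statistical significance is then tested.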

  20. The worth of data to reduce predictive uncertainty of an integrated catchment model by multi-constraint calibration

    Science.gov (United States)

    Koch, J.; Jensen, K. H.; Stisen, S.

    2017-12-01

Hydrological models that integrate numerical process descriptions across compartments of the water cycle are typically required to undergo thorough model calibration in order to estimate suitable effective model parameters. In this study, we apply a spatially distributed hydrological model code which couples the saturated zone with the unsaturated zone and the energy partitioning at the land surface. We conduct a comprehensive multi-constraint model calibration against nine independent observational datasets which reflect both the temporal and the spatial behavior of hydrological response of a 1,000 km2 catchment in Denmark. The datasets are obtained from satellite remote sensing and in-situ measurements and cover five keystone hydrological variables: discharge, evapotranspiration, groundwater head, soil moisture and land surface temperature. Results indicate that a balanced optimization can be achieved where errors on objective functions for all nine observational datasets can be reduced simultaneously. The applied calibration framework was tailored with a focus on improving the spatial pattern performance; however, results suggest that the optimization is still more prone to improve the temporal dimension of model performance. This study features a post-calibration linear uncertainty analysis. This allows quantifying parameter identifiability, i.e. the worth of a specific observational dataset for inferring model parameter values through calibration. Furthermore, the ability of an observation to reduce predictive uncertainty is assessed as well. Such findings have concrete implications for the design of model calibration frameworks and, in more general terms, the acquisition of data in hydrological observatories.
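Reducing errors against nine datasets "simultaneously" requires combining objectives of different units and magnitudes into one calibration measure. A common approach, sketched below under our own assumptions (the study's exact aggregation is not given here), normalizes each error by a baseline such as the pre-calibration error and takes a weighted average:

```python
def aggregate_objectives(errors, baselines, weights=None):
    """Aggregate errors against several observational datasets into one
    dimensionless objective: each error is normalized by its baseline
    (e.g. the pre-calibration error), then weighted and averaged.
    A value below 1.0 means improvement on average over the baseline."""
    if weights is None:
        weights = [1.0] * len(errors)
    return (sum(w * e / b for w, e, b in zip(weights, errors, baselines))
            / sum(weights))

# Three datasets in incompatible units become comparable after normalization.
errors = [0.8, 12.0, 0.05]      # e.g. discharge RMSE, head RMSE, ET bias
baselines = [1.0, 20.0, 0.10]   # pre-calibration errors for each dataset
print(round(aggregate_objectives(errors, baselines), 3))  # 0.633
```

A "balanced" optimization in the abstract's sense is one where each normalized term drops below 1.0, not just the weighted sum.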

  1. Methods and strategy for modeling daily global solar radiation with measured meteorological data - A case study in Nanchang station, China

    International Nuclear Information System (INIS)

    Wu, Guofeng; Liu, Yaolin; Wang, Tiejun

    2007-01-01

Solar radiation is a primary driver of many physical, chemical and biological processes on the earth's surface, and complete and accurate solar radiation data for a specific region are indispensable for solar-energy-related research. This study, with Nanchang station, China, as a case study, aimed to calibrate existing models and develop new models for estimating missing global solar radiation data using commonly measured meteorological data, and to propose a strategy for selecting the optimal models under different situations of available meteorological data. Using daily global radiation, sunshine hours, temperature, total precipitation and dew point data covering the years 1994 to 2005, we calibrated or developed and evaluated seven existing models and two new models. Validation criteria included intercept, slope, coefficient of determination, mean bias error and root mean square error. The best result (R2 = 0.93) was derived from Chen model 2, which uses sunshine hours and temperature as predictors. The Bahel model, which only uses sunshine hours, was almost as good, explaining 92% of the solar radiation variance. Temperature-based models (the Bristow and Campbell, Allen, Hargreaves and Chen 1 models) provided less accurate results, of which the best (R2 = 0.69) was the Bristow and Campbell model. The temperature-based models were improved by adding other variables (daily mean total precipitation and mean dew point). Two such models could explain 77% (Wu model 1) and 80% (Wu model 2) of the solar radiation variance. We thus propose a strategy for selecting an optimal method for calculating missing daily values of global solar radiation: (1) when sunshine hour and temperature data are available, use Chen model 2; (2) when only sunshine hour data are available, use the Bahel model; (3) when temperature, total precipitation and dew point data are available but not sunshine hours, use Wu model 2; (4) when only temperature and total precipitation are
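Sunshine-hour models of the family calibrated above descend from the Angström-Prescott relation, a linear regression of the clearness index H/H0 on relative sunshine duration n/N. A minimal calibration sketch on synthetic data (the study's models add further terms; the function and data here are illustrative only):

```python
def fit_sunshine_model(rel_sunshine, clearness):
    """Least-squares fit of the Angström-Prescott relation
    H/H0 = a + b * (n/N): simple linear regression of daily clearness
    index on relative sunshine duration."""
    n = len(rel_sunshine)
    mx = sum(rel_sunshine) / n
    my = sum(clearness) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(rel_sunshine, clearness))
         / sum((x - mx) ** 2 for x in rel_sunshine))
    a = my - b * mx
    return a, b

# Synthetic daily records: clearness index rises with relative sunshine.
x = [0.1, 0.3, 0.5, 0.7, 0.9]
y = [0.25 + 0.5 * v for v in x]
print(fit_sunshine_model(x, y))  # close to (0.25, 0.5)
```

Once a and b are calibrated against days with measured radiation, missing daily global radiation is estimated as H = H0 * (a + b * n/N) using the extraterrestrial radiation H0 and day length N for the site.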

  2. Global Hail Model

    Science.gov (United States)

    Werner, A.; Sanderson, M.; Hand, W.; Blyth, A.; Groenemeijer, P.; Kunz, M.; Puskeiler, M.; Saville, G.; Michel, G.

    2012-04-01

Hail risk models are rare in the insurance industry, despite the fact that average annual hail losses can be large and hail dominates losses for many motor portfolios worldwide. Insufficient observational data, high spatio-temporal variability and data inhomogeneity have hindered the creation of credible models so far. In January 2012, a selected group of hail experts met at Willis in London in order to discuss ways to model hail risk at various scales. Discussions aimed at improving our understanding of hail occurrence and severity, and covered recent progress in the understanding of microphysical processes, climatological behaviour and hail vulnerability. The final outcome of the meeting was the formation of a global hail risk model initiative and the launch of a realistic global hail model in order to assess hail loss occurrence and severities for the globe. The following projects will be tackled: Microphysics of hail and hail severity measures: Understand the physical drivers of hail and hailstone size development in different regions of the globe. Proposed factors include updraft and supercooled liquid water content in the troposphere. What are the threshold drivers of hail formation around the globe? Hail climatology: Consider ways to build a realistic global climatological set of hail events based on physical parameters including spatial variations in total availability of moisture and aerosols, among others, and using neural networks. Vulnerability, exposure, and financial model: Use historical losses and event footprints available in the insurance market to approximate fragility distributions and damage potential for various hail sizes for property, motor, and agricultural business. Propagate uncertainty distributions and consider effects of policy conditions along with aggregating and disaggregating exposure and losses. This presentation provides an overview of ideas and tasks that lead towards a comprehensive global understanding of hail risk for

  3. Applying a Global Sensitivity Analysis Workflow to Improve the Computational Efficiencies in Physiologically-Based Pharmacokinetic Modeling

    Directory of Open Access Journals (Sweden)

    Nan-Hung Hsieh

    2018-06-01

Full Text Available Traditionally, the solution to reduce parameter dimensionality in a physiologically-based pharmacokinetic (PBPK) model is through expert judgment. However, this approach may lead to bias in parameter estimates and model predictions if important parameters are fixed at uncertain or inappropriate values. The purpose of this study was to explore the application of global sensitivity analysis (GSA) to ascertain which parameters in the PBPK model are non-influential, and therefore can be assigned fixed values in Bayesian parameter estimation with minimal bias. We compared the elementary effect-based Morris method and three variance-based Sobol indices in their ability to distinguish “influential” parameters to be estimated from “non-influential” parameters to be fixed. We illustrated this approach using a published human PBPK model for acetaminophen (APAP) and its two primary metabolites, APAP-glucuronide and APAP-sulfate. We first applied GSA to the original published model, comparing Bayesian model calibration results using all 21 originally calibrated model parameters (OMP, determined by the “expert judgment”-based approach) vs. the subset of original influential parameters (OIP, determined by GSA from the OMP). We then applied GSA to all the PBPK parameters, including those fixed in the published model, comparing the model calibration results using this full set of 58 model parameters (FMP) vs. the full set of influential parameters (FIP, determined by GSA from the FMP). We also examined the impact of different cut-off points to distinguish the influential and non-influential parameters. We found that Sobol indices calculated by eFAST provided the best combination of reliability (consistency with other variance-based methods) and efficiency (lowest computational cost to achieve convergence) in identifying influential parameters. We identified several originally calibrated parameters that were not influential, and could be fixed to improve computational
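The Morris screening step mentioned above ranks parameters by the mean absolute elementary effect (mu*). A toy radial-sampling variant is sketched below (a full Morris design reuses points along trajectories; the variance-based eFAST/Sobol indices the study ultimately recommends are not shown):

```python
import random

def morris_mu_star(model, bounds, n_traj=30, delta=0.1, seed=0):
    """Morris-style screening: for random base points, perturb one
    parameter at a time by a fixed fraction of its range and average
    the absolute elementary effects |f(x + delta*e_i) - f(x)| / delta."""
    rng = random.Random(seed)
    k = len(bounds)
    effects = [[] for _ in range(k)]
    for _ in range(n_traj):
        x = [rng.uniform(lo, hi - delta * (hi - lo)) for lo, hi in bounds]
        base = model(x)
        for i, (lo, hi) in enumerate(bounds):
            xp = x[:]
            xp[i] += delta * (hi - lo)
            effects[i].append(abs(model(xp) - base) / delta)
    return [sum(e) / len(e) for e in effects]

# Toy model: x0 strongly influential, x1 weakly, x2 not at all.
model = lambda x: 10 * x[0] + 0.5 * x[1] + 0 * x[2]
mu = morris_mu_star(model, [(0, 1)] * 3)
print([round(m, 2) for m in mu])  # [10.0, 0.5, 0.0]
```

Parameters whose mu* falls below a chosen cut-off (the cut-off sensitivity the study examines) would be fixed before the expensive Bayesian calibration, which is how GSA reduces parameter dimensionality without relying solely on expert judgment.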

  4. Satellite-based Calibration of Heat Flux at the Ocean Surface

    Science.gov (United States)

    Barron, C. N.; Dastugue, J. M.; May, J. C.; Rowley, C. D.; Smith, S. R.; Spence, P. L.; Gremes-Cordero, S.

    2016-02-01

Model forecasts of upper ocean heat content and variability on diurnal to daily scales are highly dependent on estimates of heat flux through the air-sea interface. Satellite remote sensing is applied not only to inform the initial ocean state but also to mitigate errors in surface heat flux and model representations affecting the distribution of heat in the upper ocean. Traditional assimilation of sea surface temperature (SST) observations re-centers ocean models at the start of each forecast cycle. Subsequent evolution depends on estimates of surface heat fluxes and upper-ocean processes over the forecast period. The COFFEE project (Calibration of Ocean Forcing with satellite Flux Estimates) endeavors to correct ocean forecast bias through a responsive error partition among surface heat flux and ocean dynamics sources. A suite of experiments in the southern California Current demonstrates a range of COFFEE capabilities, showing the impact on forecast error relative to a baseline three-dimensional variational (3DVAR) assimilation using Navy operational global or regional atmospheric forcing. COFFEE addresses satellite calibration of surface fluxes to estimate surface error covariances and links these to the ocean interior. Experiment cases combine different levels of flux calibration with different assimilation alternatives. The cases may use the original fluxes, apply full satellite corrections during the forecast period, or extend hindcast corrections into the forecast period. Assimilation is either baseline 3DVAR or standard strong-constraint 4DVAR, with work proceeding to add a 4DVAR expanded to include a weak-constraint treatment of the surface flux errors. Covariance of flux errors is estimated from the recent time series of forecast and calibrated flux terms. While California Current examples are shown, the approach is equally applicable to other regions. These approaches within a 3DVAR application are anticipated to be useful for global and larger

  5. BETR Global - A geographically explicit global-scale multimedia contaminant fate model

    Energy Technology Data Exchange (ETDEWEB)

    Macleod, M.; Waldow, H. von; Tay, P.; Armitage, J. M.; Wohrnschimmel, H.; Riley, W.; McKone, T. E.; Hungerbuhler, K.

    2011-04-01

We present two new software implementations of the BETR Global multimedia contaminant fate model. The model uses steady-state or non-steady-state mass-balance calculations to describe the fate and transport of persistent organic pollutants using a desktop computer. The global environment is described using a database of long-term average monthly conditions on a 15° × 15° grid. We demonstrate BETR Global by modeling the global sources, transport, and removal of decamethylcyclopentasiloxane (D5).
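A steady-state multimedia mass balance of the kind BETR Global solves reduces, in the simplest case, to a small linear system: emissions in, first-order losses and inter-compartment transfers out. A two-compartment toy version (rate constants and emissions are invented for illustration; the real model couples many compartments per grid cell):

```python
def steady_state_two_box(e1, e2, k1, k2, k12, k21):
    """Steady-state masses m1, m2 of a two-compartment mass balance:
        0 = e1 - (k1 + k12)*m1 + k21*m2
        0 = e2 + k12*m1 - (k2 + k21)*m2
    where e are emission rates, k1/k2 first-order losses, and
    k12/k21 transfer rates. Solved by Cramer's rule."""
    a, b = -(k1 + k12), k21
    c, d = k12, -(k2 + k21)
    det = a * d - b * c
    m1 = (-e1 * d + e2 * b) / det
    m2 = (-a * e2 + c * e1) / det
    return m1, m2

# 100 kg/day emitted to box 1; some mass is transferred to box 2.
m1, m2 = steady_state_two_box(100.0, 0.0, 0.1, 0.2, 0.05, 0.0)
print(round(m1, 1), round(m2, 1))  # 666.7 166.7
```

At steady state total losses balance total emissions (0.1*m1 + 0.2*m2 = 100 here), which is the invariant a mass-balance model must satisfy in every compartment.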

  6. Model- and calibration-independent test of cosmic acceleration

    International Nuclear Information System (INIS)

    Seikel, Marina; Schwarz, Dominik J.

    2009-01-01

We present a calibration-independent test of the accelerated expansion of the universe using supernova type Ia data. The test is also model-independent in the sense that no assumptions about the content of the universe or about the parameterization of the deceleration parameter are made, and that it does not assume any dynamical equations of motion. Yet, the test assumes the universe and the distribution of supernovae to be statistically homogeneous and isotropic. A significant reduction of systematic effects, as compared to our previous, calibration-dependent test, is achieved. Accelerated expansion is detected at a significant level (4.3σ in the 2007 Gold sample, 7.2σ in the 2008 Union sample) if the universe is spatially flat. This result depends, however, crucially on supernovae with a redshift smaller than 0.1, for which the assumption of statistical isotropy and homogeneity is less well established.

  7. Fiction and reality in the modelling world - Balance between simplicity and complexity, calibration and identifiability, verification and falsification

    DEFF Research Database (Denmark)

    Harremoës, P.; Madsen, H.

    1999-01-01

Where is the balance between simplicity and complexity in model prediction of urban drainage structures? The calibration/verification approach to testing of model performance gives an exaggerated sense of certainty. Frequently, the model structure and the parameters are not identifiable by calibration/verification on the basis of the data series available, which generates elements of sheer guessing - unless the universality of the model is based on induction, i.e. experience from the sum of all previous investigations. There is a need to deal more explicitly with uncertainty ...

  8. Influence of selecting secondary settling tank sub-models on the calibration of WWTP models – A global sensitivity analysis using BSM2

    DEFF Research Database (Denmark)

    Ramin, Elham; Flores Alsina, Xavier; Sin, Gürkan

    2014-01-01

This study investigates the sensitivity of wastewater treatment plant (WWTP) model performance to the selection of one-dimensional secondary settling tank (1-D SST) models with first-order and second-order mathematical structures. We performed a global sensitivity analysis (GSA) on the benchmark simulation model No. 2, with the input uncertainty associated with the biokinetic parameters in the activated sludge model No. 1 (ASM1), a fractionation parameter in the primary clarifier, and the settling parameters in the SST model. Based on the parameter sensitivity rankings obtained in this study, the settling parameters were found to be as influential as the biokinetic parameters on the uncertainty of WWTP model predictions, particularly for biogas production and treated water quality. However, the sensitivity measures were found to be dependent on the 1-D SST model selected. Accordingly, we suggest ...

  9. Electronic transport in VO2—Experimentally calibrated Boltzmann transport modeling

    International Nuclear Information System (INIS)

    Kinaci, Alper; Rosenmann, Daniel; Chan, Maria K. Y.; Kado, Motohisa; Ling, Chen; Zhu, Gaohua; Banerjee, Debasish

    2015-01-01

Materials that undergo metal-insulator transitions (MITs) are under intense study, because the transition is scientifically fascinating and technologically promising for various applications. Among these materials, VO2 has served as a prototype due to its favorable transition temperature. While the physical underpinnings of the transition have been heavily investigated experimentally and computationally, quantitative modeling of electronic transport in the two phases has yet to be undertaken. In this work, we establish a density-functional-theory (DFT)-based approach with Hubbard U correction (DFT + U) to model electronic transport properties in VO2 in the semiconducting and metallic regimes, focusing on band transport using the Boltzmann transport equations. We synthesized high quality VO2 films and measured the transport quantities across the transition, in order to calibrate the free parameters in the model. We find that the experimental calibration of the Hubbard correction term can efficiently and adequately model the metallic and semiconducting phases, allowing for further computational design of MIT materials with desirable transport properties.

  10. Calibration of a Plastic Classification System with the Ccw Model

    International Nuclear Information System (INIS)

    Barcala Riveira, J. M.; Fernandez Marron, J. L.; Alberdi Primicia, J.; Navarrete Marin, J. J.; Oller Gonzalez, J. C.

    2003-01-01

    This document describes the calibration of a plastic classification system with the Ccw model (Classification by Quantum's built with Wavelet Coefficients). The method is applied to spectra of plastics usually present in domestic wastes. The results obtained are shown. (Author) 16 refs

  11. Calibration procedure of Hukseflux SR25 to Establish the Diffuse Reference for the Outdoor Broadband Radiometer Calibration

    Energy Technology Data Exchange (ETDEWEB)

    Reda, Ibrahim M. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Andreas, Afshin M. [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-01

    Accurate pyranometer calibrations, traceable to internationally recognized standards, are critical for solar irradiance measurements. One calibration method is the component summation method, in which the pyranometers are calibrated outdoors under clear-sky conditions and the reference global solar irradiance is calculated as the sum of two reference components: the diffuse horizontal and subtended beam solar irradiances. The beam component is measured with pyrheliometers traceable to the World Radiometric Reference, while there is no internationally recognized reference for the diffuse component. In the absence of such a reference, we present a method to consistently calibrate pyranometers for measuring the diffuse component. The method is based on using a modified shade/unshade method and a pyranometer with less than 0.5 W/m2 thermal offset. The calibration result shows that the responsivity of the Hukseflux SR25 pyranometer equals 10.98 uV/(W/m2) with +/-0.86 percent uncertainty.
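The component summation and responsivity relations described above can be sketched in a few lines. The numeric readings below are hypothetical, not values from the paper; only the relation global = diffuse + beam × cos(zenith) and the responsivity ratio come from the abstract.

```python
import math

def reference_global_irradiance(diffuse, beam_normal, zenith_deg):
    """Component summation: global = diffuse horizontal + beam * cos(solar zenith)."""
    return diffuse + beam_normal * math.cos(math.radians(zenith_deg))

def responsivity(voltage_uV, irradiance_W_m2):
    """Pyranometer responsivity in uV per W/m^2."""
    return voltage_uV / irradiance_W_m2

# Hypothetical clear-sky readings (not values from the paper)
G = reference_global_irradiance(diffuse=100.0, beam_normal=900.0, zenith_deg=30.0)
R = responsivity(voltage_uV=G * 10.98, irradiance_W_m2=G)  # recovers 10.98 uV/(W/m2)
```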

  12. Evaluating the Efficiency of a Multi-core Aware Multi-objective Optimization Tool for Calibrating the SWAT Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, X. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Izaurralde, R. C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zong, Z. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zhao, K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Thomson, A. M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2012-08-20

    The efficiency of calibrating physically-based complex hydrologic models is a major concern in the application of those models to understand and manage natural and human activities that affect watershed systems. In this study, we developed a multi-core aware multi-objective evolutionary optimization algorithm (MAMEOA) to improve the efficiency of calibrating a widely used watershed model (the Soil and Water Assessment Tool, SWAT). The test results show that MAMEOA can reduce the time consumed in calibrating SWAT by about 1-9%, 26-51%, and 39-56% relative to the sequential method when using dual-core, quad-core, and eight-core machines, respectively. Potential and limitations of MAMEOA for calibrating SWAT are discussed. MAMEOA is open source software.

  13. Bayesian calibration of terrestrial ecosystem models: a study of advanced Markov chain Monte Carlo methods

    Science.gov (United States)

    Lu, Dan; Ricciuto, Daniel; Walker, Anthony; Safta, Cosmin; Munger, William

    2017-09-01

    Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. Calibration with DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM identifies only one mode. The application suggests that DREAM is well suited to calibrating complex terrestrial ecosystem models, where the number of uncertain parameters is usually large and the existence of local optima is always a concern. In addition, residual analysis in this effort justifies the assumptions of the error model used in the Bayesian calibration. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the resulting likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
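DREAM belongs to the differential-evolution family of MCMC samplers, in which each chain proposes a jump built from the difference of two other chains. The sketch below is a minimal DE-MC sampler, not the full DREAM algorithm (which adds subspace sampling, crossover, and outlier handling), and it targets a toy 2-D Gaussian log-posterior standing in for the DALEC misfit surface.

```python
import numpy as np

def log_post(theta):
    """Toy 2-D Gaussian log-posterior, a stand-in for a real model misfit."""
    return -0.5 * float(np.sum(theta ** 2))

def de_mc(n_chains=6, n_iter=2000, d=2, seed=0):
    """Minimal differential-evolution MCMC (the core move behind DREAM)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_chains, d))               # current state of each chain
    logp = np.array([log_post(x) for x in X])
    gamma = 2.38 / np.sqrt(2 * d)                    # standard DE-MC jump scale
    kept = []
    for _ in range(n_iter):
        for i in range(n_chains):
            a, b = rng.choice([j for j in range(n_chains) if j != i], 2, replace=False)
            proposal = X[i] + gamma * (X[a] - X[b]) + 1e-4 * rng.normal(size=d)
            lp = log_post(proposal)
            if np.log(rng.random()) < lp - logp[i]:  # Metropolis acceptance
                X[i], logp[i] = proposal, lp
        kept.append(X.copy())
    return np.concatenate(kept[n_iter // 2:])        # discard the burn-in half
```

Run on the toy posterior, the pooled post-burn-in samples should have near-zero mean and near-unit spread.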

  14. CALIPSO lidar calibration at 532 nm: version 4 nighttime algorithm

    Science.gov (United States)

    Kar, Jayanta; Vaughan, Mark A.; Lee, Kam-Pui; Tackett, Jason L.; Avery, Melody A.; Garnier, Anne; Getzewich, Brian J.; Hunt, William H.; Josset, Damien; Liu, Zhaoyan; Lucker, Patricia L.; Magill, Brian; Omar, Ali H.; Pelon, Jacques; Rogers, Raymond R.; Toth, Travis D.; Trepte, Charles R.; Vernier, Jean-Paul; Winker, David M.; Young, Stuart A.

    2018-03-01

    Data products from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on board Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) were recently updated following the implementation of new (version 4) calibration algorithms for all of the Level 1 attenuated backscatter measurements. In this work we present the motivation for and the implementation of the version 4 nighttime 532 nm parallel channel calibration. The nighttime 532 nm calibration is the most fundamental calibration of CALIOP data, since all of CALIOP's other radiometric calibration procedures - i.e., the 532 nm daytime calibration and the 1064 nm calibrations during both nighttime and daytime - depend either directly or indirectly on the 532 nm nighttime calibration. The accuracy of the 532 nm nighttime calibration has been significantly improved by raising the molecular normalization altitude from 30-34 km to the upper possible signal acquisition range of 36-39 km to substantially reduce stratospheric aerosol contamination. Due to the greatly reduced molecular number density and consequently reduced signal-to-noise ratio (SNR) at these higher altitudes, the signal is now averaged over a larger number of samples using data from multiple adjacent granules. Additionally, an enhanced strategy for filtering the radiation-induced noise from high-energy particles was adopted. Further, the meteorological model used in the earlier versions has been replaced by the improved Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), model. An aerosol scattering ratio of 1.01 ± 0.01 is now explicitly used for the calibration altitude. These modifications lead to globally revised calibration coefficients which are, on average, 2-3 % lower than in previous data releases. Further, the new calibration procedure is shown to eliminate biases at high altitudes that were present in earlier versions and consequently leads to an improved representation of

  15. Calibration of Automatically Generated Items Using Bayesian Hierarchical Modeling.

    Science.gov (United States)

    Johnson, Matthew S.; Sinharay, Sandip

    For complex educational assessments, there is an increasing use of "item families," which are groups of related items. However, calibration or scoring for such an assessment requires fitting models that take into account the dependence structure inherent among the items that belong to the same item family. C. Glas and W. van der Linden…

  16. Nanotechnology in global medicine and human biosecurity: private interests, policy dilemmas, and the calibration of public health law.

    Science.gov (United States)

    Faunce, Thomas A

    2007-01-01

    This paper considers how best to approach dilemmas posed to global health and biosecurity policy by increasing advances in practical applications of nanotechnology. The nanotechnology policy dilemmas discussed include: (1) expenditure of public funds, (2) publicly funded research priorities, (3) public confidence in government and science and, finally, (4) public safety. The article examines the value in this context of a legal obligation that the development of relevant public health law be calibrated against less corporate-influenced norms issuing from bioethics and international human rights.

  17. An experimental test of CSR theory using a globally calibrated ordination method.

    Science.gov (United States)

    Li, Yuanzhi; Shipley, Bill

    2017-01-01

    Can CSR theory, in conjunction with a recently proposed globally calibrated CSR ordination ("StrateFy"), using only three easily measured leaf traits (leaf area, specific leaf area and leaf dry matter content), predict the functional signature of herbaceous vegetation along experimentally manipulated gradients of soil fertility and disturbance? To determine this, we grew 37 herbaceous species in mixture for five years in 24 experimental mesocosms differing in factorial levels of soil resources (stress) and density-independent mortality (disturbance). We measured 16 different functional traits and then ordinated the resulting vegetation within the CSR triangle using StrateFy. We then calculated community-weighted mean (CWM) values of the competitor (CCWM), stress-tolerator (SCWM) and ruderal (RCWM) scores for each mesocosm. We found a significant increase in SCWM from low- to high-stress mesocosms, and an increase in RCWM from low- to high-disturbance mesocosms. However, CCWM did not decline significantly as the intensity of stress or disturbance increased, as predicted by CSR theory. This last result likely arose because our herbaceous species were relatively poor competitors in global comparisons, so no strong competitors in our species pool were selectively favoured in low-stress and low-disturbance mesocosms. Variation in the 13 other traits not used by StrateFy largely agreed with the predictions of CSR theory. StrateFy worked surprisingly well in our experimental study except for the C-dimension. Despite some loss of precision, it has great potential applicability in future studies due to its simplicity and generality.
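Community-weighted means such as CCWM, SCWM and RCWM are abundance-weighted averages of species-level scores. A minimal sketch, with hypothetical abundances and C-scores (the paper's actual species data are not reproduced here):

```python
def community_weighted_mean(abundances, trait_values):
    """CWM = sum(p_i * t_i) / sum(p_i) over the species i present in a plot."""
    total = sum(abundances)
    return sum(a * t for a, t in zip(abundances, trait_values)) / total

# Hypothetical mesocosm: three species with relative abundances and C-scores (%)
cwm_c = community_weighted_mean([0.5, 0.3, 0.2], [60.0, 20.0, 10.0])  # -> 38.0
```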

  18. Hydrological model calibration for derived flood frequency analysis using stochastic rainfall and probability distributions of peak flows

    Science.gov (United States)

    Haberlandt, U.; Radtke, I.

    2014-01-01

    Derived flood frequency analysis allows the estimation of design floods through hydrological modeling for poorly observed basins, while considering change and taking flood protection measures into account. There are several possible choices regarding precipitation input, discharge output and consequently the calibration of the model. The objective of this study is to compare different calibration strategies for a hydrological model considering various types of rainfall input and runoff output data sets and to propose the most suitable approach. Event based and continuous, observed hourly rainfall data as well as disaggregated daily rainfall and stochastically generated hourly rainfall data are used as input for the model. As output, short hourly and longer daily continuous flow time series as well as probability distributions of annual maximum peak flow series are employed. The performance of the strategies is evaluated using the obtained different model parameter sets for continuous simulation of discharge in an independent validation period and by comparing the model derived flood frequency distributions with the observed one. The investigations are carried out for three mesoscale catchments in northern Germany with the hydrological model HEC-HMS (Hydrologic Engineering Center's Hydrologic Modeling System). The results show that (I) the same type of precipitation input data should be used for calibration and application of the hydrological model, (II) a model calibrated using a small sample of extreme values works quite well for the simulation of continuous time series with moderate length but not vice versa, and (III) the best performance with small uncertainty is obtained when stochastic precipitation data and the observed probability distribution of peak flows are used for model calibration. This outcome suggests calibrating a hydrological model directly on probability distributions of observed peak flows using stochastic rainfall as input if its purpose is the
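A typical goodness-of-fit criterion for comparing simulated and observed flow series in calibration studies like this is the Nash-Sutcliffe efficiency; the abstract does not name its criterion, so this is an illustrative choice, not the paper's method:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 is no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sq_err = sum((s - o) ** 2 for o, s in zip(obs, sim))
    variance = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sq_err / variance
```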

  19. Calibration of a complex activated sludge model for the full-scale wastewater treatment plant.

    Science.gov (United States)

    Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw

    2011-08-01

    In this study, the results of the calibration of the complex activated sludge model implemented in BioWin software for a full-scale wastewater treatment plant are presented. Within the calibration of the model, a sensitivity analysis of its parameters and of the fractions of carbonaceous substrate was performed. In the steady-state and dynamic calibrations, successful agreement between the measured and simulated values of the output variables was achieved. The sensitivity analysis, based on calculations of the normalized sensitivity coefficient (S(i,j)), revealed that 17 (steady state) or 19 (dynamic conditions) kinetic and stoichiometric parameters are sensitive. Most of them are associated with the growth and decay of ordinary heterotrophic organisms and phosphorus-accumulating organisms. The rankings of the ten most sensitive parameters, established from calculations of the mean square sensitivity measure (δ(msqr)j), indicate that irrespective of whether the steady-state or dynamic calibration was performed, there is agreement in the sensitivity of the parameters.

  20. Global nuclear material control model

    International Nuclear Information System (INIS)

    Dreicer, J.S.; Rutherford, D.A.

    1996-01-01

    The nuclear danger can be reduced by a system for global management, protection, control, and accounting as part of a disposition program for special nuclear materials. The development of an international fissile material management and control regime requires conceptual research supported by an analytical and modeling tool that treats the nuclear fuel cycle as a complete system. Such a tool must represent the fundamental data, information, and capabilities of the fuel cycle including an assessment of the global distribution of military and civilian fissile material inventories, a representation of the proliferation pertinent physical processes, and a framework supportive of national or international perspective. They have developed a prototype global nuclear material management and control systems analysis capability, the Global Nuclear Material Control (GNMC) model. The GNMC model establishes the framework for evaluating the global production, disposition, and safeguards and security requirements for fissile nuclear material

  1. Parallel Genetic Algorithms for calibrating Cellular Automata models: Application to lava flows

    International Nuclear Information System (INIS)

    D'Ambrosio, D.; Spataro, W.; Di Gregorio, S.; Calabria Univ., Cosenza; Crisci, G.M.; Rongo, R.; Calabria Univ., Cosenza

    2005-01-01

    Cellular Automata are highly nonlinear dynamical systems which are suitable for simulating natural phenomena whose behaviour may be specified in terms of local interactions. The Cellular Automata model SCIARA, developed for the simulation of lava flows, demonstrated to be able to reproduce the behaviour of Etnean events. However, in order to apply the model for the prediction of future scenarios, a thorough calibration phase is required. This work presents the application of Genetic Algorithms, general-purpose search algorithms inspired by natural selection and genetics, for the parameter optimisation of the model SCIARA. Difficulties due to the elevated computational time suggested the adoption of a Master-Slave Parallel Genetic Algorithm for the calibration of the model with respect to the 2001 Mt. Etna eruption. Results demonstrated the usefulness of the approach, both in terms of computing time and quality of performed simulations.
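A real-coded genetic algorithm of the kind used to calibrate SCIARA can be sketched as below. The operators chosen here (truncation selection, blend crossover, Gaussian mutation) and the sphere test function are illustrative assumptions, not the operators or misfit used in the paper; in the paper's setting, `fitness` would run a full lava-flow simulation.

```python
import random

def genetic_minimize(fitness, bounds, pop_size=30, generations=60, seed=42):
    """Minimal real-coded GA: truncation selection, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        elite = scored[: pop_size // 2]                  # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            alpha = rng.random()
            child = [alpha * a + (1 - alpha) * b for a, b in zip(p1, p2)]  # blend crossover
            i = rng.randrange(dim)                       # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```

With the sphere function as a stand-in misfit, `genetic_minimize(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)` converges toward the origin; a master-slave parallelization would simply farm the `fitness` evaluations out to worker processes.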

  2. Calibration and verification of numerical runoff and erosion model

    Directory of Open Access Journals (Sweden)

    Gabrić Ognjen

    2015-01-01

    Full Text Available Alongside field and laboratory measurements, and in step with the development of computational techniques, runoff and erosion models based on equations which describe the physics of the process have been developed. Based on the KINEROS2 model, this paper presents the basic modelling principles of runoff and erosion processes founded on the St. Venant equations. Alternative equations for friction calculation, calculation of source and deposition elements and transport capacity are also shown. Numerical models based on the original and alternative equations are calibrated and verified on a laboratory-scale model. According to the results, friction calculation based on the analytic solution of laminar flow must be included in all runoff and erosion models.

  3. Calibration of a Chemistry Test Using the Rasch Model

    Directory of Open Access Journals (Sweden)

    Nancy Coromoto Martín Guaregua

    2011-11-01

    Full Text Available The Rasch model was used to calibrate a general chemistry test for the purpose of analyzing the advantages and information the model provides. The sample was composed of 219 college freshmen. Of the 12 questions used, good fit was achieved in 10. The evaluation shows that although there are items of variable difficulty, there are gaps on the scale; in order to make the test complete, it will be necessary to design new items to fill in these gaps.
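At its core, Rasch calibration estimates a difficulty b for each item from the response matrix, with P(correct) = 1 / (1 + exp(-(theta - b))). The sketch below is a simplification that holds person abilities fixed and fits the difficulties by gradient ascent; real calibrations (including the one in this study) estimate abilities and difficulties jointly with dedicated software.

```python
import numpy as np

def rasch_prob(theta, b):
    """P(correct) under the Rasch model for ability theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def calibrate_items(responses, abilities, n_steps=500, lr=0.1):
    """Fit item difficulties by gradient ascent on the Rasch log-likelihood,
    holding person abilities fixed (a simplification of joint estimation)."""
    n_persons, n_items = responses.shape
    b = np.zeros(n_items)
    for _ in range(n_steps):
        p = rasch_prob(abilities[:, None], b[None, :])
        b += lr * np.sum(p - responses, axis=0) / n_persons  # dLL/db = sum(p - y)
    return b
```

On data simulated from known difficulties, the estimates land close to the true values, which is the usual sanity check for a calibration routine.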

  4. The hydrological calibration and validation of a complexly-linked watershed reservoir model for the Occoquan watershed, Virginia

    Science.gov (United States)

    Xu, Zhongyan; Godrej, Adil N.; Grizzard, Thomas J.

    2007-10-01

    Runoff models such as HSPF and reservoir models such as CE-QUAL-W2 are used to model water quality in watersheds. Most often, the models are independently calibrated to observed data. While this approach can achieve good calibration, it does not replicate the physically-linked nature of the system. When models are linked by using the model output from an upstream model as input to a downstream model, the physical reality of a continuous watershed, where the overland and waterbody portions are parts of the whole, is better represented. There are some additional challenges in the calibration of such linked models, because the aim is to simulate the entire system as a whole, rather than piecemeal. When public entities are charged with model development, one of the driving forces is to use public-domain models. This paper describes the use of two such models, HSPF and CE-QUAL-W2, in the linked modeling of the Occoquan watershed located in northern Virginia, USA. The description of the process is provided, and results from the hydrological calibration and validation are shown. The Occoquan model consists of six HSPF and two CE-QUAL-W2 models, linked in a complex way, to simulate two major reservoirs and the associated drainage areas. The overall linked model was calibrated for a three-year period and validated for a two-year period. The results show that a successful calibration can be achieved using the linked approach, with moderate additional effort. Overall flow balances based on the three-year calibration period at four stream stations showed agreement ranging from -3.95% to +3.21%. Flow balances for the two reservoirs, compared via the daily water surface elevations, also showed good agreement (R2 values of 0.937 for Lake Manassas and 0.926 for Occoquan Reservoir), when missing (un-monitored) flows were included. Validation of the models ranged from poor to fair for the watershed models and excellent for the waterbody models, thus indicating that the

  5. Calibrating a numerical model's morphology using high-resolution spatial and temporal datasets from multithread channel flume experiments.

    Science.gov (United States)

    Javernick, L.; Bertoldi, W.; Redolfi, M.

    2017-12-01

    Accessing or acquiring high quality, low-cost topographic data has never been easier due to recent developments of the photogrammetric techniques of Structure-from-Motion (SfM). Researchers can acquire the necessary SfM imagery with various platforms, with the ability to capture millimetre resolution and accuracy, or large-scale areas with the help of unmanned platforms. Such datasets in combination with numerical modelling have opened up new opportunities to study the physical and ecological relationships of river environments. While a numerical model's overall predictive accuracy is most influenced by topography, proper model calibration requires hydraulic and morphological data; however, rich hydraulic and morphological datasets remain scarce. This lack of field and laboratory data has limited model advancement through the inability to properly calibrate, assess the sensitivity of, and validate model performance. However, new time-lapse imagery techniques have shown success in identifying instantaneous sediment transport in flume experiments and have improved hydraulic model calibration. With new capabilities to capture high-resolution spatial and temporal datasets of flume experiments, there is a need to further assess model performance. To address this demand, this research used braided river flume experiments and captured time-lapse observations of sediment transport and repeat SfM elevation surveys to provide unprecedented spatial and temporal datasets. Through newly created metrics that quantified observed and modeled activation, deactivation, and bank erosion rates, the numerical model Delft3D was calibrated. This increase in temporal data, covering both high-resolution time series and long-term coverage, significantly improved calibration routines and refined calibration parameterization. Model results show that there is a trade-off between achieving quantitative statistical and qualitative morphological representations. Specifically, statistical

  6. Electromagnetic Cell Level Calibration for ATLAS Tile Calorimeter Modules

    CERN Document Server

    Kulchitskii, Yu A; Budagov, Yu A; Khubua, J I; Rusakovitch, N A; Vinogradov, V B; Henriques, A; Davidek, T; Tokar, S; Solodkov, A; Vichou, I

    2006-01-01

    We have determined the electromagnetic calibration constants of 11% of the TileCal modules exposed to electron beams with incident angles of 20 and 90 degrees. The gain of all the calorimeter cells has been pre-equalized using the radioactive Cs source that will also be used in situ. The average values for these modules are equal to: for the flat filter method, 1.154+/-0.002 pC/GeV and 1.192+/-0.002 pC/GeV for 20 and 90 degrees; for the fit method, 1.040+/-0.002 pC/GeV and 1.068+/-0.003 pC/GeV, respectively. These average values for all cells of the calibrated modules agree within errors with the weighted average calibration constants for separate modules. Using the individual calibration constants for every module, the RMS spread of the constants is 1.9+/-0.1%. In the case of the global constant this value is 2.6+/-0.1%. Finally, we present the global constants which should be used for the electromagnetic calibration of the ATLAS Tile hadronic calorimeter data in the ATHENA framework. These constants ar...

  7. Calibration Modeling Methodology to Optimize Performance for Low Range Applications

    Science.gov (United States)

    McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.

    2010-01-01

    Calibration is a vital process in characterizing the performance of an instrument in an application environment and seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty, expressed as a percent of full-scale. However, in some applications we seek to obtain enhanced performance at the low range; therefore expressing the accuracy as a percent of reading should be considered as a modeling strategy. For example, it is common to desire to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent-of-reading accuracy requirement, which has broad application in all types of transducer applications where low-range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System that employs seven strain-gage based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.

  8. Modeling and Experimental Analysis of Piezoelectric Shakers for High-Frequency Calibration of Accelerometers

    International Nuclear Information System (INIS)

    Vogl, Gregory W.; Harper, Kari K.; Payne, Bev

    2010-01-01

    Piezoelectric shakers have been developed and used at the National Institute of Standards and Technology (NIST) for decades for high-frequency calibration of accelerometers. Recently, NIST researchers built new piezoelectric shakers in the hopes of reducing the uncertainties in the calibrations of accelerometers while extending the calibration frequency range beyond 20 kHz. The ability to build and measure piezoelectric shakers invites modeling of these systems in order to improve their design for increased performance, which includes a sinusoidal motion with lower distortion, lower cross-axial motion, and an increased frequency range. In this paper, we present a model of piezoelectric shakers and match it to experimental data. The equations of motion for all masses are solved along with the coupled state equations for the piezoelectric actuator. Finally, additional electrical elements like inductors, capacitors, and resistors are added to the piezoelectric actuator for matching of experimental and theoretical frequency responses.

  9. Global spatiotemporal distribution of soil respiration modeled using a global database

    Science.gov (United States)

    Hashimoto, S.; Carvalhais, N.; Ito, A.; Migliavacca, M.; Nishina, K.; Reichstein, M.

    2015-07-01

    The flux of carbon dioxide from the soil to the atmosphere (soil respiration) is one of the major fluxes in the global carbon cycle. At present, the accumulated field observation data cover a wide range of geographical locations and climate conditions. However, there are still large uncertainties in the magnitude and spatiotemporal variation of global soil respiration. Using a global soil respiration data set, we developed a climate-driven model of soil respiration by modifying and updating Raich's model, and the global spatiotemporal distribution of soil respiration was examined using this model. The model was applied at a spatial resolution of 0.5° and a monthly time step. Soil respiration was divided into the heterotrophic and autotrophic components of respiration using an empirical model. The estimated mean annual global soil respiration was 91 Pg C yr-1 (between 1965 and 2012; Monte Carlo 95 % confidence interval: 87-95 Pg C yr-1) and increased at the rate of 0.09 Pg C yr-2. The contribution of soil respiration from boreal regions to the total increase in global soil respiration was on the same order of magnitude as that of tropical and temperate regions, despite a lower absolute magnitude of soil respiration in boreal regions. The estimated annual global heterotrophic respiration and global autotrophic respiration were 51 and 40 Pg C yr-1, respectively. The global soil respiration responded to the increase in air temperature at the rate of 3.3 Pg C yr-1 °C-1, and Q10 = 1.4. Our study scaled up observed soil respiration values from field measurements to estimate global soil respiration and provide a data-oriented estimate of global soil respiration. The estimates are based on a semi-empirical model parameterized with over one thousand data points. Our analysis indicates that the climate controls on soil respiration may translate into an increasing trend in global soil respiration and our analysis emphasizes the relevance of the soil carbon flux from soil to
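The reported temperature sensitivity (Q10 = 1.4) corresponds to the standard exponential response R = R_ref * Q10^((T - T_ref)/10). A minimal sketch, where the reference rate and reference temperature are hypothetical placeholders rather than parameters from the paper:

```python
def soil_respiration(temp_c, r_ref=1.0, q10=1.4, t_ref=10.0):
    """Q10 response: respiration is multiplied by q10 for every 10 C of warming.
    r_ref and t_ref are hypothetical placeholder values, not fitted parameters."""
    return r_ref * q10 ** ((temp_c - t_ref) / 10.0)

# 10 C of warming above the reference multiplies respiration by exactly Q10
rate_warm = soil_respiration(20.0)  # -> 1.4 * r_ref
```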

  10. Multiobjective Optimal Algorithm for Automatic Calibration of Daily Streamflow Forecasting Model

    Directory of Open Access Journals (Sweden)

    Yi Liu

    2016-01-01

    Full Text Available A single objective function cannot describe the characteristics of a complicated hydrologic system. Consequently, it stands to reason that multiobjective functions are needed for the calibration of hydrologic models. Multiobjective algorithms based on the theory of nondomination are employed to solve this multiobjective optimization problem. In this paper, a novel multiobjective optimization method based on differential evolution with adaptive Cauchy mutation and chaos searching (MODE-CMCS) is proposed to optimize a daily streamflow forecasting model. Besides, to enhance the diversity performance of Pareto solutions, a more precise crowding distance assigner is presented in this paper. Furthermore, the traditional generalized spread metric (SP) is sensitive to the size of the Pareto set. A novel diversity performance metric, which is independent of Pareto set size, is put forward in this research. The efficacy of the new algorithm MODE-CMCS is compared with the nondominated sorting genetic algorithm II (NSGA-II) on a daily streamflow forecasting model based on a support vector machine (SVM). The results verify that the performance of MODE-CMCS is superior to that of NSGA-II for automatic calibration of hydrologic models.
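Nondomination, the core concept behind both MODE-CMCS and NSGA-II, can be sketched as follows for a minimization problem (this is the generic definition, not the paper's specific algorithm):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the nondominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]
```

For example, among the objective pairs (1, 5), (2, 2), (5, 1), (3, 3), (4, 4), the last two are dominated by (2, 2) and drop out of the front.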

  11. Nonlinear propagation model for ultrasound hydrophones calibration in the frequency range up to 100 MHz.

    Science.gov (United States)

    Radulescu, E G; Wójcik, J; Lewin, P A; Nowicki, A

    2003-06-01

    To facilitate the implementation and verification of the new ultrasound hydrophone calibration techniques described in the companion paper (elsewhere in this issue), a nonlinear propagation model was developed. A brief outline of the theoretical considerations is presented and the model's advantages and disadvantages are discussed. The results of simulations yielding spatial and temporal acoustic pressure amplitude are also presented and compared with those obtained using the KZK and Field II models. Excellent agreement between all models is evidenced. The applicability of the model to discrete wideband calibration of hydrophones is documented in the companion paper elsewhere in this volume.

  12. Bayesian Calibration, Validation and Uncertainty Quantification for Predictive Modelling of Tumour Growth: A Tutorial.

    Science.gov (United States)

    Collis, Joe; Connor, Anthony J; Paczkowski, Marcin; Kannan, Pavitra; Pitt-Francis, Joe; Byrne, Helen M; Hubbard, Matthew E

    2017-04-01

    In this work, we present a pedagogical tumour growth example, in which we apply calibration and validation techniques to an uncertain, Gompertzian model of tumour spheroid growth. The key contribution of this article is the discussion and application of these methods (that are not commonly employed in the field of cancer modelling) in the context of a simple model, whose deterministic analogue is widely known within the community. In the course of the example, we calibrate the model against experimental data that are subject to measurement errors, and then validate the resulting uncertain model predictions. We then analyse the sensitivity of the model predictions to the underlying measurement model. Finally, we propose an elementary learning approach for tuning a threshold parameter in the validation procedure in order to maximize predictive accuracy of our validated model.
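A crude version of the calibration step is fitting the growth rate of the closed-form Gompertz curve, V(t) = K * (V0/K)^exp(-a t), by sum-of-squares over a grid of candidate values. This grid search stands in for the Bayesian machinery the tutorial actually uses, and all numeric values below are hypothetical:

```python
import math

def gompertz_volume(t, v0, growth_rate, capacity):
    """Closed-form Gompertz growth: V(t) = K * (V0 / K) ** exp(-a * t)."""
    return capacity * (v0 / capacity) ** math.exp(-growth_rate * t)

def calibrate_growth_rate(times, volumes, v0, capacity, candidates):
    """Pick the candidate growth rate minimizing the sum-of-squares misfit."""
    def sse(a):
        return sum((gompertz_volume(t, v0, a, capacity) - v) ** 2
                   for t, v in zip(times, volumes))
    return min(candidates, key=sse)
```

On noise-free synthetic data the true rate is recovered exactly; with measurement error, the misfit would be weighted by the error model, which is where the calibration/validation framework of the tutorial comes in.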

  13. Calibration of sea ice dynamic parameters in an ocean-sea ice model using an ensemble Kalman filter

    Science.gov (United States)

    Massonnet, F.; Goosse, H.; Fichefet, T.; Counillon, F.

    2014-07-01

    The choice of parameter values is crucial in the course of sea ice model development, since parameters largely affect the modeled mean sea ice state. Manual tuning of parameters will soon become impractical, as sea ice models will likely include more parameters to calibrate, leading to an exponential increase in the number of possible combinations to test. Objective and automatic methods for parameter calibration are thus progressively called on to replace the traditional heuristic, "trial-and-error" recipes. Here a method for calibration of parameters based on the ensemble Kalman filter is implemented, tested and validated in the ocean-sea ice model NEMO-LIM3. Three dynamic parameters are calibrated: the ice strength parameter P*, the ocean-sea ice drag parameter Cw, and the atmosphere-sea ice drag parameter Ca. In twin, perfect-model experiments, the default parameter values are retrieved within 1 year of simulation. Using 2007-2012 real sea ice drift data, the calibration of the ice strength parameter P* and the oceanic drag parameter Cw clearly improves the Arctic sea ice drift properties. It is found that the estimation of the atmospheric drag Ca is not necessary if P* and Cw are already estimated. The large reduction in the sea ice speed bias with calibrated parameters comes with a slight overestimation of the winter sea ice areal export through Fram Strait and a slight improvement in the sea ice thickness distribution. Overall, the estimation of parameters with the ensemble Kalman filter represents an encouraging alternative to manual tuning for ocean-sea ice models.
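    The ensemble Kalman filter step described above can be illustrated outside of NEMO-LIM3. Below is a minimal sketch of stochastic EnKF parameter estimation in a twin, perfect-model setting, with a toy one-parameter forward model standing in for the sea ice model; all names and numbers are illustrative, not part of the paper.

```python
import numpy as np

def enkf_parameter_update(params, observed, forward, obs_err_std, rng):
    """One stochastic EnKF analysis step: nudge an ensemble of parameter
    vectors so the ensemble-predicted observations move toward the real ones."""
    n_ens = params.shape[0]
    # Ensemble-predicted observations for each member
    predicted = np.array([forward(p) for p in params])
    # Ensemble anomalies (deviations from the ensemble mean)
    p_anom = params - params.mean(axis=0)
    y_anom = predicted - predicted.mean(axis=0)
    # Sample cross- and auto-covariances
    cov_py = p_anom.T @ y_anom / (n_ens - 1)       # parameters vs predictions
    cov_yy = y_anom.T @ y_anom / (n_ens - 1)       # predictions vs predictions
    r = np.eye(len(observed)) * obs_err_std**2     # observation error covariance
    gain = cov_py @ np.linalg.inv(cov_yy + r)      # Kalman gain
    # Perturb the observations (stochastic EnKF) and update each member
    perturbed = observed + rng.normal(0.0, obs_err_std, size=(n_ens, len(observed)))
    return params + (perturbed - predicted) @ gain.T

rng = np.random.default_rng(0)
true_param = np.array([2.0])
forward = lambda p: np.array([3.0 * p[0]])         # toy forward "model": y = 3 * P
obs = forward(true_param)                          # perfect-model twin experiment
ens = rng.normal(5.0, 2.0, size=(50, 1))           # deliberately biased prior ensemble
for _ in range(5):
    ens = enkf_parameter_update(ens, obs, forward, 0.1, rng)
```

    As in the twin experiments of the paper, the biased prior ensemble recovers the default (true) parameter value after a few assimilation cycles.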

  14. Calibrated Blade-Element/Momentum Theory Aerodynamic Model of the MARIN Stock Wind Turbine: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Goupee, A.; Kimball, R.; de Ridder, E. J.; Helder, J.; Robertson, A.; Jonkman, J.

    2015-04-02

    In this paper, a calibrated blade-element/momentum theory aerodynamic model of the MARIN stock wind turbine is developed and documented. The model is created using open-source software and calibrated to closely emulate experimental data obtained by the DeepCwind Consortium using a genetic algorithm optimization routine. The provided model will be useful for those interested in validating floating wind turbine numerical simulators that rely on experiments utilizing the MARIN stock wind turbine, for example, the International Energy Agency Wind Task 30's Offshore Code Comparison Collaboration Continued, with Correlation project.

  15. Multi-gauge Calibration for modeling the Semi-Arid Santa Cruz Watershed in Arizona-Mexico Border Area Using SWAT

    Science.gov (United States)

    Niraula, Rewati; Norman, Laura A.; Meixner, Thomas; Callegary, James B.

    2012-01-01

    In most watershed-modeling studies, flow is calibrated at one monitoring site, usually at the watershed outlet. Like many arid and semi-arid watersheds, the main reach of the Santa Cruz watershed, located on the Arizona-Mexico border, is discontinuous for most of the year except during large flood events, and therefore the flow characteristics at the outlet do not represent the entire watershed. Calibration is required at multiple locations along the Santa Cruz River to improve model reliability. The objective of this study was to best portray surface water flow in this semiarid watershed and evaluate the effect of multi-gage calibration on flow predictions. In this study, the Soil and Water Assessment Tool (SWAT) was calibrated at seven monitoring stations, which improved model performance and increased the reliability of flow predictions in the Santa Cruz watershed. The parameters to which flow was most sensitive were the curve number (CN2), the soil evaporation compensation coefficient (ESCO), the threshold water depth in the shallow aquifer for return flow to occur (GWQMN), the base flow alpha factor (Alpha_Bf), and the effective hydraulic conductivity of the soil layer (Ch_K2). In comparison, when the model was established with a single calibration at the watershed outlet, flow predictions at the other monitoring gages were inaccurate. This study emphasizes the importance of multi-gage calibration for developing a reliable watershed model in arid and semiarid environments. The developed model, with further calibration of water quality parameters, will be an integral part of the Santa Cruz Watershed Ecosystem Portfolio Model (SCWEPM), an online decision support tool, to assess the impacts of climate change and urban growth in the Santa Cruz watershed.

  16. DEM Calibration Approach: design of experiment

    Science.gov (United States)

    Boikov, A. V.; Savelev, R. V.; Payor, V. A.

    2018-05-01

    The problem of calibrating DEM models is considered in this article. It is proposed to divide the model input parameters into those that require iterative calibration and those that can be measured directly. A new calibration method, based on a designed experiment over the iteratively calibrated parameters, is proposed. The experiment is conducted using a specially designed stand, and the results are processed with computer vision algorithms. Approximating functions are obtained and the error of the implemented software and hardware complex is estimated. The prospects of the obtained results are discussed.

  17. A global central banker competency model

    Directory of Open Access Journals (Sweden)

    David W. Brits

    2014-07-01

    Orientation: No comprehensive, integrated competency model exists for central bankers. Due to the importance of central banks in the context of the ongoing global financial crisis, it was deemed necessary to design and validate such a model. Research purpose: To craft and validate a comprehensive, integrated global central banker competency model (GCBCM) and to assess whether central banks using the GCBCM for training have a higher global influence. Motivation for the study: Limited consensus exists globally about what constitutes a ‘competent’ central banker. A quantitatively validated GCBCM would make a significant contribution to enhancing central banker effectiveness, and also provide a solid foundation for effective people management. Research approach, design and method: A blended quantitative and qualitative research approach was taken. Two sets of hypotheses were tested regarding the relationships between the GCBCM and the training offered, using the model on the one hand, and a central bank’s global influence on the other. Main findings: The GCBCM was generally accepted across all participating central banks globally, although some differences were found between central banks with higher and lower global influence. The actual training offered by central banks in terms of the model, however, is generally limited to technical-functional skills. The GCBCM is therefore at present predominantly aspirational. Significant differences were found regarding the training offered. Practical/managerial implications: By adopting the GCBCM, central banks would be able to develop organisation-specific competency models in order to enhance their organisational capabilities and play their increasingly important global role more effectively. Contribution: A generic conceptual framework for the crafting of a competency model with evaluation criteria was developed. A GCBCM was quantitatively validated.

  18. Calibration of a rainfall-runoff hydrological model and flood simulation using data assimilation

    Science.gov (United States)

    Piacentini, A.; Ricci, S. M.; Thual, O.; Coustau, M.; Marchandise, A.

    2010-12-01

    Rainfall-runoff models are crucial tools for long-term assessment of flash floods or real-time forecasting. This work focuses on the calibration of a distributed parsimonious event-based rainfall-runoff model using data assimilation. The model combines an SCS-derived runoff model and a Lag and Route routing model for each cell of a regular grid mesh. The SCS-derived runoff model is parametrized by the initial water deficit, the discharge coefficient for the soil reservoir and a lagged discharge coefficient. The Lag and Route routing model is parametrized by the velocity of travel and the lag parameter. These parameters are assumed to be constant for a given catchment, except for the initial water deficit and the velocity of travel, which are event-dependent (land use, soil type and initial moisture conditions). In the present work, a BLUE filtering technique was used to calibrate the initial water deficit and the velocity of travel for each flood event by assimilating the first available discharge measurements at the catchment outlet. The advantages of the BLUE algorithm are its low computational cost and its convenient implementation, especially in the context of the calibration of a reduced number of parameters. The assimilation algorithm was applied to two Mediterranean catchments of different size and dynamics: the Gardon d'Anduze and the Lez. The Lez catchment, with a drainage area of 114 km2, is located upstream of Montpellier. It is a karstic catchment mainly affected by floods in autumn during intense rainstorms, with short lag times and high discharge peaks (up to 480 m3.s-1 in September 2005). The Gardon d'Anduze catchment, mostly granite and schist, with a drainage area of 545 km2, lies over the départements of Lozère and Gard. It is often affected by flash and devastating floods (up to 3000 m3.s-1 in September 2002). The discharge observations at the beginning of the flood event are assimilated so that the BLUE algorithm provides optimal values for the initial water deficit and the velocity of travel.
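    The BLUE analysis used here has a closed form: xa = xb + K (y − H xb), with gain K = B Hᵀ (H B Hᵀ + R)⁻¹. A minimal sketch, with a toy linearized observation operator and illustrative numbers standing in for the two event-dependent parameters (initial water deficit and velocity of travel); none of the values come from the study itself.

```python
import numpy as np

def blue_update(xb, B, y, H, R):
    """Best Linear Unbiased Estimate of the parameters:
    xa = xb + K (y - H xb), with K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# Background guess for (initial water deficit [mm], travel velocity [m/s]) -- toy numbers
xb = np.array([100.0, 2.0])
B = np.diag([400.0, 0.25])            # background (prior) error covariance
# Hypothetical linearized sensitivity of two early discharge observations
# to the two parameters (a real application derives this from the model)
H = np.array([[-0.5, 30.0],
              [-0.8, 45.0]])
R = np.eye(2) * 4.0                   # observation error covariance
y = np.array([25.0, 40.0])            # first observed discharges at the outlet
xa = blue_update(xb, B, y, H, R)      # analysis: updated parameter values
```

    Assimilating the first discharge observations pulls the background parameters toward values whose predicted discharges fit the observations, exactly the role the BLUE step plays per flood event in the paper.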

  19. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    Directory of Open Access Journals (Sweden)

    K. Ichii

    2010-07-01

    Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of the eddy flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and more objective procedure of model calibration should be included in the further analysis.

  20. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    Science.gov (United States)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2010-07-01

    Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of the eddy flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and more objective procedure of model calibration should be included in the further analysis.

  1. Estimating the daily global solar radiation spatial distribution from diurnal temperature ranges over the Tibetan Plateau in China

    International Nuclear Information System (INIS)

    Pan, Tao; Wu, Shaohong; Dai, Erfu; Liu, Yujie

    2013-01-01

    Highlights: ► The Bristow–Campbell model was calibrated and validated over the Tibetan Plateau. ► A simple method was developed to rasterise the daily global solar radiation and obtain gridded information. ► The daily global solar radiation spatial distribution over the Tibetan Plateau was estimated. - Abstract: Daily global solar radiation is fundamental to most ecological and biophysical processes because it plays a key role in the local and global energy budget. However, gridded information about the spatial distribution of solar radiation is limited. This study aims to parameterise the Bristow–Campbell model for daily global solar radiation estimation on the Tibetan Plateau and to propose a method to rasterise the daily global solar radiation. Observed daily solar radiation and diurnal temperature data from eleven stations over the Tibetan Plateau during 1971–2010 were used to calibrate and validate the Bristow–Campbell radiation model. The extraterrestrial radiation and clear-sky atmospheric transmittance were calculated on a Geographic Information System (GIS) platform. Results show that the Bristow–Campbell model performs well after adjusting the parameters: the average Pearson’s correlation coefficient (r), Nash–Sutcliffe efficiency (NSE), ratio of the root mean square error to the standard deviation of measured data (RSR), and root mean square error (RMSE) across the 11 stations are 0.85, 0.77, 0.3 and 2.81 MJ m−2 day−1, respectively. Gridded maximum and minimum average temperature data were obtained using the Parameter-elevation Regressions on Independent Slopes Model (PRISM) and validated against Chinese Ecosystem Research Network (CERN) station data. The spatial daily global solar radiation distribution pattern was estimated and analysed by combining the solar radiation model (Bristow–Campbell) and the meteorological interpolation model (PRISM). Based on the overall results, it can be concluded that a calibrated Bristow–Campbell model performs well over the Tibetan Plateau.
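    The Bristow–Campbell model itself is a one-line transmittance formula, Rs = Ra · A · (1 − exp(−B · ΔT^C)), linking daily global solar radiation to the diurnal temperature range. A minimal sketch with common literature default coefficients, not the Tibetan Plateau calibration from this study:

```python
import math

def bristow_campbell(ra, delta_t, a=0.75, b=0.004, c=2.4):
    """Bristow-Campbell estimate of daily global solar radiation (MJ m-2 day-1)
    from extraterrestrial radiation `ra` and diurnal temperature range `delta_t` (K).
    `a` is the clear-sky transmittance; `b` and `c` are site-calibrated
    coefficients (the defaults here are generic literature values)."""
    return ra * a * (1.0 - math.exp(-b * delta_t ** c))

# A larger diurnal range implies a clearer sky, so more radiation reaches the surface
rs_clear = bristow_campbell(ra=35.0, delta_t=15.0)
rs_cloudy = bristow_campbell(ra=35.0, delta_t=5.0)
```

    Calibration, as in the study, means fitting a, b and c per station against observed radiation; the estimate is bounded above by the clear-sky value Ra · A.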

  2. Modeling, Calibration and Control for Extreme-Precision MEMS Deformable Mirrors, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Iris AO will develop electromechanical models and actuator calibration methods to enable open-loop control of MEMS deformable mirrors (DMs) with unprecedented...

  3. The Global Tsunami Model (GTM)

    Science.gov (United States)

    Lorito, S.; Basili, R.; Harbitz, C. B.; Løvholt, F.; Polet, J.; Thio, H. K.

    2017-12-01

    The tsunamis that occurred worldwide in the last two decades have highlighted the need for a thorough understanding of the risk posed by relatively infrequent but often disastrous tsunamis and the importance of a comprehensive and consistent methodology for quantifying the hazard. In the last few years, several methods for probabilistic tsunami hazard analysis have been developed and applied to different parts of the world. In an effort to coordinate and streamline these activities and make progress towards implementing the Sendai Framework for Disaster Risk Reduction (SFDRR), we have initiated a Global Tsunami Model (GTM) working group with the aims of i) enhancing our understanding of tsunami hazard and risk on a global scale and developing standards and guidelines for it, ii) providing a portfolio of validated tools for probabilistic tsunami hazard and risk assessment at a range of scales, and iii) developing a global tsunami hazard reference model. This GTM initiative has grown out of the tsunami component of the Global Assessment of Risk (GAR15), which has resulted in an initial global model of probabilistic tsunami hazard and risk. Started as an informal gathering of scientists interested in advancing tsunami hazard analysis, the GTM is currently in the process of being formalized through letters of interest from participating institutions. The initiative has now been endorsed by the United Nations International Strategy for Disaster Reduction (UNISDR) and the World Bank's Global Facility for Disaster Reduction and Recovery (GFDRR). We will provide an update on the state of the project and the overall technical framework, and discuss the technical issues that are currently being addressed, including earthquake source recurrence models, the use of aleatory variability and epistemic uncertainty, and preliminary results for a probabilistic global hazard assessment, which is an update of the model included in UNISDR GAR15.

  4. Improved variable reduction in partial least squares modelling by Global-Minimum Error Uninformative-Variable Elimination.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2017-08-22

    The calibration performance of Partial Least Squares regression (PLS) can be improved by eliminating uninformative variables. For PLS, many variable elimination methods have been developed. One is the Uninformative-Variable Elimination for PLS (UVE-PLS). However, the number of variables retained by UVE-PLS is usually still large. In UVE-PLS, variable elimination is repeated as long as the root mean squared error of cross validation (RMSECV) is decreasing. The set of variables in this first local minimum is retained. In this paper, a modification of UVE-PLS is proposed and investigated, in which UVE is repeated until no further reduction in variables is possible, followed by a search for the global RMSECV minimum. The method is called Global-Minimum Error Uninformative-Variable Elimination for PLS, denoted as GME-UVE-PLS or simply GME-UVE. After each iteration, the predictive ability of the PLS model, built with the remaining variable set, is assessed by RMSECV. The variable set with the global RMSECV minimum is then finally selected. The goal is to obtain smaller sets of variables with similar or improved predictability compared to those from the classical UVE-PLS method. The performance of the GME-UVE-PLS method is investigated using four data sets, i.e. a simulated set, NIR and NMR spectra, and a theoretical molecular descriptors set, resulting in twelve profile-response (X-y) calibrations. The selective and predictive performances of the models resulting from GME-UVE-PLS are statistically compared to those from UVE-PLS and 1-step UVE using one-sided paired t-tests. The results demonstrate that variable reduction with the proposed GME-UVE-PLS method usually eliminates significantly more variables than the classical UVE-PLS, while the predictive abilities of the resulting models are better. With GME-UVE-PLS, a lower number of uninformative variables, without a chemical meaning for the response, may be retained than with UVE-PLS. The selectivity of the classical UVE method

  5. Calibration and validation of models for short-term decomposition and N mineralization of plant residues in the tropics

    Directory of Open Access Journals (Sweden)

    Alexandre Ferreira do Nascimento

    2012-12-01

    Insight into nutrient release patterns associated with the decomposition of plant residues is important for their effective use as green manure in food production systems. Thus, this study aimed to evaluate the ability of the Century, APSIM and NDICEA simulation models to predict the decomposition and N mineralization of crop residues in the tropical Atlantic forest biome, Brazil. The simulation models were calibrated based on observed decomposition and N mineralization rates of three types of crop residues with different chemical and biochemical composition. The models were also validated for different pedo-climatic and crop residue conditions. In general, the accuracy of the decomposition and N mineralization predictions improved after calibration. Overall RMSE values for the decomposition and N mineralization of the crop materials varied from 7.4 to 64.6% before model calibration, compared to 3.7 to 16.3% after calibration. Adequate calibration of the models is therefore indispensable for using them under humid tropical conditions. The NDICEA model generally outperformed the other models. However, predictions of decomposition and N mineralization were not very accurate during the first 30 days of incubation, especially for easily decomposable crop residues. An additional model variable may be required to capture initial microbial growth as affected by the moisture dynamics of the residues, as is the case in surface residue decomposition models.
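    The decomposition models compared here are all built from first-order pools. A minimal single-pool sketch of the calibration idea, fitting the decay constant to hypothetical litterbag observations by minimizing the RMSE expressed as % of initial mass (the fit statistic used in the abstract); the data and grid are illustrative only.

```python
import math

def remaining_mass(t, k, m0=100.0):
    """Single-pool first-order decomposition: M(t) = M0 * exp(-k t)."""
    return m0 * math.exp(-k * t)

def rmse_percent(k, data, m0=100.0):
    """Root mean square error of the fit, as % of the initial mass."""
    se = [(remaining_mass(t, k, m0) - m) ** 2 for t, m in data]
    return math.sqrt(sum(se) / len(se)) / m0 * 100.0

# Hypothetical litterbag observations: (days, % mass remaining)
obs = [(0, 100.0), (30, 74.0), (60, 55.0), (120, 30.0), (240, 9.0)]

# Crude one-dimensional calibration: grid-search the decay constant k (per day)
best_k = min((0.0001 * i for i in range(1, 500)),
             key=lambda k: rmse_percent(k, obs))
```

    The same logic scales up to multi-pool models like Century or NDICEA, where calibration adjusts several pool sizes and rate constants at once.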

  6. Solar radiation modeling and measurements for renewable energy applications: data and model quality

    International Nuclear Information System (INIS)

    Myers, Daryl R.

    2005-01-01

    Measurement and modeling of broadband and spectral terrestrial solar radiation is important for the evaluation and deployment of solar renewable energy systems. We discuss recent developments in the calibration of broadband solar radiometric instrumentation and improving broadband solar radiation measurement accuracy. An improved diffuse sky reference and radiometer calibration and characterization software for outdoor pyranometer calibrations are outlined. Several broadband solar radiation model approaches, including some developed at the National Renewable Energy Laboratory, for estimating direct beam, total hemispherical and diffuse sky radiation are briefly reviewed. The latter include the Bird clear sky model for global, direct beam, and diffuse terrestrial solar radiation; the Direct Insolation Simulation Code (DISC) for estimating direct beam radiation from global measurements; and the METSTAT (Meteorological and Statistical) and Climatological Solar Radiation (CSR) models that estimate solar radiation from meteorological data. We conclude that currently the best model uncertainties are representative of the uncertainty in measured data

  7. Solar radiation modeling and measurements for renewable energy applications: data and model quality

    Energy Technology Data Exchange (ETDEWEB)

    Myers, D.R. [National Renewable Energy Laboratory, Golden, CO (United States)

    2005-07-01

    Measurement and modeling of broadband and spectral terrestrial solar radiation is important for the evaluation and deployment of solar renewable energy systems. We discuss recent developments in the calibration of broadband solar radiometric instrumentation and improving broadband solar radiation measurement accuracy. An improved diffuse sky reference and radiometer calibration and characterization software for outdoor pyranometer calibrations are outlined. Several broadband solar radiation model approaches, including some developed at the National Renewable Energy Laboratory, for estimating direct beam, total hemispherical and diffuse sky radiation are briefly reviewed. The latter include the Bird clear sky model for global, direct beam, and diffuse terrestrial solar radiation; the Direct Insolation Simulation Code (DISC) for estimating direct beam radiation from global measurements; and the METSTAT (Meteorological and Statistical) and Climatological Solar Radiation (CSR) models that estimate solar radiation from meteorological data. We conclude that currently the best model uncertainties are representative of the uncertainty in measured data. (author)

  8. Global model structures for ∗-modules

    DEFF Research Database (Denmark)

    Böhme, Benjamin

    2018-01-01

    We extend Schwede's work on the unstable global homotopy theory of orthogonal spaces and L-spaces to the category of ∗-modules (i.e., unstable S-modules). We prove a theorem which transports model structures and their properties from L-spaces to ∗-modules and show that the resulting global model structure for ∗-modules is monoidally Quillen equivalent to that of orthogonal spaces. As a consequence, there are induced Quillen equivalences between the associated model categories of monoids, which identify equivalent models for the global homotopy theory of A∞-spaces.

  9. Calibration and validation of a general infiltration model

    Science.gov (United States)

    Mishra, Surendra Kumar; Ranjan Kumar, Shashi; Singh, Vijay P.

    1999-08-01

    A general infiltration model proposed by Singh and Yu (1990) was calibrated and validated using a split sampling approach for 191 sets of infiltration data observed in the states of Minnesota and Georgia in the USA. Of the five model parameters, fc (the final infiltration rate), So (the available storage space) and exponent n were found to be more predictable than the other two parameters: m (exponent) and a (proportionality factor). A critical examination of the general model revealed that it is related to the Soil Conservation Service (1956) curve number (SCS-CN) method and its parameter So is equivalent to the potential maximum retention of the SCS-CN method and is, in turn, found to be a function of soil sorptivity and hydraulic conductivity. The general model was found to describe infiltration rate with time varying curve number.
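    The SCS-CN relationship that the parameter So maps onto can be sketched directly. The following implements the standard SCS curve number runoff equation, Q = (P − Ia)² / (P − Ia + S) with S = 25400/CN − 254 (mm), not the Singh and Yu general infiltration model itself:

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """SCS curve number direct runoff (mm) for a storm rainfall depth P (mm).
    S is the potential maximum retention; Ia = ia_ratio * S is the
    initial abstraction (0.2 is the conventional default ratio)."""
    s = 25400.0 / cn - 254.0           # retention (mm) from the curve number
    ia = ia_ratio * s
    if p_mm <= ia:                     # all rainfall abstracted, no runoff
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# 80 mm storm on a moderately pervious catchment (CN = 75)
q = scs_cn_runoff(p_mm=80.0, cn=75)
```

    As the abstract notes, So in the general model plays the same role as S here: a storage term that, via sorptivity and hydraulic conductivity, controls how much rainfall infiltrates before runoff begins.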

  10. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    Science.gov (United States)

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.

  11. A Bayesian modelling method for post-processing daily sub-seasonal to seasonal rainfall forecasts from global climate models and evaluation for 12 Australian catchments

    Science.gov (United States)

    Schepen, Andrew; Zhao, Tongtiegang; Wang, Quan J.; Robertson, David E.

    2018-03-01

    Rainfall forecasts are an integral part of hydrological forecasting systems at sub-seasonal to seasonal timescales. In seasonal forecasting, global climate models (GCMs) are now the go-to source for rainfall forecasts. For hydrological applications however, GCM forecasts are often biased and unreliable in uncertainty spread, and calibration is therefore required before use. There are sophisticated statistical techniques for calibrating monthly and seasonal aggregations of the forecasts. However, calibration of seasonal forecasts at the daily time step typically uses very simple statistical methods or climate analogue methods. These methods generally lack the sophistication to achieve unbiased, reliable and coherent forecasts of daily amounts and seasonal accumulated totals. In this study, we propose and evaluate a Rainfall Post-Processing method for Seasonal forecasts (RPP-S), which is based on the Bayesian joint probability modelling approach for calibrating daily forecasts and the Schaake Shuffle for connecting the daily ensemble members of different lead times. We apply the method to post-process ACCESS-S forecasts for 12 perennial and ephemeral catchments across Australia and for 12 initialisation dates. RPP-S significantly reduces bias in raw forecasts and improves both skill and reliability. RPP-S forecasts are also more skilful and reliable than forecasts derived from ACCESS-S forecasts that have been post-processed using quantile mapping, especially for monthly and seasonal accumulations. Several opportunities to improve the robustness and skill of RPP-S are identified. The new RPP-S post-processed forecasts will be used in ensemble sub-seasonal to seasonal streamflow applications.
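    The Schaake Shuffle component of RPP-S is simple to sketch in isolation: independently calibrated ensemble values at each lead time are reordered to follow the rank structure of a set of historical trajectories, restoring realistic temporal correlation across lead times. A minimal illustration with synthetic data (not ACCESS-S output):

```python
import numpy as np

def schaake_shuffle(ensemble, historical):
    """Reorder ensemble members (n_members x n_leadtimes) so that, at each
    lead time, their ranks match the ranks of the historical trajectories.
    Marginal distributions per lead time are preserved; only the pairing of
    values across lead times changes."""
    shuffled = np.empty_like(ensemble)
    for t in range(ensemble.shape[1]):
        ranks = historical[:, t].argsort().argsort()   # rank of each trajectory
        sorted_members = np.sort(ensemble[:, t])
        shuffled[:, t] = sorted_members[ranks]         # rank r gets r-th smallest value
    return shuffled

rng = np.random.default_rng(1)
hist = np.cumsum(rng.gamma(2.0, 1.0, size=(20, 5)), axis=1)  # historical rainfall traces
ens = rng.gamma(2.0, 1.0, size=(20, 5))                      # calibrated, unordered samples
out = schaake_shuffle(ens, hist)
```

    After the shuffle, member trajectories rise and fall together the way the historical traces do, which is what makes daily ensemble members coherent when accumulated to monthly or seasonal totals.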

  12. Calibration and Verification of the Hydraulic Model for Blue Nile River from Roseires Dam to Khartoum City

    Directory of Open Access Journals (Sweden)

    Kamal edin ELsidig Bashar

    2015-12-01

    This research represents a practical attempt to calibrate and verify a hydraulic model for the Blue Nile River. The calibration procedure compares observed data for one period with the calibration results, while verification applies observed data for a later period and compares them with the verification results. The study also examined the relationship between the river terrain and the distance between the assumed dam-failure points along the river length. The computed model values and the observed data should conform to the theoretical analysis, and the overall performance of the model is verified against another set of data. The model was calibrated using data from the gauging stations at Khartoum, Wad Medani, downstream Sennar and downstream Roseires for the period from 1 May to 31 October 1988, and verified using data from the same gauging stations for the same period in 2003 and 2010. The required available data from these stations were collected, processed and used in the model calibration. The geometry input files for the HEC-RAS model were created using a combination of ArcGIS and HEC-GeoRAS. The results revealed high correlation (R2 > 0.9) between the observed and calibrated water levels at all gauging stations for 1988, and similarly high correlation between the observed and verification water levels for 2003 and 2010. The verification results, with the fitted equations and degrees of correlation, can be used to predict future values for the same stations.

  13. Modeling urbanization patterns at a global scale with generative adversarial networks

    Science.gov (United States)

    Albert, A. T.; Strano, E.; Gonzalez, M.

    2017-12-01

    Current demographic projections show that, in the next 30 years, global population growth will mostly take place in developing countries. Coupled with a decrease in density, such population growth could potentially double the land occupied by settlements by 2050. The lack of reliable and globally consistent socio-demographic data, coupled with the limited predictive performance of traditional spatially explicit urban models, calls for developing better predictive methods, calibrated using a globally consistent dataset. Thus, richer models of the spatial interplay between the urban built-up land, population distribution and energy use are central to the discussion around the expansion and development of cities, and their impact on the environment in the context of a changing climate. In this talk we discuss methods for, and present an analysis of, urban form, defined as the spatial distribution of macroeconomic quantities that characterize a city, using modern machine learning methods and best-available remote-sensing data for the world's largest 25,000 cities. We first show that these cities may be described by a small set of patterns in radial building density, nighttime luminosity, and population density, which highlight, to first order, differences in development and land use across the world. We observe significant, spatially-dependent variance around these typical patterns, which would be difficult to model using traditional statistical methods. We take a first step in addressing this challenge by developing CityGAN, a conditional generative adversarial network model for simulating realistic urban forms. To guide learning and measure the quality of the simulated synthetic cities, we develop a specialized loss function for GAN optimization that incorporates standard spatial statistics used by urban analysis experts.
Our framework is a stark departure from both the standard physics-based approaches in the literature (that view urban forms as fractals with a

  14. Multi-site calibration, validation, and sensitivity analysis of the MIKE SHE Model for a large watershed in northern China

    Science.gov (United States)

    S. Wang; Z. Zhang; G. Sun; P. Strauss; J. Guo; Y. Tang; A. Yao

    2012-01-01

    Model calibration is essential for hydrologic modeling of large watersheds in a heterogeneous mountain environment. Little guidance is available for model calibration protocols for distributed models that aim at capturing the spatial variability of hydrologic processes. This study used the physically-based distributed hydrologic model, MIKE SHE, to contrast a lumped...

  15. Reactive Burn Model Calibration for PETN Using Ultra-High-Speed Phase Contrast Imaging

    Science.gov (United States)

    Johnson, Carl; Ramos, Kyle; Bolme, Cindy; Sanchez, Nathaniel; Barber, John; Montgomery, David

    2017-06-01

    A 1D reactive burn model (RBM) calibration for a plastic-bonded high explosive (HE) requires run-to-detonation data. In PETN (pentaerythritol tetranitrate, 1.65 g/cc) the shock-to-detonation transition (SDT) is on the order of a few millimeters. This rapid SDT imposes experimental length scales that preclude application of traditional calibration methods such as embedded electromagnetic gauge methods (EEGM), which are very effective when used to study 10 - 20 mm thick HE specimens. In recent work at Argonne National Laboratory's Advanced Photon Source we have obtained run-to-detonation data in PETN using ultra-high-speed dynamic phase contrast imaging (PCI). A reactive burn model calibration valid for 1D shock waves is obtained using density profiles spanning the transition to detonation as opposed to particle velocity profiles from EEGM. Particle swarm optimization (PSO) methods were used to operate the LANL hydrocode FLAG iteratively to refine SURF RBM parameters until a suitable parameter set was attained. These methods will be presented along with model validation simulations. The novel method described is generally applicable to 'sensitive' energetic materials, particularly those with areal densities amenable to radiography.
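
The particle swarm optimization step described above can be sketched generically; here the FLAG/SURF forward model is replaced by a toy exponential profile with made-up parameters, so the code only illustrates the optimizer, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(objective, bounds, n_particles=20, n_iters=60,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer with inertia weight."""
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()        # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# toy "calibration": recover two rate parameters from a synthetic profile
true_params = np.array([2.0, 0.5])
t = np.linspace(0, 3, 30)
observed = true_params[0] * np.exp(-true_params[1] * t)
misfit = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - observed) ** 2)
best, best_f = pso_minimize(misfit, bounds=[(0.1, 5.0), (0.05, 2.0)])
```

In the study, each objective evaluation would instead launch a hydrocode run and compare simulated density profiles against the PCI measurements.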

  16. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Laxemar

    International Nuclear Information System (INIS)

    Aneljung, Maria; Sassner, Mona; Gustafsson, Lars-Goeran

    2007-11-01

    This report describes modelling where the hydrological modelling system MIKE SHE has been used to describe surface hydrology, near-surface hydrogeology, advective transport mechanisms, and the contact between groundwater and surface water within the SKB site investigation area at Laxemar. In the MIKE SHE system, surface water flow is described with the one-dimensional modelling tool MIKE 11, which is fully and dynamically integrated with the groundwater flow module in MIKE SHE. In early 2008, a supplementary data set will be available and a process of updating, rebuilding and calibrating the MIKE SHE model based on this data set will start. Before the calibration on the new data begins, it is important to gather as much knowledge as possible on calibration methods, and to identify critical calibration parameters and areas within the model that require special attention. In this project, the MIKE SHE model has been further developed. The model area has been extended, and the present model also includes an updated bedrock model and a more detailed description of the surface stream network. The numerical model has been updated and optimized, especially regarding the modelling of evapotranspiration and the unsaturated zone, and the coupling between the surface stream network in MIKE 11 and the overland flow in MIKE SHE. An initial calibration has been made and a base case has been defined and evaluated. In connection with the calibration, the most important changes made in the model were the following: The evapotranspiration was reduced. The infiltration capacity was reduced. The hydraulic conductivities of the Quaternary deposits in the water-saturated part of the subsurface were reduced. Data from one surface water level monitoring station, four surface water discharge monitoring stations and 43 groundwater level monitoring stations (SSM series boreholes) have been used to evaluate and calibrate the model. The base case simulations showed a reasonable agreement

  17. On coupling global biome models with climate models

    International Nuclear Information System (INIS)

    Claussen, M.

    1994-01-01

    The BIOME model of Prentice et al. (1992), which predicts global vegetation patterns in equilibrium with climate, is coupled with the ECHAM climate model of the Max-Planck-Institut für Meteorologie, Hamburg. It is found that incorporation of the BIOME model into ECHAM, regardless of the coupling frequency, does not enhance the simulated climate variability, expressed in terms of differences between global vegetation patterns. Strongest changes are seen only between the initial biome distribution and the biome distribution computed after the first simulation period, provided that the climate-biome model is started from a biome distribution that resembles the present-day distribution. After the first simulation period, there is no significant shrinking, expanding, or shifting of biomes. Likewise, no trend is seen in global averages of land-surface parameters and climate variables. (orig.)

  18. Measurement of oxygen extraction fraction (OEF): An optimized BOLD signal model for use with hypercapnic and hyperoxic calibration.

    Science.gov (United States)

    Merola, Alberto; Murphy, Kevin; Stone, Alan J; Germuska, Michael A; Griffeth, Valerie E M; Blockley, Nicholas P; Buxton, Richard B; Wise, Richard G

    2016-04-01

    Several techniques have been proposed to estimate relative changes in cerebral metabolic rate of oxygen consumption (CMRO2) by exploiting combined BOLD fMRI and cerebral blood flow data in conjunction with hypercapnic or hyperoxic respiratory challenges. More recently, methods based on respiratory challenges that include both hypercapnia and hyperoxia have been developed to assess absolute CMRO2, an important parameter for understanding brain energetics. In this paper, we empirically optimize a previously presented "original calibration model" relating BOLD and blood flow signals specifically for the estimation of oxygen extraction fraction (OEF) and absolute CMRO2. To do so, we have created a set of synthetic BOLD signals using a detailed BOLD signal model to reproduce experiments incorporating hypercapnic and hyperoxic respiratory challenges at 3T. A wide range of physiological conditions was simulated by varying input parameter values (baseline cerebral blood volume (CBV0), baseline cerebral blood flow (CBF0), baseline oxygen extraction fraction (OEF0) and hematocrit (Hct)). From the optimization of the calibration model for estimation of OEF and practical considerations of hypercapnic and hyperoxic respiratory challenges, a new "simplified calibration model" is established which reduces the complexity of the original calibration model by substituting the standard parameters α and β with a single parameter θ. The optimal value of θ is determined (θ=0.06) across a range of experimental respiratory challenges. The simplified calibration model gives estimates of OEF0 and absolute CMRO2 closer to the true values used to simulate the experimental data compared to those estimated using the original model incorporating literature values of α and β. Finally, an error propagation analysis demonstrates the susceptibility of the original and simplified calibration models to measurement errors and potential violations in the underlying assumptions of isometabolism
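
For orientation, the "original calibration model" family referred to here descends from the Davis-type calibrated-BOLD relation. A minimal evaluation under common literature choices of α and β (assumed values, not the paper's optimized θ) looks like:

```python
def davis_bold(cbf_ratio, cmro2_ratio, M, alpha=0.2, beta=1.3):
    """Davis-style calibrated-BOLD model:
    dBOLD/BOLD0 = M * (1 - cbf_ratio**(alpha - beta) * cmro2_ratio**beta)
    alpha and beta are typical literature values, used here only for
    illustration of the model's structure."""
    return M * (1 - cbf_ratio ** (alpha - beta) * cmro2_ratio ** beta)

# hypercapnia under the usual isometabolism assumption (cmro2_ratio = 1)
dbold = davis_bold(cbf_ratio=1.5, cmro2_ratio=1.0, M=0.08)
```

The paper's simplified model collapses the two exponents α and β into a single fitted parameter θ, which reduces the number of assumed physiological constants.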

  19. Improved method for calibration of exchange flows for a physical transport box model of Tampa Bay, FL USA

    Science.gov (United States)

    Results for both sequential and simultaneous calibration of exchange flows between segments of a 10-box, one-dimensional, well-mixed, bifurcated tidal mixing model for Tampa Bay are reported. Calibrations were conducted for three model options with different mathematical expressi...

  20. Improved Regression Analysis of Temperature-Dependent Strain-Gage Balance Calibration Data

    Science.gov (United States)

    Ulbrich, N.

    2015-01-01

    An improved approach is discussed that may be used to directly include first- and second-order temperature effects in the load prediction algorithm of a wind tunnel strain-gage balance. The improved approach was designed for the Iterative Method that fits strain-gage outputs as a function of calibration loads and uses a load iteration scheme during the wind tunnel test to predict loads from measured gage outputs. The improved approach assumes that the strain-gage balance is at a constant uniform temperature when it is calibrated and used. First, the method introduces a new independent variable for the regression analysis of the balance calibration data. The new variable is designed as the difference between the uniform temperature of the balance and a global reference temperature. This reference temperature should be the primary calibration temperature of the balance so that, if needed, a tare load iteration can be performed. Then, two temperature-dependent terms are included in the regression models of the gage outputs. They are the temperature difference itself and the square of the temperature difference. Simulated temperature-dependent data obtained from Triumph Aerospace's 2013 calibration of NASA's ARC-30K five-component semi-span balance is used to illustrate the application of the improved approach.
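
The regression structure described — gage output fitted against load plus first- and second-order temperature-difference terms — can be sketched with synthetic data (all coefficients below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
T_ref = 72.0                          # assumed reference (calibration) temperature
load = rng.uniform(-100, 100, 200)    # synthetic calibration loads
T = rng.uniform(40, 110, 200)         # synthetic balance temperatures
dT = T - T_ref                        # the new independent variable

# synthetic gage output with first- and second-order temperature effects
output = 5.0 + 0.8 * load + 0.03 * dT + 0.002 * dT**2 \
         + rng.normal(0, 0.01, 200)

# regression model: output ~ 1 + load + dT + dT^2
X = np.column_stack([np.ones_like(load), load, dT, dT**2])
coef, *_ = np.linalg.lstsq(X, output, rcond=None)
```

With the balance at its reference temperature, dT and dT² vanish and the fit reduces to the standard load-only calibration, which is what permits a tare load iteration at that temperature.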

  1. On coupling global biome models with climate models

    OpenAIRE

    Claussen, M.

    1994-01-01

    The BIOME model of Prentice et al. (1992; J. Biogeogr. 19: 117-134), which predicts global vegetation patterns in equilibrium with climate, was coupled with the ECHAM climate model of the Max-Planck-Institut für Meteorologie, Hamburg, Germany. It was found that incorporation of the BIOME model into ECHAM, regardless of the coupling frequency, does not enhance the simulated climate variability, expressed in terms of differences between global vegetation patterns. Strongest changes are seen only betw...

  2. Performance of the air2stream model that relates air and stream water temperatures depends on the calibration method

    Science.gov (United States)

    Piotrowski, Adam P.; Napiorkowski, Jaroslaw J.

    2018-06-01

    A number of physical or data-driven models have been proposed to evaluate stream water temperatures based on hydrological and meteorological observations. However, physical models require a large amount of information that is frequently unavailable, while data-based models ignore the physical processes. Recently, the air2stream model has been proposed as an intermediate alternative that is based on physical heat budget processes, but it is so simplified that the model may be applied like data-driven ones. However, the price for simplicity is the need to calibrate eight parameters that, although they have some physical meaning, cannot be measured or evaluated a priori. As a result, the applicability and performance of the air2stream model for a particular stream rely on the efficiency of the calibration method. The original air2stream model uses an inefficient, 20-year-old approach called Particle Swarm Optimization with inertia weight. This study aims at finding an effective and robust calibration method for the air2stream model. Twelve different optimization algorithms are examined on six different streams from the northern USA (states of Washington, Oregon and New York), Poland and Switzerland, located in high-mountain, hilly and lowland areas. It is found that the performance of the air2stream model depends significantly on the calibration method. Two algorithms lead to the best results for each considered stream. The air2stream model, calibrated with the chosen optimization methods, performs favorably against classical stream water temperature models. The MATLAB code of the air2stream model and the chosen calibration procedure (CoBiDE) are available as Supplementary Material on the Journal of Hydrology web page.
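
The kind of calibration problem described can be sketched with a simplified three-parameter air-to-stream temperature model and a differential-evolution optimizer (CoBiDE, the procedure chosen in the paper, is a differential evolution variant; scipy's generic `differential_evolution` stands in here, and all parameter values are synthetic):

```python
import numpy as np
from scipy.optimize import differential_evolution

def simulate_tw(params, ta, tw0=10.0):
    """Forward-integrate a simplified 3-parameter air2stream-style model
    at a daily step: dTw/dt = a1 + a2*Ta - a3*Tw."""
    a1, a2, a3 = params
    tw = np.empty_like(ta)
    tw_prev = tw0
    for i, t_air in enumerate(ta):
        tw_prev = tw_prev + a1 + a2 * t_air - a3 * tw_prev
        tw[i] = tw_prev
    return tw

# synthetic "observed" stream temperatures from known parameters
rng = np.random.default_rng(2)
ta = 12 + 8 * np.sin(2 * np.pi * np.arange(365) / 365) + rng.normal(0, 1, 365)
true = (0.5, 0.12, 0.15)
tw_obs = simulate_tw(true, ta)

# calibrate by minimizing RMSE between simulated and "observed" series
rmse = lambda p: np.sqrt(np.mean((simulate_tw(p, ta) - tw_obs) ** 2))
result = differential_evolution(rmse, bounds=[(0, 2), (0, 1), (0.01, 1)],
                                seed=2, tol=1e-8)
```

Rerunning the calibration with different optimizers (or seeds) and comparing the attained RMSE is exactly the kind of sensitivity to the calibration method the study quantifies.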

  3. Geomechanical Simulation of Bayou Choctaw Strategic Petroleum Reserve - Model Calibration.

    Energy Technology Data Exchange (ETDEWEB)

    Park, Byoung [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    A finite element numerical analysis model has been constructed that consists of a realistic mesh capturing the geometries of the Bayou Choctaw (BC) Strategic Petroleum Reserve (SPR) site and a multi-mechanism deformation (M-D) salt constitutive model, using daily data of actual wellhead pressure and oil-brine interface. The salt creep rate is not uniform in the salt dome, and the creep test data for BC salt are limited. Therefore, model calibration is necessary to simulate the geomechanical behavior of the salt dome. The cavern volumetric closures of SPR caverns calculated from CAVEMAN are used as the field baseline measurement. The structure factor, A2, and transient strain limit factor, K0, in the M-D constitutive model are used for the calibration. The A2 value obtained experimentally from BC salt and the K0 value of Waste Isolation Pilot Plant (WIPP) salt are used as the baseline values. To adjust the magnitudes of A2 and K0, multiplication factors A2F and K0F are defined, respectively. The A2F and K0F values of the salt dome and the salt drawdown skins surrounding each SPR cavern have been determined through a number of back-fitting analyses. The cavern volumetric closures calculated from this model correspond to the predictions from CAVEMAN for six SPR caverns. Therefore, this model is able to predict past and future geomechanical behaviors of the salt dome, caverns, caprock, and interbed layers. The geological concerns raised at the BC site will be explained from this model in a follow-up report.

  4. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    Science.gov (United States)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry in order to assure the model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those that could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as a part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of the country-specific vulnerability modifiers, by use of past damage observations in the country. The Benouar (1994) ground motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for client market portfolio align with the

  5. Electricity Price Forecast Using Combined Models with Adaptive Weights Selected and Errors Calibrated by Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Da Liu

    2013-01-01

    Full Text Available A combined forecast with weights adaptively selected and errors calibrated by a Hidden Markov model (HMM) is proposed to model the day-ahead electricity price. Firstly, several single models were built to forecast the electricity price separately. Then the validation errors from every individual model were transformed into two discrete sequences, an emission sequence and a state sequence, to build the HMM, obtaining a transmission matrix and an emission matrix representing the forecasting ability state of the individual models. The combining weights of the individual models were decided by the state transmission matrices in the HMM and the ratio of best-predicted samples of each individual model among all the models in the validation set. The individual forecasts were averaged to get the combined forecast with the weights obtained above. The residuals of the combined forecast were calibrated by the possible error calculated from the emission matrix of the HMM. A case study of the day-ahead electricity market of Pennsylvania-New Jersey-Maryland (PJM), USA, suggests that the proposed method outperforms individual techniques of price forecasting, such as support vector machines (SVM), generalized regression neural networks (GRNN), day-ahead modeling, and self-organized map (SOM) similar-days modeling.
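
The combination step can be sketched in simplified form. The paper derives adaptive, state-dependent weights from an HMM over discretized validation errors; the sketch below substitutes static inverse-MSE weights (an assumption for illustration) to show how validation skill turns into combining weights:

```python
import numpy as np

def combine_forecasts(preds, obs_val, preds_val):
    """Average individual model forecasts with skill-based weights
    derived from a validation window (inverse-MSE weighting; a static
    stand-in for the paper's HMM-derived adaptive weights)."""
    mse = np.mean((preds_val - obs_val) ** 2, axis=1)  # per-model error
    w = (1.0 / mse) / np.sum(1.0 / mse)                # normalized weights
    return w @ preds, w

# three hypothetical day-ahead price models on a small validation window
obs_val = np.array([30.0, 32.0, 35.0, 33.0, 31.0])
preds_val = np.array([[31.0, 33.0, 34.0, 33.0, 30.0],   # model A
                      [28.0, 30.0, 38.0, 35.0, 29.0],   # model B
                      [30.1, 31.9, 35.0, 33.2, 31.0]])  # model C (most skilful)

# combine the models' next-day forecasts
combined, w = combine_forecasts(np.array([34.0, 36.0, 33.5]),
                                obs_val, preds_val)
```

As expected, the weight vector concentrates on the model with the smallest validation error, so the combined forecast tracks model C closely.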

  6. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Laxemar

    Energy Technology Data Exchange (ETDEWEB)

    Aneljung, Maria; Sassner, Mona; Gustafsson, Lars-Goeran (DHI Sverige AB, Lilla Bommen 1, SE-411 04 Goeteborg (Sweden))

    2007-11-15

    This report describes modelling where the hydrological modelling system MIKE SHE has been used to describe surface hydrology, near-surface hydrogeology, advective transport mechanisms, and the contact between groundwater and surface water within the SKB site investigation area at Laxemar. In the MIKE SHE system, surface water flow is described with the one-dimensional modelling tool MIKE 11, which is fully and dynamically integrated with the groundwater flow module in MIKE SHE. In early 2008, a supplementary data set will be available and a process of updating, rebuilding and calibrating the MIKE SHE model based on this data set will start. Before the calibration on the new data begins, it is important to gather as much knowledge as possible on calibration methods, and to identify critical calibration parameters and areas within the model that require special attention. In this project, the MIKE SHE model has been further developed. The model area has been extended, and the present model also includes an updated bedrock model and a more detailed description of the surface stream network. The numerical model has been updated and optimized, especially regarding the modelling of evapotranspiration and the unsaturated zone, and the coupling between the surface stream network in MIKE 11 and the overland flow in MIKE SHE. An initial calibration has been made and a base case has been defined and evaluated. In connection with the calibration, the most important changes made in the model were the following: The evapotranspiration was reduced. The infiltration capacity was reduced. The hydraulic conductivities of the Quaternary deposits in the water-saturated part of the subsurface were reduced. Data from one surface water level monitoring station, four surface water discharge monitoring stations and 43 groundwater level monitoring stations (SSM series boreholes) have been used to evaluate and calibrate the model. The base case simulations showed a reasonable agreement

  7. Selecting the optimal method to calculate daily global reference potential evaporation from CFSR reanalysis data for application in a hydrological model study

    Directory of Open Access Journals (Sweden)

    F. C. Sperna Weiland

    2012-03-01

    Full Text Available Potential evaporation (PET) is one of the main inputs of hydrological models. Yet, there is limited consensus on which PET equation is most applicable in hydrological climate impact assessments. In this study six different methods to derive global-scale reference PET daily time series from Climate Forecast System Reanalysis (CFSR) data are compared: Penman-Monteith, Priestley-Taylor, and original and re-calibrated versions of the Hargreaves and Blaney-Criddle methods. The calculated PET time series are (1) evaluated against global monthly Penman-Monteith PET time series calculated from CRU data and (2) tested for their usability in modeling global discharge cycles.

    A major finding is that for part of the investigated basins the selection of a PET method may have only a minor influence on the resulting river flow. Within the hydrological model used in this study the bias related to the PET method tends to decrease while going from PET, AET and runoff to discharge calculations. However, the performance of individual PET methods appears to be spatially variable, which stresses the necessity to select the most accurate and spatially stable PET method. The lowest root mean squared differences and the least significant deviations (95% significance level) between monthly CFSR-derived PET time series and CRU-derived PET were obtained for a cell-specific re-calibrated Blaney-Criddle equation. However, results show that this re-calibrated form is likely to be unstable under changing climate conditions and less reliable for the calculation of daily time series. Although often recommended, the Penman-Monteith equation applied to the CFSR data did not outperform the other methods in an evaluation against PET derived with the Penman-Monteith equation from CRU data. In arid regions (e.g. Sahara, central Australia, US deserts), the equation resulted in relatively low PET values and, consequently, led to relatively high discharge values for dry basins (e
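
One of the compared methods, the original Hargreaves equation, is compact enough to show directly (the study's re-calibrated variants adjust its coefficients per grid cell; the example values below are illustrative):

```python
import math

def hargreaves_pet(tmean, tmax, tmin, ra):
    """Original Hargreaves reference PET (mm/day).

    tmean, tmax, tmin : daily mean, max and min air temperature (degC)
    ra : extraterrestrial radiation expressed as an evaporation
         equivalent (mm/day)
    """
    return 0.0023 * ra * (tmean + 17.8) * math.sqrt(tmax - tmin)

# example: a warm day with a 12 degC diurnal range and Ra = 15 mm/day
pet = hargreaves_pet(tmean=25.0, tmax=31.0, tmin=19.0, ra=15.0)
```

Because it needs only temperature and latitude-dependent Ra, Hargreaves is a common fallback when the humidity, wind and radiation inputs required by Penman-Monteith are unavailable or unreliable.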

  8. On the calibration strategies of the Johnson–Cook strength model: Discussion and applications to experimental data

    International Nuclear Information System (INIS)

    Gambirasio, Luca; Rizzi, Egidio

    2014-01-01

    The present paper aims at assessing the various procedures adoptable for calibrating the parameters of the so-called Johnson–Cook strength model, expressing the deviatoric behavior of elastoplastic materials, with particular reference to the description of High Strain Rate (HSR) phenomena. The procedures rely on input experimental data corresponding to a set of hardening functions recorded at different equivalent plastic strain rates and temperatures. After a brief review of the main characteristics of the Johnson–Cook strength model, five different calibration strategies are framed and widely described. The assessment is implemented through a systematic application of each calibration strategy to three different real material cases, i.e. a DH-36 structural steel, a commercially pure niobium and an AL-6XN stainless steel. Experimental data available in the literature are considered. Results are presented in terms of plots showing the predicted Johnson–Cook hardening functions against the experimental trends, together with tables describing the fitting problematics which arise in each case, by assessing both lower yield stress and overall plastic flow introduced errors. The consequences determined by each calibration approach are then carefully compared and evaluated. A discussion on the positive and negative aspects of each strategy is presented and some suggestions on how to choose the best calibration approach are outlined, by considering the available experimental data and the objectives of the following modeling process. The proposed considerations should provide a useful guideline in the process of determining the best Johnson–Cook parameters in each specific situation in which the model is going to be adopted. A last section introduces some considerations about the calibration of the Johnson–Cook strength model through experimental data different from those consisting in a set of hardening functions relative to different equivalent plastic strain
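
The Johnson–Cook flow stress itself has the well-known multiplicative form below; the parameter values in the example are illustrative for a generic steel, not constants calibrated in the paper:

```python
import math

def johnson_cook_stress(eps_p, eps_rate, T, A, B, n, C, m,
                        eps_rate_ref=1.0, T_room=293.0, T_melt=1773.0):
    """Johnson-Cook flow stress (MPa):
    sigma = (A + B*eps_p**n) * (1 + C*ln(eps_rate*)) * (1 - T***m)
    with eps_rate* = eps_rate/eps_rate_ref and
    T* = (T - T_room)/(T_melt - T_room)."""
    rate_star = eps_rate / eps_rate_ref
    t_star = (T - T_room) / (T_melt - T_room)
    return ((A + B * eps_p ** n)
            * (1 + C * math.log(rate_star))
            * (1 - t_star ** m))

# illustrative evaluation at 10% plastic strain, 1000/s, room temperature
sigma = johnson_cook_stress(eps_p=0.1, eps_rate=1000.0, T=293.0,
                            A=350.0, B=275.0, n=0.36, C=0.022, m=1.0)
```

The calibration strategies discussed in the paper differ mainly in whether the hardening (A, B, n), rate (C) and thermal (m) factors are fitted sequentially from separated data subsets or simultaneously against all hardening curves at once.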

  9. The dielectric calibration of capacitance probes for soil hydrology using an oscillation frequency response model

    Directory of Open Access Journals (Sweden)

    D. A. Robinson

    1998-01-01

    Full Text Available Capacitance probes are a fast, safe and relatively inexpensive means of measuring the relative permittivity of soils, which can then be used to estimate soil water content. Initial experiments with capacitance probes used empirical calibrations between the frequency response of the instrument and soil water content. This has the disadvantage that the calibrations are instrument-dependent. A twofold calibration strategy is described in this paper; the instrument frequency is turned into relative permittivity (dielectric constant), which can then be calibrated against soil water content. This approach offers the advantage of making the second calibration, from soil permittivity to soil water content, instrument-independent, and allows comparison with other dielectric methods, such as time domain reflectometry. A physically based model, used to calibrate capacitance probes in terms of relative permittivity (εr), is presented. The model, which was developed from circuit analysis, successfully predicts the frequency response of the instrument in liquids with different relative permittivities, using only measurements in air and water. It was used successfully to calibrate 10 prototype surface capacitance insertion probes (SCIPS) and a depth capacitance probe. The findings demonstrate that the geometric properties of the instrument electrodes were an important parameter in the model, the value of which could be fixed through measurement. The relationship between apparent soil permittivity and volumetric water content has been the subject of much research in the last 30 years. Two lines of investigation have developed, time domain reflectometry (TDR) and capacitance. Both methods claim to measure relative permittivity and should therefore be comparable. This paper demonstrates that the IH capacitance probe overestimates relative permittivity as the ionic conductivity of the medium increases. Electrically conducting ionic solutions were used to test the
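
The second, instrument-independent calibration step — permittivity to volumetric water content — is commonly done with the Topp et al. (1980) empirical polynomial, shown here for illustration (the paper's own first-step calibration is the circuit-based frequency-to-permittivity model, which is not reproduced here):

```python
def topp_water_content(eps_r):
    """Topp et al. (1980) empirical relation between apparent relative
    permittivity and volumetric water content (m3/m3), valid for most
    mineral soils up to roughly eps_r ~ 40."""
    return (-5.3e-2 + 2.92e-2 * eps_r
            - 5.5e-4 * eps_r ** 2 + 4.3e-6 * eps_r ** 3)

theta = topp_water_content(20.0)   # eps_r ~ 20 corresponds to moist soil
```

Because both TDR and capacitance probes report relative permittivity, the same second-step relation can in principle serve either instrument, which is exactly the comparability argument the abstract makes.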

  10. Nimbus-7 Earth radiation budget calibration history. Part 2: The Earth flux channels

    Science.gov (United States)

    Kyle, H. Lee; Hucek, Douglas Richard R.; Ardanuy, Philip E.; Hickey, John R.; Maschhoff, Robert H.; Penn, Lanning M.; Groveman, Brian S.; Vallette, Brenda J.

    1994-01-01

    Nine years (November 1978 to October 1987) of Nimbus-7 Earth radiation budget (ERB) products have shown that the global annual mean emitted longwave, absorbed shortwave, and net radiation were constant to within about ±0.5 W/sq m. Further, most of the small annual variations in the emitted longwave have been shown to be real. To obtain this measurement accuracy, the wide-field-of-view (WFOV) Earth-viewing channels 12 (0.2 to over 50 micrometers), 13 (0.2 to 3.8 micrometers), and 14 (0.7 to 2.8 micrometers) have been characterized in their satellite environment to account for signal variations not considered in the prelaunch calibration equations. Calibration adjustments have been derived for (1) extraterrestrial radiation incident on the detectors, (2) long-term degradation of the sensors, and (3) thermal perturbations within the ERB instrument. The first item is important in all the channels; the second, mainly in channels 13 and 14, and the third, only in channels 13 and 14. The Sun is used as a stable calibration source to monitor the long-term degradation of the various channels. Channel 12, which is reasonably stable to both thermal perturbations and sensor degradation, is used as a reference and calibration transfer agent for the drifting sensitivities of the filtered channels 13 and 14. Redundant calibration procedures were utilized. Laboratory studies complemented analyses of the satellite data. Two nearly independent models were derived to account for the thermal perturbations in channels 13 and 14. The global annual mean terrestrial shortwave and longwave signals proved stable enough to act as secondary calibration sources. Instantaneous measurements may still, at times, be in error by as much as a few W/sq m, but the long-term averages are stable to within a fraction of a W/sq m.

  11. Astronomically calibrated 40Ar/39Ar age for the Toba supereruption and global synchronization of late Quaternary records

    Science.gov (United States)

    Storey, Michael; Roberts, Richard G.; Saidin, Mokhtar

    2012-11-01

    The Toba supereruption in Sumatra, ∼74 thousand years (ka) ago, was the largest terrestrial volcanic event of the Quaternary. Ash and sulfate aerosols were deposited in both hemispheres, forming a time-marker horizon that can be used to synchronize late Quaternary records globally. A precise numerical age for this event has proved elusive, with dating uncertainties larger than the millennial-scale climate cycles that characterized this period. We report an astronomically calibrated 40Ar/39Ar age of 73.88 ± 0.32 ka (1σ, full external errors) for sanidine crystals extracted from Toba deposits in the Lenggong Valley, Malaysia, 350 km from the eruption source and 6 km from an archaeological site with stone artifacts buried by ash. If these artifacts were made by Homo sapiens, as has been suggested, then our age indicates that modern humans had reached Southeast Asia by ∼74 ka ago. Our 40Ar/39Ar age is an order-of-magnitude more precise than previous estimates, resolving the timing of the eruption to the middle of the cold interval between Dansgaard-Oeschger events 20 and 19, when a peak in sulfate concentration occurred as registered by Greenland ice cores. This peak is followed by a ∼10 °C drop in the Greenland surface temperature over ∼150 y, revealing the possible climatic impact of the eruption. Our 40Ar/39Ar age also provides a high-precision calibration point for other ice, marine, and terrestrial archives containing Toba sulfates and ash, facilitating their global synchronization at unprecedented resolution for a critical period in Earth and human history beyond the range of 14C dating.

  12. Global Analysis, Interpretation and Modelling: An Earth Systems Modelling Program

    Science.gov (United States)

    Moore, Berrien, III; Sahagian, Dork

    1997-01-01

    The Goal of the GAIM is: To advance the study of the coupled dynamics of the Earth system using as tools both data and models; to develop a strategy for the rapid development, evaluation, and application of comprehensive prognostic models of the Global Biogeochemical Subsystem which could eventually be linked with models of the Physical-Climate Subsystem; to propose, promote, and facilitate experiments with existing models or by linking subcomponent models, especially those associated with IGBP Core Projects and with WCRP efforts. Such experiments would be focused upon resolving interface issues and questions associated with developing an understanding of the prognostic behavior of key processes; to clarify key scientific issues facing the development of Global Biogeochemical Models and the coupling of these models to General Circulation Models; to assist the Intergovernmental Panel on Climate Change (IPCC) process by conducting timely studies that focus upon elucidating important unresolved scientific issues associated with the changing biogeochemical cycles of the planet and upon the role of the biosphere in the physical-climate subsystem, particularly its role in the global hydrological cycle; and to advise the SC-IGBP on progress in developing comprehensive Global Biogeochemical Models and to maintain scientific liaison with the WCRP Steering Group on Global Climate Modelling.

  13. A hierarchical analysis of terrestrial ecosystem model Biome-BGC: Equilibrium analysis and model calibration

    Energy Technology Data Exchange (ETDEWEB)

    Thornton, Peter E [ORNL; Wang, Weile [ORNL; Law, Beverly E. [Oregon State University; Nemani, Ramakrishna R [NASA Ames Research Center

    2009-01-01

    The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers by a set of equilibrium equations that are derived from Biome-BGC algorithms and are based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations are able to estimate carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis; and the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrate/analyze Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.
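
    The equilibrium shortcut described in this record can be illustrated with a toy one-pool carbon balance (hypothetical numbers, not Biome-BGC code): at steady state the mass-balance equation yields the stock directly, without spinning the model up.

```python
# Toy one-pool carbon balance: dC/dt = NPP - k * C.
# Spin-up iterates to steady state; the equilibrium (mass-balance)
# approach solves NPP = k * C* in closed form, as in the tiered scheme above.

def spin_up(npp, k, years=10000, dt=1.0):
    """Iterate the pool forward in time until it converges to steady state."""
    c = 0.0
    for _ in range(years):
        c += (npp - k * c) * dt
    return c

def equilibrium(npp, k):
    """Closed-form steady state from mass balance: C* = NPP / k."""
    return npp / k

npp, k = 500.0, 0.02          # gC/m2/yr input, 1/yr turnover (hypothetical)
c_spin = spin_up(npp, k)
c_eq = equilibrium(npp, k)
# both approach 25000 gC/m2; the closed form needs no iteration
```

    The same idea generalizes to coupled pools: the equilibrium stocks solve a small linear system instead of a multi-millennium simulation.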

  14. Displaced calibration of PM10 measurements using spatio-temporal models

    Directory of Open Access Journals (Sweden)

    Daniela Cocchi

    2007-12-01

    PM10 monitoring networks are equipped with heterogeneous samplers. Some of these samplers are known to underestimate true levels of concentrations (non-reference samplers). In this paper we propose a hierarchical spatio-temporal Bayesian model for the calibration of measurements recorded using non-reference samplers, by borrowing strength from non-co-located reference sampler measurements.

  15. A Bayesian modelling method for post-processing daily sub-seasonal to seasonal rainfall forecasts from global climate models and evaluation for 12 Australian catchments

    Directory of Open Access Journals (Sweden)

    A. Schepen

    2018-03-01

    Rainfall forecasts are an integral part of hydrological forecasting systems at sub-seasonal to seasonal timescales. In seasonal forecasting, global climate models (GCMs) are now the go-to source for rainfall forecasts. For hydrological applications, however, GCM forecasts are often biased and unreliable in uncertainty spread, and calibration is therefore required before use. There are sophisticated statistical techniques for calibrating monthly and seasonal aggregations of the forecasts. However, calibration of seasonal forecasts at the daily time step typically uses very simple statistical methods or climate analogue methods. These methods generally lack the sophistication to achieve unbiased, reliable and coherent forecasts of daily amounts and seasonal accumulated totals. In this study, we propose and evaluate a Rainfall Post-Processing method for Seasonal forecasts (RPP-S), which is based on the Bayesian joint probability modelling approach for calibrating daily forecasts and the Schaake Shuffle for connecting the daily ensemble members of different lead times. We apply the method to post-process ACCESS-S forecasts for 12 perennial and ephemeral catchments across Australia and for 12 initialisation dates. RPP-S significantly reduces bias in raw forecasts and improves both skill and reliability. RPP-S forecasts are also more skilful and reliable than forecasts derived from ACCESS-S forecasts that have been post-processed using quantile mapping, especially for monthly and seasonal accumulations. Several opportunities to improve the robustness and skill of RPP-S are identified. The new RPP-S post-processed forecasts will be used in ensemble sub-seasonal to seasonal streamflow applications.
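
    The Schaake Shuffle mentioned in this record reorders each day's calibrated ensemble so that its rank structure follows a historical template, restoring realistic day-to-day correlation across lead times. A minimal NumPy sketch (array shapes and variable names are illustrative, not the RPP-S implementation):

```python
import numpy as np

def schaake_shuffle(forecast, template):
    """Reorder each day's ensemble so its ranks follow the template.

    forecast, template: arrays of shape (n_members, n_days).
    Returns the forecast values reordered, day by day, to reproduce
    the rank structure of the historical template.
    """
    out = np.empty_like(forecast)
    for d in range(forecast.shape[1]):
        sorted_fc = np.sort(forecast[:, d])
        ranks = np.argsort(np.argsort(template[:, d]))  # rank of each template member
        out[:, d] = sorted_fc[ranks]
    return out

rng = np.random.default_rng(0)
fc = rng.gamma(2.0, 5.0, size=(5, 3))    # raw calibrated daily members (hypothetical)
hist = rng.gamma(2.0, 5.0, size=(5, 3))  # historical template of observed sequences
shuffled = schaake_shuffle(fc, hist)
# Each day keeps the same marginal values, reordered to match template ranks.
```

    Because only the ordering changes, the calibrated daily marginals are preserved while accumulated totals inherit plausible temporal structure.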

  16. Sediment plume model-a comparison between use of measured turbidity data and satellite images for model calibration.

    Science.gov (United States)

    Sadeghian, Amir; Hudson, Jeff; Wheater, Howard; Lindenschmidt, Karl-Erich

    2017-08-01

    In this study, we built a two-dimensional sediment transport model of Lake Diefenbaker, Saskatchewan, Canada. It was calibrated by using measured turbidity data from stations along the reservoir and satellite images based on a flood event in 2013. In June 2013, there was heavy rainfall for two consecutive days on the frozen and snow-covered ground in the higher elevations of western Alberta, Canada. The runoff from the rainfall and the melted snow caused one of the largest recorded inflows to the headwaters of the South Saskatchewan River and Lake Diefenbaker downstream. An estimated discharge peak of over 5200 m³/s arrived at the reservoir inlet with a thick sediment front within a few days. The sediment plume moved quickly through the entire reservoir and remained visible in satellite images for over 2 weeks along most of the reservoir, leading to concerns regarding water quality. The aims of this study are to compare, quantitatively and qualitatively, the efficacy of using turbidity data and satellite images for sediment transport model calibration and to determine how accurately a sediment transport model can simulate sediment transport based on each of them. Both turbidity data and satellite images were very useful for calibrating the sediment transport model quantitatively and qualitatively. Model predictions and turbidity measurements show that the flood water and suspended sediments entered upstream fairly well mixed and moved downstream as overflow with a sharp gradient at the plume front. The model results suggest that the settling and resuspension rates of sediment are directly proportional to flow characteristics and that the use of constant coefficients leads to model underestimation or overestimation unless more data on sediment formation become available. Hence, this study reiterates the significance of the availability of data on sediment distribution and characteristics for building a robust and reliable sediment transport model.

  17. Earthquake casualty models within the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system

    Science.gov (United States)

    Jaiswal, Kishor; Wald, David J.; Earle, Paul S.; Porter, Keith A.; Hearne, Mike

    2011-01-01

    Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in the fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for the wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.

  18. New global ICT-based business models

    DEFF Research Database (Denmark)

    The New Global Business Model (NEWGIBM) book describes the background, theory references, case studies, results and learning imparted to a research group by the ICT-supported NEWGIBM project during the period 2005-2011. The book serves as part of the final evaluation and documentation of the NEWGIBM project and is supported by results from the following projects: M-commerce, Global Innovation, Global Ebusiness & M-commerce, The Blue Ocean project, International Center for Innovation and Women in Business, and NEFFICS. Its chapters address what the NEWGIBM cases show, the strategy concept in light of the increased importance of innovative business models, successful implementation of global business model innovation, and the globalisation of ICT-based business models today and in 2020.

  19. Modeling Global Biogenic Emission of Isoprene: Exploration of Model Drivers

    Science.gov (United States)

    Alexander, Susan E.; Potter, Christopher S.; Coughlan, Joseph C.; Klooster, Steven A.; Lerdau, Manuel T.; Chatfield, Robert B.; Peterson, David L. (Technical Monitor)

    1996-01-01

    Vegetation provides the major source of isoprene emission to the atmosphere. We present a modeling approach to estimate global biogenic isoprene emission. The isoprene flux model is linked to a process-based computer simulation model of biogenic trace-gas fluxes that operates on scales that link regional and global data sets and ecosystem nutrient transformations. Isoprene emission estimates are determined from estimates of ecosystem specific biomass, emission factors, and algorithms based on light and temperature. Our approach differs from an existing modeling framework by including the process-based global model for terrestrial ecosystem production, satellite derived ecosystem classification, and isoprene emission measurements from a tropical deciduous forest. We explore the sensitivity of model estimates to input parameters. The resulting emission products from the global 1° × 1° coverage provided by the satellite datasets and the process model allow flux estimations across large spatial scales and enable direct linkage to atmospheric models of trace-gas transport and transformation.

  20. Soil Moisture Active/Passive (SMAP) Radiometer Subband Calibration and Calibration Drift

    Science.gov (United States)

    Peng, Jinzheng; Piepmeier, Jeffrey R.; De Amici, Giovanni; Mohammed, Priscilla

    2016-01-01

    The SMAP is one of four first-tier missions recommended by the US National Research Council's Committee on Earth Science and Applications from Space (Earth Science and Applications from Space: National Imperatives for the Next Decade and Beyond, Space Studies Board, National Academies Press, 2007). The observatory was launched on Jan 31, 2015. The goal of the SMAP is to measure the global soil moisture and freeze/thaw from space. The L-band radiometer is the passive portion of the spaceborne instrument. It measures all four Stokes antenna temperatures and outputs counts. The Level 1B Brightness Temperature (L1B_TB) science algorithm converts radiometer counts to the Earth's surface brightness temperature. The results are reported in the radiometer level 1B data product together with the calibrated antenna temperature (TA) and all of the corrections for unwanted source contributions. The calibrated L1B data product is required to satisfy the overall radiometer error budget of 1.3 K needed to meet the soil moisture requirement of 0.04 volumetric fraction uncertainty and the calibration drift requirement of no larger than 0.4 K per month.
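
    Count-to-temperature conversion of the kind this record describes is, in its simplest form, a two-point linear calibration against hot and cold reference looks. The sketch below is a generic illustration with hypothetical numbers, not the SMAP L1B algorithm:

```python
# Generic two-point (hot/cold reference) radiometer calibration sketch:
# a linear map from raw counts to antenna temperature.

def counts_to_ta(counts, c_cold, c_hot, t_cold, t_hot):
    """Convert raw counts to antenna temperature (K) from two reference looks."""
    gain = (t_hot - t_cold) / (c_hot - c_cold)  # K per count
    offset = t_cold - gain * c_cold
    return gain * counts + offset

# Hypothetical reference counts and temperatures:
ta = counts_to_ta(5200, c_cold=4000, c_hot=6000, t_cold=80.0, t_hot=300.0)
# gain = 0.11 K/count, so ta = 212.0 K
```

    In practice the gain and offset drift with instrument temperature and age, which is why drift corrections such as those described above are needed on top of the linear conversion.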

  2. Analysis of variation in calibration curves for Kodak XV radiographic film using model-based parameters.

    Science.gov (United States)

    Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L

    2010-08-05

    Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation and slope) were 1.8%, 5.7% and 7.7% (1σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes. It increases with increasing depths above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
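
    The single-target single-hit response named in this record can be written as OD(D) = background + saturation·(1 − exp(−slope·D)) and fitted with nonlinear least squares. A sketch on synthetic data (parameter values hypothetical, not the paper's fits):

```python
import numpy as np
from scipy.optimize import curve_fit

def single_hit(dose, background, saturation, slope):
    """Single-target single-hit response: OD rises exponentially to saturation."""
    return background + saturation * (1.0 - np.exp(-slope * dose))

# Hypothetical calibration points (dose in cGy, net optical density):
dose = np.array([0.0, 16, 32, 64, 96, 128], dtype=float)
true_params = (0.2, 2.5, 0.01)          # background, saturation, slope
od = single_hit(dose, *true_params)

params, _ = curve_fit(single_hit, dose, od, p0=(0.1, 2.0, 0.02))
# params recovers (background, saturation, slope)
```

    With the three model parameters fitted, one or two measured dose points suffice to pin down a new curve, which is the workload reduction the record describes.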

  3. Increasing parameter certainty and data utility through multi-objective calibration of a spatially distributed temperature and solute model

    Directory of Open Access Journals (Sweden)

    C. Bandaragoda

    2011-05-01

    To support the goal of distributed hydrologic and instream model predictions based on physical processes, we explore multi-dimensional parameterization determined by a broad set of observations. We present a systematic approach to using various data types at spatially distributed locations to decrease parameter bounds sampled within calibration algorithms that ultimately provide information regarding the extent of individual processes represented within the model structure. Through the use of a simulation matrix, parameter sets are first locally optimized by fitting the respective data at one or two locations, and then the best results are selected to resolve which parameter sets perform best at all locations, or globally. This approach is illustrated using the Two-Zone Temperature and Solute (TZTS) model for a case study in the Virgin River, Utah, USA, where temperature and solute tracer data were collected at multiple locations and zones within the river that represent the fate and transport of both heat and solute through the study reach. The result was a narrowed parameter space and increased parameter certainty which, based on our results, would not have been as successful if only single-objective algorithms were used. We also found that the global optimum is best defined by multiple spatially distributed local optima, which supports the hypothesis that there is a discrete and narrowly bounded parameter range that represents the processes controlling the dominant hydrologic responses. Further, we illustrate that the optimization process itself can be used to determine which observed responses and locations are most useful for estimating the parameters that result in a global fit, to guide future data collection efforts.

  4. Differential Evolution algorithm applied to FSW model calibration

    Science.gov (United States)

    Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.

    2014-03-01

    Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
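
    A calibration loop of the kind described above can be sketched with SciPy's differential_evolution, which exposes the mutation scaling factor and crossover rate studied in this record. The toy model and parameter values below are illustrative, not the FSW CFD model:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy "model" with two adjustable parameters standing in for the heat-input
# coefficients; the objective is the squared mismatch against observations.
def model(params, x):
    a, b = params
    return a * np.exp(-b * x)

x_obs = np.linspace(0.0, 2.0, 20)
y_obs = model((3.0, 1.5), x_obs)            # synthetic "measured" data

def objective(params):
    return np.sum((model(params, x_obs) - y_obs) ** 2)

result = differential_evolution(
    objective,
    bounds=[(0.1, 10.0), (0.1, 5.0)],       # parameter search bounds
    mutation=(0.5, 1.0),                    # DE mutation scaling factor (dithered)
    recombination=0.7,                      # DE crossover rate
    seed=42, tol=1e-10,
)
# result.x approaches the true parameters (3.0, 1.5)
```

    For an expensive CFD forward model the objective call would wrap a full simulation run, which is why the evolution strategy and control parameters studied in the record matter for the computational cost.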

  5. Calibrating emergent phenomena in stock markets with agent based models.

    Science.gov (United States)

    Fievet, Lucas; Sornette, Didier

    2018-01-01

    Since the 2008 financial crisis, agent-based models (ABMs), which account for out-of-equilibrium dynamics, heterogeneous preferences, time horizons and strategies, have often been envisioned as the new frontier that could revolutionise and displace the more standard models and tools in economics. However, their adoption and generalisation is drastically hindered by the absence of general reliable operational calibration methods. Here, we start with a different calibration angle that qualifies an ABM for its ability to achieve abnormal trading performance with respect to the buy-and-hold strategy when fed with real financial data. Starting from the common definition of standard minority and majority agents with binary strategies, we prove their equivalence to optimal decision trees. This efficient representation allows us to exhaustively test all meaningful single agent models for their potential anomalous investment performance, which we apply to the NASDAQ Composite index over the last 20 years. We uncover large significant predictive power, with anomalous Sharpe ratio and directional accuracy, in particular during the dotcom bubble and crash and the 2008 financial crisis. A principal component analysis reveals transient convergence between the anomalous minority and majority models. A novel combination of the optimal single-agent models of both classes into a two-agent model leads to remarkably superior investment performance, especially during the periods of bubbles and crashes. Our design opens the field of ABMs to construct novel types of advanced warning systems of market crises, based on the emergent collective intelligence of ABMs built on carefully designed optimal decision trees that can be reverse engineered from real financial data.

  7. Geomechanical Model Calibration Using Field Measurements for a Petroleum Reserve

    Science.gov (United States)

    Park, Byoung Yoon; Sobolik, Steven R.; Herrick, Courtney G.

    2018-03-01

    A finite element numerical analysis model has been constructed that consists of a mesh that effectively captures the geometries of the Bayou Choctaw (BC) Strategic Petroleum Reserve (SPR) site and the multimechanism deformation (M-D) salt constitutive model, using the daily data of actual wellhead pressure and oil-brine interface location. The salt creep rate is not uniform in the salt dome, and the creep test data for BC salt are limited. Therefore, model calibration is necessary to simulate the geomechanical behavior of the salt dome. The cavern volumetric closures of SPR caverns calculated from CAVEMAN are used as the field baseline measurement. The structure factor, A₂, and transient strain limit factor, K₀, in the M-D constitutive model are used for the calibration. The value of A₂, obtained experimentally from BC salt, and the value of K₀, obtained from Waste Isolation Pilot Plant salt, are used for the baseline values. To adjust the magnitudes of A₂ and K₀, multiplication factors A₂F and K₀F are defined, respectively. The A₂F and K₀F values of the salt dome and salt drawdown skins surrounding each SPR cavern have been determined through a number of back analyses. The cavern volumetric closures calculated from this model correspond to the predictions from CAVEMAN for six SPR caverns. Therefore, this model is able to predict behaviors of the salt dome, caverns, caprock, and interbed layers. The geotechnical concerns associated with the BC site from this analysis will be explained in a follow-up paper.

  8. Modeling Prairie Pothole Lakes: Linking Satellite Observation and Calibration (Invited)

    Science.gov (United States)

    Schwartz, F. W.; Liu, G.; Zhang, B.; Yu, Z.

    2009-12-01

    This paper examines the response of a complex lake wetland system to variations in climate. The focus is on the lakes and wetlands of the Missouri Coteau, which is part of the larger Prairie Pothole Region of the Central Plains of North America. Information on lake size was enumerated from satellite images and yielded power law relationships for different hydrological conditions. More traditional lake-stage data were made available to us from the USGS Cottonwood Lake Study Site in North Dakota. A Probabilistic Hydrologic Model (PHM) was developed to simulate lake complexes comprised of tens of thousands or more individual closed-basin lakes and wetlands. What is new about this model is a calibration scheme that utilizes remotely sensed data on lake area as well as stage data for individual lakes. Some ¼ million individual data points are used within a Genetic Algorithm to calibrate the model by comparing the simulated results with observed lake area-frequency power law relationships derived from Landsat images and water depths from seven individual lakes and wetlands. The simulated lake behaviors show good agreement with the observations under average, dry, and wet climatic conditions. The calibrated model is used to examine the impact of climate variability on a large lake complex in North Dakota, in particular the “Dust Bowl Drought” of the 1930s. This most famous drought of the 20th century devastated the agricultural economy of the Great Plains, with health and social impacts lingering for years afterwards. Interestingly, the drought of the 1930s is unremarkable in relation to others of greater intensity and frequency before AD 1200 in the Great Plains. Major droughts and deluges have the ability to create marked variability in the power law function (e.g. up to one and a half orders of magnitude variability from the extreme Dust Bowl Drought to the extreme 1993-2001 deluge). This new probabilistic modeling approach provides a novel tool to examine the response of lake-wetland complexes to climate variability.
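
    The lake area-frequency power law used as a calibration target in this record can be estimated, for example, by maximum likelihood on synthetic Pareto-distributed areas. This is a sketch of the statistical idea only; the actual PHM calibration uses a Genetic Algorithm against Landsat-derived areas:

```python
import numpy as np

def pareto_mle_exponent(areas, a_min=1.0):
    """Maximum-likelihood (Hill) estimate of b in N(A > a) ~ a**(-b)."""
    a = np.asarray(areas, dtype=float)
    a = a[a >= a_min]
    return a.size / np.sum(np.log(a / a_min))

# Synthetic lake areas drawn from a Pareto distribution with exponent 1.5
# via inverse-CDF sampling (hypothetical numbers):
rng = np.random.default_rng(1)
true_b = 1.5
areas = (1.0 - rng.random(5000)) ** (-1.0 / true_b)
b_hat = pareto_mle_exponent(areas)
# b_hat is close to 1.5
```

    Shifts in the fitted exponent between wet and dry years are exactly the kind of signal the record's calibration scheme compares against simulated lake complexes.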

  9. Electronic transport in VO₂—Experimentally calibrated Boltzmann transport modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kinaci, Alper; Rosenmann, Daniel; Chan, Maria K. Y., E-mail: debasish.banerjee@toyota.com, E-mail: mchan@anl.gov [Center for Nanoscale Materials, Argonne National Laboratory, Argonne, Illinois 60439 (United States); Kado, Motohisa [Higashifuji Technical Center, Toyota Motor Corporation, Susono, Shizuoka 410-1193 (Japan); Ling, Chen; Zhu, Gaohua; Banerjee, Debasish, E-mail: debasish.banerjee@toyota.com, E-mail: mchan@anl.gov [Materials Research Department, Toyota Motor Engineering and Manufacturing North America, Inc., Ann Arbor, Michigan 48105 (United States)

    2015-12-28

    Materials that undergo metal-insulator transitions (MITs) are under intense study, because the transition is scientifically fascinating and technologically promising for various applications. Among these materials, VO₂ has served as a prototype due to its favorable transition temperature. While the physical underpinnings of the transition have been heavily investigated experimentally and computationally, quantitative modeling of electronic transport in the two phases has yet to be undertaken. In this work, we establish a density-functional-theory (DFT)-based approach with Hubbard U correction (DFT + U) to model electronic transport properties in VO₂ in the semiconducting and metallic regimes, focusing on band transport using the Boltzmann transport equations. We synthesized high quality VO₂ films and measured the transport quantities across the transition, in order to calibrate the free parameters in the model. We find that the experimental calibration of the Hubbard correction term can efficiently and adequately model the metallic and semiconducting phases, allowing for further computational design of MIT materials for desirable transport properties.

  10. Hierarchical Bayesian modelling of mobility metrics for hazard model input calibration

    Science.gov (United States)

    Calder, Eliza; Ogburn, Sarah; Spiller, Elaine; Rutarindwa, Regis; Berger, Jim

    2015-04-01

    In this work we present a method to constrain flow mobility input parameters for pyroclastic flow models using hierarchical Bayes modeling of standard mobility metrics such as the H/L ratio and flow volume. The advantage of hierarchical modeling is that it can leverage the information in a global dataset for a particular mobility metric in order to reduce the uncertainty in modeling of an individual volcano, which is especially important where individual volcanoes have only sparse datasets. We use compiled pyroclastic flow runout data from Colima, Merapi, Soufriere Hills, Unzen and Semeru volcanoes, presented in an open-source database, FlowDat (https://vhub.org/groups/massflowdatabase). While the exact relationship between flow volume and friction varies somewhat between volcanoes, dome collapse flows originating from the same volcano exhibit similar mobility relationships. Instead of fitting separate regression models for each volcano dataset, we use a variation of the hierarchical linear model (Kass and Steffey, 1989). The model has a hierarchical structure with two levels: all dome collapse flows, and dome collapse flows at specific volcanoes. The hierarchical model allows us to assume that the flows at specific volcanoes share a common distribution of regression slopes, then solves for that distribution. We present comparisons of the 95% confidence intervals on the individual regression lines for the dataset from each volcano as well as those obtained from the hierarchical model. The results clearly demonstrate the advantage of considering global datasets using this technique. The technique developed is demonstrated here for mobility metrics, but can be applied to many other global datasets of volcanic parameters. In particular, such methods can provide a means to better constrain parameters for volcanoes for which we only have sparse data, a ubiquitous problem in volcanology.
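
    The partial pooling idea behind the Kass-Steffey hierarchical model can be sketched in a few lines: per-volcano slope estimates are shrunk toward a pooled mean, with sparsely observed volcanoes shrinking most. The numbers below are hypothetical, and the shrinkage formula is a generic empirical-Bayes sketch rather than the paper's full model:

```python
import numpy as np

def partial_pool(slopes, se2, tau2):
    """Shrink per-volcano slope estimates toward the pooled mean.

    slopes: per-group regression slope estimates;
    se2:    their sampling variances (large for sparse data);
    tau2:   assumed between-group variance (hyperparameter).
    """
    mu = np.average(slopes, weights=1.0 / (se2 + tau2))  # precision-weighted mean
    w = tau2 / (tau2 + se2)                              # shrinkage weight per group
    return w * slopes + (1.0 - w) * mu

# Hypothetical mobility-regression slopes for five volcanoes:
slopes = np.array([-0.12, -0.08, -0.20, -0.10, -0.15])
se2 = np.array([0.001, 0.02, 0.05, 0.002, 0.01])  # sparse data -> large variance
pooled = partial_pool(slopes, se2, tau2=0.004)
# Poorly constrained volcanoes move most toward the common mean.
```

    This is why the hierarchical confidence intervals for data-poor volcanoes in the record are tighter than their individual regressions: they borrow precision from the global dataset.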

  11. Immune Algorithm Complex Method for Transducer Calibration

    Directory of Open Access Journals (Sweden)

    YU Jiangming

    2014-08-01

Full Text Available As a key link in engineering test tasks, transducer calibration has a significant influence on the accuracy and reliability of test results. Because of unknown and complex nonlinear characteristics, conventional methods cannot achieve satisfactory accuracy. An immune-algorithm complex modeling approach is proposed, and simulation studies on the calibration of multiple-output transducers are carried out using the developed complex model. The simulated and experimental results show that the immune-algorithm complex modeling approach can significantly improve calibration precision compared with traditional calibration methods.

  12. Cross-scale intercomparison of climate change impacts simulated by regional and global hydrological models in eleven large river basins

    Energy Technology Data Exchange (ETDEWEB)

    Hattermann, F. F.; Krysanova, V.; Gosling, S. N.; Dankers, R.; Daggupati, P.; Donnelly, C.; Flörke, M.; Huang, S.; Motovilov, Y.; Buda, S.; Yang, T.; Müller, C.; Leng, G.; Tang, Q.; Portmann, F. T.; Hagemann, S.; Gerten, D.; Wada, Y.; Masaki, Y.; Alemayehu, T.; Satoh, Y.; Samaniego, L.

    2017-01-04

    Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity of impact models designed for either scale to climate variability and change is comparable. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HM) for 11 large river basins in all continents under reference and scenario conditions. The foci are on model validation runs, sensitivity of annual discharge to climate variability in the reference period, and sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas regional models show a much better reproduction of reference conditions. However, the sensitivity of two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics evaluated for HM ensemble medians and spreads show that the medians are to a certain extent comparable in some cases with distinct differences in others, and the spreads related to global models are mostly notably larger. Summarizing, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability, but whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, the regional-scale models validated against observed discharge should be used.

  13. Radiometric modeling and calibration of the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) ground based measurement experiment

    Science.gov (United States)

    Tian, Jialin; Smith, William L.; Gazarik, Michael J.

    2008-12-01

The ultimate remote sensing benefits of high-resolution infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and pollutant and greenhouse gas constituents to be observed for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The GIFTS calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts, thereby enhancing the absolute calibration accuracy. This method is applied to data collected during the GIFTS Ground Based Measurement (GBM) experiment, together with observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both simultaneously zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan, Utah on September 13, 2006. The accurately calibrated GIFTS radiances are produced using the first four PC scores in the GIFTS-AERI regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period. Using the GIFTS GBM calibration model, we compute the calibrated radiances from data …

  14. Forward Model Studies of Water Vapor Using Scanning Microwave Radiometers, Global Positioning System, and Radiosondes during the Cloudiness Intercomparison Experiment

    International Nuclear Information System (INIS)

    Mattioli, Vinia; Westwater, Ed R.; Gutman, S.; Morris, Victor R.

    2005-01-01

Brightness temperatures computed from five absorption models and radiosonde observations were analyzed by comparing them with measurements from three microwave radiometers at 23.8 and 31.4 GHz. Data were obtained during the Cloudiness Inter-Comparison experiment at the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) site in north-central Oklahoma in 2003. The radiometers were calibrated using two procedures: the so-called instantaneous "tipcal" method and an automatic self-calibration algorithm. Measurements from the radiometers were in agreement, with less than a 0.4-K difference during clear skies, when the instantaneous method was applied. Brightness temperatures from the radiometers and the radiosondes agreed to within 0.55 K when the most recent absorption models were considered. Precipitable water vapor (PWV) computed from the radiometers was also compared with the PWV derived from a Global Positioning System station that operates at the ARM site. The instruments agree to within 0.1 cm in PWV retrieval.

  15. Derived flood frequency analysis using different model calibration strategies based on various types of rainfall-runoff data - a comparison

    Science.gov (United States)

    Haberlandt, U.; Radtke, I.

    2013-08-01

Derived flood frequency analysis makes it possible to estimate design floods with hydrological modelling for poorly observed basins, considering change and taking flood protection measures into account. There are several possible choices regarding precipitation input, discharge output and, consequently, the calibration of the model. The objective of this study is to compare different calibration strategies for a hydrological model considering various types of rainfall input and runoff output data sets. Event-based and continuous observed hourly rainfall data, as well as disaggregated daily rainfall and stochastically generated hourly rainfall data, are used as input for the model. As output, short continuous hourly and longer continuous daily flow time series, as well as probability distributions of annual maximum peak flow series, are employed. The performance of the strategies is evaluated using the different model parameter sets obtained for continuous simulation of discharge in an independent validation period, and by comparing the model-derived flood frequency distributions with the observed one. The investigations are carried out for three mesoscale catchments in northern Germany with the hydrological model HEC-HMS. The results show that: (i) the same type of precipitation input data should be used for calibration and application of the hydrological model, (ii) a model calibrated using a small sample of extreme values works quite well for the simulation of continuous time series of moderate length, but not vice versa, and (iii) the best performance with small uncertainty is obtained when stochastic precipitation data and the observed probability distribution of peak flows are used for model calibration. This outcome suggests calibrating a hydrological model directly on probability distributions of observed peak flows, using stochastic rainfall as input, if its purpose is application for derived flood frequency analysis.

  16. On the possibility of calibrating urban storm-water drainage models using gauge-based adjusted radar rainfall estimates

    OpenAIRE

Ochoa-Rodriguez, S; Wang, L; Simoes, N; Onof, C; Maksimović, Č

    2013-01-01

Traditionally, urban storm water drainage models have been calibrated using only raingauge data, which may result in overly conservative models due to the lack of spatial description of rainfall. With the advent of weather radars, radar rainfall estimates with higher temporal and spatial resolution have become increasingly available and have started to be used operationally for urban storm water model calibration and real time operation. Nonetheless,...

  17. The Improved NRL Tropical Cyclone Monitoring System with a Unified Microwave Brightness Temperature Calibration Scheme

    Directory of Open Access Journals (Sweden)

    Song Yang

    2014-05-01

Full Text Available The near real-time NRL global tropical cyclone (TC) monitoring system based on multiple satellite passive microwave (PMW) sensors is improved with a new inter-sensor calibration scheme to correct the biases caused by differences in these sensors' high-frequency channels. Since the PMW 89 GHz channel is used in multiple current and near-future operational and research satellites, a unified scheme is created to calibrate all satellite PMW sensors' ice-scattering channels to a common 89 GHz, so that their brightness temperatures (TBs) will be consistent and permit more accurate manual and automated analyses. In order to develop a physically consistent calibration scheme, cloud-resolving model simulations of a squall-line system over the west Pacific coast and of Hurricane Bonnie in the Atlantic Ocean are applied to simulate the views from different PMW sensors. To clarify the complicated TB biases due to the competing nature of scattering and emission effects, a four-cloud-based calibration scheme is developed (rain, non-rain, light rain, and cloudy). This new physically consistent inter-sensor calibration scheme is then evaluated with the synthetic TBs of Hurricane Bonnie and a squall line, as well as with observed TCs. Results demonstrate that large TB biases of up to 13 K between TMI and AMSR-E for heavy rain situations before calibration are reduced to less than 3 K after calibration. The comparison statistics show that the overall bias and RMSE are reduced by 74% and 66% for Hurricane Bonnie, and by 98% and 85% for squall lines, respectively. For the observed Hurricane Igor, the bias and RMSE decrease by 41% and 25%, respectively. This study demonstrates the importance of TB calibrations between PMW sensors in order to systematically monitor global TC life cycles in terms of intensity, inner-core structure and convective organization. A physics-based calibration scheme for TC TB corrections developed in this study is able to significantly reduce the …

  18. Calibration of Galileo signals for time metrology.

    Science.gov (United States)

    Defraigne, Pascale; Aerts, Wim; Cerretto, Giancarlo; Cantoni, Elena; Sleewaegen, Jean-Marie

    2014-12-01

Using global navigation satellite system (GNSS) signals for accurate timing and time transfer requires knowledge of all electric delays of the signals inside the receiving system. GNSS stations dedicated to timing or time transfer are classically calibrated only for Global Positioning System (GPS) signals. This paper proposes a procedure to determine the hardware delays of a GNSS receiving station for Galileo signals, once the delays of the GPS signals are known. The approach makes use of the broadcast satellite inter-signal biases and is based on the ionospheric delay measured from dual-frequency combinations of GPS and Galileo signals. The uncertainty on the hardware delays so determined is estimated at 3.7 ns for each isolated code in the L5 frequency band, and 4.2 ns for the ionosphere-free combination of E1 with a code of the L5 frequency band. For the calibration of a time transfer link between two stations, another approach can be used, based on the difference between the common-view time transfer results obtained with calibrated GPS data and with uncalibrated Galileo data. It is shown that the results obtained with this approach and with the ionospheric method are equivalent.
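The ionospheric-delay step of such a calibration rests on standard dual-frequency pseudorange combinations, sketched below with simulated pseudoranges; this toy illustration omits the inter-signal biases and hardware delays the paper actually solves for.

```python
# Geometry-free and ionosphere-free pseudorange combinations used when
# transferring a GPS-based hardware-delay calibration to other signals.
# Frequencies are the published L1/E1 and L5/E5a carrier frequencies (Hz).
F1 = 1575.42e6   # L1 / E1
F5 = 1176.45e6   # L5 / E5a

def iono_delay_l1(p1, p5):
    """First-order slant ionospheric delay on L1 (m) from a dual-frequency pair.

    The delay scales as 1/f^2, so I5 = gamma * I1 with gamma = (f1/f5)^2,
    and differencing the two pseudoranges isolates I1.
    """
    gamma = (F1 / F5) ** 2
    return (p5 - p1) / (gamma - 1.0)

def iono_free(p1, p5):
    """Ionosphere-free pseudorange combination (m): the first-order
    ionospheric term cancels exactly."""
    return (F1**2 * p1 - F5**2 * p5) / (F1**2 - F5**2)
```

With a simulated geometric range r and L1 ionospheric delay I1, feeding p1 = r + I1 and p5 = r + gamma*I1 through these functions recovers I1 and r respectively.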

  19. Global scale groundwater flow model

    Science.gov (United States)

    Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc

    2013-04-01

As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet the current generation of global-scale hydrological models does not include a groundwater flow component, which is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minute resolution. The aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g. Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moorsdorff, in press). We force the groundwater model with output from the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated the calculated groundwater heads and depths against available head observations from different regions, including North and South America and western Europe. Our results show that it is feasible to build a relatively simple global-scale groundwater model using existing information, and to estimate water table depths within acceptable accuracy in many parts of the world.
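The head computation MODFLOW performs can be illustrated at toy scale with a one-dimensional steady-state finite-difference solve; the transmissivity and recharge values below are assumed for illustration and bear no relation to the global model's actual schematization.

```python
import numpy as np

def steady_heads_1d(n, dx, T, recharge, h_left, h_right):
    """Solve T d2h/dx2 + R = 0 on a 1-D confined aquifer with fixed-head
    boundaries: a toy analogue of one MODFLOW row.

    n: number of interior cells; dx: cell size (m);
    T: transmissivity (m^2/d); recharge: R (m/d).
    Discretization: h[i-1] - 2h[i] + h[i+1] = -R*dx^2/T.
    """
    A = np.zeros((n, n))
    b = np.full(n, -recharge * dx**2 / T)
    for i in range(n):
        A[i, i] = -2.0
        if i > 0:
            A[i, i - 1] = 1.0
        if i < n - 1:
            A[i, i + 1] = 1.0
    b[0] -= h_left          # fold fixed-head boundaries into the RHS
    b[-1] -= h_right
    return np.linalg.solve(A, b)
```

With zero recharge and equal boundary heads the water table is flat; positive recharge produces the expected groundwater mound between the boundaries.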

  20. Calibration factor or calibration coefficient?

    International Nuclear Information System (INIS)

    Meghzifene, A.; Shortt, K.R.

    2002-01-01

Full text: The IAEA/WHO network of SSDLs was set up in order to establish links between SSDL members and the international measurement system. At the end of 2001, there were 73 network members in 63 Member States. The SSDL network members provide calibration services to end-users at the national or regional level. The results of the calibrations are summarized in a document called a calibration report or calibration certificate. The IAEA has been using the term calibration certificate and will continue using the same terminology. The most important information in a calibration certificate is a list of calibration factors and their related uncertainties that apply to the calibrated instrument under well-defined irradiation and ambient conditions. The IAEA has recently decided to change the term calibration factor to calibration coefficient, to be fully in line with ISO [ISO 31-0], which recommends the use of the term coefficient when it links two quantities A and B that have different dimensions. The term factor should only be used for k when it links quantities A and B that have the same dimensions, A = k·B. However, in a typical calibration, an ion chamber is calibrated in terms of a physical quantity such as air kerma, dose to water, ambient dose equivalent, etc. If the chamber is calibrated together with its electrometer, then the calibration refers to the physical quantity to be measured per electrometer unit reading. In this case, the quantities linked have different dimensions. The adoption by the Agency of the term coefficient to express the results of calibrations is consistent with the 'International vocabulary of basic and general terms in metrology' prepared jointly by the BIPM, IEC, ISO, OIML and other organizations. The BIPM has changed from factor to coefficient. The authors believe that this is more than just a matter of semantics and recommend that the SSDL network members adopt this change in terminology. (author)

  1. The World gas model. A multi-period mixed complementarity model for the global natural gas market

    International Nuclear Information System (INIS)

    Egging, Ruud; Holz, Franziska; Gabriel, Steven A.

    2010-01-01

We provide the description, mathematical formulation and illustrative results of the World Gas Model, a multi-period complementarity model for the global natural gas market with explicit consideration of market power in the upstream market. Market players include producers, traders, pipeline and storage operators, LNG (liquefied natural gas) liquefiers and regasifiers, as well as marketers. The model data set contains more than 80 countries and regions and covers 98% of worldwide natural gas production and consumption. We also include a detailed representation of cross-border natural gas pipelines and constraints imposed by long-term contracts in the LNG market. The model is calibrated to match production and consumption projections from the PRIMES [EC. European energy and transport: trends to 2030 - update 2007. Brussels: European Commission; 2008] and POLES models [EC. World energy technology outlook - 2050 (WETO-H2). Brussels: European Commission; 2006] up to 2030. The results of our numerical simulations illustrate how the supply shares of pipeline gas and LNG in various regions of the world develop very differently over time. LNG will continue to play a major role in the Asian market, including for new importers like China and India. Europe will expand its pipeline import capacities, benefiting from its relative proximity to major gas suppliers. (author)

  2. Calibration of a parsimonious distributed ecohydrological daily model in a data-scarce basin by exclusively using the spatio-temporal variation of NDVI

    Science.gov (United States)

    Ruiz-Pérez, Guiomar; Koch, Julian; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix

    2017-12-01

    Ecohydrological modeling studies in developing countries, such as sub-Saharan Africa, often face the problem of extensive parametrical requirements and limited available data. Satellite remote sensing data may be able to fill this gap, but require novel methodologies to exploit their spatio-temporal information that could potentially be incorporated into model calibration and validation frameworks. The present study tackles this problem by suggesting an automatic calibration procedure, based on the empirical orthogonal function, for distributed ecohydrological daily models. The procedure is tested with the support of remote sensing data in a data-scarce environment - the upper Ewaso Ngiro river basin in Kenya. In the present application, the TETIS-VEG model is calibrated using only NDVI (Normalized Difference Vegetation Index) data derived from MODIS. The results demonstrate that (1) satellite data of vegetation dynamics can be used to calibrate and validate ecohydrological models in water-controlled and data-scarce regions, (2) the model calibrated using only satellite data is able to reproduce both the spatio-temporal vegetation dynamics and the observed discharge at the outlet and (3) the proposed automatic calibration methodology works satisfactorily and it allows for a straightforward incorporation of spatio-temporal data into the calibration and validation framework of a model.
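As a rough sketch of the EOF machinery such a calibration can build on (with assumed synthetic data; the actual TETIS-VEG procedure differs in detail), the leading spatial patterns of a time × space NDVI anomaly matrix come from an SVD, and simulated versus observed patterns can then be scored by pattern correlation:

```python
import numpy as np

def eofs(field, k):
    """Leading k EOFs of a (time x space) anomaly field via SVD.

    Returns the spatial patterns (k x space) and the corresponding
    principal-component time series (time x k).
    """
    anom = field - field.mean(axis=0)          # remove temporal mean per pixel
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    return vt[:k], u[:, :k] * s[:k]

def eof_skill(obs, sim, k=3):
    """Hypothetical calibration objective: mean absolute pattern correlation
    between the leading observed and simulated EOFs (sign is arbitrary)."""
    p_obs, _ = eofs(obs, k)
    p_sim, _ = eofs(sim, k)
    return float(np.mean([abs(np.corrcoef(a, b)[0, 1])
                          for a, b in zip(p_obs, p_sim)]))
```

A calibration loop would then adjust model parameters to maximize `eof_skill` between MODIS-derived and simulated NDVI fields; identical fields score 1 by construction.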

  3. New error calibration tests for gravity models using subset solutions and independent data - Applied to GEM-T3

    Science.gov (United States)

    Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.

    1993-01-01

A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto the satellite observations, and the results of these projections are then compared with the data residuals computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3, showing that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.

  4. Comparison between two calibration models of a measurement system for thyroid monitoring

    International Nuclear Information System (INIS)

    Venturini, Luzia

    2005-01-01

This paper presents a comparison between two theoretical calibrations that use two different mathematical models to represent the neck region. In the first model, the thyroid is represented simply as the region bounded by two concentric cylinders whose dimensions are those of the trachea and the neck. The second model uses functional forms to obtain a better representation of the thyroid geometry. Efficiency values are obtained using Monte Carlo simulation. (author)

  5. A machine learning calibration model using random forests to improve sensor performance for lower-cost air quality monitoring

    Science.gov (United States)

    Zimmerman, Naomi; Presto, Albert A.; Kumar, Sriniwasa P. N.; Gu, Jason; Hauryliuk, Aliaksei; Robinson, Ellis S.; Robinson, Allen L.; Subramanian, R.

    2018-01-01

Low-cost sensing strategies hold the promise of denser air quality monitoring networks, which could significantly improve our understanding of personal air pollution exposure. Additionally, low-cost air quality sensors could be deployed to areas where limited monitoring exists. However, low-cost sensors are frequently sensitive to environmental conditions and pollutant cross-sensitivities, which have historically been poorly addressed by laboratory calibrations, limiting their utility for monitoring. In this study, we investigated different calibration models for the Real-time Affordable Multi-Pollutant (RAMP) sensor package, which measures CO, NO2, O3, and CO2. We explored three methods: (1) laboratory univariate linear regression, (2) empirical multiple linear regression, and (3) machine-learning-based calibration models using random forests (RF). Calibration models were developed for 16 to 19 RAMP monitors (the number varied by pollutant) using training and testing windows spanning August 2016 through February 2017 in Pittsburgh, PA, US. The random forest models matched (CO) or significantly outperformed (NO2, CO2, O3) the other calibration models, and their accuracy and precision were robust over time for testing windows of up to 16 weeks. Following calibration, the average mean absolute error on the testing data set from the random forest models was 38 ppb for CO (14 % relative error), 10 ppm for CO2 (2 % relative error), 3.5 ppb for NO2 (29 % relative error), and 3.4 ppb for O3 (15 % relative error), and the Pearson r versus the reference monitors exceeded 0.8 for most units. Model performance is explored in detail, including a quantification of model variable importance, accuracy across different concentration ranges, and performance in a range of monitoring contexts including the National Ambient Air Quality Standards (NAAQS) and the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. A key strength of the RF approach is that …
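Of the three calibration methods compared, the empirical multiple linear regression (method 2) is simple enough to sketch with plain least squares; the covariates and coefficients below are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np

def fit_mlr_calibration(raw, temp, rh, ref):
    """Empirical multiple-linear-regression calibration:
    ref ~ b0 + b1*raw + b2*T + b3*RH, fitted by least squares
    against collocated reference-monitor readings."""
    X = np.column_stack([np.ones_like(raw), raw, temp, rh])
    beta, *_ = np.linalg.lstsq(X, ref, rcond=None)
    return beta

def apply_calibration(beta, raw, temp, rh):
    """Convert raw sensor signals to calibrated concentrations."""
    X = np.column_stack([np.ones_like(raw), raw, temp, rh])
    return X @ beta
```

The RF models in the study replace this fixed linear form with an ensemble of regression trees, which is what lets them absorb nonlinear temperature, humidity, and cross-sensitivity effects.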

  6. Improving plasma shaping accuracy through consolidation of control model maintenance, diagnostic calibration, and hardware change control

    International Nuclear Information System (INIS)

    Baggest, D.S.; Rothweil, D.A.; Pang, S.

    1995-12-01

With the advent of more sophisticated techniques for control of tokamak plasmas comes the requirement for increasingly accurate models of plasma processes and tokamak systems. Development of accurate models for DIII-D power systems, vessel, and poloidal coils is already complete, while work continues on development of general plasma response modeling techniques. Increased accuracy in estimates of the parameters to be controlled is also required. It is important to ensure that errors in supporting systems such as diagnostic and command circuits do not limit the accuracy of plasma parameter estimates or inhibit the ability to derive accurate plasma/tokamak system models. To address this issue, we have developed more formal power system change control and power system/magnetic diagnostics calibration procedures. This paper discusses our approach to consolidating the tasks in these closely related areas. This includes, for example, defining criteria for when diagnostics should be re-calibrated, along with the required calibration tolerances, and implementing methods for tracking power system hardware modifications and the resultant changes to control models.

  7. FAST Model Calibration and Validation of the OC5-DeepCwind Floating Offshore Wind System Against Wave Tank Test Data

    Energy Technology Data Exchange (ETDEWEB)

    Wendt, Fabian F [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Robertson, Amy N [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Jonkman, Jason [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-06-03

    During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  8. Preliminary report on NTS spectral gamma logging and calibration models

    International Nuclear Information System (INIS)

    Mathews, M.A.; Warren, R.G.; Garcia, S.R.; Lavelle, M.J.

    1985-01-01

Facilities are now available at the Nevada Test Site (NTS) in Building 2201 to calibrate spectral gamma logging equipment in environments of low radioactivity. Such environments are routinely encountered during logging of holes at the NTS. Four calibration models were delivered to Building 2201 in January 1985. Each model, or test pit, consists of a stone block with a 12-inch diameter cored borehole. Preliminary radioelement values from the core for the test pits range from 0.58 to 3.83% potassium (K), 0.48 to 29.11 ppm thorium (Th), and 0.62 to 40.42 ppm uranium (U). Two satellite holes, U19ab #2 and U19ab #3, were logged during the winter of 1984-1985. The response of these logs correlates with the contents of the naturally radioactive elements K, Th, and U determined in samples from petrologic zones that occur within these holes. Based on these comparisons, the spectral gamma log aids in the recognition and mapping of subsurface stratigraphic units and of alteration features associated with unusual concentrations of these radioactive elements, such as clay-rich zones.

  9. Impacts of Spatial Climatic Representation on Hydrological Model Calibration and Prediction Uncertainty: A Mountainous Catchment of Three Gorges Reservoir Region, China

    Directory of Open Access Journals (Sweden)

    Yan Li

    2016-02-01

Full Text Available Sparse climatic observations represent a major challenge for hydrological modeling of mountain catchments, with implications for decision-making in water resources management. Employing elevation bands in the Soil and Water Assessment Tool-Sequential Uncertainty Fitting (SWAT2012-SUFI2) model enabled representation of precipitation and temperature variation with altitude in the Daning river catchment (Three Gorges Reservoir Region, China), where meteorological inputs are limited in spatial extent and derived from observations at relatively low-lying locations. Inclusion of elevation bands produced better model performance for 1987–1993, with the Nash–Sutcliffe efficiency (NSE) increasing by at least 0.11 prior to calibration. During calibration, prediction uncertainty was greatly reduced. With similar R-factors in the earlier calibration iterations, a further 11% of observations were included within the 95% prediction uncertainty band (95PPU) compared to the model without elevation bands. For behavioral simulations defined in SWAT calibration using an NSE threshold of 0.3, an additional 3.9% of observations fell within the 95PPU, while uncertainty was reduced by 7.6% in the model with elevation bands. The calibrated model with elevation bands reproduced observed river discharges, with performance in the calibration period changing from “poor” without elevation bands to “very good”. The output uncertainty of the calibrated model with elevation bands was satisfactory, with 85% of flow observations included within the 95PPU. These results clearly demonstrate the need to account for orographic effects on precipitation and temperature in hydrological models of mountainous catchments.
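At its core, the elevation-band approach amounts to applying lapse rates to the gauge forcing for each band; the sketch below uses assumed lapse-rate values and is far simpler than SWAT's own band implementation.

```python
def band_climate(t_gauge, p_gauge, z_gauge, band_elevs,
                 t_lapse=-6.5, p_grad=0.05):
    """Adjust gauge temperature and precipitation to elevation bands.

    t_gauge: gauge temperature (deg C); p_gauge: gauge precipitation (mm);
    z_gauge: gauge elevation (m); band_elevs: band mean elevations (m).
    t_lapse: assumed lapse rate in deg C per km (typical environmental value);
    p_grad: assumed fractional precipitation increase per 100 m.
    Returns a list of (temperature, precipitation) tuples, one per band.
    """
    out = []
    for z in band_elevs:
        dz = z - z_gauge
        t = t_gauge + t_lapse * dz / 1000.0
        p = p_gauge * max(0.0, 1.0 + p_grad * dz / 100.0)
        out.append((t, p))
    return out
```

A band at the gauge elevation gets the gauge values unchanged, while higher bands are colder and wetter, which is precisely the orographic effect the abstract shows the uncorrected model was missing.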

  10. Development of a Regional Glycerol Dialkyl Glycerol Tetraether (GDGT) - Temperature Calibration for Antarctic and sub-Antarctic Lakes

    Science.gov (United States)

    Roberts, S. J.; Foster, L. C.; Pearson, E. J.; Steve, J.; Hodgson, D.; Saunders, K. M.; Verleyen, E.

    2016-12-01

    Temperature calibration models based on the relative abundances of sedimentary glycerol dialkyl glycerol tetraethers (GDGTs) have been used to reconstruct past temperatures in both marine and terrestrial environments, but have not been widely applied in high latitude environments. This is mainly because the performance of GDGT-temperature calibrations at lower temperatures and GDGT provenance in many lacustrine settings remains uncertain. To address these issues, we examined surface sediments from 32 Antarctic, sub-Antarctic and Southern Chilean lakes. First, we quantified GDGT compositions present and then investigated modern-day environmental controls on GDGT composition. GDGTs were found in all 32 lakes studied. Branched GDGTs (brGDGTs) were dominant in 31 lakes and statistical analyses showed that their composition was strongly correlated with mean summer air temperature (MSAT) rather than pH, conductivity or water depth. Second, we developed the first regional brGDGT-temperature calibration for Antarctic and sub-Antarctic lakes based on four brGDGT compounds (GDGT-Ib, GDGT-II, GDGT-III and GDGT-IIIb). Of these, GDGT-IIIb proved particularly important in cold lacustrine environments. Our brGDGT-Antarctic temperature calibration dataset has an improved statistical performance at low temperatures compared to previous global calibrations (r2=0.83, RMSE=1.45°C, RMSEP-LOO=1.68°C, n=36 samples), highlighting the importance of basing palaeotemperature reconstructions on regional GDGT-temperature calibrations, especially if specific compounds lead to improved model performance. Finally, we applied the new Antarctic brGDGT-temperature calibration to two key lake records from the Antarctic Peninsula and South Georgia. In both, downcore temperature reconstructions show similarities to known Holocene warm periods, providing proof of concept for the new Antarctic calibration model.
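A regional calibration of this kind is, at its core, a multiple linear regression of temperature on fractional brGDGT abundances, with skill reported as leave-one-out RMSEP (the RMSEP-LOO quoted in the abstract); the sketch below uses synthetic data, not the Antarctic dataset.

```python
import numpy as np

def fit_calibration(X, t):
    """MSAT ~ intercept + sum_i b_i * fractional abundance of compound i,
    fitted by ordinary least squares. X is (samples x compounds)."""
    A = np.column_stack([np.ones(len(t)), X])
    beta, *_ = np.linalg.lstsq(A, t, rcond=None)
    return beta

def rmse_loo(X, t):
    """Leave-one-out RMSEP: refit with each sample held out in turn and
    score the prediction for the held-out sample."""
    errs = []
    for i in range(len(t)):
        mask = np.ones(len(t), dtype=bool)
        mask[i] = False
        beta = fit_calibration(X[mask], t[mask])
        pred = beta[0] + X[i] @ beta[1:]
        errs.append(pred - t[i])
    return float(np.sqrt(np.mean(np.square(errs))))
```

Restricting X to the region's own surface samples, as done here for the 36-sample Antarctic set, is exactly what lets a regional calibration beat a global one at low temperatures.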

  11. Modeling the Heterogeneous Effects of GHG Mitigation Policies on Global Agriculture and Forestry

    Science.gov (United States)

    Golub, A.; Henderson, B.; Hertel, T. W.; Rose, S. K.; Sohngen, B.

    2010-12-01

Agriculture and forestry are envisioned as potentially key sectors for climate change mitigation policy, yet the depth of analysis of mitigation options and their economic consequences remains remarkably shallow in comparison to that for industrial mitigation. Farming and land use change - much of it induced by agriculture - account for one-third of global greenhouse gas (GHG) emissions. Any serious attempt to curtail these emissions will involve changes in the way farming is conducted, as well as placing limits on agricultural expansion into areas currently under more carbon-intensive land cover. However, agriculture and forestry are extremely heterogeneous, both in the technology and intensity of production, as well as in the GHG emissions intensity of these activities. These differences, in turn, give rise to significant changes in the distribution of agricultural production, trade and consumption in the wake of mitigation policies. This paper assesses such distributional impacts via a global economic analysis undertaken with a modified version of the GTAP model. The paper builds on the global general equilibrium GTAP-AEZ-GHG model (Golub et al., 2009). This is a unified modeling framework that links the agricultural, forestry, food processing and other sectors through land and other factor markets and international trade, and incorporates different land types, land uses and related CO2 and non-CO2 GHG emissions and sequestration. The economic data underlying this work is the global GTAP data base aggregated up to 19 regions and 29 sectors. The model incorporates mitigation cost curves for different regions and sectors based on information from the US-EPA. The forestry component of the model is calibrated to the results of the state-of-the-art partial equilibrium global forestry model of Sohngen and Mendelson (2007). Forest carbon sequestration at both the extensive and intensive margins is modeled separately to better isolate land competition between

  12. Non-linear calibration models for near infrared spectroscopy

    DEFF Research Database (Denmark)

    Ni, Wangdong; Nørgaard, Lars; Mørup, Morten

    2014-01-01

…(LS-SVM), relevance vector machines (RVM), Gaussian process regression (GPR), artificial neural network (ANN), and Bayesian ANN (BANN). In this comparison, partial least squares (PLS) regression is used as a linear benchmark, while the relationship of the methods is considered in terms of traditional calibration by ridge regression (RR). The performance of the different methods is demonstrated by their practical applications using three real-life near infrared (NIR) data sets. Different aspects of the various approaches, including computational time, model interpretability, potential over-fitting using the non-linear models on linear problems, robustness to small or medium sample sets, and robustness to pre-processing, are discussed. The results suggest that GPR and BANN are powerful and promising methods for handling linear as well as nonlinear systems, even when the data sets are moderately small. The LS-SVM…
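Gaussian process regression, one of the non-linear calibration methods compared in this record, can be illustrated with a toy numpy implementation (squared-exponential kernel, posterior mean only); this is a pedagogical sketch, not the authors' code or a full GPR with hyperparameter optimization:

```python
import numpy as np

def rbf(A, B, ell=1.0, var=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ell ** 2)

def gpr_predict(Xtr, ytr, Xte, ell=1.0, var=1.0, noise=1e-4):
    """Posterior mean of a zero-mean GP: K*(K + noise*I)^-1 y."""
    K = rbf(Xtr, Xtr, ell, var) + noise * np.eye(len(Xtr))
    alpha = np.linalg.solve(K, ytr)
    return rbf(Xte, Xtr, ell, var) @ alpha
```

In a calibration setting, `Xtr` would hold (possibly dimension-reduced) NIR spectra and `ytr` the reference property values.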

  13. Planck 2015 results. V. LFI calibration

    CERN Document Server

    Ade, P.A.R.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartolo, N.; Battaglia, P.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bock, J.J.; Bonaldi, A.; Bonavera, L.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Bucher, M.; Burigana, C.; Butler, R.C.; Calabrese, E.; Cardoso, J.F.; Catalano, A.; Chamballu, A.; Christensen, P.R.; Colombi, S.; Colombo, L.P.L.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J.M.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T.A.; Eriksen, H.K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Heraud, Y.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Hansen, F.K.; Hanson, D.; Harrison, D.L.; Henrot-Versille, S.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hovest, W.; Huffenberger, K.M.; Hurier, G.; Jaffe, A.H.; Jaffe, T.R.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kisner, T.S.; Knoche, J.; Krachmalnicoff, N.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Lattanzi, M.; Lawrence, C.R.; Leahy, J.P.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P.B.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P.R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J.A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Novikov, D.; Novikov, I.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, 
D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T.J.; Peel, M.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Pierpaoli, E.; Pietrobon, D.; Pointecouteau, E.; Polenta, G.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Rachen, J.P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renzi, A.; Rocha, G.; Romelli, E.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubino-Martin, J.A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Stolyarov, V.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Turler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vassallo, T.; Vielva, P.; Villa, F.; Wade, L.A.; Wandelt, B.D.; Watson, R.; Wehus, I.K.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-01-01

We present a description of the pipeline used to calibrate the Planck Low Frequency Instrument (LFI) timelines into thermodynamic temperatures for the Planck 2015 data release, covering 4 years of uninterrupted operations. As in the 2013 data release, our calibrator is provided by the spin-synchronous modulation of the CMB dipole, exploiting both the orbital and solar components. Our 2015 LFI analysis provides an independent Solar dipole estimate in excellent agreement with that of HFI and within 1σ (0.3 % in amplitude) of the WMAP value. This 0.3 % shift in the peak-to-peak dipole temperature from WMAP and a global overhaul of the iterative calibration code increase the overall level of the LFI maps by 0.45 % (30 GHz), 0.64 % (44 GHz), and 0.82 % (70 GHz) in temperature with respect to the 2013 Planck data release, thus reducing the discrepancy with the power spectrum measured by WMAP. We estimate that the LFI calibration uncertainty is at the level of 0.20 % for the 70 GHz map, 0.26 % for the 44 GHz...
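In spirit, dipole-based photometric calibration solves for a gain that maps the instrument's raw timeline onto the known dipole modulation. A schematic least-squares sketch (the amplitude, gain and noise values below are illustrative, not Planck's actual numbers, and the real pipeline solves for time-varying gains):

```python
import numpy as np

def fit_gain(timeline, dipole_template):
    """Least-squares gain and offset under the model
    timeline = g * dipole_template + offset + noise."""
    A = np.column_stack([dipole_template, np.ones_like(dipole_template)])
    (g, offset), *_ = np.linalg.lstsq(A, timeline, rcond=None)
    return g, offset
```

Given the fitted gain, the timeline is divided by `g` to convert it into thermodynamic temperature units.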

  14. Global Atmosphere Watch Workshop on Measurement-Model ...

    Science.gov (United States)

    The World Meteorological Organization’s (WMO) Global Atmosphere Watch (GAW) Programme coordinates high-quality observations of atmospheric composition from global to local scales with the aim to drive high-quality and high-impact science while co-producing a new generation of products and services. In line with this vision, GAW’s Scientific Advisory Group for Total Atmospheric Deposition (SAG-TAD) has a mandate to produce global maps of wet, dry and total atmospheric deposition for important atmospheric chemicals to enable research into biogeochemical cycles and assessments of ecosystem and human health effects. The most suitable scientific approach for this activity is the emerging technique of measurement-model fusion for total atmospheric deposition. This technique requires global-scale measurements of atmospheric trace gases, particles, precipitation composition and precipitation depth, as well as predictions of the same from global/regional chemical transport models. The fusion of measurement and model results requires data assimilation and mapping techniques. The objective of the GAW Workshop on Measurement-Model Fusion for Global Total Atmospheric Deposition (MMF-GTAD), an initiative of the SAG-TAD, was to review the state-of-the-science and explore the feasibility and methodology of producing, on a routine retrospective basis, global maps of atmospheric gas and aerosol concentrations as well as wet, dry and total deposition via measurement-model

  15. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.; Wheeler, M. F.; Hoteit, Ibrahim

    2013-01-01

Calibration of subsurface flow models is an essential step for managing groundwater aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known
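The record is truncated, but the nested sampling algorithm it refers to can be illustrated with a minimal 1D evidence estimate (uniform prior, Gaussian likelihood, rejection sampling from the constrained prior). This is a toy sketch, not the authors' subsurface-flow implementation:

```python
import numpy as np

def log_like(theta):
    """Standard normal log-likelihood."""
    return -0.5 * theta ** 2 - 0.5 * np.log(2 * np.pi)

def nested_sampling(n_live=100, n_iter=600, lo=-5.0, hi=5.0, seed=1):
    """Estimate the evidence Z = E_prior[L] with a uniform prior on
    [lo, hi] using basic nested sampling (deterministic X_i schedule)."""
    rng = np.random.default_rng(seed)
    live = rng.uniform(lo, hi, n_live)
    logL = log_like(live)
    Z, X_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = np.argmin(logL)
        X_i = np.exp(-i / n_live)              # expected remaining prior mass
        Z += np.exp(logL[worst]) * (X_prev - X_i)
        X_prev = X_i
        Lmin = logL[worst]
        while True:                            # draw from the prior above the floor
            cand = rng.uniform(lo, hi)
            if log_like(cand) > Lmin:
                break
        live[worst], logL[worst] = cand, log_like(cand)
    Z += X_prev * np.mean(np.exp(logL))        # remaining live-point contribution
    return Z
```

For this problem the true evidence is about 0.1 (the Gaussian mass divided by the prior width of 10), so the estimate is easy to check.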

  16. Ørsted Pre-Flight Magnetometer Calibration Mission

    DEFF Research Database (Denmark)

    Risbo, T.; Brauer, Peter; Merayo, José M.G.

    2003-01-01

… and the overall calibration results are given. The temperature calibrations are explained and reported on. The overall calibration model standard deviation is about 100 pT rms. Comparisons with the later in-flight calibrations show that, except for the unknown satellite offsets, an agreement within 4 n...

  17. Calibration artefacts in radio interferometry - III. Phase-only calibration and primary beam correction

    Science.gov (United States)

    Grobler, T. L.; Stewart, A. J.; Wijnholds, S. J.; Kenyon, J. S.; Smirnov, O. M.

    2016-09-01

    This is the third installment in a series of papers in which we investigate calibration artefacts. Calibration artefacts (also known as ghosts or spurious sources) are created when we calibrate with an incomplete model. In the first two papers of this series, we developed a mathematical framework which enabled us to study the ghosting mechanism itself. An interesting concomitant of the second paper was that ghosts appear in symmetrical pairs. This could possibly account for spurious symmetrization. Spurious symmetrization refers to the appearance of a spurious source (the antighost) symmetrically opposite an unmodelled source around a modelled source. The analysis in the first two papers indicates that the antighost is usually very faint, in particular, when a large number of antennas are used. This suggests that spurious symmetrization will mainly occur at an almost undetectable flux level. In this paper, we show that phase-only calibration produces an antighost that is N-times (where N denotes the number of antennas in the array) as bright as the one produced by phase and amplitude calibration and that this already bright ghost can be further amplified by the primary beam correction.

  18. Optimal estimation of regional N2O emissions using a three-dimensional global model

    Science.gov (United States)

    Huang, J.; Golombek, A.; Prinn, R.

    2004-12-01

In this study, we use the MATCH (Model of Atmospheric Transport and Chemistry) model and Kalman filtering techniques to optimally estimate N2O emissions from seven source regions around the globe. The MATCH model was used with NCEP assimilated winds at T62 resolution (192 longitude by 94 latitude surface grid, and 28 vertical levels) from July 1st 1996 to December 31st 2000. The average concentrations of N2O in the lowest four layers of the model were then compared with the monthly mean observations from six national/global networks (AGAGE, CMDL (HATS), CMDL (CCGG), CSIRO, CSIR and NIES), at 48 surface sites. A 12-month-running-mean smoother was applied to both the model results and the observations, because the model was not able to reproduce the very small observed seasonal variations. The Kalman filter was then used to solve for the time-averaged regional emissions of N2O for January 1st 1997 to June 30th 2000. The inversions assume that the model stratospheric destruction rates, which lead to a global N2O lifetime of 130 years, are correct. They also assume normalized emission spatial distributions from each region based on previous studies. We conclude that the global N2O emission flux is about 16.2 TgN/yr, with 34.9 ± 1.7% from South America and Africa, 34.6 ± 1.5% from South Asia, 13.9 ± 1.5% from China/Japan/South East Asia, 8.0 ± 1.9% from all oceans, 6.4 ± 1.1% from North America and North and West Asia, 2.6 ± 0.4% from Europe, and 0.9 ± 0.7% from New Zealand and Australia. The errors here include the measurement standard deviation, calibration differences among the six groups, grid volume/measurement site mis-match errors estimated from the model, and a procedure to account approximately for the modeling errors.
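The inversion described above treats the time-averaged regional emissions as a constant state vector updated by a Kalman filter as monthly observations arrive. A minimal sketch of that recursion (the dimensions, operators and noise levels below are invented for illustration; the real study derives H from MATCH transport runs):

```python
import numpy as np

def kalman_emissions(H_list, y_list, x0, P0, R):
    """Recursively estimate a constant emissions vector x from a sequence
    of observation operators H (n_sites x n_regions) and observations y."""
    x, P = x0.copy(), P0.copy()
    for H, y in zip(H_list, y_list):
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ (y - H @ x)                  # state update
        P = (np.eye(len(x)) - K @ H) @ P         # covariance update
    return x, P
```

Because the state is static, each update only tightens the posterior covariance P around the time-averaged emissions.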

  19. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    International Nuclear Information System (INIS)

    Xu, Jin; Yu, Yaming; Van Dyk, David A.; Kashyap, Vinay L.; Siemiginowska, Aneta; Drake, Jeremy; Ratzlaff, Pete; Connors, Alanna; Meng, Xiao-Li

    2014-01-01

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
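The principal component representation of effective-area uncertainty described above can be sketched as follows: from an ensemble of plausible calibration curves, keep the mean and the first few components, then draw sample curves as the mean plus scaled components. A hedged numpy illustration (synthetic ensemble, not an actual telescope calibration product):

```python
import numpy as np

def pca_basis(curves, k):
    """Mean and first k principal components of an ensemble of calibration
    curves (rows = ensemble members, columns = energy bins)."""
    mean = curves.mean(0)
    U, s, Vt = np.linalg.svd(curves - mean, full_matrices=False)
    scales = s[:k] / np.sqrt(len(curves) - 1)    # component standard deviations
    return mean, Vt[:k], scales

def sample_curve(mean, comps, scales, rng):
    """Draw one plausible curve: mean + sum_j r_j * scale_j * component_j."""
    r = rng.standard_normal(len(comps))
    return mean + (r * scales) @ comps
```

In the fully Bayesian scheme, the coefficients `r` become parameters sampled jointly with the source spectral model, so the data can inform the effective area itself.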

  20. Construction of the Calibration Set through Multivariate Analysis in Visible and Near-Infrared Prediction Model for Estimating Soil Organic Matter

    Directory of Open Access Journals (Sweden)

    Xiaomi Wang

    2017-02-01

The visible and near-infrared (VNIR spectroscopy prediction model is an effective tool for the prediction of soil organic matter (SOM content. The predictive accuracy of the VNIR model is highly dependent on the selection of the calibration set. However, conventional methods for selecting the calibration set for constructing the VNIR prediction model merely consider either the gradients of SOM or the soil VNIR spectra and neglect the influence of environmental variables. Moreover, soil samples generally present strong spatial variability, and, thus, the relationship between the SOM content and VNIR spectra may vary with respect to locations and surrounding environments. Hence, VNIR prediction models based on conventional calibration set selection methods would be biased, especially for estimating highly spatially variable soil content (e.g., SOM. To equip the calibration set selection method with the ability to consider SOM spatial variation and environmental influence, this paper proposes an improved method for selecting the calibration set. The proposed method combines the improved multi-variable association relationship clustering mining (MVARC method and the Rank–Kennard–Stone (Rank-KS method in order to jointly consider the SOM gradient, spectral information, and environmental variables. In the proposed MVARC-R-KS method, MVARC integrates the Apriori algorithm, a density-based clustering algorithm, and the Delaunay triangulation. The MVARC method is first utilized to adaptively mine clustering distribution zones in which environmental variables exert a similar influence on soil samples. The feasibility of the MVARC method is demonstrated by conducting an experiment on a simulated dataset. The calibration set is evenly selected from the clustering zones and the remaining zone by using the Rank-KS algorithm in order to avoid a calibration set dominated by a single property. The proposed MVARC-R-KS approach is applied to select a
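The Kennard–Stone procedure underlying the Rank-KS step selects calibration samples that evenly cover the feature space: start from the two most distant samples, then repeatedly add the sample farthest from the current selection. A compact sketch of the classic algorithm (not the paper's MVARC-augmented variant):

```python
import numpy as np

def kennard_stone(X, n_select):
    """Classic Kennard-Stone selection on the rows of X.
    Returns the indices of the selected calibration samples, in order."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)   # most distant pair
    selected = [int(i), int(j)]
    while len(selected) < n_select:
        rest = [k for k in range(len(X)) if k not in selected]
        # distance of each remaining sample to its nearest selected sample
        dmin = d[np.ix_(rest, selected)].min(axis=1)
        selected.append(rest[int(np.argmax(dmin))])
    return selected
```

The maximin criterion guarantees that no region of the sampled feature space is left far from a calibration sample, which is why KS-style selection is a common baseline for spectral calibration sets.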

  1. A continental-scale hydrology and water quality model for Europe: Calibration and uncertainty of a high-resolution large-scale SWAT model

    Science.gov (United States)

    Abbaspour, K. C.; Rouholahnejad, E.; Vaghefi, S.; Srinivasan, R.; Yang, H.; Kløve, B.

    2015-05-01

A combination of driving forces is increasing pressure on local, national, and regional water supplies needed for irrigation, energy production, industrial uses, domestic purposes, and the environment. In many parts of Europe groundwater quantity, and in particular quality, have come under severe degradation and water levels have decreased, resulting in negative environmental impacts. Rapid improvements in the economy of the eastern European bloc of countries and uncertainties with regard to freshwater availability create challenges for water managers. At the same time, climate change adds a new level of uncertainty with regard to freshwater supplies. In this research we build and calibrate an integrated hydrological model of Europe using the Soil and Water Assessment Tool (SWAT) program. Different components of water resources are simulated and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals. Leaching of nitrate into groundwater is also simulated at a finer spatial level (HRU). The use of large-scale, high-resolution water resources models enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation. In this article we discuss issues with data availability, calibration of large-scale distributed models, and outline procedures for model calibration and uncertainty analysis. The calibrated model and results provide information support to the European Water Framework Directive and lay the basis for further assessment of the impact of climate change on water availability and quality. The approach and methods developed are general and can be applied to any large region around the world.
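Hydrologic calibrations like this are typically scored with objective functions such as Nash–Sutcliffe efficiency and percent bias. A small sketch of both metrics (generic formulas, not code from this study):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    is no better than predicting the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; positive values indicate underestimation under the
    common hydrologic sign convention."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)
```

During calibration, parameter sets are accepted or rejected according to thresholds on metrics like these, computed per subbasin and per variable.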

  2. An efficient multi-stage algorithm for full calibration of the hemodynamic model from BOLD signal responses

    KAUST Repository

    Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2017-01-01

We propose a computational strategy that falls into the category of prediction/correction iterative-type approaches, for calibrating the hemodynamic model introduced by Friston et al. (2000). The proposed method is employed to estimate consecutively the values of the biophysiological system parameters and the external stimulus characteristics of the model. Numerical results corresponding to both synthetic and real functional Magnetic Resonance Imaging (fMRI) measurements for a single stimulus as well as for multiple stimuli are reported to highlight the capability of this computational methodology to fully calibrate the considered hemodynamic model.

  3. An efficient multi-stage algorithm for full calibration of the hemodynamic model from BOLD signal responses

    KAUST Repository

    Zambri, Brian

    2017-02-22

We propose a computational strategy that falls into the category of prediction/correction iterative-type approaches, for calibrating the hemodynamic model introduced by Friston et al. (2000). The proposed method is employed to estimate consecutively the values of the biophysiological system parameters and the external stimulus characteristics of the model. Numerical results corresponding to both synthetic and real functional Magnetic Resonance Imaging (fMRI) measurements for a single stimulus as well as for multiple stimuli are reported to highlight the capability of this computational methodology to fully calibrate the considered hemodynamic model.

  4. High resolution global flood hazard map from physically-based hydrologic and hydraulic models.

    Science.gov (United States)

    Begnudelli, L.; Kaheil, Y.; McCollum, J.

    2017-12-01

The global flood map published online at http://www.fmglobal.com/research-and-resources/global-flood-map at 90m resolution is being used worldwide to understand flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs. The modeling system is based on a physically-based hydrologic model to simulate river discharges, and a 2D shallow-water hydrodynamic model to simulate inundation. The model can be applied to large-scale flood hazard mapping thanks to several solutions that maximize its efficiency and the use of parallel computing. The hydrologic component of the modeling system is the Hillslope River Routing (HRR) hydrologic model. HRR simulates hydrological processes using a Green-Ampt parameterization, and is calibrated against observed discharge data from several publicly-available datasets. For inundation mapping, we use a 2D Finite-Volume Shallow-Water model with wetting/drying. We introduce here a grid Up-Scaling Technique (UST) for hydraulic modeling to perform simulations at higher resolution at global scale with relatively short computational times. A 30 m SRTM DEM is now available worldwide, along with higher-accuracy and/or higher-resolution local Digital Elevation Models (DEMs) in many countries and regions. UST consists of aggregating computational cells, thus forming a coarser grid, while retaining the topographic information from the original full-resolution mesh. The full-resolution topography is used for building relationships between volume and free surface elevation inside cells and computing inter-cell fluxes. This approach achieves nearly the computational speed typical of coarse grids while preserving, to a significant extent, the accuracy offered by the much higher resolution available DEM. The simulations are carried out along each river of the network by forcing the hydraulic model with the streamflow hydrographs generated by HRR. Hydrographs are scaled so that the peak
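The heart of the up-scaling technique is the per-cell relationship between stored water volume and free-surface elevation, built from the fine-resolution topography inside each coarse cell. A simplified sketch under the assumption of a flat water surface within the cell (function names and the bisection inversion are illustrative, not the UST implementation):

```python
import numpy as np

def volume_from_stage(z_fine, cell_area, h):
    """Water volume in a coarse cell at free-surface elevation h, using the
    full-resolution elevations z_fine of its aggregated fine cells."""
    return np.sum(np.maximum(0.0, h - z_fine)) * cell_area

def stage_from_volume(z_fine, cell_area, v, tol=1e-9):
    """Invert the monotone volume(h) relation by bisection."""
    lo = z_fine.min()
    hi = z_fine.max() + v / (cell_area * z_fine.size) + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if volume_from_stage(z_fine, cell_area, mid) < v:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The solver then advances volumes on the coarse grid but converts to stages through these sub-grid curves, which is how coarse-grid speed retains much of the fine-DEM accuracy.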

  5. Presentation, calibration and validation of the low-order, DCESS Earth System Model (Version 1

    Directory of Open Access Journals (Sweden)

    J. O. Pepke Pedersen

    2008-11-01

A new, low-order Earth System Model is described, calibrated and tested against Earth system data. The model features modules for the atmosphere, ocean, ocean sediment, land biosphere and lithosphere and has been designed to simulate global change on time scales of years to millions of years. The atmosphere module considers radiation balance, meridional transport of heat and water vapor between low-mid latitude and high latitude zones, heat and gas exchange with the ocean and sea ice and snow cover. Gases considered are carbon dioxide and methane for all three carbon isotopes, nitrous oxide and oxygen. The ocean module has 100 m vertical resolution, carbonate chemistry and prescribed circulation and mixing. Ocean biogeochemical tracers are phosphate, dissolved oxygen, dissolved inorganic carbon for all three carbon isotopes and alkalinity. Biogenic production of particulate organic matter in the ocean surface layer depends on phosphate availability but with lower efficiency in the high latitude zone, as determined by model fit to ocean data. The calcite to organic carbon rain ratio depends on surface layer temperature. The semi-analytical, ocean sediment module considers calcium carbonate dissolution and oxic and anoxic organic matter remineralisation. The sediment is composed of calcite, non-calcite mineral and reactive organic matter. Sediment porosity profiles are related to sediment composition and a bioturbated layer of 0.1 m thickness is assumed. A sediment segment is ascribed to each ocean layer and segment area stems from observed ocean depth distributions. Sediment burial is calculated from sedimentation velocities at the base of the bioturbated layer. Bioturbation rates and oxic and anoxic remineralisation rates depend on organic carbon rain rates and dissolved oxygen concentrations. The land biosphere module considers leaves, wood, litter and soil. Net primary production depends on atmospheric carbon dioxide concentration and

  6. Mathematical Model and Calibration Procedure of a PSD Sensor Used in Local Positioning Systems.

    Science.gov (United States)

    Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Domingo-Perez, Francisco; Tsirigotis, Georgios

    2016-09-15

Here, we propose a mathematical model and a calibration procedure for a PSD (position sensitive device) sensor equipped with an optical system, to enable accurate measurement of the angle of arrival of one or more beams of light emitted by infrared (IR) transmitters located at distances of between 4 and 6 m. To achieve this objective, it was necessary to characterize the intrinsic parameters that model the system and obtain their values. The first approach was based on a pin-hole model, to which system nonlinearities were added; this was used to model the image points obtained from the nA-level currents provided by the PSD. In addition, we analyzed the main sources of error, including PSD sensor signal noise, gain factor imbalances and PSD sensor distortion. The results indicated that the proposed model and method provided satisfactory calibration and yielded precise parameter values, enabling accurate measurement of the angle of arrival with a low degree of error, as evidenced by the experimental results.
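The pin-hole portion of such a model is easy to sketch: a lateral-effect PSD reports the spot position along an axis from its two electrode currents, and the angle of arrival follows from the spot position and the focal distance. A simplified one-axis illustration (variable names and the omission of the paper's nonlinear correction terms are my own simplifications):

```python
import numpy as np

def psd_position(i1, i2, length):
    """Spot position along one PSD axis from the two electrode currents:
    x = (L/2) * (i2 - i1) / (i1 + i2)."""
    return 0.5 * length * (i2 - i1) / (i1 + i2)

def angle_of_arrival(i1, i2, length, focal):
    """Pin-hole model: incidence angle from spot position and focal
    distance. A real calibration adds distortion and gain-imbalance terms."""
    return np.arctan2(psd_position(i1, i2, length), focal)
```

Calibration then amounts to estimating `length`-scale gains, the focal distance and the nonlinear correction coefficients from known transmitter positions.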

  7. Ultrasound data for laboratory calibration of an analytical model to calculate crack depth on asphalt pavements

    Directory of Open Access Journals (Sweden)

    Miguel A. Franesqui

    2017-08-01

This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwaves exposure in the research article entitled “Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves” (Franesqui et al., 2017 [1].

  8. Ultrasound data for laboratory calibration of an analytical model to calculate crack depth on asphalt pavements.

    Science.gov (United States)

    Franesqui, Miguel A; Yepes, Jorge; García-González, Cándida

    2017-08-01

    This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwaves exposure in the research article entitled "Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves" (Franesqui et al., 2017) [1].
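The underlying geometric idea can be sketched with a commonly used idealized diffraction model (this is a textbook approximation, not necessarily the exact analytical model calibrated in the paper): with transducers straddling a surface crack at spacing 2s, the pulse must detour around the crack tip, lengthening the transit time by a factor that depends on the depth d.

```python
import numpy as np

def transit_ratio(depth, half_spacing):
    """Idealized model: t_crack / t_direct = sqrt(1 + (d/s)**2), where the
    crack lies midway between transducers at spacing 2*s."""
    return np.sqrt(1.0 + (depth / half_spacing) ** 2)

def crack_depth(ratio, half_spacing):
    """Invert the ratio for depth: d = s * sqrt(r**2 - 1). Laboratory
    calibration replaces this ideal curve with a fitted regression."""
    return half_spacing * np.sqrt(ratio ** 2 - 1.0)
```

The laboratory notch measurements serve exactly to replace the ideal ratio-depth curve with a regression whose parameters absorb the mixture-dependent behavior of each asphalt type.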

9. Micro-Arcsec mission: implications of the monitoring, diagnostic and calibration of the instrument response in the data reduction chain.

    Science.gov (United States)

    Busonero, D.; Gai, M.

The goals of 21st century high angular precision experiments rely on the limiting performance associated with the selected instrumental configuration and observational strategy. Both global and narrow-angle micro-arcsec space astrometry require that the instrument contributions to the overall error budget be less than the desired micro-arcsec level precision. Appropriate modelling of the astrometric response is required for optimal definition of the data reduction and calibration algorithms, in order to ensure high sensitivity to the astrophysical source parameters and, in general, high accuracy. We refer to the framework of the SIM-Lite and Gaia missions, the most challenging space missions of the next decade in the narrow-angle and global astrometry fields, respectively. We focus our discussion on the Gaia data reduction issues and instrument calibration implications. We describe selected topics in the framework of the Astrometric Instrument Modelling for the Gaia mission, highlighting their role in the data reduction chain, and we give a brief overview of the Astrometric Instrument Model Data Analysis Software System, a Java-based pipeline under development by our team.

  10. Modelling basin-wide variations in Amazon forest productivity – Part 1: Model calibration, evaluation and upscaling functions for canopy photosynthesis

    Directory of Open Access Journals (Sweden)

    L. M. Mercado

    2009-07-01

Given the importance of the Amazon rainforest in the global carbon and hydrological cycles, there is a need to parameterize and validate ecosystem gas exchange and vegetation models for this region in order to adequately simulate present and future carbon and water balances. In this study, a sun and shade canopy gas exchange model is calibrated and evaluated at five rainforest sites using eddy correlation measurements of carbon and energy fluxes.

    Results from the model-data evaluation suggest that with adequate parameterisation, photosynthesis models taking into account the separation of diffuse and direct irradiance and the dynamics of sunlit and shaded leaves can accurately represent photosynthesis in these forests. Also, stomatal conductance formulations that only take into account atmospheric demand fail to correctly simulate moisture and CO2 fluxes in forests with a pronounced dry season, particularly during afternoon conditions. Nevertheless, it is also the case that large uncertainties are associated not only with the eddy correlation data, but also with the estimates of ecosystem respiration required for model validation. To accurately simulate Gross Primary Productivity (GPP) and energy partitioning the most critical parameters and model processes are the quantum yield of photosynthetic uptake, the maximum carboxylation capacity of Rubisco, and simulation of stomatal conductance.

    Using this model-data synergy, we developed scaling functions to provide estimates of canopy photosynthetic parameters for a range of diverse forests across the Amazon region, utilising the best-fitted parameter for maximum carboxylation capacity of Rubisco, and foliar nutrients (N and P) for all sites.
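
    The sun and shade canopy scheme referred to above partitions leaf area and irradiance between sunlit and shaded fractions. A minimal sketch of that idea, using the standard exponential extinction form (the extinction coefficient, irradiance values, and the simple proportional diffuse split are illustrative assumptions, not the paper's calibrated parameterisation):

```python
import math

def sunlit_lai(lai, k=0.5):
    """Sunlit leaf area index for extinction coefficient k (two-leaf scheme)."""
    return (1.0 - math.exp(-k * lai)) / k

def partition_canopy(lai, i_direct, i_diffuse, k=0.5):
    """Split leaf area and irradiance between sunlit and shaded fractions.

    Illustrative only: the proportional diffuse split below is an assumption,
    not the calibrated scheme of the paper.
    """
    l_sun = sunlit_lai(lai, k)
    l_shade = lai - l_sun
    # Sunlit leaves see the direct beam plus a share of diffuse light;
    # shaded leaves see diffuse light only.
    i_sun = i_direct + i_diffuse * (l_sun / lai)
    i_shade = i_diffuse * (l_shade / lai)
    return l_sun, l_shade, i_sun, i_shade

l_sun, l_shade, i_sun, i_shade = partition_canopy(lai=5.0, i_direct=800.0, i_diffuse=200.0)
```

    Photosynthesis is then computed separately for the two fractions and summed, which is what makes the diffuse/direct separation matter for canopy GPP.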

  11. CALIPSO lidar calibration at 532 nm: version 4 nighttime algorithm

    Directory of Open Access Journals (Sweden)

    J. Kar

    2018-03-01

    Data products from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on board the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) platform were recently updated following the implementation of new (version 4) calibration algorithms for all of the Level 1 attenuated backscatter measurements. In this work we present the motivation for and the implementation of the version 4 nighttime 532 nm parallel channel calibration. The nighttime 532 nm calibration is the most fundamental calibration of CALIOP data, since all of CALIOP's other radiometric calibration procedures – i.e., the 532 nm daytime calibration and the 1064 nm calibrations during both nighttime and daytime – depend either directly or indirectly on the 532 nm nighttime calibration. The accuracy of the 532 nm nighttime calibration has been significantly improved by raising the molecular normalization altitude from 30–34 km to the highest possible signal acquisition range of 36–39 km, substantially reducing stratospheric aerosol contamination. Due to the greatly reduced molecular number density, and consequently reduced signal-to-noise ratio (SNR), at these higher altitudes, the signal is now averaged over a larger number of samples using data from multiple adjacent granules. Additionally, an enhanced strategy for filtering the radiation-induced noise from high-energy particles was adopted. Further, the meteorological model used in the earlier versions has been replaced by the improved Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) model. An aerosol scattering ratio of 1.01 ± 0.01 is now explicitly used for the calibration altitude. These modifications lead to globally revised calibration coefficients which are, on average, 2–3 % lower than in previous data releases. Further, the new calibration procedure is shown to eliminate biases at high altitudes that were present in earlier versions.
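
    The molecular normalization described here amounts to estimating a calibration constant as the ratio of the measured signal to the modeled molecular backscatter, scaled by the assumed scattering ratio of 1.01 and averaged over many samples in the 36–39 km region. A simplified numeric sketch with synthetic data (the profile count, noise level, and the omission of ozone and molecular transmission terms are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Range bins in the 36-39 km calibration region (synthetic stand-in data).
r = np.linspace(36e3, 39e3, 40)                       # altitude (m)
beta_mol = 1e-8 * np.exp(-(r - 36e3) / 7e3)           # modeled molecular backscatter (1/m/sr)
C_true = 2.5e13                                       # "true" calibration constant (invented)
R_aer = 1.01                                          # assumed scattering ratio at calibration altitude

# Many noisy profiles of the range-scaled signal X ~ C * R * beta_mol + noise;
# averaging across profiles and bins recovers the SNR lost at high altitude.
n_profiles = 2000
X = C_true * R_aer * beta_mol[None, :] + rng.normal(0.0, 1e5, size=(n_profiles, r.size))

# Calibration constant: mean ratio of signal to the scaled molecular model.
C_hat = float(np.mean(X / (R_aer * beta_mol[None, :])))
```

    Averaging over multiple adjacent granules, as the version 4 algorithm does, plays the same role as the large `n_profiles` here: single-bin ratios are far too noisy at these altitudes.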

  12. The Fossil Calibration Database-A New Resource for Divergence Dating.

    Science.gov (United States)

    Ksepka, Daniel T; Parham, James F; Allman, James F; Benton, Michael J; Carrano, Matthew T; Cranston, Karen A; Donoghue, Philip C J; Head, Jason J; Hermsen, Elizabeth J; Irmis, Randall B; Joyce, Walter G; Kohli, Manpreet; Lamm, Kristin D; Leehr, Dan; Patané, Josés L; Polly, P David; Phillips, Matthew J; Smith, N Adam; Smith, Nathan D; Van Tuinen, Marcel; Ware, Jessica L; Warnock, Rachel C M

    2015-09-01

    Fossils provide the principal basis for temporal calibrations, which are critical to the accuracy of divergence dating analyses. Translating fossil data into minimum and maximum bounds for calibrations is the most important, and often least appreciated, step of divergence dating. Properly justified calibrations require the synthesis of phylogenetic, paleontological, and geological evidence and can be difficult for nonspecialists to formulate. The dynamic nature of the fossil record (e.g., new discoveries, taxonomic revisions, updates of global or local stratigraphy) requires that calibration data be updated continually lest they become obsolete. Here, we announce the Fossil Calibration Database (http://fossilcalibrations.org), a new open-access resource providing vetted fossil calibrations to the scientific community. Calibrations accessioned into this database are based on individual fossil specimens and follow best practices for phylogenetic justification and geochronological constraint. The associated Fossil Calibration Series, a calibration-themed publication series at Palaeontologia Electronica, will serve as a key pipeline for peer-reviewed calibrations to enter the database. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Use of two-part regression calibration model to correct for measurement error in episodically consumed foods in a single-replicate study design: EPIC case study.

    Science.gov (United States)

    Agogo, George O; van der Voet, Hilko; van't Veer, Pieter; Ferrari, Pietro; Leenders, Max; Muller, David C; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A; Boshuizen, Hendriek

    2014-01-01

    In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and empirical logit approaches, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study. In EPIC, reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjustment for error is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion in specifying the calibration model.
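
    The two-part idea can be illustrated numerically: one part models the probability of any consumption, the other the consumed amount (here on the log scale), and the calibrated intake is their product. This sketch uses simple group means in place of the paper's covariate-adjusted regressions; all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 24-hour-recall style data with excess zeroes: each subject consumes
# on a given day with probability p_true; consumed amounts are lognormal.
n = 50_000
p_true, mu, sigma = 0.3, 3.0, 0.5
consumed = rng.random(n) < p_true
amount = np.where(consumed, rng.lognormal(mu, sigma, n), 0.0)

# Part 1: probability of consumption (here a simple frequency).
p_hat = consumed.mean()

# Part 2: amount among consumers, modelled on the log scale.
log_amt = np.log(amount[consumed])
mu_hat, sig_hat = log_amt.mean(), log_amt.std()
mean_amount = np.exp(mu_hat + 0.5 * sig_hat**2)   # mean of a lognormal

# Calibrated "usual intake" estimate: probability times conditional mean.
usual_intake = p_hat * mean_amount
expected = p_true * np.exp(mu + 0.5 * sigma**2)
```

    In the full model both parts are regressions on covariates, which is where the covariate-selection and overfitting issues discussed above arise.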

  14. Calibration of the century, apsim and ndicea models of decomposition and n mineralization of plant residues in the humid tropics

    Directory of Open Access Journals (Sweden)

    Alexandre Ferreira do Nascimento

    2011-06-01

    The aim of this study was to calibrate the CENTURY, APSIM and NDICEA simulation models for estimating decomposition and N mineralization rates of plant organic materials (Arachis pintoi, Calopogonium mucunoides, Stizolobium aterrimum, Stylosanthes guyanensis) over 360 days in the Atlantic rainforest biome of Brazil. The models' default settings overestimated the decomposition and N-mineralization of plant residues, underlining the fact that the models must be calibrated for use under tropical conditions. For example, the APSIM model simulated the decomposition of the Stizolobium aterrimum and Calopogonium mucunoides residues with error rates of 37.62 and 48.23 %, respectively, by comparison with the observed data, and was the least accurate model in the absence of calibration. At the default settings, the NDICEA model produced error rates of 10.46 and 14.46 % and the CENTURY model 21.42 and 31.84 %, respectively, for Stizolobium aterrimum and Calopogonium mucunoides residue decomposition. After calibration, the models showed a high level of accuracy in estimating decomposition and N-mineralization, with error rates of less than 20 %. The calibrated NDICEA model showed the highest level of accuracy, followed by APSIM and CENTURY. All models performed poorly in the first few months of decomposition and N-mineralization, indicating the need for an additional parameter for initial microorganism growth on the residues that would take the effect of leaching due to rainfall into account.

  15. On global and regional spectral evaluation of global geopotential models

    International Nuclear Information System (INIS)

    Ustun, A; Abbak, R A

    2010-01-01

    Spectral evaluation of global geopotential models (GGMs) is necessary to recognize the behaviour of the gravity signal and its error as recorded in spherical harmonic coefficients and the associated standard deviations. Results put forward in this way explain the whole contribution of gravity data of different kinds that represent various sections of the gravity spectrum. This method is more informative than accuracy assessment methods that use external data such as GPS-levelling. Comparative spectral evaluation of more than one model can be performed both in a global and in a local sense using many spectral tools. The number of GGMs has grown with the increasing amount of data collected by the dedicated satellite gravity missions CHAMP, GRACE and GOCE. This fact makes it necessary to measure the differences between models and to monitor the improvements in gravity field recovery. In this paper, some of the satellite-only and combined models are examined at different scales, globally and regionally, in order to observe the advances in the modelling of GGMs and their strengths at various expansion degrees for geodetic and geophysical applications. The validation of the published errors of model coefficients is a part of this evaluation. All spectral tools explicitly reveal the superiority of the GRACE-based models when compared against the models that comprise conventional satellite tracking data. The disagreement between models is large in local/regional areas if the underlying data sets differ, as seen from the example of the Turkish territory.
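
    One of the basic spectral tools mentioned is the degree variance, the sum over orders of the squared spherical harmonic coefficients at each degree. A sketch with toy coefficients (the Kaula-like power law used to generate them is an illustrative assumption, not a real model's spectrum):

```python
import numpy as np

def degree_variance(C, S):
    """Signal degree variances: sigma_l^2 = sum_m (C_lm^2 + S_lm^2).

    C and S are (lmax+1, lmax+1) arrays of fully normalized coefficients,
    with C[l, m] = C_lm and entries for m > l left at zero.
    """
    return (C**2 + S**2).sum(axis=1)

# Toy coefficients following a Kaula-like power law (rough illustrative scaling).
lmax = 60
rng = np.random.default_rng(2)
C = np.zeros((lmax + 1, lmax + 1))
S = np.zeros((lmax + 1, lmax + 1))
for l in range(2, lmax + 1):
    scale = 1e-5 / l**2
    C[l, : l + 1] = rng.normal(0.0, scale, l + 1)
    S[l, 1 : l + 1] = rng.normal(0.0, scale, l)   # S_l0 is identically zero

sig = degree_variance(C, S)
```

    The same formula applied to the published coefficient standard deviations gives the error degree variances, and comparing the two spectra degree by degree is what reveals where a model's signal drops below its noise.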

  16. Receiver Operating Characteristic Curve-Based Prediction Model for Periodontal Disease Updated With the Calibrated Community Periodontal Index.

    Science.gov (United States)

    Su, Chiu-Wen; Yen, Amy Ming-Fang; Lai, Hongmin; Chen, Hsiu-Hsi; Chen, Sam Li-Sheng

    2017-12-01

    The accuracy of a prediction model for periodontal disease using the community periodontal index (CPI) has previously been assessed using the area under a receiver operating characteristic (AUROC) curve. How the uncalibrated CPI, as measured by general dentists trained by periodontists in a large epidemiologic study, affects the performance of a prediction model has not yet been researched. A two-stage design was used, first conducting a validation study to calibrate the CPI between a senior periodontal specialist and the trained general dentists who measured CPIs in the main study of a nationwide survey. A Bayesian hierarchical logistic regression model was applied to estimate the non-updated and updated clinical weights used for building up risk scores. How the calibrated CPI affected performance of the updated prediction model was quantified by comparing AUROC curves between the original and updated models. Estimates regarding calibration of the CPI obtained from the validation study were 66% and 85% for sensitivity and specificity, respectively. After updating, the clinical weights of each predictor were inflated, and the risk score for the highest risk category was elevated from 434 to 630. This update improved the AUROC performance of the two corresponding prediction models from 62.6% (95% confidence interval [CI]: 61.7% to 63.6%) for the non-updated model to 68.9% (95% CI: 68.0% to 69.6%) for the updated one, a statistically significant difference. The improved performance of the prediction model was demonstrated for periodontal disease as measured by the calibrated CPI derived from a large epidemiologic survey.
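
    The AUROC comparison at the heart of this evaluation can be computed directly from the rank (Mann-Whitney) formulation: the probability that a randomly chosen diseased subject outranks a non-diseased one. A sketch with two hypothetical risk scores (all data synthetic, effect sizes invented):

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney statistic: the probability that a random
    positive case outranks a random negative one (ties count one half)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(3)
n = 2000
labels = rng.random(n) < 0.3
# Two hypothetical risk scores; the "updated" score separates cases better.
score_old = rng.normal(0.0, 1.0, n) + 0.5 * labels
score_new = rng.normal(0.0, 1.0, n) + 1.0 * labels

auc_old, auc_new = auroc(score_old, labels), auroc(score_new, labels)
```

    The pairwise form is O(n_pos * n_neg) and fine at survey scale; rank-sum implementations achieve the same value in O(n log n).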

  17. Evaluation of global solar radiation models for Shanghai, China

    International Nuclear Information System (INIS)

    Yao, Wanxiang; Li, Zhengrong; Wang, Yuyan; Jiang, Fujian; Hu, Lingzhou

    2014-01-01

    Highlights: • 108 existing models are compared and analyzed using 42 years of meteorological data. • Fitting models based on measured data are established from the 42 years of data. • All models are compared using the most recent 10 years of meteorological data. • The results show that polynomial models are the most accurate. - Abstract: In this paper, 89 existing monthly average daily global solar radiation models and 19 existing daily global solar radiation models are compared and analyzed using 42 years of meteorological data. The results show that, among the existing monthly average daily global solar radiation models, linear and polynomial models are able to estimate global solar radiation accurately, and more complex equation types do not appreciably improve the precision. Considering direct parameters such as latitude, altitude, solar altitude and sunshine duration can help improve the accuracy of the models, but indirect parameters cannot. For existing daily global solar radiation models, multi-parameter models are more accurate than single-parameter models, and polynomial models are more accurate than linear models. Monthly average daily global solar radiation models (MADGSR models) and daily global solar radiation models (DGSR models) fitted to the measured data are then established from the 42 years of meteorological data. Finally, the existing models and the fitted models are comparatively analyzed using the most recent 10 years of meteorological data, and the results show that the polynomial models (MADGSR model 2, DGSR model 2 and Maduekwe model 2) are the most accurate.
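
    The sunshine-duration regressions behind such models are typically of the Angström-Prescott form, H/H0 = a + b(S/S0), with polynomial variants adding higher powers of S/S0. A sketch of fitting both by least squares (the generating coefficients and noise level are invented for illustration, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic monthly records: relative sunshine duration s = S/S0 and clearness
# index h = H/H0 generated from a quadratic law plus noise (numbers invented).
s = rng.uniform(0.2, 0.8, 200)
h = 0.18 + 0.62 * s - 0.10 * s**2 + rng.normal(0.0, 0.01, 200)

# Linear (Angstrom-Prescott) and polynomial variants, fitted by least squares.
lin = np.polyfit(s, h, 1)
quad = np.polyfit(s, h, 2)

def rmse(coeffs):
    return float(np.sqrt(np.mean((np.polyval(coeffs, s) - h) ** 2)))

rmse_lin, rmse_quad = rmse(lin), rmse(quad)
```

    Since the linear model is nested in the quadratic one, the polynomial fit can never have a larger in-sample RMSE; the paper's finding is that the improvement also survives on independent recent data.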

  18. Compact radiometric microwave calibrator

    International Nuclear Information System (INIS)

    Fixsen, D. J.; Wollack, E. J.; Kogut, A.; Limon, M.; Mirel, P.; Singal, J.; Fixsen, S. M.

    2006-01-01

    The calibration methods for the ARCADE II instrument are described and their accuracy estimated. The Steelcast-coated aluminum cones that comprise the calibrator have low reflection while maintaining 94% of the absorber volume within 5 mK of the base temperature (modeled). The calibrator demonstrates an absorber with the active part less than one wavelength thick and only marginally larger than the mouth of the largest horn, yet black (less than -40 dB, or 0.01%, reflection) over five octaves in frequency.

  19. Automatic component calibration and error diagnostics for model-based accelerator control. Phase I final report

    International Nuclear Information System (INIS)

    Carl Stern; Martin Lee

    1999-01-01

    Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models

  20. Automatic component calibration and error diagnostics for model-based accelerator control. Phase I final report

    CERN Document Server

    Carl-Stern

    1999-01-01

    Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models.

  1. Review of Calibration Methods for Scheimpflug Camera

    Directory of Open Access Journals (Sweden)

    Cong Sun

    2018-01-01

    The Scheimpflug camera offers a wide range of applications in typical close-range photogrammetry, particle image velocimetry, and digital image correlation, because its depth of field can be greatly extended according to the Scheimpflug condition. Yet conventional calibration methods are not applicable in this case because the assumptions used by classical calibration methodologies are no longer valid for cameras satisfying the Scheimpflug condition. Various methods have therefore been investigated to solve the problem over the last few years. However, no comprehensive review exists that provides an insight into recent calibration methods for Scheimpflug cameras. This paper presents a survey of recent calibration methods for Scheimpflug cameras with perspective lenses, including the general nonparametric imaging model, and analyzes in detail the advantages and drawbacks of the mainstream calibration models with respect to each other. Real-data experiments including calibrations, reconstructions, and measurements are performed to assess the performance of the models. The results reveal that the accuracies of the RMM, PLVM, PCIM, and GNIM are basically equal, while the accuracy of GNIM is slightly lower compared with the other three parametric models. Moreover, the experimental results reveal that the parameters of the tangential distortion are likely coupled with the tilt angle of the sensor in Scheimpflug calibration models. This work lays the foundation for further research on Scheimpflug cameras.

  2. Feasibility of the use of optimisation techniques to calibrate the models used in a post-closure radiological assessment

    International Nuclear Information System (INIS)

    Laundy, R.S.

    1991-01-01

    This report addresses the feasibility of the use of optimisation techniques to calibrate the models developed for the impact assessment of a radioactive waste repository. The maximum likelihood method for improving parameter estimates is considered in detail, and non-linear optimisation techniques for finding solutions are reviewed. Applications are described for the calibration of groundwater flow, radionuclide transport and biosphere models. (author)
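
    Maximum-likelihood calibration of this kind reduces, for Gaussian errors, to minimizing a weighted sum of squared residuals between model output and observations with a non-linear optimiser. A sketch on a toy first-order transport-like model (the decay model, noise level, and optimizer choice are assumptions for illustration, not the report's models):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Toy "transport" model: first-order decay c(t) = c0 * exp(-k t), observed with noise.
t = np.linspace(0.0, 10.0, 50)
c0_true, k_true, noise = 4.0, 0.35, 0.05
obs = c0_true * np.exp(-k_true * t) + rng.normal(0.0, noise, t.size)

def neg_log_likelihood(theta):
    """Gaussian negative log-likelihood (up to a constant) with known noise sigma."""
    c0, k = theta
    resid = obs - c0 * np.exp(-k * t)
    return 0.5 * np.sum((resid / noise) ** 2)

# Non-linear optimisation of the likelihood, of the kind reviewed in the report.
fit = minimize(neg_log_likelihood, x0=[1.0, 0.1], method="Nelder-Mead")
c0_hat, k_hat = fit.x
```

    Real groundwater-flow and transport calibrations differ mainly in the cost of each model evaluation, which is what motivates the careful choice of optimisation technique.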

  3. Evaluation of Hydrologic Simulations Developed Using Multi-Model Synthesis and Remotely-Sensed Data within a Portfolio of Calibration Strategies

    Science.gov (United States)

    Lafontaine, J.; Hay, L.; Markstrom, S. L.

    2016-12-01

    The United States Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development and to facilitate the application of hydrologic simulations within the conterminous United States (CONUS). As many stream reaches in the CONUS are either not gaged or are substantially impacted by water use or flow regulation, ancillary information must be used to determine reasonable parameter estimates for streamflow simulations. Hydrologic models for 1,576 gaged watersheds across the CONUS were developed to test the feasibility of improving streamflow simulations by linking physically based hydrologic models with remotely sensed data products (i.e. snow water equivalent). Initially, the physically based models were calibrated to measured streamflow data to provide a baseline for comparison across multiple calibration strategy tests. In addition, not all ancillary datasets are appropriate for application to all parts of the CONUS (e.g. snow water equivalent in the southeastern U.S., where snow is a rarity). As it is not expected that any one data product or model simulation will be sufficient for representing hydrologic behavior across the entire CONUS, a systematic evaluation was performed of which data products improve hydrologic simulations for various regions of the CONUS. The resulting portfolio of calibration strategies can be used to guide selection of an appropriate combination of modeled and measured information for hydrologic model development and calibration. In addition, these calibration strategies have been developed to be flexible so that new data products can be assimilated. This analysis provides a foundation for understanding how well models work when sufficient streamflow data are not available and could be used to further inform hydrologic model parameter development for ungaged areas.
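
    Baseline calibration against measured streamflow is usually scored with an objective such as the Nash-Sutcliffe efficiency, which can then be used to rank competing calibration strategies. A hedged sketch (the synthetic "strategies" below simply differ in error magnitude; nothing here reproduces the NHM setup):

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the observed mean,
    negative values are worse than predicting the mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(6)
# Synthetic daily streamflow observations with a seasonal cycle.
obs = 10.0 + 3.0 * np.sin(np.linspace(0.0, 6.0 * np.pi, 365)) + rng.normal(0.0, 0.5, 365)

# Hypothetical simulations from two calibration strategies (error sizes invented).
sim_a = obs + rng.normal(0.0, 1.0, 365)
sim_b = obs + rng.normal(0.0, 2.0, 365)

scores = {"strategy_a": nse(sim_a, obs), "strategy_b": nse(sim_b, obs)}
best = max(scores, key=scores.get)
```

    For ungaged reaches the same scoring idea is applied against ancillary targets (e.g. snow water equivalent) instead of streamflow, which is what the portfolio of strategies formalizes.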

  4. Recent Improvements to the Calibration Models for RXTE/PCA

    Science.gov (United States)

    Jahoda, K.

    2008-01-01

    We are updating the calibration of the PCA to correct for slow variations, primarily in the energy-to-channel relationship. We have also improved the physical model in the vicinity of the Xe K-edge, which should increase the reliability of continuum fits above 20 keV. The improvements to the matrix are especially important for simultaneous observations, where the PCA is often used to constrain the continuum while other, higher-resolution spectrometers are used to study the shape of lines and edges associated with iron.

  5. FAST Model Calibration and Validation of the OC5- DeepCwind Floating Offshore Wind System Against Wave Tank Test Data: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wendt, Fabian F [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Robertson, Amy N [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Jonkman, Jason [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-09

    During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  6. Mechanistic site-based emulation of a global ocean biogeochemical model (MEDUSA 1.0) for parametric analysis and calibration: an application of the Marine Model Optimization Testbed (MarMOT 1.1)

    Directory of Open Access Journals (Sweden)

    J. C. P. Hemmings

    2015-03-01

    Biogeochemical ocean circulation models used to investigate the role of plankton ecosystems in global change rely on adjustable parameters to capture the dominant biogeochemical dynamics of a complex biological system. In principle, optimal parameter values can be estimated by fitting models to observational data, including satellite ocean colour products such as chlorophyll that achieve good spatial and temporal coverage of the surface ocean. However, comprehensive parametric analyses require large ensemble experiments that are computationally infeasible with global 3-D simulations. Site-based simulations provide an efficient alternative but can only be used to make reliable inferences about global model performance if robust quantitative descriptions of their relationships with the corresponding 3-D simulations can be established. The feasibility of establishing such a relationship is investigated for an intermediate-complexity biogeochemistry model (MEDUSA) coupled with a widely used global ocean model (NEMO). A site-based mechanistic emulator is constructed for surface chlorophyll output from this target model as a function of model parameters. The emulator comprises an array of 1-D simulators and a statistical quantification of the uncertainty in their predictions. The unknown parameter-dependent biogeochemical environment, in terms of the initial tracer concentrations and lateral flux information required by the simulators, is a significant source of uncertainty. It is approximated by a mean environment derived from a small ensemble of 3-D simulations representing the variability of the target model behaviour over the parameter space of interest. The performance of two alternative uncertainty quantification schemes is examined: a direct method based on comparisons between simulator output and a sample of known target model "truths", and an indirect method that is only partially reliant on knowledge of the target model output.

  7. Inverse modeling as a step in the calibration of the LBL-USGS site-scale model of Yucca Mountain

    International Nuclear Information System (INIS)

    Finsterle, S.; Bodvarsson, G.S.; Chen, G.

    1995-05-01

    Calibration of the LBL-USGS site-scale model of Yucca Mountain is initiated. Inverse modeling techniques are used to match the results of simplified submodels to the observed pressure, saturation, and temperature data. Hydrologic and thermal parameters are determined and compared to the values obtained from laboratory measurements and conventional field test analysis.

  8. Development of a calibration protocol and identification of the most sensitive parameters for the particulate biofilm models used in biological wastewater treatment.

    Science.gov (United States)

    Eldyasti, Ahmed; Nakhla, George; Zhu, Jesse

    2012-05-01

    Biofilm models are valuable tools for process engineers to simulate biological wastewater treatment. In order to enhance the use of biofilm models implemented in contemporary simulation software, model calibration is both necessary and helpful. The aim of this work was to develop a calibration protocol for the particulate biofilm model with the help of a sensitivity analysis of the most important parameters in the biofilm model implemented in BioWin®, and to verify the predictability of the calibration protocol. A case study of a circulating fluidized bed bioreactor (CFBBR) system used for biological nutrient removal (BNR), together with a fluidized-bed respirometric study of the biofilm stoichiometry and kinetics, was used to verify and validate the proposed calibration protocol. Applying the five stages of the biofilm calibration procedure enhanced the applicability of BioWin®, which was capable of predicting most of the performance parameters with an average percentage error (APE) of 0-20%. Copyright © 2012 Elsevier Ltd. All rights reserved.
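
    The average percentage error (APE) used here to judge predictions is simply the mean of the absolute relative deviations between simulated and measured values, expressed in percent. A minimal sketch (the concentration values are invented):

```python
def average_percentage_error(simulated, measured):
    """Average percentage error (APE) between paired simulated and measured values."""
    pairs = list(zip(simulated, measured))
    return 100.0 * sum(abs(s - m) / abs(m) for s, m in pairs) / len(pairs)

# Illustrative effluent concentrations (mg/L), simulated vs. measured.
sim = [9.5, 20.8, 1.10]
meas = [10.0, 19.5, 1.00]
ape = average_percentage_error(sim, meas)
```
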

  9. Calibrating a surface mass-balance model for Austfonna ice cap, Svalbard

    Science.gov (United States)

    Schuler, Thomas Vikhamar; Loe, Even; Taurisano, Andrea; Eiken, Trond; Hagen, Jon Ove; Kohler, Jack

    2007-10-01

    Austfonna (8120 km2) is by far the largest ice mass in the Svalbard archipelago. There is considerable uncertainty about its current state of balance and its possible response to climate change. Over the 2004/05 period, we collected continuous meteorological data series from the ice cap, performed mass-balance measurements using a network of stakes distributed across the ice cap and mapped the distribution of snow accumulation using ground-penetrating radar along several profile lines. These data are used to drive and test a model of the surface mass balance. The spatial accumulation pattern was derived from the snow depth profiles using regression techniques, and ablation was calculated using a temperature-index approach. Model parameters were calibrated using the available field data. Parameter calibration was complicated by the fact that different parameter combinations yield equally acceptable matches to the stake data while the resulting calculated net mass balance differs considerably. Testing model results against multiple criteria is an efficient method to cope with non-uniqueness. In doing so, a range of different data and observations was compared to several different aspects of the model results. We find a systematic underestimation of net balance for parameter combinations that predict observed ice ablation, which suggests that refreezing processes play an important role. To represent these effects in the model, a simple PMAX approach was included in its formulation. Used as a diagnostic tool, the model suggests that the surface mass balance for the period 29 April 2004 to 23 April 2005 was negative (-318 mm w.e.).
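
    The temperature-index ablation and PMAX refreezing components described above can be sketched in a few lines: melt scales with positive degree-days, and a fixed fraction of accumulation may refreeze before the remainder runs off (the degree-day factor and PMAX value below are illustrative, not the calibrated Austfonna parameters):

```python
def surface_mass_balance(temp_c, precip_we, ddf=4.0, t_melt=0.0, p_max=0.6):
    """Seasonal mass balance (mm w.e.) from a daily temperature-index model.

    temp_c    : daily mean air temperatures (deg C)
    precip_we : daily precipitation, water equivalent (mm), assumed solid here
    ddf       : degree-day factor (mm w.e. / deg C / day) -- illustrative value
    p_max     : maximum refreezable fraction of accumulation (PMAX approach)
    """
    accumulation = sum(precip_we)
    melt = sum(ddf * max(t - t_melt, 0.0) for t in temp_c)
    refrozen = min(melt, p_max * accumulation)   # meltwater retained by refreezing
    runoff = melt - refrozen
    return accumulation - runoff

# Toy season: five cold snowfall days followed by a warm spell.
temps = [-10.0] * 5 + [2.0, 3.0, 5.0]
precip = [8.0] * 5 + [0.0, 0.0, 0.0]
balance = surface_mass_balance(temps, precip)
```

    Because refreezing raises the simulated net balance without changing the fit to ablation stakes, a term of this kind addresses exactly the systematic underestimation reported above.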

  10. Advances in stochastic and deterministic global optimization

    CERN Document Server

    Zhigljavsky, Anatoly; Žilinskas, Julius

    2016-01-01

    Current research results in stochastic and deterministic global optimization including single and multiple objectives are explored and presented in this book by leading specialists from various fields. Contributions include applications to multidimensional data visualization, regression, survey calibration, inventory management, timetabling, chemical engineering, energy systems, and competitive facility location. Graduate students, researchers, and scientists in computer science, numerical analysis, optimization, and applied mathematics will be fascinated by the theoretical, computational, and application-oriented aspects of stochastic and deterministic global optimization explored in this book. This volume is dedicated to the 70th birthday of Antanas Žilinskas who is a leading world expert in global optimization. Professor Žilinskas's research has concentrated on studying models for the objective function, the development and implementation of efficient algorithms for global optimization with single and mu...

  11. Calibration of Linked Hydrodynamic and Water Quality Model for Santa Margarita Lagoon

    Science.gov (United States)

    2016-07-01

    TECHNICAL REPORT 3015, July 2016. Calibration of Linked Hydrodynamic and Water Quality Model for Santa Margarita Lagoon: Final Report. Pei-Fang Wang, Chuck Katz, Ripan Barua, SSC Pacific. … was used to drive the transport and water quality kinetics for the simulation of 2007–2009. The sand berm, which controlled the opening/closure of …

  12. Usability of Calibrating Monitor for Soft Proof According to CIE CAM02 Colour Appearance Model

    Directory of Open Access Journals (Sweden)

    Dragoljub Novakovic

    2010-06-01

    Colour appearance models describe viewing conditions and enable simulating the appearance of colours under different illuminants and illumination levels according to human perception. Since it is possible to predict how a colour would look when different illuminants are used, colour appearance models are incorporated in some monitor profiling software. Owing to such software, the tone reproduction curve can be defined by taking into consideration the viewing conditions in which the display is observed. In this work, the use of the CIECAM02 colour appearance model when calibrating an LCD monitor for soft proofing was assessed, in order to determine which tone reproduction curve enables better reproduction of colour. The luminance level was kept constant, whereas tone reproduction curves determined by gamma values and by parameters of the CIECAM02 model were varied. Testing was conducted for the case where a physical print reference is observed under an illuminant whose colour temperature follows the ISO standard for soft proofing (D50), and also for illuminant D65. Based on the results of the calibration assessment, subjective and objective assessment of the created profiles, and a perceptual test carried out with human observers, differences in image display were identified and conclusions reached on the adequacy of CIECAM02 usage in monitor calibration for each viewing condition.

  13. Features calibration of the dynamic force transducers

    Science.gov (United States)

    Prilepko, M. Yu., D. Sc.; Lysenko, V. G.

    2018-04-01

    The article discusses calibration methods for instruments that measure dynamic forces. The relevance of the work stems from the need for a valid determination of the metrological characteristics of dynamic force transducers that takes their intended application into account. The aim of this work is to justify the choice of a calibration method that determines the metrological characteristics of dynamic force transducers under simulated operating conditions, so that suitability for the intended use can be established. The following tasks are solved: the mathematical model and the main measurement equation for calibrating dynamic force transducers by load weight are formulated, and the main components of the calibration uncertainty budget are defined. A new method for calibrating dynamic force transducers is proposed that uses a reference "force-deformation" converter based on a calibrated elastic element whose deformation is measured by a laser interferometer. The mathematical model and the main measurement equation of the proposed method are constructed. It is shown that a calibration method based on laser-interferometric measurement of the deformation of a calibrated elastic element excludes, or considerably reduces, the uncertainty budget components inherent in the load-weight method.
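At its core, the proposed reference converter reduces to Hooke's law: a calibrated elastic element of known stiffness turns an interferometrically measured deformation into a force value. A trivial sketch of that relationship (all names and numbers are illustrative, not from the article):

```python
# Sketch of the "force-deformation" reference-converter idea: a calibrated
# elastic element of known stiffness k converts an interferometric
# displacement reading into a force estimate. All values are illustrative.

def force_from_deformation(deformation_m: float, stiffness_n_per_m: float) -> float:
    """Force on a linear elastic element (Hooke's law): F = k * x."""
    return stiffness_n_per_m * deformation_m

# A laser interferometer resolves displacement in fractions of a wavelength;
# e.g. a 10 micrometre deformation of a 2.0e6 N/m element implies 20 N.
f = force_from_deformation(10e-6, 2.0e6)
print(f)  # 20.0
```

The uncertainty-budget advantage claimed in the abstract comes from the fact that only the element's stiffness and the interferometric length measurement enter this chain.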

  14. Approach of regional gravity field modeling from GRACE data for improvement of geoid modeling for Japan

    Science.gov (United States)

    Kuroishi, Y.; Lemoine, F. G.; Rowlands, D. D.

    2006-12-01

    The latest gravimetric geoid model for Japan, JGEOID2004, suffers from errors at long wavelengths (around 1000 km) in a range of +/- 30 cm. The model was developed by combining surface gravity data with a global marine altimetric gravity model, using EGM96 as a foundation, and the errors at long wavelength are presumably attributed to EGM96 errors. The Japanese islands and their vicinity are located in a region of plate convergence boundaries, producing substantial gravity and geoid undulations in a wide range of wavelengths. Because of the geometry of the islands and trenches, precise information on gravity in the surrounding oceans should be incorporated in detail, even if the geoid model is required to be accurate only over land. The Kuroshio Current, which runs south of Japan, causes high sea surface variability, making altimetric gravity field determination complicated. To reduce the long-wavelength errors in the geoid model, we are investigating GRACE data for regional gravity field modeling at long wavelengths in the vicinity of Japan. Our approach is based on exclusive use of inter-satellite range-rate data with calibrated accelerometer data and attitude data, for regional or global gravity field recovery. In the first step, we calibrate accelerometer data in terms of scales and biases by fitting dynamically calculated orbits to GPS-determined precise orbits. The calibration parameters of accelerometer data thus obtained are used in the second step to recover a global/regional gravity anomaly field. This approach is applied to GRACE data obtained for the year 2005 and resulting global/regional gravity models are presented and discussed.

  15. Local Adaptive Calibration of the GLASS Surface Incident Shortwave Radiation Product Using Smoothing Spline

    Science.gov (United States)

    Zhang, X.; Liang, S.; Wang, G.

    2015-12-01

    Incident solar radiation (ISR) over the Earth's surface plays an important role in determining the Earth's climate and environment. Generally, ISR can be obtained from direct measurements, remotely sensed data, or reanalysis and general circulation model (GCM) data. Each type of product has advantages and limitations: surface direct measurements are accurate but spatially sparse, whereas the global products may have large uncertainties. Ground measurements have normally been used for validation and occasionally for calibration, but transferring their "true values" spatially to improve the satellite products is still a new and challenging topic. In this study, an improved thin-plate smoothing spline approach is presented to locally "calibrate" the Global LAnd Surface Satellite (GLASS) ISR product using ISR data reconstructed from surface meteorological measurements. The influence of surface elevation on ISR estimation was also considered in the proposed method. The point-based reconstructed surface ISR was used as the response variable, with the GLASS ISR product and the surface elevation data at the corresponding locations as explanatory variables, to train the thin-plate spline model. We evaluated the performance of the approach using cross-validation at both daily and monthly time scales over China. We also evaluated the spline-based ISR estimates against independent ground measurements at 10 sites from the Coordinated Enhanced Observation Network (CEON). These validation results indicate that the thin-plate smoothing spline method can be used effectively to calibrate satellite-derived ISR products against ground measurements and thereby achieve better accuracy.
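The calibration scheme in this record can be sketched with an off-the-shelf thin-plate spline smoother. This is an illustrative reconstruction on synthetic data, not the authors' code: SciPy's `RBFInterpolator` with a thin-plate-spline kernel and a smoothing penalty stands in for the paper's penalized spline, and all data and coefficients are made up.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Synthetic stand-ins for the paper's data: satellite-derived ISR and
# elevation at ground-station locations (explanatory variables), and the
# station-reconstructed ISR (response). All numbers are illustrative.
n = 200
sat_isr = rng.uniform(100.0, 300.0, n)          # W/m^2, satellite product
elev = rng.uniform(0.0, 4000.0, n)              # m, station elevation
truth = sat_isr * 1.05 - 0.004 * elev + 5.0     # hypothetical local bias
ground_isr = truth + rng.normal(0.0, 2.0, n)    # noisy ground reconstruction

# Train a smoothing thin-plate spline mapping (satellite ISR, elevation)
# -> ground ISR; the `smoothing` term plays the role of the spline penalty.
X = np.column_stack([sat_isr, elev])
spline = RBFInterpolator(X, ground_isr, kernel="thin_plate_spline", smoothing=1.0)

# "Calibrate" the satellite product at a new point.
calibrated = spline(np.array([[200.0, 1500.0]]))
print(calibrated)
```

With the hypothetical linear bias above, the calibrated value at (200 W/m^2, 1500 m) should sit near 209 W/m^2; in the real method the response surface is learned from station data rather than assumed.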

  16. Performance and Model Calibration of R-D-N Processes in Pilot Plant

    DEFF Research Database (Denmark)

    de la Sota, A.; Larrea, L.; Novak, L.

    1994-01-01

    This paper deals with the first part of an experimental programme in a pilot plant configured for advanced biological nutrient removal processes treating domestic wastewater of Bilbao. The IAWPRC Model No.1 was calibrated in order to optimize the design of the full-scale plant. In this first phas...

  17. hydroPSO: A Versatile Particle Swarm Optimisation R Package for Calibration of Environmental Models

    Science.gov (United States)

    Zambrano-Bigiarini, M.; Rojas, R.

    2012-04-01

    Particle Swarm Optimisation (PSO) is a recent and powerful population-based stochastic optimisation technique inspired by the social behaviour of bird flocking, which shares similarities with other evolutionary techniques such as Genetic Algorithms (GA). In PSO, however, each individual of the population, known as a particle in PSO terminology, adjusts its flying trajectory on the multi-dimensional search-space according to its own experience (best-known personal position) and that of its neighbours in the swarm (best-known local position). PSO has recently received a surge of attention given its flexibility, ease of programming, low memory and CPU requirements, and efficiency. Despite these advantages, PSO may still get trapped in sub-optimal solutions, suffer from swarm explosion or premature convergence. Thus, the development of enhancements to the "canonical" PSO is an active area of research. To date, several modifications to the canonical PSO have been proposed in the literature, resulting in a large and dispersed collection of codes and algorithms which might well be used for similar if not identical purposes. In this work we present hydroPSO, a platform-independent R package implementing several enhancements to the canonical PSO that we consider of utmost importance to bring this technique to the attention of a broader community of scientists and practitioners. hydroPSO is model-independent, allowing the user to interface any model code with the calibration engine without having to invest considerable effort in customizing PSO to a new calibration problem. Some of the controlling options to fine-tune hydroPSO are: four alternative topologies, several types of inertia weight, time-variant acceleration coefficients, time-variant maximum velocity, regrouping of particles when premature convergence is detected, different types of boundary conditions and many others. Additionally, hydroPSO implements recent PSO variants such as: Improved Particle Swarm
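For orientation, a minimal canonical (global-best) PSO looks as follows. This is a generic sketch, not hydroPSO itself; the package layers the enhancements listed above (topologies, time-variant coefficients, regrouping, boundary handling) on top of this core loop, and the coefficient values here are common defaults, not hydroPSO's.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal canonical (global-best) PSO minimiser. `bounds` is a list of
    (lo, hi) pairs, one per dimension."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()             # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive (personal) + social (global) velocity terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                   # simple boundary clamping
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy "calibration": minimise a shifted sphere function standing in for a
# model-error objective.
best_x, best_val = pso(lambda p: np.sum((p - 1.0) ** 2), [(-5, 5)] * 3)
print(best_x, best_val)
```

On this toy objective the swarm converges near the optimum at (1, 1, 1); a real calibration would replace the lambda with a wrapper that runs the environmental model and returns a goodness-of-fit score.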

  18. Optimal Operational Monetary Policy Rules in an Endogenous Growth Model: a calibrated analysis

    OpenAIRE

    Arato, Hiroki

    2009-01-01

    This paper constructs an endogenous growth New Keynesian model and considers the growth and welfare effects of Taylor-type (operational) monetary policy rules. The Ramsey equilibrium and the optimal operational monetary policy rule are also computed. In the calibrated model, the Ramsey-optimal volatility of the inflation rate is smaller than in a standard exogenous growth New Keynesian model with physical capital accumulation. The optimal operational monetary policy rule makes the nominal interest rate respond s...

  19. FVS and global Warming: A prospectus for future development

    Science.gov (United States)

    Nicholas L. Crookston; Gerald E. Rehfeldt; Dennis E. Ferguson; Marcus Warwell

    2008-01-01

    Climate change - global warming and changes in precipitation - will cause changes in tree growth rates, mortality rates, the distribution of tree species, competition, and species interactions. An implicit assumption in FVS is that site quality will remain the same as it was during the period in which the observations used to calibrate the component models were made, and that the...

  20. USING GEM - GLOBAL ECONOMIC MODEL IN ACHIEVING A GLOBAL ECONOMIC FORECAST

    Directory of Open Access Journals (Sweden)

    Camelia Madalina Orac

    2013-12-01

    Full Text Available The global economic development model has proved insufficiently reliable under the new economic crisis. As a result, the entire theoretical construction of the global economy needs rethinking and reorientation. In this context, it is quite clear that only through the effective use of specific techniques and tools of economic-mathematical modeling, statistics, regional analysis and economic forecasting is it possible to obtain an overview of the future economy.

  1. A high-resolution global flood hazard model

    Science.gov (United States)

    Sampson, Christopher C.; Smith, Andrew M.; Bates, Paul B.; Neal, Jeffrey C.; Alfieri, Lorenzo; Freer, Jim E.

    2015-09-01

    Floods are a natural hazard that affect communities worldwide, but to date the vast majority of flood hazard research and mapping has been undertaken by wealthy developed nations. As populations and economies have grown across the developing world, so too has demand from governments, businesses, and NGOs for modeled flood hazard data in these data-scarce regions. We identify six key challenges faced when developing a flood hazard model that can be applied globally and present a framework methodology that leverages recent cross-disciplinary advances to tackle each challenge. The model produces return period flood hazard maps at ˜90 m resolution for the whole terrestrial land surface between 56°S and 60°N, and results are validated against high-resolution government flood hazard data sets from the UK and Canada. The global model is shown to capture between two thirds and three quarters of the area determined to be at risk in the benchmark data without generating excessive false positive predictions. When aggregated to ˜1 km, mean absolute error in flooded fraction falls to ˜5%. The full complexity global model contains an automatically parameterized subgrid channel network, and comparison to both a simplified 2-D only variant and an independently developed pan-European model shows the explicit inclusion of channels to be a critical contributor to improved model performance. While careful processing of existing global terrain data sets enables reasonable model performance in urban areas, adoption of forthcoming next-generation global terrain data sets will offer the best prospect for a step-change improvement in model performance.
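The validation comparison described above (share of benchmark flood area captured versus false positives) is, in essence, a binary contingency-table calculation. A minimal sketch on synthetic masks follows; the score names and data are illustrative, not the paper's exact metrics.

```python
import numpy as np

def flood_skill(model: np.ndarray, benchmark: np.ndarray) -> dict:
    """Contingency-table scores for binary flood maps (True = wet).
    'hit_rate' is the share of benchmark wet cells the model captures;
    'false_alarm' is the share of model wet cells that are dry in the
    benchmark; 'critical_success' penalises both misses and false alarms."""
    hits = np.sum(model & benchmark)
    misses = np.sum(~model & benchmark)
    false_pos = np.sum(model & ~benchmark)
    return {
        "hit_rate": hits / (hits + misses),
        "false_alarm": false_pos / (hits + false_pos),
        "critical_success": hits / (hits + misses + false_pos),
    }

# Hypothetical 100x100 masks standing in for model and benchmark maps.
rng = np.random.default_rng(1)
bench = rng.random((100, 100)) < 0.3          # ~30% of cells wet
model = bench.copy()
flip = rng.random((100, 100)) < 0.1           # perturb ~10% of cells
model[flip] = ~model[flip]
scores = flood_skill(model, bench)
print(scores)
```

With 10% of cells perturbed, the hit rate lands near 0.9, illustrating how the paper's "two thirds to three quarters captured without excessive false positives" statement can be read off such a table.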

  2. Fitting and Calibrating a Multilevel Mixed-Effects Stem Taper Model for Maritime Pine in NW Spain

    Science.gov (United States)

    Arias-Rodil, Manuel; Castedo-Dorado, Fernando; Cámara-Obregón, Asunción; Diéguez-Aranda, Ulises

    2015-01-01

    Stem taper data are usually hierarchical (several measurements per tree, and several trees per plot), making application of a multilevel mixed-effects modelling approach essential. However, correlation between trees in the same plot/stand has often been ignored in previous studies. Fitting and calibration of a variable-exponent stem taper function were conducted using data from 420 trees felled in even-aged maritime pine (Pinus pinaster Ait.) stands in NW Spain. In the fitting step, the tree level explained much more variability than the plot level, and therefore calibration at plot level was omitted. Several stem heights were evaluated for measurement of the additional diameter needed for calibration at tree level. Calibration with an additional diameter measured at between 40 and 60% of total tree height showed the greatest improvement in volume and diameter predictions. If additional diameter measurement is not available, the fixed-effects model fitted by the ordinary least squares technique should be used. Finally, we also evaluated how the expansion of parameters with random effects affects the stem taper prediction, as we consider this a key question when applying the mixed-effects modelling approach to taper equations. The results showed that correlation between random effects should be taken into account when assessing the influence of random effects in stem taper prediction. PMID:26630156

  3. Calibration of hydrodynamic model MIKE 11 for the sub-basin of the Piauitinga river, Sergipe, Brazil

    Directory of Open Access Journals (Sweden)

    Marcos Vinicius Folegatti

    2010-12-01

    Full Text Available In the Piauitinga river sub-basin, the environment has been suffering from harmful human actions such as deforestation around springs, inadequate use of the abstracted water, inappropriate domestic use, siltation and sand exploitation, and contamination by domestic, industrial and agricultural residues. This study presents the calibration of the one-dimensional hydrodynamic MIKE 11 model, which simulates water flow in estuaries, rivers, irrigation systems, channels and other water bodies. The aim of this work was to fit the MIKE 11 model to the available discharge data for this sub-basin. Data from 1994 to 1995 were used for calibration and data from 1996 to 2006 for validation, except for 1997, for which data were not available. Manning's roughness coefficient was the main parameter used for calibrating discharge in the Piauitinga river sub-basin; other parameters were heat balance, water stratification and groundwater leakage. The results showed that the model performed very well for the Piauitinga basin, with an efficiency coefficient of 0.9 for both periods, demonstrating that the model can be used to estimate water quantity in the Piauitinga river sub-basin.
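The "efficiency coefficient of 0.9" quoted in this record reads like a Nash-Sutcliffe efficiency, the usual skill score for discharge calibration; assuming that interpretation (the abstract does not name the metric), it is computed as:

```python
import numpy as np

def nash_sutcliffe(observed, simulated) -> float:
    """Nash-Sutcliffe efficiency: 1 - SSE / sum of squares about the
    observed mean. 1.0 is a perfect fit; 0.0 means the model is no better
    than predicting the observed mean."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

# Hypothetical daily discharges (m^3/s), purely illustrative.
obs = np.array([12.0, 15.0, 30.0, 22.0, 18.0, 14.0])
sim = np.array([13.0, 14.5, 28.0, 23.5, 17.0, 15.0])
print(round(nash_sutcliffe(obs, sim), 3))  # 0.957
```

An NSE around 0.9, as reported for both the calibration and validation periods, indicates the simulated hydrograph explains about 90% of the variance of the observed one.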

  4. U.S. Department of Energy Office of Legacy Management Calibration Facilities - 12103

    Energy Technology Data Exchange (ETDEWEB)

    Barr, Deborah [U.S. Department of Energy Office of Legacy Management, Grand Junction, Colorado (United States); Traub, David; Widdop, Michael [S.M. Stoller Corporation, Grand Junction, Colorado (United States)

    2012-07-01

    This paper describes radiometric calibration facilities located in Grand Junction, Colorado, and at three secondary calibration sites. These facilities are available to the public for the calibration of radiometric field instrumentation for in-situ measurements of radium (uranium), thorium, and potassium. Both borehole and hand-held instruments may be calibrated at the facilities. Aircraft or vehicle mounted systems for large area surveys may be calibrated at the Grand Junction Regional Airport facility. These calibration models are recognized internationally as stable, well-characterized radiation sources for calibration. Calibration models built in other countries are referenced to the DOE models, which are also widely used as a standard for calibration within the U.S. Calibration models are used to calibrate radiation detectors used in uranium exploration, remediation, and homeland security. (authors)

  5. Assessing the Global Risk of Establishment of Cydia pomonella (Lepidoptera: Tortricidae) using CLIMEX and MaxEnt Niche Models.

    Science.gov (United States)

    Kumar, Sunil; Neven, Lisa G; Zhu, Hongyu; Zhang, Runzhi

    2015-08-01

    Accurate assessment of insect pest establishment risk is needed by national plant protection organizations to negotiate international trade of horticultural commodities that can potentially carry the pests and result in inadvertent introductions in the importing countries. We used mechanistic and correlative niche models to quantify and map the global patterns of the potential for establishment of codling moth (Cydia pomonella L.), a major pest of apples, peaches, pears, and other pome and stone fruits, and a quarantine pest in countries where it currently does not occur. The mechanistic model CLIMEX was calibrated using species-specific physiological tolerance thresholds, whereas the correlative model MaxEnt used species occurrences and climatic spatial data. The projected potential distribution from both models conformed well to the current known distribution of codling moth. Neither model predicted suitable environmental conditions in countries located between 20°N and 20°S, potentially because of the shorter photoperiod and the lack of chilling there. Suitable conditions were, however, indicated for Japan, where codling moth currently does not occur but where its preferred host species (i.e., apple) is present. Average annual temperature and latitude were the main environmental variables associated with codling moth distribution at the global level. The predictive models developed in this study present the global risk of establishment of codling moth, and can be used for monitoring potential introductions of codling moth in different countries and by policy makers and trade negotiators in making science-based decisions.

  6. Calibration of a semi-distributed hydrological model using discharge and remote sensing data

    NARCIS (Netherlands)

    Muthuwatta, L.P.; Muthuwatta, Lal P.; Booij, Martijn J.; Rientjes, T.H.M.; Rientjes, Tom H.M.; Bos, M.G.; Gieske, A.S.M.; Ahmad, Mobin-Ud-Din; Yilmaz, Koray; Yucel, Ismail; Gupta, Hoshin V.; Wagener, Thorsten; Yang, Dawen; Savenije, Hubert; Neale, Christopher; Kunstmann, Harald; Pomeroy, John

    2009-01-01

    The objective of this study is to present an approach to calibrate a semi-distributed hydrological model using observed streamflow data and actual evapotranspiration time series estimates based on remote sensing data. First, daily actual evapotranspiration is estimated using available MODIS

  7. Global sensitivity analysis for an integrated model for simulation of nitrogen dynamics under the irrigation with treated wastewater.

    Science.gov (United States)

    Sun, Huaiwei; Zhu, Yan; Yang, Jinzhong; Wang, Xiugui

    2015-11-01

    As the amount of water resources that can be utilized for agricultural production is limited, the reuse of treated wastewater (TWW) for irrigation is a practical way to alleviate the water crisis in China. Process-based models that estimate nitrogen dynamics under irrigation are widely used to investigate the best irrigation and fertilization management practices in developed and developing countries. However, for modeling such a complex system of wastewater reuse, it is critical to conduct a sensitivity analysis to determine which of the numerous input parameters, and which parameter interactions, contribute most to the variance of the model output. In this study, a comprehensive global sensitivity analysis (GSA) of nitrogen dynamics is reported. The objective was to compare different GSA methods with respect to the key parameters for different model predictions of the nitrogen and crop growth modules. The analysis was performed in two steps. First, the Morris screening method, one of the most commonly used screening methods, was carried out to select the most influential parameters; then a variance-based GSA method (the extended Fourier amplitude sensitivity test, EFAST) was used to investigate more thoroughly the effects of the selected parameters on the model predictions. The GSA results showed that strong parameter interactions exist in the crop nitrogen uptake, nitrogen denitrification, crop yield, and evapotranspiration modules. Among all parameters, a soil physical parameter, the van Genuchten air-entry parameter, showed the largest sensitivity effects on the major model predictions. These results confirm that more effort should be focused on quantifying soil parameters for more accurate nitrogen- and crop-related predictions, and stress the need to better calibrate the model in a global sense. This study demonstrates the advantages of the GSA on a
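The screening step of the two-step procedure can be illustrated with a self-contained Morris-style elementary-effects sketch. This is a simplified one-factor-at-a-time variant run on a toy function, not the study's nitrogen simulator; in the paper's workflow, EFAST would then be applied to the factors this step retains.

```python
import numpy as np

def morris_screening(f, dim, n_traj=50, delta=0.25, seed=0):
    """Simplified Morris screening: for each random base point in the unit
    hypercube, perturb each factor by `delta` in turn and record the
    elementary effect EE_i = (f(x + delta*e_i) - f(x)) / delta. Returns
    mu* (mean |EE|, overall influence) and sigma (std of EE, a signature
    of non-linearity/interactions)."""
    rng = np.random.default_rng(seed)
    ee = np.empty((n_traj, dim))
    for t in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, dim)   # keep x + delta inside [0, 1]
        fx = f(x)
        for i in range(dim):
            xp = x.copy()
            xp[i] += delta
            ee[t, i] = (f(xp) - fx) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Toy model: factor 0 is strong and non-linear, factor 1 weak and linear,
# factor 2 inert -- screening should rank them in that order.
def toy(x):
    return 5.0 * x[0] ** 2 + 0.5 * x[1] + 0.0 * x[2]

mu_star, sigma = morris_screening(toy, dim=3)
print(mu_star, sigma)
```

For the toy model, mu* ranks factor 0 first and flags its non-linearity through a large sigma, while the inert factor scores zero, which is exactly the information used to prune the parameter list before the more expensive variance-based step.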

  8. Calibrating a forest landscape model to simulate frequent fire in Mediterranean-type shrublands

    Science.gov (United States)

    Syphard, A.D.; Yang, J.; Franklin, J.; He, H.S.; Keeley, J.E.

    2007-01-01

    In Mediterranean-type ecosystems (MTEs), fire disturbance influences the distribution of most plant communities, and altered fire regimes may be more important than climate factors in shaping future MTE vegetation dynamics. Models that simulate the high-frequency fire and post-fire response strategies characteristic of these regions will be important tools for evaluating potential landscape change scenarios. However, few existing models have been designed to simulate these properties over long time frames and broad spatial scales. We refined a landscape disturbance and succession (LANDIS) model to operate on an annual time step and to simulate altered fire regimes in a southern California Mediterranean landscape. After developing a comprehensive set of spatial and non-spatial variables and parameters, we calibrated the model to simulate very high fire frequencies and evaluated the simulations under several parameter scenarios representing hypotheses about system dynamics. The goal was to ensure that observed model behavior would simulate the specified fire regime parameters, and that the predictions were reasonable based on current understanding of community dynamics in the region. After calibration, the two dominant plant functional types responded realistically to different fire regime scenarios. Therefore, this model offers a new alternative for simulating altered fire regimes in MTE landscapes. © 2007 Elsevier Ltd. All rights reserved.

  9. Fertilizer Induced Nitrate Pollution in RCW: Calibration of the DNDC Model

    Science.gov (United States)

    El Hailouch, E.; Hornberger, G.; Crane, J. W.

    2012-12-01

    Fertilizer is widely used in urban and suburban households because of the socially driven attention homeowners pay to lawn appearance. With its high nitrogen content, fertilizer considerably impacts the environment through the emission of the highly potent greenhouse gas nitrous oxide and the leaching of nitrate. Nitrate leaching is particularly important because fertilizer-sourced nitrate that leaches through the soil pollutes groundwater. To model the effect of fertilizer application on the environment, the geochemical DeNitrification-DeComposition (DNDC) model was previously developed to quantify the effects of fertilizer use. The purpose of this study is to enable more effective use of this model at large scale through a measurement-based calibration. To this end, leaching was measured and studied at 12 sites in the Richland Creek Watershed (RCW). Information about the fertilization and irrigation regimes of these sites was collected, along with lysimeter readings that gave nitrate fluxes in the soil. A study of the amount of nitrate leaching and its variation with geographical location, time of year, and fertilization and irrigation regime has led to a better understanding of the driving forces behind nitrate leaching. Quantifying the influence of each of these parameters allows a more accurate calibration of the model, permitting use that extends beyond the RCW. Measurement of nitrate leaching at the statewide or nationwide level will in turn help guide efforts to reduce groundwater pollution caused by fertilizer.

  10. The scintillating optical fiber isotope experiment: Bevalac calibrations of test models

    International Nuclear Information System (INIS)

    Connell, J.J.; Binns, W.R.; Dowkontt, P.F.; Epstein, J.W.; Israel, M.H.; Klarmann, J.; Washington Univ., St. Louis, MO; Webber, W.R.; Kish, J.C.

    1990-01-01

    The Scintillating Optical Fiber Isotope Experiment (SOFIE) is a Cherenkov dE/dx-range experiment being developed to study the isotopic composition of cosmic rays in the iron region with sufficient resolution to resolve isotopes separated by one mass unit at iron. This instrument images stopping particles with a block of scintillating optical fibers coupled to an image-intensified video camera. From the digitized video data the trajectory and range of particles stopping in the fiber bundle can be determined; this information, together with a Cherenkov measurement, is used to determine mass. To facilitate this determination, a new Cherenkov response equation was derived for heavy ions at energies near threshold in thick Cherenkov radiators. Test models of SOFIE were calibrated at the Lawrence Berkeley Laboratory's Bevalac heavy ion accelerator in 1985 and 1986 using beams of iron nuclei with energies of 465 to 515 MeV/nucleon. This paper presents the results of these calibrations and discusses the design of the SOFIE Bevalac test models in the context of the scientific objectives of the eventual balloon experiment. The test models show a mass resolution of σ_A ≈ 0.30 amu and a range resolution of σ_R ≈ 250 μm. These results are sufficient for a successful cosmic ray isotope experiment, thus demonstrating the feasibility of the detector system. The SOFIE test models represent the first successful application in the field of cosmic ray astrophysics of the emerging technology of scintillating optical fibers. (orig.)

  11. Mathematical model and computer programme for theoretical calculation of calibration curves of neutron soil moisture probes with highly effective counters

    International Nuclear Information System (INIS)

    Kolev, N.A.

    1981-07-01

    A mathematical model, based on three-group theory, for computer calculation of the calibration curves of neutron soil moisture probes with highly efficient counters is described. Methods for experimental correction of the mathematical model are discussed and proposed. The computer programme described allows the calibration of neutron probes with counters of high or low efficiency and with central or end geometry, with or without linearization of the calibration curve. The use of two calculation variants and the printing of output data make the programme suitable not only for calibration but also for other investigations. Separate data inputs for soil and probe temperature allow analysis of the temperature influence. The computer programme and calculation examples are given. (author)

  12. Experimental calibration of the mathematical model of Air Torque Position dampers with non-cascading blades

    Directory of Open Access Journals (Sweden)

    Bikić Siniša M.

    2016-01-01

    Full Text Available This paper is focused on the mathematical model of Air Torque Position dampers. The mathematical model establishes a link between the velocity of the air in front of the damper, the position of the damper blade, and the moment acting on the blade caused by the air flow. This research aims to verify the mathematical model experimentally for dampers with non-cascading blades. Four types of dampers with non-cascading blades were considered: single-blade dampers, dampers with two cross blades, dampers with two parallel blades, and dampers with two blades of which one is fixed in the horizontal position. The case of a damper with a straight pipeline positioned in front of and behind the damper was taken into consideration. Calibration and verification of the mathematical model were conducted experimentally on a laboratory facility for testing dampers used to regulate the air flow rate in heating, ventilation and air conditioning systems. The design and setup of the laboratory facility, as well as the construction, adjustment and calibration of the laboratory damper, are presented in this paper. The mathematical model was calibrated using one set of data, while its verification was conducted using a second set. The mathematical model was successfully validated and can be used for accurate measurement of the air velocity at dampers with non-cascading blades under different operating conditions. [Project of the Ministry of Science of the Republic of Serbia, no. TR31058]

  13. Calibration of the electromagnetic calorimeter of the Atlas detector: reconstruction of events with non-pointing photons in the frame of a GMSB supersymmetric model; Etalonnage du calorimetre electromagnetique du detecteur Atlas: reconstruction des evenements avec des photons non pointants das le cadre d'un modele supersymetrique GMSB

    Energy Technology Data Exchange (ETDEWEB)

    Prieur, D

    2005-04-15

    The analysis of test-beam data is focused on the calibration of the ATLAS electromagnetic calorimeter. An electrical model has been developed to predict the shape of the physics pulse from the calibration signal in order to produce optimal filtering coefficients. These are used to compute the energy while minimizing electronic noise and removing sensitivity to possible time shifts. Using these coefficients, the response uniformity is 0.6%, in agreement with the 0.7% global constant term required for the whole calorimeter. The study of non-pointing photons is driven by the detection of the long-lived neutralinos predicted by GMSB SUSY models. A systematic study with a detailed simulation of the ATLAS detector was performed to determine the angular resolution of the electromagnetic calorimeter for such photons. The results were used to parametrize the detector response and to reconstruct SUSY events from this model. (author)

  14. Calculations to support JET neutron yield calibration: Modelling of neutron emission from a compact DT neutron generator

    Science.gov (United States)

    Čufar, Aljaž; Batistoni, Paola; Conroy, Sean; Ghani, Zamir; Lengar, Igor; Milocco, Alberto; Packer, Lee; Pillon, Mario; Popovichev, Sergey; Snoj, Luka; JET Contributors

    2017-03-01

    At the Joint European Torus (JET) the ex-vessel fission chambers and in-vessel activation detectors are used as the neutron production rate and neutron yield monitors, respectively. In order to ensure that these detectors produce accurate measurements they need to be experimentally calibrated. A new calibration of the neutron detectors to 14 MeV neutrons, resulting from deuterium-tritium (DT) plasmas, is planned at JET using a compact accelerator-based neutron generator (NG) in which a D/T beam impinges on a solid target containing T/D, producing neutrons by DT fusion reactions. This paper presents the analysis performed to model the neutron source characteristics in terms of energy spectrum, angle-energy distribution and the effect of the neutron generator geometry. Different codes capable of simulating accelerator-based DT neutron sources are compared, and sensitivities to uncertainties in the generator's internal structure are analysed. The analysis was performed to support preparation for the experimental measurements characterizing the NG as a calibration source. Further extensive neutronics analyses, performed with this model of the NG, will be needed to support the neutron calibration experiments and to take into account various differences between the calibration experiment and experiments using the plasma as a source of neutrons.

  15. Calculations to support JET neutron yield calibration: Modelling of neutron emission from a compact DT neutron generator

    Energy Technology Data Exchange (ETDEWEB)

    Čufar, Aljaž, E-mail: aljaz.cufar@ijs.si [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Batistoni, Paola [ENEA, Department of Fusion and Nuclear Safety Technology, I-00044 Frascati, Rome (Italy); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Conroy, Sean [Uppsala University, Department of Physics and Astronomy, PO Box 516, SE-75120 Uppsala (Sweden); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Ghani, Zamir [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon OX14 3DB (United Kingdom); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Lengar, Igor [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Milocco, Alberto; Packer, Lee [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon OX14 3DB (United Kingdom); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Pillon, Mario [ENEA, Department of Fusion and Nuclear Safety Technology, I-00044 Frascati, Rome (Italy); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Popovichev, Sergey [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon OX14 3DB (United Kingdom); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Snoj, Luka [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom)

    2017-03-01

    At the Joint European Torus (JET) the ex-vessel fission chambers and in-vessel activation detectors are used as the neutron production rate and neutron yield monitors, respectively. In order to ensure that these detectors produce accurate measurements, they need to be experimentally calibrated. A new calibration of neutron detectors to 14 MeV neutrons, resulting from deuterium–tritium (DT) plasmas, is planned at JET using a compact accelerator-based neutron generator (NG), in which a D/T beam impinges on a solid target containing T/D, producing neutrons by DT fusion reactions. This paper presents the analysis that was performed to model the neutron source characteristics in terms of energy spectrum, angle–energy distribution and the effect of the neutron generator geometry. Different codes capable of simulating accelerator-based DT neutron sources are compared, and sensitivities to uncertainties in the generator's internal structure are analysed. The analysis was performed to support preparation for the experimental measurements performed to characterize the NG as a calibration source. Further extensive neutronics analyses, performed with this model of the NG, will be needed to support the neutron calibration experiments and take into account various differences between the calibration experiment and experiments using the plasma as a source of neutrons.

  16. Biotrickling filter modeling for styrene abatement. Part 1: Model development, calibration and validation on an industrial scale.

    Science.gov (United States)

    San-Valero, Pau; Dorado, Antonio D; Martínez-Soria, Vicente; Gabaldón, Carmen

    2018-01-01

    A three-phase dynamic mathematical model was calibrated and validated for the simulation of an industrial styrene-degrading biotrickling filter. Model equations were based on mass balances describing the main processes in biotrickling filtration: convection, mass transfer, diffusion, and biodegradation. The model considered the key features of the industrial operation of biotrickling filters, namely variable loading conditions and intermittent irrigation, by switching between the mathematical descriptions of periods with and without irrigation. The model was calibrated with steady-state data from a laboratory biotrickling filter treating inlet loads of 13-74 g C m-3 h-1 at empty bed residence times of 30-15 s. The model predicted the dynamic emission at the outlet of the biotrickling filter, simulating the small concentration peaks occurring during irrigation. Validation of the model was performed using data from a pilot on-site biotrickling filter treating styrene installed in a fiber-reinforced facility. The model predicted the performance of the biotrickling filter working under highly oscillating emissions at inlet loads in the range of 5-23 g C m-3 h-1 and at an empty bed residence time of 31 s for more than 50 days, with a goodness of fit of 0.84. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Calibration of an estuarine sediment transport model to sediment fluxes as an intermediate step for simulation of geomorphic evolution

    Science.gov (United States)

    Ganju, N.K.; Schoellhamer, D.H.

    2009-01-01

    Modeling geomorphic evolution in estuaries is necessary to model the fate of legacy contaminants in the bed sediment and the effect of climate change, watershed alterations, sea level rise, construction projects, and restoration efforts. Coupled hydrodynamic and sediment transport models used for this purpose typically are calibrated to water level, currents, and/or suspended-sediment concentrations. However, small errors in these tidal-timescale models can accumulate to cause major errors in geomorphic evolution, which may not be obvious. Here we present an intermediate step towards simulating decadal-timescale geomorphic change: calibration to estimated sediment fluxes (mass/time) at two cross-sections within an estuary. Accurate representation of sediment fluxes gives confidence in representation of sediment supply to and from the estuary during those periods. Several years of sediment flux data are available for the landward and seaward boundaries of Suisun Bay, California, the landward-most embayment of San Francisco Bay. Sediment flux observations suggest that episodic freshwater flows export sediment from Suisun Bay, while gravitational circulation during the dry season imports sediment from seaward sources. The Regional Oceanic Modeling System (ROMS), a three-dimensional coupled hydrodynamic/sediment transport model, was adapted for Suisun Bay, for the purposes of hindcasting 19th and 20th century bathymetric change, and simulating geomorphic response to sea level rise and climatic variability in the 21st century. The sediment transport parameters were calibrated using the sediment flux data from 1997 (a relatively wet year) and 2004 (a relatively dry year). The remaining years of data (1998, 2002, 2003) were used for validation. The model represents the inter-annual and annual sediment flux variability, while net sediment import/export is accurately modeled for three of the five years. 
The use of sediment flux data for calibrating an estuarine geomorphic

  18. Semi-Empirical Calibration of the Integral Equation Model for Co-Polarized L-Band Backscattering

    Directory of Open Access Journals (Sweden)

    Nicolas Baghdadi

    2015-10-01

    Full Text Available The objective of this paper is to extend the semi-empirical calibration of the backscattering Integral Equation Model (IEM, initially proposed for Synthetic Aperture Radar (SAR data at C- and X-bands, to SAR data at L-band. A large dataset of radar signal and in situ measurements (soil moisture and surface roughness over bare soil surfaces was used. This dataset was collected over numerous agricultural study sites in France, Luxembourg, Belgium, Germany and Italy using various SAR sensors (AIRSAR, SIR-C, JERS-1, PALSAR-1, ESAR. Results showed slightly better simulations with the exponential autocorrelation function than with the Gaussian function, and with HH polarization than with VV. Using the exponential autocorrelation function, the mean difference between experimental data and Integral Equation Model (IEM simulations is +0.4 dB in HH and −1.2 dB in VV, with a Root Mean Square Error (RMSE of about 3.5 dB. In order to improve the modeling results of the IEM for a better use in the inversion of SAR data, a semi-empirical calibration of the IEM was performed at L-band by replacing the correlation length derived from field experiments with a fitting parameter. Better agreement was observed between the backscattering coefficient provided by the SAR and that simulated by the calibrated version of the IEM (RMSE of about 2.2 dB.
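The calibration procedure described here (replacing a field-measured input with a free parameter fitted to minimize the model-to-SAR misfit) can be sketched with a toy stand-in; the forward model and data below are hypothetical, only the fitting strategy mirrors the text:

```python
import numpy as np

# Toy stand-in for the IEM forward model: backscatter (dB) as a function of
# soil moisture mv, rms height hrms, and correlation length L. The functional
# form and all data are hypothetical; only the semi-empirical calibration
# step (fit L by minimizing RMSE against observations) follows the paper.
def toy_backscatter_db(mv, hrms, L):
    return -20.0 + 25.0 * mv + 4.0 * np.log10(hrms / L)

rng = np.random.default_rng(0)
mv = rng.uniform(0.05, 0.35, 40)       # synthetic in situ soil moisture
hrms = rng.uniform(0.5, 3.0, 40)       # synthetic rms roughness (cm)
sigma_obs = toy_backscatter_db(mv, hrms, L=6.0) + rng.normal(0.0, 0.5, 40)

# Treat the correlation length as a fitting parameter: grid-search RMSE
grid = np.linspace(1.0, 20.0, 400)
rmse = [np.sqrt(np.mean((sigma_obs - toy_backscatter_db(mv, hrms, L)) ** 2))
        for L in grid]
L_fit = grid[int(np.argmin(rmse))]     # lands near the "true" value of 6.0
```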

  19. Limits on the Secular Drift of the TMI Calibration

    Science.gov (United States)

    Wilheit, T. T.; Farrar, S.; Jones, L.; Santos-Garcia, A.

    2012-12-01

    Data from the TRMM Microwave Imager (TMI) can be applied to the problem of determining the trend in oceanic precipitation over more than a decade. It is thus critical to know whether the calibration of the instrument drifts over this time scale. Recently, a set of Windsat data with a self-consistent calibration covering July 2005 through June 2006 and all of 2011 has become available. The mission of Windsat, determining the feasibility of measuring oceanic wind speed and direction, requires extraordinary attention to instrument calibration. With TRMM in a low-inclination orbit and Windsat in a near-polar sun-synchronous orbit, there are many observations coincident in space and nearly coincident in time. A data set has been assembled in which the observations are averaged over 1-degree boxes of latitude and longitude and restricted to a maximum time difference of 1 hour. The University of Central Florida (UCF) compares the two radiometers by computing radiances based on Global Data Assimilation System (GDAS) analyses for all channels of each radiometer for each box and computing double differences for corresponding channels. The algorithm is described in detail by Biswas et al. (2012). Texas A&M (TAMU) uses an independent implementation of the GDAS-based algorithm and another in which the Windsat radiances are used to compute sea surface temperature, sea surface wind speed, precipitable water and cloud liquid water for each box. These are, in turn, used to compute the TMI radiances. These two algorithms are described in detail by Wilheit (2012). Both teams apply stringent filters to the boxes to ensure that the conditions are consistent with the model assumptions. Examination of both teams' results indicates that the drift is less than 0.04 K over the 5 ½ year span for the 10 and 37 GHz channels of TMI. The 19 and 21 GHz channels show somewhat larger differences, but they are more influenced by atmospheric changes. Given the design of the instruments, it is
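The double-difference comparison used by both teams can be sketched in a few lines; the brightness temperatures and bias values below are hypothetical:

```python
import numpy as np

# Double-difference inter-calibration sketch: each radiometer is compared to
# a radiative-transfer simulation of the same scene, and the two
# observed-minus-simulated differences are then differenced. Scene-dependent
# geophysical signal cancels to first order, leaving the relative calibration
# offset. All numbers here are hypothetical.
def double_difference(obs_a, sim_a, obs_b, sim_b):
    return (obs_a - sim_a) - (obs_b - sim_b)

scene = np.array([180.0, 195.0, 210.0, 225.0])  # simulated TB per box (K)
tmi_obs = scene + 0.30                           # instrument A: +0.30 K bias
windsat_obs = scene + 0.05                       # instrument B: +0.05 K bias

dd = double_difference(tmi_obs, scene, windsat_obs, scene)
relative_offset = dd.mean()                      # ~0.25 K relative bias
```

Tracking this offset across matchup periods (2005-2006 vs 2011) is what bounds the secular drift.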

  20. The Effect of Sample Size and Data Numbering on Precision of Calibration Model to predict Soil Properties

    Directory of Open Access Journals (Sweden)

    H Mohamadi Monavar

    2017-10-01

    Full Text Available Introduction Precision agriculture (PA) is a technology that measures and manages within-field variability, such as the physical and chemical properties of soil. The nondestructive and rapid VIS-NIR technique has detected significant correlations between reflectance spectra and the physical and chemical properties of soil. On the other hand, quantitative prediction of soil factors such as nitrogen, carbon, cation exchange capacity and clay content is very important in precision farming. The emphasis of this paper is on comparing different techniques for choosing calibration samples, such as random selection, selection based on chemical data, and selection based on PCA. Since increasing the number of samples is usually time-consuming and costly, this study identified the best sampling approach among the available methods for building calibration models. In addition, the effect of sample size on the accuracy of the calibration and validation models was analyzed. Materials and Methods Two hundred and ten soil samples were collected from a cultivated farm located in Avarzaman in Hamedan province, Iran. The crop rotation was mostly potato and wheat. Samples were collected from a depth of 20 cm, passed through a 2 mm sieve and air dried at room temperature. Chemical analysis was performed in the soil science laboratory, faculty of agriculture engineering, Bu-Ali Sina University, Hamadan, Iran. Two spectrometers (AvaSpec-ULS 2048 UV-VIS and FT-NIR100N) were used to measure the spectral bands covering the UV-Vis and NIR regions (220-2200 nm). Each soil sample was uniformly tiled in a petri dish and scanned 20 times. The pre-processing methods of multivariate scatter correction (MSC) and baseline correction (BC) were then applied to the raw signals using Unscrambler software. The samples were divided into two groups: one group of 105 samples for calibration and the second group for validation. Each time, 15 samples were selected randomly and tested the accuracy of

  1. Importance of including small-scale tile drain discharge in the calibration of a coupled groundwater-surface water catchment model

    DEFF Research Database (Denmark)

    Hansen, Anne Lausten; Refsgaard, Jens Christian; Christensen, Britt Stenhøj Baun

    2013-01-01

    the catchment. In this study, a coupled groundwater-surface water model based on the MIKE SHE code was developed for the 4.7 km2 Lillebæk catchment in Denmark, where tile drain flow is a major contributor to the stream discharge. The catchment model was calibrated in several steps by incrementally including the observation data into the calibration to see the effect on model performance of including diverse data types, especially tile drain discharge. For the Lillebæk catchment, measurements of hydraulic head, daily stream discharge, and daily tile drain discharge from five small (1–4 ha) drainage areas exist. The results showed that including tile drain data in the calibration of the catchment model improved its general performance for hydraulic heads and stream discharges. However, the model failed to correctly describe the local-scale dynamics of the tile drain discharges, and, furthermore, including the drain...

  2. Calibration of NASA Turbulent Air Motion Measurement System

    Science.gov (United States)

    Barrick, John D. W.; Ritter, John A.; Watson, Catherine E.; Wynkoop, Mark W.; Quinn, John K.; Norfolk, Daniel R.

    1996-01-01

    A turbulent air motion measurement system (TAMMS) was integrated onboard the Lockheed 188 Electra airplane (designated NASA 429) based at the Wallops Flight Facility in support of the NASA role in global tropospheric research. The system provides air motion and turbulence measurements from an airborne platform that is capable of sampling tropospheric and planetary boundary-layer conditions. TAMMS consists of a gust probe with free-rotating vanes mounted on a 3.7-m epoxy-graphite composite nose boom, a high-resolution inertial navigation system (INS), and a data acquisition system. A variation of the tower flyby method augmented with radar tracking was implemented for the calibration of the static pressure position error and the air temperature probe. Additional flight calibration maneuvers were performed remote from the tower in homogeneous atmospheric conditions. System hardware and instrumentation are described and the calibration procedures discussed. Calibration and flight results are presented to illustrate the overall ability of the system to determine the three-component ambient wind fields during straight and level flight conditions.

  3. A Global Stock and Bond Model

    OpenAIRE

    Connor, Gregory

    1996-01-01

    Factor models are now widely used to support asset selection decisions. Global asset allocation, the allocation between stocks versus bonds and among nations, usually relies instead on correlation analysis of international equity and bond indexes. It would be preferable to have a single integrated framework for both asset selection and asset allocation. This framework would require a factor model applicable at an asset or country level, as well as at a global level,...

  4. On the Free Vibration Modeling of Spindle Systems: A Calibrated Dynamic Stiffness Matrix

    Directory of Open Access Journals (Sweden)

    Omar Gaber

    2014-01-01

    Full Text Available The effect of bearings on the vibrational behavior of machine tool spindles is investigated. This is done through the development of a calibrated dynamic stiffness matrix (CDSM) method, where the bearings' flexibility is represented by massless linear spring elements with tuneable stiffness. A dedicated MATLAB code is written to develop and assemble the element stiffness matrices for the system's multiple components and to apply the boundary conditions. The developed method is applied to an illustrative example of a spindle system. When the spindle bearings are modeled as simply supported boundary conditions, the DSM model results in a fundamental frequency much higher than the system's nominal value. The simply supported boundary conditions are then replaced by linear spring elements, and the spring constants are adjusted such that the resulting calibrated CDSM model leads to the nominal fundamental frequency of the spindle system. The spindle frequency results are also validated against the experimental data. The proposed method can be effectively applied to predict the vibration characteristics of spindle systems supported by bearings.
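The calibration step — tuning a bearing spring constant until the model reproduces a nominal fundamental frequency — can be sketched on a 2-DOF lumped model; the model and all numbers are illustrative, not the paper's DSM formulation:

```python
import numpy as np

# Sketch: represent the bearing as a linear spring of unknown stiffness k_b
# and tune k_b until the model's fundamental frequency matches the nominal
# (measured) value. A 2-DOF mass-spring stand-in replaces the DSM here.
M = np.diag([1.0, 1.0])     # lumped masses (kg), illustrative
K_SHAFT = 1.0e6             # shaft stiffness between the masses (N/m)

def fundamental_hz(k_b):
    K = np.array([[k_b + K_SHAFT, -K_SHAFT],
                  [-K_SHAFT, K_SHAFT]])
    lam = np.linalg.eigvalsh(np.linalg.solve(M, K))
    return np.sqrt(lam.min()) / (2.0 * np.pi)

def calibrate(f_target, lo=1e3, hi=1e8, iters=200):
    # fundamental frequency grows monotonically with k_b, so bisect
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fundamental_hz(mid) < f_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

f_nominal = fundamental_hz(2.0e5)   # stands in for the measured frequency
k_calibrated = calibrate(f_nominal) # recovers the bearing stiffness
```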

  5. (Pre-) calibration of a Reduced Complexity Model of the Antarctic Contribution to Sea-level Changes

    Science.gov (United States)

    Ruckert, K. L.; Guan, Y.; Shaffer, G.; Forest, C. E.; Keller, K.

    2015-12-01

    Understanding and projecting future sea-level changes poses nontrivial challenges. Sea-level changes are driven primarily by changes in the density of seawater as well as changes in the size of glaciers and ice sheets. Previous studies have demonstrated that a key source of uncertainty surrounding sea-level projections is the response of the Antarctic ice sheet to warming temperatures. Here we calibrate a previously published and relatively simple model of the Antarctic ice sheet over a hindcast period from the last interglacial period to the present. We apply and compare a range of (pre-) calibration methods, including a Bayesian approach that accounts for heteroskedasticity. We compare the model hindcasts and projections for different levels of model complexity and calibration methods. We compare the projections with the upper bounds from previous studies and find that our projections have a narrower range in 2100. Furthermore, we discuss the implications for the design of climate risk management strategies.

  6. Calibration of the Nonlinear Accelerator Model at the Diamond Storage Ring

    CERN Document Server

    Bartolini, Riccardo; Rowland, James; Martin, Ian; Schmidt, Frank

    2010-01-01

    The correct implementation of the nonlinear ring model is crucial to achieve the top performance of a synchrotron light source. Several dynamical quantities can be used to compare the real machine with the model and eventually to correct the accelerator. Most of these methods are based on the analysis of turn-by-turn data of excited betatron oscillations. We present the experimental results of the campaign of measurements carried out at Diamond. A combination of Frequency Map Analysis (FMA) and detuning-with-momentum measurements has allowed a precise calibration of the nonlinear model, which is capable of reproducing the nonlinear beam dynamics in the storage ring.
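The starting point of such turn-by-turn analyses is extracting the betatron tune from the oscillation data; a minimal FFT-peak sketch on synthetic data (FMA/NAFF refines this with windowing and interpolation, and the numbers below are not Diamond data) is:

```python
import numpy as np

# Sketch: estimate the fractional betatron tune from turn-by-turn position
# data with an FFT peak search. The oscillation below is synthetic.
N_TURNS = 1000
TRUE_TUNE = 0.22                  # fractional tune (oscillations per turn)
n = np.arange(N_TURNS)
x = 1.5 * np.cos(2 * np.pi * TRUE_TUNE * n + 0.3)  # turn-by-turn BPM data

spectrum = np.abs(np.fft.rfft(x))
spectrum[0] = 0.0                 # ignore the DC component
tune_est = np.argmax(spectrum) / N_TURNS
```

The plain peak search resolves the tune to 1/N_TURNS; interpolated methods such as NAFF push this to ~1/N^4, which is what makes frequency map analysis sensitive to small nonlinear detuning.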

  7. A Solvatochromic Model Calibrates Nitriles’ Vibrational Frequencies to Electrostatic Fields

    Science.gov (United States)

    Bagchi, Sayan; Fried, Stephen D.; Boxer, Steven G.

    2012-01-01

    Electrostatic interactions provide a primary connection between a protein’s three-dimensional structure and its function. Infrared (IR) probes are useful because vibrational frequencies of certain chemical groups, such as nitriles, are linearly sensitive to local electrostatic field, and can serve as a molecular electric field meter. IR spectroscopy has been used to study electrostatic changes or fluctuations in proteins, but measured peak frequencies have not been previously mapped to total electric fields, because of the absence of a field-frequency calibration and the complication of local chemical effects such as H-bonds. We report a solvatochromic model that provides a means to assess the H-bonding status of aromatic nitrile vibrational probes, and calibrates their vibrational frequencies to electrostatic field. The analysis involves correlations between the nitrile’s IR frequency and its 13C chemical shift, whose observation is facilitated by a robust method for introducing isotopes into aromatic nitriles. The method is tested on the model protein Ribonuclease S (RNase S) containing a labeled p-CN-Phe near the active site. Comparison of the measurements in RNase S against solvatochromic data gives an estimate of the average total electrostatic field at this location. The value determined agrees quantitatively with MD simulations, suggesting broader potential for the use of IR probes in the study of protein electrostatics. PMID:22694663
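Since the nitrile frequency is linear in the local field, the calibration reduces to fitting a line to solvatochromic data and inverting it for a measured protein frequency. A minimal sketch, with hypothetical (field, frequency) pairs rather than the published values:

```python
import numpy as np

# Sketch of a linear field-frequency calibration: fit nu = nu0 + m * F from
# solvatochromic (field, frequency) data, then invert a measured protein IR
# frequency to an average electrostatic field. All numbers are hypothetical.
fields = np.array([-60.0, -40.0, -20.0, 0.0, 20.0])          # MV/cm
freqs = np.array([2233.2, 2231.9, 2230.5, 2229.1, 2227.8])   # cm^-1

m, nu0 = np.polyfit(fields, freqs, 1)  # Stark tuning slope and intercept

def field_from_frequency(nu):
    return (nu - nu0) / m

measured = 2232.0                       # hypothetical protein IR peak
estimated_field = field_from_frequency(measured)
```

In practice the 13C chemical-shift correlation described above is what licenses this inversion, by flagging H-bonded probes whose frequencies would otherwise be misread as field shifts.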

  8. PLEIADES ABSOLUTE CALIBRATION : INFLIGHT CALIBRATION SITES AND METHODOLOGY

    Directory of Open Access Journals (Sweden)

    S. Lachérade

    2012-07-01

    Full Text Available In-flight calibration of space sensors once in orbit is a decisive step to be able to fulfil the mission objectives. This article presents the methods of the in-flight absolute calibration performed during the commissioning phase. Four in-flight calibration methods are used: absolute calibration, cross-calibration with reference sensors such as PARASOL or MERIS, multi-temporal monitoring and inter-band calibration. These algorithms are based on acquisitions over natural targets such as African deserts, Antarctic sites, La Crau (automatic calibration station) and oceans (calibration over molecular scattering), as well as new extra-terrestrial sites such as the Moon and selected stars. After an overview of the instrument and a description of the calibration sites, we point out how each method addresses one or several aspects of the calibration. We focus on how these methods complement each other in operational use, and how they help build a coherent set of information that addresses all aspects of in-orbit calibration. Finally, we present the perspectives that the high level of agility of PLEIADES offers for the improvement of its calibration and a better characterization of the calibration sites.

  9. Global thermal models of the lithosphere

    Science.gov (United States)

    Cammarano, F.; Guerri, M.

    2017-12-01

    Unraveling the thermal structure of the outermost shell of our planet is key for understanding its evolution. We obtain temperatures from interpretation of global shear-velocity (VS) models. Long-wavelength thermal structure is well determined by seismic models and only slightly affected by compositional effects and uncertainties in mineral-physics properties. Absolute temperatures and gradients with depth, however, are not well constrained. Adding constraints from petrology, heat-flow observations and the thermal evolution of oceanic lithosphere helps to better estimate absolute temperatures in the top part of the lithosphere. We produce global thermal models of the lithosphere at different spatial resolutions, up to spherical-harmonic degree 24, and provide estimated standard deviations. We provide a purely seismic thermal (TS) model and hybrid models in which temperatures are corrected with steady-state conductive geotherms on continents and cooling-model temperatures in oceanic regions. All relevant physical properties, with the exception of thermal conductivity, are based on a self-consistent thermodynamical modelling approach. Our global thermal models also include density and compressional-wave velocities (VP) as obtained either assuming no lateral variations in composition or a simple reference 3-D compositional structure, which takes into account a chemically depleted continental lithosphere. We found that seismically-derived temperatures in continental lithosphere fit well, overall, with continental geotherms, but a large variation in radiogenic heat is required to reconcile them with heat-flow (long wavelength) observations. Oceanic shallow lithosphere below mid-oceanic ridges and young oceans is colder than expected, confirming the possible presence of a dehydration boundary around 80 km depth already suggested in previous studies. 
The global thermal models should serve as the basis to move at a smaller spatial scale, where additional thermo-chemical variations

  10. α Centauri A as a potential stellar model calibrator: establishing the nature of its core

    Science.gov (United States)

    Nsamba, B.; Monteiro, M. J. P. F. G.; Campante, T. L.; Cunha, M. S.; Sousa, S. G.

    2018-05-01

    Understanding the physical process responsible for the transport of energy in the core of α Centauri A is of the utmost importance if this star is to be used in the calibration of stellar model physics. Adoption of different parallax measurements available in the literature results in differences in the interferometric radius constraints used in stellar modelling. Further, this is at the origin of the different dynamical mass measurements reported for this star. With the goal of reproducing the revised dynamical mass derived by Pourbaix & Boffin, we modelled the star using two stellar grids varying in the adopted nuclear reaction rates. Asteroseismic and spectroscopic observables were complemented with different interferometric radius constraints during the optimisation procedure. Our findings show that best-fit models reproducing the revised dynamical mass favour the existence of a convective core (≳ 70% of best-fit models), a result that is robust against changes to the model physics. If this mass is accurate, then α Centauri A may be used to calibrate stellar model parameters in the presence of a convective core.

  11. A Simple Model of Global Aerosol Indirect Effects

    Science.gov (United States)

    Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter

    2013-01-01

    Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to preindustrial cloud condensation nuclei concentration, preindustrial accumulation mode radius, width of the accumulation mode, size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of present-day AIEs as low as -5 W/sq m and as high as -0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence of AIE uncertainty on uncertainty in parameter values.
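The sign and rough magnitude of such estimates can be illustrated with an even simpler Twomey-type calculation; this is not the paper's model, and every parameter value below is an illustrative assumption:

```python
import math

# Minimal Twomey-type estimate of the first aerosol indirect effect, showing
# why the forcing is negative and grows with the droplet-number perturbation.
# NOT the paper's model; all values are illustrative.
S0 = 1361.0            # solar constant (W/m^2)
CLOUD_FRACTION = 0.3   # area fraction of susceptible low cloud
ALBEDO = 0.5           # cloud albedo
N_PI, N_PD = 100.0, 150.0  # pre-industrial vs present-day droplet number (cm^-3)

# Twomey susceptibility: d(albedo)/d(ln N) ~ albedo * (1 - albedo) / 3
d_albedo = ALBEDO * (1.0 - ALBEDO) / 3.0 * math.log(N_PD / N_PI)
forcing = -(S0 / 4.0) * CLOUD_FRACTION * d_albedo  # W/m^2, negative
```

Even this stripped-down version reproduces two features noted above: the forcing is near-logarithmic (hence roughly linear for modest emission changes) and sits within the quoted plausible range.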

  12. Cross-calibration of interferometric SAR data

    DEFF Research Database (Denmark)

    Dall, Jørgen

    2003-01-01

    Generation of digital elevation models from interferometric synthetic aperture radar (SAR) data is a well established technique. Achieving a high geometric fidelity calls for a calibration accounting for inaccurate navigation data and system parameters as well as system imperfections. Fully automated calibration techniques are preferable, especially for operational mapping. The author presents one such technique, called cross-calibration. Though developed for single-pass interferometry, it may be applicable to multi-pass interferometry, too. Cross-calibration requires stability during mapping... ground control point is often needed. The paper presents the principles and mathematics of the cross-calibration technique and illustrates its successful application to EMISAR data.

  13. Dataset for: An efficient multi-stage algorithm for full calibration of the hemodynamic model from BOLD signal responses

    KAUST Repository

    Djellouli, Rabia

    2017-01-01

    We propose a computational strategy that falls into the category of prediction/correction iterative-type approaches, for calibrating the hemodynamic model introduced by Friston et al. (2000). The proposed method is employed to estimate consecutively the values of the biophysiological system parameters and the external stimulus characteristics of the model. Numerical results corresponding to both synthetic and real functional Magnetic Resonance Imaging (fMRI) measurements for a single stimulus as well as for multiple stimuli are reported to highlight the capability of this computational methodology to fully calibrate the considered hemodynamic model.

  14. Calibration of complex models through Bayesian evidence synthesis: a demonstration and tutorial

    Science.gov (United States)

    Jackson, Christopher; Jit, Mark; Sharples, Linda; DeAngelis, Daniela

    2016-01-01

    Summary Decision-analytic models must often be informed using data which are only indirectly related to the main model parameters. The authors outline how to implement a Bayesian synthesis of diverse sources of evidence to calibrate the parameters of a complex model. A graphical model is built to represent how observed data are generated from statistical models with unknown parameters, and how those parameters are related to quantities of interest for decision-making. This forms the basis of an algorithm to estimate a posterior probability distribution, which represents the updated state of evidence for all unknowns given all data and prior beliefs. This process calibrates the quantities of interest against data and, at the same time, propagates all parameter uncertainties to the results used for decision-making. To illustrate these methods, the authors demonstrate how a previously-developed Markov model for the progression of human papillomavirus (HPV16) infection was rebuilt in a Bayesian framework. Transition probabilities between states of disease severity are inferred indirectly from cross-sectional observations of prevalence of HPV16 and HPV16-related disease by age, cervical cancer incidence, and other published information. Previously, a discrete collection of plausible scenarios was identified, but with no further indication of which of these are more plausible. Instead, the authors derive a Bayesian posterior distribution, in which scenarios are implicitly weighted according to how well they are supported by the data. In particular, they emphasise the appropriate choice of prior distributions and the checking and comparison of fitted models. PMID:23886677
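The key idea — scenarios implicitly weighted by how well the data support them — can be sketched with a binomial likelihood; the scenarios and prevalence data below are hypothetical, not the HPV16 study's values:

```python
import math

# Sketch: instead of a flat list of "plausible" scenarios, weight each one by
# its likelihood given the data. Each scenario predicts a prevalence; the data
# are k positives out of n tested. Scenarios and data are hypothetical.
scenario_prevalence = [0.05, 0.10, 0.15, 0.20]
k, n = 23, 200                      # observed positives / sample size

def binom_loglik(p, k, n):
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1.0 - p))

logliks = [binom_loglik(p, k, n) for p in scenario_prevalence]
mx = max(logliks)
weights = [math.exp(ll - mx) for ll in logliks]  # uniform prior assumed
total = sum(weights)
posterior = [w / total for w in weights]  # data-driven scenario weights
```

With the observed proportion at 0.115, the scenario predicting 0.10 dominates the posterior, which is exactly the kind of implicit weighting the abstract describes (the full method does this jointly over all evidence sources via MCMC).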

  15. Comparison of Performance between Genetic Algorithm and SCE-UA for Calibration of SCS-CN Surface Runoff Simulation

    Directory of Open Access Journals (Sweden)

    Ji-Hong Jeon

    2014-11-01

    Global optimization methods linked with simulation models are widely used for automated calibration and serve as useful tools for searching for cost-effective alternatives for environmental management. A genetic algorithm (GA) and the shuffled complex evolution (SCE-UA) algorithm were linked with the Long-Term Hydrologic Impact Assessment (L-THIA) model, which employs the curve number (SCS-CN) method. The performance of the two optimization methods was compared by automatically calibrating L-THIA for monthly runoff from 10 watersheds in Indiana, with areas ranging from 32.7 to 5844.1 km2. The SCS-CN values and the total five-day rainfall for antecedent moisture condition (AMC) adjustment were optimized, with the Nash-Sutcliffe (NS) value as the objective function. The GA rapidly approached the optimal space within the first 10 generations; thereafter, solutions dispersed around the optimal space in a cross-hair pattern because of the increasing mutation rate. The number of model loop executions influenced calibration performance for both methods. With fewer loop executions, the GA outperformed SCE-UA. For most watersheds, GA calibration performance remained better than that of SCE-UA through the 50th generation, when the number of model loop executions was around 5150 (one generation comprises 100 individuals). Beyond the 50th generation, however, SCE-UA calibrated monthly runoff better than the GA. Optimized SCS-CN values for the primary land use types were nearly identical between the two methods, but those for minor land use types and the total five-day rainfall for AMC adjustment differed somewhat, because those parameters did not significantly influence the objective function. The GA method is recommended when a model simulation takes a long time and the user does not have sufficient time.
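A minimal real-coded GA of the kind compared here can be sketched as follows: it calibrates a single curve number against synthetic runoff by maximizing the Nash-Sutcliffe value. The rainfall data, population size, and operator settings are illustrative only and far simpler than the L-THIA setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def scs_runoff(P, cn):
    """SCS-CN direct runoff (mm) for rainfall depth P (mm)."""
    S = 25400.0 / cn - 254.0            # potential retention
    Ia = 0.2 * S                        # initial abstraction
    return np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Synthetic "observations" generated with CN = 75 (an assumption)
P = rng.uniform(10.0, 120.0, size=60)
obs = scs_runoff(P, 75.0)

pop = rng.uniform(40.0, 98.0, size=20)  # initial population of CN values
for gen in range(60):
    fit = np.array([nash_sutcliffe(obs, scs_runoff(P, cn)) for cn in pop])
    new = [pop[fit.argmax()]]           # elitism: keep current best
    while len(new) < pop.size:
        i, j = rng.integers(pop.size, size=2)
        a = pop[i] if fit[i] > fit[j] else pop[j]   # tournament selection
        i, j = rng.integers(pop.size, size=2)
        b = pop[i] if fit[i] > fit[j] else pop[j]
        child = 0.5 * (a + b) + rng.standard_normal()  # blend crossover + mutation
        new.append(float(np.clip(child, 40.0, 98.0)))
    pop = np.array(new)
fit = np.array([nash_sutcliffe(obs, scs_runoff(P, cn)) for cn in pop])
best = pop[fit.argmax()]
```

The mutation term is what produces the dispersion (the "cross-hair pattern") around the optimum once the population has converged; elitism ensures the best solution is never lost to it.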

  16. Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration

    Science.gov (United States)

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion, the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation, is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to the parameters used to model the system. Though commonly used in other industries, regularized inversion remains imperfectly understood in the groundwater field, and there is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite, a frequently used tool for highly parameterized model calibration that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and the techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with the use of pilot points as a parameterization device and the processing and grouping of observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by the PEST utility support programs, are presented in the appendixes.
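The Tikhonov scheme mentioned here can be illustrated in a few lines: the regularization rows are stacked beneath the data rows and the augmented system is solved by ordinary least squares. This is a generic textbook formulation, not PEST's actual implementation, and the toy inverse problem is invented.

```python
import numpy as np

def tikhonov_solve(G, d, lam, L=None):
    """Solve min ||G m - d||^2 + lam^2 ||L m||^2 by stacking the penalty
    rows lam*L beneath the data rows G (a standard least-squares
    formulation of Tikhonov regularization)."""
    n = G.shape[1]
    if L is None:
        L = np.eye(n)   # zeroth-order Tikhonov: prefer small parameter values
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

# Ill-posed toy problem: far more parameters than observations
rng = np.random.default_rng(2)
G = rng.standard_normal((10, 30))
m_true = np.zeros(30)
m_true[5] = 1.0
d = G @ m_true + 0.01 * rng.standard_normal(10)
m_reg = tikhonov_solve(G, d, lam=1.0)
```

Increasing `lam` trades data fit for stability: the solution norm shrinks monotonically, which is the behavior a regularization weight in any such scheme is tuned around.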

  17. Three Different Ways of Calibrating Burger's Contact Model for Viscoelastic Model of Asphalt Mixtures by Discrete Element Method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2016-01-01

    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional discrete element method. Combined with Burger's model, three contact models were used for the construction of a constitutive asphalt mixture model with viscoelastic properties... modulus. Three different approaches have been used and compared for calibrating the Burger's contact model. Values of the dynamic modulus and phase angle of asphalt mixtures were predicted by conducting DE simulation under dynamic strain control loading. The excellent agreement between the predicted...
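For context, the frequency response of the Burgers model itself (a Maxwell element in series with a Kelvin-Voigt element) can be written down directly; calibration then amounts to choosing the four constants so that the predicted dynamic modulus and phase angle match laboratory data. The parameter values in the example are arbitrary placeholders, not asphalt-mixture properties.

```python
import numpy as np

def burgers_dynamic(omega, E1, eta1, E2, eta2):
    """Complex modulus of the Burgers model: a Maxwell element (spring E1,
    dashpot eta1) in series with a Kelvin-Voigt element (E2, eta2).
    Returns the dynamic modulus |E*| and the phase angle in degrees for
    angular frequency omega (rad/s)."""
    # Compliances of elements in series add.
    J = 1.0 / E1 + 1.0 / (1j * omega * eta1) + 1.0 / (E2 + 1j * omega * eta2)
    E_star = 1.0 / J
    return np.abs(E_star), np.degrees(np.angle(E_star))
```

As expected for a viscoelastic material, the dynamic modulus rises with loading frequency while the phase angle stays between 0 and 90 degrees.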

  18. The status and challenge of global fire modelling

    Science.gov (United States)

    Hantson, Stijn; Arneth, Almut; Harrison, Sandy P.; Kelley, Douglas I.; Prentice, I. Colin; Rabin, Sam S.; Archibald, Sally; Mouillot, Florent; Arnold, Steve R.; Artaxo, Paulo; Bachelet, Dominique; Ciais, Philippe; Forrest, Matthew; Friedlingstein, Pierre; Hickler, Thomas; Kaplan, Jed O.; Kloster, Silvia; Knorr, Wolfgang; Lasslop, Gitta; Li, Fang; Mangeon, Stephane; Melton, Joe R.; Meyn, Andrea; Sitch, Stephen; Spessa, Allan; van der Werf, Guido R.; Voulgarakis, Apostolos; Yue, Chao

    2016-06-01

    Biomass burning impacts vegetation dynamics, biogeochemical cycling, atmospheric chemistry, and climate, with sometimes deleterious socio-economic impacts. Under future climate projections it is often expected that the risk of wildfires will increase. Our ability to predict the magnitude and geographic pattern of future fire impacts rests on our ability to model fire regimes, using either well-founded empirical relationships or process-based models with good predictive skill. While a large variety of models exist today, it is still unclear which type of model or degree of complexity is required to model fire adequately at regional to global scales. This is the central question underpinning the creation of the Fire Model Intercomparison Project (FireMIP), an international initiative to compare and evaluate existing global fire models against benchmark data sets for present-day and historical conditions. In this paper we review how fires have been represented in fire-enabled dynamic global vegetation models (DGVMs) and give an overview of the current state of the art in fire-regime modelling. We indicate which challenges still remain in global fire modelling and stress the need for a comprehensive model evaluation and outline what lessons may be learned from FireMIP.

  19. Technical Report Series on Global Modeling and Data Assimilation. Volume 42; Soil Moisture Active Passive (SMAP) Project Calibration and Validation for the L4_C Beta-Release Data Product

    Science.gov (United States)

    Koster, Randal D. (Editor); Kimball, John S.; Jones, Lucas A.; Glassy, Joseph; Stavros, E. Natasha; Madani, Nima (Editor); Reichle, Rolf H.; Jackson, Thomas; Colliander, Andreas

    2015-01-01

    During the post-launch Cal/Val Phase of SMAP there are two objectives for each science product team: 1) calibrate, verify, and improve the performance of the science algorithms, and 2) validate the accuracies of the science data products as specified in the L1 science requirements according to the Cal/Val timeline. This report provides analysis and assessment of the SMAP Level 4 Carbon (L4_C) product, specifically for the beta release. The beta-release version of the SMAP L4_C algorithms utilizes a terrestrial carbon flux model informed by SMAP soil moisture inputs along with optical remote sensing (e.g. MODIS) vegetation indices and other ancillary biophysical data to estimate global daily net ecosystem exchange (NEE) and component carbon fluxes, particularly vegetation gross primary production (GPP) and ecosystem respiration (Reco). Other L4_C product elements include surface (<10 cm depth) soil organic carbon (SOC) stocks and the associated environmental constraints to these processes, including soil moisture and landscape freeze/thaw (FT) controls on GPP and Reco (Kimball et al. 2012). The L4_C product encapsulates SMAP carbon cycle science objectives by: 1) providing a direct link between terrestrial carbon fluxes and the underlying freeze/thaw and soil moisture constraints to these processes, 2) documenting primary connections between the terrestrial water, energy, and carbon cycles, and 3) improving understanding of terrestrial carbon sink activity in northern ecosystems.
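The flux logic described above can be caricatured in a few lines: GPP as light-use efficiency down-regulated by moisture, temperature, and freeze/thaw scalars, and Reco as a Q10 respiration term scaled by a SOC pool. This is a toy in the spirit of the L4_C approach, not the operational SMAP algorithm, and every parameter value is invented.

```python
def daily_nee(apar, lue_max, sm, tmin_scalar, frozen, q10, t_soil, soc):
    """Toy daily carbon fluxes (gC m-2 d-1). GPP: absorbed PAR (MJ m-2 d-1)
    times a maximum light-use efficiency (gC MJ-1), scaled by soil-moisture
    and minimum-temperature multipliers in [0, 1] and zeroed when the
    landscape is frozen. Reco: a Q10 function of soil temperature (deg C)
    scaled by a surface SOC pool (gC m-2). NEE = Reco - GPP, so positive
    NEE means a carbon source to the atmosphere."""
    gpp = 0.0 if frozen else apar * lue_max * sm * tmin_scalar
    reco = 0.01 * soc * q10 ** ((t_soil - 20.0) / 10.0)
    return reco - gpp, gpp, reco
```

The structure makes the SMAP linkage explicit: the soil-moisture and freeze/thaw inputs directly gate photosynthetic uptake, so drier or frozen conditions push NEE toward a carbon source.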

  20. Global nuclear material flow/control model

    International Nuclear Information System (INIS)

    Dreicer, J.S.; Rutherford, D.S.; Fasel, P.K.; Riese, J.M.

    1997-01-01

    This is the final report of a two-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The nuclear danger can be reduced by a system for global management, protection, control, and accounting as part of an international regime for nuclear materials. The development of an international fissile material management and control regime requires conceptual research supported by an analytical and modeling tool that treats the nuclear fuel cycle as a complete system. The prototype model developed here visually represents the fundamental data, information, and capabilities related to the nuclear fuel cycle in a framework supporting either a national or an international perspective. This includes an assessment of the global distribution of military and civilian fissile material inventories, a representation of the proliferation-pertinent physical processes, facility-specific geographic identification, and the capability to estimate resource requirements for the management and control of nuclear material. The model establishes the foundation for evaluating the global production, disposition, and safeguards and security requirements for fissile nuclear material, and it supports the development of other pertinent algorithmic capabilities necessary to undertake further global nuclear-material-related studies.